I'm increasingly convinced that Algorithms-and-Data-Structure interviews are essentially being used as a proxy for:
- General IQ. Can this person understand and apply complex ideas?
- Grit. Is this person hard-working enough to learn things that take time and effort?
It's the software equivalent of the NFL scouting combine. The goal is not to create a test that resembles the day-to-day job, but rather to create a test that isolates and evaluates a specific set of skills you think are important to the organization.
Unfortunately, those kinds of interviews also select for some other things that they shouldn't.
* Youth. People who have very recently studied these things in school, and use the same languages as the interviewers, have an advantage.
* Free time. People who have families (for example) might have less free time to study "Cracking the Coding Interview" and the like.
* Resistance to anxiety. This disadvantages women, minorities, and people with psychological conditions that should be covered by ADA. Also, people whose financial situation is precarious will be more anxious than those who don't need the job, independent of which is actually the better candidate.
* Conformity. People who can recognize the flaws in a measurement technique, and who have the strength of character to push back against its application - both good qualities in a candidate - will self-select out.
There's a lot of overlap among these, of course. There are better ways to measure "general IQ" and "grit" (which are both questionable concepts anyway). I've passed every such interview I've ever taken, but I refuse to administer them (despite the fact that my refusal has carried a quite tangible cost) because whatever benefit they provide is outweighed by their many flaws.
> Resistance to anxiety. This disadvantages women, minorities, and people with psychological conditions that should be covered by ADA.
I resemble some of those categories, and I don't know if I'd feel comfortable making the leap of correlating them with some inherent reduced resistance to anxiety. That seems like a generalization which, on an aggregate level, is unsupported by data.
I think that determination should be on a case-by-case basis, as is currently done at universities.
Allow me to elaborate, then. There's an inherent power dynamic in interviews, which creates stress in the interviewees. That effect is magnified for anyone who is unlike their interviewers, who still tend to be white and male. It's magnified still further when the power dynamic within the interview reflects the one that - very unfortunately - still persists in society at large. Lastly, the funny thing about stress/anxiety is that it tends to be additive. Having experienced high levels recently (including in other interviews) makes one more susceptible to new triggers. Thus, anyone who is the least bit "outside" in any way will feel more anxious.
Is any of that even controversial enough to require citation? How many Psychology or Sociology 101 textbooks should I cite? It's easy enough for those who don't feel this kind of anxiety themselves to brush it off, but for those who are less fortunate all those magnifiers can create an anxiety level that's quite debilitating.
I can see what you're saying (in good faith). I do think interview anxiety is a combination of multiple factors. I don't think the correlation of anxiety with the listed categories is easily deconvolved from other factors -- it might seem like a common-sense correlation, especially if you start from certain priors, but it's a big assumption to make on average.
(Only this past week I had experiences that challenged my assumptions about how I expected a certain demographic (seniors) to behave versus how they actually behaved. The lesson I learned was: don't assume, always collect real data.)
The white-male interviewer power dynamic has some basis in reality (I've experienced it occasionally, not all the time), but its effect on my interview performance may be less than 10%? (to throw out a number). I find I'm much more affected -- maybe 90% -- by (1) my competence in the subject matter and (2) how well practiced I am (for instance, I know the theory for a great many subjects but am unpracticed at some of them, so I tend to stumble and lack ease when it comes to demonstrating my subject-matter knowledge in real time).
For different people, those percentages shift, and I believe in a way that is not obviously or necessarily correlated with their demographics (psychological conditions, yes, but it also depends on which ones -- some don't affect anxiety). But all I have is anecdata, so I'm not able to provide strong evidence one way or another; this is just my two cents' worth -- and I do mean this in good faith.
I appreciate your willingness to continue this conversation in a constructive way. So let me ask you a question. You say the white-male interviewer power dynamic might affect your interview performance by 10% or less. (Personally I'd pick an even lower number, but then I'm a white male myself and also old enough that I've been senior to most of my interviewers for a while now.) Would a 10% difference in interview performance be enough, on average, to affect who gets offers or what those offers are?
My basic point here is that even 1% would be too much. If an interview process creates any inherent disadvantage for some groups, I'd say it deserves serious scrutiny. BTW, by "inherent" I mean beyond what can be addressed by bias training and such. That can enable interviewers to conduct any type of interview in as fair and kind a way as possible, but not to change the interview structure itself. If the structure is the problem, training isn't the solution. I think the in-person white-board algorithm interview is unavoidably weighted toward factors that have nothing to do with likely on-the-job performance, and thus should be avoided.
I'm probably not the best person to gauge that in general, but I think the "inherent" part is where we differ.
Going back to whiteboarding: I don't like whiteboarding myself, but to be fair it does measure certain dimensions: quickness on one's feet, memorization ability, the ability to exude presence, fluency in language, etc. While these are laudable abilities on their own, I agree they might not correlate with overall on-the-job performance (but it depends on the job).
I guess "job performance" is an amorphous latent variable y that is correlated with a bunch of direct predictors x which we can measure and factors u which we can't, so we use proxy variables z to stand in for them: y = f(x, u, z).
The worry is that some candidates who may be bad for the job, but just happen to be good at these proxy dimensions (or train for them), might get the job; on the flip side, we may exclude candidates who are potentially good for the job but perform badly on the proxy dimensions. Whiteboarding measures the proxy dimension z.
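The proxy-variable worry above can be made concrete with a toy simulation (all distributions, weights, and thresholds here are illustrative assumptions of mine, not real hiring data): candidates have a true performance y driven by ability x and unmeasured factors u, but the interview only observes a noisy proxy z, so selecting on z both admits weak candidates and rejects strong ones.

```python
import random

random.seed(0)

# Toy model: latent job performance y = f(x, u); the interview only
# sees a noisy proxy z of x. All numbers are illustrative assumptions.
candidates = []
for _ in range(10_000):
    x = random.gauss(0, 1)        # directly relevant, measurable skill
    u = random.gauss(0, 1)        # unmeasured factors
    y = 0.7 * x + 0.3 * u         # latent job performance
    z = x + random.gauss(0, 1.0)  # interview score: x plus proxy noise
    candidates.append((y, z))

# Hire the top 10% by interview proxy z.
cutoff = sorted(c[1] for c in candidates)[int(0.9 * len(candidates))]
hired = [c for c in candidates if c[1] >= cutoff]

# How many hires are actually in the top 10% by true performance y?
y_cutoff = sorted(c[0] for c in candidates)[int(0.9 * len(candidates))]
true_top = sum(1 for c in hired if c[0] >= y_cutoff)
print(f"hired: {len(hired)}, of whom in true top 10%: {true_top}")
```

Even with z fairly correlated with x, a large fraction of the hired group falls outside the true top decile, and symmetrically, many genuinely strong candidates are screened out -- which is exactly the failure mode described above.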
Edit: oh look, an article on HN's front page on this very issue:
>Is any of that even controversial enough to require citation?
I'm sorry, excuse me? Are you saying that non-minorities and non-women don't suffer anxiety? Your parent comment certainly seems to suggest that. Which, at a minimum, is flat out wrong. Educate yourself[0]. And then zoom out and ask yourself why it's not only permissible, but often lauded, to so flippantly say what you just said.
> I'm sorry, excuse me? Are you saying that non-minority, non-women don't suffer anxiety?
To be fair, OP did say "That effect is magnified for anyone who is unlike their interviewers"
But, I do agree some evidence from OP would help their claims.
I wouldn't be surprised if minorities experience more anxiety, on average, in situations like an interview, though. I think it's established that imposter syndrome is more frequently encountered for example but it's too late here to go digging for evidence :)
Stop dehumanizing an entire group of people. All humans feel anxiety and that's a fact. Excluding one group of people that is equally likely to experience it because it's currently in vogue is still discriminatory.
Yeah I’ve never heard of women and minorities having less resistance to anxiety as a group. I’m curious where that’s coming from. Certain psychological conditions I could see, but that’s such a broad category that I think that’s also an over generalization.
I don’t know about “resistance to anxiety”, but it can certainly be anxiety-inducing to be the only member of a given minority in the interviewing room (or entire floor, as is sometimes the case in tech).
What if qualities you might judge in an interview other than “general IQ” and “grit” are even more bias-prone, and using those other qualities is actually worse than measuring “general IQ” and “grit” and applying a corrective factor?
In other words, I’d rather work with a disadvantaged person who may appear rough around the edges but made it this far and has the “general IQ” and “grit” to do well on the programming problem, than the preppy white kid who has a lifetime of experience preparing for the task of exuding status and competency when answering behavioral questions or engineering case studies, but lacks the “general IQ” and “grit” to solve the programming problem.
Just curious, what are some examples of better ways to measure general IQ and grit? You said they are questionable concepts, so how do _you_ interview people?
I don't, any more. I'm at a company that enforces a very rigid structure that I don't agree with, so I simply opt out. I pay for it every review cycle. Ironically, the rigidity of that structure is explicitly intended to reduce personal bias, which is a laudable goal, and I believe it succeeds. Unfortunately, I think it just replaces personal bias with systemic bias.
When I did interview, which was a lot at times when I was in a leadership role at a couple of startups, my favorite interview technique was to let the candidate lead and I'd follow. If I wanted to ask about algorithms, I'd ask about one they'd used in a project they'd worked on. How did it work? What were its strengths and pitfalls? What others were considered? What bugs were found in its implementation, or caused by its use? Besides flipping the control dynamic of the interview, it often led to more interesting conversations. Highly recommend.
> what are some examples of better ways to measure general IQ and grit
IQ has been under a shadow since _The Bell Curve_ and I'm not keen on letting it back out. ;) If one must measure it, I'd say measure it directly with simple challenges (e.g. memory or pattern completion) or puzzles ... but even those are apparently fraught with cultural baggage and of questionable relevance to a knowledge-heavy domain like programming.
As for grit, it's often readily apparent from someone's resume. Were they self-taught, worked through college, or took a free ride? Did they stay with companies and projects through hard times and get promoted "in the field" or were they always the first rat to abandon ship? It usually only takes a few questions to figure out whether someone's a coaster or a fighter. Funnily enough, the people with the most actual evidence of grit are the ones least likely to have spent their time studying specifically for the interview. They were busy actually doing stuff.
Algorithms and Data Structures don't measure either of those things. General IQ is not measured by very specific technical problems. Nor is learning something specific an indication of "grit".
IQ is a measure of mental flexibility independent of context. Being able to solve tricky algorithm problems is a measure of mental flexibility in the context of programming. The two are strongly correlated. Of course, the test is confounded by a baseline of programming and CS knowledge.
This is absolutely not true. I have seen amazing scientists and engineers get turned down by FAANG companies because they did not happen to study particular algorithms that were featured in that day's set of questions, and mediocre engineers fly through because they put in more time studying, or perhaps got lucky with the subset of technical questions that were asked.
I agree with the comment you were replying to. There will always be outliers, but testing prowess in technical problem solving and grit correlates well enough with the job of a software engineer.
The "amazing scientist" maybe was not a good culture fit, or, yes, the interview was botched.
Good candidates are so good that they often enough compensate for bad interviews. Sure, it means we sometimes don't hire the best, but that's better IME than sometimes hiring a not-so-good candidate.
Yes, such tests are subject to a certain amount of gaming and prior knowledge. But that doesn't invalidate the fact that they are correlated with raw intelligence.
> General IQ is not measured by very specific technical problems.
I do not see data structure and algorithm interviews as very specific technical problems.
I do not have a computer science background, and have been able to bring myself up to speed to the general level expected without too much hassle.
I don't think I'm particularly special. I just searched and spent some time learning this stuff over a few weekends. If you are serious about a career, I don't really think that this is too much of an ask.
From my experience, when you have a big pool of candidates, the ones that pass are not necessarily superstars, but they tend to perform at a relatively stable level.
No, it is black and white. Just because you have lots of candidates doesn't mean you need to pick sub-par questions.
Interview for the skills you actually need. If the person isn't implementing algorithms and data structures from scratch, it's a shit question. Why would you ask questions that don't match the actual work they'll be doing?
If they will be doing this work, then obviously it's a fair question.
I disagree with that comment, based on my experience.
> Why would you ask questions that don't match the actual work they'll be doing?
As an interviewer, and likely peer or manager of the potential new hire, I want to understand growth potential as well, so I want to challenge you during the interview. In my particular situation, there are so many candidates, and so many mediocre ones, that we needed to raise the bar, and it served us well.
What? I'd go so far as to say this is universally incorrect. Having interviewed hundreds of people at multiple companies, at every level (intern to director), I have never been told what technical questions to ask....
> because if everyone is asking whatever they want there is no way to compare one candidate to another.
That's obviously not true....
> Most companies require interviewers to pick a question from their internal 'question bank'
Again, having worked at some fairly big and respected companies, this has never been the case.
I'm not interviewing for rote candidates. Everyone is different. Ergo, the questions are different. I could never imagine hiring senior developers and security engineers with questions from a "question bank".
If you're just doing boilerplate, you're probably getting very sub-par employees.
Please take a look at these. I recently interviewed at FB and got 2 questions in the phone screen that were from leetcode with the FB tag.
> Everyone is different. Ergo, the questions are different.
Facebook is running an interview factory; they just don't have time to customize the interview for each candidate. Their own recruiter told me to practice questions from leetcode tagged with Facebook.
I agree with you re your reasons for not using a 'question bank', but that's just not the reality.
Answering data structure/algorithm questions is absolutely meant as a measure of grit. It takes perhaps hundreds of hours of leetcode grinding to be able to quickly answer any problem which might come up in an interview. The point is to find out how committed applicants are.
I hear this so often that I wonder if people actually believe it. If you do, keep in mind that it would require a fairly large conspiracy within tech circles to maintain. A more likely explanation is that it's less effort to come up with and apply a leetcode-style question in an interview scenario.
Never attribute to malice that which is adequately explained by laziness and cargo-culting.
I'll disagree with this one. I'd actually be inclined to job-hop more frequently if I had the skill to pass hard interviews at any tier 1 company, thus maximizing my earnings.
If I don't, I just stay put knowing I lucked out at a great company, knowing it would be hard to get lucky again.
I had the opposite experience: once you grind problems on leetcode and figure them out yourself, you somehow never forget them, especially under pressure in an interview. The problems aren't exactly the same, of course, but as Polya's book "How to Solve It" describes, you solve a problem by considering something similar you've already solved. When presented with an interview problem to work through, the first things that came to mind each time were previous silly leetcode solutions I'd come up with, which bought me some time instead of blanking out.
If you're working on a large-scale project you wouldn't be implementing a specific algorithm by yourself anyway. You'd be discussing a specific problem, then discussing the pros and cons of the different algorithms you might implement, and building POCs of different algorithms. The interview is just to see if you can think and discuss in terms of algorithms and data structures.
Technical interview problems do sometimes resemble discussions that happen in the real world. However, there are two major problems I see in the interview space that don't have clear solutions:
1) one or two people in the discussion (interviewers) already know the perfect solution to the problem and are contributing as little as possible to the discussion.
2) the actual amount of time allotted to brainstorm a solution to the problem is realistically only ~10 minutes, not the ~45 minutes you are given, because of the time it takes to implement the solution in real code.
It would be nice if no one in the interview knew the answer to the problem before starting the discussion; that discussion would be pretty realistic, aside from the lack of help from the interviewers. However, you cannot easily or fairly compare candidates across so many different questions.
Technical interviews would also be far more realistic if they allotted more time for problem solving but I imagine if the standard 45 minutes was bumped to over an hour, the only outcome is that more candidates would perform very well and that's not an outcome companies actually want unfortunately.
I'm completely in favor of randomized interview questions. My personal favorite approach is a two-way interview: you get to ask a technical interview question of the person you're interviewing with. If they ask you a ridiculous question, they get hit with a ridiculous question also. As an employer you could monitor this and see if your employees are giving candidates unfair questions they can't answer themselves. It would also be a good idea for HR to record all of these interviews so someone knowledgeable can judge the interview process and track the performance of new hires against the questions they were asked. If the interviewer starts having problems answering questions at their own difficulty level, then maybe they should be looking for a new job too. I was once asked a ridiculous question and threw the interviewer off by asking how many times they had actually implemented something like what they were asking; they stopped talking immediately.
My personal opinion is that all of this should be scrapped in favor of seeing what the person has produced and having them partially implement a solution to something they have written in the past. It is a virtual impossibility that anyone could have worked on a project of any sort without being able to sit down for an 8-hour interview where they are responsible for partially re-creating something they have worked on. To this day I still remember large parts of source code I worked on 10 years ago.
Pick up a neuroscience book and you'll quickly find out why your statement isn't possible. Have you ever had to conduct tech interviews, or spoken to people who have? Tech interviews are the biggest crapshoot you'll ever see: for every 100 interviews you'll be lucky to find a single person who can code FizzBuzz on the fly; introduce a small variation in FizzBuzz that requires them to read their own code, and they can't modify it. The number of people who have a long enough short-term memory and can concentrate on a single thing for three hours straight is exceedingly small.
Even the very best tech companies are constantly filtering through thousands of people who claim to have worked on extremely complicated development projects but can't code extremely simple things.
Yeah larger desirable companies already have no shortage of well performing candidates to choose from so why make the interview more fair in any way? That would just make selection more difficult for them. Smaller companies might not have this issue but will probably follow the practices of larger, more successful companies anyway.
I'll get myself some downvoting here... I kind of like asking questions about data structures and algorithms. I don't see them as simple black-and-white questions, though. I see several values:

1) If you have a computer science degree, I expect a basic understanding of basic algorithms and data structures. As for basic job stuff, languages like Java have multiple implementations of maps or hash tables, and you should have some idea of when to pick one over another. I don't think that's off limits.

2) I like to ask a couple of challenges or questions that make the interviewee a little uncomfortable, not to get the answer but to work with them on it and see how they handle that situation. I intentionally try not to be cruel, but I want them to think, I want to communicate with them as they sort through it, and I want to see how that goes. I've had 60+ minute conversations about engineering a professional-grade hash table and the different things you have to consider; that's good stuff.

3) This one is maybe more subtle: there is a gigantic difference between implementing quicksort and qsort in glibc, and the same can be said about most algorithms and data structures. At least a cursory knowledge of that difference is an indication of some wisdom. Should you be asked to implement data structure x or classical algorithm y, understanding the mechanics beyond simply copying it out of a textbook indicates some wisdom.
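That textbook-vs-production gap can be sketched in code (a sketch of my own, in Python rather than C; glibc's real qsort adds more still, e.g. a merge-sort path and explicit stack management): the naive version is what interviews usually ask for, while the tuned version shows two classic production refinements.

```python
# Textbook quicksort: correct, but naive pivot choice degrades to
# O(n^2) on already-sorted input, and allocating sublists plus
# recursing on tiny ranges is wasteful.
def quicksort_naive(a):
    if len(a) <= 1:
        return a
    pivot = a[0]
    return (quicksort_naive([x for x in a[1:] if x < pivot])
            + [pivot]
            + quicksort_naive([x for x in a[1:] if x >= pivot]))

# Two classic production refinements: median-of-three pivot selection
# and an insertion-sort cutoff for small ranges. In-place, no sublist
# allocation.
def quicksort_tuned(a, cutoff=16):
    a = list(a)  # sort a copy; return the sorted list

    def insertion_sort(lo, hi):
        for i in range(lo + 1, hi + 1):
            v, j = a[i], i - 1
            while j >= lo and a[j] > v:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = v

    def sort(lo, hi):
        if hi - lo < cutoff:              # small range: insertion sort wins
            insertion_sort(lo, hi)
            return
        mid = (lo + hi) // 2              # median-of-three pivot
        pivot = sorted((a[lo], a[mid], a[hi]))[1]
        i, j = lo, hi                     # symmetric partition scans
        while i <= j:
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i, j = i + 1, j - 1
        sort(lo, j)
        sort(i, hi)

    if a:
        sort(0, len(a) - 1)
    return a
```

Both produce identical output; the point of the exercise is that a candidate who can explain *why* the second version exists (pathological pivots, cache behavior on small ranges) understands something the first version alone never reveals.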
After a grueling interview process at Goldman Sachs, with 7+ technical interviews that required me to solve very specific questions on college-level Maths, Stats and Computer Science (admittedly, I was applying for a quant position), I was eventually asked to interview candidates myself. While I did not feel entitled to change the interviewing culture at the company by asking questions of a completely different nature from what I was asked in the first place, I conducted my interviews in a very similar way to what you described. By no means did I expect applicants to reach a definitive answer; instead I worked on the problems with them to see how far their intelligence, creativity, curiosity and, most importantly, their ability to communicate their thought process well would take them.
Such interviews used to take a whole hour of my time (which is a lot to afford when you work in a bank), but by the end of each I felt I could confidently ascertain the candidate's ability to thrive on our team. In retrospect, it has served me really well.
All in all, the problems posed (and the solutions given to them) might not carry as much weight in the final decision as the discussion held with the applicants. As long as the questions asked give them some material to work on, and DS and algorithms usually serve this purpose very well for us engineers and developers in general, one should be able to effectively select candidates given some time investment.
The problem for me is do I want to spend my time and effort learning solutions to leetcode problems or do I want to spend time and effort learning actual development skills? Which would be more valuable in the long run? Every hour you have to spend on leetcode just to be able to pass some arbitrary measure of your leetcode ability is one less hour you can spend on learning something else.
I suspect there are better IQ tests than random algorithms questions.
Isn’t the NFL scouting combine bullshit, though? The players who tend to be most favored are those who did well at playing football in the previous season, rather than those who scored well on the IQ or creative writing or jumping tests.
This is correct. It's unfortunate, but unless you can mandate Google to change their entire interviewing regime, you have to simply understand that this is the perspective, and move forward demonstrating yourself against this criteria, whatever else you may think.
Remember, players opt out of the combine, and a good portion do if they have a dope highlight reel. Though they will still conduct interviews with the GM for personality: what we call in the tech field "cultural fit".