#DisruptTechInterviews
Recently1, a storify story bomb surfaced about how psychology can shed light on awful tech interview practices (also see the original twitter thread).
Let’s run down the ideas Ann talked about. They’re really important, but unless we drill them into the heads of companies2, nothing will change3.
This page is ~6,500 words. Full reading time: ~30 minutes (¼ of the page is footnotes though). Skim reading time: ~7 minutes.
So like, does tech know there's a whole area of study that can help them #DisruptTechInterviews. It's called ‘Psychology’.
— ann$ (@anyharder) August 27, 2015
Do You See
The psychodynamics of coding interviews in one picture.
— Jonathan Eyler-Werve (@EylerWerve) December 16, 2015
Ann drops the knowledge bomb of Social Facilitation:
Also please be aware of the two sides of the same coin known as Social Facilitation Effects!
— ann$ (@anyharder) August 27, 2015
You've experienced this: you're typing away and suddenly someone is over your shoulder and yuo ca n't eevn nmyore.
— ann$ (@anyharder) August 27, 2015
Social Facilitation happens when performing for an audience:
- you perform better if you’re confident in your abilities.
  - “don’t worry, i got this”
- you perform worse when you’re not confident or your confidence is shaken (stumbling then fearing judgment).
  - “uh… i’m not sure… now you’re judging me as inferior… you’ve seen my flaws… now i can’t think because i’m worried about looking better… but now i’m afraid and can’t think… i wish i could think right now…”
Social Facilitation, if you aren’t employed as a performer, is something you probably experience only intermittently throughout life, most often in the form of interviewing under pressure. Since most people interview sporadically, it’s difficult to grasp why you’re suddenly stumbling. “I know all of this, but I keep failing? What’s wrong with me?”
Being a proper interviewee requires more of a teaching talent than head-down developer skills. As an interviewee, you are performing a teaching task, not a development task. Your goal is to solve a problem while showing as little weakness as possible, so your focus is probably on your own self image, not necessarily on the problem at hand.
Nerves
Ann continues…
I've heard interviewers talk about accounting for nerves, which is great! but seriously, people get nervous and it makes them worse at stuff
— ann$ (@anyharder) August 27, 2015
We are likely to forget our shit during an interview. A friend recently [told] me how he blanked during an interview when asked to reverse a string
— ann$ (@anyharder) August 27, 2015
Technical interviews are paradoxical. They try to evaluate, in a short amount of time4:
- technical aptitude
- interpersonal aptitude (culture fit)
- future aptitude (growth potential)
Accurately measuring each of these requires a formalized testing framework, but tech interviews routinely get killed after a simple 20 minute initial interaction5 over a low quality phone connection.
Tech interviews tend to hit two extremes:
- very basic, almost insultingly simple questions6
  - “what is a string.”
  - “what is a list.”
  - “name three operating systems.”
  - “name three open source licenses.”
- details of intensely narrow interest to the interviewer
  - but not relevant in any day-to-day situations outside of the hiring company7.
There’s also always the possibility a tech interviewer is an unaware, insecure person who tries to one-up their interviewees. An interviewee has no recourse when assigned an interviewer intent on proving their own superiority: blindly throwing confusing, complicated, and convoluted problems at the interviewee, then laughing as the target struggles to the point of breaking down8.
The bad interviewer cycle has gone on for so long that, by now, everybody is almost blameless. Junior developers learned tribal interview “practices” from older developers, then passed that cargo culting down to new generations of junior developers. Nobody knows why we have such broken interviews, but “we’ve always done it this way,” so nobody dares challenge the established, yet blatantly broken, processes.
We can fix this. We can do better.
We can fix interview irregularities by implementing documented processes backed by science.
Context
Encoding Specificity
Ann covers another major way interviews fall apart due to encoding specificity9:
OH GOD that reminds me. Whiteboarding is stupid because of a concept known as “encoding specificity.”
— ann$ (@anyharder) August 27, 2015
Anyway, stop with the whiteboarding. Code doesn't run on whiteboards, people don't write it on whiteboards.
— ann$ (@anyharder) August 27, 2015
Basically, folks “encode” in their text editors and terminals. If they do well on a wb it's because they practiced for your stupid interview
— ann$ (@anyharder) August 27, 2015
Brains aren’t databases. Brains aren’t key-value stores. Brains don’t have perfect question-answer lookup circuits. Brains are associative content-addressed connectivist blobs of meat.
Encoding specificity means:
- retrieval is easiest when you’re in an environment where you routinely use or obtain knowledge.
- knowledge automatically presents itself when your brain detects previous environment conditions.
When people say “this weather is weird, it feels like Spring in December10” or “it doesn’t feel like Tuesday because Monday was a holiday” — those are examples of your brain evaluating your environment and returning counter-factual feelings you know are wrong, but still feel kinda right. Your feelings—and adjunct knowledge—are partially encoded by your sensorium connectivity11.
Much of your favorite programming language syntax isn’t expressly conscious to you; your fingers just type out some symbols as needed12. When writing a new function, you don’t think “okay, we need a function name. let’s make a name, press equals, then press these keys in sequence: ‘f’ ‘u’ ‘n’ ‘c’ ‘t’ ‘i’ ‘o’ ‘n’, …” (e.g. var poo = function(abc, def) {})13
But, in a whiteboard coding scenario, all your automatic finger actions are irrelevant. Your brain ends up processing every single letter as you write14, which kills both depth and breadth of forward thinking for the actual problem (and also detracts from interpersonal levels of planning, explaining, and clarity required in interviews).
One recommended way of passing whiteboard-heavy interviews is to do all your interview prep in writing, as Ann points out:
But I bet you a million dollars that even an eng who's studied on a whiteboard will code better in his/her preferred coding environment.
— ann$ (@anyharder) August 27, 2015
Isn't that what you want? Don't you want to take out the noise of nerves and memory hacks? You're hiring an engineer, not an interviewee.
— ann$ (@anyharder) August 27, 2015
I don’t know about you, but I have better uses of my time than “teaching to the test” because companies don’t know how to properly evaluate abilities, talent, interpersonal relations, and growth prospects.
Practical Interviewer Advice
I’ve talked about tech interviews before. Interviewing is a topic near and dear to my heart since it involves really smart people deluding themselves into thinking they know how the world works, even though they can be proven to be objectively wrong15.
Interviewing involves multiple impossible tasks16, but so many companies completely ignore the Interview-Complete problem and become resigned to just suffer “acceptable losses” when evaluating candidates because “that’s how everybody else does it.”17
Modern tech interviews (attempt to) evaluate:
- competency in various tasks
- P(your personality defects | existing employees defects)18 — (i.e. “culture fit”)19 (a Bayes sketch follows this list)
- future productivity over the next 2, 5, 10 years
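That conditional probability isn’t just a joke; footnote 18 expands it with Bayes’ rule. Here’s a minimal sketch with invented numbers (none of these probabilities come from anywhere real, they only show the shape of the calculation):

// Bayes' rule sketch for the "culture fit" probability in footnote 18.
// Every number here is invented purely for illustration.
var pDefect = 0.10;           // P(candidate has defect X)
var pEmpDefect = 0.30;        // P(existing employees exhibit defect X)
var pEmpGivenDefect = 0.90;   // P(employees exhibit X | candidate has X)

// P(candidate defect | employee defects) =
//   P(employee defects | candidate defect) * P(candidate defect)
//     / P(employee defects)
var pDefectGivenEmp = (pEmpGivenDefect * pDefect) / pEmpDefect;
console.log(pDefectGivenEmp); // ≈ 0.30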
We can shorten those three evaluation targets down to20:
- technical
- interpersonal
- growth
Each area also has sub-categories of:
- ability
  - something you understand
  - something you can reproduce
  - your problem solving approaches
- talent
  - making correct decisions without extensive thinking
    - involves knowledge and experience to narrow down correct scenarios, not exhaustive trial-and-error evaluation of an entire state space
  - generating creative output
  - providing more value than you capture
Talent is an ability multiplier, so employers naturally want to capture as much talent as possible.
Talent is difficult to compensate against though. How do you design a compensation scale around a baseline of “well, employee J is just so much smarter than the rest of you even without working as much….”
Because the Compensation of Talent problem is so ill-defined, companies want to hire high performing individuals, pay them on the same scale as median performing individuals, then hope high talent havers never notice the imbalance. (The classic problem of 10x employees not receiving 10x compensation21.)
Terminology
Methodology
But there's more to psychology than its findings. There is first M E T H O D O L O G Y.
— ann$ (@anyharder) August 27, 2015
- maybe interviews should be based less around superstition and more around testable results
- How to methodology?
  - Decide on a target outcome (something to observe)
  - After deciding on the outcome, justify the outcome. Why do you want this?
  - After deciding on your target, define it. Every employee conducting interviews for your target outcome must be able to exactly specify the qualities of your target outcome.
    - Use whiteboards, make lists, nerd out while defining your target.
  - From the definition of your target, specify the most important items. Rank them.
    - Most relevant (critical) to least relevant (nice to have)
    - You have now defined your justification in a measurable way. (A sketch of a defined, ranked target follows this list.)
  - Don’t be dogmatic. Your definitions and targets should be living documents.
    - Update your criteria based on observations. Be Bayesian22.
    - If you are wrong, update your assumptions. Don’t continue being wrong.
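Here’s a minimal sketch of what a defined, ranked, measurable target could look like once it’s written down. The outcome, criteria, and field names are all invented for illustration, not a prescription:

// Hypothetical rubric for one target outcome. A living document:
// revisit the criteria and ranks as interview observations come in.
var target = {
  outcome: "systematic debugging",
  justification: "we ship fewer regressions when bugs are isolated methodically",
  criteria: [
    { rank: 1, quality: "forms a hypothesis before changing code", critical: true },
    { rank: 2, quality: "narrows scope with reproducible test cases", critical: true },
    { rank: 3, quality: "documents dead ends for teammates", critical: false }
  ],
  lastReviewed: "2015-09-01"
};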
Measures (interview criteria)
The idea is simple: there's a thing you want to observe (e.g., ninja engineer rock star crushing). But how! Come up with a plan!
— ann$ (@anyharder) August 27, 2015
FIRST justify why you think ninja engineer rockstar crushing is important. DON'T SKIP THIS STEP. THE BURDEN IS ON YOU.
— ann$ (@anyharder) August 27, 2015
SECOND what does ninja rockstar etc look like? it is probably multiply determined. but to measure it, you're going to have to define it.
— ann$ (@anyharder) August 27, 2015
and I mean, like actual define it. Not just handwave define it. Use those whiteboards you're so fond of and make a list of all the things.
— ann$ (@anyharder) August 27, 2015
Ex., you think systematic debugging is a good quality in an eng. You operationally define this to examine the way they move thru the code.
— ann$ (@anyharder) August 27, 2015
You are not measuring the speed with which they debug, nor do you care if they fixed it. If you do, those are separate measures.
— ann$ (@anyharder) August 27, 2015
But how do you know if any of this is any good? Is thought process while debugging a good measure of systematic debugging?
— ann$ (@anyharder) August 27, 2015
- Example: hiring for debugging talent
  - If you decide debugging is important for the position you’re hiring against:
    - Test exactly what you defined.
      - If you are testing for moving-through-code, don’t simultaneously test:
        - “moving through code at superhuman speeds.”
        - “detects and fixes subtle broken code at the same time.”
    - Write, record, justify why you think this is a good test for candidates.
    - Speed and fixing are orthogonal to your primary defined goal of moving-through-code.
    - Don’t conflate separate measures. (A scoring sketch follows this list.)
      - (NB: because measures are separate does not mean they are independent, but do not automatically assume correlation.)
      - (Also, if you do notice correlation between skills with one person, do not assume it applies to every person!23)
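To make the don’t-conflate rule concrete, here’s a hedged scoring sketch where the defined measure (moving-through-code) is recorded alone and speed and fix-success stay separate. All names and fields are invented:

// Hypothetical scoring sketch: score only the defined measure, and keep
// the other observations as separate measures so nothing gets conflated.
function scoreDebuggingSession(obs) {
  return {
    movementThroughCode: obs.methodicalSteps / obs.totalSteps, // the defined measure
    minutesElapsed: obs.minutesElapsed,                        // separate measure
    bugActuallyFixed: obs.bugActuallyFixed                     // separate measure
  };
}

// A slow session that never fixes the bug can still score well on the
// defined measure; that's the point of keeping the measures apart:
scoreDebuggingSession({ methodicalSteps: 9, totalSteps: 10,
                        minutesElapsed: 55, bugActuallyFixed: false });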
Validity and Reliability
/ It's time to check the validity and reliability of your measure! /
— ann$ (@anyharder) August 27, 2015
A measure is valid if it measures the thing you meant to measure. It's reliable if you get the same measurement after repeated testing.
— ann$ (@anyharder) August 27, 2015
A ruler is not a valid measure of the weight of a shoe. It is, however, a valid and reliable measure of its length!
— ann$ (@anyharder) August 27, 2015
- Validity
  - Are you testing what you think you’re testing?
  - Are you measuring what you think you are measuring?
  - given example: a tape measure is not a valid measure of the weight of a shoe.
    - But it is a valid and reliable measure of length.
The ruler is reliable because it will always tell you the same length for the same item!
— ann$ (@anyharder) August 27, 2015
- Reliability
  - Always obtain the same result for the same test.
Validity can be checked lots of ways! Take the best most systematic debuggers on your team. How did they do on your thought process measure?
— ann$ (@anyharder) August 27, 2015
- How to check reliability? (A test-retest sketch follows.)
  - Test the same person multiple times! (perhaps over a longer time duration to compensate for memory effects)
  - If the same person scores both highly and poorly on the same tests across multiple attempts (given they aren’t learning and trying to ‘beat’ you), then your evaluation measure isn’t reliable.
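A minimal test-retest sketch, assuming you can administer the same measure twice and compare the two runs with a plain Pearson correlation. The people and scores below are invented:

// Test-retest reliability sketch: same five people, same measure,
// one month apart. A correlation near 1.0 suggests a reliable measure.
function pearson(xs, ys) {
  var mean = function (a) {
    return a.reduce(function (s, v) { return s + v; }, 0) / a.length;
  };
  var mx = mean(xs), my = mean(ys);
  var num = 0, dx = 0, dy = 0;
  for (var i = 0; i < xs.length; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += Math.pow(xs[i] - mx, 2);
    dy += Math.pow(ys[i] - my, 2);
  }
  return num / Math.sqrt(dx * dy);
}

var firstRun  = [82, 45, 67, 91, 58];
var secondRun = [79, 50, 70, 88, 61];
console.log(pearson(firstRun, secondRun)); // ≈ 0.99: looks reliable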
Face Validity
Oh, also measurements (i.e., interview criteria) should have face validity! this means there doesn't need to be a gotcha!
— ann$ (@anyharder) August 27, 2015
Concurrent Validity
If they also scored well, hooray! You have convergent validity. <3
— ann$ (@anyharder) August 27, 2015
CORRECTION: this should say ‘concurrent validity.’ (Aside: it is a weird thing to read your tweetstorm a day later) https://t.co/gUC3HS3dnt
— ann$ (@anyharder) August 28, 2015
concurrent validity is when your measurement can distinguish between high and low scorers on your measure
— ann$ (@anyharder) August 28, 2015
so in my example, your debugging test should separate the good debuggers from the bad. if it can, it has concurrent validity! 🌈
— ann$ (@anyharder) August 28, 2015
- How to check validity?
  - In the case of hiring, test multiple current employees who you think would score high on your tests!
    - If your current rockstar ninja crushers score well, you have concurrent validity.
  - If high performers score high and lower performers score low, you have concurrent validity.
- Concurrent Validity
  - Your measure can distinguish between high scorers and low scorers.
  - But, testing high scorers isn’t good enough. You’ve also gotta find people who suck at the goal and test them.
  - Find people who you know are poor debuggers (maybe the CEO24). Test them. If poor performers actually pass your test with high scores, your test is flawed. You lack concurrent validity. (A separation-check sketch follows this list.)
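A hedged separation-check sketch for concurrent validity. The scores are invented, and a real check would want more people and an actual statistic rather than a single min/max comparison:

// Concurrent validity sketch: known-good debuggers should land above
// known-poor debuggers on the measure. Scores invented for illustration.
var knownGood = [88, 92, 85];
var knownPoor = [41, 55, 48];

var worstGood = Math.min.apply(null, knownGood); // 85
var bestPoor  = Math.max.apply(null, knownPoor); // 55

if (worstGood > bestPoor) {
  console.log("measure separates high from low performers: concurrent validity");
} else {
  console.log("known-poor performers scored high: your measure is flawed");
}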
Testing
Now find some bad debuggers. Maybe engs on your team who have other lovable qualities but are known fuckups at this, idk. How did they do?
— ann$ (@anyharder) August 27, 2015
They should have scored poorly on your measure. If they didn't and are universally recognized as Poor Debuggers, that's not a good sign.
— ann$ (@anyharder) August 27, 2015
Is your measure reliable?
As for reliability, a simple way to do this is by giving the same test multiple times to the same person!
— ann$ (@anyharder) August 27, 2015
(There are complications in this example because of learning and memory, but they're not insurmountable.)
— ann$ (@anyharder) August 27, 2015
Let a month pass, have the same folks do the debugging task again! If they scored the same (bad ones did bad, good ones did good), yay! /
— ann$ (@anyharder) August 27, 2015
If they didn't, womp. Your measure is not reliable!
— ann$ (@anyharder) August 27, 2015
Skills must be measurable; measures must show reliability and validity
i glossed over a lot of details, but the gist is this: make sure the thing you want is measurable, then measure it reliably and validly.
— ann$ (@anyharder) August 27, 2015
feel free to iterate too! just don't confuse ability to balance a tree under pressure with, say, good communication skills!
— ann$ (@anyharder) August 27, 2015
Ability to balance a tree is probably a good measure of
- ability to balance a tree
- recency of CS degree
- studying for tech interviews
— ann$ (@anyharder) August 27, 2015
Tree-balancing speed is probably a good measure of
- typing speed
- familiarity with the problem
- language fluency
— ann$ (@anyharder) August 27, 2015
I am just brainstorming! But do you see how things can be multiply determined & you get signal mixed in with noise, to use terms y'all like?
— ann$ (@anyharder) August 27, 2015
science!
There are –you guys are you listening – there are solutions to problems in tech that don't require tech and were invented a long time ago!
— ann$ (@anyharder) August 27, 2015
all you need is ✨ t h e s c i e n t i f i c m e t h o d ✨ it's how we observe, measure, and predict about the world!
— ann$ (@anyharder) August 27, 2015
Interview Flow
The typical response to interviewees complaining about companies only testing intro-to-CS concepts: tell candidates to go study study study — you better work hard to pass our interviews!
If someone has been highly productive in an industry for 5, 10, 15, 20+ years, but you tell them they must practice for 40 hours to pass your interview, what’s even going on25?
If people can hack your interviews by studying right before them, why does experience matter? Does experience matter? I’ve seen plenty of interviews die at the “you can’t balance a tree while under interview pressure” stage regardless of how much impact the candidate has had on the world over the previous ten or twenty years.
It’s almost as if one company started running tech interviews in a pretend academic26 setting, then every other company copied their methods without understanding the goals or prejudices baked into the original interview designs.
If you can’t cite any reason for how you conduct interviews other than “it’s the current fad” or “it just feels right,” maybe you should consider redesigning your interview methods27 (perhaps even using tips from this article!).
Everybody knows Google’s hiring process is awful, but their process hasn’t changed in 15 years. Do they care? They don’t seem to. They’ll give it lip service, but the recruiters (or anybody else capable of shaping company-wide people-based processes) are clearly viewed as “lesser” in a company where half of the employees have CS doctorates. The goal of an interview process isn’t to be like Google—the goal is to actually be accurate and fair at evaluating people.
Interviews are qualitative exercises, not objective measures of a candidate’s entire personality and talent profile. A dangerous thought pattern emerges when companies become absolutely certain of the infallibility of their interview processes as quantitatively objective measures of candidate ability.
Compensation Detour
Don’t let this blow your mind too much, but 10x employees28 (not even considering 100x employees) don’t even make 10x minimum wage, much less 10x the rate of a 1x industry employee29.
<rant> If we consider the current median freshployee (hired directly out of school, no real experience, just a bundle of unorganized knowledge and excitement) to be making $125,000 in SF, does a 10x employee with 20 years experience make $1,250,000 per year? If you’re a 100x employee, are you being paid $12.5 million per year? No, you’re probably being paid a salary between 1.25x to 2x of what a junior engineer makes regardless of how much experience and talent you have30. </rant>
Employees with global-scale impact profiles should be making a minimum of $3 million per year in salary. The math doesn’t work out any other way unless you truly believe employees exist only to subsidize company valuations without retaining any value they directly created31.
Worksheet
Main takeaways from this article:
- serious interviews must be designed
- interviews must have formal specifications
- use well-defined methodologies for evaluating humans during interviews
Now it’s time to design your own interview methodologies.
Make a list of skills for every position you want to fill. Prefer skills over products. Don’t just write an exhaustive list of platforms and products you expect employees to use.
For every skill, make a list of measures capable of evaluating those skills in isolation.
Now create your interview process.
Write32 how your interview measures fit each of these points:
- What do you want to measure?
- Are you measuring in valid and reliable ways?
  - Your measure must evaluate what you expect it to evaluate (be valid).
  - Your measure must evaluate high performers high and low performers low (have concurrent validity).
  - Your measure must generate repeatable results (be reliable).
  - Your measure must be direct with no tricks or gotcha journalism (have face validity).
  - If you are creating measures to be used by more than one interviewer, your measure must also perform the same regardless of who is administering the interview (have inter-rater reliability).
    - Does Grumpy Gunther give off negative intimidating vibes causing interviewees to panic and fail because of stress, while Happy Hedwig gives interviewees a chill vibe so they share their knowledge easier when given the exact same questions? (A small agreement-check sketch follows.)
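A small agreement-check sketch for the Gunther/Hedwig problem. Raw agreement is the crudest possible inter-rater statistic (a serious design would use something like Cohen’s kappa); the rubric scores are invented:

// Inter-rater reliability sketch: do two interviewers give the same
// candidates the same rubric scores (1-5)? Scores invented.
var guntherScores = [2, 3, 1, 4, 2]; // Grumpy Gunther's five candidates
var hedwigScores  = [4, 4, 3, 5, 4]; // Happy Hedwig, same five candidates

var agreements = 0;
for (var i = 0; i < guntherScores.length; i++) {
  if (guntherScores[i] === hedwigScores[i]) agreements++;
}
// 0 agreement here: the measure is tracking the interviewer, not the candidate.
console.log(agreements / guntherScores.length);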
Learn More
As you may have deduced from the previous sections, interviewing is a research problem. Specifically, a psychology research problem. You are testing humans and determining if they conform to your hypotheses about qualities of useful employees.
Interviews can be conducted anywhere on a spectrum from “make it up as you go along” all the way up to “collect data and generate statistics across all interviews as formal research.”
Here are a few links for learning more about research design and terminology:
- Research Design (PDF; 16 pages)
- Instrument, Validity, Reliability
- Validity in Research Design
- Exploring Reliability also covers inter-rater reliability, internal consistency reliability, and a couple other terms.
Conclusion
It’s important to remember no single measure can evaluate a whole person33. You may be 99th percentile in C# and absolutely useless in Erlang. That doesn’t make you incompetent. You may be 99.9th percentile in navigating corporate politics at your current company, but that doesn’t necessarily translate to any signal in an interview at a different company34.
As an interviewer, you must be generous towards your interviewees. Always remember you are participating in an innately flawed process. Your responsibility as an interviewer is to help interviewees do the best they can given such an awkward and out of context situation.
As an interviewee, there’s a good chance the company you’re interviewing with didn’t hire its existing employees to be full time interviewers, so most employees you speak with will be making it up as they go along. Running the controlling side of an interview requires high interpersonal aptitude plus the ability to judge strangers fairly and consistently in awkward environments. As an interviewee, the best you can do is keep in mind the interviewer is as lost and confused as you are, and not take any rejections personally: you are working in a flawed system; nothing is personal.
We’re in uncharted territory here. Good luck.
Rants Discovered While Writing This
The remainder of this page consists of rants about Internet sites discovered in various stages of brokenness and disrepair while writing this page.
Storify, we need to talk.
Scrolling is Hell
Scrollbars matter. When loading a page, it’s nice to see the length of the content indicated by the scrollbar. If you don’t have infinite content, don’t use partial-onload-scrolling tricks. Just show all the content at once. Infinite-load page tricks only make sense if you have unrelated content showing on every new partial load.
It takes five “go to end of page” clicks before the entire article/story loads on storify. You have successfully optimized your load performance to the point where your optimization is less optimal than just pre-generating the entire page up front like way back in 1995. Bravo.
You are also serving non-https content from your https pages. You are including non-secure images from twitter, but twitter immediately redirects these to https links. So, this is a one character fix. (Perhaps the twitter auto-tweet formatting is doing the dumb action here, but outsourced incompetence doesn’t give you a free pass.)
Some of your developers may need to do the Chrome Mixed Content Warning Walk of Shame.
Jobs are Hell
It’s always interesting to check the Jobs page of a site to see what’s going on behind the scenes.
I clicked on the storify ‘Jobs’ link (https://storify.com/jobs) at the bottom of the page. It redirects to http://web.livefyre.com/careers/ then immediately pops up an alert() box saying WARNING: Your Flash Player is outdated! Please upgrade to continue! then it makes Chrome immediately download adobe_flashplayer_8.exe on my mac. What are you even doing?!
I ran adobe_flashplayer_8.exe through virustotal.com, and it hit four results:
Baidu-International Trojan.MSIL.Injector.LQV 20150830
ESET-NOD32 a variant of MSIL/Injector.LQV 20150830
Kaspersky HEUR:Trojan.Win32.Generic 20150830
Qihoo-360 HEUR/QVM03.0.Malware.Gen 20150830
With additional details:
Copyright Copyright (C) 2006-2013 JnbTaRjJ yOa6JYniCR
Publisher bO4WFYUR
Product DyE4jCnl5JJ
Original name irznz68.exe
Good job picking a secure outsourced trojan provider for your job seekers35!
Jobs Update
When I first wrote this piece, their jobs link was downloading malware. Then their jobs link just redirected to a non-existing URL of http://web.livefyre.com/wp-content/adobe_flashplayer_installer_7.exe which I guess is better since at least now the malware doesn’t exist and you just get a 404 page?
Jobs Update Update
Two weeks later, the jobs link still redirected to http://web.livefyre.com/careers/ but the page no longer hosted malware and was a typical shiny jobs page.
Twitter, we need to talk.
Withdrawn
When I first wrote this piece, the twitter search URL was broken. It has since been fixed36. But, it was broken for multiple days at one point.
I’m retaining my complaints because I enjoy a good rant.
Search Be Dead
The storify page provides a link to Twitter search results as https://twitter.com/#!/search?q=%23DisruptTechInterviews, but when you click on it, two things happen:
- First, it looks like a designer had a seizure and just vomited all over my screen37
- Next, you see it has no actual search results (which, since the page is so disorganized, takes a few seconds to even decipher from the on-screen catastrophe)
Because Twitter wanted to be super fancy and fad-driven, they used fragment args for search queries in the past.
But this isn’t Lion King. The Past isn’t The Past. This is Internet. The Past is with us forever.
So, in this case, Twitter must continue to support redirecting from fragments. Forever. That’s how The Internet works. Seems one of the largest companies on the Internet should know how the Internet works. Oh, and good luck getting a fragment to respond with a 301.
All Twitter libraries generate search links pointing at the twitter.com root fragment. Here’s a quick example:
> twttr.txt.autoLink("hello #hashtag")
"hello <a href=\"https://twitter.com/#!/search?q=%23hashtag\" title=\"#hashtag\" class=\"tweet-url hashtag\" rel=\"nofollow\">#hashtag</a>"
Important note: Twitter-controlled assets (e.g. twitter.com) link the hashtag as https://twitter.com/hashtag/DisruptTechInterviews?src=hash — so at least they are taking care of themselves.
So, unless you’ve announced a global recall of every client-side and server-side twitter library, fix yer dang inbound links. As of right now, the official twitter client library twitter-text/js doesn’t even have proper search links.
For bonus points, you may also want to fix whatever process inside of Twitter let people make frontend changes that break every previous twitter search link without anybody noticing before it went live.
(Sure, somebody internally will argue “We didn’t notice because it wasn’t throwing 404s! We can’t track broken fragments because they don’t generate 404s!” Your only solution is to always include fragment forwarding and fragment analytics in the twitter.com front page JavaScript to monitor future breakage. Always always—until twitter.com resolves back into a Network Solutions parked domain page. Welcome to the world of web fads we can never kill off.)
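Fragment forwarding has to live in client-side JavaScript precisely because fragments are never sent to the server. A sketch of what such a forwarding shim could look like (hypothetical code, not Twitter’s actual frontend):

// Hypothetical fragment-forwarding shim for the twitter.com front page:
// rescue legacy #!/search links by redirecting to the modern search URL.
if (location.hash.indexOf("#!/search") === 0) {
  var query = location.hash.split("q=")[1] || "";
  location.replace("https://twitter.com/search?q=" + query);
  // (also record the hit somewhere, so future breakage is measurable)
}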
Well, “recently” when I started writing this…↩
corporate trepanation↩
companies always want the best employees, but seldom take the time to create the best interview processes—employees are seen as holy while interview candidates are viewed as disposable, distractions, dilettantes who aren’t important enough to have already been working inside the greatness of the hiring company↩
We obviously can’t expend much effort on interviews! If people are talented and compatible with our company, it’s clear within the first 5 minutes. Why should we waste time trying to figure out anything else?↩
The ability to write a balanced tree in 10 minutes is not indicative of the ability to create documented, maintainable, and extensible software over a period of years with a team of coworkers and external contributors.↩
these often happen when you have extremely junior employees interviewing senior candidates; it’s a Sapir–Whorf problem where junior employees can’t even conceptualize proper questions since they don’t have the experience themselves↩
Oh, you want me to implement a recursive spell check trie right here on the whiteboard? I did this ten years ago, and now I use libraries to do the same thing. I’m not sure I can rewrite it and re-debug it and get my brain in a recursive-trie flow state (or even remember the implementation of the data structure itself) while you stare at me under a 15 minute time limit.↩
Some interviewers seem to treat interviews as a testosterone fueled sport. Though, some companies self-select for hyper-male, bro-powered employees, so maybe it’s by design.
The only thing worse than a failed interview is getting hired at a place with a narrow self-involved culture to the point where new hires feel forever excluded from the long timers.
Companies with awful hiring practices often have awful on-boarding practices, but that’s an entirely different topic.↩
I’ve previously attributed this knowledge-vs-output-vs-ability disconnect problem to contextual awareness, but it’s all nearly the same.↩
sorry southern hemisphere↩
brains don’t do anything in isolation. We have songs that remind us of people in our past, video games that remind us of our youth, temperatures that remind us of seasons long past; and familiar desk contexts that help fuel programming knowledge, concentration, and depth of thought.↩
You can also think of this as the ‘lengthy password effect’ — Have you ever had a complex password you can type quickly without thinking about it, but you can’t write the same password on paper (or you can’t even type the password one handed)? bam encoding specificity. (also: bam putamen).↩
plus, this one weird trick↩
as well as: the environment of the room, the mood of anybody watching you, your body language (are you standing weird? yeah, don’t do that.), the distraction of people walking around outside the all-glass conference room, how you’re kinda light headed because this is your 5th interviewer in 4 hours and you haven’t had a break or food…↩
because they blindly assume subjective interviews reflect objective results↩
Proposal: Interview-Complete complexity class. Cannot be verified quickly, but many poorly constructed and incomplete tests exist. Why can’t we even verify Interview-Complete problems? Interviewing, assuming human candidates, is a high dimensional problem combining the measurement of life experience, personality, current brain chemistry, future brain chemistry, technical skill, and technical skills capable of surfacing during narrow pre-contrived scenarios.
Interviewing is actually a Chinese room process where, through limited question-answer cycles, you are trying to determine if a person is a real smart intelligent person or just a shell of wasted possibilities. (spoiler: the room is our brains and we can only communicate by passing information through our low bitrate brain-connected output-senses (body language, speech, physical motion (writing, ear wiggling, toe tapping))).↩
imagine if you accepted people based mainly on personality and growth potential because you realize ambitious people can learn and grow faster than others know. you could even turn that idea into a multi billion dollar business.
You don’t attract the best people by running a we only talk to people who are already successful contest. You attract the best people by finding under-realized talent and providing a platform to grow without limit.↩
P(your defects | existing employees defects) = (P(existing employees defects | your defects) × P(your defects)) / P(existing employees defects)
Plus, personality issues aren’t global across a company (unless your company is tiny)—personality issues depend on teams. If you have a completely open office plan, it also depends on where you sit and what you hear. Are you a senior developer sitting within earshot of the CEO’s assistant, who spends four hours a day complaining about their winter Vitamin-D deficiency, slowly driving you mad? Some things can’t be accounted for in interviews when the end employment targets (environments, teams, locations) are unknown or even unknowable (known unknowns).↩
Also known as “compensating for an employee who is already a Violent Victor.”
If there’s an existing employee, Violent Victor, who runs around the office telling everybody to fuck off and who is verbally abusive to other employees, the job of interviewers is to also suss out whether you’ll give Violent Victor his space, whether you’ll escalate his situations, whether you’re not thick skinned and will shake/cry/go home early, or whether you’ll try to make it an HR issue.
After all, Violent Victor has been around a long time so everybody else just tolerates his abuse. You should too if you want to fit in here and be a team player.↩
These criteria aren’t enough for competency testing, but companies seem to stop with these.
I’ve never had an interview contain the question: how would you explain this concept over email? Now write a sample email for us.
The biggest problems in any company are usually not technical, but communication problems. Companies easily burn 30% to 70% of their aggregate productivity trying to coordinate and act on poor communication between employees.
Ideally, companies should be constantly training poor communicators to help them improve. Instead, we end up with the current state of email armageddon.
Other things I’ve never been asked in an interview: long-term maintenance strategies, documentation strategies, when to automate vs. not, … — but, sure, keep testing Fibonacci, that’s exceedingly relevant.↩
Imagine if all companies were centrally controlled by a global Company Oversight Committee and no company could make more than 2X the median company revenue. If your company did make more than 2X median company revenue, your excess revenue would be transferred to the Company Oversight Committee itself. That’s basically how employment works.
Being a startup founder is one of the classic ways to hack employment because then there isn’t a big head in the sky dictating your maximum monetary reward no matter how much impact you have on the world.
Amusingly, when high performing creative individuals decide to “hack employment” by starting a startup (then retain 60% to 80% ownership for themselves (and 90% voting rights regardless of “ownership”)), they quickly become managers who stop generating their own creative output. Now they are part of the productivity surplus talent leeching parasite class and no longer one of the high productivity generators of society.
Then, when employees see they are being exploited by idle parasites taking 99.999% of their individual productivity (based on bird seed levels of employee stock options), the non-gruntled employees leave to create their own startups, eventually become executives of new startups, stop being individually productive, and the cycle begins anew.↩
Our original bae↩
Google fallacy: all our employees are good at recent CS PhD grad student tasks, so only good employees excel at recently-graduated grad student tasks.↩
what are they good for anyway?↩
This also goes back to “employees” being low status people who have to fight and compete for approval of the God-like company (while “executives” are picked by divine hands and placed into roles due to reputation or nepotism with little scrutiny involved all around).↩
<google> because clearly, academics are the smartest, most capable, and most logical humans alive. Everyone else is dirt compared to an academic. Who else is wise enough to give up years of earnings and productivity to the study of grant writing while living off 1/4th market rate stipends?
Maybe people who make poor life choices like to surround themselves with others who also make poor life choices to validate their past underachievement? </google>
Of course, the flip side is 19 year old dropouts who start companies to make ALL THE MONEY for themselves, which is clearly the best alternative. Wait. You’re saying there’s subtlety involved and we can’t reduce everybody to an aggravated stereotype? Say it ain’t so…↩
unless your entire hiring goal is to just hire clones of yourself or people who have (by random chance) gone through your exact same life trajectory. This is also known as the “we only hire people who went to the schools we went to” approach.↩
yes, the concept of orders-of-magnitude employees is bullshit for productivity reasons, but it is not bullshit from a worldwide impact calculation. You can’t be 100x more productive than a co-worker (unless you work 2,400 hour days), but you can generate 100x more value in the world if you make the right choices combined with the right talent and amplified by underlying luck particles (luckticles; also see: luckon wave/particle duality and how luckons generate interference patterns when confronted with narrow choices)↩
Assuming $15 minimum wage and a standard 2,087 work hours per year, minimum wage is $31,305 per year. 10x minimum wage is $313,050 per year.
Most employers seem to consider even $250,000 per year to be “extreme princess levels of compensation” and, if you do manage to get paid at those levels, your employer then blackballs you from future raises and promotions because they can’t imagine paying you even more (regardless of how many tens of millions of dollars you either save or generate for them or for the world as a whole per year).↩
unless you’re an executive-level dweeb↩
valuable employees are basically like welfare for executives with high compensation packages—the CEO (most likely) isn’t worth their $300 million stock package at all (exceptions include product/design driven (Jobs, Cook) or engineering driven (Musk) CEOs with concrete contributions, with their blood on the line and their head in the game, not just contributions of abstract “thought leadership” and rubbing elbows with other CEOs); CEOs are eminently replaceable; it’s the front line workers who built everything out of their minds (and this isn’t even considering compensating (with actual mo-nay) all the open source projects your company uses to be more efficient and reduce costs without ever contributing back)↩
in a collaborative document your entire team can edit and improve over time↩
people are vector-valued quantities. If you expect to judge the full competence of a whole person based on magnifying one dimension of their 1,000 element personality vector (which changes based on which random interviewer is feeding someone their pet questions), you are basically doing a random walk across the whole feature space of “important traits for reliable and professional employees” and would generate better results by rolling a d20 for your post-interview evaluation score sheet.↩
One of the worst things to be best in the world at is your employer’s highly specialized, non-generalizable systems. Luckily, shared common-platform open source software is becoming more widely used and helping to make technical skills more easily transferable across companies.↩
Two kinds of responses to security incidents:
- lol, another compromised wordpress install, what can you do? just gotta go with the flow. not our fault.
- why doesn’t anybody take responsibility for delivering trojans directly to users through innocuous means? are all websites run by mole people?
Kinda? There’s a big FOUC before the results re-render↩
It is possible Twitter designs for a new breed of super humans who can physically process 16 different conflicting visual items at the same time. In that case, these design complaints are invalid. It is also possible I’m stuck in a twitter A/B testing hellhole. Twitter.com user pages often randomly change interface elements throughout the day. (Turns out the giant content seizure page is just the new default twitter homepage which makes no visual coherent sense besides CONTENT CONTENT CONTENT!!!)↩