Vetting Candidates for Remote Work | Turing Hire
What you need to know about a candidate to make a great long-distance hire
Hiring remote, global talent is tough. Most of the usual signals you rely on when hiring someone in the US don’t apply. You may not recognize the school a candidate attended. You may never have heard of the companies a candidate has worked for, and you may not have any idea if the people providing references are genuine. You also can’t rely on recruiters because they’re likely dealing with these same problems.
So how do you make sure a remote prospect is up to the task and won’t simply slow you down?
Overcoming sourcing challenges with rigorous vetting
When the usual means of identifying talent are likely to fail you, a new process is needed. At my current company, Turing, we’re working to solve this problem. In this article, I’m going to share what I’ve learned during hundreds of technical screens. My goal is to help you identify best practices for vetting prospects and making sure your placements will be capable of delivering the results you require.
At Turing, we match developers from all over the world with positions at some of the world’s best and most interesting startups. Every time we make a placement, we put our reputation on the line. In other words, we can’t afford to make bad matches.
Since we are unable to rely on typical signals that would allow us to determine if someone is good enough for our clients, we’ve developed a system that provides unbiased feedback about a candidate’s skills in all of the areas critical for their remote-work success.
Core to our approach is a highly structured vetting process that incorporates sophisticated automated testing as well as detailed, in-person technical screening for individuals who have successfully passed our coding examinations.
All of our testing is intended to help determine key facts about the experience, skills, and capabilities of a candidate. What we learn through our tests and interviews helps us determine whether a person has the skills they claim to have, and whether they’re capable of performing basic tasks, managing projects and people, or even leading entire projects from conception through implementation.
So, at a baseline, we’re trying to determine if someone can contribute to a codebase in a meaningful way. Maybe their skills only support accomplishing scoped, individual tasks. For example, can this person add a button that does x, y, and z on this web page and build it in a way that takes into account the full technical stack of the application?
Or can this person go in and add unit-testing to some kind of already written back end piece of logic? Generally, can they contribute within an already established structure and do things that won’t upset that structure, if given functional requirements?
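Adding tests to existing back-end logic is a good example of that kind of scoped task. Here’s a minimal sketch of what we mean, in Python with pytest-style test functions; the `apply_discount` helper is invented for illustration, not taken from any real codebase:

```python
# Hypothetical back-end helper a candidate might be asked to cover with tests.
def apply_discount(price, percent):
    """Return `price` reduced by `percent` percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests in pytest style: each function checks one behavior,
# including the edge cases at the boundary of the valid input range.
def test_typical_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(49.99, 0) == 49.99

def test_invalid_percent_is_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        assert False, "expected ValueError"
```

What we look for isn’t just that the tests pass, but that the candidate thought to cover the unhappy paths without being told to.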
Then, there’s a level of complexity beyond that. Can this person take a direction like “Hey, we want this larger-scope feature built,” and successfully run with it? Can they execute at that level of complexity, on something that is going to be composed of many tasks? Can this person, for instance, build a new signup flow, or a new matching algorithm for some sort of matchmaking service? Can they individually make the sorts of principled tradeoffs in design and implementation that building at that level of complexity requires? Does this person have the sophistication to read between the lines and identify the functional requirements implicit in a higher-level, more coarse-grained specification?
But most critically, at both the task and feature-based levels, we’re trying to determine if this person will be able to contribute to an already established infrastructure, both technically and procedurally.
Identifying coders, leaders, and project architects
For more senior placements we start getting into higher levels of architectural complexity. Can this person start an entire project from scratch if they’re only given a general direction? If they’re tasked with building and deploying a new Android app that does something novel, can they deliver? Or if a company is expanding their product into a whole new space, can the developer take some rough business ideas and some rough sketches about how the company would like to go about doing this, and then build out a full-stack product, from the UI to the backend architecture to the design of the database models?
Can they manage the entire process from system architecture design to writing elegant implementations? Do they possess the depth and breadth of experience, as well as the general horsepower, to implement the MVP from the front end to the back end to the database, along with whatever infrastructure they’re going to use to actually deploy the code?
Even vetting somebody’s technical acumen at a general level, just to get a signal on their “seniority,” is a really complex thing to do. Proper vetting needs to establish whether a person can design systems at a very high level. Can they understand how pieces fit together and how those pieces talk to one another? And then you start getting down into deeper levels of abstraction. Does a candidate understand how a given feature fits into the greater scope of the product?
Can they adapt to the tools a particular company uses? Can they adapt to the cadences and workflows of the team they’re on, and work with other people to be part of a bigger whole? And then, at the task level, you need to determine whether someone has fundamental, baseline computer science abilities. Will they build efficient pieces of code? Do they understand notions of runtime and space complexity, and how those might apply to the code they write and the specific problems they are being asked to solve?
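To make that complexity question concrete, here is the kind of tradeoff we want candidates to recognize on sight, a self-contained Python sketch (the function names are illustrative, not from any actual test):

```python
# Two ways to check a list for duplicates -- a classic complexity tradeoff.

def has_duplicates_quadratic(items):
    """O(n^2) time, O(1) extra space: compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    """O(n) time, O(n) extra space: trade memory for speed with a set."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both are correct; a strong candidate can explain when the memory cost of the second version is worth paying, and when it isn’t.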
Are they able to conceptualize how their code will typically be used and how the stuff that they write will actually be run?
But the most critical thing to keep in mind is that it all comes down to code. Code has a dual purpose. It is going to be executed by a computer, at certain cadences, using certain pieces of data and memory. But it’s also the stuff that people are going to read and have to maintain. People will have to go in and either edit or understand what your piece of code is doing in order to write their own piece of code, to modify or extend the functionality of a given application. Designing abstractions and writing code that can be comprehended by another human is extremely important.
AI versus Human Vetting
It’s almost an overwhelming list of skills that somebody requires to be considered a “good” software engineer. Trying to vet them all in an hour-long phone interview is very difficult. At Turing, we’ve realized that it’s possible to take the human out of it. To a point.
We do have an hour-long technical interview that we use with some of the most skilled developers that we’ve found on our platform, where we validate and extend upon things that we find through automated testing.
The technical interview allows us to screen for things that we find are very, very hard to test for. Of course, we’re interested in their communication capabilities, but we also like to test some technical things in an interview in addition to an automated test.
We can get a really good signal as to whether somebody actually knows a particular framework or whether they speak a particular language. What’s really nice is that we can provide skill-validation to clients who are looking to get somebody up and running with the stack that they’re using as quickly as possible.
We’ve also found that we can automate testing of more general domain knowledge.
For instance, we can find out if the candidate knows how to build a server. Do they know and understand how a database is going to interact with a server, and how that server might interact with a front-end client? Do they know common design patterns that might be encountered in software engineering, and how those patterns might best be applied?
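As a concrete illustration of that server-and-database interaction, here is a minimal Python sketch using the standard library’s sqlite3 module. The table, data, and handler name are all invented for the example; a real system would sit behind an HTTP framework:

```python
import json
import sqlite3

# An in-memory database standing in for a real backing store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
conn.commit()

def handle_get_user(user_id):
    """A server-side handler: query the database with a parameterized
    statement, and return a JSON body plus status code for the client."""
    row = conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    if row is None:
        return json.dumps({"error": "not found"}), 404
    return json.dumps({"id": row[0], "name": row[1]}), 200

body, status = handle_get_user(1)
```

A candidate who understands this three-tier shape, client request in, database query in the middle, serialized response out, can usually pick up any specific framework quickly.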
Is a candidate familiar with the types of algorithms they might encounter in software engineering? Or, given a piece of code in a language they purport to know, can they tell us what they’d expect to happen if it were run with a certain type of input? We’ve found that those types of questions are really good for the types of automated testing that we currently do.
And we think that we get a pretty good signal on a developer’s mastery of a specific type of coding, say front-end development or back-end systems development, or mobile development or database design.
There’s really good tooling that we’ve built upon which allows candidates to run code in a browser. This lets us do things such as automated live coding tests. We can do automated live algorithm testing in this format with a significant degree of success, in terms of being able to test algorithmic correctness and efficiency.
We’re able to test whether or not somebody can write code that fulfills a particular function within a particular amount of time, and with a particular amount of memory. We’re really excited to expand upon this method and see what further coding-based automated tests we can do.
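A real grading system runs untrusted code in a sandbox, but the core idea can be sketched in a few lines of Python. Everything here, the function name, the limits, and the result strings, is illustrative, not Turing’s actual harness:

```python
import time
import tracemalloc

def grade_submission(func, cases, time_limit_s=1.0, memory_limit_bytes=10_000_000):
    """Run a candidate's function against (args, expected) test cases,
    checking correctness, wall-clock time, and peak Python-level
    memory allocation (tracemalloc only sees Python allocations)."""
    for args, expected in cases:
        tracemalloc.start()
        start = time.perf_counter()
        result = func(*args)
        elapsed = time.perf_counter() - start
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        if result != expected:
            return f"wrong answer on {args!r}"
        if elapsed > time_limit_s:
            return "time limit exceeded"
        if peak > memory_limit_bytes:
            return "memory limit exceeded"
    return "accepted"

# Example: grade a trivially correct submission.
cases = [((2, 3), 5), ((10, -4), 6)]
print(grade_submission(lambda a, b: a + b, cases))  # accepted
```

The hard parts that this sketch glosses over, isolating the process, capturing crashes, and measuring true OS-level memory, are exactly why real grading infrastructure takes effort to build.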
Where automated testing breaks down
But even in a live coding format, there are holes in our automated tests.
Right now, it’s still very hard to get a computer to tell us how elegant somebody’s code is, or how well organized, how readable, or how well abstracted it is.
That’s where I really feel a technical interview comes in handy. I can present candidates with situations they might encounter during their work, and they can walk me through how they’d design a solution to the problem. Doing this during a technical interview helps me understand a candidate’s thinking and what kind of code, organizationally, they’d produce to approximate a solution. This really gets me into a prospect’s critical thinking: I can see how they handle problems with uncertain specifications, how they ask questions to pin down the required specifications, and, more generally, get a clearer idea of the nature of their programming abstractions and elegance.
“Automated testing establishes a bar that filters people out. The in-person interview confirms the testing and tests the candidate on critical things that are currently hard to measure in an automated way.”
In general, what we’ve learned at Turing is that a well-designed and comprehensive automated testing facility is very cost-effective when you need to screen a large number of applicants blind. If I had to do an in-person interview, or even review and background check every candidate that wanted to work with Turing, there wouldn’t be enough hours in the day or enough days in the year.
And as our testing capabilities continue to evolve, they sharpen our ability to find the best candidates and invest our time where it’s most productive: doing technical screens for the top-tier applicants only.
What to do if you don’t have automated testing
In my next post, I’ll talk a bit about what you can do if you don’t have an automated testing facility. I’ll also dig into the way you can make the onboarding process simpler, and how you can spot early signs that a remote hire is struggling or even failing. Stay tuned!