A Simple Way to Improve Data Science Interviews

Identifying the top 5% of candidates via Technical Problem Framing

Photo by pine watt on Unsplash

In this post I share a story about a mistake I made as an inexperienced Data Science hiring manager, and how it changed the way I conduct technical interviews. I also walk through an example Data Science interview prompt and show how stronger candidates approach the problem differently than weaker candidates. While I focus my discussion on Data Science, most of my insights and suggestions are relevant for any technical role, including Software Engineering, Data Engineering, etc.

But first, a brief bit of background about myself.

I’ve been working in Software Engineering and Data Science/Machine Learning for about nine years now. I’ve worked at companies of all sizes — the largest being Wayfair (13k employees), and the smallest being my current employer, Fi (~100 employees), where I’m the VP of Data. I’m now approaching an inflection point where half of my career has been spent as an Individual Contributor (IC), and half as a Manager/Director/VP. During the latter half, I’ve built or inherited teams ranging from two to 15 individuals. In that time, I’ve hired about 20 people, conducted hundreds of interviews, and designed countless interview pipelines.

During my time as a hiring manager, I've made many successful hires, but I've also made my share of mistakes along the way. For instance, early in my hiring-manager career, when I was first tasked with constructing an interview pipeline from scratch, I made one of my biggest hiring blunders. It took me another year or two to fully understand the mistake I'd made. But once I was able to articulate it, I knew it was avoidable, and I took steps to ensure it didn't happen again.

This post is about that mistake, and what I do to avoid making it again.

My hiring mistake

Photo by Eastman Childs on Unsplash

In 2019, I was promoted from Senior Machine Learning Engineer to Lead Data Scientist, which was a management role. My team was looking to build out new modeling applications that required different models and integrations than those that I’d already built. And since I’d recently taken on a management role, I didn’t have the bandwidth to build out all of the needed infrastructure myself. So I set out to hire for a Senior Data Scientist to help build and maintain the new models and integrations.

Interview process

I designed an interview pipeline that consisted of a hiring manager screen, a takehome project, and a few panel interviews. Except for the cross-functional interview, all of the interviews were technical in nature, and involved some form of machine learning, data engineering, or software engineering challenges and design problems. Within a couple months, we landed our ideal candidate.

My 2019 Data Science interview pipeline

The new-hire’s first few weeks went smoothly. And once they were up to speed on the tech stack, teammates, and workflow process, I assigned them a larger project.

Symptoms emerge

After a couple weeks on their assigned project, I noticed their tasks were taking longer than anticipated, so I spent extra time with them each week to keep things on track. Unfortunately, things did not improve. Nearly every time we met to discuss progress and next steps, there seemed to be no progress made. Instead, they'd surface newly encountered technical obstacles that, from their perspective, needed to be resolved in order to move forward. I remember feeling frustrated because I had trouble understanding how the obstacles they kept raising were relevant, since they seemed to come out of nowhere.

I recall being two months into a project that we had projected would take two weeks, and we still didn't have a functioning solution. Worse, we didn't even have a clear timeframe for completion.

The underlying issue

Photo by Alexander Hafemann on Unsplash

Now that I’ve been managing and hiring for a number of years, and have seen my share of both successes and failures with new-hires, I’m able to articulate exactly what the underlying issue was, and where I went wrong.

The underlying issue was that they lacked a skillset that was critical to success in their role. On the surface, it might seem like they lacked technical ability, since they frequently surfaced technical obstacles that they could not quickly resolve. But that wasn’t the case. In fact, their technical abilities were exceptional.

Rather, they lacked the ability to understand the connection between the technical application and the business need, which prevented them from knowing when and how to make tradeoffs. This manifested as insurmountable technical obstacles, each of which could have been avoided by simplifying the problem statement.

For instance, one recurring challenge stemmed from the size of the dataset they were working with. Every time they cited this as an issue, I'd suggest trimming the dataset down to the three or four features we cared most about, and then filtering to only the records likely to be relevant. Doing so would have reduced the dataset to less than 0.5% of its original size, avoiding any volume issues while still delivering roughly 80% of the value-add of the full dataset. But each time I suggested this, it was clear they hadn't considered it, even though I brought it up repeatedly.

Technical Problem Framing

To recap – the new-hire had trouble maintaining a strong understanding of both the business context and the technical context at the same time, and so the technical tasks they set out to solve were often more complicated than they needed to be. In other words, they had issues with technical problem framing, which is the ability to frame a business objective as a technical objective, and the ability to understand how a set of requirements represent an underlying business objective.

For those not familiar with technical problem framing, or with the typical workflow of a Data team: requirements are usually provided by either a Product Manager (PM) or the manager/technical lead. But even when requirements are handed to ICs, they are never fully exhaustive. It's therefore essential that ICs be able to understand the objective behind those requirements. If they can't do this for themselves, they'll need to be watched very closely by their manager or their PM. This limits the scalability of the team, and generally causes friction between the IC and their manager/PM.

When I reflect on this scenario, it's clear where I went wrong – I didn't construct an interview that evaluated for technical problem framing, and this skillset was a requirement for them to succeed in their role. Once I realized this, I started experimenting with ways to bake this into my interview process. Fortunately, the most effective change I found required only a minor adjustment.

Adjusting the interviews

Here’s what I do differently.

For at least one technical interview, I embed the technical task in a real-world business scenario, where fully understanding the added context is required to solve the problem adequately.

In addition to evaluating technical ability, this adjusted interview evaluates the candidate's ability to infer from the requirements what the actual intention of the project is, and then to ensure that this intention is achieved when designing the technical solution.

Next I’ll walk through an example interview that does not evaluate technical problem framing, and discuss what a strong solution looks like. Then I’ll show that same interview but with the technical problem framing adjustment, and show how it modifies what’s considered a strong solution.

You can find the original dataset I use for this interview here. You can also find the interview prompt configured as a Kaggle notebook here.

Example 1: Data Science modeling interview WITHOUT technical problem framing assessment

Here’s the interview prompt without any technical problem framing assessment.

# Interview WITHOUT technical problem framing
# as part of the assessment.

# We have provided you with a dataset
# consisting of patient health information
# related to cardiac arrest (heart-attacks).
# Each record represents a patient that
# visited the Emergency Room (ER) because they
# were experiencing chest pains. Each column
# corresponds to a measurement that was taken
# at the time they arrived at the ER, including
# the type of chest pain they were
# experiencing. The dataset also contains a
# binary column that indicates whether or not
# the patient ended up having a heart-attack
# within 48 hours of their ER visit.

import pandas as pd

df = pd.read_csv(f"{filepath}/heart.csv")

# Your task is to construct a model that
# predicts whether a patient will have a
# heart-attack based on the provided inputs.

def predict_heart_attack(row):
    """
    Accepts one row of heart-attack dataset.
    Returns 0 or 1 as the prediction.
    """
    pass

First five rows of heart-attack dataset.

I used to conduct this interview in a live setting, where it's convenient to have a small and clean dataset to work with. This one is small (303 rows and 13 inputs) and relatively clean, so candidates with any amount of ML experience can build a classifier without much difficulty.


Weaker candidates are easy to identify since they typically struggle to build even a basic model within the allotted time, let alone a good one. The more nuanced task as an interviewer is distinguishing the "good" candidates from the "great" candidates. In addition to demonstrating the ability to build a working classifier in a short time frame, stronger candidates typically differentiate themselves by (1) taking an iterative approach – they get something working quickly and then improve on it – and (2) making deliberate decisions. For instance, when I ask them why they chose a specific performance measure to assess their model, they have a specific answer. Weaker or less experienced candidates will give answers, but without any real justification for them.
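To make the iterative approach concrete, here's a minimal sketch of what "get something working quickly, then improve" can look like. This is an illustration, not the interview's expected answer: the tiny synthetic frame, the `age` feature, and the `had_heart_attack` label column are stand-ins for the real dataset.

```python
import pandas as pd

# Tiny synthetic stand-in for the heart-attack dataset.
df = pd.DataFrame({
    "age":              [34, 51, 62, 45, 70, 58, 39, 66],
    "had_heart_attack": [0,  0,  1,  0,  1,  1,  0,  0],
})

# Iteration 1: a majority-class baseline -- something working immediately.
majority = df["had_heart_attack"].mode()[0]
baseline_acc = (df["had_heart_attack"] == majority).mean()

# Iteration 2: a simple single-feature threshold rule.
preds = (df["age"] >= 55).astype(int)
rule_acc = (preds == df["had_heart_attack"]).mean()

print(f"baseline accuracy: {baseline_acc:.3f}")  # 0.625
print(f"rule accuracy:     {rule_acc:.3f}")      # 0.875
```

Each iteration produces a working, measurable predictor, which is exactly the behavior that separates stronger candidates from those who aim for a polished model in one shot.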

Example 2: Data Science modeling interview WITH technical problem framing assessment

Here’s that same interview question but with an embedded business scenario, so it includes technical problem framing as part of what’s being assessed.

# Interview WITH technical problem framing as
# part of the assessment.

# An Emergency Room (ER) is receiving an
# overwhelming number of patients experiencing
# chest pains, which are a symptom of heart
# attacks. Patients who are showing other
# symptoms of heart attacks should be
# prioritized (fast-tracked) upon entering the
# ER waiting room in order to mitigate the
# effects of the heart-attack, or avoid it
# altogether.

# On average, the ER is equipped to fast-track
# 20% of patients who are experiencing chest
# pain, allowing them to skip the patient
# queue. Currently, the ER's policy is to
# fast-track any patients who are
# experiencing Type 2 chest pain (atypical
# angina). This corresponds to a value of
# `df['cp'] == 1` in the dataset. The ER staff
# thinks that their existing policy is
# suboptimal, and is requesting that you
# perform an analysis on this patient data in
# order to develop a policy that better
# prioritizes high-risk patients.

# We have provided you with a dataset
# consisting of patient health information
# related to heart-attacks. Each record
# represents a patient that visited the ER
# because they were experiencing chest pains.
# Each column corresponds to a measurement that
# was taken at the time they arrived at the ER,
# including the type of chest pain they were
# experiencing. The dataset also contains a
# binary column that indicates whether or not
# the patient ended up having a heart-attack
# within 48 hours of their ER visit.

import pandas as pd

df = pd.read_csv(f"{filepath}/heart.csv")

# Your task is to use the dataset to construct
# a fast-track policy that is better than the
# ER's current policy.

def fast_track(row):
    """
    Accepts one row of heart-attack dataset.
    Returns 0 or 1 as the decision to
    fast-track the patient.
    """
    pass

First five rows of heart-attack dataset.

Notice that the technical aspects of the problem remain unchanged – the exact same dataset is used and the solution signature is the same. But there is now additional information that changes the profile of an ideal solution.

New problem statement

The added business context introduces two new pieces of information that the candidate needs to understand before starting on their solution. The first is a constraint: only 20% of patients can be fast-tracked. With 303 patients, that corresponds to 60.6, or 61 if we round up:

0.20 * len(df)  # Outputs 60.6

Therefore, the maximum number of patients we can “save” by fast-tracking is 61, since the ER cannot fast-track more than that.

The second new piece of information the ER context provides is that the ER has a baseline strategy that needs to be outperformed in order for the new policy to be considered. This baseline strategy correctly predicts 41 heart attacks:

# The ER baseline policy is to fast-track
# any patient with Type 2 chest pain,
# which corresponds to
# `df['cp'] == 1`. So the ER baseline
# strategy is to return 1 when
# df['cp'] == 1, and 0 otherwise.
(
    df
    .groupby('cp')['had_heart_attack']
    .agg(['mean', 'count'])
)

The output of the `groupby`, showing the breakdown of heart-attack rates for each type of chest pain.

.82 * 50  # outputs 41

Combining the added constraint (at most 61 fast-tracks) with the goal of beating the baseline (41 correctly fast-tracked heart attacks), we can formulate the new objective as: find a classifier that captures more than 41 heart attacks among its top 61 predictions – that is, a recall@k, with k=61, greater than the baseline's 41.
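This objective can be expressed as a small helper. The sketch below is illustrative: the `recall_at_k` function, the `risk_score` column, and the tiny synthetic frame are my own stand-ins, with `had_heart_attack` as the label column; in the real interview k would be 61.

```python
import pandas as pd

def recall_at_k(df, score_col, label_col, k):
    """Count of actual positives among the k highest-scored rows."""
    top_k = df.sort_values(score_col, ascending=False).head(k)
    return int((top_k[label_col] == 1).sum())

# Tiny synthetic example: six patients, fast-track capacity k=2.
df = pd.DataFrame({
    "risk_score":       [0.9, 0.1, 0.8, 0.4, 0.2, 0.7],
    "had_heart_attack": [1,   0,   1,   0,   1,   0],
})

print(recall_at_k(df, "risk_score", "had_heart_attack", k=2))
# The two highest scores (0.9 and 0.8) both belong to true positives,
# so the helper returns 2.
```

Framed this way, any candidate policy can be scored by a single number, and the baseline to beat is simply the same number computed for the ER's existing rule.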

Weak candidates

Candidates who are weak at technical problem framing will gloss over these two pieces of information and jump immediately into solution mode. This typically results in one of two suboptimal solutions: one with high precision but that correctly flags 41 or fewer heart attacks, or one with high recall but precision so poor that the first 61 fast-tracked patients capture no more than 41 heart attacks. As the interviewer, I'll give hints to steer the candidate if I see them heading in the wrong direction. Some candidates will pick up on my hints and adjust course correctly, while others will still struggle to identify the right problem to solve.

Strong candidates

Candidates who are strong at technical problem framing will approach the problem differently. Instead of jumping into solution mode right from the start, they take time up front to read through the prompt thoroughly, often multiple times, in order to make sure they understand the context.

Next, they do something that is strongly correlated with success, and something I pay close attention to:

The best candidates write out their approach before starting on it, and then ask me (the interviewer) if it sounds reasonable.

When I observe this, it’s music to my ears. Why? Because this is exactly what I want them to do if they were to join my team. I want someone who can articulate their plan in advance before diving into it, and also has the awareness to run it by me before they start on it. Although this takes longer up front, it reduces the need to double back and change approaches in the middle of the interview, so it ensures that the remaining time they have is well spent.

Candidates who are able to articulate the correct problem are typically able to solve the challenge as well. This shouldn't be surprising, since it's not very difficult to beat the baseline. For instance, even the following simple rule-based solution beats it:

def fast_track(row):
    """
    A very simple solution that still beats the
    baseline.
    """
    # "cp" is the column for chest pain.
    if row['cp'] == 2 and row['sex'] == 0:
        return 1
    elif row['cp'] == 1 and row['sex'] == 0:
        return 1
    return 0

# Check the performance
df['pred'] = df.apply(
    lambda row: fast_track(row),
    axis=1,
)
top_k_preds = df.sort_values('pred').tail(61)
recall_at_k = len(
    top_k_preds
    .query('had_heart_attack == 1')
    .query('pred == 1')
)
print(f"Recall@61 = {recall_at_k}")
# Outputs Recall@61 = 50

But bonus points are definitely given if the candidate can state the problem clearly and solve it to its maximum extent (perfect recall at k=61).
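One natural way to push toward that maximum is to replace hard 0/1 rules with a continuous risk score and fast-track exactly the k highest-risk patients. The sketch below illustrates the idea on synthetic data; the hand-picked weights, the `trtbps` (resting blood pressure) feature, and the `had_heart_attack` label are assumptions for illustration – in practice the score would come from a fitted model's predicted probability.

```python
import pandas as pd

# Synthetic stand-in for the heart-attack dataset
# (column names follow the post; values are made up).
df = pd.DataFrame({
    "cp":               [1,   2,   0,   2,   1,   0],
    "trtbps":           [150, 140, 120, 160, 130, 110],
    "had_heart_attack": [1,   1,   0,   1,   0,   0],
})

K = 2  # stand-in for the real fast-track capacity of 61

# Illustrative hand-weighted risk score: chest-pain type plus a
# small contribution from resting blood pressure.
df["risk"] = (
    0.5 * df["cp"].isin([1, 2]).astype(int)
    + 0.003 * df["trtbps"]
)

# Fast-track exactly the K highest-risk patients.
fast_tracked = df.sort_values("risk", ascending=False).head(K)
caught = int((fast_tracked["had_heart_attack"] == 1).sum())
print(f"Heart attacks caught in top {K}: {caught}")
```

Ranking by a continuous score makes the capacity constraint trivial to satisfy (just take the top k), which is why candidates who frame the problem correctly tend to converge on this shape of solution.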

Benefits of interviewing for technical problem framing

The major benefit of interviewing for technical problem framing is that the people who pass, and therefore get hired, are able to operate with more independence. Because they can internalize the objectives they're tasked with improving, they reduce the amount of oversight needed from their manager, as well as from Product Owners. This is critical to scaling the impact of a technical team, especially at smaller organizations where there is little PM support and managers are expected to also be ICs, leaving them limited bandwidth to oversee many projects.

We've been able to keep the Data team at Fi, for instance, very small and nimble, and much of this is because we've only hired individuals with strong technical problem framing ability. We're currently a team of just four (soon to be five), yet we serve all Data-related needs for a business of 100+ employees and own all of the ETL processes, Data Warehouse design and maintenance, Tableau reporting, deep-dive and root-cause analyses, machine learning and predictive modeling, and, more recently, R&D for new feature development. And the domains we cover span every aspect of the business: Finance, Marketing, Customer Experience, Engineering, Hardware, Firmware, Operations, and Product. We're able to take on so much work, across so many domains, because everyone on the team is skilled at taking a loosely defined problem and mapping it to a technical problem statement.

Coming Soon

Stay tuned for a future post where I talk through how to improve your own ability to frame technical problems, as well as how to improve your team’s ability to do this if you’re a manager.

A Simple Way to Improve Data Science Interviews was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.

