At some point in every search, there's a moment when the data tells you something is wrong.

For us, that signal has a specific shape: when fewer than one in four candidates we present advance past the first interview stage, we stop the search and go back to the brief.

Not because the candidates are weak. Usually they're not. We stop because that conversion rate tells us something fundamental is misaligned between what the client thinks they're hiring for and what we're actually sourcing. Running more candidates through a broken process doesn't fix the process.

This is the rule we call 1-in-4, and it's been one of the most useful constraints we've built into how we work.

Where the rule comes from

I've been doing technical recruiting since 2007. For the first several years, the default assumption — mine, and most firms' — was that a low advance rate meant the sourcing needed to improve. More candidates, better channels, different search strings.

That's sometimes true. But it's not usually the real problem.

The more common pattern is this: the brief was incomplete when the search started. Something critical was left undefined. What "senior" means in this team. How much autonomy the role actually requires. Whether the client wants someone to lead or someone to execute. What the engineering culture is like on a Tuesday afternoon when the sprint is going sideways.

When those things are undefined, interviewers apply their own filters — different filters, often contradictory ones. The process generates noise, not signal. The 1-in-4 rule was our response to that pattern. It's not a benchmark for quality — it's a diagnostic trigger.

What the number actually measures

After the first client interview — typically a technical screen or initial conversation with an engineering lead — we track how many candidates advance to the next stage. If fewer than 25% of the candidates we've presented are moving forward, that's the signal.

We chose the first interview as the measurement point because it's early enough to course-correct before significant time has been spent on both sides, and late enough that we're not just measuring our own pre-screening.
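The trigger itself is simple arithmetic. As an illustrative sketch (the function and names here are my own, not the firm's actual tooling), it's a conversion-rate check at the first-interview stage:

```python
# Illustrative sketch of the 1-in-4 trigger. Hypothetical names;
# the source doesn't describe the firm's actual tracking tools.

def should_recalibrate(presented: int, advanced: int, threshold: float = 0.25) -> bool:
    """Return True when fewer than `threshold` of presented candidates
    advanced past the first client interview."""
    if presented == 0:
        return False  # no data yet, nothing to diagnose
    return advanced / presented < threshold

# Example: 8 candidates presented, 1 advanced past the first interview.
should_recalibrate(8, 1)  # 1/8 = 0.125 < 0.25 -> True, stop and recalibrate
```

Note that exactly one in four does not trigger the rule; the signal is "fewer than" 25%, so the boundary case passes.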

What a recalibration actually looks like

When we hit the threshold, we call the client. Not to report the number — to understand it. The conversation usually starts with the rejections. We ask about each candidate who didn't advance: not just "what was wrong" but "what specifically was missing."

Some examples of what we've found:

  • The seniority definition was wrong. The brief said "senior" but the interviewers were evaluating against a staff-level bar.
  • The technical context wasn't communicated. A candidate looked strong on paper but had never worked with the scale of data the client's system processes.
  • The role had changed. Between the brief and the first interviews, the team had made a decision that shifted the scope of the role.
  • Two interviewers had different mental models. One was screening for technical depth. The other was screening for communication and cross-functional work style.

None of these are unusual. They're standard features of how technical hiring actually works inside growing companies. The 1-in-4 rule doesn't prevent them — it forces them to surface before too much time has been spent.

Why most firms don't do this

The incentive structure in contingency recruiting is to keep moving. If you only get paid when a hire is made, stopping the process to recalibrate costs you time and delays the fee. The path of least resistance is to run more candidates through and hope the next batch lands better.

This is why volume shops exist. More submissions, more chances, faster close. The client gets a hire. Whether it's the right hire is someone else's problem.

We stopped accepting that trade a long time ago. The incentive to get it right is real: a hire that doesn't work out costs us the relationship. Over time, the 1-in-4 rule became a genuine operating principle rather than a risk management tool.

The broader principle

1-in-4 is a specific number attached to a specific point in the process. The underlying principle is simpler: if the data says something is wrong, stop and look at the data instead of running faster.

We track time-to-offer, offer acceptance rate, and 90-day retention for every search. Each of those numbers tells you something about where the process is breaking down. None of them are useful if you ignore them.
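A minimal sketch of what tracking those three metrics per search might look like (field names and structure are hypothetical, assumed for illustration only):

```python
# Hypothetical per-search metrics record; the source only names the
# three metrics tracked, not how they are stored or aggregated.
from dataclasses import dataclass
from datetime import date


@dataclass
class SearchMetrics:
    opened: date            # search kickoff
    offer_made: date        # date the offer went out
    offer_accepted: bool
    retained_90_days: bool  # still in seat at 90 days

    @property
    def time_to_offer_days(self) -> int:
        return (self.offer_made - self.opened).days


def summarize(searches: list[SearchMetrics]) -> dict[str, float]:
    """Aggregate the three metrics across closed searches."""
    n = len(searches)
    accepted = sum(s.offer_accepted for s in searches)
    return {
        "avg_time_to_offer_days": sum(s.time_to_offer_days for s in searches) / n,
        "offer_acceptance_rate": accepted / n,
        # 90-day retention is only meaningful among accepted offers
        "retention_90_day": sum(
            s.retained_90_days for s in searches if s.offer_accepted
        ) / max(accepted, 1),
    }
```

Each aggregate points at a different failure mode: slow time-to-offer suggests a process problem, low acceptance suggests a selling or compensation problem, and weak 90-day retention suggests the brief itself was wrong.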

Technical recruiting produces a lot of data. Most of it sits in spreadsheets and ATS dashboards and is never looked at analytically. The firms that use it well run better processes, produce better hires, and build relationships with clients who are willing to hire them again.


Sources: Bondy Group internal sourcing and placement data (2008–present). SHRM cost-of-bad-hire estimates (1.5–2x annual salary).