
Prove the Hard Parts First
A practical playbook for de-risking complex builds by attacking latency, correctness, and integration early
We've seen this pattern dozens of times: a team spends months building the "easy" parts of a system—UI, CRUD operations, basic workflows—only to discover in month 11 that the core technical challenge is unsolvable, or at least unsolvable within their constraints.
The project dies not from lack of effort, but from late discovery. The hard part was always there. They just chose to ignore it until it was too late.
What Are the Hard Parts?
The hard parts are the technical or architectural challenges that, if unsolvable, make the rest of the project irrelevant. They usually fall into a few categories:
- Latency: Can we make this fast enough for users to care?
- Correctness: Can we make this accurate enough to trust?
- Scale: Will this break when we 10x the load?
- Integration: Can we actually connect to the systems we need?
- Compliance: Can we do this within legal/regulatory bounds?
The common thread: these aren't features you can stub out. They're constraints that define whether the entire project is viable.
Why Teams Avoid the Hard Parts
It's not laziness. It's psychology and process:
- Building the easy parts feels like progress (and looks good in demos)
- The hard parts are uncertain, so they're easy to deprioritize
- Roadmaps reward feature completion, not risk reduction
- Admitting you don't know if something is possible feels like failure
"The easy path is to build what you know how to build and hope the hard part works itself out. It never does."
The Pathfinder Approach
We use a simple framework: identify the 2-3 hardest technical challenges, then build the thinnest possible slice that proves or disproves them.
Step 1: Name the Risks
Write down the assumptions that, if wrong, would kill the project. Be specific:
- "Can we retrieve relevant context from 10M documents in under 200ms?"
- "Can our LLM generate SQL that's safe and correct 95% of the time?"
- "Can we process 10K transactions per second without dropping any?"
Step 2: Design a Spike
For each hard part, design a minimal test that produces real data. Not a thought experiment—a working prototype that hits real constraints:
- Use production-scale data (or close to it)
- Measure actual latency, not theoretical performance
- Test against real third-party APIs, not mocks
- Run under realistic load
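The measurement loop at the heart of a latency spike is small. A minimal sketch in Python, where `query_system` is a placeholder for whatever call you're actually testing (a vector search, an external API) and the random sleep merely stands in for real work:

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def query_system(payload: str) -> None:
    """Placeholder for the real call under test (vector search, API, etc.)."""
    time.sleep(random.uniform(0.05, 0.4))  # stand-in for real work

def timed_call(payload: str) -> float:
    """Return wall-clock latency of one call, in milliseconds."""
    start = time.perf_counter()
    query_system(payload)
    return (time.perf_counter() - start) * 1000

# Fire 200 requests through 20 concurrent workers to approximate real load.
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = sorted(pool.map(timed_call, (f"query-{i}" for i in range(200))))

p50 = statistics.median(latencies)
p95 = latencies[int(len(latencies) * 0.95)]
print(f"P50: {p50:.0f} ms  P95: {p95:.0f} ms")
```

Swap the placeholder for the real call, point it at production-scale data, and you have numbers instead of opinions.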
Step 3: Decide Based on Reality
Now you have data. Make a decision:
- Green: The approach works. Build the real system.
- Yellow: It works, but barely. Discuss trade-offs explicitly.
- Red: It doesn't work. Pivot or kill the project now.
Red is not failure. Red early is success. It saves you from building the wrong thing for six months.
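The green/yellow/red call can even be encoded in the spike's output so the verdict is automatic rather than argued. A sketch, assuming a latency-style metric and a 20% "barely passing" margin you'd tune per project:

```python
def verdict(p95_ms: float, target_ms: float, margin: float = 0.2) -> str:
    """Classify a measured P95 latency against a target.

    Green: comfortably under target. Yellow: under target but within
    `margin` of it (works, but barely). Red: over target.
    """
    if p95_ms > target_ms:
        return "red"
    if p95_ms > target_ms * (1 - margin):
        return "yellow"
    return "green"

print(verdict(300, 500))  # comfortably under budget -> green
print(verdict(450, 500))  # within 20% of the limit  -> yellow
print(verdict(620, 500))  # over budget              -> red
```

The point isn't the three-line function; it's that the thresholds get written down before the spike runs, so nobody can move the goalposts afterward.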
Real Examples
Case 1: RAG Latency
A client wanted to build a real-time customer support assistant. The hard part: retrieving relevant context from 5M support tickets in under 500ms.
We built a spike in week one. Tried three vector DB configurations, measured P95 latency under load. Result: one config hit 300ms, two didn't. We picked the winner and moved on. Total time: 5 days.
Case 2: SQL Generation Accuracy
Another client wanted an AI that could answer business questions by writing SQL. Hard part: could the LLM write correct, safe queries against a complex schema?
We built a test harness with 50 real questions and a golden dataset, measured accuracy across three models and five prompting strategies, and found one combination that hit 92% correctness. We knew within 10 days whether the project was viable.
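A harness like that boils down to a loop over (question, expected answer) pairs. A minimal sketch, where `generate_sql` and `run_query` are placeholders for the model call and the test database (the canned returns exist only to make the sketch runnable):

```python
# Golden dataset: (question, expected result). A real harness would load
# ~50 of these from a file; two toy entries here.
GOLDEN = [
    ("How many orders shipped last week?", 1284),
    ("What was total revenue in Q3?", 912400.50),
]

def generate_sql(question: str) -> str:
    """Stand-in for the LLM call under test."""
    return f"-- model output for: {question}"

def run_query(sql: str):
    """Stand-in for executing the query against a test database."""
    return 1284 if "orders" in sql else 0

def accuracy(golden) -> float:
    """Fraction of questions whose generated query returns the expected result."""
    correct = 0
    for question, expected in golden:
        try:
            if run_query(generate_sql(question)) == expected:
                correct += 1
        except Exception:
            pass  # invalid or unsafe SQL counts as a miss
    return correct / len(golden)

print(f"accuracy: {accuracy(GOLDEN):.0%}")
```

Run the same loop once per model and prompting strategy, and the "which combo wins" question answers itself from a table of percentages.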
When to Use This
This approach is essential when:
- You're building something novel (no one's done exactly this before)
- You're integrating with unfamiliar systems
- You're pushing the limits of performance, scale, or accuracy
- The project is expensive enough that failure would hurt
It's overkill for straightforward CRUD apps or well-trodden paths. But if you're reading this, you're probably not building a CRUD app.
"The goal isn't to avoid failure. It's to fail fast, fail cheap, and fail before you've committed the whole team to a dead end."
We specialize in this kind of pathfinding work—taking your riskiest assumptions and turning them into knowns. If you're facing a build where the hard parts aren't clear yet, or where you're not sure if it's even possible, let's talk.
