Red Flags and Green Flags When Hiring a Technology Contractor

Gabe Hilado
Founder and CEO, Zenpo Software Innovations

The conference room has too many people in it. Two from procurement, one from legal, three from the program office, and across the table, the contractor's team — four deep, including someone whose only job today is to run the slide deck. The proposal is spiral-bound. It's thick. The methodology section alone runs eleven pages.

Nobody in the room will remember the methodology section in six months. But the contract will still be running.

What does the first question tell you?

This is where it starts — the first five minutes, before the rehearsed material kicks in. A contractor who opens by asking about the feature list is telling you what they optimize for: scope they can bid against. Features are measurable. Features go on the SOW. Features close deals.

A contractor who opens by asking what happens when something breaks is telling you something different. They've been on the other end of a 7 AM call where the system is down and the program office wants answers. They know that the interesting part of any engagement isn't the build. It's the Monday morning when the build meets production and loses.

That question — "what's your incident response process?" or "who gets called when this goes sideways?" — isn't in most RFP response templates. It shows up when someone has been accountable for a system that broke and had to explain why, in a room where nobody wanted to hear the explanation.

How do they talk about failure?

Midway through the eval, someone on the government side asks about a past engagement. This is where the room splits.

One type of contractor gives you the polished version. Everything went well. The client was happy. Lessons were learned. They use the word "challenges" the way a press release uses it — to acknowledge difficulty existed without ever naming it. You leave the conversation knowing nothing about how they actually operate under pressure.

The other type pauses. They tell you about the migration that broke because the source system had undocumented field dependencies. Or the deployment that rolled back because a downstream API changed behavior between staging and production. They name the failure. They walk through what they did about it. They can tell you what they'd do differently.

That pause matters. It's the difference between someone who has rehearsed a narrative and someone who is retrieving an actual memory. The specificity is the signal — not the polish.

And there's a companion to this: ask them to name a technology decision they'd make differently in hindsight. Every practitioner who's been in the field long enough has one. The contractor who can't name one hasn't been making decisions. They've been executing someone else's.

Where does pushback show up?

Halfway through the requirements walkthrough, something should feel uncomfortable. Not adversarial — just honest. A contractor who agrees with every requirement without friction is either not listening or not thinking. Both are problems.

The green flag is pushback that arrives uninvited. "This requirement assumes the data model supports X — have you validated that?" Or: "You've scoped this as a six-month effort, but the dependency on the authentication migration means the first three months are blocked. Have you accounted for that?"

That kind of pushback costs the contractor something. It risks the impression that they're difficult. It complicates the evaluation. It makes the room less comfortable. But it also means they read the requirements closely enough to find the problem — and they cared enough about the outcome to say it out loud before the contract was signed, not after.

Organizations that evaluate technology decisions based on evidence rather than vendor confidence already know this: the contractor who tells you what you want to hear before the contract is the same one who tells you what went wrong after. The order just changes.

What's in the proposal?

Pull the proposal back out. Flip through it. Count the pages dedicated to methodology — the frameworks, the SDLC diagrams, the governance models, the org charts showing who reports to whom.

Now count the pages dedicated to your actual problem. The specific system. The specific constraints. The specific risks that are unique to your environment and won't appear in any other proposal this team submits this quarter.

If the methodology section is longer, you're not reading a proposal. You're reading a template with your agency name dropped into the header. The methodology section exists because it's reusable. It survives from proposal to proposal because it doesn't require the contractor to understand anything specific about you. It's filler that looks like rigor.

A proposal that spends most of its weight on your problem — the integration constraints, the compliance requirements, the legacy dependencies, the organizational dynamics that will actually determine whether the project succeeds — is a proposal written by someone who did the work before the pitch. That's how firms built around delivery accountability tend to operate: a product-led approach to consulting rather than a methodology-led one. Start with the problem, not the process.

Why confidence is the wrong signal

The contractor who sounds most confident in the pitch is often the one who's never been accountable for what broke. Confidence in a contractor eval is easy to manufacture. Sit up straight, make eye contact, don't hesitate. Say yes to every question. Project certainty about timelines, cost, and technical feasibility.

None of that correlates with delivery.

What correlates with delivery is a track record of being in the room when things went wrong and having a specific, unscripted answer about what happened next. Confidence that comes from experience sounds different from confidence that comes from preparation. The experienced version has edges. It includes qualifications. It says "we can do this, but here's what concerns us about the timeline" instead of "absolutely, no problem."

The distinction is subtle enough that most evaluation rubrics don't capture it. Rubrics reward clear answers, complete proposals, and demonstrated capability. They don't have a line item for "told us something we didn't want to hear before we signed the contract."

What's the one question that matters most?

At the end of the eval, after the slide deck is closed and the methodology has been summarized and the team bios have been reviewed, ask one more question.

What went wrong on your last engagement?

Not "what challenges did you face." Not "what lessons did you learn." What went wrong. The word matters. "Challenges" invites abstraction. "Wrong" demands specifics.

Watch their face. Watch whether the answer comes from a script or from a scar. Watch whether they name the failure or narrate around it. Watch whether they take ownership or distribute it across the client, the timeline, the requirements.

That answer is the interview. Everything before it was the pitch.