5 Red Flags in Technology Due Diligence
Published 30 March 2026 · Peter Rossi
I have been involved in technology due diligence across a range of PE and VC transactions, from early-stage SaaS businesses to established platforms. After a while you start to notice that the same warning signs appear across very different deals.
These are not always the things that make it onto a formal risk register. Sometimes they are the subtle patterns you pick up from a conversation, or the absence of something you would expect to find.
Here are five that I keep coming back to.
1. The Engineering Team Has Never Had a Real Conversation About Technical Debt
Technical debt exists in every codebase of any meaningful age. That is not the problem. The problem is when a team has never actually sat down, looked at it honestly, and decided what to do about it.
You can usually tell fairly quickly whether a team has done this. A team that manages their technical debt has a view of it. They can tell you roughly where it lives, how it got there, and what they would need to fix it. They might not have a fully costed backlog, but they have a working understanding.
A team that has not tends to describe their codebase as "solid" or "a bit legacy in places but nothing major." When you push on what "legacy" means, the picture becomes less clear. Specific questions about particular components get vague answers. The CTO pivots to talking about future plans.
That pattern worries me more than a CTO who opens with "look, we have got three areas that need serious investment." Honesty about problems is recoverable. Unawareness of them is much harder to work with.
2. Security Is Treated as a Compliance Checkbox
I do not expect every business I assess to have a mature security programme. Most do not. What I do expect is that the people running the engineering function take it seriously and understand their own exposure.
The checkbox version of security looks like this: there is a policy document somewhere, a pentest was done eighteen months ago, and MFA is switched on for the main product. If you ask whether the pentest findings were remediated, the answer gets woolly. If you ask about secrets management, it turns out API keys are being passed around in a shared spreadsheet.
This matters because security debt compounds. Issues that were low priority two years ago do not disappear. They become higher risk as the business grows, as it handles more customer data, and as it becomes a more attractive target.
What I am looking for is a team that can describe their security posture honestly, knows where the gaps are, and has made conscious trade-offs rather than simply never getting around to it. There is a meaningful difference between "we know our access controls are not where they should be, here is why and here is our plan" and "security is fine, we have got a pentest."
3. The Data Room and the Technical Reality Don't Match
Most businesses going through a DD process have spent time preparing their materials. That is entirely normal. The issue is when the technical narrative in the data room is materially inconsistent with what you find when you look at the actual system.
I have seen cases where the management presentation described a clean microservices architecture, and the reality was a large monolith with a few services bolted on. Neither is necessarily a problem, but the gap between the presentation and the reality is.
When you find these gaps, it is worth understanding how they happened. Sometimes it is just the way the founders think about what they have built, which is forgivable. Sometimes it reflects a pattern of presenting well rather than being straight, which is a different issue entirely.
The engineering team usually gives you the honest version, not because they are trying to undermine the deal, but because they talk about what they work with every day. If you can have a frank conversation with the engineers rather than only with the CTO or CEO, you tend to get a more accurate picture quickly.
4. The Key Person Risk Has No Mitigation Plan
I have seen transactions where the technology was essentially the product of one very talented individual, and that individual had no intention of staying once the deal closed. The acquirer knew about the key person risk in the abstract but had not thought through what it actually meant for the business.
Key person risk in technology is a spectrum. At one end, there is a team where two or three people hold more institutional knowledge than others, which is manageable. At the other end, there is a single engineer who wrote every meaningful system, manages all the infrastructure, and is the only one who knows how to deploy. That is a different conversation.
The question is not just whether key person risk exists. It is whether the business has thought about it, what has been done to reduce it, and what the mitigation plan is if the key person leaves. Good documentation, shared operational knowledge, and properly written runbooks all reduce the risk. Their absence concentrates it further.
In a deal context, this can translate directly into deal structure. Retention arrangements, earn-outs, and handover periods all exist partly to manage this risk. But they work much better when you have identified it early rather than when it surfaces after close.
5. The Architecture Can't Be Explained Clearly
This one is more subtle, but I find it consistently reliable.
A team that understands what they have built can explain it clearly at a system level. Not at a component level, not by listing the technologies they use, but at the level of how the pieces fit together, where the data flows, and why they made the structural choices they did.
When a technical leader struggles to give a coherent system-level explanation, it usually means one of two things. Either the architecture has grown organically without design intent, and nobody has ever stepped back far enough to understand the whole thing. Or the person presenting does not have a clear enough view of the system to explain it, which raises questions about how technical decisions get made.
Neither of these is necessarily a deal-breaker. An organic architecture can be rationalised over time. A CTO who is not deeply hands-on might still be a good leader. But both are worth understanding properly before close, because they have real implications for what integration and improvement will require.
What to Do When You Find These
Finding red flags does not mean a deal falls apart. It means you have got something to work with.
The most useful thing you can do with a red flag is quantify it. "The architecture needs refactoring" is less useful than "the estimated cost to rationalise the architecture is £X and would take roughly Y months." The first is a concern. The second is something you can factor into a valuation or a post-closing plan.
The context also matters. A fast-growing business that has intentionally prioritised product over process might have operational gaps that are entirely understandable given what they were focused on. A more mature business with fewer excuses but the same gaps is a different situation.
And some things really are deal-breakers: material security vulnerabilities in a business handling sensitive data, IP that is not cleanly owned, or a level of key person dependency that would leave the acquirer with something they cannot run after close.
Most of the deals I have been involved in surfaced at least one significant issue through the technical assessment. In most cases, those deals still happened. The difference between a problem that kills a deal and one that changes its terms is usually whether it was found early enough to be managed.
Peter Rossi is a fractional CTO and technology due diligence advisor working with PE-backed companies and growth-stage businesses across the UK. Get in touch if you are running a deal process and want an independent technical view.
Book a conversation