
From 15% to 99%: Why Manual QC Verification Caps Out

#ai#quality-control#aimqc#inspection#coverage#oil-and-gas

David Olsson

Most QC programs on major construction projects verify somewhere between 10 and 20 percent of the inspection evidence they are supposed to hold. Not because anyone planned it that way, but because verification at scale, done manually, is impossible within any normal project budget or schedule.


The math nobody does out loud

A large pipeline project might generate 40,000 inspection records across welding, coating, pressure testing, civil, and instrumentation disciplines. A QC coordinator can meaningfully review — open, check, link, and sign off — maybe 80 to 100 records per day, on a good day, with no interruptions.

At that rate, reviewing 40,000 records end-to-end would take one person roughly two years. The project closes out in eight months.
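The capacity arithmetic above can be checked directly. The figures below are illustrative midpoints (90 records per day, roughly 220 working days per year), not measured values:

```python
# Back-of-envelope check on manual review capacity.
# All figures are illustrative assumptions, not project data.
total_records = 40_000
records_per_day = 90          # midpoint of the 80-100 per day estimate
working_days_per_year = 220   # rough figure, net of weekends and leave

review_days = total_records / records_per_day
review_years = review_days / working_days_per_year

print(f"{review_days:.0f} working days ≈ {review_years:.1f} years")
# -> 444 working days ≈ 2.0 years
```

Against an eight-month closeout window, the shortfall is not marginal; it is a factor of three even before interruptions and rework are counted.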

The math does not work. It has never worked. So organizations adapt by doing what humans always do when a task is impossible: they sample. They check the high-risk items. They trust the contractors they know. They lean on the QC leads who are thorough and hope the others are good enough.

That is a spot-check model. It is rational given the constraints. It is also a ceiling.

Why the ceiling matters

A 15% verification rate does not mean 85% of records are wrong. It means 85% of records have not been confirmed to be right. That is a different problem — and it surfaces late.

At turnover, an operator or regulator does not ask "did you check the risky stuff?" They ask for a complete, verifiable package. Every outstanding question at that point is a negotiation, a delay, or a rework cost. The lower the coverage, the more those questions pile up.

The gap between what was inspected on site and what can be demonstrated to have been inspected is the real risk. High coverage is not a nice-to-have. It is what determines whether a turnover package is defensible.

Human attention does not scale — but triage does

The solution is not to hire more coordinators. It is to change what coordinators are spending their attention on.

AI-assisted workflows do not replace QC judgment. They handle the parts of the process that do not require it: checking whether a document is present, whether fields are populated, whether the record links to the right ITP line item, whether the dates are plausible, whether the inspector is qualified for the work type.

That classification and triage layer — applied at machine speed across all 40,000 records — can flag the subset that actually needs a human to look at it. Instead of sampling randomly or by gut feel, coordinators spend their time on the records that have a problem.

Coverage goes from 15% to effectively full. The number of coordinators does not change. What changes is what they are doing with their hours.

The precondition

None of this works without the structural layer from the previous post. AI triage requires data that is linked and typed — records connected to ITP line items, NCRs tied to inspection events, ITRs referencing the upstream work they close out. Machine-scale classification of unstructured PDFs in a shared drive is a research project, not a production system.
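The linkage requirement can be pictured as a minimal typed graph. The entity and field names below are illustrative, not a real schema:

```python
# Minimal sketch of the typed, linked structure AI triage depends on.
# Entity and field names are illustrative, not an actual AIMQC schema.
from dataclasses import dataclass

@dataclass
class ITPLineItem:
    item_id: str
    description: str

@dataclass
class InspectionEvent:
    event_id: str
    itp_item: ITPLineItem        # record connected to its ITP line

@dataclass
class NCR:
    ncr_id: str
    inspection: InspectionEvent  # NCR tied to the inspection that raised it

@dataclass
class ITR:
    itr_id: str
    closes_out: InspectionEvent  # ITR referencing the upstream work

line = ITPLineItem("ITP-7.2", "Weld visual inspection")
event = InspectionEvent("W-0001", line)
ncr = NCR("NCR-014", event)
itr = ITR("ITR-220", event)

# Because the links are explicit, traversal is a field access,
# not a text search across unstructured PDFs.
print(ncr.inspection.itp_item.item_id)  # -> ITP-7.2
```

Once the links exist as data, the triage checks above become cheap lookups; without them, every check is a document-reading problem.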

The combination of structured data and AI-assisted verification is what closes the gap between 15% and 99%. Neither alone is enough.


David Olsson is CTO at AIMQC. Contact: dolsson@aimqc.com

