The case involved a 58-year-old male non-smoker who presented to an emergency department with chest tightness and mild exertional dyspnea. Three board-certified radiologists read the initial chest CT over a 36-hour period. All three signed off on it. None flagged anything requiring urgent follow-up.
Our model flagged a 6.2mm subpleural nodule in the right lower lobe with a nodule malignancy risk score of 0.71 — above the 0.65 threshold we use to trigger an urgent alert. The finding had been present on a prior CT from 14 months earlier, where it measured 4.8mm. That earlier study had also been read as unremarkable.
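To make the triage rule concrete, here is a minimal sketch of that thresholding step in Python. The function and label names are illustrative assumptions, not our production interface; only the 0.65 cutoff and the 0.71 score come from the case itself.

```python
URGENT_ALERT_THRESHOLD = 0.65  # risk-score cutoff described above

def triage_action(malignancy_risk_score: float) -> str:
    """Map a nodule malignancy risk score to a worklist action.

    Illustrative sketch only: names and labels are assumptions,
    not the deployed system's interface.
    """
    if malignancy_risk_score >= URGENT_ALERT_THRESHOLD:
        return "urgent-alert"
    return "routine"

print(triage_action(0.71))  # the nodule in this case -> "urgent-alert"
```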
This isn't a story about incompetent radiologists. All three were fellowship-trained, board-certified, and reading at an institution with a national reputation. It's a story about cognitive load, the limits of human attention during high-volume reads, and where an AI second reader adds genuine clinical value.
What the Evidence Says About Missed Findings
Radiology miss rates are higher than most clinicians expect. A 2019 meta-analysis published in European Radiology put the overall error rate across radiology subspecialties at approximately 3-5%, with perceptual errors — where the finding is visible but not consciously registered — accounting for 60-80% of mistakes. For lung nodules specifically, the miss rate on CT ranges from 19% to 26% depending on nodule size, location, and reader fatigue.
Subpleural nodules are particularly prone to being overlooked. They sit at the edge of the lung parenchyma, adjacent to the pleura, and on axial slices they can blend into the surrounding tissue in ways that fall outside automatic visual search patterns. In our validation cohort of 4,100 CT studies, subpleural nodules under 8mm were missed by at least one radiologist in 23.4% of cases.
The three-radiologist scenario described above is unusual — triple reads typically exist precisely to catch what single reads miss. The fact that all three failed to flag this nodule illustrates something important: errors in radiology often aren't random. They tend to cluster around specific anatomical locations, specific radiographic presentations, and specific workflow conditions. When one reader misses something, the next reader is more likely to miss it too, not less — especially if the prior read is visible in the workflow and implicitly anchors attention elsewhere.
How the AI Model Locates Findings Humans Skip
Our chest CT model uses a 3D convolutional architecture trained on 2.4 million annotated CT slices sourced from 11 academic centers across the United States and Europe. It processes each study as a volumetric dataset rather than a stack of 2D slices, which allows it to track nodule candidates across contiguous axial, coronal, and sagittal planes simultaneously.
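We don't detail the full architecture in this post, but the difference between slice-by-slice and volumetric processing is easy to show in code. Below is a deliberately tiny PyTorch sketch of a 3D convolutional block consuming an entire CT volume; every layer size here is an illustrative assumption, not our model's actual configuration.

```python
import torch
import torch.nn as nn

class TinyVolumetricBackbone(nn.Module):
    """Toy 3D convolutional backbone: operates on a whole CT volume
    (depth x height x width) so features can span adjacent slices,
    rather than scoring each 2D slice in isolation."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # kernel spans 3 adjacent slices
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                             # downsample along all three axes
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
        )

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, 1, depth, height, width), e.g. a resampled CT study
        return self.features(volume)

# Usage: a fake 64-slice, 128x128 volume stands in for a resampled CT study.
backbone = TinyVolumetricBackbone()
fake_ct = torch.randn(1, 1, 64, 128, 128)
print(backbone(fake_ct).shape)  # torch.Size([1, 32, 32, 64, 64])
```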
Subpleural detection specifically went through three targeted rounds of re-training. In early validation, our subpleural sensitivity lagged 8 points behind central nodule sensitivity. We addressed this by augmenting the training dataset with 340,000 additional subpleural cases, adjusting the model's attention mechanism weighting toward pleural margins, and adding a dedicated post-processing pass that specifically re-examines pleural zones. After those changes, subpleural sensitivity reached 93.8%, compared to 94.6% for central nodules — a 0.8-point gap that we continue working to close.
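The pleural-zone re-examination pass is easier to picture with a toy example. The sketch below (NumPy/SciPy, not our production code) computes each candidate's distance to the lung boundary from a binary lung mask and flags near-pleural candidates for a second scoring pass; the 5 mm cutoff, voxel spacing, and data layout are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def flag_subpleural_candidates(lung_mask, candidates_vox,
                               voxel_mm=(1.0, 0.7, 0.7), cutoff_mm=5.0):
    """Return indices of nodule candidates lying within `cutoff_mm`
    of the pleural surface, approximated as the lung-mask boundary.

    lung_mask: 3D boolean array, True inside the lungs.
    candidates_vox: list of (z, y, x) voxel coordinates of candidates.
    Illustrative sketch only: names, spacing, and cutoff are assumptions.
    """
    # Distance (in mm) from each in-lung voxel to the nearest out-of-lung voxel.
    dist_to_pleura = distance_transform_edt(lung_mask, sampling=voxel_mm)
    return [i for i, (z, y, x) in enumerate(candidates_vox)
            if dist_to_pleura[z, y, x] <= cutoff_mm]

# Toy usage: a crude "lung" block and two candidates, one near the edge.
mask = np.zeros((40, 60, 60), dtype=bool)
mask[5:35, 10:50, 10:50] = True
candidates = [(20, 30, 30), (20, 12, 30)]   # central vs. near-pleural
print(flag_subpleural_candidates(mask, candidates))  # -> [1]
```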
The model also runs a temporal comparison when prior studies are available in the PACS. Growth rate computation — even for small nodules where absolute growth is measured in fractions of a millimeter — adds meaningful predictive signal. The 4.8mm-to-6.2mm change in this case represented a volume doubling time of approximately 340 days, which falls within the range associated with malignant growth kinetics per Fleischner Society guidelines.
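For readers who want to reproduce the growth-rate arithmetic, the standard diameter-based approximation of volume doubling time is sketched below, assuming a roughly spherical nodule so that volume scales with the cube of diameter. Our model's estimate comes from volumetric segmentation rather than this approximation, so the two figures won't match exactly.

```python
import math

def volume_doubling_time_days(d1_mm: float, d2_mm: float, interval_days: float) -> float:
    """Volume doubling time from two diameter measurements, assuming a
    roughly spherical nodule so that volume scales with diameter cubed."""
    volume_ratio = (d2_mm / d1_mm) ** 3
    return interval_days * math.log(2) / math.log(volume_ratio)

# The case above: 4.8 mm -> 6.2 mm over roughly 14 months (~425 days).
# This diameter-based approximation gives ~384 days, in the same range as
# the model's segmentation-based estimate of roughly 340 days.
print(round(volume_doubling_time_days(4.8, 6.2, 425)))  # -> 384
```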
What This Does Not Mean
This case study shouldn't be read as a claim that AI will catch everything radiologists miss. It won't. Our model has its own blind spots, its own failure modes, and its own cases where confident predictions turn out to be wrong. We publish those data transparently. In our most recent external validation study across three independent hospitals, the false negative rate for sub-centimeter nodules was 11.3% — lower than the human benchmark, but not zero.
What the AI provides is a different kind of attention — one that doesn't fatigue over a 12-hour shift, isn't anchored by the prior read, and applies the same computational effort to the 200th study in a queue as to the first. That's a complement to radiologist judgment, not a replacement for it.
The patient in this case underwent PET-CT follow-up, which showed focal uptake consistent with stage IA non-small cell lung carcinoma. He's currently receiving treatment. The cancer was caught early enough that curative-intent resection remained an option.
The Clinical Workflow Implication
When we present this data to radiology department heads, the question we get most often isn't about the technology. It's about liability. Who is responsible when the AI flags something the radiologist didn't? Who is responsible when the AI misses something the radiologist also missed?
These are legitimate questions, and the regulatory framework around them is still evolving. What we do know is that the FDA's 510(k) pathway for AI-assisted triage tools treats the AI as a clinical decision support tool, not as a standalone diagnostic that replaces the radiologist's read. The radiologist remains the responsible party. The AI is a second set of eyes — one that's required to demonstrably improve detection performance before it's deployed, but that doesn't transfer liability away from the clinical team.
In practice, what we see in deployed systems is that radiologists use AI alerts as a prompt to re-examine specific regions rather than as a verdict. That's exactly the workflow integration we designed for. The goal was never to replace the read — it was to give the radiologist a reason to look again at the places most likely to harbor an unregistered finding.
In the case above, a second look would have been enough.