The FDA's official literature says a standard 510(k) review takes 90 days. Ours took 14 months the first time. The second submission — after we'd lived through the first — took 11. Neither of those numbers appears in any of the guidance documents we read before we started.

This piece is a candid account of what the clearance timeline actually looks like for AI-based medical imaging software, based on what we went through and what we've heard from other companies navigating the same process. It's not regulatory advice. It's operational reality.

The 90-Day Clock and What It Actually Measures

The 90-day standard review period for 510(k) submissions is real, but it measures something specific: the time from when FDA accepts a submission as administratively complete to when it issues a decision. It doesn't measure how long it takes to get to administrative acceptance, and it doesn't account for the "Additional Information" (AI, an unlucky acronym in this industry) requests that can pause the clock entirely.

In practice, for an AI/ML-based Software as a Medical Device (SaMD) submission, whether 510(k) or De Novo, the clock pauses each time FDA issues an AI request and resumes only after the company responds. Our first submission received two AI requests: one at day 47 and one at day 91. Each request placed the submission on hold, with up to 180 days to respond. Responding thoroughly to both consumed about six months of our team's capacity. The clock paused. The 14 months started making sense.
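
To make that arithmetic concrete, here's a minimal sketch in Python with hypothetical dates (ours differ, though the AI requests are placed at review days 47 and 91 to match our experience) showing how two on-hold periods turn a 90-day review clock into a 14-month calendar:

```python
from datetime import date

# Hypothetical milestone dates, for illustration only. Models the
# review clock: FDA days accrue only while the submission is not on
# hold for an Additional Information (AI) request.
events = [
    ("submitted",    date(2022, 1, 10)),
    ("ai_request_1", date(2022, 2, 26)),  # review day 47: clock stops
    ("response_1",   date(2022, 7, 1)),   # clock resumes
    ("ai_request_2", date(2022, 8, 14)),  # review day 91: clock stops again
    ("response_2",   date(2023, 1, 20)),  # clock resumes
    ("decision",     date(2023, 3, 5)),
]

fda_days, sponsor_days, on_hold = 0, 0, False
for (label, start), (_, end) in zip(events, events[1:]):
    if label.startswith("ai_request"):
        on_hold = True
    elif label.startswith("response"):
        on_hold = False
    span = (end - start).days
    if on_hold:
        sponsor_days += span
    else:
        fda_days += span

print(f"FDA review-clock days:  {fda_days}")      # 135
print(f"On-hold (sponsor) days: {sponsor_days}")  # 284
print(f"Calendar days, total:   {(events[-1][1] - events[0][1]).days}")  # 419
```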

More recent FDA data from the Digital Health Center of Excellence suggests that the median total time from first submission to final decision for SaMD products has been running between 298 and 412 days depending on complexity and whether the product is a novel technology or a predicate-based submission. Plan for 12-18 months and budget accordingly.

Predicate Selection: The Decision That Shapes Everything

510(k) clearance depends on demonstrating substantial equivalence to a legally marketed predicate device. For diagnostic AI, the predicate selection decision shapes almost everything that follows — what clinical performance data you need, what intended use language the FDA will accept, and how much scrutiny the submission receives.

We made a predicate selection error on our first submission that cost us months. We identified a cleared computer-aided detection (CAD) device that seemed like a good match by intended use, but its cleared indications were narrower than ours — it had been cleared for a specific patient population that our technology served differently. FDA flagged this in their first AI request and asked us to address the population discrepancy with supplemental clinical testing data we hadn't budgeted for.

The lesson: when you're building your predicate chain, map not just the intended use but the patient population, the clinical setting, the operator requirements, and the output format. A predicate that matches on headline function but diverges on any one of those dimensions will generate questions you don't want to answer at day 47 of a 90-day review.
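
On the second submission we treated that mapping as a literal checklist rather than a narrative. A minimal sketch of the idea, with hypothetical field names and example values (none of this is regulatory terminology):

```python
from dataclasses import dataclass, fields

@dataclass
class DeviceProfile:
    """The dimensions we map for every predicate candidate."""
    intended_use: str
    patient_population: str
    clinical_setting: str
    operator_requirements: str
    output_format: str

def divergences(ours: DeviceProfile, predicate: DeviceProfile) -> list[str]:
    """Dimensions where the predicate diverges from our device; each
    one is a likely Additional Information question."""
    return [
        f.name
        for f in fields(DeviceProfile)
        if getattr(ours, f.name) != getattr(predicate, f.name)
    ]

ours = DeviceProfile(
    intended_use="CAD for chest X-ray findings",
    patient_population="adults, all-comers",
    clinical_setting="hospital and outpatient",
    operator_requirements="radiologist review required",
    output_format="region markers with confidence score",
)
candidate = DeviceProfile(
    intended_use="CAD for chest X-ray findings",
    patient_population="adults, screening cohort only",  # the kind of mismatch that cost us
    clinical_setting="hospital and outpatient",
    operator_requirements="radiologist review required",
    output_format="region markers with confidence score",
)
print(divergences(ours, candidate))  # ['patient_population']
```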

Clinical Performance Testing Requirements

The FDA's guidance on clinical performance testing for AI/ML-based SaMD has evolved significantly over the past three years. The 2021 AI/ML Action Plan and the subsequent 2023 draft guidance on predetermined change control plans have added complexity but also more transparency about what reviewers are actually looking for.

For our chest X-ray indication, FDA required clinical testing built on a reader study design: a minimum of five board-certified radiologists as comparators, a minimum of 400 cases stratified so that at least 30% were disease-positive, and statistical power calculations supporting the primary performance endpoint. They also required that the test set be demographically diverse, specifically requesting evidence that our performance held across self-identified racial and ethnic groups, both sexes, and a range of imaging equipment manufacturers.
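
For intuition about how those case counts relate to power, here's a back-of-envelope check using the normal approximation for a proportion. The 0.90 expected sensitivity and 0.06 confidence-interval half-width are illustrative assumptions; an actual submission specifies a formal power analysis for the chosen endpoint and reader-study design:

```python
import math

def positives_needed(expected_sens: float, half_width: float,
                     z: float = 1.96) -> int:
    """Disease-positive cases needed for a 95% CI on sensitivity with
    the given half-width (normal approximation for a proportion)."""
    p = expected_sens
    return math.ceil(z ** 2 * p * (1 - p) / half_width ** 2)

# Illustrative numbers, not our actual endpoint:
print(positives_needed(0.90, 0.06))  # 97 positives

# 400 cases at 30% prevalence yields 120 positives, which buys a
# CI half-width of roughly:
n_pos = int(400 * 0.30)
print(round(1.96 * math.sqrt(0.90 * 0.10 / n_pos), 3))  # ~0.054
```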

That last requirement caught us off guard. Our internal validation dataset had been collected primarily at academic medical centers in the northeastern United States. Equipment diversity was limited, and demographic stratification hadn't been a primary design criterion when we built the dataset. We had to source additional testing cases from three community hospitals and a teleradiology archive, which pushed the clinical testing phase out by four months.
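
Once the expanded test set was assembled, auditing the stratification requirement was the easy part. A sketch of the kind of rollup we ran before resubmitting, assuming a case-level manifest with hypothetical column names:

```python
import pandas as pd

# Hypothetical manifest: one row per test case, with columns label
# (1 = disease-positive), prediction (0/1), and the stratification
# dimensions FDA asked about. All names are assumed for illustration.
df = pd.read_csv("test_set_manifest.csv")

positives = df[df["label"] == 1]
for dim in ["manufacturer", "race_ethnicity", "sex", "site"]:
    summary = positives.groupby(dim).agg(
        n=("prediction", "size"),
        sensitivity=("prediction", "mean"),  # valid because predictions are 0/1
    )
    print(f"\n-- {dim} --\n{summary}")
```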

The Predetermined Change Control Plan Requirement

Under the FDA's evolving framework for AI/ML-based SaMD, a submission can include a predetermined change control plan (PCCP) that describes what types of model updates can be made without a new 510(k) submission. This is genuinely useful for AI companies — it provides a path to improve models post-clearance without going back through the full review process for every update.

But writing a good PCCP is harder than it sounds. The plan has to specify the types of changes covered (retraining on new data, architecture modifications, performance threshold adjustments), the performance monitoring protocols that will govern when a change triggers the PCCP process versus requiring a new submission, and the specific performance boundaries outside which a new submission is always required.
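
To give a sense of the shape of those decisions (not the regulatory language), here's a heavily simplified, hypothetical representation; every name and number is illustrative, and the real PCCP is a prose document:

```python
# Hypothetical and heavily simplified; illustrative values only.
PCCP = {
    "covered_change_types": [
        "retraining_on_new_data",
        "architecture_modification",
        "performance_threshold_adjustment",
    ],
    "monitoring": {
        "metrics": ["sensitivity", "specificity"],
        "cadence": "per_release_and_quarterly",
    },
    # Boundaries outside which a new submission is always required:
    "performance_floor": {"sensitivity": 0.88, "specificity": 0.80},
}

def route_update(change_type: str, validated_metrics: dict) -> str:
    """Illustrative gate: PCCP process vs. new 510(k) submission."""
    if change_type not in PCCP["covered_change_types"]:
        return "new_submission"
    floor = PCCP["performance_floor"]
    if any(validated_metrics.get(m, 0.0) < v for m, v in floor.items()):
        return "new_submission"
    return "pccp_process"

print(route_update("retraining_on_new_data",
                   {"sensitivity": 0.91, "specificity": 0.83}))  # pccp_process
```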

We spent approximately six weeks iterating on our PCCP language with external regulatory consultants before we were comfortable submitting it. It's the part of the submission most companies underestimate because it feels like planning for the future, not getting cleared today. In practice, it's one of the most operationally valuable documents you'll produce.

Post-Clearance: The Clock That Actually Starts

Clearance isn't the end of regulatory engagement — it's the beginning of a different kind. The cleared device must be manufactured under a quality management system compliant with 21 CFR Part 820 (which FDA's 2024 QMSR final rule harmonizes with ISO 13485). Post-market surveillance is required. Adverse event reporting obligations begin on the date of clearance, not the date of first sale.

For AI devices that include a PCCP, the monitoring obligations are ongoing and must be documented. FDA can request post-market study data as a condition of clearance. We received a post-market study commitment as part of our clearance letter — a two-year real-world performance study across our initial deployment sites, with annual reports to FDA.
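
Operationally, the monitoring obligation reduces to something like a quarterly rollup of ground-truthed real-world performance checked against the PCCP boundaries. A minimal sketch, assuming a case-level deployment log with hypothetical column names and an illustrative 0.88 sensitivity floor:

```python
import pandas as pd

# Hypothetical deployment log with confirmed outcomes: study_date,
# site, label (1 = disease-positive), prediction (0/1).
log = pd.read_csv("deployment_log.csv", parse_dates=["study_date"])

positives = log[log["label"] == 1].set_index("study_date")
quarterly = positives.groupby(positives.index.to_period("Q")).agg(
    n=("prediction", "size"),
    sensitivity=("prediction", "mean"),
)
quarterly["below_pccp_floor"] = quarterly["sensitivity"] < 0.88
print(quarterly)  # the rollup that feeds the annual report to FDA
```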

The 14-month timeline was longer than we wanted. Looking back at it now, the process forced a level of rigor in our clinical testing design and documentation that made the product genuinely better. That's not a comfortable thing to say when you're living through month eight of a 90-day review. But it's true.