Field Notes: The Stage and the Press Release
FDA’s AI strategy is evolving faster than its guidance — and the gap is becoming impossible to ignore.
FDA is moving fast on internal AI, slow on external guidance, and is exposed to the same vendor risk it expects you to manage. The agency you’re submitting to is not the agency that wrote the rulebook you’re submitting against. Three signals from this week tell that story.
1. HALO went live, and Makary said the quiet part out loud
On May 6, FDA announced that HALO, the agency’s consolidated data platform, had absorbed more than 40 application and submission data sources, systems, and portals across all centers and gone live underneath Elsa 4.0. Same day, at the FDLI annual conference, Commissioner Makary said: “The one-day inspections are a screening inspection in low-risk facilities that our AI is identifying as low risk.” The official FDA press release on the same pilot said facility selection used “risk-based criteria such as product type, prior inspection outcomes, and operational characteristics.” It never used the word AI.
The gap between the conference stage and the press release is now structural. FDA’s official posture is bounded, risk-based, and validated. FDA’s senior leadership at industry events describes AI doing the sorting. Which data Elsa actually draws on to rank facilities remains analyst inference at this point; FDA has confirmed the output, not the inputs. Both versions of the agency are true. Plan your quality program around the conference version, because that’s the one your inspector is operating under.
Watch for the first published evaluation numbers from the One-Day pilot. Elizabeth Miller’s office named three metrics: inspection duration, escalation rates, and the usefulness of findings in guiding risk-based decision-making. When those numbers land, you’ll know whether one-day inspections become the FY27 default for surveillance.
Sources: FDA press releases, May 6, 2026 (fda.gov/news-events/press-announcements/fda-expands-ai-capabilities-and-completes-data-platform-consolidation and fda.gov/news-events/press-announcements/fda-launches-one-day-inspectional-assessments-strengthen-and-expand-oversight); Bloomberg Law via Fierce Pharma, May 6, 2026; RAPS coverage of FDLI fireside chat, May 2026.
2. The AI device guidance got demoted, and almost nobody noticed
The AI-Enabled Device Software Functions Lifecycle guidance, drafted January 2025 with comments closed April 2025, sits on CDRH’s “B-list” in the FY26 guidance agenda. B-list means “as resources permit.” In 2025, CDRH published only one B-list guidance total. Combine that with CDRH’s three-year-after-comment-close commitment, the staff losses in the 2025 RIF, the 10-for-1 deregulatory executive order chilling guidance publication, and the volume of substantive comments on generative AI that the draft barely addresses, and finalization slips to late 2027 at the earliest. April 2028 is the outside deadline.
If you’re submitting an AI-enabled device this year, you’re submitting against draft language that may shift before final. The December 2024 PCCP final guidance is the most stable AI-related document CDRH has, and it was written against the draft AI-DSF guidance. If the final AI-DSF guidance redraws the line between significant and non-significant device modifications, PCCPs already authorized could no longer cover their planned changes. That’s a bridging problem nobody is pricing in.
For investors: this is a tailwind for the current advisory and tooling cohort. A guidance final pushed to 2028 is a two-to-three-year revenue runway for everyone selling clarity FDA hasn’t published.
Sources: What the FDA? analysis, April 2026, steveilverman.substack.com/p/fdas-ai-device-guidance-is-stuck; CDRH FY2026 Guidance Agenda.
3. Elsa swapped its brain, and that tells you something about procurement
Elsa launched in June 2025 running on Anthropic’s Claude. After the February 27, 2026 Trump directive halting federal use of Anthropic, HHS phased Claude out in early March and FDA transitioned Elsa to Google Gemini. An internal FDA banner, leaked to NOTUS, told staff that Gemini was “already available in Elsa and will become the primary model going forward.” ChatGPT Enterprise is the approved alternative for other HHS tasks. A White House workshop in late April is now working on a path back for Anthropic’s newer Mythos model.
The flagship AI tool reviewing your submissions changed its underlying foundation model in 90 days, driven by a procurement decision that had nothing to do with model quality. That should reframe how you evaluate every quality and regulatory AI vendor in your own stack. Single-model dependencies are a sourcing risk. Vendors selling auditable reasoning trails and model-agnostic architecture have a defensible position that wrappers around one frontier model do not.
The same logic applies inside your QMS. If your AI tooling is locked to one model provider and that provider gets pulled, what happens to your validated workflow, your audit trail, and your inspection-readiness posture? That’s not a hypothetical anymore. FDA just lived it.
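The model-agnostic pattern above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual API: `ModelProvider`, `StubProvider`, and `AuditedModelRouter` are invented names, and the stub stands in for a real SDK call. The point it demonstrates is the one in the text: when every call goes through a router that keeps its own audit trail, a forced provider swap is a one-field change rather than a revalidation of the whole workflow.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Protocol


class ModelProvider(Protocol):
    """Anything that can answer a prompt; implementations are interchangeable."""
    name: str

    def complete(self, prompt: str) -> str: ...


@dataclass
class StubProvider:
    """Stand-in for a real vendor SDK adapter (Claude, Gemini, GPT, ...)."""
    name: str

    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor's API here.
        return f"[{self.name}] response to: {prompt}"


@dataclass
class AuditedModelRouter:
    """Routes every call through the active provider and records an audit
    trail, so a procurement-driven swap changes one field, not the workflow."""
    providers: dict[str, ModelProvider]
    active: str
    audit_log: list[dict] = field(default_factory=list)

    def complete(self, prompt: str) -> str:
        provider = self.providers[self.active]
        response = provider.complete(prompt)
        # Record which model produced which output -- the "auditable
        # reasoning trail" a single-model wrapper cannot reconstruct later.
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "event": "completion",
            "provider": provider.name,
            "prompt": prompt,
            "response": response,
        })
        return response

    def switch(self, name: str) -> None:
        if name not in self.providers:
            raise KeyError(f"unknown provider: {name}")
        # The swap itself is an audited event, preserving inspection-readiness.
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "event": "provider_switch",
            "from": self.active,
            "to": name,
        })
        self.active = name
```

Under this sketch, FDA’s Claude-to-Gemini transition is `router.switch("gemini")`: one audited call, with the log still showing which model produced which historical output. A wrapper hard-coded to one frontier model has no equivalent move.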
Sources: Politico, February 27, 2026, politico.com/news/2026/02/27/trump-orders-all-federal-agencies-to-stop-using-anthropic-00804517; Clinical Leader, March 12, 2026; NOTUS, March 2, 2026; Axios, April 29, 2026.
The agency is moving faster than its rulebook. Build for the agency, not the rulebook.

