The study came out quietly, but the finding was loud: AI models, tested against real emergency room cases, delivered more accurate diagnoses than two human physicians working the same scenarios.
The researchers were from Harvard. The setting was emergency medicine — one of the highest-stakes, most time-compressed environments in all of healthcare. And the result was not close.
This isn't a "someday AI will help doctors" story. This is an "AI already outperforms doctors at a specific, high-value task" story. And for New York City, that matters more than almost anywhere else.
What the Study Actually Found
The Harvard team gave both AI models and licensed emergency room physicians the same set of clinical cases — patient histories, symptoms, test results, the full workup. They measured diagnostic accuracy. The AI won.
This type of study is part of a broader wave of clinical AI research over the past two years. Multiple peer-reviewed papers have now found that large language models and specialized diagnostic AI tools perform at or above attending-physician level in fields including radiology, dermatology, ophthalmology, and now emergency medicine. Each one gets a brief news cycle, then gets filed away as "interesting."
The pattern is no longer interesting. It's a structural shift.
Why NYC Is the Most Affected City in the Country
New York City runs the largest municipal hospital system in the United States. NYC Health + Hospitals operates 11 hospitals and more than 70 patient care sites across all five boroughs. The city's emergency rooms logged well over 1.3 million visits last year. Bellevue, Kings County, Lincoln, Jacobi, Elmhurst — these are among the busiest ERs in the country, handling everything from trauma to psychiatric crises to ordinary illness in populations that are often uninsured, non-English-speaking, and presenting with conditions complicated by delayed care.
These are also the ERs most likely to be understaffed at 3 AM on a Tuesday.
That's the real context for this Harvard study. The question isn't whether AI should replace ER doctors (it isn't replacing anyone right now) but whether AI-assisted triage and diagnosis could meaningfully improve outcomes in the exact environments where diagnostic errors are most likely: overwhelmed ERs, under-resourced facilities, overnight shifts.
The data suggests yes.
Who Wins From This
Patients — particularly those who are most vulnerable to diagnostic errors. Studies have consistently shown that uninsured patients, Black and Hispanic patients, and patients with limited English proficiency receive measurably less accurate diagnoses in ER settings. An AI layer that surfaces accurate differential diagnoses regardless of the patient in front of it is, in practice, an equity tool.
Healthcare AI companies — the companies building diagnostic AI tools for clinical settings are already worth tens of billions of dollars. Studies like this one accelerate FDA review, accelerate hospital procurement, and strengthen the VC case for continued funding. Expect continued consolidation in this space.
Medical education — the entire question of what doctors need to know and how they need to be trained is shifting. Diagnostic reasoning, the core cognitive skill that medical school has prized for a century, is now something AI can replicate with measurable accuracy. That doesn't make doctors obsolete. But it changes what makes a great doctor.
Who Faces Disruption
Emergency medicine physicians — not replacement in the short term, but meaningful role compression over the next five to ten years. Diagnostic AI will become a required layer in ER workflow, not an optional one. The ER physician who resists learning to work with AI tools will be the equivalent of the radiologist who refused to learn PACS in the 1990s.
Diagnostic-heavy specialties — radiology has already been hit hard. Pathology, dermatology, and ophthalmology are following. Emergency medicine is the next domino. Physicians entering medical school right now will graduate into a fundamentally different clinical environment.
Hospital administrators and insurers — the arrival of AI diagnostics creates a new liability landscape. If an AI-assisted diagnosis is available and a hospital doesn't use it, and the human physician misses something the AI would have caught — that's a new category of malpractice exposure. Legal frameworks haven't caught up. They will.
What This Means for NYC Residents
If you're a patient, you don't have to do anything right now — but you should know this technology is coming into the hospitals you use. Ask your ER team if they're using any AI-assisted diagnostic tools. You're allowed to ask. You're allowed to know.
If you're a healthcare worker in NYC — nurse, physician assistant, resident, attending — this is not a threat to ignore or dismiss. The most effective clinical professionals over the next decade will be the ones who learn to use AI as a force multiplier rather than resist it as an adversary. That means getting familiar with the tools before your hospital mandates them.
If you run a private medical practice in one of the five boroughs — a primary care office, an urgent care clinic, a specialist practice — AI-assisted diagnostic tools are becoming available to practices of every size. The cost is dropping. The accuracy is documented. The question is not whether to adopt, but when.
A Note on What This Study Doesn't Say
AI does not have bedside manner. It does not notice that a patient is trembling or that something about the story doesn't add up clinically. It cannot call a patient's daughter to clarify the medication history. It cannot hold someone's hand.
Diagnosis is one of many things a physician does. It's a critical one. And it's the one where AI now has a documented, peer-reviewed edge.
The coming decade in medicine is not AI vs. doctors. It's AI + doctors vs. disease, delay, and diagnostic error. The hospitals and practices that figure out how to build that combination effectively will have significantly better patient outcomes than the ones that don't.
For NYC — which has the patient volume, the diversity, and the resource pressure to make AI adoption both urgent and impactful — that future is arriving faster than most people realize.
The Metro Intel covers AI developments, real estate, and local business in New York City. Sign up at themetrointel.com.