
Leading AI's Biomedical Revolution

Two Stanford Medicine leaders on AI’s potential, and what realizing it will require. By Courtney Chiang Dorman and Kate Larsen.

Roughly a half century before the launch of ChatGPT, artificial intelligence was being applied to healthcare. That groundbreaking work took place at Stanford University, where a computer program called MYCIN was developed in the early 1970s to aid in prescribing an appropriate antibiotic. It guided doctors through a series of questions, analyzed their responses and then ranked the most likely diagnoses and suggested antibiotic treatments—rudimentary by today’s standards, perhaps, but revolutionary in a time when VCRs and floppy disks represented cutting-edge tech.

It seems fitting, then, to explore AI’s future in healthcare at the place where that conversation started. That is what Brunswick did recently, interviewing two Stanford Medicine leaders: Dr. Lloyd Minor, a scientist and surgeon who is Dean of the Stanford School of Medicine and Vice President for Medical Affairs at Stanford University; and Priya Singh, Executive Vice President, Senior Associate Dean, and Chief Strategy Officer for Stanford Medicine.

Located in the heart of Silicon Valley, Stanford Medicine includes a medical school, adult and children’s hospitals, a healthcare system with more than 60 clinics and centers across the Bay Area, and dozens of research labs. That gives the organization an important role in shaping how AI is studied and taught, and how it is applied in hospitals and clinics.

Dr. Lloyd Minor is the Carl and Elizabeth Naumann Dean of the Stanford University School of Medicine, and Vice President for Medical Affairs at Stanford University.


Both Minor and Singh were optimistic about the technology’s potential and frank about the challenges to realizing it. And both broadened the conversation beyond the technology. As Singh told us, “AI’s impact on medicine won’t be determined by technology alone—it will be shaped by strategy.”

AI’s Promise: Extraordinary, but Not Guaranteed

In late 2023, Dean Minor wrote that AI would transform biomedicine in the 21st century as profoundly as antibiotics did in the 20th century. He told Brunswick he remains “just as optimistic” about the technology’s potential today.

“AI isn’t just an incremental advancement,” he said. “It has the potential to fundamentally reshape our understanding of disease, accelerate discovery and revolutionize how we deliver care.”

Minor was quick to highlight how the technology is already delivering on its promise. “Just last year, the Nobel Prize in Chemistry recognized breakthroughs in computational protein design and protein structure prediction—both areas where AI has played a transformative role,” he said. “What once took years of painstaking experimentation can now be achieved in mere months.”

Minor explained how AI-driven protein modeling is paving the way for novel, highly targeted treatments, “such as custom-designed enzymes that could break down plaques in neurodegenerative diseases like Alzheimer’s, or AI-engineered proteins that neutralize drug-resistant bacteria,” advances that have the potential to address some of medicine’s most pressing challenges.

Compared with such groundbreaking work, AI’s role in reducing paperwork may seem mundane. But, as Minor has written, the effects could be significant for both patients and care providers. A 2024 survey found that more than 90% of physicians regularly feel burned out, while separate research concluded that clinicians spend almost twice as much time on clerical work as they do face to face with patients.

Priya Singh is Chief Strategy Officer and Senior Associate Dean for Stanford Medicine.


AI’s potential spans industries, but the stakes are arguably the highest in healthcare. The consequences can be life or death. The data that trains and powers AI in healthcare is incredibly personal and sensitive.

“The real impact of AI in biomedicine will depend on how effectively we build the systems and strategies to harness it,” said Minor, who explained that AI is being deployed in ways that reflect—and sometimes exacerbate—existing systemic tensions.

“For example, it’s public knowledge that insurance companies are already using AI to accelerate claims denials, while providers are leveraging AI to contest those denials and justify care. When AI fuels administrative battles rather than improving patient outcomes, it highlights the risk of allowing technology to evolve without thoughtful oversight.”

Dean Minor is clear that AI is merely a tool, reflective of the structures and incentives that shape its use. He says that “fundamental changes in policy, payment models and care delivery will be necessary to ensure AI serves as a true enabler of better health, rather than just another layer of complexity in an already strained system.”

Setting a High Bar for Responsible AI

How does Stanford Medicine address those challenges? Most obviously and immediately: on its own campus, and in its own hospitals and clinics, which annually see more than 1.2 million outpatient visits.

“We recognize that the speed and effectiveness of AI adoption depend on how well we integrate it into our system—not just in research and clinical care, but in the way we train, support and empower our people,” Singh told Brunswick. “The organizations that successfully harness AI won’t just adopt new tools; they will rethink workflows, align incentives and prepare their workforce to engage with these advancements in meaningful ways.”

Priya Singh: “AI’s impact on medicine won’t be determined by technology alone—it will be shaped by strategy.”

One step Singh highlighted was Stanford Medicine’s FURM assessment—“fair, useful, reliable models”—which it developed to evaluate AI models intended for healthcare applications. The assessment involves conducting an ethical review and running simulations to gauge the model’s practical efficacy. It also uses financial projections to assess the model’s sustainability.

The goal, says Singh, is to ensure that the “AI used within our system continuously adds value for patients, care providers and the broader community.” But Singh notes that responsible AI can’t be achieved in isolation, which is why Stanford Medicine’s work extends far beyond its own campus and facilities.

“While individual organizations can set internal standards, AI’s impact on health requires shared guidelines, clear accountability and collaboration across health care, academia and industry,” she said. That is why Stanford Medicine is working with experts across Stanford University and beyond to facilitate these conversations. “We want to help set a high bar for responsible AI and lead efforts to create shared standards.”

One of those efforts debuted in 2023, when Stanford Medicine partnered with the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to launch RAISE Health (Responsible AI for Safe and Equitable Health).

The initiative, according to Singh, is helping shape the future of biomedicine by guiding the ethical use of AI across research, education and patient care. Helping launch those efforts alongside Dean Minor are HAI Co-Directors James Landay and Fei-Fei Li, who is often called the “Godmother of AI” for her pioneering work in computer vision.

Dr. Lloyd Minor: “AI has the potential to fundamentally reshape our understanding of disease, accelerate discovery and revolutionize how we deliver care.” 

Stanford Medicine’s work in AI extends beyond the Bay Area. It’s also a founding member of CHAI (Coalition for Health AI), a national consortium of health systems, government agencies and private-sector partners working to develop best practices and guardrails for AI adoption.

Additionally, earlier this year, Stanford Medicine partnered with the Alice L. Walton School of Medicine to host a conference in Bentonville, Arkansas, called “Think Health: AI for Healthy Communities.” The event explored how AI can transform community and rural healthcare—a keenly relevant subject in Arkansas, a state with one of the 10 lowest life expectancies in the country.

If convening is one of Stanford Medicine’s priorities, then another is engaging workforces in AI adoption. “One of the key steps we’ve taken is launching a workforce survey on artificial intelligence across our health delivery system,” Singh said. “AI is changing the way we work, and we need to understand how our people interact with these tools—what excites them, what challenges they face and what support they need to fully leverage AI in their roles.” 

That kind of work bridges the gap between technological progress and real-world impact—a gap MYCIN never quite managed to cross. In the 1970s and ’80s, it performed on par with human experts but remained more of a novelty than a revolution, never reaching clinical practice.

Fifty years later, the lesson remains: The real breakthrough won’t just be in what AI can do, but in how thoughtfully we choose to use it. 


Photographs courtesy of Stanford Medicine

The Authors

Courtney Chiang Dorman

Managing Partner of the Americas, San Francisco

Courtney works with organizations, boards and leadership teams on their most critical issues, including strategy, executive communications, reputation and crisis management.

Kate Larsen

Director, San Francisco

Kate is a Director in Brunswick's Health & Life Sciences group and draws on her 15 years of experience as a journalist to help clients manage their most pressing issues, solve problems, and tell meaningful stories that make an impact.