Fei‑Fei Li has long been a beacon in artificial intelligence: her work has helped machines see; now, she’s pushing them to understand the world in 3D, to act sensibly, and to do so with humans—not just data—at the center.
From ImageNet to Human‑Centered AI
Li first came to widespread fame through ImageNet, a massive dataset of labeled images designed to benchmark and train computer vision models. It was a keystone in the deep learning revolution, helping machines reliably identify objects in images.
Over the years, her interests have expanded beyond making machines see. She co‑founded AI4ALL, a nonprofit focused on bringing young people from underrepresented communities into AI, championed ethical AI, and helped build Stanford’s Institute for Human‑Centered AI (HAI), which aims to ensure AI augments humanity rather than replacing or harming it.
The Latest: Spatial Intelligence & World Labs
In 2024, Li co‑founded World Labs, an AI startup focused on giving machines spatial intelligence—the ability to perceive, reason about, and act in three‑dimensional physical space.
- World Labs raised US$230 million in initial funding, with major investors backing the vision.
- The startup’s goal is to move beyond today’s generative models toward agents that understand the geometry and physics of real-world spaces: opening doors, avoiding obstacles, manipulating objects, all based on verbal instruction.
- Part of the technical strategy involves combining synthetic and real‑world data and rethinking architectural choices to embed more “common sense” about space.
Li is partially on leave from her Stanford duties to focus on this, but remains Co‑Director at HAI.
Honors, Voice, and Ethical Imperative
Her work continues receiving major recognition:
- In 2025, she was awarded the Queen Elizabeth Prize for Engineering for her pioneering contributions, especially the creation of ImageNet and its demonstration of the importance of high‑quality datasets.
- Her memoir, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI, intertwines her personal story with the evolution of AI itself.
She is also vocal about the risks of AI: hallucinations, bias, disinformation, surveillance, and more. She repeatedly argues that technology is shaped by the people behind it, and that governance, institutions, and inclusive social values must be built into AI’s development.
Challenges & Tensions
The path Li is helping to chart is not without obstacles.
- Funding gap: Li has spoken about the imbalance between AI research resources in private companies and those in academia or public labs, particularly in areas involving long‑term risk, ethics, and basic research.
- Bridging perception and action: spatial intelligence is harder than generating images or text, because acting in the real world involves noise, unpredictability, and safety constraints. Translating lab research into reliable, safe systems is a major hurdle.
- Ethics & regulation: Li has called for guardrails and strong governance, but how to regulate AI, especially when models are developed across borders or within private industry, remains a thorny issue.
Visions for the Future
What does Fei‑Fei Li see ahead?
- AI that doesn’t just generate content but understands space and context, and can act meaningfully in the physical world.
- Human‑centered AI: systems that augment human capacity, preserve dignity, and reflect diverse lived experiences.
- Broadening participation in AI: programs like AI4ALL to ensure underrepresented groups have both a seat at the table and influence over AI’s development.
- Interdisciplinary integration: merging computer science with the humanities, law, public policy, ethics, and the social sciences.
