Exclusive-Stanford AI leader Fei-Fei Li building ‘spatial intelligence’ startup
By Katie Paul, Anna Tong and Krystal Hu
NEW YORK/SAN FRANCISCO (Reuters) – Prominent computer scientist Fei-Fei Li is building a startup that uses human-like processing of visual data to make artificial intelligence (AI) capable of advanced reasoning, six sources told Reuters, in what would be a leap forward for the technology.
Li, considered a pioneer in the AI field, raised money for the company in a recent seed funding round. Investors included Silicon Valley venture firm Andreessen Horowitz, three of the sources said, and Radical Ventures, a Canadian firm she joined as a scientific partner last year, according to two others.
Spokespeople for Andreessen Horowitz and Radical Ventures declined to comment. Li did not respond to requests for comment.
Li is widely known as the “godmother of AI,” a title derived from the “godfathers” moniker often used to refer to a trio of researchers who won the computing world’s top prize, the Turing Award, in 2018 for their breakthroughs in AI technology.
In describing the startup, one source pointed to a talk Li gave at the TED conference in Vancouver last month, in which she said the cutting edge of research involved algorithms that could plausibly extrapolate what images and text would look like in three-dimensional environments and act upon those predictions, using a concept called “spatial intelligence.”
To illustrate the idea, she showed a picture of a cat with its paw outstretched, pushing a glass toward the edge of a table. In a split second, she said, the human brain could assess “the geometry of this glass, its place in 3D space, its relationship with the table, the cat and everything else,” then predict what would happen and take action to prevent it.
“Nature has created this virtuous cycle of seeing and doing, powered by spatial intelligence,” she said.
Her own lab at Stanford University was trying to teach computers “how to act in the 3D world,” she added, for example by using a large language model to get a robotic arm to perform tasks like opening a door and making a sandwich in response to verbal instructions.
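Li has not described her lab’s system in technical detail, but a common pattern in recent robotics research is to use a language model as a high-level planner that turns a verbal instruction into a sequence of primitive skills for a lower-level controller to execute. The sketch below is purely illustrative of that pattern; `plan_with_llm`, `ArmController` and the skill names are hypothetical stand-ins, not code from Li’s lab or any real robotics library.

```python
# Illustrative sketch only: an LLM-as-planner loop for a robotic arm.
# plan_with_llm, ArmController and the skill strings are hypothetical
# stand-ins, not code from Li's lab or any real library.

def plan_with_llm(instruction: str) -> list[str]:
    """Stand-in for a language-model call that maps a verbal
    instruction to an ordered list of primitive skills."""
    canned_plans = {
        "open the door": ["move_to(handle)", "grasp(handle)",
                          "pull(door)", "release()"],
    }
    return canned_plans.get(instruction.lower(), [])

class ArmController:
    """Hypothetical low-level controller that runs one skill at a time."""
    def execute(self, skill: str) -> None:
        print(f"executing: {skill}")  # a real system would drive motors here

def run(instruction: str) -> None:
    arm = ArmController()
    for skill in plan_with_llm(instruction):
        arm.execute(skill)

run("Open the door")
```

In a working system, the canned dictionary would be replaced by an actual language-model call and each skill string would map to real motor commands; the point of the pattern is that the language model handles the verbal instruction while the controller handles the physics.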
Li made her name in the AI field by developing a large-scale image dataset called ImageNet that helped usher in a generation of computer vision technologies that could identify objects reliably for the first time.
She co-directs Stanford’s Human-Centered AI Institute, which focuses on developing AI technology in ways that “improve the human condition.” In addition to her academic work, Li led AI at Google Cloud from 2017 to 2018, served on Twitter’s board of directors and has done stints advising policymakers, including at the White House.
Li has lamented a funding gap on AI research between a well-resourced private sector on one side and academics and government labs on the other, calling for a “moonshot mentality” from the U.S. government to invest in scientific applications of the technology and research into its risks.
Her Stanford profile says she is on partial leave from the beginning of 2024 to the end of 2025. Among the research interests listed on her profile are “cognitively inspired AI,” computer vision and robotic learning.
On LinkedIn, she lists her current job as “newbie” at “something new,” starting in January 2024.
By jumping into the startup world, Li is joining a race among the hottest AI companies to teach their algorithms common sense in order to overcome the limitations of current technologies like large language models, which have a tendency to spit out nonsensical falsehoods in the midst of otherwise dazzling human-like responses.
Many say this ability to “reason” must be established before AI models can achieve artificial general intelligence, or AGI, referring to a threshold at which the system can perform most tasks as well as or better than a human.
Some researchers believe they can improve reasoning by building bigger and more sophisticated versions of the current models, while others argue the path forward involves the use of new “world models” that can ingest visual information from the physical environment around them to develop logic, replicating how babies learn.
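Reuters’ sources did not specify either camp’s methods, but the “world model” idea can be sketched in miniature: a model takes its current observation and a candidate action, predicts the next observation, and learns from how wrong that prediction turns out to be, loosely echoing how babies refine their expectations about the physical world. The toy example below, with a linear predictor and invented dynamics (`true_step` is a stand-in for the environment), is a conceptual sketch only, not any published system.

```python
# Toy sketch of the "world model" idea: predict the next observation
# from (current observation, action) and learn from the prediction error.
# Purely conceptual; the linear model and synthetic dynamics are invented.
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim = 8, 2
W = rng.normal(scale=0.1, size=(obs_dim, obs_dim + act_dim))  # predictor weights
lr = 0.01

def true_step(obs, act):
    """Stand-in for the physical environment's actual dynamics."""
    return 0.9 * obs + 0.1 * np.tanh(obs.sum()) + 0.05 * act.sum()

obs = rng.normal(size=obs_dim)
for step in range(1000):
    act = rng.normal(size=act_dim)           # random exploratory action
    x = np.concatenate([obs, act])
    pred = W @ x                             # model's guess at the next state
    nxt = true_step(obs, act)                # what the world actually does
    err = pred - nxt                         # prediction error drives learning
    W -= lr * np.outer(err, x)               # gradient step on squared error
    obs = nxt

print("final prediction error:", float(np.mean(err ** 2)))
```

The design choice the “world model” camp is betting on is exactly this loop: rather than training only on text, the system forms expectations about what it will see next and corrects itself when reality disagrees.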
(Reporting by Katie Paul in New York and Anna Tong and Krystal Hu in San Francisco; Editing by Kenneth Li and Alistair Bell)