Ever since last week's dramatic events at OpenAI, the rumor mill has been in overdrive about why the company's chief scientist, Ilya Sutskever, and its board decided to oust CEO Sam Altman. (Only to rehire him a few days later, and replace the only two women on its board with white men. Classy!)
While we still don't know all the details, there have been reports that researchers at OpenAI had made a "breakthrough" in AI that had alarmed staff members. Reuters and The Information both reported that researchers had come up with a new way to make powerful AI systems and had created a new model, called Q* (pronounced Q star), that was able to perform grade-school-level math.
According to the people who spoke to Reuters, some at OpenAI believe this could be a milestone in the company's quest to build artificial general intelligence, a much-hyped concept of an AI system that is smarter than humans. The company declined to comment on Q*.
Social media is full of speculation and excessive hype, so I called some experts to find out how big a deal any breakthrough in math and AI would really be.
Researchers have for years tried to get AI models to solve math problems. Language models like ChatGPT and GPT-4 can do some math, but not very well or reliably. We currently don't have the algorithms or even the right architectures to solve math problems reliably using AI, says Wenda Li, an AI lecturer at the University of Edinburgh. Deep learning and transformers (a kind of neural network architecture), which language models rely on, are excellent at recognizing patterns, but that alone is likely not enough, Li adds.
Math is a benchmark for reasoning, Li says. A machine that is able to reason about mathematics could, in theory, learn to do other tasks that build on existing information, such as writing computer code or drawing conclusions from a news article. Math is a particularly hard challenge because it requires AI models to have the capacity to reason and to really understand what they are dealing with.
A generative AI system that could reliably do math would need to have a firm grasp of the concrete definitions of particular concepts, which can get very abstract. A lot of math problems also require some level of planning over multiple steps, says Katie Collins, a PhD researcher at the University of Cambridge who specializes in math and AI. Indeed, Yann LeCun, chief AI scientist at Meta, posted on X and LinkedIn over the weekend that he thinks Q* is likely to be "OpenAI attempts at planning." Read more in my story here.
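Collins's point about multi-step planning can be made concrete. Even a grade-school word problem is a short chain of exact operations, and every step must be right for the final answer to be right, which is exactly what pure pattern matching does not guarantee. A minimal illustrative sketch (the problem and function name are our own invention, not from Q* or any OpenAI system):

```python
# A grade-school word problem decomposed into an explicit multi-step plan.
# Each intermediate step is a small, exactly checkable operation -- one
# wrong step anywhere and the final answer is wrong.

def solve_apples_problem(start: int, bought: int, friends: int) -> int:
    """Alice has `start` apples, buys `bought` more, then splits them
    evenly among `friends` friends. How many does each friend get?"""
    total = start + bought           # step 1: combine the quantities
    per_friend = total // friends    # step 2: divide evenly
    return per_friend

print(solve_apples_problem(3, 9, 4))  # 3 + 9 = 12, split 4 ways -> 3
```

A language model answers such a problem by predicting likely tokens; a system that actually plans would, in effect, construct and verify a chain of steps like this one.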
Last week, I spoke live on LinkedIn with my colleagues Niall Firth and Will Douglas Heaven about OpenAI's crazy week—and what it means for the future of AI. If you need a recap, you can catch up on what was said here.
"The Future of Global AI Governance" report offers a first-of-its-kind perspective on global AI Governance that combines the legal expertise of the world's largest law firm, Dentons, the AI acumen of VERSES and guidance on socio-technical standards from the Spatial Web Foundation. Find out how governments and policymakers could regulate AI systems that are on a path to regulating themselves.
To watch the webinar series and access both the report and executive summary, visit VERSES.AI/AI-Governance
Upcoming Webinar: Thursday, December 7th - 10AM PST / 1PM EST / 7PM CET
Deeper Learning
Four ways AI is making the power grid faster and more resilient
The power grid is growing increasingly complex as more renewable energy sources come online. Where once a small number of large power plants supplied most homes at a consistent flow, now millions of solar panels generate variable electricity. Increasingly unpredictable weather adds to the challenge of balancing demand with supply. To manage the chaos, grid operators are increasingly turning to artificial intelligence.
Faster, better: AI's ability to learn from large amounts of data and respond to complex scenarios makes it particularly well suited to the task of keeping the grid stable, and a growing number of software companies are bringing AI products to the notoriously slow-moving energy industry. Read more from June Kim here.
From king to exile to king again: The inside story of Sam Altman's whiplash week
A forensic telling of OpenAI's crazy week, right down to how many boba deliveries were made to OpenAI HQ during the intense negotiations. (The Information)
Australia launches world-first crackdown on deepfake porn
Under new industry standards, cloud-based storage services like Apple iCloud, Google Drive, and Microsoft OneDrive, as well as messaging services, will be required to take steps to remove unlawful content such as nonconsensual deepfake porn, child sexual abuse material, and terrorist content. (The Sydney Morning Herald)
AI-generated images are flooding stock image sites
Adobe Stock and Shutterstock are flooded with AI-generated images that are hard to distinguish from real news imagery. Often they are missing labels indicating the role of AI. (The Washington Post)
A top EU official speaks out against AI self-regulation
Thierry Breton, a European Commission member who has been leading tech regulation in the EU, has criticized the French AI company Mistral and Big Tech for lobbying for self-regulation in the AI Act. (La Tribune)
Is my toddler a stochastic parrot?
I loved this comic about language models and AI and learning to talk, and what it means to be human. (The New Yorker)