Welcome back to The Algorithm! It's done. It's over. Two and a half years after it was first introduced—after months of lobbying and political arm-wrestling, plus grueling final negotiations that took nearly 40 hours—EU lawmakers have reached a deal over the AI Act. It will be the world's first sweeping AI law.
The AI Act was conceived as a landmark bill that would mitigate harm in areas where using AI poses the biggest risk to fundamental rights, such as health care, education, border surveillance, and public services, as well as banning uses that pose an "unacceptable risk."
"High risk" AI systems will have to adhere to strict rules that require risk-mitigation systems, high-quality data sets, better documentation, and human oversight, for example. The vast majority of AI uses, such as recommender systems and spam filters, will get a free pass.
The AI Act is a major deal in that it will introduce important rules and enforcement mechanisms to a hugely influential sector that is currently a Wild West. As my colleague Tate Ryan-Mosley wrote on Friday, it was a seriously tough law to get all EU countries to agree on. So what's in it?
Google DeepMind's new Gemini model looks amazing—but could signal peak AI hype
Gemini is Google's biggest AI launch yet—its push to take on competitors OpenAI and Microsoft in the race for AI supremacy. There is no doubt that the model is pitched as best-in-class across a wide range of capabilities—an "everything machine," as one observer puts it.
It's a big step for Google, but not necessarily a giant leap for the field as a whole. Google DeepMind claims that Gemini outmatches GPT-4 on 30 out of 32 standard measures of performance. And yet the margins between them are thin. What Google DeepMind has done is pull AI's best current capabilities into one powerful package. To judge from demos, it does many things very well—but few things that we haven't seen before. For all the buzz about the next big thing, Gemini could be a sign that we've reached peak AI hype. At least for now. Read more from me and Will Douglas Heaven here.
How AI assistants are already changing the way code gets made
AI coding assistants such as Copilot are here to stay, and they're upending an entire profession by giving people new ways to perform old tasks. But just how big a difference they'll make is still unclear. (MIT Technology Review)

Make no mistake—AI is owned by Big Tech
If we're not careful, Microsoft, Amazon, and other large companies will leverage their position to set the policy agenda for AI, as they have in many other sectors, argue Amba Kak, Sarah Myers West, and Meredith Whittaker. (MIT Technology Review)

These robots know when to ask for help
Large language models combined with confidence scores help them recognize uncertainty. That could be key to making robots safe and trustworthy. (MIT Technology Review)

More details have emerged about the OpenAI drama
The Washington Post reports that OpenAI leaders warned of abusive behavior by Sam Altman before he was ousted, while the New York Times details the "years of simmering tensions" between the organization's leaders.