The Sequence Radar #664: The Gentle Singularity Is Already Here
Plus a major AI move by Meta AI and new releases from Apple, Mistral and OpenAI.

Next Week in The Sequence:

In our series about evals, we dive into AGI benchmarks. The research section covers Meta AI's V-JEPA 2 model. Don't miss our opinion section, where we discuss the famous superposition hypothesis in AI interpretability. Engineering dives into the world of AI sandbox environments.

📝 Editorial: The Gentle Singularity Is Already Here

In a recent and quietly radical blog post titled "The Gentle Singularity," OpenAI CEO Sam Altman dropped a thesis that reads more like a plot twist than a prediction: the singularity isn't coming; it's already arrived. Forget the apocalyptic drama of rogue AIs and sci-fi rebellion; this version of the future is calm, smooth, and deeply weird. It's not a bang but a gradient, one we've been sliding down without realizing.

Altman lays out a timeline that feels less like prophecy and more like an insider's itinerary. Right now, AI systems are churning through cognitive labor with the kind of stamina that would make any grad student jealous. By 2026, he expects them to generate real, novel scientific discoveries; by 2027, robots should be reliably navigating the physical world. If that sounds wild, it's worth remembering that most of us didn't expect generative models to go from autocomplete toys to research partners in under five years either.

What makes this singularity "gentle" is its deceptive normalcy. People still walk their dogs, drink their coffee, and swipe through social media. But under the hood, AI is reshaping how work gets done. Coders are using copilots to write functions they barely touch. Scientists are fast-tracking ideas with AI-aided literature reviews and simulations. Designers are skipping mood boards in favor of generating full prototypes. It's not flashy, but it's everywhere.

One of the spiciest sections in Altman's essay explores recursive acceleration: systems that build better versions of themselves, powered by increasingly autonomous infrastructure. Imagine an intelligence supply chain that bootstraps itself: data centers run by robots, trained by AIs, serving other AIs. If intelligence becomes as cheap and abundant as electricity, the result isn't just economic growth; it's epistemological upheaval.

Of course, it's not all silicon utopias and self-replicating insight engines. Altman puts his finger on the two pressure points: alignment and access. In other words: Will these systems want what we want? And who gets to use them? His optimism about "good governance" is noble, but critics rightfully worry that current institutions are too slow and too fractured to manage this transition. A gentle singularity doesn't mean a safe one.

Altman's essay is part vision, part provocation: a call to update our mental models. No, the streets aren't full of humanoid robots (yet), but that doesn't mean the singularity is fiction. It just looks different than expected. As we move through this inflection point, the challenge isn't to brace for impact; it's to take the wheel. The future is arriving at walking speed, and it's asking if we're paying attention.

🔎 AI Research

📘 1. V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning
Lab: FAIR at Meta + Mila / Polytechnique Montréal
📘 2. EXPERTLONGBENCH: Benchmarking Language Models on Expert-Level Long-Form Generation Tasks with Structured Checklists
Lab: University of Michigan + Carnegie Mellon

📘 3. Debatable Intelligence: Benchmarking LLM Judges via Debate Speech Evaluation
Lab: IBM Research + Hebrew University of Jerusalem + AI2

📘 4. Thinking vs. Doing: Agents that Reason by Scaling Test-Time Interaction
Lab: Carnegie Mellon, UIUC, UC Berkeley, NYU, The AGI Company

📘 5. Institutional Books 1.0: A 242B Token Dataset from Harvard Library's Collections
Lab: Harvard Library

📘 6. The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Lab: Apple

🤖 AI Tech Releases

Magistral
Mistral released Magistral, its first reasoning model.

o3-Pro
OpenAI released o3-pro, a new version of its o3 model optimized for longer reasoning tasks.

Apple Releases
Apple announced a series of AI releases at WWDC25.

🛠 AI in Production

Ad Retrieval at Pinterest
Pinterest discusses the AI techniques used for ad retrieval on its platform.

📡 AI Radar