Saturday’s AI news cycle is stacked. From leaked frontier models to real-time voice AI to the future of coding workflows — here’s what you need to know.
Anthropic’s Leaked ‘Claude Mythos’ Model Is Real
Anthropic confirmed that a model discovered in an unsecured data store is genuine — and represents what insiders call a “step change” in reasoning capability. The leak-then-confirm playbook is unusual for a frontier lab, but with OpenAI preparing its own next-gen release, Anthropic apparently decided transparency beat silence.
📧 Want more like this? Get our free The Ultimate AI Tool Database: 200+ Tools Rated & Ranked — Downloaded 5,000+ times
Details on what Claude Mythos actually does remain scarce, but confirmed leaks from top labs don’t happen often. When they do, pay attention.
Anthropic Throttles Claude During Peak Hours — OpenAI Pounces
Anthropic engineer Thariq Shihipar announced that Claude’s 5-hour session limits will burn faster during weekday peak hours (5–11 AM PT). About 7% of users — mostly Pro subscribers — will hit caps they didn’t before. The recommendation? Run token-heavy jobs off-peak.
OpenAI wasted no time. The company signaled it’s removing caps on its own models to court frustrated Claude users. The pricing war for AI power users is heating up.
OpenAI Launches Codex Plugins — Slack, Figma, Notion, Gmail and More
OpenAI shipped first-class plugin support for Codex, bundling skills, app integrations, and MCP servers into shareable packages. Launch partners include Slack, Figma, Notion, and Gmail — over 20 integrations out of the gate.
The move closes a gap with Anthropic’s Claude Code (which already supports sub-agents in plugins) and Google’s Gemini CLI. Ars Technica noted this officially takes Codex beyond coding into full workflow-automation territory.
Google Ships Gemini 3.1 Flash Live — Real-Time Voice + Vision
Google released Gemini 3.1 Flash Live in developer preview via the Live API in Google AI Studio. The model targets low-latency multimodal conversations — real-time audio, video, and tool use — across 200+ countries.
This is Google’s answer to growing demand for conversational AI agents that can see, hear, and act simultaneously. Developers can start building real-time voice and vision agents in preview today.
Meta Open-Sources TRIBE v2 — Brain-to-AI Mapping
Meta released TRIBE v2, an open-source brain encoding model that predicts fMRI responses across video, audio, and text stimuli simultaneously. It’s neuroscience meets multimodal AI — moving beyond the traditional approach of mapping isolated brain functions to specific regions.
While not a consumer product, TRIBE v2 represents a significant step toward understanding how human cognition processes multiple information streams at once. The research community is already digging in.