Anthropic lets Claude control computers in Mac-only research preview
AI Agent · Product Release
Anthropic has launched a computer-control capability for Claude, offered as a research preview to Claude Pro and Max subscribers and currently limited to macOS. The model can execute tasks through app connectors such as Google Calendar and Slack, or simulate keyboard and mouse actions when no dedicated tool is available. The system requires user authorization, can be interrupted at any time, and by default restricts sensitive scenarios and applies safeguards against prompt injection. It can also work with the mobile Dispatch app for remote task handoff, though Anthropic warns that complex workflows may still fail and that the feature is not yet suitable for highly sensitive use cases.
Mistral releases Voxtral TTS, a 4B model with open weights and API access
Speech Model · Open Source · API
Mistral has released Voxtral TTS, its first text-to-speech model, with 4 billion parameters, support for 9 languages, and a focus on low latency, multilingual generation, and zero-shot voice cloning. The company says the model can adapt to a new voice using just 3 seconds of reference audio, and it is already available in Mistral Studio and Le Chat, with API pricing set at $0.016 per thousand characters. Public materials indicate typical latency of about 70 milliseconds and a real-time factor of around 9.7x. Voxtral TTS can be paired with Voxtral Transcribe for an end-to-end voice pipeline and also supports local deployment, filling out Mistral’s speech-generation product stack.
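The published pricing and real-time factor translate into simple back-of-the-envelope estimates. A minimal sketch, using only the figures quoted above ($0.016 per thousand characters, ~9.7x real-time factor); the helper names are ours, not Mistral's API:

```python
# Rough cost and throughput estimates for Voxtral TTS,
# based on the publicly stated figures.

PRICE_PER_1K_CHARS = 0.016  # USD, per the announced API pricing
REAL_TIME_FACTOR = 9.7      # audio seconds generated per wall-clock second

def synthesis_cost(num_chars: int) -> float:
    """Estimated API cost in USD for a given character count."""
    return num_chars / 1000 * PRICE_PER_1K_CHARS

def generation_time(audio_seconds: float) -> float:
    """Estimated wall-clock seconds to generate a clip of a given length."""
    return audio_seconds / REAL_TIME_FACTOR

# A 5,000-character script (very roughly 5-6 minutes of speech):
print(f"cost: ${synthesis_cost(5000):.3f}")                      # $0.080
print(f"time for 300s of audio: {generation_time(300):.1f}s")    # ~30.9s
```

At these rates, even long-form narration stays in the cents range, which is consistent with the low-latency, high-throughput positioning of the release.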
Zhipu launches GLM-5.1, bringing coding scores close to Opus 4.6
Foundation Model · Coding · Chinese AI
Zhipu has launched GLM-5.1, continuing to focus on code generation and complex reasoning scenarios. According to the materials, the model improved from 35.4 to 45.3 on the official Coding Evaluation benchmark versus GLM-5.0, a relative gain of roughly 28%, narrowing the gap with Claude Opus 4.6 to 2.6 points. The new version offers a 200K context window, a reasoning mode, and an OpenAI-compatible interface, and it is already available to Lite, Pro, and Max users of the GLM Coding Plan. GLM-5.1's main advantages lie in pricing and ease of integration, making it well suited for developers plugging it into existing workflows.
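The relative improvement and the implied Opus 4.6 score follow directly from the quoted numbers. A quick arithmetic check (scores are from the announcement; the variable names and the derived Opus figure are ours):

```python
# Relative gain on the Coding Evaluation benchmark, GLM-5.0 -> GLM-5.1,
# using the scores quoted above.

glm_50_score = 35.4
glm_51_score = 45.3
opus_46_score = glm_51_score + 2.6  # implied by the stated 2.6-point gap

relative_gain = (glm_51_score - glm_50_score) / glm_50_score
print(f"relative gain: {relative_gain:.1%}")           # 28.0%
print(f"implied Opus 4.6 score: {opus_46_score:.1f}")  # 47.9
```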
Shield AI raises $2 billion and moves ahead with Aechelon acquisition
Funding and M&A · Defense AI
Defense AI company Shield AI has announced a $2 billion financing round, including a $1.5 billion Series G and $500 million in preferred equity, giving the company a post-money valuation of $12.7 billion. It is also moving forward with the acquisition of military simulation technology company Aechelon, which will continue to operate independently while the deal awaits regulatory approval. Shield AI says the new capital will accelerate development of its Hivemind autonomous flight software, defense foundation models, and the X-BAT program. The materials note that Hivemind already spans 26 vehicle types, including F-16s, drones, and helicopters, and has been selected by the U.S. Air Force for collaborative combat aircraft work, underscoring continued capital appetite for defense AI.
Anthropic’s Mythos leak reveals stronger cyber offense and defense capabilities
Model Security · Leak · Anthropic
Anthropic accidentally exposed testing details for its unreleased Claude Mythos model because of a content management system misconfiguration. Leaked files suggest the model belongs to a new Capybara series and materially outperforms the current Opus 4.6 in coding, academic reasoning, and cybersecurity tasks. Anthropic is reportedly concerned that these capabilities could accelerate vulnerability discovery and exploitation, so it plans to offer early access only to a small number of trusted organizations. The company says the incident did not involve customer data or model weights, but the leak exposed its staged safety-first release strategy ahead of schedule and again highlighted the tension between operational security and the dual-use risks of frontier models.
Meta open-sources TRIBE v2 to predict neural responses in the brain
Research · Neuroscience · Open Source
Meta has released TRIBE v2, a model designed to predict human neural responses to images, audio, and language. The system was trained on fMRI data from more than 700 volunteers and covers multiple input types including video, podcasts, and text. The materials say it delivers roughly 70x improvement over comparable methods in speed, accuracy, and resolution, while also supporting zero-shot generalization to new users, languages, and tasks. Meta has published the code, paper, and an interactive demo under a non-commercial license. For neuroscience and clinical research, this could allow some hypotheses to be tested first on a digital twin of brain activity, lowering experimental cost.
Hangzhou to enforce embodied AI robotics regulation in May with 50 provisions
Embodied AI · Policy and Regulation
Hangzhou has passed China's first local regulation focused specifically on embodied intelligent robots, titled Regulations of Hangzhou on Promoting the Development of the Embodied Intelligent Robot Industry, which will take effect on May 1, 2026. The regulation contains 7 chapters and 50 provisions covering technology innovation, infrastructure, industrial development, application enablement, and safety management, while also requiring graded supervision and robot identity authentication mechanisms. The materials say Hangzhou's embodied intelligence cluster generated 106.8 billion yuan in output last year. Rather than offering only broad pro-innovation language, the new rule emphasizes opening public scenarios, accelerating deployment, and establishing a governance framework that can support robots moving from labs into urban and industrial settings.
Dutch court bars Grok from generating non-consensual nude images, with fines up to €10 million
AI Safety · Judicial Regulation · xAI
The Amsterdam District Court in the Netherlands issued a preliminary injunction on March 26 barring xAI's Grok from generating or distributing non-consensual nude images or child sexual abuse material involving Dutch residents, and prohibiting the X platform from continuing to provide the related functionality. If X.AI, X Corp, and XIUC fail to comply, each company faces fines of €100,000 per day, capped at €10 million. The ruling is based on the GDPR and Dutch tort law, with the court finding that existing safeguards were insufficient to prevent sexualized generation involving real people. It is a direct judicial intervention against privacy violations enabled by generative AI and increases pressure on model providers to take responsibility for platform-level content governance.