Mistral AI Releases Flagship Medium 3.5 Model, 128B Dense Architecture Achieves 77.6% on SWE-Bench
Model Release · Open Source
Mistral AI has launched its flagship 128B dense model, Mistral Medium 3.5, supporting a 256K context window and achieving a 77.6% score on SWE-Bench Verified. The model is now the default for Mistral Vibe and Le Chat, and powers remote asynchronous coding agents that can run long-running tasks in parallel in the cloud. Le Chat introduces a preview of 'Work Mode,' enabling complex multi-step tasks. The model weights are open under a modified MIT license. API pricing is set at $1.5 per million input tokens and $7.5 per million output tokens, with deployment also available via NVIDIA NIM.
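The published rates make per-request costs easy to estimate. A minimal sketch, using the article's $1.5/$7.5 per-million-token pricing; the function name and example token counts are illustrative, not from Mistral's documentation:

```python
# Rates from the article: USD per 1M tokens for Mistral Medium 3.5
INPUT_RATE_PER_M = 1.5
OUTPUT_RATE_PER_M = 7.5

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated API cost in USD for a single request."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# e.g. a coding-agent call with a 50K-token context and a 4K-token reply:
print(estimate_cost(50_000, 4_000))  # 0.105
```

At these rates, even a context-heavy agent call costs about a tenth of a cent per thousand output tokens plus the (much cheaper) input side, which is why long-running parallel agents are economical to run in bulk.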
IBM Launches Granite 4.1 Multimodal Model Family Covering Language, Speech, Vision, Embedding, and Security
Model Release · Open Source · Enterprise AI
IBM has released the Granite 4.1 model family, including 3B, 8B, and 30B dense language models trained over five phases on approximately 15 trillion tokens, supporting up to 512K context length. The 8B model matches or outperforms the previous-generation 32B MoE-based Granite 4.0-H-Small across multiple benchmarks. Also launched are Vision 4.1 (leading in table and chart extraction), Speech 4.1 (5.33% word error rate), Guardian 4.1 (safety moderation), and Embedding Multilingual R2 (supporting 200+ languages). All models are open-sourced under the Apache 2.0 license and available on watsonx and Hugging Face.
Poolside AI Open-Sources First Coding Model Laguna XS.2, 33B Parameters Runnable Locally on Mac
Model Release · Open Source · Coding
Poolside AI has released its first open-source model, Laguna XS.2 (Apache 2.0), with 33B total parameters and 3B activated per token, supporting a 128K context window and runnable on Macs with 36GB RAM via Ollama. It achieves 68.2% on SWE-bench Verified and performs strongly on long-horizon agent tasks like SWE-bench Pro. On the same day, the company launched a closed large model, Laguna M.1 (225B total parameters, 72.5% on SWE-bench Verified). Training used the Muon optimizer and an asynchronous RL system, with synthetic data comprising about 13%. API access to both models is currently free.
OpenAI Releases Symphony Open-Source Orchestration Framework, Internal Team PRs Increase by 500% in Three Weeks
Open Source · AI Agent · Development Tools
OpenAI has introduced Symphony, an open-source specification for orchestrating coding agents to handle software tasks. The system turns project management tools into control planes for coding agents, assigning each task an isolated agent workspace, automatically monitoring progress, and restarting stalled tasks. Originating from OpenAI's internal engineering workflows, it drove a 500% increase in merged PRs for some teams within three weeks of adoption. The reference implementation, written in Elixir, has gained over 15,000 stars on GitHub. OpenAI offers it as a reference implementation only and does not plan to maintain it as a standalone product.
SenseTime Open-Sources SenseNova U1 Native Multimodal Model, NEO-unify Architecture Boosts Performance by Over 32%
Model Release · Open Source · Multimodal
SenseTime has open-sourced the SenseNova U1 series of native unified understanding-generation models. Built on its proprietary NEO-unify architecture, the series natively integrates multimodal understanding, reasoning, and generation within a single framework, moving away from traditional concatenation-style designs. The open lightweight versions, U1-8B-MoT and U1-A3B-MoT, achieve leading performance among models of similar scale on benchmarks for image understanding, generation, editing, and visual reasoning. Compared to concatenation-based models of the same size, performance on complex tasks improves by over 32%, with 27% higher inference efficiency. The models are now available on GitHub and Hugging Face, free for commercial use.
Microsoft AI Annualized Revenue Hits $37B, Up 123%; Alphabet Cloud Revenue Surpasses $20B for First Time
Financial Report · Tech Giants
Microsoft and Alphabet released their Q1 earnings reports on the same day. Microsoft's AI business reached an annualized revenue of $37 billion, up 123% year-over-year. M365 Copilot has 20 million seats, and nearly 140,000 organizations are using GitHub Copilot. Alphabet delivered strong results, with Google Cloud revenue surpassing $20 billion for the first time, up 63% year-over-year. Generative AI product revenue grew nearly 800% year-over-year, and cloud backlog nearly doubled to over $460 billion. Waymo surpassed 500,000 weekly autonomous rides. Both companies' capital expenditures drew significant market attention.
Rogo Raises $160M Series D at $2B Valuation, Building Financial AI Agent Platform
Funding · Financial AI
Financial AI platform Rogo has completed a $160 million Series D funding round led by Kleiner Perkins, with participation from Sequoia, Thrive Capital, Khosla Ventures, and J.P. Morgan, valuing the company at $2 billion and bringing total funding above $300 million. Designed specifically for financial services, Rogo is already used by over 35,000 professionals across 250+ global investment banks. Its autonomous AI agent Felix can execute multi-step financial workflows such as deal screening, CIM generation, buyer outreach, and due diligence. The generative AI market in BFSI is projected to reach $18.5 billion by 2034.
EU AI Act Revision Talks Collapse, High-Risk AI Rules Facing Uncertainty Ahead of August Deadline
Policy & Regulation · EU
Trilogue negotiations between EU member states and the European Parliament on revising the AI Act collapsed after 12 hours of talks. The dispute centers on whether industrial AI products already regulated under sector-specific rules should be exempted from obligations under the AI Act. Germany and the European People's Party advocate lighter regulation, while several countries and center-left parties oppose exemptions. The new high-risk AI rules, scheduled to take effect on August 2, 2026, now face serious timeline pressure. The next round of talks is expected in mid-May. Enterprises must still prepare to comply with requirements including generative AI transparency and content labeling.
Cognizant Acquires Astreya for $600M to Strengthen AI Infrastructure Managed Services
Acquisition · AI Infrastructure
Cognizant announced the acquisition of IT managed services provider Astreya for approximately $600 million, aiming to deepen its AI infrastructure and data center operations capabilities. With 25 years of experience, Astreya serves six of the 'Magnificent Seven' hyperscalers and operates the proprietary AI OpsHub platform. Cognizant's CEO noted that global spending on AI data center construction is expected to reach $6.7 trillion between 2025 and 2030. The deal is expected to close in Q2 2026. This marks another step in Cognizant's ongoing expansion into AI services, following prior acquisitions of 3Cloud and Belcan.
Cursor Launches Official TypeScript SDK Public Beta, Opens Agent Runtime Matching Editor Experience
Development Tools · AI Agent · SDK
Cursor has launched a public beta of its official TypeScript SDK, opening up the Agent runtime, models, and toolchain that power its editor, CLI, and web version to developers for both local and cloud execution. Enterprise customers including Rippling, Notion, C3 AI, and Faire are already using the SDK to build backend agents, automatically fix bugs, and maintain self-healing codebases. Cursor also open-sourced starter projects including a coding agent CLI, prototyping tools, and agent-driven dashboards.