Anthropic Releases Sonnet 4.6: 1M Context, 79.6% on SWE-bench
Model Release · Agent · Developer Tools
Anthropic has released Claude Sonnet 4.6, making it the default model for free and Pro users on claude.ai, while also offering API access via Amazon Bedrock, Google Vertex AI, and other platforms. A beta option extends the context window to 1 million tokens, suited to long documents and large codebases. Pricing is set at $3 per million input tokens and $15 per million output tokens. The model achieves 72.5% on OSWorld and 79.6% on SWE-bench Verified, and is deployed under ASL-3 safety standards, with an emphasis on improved computer use and multi-step task capabilities.
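At these list prices, per-request cost is easy to estimate. A minimal sketch using the stated rates (the token counts below are illustrative, not from the announcement, and long-context requests may be priced differently):

```python
# Claude Sonnet 4.6 list prices from the announcement:
INPUT_PER_MTOK = 3.00    # USD per million input tokens
OUTPUT_PER_MTOK = 15.00  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single API call at the stated rates."""
    return (input_tokens / 1e6) * INPUT_PER_MTOK + (output_tokens / 1e6) * OUTPUT_PER_MTOK

# Illustrative example: feeding a large codebase (800k tokens) into the
# 1M-token beta context window and receiving a 4k-token reply.
print(round(request_cost(800_000, 4_000), 2))  # 2.46
```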
Musk Confirms: xAI Grok 4.2 Smaller Version Is ~500B
Model Release · xAI
AI engineer Mark Krechman reports that xAI has launched Grok 4.2, describing the current release as a 'smaller' model with approximately 500 billion parameters, with medium and large versions to follow. Elon Musk retweeted the post, confirming its accuracy. Reports note that the model sparked discussion over demonstrations involving long-range recognition and reasoning. However, Grok 4.2's pricing, training data, and standardized benchmark results remain undisclosed, as do release timelines and access methods for the upcoming medium and large versions.
India's Sarvam AI Launches 30B/105B MoE Models and Plans Open Source
Model Release · Open Source · India
Indian startup Sarvam AI has announced two self-developed LLMs: a 30B-parameter model and a 105B-parameter model, both Mixture-of-Experts (MoE) designs optimized for Indian languages, including Hinglish. The 30B version activates around 1 billion parameters per inference and supports a 32,000-token context, prioritizing low inference cost. The 105B version activates approximately 9 billion parameters with a context length of up to 128,000 tokens, targeting complex reasoning tasks. Trained using compute resources from India's National AI Mission, the models will be open-sourced to promote localized AI adoption.
Temporal Raises $300M Series D at $5B Valuation for Agent Reliability
Funding · AI Agent · Infrastructure
Temporal, an open-source platform for reliable long-running workflows, has announced a $300 million Series D round, reaching a valuation of approximately $5 billion. The round was led by a16z, with participation from Lightspeed, Sapphire, and Sequoia. Temporal focuses on solving recoverability and observability challenges for AI agents in complex processes, enabling multi-step tasks to be orchestrated into retryable, auditable business workflows. Customers include OpenAI, Snap, Netflix, and JPMorgan Chase. The funding will expand infrastructure and product capabilities to support large-scale agentic applications.
Microsoft Pledges $50B to Bridge Global AI Divide, Focusing on Infrastructure
Industry Policy · Infrastructure · Microsoft
At the AI Impact Summit in New Delhi, India, Microsoft announced it will invest $50 billion by the end of this decade to help low-income countries build AI-enabling infrastructure, including data centers and broadband connectivity. Microsoft warned of a 'widening gap' in global AI adoption, noting that AI usage in high-income regions (the Global North) is roughly twice that of lower-income regions (the Global South). The initiative aims to reduce barriers to compute and network access, enabling more nations to train, deploy, and utilize AI systems. Microsoft plans to collaborate with governments and partners to ensure equitable distribution of AI benefits.
Google.org Launches $30M AI Government Innovation Grant, Deadline April 3
Grant Program · Public Sector · Google
Google.org has launched the 'Impact Challenge: AI for Government Innovation,' a $30 million grant program inviting nonprofits, social enterprises, and academic institutions to propose AI-driven solutions for improving public services, with focus areas including health, resilience, and economic opportunity. Google.org notes that while 80% of public sector workers believe AI improves their work, only 18% think their government effectively uses AI. Selected projects will receive funding, engineering support, and access to Google’s infrastructure through an accelerator program. Applications are due by April 3, 2026.
NBER Survey: Nearly 90% of Firms Report No Productivity Gains from AI in Three Years Despite $250B+ Investment
Industry Data · Productivity · Enterprise AI
An NBER study cited in the briefing, surveying around 6,000 executives, reveals that despite over $250 billion in corporate AI investment in 2024 alone, nearly 90% of companies report no observed productivity improvements over the past three years. This highlights a 'disconnect between investment and output,' suggesting AI initiatives may remain in pilot phases, fail to transform end-to-end processes, or have gains offset by costs and organizational friction. The research urges firms to better define automatable tasks, assess ROI rigorously, and establish measurable productivity and quality metrics beyond mere tool deployment.
Incorta Acquires Layout.dev to Integrate No-Code Agentic AI with Real-Time Data
Acquisition · Enterprise Software · AI Agent
Enterprise data analytics firm Incorta has announced the acquisition of no-code AI application development platform Layout.dev. The integration will combine Layout.dev’s capabilities with Incorta’s real-time data access (Direct Data Mapping) to deliver composable Agentic AI workflows for enterprises. Incorta claims this enables business teams to rapidly build intelligent applications with full data context, while maintaining governance and security—reducing reliance on static dashboards and fragile data pipelines. Initial use cases will focus on finance, operations, and supply chain. Financial terms and closing date were not disclosed.
Security Study: LLM-Generated Passwords Have Only 20–27 Bits of Entropy, Easily Brute-Forced
Security · LLM Risk · Development Practice
After testing Claude, ChatGPT, Gemini, and similar tools, security firm Irregular warns that 'strong passwords' generated by LLMs follow predictable patterns, appearing complex while lacking true randomness. The report found only 30 unique 16-character passwords across 50 generations, with recurring prefixes and suffixes. Estimated entropy ranges from 20 to 27 bits, far below the 98–120 bits expected of a truly random 16-character string, making such passwords brute-forceable within hours. The researchers argue the issue cannot be reliably fixed via prompting or temperature tuning, and recommend dedicated password managers plus rotation of any existing AI-generated passwords.
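The gap the report describes is easy to quantify. A minimal sketch of the entropy arithmetic (the 1e10 guesses/sec crack rate is an assumed figure for an offline GPU attack, not from the report):

```python
import math
import secrets
import string

# Entropy of a truly random password of a given length drawn uniformly
# from a charset of charset_size symbols: length * log2(charset_size).
def random_password_entropy(length: int, charset_size: int) -> float:
    return length * math.log2(charset_size)

# 16 characters over ~72 symbols (letters + digits + common specials) or
# all 94 printable ASCII chars lands in the 98-105 bit range the report cites.
print(round(random_password_entropy(16, 72), 1))   # 98.7
print(round(random_password_entropy(16, 94), 1))   # 104.9

# Seconds to exhaust a keyspace of the given entropy at a fixed guess rate.
def crack_seconds(entropy_bits: float, guesses_per_sec: float = 1e10) -> float:
    return 2 ** entropy_bits / guesses_per_sec

print(crack_seconds(27))   # ~0.013 s: a 27-bit password falls instantly
print(crack_seconds(104) / (3600 * 24 * 365))  # ~6e13 years for 104 bits

# The recommended alternative: generate passwords with a CSPRNG rather
# than an LLM, e.g. Python's secrets module.
alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
pw = "".join(secrets.choice(alphabet) for _ in range(16))
print(pw)  # 70-symbol alphabet -> ~98 bits of entropy
```

The contrast is the point: at 1e10 guesses/sec, a 27-bit keyspace falls in milliseconds, while a properly random 16-character password is out of reach.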