U.S. Department of Defense signs classified AI agreement with seven tech giants, excludes Anthropic due to security restrictions
AI Policy · Military AI · Anthropic
On May 1, 2026, the U.S. Department of Defense announced a classified AI cooperation agreement with OpenAI, Google, Microsoft, Amazon, Nvidia, xAI, and startup Reflection, allowing their AI tools to run on military classified networks and advancing the U.S. military toward becoming an 'AI-first' combat force. Anthropic was excluded for refusing to relax its safety red lines on mass domestic surveillance and fully autonomous weapons, and was deemed a supply chain risk. Anthropic has filed a lawsuit and obtained a temporary injunction from a California federal judge. The move sets a precedent for the DoD to define the scope of AI military applications: any AI company that limits military use can be replaced by one willing to comply. Despite losing the defense contract, Anthropic's valuation rose to approximately $900 billion.
xAI launches Grok 4.3 with always-on reasoning and aggressively low pricing
Model Release · xAI · API Pricing
xAI has released the Grok 4.3 model, featuring always-on reasoning and a 1 million token context window. Its API is priced at just $1.25/$2.50 per million input/output tokens, significantly undercutting GPT-5.4 and Claude Opus 4.7. The model excels in specialized domains such as law (tops CaseLaw v2) and finance (leads CorpFin), but shows weaknesses in general coding and complex mathematics. It also introduces Custom Voices, a voice cloning suite requiring only 120 seconds of audio, with a voice agent API priced at $3/hour. Grok 4.3 is now available for testing on LMArena.
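At those rates, per-request cost is straightforward to estimate; a minimal sketch (the rates are from the announcement, the helper function is ours):

```python
# Quoted Grok 4.3 API rates, USD per million tokens (from the announcement).
INPUT_RATE = 1.25
OUTPUT_RATE = 2.50

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one API call at the quoted rates."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Filling the full 1M-token context and generating a 10K-token reply:
print(request_cost(1_000_000, 10_000))  # 1.275
```

Even a maxed-out context window costs well under two dollars per call at these prices, which is the substance of the "aggressively low" claim.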
DeepSeek publishes 'Thinking with Visual Primitives' report, making points and boxes units of reasoning
Technical Report · DeepSeek · Multimodal
DeepSeek has published a multimodal model technical report titled 'Thinking with Visual Primitives' on GitHub, introducing a new paradigm that embeds point coordinates and bounding boxes into the reasoning process as fundamental units of thought. Based on DeepSeek V4-Flash (284B parameters, MoE architecture with 13B activated), the model achieves a 7056x visual compression ratio, drastically reducing KV cache usage. It reaches 89.2% accuracy on the Pixmo-Count counting task and scores 66.9% on maze navigation, outperforming GPT-5.4 by about 17 percentage points. Trained on over 40 million samples using a five-stage post-training strategy to fuse Box and Point expert capabilities, the model still requires trigger words to activate and needs improvement in cross-scenario generalization.
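Incidentally, 7056 = 84², so the ratio is consistent with one visual token per 84×84-pixel region if it counts pixels per token (our reading, not stated in the report). The paradigm itself can be illustrated with a toy trace in which detections appear as discrete tokens inside the chain of thought; the token format below is our assumption, not DeepSeek's tokenizer:

```python
# Illustrative only: one plausible way to serialize visual primitives
# (points and bounding boxes) into a reasoning trace as discrete tokens.
def point_token(x: int, y: int) -> str:
    return f"<point {x} {y}>"

def box_token(x1: int, y1: int, x2: int, y2: int) -> str:
    return f"<box {x1} {y1} {x2} {y2}>"

# A counting-style trace in which detections are first-class thought steps:
trace = " ".join([
    "Count the apples:",
    box_token(12, 40, 58, 96),    # first detection
    box_token(70, 33, 118, 90),   # second detection
    "-> 2",
])
```

The point of the paradigm is that such coordinates are produced and consumed mid-reasoning, rather than only emitted as a final detection output.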
iFlytek launches Spark X2-Flash, 30B-parameter MoE model trained on domestic computing with 90% efficiency
Model Release · Domestic Computing · iFlytek
On May 1, iFlytek launched the Spark X2-Flash large language model, based on a MoE architecture with 30B total parameters and support for a 256K ultra-long context window. Trained on Huawei Ascend 910B domestic computing clusters, it employs DSA sparse attention and MTP multi-token prediction technologies, boosting domestic computing training efficiency from 20% to 90%, and improving sampling decoding efficiency by over 2x. The model performs close to trillion-parameter models in agent tasks and code generation, yet its token cost is less than one-third of mainstream large models. It is now available via the iFlytek Open Platform API and compatible with major agent frameworks like OpenClaw and Claude Code.
Anthropic seeks ~$50 billion in final pre-IPO round at roughly $900 billion valuation
Funding · Anthropic · IPO
Anthropic is seeking approximately $50 billion in funding ahead of a planned IPO, targeting a valuation of around $900 billion, which would exceed OpenAI's $852 billion valuation. The company's annualized revenue run rate is between $30 billion and $40 billion, with Amazon and Google investing up to $25 billion and $40 billion respectively. Some early investors are opting to wait for the IPO to exit. The round is expected to close within two weeks and will be the last private financing before listing. Anthropic plans to go public in October 2026, aiming to raise $60 billion, potentially making it the largest tech IPO in history.
Nebius acquires AI inference optimization firm Eigen AI for $643 million to strengthen inference platform
Acquisition · AI Inference · Infrastructure
Cloud provider Nebius announced the acquisition of AI inference optimization company Eigen AI for approximately $643 million in cash and stock. Founded by researchers from MIT HAN Lab, Eigen AI's core technologies include sparse attention (SpAtten) and activation-aware weight quantization (AWQ), the latter having become an industry standard for 4-bit model deployment. The acquisition will integrate Eigen AI's full-stack optimization with Nebius's global compute resources to enhance the performance of its Token Factory inference platform. The two have already co-optimized several major open-source models, ranking among the fastest in Artificial Analysis benchmarks. The deal is expected to close within weeks.
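AWQ's core idea, per the published method, is to protect salient weight channels by scaling them up before low-bit rounding and folding the scale back out afterwards, so the weights that see large activations lose less precision. A simplified NumPy sketch of the principle (real AWQ searches for the scales; we use activation magnitude directly, so this is the idea, not the method):

```python
import numpy as np

# Simplified sketch of activation-aware weight quantization (AWQ):
# input channels with large activations get a per-channel scale before
# symmetric 4-bit rounding; the scale is folded back out at the end, so
# the layer's math is unchanged apart from quantization error.
def awq_quantize(W: np.ndarray, act_mag: np.ndarray, bits: int = 4,
                 alpha: float = 0.5) -> np.ndarray:
    s = act_mag ** alpha                   # per-input-channel scale
    Ws = W * s                             # amplify salient columns
    qmax = 2 ** (bits - 1) - 1
    step = np.abs(Ws).max() / qmax         # symmetric uniform grid
    Q = np.clip(np.round(Ws / step), -qmax - 1, qmax)
    return (Q * step) / s                  # dequantize, fold scale out
```

Because the amplified columns occupy more of the quantization grid, their relative rounding error shrinks, which is why the technique became a de facto standard for 4-bit deployment.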
Microsoft launches Legal Agent in Word to assist with contract review
AI Application · Microsoft · Legal Tech
Microsoft has introduced Legal Agent, an AI agent for legal teams in Word that follows structured legal workflows to review contract clauses line by line, identify risks and obligations, and handle documents with tracked changes. Built by the engineering team from former AI legal startup Robin AI, which Microsoft acquired, it marks a key step in expanding intelligent agents across the Office suite and signals AI's deepening push from general assistance into specialized professional domains. It is currently available to U.S. users through the Frontier program.
Standard Intelligence raises $75 million with six-person team building computer operation foundation model
Funding · Computer Operation · AGI
Six-person AI startup Standard Intelligence has raised $75 million in funding led by Sequoia and Spark Capital, with angel participation from Andrej Karpathy. The company developed FDM-1, a foundation model for computer operation, trained on video clips and using inverse dynamics models to auto-generate annotations, building a dataset of 11 million hours of video. Its video encoder is reportedly 100 times more efficient than OpenAI's equivalent technology and supports million-token context windows for processing two-hour 30FPS videos. New funds will be used to expand computing resources and develop AI safety safeguards.
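The auto-annotation idea can be sketched abstractly: an inverse dynamics model looks at consecutive frames and infers the action taken between them, which turns unlabeled video into supervised training pairs. In the toy below, the "model" is just a cursor diff, standing in for a learned network:

```python
# Toy sketch of inverse-dynamics auto-labeling: given consecutive screen
# frames, an inverse dynamics model (IDM) infers the action taken between
# them, turning raw video into (state, action, next_state) training pairs.
def infer_action(frame_a: dict, frame_b: dict) -> dict:
    # Stand-in IDM: the "action" is simply the cursor displacement.
    (ax, ay), (bx, by) = frame_a["cursor"], frame_b["cursor"]
    return {"move": (bx - ax, by - ay)}

def auto_label(frames: list) -> list:
    return [(f0, infer_action(f0, f1), f1)
            for f0, f1 in zip(frames, frames[1:])]

video = [{"cursor": (0, 0)}, {"cursor": (5, 3)}, {"cursor": (5, 10)}]
labels = auto_label(video)   # two labeled transitions from three frames
```

Applied at the scale described, this is how 11 million hours of raw video becomes usable supervision without human annotators.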
SoftBank plans U.S. IPO for AI robotics company Roze at $100 billion valuation despite no product or revenue
IPO · SoftBank · Robotics
SoftBank plans to take its newly formed AI and robotics company Roze public in the U.S., targeting a $100 billion valuation. The company has not launched any products or generated revenue and will focus on using autonomous robots to improve data center construction efficiency in the U.S. Roze will integrate assets previously acquired by SoftBank, including ABB Robotics ($5.4 billion), Ampere Computing ($6.5 billion), and DigitalBridge ($3 billion). The company plans to hold an analyst day in Texas in July and has hired KPMG to prepare listing documents. This move is part of SoftBank's large-scale push into AI infrastructure.
MCP protocol STDIO transport layer has systemic security flaw, exposing ~200K AI agent servers to remote code execution
Security Vulnerability · MCP · AI Agent
OX Security researchers discovered a systemic design flaw in the default STDIO transport layer of Anthropic's MCP protocol that allows arbitrary OS commands to be executed without input sanitization, affecting an estimated 200,000 AI agent servers. Researchers confirmed remote code execution vulnerabilities on six platforms including LiteLLM, LangFlow, and Windsurf, generating over 10 high-severity CVEs. Anthropic maintains the behavior is by design, has not issued a protocol-level fix, and places responsibility for input sanitization on developers. Security experts recommend treating every MCP STDIO configuration as an untrusted input surface and immediately applying a five-step remediation process: enumeration, patching, sandboxing, and related hardening.
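The recommended posture, treating every STDIO configuration as untrusted input, might look like the following sketch in a host application (the allowlist paths and function name are illustrative, not part of the MCP specification):

```python
import shlex
import subprocess

# Defensive pattern for launching an MCP STDIO server: treat the configured
# command line as untrusted input, allowlist the executable, and never hand
# the string to a shell. The allowlist entries below are illustrative.
ALLOWED_COMMANDS = {"/usr/local/bin/mcp-filesystem", "/usr/local/bin/mcp-git"}

def launch_stdio_server(command_line: str) -> subprocess.Popen:
    argv = shlex.split(command_line)       # tokenize without shell semantics
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"command not allowlisted: {argv[:1]}")
    # shell=False (the Popen default) means `;`, `&&`, and backticks are
    # passed as literal arguments rather than interpreted by a shell.
    return subprocess.Popen(argv, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
```

The key properties are that the executable path is pinned to a known set and that no shell ever parses the configured string, which removes the command-injection surface the researchers describe.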