Meta Signs $21 Billion AI Compute Deal with CoreWeave, Cumulative Contract Value Reaches $35 Billion
AI Infrastructure · Compute Investment
Meta has signed a long-term AI compute supply agreement worth $21 billion with cloud computing company CoreWeave, extending through 2032 and bringing the total value of their agreements to $35 billion. CoreWeave is raising funds through convertible notes and bonds to support its expansion. The deal underscores Meta's strategy of heavy investment in AI infrastructure. Meanwhile, Anthropic completed a secondary share transfer round valuing the company at $350 billion, with employees reluctant to sell due to strong growth expectations, leaving some investors unable to acquire shares. Meta's 2025 capital expenditure on AI infrastructure has already exceeded $72 billion, reflecting its continued commitment to building out compute capacity.
Open-Source Video Model HappyHorse-1.0 Tops Global Blind Evaluation Leaderboard, 15B Parameters Outperform Seedance 2.0
Open Source Model · Video Generation
The open-source AI video generation model HappyHorse-1.0 reached No. 1 on the Artificial Analysis Video Arena blind-evaluation leaderboard on April 8, becoming the world's top open-source AI video generator. Developed by a former Alibaba Taotian Group team led by Di Zhang, ex-Vice President of Kuaishou, the model features 15 billion parameters and uses a unified single-stream Transformer architecture supporting one-click synchronized audio-video generation. It achieved an Elo score of 1333–1357 in text-to-video, outperforming ByteDance's closed-source Seedance 2.0 by nearly 60 points, and set a new record of 1391–1406 in image-to-video. The model supports native lip-sync in seven languages, outputs at 1080p, and needs only 38 seconds of inference on a single H100 GPU. All weights and code are open-sourced on GitHub with commercial use permitted.
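To put the roughly 60-point Elo lead over Seedance 2.0 in perspective, the standard Elo expected-score formula converts a rating gap into an implied head-to-head preference rate. The sketch below is illustrative arithmetic only, not part of the Arena's published methodology:

```python
def elo_win_prob(delta: float) -> float:
    """Expected win rate for a model holding a `delta`-point Elo advantage,
    per the standard Elo logistic formula."""
    return 1.0 / (1.0 + 10 ** (-delta / 400.0))

# A ~60-point lead implies blind voters prefer the higher-rated model
# in roughly 58-59% of pairwise comparisons.
print(f"{elo_win_prob(60):.3f}")
```

In other words, a 60-point gap is a consistent but not overwhelming preference margin in blind pairwise voting.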
LG Launches EXAONE 4.5 Multimodal Model, 33B Parameters Surpass GPT-5 mini in Visual Reasoning
Multimodal Model · Open Source
LG AI Research has released the multimodal AI model EXAONE 4.5, capable of understanding and reasoning across text and images. With 33 billion parameters, the model employs a hybrid attention architecture for efficient inference. It averaged 77.3 points across five STEM evaluation metrics, surpassing OpenAI’s GPT-5 mini (73.5 points), and scored 81.4 on LiveCodeBench coding benchmarks, exceeding Google Gemma 4 (80.0). The model excels in processing complex industrial documents such as contracts, blueprints, and financial reports, and supports multiple languages including Korean, Spanish, German, Japanese, and Vietnamese. LG has open-sourced the model via Hugging Face for research purposes, with future plans to extend capabilities to speech, video, and physical environment perception.
Musk Overhauls xAI Engineering Team, Appoints SpaceX Executive as President
xAI · Organizational Restructuring
Elon Musk has conducted a major restructuring of the xAI engineering team, appointing Michael Nichols, former Senior Vice President of SpaceX Starlink, as President of xAI. An internal memo announced several leadership changes: Aman Madaan will lead model factory and tool development, Aditya Gupta will oversee post-training and reinforcement learning, and Andrew Milich along with Jason Ginsburg will head the product team. The reorganization aims to improve training performance and rebuild the organizational structure. Since January, eight co-founders have left xAI, and dozens of employees have been laid off since February, affecting projects like Grok Imagine, Macrohard, and recruitment teams. Musk stated that xAI is being rebuilt from the ground up. xAI has been acquired by SpaceX and is expected to pursue an IPO this year with a projected valuation exceeding $2 trillion.
US Federal Appeals Court Denies Block on Pentagon Blacklisting of Anthropic
AI Policy · Anthropic
On April 8, the US Court of Appeals in Washington, DC denied a request to block the Pentagon from blacklisting Anthropic. The company was flagged as a national security risk after opposing government use of its AI technology in fully autonomous weapons and surveillance. While the appeals court acknowledged Anthropic could suffer irreparable harm, it ruled the evidence insufficient to overturn the government's decision. Previously, a San Francisco federal court had ruled the government overstepped its authority and ordered the removal of the designation, creating conflicting rulings that increase policy uncertainty for the AI industry. Anthropic claims the government action constitutes illegal retaliation, with further hearings scheduled for May 19. Tech trade group CCIA warns such judicial splits could undermine the global competitiveness of US tech firms.
OpenAI Launches $100/Month ChatGPT Pro Tier, Courting Developers Hit by Anthropic's Third-Party Ban
OpenAI · Subscription Service
OpenAI has launched a new $100/month ChatGPT Pro subscription tier, filling the gap between the existing $20 Plus plan and the $200 enterprise offering. The new plan provides professional developers with 5–10 times higher Codex message capacity, supporting both local and cloud-based coding tasks. The move is seen as a strategic response to Anthropic's recent ban on third-party subscription access, aiming to attract developers with high-volume AI coding needs. OpenAI also hired Steinberger, Anthropic's former head of developer relations, further drawing in developers from the OpenClaw community affected by Anthropic's restrictions.
Florida Attorney General Announces Investigation into OpenAI, Clouding IPO Prospects
OpenAI · Regulatory Investigation
Florida Attorney General James Uthmeier announced an investigation into OpenAI and ChatGPT on April 9, focusing on whether OpenAI’s data and AI technologies could fall into the hands of adversarial nations. The probe comes as OpenAI prepares for a potential IPO that could value the company as high as $1 trillion. OpenAI previously raised $122 billion in funding, reaching an $852 billion valuation, with plans to file for an IPO in the second half of 2026. CFO Sarah Friar revealed that shares would be reserved for retail investors in the IPO, with individual investors contributing over $3 billion in the latest round. The investigation may impact OpenAI’s path to going public.
Tencent Open-Sources HY-Embodied-0.5 Embodied AI Model, MoT Architecture Designed for Physical World
Embodied AI · Open Source Model
Tencent’s Hunyuan team has open-sourced the HY-Embodied-0.5 model series, designed specifically for embodied agents in the real world. The model adopts an innovative Mixture-of-Transformers (MoT) architecture, resolving modality conflicts by assigning non-shared parameters to the vision branch, thereby enhancing 3D spatial perception without sacrificing language capability. Training data integrates over 100 million embodied-specific samples covering 3D detection, depth estimation, spatial topology, and long-horizon action reasoning. The model introduces an iterative post-training paradigm using rejection sampling fine-tuning, GRPO reinforcement learning, and online distillation to compress large-model reasoning capabilities into a 2B-parameter edge model. This represents a comprehensive architectural and training framework redesign, offering a new open-source foundation for embodied AI.
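The MoT idea described above can be pictured as transformer blocks that share sequence-mixing weights across modalities while routing each modality through its own feed-forward parameters. The numpy sketch below is a toy illustration of that routing under our own simplifying assumptions, not Tencent's implementation; the names `mot_block` and `W_ffn` are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy hidden size

# Sequence-mixing weights shared by every modality.
W_shared = rng.standard_normal((D, D)) / np.sqrt(D)

# Non-shared feed-forward weights: the vision branch owns separate
# parameters, so 3D-perception features need not compete with language.
W_ffn = {
    "text":   rng.standard_normal((D, D)) / np.sqrt(D),
    "vision": rng.standard_normal((D, D)) / np.sqrt(D),
}

def mot_block(tokens: np.ndarray, modality: str) -> np.ndarray:
    """One toy MoT block: shared mixing, modality-routed feed-forward."""
    mixed = np.tanh(tokens @ W_shared)        # shared across modalities
    return np.tanh(mixed @ W_ffn[modality])   # routed by input modality

tokens = rng.standard_normal((4, D))
text_out = mot_block(tokens, "text")
vision_out = mot_block(tokens, "vision")
# Same input, different branch parameters -> different representations.
```

The design choice this illustrates: because only the feed-forward parameters are modality-specific, adding a vision branch changes vision representations without touching the weights the text path depends on.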
LangChain Releases Deep Agents Deploy, Open-Source Alternative to Claude Managed Agents
AI Agent · Open Source
LangChain has launched Deep Agents Deploy as an open-source alternative to Claude Managed Agents. The platform enables one-click deployment of multi-tenant orchestration, scalable memory, sandbox environments, and standardized endpoints (MCP/A2A), simplifying complex agent production workflows into a single command. Key differentiators include model agnosticism and memory ownership: it supports multiple LLM providers and open standards like AGENTS.md, preventing vendor lock-in. LangChain emphasizes that proprietary agent platforms locking memory behind closed APIs create severe vendor dependency, whereas open-source solutions ensure developers retain control over the data flywheel generated through agent interactions.
Sentence Transformers v5.4 Adds Multimodal Embeddings, Unified Encoding for Text, Image, Audio, and Video
Multimodal · Development Tool
The Sentence Transformers library added support for multimodal embedding and reranking models in version 5.4, enabling unified API-based encoding and similarity comparison across text, images, audio, and video. Multimodal embedding models map inputs from different modalities into a shared vector space for cross-modal retrieval; reranking models assess the relevance of mixed-modality pairs to improve retrieval accuracy. The release supports Qwen and NVIDIA model families and introduces encode_query and encode_document methods optimized for retrieval tasks. Typical applications adopt a two-stage approach: fast recall using embedding models followed by fine-grained scoring with rerankers. GPU execution is recommended for inference.
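The two-stage pattern above (embedding recall, then reranking) can be sketched with plain numpy. This is an illustration of the retrieval flow over assumed toy vectors, not the Sentence Transformers API itself; in real use, encode_query and encode_document would produce the embeddings, and a reranker model would produce the stage-two scores:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins for multimodal embeddings: 100 documents and one query
# already mapped into a shared, L2-normalized vector space.
docs = rng.standard_normal((100, 16))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)
query = docs[7] + 0.02 * rng.standard_normal(16)  # doc 7 is the true match
query /= np.linalg.norm(query)

# Stage 1: fast recall -- cosine similarity against the whole corpus.
recall_scores = docs @ query
top_k = np.argsort(recall_scores)[::-1][:10]

# Stage 2: fine-grained scoring of the shortlist only. A real reranker
# would jointly encode each (query, candidate) pair; a dot product
# stands in here to keep the sketch self-contained.
rerank_scores = {int(i): float(docs[i] @ query) for i in top_k}
best = max(rerank_scores, key=rerank_scores.get)
```

The point of the split is cost: the cheap stage-one similarity scan covers the whole corpus, while the expensive pairwise scorer only ever sees the short candidate list.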