Honor Robot 'Lightning' Completes Beijing Half-Marathon in 50:26, Breaking Human World Record
Robotics | Milestone
On April 19, 2026, a humanoid robot half-marathon was held in Beijing's Yizhuang district, with over 100 teams participating. Honor's self-developed autonomous navigation robot 'Lightning' completed the 21-kilometer course in 50 minutes and 26 seconds, well under the human world record of 57 minutes and 20 seconds set by Uganda's Kiplimo. This year's event eliminated on-site human supervision and required robots to operate fully autonomously throughout; roughly 40% of the participating robots were capable of full autonomy. The race showcased the synergy of China's humanoid-robotics industrial chain, from core components to high-precision hardware manufacturing, and is regarded as a milestone in the convergence of AI and robotics.
US NSA Continues Using Anthropic Mythos Despite DoD Blacklist
AI Safety | Policy
According to Axios, despite the U.S. Department of Defense (DoD) listing Anthropic as a supplier risk, the National Security Agency (NSA) is expanding its deployment of the Mythos Preview model. Described as Anthropic’s most powerful model for programming and agent tasks, it features high autonomy and excels at identifying cybersecurity vulnerabilities and designing exploitation methods. Previously, the Trump administration held initial talks with Anthropic's CEO regarding collaboration. Mythos’ security capabilities have raised alarms in the financial sector about critical infrastructure safety, while also highlighting internal divisions within the U.S. government over AI vendor selection.
Claude Mythos Architecture Suspected to Use Byte Seed's Loop Language Model Technology
Model Architecture | Open Source
The unreleased Anthropic model Mythos has sparked widespread speculation due to its anomalous performance on graph search tasks (GraphWalks BFS: 80% vs. GPT-5.4's 21.4%), leading researchers to suspect it adopts the Loop Language Model (LoopLM) architecture proposed by the Byte Seed team. Three pieces of evidence support the hypothesis: the dramatic jump in graph-search accuracy; inference token usage cut to one fifth of Opus 4.6's despite slower generation, a pattern consistent with iterative computation in latent space; and the discovery of numerous zero-day vulnerabilities in CyberGym. On the same day, the open-source project OpenMythos launched, matching the performance of a 1.3B-parameter standard Transformer with a 770M-parameter looped-depth transformer and supporting the idea that inference depth can substitute for parameter scale.
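The core looped-transformer idea, as described above, is that one weight-shared block is iterated many times, so effective depth grows without adding parameters. A minimal sketch, with a residual update standing in for a full attention-plus-MLP layer (all names and shapes here are illustrative, not from LoopLM or OpenMythos):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64

# One shared block's weights: the parameter count is fixed regardless of depth.
W = rng.normal(0, 0.02, (d_model, d_model))
b = np.zeros(d_model)

def shared_block(h):
    # Residual update with a tanh nonlinearity (stand-in for attention + MLP).
    return h + np.tanh(h @ W + b)

def looped_forward(x, n_loops):
    h = x
    for _ in range(n_loops):   # latent-space iteration: same weights reused
        h = shared_block(h)
    return h

x = rng.normal(size=(1, d_model))
shallow = looped_forward(x, 4)   # effective depth 4
deep = looped_forward(x, 16)     # effective depth 16, same parameters

params = W.size + b.size
print(f"parameters: {params}")   # identical for any loop count
```

Varying `n_loops` at inference time is what would let such a model trade extra latency for deeper computation, which would be consistent with the fewer-tokens-but-slower pattern noted above.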
OpenAI Plans Billion-Dollar JV with PE Giants to Accelerate Enterprise AI Deployment
Funding | Enterprise AI
OpenAI is negotiating with major private equity firms—including TPG, Advent International, Bain Capital, and Brookfield—to form a joint venture valued at approximately $10 billion. Investors plan to inject around $4 billion, securing board seats and a guaranteed 17.5% minimum return. Simultaneously, OpenAI is building an on-premises deployment team to send engineers directly into client organizations. Over one million enterprises now use OpenAI products, Codex has over two million weekly active users, and API usage increased by 20% following the release of GPT-5.4. The industry focus is shifting from model competition toward sales, implementation, and recurring revenue.
llama.cpp Merges Speculative Checkpointing, VRAM Usage Down by Up to 40%
Open Source | Inference Optimization
On April 18, llama.cpp merged the 'speculative checkpointing' update led by founder Georgi Gerganov. By saving only incremental changes during speculative decoding instead of fully refreshing KV caches, VRAM usage was reduced by up to 40%, with throughput increasing 15–20%. This optimization makes running 70B-parameter large models and long-context workloads feasible on consumer-grade hardware. Major frontends like Ollama, LM Studio, and GPT4All have begun integration and are expected to widely adopt the feature within days, further narrowing the experience gap between local and cloud-based inference.
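The memory saving described above comes from not duplicating the KV cache around each speculative batch: record where the cache stood before drafting, then roll back by truncating only the entries that rejected draft tokens appended. A toy sketch of that idea (the class and method names are illustrative, not llama.cpp's actual API):

```python
class KVCache:
    def __init__(self):
        self.entries = []            # one (key, value) record per token

    def append(self, token):
        self.entries.append(("k", "v", token))

    def checkpoint(self):
        return len(self.entries)     # O(1): just remember the current length

    def rollback(self, mark):
        del self.entries[mark:]      # drop only the speculative tail

def speculative_step(cache, draft_tokens, n_accepted):
    mark = cache.checkpoint()
    for t in draft_tokens:           # draft model appends speculatively
        cache.append(t)
    # The target model verified only the first n_accepted draft tokens.
    if n_accepted < len(draft_tokens):
        cache.rollback(mark + n_accepted)

cache = KVCache()
for t in ["The", "quick"]:
    cache.append(t)

speculative_step(cache, ["brown", "fox", "jumps"], n_accepted=2)
print(len(cache.entries))  # 4: two committed tokens + two accepted drafts
```

Because the checkpoint is just a position marker rather than a copy of the cache, peak memory scales with the draft length instead of the full context, which is where the reported VRAM savings would come from.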
Google Reportedly Collaborating with Marvell on Two AI Inference Chips Featuring Memory Processing Units
Chips | AI Hardware
Google is collaborating with Marvell to develop two AI chips: one being a Memory Processing Unit (MPU) that reduces system memory pressure via in-memory computing; the other a next-generation TPU dedicated to AI inference, potentially based on the TPU v7 (Ironwood) architecture. TPU v7 delivers peak performance of 4614 TFLOPs with 192GB HBM per chip, integrated into a Superpod containing 9,216 chips. This partnership aims to significantly accelerate AI model inference through the combined use of MPU and new TPUs, addressing semiconductor capacity bottlenecks and potentially reshaping the ASIC landscape in AI inference.
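The per-chip figures quoted above imply the pod-level totals directly; a quick back-of-envelope check (only the per-chip peak throughput and HBM capacity are from the article, the pod aggregates are derived here):

```python
chips_per_pod = 9216
tflops_per_chip = 4614        # peak TFLOPs per TPU v7 chip
hbm_gb_per_chip = 192

pod_exaflops = chips_per_pod * tflops_per_chip / 1e6   # TFLOPs -> EFLOPs
pod_hbm_pb = chips_per_pod * hbm_gb_per_chip / 1e6     # GB -> PB (decimal)

print(f"pod peak: {pod_exaflops:.1f} EFLOPs")   # ~42.5 EFLOPs
print(f"pod HBM:  {pod_hbm_pb:.2f} PB")         # ~1.77 PB
```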
National Data Bureau Proposes Token-Based New Model for Dataset Trading
Policy | Data Elements
The National Data Bureau released the 'Implementation Plan for Advancing Industry High-Quality Dataset Construction (Draft for Public Comment)', proposing to explore a new token-based dataset trading model aimed at building a quantifiable, priceable value system for datasets. The plan defines industry high-quality datasets as curated data collections suitable for AI model development and training, and encourages business models to evolve from basic data-package sales toward API calls, model-driven solutions, and full-stack services. It seeks to foster market consensus on paying for data and to unlock the value of data as a factor of production.
German Chancellor Calls for Looser EU Industrial AI Regulation to Catch Up with US and China
Policy | EU
At the Hannover Messe, German Chancellor Merz called for more lenient EU regulation of industrial AI, advocating that it be differentiated from consumer applications and, wherever possible, excluded from the current strict regulatory framework. He emphasized that AI boosts efficiency and productivity and that Germany is determined to catch up with the United States and China. Berlin previously announced plans to at least quadruple the country's AI data-processing capacity by 2030.
Uber's AI Budget Depleted Within Months; 11% of Backend Code Updates Made by AI Agents
Enterprise AI | Industry Watch
Uber's CTO revealed that the company's 2026 AI budget was exhausted within months because engineers used AI coding tools such as Claude Code far more heavily than forecast, even with total R&D spending reaching $3.4 billion. About 11% of backend code updates at Uber are now made by AI agents, though their actual quality and long-term value remain in question. Users have criticized AI-generated restaurant descriptions on Uber Eats as generic and inaccurate, illustrating how over-incentivizing AI adoption inside a company can lead to wasted resources and declining output quality.