Sunday, February 22, 2026
8 stories · 3 min read

Today's Highlights

1. OpenAI forecasts over $280 billion revenue by 2030 and plans $600 billion in compute investment

Company News · Compute Infrastructure · Commercialization

Fortune reports that OpenAI internally projects its 2030 revenue to exceed $280 billion, with plans to invest a cumulative total of approximately $600 billion in AI infrastructure and computing power by 2030. Its CFO stated that annualized revenue in 2025 has surpassed $20 billion, up from around $6 billion the previous year; the company is also testing new monetization methods, such as advertising, alongside subscriptions. The forecast hinges on continued expansion of its consumer and enterprise product lines, as well as on long-term capital expenditure and power-supply capacity.

2. Taalas launches HC1 custom ASIC: ~17,000 tokens/s inference at ~250 W

AI Chip · Inference Acceleration · ASIC

Startup Taalas has released an inference ASIC called 'HC1,' which takes a 'model-as-chip' approach: specific model weights are written into mask ROM, bringing computation closer to storage to reduce data movement. Reports indicate it achieves inference speeds of approximately 17,000 tokens/s on Llama 3.1 8B at a power draw of about 250 W, enabling air-cooled deployment. The team numbers around 24 people, and media estimates put the product's development cost at about $30 million. Because only two mask layers need modifying, the chip can reportedly be adapted to a new model in roughly two months, though the approach raises concerns about flexibility and rapid obsolescence.

3. OpenAI proposes Harness Engineering: Codex agents deliver ~1M lines of code

AI Agent · Software Engineering · Developer Tools

OpenAI has introduced an internal methodology called 'Harness Engineering,' using Codex agents to automate coding, testing, and bug fixing within strict architectural boundaries. InfoQ reports that structured documentation serves as a machine-readable source of truth, while structural tests and linters enforce unidirectional dependency flow from Types → Config → Repo → Service → Runtime → UI. Combined with observability via logs, metrics, and traces, agents can reproduce defects and propose fixes. The team reportedly delivered a beta product comprising approximately one million lines of code, with almost no manually written source code.
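The layer ordering described above can be enforced mechanically. Below is a minimal, hypothetical sketch of such a structural check (the layer names come from the article; the function and its interface are illustrative assumptions, not OpenAI's actual tooling): a module may import only layers at or below its own position in the Types → Config → Repo → Service → Runtime → UI chain.

```python
# Hypothetical sketch of a "dependency direction" lint: later layers may
# depend on earlier ones, never the reverse. Layer names follow the article;
# the check itself is an illustrative assumption, not OpenAI's tool.
LAYERS = ["types", "config", "repo", "service", "runtime", "ui"]
RANK = {name: i for i, name in enumerate(LAYERS)}

def check_imports(module_layer: str, imported_layers: list[str]) -> list[str]:
    """Return violations: imports that flow against the layer order."""
    violations = []
    for dep in imported_layers:
        if RANK[dep] > RANK[module_layer]:  # importing a "later" layer is forbidden
            violations.append(f"{module_layer} -> {dep}")
    return violations

# A "service" module may depend on types/config/repo, but not runtime or ui:
print(check_imports("service", ["types", "repo"]))   # []
print(check_imports("service", ["ui", "config"]))    # ['service -> ui']
```

Run as a structural test in CI, a check like this gives agents a hard boundary to code against, which is the point of the harness: violations fail the build rather than relying on review.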

4. Peking University releases AgentRob: Using MCP to enable real robot collaboration via forums

Embodied Intelligence · MCP · Multi-Agent

Peking University's Tong Yang team has released the AgentRob framework, which leverages the MCP protocol to turn online forums into asynchronous, persistent coordination platforms where humans, LLM agents, and physical robots collaborate. The system defines eight standardized interfaces, with a VLM responsible for decomposing post content into primitive actions such as moving, taking photos, and uploading; an LLM then generates human-readable forum replies. To minimize the risks of embodied execution, the design includes role-based permission mapping, safety filters that intercept hazardous commands, and hardware emergency stops, and it emphasizes compatibility with various robot form factors.
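The permission mapping and safety filter described above can be sketched as a simple two-stage check. Everything here is an illustrative assumption — the role names, action names, and allow-lists are invented for the example and are not AgentRob's actual interfaces:

```python
# Hypothetical sketch of role-based permission mapping plus a safety filter,
# as the article describes. All names below are illustrative assumptions.
HAZARDOUS = {"cut", "heat", "apply_force"}          # never executed, any role
PERMISSIONS = {
    "human": {"move", "photo", "upload", "reply"},
    "llm_agent": {"reply", "photo"},
    "robot": {"move", "photo", "upload"},
}

def authorize(role: str, action: str) -> bool:
    """Reject hazardous primitives outright, then check the role's allow-list."""
    if action in HAZARDOUS:
        return False                                # safety filter intercepts
    return action in PERMISSIONS.get(role, set())

print(authorize("llm_agent", "move"))   # False: not in the agent's allow-list
print(authorize("robot", "cut"))        # False: intercepted by the safety filter
```

Layering the blanket hazard check before the per-role check means a misconfigured allow-list can never re-enable a dangerous primitive, which matches the defense-in-depth framing (filters plus hardware emergency stops) in the summary.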

5. Beihang University open-sources Code2Bench: Dynamically scraping GitHub to combat data contamination

Benchmark · Code LLM · Data Contamination

The Beihang University team has proposed and open-sourced Code2Bench, which promotes 'sustainable, contamination-resistant' evaluation for code LLMs. Tasks are dynamically scraped from recent GitHub commits and strictly filtered against each model's knowledge cutoff to reduce memorization. Test cases are selected to achieve 100% branch coverage, raising the bar for logical completeness. The authors also introduce a cross-language 'diagnostic fingerprint' that identifies model failure modes from the distribution of logical and runtime errors and assesses how language features (e.g., static typing) affect code-generation reliability. The work is targeted at ICLR 2026.
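The cutoff-based filtering is the core of the contamination defense and is easy to picture in code. This is a toy sketch under stated assumptions — the task dictionary and field names are invented for illustration, not Code2Bench's actual schema:

```python
from datetime import date

# Toy sketch of the contamination filter described above: keep only tasks
# scraped from commits dated after the evaluated model's knowledge cutoff,
# so the model cannot have memorized them. Field names are assumptions.
def filter_fresh_tasks(tasks: list[dict], cutoff: date) -> list[dict]:
    """Drop any task whose source commit predates the model's training cutoff."""
    return [t for t in tasks if t["commit_date"] > cutoff]

tasks = [
    {"id": "a", "commit_date": date(2024, 5, 1)},
    {"id": "b", "commit_date": date(2025, 3, 9)},
]
fresh = filter_fresh_tasks(tasks, cutoff=date(2024, 12, 31))
print([t["id"] for t in fresh])   # ['b'] — only the post-cutoff task remains
```

Because the cutoff differs per model, the same scraped pool yields a different benchmark slice for each model under evaluation, which is what makes the benchmark "dynamic" rather than a fixed test set.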

6. Jina AI releases jina-embeddings-v5-text: 0.6B multilingual embedding model

Embedding · RAG · Model Release

Jina AI has released jina-embeddings-v5-text, a multilingual embedding model with approximately 0.6B parameters, trained via distillation from a 4B teacher model combined with contrastive learning, targeting RAG retrieval, matching, and clustering. The model introduces task-specific LoRA adapters to mitigate multi-task interference and supports Matryoshka-style dimension truncation (32–1024 dimensions) for trade-offs between accuracy and cost. GOR regularization minimizes degradation during low-bit quantization, making it suitable for edge devices and large-scale vector databases, reflecting the engineering trend of 'using vectors for context management.'
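Matryoshka-style truncation is simple to apply at index time. The sketch below is illustrative only — it is not Jina's API, and the dimension choices are arbitrary examples within the 32–1024 range the article mentions: keep the leading dimensions of the full vector and re-normalize for cosine search.

```python
import numpy as np

# Illustrative sketch (not Jina's API): Matryoshka-trained embeddings let you
# keep only the first k dimensions of a vector and re-normalize, trading a
# little retrieval accuracy for smaller indexes and cheaper similarity search.
def truncate_embedding(vec: np.ndarray, dims: int) -> np.ndarray:
    """Keep the leading `dims` components, then L2-normalize for cosine search."""
    small = vec[:dims]
    norm = np.linalg.norm(small)
    return small / norm if norm > 0 else small

full = np.random.default_rng(0).normal(size=1024)  # stand-in for a 1024-d embedding
short = truncate_embedding(full, 128)              # e.g. a 128-d index entry
print(short.shape)                                 # (128,)
```

The same stored full-dimension vectors can thus back several indexes at different accuracy/cost points, with no re-embedding required.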

7. Reuters: Generative AI may affect ~120K US film industry jobs, retraining surges

Industry Impact · Content Generation · Employment

Reuters reports that generative AI is reshaping workflows in the U.S. film industry, with one projection suggesting around 120,000 jobs could be affected by the end of 2026, prompting accelerated retraining efforts. Curious Refuge, an online school teaching AI-powered filmmaking, says it has over 10,000 students, about 95% of them from the entertainment and advertising professions, drawn from 170 countries and served in 11 languages. The report notes that AI studio Promise acquired the school in February 2025, aiming to integrate training and production pipelines into a scalable talent and toolchain ecosystem, reflecting the content industry's organized adoption of generative tools.

8. Individual builds Mac mini agent RMA: 608 messages, 3,474 replies in 24 hours

AI Agent · Personal Automation · Workflow

A subscriber email describes a self-built AI agent, R Mini Arnold (RMA), running on a Mac mini and interacting via WhatsApp. It uses the OpenClaw framework to call Anthropic's Claude (primarily Sonnet, occasionally Opus) for tasks including file organization, email drafting, meeting preparation, and knowledge-base management. The author reports sending 608 messages and receiving 3,474 replies within 24 hours, and cutting a presentation workflow from 16–18 hours to about 1.5 hours. The system logged 179 unresolved failures and captured 146 'learning patterns,' highlighting the ongoing stability and operational-overhead challenges of personal agent deployment.
