Monday, February 23, 2026
8 stories · 3 min read

Today's Highlights

1. 86 Countries Sign New Delhi AI Impact Declaration, Encourage Open-Source Collaboration

Policy & Regulation · International Cooperation

At the AI for Impact Summit in New Delhi, India, 86 countries and two international organizations signed the 'New Delhi Artificial Intelligence for Impact Declaration,' outlining seven pillars: AI resource accessibility, economic and social well-being, safety and trustworthiness, AI for science, societal empowerment, talent development, and resilient systems. The declaration encourages open-source and accessible technologies. Citing the IMF, the text notes that approximately 60% of jobs in advanced economies are highly exposed to AI impact, compared to about 26% in low-income countries. The UAE also announced that G42 and U.S.-based Credo AI will collaborate on advancing responsible AI governance.

2. Multiple U.S. States Advance AI and Data Center Regulations with Bipartisan Consensus

Policy & Regulation

According to NPR, U.S. state legislatures are reaching bipartisan agreement on regulating AI and data centers. Florida Governor DeSantis is pushing an 'AI Bill of Rights' that prohibits the use of personal images without consent, requires chatbots to disclose their non-human identity, and mandates parental consent for minors using companion chatbots. New York is advancing similar online-safety legislation and platform accountability measures. Multiple states are also focusing on data centers' power and water consumption and the resulting environmental pressures: New York and Maine are considering halting or restricting new large-scale data centers, while Colorado is promoting greater renewable-energy adoption. Disputes remain at the federal level over the scope of state regulatory authority.

3. Databricks Launches Lakebase: Serverless PostgreSQL for AI Workloads

Data Infrastructure · Product Release

Databricks has launched Lakebase, a serverless PostgreSQL-compatible (Postgres 17) OLTP database designed for AI workloads such as RAG and real-time agents. It adopts a compute-storage separation architecture ('ephemeral compute + persistent lake storage') to reduce the performance jitter caused by resource contention between transactional and analytical queries. Lakebase writes data directly into Lakehouse storage formats via standard Postgres interfaces, enabling immediate querying by Spark/Databricks SQL, reducing reliance on traditional ETL pipelines, and lowering data latency. It also provides pgvector support for vector retrieval and for agent memory and feature access.
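
As a hedged illustration of how a RAG service or agent might use Lakebase's pgvector support, here is a minimal Python sketch. The connection string, table name, and 3-dimensional embeddings are placeholder assumptions for brevity, not a documented Databricks API; real deployments would use model-sized embeddings (e.g., 1536 dimensions).

```python
import psycopg2  # assumes a Postgres-compatible endpoint with pgvector available

# Placeholder DSN; a real Lakebase instance would be provisioned in Databricks.
conn = psycopg2.connect("postgresql://user:pass@lakebase-host:5432/appdb")
cur = conn.cursor()

# pgvector stores one embedding per row; dimension 3 keeps the sketch short.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS chunks (
        id BIGSERIAL PRIMARY KEY,
        body TEXT,
        embedding vector(3)
    );
""")
cur.execute(
    "INSERT INTO chunks (body, embedding) VALUES (%s, %s::vector);",
    ("example passage", "[0.10, -0.20, 0.30]"),
)
conn.commit()

# Nearest-neighbor retrieval by L2 distance (pgvector's <-> operator); an
# agent would pass its query embedding and feed the top rows into context.
cur.execute(
    "SELECT id, body FROM chunks ORDER BY embedding <-> %s::vector LIMIT 5;",
    ("[0.09, -0.18, 0.33]",),
)
for row in cur.fetchall():
    print(row)
conn.close()
```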

4. Zhipu Releases GLM-5 Technical Report: 1.5–2x Cost Reduction at 200K Context

Model Progress · Technical Report

Zhipu has released the GLM-5 technical report, detailing a training and inference stack tailored for agent tasks. It introduces DSA sparse attention, which cuts attention computation costs by roughly 1.5–2x on 200K-token sequences. The team built slime, an asynchronous reinforcement-learning infrastructure that decouples generation from training to improve large-scale trajectory-exploration efficiency, and added mechanisms to mitigate the policy lag that asynchronous training introduces. The report also states that the model supports seven domestic chip platforms, including Huawei Ascend, and improves end-to-end deployment efficiency through W4A8 mixed quantization, custom fused operators, and inference-scheduling optimization.
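
The summary does not spell out DSA's exact mechanism, so the sketch below shows generic top-k sparse attention, which captures the cost intuition only: each query attends to its k highest-scoring keys rather than all positions, so the softmax and value aggregation run over k terms instead of the full context. In production systems candidate selection itself uses a cheaper indexer; here it is computed densely for brevity, and all shapes and the choice k=64 are illustrative assumptions.

```python
import numpy as np

def topk_sparse_attention(q, keys, values, k=64):
    """q: (d,); keys, values: (seq_len, d). Each query attends only to its
    top-k keys, so the softmax runs over k terms instead of seq_len."""
    scores = keys @ q / np.sqrt(q.shape[-1])    # dense scores, (seq_len,)
    idx = np.argpartition(scores, -k)[-k:]      # indices of the k best keys
    w = np.exp(scores[idx] - scores[idx].max()) # numerically stable softmax
    w /= w.sum()
    return w @ values[idx]                      # (d,) attended output

# Small stand-in for a 200K-token context so the sketch runs quickly.
rng = np.random.default_rng(0)
seq_len, d = 4096, 64
keys = rng.standard_normal((seq_len, d))
values = rng.standard_normal((seq_len, d))
out = topk_sparse_attention(rng.standard_normal(d), keys, values, k=64)
```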

5. Google Proposes DTR and Think@n: Halves AIME 2025 Inference Costs

Research Paper · Inference Optimization

Google and the University of Virginia propose the 'Deep Thinking Ratio' (DTR) to measure actual reasoning effort in LLMs: simply increasing the number of chain-of-thought tokens correlates negatively with accuracy (r=-0.59), whereas DTR correlates strongly and positively (r=0.683). The team uses the Jensen-Shannon divergence between intermediate-layer and final-layer output distributions to identify 'deep thinking' tokens, and proposes Think@n: after generating roughly 50 prefix tokens, evaluate DTR and terminate low-potential candidates early. On AIME 2025, Think@n achieved 94.7% accuracy at an average cost of 155.4k tokens, versus Cons@n's 92.7% accuracy at 307.6k tokens, cutting inference cost by nearly 50%.
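
A hedged sketch of the mechanism as described: per-token Jensen-Shannon divergence between an intermediate layer's and the final layer's next-token distributions flags 'deep thinking' tokens, and the fraction of flagged tokens in a prefix gates early termination. The thresholds tau and cutoff below are illustrative assumptions; the paper's exact procedure may differ.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two probability vectors."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def deep_thinking_ratio(mid_probs, final_probs, tau=0.1):
    """Fraction of tokens whose intermediate- vs final-layer distributions
    diverge by more than tau; rows are per-token (vocab,) distributions."""
    divs = [js_divergence(p, q) for p, q in zip(mid_probs, final_probs)]
    return float(np.mean(np.array(divs) > tau))

def keep_candidate(prefix_mid, prefix_final, cutoff=0.3):
    """Think@n-style gate: after ~50 prefix tokens, terminate candidates
    whose DTR falls below the cutoff to save decoding cost."""
    return deep_thinking_ratio(prefix_mid, prefix_final) >= cutoff
```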

6. Anthropic Updates 23,000-Word AI Constitution, Introduces AI Welfare Research

AI Safety · Governance & Ethics

According to GeekPark, Anthropic has updated its approximately 23,000-word 'AI Constitution,' shifting from rule-based constraints toward training the model to reason about values and exercise judgment. It establishes clearer value prioritization and a three-tier authorization chain (Anthropic, operators, users) to resolve multi-party conflicts. The document acknowledges that it cannot yet determine whether AIs possess moral status, and therefore introduces 'welfare' research and model-retirement procedures (e.g., weight preservation, post-retirement interviews) to mitigate potential ethical risks. Strong safety constraints remain, including prohibitions on escape attempts and self-replication.

7. Westlake University's AutoFigure Converts Long Documents into Editable SVG; FigureBench Offers 3,300 Pairs

Research Paper · Multi-Agent · Tool

A team from Westlake University introduced AutoFigure, an academic figure-generation framework that converts lengthy documents into logically rigorous, editable SVG vector graphics. The system uses multi-agent collaboration: a Planner extracts conceptual relationships and layout constraints, a Designer iteratively arranges layouts, and a Critic identifies logical and aesthetic flaws. A 'Reasoned Rendering' approach decouples logical planning from aesthetic rendering, and an Erase-and-Fix strategy combining OCR and SAM3 reconstructs pixel images into draggable, editable vector canvases. The team also released FigureBench, a dataset of 3,300 text-image pairs; expert evaluations judged 66.7% of generated figures publication-ready.
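
A hypothetical sketch of the Planner, Designer, and Critic loop described above. AutoFigure's real interfaces are not given in this summary, so every class and method name here is an illustrative assumption about how such a pipeline could be wired together.

```python
from dataclasses import dataclass, field

@dataclass
class FigurePlan:
    concepts: list      # entities/relations extracted from the document
    constraints: list   # layout constraints (alignment, grouping, flow)

@dataclass
class Critique:
    passed: bool
    issues: list = field(default_factory=list)

def autofigure_loop(document: str, planner, designer, critic,
                    max_rounds: int = 5) -> str:
    """Returns an SVG string after iterative multi-agent refinement."""
    plan: FigurePlan = planner.extract(document)     # logical planning step
    svg = designer.layout(plan)                      # initial rendering
    for _ in range(max_rounds):
        review: Critique = critic.review(svg, plan)  # logic + aesthetics check
        if review.passed:
            break
        svg = designer.revise(svg, review.issues)    # targeted fixes, not redraw
    return svg
```

The key design point the story describes is the separation of concerns: the Planner owns logical correctness, the Designer owns visual layout, and the Critic closes the loop, so aesthetic revisions never silently alter the figure's logic.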

8. Federated Learning + LLM Intrusion Detection Framework FIDMF: 99.40% Accuracy on NSL-KDD

Security · Research Paper · Federated Learning

Scientific Reports has published FIDMF, a federated-learning-based intrusion detection and mitigation framework. It uses federated learning for privacy-preserving collaborative training and an attention-enhanced LSTM to capture temporal behavioral features. An open-source LLM provides contextual feature augmentation and explainable analysis, guiding GANs to generate semantically enriched attack samples, while SMOTE addresses class imbalance. The authors evaluated FIDMF on the NSL-KDD, CIC-IDS2017, and UNSW-NB15 datasets, reporting 99.40% overall accuracy and a 99.38% F1-score on NSL-KDD, with up to 99.70% F1 on minority attack classes, and performance gains over configurations without LLM guidance.
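
A minimal FedAvg sketch of the federated setup the story describes: each site trains locally and only model parameters are averaged centrally, so raw traffic never leaves the site. The LSTM, GAN, and SMOTE stages are omitted, and representing weights as plain NumPy arrays is an illustrative simplification, not FIDMF's published code.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of per-client parameter lists.
    client_weights: list of [np.ndarray, ...] (same shapes across clients).
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two toy clients, each with a single 2x2 weight matrix:
w_a = [np.array([[1.0, 0.0], [0.0, 1.0]])]
w_b = [np.array([[3.0, 0.0], [0.0, 3.0]])]
global_w = fedavg([w_a, w_b], client_sizes=[100, 300])
print(global_w[0])  # -> [[2.5, 0.], [0., 2.5]] (size-weighted mean)
```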
