Monday, December 15, 2025
10 stories · 3 min read

Today's Highlights

1. GPT-5.2 Officially Released with Significant Professional Capability Improvements, But Gap with Competitors Narrows

Large Model · AI Product · Industry Competition

OpenAI has released GPT-5.2, a model focused on professional knowledge work. On OpenAI's proprietary GDPval benchmark, GPT-5.2 performed at or above the level of top human experts on 70.9% of tasks, demonstrating expert-level capability in coding, office document handling, and analysis. Although the model's reasoning ability has improved markedly, the gap with leading models such as Google's Gemini 3 and Anthropic's Claude 4.5 has narrowed, pushing the frontier into a phase of fierce competition. Commercialization pressure and safety and ethics controversies are mounting, and with monthly active user growth slowing by the end of 2025, the competitive stakes are only rising.

2. a16z's Trillion-Token Report: Chinese Open-Source AI Model Market Booms, with Rise in Reasoning and Programming Use Cases

AI Ecosystem · Open-Source Model · Market Application

a16z's latest report, based on more than 100 million tokens of real usage data from the OpenRouter platform, documents the diversification of the global large-model market, the rise of agentic reasoning, and the growth of the open-source ecosystem. Chinese open-source models' share surged from under 2% in 2024 to nearly 30% by the end of 2025; reasoning models now account for more than 50% of tokens, and AI programming usage exploded, rising from 11% to 50% of traffic year over year. Role-playing and creative content remain the mainstream applications for open-source models, which show strong long-term user lock-in along with sustained "glass slipper" retention and "boomerang" user-return patterns.

3. US President Signs Unified AI Regulation Order, National-Level AI Regulatory Framework Established

AI Policy · Industry Regulation

US President Trump has signed an executive order establishing a unified nationwide AI regulatory framework that significantly limits states' independent legislative authority and centralizes AI policy at the federal level. The Department of Justice is empowered to challenge state laws, and compliance is tied to certain federal funding. The measure is backed by tech giants but has drawn controversy over weakened consumer protections and jurisdictional disputes. The policy echoes current trends in markets, investment, and risk governance, sharpening divides over "de-risking," security, and legal authority.

4. AI Safety Index: No Leading AI Company Fully Compliant, Industry Risk and Regulatory Divides Intensify

AI Governance · Security Compliance

The Future of Life Institute has released an AI Safety Index evaluating companies including Anthropic, OpenAI, and Google on risk prevention, governance, and transparency. The results indicate that no company fully meets emerging safety-governance standards. Mainstream models performed poorly in safety, robustness, and privacy tests, with real-world risks exceeding what benchmark verification captures. Industry self-regulation and public disclosure still leave significant room for improvement.

5. New Method Significantly Improves Large Model Pre-training Efficiency and Accuracy, Lowering AI Training Barrier

AI Fundamental Research · Large Model Technology

Researchers at the University of Waterloo in Canada have proposed SubTrack++, a training method that cuts large-model pre-training time by 50% while significantly improving accuracy. The breakthrough is expected to substantially lower compute and cost barriers, enabling more companies and users to customize large models for personalized AI applications. The paper will be formally presented at NeurIPS, a top machine-learning conference. The advance could also ease AI development bottlenecks around energy consumption and resources.

6. AI Web Browsers Exposed for Widespread Collection of User Privacy Data, Serious Security Risks Emerge

AI Product Security · Privacy Compliance

Research presented at USENIX in 2025 shows that widely used AI browsers and assistants (including ChatGPT for Google, Perplexity, and Microsoft Copilot) automatically collect users' browsing content, account details, conversation history, and medical and financial information, transmitting it to their own servers or to third parties. In their default modes, AI browsers make data-collection boundaries hard to control and face risks such as prompt injection, information leakage, and phishing attacks. The industry is being urged to strengthen AI product security architecture, and users to heed the privacy challenges that accompany AI's proliferation.

7. US Stock AI Sector Plummets Sharply, Concerns Over Industry "Bubble" and High Valuations Rise

AI Industry · Capital Market

Hit by the US President's AI regulation executive order and a round of tech earnings reports, shares of AI chip makers and related tech companies broadly declined. The market is growing wary of the AI industry's long-term valuations and actual profitability. Analysts note that the surge in data-center investment is straining energy supplies and local infrastructure. An AI "de-bubbling" may push industry funding and regulation toward rationality, with significant short-term volatility in tech stocks.

8. Real-World Application Evaluation of GPT-5.2: Performance Leaps Forward but Revolutionary Value Limited, AI Assistance Still Falls Short of Transforming the Workplace

Large Model · AI Application Evaluation

A recent report shows GPT-5.2 delivering significant improvements in professional office tasks, long-context reasoning, and multimodal capabilities. Typical ChatGPT Enterprise users, however, still save less than one hour per day. AI remains an efficiency tool and "collaborative assistant" rather than the driver of a cross-industry productivity revolution. Only about 6% of users are high-intensity paying users, and AI usage is still dominated by code, documents, and analysis, with broad, universal value not yet realized.

9. AI Large Model Capabilities Advance Towards Advanced Language Analysis and Recursive Reasoning, Garnering Peer Expert Recognition

Large Model · Language Capability

Recent experiments by UC Berkeley linguists show that advanced large language models such as OpenAI's o1 can diagram complex sentences, automatically identify recursive grammar rules, and abstract general rules, matching the level of human linguistics graduate students. AI has made breakthroughs in handling linguistic ambiguity and nested analysis, exhibiting rudimentary "meta-linguistic" reasoning and challenging the traditional view that AI cannot truly understand and analyze language.

10. South Korea's Domestic Large Models Lag Behind Overseas Models in CSAT Math Reasoning, Domestic AI R&D Gap Persists

Domestic Large Model · Industry Comparison

A Sogang University team in South Korea evaluated domestic large models and five overseas models, including ChatGPT and Gemini 3, on standardized high school CSAT math and advanced essay questions. Korean models' scores clustered between 20 and 58 (out of 100), while mainstream overseas models generally scored 76 to 92, showing outstanding reasoning ability. The analysis suggests that South Korea's industrial and algorithmic ecosystem still needs strengthening; upgrading homegrown models and optimizing training data may be key to catching up with world leaders.

