Wednesday, February 18, 2026
8 stories · 3 min read

Today's Highlights

1. Moonshot AI Plans to Raise Over $700 Million, Valuation at $10–12 Billion

Funding · Large Model · China

Moonshot AI, the company behind Kimi, is reportedly about to close a new financing round exceeding $700 million at a valuation of roughly $10–12 billion, jointly led by existing shareholders Alibaba and Yuanqi Capital, with Tencent participating. The round follows its $500 million Series C, bringing the total raised across the two rounds to over $1.2 billion. The company said the funds will go primarily toward expanding GPU procurement, advancing development of the K3 model, and enhancing employee incentives; it also disclosed that overseas API revenue has grown fourfold and that month-over-month growth in paying users averages more than 170%.

2. India Pushes AI Data Centers: Aiming to Attract $200B+ with Adani Investing $100B

Compute Infrastructure · Policy · India

At the AI Impact Summit, the Indian government announced plans to attract up to $200 billion in data center investments over the coming years, offering long-term tax incentives; meanwhile, a shared computing infrastructure equipped with over 38,000 GPUs has already been built and opened to startups and research institutions. In line with this trend, the Adani Group announced plans to invest $100 billion by 2035 in building renewable energy-powered 'AI-ready' hyperscale data centers, aiming to expand AdaniConnex's current 2-gigawatt capacity to 5 gigawatts, driving related investments in server manufacturing and sovereign cloud infrastructure.

3. Chrome Patches Exploited Zero-Day CVE-2026-2441; Fake AI Extensions Tricked 260K Users

Security · Browser · Supply Chain

Google patched Chrome zero-day CVE-2026-2441, a CSS-engine vulnerability rated CVSS 8.8 that had been exploited in the wild: specially crafted HTML allowed attackers to execute arbitrary code within the sandbox. Users should upgrade to version 145.0.7632.75 or later; Edge, Brave, and other Chromium-based browsers are also affected. The same security bulletin revealed that over 30 fake AI extensions in the Chrome Web Store had tricked more than 260,000 users into installing them; the extensions used full-screen iframes to forward traffic to real LLM APIs while stealing email addresses, browser data, and API keys.

4. Study Finds LLM Inference Side Channels Can Leak Topics and PII Even Under TLS

Security · Inference System · Privacy

Security research compiling multiple papers highlights systemic side-channel risks in LLM inference. First, remote timing attacks can infer a user's conversation topic from response-time differences alone, even under TLS, with over 90% accuracy, and can recover PII such as phone numbers from open-source serving systems. Second, speculative decoding's verification step leaks query content through the number of tokens emitted per generation step, observable as packet sizes, enabling topic identification at above 75% accuracy and even exfiltration of database text at over 25 tokens per second. Third, Whisper Leak demonstrates AUPRC often exceeding 98% for detecting sensitive topics across various LLMs.
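To see intuitively why packet sizes leak content, consider that in streaming APIs each token typically arrives in its own TLS record, so a passive observer sees a sequence of ciphertext lengths that tracks token lengths. The minimal sketch below uses entirely synthetic data: the whitespace "tokenizer", the fixed per-record overhead, and the candidate texts are illustrative assumptions, not details from the cited papers.

```python
# Illustrative sketch (synthetic data): matching an observed sequence of
# TLS record sizes against size traces of known candidate responses.

def packet_sizes(text, overhead=40):
    """Crude whitespace tokenization; ciphertext length ~ token length + overhead."""
    return [len(tok) + overhead for tok in text.split()]

def trace_distance(a, b):
    """L1 distance between two size sequences, zero-padding the shorter one."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return sum(abs(x - y) for x, y in zip(a, b))

# The attacker precomputes size traces for candidate response texts...
candidates = {
    "medical": "The symptoms you describe may indicate hypertension and warrant a checkup",
    "cooking": "Preheat the oven to 180 C and whisk the eggs until fluffy",
}
# ...then classifies the observed (encrypted) trace by nearest candidate.
observed = packet_sizes("The symptoms you describe may indicate hypertension and warrant a checkup")
guess = min(candidates, key=lambda k: trace_distance(observed, packet_sizes(candidates[k])))
print(guess)  # -> medical
```

Real attacks are statistical rather than exact-match, but the core signal is the same: length metadata survives encryption.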

5. European Parliament Bans AI Tools on Work Devices Citing Data Security Concerns

Policy · Security · Europe

The European Parliament, citing cybersecurity and privacy risks, has instructed lawmakers to disable built-in AI features on work devices and restrict the use of tools like ChatGPT, Copilot, and Claude. The Parliament’s IT department stated in an internal email that it cannot guarantee the security of data once uploaded to AI service providers’ servers, and is still assessing the scope of information sharing and associated risks, making it safer to keep such tools disabled until conclusions are clear. Concerns cited include interacting with services governed by U.S. law, potentially exposing sensitive information to law enforcement access, and user inputs being used for model training, thereby increasing data exposure surfaces.

6. Cohere Releases Tiny Aya: 3.35B Open-Source Offline Multilingual Model Covering 70+ Languages

Open Source Model · Multilingual · On-Device

Cohere Labs unveiled the open-weight small model Tiny Aya at the India AI Summit, with a base version of 3.35 billion parameters covering over 70 languages and focusing on improving support for low-resource languages in South Asia. It also offers regionally optimized variants—TinyAya-Global, -Earth, -Fire, and -Water—that can run offline on standard devices like laptops, suitable for environments with unstable internet. The team disclosed that training required only 64 Nvidia H100 GPUs. The model and accompanying datasets are now available on HuggingFace, Kaggle, and Ollama, with Cohere stating it will release a full technical report and methodological details later.

7. Infosys Partners with Anthropic: Integrating Claude into Topaz for Enterprise AI Agents

Enterprise Services · AI Agent · Partnership

Indian IT giant Infosys announced a partnership with Anthropic to integrate Claude into its Topaz platform, building 'enterprise-grade AI agents' that automate complex workflows in banking, telecom, and manufacturing, and plans to use Claude Code to improve internal coding, testing, and debugging efficiency. Infosys disclosed that its AI-related services have generated 25 billion Indian rupees in revenue, or 5.5% of total revenue. Anthropic simultaneously opened its first Indian office, in Bangalore, calling India its second-largest market, accounting for about 6% of global Claude usage, primarily in programming scenarios, and noting that the collaboration helps it reach more regulated enterprise clients.

8. Salesforce Open-Sources MobileAIBench: Benchmarking On-Device Inference on a Real iPhone 14

Benchmarking · On-Device AI · Quantization

Salesforce AI Research released MobileAIBench, an on-device evaluation framework including an open-source assessment library for desktop use and an iOS app measuring latency and hardware resource consumption on real devices (iPhone 14). It covers six dimensions including NLP, multimodal, trust, and security, evaluating models up to 7B parameters under quantization settings from 16-bit down to 3-bit. The authors reported that quantization has minimal impact on most tasks, though 3-bit quantization leads to noticeable performance degradation; smaller models offer faster inference, while memory usage scales with model size, providing reproducible benchmarks and comparison tools for deploying and optimizing LLMs/LMMs on mobile devices.
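As a rough illustration of the kind of latency measurement such a harness performs, here is a minimal timing sketch. The `generate` callable and the echo stub are hypothetical stand-ins, not MobileAIBench's actual API; a real run would pass in a quantized model's generation function.

```python
import statistics
import time

def measure_latency(generate, prompt, runs=5, warmup=1):
    """Time repeated calls to a text-generation callable (hypothetical interface)."""
    for _ in range(warmup):
        generate(prompt)  # warm caches before measuring
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        out = generate(prompt)
        samples.append(time.perf_counter() - t0)
    ordered = sorted(samples)
    return {
        "median_s": statistics.median(samples),
        "p95_s": ordered[int(0.95 * (len(ordered) - 1))],
        "chars_out": len(out),
    }

# Stand-in "model": a stub that just uppercases the prompt.
stats = measure_latency(lambda p: p.upper(), "hello on-device world")
print(stats["chars_out"])  # -> 21
```

Memory tracking on a real device would additionally sample resident-set size around each call, which is where the reported correlation between memory usage and model size comes from.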

