
A sounding board for strengthening the student experience
Source: MIT News – Artificial intelligence. During his first year at MIT in 2021, Matthew Caren ’25 received...

Unpacking the bias of large language models
Source: MIT News – Artificial intelligence. Research has shown that large language models (LLMs) tend to overemphasize information...

EPFL Researchers Introduce MEMOIR: A Scalable Framework for Lifelong Model Editing in LLMs
Source: MarkTechPost. The Challenge of Updating LLM Knowledge: LLMs have shown outstanding performance for various tasks through extensive...

Celebrating an academic-industry collaboration to advance vehicle technology
Source: MIT News – Artificial intelligence. On May 6, MIT AgeLab’s Advanced Vehicle Technology (AVT) Consortium, part of...

OpenBMB Releases MiniCPM4: Ultra-Efficient Language Models for Edge Devices with Sparse Attention and Fast Inference
Source: MarkTechPost. The Need for Efficient On-Device Language Models: Large language models have become integral to AI systems,...

StepFun Introduces Step-Audio-AQAA: A Fully End-to-End Audio Language Model for Natural Voice Interaction
Source: MarkTechPost. Rethinking Audio-Based Human-Computer Interaction: Machines that can respond to human speech with equally expressive and natural...

EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments
Source: MarkTechPost. Navigating the dense urban canyons of cities like San Francisco or New York can be a...

DeepCoder-14B: The Open-Source AI Model Enhancing Developer Productivity and Innovation
Source: Unite.AI. Artificial Intelligence (AI) is changing how software is developed. AI-powered code generators have become vital tools...

OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs
Source: MarkTechPost. The Inefficiency of Static Chain-of-Thought Reasoning in LRMs: Recent LRMs achieve top performance by using detailed...

Internal Coherence Maximization (ICM): A Label-Free, Unsupervised Training Framework for LLMs
Source: MarkTechPost. Post-training methods for pre-trained language models (LMs) depend on human supervision through demonstrations or preference feedback...