Ming Li
minglii [AT] umd.edu
I am a third-year Ph.D. student in Computer Science at the University of Maryland, advised by Prof. Tianyi Zhou. Over the past few years, I have led multiple research projects, resulting in more than 10 first-author papers at top-tier venues, including ACL, NeurIPS, ICLR, EMNLP, and NAACL.
Beyond research, I serve as an ACL ARR Area Chair and as a reviewer for major conferences, and I was honored to receive the 2026 Apple Scholar in AI/ML Fellowship. I have also worked closely with Adobe, Amazon, and Microsoft Research to translate research ideas into real-world AI systems.
Research interests
My research focuses on large language models (LLMs) and spans four interconnected areas: post-training, evaluation, interpretability, and human-AI cognitive alignment.
(1) Post-training: I study both data-centric approaches, such as data selection and synthesis, and algorithmic approaches for improving model adaptation and reasoning (Cherry LLM (IFD), Superfiltering, Mosaic-IT, Selective Reflection-Tuning);
(2) Evaluation: I develop benchmarks and analyses that probe the boundaries, strengths, and failure modes of current LLMs (Fog of War, AI Socialization, CaughtCheating, ColorBench);
(3) Interpretability: I investigate training dynamics and response behaviors to better understand how LLMs reason and generate outputs (Layer_Gradient, Gradient_Unified, MiP-Overthinking, ThinkARM);
(4) Human-AI Cognitive Alignment: I study how closely current LLMs align with human cognitive behaviors in reasoning and in the perception of task difficulty (Difficulty Alignment, Schoenfeld Reasoning, Item Difficulty Modeling).
I am always happy to discuss research ideas and collaborations. Feel free to reach out by email.