news
Jan 28, 2025 | I will join Microsoft (MSR) as a Research Intern this spring semester~ |
Jan 22, 2025 | One paper was accepted by NAACL 2025! RuleR: Improving LLM Controllability by Rule-based Data Recycling |
Jan 22, 2025 | One paper was accepted by ICLR 2025! BenTo: Benchmark Task Reduction with In-Context Transferability |
Oct 31, 2024 | One paper was put on arXiv: What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective, in which we study the layer-wise gradient behaviors of LLMs fine-tuned for fast vs. slow thinking. Repo: Layer_Gradient. |
May 23, 2024 | One paper was put on arXiv: Mosaic IT: Enhancing Instruction Tuning with Data Mosaics, in which we propose an augmentation method for instruction tuning that simultaneously improves LLM performance and lowers training costs. Repo: Mosaic-IT. |
May 16, 2024 | Three papers were accepted by ACL 2024! (1) Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning; (2) Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning; (3) Can LLMs Speak For Diverse People? Tuning LLMs via Debate to Generate Controllable Controversial Statements (DEBATunE). |
Mar 13, 2024 | One paper was accepted by NAACL 2024! From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning (Cherry LLM (IFD)) |
Feb 21, 2024 | I will join Adobe (based in San Jose) as a Research Scientist/Engineer Intern this Summer~ |
Oct 28, 2023 | One paper was accepted by Instruction Workshop @ NeurIPS 2023! Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning. |
Oct 07, 2023 | One paper was accepted by EMNLP 2023! PRCA: Fitting Black-Box Large Language Models for Retrieval Question Answering via Pluggable Reward-Driven Contextual Adapter. |
Sep 01, 2023 | I arrived at the University of Maryland, officially beginning my journey for a Ph.D. ✌️ |
Jun 01, 2023 | I obtained my Master’s in Computer Science at Texas A&M University. |