news

Date | Update |
---|---|
Jun 22, 2024 | One paper was posted on arXiv: RuleR: Improving LLM Controllability by Rule-based Data Recycling, where we proposed an augmentation method that incorporates multiple rule-based constraints into the original instruction data. Repo: RuleR. |
May 23, 2024 | One paper was posted on arXiv: Mosaic IT: Enhancing Instruction Tuning with Data Mosaics, where we proposed an augmentation method for instruction tuning that simultaneously improves LLM performance and reduces training costs. Repo: Mosaic-IT. |
May 16, 2024 | Three papers were accepted at ACL 2024! (1) Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning; (2) Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning; (3) Can LLMs Speak For Diverse People? Tuning LLMs via Debate to Generate Controllable Controversial Statements (DEBATunE). |
Mar 13, 2024 | One paper was accepted at NAACL 2024! From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning (Cherry LLM (IFD)). |
Feb 21, 2024 | I will join Adobe (based in San Jose) as a Research Scientist/Engineer Intern this summer~ |
Feb 20, 2024 | One survey was posted on arXiv: A Survey on Knowledge Distillation of Large Language Models. Repo: Awesome-Knowledge-Distillation-of-LLMs. |
Oct 28, 2023 | One paper was accepted at the Instruction Workshop @ NeurIPS 2023! Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning. |
Oct 07, 2023 | One paper was accepted at EMNLP 2023! PRCA: Fitting Black-Box Large Language Models for Retrieval Question Answering via Pluggable Reward-Driven Contextual Adapter. |
Sep 01, 2023 | I arrived at the University of Maryland, officially beginning my journey for a Ph.D. ✌️ |
Jun 01, 2023 | I obtained my Master’s in Computer Science at Texas A&M University. |