About me
I am a fifth-year PhD student (2021-present) in the Department of Computer Science and Technology at Soochow University, where I am fortunate to be advised by Prof. Min Zhang and Prof. Juntao Li. Before that, I received my bachelor's degree in Computer Science from Soochow University in 2021. I am currently a research intern at Tencent Hunyuan Digital Human, advised by Dr. Zhaopeng Tu.
I am actively seeking industry positions and welcome opportunities to apply my research to real-world challenges. If you are interested in my work or potential collaboration, please feel free to contact me via:
- Email: wangyuenlp@gmail.com
Research Interests
- Natural Language Processing
- Large Reasoning Models
- Multimodal Interaction
News
- 2025.09 We released BatonVoice, an operationalist framework for controllable TTS in which an LLM “conductor” 🪄 translates user instructions into explicit textual plans of vocal features (e.g., pitch, energy) and a specialized TTS “orchestra” 🎻 generates the speech.
- 2025.09 We have two papers accepted at NeurIPS 2025 (1 Spotlight, 1 Poster).
- 2025.06 I was invited to give a talk at BAAI 2025.
- 2025.04 We released DeepMath-103K, a large-scale mathematical dataset for advancing reasoning, which once ranked #1 trending on Hugging Face Datasets.
- 2025.01 We revealed the underthinking issue in large reasoning models.
Publications
(NeurIPS 2025 Spotlight) Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs
Yue Wang, Qiuzhi Liu, Jiahao Xu, Tian Liang, Xingyu Chen, Zhiwei He, Linfeng Song, Dian Yu, Juntao Li, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, Dong Yu.
(NeurIPS 2025) Two Experts Are All You Need for Steering Thinking: Reinforcing Cognitive Effort in MoE Reasoning Models Without Additional Training
Mengru Wang, Xingyu Chen, Yue Wang, Zhiwei He, Jiahao Xu, Tian Liang, Qiuzhi Liu, Yunzhi Yao, Wenxuan Wang, Ruotian Ma, Haitao Mi, Ningyu Zhang, Zhaopeng Tu, Xiaolong Li, Dong Yu.
(ACL 2025) 𝒜3: Automatic Alignment Framework for Attributed Text Generation
Yue Wang, Haoke Zhang, Juntao Li, Jinxiong Chang, Min Zhang.
(COLING 2024) Towards More Realistic Chinese Spell Checking with New Benchmark and Specialized Expert Model
Yue Wang, Zilong Zheng, Juntao Li, Zhihui Liu, Jinxiong Chang, Qishen Zhang, Zhongyi Liu, Guannan Zhang, Min Zhang.
(WSDM 2024) Towards Better Chinese Spelling Check for Search Engines: A New Dataset and Strong Baseline
Yue Wang, Zilong Zheng, Zecheng Tang, Juntao Li, Zhihui Liu, Kunlong Chen, Jinxiong Chang, Qishen Zhang, Zhongyi Liu, Min Zhang.
(EMNLP 2023 Findings) G-SPEED: General SParse Efficient Editing MoDel
Haoke Zhang, Yue Wang, Juntao Li, Xiabing Zhou, Min Zhang.
(Artificial Intelligence) Are the BERT Family Zero-Shot Learners? A Study on Their Potential and Limitations
Yue Wang, Lijun Wu, Juntao Li, Xiaobo Liang, Min Zhang.
(ACL 2023 Findings) Towards Better Hierarchical Text Classification with Data Generation
Yue Wang, Dan Qiao, Juntao Li, Jinxiong Chang, Qishen Zhang, Zhongyi Liu, Guannan Zhang, Min Zhang.
(TPAMI) Randomness Regularization with Simple Consistency Training for Neural Networks
Juntao Li, Xiaobo Liang, Lijun Wu, Yue Wang, Qi Meng, Tao Qin, Min Zhang, and Tie-Yan Liu.
(NeurIPS 2021) R-Drop: Regularized Dropout for Neural Networks
Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu.
Pre-Prints
Yue Wang, Ruotian Ma, Xingyu Chen, Zhengliang Shi, Wanshun Chen, Huang Liu, Jiadi Yao, Qu Yang, Qingxuan Jiang, Fanghua Ye, Juntao Li, Min Zhang, Zhaopeng Tu, Xiaolong Li, Linus.
Yue Wang, Xinrui Wang, Juntao Li, Jinxiong Chang, Qishen Zhang, Zhongyi Liu, Guannan Zhang, Min Zhang.
DeepMath-103K: A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning
Zhiwei He, Tian Liang, Jiahao Xu, Qiuzhi Liu, Xingyu Chen, Yue Wang, Linfeng Song, Dian Yu, Zhenwen Liang, Wenxuan Wang, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, Dong Yu.
Sentient Agent as a Judge: Evaluating Higher-Order Social Cognition in Large Language Models
Bang Zhang, Ruotian Ma, Qingxuan Jiang, Peisong Wang, Jiaqi Chen, Zheng Xie, Xingyu Chen, Yue Wang, Fanghua Ye, Jian Li, Yifan Yang, Zhaopeng Tu, Xiaolong Li.
OpenBA-V2: Reaching 77.3% High Compression Ratio with Fast Multi-Stage Pruning
Dan Qiao, Yi Su, Pinzheng Wang, Jing Ye, Wenjing Xie, Yuechi Zhou, Yuyang Ding, Zecheng Tang, Jikai Wang, Yixin Ji, Yue Wang, Pei Guo, Zechen Sun, Zikang Zhang, Juntao Li, Pingfu Chao, Wenliang Chen, Guohong Fu, Guodong Zhou, Qiaoming Zhu, Min Zhang.
Social Welfare Function Leaderboard: When LLM Agents Allocate Social Welfare
Zhengliang Shi, Ruotian Ma, Jen-tse Huang, Xinbei Ma, Xingyu Chen, Mengru Wang, Qu Yang, Yue Wang, Fanghua Ye, Ziyang Chen, Shanyi Wang, Cixing Li, Wenxuan Wang, Zhaopeng Tu, Xiaolong Li, Zhaochun Ren, Linus.
Talks
- 2025.06, invited talk at BAAI 2025
Internships
- 2025.03 - present, research intern at Tencent Hunyuan Digital Human, advised by Dr. Zhaopeng Tu
- 2024.09 - 2025.03, research intern at Tencent AI Lab, advised by Dr. Zhaopeng Tu
- 2022.03 - 2024.08, research intern at Ant Group
Awards
- 2021, Outstanding Graduate Student at Soochow University
- 2019, CW Chu Scholarship at Soochow University
