About me
I am a fifth-year PhD student (2021-present) in the Department of Computer Science and Technology at Soochow University, where I am fortunate to be advised by Prof. Min Zhang and Prof. Juntao Li. Before that, I received my bachelor's degree in Computer Science from Soochow University in 2021. I am currently a research intern at Tencent Hunyuan Digital Human, advised by Dr. Zhaopeng Tu.
I am actively seeking industry positions and welcome opportunities to apply my research to real-world challenges. If you are interested in my work or potential collaboration, please feel free to contact me via:
- Email: wangyuenlp@gmail.com
Research Interests
- Natural Language Processing
- Large Reasoning Models
- Multimodal Reasoning
News
- 2025.06 I was invited to give a talk at BAAI 2025. Video
- 2025.04 We released DeepMath-103K, a large-scale mathematical dataset for advancing reasoning, which once trended at #1 on Hugging Face Datasets. (ArXiv) Media Report Data
- 2025.01 We revealed the underthinking issue in large reasoning models (ArXiv). Media Report
Publications
(Artificial Intelligence) Are the BERT Family Zero-Shot Learners? A Study on Their Potential and Limitations
Yue Wang, Lijun Wu, Juntao Li, Xiaobo Liang, Min Zhang.
(ACL 2025) 𝒜3: Automatic Alignment Framework for Attributed Text Generation
Yue Wang, Haoke Zhang, Juntao Li, Jinxiong Chang, Min Zhang.
(ACL 2023 Findings) Towards Better Hierarchical Text Classification with Data Generation
Yue Wang, Dan Qiao, Juntao Li, Jinxiong Chang, Qishen Zhang, Zhongyi Liu, Guannan Zhang, Min Zhang.
(WSDM 2024) Towards Better Chinese Spelling Check for Search Engines: A New Dataset and Strong Baseline
Yue Wang, Zilong Zheng, Zecheng Tang, Juntao Li, Zhihui Liu, Kunlong Chen, Jinxiong Chang, Qishen Zhang, Zhongyi Liu, Min Zhang.
(COLING 2024) Towards More Realistic Chinese Spell Checking with New Benchmark and Specialized Expert Model
Yue Wang, Zilong Zheng, Juntao Li, Zhihui Liu, Jinxiong Chang, Qishen Zhang, Zhongyi Liu, Guannan Zhang, Min Zhang.
(EMNLP 2023 Findings) G-SPEED: General SParse Efficient Editing MoDel
Haoke Zhang, Yue Wang, Juntao Li, Xiabing Zhou, Min Zhang.
(TPAMI) Randomness Regularization with Simple Consistency Training for Neural Networks
Juntao Li, Xiaobo Liang, Lijun Wu, Yue Wang, Qi Meng, Tao Qin, Min Zhang, Tie-Yan Liu.
(NeurIPS 2021) R-Drop: Regularized Dropout for Neural Networks Media Report
Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu.
Pre-Prints
Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs Media Report
Yue Wang, Qiuzhi Liu, Jiahao Xu, Tian Liang, Xingyu Chen, Zhiwei He, Linfeng Song, Dian Yu, Juntao Li, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, Dong Yu.
Yue Wang, Xinrui Wang, Juntao Li, Jinxiong Chang, Qishen Zhang, Zhongyi Liu, Guannan Zhang, Min Zhang.
DeepMath-103K: A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning Media Report
Zhiwei He, Tian Liang, Jiahao Xu, Qiuzhi Liu, Xingyu Chen, Yue Wang, Linfeng Song, Dian Yu, Zhenwen Liang, Wenxuan Wang, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, Dong Yu.
Sentient Agent as a Judge: Evaluating Higher-Order Social Cognition in Large Language Models Media Report
Bang Zhang, Ruotian Ma, Qingxuan Jiang, Peisong Wang, Jiaqi Chen, Zheng Xie, Xingyu Chen, Yue Wang, Fanghua Ye, Jian Li, Yifan Yang, Zhaopeng Tu, Xiaolong Li.
Two Experts Are All You Need for Steering Thinking: Reinforcing Cognitive Effort in MoE Reasoning Models Without Additional Training Media Report
Mengru Wang, Xingyu Chen, Yue Wang, Zhiwei He, Jiahao Xu, Tian Liang, Qiuzhi Liu, Yunzhi Yao, Wenxuan Wang, Ruotian Ma, Haitao Mi, Ningyu Zhang, Zhaopeng Tu, Xiaolong Li, Dong Yu.
OpenBA-V2: Reaching 77.3% High Compression Ratio with Fast Multi-Stage Pruning
Dan Qiao, Yi Su, Pinzheng Wang, Jing Ye, Wenjing Xie, Yuechi Zhou, Yuyang Ding, Zecheng Tang, Jikai Wang, Yixin Ji, Yue Wang, Pei Guo, Zechen Sun, Zikang Zhang, Juntao Li, Pingfu Chao, Wenliang Chen, Guohong Fu, Guodong Zhou, Qiaoming Zhu, Min Zhang.
Talks
- “The Challenge of Reasoning Efficiency in Large Reasoning Models” (BAAI 2025) Video
Internships
- 2025.03 - present, research intern at Tencent Hunyuan Digital Human, advised by Dr. Zhaopeng Tu
- 2024.09 - 2025.03, research intern at Tencent AI Lab, advised by Dr. Zhaopeng Tu
- 2022.03 - 2024.08, research intern at Ant Group
Awards
- 2021, Outstanding Graduate Student at Soochow University
- 2019, CW Chu Scholarship at Soochow University