Distinguished Appointment Track


Shiwen Ni (倪仕文)

Distinguished Associate Professor, Master's Supervisor

Email: nishiwen@suat-sz.edu.cn
Biography

Shiwen Ni is an Associate Professor and independent PI at the Institute of Artificial Intelligence, Shenzhen University of Advanced Technology (SUAT), and a selectee of a national-level overseas talent recruitment program. He is a member of the Youth Working Committee of the Chinese Information Processing Society of China, a member of the CCF Technical Committee on Natural Language Processing, and an honorary member of the Phi Tau Phi Scholastic Honor Society. He received his Ph.D. from the Department of Computer Science, National Cheng Kung University (Taiwan, China), and previously served as a postdoctoral fellow and assistant researcher at the Digital Institute of the Shenzhen Institute of Advanced Technology (SIAT), Chinese Academy of Sciences. His honors include Shenzhen Outstanding Postdoctoral Fellow, the Taiwan Information Society's Best Doctoral Dissertation Honorable Mention, and SIAT's "Outstanding Employee Award", "Top Ten Outstanding Postdoctoral Fellows" award, "President's Postdoctoral Merit Program", "Outstanding Industry Collaboration Award", and "Outstanding Paper Award". Over the past three years he has led 10 national, provincial, municipal, and industry-funded projects with total funding exceeding 5 million RMB. He has an extensive research record in natural language processing and large language models, with more than 20 papers as (co-)first or corresponding author at top venues including ACL, ICLR, ICML, NeurIPS, AAAI, EMNLP, and IEEE TASLP (CCF A/B, JCR Q1, IEEE Transactions), over 1,300 Google Scholar citations, and related work repeatedly cited by leading large-model teams such as DeepSeek, Qwen, Kimi, Hunyuan, and Doubao. He serves as an Area Chair for the three major NLP conferences (ACL, EMNLP, NAACL) and as a reviewer for IEEE TASLP, ACL, EMNLP, AAAI, ICLR, NeurIPS, and other top international conferences and SCI journals. In addition, several large-model systems he led or helped develop have been successfully deployed, including the DeliLaw legal large model, the Xiaoxing education & psychology large model, the Mozi (Tianji Star) intellectual property large model, and an LLM-driven multi-agent financial auditing system for SF Express, together serving over 100,000 users. Personal academic homepage: https://shiwen-ni.top/

Main Research Directions

The group maintains long-term academic collaborations with major companies such as ByteDance, Alibaba, and Tencent, as well as universities and research institutions including the Chinese Academy of Sciences, Peking University, Harbin Institute of Technology (Shenzhen), CMU, Cornell, and UNSW. It pursues two main research directions:

1. Core large-model research (targeting top AI venues): hallucination mitigation, knowledge editing, efficient reasoning, reinforcement learning, multi-agent systems, AI safety, and model evaluation.

2. AI for Science (targeting Nature/Science family journals): genomic language models, LLM-driven autonomous knowledge discovery, AI safety, and alignment between large models and the human brain.

Undergraduate and graduate students interested in natural language processing and large AI models are welcome to join us and help build a free, open, mutually supportive team with a passion for research!

Research Areas

Natural language processing, multimodal/large language models, AI for Science

Education and Work Experience

Education:

2019.09–2022.12, National Cheng Kung University, Ph.D.

2017.09–2019.07, Wuhan University, Master's degree

2013.09–2017.07, Henan Agricultural University, Bachelor's degree

Work Experience:

2026.03–present, Shenzhen University of Advanced Technology, Institute of Artificial Intelligence, Distinguished Associate Professor

2023.02–2026.02, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Digital Institute, Postdoctoral Fellow

Publications

1. David Ma, Huaqing Yuan, Xingjian Wang, Qianbo Zang, Tianci Liu, Xinyang He, Yanbin Wei, Jiawei Guo, Ni Jiahui, Zhenzhu Yang, Meng Cao, Shanghaoran Quan, Yizhi Li, Wangchunshu Zhou, Jiaheng Liu, Wenhao Huang, Ge Zhang, Shiwen Ni (corresponding author), Xiaojie Jin. ScaleLong: A Multi-Timescale Benchmark for Long Video Understanding. ICLR 2026. (CCF A)

2. Shuaimin Li, Liyang Fan, linyufang, Zeyang Li, Xian Wei, Shiwen Ni (corresponding author), Hamid Alinejad-Rokny, Min Yang. Automatic Paper Reviewing with Heterogeneous Graph Reasoning over LLM-Simulated Reviewer-Author Debates. AAAI 2026. (CCF A)

3. Xintong Sun, Chi Wei, Minghao Tian, Shiwen Ni (corresponding author). Earley-Driven Dynamic Pruning for Efficient Structured Decoding. ICML 2025. (CCF A)

4. Ziqiang Liu, Feiteng Fang, Xi Feng, Xinrun Du, Chenhao Zhang, Zekun Wang, Yuelin Bai, Qixuan Zhao, Liyang Fan, Chengguang Gan, Hongquan Lin, Jiaming Li, Yuansheng Ni, Haihong Wu, Yaswanth Narsupalli, Zhigang Zheng, Chengming Li, Xiping Hu, Ruifeng Xu, Xiaojun Chen, Min Yang, Jiaheng Liu, Ruibo Liu, Wenhao Huang, Ge Zhang, Shiwen Ni (corresponding author). II-Bench: An Image Implication Understanding Benchmark for Multimodal Large Language Models. NeurIPS 2024. (CCF A)

5. Xinrun Du, Yifan Yao, Kaijing Ma, Bingli Wang, Tianyu Zheng, ......, Zhoujun Li, Dayiheng Liu, Qian Liu, Tianyu Liu, Shiwen Ni (organizing author), Junran Peng, Yujia Qin, Wenbo Su, Guoyin Wang, Shi Wang, Jian Yang, Min Yang, Meng Cao, Xiang Yue, Zhaoxiang Zhang, Wangchunshu Zhou, Jiaheng Liu, Qunshu Lin, Wenhao Huang, Ge Zhang. SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines. NeurIPS 2025. (CCF A)

6. Sunbowen Lee, Junting Zhou, Chang Ao, Kaige Li, Xinrun Du, Sirui He, Haihong Wu, Tianci Liu, Jiaheng Liu, Hamid Alinejad-Rokny, Min Yang, Yitao Liang, Zhoufutu Wen, Shiwen Ni (corresponding author). Quantification of Large Language Model Distillation. ACL 2025. (CCF A)

7. King Zhu, Qianbo Zang, Shian Jia, Siwei Wu, Feiteng Fang, Yizhi Li, Shawn Gavin, Tuney Zheng, Jiawei Guo, Bo Li, Haoning Wu, Xingwei Qu, Jian Yang, Zachary Liu, Xiang Yue, JH Liu, Chenghua Lin, Min Yang, Shiwen Ni (corresponding author), Wenhao Huang, Ge Zhang. LIME: Less Is More for MLLM Evaluation. ACL 2025. (CCF A)

8. Chenhao Zhang, Xi Feng, Yuelin Bai, Xinrun Du, Jinchang Hou, Kaixin Deng, Guangzeng Han, Qinrui Li, Bingli Wang, Jiaheng Liu, Xingwei Qu, Yifei Zhang, Qixuan Zhao, Yiming Liang, Ziqiang Liu, Feiteng Fang, Min Yang, Wenhao Huang, Chenghua Lin, Ge Zhang, Shiwen Ni (corresponding author). Can MLLMs Understand the Deep Implication Behind Chinese Images? ACL 2025. (CCF A)

9. Guhong Chen, Liyang Fan, Zihan Gong, Nan Xie, Zixuan Li, Ziqiang Liu, Chengming Li, Qiang Qu, Shiwen Ni (corresponding author), Min Yang. AgentCourt: Simulating Court with Adversarial Evolvable Lawyer Agents. ACL 2025. (CCF A)

10. Jinchang Hou, Chang Ao, Haihong Wu, Xiangtao Kong, Zhigang Zheng, Daijia Tang, Chengming Li, Xiping Hu, Ruifeng Xu, Shiwen Ni (corresponding author), Min Yang. E-EVAL: A Comprehensive Chinese K-12 Education Evaluation Benchmark for Large Language Models. ACL 2024. (CCF A)

11. Feiteng Fang, Yuelin Bai, Shiwen Ni (corresponding author), Min Yang, Xiaojun Chen, Ruifeng Xu. Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training. ACL 2024. (CCF A)

12. Yuelin Bai, Xinrun Du, Yiming Liang, Yonggang Jin, Ziqiang Liu, Junting Zhou, Tianyu Zheng, Xincheng Zhang, Nuo Ma, Zekun Wang, Ruibin Yuan, Haihong Wu, Hongquan Lin, Wenhao Huang, Jiajun Zhang, Wenhu Chen, Chenghua Lin, Jie Fu, Min Yang, Shiwen Ni (corresponding author), Ge Zhang. COIG-CQIA: Quality is All You Need for Chinese Instruction Fine-tuning. NAACL 2025. (CCF B)

13. Nan Xie, Yuelin Bai, Hengyuan Gao, Feiteng Fang, Qixuan Zhao, Zhijian Li, Ziqiang Xue, Liang Zhu, Shiwen Ni (corresponding author), Min Yang. DeliLaw: A Chinese Legal Counselling System Based on a Large Language Model. CIKM 2024. (CCF B)

14. Shiwen Ni, Xiangtao Kong, Chengming Li, Xiping Hu, Ruifeng Xu, Jia Zhu, Min Yang. Training on the Benchmark Is Not All You Need. AAAI 2025. (CCF A)

15. Shiwen Ni, Hao Cheng, Min Yang. Pre-training, Fine-tuning and Re-ranking: A Three-Stage Framework for Legal Question Answering. ICASSP 2025. (CCF B)

16. Shiwen Ni, Dingwei Chen, Chengming Li, Xiping Hu, Ruifeng Xu, Min Yang. Forgetting before Learning: Utilizing Parametric Arithmetic for Knowledge Updating in Large Language Models. ACL 2024. (CCF A)

17. Shiwen Ni, Minghuan Tan, Yuelin Bai, Fuqiang Niu, Min Yang, Bowen Zhang, Ruifeng Xu, Xiaojun Chen, Chengming Li, Xiping Hu, Ye Li, Jianping Fan. MoZIP: A Multilingual Benchmark to Evaluate Large Language Models in Intellectual Property. COLING 2024. (CCF B)

18. Shiwen Ni, Min Yang, Ruifeng Xu, Chengming Li, Xiping Hu. Layer-wise Regularized Dropout for Neural Language Models. COLING 2024. (CCF B)

19. Shiwen Ni, Hung-Yu Kao. Masked Siamese Prompt Tuning for Few-Shot Natural Language Understanding. IEEE Transactions on Artificial Intelligence, 2023.

20. Shiwen Ni, Jiawen Li, Min Yang, Hung-Yu Kao. DropAttack: A Random Dropped Weight Attack Adversarial Training for NLG. IEEE Transactions on Audio, Speech and Language Processing, 2023. (SCI Q1)

21. Shiwen Ni, Hung-Yu Kao. KPT++: Refined Knowledgeable Prompt Tuning for Few-shot Text Classification. Knowledge-Based Systems, 2023. (SCI Q1)

22. Shiwen Ni, Jiawen Li, Hung-Yu Kao. R-AT: Regularized Adversarial Training for Natural Language Understanding. EMNLP 2022. (CCF B)

23. Jiawen Li, Shiwen Ni, Hung-Yu Kao. Meet The Truth: Leverage Objective Facts and Subjective Views for Interpretable Rumor Detection. ACL 2021. (CCF A)
