Biography

Tianhao Li is a graduate student at Duke University and a visiting researcher at the SaFoLab, University of Wisconsin-Madison, working with Prof. Chaowei Xiao, Prof. Neil Gong, and Prof. Zhenyu Yang. His research aims to evaluate and enhance the safety and privacy of generative models and systems, particularly in real-world applications such as healthcare, science, and the metaverse. He received a B.Eng. in Information Security in 2024 from North China University of Technology and worked as a Security Researcher (AI Red Teaming) at NSFOCUS and TOPSEC during his undergraduate studies. He also serves as a peer reviewer for prestigious journals and conferences, including TIST, TAI, TBE, EAAI, RESS, JBHI, AAAI, IJCAI, ICLR, and ACL. In addition, he contributes to the MLCommons AI Risk & Reliability (AIRR) Working Group and to NVIDIA's widely recognized open-source project NVIDIA/garak (4.7K+ stars). In early 2025, he founded the Special Interest Group in Modern Interdisciplinary Research (SIGMIR), a 501(c)(3) nonprofit, built its core team, and led its early operations and strategic development.

Tianhao

Reach me: [tianhao.li@duke.edu] | [tianhao.li@proton.me] | [PGP Public Key: 7A4F-5713-BF75-077E-6072-98A7-BA44-8C77-65EB-C1DC]

Know me: [Curriculum Vitae]    Follow me: [Google Scholar] | [ResearchGate] | [ORCID] | [LinkedIn] | [Github] | [HuggingFace] | [Zhihu] | [Strava]


Research Highlights


My research focuses on advancing trustworthy generative AI by identifying safety and privacy risks and developing robust mitigation strategies. Through empirical evaluation and system-level enhancements, I aim to enable responsible deployment in high-stakes domains such as healthcare, science, and cybersecurity. [Li et al., 2024; Zheng et al., 2025; Li et al., 2025; Xiong et al., 2025; Ghosh et al., 2025]

Evaluation: Benchmark • Jailbreak • Offensive Red Teaming
Enhancement: Alignment • Guardrails • Defensive Blue Teaming
Fundamental Concepts of Trustworthy Generative Artificial Intelligence
Safety: Harm Prevention • Toxicity • Misuse
Privacy: Data Protection • Anonymization
Fairness: Bias Mitigation • Equity
Robustness: Adversarial Defense • Reliability • Resilience
Explainability: Transparency • Interpretability
Human Values
Generative Artificial Intelligence Under Test
Large Language Models / Foundation Models
Conversational AI / Chatbots
Autonomous Agents
Multi-agent Systems
Real-world Applications & Downstream Tasks
Cybersecurity • Healthcare • Medical Imaging • Scientific Research • Drug Discovery • Content Creation • Education • Metaverse
Open to collaboration: The Special Interest Group in Modern Interdisciplinary Research (SIGMIR) is a volunteer-based virtual research group, open to all students, researchers, and faculty members. If you are interested in joining, please fill out the Google Form.

News