Yichuan Mo
Ph.D. Candidate, Peking University
Verified email at stu.pku.edu.cn
Title · Cited by · Year
Jailbreak and guard aligned language models with only few in-context demonstrations
Z Wei, Y Wang, A Li, Y Mo, Y Wang
arXiv preprint arXiv:2310.06387, 2023
59 · 2023
When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture
Y Mo, D Wu, Y Wang, Y Guo, Y Wang
NeurIPS 2022, 2022
37 · 2022
Improving Generative Adversarial Networks via Adversarial Learning in Latent Space
Y Li, Y Mo, L Shi, J Yan, X Zhang, J Zhou
NeurIPS 2022, 2022
16 · 2022
Multi-task learning improves synthetic speech detection
Y Mo, S Wang
ICASSP 2022, 2022
13 · 2022
DICE: Domain-attack Invariant Causal Learning for Improved Data Privacy Protection and Adversarial Robustness
Q Ren, Y Chen, Y Mo, Q Wu, J Yan
SIGKDD 2022, 2022
7 · 2022
Fight Back Against Jailbreaking via Prompt Adversarial Tuning
Y Mo, Y Wang, Z Wei, Y Wang
ICLR 2024 Workshop on Secure and Trustworthy Large Language Models, 2024
5* · 2024
Towards Reliable Backdoor Attacks on Vision Transformers
Y Mo, D Wu, Y Wang, Y Guo, Y Wang
2023
PID: Prompt-Independent Data Protection Against Latent Diffusion Models
A Li, Y Mo, M Li, Y Wang
ICML 2024, 2024
TERD: A Unified Framework for Safeguarding Diffusion Models Against Backdoors
Y Mo, H Huang, M Li, A Li, Y Wang
ICML 2024, 2024