Yichuan Mo
Ph.D. Candidate, Peking University
Verified email at stu.pku.edu.cn
Title · Cited by · Year
Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations
Z Wei, Y Wang, A Li, Y Mo, Y Wang
arXiv preprint arXiv:2310.06387, 2023
183 · 2023
When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture
Y Mo, D Wu, Y Wang, Y Guo, Y Wang
NeurIPS 2022, 2022
66 · 2022
Multi-Task Learning Improves Synthetic Speech Detection
Y Mo, S Wang
ICASSP 2022, 2022
21 · 2022
Improving Generative Adversarial Networks via Adversarial Learning in Latent Space
Y Li, Y Mo, L Shi, J Yan, X Zhang, J Zhou
NeurIPS 2022, 2022
20 · 2022
Fight Back against Jailbreaking via Prompt Adversarial Tuning
Y Mo, Y Wang, Z Wei, Y Wang
NeurIPS 2024, 2024
19* · 2024
DICE: Domain-attack Invariant Causal Learning for Improved Data Privacy Protection and Adversarial Robustness
Q Ren, Y Chen, Y Mo, Q Wu, J Yan
SIGKDD 2022, 2022
13 · 2022
TERD: A Unified Framework for Safeguarding Diffusion Models Against Backdoors
Y Mo, H Huang, M Li, A Li, Y Wang
ICML 2024, 2024
7 · 2024
PID: Prompt-Independent Data Protection Against Latent Diffusion Models
A Li, Y Mo, M Li, Y Wang
ICML 2024, 2024
3 · 2024
On the Adversarial Transferability of Generalized "Skip Connections"
Y Wang, Y Mo, D Wu, M Li, X Ma, Z Lin
arXiv preprint arXiv:2410.08950, 2024
2 · 2024
Towards Reliable Backdoor Attacks on Vision Transformers
Y Mo, D Wu, Y Wang, Y Guo, Y Wang
2023
Articles 1–10