Xingjun Ma
Associate Professor, School of Computer Science, Fudan University
Verified email at unimelb.edu.au - Homepage
Title
Cited by
Year
Characterizing adversarial subspaces using local intrinsic dimensionality
X Ma, B Li, Y Wang, SM Erfani, S Wijewickrema, G Schoenebeck, D Song, ...
ICLR 2018, 2018
Cited by 505, 2018
Symmetric cross entropy for robust learning with noisy labels
Y Wang, X Ma, Z Chen, Y Luo, J Yi, J Bailey
ICCV 2019, 2019
Cited by 390, 2019
Improving adversarial robustness requires revisiting misclassified examples
Y Wang, D Zou, J Yi, J Bailey, X Ma, Q Gu
ICLR 2020, 2020
Cited by 297, 2020
Dimensionality-driven learning with noisy labels
X Ma, Y Wang, ME Houle, S Zhou, SM Erfani, ST Xia, S Wijewickrema, ...
ICML 2018, 2018
Cited by 293, 2018
Iterative learning with open-set noisy labels
Y Wang, W Liu, X Ma, J Bailey, H Zha, L Song, ST Xia
CVPR 2018, 2018
Cited by 234, 2018
On the Convergence and Robustness of Adversarial Training
Y Wang, X Ma, J Bailey, J Yi, B Zhou, Q Gu
ICML 2019, 2019
Cited by 205, 2019
Understanding adversarial attacks on deep learning based medical image analysis systems
X Ma, Y Niu, L Gu, Y Wang, Y Zhao, J Bailey, F Lu
Pattern Recognition 110, 107332, 2021
Cited by 191, 2021
Reflection backdoor: A natural backdoor attack on deep neural networks
Y Liu, X Ma, J Bailey, F Lu
ECCV 2020, 2020
Cited by 155, 2020
Skip connections matter: On the transferability of adversarial examples generated with resnets
D Wu, Y Wang, ST Xia, J Bailey, X Ma
ICLR 2020, 2020
Cited by 148, 2020
Normalized loss functions for deep learning with noisy labels
X Ma, H Huang, Y Wang, S Romano, S Erfani, J Bailey
ICML 2020, 2020
Cited by 143, 2020
Towards fair and privacy-preserving federated deep models
L Lyu, J Yu, K Nandakumar, Y Li, X Ma, J Jin, H Yu, KS Ng
IEEE Transactions on Parallel and Distributed Systems 31 (11), 2524-2541, 2020
Cited by 115*, 2020
Clean-Label Backdoor Attacks on Video Recognition Models
S Zhao, X Ma, X Zheng, J Bailey, J Chen, YG Jiang
CVPR 2020, 2020
Cited by 101, 2020
Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles
R Duan, X Ma, Y Wang, J Bailey, AK Qin, Y Yang
CVPR 2020, 2020
Cited by 92, 2020
Black-box adversarial attacks on video recognition models
L Jiang, X Ma, S Chen, J Bailey, YG Jiang
ACM MM 2019, 2019
Cited by 85, 2019
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
Y Li, N Koren, L Lyu, X Lyu, B Li, X Ma
ICLR 2021, 2021
Cited by 81, 2021
WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection
B Zi, M Chang, J Chen, X Ma, YG Jiang
ACM MM 2020, 2020
Cited by 74, 2020
Privacy and robustness in federated learning: Attacks and defenses
L Lyu, H Yu, X Ma, L Sun, J Zhao, Q Yang, PS Yu
arXiv preprint arXiv:2012.06337, 2020
Cited by 65, 2020
Improving adversarial robustness via channel-wise activation suppressing
Y Bai, Y Zeng, Y Jiang, ST Xia, X Ma, Y Wang
ICLR 2021, 2021
Cited by 53, 2021
Unlearnable Examples: Making Personal Data Unexploitable
H Huang, X Ma, SM Erfani, J Bailey, Y Wang
ICLR 2021, 2021
Cited by 33, 2021
Anti-Backdoor Learning: Training Clean Models on Poisoned Data
Y Li, X Lyu, N Koren, L Lyu, B Li, X Ma
NeurIPS 2021, 2021
Cited by 24, 2021