Jindong Gu
University of Oxford & Google DeepMind
Verified email at robots.ox.ac.uk
Title · Cited by · Year
Understanding individual decisions of CNNs via contrastive backpropagation
J Gu, Y Yang, V Tresp
14th Asian Conference on Computer Vision (ACCV), 119-134, 2019
Cited by 117 · 2019
A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models
J Gu, Z Han, S Chen, A Beirami, B He, G Zhang, R Liao, Y Qin, V Tresp, ...
arXiv preprint arXiv:2307.12980, 2023
Cited by 76 · 2023
SegPGD: An effective and efficient adversarial attack for evaluating and boosting segmentation robustness
J Gu, H Zhao, V Tresp, PHS Torr
European Conference on Computer Vision (ECCV), 308-325, 2022
Cited by 64 · 2022
Improving the robustness of capsule networks to image affine transformations
J Gu, V Tresp
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 7285-7293, 2020
Cited by 59 · 2020
Are vision transformers robust to patch perturbations?
J Gu, V Tresp, Y Qin
European Conference on Computer Vision (ECCV), 404-421, 2022
Cited by 54 · 2022
Towards efficient adversarial training on vision transformers
B Wu*, J Gu*, Z Li, D Cai, X He, W Liu
European Conference on Computer Vision (ECCV), 307-325, 2022
Cited by 41 · 2022
Interpretable graph capsule networks for object recognition
J Gu
Proceedings of the AAAI Conference on Artificial Intelligence 35 (2), 1469-1477, 2021
Cited by 35 · 2021
Backdoor Defense via Adaptively Splitting Poisoned Dataset
K Gao, Y Bai, J Gu, Y Yang, ST Xia
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4005-4014, 2023
Cited by 34 · 2023
Effective and Efficient Vote Attack on Capsule Networks
J Gu, B Wu, V Tresp
International Conference on Learning Representations (ICLR), 2021
Cited by 32 · 2021
Capsule network is not more robust than convolutional network
J Gu, V Tresp, H Hu
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 14309-14317, 2021
Cited by 32 · 2021
Understanding bias in machine learning
J Gu, D Oelke
Workshop on Visualization for AI Explainability, IEEE Vis, 2019
Cited by 32 · 2019
Saliency methods for explaining adversarial attacks
J Gu, V Tresp
Workshop on Human-Centric Machine Learning, NeurIPS 2019, 2019
Cited by 32 · 2019
Attacking Adversarial Attacks as A Defense
B Wu, H Pan, L Shen, J Gu, S Zhao, Z Li, D Cai, X He, W Liu
arXiv preprint arXiv:2106.04938, 2021
Cited by 28 · 2021
Watermark vaccine: Adversarial attacks to prevent watermark removal
X Liu, J Liu, Y Bai, J Gu, T Chen, X Jia, X Cao
European Conference on Computer Vision (ECCV), 1-17, 2022
Cited by 23 · 2022
Search for better students to learn distilled knowledge
J Gu, V Tresp
European Conference on Artificial Intelligence (ECAI), 1159-1165, 2020
Cited by 23 · 2020
MM-SafetyBench: A Benchmark for Safety Evaluation of Multimodal Large Language Models
X Liu, Y Zhu, J Gu, Y Lan, C Yang, Y Qiao
European Conference on Computer Vision (ECCV), 2024
Cited by 17* · 2024
FRAug: Tackling federated learning with non-IID features via representation augmentation
H Chen, A Frikha, D Krompass, J Gu, V Tresp
International Conference on Computer Vision (ICCV), 4849-4859, 2023
Cited by 17 · 2023
An image is worth 1000 lies: Adversarial transferability across prompts on vision-language models
H Luo*, J Gu*, F Liu, P Torr
International Conference on Learning Representations (ICLR), 2024
Cited by 15* · 2024
A survey on transferability of adversarial examples across deep neural networks
J Gu, X Jia, P de Jorge, W Yu, X Liu, A Ma, Y Xun, A Hu, A Khakzar, Z Li, ...
Transactions on Machine Learning Research (TMLR), 2023
Cited by 15 · 2023
Semantics for global and local interpretation of deep neural networks
J Gu, V Tresp
arXiv preprint arXiv:1910.09085, 2019
Cited by 15 · 2019