Jonas Rauber
University of Tübingen & Max Planck Research School for Intelligent Systems
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
W Brendel*, J Rauber*, M Bethge
International Conference on Learning Representations 2018, 2018
Foolbox: A Python toolbox to benchmark the robustness of machine learning models
J Rauber*, W Brendel*, M Bethge
Reliable Machine Learning in the Wild - ICML 2017 Workshop, 2017
Comparing deep neural networks against humans: object recognition when the signal gets weaker
R Geirhos, DHJ Janssen, HH Schütt, J Rauber, M Bethge, FA Wichmann
arXiv preprint arXiv:1706.06969, 2017
Technical Report on the CleverHans v2.1.0 Adversarial Examples Library
N Papernot, F Faghri, N Carlini, I Goodfellow, R Feinman, A Kurakin, ...
arXiv preprint arXiv:1610.00768, 2018
Towards the first adversarially robust neural network model on MNIST
L Schott*, J Rauber*, W Brendel, M Bethge
International Conference on Learning Representations 2019, 2018
On Evaluating Adversarial Robustness
N Carlini, A Athalye, N Papernot, W Brendel, J Rauber, D Tsipras, ...
arXiv preprint arXiv:1902.06705, 2019
Generalisation in humans and deep neural networks
R Geirhos*, CRM Temme*, J Rauber*, HH Schuett, M Bethge, ...
Advances in Neural Information Processing Systems 31, 2018
Adversarial Vision Challenge
W Brendel, J Rauber, A Kurakin, N Papernot, B Veliqi, M Salathé, ...
Neural Information Processing Systems 2018, 2018
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
F Croce*, J Rauber*, M Hein
arXiv preprint arXiv:1903.11359, 2019
Accurate, reliable and fast robustness evaluation
W Brendel, J Rauber, M Kümmerer, I Ustyuzhaninov, M Bethge
arXiv preprint arXiv:1907.01003, 2019
Foolbox Documentation
J Rauber, W Brendel
Read the Docs, 2018