Jonas Rauber
University of Tübingen & Max Planck Research School for Intelligent Systems
Verified email at uni-tuebingen.de
Title
Cited by
Year
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
W Brendel*, J Rauber*, M Bethge
International Conference on Learning Representations 2018, 2018
Cited by 368, 2018
Foolbox: A Python toolbox to benchmark the robustness of machine learning models
J Rauber*, W Brendel*, M Bethge
Reliable Machine Learning in the Wild Workshop, ICML 2017, 2017
Cited by 259*, 2017
On evaluating adversarial robustness
N Carlini, A Athalye, N Papernot, W Brendel, J Rauber, D Tsipras, ...
arXiv preprint arXiv:1902.06705, 2019
Cited by 187, 2019
Generalisation in humans and deep neural networks
R Geirhos*, CRM Temme*, J Rauber*, HH Schuett, M Bethge, ...
Advances in Neural Information Processing Systems 31 (2018), 2018
Cited by 132, 2018
Towards the first adversarially robust neural network model on MNIST
L Schott*, J Rauber*, W Brendel, M Bethge
International Conference on Learning Representations 2019, 2018
Cited by 132*, 2018
Technical Report on the CleverHans v2.1.0 Adversarial Examples Library
N Papernot, F Faghri, N Carlini, I Goodfellow, R Feinman, A Kurakin, ...
arXiv preprint arXiv:1610.00768, 2018
Cited by 118, 2018
Comparing deep neural networks against humans: object recognition when the signal gets weaker
R Geirhos, DHJ Janssen, HH Schütt, J Rauber, M Bethge, FA Wichmann
arXiv preprint arXiv:1706.06969, 2017
Cited by 113, 2017
Adversarial Vision Challenge
W Brendel, J Rauber, A Kurakin, N Papernot, B Veliqi, M Salathé, ...
Neural Information Processing Systems 2018, 2018
Cited by 36, 2018
Accurate, reliable and fast robustness evaluation
W Brendel, J Rauber, M Kümmerer, I Ustyuzhaninov, M Bethge
Advances in Neural Information Processing Systems, 12861-12871, 2019
Cited by 10, 2019
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
F Croce*, J Rauber*, M Hein
arXiv preprint arXiv:1903.11359, 2019
Cited by 7, 2019
Fast Differentiable Clipping-Aware Normalization and Rescaling
J Rauber, M Bethge
arXiv preprint arXiv:2007.07677, 2020
Cited by 3, 2020
Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX
J Rauber, R Zimmermann, M Bethge, W Brendel
Journal of Open Source Software 5 (53), 2607, 2020
Cited by 2, 2020
EagerPy: Writing Code That Works Natively with PyTorch, TensorFlow, JAX, and NumPy
J Rauber, M Bethge, W Brendel
arXiv preprint arXiv:2008.04175, 2020
Cited by 1, 2020
Modeling patterns of smartphone usage and their relationship to cognitive health
J Rauber, E Fox, L Gatys
Machine Learning for Health Workshop, NeurIPS 2019, 2019
2019
Inducing a human-like shape bias leads to emergent human-level distortion robustness in CNNs
R Geirhos, P Rubisch, J Rauber, CRM Temme, C Michaelis, W Brendel, ...
19th Annual Meeting of the Vision Sciences Society (VSS 2019), 209c-209c, 2019
2019
Foolbox Documentation
J Rauber, W Brendel
Read the Docs, 2018
2018