Matthias Kümmerer
Centre for Integrative Neuroscience, University of Tübingen
Verified email at bethgelab.org
Title · Cited by · Year
SciPy 1.0: fundamental algorithms for scientific computing in Python
P Virtanen, R Gommers, TE Oliphant, M Haberland, T Reddy, ...
Nature methods 17 (3), 261-272, 2020
Cited by 6260 · 2020
DeepGaze I: Boosting saliency prediction with feature maps trained on ImageNet
M Kümmerer, L Theis, M Bethge
arXiv preprint arXiv:1411.1045, 2014
Cited by 339 · 2014
DeepGaze II: Reading fixations from deep features trained on object recognition
M Kümmerer, TSA Wallis, M Bethge
arXiv preprint arXiv:1610.01563, 2016
Cited by 198 · 2016
Understanding low- and high-level contributions to fixation prediction
M Kümmerer, TSA Wallis, LA Gatys, M Bethge
Proceedings of the IEEE International Conference on Computer Vision, 4789-4798, 2017
Cited by 188 · 2017
Information-theoretic model comparison unifies saliency metrics
M Kümmerer, TSA Wallis, M Bethge
Proceedings of the National Academy of Sciences 112 (52), 16054-16059, 2015
Cited by 122 · 2015
Accurate, reliable and fast robustness evaluation
W Brendel, J Rauber, M Kümmerer, I Ustyuzhaninov, M Bethge
arXiv preprint arXiv:1907.01003, 2019
Cited by 41 · 2019
Saliency benchmarking made easy: Separating models, maps and metrics
M Kümmerer, TSA Wallis, M Bethge
Proceedings of the European Conference on Computer Vision (ECCV), 770-787, 2018
Cited by 40 · 2018
Attention to comics: Cognitive processing during the reading of graphic literature
J Laubrock, S Hohenstein, M Kümmerer
Empirical comics research, 239-263, 2018
Cited by 19 · 2018
Meaning maps and saliency models based on deep convolutional neural networks are insensitive to image meaning when predicting human fixations
MA Pedziwiatr, M Kümmerer, TSA Wallis, M Bethge, C Teufel
Cognition 206, 104465, 2021
Cited by 10 · 2021
Guiding human gaze with convolutional neural networks
LA Gatys, M Kümmerer, TSA Wallis, M Bethge
arXiv preprint arXiv:1712.06492, 2017
Cited by 9 · 2017
DeepGaze II: Predicting fixations from deep features over time and tasks
M Kümmerer, T Wallis, M Bethge
Journal of Vision 17 (10), 1147-1147, 2017
Cited by 8 · 2017
How close are we to understanding image-based saliency?
M Kümmerer, T Wallis, M Bethge
arXiv preprint arXiv:1409.7686, 2014
Cited by 8 · 2014
Saliency benchmarking: Separating models, maps and metrics
M Kümmerer, TS Wallis, M Bethge
arXiv preprint arXiv:1704.08615, 2017
Cited by 7 · 2017
MIT/Tübingen Saliency Benchmark
M Kümmerer, Z Bylinskii, T Judd, A Borji, L Itti, F Durand, A Oliva, ...
Cited by 6 · 2020
Measuring the importance of temporal features in video saliency
M Tangemann, M Kümmerer, TSA Wallis, M Bethge
European Conference on Computer Vision, 667-684, 2020
Cited by 5 · 2020
There is no evidence that meaning maps capture semantic information relevant to gaze guidance: Reply to Henderson, Hayes, Peacock, and Rehrig (2021)
MA Pedziwiatr, M Kümmerer, TSA Wallis, M Bethge, C Teufel
Cognition, 104741, 2021
Cited by 1 · 2021
Behavioural evidence for the existence of a spatiotopic free-viewing saliency map
M Kümmerer, TSA Wallis, M Bethge
Journal of Vision 19 (10), 305a-305a, 2019
Cited by 1 · 2019
Meaning maps and deep neural networks are insensitive to meaning when predicting human fixations
MA Pedziwiatr, TSA Wallis, M Kümmerer, C Teufel
Journal of Vision 19 (10), 253c-253c, 2019
Cited by 1 · 2019
Probing Neural Decision-Making in Behavioral Models of Scanpath Prediction
M Kümmerer, T Wallis, M Bethge
PERCEPTION 48, 71-71, 2019
Cited by 1 · 2019
Extending DeepGaze II: Scanpath prediction from deep features
M Kümmerer, T Wallis, M Bethge
Journal of Vision 18 (10), 371-371, 2018
Cited by 1 · 2018