Sebastian Lapuschkin (né Bach)
Head of Explainable AI Group, Fraunhofer Heinrich Hertz Institute
On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation
S Bach, A Binder, G Montavon, F Klauschen, KR Müller, W Samek
PLOS ONE 10 (7), e0130140, 2015
Explaining nonlinear classification decisions with deep taylor decomposition
G Montavon, S Lapuschkin, A Binder, W Samek, KR Müller
Pattern Recognition 65, 211-222, 2017
Evaluating the visualization of what a deep neural network has learned
W Samek, A Binder, G Montavon, S Lapuschkin, KR Müller
IEEE transactions on neural networks and learning systems 28 (11), 2660-2673, 2016
Unmasking clever hans predictors and assessing what machines really learn
S Lapuschkin, S Wäldchen, A Binder, G Montavon, W Samek, KR Müller
Nature communications 10 (1), 1-8, 2019
Interpretable deep neural networks for single-trial EEG classification
I Sturm, S Lapuschkin, W Samek, KR Müller
Journal of neuroscience methods 274, 141-145, 2016
Layer-wise relevance propagation for neural networks with local renormalization layers
A Binder, G Montavon, S Lapuschkin, KR Müller, W Samek
International Conference on Artificial Neural Networks, 63-71, 2016
iNNvestigate neural networks!
M Alber, S Lapuschkin, P Seegerer, M Hägele, KT Schütt, G Montavon, ...
The Journal of Machine Learning Research 20 (93), 1-8, 2019
Analyzing classifiers: Fisher vectors and deep neural networks
S Lapuschkin, A Binder, G Montavon, KR Müller, W Samek
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2016
Layer-wise relevance propagation: an overview
G Montavon, A Binder, S Lapuschkin, W Samek, KR Müller
Explainable AI: interpreting, explaining and visualizing deep learning, 193-209, 2019
The LRP toolbox for artificial neural networks
S Lapuschkin, A Binder, G Montavon, KR Müller, W Samek
The Journal of Machine Learning Research 17 (1), 3938-3942, 2016
Understanding and comparing deep neural networks for age and gender classification
S Lapuschkin, A Binder, KR Müller, W Samek
Proceedings of the IEEE International Conference on Computer Vision …, 2017
Layer-wise relevance propagation for deep neural network architectures
A Binder, S Bach, G Montavon, KR Müller, W Samek
Information Science and Applications (ICISA) 2016, LNEE 6679, 913-922, 2016
Explaining the unique nature of individual gait patterns with deep learning
F Horst, S Lapuschkin, W Samek, KR Müller, WI Schöllhorn
Scientific reports 9 (1), 1-13, 2019
Toward interpretable machine learning: Transparent deep neural networks and beyond
W Samek, G Montavon, S Lapuschkin, CJ Anders, KR Müller
arXiv preprint arXiv:2003.07631, 2020
Interpreting and explaining deep neural networks for classification of audio signals
S Becker, M Ackermann, S Lapuschkin, KR Müller, W Samek
arXiv preprint arXiv:1807.03418, 2018
Resolving challenges in deep learning-based analyses of histopathological images using explanation methods
M Hägele, P Seegerer, S Lapuschkin, M Bockmayr, W Samek, ...
Scientific reports 10 (1), 1-12, 2020
Towards best practice in explaining neural network decisions with LRP
M Kohlbrenner, A Bauer, S Nakajima, A Binder, W Samek, S Lapuschkin
2020 International Joint Conference on Neural Networks (IJCNN), 1-7, 2020
Interpretable human action recognition in compressed domain
V Srinivasan, S Lapuschkin, C Hellge, KR Müller, W Samek
2017 IEEE International Conference on Acoustics, Speech and Signal …, 2017
Interpreting the predictions of complex ml models by layer-wise relevance propagation
W Samek, G Montavon, A Binder, S Lapuschkin, KR Müller
arXiv preprint arXiv:1611.08191, 2016
Pruning by explaining: A novel criterion for deep neural network pruning
SK Yeom, P Seegerer, S Lapuschkin, A Binder, S Wiedemann, KR Müller, ...
Pattern Recognition 115, 107899, 2021