Marina Marie-Claire Höhne (née Vidovic)
Full Professor at the University of Potsdam; Head of the Data Science Department at ATB-Potsdam
Verified email at uni-potsdam.de
Title | Cited by | Year
Quantus: An explainable AI toolkit for responsible evaluation of neural network explanations and beyond
A Hedström, L Weber, D Krakowczyk, D Bareeva, F Motzkus, W Samek, ...
Journal of Machine Learning Research 24 (34), 1-11, 2023
Cited by 195 | 2023
Improving the robustness of myoelectric pattern recognition for upper limb prostheses by covariate shift adaptation
MMC Vidovic, HJ Hwang, S Amsüss, JM Hahne, D Farina, KR Müller
IEEE Transactions on Neural Systems and Rehabilitation Engineering 24 (9 …, 2015
Cited by 191 | 2015
This looks more like that: Enhancing self-explaining models by prototypical relevance propagation
S Gautam, MMC Höhne, S Hansen, R Jenssen, M Kampffmeyer
Pattern Recognition 136, 109172, 2023
Cited by 44 | 2023
Feature importance measure for non-linear learning algorithms
MMC Vidovic, N Görnitz, KR Müller, M Kloft
arXiv preprint arXiv:1611.07567, 2016
Cited by 41 | 2016
DeepCOMBI: explainable artificial intelligence for the analysis and discovery in genome-wide association studies
B Mieth, A Rozier, JA Rodriguez, MMC Höhne, N Görnitz, KR Müller
NAR genomics and bioinformatics 3 (3), lqab065, 2021
Cited by 39 | 2021
Using transfer learning from prior reference knowledge to improve the clustering of single-cell RNA-Seq data
B Mieth, JRF Hockley, N Görnitz, MMC Vidovic, KR Müller, A Gutteridge, ...
Scientific reports 9 (1), 20353, 2019
Cited by 35 | 2019
ProtoVAE: A trustworthy self-explainable prototypical variational model
S Gautam, A Boubekki, S Hansen, S Salahuddin, R Jenssen, M Höhne, ...
Advances in Neural Information Processing Systems 35, 17940-17952, 2022
Cited by 34 | 2022
NoiseGrad—enhancing explanations by introducing stochasticity to model weights
K Bykov, A Hedström, S Nakajima, MMC Höhne
Proceedings of the AAAI Conference on Artificial Intelligence 36 (6), 6132-6140, 2022
Cited by 34 | 2022
How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks
K Bykov, MMC Höhne, KR Müller, S Nakajima, M Kloft
arXiv preprint arXiv:2006.09000, 2020
Cited by 34 | 2020
Explaining Bayesian neural networks
K Bykov, MMC Höhne, A Creosteanu, KR Müller, F Klauschen, ...
arXiv preprint arXiv:2108.10346, 2021
Cited by 27 | 2021
Finding the right XAI method—a guide for the evaluation and ranking of explainable AI methods in climate science
PL Bommer, M Kretschmer, A Hedström, D Bareeva, MMC Höhne
Artificial Intelligence for the Earth Systems 3 (3), e230074, 2024
Cited by 26 | 2024
Covariate shift adaptation in EMG pattern recognition for prosthetic device control
MMC Vidovic, LP Paredes, HJ Hwang, S Amsüss, J Pahl, JM Hahne, ...
2014 36th annual international conference of the IEEE engineering in …, 2014
Cited by 21 | 2014
The meta-evaluation problem in explainable AI: identifying reliable estimators with MetaQuantus
A Hedström, P Bommer, KK Wickstrøm, W Samek, S Lapuschkin, ...
arXiv preprint arXiv:2302.07265, 2023
Cited by 20 | 2023
Opening the black box: Revealing interpretable sequence motifs in kernel-based learning algorithms
MMC Vidovic, N Görnitz, KR Müller, G Rätsch, M Kloft
Machine Learning and Knowledge Discovery in Databases: European Conference …, 2015
Cited by 17 | 2015
DORA: Exploring outlier representations in deep neural networks
K Bykov, M Deb, D Grinwald, KR Müller, MMC Höhne
arXiv preprint arXiv:2206.04530, 2022
Cited by 14 | 2022
SVM2Motif—reconstructing overlapping DNA sequence motifs by mimicking an SVM predictor
MMC Vidovic, N Görnitz, KR Müller, G Rätsch, M Kloft
PloS one 10 (12), e0144782, 2015
Cited by 11 | 2015
Demonstrating the risk of imbalanced datasets in chest x-ray image-based diagnostics by prototypical relevance propagation
S Gautam, MMC Höhne, S Hansen, R Jenssen, M Kampffmeyer
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), 1-5, 2022
Cited by 10 | 2022
Labeling neural representations with inverse recognition
K Bykov, L Kopf, S Nakajima, M Kloft, M Höhne
Advances in Neural Information Processing Systems 36, 2024
Cited by 9 | 2024
Self-supervised learning for 3D medical image analysis using 3D SimCLR and Monte Carlo dropout
Y Ali, A Taleb, MMC Höhne, C Lippert
arXiv preprint arXiv:2109.14288, 2021
Cited by 9 | 2021
How much can I trust you? Quantifying uncertainties in explaining neural networks
K Bykov, MMC Höhne, KR Müller, S Nakajima, M Kloft
2020
Cited by 9 | 2020