RenderGAN: Generating realistic labeled data L Sixt, B Wild, T Landgraf Frontiers in Robotics and AI 5, 66, 2018 | 194 | 2018 |
Restricting the Flow: Information Bottlenecks for Attribution K Schulz, L Sixt, F Tombari, T Landgraf International Conference on Learning Representations, 2020 | 158 | 2020 |
When Explanations Lie: Why Many Modified BP Attributions Fail L Sixt, M Granz, T Landgraf International Conference on Machine Learning, 9046-9057, 2020 | 126* | 2020 |
Automatic localization and decoding of honeybee markers using deep convolutional neural networks B Wild, L Sixt, T Landgraf arXiv preprint arXiv:1802.04557, 2018 | 24 | 2018 |
Do users benefit from interpretable vision? a user study, baseline, and dataset L Sixt, M Schuessler, OI Popescu, P Weiß, T Landgraf arXiv preprint arXiv:2204.11642, 2022 | 10 | 2022 |
RenderGAN: Generating realistic labeled data – with an application on decoding bee tags L Sixt Unpublished Bachelor Thesis, Freie Universität Berlin, 2016 | 9 | 2016 |
DNNR: Differential Nearest Neighbors Regression Y Nader, L Sixt, T Landgraf International Conference on Machine Learning, 16296-16317, 2022 | 3 | 2022 |
Interpretability Through Invertibility: A Deep Convolutional Network With Ideal Counterfactuals And Isosurfaces L Sixt, M Schuessler, P Weiß, T Landgraf | 3 | 2020 |
Two4Two: Evaluating interpretable machine learning – a synthetic dataset for controlled experiments M Schuessler, P Weiß, L Sixt arXiv preprint arXiv:2105.02825, 2021 | 2 | 2021 |
A rigorous study of the deep taylor decomposition L Sixt, T Landgraf arXiv preprint arXiv:2211.08425, 2022 | 1 | 2022 |
Analyzing a Caching Model L Sixt, EZ Liu, M Pellat, J Wexler, H Milad, B Kim, M Maas arXiv preprint arXiv:2112.06989, 2021 | 1 | 2021 |
The emojicite package: Adds Emojis to Citations L Sixt | | |