| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Inverting gradients - how easy is it to break privacy in federated learning? | J Geiping, H Bauermeister, H Dröge, M Moeller | Advances in Neural Information Processing Systems 33, 16937-16947, 2020 | 1314 | 2020 |
| An adaptive IHS pan-sharpening method | S Rahmani, M Strait, D Merkurjev, M Moeller, T Wittman | IEEE Geoscience and Remote Sensing Letters 7 (4), 746-750, 2010 | 502 | 2010 |
| Learning proximal operators: Using denoising networks for regularizing inverse imaging problems | T Meinhardt, M Moller, C Hazirbas, D Cremers | Proceedings of the IEEE International Conference on Computer Vision, 1781-1790, 2017 | 420 | 2017 |
| Witches' brew: Industrial scale data poisoning via gradient matching | J Geiping, L Fowl, WR Huang, W Czaja, G Taylor, M Moeller, T Goldstein | arXiv preprint arXiv:2009.02276, 2020 | 228 | 2020 |
| A convex model for nonnegative matrix factorization and dimensionality reduction on physical space | E Esser, M Moller, S Osher, G Sapiro, J Xin | IEEE Transactions on Image Processing 21 (7), 3239-3252, 2012 | 202 | 2012 |
| Variational depth from focus reconstruction | M Moeller, M Benning, C Schönlieb, D Cremers | IEEE Transactions on Image Processing 24 (12), 5369-5378, 2015 | 133 | 2015 |
| A variational approach for sharpening high dimensional images | M Möller, T Wittman, AL Bertozzi, M Burger | SIAM Journal on Imaging Sciences 5 (1), 150-178, 2012 | 115 | 2012 |
| Spectral decompositions using one-homogeneous functionals | M Burger, G Gilboa, M Moeller, L Eckardt, D Cremers | SIAM Journal on Imaging Sciences 9 (3), 1374-1408, 2016 | 89 | 2016 |
| Collaborative total variation: A general framework for vectorial TV models | J Duran, M Moeller, C Sbert, D Cremers | SIAM Journal on Imaging Sciences 9 (1), 116-151, 2016 | 84 | 2016 |
| What doesn't kill you makes you robust(er): How to adversarially train against data poisoning | J Geiping, L Fowl, G Somepalli, M Goldblum, M Moeller, T Goldstein | arXiv preprint arXiv:2102.13624, 2021 | 81 | 2021 |
| Point-wise map recovery and refinement from functional correspondence | E Rodolà, M Moeller, D Cremers | arXiv preprint arXiv:1506.05603, 2015 | 78 | 2015 |
| Stochastic training is not necessary for generalization | J Geiping, M Goldblum, PE Pope, M Moeller, T Goldstein | arXiv preprint arXiv:2109.14119, 2021 | 77 | 2021 |
| A variational approach to hyperspectral image fusion | M Moeller, T Wittman, AL Bertozzi | Algorithms and Technologies for Multispectral, Hyperspectral, and …, 2009 | 77 | 2009 |
| The primal-dual hybrid gradient method for semiconvex splittings | T Mollenhoff, E Strekalovskiy, M Moeller, D Cremers | SIAM Journal on Imaging Sciences 8 (2), 827-857, 2015 | 71 | 2015 |
| Improving deep learning for HAR with shallow LSTMs | M Bock, A Hölzemann, M Moeller, K Van Laerhoven | Proceedings of the 2021 ACM International Symposium on Wearable Computers, 7-12, 2021 | 63 | 2021 |
| An adaptive inverse scale space method for compressed sensing | M Burger, M Möller, M Benning, S Osher | Mathematics of Computation 82 (281), 269-299, 2013 | 63 | 2013 |
| DS*: Tighter lifting-free convex relaxations for quadratic matching problems | F Bernard, C Theobalt, M Moeller | Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018 | 50 | 2018 |
| Sublabel-accurate relaxation of nonconvex energies | T Mollenhoff, E Laude, M Moeller, J Lellmann, D Cremers | Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2016 | 49 | 2016 |
| Proximal backpropagation | T Frerix, T Möllenhoff, M Moeller, D Cremers | arXiv preprint arXiv:1706.04638, 2017 | 44 | 2017 |
| Truth or backpropaganda? An empirical investigation of deep learning theory | M Goldblum, J Geiping, A Schwarzschild, M Moeller, T Goldstein | arXiv preprint arXiv:1910.00359, 2019 | 40 | 2019 |