Jonas Kohler
Verified email at inf.ethz.ch - Homepage
Title · Cited by · Year
Sub-sampled cubic regularization for non-convex optimization
JM Kohler, A Lucchi
ICML 2017, 2017
Cited by 186 · 2017
Escaping Saddles with Stochastic Gradients
H Daneshmand, J Kohler, A Lucchi, T Hofmann
ICML 2018, 2018
Cited by 166 · 2018
Exponential convergence rates for Batch Normalization: The power of length-direction decoupling in non-convex optimization
J Kohler, H Daneshmand, A Lucchi, M Zhou, K Neymeyr, T Hofmann
AISTATS 2019, 2019
Cited by 151* · 2019
Batch normalization provably avoids ranks collapse for randomly initialised deep networks
H Daneshmand, J Kohler, F Bach, T Hofmann, A Lucchi
NeurIPS 2020, 2020
Cited by 64 · 2020
This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks
A Hoffmann, C Fanconi, R Rade, J Kohler
ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend …, 2021
Cited by 54 · 2021
Learning Generative Models of Textured 3D Meshes from Real-World Images
D Pavllo, J Kohler, T Hofmann, A Lucchi
ICCV 2021, 2021
Cited by 54 · 2021
The Role of Memory in Stochastic Optimization
A Orvieto, J Kohler, A Lucchi
UAI, 2019, 2019
Cited by 41* · 2019
Synthesizing Speech from Intracranial Depth Electrodes using an Encoder-Decoder Framework
J Kohler, MC Ottenhoff, S Goulis, M Angrick, AJ Colon, L Wagner, ...
Neurons, Behavior, Data analysis, and Theory (NBDT), 2021
Cited by 24 · 2021
Adaptive norms for deep learning with regularised Newton methods
J Kohler, L Adolphs, A Lucchi
NeurIPS 2019 Workshop: Beyond First-Order Optimization Methods in Machine …, 2019
Cited by 19* · 2019
Vanishing Curvature in Randomly Initialized Deep ReLU Networks.
A Orvieto, J Kohler, D Pavllo, T Hofmann, A Lucchi
AISTATS, 7942-7975, 2022
Cited by 16* · 2022
Safe Deep Reinforcement Learning for Multi-Agent Systems with Continuous Action Spaces
Z Sheebaelhamd, K Zisis, A Nisioti, D Gkouletsos, D Pavllo, J Kohler
ICML 2021 Workshop on Reinforcement Learning for Real Life Workshop, 2021
Cited by 16 · 2021
A Sub-sampled Tensor Method for Non-convex Optimization
A Lucchi, J Kohler
IMA Journal of Numerical Analysis 43 (5), 2019
Cited by 16* · 2019
Imagine flash: Accelerating emu diffusion models with backward distillation
J Kohler, A Pumarola, E Schönfeld, A Sanakoyeu, R Sumbaly, P Vajda, ...
arXiv preprint arXiv:2405.05224, 2024
Cited by 10 · 2024
Cache Me if You Can: Accelerating Diffusion Models through Block Caching
F Wimbauer, B Wu, E Schoenfeld, X Dai, J Hou, Z He, A Sanakoyeu, ...
CVPR 2024, 2023
Cited by 10 · 2023
Two-Level K-FAC Preconditioning for Deep Learning
N Tselepidis, J Kohler, A Orvieto
NeurIPS 2020 Workshop on Optimization for Machine Learning (OPT2020), 2020
Cited by 7 · 2020
Adaptive guidance: Training-free acceleration of conditional diffusion models
A Castillo, J Kohler, JC Pérez, JP Pérez, A Pumarola, B Ghanem, ...
arXiv preprint arXiv:2312.12487, 2023
Cited by 5 · 2023
fMPI: Fast Novel View Synthesis in the Wild with Layered Scene Representations
J Kohler, NG Sanchez, L Cavalli, C Herold, A Pumarola, AG Garcia, ...
CV4MR 2024, 2023
Cited by 1 · 2023
Insights on the interplay of network architectures and optimization algorithms in deep learning
J Kohler
ETH Zurich, 2022
Year 2022