Róbert Csordás
Stanford NLP
Verified email at stanford.edu - Homepage
Title
Cited by
Year
The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers
R Csordás, K Irie, J Schmidhuber
EMNLP 2021
106 · 2021
Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks
R Csordás, S van Steenkiste, J Schmidhuber
International Conference on Learning Representations (ICLR), 2021
79* · 2021
Going Beyond Linear Transformers with Recurrent Fast Weight Programmers
K Irie, I Schlag, R Csordás, J Schmidhuber
Conference on Neural Information Processing Systems (NeurIPS), 2021
55 · 2021
A generalist neural algorithmic learner
B Ibarz, V Kurin, G Papamakarios, K Nikiforou, M Bennani, R Csordás, ...
Learning on Graphs Conference, 2:1-2:23, 2022
45 · 2022
The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization
R Csordás, K Irie, J Schmidhuber
International Conference on Learning Representations (ICLR), 2021
43 · 2021
Improving Differentiable Neural Computers Through Memory Masking, De-allocation, and Link Distribution Sharpness Control
R Csordás, J Schmidhuber
International Conference on Learning Representations (ICLR), 2019
42 · 2019
Randomized Positional Encodings Boost Length Generalization of Transformers
A Ruoss, G Delétang, T Genewein, J Grau-Moya, R Csordás, M Bennani, ...
arXiv preprint arXiv:2305.16843, 2023
35 · 2023
Method and apparatus for generating a displacement map of an input dataset pair
R Csordás, Á Kis-Benedek, B Szalkai
US Patent 10,380,753, 2019
29 · 2019
Mindstorms in Natural Language-Based Societies of Mind
M Zhuge, H Liu, F Faccio, DR Ashley, R Csordás, A Gopalakrishnan, ...
arXiv preprint arXiv:2305.17066, 2023
28 · 2023
A Modern Self-Referential Weight Matrix That Learns to Modify Itself
K Irie, I Schlag, R Csordás, J Schmidhuber
Deep RL Workshop, NeurIPS 2021
28 · 2021
The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention
K Irie*, R Csordás*, J Schmidhuber
International Conference on Machine Learning (ICML), 2022
22 · 2022
Approximating Two-Layer Feedforward Networks for Efficient Transformers
R Csordás, K Irie, J Schmidhuber
arXiv preprint arXiv:2310.10837, 2023
7 · 2023
CTL++: Evaluating Generalization on Never-Seen Compositional Patterns of Known Functions, and Compatibility of Neural Representations
R Csordás, K Irie, J Schmidhuber
EMNLP 2022
7 · 2022
SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention
R Csordás, P Piękos, K Irie, J Schmidhuber
arXiv preprint arXiv:2312.07987, 2023
3 · 2023
Improving Baselines in the Wild
K Irie, I Schlag, R Csordás, J Schmidhuber
NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and …, 2021
2 · 2021
Topological Neural Discrete Representation Learning à la Kohonen
K Irie*, R Csordás*, J Schmidhuber
arXiv preprint arXiv:2302.07950, 2023
1 · 2023
Automating Continual Learning
K Irie, R Csordás, J Schmidhuber
2023
Systematic generalization in connectionist models
R Csordás
2023
Articles 1–18