Lu Yin
Assistant Professor (Lecturer) at the University of Aberdeen; Research Fellow at TU/e
Verified email address - Homepage
Cited by
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
S Liu, T Chen, X Chen, Z Atashgahi, L Yin, H Kou, L Shen, M Pechenizkiy, ...
[NeurIPS 2021] Advances in Neural Information Processing Systems 34, 9908-9922, 2021
Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training
S Liu, L Yin, DC Mocanu, M Pechenizkiy
[ICML 2021] International Conference on Machine Learning, 6989-7000, 2021
You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained Graph Tickets
T Huang, T Chen, M Fang, V Menkovski, J Zhao, L Yin, Y Pei, DC Mocanu, ...
[LoG 2022 BEST PAPER] Learning on Graphs Conference, 2022
Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity
L Yin, Y Wu, Z Zhang, CY Hsieh, Y Wang, Y Jia, M Pechenizkiy, Y Liang, ...
arXiv preprint arXiv:2310.05175, 2023
Are Large Kernels Better Teachers than Transformers for ConvNets?
T Huang, L Yin, Z Zhang, L Shen, M Fang, M Pechenizkiy, Z Wang, S Liu
[ICML 2023] International Conference on Machine Learning, 2023
Lottery Pools: Winning More by Interpolating Tickets without Increasing Training or Inference Cost
L Yin, S Liu, M Fang, T Huang, V Menkovski, M Pechenizkiy
[AAAI 2023] Thirty-Seventh AAAI Conference on Artificial Intelligence, 2022
Superposing Many Tickets into One: A Performance Booster for Sparse Neural Network Training
L Yin, V Menkovski, M Fang, T Huang, Y Pei, M Pechenizkiy, DC Mocanu, ...
[UAI 2022] The 38th Conference on Uncertainty in Artificial Intelligence, 2022
Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks
Z Atashgahi, X Zhang, N Kichler, S Liu, L Yin, M Pechenizkiy, R Veldhuis, ...
[TMLR] Transactions on Machine Learning Research, 2023
Knowledge Elicitation using Deep Metric Learning and Psychometric Testing
L Yin, V Menkovski, M Pechenizkiy
[ECML 2020] European Conference on Machine Learning, 2020
Dynamic Sparsity Is Channel-Level Sparsity Learner
L Yin, G Li, M Fang, L Shen, T Huang, Z Wang, V Menkovski, X Ma, ...
[NeurIPS 2023], 2023
Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-Trained Weights through Sparsity
L Yin, S Liu, A Jaiswal, S Kundu, Z Wang
arXiv preprint arXiv:2310.02277, 2023
Hierarchical Semantic Segmentation using Psychometric Learning
L Yin, V Menkovski, S Liu, M Pechenizkiy
[ACML 2021 LONG ORAL] Asian Conference on Machine Learning, 2021
Enhancing Adversarial Training via Reweighting Optimization Trajectory
T Huang, S Liu, T Chen, M Fang, L Shen, V Menkovski, L Yin, Y Pei, ...
[ECML PKDD 2023] European Conference on Machine Learning and Principles and …, 2023
Semantic-Based Few-Shot Learning by Interactive Psychometric Testing
L Yin, V Menkovski, Y Pei, M Pechenizkiy
[AAAI 2022 IML Workshop] AAAI 2022 Workshop on Interactive Machine Learning, 2021
FFN-SkipLLM: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping
A Jaiswal, B Hu, L Yin, Y Ro, S Liu, T Chen, A Akella
arXiv preprint arXiv:2404.03865, 2024
E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation
B Wu, Q Xiao, S Liu, L Yin, M Pechenizkiy, DC Mocanu, M Van Keulen, ...
arXiv preprint arXiv:2312.04727, 2023
A Structural-Clustering Based Active Learning for Graph Neural Networks
RM Fajri, Y Pei, L Yin, M Pechenizkiy
[IDA 2024] International Symposium on Intelligent Data Analysis, 2023
BiDST: Dynamic Sparse Training is a Bi-Level Optimization Problem
J Ji, G Li, L Yin, M Qin, G Yuan, L Guo, S Liu, X Ma
NeurRev: Train Better Sparse Neural Network Practically via Neuron Revitalization
G Li, L Yin, J Ji, W Niu, M Qin, B Ren, L Guo, S Liu, X Ma
[ICLR 2024] The Twelfth International Conference on Learning Representations, 2023
REST: Enhancing Group Robustness in DNNs Through Reweighted Sparse Training
J Zhao, L Yin, S Liu, M Fang, M Pechenizkiy
[ECML PKDD 2023], 313-329, 2023