Tianlong Chen
Assistant Professor, CS@UNC Chapel Hill; PostDoc, CSAIL@MIT+BMI@Harvard; Ph.D., ECE@UT Austin
Verified email at cs.unc.edu - Homepage
Title · Cited by · Year
Graph Contrastive Learning with Augmentations
Y You, T Chen, Y Sui, T Chen, Z Wang, Y Shen
Advances in Neural Information Processing Systems (NeurIPS), 2020
2030 · 2020
ABD-Net: Attentive but Diverse Person Re-Identification
T Chen, S Ding, J Xie, Y Yuan, W Chen, Y Yang, Z Ren, Z Wang
IEEE International Conference on Computer Vision (ICCV), 2019
615 · 2019
Graph Contrastive Learning Automated
Y You, T Chen, Y Shen, Z Wang
International Conference on Machine Learning (ICML), 2021
470 · 2021
The Lottery Ticket Hypothesis for Pre-trained BERT Networks
T Chen, J Frankle, S Chang, S Liu, Y Zhang, Z Wang, M Carbin
Advances in Neural Information Processing Systems (NeurIPS), 2020
374 · 2020
Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
T Chen, S Liu, S Chang, Y Cheng, L Amini, Z Wang
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020
266 · 2020
When Does Self-Supervision Help Graph Convolutional Networks?
Y You, T Chen, Z Wang, Y Shen
International Conference on Machine Learning (ICML), 2020
239 · 2020
Robust Pre-Training by Adversarial Contrastive Learning
Z Jiang, T Chen, T Chen, Z Wang
Advances in Neural Information Processing Systems (NeurIPS), 2020
224 · 2020
Learning to optimize: A primer and a benchmark
T Chen, X Chen, W Chen, H Heaton, J Liu, Z Wang, W Yin
Journal of Machine Learning Research (JMLR), 2021
218 · 2021
Chasing Sparsity in Vision Transformers: An End-to-End Exploration
T Chen, Y Cheng, Z Gan, L Yuan, L Zhang, Z Wang
Advances in Neural Information Processing Systems (NeurIPS), 2021
193 · 2021
Robust Overfitting May Be Mitigated by Properly Learned Smoothening
T Chen, Z Zhang, S Liu, S Chang, Z Wang
International Conference on Learning Representations (ICLR), 2021
193 · 2021
A Unified Lottery Ticket Hypothesis for Graph Neural Networks
T Chen, Y Sui, X Chen, A Zhang, Z Wang
International Conference on Machine Learning (ICML), 2021
175 · 2021
More ConvNets in the 2020s: Scaling Up Kernels Beyond 51x51 Using Sparsity
S Liu, T Chen, X Chen, X Chen, Q Xiao, B Wu, M Pechenizkiy, D Mocanu, ...
International Conference on Learning Representations (ICLR), 2023
154 · 2023
Is Attention All NeRF Needs?
M Varma T, P Wang, X Chen, T Chen, S Venugopalan, Z Wang
International Conference on Learning Representations (ICLR), 2023
148* · 2023
TrustLLM: Trustworthiness in Large Language Models
L Sun, Y Huang, H Wang, S Wu, Q Zhang, C Gao, Y Huang, W Lyu, ...
arXiv preprint arXiv:2401.05561, 2024
145 · 2024
The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models
T Chen, J Frankle, S Chang, S Liu, Y Zhang, M Carbin, Z Wang
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021
137 · 2021
H₂O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models
Z Zhang, Y Sheng, T Zhou, T Chen, L Zheng, R Cai, Z Song, Y Tian, C Ré, ...
Advances in Neural Information Processing Systems (NeurIPS), 2023
136 · 2023
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
S Liu, T Chen, X Chen, Z Atashgahi, L Yin, H Kou, L Shen, M Pechenizkiy, ...
Advances in Neural Information Processing Systems (NeurIPS), 2021
117 · 2021
Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice
P Wang, W Zheng, T Chen, Z Wang
International Conference on Learning Representations (ICLR), 2022
106 · 2022
The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training
S Liu, T Chen, X Chen, L Shen, DC Mocanu, Z Wang, M Pechenizkiy
International Conference on Learning Representations (ICLR), 2022
104 · 2022
L²-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks
Y You, T Chen, Z Wang, Y Shen
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020
100 · 2020
Articles 1–20