Tian Jin
PhD Student at MIT
Verified email at mit.edu
Title · Cited by · Year
Offloading support for OpenMP in Clang and LLVM
SF Antao, A Bataev, AC Jacob, GT Bercea, AE Eichenberger, G Rokos, ...
2016 Third Workshop on the LLVM Compiler Infrastructure in HPC (LLVM-HPC), 1-11, 2016
Cited by 84 · 2016
Split-CNN: Splitting window-based operations in convolutional neural networks for memory system optimization
T Jin, S Hong
Proceedings of the Twenty-Fourth International Conference on Architectural …, 2019
Cited by 47 · 2019
Compiling ONNX neural network models using MLIR
T Jin, GT Bercea, TD Le, T Chen, G Su, H Imai, Y Negishi, A Leu, ...
arXiv preprint arXiv:2008.08272, 2020
Cited by 41 · 2020
Automatic tiling of “mostly-tileable” loop nests
D Wonnacott, T Jin, A Lake
5th International Workshop on Polyhedral Compilation Techniques, Amsterdam, 2015
Cited by 33 · 2015
Performance analysis and optimization of Clang's OpenMP 4.5 GPU support
M Martineau, S McIntosh-Smith, C Bertolli, AC Jacob, SF Antao, ...
2016 7th International Workshop on Performance Modeling, Benchmarking and …, 2016
Cited by 28 · 2016
Efficient fork-join on GPUs through warp specialization
AC Jacob, AE Eichenberger, H Sung, SF Antao, GT Bercea, C Bertolli, ...
2017 IEEE 24th International Conference on High Performance Computing (HiPC …, 2017
Cited by 19 · 2017
Pruning’s effect on generalization through the lens of training and regularization
T Jin, M Carbin, D Roy, J Frankle, GK Dziugaite
Advances in Neural Information Processing Systems 35, 37947-37961, 2022
Cited by 15 · 2022
Efficient automatic scheduling of imaging and vision pipelines for the GPU
L Anderson, A Adams, K Ma, TM Li, T Jin, J Ragan-Kelley
Proceedings of the ACM on Programming Languages 5 (OOPSLA), 1-28, 2021
Cited by 14 · 2021
Language to network: Conditional parameter adaptation with natural language descriptions
T Jin, Z Liu, S Yan, A Eichenberger, LP Morency
Proceedings of the 58th Annual Meeting of the Association for Computational …, 2020
Cited by 7 · 2020
The effect of data dimensionality on neural network prunability
Z Ankner, A Renda, GK Dziugaite, J Frankle, T Jin
arXiv preprint arXiv:2212.00291, 2022
Cited by 6 · 2022
Hybrid static/dynamic schedules for tiled polyhedral programs
T Jin, N Prajapati, W Ranasinghe, G Iooss, Y Zou, S Rajopadhye, ...
arXiv preprint arXiv:1610.07236, 2016
Cited by 3 · 2016
Striped Attention: Faster Ring Attention for Causal Transformers
W Brandon, A Nrusimha, K Qian, Z Ankner, T Jin, Z Song, J Ragan-Kelley
arXiv preprint arXiv:2311.09431, 2023
Cited by 1 · 2023
Self-Selected Attention Span for Accelerating Large Language Model Inference
T Jin, Z Xu, S Sharify, X Wang
arXiv preprint arXiv:2404.09336, 2024
2024
The Cost of Scaling Down Large Language Models: Reducing Model Size Affects Memory before In-context Learning
T Jin, N Clement, X Dong, V Nagarajan, M Carbin, J Ragan-Kelley, ...
The Twelfth International Conference on Learning Representations, 2023
2023
The Cost of Down-Scaling Language Models: Fact Recall Deteriorates before In-Context Learning
T Jin, N Clement, X Dong, V Nagarajan, M Carbin, J Ragan-Kelley, ...
arXiv preprint arXiv:2310.04680, 2023
2023
Towards Effective and Efficient Zero-shot Learning by Fine-tuning with Task Descriptions
T Jin, Z Liu, S Yan, A Eichenberger, LP Morency
2019
Using Hybrid Schedules to Safely Outperform Classical Polyhedral Schedules
T Jin
2015 International Conference on Parallel Architecture and Compilation (PACT …, 2015
2015
LLVM-HPC 2016
SF Antao, A Bataev, AC Jacob, GT Bercea, AE Eichenberger, G Rokos, ...
Offloading Support for OpenMP in Clang and LLVM
C Bertolli, AEE Bercea, G Rokos, M Martineau, T Jin, G Ozen, Z Sura, ...