Tung D. Le
Staff Research Scientist at IBM Research - Tokyo
Verified email at jp.ibm.com · Homepage
Title · Cited by · Year
TFLMS: Large model support in TensorFlow by graph rewriting
TD Le, H Imai, Y Negishi, K Kawachiya
arXiv preprint arXiv:1807.02037, 2018
Cited by 42 · 2018
Compiling ONNX neural network models using MLIR
T Jin, GT Bercea, TD Le, T Chen, G Su, H Imai, Y Negishi, A Leu, ...
arXiv preprint arXiv:2008.08272, 2020
Cited by 39 · 2020
Automatic GPU memory management for large neural models in TensorFlow
TD Le, H Imai, Y Negishi, K Kawachiya
Proceedings of the 2019 ACM SIGPLAN International Symposium on Memory …, 2019
Cited by 20 · 2019
Efficient query evaluation on distributed graphs with Hadoop environment
LD Tung, Q Nguyen-Van, Z Hu
Proceedings of the 4th Symposium on Information and Communication Technology …, 2013
Cited by 17 · 2013
Towards systematic parallelization of graph transformations over Pregel
LD Tung, Z Hu
International Journal of Parallel Programming 45, 320-339, 2017
Cited by 14 · 2017
Large model support for deep learning in Caffe and Chainer
M Cho, TD Le, U Finkler, H Imai, Y Negishi, T Sekiyama, S Vinod, ...
SysML, 2018
Cited by 13 · 2018
Fast and accurate 3D medical image segmentation with data-swapping method
H Imai, S Matzek, TD Le, Y Negishi, K Kawachiya
arXiv preprint arXiv:1812.07816, 2018
Cited by 12 · 2018
Minimizing data transfers for regular reachability queries on distributed graphs
Q Nguyen-Van, LD Tung, Z Hu
Proceedings of the 4th Symposium on Information and Communication Technology …, 2013
Cited by 12 · 2013
Failure-aware Scheduling in Grid Computing Environments
T Do, T Nguyen, DT Nguyen, HC Nguyen, T Le
GCA, 40-46, 2009
Cited by 12 · 2009
Real-time resource usage reduction in artificial neural networks
T Sekiyama, K Kawachiya, TD Le, Y Negishi
US Patent 10,268,951, 2019
Cited by 10 · 2019
Profiling based out-of-core hybrid method for large neural networks: poster
Y Ito, H Imai, TL Duc, Y Negishi, K Kawachiya, R Matsumiya, T Endo
Proceedings of the 24th Symposium on Principles and Practice of Parallel …, 2019
Cited by 9 · 2019
Involving CPUs into multi-GPU deep learning
TD Le, T Sekiyama, Y Negishi, H Imai, K Kawachiya
Proceedings of the 2018 ACM/SPEC international conference on performance …, 2018
Cited by 9 · 2018
Multi-GPU deep learning using CPUs
TD Le, H Imai, T Sekiyama, Y Negishi
US Patent 11,164,079, 2021
Cited by 7 · 2021
Pregel meets UnCAL: A systematic framework for transforming big graphs
LD Tung
2015 31st IEEE International Conference on Data Engineering Workshops, 250-254, 2015
Cited by 7 · 2015
Efficient parallel training of a network model on multiple graphics processing units
I Haruki, TD Le, Y Negishi
US Patent 10,949,746, 2021
Cited by 6 · 2021
Localizing tree-based convolutional neural networks
TD Le, T Sekiyama
US Patent 11,106,970, 2021
Cited by 5 · 2021
High resolution medical image segmentation using data-swapping method
H Imai, S Matzek, TD Le, Y Negishi, K Kawachiya
Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd …, 2019
Cited by 4 · 2019
An Intermediate Library for Multi-GPUs Computing Skeletons
TD Le, NH Duc, PT Anh, NH Hoang, NM Thap
hgpu.org, 2012
Cited by 3 · 2012
Large data flow graphs in limited GPU memory
G Janssen, V Zolotov, TD Le
2019 IEEE International Conference on Big Data (Big Data), 1821-1830, 2019
Cited by 2 · 2019
Optimizing tree-based convolutional neural networks
TD Le, T Sekiyama, K Zhao
US Patent App. 15/903,600, 2018
Cited by 1 · 2018
Articles 1–20