Gordon Euhyun Moon
Verified email at sogang.ac.kr
Title · Cited by · Year
A Large-Scale Study in Predictability of Daily Activities and Places
G Moon, J Hamm
Proceedings of the 8th EAI International Conference on Mobile Computing …, 2016
Cited by 23 · 2016
Evaluating Spatial Accelerator Architectures with Tiled Matrix-Matrix Multiplication
GE Moon, H Kwon, G Jeong, P Chatarasi, S Rajamanickam, T Krishna
IEEE Transactions on Parallel and Distributed Systems 33 (4), 1002-1014, 2021
Cited by 17 · 2021
Extending Sparse Tensor Accelerators to Support Multiple Compression Formats
E Qin, G Jeong, W Won, SC Kao, H Kwon, S Srinivasan, D Das, GE Moon, ...
Proceedings of the 35th IEEE International Parallel & Distributed Processing …, 2021
Cited by 15 · 2021
ALO-NMF: Accelerated Locality-Optimized Non-negative Matrix Factorization
GE Moon, JA Ellis, A Sukumaran-Rajam, S Parthasarathy, P Sadayappan
Proceedings of the 26th ACM SIGKDD International Conference on Knowledge …, 2020
Cited by 15* · 2020
Parallel Data-Local Training for Optimizing Word2Vec Embeddings for Word and Graph Embeddings
GE Moon, D Newman-Griffis, J Kim, A Sukumaran-Rajam, ...
2019 IEEE/ACM Workshop on Machine Learning in High Performance Computing …, 2019
Cited by 5 · 2019
Parallel Latent Dirichlet Allocation on GPUs
GE Moon, I Nisa, A Sukumaran-Rajam, B Bandyopadhyay, ...
International Conference on Computational Science, 259-272, 2018
Cited by 5* · 2018
Parallel Training of GRU Networks with a Multi-Grid Solver for Long Sequences
GE Moon, EC Cyr
International Conference on Learning Representations, 2022
Cited by 4 · 2022
SPION: Layer-Wise Sparse Training of Transformer via Convolutional Flood Filling
B Yoon, Y Han, GE Moon
arXiv preprint arXiv:2309.12578, 2023
2023
Chronica: A Data-Imbalance-Aware Scheduler for Distributed Deep Learning
S Maeng, GE Moon, S Park
Proceedings of the 23rd IEEE/ACM International Symposium on Cluster, Cloud …, 2023
2023
Adapting Multigrid-in-Time to Train Deep Neural Networks [Slides]
EC Cyr, S Guenther, L Ruthotto, JB Schroder, NR Gauger, G Moon, ...
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States), 2022
2022
Parallel-in-Time Training of Recurrent Neural Networks
EC Cyr, G Moon
2021 Fall Western Sectional Meeting, 2021
2021
Mixed-Precision Schemes for Linear Algebra Kernels on GPUs
G Moon, S Rajamanickam
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States), 2021
2021
MINT: Microarchitecture for Efficient and Interchangeable CompressioN Formats on Tensor Algebra
E Qin, G Jeong, W Won, SC Kao, H Kwon, S Srinivasan, D Das, GE Moon, ...
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States), 2020
2020
Utilizing Spatial Accelerators for Machine Learning and Linear Algebra Kernels
GE Moon, S Rajamanickam, T Krishna, H Kwon, P Chatarasi, E Qin
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States), 2020
2020
Parallel Algorithms for Machine Learning
GE Moon
The Ohio State University, 2019
2019
Parallel LDA with Over-Decomposition
GE Moon, A Sukumaran-Rajam, P Sadayappan
2017 IEEE 24th International Conference on High Performance Computing …, 2017
2017