Gyeongman Kim
Verified email at kaist.ac.kr
Title | Cited by | Year
Distilling linguistic context for language model compression
G Park, G Kim, E Yang
Proceedings of the 2021 Conference on Empirical Methods in Natural Language …, 2021
Cited by: 26 | Year: 2021
Diffusion Video Autoencoders: Toward Temporally Consistent Face Video Editing via Disentangled Video Encoding
G Kim, H Shim, H Kim, Y Choi, J Kim, E Yang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by: 17 | Year: 2022
PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning
G Kim, D Jang, E Yang
arXiv preprint arXiv:2402.12842, 2024
Cited by: 1 | Year: 2024
SeamsTalk: Seamless Talking Face Generation via Flow-Guided Inpainting
Y Jeong, G Kim, D Jang, J Hwang, E Yang
IEEE Access, 2024
Year: 2024
Articles 1–4