Zhisheng Xiao
Verified email at google.com - Homepage
Title
Cited by
Year
Tackling the generative learning trilemma with denoising diffusion GANs
Z Xiao, K Kreis, A Vahdat
International Conference on Learning Representations, 2022
Cited by 548 · 2022
Likelihood regret: An out-of-distribution detection score for variational auto-encoder
Z Xiao, Q Yan, Y Amit
Advances in Neural Information Processing Systems, 2020
Cited by 229 · 2020
VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models
Z Xiao, K Kreis, J Kautz, A Vahdat
International Conference on Learning Representations, 2021
Cited by 128 · 2021
UFOGen: You forward once large scale text-to-image generation via diffusion GANs
Y Xu, Y Zhao, Z Xiao, T Hou
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
Cited by 57 · 2024
Generative Latent Flow
Z Xiao, Q Yan, Y Amit
arXiv preprint arXiv:1905.10485, 2019
Cited by 54 · 2019
MobileDiffusion: Subsecond text-to-image generation on mobile devices
Y Zhao, Y Xu, Z Xiao, T Hou
arXiv preprint arXiv:2311.16567, 2023
Cited by 38 · 2023
Do We Really Need to Learn Representations from In-domain Data for Outlier Detection?
Z Xiao, Q Yan, Y Amit
Uncertainty and Robustness in Deep Learning, ICML workshop, 2021
Cited by 26 · 2021
ControlVAE: Tuning, analytical properties, and performance analysis
H Shao, Z Xiao, S Yao, D Sun, A Zhang, S Liu, T Wang, J Li, T Abdelzaher
IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (12), 9285 …, 2021
Cited by 22 · 2021
DreamInpainter: Text-guided subject-driven image inpainting with diffusion models
S Xie, Y Zhao, Z Xiao, KCK Chan, Y Li, Y Xu, K Zhang, T Hou
arXiv preprint arXiv:2312.03771, 2023
Cited by 13 · 2023
Adaptive Multi-stage Density Ratio Estimation for Learning Latent Space Energy-based Model
Z Xiao, T Han
Advances in Neural Information Processing Systems, 2022
Cited by 12 · 2022
A method to model conditional distributions with normalizing flows
Z Xiao, Q Yan, Y Amit
arXiv preprint arXiv:1911.02052, 2019
Cited by 10 · 2019
Imagen 3
J Baldridge, J Bauer, M Bhutani, N Brichtova, A Bunner, K Chan, Y Chen, ...
arXiv preprint arXiv:2408.07009, 2024
Cited by 9 · 2024
EM Distillation for One-step Diffusion Models
S Xie, Z Xiao, DP Kingma, T Hou, YN Wu, KP Murphy, T Salimans, ...
arXiv preprint arXiv:2405.16852, 2024
Cited by 7 · 2024
Exponential tilting of generative models: Improving sample quality by training and sampling from latent energy
Z Xiao, Q Yan, Y Amit
ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit …, 2020
Cited by 7 · 2020
Two Symmetrized Coordinate Descent Methods Can Be O(n²) Times Slower Than the Randomized Version
P Xiao, Z Xiao, R Sun
SIAM Journal on Optimization 31 (4), 2726-2752, 2021
Cited by 6* · 2021
HiFi Tuner: High-fidelity subject-driven fine-tuning for diffusion models
Z Wang, W Wei, Y Zhao, Z Xiao, M Hasegawa-Johnson, H Shi, T Hou
arXiv preprint arXiv:2312.00079, 2023
Cited by 5 · 2023
EBMs Trained with Maximum Likelihood are Generator Models Trained with a Self-adversarial Loss
Z Xiao, Q Yan, Y Amit
Energy Based Models Workshop, ICLR, 2021
Cited by 3 · 2021
Energy-based variational autoencoders
A Vahdat, K Kreis, Z Xiao, J Kautz
US Patent App. 17/357,728, 2022
Cited by 2 · 2022
Denoising diffusion generative adversarial networks
Z Xiao, K Kreis, A Vahdat
US Patent App. 17/957,143, 2023
Cited by 1 · 2023
Training energy-based variational autoencoders
A Vahdat, K Kreis, Z Xiao, J Kautz
US Patent App. 17/357,738, 2022
2022
Articles 1–20