Yichong Xu
Member of Technical Staff, Character.AI
Verified email at microsoft.com - Homepage
Title · Cited by · Year
The application of two-level attention models in deep convolutional neural network for fine-grained image classification
T Xiao, Y Xu, K Yang, J Zhang, Y Peng, Z Zhang
Proceedings of the IEEE conference on computer vision and pattern …, 2015
Cited by: 1008 · Year: 2015
GPTEval: NLG evaluation using GPT-4 with better human alignment
Y Liu, D Iter, Y Xu, S Wang, R Xu, C Zhu
arXiv preprint arXiv:2303.16634, 2023
Cited by: 357 · Year: 2023
An empirical study of training end-to-end vision-and-language transformers
ZY Dou, Y Xu, Z Gan, J Wang, S Wang, L Wang, C Zhu, P Zhang, L Yuan, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by: 276 · Year: 2022
Scale-invariant convolutional neural networks
Y Xu, T Xiao, J Zhang, K Yang, Z Zhang
arXiv preprint arXiv:1411.6369, 2014
Cited by: 166 · Year: 2014
Want to reduce labeling cost? GPT-3 can help
S Wang, Y Liu, Y Xu, C Zhu, M Zeng
arXiv preprint arXiv:2108.13487, 2021
Cited by: 152 · Year: 2021
Generate rather than retrieve: Large language models are strong context generators
W Yu, D Iter, S Wang, Y Xu, M Ju, S Sanyal, C Zhu, M Zeng, M Jiang
arXiv preprint arXiv:2209.10063, 2022
Cited by: 147 · Year: 2022
DialogLM: Pre-trained model for long dialogue understanding and summarization
M Zhong, Y Liu, Y Xu, C Zhu, M Zeng
Proceedings of the AAAI Conference on Artificial Intelligence 36 (10), 11765 …, 2022
Cited by: 99 · Year: 2022
Training data is more valuable than you think: A simple and effective method by retrieving from training data
S Wang, Y Xu, Y Fang, Y Liu, S Sun, R Xu, C Zhu, M Zeng
arXiv preprint arXiv:2203.08773, 2022
Cited by: 79 · Year: 2022
KG-FiD: Infusing knowledge graph in fusion-in-decoder for open-domain question answering
D Yu, C Zhu, Y Fang, W Yu, S Wang, Y Xu, X Ren, Y Yang, M Zeng
arXiv preprint arXiv:2110.04330, 2021
Cited by: 77 · Year: 2021
Multi-task learning with sample re-weighting for machine reading comprehension
Y Xu, X Liu, Y Shen, J Liu, J Gao
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
Cited by: 61* · Year: 2019
Active learning for graph neural networks via node feature propagation
Y Wu, Y Xu, A Singh, Y Yang, A Dubrawski
arXiv preprint arXiv:1910.07567, 2019
Cited by: 51 · Year: 2019
REVIVE: Regional visual representation matters in knowledge-based visual question answering
Y Lin, Y Xie, D Chen, Y Xu, C Zhu, L Yuan
Advances in Neural Information Processing Systems 35, 10560-10571, 2022
Cited by: 49 · Year: 2022
Dict-BERT: Enhancing language model pre-training with dictionary
W Yu, C Zhu, Y Fang, D Yu, S Wang, Y Xu, M Zeng, M Jiang
arXiv preprint arXiv:2110.06490, 2021
Cited by: 49 · Year: 2021
Fusing context into knowledge graph for commonsense question answering
Y Xu, C Zhu, R Xu, Y Liu, M Zeng, X Huang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 …, 2021
Cited by: 46 · Year: 2021
Noise-Tolerant Interactive Learning Using Pairwise Comparisons
Y Xu, H Zhang, K Miller, A Singh, A Dubrawski
Advances in Neural Information Processing Systems, 2431--2440, 2017
Cited by: 46* · Year: 2017
Human parity on CommonsenseQA: Augmenting self-attention with external attention
Y Xu, C Zhu, S Wang, S Sun, H Cheng, X Liu, J Gao, P He, M Zeng, ...
arXiv preprint arXiv:2112.03254, 2021
Cited by: 45 · Year: 2021
Dynamic fusion networks for machine reading comprehension
Y Xu, J Liu, J Gao, Y Shen, X Liu
arXiv preprint arXiv:1711.04964, 2017
Cited by: 44* · Year: 2017
Preference-based reinforcement learning with finite-time guarantees
Y Xu, R Wang, L Yang, A Singh, A Dubrawski
Advances in Neural Information Processing Systems 33, 18784-18794, 2020
Cited by: 43 · Year: 2020
On Strategyproof Conference Peer Review
Y Xu, H Zhao, X Shi, NB Shah
Proceedings of the Twenty-Eighth International Joint Conference on …, 2019
Cited by: 43 · Year: 2019
Small models are valuable plug-ins for large language models
C Xu, Y Xu, S Wang, Y Liu, C Zhu, J McAuley
arXiv preprint arXiv:2305.08848, 2023
Cited by: 31 · Year: 2023
Articles 1–20