Zhilin Yang
Publications (sorted by citations; * marks a merged/estimated citation count)
XLNet: Generalized autoregressive pretraining for language understanding
Z Yang, Z Dai, Y Yang, J Carbonell, RR Salakhutdinov, QV Le
Advances in neural information processing systems 32, 2019
Cited by 6015 · 2019
Transformer-XL: Attentive language models beyond a fixed-length context
Z Dai, Z Yang, Y Yang, J Carbonell, QV Le, R Salakhutdinov
arXiv preprint arXiv:1901.02860, 2019
Cited by 2420 · 2019
Revisiting semi-supervised learning with graph embeddings
Z Yang, W Cohen, R Salakhutdinov
International conference on machine learning, 40-48, 2016
Cited by 1393 · 2016
HotpotQA: A dataset for diverse, explainable multi-hop question answering
Z Yang, P Qi, S Zhang, Y Bengio, WW Cohen, R Salakhutdinov, ...
arXiv preprint arXiv:1809.09600, 2018
Cited by 997 · 2018
Multi-task cross-lingual sequence tagging from scratch
Z Yang, R Salakhutdinov, W Cohen
arXiv preprint arXiv:1603.06270, 2016
Cited by 552* · 2016
Good semi-supervised learning that requires a bad GAN
Z Dai, Z Yang, F Yang, WW Cohen, RR Salakhutdinov
Advances in neural information processing systems 30, 2017
Cited by 441 · 2017
Gated-Attention Readers for Text Comprehension
B Dhingra, H Liu, Z Yang, WW Cohen, R Salakhutdinov
arXiv preprint arXiv:1606.01549, 2016
Cited by 417 · 2016
Differentiable learning of logical rules for knowledge base reasoning
F Yang, Z Yang, WW Cohen
Advances in neural information processing systems 30, 2017
Cited by 411 · 2017
Review networks for caption generation
Z Yang, Y Yuan, Y Wu, WW Cohen, RR Salakhutdinov
Advances in neural information processing systems 29, 2016
Cited by 342* · 2016
Breaking the softmax bottleneck: A high-rank RNN language model
Z Yang, Z Dai, R Salakhutdinov, WW Cohen
arXiv preprint arXiv:1711.03953, 2017
Cited by 326 · 2017
COSNET: Connecting heterogeneous social networks with local and global consistency
Y Zhang, J Tang, Z Yang, J Pei, PS Yu
Proceedings of the 21st ACM SIGKDD international conference on knowledge …, 2015
Cited by 298 · 2015
GPT understands, too
X Liu, Y Zheng, Z Du, M Ding, Y Qian, Z Yang, J Tang
arXiv preprint arXiv:2103.10385, 2021
Cited by 291* · 2021
Neural cross-lingual named entity recognition with minimal resources
J Xie, Z Yang, G Neubig, NA Smith, J Carbonell
arXiv preprint arXiv:1808.09861, 2018
Cited by 152 · 2018
Semi-supervised QA with generative domain-adaptive nets
Z Yang, J Hu, R Salakhutdinov, WW Cohen
arXiv preprint arXiv:1702.02206, 2017
Cited by 150 · 2017
Linguistic knowledge as memory for recurrent neural networks
B Dhingra, Z Yang, WW Cohen, R Salakhutdinov
arXiv preprint arXiv:1703.02620, 2017
Cited by 120* · 2017
P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks
X Liu, K Ji, Y Fu, Z Du, Z Yang, J Tang
arXiv preprint arXiv:2110.07602, 2021
Cited by 100* · 2021
Words or characters? Fine-grained gating for reading comprehension
Z Yang, B Dhingra, Y Yuan, J Hu, WW Cohen, R Salakhutdinov
arXiv preprint arXiv:1611.01724, 2016
Cited by 92 · 2016
GLoMo: Unsupervised learning of transferable relational graphs
Z Yang, J Zhao, B Dhingra, K He, WW Cohen, RR Salakhutdinov, ...
Advances in Neural Information Processing Systems 31, 2018
Cited by 55* · 2018
GLM: General Language Model Pretraining with Autoregressive Blank Infilling
Z Du, Y Qian, X Liu, M Ding, J Qiu, Z Yang, J Tang
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022
Cited by 46* · 2022