Ming Ding
Title / Cited by / Year
GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training
J Qiu, Q Chen, Y Dong, J Zhang, H Yang, M Ding, K Wang, J Tang
KDD 2020, 2020
Cited by 529 (2020)
GPT understands, too
X Liu, Y Zheng, Z Du, M Ding, Y Qian, Z Yang, J Tang
arXiv preprint arXiv:2103.10385, 2021
Cited by 500* (2021)
CogView: Mastering Text-to-Image Generation via Transformers
M Ding, Z Yang, W Hong, W Zheng, C Zhou, D Yin, J Lin, X Zou, Z Shao, ...
NeurIPS 2021, 2021
Cited by 265 (2021)
Cognitive graph for multi-hop reading comprehension at scale
M Ding, C Zhou, Q Chen, H Yang, J Tang
ACL 2019, 2019
Cited by 215 (2019)
ProNE: Fast and Scalable Network Representation Learning
J Zhang, Y Dong, Y Wang, J Tang, M Ding
Proceedings of the 28th International Joint Conference on Artificial …, 2019
Cited by 148 (2019)
Towards Knowledge-Based Recommender Dialog System
Q Chen, J Lin, Y Zhang, M Ding, Y Cen, H Yang, J Tang
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Cited by 144 (2019)
Understanding Negative Sampling in Graph Representation Learning
Z Yang*, M Ding*, C Zhou, H Yang, J Zhou, J Tang
KDD 2020, 2020
Cited by 111 (2020)
Semi-supervised learning on graphs with generative adversarial nets
M Ding, J Tang, J Zhang
Proceedings of the 27th ACM International Conference on Information and …, 2018
Cited by 110 (2018)
M6: A Chinese multimodal pretrainer
J Lin, R Men, A Yang, C Zhou, M Ding, Y Zhang, P Wang, A Wang, ...
arXiv preprint arXiv:2103.00823, 2021
Cited by 104 (2021)
Are we really making much progress? Revisiting, benchmarking, and refining heterogeneous graph neural networks
Q Lv*, M Ding*, Q Liu, Y Chen, W Feng, S He, C Zhou, J Jiang, Y Dong, ...
KDD 2021, 2021
Cited by 96 (2021)
CogLTX: Applying BERT to Long Texts
M Ding, C Zhou, H Yang, J Tang
NeurIPS 2020, 2020
Cited by 87 (2020)
All NLP tasks are generation tasks: A general pretraining framework
Z Du, Y Qian, X Liu, M Ding, J Qiu, Z Yang, J Tang
ACL 2022, 2021
Cited by 83* (2021)
CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers
M Ding, W Zheng, W Hong, J Tang
NeurIPS 2022, 2022
Cited by 76 (2022)
CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers
W Hong*, M Ding*, W Zheng, X Liu, J Tang
ICLR 2023, 2022
Cited by 61 (2022)
MixGCF: An Improved Training Method for Graph Neural Network-based Recommender Systems
T Huang, Y Dong, M Ding, Z Yang, W Feng, X Wang, J Tang
KDD 2021, 2021
Cited by 58 (2021)
M6-UFC: Unifying multi-modal controls for conditional image synthesis
Z Zhang, J Ma, C Zhou, R Men, Z Li, M Ding, J Tang, J Zhou, H Yang
NeurIPS 2021, 2021
Cited by 48* (2021)
Cognitive knowledge graph reasoning for one-shot relational learning
Z Du, C Zhou, M Ding, H Yang, J Tang
arXiv preprint arXiv:1906.05489, 2019
Cited by 28 (2019)
Controllable Generation from Pre-trained Language Models via Inverse Prompting
X Zou, D Yin, Q Zhong, M Ding, Z Yang, J Tang
arXiv preprint arXiv:2103.10685, 2021
Cited by 26 (2021)
FewNLU: Benchmarking state-of-the-art methods for few-shot natural language understanding
Y Zheng, J Zhou, Y Qian, M Ding, J Li, R Salakhutdinov, J Tang, S Ruder, ...
ACL 2022, 2021
Cited by 21 (2021)
Adaptive Diffusion in Graph Neural Networks
J Zhao, Y Dong, M Ding, E Kharlamov, J Tang
Advances in Neural Information Processing Systems (NeurIPS 2021), 2021
Cited by 20 (2021)
Articles 1–20