Jiaxing Wang
Chinese Academy of Sciences, Institute of Automation
Verified email at nlpr.ia.ac.cn
Title · Cited by · Year
Revisiting parameter sharing for automatic neural channel number search
J Wang, H Bai, J Wu, X Shi, J Huang, I King, M Lyu, J Cheng
Advances in Neural Information Processing Systems 33, 5991-6002, 2020
Cited by 28 · 2020
M-nas: Meta neural architecture search
J Wang, J Wu, H Bai, J Cheng
Proceedings of the AAAI Conference on Artificial Intelligence 34 (04), 6186-6193, 2020
Cited by 26 · 2020
Bayesian automatic model compression
J Wang, H Bai, J Wu, J Cheng
IEEE Journal of Selected Topics in Signal Processing 14 (4), 727-736, 2020
Cited by 18 · 2020
Dpnas: Neural architecture search for deep learning with differential privacy
A Cheng, J Wang, XS Zhang, Q Chen, P Wang, J Cheng
Proceedings of the AAAI conference on artificial intelligence 36 (6), 6358-6366, 2022
Cited by 17 · 2022
RaFM: rank-aware factorization machines
X Chen, Y Zheng, J Wang, W Ma, J Huang
International Conference on Machine Learning, 1132-1140, 2019
Cited by 13 · 2019
ECBC: Efficient convolution via blocked columnizing
T Zhao, Q Hu, X He, W Xu, J Wang, C Leng, J Cheng
IEEE Transactions on Neural Networks and Learning Systems, 2021
Cited by 9 · 2021
DynaMS: Dynamic margin selection for efficient deep learning
J Wang, Y Li, J Zhuo, X Shi, W Zhang, L Gong, T Tao, P Liu, Y Bao, W Yan
The Eleventh International Conference on Learning Representations, 2022
Cited by 3 · 2022
Joint channel and weight pruning for model acceleration on mobile devices
T Zhao, XS Zhang, W Zhu, J Wang, S Yang, J Liu, J Cheng
arXiv preprint arXiv:2110.08013, 2021
Cited by 1 · 2021
Architecture Aware Latency Constrained Sparse Neural Networks
T Zhao, Q Hu, X He, W Xu, J Wang, C Leng, J Cheng
arXiv preprint arXiv:2109.00170, 2021
Cited by 1 · 2021
Multi-Granularity Pruning for Model Acceleration on Mobile Devices
T Zhao, XS Zhang, W Zhu, J Wang, S Yang, J Liu, J Cheng
Cited by 1*