Sheng Shen
Verified email at berkeley.edu - Homepage
Title · Cited by · Year
Multitask prompted training enables zero-shot task generalization
V Sanh, A Webson, C Raffel, SH Bach, L Sutawika, Z Alyafeai, A Chaffin, ...
ICLR 2022, 2021
Cited by 1343 · 2021
Bloom: A 176b-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
Cited by 1294 · 2023
Q-bert: Hessian based ultra low precision quantization of bert
S Shen, Z Dong, J Ye, L Ma, Z Yao, A Gholami, MW Mahoney, K Keutzer
AAAI 2020, 2019
Cited by 520 · 2019
Crosslingual generalization through multitask finetuning
N Muennighoff, T Wang, L Sutawika, A Roberts, S Biderman, TL Scao, ...
ACL 2023, 2022
Cited by 432 · 2022
How Much Can CLIP Benefit Vision-and-Language Tasks?
S Shen*, LH Li*, H Tan, M Bansal, A Rohrbach, KW Chang, Z Yao, ...
ICLR 2022, 2021
Cited by 372 · 2021
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
Z Li*, E Wallace*, S Shen*, K Lin*, K Keutzer, D Klein, JE Gonzalez
ICML 2020, 2020
Cited by 266 · 2020
ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning
Z Yao, A Gholami, S Shen, K Keutzer, MW Mahoney
AAAI 2021, 2020
Cited by 231 · 2020
Agentbench: Evaluating llms as agents
X Liu, H Yu, H Zhang, Y Xu, X Lei, H Lai, Y Gu, H Ding, K Men, K Yang, ...
arXiv preprint arXiv:2308.03688, 2023
Cited by 184* · 2023
An annotated dataset of literary entities
D Bamman, S Popat, S Shen
NAACL 2019, 2019
Cited by 97 · 2019
Learned token pruning for transformers
S Kim*, S Shen*, D Thorsley, A Gholami, W Kwon, J Hassoun, K Keutzer
KDD 2022, 2021
Cited by 96 · 2021
What Language Model to Train if You Have One Million GPU Hours?
T Le Scao, T Wang, D Hesslow, L Saulnier, S Bekman, MS Bari, ...
EMNLP 2022, 2022
Cited by 84 · 2022
Ermes: Emoji-Powered Representation Learning for Cross-Lingual Sentiment Classification
Z Chen*, S Shen*, Z Hu, X Lu, Q Mei, X Liu
WWW 2019, 2018
Cited by 83* · 2018
Powernorm: Rethinking batch normalization in transformers
S Shen, Z Yao, A Gholami, M Mahoney, K Keutzer
ICML 2020, 2020
Cited by 82 · 2020
Aligning large multimodal models with factually augmented rlhf
Z Sun*, S Shen*, S Cao*, H Liu, C Li, Y Shen, C Gan, LY Gui, YX Wang, ...
arXiv preprint arXiv:2309.14525, 2023
Cited by 78 · 2023
Poisoning Language Models During Instruction Tuning
A Wan*, E Wallace*, S Shen, D Klein
ICML 2023, 2023
Cited by 78 · 2023
SqueezeLLM: Dense-and-Sparse Quantization
S Kim*, C Hooper*, A Gholami*, Z Dong, X Li, S Shen, MW Mahoney, ...
arXiv preprint arXiv:2306.07629, 2023
Cited by 73 · 2023
Through a gender lens: An empirical study of emoji usage over large-scale android users
Z Chen, X Lu, S Shen, W Ai, X Liu, Q Mei
arXiv preprint arXiv:1705.05546, 2017
Cited by 72 · 2017
Pragmatically Informative Text Generation
S Shen, D Fried, J Andreas, D Klein
NAACL 2019, 2019
Cited by 70 · 2019
K-lite: Learning transferable visual models with external knowledge
S Shen, C Li, X Hu, Y Xie, J Yang, P Zhang, A Rohrbach, Z Gan, L Wang, ...
NeurIPS 2022, 2022
Cited by 69 · 2022
Llava-next: Improved reasoning, ocr, and world knowledge
H Liu, C Li, Y Li, B Li, Y Zhang, S Shen, YJ Lee
Cited by 55 · 2024