Tony Lee
Verified email at stanford.edu
Title / Cited by / Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 2811 · 2021
Wilds: A benchmark of in-the-wild distribution shifts
PW Koh, S Sagawa, H Marklund, SM Xie, M Zhang, A Balsubramani, ...
arXiv preprint arXiv:2012.07421, 2021
Cited by 1157 · 2021
Holistic Evaluation of Language Models
P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, ...
arXiv preprint arXiv:2211.09110, 2022
Cited by 662* · 2022
StarCoder: may the source be with you!
R Li, LB Allal, Y Zi, N Muennighoff, D Kocetkov, C Mou, M Marone, C Akiki, ...
arXiv preprint arXiv:2305.06161, 2023
Cited by 382* · 2023
Extending the WILDS Benchmark for Unsupervised Adaptation
S Sagawa, PW Koh, T Lee, I Gao, SM Xie, K Shen, A Kumar, W Hu, ...
arXiv preprint arXiv:2112.05090, 2021
Cited by 94 · 2021
Evaluating Human-Language Model Interaction
M Lee, M Srivastava, A Hardy, J Thickstun, E Durmus, A Paranjape, ...
arXiv preprint arXiv:2212.09746, 2022
Cited by 57 · 2022
Holistic Evaluation of Text-to-Image Models
T Lee, M Yasunaga, C Meng, Y Mai, JS Park, A Gupta, Y Zhang, ...
Thirty-seventh Conference on Neural Information Processing Systems Datasets …, 2023
Cited by 21 · 2023
Can small and synthetic benchmarks drive modeling innovation? a retrospective study of question answering modeling approaches
NF Liu, T Lee, R Jia, P Liang
Cited by 20* · 2021
Cheaply Estimating Inference Efficiency Metrics for Autoregressive Transformer Models
D Narayanan, K Santhanam, P Henderson, R Bommasani, T Lee, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 9* · 2024
BioMedLM: A 2.7B Parameter Language Model Trained On Biomedical Text
E Bolton, A Venigalla, M Yasunaga, D Hall, B Xiong, T Lee, R Daneshjou, ...
arXiv preprint arXiv:2403.18421, 2024
Cited by 7* · 2024
Articles 1–10