Ariel Herbert-Voss
Verified email at g.harvard.edu
Title · Cited by · Year
Language models are few-shot learners
TB Brown
arXiv preprint arXiv:2005.14165, 2020
Cited by 36393 · 2020
Language models are few-shot learners
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, R Child, A Ramesh, DM Ziegler, J Wu, C Winter, C Hesse, M Chen, E Sigler, M Litwin, S Gray, B Chess, J Clark, C Berner, S McCandlish, A Radford, …, 2020
Cited by 8920 · 2020
Evaluating large language models trained on code
M Chen, J Tworek, H Jun, Q Yuan, HPDO Pinto, J Kaplan, H Edwards, ...
arXiv preprint arXiv:2107.03374, 2021
Cited by 3382 · 2021
Extracting training data from large language models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
30th USENIX Security Symposium (USENIX Security 21), 2633-2650, 2021
Cited by 1785 · 2021
Release strategies and the social impacts of language models
I Solaiman, M Brundage, J Clark, A Askell, A Herbert-Voss, J Wu, ...
arXiv preprint arXiv:1908.09203, 2019
Cited by 565 · 2019
Toward trustworthy AI development: mechanisms for supporting verifiable claims
M Brundage, S Avin, J Wang, H Belfield, G Krueger, G Hadfield, H Khlaaf, ...
arXiv preprint arXiv:2004.07213, 2020
Cited by 410 · 2020
Language Models are Few-Shot Learners. 2020. doi: 10.48550
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
arXiv, 5-7, 2020
Cited by 230 · 2020
Language models are few-shot learners
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ..., D Amodei
2020
Cited by 153 · 2020
Language models are few-shot learners
B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, ...
arXiv preprint arXiv:2005.14165, 2020
Cited by 133 · 2020
Language models are few-shot learners (arXiv: 2005.14165). arXiv
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
Cited by 114 · 2020
The WMDP benchmark: Measuring and reducing malicious use with unlearning
N Li, A Pan, A Gopal, S Yue, D Berrios, A Gatti, JD Li, AK Dombrowski, ...
arXiv preprint arXiv:2403.03218, 2024
Cited by 68 · 2024
Evaluating large language models trained on code. arXiv 2021
M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ...
arXiv preprint arXiv:2107.03374, 2021
Cited by 56 · 2021
Language models are few-shot learners
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
arXiv preprint arXiv:2005.14165, 2020
Cited by 54 · 2020
Computing minimal interpolants in C^{1,1}(R^d)
A Herbert-Voss, MJ Hirn, F McCollum
Rev. Mat. Iberoam. 33 (1), 29–66, 2017
Cited by 15 · 2017
Computing minimal interpolants in C^{1,1}(R^d)
A Herbert-Voss, MJ Hirn, F McCollum
arXiv preprint arXiv:1411.5668, 2014
Cited by 4 · 2014
A. Bordes, Y. Boureau, and J. Weston. Learning end-to-end goal-oriented dialog. In 5th …
GS Shyam, A Askell, S Agarwal, A Herbert-Voss, G Krueger, T Henighan, ...
Articles 1–16