Jiaming Ji (吉嘉铭)
Verified email at stu.pku.edu.cn - Homepage
Title · Cited by · Year
Baichuan 2: Open large-scale language models
A Yang, B Xiao, B Wang, B Zhang, C Bian, C Yin, C Lv, D Pan, D Wang, ...
arXiv preprint arXiv:2309.10305, 2023
Cited by 389* · 2023
BeaverTails: Towards improved safety alignment of LLM via a human-preference dataset
J Ji, M Liu, J Dai, X Pan, C Zhang, C Bian, R Sun, Y Wang, Y Yang
NeurIPS 2023, 2023
Cited by 216 · 2023
AI alignment: A comprehensive survey
J Ji, T Qiu, B Chen, B Zhang, H Lou, K Wang, Y Duan, Z He, J Zhou, ...
arXiv preprint arXiv:2310.19852, 2023
Cited by 157 · 2023
Safe RLHF: Safe reinforcement learning from human feedback
J Dai, X Pan, R Sun, J Ji, X Xu, M Liu, Y Wang, Y Yang
The Twelfth International Conference on Learning Representations (Spotlight), 2024
Cited by 155 · 2024
Bi-dexhands: Towards human-level bimanual dexterous manipulation
Y Chen, Y Geng, F Zhong, J Ji, J Jiang, Z Lu, H Dong, Y Yang
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
Cited by 94* · 2023
Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark
J Ji, B Zhang, J Zhou, X Pan, W Huang, R Sun, Y Geng, Y Zhong, J Dai, ...
NeurIPS 2023, 2023
Cited by 55* · 2023
Constrained update projection approach to safe policy optimization
L Yang, J Ji, J Dai, L Zhang, B Zhou, P Li, Y Yang, G Pan
NeurIPS 2022, 2023
Cited by 42 · 2023
Aligner: Achieving efficient alignment through weak-to-strong correction
J Ji, B Chen, H Lou, D Hong, B Zhang, X Pan, J Dai, Y Yang
NeurIPS 2024, Oral Presentation, 2024
Cited by 36 · 2024
OmniSafe: An infrastructure for accelerating safe reinforcement learning research
J Ji, J Zhou, B Zhang, J Dai, X Pan, R Sun, W Huang, Y Geng, M Liu, ...
JMLR 2024, 2023
Cited by 31 · 2023
Heterogeneous-Agent Reinforcement Learning
Y Zhong, JG Kuba, S Hu, J Ji, Y Yang
JMLR, 2023
Cited by 26 · 2023
CUP: A conservative update policy algorithm for safe reinforcement learning
L Yang, J Ji, J Dai, Y Zhang, P Li, G Pan
arXiv preprint arXiv:2202.07565, 2022
Cited by 17 · 2022
The application of large language models in medicine: A scoping review
X Meng, X Yan, K Zhang, D Liu, X Cui, Y Yang, M Zhang, C Cao, J Wang, ...
iScience 27 (5), 2024
Cited by 15 · 2024
Augmented proximal policy optimization for safe reinforcement learning
J Dai, J Ji, L Yang, Q Zheng, G Pan
Proceedings of the AAAI Conference on Artificial Intelligence 37 (6), 7288-7295, 2023
Cited by 12 · 2023
PKU-Beaver: Constrained value-aligned LLM via Safe RLHF
J Dai, X Pan, J Ji, R Sun, Y Wang, Y Yang
Cited by 12 · 2023
SafeDreamer: Safe Reinforcement Learning with World Models
W Huang, J Ji, B Zhang, C Xia, Y Yang
ICLR 2024, 2023
Cited by 11 · 2023
PKU-SafeRLHF: A safety alignment preference dataset for Llama family models
J Ji, D Hong, B Zhang, B Chen, J Dai, B Zheng, T Qiu, B Li, Y Yang
arXiv preprint arXiv:2406.15513, 2024
Cited by 7 · 2024
VOCE: Variational Optimization with Conservative Estimation for Offline Safe Reinforcement Learning
J Guan, G Chen, J Ji, L Yang, A Zhou, Z Li
NeurIPS 2023, 2023
Cited by 7 · 2023
MyoChallenge 2022: Learning contact-rich manipulation using a musculoskeletal hand
V Caggiano, G Durandau, H Wang, A Chiappa, A Mathis, P Tano, N Patel, ...
NeurIPS 2022 Competition Track, 233-250, 2023
Cited by 6 · 2023
Rethinking information structures in RLHF: Reward generalization from a graph theory perspective
T Qiu, F Zeng, J Ji, D Yan, K Wang, J Zhou, H Yang, J Dai, X Pan, Y Yang
arXiv preprint arXiv:2402.10184, 2024
Cited by 4 · 2024
SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset
J Dai, T Chen, X Wang, Z Yang, T Chen, J Ji, Y Yang
NeurIPS 2024, 2024
Cited by 1 · 2024
Articles 1–20