TY - JOUR
T1 - How should the advancement of large language models affect the practice of science?
AU - Binz, Marcel
AU - Alaniz, Stephan
AU - Roskies, Adina
AU - Aczel, Balazs
AU - Bergstrom, Carl T.
AU - Allen, Colin
AU - Schad, Daniel
AU - Wulff, Dirk
AU - West, Jevin D.
AU - Zhang, Qiong
AU - Shiffrin, Richard M.
AU - Gershman, Samuel J.
AU - Popov, Vencislav
AU - Bender, Emily M.
AU - Marelli, Marco
AU - Botvinick, Matthew M.
AU - Akata, Zeynep
AU - Schulz, Eric
N1 - Publisher Copyright:
Copyright © 2025 the Author(s).
PY - 2025/2/4
Y1 - 2025/2/4
AB - Large language models (LLMs) are increasingly being incorporated into scientific workflows. However, we have yet to fully grasp the implications of this integration. How should the advancement of large language models affect the practice of science? For this opinion piece, we have invited four diverse groups of scientists to reflect on this query, sharing their perspectives and engaging in debate. Schulz et al. contend that working with LLMs is not fundamentally different from working with human collaborators, while Bender et al. argue that LLMs are often misused and overhyped, and that their limitations warrant a focus on more specialized, easily interpretable tools. Marelli et al. emphasize the importance of transparent attribution and responsible use of LLMs. Finally, Botvinick and Gershman advocate that humans should retain responsibility for determining the scientific roadmap. To facilitate the discussion, the four perspectives are complemented with a response from each group. By putting these different perspectives in conversation, we aim to bring attention to important considerations within the academic community regarding the adoption of LLMs and their impact on both current and future scientific practices.
KW - AI
KW - large language models
KW - science
UR - https://www.scopus.com/pages/publications/85216927526
DO - 10.1073/pnas.2401227121
M3 - Article
C2 - 39869798
AN - SCOPUS:85216927526
SN - 0027-8424
VL - 122
JO - Proc Natl Acad Sci U S A
JF - Proceedings of the National Academy of Sciences of the United States of America
IS - 5
M1 - e2401227121
ER -