6. Desaire H, Chua AE, Kim MG, Hua D. Accurately detecting AI text when ChatGPT is told to write like a chemist. Cell Reports Physical Science. 2023 Nov 15;4(11):101672.
10. Xu S, Liu S, Culhane T, Pertseva E, Wu MH, Semnani SJ, et al. Fine-tuned LLMs Know More, Hallucinate Less with Few-Shot Sequence-to-Sequence Semantic Parsing over Wikidata [Internet]. arXiv; 2023 [cited 2023 Nov 29]. Available from:
http://arxiv.org/abs/2305.14202
11. Yiu E, Kosoy E, Gopnik A. Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet). Perspect Psychol Sci. 2023 Oct 26;17456916231201401.
16. Liang W, Yuksekgonul M, Mao Y, Wu E, Zou J. GPT detectors are biased against non-native English writers [Internet]. arXiv; 2023 [cited 2023 Sep 6]. Available from:
http://arxiv.org/abs/2304.02819
17. Feng S, Park CY, Liu Y, Tsvetkov Y. From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) [Internet]. Toronto, Canada: Association for Computational Linguistics; 2023 [cited 2023 Sep 5]. p. 11737–62. Available from:
https://aclanthology.org/2023.acl-long.656
18. Desaire H, Chua AE, Isom M, Jarosova R, Hua D. Distinguishing academic science writing from humans or ChatGPT with over 99% accuracy using off-the-shelf machine learning tools. Cell Reports Physical Science. 2023 Jun 21;4(6):101426.
30. Santurkar S, Durmus E, Ladhak F, Lee C, Liang P, Hashimoto T. Whose Opinions Do Language Models Reflect? [Internet]. arXiv; 2023 [cited 2023 Sep 6]. Available from:
http://arxiv.org/abs/2303.17548
38. Romero M, Heiser L, Lepage A, Gagnebien A, Bonjour A, Lagarrigue A, et al. Enseigner et apprendre à l'ère de l'intelligence artificielle [Teaching and learning in the age of artificial intelligence] [Internet]. White paper. Canopé; 2023 [cited 2023 Sep 5]. Available from:
https://hal.science/hal-04013223
43. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023 Jan 18;613(7945):620–1.
45. Hall M, van der Maaten L, Gustafson L, Jones M, Adcock A. A Systematic Study of Bias Amplification [Internet]. arXiv; 2022 [cited 2023 Nov 21]. Available from:
http://arxiv.org/abs/2201.11706
46. Lin S, Hilton J, Evans O. TruthfulQA: Measuring How Models Mimic Human Falsehoods [Internet]. arXiv; 2022 [cited 2023 Sep 6]. Available from:
http://arxiv.org/abs/2109.07958
47. Sloane M. To make AI fair, here's what we must learn to do. Nature. 2022 May 4;605(7908):9.
48. Zhang D, Maslej N, Brynjolfsson E, Etchemendy J, Lyons T, Manyika J, et al. The AI Index 2022 Annual Report [Internet]. arXiv; 2022 [cited 2023 Nov 21]. Available from:
http://arxiv.org/abs/2205.03468
50. Shimron E, Tamir JI, Wang K, Lustig M. Implicit data crimes: Machine learning bias arising from misuse of public data. Proceedings of the National Academy of Sciences. 2022 Mar 29;119(13):e2117203119.
51. Thoppilan R, De Freitas D, Hall J, Shazeer N, Kulshreshtha A, Cheng HT, et al. LaMDA: Language Models for Dialog Applications [Internet]. arXiv; 2022 [cited 2023 Nov 21]. Available from:
http://arxiv.org/abs/2201.08239
53. Directorate-General for Communications Networks, Content and Technology (European Commission), Izsak K, Terrier A, Kreutzer S, Strähle T, Roche C, et al. Opportunities and challenges of artificial intelligence technologies for the cultural and creative sectors [Internet]. Luxembourg: Publications Office of the European Union; 2022 [cited 2023 Nov 24]. Available from:
https://data.europa.eu/doi/10.2759/144212
54. Nadeem M, Bethke A, Reddy S. StereoSet: Measuring stereotypical bias in pretrained language models. In: Zong C, Xia F, Li W, Navigli R, editors. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) [Internet]. Online: Association for Computational Linguistics; 2021 [cited 2023 Nov 21]. p. 5356–71. Available from:
https://aclanthology.org/2021.acl-long.416
55. Ahmed S, Nutt CT, Eneanya ND, Reese PP, Sivashanker K, Morse M, et al. Examining the Potential Impact of Race Multiplier Utilization in Estimated Glomerular Filtration Rate Calculation on African-American Care Outcomes. J Gen Intern Med. 2021 Feb 1;36(2):464–71.
56. Kreps SE, McCain M, Brundage M. All the News that's Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation [Internet]. Rochester, NY: SSRN; 2020 [cited 2023 Sep 5]. Available from:
https://papers.ssrn.com/abstract=3525002
58. Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, et al. Language Models are Few-Shot Learners [Internet]. arXiv; 2020 [cited 2023 Sep 6]. Available from:
http://arxiv.org/abs/2005.14165
69. IA et Education - Digipad by La Digitale [Internet]. [cited 2023 Sep 5]. Available from:
https://digipad.app