What’s in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT
- Title
- What’s in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT
- Description
- This empirical study aimed to characterize evidence of gender bias in recommendation letters generated by ChatGPT. Understanding social biases in generative AI is key to understanding human flourishing in the age of AI.
- Creator
- Deanna M Kaplan; Roman Palitsky; Santiago J Arconada Alvarez; Nicole S Pozzo; Morgan N Greenleaf; Ciara A Atkinson; Wilbur A Lam
- Source
- https://www.jmir.org/2024/1/e51837
- Publisher
- Journal of Medical Internet Research
- Date
- 2024
- Contributor
- Deanna K. from Atlanta, GA
Citation
Deanna M Kaplan; Roman Palitsky; Santiago J Arconada Alvarez; Nicole S Pozzo; Morgan N Greenleaf; Ciara A Atkinson; Wilbur A Lam, “What’s in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT,” The People's Canon, accessed December 9, 2025, https://peoplescanon.ecdsomeka.org/items/show/9.
