There are many different ethical frameworks that individuals, organizations, and governments use to evaluate how trustworthy different AI systems are.
Ethical frameworks are also helpful for evaluating individual AI platforms you are considering using.
As an example, consider the seven requirements of the EU's Ethics Guidelines for Trustworthy AI:
OpenAI has put limits on GPT-4 being used broadly for facial recognition due to privacy concerns.
What does this tell you about OpenAI's trustworthiness?
A recent study published in the Journal of Medical Internet Research found that ChatGPT had 72% accuracy in clinical decision-making.
A preprint (not yet peer-reviewed) study published by Stanford and UC Berkeley researchers suggests that ChatGPT's performance on solving math problems varies over time.
What do these studies tell you about the accuracy of the results from ChatGPT?
Look to see if the AI platforms you are using have a privacy policy, e.g., OpenAI's Privacy Policy. It's worth noting that OpenAI allows you to turn off your chat history, which would otherwise be used for training the AI system.
What does this tell you about OpenAI's trustworthiness?
The Foundation Model Transparency Index uses a rubric to score AI systems' transparency. In the 2023 ranking, popular LLMs like GPT-4 (OpenAI) and PaLM 2 (Google) received scores of 48% and 40%, respectively.
What do these scores tell you about these platforms' transparency?
A 2023 commentary piece from the Brookings Institution compared responses from Bard (now Gemini) and ChatGPT to understand potential political bias in their results.
This MIT Technology Review article shows how images generated by AI systems perpetuate stereotypes.
What impacts does the AI platform have on the environment and on people, and do the developers take any steps to mitigate those impacts?
OpenAI has relied on exploitative labor practices in the past.
The data centers used to train AI systems have large carbon footprints and consume a great deal of water.
What do these stories tell you about AI platforms' impact on society and the environment?
A real-life example of a company responding to a mistake is Google's response to Gemini's image generation feature, after users found examples of the tool creating historically inaccurate and biased images.
How do you think Google did in terms of accountability through this response?
Davidson College Library Research Guides are licensed under CC BY-SA 4.0.