
Generative Artificial Intelligence

This guide provides an overview of generative artificial intelligence, explores resources for the ethical use of AI, and offers suggestions for effectively integrating generative AI tools into your research.

Ethical Frameworks for AI

There are many ethical frameworks that individuals, organizations, and governments use to evaluate how trustworthy different AI systems are.

Evaluating the Trustworthiness of AI

Ethical frameworks are also helpful for evaluating individual AI platforms you are considering using.

As an example, consider the seven categories of the EU's Ethics Guidelines for Trustworthy AI:

1. Human agency and oversight

  • Does the AI give users (and content owners) the ability to make decisions about their rights?
  • Are humans involved in the training of the AI (human-in-the-loop)?
  • How might an AI tool harm human rights? Is the platform or AI company doing anything to address or curb those effects?

Evaluating Human Agency in Practice

OpenAI has limited the broad use of GPT-4 for facial recognition due to privacy concerns.

What does this tell you about OpenAI's trustworthiness?

2. Technical Robustness and Safety

  • How accurate and reliable are the results from the AI platform?
  • Does the AI platform have safeguards in place to protect against data attacks or other problems?

Evaluating Accuracy & Reliability in Practice

A recent study published in the Journal of Medical Internet Research found that ChatGPT had 72% accuracy in clinical decision-making.

A preprint (not yet peer-reviewed) study by Stanford and UC Berkeley researchers suggests that ChatGPT's performance on solving math problems varies over time.

What do these studies tell you about the accuracy of the results from ChatGPT? 

3. Privacy and Data Governance

  • How does the AI platform handle your personal data?
  • Can you opt out of having your personal data collected?

Evaluating Privacy in Practice

Look to see if the AI platforms you are using have a privacy policy, e.g., OpenAI's Privacy Policy. It's worth noting that OpenAI allows you to turn off your chat history, which would otherwise be used for training the AI system.

What does this tell you about OpenAI's trustworthiness?

4. Transparency

  • How clear is a company about how their AI platform operates and was developed?
  • Does the AI system show where it got the information used to form its responses?

Evaluating Transparency in Practice

The Foundation Model Transparency Index uses a rubric to score AI systems' transparency. In the 2023 ranking, popular LLMs like GPT-4 (OpenAI) and PaLM 2 (Google) received scores of 48% and 40%, respectively.

What do these scores tell you about these platforms' transparency?

5. Diversity, Non-Discrimination, and Fairness

  • Does the AI show bias in its results?
  • Is the AI designed with all users in mind (i.e., does it meet accessibility standards and follow universal design principles)?

Evaluating Bias in Practice

A 2023 commentary piece from the Brookings Institution compared responses from Bard (now Gemini) and ChatGPT to understand potential political bias in their results.

This MIT Technology Review article shows how images generated by AI systems perpetuate stereotypes.

6. Societal and Environmental Well-Being

What impacts does the AI platform have on the environment and on people, and do the developers take any steps to mitigate those impacts?

Evaluating Societal and Environmental Well-Being in Practice

OpenAI has been involved in exploitative labor practices in the past.

The data centers used to train AI models have large carbon footprints and consume a great deal of water.

What do these stories tell you about AI platforms' impact on society and the environment?

7. Accountability

  • How do AI platforms and the companies behind them respond when they are responsible for harm?

Evaluating Accountability in Practice

A real-world example of a company responding to a mistake is Google's response to Gemini's image generation feature after users found that the tool created historically inaccurate and biased images.

How well do you think Google demonstrated accountability through this response?

Davidson College Library Research Guides are licensed under CC BY-SA 4.0.
