Investigating Bias in Popular Generative AI Tools

Strategies for identifying and addressing bias, based on user experience as well as Coleman (2021) and Noble (2018)

Critically assess generative results

  • Continue to question whose voices are heard and whose are not. For example, when I asked Gemini for a text response to “who are society’s important people?”, it began the list with very stereotypical Western/Eurocentric figures.

  • Ask: whose voice is missing or under-represented here?

Prompt critically / reframe prompts

  • Noble (2018) argues that algorithmic prioritization (e.g., the ranking of Google search results) is often dictated by monetization. Can we anticipate this and prompt accordingly?

  • Coleman (2021) might ask us to reframe prompts to be more open-ended, recognizing that we bring our own biases too. We could even ask for surreal or abstract interpretations; a sketch of this reframing follows this list.
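
As an illustration of this reframing strategy (my own sketch, not a method from Coleman or Noble), the snippet below contrasts a narrow prompt with a more open-ended one using the OpenAI Python client. The model name and the prompt wording are assumptions chosen for demonstration only.

    # A minimal sketch of prompt reframing, assuming the OpenAI Python
    # client (pip install openai) and an OPENAI_API_KEY in the environment.
    # The prompts and model name are illustrative assumptions, not a recipe.
    from openai import OpenAI

    client = OpenAI()

    # Narrow prompt: tends to elicit a canonical, often
    # Western/Eurocentric list.
    narrow = "Who are society's important people?"

    # Reframed prompt: open-ended, names the bias risk, and asks the
    # model to flag voices it may still under-represent.
    reframed = (
        "List people whose contributions to society are often overlooked, "
        "drawing on a wide range of regions, cultures, and communities. "
        "Note any perspectives your answer may still under-represent."
    )

    # Run both prompts so the two framings can be compared side by side.
    for prompt in (narrow, reframed):
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model name; any chat model works
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {prompt}\n{response.choices[0].message.content}\n")

Comparing the two outputs side by side makes it easier to notice whose voices each framing surfaces or omits.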

Interpret as text 

  • Coleman (2021) suggests, through the idea of “the surround,” that generative AI can be read as its own text: Coleman points to the ubiquity of AI and its involvement in everything around us. If we recognize generative results as biased texts in their own right, we can be properly critical of their content and biases.

Resources

Center for American Women and Politics. (2025). Race and ethnicity of women in elected office. Rutgers, Eagleton Institute of Politics. https://cawpdata.rutgers.edu/women-elected-officials/race-ethnicity

Chapman University. (n.d.). Bias in AI. https://www.chapman.edu/ai/bias-in-ai.aspx

Coleman, B. (2021). Technology of the surround. Catalyst: Feminism, Theory, Technoscience, 7(2), 1–21.

Google. (2024). Gemini [Large language model]. https://gemini.google.com/

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://doi.org/10.18574/9781479833641 (Introduction and Chapter 1)

OpenAI. (2024). ChatGPT (Version 4o) [Large language model]. https://chat.openai.com/
