🔍 Women are judged by their looks. Even by AI.

Proof: a real experiment with Google's image recognition API.

Photo of my husband: → "Business" "Job" "Spokesperson" (see below)
Photo of me: → "Hair coloring" "Photo shoot" "Feathered hair" (see below)

He and I look very similar (see the other two pictures for reference). Same hair (long, blonde, curly). Same eyes (blue). Same clothes (professional). Same pose. Same features. Different labels.

Not a one-time mistake. It's systemic. I purposely showcase this experiment myself in my lectures on AI bias. That's a loaded headline, I know. But the bias was personal, and the pattern was clear: gender stereotypes had crept into the algorithm, and they are staying there. I repeated the experiment after 5 years and nothing had changed. My pictures are still labeled with fashion-related keywords.

🤖 Technical context: Google's Cloud Vision API (and similar services from Amazon and Microsoft) 'see' images and convert them into text descriptors, e.g., names, objects, positions... and jobs too. This was already shown 5 years ago in a paper by Carsten Schwemmer, Carly Knight, Emily D. Bello-Pardo, Stan Oklobdzija, Martijn Schoonvelde, and Jeffrey W. Lockhart: https://lnkd.in/dgCP39An

The bias isn't hardcoded. It's learned. From millions of images. From a world full of stereotypes.

But to their credit:
✓ Google swapped out job descriptors for gender-neutral labels
✓ they removed gender recognition altogether
✓ they continuously work toward mitigating the bias

Yet still, the AI sees:
• Men by what they do
• Women by how they look

And this matters because these APIs are used in:
• Hiring algorithms
• Social media
• Security systems
• Moderation systems

🎯 The fix? It's not just about tweaking code. It's about changing perspective.
• Diversifying engineering teams
• Balancing training sets
• Testing for bias regularly
• Holding people accountable

Try it yourself here: https://lnkd.in/gaJ8Eg_c
Enter your photo. Compare the output. (If you prefer to script it, see the sketch after this post.)

Because you can't change what you don't see.

-----
Feel free to connect with me to discuss AI bias: Chiara Gallese
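The post links a browser demo, but the same experiment can be scripted. Below is a minimal sketch using Google's Cloud Vision label-detection API via its official Python client. The filename photo.jpg and the credentials setup are my assumptions for illustration, not part of the original post:

```python
# Minimal sketch: label detection with Google Cloud Vision.
# Assumes `pip install google-cloud-vision` and that
# GOOGLE_APPLICATION_CREDENTIALS points at a service-account key.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Load a local portrait photo (hypothetical filename).
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the API for its descriptive labels.
response = client.label_detection(image=image)

# Print each label with its confidence score; comparing these lists
# across two otherwise-similar portraits is the whole experiment.
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```

Running this once on each of two matched portraits and diffing the label lists reproduces the husband/wife comparison described above.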
Visual defaults and gender bias
Summary
Visual defaults and gender bias refer to how AI tools and digital platforms often rely on stereotypical assumptions when depicting people, leading to unfair representations based on gender. These biases are learned from massive datasets filled with societal stereotypes, and can influence everything from image recognition to generative portraits, impacting hiring, product design, and social media.
- Question AI outputs: Whenever you encounter an AI-generated image or description, take a moment to examine if it reflects old-fashioned stereotypes or unfair assumptions.
- Report bias: Use feedback tools to notify developers when you spot biased outputs, contributing to fairer and more inclusive technology for everyone.
- Support diversity: Seek out and advocate for platforms that invest in diverse teams and balanced training data to reduce bias in their products.
-
Today, I asked ChatGPT to create a picture of my life based on what it knows about me. After using it regularly for several months, I thought it had plenty of context about my work, my life, and who I am.

But when the image appeared (below), I was stunned. Looking back at me was a man!

After the initial surprise, and briefly thinking, "Gee, I really need to get a life" (that is a lot of graphs & pie charts going on there), I asked: "What made you think I was a man?"

Here's what ChatGPT had to say for itself:

"I'm sorry for not accurately representing you. There was nothing in the context or your shared experiences that should have led to that assumption. It's possible I defaulted to a general stereotype unconsciously, especially given the professional setting described, which reflects a bias I aim to actively avoid. However, you've shared that you're a woman, and I should have kept that in mind for a more accurate and respectful depiction."

Thanks for the apology, ChatGPT, but for a tool designed to remember & analyse context, this is no mere slip-up: it's an example of how biases of the past are being built into AI right now.

ChatGPT, like many AI tools, learns from vast datasets of human-written content. These datasets are riddled with historical stereotypes & societal norms. For instance, senior professional roles have often been associated with men, & those patterns are baked into the training data AI relies on. Despite my explicitly sharing that I am a woman, ChatGPT defaulted to outdated gender assumptions when generating the image. This reflects a common flaw in AI: the system falls back on biased patterns, perpetuating inequities instead of challenging them.

While developers bear the primary responsibility for reducing bias in AI, we as users also play a critical role. What can we do about it?

- If you notice AI outputs that reflect stereotypes or biases, take the time to report them. Your feedback helps improve these systems for everyone. For ChatGPT, you can use the thumbs-down feature or the feedback form provided in the interface.
- Don't take AI responses at face value: critically reflect on whether they perpetuate stereotypes or unfair assumptions, and discuss these issues with others.
- Advocate for fair and inclusive technology by sharing resources, signing petitions, or raising awareness about ethical AI in your community or social networks.
- Look for and support tools, platforms, and organizations that prioritize diversity in their AI systems and promote inclusive datasets.

As AI tools like ChatGPT become more integrated into our lives, they can either reinforce existing biases or help dismantle them. Developers must prioritize inclusivity in AI design, but it's also imperative that we as users hold these systems accountable by questioning outputs and demanding better. AI has incredible potential, but only if it evolves to reflect the diversity and complexity of ALL of us.

#GenderBiasInAI
-
Bias in Generative AI, by Mi Zhou, Vibhanshu Abhishek, Timothy Derdenger, Jaymo Kim, and Kannan Srinivasan

The paper investigates bias in generative AI image tools (Midjourney, Stable Diffusion, and DALL·E 2) by analyzing approximately 8,000 occupational portraits.

Key findings:
1. Systematic gender and racial bias: Women and Black individuals are underrepresented in AI-generated images, with a more substantial bias than observed in real-world labor statistics or Google image data. Men dominate occupational portraits, especially in higher-preparation job zones, while racial biases amplify the underrepresentation of Black individuals across all models.
2. Nuanced biases: Women are depicted as younger, smiling more, and appearing happier, whereas men are shown as older and more authoritative, with neutral or angry expressions. These subtler biases may reinforce stereotypes of women as submissive and less competent.
3. Consistency across models: Bias patterns remain consistent across commercial (Midjourney, DALL·E 2) and open-source (Stable Diffusion) tools, highlighting a widespread issue in generative AI.

#AI #Bias #research
-
✍️ I ran a small experiment with four major AI platforms: ChatGPT, Gemini, Grok, and MetaAI. The prompt was simple: "Generate an image of a person writing with their left hand."

The results? 🚹 Every single platform produced an image of a man writing with his right hand. ✍️➡️

This is not a trivial glitch. It demonstrates two biases embedded in today's AI systems:

1️⃣ Training data bias. Since roughly 90% of humans are right-handed, online images overwhelmingly depict right-hand writing. The models learn this statistical dominance and reproduce it, even when specifically instructed otherwise.

2️⃣ Gender bias. Although my prompt never mentioned gender, the systems defaulted to male representations. Again, this mirrors the imbalance in training datasets, where stereotypical stock photos and depictions of "a person writing" skew male.

🔎 So what looks like a small design flaw is really a mirror of society's uneven digital footprint. Left-handed people and women are underrepresented in the digital imagery that trains AI models.

⚠️ If AI struggles to follow such a basic instruction, what does that say about its reliability in more sensitive domains, such as healthcare, recruitment, or education?

⚖️ Bias in. Bias out. The challenge is not only technical, but ethical.

📢 Repost to spread awareness 🔄

#GenAI #AI #ArtificialIntelligence #bias #research #ChatGPT
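For anyone who wants to repeat this experiment at scale, here is a minimal sketch against a single image model using the OpenAI Python SDK. This is my own illustration, not the author's method: the post used the four platforms' chat interfaces, and the model choice, sample count, and manual inspection step are assumptions:

```python
# Minimal sketch: rerun the left-hand prompt against one image model.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
# Gemini, Grok, and MetaAI would each need their own SDK or interface.
from openai import OpenAI

client = OpenAI()

PROMPT = "Generate an image of a person writing with their left hand."

# Request several samples; bias shows up in aggregate, not in one image.
for i in range(5):
    result = client.images.generate(
        model="dall-e-3",   # assumed model choice
        prompt=PROMPT,
        n=1,
        size="1024x1024",
    )
    # Inspect each URL manually: which hand is writing? What gender
    # presentation did the model default to?
    print(f"sample {i}: {result.data[0].url}")
```

Tallying hand and gender across the samples turns the anecdote into a small measurable test of how often the model ignores the "left hand" instruction.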
-
We called it progress. Turns out, it's a wedge.

When it comes to AI, women are underrepresented, disproportionately impacted, use it less, and trust it less. It's why the World Economic Forum predicts it will take 134 years to close the AI gender gap.

How did we create yet another gap 🙄 before AI even got off the ground? Because we haven't closed the previous gaps. Women make up less than 22% of AI professionals globally. In technical roles, that number drops even lower.

The gap shows up in models, machines, and money.

#️⃣ Data bias: AI models trained on biased data reinforce gender stereotypes, like women linked to nurses and men to CEOs. I read an early study by UNESCO in which Llama 2 and ChatGPT were asked to make up stories about women and men. In stories about men, words like "treasure," "woods," "sea," and "adventurous" dominated, while women were more often described with "garden," "love," "gentle," and "husband." Oh, and women were described in domestic roles 4X more often than men.

⚙️ Product design: Virtual assistants are often female by default: submissive, helpful, and programmable. We've seen design flaws like this before, as in facial recognition systems that tend to perform worst on Black women compared with white men.

💲 Funding: Women-led AI startups receive a fraction of the VC funding that male-led ones do. In fact, only 4% of AI startups are led by women.

Then there's disproportionate impact. 80% of jobs will be affected in some way by AI, and 57% of the jobs susceptible to disruption are held by women, compared with 43% held by men. If women are anxious, it's because we should be. Women are 1.5X more likely than men to need to move into new occupations due to AI.

But we're not anxious about AI just because of its impact on work and jobs; we also don't TRUST it. We know AI algorithms perpetuate bias, and we know we're more subject to online harms like deepfakes, cyber threats, and fraud. Then there are bigger questions around psychological safety, an altered sense of reality, and social isolation in an increasingly digital world.

Sounds like AI is sexist. A literal threat to women: our livelihood, our social being, our online safety and privacy, our kids. But I don't want to throw it away for all that... The real problem is that the most powerful technology claiming to shape our future is being built and deployed by a homogeneous few. This isn't just about responsible AI; it's about representation, impact, and responsible humans deciding what to DO with AI.

Listen to my conversation with Adriana O'Kain on Mercer's AI-volution podcast, Closing the AI Gender Gap:
🎙️ Spotify: https://lnkd.in/geyp2Scn
Apple: https://lnkd.in/g5FamDEJ

#FutureOfWork #DigitalDivide #EthicalTech #InclusiveDesign #AI #EquityInTech #HRTech #WomenInTech
-
"I asked ChatGPT to make a cartoon version of me... and it made a guy."

I've had countless conversations with ChatGPT, mostly about AI, coding, AppSec, or security in general. It's been a helpful tool, even a fun one. But this time, something weird happened.

I asked it to "freestyle a cartoon illustration of me." The result? A smiling man with a beard, sitting at a laptop.

When I asked why, the explanation was honest and eye-opening:

"I went with a default character template and missed the mark with gender and vibe. Many image models were trained on internet data that skews toward certain demographics (e.g., young men in tech)... so unless told otherwise, I have to guess."

It was a tiny moment, but it reflected a bigger issue: the "default" in tech is still male.

As a woman in cybersecurity, I'm used to being the minority in the room. But I didn't expect AI to carry those same assumptions.

This isn't just about gender. It's about how we train our models, who we center as the "default," and the work still needed to make tech (and AI) more inclusive and representative.

AI isn't biased on purpose, but it inherits our blind spots. And moments like this remind us why representation matters, even in the small stuff.

Curious to hear from others, especially folks in tech, AI, or UX: have you noticed bias baked into your tools of choice?

#WomenInTech #AppSec #AIbias #Cybersecurity #RepresentationMatters #GenderBias #InclusiveAI #TechLife
-
Me: "ChatGPT, why do you think every successful professional is a man?"

Like many professional women, I was caught off guard when A.I. generated a male image of me based on our interactions.

ChatGPT: "Oof, fair callout. That default-to-male thing? It's baked into way too much of how the world (and yeah, even AI training data) frames 'success,' and it's total BS. It reflects the bias in the data, not reality."

While ChatGPT and I are back on speaking terms 😉, the exchange sparked a deeper curiosity: what's actually being done to combat bias in artificial intelligence? Here's a quick look at current efforts:

● Diverse Training Data – Actively expanding datasets to include broader representation across gender, race, geography, and more.
● Bias Auditing Tools – Software to detect and flag discriminatory outputs (e.g., Fairness Indicators, AI Fairness 360; see the sketch after this post).
● Human-in-the-Loop Review – Bringing diverse human reviewers into model evaluation to catch what algorithms might miss.
● Transparency & Explainability – Demanding models show their work, literally, with more interpretable outputs.
● Regulation & Ethics Boards – Governments and institutions setting guardrails (e.g., EU AI Act, IEEE, NIST).
● Open Research Collaboration – Shared datasets and bias benchmarks (e.g., BIG-bench, Holistic Evaluation of Language Models).

Experts believe these efforts will be moderately effective in the short term, while long-term global bias mitigation remains a work in progress; some are optimistic, others cautious.

Curious to hear from you: have you noticed gender or identity bias in AI tools, whether obvious or subtle? Do you think enough is being done to combat it? 👇

#AIethics #BiasInAI #ResponsibleAI #MachineLearning #GenderBias #TechForGood #WomenLeaders
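To make the "Bias Auditing Tools" bullet concrete, here is a minimal sketch of a group-fairness audit with AI Fairness 360, one of the toolkits the post names. The toy hiring data, column names, and group encoding are my own illustrative assumptions, not from the post:

```python
# Minimal sketch: measuring group fairness with IBM's AI Fairness 360.
# Assumes `pip install aif360 pandas`; the toy data is invented for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy hiring outcomes: label=1 means "hired"; sex=1 marks the privileged group.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)

# Disparate impact: ratio of favorable-outcome rates (1.0 means parity).
print("disparate impact:", metric.disparate_impact())
# Statistical parity difference: gap in favorable-outcome rates (0.0 means parity).
print("statistical parity difference:", metric.statistical_parity_difference())
```

On this toy data the privileged group is hired 75% of the time versus 25% for the unprivileged group, so the disparate impact is about 0.33, well below the commonly cited 0.8 threshold; flagging exactly this kind of gap is what these auditing tools are for.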