Visual media professionals are on a perpetual quest for captivating imagery and streamlined ways to source culturally responsive content.
Enter generative AI: a true game-changer that has revolutionized the creative process.
But beneath its convenience lies a profound challenge: AI is unable to understand cultural nuance. This has unique implications for the depiction of LGBTQ+ people.
A recent search for “LGBTQ+ shopping” on Bing AI returned troubling results: predominantly young, slim and conventionally attractive individuals wearing rainbow t-shirts and carrying rainbow shopping bags, in front of a rainbow flag. Similarly, a search for “gay couple” on Midjourney returned hundreds of square-jawed, white, male pairs.
These synthetic visions of the queer community perpetuate stereotypes, exclusionary ideals of age and body type, and biases surrounding ethnicity and gender.
This skewed portrayal stems from several factors entrenched in generative AI’s underlying logic. AI models heavily rely on existing content and data, often failing to capture traits — like sexual orientation and gender identity — that cannot be directly observed.
AI systems can also be fed data that categorizes gender as a binary concept, disregarding nonbinary and transgender identities.
The sensitive nature of collecting data on sexual orientation and gender identity also poses a challenge, as it’s often illegal to do so for privacy reasons. Additionally, attempting to categorize the fluid and contextually variable concept of queerness raises significant philosophical questions around its measurability and reliability.
While AI developers are conscious of the need to address algorithmic bias, a combination of logistical, ethical and legal factors has historically marginalized queer communities in algorithmic fairness research. The media has long been criticized for one-dimensional, stereotypical portrayals of LGBTQ+ identities, which decades of inclusion efforts have sought to rectify.
But AI lacks syncretic knowledge of this progress and its discourse. As an aggregation engine, it misses high-fidelity, granular and representative data, not to mention accurate, personal self-expressions shared with consent.
With almost one in five Gen Z adults now identifying as LGBTQ+, the pressing issue of AI-generated imagery coincides with polarizing political rhetoric and increasing misinformation: a harmful mix that threatens to impede decades of progress in dismantling prejudice.
Prioritizing visuals created by humans is essential to promoting authentic and inclusive LGBTQ+ visibility. Only a human approach can effectively address the need for human-centric representation.
Humans possess emotional depth and lived experiences that AI cannot replicate, allowing us to capture representation authentically. Our intricate understanding of queer stories allows for the illustration of lesser-seen, raw and unfiltered narratives. By embracing empathy and integrating emotional depth into the creative process, visual media professionals can accurately portray the diverse experiences within the LGBTQ+ community.
AI’s reliance on algorithms makes it ill-equipped to address real-time developments or concerns. Queer identities, dynamic and ever-evolving, are difficult for AI to categorize. Neglecting to account for current events risks producing irrelevant and culturally lazy content, eroding audience trust, respect and relevance. Adaptable human creators bring a more responsive approach to changing societal dynamics around LGBTQ+ rights and representation.
Like any system, AI can perpetuate biases ingrained in its training data. By consciously seeking to represent diverse identities and experiences, humans can counteract these biases and stereotypes. This concerted effort, formalized in the creative process, fosters an inclusive media landscape that facilitates understanding and empathy.
Headlines are dominated by the same tension: Why are so many in a rush to remove humans from the effort to craft captivating human stories? Why do we expect a nascent technology trained on biased data to improve our stories without our direct involvement?
To avoid the pitfalls of AI’s incomplete understanding of culture, media makers must prioritize human-created content to promote authentic and inclusive LGBTQ+ visibility.
By harnessing the unique abilities of humans to embrace empathy, be adaptable and stay aware of biases, visual media professionals can address the urgent need for human-centric representation.
Genevieve Ross is creative director at Stocksy.