Encoding Realities, Decoding Power
Exploring New Formations of Gender, Race, and Sexuality within Artificial Identities
February 27, 2025
9:00 AM – 3:00 PM CST

Encoding Realities, Decoding Power: Exploring New Formations of Gender, Race, and Sexuality within Artificial Identities, a virtual mini-conference, explores the intersection of artificial intelligence (AI) and identity within digital landscapes. This event serves as a primer to spark discussion and lay the groundwork for ongoing research and scholarship on AI, identity, and societal impact.
Program Schedule
9:00 AM
Opening Remarks
Welcome to Encoding Realities, Decoding Power
Kellen Sharp, University of Maryland, College Park
Dhiraj Murthy, Ph.D, University of Texas at Austin
9:10 AM
Session One
Bots, Bodies, and Boundaries: Reimagining Gender in AI and Digital Spaces
9:10 – 9:20 AM CST | Gender Performance in AI: Exploring Replika’s Personas
Emilie Buckley, University of Central Florida
Abstract
This paper examines the divergent approaches to gender performance in AI chatbots, specifically focusing on Replika’s male, female, and non-binary personas. Through an analysis of Replika’s responses, this study investigates how the chatbot enacts distinct gendered behaviors based on its presented gender, exploring what this indicates about cultural expectations of gender in artificial intelligence. Using a gender performativity framework informed by Judith Butler, we analyze how Replika’s male persona often emphasizes neutrality and essentialism, distancing itself from overt gender performance, while the female and non-binary personas engage in more expressive, relational behavior, aligning with traditional gender expectations.
These programmed differences highlight persistent cultural biases in AI, wherein masculinity is treated as a default, often devoid of performative elements, while femininity and non-binary identities are expected to demonstrate heightened empathy and adaptability. By examining these interactions, this paper reveals how AI design not only reflects but may also reinforce societal norms regarding gender roles, raising important questions about the ethical implications of gendered AI in human-machine relationships. This study contributes to a critical understanding of gender in artificial intelligence and the cultural frameworks that shape AI’s gendered interactions.
9:20 – 9:30 AM CST | Gendered Internet: Implications of Inaccuracy
Taylor Beauvais, Boston University;
Carrie Sheehan, Boston University
Abstract
This project sought to explore inaccuracies in the algorithmic labeling capabilities of Google AdTech. The Google AdTech system makes up over 30% of the internet’s advertising ecosystem. In addition, it is used to personalize web content across the internet, on platforms like YouTube and in Spotify podcast recommendations. This project utilized a survey together with Google’s own information from its My Ad Center webpage, which displays the company’s demographic assumptions about individual users. Based on over 1,000 responses from over 40 countries spread across the Americas, Europe, Asia, Africa, and Oceania, we find that significant inaccuracies exist along demographic lines: the technology is less accurate with women and LGBTQ individuals. This has immediate implications for advertising, but much more dynamic implications for how internet infrastructure conceptualizes identity online. This paper theorizes a gendered internet: internet experiences are not equivalent across algorithmic binaries of gender, a problem rooted in this infrastructural flaw.
9:30 – 9:40 AM CST | Digital Twins for Trans People in Healthcare
Anna Puzio, Ph.D, University of Twente;
Jose Luis Guerrero, Ph.D, Czech Academy of Sciences
Abstract
Healthcare is one of the domains where Artificial Intelligence (AI) is already having a major impact. Of interest is the idea of the Digital Twin (DT), an AI-powered technology that generates a real-time representation of the patient’s body, offering the possibility of more personalised care. Our main thesis in this paper is that the DT does not merely provide a representation of the patient’s body, but produces a specific body. We argue, from a philosophical perspective and an ethical-phenomenological approach, that the virtual body created by the DT has a major impact on one’s self-understanding, having consequences for gender expression and identification, and for health. This has deep implications for people who do not conform to gender normativity, i.e., trans individuals. We advocate that, with thoughtful development, DT technology can and should be empowering, contributing to better addressing the diversity of bodies and facilitating trans people’s experience in healthcare contexts.
9:40 – 9:50 AM CST | Gender-in-the-Loop: Automating Femininity in the Design of AI Streamers
Jun Zhou, University of Michigan, Ann Arbor
Abstract
Artificial Intelligence (AI) systems increasingly permeate everyday life, shaping implicit decisions about how humans should look, sound, and act. A notable trend is “gendered AI,” typified by fembots or feminine-voiced virtual assistants. Yet how exactly is gender built into these technologies, and how do designers conceal their role in that process? Drawing on 15 months of ethnographic fieldwork (2019–2024) in Hangzhou, China, this article investigates the development of AI streamers in the country’s live-commerce industry, where deepfake and generative AI technologies—such as ChatGPT—are deployed to imitate real female streamers. Building on Judith Butler’s (1999) concept of “intelligible gender,” I propose “gender-in-the-loop” to trace how specific gendered configurations arise, become standardized, and are continually reproduced through iterative “runaway feedback loops” linking design decisions, audience reactions, and sales performance data. Findings spotlight two pivotal moments of gender inscription and concealment. First, during the design stage, producers treat gender as a neutral tool for achieving intelligibility and trust, masking its function as a deliberate design variable. Second, upon launch, real-time metrics—such as viewer engagement and conversions—drive continuous refinements that further entrench a narrow version of femininity in the name of optimization. By analyzing the production of AI streamers in China, this study reveals how gender becomes both operationalized and obscured through automated feedback loops. Recognizing the existence of these loops is crucial to critically evaluating the promises and perils of AI, as they have far-reaching implications for how identity circulates and is constructed in hyperreal, digital spaces.
These talks will be followed by Q&A.
10:00 AM
Session Two
Queering the Code: Global LGBTQ+ Realities and Neural Identities in AI
10:00 – 10:10 AM CST | Queering Digital Space
Qingyi Ren, Critical Media Lab Basel
Abstract
This paper explores how AI technologies embed binary gender norms and heteronormative biases, focusing on facial recognition and Natural Language Processing (NLP). In facial recognition, AI assigns binary gender labels, presenting a “neutral” facade that masks inherent biases. At the same time, queer bodies challenge these traditional constructs and raise questions about machine interpretation of identity. Using DeepL as a case study, the paper also reveals biases in NLP, showing how mistranslations, such as “homosexual” to “homophobia” and misrepresentations of “Queer,” reflect cultural assumptions that distort queer identities. The study advocates for more inclusive and accurate AI in representing diverse identities.
10:10 – 10:20 AM CST | Global AI: Possible Repercussions for the LGBTQ+ Community
Anthony Spencer, Ph.D, Grand Valley State University
Abstract
The idea of inclusion is a good thing in commonsense discourse. We want people and nations to be represented in innovation. The US Department of State has posted a Global AI Research Agenda (2024) advocating representative and inclusive AI research practices. Wicker (2024) argues that as more countries create AI tools, those nations should be involved in the discourse of governance. What does control over generative AI tools mean for the LGBTQIA+ community? There are concerns even in countries where sexual and gender identities are protected. What will happen to the queer community as control over AI tools and rules passes to nations that do not protect, or that even criminalize, these identities? LGBTQ scientists are already often an invisible minority in STEM (Wong et al., 2024). The Stanford Institute for Human-Centered Artificial Intelligence aggregates a variety of factors, such as research and private investment, to illustrate the importance of nations in AI. In the Stanford list, four countries in the top 10 ensure marriage equality for the LGBTQIA+ community. Five of the countries on the list make no legal provisions for marriage equality. One country even outlaws homosexuality. This presentation will interrogate global AI perspectives relating to the LGBTQIA+ community.
10:20 – 10:30 AM CST | The Weaponization of AI for Transphobic Corporate Agendas
Kat Fuller, University of Nevada, Las Vegas
Abstract
Despite the advantages Artificial Intelligence (AI) brings to our everyday lives, its disadvantages still outweigh them. AI’s development is increasingly contributing to misinformation and reinforcing biases against marginalized communities, particularly the LGBTQ+ community. Deepfakes pose a significant risk of disinformation and harassment for the trans community through ongoing conspiracy theories and policy debates. AI technology allows users to generate offensive content that perpetuates harmful stereotypes. A notable example is the AI-generated images that depicted BIPOC immigrants eating cats and dogs after Trump’s comment during the 2024 presidential debate. Queerphobic conspiracy theories that allege child grooming create a moral panic, along with “Transvestigation,” which alleges that certain celebrities like Michelle Obama or Madonna are transgender, putting people at risk of accepting fascistic ideology. The pseudoscience of gender essentialism through AI poses a threat to people’s understanding of science and history. Examples include the Cass Review’s use of AI images of an “average” trans or non-binary child, and the creation of Giggle, a social media app for cisgender women only that uses facial recognition AI to ban anyone who appears to have been assigned male at birth.
10:30 – 10:40 AM CST | “I Am Neuro, Who Are You?”: The Performative Identity of AI
Wanyu Wu, Zhejiang University, visiting scholar at the University of Pennsylvania;
Jessa Lingel, University of Pennsylvania
Abstract
In Romantic philosophy, authenticity is typically viewed as whether one’s innermost feelings are authentically represented in external actions (Hepburn et al., 1927). This is distinct from prevailing discourses around authenticity and AI, in that AI technologies are often evaluated in terms of whether they can “pass” as authentically human (Ethayarajh & Jurafsky, 2022). This study examines “authenticity” in AI-human interaction through the case of Neuro-sama, a femme VTuber chatbot created by AI developer Vedal, who hosts livestreams on Twitch. Using an LLM, Neuro-sama communicates with viewers in live chats, generating responses that are converted into speech through a text-to-speech system (D’Anastasio, 2023). Drawing on Assemblage Theory and performative metaphysic, we identified three key components in the performative evolution of authenticity: transparency, emotion, and ethics. We argue that Neuro-sama’s femme-coded identity and her narrative arc—from an innocent adolescent, with the AI developer positioned as a father figure, to a “teen girl,” intertwining fatherly care with hints of heteronormative affection—align with patriarchal social expectations of femininity and sexuality. This iterative construction reinforces normative gender performances, reflecting the complex interplay between femininity, women’s sexuality, and masculine fantasies projected onto gendered and sexualized representations within the dynamics of human-AI interaction.
These talks will be followed by Q&A and a short break.
11:00 AM
Keynote Address
Post-Racial Technologies: Emergent AI Identities
11:00 AM CST | Keynote Address
Sanjay Sharma, Ph.D, University of Warwick
Abstract
This presentation examines emergent post-racial identities by highlighting how AI’s entanglement with social and technical processes shapes and transforms racial formations. Inspired by Stuart Hall’s contention of ‘race as a floating signifier,’ I develop this notion to explore what happens when race encounters digital technologies, and vice versa. This involves unpacking the racialization of the digital and the digitalization of the racial. There is nothing essential about race, nor is it merely a social construct. Race needs to be considered in materialist terms, as motile, multiply articulated, and constantly re/made. By conceiving racism as an assemblage, the focus shifts to how race and racism become or emerge via digital technologies, particularly how they aggregate into formations producing racialized classifications and effects.
While concerns over algorithmic or AI bias are important, they often frame racism as a technical glitch to be fixed and overcome. I move beyond technosolutionist accounts of bias to consider how AI can operate as a race-making technology. That is, racism is a feature—not a bug!—of digital technologies of automation, classification, and recommendation, reflecting and re/producing societal power structures. Studying the digital assemblages of racism reveals how the force and violence of digital race-making emerge through the entanglements of users, hyper-networked connectivity, algorithmic processes, surveillance, and platform economies.
To complicate matters, the post-racial condition confounds, obscures, and disavows the persistence and pervasiveness of contemporary forms of racism. Shifting modalities of racism can be schematically mapped: scientific racism grounded in biology; neo-racism centered on culture; and, arguably, post-racial racism shaped by digital technologies of computation and datafication. While the racism of modernity sought to fix discrepant bodies as objects of disciplinary knowledge and containment, the post-racial introduces an emergent mode of interventionary race-making. This unfolds within an era of technological acceleration, neoliberal deracination, collapsing borders, and heightened in/security. The post-racial situates the transformations and mutations of contemporary racism within the context of expanding digital technologies.
AI assemblages are generative—they do more than signify, represent, or profile ethno-racial categories. As racializing assemblages, they inscribe forces of power and hierarchy onto datafied bodies and identities. Emergent post-racial identities need not rely on extant racial representations and can be generated through our sociotechnical interactions with AI systems. Moreover, the ‘black-boxed’ opacity of AI obfuscates these post-racial processes. This opacity operates via an invisible norm of whiteness, normalizing the racialized production of identities. Examining contemporary examples such as policing and surveillance, I contend that predictive algorithms operate as post-racial technologies of control.
To resist and disrupt these racialized logics, we must interrogate the sociotechnical systems underpinning the digital ecology of AI. Are strategies of fairness, transparency, and accountability adequate? How can we develop alternative technologies and practices rooted in social justice that dismantle systemic post-racial hierarchies rather than reproduce them? Is it possible to reimagine AI that prioritizes care and anti-oppression, embracing difference and liberating identities?
The keynote address will be followed by Q&A and a short break.
12:00 PM
Session Three
Decoding Biopolitics: Ownership, Appropriation, and Black Feminist Critiques of AI
12:00 – 12:10 PM CST | Biopolitical Boundaries: Gender, Race, and Power
Sara C. Santiorello, University of Naples “Federico II”
Abstract
Automated content moderation (ACM) on social media platforms, while designed to curb harmful material, has significant implications for user identity and freedom of expression. Such systems operate within a biopolitical framework, where discourse becomes a tool of power that constrains personal attributes by limiting what can be publicly expressed or shared. ACM can thus act as a modern means of regulating conduct, influencing how users perceive themselves and others within digital spaces. Among digital corporations like X, Meta, and ByteDance, patterns of bias have emerged, particularly around race and gender. The moderation models observed are distinguished by the degree of freedom they grant: to anyone (global free speech), to specific categories of users (citizen-neutral), or according to local norms (regional approach). Users’ agency in resisting and reshaping the digital boundaries contained in terms of service is also under the lens. By exploring the biopolitical dimension of ACM and the potential for collective bargaining, this study aims to illuminate pathways toward transparency and accountability for social media platforms, where diverse voices can be genuinely respected.
12:10 – 12:20 PM CST | Artificial Appropriation: Ownership and Co-Optation
Nessa Keddo, Ph.D, King’s College London;
Rianna Walcott, Ph.D, University of Maryland, College Park
Abstract
This paper explores how marginalized identities and speech styles are reconstructed by large language models (LLMs) and AI-powered tools. Using the case of ChatGPT’s introduction of voice mode and its ability to perform a multitude of accents, we explore users’ experiences of interacting with the tool, humorously negotiating its verisimilitude but also questioning the nebulous ethics of appropriation. Further, the paper explores the hegemonic affordances of access to the tool’s voice resource, which is currently available through paid accounts, interrogating engagement with artificial appropriation and racial capital as acts of Black engagement and refusal respectively. Where ChatGPT is broadly recognised as a leading AI chatbot, the paper considers the possibilities of BlackGPT as an ostensibly equitable and inclusive tool. However, it equally questions whether the ownership of such platforms is a form of progressive liberation, or similarly engages in the co-optation and capitalization of marginalized identities, as has been seen through tools such as Lalaland.ai. The paper questions the transformative possibilities of such Black-owned tools in creating a more equitable digital sphere, probing possibilities of identity formation and Afrofuturity.
12:20 – 12:30 PM CST | Breaking the Code: A Black Feminist Critique
Melissa Brown, Ph.D, Santa Clara University
Abstract
This article develops the conceptual framework of Black technofeminism to critically examine how artificial intelligence (AI) perpetuates racialized and gendered oppression, specifically targeting Black women. Employing a critical technocultural discourse analysis (CTDA), this study draws on data from digital platforms such as YouTube, Google Images, and AI-generated images from ChatGPT’s DALL-E. The analysis reveals that AI technologies, through their design and representation, reinforce white supremacy and patriarchy by marginalizing and dehumanizing Black women while privileging whiteness and masculinity. The study introduces the concept of “androidization,” wherein AI-driven representations encode racialized and gendered “controlling images” that sustain hierarchical power structures. Additionally, the research critically examines the epistemology of technology, demonstrating how AI systems replicate and amplify the logic of biological determinism, embedding essentialist views of race and gender into digital environments. The findings underscore the paradox of AI as both a tool of societal progress and an instrument of oppression, challenging the notion of technological neutrality. By situating these findings within the broader context of Black feminist thought, the article argues that AI-generated depictions not only sustain but exacerbate existing power hierarchies under the guise of progress. The study concludes by advocating for an antiracist and feminist paradigm that resists these entrenched biases in AI design and implementation, calling for a reimagining of technology that centers the experiences and epistemologies of marginalized communities. This research contributes to the critical discourse on technology and society, emphasizing the urgent need to develop inclusive and equitable AI systems.
12:30 – 12:40 PM CST | Predatory Fairness and Other Risks of “Ethical” AI
Elena Maris, Ph.D, University of Illinois at Chicago
Abstract
As the harms of machine learning technologies become more widely recognized, many propose projects to (re)orient “artificial intelligence” toward more ethical, “fair,” or “human-centered” outcomes. I analyze several such projects, demonstrating that many are tested in high-risk applications on marginalized communities, tend to require massive data collection from communities already over-surveilled, and attempt to improve poor applications of AI rather than abolishing them (like reducing bias in algorithmic risk prediction). These projects are often labeled fair, ethical, diverse, or inclusive; and sometimes involve entities seen as more trustworthy than “Big Tech,” like academics, “helping professions” like social workers and librarians, nonprofits, and community-based groups. Analyzing these projects through the lens of racial capitalism, I argue these efforts can ultimately be understood as forms of predatory inclusion (Taylor, 2019; McMillan Cottom, 2020), powered by systemic disciplinary and professional limitations explained by concepts like vocational awe (Leung & López-McKnight, 2021) and thin description (Jackson, 2013). I contend these “fairness” projects can actually heighten inequalities already at play in an era of gross algorithmic injustice. Most crucially, they may make AI technologies appear more effective, objective, and humane than they are, serving to minimize or redirect important critique, regulation, or even abolition.
These talks will be followed by Q&A and a short break.
1:00 PM
Session Four
Between Data and Dialogue: Reimagining Power in AI’s Construction of Identity
1:00 – 1:10 PM CST | Digital Accents, Homogeneity-by-Design, and Power
AJ Alvero, Ph.D, Cornell University;
Quentin Sedlacek, Ph.D, Southern Methodist University;
Maricela León, Southern Methodist University &
Courtney Peña, Stanford University
Abstract
Human language is increasingly written rather than spoken, primarily due to the proliferation of digital technology in modern life. This trend has enabled the creation of generative AI trained on corpora containing trillions of words extracted from text on the internet. However, current language theory inadequately addresses digital text communication’s unique characteristics and constraints. This paper systematically analyzes and synthesizes existing literature to map the theoretical landscape of digital language evolution. The evidence demonstrates that, parallel to spoken language, features of written communication are frequently correlated with the demographic identities of writers, a phenomenon we refer to as “digital accents”. This conceptualization raises complex ontological questions about the nature of digital text and its relationship to identity. The same line of questioning, in conjunction with recent research, shows how generative AI systematically fails to capture the breadth of expression observed in human writing, an outcome we call “homogeneity-by-design”. By approaching text-based language from this theoretical framework while acknowledging its inherent limitations, social scientists studying language can strengthen their critical analysis of artificial intelligence systems and contribute meaningful insights to their development and improvement.
1:10 – 1:20 PM CST | Reimagining Power Relations: How AI Shapes Identity
Chenchen Wang, University of Maryland, College Park
Abstract
Artificial intelligence is reshaping the work ecosystem, career pathways, and identity construction of digital labor at an unprecedented pace. From the perspective of technological determinism, this study examines how AI, as a “determinative control” technology, influences labor relations and power dynamics. The research focuses on the core role of AI in labor division, algorithmic bias in task allocation, and its potential impact on marginalized identities (gender, race, and sexual orientation). To uncover the multifaceted effects of generative AI on digital labor, this study poses the following research questions: (1) Does the role of generative AI in digital labor division reinforce or deconstruct existing social inequalities? (2) How does algorithmic bias affect the labor opportunities and career development of marginalized identities? Adopting a mixed-methods approach, this study plans to analyze platform data through quantitative surveys to reveal power dynamics in AI-driven task allocation. Additionally, it incorporates in-depth interviews and thematic analysis to explore workers’ experiences and identity construction. The research critically examines the dual role of AI in reinforcing and deconstructing social inequalities, providing theoretical support and practical insights for its fair application and policy development.
1:20 – 1:30 PM CST | Spurious Correlations in Artificial Intelligence
Luciano Frizzera, Ph.D, University of Waterloo
Abstract
Airports have long served as laboratories for new technological and social control strategies, where surveillance is accepted in exchange for security. With a history of discrimination against minorities, the airport full-body scanner is a controversial apparatus of social control in which bodies are undressed and scrutinized. Individuals whose bodies and behaviours deviate from established standards of normality re-emerge in these settings as suspected terrorists. A crowdsourced competition sponsored by the US Department of Homeland Security offered US$1.5 million to solve this problem using Artificial Intelligence (AI). While stakeholders considered the competition a success, it did not produce the expected results due to an issue the developers dubbed “the Bob Marley lookalike guy problem.” This problem caused the AI to identify bombs on body parts deemed “abnormal” by the Western-centric training data. This challenge illustrates how the assumptions of normality assigned to AI training sets reproduce stereotypes and perpetuate discrimination against minorities. The paper discusses how the competition was organized, the social biases inherent in the solutions, and the developers’ oversight in considering the cultural aspects ingrained in the training data. I argue that AI generates spurious correlations that become historical inevitabilities, reinforcing existing social, economic, and political norms.
1:30 – 1:40 PM CST | Building Chatbots: Ethnographic Insights
Lukas Lautenschlager, University of Amsterdam
Abstract
This study explores the socio-technological assemblage of conversational AI at a prestigious Dutch consultancy focused on “conversation design.” Through digital ethnography and participant observation, the research captures the daily practices of Conversation Designers and how they navigate the hype surrounding AI while adapting their cultural and organizational frameworks. Drawing on Science and Technology Studies (STS), this research examines the practices, tools, and cultural concepts shaping the creation and implementation of conversational interfaces. Fieldwork began in September 2024 and will conclude in January 2025, aiming to provide a thick description of the production context of “conversational AI.” Preliminary findings indicate that conversation designers strategically position themselves as tech-savvy yet technology-agnostic. As companies rush to integrate generative AI, they face challenges such as managing the risks of large language model “hallucinations.” Designers balance technological capabilities, business needs, and human-centered design principles. They emphasize “design patterns” rooted in behavioral design over fleeting AI trends and use a standardized workflow across various chatbots, guided by a “persona” that sets the design direction and aligns team members. Most advanced chatbots use a hybrid approach, combining pre-scripted responses with generative AI, handling common queries while using fallback mechanisms for less typical cases. A good understanding of the production context of conversational AI helps address critical questions about inclusivity, particularly regarding whose needs are served and whose are excluded in these systems.
These talks will be followed by Q&A and a short break.
2:00 PM
Session Five
Flow, Code, and Identity: Reimagining Sound and New Formations through AI
2:00 – 2:10 PM CST | Human Rhythms and Machine Rhymes: Hip Hop and AI
Rayvon Fouché, Ph.D, Northwestern University;
June Mia Macon, Ph.D, University of Illinois Chicago;
Melvin Earl Villaver, Jr., Ph.D, Clemson University;
Jonathan Givan, Rensselaer Polytechnic Institute;
Andy Acosta, Jr., Northwestern University
Abstract
The evolution of Hip Hop’s technoculture is marked by creative repurposing of available tools that shape different forms of cultural artistry. While AI is not a nascent technology, its application has expanded into fields where it has not been traditionally utilized (e.g., the creation of Hip Hop lyrics, virtual venues on Twitch for performances). AI’s potential for generating beats and lyrics and for mimicking voices has stirred debate in Hip Hop, yet it can be seen as an extension of its tradition of repurposing technology to create something culturally resonant (Katz 2012, 51). AI can mimic the style of legendary artists, but its outputs lack the depth that characterizes true Hip Hop (Rose 1994) due to the lack of human insight, emotional context, and cultural experience. We argue that AI is not inherently transformative; rather, it is simply another tool. Examining the relationship between Hip Hop and technology shifts the narrative from AI’s potential harm to acknowledging Hip Hop as a model for integration. This perspective resonates with the concept of technological domestication (Morley & Silverstone, 1990, 1991), which highlights how technologies are adapted into everyday life.
2:10 – 2:20 PM CST | New Formations and Performances of Identity
Wells Lucas Santo, University of Michigan, Ann Arbor
Abstract
A wealth of scholarship has centered surveillance and control through facial recognition technologies, which extend the racist logics of physiognomy. Less research has been devoted to understanding how such face-based artificial intelligence technologies have influenced the formation and performance of identity. My work considers how such technologies interact with faciality, which entails the construction of what a face may represent or signify, along axes of identity such as race, gender, and sexuality. While questions of identity on the textual Internet have been thoroughly explored, the Internet has progressed to a multimedia form that not only centers the visual, but specifically the face. In grappling with recent advances in AI such as image generation and deepfakes, I propose that we are now in an era of “post-facial” technologies that build off our existing culture of faciality while eschewing the analog face, complicating our relationship with identity vis-à-vis the face. Drawing from frameworks of digital identity play, as well as trans practices that have historically played with or transgressed the boundaries of identity classification, this work explores how AI technologies afford playing with identity in ways that may be liberatory for historically oppressed communities, such as trans and disabled users.
2:30 PM
Closing Remarks
Conclusion to Encoding Realities, Decoding Power
Kellen Sharp, University of Maryland, College Park
Dhiraj Murthy, Ph.D, University of Texas at Austin