Team: Carmen, Nuria, Sophie
My previous intervention on performative representations of air quality in Barcelona's subway network interested me in the way it sought to make the intangible tangible. Although not specifically interested in air quality sensing per se, I learned from this intervention that I would like to further explore performative design in my future interventions, playing with the perception of unperceivable things.
We decided to design an intervention exploring the tensions between online and real identity, to understand whether online identity can be reliable, as more and more decisions are made based on digital profiles. These can be technology-based, such as AI-powered e-commerce or recruitment decisions, or human ones: behaviours on social media are becoming integral to how we perceive people. We also wanted to understand how people can expand the boundaries of their identities online and portray a different version of themselves, intentionally or unintentionally, and the impact this can have on mental health. This can be liberating for some, but it can also create unrealistic representations of the self.
We therefore decided to have strangers assess the Instagram profiles of our classmates and answer questions about them based on what they could see from their profiles. We collated all the answers and built digital profiles for each classmate, representing the identity they portray online.
We then asked them to play a "who is who" game: each student had to identify which of the digital profiles belonged to them.
AI assesses our personal digital data to make personalised decisions, yet societal biases are inherent in these systems. Can we rely on AI's evaluations of our digital data and expect consistent assumptions about both our real and digital identities?
To explore this issue, we had an AI imagine portraits of our classmates based on the digital identities established previously, and compared them with assessments from another AI that guessed their age, gender, and ethnicity from real pictures. This revealed interesting disparities between the two: the AI imagined a Mexican classmate following stereotypes of Latin American looks, and, even worse, placed them in a jungle. On my side, it was interesting to see that humans thought I was Jewish while the AI imagined I was Middle Eastern, whereas I am French and Christian. This shows how AI reproduces the same biases held by humans.
During our intervention, we identified aspects that could be improved to prompt more reflection among our classmates. The digital profiles we presented to them were too similar, and therefore hard to differentiate, because the questions we provided to the "assessors" were close-ended.
We decided to develop an improved version of the intervention and asked each "assessor" to describe the people they assessed in a few words. This would make the profiles more personalised and would help our classmates identify themselves.
While this intervention could be further improved, I realised that online identity was a topic I would really like to explore further, as it touches many aspects of my design space. This is why I integrated AI into this intervention: to explore the assumptions it would make about digital or real identities and how those differ from human judgement.
Integrating AI in this intervention made me want to explore the relationship between AI and identity further: not only how AI perceives identity but also how AI can shape identity.
Based on this new intervention and new reflections on my research interests, I updated my design space accordingly.