Extended Intelligences

A fascinating seminar that demystified artificial intelligence by shedding light on the technology and logic behind AI systems, as well as the ethical considerations to take into account when developing them. It exposed us to accessible tools for developing our own AI models, giving us solid foundations to further explore the application of AI in our own projects.

We first received a conceptual introduction to AI:
AI as automation

AI here means a neural network that is able to learn and configure itself. Machine learning means automating tasks by providing examples (training data) instead of writing instructions (code). AI tools do not give definitive answers: they work with statistics and propose the most statistically probable output.
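To make "examples instead of instructions" concrete, here is a minimal sketch of my own (not seminar material): instead of writing the rule, we provide example pairs and let a fitting procedure recover the rule's parameters from the data.

```python
# Toy illustration of "training": estimate parameters from example pairs
# instead of hand-coding a rule.

def fit_line(examples):
    """Least-squares fit of y = a*x + b from (x, y) example pairs."""
    n = len(examples)
    sx = sum(x for x, _ in examples)
    sy = sum(y for _, y in examples)
    sxx = sum(x * x for x, _ in examples)
    sxy = sum(x * y for x, y in examples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Training data: examples of the behaviour we want, not instructions.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]  # secretly follows y = 2x + 1
a, b = fit_line(data)
print(a, b)  # the parameters are recovered from the data alone
```

Real machine learning replaces this two-parameter line with networks of millions of parameters, but the principle is the same.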

AI as an expression of current ideology

The development, design, and deployment of artificial intelligence systems are deeply influenced by the values, beliefs, biases and priorities of the society creating them. These are embedded in the datasets that are used to train neural networks. Looking into the intricacies of AI models is like looking at a mirror of our society.

AI as an infrastructure

AI relies on a significant amount of digital infrastructure, which consumes energy and has a carbon footprint. Powering ChatGPT costs up to $700,000 a day in energy. There is also human labour behind AI: the "ghost trainers" who label large datasets to train AI, the content moderators, etc.

AI as a concept to think with

AI is just another form of intelligence to think with, just like plants and animals. How do we make sure that we think with AI and that AI doesn't think for us?

AI as a ubiquitous technology

It can be applied for many purposes: image classification, object and facial recognition, emotion detection, image processing, image generation, text generation, natural language processing, voice recognition and generation, time series prediction, etc. It can be found in many tools that we use on a daily basis: social media, smartphones, health trackers and more. The websites www.theresanaiforthat.com and www.aicyclopedia.com are directories of the AI tools available out there.

We then learned about the different layers behind an AI system:  

THE NEURAL NETWORK

A big mathematical structure, specific to a task, that is capable of self-configuration from data. Neural networks are mostly written in Python; they aren't programs that we can install but pieces of software that we can integrate into programs. Many of them are published as open source and can be found on GitHub.
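To show what "a big mathematical structure" means, here is an illustrative sketch of my own: a tiny network with hand-picked weights, so the structure (layers of weighted sums passed through a non-linearity) is visible. In a real network the weights are not hand-picked but set by training, and there are millions of them.

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: a weighted sum per neuron, then a non-linearity."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A 2-input, 2-hidden-neuron, 1-output network. The weight values here
# are arbitrary; training would adjust them to fit the data.
hidden = layer([1.0, 0.5], weights=[[0.4, -0.6], [0.9, 0.1]], biases=[0.0, -0.2])
output = layer(hidden, weights=[[1.2, -0.8]], biases=[0.1])
print(output)  # a single number between 0 and 1
```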

THE DATASET

A sample of data used to train neural networks. It can be quantitative or qualitative and always contains bias, which will be present in the trained AI system. Websites offering existing datasets include Kaggle and PapersWithCode.
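A quick way to see one form of dataset bias is simply to count labels. The dataset below is hypothetical, but the point holds generally: whatever is over- or under-represented here will be over- or under-represented in the trained model's behaviour.

```python
from collections import Counter

# Hypothetical labelled dataset: 100 images, heavily skewed towards cats.
labels = ["cat"] * 70 + ["dog"] * 25 + ["rabbit"] * 5

counts = Counter(labels)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n / total:.0%}")  # cat: 70%, dog: 25%, rabbit: 5%
```

A model trained on this sample will be far better at cats than rabbits, not because rabbits are harder, but because the dataset says so.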

THE LIBRARY

A Python library is a set of useful functionalities that you can reuse instead of coding them. 
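As a small example of reuse (my own, not from the seminar), Python's standard statistics library already implements common functionality, tested and documented, so we don't rewrite it:

```python
import statistics

readings = [12.1, 12.4, 11.9, 12.3, 12.0]

# Library calls: one line each, already debugged for us.
print(statistics.mean(readings))
print(statistics.stdev(readings))

# The hand-rolled equivalent of mean(): more code, more places for bugs.
mean = sum(readings) / len(readings)
assert abs(mean - statistics.mean(readings)) < 1e-9
```

AI libraries like TensorFlow or PyTorch apply the same principle at a much larger scale: they package the mathematics of neural networks so we don't have to code it ourselves.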

THE MODEL

The model is the trained neural network. As neural networks can be trained with different datasets, one neural network can be trained into different models. An image recognition neural network can be trained to recognise light, nature, faces or objects.
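The architecture/model distinction can be sketched in code. This is a toy illustration of my own (a minimal nearest-centroid classifier standing in for a neural network): the same architecture, trained on two different datasets, becomes two different models with different behaviour.

```python
class NearestCentroid:
    """Toy stand-in for a neural network architecture."""

    def train(self, examples):
        """examples: list of ((x, y) feature pairs, label)."""
        sums, counts = {}, {}
        for (x, y), label in examples:
            sx, sy = sums.get(label, (0.0, 0.0))
            sums[label] = (sx + x, sy + y)
            counts[label] = counts.get(label, 0) + 1
        # The "learned" parameters: one centroid per label.
        self.centroids = {l: (sx / counts[l], sy / counts[l])
                          for l, (sx, sy) in sums.items()}
        return self

    def predict(self, point):
        px, py = point
        return min(self.centroids,
                   key=lambda l: (self.centroids[l][0] - px) ** 2
                               + (self.centroids[l][1] - py) ** 2)

# Same architecture, two datasets -> two different models.
faces_model = NearestCentroid().train([((0, 0), "neutral"), ((5, 5), "smiling")])
nature_model = NearestCentroid().train([((0, 0), "forest"), ((5, 5), "desert")])
print(faces_model.predict((4, 4)))   # smiling
print(nature_model.predict((4, 4)))  # desert
```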

Tools

We learned to use AI through APIs and hosted notebook services, using Google Colab and the useful notebooks already provided by Pau on GitHub. We used Replicate to connect to an API and HuggingFace to play with pre-existing models. This made me realise that running AI models yourself is more accessible than I thought. APIs are used when we want to use services provided by another entity, letting our computer interact with that entity's computers. Many APIs are available for free:

Microsoft APIs
Google APIs
Amazon APIs
Replicate APIs
Other APIs
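To make the idea of "our computer interacting with that entity's computers" concrete, here is a hypothetical sketch of what a prediction API call looks like under the hood. The endpoint URL, model name and token below are placeholders of my own, not any provider's real API; services like Replicate have their own URLs and authentication.

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/predictions"  # placeholder endpoint

def build_request(model, inputs, token):
    """Package a prediction job as JSON for a hosted-model API."""
    payload = json.dumps({"model": model, "input": inputs}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )

req = build_request("some-org/image-model", {"prompt": "a red bicycle"}, "TOKEN")
print(req.get_header("Content-type"))
# urllib.request.urlopen(req) would send it; the provider's computers run
# the model and return the result as JSON.
```

The heavy lifting (GPUs, model weights, scaling) all happens on the provider's side; our side only describes the job and reads back the answer.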

Ethics

The growing use of AI raises many ethical issues. AI systems reproduce the biases of the world through the biases found in the datasets used to train them; that bias then gets perpetuated and amplified by algorithms. Bias can be mitigated by enforcing certain restrictions in algorithms, or by over-representing during training the elements that are under-represented in datasets. The increasing use of AI also raises questions about accountability: should the humans behind AI systems be held accountable for the decisions those systems make?
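One common form of the over-representation idea mentioned above is inverse-frequency class weighting, sketched here with made-up numbers: rare classes are given more weight per example so the training procedure doesn't simply favour the majority.

```python
from collections import Counter

# Hypothetical imbalanced dataset: 90 majority examples, 10 minority.
labels = ["majority"] * 90 + ["minority"] * 10

counts = Counter(labels)
total = len(labels)

# Inverse-frequency weights: each class's total weight is equalised,
# so a minority example counts for more than a majority one.
weights = {label: total / (len(counts) * n) for label, n in counts.items()}
print(weights)  # minority examples weigh ~9x more than majority ones
```

This doesn't remove the bias from the data itself; it only compensates for the imbalance during training, which is why it is a mitigation rather than a cure.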

The Love Hacker

Team: Vania, Anna, Everardo, Sophie

We used all this knowledge to develop our own AI tool. I teamed up with Vania, Anna and Everardo as we were all interested in exploring facial and emotion recognition. This application of AI is interesting for me as it relates to digital identity, the data that AI is able to collect about individuals, how AI perceives their identity and the potential biases.

As a speculative application, we decided to develop an AI model that could detect love in facial expressions. Beyond the playful appeal of this project and the business purposes it could serve (e.g. dating apps, emotion-based advertising), such applications could be useful for helping neurodivergent individuals understand people's emotions, or for helping visually impaired individuals sense people's emotions if these were translated into sounds, for example.

We created a model based on DeepFace, an open-source deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images and employs a nine-layer neural network with over 120 million connection weights. It was trained on four million images uploaded by Facebook users. It achieves 97.35% accuracy, where human beings reach 97.53%.

The model also uses OpenCV, a large open-source library for computer vision, machine learning and image processing that now plays a major role in the real-time operations of today's AI systems.

Our model is based on a model already available on GitHub which uses both DeepFace and OpenCV to label emotions on video in Google Colab. That model was built for live computer vision, but as we couldn't access our camera through Google Colab, we changed the code to run emotion recognition on videos that we upload ourselves to the hosted notebook. As the emotion of love is not available in the DeepFace library, we asked the AI to treat happiness as love, and to label only this emotion on the videos.
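A rough sketch of how our relabelling step works. The analyze() call follows DeepFace's documented interface, but the video loop and the frame-sampling interval are illustrative choices of my own; running it on an actual video requires deepface and opencv-python installed.

```python
# DeepFace has no "love" class, so we remap its "happy" label.
LABEL_MAP = {"happy": "love"}

def relabel(dominant_emotion):
    """Return our label for a DeepFace emotion, or None to skip it."""
    return LABEL_MAP.get(dominant_emotion)

def label_video(path, every_n_frames=30):
    """Sample frames from a video and collect our relabelled emotions."""
    # Lazy imports: only needed when actually processing a video.
    import cv2
    from deepface import DeepFace

    results = []
    video = cv2.VideoCapture(path)
    frame_no = 0
    while True:
        ok, frame = video.read()
        if not ok:
            break
        if frame_no % every_n_frames == 0:
            analysis = DeepFace.analyze(frame, actions=["emotion"],
                                        enforce_detection=False)
            label = relabel(analysis[0]["dominant_emotion"])
            if label:
                results.append((frame_no, label))
        frame_no += 1
    video.release()
    return results

print(relabel("happy"))  # love
print(relabel("angry"))  # None: all other emotions are ignored
```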


There are of course ethical considerations to take into account when developing such an application:
Surveillance

Detecting flirting and emotions can be invasive. Depending on how this emotion recognition technology is used, it can violate data privacy.

Accuracy

Emotions are complex and can be easily misinterpreted by AI, and even by humans. A smile does not necessarily mean happiness.

Subjectivity

Flirting behaviours are influenced by cultural norms and don't look the same across the world. The AI needs to be thoroughly trained to avoid bias.

Other languages of love

Relationships involve complex ways of expressing emotions. Relying only on facial recognition may neglect other ways to express human connections.

Unintended Consequences

Implementing AI for relationship-related surveillance may have unintended consequences on behaviours, impacting societal norms and potentially changing the dynamics of human romantic interactions.

Final reflections

This was a fascinating seminar which really opened my eyes to the possibilities of AI and how accessible it is to the wider public, contrary to the black-box perception that most people have of AI today. I want to take the emotion recognition model that we developed further and see if I can develop the code to apply emotion recognition to computer vision through my computer's camera.

It really inspired me to further explore this technology in my master project, as it is central to my research interest in online identity and surveillance. As more and more decisions are made by AI systems (recruitment, health, etc.), we need to understand that AI decisions are not always reliable, as the data they rely on is not always representative of reality.

I also want to further explore the impact that AI will have on the human brain, as we increasingly rely on AI to think for us with the rise of tools such as ChatGPT. Will this mean that we will have more time to develop new skills that AI cannot do for us, or will it just decrease our intellectual potential? Rethinking education will be crucial in ensuring we learn to think with AI instead of letting AI think for us. 
