Live Tracking
with
Varvara & Mar

Mirror, Mirror on the arm, who's in the training set of them all?
Varvara & Mar

Project Intention

The initial idea behind “Mirror, Mirror on the arm, who's in the training set of them all?” was an immersive and thought-provoking interactive installation combining fairy tales and modern technology. Inspired by the Grimm brothers' "Snow White," the piece features a robotic arm equipped with a unique, mirror-like display that actively tracks and engages with the audience. Upon encountering a viewer, the arm positions the mirror to capture their face, initiating a search through the vast LAION dataset, a collection of millions of portraits used for training AI models such as Stable Diffusion. The current prototype does not yet include the LAION database search feature, but it proves the interaction concept.

The mirror display, a blend of a two-way mirror and a digital screen, reveals one of two outcomes: it either showcases a digital twin, the closest match from the dataset, or displays the message “Seems you are not in the dataset” if no match is found. This interaction, while seemingly simple, is laden with complex implications. It confronts the viewer with the unnerving reality that their likeness may be part of these extensive datasets, often compiled without consent or awareness. The installation thus serves as a stark commentary on the issues of privacy, consent, and surveillance in the age of AI.

The project delves into the ethics of AI and data collection. It exposes the often opaque and questionable methods of data acquisition behind large, publicly accessible datasets. By presenting the possibility of finding one's image illicitly scraped and used, the installation highlights the blurred lines between the public and private domains in the digital age. The project turns the simple question 'Who's the fairest of them all?' into a thought-provoking reflection on privacy in the age of AI.

Technical Breakdown

Invited Artists
Varvara & Mar

Project Title
Mirror, Mirror on the arm, who's in the training set of them all?

Tools
UR10, LED light, sparkler, medium and large format camera

Medium
Rays of light, digital and analog photography

Software
Custom ad-hoc software (UR SVG Planner); programming via teach pendant.

Technique used
A Raspberry Pi analyses the video stream of a webcam mounted at the end of the robot arm and detects faces in it. The most prominent face is identified. A script searches for similar faces in the LAION Face database and displays them on the connected screen. A Python script on the Raspberry Pi determines in which direction the face is moving and tells the robot via Ethernet which joint should be moved where. Sample code for real-time control of the UR robot can be found on our GitHub account.
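As an illustration, here is a minimal sketch of that tracking loop: OpenCV face detection on the Raspberry Pi, selection of the largest (most prominent) face, and a network packet telling the robot which way to move. The address, port and packet format are assumptions made for the sake of the example; the duo's actual implementation is in their GitHub repository.

```python
# Minimal sketch of the face-tracking loop on the Raspberry Pi.
# The robot address and packet format below are illustrative assumptions.
import socket
import cv2

ROBOT_ADDR = ("192.168.1.10", 5005)   # assumed IP/port of the robot-side listener
CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cap = cv2.VideoCapture(0)              # webcam at the end of the arm

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        continue
    # The most prominent face = the largest bounding box.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    cx, cy = x + w / 2, y + h / 2
    # Offset of the face from the image centre, normalised to [-1, 1].
    height, width = gray.shape
    dx = (cx - width / 2) / (width / 2)
    dy = (cy - height / 2) / (height / 2)
    # Tell the robot which joints to move and by how much (hypothetical format).
    sock.sendto(f"base:{dx:.3f};wrist:{dy:.3f}".encode(), ROBOT_ADDR)
```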

Varvara & Mar is an artist duo formed by Varvara Guljajeva and Mar Canet in 2009. The duo's work is often inspired by the digital age. It is not just a display of technical prowess but also a commentary on contemporary issues, challenging viewers to reconsider their relationship with technology and its impact on society. The duo has exhibited in numerous international shows and festivals, such as MAD in New York, FACT in Liverpool, Santa Monica in Barcelona, the Barbican and the V&A Museum in London, the Onassis Cultural Centre in Athens, the Ars Electronica museum in Linz, ZKM in Karlsruhe, and more.

Dr Varvara Guljajeva is an Assistant Professor in Computational Media and Arts at the Hong Kong University of Science and Technology (Guangzhou). Mar Canet Sola is a PhD candidate and research fellow in the CUDAN research group at BFM, Tallinn University. They live in Tallinn, Estonia.

Reflection on the process
In discussion with Jonas Berthod

What was the idea you originally brought to “A Third Hand”?

Our aim was to work with human-robot interaction. We wanted to create a project that responds to an audience and has its own behaviour. We took cues from the Grimm brothers' version of the story of Snow White, one of the most enduring narratives in Western culture, which still resonates with the way we interact with modern technology today.

Mirror, Mirror on the arm, who's in the training set of them all? is an interactive installation in the form of a robotic arm holding a magic mirror. The robot arm follows the audience, orienting a mirror to their face. As the “mirror” – or rather a display covered with special glass attached to a robotic arm – insistently tracks and follows the audience members, the software sifts through millions of portraits included in the LAION-400M dataset of image-text pairs and “reflects” the closest match back to the viewer, overlaying the two visages via the two-way mirror surface. If the database search returns nothing, the mirror displays the text “Seems you are not in the dataset”.

The project raises public awareness of being part of heavily used AI training datasets, deployed for many AI models without our consent or knowledge. The sinister overtones of this less-than-consensual interaction are further emphasised by the fact that image data in large, publicly available datasets are often obtained by rather questionable methods, to the point where there is a real likelihood of discovering an illicitly scraped image of oneself.

What steps did you take? 

The project has several parts. First, robot control and its communication with our software, which is necessary for real-time human-robot interaction. Then, software that communicates with the robot and performs face detection. Following that, we also needed software that finds faces in the dataset similar to the viewer's. Lastly, there was the digital fabrication of custom-made parts that integrated the magic mirror, an LCD display, a camera and a Raspberry Pi.
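One plausible way to implement the "find similar faces" step, given that LAION-400M is distributed together with CLIP image embeddings, is to embed the detected face with CLIP and run a nearest-neighbour search over a prebuilt index of the dataset embeddings. The sketch below uses FAISS for the search; the index file name and the choice of model are assumptions, not the duo's actual pipeline.

```python
# Hedged sketch of face similarity search over CLIP embeddings with FAISS.
# "laion_faces.index" is a hypothetical prebuilt index of dataset embeddings.
import faiss
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
index = faiss.read_index("laion_faces.index")   # 512-dim, inner-product index

def closest_match(face_crop: Image.Image, k: int = 1):
    """Return similarity scores and indices of the k closest dataset faces."""
    with torch.no_grad():
        emb = model.encode_image(preprocess(face_crop).unsqueeze(0).to(device))
    emb = emb / emb.norm(dim=-1, keepdim=True)   # normalise for cosine similarity
    scores, ids = index.search(emb.cpu().numpy().astype("float32"), k)
    return scores[0], ids[0]
```

If the best score falls below a chosen threshold, the mirror would show "Seems you are not in the dataset" instead of a match.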

In what ways does your project benefit from the involvement of a robotic arm, and in what fields do you see potential for artistic exploration?

We had not worked with the UR10 robot arm previously, but had used other robots including KUKA, Yaskawa and uArm. These all have different control methods. Since we aimed for real-time control, this made things more complicated: we needed to open sockets, install APIs or resort to other tricks. But all this work was worth it, because the achieved interaction feels as if the robot has come to life and has its own agency. It is like giving the robot a role in the story.

The presence of robots in our daily lives is increasing, and with the rapid development of AI, machines are beginning to make decisions in our place. Art projects are a laboratory for playing through the darkest, funniest and most absurd scenarios that might one day appear in real life.

Which parts of the project did not go as planned and how did you adapt?

Interactive projects are always more complex than those that involve robot animation only. The complexity comes from real-time control and the integration of several pieces of code, plus the additional hardware fixed to the robot. Our idea was ambitious, involving both robot arm control and software that would perform a face search in the LAION-400M database. During our short residency at ECAL, we managed to achieve real-time robot arm control that follows the audience's face. The software part of the project, the retrieval and display of a similar face from the database, is still in progress and requires far more skill and computing power than we initially thought. Regarding face tracking, we were unsure which camera to use. In the end, we used a Raspberry Pi web camera.

On the other hand, which parts went well, and were you able to push the research further than expected thanks to these favourable outcomes? 

Our success was that we managed to achieve real-time control of the robot arm, and it did follow the audience's face.

Which parts of the process brought surprise, delight, joy or unexpected success?

Everything was quite predictable for us. A slightly negative surprise was that the UR10 robot arm was harder to control via code than expected.

How will your project contribute to the potential uses of a robotic arm by designers and artists? How do you hope others will be able to benefit from your research, and what do you hope they will do with it?

Our project demonstrates the possibility of communicating with a robot arm via code in real time. We used UDP packets for that purpose. It also shows how to link face detection with the robot's movements in space so that it tracks the audience.
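The robot-side link might look like the following sketch: a small relay receives the UDP direction packets and translates them into URScript speedj() commands sent to the UR controller's secondary client interface (URScript over TCP, port 30002). The gain, ports and packet format mirror the assumptions in the earlier tracking sketch; the protocol actually used by the duo is documented in their repository.

```python
# Hedged sketch of a relay from UDP direction packets to URScript speedj().
# Addresses, gain and packet format are illustrative assumptions.
import socket

UDP_PORT = 5005
UR_ADDR = ("192.168.1.10", 30002)   # UR secondary client interface (URScript over TCP)
GAIN = 0.3                           # rad/s per unit of normalised face offset

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("", UDP_PORT))
ur = socket.create_connection(UR_ADDR)

while True:
    packet, _ = udp.recvfrom(64)
    fields = dict(p.split(":") for p in packet.decode().split(";"))
    base_speed = GAIN * float(fields["base"])    # pan: base joint follows x offset
    wrist_speed = GAIN * float(fields["wrist"])  # tilt: a wrist joint follows y offset
    # speedj takes six joint speeds [rad/s], an acceleration and a duration;
    # each new line sent to port 30002 preempts the previous motion, and the
    # short duration makes the arm stop if packets stop arriving.
    cmd = f"speedj([{base_speed:.3f}, 0, 0, {wrist_speed:.3f}, 0, 0], 0.5, 0.2)\n"
    ur.send(cmd.encode())
```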

Will your involvement in the project have an impact on your own practice? How and why?

A little – we learned that the UR10 robot is not the best one for artists, and that it is good to have a robot of one's own to use in different art projects.