An interactive live video installation in which the audience can play with a neural network-based object recognition system, and is subjected to its decisions.
A large video projection shows a live video feed of the exhibition space, mirroring the audience. The image is superimposed with a real-time analysis by a so-called Artificial Intelligence computer vision system. It's a playful situation that satisfies one's curiosity ("so this is how a self-driving car sees me??") and invites exploration and experimentation in front of the camera.
The installation uses a freely available neural network. This particular system has been pre-trained with a standard industry dataset to identify more than 9000 different objects, which makes for an entertaining and colourful experience. The underlying model has several hundred labels for humans alone.
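To make the mechanics concrete (the text above does not name the exact model): a detector of this kind is typically run frame by frame over the live camera image. The following is a minimal sketch of that loop in Python, using OpenCV's dnn module; the file names (yolo9000.cfg, yolo9000.weights, 9k.names) and the confidence threshold are assumptions, stand-ins for whatever freely available network the installation actually uses.

    import cv2

    # Hypothetical model files; the installation's actual network is not named above.
    net = cv2.dnn.readNetFromDarknet("yolo9000.cfg", "yolo9000.weights")
    labels = open("9k.names").read().strip().split("\n")

    cap = cv2.VideoCapture(0)  # live camera, as in the exhibition space
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        # Each output row: centre x, centre y, width, height, objectness, class scores.
        for det in net.forward():
            scores = det[5:]
            class_id = int(scores.argmax())
            confidence = float(det[4] * scores[class_id])
            if confidence < 0.5:  # arbitrary threshold for this sketch
                continue
            bw, bh = int(det[2] * w), int(det[3] * h)
            x, y = int(det[0] * w - bw / 2), int(det[1] * h - bh / 2)
            cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
            cv2.putText(frame, labels[class_id], (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        cv2.imshow("live feed", frame)
        if cv2.waitKey(1) == 27:  # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()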
In addition to providing a live feed, the system has been expanded to store the objects it has identified. It literally attempts to picture the entire visible world: to populate its model of what the world is with only those things that appear before its camera over the course of the exhibition. The first sighting of each object is permanently stored as an exemplary component of this "closed world". Further occurrences are still displayed in the live feed, but no longer find their way into the database.
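The "first sighting" rule is simple to state in code. Below is a minimal sketch of such an archive, not the installation's actual implementation: a database table keyed by label, into which a record is written only if that label has never occurred before (the file name, table layout, and the record_sighting helper are all assumptions).

    import sqlite3
    import time

    db = sqlite3.connect("closed_world.sqlite")  # hypothetical file name
    db.execute("""CREATE TABLE IF NOT EXISTS first_sightings (
                      label      TEXT PRIMARY KEY,  -- one row per object class, ever
                      seen_at    REAL,
                      image_path TEXT)""")

    def record_sighting(label, image_path):
        # The PRIMARY KEY on label makes INSERT OR IGNORE a no-op
        # for anything already part of the "closed world".
        cur = db.execute(
            "INSERT OR IGNORE INTO first_sightings VALUES (?, ?, ?)",
            (label, time.time(), image_path))
        db.commit()
        return cur.rowcount == 1  # True only for a first sighting

Every detection in the live feed would pass through record_sighting; a True return marks the moment an object enters the permanent archive, while every later occurrence of the same label is displayed but discarded.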
Like any algorithmic system that is based on an (inevitably imperfect) model of the physical world, it makes mistakes. Some of the misclassifications of its audience are amusing, some are offensive. Still, it is programmed to make decisions on the spot. Once captured, one becomes part of the database, without a chance of revision or appeal.
If the audience wants to see the artwork, they have to expose themselves to its judgement and agree to give up control. It's a mechanism familiar from, well, almost anywhere on the Internet nowadays: If you want to participate, you have to agree to be surveilled and relinquish agency to algorithms and scoring systems.
Just as with the evaluation systems of Surveillance Capitalism, there is no way to comprehend how this neural network arrives at its decisions. It's the notorious AI black box. And yet it sometimes seems possible to retrace its line of thinking: it is the misinterpretations that reveal its character. The visual material used to train the network resonates in the recognised objects.
In this particular case it was, until recently, possible (and illuminating) to look up the corresponding training images in the ImageNet dataset, a collection of over 14 million images gathered from across the internet, conceived in California and annotated by low-wage Amazon Mechanical Turk workers from around the globe. The dataset has since been closed to public inspection, a consequence of the ethically questionable categorisations it contains.
This shows once again that technology is not "neutral". Algorithms are no more free of prejudices and fallacies than the people who developed them, and only as fair as the conditions under which they were developed.
It also becomes clear that software-based models of the world cannot adequately represent our chaotic, dirty, ambiguous reality. That engineers can freely assert that it is appropriate to describe the world in 9000 terms is only one example of the inadequate scrutiny given to the digital tools that determine what choices we are offered in our daily lives. Things are crudely simplified just to get them going, and these supposedly temporary solutions usually turn out to be permanent. Then they form reality: the model is confused with what it represents.
The descriptions we make of the world have probably always determined the world in which we live. "The limits of my language mean the limits of my world", Wittgenstein remarked. What's new is that we all seem only too happy to follow the satnav into a reality significantly poorer in diversity, meaning, and sheer life.
[Still from the live feed]
[Schema]
[World Instance]