A new Google project called “Play a Kandinsky”, created in collaboration with the Georges Pompidou National Center for Art and Culture in Paris, aims to recreate the complex phenomenon of synesthesia using artificial intelligence. Put simply, synesthesia is a condition in which stimulation of one sense produces sensations in another: a visual image, for example, begins to “sound”. The artist Wassily Kandinsky is considered one of the most important visionaries in this field; his work is built on the combination of image and sound.
Only a few people can understand and feel synesthesia through personal experience, so teaching a neural network to reproduce it is in many ways an experiment. There are no standards or rules, no parallels to draw and no patterns to reveal, yet this is exactly what the new neural network must attempt. The system is based on Google's Transformer architecture, and the musician Antoine Bertin and the group NSDOS took part in its training.
The neural network analyzes Kandinsky’s original paintings, such as the 1925 work Yellow, Red, Blue, which the artist painted under the influence of synesthesia. It then tries to assign sounds to individual elements so that the viewer becomes a listener and, as the artist intended, hears music in the interweaving of lines and shades of the picture. For Kandinsky in particular, red sounded like a violin, while yellow echoed the blare of a trumpet; what other viewers will see and hear depends on the AI.