Translating Paintings into Music using Neural Networks
Prateek Verma1,2, Constantin Basica2, and Pamela Davis Kivelson3
1 Center for Computer Research in Music and Acoustics (CCRMA), Stanford University
2 Stanford Artificial Intelligence Laboratory
3 Design Program, Department of Mechanical Engineering, Stanford University
prateekv@stanford.edu, cobasica@ccrma.stanford.edu, pdk@stanford.edu
We propose a system that learns from artistic pairings of music and the corresponding album cover art. The goal is to 'translate' paintings into music and, in later stages of development, the converse. We aim to deploy this system as an artistic tool for real-time 'translations' between musicians and painters. The system's outputs serve as elements to be employed in a joint live performance of music and painting, or as generative material the artists can use as inspiration for their improvisation.
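The page does not detail the model itself, but one common way to learn from such pairings is a cross-modal embedding: project image features and audio features into a shared space and train them so that each album cover sits closest to its own music. The sketch below illustrates that idea with a contrastive (InfoNCE-style) objective; all dimensions, variable names, and the use of random projections are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """Project raw features into a shared embedding space and L2-normalize."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Toy "features": 4 album covers (512-d image features) and their
# 4 paired music clips (128-d audio features). In a real system these
# would come from trained image and audio encoders.
covers = rng.normal(size=(4, 512))
clips = rng.normal(size=(4, 128))
W_img = rng.normal(size=(512, 64))   # image projection (would be learned)
W_aud = rng.normal(size=(128, 64))   # audio projection (would be learned)

z_img = embed(covers, W_img)   # (4, 64)
z_aud = embed(clips, W_aud)    # (4, 64)

# Contrastive loss: each cover should match its own clip more strongly
# than the other clips in the batch. The diagonal of the similarity
# matrix holds the true pairs.
logits = z_img @ z_aud.T / 0.07
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
```

Minimizing such a loss over many cover/music pairs yields a shared space in which a new painting can be mapped to the music region nearest its embedding.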
[Figure: Overview of the system's architecture]
DEMO
Paintings:
- Painting #1 by Pamela Davis Kivelson
- Painting #2 by Pamela Davis Kivelson
- Painting #3 by Vincent van Gogh
- Painting #4 by Vincent van Gogh
Using music by different composers:
[Audio examples: Paintings #1-#4, each paired with Brush Stroke #1 and Brush Stroke #2]
Using music by the same composer (Constantin Basica):
[Audio examples: Paintings #1-#4, each paired with Brush Stroke #1 and Brush Stroke #2]