The composition of electroacoustic music engenders a way of thinking different from that of purely instrumental composition. Using transformed, synthetic, or artificial sounds created with computers leads to different reflections in the process of musical creation, in the modeling of sound, and in the interaction between the tools and the composer or musician. Today, many sound- and music-editing programs are developed either commercially, such as Max/MSP and Ableton, or in music creation centers, such as INScore and OSSIA/SCORE at the GRAME and SCRIME laboratories respectively. Although these programs offer a wide range of tools, they may not meet the composer's primary need: the creation of music. Composers of the twentieth and twenty-first centuries, such as Iannis Xenakis, have developed their own languages and tools to compose. However, such tools tend to be specific to each composer, which raises the problem of creating a universal tool that facilitates musical creation.
To this end, we present the first steps in the development of a tool that aims to be universal, yet customizable by each composer through machine learning. The composer provides a personal sound bank and a personal graphic language, which are used to train two groups of neural networks separately: one for sounds and the other for images. The tool then consists of a drawing interface that translates new images drawn by the composer into sounds, opening up a new way of creating music.
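As a minimal sketch of this drawing-to-sound pipeline, the following PyTorch code pairs an image encoder with a sound generator. All names, layer sizes, the 64x64 drawing resolution, and the one-second audio output are illustrative assumptions, not the architecture actually used in this work; the sketch only shows the shape of the pipeline, in which each network group would be trained separately on the composer's own images and sound bank.

```python
# Illustrative sketch of the two-network pipeline (hypothetical dimensions).
import torch
import torch.nn as nn

class DrawingEncoder(nn.Module):
    """Maps a composer's drawing (1x64x64 grayscale) to a latent vector."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class SoundDecoder(nn.Module):
    """Maps a latent vector to a short audio buffer (16000 samples, i.e.
    one second at a 16 kHz sampling rate)."""
    def __init__(self, latent_dim=32, n_samples=16000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512),
            nn.ReLU(),
            nn.Linear(512, n_samples),
            nn.Tanh(),  # constrain samples to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# Drawing interface: a new image is encoded, then rendered as sound.
encoder, decoder = DrawingEncoder(), SoundDecoder()
drawing = torch.rand(1, 1, 64, 64)  # stand-in for a composer's sketch
audio = decoder(encoder(drawing))   # shape: (1, 16000)
print(audio.shape)
```

In such a design, swapping the sound bank or the graphic language only retrains the corresponding network group, which is what allows the same interface to be customized per composer.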
Here we present a proof of concept for this project, using one specific graphic language that we have defined in order to test our neural networks.