Google’s Magenta project creates art with AI

In a world where paintings, sculptures, songs and art in general can feel derivative or not at all innovative, the Mountain View giant has decided to do its part by bringing its artificial intelligence systems to the world of art, in combination with human artists.

It is from the desire to explore whether artificial intelligence can create authentic works of art, in fields and disciplines such as music and the visual arts, that Google's Magenta project was born.

The project, announced in 2016 and launched in June of the same year, aims to build an artificial intelligence system able to create its own works of art.

The project, which was announced during a session at Moogfest, a music and technology festival in North Carolina, is meant to let people produce completely new kinds of music and art, much as electronic keyboards, drum machines and cameras once did.

Douglas Eck, of Google's artificial intelligence research division, suggested at the conference that Magenta could play a role similar to that of Les Paul, who helped develop the modern electric guitar.

Magenta, which applies machine learning and artificial intelligence techniques to build models that can create their own works of art, is built on TensorFlow.

TensorFlow is Google's open-source machine learning library for numerical computation, which represents computations as data flow graphs; Magenta uses it to build models able to generate their own music.
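The "data flow graph" idea can be sketched in a few lines: a computation is a graph of nodes, and values flow along the edges when the graph is run. The sketch below is plain Python written for illustration only; it is not the TensorFlow API, and the class and function names are made up.

```python
# Illustrative sketch of a data flow graph, the model behind TensorFlow:
# each Node is an operation, its inputs are edges from upstream nodes,
# and evaluating the output node pulls values through the whole graph.
# (Hypothetical toy code; not TensorFlow itself.)

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function applied to the incoming values
        self.inputs = inputs  # upstream nodes (the incoming edges)

    def eval(self):
        # Recursively evaluate upstream nodes, then apply this node's op.
        return self.op(*(n.eval() for n in self.inputs))

def const(value):
    """A leaf node that simply emits a constant value."""
    return Node(lambda: value)

# Build a tiny graph computing (2 + 3) * 4, then run it.
total = Node(lambda a, b: a + b, const(2), const(3))
graph = Node(lambda a, b: a * b, total, const(4))
print(graph.eval())  # → 20
```

In a real framework, separating graph construction from execution is what lets the library optimize, parallelize and differentiate the computation before running it.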

The Magenta project therefore seeks to attract artists and creators to experiment with the technology and apply it to their work. The goal is a system capable of offering music lovers new pieces simply by making a request on a computer.


Magenta currently boasts several very interesting projects built on its platform.

  • One of these is MusicVAE, a tool for creating music with neural networks: it generates melodies through machine learning and can refine a generated melody using human input.
  • Another very interesting project is NSynth (Neural Synthesizer), a novel sound synthesis method based on deep learning. NSynth learns the characteristics of sounds and combines them to create entirely new, unpredictable ones. Would you like to try generating a sound with artificial intelligence in your browser? Try Sound Maker by Google. If you want to try NSynth itself, with its preset sounds and morphing pad, download a copy and start having fun.
  • Beat Blender, by Google Creative Lab, uses MusicVAE and lets you place four drum beats at the four corners of a square.

The application uses machine learning and latent spaces to generate two-dimensional palettes of drum beats that morph smoothly from one to another. You can select the "seeds" for the four corners, and Beat Blender emits MIDI (using Web MIDI), so you can use it not only with its internal sounds but with any MIDI device connected to your computer.
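The palette idea can be sketched as bilinear interpolation between four corner points in the model's latent space: every cell of the grid is a blend of the four seeds, and a model like MusicVAE would then decode each blended vector back into a drum beat. The sketch below is a hypothetical illustration; the vectors are made up (real latent vectors have hundreds of dimensions) and no decoder is shown.

```python
# Hypothetical sketch of Beat Blender's 2-D palette: bilinearly
# interpolate between four "seed" latent vectors, one per corner.
# Toy 2-D vectors for illustration; a real app would decode each
# interpolated vector into an actual drum pattern.

def lerp(a, b, t):
    """Linear interpolation between two equal-length vectors."""
    return [x + (y - x) * t for x, y in zip(a, b)]

def palette(tl, tr, bl, br, size):
    """Return a size x size grid of latent vectors blended from the corners."""
    grid = []
    for row in range(size):
        v = row / (size - 1)
        left, right = lerp(tl, bl, v), lerp(tr, br, v)
        grid.append([lerp(left, right, col / (size - 1)) for col in range(size)])
    return grid

# Four made-up corner "seeds" and a 3x3 palette between them.
grid = palette([0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], 3)
print(grid[1][1])  # → [0.5, 0.5], the midpoint blend of all four corners
```

Because the blend is continuous, dragging across the square produces beats that change gradually rather than jumping between the four seeds.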

  • One of the really amazing apps is Lo-Fi Player, one of the latest projects built on Google's Magenta platform. It lets users interact with objects in a virtual room to mix their own soundtracks, with the help of two artificial intelligence systems and melodies composed by the technologist and artist Vibert Thio, who hopes to turn it into a kind of TikTok for music creation.

As Vibert Thio stated, the objective is to make the music-mixing experience as simple and enjoyable as possible, and to make creating music a more collective experience, given the border closures and quarantines imposed by the coronavirus.

These and many other apps have been created, to date, with Google's Magenta project, and we could be on the verge of a revolution as big as the transition from the electronic era to the digital era that occurred in the '80s with the birth of MIDI. And it seems that, as with the soft-synth revolution of the early 2000s, MIDI will once again be at the center of the next technological shift.

Google and Facebook have been experimenting with big data and machine learning for a number of years now, showcasing them in applications such as Google Now, Google Photos and Facebook Messenger. Both recently detailed their strategies at Google I/O and Facebook F8.

Through these experiments they have shown that artificial intelligence and robots can go far beyond simple, uncreative tasks.

And most likely, within the next three to five years, artificial intelligence instruments could become standard parts of the modern digital studio.