Music writing by AI is not the dystopian future that some might fear. In fact, it’s already here in the form of an app that can create new songs for users in seconds.
EMPI is a musical interface that uses predictive ML models to translate physical gestures into continuous sonic output and matching haptic feedback. Participants in our studies performed with EMPI using various predictive models, and found them to be engaging.
What is predictive music writing
AI predictive music writing uses algorithms to generate a melody or chord progression from an existing piece of music. It can also assist in the composition of new songs by suggesting ideas based on the musical themes present in the existing track. The process works by analyzing the track’s lyrics, vocals, instrumentation and genre to identify recurring patterns, which the AI then uses to create new music that resembles the original track.
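As a minimal sketch of this pattern-learning idea (a toy Python illustration, not a description of how any commercial tool works), a first-order Markov chain can learn which note tends to follow which in an existing melody and then generate a new melody with a similar feel:

```python
from collections import defaultdict
import random

def learn_transitions(melody):
    """Count which note tends to follow which in an existing melody."""
    transitions = defaultdict(list)
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, start, length=16):
    """Walk the learned transitions to produce a new, similar-sounding melody."""
    note, output = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(note) or list(transitions)  # fall back if a note has no followers
        note = random.choice(choices)
        output.append(note)
    return output

source = ["C4", "E4", "G4", "E4", "F4", "D4", "C4", "E4"]  # toy source melody
print(generate(learn_transitions(source), start="C4"))
```

Real systems model far richer features than note-to-note transitions, but the loop is the same: extract statistics from existing music, then sample from them to produce something new.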
This type of AI can be used to write music for a number of different purposes, from film scores to advertising jingles. In fact, some musicians are already using AI predictive music writing to help them compose their songs. However, some critics have expressed concerns about the ethical implications of this technology.
One of the biggest challenges in AI predictive music writing is capturing a musician’s unique style. This is difficult because musical styles change over time and vary from person to person. Additionally, many musicians collaborate with others on their music, which blurs the boundaries of any one artist’s style and raises the risk that generated material echoes someone else’s work. To address these issues, AI predictive music writing can pair a plagiarism checker with a critic to ensure that the generated music is free from plagiarism and of high quality.
Can AI write good music
It is possible for AI to write good music, but it requires an immense amount of training and data. The best way to train an AI model is to use a lot of high-quality recordings of recognizable musical genres, such as rock or classical music. Then, the AI can learn to emulate those musical styles by analyzing the data and finding similarities in patterns.
It’s important to note that AI cannot create new, original music on its own, but it can help musicians produce better songs by providing instant lyric suggestions or melodies. Many major music companies, including Spotify, Warner Music Group and Hipgnosis, are partnering with AI-based startups to develop these types of tools. This trend has the potential to disrupt the traditional music industry, which is currently dominated by major labels and focused on distributing and owning intellectual property.
Some musicians have even started using AI software to compose songs for them. One such example is the Lost Tapes of the 27 Club album, whose songs were composed by artificial intelligence. It features new songs in the style of Kurt Cobain, Jim Morrison, Amy Winehouse and Jimi Hendrix, generated with Google’s Magenta. While this may seem like a terrifying prospect, it is important to remember that AI music is still very much in its early stages.
It is also important to note that even if an AI program can generate a song that sounds convincingly human, it will never be able to replicate the emotion and expression of a true musician. For this reason, it is important for artists to continue collaborating with humans in order to create the most compelling musical works possible.
What are the benefits of AI predictive music writing
Artificial intelligence has become a vital tool for the music industry, offering improved product quality and automating processes. This enables artists to spend more time on creative work and reduce the time spent on mundane tasks. Additionally, AI can help musicians find new sounds and genres that may not have been explored before. It can also improve the quality of recordings by helping to detect mistakes and correct them.
Despite its obvious benefits, there are concerns over the potential impact of AI on musical creation. Some fear that the technology could replace human musicians, while others argue that it will enhance their creativity and push them to be better. However, it’s important to remember that AI is still in its early stages, and there are many limitations to its capabilities.
One of the biggest issues is the lack of reliable training data for machine learning models. In addition, there is the potential for the generated music to plagiarize from its training dataset. This can be avoided by using a plagiarism checker to discard any plagiarized music. Another issue is that the generated music can be of varying quality. This can be overcome by using a critic to evaluate the music based on its quality.
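As an illustration, one simple way such a plagiarism check could be implemented (an assumption for the sake of example, not a description of any particular system) is to flag generated pieces that share long verbatim note runs with the training data:

```python
def note_ngrams(sequence, n=8):
    """All runs of n consecutive notes in a piece."""
    return {tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1)}

def looks_plagiarized(generated, training_pieces, n=8):
    """Flag a generated piece if any n-note run also appears verbatim in the training data."""
    generated_runs = note_ngrams(generated, n)
    return any(generated_runs & note_ngrams(piece, n) for piece in training_pieces)

# Toy usage with note names standing in for real musical data:
corpus = [["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5", "D5"]]
candidate = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5", "E5"]
print(looks_plagiarized(candidate, corpus))  # True: shares an 8-note run with the corpus
```

Production systems would need fuzzier matching (transposition, rhythm changes, near-copies), but the principle of screening output against the training corpus is the same.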
Recently, researchers have developed an embodied predictive musical interface (EMPI) that uses a mixture density RNN to predict the next interaction and its timing. It consists of a single-board computer for machine-learning calculations and synthesis, a lever for physical input, and a built-in speaker for actuated physical output. Participants reacted positively to the system and continued to perform and interact with it for over five minutes. This suggests that constrained, gesture-focused ML interactions are an effective way to deploy predictive music interaction.
What is the future of AI-generated music
While many see AI-generated music as a threat to musicians who work hard on their craft, it’s not likely to completely replace human creativity. For one thing, it’s not easy to make high-quality music with text-to-music tools. Some artists use them as a starting point, but it takes a lot of creativity to build an album with them. These tools can also be prone to plagiarism: the creators of Jukebox and MusicLM have been accused of plagiarizing other artists’ music because the models can regenerate the same sequences of notes found in their training data. This behavior is often referred to as “sequence folding” and can be detected with a plagiarism checker.
AI-generated music can also sound strange and unnatural. For example, an algorithm cannot fully replicate the nuances of a human voice, so the results tend to sound robotic and mechanical. It’s also difficult for AI to capture the emotional essence of a song.
Despite these shortcomings, AI has shown promise in the music industry. It can be used to help musicians overcome writer’s block and to create new sounds that are on brand for an artist. It can also be used to write soundtracks for videos and other types of visual content.
As AI continues to evolve, it may eventually be able to create music that sounds more natural and authentic. However, humans will always be able to add a level of depth and emotion to their music that machines cannot replicate. This is why it’s important for musicians to continue telling their stories and letting people feel seen by them. This will ensure that music continues to be an essential part of the human experience.
How does AI predictive music writing work
The EMPI device uses a mixture density recurrent neural network to predict the next input on the lever. The model transforms the current input into the parameters of a probability distribution, which can be thought of as a set of weighted dice representing different combinations of features in the data. To predict the next action, the model draws a sample from this distribution to produce the output value, then repeats the process at each subsequent time step.
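To make the “weighted dice” intuition concrete, here is a minimal NumPy sketch of sampling from a one-dimensional Gaussian mixture of the kind a mixture density output layer produces. The parameter values are hypothetical, and the real EMPI model also predicts the timing of the next input:

```python
import numpy as np

def sample_mdn(weights, means, stddevs, rng=None):
    """Sample one value from a 1-D Gaussian mixture defined by MDN outputs.

    weights -- mixture weights (the "weighted dice"), shape (K,), summing to 1
    means   -- component means, shape (K,)
    stddevs -- component standard deviations, shape (K,)
    """
    rng = rng or np.random.default_rng()
    k = rng.choice(len(weights), p=weights)   # roll the weighted die to pick a component
    return rng.normal(means[k], stddevs[k])   # sample within that component

# Hypothetical network outputs for the next lever position (normalized 0..1):
weights = np.array([0.7, 0.2, 0.1])
means = np.array([0.42, 0.10, 0.85])
stddevs = np.array([0.05, 0.02, 0.10])
print(sample_mdn(weights, means, stddevs))
```

Sampling, rather than always taking the most likely value, is what lets the interface produce varied yet plausible continuations of a performer’s gestures.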
During training, the AI learns how to map the features of the music to the corresponding movements using an autoencoder architecture. The encoder compresses the incoming music sequence X into a lower-dimensional representation Z, and the decoder uses Z to reconstruct output as close as possible to the original X; the same decoder can then be used to generate new music. This is a good example of how deep learning generative models work (1).
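A toy PyTorch sketch of this encode-compress-decode idea follows; the layer sizes and sequence representation are assumptions for illustration, not the architecture used in the study:

```python
import torch
import torch.nn as nn

class SequenceAutoencoder(nn.Module):
    """Toy autoencoder: compress a sequence X into a latent code Z, then reconstruct it."""

    def __init__(self, seq_len=64, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(seq_len, 32), nn.ReLU(),
            nn.Linear(32, latent_dim),   # Z: compressed representation of X
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, seq_len),      # reconstruction of X from Z
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z)

model = SequenceAutoencoder()
x = torch.rand(1, 64)                        # a hypothetical normalized note sequence
loss = nn.functional.mse_loss(model(x), x)   # training pushes the reconstruction toward X
```

The reconstruction loss is what forces Z to keep only the information needed to rebuild the sequence, which is why the latent code can later be reused for generation.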
The models used in this study were trained on a corpus of human-sourced performance data as well as on synthetic, noise, and non-correlated movement sequences. All of the models use an MDRNN architecture with 32, 64, or 128 LSTM units. They were adapted for interactive use by conditioning their internal memory state on the user’s interactions, a configuration that makes them suitable for call-and-response applications.
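The call-and-response configuration can be sketched as follows: feed the user’s recent gestures through the recurrent network to set its memory state (the “call”), then generate a continuation one step at a time from that state (the “response”). This simplified PyTorch illustration uses a plain linear head in place of the mixture density output layer, and all sizes are assumptions:

```python
import torch
import torch.nn as nn

class TinyGestureRNN(nn.Module):
    """Simplified stand-in for an MDRNN: an LSTM over (position, time-delta) pairs."""

    def __init__(self, hidden=64):   # the study's models used 32, 64, or 128 LSTM units
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # a real MDRNN would output mixture parameters here

    def forward(self, x, state=None):
        out, state = self.lstm(x, state)
        return self.head(out[:, -1]), state

model = TinyGestureRNN()

# "Call": condition the memory state on the user's recent gestures.
user_gestures = torch.rand(1, 16, 2)
_, state = model(user_gestures)

# "Response": generate a continuation one step at a time from that state.
step = user_gestures[:, -1:, :]
response = []
for _ in range(16):
    prediction, state = model(step, state)
    step = prediction.unsqueeze(1)   # feed each prediction back in as the next input
    response.append(prediction)
```

Because the memory state carries information about the user’s phrase, the generated response reflects what was just played rather than starting from scratch.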
Users of the EMPI device were asked to improvise with it using three different ML models, either with or without a physically actuated indicator on the lever. The results show that the presence of the physical feedback encouraged the participants to interact with the ML model in new ways. This interaction, in turn, led to interesting musical behaviors on the part of the EMPI system.