We're using the song to raise money for the International Rescue Committee and NHS Charities Together, to assist the IRC in helping the most vulnerable people in the world to cope with COVID-19 and its impact, and to show our sincere gratitude to frontline NHS staff here in the UK.
Brentry's song "Hope rose high" is available from the ethical streaming platform Ujo Music. We'd prefer you listen to it there rather than on the usual streaming services, which also carry the song.
"Hope rose high" was written in a short space of time – over the weekend of Saturday 21st March 2020, when there was a last-minute change to our team membership for the AI Song Contest. The human contributors were as follows.
The section below called Background contains the timeline of the entire project.
The song was made from four-bar AI-generated melodies, chords, and drums, generated by Tom's MAIA Markov algorithm. The main musical input was a dataset of 200 previous Eurovision songs, in simplified MIDI arrangements, provided by the Dutch broadcaster and competition organiser VPRO. VPRO asked teams not to release the dataset. To increase transparency and ensure the replicability of this work, analytical representations calculated by the algorithm can be found behind the Play with MAIA Markov section, and are available for download from the Replicability section.
Nancy took, tweaked, and wove the four-bar generated passages into a "modern power ballad". The lyrics are also AI-generated, from theselyricsdonotexist.com (TLDNE). If you have little or no idea what some of them mean, join the club! Given the current global situation, we did specify a topic of "hope", and have tried to tease out that message from the generated lyrics. Tom wrote the bass line because he felt it was more or less implied by the melodies and chords that Nancy had selected, and he arranged the song in a digital audio workstation.
In short, the starting material is almost all AI, but the combination of these materials and the instrumentation has been determined/created by humans. The timeline in the background section contains some intermediate "save-as" versions of the song, so you can hear how it developed. There are also sections on playing with the AI and more details on the algorithm. Immediately below, I (Tom) explore how the musical AI outputs have been incorporated into the song.
The pre-chorus melody of the song is "Juniper", from the same list.
The chorus melody of the song is "Thistle" (same list). It's not actually a melody – I think there were ten or so items in the dataset provided by the Dutch broadcaster VPRO where the tracks weren't separated properly, creating a polyphonic subspace in the whole melody space.
The diagram below is called a pendular graph. It's a method we developed to visualise the hierarchical repetitive structure of music (see Nikrang, Collins, & Widmer, 2014 and also Deutsch & Feroe, 1981 for more details). On laptop or desktop, you should be able to click on the various nodes to explore the music referred to by the letter names.
In the diagram, "F" stands for use of the "Fennel" passage, "J" for "Juniper", "T" for "Thistle", and "L" for "Lion". Above this lowest level are slightly longer sections: "V" stands for verse, consisting of two lots of "Fennel"; "C" stands for chorus, which consists of "Thistle" plus three more bars we had to write ourselves (labelled "X"). On the highest structural level, we see "F1", "V1", and "C1" combine to make a larger section "U1" that repeats as "U2".
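As a rough illustration (not the site's actual pendular-graph code, and with hypothetical function names), the hierarchy described above can be sketched as a tree whose leaves are the four-bar passages:

```javascript
// A minimal sketch of the structural hierarchy described above. Each node
// is either a named section (with children) or a four-bar passage label.
const song = {
  label: "U1",
  children: [
    { label: "F1" }, // single use of "Fennel"
    // Verse: two lots of "Fennel".
    { label: "V1", children: [{ label: "F" }, { label: "F" }] },
    // Chorus: "Thistle" plus the human-written bars labelled "X".
    { label: "C1", children: [{ label: "T" }, { label: "X" }] }
  ]
};

// Flatten the tree to the sequence of lowest-level passages,
// i.e. the order in which the four-bar units are heard.
function leaves(node) {
  if (!node.children) return [node.label];
  return node.children.flatMap(leaves);
}

console.log(leaves(song)); // ["F1", "F", "F", "T", "X"]
```

Clicking a node in the interactive diagram is, in effect, selecting a subtree like one of the objects above and playing its leaves in order.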
You can use the inputs below to play with the MAIA Markov algorithm that we used to generate raw musical materials for this project. If you arrived here from a shared link, just wait for the play button below to become available and then you can try out whatever someone wanted you to hear! If you're starting from scratch:
With strong ties to academia, MAIA considers very seriously the fair and appropriate use of material from others. This means:
Artificial intelligence (AI) has been defined as
"The performance of tasks, which, if performed by a human, would be deemed to require intelligence" (Wiggins, 2006, p. 450), a definition that reads somewhat between the lines of Turing's (1950) famous paper. AI methods can be applied to multiple music-analytic and music-creative activities. Here, we consider AI applied to the activity of songwriting.
A main topic of my PhD (Collins, 2011) and a couple of subsequent papers (Collins et al., 2016; Collins & Laney, 2017) was automatically extracting the entire hierarchical structure from an existing piece of music, and using that structure to guide a more local generative process, so that generated material had more convincing long-term structure and phrasing than in existing work.
For the AI Song Contest, however, it seemed more appropriate to use the MAIA Markov algorithm to generate shorter, four-bar passages for human musicians to experiment with and combine into longer-term structures.
The timeline for the project went like this:
Show more about melody input and output for this project...
Show more about chordal input and output for this project...
Show more about bassline input and output for this project...
Input. The musical inputs here are symbolic encodings of 93 EDM-style drum excerpts (see Replicability for details).
Input example 1 (from "1638")
Input example 2 (from "16123")
See AI details for information on how the inputs are passed into the AI model to generate the MIDI and text passages below.
Output. Show more about percussion output for this project...
Input. We used theselyricsdonotexist.com (TLDNE) to generate the lyrics. Here are the different outputs, beginning with the one we actually used with very minimal changes:
We also tried to use the MAIA Markov algorithm to generate lyrics, but more work is required to improve the representation. The lyrical inputs are from 40 songs by Imogen Heap and 60 songs in the dataset provided by VPRO.
Output. Show more about lyrics output for this project...
Imogen and Tom had a long conversation on 6th February, and one of the topics we covered was the questionable ethics and legality of deriving new material from existing, copyrighted material without permission and/or proper attribution.
In Tom's opinion, the ethical and legal implications of deriving new from old material centre on a perceptually valid empirical analysis (such as the "originality" or "creativity" analyses conducted in Collins & Laney, 2017, section 3.3). If it can be demonstrated that a generated passage is no more derivative of a corpus than other songs from that same corpus, then the reuse of material is acceptable. If a generated passage does derive more from a corpus than other songs, then the reuse is not acceptable.
For example, one can't copyright an isochronous bass-drum pattern because such patterns are ubiquitous, whereas one can copyright a sequence of events (notes, chords, etc.) that is novel with respect to the corpus of existing music.
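As a toy illustration of the kind of empirical test described above (not the actual analysis from Collins & Laney, 2017; all names and the toy data are hypothetical), one could measure what proportion of a passage's n-grams also occur in the corpus, and compare the generated passage's score with the scores the corpus songs obtain against each other:

```javascript
// Sketch of an "originality" check: a passage is flagged only if it is
// more derivative of the corpus than the corpus songs are of one another.

// All length-n subsequences of a sequence, as comparable strings.
function ngrams(seq, n) {
  const out = [];
  for (let i = 0; i + n <= seq.length; i++) out.push(seq.slice(i, i + n).join(","));
  return out;
}

// Fraction of seq's n-grams found anywhere in the other sequences.
function overlap(seq, others, n) {
  const pool = new Set(others.flatMap(s => ngrams(s, n)));
  const grams = ngrams(seq, n);
  const hits = grams.filter(g => pool.has(g)).length;
  return grams.length ? hits / grams.length : 0;
}

// Toy corpus of MIDI pitch sequences (illustrative values only).
const corpus = [
  [60, 62, 64, 65, 67],
  [60, 62, 64, 62, 60],
  [67, 65, 64, 62, 60]
];
const generated = [60, 62, 64, 65, 64];

// Baseline: how derivative is each corpus song of the *other* corpus songs?
const baseline = corpus.map((s, i) =>
  overlap(s, corpus.filter((_, j) => j !== i), 3));
const avgBaseline = baseline.reduce((a, b) => a + b, 0) / baseline.length;

const genScore = overlap(generated, corpus, 3);
// If genScore is comparable to avgBaseline, the generated passage is no
// more derivative of the corpus than the corpus songs are of each other.
console.log({ genScore, avgBaseline });
```

A perceptually valid version of this would operate on richer event representations than raw pitches, but the comparison-to-baseline logic is the point here.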
The source code for these models is available as an NPM package called MAIA Markov, v.0.0.2. The metadata (md), state-transition matrices (stm), and initial distributions (id) that combine with MAIA Markov to enable replication of the results above can be downloaded via the following links.
The MAIA Markov algorithm uses an empirically derived Markov model to generate "new" music in the style of a corpus of existing music. An article we wrote for Significance magazine provides a good introduction for lay readers and young people to how this works (Collins, Laney, Willis, & Garthwaite, 2011). If/when you're ready for heavier stuff, the main references are my PhD thesis (Collins, 2011) and two subsequent papers (Collins et al., 2016; Collins & Laney, 2017). Of these, Collins and Laney (2017) is probably the best starting point.
I have made some tweaks and improvements to the algorithm since 2017, and I am currently writing these up for a peer-reviewed journal, but the main approach is the same. It is worth noting that a Markov model is not a deep learning algorithm – or even an artificial neural network, both of which are very popular right now. I do conduct research on these approaches too, but so far I prefer the Markov modeling approach because it has stronger cognitive plausibility in terms of the way I learnt to create music – analysing states (e.g., notes or chords) in a certain tonal and temporal context, and formulating and exploring possibilities for how certain states tend to lead to others.
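To make the "states leading to other states" idea concrete, here is a minimal first-order Markov sketch, learning transition counts and an initial distribution from toy pitch sequences, then sampling a new sequence. This is illustrative only: the real MAIA Markov package uses richer state representations (tonal and temporal context) that are not reproduced here, and all names below are hypothetical.

```javascript
// Learn a state-transition matrix (stm) and initial distribution (id),
// here stored as plain count objects, from a corpus of state sequences.
function train(sequences) {
  const stm = {}; // stm[a][b] = how often state a was followed by state b
  const id = {};  // id[a] = how often a sequence began with state a
  for (const seq of sequences) {
    id[seq[0]] = (id[seq[0]] || 0) + 1;
    for (let i = 0; i + 1 < seq.length; i++) {
      const a = seq[i], b = seq[i + 1];
      stm[a] = stm[a] || {};
      stm[a][b] = (stm[a][b] || 0) + 1;
    }
  }
  return { stm, id };
}

// Draw one state from a count object, with probability proportional to count.
function sample(counts, rand) {
  const entries = Object.entries(counts);
  const total = entries.reduce((sum, [, c]) => sum + c, 0);
  let r = rand() * total;
  for (const [state, c] of entries) {
    r -= c;
    if (r <= 0) return state;
  }
  return entries[entries.length - 1][0];
}

// Generate a sequence: every transition it emits occurs somewhere in the corpus.
function generate({ stm, id }, length, rand = Math.random) {
  let state = sample(id, rand);
  const out = [state];
  while (out.length < length && stm[state]) {
    state = sample(stm[state], rand);
    out.push(state);
  }
  return out;
}

const model = train([["C", "D", "E", "D", "C"], ["C", "E", "G", "E", "C"]]);
console.log(generate(model, 8));
```

Because the model only ever emits transitions observed in the corpus, the output stays "in the style" of the input while still recombining it in new orders; the full algorithm adds constraints (e.g. on structure) on top of this basic mechanism.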
We hope you enjoyed experimenting with the outputs of the algorithm above.
Feel free to get in touch if you have any questions or suggestions.
Requests for use of the material on or linked from this page should be sent to Tom Collins.