The Body of Music
Postgraduate Owen Vallis is inventing new tools to help machines and humans make beautiful music together.
Music is changing at the speed of sound. In less than a century we’ve gone from groups of jazz musicians jamming in underground clubs on ex-Civil War instruments, to the electronic age, where live performers have so many gadgets onstage that it looks like they’re performing from inside a spaceship. It’s a complex business, and technology doesn’t necessarily make for an easier job, or better music. Victoria University postgraduate Owen Vallis is using artificial intelligence (A.I.) to create tools that are sophisticated, yet blend seamlessly with human performance.
“Since I was young, I have always loved electronic music. I grew up with Kraftwerk, Vangelis, and ’80s synth bands,” says Vallis. “Also, my family was into music, so it was always around the house. My father, brother, and I would get together and play guitar, piano, and sing together all the time. So from a very young age I was playing instruments.”
But the information age has changed the way that people interact with music. One of the key challenges in performing live electronic music, for example, is how a musician can expressively manipulate the massive number of possible controls tied to their sounds. To be effective, live electronic performers must interact and improvise on several different musical levels. A musician needs to be able to create arrangements of large groups of musical material in real time, perform several different rhythmic or melodic instruments within the same song, or improvise from a sonic palette that potentially represents any sound a speaker is capable of reproducing. “The skills required to successfully perform in this way would be similar to a cellist playing in an orchestra suddenly starting to conduct, while also simultaneously improvising on several different woodwind instruments,” says Vallis.
His postgraduate project looks at tackling this problem through the use of machine learning and A.I. models. “The basic idea is that you can train your computer to mimic your improvisational style for a given sound or instrument in the computer. By creating a model for each key instrument, the computer can be trained to mimic the way I perform on several different instruments. Then during a live performance, while I play one instrument, the software ‘listens’ to my performance and plays the other instrument sections in parallel. It’s a bit like creating a whole band out of yourself.”
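Vallis’s own models aren’t described in detail here, but the core idea — learn the statistics of a performer’s playing, then generate new material in the same style — can be sketched in miniature with a first-order Markov chain over notes. Everything in this sketch (function names, the toy note sequence) is illustrative, not Vallis’s actual software:

```python
import random
from collections import defaultdict

def train_markov(notes):
    """Learn first-order transition counts from a performed note sequence."""
    transitions = defaultdict(list)
    for current, nxt in zip(notes, notes[1:]):
        transitions[current].append(nxt)
    return transitions

def improvise(transitions, start, length, seed=None):
    """Generate a new phrase that statistically mimics the training style."""
    rng = random.Random(seed)
    phrase = [start]
    for _ in range(length - 1):
        options = transitions.get(phrase[-1])
        if not options:  # dead end: restart from any note the model knows
            options = list(transitions)
        phrase.append(rng.choice(options))
    return phrase

# A short 'performance' as MIDI note numbers (60 = C4, 64 = E4, 67 = G4)
performance = [60, 64, 67, 64, 60, 67, 64, 60]
model = train_markov(performance)
print(improvise(model, start=60, length=8, seed=1))
```

A real system would model timing and dynamics as well as pitch, and would listen and respond in real time, but the training/generation split is the same shape.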
His project focuses on building new instruments, such as large multi-touch surfaces, while also creating A.I. programmes to expand what a single performer can do. His work, he says, will allow the musical community to explore new ways to create and perform live electronic music. “The unique thing about computer music these days is that even though a computer may generate the sounds, musicians are free to control those sounds with any sort of hardware interface they choose.” The hardware could be a pair of wireless sneakers that transmit the motion of the wearer, or modified acoustic instruments that send data to a computer. This allows musicians to make individual and unique connections between their hardware instruments and how they are mapped to create sounds in the computer.
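The mapping layer described here — raw data from sneakers or modified instruments in, sound-control values out — is, at its simplest, a range conversion from sensor units to synthesis parameters. A minimal sketch (the sensor, its range, and the cutoff parameter are all hypothetical examples):

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a raw sensor reading onto a sound parameter's range."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# Hypothetical 10-bit accelerometer reading (0-1023) driving a filter
# cutoff across the audible range in Hz
reading = 512
cutoff_hz = scale(reading, 0, 1023, 20.0, 20000.0)
print(round(cutoff_hz, 1))
```

In practice such mappings are often nonlinear (e.g. exponential for pitch and loudness) and are sent over a protocol such as Open Sound Control, but each performer’s choice of mapping is exactly the “individual and unique connection” Vallis describes.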
“Beyond creating new tools and instruments for live performance, I am looking at how online communities help musicians by providing open-source access to software and hardware. When creating these new instruments, it is key to allow other people to use, learn from, and modify the projects, effectively expanding what the instruments are capable of.”
Between 2005 and 2008 Owen was at CalArts in Valencia, California, completing his undergraduate degree in Music Technology. In July 2009 he began working towards a PhD in Sonic Arts at the New Zealand School of Music (NZSM) at Victoria University, Wellington. “Once I got to schools like NZSM and CalArts, I had the opportunity to learn from some great teachers who made me realise that I could combine my two passions (music and technology) into a single focus. Before going to CalArts I used to produce a lot of indie-rock bands in Nashville. I worked at a record store selling house, techno, and trance to DJs in San Francisco, and just generally involved myself in as much music as I could.”
He cites his adviser, Dr. Ajay Kapur, as probably the individual most responsible for his research efforts. “I was actually working as a producer in a studio in Hollywood when Ajay discussed with me the idea of coming to New Zealand to study. His passion for his work, and his support of my research, have pushed me to become the student I am today.”
For Vallis, New Zealand has been an incredible opportunity. “Not only is there fantastic support for the arts, but the proximity of the School of Music to Victoria’s computer science and engineering schools has been an immense help in developing my understanding of the technical side of my research. The professors at NZSM have a broad musical perspective that has been invaluable in my own studies as a composer. Finally, I have had the opportunity to teach students here who have not only become good friends, but have also pushed me to think about ideas in new ways and explore different avenues of research.”
Receiving the New Zealand International Doctoral Research Scholarships award has allowed Vallis to focus on his work, as well as affording him the opportunity to publish, travel, and attend leading conferences around the world.
“Now that I’m in New Zealand, there is even more amazing music to explore. Multi-channel surround sound pieces, and great opportunities like Blow Fest to exhibit installation work and allow people to interact with my music in real time.”
When Vallis lets his imagination run wild, there’s no limit to the possibilities for live electronic music. “Imagine multiple computer musicians performing on stage, playing as if they had five hands, assisted by computers that have been trained to model their individual improvisational styles. These mechanical and non-mechanical performers are all passing musical ideas around on stage while the audience gets excited (sending data to the stage), and the music reacts by becoming more intense.” In Vallis’s world these musical soundscapes are linked to visuals (created by digital visual artists), and these visuals react to the music, are mapped onto objects, or are projected out to the audience’s mobile devices.
“The line between an active musician and a passive audience becomes blurred as the whole event is one connected ensemble. With personalised interfaces I expect to see groups of electronic musicians participating in musical dialogues during performances, passing ideas back and forth in a similar way to jazz musicians exploring a musical space onstage.”
Which proves that in music, what goes around certainly comes around.