
Musical Instruments, Transformed

Musical instruments are undergoing a renaissance. Waves of techno-cultural innovation are melding the arts and sciences and upending traditional crafts. The transformation of the musical instrument—through networks, sensors, data and computation—is not only a story about the craft of making musical tools, but also about the remarkable ways we express our human musicality through them.

Image: Wintergatan’s Marble Machine, designed by Martin Molin

The earliest known musical instrument predates the oldest known European cave art by about 5,000 years [1][2]. To put that in perspective, consider the pyramids of Egypt: the Great Pyramid of Giza is roughly 5,000 years old, which already seems ancient to us today. That same span of time stands between the first known European cave art and the nearly forty-five-thousand-year-old flute discovered in southwest Germany. While it is likely that the first musical instrument was the human voice, this ancient find underscores two key points: humans have been musical for a very long time, and humans can produce tools that express their musicality.

Image: The oldest known musical instrument, a flute discovered west of Ulm, Germany

Are you musical? Today scientists use noninvasive brain imaging, such as MRI (magnetic resonance imaging), to observe the brain at work, a development that has helped give rise to the field of cognitive neuroscience. In this field, powerful insights are emerging into how each of us possesses vast mental faculties for creating and appreciating music [3]. You may not consider yourself musical, but you do possess a high degree of musicality.

Songbirds are among the most musical creatures we know. Yet birds cannot recognize the same tune when it is transposed to a different key [4]. This is a relatively simple task for humans. Not only can we detect a tune in other keys, but also in other timbres—that is, different instrument sounds. For example, if someone played you Happy Birthday in the key of C, you could immediately pick it out when it was played again in a lower key. The same holds true whether it is played on violin, bass guitar or piano. In every case, you would still recognize the song as Happy Birthday. As far as we know, humans are the only species that can do this [4].
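
One way to see what this ability involves is to describe a tune by the intervals between successive notes rather than by the notes themselves: transposition shifts every pitch by the same amount, so the interval pattern—the thing we seem to latch onto—stays the same. Here is a minimal Python sketch of that idea (the pitches are MIDI note numbers, chosen purely for illustration):

```python
# Opening phrase of Happy Birthday in C major as MIDI note numbers
# (G4 G4 A4 G4 C5 B4); the exact pitches are illustrative.
happy_birthday_c = [67, 67, 69, 67, 72, 71]

def transpose(melody, semitones):
    """Shift every pitch by the same number of semitones."""
    return [note + semitones for note in melody]

def intervals(melody):
    """Return the differences between successive pitches."""
    return [b - a for a, b in zip(melody, melody[1:])]

# The same tune, transposed down a perfect fourth (5 semitones).
happy_birthday_lower = transpose(happy_birthday_c, -5)

# The absolute pitches differ, but the interval pattern is identical,
# which is what lets us hear both versions as the same song.
assert intervals(happy_birthday_c) == intervals(happy_birthday_lower)
print(intervals(happy_birthday_c))  # [0, 2, -2, 5, -1]
```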

Image: Eunoia, a brainwave instrument by Lisa Park

Humans are toolmakers. By fashioning and using tools, our species has adapted to harsh environments and thrived. Step back, and it becomes apparent that the sum of human civilization is a testament to the power of innovative technology accrued over time. Not only have tools allowed us to overcome the basic natural barriers threatening survival, but they have amplified our means of creative expression, and given rise to a vast spectrum of cultural artifacts [5].

“Mens et Manus”—“Mind and Hand”—is the motto on the seal of the Massachusetts Institute of Technology [6]. The phrase originally captured an Enlightenment-era, new-science sentiment about the relationship between nature and technology. Today it is an excellent maxim for our embodied co-evolution with expressive tools. Musical instruments are undergoing a renaissance due largely to computation and a convergence of other technologies enabled by computation. Yet to fully appreciate where this is heading, we first need to recognize that our “mens et manus” goes well beyond technology. The human experience is embodied, and our cognitive capacity to bind with the physical environment is key. Our relationship with technological tools is integral: we act in concert with their affordances, so that player and tools function together more like a symphony of instrumentation. The musical instrument of the future is thereby not a single instrument at all, but a tightly integrated system of musical expression.

Image: The first known instrument “system” (circa 850 AD), a hydro-powered organ with interchangeable cylinders, by the Banu Musa

Pauline Oliveros was a pioneering American composer and accordionist who dedicated her life to expanding awareness of music and life. The Expanded Instrument System (EIS) evolved from Oliveros’ solo performances and composition work with tape delays—and later digital delays—beginning in the late 1950s [7]. The system comprised an evolving array of electronic, sound-processing components that allowed players to perform in the past, present and future simultaneously [7]. The hallmark of EIS is the concept of system-based performance. Today we take for granted that instruments can interconnect through MIDI (musical instrument digital interface) to automate playback. And while EIS was certainly not the first systems-based instrumentation, it embodied a key nascent concept: human players can be powerfully augmented through an expressive system.

Image: Pauline Oliveros with Ellen Fullman, performing with an “expanded” accordion (EIS)

Today a convergence of media and technologies—networks, sensors, data, learning algorithms and embedded computation—is transforming the musical instrument in new ways. Advanced design applications, combined with a 21st-century, Medici-like trend toward reintegrating the arts and sciences, are dramatically expanding the field and leading to a new wave of creative instrumentation.

The first wave of modern innovation can be linked to the Industrial Revolution, when the mass production of fabricated parts led to better construction, affordability and the popularization of instruments in the home. Falling costs and wide distribution meant that more people could attend live musical performances, and even own a variety of instruments previously reserved for the affluent [10]. For example, supply chains, the standardization of specialized parts, and the industrial-era innovation of the iron frame made the piano an affordable and widely popular home instrument by the end of the 19th century [8][9].

The second wave was spawned by the advent of electronics and analog circuitry. Electrification not only transformed traditional instruments, it led to the invention of entirely new ones. In the 1930s, guitars were transformed from quiet background rhythm-keepers to iconic lead instruments through magnetic transduction and amplification [11]. Around this time a host of new musical inventions started emerging, including the Theremin, a strange device controlled by using your hands to disrupt electromagnetic fields. Analog circuits also meant that instruments could generate audible tones and use synthesis techniques to create entirely new sounds through oscillators, filters and envelope controllers [12].
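
To make the oscillator–filter–envelope chain concrete, here is a minimal digital sketch of the same idea using NumPy and SciPy; the waveform, cutoff frequency and envelope times are arbitrary choices for illustration, not a model of any particular analog instrument.

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100                     # sample rate in Hz
t = np.arange(SR) / SR         # one second of time samples

# Oscillator: a bright sawtooth wave at 220 Hz (A3).
saw = 2.0 * ((220 * t) % 1.0) - 1.0

# Filter: a low-pass filter tames the upper harmonics,
# the basic move of subtractive synthesis.
b, a = butter(2, 1200, btype="low", fs=SR)
filtered = lfilter(b, a, saw)

# Envelope: a simple attack/decay shape controls loudness over time.
attack, decay = int(0.01 * SR), SR - int(0.01 * SR)
env = np.concatenate([np.linspace(0, 1, attack),
                      np.linspace(1, 0, decay)])

note = filtered * env          # one synthesized note, ready to play or save
```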

The third wave was fueled by the Digital Revolution. Digital circuits were eventually able to model their analog counterparts, and then go far beyond their capabilities—from digital synthesis, to sampling and looping, to carrying a multi-track recording studio in your pocket. The ubiquity of computing transformed instruments into systems, and dramatically expanded the field through digital networks.

Image: Andrius Šarapovas, Kinetic Generative Music Installation

Network performance can be traced back as early as 1951, when the experimental composer John Cage used twelve radios to perform the piece Imaginary Landscape No. 4 for Twelve Radios [13]. In this performance, the radios were performed together in a way that allowed them to influence one another. Cage’s early cybernetic experiment pointed to a future where digitally networked instruments could talk to and control one another. Beginning in the 1980s, MIDI and other music-related network protocols enabled musicians to control—from a single keyboard—a host of other digitally enabled instruments across a network. By the turn of the century the World Wide Web was disrupting how music was distributed and shared. The uncharted information space of the web further expanded the field of network performance tools by extending them to a global audience.
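
It is worth pausing on how little data it takes for one instrument to drive another over MIDI: a note event is just a status byte plus a pitch and a velocity. The sketch below uses the Python library mido to show a single keyboard-style note travelling to an output port; it assumes a MIDI backend and at least one output device or virtual port is available on the system.

```python
import mido

# A MIDI "note on" message: channel, pitch, and velocity are all it carries.
note_on = mido.Message("note_on", note=60, velocity=100)   # middle C
note_off = mido.Message("note_off", note=60)

print(note_on.bytes())  # [144, 60, 100] -- three bytes on the wire

# Send it to a connected synth or software instrument.
# open_output() with no arguments uses the system's default port, if any.
with mido.open_output() as port:
    port.send(note_on)
    port.send(note_off)
```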

During a 2012 exhibition at the London Science Museum, visitors were invited to collaborate with millions of online guests to make music together in real time over the internet. Tellart collaborated with Google Creative Lab, and worked alongside partners Universal Design Studio, MAP, B-Reel, Karsten Schmidt, and Fraser Randall to develop the Universal Orchestra experiment, a robotic instrument array that allowed physical and virtual performance through the same system, in real time, from anywhere in the world. This open-source experiment pointed toward a future where digital and physical realities blend, and musical interaction between players happens live through telepresence.

Image: Universal Orchestra experiment, Chrome Web Lab. Tellart in collaboration with Google Creative Lab. Photograph by Andrew Meredith

A fourth wave of innovation is quickly washing in, as the Internet of Things evolves into smaller and more powerful networks of computational devices that exchange data in real time, then learn and adapt from those data through machine intelligence. Simultaneously, digital and physical materials are amalgamating, spawning a dizzying array of mutable forms, interfaces, and interactive possibilities for performing musically with systems.

Embedded computational instrument platforms—like Bela, developed by Dr. Andrew McPherson’s team at the Centre for Digital Music at Queen Mary University of London, and Satellite CCRMA, developed by Edgar Berdahl, Ph.D. and Wendy Ju, Ph.D. at Stanford’s Center for Computer Research in Music and Acoustics—are transforming how instruments sense the player, interconnect with their surroundings, and augment the acoustic properties of traditional instrumentation [14]. Fedde ten Berge applies sculptural techniques to design artifacts that challenge the notion of traditional instrument form while exploring the sonic qualities of materials, leveraging embedded computing to amplify the vibrational character of his objects. Subhraag Singh’s Infinitone is a technologically enhanced woodwind instrument, roughly the size of a soprano saxophone, that replaces traditional key pads with motor-controlled slides, allowing the instrument to play nearly any musical interval in the harmonic spectrum [15].

Image: Subhraag Singh performing with the Infinitone at the Georgia Tech Guthman Competition

Embedded computation allows instrument designers to experiment both with the acoustic properties of materials—amplifying, processing and augmenting tonality—and with formal properties—interfaces, media, shape and configuration [14].
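
In practice this usually takes the form of a small control loop: read a sensor the player touches, map the reading onto a sound parameter, and feed that parameter to the audio path. The sketch below is a schematic illustration of that loop, not the actual API of Bela or Satellite CCRMA; the sensor-reading and filter-setting functions are hypothetical placeholders for whatever I/O a given platform exposes.

```python
# Schematic control loop for an embedded instrument:
# sensor reading -> parameter mapping -> audio processing.

def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a sensor reading onto a useful parameter range."""
    x = min(max(x, in_lo), in_hi)
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def control_loop(read_pressure_sensor, set_filter_cutoff):
    """read_pressure_sensor and set_filter_cutoff are hypothetical
    stand-ins for a platform's analog input and audio-processing calls."""
    while True:
        pressure = read_pressure_sensor()            # 0.0 .. 1.0 from the player
        cutoff_hz = map_range(pressure, 0.0, 1.0, 200.0, 8000.0)
        set_filter_cutoff(cutoff_hz)                 # more pressure, brighter tone
```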

Image: Fedde ten Berge, sonic sculptures using embedded computation with Bela

As musical objects incorporate more powerful onboard computers that can sense the world and algorithmically learn to listen, move, and act musically, will these instruments have the capacity to join humans in live improvisational performance?

Mason Bretan, working under Gil Weinberg, Ph.D. at Georgia Tech’s Robotic Musicianship Lab, has developed Shimon, a marimba-playing robot that uses deep learning and big data to perform live with musicians, as well as to write and perform its own music. Shimon is able to listen, catch the beat, sync up, identify the themes of a live human performance, and respond with its own unique ideas [16]. Through choreographic gestures, the machine conveys musical body language similar to that of its human bandmates.

Image: Shimon, a marimba-playing robot, performing live with human players

As the invisible landscape of artificial intelligence is made visible through better tools and interfaces, we should expect to see a massive transformation in musical instrumentation. This work is already underway. Researcher Dr. Rebecca Fiebrink at Goldsmiths, University of London created Wekinator, open-source software for building musical instruments and other real-time creative applications using machine learning. The Magenta project, led by Google Brain principal researcher Douglas Eck, is helping to unlock Google’s machine learning services for music and art. Both Fiebrink and Eck advocate experimenting creatively with these technologies and making them accessible to artists. The Magenta team started out by releasing an experimental algorithm called NSynth, which uses deep neural networks (a form of machine learning based on learning data representations) to generate sounds. To further the experiment, the NSynth researchers designed a musical hardware interface for working with the algorithm, called NSynth Super [17].
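
Wekinator’s core idea—record a few examples pairing a gesture with the sound you want, then let a model interpolate between them in real time—can be approximated in a few lines with an off-the-shelf regressor. The sketch below is only an analogy, not Wekinator’s own implementation; the gesture features, parameter names and numbers are invented for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Training examples: hand-position features from some sensor (x, y) paired
# with the synth parameters (pitch in Hz, brightness 0-1) the artist wants
# at that position. All values here are invented for illustration.
gestures = np.array([[0.1, 0.2],
                     [0.5, 0.8],
                     [0.9, 0.4]])
sound_params = np.array([[220.0, 0.2],
                         [440.0, 0.9],
                         [660.0, 0.5]])

# Distance-weighted nearest neighbours blend between the recorded examples,
# the basic "map by example" move that Wekinator popularized.
model = KNeighborsRegressor(n_neighbors=3, weights="distance")
model.fit(gestures, sound_params)

# At performance time, each fresh sensor reading is mapped to parameters
# interpolated from the artist's examples.
new_gesture = np.array([[0.4, 0.6]])
pitch_hz, brightness = model.predict(new_gesture)[0]
print(f"pitch={pitch_hz:.1f} Hz, brightness={brightness:.2f}")
```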

Image: NSynth Super, making sounds generated by machine learning

In 1983, Pauline Oliveros began work on the Expanded Instrument System (EIS), evolving it with multiple collaborators until her death in 2016. Her vision was to create a performance environment in which musicians could explore interactions with technology, thereby “expanding” their instruments. Oliveros had a bold and spirited attitude toward technology and music. She embraced change, and throughout her life she continued to incorporate new layers of technology into the EIS [7]. If she were alive today, it is easy to imagine her curiosity leading to explorations of virtual and augmented sound, sensor networks, embedded computation, artificial intelligence, and a range of other cutting-edge transformations of musical instruments.

Musical instruments are undergoing a renaissance. They are transforming from singular sonic objects into intelligent systems for musical expression. Through a convergence of technologies and disciplines, fueled by computation, we are witnessing a full upheaval of traditional instrument-craft, leaving us to reimagine how music is produced, performed and disseminated to audiences around the world. Even as machines learn to think, and can now write and perform music of their own, this does not outmode human musicianship. Musicality is an integral part of the human experience. As we have seen, our musical nature extends back to prehistoric times, and mounting evidence illuminates our long evolution into complex musical creatures. Through mind and hand—“Mens et Manus”—humans are also sophisticated toolmakers. Our technology is deeply interwoven with our evolutionary nature. Therefore, the musical system of the future should, and will, incorporate musicians, emboldening them to express their musicality to ever higher degrees.

*******

SOURCES

[1] Conard, N. J., M. Malina, and S. C. Münzel. 2009. New flutes document the earliest musical tradition in southwestern Germany. Nature 460: 737-740.

[2] Higham, T., L. Basell, R. Jacobi, R. Wood, C. Bronk Ramsey, and N. J. Conard. 2012. Testing models for the beginnings of the Aurignacian and the advent of figurative art and music: The radiocarbon chronology of Geißenklösterle. Journal of Human Evolution.

[3] Michael A. Arbib (2013), Language, Music and the Brain, MIT Press

[4] Aniruddh D. Patel (2010), Music, Language, and the Brain, Oxford University Press

[5] Yuval Noah Harari (2015), Sapiens: A Brief History of Humankind, Harper

[6] Julius A. Stratton and Loretta H. Mannix (2005), Mind and Hand: The Birth of MIT, MIT Press

[7] Pauline Oliveros, Sounding the Margins: Collected Writings 1992–2009, Deep Listening Publications

[8] James Barron (2006), Piano: The Making of a Steinway Concert Grand, Times Books

[9] Alfred Dolge (1910), Pianos and Their Makers: A Comprehensive History of the Development of the Piano

[10] Olivia Groves (2017), The Evolution Of The Modern Piano

[11] Theodoros II, History of the Electric Guitar, Gizmodo

[12] Trevor Pinch (2004), Analog Days: The Invention and Impact of the Moog Synthesizer, Harvard University Press

[13] Jon H. Appleton, Ronald C. Perera (1975), The Development and Practice of Electronic Music

[14] E. Berdahl and W. Ju (2011), Satellite CCRMA: A musical interaction and sound synthesis platform. In Proceedings of the International Conference on New Interfaces for Musical Expression

[15] Alex Marshall (2017), 'The status quo will be obliterated!' – the inventors making their own musical instruments, The Guardian

[16] Mason Bretan, Gil Weinberg (2016), A survey of robotic musicianship, Communications of the ACM

[17] Jesse Engel, et al (2017), Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders, Google Magenta

http://www.core77.com/posts/77919/Musical-Instruments-Transformed