ORIGINS: LATE 19TH CENTURY TO EARLY 20TH CENTURY

The ability to record sounds is often associated with the production of electronic music, but it is not absolutely necessary for it. The earliest known sound recording device was the phonautograph, patented in 1857 by Édouard-Léon Scott de Martinville. It could record sounds visually, but was not meant to play them back.

In 1878, Thomas A. Edison patented the phonograph, which used cylinders similar to Scott’s device. Although cylinders continued in use for some time, Emile Berliner developed the disc phonograph in 1887.

A significant invention, which was later to have a profound effect on electronic music, was Lee DeForest’s triode audion. This was the first thermionic valve, or vacuum tube, invented in 1906, which led to the generation and amplification of electrical signals, radio broadcasting, and electronic computation, amongst other things.

Before electronic music, there was a growing desire among composers to use emerging technologies for musical purposes. Several instruments were created that employed electromechanical designs, and they paved the way for the later emergence of electronic instruments. An electromechanical instrument called the Telharmonium (sometimes Teleharmonium or Dynamophone) was developed by Thaddeus Cahill in the years 1898–1912. However, the Telharmonium’s immense size made it simply inconvenient, which hindered its adoption.

One of the most frequently mentioned early electronic instruments is the Theremin, invented by Professor Léon Theremin circa 1919–1920. Other early electronic instruments include the Audion Piano, invented in 1915 by Lee De Forest, inventor of the triode audion mentioned above; the Croix Sonore, invented in 1926 by Nikolai Obukhov; and the Ondes Martenot, which was most famously used by Olivier Messiaen in the Turangalîla-Symphonie as well as other works of his. The Ondes Martenot was also used by other, primarily French, composers such as André Jolivet.

THE RISE OF ELECTRONIC INSTRUMENTS IN THE 1920S AND 1930S

These decades brought a wealth of early electronic instruments and the first compositions for electronic instruments. The first instrument, the Etherophone, was created by Léon Theremin (born Lev Termen) between 1919 and 1920 in Leningrad, though it was eventually renamed the Theremin. This led to the first compositions for electronic instruments, as opposed to noisemakers and re-purposed machines. In 1929, Joseph Schillinger composed First Airphonic Suite for Theremin and Orchestra, premièred with the Cleveland Orchestra with Leon Theremin as soloist.

In addition to the Theremin, the Ondes Martenot was invented in 1928 by Maurice Martenot, who debuted it in Paris.

The following year, George Antheil first composed for mechanical devices, electrical noisemakers, motors and amplifiers in his unfinished opera, Mr. Bloom.

Recording of sounds made a leap in 1927, when American inventor J. A. O’Neill developed a recording device that used magnetically coated ribbon. However, this was a commercial failure.

Two years later, Laurens Hammond established his company for the manufacture of electronic instruments. He went on to produce the Hammond organ, which was based on the principles of the Telharmonium, along with other developments including early reverberation units.

Hammond (along with John Hanert and C. N. Williams) would also go on to invent another electronic instrument, the Novachord, which Hammond’s company manufactured from 1939 to 1942.

The method of photo-optic sound recording used in cinematography made it possible to obtain a visible image of a sound wave, as well as to realize the opposite goal—synthesizing a sound from an artificially drawn sound wave.

In this same period, experiments began with sound art, early practitioners of which include Tristan Tzara, Kurt Schwitters, Filippo Tommaso Marinetti, and others.

REVOLUTION OF MAGNETIC TAPE IN THE 1940S

Low-fidelity magnetic wire recorders had been in use since around 1900, and in the early 1930s the movie industry began to convert to the new optical sound-on-film recording systems based on the photoelectric cell. It was around this time that the German electronics company AEG developed the first practical audio tape recorder, the “Magnetophon” K-1, which was unveiled at the Berlin Radio Show in August 1935.

Walter Weber rediscovered and applied the AC biasing technique, which dramatically improved the fidelity of magnetic recording by adding an inaudible high-frequency tone. It extended the frequency response of the 1941 ‘K4’ Magnetophon to 10 kHz and improved the dynamic range to 60 dB, surpassing all known recording systems of the time.

As early as 1942, AEG was making test recordings in stereo. However, these devices and techniques remained a secret outside Germany until the end of WWII, when captured Magnetophon recorders and reels of Farben ferric-oxide recording tape were brought back to the United States by Jack Mullin and others.

These captured recorders and tapes were the basis for the development of America’s first commercially made professional tape recorder, the Model 200, manufactured by the American Ampex company with support from entertainer Bing Crosby, who became one of the first performers to record radio broadcasts and studio master recordings on tape.

Magnetic audio tape opened up a vast new range of sonic possibilities to musicians, composers, producers and engineers. Audio tape was relatively cheap and very reliable, and its fidelity of reproduction was better than any audio medium to date. Most importantly, unlike discs, it offered the same plasticity of use as film. Tape can be slowed down, sped up or even run backwards during recording or playback, with often startling effect. It can be physically edited in much the same way as film, allowing for unwanted sections of a recording to be seamlessly removed or replaced; likewise, segments of tape from other sources can be edited in. Tape can also be joined to form endless loops that continually play repeated patterns of pre-recorded material.
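These physical tape manipulations have simple digital analogues. As a minimal, purely illustrative sketch (the function names here are invented for the example, not taken from any real audio library), a recording can be modelled as a plain sequence of samples:

```python
# A digital analogue of the tape manipulations described above.
# A "recording" is just a list of samples; names are illustrative only.

def reverse(tape):
    """Play the tape backwards."""
    return tape[::-1]

def speed_up(tape, factor):
    """Crude resampling: keep every `factor`-th sample, shortening
    duration (and raising pitch) as running tape faster would."""
    return tape[::factor]

def loop(tape, times):
    """Approximate an endless tape loop by repeating the segment."""
    return tape * times

recording = [0, 1, 2, 3, 4, 5, 6, 7]
print(reverse(recording))      # [7, 6, 5, 4, 3, 2, 1, 0]
print(speed_up(recording, 2))  # [0, 2, 4, 6]
print(loop(recording[:2], 3))  # [0, 1, 0, 1, 0, 1]
```

Real tape, of course, couples speed and pitch inseparably; the `speed_up` sketch above mimics exactly that coupling.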

Audio amplification and mixing equipment further expanded tape’s capabilities as a production medium, allowing multiple pre-taped recordings (and/or live sounds, speech or music) to be mixed together and simultaneously recorded onto another tape with relatively little loss of fidelity. Another unforeseen windfall was that tape recorders can be relatively easily modified to become echo machines that produce complex, controllable, high-quality echo and reverberation effects (most of which would be practically impossible to achieve by mechanical means).
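The tape-echo idea boils down to mixing delayed, progressively quieter copies of a signal back into itself, as the spaced record and playback heads of a modified tape recorder do. A minimal sketch of that feedback structure, with an invented function name and pure-Python lists for clarity:

```python
# Sketch of a tape-style echo: each repeat is delayed by `delay`
# samples and attenuated by `feedback` relative to the previous one.

def tape_echo(signal, delay, feedback, repeats=3):
    """Mix `repeats` delayed, decaying copies into the signal."""
    out = signal + [0.0] * (delay * repeats)
    for n in range(1, repeats + 1):
        gain = feedback ** n          # each echo is quieter
        for i, s in enumerate(signal):
            out[i + n * delay] += s * gain
    return out

dry = [1.0, 0.0, 0.0, 0.0]            # a single impulse
wet = tape_echo(dry, delay=4, feedback=0.5, repeats=2)
print(wet)  # impulse followed by echoes at gains 0.5 and 0.25
```

On a real tape machine, the delay time is fixed by head spacing and tape speed rather than chosen freely as a parameter.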

The spread of tape recorders eventually led to the development of electroacoustic tape music. The first known example was composed in 1944 by Halim El-Dabh, a student in Cairo, Egypt. He recorded the sounds of an ancient zaar ceremony using a cumbersome wire recorder and, at the Middle East Radio studios, processed the material using reverberation, echo, voltage controls, and re-recording. The resulting work was entitled The Expression of Zaar, and it was presented in 1944 at an art gallery event in Cairo. While his initial experiments in tape-based composition were not widely known outside of Egypt at the time, El-Dabh is also notable for his later work in electronic music at the Columbia-Princeton Electronic Music Center in the late 1950s.

RISE OF COMPUTERS IN THE 1950S

In 1954, Stockhausen composed his Elektronische Studie II—the first electronic piece to be published as a score.

In 1955, more experimental and electronic studios began to appear. Notable were the creation of the Studio di Fonologia in Milan, a studio at the NHK in Tokyo founded by Toshiro Mayuzumi, and the Philips studio at Eindhoven, the Netherlands, which moved to the University of Utrecht as the Institute of Sonology in 1960.

The score for Forbidden Planet, by Louis and Bebe Barron, was entirely composed using custom-built electronic circuits and tape recorders in 1956.

The world’s first computer to play music was CSIRAC, which was designed and built by Trevor Pearcey and Maston Beard. Mathematician Geoff Hill programmed the CSIRAC to play popular musical melodies beginning in the very early 1950s.

In 1951 it publicly played the Colonel Bogey March, of which no known recordings exist. However, CSIRAC played standard repertoire and was not used to extend musical thinking or composition practice, as is current computer music practice. CSIRAC was never recorded, but the music it played has been accurately reconstructed (reference 12).

The oldest known recordings of computer-generated music were played by the Ferranti Mark 1 computer, a commercial version of the Baby machine from the University of Manchester, in the autumn of 1951. The music program was written by Christopher Strachey.

The impact of computers continued in 1956. Lejaren Hiller and Leonard Isaacson composed Illiac Suite for string quartet, the first complete work of computer-assisted composition using algorithmic composition. “… Hiller postulated that a computer could be taught the rules of a particular style and then called on to compose accordingly.” Later developments included the work of Max Mathews at Bell Laboratories, who developed the influential MUSIC I program.

In 1957, MUSIC, one of the first computer programs to play electronic music, was created by Max Mathews at Bell Laboratories. Vocoder technology was also a major development in this early era.

In 1956, Stockhausen composed Gesang der Jünglinge, the first major work of the Cologne studio, based on a text from the Book of Daniel. An important technological development of that year was the invention of the Clavivox synthesizer by Raymond Scott with subassembly by Robert Moog.

EXPANSION OF ELECTRONIC MUSIC IN THE 1960S

These were fertile years for electronic music—not just for academia, but for independent artists as synthesizer technology became more accessible.

By this time, a strong community of composers and musicians working with new sounds and instruments was established and growing. 1960 witnessed the composition of Luening’s Gargoyles for violin and tape as well as the premiere of Stockhausen’s Kontakte for electronic sounds, piano, and percussion. This piece existed in two versions—one for 4-channel tape, and the other for tape with human performers. “In Kontakte, Stockhausen abandoned traditional musical form based on linear development and dramatic climax.

This new approach, which he termed ‘moment form,’ resembles the ‘cinematic splice’ techniques in early twentieth century film.”

The first of these synthesizers to appear was the Buchla, in 1963, the product of an effort spearheaded by musique concrète composer Morton Subotnick.

The theremin had been in use since the 1920s but it attained a degree of popular recognition through its use in science-fiction film soundtrack music in the 1950s (e.g., Bernard Herrmann’s classic score for The Day the Earth Stood Still).

In the UK in this period, the BBC Radiophonic Workshop (established in 1958) emerged as one of the most productive and widely known electronic music studios in the world, thanks in large measure to their work on the BBC science-fiction series Doctor Who.

In 1961 Josef Tal established the Centre for Electronic Music in Israel at The Hebrew University, and in 1962 Hugh Le Caine arrived in Jerusalem to install his Creative Tape Recorder in the centre.

Milton Babbitt composed his first electronic work using the synthesizer—his Composition for Synthesizer (1961)—which he created using the RCA synthesizer at the Columbia-Princeton Electronic Music Center.

In San Francisco, composer Stan Shaff and equipment designer Doug McEachern presented the first “Audium” concert at San Francisco State College (1962), followed by a work at the San Francisco Museum of Modern Art (1963), conceived as the controlled movement of sound in space and time. Twelve speakers surrounded the audience, and four more were mounted on a rotating, mobile-like construction above.

In an SFMOMA performance the following year (1964), San Francisco Chronicle music critic Alfred Frankenstein commented, “the possibilities of the space-sound continuum have seldom been so extensively explored”.

In 1967, the first Audium, a “sound-space continuum” opened, holding weekly performances through 1970. In 1975, enabled by seed money from the National Endowment for the Arts, a new Audium opened, designed floor to ceiling for spatial sound composition and performance.

There are composers who manipulate sound space by locating multiple speakers at various locations in a performance space and then switching or panning the sound between the sources. In this approach, the composition of spatial manipulation is dependent on the location of the speakers and usually exploits the acoustical properties of the enclosure.
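Panning a sound between two of those sources is commonly done with an equal-power pan law, in which sine and cosine gains keep the total acoustic power roughly constant as the sound moves. A minimal sketch, with an invented function name for illustration:

```python
# Equal-power panning between a left and right speaker.
import math

def equal_power_pan(sample, position):
    """position in [0, 1]: 0 = fully left, 1 = fully right.
    Cosine/sine gains keep left^2 + right^2 constant across the pan."""
    left = sample * math.cos(position * math.pi / 2)
    right = sample * math.sin(position * math.pi / 2)
    return left, right

# At the centre both channels receive a gain of about 0.707 (-3 dB),
# so the sound is not perceived as louder mid-pan.
l, r = equal_power_pan(1.0, 0.5)
```

Simple linear crossfading (gains summing to 1) causes an audible dip in loudness at the centre, which is why the constant-power law is the usual choice.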

RISE OF SYNTHESIZERS IN THE 1970S

Released in 1970 by Moog Music, the Minimoog was among the first widely available, portable and relatively affordable synthesizers. It became the most widely used synthesizer in both popular and electronic art music. Patrick Gleeson, playing live with Herbie Hancock at the beginning of the 1970s, pioneered the use of synthesizers in a touring context, where they were subject to stresses the early machines were not designed for.

In 1974 the WDR studio in Cologne acquired an EMS Synthi 100 synthesizer which was used by a number of composers in the production of notable electronic works—amongst others, Rolf Gehlhaar’s Fünf deutsche Tänze (1975), Karlheinz Stockhausen’s Sirius (1975–76), and John McGuire’s Pulse Music III (1978).

The early 1980s saw the rise of bass synthesizers, the most influential being the Roland TB-303, a bass synthesizer and sequencer released in late 1981 that would later become synonymous with electronic dance music, particularly acid house. One of the first to utilize it was Charanjit Singh in 1982, though it would not be popularized until Phuture’s “Acid Tracks” in 1987.

SEQUENCERS AND DRUM MACHINES

Music sequencers began being used around the mid 20th century, with Tomita’s albums in the mid-1970s being later examples. In 1978, Yellow Magic Orchestra were using computer-based technology in conjunction with a synthesizer to produce popular music, making early use of the microprocessor-based Roland MC-8 Microcomposer sequencer.

Drum machines, also known as rhythm machines, also began being used around the late 1950s, with a later example being Osamu Kitajima’s progressive rock album Benzaiten (1974), which used a rhythm machine along with electronic drums and a synthesizer. In 1977, Ultravox’s “Hiroshima Mon Amour” was one of the first singles to use the metronome-like percussion of a Roland TR-77 drum machine.

In 1980, Roland Corporation released the TR-808, one of the first and most popular programmable drum machines. The first band to use it was Yellow Magic Orchestra in 1980, and it would later gain widespread popularity with the release of Marvin Gaye’s “Sexual Healing” and Afrika Bambaataa’s “Planet Rock” in 1982, after which the TR-808 would remain in continued use until at least 2008.

MIDI

In 1980, a group of musicians and music merchants met to standardize an interface by which new instruments could communicate control instructions with other instruments and the prevalent microcomputer.

This standard was dubbed MIDI (Musical Instrument Digital Interface) and resulted from a collaboration between leading manufacturers, initially Sequential Circuits, Oberheim, and Roland, later joined by Yamaha, Korg, and Kawai, among others. A paper authored by Dave Smith of Sequential Circuits was proposed to the Audio Engineering Society in 1981. Then, in August 1983, the MIDI Specification 1.0 was finalized.

The advent of MIDI technology allows a single keystroke, control wheel motion, pedal movement, or command from a microcomputer to activate every device in the studio remotely and in synchrony, with each device responding according to conditions predetermined by the composer.
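Those control instructions travel as compact channel messages: a status byte whose high nibble gives the message type and whose low nibble gives the channel (0–15), followed by two 7-bit data bytes. A minimal sketch of constructing Note On/Off messages as defined in the 1.0 specification (the helper names are invented for the example):

```python
# Byte layout of MIDI 1.0 channel-voice messages.
NOTE_ON, NOTE_OFF = 0x90, 0x80  # high nibble = message type

def note_on(channel, note, velocity):
    """Status byte (type | channel) plus two 7-bit data bytes."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([NOTE_ON | channel, note, velocity])

def note_off(channel, note, velocity=0):
    return bytes([NOTE_OFF | channel, note, velocity])

# Middle C (note number 60) at moderate velocity on channel 1 (index 0):
msg = note_on(0, 60, 64)
print(msg.hex())  # "903c40"
```

Because every message is only two or three bytes, a single keystroke really can trigger an entire studio over the original 31.25 kbaud serial link with negligible delay.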

MIDI instruments and software made powerful control of sophisticated instruments easily affordable by many studios and individuals. Acoustic sounds became reintegrated into studios via sampling and sampled-ROM-based instruments.

Miller Puckette developed graphic signal-processing software for the 4X called Max (after Max Mathews) and later ported it to the Macintosh (with David Zicarelli extending it for Opcode) for real-time MIDI control, bringing algorithmic composition within reach of most composers with modest computer programming backgrounds.

POPULARIZATION OF DANCE MUSIC IN THE 1980S AND 1990S

In the 1980s, many genres of popular electronic music exploited the MIDI protocol, a technological development that expanded interactivity and synchronized functionality across a range of music-related technologies.

In the 1990s, following the growth of personal computing, EDM creation began migrating to computer-based production systems.

Some of the most widely used synthesizers in EDM include the Yamaha DX7, Korg M1, and Roland’s Jupiter and SH-101. In addition, the most widely used bass synthesizer is the Roland TB-303, while the most widely used drum machines are Roland’s TR-808 and TR-909.

SOFTWARE DOMINATION IN THE 2000S

In recent years, as computer technology has become more accessible and music software has advanced, interacting with music production technology is now possible using means that bear no relationship to traditional musical performance practices: for instance, laptop performance (laptronica) and live coding. In general, the term Live PA refers to any live performance of electronic music, whether with laptops, synthesizers, or other devices.

In the last decade, a number of software-based virtual studio environments have emerged, with products such as Propellerhead’s Reason and Ableton Live finding popular appeal. Such tools provide viable and cost-effective alternatives to typical hardware-based production studios, and thanks to advances in microprocessor technology, it is now possible to create high quality music using little more than a single laptop computer.

Such advances have democratized music creation, leading to a massive increase in the amount of home-produced electronic music available to the general public via the internet.

Artists can now also individuate their production practice by creating personalized software synthesizers, effects modules, and various composition environments. Devices that once existed exclusively in the hardware domain can easily have virtual counterparts.

Some of the more popular software tools for achieving such ends are commercial releases such as Max/MSP and Reaktor, and open-source packages such as Csound, Pure Data, SuperCollider, and ChucK.