How software has changed music production

Music notation in the past

With the advent of computers, a multitude of new ways of doing things has developed, and music notation is no different. Traditionally, composers and arrangers had to write out their scores by hand, which was very time consuming and tedious. Written music had to be copied by hand as well until, according to The Encyclopædia Britannica, Ottaviano dei Petrucci developed the first polyphonic music printed from movable type around the beginning of the 16th century.

In the 1960s, composers still had to write out their scores by hand, and although different methods of reproducing written music had been developed, many were impractical, as explained in the article “How does using music notation software affect your music?” by Robert Morris:

In the early 1960s, when I was an undergraduate composer at the Eastman School, there was really only one way to reproduce one’s scores, short of having them engraved in the process of publication. One copied music on transparent music paper (velum) in India ink and sent the masters to a blueprint house to be reproduced on an ozalid machine. Other options—reproduction via chemical copying machines or music typewriters—were infeasible.

Music notation software began to emerge slowly, but it was not very practical until the late 1980s. Various programs came and went until the two best-known programs arrived: Finale and Sibelius.

How notation software and MIDI have affected music composition

Music notation software has opened up many new possibilities for composers. Composers can now print, copy, and share their pieces more easily than ever before.

MIDI (Musical Instrument Digital Interface) is another great advancement in music production. MIDI is essentially a digital language that encodes musical messages such as pitch, duration, velocity, and more. MIDI does not produce sound by itself; it is only data. MIDI instruments, whether hardware keyboards or software instruments, interpret that data and produce the actual sound. MIDI controllers, such as keyboards, generate MIDI messages and can sometimes receive them as well.
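
To give an idea of just how simple this data is, here is a small sketch in Python (my own illustration, with example values for the channel, note, and velocity) of the three bytes that make up a standard MIDI 1.0 note-on and note-off message:

```python
# A minimal sketch of MIDI channel messages as raw bytes (MIDI 1.0 layout).
# The note and velocity values below are just example choices.

NOTE_ON = 0x90   # high nibble 0x9 = note-on, low nibble = channel number
NOTE_OFF = 0x80  # high nibble 0x8 = note-off

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte note-on message: status byte, note number (0-127), velocity (0-127)."""
    return bytes([NOTE_ON | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel: int, note: int) -> bytes:
    """Build the matching note-off message (velocity 0)."""
    return bytes([NOTE_OFF | (channel & 0x0F), note & 0x7F, 0])

# Middle C (note 60) on channel 0, struck at velocity 100:
print(note_on(0, 60, 100).hex())  # -> '903c64'
print(note_off(0, 60).hex())      # -> '803c00'
```

Every key press on a MIDI keyboard boils down to a handful of bytes like these, which is why a notation program, a synthesizer, and a DAW can all interpret the same data.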

With the combination of MIDI technology and notation software, musicians are able to input notes into the notation software through a MIDI controller and hear their compositions played back via MIDI. This innovation gives musicians a quick, intuitive way to notate music by simply playing a piece on a MIDI keyboard. Musicians still need knowledge of music theory, however, since this method introduces many errors that must be corrected in the notation software.

Another breakthrough that MIDI has made possible is the ability to hear compositions played back. This is an incredibly useful tool for younger or less experienced composers because it allows them to easily check for errors by ear rather than tediously reading the score and hearing the composition in their head, a skill that takes considerable training to achieve.

This ability to hear compositions played back has sparked some controversy, however. Traditional composers go through rigorous training to be able to read written music and mentally hear the way it will sound when played. Before MIDI, this was the only way to compose. With MIDI playback, it is no longer absolutely necessary to hear the music mentally.

One concern that some musicians have with MIDI playback is that inexperienced composers might become dependent on it to check for errors. Another is that since MIDI playback in notation software does not properly represent the abilities of real instruments, composers will write pieces that are impossible or impractical to play.

These are real issues, and as a developing composer myself, they are ones that I face regularly. I will admit that I am almost entirely dependent on MIDI playback to make sure I haven’t made any errors, since I have a limited ability to hear written music mentally. I also often want to write for instruments that I am not very familiar with, but I am never sure how practical the parts would be to play. The only way to prevent these problems is to gain a sufficient understanding of the instruments used and their capabilities.

How sound editing software has changed music

The most common type of sound editing software used in music is the DAW (Digital Audio Workstation), and the best known and most widely used DAW is Pro Tools. When recorded music first came to be, it was all “raw,” or unedited, but today it is unheard of not to use at least some form of editing.

With a DAW, audio engineers have a multitude of tools at their disposal with which they can simply polish up a piece of music or change it into something entirely different. As time goes on, music seems to be more and more heavily edited. Auto-tune is a great example of this. Auto-tuning has become standard practice in the industry and is a controversial topic. Some people think using auto-tune is cheating because it can make less skilled vocalists appear more talented, and because it has made many performers and audio engineers lazy.
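
To show the basic idea behind pitch correction, here is a small sketch in Python (my own illustration, not how any particular auto-tune plugin actually works) that snaps a detected frequency to the nearest equal-tempered semitone, using the standard relationship between frequency and MIDI note numbers:

```python
import math

A4_HZ = 440.0  # reference tuning pitch (A above middle C)

def snap_to_semitone(freq_hz: float) -> float:
    """Snap a detected frequency to the nearest equal-tempered semitone."""
    # Convert the frequency to a fractional MIDI note number...
    midi_note = 69 + 12 * math.log2(freq_hz / A4_HZ)
    # ...round to the nearest whole semitone...
    nearest = round(midi_note)
    # ...and convert back to a frequency.
    return A4_HZ * 2 ** ((nearest - 69) / 12)

# A vocalist singing slightly flat of A4 (about 430 Hz) gets pulled up to 440 Hz:
print(round(snap_to_semitone(430.0), 1))  # -> 440.0
```

In practice, engineers usually apply the correction gradually and only partway toward the target pitch; snapping every note instantly and completely is what produces the robotic, non-human sound described below.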

Auto-tune is a tool like any other available to audio engineers. When it is used sparingly to make small adjustments to the vocals, or deliberately pushed to an extreme as a special effect, I see no problem with it. But when it is used heavily to cover up a vocalist’s lack of talent and ends up sounding non-human when it shouldn’t, it becomes a problem.

Pre-recorded sound in live music is overused

The use of pre-recorded sound, or backing tracks, has been around for a while and is commonplace, if not standard to some extent, in most live performances today.

Pre-recorded sound is not always a bad thing and there are many legitimate uses for it.  It is only when it is overused or used dishonestly that it becomes a problem.

The uses of pre-recorded sound in live performances

Backing tracks are often used in live performances to help augment the sound of certain instruments or vocals.  When live vocalists sing over pre-recorded vocal tracks to make the mix of sound “bigger,” it is referred to as doubling.

Doubling is a very common use of pre-recorded sound and usually makes the performance sound richer. Doubling is primarily used for vocals, but it is often used for instruments such as guitars as well. I have had experience with this technique myself from recording vocal tracks for a musical/dance performance at Angelina College. I do not have a problem with this kind of pre-recorded sound because the actual live sound usually overpowers the pre-recorded sound.

Another very common use of pre-recorded sound is to add additional tracks to a live performance when it is impossible or impractical to hire live musicians for the tracks. When this technique is used sparingly, and depending on the style of music, most people do not have a problem with it.

In most electronic music, for example, it is expected that some tracks will be pre-recorded, such as fast-paced synth lines and especially drum loops. In fact, live electronic dance music (EDM) performances rely entirely on pre-recorded sound and samples. Some people think that EDM DJs simply press the play button on stage and then do nothing, but in reality they are doing more than that. Most EDM musicians use a combination of control surfaces and software to play samples and loops at the right times during the performance, and sometimes also control effects such as distortion. The YouTube video Dillon Francis & DJ Snake – (GET LOW) – [Launchpad Remix] by Chris Rinaldi shows a glimpse of how a control surface is used.

In symphonic metal, one of my favorite genres of music, pre-recorded strings and other classical instruments are necessary because of the cost and impracticality of touring with an entire orchestra.

When pre-recorded sound is overused

Pre-recorded sound is a great advancement in music technology and truly enhances live performances when used properly, but it also has the potential to be abused.

Minor instances of abuse occur when musicians rely so heavily on additional tracks and pre-recorded sound that there is little live sound coming through. Such cases are still very enjoyable, but in my opinion, not as authentic as they could be.

Much of the reason some bands, especially smaller, lesser-known bands, use additional tracks is that they have very few members. From my own experience at live performances, I will compare two bands I have seen live: one with four members and one with three. The first, whose name I have since forgotten, was the opening band at an Evanescence concert I attended. I enjoyed their performance even though they used quite a few additional tracks, and the vocals were heavily doubled or perhaps even lip-synced in some places. The other band, Courage My Love, used very little if any pre-recorded sound yet still sounded amazing with only a guitarist/vocalist, a drummer/vocalist, and a bassist. That proved to me that pre-recorded sound is not necessary for a great live performance.

Lip-syncing and tracked instruments

There are many times when performers pretend to be playing or singing live when the sound the audience hears is actually a backing track.

Sometimes the entire backing band is tracked and merely pretends to play while the vocalist actually sings live, or it can be the other way around. Entire performances can be tracked as well, meaning all the sound heard is pre-recorded.

Pre-recorded sound is heavily abused in this way in pop music. Live pop music has become increasingly about the visual aspects of the show rather than the music itself. Complex dance routines and special effects dominate live pop performances.

Pop stars have received a lot of criticism for lip-syncing during their performances. Their defense is mostly that they can’t sing properly while dancing vigorously. Many people are not satisfied with that excuse, however, often pointing out that the artist P!nk has performed intricate aerial acrobatics while singing live quite well. During one of Britney Spears’ performances in Australia, many audience members walked out of the show very disappointed because of how obvious it was that she was lip-syncing, and her dancing was apparently not very impressive either.

One of the most notorious cases of lip-syncing involved the group Milli Vanilli. During one of their performances, the backing track began to skip, repeating the line “Girl you know it’s,” revealing that they had been lip-syncing the whole time. It was later discovered that the two front men had not sung the vocals on the album either and were complete fakes. After they were exposed, they tried to perform with their own voices, but since they weren’t very good, they lost their reputation very quickly. Even when the group’s producer had the actual vocalists from the album perform live, the group’s reputation had suffered so much that it never recovered.

Piracy and lip-syncing

Because piracy gives listeners little incentive to buy recorded music, most musicians cannot support themselves on recording sales alone, so they have to perform live to make up the difference. When musicians deceive audience members by pretending to sing live while only lip-syncing, they remove the musical value of their performances and leave people no incentive to pay them for their music.

If a musical performance is entirely pre-recorded, then it should not be labeled a live musical performance. Lip-syncing is fraud, and it is hurting the only sustainable source of income that most musicians have left.