Music notation in the past
With the advent of computers, a multitude of new ways of doing things have developed, and music notation is no different. Traditionally, composers and arrangers had to write out their scores by hand, which was very time consuming and tedious. Written music had to be copied by hand as well until, according to The Encyclopædia Britannica, Ottaviano Petrucci developed the first polyphonic music printed from movable type around the beginning of the 16th century.
In the 1960s, composers still had to write out their scores by hand, and although different methods of reproducing written music had been developed, many were impractical, as Robert Morris explains in his article, “How does using music notation software affect your music?”:
In the early 1960s, when I was an undergraduate composer at the Eastman School, there was really only one way to reproduce one’s scores, short of having them engraved in the process of publication. One copied music on transparent music paper (velum) in India ink and sent the masters to a blueprint house to be reproduced on an ozalid machine. Other options—reproduction via chemical copying machines or music typewriters—were infeasible.
When music notation software slowly began to emerge, it was not very practical until the late 1980s. Various programs came and went until the two best-known programs arrived: Finale and Sibelius.
How notation software and MIDI have affected music composition
Music notation software has opened up many new possibilities for composers. Now composers can print, copy, and share their pieces more easily than ever before.
MIDI (Musical Instrument Digital Interface) is another great advancement in music production. MIDI is essentially a digital language that encodes musical messages such as pitch, duration, velocity, and more. MIDI does not produce sound by itself, since it is only data; MIDI instruments, whether hardware keyboards or software synthesizers, interpret that data and produce the sound. MIDI controllers, such as keyboards, generate MIDI messages and can sometimes receive them as well.
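To give a concrete sense of how small these messages are, here is a minimal sketch in Python of a standard MIDI “note on” message, which is just three bytes; the particular channel, note, and velocity values are arbitrary examples:

```python
# A MIDI "note on" message is three bytes:
#   status byte: 0x90 (note on) combined with the channel number (0-15)
#   data byte 1: note number (0-127; middle C is 60)
#   data byte 2: velocity (0-127; roughly, how hard the key was struck)

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a raw MIDI note-on message. No sound is produced here --
    a synthesizer or software instrument must interpret these bytes."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

msg = note_on(channel=0, note=60, velocity=100)  # middle C, moderately loud
print(msg.hex())  # -> "903c64"
```

Duration is not stored in this message at all: a note ends only when a matching “note off” message arrives later, which is why MIDI is a stream of events rather than a recording of sound.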
With the combination of MIDI technology and notation software, musicians can input notes into the notation software through a MIDI controller and hear their compositions played back via MIDI. This innovation provides a quick, intuitive way for musicians to notate music by simply playing a piece on a MIDI keyboard. Musicians still need a knowledge of music theory, however, since this method introduces many errors that must be corrected in the notation software.
Another breakthrough that MIDI has made is the ability to hear compositions played back. This is an incredibly useful tool for younger or more inexperienced composers, because it allows them to check for errors by ear rather than tediously checking the score visually and hearing the composition in their head, a skill that takes considerable training to achieve.
This ability to hear compositions played back has, however, sparked some controversy. Traditional composers go through rigorous training to be able to read written music and mentally hear the way it will sound if played. Before MIDI, this was the only way to compose. With MIDI playback, it is no longer absolutely necessary to hear the music mentally.
One concern that some musicians have with MIDI playback is that inexperienced composers might become dependent on it to check for errors. Another is that, since MIDI playback in notation software does not accurately represent the abilities of real instruments, composers will write pieces that are impossible or impractical to play.
These are real issues, and as a developing composer myself, they are ones that I face regularly. I will admit that I am almost entirely dependent on MIDI playback to make sure I haven’t made any errors, since I have a limited ability to hear written music mentally. I also often have the desire to write for instruments that I am not very familiar with, but I am always unsure how practical the results would be to play. The only way to prevent such problems is to gain a sufficient understanding of the instruments used and their capabilities.
How sound editing software has changed music
The most common type of sound editing software used in music is the DAW (Digital Audio Workstation), and the best known and most widely used is Pro Tools. When recorded music first came to be, it was all “raw,” or unedited, but today it is unheard of not to use at least some form of editing.
With a DAW, audio engineers have a multitude of tools at their disposal, with which they can simply polish up a piece of music or practically change it into something entirely different. As time goes on, music seems to become more heavily edited. Auto-tune is a great example of this: it has become standard practice in the industry and is a controversial topic. Some people think using auto-tune is cheating, because it can allow less skilled vocalists to appear more talented and has made many performers and audio engineers lazy.
Auto-tune is a tool just like any other available to audio engineers. When it is used sparingly to make small adjustments to a vocal track, or deliberately pushed to an extreme as a special effect, I see no problem with it. But when it is used heavily to cover up a vocalist’s lack of talent, and the result sounds non-human when it shouldn’t, it becomes a problem.