What is needed to play the music? What kind of processing can you do with it? Apply effects, such as distortion, to an existing sound. Assemble and sequence short sound files, for instance in composition tools that use audio loops.
Any kind of playback function, up to advanced jukebox applications. A MIDI representation tells what notes to play and when to play them. How can you store this data in memory and in files? You can invert time for special effects by reading the samples backwards. One method of synthesizing the notes is to use a library of samples of real instruments.
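The backwards-reading trick can be shown in a few lines. This is a minimal sketch, with a plain Python list standing in for a real PCM sample buffer; the names are illustrative:

```python
# Hypothetical sketch: playing a sound backwards is simply reading
# its sample buffer in reverse order.

def reverse_samples(samples):
    """Return the sample buffer reversed, for backwards playback."""
    return samples[::-1]

buffer = [0, 120, 250, 130, 10]        # a tiny mono sample buffer
backwards = reverse_samples(buffer)
print(backwards)                        # [10, 130, 250, 120, 0]
```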
When you play a note at a different pitch than the original sample, you must change the rate at which the samples are read so that the note's pitch is correct. This only works up to a certain point, because the timbre of an instrument changes with pitch, and beyond that the sound becomes too artificial. A table of data structures, each representing one note, must be updated to keep track of which notes are still playing. A new data structure is assigned to each starting note; the real-time process then gives some processing time to each playing note and checks which notes have finished, so that finished notes no longer receive processing time. Releasing a note must first unlock the loop playback and then play the release part of the note: you cannot simply stop reading the sample, because a real instrument always takes at least a short time to go from a sounding note to complete silence. Percussion instruments are easier to handle, as they do not need a loop.
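The note table and its lifecycle might be sketched as follows. All names, loop points and lengths are illustrative assumptions, not a real synthesizer API:

```python
# Hypothetical sketch of a voice table for a sample-based synthesizer.
# Note-on allocates a slot; note-off unlocks the sustain loop so the
# voice plays its release tail before the slot is freed.

class Voice:
    def __init__(self, pitch, loop_start, loop_end, release_len):
        self.pitch = pitch
        self.pos = 0
        self.loop_start, self.loop_end = loop_start, loop_end
        self.looping = True            # sustain phase cycles the loop points
        self.release_left = release_len
        self.finished = False

    def process(self, n):
        """Give this voice n samples of processing time."""
        for _ in range(n):
            self.pos += 1
            if self.looping and self.pos >= self.loop_end:
                self.pos = self.loop_start     # keep cycling the sustain loop
            elif not self.looping:
                self.release_left -= 1         # play out the release tail
                if self.release_left <= 0:
                    self.finished = True
                    break

    def note_off(self):
        self.looping = False   # unlock the loop; the release part follows

voices = []                    # the table of currently playing notes

def note_on(pitch):
    voices.append(Voice(pitch, loop_start=100, loop_end=200, release_len=50))

def real_time_tick(n=64):
    for v in voices:
        v.process(n)
    # release finished voices so they no longer get processing time
    voices[:] = [v for v in voices if not v.finished]
```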
What kind of processing can you do with this music data structure? Transposing a piece of music, or a part of it, is very simple: you just add a constant number to every note pitch. Music-generating algorithms for composition applications can generate sequences of chords and notes. How is this music data structure generated? How can you store these data structures in memory and in files? The notes are organized in time sequences.
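Transposition really is just an addition. A minimal sketch using MIDI note numbers (60 = middle C), with illustrative names:

```python
# Transposing a melody: add a constant number of semitones to every pitch.

def transpose(notes, semitones):
    """Return the melody shifted by the given number of semitones."""
    return [pitch + semitones for pitch in notes]

melody = [60, 62, 64, 65, 67]          # C D E F G
print(transpose(melody, 3))            # up a minor third: [63, 65, 67, 68, 70]
```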
In a purely MIDI sequence, such a note would be split into two messages, a note-on and a note-off. Start or activate a timer that will call the real-time processing routine. Reset a variable that counts time relative to the beginning of the sequence. When the real-time routine is called, it checks the MIDI event list and sends the events whose time has been reached by the counter. It then increments the time counter, advances the pointer to the next MIDI event to wait for, and returns. Meanwhile, the main program can regularly check the time counter and, for instance, update the screen to display the coming measures so that the user can see what is playing at any time, and handle user interaction, like changing the tempo, stopping the playback, jumping to another measure or even editing the music.
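The timer-driven steps above can be sketched as follows. Here `send_midi` is a hypothetical placeholder for the real MIDI output call, event times are in ticks, and the timer is simulated by a plain loop:

```python
# Hypothetical sketch of the real-time routine: a time counter, a sorted
# list of (time, message) MIDI events, and a pointer to the next event.

events = [(0, "note_on 60"), (0, "note_on 64"),
          (480, "note_off 60"), (480, "note_off 64")]
next_event = 0          # index of the next MIDI event to wait for
time_counter = 0        # ticks since the beginning of the sequence
sent = []

def send_midi(msg):
    sent.append(msg)    # placeholder for the actual MIDI interface

def real_time_routine():
    """Called by the timer: send every event whose time has been reached."""
    global next_event, time_counter
    while next_event < len(events) and events[next_event][0] <= time_counter:
        send_midi(events[next_event][1])
        next_event += 1
    time_counter += 1   # advance to the next tick and return

# a real timer would call real_time_routine() at a fixed rate;
# simulate 481 ticks so the events at tick 480 are also sent
for _ in range(481):
    real_time_routine()
```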
The second class of objects are the containers, that is to say the measures, staves, systems and pages; they contain the objects of the first class. The third class consists mainly of the clefs, key signatures and time signatures; they are the interpreters of how you read the objects found in the containers. They too are placed in the containers.
Let us take an example using the above three classes of objects. Compute the start time and duration of a note from its position in the measure, relative to the other notes and the time signature. A tempo indication must be used to attach the events to a real-time position. The presence of a staccato sign or a slur may influence the exact duration. The names of the instruments are often displayed in front of the staves in the first system; with that information, decide which MIDI message to send to activate the correct sound in the synthesizer. Assemble the sequence of events for each measure, then start feeding them to the real-time routine that sends them to the MIDI interface. When more than one voice is present in a MIDI track, the algorithm must also decide which note belongs to which voice so that the final result looks logical and readable.
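The start-time computation can be sketched under simplifying assumptions: durations are given in beats, there is a single tempo indication, and staccato or slur adjustments are ignored. All names are illustrative:

```python
# Hypothetical sketch: turning notated durations into millisecond event times.

def beats_to_ms(beats, tempo_bpm):
    """One beat lasts 60000 / tempo_bpm milliseconds."""
    return beats * 60000.0 / tempo_bpm

def measure_to_events(durations_in_beats, tempo_bpm, pitches):
    """Compute (start_ms, duration_ms, pitch) from positions in the measure."""
    events, start = [], 0.0
    for beats, pitch in zip(durations_in_beats, pitches):
        events.append((beats_to_ms(start, tempo_bpm),
                       beats_to_ms(beats, tempo_bpm), pitch))
        start += beats                 # the next note starts where this one ends
    return events

# four quarter notes at 120 BPM: each one starts 500 ms after the previous
print(measure_to_events([1, 1, 1, 1], 120, [60, 62, 64, 65]))
```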
Grace notes and trills must be detected and written properly. However, reading a MusicXML file, transforming it into your own music data structures and displaying it on screen or sending it to the printer is not a straightforward task and may take considerable time to develop. It all depends on what your software must be able to do. For the page layout of the score: the score itself is an array of pages, a page is an array of systems, a system is an array of staves and a staff is an array of measure objects.
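That containment hierarchy maps naturally onto nested data structures. A hypothetical sketch using Python dataclasses, with illustrative field names:

```python
# Score layout hierarchy: score -> pages -> systems -> staves -> measures.
from dataclasses import dataclass, field

@dataclass
class Measure:
    contents: list = field(default_factory=list)  # notes, rests, clefs, signatures

@dataclass
class Staff:
    measures: list = field(default_factory=list)

@dataclass
class System:
    staves: list = field(default_factory=list)

@dataclass
class Page:
    systems: list = field(default_factory=list)

@dataclass
class Score:
    pages: list = field(default_factory=list)

score = Score(pages=[Page(systems=[System(staves=[Staff(measures=[Measure()])])])])
print(len(score.pages[0].systems[0].staves[0].measures))  # 1
```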
Each note or rest object contains a list of symbols (chords, articulations, lyrics) attached to that specific note or rest, which may extend to a later note or rest. The duration of such a flow is arbitrary; it may contain a single note or several hundred notes and rests, but it is organized so that no dependencies exist between two consecutive flows, at least for the purely graphical side of processing. Generating a MIDI sequence from it is simple, as each note has its absolute pitch value available. I know how much time can be spent writing music notation algorithms. It is of course a great and exciting adventure. But nobody would accuse the piano of having made the composition, right?
It all depends on the algorithm and methods used. Let us take a very simple example: an arpeggiator. You can send the generated notes to the MIDI interface and hear the resulting music. I wish you a nice time developing music software.

The anthology will ideally be out in Fall 2013. And we're still finalizing the subtitle.
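Returning to the arpeggiator: it can be sketched in a few lines. A hypothetical minimal version that turns a held chord into a rising pattern of single notes, using MIDI note numbers:

```python
# Minimal arpeggiator sketch: repeat the held chord's pitches in ascending order.

def arpeggiate(chord, cycles):
    """Return the chord's pitches played one by one, low to high, cycles times."""
    pattern = sorted(chord)
    return pattern * cycles

held_chord = [64, 60, 67]              # an E, C, G chord held on the keyboard
print(arpeggiate(held_chord, 2))       # [60, 64, 67, 60, 64, 67]
```

Each resulting pitch would then be sent to the MIDI interface as a note-on at the next step of the arpeggio clock.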
So here's the best citation I have: Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot, eds. Below is the introduction, to give you a taste: "Algorithms play an increasingly important role in selecting what information is considered most relevant to us, a crucial feature of our participation in public life."