Before we can begin to synthesize sound, we need to know the answer to one crucial question:
WHAT IS SOUND?
“Mechanical vibrations transmitted through an elastic medium, traveling in air at a speed of approximately 1130 ft per second (770 mph) at sea level.” (http://dictionary.reference.com/browse/sound)
“Sound is defined by ANSI/ASA S1.1-2013 as ‘(a) Oscillation in pressure, stress, particle displacement, particle velocity, etc., propagated in a medium with internal forces (e.g., elastic or viscous), or the superposition of such propagated oscillation. (b) Auditory sensation evoked by the oscillation described in (a).’” (https://en.wikipedia.org/wiki/Sound)
[Cymatic image: Audio Visualization by Jessie Edsall]
Sound comes from a series of vibrations, and it travels in waves. When a source (something that produces sound) vibrates, it transfers its energy to the surrounding particles, causing them to vibrate. Those particles then bump into the ones next to them, and so on. This causes the particles to move back and forth while waves of energy move outward in all directions from the source.
Waves are made up of compressions, where molecules are pressed together, and rarefactions, where molecules are given extra space and allowed to expand. Remember that sound is a type of kinetic energy. As the molecules are pressed together, they pass the kinetic energy to each other. Thus sound energy travels outward from the source. These molecules are part of a medium, which is anything that carries sound. Sound travels through air, water, or even a block of steel; all are mediums for sound. Without a medium there are no molecules to carry the sound waves. In places like space, where there is no atmosphere, there is no sound. (https://www.nde-ed.org/EducationResources/HighSchool/Sound/Popup/discussion002.htm)
We measure sound waves in a chart representing volume/amplitude over time. Visualize the flat plane of a placid lake of water. A cross-section of this lake would show a straight, flat horizontal line, without movement. This centreline represents the midpoint or resting point of the medium (such as air) that can transfer energy. This midpoint also represents a speaker that is not producing sound (the cone sits at its centre point).
When a disturbance is made within our medium, such as a tuning fork struck in a room of air or a rock being tossed into our placid lake, the energy of this disturbance is transferred through our medium in the form of a wave or series of ripples.
We are able to hear these sound waves because the ripples in air (just like in water) have a physical impact on the organs we use for hearing (Our ears). The sound waves travel from the sound source, through the air, are focused by the outer part of our ears (the Pinna) into our ear canal and finally impact onto our ear drum. The movement of our ear drum is then converted from a mechanical vibration to an electrical signal sent to our brain for interpretation.
Variations in the frequency, amplitude and shape of these waves are what affect how the wave sounds to our ears and ultimately let us differentiate between sound sources. These differences in sounds can be understood by knowing the 3 elements of sound: Pitch, Timbre and Volume.
Synthesis is the creation of sounds as electrical vibrations by adjusting the pitch, timbre and volume of a wave. By understanding the elements of sound, we can begin to create our own unique electronically produced sounds.
There are a few definitions you should understand before we continue:
Cycle: a single unit of vibration
Frequency: speed of vibration (high frequency vs. low frequency)
Hertz (Hz): cycles per second
Kilohertz (kHz): 1,000 Hz
Audible frequency range for humans: 20 Hz to 20,000 Hz
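The definitions above can be sketched as simple arithmetic. This is a minimal illustration (my own helper names, not from the lesson), using the approximate speed of sound quoted earlier (1130 ft per second at sea level):

```python
SPEED_OF_SOUND_FT_S = 1130  # approximate speed of sound in air at sea level

def period_seconds(frequency_hz):
    """Duration of one cycle: the inverse of frequency."""
    return 1.0 / frequency_hz

def wavelength_feet(frequency_hz):
    """Physical length of one cycle travelling through air."""
    return SPEED_OF_SOUND_FT_S / frequency_hz

# The two extremes of human hearing:
print(wavelength_feet(20))      # 56.5 ft for a 20 Hz bass tone
print(wavelength_feet(20_000))  # 0.0565 ft (well under an inch) at 20 kHz
```

Note how dramatically wavelength shrinks as frequency rises across the audible range.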
3 Elements of Sound:
1. Pitch: Musical term for frequency – 440 Hz = A
- the lower the frequency, the lower the pitch
- the higher the frequency, the higher the pitch
2. Timbre: The character or quality of a sound
- how we can tell the difference between a piano playing an “A” and a trumpet playing the same note
- a tone’s harmonic structure: the fundamental frequency plus its overtones, harmonics and partials, which together create a tone’s timbre
3. Volume: The loudness (amplitude) of a sound, shaped over time by the volume envelope
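The pitch element above can be made concrete. In twelve-tone equal temperament (an assumption here, though it is the standard Western tuning), each semitone multiplies frequency by 2^(1/12), anchored at A = 440 Hz; a rough sketch:

```python
A4_HZ = 440.0  # the reference pitch noted above: 440 Hz = A

def pitch_hz(semitones_from_a4):
    """Frequency of a note a given number of semitones above (or below) A4."""
    return A4_HZ * 2 ** (semitones_from_a4 / 12)

print(pitch_hz(0))    # 440.0 (A4)
print(pitch_hz(12))   # 880.0 (A5: one octave up doubles the frequency)
print(pitch_hz(-12))  # 220.0 (A3: one octave down halves the frequency)
```

This is why an octave sounds like “the same note, higher”: the frequency relationship is an exact ratio of 2.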
7 Main Components of a Synthesizer:
1. Oscillator – Controls pitch – “a device for generating oscillating electric currents or voltages by non-mechanical means.”
2. Amplifier – An electronic device for increasing the amplitude of electrical signals, used chiefly in sound reproduction.
3. Filter – Increases or decreases the upper harmonics. An audio filter is a frequency-dependent amplifier circuit, working in the audio frequency range, 0 Hz to beyond 20 kHz. Audio filters can amplify (“boost”), pass or attenuate (“cut”) some frequency ranges.
4. Filter envelope – Commonly automates a filter’s Attack, Decay, Sustain and Release.
5. Volume envelope – Commonly automates an amplifier’s Attack, Decay, Sustain and Release.
6. Pitch envelope – Commonly automates an oscillator’s Attack, Decay, Sustain and Release.
7. LFO – A low-frequency oscillation (LFO) is an electronic signal, usually below 20 Hz, that creates a rhythmic pulse or sweep. This pulse or sweep is often used to modulate synthesizers, delay lines and other audio equipment in order to create effects used in the production of electronic music.
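The three envelope components above all share the same Attack, Decay, Sustain, Release (ADSR) shape. A minimal sketch of that idea (the times and levels here are illustrative choices, not Operator's defaults):

```python
def adsr(t, attack, decay, sustain, release, note_off):
    """Envelope value (0..1) at time t, for a note released at time note_off.
    attack/decay/release are durations in seconds; sustain is a level 0..1."""
    if t < attack:                      # Attack: ramp from 0 up to 1
        return t / attack
    if t < attack + decay:              # Decay: ramp from 1 down to sustain
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_off:                    # Sustain: hold while the key is down
        return sustain
    if t < note_off + release:          # Release: ramp from sustain down to 0
        return sustain * (1.0 - (t - note_off) / release)
    return 0.0

# 0.1s attack, 0.2s decay, sustain at 0.6, 0.5s release, key released at t=1.0:
print(adsr(0.1, 0.1, 0.2, 0.6, 0.5, 1.0))  # 1.0 (peak, end of attack)
print(adsr(0.5, 0.1, 0.2, 0.6, 0.5, 1.0))  # 0.6 (holding at sustain level)
print(adsr(2.0, 0.1, 0.2, 0.6, 0.5, 1.0))  # 0.0 (fully released)
```

Routing this one curve to an amplifier, a filter cutoff, or an oscillator's pitch gives the volume, filter, and pitch envelopes respectively.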
Sine wave – A pure vibration at a single frequency. All complex waveforms can be built from sine waves.
Square wave – Instantly on for half of a cycle and instantly off for half of a cycle (50% on / 50% off) – contains only odd harmonics.
Pulse wave – A variable square wave (30% on / 70% off, etc.). This ratio is the wave’s duty cycle.
Sawtooth wave (ramp) – Rises gradually over the full cycle, then drops instantly back down – includes all harmonics, even and odd.
Triangle wave – Gradually on for half of a cycle and gradually off for half of a cycle – contains only odd harmonics, weaker than a square wave’s.
Noise – A signal containing many frequencies with equal intensities.
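The basic waveform shapes above can be written as one-sample functions of phase. A rough sketch (phase measured in cycles, 0.0 to 1.0; function names are mine):

```python
import math

def sine(phase):
    """Pure vibration: a single frequency."""
    return math.sin(2 * math.pi * phase)

def square(phase, duty=0.5):
    """Instantly on, instantly off; duty=0.5 is the 50%/50% square,
    other duty values give a pulse wave."""
    return 1.0 if (phase % 1.0) < duty else -1.0

def sawtooth(phase):
    """Rises gradually from -1 to +1 over one cycle, then drops instantly."""
    return 2.0 * (phase % 1.0) - 1.0

def triangle(phase):
    """Rises gradually for half a cycle, falls gradually for the other half."""
    return 1.0 - 4.0 * abs((phase % 1.0) - 0.5)

print(square(0.25), square(0.75))    # 1.0 -1.0
print(sawtooth(0.0), sawtooth(0.5))  # -1.0 0.0
print(triangle(0.0), triangle(0.5))  # -1.0 1.0
```

Sampling any of these at `phase = frequency * time` and sending the values to a speaker produces the corresponding tone.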
Finally, Ableton’s Operator
As you look at the user interface of Operator you should notice 9 sections as follows:
In the centre of the interface is a rectangular window called the display. Surrounding the display are 8 sections referred to collectively as the Shell.
Beginning at the bottom left corner of the interface and proceeding clockwise, you will see Oscillator A, Oscillator B, Oscillator C, Oscillator D, LFO, Filter, Pitch, Global.
The information contained within the Display will change depending on which section you have selected in the shell.
Ableton’s Operator is capable of utilizing three primary types of synthesis:
1. Additive Synthesis
2. Frequency Modulation
3. Subtractive Synthesis
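To give a feel for the first of these, here is a hedged illustration of additive synthesis (my own example, not Operator's algorithm): summing sine waves builds a more complex tone. Since a square wave contains only odd harmonics, summing odd harmonics at amplitude 1/n approximates one:

```python
import math

def additive_square(phase, num_harmonics=50):
    """Approximate a square wave by summing its odd sine harmonics
    (harmonic n at amplitude 1/n, for n = 1, 3, 5, ...)."""
    value = 0.0
    for n in range(1, 2 * num_harmonics, 2):
        value += math.sin(2 * math.pi * n * phase) / n
    return 4.0 / math.pi * value

# In the middle of the "on" half-cycle the sum is close to +1:
print(additive_square(0.25))
```

Frequency modulation instead uses one oscillator's output to modulate another's frequency, and subtractive synthesis starts from a harmonically rich wave (like the sawtooth or square above) and carves harmonics away with a filter.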
End Of Part 1
Dubstep is a genre of electronic dance music that originated in South London, England. It emerged in the late 1990s as a development within a lineage of related styles such as 2-step garage, dub, techno, drum and bass, broken beat, jungle, and reggae. In the UK the origins of the genre can be traced back to the growth of the Jamaican sound system party scene in the early 1980s. The music generally features sparse, syncopated drum and percussion patterns with bass lines that contain prominent sub bass frequencies. (https://en.wikipedia.org/wiki/Dubstep)
Sonically, Dubstep can be characterized by its moderate tempo, triplet beat and “wobbly” bass sounds, which I will demonstrate during this lesson on Dubstep production techniques. During this class we will explore how to use Ableton’s “Operator” for our bass, Drum Rack for our beat, as well as Auto Filter for a little extra control.
Generally, Dubstep is produced around 140 BPM.
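The classic “wobble” is an LFO modulating a filter, with its rate synced to the tempo. As a rough sketch (my own arithmetic, not from the lesson), common beat divisions at 140 BPM map to LFO rates in Hz like this:

```python
def lfo_rate_hz(bpm, beats_per_cycle):
    """LFO frequency when one full sweep lasts the given number of beats."""
    seconds_per_beat = 60.0 / bpm
    return 1.0 / (seconds_per_beat * beats_per_cycle)

BPM = 140  # the typical dubstep tempo noted above
for division, beats in [("1/4 note", 1.0), ("1/8 note", 0.5), ("1/16 note", 0.25)]:
    print(division, round(lfo_rate_hz(BPM, beats), 3), "Hz")
```

Switching the LFO between divisions like these while a bass note sustains is what produces the characteristic rhythmic wobble.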