This is a purely graphical waveshaper. It's still a DSP module, so no conversion between GUI and DSP data is needed. Although there is the Waveshaper2 (or 2B), if you're not mathematically inclined, or you want to allow your plugin's users to change the waveshaper's function, then this is by far the easiest solution. It's very intuitive: you just click on a node (one of the small green squares) and drag it until you get the result you want. You can see below how dragging the nodes changes the output waveform.
You just need to be aware that moving the centre node (0,0) can cause clicking and unwanted DC offset voltages (for audio you could add a high-pass filter with a cut-off frequency of about 50 Hz to remove the DC component). When dealing with LFOs or modulation envelopes you really do want to avoid moving the (0,0) node. Another advantage is that because your plug-in's users can't enter values into the waveshaper, they can't cause the output to exceed the default +/-5 Volt audio range. The best way to learn this module is to connect it up and try dragging the nodes to see the result.
A waveshaper distorts or modifies the input voltage according to a transfer function that you specify. Waveshapers can be used as a distortion unit, soft clipper, waveform modifier, or control voltage transfer function (e.g. a velocity response curve); they are, to say the least, versatile.
Waveshaping is also a popular synthesis technique that turns simple sounds such as a sine wave into more harmonically complex sounds. A guitar fuzz box is an example of a very basic waveshaper. The unamplified electric guitar signal is fairly close to a sine wave; the fuzz box amplifies it through an amplifier designed to clip at moderate signal levels, flattening the peaks of the signal. A clipped signal has many more high-frequency harmonics added to its spectrum. Sounds that have passed through any type of waveshaper will have a lot more high-frequency harmonics, which gives them a "richer" sound.
Table based Waveshaping.
As you can probably imagine, doing all these calculations in real time at audio frequencies is a lot of work for the computer, so we generally pre-calculate these polynomials and put the results in a table. Then, when we are waveshaping sounds, we just take the value of the audio input and use it to look up the answer in the table. In computer programming this is called optimization, and it greatly reduces the load on your CPU. One big advantage of using a table is that regardless of how complex the original equations were, it always takes the same amount of time to look up the answer. You can even draw a function by hand, without using any equation, and use that hand-drawn curve as your transfer function.
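As a sketch of the lookup idea, here's a minimal Python illustration. The tanh saturation curve and the 1024-entry table size are assumptions chosen purely for the example, not what SynthEdit uses internally:

```python
import math

TABLE_SIZE = 1024

# Pre-compute the transfer function once, over the input range -5..+5 volts.
# Here the curve is a tanh soft clipper, but any function -- even one drawn
# by hand -- could fill the table instead.
TABLE = [5.0 * math.tanh(-5.0 + 10.0 * i / (TABLE_SIZE - 1))
         for i in range(TABLE_SIZE)]

def shape(x):
    """Return the shaped output for input voltage x, via table lookup."""
    x = max(-5.0, min(5.0, x))                    # clip the input to +/-5 V
    index = round((x + 5.0) / 10.0 * (TABLE_SIZE - 1))
    return TABLE[index]
```

However expensive the original function was to evaluate, `shape()` is now just one index calculation and one list access per sample.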
The SynthEdit Waveshaper2 module.
The Waveshaper 2 module is an in-out module where you specify the transfer function either in the text box on the module, or in the module's properties panel. Changing the formula also changes the curve in the display that represents the transfer function.
Note: Input levels: the input voltage is internally clipped at +/-5 Volts. While it won't do any harm to exceed these values, the results won't bear any relation to what the formula predicts. Output levels: these may be higher or lower than the default SynthEdit level of +/-5 Volts, depending on the transfer function specified. We can adjust the formula to compensate by changing the multiplier at the beginning of the transfer function (shown in bold): Output = 5*sin(x/PI)
A simple Waveshaping Formula.
A waveshaper in DSP form can be described as a function that takes the original input signal x, and applies a mathematical formula to the input thus producing a new output signal y. This function is called the transfer function. A simple example in SynthEdit is shown below:
Here we use a simple equation: Output = 5*sin(x/PI); the important part (the transfer function) is shown in bold. We merely multiply the result of the transfer function by 5 to restore the output to the normal SynthEdit +/-5 Volt peak-to-peak audio level. In order to change the shape of the waveform (and not just make it bigger or smaller), the function must be nonlinear, which means it uses exponents greater than 1, or functions such as sines, cosines, exponentials, and logarithms. You can use almost any function you want as a waveshaper, but the most useful ones output zero when the input is zero; that's because you usually don't want any output when there is no input, which would result in unwanted clicking noises.
If we change the equation slightly to 5 * sin((x*1.2)/PI) (the bold section is the change we insert), then the output goes from having its peaks flattened out to a small amount of foldback. The greater the number we multiply x by, the more the waveform is folded back on itself; see below, where it's increased to x*1.8.
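These formulas are easy to experiment with outside SynthEdit too. Here's a Python sketch of the same transfer function, with the multiplier on x written as a `drive` parameter (the name `drive` is my own label for this illustration):

```python
import math

def waveshape(x, drive=1.0):
    """Apply the article's transfer function: 5 * sin((x * drive) / pi).

    With a +/-5 V input, drive = 1.0 just flattens the peaks;
    drive = 1.2 or 1.8 pushes the argument past pi/2, folding
    the waveform back on itself.
    """
    return 5.0 * math.sin((x * drive) / math.pi)
```

At the +5 V input peak, drive = 1.0 gives an output of almost exactly +5 V (the flattened top), while drive = 1.8 gives only about +1.4 V because the sine has folded back past its own peak.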
Increasing the input voltage: what happens if we exceed the usual SynthEdit audio levels? With the basic function we used in the first example you might expect the output to start folding back as in the last example, but it doesn't; it just goes into "hard clipping". To demonstrate, I put an amplifier with a gain of 3 between the signal source and the waveshaper; you can see the result below.
And if we take the function that gave us the foldback, 5 * sin((x*1.8)/PI), and increase the input above the default level, then the following happens:
The reason for this is that the input of the waveshaper modules in SynthEdit is internally limited to the default 10 Volts peak to peak, so when we reach the clipping voltage the waveform loses its sine-like shape and is clipped. To ensure correct operation we need to make sure the input never exceeds +/-5 Volts (especially when using external audio sources). Likewise, the result of the function must be scaled correctly to restore the default +/-5 Volt audio output that SynthEdit uses.
Need to convert a Sine wave to a Square wave? Then this formula will do the job for you: 5*sgn(x/pi)
If we want to “pull” a sine wave into more pronounced peaks the following formula works well: 0.15 * sinh(x/1.2)
This also has a useful effect on sawtooth and triangle waves, so you can see how waveshaping can also be useful for manipulating envelopes and control voltages, provided we pay close attention to the input and output voltage swings.
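For reference, both of these formulas can be sketched in Python. The `sgn` function is written out by hand, since Python's standard library has no sign function:

```python
import math

def sine_to_square(x):
    """5 * sgn(x/pi): turns any bipolar wave into a +/-5 V square wave."""
    return 5.0 * (1.0 if x > 0 else -1.0 if x < 0 else 0.0)

def pull_peaks(x):
    """0.15 * sinh(x/1.2): exaggerates the peaks of a +/-5 V wave.

    sinh grows much faster near the extremes than near zero, so
    mid-level samples are pulled down while the peaks stay near +/-5 V.
    """
    return 0.15 * math.sinh(x / 1.2)
```

A +5 V input comes out of `pull_peaks` at roughly +4.8 V, while a +2.5 V input comes out below +0.6 V, which is exactly the "pronounced peaks" effect described above.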
At its most basic level, a multi-tap delay is a delay effect with multiple outputs, or "taps". Each tap can be set to a different delay time and a different level, meaning the delayed sound is played back multiple times at different intervals and levels. This differs from a traditional delay effect, which typically has a single output with a set delay time and a gradually decaying repeat. In the analogue world this would traditionally be done either with a tape delay using multiple playback heads, or with multiple BBD delay chips.
Multi-Tap delays in VST effects.
In a digital setting, multi-tap delay is achieved by creating multiple delay lines within a delay unit or plugin. Each of these delay lines can be adjusted together or independently, allowing you to control the delay time, feedback, panning, and even the tonal quality of each tap.
One of the main characteristics of multi-tap delay is its ability to create complex rhythmic patterns. By setting different delay times and audio levels for each tap, you can craft intricate rhythmic structures that add movement and depth to a mix. This can be particularly effective in genres that rely heavily on rhythm, such as electronic and dance music.
Additionally, multi-tap delay can be used to create unique sonic textures. By manipulating the feedback, panning, and tone of each tap, you can create a wide range of effects, from swirling, stereo-enhanced delays to dense, reverb-like textures.
A simple delay is an effect that introduces a single repetition of the signal, delayed in time and decaying slowly in amplitude, whereas a multi-tap delay is a sequence of simple delays. The output of each simple delay feeds into the next simple delay and back into the input. This creates an echo, but not one that simply decays steadily over time: the multi-tap delay's echoes may decay, but then peak at some points, or may contain echoes of the echo. A multi-tap delay can be simulated with a ready-made tapped delay line, but we don't have one in the standard SynthEdit modules (although there is a third-party module from TD Modules). Adding a large number of taps, especially with feedback, creates a very complex effect.
Creating our Multi-Tap Delay with SynthEdit
A basic Multi-Tap effect (without controls, feedback and mixing for simplicity) is shown below:
The Multi-Tap Delay: this is just a chain of Delay2 modules connected in series, with each delay tapped to send its signal to an output (Tap 1, Tap 2, etc.).
The Mixer:
The Feedback control: Nothing too special here, except that the 1 Pole LP has its Frequency scale set to 1kHz per Volt in the module properties.
The Delay Effect in Full:
In the structure the only point of note is that the cutoff control slider for the Low Pass Filter in the Feedback container has its minimum set to 0.5 V, corresponding to 500 Hz, and its maximum set to 5 V, corresponding to 5 kHz (this isn't critical, however, and can be changed to taste). The filter in the feedback module provides the sort of frequency response degradation you would expect to find in most analogue or BBD delay effects pedals, where each time the audio "goes around" the loop, high frequencies are progressively lost. The left and right channels are both fed through the same delay line and split again at the output; however, if you wanted a more complex stereo effect you could duplicate the structure and connect the copy to the second audio channel, giving a delay with the left and right channels controlled separately from each other.
If you want, the effect can be made very complex, by using panning to adjust the position of each tap in the stereo field, filtering the output of each tap individually... it's limited only by your imagination.
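The tap chain itself is simple to sketch in code. Below is a hypothetical Python illustration of a feed-forward multi-tap delay working in samples rather than milliseconds; tapping a series chain of Delay2 modules is equivalent to summing independently delayed copies of the input, which is what this does (no feedback or mixing stage, as in the basic structure above):

```python
def multitap(signal, tap_delays, tap_levels):
    """Mix the input with delayed, scaled copies of itself.

    tap_delays are in samples; tap_levels are linear gains per tap.
    """
    out = list(signal)                       # start with the dry signal
    for delay, level in zip(tap_delays, tap_levels):
        for n in range(delay, len(signal)):
            out[n] += level * signal[n - delay]
    return out
```

Feeding in a single impulse shows each tap emerging at its own time and level, which is exactly the behaviour of the tapped chain.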
A ping-pong echo is much like a normal echo, except that it's a stereo effect in which the left signal feeds back to the right channel and the right signal feeds back to the left channel, bouncing the signal back and forth across the stereo field; hence the term "ping-pong" delay. Shown below is a schematic diagram of a cross delay.
Creating our Ping-Pong echo in SynthEdit.
To create a ping-pong echo in SynthEdit we must use an external feedback path rather than the Delay2 module's internal feedback circuit, and as you know, if you try to create this structure with a normal feedback path you will get the error message: "This patch contains a FEEDBACK path, Please remove." We get round this problem by using a pair of Feedback – Volts modules to create feedback paths that are acceptable to SynthEdit. See the layout below:
For the sake of simplicity I have kept to one set of controls for both left and right channels, but there’s nothing stopping you from having separate controls for left and right channels. Of course you can build on this to add HF Damping into the feedback loop, displays to show the delay time etc.
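The cross-feedback idea is compact enough to sketch in code. This hypothetical Python fragment works in samples and, as in the layout above, uses one shared delay time and feedback amount for both channels:

```python
def ping_pong(left, right, delay, feedback):
    """Cross-feedback stereo delay.

    Each channel's delayed output is fed back into the OPPOSITE channel,
    so an echo bounces left-right-left across the stereo field.
    delay is in samples; feedback is a linear gain between 0 and 1.
    """
    out_l, out_r = [], []
    for n in range(len(left)):
        delayed_l = out_l[n - delay] if n >= delay else 0.0
        delayed_r = out_r[n - delay] if n >= delay else 0.0
        out_l.append(left[n] + feedback * delayed_r)   # right feeds left
        out_r.append(right[n] + feedback * delayed_l)  # left feeds right
    return out_l, out_r
```

An impulse sent to the left channel only comes back first on the right, then on the left, and so on, each bounce reduced by the feedback gain.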
The Delay2 module is a handy module which introduces a delay into an audio signal, with a minimum of 0 seconds and a maximum of 10 seconds delay time. To control the delay time, you use the module's Modulation input to adjust the Delay Time parameter set in the module's properties view. The Delay Time parameter sets the maximum delay time of the module; this maximum can then be modulated, or adjusted, from 0 to its full value using the Modulation input. This allows you to make an adjustable delay by connecting a slider, or, by modulating the input with another control voltage source such as an oscillator module, to create effects such as flanging and chorus.
Very basic delay effect.
To set up a delay time that you can vary from 0 to 1 second:
1) Connect a slider module to the Delay module's Modulation plug.
2) Go to the delay module's properties panel and make sure the Delay Time value is at the default of 1.0 second.
The Feedback plug gives you control over the internally generated feedback level, between 0% (0 Volts) and 100% (10 Volts). Note: the Interpolate Out option is only needed when there will be rapid changes in the value on the Modulation plug, to prevent unwanted clicks and stepping effects. If you're not modulating the plug with an envelope or LFO, leave it un-ticked to save CPU power.
This works, and it's relatively easy to work out the delay time from the slider's readout: 10 V gives us a 1 second delay, 5 V gives 0.5 seconds, and 0 V gives 0 seconds, but it's not the best of solutions.
Adding a delay time display in ms.
Normally, delay times are given in milliseconds, so how do we get our slider to display the actual delay time in milliseconds? Using the structure below it's quite easy:
1) Set the maximum value of the delay slider to 1000, leaving the minimum at 0.
2) Connect a Multiply module between the slider and the Delay2 Modulation plug. Set Input 2 to 0.01, either in the properties panel or using a Fixed Volts module to supply the value.
3) For the display, use a Volts to Float2 converter, feeding the float value to a PatchMemory Float Out3 where it's converted from DSP to GUI data.
4) The GUI float value is then converted to text using a Float To Text module; mine is set to 0 decimal places, but it all depends on your delay time range and how accurate you prefer to be.
5) Untick the Writeable property for the Text Entry4 module, as changing this value won't affect the delay time (being able to edit it would only frustrate your plug-in's users).
We now have a readout in milliseconds. You could use the readout on the slider control instead, but that has a small drawback: by default it adds three decimal places, and with the sort of values we're using the display is cut off at one end.
How our ms display looks:
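The arithmetic behind the readout is just a pair of linear scalings, sketched here in Python (the 0.01 multiplier and the 1-second Delay Time are the values used in the structure above):

```python
def slider_to_modulation_volts(slider_ms):
    """Scale a 0-1000 ms slider reading down to the 0-10 V modulation
    range: this is the Multiply module with Input 2 set to 0.01."""
    return slider_ms * 0.01

def modulation_volts_to_ms(volts, max_delay_seconds=1.0):
    """What the display shows: 10 V corresponds to the full Delay Time
    (1 second here), so each volt is one tenth of that, in milliseconds."""
    return volts / 10.0 * max_delay_seconds * 1000.0
```

So a slider reading of 250 becomes 2.5 V of modulation, which the Delay2 module turns into a quarter of the 1-second maximum: 250 ms, matching the displayed value.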
Adding damped feedback to an echo effect in SynthEdit.
Again, the project above is a good basic echo, but in real life we would often find that the sounds being "echoed" gradually change: they lose some of their high-frequency content. What can we do about this? With the structure below we can add an external feedback loop that includes a 1-pole low-pass filter to give us variable HF damping. Because we are using a Feedback (Volts) module we don't get error messages about feedback loops not being allowed, and the slight delay introduced by the module has no real effect on the feedback. In practice we won't want the filter to go as low as 0 Hz, so we could set the filter to kHz per Volt and the slider range from, say, 0.2 to 8 Volts, giving a range of 200 Hz to 8 kHz. You'll need to set some decimal places on the Float To Text conversion module to get a readout of both Hz and kHz. Note: when testing in SynthEdit, because the displays are GUI modules you'll only see the delay time and damping frequency displays update while the Audio Engine is running.
How it looks with the filter control and frequency readout.
A bitcrusher is a low-fidelity effect used to emulate the distortion introduced by older samplers and wavetable synthesizers. Bitcrushers work by reducing the sampling rate (the frequency at which the audio signal is sampled), or by reducing the bit depth of the samples (the number of bits used to digitally represent the amplitude of the audio waveform). These reductions can be used independently or together. A bitcrusher will, by virtue of the way it operates, introduce digital noise and aliasing into the audio; this is deliberate, as it emulates the sound of older low-resolution audio samplers and wavetable synthesizers.
Bitcrusher options in SynthEdit: there are currently three options:
1) A third-party module by Elena Novaretti in her ED DSP module pack, the ED Crusher, which (personal opinion) I think is currently the best option. It has the edge on the community modules' bitcrushers in that it allows reduction of both bit depth and sampling rate together.
2) The BitReducer in the community modules.
3) The BitCrusher in the community modules.
A basic SynthEdit Bitcrusher design.
The 1-pole LP at the input is fixed at 15 kHz, and is included to band-limit the input signal, preventing excessively high frequencies entering and causing unnecessary aliasing or distortion. The sampling frequency (X Quantize) is controlled via the Volts to Float2 module; the properties for this module are set to Volts DC (Fast), with an update rate of 10 Hz. The amplitude quantization (Y Quantize) setting is controlled via a RoySwitchL(Int) and a set of fixed integer values (4, 8, 16, 32), selectable via a drop-down list. The LP filter at the output is not an attempt to reduce aliasing (once you introduce aliasing you can't get rid of it, and it's part of what we are re-creating); it's there to reduce the high-frequency output to get closer to the lo-fi sound of old samplers and wavetable synthesizers.

Note: all bitcrushers produce noise and aliasing, because you are reducing the sampling rate and thus lowering the Nyquist frequency.

Note: you can put chords through a bitcrusher, but it sounds much better with single notes. Once you start putting chords, or harmonically complex signals, through this effect, the resulting audio quickly becomes a noisy mess (very much like using an ordinary fuzz box). If you're using this effect in a synthesizer plugin to recreate the sound of a vintage 8-bit wavetable synthesizer, it's much better to have one bitcrusher per voice than one bitcrusher shared by all the voices.

Will oversampling improve the sound from a bitcrusher? In short, no. You will be wasting CPU resources on something that won't help create the intended lo-fi sound, and you won't reduce the noise or aliasing noticeably. I also have to ask why you would want to increase the quality when the aim is to create a vintage lo-fi sound.
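Both halves of the effect, bit-depth reduction and sample-rate reduction, can be sketched in a few lines of Python. This is a hypothetical illustration: the sample-and-hold scheme and the +/-5 V range are assumptions for the example, not the exact internals of the ED Crusher or the community modules:

```python
def bitcrush(signal, bits=8, rate_divide=4):
    """Crush a +/-5 V signal.

    bits: the 10 V range is quantized to 2**bits amplitude levels
          (Y Quantize in the structure above).
    rate_divide: each sample is held for this many sample periods,
          emulating a lower sampling rate (X Quantize).
    """
    step = 10.0 / (2 ** bits)
    out = []
    held = 0.0
    for n, x in enumerate(signal):
        if n % rate_divide == 0:               # only re-sample every Nth sample
            held = round(x / step) * step      # snap amplitude to the bit grid
        out.append(held)                       # hold it in between
    return out
```

With `bits=2` the grid step is 2.5 V, so a slowly rising input collapses into coarse stair-steps held across pairs of samples, which is precisely where the characteristic noise and aliasing come from.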
Quantization is used to constrain a control voltage to discrete steps. The module takes the amplitude of the input voltage and breaks it down into the steps specified by the voltage on the Step Size control plug. Quantization is used on control voltages such as pitch and filter cut-off voltages (there's no reason why you can't apply it to audio, but the correct module for audio is a bitcrusher, which gives you better control). Quantization is often used for generative and ambient music. The SynthEdit Quantizer will work on audio signals, progressively reducing a sine wave from the original smooth shape, through a stepped sine wave, to a pulse waveform, which is quite different in effect from a bitcrusher (bit reducer). Note: all a quantizer will do when applied to audio is introduce distortion; it does not introduce any frequency-changing or shifting effects.
Plugs. Left Hand Side:
Signal In:- (Voltage) Control voltage input signal.
Step Size:- (Voltage) Quantization step size in Volts.
Right Hand Side:
Signal Out:- (Voltage) Quantized Control Voltage output.
The screenshot below shows the effect of quantizing a +/-5 Volt sine wave. Note: in some cases the peak-to-peak voltage of the output will actually be greater than the +/-5 Volt input, to accommodate the correct voltages between each step.
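The quantizer's behaviour amounts to rounding to the nearest multiple of the step size, which can be sketched in one line of Python:

```python
def quantize(x, step_size):
    """Constrain voltage x to discrete steps of step_size volts
    by rounding to the nearest multiple of the step."""
    return round(x / step_size) * step_size
```

Note that because the nearest step can sit above the input, the output can land outside the +/-5 V input range, as mentioned above.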
Using a single Delay2 for a very basic reverb effect is all well and good, but it has its limitations. One of these is that at the short delay times we are using there can be noticeable peaks and troughs in the frequency response; what we are setting up is basically a comb filter, so we can get ringing on transients at certain frequencies, and also a "drainpipe"-like effect on other sounds. Not good. So what's the answer?
The Schroeder Model of Reverb.
If we put a sharp transient sound such as a handclap into a delay line, exponentially decaying impulses follow the first impulse. Though this is similar to an exponentially decaying reverb, peaks occur in the frequency response at equally spaced frequencies, like the teeth of a comb, hence the name comb filter.
This results in a ringing, metallic sound, especially once feedback is added into the mix. The digital audio pioneer Manfred Schroeder proposed using parallel delays with differing delay times, in combination with all-pass filters, for digital reverb. His design used four parallel delays feeding two serial all-pass filters. The delays create the reflections, and the all-pass filters "smear" any transients, making the reverb more diffuse and reducing resonances. Schroeder's design is shown below in block form. Each comb is a Delay2 module.
6 x Delay container: a set of six Delay2 modules pre-set to differing delay times within the range of 10 to 60 ms, which is the ideal range for reverb; any longer and we start to get an echo effect, any shorter and there's no noticeable reverb effect. All the outputs from the Delay2 modules are fed into Input 1 of a Divide module; this is because if we simply summed all the signals into a module they would more than likely exceed its input range and cause clipping. Input 2 of the Divide module is set to 6 (the number of delay modules) to restore the normal signal range. Various calculations have been made for the "ideal" delay times, but in practice a natural reverb depends on the shape and size of the space and the objects in it. One published set of "ideal" delay times is 50, 53, 61, 68, 72, and 78 ms.
All Pass container: the All Pass container uses two more Delay2 modules in series, one set to 17 ms and another to 74 ms; these two delays give an added "blurring" effect to any transients. A 1-pole low-pass filter provides variable HF damping on the reverb "tail".
Feedback container: the feedback container has a Feedback – Volts module to allow feedback. The combination of the inherent time delay in a SynthEdit feedback loop and the two all-pass filter modules adds further frequency-dependent phase shifting, which reduces ringing and metallic-sounding transients. Again, the frequencies of the all-pass filters are fairly arbitrary, but spaced out to give maximum effectiveness in reducing unwanted audio artefacts.
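The whole Schroeder structure (parallel combs averaged, then all-passes in series) can be sketched in Python. This is a hypothetical illustration only: the delays are in samples rather than milliseconds, four combs are used instead of six, and the comb feedback of 0.7 and all-pass gain of 0.5 are arbitrary example values, not the settings used in the containers above:

```python
def comb(signal, delay, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    y = []
    for n, x in enumerate(signal):
        y.append(x + (feedback * y[n - delay] if n >= delay else 0.0))
    return y

def allpass(signal, delay, gain):
    """Schroeder all-pass: y[n] = -g*x[n] + x[n-d] + g*y[n-d].
    Smears transients in time without colouring the spectrum."""
    y = []
    for n, x in enumerate(signal):
        xd = signal[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y.append(-gain * x + xd + gain * yd)
    return y

def schroeder(signal, comb_delays=(50, 53, 61, 68), feedback=0.7):
    """Parallel combs, averaged (the Divide module), into series all-passes."""
    combs = [comb(signal, d, feedback) for d in comb_delays]
    mixed = [sum(c[n] for c in combs) / len(combs)
             for n in range(len(signal))]
    return allpass(allpass(mixed, 17, 0.5), 7, 0.5)
```

Feeding an impulse through `schroeder` produces the direct sound followed by an irregular, decaying cloud of reflections rather than the evenly spaced repeats of a single comb.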
BBD Reverbs. First there were plate and spring-line reverbs, then along came an electronic "chip" called a Bucket Brigade Delay (BBD). This used high-impedance CMOS devices and capacitors to pass the electric charge (the audio) along a line, each stage adding a small delay. These worked well, and eliminated the problem of external noise entering the signal chain. However, they introduced their own artefacts, such as aliasing at high frequencies (filtering was used to restrict this) and a certain amount of distortion, which gave them their own unique lo-fi sound. By using multiple chips with different delay times you could get some quite complex reverbs.
Delay2.
Creates a delay (echo or reverb) effect on audio signals, very similar to BBD chips. Note: if you intend to modulate the delay time, you should enable interpolation; this reduces clicks and "zipper" noise.
Plugs. Left Hand Side:
Signal In:- (Voltage) Input signal.
Modulation:- (Voltage) Varies the delay time dynamically (0 to 10 V).
Feedback:- (Voltage) Controls the amount of feedback of the delayed signal.
Right Hand Side:
Signal Out:- (Voltage) Audio output signal.
Parameters:
Delay Time (secs):- Maximum delay time in seconds, limited to a maximum of 10 seconds.
Interpolate Output:- Provides smoother modulation of delay time, but with an increase in CPU load.
Basic BBD-style single-delay reverb effect. We can emulate this in SynthEdit with a module which takes the audio input and, much like the old BBD chips, slices the audio into samples and passes them down a delay line. Shown below is our very basic reverb effect.
Delay Time:- Set the delay time to 0.01 seconds (10 ms) in the properties.
Delay control:- Set the maximum voltage for the delay slider to 0.06 volts; this gives us a delay time range of 10 ms to 70 ms.
Feedback:- Set the maximum voltage for the Feedback slider to 8 volts; we don't want the absolute maximum feedback level, as it would just give uncontrolled oscillation.
Wet/Dry:- This control defaults to a range of -5 to +5 volts. Leave this as it is to give the full range of wet to dry effect.
There’s one important point to note, if this reverb is being included as part of a Synthesizer VST we need to think about polyphony. It really isn’t needed for a reverb effect so we need to containerise our reverb, put a Voice Combiner module in place in the input- the voice combiner forces then the Modules in this container to stay Monophonic. Otherwise every time you play more than one note on your Synth plug in it will create an un-necessary clone of the reverb module wasting CPU and memory.
Making the Reverb Stereo.
Converting to stereo is easy: we just create two identical reverb chains, one for the left channel and one for the right. Keep the two delay and feedback levels linked, otherwise some very peculiar (and unwanted) phase and stereo imaging effects can be introduced.
Limitations of basic Delay reverbs.
This structure is OK for a very basic reverb effect, but it has its limitations. One of these is that at the short delay times we are using there can be noticeable peaks and troughs in the frequency response; what we are setting up is basically a comb filter, so we can get ringing on transients at certain frequencies, and also a "drainpipe"-like effect on other sounds. Not so good. So what's the answer? Fortunately there is one, devised by Manfred Schroeder and known as the Schroeder model of reverb.
Reverberation, or reverb, in music is a persistence of sound, or a very short echo, after a sound is produced. Reverb is created when a sound is made inside an enclosed or semi-enclosed area: the reflections from the enclosing walls build up into complex overlapping sounds and then decay as the sound is absorbed by the objects in the space, which could include furniture, people, the walls themselves, etc. This is most noticeable when the sound source stops and the reverberation continues, its volume decreasing until it reaches zero.

Reverb is pitch dependent, as sounds of different pitches arrive back at the listener at slightly different times, and unless the room is very carefully designed it will have a resonant frequency at which sounds are emphasized. In comparison to a distinct echo, which usually arrives 50 to 100 ms after the original sound, reverb is made up of reflections that arrive in less than about 50 ms. As time passes, the level of the reflections gradually falls to non-noticeable levels. Anywhere there are surfaces to reflect sound from, you get reverb; the more complex the shape of the room, and the more objects in it, the more complex the reverb. Certain frequencies can be boosted, and others cut, due to the wavelength of the sounds interacting with the size of the room (resonance). Reverb may be created through physical means, such as echo chambers, or electronically through audio signal processing. Various means of achieving a reverb effect are listed below.
Echo chambers.
The first reverb effects, introduced in the 1930s, were created by playing recordings through loudspeakers in medium to large spaces (empty rooms, bathrooms, and even stairwells have been used) and mixing the sound picked up by strategically placed microphones with the original. The American producer Bill Putnam is credited with the first artistic use of artificial reverb in music, on the 1947 song "Peg o' My Heart" by the Harmonicats. Putnam placed a microphone and loudspeaker in the studio bathroom to create an echo chamber, adding an "eerie dimension". The next two examples of reverb (spring and plate) are included just to give some historical background, as we are going to be using digital signal processing to achieve our reverb effects.
Plate reverb.
A plate reverb system uses an electromechanical transducer, similar to the driver in a loudspeaker, to create vibrations in a large plate of sheet metal. The plate's motion is picked up by one or more contact microphones, and the audio signal from these is then mixed with the original "dry" signal. Plate reverb was introduced in the late 1950s by Elektromesstechnik with their EMT 140 design. The greatest problem with plate reverb units is their size and weight, which limits their use to recording studios.
Spring reverb.
Spring reverbs, introduced from work at Bell Labs, use a set of springs mounted inside a box. They work in a similar way to plate reverb, with a transducer at one end of the spring and a pickup at the far end; to reproduce more realistic reverb, different length springs could be mixed together (longer spring = longer delay time). They can have a very distinctive "twangy" sound with loud percussive material, and one major drawback is sensitivity to outside vibration. The American engineer Laurens Hammond of the Hammond company was granted a patent on a spring reverb system in 1939, and spring reverb was first used by the Hammond company to add reverb to Hammond organs. Spring reverbs became popular in the 1960s with guitarists, including surf musicians such as the Beach Boys, as they could easily be built into guitar amplifiers; they were also used by dub reggae musicians such as King Tubby.
Digital reverb.
Digital reverb units simulate reverb by using multiple delay lines with different delay times and variable feedback, giving the impression of sound bouncing off multiple surfaces. Some digital effects allow users to independently adjust early and late reflections. Digital reverb was introduced in 1976 by EMT with the EMT 250, and became increasingly popular with many groups and studios in the 1980s.
Gated reverb.
Gated reverb combines reverb with a noise gate, creating a "large" reverb sound with a short tail (the tail is cut short by the noise gate). It was pioneered by the English recording engineer Hugh Padgham and the drummer Phil Collins, and became a staple of 1980s pop music.
Convolution reverb.
Convolution reverb uses impulse responses to record the reverberation of physical spaces and recreate them digitally. The first real-time convolution reverb processor, the DRE S777, was announced by Sony in 1999. Convolution reverb is often used in film production, with sound engineers recording impulse responses of sets and locations so sounds can be added in post-production with realistic-sounding reverberation.

Basically, a convolution reverb takes an input signal (the sound to be reverberated) and processes it with the sound of an actual or virtual acoustic space, to create the illusion that the input was recorded in that space. The sound of the acoustic space is captured in what is called an impulse response (IR), which often starts as a recording of a short, sharp sound, such as the firing of a starter pistol or the bursting of an inflated balloon (the impulse), in the acoustic space in question. Such a sound excites the reverberation (the response) in the space, so the impulse response (or at least its initial recording) sounds like an explosion followed by the reverberating reflections created by the recording space. Once you have the IR for the space as a file, you load it into the convolution reverb and feed in the sound to be processed. The software then convolves the two digital audio signals together to create the output. Convolution itself is a mathematical process with many applications, including statistics, image processing, and electrical engineering as well as audio processing (it's a very complex subject, and understanding the precise details of how it works is not easy). If you like, you can think of convolution as a kind of multiplication of each sample of the input with each sample of the IR, with the result that the audio input takes on the sonic characteristics of the space in which the original IR was recorded.
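The convolution itself can be sketched directly in Python. Real convolution reverbs use FFT-based fast convolution for speed, but this direct form produces the identical result and makes the "scaled copy of the IR for every input sample" idea explicit:

```python
def convolve(signal, impulse_response):
    """Direct convolution: each input sample triggers a copy of the IR,
    scaled by that sample's amplitude; all the copies are summed."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for n, x in enumerate(signal):
        for k, h in enumerate(impulse_response):
            out[n + k] += x * h
    return out
```

A single impulse at the input simply reproduces the IR, which is why recording an impulse in a room captures that room's reverb.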
Shimmer reverb.
Shimmer reverb alters the pitch of the reverberated sound up or down by placing a pitch shifter in the delay line's feedback loop. Once any feedback is applied, the pitch of the reverberated sound steadily shifts up or down, depending on the amount of pitch change and the feedback level. Shimmer reverb is often used in ambient music; Brian Eno was one of the earliest adopters of this sound.