It was recently noticed (by Elena Novaretti) that the stock VU Meter is wildly inaccurate. It reads 0 dB when the input is at a level of -16 to -20 dB. The original structure (it’s a module that can be opened and modified, like a prefab) is shown below:
Compare the three meters shown below. The top one has had the Float Function module removed, the middle is a third party VU meter, the bottom one is the Stock VU meter (which is off scale when the others read -10 dB!):
Removing the Float Function module with its formula values (shown below) results in a meter that gives a fairly accurate result.
The working structure is shown below. Note: You might think “so I can just take out the Float Function and the dB to Animation, as the Volts to Float is already giving a dB VU output”? The answer is no, this won’t work correctly: we must keep the dB to Animation conversion module to get the correct (non-linear) meter scaling.
This module, although used by the Registration system for plug-ins, can be used for other purposes too. Credits and thanks for help with this module: I have to give credit and thanks to Jeff McClintock of SynthEdit for pointing me in the right direction with using this module, as I was making a rookie error connecting it up! Also many thanks to Sylvain Cottarel for providing the precise rules governing the checking and updating of the Preferences.xml file.
How the User Setting – Text module works.
This is what the SE help file has to say: “Stores any setting common to all instances of your plug in. For example if you want your samples always loaded from a user-specified folder.”
The module takes your settings and creates an XML file, which is then saved in a folder where SE needs no user permissions. The name entered into the Product plug in the properties panel gives the XML file its filename. Note: Once you have decided on the name for the settings, don’t change it; if you give the Product plug a new name, it will create another XML file and folder. For this reason the left hand side plugs cannot be updated with a string from another module. These values must be set manually in the properties panel when programming.
Multiple modules. You can use multiple User Setting – Text modules and they will append to a single XML file. This allows you to save product activation information and any other user settings that need to be retained across plug-in sessions, regardless of any other parameters set by assigned presets.
Product. “Product” should be fed with the name of your VST, and this is the name of the XML file that this module will create and use to store your path (and other useful info if needed). Jeff has chosen a safe location where permission to save is never requested by Windows. You can use more of these User Setting – Text modules to store a wide variety of useful settings. Important Note: The Product pin must be set with exactly the same name as the plug-in name, or it won’t recall settings made to the XML properly for that plug-in.
Key. If we are using more than one module to store preferences for a number of parameters, e.g. to save colours so the user can “skin” the plug-in with their own panel colour, text colour, and other settings, then each preference type needs its own User Setting – Text module, each with a unique “Key” text string to separate the parameters in the XML file.
About the XML file and its location. You’ll find the XML file(s) created in: C:\Users\”USERNAME”\AppData\Local\”your vst name”\Preferences.xml. The file is re-created if it has been removed, or is not found.
1) On opening the VST, any User Setting – Text module looks in the AppData folder for a folder whose name equals its Product pin input (the name of the VST).
2) If the module finds this folder, it checks whether the folder contains a Preferences.xml file.
3) If a Preferences.xml is located, the module reads the file to see if a key is present corresponding to its Key pin input.
4) If the key is present, the value of the key is sent to the Value output pins (GUI and DSP).
5) If the key is not present, the module writes the value present on its Value GUI plug into the XML file, or, if no value is present there, it writes the value of its Default Value plug.
6) If the module doesn’t find the Preferences.xml file in the folder, or simply doesn’t find the folder, it creates a new folder named after its Product plug value, creates the Preferences.xml file inside, and then writes the key and value into the XML file following the previous rules.
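The six rules above can be sketched in code. This is only an illustration of the logic, not what SynthEdit actually runs internally: a hypothetical Python sketch assuming a Windows-style LOCALAPPDATA location, with names mirroring the Product, Key and Default Value plugs.

```python
# Hypothetical sketch of the Preferences.xml rules; not SynthEdit code.
import os
import xml.etree.ElementTree as ET

def read_or_init_setting(product, key, default_value):
    folder = os.path.join(os.environ.get("LOCALAPPDATA", "."), product)
    path = os.path.join(folder, "Preferences.xml")
    # Rule 6: create the folder and file if either is missing.
    if not os.path.isfile(path):
        os.makedirs(folder, exist_ok=True)
        root = ET.Element("Preferences")
    else:
        root = ET.parse(path).getroot()
    node = root.find(key)          # Rules 2-3: look for our key.
    if node is not None:
        return node.text           # Rule 4: key found, return its value.
    # Rule 5: key missing, write the default value and return it.
    node = ET.SubElement(root, key)
    node.text = str(default_value)
    ET.ElementTree(root).write(path, encoding="UTF-8", xml_declaration=True)
    return node.text
```

Calling this twice with different defaults returns the stored value the second time, which is exactly the persistence behaviour described above.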
Important note: just to make things 100% clear, the module does not know the location of the VST it is in, only where to find the relevant preferences file. This will always be correct, as the module always has the full path to the file wherever the VST is located.
Example “Skin” Setup.
The setup below allows the user to select the background colour (Rectangle), the text colour, the border (DAM Rectangle) colour, and the colour of the Vector Knob’s cap. The User Setting – Text module properties are set as follows: 1) Product = Skins Test 1, Key = Background 2) Product = Skins Test 1, Key = Text 3) Product = Skins Test 1, Key = Knob Cap.
I used the name Skins Test 1 as my example Product value. This will generate the following XML file, named Preferences.xml, in the folder Skins Test 1, i.e. at this location: C:\Users\"USERNAME"\AppData\Local\Skins Test 1\Preferences.xml.
<?xml version="1.0" encoding="UTF-8"?>
<Preferences>
  <My_Setting_Name>0</My_Setting_Name>
  <Background>4</Background>
  <Text>0</Text>
  <Knob_Cap>2</Knob_Cap>
</Preferences>
The number listed between each pair of key tags is the integer of the selected option.
Creating your settings: The List Entry 4 module is used to store the names of the options. So for a list of colour options you could set the Item List as: Light Grey, Medium Grey, Dark Grey, Red, Green, Blue. In the List To Text module you would set the Item List plug with the matching hex ARGB values: FFAAAAAA, FF777777, FF444444, FFFF0000, FF00FF00, FF0000FF. Note: The sequence of colour names and ARGB values must match if you don’t want to confuse your users. Note: Some modules may require a hash (#) symbol prefix on the ARGB code. Note: Don’t forget to use commas to separate the values. The last entry needs no comma or any other symbol after it.
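As a plain Python illustration of the matching-lists rule (nothing to do with SynthEdit’s internals), pairing the two Item Lists up by position makes it obvious why the order and count of the entries must match:

```python
# Illustrative only: name n must map to ARGB value n, exactly as the
# List Entry 4 / List To Text pair of Item Lists must stay in step.
names = "Light Grey, Medium Grey, Dark Grey, Red, Green, Blue".split(", ")
argb  = "FFAAAAAA, FF777777, FF444444, FFFF0000, FF00FF00, FF0000FF".split(", ")
assert len(names) == len(argb), "the two Item Lists must be the same length"
colour_for = dict(zip(names, argb))  # e.g. "Red" pairs with "FFFF0000"
```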
Connecting up the User Setting – Text module for saving user preferences.
Storing Float values. You could also use a Text to Float converter to control settings that need a Float value. For example, if you were using a DAM Text Enter module, which has a Font Size plug, you could store a list of font sizes such as 12, 14, 16, 18 in both the List Entry and List To Text modules. All you need do is connect the Item Text output plug of the List To Text module to the appropriate Float plug using a (GUI, obviously) Text to Float converter. Just be careful with the Float values: some plugs, such as the Font Size on the DAM Text Enter module, use the actual point size, while others may expect a value between 0 and 1. Using decimals such as 0.4 is no problem; the modules handle these values correctly. Note: Both the List Entry 4 and List To Text modules have the same Item List values in this scenario: 12, 14, 16, 18.
Using float values with the User Setting – Text module
Setting the Ignore PC/Ignore Program Change flag stops controls from responding to Patch Changes. This is useful if you are using pre-programmed patches in a Synthesizer, but want to exclude some controls such as a master volume control from responding to the patch change.
Where are IgnorePC and other parameters stored ?
The IPC parameters are stored in the same place as other plug-in parameters; it’s only changes to those parameters coming from the internal preset browser that are blocked by the IPC flag. Patch changes and parameters are handled by the Patch Automator module, so for IPC or patches to work this module must be contained within your plug-in.
Patch Automator module.
This module remembers the settings of the other controls. Adding this module into a container allows 128 different patches to be stored. Only the settings of the container’s sliders or other controls are stored, not the container’s structure. If connected to a synth’s MIDI input, it will respond to MIDI patch change messages, and will switch all controls in the same container to the new patch. The Patch Automator (formerly Patch Select) module has a hidden connection to each control in the container, hence you must not connect controls to the Patch Automator’s inputs (doing so would create a feedback loop). For the same reason, you must not connect sub-controls to the Patch Automator module.
Stock modules that support Ignore Program Change: Containers, Joystick, Knob, LED2, Mod Wheel, Peak Meter, Slider, Switch, Text Entry, VU Meter, Voltmeter, Waveshaper2, Waveshaper3, and Patch Memories (all). Any module, such as Image 2, that is placed inside a container can be set as IPC, so that your own sub-controls can be made to Ignore Program Changes.
Emulate ‘Ignore Program Change’
This option is shown when exporting your plugin. It enables the ‘Ignore Program Change’ feature to work in plugins (previously it worked only in the Editor). When you change a preset from the plugin UI, one or more controls will stay unchanged. This is useful for controls like ‘Master Volume’ or ‘Master Tuning’, which you want to apply to all presets. Note: This feature is activated a couple of seconds after loading a DAW session, so if you change presets immediately, you might not see the effects of Ignore-PC at first. Note: When you change a preset from the DAW preset browser, Ignore-Program-Change is disabled. This is so that when you recall a previously saved session in the DAW, the preset is restored exactly as it was when you saved the session.
IPC in V1.4: In SynthEdit Version 1.4, Ignore Program Change did not work in plugins (or in SynthEdit itself). IPC in V1.5: “In SynthEdit Version 1.5 we figured out a way to make it work in plugins again, but in order not to break your existing plugins we made it optional.” So if you made a plugin in SE 1.4, IPC didn’t work; with SE 1.5 you have the choice to use IPC, or leave it non-functional for backward compatibility.
SoundFont is a brand name which refers to a file format and its associated technology that uses sample-based synthesis to play audio from MIDI files via a soundcard. It was first used on the Sound Blaster AWE32 sound card as part of its General MIDI support. (The original GM SoundFont file is available from the website Musicalartefacts.com.) A SoundFont bank contains base samples in a Pulse Code Modulation format (the audio format most commonly used in WAV containers) mapped to sections of a musical keyboard. A SoundFont bank can also contain other music synthesis parameters such as loops, vibrato effect, and velocity-sensitive volume changes. SoundFont banks can conform to standard sound sets such as General MIDI, or use other sound-set definitions such as Roland GS and Yamaha XG. If you have the necessary software, along with the time and patience, you can also create your own SF2 files.
The structure of a basic SF2 Sample player is shown below. Apart from the standard ADSR/VCA there are three sections of note, shown with Blue, Green & Red backgrounds.
Note: This oscillator is only compatible with SF2 SoundFonts, it does not support SF3 or SFZ Soundfont file formats, neither does it support WAV or any other format of audio files.
Note: The SF2 Sample player is not really suitable for use in a true Wavetable Synthesizer.
Blue: SF2 Selection and Patch Loading
This section is where the main difference between a completely DSP synthesizer and an SF2 synthesizer resides. The required SF2 bank of sounds is selected from the File select menu. The individual patch to be played is then selected using the List Entry 3 – Patch module, which uses the GUI and DSP Sample Loader modules to load the required sample into the Sample Oscillator. Depending on the patch itself, it will either have an infinite loop (e.g. a string instrument sample) or a naturally decaying “single shot” (e.g. a guitar or piano sample).
Red: SF2 Sample Oscillator.
Oscillator is a bit of a misnomer (personal opinion) as it doesn’t create anything itself. The patch is loaded into the player, and if the sound is looped, it will play indefinitely; if the sound is naturally decaying, it will just play one single time. This is set in the patch itself, and cannot be altered at this point. We still need to instruct the oscillator what pitch to play the sample at, and there are also a Gate plug and a Trigger plug. The oscillator will still work without these plugs being connected, but you’ll have to re-start the audio engine to get the new patch to load.
Green: MIDI Control.
As usual we need a MIDI-CV2 module for Pitch, Trigger, Gate and Velocity signals (Aftertouch is not supported with the SF2 Oscillator). The one important difference here is that the MIDI Patch Automator must be included for the Selection and Loading of our samples. Note: If you are using Polyphonic mode the Allocation Mode must be set to Poly (Hard) to prevent clicking noises.
Apart from these points, you can treat the Sample Oscillator as any other SE Sound generator. The audio can be filtered, waveshaped and otherwise creatively altered. The output is Stereo, as some Soundfont patches are recorded in stereo.
The FAQ section on Soundfonts from the SynthEdit website.
Distributing Soundfonts, Wave, and MIDI Files.
Please read this section very carefully to prevent problems with “missing” soundfonts.
Often a VST will need to load one or more Soundfonts automatically. For example, your VSTs may use Soundfonts as built-in waveforms. On the end user’s PC, these Soundfonts need to be installed in the plugin VST’s folder. Example:
The SoundFont must be in the same folder as the VST plugin. Under ideal circumstances, you would create an installer that copies both the VST and the required SoundFont to the customer’s hard drive. Each plugin you create needs to be in its own folder so the plugin can correctly locate any resource files.
Relative Paths
Since you don’t know the end user’s plugin folder, store Soundfont filenames as a relative path. A relative path does not contain the full Drive or folder specification, e.g. “C:\SomeFolder\”.
Before you hit ‘Export-as-Plugin’, check that your Sample-Player module is using relative paths. Select the module. Locate any filename parameters (e.g. ‘Sample Loader2 Filename’) on the Properties panel (at far right). Check whether the ‘Value’ column contains a ‘relative’ filename like “Drums.sf2”. If it contains folder references (like “C:\Samples\Drums.sf2”) you must remove the drive and folder parts, i.e. change it to just “Drums.sf2”.
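The drive-and-folder stripping described above can be sketched as follows. This is an illustrative Python helper, not a SynthEdit feature:

```python
# Strip any drive and folder parts so only the bare filename
# ("relative path") remains, as required before exporting.
import ntpath  # handles Windows-style paths on any OS

def to_relative(filename):
    return ntpath.basename(filename)
```

For example, “C:\Samples\Drums.sf2” becomes “Drums.sf2”, and a filename that is already relative is left unchanged.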
Important Note: Simplifying the filenames can prevent them from loading in the SynthEdit editor because SynthEdit no longer knows what folder they are in. You can either ignore these warnings and revert the filenames after you have exported the plugin, or put the Soundfonts in SynthEdit’s default audio file location. Look in the menu “Edit/Preferences/File Locations/Audio Files” for this folder. This is where SynthEdit looks for files with relative filenames.
To have SE automatically use relative paths – You must set your default audio folder before you load the Soundfont, not after. Otherwise, SynthEdit will use the full path. Alternately: set the default audio folder, then un-load the SoundFont, then re-load the SoundFont.
If you follow these simple guidelines you should have no issues using SF2 SoundFonts or the Sample Oscillator.
This is a very basic overview of some subjects that seem to cause misunderstandings and confusion for some people. Sound is caused by an object vibrating, causing repeated compression and rarefaction of the air. These pressure waves impact on our eardrums and cause them to vibrate, sending small electrical impulses to our brain. The same happens with a microphone: a small diaphragm vibrates with the pressure waves and converts the vibration into electrical signals.
Illustration of sound/pressure waves in air.
Volume, loudness or Amplitude:
The difference between the compression and the rarefaction of the air determines how loudly we perceive the sound: the greater the difference in pressure, the louder the sound. Sound level is expressed in dB; the following list is a guide to loudness levels to give an idea of what this means to us:
0 dB – The softest sound a person with normal hearing can hear
10 dB – Normal breathing
20 dB – Leaves rustling, a ticking watch
30 dB – A whisper
40 dB – Refrigerator hum, a quiet office
50 dB – Moderate rainfall
60 dB – Normal conversation, dishwashers
70 dB – Vacuum cleaners, traffic
80 dB – Police car siren, a noisy restaurant; the level at which hearing damage can be caused by prolonged exposure
90 dB – Hairdryers, blenders, power tools
100 dB – Motorcycles, hand dryers
110 dB – Nightclubs, sporting events
120 dB – Thunder, concerts, a jet plane taking off
130 dB – Jackhammers, ambulances
140 dB – Fireworks, gunshot
Note: dB is a relative measurement: it expresses the ratio between two sound or signal levels, not an absolute voltage or wattage, so it’s not as simple as saying, for example, that 100 watts of amplifier power output is equal to 80 dB.
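As a worked example of dB being a ratio rather than an absolute unit, here are the two standard conversions in a small Python sketch: 20·log10 for amplitude ratios (voltages) and 10·log10 for power ratios (watts).

```python
# dB is a ratio, not an absolute unit: the two standard conversions.
import math

def amplitude_db(a, a_ref):
    # For amplitude-like quantities (voltage, sound pressure).
    return 20.0 * math.log10(a / a_ref)

def power_db(p, p_ref):
    # For power-like quantities (watts, intensity).
    return 10.0 * math.log10(p / p_ref)
```

So doubling a voltage is about +6 dB, while doubling a power is about +3 dB; without a stated reference level, a dB figure on its own says nothing about watts or volts.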
Frequency or pitch:
The spacing between the pressure changes determines the pitch, or frequency, of the sound we hear. The closer together these pressure changes are, the higher the pitch we hear. The human ear can detect these pressure changes when they fall between roughly 20 times per second (20 Hz) and 15,000–20,000 times per second (15–20 kHz); the upper limit of audibility varies from person to person. Pitch is the frequency translated into musical terms.
Harmonics:
In the image showing pressure waves and how they relate to a sound, we saw a sine wave: this is a pure audio tone of one frequency. There are few (if any, excluding a synthesizer) musical instruments that produce a true sine wave. There will be a fundamental frequency. This is what we hear as the pitch of the instrument; as an example we will use standard A, or 440 Hz. Below is our 440 Hz sine wave and its frequency spectrum.
A pure 440 Hz sine wave, note how there is only the fundamental frequency, and no others.
We may have an instrument whose tone is made up of the pitch we hear, 440 Hz, plus other components. In most instruments, such as a flute, these will be directly related to our 440 Hz pitch. They are called harmonics, and in most cases they will be at frequencies such as: 440 × 2 = 880 Hz, the 2nd harmonic; 440 × 3 = 1320 Hz, the 3rd harmonic; 440 × 4 = 1760 Hz, the 4th harmonic; 440 × 5 = 2200 Hz, the 5th harmonic. For each harmonic we multiply the fundamental frequency by the harmonic number, not the preceding harmonic. These harmonics will almost always be at a lower level than the fundamental. I have used these as an example below with decreasing levels; you can see the effect on the waveform.
The effect of adding harmonics to a sine wave.
In the example below the 4th harmonic has been increased in level above the 3rd harmonic. You can see how this has affected our waveform; it will have noticeably changed the timbre of our sound, but not the perceived pitch, as the fundamental is still the strongest of all the components.
The effect of increasing the strength of the 4th harmonic.
If we go as far as the 10th harmonic, then we can get a rough approximation of a sawtooth, with a bit of juggling with the levels.
Adding odd & even harmonics to produce a crude sawtooth
And by some juggling with the odd harmonics (3rd, 5th, 7th & 9th), keeping the even harmonics low, we get a crude approximation of a square wave.
Adding only odd harmonics produces a rough looking square wave.
Further juggling with even harmonics (2,4,6,8 & 10th) can get a wonky Triangle shaped wave.
Adding only even harmonics creates a wobbly looking Triangle wave.
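The partial-sum experiments above can be sketched in code. This is a hypothetical Python illustration of the textbook recipes for the first two shapes: a sawtooth uses every harmonic with amplitude falling as 1/n, and a square wave uses only the odd harmonics, also at 1/n.

```python
# Build rough approximations of the classic shapes from the first
# N harmonics of a 440 Hz fundamental (textbook Fourier recipes).
import math

def sawtooth(t, f0=440.0, n_harmonics=10):
    # All harmonics, amplitude falling as 1/n.
    return sum(math.sin(2 * math.pi * n * f0 * t) / n
               for n in range(1, n_harmonics + 1))

def square(t, f0=440.0, n_harmonics=10):
    # Odd harmonics only (1st, 3rd, 5th...), amplitude falling as 1/n.
    return sum(math.sin(2 * math.pi * n * f0 * t) / n
               for n in range(1, n_harmonics + 1, 2))
```

Plotting either function over one cycle reproduces the “juggled levels” experiments above; adding more harmonics makes the edges sharper.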
Why are the harmonics so important? Where do they come from in physical instruments?
Why are these harmonics so important? Without getting too technical, and going deep into the theoretical side, musical instruments are usually a resonant string or tube (very much over-simplified, but close enough). If we take a string and pluck it, we will get a strong vibration, the pitch of which is defined by the length of the string. However, the string will also have other modes of vibration related to its length.
How harmonics relate to the size of a string (or pipe)
Each of these is added to the fundamental, in decreasing amounts, depending on a range of variables such as: string tension, string diameter, the materials in the string, how hard the string is plucked, and what the ends of the string are attached to (not to mention the shape and size of the body of the instrument). The same principle applies to woodwind and brass instruments. The science of acoustic instruments, and analysing or predicting the sound they produce, needs some complex mathematics. Where things get strangely different and eye-wateringly complex is with percussion instruments… but that’s another very, very complex subject.
The complete science and analysis defining the sound produced by even a very simple physical musical instrument is very complex and requires a lot of complex maths.
What about Phase? What is it?
In the screenshot below we have two sine waves of the same frequency.
Two sine waves in phase (0 degrees phase difference).
See how they both start at the same part of the sine wave’s cycle. These are in phase: there is no time difference between them, and they have a phase difference of 0 degrees. If you add the two waves together you’ll get another sine wave, only at twice the amplitude: 5 + 5 = 10, and (−5) + (−5) = −10. In the screenshot below the two sine waves are out of phase, with a phase difference of 180 degrees. You can see that the two waves have cancelled each other out: (+5) + (−5) = 0.
Two sine waves out of phase, with 180 degrees phase difference.
When they are 90 degrees out of phase you get a partial addition: the sum is another sine wave whose peak amplitude is √2 × 5 ≈ 7.07, somewhere between full cancellation and full doubling.
Two sine waves out of phase by 90 degrees.
So if the phase between these two sine waves were to vary slightly over time, you would get a “beating” effect as the two alternately fade between adding and cancelling. This is in effect what is happening when you have two sine waves of slightly different pitches: if the difference between the two sine waves is 0.5 Hz, then you get a beating effect where the signals fade between adding and cancelling every two seconds.
“Beating” effect created by slight regular variations in phase, or a slight difference in frequency.
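The phase and beating arithmetic above can be checked numerically. An illustrative Python sketch, assuming two equal-amplitude 5 volt sine waves: the peak amplitude of the sum follows 2·A·cos(φ/2), and the beat period is simply one over the frequency difference.

```python
# Numeric check of the phase examples: two equal sine waves summed
# with a given phase difference, plus the beat period for a detune.
import math

def summed_amplitude(a, phase_deg):
    # Peak amplitude of a*sin(x) + a*sin(x + phi) is 2*a*cos(phi/2).
    return 2.0 * a * math.cos(math.radians(phase_deg) / 2.0)

def beat_period(f1, f2):
    # One full add/cancel cycle per Hz of frequency difference.
    return 1.0 / abs(f1 - f2)
```

At 0 degrees the 5 V waves sum to 10 V, at 180 degrees they cancel to 0 V, at 90 degrees the result is about 7.07 V, and a 0.5 Hz detune beats once every two seconds, matching the text.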
The importance of phase and phase shift.
These effects caused by phase are very important to us in electronic music production, as phase differences can be used for positioning instruments in the stereo field and introducing changes to the harmonic structure (timbre) of sounds. The effects apply equally to electronic audio signals and to the acoustic audio waves you hear from a loudspeaker. Note: Phase is not audible as such until we introduce a second audio signal into the mix, where it will immediately change the timbre of the audio. If you have a single audio signal and vary its phase, you will not hear any difference; but if you take the original audio and mix it with the phase-shifted audio, you’ll get a frequency notch where the two signals subtract from each other (the classic phasing effect). Small variations in frequency, however, are immediately obvious to most listeners without any second signal to refer to (unless you are totally tone-deaf). There is, however, an exception to phase changes being inaudible: if a very deep and rapid change is made to the phase of an oscillator, you get Phase Modulation or Phase Distortion, which actually distorts the shape of a sine wave. This has been used to great effect in the Casio CZ (Phase Distortion) and Yamaha DX (FM, though strictly speaking this is PM, or Phase Modulation) synthesizers, for example… but that’s another complex subject. Just as an illustration, in the image below the yellow trace is a 440 Hz sine wave with no phase changes, and the green trace is a 440 Hz sine wave with a 10% shift in phase being applied by an 880 Hz sine wave.
The effect of Phase Modulation at audio rates. (440 Hz sine wave with 10 % phase modulation using an 880 Hz sine wave)
Phase and audio Mixing.
Phase is also important when it comes to converting a stereo signal to a mono signal. What sounds great in stereo may, if there is a phase difference between the left and right channels, sound totally different in mono: bands of frequencies can cancel out, boosting some frequencies and cutting others. It can also sound like a comb filter (flanger) being applied, without any variation in the flanger delay time. All that careful mixing and equalization is quickly ruined. This could even be outside your control: music heard on a radio may not be heard in stereo, for example. This is where phase shift can become vital in tone control and equalizer plug-ins.
I can’t claim any original thinking on this idea. It was inspired by reading a PDF I found online. Credit must go to: Jae Hyun Ahn and Richard Dudas, Center for Research in Electro-Acoustic Music and Audio (CREAMA), Hanyang University School of Music.
The idea is that you use either chained or nested comb filters to change the harmonic spectrum of an audio signal. Notes: 1) White and pink noise just sound like white noise put through a rather odd flanger. 2) The more harmonics the input waveform has, the more extreme the end result is. A sine wave will (apart from at resonance) still sound like a sine wave; as you increase the harmonic content, things become more noticeable. 3) If you put the audio through a waveshaper and use extreme shaping levels, the effect of this filter can become very strange. 4) Important: beware of high feedback levels. The concept is that with positive feedback in the delay line certain frequencies will be enhanced, and with negative feedback certain frequencies will be cancelled. By putting an offset on the frequency of either the positive or negative feedback filters we can further change the harmonic structure. The diagram below shows the audio spectrum using negative feedback at the top, and positive feedback at the bottom. You can clearly see that one results in sharp notches removing frequencies (−ve), and the other results in sharp peaks enhancing other frequencies (+ve).
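A single comb filter stage of the kind described above can be sketched as follows (an illustrative Python version; the actual project uses SynthEdit Delay2 modules). With positive feedback, frequencies whose period divides the delay time are reinforced; flip the feedback sign and those same frequencies are notched out instead.

```python
# Minimal feedback comb filter: y[n] = x[n] + feedback * y[n - delay].
def comb_filter(samples, delay, feedback):
    out = []
    buf = [0.0] * delay  # circular delay line holding past outputs
    for i, x in enumerate(samples):
        y = x + feedback * buf[i % delay]
        buf[i % delay] = y  # store this output for reuse `delay` steps later
        out.append(y)
    return out
```

Feeding in an impulse shows the behaviour directly: with feedback 0.5 and a delay of 3 samples, echoes appear at samples 3, 6, 9… with halving amplitude, which in the frequency domain is exactly the regular peak/notch pattern shown in the spectrum diagrams.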
These screenshots will help to show some of the effects on a sawtooth signal: First the unfiltered classic spectrum of a 440 Hz sawtooth
Unfiltered Sawtooth.
Then with 60% feedback, the filters set to 1kHz, and no offset.
And with the same settings, but a +ve offset added to the +ve feedback comb filters.
You can see that by these changes to the comb filters the harmonic spectrum can be changed. The sound the project produces (although I call it a filter) is quite unlike any other filter in the usual SynthEdit modules. This is because we are boosting and cutting harmonics, rather than just cutting out frequencies, and we are not always keeping these boosts and cuts harmonically related. In the screenshots below, using white noise as an audio source, you can clearly see the peaks and troughs in the filter output, first with no comb filter offsets, then with a 200 Hz comb filter offset. The greater the feedback used in the comb filters, the more pronounced and narrower the peaks become.
The harmonic filter output spectrum with no comb filter offsets.
The harmonic filter with a 200 Hz offset on the second comb filter chain; notice the new peaks appearing in the spectrum, offset from the original peaks.
Note: For the curious amongst my readers, as an experiment I tried setting all comb filters with the same feedback polarity and testing. This does not produce the same results as having a 50/50 positive and negative feedback comb filter setup; it just produces a high level output with very wide single peaks and troughs. The 50/50 mix of positive and negative feedback is essential for this filter to work correctly. I found no benefit in changing this mixture of feedback; in fact the opposite was true.
Constructing the Harmonic Filter.
This project uses a third party module: the filter wet/dry mixing uses the ED Morph 1D module. You could substitute the stock X-Mix module. This is the basic structure; the Quad Comb Filter is a container, and its structure is shown underneath the main structure. Pitch control minimum and maximum values: Minimum = 0.5, Maximum = 5. Offset values: not critical, but +/−0.3 is a good starting point; this corresponds to a shift of +/−300 Hz. Feedback: a range of 0 to 7.5 is adequate; I would give 8 as the absolute maximum, and don’t be tempted to go to 10, it will oscillate very loudly. Notes: The Freq Analyser2 module was for testing and showing the effect of the filter on the frequency spectrum; it can be removed if you want to. Likewise, the Audio container is just for testing.
Main filter structure.
Revised Quad Comb filter structure.
All the Delay2 modules should have a Delay Time of 0.05 seconds set in the properties panel (otherwise you get some very strange and unmusical things happening). The two negative feedback filters have the same feedback level, as do the positive feedback filters. The negative feedback filters get their feedback from the same control slider; they just have the value inverted, so if the positive filters are set to +7, the negative filters are set to −7. The frequency (delay) offset is applied to both the positive feedback filters, but not the negative feedback filters.
This is aimed specifically at the stock 1 Pole High Pass Filter. What it is and what it’s useful for.
What is the 1 Pole HPF useful for?
Although it doesn’t have a steep cut-off slope, and has no feedback/resonance control, this filter still has its uses. The idea of this filter is that all frequencies below the cutoff (pitch) frequency are attenuated by 6 dB for every octave below the cutoff frequency. If for any reason you need a steeper cutoff slope, just combine two or more in series. The only settings of note are: Frequency Scale (Properties): set to either 1 V/Octave or 1 V/kHz. Pitch: the cutoff frequency; below this frequency the audio is attenuated with a slope of 6 dB/octave.
1 DC Blocking.
Some structures may, due to the way they work, generate a DC component in their output audio. An example of this could be a waveshaper. This is undesirable, as it can cause all sorts of issues such as asymmetric clipping. If you have an audio signal output at +/−5 volts peak to peak, and you have a +3 volt DC component, your +ve peak voltage is going to be 8 volts rather than 5, and if a sudden increase in audio occurs you’ll quickly hit the +10 volt maximum for audio, and run into harsh digital clipping. In the example below you can see where I have used a waveshaper (albeit with a rather extreme shape), and there is a noticeable peak showing at 10 Hz, which on its own could cause some audio issues; also, if you look at the voltmeter on the Signal Out plug of the waveshaper, we have 2.061 volts DC, which is exactly what we want to avoid. At the output of the 1 pole HPF (which I set to 100 Hz), however, the 10 Hz peak in the spectrum is gone, some others are much reduced, and we also lose that 2.061 V DC component.
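The DC-blocking behaviour of a 1 pole HPF can be sketched with the classic one-pole recursion (an illustrative Python version, not the stock module’s actual code): with the coefficient close to 1 the cutoff is very low, so the audio passes through while any constant offset decays away.

```python
# One-pole DC blocker / high-pass: y[n] = x[n] - x[n-1] + r * y[n-1].
def dc_block(samples, r=0.995):
    out, x1, y1 = [], 0.0, 0.0
    for x in samples:
        y = x - x1 + r * y1  # difference kills DC; r sets the cutoff
        out.append(y)
        x1, y1 = x, y
    return out
```

Feeding in a constant +3 volt “signal” shows the idea: the output starts at 3 but decays geometrically toward zero, which is exactly the DC component being removed.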
2 Effects such as Reverb.
Effects such as reverb, echo, phasing and chorus all sound much better if we apply a high pass filter to the audio input of the effect; the result is much cleaner than if the full range of frequencies is fed into the effect. Allowing lower bass frequencies (below 100-200 Hz) into reverb and echo quickly makes the sound muddy, or even lets them overpower the higher frequencies.
3 EQ and Mixing Plugins
There are several uses here: getting rid of low frequency rumble and hum, which can quickly overwhelm a mix or cause clipping. Used well, high pass filters on individual instruments can also improve the clarity of a mix. Always include high pass filtering in any Mixer VST, Mixer Channel Strip VST, etc.
The only third party module used in this project is the Chorus module itself.
The DAM Chorus Module plug layout.
Most of it is fairly self explanatory.
Mod Depth: The range is 0 to 100 (that’s right, 100; it’s not an error). This is the amount of modulation applied to the chorus delay time.
Mod Phase: Changes the phase of the modulation signals between Left and Right channels for stereo effects.
Width: Controls the stereo width.
Damp: Controls the amount of HF damping. 10 = none, 0 = maximum damping.
Feedback: Allows feedback in the signal path; at higher levels this produces more of a Flanger type effect than Chorus. Note: Some people seem to be confused by negative feedback. All this means is that the feedback is out of phase with the original signal, whereas normal (positive) feedback is in phase. This changes the audio spectrum noticeably.
LFO Shape: Changes the waveshape of the modulation LFO. There is a choice of Sine, 3 Sines, Triangle, Parabola, and Random modulation.
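For anyone curious what these plugs are actually controlling, here is a toy mono chorus in Python. This is only a sketch of the general technique (an LFO-swept delay tap with feedback), not the DAM module's code; the parameter names, ranges, and scaling here are my own assumptions:

```python
import math

def chorus(samples, sample_rate=44100, base_delay_ms=20.0,
           depth=0.5, rate_hz=0.8, feedback=0.2, mix=0.5):
    """Toy mono chorus: one delay tap swept by a sine LFO, with feedback.
    depth (0..1) scales how far the tap moves around the base delay."""
    max_delay = int(sample_rate * 0.05)  # 50 ms circular buffer
    buf = [0.0] * max_delay
    write = 0
    out = []
    base = base_delay_ms * sample_rate / 1000.0  # base delay in samples
    for n, x in enumerate(samples):
        lfo = math.sin(2 * math.pi * rate_hz * n / sample_rate)
        delay = base * (1.0 + depth * 0.5 * lfo)  # swept delay in samples
        # Linear interpolation between the two nearest buffer samples:
        read = (write - delay) % max_delay
        i0 = int(read)
        frac = read - i0
        d = buf[i0] * (1 - frac) + buf[(i0 + 1) % max_delay] * frac
        # Negative feedback would simply flip the sign here, putting the
        # fed-back signal out of phase with the input:
        buf[write] = x + feedback * d
        write = (write + 1) % max_delay
        out.append((1 - mix) * x + mix * d)
    return out
```

A stereo version would run two of these with the LFOs offset in phase, which is exactly what the Mod Phase plug controls.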
The Chorus module in operation.
1 Pole HP filters are included (I used a 100 Hz cutoff, but it’s down to your personal preference); this stops the sound from getting too “muddy”, as bass in a chorus effect often doesn’t sound too good. I have included the Voice Combiner modules in the structure because if, for any reason, the module ends up in a polyphonic signal chain you will get distortion, and it also wastes CPU. Note: Don’t expect to hear much of a stereo effect if your left and right channels are carrying identical audio; if everything is identical (phase, frequencies, volume) you’ll still essentially get a centred (mono) audio output.
NB. Third Party modules required. This prefab relies on the ED GUI Fixed String, the ED Switch > String, and the DAM Text Enter modules to provide the colour changing text display.
A simple prefab that changes the colour of the displayed text to red when the audio clipping level is exceeded. This uses one of the lesser known settings on the Volts to Float2 module: when the Response plug is set to “Clip Detect”, a Float value of 10 is output for as long as the input exceeds 10 V. The Update Rate is down to personal choice, but I would suggest 60 Hz to make sure that most of the peaks in the audio likely to cause clipping are detected. The Float Out from the Volts to Float module is converted to a GUI Float by the Patch Memory Float Out3, and the Animation Position output is fed to a Float to Integer conversion module. When the input clip level is exceeded the Float value is converted to the Integer 1, which switches from the first ED GUI Fixed String with the value 55550000 (partially transparent dark red) to the second ED GUI Fixed String with the value FFFF0000 (opaque bright red). The BG Top and BG Bottom ARGB are set to a partially transparent white background.
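The logic of the whole chain is simple enough to mirror in a few lines of Python (a hypothetical sketch of what the modules are doing between them, not SynthEdit code; the function name is mine):

```python
def clip_colour(peak_volts, threshold=10.0):
    """Mirror of the prefab's chain: Clip Detect -> Float to Integer ->
    switch between the two ED GUI Fixed String ARGB values."""
    clip = 1 if peak_volts > threshold else 0  # the Float to Integer step
    colours = ["55550000",   # 0: partially transparent dark red
               "FFFF0000"]   # 1: opaque bright red
    return colours[clip]
```

So a peak of 8 V selects the dim colour, while anything over 10 V selects the bright red.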
The indicator in its clipping (left) and non-clipping states. Fonts can be changed in the DAM module (with the normal warnings about using non-system fonts), along with the font weight and size. As previously mentioned, the colours are set in the ED GUI Fixed String modules.
There are two third party Text Entry modules provided by Davidson in his module pack: DAM Text Enter and DAM Text Enter Std.
DAM Text Enter Std.
Compared to the standard Text Entry 4 module this has a few additions. 1) There are two Style inputs, switchable via the Style Switch plug. So you could have two styles in use, for example Normal Text Box and GUI Text Box; changing the Style Switch from true to false will switch between the two. 2) The Text Static plug. This allows you to supply an extra text string. In the Properties panel there is the Text Displayed drop down (not available as a plug) which gives you three options: (i) Normal: Text Static is ignored; (ii) Append Text: adds the Text Static string after the normal text; (iii) Prepend Text: adds the Text Static string before the normal text. Useful, but to change styles we still need to edit the global.txt file.
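The three Text Displayed options behave like this (a tiny Python sketch of the rule, with hypothetical names, purely for illustration):

```python
def displayed_text(text, text_static, mode="Normal"):
    """The three Text Displayed options, as I understand them."""
    if mode == "Append":
        return text + text_static      # Text Static goes after the text
    if mode == "Prepend":
        return text_static + text      # Text Static goes before the text
    return text                        # "Normal": Text Static is ignored
```

A typical use is appending a fixed unit, e.g. combining the value "440" with a static " Hz".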
DAM Text Enter.
This is similar to the DAM Text Enter Std: you can still use the global.txt file if you want to, but you also have the option of specifying (as well as the appended/prepended text) the following: Font, Font Size, Font PC Y Offset, Font MAC Y Offset, Font ARGB, Background Gradient On/Off, BG Top ARGB, and BG Bottom ARGB, as well as the Gradient Points AX, AY, BX, and BY.
Plugs & Properties.
Text: Text input/output pin in string format. Being GUI, this is bi-directional.
Style: Some meta data is still used for MAC vertical offsets. Set this style to match the global option of offsets.
Multiline: Turn on to enable multiline text formatting.
Writable: Turn on to enable text input by clicking on the text entry box.
Hint: Enter a hint to be displayed on mouse hover.
Menu Items: Set items for the right click menu.
Menu Selection: Enter the integer number to pick the corresponding menu item.
Text Displayed: Select how static text will be displayed. Normal mode only uses text from the Text pin. The options are Normal, Append and Prepend.
Text Static: Enter the text for static use, applied when either the Append or Prepend display setting is selected.
Font: Specify a font in string format, spelled to exactly match an installed font (for example: Arial). If unsure of the exact font names, check a text editor to see the available fonts. As usual, stick to system fonts: if a font is not supported on another PC, your VST’s GUI text will look different there.
Font Size: Set a font size in pixels.
Font Weight: Select a level of boldness from thin to ultra. Check the amount applied cross platform to ensure matching levels.
Font Style: Specify normal, italic, or oblique font styles.
Font Alignment: Specify the font alignment: leading (left), center, or trailing (right).
Font PC Y Offset: Adjust the Y axis placement of the text for PC. This sets the text’s desired position; use the MAC adjustment only to fix the MAC offset from the PC setting.
Font MAC Y Offset: Adjust the Y axis placement of the text for MAC. This is applied along with the PC offset: the PC offset affects both systems, while the MAC offset is an addition or subtraction to the PC offset on MAC only.
Font ARGB: Set a font colour in ARGB format, e.g. FF000000.
Gradient OFF/ON: Turn the background gradient on or off.
BG Top ARGB: Set the top background gradient colour in ARGB format.
BG Bottom ARGB: Set the bottom background gradient colour in ARGB format.
Gradient Point A X: Set the A point X value gradient stop in pixels.
Gradient Point A Y: Set the A point Y value gradient stop in pixels.
Gradient Point B X: Set the B point X value gradient stop in pixels.
Gradient Point B Y: Set the B point Y value gradient stop in pixels.
Mouse Down: Left and right mouse down outputs when the text box is clicked.
Width: Outputs the width of the text box in pixels.
Height: Outputs the height of the text box in pixels. Note: Width and Height are output values only, intended purely to make it easier to size text boxes so they match; they do not accept input values.
Gradients.
I think the easiest way to explain these settings is visually:
No Gradient: Uses only the BG Top value of FFAAAAAA
This should give you a rough idea of what’s going on. Experimentation is the key to understanding these gradients.
NB: The AX, AY, BX, BY values, and therefore the gradient position, are relative to the size of the text area.
Gradient On, all points set to 0. Uses the BG Bottom value of FF333333.
Gradient On AY = 60: Top uses FF333333, Bottom uses FFAAAAAA
Gradient on BY = 20: Top uses FFAAAAAA, Bottom uses FF333333
Text area size = 200×200 px AX=200, AY=200.
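As a rough model of what the examples above are showing, here is a simplified, vertical-only sketch of interpolating between two ARGB colours between gradient stops A and B. This is my own approximation for illustration (the real module works with both X and Y stops, so treat the behaviour at extreme settings as something to verify by experiment):

```python
def gradient_argb(top_argb, bottom_argb, y, a_y, b_y):
    """Blend BG Top and BG Bottom along the Y axis between stops A and B.
    Outside the A..B span the nearer colour is used unchanged (clamped)."""
    def parse(s):
        # Split an "AARRGGBB" string into four integer channels.
        return [int(s[i:i + 2], 16) for i in (0, 2, 4, 6)]
    top, bot = parse(top_argb), parse(bottom_argb)
    if b_y == a_y:
        t = 0.0  # degenerate gradient: only one colour is visible
    else:
        t = min(max((y - a_y) / (b_y - a_y), 0.0), 1.0)
    return "".join("%02X" % round(top[i] + t * (bot[i] - top[i]))
                   for i in range(4))
```

For a 200 px tall text area with A at the top and B at the bottom, the pixel row at A shows the BG Top colour, the row at B shows the BG Bottom colour, and rows in between blend the two.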
Using the DAM Text Enter as a Push Button.
Note: Third party modules required. As well as the DAM Text Enter you need these other modules: RH-Switch-Text to swap the colour values for the lower button, QTN_GUIBool2Bool to convert the output to DSP, and ED GUI Timer for the latching push button.
By using the BG Colour/Gradient, and the Mouse Down output plug you can make latching and non-latching push buttons.
Momentary (Non-Latching) Push Button. RH-Switch-Text is used to swap between the L Btn Off and L Btn On colours using the LHS Mouse Down plug. QTN_GUIBool2Bool converts the GUI Bool to DSP. Patch Memory Bool is not suitable here due to the plug positions on the modules.
Latching Push Button. Here the RH GUI Timer is used to latch the Mouse Down output in its On state, until a second click on the button switches it back to the Off state. This is set in the module properties: Mode = Bistable. Note: T1 and T2 have no effect in this mode.
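The bistable behaviour is just a toggle: each click flips the stored state. In Python terms (a trivial sketch of the idea, not the ED GUI Timer's implementation):

```python
class BistableButton:
    """Toggle latch, like the timer's Bistable mode: each mouse-down
    flips the stored state and outputs the new value."""
    def __init__(self):
        self.state = False  # button starts in the Off state

    def mouse_down(self):
        self.state = not self.state
        return self.state
```

A first click turns the button on, a second click turns it off again, and so on.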
Alternative Latching Push Button. This button also changes text colour when clicked, and switches from a graduated green to fully green.