Stuck with a SynthEdit project?


Filters: FAQs and tips.

Float plugs.

I notice some filters have the lighter blue "Float" plugs. SynthEdit allows me to connect a voltage to these plugs, but when I change the value quickly or modulate them the filter makes horrible crackling and clicking noises. What can I do to stop this?
The short and simple answer is: do not connect a voltage to the Float plugs and do not modulate their values. These values must never be changed rapidly; a filter whose pitch is controlled by a float value should not have its frequency changed quickly.
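For a bit of background (mainly if you are writing your own modules with the SDK rather than patching): a Float value jumps instantly to its new setting, and that jump is what you hear as a click. DSP code that genuinely needs a moving parameter normally smooths it first. Here is a minimal one-pole smoother sketch in C++; the class name and the 30 ms time constant are illustrative assumptions, not a SynthEdit feature:

// One-pole parameter smoother: glides toward the target value instead of jumping.
#include <cmath>

class ParamSmoother
{
public:
    ParamSmoother(float sampleRate, float timeMs = 30.0f)
    {
        // Coefficient for an exponential approach with roughly a timeMs time constant.
        coeff = std::exp(-1.0f / (0.001f * timeMs * sampleRate));
    }

    void setTarget(float t) { target = t; }

    // Call once per sample; returns the smoothed value.
    float next()
    {
        current = target + coeff * (current - target);
        return current;
    }

private:
    float coeff = 0.0f;
    float target = 0.0f;
    float current = 0.0f;
};

Inside SynthEdit itself the practical answer is simpler: modulate the blue Voltage plugs, which are designed for changing signals, and leave the Float plugs alone.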

Minimum filter frequency.

I connected up a filter so it has a range of 0 to 10 kHz, but the lowest frequency it will go to is about 14 Hz. What am I doing wrong?
Nothing. All filters in SynthEdit have a deliberately restricted frequency range.
In any case, there is no such frequency as 0 Hz: a signal at 0 Hz never changes, which means it is really DC, so the only value that applies is Volts (in SynthEdit we don't need to consider Amps and Watts).

My filters go up to a certain frequency and no more.

This is intentional. The pitch control voltages on most (if not all) filters are internally clipped at 10 V. If you try to take the frequency of a DSP filter too high it will often crash and stop working, make an awful noise, or exhibit strange behaviour. In any case, when we are working with any form of digital signal there is an upper frequency limit set by the sample rate: the Nyquist frequency. Exceeding it produces a lot of harsh distortion known as "aliasing".

SINC filters and dB/ Octave “roll off”.

The short answer to this question is: for SINC filters, no, you cannot translate the "Taps" setting to dB/octave. The number of Taps must also be an integer, hence the plug being an orange Integer plug (just as in the "real" world you can't have half a filter stage!).

The slope cannot be defined in dB/oct for windowed sinc FIR filters. Rather you should use transition bandwidth. There is no straightforward conversion of slope in dB/oct to transition bandwidth due to conceptual differences between FIR and IIR filters. IIR filters do not have a defined stopband.

https://eeglab.org/others/Firfilt_FAQ.html

There is no SINC bandpass filter, can I make one?

Yes. By cascading an HPF and an LPF in series you can make a bandpass filter, and by running an LPF and an HPF in parallel you get a notch filter. Altering the number of Taps will change the notch or pass-band shape, and changing the frequencies will change their width or Q. Note: for the notch filter the number of Taps must be kept the same in both filters for it to work correctly, otherwise latency can affect the filter operation.

SINC Filter Latency.

Although there are no concerns about frequency-dependent phase shift with a SINC filter, there is latency, which relates only to the number of Taps. You can see below that the initial pulse from the oscillator (green) has been significantly delayed by passing through the filter, with the number of Taps set to 171.

Changing the filter frequency from 14 kHz (see above) down to 8 kHz (see below) only changes the pulse shape and amplitude, not the delay time.

Note: By setting Delay Compensation to Full in the preferences dialogue, the effects of latency will be removed.

The effects of enabling Delay Compensation (see below).

Notice also the ringing or ripple effect around the output pulses from the filter. This is a normal artefact of SINC filters at high sample rates (above 44 kHz) and, like the latency, it depends on the number of Taps, not on the filter frequency (the input signal does seem to affect the ripple, though).

Can I “see” the frequency spectrum of a filter?

Yes, there is a handy module available for this: the Freq Analyser2. Connect up a white noise source (the best signal for the job, since it contains the whole of the audio frequency spectrum). All of these filters have the same cutoff frequency, so you can see the difference in how they attenuate the frequencies outside the passband and, in the Moog filter, the resonant peak it creates.

FIR FILTERS. (Finite Impulse Response)

The SINC Lowpass is a linear-phase FIR filter. "Taps" specifies the number of coefficients; more taps gives a steeper cutoff response, but introduces progressively more latency.
FIR filters have only zeros, no poles: poles mean 'feedback', and once there is feedback it is an IIR filter.
The SINC Lowpass filter has latency; for example, if you send an impulse into it, you can see it emerge a little later in time…
FIR filters do not introduce phase shift, but they do introduce latency. The latency depends only on the number of taps and is frequency independent.
The latency is always in integer (whole-sample) steps.
You can consider any FIR filter to be a multitap delay with no feedback, with all the taps spaced one sample apart; gain is applied per tap and then all the taps are added together.
So for a 171-tap filter, you have 170 delays, 171 gains (like Level Adjust modules) and 170 adds.
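In code, that "multitap delay with a gain per tap" picture is just a direct-form FIR convolution. A minimal C++ sketch (generic code, not the SynthEdit SDK; the coefficient values would come from the sinc design):

// Direct-form FIR: y[n] = sum over k of coeff[k] * x[n - k].
// For 171 taps that is 170 one-sample delays, 171 gains and 170 additions.
#include <vector>
#include <cstddef>

// 'history' must be the same (non-zero) length as 'coeff' and holds the most recent inputs.
float processFIR(const std::vector<float>& coeff, std::vector<float>& history, float input)
{
    // Shift the delay line by one sample and store the newest input at the front.
    for (std::size_t k = history.size() - 1; k > 0; --k)
        history[k] = history[k - 1];
    history[0] = input;

    // Apply one gain per tap, then sum all the taps.
    float out = 0.0f;
    for (std::size_t k = 0; k < coeff.size(); ++k)
        out += coeff[k] * history[k];
    return out;
}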

List of FIR filters in SynthEdit:

Filters built into SE's oversampling modules.
SINC High pass filters.
SINC Low pass filters.

IIR Filters (Infinite Impulse Response).

The output of IIR filters as they decay (in theory) never stops changing, and will never reach zero. The output just gets closer and closer to zero forever.
For this reason the 1-pole filters in SE assume that when the filter's output drops below a certain point it is 'near enough to zero', and that is the right point for the filter to "sleep".
Phase shift.
Phase shift in IIR filters (as in the physical analogue equivalents) is frequency dependent. Every IIR filter introduces some per-frequency phase shift.
Cascading filters will obviously sum all of the phase shifts.
Limits to parameters.
For all filters, the cutoff can never be 0 Hz or the Nyquist frequency (we are subject to the laws of physics, maths, and electronics), so if filters are intended to be modulated to the extremes of that range, the cutoff frequency and the Q (feedback/resonance) have to be clipped to ranges that don't exceed the valid limits.
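If you were implementing that clipping yourself it amounts to a simple clamp applied before the filter coefficients are calculated. A minimal sketch; the 5 Hz floor, the 0.45 * sample-rate ceiling and the resonance limit are illustrative assumptions, not SynthEdit's exact internal values:

// Clamp cutoff and resonance to safe ranges before computing filter coefficients.
#include <algorithm>

struct SafeFilterParams
{
    float cutoffHz;
    float resonance; // 0..1 style resonance/Q amount
};

SafeFilterParams clampFilterParams(float cutoffHz, float resonance, float sampleRate)
{
    SafeFilterParams p;
    p.cutoffHz  = std::clamp(cutoffHz, 5.0f, 0.45f * sampleRate); // stay above 0 Hz, below Nyquist
    p.resonance = std::clamp(resonance, 0.0f, 0.85f);             // keep clear of self-oscillation
    return p;
}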
Biquad Filters.
Because Biquads are unstable at low frequencies, a clip value of 5 to 10 Hertz keeps things under control, and that's below the range of human hearing anyway.
Butterworth Filters.
These filters are intended to be preset to a particular frequency and not have that frequency changed whilst in use; they will not perform well if the frequency is changed whilst the VST is running. They have a very flat frequency response, which makes them excellent for tone controls and equalization.
If you really need a bandpass filter whose frequency can be changed, always use an SV filter: you can fix the resonance at 0, and if you need a narrower pass-band, cascade the filters (beware of phase shift if you are mixing wet and dry audio, though); you would get phase changes with the Butterworth EQ filters anyway!

List of IIR Filters;
1 Pole LPF
1 Pole HPF
Butterworth
Biquad
Moog ladder filters
SV Filters
All Pass filters
Hilbert or “Dome/Bode” filters
Steiner Parker
Claudia’s Filter module
The “Panda” filter.
….Just about all the Synthedit stock and third party filters in fact.

Adding “Korg” saturation to an SV Filter.

The stock Korg filters are fine, but suppose we wanted a more versatile filter such as an SV filter, but with that distinctive Korg MS20 style growl and scream?
I can't promise this is going to sound the same as the Korg, but it does have a wild, overdriven sound to it. There's only one third-party module used, the DAM Waveshaper NL (thanks Davidson, it's a great addition to SE!).

The principle is fairly simple: a gain control on the input feeds the first SV filter, which then feeds into the waveshaper. The first filter and the waveshaper can be switched out to give a clean SV filter sound.
Normal/Saturated switching.
On switching the extra filter and waveshaper in, we get the overdriven sound by adjusting the gain and the filter resonance (labelled Saturate). Note: you could leave the resonance set at the default maximum of 10 V, but I found this tended to get a little too unpredictable; a maximum of 8 volts is plenty. Increasing the resonance will naturally increase the distortion from the waveshaper, but this is also controllable via the Tanh control (I found the best waveshaper mode to be Tanh).
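The waveshaping stage itself is conceptually very simple. Here is a minimal tanh-style drive sketch; this is just the general idea, not the DAM Waveshaper NL's actual code, and the scaling values are assumptions:

// Tanh "soft clip": the harder you drive the input, the more saturation and growl.
#include <cmath>

float tanhDrive(float input, float drive) // drive of 1.0 is fairly clean, 10.0 is heavy
{
    // Normalise SynthEdit-style +/-10 V audio to roughly +/-1 before shaping,
    // then scale back up afterwards.
    const float x = (input / 10.0f) * drive;
    return 10.0f * std::tanh(x);
}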
Second filter.
Nothing much to say here, it’s best however to keep the maximum resonance level to about 8 V to avoid self-oscillation.
Modes.
If, like me, you want to make this a Low/High/Band/Notch filter, then make sure both filters are connected to switch modes.
1/2 Stage switch.
Best results to my ears came from switching just the output filter between 1 and 2 stages. I thought switching both made the filter too "thin" and shrill to be very musical.
Control voltages.
I kept the CV inputs down to controlling Pitch and Tanh, but feel free to experiment.
Note:
I tried adding a Pitch offset to the first SV filter in the chain, but I found the results were not too different.

Important note: When experimenting with this filter please, please, please keep the volume low until you are familiar with it. It will probably produce unexpectedly high volumes when resonant filter peaks start saturating. I would not like anyone to damage their hearing!

Morphing:- ED Morph1D.

About the ED MORPH 1D module.

The module morphs through a variable number of audio inputs by a linear (1D) morph value.
When using even one input (this makes no sense I know), the output will always be the same as the input, no matter what value is set on the Morph plug.
When using two inputs as shown below the results will be:

A morph value of 0 V will give us 100% of Input 1 on the output, and a value of 5 V gives a 50% mix of Input 1 and Input 2 ((input A + input B)/2) while a value of 10 V gives 100% of Input 2 and 0% of Input 1.

When using four inputs the results will be:

0.0 Volts = 100% of Input 1 (Sine)
3.3 Volts = 100% of Input 2 (Saw)
6.6 Volts = 100% of Input 3 (Triangle)
10 Volts = 100% of Input 4 (Pulse)
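In code, a linear 1D morph like this amounts to picking the two neighbouring inputs and crossfading between them. A minimal sketch of the general idea (not ED's actual implementation):

// Linear 1D morph: map 0..10 V onto the list of inputs and crossfade
// between the two neighbouring inputs.
#include <vector>
#include <cstddef>
#include <algorithm>

float morph1D(const std::vector<float>& inputs, float morphVolts)
{
    if (inputs.size() == 1)
        return inputs[0]; // one input: the output always equals the input

    // Position 0..(N - 1) along the input list.
    const float pos = std::clamp(morphVolts, 0.0f, 10.0f) / 10.0f
                      * static_cast<float>(inputs.size() - 1);
    const std::size_t a = static_cast<std::size_t>(pos);
    const std::size_t b = std::min(a + 1, inputs.size() - 1);
    const float frac = pos - static_cast<float>(a);

    return inputs[a] * (1.0f - frac) + inputs[b] * frac; // crossfade
}

With two inputs, 5 V gives pos = 0.5 and therefore (Input 1 + Input 2) / 2, matching the behaviour described above.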

Plugs.
Morph (0V to 10V) [Volts]: – A linear morph value.
Response [List]: – Selects between linear and exponential (VCA-like) level response modes. This is how the module responds to the Morph voltage.
Spare Input [Audio/Volts]: – Self replicating input plugs, these allow you to add as many audio inputs as you need.
Output [Audio/Volts]: – The morphed audio output

Morphing in 2D (x and y co-ordinates)

To morph in 2D, more Morph1D modules can be cascaded and X and Y coordinates need to be assigned properly, as illustrated in the example below:

Note: To achieve correct 2D morphing please adhere to the format of using the joystick controller’s Y output for the first pair of Morph modules, and the X output for the second Morph module.
This method allows for much more complex manipulation of waveforms/filters.

Morphing between SV Filter outputs.

Credit must go to Elena Novaretti of ED Modules for the basic concept of the 2D Morphing layout.
This is adapted from Elena Novaretti’s example Morph2D project (originally for morphing between oscillator wave-shapes using a joystick control).
The idea is that all four outputs of the stock StateVar Filter (multi) are fed into a pair of ED Morph1D modules, and then into a third Morph 1D module to give smooth morphing between the filter types without excessive changes in volume.
The Response setting on the ED Morph 1D modules can be Linear or Exponential, in this usage Linear is the best option.
Note: As per usual it’s best to avoid taking the Resonance on the SV filter above 8.5 to 9 V as it will self-oscillate, and quite often produces some unexpectedly high audio output levels.

The Joystick controller.

The structure of the joystick container is shown below:
The Float to Volts converters are set to Smoothing = Smooth (30mS).
The Patch Memory is left with the default Min = 0 V and Max = 10 V.

Here is a suggestion for a panel layout. The Frequency Analyser isn't needed; I just included it in my layout during testing. If you want to include it, that's fine: it was handy with a white noise source for evaluating how the filter behaves.

Latency

What is latency?

Some types of signal processing introduce an unavoidable, but unwanted time-delay to an audio signal. This unwanted delay is a result of the time taken to do complex and/or repeated calculations, and is referred to as ‘latency’.
Latency should not be confused with deliberately introduced time-delays to signals: e.g. reverb and echo plug-ins.

Examples of latency.

Examples of processing that introduces latency include spectral effects, look-ahead limiters, oversampling, and sometimes filters. This latency is a side-effect of the processing, and we would naturally prefer to experience the effect without any unwanted latency.
‘Latency Compensation’ aka PDC (Plug-in Delay Compensation) is a method of hiding this unwanted latency.

How does Plug-in Delay Compensation work?

Plug-ins can report their unwanted latency to the host DAW. The DAW can then compensate for this by time-shifting the audio on the plug-in’s track. Shifting a track’s playback earlier in time can compensate for a plug-in that adds latency to the signal. The result is as if the plug-in added no latency.
Note: Latency compensation works only on pre-recorded material; it is not possible to time-shift live audio.
For this reason, you will sometimes hear latency from your DAW when you are monitoring a track while recording it, but you will not hear the latency later when you play back the track.

If delay compensation works inside a DAW can I use it to stop phasing and other side effects in a VST?

Yes. If Plug-in Delay Compensation is enabled, SE can introduce delay lines to "remove" the latency effects. This only works for modules which report their delay to SE, however.
If you're not using delay compensation, there is a little trick with the SINC filters. The latency is purely down to the number of taps, so you can do the following:

Because the two filters have the same number of Taps set, their latency will be identical: the green low-pass filtered signal is now re-aligned with the yellow signal from the filter set to a 10 kHz cutoff (which lets the original pulse waveform through), so the two can be mixed together.
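If you would rather re-align the dry path with a plain delay instead of a second filter, a symmetric (linear-phase) FIR with N taps delays the signal by (N - 1) / 2 samples, so a matching fixed delay on the dry signal does the same job. A minimal sketch, assuming the tap count is at least 1, fixed and known in advance:

// Fixed delay line used to re-align a dry signal with a linear-phase FIR output.
// A symmetric N-tap FIR has a group delay of (N - 1) / 2 samples.
#include <vector>
#include <cstddef>

class DryDelay
{
public:
    explicit DryDelay(std::size_t numTaps)            // numTaps assumed >= 1
        : buffer((numTaps - 1) / 2, 0.0f) {}          // delay length in samples

    float process(float input)
    {
        if (buffer.empty())
            return input;                             // 1 tap: no delay needed
        const float delayed = buffer[writePos];       // value written delay-samples ago
        buffer[writePos] = input;
        writePos = (writePos + 1) % buffer.size();
        return delayed;
    }

private:
    std::vector<float> buffer;
    std::size_t writePos = 0;
};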

WARNING: Changing latency will interrupt and restart the plugin, and possibly other plugins running in the DAW. It is a disruptive operation. You really need to minimize the chances of this happening. For example, it’s best not to expose to the DAW for automation any parameter which might change the latency. If you can get away with a fixed-latency (in the XML), prefer that option.

Latency reporting to the DAW is currently supported in VST3 plugins. How it works is the plugin adds up the cumulative latency of all its modules and reports the total latency to the DAW.

Which modules introduce latency?

SynthEdit modules that produce latency include:
1) ‘Sinc Low-pass Filter’, and ‘Sinc High-pass Filter’.
2) The oversampling system.
3) All the feedback modules.
4) The ED Spectral Synthesis modules.
5) The ED Pitch shifter.
6) The "Patch Points". That's right, they can introduce latency, and there's a reason for this: they have a feedback module built in. It only becomes active if you use patch points to create a feedback loop, so in normal use they won't cause any latency, but create a feedback loop and you get latency!

These modules report their latency to the SynthEdit plug-in runtime and SynthEdit plug-ins in turn can report that latency to the DAW.

You can enable and disable this latency reporting when exporting your plug-in from SynthEdit. The options are:
1) ‘Off’,
2) ‘Full’ (On) and
3) ‘Constrained’. ‘Constrained’ means that the plug-in will compensate for latency up to a point (5ms), but not more. This prevents the user from experiencing too much latency when monitoring a live audio signal or instrument.
Note: Too much latency makes it difficult to play a software instrument with good timing. 5ms is enough compensation for most situations anyhow. ‘Full’ compensates latency up to 1000ms. ‘Off’ does not compensate for latency at all.

When should I use latency compensation?

Basically, for effects plug-ins. It's not needed for a synthesizer/instrument plug-in, as you are generating the signal, not introducing an unwanted delay to an existing audio signal that would cause timing errors in the DAW.

Can latency affect the audio in my plug in?

If you are using two different filters in parallel, or mixing wet and dry filtered signals, then yes it can, in the form of unwanted "phasing" effects. The structure below illustrates this quite well.

If we use an oscillator as a white noise source and mix the filtered and unfiltered signals, there is a pronounced notch (or band-reject) effect going on. Varying the filter frequency doesn't change the notch frequency much, but try changing the number of taps and the notch frequency changes. Why is this? Because increasing the number of "taps" increases the filter's latency (its built-in delay), and mixing a delayed copy of a signal with the original creates notches whose positions depend on that delay; change the delay and the notch frequency changes. We can see the delay more clearly with a sine wave in the scope. In the first example we have the default 171 taps (green is the direct signal, yellow is filtered).
In the second example we have increased the number of taps to 571, and you can see the noticeable difference in timing between the two signals.
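Under a simplified model, mixing a signal with a copy of itself delayed by D samples produces comb-filter notches, the first of which sits at sampleRate / (2 * D); since a linear-phase SINC filter delays its passband by (taps - 1) / 2 samples, more taps pushes the first notch lower. A rough sketch of the arithmetic (the 44.1 kHz rate is just an example value):

// Simplified model: summing a signal with a copy delayed by D samples
// produces comb-filter notches at odd multiples of sampleRate / (2 * D).
#include <cstdio>

int main()
{
    const double sampleRate = 44100.0;    // example rate, an assumption
    const int tapCounts[] = { 171, 571 }; // the two settings used above

    for (int taps : tapCounts)
    {
        const double delaySamples = (taps - 1) / 2.0; // linear-phase FIR group delay
        const double firstNotch = sampleRate / (2.0 * delaySamples);
        std::printf("%d taps -> delay %.0f samples, first notch near %.0f Hz\n",
                    taps, delaySamples, firstNotch);
    }
    return 0;
}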

MIDI Note number to Volts chart.

Note: In this table middle C (60) is C3, not C4 as in the ISO system standardized by the Acoustical Society of America.
This conversion table doesn't apply to MIDI note lists of the sort contained in the 'MIDI Filter' module. For those, a simple '10/127 x note number' formula works.

MIDI | VOLTS
000 = -0.75
001 = -0.667
002 = -0.583
003 = -0.5
004 = -0.417
005 = -0.333
006 = -0.25
007 = -0.167
008 = -0.083
009 = 0
010 = 0.083
011 = 0.167
012 = 0.25
013 = 0.333
014 = 0.417
015 = 0.5
016 = 0.583
017 = 0.667
018 = 0.75
019 = 0.833
020 = 0.917
021 = 1
022 = 1.08
023 = 1.17
024 = 1.25
025 = 1.33
026 = 1.42
027 = 1.5
028 = 1.58
029 = 1.67
030 = 1.75
031 = 1.83
032 = 1.92
033 = 2
034 = 2.08
035 = 2.17
036 = 2.25
037 = 2.33
038 = 2.42
039 = 2.5
040 = 2.58
041 = 2.67
042 = 2.75
043 = 2.83
044 = 2.92
045 = 3
046 = 3.08
047 = 3.17
048 = 3.25
049 = 3.33
050 = 3.42
051 = 3.5
052 = 3.58
053 = 3.67
054 = 3.75
055 = 3.83
056 = 3.92
057 = 4
058 = 4.08
059 = 4.17
060 = 4.25
061 = 4.33
062 = 4.42
063 = 4.5
064 = 4.58
065 = 4.67
066 = 4.75
067 = 4.83
068 = 4.92
069 = 5
070 = 5.08
071 = 5.17
072 = 5.25
073 = 5.33
074 = 5.42
075 = 5.5
076 = 5.58
077 = 5.67
078 = 5.75
079 = 5.83
080 = 5.92
081 = 6
082 = 6.08
083 = 6.17
084 = 6.25
085 = 6.33
086 = 6.42
087 = 6.5
088 = 6.58
089 = 6.67
090 = 6.75
091 = 6.83
092 = 6.92
093 = 7
094 = 7.08
095 = 7.17
096 = 7.25
097 = 7.33
098 = 7.42
099 = 7.5
100 = 7.58
101 = 7.67
102 = 7.75
103 = 7.83
104 = 7.92
105 = 8
106 = 8.08
107 = 8.17
108 = 8.25
109 = 8.33
110 = 8.42
111 = 8.5
112 = 8.58
113 = 8.67
114 = 8.75
115 = 8.83
116 = 8.92
117 = 9
118 = 9.08
119 = 9.17
120 = 9.25
121 = 9.33
122 = 9.42
123 = 9.5
124 = 9.58
125 = 9.67
126 = 9.75
127 = 9.83

The volts values can be fed to the 'Trigger To MIDI' and 'SampleOscillator2' modules at the 'Pitch' pin, for example. The values can be calculated as well, but that's not so straightforward and, due to small deviations, the result tends to need constant manual adjustment; the table saves you this tedious process, and it is accurate and verified.
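If you ever need to regenerate or extend the table, the values follow directly from the pitch standard described further down this page (5 Volts = MIDI note 69, 1 Volt per octave). A small sketch:

// Print the MIDI note number to Volts table: Volts = 5 + (note - 69) / 12.
// Note 0 gives -0.75 V and note 69 gives 5 V, matching the table above.
#include <cstdio>

int main()
{
    for (int note = 0; note < 128; ++note)
    {
        const double volts = 5.0 + (note - 69) / 12.0;
        std::printf("%03d = %.3f\n", note, volts);
    }
    return 0;
}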

Musical notes to Frequency in Hz.

C0 = 16.352 Hz
C1 = 32.704 Hz
C2 = 65.408 Hz
C3 = 130.816 Hz
C4 = 261.632 Hz
C5 = 523.264 Hz
C6 = 1046.528 Hz
C7 = 2093.056 Hz
C8 = 4186.112 Hz
C9 = 8372.224 Hz
C10 = 16744.448 Hz

C#0 = 17.324 Hz
C#1 = 34.648 Hz
C#2 = 69.296 Hz
C#3 = 138.592 Hz
C#4 = 277.184 Hz
C#5 = 554.368 Hz
C#6 = 1108.736 Hz
C#7 = 2217.472 Hz
C#8 = 4434.944 Hz
C#9 = 8869.888 Hz
C#10 = 17739.776 Hz

D0 = 18.354 Hz
D1 = 36.708 Hz
D2 = 73.416 Hz
D3 = 146.832 Hz
D4 = 293.664 Hz
D5 = 587.328 Hz
D6 = 1174.656 Hz
D7 = 2349.312 Hz
D8 = 4698.624 Hz
D9 = 9397.248 Hz
D10 = 18794.496 Hz

D#0 = 19.445 Hz
D#1 = 38.890 Hz
D#2 = 77.780 Hz
D#3 = 155.560 Hz
D#4 = 311.120 Hz
D#5 = 622.240 Hz
D#6 = 1244.480 Hz
D#7 = 2488.960 Hz
D#8 = 4977.920 Hz
D#9 = 9955.840 Hz
D#10 = 19911.680 Hz

E0 = 20.602 Hz
E1 = 41.204 Hz
E2 = 82.408 Hz
E3 = 164.816 Hz
E4 = 329.632 Hz
E5 = 659.264 Hz
E6 = 1318.528 Hz
E7 = 2637.056 Hz
E8 = 5274.112 Hz
E9 = 10548.224 Hz
E10 = 21096.448 Hz

F0 = 21.827 Hz
F1 = 43.654 Hz
F2 = 87.308 Hz
F3 = 174.616 Hz
F4 = 349.232 Hz
F5 = 698.464 Hz
F6 = 1396.928 Hz
F7 = 2793.856 Hz
F8 = 5587.712 Hz
F9 = 11175.424 Hz
F10 = 22350.848 Hz

F#0 = 23.125 Hz
F#1 = 46.250 Hz
F#2 = 92.500 Hz
F#3 = 185.000 Hz
F#4 = 370.000 Hz
F#5 = 740.000 Hz
F#6 = 1480.000 Hz
F#7 = 2960.000 Hz
F#8 = 5920.000 Hz
F#9 = 11840.000 Hz
F#10 = 23680.000 Hz (above the audible range)

G0 = 24.500 Hz
G1 = 49.000 Hz
G2 = 98.000 Hz
G3 = 196.000 Hz
G4 = 392.000 Hz
G5 = 784.000 Hz
G6 = 1568.000 Hz
G7 = 3136.000 Hz
G8 = 6272.000 Hz
G9 = 12544.000 Hz
G10 = 25088.000 Hz (above the audible range)

G#0 = 25.957 Hz
G#1 = 51.914 Hz
G#2 = 103.828 Hz
G#3 = 207.656 Hz
G#4 = 415.312 Hz
G#5 = 830.624 Hz
G#6 = 1661.248 Hz
G#7 = 3322.496 Hz
G#8 = 6644.992 Hz
G#9 = 13289.984 Hz
G#10 = 26579.968 Hz (above the audible range)

A0 = 27.500 Hz
A1 = 55.000 Hz
A2 = 110.000 Hz
A3 = 220.000 Hz
A4 = 440.000 Hz
A5 = 880.000 Hz
A6 = 1760.000 Hz
A7 = 3520.000 Hz
A8 = 7040.000 Hz
A9 = 14080.000 Hz
A10 = 28160.000 Hz (above the audible range)

A#0 = 29.135 Hz
A#1 = 58.270 Hz
A#2 = 116.540 Hz
A#3 = 233.080 Hz
A#4 = 466.160 Hz
A#5 = 932.320 Hz
A#6 = 1864.640 Hz
A#7 = 3729.280 Hz
A#8 = 7458.560 Hz
A#9 = 14917.120 Hz
A#10 = 29834.240 Hz (above the audible range)

B0 = 30.868 Hz
B1 = 61.736 Hz
B2 = 123.472 Hz
B3 = 246.944 Hz
B4 = 493.888 Hz
B5 = 987.776 Hz
B6 = 1975.552 Hz
B7 = 3951.104 Hz
B8 = 7902.208 Hz
B9 = 15804.416 Hz
B10 = 31608.832 Hz (above the audible range)

Signal levels and conversions

Signal level conversions
SynthEdit is modelled after real analogue synthesizers. In a modular analogue synth the various modules offer many features and functions, and there are no rules as to how you connect them together (that's the fun part!). As a result, all the modules must be compatible with each other. In the real world this was achieved with voltage control, the standard for most synthesizers being 1 volt per octave.
All modules responded to the same voltage range in a consistent way.
SynthEdit uses the same principle of voltage control signals.
The control voltage plugs of SynthEdit’s modules generally have a useful range from 0.0 to 10.0 volts.

Voltage to Pitch conversion

Example of voltage to pitch conversion:

The pitch of SynthEdit's Oscillator modules is calibrated in Volts per octave.
A control voltage of 5 Volts sets the oscillator frequency to 440 Hertz (the A above middle C, near the centre of a piano keyboard). Increasing the input by 1 Volt causes the pitch to rise one octave (the frequency doubles to 880 Hz).

Simplified: Volts = log2(Hz / 13.75), which is the same as Volts = 1.442695041 * ln(0.07272727273 * Hz).
To convert Volts to Frequency:
Frequency = 440 * 2^(Volts - 5)
To convert a MIDI note number to Volts:
There are 128 MIDI notes and 12 semitones in an octave. 5.0 Volts is middle A (MIDI note 69).
Volts = 5.0 + (MIDI note number - 69) / 12
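These conversions drop straight into code; a minimal sketch of the three formulas above:

// Pitch conversions: 1 Volt per octave, 5 V = 440 Hz = MIDI note 69.
#include <cmath>

double voltsToFrequency(double volts)
{
    return 440.0 * std::pow(2.0, volts - 5.0);
}

double frequencyToVolts(double hz)
{
    return 5.0 + std::log2(hz / 440.0); // equivalently log2(hz / 13.75)
}

double midiNoteToVolts(int note)
{
    return 5.0 + (note - 69) / 12.0;
}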

Voltage to volume conversion.

The VCA Module allows you to choose from 3 different response curves:

a) Linear
b) Exponential
c) Decibel

The following chart shows the relationship between input and output voltages.

A more useful graph is the output volume in decibels for a given input voltage. This more closely shows how loud the signal sounds.

This shows that an input of 10 Volts produces full volume ( 0 Decibels ), and a gain of 0 volts gives silence ( -70 decibels, very quiet).
A full-scale signal is -10 to +10 Volts. A 1 kHz sine signal amplified to full scale (-10 to +10 on the Oscilloscope) shows as 0 dB in Cubase; this is a peak reading. SynthEdit's Oscillators' normal output range is -5 to +5 Volts (-6 dB in Cubase). Unlike Cubase, SynthEdit's own VU Meter displays an averaged signal; however, you can switch it to peak mode. In "dB Peak" mode SynthEdit's VU meter reads 10 dB above Cubase's.
What do they look like? All of these sounds have the same envelope settings, but different VCA modes.

Decibel (dB) mode
The human ear hears this as a constant, natural fade.

The Decibel curve drops 35 dB between 10 – 1 Volt.
dB = (35/9) * (volume-1.f)
Volts = 10 * 10.f ^ ( dB * 0.5 )
Since a perfect dB curve never reaches zero volume, below 1 Volt the VCA dB curve is faded to silence.

Exponential mode
This scale imitates the voltage drop of a discharging capacitor. Many hardware synths generated their envelopes using this method, as it is the easiest to produce with an electronic circuit, and is similar to the decay curve of a natural sound.

Given a volume from 0 – 10, this formula gives the output level in volts.
volts = 10 - c1 * (1 - e^(3 * (volume / 10 - 1)))
Where 'c1' is a constant that determines the amount of curve:
c1 = 10 / (1 - e^-3) = 10.524
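A minimal sketch of that exponential curve in code, just the formula above with its end points checked:

// Exponential VCA response: volume 0..10 in, output level in Volts out.
#include <cmath>

double exponentialVca(double volume)
{
    const double c1 = 10.0 / (1.0 - std::exp(-3.0)); // roughly 10.524
    return 10.0 - c1 * (1.0 - std::exp(3.0 * (volume / 10.0 - 1.0)));
}
// exponentialVca(0.0) gives 0 Volts, exponentialVca(10.0) gives 10 Volts.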

Linear mode: the VCA or Level Adj module.
This is the most direct method of controlling level. However, to the human ear this sound fades in an irregular way, appearing to go quiet too quickly at the end.

Converting Volts to dB
To convert a level in Volts to dB, use the following formula:
dB = 20 * log10(volts / 10)

To convert a level in dB to Volts:
volts = 10 * 10^(dB / 20)
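And the same two formulas as code:

// Level conversions between Volts (10 V full scale) and decibels.
#include <cmath>

double voltsToDb(double volts)
{
    return 20.0 * std::log10(volts / 10.0); // 10 V -> 0 dB, 5 V -> about -6 dB
}

double dbToVolts(double dB)
{
    return 10.0 * std::pow(10.0, dB / 20.0); // 0 dB -> 10 V
}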

Voltage to Time conversions

Envelope generator module times are based on a Time Cent scale. This is a similar concept to the decibel scale, whereby you get finer control over the short envelope segments.
(Pictured – ADSR2 curve)

ADSR2
Due to popular request the ADSR2 is faster than its predecessor. It's designed to range from 1 ms to 10 s.
Time = 10^(0.4 * Volts - 3)
Volts = (log10(Time) + 3) * 2.5

ADSR (Old deprecated module!)
The ADSR segments range from about 10ms at 0 Volts up to approx. 10s at 10 Volts
Time =2^(Volts-6.666666)
Volts =log(Time)/log(2)+6.66666
NOTE: Sometimes you may need even faster envelope times, for example when generating percussive sounds like drums. You can get shorter times by using negative voltages.

MIDI-CV Portamento time
Time = 2^(Volts-8.666666)
Volts = log(Time)/log(2)+8.66666

MIDI-CV2 and Keyboard2 Portamento time
Time = 10^(0.4 * Volts - 3)
Volts = (log10(Time) + 3) * 2.5
In constant-rate mode, the formula is the same except the result is the glide time per octave
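A minimal sketch of the ADSR2 / MIDI-CV2 time curve in code (the same formula as above; negative voltages give times below 1 ms):

// ADSR2 / MIDI-CV2 segment time: Time = 10^(0.4 * Volts - 3) seconds.
#include <cmath>

double voltsToSeconds(double volts)
{
    return std::pow(10.0, 0.4 * volts - 3.0); // 0 V -> 1 ms, 10 V -> 10 s
}

double secondsToVolts(double seconds)
{
    return (std::log10(seconds) + 3.0) * 2.5;
}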


 

Signal and Data Types


Patch leads and Plugs are colour coded depending on the signal type.
Plugs (AKA Pins) are the coloured ‘spots’ on the modules, and Patch leads are the cables we “draw” to connect the modules.

The text and coloured spot on each module represent a plug that can be used to connect to other modules using patch cords.
A plug allows parameters of the module to be changed using the output of another module (such as a control), and allows information and control voltages to be sent and received.
Input plugs are normally located on the left of a module, and output plugs on the right.
There are a few exceptions to this rule among the GUI modules.
Plugs are colour coded depending on their signal type, as are the patch cords.

DSP versus GUI data.

The GUI section contains some of the controls for the DSP modules, and the interface/control panel itself; the GUI side handles all the key clicks and mouse events.
So that you can see what type a module is, the DSP modules are grey and the GUI modules are blue.
In general you cannot connect GUI to DSP directly, but there are some modules that allow information to pass between GUI and DSP modules. GUI/DSP converters are part light blue, part light grey.

DSP or Digital Signal Processing.

This covers Oscillators, Filters, Modulators, Inverters, Level controls, Voltage controlled amplifiers, and the controls such as knobs, lists, sliders, lamps and switches. These are the modules that generate, control, and process your sounds. Most controls send voltages to the modules, but some, such as a list entry, allow you to select one value out of a pre-defined list in a module. There are also some with values you can pre-set, such as Boolean (0 or 1) or Voltage values.

DSP versus GUI Data.

DSP data cannot be directly connected to a GUI module, due to the different ways of handling data; SynthEdit will not allow the direct connection. The data rates are higher for DSP, reflecting the amount of data which must be processed, while GUI data rates are lower so that more CPU priority can be given to the DSP signal handling.
You must use a special data conversion module to communicate between the two types of data, and never use this to process DSP data; the conversion should only be used to send signals from a GUI control to a DSP module to change its operating parameters. Never try to use GUI to DSP conversion for modulation or other rapid changes.

Timing.

In SynthEdit, communication between DSP and GUI is meant to be accurate enough to handle controls and visual display items; there is no guarantee of precise timing.
This means that for fast timing and data updates there is a risk that data will be skipped or mis-timed. For this reason (and others) you should never convert DSP data to GUI in order to use GUI modules to process DSP signals. Not ever.
DSP data runs at the sample rate, 44.1 kHz or more, while GUI communication takes place at roughly 30 to 60 Hz!
In short, do not try to use GUI modules to handle DSP data. It will fail.

Bit Depth

SynthEdit processes all signals internally at 32 bits floating point.
When SynthEdit writes to files, it outputs either 16-bit integer or 32-bit floating point samples.

Sample Rate

Many software synths separate control signals and audio signals. Control signals (e.g. LFOs and Envelopes) are processed at a low sample rate; this saves CPU power, but results in sluggish envelopes and zipper noise (noticeable stepping clicks on fast envelopes, panning or volume fades). You also have the hassle of needing conversion modules.
To maintain sound quality, SynthEdit uses the same high sample rate for all audio signals. As a result SynthEdit just sounds cleaner and more responsive than many other soft synths. SynthEdit supports any sample-rate.  The Oscillator waveforms are generated at run time to suit the sample-rate.

The types of data signals.

Blue:- Normal Audio or Control Voltage signal. DSP use only.
Audio, or control voltage signals. Voltage is essentially floating point data. It is used to send audio from one DSP module to another, or to send control voltages from one DSP module to another to control the recipient's behaviour.
DSP Floating Point and Volts pins will inter-connect; however, this is NOT good practice and you should always use a Volts to Float or a Float to Volts converter. Just because you can do it doesn't mean you should!
Note: Voltage plugs are always DSP Plugs.

—————————————————————————————————-

Light Blue:- Floating point values
A floating point number is a positive or negative number with a decimal point. For example, 5.5, 0.25, and -103.342 are all floating point numbers, while 91 and 0 (written without a decimal point) are not. Floating point numbers get their name from the way the decimal point can "float" to any position necessary within the number.
Note: With large numbers, there are times when maths results from floating point calculations are not 100% accurate.

—————————————————————————————————-

Red:- Text data.
The Text data type stores any kind of text data. It can contain both single-byte and multibyte characters that the locale supports. The term simple large object refers to an instance of a Text or Byte data type.
Text can be GUI or DSP.

—————————————————————————————————-

BLOB:- Binary Large OBject.
This is a more technical aspect of data in Synthedit, and is not often used except in passing large amounts of binary data in or between modules when using or creating samplers, or sample players.
In SynthEdit it has a built-in limit of 5 MB; trying to pass more than 5 MB of data won't work, as that amount of data can't be handled and will simply be dropped (ignored) and not transmitted.
A "BLOB" is a common acronym for "Binary Large OBject", which means it's a data object holding a large amount of binary data. Some languages have native BLOB types, but C++ doesn't. Nevertheless, creating a blob is simple enough: you just create an array of bytes, which in C++ is done by creating an array of characters. This might be confusing, though, as an array of characters has a special meaning in C++: it's also a string.
For more information you really need to read in depth C++ programming tutorials and documentation.

—————————————————————————————————-

Orange:- Integer/ Integer64.
The Integer data type stores whole numbers; as a signed 32-bit value its range is -2,147,483,648 to 2,147,483,647 (9 to 10 digits of precision), and Integer64 is the 64-bit equivalent with a far larger range.
Note: an Integer value is stored as a signed binary integer and is typically used to store counts, quantities, and so on.

—————————————————————————————————-

Yellow:- MIDI data.
MIDI is an acronym that stands for Musical Instrument Digital Interface. It’s a way to connect devices that make and control sound — such as synthesizers, samplers, and computers — so that they can communicate with each other, using MIDI messages.
SynthEdit takes the MIDI input from your chosen device and converts it into control signals (voltages) that its modules can understand, to control the modules' actions.

—————————————————————————————————-

Green:- A list of values. For example, Waveform names. DSP Only. Usually connects to a drop-down list.

—————————————————————————————————-

Black:- Boolean (logic on/off). This (for those of you familiar with electronics) is like the system used by TTL and CMOS logic chips.

—————————————————————————————————-


NOTE: SynthEdit will not allow you to connect patch cords to plugs of a different signal type without using a converter, except for Voltage and Float (but even there you should always use a data type converter).

MIDI FAQs problems and “bugs”

Triggering a note from a button not a keyboard.

I want to trigger a note from a button on my synthesizer's control panel, but I can't get it to work. Why?
Connecting a button directly to your synthesizer's ADSR module won't work quite as you might expect.
The reason is SynthEdit's sleep mode. When a MIDI to CV module finishes playing a note, it powers off any downstream modules, as this saves CPU. However, this also prevents the ADSR module from responding to the button.
All is not lost though: simply connect a Button to a Trigger to MIDI module.
Connect the Trigger to MIDI module's MIDI out to the MIDI to CV module's MIDI in. Now when you push the button, a MIDI command is sent to the MIDI to CV, just like any other note-on. The MIDI to CV then wakes up all the modules and starts the note.

Using a button to trigger an ADSR.

Weird voltages from the Velocity Plug

I connected the Velocity output from the MIDI to CV module to a voltmeter and got weird values. The voltage seems to go higher and higher instead of going low when you lift the key. Is this a bug?
No, there is no bug.
The MIDI to CV is polyphonic: although there's only one on-screen, imagine there are 6 of them, all connected to your voltmeter.
When you first push Play, all 6 modules output zero (the voltmeter reads "0.00"). You then play a note and the voltmeter reads "1.0" (for example). You play a 2nd note and the voltmeter reads "2.0"; a 3rd note and it reads "3.0", and so on.
In a polyphonic synthesiser, these voltages add up, higher and higher, until you reach the maximum polyphony.
There is no bug, except perhaps that the Voltmeter is not Polyphonic, so instead of showing you 6 readings (for 6 voices), it adds them all together, which can be confusing.
Note: for debugging, switch to Mono mode.
If you need to take voltage readings to debug your synth, the easiest solution is to set your project to Mono mode; the voltage readings will make much more sense. (You can set your project back to Polyphonic once you're done.)

I released a note and the voltage didn’t return to 0.

Why doesn’t the voltage return to zero when you release the note?
Because each note takes time to fade out, the MIDI to CV module continues to output the same voltages as when you first hit the note. Imagine if the pitch went to zero the instant you released a note: instead of fading gracefully to silence, notes would thump down into a low-frequency buzz.
Imagine if the Velocity output went to zero on note-off: if you were using it to control the note loudness, the note would click instantly to silence, like an organ sound. So, when you release your finger from a key, the MIDI CV continues to output the same pitch and velocity.

Note: The “Gate” voltage does drop to zero instantly in all scenarios.

How to stop notes “Sticking”

How do I stop notes from “sticking” in my SynthEdit synth?
There are several reasons why notes could be sticking, usually due to construction errors. If you are getting stuck notes, check for the following:
1) Ensure that your Envelope module’s Gate inputs are connected to a proper Gate signal. e.g. From the Gate output of a MIDI to CV Module.
2) Make sure that the Envelope module’s Gate inputs are not being controlled by a slider, or fixed value module.
3) Check your VCA module’s Volume input to make sure that it is not being directly controlled by a Slider, or Fixed Value module.

“Poly to Mono” module

This module converts a polyphonic signal to monophonic by splitting off only the most recent note played. The output will be similar to a monophonic instrument. This is useful when trying to modulate a monophonic object (e.g. an LFO) from a polyphonic signal (e.g. note-pitch), which is usually not possible in a meaningful way.

“Voice Combiner” module

This module mixes-down a polyphonic signal into a monophonic signal that includes all voices that are playing.

Converting MIDI 1 to MIDI 2

SynthEdit also provides a MIDI converter module that can convert MIDI 1 to MIDI 2 and vice versa.
This is useful for maintaining compatibility with MIDI 1 only modules.
MIDI 2.0 is now the default MIDI standard, because MIDI 1, MIDI MPE, and Steinberg Note-Expression can all be converted losslessly to MIDI 2. However it’s not always possible to convert MIDI 2 to MIDI 1.
The SynthEdit SDK now provides helper classes that will convert MIDI for you.
This allows you to write your MIDI code without having to handle all the different types of MIDI.
Note: It's recommended that you write your modules to use MIDI 2.
The SDK contains the ‘MIDI to Gate’ module that shows how to write a MIDI 2 module that also accepts MIDI 1 transparently.

You can intercept the MIDI signals any time before they reach the Patch Automator.
Note: The MIDI-CV secretly sends its MIDI to the Patch Automator.
By default the MIDI in SE 1.5 is version 2.0. The MIDI-In module converts everything to MIDI 2.0. You can send version 1.0 as well, but SE's own MIDI modules will tend to convert it back into version 2.0 if they get the chance.


