Stuck with a SynthEdit project?


SynthEdit Sub Controls:- Part 2 Patch Memory

I think these modules are worth a separate page all on their own, as they are quite an “in depth” subject, and can be a source of confusion for beginners.

The rules of using Patch Memory.

Rule 1: The first rule of Patch Memory is: Don’t use “redirectors”.

From the SynthEdit help file:

Avoid splitters (redirectors)
Due to a design mistake, SynthEdit 1.0 had the Container’s ‘Control on Parent’ pin on the wrong side. This made it impossible to connect it to a Patch Mem module (needed for save/recall of paged panel settings).
The ‘Bool Splitter’ was introduced to fix this problem, but the bool splitter causes problems of its own. The bool splitter ‘reflects’ any one input signal back out its other inputs. This is a bad design because it is not clear which module is in control of the others. The result is modules ‘fighting’ each other for control. Symptoms include ‘flakey’, inconsistent behavior and patches not being saved/recalled correctly.
3rd party module developers have sometimes copied SynthEdit’s example and released modules that rely on the use of splitters. This is not their fault; however, if at all possible avoid using splitters.

Rule 2: The second rule of Patch Memory is: The first wire connected to the ‘Value Out’ plug always resets the Parameter. The Value, Name and Range of the parameter are taken from whatever module you are connecting to. To avoid this behaviour when replacing a wire – connect the new wire first before disconnecting the old one.

Rule 3: Do not connect a Patch Memory to the Polyphony Control module. The Polyphony control is itself a type of Patch Memory, so if you do it will cause data conflicts and the two modules will “fight” over which is doing what.

Rule 4: Never use more than one Patch Memory for each control in a Sub-Control since this can cause some very unpredictable data conflicts and extremely strange behaviors in your GUI controls.

The Patch Memory modules are the hub around which you will construct any Sub-Control prefab. They are the link from your controls (GUI) to the audio side (DSP) of SynthEdit.
1) GUI to DSP Link: They are the interface (bridge) between the DSP (Audio Processing) side, and the GUI controls and graphics.
2) Storing values for Presets: They allow the storing of values as a preset to “remember” the state or position of a control
3) Use the correct type: There is one Patch Memory for each datatype, divided into input and output controls.
4) Data size limit: When using Patch Memory modules to pass large blocks of data between modules (BLOB data for samplers is a good example), there is a limit on the maximum size of the data block: it must never exceed 5 MB. V1.4 and its plug-ins will crash as soon as this maximum is exceeded; the latest version of SynthEdit 1.5 will simply not pass the data through the Patch Memory at all, but will not crash. So we still have the 5 MB data limit, but SE and its VST plug-ins handle it in a more stable manner. This limit applies to all the Patch Memory modules.
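If you’re generating BLOB data in your own code, a quick size check saves a lot of head scratching. Here’s a minimal C++ sketch (the blobFitsPatchMemory helper is purely hypothetical, not part of the SynthEdit SDK; only the 5 MB figure comes from the note above):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // A BLOB is just a block of raw bytes.
    using Blob = std::vector<std::uint8_t>;

    // Patch Memory BLOB data must stay within the 5 MB limit mentioned above.
    constexpr std::size_t kMaxBlobBytes = 5 * 1024 * 1024;

    // Hypothetical helper (not part of the SynthEdit SDK): check the size
    // before handing the data to a Patch Memory, instead of finding out the hard way.
    bool blobFitsPatchMemory(const Blob& blob)
    {
        return blob.size() <= kMaxBlobBytes;
    }

    int main()
    {
        Blob sample(6 * 1024 * 1024); // a 6 MB sample - too big
        std::printf(blobFitsPatchMemory(sample) ? "OK to send\n"
                                                : "Too large for Patch Memory\n");
    }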

Note: In the headings below I just refer to each of the two basic types of module as PatchMemory “Data-Type” 3 or PatchMemory “Data-Type” Out3, where “Data-Type” is, for example, Integer or Text, to save a lot of repetition.

GUI in – DSP out.

PatchMemory Float3 module

These are the modules that take a GUI data input and convert it to a DSP output. For example, the module shown above will take the position of an Image used as a control and convert it to data suitable for DSP output on the Value Out plug.

DSP in – GUI out.

PatchMemory Float Out3 module

These are the modules that take a DSP data input and convert it to a GUI output. For example, the module shown above will take the value on the Value In plug and convert it to data suitable for GUI output on the Value and Animation Position plugs.
Value In:
The Value In plug is always a DSP data plug receiving data from DSP to be sent to the GUI side.
Value Out:
The Value Out plug is always a DSP data plug sending data from the GUI modules to the DSP side.

Send or Receive? Which way does the data flow?

This is somewhat confusing and counter-intuitive for newcomers to SE, but these are the rules for Patch Memory module names:
A Patch Memory Out module always Inputs data from a DSP module.
A Patch Memory Module always Outputs data to a DSP Module.

Note: GUI plugs are bi-directional, they send and receive.
Note: The first wire connected to the ‘Value Out’ plug resets the Parameters. The Value, Name and Range of the parameter are taken from whatever module you are connecting to. To avoid any complications caused by this behaviour when replacing a connection: always connect the new wire first before disconnecting the old one.

Typical use of a PatchMemory Float3

One of the most basic and easy to understand uses of a PatchMemory is in the Knob prefab included with the SynthEdit control modules.
1) The Animation Position plug on the Patch Memory is linked to both of the Image2 modules (the body of the knob, and the shiny cap) as we want these to move together.
Note: Although the Tinted Bitmap Image has an Animation Position plug, it’s not used here, as we don’t want the scale for the knob to move.
2) The Patch Memory Float3 module serves three main purposes:
a) To link all the modules together
b) It memorizes the position of the knob
c) It allows us to convert the position of the image of a knob into a Floating Point value, which is output as a DSP value (hence the grey background). This allows us to use the position of the image to output a Voltage which can be used to control (for example) the cut-off frequency of a filter module.
3) The Animation Position plugs are always scaled from 0 to 1 as a default, whereas both the Value, and Value Out plugs have the value scaled up to the default 0 to 10 range required for controlling DSP modules.
4) The Text Entry4 module allows us to display a title adjacent to the control knob.
Float to Volts conversion.
The Float to Volts module is included to convert the Floating Point output of the Patch Memory to a suitable data stream for DSP modules. This allows us to set the response speed of the control, so that a smooth output signal is sent to the DSP controls.
Menu Items, and Menu Selection plugs.
These can be used to set up a Right Click menu for the control knob.

Our two important plugs here are Choice and Item List: Choice is the Integer which refers to the Item from the Item List that is being chosen.
Defining the list:
The list is set up as a text string in the Properties for the module. Each entry is given an ID number starting at 0.
Note: the first item is always 0. If you are choosing pre-defined options from another module, a typical list could be: Sine=0,Saw=1,Triangle=3,Pulse=4
This is referred to as an “Enumerated List”
There must be no spaces in the list, unless the Item name specifically has a space in it such as “White Noise”.
Note: The last entry is not followed by a comma or period.
Example:
A pop-up menu system is shown below.
If we don’t define a list then the list entries will be read directly from the Oscillator module (OK, we expect the Value Out plugs to be uni-directional, but in the case of this module it can extract the item list from the module that this plug is connected to; trust me, it does!)
Limiting our choices:
If we want to limit the choices available in the list then we need to pre-define our list. This is done in the properties for the Popup Menu module.
Note: You must define the list options before connecting the Popup Menu to any other modules, otherwise the Choice and Item List entry boxes will be “greyed out”, and the list will be defined by the Patch Memory List3 and the module it’s reading the list items from.
In this case we use the list: Sine=0,Saw=1,Triangle=3,Pulse=4 which limits the choices available by leaving out Ramp=2,White Noise=5,Pink Noise=6.
The choice option requires no programming, it just selects the ID number from the list supplied.
This system will work with any DSP module that uses the green List Entry plug. Just remember that the first item in the enumerated list will be 0, and not 1.
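To make the numbering concrete, here’s the same enumerated list written out as a plain C++ enum. This is just an illustration of how the Choice integer maps onto the item names, not SynthEdit code:

    #include <iostream>
    #include <string>

    // The same enumerated list as above: Sine=0,Saw=1,Triangle=3,Pulse=4
    // (Ramp=2 is deliberately left out, but the remaining IDs keep their numbers).
    enum Waveform { Sine = 0, Saw = 1, Triangle = 3, Pulse = 4 };

    std::string itemName(int choice)
    {
        switch (choice)
        {
        case Sine:     return "Sine";
        case Saw:      return "Saw";
        case Triangle: return "Triangle";
        case Pulse:    return "Pulse";
        default:       return "(not in the list)";
        }
    }

    int main()
    {
        std::cout << itemName(0) << "\n"; // prints "Sine" - the first item is 0, not 1
        std::cout << itemName(3) << "\n"; // prints "Triangle"
    }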

This is how the properties panel for the Popup Menu module should read: (Unless you want different options of course)

Programmers notes about Patch Mem:

The range attributes are specified in the xml for the parameter; they are called minVal and maxVal, the default range being 0 to 10.

A patch memory is pretty much a ‘stub’ box module containing an SE entity called a Parameter. The module simply communicates to the parameter value using its pins. A parameter is in effect a memory whose value is stored in presets and which changes for every preset, and can of course be of int, float, bool, blob or string type.

By using xml, a parameter can be linked to either GUI or DSP plugs. The linked pin(s) won’t be exposed, but are used for sending or receiving data to/from the parameter. Therefore, a parameter also allows communication between GUI and DSP.

You must keep in mind that when the GUI is closed, the GUI part of a module (including a PM) effectively does not exist. However, a parameter can still transmit its value to any DSP input plug it is linked to, or receive the value from a DSP output plug it is linked to. Once the GUI is opened (but not until then), it will transmit its value to a GUI plug it is linked to.

SynthEdit Sub Controls:- Part 3 Data Manipulation

These modules change the value of a GUI variable. They may or may not also convert the value from one GUI data type to another.

The dB to Animation module is of limited use except in the SynthEdit VU Meter prefab. (“Unfortunately the ‘dB to animation’ module is specific to the SynthEdit VU Meter image, which is copied off real VU Meters and is not any kind of ‘nice’ formula. It can’t be used on meters with a different scale.” – Jeff McClintock.)

The Float Function modules provide a generic conversion function. Both audio processing (DSP) and GUI versions are provided.

The supported functions are shown below:
*, /, +, -, ^, sin, cos, tan, asin, acos, atan, sinh, cosh, tanh , exp, log, log10, sqrt, floor, ceil, abs, hypot, deg, rad, sgn, min, max. This list applies to both the DSP and GUI modules.

DSP/Audio Float Function

DSP Float function

Left Hand Side Plugs:
-> A:- (Floating Point) Input value
-> Formula: B=:- (Text) Function applied to the signal flow from A to B, for example B= A*2
Right Hand Side Plugs:
-> B:- (Floating point) Output Value
The audio version (as usual) sends data in just one direction.
For example, to double the amplitude of a ‘float’ signal, open the properties screen and enter the formula A * 2 in the Formula: B= field.

GUI Float Function:

GUI Float Function

The GUI Float Function module is similar, except it works in both directions, so you have two formulae; usually one formula will be the inverse of the other. For example, if you wanted a text entry to display a knob’s value as ranging from 1-100, you could enter Formula B= A*100; it follows that Formula A= should be the inverse, which is B/100 (see the sketch after the plug list below).

Left Hand Side Plugs:
-> A:- (Floating Point) Input value
-> Formula: A=:- (Text) Function applied to signal flow from B to A
-> Formula: B=:- (Text) Function applied to signal flow from A to B
Note: If either the A or B function is left as an empty string or the value 0, signal flow from that pin will be disabled.
Right Hand Side Plugs:
<- B:- (Floating point) Output Value
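In other words, the two formulae are simply a function and its inverse. Here’s a minimal C++ sketch of the 1-100 display example above (plain C++ rather than SynthEdit’s own formula evaluator):

    #include <cassert>

    // Sketch of the 1-100 display example:
    // Formula B= A*100 (knob value to displayed value)
    double formulaB(double a) { return a * 100.0; }

    // Formula A= B/100 (displayed value back to knob value) - the inverse.
    double formulaA(double b) { return b / 100.0; }

    int main()
    {
        double knob  = 0.25;
        double shown = formulaB(knob);     // 25.0 appears in the text entry
        assert(formulaA(shown) == knob);   // editing the text moves the knob back
    }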

An example of using these modules is shown below in the SynthEdit VU Meter prefab. We take the audio input and feed it into the Volts to Float module, where the signal level is converted to floating point at a reduced rate (conversion rate = 60 Hz), which also gives a suitable response time for the meter to be useful, and the type of response is set to dB VU to suit a VU meter readout.
The next step is to use a PatchMemory Float Out3 to convert the data from Audio/DSP to GUI, and to take the value output (not the Animation Position output) and feed it via the Float Function with these values for the Formulae:

Formula A= B-18
Formula B= A+18
From here the data is sent via the dB to Animation module to convert the data to fit the scale used for the VU meter; this is where we finally use Animation Position to move the “meter” to the position that matches the audio input on the VU “scale”.
The actual meter we see on the panel uses two Image2 modules. The top one which displays the meter needle is connected to the Animation Position data, and the lower Image has no connections as it’s just the meter scale.

VU Meter


These converter modules change data from one type to another without affecting the value (a sketch after the list below illustrates the mappings).
I’m not including the BLOB datatype converters, as this is a rather specialist data format which is used for audio analysis etc.
Bool to Float: Converts a Boolean input to a Floating Point value. For example: False =0, True =10
Bool to Int: Converts a Boolean input to an Integer output. False = 0, True = 1
Bool to Text: Converts a Boolean Input to the equivalent Text string. False = False, True = True.
Float to Bool: <=0 = False, >0 = True
Float to Double: Converts ordinary 32-bit Floating Point values to 64-bit Floating Point values (Double Precision)
Float to Integer: Converts the Floating Point input to the nearest whole number. For example a Floating Point value of 3.4 would output Integer 3, and a Floating Point value of 3.6 would result in an Integer output of 4.
Float to Text: Converts a Floating Point value to a numeric Text string, so 3.1415962 will display these numbers in a text box. The number of digits after the decimal point is set by the Decimal Places plug, so with Decimal Places set to 3 the input value will be truncated to 3.141.
Int to Bool: Converts an Integer input to a Boolean output. 0 = False, 1 = True.
Int to Float: Converts an Integer value to a Floating Point value (The starting point being Integer the result will obviously be a whole number).
Int to Text: Converts an Integer value to a numeric Text string.
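As a rough guide to what these conversions do, here’s a small C++ sketch following the descriptions above (the 0/10 Bool-to-Float mapping, the rounding, and the truncation to a set number of decimal places come from the list; the function names are just made up for the example):

    #include <cmath>
    #include <iomanip>
    #include <iostream>
    #include <sstream>
    #include <string>

    // Bool to Float: False = 0, True = 10 (as described above).
    float boolToFloat(bool b)  { return b ? 10.0f : 0.0f; }

    // Float to Bool: <= 0 is False, > 0 is True.
    bool  floatToBool(float v) { return v > 0.0f; }

    // Float to Integer: rounds to the nearest whole number (3.4 -> 3, 3.6 -> 4).
    int   floatToInt(float v)  { return static_cast<int>(std::lround(v)); }

    // Float to Text: the description above says the value is truncated
    // (3.1415962 with 3 decimal places -> "3.141"), so this sketch truncates rather than rounds.
    std::string floatToText(float v, int decimalPlaces)
    {
        const double scale = std::pow(10.0, decimalPlaces);
        const double truncated = std::trunc(static_cast<double>(v) * scale) / scale;
        std::ostringstream os;
        os << std::fixed << std::setprecision(decimalPlaces) << truncated;
        return os.str();
    }

    int main()
    {
        std::cout << boolToFloat(true)          << "\n"; // 10
        std::cout << floatToBool(-1.0f)         << "\n"; // 0 (False)
        std::cout << floatToInt(3.6f)           << "\n"; // 4
        std::cout << floatToText(3.1415962f, 3) << "\n"; // 3.141
    }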

Can I use SynthEdit V 1.5?

As of 15th July 2024, yes, you can now use V1.5 for serious projects. It’s now out of Beta testing and is in a stable form. Unfortunately it will not run on older versions of Windows such as Windows 7. On Windows 8 it may run, but with some issues.

Can I use the new modules from V1.5 in V1.4?
They will work, but (and this is a big but) you could encounter serious issues if you use them in a VST, as they are not optimised for V1.4.

What’s New in V1.5? (Stable, but new additions may be made in the future)

System Requirements/Specifications.

  • Requires Windows 10 to run reliably.
  • macOS plugins require macOS 10.9 (Mavericks) or better.

New Features. (Subject to change)

MIDI 2.0 and MIDI Polyphonic Expression (MPE) support.
MIDI output support in plugins (depends on DAW support)
Apple Silicon (ARM) support.
VST3 plugins are supported on mac (not all macOS DAWs support VST3).
SFZ (Sample Playback) now supported.
Windows in the editor are now ‘zoomable’ (<CTRL> mouse-wheel).
Searchable module browser.
New oversampling mode for control-signals (add * to pin name)
‘Export as JUCE’ supports additional plugin formats in some cases (AAX, CLAP, Standalone)
Faster loading of projects.
New XML-based project format (human readable)
‘Ignore Program Change’ feature returns.
VST3 bypass parameter support (add a Bool PatchMemory called “BYPASS”)

Known Issues.

MIDI out from VST3 plugins depends on DAW support. Some DAWs, like Cubase, do not support this.
Partially transparent images may render too dark on macOS.
Parameter limits (i.e. high-value and low-value) are not enforced in the SynthEdit editor but are enforced by the DAW in plugins. This behaviour is needed to support older projects.
It seems that some older graphics cards/PCs, even if they are running Windows 10, can cause SE 1.5 to crash on start-up; there appears to be no solution apart from a new PC. It looks like it’s caused by old/buggy driver software which doesn’t support some of the graphics methods used in V1.5.

Universal VST 3 plug-ins.

A universal plugin is one that can run on either macOS or Windows. SynthEdit now produces universal VST3 plugins. This means you can offer the same download to macOS and Windows users. It will ‘just work’. SynthEdit universal VST3s support both Intel Mac computers and also ‘Apple Silicon’ (ARM) based ones such as the M1, M2, and later.
As a result of being universal, VST3 plugins now take a little more disk space and contain more files than with previous versions of SynthEdit.
Note: Universal VST3 plugins might fail to scan in some DAWs. A workaround is to delete the ‘MacOS’ folder inside the VST3 plugin using some free software very kindly supplied by Davidson: https://drive.google.com/file/d/1sozlsa0Xgzb4KPSn707g1nATNrZaW9Mz/view?usp=sharing. You do need to be careful using this software, however. There is a read-me with the software which you must read.

MIDI 2.0

SynthEdit 1.5 understands and works with MIDI 2.0.
This should work automatically in most cases. However, if you are using 3rd-party MIDI plugins you will need to use a MIDI Converter module to convert their incoming MIDI to version 1.0. You don’t need any converter on the output, SE will convert it back to MIDI 2.0 as needed. 

MPE support implementation for various DAWs:

Bitwig:
Note expression works for VST3 plugins. I (Jeff) set up my controller as a “Seaboard RISE” and set the bend range to 48.

Cubase:
Add an Instrument Track containing your VST3 plugin.
In the Inspector change the track MIDI input from “All MIDI Inputs” to your MPE controller. e.g. “Seaboard block”
In the Inspector open the “Note Expression” section. Tick “MIDI as Note Expression”.
Select “Tuning”, in the box below and assign it to “Horizontal/X”
Select “Brightness”, and assign it to “Vertical/Y”
Assign “Poly Pressure” to “Pressure(P)”

Ableton Live:
Ableton Live on macOS supports MPE for Audio Unit plugins.
Ableton Live (Windows) does not currently support VST3 Note Expression, but you can get MPE to work by following the steps below:

Firstly you need an MPE Control module to allow you to force the plugin into MPE Mode.

In Ableton, insert your VST3 plugin, then ensure that the VST plugin’s MPE Mode is set to “On”. Open Ableton’s settings and ensure that Ableton’s MIDI controllers MPE tick-box is Off. Yes, that’s correct: select Off! (On is actually off; very confusing, but blame Ableton’s programmers.)

You then need to set up 16 MIDI tracks in Ableton to route all the channels to the VST3 plugin (If you don’t do this Ableton will merge all channels into one).

This page explains how to set up several tracks to handle all the MIDI channels of MPE in Ableton.
https://support.roli.com/support/solutions/articles/36000019096-ableton-using-the-seaboard-rise-grand-with-ableton-live
Note that the automatic MPE support in Ableton 11 does not work with VST3 plugins yet.

Reaper:
With Reaper, MPE with Audio Unit plugins works provided the instrument has an ‘MPE Control’ module, and the user switches it to ‘MPE On’.
For VST3 plugins I (Jeff) needed to use the same export setting as for Ableton Live
(‘MPE Emu’, and enable MPE from the MPE Control module). But it worked right away on a single instrument track.

New program features.

Preset Modified:
A new option in SE 1.5 is the ‘Preset Modified’ indicator. You can use it to add an indicator to your preset menu to show that the user has modified an existing preset in some way.
Below we use the ‘Concat’ module to add an asterisk (*) to the preset name whenever the preset has been modified.
The asterisk is entered into the ‘List to Text’ module as the ‘item list’. i.e.: “,*” (a comma followed by an asterisk).

Cubase “Bypass” button:
To support the Cubase bypass button you will need to add a dedicated ‘Patch Parameter – Bool’ to your plugin. Rename it as “Bypass”.
Then you can hook that up however you prefer to implement the bypass. For example, you can connect it to a switch module that sends the input signal directly to the output whenever the bypass mode is enabled.

How to emulate “Ignore Program Change”
This enables the ‘Ignore Program Change’ feature to work in plugins (previously it worked only in the Editor). With it, when you change a preset, one or more controls will stay as they were, unchanged. This is useful for controls like ‘Master Volume’ or ‘Master Tuning’ which you might not want to change with each individual preset.

Saving projects in VST2 format:
SynthEdit 1.5 (and future later versions) does not directly support the saving of VST2 plugins (only VST3 is supported).
However, some projects can also be loaded in the older version (SynthEdit 1.4) and exported as VST2 plugins from there.
To save some hassle SE 1.5 export has a tick-box option ‘Create a VST2 using SE 1.4’. All this does is automatically launch SynthEdit Version 1.4 (if it’s installed) and export the same project from there.
Limitations:
The project file you are converting must be compatible with the SE 1.4 file format (*.se1), not the newer *.synthedit file format
The project must rely only on 64-bit modules that are SE 1.4 compatible
You must have the latest SE 1.4 installed (Build number 695 or later)

Multi-channel VST plugins:
SynthEdit plugins with more than 2 inputs or outputs can now load into Cubase as multichannel plugins (1 ‘bus’, many channels).
Alternatively, when exporting you can choose “Outputs as Stereo Pairs” which will create a plugin with multiple buses, each with two channels.
Note: If you change these options, Cubase may fail to recognize the change unless you delete Cubase’s existing plugin cache file (VST3plugins.xml) and rescan the plugin.
Some DAWs like Ableton Live might not recognize multichannel plugins (only ones with the “stereo pairs” option).

I see that V 1.5 can be downloaded, is it OK to use, or should I wait?
If you are only going to use it to see what’s new, and try out the new functions and modules, yes it’s fine. From experience it may not run (or may be glitchy and buggy) on older PCs.
It will only run on Windows 10 or 11.
Sorry Windows 7 and 8 users, but your options are:
1) Upgrade your PC
2) Stay with V1.4.
Just remember it’s still undergoing Beta testing and is subject to updates and changes; some of the newer modules are still a work in progress so are not fully functional.
Use with care as updates may well “break” an existing design made with the current version number of V1.5
As always with Beta software bug reports are welcome (NOT HERE) but at the “GroupsIO” message board: groups.io/g/synthedit/messages using the hashtag #bugreport. It’s free to join up and you’ll find a lot of help and support on there.

SynthEdit Sub Controls:- Part 1 the basics.

Sub controls provide a more customized user interface than standard controls but require more expertise to use effectively. They provide a lower-level access to the various visual elements on your control panel. Before sub-controls, SynthEdit provided only pre-made controls, like the Slider Control. Now we have a range of modules grouped under the category of “Sub-Controls” that we can use to create our own customized controls.

To understand sub-controls, it’s good to have some background on how these particular controls work.
Although you can’t see them, a slider (unlike the Knob control, a slider can’t be opened up to see its structure) has several interacting components… Moving the slider updates the numeric text readout, and typing a new value into the readout will move the control to that value (provided it’s within the range that’s set for the slider).
So there’s a two way communication between these two components.
Your computer typically updates its graphics 60 times per second, yet the control’s output is an audio-rate signal (typically 44,100 samples per second). So there’s a rate-conversion happening inside this module to make sure that a control value change doesn’t get missed due to the difference in sampling rates and methods.
The current position of the slider is stored in its internal patch-memory, and the slider will move automatically to reflect the current patch.
With this standard SynthEdit Slider the layout of the elements is fixed. There’s no way, say, to put the numeric readout at the top, as it’s a compiled SEM, not a prefab like the knob.

Important Notes:
GUI to DSP conversion: When creating sub-controls we must always convert the GUI float values to DSP voltages. DSP modules cannot be connected directly to GUI controls, and if the correct conversion is not used the controls will behave unpredictably, and control signals will not update properly. You should always use a Patch Memory or Bridge module to communicate between the two.
Float to Volts: Although you can connect Floating Point plugs directly to Voltage plugs, it’s strongly advised to use a Float to Volts converter so that the control value will pass correctly from the Float to the Voltage plugs. If you don’t, values can be missed or passed incorrectly, or the module operation could be “glitchy”.

The reason for having sub-controls is to separate the control into its constituent parts. This gives you more flexibility to customize the control. Here’s an equivalent sub-control based knob. It’s built inside a Container module.
Go to the module browser, or Menu Insert->Controls->Knob to load the prefab.

Here’s the internal structure.

Internal structure of the control knob

How it works: Three categories of Sub-Control:
1) Patch Memory:
The main hub of all the data exchange is the Patch Memory module. It handles the two-way communication between the graphical or GUI elements (with light blue backgrounds) and the audio-processing or DSP elements (with grey backgrounds). Note that some modules, such as the Patch Memory modules, have both light blue and grey backgrounds – these act as a “bridge” between GUI and DSP.
The Patch Memory module also handles any MIDI Automation of the control and handles switching between presets (patches).
2) (Bitmap) Images and Text Entry4:
The second type of module here is the graphical controls. These are the Bitmap Image and Text Entry4 boxes. They accept the user input value and display the control’s current value.
These graphical elements use the Patch-Memory module as a sort of data ‘hub’. The value input from any connected module is transferred to all the other connected modules, keeping them in sync. The only connection which is a “one way street” is the DSP output. You can see from the arrow heads that the GUI plugs (light blue background) are bi-directional, as we would expect. Updating the numeric-entry box moves the knob and vice-versa. These signals are not constantly sampled like audio signals but are event-driven, so the knob consumes CPU only when being moved. Compared to a DSP audio signal, the data flow to and from the knob is not constant and is at a much slower rate.

Useful Info: DSP data is always one way traffic. GUI data is almost always a two-way data flow to and from each control, although you can make GUI controls “read only”.

Important Note: Although you could use the filename plug on the Image2 control to switch between images dynamically (say for a colour or style change) you should not do this, as it will force a GUI restart/refresh and clear ALL Patch Memory values. This is to prevent the control panel view “glitching” .


3) Float to Volts conversion.
The third type of module here is datatype conversion, in this case the Float to Volts module. Its purpose is to bridge the Patch Memory module’s numeric Floating Point value with the DSP Voltage that is sent via the IO Module. You could connect directly to the Floating Point Value Out plug, but it’s not advisable.
The signal leaves the Patch-Mem module as a DSP connection, but still at the slower rate generated by the GUI graphics system. The Float-to-Volt smooths out and up-samples to the correct signal rate suitable for driving SynthEdit’s various audio modules, and gives you control over how smooth the conversion should be. Less smoothing uses less CPU, but may sound ‘stepped‘ or ‘zippered‘.
It is good practice to use the Float To Volts conversion module to make sure the control works smoothly and no data changes are missed by being out of sync with the DSP modules you will connect it to.
In SynthEdit audio rate signals are called Voltages because SynthEdit is simulating an old-school Voltage controlled Synthesizer.
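Conceptually the smoothing behaves like a simple one-pole low-pass filter applied to the control value at audio rate. The sketch below is only an illustration of that idea, with an assumed smoothing coefficient; SynthEdit’s own Float to Volts module may well be implemented differently:

    #include <cstdio>

    // Conceptual sketch: smooth a slowly-updated control value at audio rate.
    // 'coeff' close to 1.0 = heavy smoothing (no zipper noise, keeps processing longer),
    // closer to 0.0 = light smoothing (cheaper, but the GUI's step changes become audible).
    struct ControlSmoother
    {
        double current = 0.0;
        double coeff   = 0.999;   // per-sample smoothing factor (assumed value for illustration)

        double process(double target)
        {
            current = coeff * current + (1.0 - coeff) * target;
            return current;
        }
    };

    int main()
    {
        ControlSmoother smoother;
        // The GUI sends a new value only occasionally; the DSP side fills in the gaps.
        for (int sample = 0; sample < 5; ++sample)
            std::printf("%f\n", smoother.process(10.0)); // ramps smoothly towards 10 V
    }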

The general structure of a Sub-Control patch is:

Customizing the control.
The numeric input is optional, so it can just be deleted if you don’t need it. The same goes for the rotating knob image; if you don’t like it, it can be replaced with another similar image.
Also when the Knob prefab container is unlocked, and open, you can place the elements however you like, enabling you to rename the Knob, and move the title above, below or to one side of the control.
To rename the knob select the title’s Text Entry4 box, then in the properties panel change the “Text” setting from “Knob” to “Voltage” (or however you want to name it).
Adding a Voltage readout.
Adding a voltage readout to the knob control is quite a simple modification.
As you can see below it’s still based on the original prefab structure, but with the addition of two new GUI modules:
2) The Float To Text module is used to convert the scaled up Floating Point value to a text string. By changing the integer value on the Decimal Places plug we can set how many figures will appear after the decimal in the readout.
3) The Text Entry4 module displays the output voltage, and if you click in the text box and alter the value, the knob will move to reflect this new value. If you don’t want this behaviour then just untick the “Writeable” check box in properties.
4) Changing the control name: If you select the Text Entry4 control that displays the text “Knob” in the box, you can edit the text that it displays in the “Text” box in the properties panel to the legend of your choice.
Note: Editing the “Name” text box in properties only changes the displayed name of the module in the structure view, not the title on the panel view.

A customized control knob.

Reducing CPU usage in SynthEdit and your VSTs.

SynthEdit is already highly efficient in terms of CPU usage, and has several features that automatically optimize the performance of your SynthEdit creations.
However, it can’t compensate for any errors that may be made in construction, so with complex projects you may start to experience some performance issues.
Common causes of high CPU usage:
1) Putting effects in the main Synthesizer container – causes unnecessary polyphony.
2) Hanging modules – they don’t sleep and waste CPU
3) Oversampling – very heavy on CPU

Keeping it under control.
CPU usage is very dependent on the way a synth is put together. Often there are two ways to achieve the same result, with vastly different CPU usage. The more efficient your projects are, the better they will perform.
DSP = more CPU cycles.
Any module that generates or processes audio signals will use more CPU than a module that doesn’t. Audio requires that the processor is handling data at the current sampling rate set in preferences.
In contrast, sliders, knobs, and other controls or signals that don’t change often, like the MIDI to CV module’s Pitch and Velocity outputs, can be considered control signals, which are handled in a different way, and at a much slower rate.
Not quite DSP.
An envelope generator module is somewhere in-between; during its Attack and Decay segments, it’s generating something close to audio rate data, and during the sustain section of the envelope, it’s generating a flat-line, control-rate signal.

1) Adding modules to the audio path is expensive in terms of CPU usage.
2) Adding modules to the control path is usually low cost in terms of CPU cycles.

Always keep effects outside of the main Synthesizer Container.
A frequent beginner’s error is to put a reverb module inside the synthesizer’s container, where it will then get applied to each individual note played, potentially eating a large number of CPU cycles. What is usually intended is just one reverb, applied to the merged output (the reverb is added outside of the container). Always add effects outside of your main synthesizer container to keep them from “going polyphonic”. See below for how it should be done: the main Synthesizer container, followed by another container for the effect modules.

Where to place effects in your VSTs signal chain.

Polyphony is not simple.
Some modules force the signal into a monophonic format by default, for example:
Delay2, Level Adjust and Pan. When every clone shares the same settings and these modules sit at the end of the signal chain, they will be forced into Monophonic mode.
However, putting these monophonic effects between polyphonic modules imposes polyphony on them. Say you’re dealing with the setup shown below:

Example:
As it stands this won’t cause a problem, but say you then decide to add a filter (which is also polyphonic) after the Reverb JC module. Placing it after the monophonic effects then forces all the modules in between the VCA and the Waveshaper (the Delay2 and Reverb JC) into polyphonic operation.

A badly designed VST: the Moog filter will force the Delay and Reverb into being polyphonic

This is extremely wasteful of CPU resources, so either put the effects in their own container, followed by the filter, or put a voice combiner between the VCA and the Delay2 module as shown below. Even better would be to put the Delay2, Reverb JC, and the Moog Filter into their own container.

How to prevent modules from going polyphonic

Avoid “Hanging” Modules:

A hanging module is one that is connected “downstream” from the MIDI-CV module but has no output wires (see the example below).

Example of a hanging module

SynthEdit identifies this as the ‘last’ module in the voice and monitors its output. However, this module has no output wire, which causes SynthEdit to never “sleep” the voice. This situation results in very high CPU consumption. It can often happen if a chain of modules is deleted for a modification and one gets missed, so be careful to select all the modules in a chain if you are removing it; you may have had two filters and been switching between the two for testing, or something similar.
Also when you do have a situation where you’re switching between two filters don’t put the switch in the output like so:

How not to switch between filters

You just created a hanging module. When you switch between filters, the de-selected one becomes a “hanging” module and never sleeps.
The correct method is to switch both input and output as shown below, with the switches controlled by the same List Entry module. The two extra control modules use far less memory and CPU than the hanging module in the previous method, as when you switch filters, the de-selected one is removed from the signal chain and goes into “sleep” mode, conserving CPU.

The correct way of switching filters

De-Normal Numbers.
De-normal numbers are very small floating point values that are inaudible, but are still processed by SynthEdit resulting in a waste of CPU resources (they will prevent all the downstream modules from sleeping). The symptom is sudden CPU spikes, especially during note tails.
Note: CPU spikes during MIDI note-on events, or when moving controls are not an indication of a denormal number problem.

Detecting De-Normal numbers.
If you suspect you may have a problem with de-normals, use the Denormal Detector module on the output of the suspect module.
Connect the output to an LED Indicator module for visual indication. If denormal numbers are detected, you may use a Denormal Cleaner module to remove them.

Detecting denormals
Removing denormals

Once you have positively identified the de-normals, just remove the detector.
The main causes of de-normals are modules with internal feedback, i.e. filters and delay modules. These types of modules usually have de-normal removal built in, so if you suspect a denormal problem please report it to the module author; he/she may be able to release a fix.
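If you write your own modules, the usual fix is to test the magnitude of values in feedback paths and flush anything inaudibly small to zero. A minimal C++ sketch of the idea (the 1e-15 threshold is just an illustrative choice, not taken from any SynthEdit module):

    #include <cmath>
    #include <cstdio>

    // Flush values far too small to be audible to exactly zero, so that
    // downstream modules are not kept awake processing denormals.
    inline float flushDenormal(float x)
    {
        return (std::fabs(x) < 1.0e-15f) ? 0.0f : x;
    }

    int main()
    {
        // A decaying feedback path (e.g. the tail of a delay or filter).
        float feedback = 1.0f;
        for (int i = 0; i < 10000; ++i)
            feedback = flushDenormal(feedback * 0.99f);
        std::printf("%g\n", feedback); // settles at exactly 0 instead of a denormal
    }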

Oversampling, what it is.

Oversampling should be used only when it’s really needed for an improvement in sound quality, as it pushes up the CPU usage quite dramatically.
About Oversampling:
In signal processing, oversampling is the process of sampling an audio signal at a sampling frequency significantly higher than the Nyquist rate. Theoretically, a bandwidth-limited signal can be perfectly reconstructed if sampled at the Nyquist rate or above it.
The Nyquist rate is defined as twice the bandwidth of the signal.
Oversampling is capable of improving resolution and signal-to-noise ratio, and can be helpful in avoiding aliasing and phase distortion by relaxing anti-aliasing filter performance requirements. A signal is said to be oversampled by a factor of N if it is sampled at N times the Nyquist rate.
For example, if we have a signal with a Nyquist rate of 44 kHz but it is actually sampled at 88 kHz, then it is oversampled by a factor of 2; if the sample rate is taken up to 132 kHz then we are oversampling by a factor of 3.
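A quick arithmetic check of those figures:

    #include <cstdio>

    int main()
    {
        const double nyquistRate = 44000.0;   // twice the signal bandwidth (22 kHz here)

        // Oversampling factor N = sample rate / Nyquist rate.
        std::printf("%g\n", 88000.0  / nyquistRate); // 2 - oversampled by a factor of 2
        std::printf("%g\n", 132000.0 / nyquistRate); // 3 - oversampled by a factor of 3
    }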
Why oversample?
There are four good reasons for performing oversampling:
1) To improve anti-aliasing performance (the higher the sample rate, the lower the level of the aliasing by-products)
2) To increase resolution
3) To reduce noise and
4) It’s much easier to reduce aliasing distortion during sampling than after the sampling process (in fact reducing it after sampling is almost impossible).
Oversampling and Anti Aliasing.
Oversampling can improve anti-aliasing.
By increasing the bandwidth of the sampling system, the job of the anti-aliasing filters is made simpler. Once the signal has been sampled, the signal can be digitally filtered and then down sampled to the required sampling frequency.
In DSP technology, any filtering systems associated with the down-sampled audio are easier to put in place than an analogue filter system that would be required by a non-oversampled audio design.
Oversampling and effects.
This is a controversial subject, but an effect really isn’t going to get much of an improvement in sound quality (if any), while your CPU usage is going to increase substantially.

MIDI 1 and MIDI 2 in SynthEdit.

MIDI 1 (Musical Instrument Digital Interface)
MIDI is a technical standard that describes a communications protocol, digital interface, and electrical connectors that connect a wide variety of electronic musical instruments, computers, and related audio devices for playing, editing, and recording music.

MIDI Channels

This is a subject that seems (for some people) to cause confusion.
A MIDI channel allows a specific device to receive its own set of MIDI data, so any MIDI data sent on, say, channel 1 will only be received by a connected device (such as a MIDI synthesizer) which is set to use MIDI Ch 1.
When a MIDI device is set to “All” then it will receive data from all the other interconnected devices.
This allows us to control separate devices from separate sources. So in your DAW you could have three different keyboards controlling three different VST synthesizers, and a control surface set up as channels 1,2,3 for the individual Synthesizers, and channel four as the control channel for a mixer.
A single MIDI cable can carry up to sixteen channels of MIDI data, each of which can be routed to a separate device. Each interaction with a key, button, knob or slider is converted into a MIDI event, which specifies musical instructions, such as a note’s pitch, timing and loudness. One common MIDI application is to play a MIDI keyboard or other controller and use it to trigger a digital sound module (which contains synthesized musical sounds) to generate sounds, which the audience hears produced by a keyboard amplifier. MIDI data can be transferred via MIDI or USB cable, or recorded to a sequencer or digital audio workstation to be edited or played back.
Many groups (Tangerine Dream for one) and studios have embraced this technology, as it’s much easier to transport, nowhere near as heavy, unlike analogue synthesizers always stays in tune, and is a lot cheaper than a stack of hardware synthesizers. Which is easier to transport, maintain and set up: three laptops and sound interfaces feeding into a mixer, or three Moog Modulars?

Preserving a performance.
A file format that stores and exchanges the data is also defined.
The advantages of MIDI include small file size, ease of modification (there are many software MIDI editors) along with a wide choice of electronic instruments, synthesizers, software synthesizers, or digitally sampled sounds.
A MIDI recording of a performance on a keyboard could sound like a piano or other keyboard instrument; however, since MIDI records the messages and information about their notes and not the specific sounds, this recording could be changed to many other sounds, ranging from synthesized or sampled guitar or flute to full orchestra.

Ease of communication.
Before the development of MIDI, electronic musical instruments from different manufacturers could generally not communicate with each other. This meant that a musician could not, for example, plug a Roland keyboard into a Yamaha synthesizer module. With MIDI, any MIDI-compatible keyboard (or other controller device) can be connected to any other MIDI-compatible sequencer, sound module, drum machine, synthesizer, or computer, even if they are made by different manufacturers.

MIDI 2.0
SynthEdit MIDI pins can handle both MIDI 1.0 and MIDI 2.0 standards.
From Version 1.5 many SynthEdit modules accept either MIDI 1 or MIDI 2 but send MIDI 2.

About MIDI 2.
Back in 1983, musical instrument companies that were in fierce competition nonetheless banded together to create a specification to allow musical instruments to communicate with each other, and with computers. This was MIDI 1.0, the first universal Musical Instrument Digital Interface.
Nearly four decades on, we can see that MIDI was crafted so well that it has remained useful and relevant. Its ability to join computers and musical instruments has become a major part of live performance, recording, controlling mixers, programming synthesizers, and even stage lighting.
Now, MIDI 2.0 is taking the technology even further, deliberately retaining backward compatibility with MIDI 1.0 equipment and software already in use.

Here’s why MIDI 2.0 is the biggest advance in music technology in decades:

MIDI 2.0 Means Two-way MIDI Conversations
MIDI 1.0 messages were unidirectional: from the transmitter to a receiver. MIDI 2.0 is bi-directional and changes MIDI from a monologue to a dialogue between computers and instruments.
For example, with the new MIDI-CI (Capability Inquiry) messages, MIDI 2.0 devices can talk to each other, and auto-configure themselves to work together. They can also exchange information on functionality, which is key to backward compatibility.
MIDI 2.0 software and equipment can “talk” to a device, and if it doesn’t support MIDI 2.0, then it can simply switch to the old MIDI 1.0 protocol.

Higher Resolution, More Controllers and Better Timing
To deliver an even higher level of musical and artistic expressiveness, MIDI 2.0 re-imagines the role of performance controllers, which is the aspect of MIDI that converts human performance gestures to control signals computers can understand.
Controllers have become easier to use, and there are more of them: over 32,000 controllers, including controls for individual notes.
Enhanced, 32-bit resolution gives controls a smoother, continuous, “analogue” feel. Note-On options were added for articulation control and setting precise note pitch.
In addition to this, dynamic response (velocity) has been improved.
What’s more, major timing improvements in MIDI 2.0 can also apply to MIDI 1.0 devices; in fact, some MIDI 1.0 gear can actually “retrofit” certain MIDI 2.0 features.

Profile Configuration
MIDI gear can now have Profiles that can dynamically configure a device for a particular user scenario.
If a control surface queries a device with a “mixer” Profile, then the controls will map to faders, pan-pots, and other mixer parameters.
But when connected with a “drawbar organ” Profile, that same control surface can map its controls to virtual drawbars and other keyboard parameters, or map to dimmers if the profile is a lighting controller. This saves enormously on setup time, improves workflow, and eliminates time consuming manual programming.

Property Exchange
Profiles set up an entire device, and Property Exchange messages provide specific, detailed information sharing.
These messages can discover, retrieve, and set many properties like preset names, individual parameter settings, and unique functionalities.
Essentially, everything one MIDI 2.0 device needs to know about the MIDI 2.0 device it’s connected to.
For example, your DAW or recording software could display everything you need to know about a synthesizer on-screen, effectively bringing hardware synthesizers up to the same level of programmability as their software counterparts.

Built for the Future.
Unlike MIDI 1.0, which was initially tied to a specific hardware implementation, the new Universal MIDI Packet format makes it easy to implement MIDI 2.0 on any digital transport (like USB or Ethernet). To enable future applications that we haven’t yet developed, there’s ample space still in the standard reserved for brand-new MIDI specifications and messages.

For more detailed information (and it’s very detailed and complex) on MIDI 2 standards and protocols visit the MIDI organisation’s website:
https://www.midi.org/midi-articles/details-about-midi-2-0-midi-ci-profiles-and-property-exchange

Converting MIDI 1 to MIDI 2
SynthEdit provides a MIDI converter module that can convert MIDI 1 to MIDI 2 and vice versa.
This is useful for maintaining compatibility with MIDI 1 only modules.
MIDI 2.0 is now the default MIDI standard, because MIDI 1, MIDI MPE, and Steinberg Note-Expression can all be converted losslessly to MIDI 2. However it’s not always possible to convert MIDI 2 to MIDI 1.
The SynthEdit SDK now provides helper classes that will convert MIDI for you.
This allows you to write your MIDI code without having to handle all the different types of MIDI.
Note: It’s recommended that you write your modules to use MIDI 2.
The SDK contains the ‘MIDI to Gate’ module that shows how to write a MIDI 2 module that also accepts MIDI 1 transparently.

You can intercept the MIDI signals at any time before they reach the Patch Automator.
Note that the MIDI-CV module also secretly sends its MIDI data there too.
By default the MIDI in SE 1.5 is Version 2.0. The MIDI-In module converts everything to MIDI V2.0. You can send Version 1.0 as well, but SE’s own MIDI modules will tend to convert it back into Version 2.0 if they get a chance.

MIDI Messages and (basic) Standards.
MIDI messages are made up of 8-bit bytes that are transmitted serially at a rate of 31.25 kbit/s. This rate was chosen because it is an exact division of 1 MHz, the operational speed of many early microprocessors. The first bit of each word identifies whether the byte is a status byte or a data byte, and is followed by seven bits of information. A start bit and a stop bit are added to each byte for framing purposes, so a MIDI byte requires ten bits for transmission.
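For example, a MIDI 1.0 Note On message is a status byte (high bit set) followed by two data bytes (high bit clear). A small C++ sketch of how those three bytes fit together, using middle C as the note:

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        const std::uint8_t channel  = 0;    // channels are numbered 1-16, stored as 0-15
        const std::uint8_t note     = 60;   // middle C
        const std::uint8_t velocity = 100;  // how hard the key was struck (0-127)

        // Status byte: high bit set. 0x90 = Note On, low nibble = channel.
        const std::uint8_t status = 0x90 | channel;

        // Data bytes: high bit clear, so only 7 bits (0-127) of information each.
        std::printf("%02X %02X %02X\n",
                    (unsigned)status, (unsigned)note, (unsigned)velocity); // prints "90 3C 64"
    }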

A MIDI link can carry sixteen independent channels of information. The channels are numbered 1–16. A device can be configured to only listen to specific channels and to ignore the messages sent on other channels (omni off mode), or it can listen to all channels, effectively ignoring the channel address (omni on). An individual device may be monophonic (the start of a new note-on MIDI command implies the termination of the previous note), or polyphonic (multiple notes may be sounding at once, until the polyphony limit of the instrument is reached, or the notes reach the end of their decay envelope, or explicit note-off MIDI commands are received). Receiving devices can typically be set to all four combinations of omni off/on and mono/poly modes.

A MIDI message is an instruction that controls some aspect of the receiving device. A MIDI message consists of a status byte, which indicates the type of the message, followed by up to two data bytes that contain the parameters. MIDI messages can be channel messages sent on only one of the 16 channels and monitored only by devices on that channel, or system messages that all devices receive. Each receiving device ignores data not relevant to its function.  There are five types of message: Channel Voice, Channel Mode, System Common, System Real-Time, and System Exclusive.

Channel Voice messages transmit real-time performance data over a single channel. Examples include “note-on” messages which contain a MIDI note number that specifies the note’s pitch, a velocity value that indicates how forcefully the note was played, and the channel number; “note-off” messages that end a note; program change messages that change a device’s patch; and control changes that allow adjustment of an instrument’s parameters. MIDI notes are numbered from 0 to 127 assigned to C−1 to G9. This corresponds to a range of 8.175799 to 12543.85 Hz (assuming equal temperament and 440 Hz A4) and extends beyond the 88 note piano range from A0 to C8.
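Those pitch figures follow from the standard equal temperament formula, frequency = 440 x 2^((note - 69) / 12), where MIDI note 69 is A4 at 440 Hz. A quick check in C++:

    #include <cmath>
    #include <cstdio>

    // Equal temperament, A4 (MIDI note 69) = 440 Hz.
    double noteToHz(int note)
    {
        return 440.0 * std::pow(2.0, (note - 69) / 12.0);
    }

    int main()
    {
        std::printf("%f\n", noteToHz(0));    // ~8.175799 Hz (C-1, the lowest MIDI note)
        std::printf("%f\n", noteToHz(127));  // ~12543.85 Hz (G9, the highest)
    }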

System Exclusive messages
System Exclusive (SysEx) messages are a major reason for the flexibility and longevity of the MIDI standard. Manufacturers use them to create proprietary messages that control their equipment more thoroughly than standard MIDI messages could.  SysEx messages use the MIDI protocol to send information about the synthesizer’s parameters, rather than performance data such as which notes are being played and how loud. SysEx messages are addressed to a specific device in a system. Each manufacturer has a unique identifier that is included in its SysEx messages, which helps ensure that only the targeted device responds to the message, and that all others ignore it. Many instruments also include a SysEx ID setting, so a controller can address two devices of the same model independently. SysEx messages can include functionality beyond what the MIDI standard provides.

Time code
A sequencer can drive a MIDI system with its internal clock, but when a system contains multiple sequencers, they must synchronize to a common clock. MIDI Time Code (MTC), developed by Digidesign, implements SysEx messages that have been developed specifically for timing purposes, and is able to translate to and from the SMPTE time code standard. MIDI Clock is based on tempo, but SMPTE time code is based on frames per second, and is independent of tempo. MTC, like SMPTE code, includes position information, and can adjust itself if a timing pulse is lost. MIDI interfaces such as Mark of the Unicorn’s MIDI Timepiece can convert SMPTE code to MTC.

More Info: https://en.wikipedia.org/wiki/MIDI

SynthEdit:- How Plugs, Patch Cables and Data are used.

Here I’m listing some of the more commonly used Module plug names, the types of data they carry, and their usage along with useful notes.

BLOB data module

Not used very often in SynthEdit, except for passing large amounts of binary data into or between modules when using or creating samplers or sample players.
In SynthEdit it has a built-in limit of 5 MB.
A “BLOB” is a common acronym for “Binary Large Object”, which means it’s an object holding a large amount of binary data. Some languages have native blob types, but C++ doesn’t. Nevertheless, creating a blob is simple enough: you just create an array of bytes, typically an array of chars. This might be confusing, though, as an array of chars has a special meaning in C++ – it’s also a string.
For more information on BLOBS you really need to read in depth C++/DSP programming tutorials and documentation.
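For the curious, “just create an array of bytes” in modern C++ usually looks something like this (a sketch only, not the SynthEdit SDK’s own BLOB type):

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main()
    {
        // A "blob" is simply a contiguous block of raw bytes.
        std::vector<std::uint8_t> blob;

        blob.push_back(0x42);   // append a single byte
        blob.resize(1024, 0);   // grow it to 1 KB, zero-filled

        // blob.data() and blob.size() give the raw pointer and length
        // that a C-style API (or a module pin) would expect.
        std::printf("%zu bytes\n", blob.size());
    }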

Boolean data module

A Boolean or Bool is a data type with two possible values:
true or false (0 or 1). It’s named after the English mathematician and logician George Boole, whose algebraic and logical systems are used in all modern digital computers.
In SynthEdit any value above 0 is considered as “True”.

Floating point module

A floating point number is a positive or negative number with a decimal point. For example, 5.5, 0.25, and -103.342 are all floating point numbers, while 91 and 0 are not. Floating point numbers get their name from the way the decimal point can “float” to any position necessary within the number.
Note: Floating point math by its nature cannot be precise when handling very large numbers, so you can’t always trust math modules when they are applied to floating point data.
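Here’s a concrete illustration of that caveat in C++: a 32-bit float has roughly 7 significant decimal digits, so whole numbers above about 16.7 million can no longer all be represented exactly:

    #include <cstdio>

    int main()
    {
        float big    = 16777216.0f;   // 2^24 - the last whole number a float can count to exactly
        float bumped = big + 1.0f;    // the +1 is lost: there is no float between 2^24 and 2^24 + 2

        std::printf("%s\n", (bumped == big) ? "16777216 + 1 == 16777216 (!)"
                                            : "unexpected");
    }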

Integer module

The Integer data type stores whole numbers that range from -2,147,483,647 to 2,147,483,647 for 9 or 10 digits of precision. Note:- The number 2,147,483,648 is a reserved value and cannot be used. The Integer value is stored as a signed binary integer and is typically used to store counts, quantities, and so on.

MIDI Module

MIDI is an acronym that stands for Musical Instrument Digital Interface. It’s a way to connect devices that make and control sound — such as synthesizers, samplers, and computers — so that they can communicate with each other, using MIDI messages.
These messages are not only the note that is played on a keyboard, it can be a wide range of data related to volume, positions of controls, how hard the key is pressed etc.
SynthEdit takes the MIDI input from your chosen device and converts it into control signals (voltages) that its modules can understand, to control those modules’ actions.
With the new MIDI2 standard this has become more complex.

Text module

The Text data type stores any kind of text data including numbers (but these are always treated as text, not values). It can contain both single-byte and multibyte characters that the locale supports. The term simple large object refers to an instance of a Text or Byte data type.

Audio/voltage module

Used to carry Audio or Control voltage signals. Voltage is essentially floating point data. It is used to send audio from one DSP module to another, or to send control voltages from one DSP module to another to control the recipient’s behaviour. DSP Floating Point and Volts pins will inter-connect, but you really should use a Float to Volts converter. You can then set the rate at which the conversion takes place and ensure that the control operates smoothly and predictably. Leaving out the Float to Volts module can result in the control being somewhat “glitchy” and unpredictable.

Control and Audio voltage plugs.

Voltage (audio):
Audio voltage plugs transmit blocks of samples to be processed at sample rate by other input audio plugs. This means that they are operating at the sample rate set in the preferences.
Voltage (Control):
Plugs used for control signals like Volts (non-audio), float, int, string, bool and BLOB work differently; they are event based, meaning that processing of a single value at a time happens only when a new value is created, for example when a control is moved/operated.
The module code is managed differently for the two kinds of plug: audio is handled by a sub-process which processes the sample blocks, while control data is handled by an “event handler”.
Accuracy:
Both kinds of data, however, are transmitted and processed on a sample rate accurate clock. Nothing technically prevents a module from sending an update to a control plug (e.g. float) 48,000 times a second, but that would be very inefficient in terms of CPU, because the handler used for the receiving modules would be processed continuously, 48,000 times a second, to process occasional single control update values.
Control plugs are not meant to transport audio data:
Jeff programmed a suitable converter module which “down samples” an audio stream to send the sample values to a float pin at a much slower rate of 60Hz or even less. Control pins are only suitable for slowly changing parameters or automations, and must never be used for audio streams.

An Example showing an Audio and control plugs:
The Oscillator:- The Audio Out plug carries a very rapidly changing audio voltage (up to the current audio stream rate), so this would naturally be an Audio voltage.
All of the left hand voltage plugs however are considered as control voltages and are not updated as frequently as the Audio output.
The ADSR2:- All of these plugs, both left and right hand side are considered as Control Voltages, and not updated as frequently. Also even though the Attack/Decay/Release sections may change quite rapidly, even the signal out plug carrying the ADSR envelope voltage is not handled as audio.

Note: Voltage plugs are always DSP Plugs, whereas Float Plugs can be DSP or GUI. You cannot connect GUI plugs to DSP plugs without a suitable converter such as a Patch Memory module.
You should never convert a DSP audio voltage to a GUI signal to process using GUI modules…it will be very inefficient and glitchy! Converting Control signals is fine as long as you use the correct modules for the job.

Patch cables.

These are our links between the plugs on the modules, much like hardware patch leads. The colours of the leads match the types of data they carry. SynthEdit will not allow you to connect incompatible data plugs and sockets; you will just get an error message:

Or if you try and connect a DSP plug to a GUI plug even if it’s the same data-type you’ll get this error message:

More about patch cables.
When you click on a cable it's highlighted in yellow. As you can see below there are also different styles of line; some have nodes (the white dots), which can be added by clicking on the line once it's selected. These allow us to “bend” the cables. Right-clicking on a selected patch cable brings up the menu shown, with the options: Straighten line (remove all bends), Delete Line, Curvey (I know it's misspelled, but I'm using the spelling from SynthEdit to save confusion), and Straight. Curvey gives us a line in the style of the one highlighted, and Straight gives the one with sharp corners. To move a node, select the cable, then click on the node and drag it to its new position.

Spare Plugs.
Where you see a plug like the one below with the title “Spare Output”, this is a self-replicating plug: as soon as you connect it, another appears below it, also titled “Spare Output”. Once connected, these plugs take their title from the plug they are connected to.

The correct way to convert a GUI Floating Point control value to a DSP Voltage control value.

You should always use a Patch Memory module to convert from GUI to DSP and vice versa. Likewise Floating Point must always be converted to Volts using a Float to Volts module to ensure the controls work smoothly with no data loss.

Converting GUI control floating point to DSP voltage


Some ways in which data plugs are used:

Hint:- (Text) This text displays as a popup yellow tooltip when the mouse hovers over a control on the VST’s panel.
Both of the items below (Menu Items and Choice) replace the older “Drop Down List” module; together they form a selection menu based on an “Enumerated List”.
Menu Items:- (Text) A comma separated text list. Provides a right-click popup context menu for a control or image; commonly used for adding a MIDI-Learn menu.
Choice or Menu Selection:- (Integer) Selects an item from the list based on its numeric position in the text.
(NOTE: An enumerated list always starts at 0, not at 1.)
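
As a small illustration of how the two plugs relate (the waveform names here are hypothetical, and the code is only a sketch of the idea): the Menu Items text supplies the labels, and the Choice integer is simply a zero-based index into that list.

```python
# Sketch: "Menu Items" is a comma separated text list; "Choice" is a zero-based index into it.
menu_items = "Sine,Saw,Square,Triangle"   # hypothetical list of items
labels = menu_items.split(",")

choice = 2                                # enumerated lists start at 0, not 1
print(labels[choice])                     # -> "Square" (the third item)
```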


Animation Position: (Floating Point) Returns the position of a knob or slider control. The value is normalized to between 0 and 1, so for controlling DSP modules it needs to be scaled up to the usual 0 to 10 Volts range; Patch Memory (Float) modules do this scaling automatically.
This Animation Position data is normally Bi-Directional.
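
The scaling itself is simple; here is a minimal sketch of what the Patch Memory (Float) module is described above as doing automatically (my own illustration, not its actual implementation), including the reverse direction for bi-directional use.

```python
# Sketch: scaling a normalized Animation Position (0..1) to the usual 0..10 Volt range.
def animation_to_volts(position: float, v_max: float = 10.0) -> float:
    return max(0.0, min(1.0, position)) * v_max    # clamp to 0..1, then scale up

def volts_to_animation(volts: float, v_max: float = 10.0) -> float:
    return max(0.0, min(v_max, volts)) / v_max     # the reverse, for bi-directional use

print(animation_to_volts(0.5))                     # -> 5.0 (volts)
```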


Mouse Down: (Boolean) Sends a “True” value when a control or image receives a left click from a mouse.

Properties.
Some settings or controls may not always be available as Plugs, they may only be available in the Module’s Properties panel:
Writeable:- (Boolean) Makes the control read-only or writeable.
Greyed:- (Boolean) Displays text in a disabled, grey style.
Background Colour: (Text)
Foreground Colour: (Text)
Ignore Program Change: (Boolean)
Mouse Down:- (Boolean) Provides a Boolean “True” signal whenever the mouse is left clicked on the image.
Read Only:- When this is set to True you can only read data from the module; it will not accept any inputs.

NOTE: Fixed value modules only have outputs, and their values are set in the properties panel, so they cannot be altered in any way from the VST's control panel. However, you can allow the VST plug-in's user to select from a bank of fixed values via a drop down list or similar selector.

Polyphony and CPU usage in SynthEdit.

SynthEdit has a default of 6 voice polyphony. You can change this, up to the maximum of 128 voices supported by MIDI. 
Note:- for polyphony to work correctly, each synthesiser must be in its own container, and each container must have exactly one MIDI to CV module.
Important Note: Each new polyphonic voice creates cloned synth modules, so keep the number of voices set to a realistic figure (after all, keyboard players can only hold down so many keys at once!); if not, you risk creating a real “CPU hog”.
NOTE: VST plug-ins internally control their own polyphony; once your module structure becomes a VST it's out of your control, so you must set your limits at the design stage.

When a MIDI to CV Module is added to a container, all the modules downstream automatically become polyphonic. You can set the maximum number of voices from the container’s properties dialog. The polyphony is confined to that container. If you bring connections out of the container via an IO Mod Module, the signal is merged back into a mono signal. This is one way to control which modules are polyphonic.
SynthEdit creates clones of the modules in a container as needed for each voice in use (you can’t see the clones, they are generated internally).
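
Conceptually it works something like the sketch below; this is only an illustration of the cloning idea, not how SynthEdit is actually implemented internally.

```python
# Sketch: cloning a per-voice signal chain for each note, up to the polyphony limit.
import copy

class VoiceChain:
    """Stands in for the per-voice modules (oscillator, filter, envelope...)."""
    def __init__(self):
        self.note = None            # MIDI note this voice is currently playing

POLYPHONY = 6                       # SynthEdit's default is 6 voices
template = VoiceChain()
voices = [copy.deepcopy(template) for _ in range(POLYPHONY)]   # the internal "clones"

for voice, note in zip(voices, [60, 64, 67]):   # play a C major chord
    voice.note = note

print(sum(v.note is not None for v in voices), "of", POLYPHONY, "voices in use")  # 3 of 6
```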

Illustration of cloned modules

SynthEdit only clones the modules it needs to. For example:

LFOs and Polyphony:
The LFO is treated differently for polyphonic modules. Here one LFO is shared by all the voices.

This is because the LFO does not depend on the MIDI to CV module; it is not connected to the MIDI to CV module anywhere in the signal path.

SynthEdit automatically analyses each synthesiser’s signal flow to minimize the number of modules that need to be cloned to sound a new voice. This helps reduce the CPU load and increases performance.
Note: If you do decide you want one LFO per voice, it will need to have some connection with the MIDI to CV module.
For example, you could connect the MIDI to CV module's Pitch Out plug to the LFO via a switch module, then turn the switch off. Although the switch is “off”, breaking the control voltage flow, SynthEdit still sees this as a connection, so you would then get one separate LFO for each note you played.
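
The difference between a shared and a per-voice LFO can be pictured like this (again, just a conceptual sketch, not SynthEdit internals):

```python
# Sketch: one LFO shared by all voices versus one independent LFO per voice.
import copy

class LFO:
    def __init__(self):
        self.phase = 0.0

shared = LFO()
voices_sharing = [shared for _ in range(6)]               # every voice refers to the same LFO
voices_cloned = [copy.deepcopy(LFO()) for _ in range(6)]  # each voice gets its own clone

print(len({id(lfo) for lfo in voices_sharing}))   # 1 -> a single LFO serves all voices
print(len({id(lfo) for lfo in voices_cloned}))    # 6 -> one LFO per voice
```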

Polyphony control

Reserve Voices
Reserve Voices are extra spare voices used only to prevent clicks when you play more notes than Polyphony allows for. 3 to 5 are usually sufficient.
Imagine you set polyphony to 3, hold down 3 notes, then hit a 4th note; what will happen? One of the 3 notes has to stop, but there is no way to stop a note instantly without it clicking. You have two options:

  1. Have some extra voices in ‘reserve’ to play the 4th note. This allows the new note to start immediately while one of the old notes is faded out. This is the best option.
  2. Have no reserve voices: SE will fade-out one of the 3 notes, then play the 4th note. This results in latency; half your notes get delayed by several milliseconds, and the fade-out ‘pop’ is more noticeable because it's not ‘masked’ by a new note.

Reserve voices don't count toward polyphony. Reserve voices are only in effect for a few milliseconds, so you only need enough to cope with however many notes you expect to trigger ‘at once’ (at exactly the same time). I don't tend to trigger more than 3 notes at the same instant, so 3-4 reserve voices is enough. This is not related to how many notes you can hold down at once; ‘polyphony’ sets that.
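
A rough sketch of the idea, using a deliberately simplified voice list (this is my own illustration, not SE's actual voice allocator): when all polyphony voices are busy and a new note arrives, the oldest note is moved onto a reserve voice to fade out while the new note starts immediately.

```python
# Sketch: polyphony = 3 with 3 reserve voices; a 4th note steals the oldest voice.
POLYPHONY, RESERVE = 3, 3
active = [60, 64, 67]          # three held notes
fading = []                    # notes currently fading out on reserve voices

def note_on(note):
    if len(active) >= POLYPHONY:
        if len(active) + len(fading) < POLYPHONY + RESERVE:
            fading.append(active.pop(0))    # oldest note moves to a reserve voice to fade
        else:
            return                          # completely out of voices; note is dropped
    active.append(note)                     # the new note starts immediately, no latency

note_on(72)
print("active:", active, "fading:", fading)   # active: [64, 67, 72]  fading: [60]
```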


SynthEdit:- Aliasing and distortion.

Audio Aliasing is an effect which occurs when converting an analogue signal into a digital one with an insufficient sampling frequency.
The result of this effect is that the high-frequency components of that analogue signal will not be correctly interpreted, and the digital signal will not be an accurate copy of the analogue one.
Analogue to Digital conversion.
When analogue signals are digitised, the analogue signal is sampled at regularly occurring points in time; in other words, the instantaneous amplitude of the analogue signal is recorded at each point to create a digital copy of the signal.
This happens very quickly in audio signals, for example, CD audio is sampled at 44.1 kHz (44,100 samples per second).
Aliasing occurs when a signal is sampled at an insufficient rate. Two different audio signals can become indistinguishable from each other once they have been sampled and converted: they have become aliases of each other.

The Nyquist sampling theorem states that:
“To avoid aliasing, the sampling frequency must be at least twice that of the highest frequency which is to be represented“. If we use the example of CD audio, a sampling frequency of 44.1 kHz means that the highest frequency which can be represented without aliasing is 22.05 kHz. For CD audio this is sufficient as the upper limit of human hearing is around 15 to 20 kHz depending on the individual.

Aliasing can occur either because the anti-alias filter in the A-D converter (or in a sample-rate converter) doesn’t have a steep enough roll-off, or alternatively because the system has been overloaded. Distortion caused by overloading the input or conversion circuitry is the most common source of aliasing, because overloads result in the generation of multiple high-frequency harmonics within the digital system itself after the anti-aliasing filtering.
Sampling images.
The sampling process is similar to a form of amplitude modulation in which the input signal frequencies are added to, and subtracted from the sample-rate frequency. In radio terms, the sum products are called the upper sideband and the subtracted products are called the lower sideband. In digital circles they are just referred to as the ‘images‘.
Unwanted Effects.
These images play no part in the digital audio process; they are essentially just a side-effect of sampling. However, they must be kept well above the wanted audio frequencies so that they can be removed easily without affecting the quality of the required audio signals. This is where all the trouble can begin. The upper image isn't really a problem, as that's easily filtered out, but if the lower one is too low in frequency it will mix with the audio we do want and, because the frequencies are similar, this will create ‘aliases’ that cannot be removed.
Unwanted guests you can’t get rid of.
This is what the aliases turn into… that guest at the party who causes bad feelings and will not leave. Once aliasing effects are there, there is no way to filter them out without causing even more audio degradation.

Spectrum of aliasing signal images

Note that in an analogue system the distortion products caused by overloads always follow a normal harmonic series, and can even give quite a pleasant sound (consider tape saturation on an old reel-to-reel recorder, or soft clipping in a valve amplifier). In a digital system, by contrast, overloading or incorrect clock frequencies cause aliasing, which results in the harmonic series being “folded back” or mirrored on itself to produce audible signals that are no longer harmonically related to the source (they are referred to as “inharmonics”).
In this very basic example, we have ended up with aliases at 2kHz and 18kHz that have no obvious musical relationship to the 10kHz source. This is why overloading a digital system sounds so nasty in comparison to overloading an analogue system.
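
As a back-of-envelope sketch of the folding itself (using my own example frequencies rather than the ones in the illustration above): any component pushed above the Nyquist frequency reflects back down into the audio band at a frequency that is generally not harmonically related to the source.

```python
# Sketch: where a frequency above Nyquist "folds back" to after sampling.
def alias_frequency(freq_hz, sample_rate=44100):
    f = freq_hz % sample_rate                            # remove whole multiples of the sample rate
    return sample_rate - f if f > sample_rate / 2 else f

# Hypothetical high harmonics created by overloading, sampled at 44.1 kHz:
for harmonic in (20000, 30000, 40000):
    print(harmonic, "->", alias_frequency(harmonic))     # 20000, 14100, 4100 Hz
```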

The SynthEdit De-tuner module.

The De-tuner Prefab Module makes tuning an Oscillator to a standard pitch really easy. The controls are self-descriptive. If your pitch CV from the keyboard is middle C, the Octave dropdown lets you transpose the pitch down by two octaves, or up by three octaves, in steps of one octave. Likewise the Note dropdown allows you to raise the pitch by up to 11 semitones, in one semitone steps. Fine is just that: it lets you fine tune by 0 to 1 semitone. Shown below is the internal structure of the De-tuner, along with the values for the semitones (aka Tune); the Octaves are in 1 volt steps, as the pitch CV is 1 volt/octave after all. You can alter these if you want… Just be sure to do that in a copy of the Prefab though, not the original.

How the De-Tuner works.

All the module does is select the required fixed value outputs (via the Dropdown Lists), add the value from the Fine tune control, and pass the result to the IO Module, where they are combined into the correct voltage to feed to the VCO.

Connecting up the de-tuner
Inside the de-tuner

These are the voltages in the Fixed Values that give us the semitone steps.

The semitone tuning values
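
Assuming the 1 volt/octave convention mentioned above, each semitone step works out to 1/12 of a volt. Here is a small sketch of how the Octave, Note and Fine values could be combined into a single pitch offset; this is just an illustration of the arithmetic, not the prefab's actual internals.

```python
# Sketch: combining Octave, Note (semitones) and Fine tune into one pitch offset,
# assuming 1 volt per octave, i.e. 1/12 volt per semitone.
def detune_volts(octaves: int, semitones: int, fine: float) -> float:
    return octaves * 1.0 + (semitones + fine) / 12.0

print(round(detune_volts(octaves=0, semitones=7, fine=0.0), 4))   # +7 semitones -> 0.5833 V
print(round(detune_volts(octaves=-2, semitones=0, fine=0.5), 4))  # 2 octaves down plus half a semitone
```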

A de-tuner usability tweak.

We can make this Prefab Module a little more “musician friendly”, however, with a little editing of the Fixed Values module:
If we edit the names of the Output plugs of the Fixed Values module connected to the “Tune” dropdown list, as shown below, then the list will show C, C#, D, Eb instead of 1, 2, 3, 4… Then when you click on the dropdown list it looks like this:

Improved de-tuner prefab