Deep Learning made easy
The term “deep learning” took off in 2006, when Geoffrey Hinton published work on recognizing handwritten digits with an accuracy above 98%, a result widely regarded as solving that problem at the state of the art. In simple terms, a neural network is a layered structure of artificial neurons, where each neuron passes its information on to the neurons of the next layer.
From a practical point of view, each piece of information is modulated by a “weight”, and the structure is nothing more than a set of weights: in short, a large matrix of coefficients. If we wanted to identify a cat, we could choose the values of these weights so that every input image of a cat is assigned the output “cat” rather than “dog”.
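The idea above can be sketched in a few lines of NumPy. This is a minimal toy, not any real architecture: the layer sizes, random weights, and “cat”/“dog” labels are illustrative assumptions.

```python
import numpy as np

# A tiny two-layer network: each layer is just a matrix of weights.
# 4 input features -> 3 hidden neurons -> 2 output classes (cat, dog).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # weights from input to hidden layer
W2 = rng.normal(size=(3, 2))   # weights from hidden layer to output

def forward(x):
    """Each neuron passes its (weighted) information to the next layer."""
    hidden = np.maximum(0.0, x @ W1)   # hidden layer activity (ReLU)
    return hidden @ W2                 # output scores for "cat" and "dog"

x = np.array([0.2, 0.8, 0.1, 0.5])    # made-up image features
scores = forward(x)
label = ["cat", "dog"][int(np.argmax(scores))]
```

With suitable weight values, images of cats would consistently get the higher “cat” score; choosing those values is exactly the training problem described next.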
A neural network is nothing more than this. The weights are determined mathematically: they are adjusted a little at a time, in the direction that reduces the error, until the network's output better approximates the desired response.
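“Changing the weights a little at a time” can be shown with a one-weight toy problem. This is a sketch of gradient descent under invented data (targets follow y = 2x), not the training procedure any particular product uses.

```python
import numpy as np

# Toy training loop: nudge a single weight a little at a time until
# the output approaches the desired one (here the target is y = 2x).
X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * X                  # desired outputs
w = 0.0                      # start from an arbitrary weight
lr = 0.05                    # how big each small adjustment is

for _ in range(200):
    pred = w * X                          # current network output
    grad = 2.0 * np.mean((pred - y) * X)  # slope of the squared error
    w -= lr * grad                        # small step that reduces the error

# After enough small steps, w settles near 2.0.
```

Real networks do the same thing simultaneously over millions of weights, which is why training is so computationally expensive.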
We gave a test plugin to some mixing engineers, who processed audio with it while adjusting its various parameters. At regular intervals the plugin generated a log, filling a folder with information such as the incoming audio, the parameter values, and the modification time. From the input signal, various attributes can be extracted, such as frequency response, phase, mid/side content, and loudness.
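One such log entry might look like the sketch below. The field names, parameters, and values here are purely hypothetical, invented to illustrate the kind of information described above, not the plugin's actual log format.

```python
import json
import time

# Hypothetical example of a single log entry written by the test plugin.
entry = {
    "timestamp": time.time(),           # when the parameters were changed
    "session_id": "mix-session-01",     # which mixing session (illustrative)
    "parameters": {                     # knob values at that moment
        "gain_db": -3.5,
        "ratio": 4.0,
        "attack_ms": 12.0,
    },
    "audio_features": {                 # attributes measured on the input signal
        "loudness_lufs": -18.2,
        "mid_side_ratio": 0.7,
        "spectral_centroid_hz": 2450.0,
    },
}
log_line = json.dumps(entry)            # one line appended to the log folder
```

Pairing measured audio attributes with the parameter values the engineer chose is what gives the training process its input/output examples.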
In just a few hours, a huge amount of data can be generated. The modified plugin also handles the re-opening of a previously closed session and the simultaneous writing of many sessions. The data is sent to us, stored on our servers, and processed to build the network of weights, i.e. the configuration of the network.
This process is called training; it requires an enormous amount of computation and can take hours or days to complete. Once trained, the plug-in listens continuously and saves the incoming data in a buffer. When a preset is requested, it runs the network and returns the parameter settings that match the engineers' preferences captured during learning, automatically moving its own knobs.
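The inference step can be sketched as follows. Everything here is an assumption for illustration: the random weights stand in for a learned configuration, the parameter names are invented, and a real plugin would do this on audio features, not random numbers.

```python
import numpy as np

# Sketch of the inference step: buffered audio features go through the
# trained network, and the output is read back as knob positions.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 3)) * 0.1       # stand-in for the learned weights

PARAM_NAMES = ["gain_db", "ratio", "attack_ms"]  # hypothetical knobs

def suggest_preset(feature_buffer):
    """Summarize the listening buffer and map it to knob positions."""
    features = np.mean(feature_buffer, axis=0)  # average the buffered frames
    raw = features @ W                          # network output
    knobs = 1.0 / (1.0 + np.exp(-raw))          # squash into the 0..1 knob range
    return dict(zip(PARAM_NAMES, knobs.round(3)))

buffer = rng.normal(size=(100, 3))  # 100 frames of measured audio attributes
preset = suggest_preset(buffer)     # e.g. {"gain_db": ..., "ratio": ..., ...}
```

Unlike training, this forward pass is cheap: it is a handful of matrix multiplications, so it can run in real time inside the plugin.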