of course i get these artefacts too when i switch off smooth2. but when i switch off rate s and smooth2 at the same time, no artefacts happen. that was my whole point! the nat templates are set up the way they are for compatibility with slower cpus. switching off rate s and running it at 0.104ms with timed kernels is heavy even on my very new system. but in return you get an almost perfect emulation of a 1176 from the gemini programs. it sounds unreal good with very fast attack times. it starts to shape the transients like the real box!!
btw, i read somewhere (i guess in the nebula manual) that the liquidity parameter affects the intensity of the smooth kernel, and quite a lot of modern programs come with liquidity preadjusted to 0% and don't produce any artefacts.
i recommend amplifying nebula's action while experimenting with it. for some programs you can just mute the output of the original signal so that you can amplify the rest. for other programs it's not so easy, but it's possible too. i can't remember how i did it though..
could you point me to a program that will produce artefacts when rate s and the smooth kernel are switched off?
rate s doesn't do anything by itself. all it does is let you adjust the program rate, so rate s by itself won't have any effect on artifacts. when it's off, nebula uses the fastest prog rate it can; when it's on, you can adjust the prog rate yourself.
liquidity, as far as i know, doesn't affect the smooth kernel. the smooth kernel is smooth 2, it's a different smoothing just for h1. liquidity is smooth 1. it affects all kernels. they are two separate smooth effects. so liquidity can be set to 0%, and that just means you aren't getting any smooth1. if smooth2 in the glob page is set to anything besides OFF, you ARE getting smooth2. so those presets that have liquidity set to 0% are still using smooth2.
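just to keep the two systems straight, here's a tiny python sketch summarizing the relationships described in this post (this is my reading of the thread, not official Acustica documentation, and the names are just labels from the discussion):

```python
# Two independent smoothing systems, as described in this post (my reading,
# not official Acustica documentation).
smoothing = {
    "smooth1": {"ui_name": "liquidity", "applies_to": "all kernels"},
    "smooth2": {"ui_name": "smooth2 (GLOB page)", "applies_to": "h1 only"},
}

def smooth2_active(glob_smooth2_setting):
    # liquidity at 0% disables smooth1 but says nothing about smooth2;
    # smooth2 is active unless the GLOB page setting is OFF.
    return glob_smooth2_setting != "OFF"

print(smooth2_active("CUBIC"))  # smooth2 still applied even at 0% liquidity
print(smooth2_active("OFF"))
```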
timed mode and faster prog rates may let you get by with less smooth2 (so instead of cubic you may be able to use linear, or maybe even switch it off), but we were talking about split mode, not timed. split mode is not intended for dynamic programs.
of course split mode is not intended for dynamic programs. and timed mode would kill your cpu when you use it for eqs. eqs have long kernels to represent the specific phase shift. so now you have a dynamic eq. nebula can handle this with freqd kernels, but it will blur your transients. split mode then seems like the best compromise as long as it doesn't produce artefacts. if you get artefacts, experiment with the freqd kernel fades. good luck with it!
can you explain to me what smooth1 and smooth2 exactly do? and how rate s translates kernels to the longer intervals? i kind of see a structure in all of this that explains why no artefacts appear when both rate s and smooth are disabled. and until now this theory has worked every time for me. but if you are right, i just got lucky with a wrong theory. so i'd be very happy to improve my knowledge about this!
rate s itself doesn't do anything. it's just an on/off switch that enables the 'prog rate' parameter so you can adjust it. adjusting prog rate means you are adjusting the length of the windows/blocks that nebula uses to process the incoming audio.
having longer kernels probably does help with the phase of eqs, but that doesn't make them dynamic. an eq is only dynamic if it has dynamic samples in it. none of the templates/sessions that come with nat are set up to do that, so a custom template is needed, and it's pretty tricky to make. the reason dynamic eqs are practically non-existent is that they are much more complex to make, would have WAY more samples, and would take lots more cpu.
consider this: say you have an eq and you want to sample the width, gain, and freq controls. say the freq control switches between 5 fixed frequencies. then say width is fully variable and you decide to take 5 positions from that control. 5 might not sound like a lot, and some people may think it's not enough, but let's continue with this math and see why 5 may already be pushing it. you still have the gain control. say it goes from +12db to -12db, and you decide to do 7 sampled spots on it: one at 0, and one each at +/-3db, +/-8db, and +/-12db. some people might think those are large gaps. personally i think that could still come out well, but it would be nice to have more resolution there.
ok so that's 5x5x7=175. that's 175 samples, just for 1k. if you want harmonics, well, 10k would mean 1750 samples. that's with only 5 sampled width and 7 sampled gain positions. chances are you don't have any or many programs that go up to 1000 samples, let alone close to 2000.
ah but, that's already well beyond probably any eq program out there in sample count, and it still doesn't have dynamics. preamps have 30 dynamic steps by default in the NAT template. well, if you were going to do a dynamic eq you couldn't have that many. let's say you lower it to 10 but increase the distance between them to still cover a nice range.
let's also say you want 10k. now you have 5x5x7x10x10=17500 samples. you do not have a program with 17500 samples, or anywhere near that. and this is with what could be considered a low number of dynamic steps, a low number of width positions, and a low number of gain positions sampled. even 5k gives you around 9000 samples. now, chances are that the upper harmonics (5-10k) will have noise for their lower dynamic steps, with no impulse (because at the lower sampled levels the harmonics will be well below the noise floor). you can go in and edit the xmls to remove those after checking which are below the noise floor, which may let you cut the count down by a few hundred.
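the multiplication above is easy to sanity-check in a few lines of python. the control names and step counts here are just the hypothetical ones from this example, not values any real program uses:

```python
# Hypothetical dynamic-eq sample-count math from the example above.
freq_positions = 5    # fixed frequencies on the freq switch
width_positions = 5   # sampled spots on the (continuous) width control
gain_positions = 7    # 0, +/-3, +/-8, +/-12 db
dynamic_steps = 10    # reduced from NAT's preamp default of 30
harmonics = 10        # sampling out to "10k" on a 1k fundamental

static_1k = freq_positions * width_positions * gain_positions
print(static_1k)                                # fundamental only
print(static_1k * harmonics)                    # with harmonics, no dynamics
print(static_1k * harmonics * dynamic_steps)    # with dynamics: huge
```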
see, the thing is that when you get up to programs that big, you are totally in uncharted territory. the amount of work skyrockets, you really have to know what you are doing, and the number of bugs and issues also shoots up. the program also uses a lot more cpu. if the eq only had two controls, that would cut the sample count down a lot, so with a 5k program you could be left with around 800 samples. but which control is missing? width? the band frequency? for an eq to be dynamic it has to use the envelope follower to modulate between a set of dynamic steps. if the eq has dynamics and adjustable controls, you are easily looking at several thousand samples with a full 10k, and that's, again, with low-ish numbers of positions sampled on those controls.
i did a tube amp with a simple eq, just a bass and treble control. so basically just two fixed shelf-type bands. with only 2 controls plus dynamics, those programs have 2-3000 samples. i don't think there are other dynamic eqs (with adjustable controls) out there, but a high-end studio-type eq would likely have a 3rd control, which means multiplying the sample count by however many steps you do on that extra control. 3 controls would be hard to do without approaching 10,000 samples while still having a decent number of steps sampled on each control and a decent number of dynamic steps. and 10,000 samples is a crazy amount.
i kind of thought this when i read the words 'dynamic eq' for the first time on this forum. i thought this guy must be crazy, and i wasn't sure if nebula could handle it at all. after that first thought i came to the conclusion that you must have found a trick to shorten things up.. now i'm impressed again to hear that you haven't!!
if i were you, i'd try to just make one program with k1 and a second one for all the harmonics. i know it won't be the same, but it'll probably work better and is lighter on ram/cpu...
if i got it right, your main trouble is transients and dynamics?
I captured an eq dynamically last year with only three dynamic steps and three bands... it's not that hard on cpu. Though admittedly it does not have a huge amount of samples. I was creating 200-300 combination eqs before I started releasing anything, but I did not release them because of cpu. I could have reduced the quality to release them, but I didn't think people would want that.
It was done this way because the transformer slightly saturates on peaks... I have only done this for one of my releases, well, actually the original elc24 and the new version.
Every piece of hardware is different, and I took this approach for this eq. In actual fact, most 1073's get a lump in the low end when you push them, because of the transformer.
A lot of high-end eqs would not need this unless you wanted that pushed sound. However, eqs with a lot of ringing would probably need it anyway to get the true sound; they have a kind of stop-start characteristic at low levels.
if you are using a console (like the german mastering console stereo bus out library at 10k) or a tape program like the dte 30 ips library, these are dynamic libraries, because they do affect transients and dynamic range. I don't know if I'm plain wrong or crazy, but when using nebula for mastering at 96 khz (2 or 3 instances maximum) I always use TIMED (the 3 little arrows in the kern page point toward timed) with all the timed values at the maximum (73 ms or even 100 ms). Ltimed is set to 100000. I also turn RATE S on and set its value to 200 ms. Apart from tweaking the original preset, am i doing something wrong?? All I can hear is that I get a much better sound, even using PEAK detection instead of rms17 or evf or evf17 in the efvs page. RATE CNV is set to 9000 and ahead time to 6.000 ms (also in the glob page). I want to say that I use 2 or 3 libraries for mastering (gmc, dte or anm) just to add some flavour and punch to clients' tracks. And yes, i like to tweak my libraries a little
but if you are setting the prog rate to 200ms, they aren't going to be 'affecting transients', not in any way resembling the hardware. so i would say yes, you are doing something wrong there. imo you should just switch the h1 and either the even or odd kerns to timed, and not mess with the program rate. switching all three can cause artifacts, so unless you are willing to test and make sure it doesn't, i don't think you should do it that way.
lowering the prog rate gives more accurate (like the hardware) behavior, with the exception that it can cause artifacts. raising the prog rate goes in the opposite direction: nebula completely misses your transients, processing them not with the 'proper' samples but with the quietest captured impulses rather than the louder ones it should use. i've already demonstrated that it even does that at 20ms. but like i said, unless you feel like testing for artifacts yourself, you should leave well enough alone and just switch h1 and either the even or odd kerns to timed, for render. again, that's just my opinion about the easiest, relatively risk-free way for you to get a possibly higher quality sounding render.
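a toy illustration of why long windows miss transients (this is just generic block-based level measurement, not nebula's actual detector): measure a short 5 ms hit at full scale inside one analysis window. in a 20 ms window the transient dominates the reading; averaged over a 200 ms window, the same hit reads about 10 db lower, so a level-driven kernel selector would pick far-too-quiet samples for it.

```python
# Toy demo: the same 5 ms full-scale burst, measured over short vs long windows.
import math

sr = 48000
transient = [1.0] * int(0.005 * sr)  # 5 ms burst at full scale

def rms_over_window(window_ms):
    n = int(window_ms / 1000 * sr)                     # window length, samples
    block = transient + [0.0] * (n - len(transient))   # rest of window silent
    return math.sqrt(sum(x * x for x in block) / n)

short = rms_over_window(20)    # transient clearly registers
long_ = rms_over_window(200)   # same hit, much lower reading
print(short, long_)            # the long window reads ~10 db quieter
```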
Hi Cupwise, thanks for the tips. Yeah, just keeping the original rate of the library and turning rate s on makes things more pleasant, sonically speaking. At 200 it seems there's no processing like the original hardware. A question that has always intrigued me: would it make sense to keep the attack and release values at the minimum possible (1ms), just to get a "faster" reaction from the library? Wrong or not? thanks!
turning rate s on does nothing by itself. it just allows you to adjust the program rate. if you don't adjust the program rate, setting rate s to on does nothing.
theoretically, lowering the attack and release would give you results a little closer to the hardware, but a) it could cause artifacts (probably more likely with tape stuff than with preamp stuff) and b) with a prog rate of 20ms it probably doesn't make much of a difference
but i usually think it sounds better with the attack around 6-7ms and the release at 1, though i could be imagining that.
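for anyone wondering what attack/release settings do in general, here is a minimal one-pole attack/release envelope follower. this is a textbook design under my own assumptions, not nebula's actual follower, but it shows why a 1 ms attack tracks a sudden transient much faster than a 7 ms attack:

```python
# Minimal one-pole attack/release envelope follower (textbook sketch,
# not Nebula's internal implementation).
import math

def follow(signal, sr, attack_ms, release_ms):
    a_att = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, []
    for x in signal:
        level = abs(x)
        # rising input uses the attack coefficient, falling uses release
        coeff = a_att if level > env else a_rel
        env = coeff * env + (1.0 - coeff) * level
        out.append(env)
    return out

sr = 48000
step = [0.0] * 100 + [1.0] * 400   # a sudden transient
fast = follow(step, sr, 1.0, 1.0)
slow = follow(step, sr, 7.0, 1.0)
# ~1 ms after the step, the fast follower is already most of the way there
print(fast[150], slow[150])
```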