It is beginning to sound like it is impossible to even get the results the developer intended unless your settings match the ones they used when testing the program.
I sincerely hope some sort of round-table discussion happens so AA and 3rd party devs can be on the same page and get compressor development focused in a common direction that both elevates the craft and makes the product more user-friendly.
The gainstaging thing is already a mess which could have been avoided by such a round-table discussion. Hopefully this topic gets nipped in the bud and handled before more and more comps are released and it becomes an unfixable debacle.
It's not that I think anyone in particular is wrong for their approach, but this is a prime example of a turning point where nebula could either pull together and get things right so that millions around the world who don't have degrees in computer programming can all love and use the comps...
... or things could get so deep and esoteric to the point where the potential user-base of nebula compressors shrinks into obscurity.
If it IS very important how all these things are set, then is it possible to have code in a nebula program that automatically sets those values within nebula?
For instance, it is pointless that I should have to go in and make changes to a reverb to account for the length of its tail, then save, then reload. After all, the program had a specific length. It either needs lengthening or not. If the program could just change the necessary values itself, all that user error would go out the window.
Let's get real about this issue in general. For every single person like me who dug for two weeks to get all these answers and finally went in and changed various settings, and even went in and changed lines in xml files (I think I'm having a flashback to DOS in 1978)
... for everyone like me who made it a full-time job to track all this stuff down and do tests various ways...
... I promise you there have been a dozen or more who are now just either using the reverbs without noticing the tails are cut off, or have abandoned them as not sounding right and will say as much when asked in a forum next time.
"I tried nebula. It's not so great. The reverbs don't sound so great."
Nebula, as it stands, is set up so as to introduce the MAXIMUM possible "user errors".
If it is possible to either have an eq version, a reverb version, a comp version, etc... where all development is on the same page about gearing toward those settings so users get the intended effect...
...or if it is possible for the program itself to change the necessary settings and eliminate the myriad "user errors" altogether, everyone would benefit.
One aspect of using reverb version for everything that I don't remember seeing discussed:
I'm having one problem after another after another all due to nothing but latency. I have automation not working properly, instruments falling out of sync, etc.
This is so problematic, it has nearly shut me down. Now, I may be misunderstanding something, but...
I have ram to spare. I have cpu to spare. I am choking to death on latency.
Isn't this the direct result of using reverb instances for everything when most are not actually reverbs? I'm only doing it because I was told to, but wouldn't it make sense in my case to "trade" some cpu for latency by dropping the latency values for instances that don't require them (using the non-reverb version)?
What exactly is having settings tailored for a 5 second reverb tail on every eq and comp in my session doing in terms of system resources?
What about rate conv... if the program doesn't use a different sr, am I conserving resources by setting it all the way down to zero, or are there any negative fx there?
Are there negative fx to just turning lfreqd ALL the way down if not using reverb?
On some programs but not others?
Which parameters exactly can or should be turned down, and what are the practical limits to how far they can go in order to get an eq to run as lean as possible with NO negative fx to the sound...
Is the answer the same for chorus, comp, etc...
I have an 8 core mac running Logic with none of the cores over 25%.
I didn't determine that buffer size affects sound. Poster before me did. I've never used anything other than the 2 settings it ships with.
FAQ? I read every word of it
... before I bought nebula.
... then again the day I bought it.
... then again a couple days ago.
At the moment I'd like to reset to default values. I can't remember offhand if I've seen a default list... so I can set everything to original state since everything loads with last saved settings. Is this what the xml file is? A listing of original states? Or is it the container that stores the latest settings?
If these are simple questions covered in the FAQ, then why has no one been able to answer them over the past few days here or on GS?
FAQ, like the manual, gives definitions of some of the parameters, but does not give answers as to practical limits for the ones I'm asking about.
Depending on who you ask, rate conv should be either 3000, 7000, or maximum to correctly convert for non-native sr's. Well, I think an appropriate question is not only "Which is it?", but maybe a more illuminating "What are the effects on my system of each choice?"
If it were one instance of nebula, I wouldn't even care about most of these q's, but when you compound many instances, the differences between running efficiently and using 100x what's necessary start to become important.
If these are found in FAQ or elsewhere, then I'd be happy to be directed to the appropriate spot, but I'm not seeing where to find:
Does rate conv need to be anything other than 0 if sr is same as native? What is the system cost for non-0 values?
Same for Lfreq D... what are the practical limits of how far it can be turned down? Can it be ALL the way down for eq's? Choruses? Comps? Does that actually alleviate latency, or is it just within the confines of the dsp buffer setting?
"10 Does Nebula reverb version dll makes the eq and compressors emulation presets sound better?
As the Nebula reverb version has more latency and less CPU consumption, you can setup longer OPT TIMED and OPT FREQD in the MAST page times. This way you can get better sound results."
So... crank them all the way up? How are they affecting compressor sound? Does the same hold true for other things like eq and chorus? Better results how? What exactly is more accurate? For instance, if I have the slowest possible attack time, does it matter?
Well, I can only help with what setting I have changed in NEBULA.
With a 96K library running @ 44.1, the library needs to be SampleRateConverted [SRC]. For certain libraries, the default RATECNV needs to be larger.
From FAQ definition: "Q – RATE CNV This parameter sets the upper bound limit for the sample rate conversion action. It is expressed in milliseconds This parameter is used for optimizing the loading time on slow systems. A large reverb program requires a large amount of time to load: in this case the rate conversion procedure is skipped."
IF you are working @ a different samplerate than the library was built at ... you'll see something like 96k -> 44.1 . The KEY is to see if you have a blinking arrow there ... if so, then the library was not fully converted, so you need a larger RATECNV value.
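To illustrate what that 96k -> 44.1 conversion amounts to, here's a rough plain-Python sketch of the resampling ratio involved — this is my own illustration, not Nebula's actual code, and the function name and 5-second figure are made up for the example:

```python
from math import gcd

def src_ratio(native_sr: int, session_sr: int):
    """Reduce a sample-rate conversion to its smallest up/down integer
    pair, the way a polyphase resampler would express 96k -> 44.1k."""
    g = gcd(session_sr, native_sr)
    return session_sr // g, native_sr // g  # (upsample by, downsample by)

up, down = src_ratio(96000, 44100)
print(up, down)  # 147 320 -> every 320 samples at 96k become 147 at 44.1k

# A 5-second impulse sampled at 96k, resampled for a 44.1k session:
n_out = (5 * 96000) * up // down
print(n_out)  # 220500 samples, i.e. still 5 seconds at 44.1k
```

The point being that this conversion is done once over the whole library, which squares with a bigger RATECNV costing loading time rather than playback time.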
I think mine is set to 9000. YEAH, it takes a little longer to load, no big deal.
I MAY also have lowered the latency by hand editing the XML file [standard text file].
That's about it ... Let me say this, though ... I think you should go back and re-read the FAQ ... and maybe take it a little slower ... WHY? Because there ARE some suggested default values listed.
Another point to consider ... the Dev has mentioned that some of these 'visible' settings should not be changed, and will be hidden at some later point.
Last point ... NEBULA allows for a wide variety of libraries ... many have unique aspects ... SOME are still to be refined [like COMPRESSORS]. You may consider writing a request straight to the library developer and ask if they have 'optimized' settings that are different than what is printed in the included manuals. And if there IS something ... I'm sure others here would be interested if you posted that info.
As a reminder ... the FAQ are something that is also mentioned to be a 'work-in-progress' ... as I read it, it is there to define parameters, etc ... but it's not a straight out course on the interactive elements within NEBULA.
Hey ... just trying to offer some help, and hope it does. The FAQ should be a slow read. As for the library developers ... the ones who tweak NAT and NEBULA and have the actual hardware to compare it with seem to have mentioned 1 modification to a NEBULA setting ... the RATECNV. Maybe that says something. Other than latency ... and that is system dependent.
Not a bad idea about asking 3rd party dev for their particular settings regarding comps.
I'm not joking about reading the faq in its entirety (now more than) 3x.
I know all the values it mentions. Rate conv is a good example. It says to max it. Others say to set it to 3000 or 7000. Any of them will work. I already know all the stuff about the arrow and src. Yes, I can make any instance of nebula work correctly by maxing everything out. Yes, at 2.7 months of latency, I will have no src problems, no cut reverb tails, and no compressor errors... but what I'm saying is that the cost of some of those settings is becoming too high, and understanding their exact system costs would help.
The question arises not because I can't get the plug to work, but because the cumulative effect of multiple instances is crippling my system. So I'm trying to figure out, for instance, what the DIFFERENCE is between a setting of 3000 and maxed out for rate conv. If either works, which would better alleviate my system's symptoms?
My GUESS would be that I would benefit from the minimum setting that works (near 3000)... that that would give me SRC that works while introducing the minimum latency possible for that task.
I would very strongly prefer to stop guessing, however, as these are questions that do, in fact, have factual answers.
That's where I'm coming from. Basically, I have multiple questions all with the same theme: If I'm having serious problems due to latency, can I optimize these few parameters by using the minimum value that will work?
I'm not changing things willy-nilly, and I'm not changing many parameters.
I've only used the 2 default values for the dsp buffer, and only changed L-FreqD (even though the FAQ mentions the possibility of changing the other one as well) and rate conv... and only changed them because the plug was not working properly... just trying to zero in on advice that suggests changing them to a value somewhere between here and the moon... and what the ramifications are of a minimal WORKING setting vs maxing them out.
It seems like a sensible question to me, and the flip side is about things that don't use rate conversion. If I'm at 44.1, and the program is at 44.1, is it or isn't it a waste of resources to have rate cnv set to anything but zero? That's not philosophical, it's practical, and should have a simple factual answer.
I understand that the plug has default values so that it can catch a wide array of possible uses, but I'm not even talking about tweaking things to alter sound here... I'm talking about just making sure I'm not wasting resources for instances that don't require those settings. This ceased being a philosophical inquiry when I used all the settings recommended by the faq and my system was brought to its knees.
"My GUESS would be that I would benefit from the minimum setting that works (near 3000)... that that would give me SRC that works while introducing the minimum latency possible for that task. "
From my understanding, RATECNV doesn't affect latency. I read that it is the time given to SRC the library, and its only impact is LOADING time.
To concur with your issue ... I too have a relatively powerful computer ... and I have a few apps that can make my system work, HOWEVER ... no other app can actually bring my computer to its knees the way NEBULA can.
If you consider what some convolution reverbs demand, NEBULA takes this to an exponential level ... this is the nature of the beast [and why so many warnings to potential users].
Now, there are some libraries that come with a 'lite duty' version. I believe the main difference is the number of KERNELS it uses. Maybe something to consider.
Without doubt, the upcoming ability to strap NEBULA into a 'server' environment will, I find, be a necessity. I do hope that some tweaking to the NEB core may happen [only the Dev really knows ... but from what I've read, there are some serious memory optimizing designs going into the server aspect].
Well, that's the sort of thing I'm wondering about. I had read that it affected loading time, but wasn't sure of any other effects.
Same for Lfreq D... not sure what all is being affected and if it is potentially helping anything by bringing it down.
Same for L Timed ... though I've never touched it. Same for ahead ... though I've never touched it.
Just in general, are there certain types of programs that don't require some of these settings, and is there any system savings to be had by minimizing them for those types of programs. In other words, is there anywhere that I'm just wasting resources by using unnecessarily broad settings?
If your only issue is latency and you've never strayed from the default DSP Buffer as you say, I'd be experimenting with that first and foremost.
Lots of people drop Nebula Reverb's buffer size down to 1024 or so for that reason, plus it makes auditioning parameter adjustments a hell of a lot more convenient. If need be, you can just keep a higher-latency version for actual reverbs. The default buffer size of 8192 quickly accumulates to an obscene degree, so I'm not surprised you're having problems.
You don't even have to mess with the XML file anymore; the buffer is readily adjustable on the MAST page.
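For anyone who wants numbers on how that buffer accumulates, here's my own back-of-envelope arithmetic — assuming the buffer's latency contribution is simply its size in samples divided by the session sample rate, which is a simplification on my part, not anything from the docs:

```python
def buffer_latency_ms(buffer_samples: int, sample_rate: int) -> float:
    """Latency contributed by one instance's DSP buffer, in milliseconds."""
    return buffer_samples / sample_rate * 1000.0

# Default reverb-version buffer vs. the commonly suggested 1024, at 44.1k:
print(round(buffer_latency_ms(8192, 44100), 1))  # 185.8 ms per instance
print(round(buffer_latency_ms(1024, 44100), 1))  # 23.2 ms per instance

# Five instances chained in series on one channel:
print(round(5 * buffer_latency_ms(8192, 44100)))  # ~929 ms for the host to compensate
```

Which would explain automation and sync falling apart long before cpu or ram become the bottleneck.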