This is a simple question that I hope to get a simple answer to.
Basically, I work only at 44.1 kHz in all my projects. However, I own many libraries from various devs at 44.1, 96, and 192 kHz. I would like to know your recommendation: should I always use the 192 ones if CPU power is NOT a problem? Or would you rather recommend the 96? Or maybe the 44.1?
I just want to get the best possible quality for my projects, and I am not worried about the CPU hit as I have a powerful PC.
There is no quality advantage to loading a 192k version of a program into a 44.1k project. Are you saying you are willing to change the rate your projects run at in the future?
Simple answer: if you are working at 44.1, then the best quality with regard to re-sampling is to use a natively sampled 44.1 library. If you are working at 96k, then use a natively sampled 96k library...
Of course, this does not take into account the actual quality of the library itself, or whether it is available at your project's native sample rate in the first place.
The sample-rate conversion is not the most important factor; the basic sound and features of the library come first. Having said that, if you are torn between two libraries and one is at your project's native sample rate, get that one!
There is no quality advantage to using a 96k library in a 44.1k session. It will only add an unnecessary sample-rate conversion to the process. It might add more CPU strain as well, AND not sound as good as the native 44.1k program.
Sometimes you have no choice, and that's just something you'll have to deal with. But if given the choice, ALWAYS use the 44.1k programs in a 44.1k session.
Oversampling a plug-in vs. changing the project's host SR
Some plug-ins offer internal oversampling (re-sampling of the audio signal to a higher SR) as an alternative to a globally higher SR in the host. As always in life, there is a serious tradeoff, though. Re-sampling a signal requires steep filtering at Nyquist, which in general is a tradeoff between computational cost, steepness of the filter, ripple in the passband, and the resulting impulse response of the filter. For instance, if a plug-in uses an FIR filter to obtain very steep filtering, pre-ringing occurs. On the other hand, if IIR filtering is used, the filter might not be steep enough to eliminate aliasing content. However, the additional computational cost is a "local" phenomenon, as opposed to an overall higher SR, which raises CPU demand in all software running in the host.
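To make that FIR-vs-IIR tradeoff concrete, here is a minimal sketch in Python/SciPy (not any plug-in's actual code; the filter orders, rates, and band edges are illustrative assumptions) comparing the pre-ringing of a linear-phase FIR anti-alias filter with a causal elliptic IIR:

```python
import numpy as np
from scipy import signal

fs = 88200            # hypothetical 2x-oversampled rate
cutoff = 20000        # keep the audible band, attenuate above it

# Linear-phase FIR: steep and alias-free, but its impulse response is
# symmetric around the group delay, so energy "rings" before the main
# peak (pre-ringing).
fir = signal.firwin(numtaps=511, cutoff=cutoff, fs=fs)

# IIR (elliptic): cheap and causal (no pre-ringing), but at a low order
# the stopband attenuation may be too shallow to kill all aliasing.
sos = signal.ellip(N=8, rp=0.1, rs=80, Wn=cutoff, fs=fs, output='sos')

impulse = np.zeros(2048)
impulse[1024] = 1.0

fir_out = signal.lfilter(fir, [1.0], impulse)
iir_out = signal.sosfilt(sos, impulse)

# Pre-ringing = energy arriving before each filter's main peak. The
# linear-phase FIR puts roughly half its energy before the peak; the
# causal IIR front-loads nearly all of it.
for name, y in (("FIR", fir_out), ("IIR", iir_out)):
    peak = np.argmax(np.abs(y))
    pre = np.sum(y[:peak] ** 2) / np.sum(y ** 2)
    print(f"{name}: {pre:.1%} of energy before the peak")
```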
Running an overall higher SR, on the other hand, avoids tons of up- and down-sampling and all the artifacts typically introduced by doing so; it just requires some down-sampling at the end, which can be done with a high-quality offline sample-rate converter. If computing power is not an issue, my vote clearly goes to using a higher SR instead of local re-sampling.
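As a rough sketch of that "one conversion at the end" idea (the 1 kHz test tone is just an assumption for illustration), a polyphase resampler does the whole 96k-to-44.1k conversion with a single anti-aliasing filter instead of one per plug-in:

```python
import numpy as np
from scipy.signal import resample_poly

fs_hi, fs_lo = 96000, 44100

# One second of stand-in "mix" audio at the high session rate.
t = np.arange(fs_hi) / fs_hi
mix_96k = np.sin(2 * np.pi * 1000 * t)

# 44100 / 96000 reduces to 147 / 320; resample_poly applies one
# polyphase FIR anti-aliasing filter for the whole conversion.
mix_44k = resample_poly(mix_96k, up=147, down=320)

print(len(mix_44k))  # 44100 samples, i.e. one second at 44.1 kHz
```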
My current CPU can't run an entire project at 96k, and it's annoying. There are a few Nebula programs that I am always going to need that are native 96k only (VNXT plate!).
I have not worked out the best way yet, but I am certain the best possible results will come from a laborious mix of rendering certain Nebula-based buses at 96k and then downsampling them with SoX for the 44.1 project.
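For example, the render-then-convert step might look like this; the file names are hypothetical, but the SoX 'rate' effect with '-v' (very high quality) is the real resampler invocation:

```python
import subprocess

def downsample_bus(src_96k, dst_44k):
    """Convert a rendered 96k bus to 44.1k with SoX's 'rate' effect."""
    subprocess.run(
        ["sox", src_96k, "-b", "24", dst_44k, "rate", "-v", "44100"],
        check=True,
    )

# Hypothetical bus render from the 96k pass, destined for the 44.1 project.
downsample_bus("plate_bus_96k.wav", "plate_bus_44k.wav")
```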
I tried it once with just a single native 96k Nebula program, and it was barely worth the effort. But if you had a few 96k native ones on a bus, plus any other plug-ins that would benefit from running at a higher rate, then I am sure it would be worth the extra trouble.
I think Nebula's 'design philosophy' is quality above all; adding oversampling would mean trading quality for convenience. In the not-too-distant future we will all have better, faster computers, so the problem will eventually go away without any Nebula updates needed.