Buy a good computer with SSDs. Work at least at 48 kHz for every source (your own recordings, loops, etc.). Working at 96 kHz needs around 2.5 times the CPU and 2 times the SSD space of working at 48 kHz.
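That 2x disk figure follows directly from the data rate: uncompressed audio is sample rate times bytes per sample times channels, so doubling the sample rate exactly doubles the storage. A quick sketch (24-bit mono is an assumption here; your bit depth and channel count may differ):

```python
# Rough per-track disk rate for uncompressed audio:
# sample_rate * bytes_per_sample * channels.
def mb_per_minute(sample_rate, bytes_per_sample=3, channels=1):
    """Megabytes of audio data per minute (24-bit mono by default)."""
    return sample_rate * bytes_per_sample * channels * 60 / 1e6

print(mb_per_minute(48000))   # 8.64 MB/min
print(mb_per_minute(96000))   # 17.28 MB/min, exactly 2x the 48 kHz figure
```

The 2.5x CPU figure is not as clean, since plugin cost doesn't always scale linearly with sample rate.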
Somebody correct me if I'm wrong, but Nebula resamples the selected library/program to your project's sample rate, not the other way around.
So if you have a 44.1 kHz project and load a 96 kHz Nebula library on it, that library gets resampled to 44.1 and then applied to your source audio.
So even if your source is 44.1, you _might_ get some extra detail out of 96k Nebula libraries. I don't know which DAW you're using, but I would just set your project to 96k. That means (at least in Studio One and Reaper) that any 44.1 source you insert on a track gets automatically upsampled by your DAW to 96k. Then Nebula gets applied to that 96k signal, leaving the original Nebula library 'as intact as possible', i.e. unresampled.
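The upsample-then-process idea can be tried outside the DAW too. A minimal sketch of 44.1k to 96k conversion with SciPy's polyphase resampler (the 320/147 ratio is just 96000/44100 reduced to lowest terms; using scipy.signal.resample_poly is my own tooling assumption, not anything Nebula or a DAW does internally):

```python
import numpy as np
from scipy.signal import resample_poly

sr_in, sr_out = 44100, 96000
up, down = 320, 147            # 96000/44100 reduced to lowest terms

t = np.arange(sr_in) / sr_in              # one second of audio
x = np.sin(2 * np.pi * 1000 * t)          # 1 kHz test tone at 44.1 kHz

y = resample_poly(x, up, down)            # polyphase upsampling to 96 kHz
print(len(x), len(y))                     # 44100 96000
```

You would then run the 96k version through your processing chain before deciding whether to keep it there or downsample again.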
The most important question is whether this is worth it. It will/might add something extra; whether you can hear it (and whether you find it worth the extra memory and CPU consumption) is totally and only up to you.
If I know a project is going to end up on HD video or Blu-ray, for example (which is 48k most of the time), I like to work in 48k. I think it's 'fake' to work in 44.1 and upsample it later when it gets mastered for Blu-ray or something. But workflow and experience might tell you something different.
Nobody is faulting you for working in 44.1K, let me tell you that. It's still the most used sampling rate in digital audio.
When I start tracking something new (with guitar amp sims and analog recordings mixed), I work in 96k to get the lowest latency possible out of my interface(s) and to always have some sort of 'upsampling' going. I feel the more samples an analog-emulation plugin (like an amp sim) has to work with, the more real it can sound.
After all is tracked and I'm happy with it, I apply Nebula on it (still in 96k), but then bounce everything down to 44.1k files. In doing this I apply all the Nebula stuff I almost blindly apply to all tracks anyway, I resample down, and I commit my recordings (separating the recording/tracking stage from the mixing stage in my workflow).
I then start mixing with those 44.1K files which already have Nebula console/tape/mojo on them. And I have to admit, I'm using more and more algo plugins lately in the later stages, since there is already quite some Nebula mojo in the files.
So, do some tests. Apply 96k Nebula directly onto your 44.1k files and bounce. Apply 96k Nebula to upsampled versions of your source files, bounce, and leave them at 96k. Take that last test but make a copy which you downsample to 44.1k again. So you've got three sets of files now; start comparing. See if you can spot the difference and if you find it worthwhile. Only you can tell.
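One way to make that comparison concrete is a null test: subtract two bounces and look at the residual level. A sketch, assuming the files are sample-aligned and at the same rate (the file names, and loading via the soundfile package, are placeholders for whatever you actually use):

```python
import numpy as np

def residual_dbfs(a, b):
    """Peak level of the difference between two aligned signals, in dBFS."""
    n = min(len(a), len(b))
    diff = np.asarray(a[:n]) - np.asarray(b[:n])
    peak = np.max(np.abs(diff))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

# Hypothetical usage -- load your two bounces first, e.g. with soundfile:
# a, _ = soundfile.read("nebula_direct_44k.wav")
# b, _ = soundfile.read("nebula_96k_downsampled_44k.wav")
# print(f"residual peak: {residual_dbfs(a, b):.1f} dBFS")
```

A residual down around the noise floor means the two approaches are effectively identical; anything well above it is a real difference your ears can then judge.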
Really great to hear how you work, and it makes a lot of sense.
I know this might sound stupid, but I never thought about the DAW upsampling as you import samples. Don't know why; I use Cubase and it matches the sample rate of imported files to the project's sample rate, so I'm sure it will upsample to 48k or 96k on import.
Interesting that you drop down to 44.1 when mixing... (if I understood correctly). Is this because you use very little Nebula at the actual mixing stage?
The reason this all came up was that I was reading some posts elsewhere which suggested that upsampling (a sample) to 96k won't improve the sound, but that when you come to mixing tracks together (upsampled from 44.1 to 96k), that is where you get the benefits. I wasn't sure how accurate this statement was, so I went in search of answers.
No question, you are absolutely correct. I will run those tests and see if I can hear any difference myself.
Another thing... if I'm reading correctly, you use 96k to get lower latency? Which I think is what Enrique also suggested... that a higher sample rate uses less CPU?? (but more drive space...?) Err... confusing. Is this because there is less interpolation going on?