On one hand, this would be very interesting to me as I'm always curious about stuff like this (lots of handmade little null tests on my machine).
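For anyone curious, the kind of handmade null test I mean is just subtracting the dry signal from the processed one and seeing what's left. A minimal sketch (the signals and the `np.tanh` "processing" here are made up purely for illustration, not how Nebula works):

```python
import numpy as np

def null_test(dry, processed):
    """Subtract the dry signal from the processed one and report the
    residual level in dBFS -- the residual is everything the processor
    added or changed."""
    n = min(len(dry), len(processed))  # assumes the signals are sample-aligned
    diff = processed[:n] - dry[:n]
    rms = np.sqrt(np.mean(diff ** 2))
    # -inf means a perfect null (the two signals are identical)
    return 20 * np.log10(rms) if rms > 0 else float("-inf")

# Toy example: the "processing" is a hypothetical soft-clip nonlinearity
t = np.linspace(0, 1, 48000, endpoint=False)
dry = 0.5 * np.sin(2 * np.pi * 440 * t)
processed = np.tanh(dry * 2) / 2

print(null_test(dry, dry))        # identical signals null perfectly
print(null_test(dry, processed))  # residual level of the added distortion
```

In a DAW the same thing is usually done by duplicating the track, flipping polarity on one copy, and summing, which is why latency compensation and sample alignment matter so much for these tests.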
But from a workflow perspective, I think decisions made based on the difference signal are utterly out of context. Why would I want to tweak the difference Nebula is making to the sound? My goal is to get the sound to where I want it, not to apply X% of Nebula distortion to something. What sounds amazing and cool as a difference might be the inferior choice in the context of the mix.
I guess what I'm asking is: other than interest/testing, how does this help production? I only ask because every feature takes development time and potentially system resources, so it would need advantages to weigh against those possible disadvantages.
How do you know that the Nebula program doesn't add artificial artifacts because of poor gain staging? I mean, most of the programs come with instructions on how to use them, and that's great, but in my case this feature would really help me double-check the result and change the gains accordingly. As for the implementation, I don't think it would be so difficult, as all the processing is already done.
Well, whatever Nebula adds or doesn't add, artificial or not, I would think it either sounds good or it doesn't. Maybe abusing a program would sound really awesome for something in context, even though, if you solo'd and difference'd it, you'd never have done it because, by itself, it sounds totally "wrong".
Don't get me wrong, I solo stuff and listen to what Nebula is doing to the track; I just try not to base decisions on that information. For me, that's just a way to learn more about Nebula and my programs in particular. It tells me useful things about what Nebula is doing, but it doesn't tell me much about how well that fits with the rest of the mix.
So while I agree that this kind of switch would be interesting, I'm still not sure I see how it would be useful.