i guess we are going to have to agree to disagree, because i don't agree with how you are interpreting these null tests, and most of your assumptions about what's going on are based on them. if there were random transient excitation you would be able to see it. i rendered the raw wav with peak and rms modes, zoomed into the waveforms vertically many, many times, and when i switched between them i saw absolutely no difference. not one pixel was different, no matter how close i zoomed in. i don't hear any difference either.
if i can zoom in so that i'm essentially looking at them under a microscope and see absolutely no change when switching between them (nor hear it), and see absolutely no small variation between their frequency responses no matter which identical segments i analyze, i can hardly say that there is any random excitation. it should be measurable, but it's just not there. i even used a linear phase eq to filter everything out below ~1khz to remove any masking caused by the lower freqs, and still couldn't hear anything. i zoomed in on both files again and could zoom in even further now, and still saw no difference in amplitude anywhere. you're just assuming that it's there because when you null things there is stuff left over. one .wav having material the other doesn't isn't the only way to have something left over after a null. phase differences will leave stuff behind too, even if both wavs have almost identical content.
that's what i think is going on. peak and rms will have different attack timing, with peak catching transients and RMS not, and a phase difference caused by the attacks would leave the transients behind after nulling peak against raw. which is exactly what it does. so the leftover is some higher freq stuff left from the transients. it's probably also a little frequency specific, which is why it sounds a little like distortion. if you null between 15ms and 30ms using RMS both times, there is still leftover stuff, but now it's the body of the drums after the transients, because that's what falls in that time range. but with that leftover you can hear better that it isn't any kind of distortion, it's just that part of the drums.
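to make that point concrete, here's a quick python sketch (toy numbers, nothing to do with nebula's actual engine): two tones with literally identical frequency content, where one just has a tiny phase shift, still leave audible material behind after a null.

```python
import math

sr = 48000
freq = 1000.0
n = 480  # 10 ms of samples

# two tones with identical frequency content; b just has a small phase shift
a = [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]
b = [math.sin(2 * math.pi * freq * i / sr + 0.01) for i in range(n)]

# 'null test': subtract one from the other and look at what's left
residual = [x - y for x, y in zip(a, b)]
peak_resid = max(abs(v) for v in residual)
db = 20 * math.log10(peak_resid)  # relative to the full-scale tone
print(f"peak residual: {db:.1f} dBFS")  # roughly -40 dBFS, from phase alone
```

so a non-zero null proves the two files differ, but it can't by itself tell you whether something was added or merely shifted in time.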
the nulls in this latest test you did, in my mind, demonstrate what i'm saying about the phase difference. both nulls have it though. yes, the one between peak and raw is louder. but it's only about 7db louder on average. to me the explanation is that it's louder because transients are louder, and because peak mode has the faster attack, so its attack stage is happening earlier, during the transients, and the phase shift caused by the attack is happening during the transients, so the null leaves them. with the rms/raw null the transients aren't there because RMS doesn't detect them, which is what i was saying in the first place. the attack doesn't kick in until the body of the drum, so that's what's in this null. and that's why this null is quieter, because the body is lower than the transients.
yr wrote:I'm pretty sure that if you bought a new "clean" preamp or A/D converter and realized that it has a built-in random transient exciter (as "peak" mode does) you would send it back to the manufacturer...
i would, but if you can only measure the transient excitement by doing a null test then you aren't conclusively proving that it's there. phase difference leaves stuff behind after a null too. the fact that the 'transient excitement' cannot be seen on the processed files before the null, no matter how close you look at them or how you analyze them, tells me that it's probably subtle phase shifting.
yr wrote:Just did another test using a clean (no thd), flat freq preset and I'm pretty much convinced that "peak" mode's handling of transients is worse then the rms mode (be it phase distortion or any other artifact).
i would word it that the rms version just isn't handling the transients at all, which is common knowledge about rms and exactly what should be expected with nebula. i agree that it's phase distortion/shifting, which is what i've been saying, but i don't agree that peak is worse than RMS, because they both do it. the difference is that it happens quicker with peak, because peak actually catches the transients, where RMS doesn't. besides that, what's left after the null is like over 60db below the original level in each case. if that were added harmonic distortion at that level it would be pretty bad, but if it's because of phase shifting it's not that bad. i think this is evidenced by the fact that it's always been there but people still love nebula. it's always there, no matter which mode you use or which attack time you use, and it's always about the same level below the original (if you compare identical tiny sections after the null to the same sections before, to avoid bias due to transients being louder). in my mind all this has shown is that peak mode places it earlier, in the transient, because peak mode actually catches the transients just like i said it would, whereas RMS doesn't.
In short- the difference between the modes in the test is "averagely" 7 dB in terms of phase distortion (only 7dB? -that's a lot!) + the peak mode shifts part of the transients because it can't handle the fast changes. That is 1 preset with 1 kernel with no thd. Now multiply that by the number of Nebula presets that you use and you get the idea how significant it is. Not to mention the higher levels of artifacts when combining high peak levels with high saturation in Nebula. There is a good reason why Nebula uses average Env type by default.
if you are just going to completely ignore what i'm saying i'm going to bow out of this. i explained that the 7db difference is probably just because the transients are louder. nulling peak with raw leaves the transients, because with peak the attack happens during them, because peak actually catches the transients. nulling raw with RMS doesn't leave the transients because the attack happened after them. the 'phase distortion' is still there, but it's not in the transients, it's in what comes after them- the body of the drum, which is at a lower level going into the program already. this completely explains the difference in level. it's not that peak causes more phase shifting, it's that the shifting happens in places where the audio is louder to begin with- the transients. it's not that the phase distortion is louder with peak mode, it's that drum transients are louder than the body of the drum, and that's where the phase distortion happens with peak. it still happens with rms.
so yeah, of course, if you compare the raw/rms null to the raw/peak null, the raw/peak null will be a bit higher in level. because transients are louder. so you can't compare like that. select segments of the sound left over in each null, measure the average amplitude of those segments, then measure the amplitude of the raw file at those same locations. the null is around 60db lower in both cases. so if you measure it correctly, and compare the nulls to the original rather than to each other, you see that the phase distortion is basically at the same level.
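that measurement is easy to sketch in python (toy drum numbers, not real test data): two residuals that are each exactly 60db below their own source material will still look about 12db apart if you only compare them to each other.

```python
import math

def rms(seg):
    return math.sqrt(sum(s * s for s in seg) / len(seg))

def db(ratio):
    return 20 * math.log10(ratio)

# toy 'drum' at 48khz: 5 ms transient at full scale, 50 ms body 12 db lower
transient = [1.0] * 240
body = [0.25] * 2400

# pretend null residuals, each exactly 60 db below the material they came from
resid_peak = [t * 0.001 for t in transient]  # null that left the transient
resid_rms = [b * 0.001 for b in body]        # null that left the body

between_nulls = db(rms(resid_peak) / rms(resid_rms))  # about 12 db apart
peak_vs_orig = db(rms(resid_peak) / rms(transient))   # -60 db
rms_vs_orig = db(rms(resid_rms) / rms(body))          # -60 db
print(between_nulls, peak_vs_orig, rms_vs_orig)
```

comparing the residuals to each other only measures how loud the surviving material was; comparing each residual to its own source segment is what actually measures the process.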
yr wrote:Not to mention the higher levels of artifacts when combining high peak levels with high saturation in Nebula.
you haven't shown this.
yr wrote:There is a good reason why Nebula uses average Env type by default.
i believe originally RMS was what was used. then giancarlo decided that env was better. that's the reason. it's because someone decided it. i decided otherwise and i've given my reasons. you don't have to agree, nobody does, but you've completely ignored most of what i've said in response to your tests. how can you explain that there is supposedly what you first called 'inharmonic distortion', then switched to 'excitation' which would be harmonic, or artifacts at all, if you can't see them or measure them in any way? if they are there i should be able to see them in a freq response or by zooming into the .wavs. i see absolutely nothing at all. i've even filtered out bass and zoomed in further and still the files look identical. there's nothing being added, except for some phase shifting which is always happening, has always happened and will always happen. peak just puts it in the transients instead of in what follows them. if anything, you need to explain why it's better to have the phase shifting happen in the bodies of the drums rather than in the transients. and it would have to be a good enough reason to counteract the fact that RMS just doesn't act on transients at all, and uses samples that may have been taken at -40dbvu for transients that might be around -10dbfs.
would it make you happy if i gave this disclaimer?- end users should probably leave this stuff alone if they think nebula already sounds good, unless they are willing to do the listening tests and read everything that's been said and have some understanding of it. otherwise, just forget everything i've said. actually, just forget it anyway.
i'm going to continue releasing my stuff in peak when i think it makes sense, you can do whatever you feel is right, yr.
I'm not ignoring what you say, I just don't understand the logic and feel like you're shifting your point of view. When you run a preset that should have no effect (ideally), the phrase "catching the transients" has no meaning. It is not a compressor and Nebula is not "catching" anything- the phase distortion is purely a by-product of the playback engine. It is measurable and I believe it has negative sonic implications far greater than, for instance, which AD converter was used for the sampling. Think about it, if your AD was introducing such levels of phase distortion would it even pass the most basic loop-back test? For sure not. Another important point here is that we are not talking about a fixed phase shift (which is problematic enough) but phase distortion that relates to the transients (the topic of this thread).
When you suggested that the peak mode could be better for preamps I was happy to test it. Having done so and gone through a) not liking the sonic effect, b) showing that it generates higher levels of phase distortion and c) not being given any measurable data nor audio examples to believe that peak mode actually improves things, I would say that the "burden of proof" is on you.
What you do with your libraries is of course for you to decide, but don't forget- Nebula is quite complex for many users in terms of finding the "ideal" settings for various presets. So any general suggestion coming from a commercial 3rd party developer can cause serious waves.
Sure, but in the given example, phase distortion is a negative by-product of the engine and not a positive sign of higher dynamic accuracy. If we agree that Nebula is not perfect (which nobody claims) and that we try to find the best ways to work with (or around) the limitations of the engine, then any change to our collective "general wisdom" should be well tested and open for discussion.
yes, the phrase 'catching the transients' does have meaning, even if a program isn't a compressor. any dynamic program in nebula still uses a level detector. that's what the peak and RMS modes are for. for the 10th time, RMS is commonly known to not catch transients. it still applies to non-compressor programs with dynamics at 100%. the fact that you think it doesn't, and that you are still ignoring my question about why i can't directly see the supposed artifacts with any kind of analysis and without nulling, makes me feel like i'm wasting my time here.
even for a preamp program, 'catching transients' still applies. there are samples taken at different dynamic levels. the detector in nebula detects the incoming level and determines which sample to play based on that. RMS mode will not see a transient that hits at -10dbfs, so it will not play the corresponding dynamic sample. instead it will play one that corresponds with whatever the current RMS level is, or if it was a lone drum hit with nothing before it, the transient will use the lowest captured sample for that program. in either case it should be using a higher level sample. in the 2nd case it should be using one MUCH higher, and the result is that the transient is actually using a lower dynamic sample than the body, which is the opposite of what should happen. this is factual stuff here, and fairly basic. i can't explain it any more simply than that.
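a toy python detector shows the idea (this is just a one-pole envelope follower i made up for illustration, not nebula's actual detector): a lone 1ms hit at -10dbfs registers near its full value on a fast 'peak' follower, but tens of db lower on a slow rms-style averager, so the averager would pick a much lower dynamic sample.

```python
import math

sr = 48000
# lone hit: silence, then a 1 ms burst at about -10 dbfs (0.316), then silence
sig = [0.0] * 1000 + [0.316] * 48 + [0.0] * 1000

def follow(signal, attack, release, sr=48000):
    # one-pole envelope follower; a made-up stand-in for a real detector
    env, out = 0.0, []
    ca = math.exp(-1.0 / (attack * sr))
    cr = math.exp(-1.0 / (release * sr))
    for x in signal:
        level = abs(x)
        coef = ca if level > env else cr
        env = coef * env + (1.0 - coef) * level
        out.append(env)
    return out

peak_env = follow(sig, attack=0.0001, release=0.050)  # 0.1 ms attack
rms_like = follow(sig, attack=0.015, release=0.050)   # ~15 ms averaging

print(max(peak_env), max(rms_like))  # ~0.316 vs ~0.02
```

with the slow averager the detected level never gets anywhere near -10dbfs during the hit, which is exactly why a lone transient ends up triggering a low-level sample.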
i haven't changed my stance at all, you are the one who has gone from claiming it was inharmonic distortion, to harmonic excitation, and finally admitted that i was right about it being phase distortion (without actually acknowledging it). but for the 900th time, drum transients are louder than the body of the drum. if you do a process that leaves the transients (nulling something that put a phase shift in the transients with the dry version), you will then have a seemingly louder result compared to doing a process that leaves the body (nulling something that put the phase shift in the body of the drum with the dry version). the process that leaves transients will be louder. but it's because transients are louder.
you are looking at a null that leaves transients, and comparing it to a null that leaves a drum body, and declaring that because the null that left transients is louder, that there is some kind of distortion that was louder in that one. completely ignoring the fact that the transients are louder.
if i used 2 different tones for this test, one that was louder than the other, and when it showed louder results after the null than the other, i declared that it must have more distortion, would that make sense? no. this is what you are doing.
and i really don't see how the burden of proof is on me when you are the one claiming there is something there that can't be seen. show me a way to analyze a file processed with peak, compared to one processed with rms, where i can see that the one with peak has added artifacts, without nulling anything. this should be possible. but i've zoomed in on the waves and see no extra material of any kind, nor do i see it in a frequency response.
i'm done arguing about it because either i'm not wording my points clearly enough or you just don't understand it (besides the fact that you don't address half of what i say, even though i've looked at everything you've presented, replicated your tests and addressed it all), but either way i'm feeling like i could be spending my time somewhere else. don't use peak. nobody use peak (i still will though).
and of course, i reserve the right to go back on my word and say more on this, but it's only going to happen if a) you address the two or three points i've made that you have been ignoring. i'm not pointing them out again either because i've repeated them like 10x each b) i get tired of working on other stuff and have nothing better to do
You can't "see" the phase shifts because you are using the wrong tool. Do you also check thd by looking at zoomed samples? Did thd even exist before you discovered the vst analyzer?
You have a preset with no thd, no phase shifts, no frequency shifts. Any of those added to your output file are artifacts by definition. Since both times the processed file has the same level, with the only parameter changed being the mode, there is certainly room to discuss the levels (and nature) of phase distortion. Unlike the intentionally flawed test that you described.
Ultimately, we are trying to establish which mode will bring us closer to the sound of HW preamps. I think for clean(er) presets the answer should be clear by now. If you could provide tests or samples that demonstrate the upside when using more saturated presets please do. But you need to go a bit further than "peak is more like how HW works and therefore must be better" (while ignoring the playback engine limitations, artifacts etc.)
i really can't say i appreciate the accusation that i intentionally concocted a flawed test. i obviously felt peak was good from my 1st comment on it, but i checked out your tests and addressed everything you said because i thought you could be on to something. you have seemed sure that peak is no good from the start, regardless of the fact that your initial tests clearly were flawed and you were assuming a lot based on them, but i never claimed that was intentional. now i have to wonder if you have something else you are trying to prove.
i understand that you wouldn't look at a waveform to see phase shifting, and that was actually my point when you were trying to suggest there was inharmonic or harmonic content being added. you SHOULD be able to see that, and it wasn't there. after i pointed it out THEN you started talking about phase distortion, but you still keep jumping around between the 3.
now you are talking about comparing things to hardware and how i'm not, but i don't see a pic of these new tones going through the actual hardware. the 2nd one looks similar to something that could happen with high saturation levels. also, i thought your method of using programs with little to no saturation made good sense because it mostly removed the sampled harmonic distortion from the results, but now you are using a heavily saturated program? anyway, i made a similar tone, and definitely didn't always see the same results you got, across various programs. i think some of this stuff is going to differ between programs. i actually had a few cases where switching to EVF17 mode produced a lot more activity below the fundamental (i had it at 210hz) than peak did. it was way more significant than your example in the first pic, and peak was cleaner.
one thing that could interfere with these tests is that inharmonic content/artifacts can definitely be in the samples. i've had cases where there is noise above ~25khz that gets triggered by anything going into the program, and it's always at the same freq spot (so it isn't relative to the input). it was in the samples, because i could go to the samples and filter it out and then it wouldn't be triggered by audio going into the programs. so it is in the realm of possibility that your first pic there could be explained by there being some kind of inharmonic artifacts in the higher level, more saturated samples, and peak mode just triggers more of it because it plays those samples when the higher parts of the tone get detected, whereas RMS mode mostly ignores them so less gets triggered. if it's just a product of peak mode itself it should happen that way with every program, including ones with low saturation or even 1k programs, and i didn't see that with the handful i tried of various types. one produced more with peak, but others didn't and one did more with EVF17.
i really feel like you are making awfully declarative statements based on a few test results which could have any number of possible explanations that would need to be accounted for and controlled, but haven't been. you just asserted that peak mode is no good because of nebula limitations, so your mind is already clearly made up. i already tried to get out of this once or twice and pointed out that you are free to use whatever mode you want. you can continue to talk about this with anyone who will reply, but i'd like to not be dragged back in by accusations that i'm being intentionally dishonest somehow.