
Changing XML file for high quality sound and rendering?

Tips & tricks, working results, technical support

Re: Changing XML file for high quality sound and rendering?

Postby brp » Thu Aug 29, 2013 3:48 pm

hey guys

i'll explain it to you!

freqd works with the FAST Fourier transform (FFT) and timed works with the discrete Fourier transform (DFT). so freqd logically has the typical windowing disease, like all those cheap spectrum analyzers where you can actually see what's wrong with it ;-)

the only reason for choosing freqd is its CPU-friendliness!!! that's always true, for every program!!!

BUT:

the compromise with freqd can be VERY different (logically) depending on kernel size and prograte etc.
you can imagine the kernel (or a part of it) as the fft window, which needs window functions, which are nothing else than a fade-in and fade-out at the start and end of the window.

maybe g can give a hint here how nebula gets its window sizes :roll: because i don't know this. it could also depend on the sample rate and nebula's buffer size (which i believe it doesn't, because then it would sound different at small buffers).

as you can now imagine, fft will work better for big windows than for small ones, and the same is true for the prograte.

transient loss: if a transient sits at the beginning of a window, it gets hit by the fade-in (and by the fade-out if it sits at the end)...
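to make this concrete, here is a minimal sketch (plain python/numpy, nothing nebula-specific; the 512-sample window and the hann shape are just my assumptions for illustration) of how the fade at the window edges eats a transient sitting at the start of a block:

[code]
import numpy as np

block_size = 512                  # assumed fft window size, for illustration only
block = np.zeros(block_size)
block[2] = 1.0                    # a "transient": one loud click near the window start

# window function = the fade-in/fade-out at the block edges (hann here)
window = np.hanning(block_size)
windowed = block * window

print("before windowing:", block[2])       # 1.0
print("after windowing: ", windowed[2])    # ~0.00015 -> the click is almost gone
[/code]

the same click placed in the middle of the block would pass through nearly untouched, which is the whole asymmetry.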

so use freqd for reverbs and timed for compressors. try lowering the prograte when you lose transients on a static eq program; if that doesn't satisfy you, switch to timed and hope your cpu still likes you afterwards!! ;-)


all the best
pascal
brp
User Level IX
Posts: 99
Joined: Tue Mar 30, 2010 1:02 pm

Re: Changing XML file for high quality sound and rendering?

Postby Tim Petherick » Thu Aug 29, 2013 4:31 pm

brp wrote:
maybe g can give a hint here how nebula gets its window sizes :roll: because i don't know this. it could also depend on the sample rate and nebula's buffer size (which i believe it doesn't, because then it would sound different at small buffers).





I've stated this myself about buffer sizes...
Especially when I tried to explain to people about using freqd on compressors!

All my main compressor releases have used TIMED for this reason. I've also wanted to use it on some of my dynamic EQs, but it's just too heavy. I've mentioned before that buffer size affects quality because of program rate changes in freqd, but it was totally missed.

That's why I said it would be good to get down to buffer sizes lower than 128:

viewtopic.php?f=13&t=25214&hilit=buffer+size+64&start=20

viewtopic.php?f=11&t=1505&p=13501&hilit=buffer+size+64#p13501

I'm thinking one thing that may change this whole problem is Nebula H, because I think we're going to get lower latencies...

Let's hope that Nebula H will happen.




Tim
Tim Petherick
Expert
Posts: 1352
Joined: Sat Apr 17, 2010 4:07 pm
Location: Bath , Uk

Re: Changing XML file for high quality sound and rendering?

Postby Cupwise » Fri Aug 30, 2013 3:29 am

ngarjuna wrote: The only thing I'd like to say is: my whole point was not a matter of sacred ground but rather a matter of giving new users bad advice.

i get that.
ngarjuna wrote:I think that's an appropriate time to issue warnings that this isn't a no compromise, automatic improvement;
the thing is that if you know what you are doing and do it right, the only compromise is going to be that cpu use goes up.
ngarjuna wrote:and, more importantly, that whether or not you consider it an improvement

like i said, the gains you can get from a faster program rate are a measurable thing, and i don't see how anyone could not consider it an improvement, if you can get it with a minimal (imperceptible) to no increase in artifacts after switching from freqd to timed. in the analog world, things react instantly. nebula's prog rate isn't reflective of that: it's only making changes (whether dynamic or other cases of sample interpolation) once per block of prog-rate time. the analog world doesn't act like that. so it's an indisputable fact that a faster program rate would ALWAYS be better, if it could be obtained without any other ill effects happening. which again comes down to artifacts. so again, if the faster program rate can be had without a perceptible increase in artifacts, i'd consider that an indisputable improvement, with the only tradeoff then being CPU.

i explained the effect that much slower program rates have on reverbs. it's why they sound like they're behaving differently each time a drum hit happens. a real plate or spring or other reverb won't sound like that, because it doesn't have a 180ms program rate restricting its ability to react. nobody can argue that a faster program rate wouldn't benefit reverbs, if it were possible. the same logic really applies to everything else too; reverbs just highlight the issue more because of how long their program rates are, so you can actually hear the results a lot more clearly. but consider that all of your preamp programs are reacting up to 20ms (the prog rate for preamps) late on some transients. how can they be said to treat transients accurately to the hardware in that case? a 20ms prog rate means up to a 20ms late reaction.
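just to put the timing argument in code form, here's a minimal sketch (plain python with made-up numbers; this is my own simplification of the idea, not nebula's actual engine). a detector that only produces one level estimate per prog-rate block is always acting on the previous block's level, so a short transient gets reacted to a full block late:

[code]
import numpy as np

fs = 44100
prog_rate_ms = 20.0                       # the preamp prog rate discussed here
block = int(fs * prog_rate_ms / 1000)     # 882 samples per update

x = np.zeros(fs // 10)                    # 100 ms of silence...
x[int(0.002 * fs):int(0.003 * fs)] = 0.9  # ...with a 1 ms transient at 2 ms

# one level estimate per prog-rate block
levels = [np.abs(x[i:i + block]).max() for i in range(0, len(x), block)]

# the level acted on during block k is the one measured in block k-1,
# so during the transient the engine is still responding to silence
acted_on = [0.0] + levels[:-1]
print("measured:", levels[:3])     # [0.9, 0.0, 0.0]
print("acted on:", acted_on[:3])   # [0.0, 0.9, 0.0] -> reaction ~20 ms late
[/code]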

sorry, but no matter what you think, that program cannot be said to 'be like the actual hardware' with regards to transients. 20ms can miss a transient entirely. again, this is why compressors use the shorter length, to allow ~2ms program rates. and even then, even THEN, the transients are still handled inconsistently. you can hear it. faster program rates would always be better, ALWAYS, barring any bad side effects.
ngarjuna wrote:
you might in fact be decreasing fidelity to the hardware.
sorry, but to me that statement doesn't really mean anything. 'decreasing fidelity to the hardware'? the hardware doesn't have a 20ms program rate that only allows it to react dynamically every 20ms. it reacts instantly. 20ms isn't instant.
ngarjuna wrote: Which is exactly what you said in one of those paragraphs there ... and that FREQD, in some cases, would be the higher fidelity choice, if I understand you correctly.

no. i wasn't saying that. i never said anything about fidelity, or that freqd would ever be better. i said faster program rates were better, but that they may increase artifacts. the point is that you may be able to fine-tune the prog rate to a specific rate where the artifacts aren't significantly increased over freqd. that takes lots of testing and stuff that i don't think typical users are going to have the time to do or figure out.

i'm basically defending my use of timed as a dev. but see, here's the problem. if i start using timed in my stuff, then it's obviously because i believe there is a benefit (faster prog rates being the main one). if i think faster program rates can benefit my programs, why would that only apply to MY programs? it wouldn't. it's a fundamental part of how nebula works. a preamp with a 20ms program rate cannot be said to handle transients 'like the hardware', especially if, on top of that slow program rate, it's also using RMS or something similar as its detector, which uses averaging. to me, THAT takes away from the 'fidelity of the hardware', whatever that means.
ngarjuna wrote: If this were a simpler subject, that would be one thing; but it's clear from your rather detailed understanding of the issues that there are tradeoffs either way.
again, the only tradeoff is cpu use, if you do it in a way that avoids a significant increase in artifacts. otherwise there are no tradeoffs. it's a measurable, provable improvement.
ngarjuna wrote:
So I return to my original suggestion that the program developer has a better view of the various tradeoffs and issues their programs are bound by than end users

i kind of don't agree with this sentiment, in general. it may or may not be true. the fact of the matter is that anyone can sample something with NAT and get a great-sounding result. have an expensive tube preamp? hook it up, use a default preamp template, and you have a good-sounding program. you don't have to know anything about how it works. acustica made that possible. they provided the templates, which i talked about. those templates carry most of the water. anyone can use NAT to sample an expensive preamp, using just the preamp template and NAT, both of which were designed by acustica. in that case, that person has done nothing to that program. all they really did was hook a preamp to their computer, maybe calibrate levels and run some tests on the hardware end, etc., but nothing was done on their part on the actual program side. i gave you a list of things you can look for to see if a program has been customized in any way by a dev. anyone can look at those things and see if a program has been customized beyond the acustica-provided templates. now, what i don't agree with is this part of your wording:
"program developer has a better view of the various tradeoffs and issues their programs"

i only agree with that if the developer has actually put effort into customizing their stuff, or into testing/seeing if there are any possible gains. if they're relying on templates provided by acustica, how can you really say they understand anything about the program itself? i'm not calling out any specific devs, or even saying i never just used the acustica-provided templates myself. but there is a mysticism surrounding sampling, and a lot of it is bunk. if someone uses a template as it was provided by acustica and makes no further adjustments to that program, then really it was ACUSTICA who made that program. (people use the word 'program' generically to describe a nebula effect. i'm differentiating between the actual program and the vector. in the scenario i just gave, the person made the vector, which is what contains the samples. acustica made the program. the program dictates how those samples are used, so it's at least as important as the vector, if not more.) all the 'dev' did in that case was the sampling, NOT the programming. anyone could do that, with minimal time and effort. that's a testament to acustica, nebula, and NAT. in that scenario, the person should get very little credit for simply hooking a preamp to their A/D-D/A and sampling. it's NAT that did all the work and made something out of it.

you have two different aspects: the sampling, and the program. if i use the default template from acustica and never customize or change it beyond that, then i did nothing to the program. i have proven nothing about my understanding of that program, and i can't be said to have fine-tuned or tweaked it in any way. that's just a fact. so these general statements that all devs always understand the tradeoffs or settings of their programs better than anyone else will, i don't agree with that, because there are things i myself don't understand. i don't think anyone understands ALL the tradeoffs, or everything there is to understand about those settings. nobody. but when probably MOST of the programs out there are using default templates, which means most of those programs were actually designed by ACUSTICA, well, that really kind of flies squarely in the face of the whole 'the dev has it set the best way' statement. if it is set the best way, it's because acustica made the template for that program (essentially, acustica made the program).

i've seen plenty of programs where the only thing done was changing the padin/padout settings (including the first things i made). that's hardly a customization. and again, a 20ms prog rate means preamps are missing transients entirely, so if you consider that the 'best way', well... i disagree. and again, i'm not saying every program HAS to be customized, or that a program using the templates won't still sound good, because they do. i'm just saying that the templates work well, but not necessarily in the BEST possible way. getting to the BEST possible way requires fine tuning. it takes experimentation. it takes work. it takes time and effort. either it's done or it isn't.
ngarjuna wrote: imho in Nebula it actually does matter to a larger degree how much something sounds like what it's being sold as.
ok, well, if that matters to you, go find me a preamp that reacts dynamically in 20ms blocks which it averages before using the next dynamic result (amount of 'saturation', harmonics, etc.). you won't. personally, i don't think i ever said in any of my marketing that any of my stuff with a program rate over 10ms still treated transients like the hardware. if i did say that, i would be patently wrong, especially for an equalizer that doesn't even have dynamic samples and cannot possibly hope to recreate 'the way hardware handles transients'. but even a preamp... you need a faster program rate to even hope to have it act like the hardware. 20ms and RMS detectors completely miss transients. a hardware preamp doesn't. this is scientific fact. if you think i have my facts mixed up, explain to me how a system that works on windows or blocks of time around 20ms long (equal to the program rate) can catch a transient that happens almost instantly. it can't.
ngarjuna wrote: Like I said, if people want to tweak and test and compare, have at it; there's a whole big beautiful engine in there with lots of exposed parts. But when those same people return to the forum because they made a bunch of tweaks to the engine and now some latest, greatest reverb library won't play back correctly, what then?

that'd be on them. hopefully they know better than to blame the dev they bought it from, and hopefully they saved the original library before doing it.
ngarjuna wrote: To the people handing out advice about how to switch from FREQD to TIMED: what's your guaranteed level of support when this advice affects their ability to use some other program or library?
it's only going to affect the programs they make the switch with.
Cupwise
Expert
Posts: 773
Joined: Tue Nov 23, 2010 2:03 am

Re: Changing XML file for high quality sound and rendering?

Postby ngarjuna » Fri Aug 30, 2013 4:57 am

Maybe it's just me, but I really feel like you're going back and forth to argue with me. Which is fine, I guess, have at it…but I'm failing to see your point, honestly.

i'm basically defending my use of timed as a dev.

Yeah, I get that. But I don't get why. Maybe you missed the part where I said that it's entirely appropriate for devs to test and tweak their products (which is so obvious it probably doesn't even need saying). Or that when experienced non-devs delve into tweaking, that's a different matter, and I've done my best to just stay out of those threads (I think those threads are like land mines for noobies and are exactly how we ended up here today, which is the criticism about TIMED that I have voiced in the past). If you're under the impression that I've questioned or challenged your use of TIMED in YOUR libraries, then that's mistaken; I even said as much, I believe. But you've twisted that into some kind of developer-worship where the unwashed masses should just do as they're told. Nothing could be further from what I was saying.

the thing is that if you know what you are doing and do it right, the only compromise is going to be that cpu use goes up.

Well, that's a pretty significant caveat, though; you yourself were the one who said the amount of time it would take to do this effectively would make a person a de facto developer. So in reality there are two huge compromises: the time spent and/or the artifacts that result from a less-than-perfect approach to the tweaking.

so it's an indisputable fact that a faster program rate would ALWAYS be better,

There's a lot I don't know, so maybe I'm barking up the wrong tree; but I was taught that there is always a tradeoff when you're doing frequency/time transformations. Increasing the accuracy of the time domain would normally have the automatic effect of decreasing the accuracy of the frequency domain. Is this not the case with Nebula's transformation algos? I'm asking that as a question not trying to suggest an answer, btw.

sorry, but no matter what you think, that program cannot be said to 'be like the actual hardware' with regards to transients.

Come on, dude, that's just pedantic. You know exactly what I'm saying when I suggest that Nebula programs rely mostly on their ability to recreate as much of the sound of the hardware from which they're sampled as possible. That's what made the classic Nebula programs classics, that's what makes the best selling Nebula programs best selling. This whole "there are inherently going to be differences" schtick gets a "yeah, of course", but you're really not seeing the forest for the trees.

sorry, but no matter what you think, that program cannot be said to 'be like the actual hardware' with regards to transients.

A program doesn't have to null with the hardware to "be like" it; in fact, that's the precise function of a simile in language: that while there are some differences there are also noteworthy similarities being compared. The very meaning of the phrase "be like the actual hardware" means something distinctly different than "is identical to the actual hardware".

if i think faster program rates can benefit my programs, why would that only apply to MY programs? it wouldn't. it's a fundamental part of how nebula works.

Wait a second…who ever said or suggested anything of the sort? I suggested that a person who sampled the hardware (and has access for valid listening comparisons), a person who has spent (and will continue to spend) hours and hours on the project of sampling and convoluting, a person who has the creative impetus for what this program set should be in the first place, is the person who should be tweaking the engine for a particular library. If a user falls into that category (and it's quite possible that many users could if they wished to), then it applies to them too; if another dev falls into that category, then it applies to them too.

a preamp with a 20ms program rate cannot be said to handle transients 'like the hardware', especially if, on top of that slow program rate, it's also using RMS or something similar as its detector, which uses averaging. to me, THAT takes away from the 'fidelity of the hardware', whatever that means.

I guess there must be more to audio processing than transients because people who prefer Nebula don't seem to be as hung up on this 20ms program rate as you are. I'm guessing there are a lot of VSTs out there that respond quite a bit faster than Nebula and yet…here we all are.

if i use the default template from acustica and never customize or change it beyond that, then i did nothing to the program. i have proven nothing about my understanding of that program, and i can't be said to have fine-tuned or tweaked it in any way. that's just a fact. so these general statements that all devs always understand the tradeoffs or settings of their programs better than anyone else will, i don't agree with that, because there are things i myself don't understand. i don't think anyone understands ALL the tradeoffs, or everything there is to understand about those settings. nobody. but when probably MOST of the programs out there are using default templates, which means most of those programs were actually designed by ACUSTICA, well, that really kind of flies squarely in the face of the whole 'the dev has it set the best way' statement. if it is set the best way, it's because acustica made the template for that program (essentially, acustica made the program).

That's perfectly reasonable. I can't speak for your processes, but having conversed with many of the third-party developers, I don't know of many developers selling libraries using just stock templates. I'm not saying it never happened, or even that I'd know the difference; for all I know, my very favorite Nebula programs were set-it-and-forget-it sampling sessions. But I do know that when I speak to developers, they volunteer (not that I've ever asked) accounts of all the hand tweaking that had to be done to their libraries to get them shiny; and from the forums you certainly give that impression as well, with all of your testing and tweaking and updates (I say that as a compliment, by the way; I think you've made some really great Nebula libraries, some classics even).

You said earlier:
basically what i'm saying is, all this stuff i've said about prog rate, in my opinion it's not something for end users to concern themselves over unless they have all kinds of time to experiment with the stuff

on the other hand, i don't think users should go prying around, messing with stuff, unless they know what they are doing or are willing to put LOTS of time into figuring it out.

Which was pretty much exactly my point. So why are you taking me to task over a point you explicitly agree with?
ngarjuna
Expert
Posts: 778
Joined: Tue Mar 30, 2010 5:04 pm
Location: Miami

Re: Changing XML file for high quality sound and rendering?

Postby Cupwise » Fri Aug 30, 2013 6:25 am

ngarjuna wrote:Maybe it's just me, but I really feel like you're going back and forth to argue with me. Which is fine, I guess, have at it…but I'm failing to see your point, honestly.
i made plenty of points. i'm not trying to argue. i'm just expressing places where i disagree.

the thing is that if you know what you are doing and do it right, the only compromise is going to be that cpu use goes up.

Well, that's a pretty significant caveat, though; you yourself were the one who said the amount of time it would take to do this effectively would make a person a de facto developer. So in reality there are two huge compromises: the time spent and/or the artifacts that result from a less-than-perfect approach to the tweaking.

ok, but the difference is that now you're talking about tradeoffs in terms of the person's time spent. i was talking solely about tradeoffs in the quality that nebula puts out. when you said nebula should be more like hardware, i took that as you thinking the quality is important.

There's a lot I don't know, so maybe I'm barking up the wrong tree; but I was taught that there is always a tradeoff when you're doing frequency/time transformations. Increasing the accuracy of the time domain would normally have the automatic effect of decreasing the accuracy of the frequency domain. Is this not the case with Nebula's transformation algos? I'm asking that as a question not trying to suggest an answer, btw.
i haven't seen a decrease in frequency accuracy when using timed mode in any of my tests. in fact, with timed mode you can have a faster program rate while having LONGER kern lengths, which allows a MORE accurate low end.
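the kern length part is easy to put numbers on. a minimal sketch with assumed values (this is just the generic rule of thumb, frequency resolution ≈ sample rate / kernel length, not anything nebula-specific):

[code]
fs = 44100  # assumed sample rate

# frequency resolution of an N-sample kernel is roughly fs / N,
# so longer kerns resolve the low end more finely
for n in (2048, 16384, 65536):
    print(f"kern length {n:6d} -> ~{fs / n:5.2f} Hz per bin")

# kern length   2048 -> ~21.53 Hz per bin
# kern length  16384 -> ~ 2.69 Hz per bin
# kern length  65536 -> ~ 0.67 Hz per bin
[/code]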

sorry, but no matter what you think, that program cannot be said to 'be like the actual hardware' with regards to transients.

Come on, dude, that's just pedantic. You know exactly what I'm saying when I suggest that Nebula programs rely mostly on their ability to recreate as much of the sound of the hardware from which they're sampled as possible. That's what made the classic Nebula programs classics, that's what makes the best selling Nebula programs best selling.
yeah, i know what you mean. accuracy means accuracy. the only problem is that transient response is one element that goes into making something accurate, and if a program is set to a 20ms program rate, the transient response isn't there. it's that simple. you can say vague things about quality, and yeah, i'll agree there are lots of great-sounding programs out there using 20ms program rates. but the fact of the matter is that they can't be said to handle transients the same way as the hardware. why am i even bothering to say this? because lots of claims have been made that they are handling those transients accurately. it's false, and it can be demonstrated. i'm not saying those programs suck. i'm saying they can't possibly handle transients like the hardware; 20ms is much too slow a program rate for that. fact. and that's all i was saying there. it's not pedantic. it's me pointing out something that may be an inconvenient truth, or something people don't want to admit. but that's just me being honest.

A program doesn't have to null with the hardware to "be like" it; in fact, that's the precise function of a simile in language: that while there are some differences there are also noteworthy similarities being compared.
nobody ever said anything about nulling. a tube preamp works in such a way that if you send in a transient with a loud peak, that loud peak is saturated instantly. it gets the level of harmonics dictated by the level of that peak and by how the preamp handles sound at that level. a program made from that preamp which uses a 20ms program rate will not do that. it will MISS that transient. that transient peak will use a dynamic sample taken at a lower level, not at the higher level. it WILL NOT get the appropriate level of saturation, as it would in the hardware. and not only that, but most preamps are using EVF17 detection or RMS, which delays their reaction to the incoming level even further. preamps don't do that. they react instantly. i'm laying this out in basic, plain english and being specific. this is how it works. i'm just trying to demystify this a bit. people can say this or that program reacts to transients like the hardware all they want, but if it has a slower program rate, they are wrong. period.
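on the averaging point, here's a minimal sketch (plain python; the 5 ms window is an assumption, i'm not claiming that's what EVF17 actually uses): an RMS detector averaged over even a few milliseconds reports way less than the instantaneous peak while the transient is happening:

[code]
import numpy as np

fs = 44100
x = np.zeros(fs // 100)              # 10 ms of signal
x[:int(0.001 * fs)] = 0.9            # a 1 ms transient at the start

win = int(0.005 * fs)                # assumed 5 ms RMS averaging window
rms = np.sqrt(np.convolve(x**2, np.ones(win) / win)[:len(x)])

print(f"instantaneous peak: {np.abs(x).max():.2f}")       # 0.90
print(f"rms at end of hit:  {rms[int(0.001 * fs)]:.2f}")  # ~0.40, less than half
[/code]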

I guess there must be more to audio processing than transients because people who prefer Nebula don't seem to be as hung up on this 20ms program rate as you are. I'm guessing there are a lot of VSTs out there that respond quite a bit faster than Nebula and yet…here we all are.
that's not the point at all. the point is that people have talked about transients and how accurately nebula handles them. this all started because you made a post calling into question whether timed mode actually has any benefits. all i've done is give examples of them, and i've made an effort to demystify some things that i think have been overcomplicated. people have talked about how this or that program handles transients. i'm just saying that if it has a slow program rate, it doesn't, and it definitely doesn't handle them like the hardware. i've already said (i think in every one of my replies to you) that this isn't the only factor, so there's no need to act like i haven't acknowledged that.

that said, you said you thought accuracy to the hardware was what mattered to nebula, and this IS a factor that goes into that. now you're acting like you don't think it's important. transient response may be only one thing, but it's one thing that hasn't been there, and now, with the more powerful cpus out, i think it can be. so to me, as a dev, that's cool.


I don't know of many developers selling libraries using just stock templates. I'm not saying it never happened or even that I'd know the difference;
well, the thing is that if you don't know what to look for, you can't know. i told you what to look for. let me make a general statement: there is hype all across the audio industry. why should we just assume it doesn't exist at all in the nebula world? why has this community always taken that truth to be 'self-evident'? all i'm doing is raising some questions, and yeah, maybe making a few waves, but that's in my nature. i've always been careful to avoid saying things like 'my programs handle transients perfectly'. i've never made those claims, and honestly, in the past i didn't even care about that specific issue. you're asking me why i care about it so much, but again, i never went out of my way to make that claim. now it creates a funny situation: people have already talked about how programs that are definitely using 20ms prog rates, just like the default templates (which were designed what, like 5 years ago?), are somehow still handling transients accurately, when only recently have CPUs improved to where timed mode and faster prog rates may allow for more accurate transient response. and yet it's no big deal, because supposedly we've already had that. only, we didn't. but because it's something that has already been claimed of this or that library, the fact that now we can actually have it is diminished.


i mean, either you care about accuracy or you don't. if you do, then you can recognize that timed mode can get us closer. and if so, stick around, because i'm uploading a program that demonstrates exactly what i'm saying about transients and how they are handled by the default template (which most preamp programs are probably using), with its 20ms prog rate. by the way, i'd never done this test before, but i knew exactly what was going to happen, and it's exactly what i expected.
Cupwise
Expert
Posts: 773
Joined: Tue Nov 23, 2010 2:03 am

Re: Changing XML file for high quality sound and rendering?

Postby brp » Fri Aug 30, 2013 6:36 am

hi tim

are you saying nebula reverb sounds different at different buffer sizes, or just the non-reverb nebula? as far as i know, here lies the main difference between the two versions: the non-reverb nebula will do some truncation at small buffers to avoid killing the cpu, but the reverb version should sound the same unless the buffer gets smaller than the fft window (nebula would shrink the window or truncate it in some way).

in your case with the dynamic eq, i'd test the mixed mode using the reverb version! i'd try to avoid small or truncated fft windows, because the fades of the window function become big in relation to the whole window, meaning the sound gets kind of blurry!! when there is no dynamic program change, the bug can lie somewhere else. i think you were fooled by hearing some artifact introduced by some truncating (but i'm not 100% sure). first check that everything is set right for dynamics (rawfun1, rawfun2...) and experiment with smooth, prograte, liquidity (it should kind of blend between the kernels), kernel length, etc...

i definitely hope nebula H will be better at using multicore cpus, so that timed kernels can be used all the time ;-)
brp
User Level IX
Posts: 99
Joined: Tue Mar 30, 2010 1:02 pm

Re: Changing XML file for high quality sound and rendering?

Postby brp » Fri Aug 30, 2013 7:11 am

to all those people who don't see through the whole tweaking subject:

nebula programs are a bit like cars. when they leave the factory, they're usually optimized for fuel economy (read: cpu economy). but if you buy such a car and want to win a race, you have to tune it and not give a sh*t about fuel economy. it's that simple!!
brp
User Level IX
Posts: 99
Joined: Tue Mar 30, 2010 1:02 pm

Re: Changing XML file for high quality sound and rendering?

Postby Cupwise » Fri Aug 30, 2013 7:15 am

ok, i've just uploaded a test program that anyone can use to see what i'm talking about here. and, for maximum transparency, i made it available in a way some people probably aren't familiar with. instead of an .n2p (program) and an .n2v (vector), what i have is still essentially the same, but the program is in the form of an .xml file (actually there are two of them), and the vector is a folder with several .wavs (the actual impulses) and another xml.

this is what programs look like before they are encrypted into n2p/n2v files.

what i've done is open NAT, load the 7k offline preamp session, then reduce the number of kernels down to 1k. i didn't change anything else, and just hit 'generate', which spits out a big .wav with all the tone sweeps at various dynamic levels; that's what you'd normally run through the preamp or whatever you're sampling. only, i didn't touch it or run it through anything. i just hit 'deconvolve', which makes a program out of it. with the NAT available to devs, you get the n2p/n2v AND this xml/folder generated.

to load this like a normal program, put the two xmls (not the one in the folder with the .wavs) into your 'programs' folder, then put the folder with the .wav impulses and the other xml into your vectors folder.

what's the point? well, after NAT made those impulses, i opened the lowest-level one in an editor and dropped its volume by around 90dB. each impulse has a filename corresponding to the dynamic level it was taken/sampled at; the lowest one is named -43.50dB. so there are 30 files ranging from 0dB to -43.50dB, at 1.5dB intervals. it might be confusing, but originally, before i lowered the level of that one, all of the actual impulses were pretty much identical, with the same actual level, peaking at 0dB. the tone sweeps they were made from aren't like that; they're actually at the levels the impulse file names say. so the impulse at -43.50dB was made from a tone sweep whose max amplitude was -43.50 dBFS, for example. anyway, all of those impulses are pretty much the same (because they weren't actually run through any hardware, so they're pretty much digitally exact), except for the lowest one, which i've drastically reduced in level.


now, here's how nebula handles dynamics. it 'starts' from the lowest-level samples a program has, always. if a sound comes into nebula, the envelope follower detects it and starts trying to play the impulse matching the level of that signal (actually it interpolates between all of the different impulses, but you get the idea). but the main thing is that it starts from the lowest sample; in this case, the one at -43.50dB. so basically, if a drum or something loud happens really fast, the envelope follower has to quickly yet smoothly transition from that lowest sample up to one at the level of the transient, to play the appropriate impulse for it.

let's say silence is going into the program, then suddenly a transient reaches -8dB. nebula then has to smoothly transition from the sample at -43.5dB up to -8dB, and it has to go through (interpolating) all the ones in between for this to happen smoothly.
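to put rough numbers on that climb, here's a minimal sketch (plain python; the one-step-per-update behavior is my own simplification for illustration, the real engine interpolates, but the direction of the problem is the same):

[code]
# the 30 dynamic levels: 0 dB down to -43.5 dB in 1.5 dB steps
levels = [-1.5 * i for i in range(30)]

current = -43.5      # the follower always starts at the lowest-level sample
target = -8.0        # the incoming transient's peak
step = 1.5           # assume one interpolation step per prog-rate update
prog_rate_ms = 20.0

updates = 0
while current < target:
    current = min(current + step, target)
    updates += 1

print(f"{updates} updates = {updates * prog_rate_ms:.0f} ms to reach {target} dB")
# 24 updates = 480 ms at a 20 ms prog rate; the transient is long gone by then
[/code]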

so what's the point of all this? place the xmls and folder as i said, and load the program (they're in the PRE category) called '20ms prog rate freqd'. run a simple drum loop through it, or even just a few isolated drum hits with notable attacks that peak a fair amount above the rest of the drum. load an oscilloscope-type vst after nebula to see what's happening, or just render to wav and look at it in an editor. now bypass the program and see what happens. turn it back on. the program makes the transients disappear. now load the other program, called '2ms prog rate timed', and see what it does. i shortened the length of the kerns, so a decent computer should still be able to run it. anyway, you can see that with this one the transients are still there. they don't get dropped like they do with the one using the 20ms prog rate.

what does this prove? it proves that any program using a 20ms prog rate is playing the lowest dynamic sample/impulse in that program for the transients, because it can't react quickly enough to get up to the louder-level ones, as it would have to for it to be called 'accurate' in how it responds to transients. does it make sense that a transient at -8dB should be processed with a sample that was taken from a preamp at -43.5dB? no, it doesn't. and yet that's exactly what happens, unless you have a faster program rate that lets it reach those higher levels sooner.

i have some pngs in the zip that show my own results from this test, with alternating kick and snare, using smexoscope.

anyway, here is the zip with the programs and pics. oh, and if you look and think 'well, it does miss the first bit of the transient, but not TOO much', consider that i only dropped the level of the very lowest sample. as soon as the envelope follower moves away from that lowest sample, it's playing samples that are at full level but are still 'sampled' from very low levels. let's say again that you have a transient coming out of silence, peaking at -8dB. all i've really shown is how long nebula lingers on the impulse sampled at -43.5dB before moving up to the next one, which is sampled at -42dB. but what if we could see how long it takes to actually reach the ones around -8dB? it'd look worse. in fact, it doesn't reach them in time. the transient is already over by the time the envelope follower gets the higher impulses playing. so you never get impulses played that match that transient at all. it's totally missed. not only that, but it's probably still not playing the appropriate-level impulses for what comes immediately after that transient/drum attack.

and again, with a high-end clean preamp that hardly matters, because those things were designed to give similar results at all levels up to the point of clipping. but with something that has a lot of non-linear behavior going on, it can and will matter in terms of accuracy. a piece of tube equipment may have a slight, very subtle compression effect that increases as the input level rises. transients will instantly get that subtle compression, but not with a 20ms program made from that piece of equipment... the same goes for the increased level of harmonics that should occur at that higher level. and i'm not trying to say things that use 20ms program rates suck, just that faster program rates with timed mode can improve things, if it's done properly (with lots of testing and watching out for artifacts).
Cupwise
Expert
Posts: 773
Joined: Tue Nov 23, 2010 2:03 am

Re: Changing XML file for high quality sound and rendering?

Postby RJHollins » Fri Aug 30, 2013 8:03 am

This is a very interesting AND important topic.

Maybe I ask a favor of ALL participants ... please.

First ... this is not a new subject ... but it has not seen enough participation, particularly from Developers.

I sincerely hope ... in the true spirit of all Nebulites ... that we remember some things. We ALL are using NEBULA because of one thing ... NOT its convenience, the small footprint, the ease on our CPUs, or real-time capabilities.

Each one of us is here because of the sonics.

There have been many questions and speculations along the way. We have all read the 'newcomer' seeking to jump to the fringe. [This is all fine, as not everyone playing with plugins and DAWs is solely in it for a Professional living. There are some here wanting to experiment, play, satisfy curiosity ... etc.]

If 'We' can place each question into the realm of 'General Consensus', as opposed to personal redirect ... I think we can ALL benefit beyond Our current experience.

Let's not solo out one set of comments, because the points raised have NOT come from a single source.

As a pure User [one who has NOT delved into the land of NAT ... in fact, my version of NAT crashes, and I hope it gets fixed!] :evil:

I stated many moons ago that only the Dev has access to the actual hardware being sampled, and that it would be helpful to the community to share insights from these direct experiences. On this point, thanks to 'Cup' [and a few others] who have spoken out [sincerely].

We are just hearing some of the 'finer' details of the Nebula engine and indications of its functioning. Those with deeper DSP understanding can take this to a deeper level. For those of us of the Audio Engineer persuasion, we know one thing ... the 'Stock' Nebula libraries sound really good ... at least good enough for us to put up with all the effort of using them.

For those either looking to push the envelope ... or maybe just to get a better understanding of things Nebula ... let's maintain the proper perspective in the conversation.

As said, everything here, from the 'words of caution' to the confusion as to how audio gets Nebulitized, reflects questions many Members here have raised or could raise.

I'd be quite certain that <G> could teach everyone here [including the high-end Devs] a deeper level of understanding if he so chose.

Let us also keep in mind the computers we are working with in order to do our work. We do need some 'common' base level to design for [practicality]. As we explore the outer fringe areas, we may have to keep it at the level of 'educational conversation' due to limitations in our computational power. However, that should not stunt the conversation.

Let's not forget that some aspects of 'pushing to the edge' can influence Users' attitudes or perspectives ... not that the 'push' is bad ... but practicality and usability are essential. The stock Nebula library can already force us into a cumbersome workflow, let alone a further 'push'.

As Nebula Users, we have to accept this for the end result. Staying out of the 'personal', and framing the science and mathematics, will help pave the way to insights and an improved understanding for those interested.

I hope my comments are taken in the proper Nebula spirit.
8-)
i7-5820k, MSI X99A Plus, 16 GIG Ram, Noctua NH-D14, Win-7 Pro [64-bit], Reaper-64

NVC [Nebula Virtual Controllers]
RJHollins
Expert
Posts: 2635
Joined: Sun Mar 28, 2010 5:53 pm

Re: Changing XML file for high quality sound and rendering?

Postby brp » Fri Aug 30, 2013 9:53 pm

thank you cupwise for sharing these examples!

it shows exactly what i was talking about:

a transient happens, the envelope follower detects it, and the next prograte update plays the signal convolved with the respective impulse. even at very high prograte settings, where the kernels would switch fast enough to handle the transient correctly, at the beginning of the fft window you'll get a fade from the window function. and where is our transient? right at the beginning of the window!! now everyone should see the problem with freqd vs. transients! what you can try is to set the lookahead to half the prograte, or buffer time, or kernel length, or whatever the length of the fft window is, so that at least some transients get a chance to land somewhere in the middle of the window and therefore be processed (almost) correctly...
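a rough sketch of that lookahead idea (plain python; the 20 ms window is an assumed value, and this half-the-window rule is just my own rule of thumb, not a documented nebula setting):

[code]
fs = 44100
window_ms = 20.0                 # assumed fft window / prograte length
lookahead_ms = window_ms / 2     # suggestion: half the window

# with lookahead, a transient arriving right at a window boundary is
# shifted toward the middle of the window, away from the fade-in
transient_at_ms = 0.0
position = (transient_at_ms + lookahead_ms) % window_ms
print(f"lookahead {lookahead_ms:.0f} ms -> transient sits {position:.0f} ms "
      f"into the {window_ms:.0f} ms window")
[/code]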

with a little bit of brainpower you'll now see why g(enius) introduced the split mode ;-)

@rjhollins

i think you see it perfectly right! in the analog days, engineers got into electronics to mod and improve their gear. now we have a similar situation in the digital world, at least with nebula. it's kind of a plugin where you can screw off the housing, solder here and there, and experiment with it. of course, that implies that stupid people will think they have to solder something inside this box just because everyone does... this is just a normal human phenomenon: look at all those teens tuning their cars by making them worse ;-) humans really are some of the funniest animals on this planet. we can just accept this as a fact and try to be better ourselves if we don't feel comfortable with it. i often find myself to be exactly one of those funny, strange animals, although i do my best not to be *smile*
brp
User Level IX
Posts: 99
Joined: Tue Mar 30, 2010 1:02 pm
