I have several libraries running at the TIMED setting. I usually check the 'ms' length of the original first, and then match that length when I switch to TIMED.
I've also [briefly] experimented with settings higher than the original. The CPU drain usually brings me back to matching the original value.
Changing to TIMED mode is interesting ... some libraries sound better, or at least different ... and I'll sometimes prefer to use them that way. Other libraries don't yield a significant improvement, or aren't worth the CPU hit [because I may need several instances, so it's not really practical].
So I treat matching the original ms length as a minimum target ... and as David documented in his tutorial [web site], I only use either 'EVEN' or 'ODD' ... not both. Anyway ... just my 1/2 cent
For example, in normal mode a Cabinet program says 2 ms on the first kernel; I switch to TIMED mode without touching the ms. Would you say that's enough? I think the sound improves just by going to TIMED without touching the ms.
REALLY interesting, Tim! Any chance you can share how you came to that conclusion? I believe you, but I'm also curious.
On a side note, I LOVE the sound of Nebula pres, cabs, and EQs with the 50 ms timed kernels! But only recently have I started using 20-30 ms timed kernels to save CPU, and the inaccurate bass is there, but I wasn't sure why. This thread is getting some gears turning in my head!
I've also noticed that 10 ms is enough for compressors. Maybe I'm nuts here, but I also thought that lowering the kernels to, say, 1 ms would make fast attack/release compressors behave "quicker", while using the 10 ms kernels would be more accurate in terms of the tone a compressor gives. Anyone have any other observations/opinions on timed kernels with compressors?
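Just to put rough numbers on why those lengths land where they do (this is only my back-of-the-envelope reasoning, not anything official from the Nebula docs): a kernel is basically an impulse response, and an impulse response of length T can't fully capture a frequency whose period is longer than T, so the lowest "accurate" frequency is roughly 1/T. A few lines of plain Python, nothing Nebula-specific:

Code:
# Rule of thumb: a kernel (impulse response) of length T seconds
# can't fully represent a period longer than T, so the lowest
# "accurate" frequency is roughly 1/T.
for ms in (1, 2, 10, 20, 30, 50):
    t = ms / 1000.0        # kernel length in seconds
    f_min = 1.0 / t        # approximate lowest representable frequency (Hz)
    print(f"{ms:>3} ms kernel -> accurate down to ~{f_min:.0f} Hz")

By that rule, 50 ms reaches down to ~20 Hz (full bass), 20-30 ms only to ~33-50 Hz (which would explain the inaccurate bass you're hearing), and 10 ms to ~100 Hz, which may matter less for a compressor's gain behavior than for a cab's tone.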
Scottxx wrote:For example, in normal mode a Cabinet program says 2 ms on the first kernel; I switch to TIMED mode without touching the ms. Would you say that's enough? I think the sound improves just by going to TIMED without touching the ms.
Considering the input from several very knowledgeable and experienced developers, contrasted with your findings, I have a theory: what you liked instantly about the TIMED instances (with very low ms settings) is exactly what people are warning you is a possible effect of TIMED programs: low-end rolloff. If a track is a little muddy and you roll some of that mud off, it will sound night-and-day 'better'.
Only because you're preferring settings that the devs are saying pretty explicitly will roll off the lows (unless I'm misunderstanding their replies, which is possible).
I don't have much of the science behind it ... yet ... but when I set out to test TIMED mode against the 'original', I presupposed that matching the same ms value would be the reference point.
What I thought I understood ... the original ms value is matched to the length of the sample used ... settings below that value would contain less than the full waveform, thereby modifying the frequency spectrum [rolloff]. Using values higher than the original sample [in TIMED mode] would be a needless waste of CPU ... am I wrong in this thinking?
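For what it's worth, a simple truncation experiment matches that thinking. Here's a minimal sketch (plain Python/NumPy, with a made-up impulse response, nothing Nebula-specific) showing that cutting an IR short changes the low end of its spectrum far more than the highs:

Code:
# Minimal demo: truncating an impulse response below its full length
# modifies the frequency spectrum, mostly in the lows.
import numpy as np

fs = 48000                                   # sample rate (Hz)
t = np.arange(int(fs * 0.100)) / fs          # 100 ms time axis

# Hypothetical 100 ms IR: a slow-decaying 40 Hz component (long
# low-frequency tail) plus a fast-decaying 2 kHz component.
ir = (np.sin(2 * np.pi * 40 * t) * np.exp(-t / 0.040) +
      np.sin(2 * np.pi * 2000 * t) * np.exp(-t / 0.002))

def mag_db(x, n=1 << 16):
    # zero-padded FFT magnitude in dB
    return 20 * np.log10(np.abs(np.fft.rfft(x, n)) + 1e-12)

full = mag_db(ir)                            # full-length spectrum
cut = mag_db(ir[: int(fs * 0.005)])          # truncated to 5 ms

freqs = np.fft.rfftfreq(1 << 16, 1 / fs)
for f in (40, 100, 1000, 2000):
    i = np.argmin(np.abs(freqs - f))
    print(f"{f:>5} Hz: full {full[i]:6.1f} dB, 5 ms {cut[i]:6.1f} dB")

The 40 Hz bin drops hard after truncation while 2 kHz barely moves ... exactly the low-end rolloff being discussed. And on your last point: padding a kernel beyond the original sample length just appends (near-)zeros to the impulse response, which leaves the output essentially unchanged while costing more CPU ... so as far as I can tell, 'needless waste' is the right read.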