Hi guys! I know the switching-to-TimeD topic has already been widely debated. I spent some time in the past reading up and giving it a try, drew my own conclusions, and decided not to do this tweak (mainly because I felt it had to be judged case by case, and spending that kind of time while mixing is beyond my abilities). Yesterday, though, I happened to read a topic I missed at the time, and I'm asking myself again whether I can do something to improve my use of Nebula (if I can't, I'll still be happy with the default settings!). I hope you guys can help me get a clear view on that.
TimeD tweak and personal impression

So here's how I used to do this tweak, and the impression (no more than that!) I personally had:
• First I edited the xml file to allow greater TimeD length values
• Then I switched the clean and even kernels (not touching odd) to TimeD, and set them to the ms values the program used by default for FreqD (usually 50 ms)
• I never checked whether these changes altered the program RATE value
• I never tried switching to SPLITH mode, since I wasn't expecting it to sound better than the full-length, TimeD-only setting (could I be wrong about that?)
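For anyone curious what the tweak looks like as a batch operation, here's a minimal sketch. Big caveat: I don't have the real program-file schema at hand, so the element and attribute names below (KERNEL, TYPE, MODE, LENGTHMS) are hypothetical placeholders — adapt them to whatever your program files actually contain. It just shows the logic: flip only clean and even to TimeD, keep the ms value that FreqD was using, and leave odd alone.

```python
# Hedged sketch of the clean/even -> TimeD switch described above.
# NOTE: KERNEL/TYPE/MODE/LENGTHMS are made-up placeholder names, NOT the
# real Nebula program schema -- adjust them to match your actual files.
import xml.etree.ElementTree as ET

EXAMPLE = """\
<PROGRAM>
  <KERNEL TYPE="clean" MODE="FREQD" LENGTHMS="50"/>
  <KERNEL TYPE="even" MODE="FREQD" LENGTHMS="50"/>
  <KERNEL TYPE="odd" MODE="FREQD" LENGTHMS="50"/>
</PROGRAM>
"""

def switch_to_timed(xml_text, kernels=("clean", "even"), length_ms="50"):
    """Flip the given kernels to TimeD, keeping the FreqD ms value.

    Only 'clean' and 'even' are touched by default, mirroring the tweak
    above ('odd' is deliberately left as-is).
    """
    root = ET.fromstring(xml_text)
    for kern in root.iter("KERNEL"):
        if kern.get("TYPE") in kernels:
            kern.set("MODE", "TIMED")
            # reuse the same ms length the program used for FreqD
            kern.set("LENGTHMS", length_ms)
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(switch_to_timed(EXAMPLE))
```

Whether the program RATE value gets touched by a change like this is exactly the part I never verified, as I say below.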
I only compared with/without this tweak by ear; I never ran real tests (though I listened to some that were posted on this forum, thanks!). Impressions:
1. Maybe a slight improvement, as stated in the manual, but definitely a BIG difference between the two (to my ears)
2. What I perceive as improvement is mostly in the mids and highs. It feels like they're shining, more detailed and defined, maybe with better transients. I can't say exactly what it is, but I'd say the high end is always improved
3. At the same time, I don't always perceive the same improvement in the low end. Maybe it's just by contrast, I don't know, but I feel like I'm losing definition there, and in general getting a less warm, brighter tone (I'm not claiming there's an actual difference in the frequency response; it's just my overall impression, and that's how I'd describe it)
4. The previous point seemed somewhat program-related, but I never dug deeper into whether it depends on the program type (Console, Preamp, Tape, EQ or Comp)
In the end, the big difference (1) was such that I realized mixing in FreqD versus TimeD would lead me to different mixing decisions. It followed that switching to TimeD just before rendering would give me a different result than I'd expect (I was actually experiencing this, though it may have been placebo). Better or worse doesn't matter; the point is that had I mixed in TimeD from the beginning (which I can't do, because of the CPU impact), I would have done different things (maybe less processing on the high frequencies, or different compression). So I concluded that doing this tweak only at render time wasn't suitable in my case.
Still, I could run just a few programs in TimeD in real time, but then point (3) comes in. I've read that some other people heard this too; maybe it's down to wrong settings? Or is it something to check program by program? This low-end "issue" I'm hearing makes me think TimeD will usually not work for me (unless done by the library dev himself, of course! My guess is that's where the TimeD engine is considered superior: when you sample a hardware unit and build your program to be played in TimeD, but not necessarily when users flip the setting on a program the dev intended to be played in FreqD. Is that guess right?). As I said, I didn't check whether the program RATE value changed when I switched to TimeD the way I did, but if I understood what I read correctly, that's only supposed to happen when you switch all three kernels (clean, even and odd), while I changed only two of them.
Questions

So now I'm asking whether this low-frequency "lack of definition" (or whatever it is) has been experienced by some of you too, whether you somehow solved it, or whether there's some general rule like "this tweak mostly works for Console programs, not always for EQs".
My opinion, for now, is that I prefer to use the devs' default settings and make only the tweaks they explicitly suggest for their libraries. That, I know, will always sound good to me. Still, some people say certain devs avoid TimeD purely as a CPU compromise, and that switching to TimeD will always be an improvement in those cases. Has anything come out more recently to put a final word on this, or is it still a debated, subjective topic?
Lastly, can you confirm that AlexB's Console programs sound best with a DSPBuffer setting of 1024 (or higher), because lower values affect the program rate badly? And would matching that value to the AudioDeviceBuffer be preferable, or make any sense at all?
Thank you in advance for any experience-based opinion, suggestion or help!