Should I understand that it is not possible to work with CUDA at all on Windows...?
Note: I'm a newbie to Nebula, but I have access to some of the best GPUs and would like to give it a try. More power is also an argument for real-time mixing... I'm tired of rendering and then going back for a 0.5 dB adjustment.
We are stuck on the old CUDA 4.17 release and haven't upgraded to the latest ones. Since the CUDA version and your graphics card driver version are tied together, I believe you can't use CUDA unless you saved an old driver configuration; if you are starting from scratch today, it won't work. We'll try to update the CUDA bridge in the future.
Anyway, if you need power, nothing beats the latest generation of Intel processors. I can literally open hundreds of instances.
Thanks for the quick response. Indeed, I was thinking about using one of the Tesla GPUs and borrowing it from the graphics department. Since we literally work with hundreds of tracks, I'd like to make sure we have plenty of power available. So, as I understand there are issues with the server version, would a multi-CPU motherboard (usually with the Intel Xeon E5-2600 family) be a better choice than an overclocked Core i7? I mean, do you have any previous experience with this?
No, but I tried a 4th-generation Intel processor and it can handle a high number of tracks. If you have a Tesla, try installing CUDA 4.17 and the proper video drivers, then run the little program whereismycuda posted in our official CUDA thread.
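For anyone who wants to sanity-check their setup before involving Nebula: a minimal device-query sketch using the standard CUDA runtime API is below. This is a generic check I'm sketching myself, not the actual whereismycuda tool (what that program tests internally is an assumption on my part); compile it with nvcc.

```cpp
// Minimal CUDA sanity check: reports driver/runtime versions and lists devices.
// NOTE: a hypothetical sketch, not the whereismycuda tool from the CUDA thread.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVer = 0, runtimeVer = 0, count = 0;
    cudaDriverGetVersion(&driverVer);   // highest CUDA version the installed driver supports
    cudaRuntimeGetVersion(&runtimeVer); // CUDA runtime version this binary was built against

    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No usable CUDA device (driver %d, runtime %d) -- "
                    "likely a driver/runtime version mismatch.\n",
                    driverVer, runtimeVer);
        return 1;
    }
    std::printf("Driver %d, runtime %d, %d device(s):\n",
                driverVer, runtimeVer, count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        std::printf("  Device %d: %s, compute %d.%d, %zu MB\n",
                    i, p.name, p.major, p.minor,
                    p.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}
```

If the device count comes back as zero while the card is physically present, that usually points at exactly the driver/CUDA version mismatch described above.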
OK, I'll give it a try with the Tesla and an i7-4770K as you suggested. If it works like I hope, it will definitely be the solution to our cost-reduction process. I bought the server version and will give that a try as well.
Nebula sounds fantastic. I can compare easily here, as we have most of the Nebula gear in physical form. If the source is digital, I even prefer Nebula most of the time. If the source is analog, I still hear more things going on... but the maintenance cost of a Studer A820 doesn't justify it anymore...
To dwagrimm: I just have an i7-920 at the moment (4 cores / 8 threads) and, in my limited experience so far, I can see all the threads working.