Tuesday, July 22nd 2014, 5:25pm UTC+2


nms

Intermediate

  • "nms" started this thread

Posts: 165

Location: Canada


21

Sunday, November 6th 2011, 7:17am

A few things, Antony.. first, 48kHz is not at all a standard in audio for anything aside from video production. Distributable music is all at 44.1kHz, and commercially available sample packs are almost always 44.1kHz as well. If you're working in music you'll get the cleanest, most efficient conversions by going in even multiples. Some SRCs handle non-integer ratios better than others and are near flawless, but many aren't, so even multiples are a good way of playing it safe..
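To make the even-multiples point concrete (an editor's illustration, not from the original post): 44.1kHz to 88.2kHz is an exact factor of 2, while 44.1kHz to 48kHz reduces to the awkward ratio 160/147, which is why that conversion demands a much more elaborate polyphase resampler.

```python
from fractions import Fraction

def resample_ratio(target_hz: int, source_hz: int) -> Fraction:
    # The reduced fraction between two rates; big numerators/denominators
    # mean a more complex (and potentially lossier) sample-rate conversion.
    return Fraction(target_hz, source_hz)

print(resample_ratio(88_200, 44_100))  # -> 2 (clean integer multiple)
print(resample_ratio(48_000, 44_100))  # -> 160/147 (awkward fractional ratio)
```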

One other thing here.. shame on you for coming in throwing around the Nyquist theorem! I happened to catch a thread from a year ago where you were telling people that higher sample rates hold absolutely no improvement and are nothing but a placebo effect for those who think otherwise. You couldn't be more wrong, and dropping the Nyquist theorem has no place in discussions about digital audio production. Its only relevance is in recording.

Why you're talking about the ability to reproduce frequencies higher than we can hear I do not understand. It's got nothing to do with that. As I previously mentioned, it's about raising the upper band limit so that the filters and the aliasing created by digital processing are pushed further out of the audible range, away from the music. Yes, further than the extra 2.5kHz that you get from working at 48kHz. Digital aliasing is NOT something that occurs only in the top 2.5kHz of the frequency spectrum.
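The folding behaviour being argued about can be sketched in a few lines (an editor's illustration of standard aliasing arithmetic, not anything from the thread): a distortion product pushed above Nyquist reflects back down, and where it lands depends on the sample rate.

```python
def alias_of(f_hz: float, fs_hz: float) -> float:
    """Frequency at which a component at f_hz appears after sampling at
    fs_hz, folding around multiples of the sample rate."""
    f = f_hz % fs_hz
    return min(f, fs_hz - f)

# A 3rd-harmonic distortion product of a 15kHz tone sits at 45kHz:
print(alias_of(45_000, 44_100))  # -> 900 (folds into the audible band)
print(alias_of(45_000, 88_200))  # -> 43200 (still far above hearing)
```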

I'm done debating this though, as I don't have the time and it'd only go in circles from here. To hold the stance that nothing can be gained by going above 48kHz is to imply that every company that's oversampling multiple times above that is doing it for no good reason, and that they're simply fooling everyone with what's nothing more than a marketing gimmick. You know what the developer of, IMO, the best VST compressor ever made recommended to me recently? Despite his plugin offering up to 16x oversampling, he recommended I just run it without oversampling, since I'm already working at 88.2kHz, where the biggest difference is already achieved.

@Ionis - Your ears may be deceiving you.. it's easy to let ourselves be tricked now and then. Expectation bias and whatnot. I strongly doubt your Firebox adds anything positive through its AD conversion. The cleanest you can get is the source.. which is exactly what is streamed directly into your computer over USB. Your converters will only color the sound, if anything, but if you like that sound then go for it, I say. The Access team have said so several times, stating directly that running it through the analog outs will not give you better sound quality. I admit I was skeptical of that myself until about a month ago, but I was able to confirm it and found some distortion in the waveform that I wasn't happy with. I'd rather track the source directly and handle the SRC myself.. then run it out through any of my hardware compression or distortion if I choose, while still retaining the original. I have to print to disk first anyway if I'm processing stereo material, since my outboard units are mono, so I wouldn't really want to deal with a source that's gone DA/AD/DA/AD.

Anyways.. are you using other outboard synths or gear? I would avoid the FF400 myself. Duets sound good. A MOTU UltraLite mk3 would get you good sound.. people are a little clueless as to just how good the converters in those are. People are so easily fooled by expectation bias, believing something's good or not because they read it somewhere. We've done testing on a lot of the top interfaces, and the transparency of the MOTUs in test results (using proper test software) compared to the rest was shocking. As for the Virus, its inputs tested much more transparent than my MR816x. Do you have any issue with using the Virus as an interface? Unless you need extra inputs or the ability to work above 48kHz, I'd use that if it works with your system. I'd bet on the DA/AD quality being better than the Firebox's.

This post has been edited 3 times, last edit by "nms" (Nov 6th 2011, 10:45am)


Ionis

Intermediate

Posts: 239

Location: Texas, USA


22

Sunday, November 6th 2011, 9:35am

The UltraLites are pretty sweet. I do hear a difference between the USB and direct outs, though. Thanks for your input on this.
/|/|/|/|/|/|/|/|/|/|
Ionis

nms

Intermediate


Posts: 165

Location: Canada


23

Sunday, November 6th 2011, 10:10am

There is a difference between USB & analog outs.. on that you're correct. It's just not as clean as the source. Here's an odd example I found of the difference in the same saw waveform sent via analog out vs. USB. The one with the odd lip at the peak is the analog out:

[image: the same saw waveform captured via analog out vs. USB]
I'm not sure what's going on there, but after finding this I stick to the USB and fully believe what the Access crew said flat out: there's no cleaner output than the source sent out via USB.
Personally I prefer to start with the purest source and process and effect it from there. I'm somewhat of a control freak, though, and prefer that anything affecting the sound be of my own doing.

AtonyB

Professional

Posts: 729

Location: UK


24

Sunday, November 6th 2011, 2:06pm

Ever since VC started working smoothly (which it has done for some time now, for me) I stick to USB and S/PDIF - the S/PDIF is a good backup if it's having an episode, which it can from time to time, but it also gives me an extra input.

Anyway, it's clear you have put your fingers in your ears, because you aren't taking on board my repeated distinction between the sample rate used on the final output and the internal oversampling used in plugins. Oversampling can be done on the fly, and having the higher sample rate globally is just inefficient and a waste of resources.

It's simple; upsample (ideally integer multiple, why would you choose anything else?) -> process -> downsample for plugins that benefit from this.
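The upsample -> process -> downsample chain described here can be sketched as follows (an editor's sketch in Python/NumPy; the 2x zero-stuffing, windowed-sinc filter and tanh saturation stage are all assumptions for the demo, not any particular plugin's implementation):

```python
import numpy as np

def lowpass_fir(num_taps: int = 63, cutoff: float = 0.5) -> np.ndarray:
    # Windowed-sinc FIR lowpass; cutoff is a fraction of the upsampled Nyquist.
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = cutoff * np.sinc(cutoff * n) * np.hamming(num_taps)
    return h / h.sum()  # unity gain at DC

def process_oversampled(x: np.ndarray, drive: float = 4.0) -> np.ndarray:
    """Upsample 2x, apply a nonlinearity, downsample back to the input rate."""
    h = lowpass_fir()
    up = np.zeros(2 * len(x))
    up[::2] = x                                  # zero-stuff to twice the rate
    up = 2.0 * np.convolve(up, h, mode="same")   # interpolate (x2 restores level)
    y = np.tanh(drive * up) / np.tanh(drive)     # nonlinear stage (saturation)
    y = np.convolve(y, h, mode="same")           # remove products above old Nyquist
    return y[::2]                                # decimate to the original rate

x = np.sin(2 * np.pi * 1000 * np.arange(512) / 44_100)  # 1kHz tone at 44.1kHz
y = process_oversampled(x)
print(y.shape)  # -> (512,), same length as the input
```

The point of the extra headroom is that the harmonics the saturation generates land below the temporary 88.2kHz Nyquist instead of folding straight back into the band, and the final lowpass discards them before decimation.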

Alternatively you can simply calculate the 'intersamples' as and when (which might be more efficient, since you are only generating one extra sample at a time and you can use lookups for your operands rather than a whole set which you might not even need - since you are downsampling, you only care about the output values on the output sample points).

Another alternative is to work your code to achieve better precision for things like filters, which can achieve great results even on fixed-point systems. Just recently I had to develop my own square root function (the available library function was useless!) which would be precise for very small values (all the way down to zero + 1 bit) using just a few instructions, and I got better than 0.2% (8 bits below the MSB) accuracy in the worst case - I could have done better, but this was accurate enough, and this is on a 16-bit fixed-point system.
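For flavour, here is the classic bit-by-bit integer square root often used on small fixed-point systems (an editor's sketch of the standard textbook algorithm, not the poster's actual routine):

```python
def isqrt16(x: int) -> int:
    """Bit-by-bit integer square root (floor) for 0 <= x < 2**16.

    Uses only shifts, adds and compares, so it maps well onto small
    fixed-point CPUs with no hardware divide or sqrt."""
    res = 0
    bit = 1 << 14                # highest even power of two in range
    while bit > x:
        bit >>= 2
    while bit:
        if x >= res + bit:
            x -= res + bit
            res = (res >> 1) + bit
        else:
            res >>= 1
        bit >>= 2
    return res

print(isqrt16(65_535))  # -> 255
```

Interpreting the input as a fixed-point value (e.g. Q8.8) and pre-shifting it by an even number of bits gives fractional precision with the same loop, which is one way a routine like the one described could stay accurate down to the smallest representable values.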

nms

Intermediate


Posts: 165

Location: Canada


25

Monday, November 7th 2011, 12:43am

It's a fool's argument because the issue boils down to two options:

a) work at 44.1/48kHz and only have access to oversampling in the plugins that feature it (many don't), and allow your audio to be up- and downsampled every time it goes through one of those plugins, at whatever quality of realtime SRC each plugin is capable of, filtered with whatever antialias filter it uses. Oversampling adds latency as well. The benefits are restricted to only the plugins that have oversampling available. Plugin oversampling also drains CPU, don't forget.

b) work at 88.2kHz, where all audio stays at a higher rate and is therefore not subject to artifacts created by being up- and downsampled multiple times at varying quality, or to the latency that process adds. Old favorites, DAW plugins, the whole lot operates as if running 2x oversampling. Antialias filters become 100% invisible because they are so high up in the spectrum we can't possibly hear them.

With the giant leaps in current multicore processors and cheap storage in abundance, there is no excuse for sacrificing audio quality and latency just to save some CPU or hard drive space. Computers are way ahead of where they were four years ago, and the capability is there. If you're a professional who takes audio quality seriously, then no doubt you'll have a fast & powerful computer that is more than capable of handling the task.


Anyways.. this is, after all, the feature request forum, and this has veered off track. I'm not holding my breath for VC to cater to any of this, and I'm just going to use my workaround, which as I mentioned previously is to switch to 48kHz when it's time to track something, then SRC it to my working rate. Alternatively, if I want to use the analog outs, I have to disconnect the Virus USB and sequence it via MIDI so I can set the rate to 48kHz internally, then record it into my interface at 88.2kHz.
It'd be nicer to just be able to choose 48kHz internal operation regardless of circumstances.. and I feel it's a significant margin of quality that goes untapped in the current state for many people.. who knows.. maybe one day.

This post has been edited 3 times, last edit by "nms" (Nov 7th 2011, 1:33am)


AtonyB

Professional

Posts: 729

Location: UK


26

Monday, November 7th 2011, 1:52pm

work at 44.1/48kHz and only have access to oversampling in the plugins that feature it (many don't), and allow your audio to be up- and downsampled every time it goes through one of those plugins, at whatever quality of realtime SRC each plugin is capable of, filtered with whatever antialias filter it uses. Oversampling adds latency as well. The benefits are restricted to only the plugins that have oversampling available.


Plugins upsample if they need to. If they don't, and you give them a sample rate they weren't designed for (who doesn't plan for higher sample rates in their plugins?), they simply behave differently (e.g. a delay effect may simply have a shorter maximum delay). If things like EQs don't vary their buffer size, their spectral resolution drops, as you have the same number of points across a larger frequency range - not good, complexity wasted on irrelevant frequencies. So no doubt they will increase their buffer size to maintain spectral resolution (and it doesn't matter whether they work in the time or the frequency domain), so more taps need to be calculated, just so you can get the same precision you already had at the lower sampling frequency. This more than offsets any up/downsampling you need to do, especially when you consider you are doing it across the board rather than only where it actually matters (bearing in mind that plugins that upsample will still be doing it, possibly by the same factor - though maybe they work to a target sample rate [range, for non-integer?] rather than a fixed factor, and thus you get the same overhead).
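The spectral-resolution point is simple arithmetic: an FFT bin is (sample rate / transform length) wide, so the same transform length spread over a wider bandwidth gives coarser bins (an editor's worked example):

```python
def bin_width_hz(sample_rate_hz: float, fft_size: int) -> float:
    # Width of one FFT bin: the analysed bandwidth divided by the bin count.
    return sample_rate_hz / fft_size

print(bin_width_hz(48_000, 4096))  # -> 11.71875 Hz per bin
print(bin_width_hz(88_200, 4096))  # -> 21.533203125 Hz per bin (same N, coarser)
```

To keep ~11.7Hz resolution at 88.2kHz you would need roughly twice the FFT length, which is exactly the "more taps for the same precision" cost described above.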

Remember that this extra bandwidth is inter-process bandwidth too, where the narrowest bottlenecks are seen. In a multi-core environment it might actually be better to up/downsample, even if you need to be careful with your interpolation filters - the requirements for which are actually pretty loose (depending on the plugin you might only need a few taps, or go for a sinc lookup table and be as precise as you like), especially for downsampling.

Also, when it all comes down to it, have you ever rendered a project at both 48kHz and 88.2kHz and compared them in a realistic test? Even if you can see errors on the spectra (and I'm not including the Virus example here, as that is not an example of the limitations, but probably a function of a decision they made when designing it), it's unlikely that anyone will be able to detect it anyway, assuming you use plugins that are well designed enough to deal with signal processing issues beyond simply cranking the sample rate up.


Quoted

Plugin oversampling also drains CPU, don't forget.


Barely, see above.


Quoted

work at 88.2kHz, where all audio stays at a higher rate and is therefore not subject to artifacts created by being up- and downsampled multiple times at varying quality, or to the latency that process adds. Old favorites, DAW plugins, the whole lot operates as if running 2x oversampling. Antialias filters become 100% invisible because they are so high up in the spectrum we can't possibly hear them.

With the giant leaps in current multicore processors and cheap storage in abundance, there is no excuse for sacrificing audio quality and latency just to save some CPU or hard drive space. Computers are way ahead of where they were four years ago, and the capability is there. If you're a professional who takes audio quality seriously, then no doubt you'll have a fast & powerful computer that is more than capable of handling the task.


This is an old and feeble argument. People like to argue that in 3 years PCs will be powerful enough for this not to be a problem, especially on the multi-core side of things. First of all, the software has to be parallelised to benefit from multiple cores, and there are plenty of bottlenecks that mean you can't even rely on improved performance. The real issue is that things have to perform the best they can, regardless of any endless supply of resources a certain proportion of your market has to offer; the field is very competitive, which is why the bigger numbers sell better.




Quoted

Anyways.. this is, after all, the feature request forum, and this has veered off track. I'm not holding my breath for VC to cater to any of this, and I'm just going to use my workaround, which as I mentioned previously is to switch to 48kHz when it's time to track something, then SRC it to my working rate.


Like I said, the VC VST would have to perform sample rate conversion, in real time, on many channels of audio, which could add to latency (which some people already complain a lot about) - all for a seldom-used sample rate.



Long story short - a higher sample rate can be a solution to the issues you mentioned, but it is not the only one, and it might actually be the worst. This may be especially true in a multi-threaded, multi-core environment, in which inter-process communication is the biggest bottleneck there is and the greatest limit on scalability. There are plenty of other ways to deal with it which are probably no less efficient, and even where they are, they are probably using up CPU time that would otherwise be spent waiting for data.

nms

Intermediate


Posts: 165

Location: Canada


27

Monday, November 7th 2011, 11:36pm

If you spent half as much time actually testing the results of a project done at higher rates rather than 44.1/48kHz (assuming you have the ears, monitors and production skills not to hold you back from getting results) as you do dreaming up theories about how it's surely better to avoid going higher than 48kHz, despite most of the top guys in the industry disagreeing.. then you might actually get somewhere with this. A smart person puts a great deal of effort into trying to prove himself wrong.. not stubbornly clinging to his theories with every breath despite evidence to the contrary. And yes, computers are more than able to handle it.. or so many people wouldn't be doing it.

AtonyB

Professional

Posts: 729

Location: UK


28

Tuesday, November 8th 2011, 7:04pm

Interesting viewpoint - I have given a brief look at both sides... I say both work, and neither is 'the one true path', with my bias being the alternate stance to yours - all you've done is loosely cite others, with no balance and not a word on how you tried to 'prove yourself wrong'. Think about this - of all the great music that has ever been made using the Virus, how many people would notice if the Virus in question had been switched to 44.1kHz instead? And for how much of it was this the case anyway?

Arguing it's right because so many people do it is not a valid argument... ever... whoever it is... just as judging whether homeopathy, or perhaps less obviously cough medicine, works by looking at how much money is spent on it is not a valid argument either. Don't forget that the priority for 99.99% of the market is to sell lots of stuff, not simply to make it sound great (it just so happens that the two cross paths once in a while) - why do you think KORG have been repackaging the same thing that does pretty much the same stuff for the last 30 years, save for some 'bigger numbers'?

While we're at it, 'can handle it' is completely irrelevant - we are talking about getting the most out of a system, which is the priority of any engineer, since it makes your product the most competitive by either outperforming on the same platform or being cheaper on a lower-spec platform. What if you could make the same music with the same plugins on a PC that cost half the price? Without these considerations mobile phones would not do half the stuff they can do - nor, in fact, would most electronic devices you can think of.

While we are at it, I do test these 'theories' (a scientific term which many people misapprehend). On my project I had to think very carefully about the sample rate I used, and I could not get away with simply making the kinds of argument you made - I tried them, measured performance and reported on it. As it happened, turning up the sample rate was probably the weakest solution to the problems I had there, simply because it wasted memory, bandwidth, overall performance and battery life - whereas a few extra lightweight instructions cured it beautifully.

I will now desist as I have nothing further to gain from this discussion.

nms

Intermediate


Posts: 165

Location: Canada


29

Wednesday, November 9th 2011, 12:01am

Unlike you, I have a lot of experience with both high and typical sample rates, and I have discussed it with a few of the top authorities on the subject. I do not come from a place of bias whatsoever. You know who instigated my switch? Andy Simper (developer & owner of Cytomic), when he released a new version of his compressor "The Glue" (the best VST compressor in existence, in my experience) with oversampling settings. I heard the difference right away, and that's when I revisited the idea of working at a higher rate across the board so I could reap as much of that effect wherever possible. Talking with him and Alexey Lukin (iZotope developer & developer of the best dithering & SRC I know of on the planet), and reading what they've said on the topic, gave me much of the understanding I now have, combined with proof from my own testing. Tests similar to the result I posted, with the clearly audible and better Virus tone at 48kHz. I'm surprised you can sit there and, with a straight face, tell me about being a competitive engineer trying to outdo the sound of others, and then say that difference is negligible or that CPU takes precedence over sound quality gains.

Part of being a competitive professional is having good gear, and as long as you have a computer adequate for someone in that position, you have all the power you need for working at 88.2kHz. The best engineers know quality comes first. Btw, if you had looked into it properly, you'd know your RAM doesn't get taxed any harder than it does at 44.1/48kHz.

I love my Virus, but I'm not blind. The synth engine itself is 15 years old, which is not ideal for digital technology, and the oscillators aren't the best sounding on the market. Andy Simper also developed DCAM Synth Squad, and their Strobe synth beats the Virus in terms of sound quality, aliasing, and the sound of the raw oscillators. Having oversampling settings available helps see to that. If you think you know better than the guy who developed that as well as The Glue.. you'd be delusional to say so without a lot more experience and testing than you have done.
The interface of the Virus and its FX, programmability, mod matrix and everything else are where the Virus pulls ahead. I demand the best from my gear, though, so rendering at 48kHz was the way I needed to bridge the gap and strengthen its weak spot a bit.

This post has been edited 1 time, last edit by "nms" (Nov 10th 2011, 10:17am)


Ionis

Intermediate

Posts: 239

Location: Texas, USA


30

Wednesday, November 9th 2011, 4:31am

I am changing the direction here...

Question: Does recording the Virus (or any other hardware synth) at a higher sampling rate and bit depth actually yield better results than just dropping it in through USB at the same sampling rate and bit depth?
/|/|/|/|/|/|/|/|/|/|
Ionis