Posts by AtonyB


    Yes, it is of course possible to have a bank of oscillators, each modelling a partial - I have no idea if that is how Harmor works (you could just work in the frequency domain) - but in the case of the Virus, which has 80 voices, you can guess how quickly you would run out of voices...


    Be comforted by the fact that, using wavetable synthesis and its variants on the Virus, you can get a lot of the harmonic modulation effects you might want, and using the formant shift you can get a lot of great sounds that are easy to keep in check.


    Personally I'm not that impressed with Harmor; it was going cheap and I wasn't persuaded, yet I parted with over £1000 for the Virus, so what does that tell you?

    FM is internal to the oscillator 'structure' in the software - instead of grabbing osc2's value, you grab it times osc1's value. Synchronising the mod matrix is a much lower-frequency process, as there would be a lot more going on: buffering variables, using semaphores to ensure there isn't an inconsistent update of variables, etc.
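
    To make the distinction concrete, here's a minimal sketch in Python (not the actual Virus code - the rates and oscillator maths are invented): the per-sample modulation happens inline in the audio loop, while a mod matrix destination only gets refreshed once per control block.

        import numpy as np

        fs = 48000                  # audio rate (assumed)
        control_rate = 400          # mod matrix refresh rate (invented figure)
        block = fs // control_rate  # samples per control update

        def render(n, f1=110.0, f2=220.0, fm_amount=0.5):
            out = np.zeros(n)
            ph1 = ph2 = 0.0
            amp_mod = 1.0           # a mod matrix destination, held per block
            for i in range(n):
                if i % block == 0:
                    # control-rate work: the mod matrix is 'polled' here,
                    # far less often than the audio loop runs
                    amp_mod = 0.8 + 0.2 * np.sin(2 * np.pi * 2.0 * i / fs)
                osc1 = np.sin(2 * np.pi * ph1)
                # per-sample FM: osc1 perturbs osc2's phase increment inline
                ph2 += f2 * (1.0 + fm_amount * osc1) / fs
                ph1 += f1 / fs
                ph1 -= int(ph1); ph2 -= int(ph2)
                out[i] = amp_mod * np.sin(2 * np.pi * ph2)
            return out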


    In other words, the delay effect, for example, may only have its delay time variable updated at, like, 20Hz or something, so if you try to modulate it with a signal going at, say, 1kHz, you would get epic aliasing. Changing the Virus OS to support this would be somewhere between sacrificing a lot of voices and a complete rewrite of the OS.
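
    You can see the effect with a quick sketch (rates invented): sample a 1kHz modulator at only 20Hz and the held values form an essentially unrelated stepping sequence - that's the aliasing.

        import numpy as np

        fs = 48000
        t = np.arange(fs) / fs
        modulator = np.sin(2 * np.pi * 1000 * t)   # what you asked for: 1kHz

        update_rate = 20                           # what the parameter gets
        step = fs // update_rate
        # sample-and-hold at 20Hz: each held value is one point of a 1kHz
        # sine, i.e. 1000Hz folded against 20Hz -> near-random stepping
        held = np.repeat(modulator[::step], step)[:fs]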

    I think I'm going to retire discussing anything to do with DSP on these forums as it is a major source of stress...


    Quote

    But rough FFT approximations aren't how additive engines work.


    Not exclusively, no, but the FFT is not an approximation; it is a very specific and unique calculation based on a section of a signal - the precision is limited only by the bit depth of the system calculating it. You could also keep track of components as 'partials', which may be more efficient for some systems, but since Harmor can use audio files as an input it can't work entirely this way without some spectrum-to-set-of-partials conversion going on. In general, 'partials' just refers to the harmonics of a waveform (although some synthesizers, like the Roland D50, have massaged this meaning).
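
    For illustration, a crude sketch of that spectrum-to-partials step (Python; not Harmor's actual method, and the threshold is arbitrary):

        import numpy as np

        def frame_to_partials(frame, fs, threshold=0.01):
            """Return (frequency, amplitude) pairs for local spectral peaks."""
            windowed = frame * np.hanning(len(frame))
            # rough amplitude normalisation (Hann coherent gain ignored)
            spec = np.abs(np.fft.rfft(windowed)) / (len(frame) / 2)
            freqs = np.fft.rfftfreq(len(frame), 1 / fs)
            partials = []
            for k in range(1, len(spec) - 1):
                if spec[k] > threshold and spec[k] >= spec[k-1] and spec[k] > spec[k+1]:
                    partials.append((freqs[k], spec[k]))
            return partials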


    Quote

    What would you specifically call what I refer to as 'additive synthesis', just so we're using the same language? If, for example, you didn't want to lump it in with FM and RM?


    I guess, if you are talking about some of the things that Harmor does, then the term 'spectral remapping' I coined earlier would probably fit...


    Quote

    Interestingly, given your description of subtractive, both Harmor and Razor would fit the bill. Looking at the interface, they use complex oscillators at the start of the signal chain and everything afterwards alters the frequency components. The difference is the engine and the new sonic options allowed to you thanks to the ability to play with individual sine waves.



    No, subtractive synthesis exclusively manipulates the amplitude of frequency components and not their frequency. An example is a filter or an amplitude envelope; in fact, every subtractive operation can be represented as a filter.


    Additive can create components, or 'partials', at different frequencies based on operations on the already existing ones - such as FM, which varies the frequency of the partials based on the value of an oscillator, or RM (AM), which creates partials above and below the existing ones at the sum and difference of their frequencies. The Prism function in Harmor is actually pretty similar to AM - but I haven't tinkered with it enough to know exactly what it is doing.
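
    A quick demonstration of the sum-and-difference behaviour (Python, arbitrary frequencies): ring modulate a 1kHz sine with a 300Hz sine and the spectrum contains 700Hz and 1300Hz, not the originals.

        import numpy as np

        fs, n = 48000, 48000
        t = np.arange(n) / fs
        ring = np.sin(2 * np.pi * 1000 * t) * np.sin(2 * np.pi * 300 * t)

        spec = np.abs(np.fft.rfft(ring)) / (n / 2)
        freqs = np.fft.rfftfreq(n, 1 / fs)
        print(freqs[spec > 0.1])   # -> [ 700. 1300.]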

    PowerCore managed to work with several instances, but I suppose they were more independent, and it never worked that great anyway. Either way, I think it's better to have it all in one place - why is this a problem for some people?

    Subtractive synthesis is where you start with a base oscillator and its harmonics, and every stage thereafter only alters the amplitude of the frequency components you have.


    Additive synthesis takes base oscillators and creates new harmonics with its processing. FM and RM do this.


    Also, frequency-domain manipulation is not that intensive - that's how digital vocoders work. You can quite comfortably take the FFT of an incoming signal (convert to the frequency domain), filter it in the frequency domain by multiplication, and then convert it back in overlapping blocks (COLA) on even the most lightweight of DSP systems. 'Frequency remapping' (I guess you'd call it) probably isn't a huge amount more intensive, depending on how complex you go and whether you can think of clever tricks to make it more efficient.
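
    For the curious, the FFT -> multiply -> overlap-add loop is only a few lines (a sketch: periodic Hann analysis windows at 50% overlap sum to a constant, satisfying COLA, and I'm ignoring the zero-padding you'd add to avoid circular-convolution wraparound; 'gains' is whatever per-bin filter curve you fancy):

        import numpy as np

        def fft_filter(x, gains, n=1024):
            """Filter x by multiplying each frame's spectrum by 'gains'
            (length n//2 + 1), reconstructing by overlap-add."""
            hop = n // 2
            # periodic Hann: w[i] + w[i + n/2] == 1 exactly (COLA)
            win = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(n) / n)
            y = np.zeros(len(x) + n)
            for start in range(0, len(x) - n, hop):
                frame = x[start:start + n] * win
                spec = np.fft.rfft(frame) * gains
                y[start:start + n] += np.fft.irfft(spec, n)
            return y[:len(x)]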

    work at 44/48 and only have access to oversampling in the plugins that do feature it (so many don't), and allow your audio to be up- and downsampled every time it goes through one of these plugins, at whatever quality of realtime SRC each plugin is capable of, filtered with whatever antialias filter it uses. Oversampling adds latency as well. Benefits are restricted to only the plugins that have oversampling available.


    Plugins upsample if they need to. If they don't, and you give them a sample rate they weren't designed for (who doesn't plan higher sample rates into their plugins?), they simply behave differently (e.g. a delay effect may simply have a shorter maximum delay). If EQ-like plugins don't vary their buffer size, their spectral resolution drops, as you have the same number of points across a larger frequency range - not good, with complexity wasted on irrelevant frequencies. So no doubt they will increase their buffer size to maintain spectral resolution (and it doesn't matter whether they work in the time or the frequency domain), and thus more taps need to be calculated, just so you can get the same precision you already had at the lower sampling frequency. This more than offsets any up/downsampling you would otherwise need to do, especially when you consider you are doing it across the board rather than only when it actually matters (bearing in mind that plugins which upsample will still be doing so, possibly by the same factor - though maybe they work to a target sample rate [a range, for non-integer factors?] rather than a fixed factor, in which case you get the same overhead).
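
    To put numbers on the tap-count point (back-of-envelope via the standard Kaiser length estimate; the filter spec is invented):

        import numpy as np

        def kaiser_taps(atten_db, transition_hz, fs):
            """Rough FIR length estimate (Kaiser): N ~ (A - 7.95) / (2.285 * dw)."""
            dw = 2 * np.pi * transition_hz / fs
            return int(np.ceil((atten_db - 7.95) / (2.285 * dw)))

        # Same 100Hz transition band, 80dB stopband:
        print(kaiser_taps(80, 100, 48000))   # ~2409 taps
        print(kaiser_taps(80, 100, 96000))   # ~4818 taps - double the work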


    Remember that this extra bandwidth is inter-process bandwidth, too, where the narrowest bottlenecks are seen. In a multi-core environment it might actually be better to up/downsample, even if you need to be careful with your interpolation filters - the requirements for which are actually pretty loose (depending on the plugin you might only need a few taps, or you can go for a sinc lookup table and be as precise as you like), especially for downsampling.


    Also, when it all comes down to it, have you ever rendered a project both at 48kHz and 88.2kHz and compared them in a realistic test? Even if you can see errors on the spectra (and I'm not including the Virus example here, as that is not an example of the limitations, but probably a function of a decision they made when designing it), it's unlikely that anyone will be able to detect them anyway, assuming you use plugins that are well designed enough to deal with signal processing issues beyond simply cranking the sample rate up.



    Quote

    Plugin oversampling also drains CPU, don't forget.


    Barely, see above.



    Quote

    work at 88kHz where all audio stays at a higher rate, thereby not being subject to artifacts created from being up- and downsampled multiple times at varying quality, or adding latency from the process. Old favorites, DAW plugins, the whole lot operates as if running x2 oversampling. Antialias filters become 100% invisible because they are so high up in the spectrum we can't possibly hear them.


    With the giant leaps in the current multicore processors available and cheap storage in abundance there is no excuse for sacrificing audio quality and latency just to save some CPU or hard drive space. Computers are way ahead of where they were 4 yrs ago and the capability is there. If you're a professional who takes audio quality very seriously then no doubt you'll have a fast & powerful computer that is more than capable of handling the task.


    This is an old and feeble argument. People like to argue that in three years PCs will be powerful enough for this not to be a problem, especially on the multi-core side of things. First of all, the software has to be parallelised for it to benefit from multiple cores, and there are plenty of bottlenecks, so you can't even rely on improved performance. The real issue is that things have to perform the best they can, regardless of whatever endless supply of resources a certain proportion of your market has to offer; the field is very competitive, which is why the bigger numbers sell better.





    Quote

    Anyways... this is after all the feature request forum and this has veered off track. I'm not holding my breath for VC to cater to any of this and I'm just going to use my workaround, which as I mentioned previously is to switch to 48kHz when it's time to track something, then SRC it to my working rate.


    Like I said, the VC VST would have to perform sample rate conversion, in real time, on many channels of audio, which could add to latency (which some people already complain a lot about) - all for a seldom-used sample rate.




    Long story short - a higher sample rate can be, but is not the only, solution to the issues you mentioned, and it might actually be the worst way of doing it. This may be especially the case when you are in a multi-threaded, multi-core environment, in which interprocess communication is the biggest bottleneck there is and the greatest limiting factor on scalability. There are plenty of other ways to deal with it which are probably no less efficient, and even where they may be, they are probably using up CPU time that would otherwise be spent waiting for data.

    I'm not sure I'd call Harmor easy to use, except for just fiddling. Bear in mind you do have FM and RM as well, which are additive processes, although the Virus doesn't feature some of the frequency-domain manipulation tools that Harmor has - I wonder if these are just gimmicks or if they actually offer something unique...

    Long story short, you get a discontinuity as the oscillator resets, and this causes the impulse you see (it's not a vertical line between the two because the Virus does not have infinite bandwidth on the audio output, so you see an overshoot with ripple). Although I do admit, I'm surprised it's as big as it is... that may well be deliberate, to replicate what a real synced analogue oscillator would look like (and I haven't ever put one through a scope to see for myself).

    Just to clarify... you do know what oscillator sync is, don't you? It resets osc2 to zero phase (the beginning of its waveform) every time osc1 completes a cycle.


    This means you will get a discontinuity if osc2 has not completed its cycle - or rather, is nowhere near its starting value - when it resets. This discontinuity is seen as an impulse, or spike, because the output is band-limited, meaning not all the frequency components that would make up that impulse get through. If you use the semitone dial to get a larger range of variation, you will see that eventually a second cycle of the square starts to get through - if you move that up and down you should be able to see what's going on. You can also edit the pulse width (the higher you set it, the less time the square spends down before it goes up) to make it transition in time before the oscillator resets itself (which allows you to get away with a lower-frequency osc2 than osc1 when using sync; coupled with PWM this can get some nice rhythmic effects).
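
    If you want to play with this outside the hardware, naive hard sync is only a few lines (a sketch, with no band-limiting - which is exactly why the discontinuity comes out as a spike):

        import numpy as np

        def hard_sync_square(fs, f_master, f_slave, n, pulse_width=0.5):
            out = np.zeros(n)
            ph_m = ph_s = 0.0
            for i in range(n):
                ph_m += f_master / fs
                ph_s += f_slave / fs
                if ph_m >= 1.0:       # osc1 completed a cycle...
                    ph_m -= 1.0
                    ph_s = 0.0        # ...so osc2 is reset to zero phase
                out[i] = 1.0 if ph_s % 1.0 < pulse_width else -1.0
            return out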

    Ever since VC started working smoothly (which it has done for some time now, for me) I stick to USB and S/PDIF - the S/PDIF is a good backup if it's having an episode, which it can from time to time, but it also gives me an extra input.


    Anyway, it's clear you have put your fingers in your ears - because you aren't taking on board my repeated distinction between the sample rate used on the final output and the internal oversampling used in plugins. Oversampling can be done on the fly, and having the higher sample rate globally is just inefficient and a waste of resources.


    It's simple: upsample (ideally by an integer multiple - why would you choose anything else?) -> process -> downsample, for plugins that benefit from this.
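
    In code, that whole pattern is one line each way (Python/scipy sketch; the tanh waveshaper is just a stand-in for whatever nonlinearity actually needs the headroom):

        import numpy as np
        from scipy.signal import resample_poly

        def oversampled_process(x, factor=4):
            up = resample_poly(x, factor, 1)   # upsample (integer multiple)
            processed = np.tanh(3.0 * up)      # nonlinearity generates harmonics;
                                               # at 4x they land below Nyquist
            return resample_poly(processed, 1, factor)  # filter + downsample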


    Alternatively you can simply calculate the 'intersamples' as and when needed (which might be more efficient, since you are only generating one extra sample at a time and you can use lookups for your operands rather than generating a whole set you might not even need - since you are downsampling anyway, you only care about the output values at the output sample points).
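
    Computing a single 'intersample' on demand looks something like this (a sketch; short windowed sinc, with the tap count and edge handling arbitrary):

        import numpy as np

        def intersample(x, pos, taps=8):
            """Estimate x(pos) for fractional pos with a windowed-sinc kernel."""
            n0 = int(np.floor(pos)) - taps // 2 + 1
            idx = np.clip(np.arange(n0, n0 + taps), 0, len(x) - 1)
            d = pos - idx
            w = 0.5 * (1 + np.cos(np.pi * d / (taps // 2)))  # Hann centred on pos
            return float(np.dot(x[idx], np.sinc(d) * w))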


    Another alternative is to work your code to achieve better precision for things like filters, which can achieve great results even on fixed-point systems. Just recently I had to develop my own square root function (the library function available was useless!) which would be precise for very small values (all the way down to zero + 1 bit) using just a few instructions, and I got better than 0.2% (8 bits below the MSB) worst-case accuracy - I could have done better, but this was accurate enough - and this is on a 16-bit fixed-point system.
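
    For flavour, here is the textbook bit-by-bit integer square root in the same spirit (this is the standard shift-and-subtract routine, not my exact function; fractional fixed-point inputs can be pre-shifted by an even number of bits first):

        def isqrt16(x):
            """Integer sqrt for 0 <= x < 2**16, a few shifts/adds per bit."""
            res = 0
            bit = 1 << 14            # highest power of four within 16 bits
            while bit > x:
                bit >>= 2
            while bit:
                if x >= res + bit:
                    x -= res + bit
                    res = (res >> 1) + bit
                else:
                    res >>= 1
                bit >>= 2
            return res

        assert isqrt16(65535) == 255 and isqrt16(2) == 1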

    It doesn't sound as if you read much of what I said.


    I said that increasing the sample rate can relax the demands on the interpolation filter - perhaps to make it cheaper, but I would have thought the cost of the extra bandwidth, etc. would offset that. BUT on an 'off-the-shelf' Texas Instruments (/Burr-Brown) codec running at 44.1kHz, the interpolation filter is unity gain until just shy of 20kHz.


    48kHz goes even further (unity until just shy of 21.5kHz). This means that no higher sample rate can represent frequencies up to that point any better, and those above are unimportant, as no-one can hear them, assuming the audio equipment even lets them through.


    I did notice that there was some frequency shifting going on between the sample rates, where the peaks didn't seem to align that well, but this might just be down to different spectral leakage at the differing sample rates. You get an artefact where the pitch wanders off when you change sample rate and then comes back in, so it's hard to tell by ear whether it lines up once it stabilises (and I'm not going into the lab to find out). This is made more difficult by the fact that the oscillators (deliberately or not) aren't consistent cycle to cycle anyway, and the differing sample rates may alter that behaviour too. Small non-integer shifts in sample rate can be the biggest pain in the arse when trying to get cross-sample-rate consistency (I haven't checked, so I don't know if the Virus is fixed or floating point, so I couldn't begin to guess why, either, or suggest how you would rectify it).


    Either way, I'm not surprised, if you go into the minutest detail, that you see a difference, because they are different: the sample times are different, the bandwidth is different and, yes, tinkering with the Virus at 44.1 you do notice a little less sharpness at the top end, which really surprises me...


    I don't know what codec they use in the Virus, but that's irrelevant to the audio you get via USB, as that won't go anywhere near a codec - so the interpolation filter must be implemented as an FIR or otherwise on the DSP, which may explain the effect you see...


    Again, all I've heard are arguments for why 48 is better than 44.1, which is perhaps a deprecated problem, mainly because 48 is pretty standard for everything besides CD now, and any codec I've used is fine at both for human ears, even in unrealistic tests - though historically 48 may have given you a bit more margin for error. I guess it's also associated with something to do with the Virus, and not having seen a line of code for the OS on the thing, it is all guesswork.


    I'm confused, anyway. When it comes to recording the Virus, can you not just change the sample rate temporarily to 48k or 96k, whatever floats your boat, while you do it, and then change back when you are done? If you are using it in TI mode then it has to be consistent with sample rates, so I'm not sure what they can do for you without implementing some sort of SRC in the VC plugin beyond simply doubling up the samples - which I guess is happening somewhere along the line already for 44.1k...

    I'm not sure I agree that the envelope attacks are slow... I'd say the opposite, actually, as the variation in the last few points (125-127) has a very large range - of course you expect that step towards infinity at the top, but sometimes you really can't get a long enough attack on the envelopes.


    Bear in mind that a lot of what would be useful with audio-rate LFOs is already available, such as FM, RM (AM) and the modulation on the delay effect - I can't think of anything that wouldn't come out as garbage being modulated at audio frequencies by the LFOs. I'm certain that it's a complexity thing, though: the mod matrix may not be 'polled' fast enough, so you would get terrible aliasing, and increasing the polling rate might compromise the overall performance...

    Interesting - it seems that the antialiasing filter on the Virus doesn't perform so well, as there is a distinct drop-off at 44.1 from as low as 16kHz (bizarre, or possibly deliberate, who knows) and the first lobe is quite prominent - much more so than at 48kHz (from my own measurements).


    I'll have to admit, I've always run the Virus at 48k as that is the default sample rate for my ASIO driver, so I hadn't noticed that difference - albeit a specific issue that can be remedied by an EQ and would be buried in any mix.


    Aside from the quirks of the particular piece of equipment being used, there's no reason why you can't run at 44.1kHz, as there should be no audible difference (i.e. identical up to 18kHz - since, sampling aside, getting the analogue equipment to be reliable up there is hairy enough, and nobody cares about anything higher than that even if they can hear it). I will concede that running at 48kHz does give you a little more room for the antialiasing filter to go from unity (ideally at 18kHz, which is the case for the codec I use when run at 44.1kHz) down to -100dB or so, but it is not crucial.


    Going above 48kHz, however, besides internal upsampling for plugins, gives you nothing extra in the frequency ranges anyone can hear (and the components above are probably best removed to improve dynamic range, although they will be of such low amplitude that you needn't worry anyway).


    Oversampling is only of benefit if the plugin makes use of it, such as for sample interpolation (but you don't need to upsample for that; you can generate the interpolated sample(s) on the fly if you wish). Speaking of leading industry experts - I will point out that I have heard such people being extremely pleased with themselves for 'modelling capacitors (or condensers, as they were saying)' in their emulation of hardware and for using prewarping - things which are the bread and butter of DSP...


    Incidentally, I would point out that I am actually doing a PhD in DSP so I do actually know what I'm talking about, rather than simply regurgitating misapprehended concepts to order.