Posts by tonstudio96

For those who are able to measure the S/PDIF:

Is this a common problem, or does it occur with specific sources only? In a project we experienced some issues with several sound cards when analysing the S/PDIF quality, which was not always within spec. A tight PLL in the sink device ran into problems.

I am doing so with rapidly changing volume-control values driven from the sequencer / DAW into the device through MIDI.

With my own synth I can do this at very high speed.

The alternative was a more versatile envelope behaviour. I do not know what the TI offers here, but I am using an 8-point envelope that overcomes the limits of a classical ADSR curve.
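A multi-point envelope like that can be sketched as piecewise-linear interpolation over breakpoints. This is a minimal illustration, not the synth's actual implementation; the eight breakpoints below are made up for the example:

```python
import bisect

def make_envelope(points):
    """points: list of (time_seconds, level 0..1), sorted by time.
    Returns a function giving the level at time t.
    A classical ADSR is just the 4-point special case of this."""
    times = [t for t, _ in points]

    def level(t):
        if t <= times[0]:
            return points[0][1]
        if t >= times[-1]:
            return points[-1][1]
        i = bisect.bisect_right(times, t)
        (t0, l0), (t1, l1) = points[i - 1], points[i]
        # linear segment between the two surrounding breakpoints
        return l0 + (l1 - l0) * (t - t0) / (t1 - t0)

    return level

# hypothetical 8-point shape: attack, dip, second bump, two-stage
# decay, long release tail -- impossible with a plain ADSR
env = make_envelope([(0.0, 0.0), (0.01, 1.0), (0.05, 0.6), (0.08, 0.9),
                     (0.2, 0.5), (0.5, 0.4), (0.8, 0.2), (1.0, 0.0)])
print(env(0.01))   # 1.0, the first peak
```

The same table of breakpoints can of course drive filter cutoff or any other parameter, which is where the extra points pay off most.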

I think producing a HW synth and earning money with it is difficult today in terms of cost and development, because of all the SW synths around and the flood of cheap analog synths (produced mainly by one company).

However, there are possibilities to improve existing hardware even when the CPU is running out of time. It is possible to simply replace functions that have become obsolete because hardly anybody uses them any more with new functions, and to provide existing customers with an alternative software so they can use both versions as they like. With a better SW, you could imagine tweaking it so that it runs at a higher sample rate, making use of better filtering and sound-generation strategies with a reduced number of voices. For mixing and composing, one could provide a SW version with limited quality that gets by in real time with all the voices, and exchange it for a better one to render the voices piece by piece at increased quality over several recording sessions. For many users this would be fine, since most of them are semi-professional customers anyway.

I proposed exactly this strategy at least 15 years ago for the old Access Virus. Furthermore, I posted my idea of how I did that with my first synths in the '90s: let the system run at half sample speed, double the time-parameter settings, and offer half MIDI speed. Then it can calculate double the number of voices (or double precision), and you rename the track from 48 kHz to 96 kHz. Doing that in both directions, you can even circumvent the bandwidth limits given by the DACs.
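The relabelling trick can be demonstrated in a few lines: a tone programmed an octave down and rendered at 48 kHz produces exactly the sample stream that a 96 kHz render of the intended pitch would, so writing 96 kHz into the file header restores pitch and timing. A minimal sketch with a single sine, assuming nothing about the real engine beyond the rate arithmetic:

```python
import math

PLAYBACK_RATE = 96_000   # rate written into the track / file header
ENGINE_RATE = 48_000     # rate the engine and its DAC clock actually run at

def samples(freq, rate, n):
    """n samples of a sine at freq Hz, rendered at the given sample rate."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

# Program the patch an octave down (and, via the doubled time
# parameters, twice as slow); the engine renders it at 48 kHz...
engine_out = samples(220.0, ENGINE_RATE, 64)

# ...and relabelling the track as 96 kHz yields the 440 Hz target,
# because sample-for-sample the two streams are the same numbers:
target = samples(440.0, PLAYBACK_RATE, 64)
assert max(abs(a - b) for a, b in zip(engine_out, target)) < 1e-12
```

The voice count doubles because each voice now only needs half as many samples per second of real rendering time; the price is that the engine no longer runs in real time at the labelled rate.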

I had thus been able to use my TMS320 DSP system and the Soundart Chameleon in 96 kHz mode and easily produce bass frequencies down to 10 Hz in production mode, although they are designed to operate 48 kHz DACs only.

For today's synths this would be even easier, since they could operate totally asynchronously and push all data into the USB buffer.

One of my FPGA synths does something similar, since it operates at a video / VGA frequency: the synth engine can compute 148.5 MHz / 192 kHz ≈ 773 voice updates per sample period, but there is only RAM for 512 voices. So it makes use of gaps and blanking ("black shoulder") timeouts to produce the voices ahead of time and uses a sample buffer.
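The voice-budget arithmetic can be checked in a few lines; the one-update-per-clock-cycle assumption is mine for the sake of the example, not a statement about the original FPGA pipeline:

```python
# Voice budget when the synth engine shares the 1080p video pixel clock.
PIXEL_CLOCK = 148_500_000    # Hz, standard 1080p / 148.5 MHz pixel clock
SAMPLE_RATE = 192_000        # Hz, audio output rate

# Assuming one voice update per clock cycle, this many updates fit
# into a single audio sample period:
cycles_per_sample = PIXEL_CLOCK / SAMPLE_RATE   # 773.4375

VOICE_RAM = 512                                 # voice slots with RAM backing
headroom = cycles_per_sample - VOICE_RAM        # ~261 spare cycles per sample
print(cycles_per_sample, headroom)
```

The spare cycles, plus the horizontal/vertical blanking intervals where no pixels are needed, are what allow rendering ahead into the sample buffer.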

Just for those who come across this thread:

The wavetable synthesis I talked about (also the virtual one with dynamically calculated waves from equations) exists and might be available on DIY platforms soon. There is some negotiation to be done, since there is an active 5-year NDA regarding some designs I sold.

To get informed about the wavetable functions I integrated in my synth, see these pages: (actually one table for all voices, but with 8 waves only, PPG-style) (one parameter set (= equation) for each part, offering an unlimited number of waveforms; there is only a bandwidth limit according to the highest frequency of 768 kHz / 8)

    The idea behind that can be read here:…unktionsgenerator_im_FPGA

(capable of producing "almost squares" with up to the 15th harmonic)
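An "almost square" of this kind can be sketched as a truncated Fourier series: sum the odd harmonics up to the 15th with 1/n amplitudes. A generic illustration of the principle, not the FPGA core itself:

```python
import math

def almost_square(phase, max_harmonic=15):
    """Band-limited square wave: the Fourier series of a square,
    truncated at max_harmonic (odd harmonics 1, 3, ..., 15)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * phase) / (2 * k + 1)
        for k in range(max_harmonic // 2 + 1))

# render one cycle into a small wavetable
TABLE_LEN = 256
table = [almost_square(2 * math.pi * i / TABLE_LEN) for i in range(TABLE_LEN)]
```

Because the highest partial is fixed at 15x the fundamental, such a table stays alias-free as long as the fundamental is kept below Nyquist divided by 15; the visible Gibbs ripple near the edges is exactly the "almost" in "almost square".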

This core is also running in two non-musical applications.

Since Access does not offer reloadable wavetables, this might be a good addition to the synth.

What would you expect from a controller that is not available in the ready-built devices on the market? In my profile you can see the CME49, which is a perfect master keyboard for the Virus. I only chain another MIDI controller in between.

The free MODALapp brings the comfort of screen-based editing to your computer or mobile device. Available as a standalone app for macOS, Windows, iOS, iPadOS or Android, and in VST3 and AU plug-in formats, it is the perfect companion for synth enthusiasts looking to deep-dive into all sound-forming and performance parameters to create their own signature sounds, and to back up and manage patches and sequences.

I second that. Even doubly large displays are not nice for handling precise wave manipulation and preparation. I suggested VGA screens to many keyboard manufacturers early on, but many still do not see the necessity. Since we have PCs, at least direct access to a device should be offered for changing settings and such. To do this in real time on my synth, I added a VGA output to most of my devices:

Did you manage to do that? The Volca's MIDI In can be triggered by the Virus' output, I guess (I never did that). I am using the mini-jacks with sync to include the Volca in my setup (I have sync outputs on my DIY system). This way it is also possible to sync from the Volca.

Another way to obtain wide sounds is to use overlays of channels for the same MIDI input: create similar settings with slightly different ADSR and filter behaviour, and add modifications like phase inversion and dynamic delays. The "second" or "third" instrument (containing several oscillators) can be mixed into the track at some -12 dB ... -6 dB and will partly produce comb filtering. This is similar to what happens in rooms when frequencies reflect and superimpose, and is also known from Lauridsen/Schodder stereo, but unlike those it differs across frequencies and also varies over time. Moreover, MIDI echoes can be used (again with changing settings) to vary that dynamically (changing the reflections' sound over time).
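The comb filtering produced by mixing in a slightly delayed copy at -6 dB can be read off the magnitude response of a one-tap delay mix. A minimal model, assuming a static delay; real detuned layers drift, which is what makes the notches move:

```python
import math

SAMPLE_RATE = 48_000

def comb_gain(freq, delay_samples, mix_db=-6.0):
    """Magnitude response of y[n] = x[n] + g * x[n - d]:
    |H(f)| = sqrt(1 + g^2 + 2*g*cos(2*pi*f*d/fs))."""
    g = 10 ** (mix_db / 20)                       # -6 dB -> g ~ 0.5
    w = 2 * math.pi * freq * delay_samples / SAMPLE_RATE
    return math.sqrt(1 + g * g + 2 * g * math.cos(w))

# a 1 ms "second instrument" delay puts peaks at multiples of 1 kHz
# and notches halfway between them:
d = int(0.001 * SAMPLE_RATE)      # 48 samples
peak  = comb_gain(1000.0, d)      # f*d/fs integer      -> constructive
notch = comb_gain(500.0, d)       # f*d/fs half-integer -> destructive
print(peak, notch)
```

With the phase-inverted copy, g becomes negative and the peaks and notches swap places, which is one reason the inverted layer sounds so different from the plain one.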

Do you refer to the Virus' inputs or outputs (record FOR the Virus, or record THE Virus)?
Regarding production, I use the synths in stereo mode to tune and select sounds, but record all channels individually, using the maximum number of outputs of the synths / expanders. This means not playing the MIDI all together and having the "downmix" in the synth, but recording track by track to keep full options for later, when I want to use them.

(Several steps of sound processing cannot be done with synths, not even with the TI, I guess.)

Right, but as said, I know the limits of wavetables, since we use them in SDR, RADAR and LIDAR for different purposes. Regarding sound generation, these limits create a lot of artefacts, which in some cases are the reason for the "good" sound. It depends on the way you access the table and (pre)process a wave.
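One such access-dependent artefact can be shown by comparing a truncating table read with a linearly interpolated one. A generic sketch, not any particular synth's code:

```python
import math

# a small single-cycle sine table, as a stand-in for any wavetable
TABLE_LEN = 64
table = [math.sin(2 * math.pi * i / TABLE_LEN) for i in range(TABLE_LEN)]

def read_truncate(phase):
    """Cheapest access: drop the fractional index. The resulting
    phase quantisation is one source of the 'good' wavetable grit."""
    return table[int(phase * TABLE_LEN) % TABLE_LEN]

def read_lerp(phase):
    """Linear interpolation between neighbouring entries: much
    cleaner, at the cost of one extra read and a multiply."""
    x = phase * TABLE_LEN
    i = int(x)
    frac = x - i
    a = table[i % TABLE_LEN]
    b = table[(i + 1) % TABLE_LEN]
    return a + frac * (b - a)

# compare both against the exact sine at an off-grid phase
phase = 0.1337
exact = math.sin(2 * math.pi * phase)
err_trunc = abs(read_truncate(phase) - exact)
err_lerp = abs(read_lerp(phase) - exact)
print(err_trunc, err_lerp)   # interpolation is far closer
```

Which of the two "sounds better" is exactly the taste question raised above: the truncation error is deterministic distortion, and some classic wavetable machines owe part of their character to it.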

And yes, I know the Microwave; I even tried to emulate it, also using wavetable synthesis:

At the moment I am evaluating a new approach to WTS, which has only now become possible with current technology at a reasonable price. It requires some processing power and tricky resource management, but in a few weeks I should be able to present some results. Currently I am waiting for the new PCB. Since I also plan a B2C release, there might be something available in summer. Maybe as a companion to the Virus.