
DSP and Plugin Development • Re: Frequency domain simulation of temporal domain processes, FFT stuff

I don't know what mipmaps are or what band-limited means, so I've got a lot to learn before I can even understand these conversations. That linked thread is great though, just read through it.

Something mentioned there (and I think here) is this idea of processing the waveform when there's a new pitch. Quote from other thread:

"This method can get more complicated when the user changes pitch, say using a pitch envelope, or if the synth uses a morphing wavetable. In this case you could generate a new wavetable WT with each new audio block, or every 500 or so samples. This won't burden a CPU too much; any aliasing caused by a rapid increase in pitch before the WT regenerates won't be noticeable, and the transition between morphing wavetables should not be coarse."

Is this to say that the waveform is updated at a slower rate in the processor than everything else (i.e. the modulation)? If so, do they run in separate threads, or is it more of an "if x time has passed, update the waveform" polling type of thing, while the modulation updates on every sample regardless? Why would the waveform update at a slower rate than the modulation? A modulation system could be massive and complicated; I can't imagine the waveform processing being so much more expensive than the modulation processing that it would warrant totally separate rates (especially since that might introduce frequent branch mispredictions). Maybe I'm confused.
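For what it's worth, the polling idea from the quote can be sketched in a few lines. This is a hypothetical single-threaded example (all names, the table size, and the regen interval are my own assumptions, not from the quoted synth): the per-sample `process()` applies the modulated frequency on every call, but only rebuilds the band-limited table once every `kRegenInterval` samples, because the rebuild's inner harmonic loop is far more expensive than one table lookup.

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch of the "regenerate every ~500 samples" polling
// approach. Modulation (the freq argument) is applied per sample; the
// band-limited table is only rebuilt at a much slower control rate.
struct WavetableOsc {
    static constexpr int kTableSize = 2048;
    static constexpr int kRegenInterval = 512;  // "every 500 or so samples"
    static constexpr double kPi = 3.14159265358979323846;

    double sampleRate = 44100.0;
    std::vector<double> table = std::vector<double>(kTableSize, 0.0);
    double phase = 0.0;                          // normalized 0..1
    int samplesSinceRegen = kRegenInterval;      // forces a regen on sample 0

    // Rebuild a band-limited sawtooth: only sum harmonics that stay
    // below Nyquist for the given frequency. This O(tableSize * harmonics)
    // loop is the expensive part being amortized.
    void regenerate(double freq) {
        const int maxHarmonic = static_cast<int>(sampleRate / (2.0 * freq));
        for (int i = 0; i < kTableSize; ++i) {
            const double x = 2.0 * kPi * i / kTableSize;
            double s = 0.0;
            for (int h = 1; h <= maxHarmonic; ++h)
                s += std::sin(h * x) / h;
            table[i] = s;
        }
    }

    // Called once per sample; freq may change on every call (modulation).
    double process(double freq) {
        if (++samplesSinceRegen >= kRegenInterval) {  // rarely-taken branch
            regenerate(freq);
            samplesSinceRegen = 0;
        }
        const double out = table[static_cast<int>(phase * kTableSize) % kTableSize];
        phase += freq / sampleRate;
        if (phase >= 1.0) phase -= 1.0;
        return out;
    }
};
```

No extra thread is needed: both rates live in the same audio callback, and the rarely-taken `if` is the entire "polling" mechanism. The cost asymmetry is the point: one sample of modulation is a handful of arithmetic ops, while one regeneration here is tens of thousands of `sin` calls.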

Statistics: Posted by rou58 — Wed Apr 24, 2024 1:50 am


