[Portaudio] Portaudio & FFTW
richard at rwdobson.com
Mon Feb 5 12:55:28 EST 2018
Short answer: yes. I have done this (in part, the resynthesis stage)
in the program "pvplay", which is part of the now open-source CDP system
(https://github.com/ComposersDesktop/CDP7). Of course, if you want to
receive audio samples you will need to implement the analysis stage as
well (see the streaming "pvs" opcodes in Csound, based on the same code,
in turn based on Mark Dolson's original phase vocoder).
The approach is broadly the same whether for a GUI app or for a
command-line streaming player such as pvplay. Once you have your "basic"
phase vocoder code working (an FFT-based analysis/synthesis scheme),
create a background thread to handle the processing, and a ring buffer
to carry incoming and outgoing audio samples from/to the portaudio
callback. My code offered the option to use FFTW 2.1 rather than the
provided FFT implementation. Tim Goetze made some LADSPA plugins based
on my original implementation, but updated to use FFTW 3.
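To make the background-thread/ring-buffer idea concrete, here is a minimal single-producer/single-consumer ring buffer sketch in C11. All names (rb_t, rb_write, etc.) are my own for illustration, not from pvplay; it assumes a power-of-two capacity so index wrapping is a cheap mask. The audio callback would call rb_write, the FFT thread rb_read, and neither side ever blocks:

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    float *data;
    size_t mask;              /* capacity - 1; capacity must be a power of two */
    atomic_size_t write_pos;  /* advanced only by the producer (audio callback) */
    atomic_size_t read_pos;   /* advanced only by the consumer (FFT thread) */
} rb_t;

static int rb_init(rb_t *rb, size_t capacity_pow2)
{
    rb->data = malloc(capacity_pow2 * sizeof(float));
    if (!rb->data) return -1;
    rb->mask = capacity_pow2 - 1;
    atomic_init(&rb->write_pos, 0);
    atomic_init(&rb->read_pos, 0);
    return 0;
}

/* Producer side: writes up to n samples, returns how many fitted.
   Safe to call from a PortAudio callback -- no locks, no allocation. */
static size_t rb_write(rb_t *rb, const float *src, size_t n)
{
    size_t w = atomic_load_explicit(&rb->write_pos, memory_order_relaxed);
    size_t r = atomic_load_explicit(&rb->read_pos, memory_order_acquire);
    size_t space = rb->mask + 1 - (w - r);
    if (n > space) n = space;
    for (size_t i = 0; i < n; i++)
        rb->data[(w + i) & rb->mask] = src[i];
    atomic_store_explicit(&rb->write_pos, w + n, memory_order_release);
    return n;
}

/* Consumer side: reads up to n samples, returns how many were available. */
static size_t rb_read(rb_t *rb, float *dst, size_t n)
{
    size_t r = atomic_load_explicit(&rb->read_pos, memory_order_relaxed);
    size_t w = atomic_load_explicit(&rb->write_pos, memory_order_acquire);
    size_t avail = w - r;
    if (n > avail) n = avail;
    for (size_t i = 0; i < n; i++)
        dst[i] = rb->data[(r + i) & rb->mask];
    atomic_store_explicit(&rb->read_pos, r + n, memory_order_release);
    return n;
}
```

In practice you don't have to roll your own: the PortAudio sources ship a ready-made lock-free ring buffer (pa_ringbuffer) that serves the same purpose.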
If you just want to do classic FFT-based convolution, you don't need all
the extra baggage of pvoc; but you will still need the
background/foreground mechanism, as with any process that incurs
arbitrary block computation and latency.
On 05/02/2018 17:19, cristiano piatti wrote:
> Good morning,
> does anybody know how to implement an FFTW (www.fftw.org) streaming
> audio example, and can anyone suggest how to implement a canonical DFT
> and/or FFT on a stream of audio samples?
> Many thanks.
> Portaudio mailing list
> Portaudio at lists.columbia.edu