[Portaudio] WASAPI: how to use a different sample rate than the shared format ?

Laurent Zanoni laurent.zanoni at acapela-group.com
Wed Dec 7 06:01:52 EST 2016


On Windows 10 (UWP) there are a few flags that are supposed to adapt the user format to the driver's format (AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM, AUDCLNT_STREAMFLAGS_SRC_DEFAULT_QUALITY), as well as the option eStreamOptionMatchFormat (Windows 10).
I'm trying to play 22050 Hz samples (adapted from paex_sine_c++) in WASAPI shared mode, but it fails, saying that the format is not supported.

CreateAudioClient calls GetClosestFormat with 22050 as the sample rate, then IAudioClient::IsFormatSupported fills the closest match with 48000.
The sample-rate validation then fails, of course :/

Isn't WASAPI shared mode supposed to allow resampling when the proper flags are set?

Any hints?
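For reference, at the raw WASAPI level the auto-conversion request looks roughly like the sketch below (illustrative only, with error handling omitted; whether PortAudio's WASAPI backend passes these flags through to Initialize depends on the PortAudio version in use). Per the Windows documentation, with AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM set, the IsFormatSupported negotiation can be skipped and Initialize called directly with the client format:

```cpp
// Sketch: ask shared-mode WASAPI to resample the client format itself
// (on Windows versions that support these flags). wfx is assumed to be a
// WAVEFORMATEX describing the client format, e.g. 22050 Hz PCM.
DWORD streamFlags = AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM
                  | AUDCLNT_STREAMFLAGS_SRC_DEFAULT_QUALITY;

HRESULT hr = audioClient->Initialize(
    AUDCLNT_SHAREMODE_SHARED,
    streamFlags,
    bufferDuration100ns,   // buffer duration in 100-ns units, e.g. 10 ms
    0,                     // periodicity must be 0 in shared mode
    &wfx,
    NULL);                 // audio session GUID (NULL = default session)
```

If PortAudio's GetClosestFormat rejects the rate before Initialize is ever reached, these flags never get a chance to take effect, which would match the behavior described above.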

  ----- Original Message ----- 
  From: Nocs ... 
  To: portaudio list ; philburk at mobileer.com 
  Sent: Monday, December 05, 2016 7:20 PM
  Subject: Re: [Portaudio] A small guidance needed in WMME

  Thanks for the response. Yes, I managed to achieve it the blocking way, also using Opus encoding, which seems to make things easier because its compression lets me send the audio in small buffers.

  By combining PortAudio and Opus decoding, I think this is a very well-suited approach for VoIP solutions.

  I haven't tested the transmission yet, since I'll be working on it over the next few days, but if things don't go well with vector buffers I will have to use the ring buffers as you mention. Thanks for the tip about them.


  From: portaudio-bounces at lists.columbia.edu <portaudio-bounces at lists.columbia.edu> on behalf of Phil Burk <philburk at mobileer.com>
  Sent: Monday, December 5, 2016 7:23:02 PM
  To: portaudio list
  Subject: Re: [Portaudio] A small guidance needed in WMME 

  If you need low latency then you should probably use the callback API. 
  Just make sure you don't do much besides simple scaling and routing of signals in the callback.
  If you need to send the data over a network, or to disk, then pipe it through a ringbuffer to another thread. That way you can avoid doing any networking in the audio callback.

  You may not be able to use a bidirectional stream. In that case you may also need to use a ringbuffer to connect two unidirectional streams.

  I think that with a combination of callbacks and ring buffers you can build whatever topology you need.
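  A minimal sketch of that callback-to-ring-buffer hand-off (illustrative only: PortAudio ships a real lock-free implementation in pa_ringbuffer.c, and the names below are not its API). The audio callback writes samples and never blocks; a network or disk thread drains them at its own pace:

```c
/* Single-producer/single-consumer ring buffer sketch. Capacity must be a
 * power of two so wrap-around can use a bit mask; the unsigned indices are
 * allowed to overflow, and their difference stays correct modulo 2^32. */
#define RB_SIZE 1024

typedef struct {
    float data[RB_SIZE];
    volatile unsigned writeIndex;  /* advanced only by the audio callback */
    volatile unsigned readIndex;   /* advanced only by the consumer thread */
} RingBuffer;

static void rb_init(RingBuffer *rb) {
    rb->writeIndex = rb->readIndex = 0;
}

/* Called from the audio callback: copy up to n samples in, never block.
 * Returns how many samples were actually stored (excess is dropped). */
static unsigned rb_write(RingBuffer *rb, const float *src, unsigned n) {
    unsigned space = RB_SIZE - (rb->writeIndex - rb->readIndex);
    unsigned i;
    if (n > space) n = space;
    for (i = 0; i < n; ++i)
        rb->data[(rb->writeIndex + i) & (RB_SIZE - 1)] = src[i];
    rb->writeIndex += n;
    return n;
}

/* Called from the network/disk thread: drain up to n available samples. */
static unsigned rb_read(RingBuffer *rb, float *dst, unsigned n) {
    unsigned avail = rb->writeIndex - rb->readIndex;
    unsigned i;
    if (n > avail) n = avail;
    for (i = 0; i < n; ++i)
        dst[i] = rb->data[(rb->readIndex + i) & (RB_SIZE - 1)];
    rb->readIndex += n;
    return n;
}
```

  With one writer and one reader, each index is modified by exactly one thread, which is what makes this pattern safe to use from a real-time audio callback.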

  Phil Burk

  On Thu, Dec 1, 2016 at 9:31 AM, Nocs ... <NoCos30 at hotmail.com> wrote:

    Hello to all,

    I have lost myself in too many examples and tests, and I am not sure which approach to choose, so as not to spend time unnecessarily.

    I made both a blocking and a non-blocking input-to-output test, but I can't tell which is best; after seeing other tests done in other ways, I don't know whether what I am doing is right for my needs.

    What I want to achieve is to capture the input from the microphone, save it to a buffer, and also play that buffer to the output at the same time.

    It will be for a P2P chat service connecting two PCs, for example, so after doing my tests on the same PC I need to be able to switch the output to play the buffer sent from the other PC's microphone.

    Which test, or combination of tests, examples, and tutorials, suits my needs?

    Thanks in advance for your time

    Portaudio mailing list
    Portaudio at lists.columbia.edu

