Hi Tanu,

Thanks for your response. To clarify:

Yes, I am using pa_stream_readable_size() first, to check whether there are
readable bytes available.

In my runs pa_stream_peek() always returns zero, so I have not seen it return
any error code.

By "no data" I mean that the PCM data written to the file is a flat line; that
is, free > 0 and data != NULL.

From the logs I noticed that very few bytes actually get pulled by
pa_stream_peek() in each iteration, for example 8, 6, 59, 70, 10 and so on.
Is that strange?

When I do get data (a noisy PCM signal rather than a flat line), roughly 1000
bytes are pulled per iteration by pa_stream_peek(), e.g. 800, 1600, 1300, 700.
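For clarity, the loop I am describing boils down to the sketch below
(simplified from the attached Read(), with the threaded main loop lock held
around it); the byte counts above are the nbytes values returned by each peek,
which is what I am logging:

    size_t nbytes = pa_stream_readable_size(mStreams[1]);
    while (nbytes == 0) {
        pa_threaded_mainloop_wait(mThreadedMainloop);   /* woken by the read callback */
        nbytes = pa_stream_readable_size(mStreams[1]);
    }
    const void *chunk = NULL;
    pa_stream_peek(mStreams[1], &chunk, &nbytes);       /* nbytes is the count I log */
    if (nbytes > 0) {
        if (chunk != NULL)
            fwrite(chunk, 1, nbytes, fpOutput);         /* fpOutput is the raw PCM dump file */
        pa_stream_drop(mStreams[1]);                    /* also drops holes (chunk == NULL) */
    }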
I am using the threaded main loop.

I am using a single context, and I am opening two streams in that context (one
for playback and one for record). Will that create a problem? I am attaching
the code for your reference. "void PulseAudioClient::Read()" is the function to
look at. This is still test code, so please excuse the inefficiencies.

I will upload the raw PCM file and mail the link.
________________________________________
From: Tanu Kaskinen [[email protected]]
Sent: Friday, November 15, 2013 12:36 PM
To: Chanchani, Nimesh
Cc: [email protected]
Subject: Re: [pulseaudio-discuss] Question regarding usage of pa_stream_peek()
and pa_stream_drop()
On Thu, 2013-11-14 at 10:40 +0000, [email protected] wrote:
> Hi ,
>
> I have a question regarding the usage of pa_stream_peek() and
> pa_stream_drop().
>
> my usage is :
>
> pa_stream_peek(mStreams[1], &data, &free);
> if ( data == NULL && free > 0 )
> {
> pa_stream_drop(mStreams[1]);
> }
> else
> {
> fwrite(data, sizeof(uint8_t), free, fpOutput);
> pa_stream_drop(mStreams[1]);
> }
Looks good, except that errors from pa_stream_peek() aren't handled, and
the code assumes that there is data available (it's a valid assumption
if you have checked pa_stream_readable_size()).
> The problem is that sometimes i get the data and sometimes not, also when i
> get the data it is very noisy.
What do you mean by not getting data? Do you mean that "data == NULL &&
free > 0" or "data == NULL && free == 0"? If the former, it means that
the stream contains holes, which is pretty unusual, and I think it
should happen only if you record from a monitor source. If the latter,
then you're calling pa_stream_peek() when there's no data available, and
you should use pa_stream_readable_size() and call pa_stream_peek() only
if the readable size is greater than zero.
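For illustration, here is a minimal sketch of one peek/drop step that handles
all of these cases (error, hole, nothing readable, real data); the stream s and
the output file f are placeholders:

    const void *chunk = NULL;
    size_t nbytes = 0;
    if (pa_stream_peek(s, &chunk, &nbytes) < 0) {
        /* peek failed; the reason is available via the context errno */
        fprintf(stderr, "pa_stream_peek() failed: %s\n",
                pa_strerror(pa_context_errno(pa_stream_get_context(s))));
    } else if (nbytes == 0) {
        /* nothing readable and no hole: do not call pa_stream_drop() */
    } else if (chunk == NULL) {
        /* a hole of nbytes bytes: drop it without writing anything */
        pa_stream_drop(s);
    } else {
        /* a real chunk of nbytes bytes */
        fwrite(chunk, 1, nbytes, f);
        pa_stream_drop(s);
    }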
It's also unclear what you mean by "very noisy". Can you make an example
audio file available somewhere? PulseAudio doesn't generate any noise by
itself, so this sounds like a hardware problem, or you're using wrong
format parameters when playing back the audio.
> Now, from reading the document, it says :
> pa_stream_peek() , returns the bytes read, which may be less than the
> fragsize.
> also, pa_stream_drop() is used to remove the current fragment.
>
> now my question is , since pa_stream_drop is needed to move the buffer
> forward, so that i read the next , what will happen if I call
> pa_stream_peek() , which returns partial fragment, and then call
> pa_stream_drop(), which will drop the fragment. Will this lead to data
> loss?
No, there are no "partial fragments" that you need to worry about. The
documentation could probably be improved, but I'm not quite sure how.
The server sends audio in chunks, and the size of those chunks is not
necessarily constant. Sometimes their size may be the same as the
fragsize parameter that you configured, but sometimes their size may be
something else. pa_stream_peek() always returns a full chunk, and
pa_stream_drop() always drops the chunk that was returned by
pa_stream_peek().
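In code, a read handler that drains everything currently readable might look
roughly like this (again with a placeholder stream s and file f); each
peek/drop pair consumes exactly one whole chunk, whatever its size:

    while (pa_stream_readable_size(s) > 0) {
        const void *chunk = NULL;
        size_t nbytes = 0;
        if (pa_stream_peek(s, &chunk, &nbytes) < 0 || nbytes == 0)
            break;                         /* error, or nothing to consume */
        if (chunk != NULL)
            fwrite(chunk, 1, nbytes, f);   /* write real data, skip holes */
        pa_stream_drop(s);                 /* drops the chunk peek just returned */
    }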
--
Tanu
________________________________
#include <utils/Log.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pulse/pulseaudio.h>
#include <pulse/thread-mainloop.h>
#include "PulseAudioClient.h"
#define PLAYBACK_LATENCY_USECS 25000 // 25 ms
namespace android_audio_legacy
{
bool PulseAudioClient::mSetup = false;
pa_mainloop_api* PulseAudioClient::mMainloopApi;
pa_threaded_mainloop* PulseAudioClient::mThreadedMainloop;
pa_context* PulseAudioClient::mContext;
pa_sample_spec PulseAudioClient::mPlaybackFormat;
pa_sample_spec PulseAudioClient::mRecordFormat;
const char* PulseAudioClient::mAppName;
pa_stream* PulseAudioClient::mStreams[NSTREAMS];
int16_t PulseAudioClient::mData[DATA_SIZE];
int PulseAudioClient::mNStreamsReady;
int PulseAudioClient::FlushSucceded;
bool PulseAudioClient::InitPulse()
{
if ( mSetup == true )
{
//Already initialized , Exit.
return true;
}
mAppName = "AndroidPulseAudioClient";
mSetup = false;
mMainloopApi = NULL;
mContext = NULL;
mNStreamsReady = 0;
for (int i = 0; i < NSTREAMS; i++)
mStreams[i] = NULL;
// allocate a new threaded main loop object
mThreadedMainloop = pa_threaded_mainloop_new();
if(mThreadedMainloop == NULL)
return false;
// get the abstract main loop abstraction layer vtable for this mainloop
mMainloopApi = pa_threaded_mainloop_get_api(mThreadedMainloop);
// start the event loop thread
if(pa_threaded_mainloop_start(mThreadedMainloop) < 0)
return false;
// lock the event loop object, effectively blocking the event loop
// thread from processing events while we create and connect the context
pa_threaded_mainloop_lock(mThreadedMainloop);
// create a new pulseaudio context with the server
mContext = pa_context_new(mMainloopApi, mAppName);
if(mContext == NULL)
{
// Do not leave the main loop locked on the error path.
pa_threaded_mainloop_unlock(mThreadedMainloop);
return false;
}
pa_context_set_state_callback(mContext, PulseAudioClient::ContextStateCallback, NULL);
// Connect the context
if (pa_context_connect(mContext, NULL, PA_CONTEXT_NOFAIL, NULL) < 0) {
LOGE("pa_context_connect() failed: %s", pa_strerror(pa_context_errno(mContext)));
pa_threaded_mainloop_unlock(mThreadedMainloop);
return false;
}
pa_threaded_mainloop_unlock(mThreadedMainloop);
mSetup = true;
return true;
}
void PulseAudioClient::FlushSuccess(pa_stream *s, int success, void *userdata)
{
FlushSucceded = 1;
}
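// Read(): uncork the record stream, flush any stale captured data, then
// repeatedly wait for readable data and copy each chunk returned by
// pa_stream_peek() into /data/output.pcm, dropping the chunk afterwards.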
void PulseAudioClient::Read()
{
const void* data;
/* Be notified when we can write data */
pa_stream_cork(mStreams[1],0,NULL,NULL); // uncork the stream, it was started corked.
while ( (pa_stream_get_state(mStreams[1]) != PA_STREAM_READY) || pa_stream_is_corked(mStreams[1])) { usleep(100); }
FlushSucceded = 0;
pa_stream_flush(mStreams[1],PulseAudioClient::FlushSuccess,NULL);
while (!FlushSucceded){ usleep(100); }
FILE *fpOutput;
fpOutput = fopen("/data/output.pcm","wb");
if (fpOutput == NULL)
{
LOGE("ERROR: cannot create");
exit(1);
}
pa_stream_set_read_callback(mStreams[1], PulseAudioClient::ReadCallback, 0);
pa_threaded_mainloop_lock(mThreadedMainloop);
unsigned long dumb_flag = 0;
int brtn;
while( dumb_flag < 3000)
{
size_t free = pa_stream_readable_size(mStreams[1]);
while( free == 0 )
{
pa_threaded_mainloop_wait(mThreadedMainloop);
free = pa_stream_readable_size(mStreams[1]);
}
if (pa_stream_peek(mStreams[1], &data, &free) < 0)
{
LOGE("pa_stream_peek() failed: %s", pa_strerror(pa_context_errno(mContext)));
break;
}
if ( free == 0 )
{
// Nothing readable and no hole; do not call pa_stream_drop().
continue;
}
if ( data == NULL )
{
// free > 0 but data == NULL: a hole in the stream, skip it.
pa_stream_drop(mStreams[1]);
}
else
{
brtn = fwrite(data, sizeof(uint8_t), free, fpOutput);
pa_stream_drop(mStreams[1]);
dumb_flag++;
}
}
fclose(fpOutput);
pa_threaded_mainloop_unlock(mThreadedMainloop);
pa_stream_set_read_callback(mStreams[1], NULL, NULL);
return;
}
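// Context state callback: once the context reaches PA_CONTEXT_READY, create and
// connect both streams on this single context: stream 0 for playback, stream 1
// for record (connected with PA_STREAM_START_CORKED).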
void PulseAudioClient::ContextStateCallback(pa_context *c, void *userdata)
{
LOGD("ContextStateCallback");
if(c == NULL)
{
LOGE("NULL context");
return;
}
switch (pa_context_get_state(c)) {
case PA_CONTEXT_CONNECTING:
LOGD("PA_CONTEXT_CONNECTING");
break;
case PA_CONTEXT_AUTHORIZING:
LOGD("PA_CONTEXT_AUTHORIZING");
break;
case PA_CONTEXT_SETTING_NAME:
LOGD("PA_CONTEXT_SETTING_NAME");
break;
case PA_CONTEXT_READY: {
LOGD("PA_CONTEXT_READY");
for (int i = 0; i < NSTREAMS; i++) {
char name[64];
LOGD("Creating stream %i", i);
snprintf(name, sizeof(name), "stream #%i", i);
//Playback Stream
if ( i == 0)
{
mPlaybackFormat.format = PA_SAMPLE_S16LE;
mPlaybackFormat.rate = SAMPLE_HZ;
mPlaybackFormat.channels = CHANNELS;
mStreams[i] = pa_stream_new(c, name, &mPlaybackFormat, NULL);
if(mStreams[i] == NULL)
{
LOGE("NULL stream");
return;
}
pa_stream_set_state_callback(mStreams[i], PulseAudioClient::PlaybackStreamStateCallback, (void*) (long) i);
//pa_stream_set_write_callback
size_t latency_bytes = pa_usec_to_bytes(PLAYBACK_LATENCY_USECS, &mPlaybackFormat);
pa_buffer_attr bufferAttr;
bufferAttr.maxlength = (uint32_t) -1;
bufferAttr.tlength = latency_bytes;
bufferAttr.prebuf = (uint32_t) -1;
bufferAttr.minreq = (uint32_t) -1;
bufferAttr.fragsize = 0;
LOGD("bufferAttr.maxlength %d", bufferAttr.maxlength);
LOGD("bufferAttr.tlength %d", bufferAttr.tlength);
pa_stream_connect_playback(mStreams[i],
NULL,
&bufferAttr,
pa_stream_flags_t(PA_STREAM_INTERPOLATE_TIMING | PA_STREAM_AUTO_TIMING_UPDATE | PA_STREAM_ADJUST_LATENCY),
NULL,
NULL); // no stream to synchronize with for the first stream
}
//Record Stream
if ( i == 1)
{
mRecordFormat.format = PA_SAMPLE_S16LE;
mRecordFormat.rate = 44100;
mRecordFormat.channels = 2;
mStreams[i] = pa_stream_new(c, name, &mRecordFormat, NULL);
if(mStreams[i] == NULL)
{
LOGE("NULL stream");
return;
}
pa_stream_set_state_callback(mStreams[i], PulseAudioClient::RecordStreamStateCallback, (void*) (long) i);
pa_buffer_attr bufferAttr;
// Initialize every field; only maxlength and fragsize matter for record streams.
bufferAttr.maxlength = (uint32_t) -1;
bufferAttr.tlength = bufferAttr.prebuf = bufferAttr.minreq = (uint32_t) -1;
bufferAttr.fragsize = (uint32_t) -1;
pa_stream_connect_record(mStreams[i],
NULL,
&bufferAttr,
pa_stream_flags_t(PA_STREAM_START_CORKED));
}
}
break;
}
case PA_CONTEXT_TERMINATED:
LOGD("PA_CONTEXT_TERMINATED");
mMainloopApi->quit(mMainloopApi, 0);
break;
case PA_CONTEXT_FAILED:
default:
LOGE("Context error: %s\n", pa_strerror(pa_context_errno(c)));
break;
}
}
void PulseAudioClient::PlaybackStreamStateCallback(pa_stream *s, void *userdata)
{
LOGD("PlaybackStreamStateCallback");
if(s == NULL)
{
LOGE("NULL stream");
return;
}
switch (pa_stream_get_state(s)) {
case PA_STREAM_UNCONNECTED:
LOGD("PA_STREAM_UNCONNECTED");
break;
case PA_STREAM_CREATING:
LOGD("PA_STREAM_CREATING");
break;
case PA_STREAM_TERMINATED:
LOGD("PA_STREAM_TERMINATED");
break;
case PA_STREAM_READY: {
LOGD("PA_STREAM_READY");
/* Be notified when this stream is drained */
pa_stream_set_underflow_callback(s, PulseAudioClient::UnderflowCallback, userdata);
++mNStreamsReady;
break;
}
default:
case PA_STREAM_FAILED:
LOGE("Stream error: %s\n", pa_strerror(pa_context_errno(pa_stream_get_context(s))));
break;
}
}
void PulseAudioClient::RecordStreamStateCallback(pa_stream *s, void *userdata)
{
LOGD("RecordStreamStateCallback");
if(s == NULL)
{
LOGE("NULL stream");
return;
}
switch (pa_stream_get_state(s)) {
case PA_STREAM_UNCONNECTED:
LOGD("PA_STREAM_UNCONNECTED");
break;
case PA_STREAM_CREATING:
LOGD("PA_STREAM_CREATING");
break;
case PA_STREAM_TERMINATED:
LOGD("PA_STREAM_TERMINATED");
break;
case PA_STREAM_READY: {
LOGD("PA_STREAM_READY");
/* Be notified when this stream is drained */
pa_stream_set_underflow_callback(s, PulseAudioClient::UnderflowCallback, userdata);
break;
}
default:
case PA_STREAM_FAILED:
LOGE("Stream error: %s\n", pa_strerror(pa_context_errno(pa_stream_get_context(s))));
break;
}
}
void PulseAudioClient::NopFreeCallback(void *p)
{
LOGD("NopFreeCallback");
}
void PulseAudioClient::UnderflowCallback(struct pa_stream *s, void *userdata)
{
LOGD("UnderflowCallback!");
int i = (int) (long) userdata;
LOGD("Stream %i underflow.", i);
}
void PulseAudioClient::WriteCallback(pa_stream *s, size_t length, void *userdata)
{
pa_threaded_mainloop_signal(mThreadedMainloop, 0);
}
void PulseAudioClient::ReadCallback(pa_stream *s, size_t length, void *userdata)
{
pa_threaded_mainloop_signal(mThreadedMainloop, 0);
}
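// GetLatency(): report the playback stream latency in microseconds, falling back
// to the configured target latency when the server has no timing data yet.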
uint64_t PulseAudioClient::GetLatency()
{
int negative=0;
pa_usec_t latency_usec = 0;
if( mNStreamsReady ){
LOGD("Getting latency from hardware");
// The PulseAudio documentation suggests using pa_stream_get_latency() to query the
// latency. However, it reports 0 before playback has started, and audioflinger
// requests the latency before sending any data, so fall back to our own value then.
pa_threaded_mainloop_lock(mThreadedMainloop);
int ret = pa_stream_get_latency(mStreams[0], &latency_usec, &negative);
pa_threaded_mainloop_unlock(mThreadedMainloop);
if( ret < 0 || latency_usec == 0 ) // pa_stream_get_latency() returns a negative error code on failure
latency_usec = PLAYBACK_LATENCY_USECS;
LOGD("Returning latency %llu", latency_usec);
}
else
LOGD("Latency requested before stream is ready");
return latency_usec;
}
} // namespace android_audio_legacy
_______________________________________________
pulseaudio-discuss mailing list
[email protected]
http://lists.freedesktop.org/mailman/listinfo/pulseaudio-discuss