fpo wrote: 
> To me, what I found points to a possibly spurious race condition, which
> may be hard to reproduce on a specific machine. E.g. I can avoid
> triggering the dropouts when using a big stream buffer like -b 16384. It
> may very well be that error 0x2745 is a *consequence* of a race
> condition as opposed to a *cause*.
> I could do some more testing if you would like to give me some pointers
> where to look further.

I agree it appears to be a "race condition" -- a situation where the
outcome depends on timing, i.e. on whichever bit of software "wins the race".

The first place to look is stream_thread() and any static variables it
uses. Unfortunately, a typical C program makes heavy use of static
variables, and SqueezeLite is no different. I see that stream_thread()
uses a static streambuf, so it would be wise to ensure that the thread
only ever uses thread-safe access to streambuf and the other static
variables. Maybe the 30-second timeout we see is related to the
mutex_lock() used throughout stream_thread(). Debugging race conditions
is tough; good luck with that!
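
For illustration, here is a minimal sketch of what "thread-safe access to
streambuf" means in practice. This is not squeezelite's actual code (the
struct and function names below are made up); it just shows the pattern:
every thread that touches the shared buffer takes its mutex first, so the
stream thread and the decode thread never see a half-updated fill count.

#include <pthread.h>
#include <string.h>

/* Hypothetical shared stream buffer -- names are illustrative only. */
struct stream_buf {
    pthread_mutex_t mutex;  /* guards all of the fields below */
    unsigned char  *data;   /* backing storage                */
    size_t          size;   /* total capacity in bytes        */
    size_t          used;   /* bytes currently buffered       */
};

/* Called from the network (stream) thread. */
size_t streambuf_write(struct stream_buf *b, const unsigned char *src, size_t n)
{
    pthread_mutex_lock(&b->mutex);
    size_t space = b->size - b->used;
    if (n > space) n = space;            /* never overrun the buffer   */
    memcpy(b->data + b->used, src, n);   /* simplified: no wrap-around */
    b->used += n;
    pthread_mutex_unlock(&b->mutex);
    return n;                            /* bytes actually accepted    */
}

/* Called from the decode thread. */
size_t streambuf_read(struct stream_buf *b, unsigned char *dst, size_t n)
{
    pthread_mutex_lock(&b->mutex);
    if (n > b->used) n = b->used;
    memcpy(dst, b->data, n);
    memmove(b->data, b->data + n, b->used - n);  /* simplified: shift remainder down */
    b->used -= n;
    pthread_mutex_unlock(&b->mutex);
    return n;
}

If any code path reads or writes those counters without taking the lock
(even just for a debug print), two threads can race and the buffer can
momentarily look empty or full when it isn't -- exactly the kind of bug
that only shows up under particular timing.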

LARGER BUFFER
OK, let me do a test with the larger buffer, as you suggest:

squeezelite-1.9.8-1318-test2-win32 -d all=debug -b 16384 -n SqueezeLite-bogan-admin -s nas -o 1


_No dropout was heard, even after 30 minutes!_

The new buffer size shows up in the figure reported by decode_thread():

[13:54:43.545] decode_thread:75 streambuf bytes: 3588759 outputbuf space: 3527999

Now it is bigger at 3.5MB, compared with 2MB before. Again, we see this
value steadily decreasing during playback, until it is as small as 864
bytes.

[13:57:46.637] decode_thread:75 streambuf bytes: 864 outputbuf space: 65095

Previously it took 2 minutes to use up the stream buffer (run it down to
zero); now it takes 3 minutes. That makes sense: roughly 2 MB in 2 minutes
and roughly 3.5 MB in 3 minutes work out to about the same consumption
rate (on the order of 17-20 kB/s), so the bigger buffer simply takes
longer to drain.

Then, we see it go fetch the next track on the playlist, and playback
continues:

[13:57:47.067] decode_thread:100 decode complete
[13:57:47.067] sendSTAT:166 ms_played: 183306 (frames_played: 8091728 device_frames: 7917)
[13:57:47.067] sendSTAT:195 STAT: STMd
[13:57:47.067] sendSTAT:200 received bytesL: 3588759 streambuf: 0 outputbuf: 3390976 calc elapsed: 183306 real elapsed: 183500 (diff: -194) device: 179 delay: 0
[13:57:47.067] sendSTAT:166 ms_played: 183306 (frames_played: 8091728 device_frames: 7917)
[13:57:47.067] sendSTAT:195 STAT: STMt
[13:57:47.067] sendSTAT:200 received bytesL: 3588759 streambuf: 0 outputbuf: 3390976 calc elapsed: 183306 real elapsed: 183500 (diff: -194) device: 179 delay: 0
[13:57:47.154] process:527 strm
[13:57:47.154] process_strm:280 strm command s
[13:57:47.154] process_strm:350 strm s autostart: 1 transition period: 3 transition type: 0 codec: m
[13:57:47.154] sendSTAT:166 ms_played: 183400 (frames_played: 8091728 device_frames: 7917)
[13:57:47.154] sendSTAT:195 STAT: STMf
[13:57:47.154] sendSTAT:200 received bytesL: 3588759 streambuf: 0 outputbuf: 3390976 calc elapsed: 183400 real elapsed: 183594 (diff: -194) device: 179 delay: 94
[13:57:47.154] codec_open:264 codec open: 'm'
[13:57:47.154] connect_socket:164 connecting to 192.168.0.60:9002
[13:57:47.175] stream_sock:597 header: GET /stream.mp3?player=70:85:c2:a8:ce:68 HTTP/1.0
...
[13:57:47.286] decode_thread:75 streambuf bytes: 3223041 outputbuf space: 172487

This is totally normal; no errors at all.

Increasing the stream buffer seems to be a solid workaround, but we
still don't know why the dropouts were occurring.
If this issue is due to a resource leak, then increasing the buffer
would only postpone the failure rather than cure it.
But so far, I don't see any sign of the issue when using the larger
stream buffer, even now that 55 minutes have gone by.
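
If we want to test the resource-leak theory rather than just wait it out,
one cheap option would be to track the low-water mark of the stream buffer
over a long listening session: with a genuine slow leak, the minimum free
space should creep downward from track to track instead of recovering.
A rough sketch of that idea (the function below and the place you would
call it from are hypothetical, not part of squeezelite):

#include <stddef.h>
#include <stdio.h>

/* Hypothetical instrumentation -- not a squeezelite API.  Call it from
 * wherever the buffer fill level is already known, e.g. right next to
 * the existing "streambuf bytes: ... outputbuf space: ..." debug print. */
static size_t min_free_seen = (size_t)-1;

void streambuf_watermark(size_t capacity, size_t used)
{
    size_t free_now = capacity - used;
    if (free_now < min_free_seen) {
        min_free_seen = free_now;
        fprintf(stderr, "streambuf low-water mark: %zu bytes free\n", free_now);
    }
}

If that low-water mark keeps sinking hour after hour even with -b 16384, a
leak becomes the likely culprit; if it levels off, the dropouts were more
likely a timing race on an almost-empty buffer, which the bigger buffer
simply makes much harder to hit.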


