Roman Haefeli wrote:
I've been testing the new netpd-server based on the new
[tcpserver]/[tcsocketserver FUDI] for a while now. It definitely solved
some problems, but it also introduced some new ones.
I found that the most recent version of [tcpserver] performs quite badly
CPU-wise, which has some side effects. In netpd, when a certain number of
users are logged in (let's say 16), the traffic from those clients can
make the netpd-server demand more than the available CPU time. I ran some
tests to check whether all messages come through and whether the messages
delivered by the server are still intact. Under normal circumstances
there is no problem at all, but under heavy load, when the Pd process
demands more than the available CPU time, some messages are corrupted or
lost completely; in the worst case the Pd process segfaults at the moment
a client connects or disconnects. My guess is that this is due to some
buffer under- or overrun between Pd and the TCP stack, but I don't really
know.
Hi Roman,
Did you try using the new [timeout( message? The latest version of
tcpserver defaults to a 1 ms timeout, so if you have a bunch of
disconnected clients, Pd will hang for 1 ms on each of them, which
quickly adds up to more than the audio block time; then Pd starts
thrashing and eventually dies or becomes comatose, as it were.
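To put rough numbers on that (assuming Pd's default 64-sample block size
at 44.1 kHz): one DSP block is 64/44100 ≈ 1.45 ms, so a single stalled
client at the default 1 ms timeout already eats most of a block, and 16
stalled clients can hold up a single broadcast for up to 16 ms.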
I think you need to experiment with different values for the timeout.
Setting it to zero should give the same results as the previous version;
otherwise, maybe try something around 100 instead of the default 1000
(the value is in microseconds).
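For what it's worth, the usual way such a microsecond timeout is applied
is as the timeval passed to select() before each send. The sketch below
is only an illustration of that pattern, not the actual tcpserver.c
(function and variable names are made up); it shows why every stalled
client can cost you the full timeout on every broadcast:

/* Illustrative sketch only, not the real tcpserver source.
 * With N unresponsive clients and timeout_us = 1000 (the 1 ms default),
 * one broadcast can block for up to N milliseconds in total. */
#include <stddef.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>

static int send_with_timeout(int sockfd, const void *buf, size_t len,
                             long timeout_us)
{
    fd_set wfds;
    struct timeval tv;

    FD_ZERO(&wfds);
    FD_SET(sockfd, &wfds);
    tv.tv_sec  = timeout_us / 1000000;
    tv.tv_usec = timeout_us % 1000000;

    /* Wait until the socket is writable or the timeout expires.
       A stalled client costs the caller the full timeout here. */
    if (select(sockfd + 1, NULL, &wfds, NULL, &tv) <= 0)
        return -1; /* not writable in time: skip this client */

    return (int)send(sockfd, buf, len, 0);
}

So (assuming the message takes a plain float argument) something like
[timeout 100( would cut the worst case per client from 1 ms to 0.1 ms,
and [timeout 0( should behave like the old non-blocking check.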
The other way to fix this in the tcpserver source is to make a new
thread for each client, but I'm afraid that will just open another can
of worms/zombies.
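Just to make the thread-per-client idea concrete, here is a minimal
sketch using plain POSIX threads (again not the actual tcpserver code;
all names are invented): each send gets handed to its own detached
thread, so a slow or dead client only blocks its own thread instead of
Pd's main thread.

/* Hedged sketch only: one detached sender thread per client send. */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

typedef struct _client_job {
    int    sockfd;
    size_t len;
    char   buf[1024];
} t_client_job;

static void *client_send_thread(void *arg)
{
    t_client_job *job = (t_client_job *)arg;
    /* send() may block here without holding up the audio thread */
    send(job->sockfd, job->buf, job->len, 0);
    free(job);
    return NULL;
}

static void send_to_client(int sockfd, const char *msg, size_t len)
{
    pthread_t tid;
    t_client_job *job = (t_client_job *)malloc(sizeof(t_client_job));
    if (!job || len > sizeof(job->buf)) { free(job); return; }

    job->sockfd = sockfd;
    job->len = len;
    memcpy(job->buf, msg, len);

    if (pthread_create(&tid, NULL, client_send_thread, job) == 0)
        pthread_detach(tid); /* fire and forget; hence the zombie worry */
    else
        free(job);
}

The copy into a per-job buffer is what keeps the threads independent,
but it also hints at the can of worms: you'd still need to cap the
number of in-flight jobs per client, or a dead client just piles up
threads instead of blocking Pd.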
Martin