i am bringing up an elaborate system of processes connected by zeromq.
i have a definitive metric of overall performance: the lag between
real time and the timestamp of a record (collected in real time) processed
by the "sink" process. i can vary the rate of the input flow.

with low to moderate flows, i maintain a lag of 22 secs; this is what i expect,
as there are a couple of 10-sec buffering steps.

with higher flows, the lag starts increasing without bound.

i can increase the number of processes in various parts of the processing graph,
but how can i effectively figure out what to increase? ordinarily, i would look at
which processes have growing input queues, but 0mq doesn't expose queue lengths.
all i can measure is the memory footprint, which starts increasing, sometimes
alarmingly quickly but mostly steadily. i can't tell whether the memory usage is
from fragmentation, an input queue, or an output queue.
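one workaround (a sketch, not something from andrew's setup): since 0mq won't report queue lengths, you can set a low high-water mark on each socket and treat a non-blocking send that hits the HWM as a backpressure signal. the stage whose EAGAIN count climbs first sits just upstream of the bottleneck. a minimal pyzmq illustration of the mechanism, using a single PUSH/PULL hop over inproc:

```python
# sketch: detect backpressure on a 0mq hop by setting small
# send/receive high-water marks and counting non-blocking sends
# that fail with EAGAIN (zmq.Again in pyzmq).
import zmq

ctx = zmq.Context()

pull = ctx.socket(zmq.PULL)
pull.setsockopt(zmq.RCVHWM, 2)             # small HWM so the queue fills fast
pull.bind("inproc://hop")

push = ctx.socket(zmq.PUSH)
push.setsockopt(zmq.SNDHWM, 2)
push.connect("inproc://hop")

# the PULL side never receives, simulating a stalled downstream stage
sent, blocked = 0, 0
for _ in range(100):
    try:
        push.send(b"record", zmq.NOBLOCK)  # raises zmq.Again once HWM is hit
        sent += 1
    except zmq.Again:
        blocked += 1                       # backpressure: downstream is slow
        break

print(sent, blocked)  # a handful of sends succeed, then we hit the HWM

push.close(linger=0)
pull.close(linger=0)
ctx.term()
```

in a real graph you'd count these Again hits per socket and export the counters, rather than breaking out of the send loop.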

can anyone offer advice here? is there a best practice for this?
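on the best-practice question, one common approach (again only a sketch; 0mq doesn't give you this for free) is to have every stage periodically report its own local lag: now minus the timestamp of the last record it processed. the earliest stage in the graph whose local lag grows with load is the one to replicate. a hypothetical helper, with made-up names:

```python
# sketch: per-stage lag tracking. each worker calls note() as it
# processes a record; lag() reports how far behind real time the
# stage is running. names here are hypothetical, not a 0mq API.
import time

class StageLag:
    def __init__(self, name):
        self.name = name
        self.last_ts = None        # timestamp carried by the last record

    def note(self, record_ts):
        self.last_ts = record_ts   # record timestamps are wall-clock secs

    def lag(self):
        if self.last_ts is None:
            return None            # nothing processed yet
        return time.time() - self.last_ts

# usage: a stage that is keeping up shows a small, stable lag;
# a stage at or behind the bottleneck shows a lag that grows with load.
stage = StageLag("parser")
stage.note(time.time() - 22.0)     # record stamped 22 secs ago
print(round(stage.lag()))          # ~22
```

each stage could publish these numbers on a side PUB socket to a monitor process, which makes the bottleneck stand out the same way growing input queues would.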

                andrew

------------------
Andrew Hume  (best -> Telework) +1 623-551-2845
[email protected]  (Work) +1 973-236-2014
AT&T Labs - Research; member of USENIX and LOPSA

_______________________________________________
zeromq-dev mailing list
[email protected]
http://lists.zeromq.org/mailman/listinfo/zeromq-dev
