this raises the question: "is there something in a non-unix environment that would suffice?"
for my case, all major data flows go thru various "portal"s where i precisely measure the messages in and out (and thus know the exact queue length). except for these portals, all data flows have a fairly tight leash (in terms of HWM). i can then examine progress by displaying two complementary views: one measures the flow in messages/min and the other measures queue length as a % of messages out.
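the portal bookkeeping above is simple enough to sketch. here is a minimal, hypothetical illustration (not my actual code, and not tied to any particular transport): a counter pair per portal, from which both views fall out — queue length is just messages in minus messages out.

```python
class Portal:
    """Hypothetical message-counting wrapper for one 'portal':
    tracks messages in and out, so the exact queue length is
    always msgs_in - msgs_out. A sketch of the bookkeeping only;
    the real app would hook these counters into its send/recv path."""

    def __init__(self, name):
        self.name = name
        self.msgs_in = 0
        self.msgs_out = 0

    def note_in(self, n=1):
        # called once per message entering the portal
        self.msgs_in += n

    def note_out(self, n=1):
        # called once per message leaving the portal
        self.msgs_out += n

    def queue_len(self):
        # exact queue length, since every message is counted at the portal
        return self.msgs_in - self.msgs_out

    def queue_pct_of_out(self):
        # queue length as a percentage of messages out (the queue view)
        if self.msgs_out == 0:
            return 0.0
        return 100.0 * self.queue_len() / self.msgs_out


# example with made-up numbers
p = Portal("digest")
p.note_in(1_912_000)
p.note_out(1_900_000)
print(p.queue_len())                     # 12000
print(round(p.queue_pct_of_out(), 2))    # 0.63
```

sampling each portal's counters once a minute and differencing gives the messages/min flow view for free.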
for example, as my app goes from a state where processing had been suspended back to a normal running state, the queue view looks like
> |->digest|->active|->locate| ->tagger |->tally|->calibrate|->measure
> | blob | casea | lrl |caset loc | tcase | lrc rc | locm
> 05:19| 1.8 | | | | | |
> 05:20| 1.4 | | | | | 0.1 0.1 |
> 05:21| 0.9 | | | 0.1 | | |
> 05:22| 0.4 | | | 0.2 | | |
> 05:23| 0.2 | | | 0.2 | | |
> 05:24| 0.1 | | | 0.1 | | |
> 05:25| | | | 0.1 | | |
> 05:26| | | | 0.1 | | |
> 05:28| | | | 0.1 | | |
and the flow looks like
> |->digest|->active|->locate| ->tagger |->tally | ->calibrate |->measure
> | blob | casea | lrl | caset loc | tcase | lrc rc | locm
> 05:19| 1.912M | 57.61M | 4.359M | 49.58M 479.3K | 729.2K | 3.688M 946.4K | 543.7K
> 05:20| 4.131M | 105.1M | 8.509M | 82.58M 971.6K | 1.134M | 6.694M 1.831M | 964.9K
> 05:21| 5.675M | 150.8M | 12.44M | 80.83M 1.648M | 1.046M | 16.15M 4.433M | 1.510M
> 05:22| 4.460M | 120.4M | 11.80M | 60.79M 1.202M | 1.153M | 10.02M 3.191M | 1.279M
> 05:23| 2.467M | 89.90M | 5.795M | 105.3M 682.3K | 1.645M | 6.125M 1.552M | 698.6K
> 05:24| 1.876M | 51.99M | 4.254M | 60.62M 481.7K | 901.9K | 4.574M 1.237M | 481.6K
> 05:26| 1.819M | 51.20M | 4.227M | 42.82M 481.4K | 721.2K | 4.206M 1.099M | 506.0K
> 05:27| 941.1K | 28.27M | 2.227M | 42.58M 270.7K | 707.3K | 2.217M 780.8K | 266.3K
> 05:28| 864.8K | 24.92M | 1.956M | 28.12M 233.8K | 437.7K | 2.013M 601.5K | 232.3K
> 05:29| 806.6K | 23.07M | 1.705M | 20.25M 185.4K | 289.9K | 1.701M 560.0K | 187.7K
the headings indicate the topology and portal names.
this shows that digest was processing 4-5M messages/min while there was a queue
and that the asymptotic input rate is about 800K-1M messages/min.
normally, we like no queues, except for tagger (who uses the queue to do
approximate temporal sorting).
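the approximate temporal sorting that tagger does with its queue can be sketched like so (a hypothetical illustration, not the actual tagger code): hold arriving messages in a small min-heap keyed on timestamp, and release the oldest once the buffer exceeds some depth. messages that arrive out of order by less than the buffer window come out in timestamp order.

```python
import heapq

def approx_temporal_sort(messages, window=4):
    """Sketch of sort-by-buffering: messages are (timestamp, payload)
    pairs. Keep up to `window` of them in a min-heap; emitting the
    smallest timestamp whenever the heap overflows reorders anything
    that arrived out of order within the window."""
    heap, out = [], []
    for ts, payload in messages:
        heapq.heappush(heap, (ts, payload))
        if len(heap) > window:
            out.append(heapq.heappop(heap))
    while heap:                      # drain the remainder in order
        out.append(heapq.heappop(heap))
    return out

# arrivals out of order by at most 2 positions
msgs = [(3, "c"), (1, "a"), (2, "b"), (5, "e"), (4, "d")]
print(approx_temporal_sort(msgs, window=2))
# [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd'), (5, 'e')]
```

the queue length in this scheme is exactly the sorting window, which is why a standing queue at tagger is expected rather than a problem.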
i know this isn't graphical really, but it's proved to be a very useful debugging and monitoring tool for the application. actually, make that critical; i don't think i could have debugged or tuned this app without these views.
andrew
On Aug 24, 2012, at 7:19 AM, [email protected] wrote:
> Hi.
> I'm not a great developer in Unix so I'm asking the following questions.
> Are there any tools that can graphically represent multithreading and message-
> shuffling while debugging an application?
> I know gdb and valgrind, but they don't do what I mean.
> I intend a kind of real-time profiler and possibly a kind of wireshark (if
> anyone's familiar with that).
>
> Does anyone know of any such application?
> I use Ubuntu if that is of interest.
>
> Thanks
> Claudio
> _______________________________________________
> zeromq-dev mailing list
> [email protected]
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
------------------
Andrew Hume (best -> Telework) +1 623-551-2845
[email protected] (Work) +1 973-236-2014
AT&T Labs - Research; member of USENIX and LOPSA