On 10/17/10 12:19 PM, Sebastian Hahn wrote:
> On Oct 15, 2010, at 4:13 AM, Mike Perry wrote:
>> Thus spake Karsten Loesing ([email protected]):
>>
>>> - Do we want to keep the #1919 Torperf runs running or migrate them
>>> to some other VM (that has enough memory)? What do we expect to
>>> learn from keeping them running or migrating them that we didn't
>>> learn from the first week or two? Instead of keeping them running
>>> we could also make a PDF report and put it on
>>> metrics.tpo/papers.html.
>>
>> I think this is very important to keep running, and that we should
>> think about adding new runs based on the ratios of measured
>> consensus bandwidth to published descriptor bandwidth. Guards with
>> high ratios for this value have been observed by the bandwidth
>> authorities as having lots of slack capacity, whereas Guards with
>> low ratios would be overloaded.
>>
>> I would love to have all of these data streams available for
>> comparison when various events and perf tweaks change the network.
>> In fact, I would love it if we could have the following 5 torperf
>> runs logging continuously and all overlaid on the main Torperf
>> metrics graph:
>>
>> 1. Fastest 3 guards by network status
>> 2. Fastest 3 guards by ratio of ns_bw/desc_bw
>> 3. EntryGuards=0 (default current torperf)
>> 4. Slowest 3 guards by network status
>> 5. Slowest 3 guards by ratio of ns_bw/desc_bw
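
Before I get to the general question: for runs 2 and 5, here's a rough
sketch of how one could rank guards by the ns_bw/desc_bw ratio from a
client's cached-consensus and cached-descriptors files. This is only
my guess at what you mean, nothing we run anywhere yet; I'm using the
observed bandwidth from the descriptor's bandwidth line, though
min(average, observed) might be the better denominator:

#!/usr/bin/env python
# Rough sketch, not part of the #1919 script: rank relays with the
# Guard flag by the ratio of consensus bandwidth (ns_bw) to the
# observed bandwidth from their server descriptors (desc_bw), using
# the cached-consensus and cached-descriptors files from a Tor
# client's data directory.

import base64
import binascii

def parse_consensus(path):
    """Return fingerprint -> (nickname, consensus bw in KB/s, flags)."""
    entries = {}
    nick, fp, flags = None, None, []
    with open(path) as consensus_file:
        for line in consensus_file:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "r" and len(parts) >= 3:
                nick = parts[1]
                # The identity in the "r" line is unpadded base64;
                # convert it to the hex fingerprint that server
                # descriptors use.
                fp = binascii.hexlify(base64.b64decode(
                    parts[2] + "=")).decode("ascii").upper()
            elif parts[0] == "s":
                flags = parts[1:]
            elif parts[0] == "w" and fp is not None:
                for keyval in parts[1:]:
                    if keyval.startswith("Bandwidth="):
                        entries[fp] = (nick, int(keyval.split("=")[1]),
                                       flags)
                nick, fp, flags = None, None, []
    return entries

def parse_descriptors(path):
    """Return fingerprint -> observed bandwidth in bytes/s."""
    observed = {}
    fp, obs = None, None
    with open(path) as descriptor_file:
        for line in descriptor_file:
            parts = line.split()
            if parts and parts[0] == "opt":  # older Tors prefix lines
                parts = parts[1:]
            if not parts:
                continue
            if parts[0] == "router":
                if fp and obs:
                    observed[fp] = obs
                fp, obs = None, None
            elif parts[0] == "fingerprint":
                fp = "".join(parts[1:])
            elif parts[0] == "bandwidth" and len(parts) >= 4:
                obs = int(parts[3])
    if fp and obs:
        observed[fp] = obs
    return observed

def guards_by_ratio(consensus_path, descriptors_path):
    """Return guards sorted by ns_bw/desc_bw, highest ratio first."""
    cons = parse_consensus(consensus_path)
    desc = parse_descriptors(descriptors_path)
    ratios = []
    for fp, (nick, ns_bw, flags) in cons.items():
        if "Guard" not in flags or not desc.get(fp):
            continue
        # The units differ (KB/s vs. bytes/s), but a constant factor
        # doesn't change the ordering, which is all we need to pick
        # the fastest/slowest 3.
        ratios.append((float(ns_bw) / desc[fp], nick, fp))
    ratios.sort(reverse=True)
    return ratios

if __name__ == "__main__":
    ranked = guards_by_ratio("cached-consensus", "cached-descriptors")
    print("fastest 3: %s" % [nick for ratio, nick, fp in ranked[:3]])
    print("slowest 3: %s" % [nick for ratio, nick, fp in ranked[-3:]])

If that's roughly what you have in mind, the open question is how to
feed the resulting fingerprints into the tor clients of the additional
runs.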
Okay, I think we can continue running the 3 or 5 Torperf runs with
modified guard node selection algorithms in the future if we think the
data is useful. However, note that

a) the 3 Torperf runs for #1919 run on some machine which is, AFAIK,
   not one of our VMs and which may or may not host the 3 or even 5
   Torperf runs in the future,

b) I have no idea how to add 2 more Torperf runs with the guard
   selection algorithms you stated, and

c) we're not using the output data for anything yet.

Mike and Sebastian, any ideas about a) and b)? I think I can help with
c) once I have the data.

>> Is it hard to keep all of these running and logging for some reason?
>> Does the 1919 script take up a lot of RAM?
>
> The problem wasn't that the script was taking a lot of RAM, but
> rather that each torperf instance comes with its own tor instance.
> Spinning up another 9 clients caused the memory issues.

I think we can ask for more memory for the VM running this.

>> I can make these and any other changes to the 1919 script to help
>> this along.
>
> So far it appears the script is doing fine.

Does that apply to the additional 2 Torperf runs, too?

Best,
--Karsten
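
P.S. Regarding b): the one part I can guess at is the torrc for
pinning one of the additional clients to three given guards, along
these lines (untested, option names from memory, fingerprints and
paths are placeholders):

  SocksPort 9060
  DataDirectory /path/to/torperf-fast-ratio-guards
  UseEntryGuards 1
  EntryNodes $FPR1,$FPR2,$FPR3
  StrictEntryNodes 1
  Log notice file /path/to/torperf-fast-ratio-guards/notice.log

What I don't know is how to wire such a client into the Torperf
scripts and how to keep the EntryNodes list updated when the ranking
changes.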
