Marc E. Fiuczynski wrote:
Hi Shailabh,


I applied this on the PlanetLab CVS kernel and ran aiostress on a 3GHz
P4 - it's running as expected, with the average sectors-served values
tracking the shares set.


Thank you for testing this with our kernel.  Will give this a shot ASAP.


Two problems seen so far:
- Running a simple dd doesn't show up in the I/O controller's stats
for a class even though the PIDs show up in members. Need to find
out why...

- Setting very low limit values (< 50 or so) doesn't help - the app gets
a minimum of 20-30 sectors per second anyway. The aggressiveness of
regulation by the scheduler could be increased, but I'm not sure that
is desirable.



What does 20-30 sectors per second translate to in terms of disk bandwidth?
A sector is 512 bytes, right? That implies that a class gets a minimum of
10-15 KB per second?

Correct.
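
For reference, the arithmetic is just the sector rate times 512 bytes; a quick
sketch in plain Python (nothing CKRM-specific assumed):

    SECTOR_BYTES = 512

    def sectors_to_kb_per_sec(sectors_per_sec):
        # 1 KB = 1024 bytes here
        return sectors_per_sec * SECTOR_BYTES / 1024.0

    print(sectors_to_kb_per_sec(20))   # 10.0 KB/s
    print(sectors_to_kb_per_sec(30))   # 15.0 KB/s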

Assuming seeks on the disk do not go nuts due to
the scheduler, what is the avg/max number of sectors per second one can
expect from a reasonably good disk these days?

On the system where we have PlanetLab installed, hdparm shows about 55 MB/s
for unbuffered reads and around 850 MB/s for buffer-cache reads. This is a
fairly good IDE disk and should be representative of most newer PL nodes.


Assuming unbuffered I/O only, 55 MB/s is 112,640 sectors per second for the system as a whole. Of course, seeks will reduce that, and page-cache reads (which are likely to be the common case) will increase the effective rate seen by apps (the scheduler only regulates disk bandwidth). Even assuming a 50% degradation due to seeks, a 30 sectors/sec minimum still leaves room for roughly 1,900 classes.
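
To make that back-of-the-envelope math explicit - the 50% seek penalty and the
30 sectors/sec floor are rough assumptions taken from the tests above, not
measurements of the scheduler itself:

    SECTOR_BYTES = 512

    disk_bw = 55 * 1024 * 1024               # ~55 MB/s, hdparm unbuffered-read figure
    total_sectors = disk_bw // SECTOR_BYTES  # 112,640 sectors/sec for the whole disk
    usable = total_sectors * 0.5             # assume half is lost to seeks
    per_class_floor = 30                     # observed per-class minimum, sectors/sec

    print(total_sectors)                     # 112640
    print(usable)                            # 56320.0
    print(usable / per_class_floor)          # ~1877, i.e. roughly 1,900 classes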

I'm in the process of trying to exercise a large number of classes... will keep the list posted.

-- Shailabh

