Lyvim Xaphir wrote:

>On Sun, 2002-05-19 at 20:53, civileme wrote:
>
>Looks like you lobbed a grenade into this thread, Civ.  ;)
>
>Make that two grenades; I forgot about the other post....  lol.
>
>I've got a question at the end.
>
>
>>Actually, we are talking about data rates very differently....
>>
>>From/To the attached electronics and disk, the data is _pure serial_
>>
>>Under any one head there are
>>258048 bits of usable data in a rotation (63 sectors x 512 bytes x 8; 
>>that's the logical geometry--real zoned tracks pack more).  At 7200 rpm 
>>a rotation takes 1/120th of a second, or about 8.33 ms, so if an entire 
>>track is read, we have
>>
>>about 31 MBITS/s (that's Megabits/sec using 10^6) or about 29.5 
>>Mbits/sec (using 2^20)
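>>
>>Here's that arithmetic as a quick Python scribble (everything is 
>>driven by the 63-sector logical track, which is the assumption):
>>
>>    sectors_per_track = 63    # logical geometry, per the CHS figures
>>    bytes_per_sector = 512
>>    rpm = 7200
>>
>>    bits_per_track = sectors_per_track * bytes_per_sector * 8  # 258048
>>    rotations_per_sec = rpm / 60.0                             # 120
>>    rotation_ms = 1000.0 / rotations_per_sec                   # ~8.33
>>    platter_rate = bits_per_track * rotations_per_sec
>>    print(rotation_ms, platter_rate / 1e6)  # -> 8.33 ms, ~31 Mbits/s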
>>
>>Now the controller is capable of passing data at what?  How is that 
>>rated?  Is it bytes/sec, words/sec, bits/sec, or just a clock rate that 
>>is advertised?  The advertising is usually careful NOT to say.  In fact 
>>it turns out to be megabytes per second.
>>
>>Now we have to look at the PC IDE bus.  The AT Attachment with Packet 
>>Interface Revision 6 draft says there are 16 data bits on the cable.  
>>What does that mean in raw bit rate?
>>
>>Hmmm, lessee: there are many sorts of messages crossing that bus, not 
>>all of them data, and to every 512 bytes there is appended a 57-byte 
>>CRC packet, so let's agree to knock off about 10% for the necessary 
>>cruft that preserves data integrity--it's a little higher than that, 
>>but so what?
>>
>>133 MB/s (ATA/133, ATAPI-6) less 10% is about 120 MB/s x 8 bits = 0.96 
>>GBITS/sec (or 0.89 Gbits/sec using 2^30)
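>>
>>Same scribble for the bus side (assuming the 133 figure really is 
>>megabytes per second, and assuming our 10% overhead estimate):
>>
>>    bus_rate = 133e6       # ATA/133 rating, in bytes per second
>>    overhead = 0.10        # CRC packets and other framing cruft
>>    usable_bits = bus_rate * (1 - overhead) * 8
>>    print(usable_bits / 1e9)   # -> ~0.96 Gbits/sec on the wire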
>>
>>WHOA!!!!  Never mind waiting for 10000rpm IDE disks--the bus can 
>>already transfer data about thirty times faster than it can spin on or 
>>off the disk....
>>
>>But that's OK; we can build up the data for a burst in an onboard 
>>buffer (or seven), so that many tasks can be happening apparently 
>>simultaneously.
>>
>>Now the PCI bus is 32 bits wide and the extended PCI bus is 64....
>>
>>Hmmm, 32 bits at the 33 MHz PCI clock is 4 bytes x 33 MHz = 4x33 = 132 
>>megabytes/sec--right at the ATA/133 rate, so a single IDE channel can 
>>use plain PCI to capacity.
>>
>>Sheesh, seems we are at the max that can work with an essentially 
>>unbuffered transfer from memory to disk...  but of course the buffering 
>>is already there for the next component that advertises a speed 
>>increase.
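>>
>>Width times clock, for the record (66 MHz and 64-bit slots exist, but 
>>33 MHz/32-bit is the common case here):
>>
>>    pci32 = 4 * 33e6   # 32-bit PCI @ 33 MHz -> 132 MB/s
>>    pci64 = 8 * 33e6   # 64-bit extended PCI @ 33 MHz -> 264 MB/s
>>    print(pci32 / 1e6, pci64 / 1e6)   # ATA/133 saturates plain PCI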
>>
>>Now look at the fractional-nanosecond timing of the signals spooling 
>>data, and recall that once a cylinder boundary is reached (at 63K for 
>>single-platter, two-head disks) we have to talk about stepping the 
>>heads--and now we are talking milliseconds, a change of six orders of 
>>magnitude in data rate.  That is why a buffer on the disk electronics 
>>is a great idea.
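>>
>>The gap in one line (1 ns and 1 ms here are just the orders of 
>>magnitude being compared, not measured values):
>>
>>    signal_s = 1e-9   # fractional-nanosecond bus signaling
>>    step_s = 1e-3     # head stepping, measured in milliseconds
>>    print(step_s / signal_s)   # -> 1e6: six orders of magnitude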
>>
>>The disk is the slowest component.  RAID0, RAID4 or RAID5 can make a 
>>real difference in the apparent performance of disk transfers by making 
>>stepping a less frequent event (with the right chunk size defined, of 
>>course).
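>>
>>A sketch of the striping, with hypothetical parameters--RAID0 deals 
>>chunks round-robin, so a sequential read keeps switching drives and 
>>each drive steps its heads less often:
>>
>>    def raid0_target(offset_kb, chunk_kb=32, ndrives=2):
>>        """Map a linear array offset to (drive, chunk on that drive)."""
>>        chunk = offset_kb // chunk_kb
>>        return chunk % ndrives, chunk // ndrives
>>
>>    # a 128k sequential read with 32k chunks alternates drives:
>>    print([raid0_target(o) for o in (0, 32, 64, 96)])
>>    # -> [(0, 0), (1, 0), (0, 1), (1, 1)]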
>>
>>Civileme
>>
>
>I'm curious as to whether you have any recommendations for chunk sizes. 
>Here's what I've got set up now:
>
>raiddev       /dev/md0
>raid-level    1
>chunk-size    32k
>persistent-superblock 1
>
>nr-raid-disks 2
>    device    /dev/hde5
>    raid-disk 0
>    device    /dev/hdg5
>    raid-disk 1
>raiddev       /dev/md1
>raid-level    0
>chunk-size    32k
>persistent-superblock 1
>_______________________________________________
>
>I've got two identical IBM Deskstars in place, with the prefetch set at
>4k (8 x 512-byte sectors):
>_______________________________________________
>
>[root@tamriel elx]# hdparm -a /dev/hde
>
>/dev/hde:
> readahead    =  8 (on)
>_______________________________________________
>
>Hdparm gives the following information:
>
>
>[root@tamriel elx]# hdparm -i /dev/hde
>
>/dev/hde:
>
> Model=IBM-DTLA-307030, FwRev=TX4OA60A, SerialNo=YKEYKTYJ069
> Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
> RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=40
> BuffType=DualPortCache, BuffSize=1916kB, MaxMultSect=16, MultSect=16
> CurCHS=16383/16/63, CurSects=-66060037, LBA=yes, LBAsects=60036480
> IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120}
> PIO modes: pio0 pio1 pio2 pio3 pio4
> DMA modes: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5
> AdvancedPM=yes: disabled (255)
> Drive Supports : ATA/ATAPI-5 T13 1321D revision 1 : ATA-2 ATA-3 ATA-4 ATA-5
>_________________________________________________
>
>The question I have for you is, could I better optimize the system with
>chunk sizes other than 32k?
>
>Thanks,
>
>LX
>
>
Remember we have a 63-sector track with 512-byte sectors: 31.5k per 
track.

And we usually have two of them (one per head): 31.5 + 31.5 = 63k per 
cylinder.

So a 64k chunk is a bad idea if you have single-platter drives--every 
chunk forces a head step before a drive switch.

And a 32k chunk means the drives switch once before the heads step--
cutting stepping events in half, and stepping is more than 90% of the 
time drives spend accessing.

Hmmm, 21k sounds even better, 'cause you switch, switch, switch, 
switch, switch, switch, and only then does a read make the heads 
step...  It could destroy threading of drive requests, though.
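
Here's the trade counted out in a little Python model (the 63k cylinder 
from above is the assumption; it just counts drive switches between 
head steps on a long sequential read):

    def switches_per_step(chunk_kb, cyl_kb=63.0):
        # each drive delivers cyl_kb of its own data before its heads
        # must step; every chunk boundary is a switch to the next drive
        return cyl_kb / chunk_kb

    for c in (16, 21, 32, 63, 64):
        print(c, 'k chunk ->', switches_per_step(c), 'switches/step')
    # 64k -> <1: a head step before the drives even switch
    # 32k -> ~2: stepping events cut in half
    # 21k -> 3:  switch, switch, switch, then step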

I think 32k is a good starting number, and 63k is another good one, but 
it is an empirical thing depending on the mix of large and small files. 
If it is program launching, as in /usr, 63k gets the nod from me--big 
files spread over many drives in RAID 4 or 5, and stepping events can 
be anticipated as the first switch is made to the next drive.  21k 
would be wonderful for a series of tiny files like what my filesystem 
sledgehammer generates.

Civileme



