Well, after several days of testing, studying, and troubleshooting I finally 
have some new results to report. One big obstacle that took me a few days 
to figure out was that Red Hat would not boot with a second (mirror) drive in
the computer, even though Debian and Mandrake booted fine. Since Red Hat was 
using grub and the other two linuxen were using lilo, I thought switching RH 
to lilo too would help. But it didn't. It turns out that RH uses a "LABEL"
variable to store/pass the root partition in both the bootloader config and fstab.
When I took this out and manually entered /dev/hda in both places it worked
fine. For some reason it was confusing the LABEL from /dev/hdb with the 
LABEL from /dev/hda. I don't understand why they even use this LABEL
variable, but it seems to work just fine (better, actually) without it. I will
spare you the details of all the other problems I encountered.
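My guess about the mechanism (an assumption, not something I've confirmed in RH's boot scripts): dd copies the ext2 superblock verbatim, filesystem label included, so after the mirror both drives carry identical labels and mounting by LABEL becomes ambiguous. You can reproduce the effect without real drives using file-backed ext2 images as stand-ins for /dev/hda1 and /dev/hdb1 (mke2fs -F works on plain files):

```shell
# Build a small ext2 image labeled "/" standing in for the original drive
dd if=/dev/zero of=/tmp/hda1.img bs=1k count=1024 2>/dev/null
mke2fs -q -F -L / /tmp/hda1.img

# Raw copy, as in the mirror -- the label rides along in the superblock
dd if=/tmp/hda1.img of=/tmp/hdb1.img bs=4k 2>/dev/null
e2label /tmp/hda1.img            # prints: /
e2label /tmp/hdb1.img            # prints: /  -- the duplicate label

# Giving the copy a distinct label removes the ambiguity
e2label /tmp/hdb1.img mirror
rm -f /tmp/hda1.img /tmp/hdb1.img
```

On the real system the same e2label commands against /dev/hdb's partitions would presumably have let the LABEL= entries keep working.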

> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf Of
> Bob Miller
> Sent: Friday, December 13, 2002 15:49
> To: [EMAIL PROTECTED]
> Subject: Re: [Eug-lug]Mirroring a drive (all partitions and the MBR)
> 
> Dexter Graphic wrote:
> 
> > I will try it again tonight before bedtime. I figure it will take several 
> > hours to duplicate my 40 GB drive. Which block size do you think I should 
> > use to improve the speed? Is bigger better? 
> 
> It should take 1-2 hours, if you have all the right hdparm settings.

Using Debian's default settings (copied below) it took 11.5 hours. 

$ hdparm /dev/hda

/dev/hda:
 multcount    =  0 (off)
 I/O support  =  0 (default 16-bit)
 unmaskirq    =  0 (off)
 using_dma    =  0 (off)
 keepsettings =  0 (off)
 nowerr       =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 4865/255/63, sectors = 78165360, start = 0
 busstate     =  1 (on)

I checked Mandrake 8.2 and Redhat 8.0, and they both had all the correct
options turned on by default (except for 32-bit I/O, which made no difference
in speed whether it was on or off; the Debian hdparm readme says the same
thing, that 32-bit I/O usually does nothing for drive performance).

By the way, how do I make these hdparm settings "stick" so that I don't 
have to enter them manually each time I reboot using the command
"hdparm -m16 -c1 -d1 -a8 /dev/hda"?
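(Answering my own question after reading around: the README.Debian mentioned below covers this. Newer Debian hdparm packages read a config file at boot; the fragment here is a sketch of that approach and the key names may differ between package versions, so check the file's comments:)

```
# /etc/hdparm.conf -- applied by the hdparm init script at boot
/dev/hda {
    mult_sect_io = 16      # -m16
    io32_support = 1       # -c1
    dma = on               # -d1
    read_ahead_sect = 8    # -a8
}
```

On older packages without that file, the usual fallback is to append the one-line hdparm command to a local boot script that runs at startup.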

> For my disk, these are the right settings.
> 
> > tivopc oss/lm_sensors mips> # hdparm /dev/hdb
> > 
> > /dev/hdb:
> >  multcount    = 16 (on)
> >  I/O support  =  1 (32-bit)
> >  unmaskirq    =  0 (off)
> >  using_dma    =  1 (on)
> >  keepsettings =  0 (off)
> >  nowerr       =  0 (off)
> >  readonly     =  0 (off)
> >  readahead    =  8 (on)
> >  geometry     = 79780/16/63, sectors = 80418240, start = 0
> 
> You probably want the same, but maybe a different multcount.
> On Debian, see /usr/share/doc/hdparm/README.Debian for details.

I checked my drive with "hdparm -I /dev/hda" (drive identification) and saw 
that it too supports a multcount of 16. When I made all the changes, my drive
throughput increased roughly 8.5-fold (3.76 to 32.00 MB/sec). Here are the 
test results:

$ hdparm -tT /dev/hda    (test cache and disk throughput)

With Debian's default hdparms:

/dev/hda:
 Timing buffer-cache reads:   128 MB in  1.02 seconds =125.49 MB/sec
 Timing buffered disk reads:  64 MB in 17.03 seconds =  3.76 MB/sec

With optimized hdparms:

/dev/hda:
 Timing buffer-cache reads:   128 MB in  1.03 seconds =124.27 MB/sec
 Timing buffered disk reads:  64 MB in  2.00 seconds = 32.00 MB/sec

I then tried changing the IDE bus speed from the default 33 MHz to 66 MHz, but
it made no appreciable difference in performance. Here are the results
from copying a 75MB partition: 

With kernel parameter idebus=33

[root@free1r root]# time dd if=/dev/hda3 of=/dev/hdb3 bs=4k
18073+0 records in
18073+0 records out

real    0m3.321s
user    0m0.039s
sys     0m1.211s

With kernel parameter idebus=66

[root@free1r root]# time dd if=/dev/hda3 of=/dev/hdb3 bs=4k
18073+0 records in
18073+0 records out

real    0m3.308s
user    0m0.037s
sys     0m1.189s

Note: in all cases I am showing a representative result from multiple runs.
(I ran at least three tests and threw out the highest and lowest results.)


> As for blocksize, medium sized is better.  If the block size is too
> small, there's too much overhead.  If the block size is too big, only
> one disk is active at a time.  Most disks have a 2 Mbyte cache, and
> I'd suggest a block size half that big: 1 Mbyte.  That way, /dev/hda
> should be able to run at full speed filling its cache, and /dev/hdb
> should be able to accept full write requests into its cache.
> It should/might keep both disks running at close to full speed.

With optimized drive parameters, I tried the 75MB partition copy timing
test using various block sizes from 1k to 16M. There was NO difference 
whatsoever; it took 4 seconds in every case. Only a block size of 512 
bytes performed differently: it took 12 seconds. (Here are the block sizes 
I used: 512 1k 2k 4k 8k 16k 32k 64k 128k 256k 512k 1M 2M 4M 8M 16M.)
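For anyone who wants to repeat the sweep, it scripts easily. This sketch times copies of a scratch file instead of real partitions (file names and sizes are illustrative; substitute the partition devices to reproduce my numbers):

```shell
#!/bin/sh
# Build a small scratch source file (8 MB so the sweep finishes quickly)
dd if=/dev/zero of=/tmp/src.img bs=1M count=8 2>/dev/null

# Time a copy at each block size; dd's progress chatter is discarded
for bs in 512 4k 64k 1M; do
    echo "bs=$bs"
    time dd if=/tmp/src.img of=/tmp/dst.img bs=$bs 2>/dev/null
done

# Sanity check: the copy is byte-identical regardless of block size
cmp /tmp/src.img /tmp/dst.img && echo "copies match"
rm -f /tmp/src.img /tmp/dst.img
```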


> > Another thing that I am concerned about is how dd handles the occurrence 
> > of unusable (marked bad) sectors on the target drive? Does this potential
> > shifting of block locations mess up the partition tables or file systems?
> 
> Bad sectors are remapped in firmware.
> 
> > And wouldn't it decreases the total capacity of the drive by a few blocks?
> 
> Drives are a little bigger than advertised to compensate.

If I have two partitions of, say, 10,000 blocks, and one of them develops a
bad spot which the firmware finds and remaps, are you saying that they will 
still both have 10,000 usable blocks because the replacement block is pulled 
into service from some "spare" area of the drive?


> > Does Linux support SMART?
> 
> Um, I dunno.  What's SMART?

S.M.A.R.T. stands for Self-Monitoring, Analysis and Reporting Technology.

from www.wdc.com/library/dtlf-c.pdf

  In S.M.A.R.T. technology's brief history, it has progressed through 
  three versions.

  S.M.A.R.T. I provides failure prediction by monitoring certain online 
  hard drive activities.

  S.M.A.R.T. II improves failure prediction by adding an automatic 
  off-line read scan to monitor additional operations.

  S.M.A.R.T. III not only monitors hard drive activities but adds 
  failure prevention by attempting to detect and repair sector errors.

  Western Digital has implemented all three versions of the S.M.A.R.T. 
  reliability monitor on its hard drives.

It sounds like it has something to do with mapping bad sectors, but I
don't really know how it works or whether it runs independently of the OS.
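One lead (an assumption on my part, I haven't tried it yet): there is a smartmontools package for Linux whose smartctl utility talks to the drive's SMART interface from userspace. Something like this should dump what the firmware knows, though flag spellings vary between versions, so check smartctl(8):

```shell
# Query the drive's SMART identity, health status, and attribute table
# (assumes the smartmontools package is installed; run as root)
smartctl -a /dev/hda
```

Relevant to the bad-sector question above: the attribute table includes a reallocated-sector count, which is exactly the firmware remapping Bob described.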

Dex

> -- 
> Bob Miller                              K<bob>
> kbobsoft software consulting
> http://kbobsoft.com                     [EMAIL PROTECTED]
_______________________________________________
Eug-LUG mailing list
[EMAIL PROTECTED]
http://mailman.efn.org/cgi-bin/listinfo/eug-lug
