I've recently been able to procure a hardware RAID enclosure for a news
server, and wanted to move the current spool (located on a software RAID5
array) to the new array.  Seeing the horrid performance of copying between
the two (~1 GB every 30 minutes), I thought it was due to the 'cp -a' I was
using to copy the data.

I installed dump/restore, and found that they wouldn't work at all.  (/mnt1
is the software RAID array mounted read-only.  It didn't work in a
dump | restore pipeline either, so the examples below just try to dump to
/dev/null ...)
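
(For reference, by "dump | restore mode" I mean the standard idiom below;
/mnt2 is only a stand-in for the destination array's mount point:

    dump 0f - /mnt1 | (cd /mnt2 && restore rf -)

)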

# ------- Start ------- #
[news ~]$ dump 0sdbf 100000 100000 128 /dev/null /dev/md0
  DUMP: Date of this level 0 dump: Mon Mar 22 19:25:36 1999
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/sda1 (/) to /dev/null
  DUMP: mapping (Pass I) [regular files]
/dev/sda1: Ext2 inode is not a directory while mapping files in dev/md0
[news ~]$ sudo dump 0sdbf 100000 100000 128 /dev/null /mnt1
  DUMP: Date of this level 0 dump: Mon Mar 22 19:25:40 1999
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/sda1 (/) to /dev/null
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 117 tape blocks on 0.00 tape(s).
  DUMP: dumping (Pass III) [directories]
  DUMP: dumping (Pass IV) [regular files]
  DUMP: DUMP: 201 tape blocks on 1 volumes(s)
  DUMP: Closing /dev/null
  DUMP: DUMP IS DONE
[news ~]$ cd /mnt1
[news /mnt1]$ dump 0sdbf 100000 100000 128 /dev/null .
  DUMP: Date of this level 0 dump: Mon Mar 22 19:25:51 1999
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping . to /dev/null
.: Attempt to read block from filesystem resulted in short read while opening 
filesystem
# ------- End ------- #
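
(A raw read of the md device would at least show whether the short read
happens at the block layer or only inside dump; untested sketch:

    dd if=/dev/md0 of=/dev/null bs=1k count=1000

If that completes without error, the block device itself reads fine.)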

Is this a dump problem, or an md problem?

-- 
Randomly Generated Tagline:
In plumbing, a straight flush is better than a full house!