On Tue, Dec 05, 2006 at 10:48:18AM -0500, B. Cook wrote:

> Hello all,
> I've got a dying drive on my hands.. and I know I found a doc/guide on 
> the handbook before regarding this..
> something like..
> # tar cf - --one-file-system -C /var . | tar xpvf - -C /mnt/var
> where the new drive is fsck'd and mounted at /mnt ..
> or something else..
> where is that?
> or how would I do that?
> I just want to take this 20G drive and copy it to a 30G drive..
> what would be the best way?

Build file systems on the new drive in sizes to suit you.
Make sure you make the new drive bootable if the old one is used that way.
That involves fdisk/bsdlabel/newfs, or you can do it with sysinstall,
which runs fdisk, bsdlabel and newfs for you.
Since your new disk is a little larger than the old one, decide where 
you want to apply the extra space.   If root and /usr are doing fine 
the way they are, I would suggest putting it all in whatever large
catch-all partition you have - often mounted as /home.
Mount the new partitions somewhere meaningful; that means using mkdir
to make the mount points and mount to mount the partitions.
Use dump/restore to transfer things.

  You have installed the new disk so it looks like /dev/ad1 
    (the old dying one is /dev/ad0)
  dd if=/dev/zero of=/dev/ad1 bs=512 count=32  (This makes sure the new
                                                drive is clean and can 
                                                usually be omitted)
  fdisk -BI ad1                                 makes 1 FreeBSD slice
                                                and writes the MBR
  bsdlabel -w -B ad1s1                          makes the slice bootable
  bsdlabel -e ad1s1                             edit the partition table
                                                for the sizes you want.
Sample edited partition table:
  8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a:   524288        0    4.2BSD     2048 16384 32776 
  b:  2097152        *      swap                    
  c: 58510540        0    unused        0     0         # "raw" part, don't edit
  d:  1048576        *    4.2BSD     2048 16384     8 
  e:  4194304        *    4.2BSD     2048 16384 28552 
  f:  6291456        *    4.2BSD     2048 16384 28552 
  g:        *        *    4.2BSD     2048 16384 28552

Ignore the stuff above the '8 partitions:' line, i.e. don't change it.
Also, do not change the c: numbers. (I made up the size for this example.)

The asterisks in the offset field tell bsdlabel to do the calculation.
The asterisk in the last size field (g:) tells it to put all the
remaining space in that partition.

This gives the following sizes:

  a:   256   MB      (for root (/))
  b:  1024   MB      (for swap)
  c:  whole disk identifier
  d:   512   MB      (for /tmp)
  e:  2048   MB      (for /usr)
  f:  3072   MB      (for /var)
  g: 21657   MB (21 GB)  for that large catch-all, /home 
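
Those MB figures follow directly from the sector counts in the label:
bsdlabel sizes are in 512-byte sectors, so MB = sectors / 2048.  A
quick check of the arithmetic:

```shell
# bsdlabel sizes are in 512-byte sectors, so MB = sectors / 2048
echo $(( 524288 / 2048 ))    # a: 256 MB (root)
echo $(( 2097152 / 2048 ))   # b: 1024 MB (swap)
echo $(( 4194304 / 2048 ))   # e: 2048 MB (/usr)
echo $(( 6291456 / 2048 ))   # f: 3072 MB (/var)
```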
That may seem large for /usr, but will easily be used up if you 
install all source and do some major builds and install a lot of
ports.  In fact, you may want to move /usr/ports to /home and
make a symlink before you start building ports if you plan to
do some big ones like openoffice.   The 3 GB in /var gives room
to run a small database of some sort.  If you do a bigger one
you will want a lot more - maybe additional disk.   But for this
example, these sizes are nice.   In addition to catch-all, I am
presuming you put user home directories in /home.
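
The /usr/ports relocation mentioned above is just a directory plus a
symlink.  Here is a sketch demonstrated under a scratch prefix ($root)
so it can be tried without touching the real system; drop the prefix
(after moving aside any existing ports tree) to do it for real:

```shell
# Demonstrate the /home/ports + symlink layout under a scratch prefix
# so nothing on the real system is touched.
root=$(mktemp -d)
mkdir -p "$root/home/ports" "$root/usr"
ln -s "$root/home/ports" "$root/usr/ports"
ls -ld "$root/usr/ports"     # shows the symlink pointing into /home
```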

I am presuming my example matches the partition structure you have
on the old disk.  It might be different, and so you might prefer to
follow that with adjusted sizes rather than the same partitions I
lay out here.

So, save and end the edit session for the partition table, which
causes bsdlabel to write it to the disk label.   Now that the
partitions are created, all you need to do is turn them into
file systems using newfs.

Run a newfs on each partition except the one for swap.
  newfs /dev/ad1s1a           
That would be /dev/ad1s1a, /dev/ad1s1d, /dev/ad1s1e, /dev/ad1s1f and /dev/ad1s1g.
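
That pass over the five data partitions can be written as a loop
(b: is swap and c: is the raw slice, so both are skipped).  The
sketch below only echoes the commands so nothing is touched until
you remove the echo:

```shell
# Echo the newfs command for each data partition on the new disk.
# Remove 'echo' to actually create the file systems.
for p in a d e f g; do
  echo newfs /dev/ad1s1$p
done
```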

All the fdisk, bsdlabel and newfs stuff can be done with sysinstall
rather than by running each step by hand, but really it is about as
easy to do it step by step, and then you see each part and understand
it a little better.

Create mount points:
  mkdir /newroot
  mkdir /newusr
  mkdir /newvar
  mkdir /newhome

Probably don't need to bother with /tmp, since it shouldn't need
to be copied.  But you can if you like.

Mount the partitions:
  mount /dev/ad1s1a /newroot
  mount /dev/ad1s1e /newusr
  mount /dev/ad1s1f /newvar
  mount /dev/ad1s1g /newhome
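
The mkdir and mount steps above can be folded into one loop if you
prefer; again the commands are echoed here rather than run, so the
sketch is safe to paste and inspect first:

```shell
# Pair each partition letter with its mount-point name and echo the
# mkdir/mount commands from the steps above (remove 'echo' to run them).
for pair in a:root e:usr f:var g:home; do
  part=${pair%%:*}
  name=${pair##*:}
  echo mkdir /new$name
  echo mount /dev/ad1s1$part /new$name
done
```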

Now use dump/restore to move stuff over.
  cd /newroot
  dump 0af - / | restore -rf -
  cd /newusr
  dump 0af - /usr | restore -rf -
  cd /newvar
  dump 0af - /var | restore -rf -
  cd /newhome
  dump 0af - /home | restore -rf -
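
Those four dump/restore passes can also be collapsed into a loop; this
sketch just prints each command line so you can review it before
running anything:

```shell
# Echo the four dump/restore passes above; the commands are printed
# rather than executed so the sketch is safe to try first.
for fs in root usr var home; do
  if [ "$fs" = root ]; then src=/; else src=/$fs; fi
  echo "cd /new$fs && dump 0af - $src | restore -rf -"
done
```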

If any of the restores asks about setting owner/permissions on '.',
answer 'n'.  I don't think restore -r does that, but just in case.

Once that is all finished, change the drives around so the new drive
is in the first position and boot it up.   It should work.

Those extra (now unused) mount points /newroot, /newusr, /newvar and
/newhome will be sitting there in root; they can be removed with rmdir
or just left there and ignored.

Have fun,


> (and I need to talk to someone over the phone about this.. it's his box 
> and he has no inet)
> Thanks in advance.
> _______________________________________________
> freebsd-questions@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "[EMAIL PROTECTED]"