Grant Peel wrote:

1. Can I use a FreeBSD bootable installation disk (6.4) made from an
ISO image, to boot my PC and make the filesystems on the 36GB drive,
without actually installing FreeBSD? (Please feel free to tell me
exactly how :-)).

Except that site seems to be down at the moment.

Yes, you can use disk1 from the Installation set as a live filesystem, and
from there you can manipulate other hard drives attached to the machine.
However, expect to do everything from the command line and take care to
familiarise yourself with the necessary procedures before starting.
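
To get at that live system: boot from disc1 and, in the sysinstall main
menu, choose 'Fixit', then 'CDROM/DVD'.  You'll be dropped at a Fixit#
shell with the usual tools available, so you can inspect and slice your
36GB drive, e.g. (assuming it shows up as da0):

     Fixit# fdisk da0
     Fixit# bsdlabel da0s1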

2. Once I get that drive to the network center, and restore the dumps
to it, how do I ensure the drive is bootable? (I assume I actually do
that in the previous step).

Well, if you install a minimal FreeBSD system on your 36GB drive using the
installation media, you can test booting from it while it is still in your
PC.  Then you can duplicate the data from the remote server onto the drive
(using dump and restore, or however you prefer), which won't touch the boot
blocks on the disk.  It should 'just work', but take care to keep the disk
devices in /etc/fstab in sync with where the OS finds the drive.  You can
fix an incorrect fstab from the system console in single user mode: it
would be a good idea to write yourself a little cheat sheet on how to do
that before you start.
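
That cheat sheet might look something like this (device names are
illustrative -- substitute your own):

     # mount -u /
     # mount -a -t ufs
     # vi /etc/fstab
     # reboot

'mount -u /' remounts the root filesystem read-write so you can edit
/etc/fstab at all; 'mount -a -t ufs' then mounts whatever else fstab
describes correctly.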

3. Is it possible to skip step one altogether and use the
instructions in the  man pages regarding "Restoring a filesystem" and
making the 'Pristine' filesystem? If so, again, how do I ensure the
disk is bootable?

Of course.  You can take a blank disk to your datacenter and initialise it
entirely from your existing server.  A neat trick would be to set up geom
mirroring between the old and new disks but you really need a disk of the same
size as the original (or larger) to do that properly.
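
Very much a sketch of that gmirror approach -- read gmirror(8) carefully and
test elsewhere first; 'gm0' is just an arbitrary mirror name, and the
debugflags sysctl is needed to let you write metadata to the live boot disk:

     # kldload geom_mirror
     # sysctl kern.geom.debugflags=16
     # gmirror label -v gm0 /dev/da0
     # echo 'geom_mirror_load="YES"' >> /boot/loader.conf

Then edit /etc/fstab to use the /dev/mirror/gm0s1* devices, reboot, and
attach the new disk with 'gmirror insert gm0 /dev/da1'.  Once the mirror
has finished rebuilding, the two disks are identical copies.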

Failing that, here's how to replace an existing disk with a duplicate:

  0) You've backed everything up already, haven't you?

  i) install the new drive into the server. Note: you may need to use
     /boot/loader.conf settings to hard-wire your original disk to appear
     as da0 if your new disk probes before the old one.  Or make sure the
     SCSI ID of the new drive is larger than the ID of the old one -- the
     boot drive is typically installed at SCSI ID 0, so that should happen
     by default.
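
     For example, wiring down the original disk (assumed here to be SCSI
     target 0 on an ahc(4) controller -- adjust to your hardware) so it
     always probes as da0 would look like this in /boot/device.hints or
     /boot/loader.conf:

     hint.scbus.0.at="ahc0"
     hint.da.0.at="scbus0"
     hint.da.0.target="0"
     hint.da.0.unit="0"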
 ii) reboot from the old disk.

iii) identify the device node for the new disk from the messages in
     /var/run/dmesg.boot.  I'm going to assume /dev/da1 (with the original
     system on /dev/da0), but make the appropriate substitutions in the
     following if that is not the case.
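
     For instance, to pull out just the disk probe lines (the output shown
     is purely illustrative):

     # grep -E '^da[0-9]' /var/run/dmesg.boot
     da0 at ahc0 bus 0 target 0 lun 0
     da1 at ahc0 bus 0 target 1 lun 0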

 iv) Two choices here.  You can either do this the hardcore way (branch a)
     or the way of least resistance (branch b)

v-a) Slice the disk, allocating all the space to FreeBSD and making it
     bootable at the same time:

     # fdisk -B -I /dev/da1

vi-a) Write a BSD partition table into the slice, then set up your required
     FreeBSD partitions:

     # bsdlabel -w da1s1 auto
     # bsdlabel -e da1s1

     'bsdlabel -e' will pop you into an editor (vi by default) with a copy
     of the partition table.  Edit this to suit -- feel free to add any
     extra lines to create additional partitions: you've got partitions a
     (traditionally the root), b (swap) and d, e, f, g, h (data partitions)
     to play with.  Save the edited partition table and bsdlabel will write
     it to the disk when you exit the editor.  Don't futz with the 'c'
     partition (traditionally the whole disk) -- 'bsdlabel -w' will already
     have put in the correct values for you.

     Note that bsdlabel understands the 'size' parameter either as sectors
     or, given the appropriate suffix, as kilobytes (K), megabytes (M),
     gigabytes (G) or a percentage of the available space (%), or you can
     just say '*' meaning 'everything left'.  You can also put '*' in the
     'offset' column and bsdlabel will calculate the value for you, laying
     out the partitions on the drive in the order given in the label.
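
     An edited label for a hypothetical layout with root, swap, /var and
     /usr might look something like this (all sizes purely illustrative;
     the 'c' line is left exactly as 'bsdlabel -w' wrote it):

     # /dev/da1s1:
     8 partitions:
     #        size   offset    fstype   [fsize bsize bps/cpg]
       a:       1G        0    4.2BSD
       b:       2G        *      swap
       c: 71132000        0    unused        0     0
       d:       4G        *    4.2BSD
       e:        *        *    4.2BSD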

vii-a) Create file systems on your new partitions -- repeat for each of
     the 4.2BSD type partitions set up in the previous stage:

     # newfs /dev/da1s1a
     # newfs -U /dev/da1s1d
     ... etc. ...

     Traditionally softupdates is not enabled on the root partition,
     although it does no harm to enable it nowadays.  (At one time it
     caused problems updating the system when the root partition was
     reasonably full, but that bug was squashed long ago.)


v-b -- vii-b) Make a backup copy of /etc/fstab.  Fire up sysinstall and
     use the menu items under 'Custom Install' to achieve the desired
     result.  Note: you aren't going to go on to install any of the base
     system, so be sure to choose the options to *W*rite out the settings
     at each stage.  Do the 'Partition' and 'Label' actions (to your *new*
     disk, obviously), then quit from sysinstall.  If necessary, copy back
     the saved version of /etc/fstab.

     The two branches join again:

viii) reboot to single user mode.  (ie. 'shutdown -r now', then catch the
     system while it is coming back up, and hit '4' in the boot menu for
     single user.)  Once you get a single user shell, remount the root
     filesystem read-write:

     # mount -u /

     Then mount your newly created partitions under /mnt -- note: you have to
     give all of the relevant parts in the mount commands as there's nothing
     in /etc/fstab to give the system any hints:

     # mount -t ufs -o rw /dev/da1s1a /mnt
     # mkdir -p /mnt/var
     # mount -t ufs -o rw /dev/da1s1d /mnt/var
     # mkdir -p /mnt/usr
     # mount -t ufs -o rw /dev/da1s1e /mnt/usr
     ... etc ...

     The above is an illustration and will need to be modified to accord
     with the way your system is laid out.  Refer to /etc/fstab for how
     the partitions (a, d, e, f etc.) map onto the various mount points
     (/, /usr, /var, /home etc.) -- you want to create the same layout,
     but using the partitions from the new disk (da1), all transposed to
     the equivalent locations under /mnt.
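
     For instance, an original /etc/fstab along these lines (purely
     illustrative) would translate into exactly the mount commands shown
     above:

     # Device        Mountpoint  FStype  Options  Dump  Pass#
     /dev/da0s1b     none        swap    sw       0     0
     /dev/da0s1a     /           ufs     rw       1     1
     /dev/da0s1d     /var        ufs     rw       2     2
     /dev/da0s1e     /usr        ufs     rw       2     2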

 ix) Now, dump and restore each of the filesystems from the old drive onto
     the equivalent partitions on the new drive.  You can dump the old
     filesystems while they are unmounted -- and in fact this is
     advantageous, as it ensures you definitely will not get bitten by
     files changing on you during the course of the dump.  Stuff on the
     root partition rarely changes much at all, so dumping that while it's
     mounted isn't a problem in practice[*]:

     # dump -0 -a -C 32 -f - /dev/da0s1a | ( cd /mnt ; restore -rf - )
     # dump -0 -a -C 32 -f - /dev/da0s1d | ( cd /mnt/var ; restore -rf - )
     # dump -0 -a -C 32 -f - /dev/da0s1e | ( cd /mnt/usr ; restore -rf - )
     ... etc ...
     mutatis mutandis, as for the mount commands in (viii).

     There will be some extraneous '.restoresymtable' files left lying
     around the duplicated partitions.  These can eventually be deleted as
     described in restore(8).
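
     Once you're confident the copy is good, one way to clean them all up
     (assuming the new partitions are still mounted under /mnt) is:

     # find /mnt -name .restoresymtable -delete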

  x) shut the system down.  Extract the old system disk and keep in a safe
     place in case of emergencies.  Rejumper the new disk to the SCSI ID 
     used by the old disk.  Reassemble the machine. Reboot.

 xi) ... and you should have a perfectly working system duplicated onto the new
     hard drive.  You're done.

This is by no means a trivial operation, and there are plenty of ways to
fat-finger your server to death when using these low-level commands.  It
will help if you can scare up some spare kit to practice on before working
on your live server.  If it all goes a bit pear-shaped, then the backout
path is to replace the original system disk, remove the new disk and
reboot.  If you've managed to scribble all over your live system disk,
then you're going to need the full backups mentioned in (0).  Do you know
how to restore your server from scratch?



[*] See also the -L flag for dump(8).

Dr Matthew J Seaman MA, D.Phil.
7 Priory Courtyard, Flat 3
Ramsgate, Kent, CT11 9PW
