Re: ZFS-only booting on FreeBSD

2011-02-19 Thread Matthew Seaman
On 18/02/2011 15:59, Daniel Staal wrote:
 
 I've been reading over the ZFS-only-boot instructions linked here:
 http://wiki.freebsd.org/ZFS (and further linked from there) and have one
 worry:
 
 Let's say I install a FreeBSD system using a ZFS-only filesystem into a
box with hot-swappable hard drives, configured with some redundancy.  Time
 passes, one of the drives fails, and it is replaced and rebuilt using the
 ZFS tools.  (Possibly on auto, or possibly by just doing a 'zpool
 replace'.)
 
 Is that box still bootable?  (It's still running, but could it *boot*?)

Why wouldn't it be?  The configuration in the Wiki article sets aside a
small freebsd-boot partition on each drive, and the instructions tell
you to install boot blocks as part of that partitioning process.  You
would have to repeat those steps when you install your replacement drive
before adding the new disk into your zpool.
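
Concretely, the replacement steps look something like this -- a sketch
along the lines of the wiki layout, where the device name ada1, the GPT
label disk1, and the pool name zroot are all assumptions:

  gpart create -s gpt ada1                       # new GPT on the blank disk
  gpart add -b 34 -s 128 -t freebsd-boot ada1    # small boot partition
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
  gpart add -t freebsd-zfs -l disk1 ada1         # rest of the disk for ZFS
  zpool replace zroot gpt/disk1                  # only now hand it to ZFS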

So long as the BIOS can read the bootcode from one or other of the drives, and
can then access /boot/zfs/zpool.cache to learn about what zpools you
have, then the system should boot.

 Extend further: If *all* the original drives are replaced (not at the same
 time, obviously) and rebuilt/resilvered using the ZFS utilities, is the
 box still bootable?

Yes, this will still work.  You can even replace all the drives
one-by-one with bigger ones, and it will still work and be bootable (and
give you more space without *requiring* the system be rebooted).
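
The last step in that case -- a sketch; the autoexpand property only
exists in newer ZFS versions, and gpt/disk0 is an assumed label -- is
letting the pool see the new space:

  zpool set autoexpand=on zroot      # if your ZFS version has the property
  zpool replace zroot gpt/disk0      # one disk at a time
  zpool status zroot                 # wait for each resilver to finish
  zpool online -e zroot gpt/disk0    # explicit expansion on older versions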

 If not, what's the minimum needed to support booting from another disk,
 and using the ZFS filesystem for everything else?

This situation is described in the "Boot ZFS system from UFS" article
here: http://wiki.freebsd.org/RootOnZFS/UFSBoot

I use this sort of setup for one system where the zpool has too many
drives in it for the BIOS to cope with; works very well booting from a
USB key.

In fact, while the partitioning layout described in the
http://wiki.freebsd.org/RootOnZFS articles is great for holding the OS
and making it bootable, for using ZFS to manage serious quantities of
disk storage, other strategies might be better.  It would probably be a
good idea to have two zpools: one for the bulk of the space built from
whole disks (i.e. without using gpart or similar partitioning), in
addition to your bootable zroot pool.  Quite apart from wringing the
maximum usable space out of your available disks, this also makes it
much easier to replace failed disks or use hot spares.
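
For example -- a sketch only, with the disk names and the pool name
tank as assumptions:

  zpool create tank raidz2 da2 da3 da4 da5 da6 da7 spare da8
  zpool replace tank da4 da9    # whole-disk swap, no gpart or bootcode steps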

Cheers,

Matthew



Re: ZFS-only booting on FreeBSD

2011-02-19 Thread Daniel Staal
--As of February 19, 2011 12:01:37 PM +, Matthew Seaman is alleged to 
have said:



Let's say I install a FreeBSD system using a ZFS-only filesystem into a
box with hot-swappable hard drives, configured with some redundancy.  Time
passes, one of the drives fails, and it is replaced and rebuilt using the
ZFS tools.  (Possibly on auto, or possibly by just doing a 'zpool
replace'.)

Is that box still bootable?  (It's still running, but could it *boot*?)


Why wouldn't it be?  The configuration in the Wiki article sets aside a
small freebsd-boot partition on each drive, and the instructions tell
you to install boot blocks as part of that partitioning process.  You
would have to repeat those steps when you install your replacement drive
before adding the new disk into your zpool.

So long as the BIOS can read the bootcode from one or other of the drives, and
can then access /boot/zfs/zpool.cache to learn about what zpools you
have, then the system should boot.


So, assuming a forgetful sysadmin (or someone new who didn't know about 
the setup in the first place), is that a yes or a no for the 
one-drive-replaced case?


It definitely is a 'no' for the all-drives-replaced case, as I suspected: 
You would need to have repeated the partitioning manually, rather than 
letting ZFS handle it.



If not, what's the minimum needed to support booting from another disk,
and using the ZFS filesystem for everything else?


This situation is described in the "Boot ZFS system from UFS" article
here: http://wiki.freebsd.org/RootOnZFS/UFSBoot

I use this sort of setup for one system where the zpool has too many
drives in it for the BIOS to cope with; works very well booting from a
USB key.


Thanks; I wasn't sure if that procedure would work if the bootloader was on 
a different physical disk than the rest of the filesystem.  Nice to hear 
from someone who's tried it that it works.  ;)



In fact, while the partitioning layout described in the
http://wiki.freebsd.org/RootOnZFS articles is great for holding the OS
and making it bootable, for using ZFS to manage serious quantities of
disk storage, other strategies might be better.  It would probably be a
good idea to have two zpools: one for the bulk of the space built from
whole disks (i.e. without using gpart or similar partitioning), in
addition to your bootable zroot pool.  Quite apart from wringing the
maximum usable space out of your available disks, this also makes it
much easier to replace failed disks or use hot spares.


If a single disk failure in the zpool can render the machine unbootable, 
it's better yet to have a dedicated bootloader drive: It increases the mean 
time between failures of your boot device (and therefore your machine), and 
it reduces the 'gotcha' value.  In a hot-swap environment booting directly 
off of ZFS you could fail a reboot a month (or more...) after the disk 
replacement, and finding your problem then will be a headache until someone 
remembers this setup tidbit.


If the 'fail to boot' only happens once *all* the original drives have been 
replaced the mean time between failures is better in the ZFS situation, but 
the 'gotcha' value becomes absolutely huge: Since you can replace one (or 
two, or more) disks without issue, the problem will likely take years to 
develop.


Ah well, price of the bleeding edge.  ;)

Daniel T. Staal



Re: ZFS-only booting on FreeBSD

2011-02-19 Thread Matthew Seaman
On 19/02/2011 13:18, Daniel Staal wrote:
 Why wouldn't it be?  The configuration in the Wiki article sets aside a
 small freebsd-boot partition on each drive, and the instructions tell
 you to install boot blocks as part of that partitioning process.  You
 would have to repeat those steps when you install your replacement drive
 before adding the new disk into your zpool.

 So long as the BIOS can read the bootcode from one or other of the drives, and
 can then access /boot/zfs/zpool.cache to learn about what zpools you
 have, then the system should boot.
 
 So, assuming a forgetful sysadmin (or someone new who didn't know
 about the setup in the first place), is that a yes or a no for the
 one-drive-replaced case?

Umm... a sufficiently forgetful sysadmin can break *anything*.  This
isn't really a fair test: forgetting to write the boot blocks onto a
disk could similarly render a UFS based system unbootable.   That's why
scripting this sort of stuff is a really good idea.   Any new sysadmin
should of course be referred to the copious and accurate documentation
detailing exactly the steps needed to replace a drive...

ZFS is definitely advantageous in this respect, because the sysadmin has
to do fewer steps to repair a failed drive, so there's less opportunity
for anything to be missed out or got wrong.

The best solution in this respect is one where you can simply unplug the
dead drive and plug in the replacement.  You can do that with many
hardware RAID systems, but you're going to have to pay a premium price
for them.  Also, you lose out on the general day-to-day benefits of
using ZFS.

 It definitely is a 'no' for the all-drives replaced case, as I
 suspected: You would need to have repeated the partitioning manually. 
 (And not letting ZFS handle it.)

Oh, assuming your sysadmins consistently fail to replace the drives
correctly, then depending on your BIOS you can be in deep doo-doo as far
as rebooting goes rather sooner than that.

 If a single disk failure in the zpool can render the machine
 unbootable, it's better yet to have a dedicated bootloader drive

If a single disk failure renders your system unbootable, then you're
doing it wrong.  ZFS-root systems should certainly reboot if zfs can
still assemble the root pool -- so with one disk failed for RAIDZ1, two
for RAIDZ2, or up to half the drives for a pool of mirrors.

If this failure to correctly replace broken drives is going to be a
significant problem in your environment, then I guess you're going to
have to define appropriate processes.  You might say that in the event
of a hard drive being replaced, it is mandatory to book some planned
downtime at the next convenient point, and do a test reboot + apply any
remedial work needed.  If your system design is such that you can't take
any one machine down for maintenance, even with advance warning, then
you've got more important problems to solve before you worry about using
ZFS or not.
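
Such a process can even be partly scripted.  A sketch, where the disk
list and the pool name zroot are assumptions to adjust per machine:

  #!/bin/sh
  # Post-replacement sanity check: boot blocks present, pool healthy.
  for disk in ada0 ada1; do
      gpart show ${disk} | grep -q freebsd-boot \
          || echo "WARNING: no freebsd-boot partition on ${disk}"
  done
  zpool status -x    # prints 'all pools are healthy' once resilvering is done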

Cheers,

Matthew



Re: ZFS-only booting on FreeBSD

2011-02-19 Thread Daniel Staal
--As of February 19, 2011 2:44:38 PM +, Matthew Seaman is alleged to 
have said:



Umm... a sufficiently forgetful sysadmin can break *anything*.  This
isn't really a fair test: forgetting to write the boot blocks onto a
disk could similarly render a UFS based system unbootable.   That's why
scripting this sort of stuff is a really good idea.   Any new sysadmin
should of course be referred to the copious and accurate documentation
detailing exactly the steps needed to replace a drive...

ZFS is definitely advantageous in this respect, because the sysadmin has
to do fewer steps to repair a failed drive, so there's less opportunity
for anything to be missed out or got wrong.

The best solution in this respect is one where you can simply unplug the
dead drive and plug in the replacement.  You can do that with many
hardware RAID systems, but you're going to have to pay a premium price
for them.  Also, you lose out on the general day-to-day benefits of 
using ZFS.


--As for the rest, it is mine.

True, best case is hardware RAID for this specific problem.  What I'm 
looking at here is basically reducing the surprise: A ZFS pool being used 
as the boot drive has the 'surprising' behavior that if you replace a drive 
using the instructions from the man pages or a naive Google search, you 
will have a drive that *appears* to work, until some point later where you 
attempt to reboot your system.  (At which point you will need to start 
over.)  To avoid this you need to read local documentation and/or remember 
that something beyond the man pages needs to be done.


With a normal UFS/etc. filesystem the standard failure recovery systems 
will point out that this is a boot drive, and handle it as necessary.  It will 
either work or not, it will never *appear* to work, and then fail at some 
future point from a current error.  It might be more steps to repair a 
specific drive, but all the steps are handled together.


Basically, if a ZFS boot drive fails, you are likely to get the following 
scenario:

1) 'What do I need to do to replace a disk in the ZFS pool?'
2) 'Oh, that's easy.'  Replaces disk.
3) System fails to boot at some later point.
4) 'Oh, right, you need to do this *as well* on the *boot* pool...'

Where if a UFS boot drive fails on an otherwise ZFS system, you'll get:
1) 'What's this drive?'
2) 'Oh, so how do I set that up again?'
3) Set up replacement boot drive.

The first situation hides that it's a special case, where the second one 
doesn't.


To avoid the first scenario you need to make sure your sysadmins are 
following *local* (and probably out-of-band) docs, and are aware of potential 
problems.  And awake.  ;)  The scenario in the second situation presents 
its problem as a unified package, and you can rely on normal levels of 
alertness to be able to handle it correctly.  (The sysadmin will realize it 
needs to be set up as a boot device because it's the boot device.  ;)  It 
may be complicated, but it's *obviously* complicated.)


I'm still not clear on whether a ZFS-only system will boot with a failed 
drive in the root ZFS pool.  Once booted, of course a decent ZFS setup 
should be able to recover from the failed drive.  But the question is if 
the FreeBSD boot process will handle the redundancy or not.  At this point 
I'm actually guessing it will, which of course only exacerbates the above 
surprise problem: 'The easy ZFS disk replacement procedure *did* work in 
the past, why did it cause a problem now?'  (And conceivably it could cause 
*major* data problems at that point, as ZFS will *grow* a pool quite 
easily, but *shrinking* one is a problem.)


Daniel T. Staal



Re: ZFS-only booting on FreeBSD

2011-02-19 Thread krad
On 19 February 2011 15:35, Daniel Staal dst...@usa.net wrote:
 --As of February 19, 2011 2:44:38 PM +, Matthew Seaman is alleged to
 have said:

 Umm... a sufficiently forgetful sysadmin can break *anything*.  This
 isn't really a fair test: forgetting to write the boot blocks onto a
 disk could similarly render a UFS based system unbootable.   That's why
 scripting this sort of stuff is a really good idea.   Any new sysadmin
 should of course be referred to the copious and accurate documentation
 detailing exactly the steps needed to replace a drive...

 ZFS is definitely advantageous in this respect, because the sysadmin has
 to do fewer steps to repair a failed drive, so there's less opportunity
 for anything to be missed out or got wrong.

 The best solution in this respect is one where you can simply unplug the
 dead drive and plug in the replacement.  You can do that with many
 hardware RAID systems, but you're going to have to pay a premium price
 for them.  Also, you lose out on the general day-to-day benefits of
 using ZFS.

 --As for the rest, it is mine.

 True, best case is hardware RAID for this specific problem.  What I'm
 looking at here is basically reducing the surprise: A ZFS pool being used as
 the boot drive has the 'surprising' behavior that if you replace a drive
 using the instructions from the man pages or a naive Google search, you will
 have a drive that *appears* to work, until some point later where you
 attempt to reboot your system.  (At which point you will need to start
 over.)  To avoid this you need to read local documentation and/or remember
 that something beyond the man pages needs to be done.

 With a normal UFS/etc. filesystem the standard failure recovery systems will
 point out that this is a boot drive, and handle it as necessary.  It will
 either work or not, it will never *appear* to work, and then fail at some
 future point from a current error.  It might be more steps to repair a
 specific drive, but all the steps are handled together.

 Basically, if a ZFS boot drive fails, you are likely to get the following
 scenario:
 1) 'What do I need to do to replace a disk in the ZFS pool?'
 2) 'Oh, that's easy.'  Replaces disk.
 3) System fails to boot at some later point.
 4) 'Oh, right, you need to do this *as well* on the *boot* pool...'

 Where if a UFS boot drive fails on an otherwise ZFS system, you'll get:
 1) 'What's this drive?'
 2) 'Oh, so how do I set that up again?'
 3) Set up replacement boot drive.

 The first situation hides that it's a special case, where the second one
 doesn't.

 To avoid the first scenario you need to make sure your sysadmins are
 following *local* (and probably out-of-band) docs, and are aware of potential
 problems.  And awake.  ;)  The scenario in the second situation presents
 its problem as a unified package, and you can rely on normal levels of
 alertness to be able to handle it correctly.  (The sysadmin will realize it
 needs to be set up as a boot device because it's the boot device.  ;)  It
 may be complicated, but it's *obviously* complicated.)

 I'm still not clear on whether a ZFS-only system will boot with a failed
 drive in the root ZFS pool.  Once booted, of course a decent ZFS setup
 should be able to recover from the failed drive.  But the question is if the
 FreeBSD boot process will handle the redundancy or not.  At this point I'm
 actually guessing it will, which of course only exacerbates the above
 surprise problem: 'The easy ZFS disk replacement procedure *did* work in the
 past, why did it cause a problem now?'  (And conceivably it could cause
 *major* data problems at that point, as ZFS will *grow* a pool quite easily,
 but *shrinking* one is a problem.)

 Daniel T. Staal



On a slightly different note: make sure you align your partitions so the
ZFS partition's first sector is divisible by 8 (e.g. first sector 2048).
Also, when you create the zpool, use the gnop(8) trick with a 4096-byte
sector size to make sure the pool has ashift=12.  You may not be using
Advanced Format drives yet, but when you do in the future you will be
glad you started out like this.
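
Something like this, say -- a sketch, with the device ada0p2 and the
pool name tank as assumptions:

  gpart add -b 2048 -t freebsd-zfs ada0    # partition starts at sector 2048
  gnop create -S 4096 ada0p2               # fake 4096-byte-sector provider
  zpool create tank ada0p2.nop             # pool is created with ashift=12
  zpool export tank
  gnop destroy ada0p2.nop                  # safe once the pool is exported
  zpool import tank                        # ashift is permanent for the vdev
  zdb -C tank | grep ashift                # should report ashift: 12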


Re: ZFS-only booting on FreeBSD

2011-02-19 Thread Robert Bonomi

 Date: Sat, 19 Feb 2011 10:35:35 -0500
 From: Daniel Staal dst...@usa.net
 Subject: Re: ZFS-only booting on FreeBSD

  [[..  sneck  ..]]

 Basically, if a ZFS boot drive fails, you are likely to get the following 
 scenario:
 1) 'What do I need to do to replace a disk in the ZFS pool?'
 2) 'Oh, that's easy.'  Replaces disk.
 3) System fails to boot at some later point.
 4) 'Oh, right, you need to do this *as well* on the *boot* pool...'

 Where if a UFS boot drive fails on an otherwise ZFS system, you'll get:
 1) 'What's this drive?'
 2) 'Oh, so how do I set that up again?'
 3) Set up replacement boot drive.

 The first situation hides that it's a special case, where the second one 
 doesn't.

"For any foolproof system, there exists a _sufficiently-determined_ fool
 capable of breaking it" applies.

 To avoid the first scenario you need to make sure your sysadmins are 
 following *local* (and probably out-of-band) docs, and are aware of potential 
 problems.  And awake.  ;)  The scenario in the second situation presents 
 its problem as a unified package, and you can rely on normal levels of 
 alertness to be able to handle it correctly.  (The sysadmin will realize 
 it needs to be set up as a boot device because it's the boot device.  ;)  
 It may be complicated, but it's *obviously* complicated.)

 I'm still not clear on whether a ZFS-only system will boot with a failed 
 drive in the root ZFS pool.  Once booted, of course a decent ZFS setup 
 should be able to recover from the failed drive.  But the question is if 
 the FreeBSD boot process will handle the redundancy or not.  At this 
 point I'm actually guessing it will, which of course only exacerbates the 
 above surprise problem: 'The easy ZFS disk replacement procedure *did* 
 work in the past, why did it cause a problem now?'  (And conceivably it 
 could cause *major* data problems at that point, as ZFS will *grow* a 
 pool quite easily, but *shrinking* one is a problem.)

A non-ZFS boot drive results in immediate, _guaranteed_, down-time for
replacement if/when it fails.

A ZFS boot drive lets you replace the drive and *schedule* the down-time
(for a 'test' re-boot, to make *sure* everything works) at a convenient
time.

Failure to schedule the required down time is a management failure, not
a methodology issue.  One has located the requisite sufficiently-
determined fool, and the results thereof are to be expected.





Re: ZFS-only booting on FreeBSD

2011-02-19 Thread Matthew Seaman
On 19/02/2011 15:35, Daniel Staal wrote:
 I'm still not clear on whether a ZFS-only system will boot with a failed
 drive in the root ZFS pool.

If it's a mirror, raidz or similar pool type with resilience, then yes,
it certainly will boot with a failed drive.  Been there, done that.

Cheers,

Matthew



Re: ZFS-only booting on FreeBSD

2011-02-19 Thread Daniel Staal
--As of February 19, 2011 2:12:20 PM -0600, Robert Bonomi is alleged to 
have said:



A non-ZFS boot drive results in immediate, _guaranteed_, down-time for
replacement if/when it fails.

A ZFS boot drive lets you replace the drive and *schedule* the down-time
(for a 'test' re-boot, to make *sure* everything works) at a convenient
time.


--As for the rest, it is mine.

No, it doesn't.  It only extends the next scheduled downtime until you deal 
with it.  ;)  (Or, in a hot-swap environment with sufficient monitoring, it 
means you need to deal with it before the next scheduled downtime.)


Or, from what it sounds like, you could have a redundant/backup boot disk. 
I'm planning on using a $5 USB drive as my boot disk.  Triple redundancy 
would cost $15.  I paid more for lunch today.  (Hmm.  I'll have to test to 
see if that setup works, although given the rest of this discussion I don't 
see why it shouldn't...)


I see the advantage, and that it offers higher levels of resiliency and, if 
properly handled, should cause no problems.  I just hate relying on humans 
to remember things and follow directions.  That's what computers are for. 
Repairing a failed disk in a ZFS boot pool requires a human to remember to 
look for directions in an unusual place, and then follow them correctly. 
If they don't, nothing happens immediately, but there is the possibility of 
failure at some later unspecified time.  (Meanwhile if they look for 
directions in the *usual* place, they get a simple and straightforward set 
of instructions that will appear to work.)


*If* that failure occurs, that downtime will be longer than the downtime 
you would save from a dozen boxes being handled using the correct ZFS 
procedure, as everyone tears their hair out going 'Why doesn't it work?!? 
It worked just fine a moment ago!' until someone remembers this quirk.


I don't like quirky computers.  That's why I'm not a Windows admin.  ;)

Daniel T. Staal



Re: ZFS-only booting on FreeBSD

2011-02-19 Thread David Brodbeck
On Sat, Feb 19, 2011 at 1:52 PM, Daniel Staal dst...@usa.net wrote:
 I see the advantage, and that it offers higher levels of resiliency and if
 properly handled should cause no problems.  I just hate relying on humans to
 remember things and follow directions.  That's what computers are for.
 Repairing a failed disk in a ZFS boot pool requires a human to remember to
 look for directions in an unusual place, and then follow them correctly.

That's why I generally prefer to boot off hardware RAID 1 in
situations where reliability is critical.  There are too many fiddly
unknown factors in booting off software RAID.  Even if you do
everything else right, the BIOS may refuse to look beyond the failed
drive and boot off the good one.  I save the software RAID for data
spindles (which I tend to keep separate from the boot/OS spindles,
anyway.)

2-port 3ware cards are relatively inexpensive, and well supported by
every OS I've used except Solaris.  If you're going for RAID 1 you
don't need expensive battery-backed cache.


Re: ZFS-only booting on FreeBSD

2011-02-19 Thread perryh
Robert Bonomi bon...@mail.r-bonomi.com wrote:

 A non-ZFS boot drive results in immediate, _guaranteed_,
 down-time for replacement if/when it fails.

Not if it is gmirrored and hot-pluggable.
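
A sketch of the swap, assuming a gmirror named boot and a replacement
disk da1 (both names hypothetical):

  gmirror forget boot        # drop the dead component from the mirror
  gmirror insert boot da1    # add the replacement; rebuild starts automatically
  gmirror status boot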


ZFS-only booting on FreeBSD

2011-02-18 Thread Daniel Staal

I've been reading over the ZFS-only-boot instructions linked here:
http://wiki.freebsd.org/ZFS (and further linked from there) and have one
worry:

Let's say I install a FreeBSD system using a ZFS-only filesystem into a
box with hot-swappable hard drives, configured with some redundancy.  Time
passes, one of the drives fails, and it is replaced and rebuilt using the
ZFS tools.  (Possibly on auto, or possibly by just doing a 'zpool
replace'.)

Is that box still bootable?  (It's still running, but could it *boot*?)

Extend further: If *all* the original drives are replaced (not at the same
time, obviously) and rebuilt/resilvered using the ZFS utilities, is the
box still bootable?

If not, what's the minimum needed to support booting from another disk,
and using the ZFS filesystem for everything else?

Daniel T. Staal



Re: ZFS-only booting on FreeBSD

2011-02-18 Thread Daniel Staal


Sorry for the dupe, I did send them about 8 hours apart...

Daniel T. Staal
