Then perhaps you should do `zpool import -R / pool` *after* you attach the EBS volume. That way Solaris won't automatically try to import the pool, and your scripts will do it once the disks are available.
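A rough sketch of that ordering, assuming the AWS CLI (or the equivalent ec2-attach-volume tool), with the volume/instance IDs, the device name and the pool name `tank` all being placeholders:

```sh
# Attach the EBS volume first (IDs and device name are placeholders).
aws ec2 attach-volume --volume-id vol-xxxxxxxx \
    --instance-id i-xxxxxxxx --device /dev/sdf

# Wait until the corresponding Solaris device node shows up.
while [ ! -e /dev/dsk/c7d1p0 ]; do sleep 5; done

# Import the pool under an alternate root; -f forces the import even
# though the pool was never cleanly exported by the old instance.
zpool import -f -R / tank
```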
`zpool import` doesn't work, as there was no previous export.
I'm trying to provide some disaster-proofing on Amazon EC2 by using a ZFS-based EBS volume for primary data storage, with Amazon S3-backed snapshots. My aim is to ensure that, should the instance terminate, a new instance can spin up, attach the EBS volume and automatically reconfigure the zpool.
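By S3-backed snapshots I mean something along these lines - a sketch only, with the pool/filesystem and bucket names as placeholders, and assuming an uploader that can read from stdin (e.g. `aws s3 cp -`):

```sh
# Pool/filesystem, snapshot name and bucket are all placeholders.
SNAP="tank/data@$(date +%Y%m%d-%H%M%S)"
zfs snapshot "$SNAP"

# Stream the snapshot straight to S3; "aws s3 cp -" reads from stdin.
zfs send "$SNAP" | aws s3 cp - "s3://my-backup-bucket/${SNAP##*/}.zfssend"
```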
I'm not actually issuing any commands when starting up the new instance. None are needed; the instance is booted from an image which has the zpool configuration stored within it, so it simply starts, sees that the devices aren't available, and they become available once I've attached the EBS volume.
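As far as I understand it, that stored configuration is ZFS's cache file, which is read at boot to decide which pools to bring up automatically; a sketch of how it can be inspected or disabled (`tank` is a placeholder pool name):

```sh
# Show where the pool's configuration is cached (default /etc/zfs/zpool.cache).
zpool get cachefile tank

# Setting it to "none" would stop the pool being remembered by the image,
# leaving the import to a boot-time script instead.
zpool set cachefile=none tank
```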
The instances are ephemeral; once terminated they cease to exist, as do all their settings. Rebooting an instance keeps any EBS volumes attached, but that isn't the case I'm dealing with - it's when the instance terminates unexpectedly, for instance if a reboot operation doesn't succeed.
One thing I've just noticed: after rebooting the new instance, which initially showed no data on the EBS volume, the files return. So (a scripted version of these steps follows the list):
1. Start new instance
2. Attach EBS vols
3. `ls /foo` shows no data
4. Reboot instance
5. Wait a few minutes
6. `ls /foo` shows data as expected
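Scripted, the reproduction looks roughly like this - the AMI/volume/instance IDs, instance type, hostname and mountpoint are all placeholders, and it assumes the AWS CLI plus SSH access:

```sh
# 1-2. Launch a new instance from the image and attach the EBS volume.
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type m1.large
aws ec2 attach-volume --volume-id vol-xxxxxxxx \
    --instance-id i-xxxxxxxx --device /dev/sdf

# 3. Straight after attaching, the filesystem looks empty.
ssh root@instance-host 'ls /foo'              # no data

# 4-5. Reboot the instance and give it a few minutes.
aws ec2 reboot-instances --instance-ids i-xxxxxxxx
sleep 300

# 6. Now the data is visible.
ssh root@instance-host 'ls /foo'              # data as expected
```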
I can replicate this case: start a new instance, attach the EBS volumes, reboot the instance, and the data is finally available.
I'm guessing that it's something to do with the way the volumes/devices are detected and then made available.
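If it is a device-visibility issue, perhaps forcing a device rescan once the volume is attached would have the same effect as the reboot - just a guess, and the device name is a placeholder:

```sh
# Rebuild the device links now that the EBS volume is attached.
devfsadm -C -c disk

# Bring the expected device back online and clear any errors on the pool
# (c7d1p0 is a placeholder for whatever device the pool is expecting).
zpool online tank c7d1p0
zpool clear tank
```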
I've tried running various operations (offline/online, scrub) to see whether that would make the data appear without a reboot.
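For reference, the sort of thing I've been trying (pool and device names are placeholders):

```sh
zpool offline tank c7d1p0    # take the device offline...
zpool online tank c7d1p0     # ...and bring it back
zpool scrub tank             # then scrub the pool
zpool status tank            # and watch for the data/devices to reappear
```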