I'm trying to provide some "disaster-proofing" on Amazon EC2 by using a 
ZFS-based EBS volume for primary data storage with Amazon S3-backed snapshots. 
My aim is to ensure that, should the instance terminate, a new instance can 
spin up, attach the EBS volume, and automatically re-configure the zpool.
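
By "re-configure" I mean something along the following lines on first boot of 
the replacement instance. This is only a sketch; the pool name "tank" and the 
device name c7d1p0 are placeholders for whatever the volume actually shows up as:

    # after the EBS volume is attached to the new instance
    zpool import            # scan attached devices for importable pools
    zpool import -f tank    # force the import, since the old instance never exported the pool
    zpool status tank       # confirm the pool is ONLINE and its data is intact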

I've created an OpenSolaris 2009.06 x86_64 image with the zpool structure 
already defined. Starting an instance from this image, without attaching the 
EBS volume, shows that the pool structure exists and that the pool state is 
"UNAVAIL" (as expected, since the backing device is absent). Once the EBS 
volume is attached to the instance, the pool's status changes to "ONLINE", the 
mount point is accessible, and I can write data to the volume.
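
For reference, the pool in the image was created with something like the 
following; again, "tank" and c7d1p0 are just examples standing in for my 
actual pool and device names:

    zpool create tank c7d1p0
    zpool status tank    # UNAVAIL until the EBS device is attached, ONLINE afterwards
    zfs list             # shows the pool's filesystem and mount point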

Now, if I terminate the instance, spin up a new one, and attach the same (now 
unattached) EBS volume to this new instance, the data is no longer there; the 
EBS volume appears to be blank.

EBS is supposed to ensure persistence of data after EC2 instance termination, 
so I'm assuming that when the newly attached device is seen by ZFS for the 
first time it is somehow wiping the data? Or perhaps some ZFS logs or metadata 
about file location/allocation aren't being persisted? On that assumption, I 
created an additional EBS volume to persist the intent log across instances, 
but I'm seeing the same problem.
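
The log device was added along these lines (again, "tank" and c7d2p0 are 
placeholders for my pool name and the second EBS volume's device):

    zpool add tank log c7d2p0
    zpool status tank    # now shows a separate "logs" vdev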

I'm new to ZFS, and would really appreciate the community's help on this.