Scott Laird wrote:
> On 10/18/07, Neil Perrin <[EMAIL PROTECTED]> wrote:
>>> So, the only way to lose transactions would be a crash or power loss,
>>> leaving outstanding transactions in the log, followed by the log
>>> device failing to start up on reboot? I assume that would be
>>> handled relatively cleanly (files have out-of-date data), as
>>> opposed to something nasty like the pool failing to start up.
>> I just checked on the behaviour of this. The log is treated as part
>> of the main pool. If it is not replicated and disappears then the pool
>> can't be opened - just like any unreplicated device in the main pool.
>> If the slog is found but can't be opened or is corrupted then the
>> pool will be opened but the slog isn't used.
>> This seems a bit inconsistent.
>
> Hmm, yeah. What would happen if I mirrored the ramdisk with a hard
> drive? Would ZFS block until the data's stable on both devices, or
> would it continue once the write is complete on the ramdisk?
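For concreteness, the mirrored-log setup being asked about could be created
like this (a sketch only; rd1 is a hypothetical ramdisk made with
ramdiskadm(1M), and /p1, /p2 are backing files as in the demo further down):

: mull ; ramdiskadm -a rd1 100m
/dev/ramdisk/rd1
: mull ; mkfile 100m /p1 /p2
: mull ; zpool create whirl /p1 log mirror /dev/ramdisk/rd1 /p2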
ZFS ensures all mirror sides have the data before returning.

> Failing that, would replacing the missing log with a blank device let
> me bring the pool back up, or would it be dead at that point?

Replacing the device would work:

: mull ; mkfile 100m /p1 /p2
: mull ; zpool create whirl /p1 log /p2
: mull ; echo abc > /whirl/f
: mull ; sync
: mull ; rm /p2
: mull ; sync

<reset system>

: mull ; zpool status
  pool: whirl
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        whirl       UNAVAIL      0     0     0  insufficient replicas
          /p1       ONLINE       0     0     0
        logs        UNAVAIL      0     0     0  insufficient replicas
          /p2       UNAVAIL      0     0     0  cannot open

: mull ; mkfile 100m /p2 /p3
: mull ; zpool online whirl /p2
warning: device '/p2' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present
: mull ; zpool status
  pool: whirl
 state: ONLINE
status: One or more devices could not be used because the label is
        missing or invalid.  Sufficient replicas exist for the pool to
        continue functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        whirl       ONLINE       0     0     0
          /p1       ONLINE       0     0     0
        logs        ONLINE       0     0     0
          /p2       UNAVAIL      0     0     0  corrupted data

errors: No known data errors
: mull ; zpool replace whirl /p2 /p3
: mull ; zpool status
  pool: whirl
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu Oct 18 18:16:39 2007
config:

        NAME           STATE     READ WRITE CKSUM
        whirl          ONLINE       0     0     0
          /p1          ONLINE       0     0     0
        logs           ONLINE       0     0     0
          replacing    ONLINE       0     0     0
            /p2        UNAVAIL      0     0     0  corrupted data
            /p3        ONLINE       0     0     0

errors: No known data errors
: mull ; zpool status
  pool: whirl
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu Oct 18 18:16:39 2007
config:

        NAME        STATE     READ WRITE CKSUM
        whirl       ONLINE       0     0     0
          /p1       ONLINE       0     0     0
        logs        ONLINE       0     0     0
          /p3       ONLINE       0     0     0

errors: No known data errors
: mull ; zfs mount
: mull ; zfs mount -a
: mull ; cat /whirl/f
abc
: mull ;

>>>>> 3. What about corruption in the log? Is it checksummed like the rest of
>>>>> ZFS?
>>>> Yes it's checksummed, but the checksumming is a bit different
>>>> from the pool blocks in the uberblock tree.
>>>>
>>>> See also:
>>>> http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
>>> That started this whole mess :-). I'd like to try out using one of
>>> the Gigabyte SATA ramdisk cards that are discussed in the comments.
>> A while ago there was a comment on this alias that these cards
>> weren't purchasable. Unfortunately, I don't know what is available.
>
> The umem one is unavailable, but the Gigabyte model is easy to find.
> I had Amazon overnight one to me; it's probably sitting at home right
> now.

Cool, let us know how it goes.

Neil.
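P.S. With the log mirrored, losing one side should only degrade the pool
rather than prevent it opening, so the experiment above might play out like
this instead (an untested sketch; /p4 is a hypothetical replacement file):

: mull ; zpool create whirl /p1 log mirror /p2 /p3
: mull ; rm /p3

<reset system>

: mull ; zpool status
(expected: the pool opens DEGRADED with /p3 faulted, since the log
mirror still has /p2)
: mull ; mkfile 100m /p4
: mull ; zpool replace whirl /p3 /p4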