On Aug 4, 2010, at 7:15 AM, Dmitry Sorokin wrote:

> 
> I'm in the same situation as Darren - my log SSD device died completely.
> Victor, could you please explain how you "mocked up a log device in a
> file" so that zpool status started showing the device with UNAVAIL status?
> I lost the latest zpool.cache file, but I was able to recover GUID of
> the log device from the backup copy of zpool.cache.

Well, that's not very difficult. You need to write a proper VDEV configuration 
with a good checksum into at least one ZFS label of some new device - 
either a disk or a file.

If you have a backup zpool.cache with the necessary details, then it is not 
that difficult.
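To illustrate the idea only - this is not the real on-disk format (actual ZFS labels are 256 KiB structures containing XDR-encoded nvlists with embedded SHA-256 checksums at fixed offsets, normally written with custom tooling), here is a minimal Python sketch of the general technique: writing a config blob plus a matching checksum into the label region of a file-backed mock device. All names, offsets, and values below are hypothetical.

```python
# Simplified stand-in for "write a VDEV config with a good checksum into a
# label of a file": JSON + SHA-256 substitute for the real XDR nvlist format.
import hashlib
import json

DEVICE_SIZE = 64 * 1024 * 1024   # size of the mock file-backed device

def write_mock_label(path, config):
    """Write a config blob plus its checksum into the start of the file."""
    payload = json.dumps(config, sort_keys=True).encode()
    checksum = hashlib.sha256(payload).digest()
    with open(path, "wb") as f:
        f.truncate(DEVICE_SIZE)  # sparse file acts as the mock device
        f.seek(0)                # "label 0" lives at offset 0 here
        f.write(len(payload).to_bytes(8, "big") + checksum + payload)

def read_mock_label(path):
    """Read the label back and verify the checksum before trusting it."""
    with open(path, "rb") as f:
        size = int.from_bytes(f.read(8), "big")
        checksum = f.read(32)
        payload = f.read(size)
    if hashlib.sha256(payload).digest() != checksum:
        raise ValueError("label checksum mismatch")
    return json.loads(payload)

# In a real recovery the guid would be the one recovered from zpool.cache;
# this value is made up for the sketch.
config = {"type": "disk", "guid": 1234567890, "is_log": 1}
write_mock_label("/tmp/mocklog", config)
assert read_mock_label("/tmp/mocklog") == config
```

The point of the checksum step is that ZFS will only accept a label whose embedded checksum verifies, which is why simply copying configuration bytes into a file is not enough.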

Btw, in Darren's case we almost succeeded - it was possible to import the pool 
with the mocked-up log device, but due to corruption in the metaslabs it 
panicked almost immediately. For some reason setting aok/zfs_recover did not 
help either. The last option was to try a read-only import, but I was not able 
to prepare the necessary bits quickly enough, and Darren decided to stop 
pursuing recovery and revert to the partial backups he had.

I'm almost sure that a read-only import would have let him get everything back. 
In the future it should be easier, as ZFS read-only import support is now 
integrated into the source code thanks to George Wilson's efforts.

regards
victor

> 
> Thanks,
> Dmitry
> 
> 
> -----Original Message-----
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Victor
> Latushkin
> Sent: Tuesday, August 03, 2010 7:09 PM
> To: Darren Taylor
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] problem with zpool import - zil and cache
> drive are not displayed?
> 
> 
> On Aug 4, 2010, at 12:23 AM, Darren Taylor wrote:
> 
>> Hi George,
>> 
>> I think you are right. The log device looks to have suffered a
>> complete loss; there is no data on the disk at all. The log device was an
>> "acard" RAM drive (with battery backup), but somehow it has faulted,
>> clearing all data.
>> 
>> --victor gave me this advice, and queried about the zpool.cache-- 
>> Looks like there's a hardware problem with c7d0, as it appears to
>> contain garbage. Do you have a zpool.cache with this pool configuration
>> available?
> 
> Besides containing garbage, the former log device now appears to have a
> different geometry and is not able to read in the higher LBA ranges. So
> I'd say it is broken.
> 
>> c7d0 was the log device. I'm unsure what the next step is, but I'm
>> assuming there is a way to grab the drive's original config from the
>> zpool.cache file and apply it back to the drive?
> 
> I mocked up the log device in a file, and that made zpool import happier:
> 
> bash-4.0# zpool import
>  pool: tank
>    id: 15136317365944618902
> state: DEGRADED
> status: The pool was last accessed by another system.
> action: The pool can be imported despite missing or damaged devices.
> The
>        fault tolerance of the pool may be compromised if imported.
>   see: http://www.sun.com/msg/ZFS-8000-EY
> config:
> 
>        tank        DEGRADED
>          raidz1-0  ONLINE
>            c6t4d0  ONLINE
>            c6t5d0  ONLINE
>            c6t6d0  ONLINE
>            c6t7d0  ONLINE
>          raidz1-1  ONLINE
>            c6t0d0  ONLINE
>            c6t1d0  ONLINE
>            c6t2d0  ONLINE
>            c6t3d0  ONLINE
>        cache
>          c8d1
>        logs
>          c13d1s0   UNAVAIL  cannot open
> 
> 
> 
> bash-4.0# zpool import -fR / tank
> cannot import 'tank': one or more devices is currently unavailable
>        Recovery is possible, but will result in some data loss.
>        Returning the pool to its state as of July 21, 2010 03:49:50 AM NZST
>        should correct the problem.  Approximately 91 seconds of data
>        must be discarded, irreversibly.  After rewind, several
>        persistent user-data errors will remain.  Recovery can be attempted
>        by executing 'zpool import -F tank'.  A scrub of the pool
>        is strongly recommended after recovery.
> bash-4.0#
> 
> So if you are happy with the results, you can perform the actual import with
> 
> zpool import -fF -R / tank
> 
> You should then be able to remove the log device completely (with 'zpool
> remove', assuming your pool version supports log device removal).
> 
> regards
> victor
> 
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 
