Thanks for the response, Victor.  It is certainly still relevant in the sense
that I am hoping to recover the data (although I've been informed the odds
are strongly against me).

My understanding is that Nexenta has been backporting ZFS code changes
post-build 134.  I suppose it could be an error they somehow introduced, or
perhaps I've found a unique code path that is also relevant pre-134.
Earlier today I was able to send some zdb dump information to Cindy, which
hopefully will shed some light on the situation (I would be happy to send it
to you as well).

-brian
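
P.S. For anyone else following along, the zdb output I sent was gathered
roughly along these lines (a sketch only; "tank" and "tank/fs" stand in for
my actual pool and file system names, and the exact flags may have differed):

```shell
# Read-only diagnostics; none of these commands modify on-disk state.
zpool status -v tank       # overall pool health and any logged errors
zpool history -il tank     # internal + long-format history of pool operations
zdb -bcsv tank             # traverse the pool, verifying blocks and checksums
zdb -dddd tank/fs          # detailed object/dataset dump of the damaged fs
```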

On Tue, Aug 17, 2010 at 10:37 AM, Victor Latushkin <victor.latush...@sun.com> wrote:

> Hi Brian,
>
> is it still relevant?
>
>
> On 02.08.10 21:07, Brian Merrell wrote:
>
>> Cindy,
>>
>> Thanks for the quick response.  Consulting ZFS history I note the
>> following actions:
>>
>> "imported" my three-disk raid-z pool, originally created on the most
>> recent version of OpenSolaris but now running NexentaStor 3.03
>>
>
> Then we need to know what changes there are in NexentaStor 3.03 on top of
> build 134. Nexenta folks are reading this list, so I hope they'll chime in.
>
> regards
> victor
>
>
>> "upgraded" my pool
>> "destroyed" two file systems I was no longer using (neither of these was,
>> of course, the file system at issue)
>> "destroyed" a snapshot on another filesystem
>> played around with permissions (these were my only actions directly on the
>> file system)
>>
>> None of these actions seemed to have a negative impact on the filesystem
>> and it was working well when I gracefully shutdown (to physically move the
>> computer).
>>
>> I am a bit at a loss.  With copy-on-write and a clean pool how can I have
>> corruption?
>>
>> -brian
>>
>>
>>
>> On Mon, Aug 2, 2010 at 12:52 PM, Cindy Swearingen <cindy.swearin...@oracle.com> wrote:
>>
>>    Brian,
>>
>>    You might try using zpool history -il to see what ZFS operations,
>>    if any, might have led up to this problem.
>>
>>    If zpool history doesn't provide any clues, then what other
>>    operations might have occurred prior to this state?
>>
>>    It looks like something trampled this file system...
>>
>>    Thanks,
>>
>>    Cindy
>>
>>    On 08/02/10 10:26, Brian wrote:
>>
>>        Thanks Preston.  I am actually using ZFS locally, connected
>>        directly to 3 sata drives in a raid-z pool. The filesystem is
>>        ZFS and it mounts without complaint and the pool is clean.  I am
>>        at a loss as to what is happening.
>>        -brian
>>
>>
>>
>>
>> --
>> Brian Merrell, Director of Technology
>> Backstop LLP
>> 1455 Pennsylvania Ave., N.W.
>> Suite 400
>> Washington, D.C.  20004
>> 202-628-BACK (2225)
>> merre...@backstopllp.com
>> www.backstopllp.com
>>
>>
>> ------------------------------------------------------------------------
>>
>> _______________________________________________
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>
>
> --
> Victor Latushkin                   phone: x11467 / +74959370467
> TSC-Kernel EMEA                    mobile: +78957693012
> Sun Services, Moscow               blog: http://blogs.sun.com/vlatushkin
> Sun Microsystems
>



