On 10/14/15 4:02, Jeb Winders wrote:
> Apology accepted Robert and thanks for the explanation.
> 
> As to activity on the pool -- I started out with 2.2TB used in this pool
> with 1.3TB free, and now I have 1.55TB used and 1.92TB free.  Before I
> realized what had gone on in my zones/archive I had already written 4GB
> to a new zone.
> 
> I have now stopped all the running zones and have not done any other
> operations on the zones pool.  Everything is still mounted read-write.
> 
> Should I still try a rollback, or is it not worth it at this point?

At that point, unfortunately, the various uberblocks in ZFS will have
been overwritten without a snapshot to anchor the old data in place. You
could try to scan the unallocated portions of the drives in an attempt
to find data blocks, but at that point it's much more of a forensic
approach, and unless the contents of those blocks are really
self-evident, trying to piece them back together will be rather arduous.
It's not something I've done, so I'm not sure how to advise someone to
even go down that path.
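For the archives, in case anyone else hits this while their pool is
still quiescent: a rewind-import attempt would look roughly like the
following. This is a sketch, not something I've run against this exact
failure; "zones" is the default SmartOS pool name, and since -F discards
the most recent transactions, the read-only dry run should always come
first.

```shell
# The pool must be exported before it can be re-imported with
# rewind options.
zpool export zones

# Dry run: with -n, report whether a -F rewind could succeed
# without actually modifying the pool.
zpool import -F -n zones

# If the dry run looks plausible, import read-only so nothing
# further is overwritten while you assess what survived.
zpool import -o readonly=on -F zones

# Last resort: -X (extreme rewind) searches much older uberblocks.
# It can run for a very long time and offers no guarantees.
# zpool import -o readonly=on -FX zones
```

If a read-only import succeeds, the next step is simply to copy
anything recoverable off to another pool before attempting a normal
read-write import.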

Robert

> On Tue, Oct 13, 2015 at 11:45 PM, Robert Mustacchi <[email protected]> wrote:
> 
>> On 10/13/15 19:39, Jeb Winders wrote:
>>> Another question -- should I rename my zfs fs zones/archive to something
>>> else or would that make it harder to recover stuff ?
>>>
>>> or am I just deluding myself to think there will be any recovery ?
>>
>> On behalf of those of us at Joyent, I'd like to apologize for this
>> happening, though I appreciate that it doesn't actually end up restoring
>> your data or really help with the lost time.
>>
>> It appears that this change happened as a side effect of the effort to
>> unify the SDC and SmartOS platforms, which were subtly different, that
>> crontab entry apparently being one of those differences.
>>
>> At this point recovery depends on how much activity has occurred. While
>> it's possible to roll back some amount in ZFS, it really depends on how
>> much activity has happened since the event. Unfortunately, I would
>> suspect that most of the metadata that would point to the data blocks in
>> question has been lost if it's been active for a while. If you have kept
>> that part of the zpool read-only instead, then you might be able to use
>> zpool import -F to try to roll back the pool itself.
>>
>> If that's not the case, then at this point I would recommend renaming
>> the ZFS dataset using the zfs rename command.
>>
>> Robert
>>
> 
> 


-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now
RSS Feed: https://www.listbox.com/member/archive/rss/184463/25769125-55cfbc00
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=25769125&id_secret=25769125-7688e9fb
Powered by Listbox: http://www.listbox.com
