Hi Ethan,

This change still looks good to me.

Thanks for the quick turnaround on this one!

-evan


Ethan Quach wrote:
> As per the previous comments, here is the webrev for just 3889.
> 
> Webrev:
> ----------
> http://cr.opensolaris.org/~equach/webrev.3889
> 
> Defect:
> --------
> 3889 be_create_snapshot fails when running pkg image-update
> http://defect.opensolaris.org/bz/show_bug.cgi?id=3889
> 
> 
> thanks,
> -ethan
> 
> 
> 
> Evan Layton wrote:
>> Ethan Quach wrote:
>>>
>>>
>>> Evan Layton wrote:
>>>> Hi Ethan,
>>>>
>>>> This looks really good. I do have a question though.
>>>>
>>>> On line 876 it's setting be_created if we get a failure back from
>>>> be_clone_fs_callback. If this fails, then we haven't created the BE,
>>>> but we're still setting be_created, doing the goto done, and
>>>> attempting to clean up the BE.
>>>
>>> That's an incorrect assumption; if the call to be_clone_fs_callback()
>>> fails, that doesn't mean no datasets for the new BE have been created
>>> yet.  For example, it could have succeeded in creating the new root
>>> dataset, but failed in cloning one of the subordinates.  The places
>>> where be_created is set are the places where datasets have potentially
>>> been created.
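
A minimal, self-contained sketch of the pattern described above (the helper
names are illustrative stand-ins, not libbe's actual code): be_created is set
everywhere datasets may already exist, so the shared error path can tear down
a partially created BE.

#include <sys/types.h>

extern int clone_filesystems(void);     /* stand-in for the clone callback step */
extern void destroy_partial_be(void);   /* stand-in for the _be_destroy() cleanup */

int
copy_be_sketch(void)
{
	boolean_t be_created = B_FALSE;
	int ret;

	/*
	 * The clone callback may have created the root dataset before
	 * failing on a subordinate, so mark the BE as created even on
	 * failure and fall through to the cleanup.
	 */
	if ((ret = clone_filesystems()) != 0) {
		be_created = B_TRUE;
		goto done;
	}
	be_created = B_TRUE;

	/* ... snapshot, copy zones, set properties, etc. ... */

done:
	if (ret != 0 && be_created)
		destroy_partial_be();
	return (ret);
}
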
>>
>> Yes, I realized after I sent the question that it was at best incorrect...
>>
>>>
>>> Though I just realized there could be a race condition.  If another
>>> process created a BE named "foobar" just as this code was running,
>>> be_clone_fs_callback() would fail and this code would end up destroying
>>> the "foobar" BE just created by the other process.  I think this is
>>> going to need some rework.
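
Purely as an illustration of one way the rework could close that race (the
helpers here are made up, not libbe code): treat an "already exists" failure
on the clone of the BE root as another process's BE and skip the cleanup, so
a concurrent "foobar" is never torn down by our error path.

#include <sys/types.h>
#include <errno.h>

extern int clone_be_root(const char *);        /* made-up helper; returns 0 or an errno */
extern int clone_subordinates(const char *);   /* made-up helper */
extern void destroy_partial_be(const char *);  /* made-up helper */

int
create_be_sketch(const char *be_name)
{
	boolean_t be_created = B_FALSE;
	int ret;

	if ((ret = clone_be_root(be_name)) != 0) {
		/* EEXIST means another process owns this BE name; leave it alone. */
		be_created = (ret == EEXIST) ? B_FALSE : B_TRUE;
		goto done;
	}
	be_created = B_TRUE;

	ret = clone_subordinates(be_name);

done:
	if (ret != 0 && be_created)
		destroy_partial_be(be_name);
	return (ret);
}
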
>>
>> Good catch!
>>
>>>
>>>> I can understand that if we've at least created one of a set of
>>>> needed snapshots (or clones), we need to clean up, but what happens
>>>> in the case where no snapshots (or clones) were created?
>>>
>>> For cases where we're sure no datasets have been created, be_created
>>> doesn't get set. Those are lines 893, 921, and 957.
>>
>> Ah, ok, I see what you're getting at there, and it makes sense to me now.
>> Thanks for the explanation!
>>
>>>
>>>
>>> Anyway, as per my comment above, I'm going to separate 3844 and 3889
>>> (the latter being a much simpler fix), so that 3889 can get in sooner
>>> rather than later. I'll resend 3889 in just a few...
>>
>> Sounds good!
>>
>> -evan
>>
>>>
>>>
>>> thanks,
>>> -ethan
>>>
>>>
>>>> I guess that gets covered in _be_destroy in that the first
>>>> zfs_open will fail?
>>>> Anyway, the changes look fine; I just wanted to make sure this
>>>> scenario was covered.
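
For reference, a hedged sketch of that guard using the libzfs calls mentioned
here (the dataset argument and return value are illustrative): if the BE's
root dataset was never created, the first zfs_open() in the destroy path
fails and the function returns without touching anything.

#include <libzfs.h>

static int
destroy_be_sketch(libzfs_handle_t *lzh, const char *be_root_ds)
{
	zfs_handle_t *zhp;

	/* If the BE root dataset doesn't exist, there is nothing to clean up. */
	if ((zhp = zfs_open(lzh, be_root_ds, ZFS_TYPE_FILESYSTEM)) == NULL)
		return (1);

	/* ... destroy snapshots, clones, and the datasets themselves ... */

	zfs_close(zhp);
	return (0);
}
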
>>>>
>>>> Thanks,
>>>> -evan
>>>>
>>>> Ethan Quach wrote:
>>>>  
>>>>> Two more fixes in this webrev.  The first defect fixes the case where
>>>>> creating a BE silently ignores failures in creating the corresponding
>>>>> zones in the new BE.  It will now stop the creation process and report
>>>>> an error.  be_copy() will now also clean up a partially created boot
>>>>> environment upon any failure that happens during the creation
>>>>> process, instead of leaving it lying around.
>>>>>
>>>>> The second defect is a regression due to 1197.  The additional test
>>>>> case that caught this was doing a pkg image-update on a system
>>>>> where the pool is running a version lower than the one required to
>>>>> support snapshot user properties.
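
A sketch of the kind of pool-version guard that fix implies (the version
constant and property name here are assumptions, not the actual change): only
pass a user-property nvlist to zfs_snapshot() when the pool version is new
enough for snapshot properties.

#include <libzfs.h>
#include <libnvpair.h>

#define	MIN_SNAP_PROPS_VERSION	12	/* assumed pool version adding snapshot user props */

static int
snapshot_be_sketch(libzfs_handle_t *lzh, zpool_handle_t *zph, const char *snap_name)
{
	nvlist_t *props = NULL;
	int ret;

	/* Only build a user-property nvlist if the pool version supports it. */
	if (zpool_get_prop_int(zph, ZPOOL_PROP_VERSION, NULL) >=
	    MIN_SNAP_PROPS_VERSION) {
		if (nvlist_alloc(&props, NV_UNIQUE_NAME, 0) != 0)
			return (1);
		/* illustrative property name */
		(void) nvlist_add_string(props,
		    "org.opensolaris.libbe:policy", "static");
	}

	/* Older pools get a plain snapshot with no user properties. */
	ret = zfs_snapshot(lzh, snap_name, B_FALSE, props);

	if (props != NULL)
		nvlist_free(props);
	return (ret);
}
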
>>>>>
>>>>>
>>>>>
>>>>> Webrev:
>>>>> ----------
>>>>> http://cr.opensolaris.org/~equach/webrev.3844.3889
>>>>>
>>>>> Defects:
>>>>> ----------
>>>>> 3844 be_copy_zones() silently ignores zones
>>>>> http://defect.opensolaris.org/bz/show_bug.cgi?id=3844
>>>>>
>>>>> 3889 be_create_snapshot fails when running pkg image-update
>>>>> http://defect.opensolaris.org/bz/show_bug.cgi?id=3889
>>>>>
>>>>>
>>>>> thanks,
>>>>> -ethan
>>>>>
>>>>
>>

