Hi Angela,

Please see my responses inline.

Thank you,
Jan

angela wrote:
> Hi Jan,
>
> Please see my comments inline.
>
> As for the libti test suite, the SPARC support and the x86 bug fix 
> have now been finished.
> You can find the baseline and OpenSolaris 0811 RC2 test results on the 
> following page:
> http://opg-qe.central.sun.com/wiki/index.php/Install_libti

I have gone through the test report and so far it looks good :-)
Thanks for testing this!

>
> BTW, the SPARC system I used for testing is sun4u :)

Since we will be supporting sun4v for the January release,
could I please ask you to give it a try on sun4v as well?
Thanks!


>
> Thank you for your support,
> Angela
>
> jan damborsky wrote:
>> Hi Angela,
>>
>>
>> angela wrote:
>>> jan damborsky wrote:
>>>> Hi Angela,
>>>>
>>>> thanks for reporting those issues!
>>>>
>>>> Please see my comments inline.
>>>>
>>>> Jan
>>>>
>>>>
>>>> angela wrote:
>>>>  
>>>>> Hi Jan,
>>>>>
>>>>> While working on the 'create ZFS volume' assertion, I found the
>>>>> following
>>>>> defects:
>>>>>
>>>>> 1) it doesn't report a failure if the ZFS volume creation passes
>>>>> but enabling dump on it fails. I filed CR 5687
>>>>> <http://defect.opensolaris.org/bz/show_bug.cgi?id=5687> to track it.
>>>>>
>>>>> BTW, while going through zfm_create_volumes(), I found that it can
>>>>> create several
>>>>> volumes at the same time, right?
>>>>>     
>>>>
>>>> Yes, that is correct.
>>>>
>>>>  
>>>>> But test_ti doesn't provide this functionality.
>>>>> Should we support it or not?
>>>>>     
>>>>
>>>> I am not sure - what is your opinion on this?
>>>>   
>>> What I mean is that test_ti can only create one volume at a time.
>>> There is no way to create several volumes at once by calling
>>> test_ti.
>>> If test_ti doesn't provide an interface for this kind of call,
>>> zfm_create_volumes() can't be fully tested.
>>
>> Understood. I think it is a good idea to enhance the test driver
>> to support this feature; I am just wondering what you
>> think about this from a QE perspective?
> Maybe we can let test_ti read a file which contains one zvol's
> attributes per line.
> That is:
> To create one zvol, call it as follows:
> test_ti -t l -p <zpool> -n <zvol_name> -z <zvol_size> -u <zvol_type>
> To create multiple zvols, call it as follows:
> test_ti -t l -p <zpool> -f <zvol_attributes_file>
> Each line in <zvol_attributes_file> uses whitespace to separate the 
> fields; the format of each line is:
> zvol_name zvol_size zvol_type

That sounds reasonable :-)
Could you please file a bug for this?
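
Just to be sure we understand it the same way - for illustration only,
with a made-up file name and sizes (in whatever units -z already
accepts) - a <zvol_attributes_file> containing

    swap 2048 swap
    dump 1024 dump

would then create both volumes with a single call:

    test_ti -t l -p rpool -f /tmp/zvols.txt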

>>
>>>>  
>>>>> 2) it failed to release the rpool if it has a volume of type swap
>>>>> or dump. I
>>>>> filed CR 5689
>>>>> <http://defect.opensolaris.org/bz/show_bug.cgi?id=5689> to
>>>>> track it.
>>>>> From the information I got, TI should release the rpool successfully
>>>>> no matter what kind of datasets it contains.
>>>>>     
>>>>
>>>> Currently, TI assumes that the ZFS volume dedicated to swap is
>>>> called 'swap' and
>>>> the ZFS volume created for dump is called 'dump'. This is compliant
>>>> with the 'ZFS boot'
>>>>   
>>> So the volume names can only be "swap" and "dump", and cannot be
>>> anything else, right?
>>
>> There is no technical restriction here, but right now the installer
>> only creates
>> swap and dump with those names. Speaking of dump, I think it is fine
>> to assume that it is always named 'dump', as it makes sense to have
>> only one on the system.
>> With respect to swap, the approach might be different: people
>> are used to creating swap devices at will, and the
>> AI installer might support more than one swap device.
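>>
>> Roughly speaking, what the installer does today amounts to something
>> like the following (the sizes here are made up):
>>
>>     zfs create -V 2g rpool/swap
>>     zfs create -V 1g rpool/dump
>>     swap -a /dev/zvol/dsk/rpool/swap
>>     dumpadm -d /dev/zvol/dsk/rpool/dump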
>>
>>> If so,
>>> 1) CR 5689 will not be a defect
>>
>> I think we can always assume that dump is named 'dump',
>> but we should take care of any swap device created in the pool.
>>
>>> 2) test_ti should not accept names other than swap/dump for these zvols
>>
>> I think this kind of check is out of the scope of the test
>> driver;
>> I would like to keep it generic, since the assumptions might change
>> in the future.
>>
>>> 3) and I need to modify test assertions accordingly.
>>
>> That is a good suggestion. Right now, I would recommend
>> always creating swap with the name 'swap' and dump with the
>> name 'dump' - this exactly reflects what the installer does
>> today. Should this change in the future, we would adjust the
>> test assertions to reflect that.
> So I'll modify the test assertions.

Thanks!

>>
>>>> specification (PSARC 2006/370):
>>>>
>>>> ...
>>>>
>>>> 6.5 Swap/Dump
>>>>
>>>> On systems with zfs root, swap and dump will be allocated from within
>>>> the root pool.  This has several advantages:  it simplifies the 
>>>> process of formatting disks for install, and it provides the ability
>>>> to re-size the swap and dump areas without having to re-partition 
>>>> the disk.
>>>> Both swap and dump will be zvols.
>>>>
>>>> On systems with zfs root, there will be separate swap and dump zvols.
>>>> The reasons for this are:
>>>>
>>>>    1.  The implementation of swap and dump to zvols (not discussed
>>>>        here because it isn't a visible interface) does not allow the
>>>>        same device to be used for both.
>>>>    2.  The total space used for swap and dump is a relatively small
>>>>        fraction of a typical disk size so the cost of having separate
>>>>        swap and dump devices is small.
>>>>    3.  It will be common for systems to need different amounts of 
>>>>        swap and dump space.  When they are separate zvols, it's easier
>>>>        to define the proper size of each.
>>>>
>>>> Because swap is implemented as a zvol, features such as encryption
>>>> and checksumming will automatically work for swap space (it is
>>>> particularly
>>>> important that encryption be supported).
>>>>
>>>> The swap and dump zvols for a pool will be the datasets:
>>>>
>>>>     <rootpool>/swap
>>>>     <rootpool>/dump
>>>>
>>>> ...
>>>>
>>>> For those cases (ZFS volume for swap named <pool>/swap,
>>>> ZFS volume for dump named <pool>/dump), releasing the ZFS
>>>> pool should succeed. Or is it failing even in that case?
>>>>   
>>> If the volume names are "swap" and "dump", then the release
>>> succeeds; see the log I attached.
>>> If the volume names are not swap/dump, then it fails, as filed in
>>> CR 5689.
>>
>> Thanks for testing this! Please see my comment above about
>> other names used for swap and dump.
>>
>>>>  
>>>>> If so, I think I should add more test assertions for the "release
>>>>> rpool"
>>>>> feature, as well as the "create an existing rpool" feature, with
>>>>> respect to the different datasets in the pool, right?
>>>>>     
>>>>
>>>> If the ZFS pool to be released contains either a regular ZFS
>>>> filesystem or a volume,
>>>> TI should successfully release that pool. I agree that a test
>>>> assertion should
>>>> probably be added to cover this scenario.
>>>>   
>>> I'll try to add them.
>>> Is the code that ti_create_target() calls for releasing an existing
>>> zpool the same as the code that ti_release_target() calls? If the
>>> code is reused, I will add test assertions only for releasing the
>>> pool.
>>
>> No. Looking at the source code, the code path is different.
>> That said, I think it could be considered a bug.
>> But since the installer currently uses ti_release_target() for
>> releasing
>> the pool, the path for destroying the pool in ti_create_target()
>> is not utilized.
>> Based on this, it should be sufficient to test releasing the pool
>> using ti_release_target().
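>>
>> (At the ZFS level, releasing the pool essentially boils down to
>> 'zpool destroy <pool>', which is why active swap or dump zvols get
>> in the way - they have to be deactivated first, e.g. swap via
>> 'swap -d /dev/zvol/dsk/<pool>/swap'.)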
>>
>>>
>>> BTW, I found that when trying to create a ZFS root pool, there is
>>> an attribute named TI_ATTR_ZFS_RPOOL_PRESERVE.
>>> Does that mean that when trying to create an existing ZFS root pool,
>>> the pool will be preserved if this attribute is set, and otherwise
>>> TI will release it first and then create a brand new ZFS root pool,
>>> right?
>>
>> Yes, that is correct. That said, this code path currently
>> can't release the pool if dump or swap zvols are there -
>> please see my comment above.
>>
>>> If so, in the preserved case, if the devices used to create the
>>> zpool are the same, then nothing happens, right? And if the devices
>>> are not the same, what will happen?
>>
>> If TI_ATTR_ZFS_RPOOL_PRESERVE is set, the pool is always left
>> untouched, regardless of the device names provided.
>>
>>> Also, test_ti doesn't have an interface for setting preserve on or off.
>>
>> That is correct - test_ti always sets TI_ATTR_ZFS_RPOOL_PRESERVE
>> to FALSE. We might enhance test_ti to provide this option.
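>>
>> For example - the option letter here is purely hypothetical - test_ti
>> could grow a switch on the pool target call such as:
>>
>>     test_ti -t <pool_target_type> -p rpool -P <0|1>
>>
>> which would simply map to setting TI_ATTR_ZFS_RPOOL_PRESERVE to
>> FALSE or TRUE.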
>>
>> Thank you,
>> Jan
>>
>>>
>>> Thanks,
>>> Angela
>>>> Thank you,
>>>> Jan
>>>>
>>>>  
>>>>> Thanks,
>>>>> Angela
>>>>>     
>>>>
>>>>   
>>>
>>
>

