On 02/11/2011 20:01, Drew Fisher wrote:
> Round 3:
> 
> Now with an x86 test. 
> 
> https://cr.opensolaris.org/action/browse/caiman/drewfish/7088826_3/webrev/
> 
> I've confirmed both new tests run as expected and reconfirmed my original
> tests still run correctly.
> 
> Darren:  the reason the SPARC test passed in round 2 is that I never
> called __run_simple_test() in the test.  I just set a test_manifest XML
> string and an expected string.  No errors in that!  /sigh
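> For the record, the fix amounts to something like the sketch below;
> __run_simple_test() is the suite's real helper, but its signature and
> the test name here are my illustration, and the manifest contents are
> elided:
>
>     # In test_target_selection_sparc.py (sketch only)
>     def test_fc_aliases(self):
>         test_manifest_xml = '<ai_instance> ... </ai_instance>'  # elided
>         expected_xml = '<target> ... </target>'                 # elided
>         # Round 2 stopped after building these strings, so nothing
>         # was ever executed or compared.  Actually drive the helper:
>         self.__run_simple_test(test_manifest_xml, expected_xml)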

That'll do it alright ;)

I'm happy with the current set of changes,

Thanks,

Darren.

> 
> -Drew
> 
> On 11/1/11 8:28 AM, Drew Fisher wrote:
>> Round 2:
>>
>> https://cr.opensolaris.org/action/browse/caiman/drewfish/7088826_2/webrev/
>>
>> Niall and I talked off-line about the verify_disk_read() comments he
>> brought up.  I explained that reading from the raw whole-disk device
>> may not work (you can't open that device node when the disk carries a
>> VTOC label), so I added a check for both the raw 'ctd' path and the
>> 'ctds2' path.
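>> Roughly, the new check works like the sketch below (paths and helper
>> structure are illustrative, not the actual code):
>>
>>     import os
>>
>>     def verify_disk_read(ctd):
>>         # With a VTOC label the raw whole-disk node may not open at
>>         # all, so try slice 2 (the traditional whole-disk slice) too.
>>         for path in ("/dev/rdsk/%s" % ctd, "/dev/rdsk/%ss2" % ctd):
>>             try:
>>                 fd = os.open(path, os.O_RDONLY | os.O_NDELAY)
>>             except OSError:
>>                 continue                # this node won't even open
>>             try:
>>                 os.read(fd, 512)        # the low-level read(2) probe
>>                 return True
>>             except OSError:
>>                 pass
>>             finally:
>>                 os.close(fd)
>>         return False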
>>
>> I've also added a test to test_target_selection_sparc.py but if more
>> should be added, please let me know.
>>
>> Thanks!
>>
>> -Drew
>>
>> On 10/26/11 9:48 PM, Drew Fisher wrote:
>>> Good evening!
>>>
>>> Could I get a couple pairs of eyes on this RFE?
>>>
>>> 7088826 <http://monaco.us.oracle.com/detail.jsf?cr=7088826> target
>>> discovery needs augmentation to handle aliases, wwn, and active/passive
>>> paths
>>>
>>> https://cr.opensolaris.org/action/browse/caiman/drewfish/7088826/webrev/
>>>
>>> Testing for this one has been all kinds of fun!  My test system is a
>>> SPARC T1000 with a FC HBA attached.
>>>
>>> Here's my format output:
>>>
>>> AVAILABLE DISK SELECTIONS:
>>>        0. c2t0d0 <SUN82G cyl 65533 alt 2 hd 16 sec 153>
>>>           /pci@7c0/pci@0/pci@8/scsi@2/sd@0,0
>>>        1. c3t200600A0B821FC0Bd0 <SUN-CSM100_R_FC-0660 cyl 10238 alt 2 hd 64 sec 64>
>>>           /pci@780/SUNW,qlc@0,1/fp@0,0/ssd@w200600a0b821fc0b,0
>>>        2. c3t266000C0FFE080C4d0 <SUN-StorEdge 3511-421F cyl 65533 alt 2 hd 64 sec 348>
>>>           /pci@780/SUNW,qlc@0,1/fp@0,0/ssd@w266000c0ffe080c4,0
>>>        3. c4t200700A0B821FC0Ad0 <drive type unknown>
>>>           /pci@780/SUNW,qlc@0/fp@0,0/ssd@w200700a0b821fc0a,0
>>>        4. c4t226000C0FF9080C4d0 <SUN-StorEdge 3511-421F cyl 65533 alt 2 hd 64 sec 348>
>>>           /pci@780/SUNW,qlc@0/fp@0,0/ssd@w226000c0ff9080c4,0
>>>
>>>
>>> Disk 0 is simply the internal disk and not interesting, so ignore that.
>>>
>>> Disks 1 and 3 are the same disk.  The HBA is set up in "active /
>>> passive" mode, which means the passive path is all but offline:
>>> it'll answer to format and libdiskmgt, but low-level read(2) calls
>>> will fail.  That read failure is how I identify a passive disk.
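>>> In sketch form, the probe is something like this (illustrative, not
>>> the actual code):
>>>
>>>     import os
>>>
>>>     def disk_is_readable(path):
>>>         # A passive path still answers to format and libdiskmgt, but
>>>         # a low-level read(2) against it fails; an active path reads
>>>         # fine.  'path' is a raw node like '/dev/rdsk/<ctd>s2'.
>>>         try:
>>>             fd = os.open(path, os.O_RDONLY | os.O_NDELAY)
>>>             try:
>>>                 os.read(fd, 512)    # one disk block
>>>             finally:
>>>                 os.close(fd)
>>>         except OSError:
>>>             return False            # open or read failed: passive
>>>         return True                 # read worked: active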
>>>
>>> Disks 2 and 4 are the same disk.  The difference is that both paths
>>> are "active", so each one behaves exactly like a regular disk.
>>>
>>> With this new code, only three disks (in this example) are
>>> "discovered": the boot disk and the first "active" path in each pair.
>>> Passive paths are not discovered, and neither are additional active
>>> aliases.  This prevents the user from specifying one active alias for
>>> one zpool and another active alias for a second zpool.  Remember,
>>> those would be the same physical disk on the back end, and that would
>>> be ... not good ™.
>>>
>>> When target discovery finds a passive disk, it records the CTD string in
>>> an attribute in the active Disk object so we can match on it later. 
>>> When target discovery finds multiple active aliases for the same disk,
>>> we construct a Disk object out of the first alias and record any
>>> additional aliases for later matching.
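>>> Conceptually, the bookkeeping looks like this (the class and
>>> attribute names are made up for illustration; only the behavior is
>>> from the actual change):
>>>
>>>     class Disk(object):
>>>         def __init__(self, ctd):
>>>             self.ctd = ctd           # the one alias we discover
>>>             self.active_ctds = []    # additional active aliases
>>>             self.passive_ctds = []   # passive paths, recorded only
>>>
>>>     def resolve(disks, ctd):
>>>         # Map a manifest CTD back to the single Disk object that
>>>         # discovery kept: the primary name and any recorded active
>>>         # alias resolve to the same object, so two aliases for one
>>>         # physical disk can never land in two zpools.
>>>         for disk in disks:
>>>             if ctd == disk.ctd or ctd in disk.active_ctds:
>>>                 return disk
>>>         return None    # passive CTDs fall through ("Unable to locate")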
>>>
>>> This will likely only present itself on SPARC machines, due to CR
>>> 6969682: mpxio is disabled by default on SPARC.  With mpxio enabled,
>>> the format output is totally different, because mpxio handles all of
>>> the multipathing and makes it completely transparent to the user.
>>>
>>> Testing
>>> -------
>>>
>>> I tested AI manifests which specify a passive disk (disk 3 in the format
>>> output):
>>>
>>> 20:18:12    Error occurred during execution of 'target-selection'
>>> checkpoint.
>>> 20:18:12    Failed Checkpoints:
>>> 20:18:12
>>> 20:18:12        target-selection
>>> 20:18:12
>>> 20:18:12    Checkpoint execution error:
>>> 20:18:12
>>> 20:18:12        Unable to locate the disk 'c4t200700A0B821FC0Ad0' on the
>>> system.
>>> 20:18:12
>>> 20:18:12    Automated Installation Failed.  See install log at
>>> /system/volatile/install_log
>>>
>>> I tested AI manifests with two active CTDs to the same physical disk
>>> (disks 2 and 4):
>>>
>>> 20:17:48    Error occurred during execution of 'target-selection'
>>> checkpoint.
>>> 20:17:48    Failed Checkpoints:
>>> 20:17:48
>>> 20:17:48        target-selection
>>> 20:17:48
>>> 20:17:48    Checkpoint execution error:
>>> 20:17:48
>>> 20:17:48        Disk 'c4t226000C0FF9080C4d0' matches already used disk
>>> 'c3t266000C0FFE080C4d0'.
>>> 20:17:48
>>> 20:17:48    Automated Installation Failed.  See install log at
>>> /system/volatile/install_log
>>>
>>>
>>> I tested specifying the second active alias (disk 4).  AI proceeded
>>> normally and installed to disk 2, exactly as it should.
>>>
>>> Naturally, all other tests run as expected and pass with no new regressions.
>>>
>>> Darren:  does anything specific need to be done for versioning now
>>> that I've changed the DTD?
>>>
>>> Thanks!
>>>
>>> -Drew
>>>
>>
_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
