[Cross posting to ha-clusters-discuss]
Hi Octave,

        Given that iSCSI targets are purely a network construct, I very much
doubt that using them with or without LDoms makes any difference.

        That leaves the question of how SC/OHAC supports iSCSI disks.

I believe that we (SC) have focused on OpenSolaris/OHAC for iSCSI support,
given the rather large number of issues iSCSI has on S10. Perhaps
someone on ha-clusters-discuss can shed more light on this question
(both general iSCSI support and support for iSCSI disks as quorum devices).
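
For anyone who wants to experiment, a rough outline of how one might probe
this on a node (command names assume the Solaris iscsiadm initiator and the
SC 3.2-era cldevice/clquorum CLI; a sketch, not a verified recipe):

```shell
# Sketch only -- run on one cluster node; assumes SC 3.2+ command names.
# 1. Confirm the initiator actually sees the iSCSI LUN.
iscsiadm list target -S
# 2. Check how it maps to a DID instance (it must show up as a full LUN
#    on every node that will use it).
cldevice list -v
# 3. Try adding the DID device as a quorum device and inspect the result.
clquorum add d5
clquorum status
```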

Regards,
-ashu


Octave Orgeron wrote:
> Quick question, are iSCSI disks supported for quorum and in SC or Open HA 
> with LDoms?
> 
>  *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
> Octave J. Orgeron
> Solaris Virtualization Architect and Consultant
> Web: http://unixconsole.blogspot.com
> E-Mail: unixconsole at yahoo.com
> *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
> 
> 
> 
> ----- Original Message ----
> From: Ashutosh Tripathi <Ashutosh.Tripathi at Sun.COM>
> To: Paolo Merisio <merisiop at gmail.com>
> Cc: ldoms-discuss at opensolaris.org
> Sent: Monday, July 20, 2009 1:16:40 PM
> Subject: Re: [ldoms-discuss] LDoms 1.2
> 
> Hi Paolo,
> 
>     Tell us a bit more about the kind of device
> d5 is. Quorum is supported only on devices backed by
> full LUNs in the I/O domain.
> 
>     d5 looks suspiciously small to be a full disk, but
> can you confirm?
> 
> -ashu
> 
> Paolo Merisio wrote:
>> Ok, 
>> here more information about new test.
>>
>> root at n1 # metaset -s 2ds
>>
>> Set name = 2ds, Set number = 2
>>
>> Host                Owner
>>   n1                 Yes
>>   n2
>>   n3
>>
>> Driv Dbase
>>
>> d5   Yes
>> Import/export of the metaset and mounting of the filesystem in the
>> metaset all work on all 3 nodes.
>> root at n1 # cldev show d5
>>
>> === DID Device Instances ===                  
>> DID Device Name:                                /dev/did/rdsk/d5
>>   Full Device Path:                                n1:/dev/rdsk/c0d2
>>   Full Device Path:                                n2:/dev/rdsk/c0d2
>>   Full Device Path:                                n3:/dev/rdsk/c0d2
>>   Replication:                                     none
>>   default_fencing:                                 nofencing
>>
>> root at n1 # prtvtoc /dev/rdsk/c0d2s0
>> * /dev/rdsk/c0d2s0 partition map
>> *
>> * Dimensions:
>> *     512 bytes/sector
>> *      16 sectors/track
>> *       4 tracks/cylinder
>> *      64 sectors/cylinder
>> *   32768 cylinders
>> *   32766 accessible cylinders
>> *
>> * Flags:
>> *   1: unmountable
>> *  10: read-only
>> *
>> *                          First     Sector    Last
>> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>>        0      4    00       8256   2088768   2097023
>>        7      4    01          0      8256      8255
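
For what it's worth, the prtvtoc geometry above pins down the raw size of
c0d2. A quick sanity check in plain POSIX shell (numbers copied straight from
the quoted output) shows the disk is only about 1 GB, which is indeed
suspiciously small for a full LUN:

```shell
# Raw capacity implied by the prtvtoc output above:
# 512 bytes/sector * 64 sectors/cylinder * 32766 accessible cylinders
bytes=$((512 * 64 * 32766))
echo "c0d2: $((bytes / 1024 / 1024)) MB"   # prints "c0d2: 1023 MB"
```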
>>
>>
>> root at n1 # tail -f /var/adm/messages&
>> [1] 5597
>>
>> Jul 17 09:49:41 n1 Cluster.Framework: [ID 801593 daemon.notice] stdout: 
>> becoming primary for 2ds
>>
>> root at n1 # clq add d5
>> clq:  (C192716) I/O error.
>> root at n1 #
>>
>> No other errors appear on the consoles or in the messages files of the
>> other nodes. I hope this helps you troubleshoot the problem.
>>
>> Thanks
> 
> _______________________________________________
> ldoms-discuss mailing list
> ldoms-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/ldoms-discuss
> 
