Ah, you are correct... it's getting late for me. LOL

Thanks Ellard!

 *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
Octave J. Orgeron
Solaris Virtualization Architect and Consultant
Web: http://unixconsole.blogspot.com
E-Mail: unixconsole at yahoo.com
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*



----- Original Message ----
From: Ashutosh Tripathi <ashutosh.tripa...@sun.com>
To: Octave Orgeron <unixconsole at yahoo.com>
Cc: Ellard.Roush at Sun.COM; ldoms-discuss at opensolaris.org; Paolo Merisio 
<merisiop at gmail.com>; clusters <ha-clusters-discuss at opensolaris.org>
Sent: Tuesday, July 21, 2009 10:58:22 PM
Subject: Re: [ha-clusters-discuss] iSCSI & Quorum [was: Re: [ldoms-discuss] 
LDoms 1.2]

Octave Orgeron wrote:
> Hi Ashutosh,
      ^^^^^^^^

   Spelling mistake! "Ellard" is what you meant, I think.

-ashu

> 
> Thank you for the clarification on how iSCSI can be used with SC. I assume 
> the same applies to OpenHA on OpenSolaris? This is something I'd like to 
> try out. Thanks again!
> 
> Thanks!
> 
>  *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
> Octave J. Orgeron
> Solaris Virtualization Architect and Consultant
> Web: http://unixconsole.blogspot.com
> E-Mail: unixconsole at yahoo.com
> *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
> 
> 
> 
> ----- Original Message ----
> From: Ellard Roush <Ellard.Roush at Sun.COM>
> To: Octave Orgeron <unixconsole at yahoo.com>
> Cc: Ashutosh Tripathi <Ashutosh.Tripathi at Sun.COM>; ldoms-discuss at 
> opensolaris.org; Paolo Merisio <merisiop at gmail.com>; clusters 
> <ha-clusters-discuss at opensolaris.org>
> Sent: Tuesday, July 21, 2009 7:05:34 PM
> Subject: Re: [ha-clusters-discuss] iSCSI & Quorum [was: Re: [ldoms-discuss] 
> LDoms 1.2]
> 
> Hi,
> 
> When the administrator adds an iSCSI device to the system, that device looks
> like a SCSI device (at least the Quorum subsystem sees the device as
> a SCSI device).
> 
> Sun Cluster supports an iSCSI device as a Quorum device when the
> iSCSI device is on the same subnet as the cluster nodes.
> We do not support an iSCSI device as a Quorum device when
> the iSCSI device is on a different subnet. The limitation is due
> to the behavior of iSCSI in Solaris 10.
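> 
> As a rough sketch (the target address below is only a placeholder),
> making such a LUN visible on Solaris 10 and to the cluster would look
> something like this on each node:
> 
>    # iscsiadm add discovery-address 192.168.10.50:3260
>    # iscsiadm modify discovery --sendtargets enable
>    # devfsadm -i iscsi
>    # cldevice populate
> 
> The LUN then shows up under a DID name just like any other shared
> SCSI disk.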
> 
> The iSCSI device could be configured for Quorum using
> any of the following:
> 
>    SCSI2
>    SCSI3
>    Software Quorum
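> 
> For example (just a sketch, using the device name from this thread),
> adding the device with the default protocol is simply:
> 
>    # clquorum add d5
> 
> and for Software Quorum, fencing is disabled on the device first:
> 
>    # cldevice set -p default_fencing=nofencing d5
>    # clquorum add d5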
> 
> The iSCSI device that will be used as a quorum device cannot
> be a disk that is locally connected to only one cluster node
> and exported via iSCSI. Each cluster node must have a direct
> path to the device that is not dependent on another node being up.
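> 
> A quick sanity check is to confirm that cldevice lists a local path
> for every node, e.g.:
> 
>    # cldevice show d5 | grep "Full Device Path"
> 
> Each cluster node should appear with its own path to the LUN.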
> 
> I am not aware of any limitations caused in this area when
> Sun Cluster runs on LDoms.
> 
> Regards,
> Ellard
> 
> On 07/20/09 12:34, Octave Orgeron wrote:
>> Thanks for the suggestion. I'm interested in testing out SC or OpenHA on LDoms 
>> using iSCSI, since I don't have a SAN at home. Thanks!
>>
>>  
>> *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
>> Octave J. Orgeron
>> Solaris Virtualization Architect and Consultant
>> Web: http://unixconsole.blogspot.com
>> E-Mail: unixconsole at yahoo.com
>> *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
>>
>>
>>
>> ----- Original Message ----
>> From: Ashutosh Tripathi <Ashutosh.Tripathi at Sun.COM>
>> To: Octave Orgeron <unixconsole at yahoo.com>
>> Cc: Paolo Merisio <merisiop at gmail.com>; ldoms-discuss at opensolaris.org; 
>> clusters <ha-clusters-discuss at opensolaris.org>
>> Sent: Monday, July 20, 2009 2:27:24 PM
>> Subject: iSCSI & Quorum [was: Re: [ldoms-discuss] LDoms 1.2]
>>
>> [Cross posting to ha-clusters-discuss]
>> Hi Octave,
>>
>>     Given that iSCSI targets are purely a network thing, I doubt
>> very much that using them with or without LDoms makes any difference.
>>
>>     That leaves the question of how SC/OHAC supports iSCSI disks.
>>
>> I believe that we (SC) have focused on OpenSolaris/OHAC for iSCSI support,
>> given the rather large number of iSCSI issues on S10. Perhaps
>> someone on ha-clusters-discuss can shed more light on this question
>> (both general iSCSI support and support for iSCSI disks as quorum devices).
>>
>> Regards,
>> -ashu
>>
>>
>> Octave Orgeron wrote:
>>> Quick question: are iSCSI disks supported for quorum in SC or OpenHA 
>>> with LDoms?
>>>
>>>  
>>> *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
>>> Octave J. Orgeron
>>> Solaris Virtualization Architect and Consultant
>>> Web: http://unixconsole.blogspot.com
>>> E-Mail: unixconsole at yahoo.com
>>> *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
>>>
>>>
>>>
>>> ----- Original Message ----
>>> From: Ashutosh Tripathi <Ashutosh.Tripathi at Sun.COM>
>>> To: Paolo Merisio <merisiop at gmail.com>
>>> Cc: ldoms-discuss at opensolaris.org
>>> Sent: Monday, July 20, 2009 1:16:40 PM
>>> Subject: Re: [ldoms-discuss] LDoms 1.2
>>>
>>> Hi Paolo,
>>>
>>>     Tell us a bit more about the kind of device
>>> d5 is. The only kind on which Quorum is supported is
>>> one backed by a full LUN in the I/O domain.
>>>
>>>     d5 looks suspiciously small to be a full disk, but
>>> can you confirm?
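>>>
>>> One way to check (just a sketch; the domain name below is made up) is
>>> to look at the virtual disk backend from the control domain:
>>>
>>>    # ldm list -o disk ldg1
>>>    # ldm list-services primary
>>>
>>> If the vds backend for d5 points at a file or a small slice rather
>>> than a whole LUN (e.g. a full cXtXdXs2 device), it is not a supported
>>> quorum configuration.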
>>>
>>> -ashu
>>>
>>> Paolo Merisio wrote:
>>>> Ok, here more information about new test.
>>>>
>>>> root at n1 # [b]metaset -s 2ds[/b]
>>>>
>>>> Set name = 2ds, Set number = 2
>>>>
>>>> Host                Owner
>>>>   n1                 Yes
>>>>   n2
>>>>   n3
>>>>
>>>> Driv Dbase
>>>>
>>>> d5   Yes
>>>>
>>>> [b]Import/export of metaset, mount of fs in metaset are all ok on
>>>> all 3 nodes[/b]
>>>>
>>>> root at n1 # [b]cldev show d5[/b]
>>>>
>>>> === DID Device Instances ===
>>>>
>>>> DID Device Name:                                /dev/did/rdsk/d5
>>>>   Full Device Path:                                n1:/dev/rdsk/c0d2
>>>>   Full Device Path:                                n2:/dev/rdsk/c0d2
>>>>   Full Device Path:                                n3:/dev/rdsk/c0d2
>>>>   Replication:                                     none
>>>>   default_fencing:                                 nofencing
>>>>
>>>> root at n1 # [b]prtvtoc /dev/rdsk/c0d2s0[/b]
>>>> * /dev/rdsk/c0d2s0 partition map
>>>> *
>>>> * Dimensions:
>>>> *     512 bytes/sector
>>>> *      16 sectors/track
>>>> *       4 tracks/cylinder
>>>> *      64 sectors/cylinder
>>>> *   32768 cylinders
>>>> *   32766 accessible cylinders
>>>> *
>>>> * Flags:
>>>> *   1: unmountable
>>>> *  10: read-only
>>>> *
>>>> *                          First     Sector    Last
>>>> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>>>>        0      4    00       8256   2088768   2097023
>>>>        7      4    01          0      8256      8255
>>>>
>>>>
>>>> root at n1 # tail -f /var/adm/messages&
>>>> [1] 5597
>>>>
>>>> Jul 17 09:49:41 n1 Cluster.Framework: [ID 801593 daemon.notice] stdout: 
>>>> becoming primary for 2ds
>>>>
>>>> [b]root at n1 # clq add d5
>>>> clq:  (C192716) I/O error.
>>>> root at n1 # [/b]
>>>>
>>>> No other errors on the consoles or in the messages files of the other nodes.
>>>> I hope this helps you troubleshoot this problem.
>>>>
>>>> Thanks
>>> _______________________________________________
>>> ldoms-discuss mailing list
>>> ldoms-discuss at opensolaris.org
>>> http://mail.opensolaris.org/mailman/listinfo/ldoms-discuss
>>>
>>>
>>>
>>>      
>>
>>      
>> _______________________________________________
>> ha-clusters-discuss mailing list
>> ha-clusters-discuss at opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/ha-clusters-discuss
> 
> 
> 
>      


      
