You will face this restriction only if you are running the nodes of the
same cluster in the same physical box. With this kind of configuration,
you have a major single point of failure, which is the physical box itself
(e.g. a power failure of the box brings everything down).

  So we don't expect such a configuration to be set up with many nodes having
shared devices, and you can still add as many nodes not using shared devices
as you want.

  We haven't yet investigated the best way to remove that restriction, and
we haven't defined when this will be done.
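
  To illustrate the current workaround (the one described in the quoted text
below), here is a rough sketch of how the same LUN could be exported to two
guest domains through two different controllers. The device paths, volume
names and domain names are only examples, not taken from a real setup:

    # path to the LUN through a first HBA (c2), exported to guest1
    ldm add-vdsdev /dev/dsk/c2t1d0s2 sharedvol1@primary-vds0
    ldm add-vdisk shared-disk sharedvol1@primary-vds0 guest1

    # path to the same LUN through a second HBA (c3), exported to guest2
    ldm add-vdsdev /dev/dsk/c3t1d0s2 sharedvol2@primary-vds0
    ldm add-vdisk shared-disk sharedvol2@primary-vds0 guest2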

alex.

Misha Chawla Shanker wrote:
> 
> 
> On Wed, May 7, 2008 at 10:01 AM, Alexandre Chartre 
> <Alexandre.Chartre at sun.com> wrote:
> 
> 
>      This restriction is because of SCSI reservations: reservations are
>     identified by the SCSI controller, so for reservations made by
>     different guest domains to be distinguished, the guest domains have
>     to be associated with different SCSI controllers.
> 
> 
> So even if the LDoms manager were to remove the restriction that prevents 
> exporting the same path to multiple guests, SCSI reservations would not 
> work for such devices across those guests. How do you plan to solve this 
> in the long term? Will the SCSI reservation logic be simulated in the 
> guest code somehow to fake this?
> 
> Regards,
> Misha.
> 
> 
> 
>     alex.
> 
>     Misha Chawla Shanker wrote:
> 
>         OK, let me rephrase my concern: since you are recommending using
>         different paths via different HBAs, a given device can be shared
>         across a cluster of at most as many nodes as there are HBAs on
>         that system. For example, if there are 4 HBAs in a box, then you
>         can share diskX only in a cluster of at most 4 nodes (via the 4
>         paths diskX1, diskX2, diskX3 and diskX4).
> 
>         Another thing: this reduces the redundancy of paths to a device
>         within a particular cluster node.
> 
>         Ideally, it should be possible to export the same path to
>         multiple guests to achieve sharing in the true sense. I guess
>         there are plans to provide such an option in future releases
>         of LDoms?
> 
>         Regards,
>         Misha.
> 
>         On Wed, May 7, 2008 at 9:47 AM, Alexandre Chartre
>         <Alexandre.Chartre at sun.com> wrote:
> 
> 
>             Yes, you can create many guest domains on the same box if
>            they are in different clusters (because you are not going to
>            share disks between different clusters).
> 
>             If guest domains are from the same cluster then you have
>            some limitations if you want to share disks between guests
>            on the same box.
> 
>            alex.
> 
> 
>            Ashutosh Tripathi wrote:
> 
>                Hi Misha,
> 
>                To clarify this point.
> 
>                 > Doesn't this restrict the number of clusters that can
>                 > be formed inside a box to the number of HBAs available
>                 > on the box? How well does this scale? Up to 4 nodes
>                 > within a box?
>                 >
> 
>                The granularity is on a per *disk* basis, not on a per
>                HBA basis. So you should be able to create many guest
>                domains.
> 
>                -ashu
> 
>                Misha Chawla Shanker wrote:
> 
>                    Hi Alex,
> 
> 
>                    On Wed, May 7, 2008 at 9:24 AM, Alexandre Chartre
>                    <Alexandre.Chartre at sun.com> wrote:
> 
> 
>                        Sun Cluster support in guest domains is currently
>                       being qualified and it will be officially supported
>                       on S10U5 with Sun Cluster 3.2U1 (+ some patches).
>                       The support is planned to be announced when
>                       LDoms 1.0.3 is released.
> 
>                        Sun Cluster in a guest domain with Nevada should
>                       work since build 80 (assuming you use a version of
>                       Sun Cluster which works on Nevada).
> 
>                        Note that there are some limitations; the main one
>                       in your case is that Sun Cluster shared disks in a
>                       guest domain have to be virtual disks whose backends
>                       are physical SCSI disks/LUNs.
> 
>                        Also, the same disk should not be exported to
>                       different guest domains using the same disk path.
>                       The same disk has to be exported to different guest
>                       domains using disk paths through different SCSI
>                       controllers/HBAs.
> 
> 
>                    Doesn't this restrict the number of clusters that can
>                    be formed inside a box to the number of HBAs available
>                    on the box? How well does this scale? Up to 4 nodes
>                    within a box?
> 
>                    Regards,
>                    Misha.
> 
> 
>                       alex.
> 
>                       Maciej Browarski wrote:
>                        > Hello,
>                        > Is there any chance to have a cluster between 3
>                        > guest Logical Domains on one physical server?
>                        > (I use Nevada 85 and openexpress 02/08)
>                        > Or does the cluster software install on guest
>                        > LDoms some drivers which need access to real
>                        > hardware?
>                        >
>                        > Regards,
>                        >