Hang on, can you do cluster anti-affinity?  I know you can with hosts, but
I don't remember if you can do the same thing with clusters...

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Fri, Sep 9, 2016 at 7:09 PM, Will Stevens <wstev...@cloudops.com> wrote:

> Yes, that is essentially the same thing.  You would create your
> anti-affinity between clusters instead of hosts.  That is also an option...
>
> *Will STEVENS*
> Lead Developer
>
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
>
> On Fri, Sep 9, 2016 at 7:05 PM, Simon Weller <swel...@ena.com> wrote:
>
>> Why not just use different primary storage per cluster? You can then
>> control your storage failure domains on a cluster basis.
>>
>> Simon Weller/ENA
>> (615) 312-6068
>>
>> -----Original Message-----
>> From: Will Stevens [wstev...@cloudops.com]
>> Received: Friday, 09 Sep 2016, 5:46PM
>> To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
>> Subject: Re: storage affinity groups
>>
>> I have not really thought through this use case, but off the top of my
>> head, you MAY be able to do something like use host anti-affinity and then
>> use different primary storage per host affinity group.  I know this is not
>> the ideal solution, but it will limit the primary storage failure domain to
>> a set of affinity hosts.  This pushes the responsibility of HA to the
>> application deployer, which I think you are expecting to be the case
>> anyway.  You still have a single point of failure with the load balancers
>> unless you implement GSLB.
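>>
>> For illustration, a rough CloudMonkey sketch of the anti-affinity half of
>> that (the group name is made up; createAffinityGroup and the
>> affinitygroupnames parameter are the stock API, but verify against your
>> version):
>>
>>     create affinitygroup name=web-aag type="host anti-affinity"
>>     deploy virtualmachine serviceofferingid=... templateid=... zoneid=... affinitygroupnames=web-aag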
>>
>> This will likely complicate your capacity management, but it may be a
>> short-term solution for your problem until a better solution is developed.
>>
>> If I think of other potential solutions I will post them, but that is what
>> I have for right now.
>>
>> *Will STEVENS*
>> Lead Developer
>>
>> *CloudOps* *| *Cloud Solutions Experts
>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>> w cloudops.com *|* tw @CloudOps_
>>
>> On Fri, Sep 9, 2016 at 3:44 PM, Yiping Zhang <yzh...@marketo.com> wrote:
>>
>> > Will described my use case perfectly.
>> >
>> > Ideally, the underlying storage technology used for the cloud should
>> > provide the reliability required.  But not every company has the money for
>> > the best storage technology on the market. So the next best thing is to
>> > provide some fault-tolerance redundancy through the app and at the same
>> > time make it easy to use for end users and administrators alike.
>> >
>> > Regards,
>> >
>> > Yiping
>> >
>> > On 9/9/16, 11:49 AM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
>> >
>> >     Yep, based on the recent e-mail Yiping sent, I would agree, Will.
>> >
>> >     For the time being, you have two options: 1) storage tagging, or 2)
>> >     fault-tolerant primary storage like a SAN.
>> >     ________________________________________
>> >     From: williamstev...@gmail.com <williamstev...@gmail.com> on behalf of
>> >     Will Stevens <wstev...@cloudops.com>
>> >     Sent: Friday, September 9, 2016 12:44 PM
>> >     To: dev@cloudstack.apache.org
>> >     Subject: Re: storage affinity groups
>> >
>> >     My understanding is that he wants to do anti-affinity across primary
>> >     storage endpoints.  So if he has two web servers, it would ensure that
>> >     one of his web servers is on Primary1 and the other is on Primary2.
>> >     This means that if he loses a primary storage for some reason, he only
>> >     loses one of his load-balanced web servers.
>> >
>> >     Does that sound about right?
>> >
>> >     *Will STEVENS*
>> >     Lead Developer
>> >
>> >     *CloudOps* *| *Cloud Solutions Experts
>> >     420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>> >     w cloudops.com *|* tw @CloudOps_
>> >
>> >     On Fri, Sep 9, 2016 at 2:40 PM, Tutkowski, Mike <mike.tutkow...@netapp.com>
>> >     wrote:
>> >
>> >     > Hi Yiping,
>> >     >
>> >     > Reading your most recent e-mail, it seems like you are looking for a
>> >     > feature that does more than simply make sure virtual disks are
>> >     > roughly allocated equally across the primary storages of a given
>> >     > cluster.
>> >     >
>> >     > At first, that is what I imagined your request to be.
>> >     >
>> >     > From this e-mail, though, it looks like this is something you'd like
>> >     > users to be able to personally choose (ex. a user might want virtual
>> >     > disk 1 on different storage than virtual disk 2).
>> >     >
>> >     > Is that a fair representation of your request?
>> >     >
>> >     > If so, I believe storage tagging (as was mentioned by Marty) is the
>> >     > only way to do that at present. It does, as you indicated, lead to a
>> >     > proliferation of offerings, however.
>> >     >
>> >     > As for how I personally solve this issue: I do not run a cloud. I
>> >     > work for a storage vendor. In our situation, the clustered SAN that
>> >     > we develop is highly fault tolerant. If the SAN is offline, then it
>> >     > probably means your entire datacenter is offline (ex. power loss of
>> >     > some sort).
>> >     >
>> >     > Talk to you later,
>> >     > Mike
>> >     > ________________________________________
>> >     > From: Yiping Zhang <yzh...@marketo.com>
>> >     > Sent: Friday, September 9, 2016 11:08 AM
>> >     > To: dev@cloudstack.apache.org
>> >     > Subject: Re: storage affinity groups
>> >     >
>> >     > I am not a Java developer, so I am at a total loss on Mike’s
>> >     > approach. How would end users choose this new storage pool allocator
>> >     > from the UI when provisioning a new instance?
>> >     >
>> >     > My hope is that if the feature is added to ACS, end users can assign
>> >     > an anti-storage affinity group to VM instances, just as they assign
>> >     > anti-host affinity groups from the UI or API, either at VM creation
>> >     > time or by updating assignments for existing instances (along with
>> >     > any necessary VM stop/start, storage migration actions, etc.).
>> >     >
>> >     > Obviously, this feature is useful only when more than one primary
>> >     > storage device is available for the same cluster or zone (in the
>> >     > case of zone-wide primary storage volumes).
>> >     >
>> >     > Just curious: how many primary storage volumes are available for
>> >     > your clusters/zones?
>> >     >
>> >     > Regards,
>> >     > Yiping
>> >     >
>> >     > On 9/8/16, 6:04 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
>> >     >
>> >     >     Personally, I think the most flexible way is to have a developer
>> >     >     write a storage-pool allocator to customize the placement of
>> >     >     virtual disks as you see fit.
>> >     >
>> >     >     You extend the StoragePoolAllocator class, write your logic, and
>> >     >     update a config file so that Spring is aware of the new allocator
>> >     >     and creates an instance of it when the management server is
>> >     >     started up.
>> >     >
>> >     >     You might even want to extend ClusterScopeStoragePoolAllocator
>> >     >     (instead of directly implementing StoragePoolAllocator), as it
>> >     >     possibly provides some useful functionality for you already.
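>> >     >
>> >     >     A minimal sketch of what that subclass might look like. The
>> >     >     class and method names below follow the 4.x code base, but the
>> >     >     exact signatures vary by version, and the anti-affinity filter
>> >     >     itself is purely hypothetical:
>> >     >
>> >     >     import java.util.List;
>> >     >
>> >     >     import org.apache.cloudstack.storage.allocator.ClusterScopeStoragePoolAllocator;
>> >     >
>> >     >     import com.cloud.deploy.DeploymentPlan;
>> >     >     import com.cloud.deploy.DeploymentPlanner.ExcludeList;
>> >     >     import com.cloud.storage.StoragePool;
>> >     >     import com.cloud.vm.DiskProfile;
>> >     >     import com.cloud.vm.VirtualMachineProfile;
>> >     >
>> >     >     public class AntiAffinityStoragePoolAllocator extends
>> >     >             ClusterScopeStoragePoolAllocator {
>> >     >
>> >     >         @Override
>> >     >         protected List<StoragePool> select(DiskProfile dskCh,
>> >     >                 VirtualMachineProfile vmProfile, DeploymentPlan plan,
>> >     >                 ExcludeList avoid, int returnUpTo) {
>> >     >             // Let the stock cluster-scope logic gather candidate pools.
>> >     >             List<StoragePool> pools =
>> >     >                     super.select(dskCh, vmProfile, plan, avoid, returnUpTo);
>> >     >             // Hypothetical custom step: drop any pool that already
>> >     >             // holds a volume of another VM in the same group, so
>> >     >             // group members land on different primary storage.
>> >     >             return filterByStorageAntiAffinity(pools, vmProfile);
>> >     >         }
>> >     >
>> >     >         // Placeholder: a real implementation would use the volume
>> >     >         // and pool DAOs to find the group's existing volumes and
>> >     >         // exclude their pools from the candidate list.
>> >     >         private List<StoragePool> filterByStorageAntiAffinity(
>> >     >                 List<StoragePool> pools, VirtualMachineProfile vm) {
>> >     >             return pools;
>> >     >         }
>> >     >     }
>> >     >
>> >     >     You would then register the new class as a bean in the relevant
>> >     >     Spring context XML so the management server instantiates it at
>> >     >     startup.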
>> >     >     ________________________________________
>> >     >     From: Marty Godsey <ma...@gonsource.com>
>> >     >     Sent: Thursday, September 8, 2016 6:27 PM
>> >     >     To: dev@cloudstack.apache.org
>> >     >     Subject: RE: storage affinity groups
>> >     >
>> >     >     So what would be the best way to do it? I use templates to make
>> >     >     it simple for my users so that, for example, the Xen tools are
>> >     >     already installed.
>> >     >
>> >     >     Regards,
>> >     >     Marty Godsey
>> >     >
>> >     >     -----Original Message-----
>> >     >     From: Yiping Zhang [mailto:yzh...@marketo.com]
>> >     >     Sent: Thursday, September 8, 2016 7:55 PM
>> >     >     To: dev@cloudstack.apache.org
>> >     >     Subject: Re: storage affinity groups
>> >     >
>> >     >     Well, using tags leads to a proliferation of templates, service
>> >     >     offerings, etc. It is not very scalable and gets out of hand very
>> >     >     quickly.
>> >     >
>> >     >     Yiping
>> >     >
>> >     >     On 9/8/16, 4:25 PM, "Marty Godsey" <ma...@gonsource.com> wrote:
>> >     >
>> >     >         I do this by using storage tags. As an example, I have some
>> >     >         templates that are created on either SSD or magnetic storage.
>> >     >         The template has a storage tag associated with it, and then I
>> >     >         assign the appropriate storage tag to the primary storage.
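>> >     >
>> >     >         For anyone wiring this up, a rough CloudMonkey sketch (the
>> >     >         tag and offering names are made up; the tags parameters on
>> >     >         createStoragePool and createDiskOffering are the stock API,
>> >     >         but verify against your version):
>> >     >
>> >     >             create storagepool zoneid=... clusterid=... name=ssd-pool url=... tags=SSD
>> >     >             create diskoffering name=ssd-20g displaytext="20 GB SSD" disksize=20 tags=SSD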
>> >     >
>> >     >         Regards,
>> >     >         Marty Godsey
>> >     >
>> >     >         -----Original Message-----
>> >     >         From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com]
>> >     >         Sent: Thursday, September 8, 2016 7:16 PM
>> >     >         To: dev@cloudstack.apache.org
>> >     >         Subject: Re: storage affinity groups
>> >     >
>> >     >         If one doesn't already exist, you can write a custom storage
>> >     >         allocator to handle this scenario.
>> >     >
>> >     >         > On Sep 8, 2016, at 4:25 PM, Yiping Zhang <yzh...@marketo.com> wrote:
>> >     >         >
>> >     >         > Hi,  Devs:
>> >     >         >
>> >     >         > We all know how (anti-)host affinity groups work in
>> >     >         > CloudStack; I am wondering if there is a similar concept
>> >     >         > for (anti-)storage affinity groups?
>> >     >         >
>> >     >         > The use case is this: in a setup with just one (somewhat)
>> >     >         > unreliable primary storage, if the primary storage is
>> >     >         > offline, then all VM instances would be impacted. Now if we
>> >     >         > have two primary storage volumes for the cluster, then when
>> >     >         > one of them goes offline, only half of the VM instances
>> >     >         > would be impacted (assuming the VM instances are evenly
>> >     >         > distributed between the two primary storage volumes). Thus,
>> >     >         > (anti-)storage affinity groups would make sure that
>> >     >         > instances' disk volumes are distributed among the available
>> >     >         > primary storage volumes, just like (anti-)host affinity
>> >     >         > groups would distribute instances among hosts.
>> >     >         >
>> >     >         > Does anyone else see the benefits of anti-storage affinity
>> >     >         > groups?
>> >     >         >
>> >     >         > Yiping
>> >     >
>> >     >
>> >     >
>> >     >
>> >     >
>> >
>> >
>> >
>>
>
>
