[Openstack-operators] Defining the agenda for Kubernetes Ops on OpenStack forum session @ OpenStack Summit Boston

2017-05-01 Thread Steve Gordon
Hi all,

There will be a forum session at OpenStack Summit Boston next week on the topic 
of Kubernetes Ops on OpenStack. The session will take place on Wednesday, 
May 10, from 1:50pm to 2:30pm [1]. If you are an operator, developer, or other 
contributor attending OpenStack Summit who would like to participate in this 
session, we would love to have you. We're framing the agenda for the session in 
this etherpad:

https://etherpad.openstack.org/p/BOS-forum-kubernetes-ops-on-openstack

Feel free to add your own thoughts; we look forward to seeing you there. If 
this email has left you wondering what the Forum is and why you'd attend, I'd 
suggest starting here:

https://wiki.openstack.org/wiki/Forum

Thanks!

Steve

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18764/kubernetes-ops-on-openstack

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova][blazar][scientific] advanced instance scheduling: reservations and preemption - Forum session

2017-05-01 Thread Jay Pipes

On 05/01/2017 03:39 PM, Blair Bethwaite wrote:

Hi all,

Following up to the recent thread "[Openstack-operators] [scientific]
Resource reservation requirements (Blazar) - Forum session" and adding
openstack-dev.

This is now a confirmed forum session
(https://www.openstack.org/summit/boston-2017/summit-schedule/events/18781/advanced-instance-scheduling-reservations-and-preemption)
to cover any advanced scheduling use-cases people want to talk about,
but in particular focusing on reservations and preemption as they are
big priorities particularly for scientific deployers.


Etherpad draft is
https://etherpad.openstack.org/p/BOS-forum-advanced-instance-scheduling,
please attend and contribute! In particular I'd appreciate background
spec and review links added to the etherpad.

Jay, would you be able and interested to moderate this from the Nova side?


Masahito Muroi is currently marked as the moderator, but I will indeed 
be there and am happy to assist Masahito with moderating, no problem.


Best,
-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Eric Fried
Sam-

Under the current design, you can provide a specific endpoint
(singular) via the `endpoint_override` conf option.  Based on feedback
on this thread, we will also be keeping support for
`[glance]api_servers` for consumers who actually need to be able to
specify multiple endpoints.  See the latest spec proposal [1] for details.

[1] https://review.openstack.org/#/c/461481/
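
For illustration, the two forms would look something like this in nova.conf
(hostnames made up; treat it as a sketch rather than gospel):

[glance]
# Point nova at exactly one endpoint, bypassing catalog lookup:
endpoint_override = http://glance-internal.example.com:9292

# Or, where multiple endpoints are genuinely needed:
api_servers = http://glance1:9292,http://glance2:9292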

Thanks,
Eric (efried)

On 05/01/2017 12:20 PM, Sam Morrison wrote:
> 
>> On 1 May 2017, at 4:24 pm, Sean McGinnis  wrote:
>>
>> On Mon, May 01, 2017 at 10:17:43AM -0400, Matthew Treinish wrote:
 
>>>
>>> I thought it was just nova too, but it turns out cinder has the same exact
>>> option as nova: (I hit this in my devstack patch trying to get glance 
>>> deployed
>>> as a wsgi app)
>>>
>>> https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55
>>>
>>> Although from what I can tell you don't have to set it and it will fallback 
>>> to
>>> using the catalog, assuming you configured the catalog info for cinder:
>>>
>>> https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
>>>
>>>
>>> -Matt Treinish
>>>
>>
>> FWIW, that came with the original fork out of Nova. I do not have any real
>> world data on whether that is used or not.
> 
> Yes, this is used in cinder.
> 
> For a lot of projects you can set the endpoints they use. This is extremely 
> useful in a large production OpenStack install where you want to control the 
> traffic.
> 
> I can understand using the catalog in certain situations, and I feel it's OK for 
> that to be the default, but please don't prevent operators from configuring it 
> differently.
> 
> Glance is the big one, as you want to control the data flow efficiently, but 
> any service-to-service configuration should ideally be manually 
> configurable.
> 
> Cheers,
> Sam
> 
> 
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [nova][blazar][scientific] advanced instance scheduling: reservations and preemption - Forum session

2017-05-01 Thread Blair Bethwaite
Hi all,

Following up to the recent thread "[Openstack-operators] [scientific]
Resource reservation requirements (Blazar) - Forum session" and adding
openstack-dev.

This is now a confirmed forum session
(https://www.openstack.org/summit/boston-2017/summit-schedule/events/18781/advanced-instance-scheduling-reservations-and-preemption)
to cover any advanced scheduling use-cases people want to talk about,
but in particular focusing on reservations and preemption as they are
big priorities particularly for scientific deployers.

Etherpad draft is
https://etherpad.openstack.org/p/BOS-forum-advanced-instance-scheduling,
please attend and contribute! In particular I'd appreciate background
spec and review links added to the etherpad.

Jay, would you be able and interested to moderate this from the Nova side?

Cheers,

On 12 April 2017 at 05:22, Jay Pipes  wrote:
> On 04/11/2017 02:08 PM, Pierre Riteau wrote:
>>>
>>> On 4 Apr 2017, at 22:23, Jay Pipes wrote:
>>>
>>> On 04/04/2017 02:48 PM, Tim Bell wrote:

 Some combination of spot/OPIE
>>>
>>>
>>> What is OPIE?
>>
>>
>> Maybe I missed a message: I didn’t see any reply to Jay’s question about
>> OPIE.
>
>
> Thanks!
>
>> OPIE is the OpenStack Preemptible Instances
>> Extension: https://github.com/indigo-dc/opie
>> I am sure other on this list can provide more information.
>
>
> Got it.
>
>> I think running OPIE instances inside Blazar reservations would be
>> doable without many changes to the implementation.
>> We’ve talked about this idea several times, this forum session would be
>> an ideal place to draw up an implementation plan.
>
>
> I just looked through the OPIE source code. One thing I'm wondering is why
> the code for killing off pre-emptible instances is being done in the
> filter_scheduler module?
>
> Why not have a separate service that merely responds to a NoValidHost
> exception raised from the scheduler with a call to go and terminate one or
> more instances that would have allowed the original request to land on a
> host?
>
> Right here is where OPIE goes and terminates pre-emptible instances:
>
> https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L92-L100
>
> However, that code should actually be run when line 90 raises NoValidHost:
>
> https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L90
>
> There would be no need at all for "detecting overcommit" here:
>
> https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L96
>
> Simply detect a NoValidHost being returned to the conductor from the
> scheduler, examine if there are pre-emptible instances currently running
> that could be terminated and terminate them, and re-run the original call to
> select_destinations() (the scheduler call) just like a Retry operation
> normally does.
>
> There'd be no need whatsoever to involve any changes to the scheduler at
> all.
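>
> In rough, untested pseudo-code (the helper names here are hypothetical, not
> actual nova internals), the flow would be something like:
>
> def schedule_with_preemption(context, request_spec):
>     try:
>         return scheduler_client.select_destinations(context, request_spec)
>     except NoValidHost:
>         # Hypothetical helper: find running pre-emptible instances whose
>         # resources would let the original request fit somewhere.
>         victims = find_preemptible_candidates(context, request_spec)
>         if not victims:
>             raise
>         for instance in victims:
>             # Free up capacity by terminating the pre-emptible instances.
>             compute_api.delete(context, instance)
>         # Re-run the original scheduling call, just like a normal Retry.
>         return scheduler_client.select_destinations(context, request_spec)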
>
 and Blazar would seem doable as long as the resource provider
 reserves capacity appropriately (i.e. spot resources>>blazar
 committed along with no non-spot requests for the same aggregate).
 Is this feasible?
>
>
> No. :)
>
> As mentioned in previous emails and on the etherpad here:
>
> https://etherpad.openstack.org/p/new-instance-reservation
>
> I am firmly against having the resource tracker or the placement API
> represent inventory or allocations with a temporal aspect to them (i.e.
> allocations in the future).
>
> A separate system (hopefully Blazar) is needed to manage the time-based
> associations to inventories of resources over a period in the future.
>
> Best,
> -jay
>
>>> I'm not sure how the above is different from the constraints I mention
>>> below about having separate sets of resource providers for preemptible
>>> instances versus non-preemptible instances?
>>>
>>> Best,
>>> -jay
>>>
 Tim

 On 04.04.17, 19:21, "Jay Pipes" wrote:

On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
> Hi Jay,
>
> On 4 April 2017 at 00:20, Jay Pipes wrote:
>> However, implementing the above in any useful fashion requires
 that Blazar
>> be placed *above* Nova and essentially that the cloud operator
 turns off
>> access to Nova's  POST /servers API call for regular users.
 Because if not,
>> the information that Blazar acts upon can be simply
 circumvented by any user
>> at any time.
>
> That's something of an oversimplification. A reservation system
> outside of Nova could manipulate Nova host-aggregates to "cordon
 off"
> infrastructure from on-demand access (I believe Blazar already uses
> this approach), and it's not much of a jump to imagine operators
 being
> able to 

Re: [Openstack-operators] [openstack-dev] [scientific][nova][cyborg] Special Hardware Forum session

2017-05-01 Thread Blair Bethwaite
Thanks Rochelle. I encourage everyone to dump thoughts into the
etherpad (https://etherpad.openstack.org/p/BOS-forum-special-hardware
- feel free to garden it as you go!) so we can have some chance of
organising a coherent session. In particular it would be useful to
know what is going to be most useful for the Nova and Cyborg devs so
that we can give that priority before we start the show-and-tell /
knowledge-share that is often a large part of these sessions. I'd also
be very happy to have a co-moderator if anyone wants to volunteer.

On 26 April 2017 at 03:11, Rochelle Grober  wrote:
>
> I know that some cyborg folks and nova folks are planning to be there. Now
> we need to drive some ops folks.
>
>
> Sent from HUAWEI AnyOffice
> From:Blair Bethwaite
> To:openstack-...@lists.openstack.org,openstack-oper.
> Date:2017-04-25 08:24:34
> Subject:[openstack-dev] [scientific][nova][cyborg] Special Hardware Forum
> session
>
> Hi all,
>
> A quick FYI that this Forum session exists at this Forum:
> https://www.openstack.org/summit/boston-2017/summit-schedule/events/18803/special-hardware
> (etherpad: https://etherpad.openstack.org/p/BOS-forum-special-hardware)
>
> It would be great to see a good representation from both the Nova and
> Cyborg dev teams, and also ops ready to share their experience and
> use-cases.
>
> --
> Cheers,
> ~Blairo
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Blair Bethwaite
On 29 April 2017 at 01:46, Mike Dorman  wrote:
> I don’t disagree with you that the client side choose-a-server-at-random is 
> not a great load balancer.  (But isn’t this roughly the same thing that 
> oslo-messaging does when we give it a list of RMQ servers?)  For us it’s more 
> about the failure handling if one is down than it is about actually equally 
> distributing the load.

Maybe not great, but still better than making operators deploy (often
complex) full-featured external LBs when they really just want *enough*
redundancy. In many cases this seems to just create pets in the control
plane. I think it'd be useful if all OpenStack APIs and their clients
actively handled this poor-man's HA without having to resort to haproxy
etc., or, for example, assuming operators own the DNS.
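
To illustrate the sort of thing I mean, client-side failover is really just
this (rough Python sketch, endpoint list made up):

import requests

API_SERVERS = ['http://glance1:9292', 'http://glance2:9292']

def get_image(image_id):
    """Try each endpoint in turn; skip over any that are down or erroring."""
    last_exc = None
    for base in API_SERVERS:
        try:
            resp = requests.get('%s/v2/images/%s' % (base, image_id),
                                timeout=10)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as exc:
            last_exc = exc
    raise last_exc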

-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Nikhil Komawar
I agree.

I think the solution proposed earlier in this thread, defaulting to the
service catalog while optionally allowing ops to choose the list of
glance-apis to send data to, would make everyone's life easier.

On Mon, May 1, 2017 at 2:16 PM, Blair Bethwaite 
wrote:

> On 28 April 2017 at 21:17, Sean Dague  wrote:
> > On 04/28/2017 12:50 AM, Blair Bethwaite wrote:
> >> We at Nectar are in the same boat as Mike. Our use-case is a little
> >> bit more about geo-distributed operations though - our Cells are in
> >> different States around the country, so the local glance-apis are
> >> particularly important for caching popular images close to the
> >> nova-computes. We consider these glance-apis as part of the underlying
> >> cloud infra rather than user-facing, so I think we'd prefer not to see
> >> them in the service-catalog returned to users either... is there going
> >> to be a (standard) way to hide them?
> >
> > In a situation like this, where Cells are geographically bounded, is
> > there also a Region for that Cell/Glance?
>
> Hi Sean. Nope, just the one global region and set of user-facing APIs.
> Those other glance-apis are internal architectural details and should
> be hidden from the public catalog so as not to confuse users and/or
> over-expose information.
>
> Cheers,
>
> --
> Cheers,
> ~Blairo
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Blair Bethwaite
On 28 April 2017 at 21:17, Sean Dague  wrote:
> On 04/28/2017 12:50 AM, Blair Bethwaite wrote:
>> We at Nectar are in the same boat as Mike. Our use-case is a little
>> bit more about geo-distributed operations though - our Cells are in
>> different States around the country, so the local glance-apis are
>> particularly important for caching popular images close to the
>> nova-computes. We consider these glance-apis as part of the underlying
>> cloud infra rather than user-facing, so I think we'd prefer not to see
>> them in the service-catalog returned to users either... is there going
>> to be a (standard) way to hide them?
>
> In a situation like this, where Cells are geographically bounded, is
> there also a Region for that Cell/Glance?

Hi Sean. Nope, just the one global region and set of user-facing APIs.
Those other glance-apis are internal architectural details and should
be hidden from the public catalog so as not to confuse users and/or
over-expose information.

Cheers,

-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [newton] [keystone] [nova] [novaclient] [shibboleth] [v3token] [ecp] nova boot fails for federated users

2017-05-01 Thread Evan Bollig PhD
Trying to figure out if this is a bug in ECP support within
novaclient, or if I am misconfiguring something. Any feedback helps!

We have keystone configured to use a separate Shibboleth server for
auth (with an ECP endpoint). Federated users with the _member_ role on
a project can boot VMs using "openstack server create", but attempts
to use "nova boot" (novaclient) are blocked by this error:

 $ nova list
ERROR (AttributeError): 'Namespace' object has no attribute 'os_user_id'

To authenticate, we have users generate a token with unscoped SAML:

# Get an unscoped token via the SAML2/ECP plugin, then switch to v3token.
export OS_AUTH_TYPE=v3unscopedsaml
unset OS_AUTH_STRATEGY
export OS_IDENTITY_PROVIDER=testshib
export OS_PROTOCOL=saml2
export OS_IDENTITY_PROVIDER_URL=https://shibboleth-server/ECP
unset OS_TOKEN
export OS_TOKEN=$( openstack token issue -c id -f value --debug )
unset OS_PASSWORD
if [ -z "$OS_TOKEN" ]; then
  echo -e "\nERROR: Bad authentication"
  unset OS_TOKEN
else
  echo -e "\nAuthenticated."
fi
unset OS_USER_DOMAIN_NAME
export OS_AUTH_TYPE=v3token

Cheers,
-E


--
Evan F. Bollig, PhD
Scientific Computing Consultant, Application Developer | Scientific
Computing Solutions (SCS)
Minnesota Supercomputing Institute | msi.umn.edu
University of Minnesota | umn.edu
boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Sam Morrison

> On 1 May 2017, at 4:24 pm, Sean McGinnis  wrote:
> 
> On Mon, May 01, 2017 at 10:17:43AM -0400, Matthew Treinish wrote:
>>> 
>> 
>> I thought it was just nova too, but it turns out cinder has the same exact
>> option as nova: (I hit this in my devstack patch trying to get glance 
>> deployed
>> as a wsgi app)
>> 
>> https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55
>> 
>> Although from what I can tell you don't have to set it and it will fallback 
>> to
>> using the catalog, assuming you configured the catalog info for cinder:
>> 
>> https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
>> 
>> 
>> -Matt Treinish
>> 
> 
> FWIW, that came with the original fork out of Nova. I do not have any real
> world data on whether that is used or not.

Yes, this is used in cinder.

For a lot of projects you can set the endpoints they use. This is extremely 
useful in a large production OpenStack install where you want to control the 
traffic.

I can understand using the catalog in certain situations, and I feel it's OK for 
that to be the default, but please don't prevent operators from configuring it 
differently.

Glance is the big one, as you want to control the data flow efficiently, but any 
service-to-service configuration should ideally be manually configurable.

Cheers,
Sam


> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] User Committee IRC Meeting - Monday May 1st

2017-05-01 Thread Edgar Magana
Dear UC Community,

This is a kind reminder that we are having our UC IRC meeting today at 1900 UTC 
in #openstack-meeting (on freenode).

Agenda:
https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee

Thanks,

Edgar Magana
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Eric Fried
Matt-

Yeah, clearly other projects have the same issue this blueprint is
trying to solve in nova.  I think the idea is that, once the
infrastructure is in place and nova has demonstrated the concept, other
projects can climb aboard.

It's conceivable that the new get_service_url() method could be
moved to a more common lib (ksa or os-client-config perhaps) in the
future to facilitate this.
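
To give a flavor of it, catalog-based lookup via keystoneauth already looks
roughly like this today (illustrative auth values, not tied to the spec):

from keystoneauth1 import adapter, session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url='http://keystone:5000/v3',
                   username='nova', password='secret',
                   project_name='service',
                   user_domain_name='Default',
                   project_domain_name='Default')
sess = session.Session(auth=auth)

# The adapter resolves the URL from the service catalog; an
# operator-supplied endpoint_override would short-circuit that lookup.
glance = adapter.Adapter(session=sess, service_type='image',
                         interface='internal')
print(glance.get_endpoint())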

Eric (efried)

On 05/01/2017 09:17 AM, Matthew Treinish wrote:
> On Mon, May 01, 2017 at 05:00:17AM -0700, Flavio Percoco wrote:
>> On 28/04/17 11:19 -0500, Eric Fried wrote:
>>> If it's *just* glance we're making an exception for, I prefer #1 (don't
>>> deprecate/remove [glance]api_servers).  It's way less code &
>>> infrastructure, and it discourages others from jumping on the
>>> multiple-endpoints bandwagon.  If we provide endpoint_override_list
>>> (handwave), people will think it's okay to use it.
>>>
>>> Anyone aware of any other services that use multiple endpoints?
>> Probably a bit late but yeah, I think this makes sense. I'm not aware of 
>> other
>> projects that have list of api_servers.
> I thought it was just nova too, but it turns out cinder has the same exact
> option as nova: (I hit this in my devstack patch trying to get glance deployed
> as a wsgi app)
>
> https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55
>
> Although from what I can tell you don't have to set it and it will fallback to
> using the catalog, assuming you configured the catalog info for cinder:
>
> https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
>
>
> -Matt Treinish
>
>
>>> On 04/28/2017 10:46 AM, Mike Dorman wrote:
 Maybe we are talking about two different things here?  I’m a bit confused.

 Our Glance config in nova.conf on HV’s looks like this:

 [glance]
 api_servers=http://glance1:9292,http://glance2:9292,http://glance3:9292,http://glance4:9292
 glance_api_insecure=True
 glance_num_retries=4
 glance_protocol=http
>>
>> FWIW, this feature is being used as intended. I'm sure there are ways to 
>> achieve
>> this using external tools like haproxy/nginx but that adds an extra burden to
>> OPs that is probably not necessary since this functionality is already there.
>>
>> Flavio
>>
 So we do provide the full URLs, and there is SSL support.  Right?  I am 
 fairly certain we tested this to ensure that if one URL fails, nova goes 
 on to retry the next one.  That failure does not get bubbled up to the 
 user (which is ultimately the goal.)

 I don’t disagree with you that the client side choose-a-server-at-random 
 is not a great load balancer.  (But isn’t this roughly the same thing that 
 oslo-messaging does when we give it a list of RMQ servers?)  For us it’s 
 more about the failure handling if one is down than it is about actually 
 equally distributing the load.

 In my mind options One and Two are the same, since today we are already 
 providing full URLs and not only server names.  At the end of the day, I 
 don’t feel like there is a compelling argument here to remove this 
 functionality (that people are actively making use of.)

 To be clear, I, and I think others, are fine with nova by default getting 
 the Glance endpoint from Keystone.  And that in Keystone there should 
 exist only one Glance endpoint.  What I’d like to see remain is the 
 ability to override that for nova-compute and to target more than one 
 Glance URL for purposes of fail over.

 Thanks,
 Mike




 On 4/28/17, 8:20 AM, "Monty Taylor"  wrote:

 Thank you both for your feedback - that's really helpful.

 Let me say a few more words about what we're trying to accomplish here
 overall so that maybe we can figure out what the right way forward is.
 (it may be keeping the glance api servers setting, but let me at least
 make the case real quick)

  From a 10,000 foot view, the thing we're trying to do is to get nova's
 consumption of all of the OpenStack services it uses to be less 
 special.

 The clouds have catalogs which list information about the services -
 public, admin and internal endpoints and whatnot - and then we're 
 asking
 admins to not only register that information with the catalog, but to
 also put it into the nova.conf. That means that any updating of that
 info needs to be an API call to keystone and also a change to 
 nova.conf.
 If we, on the other hand, use the catalog, then nova can pick up 
 changes
 in real time as they're rolled out to the cloud - and there is 
 hopefully
 a sane set of defaults we could choose (based on operator feedback like
 what 

Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Matthew Treinish
On Mon, May 01, 2017 at 05:00:17AM -0700, Flavio Percoco wrote:
> On 28/04/17 11:19 -0500, Eric Fried wrote:
> > If it's *just* glance we're making an exception for, I prefer #1 (don't
> > deprecate/remove [glance]api_servers).  It's way less code &
> > infrastructure, and it discourages others from jumping on the
> > multiple-endpoints bandwagon.  If we provide endpoint_override_list
> > (handwave), people will think it's okay to use it.
> > 
> > Anyone aware of any other services that use multiple endpoints?
> 
> Probably a bit late but yeah, I think this makes sense. I'm not aware of other
> projects that have a list of api_servers.

I thought it was just nova too, but it turns out cinder has the same exact
option as nova: (I hit this in my devstack patch trying to get glance deployed
as a wsgi app)

https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55

Although from what I can tell you don't have to set it and it will fall back to
using the catalog, assuming you configured the catalog info for cinder:

https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
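
If memory serves, the cinder.conf side of that looks roughly like the
following (illustrative values, so treat it as a sketch):

[DEFAULT]
# Explicit list (the option linked above); if left unset, cinder falls
# back to the catalog lookup configured below.
#glance_api_servers = http://glance1:9292,http://glance2:9292
glance_catalog_info = image:glance:publicURL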


-Matt Treinish


> 
> > On 04/28/2017 10:46 AM, Mike Dorman wrote:
> > > Maybe we are talking about two different things here?  I’m a bit confused.
> > > 
> > > Our Glance config in nova.conf on HV’s looks like this:
> > > 
> > > [glance]
> > > api_servers=http://glance1:9292,http://glance2:9292,http://glance3:9292,http://glance4:9292
> > > glance_api_insecure=True
> > > glance_num_retries=4
> > > glance_protocol=http
> 
> 
> FWIW, this feature is being used as intended. I'm sure there are ways to 
> achieve
> this using external tools like haproxy/nginx but that adds an extra burden to
> OPs that is probably not necessary since this functionality is already there.
> 
> Flavio
> 
> > > So we do provide the full URLs, and there is SSL support.  Right?  I am 
> > > fairly certain we tested this to ensure that if one URL fails, nova goes 
> > > on to retry the next one.  That failure does not get bubbled up to the 
> > > user (which is ultimately the goal.)
> > > 
> > > I don’t disagree with you that the client side choose-a-server-at-random 
> > > is not a great load balancer.  (But isn’t this roughly the same thing 
> > > that oslo-messaging does when we give it a list of RMQ servers?)  For us 
> > > it’s more about the failure handling if one is down than it is about 
> > > actually equally distributing the load.
> > > 
> > > In my mind options One and Two are the same, since today we are already 
> > > providing full URLs and not only server names.  At the end of the day, I 
> > > don’t feel like there is a compelling argument here to remove this 
> > > functionality (that people are actively making use of.)
> > > 
> > > To be clear, I, and I think others, are fine with nova by default getting 
> > > the Glance endpoint from Keystone.  And that in Keystone there should 
> > > exist only one Glance endpoint.  What I’d like to see remain is the 
> > > ability to override that for nova-compute and to target more than one 
> > > Glance URL for purposes of fail over.
> > > 
> > > Thanks,
> > > Mike
> > > 
> > > 
> > > 
> > > 
> > > On 4/28/17, 8:20 AM, "Monty Taylor"  wrote:
> > > 
> > > Thank you both for your feedback - that's really helpful.
> > > 
> > > Let me say a few more words about what we're trying to accomplish here
> > > overall so that maybe we can figure out what the right way forward is.
> > > (it may be keeping the glance api servers setting, but let me at least
> > > make the case real quick)
> > > 
> > >  From a 10,000 foot view, the thing we're trying to do is to get 
> > > nova's
> > > consumption of all of the OpenStack services it uses to be less 
> > > special.
> > > 
> > > The clouds have catalogs which list information about the services -
> > > public, admin and internal endpoints and whatnot - and then we're 
> > > asking
> > > admins to not only register that information with the catalog, but to
> > > also put it into the nova.conf. That means that any updating of that
> > > info needs to be an API call to keystone and also a change to 
> > > nova.conf.
> > > If we, on the other hand, use the catalog, then nova can pick up 
> > > changes
> > > in real time as they're rolled out to the cloud - and there is 
> > > hopefully
> > > a sane set of defaults we could choose (based on operator feedback 
> > > like
> > > what you've given) so that in most cases you don't have to tell nova
> > > where to find glance _at_all_ becuase the cloud already knows where it
> > > is. (nova would know to look in the catalog for the interal interface 
> > > of
> > > the image service - for instance - there's no need to ask an operator 
> > > to
> > > add to the config "what is the service_type of the image service we
> > >