[Openstack-operators] [nova][placement] Placement requests and caching in the resource tracker

2018-11-02 Thread Eric Fried
All-

Based on a (long) discussion yesterday [1] I have put up a patch [2]
whereby you can set [compute]resource_provider_association_refresh to
zero and the resource tracker will never* refresh the report client's
provider cache. Philosophically, we're removing the "healing" aspect of
the resource tracker's periodic and trusting that placement won't
diverge from whatever's in our cache. (If it does, it's because the op
hit the CLI, in which case they should SIGHUP - see below.)

*except:
- When we initially create the compute node record and bootstrap its
resource provider.
- When the virt driver's update_provider_tree makes changes,
update_from_provider_tree reflects them in the cache as well as pushing
them back to placement.
- If update_from_provider_tree fails, the cache is cleared and gets
rebuilt on the next periodic.
- If you send SIGHUP to the compute process, the cache is cleared.
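
Concretely, in nova.conf on the compute node that's just:

 [compute]
 resource_provider_association_refresh = 0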

This should dramatically reduce the number of calls to placement from
the compute service. Like, to nearly zero, unless something is actually
changing.
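
To illustrate the semantics (a toy sketch only, not the actual report
client code): with the interval at zero, the staleness check
short-circuits and the cached provider data is trusted indefinitely.

 import time

 REFRESH_INTERVAL = 0  # [compute]resource_provider_association_refresh

 class ProviderCache(object):
     def __init__(self):
         # provider uuid -> timestamp of last refresh from placement
         self._last_refresh = {}

     def needs_refresh(self, rp_uuid):
         if not REFRESH_INTERVAL:
             # 0 means: trust the cache; never re-poll placement for
             # aggregates/traits/sharing providers.
             return False
         last = self._last_refresh.get(rp_uuid, 0)
         return time.time() - last > REFRESH_INTERVAL

     def mark_refreshed(self, rp_uuid):
         self._last_refresh[rp_uuid] = time.time()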

Can I get some initial feedback as to whether this is worth polishing up
into something real? (It will probably need a bp/spec if so.)

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03
[2] https://review.openstack.org/#/c/614886/

==
Background
==
In the Queens release, our friends at CERN noticed a serious spike in
the number of requests to placement from compute nodes, even in a
stable-state cloud. Given that we were in the process of adding a ton of
infrastructure to support sharing and nested providers, this was not
unexpected. Roughly, what was previously:

 @periodic_task:
 GET /resource_providers/$compute_uuid
 GET /resource_providers/$compute_uuid/inventories

became more like:

 @periodic_task:
 # In Queens/Rocky, this would still just return the compute RP
 GET /resource_providers?in_tree=$compute_uuid
 # In Queens/Rocky, this would return nothing
 GET /resource_providers?member_of=...&required=MISC_SHARES...
 for each provider returned above:  # i.e. just one in Q/R
 GET /resource_providers/$compute_uuid/inventories
 GET /resource_providers/$compute_uuid/traits
 GET /resource_providers/$compute_uuid/aggregates

In a cloud the size of CERN's, the load wasn't acceptable. But at the
time, CERN worked around the problem by disabling refreshing entirely.
(The fact that this seems to have worked for them is an encouraging sign
for the proposed code change.)

We're not actually making use of most of that information, but it sets
the stage for things that we're working on in Stein and beyond, like
multiple VGPU types, bandwidth resource providers, accelerators, NUMA,
etc., so removing/reducing the amount of information we look at isn't
really an option strategically.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread Eric Fried
Forwarding to openstack-operators per Jay.

On 10/24/18 10:10, Jay Pipes wrote:
> Nova's API has the ability to create "quota classes", which are
> basically limits for a set of resource types. There is something called
> the "default quota class" which corresponds to the limits in the
> CONF.quota section. Quota classes are basically templates of limits to
> be applied if the calling project doesn't have any stored
> project-specific limits.
> 
> Has anyone ever created a quota class that is different from "default"?
> 
> I'd like to propose deprecating this API and getting rid of this
> functionality since it conflicts with the new Keystone /limits endpoint,
> is highly coupled with RAX's turnstile middleware and I can't seem to
> find anyone who has ever used it. Deprecating this API and functionality
> would make the transition to a saner quota management system much easier
> and straightforward.
> 
> Also, I'm apparently blocked now from the operators ML so could someone
> please forward this there?
> 
> Thanks,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Openstack-sigs] [all] Naming the T release of OpenStack

2018-10-18 Thread Eric Fried
Sorry, I'm opposed to this idea.

I admit I don't understand the political framework, nor have I read the
governing documents beyond [1], but that document makes it clear that
this is supposed to be a community-wide vote.  Is it really legal for
the TC (or whoever has merge rights on [2]) to merge a patch that gives
that same body the power to take the decision out of the hands of the
community? So it's really an oligarchy that gives its constituency the
illusion of democracy until something comes up that it feels like not
having a vote on? The fact that it's something relatively "unimportant"
(this time) is not a comfort.

Not that I think the TC would necessarily move forward with [2] in the
face of substantial opposition from non-TC "cores" or whatever.

I will vote enthusiastically for "Train". But a vote it should be.

-efried

[1] https://governance.openstack.org/tc/reference/release-naming.html
[2] https://review.openstack.org/#/c/611511/

On 10/18/2018 10:52 AM, arkady.kanev...@dell.com wrote:
> +1 for the poll.
> 
> Let’s follow well established process.
> 
> If we want to add Train as one of the options for the name I am OK with it.
> 
>  
> 
> *From:* Jonathan Mills 
> *Sent:* Thursday, October 18, 2018 10:49 AM
> *To:* openstack-s...@lists.openstack.org
> *Subject:* Re: [Openstack-sigs] [all] Naming the T release of OpenStack
> 
>  
> 
> [EXTERNAL EMAIL]
> Please report any suspicious attachments, links, or requests for
> sensitive information.
> 
> +1 for just having a poll
> 
>  
> 
> On Thu, Oct 18, 2018 at 11:39 AM David Medberry wrote:
> 
> I'm fine with Train but I'm also fine with just adding it to the
> list and voting on it. It will win.
> 
>  
> 
> Also, for those not familiar with the debian/ubuntu command "sl",
> now is the time to become so.
> 
>  
> 
> apt install sl
> 
> sl -Flea #ftw
> 
>  
> 
> On Thu, Oct 18, 2018 at 12:35 AM Tony Breeds
> <t...@bakeyournoodle.com> wrote:
> 
> Hello all,
>     As per [1] the nomination period for names for the T release has
> now closed (actually 3 days ago, sorry).  The nominated names and any
> qualifying remarks can be seen at [2].
> 
> Proposed Names
>  * Tarryall
>  * Teakettle
>  * Teller
>  * Telluride
>  * Thomas
>  * Thornton
>  * Tiger
>  * Tincup
>  * Timnath
>  * Timber
>  * Tiny Town
>  * Torreys
>  * Trail
>  * Trinidad
>  * Treasure
>  * Troublesome
>  * Trussville
>  * Turret
>  * Tyrone
> 
> Proposed Names that do not meet the criteria
>  * Train
> 
> However I'd like to suggest we skip the CIVS poll and select
> 'Train' as
> the release name by TC resolution[3].  My thinking for this is:
> 
>  * It's fun and celebrates a humorous moment in our community
>  * As a developer I've heard the T release called Train for quite
>    some time, and it was used often at the PTG[4].
>  * As the *next* PTG is also in Colorado we can still choose a
>    geographic based name for U[5]
>  * If train causes a problem for trademark reasons then we can
> always
>    run the poll
> 
> I'll leave [3] marked -W for a week for discussion to happen
> before the
> TC can consider / vote on it.
> 
> Yours Tony.
> 
> [1]
> 
> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html
> [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals
> [3]
> 
> https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53
> [4] https://twitter.com/vkmc/status/1040321043959754752
> [5]
> https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z
> 
> 
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
> 
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
> 
> 
> 
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] pci passthrough & numa affinity

2018-05-24 Thread Eric Fried
How long are you willing to wait?

The work we're doing to use Placement from Nova ought to allow us to
model both of these things nicely from the virt driver, and request them
nicely from the flavor.

By the end of Rocky we will have laid a large percentage of the
groundwork to enable this. This is all part of the road to what we've
been calling "generic device management" (GDM) -- which we hope will
eventually let us remove most/all of the existing PCI passthrough code.
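
Purely as a hypothetical illustration of the direction (the resource
class and trait names below are made up, and this is not something the
current PCI passthrough code supports), a flavor-driven request might
eventually look something like:

 openstack flavor set my.flavor \
   --property resources:CUSTOM_MY_ACCELERATOR=1 \
   --property trait:CUSTOM_MY_ACCELERATOR_NUMA_0=required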

I/we would be interested in hearing more specifics of your requirements
around this, as it will help inform the GDM roadmap.  And of course,
upstream help & contributions would be very welcome.

Thanks,
efried

On 05/24/2018 05:19 PM, Jonathan D. Proulx wrote:
> On Fri, May 25, 2018 at 07:59:16AM +1000, Blair Bethwaite wrote:
> :Hi Jon,
> :
> :Following up to the question you asked during the HPC on OpenStack
> :panel at the summit yesterday...
> :
> :You might have already seen Daniel Berrange's blog on this topic:
> :https://www.berrange.com/posts/2017/02/16/setting-up-a-nested-kvm-guest-for-developing-testing-pci-device-assignment-with-numa/
> :? He essentially describes how you can get around the issue of the
> :naive flat pci bus topology in the guest - exposing numa affinity of
> :the PCIe root ports requires newish qemu and libvirt.
> 
> Thanks for the pointer not sure if I've seen that one, I've seen a few
> ways to map manually.  I would have been quite surprised if nova did
> this so I am poking at libvirt.xml outside nova for now
> 
> :However, best I can tell there is no way to do this with Nova today.
> :Are you interested in working together on a spec for this?
> 
> I'm not yet convinced it's worth the bother, that's the crux of the
> question I'm investigating.  Is this worth the effort?  There's a meta
> question "do I have time to find out" :)
> 
> :The other related feature of interest here (newer though - no libvirt
> :support yet I think) is gpu cliques
> :(https://github.com/qemu/qemu/commit/dfbee78db8fdf7bc8c151c3d29504bb47438480b),
> :would be really nice to have a way to set these up through Nova once
> :libvirt supports it.
> 
> Thanks,
> -Jon
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-01-31 Thread Eric Fried
There's [1], but I would have expected you to see error logs like [2] if
that's what you're hitting.

[1]
https://github.com/openstack/nova/blob/master/nova/conf/compute.py#L627-L645
[2]
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1714-L1716
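
If it is [1] you're hitting (if I'm reading it right, that's the
consecutive-build-failure auto-disable threshold), a blunt workaround
sketch is to turn the behavior off in nova.conf on the computes -- I
believe a value of 0 disables it, but double-check the option help for
your release:

 [compute]
 consecutive_build_service_disable_threshold = 0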

efried

On 01/31/2018 03:16 PM, Chris Apsey wrote:
> All,
> 
> Running in to a strange issue I haven't seen before.
> 
> Randomly, the nova-compute services on compute nodes are disabling
> themselves (as if someone ran openstack compute service set --disable
> hostX nova-compute).  When this happens, the node continues to report
> itself as 'up' - the service is just disabled.  As a result, if enough
> of these occur, we get scheduling errors due to lack of available
> resources (which makes sense).  Re-enabling them works just fine and
> they continue on as if nothing happened.  I looked through the logs and
> I can find the API calls where we re-enable the services (PUT
> /v2.1/os-services/enable), but I do not see any API calls where the
> services are getting disabled initially.
> 
> Is anyone aware of any cases where compute nodes will automatically
> disable their nova-compute service on their own, or has anyone seen this
> before and might know a root cause?  We have plenty of spare vcpus and
> RAM on each node - like less than 25% utilization (both in absolute
> terms and in terms of applied ratios).
> 
> We're seeing follow-on errors regarding rmq messages getting lost and
> vif-plug failures, but we think those are a symptom, not a cause.
> 
> Currently running pike on Xenial.
> 
> ---
> v/r
> 
> Chris Apsey
> bitskr...@bitskrieg.net
> https://www.bitskrieg.net
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [all] Log coloring under systemd

2017-05-17 Thread Eric Fried
Folks-

As of [1], devstack will include color escapes in the default log
formats under systemd.  Production deployments can emulate as they see fit.

Note that journalctl will strip those color escapes by default, which
is why we thought we lost log coloring with systemd.  Turns out that you
can get the escapes to come through by passing the -a flag to
journalctl.  The doc at [2] has been updated accordingly.  If there are
any other go-to documents that could benefit from similar content,
please let me know (or propose the changes).
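
For example, to follow the nova-compute journal with the escapes intact
(the unit name here is just the devstack one; adjust for your
deployment):

 journalctl -a -f -u devstack@n-cpu.service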

Thanks,
Eric (efried)

[1] https://review.openstack.org/#/c/465147/
[2] https://docs.openstack.org/developer/devstack/systemd.html

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Eric Fried
Sam-

Under the current design, you can provide a specific endpoint
(singular) via the `endpoint_override` conf option.  Based on feedback
on this thread, we will also be keeping support for
`[glance]api_servers` for consumers who actually need to be able to
specify multiple endpoints.  See latest spec proposal[1] for details.

[1] https://review.openstack.org/#/c/461481/
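
For example (endpoint values are illustrative):

 [glance]
 # Point at exactly one endpoint, bypassing the service catalog:
 endpoint_override = http://glance-internal.example.com:9292
 # ...or, if you really do need client-side failover across several
 # endpoints, the existing list continues to work:
 # api_servers = http://glance1:9292,http://glance2:9292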

Thanks,
Eric (efried)

On 05/01/2017 12:20 PM, Sam Morrison wrote:
> 
>> On 1 May 2017, at 4:24 pm, Sean McGinnis  wrote:
>>
>> On Mon, May 01, 2017 at 10:17:43AM -0400, Matthew Treinish wrote:
 
>>>
>>> I thought it was just nova too, but it turns out cinder has the same exact
>>> option as nova: (I hit this in my devstack patch trying to get glance 
>>> deployed
>>> as a wsgi app)
>>>
>>> https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55
>>>
>>> Although from what I can tell you don't have to set it and it will fallback 
>>> to
>>> using the catalog, assuming you configured the catalog info for cinder:
>>>
>>> https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
>>>
>>>
>>> -Matt Treinish
>>>
>>
>> FWIW, that came with the original fork out of Nova. I do not have any real
>> world data on whether that is used or not.
> 
> Yes this is used in cinder.
> 
> A lot of the projects you can set endpoints for them to use. This is 
> extremely useful in a large production Openstack install where you want to
> control the traffic.
> 
> I can understand using the catalog in certain situations and feel it’s OK for 
> that to be the default but please don’t prevent operators configuring it 
> differently.
> 
> Glance is the big one as you want to control the data flow efficiently but 
> any service to service configuration should ideally be able to be manually 
> configured.
> 
> Cheers,
> Sam
> 
> 
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Eric Fried
Matt-

Yeah, clearly other projects have the same issue this blueprint is
trying to solve in nova.  I think the idea is that, once the
infrastructure is in place and nova has demonstrated the concept, other
projects can climb aboard.

It's conceivable that the new get_service_url() method could be
moved to a more common lib (ksa or os-client-config perhaps) in the
future to facilitate this.

Eric (efried)

On 05/01/2017 09:17 AM, Matthew Treinish wrote:
> On Mon, May 01, 2017 at 05:00:17AM -0700, Flavio Percoco wrote:
>> On 28/04/17 11:19 -0500, Eric Fried wrote:
>>> If it's *just* glance we're making an exception for, I prefer #1 (don't
>>> deprecate/remove [glance]api_servers).  It's way less code &
>>> infrastructure, and it discourages others from jumping on the
>>> multiple-endpoints bandwagon.  If we provide endpoint_override_list
>>> (handwave), people will think it's okay to use it.
>>>
>>> Anyone aware of any other services that use multiple endpoints?
>> Probably a bit late but yeah, I think this makes sense. I'm not aware of 
>> other
>> projects that have a list of api_servers.
> I thought it was just nova too, but it turns out cinder has the same exact
> option as nova: (I hit this in my devstack patch trying to get glance deployed
> as a wsgi app)
>
> https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55
>
> Although from what I can tell you don't have to set it and it will fallback to
> using the catalog, assuming you configured the catalog info for cinder:
>
> https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
>
>
> -Matt Treinish
>
>
>>> On 04/28/2017 10:46 AM, Mike Dorman wrote:
>>>> Maybe we are talking about two different things here?  I’m a bit confused.
>>>>
>>>> Our Glance config in nova.conf on HV’s looks like this:
>>>>
>>>> [glance]
>>>> api_servers=http://glance1:9292,http://glance2:9292,http://glance3:9292,http://glance4:9292
>>>> glance_api_insecure=True
>>>> glance_num_retries=4
>>>> glance_protocol=http
>>
>> FWIW, this feature is being used as intended. I'm sure there are ways to 
>> achieve
>> this using external tools like haproxy/nginx but that adds an extra burden to
>> OPs that is probably not necessary since this functionality is already there.
>>
>> Flavio
>>
>>>> So we do provide the full URLs, and there is SSL support.  Right?  I am 
>>>> fairly certain we tested this to ensure that if one URL fails, nova goes 
>>>> on to retry the next one.  That failure does not get bubbled up to the 
>>>> user (which is ultimately the goal.)
>>>>
>>>> I don’t disagree with you that the client side choose-a-server-at-random 
>>>> is not a great load balancer.  (But isn’t this roughly the same thing that 
>>>> oslo-messaging does when we give it a list of RMQ servers?)  For us it’s 
>>>> more about the failure handling if one is down than it is about actually 
>>>> equally distributing the load.
>>>>
>>>> In my mind options One and Two are the same, since today we are already 
>>>> providing full URLs and not only server names.  At the end of the day, I 
>>>> don’t feel like there is a compelling argument here to remove this 
>>>> functionality (that people are actively making use of.)
>>>>
>>>> To be clear, I, and I think others, are fine with nova by default getting 
>>>> the Glance endpoint from Keystone.  And that in Keystone there should 
>>>> exist only one Glance endpoint.  What I’d like to see remain is the 
>>>> ability to override that for nova-compute and to target more than one 
>>>> Glance URL for purposes of fail over.
>>>>
>>>> Thanks,
>>>> Mike
>>>>
>>>>
>>>>
>>>>
>>>> On 4/28/17, 8:20 AM, "Monty Taylor" <mord...@inaugust.com> wrote:
>>>>
>>>> Thank you both for your feedback - that's really helpful.
>>>>
>>>> Let me say a few more words about what we're trying to accomplish here
>>>> overall so that maybe we can figure out what the right way forward is.
>>>> (it may be keeping the glance api servers setting, but let me at least
>>>> make the case real quick)
>>>>
>>>>  From a 10,000 foot view, the thing we're trying to do is to get nova'

Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-04-28 Thread Eric Fried
> > We at Nectar are in the same boat as Mike. Our use-case is a little
> > bit more about geo-distributed operations though - our Cells are in
> > different States around the country, so the local glance-apis are
> > particularly important for caching popular images close to the
> > nova-computes. We consider these glance-apis as part of the underlying
> > cloud infra rather than user-facing, so I think we'd prefer not to see
> > them in the service-catalog returned to users either... is there going
> > to be a (standard) way to hide them?
> >
> > On 28 April 2017 at 09:15, Mike Dorman <mdor...@godaddy.com> wrote:
> >> We make extensive use of the [glance]/api_servers list.  We configure 
> that on hypervisors to direct them to Glance servers which are more “local” 
> network-wise (in order to reduce network traffic across security 
> zones/firewalls/etc.)  This way nova-compute can fail over in case one of the 
> Glance servers in the list is down, without putting them behind a load 
> balancer.  We also don’t run https for these “internal” Glance calls, to save 
> the overhead when transferring images.
> >>
> >> End-user calls to Glance DO go through a real load balancer and then 
> are distributed out to the Glance servers on the backend.  From the 
> end-user’s perspective, I totally agree there should be one, and only one URL.
> >>
> >> However, we would be disappointed to see the change you’re suggesting 
> implemented.  We would lose the redundancy we get now by providing a list.  
> Or we would have to shunt all the calls through the user-facing endpoint, 
> which would generate a lot of extra traffic (in places where we don’t want 
> it) for image transfers.
> >>
> >> Thanks,
> >> Mike
> >>
> >>
> >>
> >> On 4/27/17, 4:02 PM, "Matt Riedemann" <mriede...@gmail.com> wrote:
> >>
> >> On 4/27/2017 4:52 PM, Eric Fried wrote:
> >> > Y'all-
> >> >
> >> >   TL;DR: Does glance ever really need/use multiple endpoint URLs?
> >> >
> >> >   I'm working on bp use-service-catalog-for-endpoints[1], which 
> intends
> >> > to deprecate disparate conf options in various groups, and 
> centralize
> >> > acquisition of service endpoint URLs.  The idea is to introduce
> >> > nova.utils.get_service_url(group) -- note singular 'url'.
> >> >
> >> >   One affected conf option is [glance]api_servers[2], which 
> currently
> >> > accepts a *list* of endpoint URLs.  The new API will only ever 
> return *one*.
> >> >
> >> >   Thus, as planned, this blueprint will have the side effect of
> >> > deprecating support for multiple glance endpoint URLs in Pike, 
> and
> >> > removing said support in Queens.
> >> >
> >> >   Some have asserted that there should only ever be one endpoint 
> URL for
> >> > a given service_type/interface combo[3].  I'm fine with that - it
> >> > simplifies things quite a bit for the bp impl - but wanted to 
> make sure
> >> > there were no loudly-dissenting opinions before we get too far 
> down this
> >> > path.
> >> >
> >> > [1]
> >> > 
> https://blueprints.launchpad.net/nova/+spec/use-service-catalog-for-endpoints
> >> > [2]
> >> > 
> https://github.com/openstack/nova/blob/7e7bdb198ed6412273e22dea72e37a6371fce8bd/nova/conf/glance.py#L27-L37
> >> > [3]
> >> > 
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-04-27.log.html#t2017-04-27T20:38:29
> >> >
> >> > Thanks,
> >> > Eric Fried (efried)
> >> > .
> >> >
> >> > 
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >> +openstack-operators
> >>
> >> --
> >>
> >> Thanks,
> >> Matt

Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-04-28 Thread Eric Fried
Blair, Mike-

There will be an endpoint_override that will bypass the service
catalog.  It still only takes one URL, though.

Thanks,
Eric (efried)

On 04/27/2017 11:50 PM, Blair Bethwaite wrote:
> We at Nectar are in the same boat as Mike. Our use-case is a little
> bit more about geo-distributed operations though - our Cells are in
> different States around the country, so the local glance-apis are
> particularly important for caching popular images close to the
> nova-computes. We consider these glance-apis as part of the underlying
> cloud infra rather than user-facing, so I think we'd prefer not to see
> them in the service-catalog returned to users either... is there going
> to be a (standard) way to hide them?
> 
> On 28 April 2017 at 09:15, Mike Dorman <mdor...@godaddy.com> wrote:
>> We make extensive use of the [glance]/api_servers list.  We configure that 
>> on hypervisors to direct them to Glance servers which are more “local” 
>> network-wise (in order to reduce network traffic across security 
>> zones/firewalls/etc.)  This way nova-compute can fail over in case one of 
>> the Glance servers in the list is down, without putting them behind a load 
>> balancer.  We also don’t run https for these “internal” Glance calls, to 
>> save the overhead when transferring images.
>>
>> End-user calls to Glance DO go through a real load balancer and then are 
>> distributed out to the Glance servers on the backend.  From the end-user’s 
>> perspective, I totally agree there should be one, and only one URL.
>>
>> However, we would be disappointed to see the change you’re suggesting 
>> implemented.  We would lose the redundancy we get now by providing a list.  
>> Or we would have to shunt all the calls through the user-facing endpoint, 
>> which would generate a lot of extra traffic (in places where we don’t want 
>> it) for image transfers.
>>
>> Thanks,
>> Mike
>>
>>
>>
>> On 4/27/17, 4:02 PM, "Matt Riedemann" <mriede...@gmail.com> wrote:
>>
>> On 4/27/2017 4:52 PM, Eric Fried wrote:
>> > Y'all-
>> >
>> >   TL;DR: Does glance ever really need/use multiple endpoint URLs?
>> >
>> >   I'm working on bp use-service-catalog-for-endpoints[1], which intends
>> > to deprecate disparate conf options in various groups, and centralize
>> > acquisition of service endpoint URLs.  The idea is to introduce
>> > nova.utils.get_service_url(group) -- note singular 'url'.
>> >
>> >   One affected conf option is [glance]api_servers[2], which currently
>> > accepts a *list* of endpoint URLs.  The new API will only ever return 
>> *one*.
>> >
>> >   Thus, as planned, this blueprint will have the side effect of
>> > deprecating support for multiple glance endpoint URLs in Pike, and
>> > removing said support in Queens.
>> >
>> >   Some have asserted that there should only ever be one endpoint URL 
>> for
>> > a given service_type/interface combo[3].  I'm fine with that - it
>> > simplifies things quite a bit for the bp impl - but wanted to make sure
>> > there were no loudly-dissenting opinions before we get too far down 
>> this
>> > path.
>> >
>> > [1]
>> > 
>> https://blueprints.launchpad.net/nova/+spec/use-service-catalog-for-endpoints
>> > [2]
>> > 
>> https://github.com/openstack/nova/blob/7e7bdb198ed6412273e22dea72e37a6371fce8bd/nova/conf/glance.py#L27-L37
>> > [3]
>> > 
>> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-04-27.log.html#t2017-04-27T20:38:29
>> >
>> > Thanks,
>> > Eric Fried (efried)
>> > .
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> +openstack-operators
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> 
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators