On Thursday, November 21, 2013 9:56:41 AM, Matt Riedemann wrote:
On 11/3/2013 5:22 AM, Joe Gordon wrote:
On Nov 1, 2013 6:46 PM, "John Garbutt" <j...@johngarbutt.com> wrote:
>
> On 29 October 2013 16:11, Eddie Sheffield <eddie.sheffi...@rackspace.com> wrote:
> >
> > "John Garbutt" <j...@johngarbutt.com <mailto:j...@johngarbutt.com>>
said:
> >
> >> Going back to Joe's comment:
> >>> Can both of these cases be covered by configuring the keystone
> >>> catalog?
> >> +1
> >>
> >> If both v1 and v2 are present, pick v2, otherwise just pick what is
> >> in the catalogue. That seems cool. Not quite sure how the multiple
> >> glance endpoints work in the keystone catalog, but it should work,
> >> I assume.
> >>
> >> We hard code nova right now, and so we probably want to keep that
> >> route too?
> >
> > Nova doesn't use the catalog from Keystone when talking to Glance.
> > There is a config value "glance_api_servers" which defines a list of
> > Glance servers that gets randomized and cycled through. I assume
> > that's what you're referring to with "we hard code nova." But
> > currently there's nowhere in this path (internal nova to glance)
> > where the keystone catalog is available.
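
(For illustration, a minimal sketch of that randomize-and-cycle
behaviour; the names here are made up for the example, not nova's
actual ones:)

    import itertools
    import random

    api_servers = ['http://glance1:9292', 'http://glance2:9292']
    random.shuffle(api_servers)                # randomized once at load
    server_iter = itertools.cycle(api_servers)

    def call_with_retries(do_call, num_retries=3):
        # Try successive servers until one succeeds or retries run out.
        for attempt in range(num_retries + 1):
            host = next(server_iter)
            try:
                return do_call(host)
            except IOError:  # stand-in for whatever errors count as retriable
                if attempt == num_retries:
                    raise
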
>
> Yes. I was not very clear. I am proposing we change that. We could try
> to shoehorn the multiple glance nodes into the keystone catalog, then
> cache that in the context, but maybe that doesn't make sense. This is
> a separate change really.
FYI: We cache the cinder endpoints from the keystone catalog in the
context already, so doing something like that with glance won't be
without precedent.
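
(Roughly, the pattern being described looks like this; the attribute
and catalog field names are illustrative, from memory of the keystone
v2 catalog format:)

    def get_endpoint(context, service_type='volume'):
        # Reuse the endpoint if this context has already looked it up.
        cached = getattr(context, 'cached_endpoint', None)
        if cached:
            return cached
        for service in context.service_catalog:
            if service['type'] == service_type:
                url = service['endpoints'][0]['publicURL']
                context.cached_endpoint = url
                return url
        raise ValueError('no %s endpoint in the catalog' % service_type)
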
>
> But clearly, we can't drop the direct configuration of glance servers
> for some time either.
>
> > I think some of the confusion may be that glanceclient at the
> > programmatic client level doesn't talk to keystone. That happens
> > higher up, at the CLI level, which doesn't come into play here.
> >
> >> From: "Russell Bryant" <rbry...@redhat.com>
> >>> On 10/17/2013 03:12 PM, Eddie Sheffield wrote:
> >>>> Might I propose a compromise?
> >>>>
> >>>> 1) For the VERY short term, keep the config value and get the
> >>>> change otherwise reviewed and hopefully accepted.
> >>>>
> >>>> 2) Immediately file two blueprints:
> >>>>    - python-glanceclient - expose a way to discover available versions
> >>>>    - nova - depends on the glanceclient bp; allows autodiscovery of
> >>>>      the glance version and makes the config value optional (though
> >>>>      not deprecated / removed)
> >>>
> >>> Supporting both seems reasonable. At least then *most* people don't
> >>> need to worry about it and it "just works", but the override is
> >>> there if necessary, since multiple people seem to be expressing a
> >>> desire to have it available.
> >>
> >> +1
> >>
> >>> Can we just do this all at once? Adding this to glanceclient
> >>> doesn't seem like a huge task.
> >>
> >> I worry about us never getting the full solution, but it seems to
> >> have got complicated.
> >
> > The glanceclient side is done, as far as allowing access to the
> > list of available API versions on a given server. It's getting Nova
> > to use this info that's a bit sticky.
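
(For the curious, version discovery against a server boils down to a
GET on its unversioned root, which OpenStack services answer with a
document listing the versions they support. A hedged sketch; the exact
response shape may differ between services and releases:)

    import requests

    def get_api_versions(endpoint):
        # e.g. endpoint = 'http://glance-host:9292/'
        resp = requests.get(endpoint)
        return [v['id'] for v in resp.json()['versions']]
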
>
> Hmm, OK. Could we not just cache the detected version, to reduce the
> impact of that decision?
>
> >> On 28 October 2013 15:13, Eddie Sheffield <eddie.sheffi...@rackspace.com> wrote:
> >>> So...I've been working on this some more and hit a bit of a snag.
> >>> The glanceclient change was easy, but I see now that doing this in
> >>> nova will require a pretty huge change in the way things work.
> >>> Currently, the API version is grabbed from the config value, the
> >>> appropriate driver is instantiated, and calls go through that. The
> >>> problem comes in that the actual glance server isn't communicated
> >>> with until very late in the process. Nothing "sees" the servers at
> >>> the level where the driver is determined. Also, there isn't a
> >>> single glance server but a list of them, and in the event of
> >>> certain communication failures the list is cycled through until
> >>> success or a number of retries has passed.
> >>>
> >>> So changing this to auto-configuring will require turning this
> >>> upside down: cycling through the servers at a higher level,
> >>> choosing the appropriate driver for that server, and handling
> >>> retries at that same level.
> >>>
> >>> Doable, but a much larger task than I first was thinking.
> >>>
> >>> Also, I don't really want the added overhead of getting the API
> >>> versions before every call, so I'm thinking that going through the
> >>> list of servers at startup, discovering the versions then, and
> >>> caching that somehow would be helpful as well.
> >>>
> >>> Thoughts?
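
(A rough sketch of the inversion being described: lift the retry loop
above driver selection so each server gets the driver matching the API
version it actually speaks. All names here are illustrative, not
nova's real ones:)

    class CommunicationError(Exception):
        pass

    def call_image_api(servers, version_cache, drivers, method, *args):
        # 'version_cache' maps host -> version discovered at startup;
        # 'drivers' maps version -> driver class.
        last_exc = CommunicationError('no glance servers configured')
        for host in servers:                  # retries now at the top level
            driver = drivers[version_cache[host]](host)
            try:
                return getattr(driver, method)(*args)
            except CommunicationError as exc:
                last_exc = exc                # fall through to the next host
        raise last_exc
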
> >>
> >> I do worry about that overhead. But with Joe's comment, does it not
> >> just boil down to caching the keystone catalog in the context?
> >>
> >> I am not a fan of all the specific talk-to-glance code we have in
> >> nova; moving more of that into glanceclient can only be a good
> >> thing. For the XenServer integration, for efficiency reasons, we
> >> need glance to talk from dom0, so it has dom0 making the final HTTP
> >> call. So we would need a way of extracting that info from the
> >> glance client. But that seems better than having that code in nova.
> >
> > I know in Glance we've largely taken the view that the client
> > should be as thin and lightweight as possible so users of the client
> > can make use of it however they best see fit. There was an earlier
> > patch that would have moved the whole image service layer into
> > glanceclient that was rejected. So I think there is a division in
> > philosophies here as well.
>
> Hmm, I would be a fan of supporting both use cases, "nova style" and
> more complex. Just seems better for glance to own as much as possible
> of the glance client-like code. But I am a nova guy, I would say that!
> Anyway, that's a different conversation.
>
> John
>

I'm joining this thread a bit late but wanted to raise a few points
for consideration.

1. It doesn't look like the 'use-glance-v2-api' blueprint [1] has gone
anywhere since this thread seems to have hit a dead end.

2. There is a blueprint [2] for nova supporting the cinder v2 API now
too, and the related review is actually defaulting to v2, so given the
history on this with the glance discussion, I think it's relevant to
drop it into the same conversation.

3. As for the keystone service catalog being used to abstract some of
this, there was a related blueprint [3] for abstracting the glance URI
that nova would talk to. That blueprint was closed because I think Joe
Gordon had something else cooking for enhancing the keystone service
catalog, but there weren't any details put into the closed blueprint
that Yang Yu opened. Where are we with that?

I plan on bringing this up as a blueprint topic in today's nova meeting.

[1] https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api
[2] https://blueprints.launchpad.net/nova/+spec/support-cinderclient-v2
[3] https://blueprints.launchpad.net/nova/+spec/nova-enable-glance-arbitrary-url

I brought this up in the nova meeting yesterday and said I'd post back
here with what the discussion was. The full meeting notes are here
[1]. This is the snippet:
21:42:34 <russellb> mriedem1: ok now you can go
21:42:46 <mriedem1> russellb: was just going to point out some related bps,
21:42:52 <mriedem1> about supporting v2 APIs for cinder and glance
21:42:55 <mriedem1> #link http://lists.openstack.org/pipermail/openstack-dev/2013-November/020041.html
21:43:05 <mriedem1> i wanted to raise attention since the ML thread kind of died
21:43:19 <mriedem1> but now there is a patch to do the same for cinder v2 so bringing it back up
21:43:26 <mriedem1> since you guys are reviewing blueprints
21:43:53 <russellb> OK
21:44:00 <russellb> so, trying to remember where we left that one ...
21:44:08 <russellb> deployers still want a knob to force it to one or the other
21:44:21 <mriedem1> right, but there were thoughts on discovery
21:44:25 <russellb> in the absence of config, we want some automatic version discovery
21:44:26 <mriedem1> and the keystone service catalog
21:44:41 <mriedem1> essentially can the service catalog abstract a lot of this garbage from nova
21:44:44 <russellb> i think the catalog should be to discover the endpoint, not the versions it has
21:44:55 <russellb> and then you talk to the endpoint to do discovery
21:45:15 <russellb> and cache it so you don't double up on API requests for everything
21:45:19 <russellb> IMO
21:45:42 <mriedem1> yeah, cache on startup
21:45:48 <johnthetubaguy> russellb: +1 it seems a good end goal
21:45:57 <russellb> cool
21:46:00 <mriedem1> i just wasn't sure if jog0 was working something else related to the service catalog that interacted with this
21:46:02 <cyeoh> russellb: agreed. There's been a bit of a discussion about version discovery given that catalog entries for specific versions of apis have started to appear
21:46:21 <johnthetubaguy> well it's cached per user though, it's a little tricky
21:47:16 <jog0> we should make sure keystone agrees with this idea
21:47:22 <johnthetubaguy> we also have a list of glance servers to round robin, not sure how that fits in the keystone catalog, I guess it might
21:47:28 <jog0> also we cache the keystone catalog for cinder already
21:47:29 <russellb> mriedem1: can you ping dolph and get him to weigh in?
21:47:30 <jog0> in the context
21:47:34 <russellb> mriedem1: and maybe summarize what we said here?
21:47:44 <dolphm> \o/
21:47:47 <cyeoh> for some apis we might need one version specific catalog entry temporarily as the endpoints often currently point to a specific version rather than the root
21:47:50 <mriedem1> russellb: sure
21:48:28 <johnthetubaguy> cyeoh: yeah, I agree, we probably need those for the short term
21:48:28 <russellb> great thanks!
21:49:30 <russellb> OK, well thanks for raising awareness of this
21:49:31 <jog0> mriedem1: FYI the only thing I worked on was not saving the entire keystone catalog in the context and that was a while ago
21:49:45 <mriedem1> jog0: ok, i'll ping you in nova
21:49:54 <jog0> mriedem1: ack
...
21:50:37 <dolphm> mriedem1: russellb: +1 for the bits about cached version discovery against unversioned endpoints in the catalog
...
21:51:08 <russellb> dolphm: woot, that was quick
...
21:51:42 <dolphm> i just hope that the version discovery mechanism is smart enough to realize when it's handed a versioned endpoint, and happily run with that
...
21:52:00 <dolphm> (by calling that endpoint and doing proper discovery)
...
21:52:24 <russellb> dolphm: yeah, need to handle that gracefully ...
=================
[1] http://eavesdrop.openstack.org/meetings/nova/2013/nova.2013-11-21-21.01.log.txt
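
Pulling the meeting consensus together, the flow would be roughly:
take the (ideally unversioned) endpoint from the catalog, do version
discovery against it, and cache the result; if handed a versioned
endpoint, fall back to its root, per dolphm's point. A hedged sketch
only; the URL-trimming heuristic is an assumption:

    import requests

    _version_cache = {}

    def discover_versions(endpoint):
        if endpoint in _version_cache:
            return _version_cache[endpoint]
        root = endpoint.rstrip('/')
        if root.rsplit('/', 1)[-1].startswith('v'):   # versioned endpoint?
            root = root.rsplit('/', 1)[0]             # discover from the root
        versions = [v['id'] for v in requests.get(root).json()['versions']]
        _version_cache[endpoint] = versions
        return versions
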
--
Thanks,
Matt Riedemann
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev