Hey everybody,

In the shade stack, one of the main changes is making Proxy a subclass of keystoneauth1.adapter.Adapter. This means we don't have to carry endpoint_override dicts around to pass to REST calls on the session; instead we let ksa handle that for us inside the Adapter structure. With that change, Service/ServiceFilter is now *essentially* just doing the same job as the *_api_version code from os-client-config.
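
To make the shape of that concrete, here is a rough sketch (the class body and method are illustrative, not the actual sdk code; only keystoneauth1.adapter.Adapter is the real API):

  from keystoneauth1 import adapter

  class Proxy(adapter.Adapter):
      # The Adapter already knows its session, service_type,
      # interface and any endpoint_override, so resource calls can
      # use relative URLs and let ksa work out the endpoint.
      def get_image(self, image_id):
          return self.get('/images/%s' % image_id).json()

  # proxy = Proxy(session=sess, service_type='image',
  #               interface='public')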

That logic is still important because at Connection construction time we need to know which version of a given service's Proxy object to instantiate and attach to the connection (so that conn.image is openstack.image.v2._proxy.Proxy on v2 clouds and openstack.image.v1._proxy.Proxy on v1 clouds).
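
For illustration only (make_proxy is a made-up name, not the actual constructor code), that decision boils down to something like:

  import importlib

  def make_proxy(service_type, version, **adapter_kwargs):
      # e.g. ('image', '2') -> openstack.image.v2._proxy.Proxy
      mod = importlib.import_module(
          'openstack.%s.v%s._proxy' % (service_type, version))
      return mod.Proxy(**adapter_kwargs)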

We could just remove Service/ServiceFilter and let the OCC CloudConfig object and config settings handle it, since we currently have default versions set there.

HOWEVER - I'm writing this because we'd also like to stop having default versions encoded in config and instead rely on doing version discovery properly to find the right versions, leaving *_api_version settings as overrides, not required config.
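
As a concrete (hedged) example of what that means at the config layer, assuming os-client-config's get_api_version helper:

  import os_client_config

  cloud = os_client_config.OpenStackConfig().get_one_cloud('example')
  # Today this returns a built-in default (e.g. '2' for image) even
  # if the user never set image_api_version. The goal is for it to
  # only return a value the user explicitly set, with everything
  # else coming from version discovery.
  version = cloud.get_api_version('image')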

This brings us to the issue:

* In the current structure, if we use version discovery, we'd need to run it - for every service on the cloud - in the Connection constructor. That'll be unworkably slow.

* If we don't use version discovery, we can't remove the default version settings from openstack.config/clouds.yaml - and on private clouds or other clouds without vendor settings files, users will have to set overrides by hand (overrides we could have detected for them) whenever the cloud doesn't run the version of a service we've set as our default.

* We could add a magic multi-version Proxy object that only does version discovery when someone tries to use it; once discovery has run, it creates the correct versioned Proxy object and passes attribute access through to it. This has the potential benefit that the 'magic' Proxy could also carry an instance of each discovered version. For instance (a rough code sketch follows the example below):

  conn = Connection(cloud='example')
  # On this access, Connection has a magic Proxy Image object at
  # conn.image. When conn.image.__getattribute__ is run, it is
  # discovered that there is no 'real' proxy. ksa discovery is run,
  # v1 and v2 are discovered and openstack.image.v2._proxy.Proxy
  # is created and added as the 'real' proxy attribute. Subsequent
  # __getattribute__ calls on conn.image get passed through to the
  # v2 proxy:
  conn.image.images()
  # If someone knows they want a specific version for a call, the
  # magic proxy can also add instances for each found version as
  # attributes, allowing:
  conn.image.v1.images()
  # or
  conn.image.v2.images()
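
A very rough sketch of that magic object, using __getattr__ (which only fires when normal attribute lookup fails) to keep the example simple; _make_proxy is a made-up helper standing in for the ksa discovery plus versioned Proxy construction:

  class _MagicProxy(object):
      def __init__(self, connection, service_type):
          self._connection = connection
          self._service_type = service_type
          self._proxy = None

      def __getattr__(self, name):
          if self._proxy is None:
              # Run ksa version discovery once and build the real
              # versioned Proxy. This is also where self.v1, self.v2
              # instances could be attached.
              self._proxy = self._connection._make_proxy(
                  self._service_type)
          return getattr(self._proxy, name)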

* We could add magic __getattribute__ stuff to Connection itself, so that the first time someone touches one of the service proxy attributes we do version discovery for that service and attach the correct proxy object in its place. It could also attach versioned attributes as above (again, a rough sketch follows the example below):

  conn = Connection(cloud='example')
  # On this access, conn.__getattribute__ looks for image, sees
  # that it's in the list of service-types but doesn't have a
  # Proxy object yet. It does ksa discovery, finds that
  # v1 and v2 exist and makes an openstack.image.v2._proxy.Proxy
  # object that it attaches to conn.image. Subsequent
  # __getattribute__ calls on conn note that conn.image exists
  # so just work as normal.
  conn.image.images()
  # If someone knows they want a specific version for a call, the
  # original __getattribute__ call can also add instances for each
  # found version as attributes on the 'main' proxy, allowing:
  conn.image.v1.images()
  # or
  conn.image.v2.images()
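
A similarly rough sketch of the Connection-side variant (real Connection internals differ, and _make_proxy is again hypothetical):

  class Connection(object):
      _service_types = ('compute', 'identity', 'image')  # etc.

      def __getattr__(self, name):
          # Only reached when normal attribute lookup fails, i.e. the
          # first time a given service proxy is touched.
          if name in self._service_types:
              proxy = self._make_proxy(name)  # ksa discovery runs here
              setattr(self, name, proxy)      # later lookups bypass this
              return proxy
          raise AttributeError(name)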

* We could do neither of those things and instead rely on persistent caching of version discovery. Perhaps add a 'login' command to OSC that does an auth, grabs the catalog and locally caches all of the version discovery. If we find that cached data we'll use it; otherwise we'll generate it and cache it on first use. I'm a bit skeptical that this is entirely the right call, as it would potentially be a large startup cost at times the user isn't expecting, and without cooperative remote content (such as the proposed profile document) invalidation becomes super hard to get right. (I DO think we should add persistent caching of version discovery documents to lower cost when using OSC on top of SDK - but I worry about having basic functionality depend on such code.)
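
For completeness, the local caching half of that is straightforward - a hedged sketch with made-up paths and names; the hard part is deciding when the cached data is stale:

  import json
  import os

  CACHE_FILE = os.path.expanduser('~/.cache/openstack/discovery.json')

  def load_cached_discovery():
      try:
          with open(CACHE_FILE) as f:
              return json.load(f)
      except (IOError, ValueError):
          return None

  def save_cached_discovery(data):
      cache_dir = os.path.dirname(CACHE_FILE)
      if not os.path.isdir(cache_dir):
          os.makedirs(cache_dir)
      with open(CACHE_FILE, 'w') as f:
          json.dump(data, f)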

Does anyone have a preference for one approach over the others? Or other suggestions?
