I've done most of this in r1172148:
<https://svn.apache.org/viewvc?view=revision&revision=1172148>
An additional feature for everyone running non-production Nova clusters
is the ability to use the normal auth service but force the base server
URL to something else, for example:
Rackspace("exampleuser", "xxxxxxxxxxxx",
ex_force_base_url="https://127.0.0.1/foo")
This would use the normal Rackspace US auth server, but once it got an
auth token, it would use https://127.0.0.1/foo as the base URL for all
API actions. (I think it all works correctly; please test and let me
know.)
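A minimal sketch of the override logic, in case it helps anyone testing: the forced URL simply wins over the endpoint the auth service hands back. The function and names here are illustrative only, not the actual libcloud internals:

```python
from urllib.parse import urlparse

def resolve_endpoint(auth_endpoint, ex_force_base_url=None):
    """Pick the base URL for API calls: the forced one if given,
    otherwise the endpoint returned by the auth service.
    Returns (host, port, secure, path) as a connection would use them."""
    url = ex_force_base_url or auth_endpoint
    parts = urlparse(url)
    secure = parts.scheme == "https"
    port = parts.port or (443 if secure else 80)
    return parts.hostname, port, secure, parts.path

# Normal case: use the endpoint the auth token came with.
normal = resolve_endpoint("https://servers.api.rackspacecloud.com/v1.0/1234")

# Forced case: the auth token is still fetched from the real auth
# service, but every API action goes to the local test cluster.
forced = resolve_endpoint("https://servers.api.rackspacecloud.com/v1.0/1234",
                          ex_force_base_url="https://127.0.0.1/foo")
```

So with no override the auth catalog endpoint is used unchanged, and with the override all compute calls hit the forced host and path.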
Thanks,
Paul
On Sat, Sep 17, 2011 at 11:55 AM, Paul Querna <[email protected]> wrote:
> I've started a branch with a bit of a refactor to better accommodate
> the situations we are seeing in OpenStack:
>
> 1) I've made a new base class, Connection, that ConnectionKey builds
> on, instead of ConnectionKey being the base of our hierarchy.
>
> 2) This new base class can operate on absolute URLs or (host, port,
> secure) tuples. Our original (host, port, secure) model has been a
> pain in the OpenStack space, where we have seen many endpoints
> mounted at different paths. It also causes difficulty with the split
> between the Authentication and Compute/Storage APIs: you need to hit
> a separate auth URL, which then gives you a URL to the actual
> compute API.
>
> 3) A new class, OpenStackAuthConnection, which deals with parsing
> responses and querying the common authentication APIs in the OpenStack
> world. The OpenStackBaseConnection will use this class.
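(For the curious, a rough sketch of the idea behind point 2, a connection that accepts either an absolute URL or the classic triple. This is illustrative only, not the actual code in the branch:)

```python
from urllib.parse import urlparse

class Connection:
    """Sketch: a base connection constructed from either an absolute
    URL or the classic (host, port, secure) arguments."""
    def __init__(self, url=None, host=None, port=None, secure=True):
        if url is not None:
            # Absolute URL: derive host/port/secure and keep the path,
            # since endpoints may be mounted at a sub-path.
            parts = urlparse(url)
            self.secure = parts.scheme == "https"
            self.host = parts.hostname
            self.port = parts.port or (443 if self.secure else 80)
            self.request_path = parts.path
        else:
            # Classic form: host/port/secure given directly, no sub-path.
            self.secure = secure
            self.host = host
            self.port = port or (443 if secure else 80)
            self.request_path = ""

# An auth service can hand back a full endpoint URL and we just use it.
conn = Connection(url="https://compute.example.org:9292/v1.1/tenant")
```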
>
> I'll try to get it into trunk today for review, but no promises; it's
> too sunny here in SF to stay inside all day :)
>
> Thanks,
>
> Paul
>
> 2011/9/17 Tomaž Muraus <[email protected]>:
>> Yeah, I must agree with Paul on this one. Libcloud started with the
>> assumption that we control all the code which interacts with the provider
>> APIs.
>>
>> If we chose to use an existing provider library, there would be multiple
>> problems:
>>
>> - Since we don't control the code which performs HTTP requests, adding some
>> kind of asynchronous API later on would be basically impossible, or at best
>> very hacky.
>>
>> - If there is a bug or a problem in the library which needs to be fixed ASAP,
>> we would need to wait for the original project to ship a new version first.
>> The other alternative is to fork the project, which is a bad idea.
>>
>> - There is most likely a mismatch between the provider library API and the
>> libcloud API, which could introduce all kinds of performance penalties. For
>> example, because we control the code which talks to the API, we can perform
>> some actions in O(1) requests, whereas going through the library would
>> result in O(n) requests.
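(A toy illustration of that last point, counting round-trips; the API here is purely hypothetical, not any real novaclient or libcloud interface:)

```python
class FakeAPI:
    """Counts HTTP round-trips to contrast the two access patterns."""
    def __init__(self, servers):
        self.servers = servers
        self.requests = 0

    def get_server(self, sid):
        # One round-trip per server.
        self.requests += 1
        return self.servers[sid]

    def list_servers_detail(self):
        # One round-trip for the whole detailed listing.
        self.requests += 1
        return list(self.servers.values())

data = {i: {"id": i, "name": "node-%d" % i} for i in range(50)}

# A wrapper that can only fetch one server at a time: O(n) requests.
per_item = FakeAPI(data)
nodes = [per_item.get_server(i) for i in data]

# Code that controls the HTTP layer can use the detail listing: O(1).
batched = FakeAPI(data)
nodes = batched.list_servers_detail()
```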
>>
>> Thanks,
>> Tomaz
>>
>> On Fri, Sep 16, 2011 at 4:47 PM, Paul Querna <[email protected]> wrote:
>>
>>> Libcloud is not a collection of dependencies on other Python code; it
>>> is a library for interacting with REST APIs.
>>>
>>> this has been a core part of its design from the very beginning.
>>>
>>> -1 to merging.
>>>
>>>
>>> On Fri, Sep 16, 2011 at 3:18 AM, Mike Nerone
>>> <reply+i-1661069-39729c8bfa0df364d880ed7936d5a186fafdb...@reply.github.com>
>>> wrote:
>>> > This pull request represents a new OpenStack driver based on
>>> > python-novaclient. There are no tests at this time, so it's not ready
>>> > to merge. I wanted to spark some discussion on this possibility, which
>>> > I am very much for. python-novaclient is actually used internally by
>>> > Rackspace and other OpenStack providers, so it is likely to be
>>> > extremely well maintained, and novaclient already covers a lot of
>>> > corner cases. Doing it from scratch in libcloud results in a great
>>> > deal of duplicated work and code, and it would probably still lag in
>>> > robustness unless we were to tediously crib nearly all of the
>>> > novaclient logic.
>>> >
>>> > BTW, the first commit collapses the OpenStack 1.0 driver into the
>>> > Rackspace driver (the only public implementation of OpenStack 1.0), as
>>> > has been discussed recently on the mailing list and in IRC, with a
>>> > general consensus that this is a good thing to do. Aside from that,
>>> > the bit you really want to look at for the novaclient-based driver is
>>> > *all* in libcloud/compute/drivers/openstack.py.
>>> >
>>> > This is a proposed solution for
>>> > https://issues.apache.org/jira/browse/LIBCLOUD-83
>>> >
>>> > You can merge this Pull Request by running:
>>> >
>>> > git pull https://github.com/racker/libcloud nova-client-with-novaclient
>>> >
>>> > Or you can view, comment on it, or merge it online at:
>>> >
>>> > https://github.com/apache/libcloud/pull/24
>>> >
>>> > -- Commit Summary --
>>> >
>>> > * Remove OpenStack 1.0 support, collapsing into RS.
>>> > * Beginning of PoC novaclient-based Nova driver.
>>> > * Pyflakes, shhh.
>>> > * Allow a subclass to specify API version.
>>> > * Whoops - use the param, don't fetch the list.
>>> > * Add list_sizes.
>>> > * Add list_images.
>>> > * Add ex_set_password().
>>> > * Add ex_set_server_name.
>>> > * Make _to_node handle no-IPs.
>>> > * Add create_node().
>>> > * Add ex_resize-related methods.
>>> > * Use resource manager ops to save round-trips.
>>> > * Add rebuild().
>>> > * Skeleton support for undocumented floating IPs.
>>> > * Add reboot_node().
>>> > * Add destroy_node() and ex_get_node().
>>> > * Add ex_quotas().
>>> > * Add ex_save_image().
>>> > * Some discussion comments.
>>> > * Require python-novaclient in setup.py.
>>> >
>>> > -- File Changes --
>>> >
>>> > M libcloud/compute/drivers/openstack.py (806)
>>> > M libcloud/compute/drivers/rackspace.py (618)
>>> > M setup.py (11)
>>> > R test/compute/fixtures/rackspace/v1_slug_flavors_detail.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_images_detail.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_images_post.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_limits.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_servers.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_servers_detail.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_servers_detail_deployment_missing.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_servers_detail_deployment_pending.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_servers_detail_deployment_same_uuid.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_servers_detail_deployment_success.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_servers_detail_empty.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_servers_detail_metadata.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_servers_ips.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_servers_metadata.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_shared_ip_group.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_shared_ip_groups.xml (0)
>>> > R test/compute/fixtures/rackspace/v1_slug_shared_ip_groups_detail.xml (0)
>>> > M test/compute/test_deployment.py (2)
>>> > D test/compute/test_openstack.py (413)
>>> > M test/compute/test_rackspace.py (386)
>>> >
>>> > -- Patch Links --
>>> >
>>> > https://github.com/apache/libcloud/pull/24.patch
>>> > https://github.com/apache/libcloud/pull/24.diff
>>> >
>>> > --
>>> > Reply to this email directly or view it on GitHub:
>>> > https://github.com/apache/libcloud/pull/24
>>> >
>>>
>>
>