[Openstack] [openstack] VM on compute node cannot be assigned an IP

2012-06-18 Thread David
Hi all,

I built a two-host OpenStack environment on Ubuntu 12.04: one host is the
controller and the other is a compute node.

The problem I have is that a VM on the compute node cannot be assigned its
fixed IP, while a VM on the controller gets one without any problem.

 

When I use tcpdump to trace the DHCP request/response, I see the following.

On the compute node:

~# tcpdump -i br100 -n port 67 or 68

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on br100, link-type EN10MB (Ethernet), capture size 65535 bytes

01:13:46.794501 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from
fa:16:3e:50:6d:1c, length 280

01:13:49.799593 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from
fa:16:3e:50:6d:1c, length 280

01:13:52.803964 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from
fa:16:3e:50:6d:1c, length 280

 

On the controller:

~# tcpdump -i br100 -n port 67 or 68

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on br100, link-type EN10MB (Ethernet), capture size 65535 bytes

01:13:47.995389 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from
fa:16:3e:50:6d:1c, length 280

01:13:47.995785 IP 192.168.4.1.67 > 192.168.4.3.68: BOOTP/DHCP, Reply,
length 309

01:13:51.000454 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from
fa:16:3e:50:6d:1c, length 280

01:13:51.000911 IP 192.168.4.1.67 > 192.168.4.3.68: BOOTP/DHCP, Reply,
length 309

01:13:54.004840 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from
fa:16:3e:50:6d:1c, length 280

01:13:54.005196 IP 192.168.4.1.67 > 192.168.4.3.68: BOOTP/DHCP, Reply,
length 309

 

From the above I can see that the VM's DHCP request does reach the dnsmasq
server on the controller.

But when the controller sends the reply, the compute node never receives it.
That is why a VM on the compute node cannot be assigned its fixed IP, while a
VM on the controller can.

Even though I can see where it goes wrong, I cannot fix it. -_-#

Could anyone tell me how to fix this problem?

 

Thanks in advance.

 

The following is my configuration.

 

The relevant part of nova.conf is as follows:

# network specific settings

--network_manager=nova.network.manager.FlatDHCPManager

--public_interface=br100

--flat_interface=eth0

--flat_network_bridge=br100

--fixed_range=192.168.4.0/27

--floating_range=192.168.7.208/28

--network_size=32

--flat_injected=false

--force_dhcp_release

 

The network config :

The controller :

auto lo

iface lo inet loopback

auto br100

iface br100 inet static

bridge_ports eth0

bridge_stp  off

bridge_maxwait  0

bridge_fd   0

address 192.168.7.151

netmask 255.255.255.224

gateway 192.168.7.158

 

The compute node :

auto lo

iface lo inet loopback

auto br100

iface br100 inet static

bridge_ports eth0

bridge_stp  off

bridge_maxwait  0

bridge_fd   0

address 192.168.7.152

netmask 255.255.255.224

gateway 192.168.7.158

 

 

David(李跃洲)

E-MAIL: yuezhou...@hisoft.com

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Where's Object's metadata located ?

2012-06-18 Thread Kuo Hugo
Hi Adrian,

Thanks for your explanation.

About Q2, the manifest question:
Is there any audit mechanism to delete the segments of a failed upload?
What if the upload procedure is interrupted by the user?
As you said, I think the segments are still available for access.
On the other hand, that means those segment objects will stay on disk in the
object server, even though the upload failure was caused by the user himself.

Is there any approach to remove those segment objects? Otherwise they may
waste some disk space.

Thanks


2012/6/17 Adrian Smith adr...@17od.com

  Q1: Where's the metadata of an object ?
 It's stored in extended attributes on the filesystem itself. This is the
 reason XFS (or another filesystem supporting extended attributes) is
 required.

  Could I find the value?
 Sure. You just need some way of a) identifying the object on disk and
 b) querying its extended attributes (using, for example, the Python
 xattr package).
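
 For illustration, here is a rough sketch of that lookup in Python. It assumes
 the object server keeps the pickled metadata dict in the user.swift.metadata
 extended attribute (continued in user.swift.metadata1, ... when it is large);
 the .data path below is just a placeholder:

    # Read an object's metadata straight from its .data file on disk.
    import pickle
    import xattr

    def read_object_metadata(path):
        raw, index = b'', 0
        while True:
            key = 'user.swift.metadata' + (str(index) if index else '')
            try:
                raw += xattr.getxattr(path, key)
            except (IOError, OSError):
                break
            index += 1
        return pickle.loads(raw) if raw else {}

    # Placeholder path; point it at a real object file under /srv/node.
    print(read_object_metadata('/srv/node/sdb1/objects/1/abc/deadbeef/1.data'))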

  Q2: What if a large object is interrupted during upload, will the
 manifest objects be deleted?
 Large objects (i.e. those > 5 GB) must be split up client-side and the
 segments uploaded individually. When all the segments are uploaded the
 manifest must then be created by the client. What I'm trying to get at
 is that each segment and even the manifest are completely independent
 objects. A failure during the upload of any one segment has no impact
 on other segments or the manifest.

 Adrian


 On 16 June 2012 09:53, Kuo Hugo tonyt...@gmail.com wrote:
  Hi folks ,
 
  Q1:
  Where's the metadata of an object ?
  For example the X-Object-Manifest. Is it stored in the inode?
  I did not see the metadata X-Object-Manifest in the container's DB.
 
   Could I find the value?
 
  Q2:
  What if a large object is interrupted during upload, will the manifest
  objects be deleted?
  For example,
  OBJ1: 200MB
  I execute $swift upload con1 OBJ1 -S 1024000
  I force an interrupt while sending segment 10.
 
  I believe that OBJ1 won't live in con1; what will happen to the rest of the
  manifest objects?
 
  Those objects seem to still live in the con1_segments container. Is there any
  mechanism to audit OBJ1 and delete those manifest objects?
 
 
 
  --
  +Hugo Kuo+
  tonyt...@gmail.com
  +886 935004793
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 




-- 
+Hugo Kuo+
tonyt...@gmail.com
+886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Where's Object's metadata located ?

2012-06-18 Thread Juan J. Martinez
On 18/06/12 11:57, Kuo Hugo wrote:
 Hi Adrian , 
 
 Thanks for your explanation ...
 
 About Q2 , manifest question
 Is there any audit mechanism to delete segments of failure uploading object?
 What if the uploading procedure is been interrupted by user . 
 As you said , I think the segments still available for accessing . 
 On the other hand , it means that those segmented objects will keep on
 disk in object-server , even though the uploading failure is caused by
 user him self.   


Those segments are regular files.

The big file support is just the big file uploaded split into parts smaller
than 5GB, plus a manifest file that points to the location of the parts.

The user can access both the parts (which are regular files) and the
manifest, which basically re-assembles the parts automatically on the fly.
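
For what it's worth, the manifest step looks roughly like this from the client
side with python-swiftclient (auth URL, credentials and names are
placeholders):

    # After the parts are uploaded to con1_segments/OBJ1/..., create a
    # zero-byte manifest object whose X-Object-Manifest header points at
    # that prefix; a GET on con1/OBJ1 then streams the parts in order.
    from swiftclient import client as swift_client

    conn = swift_client.Connection(authurl='http://127.0.0.1:8080/auth/v1.0',
                                   user='account:user', key='secret')
    conn.put_object('con1', 'OBJ1', contents='',
                    headers={'X-Object-Manifest': 'con1_segments/OBJ1/'})

Deleting the manifest on its own does not touch the parts, which is why the
leftover segments from an aborted upload have to be cleaned up separately.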

Kind regards,

Juan

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] maybe a bug, but where? (dnsmasq-dhcp versus Redis inside KVM)

2012-06-18 Thread Christian Parpart
Hey all,

after having upgraded to dnsmasq 1.62 (current) and increasing the lease
times to 7 days, I now have a very quiet syslog on my gateway host.
However, there is one KVM instance (running Redis inside, with a 16GB RAM
flavor) that still loses its IP very frequently.

It now seems that, even after just 4 hours of KVM instance uptime, the
nova-network node receives the following and logs it:

2260 Jun 15 16:51:37 cesar1 dnsmasq-dhcp[8707]: DHCPREQUEST(br100)
10.10.40.16 fa:16:3e:3d:ff:f3
2261 Jun 15 16:51:37 cesar1 dnsmasq-dhcp[8707]: DHCPACK(br100) 10.10.40.16
fa:16:3e:3d:ff:f3 redis-appdata1
[]
3381 Jun 15 21:59:41 cesar1 dnsmasq-dhcp[10889]: DHCPREQUEST(br100)
10.10.40.16 fa:16:3e:3d:ff:f3
3382 Jun 15 21:59:41 cesar1 dnsmasq-dhcp[10889]: DHCPACK(br100) 10.10.40.16
fa:16:3e:3d:ff:f3 redis-appdata1
[]

And 21:59 was exactly the time our Redis server went down, although I cannot
find anything except cron logs in the KVM instance's syslog.

My question now is, why is such a request causing network unreachability to
that node?

Many thanks in advance,
Christian Parpart.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Test tool

2012-06-18 Thread Jay Pipes

On 06/14/2012 05:26 AM, Neelakantam Gaddam wrote:

Hi All,

Recently I came across the tool called Tempest, which performs integration
tests on a live cluster running OpenStack.

Can we use this tool to test Quantum networks as well?


Yes, though support is very new :)

If you run Tempest (nosetests -sv --nologcapture tempest), the network 
tests will be run by default if and only if there is a network endpoint 
set up.



Are there any tools which do end-to-end testing of OpenStack components
(including Quantum), like creating multiple networks, launching VMs, adding
compute nodes, pinging VMs, etc.?


That would be tempest :) Mostly. Tempest doesn't yet test things like 
bringing up bare-metal compute nodes, but perhaps in the future it will.


All the best,
-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Instance termination is not stable

2012-06-18 Thread Sajith Kariyawasam
Hi all,

I have the OpenStack Essex version installed, and I have created several
instances based on an Ubuntu 12.04 UEC image; those are up and running.

When I try to terminate an instance I get an exception (the log is included
below), and in the console its status is shown as Shutoff with the task
Deleting. Even though I tried terminating the instance again and again,
nothing happens. Only after I restart the (nova) machine can those instances
be terminated.

This issue does not occur every time, but occasionally; as I noted, it occurs
when there are more than 2 instances up and running at the same time. If I
create one instance, terminate it, create another, terminate that one, and so
on, there is no problem terminating.

What could be the problem here? any suggestions are highly appreciated.

Thanks


ERROR LOG (/var/log/nova/nova-compute.log)
==========================================

2012-06-18 18:43:55 DEBUG nova.manager [-] Skipping
ComputeManager._run_image_cache_manager_pass, 17 ticks left until next run
from (pid=24151) periodic_tasks
/usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-06-18 18:43:55 DEBUG nova.manager [-] Running periodic task
ComputeManager._reclaim_queued_deletes from (pid=24151) periodic_tasks
/usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:43:55 DEBUG nova.compute.manager [-]
FLAGS.reclaim_instance_interval = 0, skipping... from (pid=24151)
_reclaim_queued_deletes
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:2380
2012-06-18 18:43:55 DEBUG nova.manager [-] Running periodic task
ComputeManager._report_driver_status from (pid=24151) periodic_tasks
/usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:43:55 INFO nova.compute.manager [-] Updating host status
2012-06-18 18:43:55 DEBUG nova.virt.libvirt.connection [-] Updating host
stats from (pid=24151) update_status
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:2467
2012-06-18 18:43:55 DEBUG nova.manager [-] Running periodic task
ComputeManager._poll_unconfirmed_resizes from (pid=24151) periodic_tasks
/usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:44:17 DEBUG nova.rpc.amqp [-] received {u'_context_roles':
[u'swiftoperator', u'Member', u'admin'], u'_context_request_id':
u'req-01ca70c8-2240-407b-92d1-5a59ee497291', u'_context_read_deleted':
u'no', u'args': {u'instance_uuid':
u'd250-1d8b-4973-8320-e6058a2058b9'}, u'_context_auth_token':
'SANITIZED', u'_context_is_admin': True, u'_context_project_id':
u'194d6e24ec1843fb8fbd94c3fb519deb', u'_context_timestamp':
u'2012-06-18T13:14:17.013212', u'_context_user_id':
u'f8a75778c36241479693ff61a754f67b', u'method': u'terminate_instance',
u'_context_remote_address': u'172.16.0.254'} from (pid=24151) _safe_log
/usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
2012-06-18 18:44:17 DEBUG nova.rpc.amqp
[req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
194d6e24ec1843fb8fbd94c3fb519deb] unpacked context: {'user_id':
u'f8a75778c36241479693ff61a754f67b', 'roles': [u'swiftoperator', u'Member',
u'admin'], 'timestamp': '2012-06-18T13:14:17.013212', 'auth_token':
'SANITIZED', 'remote_address': u'172.16.0.254', 'is_admin': True,
'request_id': u'req-01ca70c8-2240-407b-92d1-5a59ee497291', 'project_id':
u'194d6e24ec1843fb8fbd94c3fb519deb', 'read_deleted': u'no'} from
(pid=24151) _safe_log
/usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
2012-06-18 18:44:17 INFO nova.compute.manager
[req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: decorating:
|function terminate_instance at 0x2bd3050|
2012-06-18 18:44:17 INFO nova.compute.manager
[req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: arguments:
|nova.compute.manager.ComputeManager object at 0x20ffb90|
|nova.rpc.amqp.RpcContext object at 0x4d2a450|
|d250-1d8b-4973-8320-e6058a2058b9|
2012-06-18 18:44:17 DEBUG nova.compute.manager
[req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
194d6e24ec1843fb8fbd94c3fb519deb] instance
d250-1d8b-4973-8320-e6058a2058b9: getting locked state from (pid=24151)
get_lock /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1597
2012-06-18 18:44:17 INFO nova.compute.manager
[req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: locked: |False|
2012-06-18 18:44:17 INFO nova.compute.manager
[req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: admin: |True|
2012-06-18 18:44:17 INFO nova.compute.manager
[req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: executing:
|function terminate_instance at 0x2bd3050|
2012-06-18 18:44:17 DEBUG nova.utils
[req-01ca70c8-2240-407b-92d1-5a59ee497291 

Re: [Openstack] glance_api_servers vs. glance_host vs. keystone?

2012-06-18 Thread Kevin L. Mitchell
On Fri, 2012-06-15 at 20:54 -0400, Lars Kellogg-Stedman wrote:
 Thanks for the reply, makes sense.  Just to make sure I understand
 things, it sounds like Nova does not currently query Keystone for
 endpoints and continues to rely on explicit configuration (or to
 rephrase your answer, the reason these options have not gone away is
 because Nova does not yet have the necessary support for Keystone).
 Is that approximately correct?

The problem with the Keystone endpoints is that you have to make a query
to Keystone to get them.  We want to reduce the number of hits we make
on Keystone, not increase them—there are already too many as it is.
Thus, I suspect that nova may not even use the Keystone endpoints.  It
*does* support image URLs, however.  Thus, you use the options to
configure the default glance endpoint, and if you want to hit another
glance, you simply give a URL to the desired image rather than a simple
identifier.
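
For reference, the Essex-era flags for that default look roughly like this
(hosts are placeholders):

    --glance_host=glance1.example.com
    --glance_api_servers=glance1.example.com:9292,glance2.example.com:9292

Anything outside that list is reached by handing nova a full image URL
instead of a bare image ID.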

(My comments about the support for endpoints in this email may differ
from my previous comments; chalk that up to further reflection on the
problem being solved…)
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] glance_api_servers vs. glance_host vs. keystone?

2012-06-18 Thread Nathanael Burton
What's the point of a service catalog (list of endpoints) if we don't want
to use it?! Looking up endpoints should be a cacheable request and in the
grand scheme of things -- low impact.

Nate
On Jun 18, 2012 10:13 AM, Kevin L. Mitchell kevin.mitch...@rackspace.com
wrote:

 On Fri, 2012-06-15 at 20:54 -0400, Lars Kellogg-Stedman wrote:
  Thanks for the reply, makes sense.  Just to make sure I understand
  things, it sounds like Nova does not currently query Keystone for
  endpoints and continues to rely on explicit configuration (or to
  rephrase your answer, the reason these options have not gone away is
  because Nova does not yet have the necessary support for Keystone).
  Is that approximately correct?

 The problem with the Keystone endpoints is that you have to make a query
 to Keystone to get them.  We want to reduce the number of hits we make
 on Keystone, not increase them—there are already too many as it is.
 Thus, I suspect that nova may not even use the Keystone endpoints.  It
 *does* support image URLs, however.  Thus, you use the options to
 configure the default glance endpoint, and if you want to hit another
 glance, you simply give a URL to the desired image rather than a simple
 identifier.

 (My comments about the support for endpoints in this email may differ
 from my previous comments; chalk that up to further reflection on the
 problem being solved…)
 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] glance_api_servers vs. glance_host vs. keystone?

2012-06-18 Thread Kevin L. Mitchell
On Mon, 2012-06-18 at 10:18 -0400, Nathanael Burton wrote:
 What's the point of a service catalog (list of endpoints) if we don't
 want to use it?! Looking up endpoints should be a cacheable request
 and in the grand scheme of things -- low impact.

We do use the service catalog, quite extensively—on the client side.
From nova to glance, I suspect we don't use the service catalog, since
nova just uses the delegated credentials from the user.  Looking up the
service catalog is indeed quite cacheable; however: I don't believe that
such code has been added; it may be necessary to pierce abstraction
boundaries to perform that caching; and the glance endpoint is likely to
be pretty static anyway, and thus fine for setting by means of
configuration.  And again, it has been a while since I looked at that
code path…
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] glance_api_servers vs. glance_host vs. keystone?

2012-06-18 Thread Lars Kellogg-Stedman
 I don't see nova-network running...

And in fact, that seems to have been at the root of a number of
problems.  Thanks!  With some work over the weekend I'm now successfully
booting instances with networking using the Flat network manager.
Great.

It wasn't clear from the documentation that nova-network was a
*necessary* service (that is, it wasn't clear that the failure mode
would be "fail to create an instance" vs. "your instance has no
networking").  We were punting on network configuration until after we
were able to successfully boot instances...so that was apparently a
bad idea on our part.

-- 
Lars Kellogg-Stedman l...@seas.harvard.edu   |
Senior Technologist| http://ac.seas.harvard.edu/
Academic Computing | 
http://code.seas.harvard.edu/
Harvard School of Engineering and Applied Sciences |

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] glance_api_servers vs. glance_host vs. keystone?

2012-06-18 Thread Lars Kellogg-Stedman
 Thus, I suspect that nova may not even use the Keystone endpoints...

That sounds crazy to me, but I just got here.  That is, why go to the
effort of developing an endpoint registration service and then decide not
to use it?  Given the asynchronous, distributed nature of OpenStack,
an endpoint directory seems like a good idea.

Just out of curiosity, what *does* use the endpoint registry in
Keystone (in the Essex release)?

-- 
Lars Kellogg-Stedman l...@seas.harvard.edu   |
Senior Technologist| http://ac.seas.harvard.edu/
Academic Computing | 
http://code.seas.harvard.edu/
Harvard School of Engineering and Applied Sciences |


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack common

2012-06-18 Thread Sergio A. de Carvalho Jr.
There's a new version of pep8 out today (1.3.1) which fixes a few
indentation cases of if statements that were broken in 1.3.
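
For anyone who hasn't hit it yet, the E127 case looks like this (a toy
example):

    def f(a, b):
        return a + b

    # Continuation aligned with the first argument: accepted.
    x = f(1,
          2)

    # Continuation indented past the first argument: pep8 1.3 reports E127.
    y = f(1,
              2)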

Sergio

On Sun, Jun 17, 2012 at 9:01 PM, Adrian Smith adr...@17od.com wrote:

 pep8 1.3 (released 15-Jun) is much stricter about the indentation used
 on continuation lines.

 After upgrading we started seeing quite a few instances of E127,E128...

 E127 continuation line over-indented for visual indent.

 Adrian


 On 17 June 2012 17:52, Jay Pipes jaypi...@gmail.com wrote:
  What version of pep8 are you using? The errors look to be warnings that
 are
  no longer printed in more modern versions of pep8...
 
  All the best,
  -jay
 
 
  On 06/17/2012 03:42 AM, Gary Kotton wrote:
 
  Hi,
  Over the weekend patches were made to Quantum to support pep8 1.3.
  Some of the patches were in the openstack common code. I have opened a
  bug to address this in the openstack common code
  (https://bugs.launchpad.net/openstack-common/+bug/1014216)
  I am currently updating the common code and will hopefully have a patch
  soon to address this for review.
  Thanks
  Gary
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help : https://help.launchpad.net/ListHelp
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] OpenStack ate my error message!

2012-06-18 Thread Lars Kellogg-Stedman
Working with OpenStack for the past few weeks I've noticed a tendency
for the tools to eat error messages in a way that makes problem
determination tricky.

For example:

Early on, there were some authentication issues in my configuration.
The error message presented by the command line tools was:

  ERROR: list indices must be integers, not str

It was only by trawling through the DEBUG logs that I was able to find
the actual traceback (which indicated that Keystone was returning a
503 error, I think, and not returning the JSON expected by the
client).

And more recently:

The error generated by nova-network not running was a series of
tracebacks in the nova-compute log:

- One for nova.rpc.impl_qpid
- Another for nova.compute.manager
- Another for nova.rpc.amqp

I saw these errors prior to my mailing list post, but it was difficult
to connect them to useful facts about our environment.

I'm not suggesting there's an easy fix to this.  Delivering error
messages correctly in this sort of asynchronous, RPC environment is
difficult.

Thanks for all the hard work,

-- 
Lars Kellogg-Stedman l...@seas.harvard.edu   |
Senior Technologist| http://ac.seas.harvard.edu/
Academic Computing | 
http://code.seas.harvard.edu/
Harvard School of Engineering and Applied Sciences |


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] glance_api_servers vs. glance_host vs. keystone?

2012-06-18 Thread Kevin L. Mitchell
On Mon, 2012-06-18 at 10:41 -0400, Lars Kellogg-Stedman wrote:
 That sounds crazy to me, but I just got here.  That is, why go to the
 effort to develop an endpoint registration service and then decide not
 to use it?  Given the asynchronous, distributed nature of OpenStack,
 an endpoint directory seems like a good idea.
 
 Just out of question, what *does* use the endpoint registry in
 KeyStone (in the Essex release)?

The clients.  The endpoint registration system, so far as I understand,
was primarily intended for use by the clients.  It certainly would be
useful for use by the servers, but there are subtleties, and I am not
aware that it is currently used by nova-glance.  But yet again, I have
not looked at that code for a while; last time I was there, I was adding
the initial support for nova to feed the user's credentials into glance;
that was pre-Diablo, if I recall correctly.
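
For what it's worth, that client-side lookup is straightforward with
python-keystoneclient (credentials and URL are placeholders):

    # Authenticate, then pull the glance endpoint out of the service
    # catalog instead of hard-coding it anywhere.
    from keystoneclient.v2_0 import client as ksclient

    keystone = ksclient.Client(username='demo', password='secret',
                               tenant_name='demo',
                               auth_url='http://127.0.0.1:5000/v2.0')
    glance_url = keystone.service_catalog.url_for(service_type='image',
                                                  endpoint_type='publicURL')
    print(glance_url)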

Nova, glance, keystone, etc. are all moving targets; there are tons of
things that have only been added recently in the grand scheme of things,
and there are many loose ends still to be tied.  As an example, while I
was rototilling the quotas system in nova, new quotas were added that
changed the requirements I was working from, and since I was running up
against deadlines, I had to leave some of those ends untied for now;
there's no telling when I'll be able to get back to those loose ends and
finally tie them up.  I would not be surprised if something similar has
happened WRT the endpoints system, since there are so many subtleties
that need to be taken into account.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] glance_api_servers vs. glance_host vs. keystone?

2012-06-18 Thread Jay Pipes

On 06/18/2012 10:41 AM, Lars Kellogg-Stedman wrote:

Thus, I suspect that nova may not even use the Keystone endpoints...


That sounds crazy to me, but I just got here.  That is, why go to the
effort to develop an endpoint registration service and then decide not
to use it?  Given the asynchronous, distributed nature of OpenStack,
an endpoint directory seems like a good idea.


It's mostly a vestigial thing. Before Keystone had an endpoint registry, 
we used configuration options to indicate important endpoint URLs.



Just out of question, what *does* use the endpoint registry in
KeyStone (in the Essex release)?


I would imagine that most things will move towards using the endpoint 
registry over time and getting rid of multiple hardcoded endpoint URLs 
in configuration files.


Best,
-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Blueprint automatic-secure-key-generation] Automatic SECURE_KEY generation

2012-06-18 Thread Sascha Peilicke
On 06/15/2012 11:08 PM, Gabriel Hurley wrote:
 To the points Sascha raised, and in particular in response to the code method 
 suggested here 
 (https://github.com/saschpe/horizon/commit/1414d538d65d2d3deb981db0ab9e888a3c96a149)
  I think we are largely in agreement except for one point:
 
 I have no problem with this approach for development, or even for distro 
 packaging, but I strongly dislike this automatic generation for a production 
 deployment. Particularly there are issues about distributed deployments where 
 the installs need to share a common secret key to allow for shared data 
 signing validation, etc. I would rather not obfuscate these very real and 
 very relevant concerns for production deployments for the sake of making 
 everything else a little easier. Especially because it will lead to very 
 hard-to-diagnose bugs because people aren't aware of the magic happening 
 behind the scenes.

Ah, you're thinking of a setup where there are multiple dashboard VMs
behind a load-balancer serving requests. Indeed, there the dashboard
instances should either share the SECRET_KEY or the load-balancer has to
make sure that all requests of a given session are redirected to the
same dashboard instance.

But shouldn't local_settings.py still take preference over settings.py?
Thus the admin could still set a specific SECRET_KEY in
local_settings.py regardless of the default (auto-generated) one. So I
would only have to fix the patch by not removing the documentation about
SECRET_KEY from local_settings.py, right?
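
For the record, the fallback amounts to something like this (a sketch, not
the actual patch; the key-file location is arbitrary):

    # Only used when local_settings.py does not already define SECRET_KEY.
    # Persist a random key next to the settings so every process of this
    # installation agrees on it.
    import os
    import random
    import string

    KEY_FILE = os.path.join(os.path.dirname(__file__), '.secret_key')

    def get_or_create_secret_key():
        if os.path.exists(KEY_FILE):
            return open(KEY_FILE).read().strip()
        chars = string.ascii_letters + string.digits + string.punctuation
        key = ''.join(random.SystemRandom().choice(chars) for _ in range(64))
        old_umask = os.umask(0o177)  # owner-only permissions on the key file
        try:
            with open(KEY_FILE, 'w') as key_file:
                key_file.write(key)
        finally:
            os.umask(old_umask)
        return key

    SECRET_KEY = get_or_create_secret_key()

In the load-balanced case above, the generated file would still have to be
shared between the dashboard nodes or overridden in local_settings.py.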


 As such, I'd rather have the code you wrote be part of the environment 
 build/run_tests code such that there's an *optional* tool to do this, but it 
 doesn't hide legitimate security and functionality concerns.

Unfortunately, this is only relevant for securing production
deployments. Nobody cares if a developer instance is set up securely ;-)

 I'm also cc'ing Paul McMillan, Django's resident security expert, to get his 
 take on this.

Have you already got an answer from Paul? He wasn't in the CC list, actually.

 
 - Gabriel
 
 -Original Message-
 From: Sascha Peilicke [mailto:sasc...@suse.de]
 Sent: Friday, June 15, 2012 12:38 AM
 To: Gabriel Hurley
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Blueprint automatic-secure-key-generation] Automatic
 SECURE_KEY generation

 Hi Gabriel,

 let's discuss the blueprint [1] on the list.

 On 06/14/2012 10:36 PM, Gabriel Hurley wrote:
 Blueprint changed by Gabriel Hurley:

 Whiteboard set to:
 After discussing this with both the Horizon core team and Django's
 security czar/core committer Paul McMillan, we've decided the best way
 to proceed with this is as follows:

   * Remove the default SECRET_KEY so it cannot be shared causing security
 problems.
   * For development, add a few lines to auto-generate a SECRET_KEY if one
 isn't present.
 Ok, nothing to add, this is what the patch actually does [2].

   * For production, document that a SECRET_KEY is required, how to
 generate one, etc.
 The question to me is why this should matter to the admin at all.
 Security-wise, the only thing that matters is that the SECRET_KEY is unique
 per dashboard installation and set before the first start.
 Whether this is done by the admin or by the patch under discussion doesn't
 really matter. However, even if documented appropriately, setting up a
 complete OpenStack deployment isn't exactly a piece of cake. Having to
 remember yet another config value is error-prone IMO and easily forgotten.
 Actually, Django already does a really good job of documenting this, but we
 stumbled upon this because all of our internal development clouds had the
 same (default) SECRET_KEY. Seems like we just forgot ;-)

   * Work with the distros to make sure they properly generate a unique
 SECRET_KEY for each install.
 This is why I started the whole topic, as there are cases where this is just
 impractical. But let's take it slowly, current OpenStack rpm packages for
 openSUSE / SLES generate a SECRET_KEY as a %post scriptlet (i.e. just after
 package installation). This doesn't hurt and is probably what you had in 
 mind.
 However, this only works if there is actually a package to install.

 Unfortunately, this isn't the case when the dashboard is deployed from an
 appliance image. Of course, you could check and set the SECRET_KEY after
 successfully booting up the appliance image via a script (if it is not a 
 snapshot
 of an already booted system) or you could just do it when the Django app is
 actually started. The latter seems more practical to me. And lastly, when 
 it's
 part of the horizon code-base, the issue could be solved for all deployments.

 Footnotes:
  [1]
 https://blueprints.launchpad.net/horizon/+spec/automatic-secure-key-
 generation
  [2]
 https://github.com/saschpe/horizon/commit/1414d538d65d2d3deb981db0a
 b9e888a3c96a149

 --
 With kind regards,
 Sascha Peilicke
 SUSE Linux GmbH, Maxfeldstr. 5, D-90409 Nuernberg, Germany
 GF: Jeff Hawn, 

Re: [Openstack] [Openstack-qa-team] wait_for_server_status and Compute API

2012-06-18 Thread Jay Pipes

On 06/18/2012 12:01 PM, David Kranz wrote:

There are a few tempest tests, and many in the old kong suite that is
still there, that wait for a server status that is something other than
ACTIVE or VERIFY_RESIZE. These other states, such as BUILD or REBOOT,
are transient so I don't understand why it is correct for code to poll
for those states. Am I missing something or do those tests have race
condition bugs?


No, you are correct, and I have made some comments in recent code 
reviews to that effect.


Here are all the task states:

https://github.com/openstack/nova/blob/master/nova/compute/task_states.py

Out of all those task states, I believe the only one safe to poll in a 
wait loop is RESIZE_VERIFY. All the others are prone to state 
transitions outside the control of the user.


For the VM states:

https://github.com/openstack/nova/blob/master/nova/compute/vm_states.py

I consider the following to be non-racy, quiescent states:

ACTIVE
DELETED
STOPPED
SHUTOFF
PAUSED
SUSPENDED
ERROR

I consider the following to be racy states that should not be tested for:

MIGRATING -- Instead, the final state should be checked for...
RESIZING -- Instead, the RESIZE_VERIFY and RESIZE_CONFIRM task states 
should be checked


I have absolutely no idea what the state termination is for the 
following VM states:


RESCUED -- is this a permanent state? Is this able to be queried for in 
a consistent manner before it transitions to some further state?


SOFT_DELETE -- I have no clue what the purpose or queryability of this 
state is, but would love to know...
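
To make that concrete, the guidance boils down to a polling helper that only
ever waits for one of the quiescent states, e.g. (a sketch; get_status is
whatever callable the test already has for fetching the server's status):

    import time

    QUIESCENT = set(['ACTIVE', 'VERIFY_RESIZE', 'STOPPED', 'SHUTOFF',
                     'PAUSED', 'SUSPENDED', 'ERROR', 'DELETED'])

    def wait_for_server_status(get_status, wanted, timeout=300, interval=3):
        # Refuse to wait on transient states like BUILD or REBOOT, which can
        # disappear between two polls and turn the test into a race.
        assert wanted in QUIESCENT, 'refusing to wait on a transient state'
        deadline = time.time() + timeout
        while time.time() < deadline:
            status = get_status()
            if status == wanted:
                return status
            if status == 'ERROR' and wanted != 'ERROR':
                raise RuntimeError('server went to ERROR while waiting '
                                   'for %s' % wanted)
            time.sleep(interval)
        raise RuntimeError('timed out waiting for %s' % wanted)

(VERIFY_RESIZE here is the API-level status corresponding to the
RESIZE_VERIFY task state.)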


Best,
-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Dashboard (Ubuntu 12.04/Essex)

2012-06-18 Thread Lillie Ross-CDSR11
All,

I've upgraded our cloud, but am stymied on one last configuration issue.

From the dashboard, I'm unable to display images and/or snapshots.  However, I 
can display loaded images using the nova and glance command line tools with no 
problems.

For example, 'glance index' displays the following:

ID                                   Name               Disk Format  Container Format  Size
------------------------------------ ------------------ ------------ ----------------  --------------
f4c861aa-ded5-4cb8-9338-c0551a11a5d5 tty-linux          ami          ami               25165824
bfc26a24-8cd5-4259-a57a-7071d909de8c tty-linux-ramdisk  ari          ari               5882349
df43feb9-aa30-4a2a-a03e-0357df5d4249 tty-linux-kernel   aki          aki               4404752

and the corresponding command through the nova-api service, 'nova image-list' 
displays:

+--------------------------------------+-------------------+--------+--------+
| ID                                   | Name              | Status | Server |
+--------------------------------------+-------------------+--------+--------+
| bfc26a24-8cd5-4259-a57a-7071d909de8c | tty-linux-ramdisk | ACTIVE |        |
| df43feb9-aa30-4a2a-a03e-0357df5d4249 | tty-linux-kernel  | ACTIVE |        |
| f4c861aa-ded5-4cb8-9338-c0551a11a5d5 | tty-linux         | ACTIVE |        |
+--------------------------------------+-------------------+--------+--------+

However, when accessing the Images & Snapshots panel in Horizon, I receive 2
error messages saying it is unable to retrieve images and snapshots.

Looking at the nova-api.log file, I see the following:

2012-06-18 11:13:30 INFO nova.api.openstack.wsgi 
[req-65ef5523-b1ff-47a1-8bd6-4a896df47958 1 1] GET 
http://173.23.181.1:8776/v1/1/snapshots/detail
2012-06-18 11:13:30 DEBUG nova.api.openstack.wsgi 
[req-65ef5523-b1ff-47a1-8bd6-4a896df47958 1 1] Unrecognized Content-Type 
provided in request from (pid=9586) get_body /u
sr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:697
2012-06-18 11:13:30 INFO nova.api.openstack.wsgi 
[req-65ef5523-b1ff-47a1-8bd6-4a896df47958 1 1] 
http://173.23.181.1:8776/v1/1/snapshots/detail returned with HTTP 200

which indicates that the request completed OK.  From the Apache access logs I see

127.0.0.1:80 173.17.11.184 - - [18/Jun/2012:11:13:24 -0500] GET 
/nova/images_and_snapshots/ HTTP/1.1 200 49213 
http://cloud.dtl.mot-solutions.com/nova/images_and_snaps
hots/ Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/534.57.2 
(KHTML, like Gecko) Version/5.1.7 Safari/534.57.2

which also seems to indicate that all is well, with 49K of data returned;
however, nothing displays in the dashboard.

Any suggestions?  Thanks to all in advance!

Regards,
Ross
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Openstack-qa-team] wait_for_server_status and Compute API

2012-06-18 Thread Daryl Walleck
I can verify that rescue is a non-race state. The transition is active to 
rescue on setting rescue, and rescue to active when leaving rescue.


 Original message 
Subject: Re: [Openstack-qa-team] wait_for_server_status and Compute API
From: Jay Pipes jaypi...@gmail.com
To: openstack-qa-t...@lists.launchpad.net 
openstack-qa-t...@lists.launchpad.net,openstack@lists.launchpad.net 
openstack@lists.launchpad.net
CC: Re: [Openstack-qa-team] wait_for_server_status and Compute API


On 06/18/2012 12:01 PM, David Kranz wrote:
 There are a few tempest tests, and many in the old kong suite that is
 still there, that wait for a server status that is something other than
 ACTIVE or VERIFY_RESIZE. These other states, such as BUILD or REBOOT,
 are transient so I don't understand why it is correct for code to poll
 for those states. Am I missing something or do those tests have race
 condition bugs?

No, you are correct, and I have made some comments in recent code
reviews to that effect.

Here are all the task states:

https://github.com/openstack/nova/blob/master/nova/compute/task_states.py

Out of all those task states, I believe the only one safe to poll in a
wait loop is RESIZE_VERIFY. All the others are prone to state
transitions outside the control of the user.

For the VM states:

https://github.com/openstack/nova/blob/master/nova/compute/vm_states.py

I consider the following to be non-racy, quiescent states:

ACTIVE
DELETED
STOPPED
SHUTDOFF
PAUSED
SUSPENDED
ERROR

I consider the following to be racy states that should not be tested for:

MIGRATING -- Instead, the final state should be checked for...
RESIZING -- Instead, the RESIZE_VERIFY and RESIZE_CONFIRM task states
should be checked

I have absolutely no idea what the state termination is for the
following VM states:

RESCUED -- is this a permanent state? Is this able to be queried for in
a consistent manner before it transitions to some further state?

SOFT_DELETE -- I have no clue what the purpose or queryability of this
state is, but would love to know...

Best,
-jay

--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-t...@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Openstack-qa-team] wait_for_server_status and Compute API

2012-06-18 Thread Yun Mao
Hi Jay et al,

there is a patch in review here to overhaul the state machine:

https://review.openstack.org/#/c/8254/

All transient states in vm_state will be moved to task_state. The stable
state in task_state (RESIZE_VERIFY) will be moved to vm_state. There
is also a state transition diagram in DOT format.

Comments welcome. Thanks,

All

On Mon, Jun 18, 2012 at 12:26 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 06/18/2012 12:01 PM, David Kranz wrote:

 There are a few tempest tests, and many in the old kong suite that is
 still there, that wait for a server status that is something other than
 ACTIVE or VERIFY_RESIZE. These other states, such as BUILD or REBOOT,
 are transient so I don't understand why it is correct for code to poll
 for those states. Am I missing something or do those tests have race
 condition bugs?


 No, you are correct, and I have made some comments in recent code reviews to
 that effect.

 Here are all the task states:

 https://github.com/openstack/nova/blob/master/nova/compute/task_states.py

 Out of all those task states, I believe the only one safe to poll in a wait
 loop is RESIZE_VERIFY. All the others are prone to state transitions outside
 the control of the user.

 For the VM states:

 https://github.com/openstack/nova/blob/master/nova/compute/vm_states.py

 I consider the following to be non-racy, quiescent states:

 ACTIVE
 DELETED
 STOPPED
 SHUTDOFF
 PAUSED
 SUSPENDED
 ERROR

 I consider the following to be racy states that should not be tested for:

 MIGRATING -- Instead, the final state should be checked for...
 RESIZING -- Instead, the RESIZE_VERIFY and RESIZE_CONFIRM task states should
 be checked

 I have absolutely no idea what the state termination is for the following VM
 states:

 RESCUED -- is this a permanent state? Is this able to be queried for in a
 consistent manner before it transitions to some further state?

 SOFT_DELETE -- I have no clue what the purpose or queryability of this state
 is, but would love to know...

 Best,
 -jay

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Openstack-qa-team] wait_for_server_status and Compute API

2012-06-18 Thread Jay Pipes

On 06/18/2012 12:49 PM, Daryl Walleck wrote:

I can verify that rescue is a non-race state. The transition is active
to rescue on setting rescue, and rescue to active when leaving rescue.


I don't see a RESCUE state. I see a RESCUED state. Is that what you are 
referring to here? Want to make sure, since the semantics and tenses of 
the power, VM, and task states are a bit inconsistent.


Best,
-jay


 Original message 
Subject: Re: [Openstack-qa-team] wait_for_server_status and Compute API
From: Jay Pipes jaypi...@gmail.com
To: openstack-qa-t...@lists.launchpad.net
openstack-qa-t...@lists.launchpad.net,openstack@lists.launchpad.net
openstack@lists.launchpad.net
CC: Re: [Openstack-qa-team] wait_for_server_status and Compute API


On 06/18/2012 12:01 PM, David Kranz wrote:

 There are a few tempest tests, and many in the old kong suite that is
 still there, that wait for a server status that is something other than
 ACTIVE or VERIFY_RESIZE. These other states, such as BUILD or REBOOT,
 are transient so I don't understand why it is correct for code to poll
 for those states. Am I missing something or do those tests have race
 condition bugs?


No, you are correct, and I have made some comments in recent code
reviews to that effect.

Here are all the task states:

https://github.com/openstack/nova/blob/master/nova/compute/task_states.py

Out of all those task states, I believe the only one safe to poll in a
wait loop is RESIZE_VERIFY. All the others are prone to state
transitions outside the control of the user.

For the VM states:

https://github.com/openstack/nova/blob/master/nova/compute/vm_states.py

I consider the following to be non-racy, quiescent states:

ACTIVE
DELETED
STOPPED
SHUTDOFF
PAUSED
SUSPENDED
ERROR

I consider the following to be racy states that should not be tested for:

MIGRATING -- Instead, the final state should be checked for...
RESIZING -- Instead, the RESIZE_VERIFY and RESIZE_CONFIRM task states
should be checked

I have absolutely no idea what the state termination is for the
following VM states:

RESCUED -- is this a permanent state? Is this able to be queried for in
a consistent manner before it transitions to some further state?

SOFT_DELETE -- I have no clue what the purpose or queryability of this
state is, but would love to know...

Best,
-jay

--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-t...@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Dashboard (Ubuntu 12.04/Essex)

2012-06-18 Thread Gabriel Hurley
It may have to do with the container type set on the images. There is some 
filtering happening in the Project dashboard that hides the AKI and ARI images 
that are associated with AMIs. So if you've only got AKI/ARI images those would 
be hidden. You can see (and manage) those images as an administrator in the 
Admin dashboard.

Without further information that'd be my best guess. You can also try setting 
the log level to DEBUG in your local_settings.py file to get more information 
about exactly what is returned from the API.
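
Something along these lines in local_settings.py does it (a generic Django
logging override; the logger names here are only examples):

    # Route DEBUG-level output from the dashboard and its clients to the
    # console so the API responses show up in the web server's error log.
    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'handlers': {
            'console': {
                'level': 'DEBUG',
                'class': 'logging.StreamHandler',
            },
        },
        'loggers': {
            'horizon': {'handlers': ['console'], 'level': 'DEBUG'},
            'novaclient': {'handlers': ['console'], 'level': 'DEBUG'},
            'glance': {'handlers': ['console'], 'level': 'DEBUG'},
        },
    }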


-  Gabriel

From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net 
[mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net] On 
Behalf Of Lillie Ross-CDSR11
Sent: Monday, June 18, 2012 9:32 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] Dashboard (Ubuntu 12.04/Essex)

All,

I've upgraded our cloud, but am stymied on one last configuration issue.

From the dashboard, I'm unable to display images and/or snapshots.  However, I 
can display loaded images using the nova and glance command line tools with no 
problems.

For example, 'glance index' displays the following:

ID   Name   Disk Format 
 Container Format Size
 -- 
  --
f4c861aa-ded5-4cb8-9338-c0551a11a5d5 tty-linux  ami 
 ami25165824
bfc26a24-8cd5-4259-a57a-7071d909de8c tty-linux-ramdisk  ari 
 ari 5882349
df43feb9-aa30-4a2a-a03e-0357df5d4249 tty-linux-kernel   aki 
 aki 4404752

and the corresponding command through the nova-api service, 'nova image-list' 
displays:

+--+---+++
|  ID  |Name   | Status | Server |
+--+---+++
| bfc26a24-8cd5-4259-a57a-7071d909de8c | tty-linux-ramdisk | ACTIVE ||
| df43feb9-aa30-4a2a-a03e-0357df5d4249 | tty-linux-kernel  | ACTIVE ||
| f4c861aa-ded5-4cb8-9338-c0551a11a5d5 | tty-linux | ACTIVE ||
+--+---+++

However, when accessing the Images  Snapshots panel in Horizon, I receive 2 
error messages that it's unable to retrieve images and snapshots.

Looking at the nova-api.log file, I see the following:

2012-06-18 11:13:30 INFO nova.api.openstack.wsgi 
[req-65ef5523-b1ff-47a1-8bd6-4a896df47958 1 1] GET 
http://173.23.181.1:8776/v1/1/snapshots/detail
2012-06-18 11:13:30 DEBUG nova.api.openstack.wsgi 
[req-65ef5523-b1ff-47a1-8bd6-4a896df47958 1 1] Unrecognized Content-Type 
provided in request from (pid=9586) get_body /u
sr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:697
2012-06-18 11:13:30 INFO nova.api.openstack.wsgi 
[req-65ef5523-b1ff-47a1-8bd6-4a896df47958 1 1] 
http://173.23.181.1:8776/v1/1/snapshots/detail returned with HTTP 200

which indicates that the request complete OK.  From the Apache access logs I see

127.0.0.1:80 173.17.11.184 - - [18/Jun/2012:11:13:24 -0500] GET 
/nova/images_and_snapshots/ HTTP/1.1 200 49213 
http://cloud.dtl.mot-solutions.com/nova/images_and_snaps
hots/ Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/534.57.2 
(KHTML, like Gecko) Version/5.1.7 Safari/534.57.2

which also seems to indicate that all is well with 49K data returned, however 
nothing displays in the dashboard.

Any suggestions?  Thanks to all in advance!

Regards,
Ross
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift] Interested in implementing swift ring builder server

2012-06-18 Thread Mark Gius
Hello Swifters,

I've got some interns working with me this summer and I had a notion that
they might take a stab at the swift ring builder server blueprint that's
been sitting around for a while (
https://blueprints.launchpad.net/swift/+spec/ring-builder-server).  As a
first step I figured that the ring-builder-server would be purely an
alternative for the swift-ring-builder CLI, with a future iteration adding
support for deploying the rings to all servers in the cluster.  I'm
currently planning on making the ring-builder server be written and
deployed like the account/container/etc servers, although I imagine the
implementation will be a lot simpler.

Is anybody else already working on this and forgot to update the blueprint?
 If not can I get the blueprint assigned to me on launchpad?  Username
'markgius'.  Or if there's some other process I need to go through please
let me know.

Mark
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Dashboard (Ubuntu 12.04/Essex)

2012-06-18 Thread Lillie Ross-CDSR11
thanks Gabriel,

No, as usual, it was a typo in my internal URL for glance in the service 
catalog.  I'd reconfigured the internal network and failed to update the 
internal URL value for the image service.

However, I DID have the dashboard's local_settings.py file configured to use
public URLs, and the dashboard is apparently ignoring this flag.  I may file a
bug report.

Thanks again,
Ross



On Jun 18, 2012, at 1:56 PM, Gabriel Hurley wrote:

It may have to do with the container type set on the images. There is some 
filtering happening in the Project dashboard that hides the AKI and ARI images 
that are associated with AMIs. So if you’ve only got AKI/ARI images those would 
be hidden. You can see (and manage) those images as an administrator in the 
Admin dashboard.

Without further information that’d be my best guess. You can also try setting 
the log level to DEBUG in your local_settings.py file to get more information 
about exactly what is returned from the API.

-  Gabriel

From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
[mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net]
On Behalf Of Lillie Ross-CDSR11
Sent: Monday, June 18, 2012 9:32 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] Dashboard (Ubuntu 12.04/Essex)

All,

I've upgraded our cloud, but am stymied on one last configuration issue.

From the dashboard, I'm unable to display images and/or snapshots.  However, I 
can display loaded images using the nova and glance command line tools with no 
problems.

For example, 'glance index' displays the following:

ID   Name   Disk Format 
 Container Format Size
 -- 
  --
f4c861aa-ded5-4cb8-9338-c0551a11a5d5 tty-linux  ami 
 ami25165824
bfc26a24-8cd5-4259-a57a-7071d909de8c tty-linux-ramdisk  ari 
 ari 5882349
df43feb9-aa30-4a2a-a03e-0357df5d4249 tty-linux-kernel   aki 
 aki 4404752

and the corresponding command through the nova-api service, 'nova image-list' 
displays:

+--+---+++
|  ID  |Name   | Status | Server |
+--+---+++
| bfc26a24-8cd5-4259-a57a-7071d909de8c | tty-linux-ramdisk | ACTIVE ||
| df43feb9-aa30-4a2a-a03e-0357df5d4249 | tty-linux-kernel  | ACTIVE ||
| f4c861aa-ded5-4cb8-9338-c0551a11a5d5 | tty-linux | ACTIVE ||
+--+---+++

However, when accessing the Images  Snapshots panel in Horizon, I receive 2 
error messages that it's unable to retrieve images and snapshots.

Looking at the nova-api.log file, I see the following:

2012-06-18 11:13:30 INFO nova.api.openstack.wsgi 
[req-65ef5523-b1ff-47a1-8bd6-4a896df47958 1 1] 
GEThttp://173.23.181.1:8776/v1/1/snapshots/detail
2012-06-18 11:13:30 DEBUG nova.api.openstack.wsgi 
[req-65ef5523-b1ff-47a1-8bd6-4a896df47958 1 1] Unrecognized Content-Type 
provided in request from (pid=9586) get_body /u
sr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:697
2012-06-18 11:13:30 INFO nova.api.openstack.wsgi 
[req-65ef5523-b1ff-47a1-8bd6-4a896df47958 1 
1]http://173.23.181.1:8776/v1/1/snapshots/detail returned with HTTP 200

which indicates that the request complete OK.  From the Apache access logs I see

127.0.0.1:80 173.17.11.184 - - [18/Jun/2012:11:13:24 -0500] GET 
/nova/images_and_snapshots/ HTTP/1.1 200 49213 
http://cloud.dtl.mot-solutions.com/nova/images_and_snaps
hots/ Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/534.57.2 
(KHTML, like Gecko) Version/5.1.7 Safari/534.57.2

which also seems to indicate that all is well with 49K data returned, however 
nothing displays in the dashboard.

Any suggestions?  Thanks to all in advance!

Regards,
Ross
___
Mailing list: https://launchpad.net/~openstack
Post to : 
openstack@lists.launchpad.netmailto:openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Openstack-qa-team] wait_for_server_status and Compute API

2012-06-18 Thread David Kranz
Thanks, Yun. The problem is that the API calls give you status which is 
neither task state nor vm state. I think these are the stable states:


ACTIVE, VERIFY_RESIZE, STOPPED, SHUTOFF, PAUSED, SUSPENDED, RESCUE, ERROR, 
DELETED

Does that seem right to you, and is there a plan to change that set for Folsom?

 -David




On 6/18/2012 12:51 PM, Yun Mao wrote:

Hi Jay et al,

there is a patch in review here to overhaul the state machine:

https://review.openstack.org/#/c/8254/

All transient state in vm state will be moved to task state. Stable
state in task state (RESIZE_VERIFY) will be moved to vm state. There
is also a state transition diagram in dot format.

Comments welcome. Thanks,

All

On Mon, Jun 18, 2012 at 12:26 PM, Jay Pipesjaypi...@gmail.com  wrote:

On 06/18/2012 12:01 PM, David Kranz wrote:

There are a few tempest tests, and many in the old kong suite that is
still there, that wait for a server status that is something other than
ACTIVE or VERIFY_RESIZE. These other states, such as BUILD or REBOOT,
are transient so I don't understand why it is correct for code to poll
for those states. Am I missing something or do those tests have race
condition bugs?


No, you are correct, and I have made some comments in recent code reviews to
that effect.

Here are all the task states:

https://github.com/openstack/nova/blob/master/nova/compute/task_states.py

Out of all those task states, I believe the only one safe to poll in a wait
loop is RESIZE_VERIFY. All the others are prone to state transitions outside
the control of the user.

For the VM states:

https://github.com/openstack/nova/blob/master/nova/compute/vm_states.py

I consider the following to be non-racy, quiescent states:

ACTIVE
DELETED
STOPPED
SHUTDOFF
PAUSED
SUSPENDED
ERROR

I consider the following to be racy states that should not be tested for:

MIGRATING -- Instead, the final state should be checked for...
RESIZING -- Instead, the RESIZE_VERIFY and RESIZE_CONFIRM task states should
be checked

I have absolutely no idea what the state termination is for the following VM
states:

RESCUED -- is this a permanent state? Is this able to be queried for in a
consistent manner before it transitions to some further state?

SOFT_DELETE -- I have no clue what the purpose or queryability of this state
is, but would love to know...

Best,
-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Openstack-qa-team] wait_for_server_status and Compute API

2012-06-18 Thread Yun Mao
Hi David,

Yes there is a plan to change that for Folsom. vm_state will be purely
stable state and task_state will be purely for transition state. See
http://wiki.openstack.org/VMState for the new design rational of
(power_state, vm_state, task_state)

After the cleanup, vm_state will have

ACTIVE = 'active'  # VM is running
BUILDING = 'building'  # VM only exists in DB
PAUSED = 'paused'
SUSPENDED = 'suspended'  # VM is suspended to disk.
STOPPED = 'stopped'  # VM is powered off, the disk image is still there.
RESCUED = 'rescued'  # A rescue image is running with the original VM image
# attached.
RESIZED = 'resized'  # a VM with the new size is active. The user is expected
# to manually confirm or revert.
SOFT_DELETED = 'soft-delete'  # VM is marked as deleted but the disk images are
# still available to restore.
DELETED = 'deleted'  # VM is permanently deleted.
ERROR = 'error'

There is no SHUTOFF (it is merged with STOPPED), and VERIFY_RESIZE (a task
state) is renamed RESIZED (a vm state). The BUILDING state is not my
favorite, but it's left there mostly for backward-compatibility reasons.
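For illustration only, the renames described above could be summarized as a simple
mapping; the exact strings are whatever the final patch settles on, so treat these
values as provisional:

# Provisional sketch of the Folsom renames discussed above; not taken from the patch itself.
OLD_STATUS_TO_NEW_VM_STATE = {
    'SHUTOFF': 'stopped',        # SHUTOFF is merged into STOPPED
    'VERIFY_RESIZE': 'resized',  # the old RESIZE_VERIFY task state becomes the RESIZED vm state
}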

This is still up for discussion and your input is welcome. Thanks,

Yun

On Mon, Jun 18, 2012 at 3:54 PM, David Kranz david.kr...@qrclab.com wrote:
 Thanks, Yun. The problem is that the API calls give you status which is
 neither task state nor vm state. I think these are the stable states:

 ACTIVE, VERIFY_RESIZE, STOPPED, SHUTOFF, PAUSED, SUSPENDED, RESCUE, ERROR,
 DELETED

 Does that seem right to you, and is there a plan to change that set for
 Folsom?

  -David





 On 6/18/2012 12:51 PM, Yun Mao wrote:

 Hi Jay et al,

 there is a patch in review here to overhaul the state machine:

 https://review.openstack.org/#/c/8254/

 All transient state in vm state will be moved to task state. Stable
 state in task state (RESIZE_VERIFY) will be moved to vm state. There
 is also a state transition diagram in dot format.

 Comments welcome. Thanks,

 All

 On Mon, Jun 18, 2012 at 12:26 PM, Jay Pipesjaypi...@gmail.com  wrote:

 On 06/18/2012 12:01 PM, David Kranz wrote:

 There are a few tempest tests, and many in the old kong suite that is
 still there, that wait for a server status that is something other than
 ACTIVE or VERIFY_RESIZE. These other states, such as BUILD or REBOOT,
 are transient so I don't understand why it is correct for code to poll
 for those states. Am I missing something or do those tests have race
 condition bugs?


 No, you are correct, and I have made some comments in recent code reviews
 to
 that effect.

 Here are all the task states:

 https://github.com/openstack/nova/blob/master/nova/compute/task_states.py

 Out of all those task states, I believe the only one safe to poll in a
 wait
 loop is RESIZE_VERIFY. All the others are prone to state transitions
 outside
 the control of the user.

 For the VM states:

 https://github.com/openstack/nova/blob/master/nova/compute/vm_states.py

 I consider the following to be non-racy, quiescent states:

 ACTIVE
 DELETED
 STOPPED
 SHUTOFF
 PAUSED
 SUSPENDED
 ERROR

 I consider the following to be racy states that should not be tested for:

 MIGRATING -- Instead, the final state should be checked for...
 RESIZING -- Instead, the RESIZE_VERIFY and RESIZE_CONFIRM task states
 should
 be checked

 I have absolutely no idea what the state termination is for the following
 VM
 states:

 RESCUED -- is this a permanent state? Is this able to be queried for in a
 consistent manner before it transitions to some further state?

 SOFT_DELETE -- I have no clue what the purpose or queryability of this
 state
 is, but would love to know...

 Best,
 -jay

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Passing user_data with server create via the api (xml)

2012-06-18 Thread Ed Shaw
Hello,

I'm trying to pass user_data on server create using the xml api.  I am base64 
UTF-8 encoding the string.  I've tried sending it as a message part, a query 
string on the url and as a post parameter.  This works from the Horizon UI, but 
I get
2012-06-18 19:58:18,610 - __init__.py[WARNING]: Unhandled non-multipart 
userdata ''
when I try to pass via xml.  The only thing I haven't tried is a different 
extension namespace on the user_data element if passing it that way, but I 
can't see any docs on this.
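For reference, the encoding step itself is straightforward; a minimal sketch (Python 2,
standard library only, with a made-up script as the payload):

import base64

# user_data is expected as a base64-encoded string; encode the UTF-8 bytes first.
user_data = "#!/bin/sh\necho hello from user_data\n"
encoded = base64.b64encode(user_data.encode("utf-8"))

The open question is purely where that encoded value goes in the XML request body
(element vs. attribute, and under which extension namespace), which is the part that
appears to be undocumented.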

Has anyone been successful sending user_data over xml?

Thanks,

Ed
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Thoughts on client library releasing

2012-06-18 Thread Monty Taylor
We're trying to figure out how we release client libraries. We're really
close - but there are some sticking points.

First of all, things that don't really have dissent (with reasoning)

- We should release client libs to PyPI

Client libs are for use in other python things, so they should be able
to be listed as dependencies. Additionally, proper releases to PyPI will
make our cross project depends work more sensibly

- They should not necessarily be tied to server releases

There could be a whole version of the server which sees no needed
changes in the client. Alternately, there could be new upcoming server
features which need to go into a released version of the library even
before the server is released.

- They should not be versioned with the server

See above.

- Releases of client libs should support all published versions of
server APIs

An end user wants to talk to his openstack cloud - not necessarily to
his Essex cloud or his Folsom cloud. That user may also have accounts on
multiple providers, and would like to be able to write one program to
interact with all of them - if the user needed the folsom version of the
client lib to talk to the folsom cloud and the essex version to talk to
the essex cloud, his life is very hard. However, if he can grab the
latest client lib and it will talk to both folsom and essex, then he
will be happy.

There are three major points where there is a lack of clear agreement.
Here they are, along with suggestions for what we do about them.

- need for official stable branches

I would like to defer on this until such a time as we actually need it,
rather than doing the engineering for in case we need it. But first, I'd
like to define we, and that is that we are OpenStack as an upstream.
As a project, we are at the moment probably the single friendliest
project for the distros in the history of software. But that's not
really our job. Most people out there writing libraries do not have
multiple parallel releases of those libraries - they have the stable
library, and then they release a new one, and people either upgrade
their apps to use the new lib or they don't.

One of the reasons this has been brought up as a need is to allow for
drastic re-writes of a library. I'll talk about that in a second, but I
think that is a thing that needs to have allowances for happening.

So the model that keystone-lite used - create an experimental branch for
the new work, eventually propose that it becomes the new master - seems
like a better fit for the drastic rewrite scenario than copying the
stable/* model from the server projects, because I think the most common
thing will be that library changes are evolutionary, and having two
mildly different branches that both represent something that's actually
pretty much stable will just be more confusing than helpful.

That being said - at such a time that there is actually a pain-point or
a specific need for a stable branch, creating branches is fairly easy
... but I think once we have an actual burning need for such a thing, it
will make it easier for us to look at models of how we'll use it.

 - API or major-rewrite-driven versioning scheme

I was wondering why bcwaldon and I were missing each other so strongly
in the channel the other day when we were discussing this, and then I
realized that it's because we have one word, "API", that's getting
overloaded for a couple of different meanings - and also that I was
being vague in my usage of the word. So to clarify, a client library has:

 * programming level code APIs
 * supported server REST APIs

So I back off everything I said about tying client libs version to
server REST API support. Brian was right, I was wrong. The thing that's
more important here is that the version should indicate programmer
contract, and if that is changed in a breaking manner, the major
number should bump.

If we combine that with the point from above that our libraries should
always support the existing server REST APIs, then I think we can just
purely have statements like "support for compute v3 can be found in
2.7.8 and later" and people will likely be fine, because it will map
easily to the idea "just grab the latest lib and you should be able to
talk to the latest server." Yea?

So in that case, the client libs versions are purely whatever they are
right now, and we'll increase them moving forward using normal library
version thoughts.
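As a purely illustrative example of how that maps onto consumers (the package name is
real, the version bounds and the consumer project are made up):

# setup.py of a hypothetical consumer application -- version numbers are illustrative only.
from setuptools import setup

setup(
    name='my-cloud-tool',
    version='0.1',
    install_requires=[
        'python-novaclient>=2.7.8',  # "support for compute v3 can be found in 2.7.8 and later"
    ],
)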

 - room for deprecating old APIs

The above then leads us to wanting to know what we do about supported
server REST APIs over time, especially since I keep making sweeping
statements about "should support all available server versions" ... How
about this as a straw man: Since we're planning on beginning to run
tests of the client libs against previous versions (so we'll test trunk
novaclient against essex nova in addition to trunk nova) ... we need
support for past versions of servers as long as our automation can
sensibly spin up a past version. (Since the support for that 

Re: [Openstack] [metering] notification metadata

2012-06-18 Thread Doug Hellmann
On Fri, Jun 15, 2012 at 10:34 AM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:



 On Wed, Jun 13, 2012 at 4:26 PM, Caitlin Bestler 
 caitlin.best...@nexenta.com wrote:

  Doug Hellmann wrote:

 


 There are a couple of other alternatives:


 1. We could move notification handling into its own daemon to get it
 out of the collector. This new daemon would still run on a central service,
 and would need to be set up to support load balancing 

 just as the collector is now. The similarities are why we left the two
 types of processing in the same process to begin with.


 2. We could modify nova to include more details about an instance when
 it publishes a notification. This is the best solution from our
 perspective, and I would like to see all of the properties anyway, 

 but I don't know if there is a performance-related reason for leaving
 out some details.


 This problem is either simpler than the discussion implies, or there are
 constraints that are not being made explicit.

 But to me the choice would be simple. If the data will typically be used
 by the receiving entity, it should be included in the notification.

 Asynchronously fetching the data from some form of persistent storage is
 making the system more complex. That complexity would only
 be justified if a) there was optional data that the receiver of the
 notification would frequently not bother to read, and the goal is to limit
 

 the bandwidth to the data that will *probably* be consumed,


 I was concerned about both, but I am convinced that the right solution is
 to include all of the details about the instance in the notification data
 generated by nova. I am going to work on that patch.


 or b) once this data is included the total data rate becomes erratic, so
 limiting

 the notification size allows the bulk data transfer to be smoothed out
 without slowing down the acceptance of other notifications.


 In either case, building a more complex system to make delivering bulk
 data deferrable implies that the deferred data is large. Why not

 store it in a blob (i.e., in a Swift object) rather than in a database
 entry?


The changes to nova to add the extra data to outgoing notifications are
ready for review at https://review.openstack.org/#/c/8659/
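For readers following the thread, the kind of payload being discussed looks roughly like
the sketch below; the exact field names are whatever the review above settles on, so
every key here should be treated as hypothetical:

# Hypothetical compute.instance.* notification payload with instance details inlined.
payload = {
    'instance_id': 'uuid-of-the-instance',
    'tenant_id': 'uuid-of-the-project',
    'user_id': 'uuid-of-the-user',
    'display_name': 'my-server',
    'memory_mb': 2048,
    'vcpus': 1,
    'disk_gb': 20,
    # ...plus whatever additional properties the metering collector needs.
}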
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Thoughts on client library releasing

2012-06-18 Thread Doug Hellmann
How do these plans fit with the idea of creating a unified client library
(either as one package or several, based on a common core)?

On Mon, Jun 18, 2012 at 5:11 PM, Monty Taylor mord...@inaugust.com wrote:

 We're trying to figure out how we release client libraries. We're really
 close - but there are some sticking points.

 First of all, things that don't really have dissent (with reasoning)

 - We should release client libs to PyPI

 Client libs are for use in other python things, so they should be able
 to be listed as dependencies. Additionally, proper releases to PyPI will
 make our cross project depends work more sensibly

 - They should not necessarily be tied to server releases

 There could be a whole version of the server which sees no needed
 changes in the client. Alternately, there could be new upcoming server
 features which need to go into a released version of the library even
 before the server is released.

 - They should not be versioned with the server

 See above.

 - Releases of client libs should support all published versions of
 server APIs

 An end user wants to talk to his openstack cloud - not necessarily to
 his Essex cloud or his Folsom cloud. That user may also have accounts on
 multiple providers, and would like to be able to write one program to
 interact with all of them - if the user needed the folsom version of the
 client lib to talk to the folsom cloud and the essex version to talk to
 the essex cloud, his life is very hard. However, if he can grab the
 latest client lib and it will talk to both folsom and essex, then he
 will be happy.

 There are three major points where there is a lack of clear agreement.
 Here they are, along with suggestions for what we do about them.

 - need for official stable branches

 I would like to defer on this until such a time as we actually need it,
 rather than doing the engineering for in case we need it. But first, I'd
 like to define we, and that is that we are OpenStack as an upstream.
 As a project, we are at the moment probably the single friendliest
 project for the distros in the history of software. But that's not
 really our job. Most people out there writing libraries do not have
 multiple parallel releases of those libraries - they have the stable
 library, and then they release a new one, and people either upgrade
 their apps to use the new lib or they don't.

 One of the reasons this has been brought up as a need is to allow for
 drastic re-writes of a library. I'll talk about that in a second, but I
 think that is a thing that needs to have allowances for happening.

 So the model that keystone-lite used - create an experimental branch for
 the new work, eventually propose that it becomes the new master - seems
 like a better fit for the drastic rewrite scenario than copying the
 stable/* model from the server projects, because I think the most common
 thing will be that library changes are evolutionary, and having two
 mildly different branches that both represent something that's actually
 pretty much stable will just be more confusing than helpful.

 That being said - at such a time that there is actually a pain-point or
 a specific need for a stable branch, creating branches is fairly easy
 ... but I think once we have an actual burning need for such a thing, it
 will make it easier for us to look at models of how we'll use it.

  - API or major-rewrite-driven versioning scheme

 I was wondering why bcwaldon and I were missing each other so strongly
 in the channel the other day when we were discussing this, and then I
 realized that it's because we have one word API that's getting
 overloaded for a couple of different meanings - and also that I was
 being vague in my usage of the word. So to clarify, a client library has:

  * programming level code APIs
  * supported server REST APIs

 So I back off everything I said about tying client libs version to
 server REST API support. Brian was right, I was wrong. The thing that's
 more important here is that the version should indicate programmer
 contract, and if that is changed in a breaking manner, the major
 number should bump.

 If we combine that with the point from above that our libraries should
 always support the existing server REST APIs, then I think we can just
 purely have statements like support for compute v3 can be found in
 2.7.8 and later and people will likely be fine, because it will map
 easily to the idea just grab the latest lib and you should be able to
 talk to the latest server Yea?

 So in that case, the client libs versions are purely whatever they are
 right now, and we'll increase them moving forward using normal library
 version thoughts.

  - room for deprecating old APIs

 The above then leads us to wanting to know what we do about supported
 server REST APIs over time, especially since I keep making sweeping
 statements about should support all available server versions ... How
 about this as a straw man: Since we're planning on 

Re: [Openstack] [Openstack-poc] Thoughts on client library releasing

2012-06-18 Thread Joe Heck
Monty - 

Thierry stated it as an assumption last PPB meeting, but I'd like it to be 
explicit that we have at least a tag on each client library release that we 
make so that it's possible to distribute a version of the clients. 

-joe

On Jun 18, 2012, at 2:11 PM, Monty Taylor wrote:
 We're trying to figure out how we release client libraries. We're really
 close - but there are some sticking points.
 
 First of all, things that don't really have dissent (with reasoning)
 
 - We should release client libs to PyPI
 
 Client libs are for use in other python things, so they should be able
 to be listed as dependencies. Additionally, proper releases to PyPI will
 make our cross project depends work more sensibly
 
 - They should not necessarily be tied to server releases
 
 There could be a whole version of the server which sees no needed
 changes in the client. Alternately, there could be new upcoming server
 features which need to go into a released version of the library even
 before the server is released.
 
 - They should not be versioned with the server
 
 See above.
 
 - Releases of client libs should support all published versions of
 server APIs
 
 An end user wants to talk to his openstack cloud - not necessarily to
 his Essex cloud or his Folsom cloud. That user may also have accounts on
 multiple providers, and would like to be able to write one program to
 interact with all of them - if the user needed the folsom version of the
 client lib to talk to the folsom cloud and the essex version to talk to
 the essex cloud, his life is very hard. However, if he can grab the
 latest client lib and it will talk to both folsom and essex, then he
 will be happy.
 
 There are three major points where there is a lack of clear agreement.
 Here they are, along with suggestions for what we do about them.
 
 - need for official stable branches
 
 I would like to defer on this until such a time as we actually need it,
 rather than doing the engineering for in case we need it. But first, I'd
 like to define we, and that is that we are OpenStack as an upstream.
 As a project, we are at the moment probably the single friendliest
 project for the distros in the history of software. But that's not
 really our job. Most people out there writing libraries do not have
 multiple parallel releases of those libraries - they have the stable
 library, and then they release a new one, and people either upgrade
 their apps to use the new lib or they don't.
 
 One of the reasons this has been brought up as a need is to allow for
 drastic re-writes of a library. I'll talk about that in a second, but I
 think that is a thing that needs to have allowances for happening.
 
 So the model that keystone-lite used - create an experimental branch for
 the new work, eventually propose that it becomes the new master - seems
 like a better fit for the drastic rewrite scenario than copying the
 stable/* model from the server projects, because I think the most common
 thing will be that library changes are evolutionary, and having two
 mildly different branches that both represent something that's actually
 pretty much stable will just be more confusing than helpful.
 
 That being said - at such a time that there is actually a pain-point or
 a specific need for a stable branch, creating branches is fairly easy
 ... but I think once we have an actual burning need for such a thing, it
 will make it easier for us to look at models of how we'll use it.
 
 - API or major-rewrite-driven versioning scheme
 
 I was wondering why bcwaldon and I were missing each other so strongly
 in the channel the other day when we were discussing this, and then I
 realized that it's because we have one word API that's getting
 overloaded for a couple of different meanings - and also that I was
 being vague in my usage of the word. So to clarify, a client library has:
 
 * programming level code APIs
 * supported server REST APIs
 
 So I back off everything I said about tying client libs version to
 server REST API support. Brian was right, I was wrong. The thing that's
 more important here is that the version should indicate programmer
 contract, and if that is changed in a breaking manner, the major
 number should bump.
 
 If we combine that with the point from above that our libraries should
 always support the existing server REST APIs, then I think we can just
 purely have statements like support for compute v3 can be found in
 2.7.8 and later and people will likely be fine, because it will map
 easily to the idea just grab the latest lib and you should be able to
 talk to the latest server Yea?
 
 So in that case, the client libs versions are purely whatever they are
 right now, and we'll increase them moving forward using normal library
 version thoughts.
 
 - room for deprecating old APIs
 
 The above then leads us to wanting to know what we do about supported
 server REST APIs over time, especially since I keep making sweeping
 statements about should 

Re: [Openstack] Thoughts on client library releasing

2012-06-18 Thread Kevin L. Mitchell
On Mon, 2012-06-18 at 17:25 -0400, Doug Hellmann wrote:
 How do these plans fit with the idea of creating a unified client
 library (either as one package or several, based on a common core)?

I am under the impression that there is not a desire, at present, to
create a unified client library.  There is work underway to create a
unified client (command-line interface), but I believe it was intended
to use the client libraries for each of the projects.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Thoughts on client library releasing

2012-06-18 Thread Monty Taylor
On 06/18/2012 02:25 PM, Doug Hellmann wrote:
 How do these plans fit with the idea of creating a unified client
 library (either as one package or several, based on a common core)?

They are kind of orthogonal. At the point where python-openstackclient
is ready for release, we'd likely want to manage it the same way.

 On Mon, Jun 18, 2012 at 5:11 PM, Monty Taylor mord...@inaugust.com
 mailto:mord...@inaugust.com wrote:
 
 We're trying to figure out how we release client libraries. We're really
 close - but there are some sticking points.
 
 First of all, things that don't really have dissent (with reasoning)
 
 - We should release client libs to PyPI
 
 Client libs are for use in other python things, so they should be able
 to be listed as dependencies. Additionally, proper releases to PyPI will
 make our cross project depends work more sensibly
 
 - They should not necessarily be tied to server releases
 
 There could be a whole version of the server which sees no needed
 changes in the client. Alternately, there could be new upcoming server
 features which need to go into a released version of the library even
 before the server is released.
 
 - They should not be versioned with the server
 
 See above.
 
 - Releases of client libs should support all published versions of
 server APIs
 
 An end user wants to talk to his openstack cloud - not necessarily to
 his Essex cloud or his Folsom cloud. That user may also have accounts on
 multiple providers, and would like to be able to write one program to
 interact with all of them - if the user needed the folsom version of the
 client lib to talk to the folsom cloud and the essex version to talk to
 the essex cloud, his life is very hard. However, if he can grab the
 latest client lib and it will talk to both folsom and essex, then he
 will be happy.
 
 There are three major points where there is a lack of clear agreement.
 Here they are, along with suggestions for what we do about them.
 
 - need for official stable branches
 
 I would like to defer on this until such a time as we actually need it,
 rather than doing the engineering for in case we need it. But first, I'd
 like to define we, and that is that we are OpenStack as an upstream.
 As a project, we are at the moment probably the single friendliest
 project for the distros in the history of software. But that's not
 really our job. Most people out there writing libraries do not have
 multiple parallel releases of those libraries - they have the stable
 library, and then they release a new one, and people either upgrade
 their apps to use the new lib or they don't.
 
 One of the reasons this has been brought up as a need is to allow for
 drastic re-writes of a library. I'll talk about that in a second, but I
 think that is a thing that needs to have allowances for happening.
 
 So the model that keystone-lite used - create an experimental branch for
 the new work, eventually propose that it becomes the new master - seems
 like a better fit for the drastic rewrite scenario than copying the
 stable/* model from the server projects, because I think the most common
 thing will be that library changes are evolutionary, and having two
 mildly different branches that both represent something that's actually
 pretty much stable will just be more confusing than helpful.
 
 That being said - at such a time that there is actually a pain-point or
 a specific need for a stable branch, creating branches is fairly easy
 ... but I think once we have an actual burning need for such a thing, it
 will make it easier for us to look at models of how we'll use it.
 
  - API or major-rewrite-driven versioning scheme
 
 I was wondering why bcwaldon and I were missing each other so strongly
 in the channel the other day when we were discussing this, and then I
 realized that it's because we have one word API that's getting
 overloaded for a couple of different meanings - and also that I was
 being vague in my usage of the word. So to clarify, a client library
 has:
 
  * programming level code APIs
  * supported server REST APIs
 
 So I back off everything I said about tying client libs version to
 server REST API support. Brian was right, I was wrong. The thing that's
 more important here is that the version should indicate programmer
 contract, and if that is changed in a breaking manner, the major
 number should bump.
 
 If we combine that with the point from above that our libraries should
 always support the existing server REST APIs, then I think we can just
 purely have statements like support for compute v3 can be found in
 2.7.8 and later and people will likely be fine, because it will map
 easily to the idea just grab the latest lib and you 

Re: [Openstack] [Openstack-poc] Thoughts on client library releasing

2012-06-18 Thread Monty Taylor


On 06/18/2012 02:26 PM, Joe Heck wrote:
 Monty -
 
 Thierry stated it as an assumption last PPB meeting, but I'd like it
 to be explicit that we have at least a tag on each client library
 release that we make so that it's possible to distribute a version of
 the clients.

+1000

I didn't want to get too far into implementation details, but the way
I'd really like to see this work for the client libs is that releases
are actually triggered via jenkins by tags on the repo - so there would
literally be no way to release something _without_ a tag.

I've got a POC patch to do the tag-based-versioning here:

https://review.openstack.org/#/c/8427/

We need to get (aiui) one thing landed to zuul so that we can
appropriately trigger on tag events... but that's the plan in my brain hole.
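Not necessarily how the patch above does it, but the general tag-based-versioning idea
can be sketched as deriving the version string from the most recent git tag, e.g.:

import subprocess

def version_from_git():
    # On a tagged commit this yields e.g. "2.6.10"; a few commits later,
    # something like "2.6.10-3-gabc1234".
    return subprocess.check_output(
        ['git', 'describe', '--tags', '--abbrev=7']).strip()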

 On Jun 18, 2012, at 2:11 PM, Monty Taylor wrote:
 We're trying to figure out how we release client libraries. We're
 really close - but there are some sticking points.
 
 First of all, things that don't really have dissent (with
 reasoning)
 
 - We should release client libs to PyPI
 
 Client libs are for use in other python things, so they should be
 able to be listed as dependencies. Additionally, proper releases to
 PyPI will make our cross project depends work more sensibly
 
 - They should not necessarily be tied to server releases
 
 There could be a whole version of the server which sees no needed 
 changes in the client. Alternately, there could be new upcoming
 server features which need to go into a released version of the
 library even before the server is released.
 
 - They should not be versioned with the server
 
 See above.
 
 - Releases of client libs should support all published versions of 
 server APIs
 
 An end user wants to talk to his openstack cloud - not necessarily
 to his Essex cloud or his Folsom cloud. That user may also have
 accounts on multiple providers, and would like to be able to write
 one program to interact with all of them - if the user needed the
 folsom version of the client lib to talk to the folsom cloud and
 the essex version to talk to the essex cloud, his life is very
 hard. However, if he can grab the latest client lib and it will
 talk to both folsom and essex, then he will be happy.
 
 There are three major points where there is a lack of clear
 agreement. Here they are, along with suggestions for what we do
 about them.
 
 - need for official stable branches
 
 I would like to defer on this until such a time as we actually need
 it, rather than doing the engineering for in case we need it. But
 first, I'd like to define we, and that is that we are OpenStack
 as an upstream. As a project, we are at the moment probably the
 single friendliest project for the distros in the history of
 software. But that's not really our job. Most people out there
 writing libraries do not have multiple parallel releases of those
 libraries - they have the stable library, and then they release a
 new one, and people either upgrade their apps to use the new lib or
 they don't.
 
 One of the reasons this has been brought up as a need is to allow
 for drastic re-writes of a library. I'll talk about that in a
 second, but I think that is a thing that needs to have allowances
 for happening.
 
 So the model that keystone-lite used - create an experimental
 branch for the new work, eventually propose that it becomes the new
 master - seems like a better fit for the drastic rewrite scenario
 than copying the stable/* model from the server projects, because I
 think the most common thing will be that library changes are
 evolutionary, and having two mildly different branches that both
 represent something that's actually pretty much stable will just be
 more confusing than helpful.
 
 That being said - at such a time that there is actually a
 pain-point or a specific need for a stable branch, creating
 branches is fairly easy ... but I think once we have an actual
 burning need for such a thing, it will make it easier for us to
 look at models of how we'll use it.
 
 - API or major-rewrite-driven versioning scheme
 
 I was wondering why bcwaldon and I were missing each other so
 strongly in the channel the other day when we were discussing this,
 and then I realized that it's because we have one word API that's
 getting overloaded for a couple of different meanings - and also
 that I was being vague in my usage of the word. So to clarify, a
 client library has:
 
 * programming level code APIs * supported server REST APIs
 
 So I back off everything I said about tying client libs version to 
 server REST API support. Brian was right, I was wrong. The thing
 that's more important here is that the version should indicate
 programmer contract, and if that is changed in a breaking
 manner, the major number should bump.
 
 If we combine that with the point from above that our libraries
 should always support the existing server REST APIs, then I think
 we can just purely have statements like support for 

[Openstack] [metering] mapping query API to DB engine API

2012-06-18 Thread Doug Hellmann
I have updated the proposed DB engine API to include query methods [1] we
will need based on the REST API [2]. I also updated the REST API page in
the wiki with references to which method implements each query.

I'm not entirely happy with the results because the new DB API methods are
all duplicated depending on whether "user" or "project" is passed. Exactly
one of the two values is always required, so it also didn't feel right to
have both be optional and force the engine to validate the arguments. Does
anyone else have any thoughts on how to address that?

Doug

[1]
https://github.com/dhellmann/ceilometer/commit/18130bcafabf23871831e9b4913752ebb7b9f3ef
[2] http://wiki.openstack.org/EfficientMetering/APIProposalv1
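One possible shape for collapsing the duplicated methods, sketched with hypothetical
names (this is not what the commit above implements):

def get_raw_events(source, user=None, project=None, start=None, end=None):
    """Hypothetical single entry point replacing the per-user/per-project method pairs."""
    if (user is None) == (project is None):
        raise ValueError('exactly one of "user" or "project" must be given')
    # ...dispatch to the underlying storage engine here...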
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone API V3 - draft 2 now available

2012-06-18 Thread Gabriel Hurley
Hi Joe,

I added lots of comments on the google doc. I think most of them reinforce the 
existing design decisions. That said, there are a few high-level issues I'd 
like to ask for discussion on:


1.   This API features no differentiation between the admin API and the 
regular API as it exists currently; I assume this is due to the new policy 
engine. Am I correct, and does that mean that Keystone will no longer be using 
the admin port (35357)?

2.   User roles on domains solve the issue of who has the power to manage 
tenants, but that then begs the question: who has the power to manage 
domains? The same question applies to services and policies. Anything that is 
not scoped to the domain still falls into a grey area, and the previous answer 
of "anyone who's got that permission anywhere has that permission everywhere" 
strikes me as massively broken.

3.   On an API level, I'd like to see this API be the first to support a 
parameter on all GET requests that causes the request to return not only the 
serialization of that single object, but all the related objects as well. For 
example, the GET /tenant/{tenant_id} call by default would have a "domain_id" 
attribute, but with this flag it would have a "domain" attribute containing the 
entire serialized domain object (see the illustrative example after this list). 
As for the name of this flag, I don't feel strongly. Django calls this concept 
"select_related"; SQLAlchemy calls it "eagerload". We could pick whatever we 
like here, but I'll be asking for this in Nova et al.'s APIs going forward too.

4.   In the "you probably don't even want to touch it" category: have you 
given any thought to password reset functionality? Obviously it's backend 
dependent, but having some general concept of "forgot password"/"forgot 
username" would be important to end users in many cases. There are three cases 
I can see depending on the backend: directly provide a password reset mechanism 
where possible; provide instructions for password reset (configured by the 
system admin) where there is an external process in place; or return Not 
Implemented when neither previous case is satisfied. I'm not saying this *must* 
appear in this API spec, but it's worth mentioning.
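To make the idea in point 3 concrete, the two response shapes might look like this
(field values are made up, and the flag name is only a placeholder):

# GET /tenant/{tenant_id}  -- default response (illustrative values)
{"tenant": {"id": "t-123", "name": "demo", "domain_id": "d-456"}}

# GET /tenant/{tenant_id}?expand=domain  -- hypothetical flag name
{"tenant": {"id": "t-123", "name": "demo",
            "domain": {"id": "d-456", "name": "example"}}}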

Thanks for all the work on this. It's really looking great!


-  Gabriel

From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net 
[mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net] On 
Behalf Of Joseph Heck
Sent: Sunday, June 17, 2012 3:09 PM
To: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
Subject: [Openstack] Keystone API V3 - draft 2 now available

Draft 2 of the V3 Core Keystone API is now available for comment:

  
https://docs.google.com/document/d/1_TkawQIa52eSBfS4pv_nx1SJeoBghIlGVZsRJJynKAM/edit

In this revision, I've
 * updated the token structure a bit - to match the new resources
 * changed how the associations or user-tenant through a role are enabled 
(POST instead of PUT)
 * put in detailed examples of responses to every call

The general format of this documentation roughly follows the developer 
documentation at developer.github.com, which I 
thought had a pretty good model of showing how to use the APIs and describing 
the relevant pieces. There's a lot of cut and paste in there, so if something 
seems obviously wrong, it probably is ... please make a comment on the google 
doc and let me know.

This document is far more structured and complete, and contains sufficient 
detail for those excited about WADLs and XSDs and such to create relevant 
mappings.

Feedback needed please, comment away!

-joe


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Openstack-poc] Thoughts on client library releasing

2012-06-18 Thread Gabriel Hurley
Big +1 for automated tagging and releasing (sounds like we're managing 
wildlife...) from Jenkins!

- Gabriel

 -Original Message-
 From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
 [mailto:openstack-
 bounces+gabriel.hurley=nebula@lists.launchpad.net] On Behalf Of
 Monty Taylor
 Sent: Monday, June 18, 2012 3:00 PM
 To: Joe Heck
 Cc: openstack-...@lists.launchpad.net; openstack@lists.launchpad.net
 Subject: Re: [Openstack] [Openstack-poc] Thoughts on client library releasing
 
 
 
 On 06/18/2012 02:26 PM, Joe Heck wrote:
  Monty -
 
  Thierry stated it as an assumption last PPB meeting, but I'd like it
  to be explicit that we have at least a tag on each client library
  release that we make so that it's possible to distribute a version of
  the clients.
 
 +1000
 
 I didn't want to get too far into implementation details, but the way I'd 
 really
 like to see this work for the client libs is that releases are actually 
 triggered via
 jenkins by tags on the repo - so there would literally be no way to release
 something _without_ a tag.
 
 I've got a POC patch to do the tag-based-versioning here:
 
 https://review.openstack.org/#/c/8427/
 
 We need to get (aiui) one thing landed to zuul so that we can appropriately
 trigger on tag events... but that's the plan in my brain hole.
 
  On Jun 18, 2012, at 2:11 PM, Monty Taylor wrote:
  We're trying to figure out how we release client libraries. We're
  really close - but there are some sticking points.
 
  First of all, things that don't really have dissent (with
  reasoning)
 
  - We should release client libs to PyPI
 
  Client libs are for use in other python things, so they should be
  able to be listed as dependencies. Additionally, proper releases to
  PyPI will make our cross project depends work more sensibly
 
  - They should not necessarily be tied to server releases
 
  There could be a whole version of the server which sees no needed
  changes in the client. Alternately, there could be new upcoming
  server features which need to go into a released version of the
  library even before the server is released.
 
  - They should not be versioned with the server
 
  See above.
 
  - Releases of client libs should support all published versions of
  server APIs
 
  An end user wants to talk to his openstack cloud - not necessarily to
  his Essex cloud or his Folsom cloud. That user may also have accounts
  on multiple providers, and would like to be able to write one program
  to interact with all of them - if the user needed the folsom version
  of the client lib to talk to the folsom cloud and the essex version
  to talk to the essex cloud, his life is very hard. However, if he can
  grab the latest client lib and it will talk to both folsom and essex,
  then he will be happy.
 
  There are three major points where there is a lack of clear
  agreement. Here they are, along with suggestions for what we do about
  them.
 
  - need for official stable branches
 
  I would like to defer on this until such a time as we actually need
  it, rather than doing the engineering for in case we need it. But
  first, I'd like to define we, and that is that we are OpenStack as
  an upstream. As a project, we are at the moment probably the single
  friendliest project for the distros in the history of software. But
  that's not really our job. Most people out there writing libraries do
  not have multiple parallel releases of those libraries - they have
  the stable library, and then they release a new one, and people
  either upgrade their apps to use the new lib or they don't.
 
  One of the reasons this has been brought up as a need is to allow for
  drastic re-writes of a library. I'll talk about that in a second, but
  I think that is a thing that needs to have allowances for happening.
 
  So the model that keystone-lite used - create an experimental branch
  for the new work, eventually propose that it becomes the new master -
  seems like a better fit for the drastic rewrite scenario than
  copying the stable/* model from the server projects, because I think
  the most common thing will be that library changes are evolutionary,
  and having two mildly different branches that both represent
  something that's actually pretty much stable will just be more
  confusing than helpful.
 
  That being said - at such a time that there is actually a pain-point
  or a specific need for a stable branch, creating branches is fairly
  easy ... but I think once we have an actual burning need for such a
  thing, it will make it easier for us to look at models of how we'll
  use it.
 
  - API or major-rewrite-driven versioning scheme
 
  I was wondering why bcwaldon and I were missing each other so
  strongly in the channel the other day when we were discussing this,
  and then I realized that it's because we have one word API that's
  getting overloaded for a couple of different meanings - and also that
  I was being vague in my usage of 

[Openstack] [devstack] Quantum support

2012-06-18 Thread Gary Kotton

Hi,
Quantum has moved to openstack common configuration (the plugin.ini file 
no longer exists). Support has been added to devstack to ensure that 
quantum and devstack will work irrespective of the version running. 
Would it be possible to review this so that we can move forward with the 
approval of the common config task? https://review.openstack.org/#/c/8176/

Thanks in advance
Gary

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack-poc] Thoughts on client library releasing

2012-06-18 Thread Monty Taylor
We're trying to figure out how we release client libraries. We're really
close - but there are some sticking points.

First of all, things that don't really have dissent (with reasoning)

- We should release client libs to PyPI

Client libs are for use in other python things, so they should be able
to be listed as dependencies. Additionally, proper releases to PyPI will
make our cross project depends work more sensibly

- They should not necessarily be tied to server releases

There could be a whole version of the server which sees no needed
changes in the client. Alternately, there could be new upcoming server
features which need to go into a released version of the library even
before the server is released.

- They should not be versioned with the server

See above.

- Releases of client libs should support all published versions of
server APIs

An end user wants to talk to his openstack cloud - not necessarily to
his Essex cloud or his Folsom cloud. That user may also have accounts on
multiple providers, and would like to be able to write one program to
interact with all of them - if the user needed the folsom version of the
client lib to talk to the folsom cloud and the essex version to talk to
the essex cloud, his life is very hard. However, if he can grab the
latest client lib and it will talk to both folsom and essex, then he
will be happy.

There are three major points where there is a lack of clear agreement.
Here they are, along with suggestions for what we do about them.

- need for official stable branches

I would like to defer on this until such a time as we actually need it,
rather than doing the engineering for "in case we need it." But first, I'd
like to define "we", and that is that "we" are OpenStack as an upstream.
As a project, we are at the moment probably the single friendliest
project for the distros in the history of software. But that's not
really our job. Most people out there writing libraries do not have
multiple parallel releases of those libraries - they have the stable
library, and then they release a new one, and people either upgrade
their apps to use the new lib or they don't.

One of the reasons this has been brought up as a need is to allow for
drastic re-writes of a library. I'll talk about that in a second, but I
think that is a thing that needs to have allowances for happening.

So the model that keystone-lite used - create an experimental branch for
the new work, eventually propose that it becomes the new master - seems
like a better fit for the drastic rewrite scenario than copying the
stable/* model from the server projects, because I think the most common
thing will be that library changes are evolutionary, and having two
mildly different branches that both represent something that's actually
pretty much stable will just be more confusing than helpful.

That being said - at such a time that there is actually a pain-point or
a specific need for a stable branch, creating branches is fairly easy
... but I think once we have an actual burning need for such a thing, it
will make it easier for us to look at models of how we'll use it.

 - API or major-rewrite-driven versioning scheme

I was wondering why bcwaldon and I were missing each other so strongly
in the channel the other day when we were discussing this, and then I
realized that it's because we have one word, "API", that's getting
overloaded for a couple of different meanings - and also that I was
being vague in my usage of the word. So to clarify, a client library has:

 * programming level code APIs
 * supported server REST APIs

So I back off everything I said about tying client libs version to
server REST API support. Brian was right, I was wrong. The thing that's
more important here is that the version should indicate programmer
contract, and if that is changed in a breaking manner, the major
number should bump.

If we combine that with the point from above that our libraries should
always support the existing server REST APIs, then I think we can just
purely have statements like "support for compute v3 can be found in
2.7.8 and later" and people will likely be fine, because it will map
easily to the idea "just grab the latest lib and you should be able to
talk to the latest server." Yea?

So in that case, the client libs versions are purely whatever they are
right now, and we'll increase them moving forward using normal library
version thoughts.

 - room for deprecating old APIs

The above then leads us to wanting to know what we do about supported
server REST APIs over time, especially since I keep making sweeping
statements about "should support all available server versions" ... How
about this as a straw man: Since we're planning on beginning to run
tests of the client libs against previous versions (so we'll test trunk
novaclient against essex nova in addition to trunk nova) ... we need
support for past versions of servers as long as our automation can
sensibly spin up a past version. (Since the support for that 

[Openstack-qa-team] wait_for_server_status and Compute API

2012-06-18 Thread David Kranz
There are a few tempest tests, and many in the old kong suite that is 
still there, that wait for a server status that is something other than 
ACTIVE or VERIFY_RESIZE. These other states, such as BUILD or REBOOT, 
are transient so I don't understand why it is correct for code to poll 
for those states. Am I missing something or do those tests have race 
condition bugs?


 -David

--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-qa-team] wait_for_server_status and Compute API

2012-06-18 Thread Daryl Walleck
I can verify that rescue is a non-race state. The transition is active to 
rescue on setting rescue, and rescue to active when leaving rescue.


 Original message 
Subject: Re: [Openstack-qa-team] wait_for_server_status and Compute API
From: Jay Pipes jaypi...@gmail.com
To: openstack-qa-team@lists.launchpad.net 
openstack-qa-team@lists.launchpad.net,openst...@lists.launchpad.net 
openst...@lists.launchpad.net
CC: Re: [Openstack-qa-team] wait_for_server_status and Compute API


On 06/18/2012 12:01 PM, David Kranz wrote:
 There are a few tempest tests, and many in the old kong suite that is
 still there, that wait for a server status that is something other than
 ACTIVE or VERIFY_RESIZE. These other states, such as BUILD or REBOOT,
 are transient so I don't understand why it is correct for code to poll
 for those states. Am I missing something or do those tests have race
 condition bugs?

No, you are correct, and I have made some comments in recent code
reviews to that effect.

Here are all the task states:

https://github.com/openstack/nova/blob/master/nova/compute/task_states.py

Out of all those task states, I believe the only one safe to poll in a
wait loop is RESIZE_VERIFY. All the others are prone to state
transitions outside the control of the user.

For the VM states:

https://github.com/openstack/nova/blob/master/nova/compute/vm_states.py

I consider the following to be non-racy, quiescent states:

ACTIVE
DELETED
STOPPED
SHUTOFF
PAUSED
SUSPENDED
ERROR

I consider the following to be racy states that should not be tested for:

MIGRATING -- Instead, the final state should be checked for...
RESIZING -- Instead, the RESIZE_VERIFY and RESIZE_CONFIRM task states
should be checked

I have absolutely no idea what the state termination is for the
following VM states:

RESCUED -- is this a permanent state? Is this able to be queried for in
a consistent manner before it transitions to some further state?

SOFT_DELETE -- I have no clue what the purpose or queryability of this
state is, but would love to know...

Best,
-jay

--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp
-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-qa-team] wait_for_server_status and Compute API

2012-06-18 Thread David Kranz

On 6/18/2012 1:07 PM, Jay Pipes wrote:

On 06/18/2012 12:49 PM, Daryl Walleck wrote:

I can verify that rescue is a non-race state. The transition is active
to rescue on setting rescue, and rescue to active when leaving rescue.


I don't see a RESCUE state. I see a RESCUED state. Is that what you 
are referring to here? Want to make sure, since the semantics and 
tenses of the power, VM, and task states are a bit inconsistent.


Best,
-jay


-
For a black-box test, what we have is 'status', which is neither vm_state 
nor task_state. I believe 'status' contains the values of the 
attributes in the code below. I am going to add an assertion to 
wait_for_server_status that will fail if you give it an ephemeral state. 
From this list and the comments of Daryl and Jay, I propose the following 
allowed states for this check:

ACTIVE, VERIFY_RESIZE, STOPPED, SHUTOFF, PAUSED, SUSPENDED, RESCUE, 
ERROR, DELETED


Any comments?
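A minimal sketch of the proposed guard (the stable-status list is the one proposed
above; the client call is hypothetical):

import time

STABLE_STATUSES = ('ACTIVE', 'VERIFY_RESIZE', 'STOPPED', 'SHUTOFF', 'PAUSED',
                   'SUSPENDED', 'RESCUE', 'ERROR', 'DELETED')

def wait_for_server_status(client, server_id, status, timeout=400, interval=3):
    # Refuse to poll for transient statuses such as BUILD or REBOOT.
    assert status in STABLE_STATUSES, (
        '%s is a transient status; polling for it is racy' % status)
    start = time.time()
    while time.time() - start < timeout:
        server = client.get_server(server_id)  # hypothetical client call
        if server['status'] == status:
            return
        time.sleep(interval)
    raise AssertionError('server %s never reached status %s' % (server_id, status))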


From nova/nova/api/openstack/common.py:

_STATE_MAP = {
    vm_states.ACTIVE: {
        'default': 'ACTIVE',
        task_states.REBOOTING: 'REBOOT',
        task_states.REBOOTING_HARD: 'HARD_REBOOT',
        task_states.UPDATING_PASSWORD: 'PASSWORD',
        task_states.RESIZE_VERIFY: 'VERIFY_RESIZE',
    },
    vm_states.BUILDING: {
        'default': 'BUILD',
    },
    vm_states.REBUILDING: {
        'default': 'REBUILD',
    },
    vm_states.STOPPED: {
        'default': 'STOPPED',
    },
    vm_states.SHUTOFF: {
        'default': 'SHUTOFF',
    },
    vm_states.MIGRATING: {
        'default': 'MIGRATING',
    },
    vm_states.RESIZING: {
        'default': 'RESIZE',
        task_states.RESIZE_REVERTING: 'REVERT_RESIZE',
    },
    vm_states.PAUSED: {
        'default': 'PAUSED',
    },
    vm_states.SUSPENDED: {
        'default': 'SUSPENDED',
    },
    vm_states.RESCUED: {
        'default': 'RESCUE',
    },
    vm_states.ERROR: {
        'default': 'ERROR',
    },
    vm_states.DELETED: {
        'default': 'DELETED',
    },
    vm_states.SOFT_DELETE: {
        'default': 'DELETED',
    },
}

def status_from_state(vm_state, task_state='default'):
    """Given vm_state and task_state, return a status string."""
    task_map = _STATE_MAP.get(vm_state, dict(default='UNKNOWN_STATE'))
    status = task_map.get(task_state, task_map['default'])
    LOG.debug("Generated %(status)s from vm_state=%(vm_state)s "
              "task_state=%(task_state)s." % locals())
    return status


def vm_state_from_status(status):
    """Map the server status string to a vm state."""
    for state, task_map in _STATE_MAP.iteritems():
        status_string = task_map.get('default')
        if status.lower() == status_string.lower():
            return state



--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp