Re: [Openstack] Thoughts on client library releasing

2012-06-19 Thread Mark McLoughlin
Hi Monty,

Thanks for sending.

For reference, this was the link you posted last week:

  http://wiki.openstack.org/Governance/Proposed/LibraryProjectDefinition

One question I had on that is re:

  the ability to release a client library outside of the core project 
  release cycle (requests have been coming in to our release manager for
  this)

Who were these requests from, and why? That would help us understand what
we're trying to do here.

In any case, my tl;dr suggestion is:

  1) The client libs get released along with OpenStack releases and 
 versioned the same.

 So, we release glanceclient-2012.2 with the Folsom release.

 This makes it really easy for users to understand. If you want a 
 client lib for the REST API versions first released in Folsom, use 
 the Folsom (or later) lib.

  2) We also include client libs in milestone releases, for people 
 wanting to use the libs against development releases of OpenStack. 

 One issue is how to version these in pypi? Don't know enough about 
 pypi's setup to comment, but e.g. do we want the latest milestone
 release to be what people get when they do 'pip install'?

 Another issue is if folks want a release of a client lib between 
 milestones. We should be able to do that, so long as it doesn't 
 mean prematurely locking us into API compat before work on a new 
 API is complete.

  3) For folks who want a safe source of bugfixes, we can maintain a
 stable branch of the libs and do releases of those.

On Mon, 2012-06-18 at 14:11 -0700, Monty Taylor wrote:
 We're trying to figure out how we release client libraries. We're really
 close - but there are some sticking points.
 
 First of all, things that don't really have dissent (with reasoning)
 
 - We should release client libs to PyPI
 
 Client libs are for use in other python things, so they should be able
 to be listed as dependencies. Additionally, proper releases to PyPI will
 make our cross-project dependencies work more sensibly
 
 - They should not necessarily be tied to server releases
 
 There could be a whole version of the server which sees no needed
 changes in the client. Alternately, there could be new upcoming server
 features which need to go into a released version of the library even
 before the server is released.

At what point does the server commit to maintaining compatibility with
its new API?

e.g. if we add a new API and, a week later, an official release of the
client lib is done to support the new API ... have we then prevented
ourselves from fixing issues with the API?

 - They should not be versioned with the server
 
 See above.
 
 - Releases of client libs should support all published versions of
 server APIs
 
 An end user wants to talk to his openstack cloud - not necessarily to
 his Essex cloud or his Folsom cloud. That user may also have accounts on
 multiple providers, and would like to be able to write one program to
 interact with all of them - if the user needed the folsom version of the
 client lib to talk to the folsom cloud and the essex version to talk to
 the essex cloud, his life is very hard. However, if he can grab the
 latest client lib and it will talk to both folsom and essex, then he
 will be happy.
 
 There are three major points where there is a lack of clear agreement.
 Here they are, along with suggestions for what we do about them.
 
 - need for official stable branches
 
 I would like to defer on this until such a time as we actually need it,
 rather than doing the engineering just in case we need it. But first, I'd
 like to define "we", and that is that "we" are OpenStack as an upstream.
 As a project, we are at the moment probably the single friendliest
 project for the distros in the history of software. But that's not
 really our job.

That's quite a bunch of claims ... I'll ignore them for now :)

 Most people out there writing libraries do not have multiple parallel
 releases of those libraries - they have the stable library, and then
 they release a new one, and people either upgrade their apps to use
 the new lib or they don't.

Don't agree, here's an example - libvirt 0.9.11 announced:

  https://www.redhat.com/archives/libvir-list/2012-April/msg00085.html

and then 4 maintenance releases of 0.9.11.x:

  https://www.redhat.com/archives/libvir-list/2012-April/msg01410.html
  https://www.redhat.com/archives/libvir-list/2012-April/msg01429.html
  https://www.redhat.com/archives/libvir-list/2012-May/msg00370.html
  https://www.redhat.com/archives/libvir-list/2012-June/msg00686.html

and here's a release of 0.9.12:

  https://www.redhat.com/archives/libvir-list/2012-May/msg00707.html

and, lo, here's where I first made the case for bugfix-only releases of
libvirt:

  http://www.redhat.com/archives/libvir-list/2009-March/msg00485.html

 One of the reasons this has been brought up as a need is to allow for
 drastic re-writes of a library. I'll talk about that in a second, but I
 think that is a thing 

[Openstack] Nova scheduler

2012-06-19 Thread Neelakantam Gaddam
Hi All,

The Nova scheduler decides the compute host on which VM instances run, based
on the selected scheduling algorithm. Is it possible to choose a particular
compute host for each request where a VM instance should run?

-- 
Thanks & Regards
Neelakantam Gaddam


Re: [Openstack] Nova scheduler

2012-06-19 Thread heut2008
You should use admin credentials, then run instances by adding the
option --force_hosts=node1 to nova boot.
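
(For anyone doing this from a script rather than the CLI, here is a rough
python-novaclient sketch of the same idea. The admin-only
availability_zone='nova:node1' form for pinning an instance to a host is my
assumption for illustration; the --force_hosts option mentioned above may be
what you actually want.)

# Rough sketch, untested: boot an instance pinned to compute host "node1"
# using admin credentials. "nova:node1" as an availability zone is assumed.
from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin_tenant',
                     'http://keystone:5000/v2.0/')
image = nova.images.find(name='tty-linux')
flavor = nova.flavors.find(name='m1.tiny')
nova.servers.create('test', image, flavor,
                    availability_zone='nova:node1')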

2012/6/19 Neelakantam Gaddam neelugad...@gmail.com:
 Hi All,

 The Nova scheduler decides the compute host on which VM instances run, based
 on the selected scheduling algorithm. Is it possible to choose a particular
 compute host for each request where a VM instance should run?

 --
 Thanks & Regards
 Neelakantam Gaddam





Re: [Openstack] [Blueprint automatic-secure-key-generation] Automatic SECURE_KEY generation

2012-06-19 Thread Sascha Peilicke
On 06/19/2012 04:55 AM, Paul McMillan wrote:
 Ah, you're thinking of a setup where there are multiple dashboard VMs
 behind a load-balancer serving requests. Indeed, there the dashboard
 instances should either share the SECRET_KEY or the load-balancer has to
 make sure that all requests of a given session are redirected to the
 same dashboard instance.
 
 I'm concerned that anything which automatically generates a secret key
 will cause further problems down the line for other users. For example,
 you've clearly experienced what happens when you use more than one
 worker and generate a per-process key. Imagine trying to debug that same
 problem on a multi-system cloud (with a load balancer that usually
 routes people to the same place, but not always). If you aren't forced
 to learn about this setting during deployment, you are faced with a
 nearly impossible-to-debug problem of users just sometimes getting logged out.
 
 I feel like this patch is merely kicking the can down the road just a
 little past where your particular project needs it to be, without
 thinking about the bigger picture.
I'm sorry about that, but that was definitely not the intent. Inherently
you are right, there are just far too many possible setups to get them
all right. Thus it's a valid choice to defer such decisions to the one
doing the setup. But trying to ease the process can't be that wrong
either. That's the whole reason why distributions don't just provide
packages that merely include pristine upstream tarballs. We try (and
sometimes fail) to provide defaults that are at least useful for
getting started.

 
 I'm sure you're not seriously suggesting that a large-scale production
 deployment of openstack will be served entirely by a single point of
 failure dashboard server.
 
 But shouldn't local_settings.py still take preference over settings.py?
 Thus the admin could still set a specific SECRET_KEY in
 local_settings.py regardless of the default (auto-generated) one. So I
 only would have to fix the patch by not removing the documentation about
 SECRET_KEY from local_settings.py, right?
 
 I agree with Gabriel. Horizon should ship with no secret key (so it
 won't start until you set one). At most, it should automatically
 generate a key on a per-process basis, or possibly as part of run_tests,
 so that getting started with development is easy. Under no circumstances
 should it attempt to read the mind of an admin doing a production
 deployment, because it will invariably do the wrong thing more often
 than the right. As a security issue, it's important that admins READ THE
 DOCUMENTATION. Preventing the project from starting until they address
 the issue is one good way.
Ok, I've adjusted the patch to reflect that. So there's no default
anymore but some more documentation about the options instead:

https://github.com/saschpe/horizon/compare/bp/automatic-secure-key-generation

This should now match Gabriel's proposal, so would that be ok for you?
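
(As an aside, here's a minimal sketch of the kind of thing a deployer could
then drop into local_settings.py once there is no default key; the file path
is invented for illustration and it's not part of the patch above.)

# local_settings.py -- minimal sketch, not actual Horizon code.
# Read the key from a file readable only by the web server user, so all
# workers (and any dashboard nodes sharing the file) use the same key.
with open('/etc/openstack-dashboard/secret_key') as f:
    SECRET_KEY = f.read().strip()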

 Unfortunately, this is only relevant for securing production
 deployments. Nobody cares if a developer instance is setup securely ;-)
 
 I beg to differ. Opening trivially exploitable holes in development
 machines (especially laptops) can be extremely damaging. (you did read
 the docs about the consequences of disclosing a secret key, right?)
Actually, this statement wasn't meant to be overly serious, therefore
the smiley. I only tried to clarify that production was the primary
concern, sorry again.

 If we don't force developers and end-users to read the documentation,
 particularly around security features, they will get them wrong. Best
 Guess security isn't a business I want to be in.
No doubt about that, and still they do. Hopefully the current patch is
acceptable to you now.
-- 
With kind regards,
Sascha Peilicke
SUSE Linux GmbH, Maxfeldstr. 5, D-90409 Nuernberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer HRB 16746 (AG Nürnberg)






Re: [Openstack] Nova scheduler

2012-06-19 Thread Razique Mahroua
Hi,
you can find the available filters here:
http://nova.openstack.org/devref/filter_scheduler.html
Here are the ones you might look at:
IsolatedHostsFilter
JsonFilter
DifferentHostFilter/SameHostFilter

Razique
Nuage & Co - Razique Mahroua
razique.mahr...@gmail.com

On 19 Jun 2012, at 10:54, Neelakantam Gaddam wrote:
 Hi All,
 The Nova scheduler decides the compute host on which VM instances run, based
 on the selected scheduling algorithm. Is it possible to choose a particular
 compute host for each request where a VM instance should run?
 --
 Thanks & Regards
 Neelakantam Gaddam


Re: [Openstack] Nova scheduler

2012-06-19 Thread Razique Mahroua
Even simpler, yes.
Thanks
Nuage & Co - Razique Mahroua
razique.mahr...@gmail.com

On 19 Jun 2012, at 11:00, heut2008 wrote:
 You should use admin credentials, then run instances by adding the
 option --force_hosts=node1 to nova boot.

 2012/6/19 Neelakantam Gaddam neelugad...@gmail.com:
  Hi All,
  The Nova scheduler decides the compute host on which VM instances run, based
  on the selected scheduling algorithm. Is it possible to choose a particular
  compute host for each request where a VM instance should run?
  --
  Thanks & Regards
  Neelakantam Gaddam


Re: [Openstack] Thoughts on client library releasing

2012-06-19 Thread Thierry Carrez
Mark McLoughlin wrote:
 One question I had on that is re:
 
   the ability to release a client library outside of the core project 
   release cycle (requests have been coming in to our release manager for
   this)
 
 Who were these requests from, and why? That would help us understand what
 we're trying to do here.

A bit of history would help.

Originally the client library projects were independent and were
released by their author to PyPI. People built their tooling and
automation around the capability for PyPI to fetch the version they
needed (I don't really support that, but apparently that's what most
devs do). So any time someone needed to access a new feature, they would
ask the author for a new PyPI release. It resulted in a lot of
confusion, as the projects on PyPI were maintained by a number of folks
and some of them were a bit unmaintained.

Then we realized that the client libraries were needed by core projects
(Nova -> Glance, Nova -> Nova zones, Horizon, Glance -> Swift, etc.)
while they were not OpenStack core projects themselves. Rather than creating a bunch of new
core projects, the PPB decided to consider them as separate release
deliverables of the corresponding server project, inheriting its PTL
and Launchpad project.

However they also inherited the release scheme of the server project
(new version every 6 months), which was (or was not) synced to PyPI
depending on who owned the PyPI project. More confusion, as PyPI
rarely contained the just-released version, since releasing to PyPI was
not handled by OpenStack release management.

People kept on asking for new PyPI releases to access recent features.
Sometimes it's just that they need an early version of the client
somewhere to start integrating it before the milestone hits
(cinderclient, swiftclient). A release every 6 months is not enough.
Should PyPI only get releases ? or milestones ? or dailies ? And PyPI
does not support pre-versioning (2012.2~DATE) in the same way the server
does. For example there is no way to version our milestones in PyPI.

For all those reasons, it appears to be simpler to revisit the decision
of sharing versioning and release scheme with the server project. The
client library is a separate project anyway, with its own rewrites, APIs
etc. Conflating the two (like sharing projects in Launchpad) led to
confusion.

 In any case, my tl;dr suggestion is:
 
   1) The client libs get released along with OpenStack releases and 
  versioned the same.
 [...]
   2) We also include client libs in milestone releases, for people 
  wanting to use the libs against development releases of OpenStack. 

That's the current situation, and as explained above it puts us in a
very difficult spot with releasing to PyPI. There is no elegant way for
client libraries to follow OpenStack Core versioning and still be
released often with a version number that makes sense on PyPI. On the other
hand, Monty's solution solves that. So I'd reverse the question: what
would be the killer benefit of keeping on doing the same thing ?

I see only one: clear match between client and supported server version.
I think that can be documented, or we can use a versioning logic that
makes that very apparent, while still using a clearly separate release
scheme for client libraries.

Basically the benefit of solving our long-standing issues with releasing
on PyPI and clearing all the confusion those create sounds more
important to me.

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



[Openstack] Reminder: Project release status meeting - 21:00 UTC

2012-06-19 Thread Thierry Carrez
On the agenda for the Project & release status meeting today:

Just two weeks left before Folsom-2 milestone-proposed branch is cut !
Time to readjust objectives and check that the essential stuff (think:
Cinder split) is covered.

Feel free to add extra topics to the agenda:
[1] http://wiki.openstack.org/Meetings/ProjectMeeting

All PTLs should be present (if you can't make it, please name a
substitute on [1]). Everyone else is welcome to attend.

The meeting will be held at 21:00 UTC on the #openstack-meeting channel
on Freenode IRC. You can look up how this time translates locally at:
[2] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20120619T21

See you there,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



[Openstack] [metering] Glance usage data retrieval

2012-06-19 Thread Julien Danjou
Hello,

As part of the ceilometer project¹, we're working on usage data
retrieval from various OpenStack components. One of them is Glance.
We're targeting Folsom for the first release, therefore it seems
important for both projects to be able to work together, which is why
we're bringing ceilometer to your attention and asking for advice. :)

What we want is to retrieve the maximum amount of data, so we can meter
things, to bill them in the end. For now and for Glance, this only
includes the number of images uploaded (per user/tenant), but we'll
probably need more in the near future.

At this point we plan to plug into the notification system, since it
seems to be what Glance provides to accomplish this. And so far, the
notifications provided seem enough to accomplish what we want to do.

Do you have any advice regarding integration of Ceilometer and Glance
together? Is this something a stable interface we can rely on, or is
there a better way to do so?

Thanks in advance,

Regards,

¹  http://launchpad.net/ceilometer

-- 
Julien Danjou
// eNovance  http://enovance.com
// ✉ julien.dan...@enovance.com  ☎ +33 1 49 70 99 81



Re: [Openstack] [metering] Glance usage data retrieval

2012-06-19 Thread stuart . mclaren

Brian, Jay,

I'll give you a chance to reply to Julien first, but
I have a follow on query...

It doesn't seem like Glance currently produces enough records for full
metering of operations. E.g. if you want to charge a specific user for
every successful image upload/download, this information doesn't
'fall out' of the current logs. For example, eventlet output such as:

Jun 19 09:18:39 az1-gl-api-0001DEBUG 3795 [eventlet.wsgi.server] 127.0.0.1 - - 
[19/Jun/2012 09:18:39] GET
/v1/images/4921 HTTP/1.1 200 104858310 2.098967

doesn't tie the GET operation to a particular user.
As Julien mentions there is an image_upload notification, but I don't
see an equivalent image_download notification.

(Note I'm using old Diablo code at the moment, so I'm going on
code inspection of the latest code -- apologies if I missed anything.)

Is the creation of records of operations, in a well-defined format which
can be consumed by e.g. a metering and billing team, something we'd
like to have? (I'm ignoring data-at-rest metering for now.)

If so, would something along the following lines be worth considering?

A WSGI filter, using a mechanism similar to Swift's posthooklogger, with
a routine that is called when operations are about to complete. It
would have access to the context (HTTP return status, user, operation,
etc.) and so would be able to raise a notification if appropriate. Using a
standard notifier would mean output could go to a log file or, perhaps
in time, the notifications could be consumed by ceilometer.
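
Purely as a sketch of the idea (the class, the notify hookup and the payload
keys are invented here for illustration, this isn't existing Glance code),
something along these lines:

import webob.dec

class UsageFilter(object):
    """WSGI filter that raises a notification when an operation completes."""

    def __init__(self, app, notify):
        self.app = app        # the wrapped Glance WSGI application
        self.notify = notify  # callable taking (event_type, payload)

    @webob.dec.wsgify
    def __call__(self, req):
        resp = req.get_response(self.app)
        # Auth middleware earlier in the pipeline normally puts the request
        # context (user, tenant, ...) into the WSGI environment.
        ctxt = req.environ.get('glance.context')
        self.notify('image.access', {
            'method': req.method,
            'path': req.path,
            'status': resp.status_int,
            'user': getattr(ctxt, 'user', None),
            'tenant': getattr(ctxt, 'tenant', None),
            'bytes': resp.content_length,
        })
        return resp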

-Stuart

On Tue, 19 Jun 2012, Julien Danjou wrote:


Hello,

As part of the ceilometer project¹, we're working on usage data
retrieval from various OpenStack components. One of them is Glance.
We're targeting Folsom for the first release, therefore it seems
important for both projects to be able to work together, this is why
we're bringing ceilometer to your attention and asking for advice. :)

What we want is to retrieve the maximum amount of data, so we can meter
things, to bill them in the end. For now and for Glance, this only
includes the number of images uploaded (per user/tenant), but we'll
probably need more in the near future.

At this point we plan to plug into the notification system, since it
seems to be what Glance provides to accomplish this. And so far, the
notifications provided seem enough to accomplish what we want to do.

Do you have any advice regarding integration of Ceilometer and Glance
together? Is this a stable interface we can rely on, or is
there a better way to do it?

Thanks in advance,

Regards,

¹  http://launchpad.net/ceilometer

--
Julien Danjou
// eNovance  http://enovance.com
// ✉ julien.dan...@enovance.com  ☎ +33 1 49 70 99 81



Re: [Openstack] Thoughts on client library releasing

2012-06-19 Thread Mark McLoughlin
On Tue, 2012-06-19 at 11:25 +0200, Thierry Carrez wrote:
 Mark McLoughlin wrote:
  One question I had on that is re:
  
the ability to release a client library outside of the core project 
release cycle (requests have been coming in to our release manager for
this)
  
  Who were these requests from, and why? That would help us understand what
  we're trying to do here.
 
 A bit of history would help.
 
 Originally the client library projects were independent and were
 released by their author to PyPI. People built their tooling and
 automation

Interested in specific examples ...

 around the capability for PyPI to fetch the version they
 needed (I don't really support that, but apparently that's what most
 devs do). So any time someone needed to access a new feature, they would
 ask the author for a new PyPI release. It resulted in a lot of
 confusion, as the projects on PyPI were maintained by a number of folks
 and some of them were a bit unmaintained.

Yep, and since the client libs are important to the project, it makes
sense for them to be maintained by the project and released by the
project's release team.

 Then we realized that the client libraries were needed by core projects
 (Nova -> Glance, Nova -> Nova zones, Horizon, Glance -> Swift, etc.)
 while they were not OpenStack core projects themselves. Rather than creating a bunch of new
 core projects, the PPB decided to consider them as separate release
 deliverables of the corresponding server project, inheriting its PTL
 and Launchpad project.

Yep, reasonable.

 However they also inherited the release scheme of the server project
 (new version every 6 months), which was (or was not) synced to PyPI
 depending on who owned the PyPI project. More confusion, as PyPI
 rarely contained the just-released version, since releasing to PyPI was
 not handled by OpenStack release management.

I would have thought that only OpenStack release management should be
publishing those libs to pypi now.

 People kept on asking for new PyPI releases to access recent features.
 Sometimes it's just that they need an early version of the client
 somewhere to start integrating it before the milestone hits
 (cinderclient, swiftclient). A release every 6 months is not enough.

Well, we do releases more often than that:

 - milestone releases provide early access to features currently under 
   development

 - stable releases provide a safe source of bugfixes to previous 
   releases

If the issue is integration *within* OpenStack - e.g. Nova needing
cinderclient - then I don't think that should dictate the release
schedule for these libs. We can surely figure out how to do that without
doing more regular official releases of the libs? But first, is that
what we're talking about?

If it's not just about integration within the project, what are the
examples of external projects needing OpenStack to release non-milestone
releases of the client libs more often? What kind of features are they
looking for?

 Should PyPI only get releases ? or milestones ? or dailies ? And PyPI
 does not support pre-versioning (2012.2~DATE) in the same way the server
 does. For example there is no way to version our milestones in PyPI.

I think we'd like to be able to publish milestone releases of the client
libs to pypi, but not have pip install install milestones unless they
are explicitly requested? Right?

pypi probably doesn't support that, but do we want to allow pypi's
limitations dictate our schedule either?

 For all those reasons, it appears to be simpler to revisit the decision
 of sharing versioning and release scheme with the server project. The
 client library is a separate project anyway, with its own rewrites, APIs
 etc. Conflating the two (like sharing projects in Launchpad) led to
 confusion.

I have to admit that I'm still a bit confused about exactly what problem we're
fixing here - e.g. who exactly is demanding OpenStack do more regular
releases of its client libs and why?

  In any case, my tl;dr suggestion is:
  
1) The client libs get released along with OpenStack releases and 
   versioned the same.
  [...]
2) We also include client libs in milestone releases, for people 
   wanting to use the libs against development releases of OpenStack. 
 
 That's the current situation, and as explained above it puts us in a
 very difficult spot with releasing to PyPI. There is no elegant way for
 client libraries to follow OpenStack Core versioning and be often
 released with a version number that makes sense on PyPI. On the other
 hand, Monty's solution solves that.

Ok, so this *is* all about pypi? I'm starting to understand now :)

 So I'd reverse the question: what would be the killer benefit of
 keeping on doing the same thing ?
 
 I see only one: clear match between client and supported server version.
 I think that can be documented, or we can use a versioning logic that
 makes that very apparent, while still using a clearly separate release
 scheme for client libraries.

No, 

Re: [Openstack] Thoughts on client library releasing

2012-06-19 Thread Thierry Carrez
Mark McLoughlin wrote:
 Originally the client library projects were independent and were
 released by their author to PyPI. People built their tooling and
 automation
 
 Interested in specific examples ...

Most developers apparently pull their libraries from PyPI. It's a bit
strange for people working on distro development like you and me, but
that's what other people do. Our continuous integration also uses PyPI
to pull Python libraries.

 However they also inherited the release scheme of the server project
 (new version every 6 months), which was (or was not) synced to PyPI
 depending on who owned the PyPI project. More confusion, as PyPI
 rarely contained the just-released version, since releasing to PyPI was
 not handled by OpenStack release management.
 
 I would have thought that only OpenStack release management should be
 publishing those libs to pypi now.

PyPI doesn't support complex versioning, nor does it support channels
(think separate repositories for the same library from which you can get
only daily/milestones/stable-releases). I didn't want to let PyPI
limitations affect the versioning that makes the most sense for the
(server-side) core projects... so I ignored it. Turns out you can't
ignore PyPI, which is why Monty came up with this solution to our
problem: use a separate release scheme for client libraries, since they
*are* separate projects anyway.

 People kept on asking for new PyPI releases to access recent features.
 Sometimes it's just that they need an early version of the client
 somewhere to start integrating it before the milestone hits
 (cinderclient, swiftclient). A release every 6 months is not enough.
 
 Well, we do releases more often than that:
 
  - milestone releases provide early access to features currently under 
development
 
  - stable releases provide a safe source of bugfixes to previous 
releases

But those are not released in the same channels. We have the
per-commit channel, released several times a day. We have the milestone
channel, every ~2 months... and the release channel, every 6 months.
PyPI only supports one channel. Which one should it be ?

 If the issue is integration *within* OpenStack - e.g. Nova needing
 cinderclient - then I don't think that should dictate the release
 schedule for these libs. We can surely figure out how to do that without
 doing more regular official releases of the libs? But first, is that
 what we're talking about?

I think so. My understanding is that it's a lot more elegant to rely on
PyPI for our own libraries, rather than special-casing them for our
developers and our CI.

 I think we'd like to be able to publish milestone releases of the client
 libs to pypi, but not have pip install install milestones unless they
 are explicitly requested? Right?
 
 pypi probably doesn't support that, but do we want to allow pypi's
 limitations dictate our schedule either?

Yes, PyPI doesn't support that. Don't get me wrong, I'd love for PyPI
to suddenly become smart (or for all developers to use Linux distros with
sane packaging), but as long as it's the de-facto standard for Python
developers to get Python libraries we have to try our best to use it. In
the precise case of client libraries, accepting PyPI limitations sounds
like a small price to pay to simplify and enable generic access to our
libraries using a well-established system.

 [...]
 Ok, so this *is* all about pypi? I'm starting to understand now :)

It's been my nightmare for more than a year.

 [...]
 The way I'm seeing it now is there are three development streams:
 
   1) bugfixes (i.e. suitable for a stable branch)
 
   2) new client lib features built on top of existing REST API features 
  (e.g. feature additions to the client libs which were released as 
  part of Essex which do not require REST API features from Folsom)
 
   3) new client lib features built on top of new REST API features 
  (e.g. the Folsom development stream)
 
 IMHO, if you're J. Random Developer doing 'pip install glanceclient',
 you probably want (2).
 
 Particularly risk-averse distros may want (1), since (2) is more risky
 to pull updates from.
 
 And for integration within the project, or folks testing the latest
 development stream, you want (3).
 
 The project probably can't hope to support all 3 development streams,
 but can we do (1) and (2) in a stable branch and (3) using our usual
 development release cycle?
 
 Would that satisfy everyone but the particularly risk-averse distros?

That forces us to handle our client library releases outside of PyPI
though. My understanding is that not using PyPI for our client libraries
is not an option. We tried that before, and people kept on publishing
random unofficial versions there, and it failed. PyPI
is there to stay and can't be ignored. So we have to accept that it
supports only basic versioning and that it is single-channel. Monty's
proposal, I think, makes the best of our situation.

-- 

Re: [Openstack] Thoughts on client library releasing

2012-06-19 Thread Mark McLoughlin
On Tue, 2012-06-19 at 16:48 +0200, Thierry Carrez wrote:
 Mark McLoughlin wrote:
[..]
  However they also inherited the release scheme of the server project
  (new version every 6 months), which was (or was not) synced to PyPI
  depending on who owned the PyPI project. More confusion, as PyPI
  rarely contained the just-released version, since releasing to PyPI was
  not handled by OpenStack release management.
  
  I would have thought that only OpenStack release management should be
  publishing those libs to pypi now.
 
 PyPI doesn't support complex versioning, nor does it support channels
 (think separate repositories for the same library from which you can get
 only daily/milestones/stable-releases).

Nice way of putting it - channels. I think that helps clarify the
discussion hugely :)

(I guess you could have python-glanceclient-dev as a separate pypi
package to give a similar effect, but that would probably be evil)

[..]

 PyPI only supports one channel. Which one should it be ?

IMHO, PyPI should be a channel where you get:

  - bug fixes

  - support for features which exist in the currently released version
of OpenStack

  - every six months, support for new features in the just-released 
version of OpenStack

i.e. we push 2012.1 to pypi when Essex is released and we do 2012.1.x
releases as needed to pypi from our stable branch.

Where the definition of stable is slightly different - it includes new
features which expose existing REST API features of the associated
release.

i.e. a channel containing (1) and (2) in the split I made.
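
(Just as a sanity check on that numbering - a throwaway snippet, not anything
we'd ship - setuptools' version parsing does order stable point releases above
the coordinated release, so a plain pip install would keep picking up 2012.1.x
bugfix releases from that channel:)

from pkg_resources import parse_version

# Stable point releases (2012.1.x) sort above the coordinated release.
assert parse_version('2012.1.1') > parse_version('2012.1')
assert parse_version('2012.1.2') > parse_version('2012.1.1')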

  If the issue is integration *within* OpenStack - e.g. Nova needing
  cinderclient - then I don't think that should dictate the release
  schedule for these libs. We can surely figure out how to do that without
  doing more regular official releases of the libs? But first, is that
  what we're talking about?
 
 I think so. My understanding is that it's a lot more elegant to rely on
 PyPI for our own libraries, rather than special-casing them for our
 developers and our CI.

For integration in the project, we need the development channel and I
don't think we want to pollute pypi with that.

(Again the point about the REST API potentially changing during
development)

So, I don't think I buy this bit and think it's worth digging into the
details.

[..]
  [...]
  The way I'm seeing it now is there are three development streams:
  
1) bugfixes (i.e. suitable for a stable branch)
  
2) new client lib features built on top of existing REST API features 
   (e.g. feature additions to the client libs which were released as 
   part of Essex which do not require REST API features from Folsom)
  
3) new client lib features built on top of new REST API features 
   (e.g. the Folsom development stream)
  
  IMHO, if you're J. Random Developer doing 'pip install glanceclient',
  you probably want (2).
  
  Particularly risk-averse distros may want (1), since (2) is more risky
  to pull updates from.
  
  And for integration within the project, or folks testing the latest
  development stream, you want (3).
  
  The project probably can't hope to support all 3 development streams,
  but can we do (1) and (2) in a stable branch and (3) using our usual
  development release cycle?
  
  Would that satisfy everyone but the particularly risk-averse distros?
 
 That forces us to handle our client library releases outside of PyPI
 though.

It forces us to handle our development stream outside of pypi, yes.

Cheers,
Mark.




[Openstack] Swift Probetests

2012-06-19 Thread Maru Newby
The swift probetests are broken:

https://bugs.launchpad.net/swift/+bug/1014931

Does the swift team intend to maintain probetests going forward?  Given how 
broken they are at present (bad imports, failures even when imports are fixed), 
it would appear that probetests are not gating commits.  That should probably 
change if the tests are to be maintainable.

Thanks,


Maru


Re: [Openstack] Swift Probetests

2012-06-19 Thread Jay Pipes

On 06/19/2012 11:10 AM, Maru Newby wrote:

The swift probetests are broken:

https://bugs.launchpad.net/swift/+bug/1014931

Does the swift team intend to maintain probetests going forward?  Given how 
broken they are at present (bad imports, failures even when imports are fixed), 
it would appear that probetests are not gating commits.  That should probably 
change if the tests are to be maintainable.


Hi Maru, cc'ing Jose from the Swift QA team at Rackspace...

I don't know what the status is on these probetests or whether they are 
being maintained. Jose or John, any ideas? If they are useful, we could 
bring them into the module initialization of the Tempest Swift tests.


Best,
jay



[Openstack] Common openstack client library

2012-06-19 Thread Alexey Ababilov
Hi!

Unfortunately, nova, keystone, and glance clients are very inconsistent. A
lot of code is copied between all these clients instead of moving it to a
common library. The code was edited without synchronization between
clients, so they have different behaviour:

   - all client constructors use different parameters (api_key in nova or
   password in keystone and so on);
   - keystoneclient authenticates immediately in __init__, while novaclient
   does it lazily during the first method call;
   - {keystone,nova}client can manage service catalogs and accept
   keystone's auth URI while glanceclient allows endpoints only;
   - keystoneclient can support authorization with an unscoped token but
   novaclient doesn't;
   - novaclient uses class composition while keystoneclient uses
   inheritance.

I have developed a library to unify current clients. The library can be
used as-is, but it would be better if openstack clients dropped their
common code (base.py, exceptions.py and so on) and just began to import
common code.

Here is an example of using unified clients.

from openstackclient_base import patch_clients
from openstackclient_base.client import HttpClient
http_client = HttpClient(username=..., password=...,
                         tenant_name=..., auth_uri=...)

from openstackclient_base.nova.client import ComputeClient
print ComputeClient(http_client).servers.list()

from openstackclient_base.keystone.client import IdentityPublicClient
print IdentityPublicClient(http_client).tenants.list()


Re: [Openstack] Common openstack client library

2012-06-19 Thread Joseph Heck
Alexey - 

where's the library of common code that you've put together? Is it committed to
openstack-common, or somewhere else?

-joe

On Jun 19, 2012, at 9:43 AM, Alexey Ababilov wrote:
 Unfortunately, nova, keystone, and glance clients are very inconsistent. A 
 lot of code is copied between all these clients instead of moving it to a 
 common library. The code was edited without synchronization between clients, 
 so, they have different behaviour:
 
 all client constructors use different parameters (api_key in nova or password 
 in keystone and so on);
 keystoneclient authenticates immediately in __init__, while novaclient does 
 it lazily during the first method call;
 {keystone,nova}client can manage service catalogs and accept keystone's auth 
 URI while glanceclient allows endpoints only;
 keystoneclient can support authorization with an unscoped token but 
 novaclient doesn't;
 novaclient uses class composition while keystoneclient uses inheritance.
 I have developed a library to unify current clients. The library can be used 
 as-is, but it would be better if openstack clients dropped their common code 
 (base.py, exceptions.py and so on) and just began to import common code.
 
 Here is an example of using unified clients.
 from openstackclient_base import patch_clients
 from openstackclient_base.client import HttpClient
 http_client = HttpClient(username=..., password=..., tenant_name=..., 
 auth_uri=...)
 
 from openstackclient_base.nova.client import ComputeClient
 print ComputeClient(http_client).servers.list()
 
 from openstackclient_base.keystone.client import IdentityPublicClient
 print IdentityPublicClient(http_client).tenants.list()



Re: [Openstack] Common openstack client library

2012-06-19 Thread Jay Pipes

Did you see:

https://github.com/openstack/python-openstackclient

Also, keep in mind that some of the ways the existing Glance client (and
the Swift client, for that matter) work are due to the lack of support in
httplib2 for chunked transfer encoding.


Best,
-jay

On 06/19/2012 12:43 PM, Alexey Ababilov wrote:

Hi!

Unfortunately, nova, keystone, and glance clients are very inconsistent.
A lot of code is copied between all these clients instead of moving it
to a common library. The code was edited without synchronization between
clients, so, they have different behaviour:

  * all client constructors use different parameters (api_key in nova or
password in keystone and so on);
  * keystoneclient authenticates immediately in __init__, while
novaclient does it lazily during the first method call;
  * {keystone,nova}client can manage service catalogs and accept
keystone's auth URI while glanceclient allows endpoints only;
  * keystoneclient can support authorization with an unscoped token but
novaclient doesn't;
  * novaclient uses class composition while keystoneclient uses inheritance.

I have developed a library to unify current clients. The library can be
used as-is, but it would be better if openstack clients dropped their
common code (base.py, exceptions.py and so on) and just began to import
common code.

Here is an example of using unified clients.

from openstackclient_base import patch_clients
from openstackclient_base.client import HttpClient
http_client = HttpClient(username=..., password=..., tenant_name=..., 
auth_uri=...)

from openstackclient_base.nova.client import ComputeClient
print ComputeClient(http_client).servers.list()

from openstackclient_base.keystone.client import IdentityPublicClient
print IdentityPublicClient(http_client).tenants.list()







Re: [Openstack] Timeout during image build (task Networking)

2012-06-19 Thread Jay Pipes

Hi Ross,

I'm in the process of diagnosing this; I'm seeing it sporadically when
running Tempest against a devstack install. I'll try to pinpoint the 
issue later today and post back my findings.


Best,
-jay

p.s. Sorry for top-posting.

On 06/18/2012 06:03 PM, Lillie Ross-CDSR11 wrote:

I'm receiving RPC timeouts when trying to launch an instance. My
installation is the Essex release running on Ubuntu 12.04.

When I launch a test image, the launch fails. In my setup, Nova network
runs on a controller node, and all compute instances run on separate,
dedicated server nodes. The failure is repeatable. Upon examining the
various logs, I see the following (see below). Any insight would be welcome.

Regards,
Ross

 From 'nova show instance name' I read the following:

root@cirrus1:~# nova show test
+-+-+
| Property | Value |
+-+-+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-SRV-ATTR:host | nova8 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | instance-0005 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | networking |
| OS-EXT-STS:vm_state | error |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2012-06-18T20:42:56Z |
| fault | {u'message': u'Timeout', u'code': 500, u'created':
u'2012-06-18T20:43:58Z'} |
| flavor | m1.tiny |
| hostId | 50272989300483e2b5e5236cd572fef3f9149ae60faa5f5660f8da54 |
| id | d569b16f-10a8-4cb8-90a3-d5b664c2322d |
| image | tty-linux |
| key_name | admin |
| metadata | {} |
| name | test |
| private_0 network | |
| status | ERROR |
| tenant_id | 1 |
| updated | 2012-06-18T20:43:57Z |
| user_id | 1 |
+-+-+

 From the nova-network.log I see the following:

2012-06-18 15:43:36 DEBUG nova.manager [-] Running periodic task
VlanManager._disassociate_stale_fixed_ips from (pid=1381) periodic_tasks
/usr/lib/python2.7/dist-packages
/nova/manager.py:152
2012-06-18 15:43:57 ERROR nova.rpc.common [-] Timed out waiting for RPC
response: timed out
2012-06-18 15:43:57 TRACE nova.rpc.common Traceback (most recent call last):
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py, line 490, in
ensure
2012-06-18 15:43:57 TRACE nova.rpc.common return method(*args, **kwargs)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py, line 567, in
_consume
2012-06-18 15:43:57 TRACE nova.rpc.common return
self.connection.drain_events(timeout=timeout)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/connection.py, line 175, in
drain_events
2012-06-18 15:43:57 TRACE nova.rpc.common return
self.transport.drain_events(self.connection, **kwargs)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
238, in drain_events
2012-06-18 15:43:57 TRACE nova.rpc.common return
connection.drain_events(**kwargs)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
57, in drain_events
2012-06-18 15:43:57 TRACE nova.rpc.common return
self.wait_multi(self.channels.values(), timeout=timeout)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
63, in wait_multi
2012-06-18 15:43:57 TRACE nova.rpc.common chanmap.keys(),
allowed_methods, timeout=timeout)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
120, in _wait_multiple
2012-06-18 15:43:57 TRACE nova.rpc.common channel, method_sig, args,
content = read_timeout(timeout)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
94, in read_timeout
2012-06-18 15:43:57 TRACE nova.rpc.common return
self.method_reader.read_method()
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/amqplib/client_0_8/method_framing.py,
line 221, in read_method
2012-06-18 15:43:57 TRACE nova.rpc.common raise m
2012-06-18 15:43:57 TRACE nova.rpc.common timeout: timed out
2012-06-18 15:43:57 TRACE nova.rpc.common
2012-06-18 15:43:58 DEBUG nova.utils
[req-16158a6b-f3d6-49f3-977e-3ccfecc791ca 1 1] Attempting to grab
semaphore get_dhcp for method _get_dhcp_ip... from (pid=1381) i
nner /usr/lib/python2.7/dist-packages/nova/utils.py:927
2012-06-18 15:43:58 DEBUG nova.utils
[req-16158a6b-f3d6-49f3-977e-3ccfecc791ca 1 1] Got semaphore get_dhcp
for method _get_dhcp_ip... from (pid=1381) inner /usr/lib/p
ython2.7/dist-packages/nova/utils.py:931
2012-06-18 15:43:58 DEBUG nova.utils
[req-16158a6b-f3d6-49f3-977e-3ccfecc791ca 1 1] Attempting to grab
semaphore 

Re: [Openstack] Common openstack client library

2012-06-19 Thread Monty Taylor
Hi!

On 06/19/2012 09:43 AM, Alexey Ababilov wrote:
 Hi!
 
 Unfortunately, nova, keystone, and glance clients are very inconsistent.
 A lot of code is copied between all these clients instead of moving it
 to a common library. The code was edited without synchronization between
 clients, so, they have different behaviour:
 
   * all client constructors use different parameters (api_key in nova or
 password in keystone and so on);
   * keystoneclient authenticates immediately in __init__, while
 novaclient does it lazily during the first method call;
   * {keystone,nova}client can manage service catalogs and accept
 keystone's auth URI while glanceclient allows endpoints only;
   * keystoneclient can support authorization with an unscoped token but
 novaclient doesn't;
   * novaclient uses class composition while keystoneclient uses inheritance.
 
 I have developed a library to unify current clients. The library can be
 used as-is, but it would be better if openstack clients dropped their
 common code (base.py, exceptions.py and so on) and just began to import
 common code.

There are two projects already in the works focused on various aspects of
this. openstack-common is the place that we put code that should be
shared between the clients. python-openstackclient is a project that
aims at a single consistent interface.

I'm thrilled that you have done some work in this area, but it would be
great if you could do this in the context of the two fairly official
projects that already exist.

Thanks!
Monty


 Here is an example of using unified clients.
 
 from openstackclient_base import patch_clients
 from openstackclient_base.client import HttpClient
 http_client = HttpClient(username=..., password=..., tenant_name=..., 
 auth_uri=...)
 
 from openstackclient_base.nova.client import ComputeClient
 print ComputeClient(http_client).servers.list()
 
 from openstackclient_base.keystone.client import IdentityPublicClient
 print IdentityPublicClient(http_client).tenants.list()
 
 
 



Re: [Openstack] [OpenStack Foundation] is there any IDS projects on openstack?

2012-06-19 Thread Stefano Maffulli
Wrong list, rerouting the message to the appropriate one.

Please join the team and subscribe to the list on
 https://launchpad.net/~openstack

/stef

On 06/19/2012 07:30 AM, badis hammi wrote:
 Hi friends,
 I'm new to this mailing list, I'm just beginning with openstack and I wish to
 know whether there are any Intrusion Detection System projects on openstack.
 thank you
 
 



Re: [Openstack] [metering] Glance usage data retrieval

2012-06-19 Thread Jay Pipes

Hi Julien and Stuart! Comments inline...

On 06/19/2012 07:12 AM, stuart.mcla...@hp.com wrote:

Brian, Jay,

I'll give you a chance to reply to Julien first, but
I have a follow on query...

It doesn't seem like right now Glance produces enough records for full
metering of operations. Eg if you want to charge a specific user for
every successful image upload/image download this information doesn't
'fall out' of the current logs. For example the eventlet output such as:

Jun 19 09:18:39 az1-gl-api-0001 DEBUG 3795 [eventlet.wsgi.server]
127.0.0.1 - - [19/Jun/2012 09:18:39] GET
/v1/images/4921 HTTP/1.1 200 104858310 2.098967

doesn't tie the GET operation to a particular user.
As Julien mentions there is an image_upload notification, but I don't
see an equivalent image_download notification.


Yeah, there is one :) It's called image.send:

https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L892


(Note I'm using old Diablo code at the moment, so I'm going on
code inspection of the latest code -- apologies if I missed anything.)

Is the creation of records of operations in a well-defined format which
can be consumed by eg a Metering and Billing team something we'd
like to have? (I'm ignoring Data at Rest metering for now.)


Not sure about this one, but I believe the message format is the same as 
Nova:


https://github.com/openstack/glance/blob/master/glance/notifier/__init__.py#L55

With the exception of the request ID. The payload is structured here for 
downloads:


https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L882
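
(As an aside, a toy sketch of how a metering consumer might reduce a stream of
such notifications; the payload keys bytes_sent and receiver_tenant_id used
here are my guesses from the code linked above, not a documented contract.)

from collections import defaultdict

def aggregate_downloads(notifications):
    # Sum downloaded bytes per receiving tenant from image.send-style
    # notification dicts (event_type + payload), as a billing input.
    usage = defaultdict(int)
    for msg in notifications:
        if msg.get('event_type') != 'image.send':
            continue
        payload = msg.get('payload', {})
        usage[payload.get('receiver_tenant_id')] += payload.get('bytes_sent', 0)
    return dict(usage)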


If so, would something along the following lines be worth considering?

A wsgi filter, using a mechanism similar to Swift's posthooklogger, which
has a routine which is called when operations are about to complete. It
should have access to the context (http return status, user, operation
etc) so would be able to raise a notification if appropriate. Using a
standard notifier would mean output could be to a log file or, perhaps
in time, the notifications could be consumed by ceilometer.


LOL, that's actually how it currently works :)

https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L917

All the best,
-jay


-Stuart

On Tue, 19 Jun 2012, Julien Danjou wrote:


Hello,

As part of the ceilometer project¹, we're working on usage data
retrieval from various OpenStack components. One of them is Glance.
We're targeting Folsom for the first release, therefore it seems
important for both projects to be able to work together, this is why
we're bringing ceilometer to your attention and asking for advice. :)

What we want is to retrieve the maximum amount of data, so we can meter
things, to bill them in the end. For now and for Glance, this only
includes the number of images uploaded (per user/tenant), but we'll
probably need more in the near future.

At this point we plan to plug into the notification system, since it
seems to be what Glance provides to accomplish this. And so far, the
notifications provided seem enough to accomplish what we want to do.

Do you have any advice regarding integration of Ceilometer and Glance
together? Is this a stable interface we can rely on, or is
there a better way to do it?

Thanks in advance,

Regards,

¹ http://launchpad.net/ceilometer

--
Julien Danjou
// eNovance http://enovance.com
// ✉ julien.dan...@enovance.com ☎ +33 1 49 70 99 81





Re: [Openstack] Instance termination is not stable

2012-06-19 Thread Sajith Kariyawasam
Any clue on this guys?

On Mon, Jun 18, 2012 at 7:08 PM, Sajith Kariyawasam saj...@gmail.comwrote:

 Hi all,

 I have Openstack Essex version installed and I have created several
 instances based on an Ubuntu-12.04 UEC image in Openstack and those are up
 and running.

 When I try to terminate an instance I get an exception (log below), and
 in the console its status is shown as Shutoff and the
 task as Deleting. Even though I tried terminating the instance again and
 again, nothing happens. But after I restart the (nova) machine, those instances
 can be terminated.

 This issue does not occur every time, but occasionally; as I noted, it
 occurs when there are more than 2 instances up and running at the same
 time. If I create one instance, terminate it, create another one,
 terminate that one, and so on, there won't be an issue in
 terminating.

 What could be the problem here? Any suggestions are highly appreciated.

 Thanks


 *ERROR LOG ( /var/log/nova/nova-compute.log )
 ==*

 2012-06-18 18:43:55 DEBUG nova.manager [-] Skipping
 ComputeManager._run_image_cache_manager_pass, 17 ticks left until next run
 from (pid=24151) periodic_tasks
 /usr/lib/python2.7/dist-packages/nova/manager.py:147
 2012-06-18 18:43:55 DEBUG nova.manager [-] Running periodic task
 ComputeManager._reclaim_queued_deletes from (pid=24151) periodic_tasks
 /usr/lib/python2.7/dist-packages/nova/manager.py:152
 2012-06-18 18:43:55 DEBUG nova.compute.manager [-]
 FLAGS.reclaim_instance_interval = 0, skipping... from (pid=24151)
 _reclaim_queued_deletes
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2380
 2012-06-18 18:43:55 DEBUG nova.manager [-] Running periodic task
 ComputeManager._report_driver_status from (pid=24151) periodic_tasks
 /usr/lib/python2.7/dist-packages/nova/manager.py:152
 2012-06-18 18:43:55 INFO nova.compute.manager [-] Updating host status
 2012-06-18 18:43:55 DEBUG nova.virt.libvirt.connection [-] Updating host
 stats from (pid=24151) update_status
 /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:2467
 2012-06-18 18:43:55 DEBUG nova.manager [-] Running periodic task
 ComputeManager._poll_unconfirmed_resizes from (pid=24151) periodic_tasks
 /usr/lib/python2.7/dist-packages/nova/manager.py:152
 2012-06-18 18:44:17 DEBUG nova.rpc.amqp [-] received {u'_context_roles':
 [u'swiftoperator', u'Member', u'admin'], u'_context_request_id':
 u'req-01ca70c8-2240-407b-92d1-5a59ee497291', u'_context_read_deleted':
 u'no', u'args': {u'instance_uuid':
 u'd250-1d8b-4973-8320-e6058a2058b9'}, u'_context_auth_token':
 'SANITIZED', u'_context_is_admin': True, u'_context_project_id':
 u'194d6e24ec1843fb8fbd94c3fb519deb', u'_context_timestamp':
 u'2012-06-18T13:14:17.013212', u'_context_user_id':
 u'f8a75778c36241479693ff61a754f67b', u'method': u'terminate_instance',
 u'_context_remote_address': u'172.16.0.254'} from (pid=24151) _safe_log
 /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
 2012-06-18 18:44:17 DEBUG nova.rpc.amqp
 [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
 194d6e24ec1843fb8fbd94c3fb519deb] unpacked context: {'user_id':
 u'f8a75778c36241479693ff61a754f67b', 'roles': [u'swiftoperator', u'Member',
 u'admin'], 'timestamp': '2012-06-18T13:14:17.013212', 'auth_token':
 'SANITIZED', 'remote_address': u'172.16.0.254', 'is_admin': True,
 'request_id': u'req-01ca70c8-2240-407b-92d1-5a59ee497291', 'project_id':
 u'194d6e24ec1843fb8fbd94c3fb519deb', 'read_deleted': u'no'} from
 (pid=24151) _safe_log
 /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
 2012-06-18 18:44:17 INFO nova.compute.manager
 [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: decorating:
 |function terminate_instance at 0x2bd3050|
 2012-06-18 18:44:17 INFO nova.compute.manager
 [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: arguments:
 |nova.compute.manager.ComputeManager object at 0x20ffb90|
 |nova.rpc.amqp.RpcContext object at 0x4d2a450|
 |d250-1d8b-4973-8320-e6058a2058b9|
 2012-06-18 18:44:17 DEBUG nova.compute.manager
 [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
 194d6e24ec1843fb8fbd94c3fb519deb] instance
 d250-1d8b-4973-8320-e6058a2058b9: getting locked state from (pid=24151)
 get_lock /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1597
 2012-06-18 18:44:17 INFO nova.compute.manager
 [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: locked: |False|
 2012-06-18 18:44:17 INFO nova.compute.manager
 [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: admin: |True|
 2012-06-18 18:44:17 INFO nova.compute.manager
 [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
 

[Openstack] Multiple nova-compute hosts, single-host nova-network, and a guest unreachable via its floating IP

2012-06-19 Thread Florian Haas
Hi everyone,

perhaps someone can shed some light on a floating IP issue.

I have 2 nova-compute nodes (call them alice and bob), one of them
(alice) is also running nova-network. bob uses alice as its
--metadata_host and --network_host.

I assign a floating IP to a guest running on bob. Expectedly, that IP
is bound to the NIC specified as the --public_interface on alice (my
nova-network host).

However, since alice has a route into the --fixed_range network over
its local bridge, the incoming traffic for the floating IP is routed
there, where there's no guest to answer it -- because the guest is,
after all, running on bob.

Now, this seems fairly logical to me in the combination of

1. a nova-network host also running nova-compute;
2. other nova-compute hosts being around;
3. those other nova-compute hosts _not_ also running nova-network (and
hence there being no multi-host networking).

If my reasoning is correct, is it safe to say that in order to be able
to use floating IPs in an environment with multiple nova-compute
hosts, you must

1. Either have a single nova-network host that is _not_ also running
nova-compute (but has a network connection to the --fixed_range
network, of course);
2. or run nova-network on all your nova-compute hosts, which however
requires that you enable multi-host mode and also run nova-api there?

Any help is much appreciated. Thanks!

Cheers,
Florian

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Test tool

2012-06-19 Thread Matt Joyce
https://github.com/cloudscaling/tarkin

This is a very minimal test set that supports pinging vms after launching
them.

Nothing crazy.

-Matt

On Mon, Jun 18, 2012 at 6:37 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 06/14/2012 05:26 AM, Neelakantam Gaddam wrote:

 Hi All,

 Recently I came across the tool called Tempest to perform the integration
 tests on a live cluster running openstack.

 Can we use this tool to test Quantum networks also?


 Yes, though support is very new :)

 If you run Tempest (nosetests -sv --nologcapture tempest), the network
 tests will be run by default if and only if there is a network endpoint set
 up.


  Are there any tools which do the end-to-end testing of openstack
 components (including Quantum), like creating multiple networks,
 launching VMs, adding compute nodes, pinging VMs, etc.?


 That would be tempest :) Mostly. Tempest doesn't yet test things like
 bringing up bare-metal compute nodes, but perhaps in the future it will.

 All the best,
 -jay

  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Need help and guidance of Openstack Swift

2012-06-19 Thread Yogesh Bansal
Hello Friends,
 
I have just started using Openstack. I am trying to make a web based
application for Openstack swift. I am using Django and python for my
application. Could you please provide a few examples of how to send
commands to the swift server using python? I am able to get the auth token
from keystone and the public url for the swift storage. But when I try to
use that swift public URL, the response says '403 Forbidden\n\nAccess was
denied to this resource.\n\n' and Django stops, giving this error message:
raise ValueError(errmsg(Extra data, s, end, len(s)))
ValueError: Extra data: line 1 column 4 - line 5 column 4 (char 4 - 55)

I am able to run the curl commands which are provided in the Openstack
documentation and they run fine.  I just need help using python.
I am learning python and Django as well, so I have only a little
understanding of this.
 
Could you please help and guide me on this. 
 
Thanks
Yogesh
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Need help and guidance of Openstack Swift

2012-06-19 Thread Jay Pipes

Some links to show you:

https://github.com/openstack/horizon/blob/7b565fc9839833b4b99b1fc3b02269b79071af3e/horizon/api/swift.py

https://github.com/openstack/horizon/blob/f6f2a91e14f6bdd4e1a87e31a1d6923127afee1b/horizon/dashboards/nova/containers/views.py
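
Since you say you already have a token and the storage URL from keystone,
here is also a minimal sketch of talking to swift directly with just the
standard library, to show the headers involved (the names are illustrative):

import httplib
import json
import urlparse

def list_containers(storage_url, auth_token):
    # storage_url is the object-store publicURL from the keystone catalog
    parsed = urlparse.urlparse(storage_url)
    if parsed.scheme == 'https':
        conn = httplib.HTTPSConnection(parsed.netloc)
    else:
        conn = httplib.HTTPConnection(parsed.netloc)
    conn.request('GET', parsed.path + '?format=json',
                 headers={'X-Auth-Token': auth_token})
    resp = conn.getresponse()
    body = resp.read()
    conn.close()
    if resp.status == 403:
        # often a token scoped to the wrong tenant, or the user's roles not
        # matching the proxy's operator_roles setting
        raise Exception('403 Forbidden: %s' % body)
    return json.loads(body)  # list of {'name': ..., 'count': ..., 'bytes': ...}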

Best,
-jay

On 06/19/2012 02:04 PM, Yogesh Bansal wrote:

Hello Friends,

I have just started using Openstack. I am trying to make a web based
application for Openstack swift. I am using the Django and python for my
application. Could you please provide me few examples how to send the
command to swift server using python. I am able to get the auth token
from keystone and the public url for the swift storage. But when I tried
to use that swift public URL, in response it say '403
Forbidden\n\nAccess was denied to this resource.\n\n' and Django stops
by giving this error message

raise ValueError(errmsg(Extra data, s, end, len(s)))

ValueError: Extra data: line 1 column 4 - line 5 column 4 (char 4 - 55)

I am able to run the curl commands with are provided in the Openstack
documentation and its running fine. I just need help using python.

I am learning python and Django also. I have little understanding on this.

Could you please help and guide me on this.

Thanks

Yogesh



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Interested in implementing swift ring builder server

2012-06-19 Thread Florian Hines
Hi Mark, 

I forgot to grab the blueprint but I finally got around to start working on 
this. I'll hopefully have something to put up for review next week.

This version just supports basic GET/POST ops to retrieve the rings or to 
view/alter/add devices to the ring in bulk.
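
Roughly this shape, if it helps picture it (a sketch only; an in-memory list
stands in for the real RingBuilder, and the fields and port are illustrative
rather than what will go up for review):

import json
from wsgiref.simple_server import make_server

DEVICES = []   # stand-in for builder.devs in the real thing

def ring_builder_app(environ, start_response):
    method = environ['REQUEST_METHOD']
    if method == 'GET':
        # return the current device list as JSON
        start_response('200 OK', [('Content-Type', 'application/json')])
        return [json.dumps(DEVICES)]
    if method == 'POST':
        # bulk add: body is a JSON list of device dicts
        length = int(environ.get('CONTENT_LENGTH') or 0)
        devs = json.loads(environ['wsgi.input'].read(length))
        DEVICES.extend(devs)
        start_response('202 Accepted', [('Content-Type', 'application/json')])
        return [json.dumps({'added': len(devs)})]
    start_response('405 Method Not Allowed', [('Content-Type', 'text/plain')])
    return ['']

if __name__ == '__main__':
    make_server('0.0.0.0', 8090, ring_builder_app).serve_forever()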

Having someone else adding features as well would be awesome! 

-- 
Florian Hines | @pandemicsyn
http://about.me/pandemicsyn


On Monday, June 18, 2012 at 2:23 PM, Mark Gius wrote:

 Hello Swifters,
 
 I've got some interns working with me this summer and I had a notion that 
 they might take a stab at the swift ring builder server blueprint that's been 
 sitting around for a while 
 (https://blueprints.launchpad.net/swift/+spec/ring-builder-server).  As a 
 first step I figured that the ring-builder-server would be purely an 
 alternative for the swift-ring-builder CLI, with a future iteration adding 
 support for deploying the rings to all servers in the cluster.  I'm currently 
 planning on making the ring-builder server be written and deployed like the 
 account/container/etc servers, although I imagine the implementation will be 
 a lot simpler. 
 
 Is anybody else already working on this and forgot to update the blueprint?  
 If not can I get the blueprint assigned to me on launchpad?  Username 
 'markgius'.  Or if there's some other process I need to go through please let 
 me know. 
 
 Mark 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Multiple nova-compute hosts, single-host nova-network, and a guest unreachable via its floating IP

2012-06-19 Thread Sébastien Han
Hi Florian,

For my own setup, I wanted to achieve a highly-available network and avoid
every running instance losing its gateway if nova-network falls down. I
couldn't afford 2 dedicated nodes to put nova-network itself in a highly
available state. Now if I lose nova-network on a compute node, all the
instances running on that compute node will lose their gateway, but this
scenario is better than losing all my VMs. The multi_host option was the
best choice and I think it's applicable to every setup. So, precisely, every
compute node hosts these services:

   - nova-compute
   - nova-network - avoid networking SPOF
   - nova-api-metadata - you don't need the entire nova-api service. Each
   new instance only needs to reach the metadata. Running this from the
   compute node can also improve performance with the cloud-init service.

Of course this setup works with the multi_host parameter enabled.

My 2 cts contribution ;)

On Tue, Jun 19, 2012 at 7:52 PM, Florian Haas flor...@hastexo.com wrote:
 Hi everyone,

 perhaps someone can shed some light on a floating IP issue.

 I have 2 nova-compute nodes (call them alice and bob), one of them
 (alice) is also running nova-network. bob uses alice as its
 --metadata_host and --network_host.

 I assign a floating IP to a guest running on bob. Expectedly, that IP
 is bound to the NIC specified as the --public_interface on alice (my
 nova-network host).

 However, since alice has a route into the --fixed_range network over
 its local bridge, the incoming traffic for the floating IP is routed
 there, where there's no guest to answer it -- because the guest is,
 after all, running on bob.

 Now, this seems fairly logical to me in the combination of

 1. a nova-network host also running nova-compute;
 2. other nova-compute hosts being around;
 3. those other nova-compute hosts _not_ also running nova-network (and
 hence there being no multi-host networking).

 If my reasoning is correct, is it safe to say that in order to be able
 to use floating IPs in an environment with multiple nova-compute
 hosts, you must

 1. Either have a single nova-network host that is _not_ also running
 nova-compute (but has a network connection to the --fixed_range
 network, of course);
 2. or run nova-network on all your nova-compute hosts, which however
 requires that you enable multi-host mode and also run nova-api there?

 Any help is much appreciated. Thanks!

 Cheers,
 Florian

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Timeout during image build (task Networking)

2012-06-19 Thread Jay Pipes
cc'ing Vish on this, as this is now occurring on every single devstack + 
Tempest run, for multiple servers.


Vish, I am seeing the exact same issue as shown below. Instances end up 
in ERROR state and looking into the nova-network log, I find *no* errors 
at all, and yet looking at the nova-compute log, I see multiple timeout 
errors -- all of them trying to RPC while in the allocate_network 
method. Always the same method, always the same error, and no errors in 
nova-network or nova-api (other than just reporting a failed build)


Any idea on something that may have crept in recently? This wasn't 
happening a week or so ago, AFAICT.


Best,
-jay

On 06/18/2012 06:03 PM, Lillie Ross-CDSR11 wrote:

I'm receiving RPC timeouts when trying to launch an instance. My
installation is the Essex release running on Ubuntu 12.04.

When I launch a test image, the launch fails. In my setup, Nova network
runs on a controller node, and all compute instances run on separate,
dedicated server nodes. The failure is repeatable. Upon examining the
various logs, I see the following (see below). Any insight would be welcome.

Regards,
Ross

 From 'nova show instance name' I read the following:

root@cirrus1:~# nova show test
+-+-+
| Property | Value |
+-+-+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-SRV-ATTR:host | nova8 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | instance-0005 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | networking |
| OS-EXT-STS:vm_state | error |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2012-06-18T20:42:56Z |
| fault | {u'message': u'Timeout', u'code': 500, u'created':
u'2012-06-18T20:43:58Z'} |
| flavor | m1.tiny |
| hostId | 50272989300483e2b5e5236cd572fef3f9149ae60faa5f5660f8da54 |
| id | d569b16f-10a8-4cb8-90a3-d5b664c2322d |
| image | tty-linux |
| key_name | admin |
| metadata | {} |
| name | test |
| private_0 network | |
| status | ERROR |
| tenant_id | 1 |
| updated | 2012-06-18T20:43:57Z |
| user_id | 1 |
+-+-+

 From the nova-network.log I see the following:

2012-06-18 15:43:36 DEBUG nova.manager [-] Running periodic task
VlanManager._disassociate_stale_fixed_ips from (pid=1381) periodic_tasks
/usr/lib/python2.7/dist-packages
/nova/manager.py:152
2012-06-18 15:43:57 ERROR nova.rpc.common [-] Timed out waiting for RPC
response: timed out
2012-06-18 15:43:57 TRACE nova.rpc.common Traceback (most recent call last):
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py, line 490, in
ensure
2012-06-18 15:43:57 TRACE nova.rpc.common return method(*args, **kwargs)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py, line 567, in
_consume
2012-06-18 15:43:57 TRACE nova.rpc.common return
self.connection.drain_events(timeout=timeout)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/connection.py, line 175, in
drain_events
2012-06-18 15:43:57 TRACE nova.rpc.common return
self.transport.drain_events(self.connection, **kwargs)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
238, in drain_events
2012-06-18 15:43:57 TRACE nova.rpc.common return
connection.drain_events(**kwargs)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
57, in drain_events
2012-06-18 15:43:57 TRACE nova.rpc.common return
self.wait_multi(self.channels.values(), timeout=timeout)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
63, in wait_multi
2012-06-18 15:43:57 TRACE nova.rpc.common chanmap.keys(),
allowed_methods, timeout=timeout)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
120, in _wait_multiple
2012-06-18 15:43:57 TRACE nova.rpc.common channel, method_sig, args,
content = read_timeout(timeout)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
94, in read_timeout
2012-06-18 15:43:57 TRACE nova.rpc.common return
self.method_reader.read_method()
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/amqplib/client_0_8/method_framing.py,
line 221, in read_method
2012-06-18 15:43:57 TRACE nova.rpc.common raise m
2012-06-18 15:43:57 TRACE nova.rpc.common timeout: timed out
2012-06-18 15:43:57 TRACE nova.rpc.common
2012-06-18 15:43:58 DEBUG nova.utils
[req-16158a6b-f3d6-49f3-977e-3ccfecc791ca 1 1] Attempting to grab
semaphore get_dhcp for 

Re: [Openstack] [Swift] Interested in implementing swift ring builder server

2012-06-19 Thread Mark Gius
Alright.  Is this something you're more or less done with and are just
cleaning up or have you just started?  I've got my guys reading up on swift
and are about ready to dive in with the coding, so if you're more or less
done, I'll find another project for them to work on.  If you've just started
(or even better, not yet started), would you mind if I stole this from you?

Mark

On Tue, Jun 19, 2012 at 11:59 AM, Florian Hines florian.hi...@gmail.comwrote:

  Hi Mark,

 I forgot to grab the blueprint but I finally got around to start working
 on this. I'll hopefully have something to put up for review next week.

 This version just support's basic GET/POST ops to retrieve the rings or to
 view/alter/add devices to the ring in bulk.

 Having someone else adding features as well would be awesome!

 --
 Florian Hines | @pandemicsyn
 http://about.me/pandemicsyn

 On Monday, June 18, 2012 at 2:23 PM, Mark Gius wrote:

 Hello Swifters,

 I've got some interns working with me this summer and I had a notion that
 they might take a stab at the swift ring builder server blueprint that's
 been sitting around for a while (
 https://blueprints.launchpad.net/swift/+spec/ring-builder-server).  As a
 first step I figured that the ring-builder-server would be purely an
 alternative for the swift-ring-builder CLI, with a future iteration adding
 support for deploying the rings to all servers in the cluster.  I'm
 currently planning on making the ring-builder server be written and
 deployed like the account/container/etc servers, although I imagine the
 implementation will be a lot simpler.

 Is anybody else already working on this and forgot to update the
 blueprint?  If not can I get the blueprint assigned to me on launchpad?
  Username 'markgius'.  Or if there's some other process I need to go
 through please let me know.

 Mark
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Multiple nova-compute hosts, single-host nova-network, and a guest unreachable via its floating IP

2012-06-19 Thread Vishvananda Ishaya

On Jun 19, 2012, at 10:52 AM, Florian Haas wrote:

 Hi everyone,
 
 perhaps someone can shed some light on a floating IP issue.
 
 I have 2 nova-compute nodes (call them alice and bob), one of them
 (alice) is also running nova-network. bob uses alice as its
 --metadata_host and --network_host.
 
 I assign a floating IP to a guest running on bob. Expectedly, that IP
 is bound to the NIC specified as the --public_interface on alice (my
 nova-network host).
 
 However, since alice has a route into the --fixed_range network over
 its local bridge, the incoming traffic for the floating IP is routed
 there, where there's no guest to answer it -- because the guest is,
 after all, running on bob.

the fixed range should be bridged into an actual ethernet device, which means 
bob's guest should be able to respond just fine.

I would track down where the packet is getting lost.  If the floating ip is 
coming in on eth2 on alice, and then it is forwarded to br100 on eth1, the 
packets should be going out eth1 where bob will pick them up and forward them 
to its own br100.  Then the vm should be able to respond properly.  If this is 
all working, it may be that the response packets from the guest are going back 
out the wrong interface.  They should be going back to alice's nova-network ip 
where they will be conntracked back to the floating ip.

Vish



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Timeout during image build (task Networking)

2012-06-19 Thread Vishvananda Ishaya
This seems like a likely culprit.

Vish

On Jun 19, 2012, at 12:03 PM, Jay Pipes wrote:

 cc'ing Vish on this, as this is now occurring on every single devstack + 
 Tempest run, for multiple servers.
 
 Vish, I am seeing the exact same issue as shown below. Instances end up in 
 ERROR state and looking into the nova-network log, I find *no* errors at all, 
 and yet looking at the nova-compute log, I see multiple timeout errors -- all 
 of them trying to RPC while in the allocate_network method. Always the same 
 method, always the same error, and no errors in nova-network or nova-api 
 (other than just reporting a failed build)
 
 Any idea on something that may have crept in recently? This wasn't happening 
 a week or so ago, AFAICT.
 
 Best,
 -jay
 
 On 06/18/2012 06:03 PM, Lillie Ross-CDSR11 wrote:
 I'm receiving RPC timeouts when trying to launch an instance. My
 installation is the Essex release running on Ubuntu 12.04.
 
 When I launch a test image, the launch fails. In my setup, Nova network
 runs on a controller node, and all compute instances run on separate,
 dedicated server nodes. The failure is repeatable. Upon examining the
 various logs, I see the following (see below). Any insight would be welcome.
 
 Regards,
 Ross
 
 From 'nova show instance name' I read the following:
 
 root@cirrus1:~# nova show test
 +-+-+
 | Property | Value |
 +-+-+
 | OS-DCF:diskConfig | MANUAL |
 | OS-EXT-SRV-ATTR:host | nova8 |
 | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
 | OS-EXT-SRV-ATTR:instance_name | instance-0005 |
 | OS-EXT-STS:power_state | 0 |
 | OS-EXT-STS:task_state | networking |
 | OS-EXT-STS:vm_state | error |
 | accessIPv4 | |
 | accessIPv6 | |
 | config_drive | |
 | created | 2012-06-18T20:42:56Z |
 | fault | {u'message': u'Timeout', u'code': 500, u'created':
 u'2012-06-18T20:43:58Z'} |
 | flavor | m1.tiny |
 | hostId | 50272989300483e2b5e5236cd572fef3f9149ae60faa5f5660f8da54 |
 | id | d569b16f-10a8-4cb8-90a3-d5b664c2322d |
 | image | tty-linux |
 | key_name | admin |
 | metadata | {} |
 | name | test |
 | private_0 network | |
 | status | ERROR |
 | tenant_id | 1 |
 | updated | 2012-06-18T20:43:57Z |
 | user_id | 1 |
 +-+-+
 
 From the nova-network.log I see the following:
 
 2012-06-18 15:43:36 DEBUG nova.manager [-] Running periodic task
 VlanManager._disassociate_stale_fixed_ips from (pid=1381) periodic_tasks
 /usr/lib/python2.7/dist-packages
 /nova/manager.py:152
 2012-06-18 15:43:57 ERROR nova.rpc.common [-] Timed out waiting for RPC
 response: timed out
 2012-06-18 15:43:57 TRACE nova.rpc.common Traceback (most recent call last):
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py, line 490, in
 ensure
 2012-06-18 15:43:57 TRACE nova.rpc.common return method(*args, **kwargs)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py, line 567, in
 _consume
 2012-06-18 15:43:57 TRACE nova.rpc.common return
 self.connection.drain_events(timeout=timeout)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/kombu/connection.py, line 175, in
 drain_events
 2012-06-18 15:43:57 TRACE nova.rpc.common return
 self.transport.drain_events(self.connection, **kwargs)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
 238, in drain_events
 2012-06-18 15:43:57 TRACE nova.rpc.common return
 connection.drain_events(**kwargs)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
 57, in drain_events
 2012-06-18 15:43:57 TRACE nova.rpc.common return
 self.wait_multi(self.channels.values(), timeout=timeout)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
 63, in wait_multi
 2012-06-18 15:43:57 TRACE nova.rpc.common chanmap.keys(),
 allowed_methods, timeout=timeout)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
 120, in _wait_multiple
 2012-06-18 15:43:57 TRACE nova.rpc.common channel, method_sig, args,
 content = read_timeout(timeout)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
 94, in read_timeout
 2012-06-18 15:43:57 TRACE nova.rpc.common return
 self.method_reader.read_method()
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/amqplib/client_0_8/method_framing.py,
 line 221, in read_method
 2012-06-18 15:43:57 TRACE nova.rpc.common raise m
 2012-06-18 15:43:57 TRACE 

Re: [Openstack] [Swift] Interested in implementing swift ring builder server

2012-06-19 Thread John Dickinson
It looks like Florian (at Rackspace) is working on that blueprint. He just 
assigned it to himself.

I'm happy to hear that you have some extra devs for swift work. I'd love to 
help coordinate some swift goals with you.

Off the top of my head, here are a few things that could be worked on:

1) Handoff logic should allow every node in the cluster to be used (rather than 
just every zone).

2) Compatibility with Ubuntu 12.04 needs to be solid

3) Make installation trivially easy.

--John



On Jun 18, 2012, at 2:23 PM, Mark Gius wrote:

 Hello Swifters,
 
 I've got some interns working with me this summer and I had a notion that 
 they might take a stab at the swift ring builder server blueprint that's been 
 sitting around for a while 
 (https://blueprints.launchpad.net/swift/+spec/ring-builder-server).  As a 
 first step I figured that the ring-builder-server would be purely an 
 alternative for the swift-ring-builder CLI, with a future iteration adding 
 support for deploying the rings to all servers in the cluster.  I'm currently 
 planning on making the ring-builder server be written and deployed like the 
 account/container/etc servers, although I imagine the implementation will be 
 a lot simpler.
 
 Is anybody else already working on this and forgot to update the blueprint?  
 If not can I get the blueprint assigned to me on launchpad?  Username 
 'markgius'.  Or if there's some other process I need to go through please let 
 me know.
 
 Mark
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



smime.p7s
Description: S/MIME cryptographic signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Timeout during image build (task Networking)

2012-06-19 Thread Vishvananda Ishaya
Sorry, paste fail on the last message.

This seems like a likely culprit:

https://review.openstack.org/#/c/8339/

I'm guessing it only happens on concurrent builds?  We probably need a 
synchronized somewhere.

Vish

On Jun 19, 2012, at 12:03 PM, Jay Pipes wrote:

 cc'ing Vish on this, as this is now occurring on every single devstack + 
 Tempest run, for multiple servers.
 
 Vish, I am seeing the exact same issue as shown below. Instances end up in 
 ERROR state and looking into the nova-network log, I find *no* errors at all, 
 and yet looking at the nova-compute log, I see multiple timeout errors -- all 
 of them trying to RPC while in the allocate_network method. Always the same 
 method, always the same error, and no errors in nova-network or nova-api 
 (other than just reporting a failed build)
 
 Any idea on something that may have crept in recently? This wasn't happening 
 a week or so ago, AFAICT.
 
 Best,
 -jay
 
 On 06/18/2012 06:03 PM, Lillie Ross-CDSR11 wrote:
 I'm receiving RPC timeouts when trying to launch an instance. My
 installation is the Essex release running on Ubuntu 12.04.
 
 When I launch a test image, the launch fails. In my setup, Nova network
 runs on a controller node, and all compute instances run on separate,
 dedicated server nodes. The failure is repeatable. Upon examining the
 various logs, I see the following (see below). Any insight would be welcome.
 
 Regards,
 Ross
 
 From 'nova show instance name' I read the following:
 
 root@cirrus1:~# nova show test
 +-+-+
 | Property | Value |
 +-+-+
 | OS-DCF:diskConfig | MANUAL |
 | OS-EXT-SRV-ATTR:host | nova8 |
 | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
 | OS-EXT-SRV-ATTR:instance_name | instance-0005 |
 | OS-EXT-STS:power_state | 0 |
 | OS-EXT-STS:task_state | networking |
 | OS-EXT-STS:vm_state | error |
 | accessIPv4 | |
 | accessIPv6 | |
 | config_drive | |
 | created | 2012-06-18T20:42:56Z |
 | fault | {u'message': u'Timeout', u'code': 500, u'created':
 u'2012-06-18T20:43:58Z'} |
 | flavor | m1.tiny |
 | hostId | 50272989300483e2b5e5236cd572fef3f9149ae60faa5f5660f8da54 |
 | id | d569b16f-10a8-4cb8-90a3-d5b664c2322d |
 | image | tty-linux |
 | key_name | admin |
 | metadata | {} |
 | name | test |
 | private_0 network | |
 | status | ERROR |
 | tenant_id | 1 |
 | updated | 2012-06-18T20:43:57Z |
 | user_id | 1 |
 +-+-+
 
 From the nova-network.log I see the following:
 
 2012-06-18 15:43:36 DEBUG nova.manager [-] Running periodic task
 VlanManager._disassociate_stale_fixed_ips from (pid=1381) periodic_tasks
 /usr/lib/python2.7/dist-packages
 /nova/manager.py:152
 2012-06-18 15:43:57 ERROR nova.rpc.common [-] Timed out waiting for RPC
 response: timed out
 2012-06-18 15:43:57 TRACE nova.rpc.common Traceback (most recent call last):
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py, line 490, in
 ensure
 2012-06-18 15:43:57 TRACE nova.rpc.common return method(*args, **kwargs)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py, line 567, in
 _consume
 2012-06-18 15:43:57 TRACE nova.rpc.common return
 self.connection.drain_events(timeout=timeout)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/kombu/connection.py, line 175, in
 drain_events
 2012-06-18 15:43:57 TRACE nova.rpc.common return
 self.transport.drain_events(self.connection, **kwargs)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
 238, in drain_events
 2012-06-18 15:43:57 TRACE nova.rpc.common return
 connection.drain_events(**kwargs)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
 57, in drain_events
 2012-06-18 15:43:57 TRACE nova.rpc.common return
 self.wait_multi(self.channels.values(), timeout=timeout)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
 63, in wait_multi
 2012-06-18 15:43:57 TRACE nova.rpc.common chanmap.keys(),
 allowed_methods, timeout=timeout)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
 120, in _wait_multiple
 2012-06-18 15:43:57 TRACE nova.rpc.common channel, method_sig, args,
 content = read_timeout(timeout)
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 /usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
 94, in read_timeout
 2012-06-18 15:43:57 TRACE nova.rpc.common return
 self.method_reader.read_method()
 2012-06-18 15:43:57 TRACE nova.rpc.common File
 

Re: [Openstack] [Swift] Interested in implementing swift ring builder server

2012-06-19 Thread Florian Hines
I just need to clean it up at this point (mainly the rebalance). Since the 
rebalance (i.e. swift-ring-builder rebalance) can take minutes to complete when 
you have lots of devices, I need to move it off to a background task. 
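
Something like this is what I mean by pushing it into the background (a
sketch only, assuming the rebalance()/save() calls the CLI already uses):

import threading

_rebalance_lock = threading.Lock()

def start_rebalance(builder, builder_file):
    """Run a possibly minutes-long rebalance off the request path."""
    def _work():
        with _rebalance_lock:        # never run two rebalances at once
            builder.rebalance()
            builder.save(builder_file)
    worker = threading.Thread(target=_work)
    worker.daemon = True
    worker.start()
    return worker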

-- 
Florian Hines | @pandemicsyn
http://about.me/pandemicsyn


On Tuesday, June 19, 2012 at 2:06 PM, Mark Gius wrote:

 Alright.  Is this something you're more or less done with and are just 
 cleaning up or have you just started?  I've got my guys reading up on swift 
 and are about ready to dive in with the coding, so if you're more or less 
 done I'll find another project for them to work on.  If you're just started 
 (or even better, not yet started), would you mind if I stole this from you?
 
 Mark
 
 On Tue, Jun 19, 2012 at 11:59 AM, Florian Hines florian.hi...@gmail.com 
 (mailto:florian.hi...@gmail.com) wrote:
  Hi Mark, 
  
  I forgot to grab the blueprint but I finally got around to start working on 
  this. I'll hopefully have something to put up for review next week.
  
  This version just support's basic GET/POST ops to retrieve the rings or to 
  view/alter/add devices to the ring in bulk. 
  
  Having someone else adding features as well would be awesome! 
  
  -- 
  Florian Hines | @pandemicsyn
  http://about.me/pandemicsyn
  
  
  On Monday, June 18, 2012 at 2:23 PM, Mark Gius wrote:
  
  
  
   Hello Swifters,
   
   I've got some interns working with me this summer and I had a notion that 
   they might take a stab at the swift ring builder server blueprint that's 
   been sitting around for a while 
   (https://blueprints.launchpad.net/swift/+spec/ring-builder-server).  As a 
   first step I figured that the ring-builder-server would be purely an 
   alternative for the swift-ring-builder CLI, with a future iteration 
   adding support for deploying the rings to all servers in the cluster.  
   I'm currently planning on making the ring-builder server be written and 
   deployed like the account/container/etc servers, although I imagine the 
   implementation will be a lot simpler. 
   
   Is anybody else already working on this and forgot to update the 
   blueprint?  If not can I get the blueprint assigned to me on launchpad?  
   Username 'markgius'.  Or if there's some other process I need to go 
   through please let me know. 
   
   Mark 
   ___
   Mailing list: https://launchpad.net/~openstack
   Post to : openstack@lists.launchpad.net 
   (mailto:openstack@lists.launchpad.net)
   Unsubscribe : https://launchpad.net/~openstack
   More help : https://help.launchpad.net/ListHelp
   
   
   
  
  
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Blueprint automatic-secure-key-generation] Automatic SECURE_KEY generation

2012-06-19 Thread Gabriel Hurley
That's looking pretty good, Sascha. May I suggest:

1. Instead of using the temporary lockfile, let's actually move slightly back 
towards your original approach and check for the existence of a known secret 
key file (added to .gitignore of course) which the get_secret_key function can 
check for.

2. Split the code that generates the secret key into a separate function 
generate_secret_key which can be used at will.

3. In the run_tests.sh script (meant for developers, really not used in 
production deployments), have it check for a valid secret key in either the 
settings file itself or the known secret key file as part of the sanity check 
for the environment. If a valid secret key is not available, have it ask the 
user if they'd like to automatically generate a secret key (using 
generate_secret_key). Make sure this also respects the -q/--quiet flag for 
non-interactive runs like Jenkins.

4. Drop lines 25-29 in the local_settings.py.example in your branch, and 
uncomment line 31. I liked removing the example SECRET_KEY that exists in the 
example settings file like in your first patch. Even leaving it in as a comment 
encourages people to simply uncomment it and use the recycled insecure key. 
Let's take that back out and just keep your explanatory text, which is spot-on.

5. Modify the get_secret_key function so that if a valid key is not available 
(e.g. this is production and we didn't use run_tests.sh to install the 
environment) it dies with a very explanatory error message indicating that a 
secret key must be set.
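
In other words, something roughly like this pair of helpers (a sketch only;
the file name, key length and error wording are up to you):

import os

def generate_secret_key(key_file, key_length=64):
    """Write a fresh random key to key_file and return it."""
    key = os.urandom(key_length).encode('hex')
    old_umask = os.umask(0o177)          # keep the key file at mode 0600
    try:
        with open(key_file, 'w') as f:
            f.write(key)
    finally:
        os.umask(old_umask)
    return key

def get_secret_key(key_file):
    """Return the key from key_file, or fail loudly if none exists."""
    if os.path.exists(key_file):
        return open(key_file).read().strip()
    raise RuntimeError(
        'No SECRET_KEY set and %s does not exist; set SECRET_KEY in '
        'local_settings.py or run run_tests.sh to generate one for '
        'development use.' % key_file)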

I know that's a lot to take in, but I think it walks the cleanest line between 
making it super-easy for developers while making sure everyone is aware of 
what's going on.

If you'll do that and push the review to Gerrit I'll be totally in support of 
it.

Thanks!

- Gabriel

 -Original Message-
 From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
 [mailto:openstack-
 bounces+gabriel.hurley=nebula@lists.launchpad.net] On Behalf Of
 Sascha Peilicke
 Sent: Tuesday, June 19, 2012 2:04 AM
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] [Blueprint automatic-secure-key-generation]
 Automatic SECURE_KEY generation
 
 On 06/19/2012 04:55 AM, Paul McMillan wrote:
  Ah, you're thinking of a setup where there are multiple dashboard VMs
  behind a load-balancer serving requests. Indeed, there the dashboard
  instances should either share the SECRET_KEY or the load-balancer has
  to make sure that all requests of a given session are redirected to
  the same dashboard instance.
 
  I'm concerned that anything which automatically generates a secret key
  will cause further problems down the line for other users. For
  example, you've clearly experienced what happens when you use more
  than one worker and generate a per-process key. Imagine trying to
  debug that same problem on a multi-system cloud (with a load balancer
  that usually routes people to the same place, but not always). If you
  aren't forced to learn about this setting during deployment, you are
  faced with a nearly impossible problem of users just sometimes getting
  logged out.
 
  I feel like this patch is merely kicking the can down the road just a
  little past where your particular project needs it to be, without
  thinking about the bigger picture.
 I'm sorry about that, but that was definitely not the intent. Inherently you
 are right; there are just far too many possible setups to get them all right.
 Thus it's a valid choice to defer such decisions to the one doing the setup.
 But trying to ease the process can't be that wrong either. That's the whole
 point of why distributions don't just ship packages that merely include
 pristine upstream tarballs: we try (and sometimes fail) to provide sensible
 defaults that at least get people started.
 
 
  I'm sure you're not seriously suggesting that a large-scale production
  deployment of openstack will be served entirely by a single point of
  failure dashboard server.
 
  But shouldn't local_settings.py still take preference over settings.py?
  Thus the admin could still set a specific SECRET_KEY in
  local_settings.py regardless of the default (auto-generated) one. So
  I only would have to fix the patch by not removing the documentation
  about SECRET_KEY from local_settings.py, right?
 
  I agree with Gabriel. Horizon should ship with no secret key (so it
  won't start until you set one). At most, it should automatically
  generate a key on a per-process basis, or possibly as part of
  run_tests, so that getting started with development is easy. Under no
  circumstances should it attempt to read the mind of an admin doing a
  production deployment, because it will invariably do the wrong thing
  more often than the right. As a security issue, it's important that
  admins READ THE DOCUMENTATION. Preventing the project from starting
  until they address the issue is one good way.
 Ok, I've adjusted the patch to reflect 

Re: [Openstack] Need help and guidance of Openstack Swift

2012-06-19 Thread Yogesh Bansal
Hi Jay,

Thanks a lot for the quick response. This was a little helpful. I need to know
how to send the parameters to the swift server when making the connection. I
think that task is happening inside the code below. Could you please tell me
what is happening in the cloudfiles.get_connection method?


def swift_api(request):
endpoint = url_for(request, 'object-store')
LOG.debug('Swift connection created using token %s and url %s'
  % (request.session['token'], endpoint))
auth = SwiftAuthentication(endpoint, request.session['token'])
return cloudfiles.get_connection(auth=auth)






Thanks  Regards
Yogesh Bansal

-Original Message-
From: openstack-bounces+yogeshbansal83=gmail@lists.launchpad.net
[mailto:openstack-bounces+yogeshbansal83=gmail@lists.launchpad.net] On
Behalf Of Jay Pipes
Sent: 19 June 2012 AM 11:34
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Need help and guidance of Openstack Swift

Some links to show you:

https://github.com/openstack/horizon/blob/7b565fc9839833b4b99b1fc3b02269b790
71af3e/horizon/api/swift.py

https://github.com/openstack/horizon/blob/f6f2a91e14f6bdd4e1a87e31a1d6923127
afee1b/horizon/dashboards/nova/containers/views.py

Best,
-jay

On 06/19/2012 02:04 PM, Yogesh Bansal wrote:
 Hello Friends,

 I have just started using Openstack. I am trying to make a web based 
 application for Openstack swift. I am using the Django and python for 
 my application. Could you please provide me few examples how to send 
 the command to swift server using python. I am able to get the auth 
 token from keystone and the public url for the swift storage. But when 
 I tried to use that swift public URL, in response it say '403 
 Forbidden\n\nAccess was denied to this resource.\n\n' and Django 
 stops by giving this error message

 raise ValueError(errmsg(Extra data, s, end, len(s)))

 ValueError: Extra data: line 1 column 4 - line 5 column 4 (char 4 - 55)

 I am able to run the curl commands with are provided in the Openstack 
 documentation and its running fine. I just need help using python.

 I am learning python and Django also. I have little understanding on this.

 Could you please help and guide me on this.

 Thanks

 Yogesh



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Timeout during image build (task Networking)

2012-06-19 Thread Lillie Ross-CDSR11
Vish, Jay,

OK, this looks promising.  A couple of questions…

I'm seeing this RPC timeout on the Essex 2012.1 packages released with Ubuntu 
12.04.  I'm assuming these packages are affected by this bug?

Why would something this fundamental not show up during Essex RC.X testing?

How best to 'fix' this for our production environment (thank god it's only the 
research organization!)?

My previous testing of Essex (running on Diablo) didn't exhibit this problem.  
However, during testing, I was configured using FlatDHCP networking versus our 
production cloud using VLAN networking.  This was what led me to believe that 
it might be a network configuration issue.  Apparently not.

As it stands right now, we're dead in the water, so I hope some easy fix for 
the Ubuntu Essex release is possible.

Thanks to everyone looking at this.  Hope to hear a resolution soon.

Regards,
Ross

On Jun 19, 2012, at 2:13 PM, Vishvananda Ishaya wrote:

Sorry, paste fail on the last message.

This seems like a likely culprit:

https://review.openstack.org/#/c/8339/

I'm guessing it only happens on concurrent builds?  We probably need a 
synchronized somewhere.

Vish

On Jun 19, 2012, at 12:03 PM, Jay Pipes wrote:

cc'ing Vish on this, as this is now occurring on every single devstack + 
Tempest run, for multiple servers.

Vish, I am seeing the exact same issue as shown below. Instances end up in 
ERROR state and looking into the nova-network log, I find *no* errors at all, 
and yet looking at the nova-compute log, I see multiple timeout errors -- all 
of them trying to RPC while in the allocate_network method. Always the same 
method, always the same error, and no errors in nova-network or nova-api (other 
than just reporting a failed build)

Any idea on something that may have crept in recently? This wasn't happening a 
week or so ago, AFAICT.

Best,
-jay

On 06/18/2012 06:03 PM, Lillie Ross-CDSR11 wrote:
I'm receiving RPC timeouts when trying to launch an instance. My
installation is the Essex release running on Ubuntu 12.04.

When I launch a test image, the launch fails. In my setup, Nova network
runs on a controller node, and all compute instances run on separate,
dedicated server nodes. The failure is repeatable. Upon examining the
various logs, I see the following (see below). Any insight would be welcome.

Regards,
Ross

From 'nova show instance name' I read the following:

root@cirrus1:~# nova show test
+-+-+
| Property | Value |
+-+-+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-SRV-ATTR:host | nova8 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | instance-0005 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | networking |
| OS-EXT-STS:vm_state | error |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2012-06-18T20:42:56Z |
| fault | {u'message': u'Timeout', u'code': 500, u'created':
u'2012-06-18T20:43:58Z'} |
| flavor | m1.tiny |
| hostId | 50272989300483e2b5e5236cd572fef3f9149ae60faa5f5660f8da54 |
| id | d569b16f-10a8-4cb8-90a3-d5b664c2322d |
| image | tty-linux |
| key_name | admin |
| metadata | {} |
| name | test |
| private_0 network | |
| status | ERROR |
| tenant_id | 1 |
| updated | 2012-06-18T20:43:57Z |
| user_id | 1 |
+-+-+

From the nova-network.log I see the following:

2012-06-18 15:43:36 DEBUG nova.manager [-] Running periodic task
VlanManager._disassociate_stale_fixed_ips from (pid=1381) periodic_tasks
/usr/lib/python2.7/dist-packages
/nova/manager.py:152
2012-06-18 15:43:57 ERROR nova.rpc.common [-] Timed out waiting for RPC
response: timed out
2012-06-18 15:43:57 TRACE nova.rpc.common Traceback (most recent call last):
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py, line 490, in
ensure
2012-06-18 15:43:57 TRACE nova.rpc.common return method(*args, **kwargs)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py, line 567, in
_consume
2012-06-18 15:43:57 TRACE nova.rpc.common return
self.connection.drain_events(timeout=timeout)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/connection.py, line 175, in
drain_events
2012-06-18 15:43:57 TRACE nova.rpc.common return
self.transport.drain_events(self.connection, **kwargs)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
238, in drain_events
2012-06-18 15:43:57 TRACE nova.rpc.common return
connection.drain_events(**kwargs)
2012-06-18 15:43:57 TRACE nova.rpc.common File
/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py, line
57, in drain_events
2012-06-18 

Re: [Openstack] Timeout during image build (task Networking)

2012-06-19 Thread Jay Pipes

On 06/19/2012 03:13 PM, Vishvananda Ishaya wrote:

Sorry, paste fail on the last message.

This seems like a likely culprit:

https://review.openstack.org/#/c/8339/

I'm guessing it only happens on concurrent builds? We probably need a
synchronized somewhere.


I notice that the RPC calls to the network topic were changed from cast 
to call. Would this make any difference that would manifest itself in 
the way we are seeing?
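
For reference, the difference between the two (a rough Essex-era sketch with
the method name and args simplified, not copied from the patch):

from nova import rpc

def allocate_network(context, instance_id, use_call=True):
    msg = {'method': 'allocate_for_instance',
           'args': {'instance_id': instance_id}}
    if use_call:
        # call blocks until the remote side answers, or raises after
        # rpc_response_timeout -- the "Timed out waiting for RPC response"
        return rpc.call(context, 'network', msg)
    # cast is fire-and-forget: it returns immediately and no reply is read
    rpc.cast(context, 'network', msg)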


-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Interested in implementing swift ring builder server

2012-06-19 Thread Samuel Merritt

On 6/19/12 12:13 PM, John Dickinson wrote:

It looks like Florian (at Rackspace) is working on that blueprint. He just 
assigned it to himself.

I'm happy to hear that you have some extra devs for swift work. I'd love to 
help coordinate some swift goals with you.

Off the top of my head, here are a few things that could be worked on:

1) Handoff logic should allow every node in the cluster to be used (rather than 
just every zone).


That's already done; see commit bb509dd (and 95786e5 for some of the 
fallout).




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Cinder status update

2012-06-19 Thread John Griffith
For those of you that don't know, Cinder is the new project intended to
separate block storage out of Nova and provide it via its own
service.  The goal is to have a functional replacement for
Nova-Volumes by Folsom 2 (don't worry, you'll be able to select which
service to use).  So far things have gone fairly well; we're at a
stage now where we have a beta version that's ready for use in
devstack environments for folks that might be curious or interested in
doing some testing/fixing :)

I haven't done anything fancy like packaging it all up in vagrant, but
depending on the level of interest we can look into that.  Currently
the needed patches are in Gerrit as Drafts, so rather than mess with
adding a ton of people who are just a little curious, I've created a
github fork that can be used just for initial looking around/testing.

In order to get an install up and running all you need to do is clone
the following version of devstack:
https://github.com/j-griffith/devstack.git (you should also be able to
just modify your vagrant attributes file to point to this version of
devstack of course).

The stackrc in the version of devstack on j-griffith is hard coded for
the cinder service and the special repos in order to make it as easy
as possible to check out the beta.

Run stack.sh and you should be in business...

Please note that this is a hack to get things up and running for folks
that have expressed interest in testing and seeing where things are
at.  There are surely issues/bugs and things that aren't done yet,
but this is suitable to be called beta.

What to expect:
   * Create/List/Delete volumes on the cinder service via:
  cinderclient ('cinder'), euca2ools (see the python sketch below this list)
   * Create/List/Delete volume-snapshots on the cinder service via:
 cinderclient ('cinder'), euca2ools
   * Attach/Detach needs some work but it can be done via euca-attach-volume
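
The python sketch mentioned above (assuming the beta client mirrors
python-novaclient's volumes interface, so the constructor arguments and
method names may differ slightly):

from cinderclient.v1 import client

# user, api key/password, tenant and keystone auth url for a devstack install
cinder = client.Client('admin', 'password', 'demo',
                       'http://127.0.0.1:5000/v2.0')

vol = cinder.volumes.create(1, display_name='test-vol')   # size in GB
print [(v.id, v.display_name, v.status) for v in cinder.volumes.list()]
cinder.volumes.delete(vol)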

What's in progress:
   * Attach/Detach for cinderclient
   * Seems to be something not working in horizon any longer, need to
  look at this
   * Lots of cleanup and unused nova code to strip out of cinder project still
   * Tests (unit tests, as well as devstack tests)

Note there are fixes/changes on a daily basis, so it's very much a
moving target.  Official repos should all be updated and ready for
consumption no later than the end of this week.


Give it a try, if you find an issue or something missing let me know,
or better yet fix it up and send me a pull request :)

Thanks,
John

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cinder status update

2012-06-19 Thread Gabriel Hurley
Nice work.

When you've got the rest of the API bits ironed out (particularly 
attach/detach) I'll help work on making sure Horizon is fully functional there. 
Note that there's also an F3 Horizon blueprint for splitting volumes into its 
own optional panel: 
https://blueprints.launchpad.net/horizon/+spec/nova-volume-optional

We should coordinate on these things. ;-)

- Gabriel

 -Original Message-
 From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
 [mailto:openstack-
 bounces+gabriel.hurley=nebula@lists.launchpad.net] On Behalf Of John
 Griffith
 Sent: Tuesday, June 19, 2012 2:10 PM
 To: openstack@lists.launchpad.net
 Subject: [Openstack] Cinder status update
 
 For those of you that don't know Cinder is the new project intended to
 separate block storage out of Nova and provide it via it's own service.  The
 goal is to have a functional replacement for Nova-Volumes by Folsom 2
 (don't worry, you'll be able to select which service to use).  So far things 
 have
 gone fairly well, we're at a stage now where we have a beta version that's
 ready for use in devstack environments for folks that might be curious or
 interested in doing some testing/fixing :)
 
 I haven't done anything fancy like packaging it all up in vagrant, but
 depending on the level of interest we can look into that.  Currently the
 needed patches are in Gerrit as Drafts, so rather than mess with adding a ton
 of people who are just a little curious, I've created a github fork that can 
 be
 used just for initial looking around/testing.
 
 In order to get an install up and running all you need to do is clone the
 following version of devstack:
 https://github.com/j-griffith/devstack.git (you should also be able to just
 modify your vagrant attributes file to point to this version of devstack of
 course).
 
 The stackrc in the version of devstack on j-griffith is hard coded for the
 cinder service and the special repos in order to make it as easy as possible
 to check out the beta.
 
 Run stack.sh and you should be in business...
 
 Please note that this is a hack to get things up and running for folks that
 have expressed interest in testing and seeing where things are at.  There are
 surely issues/bugs and things that aren't done yet, but this is suitable to
 be called beta.
 
 What to expect:
* Create/List/Delete volumes on the cinder service via:
  cinderclient ('cinder'), euca2ools
* Create/List/Delete volume-snapshots on the cinder service via:
  cinderclient ('cinder'), euca2ools
    * Attach/Detach needs some work but it can be done via euca-attach-volume
 
 What's in progress:
* Attach/Detach for cinderclient
* Seems to be something not working in horizon any longer, need to
   look at this
* Lots of cleanup and unused nova code to strip out of cinder project still
* Tests (unit tests, as well as devstack tests)
 
 Note there are fixes/changes on a daily basis, so it's very much a moving
 target.  Official repos should all be updated and ready for consumption no
 later than the end of this week.
 
 
 Give it a try; if you find an issue or something missing, let me know, or
 better yet fix it up and send me a pull request :)
 
 Thanks,
 John
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Multiple nova-compute hosts, single-host nova-network, and a guest unreachable via its floating IP

2012-06-19 Thread Adam Gandelman

On 06/19/2012 10:52 AM, Florian Haas wrote:

Hi everyone,

perhaps someone can shed some light on a floating IP issue.

I have 2 nova-compute nodes (call them alice and bob), one of them
(alice) is also running nova-network. bob uses alice as its
--metadata_host and --network_host.

I assign a floating IP to a guest running on bob. As expected, that IP
is bound to the NIC specified as the --public_interface on alice (my
nova-network host).

However, since alice has a route into the --fixed_range network over
its local bridge, the incoming traffic for the floating IP is routed
there, where there's no guest to answer it -- because the guest is,
after all, running on bob.

Now, this seems fairly logical to me in the combination of

1. a nova-network host also running nova-compute;
2. other nova-compute hosts being around;
3. those other nova-compute hosts _not_ also running nova-network (and
hence there being no multi-host networking).


Florian-

I was helping someone debug a similar setup a while back and found that 
nova was not putting the interface into promiscuous mode and it was 
dropping packets bound for the other compute node on the floor.  This 
came to me after noticing everything seemed to work fine as soon as I 
started debugging with tcpdump on nova-network. :)
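A quick way to check for / work around that on the nova-network host
(illustrative only; the interface name is just an example):

  # is the public interface already in promiscuous mode?
  ip link show eth0 | grep -i promisc
  # force it on:
  sudo ip link set eth0 promisc on
  # or simply leave tcpdump running -- it puts the NIC into promiscuous
  # mode as a side effect, which is why debugging "fixed" things:
  sudo tcpdump -ni eth0 host <floating-ip>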


HTH,
Adam


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cinder status update

2012-06-19 Thread John Griffith
On Tue, Jun 19, 2012 at 3:29 PM, Gabriel Hurley
gabriel.hur...@nebula.com wrote:
 Nice work.

 When you've got the rest of the API bits ironed out (particularly 
 attach/detach) I'll help work on making sure Horizon is fully functional 
 there. Note that there's also an F3 Horizon blueprint for splitting volumes 
 into its own optional panel: 
 https://blueprints.launchpad.net/horizon/+spec/nova-volume-optional

 We should coordinate on these things. ;-)

    - Gabriel

 -Original Message-
 From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
 [mailto:openstack-
 bounces+gabriel.hurley=nebula@lists.launchpad.net] On Behalf Of John
 Griffith
 Sent: Tuesday, June 19, 2012 2:10 PM
 To: openstack@lists.launchpad.net
 Subject: [Openstack] Cinder status update

 For those of you that don't know, Cinder is the new project intended to
 separate block storage out of Nova and provide it via its own service.  The
 goal is to have a functional replacement for Nova-Volumes by Folsom 2
 (don't worry, you'll be able to select which service to use).  So far things
 have gone fairly well; we're at a stage now where we have a beta version
 that's ready for use in devstack environments for folks that might be
 curious or interested in doing some testing/fixing :)

 I haven't done anything fancy like packaging it all up in vagrant, but
 depending on the level of interest we can look into that.  Currently the
 needed patches are in Gerrit as Drafts, so rather than mess with adding a
 ton of people who are just a little curious, I've created a github fork
 that can be used just for initial looking around/testing.

 In order to get an install up and running all you need to do is clone the
 following version of devstack:
 https://github.com/j-griffith/devstack.git (you should also be able to just
 modify your vagrant attributes file to point to this version of devstack of
 course).

 The stackrc in the version of devstack on j-griffith is hard coded for the
 cinder service and the special repos in order to make it as easy as possible
 to check out the beta.

 Run stack.sh and you should be in business...

 Please note that this is a hack to get things up and running for folks that
 have expressed interest in testing and seeing where things are at.  There are
 surely issues/bugs and things that aren't done yet, but this is suitable to
 be called beta.

 What to expect:
    * Create/List/Delete volumes on the cinder service via:
              cinderclient ('cinder'), euca2ools
    * Create/List/Delete volume-snapshots on the cinder service via:
              cinderclient ('cinder'), euca2ools
    * Attach/Detach needs some work but it can be done via euca-attach-volume

 What's in progress:
    * Attach/Detach for cinderclient
    * Seems to be something not working in horizon any longer, need to
       look at this
    * Lots of cleanup and unused nova code to strip out of cinder project still
    * Tests (unit tests, as well as devstack tests)

 Note there are fixes/changes on a daily basis, so it's very much a moving
 target.  Official repos should all be updated and ready for consumption no
 later than the end of this week.


 Give it a try; if you find an issue or something missing, let me know, or
 better yet fix it up and send me a pull request :)

 Thanks,
 John

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

Looks like most of these issues are taken care of.  I have a few
things to button up elsewhere, but should be pushing changes tonight
or tomorrow that will enable a lot of this.

Also, you won't need to use my personal github any longer :)

I'll send an update to everyone on what to add to localrc to make this
work when everything is in place.

Thanks,
John

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Deleting a volume stuck in attaching state?

2012-06-19 Thread Lars Kellogg-Stedman
I attempted to attach a volume to a running instance, but later
deleted the instance, leaving the volume stuck in the attaching
state:

  # nova volume-list
  +----+-----------+--------------+------+-------------+-------------+
  | ID |   Status  | Display Name | Size | Volume Type | Attached to |
  +----+-----------+--------------+------+-------------+-------------+
  | 9  | attaching | None         | 1    | None        |             |
  +----+-----------+--------------+------+-------------+-------------+

It doesn't appear to be possible to delete this with nova
volume-delete:

  # nova volume-delete 9
  ERROR: Invalid volume: Volume status must be available or error (HTTP 400)

Other than directly editing the database (and I've had to do that an
awful lot already), how do I recover from this situation?

-- 
Lars Kellogg-Stedman l...@seas.harvard.edu          |
Senior Technologist                                 | http://ac.seas.harvard.edu/
Academic Computing                                  | http://code.seas.harvard.edu/
Harvard School of Engineering and Applied Sciences  |


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Router as a VM

2012-06-19 Thread Neelakantam Gaddam
Hi All,

I am trying a multi-node setup of OpenStack and Quantum using devstack. My
understanding is that for every tenant a gateway interface is created on
the physical host, and these interfaces act as gateways for the tenants.
Is it possible to configure a VM as the gateway/router for a tenant, and
how can we do this?

Thanks in advance.


-- 
Thanks & Regards
Neelakantam Gaddam
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Deleting a volume stuck in attaching state?

2012-06-19 Thread John Griffith
On Tue, Jun 19, 2012 at 7:40 PM, Lars Kellogg-Stedman
l...@seas.harvard.edu wrote:
 I attempted to attach a volume to a running instance, but later
 deleted the instance, leaving the volume stuck in the attaching
 state:

  # nova volume-list
  +----+-----------+--------------+------+-------------+-------------+
  | ID |   Status  | Display Name | Size | Volume Type | Attached to |
  +----+-----------+--------------+------+-------------+-------------+
  | 9  | attaching | None         | 1    | None        |             |
  +----+-----------+--------------+------+-------------+-------------+

 It doesn't appear to be possible to delete this with nova
 volume-delete:

  # nova volume-delete 9
   ERROR: Invalid volume: Volume status must be available or error (HTTP 400)

 Other than directly editing the database (and I've had to do that an
 awful lot already), how do I recover from this situation?

 --
 Lars Kellogg-Stedman l...@seas.harvard.edu          |
 Senior Technologist                                 | http://ac.seas.harvard.edu/
 Academic Computing                                  | http://code.seas.harvard.edu/
 Harvard School of Engineering and Applied Sciences  |


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

Hi Lars,

Unfortunately, manipulating the database might be your best bet for
now.  We do have plans to come up with another option in the Cinder
project, but that won't help you much right now.

If somebody has a better method, I'm sure they'll speak up and reply
to this email, but for the moment I think that's the way to go.
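If you do end up editing the database, the usual approach is something
along these lines (illustrative only -- double-check the table and column
names against your schema, and back up the database first):

  mysql -u root -p nova -e \
    "UPDATE volumes SET status='error', attach_status='detached' WHERE id=9;"
  # once the volume is no longer 'attaching', the normal delete should work:
  nova volume-delete 9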

Thanks,
John

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Nova] Bug # 971621

2012-06-19 Thread Sajith Kariyawasam
Hi all,

We have been having some issues terminating LXC instances in OpenStack
Nova, and found that the bug mentioned above [1] has already been reported;
according to [2] it is marked High and Confirmed.

We would like to know whether there are any plans to fix this issue soon,
or whether there is a workaround for it in the meantime.

[1] https://bugs.launchpad.net/nova/+bug/971621
[2] https://bugs.launchpad.net/openstack/+bugs?memo=75&start=75


-- 
Best Regards
Sajith
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-qa-team] Issues with soft reboot

2012-06-19 Thread David Kranz
Daryl, I agree. I sent this message after seeing some more soft reboot 
tests being posted. What I meant was that we should limit soft reboot 
tests to the ones that are testing that functionality specifically, as 
opposed to tests that try to do something or other during reboot.


 -David

On 6/19/2012 11:05 AM, Daryl Walleck wrote:

Hi David,

From a time perspective I see what you're saying. However, there's an
important bit of functionality being tested here: the fact that the
soft reboot works regardless of hypervisor. I've always aimed to make Tempest
hypervisor-agnostic, and I would be hesitant to skip a valid test case. I
think it's at least worth noting down as something we can revisit later, but I
think there are other areas where we can improve performance first.

Daryl

Sent from my iPad

On Jun 19, 2012, at 8:01 AM, David Kranz david.kr...@qrclab.com wrote:


To help with the effort of making the Tempest suite run faster, we should avoid 
or skip the use of soft reboot in any tests, at least for now. The problem is 
that, according to Vish, soft reboot requires guest support. If the booted 
image doesn't have it, compute will wait (two minutes by default) and then do a
hard reboot. So right now almost all tests that do a soft reboot will take at
least 150 seconds or so and will not actually be testing anything useful. There 
should be a soft reboot test that uses an image with guest support.
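
For reference, the two reboot paths as exposed by the nova CLI (a rough
sketch; the soft path is the one that depends on guest-side support):

  nova reboot <server-id>          # soft reboot -- asks the guest to restart cleanly
  nova reboot --hard <server-id>   # hard reboot -- power-cycles at the hypervisor level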

-David

References:
https://bugs.launchpad.net/nova/+bug/1013747
https://bugs.launchpad.net/tempest/+bug/1014647

--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp



--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


[Openstack-qa-team] IMPORTANT: Moratorium on adding negative test cases

2012-06-19 Thread Jay Pipes

QAers,

There has been a pile-up of code reviews recently, and I wanted to 
explain a decision that a number of core QA team members reached last 
week and why some patchsets have not been reviewed.


Two of the goals of Tempest are to have a functional integration test 
suite that stresses *different* things than the unit tests and runs in a 
reasonable amount of time (< 20 minutes or so).


While it's been great to see the large influx of negative tests added 
recently to Tempest, there is a concern that the incremental value these 
tests might add does not counterbalance the lengthy times that some of 
the tests take to run.


It was the advice of these core team members (including myself) that we 
put a moratorium on adding new negative tests to Tempest at this point 
and instead look at doing the following:


* Determine which negative test cases currently in Tempest DO 
NOT exist in the unit test suite of the corresponding core project, add 
those unit tests appropriately, and then remove the negative tests from 
Tempest
* Use a fuzz-testing grammar-based tool such as randgen [1] to do 
negative testing of the APIs [2]
* Focus all energies on writing test cases that stress the integration 
test points between the services and do more than just API-level 
validation -- in other words, doing more SSH-ing into an instance for 
verification of networking setups, init files, etc (see the sketch after 
this list)
* Change the way we are currently using Launchpad's bugs and blueprints 
system in the following ways:
 (1) No longer have bugs that stay around forever, with lists of new 
small tests to be added. Instead, bugs should describe a very specific 
task, in detail, and should be closed once the test is added to Tempest
 (2) Have someone dedicated to Bug Triaging and cleaning up the current 
list of open bugs -- I believe David Kranz and I will be tackling this.
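
As a rough sketch of the SSH-based verification mentioned in the third
point above (the key name, user, and address variable are hypothetical,
not taken from any existing test):

  # boot an instance through the API as usual, then verify from inside it
  FLOATING_IP=<address assigned by the test>
  ssh -i tempest_key.pem -o StrictHostKeyChecking=no \
      cirros@$FLOATING_IP 'ip addr show eth0 && cat /etc/hostname'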


Please respond to this post if you have any questions about the 
important points above. We need to refocus the QA team a bit to add 
greater value and make Tempest the integration test suite that the other 
core projects can rely on.


All the best,
-jay

[1] https://launchpad.net/randgen
[2] https://lists.launchpad.net/openstack-qa-team/msg00155.html

--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp