Re: [openstack-dev] [qa] Lack of consistency in returning response from tempest clients

2014-08-30 Thread Yair Fried
Hi,
I'd rather not subclass dict directly.
For various reasons, adding extra attributes to a normal Python dict seems prone
to errors, since people will be expecting regular dicts; and on the other hand,
if we want to expand it in the future we might run into problems playing with
dict methods (such as "update").

I suggest (roughly):

class ResponseBody(dict):
    def __init__(self, body=None, resp=None):
        # keep the body in a private dict rather than in the dict storage itself
        self._data_dict = body or {}
        self.resp = resp

    def __getitem__(self, index):
        return self._data_dict[index]


Thus we can keep the previous dict interface, but protect the data and make
sure the object behaves exactly as we expect it to. If we want it to have
more dict attributes/methods we can add them explicitly.
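
For example, a slightly fuller (untested) sketch of the wrapper -- the
delegated methods below are only illustrative; we'd add whichever ones the
tests actually need:

class ResponseBody(dict):
    """Wrap a response body, exposing only the dict behaviour we
    explicitly choose to support, plus the raw response object."""

    def __init__(self, body=None, resp=None):
        super(ResponseBody, self).__init__()
        self._data_dict = body or {}
        self.resp = resp

    def __getitem__(self, key):
        return self._data_dict[key]

    # explicitly delegated dict methods
    def get(self, key, default=None):
        return self._data_dict.get(key, default)

    def keys(self):
        return self._data_dict.keys()

    def __contains__(self, key):
        return key in self._data_dict

    def __len__(self):
        return len(self._data_dict)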


- Original Message -
From: "Boris Pavlovic" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Saturday, August 30, 2014 2:53:37 PM
Subject: Re: [openstack-dev] [qa] Lack of consistency in returning response 
from tempest clients

Sean, 




class ResponseBody(dict):
    def __init__(self, body={}, resp=None):
        self.update(body)
        self.resp = resp


Are you sure that you would like to have the default value {} for the method argument
and not something like: 


class ResponseBody(dict):
    def __init__(self, body=None, resp=None):
        body = body or {}
        self.update(body)
        self.resp = resp

In your case you have a side effect. Take a look at:
http://stackoverflow.com/questions/1132941/least-astonishment-in-python-the-mutable-default-argument
 

Best regards, 
Boris Pavlovic 


On Sat, Aug 30, 2014 at 10:08 AM, GHANSHYAM MANN < ghanshyamm...@gmail.com > 
wrote: 



+1. That will also be helpful for APIs coming up with microversions, like Nova.


On Fri, Aug 29, 2014 at 11:56 PM, Sean Dague < s...@dague.net > wrote: 


On 08/29/2014 10:19 AM, David Kranz wrote: 
> While reviewing patches for moving response checking to the clients, I 
> noticed that there are places where client methods do not return any value. 
> This is usually, but not always, a delete method. IMO, every rest client 
> method should return at least the response. Some services return just 
> the response for delete methods and others return (resp, body). Does any 
> one object to cleaning this up by just making all client methods return 
> resp, body? This is mostly a change to the clients. There were only a 
> few places where a non-delete method was returning just a body that was 
> used in test code. 

Yair and I were discussing this yesterday. As the response correctness 
checking is happening deeper in the code (and you are seeing more and 
more people assigning the response object to _ ) my feeling is Tempest 
clients should probably return a body obj that's basically:

class ResponseBody(dict):
    def __init__(self, body={}, resp=None):
        self.update(body)
        self.resp = resp

Then all the clients would have single return values, the body would be 
the default thing you were accessing (which is usually what you want), 
and the response object is accessible if needed to examine headers. 
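
(As a rough usage sketch only, reusing the ResponseBody class above with
made-up values -- fake_resp stands in for the real response object:)

fake_resp = {'status': '200', 'content-type': 'application/json'}
body = ResponseBody({'id': '42', 'name': 'test-server'}, resp=fake_resp)
assert body['name'] == 'test-server'   # dict-style access is the common case
assert body.resp['status'] == '200'    # headers still reachable when needed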

-Sean 

-- 
Sean Dague 
http://dague.net 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 



-- 
Thanks & Regards 
Ghanshyam Mann 


___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][neutron][jenkins]Merge failure in jenkins : gate-tempest-dsvm-neutron-full and

2014-08-30 Thread Nader Lahouti
Hi,

There are failures in Jenkins that are not related to the patch:
https://review.openstack.org/#/c/89211/

The console log shows errors as below. Has anybody seen these errors? Is it
a known issue? Please advise what needs to be done.

2014-08-30 15:26:11.789

| *** Not Whitelisted *** 2014-08-30 15:01:55.371 540 ERROR
neutron.services.firewall.agents.l3reference.firewall_l3_agent
[req-2425db37-b090-41cd-9dd1-eb86a62ec5aa None] FWaaS RPC failure in
delete_firewall for fw: f81f054d-b254-4d2c-a258-3a28f1df1d352014-08-30
15:26:11.789 

| *** Not Whitelisted *** 2014-08-30 15:01:55.371 540 TRACE
neutron.services.firewall.agents.l3reference.firewall_l3_agent
[u'Traceback (most recent call last):\n', u'  File
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py",
line 134, in _dispatch_and_reply\nincoming.message))\n', u'  File
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py",
line 177, in _dispatch\nreturn self._do_dispatch(endpoint, method,
ctxt, args)\n', u'  File
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py",
line 123, in _do_dispatch\nresult = getattr(endpoint,
method)(ctxt, **new_args)\n', u'  File
"/opt/stack/new/neutron/neutron/services/firewall/fwaas_plugin.py",
line 65, in firewall_deleted\nfw_db =
self.plugin._get_firewall(context, firewall_id)\n', u'  File
"/opt/stack/new/neutron/neutron/db/firewall/firewall_db.py", line 99,
in _get_firewall\nraise
firewall.FirewallNotFound(firewall_id=id)\n', u'FirewallNotFound:
Firewall f81f054d-b254-4d2c-a258-3a28f1df1d35 could not be
found.\n'].2014-08-30 15:26:11.790

| *** Not Whitelisted *** 2014-08-30 15:01:56.662 540 ERROR
neutron.services.firewall.agents.l3reference.firewall_l3_agent
[req-31ed65c9-0aee-4ff3-828b-794cb5a805d7 None] FWaaS RPC failure in
delete_firewall for fw: f81f054d-b254-4d2c-a258-3a28f1df1d352014-08-30
15:26:11.790 

| *** Not Whitelisted *** 2014-08-30 15:01:56.662 540 TRACE

==
Thanks,Nader.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][NFV] VIF_VHOSTUSER

2014-08-30 Thread Ian Wells
The problem here is that you've removed the vif_driver option and now
you're preventing the inclusion of named VIF types into the generic driver,
which means that rather than adding a package to an installation to add
support for a VIF driver it's now necessary to change the Nova code (and
repackage it, or - ew - patch it in place after installation).  I
understand where you're coming from but unfortunately the two changes
together make things very awkward.  Granted that vif_driver needed to go
away - it was the wrong level of code and the actual value was coming from
the wrong place anyway (nova config and not Neutron) - but it's been
removed without a suitable substitute.

It's a little late for a feature for Juno, but I think we need to write
something that discovers VIF types installed on the system.  That way you can
add a new VIF type to Nova by deploying a package (and perhaps naming it in
config as an available selection to offer to Neutron) *without* changing
the Nova tree itself.
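
(Purely as a sketch of the idea -- the entry-point namespace and layout below
are invented, not an existing Nova interface:)

# Hypothetical discovery of out-of-tree VIF handlers via setuptools entry
# points; the "nova.virt.vif_types" namespace is made up for illustration.
import pkg_resources

def discover_vif_types():
    handlers = {}
    for ep in pkg_resources.iter_entry_points('nova.virt.vif_types'):
        # a package would ship e.g.:
        #   nova.virt.vif_types = vhostuser = mypkg.vif:VhostUserVIF
        handlers[ep.name] = ep.load()
    return handlers

# The generic driver could then look up the Neutron-supplied vif_type name:
#   vif_class = discover_vif_types().get(vif['type'])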

In the meantime, I recommend you consult with the Neutron cores and see if
you can make an exception for the VHOSTUSER driver for the current
timescale.
-- 
Ian.



On 27 August 2014 07:30, Daniel P. Berrange  wrote:

> On Wed, Aug 27, 2014 at 04:06:25PM +0200, Luke Gorrie wrote:
> > Howdy!
> >
> > I am writing to ask whether it will be possible to merge VIF_VHOSTUSER
> [1]
> > in Juno?
> >
> > VIF_VHOSTUSER adds support for a QEMU 2.1 feature called vhost-user
> > [2] that allows a guest to do Virtio-net I/O via a userspace vswitch.
> This
> > makes it convenient to deploy new vswitches that are optimized for NFV
> > workloads, of which there are now several both open source and
> proprietary.
> >
> > The complication is that we have no CI coverage for this feature in Juno.
> > Originally we had anticipated merging a Neutron driver that would
> exercise
> > vhost-user but the Neutron core team requested that we develop that
> outside
> > of the Neutron tree for the time being instead [3].
> >
> > We are hoping that the Nova team will be willing to merge the feature
> even
> > so. Within the NFV subgroup it would help us to share more code with each
> > other and also be good for our morale :) particularly as the QEMU work
> was
> > done especially for use with OpenStack.
>
> Our general rule for accepting new VIF drivers in Nova is that Neutron
> should have accepted the corresponding other half of VIF driver, since
> nova does not want to add support for things that are not in-tree for
> Neutron.
>
> In this case adding the new VIF driver involves defining a new VIF type
> and corresponding metadata associated with it. This metadata is part of
> the public API definition, to be passed from Neutron to Nova during VIF
> plugging and so IMHO this has to be agreed upon and defined in tree for
> Neutron & Nova. So even if the VIF driver in Neutron were to live out
> of tree, at a very minimum I'd expect the VIF_VHOSTUSER part to be
> specified in-tree to Neutron, so that Nova has a defined interface it
> can rely on.
>
> So based on this policy, my recommendation would be to keep the Nova VIF
> support out of tree in your own branch of Nova codebase until Neutron team
> are willing to accept their half of the driver.
>
> In cases like this I think Nova & Neutron need to work together to agree
> on acceptance/rejection of the proposed feature. Having one project accept
> it and the other project reject, without them talking to each other is not
> a good position to be in.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova backup not working in stable/icehouse?

2014-08-30 Thread Preston L. Bannister
You are thinking of written-for-cloud applications. For those the state
should not persist with the instance.

There are a very large number of existing applications, not written to the
cloud model, but which could be deployed in a cloud. Those applications are
not all going to get re-written (as the cost is often greater than the
benefit). Those applications need some ready and efficient means of backup.

The benefits of the cloud-application model and the cloud-deployment model
are distinct.

The existing nova backup (if it worked) is an inefficient snapshot. Not
really useful at scale.

There are two basic paths forward here: 1) build a complete common backup
implementation that everyone can use, or 2) define a common API for
invoking backup, allow vendors to supply differing implementations, and add
to OpenStack the APIs needed by backup implementations.

Given past history, there does not seem to be enough focus or resources to
get (1) done.

That makes (2) much more likely. Reasonably sure we can find the interest
and resources for this path. :)
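
(A strawman only -- nothing below is an existing OpenStack API; it just
illustrates the shape path (2) could take, with vendor implementations
behind a thin common interface:)

import abc
import six

@six.add_metaclass(abc.ABCMeta)
class BackupDriver(object):
    """Strawman interface a vendor backup implementation might plug into."""

    @abc.abstractmethod
    def quiesce(self, instance_id):
        """Put the instance's filesystems into a consistent state."""

    @abc.abstractmethod
    def changed_blocks(self, volume_id, since_snapshot_id=None):
        """Return the blocks changed since a previous snapshot, if known."""

    @abc.abstractmethod
    def backup(self, instance_id, destination):
        """Run an (ideally incremental) backup of the instance to 'destination'."""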






On Fri, Aug 29, 2014 at 10:55 PM, laserjetyang 
wrote:

> I think the purpose of nova VM is not for persistent usage, and it should
> be used for stateless. However, there are use cases to use VM to replace
> bare metal applications, and it requires the same coverage, which I think
> VMware did pretty well.
> The nova backup is snapshot indeed, so it should be re-implemented to be
> fitting into the real backup solution.
>
>
> On Sat, Aug 30, 2014 at 1:14 PM, Preston L. Bannister <
> pres...@bannister.us> wrote:
>
>> The current "backup" APIs in OpenStack do not really make sense (and
>> apparently do not work ... which perhaps says something about usage and
>> usability). So in that sense, they could be removed.
>>
>> Wrote out a bit as to what is needed:
>>
>> http://bannister.us/weblog/2014/08/21/cloud-application-backup-and-openstack/
>>
>> At the same time, to do efficient backup at cloud scale, OpenStack is
>> missing a few primitives needed for backup. We need to be able to quiesce
>> instances, and collect changed-block lists, across hypervisors and
>> filesystems. There is some relevant work in this area - for example:
>>
>> https://wiki.openstack.org/wiki/Nova/InstanceLevelSnapshots
>>
>> Switching hats - as a cloud developer, on AWS there is excellent current
>> means of backup-through-snapshots, which is very quick and is charged based
>> on changed-blocks. (The performance and cost both reflect use of
>> changed-block tracking underneath.)
>>
>> If OpenStack completely lacks any equivalent API, then the platform is
>> less competitive.
>>
>> Are you thinking about backup as performed by the cloud infrastructure
>> folk, or as a service used by cloud developers in deployed applications?
>> The first might do behind-the-scenes backups. The second needs an API.
>>
>>
>>
>>
>> On Fri, Aug 29, 2014 at 11:16 AM, Jay Pipes  wrote:
>>
>>> On 08/29/2014 02:48 AM, Preston L. Bannister wrote:
>>>
 Looking to put a proper implementation of instance backup into
 OpenStack. Started by writing a simple set of baseline tests and running
 against the stable/icehouse branch. They failed!

 https://github.com/dreadedhill-work/openstack-backup-scripts

 Scripts and configuration are in the above. Simple tests.

 At first I assumed there was a configuration error in my Devstack ...
 but at this point I believe the errors are in fact in OpenStack. (Also I
 have rather more colorful things to say about what is and is not
 logged.)

 Try to backup bootable Cinder volumes attached to instances ... and all
 fail. Try to backup instances booted from images, and all-but-one fail
 (without logged errors, so far as I see).

 Was concerned about preserving existing behaviour (as I am currently
 hacking the Nova backup API), but ... if the existing is badly broken,
 this may not be a concern. (Makes my job a bit simpler.)

 If someone is using "nova backup" successfully (more than one backup at
 a time), I *would* rather like to know!

 Anyone with different experience?

>>>
>>> IMO, the create_backup API extension should be removed from the Compute
>>> API. It's completely unnecessary and backups should be the purview of
>>> external (to Nova) scripts or configuration management modules. This API
>>> extension is essentially trying to be a Cloud Cron, which is inappropriate
>>> for the Compute API, IMO.
>>>
>>> -jay
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> Ope

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-30 Thread Adam Harwell
Only really have comments on two of your related points:

[Susanne] To me Octavia is a driver, so it is very hard for me to think of it as 
a standalone project. It needs the new Neutron LBaaS v2 to function which is 
why I think of them together. This of course can change since we can add 
whatever layers we want to Octavia.

[Adam] I guess I've always shared Stephen's viewpoint — Octavia != LBaaS-v2. 
Octavia is a peer to F5 / Radware / A10 / etc appliances, not to an Openstack 
API layer like Neutron-LBaaS. It's a little tricky to clearly define this 
difference in conversation, and I have noticed that quite a few people are 
having the same issue differentiating. In a small group, having quite a few 
people not on the same page is a bit scary, so maybe we need to really sit down 
and map this out so everyone is together one way or the other.

[Susanne] Ok, now I am confused… But I agree with you that it needs to focus on 
our use cases. I remember us discussing Octavia being the reference 
implementation for OpenStack LBaaS (whatever that is). Has that changed while I 
was on vacation?

[Adam] I believe that having the Octavia "driver" (not the Octavia codebase 
itself, technically) become the reference implementation for Neutron-LBaaS is 
still the plan in my eyes. The Octavia Driver in Neutron-LBaaS is a separate 
bit of code from the actual Octavia project, similar to the way the A10 driver 
is a separate bit of code from the A10 appliance. To do that though, we need 
Octavia to be fairly close to fully functional. I believe we can do this 
because even though the reference driver would then require an additional 
service to run, what it requires is still fully-open-source and (by way of our 
plan) available as part of OpenStack core.

--Adam

https://keybase.io/rm_you


From: Susanne Balle mailto:sleipnir...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, August 29, 2014 9:19 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas][octavia]

Stephen

See inline comments.

Susanne

-

Susanne--

I think you are conflating the difference between "OpenStack incubation" and 
"Neutron incubator." These are two very different matters and should be treated 
separately. So, addressing each one individually:

"OpenStack Incubation"
I think this has been the end-goal of Octavia all along and continues to be the 
end-goal. Under this scenario, Octavia is its own stand-alone project with its 
own PTL and core developer team, its own governance, and should eventually 
become part of the integrated OpenStack release. No project ever starts out as 
"OpenStack incubated."

[Susanne] I totally agree that the end goal is for Neutron LBaaS to become its 
own incubated project. I did miss the nuance that was pointed out by Mestery in 
an earlier email that if a Neutron incubator project wants to become a separate 
project it will have to apply for incubation again at that time. It was my 
understanding that such a Neutron incubated project would be grandfathered in, 
but again we do not have many details on the process yet.

To me Octavia is a driver, so it is very hard for me to think of it as a 
standalone project. It needs the new Neutron LBaaS v2 to function which is why 
I think of them together. This of course can change since we can add whatever 
layers we want to Octavia.

"Neutron Incubator"
This has only become a serious discussion in the last few weeks and has yet to 
land, so there are many assumptions about this which don't pan out (either 
because of purposeful design and governance decisions, or because of how this 
project actually ends up being implemented from a practical standpoint). But 
given the inherent limitations about making statements with so many unknowns, 
the following seem fairly clear from what has been shared so far:
·  Neutron incubator is the on-ramp for projects which should eventually become 
a part of Neutron itself.
·  Projects which enter the Neutron incubator on-ramp should be fairly close to 
maturity in their final form. I think the intent here is for them to live in 
incubator for 1 or 2 cycles before either being merged into Neutron core, or 
being ejected (as abandoned, or as a separate project).
·  Neutron incubator projects effectively do not have their own PTL and core 
developer team, and do not have their own governance.
[Susanne] Ok I missed the last point. In an earlier discussion Mestery implied 
that an incubated project would have at least one or two of its own cores. 
Maybe that changed between now and then.
In addition we know the following about Neutron LBaaS and Octavia:
·  It's already (informally?) agreed that the ultimate long-term place for a 
LBaaS solution is probably to be spun out into its own project, which might 
appropr

[openstack-dev] [nova] libvirt version_cap, a postmortem

2014-08-30 Thread Mark McLoughlin

Hey

The libvirt version_cap debacle continues to come up in conversation and
one perception of the whole thing appears to be:

  A controversial patch was "ninjaed" by three Red Hat nova-cores and 
  then the same individuals piled on with -2s when a revert was proposed
  to allow further discussion.

I hope it's clear to everyone why that's a pretty painful thing to hear.
However, I do see that I didn't behave perfectly here. I apologize for
that.

In order to understand where this perception came from, I've gone back
over the discussions spread across gerrit and the mailing list in order
to piece together a precise timeline. I've appended that below.

Some conclusions I draw from that tedious exercise:

 - Some people came at this from the perspective that we already have 
   a firm, unwritten policy that all code must have functional written 
   tests. Others see that "test all the things" is a worthy aspiration,
   but only one of a number of nuanced factors that need to be taken into
   account when considering the addition of
   a new feature.

   i.e. the former camp saw Dan Smith's devref addition as attempting 
   to document an existing policy (perhaps even a more forgiving 
   version of an existing policy), whereas other see it as a dramatic 
   shift to a draconian implementation of "test all the things".

 - Dan Berrange, Russell and I didn't feel like we were "ninjaing a
   controversial patch" - you can see our perspective expressed in 
   multiple places. The patch would have helped the "live snapshot" 
   issue, and has other useful applications. It does not affect the 
   broader testing debate.

   Johannes was a solitary voice expressing concerns with the patch, 
   and you could see that Dan was particularly engaged in trying to 
   address those concerns and repeating his feeling that the patch was 
   orthogonal to the testing debate.

   That all being said - the patch did merge too quickly.

 - What exacerbates the situation - particularly when people attempt to 
   look back at what happened - is how spread out our conversations 
   are. You look at the version_cap review and don't see any of the 
   related discussions on the devref policy review nor the mailing list 
   threads. Our disjoint methods of communicating contribute to 
   misunderstandings.

 - When it came to the revert, a couple of things resulted in 
   misunderstandings, hurt feelings and frayed tempers - (a) that our 
   "retrospective veto revert policy" wasn't well understood and (b) 
   a feeling that there was private, in-person grumbling about us at 
   the mid-cycle while we were absent, with no attempt to talk to us 
   directly.


To take an even further step back - successful communities like ours
require a huge amount of trust between the participants. Trust requires
communication and empathy. If communication breaks down and the pressure
we're all under erodes our empathy for each others' positions, then
situations can easily get horribly out of control.

This isn't a pleasant situation and we should all strive for better.
However, I tend to measure our "flamewars" against this:

  https://mail.gnome.org/archives/gnome-2-0-list/2001-June/msg00132.html

GNOME in June 2001 was my introduction to full-time open-source
development, so this episode sticks out in my mind. The two individuals
in that email were/are immensely capable and reasonable people, yet ...

So far, we're doing pretty okay compared to that and many other
open-source flamewars. Let's make sure we continue that way by avoiding
letting situations fester.


Thanks, and sorry for being a windbag,
Mark.

---

= July 1 =

The starting point is this review:

   https://review.openstack.org/103923

Dan Smith proposes a policy that the libvirt driver may not use libvirt
features until they have been available in Ubuntu or Fedora for at least
30 days.

The commit message mentions:

  "broken us in the past when we add a new feature that requires a newer
   libvirt than we test with, and we discover that it's totally broken
   when we upgrade in the gate."

which AIUI is a reference to the libvirt "live snapshot" issue the
previous week, which is described here:

  https://review.openstack.org/102643

where upgrading to Ubuntu Trusty meant the libvirt version in use in the
gate went from 0.9.8 to 1.2.2, which exercised the "live snapshot" code
paths in Nova for the first time and appeared to be related to some
serious gate instability (although the exact root cause wasn't
identified).

Some background on the libvirt version upgrade can be seen here:

  
http://lists.openstack.org/pipermail/openstack-dev/2014-March/thread.html#30284

= July 1 - July 8 =

Back and forth debate mostly between Dan Smith and Dan Berrange. Sean
votes +2, Dan Berrange votes -2.

= July 14 =

Russell adds his support to Dan Berrange's position, votes -2. Some
debate between Dan and Dan continues. Joe Gordon votes +2. Matt
Riedemann expresses support-

Re: [openstack-dev] Review change to nova api pretty please?

2014-08-30 Thread Alex Leonhardt
Thanks Chmouel!
Alex


On 30 August 2014 11:49, Chmouel Boudjnah  wrote:

>
> On Sat, Aug 30, 2014 at 11:28 AM, Alex Leonhardt 
> wrote:
>
>> Is there a list of things "not to send to this list" somewhere accessible
>> (link?) that I could review, to not send another (different) request by
>> mistake and possibly upset or annoy people on here?
>
>
> There is this document :
>
> https://wiki.openstack.org/wiki/MailingListEtiquette
>
> and I have added the "Review request" section there.
>
> Chmouel.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Lack of consistency in returning response from tempest clients

2014-08-30 Thread Boris Pavlovic
Sean,


> class ResponseBody(dict):
>     def __init__(self, body={}, resp=None):
>         self.update(body)
>         self.resp = resp



Are you sure that you would like to have the default value {} for the method
argument and not something like:


class ResponseBody(dict):
    def __init__(self, body=None, resp=None):
        body = body or {}
        self.update(body)
        self.resp = resp


In your case you have a side effect. Take a look at:
http://stackoverflow.com/questions/1132941/least-astonishment-in-python-the-mutable-default-argument
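
(For anyone who hasn't hit this before, a minimal demonstration of the
pitfall the link above describes -- the default object is created once, at
function definition time, and shared by every call that relies on it:)

def append_item(item, bucket=[]):
    bucket.append(item)
    return bucket

print(append_item(1))   # [1]
print(append_item(2))   # [1, 2]  <- the "fresh" default still holds the 1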

Best regards,
Boris Pavlovic


On Sat, Aug 30, 2014 at 10:08 AM, GHANSHYAM MANN 
wrote:

> +1. That will also be helpful for APIs coming up with microversions, like Nova.
>
>
> On Fri, Aug 29, 2014 at 11:56 PM, Sean Dague  wrote:
>
>> On 08/29/2014 10:19 AM, David Kranz wrote:
>> > While reviewing patches for moving response checking to the clients, I
>> > noticed that there are places where client methods do not return any
>> value.
>> > This is usually, but not always, a delete method. IMO, every rest client
>> > method should return at least the response. Some services return just
>> > the response for delete methods and others return (resp, body). Does any
>> > one object to cleaning this up by just making all client methods return
>> > resp, body? This is mostly a change to the clients. There were only a
>> > few places where a non-delete  method was returning just a body that was
>> > used in test code.
>>
>> Yair and I were discussing this yesterday. As the response correctness
>> checking is happening deeper in the code (and you are seeing more and
>> more people assigning the response object to _ ) my feeling is Tempest
>> clients should probably return a body obj that's basically:
>>
>> class ResponseBody(dict):
>>     def __init__(self, body={}, resp=None):
>>         self.update(body)
>>         self.resp = resp
>>
>> Then all the clients would have single return values, the body would be
>> the default thing you were accessing (which is usually what you want),
>> and the response object is accessible if needed to examine headers.
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Thanks & Regards
> Ghanshyam Mann
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Review change to nova api pretty please?

2014-08-30 Thread Chmouel Boudjnah
On Sat, Aug 30, 2014 at 11:28 AM, Alex Leonhardt 
wrote:

> Is there a list of things "not to send to this list" somewhere accessible
> (link?) that I could review, to not send another (different) request by
> mistake and possibly upset or annoy people on here?


There is this document :

https://wiki.openstack.org/wiki/MailingListEtiquette

and I have added the "Review request" section there.

Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Review change to nova api pretty please?

2014-08-30 Thread Alex Leonhardt
Thanks Flavio, that was sent to this list a long time before I joined, so
my apologies for not having known.

Is there a list of things "not to send to this list" somewhere accessible
(link?) that I could review, to not send another (different) request by
mistake and possibly upset or annoy people on here?

Thanks,
Alex



On 29 August 2014 10:37, Flavio Percoco  wrote:

> On 08/29/2014 07:52 AM, Alex Leonhardt wrote:
> > Hi All,
> >
> > Could someone please do the honor
> > :) https://review.openstack.org/#/c/116472/ ?
> > PEP8 failed, but thats not my fault ;) hehe
> >
>
> Please, abstain to send review requests to the mailing list.
>
> Thanks!
>
>
> http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html
>
> > Thanks!
> > Alex
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> --
> @flaper87
> Flavio Percoco
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Launch of a instance failed in juno

2014-08-30 Thread Nikesh Kumar Mahalka
Launch of an instance failed in a Juno devstack on an Ubuntu Server 14.04 virtual
machine.
I am getting the error "Host not found".
Below is part of /opt/stack/logs/screen/screen-n-cond.log:
2014-08-30 12:06:51.721 ERROR nova.scheduler.utils
[req-744ba1cf-7433-46b4-9771-9600a87e8c28 admin admin] [instance:
2a679ed7-2f60-493a-a6cf-d937f11f442b] Error from last host:
juno-devstack-server (node juno-devstack-server): [u'Traceback (most recent
call last):\n', u'  File "/opt/stack/nova/nova/compute/manager.py", line
1932, in do_build_and_run_instance\nfilter_properties)\n', u'  File
"/opt/stack/nova/nova/compute/manager.py", line 2061, in
_build_and_run_instance\ninstance_uuid=instance.uuid,
reason=six.text_type(e))\n', u'RescheduledException: Build of instance
2a679ed7-2f60-493a-a6cf-d937f11f442b was re-scheduled: not all arguments
converted during string formatting\n']
2014-08-30 12:06:51.724 INFO oslo.messaging._drivers.impl_rabbit
[req-744ba1cf-7433-46b4-9771-9600a87e8c28 admin admin] Connecting to AMQP
server on 192.168.2.153:5672
2014-08-30 12:06:51.736 INFO oslo.messaging._drivers.impl_rabbit
[req-744ba1cf-7433-46b4-9771-9600a87e8c28 admin admin] Connected to AMQP
server on 192.168.2.153:5672
2014-08-30 12:06:51.763 WARNING nova.scheduler.driver
[req-744ba1cf-7433-46b4-9771-9600a87e8c28 admin admin] [instance:
2a679ed7-2f60-493a-a6cf-d937f11f442b] NoValidHost exception with message:
'No valid host was found.'
2014-08-30 12:06:51.763 WARNING nova.scheduler.driver
[req-744ba1cf-7433-46b4-9771-9600a87e8c28 admin admin] [instance:
2a679ed7-2f60-493a-a6cf-d937f11f442b] Setting instance to ERROR state.


I mailed about this earlier as well, and the reply I got was "Compute nodes do not support the QEMU
hypervisor from Juno, so you should not deploy a compute node on a VM".

Is there any link in support of this answer?


Another observation is below:

Before ./stack.sh, the contents of the hosts file are:
vi /etc/hosts
127.0.0.1   localhost
192.168.2.153   juno-devstack-server
#127.0.1.1  juno-devstack-server

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

After ./stack.sh, the contents of the hosts file are:

127.0.0.1   localhost  *juno-devstack-server*
192.168.2.153   juno-devstack-server
#127.0.1.1  juno-devstack-server

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters


Regards
Nikesh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev