Re: [openstack-dev] unable to run unit tests

2014-08-23 Thread daya kamath
fwiw, clarkb advised me to update testrepository with the following -

bzr clone lp:testrepository


to get more meaningful error log output.
I have version testrepository 0.0.18 after I updated.

thanks!
daya



 From: Alex Leonhardt 
To: OpenStack Development Mailing List (not for usage questions) 
 
Sent: Sunday, August 24, 2014 2:39 AM
Subject: Re: [openstack-dev] unable to run unit tests
 


Thanks! So the real error is : 

UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 36: 
ordinal not in range(128)


Any clues ? :\ 

Alex




On 23 August 2014 21:46, Doug Wiegley  wrote:

That’s testr’s friendly way of telling you that you have an import error.
>
>Run it with the py26 environment and you’ll get a useful error.
>
>No, don’t ask me why.
>
>Doug
>
>
>
>
>On 8/23/14, 2:33 PM, "Alex Leonhardt"  wrote:
>
>>Hi All,
>>
>>
>>after some fighting with installing all the bits - I can finally execute
>>run_tests.sh but I still cannot seem to get it to work properly. This is
>>what it returns:
>>
>>
>>Non-zero exit code (2) from test listing.
>>stdout='\xb3)\x01@B8nova.tests.test_matchers.TestDictListMatches.test__str
>>__\xde|\xe2\x9e\xb3)\x01@P@Enova.tests.test_matchers.TestDictListMatches.t
>>est_describe_difference\xfeL\xc1\xc4\xb3)\x01@I?nova.tests.test_matchers.T
>>estDictListMatches.test_matches_match\x93\xfau\xb5\xb3)\x01@M@Bnova.tests.
>>test_matchers.TestDictListMatches.test_mismatch_details\x18`>_\xb3)\x01=4n
>>ova.tests.test_matchers.TestDictMatches.test__str__\xca\x9bK\xb9\xb3)\x01@
>>L@Anova.tests.test_matchers.TestDictMatches.test_describe_difference\x94\x
>>b6\x89\\\xb3)\x01@E;nova.tests.test_matchers.TestDictMatches.test_matches_
>>match^V\xc2v\xb3)\x01@H>nova.tests.test_matchers.TestDictMatches.test_mism
>>atch_details\x1bV\x9e\xff\xb3)\x01=4nova.tests.test_matchers.TestIsSubDict
>>Of.test__str__\xb8\x06h(\xb3)\x01@L@Anova.tests.test_matchers.TestIsSubDic
>>tOf.test_describe_difference\xd2\xdd3\xea\xb3)\x01@E;nova.tests.test_match
>>ers.TestIsSubDictOf.test_matches_match~qC\xd0\xb3)\x01@H>nova.tests.test_m
>>atchers.TestIsSubDictOf.test_mismatch_detailsA1\xa3l\xb3)\x01<3nova.tests.
>>test_matchers.TestXMLMatches.test__str__C\xd1E\x13\xb3)\x01@K@@nova.tests.
>>test_matchers.TestXMLMatches.test_describe_difference\xe0\xa0\xb2\x80\xb3)
>>\x01@D:nova.tests.test_matchers.TestXMLMatches.test_matches_matchDT\xe9h\x
>>b3)\x01@G=nova.tests.test_matchers.TestXMLMatches.test_mismatch_detailse\x
>>16\xb1\xad\xb3
>> `\x80N\xdb\x17text/plain;charset=utf8\rimport
>>errors\x80N\xa8nova.tests.api.ec2.test_api\nnova.tests.api.ec2.test_cinder
>>_cloud\nnova.tests.api.ec2.test_cloud\nnova.tests.api.ec2.test_ec2_validat
>>e\nnova.tests.api.ec2.test_ec2utils\nnova.tests.api.ec2.test_error_respons
>>e\nnova.tests.api.ec2.test_faults\nnova.tests.api.ec2.test_middleware\nnov
>>a.tests.api.openstack.compute.contrib.test_admin_actions\nnova.tests.api.o
>>penstack.compute.contrib.test_agents\nnova.tests.api.openstack.compute.con
>>trib.test_aggregates\nnova.tests.api.openstack.compute.contrib.test_attach
>>_interfaces\nnova.tests.api.openstack.compute.contrib.test_availability_zo
>>ne\nnova.tests.api.openstack.compute.contrib.test_baremetal_nodes\nnova.te
>>sts.api.openstack.compute.contrib.test_cells\nnova.tests.api.openstack.com
>>pute.contrib.test_certificates\nnova.tests.api.openstack.compute.contrib.t
>>est_cloudpipe\nnova.tests.api.openstack.compute.contrib.test_cloudpipe_upd
>>ate\nnova.tests.api.openstack.compute.contrib.test_config_drive\nnova.test
>>s.api.openstack.compute.contrib.test_console_auth_tokens\nnova.tests.api.o
>>penstack.compute.contrib.test_console_output\nnova.tests.api.openstack.com
>>pute.contrib.test_consoles\nnova.tests.api.openstack.compute.contrib.test_
>>createserverext\nnova.tests.api.openstack.compute.contrib.test_deferred_de
>>lete\nnova.tests.api.openstack.compute.contrib.test_disk_config\nnova.test
>>s.api.openstack.compute.contrib.test_evacuate\nnova.tests.api.openstack.co
>>mpute.contrib.test_extended_availability_zone\nnova.tests.api.openstack.co
>>mpute.contrib.test_extended_evacuate_find_host\nnova.tests.api.openstack.c
>>ompute.contrib.test_extended_hypervisors\nnova.tests.api.openstack.compute
>>.contrib.test_extended_ips\nnova.tests.api.openstack.compute.contrib.test_
>>extended_ips_mac\nnova.tests.api.openstack.compute.contrib.test_extended_r
>>escue_with_image\nnova.tests.api.openstack.compute.contrib.test_extended_s
>>erver_attributes\nnova.tests.api.openstack.compute.contrib.test_extended_s
>>tatus\nnova.tests.api.openstack.compute.contrib.test_extended_virtual_inte
>>rfaces_net\nnova.tests.api.openstack.compute.contrib.test_extended_volumes
>>\nnova.tests.api.openstack.compute.contrib.test_fixed_ips\nnova.tests.api.
>>openstack.compute.contrib.test_flavor_access\nnova.tests.api.openstack.com
>>pute.contrib.test_flavor_disabled\nnova.tests.api.openstack.compute.contri
>>b.test_flavor_manage\nnova.tests.api.opensta

[openstack-dev] [Heat] [Keystone] Heat cfn-push-stats failed with '403 SignatureDoesNotMatch', it may be Keystone problem.

2014-08-23 Thread Yukinori Sagara
Hi.


I am trying Heat instance HA, using RDO Icehouse.

After the instance boots, it pushes its own stats to a Heat alarm with the
cfn-push-stats command.

But cfn-push-stats always failed with the error '403 SignatureDoesNotMatch';
this message is output to /var/log/cfn-push-stats.log.


I debugged the client- and server-side code (i.e. cfn-push-stats, boto, heat,
keystone, keystoneclient) and found a curious mismatch between boto and
keystoneclient in the signature calculation.


Here is the result of my debugging and code examination.


* Client side

cfn-push-stats uses the heat-cfntools library, and heat-cfntools makes the 'POST'
request with boto.

boto performs the signature calculation. [1]

For the signature calculation, it first constructs a 'CanonicalRequest' by
joining several strings, and then creates a digest hash of the CanonicalRequest
for the signature.

The CanonicalRequest contains the CanonicalQueryString, which is derived from
the URL query string.


CanonicalRequest =
  HTTPRequestMethod + '\n' +
  CanonicalURI + '\n' +
  CanonicalQueryString + '\n' +
  CanonicalHeaders + '\n' +
  SignedHeaders + '\n' +
  HexEncode(Hash(RequestPayload))
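
(For reference, a minimal sketch of how the CanonicalRequest feeds into the
final SigV4 signature; toy values and a simplified header set are assumed,
this is not the actual boto or keystoneclient code:)

import hashlib
import hmac

def _hmac(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

canonical_request = '\n'.join([
    'POST',                                    # HTTPRequestMethod
    '/v1/',                                    # CanonicalURI
    '',                                        # CanonicalQueryString
    'host:heat-cfn.example\n',                 # CanonicalHeaders
    'host',                                    # SignedHeaders
    hashlib.sha256(b'Action=x').hexdigest(),   # HexEncode(Hash(RequestPayload))
])

string_to_sign = '\n'.join([
    'AWS4-HMAC-SHA256',
    '20140823T000000Z',
    '20140823/RegionOne/cloudformation/aws4_request',
    hashlib.sha256(canonical_request.encode('utf-8')).hexdigest(),
])

signing_key = _hmac(_hmac(_hmac(_hmac(b'AWS4' + b'secret-key', '20140823'),
                                'RegionOne'), 'cloudformation'), 'aws4_request')
print(hmac.new(signing_key, string_to_sign.encode('utf-8'),
               hashlib.sha256).hexdigest())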


**The AWS original tool (aws-cfn-bootstrap-1.4) and boto use an empty string as
the CanonicalQueryString when the request is a POST.**


The AWS original tool's code is as follows:


cfnbootstrap/aws_client.py

110 class V4Signer(Signer):

144         (canonical_headers, signed_headers) = self._canonicalize_headers(new_headers)
145         canonical_request += canonical_headers + '\n' + signed_headers + '\n'
146         canonical_request += hashlib.sha256(self._construct_query(params).encode('utf-8') if verb == 'POST' else '').hexdigest()


boto's code is as follows:


boto/auth.py

283 class HmacAuthV4Handler(AuthHandler, HmacKeys):

393     def canonical_request(self, http_request):
394         cr = [http_request.method.upper()]
395         cr.append(self.canonical_uri(http_request))
396         cr.append(self.canonical_query_string(http_request))
397         headers_to_sign = self.headers_to_sign(http_request)
398         cr.append(self.canonical_headers(headers_to_sign) + '\n')
399         cr.append(self.signed_headers(headers_to_sign))
400         cr.append(self.payload(http_request))
401         return '\n'.join(cr)

337     def canonical_query_string(self, http_request):
338         # POST requests pass parameters in through the
339         # http_request.body field.
340         if http_request.method == 'POST':
341             return ""
342         l = []
343         for param in sorted(http_request.params):
344             value = boto.utils.get_utf8_value(http_request.params[param])
345             l.append('%s=%s' % (urllib.quote(param, safe='-_.~'),
346                                 urllib.quote(value, safe='-_.~')))
347         return '&'.join(l)


* Server side

heat-api-cfn queries keystone in order to check request authorization.

keystone uses keystoneclient to check the EC2-format request signature.


Here, **keystoneclient uses the (non-empty) query string as the
CanonicalQueryString, even though the request is a POST,**

and creates a digest hash of that CanonicalRequest for the signature calculation.


keystoneclient's code is as follows:


keystoneclient/contrib/ec2/utils.py

 28 class Ec2Signer(object):

154     def _calc_signature_4(self, params, verb, server_string, path, headers,
155                           body_hash):
156         """Generate AWS signature version 4 string."""

235         # Create canonical request:
236         # http://docs.aws.amazon.com/general/latest/gr/
237         # sigv4-create-canonical-request.html
238         # Get parameters and headers in expected string format
239         cr = "\n".join((verb.upper(), path,
240                         self._canonical_qs(params),
241                         canonical_header_str(),
242                         auth_param('SignedHeaders'),
243                         body_hash))

125     @staticmethod
126     def _canonical_qs(params):
127         """Construct a sorted, correctly encoded query string as required for
128         _calc_signature_2 and _calc_signature_4.
129         """
130         keys = list(params)
131         keys.sort()
132         pairs = []
133         for key in keys:
134             val = Ec2Signer._get_utf8_value(params[key])
135             val = urllib.parse.quote(val, safe='-_~')
136             pairs.append(urllib.parse.quote(key, safe='') + '=' + val)
137         qs = '&'.join(pairs)
138         return qs


So the CanonicalRequest (and therefore the signature) computed by boto (client
side) differs from the one computed by keystoneclient (server side), which is
why cfn-push-stats always fails with the error '403 SignatureDoesNotMatch'.
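
(A toy illustration of the mismatch, with made-up parameters; not the real
boto or keystoneclient code:)

try:
    from urllib.parse import quote   # Python 3
except ImportError:
    from urllib import quote         # Python 2

params = {'Action': 'PutMetricData', 'Namespace': 'system/linux'}

# Client side (aws-cfn-bootstrap / boto): empty CanonicalQueryString for POST.
client_qs = ''

# Server side (keystoneclient Ec2Signer._canonical_qs): always the sorted,
# encoded query string.
server_qs = '&'.join(quote(k, safe='') + '=' + quote(params[k], safe='-_~')
                     for k in sorted(params))

print(client_qs)   # ''
print(server_qs)   # 'Action=PutMetricData&Namespace=system%2Flinux'

# The query string is one component of the CanonicalRequest, so the two sides
# hash different CanonicalRequests, derive different signatures, and the
# request is rejected with 403 SignatureDoesNotMatch.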


I wrote a patch to resolve the CanonicalQueryString mismatch.

My patch follows the AWS original tool and boto: if the request is a POST,
the 'CanonicalQueryString' is treated as an empty string.
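
(A minimal sketch of that behaviour, illustrative only and not the actual
patch:)

try:
    from urllib.parse import quote   # Python 3
except ImportError:
    from urllib import quote         # Python 2

def canonical_query_string(params, verb):
    # Match aws-cfn-bootstrap/boto: POST parameters travel in the request
    # body, so the CanonicalQueryString is empty for POST requests.
    if verb.upper() == 'POST':
        return ''
    return '&'.join(quote(key, safe='') + '=' + quote(params[key], safe='-_~')
                    for key in sorted(params))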


With my patch, Heat instance HA works fine.


This bug affects Heat and Keystone, but patch is 

Re: [openstack-dev] Adding GateFailureFix tag to commit messages

2014-08-23 Thread Joe Gordon
On Fri, Aug 22, 2014 at 2:11 AM, Daniel P. Berrange 
wrote:

> On Thu, Aug 21, 2014 at 09:02:17AM -0700, Armando M. wrote:
> > Hi folks,
> >
> > According to [1], we have ways to introduce external references to commit
> > messages.
> >
> > These are useful to mark certain patches and their relevance in the
> context
> > of documentation, upgrades, etc.
> >
> > I was wondering if it would be useful considering the addition of another
> > tag:
> >
> > GateFailureFix
> >
> > The objective of this tag, mainly for consumption by the review team,
> would
> > be to make sure that some patches get more attention than others, as they
> > affect the velocity of how certain critical issues are addressed (and
> gate
> > failures affect everyone).
> >
> > As for machine consumption, I know that some projects use the
> > 'gate-failure' tag to categorize LP bugs that affect the gate. The use
> of a
> > GateFailureFix tag in the commit message could make the tagging
> automatic,
> > so that we can keep a log of what all the gate failures are over time.
> >
> > Not sure if this was proposed before, and I welcome any input on the
> matter.
>
> We've tried a number of different tags in git commit messages before, in
> an attempt to help prioritization of reviews and unfortunately none of them
> have been particularly successful so far.  I think a key reason for this
> is that tags in the commit message are invisible when people are looking at
> lists of possible changes to choose for review. Whether in the gerrit web
> UI reports / dashboards or in command line tools like my own gerrymander,
> reviewers are looking at lists of changes and primarily choosing which
> to review based on the subject line, or other explicitly recorded metadata
> fields. You won't typically look at the commit message until you've already
> decided you want to review the change. So while GateFailureFix may cause
> me to pay more attention during the review of it, it probably won't make
> me start review any sooner.
>


Gerrit supports searching by commit message (although the search is a little
odd sometimes), and these queries can be used in dashboards.

message:'MESSAGE'

Changes that match *MESSAGE* arbitrary string in the commit message body.



https://review.openstack.org/Documentation/user-search.html

Examples:

https://review.openstack.org/#/q/message:%22UpgradeImpact%22+is:open,n,z
https://review.openstack.org/#/q/message:%22DocImpact%22+is:open,n,z


>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding GateFailureFix tag to commit messages

2014-08-23 Thread Matt Riedemann



On 8/21/2014 11:55 AM, Sean Dague wrote:

On 08/21/2014 11:02 AM, Armando M. wrote:

Hi folks,

According to [1], we have ways to introduce external references to
commit messages.

These are useful to mark certain patches and their relevance in the
context of documentation, upgrades, etc.

I was wondering if it would be useful considering the addition of
another tag:

GateFailureFix

The objective of this tag, mainly for consumption by the review team,
would be to make sure that some patches get more attention than others,
as they affect the velocity of how certain critical issues are addressed
(and gate failures affect everyone).

As for machine consumption, I know that some projects use the
'gate-failure' tag to categorize LP bugs that affect the gate. The use
of a GateFailureFix tag in the commit message could make the tagging
automatic, so that we can keep a log of what all the gate failures are
over time.

Not sure if this was proposed before, and I welcome any input on the matter.


A concern with this approach is it's pretty arbitrary, and not always
clear which bugs are being addressed and how severe they are.

An idea that came up in the Infra/QA meetup was to build a custom review
dashboard based on the bug list in elastic recheck. That would also
encourage people to categorize these bugs through that system, and I
think provide a virtuous circle around identifying the issues at hand.

I think Joe Gordon had a first pass at this, but I'd be more interested
in doing it this way because it means the patch author fixing a bug just
needs to know they are fixing the bug. Whether or not it's currently a
gate issue would be decided not by the commit message (static) but by
our system that understands what are the gate issues *right now* (dynamic).

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Joe's change has merged:

https://review.openstack.org/#/c/109144/

There should be an "Open reviews" section in the elastic-recheck status 
page now:


http://status.openstack.org/elastic-recheck/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding GateFailureFix tag to commit messages

2014-08-23 Thread Matt Riedemann



On 8/22/2014 4:11 AM, Daniel P. Berrange wrote:

On Thu, Aug 21, 2014 at 09:02:17AM -0700, Armando M. wrote:

Hi folks,

According to [1], we have ways to introduce external references to commit
messages.

These are useful to mark certain patches and their relevance in the context
of documentation, upgrades, etc.

I was wondering if it would be useful considering the addition of another
tag:

GateFailureFix

The objective of this tag, mainly for consumption by the review team, would
be to make sure that some patches get more attention than others, as they
affect the velocity of how certain critical issues are addressed (and gate
failures affect everyone).

As for machine consumption, I know that some projects use the
'gate-failure' tag to categorize LP bugs that affect the gate. The use of a
GateFailureFix tag in the commit message could make the tagging automatic,
so that we can keep a log of what all the gate failures are over time.

Not sure if this was proposed before, and I welcome any input on the matter.


We've tried a number of different tags in git commit messages before, in
an attempt to help prioritization of reviews and unfortunately none of them
have been particularly successful so far.  I think a key reason for this
is that tags in the commit message are invisible when people are looking at
lists of possible changes to choose for review. Whether in the gerrit web
UI reports / dashboards or in command line tools like my own gerrymander,
reviewers are looking at lists of changes and primarily choosing which
to review based on the subject line, or other explicitly recorded metadata
fields. You won't typically look at the commit message until you've already
decided you want to review the change. So while GateFailureFix may cause
me to pay more attention during the review of it, it probably won't make
me start review any sooner.

Regards,
Daniel



Yup, I had the same thoughts.  The TrivialFix tag idea is similar and 
never took off, and I personally don't like that kind of tag anyway 
since it's very open to interpretation.


And if GateFailureFix wasn't going to be tied to bug fixes for known 
(tracked in elastic-recheck) failures, but just high-priority fixes for 
a given project, then it's false advertising for the change.  Gate 
failures typically affect all projects, whereas high-priority fixes for 
a project might be just isolated to that project, e.g. the recent 
testtools 0.9.36 setUp/tearDown and tox hashseed unit test failures are 
project-specific and high priority for the project to fix.


If you want a simple way to see high priority bugs that have code out 
for review, Tracy Jones has a nice page created for Nova [1].


[1] http://54.201.139.117/nova-bugs.html

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] new testtools breaking gate

2014-08-23 Thread Matt Riedemann



On 8/22/2014 12:22 PM, Clark Boylan wrote:

On Fri, Aug 22, 2014, at 05:55 AM, Ihar Hrachyshka wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi all,

this week has been quite bumpy for unit testing in the gate. First, there was the
upgrade to a new 'tox' version that broke quite a few branches.


I did a ton of work to make the tox upgrade go smoothly because we knew
it would be somewhat painful. About a month ago I sent mail to this list
[0] describing the problem. This thread included a pointer to the bug
filed to track this [1] and example work around changes [2] which I
wrote and proposed for as many projects and branches as I had time to
test at that point.

Updating tox to 1.7.2 is important for a couple reasons. We get a lot of
confused developers wondering why using tox doesn't work to run their
tests when all of our documentation says just run tox. Well you needed a
special version (1.6.1). Communicating that to everyone that tries to
run tox is hard.

It is also important because tox adds new features like the hashseed
randomization. This is the cause of our problems but it is exposing real
bugs in openstack [3]. We should be fixing these issues and hopefully my
proposed workarounds are only temporary.
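
(As a toy example of the class of bug this exposes -- nothing to do with any
particular OpenStack test -- a test that asserts on set or dict iteration
order will pass for some hash seeds and fail for others:)

import os
import subprocess
import sys

snippet = "print(list({'alpha', 'beta', 'gamma'}))"
for seed in ('0', '1', '2'):
    out = subprocess.check_output(
        [sys.executable, '-c', snippet],
        env=dict(os.environ, PYTHONHASHSEED=seed))
    print(seed, out.decode().strip())   # the order varies with the seed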

I decided to push ahead [4] and upgrade tox a couple days ago for a
couple reasons. This is an important change as illustrated above and
feature freeze and stabilization are rapidly approaching and this
probably needed to get in soon to have a chance at getting in at all. I
felt this was appropriate because I had done a ton of work prior to make
things go as smoothly as possible.

Where things did not go smoothly was on the reviews for my workaround.
Some changes were basically ignored [5]; others ran into procedural
paperwork associated with stable branches that is not quite appropriate
for changes of this type [6][7]. I get that generally we only want to
backport things from master and that we have some specific way to cherry
pick things, but this type of change is to address issues with
stable/foo directly and has nothing to do with master. I did eventually
go through the "backport" dance for most of these changes despite this
not actually being a true backport.

[0]
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041283.html
[1] https://bugs.launchpad.net/cinder/+bug/1348818
[2] https://review.openstack.org/#/c/109700/
[3]
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041496.html
[4]
http://lists.openstack.org/pipermail/openstack-dev/2014-August/042010.html
[5] https://review.openstack.org/#/c/109749/
[6] https://review.openstack.org/#/c/109759/
[7] https://review.openstack.org/#/c/109750/

With all of that out of the way, are there suggestions for how we can do
this better next time? Do we need more time (I gave us about 4 weeks
which seemed like plenty to me)? Perhaps I should send more reminder
emails? Feedback is very welcome.

Thanks,
Clark


And today a new testtools, 0.9.36, was released and was caught by the gate,
which resulted in the following unit test failures in multiple projects:

TestCase.setUp was already called. Do not explicitly call setUp from
your tests. In your own setUp, use super to call the base setUp.

All branches are affected: havana, icehouse, and master.

This is because the following check was released with the new version
of the library:
https://github.com/testing-cabal/testtools/commit/5c3b92d90a64efaecdc4010a98002bfe8b888517

And the temporary fix is to merge the version pin patch in global
requirements, backport it to stable branches, and merge the updates
from the OpenStack Proposal Bot to all affected projects. The patch for
master requirements is: https://review.openstack.org/#/c/116267/

In the meantime, projects will need to fix their tests not to call
setUp() and tearDown() twice. This will be the requirement to unpin
the version of the library.
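
(A minimal before/after sketch of the pattern that trips the new check;
illustrative, not taken from any particular project:)

import testtools

class BrokenTest(testtools.TestCase):
    def _prepare(self):
        # Anti-pattern: a helper that re-runs setUp. With testtools >= 0.9.36
        # the second call raises "TestCase.setUp was already called."
        self.setUp()
        self.resource = object()

    def test_something(self):
        self._prepare()
        self.assertIsNotNone(self.resource)

class FixedTest(testtools.TestCase):
    def setUp(self):
        # setUp runs exactly once per test; do per-test initialisation here
        # and delegate to the base class via super().
        super(FixedTest, self).setUp()
        self.resource = object()

    def test_something(self):
        self.assertIsNotNone(self.resource)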

So, please review, backport, and make sure it lands in project
requirements files.

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJT9z3cAAoJEC5aWaUY1u57DtsIAOFtK2i4zkMcC79nOrc5w9DW
oO2b064eyLwwbQEaWeeIL2JBSLBxqNV5zeN0eZB3Sq7LQLv0oPaUNTMFG2+gvask
JHCTAGKz776Rt7ptcfmpHURwcT9L//+1HXvd+ADtO0sYKwgmvaBF7aA4WFa4TseG
JCnAsi5OiOZZgTo/6U1B55srHkZr0DWxqTkKKysZJbR2Pr/ZT9io8yu9uucaz9VH
uNLfggtCcjGgccl7IqSUtVRf3lsSGuvBAxVqMszSFJQmFCjy2E26GfsTApp9KXtQ
gbCpEns8QCnt6KF9rygjHLMbYikjbITuUfSL2okZelX9VpKNx0CS29K/tRg5/BA=
=YavB
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Clark, given all the cat-herding involved, I think you did a nice job.  I 
actually thought the tox/hashseed thing was wrapped up until the other 
day when tempest/elastic-rechec

Re: [openstack-dev] [neutron][cisco] Cisco Nexus requires patched ncclient

2014-08-23 Thread Maru Newby

On Aug 14, 2014, at 1:55 PM, Ihar Hrachyshka  wrote:

> Signed PGP part
> FYI: I've uploaded a review for openstack/requirements to add the
> upstream module into the list of potential dependencies [1]. Once it's
> merged, I'm going to introduce this new requirement for Neutron.

The only reason someone would install ncclient is that they have a Brocade or 
Cisco solution they want to integrate with, and I don't think that justifies 
making Neutron depend on the library.


Maru


> [1]: https://review.openstack.org/114213
> 
> /Ihar
> 
> On 12/08/14 16:27, Ihar Hrachyshka wrote:
> > Hey all,
> >
> > as per [1], Cisco Nexus ML2 plugin requires a patched version of
> > ncclient from github. I wonder:
> >
> > - whether this information is still current; - why don't we depend
> > on ncclient thru our requirements.txt file.
> >
> > [1]: https://wiki.openstack.org/wiki/Neutron/ML2/MechCiscoNexus
> >
> > Cheers, /Ihar
> >
> > ___ OpenStack-dev
> > mailing list OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-23 Thread Clint Byrum
Excerpts from Dolph Mathews's message of 2014-08-22 09:45:37 -0700:
> On Fri, Aug 22, 2014 at 11:32 AM, Zane Bitter  wrote:
> 
> > On 22/08/14 11:19, Thierry Carrez wrote:
> >
> >> Zane Bitter wrote:
> >>
> >>> On 22/08/14 08:33, Thierry Carrez wrote:
> >>>
>  We also
>  still need someone to have the final say in case of deadlocked issues.
> 
> >>>
> >>> -1 we really don't.
> >>>
> >>
> >> I know we disagree on that :)
> >>
> >
> > No problem, you and I work in different programs so we can both get our
> > way ;)
> >
> >
> >  People say we don't have that many deadlocks in OpenStack for which the
>  PTL ultimate power is needed, so we could get rid of them. I'd argue
>  that the main reason we don't have that many deadlocks in OpenStack is
>  precisely *because* we have a system to break them if they arise.
> 
> >>>
> >>> s/that many/any/ IME and I think that threatening to break a deadlock by
> >>> fiat is just as bad as actually doing it. And by 'bad' I mean
> >>> community-poisoningly, trust-destroyingly bad.
> >>>
> >>
> >> I guess I've been active in too many dysfunctional free and open source
> >> software projects -- I put a very high value on the ability to make a
> >> final decision. Not being able to make a decision is about as
> >> community-poisoning, and also results in inability to make any
> >> significant change or decision.
> >>
> >
> > I'm all for getting a final decision, but a 'final' decision that has been
> > imposed from outside rather than internalised by the participants is...
> > rarely final.
> >
> 
> The expectation of a PTL isn't to stomp around and make "final" decisions,
> it's to step in when necessary and help both sides find the best solution.
> To moderate.
> 

Have we had many instances where a project's community divided into
two camps and dug in to the point where they actually needed active
moderation? And in those cases, was the PTL not already on one side of
said argument? I'd prefer specific examples here.

> >
> > I have yet to see a deadlock in Heat that wasn't resolved by better
> > communication.
> 
> 
> Moderation == bettering communication. I'm under the impression that you
> and Thierry are agreeing here, just from opposite ends of the same spectrum.
> 

I agree as well. PTL is a servant of the community, as any good leader
is. If the PTL feels they have to drop the hammer, or if an impasse is
reached where they are asked to, it is because they have failed to get
everyone communicating effectively, not because "that's their job."

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-23 Thread Maru Newby

On Aug 20, 2014, at 6:28 PM, Salvatore Orlando  wrote:

> Some comments inline.
> 
> Salvatore
> 
> On 20 August 2014 17:38, Ihar Hrachyshka  wrote:
> 
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA512
>> 
>> Hi all,
>> 
>> I've read the proposal for incubator as described at [1], and I have
>> several comments/concerns/suggestions to this.
>> 
>> Overall, the idea of giving some space for experimentation that does
>> not alienate parts of community from Neutron is good. In that way, we
>> may relax review rules and quicken turnaround for preview features
>> without losing control of those features too much.
>> 
>> Though the way it's to be implemented leaves several concerns, as follows:
>> 
>> 1. From packaging perspective, having a separate repository and
>> tarballs seems not optimal. As a packager, I would better deal with a
>> single tarball instead of two. Meaning, it would be better to keep the
>> code in the same tree.
>> 
>> I know that we're afraid of shipping the code for which some users may
>> expect the usual level of support and stability and compatibility.
>> This can be solved by making it explicit that the incubated code is
>> unsupported and used at the user's own risk. 1) The experimental code
>> probably wouldn't be installed unless explicitly requested, and 2) it
>> would be put in a separate namespace (like 'preview', 'experimental',
>> or 'staging', as they call it in the Linux kernel world [2]).
>> 
>> This would facilitate keeping commit history instead of losing it
>> during graduation.
>> 
>> Yes, I know that people don't like to be called experimental or
>> preview or incubator... And maybe neutron-labs repo sounds more
>> appealing than an 'experimental' subtree in the core project. Well,
>> there are lots of EXPERIMENTAL features in Linux kernel that we
>> actively use (for example, btrfs is still considered experimental by
>> Linux kernel devs, while being exposed as a supported option to RHEL7
>> users), so I don't see how that naming concern is significant.
>> 
> 
> I think this is the whole point of the discussion around the incubator and
> the reason for which, to the best of my knowledge, no proposal has been
> accepted yet.
> 
>> 
>> 2. If those 'extras' are really moved into a separate repository and
>> tarballs, this will raise questions on whether packagers even want to
>> cope with it before graduation. When it comes to supporting another
>> build manifest for a piece of code of unknown quality, this is not the
>> same as just cutting part of the code into a separate
>> experimental/labs package. So unless I'm explicitly asked to package
>> the incubator, I wouldn't probably touch it myself. This is just too
>> much effort (btw the same applies to moving plugins out of the tree -
>> once it's done, distros will probably need to reconsider which plugins
>> they really want to package; at the moment, those plugins do not
>> require lots of time to ship them, but having ~20 separate build
>> manifests for each of them is just too hard to handle without clear
>> incentive).
>> 
> 
> One reason instead for moving plugins out of the main tree is allowing
> their maintainers to have full control over them.
> If there was a way with gerrit or similars to give somebody rights to merge
> code only on a subtree I probably would not even consider the option of
> moving plugin and drivers away. From my perspective it's not that I don't
> want them in the main tree, it's that I don't think it's fair for core team
> reviewers to take responsibility for approving code that they can't fully
> test (3rd party CI helps, but is still far from having a decent level of
> coverage).
> 
> 
>> 
>> 3. The fact that neutron-incubator is not going to maintain any stable
>> branches for security fixes and major failures concerns me too. In
>> downstream, we don't generally ship the latest and greatest from PyPI.
>> Meaning, we'll need to maintain our own downstream stable branches for
>> major fixes. [BTW we already do that for python clients.]
>> 
>> 
> This is a valid point. We need to find an appropriate trade off. My
> thinking was that incubated projects could be treated just like client
> libraries from a branch perspective.
> 
> 
> 
>> 4. Another unclear part of the proposal is that notion of keeping
>> Horizon and client changes required for incubator features in
>> neutron-incubator. AFAIK the repo will be governed by Neutron Core
>> team, and I doubt the team is ready to review Horizon changes (?). I
>> think I don't understand how we're going to handle that. Can we just
>> postpone Horizon work till graduation?
>> 
>> 
> I too do not think it's a great idea, mostly because there will be horizon
> bits not shipped with horizon, and not verified by horizon core team.
> I think it would be ok to have horizon support for neutron incubator. It
> won't be the first time that support for experimental features is added in
> horizon.
> 
> 
> 5. The wiki page says that graduation will require

Re: [openstack-dev] unable to run unit tests

2014-08-23 Thread Alex Leonhardt
Ah, I was still missing some dependencies / libs / headers / etc., so it's
all good now and I can run tests etc.

Alex



On 23 August 2014 22:09, Alex Leonhardt  wrote:

> Thanks! So the real error is :
>
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 36:
> ordinal not in range(128)
>
>
> Any clues ? :\
>
> Alex
>
>
>
> On 23 August 2014 21:46, Doug Wiegley  wrote:
>
>> That’s testr’s friendly way of telling you that you have an import error.
>>
>> Run it with the py26 environment and you’ll get a useful error.
>>
>> No, don’t ask me why.
>>
>> Doug
>>
>>
>>
>> On 8/23/14, 2:33 PM, "Alex Leonhardt"  wrote:
>>
>> >Hi All,
>> >
>> >
>> >after some fighting with installing all the bits - I can finally execute
>> >run_tests.sh but I still cannot seem to get it to work properly. This is
>> >what it returns:
>> >
>> >
>> >Non-zero exit code (2) from test listing.
>>
>> >stdout='\xb3)\x01@B8nova.tests.test_matchers.TestDictListMatches.test__str
>> >__\xde|\xe2\x9e\xb3)\x01@P
>> @Enova.tests.test_matchers.TestDictListMatches.t
>> >est_describe_difference\xfeL\xc1\xc4\xb3)\x01@I
>> ?nova.tests.test_matchers.T
>> >estDictListMatches.test_matches_match\x93\xfau\xb5\xb3)\x01@M
>> @Bnova.tests.
>>
>> >test_matchers.TestDictListMatches.test_mismatch_details\x18`>_\xb3)\x01=4n
>>
>> >ova.tests.test_matchers.TestDictMatches.test__str__\xca\x9bK\xb9\xb3)\x01@
>> >L@Anova.tests.test_matchers.TestDictMatches.test_describe_difference
>> \x94\x
>> >b6\x89\\\xb3)\x01@E
>> ;nova.tests.test_matchers.TestDictMatches.test_matches_
>> >match^V\xc2v\xb3)\x01@H
>> >nova.tests.test_matchers.TestDictMatches.test_mism
>>
>> >atch_details\x1bV\x9e\xff\xb3)\x01=4nova.tests.test_matchers.TestIsSubDict
>> >Of.test__str__\xb8\x06h(\xb3)\x01@L
>> @Anova.tests.test_matchers.TestIsSubDic
>> >tOf.test_describe_difference\xd2\xdd3\xea\xb3)\x01@E
>> ;nova.tests.test_match
>> >ers.TestIsSubDictOf.test_matches_match~qC\xd0\xb3)\x01@H
>> >nova.tests.test_m
>>
>> >atchers.TestIsSubDictOf.test_mismatch_detailsA1\xa3l\xb3)\x01<3nova.tests.
>> >test_matchers.TestXMLMatches.test__str__C\xd1E\x13\xb3)\x01@K
>> @@nova.tests.
>>
>> >test_matchers.TestXMLMatches.test_describe_difference\xe0\xa0\xb2\x80\xb3)
>> >\x01@D
>> :nova.tests.test_matchers.TestXMLMatches.test_matches_matchDT\xe9h\x
>> >b3)\x01@G
>> =nova.tests.test_matchers.TestXMLMatches.test_mismatch_detailse\x
>> >16\xb1\xad\xb3
>> > `\x80N\xdb\x17text/plain;charset=utf8\rimport
>>
>> >errors\x80N\xa8nova.tests.api.ec2.test_api\nnova.tests.api.ec2.test_cinder
>>
>> >_cloud\nnova.tests.api.ec2.test_cloud\nnova.tests.api.ec2.test_ec2_validat
>>
>> >e\nnova.tests.api.ec2.test_ec2utils\nnova.tests.api.ec2.test_error_respons
>>
>> >e\nnova.tests.api.ec2.test_faults\nnova.tests.api.ec2.test_middleware\nnov
>>
>> >a.tests.api.openstack.compute.contrib.test_admin_actions\nnova.tests.api.o
>>
>> >penstack.compute.contrib.test_agents\nnova.tests.api.openstack.compute.con
>>
>> >trib.test_aggregates\nnova.tests.api.openstack.compute.contrib.test_attach
>>
>> >_interfaces\nnova.tests.api.openstack.compute.contrib.test_availability_zo
>>
>> >ne\nnova.tests.api.openstack.compute.contrib.test_baremetal_nodes\nnova.te
>> >sts.api.openstack.compute.contrib.test_cells\
>> nnova.tests.api.openstack.com
>>
>> >pute.contrib.test_certificates\nnova.tests.api.openstack.compute.contrib.t
>>
>> >est_cloudpipe\nnova.tests.api.openstack.compute.contrib.test_cloudpipe_upd
>>
>> >ate\nnova.tests.api.openstack.compute.contrib.test_config_drive\nnova.test
>>
>> >s.api.openstack.compute.contrib.test_console_auth_tokens\nnova.tests.api.o
>> >penstack.compute.contrib.test_console_output\
>> nnova.tests.api.openstack.com
>>
>> >pute.contrib.test_consoles\nnova.tests.api.openstack.compute.contrib.test_
>>
>> >createserverext\nnova.tests.api.openstack.compute.contrib.test_deferred_de
>>
>> >lete\nnova.tests.api.openstack.compute.contrib.test_disk_config\nnova.test
>> >s.api.openstack.compute.contrib.test_evacuate\
>> nnova.tests.api.openstack.co
>> >mpute.contrib.test_extended_availability_zone\
>> nnova.tests.api.openstack.co
>>
>> >mpute.contrib.test_extended_evacuate_find_host\nnova.tests.api.openstack.c
>>
>> >ompute.contrib.test_extended_hypervisors\nnova.tests.api.openstack.compute
>>
>> >.contrib.test_extended_ips\nnova.tests.api.openstack.compute.contrib.test_
>>
>> >extended_ips_mac\nnova.tests.api.openstack.compute.contrib.test_extended_r
>>
>> >escue_with_image\nnova.tests.api.openstack.compute.contrib.test_extended_s
>>
>> >erver_attributes\nnova.tests.api.openstack.compute.contrib.test_extended_s
>>
>> >tatus\nnova.tests.api.openstack.compute.contrib.test_extended_virtual_inte
>>
>> >rfaces_net\nnova.tests.api.openstack.compute.contrib.test_extended_volumes
>>
>> >\nnova.tests.api.openstack.compute.contrib.test_fixed_ips\nnova.tests.api.
>> >openstack.compute.contrib.test_flavor_access\
>> nnova.tests.api.openstack.com
>>
>> >pute.contrib.test_flavor_disabled\nnova.tests.api.openstack.compute.cont

Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-23 Thread Maru Newby

On Aug 23, 2014, at 4:06 AM, Sumit Naiksatam  wrote:

> On Thu, Aug 21, 2014 at 7:28 AM, Kyle Mestery  wrote:
>> On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka  wrote:
>>> -BEGIN PGP SIGNED MESSAGE-
>>> Hash: SHA512
>>> 
>>> On 20/08/14 18:28, Salvatore Orlando wrote:
 Some comments inline.
 
 Salvatore
 
 On 20 August 2014 17:38, Ihar Hrachyshka >>> > wrote:
 
 Hi all,
 
 I've read the proposal for incubator as described at [1], and I
 have several comments/concerns/suggestions to this.
 
 Overall, the idea of giving some space for experimentation that
 does not alienate parts of community from Neutron is good. In that
 way, we may relax review rules and quicken turnaround for preview
 features without losing control of those features too much.
 
 Though the way it's to be implemented leaves several concerns, as
 follows:
 
 1. From packaging perspective, having a separate repository and
 tarballs seems not optimal. As a packager, I would better deal with
 a single tarball instead of two. Meaning, it would be better to
 keep the code in the same tree.
 
 I know that we're afraid of shipping the code for which some users
 may expect the usual level of support and stability and
 compatibility. This can be solved by making it explicit that the
 incubated code is unsupported and used at the user's own risk. 1) The
 experimental code wouldn't probably be installed unless explicitly
 requested, and 2) it would be put in a separate namespace (like
 'preview', 'experimental', or 'staging', as they call it in the Linux
 kernel world [2]).
 
 This would facilitate keeping commit history instead of losing it
 during graduation.
 
 Yes, I know that people don't like to be called experimental or
 preview or incubator... And maybe neutron-labs repo sounds more
 appealing than an 'experimental' subtree in the core project.
 Well, there are lots of EXPERIMENTAL features in Linux kernel that
 we actively use (for example, btrfs is still considered
 experimental by Linux kernel devs, while being exposed as a
 supported option to RHEL7 users), so I don't see how that naming
 concern is significant.
 
 
> I think this is the whole point of the discussion around the
> incubator and the reason for which, to the best of my knowledge,
> no proposal has been accepted yet.
 
>>> 
>>> I wonder where discussion around the proposal is running. Is it public?
>>> 
>> The discussion started out privately as the incubation proposal was
>> put together, but it's now on the mailing list, in person, and in IRC
>> meetings. Lets keep the discussion going on list now.
>> 
> 
> In the spirit of keeping the discussion going, I think we probably
> need to iterate in practice on this idea a little bit before we can
> crystallize on the policy and process for this new repo. Here are a few
> ideas on how we can start this iteration:
> 
> * Namespace for the new repo:
> Should this be in the neutron namespace, or a completely different
> namespace like "neutron labs"? Perhaps creating a separate namespace
> will help the packagers to avoid issues of conflicting package owners
> of the namespace.

I don’t think there is a technical requirement to choose a new namespace.  
Python supports sharing a namespace, and packaging can support this feature 
(see: oslo.*).
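
For example, an illustrative sketch of the setuptools shared-namespace
approach (package and module names assumed, not a concrete proposal):

# neutron/__init__.py, shipped by *both* the core and the incubator
# distributions, so e.g. neutron.plugins and neutron.labs.foo can be
# installed from separate packages into one 'neutron' namespace:
__import__('pkg_resources').declare_namespace(__name__)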

> 
> * Dependency on Neutron (core) repository:
> We would need to sort this out so that we can get UTs to run and pass
> in the new repo. Can we set the dependency on Neutron milestone
> releases? We already publish tarballs for the milestone releases, but
> I am not sure we publish these as packages to PyPI. If not, could we
> start doing that? With this in place, the incubator would always lag
> the Neutron core by at the most one milestone release.

Given that it is possible to specify a dependency as a branch/hash/tag in a git 
repo [1], I’m not sure it’s worth figuring out how to target tarballs.  Master 
branch of the incubation repo could then target the master branch of the 
Neutron repo and always be assured of being current, and then released versions 
could target milestone tags or released versions.

1: http://pip.readthedocs.org/en/latest/reference/pip_install.html#git
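
Purely for illustration (repository URL and egg name assumed), such a
requirements entry could look like:

-e git+https://git.openstack.org/openstack/neutron.git@master#egg=neutron

or, pinned to a milestone tag:

-e git+https://git.openstack.org/openstack/neutron.git@2014.2.b3#egg=neutron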

> 
> * Modules overlapping with the Neutron (core) repository:
> We could initially start with the features that required very little
> or no changes to the Neutron core, to avoid getting into the issue of
> blocking on changes to the Neutron (core) repository before progress
> can be made in the incubator.

+1

I agree that it would be in an incubated effort’s best interest to put off 
doing invasive changes to the Neutron tree as long as possible to ensure 
sufficient time to hash out the best approach.

> 
> * Packaging of ancillary code (CLI, Horizon, Heat):
> We start by adding th

Re: [openstack-dev] unable to run unit tests

2014-08-23 Thread Alex Leonhardt
Thanks! So the real error is :

UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 36:
ordinal not in range(128)


Any clues ? :\

Alex



On 23 August 2014 21:46, Doug Wiegley  wrote:

> That’s testr’s friendly way of telling you that you have an import error.
>
> Run it with the py26 environment and you’ll get a useful error.
>
> No, don’t ask me why.
>
> Doug
>
>
>
> On 8/23/14, 2:33 PM, "Alex Leonhardt"  wrote:
>
> >Hi All,
> >
> >
> >after some fighting with installing all the bits - I can finally execute
> >run_tests.sh but I still cannot seem to get it to work properly. This is
> >what it returns:
> >
> >
> >Non-zero exit code (2) from test listing.
> >stdout='\xb3)\x01@B8nova.tests.test_matchers.TestDictListMatches.test__str
> >__\xde|\xe2\x9e\xb3)\x01@P
> @Enova.tests.test_matchers.TestDictListMatches.t
> >est_describe_difference\xfeL\xc1\xc4\xb3)\x01@I
> ?nova.tests.test_matchers.T
> >estDictListMatches.test_matches_match\x93\xfau\xb5\xb3)\x01@M
> @Bnova.tests.
> >test_matchers.TestDictListMatches.test_mismatch_details\x18`>_\xb3)\x01=4n
> >ova.tests.test_matchers.TestDictMatches.test__str__\xca\x9bK\xb9\xb3)\x01@
> >L@Anova.tests.test_matchers.TestDictMatches.test_describe_difference
> \x94\x
> >b6\x89\\\xb3)\x01@E
> ;nova.tests.test_matchers.TestDictMatches.test_matches_
> >match^V\xc2v\xb3)\x01@H
> >nova.tests.test_matchers.TestDictMatches.test_mism
> >atch_details\x1bV\x9e\xff\xb3)\x01=4nova.tests.test_matchers.TestIsSubDict
> >Of.test__str__\xb8\x06h(\xb3)\x01@L
> @Anova.tests.test_matchers.TestIsSubDic
> >tOf.test_describe_difference\xd2\xdd3\xea\xb3)\x01@E
> ;nova.tests.test_match
> >ers.TestIsSubDictOf.test_matches_match~qC\xd0\xb3)\x01@H
> >nova.tests.test_m
> >atchers.TestIsSubDictOf.test_mismatch_detailsA1\xa3l\xb3)\x01<3nova.tests.
> >test_matchers.TestXMLMatches.test__str__C\xd1E\x13\xb3)\x01@K
> @@nova.tests.
> >test_matchers.TestXMLMatches.test_describe_difference\xe0\xa0\xb2\x80\xb3)
> >\x01@D
> :nova.tests.test_matchers.TestXMLMatches.test_matches_matchDT\xe9h\x
> >b3)\x01@G
> =nova.tests.test_matchers.TestXMLMatches.test_mismatch_detailse\x
> >16\xb1\xad\xb3
> > `\x80N\xdb\x17text/plain;charset=utf8\rimport
> >errors\x80N\xa8nova.tests.api.ec2.test_api\nnova.tests.api.ec2.test_cinder
> >_cloud\nnova.tests.api.ec2.test_cloud\nnova.tests.api.ec2.test_ec2_validat
> >e\nnova.tests.api.ec2.test_ec2utils\nnova.tests.api.ec2.test_error_respons
> >e\nnova.tests.api.ec2.test_faults\nnova.tests.api.ec2.test_middleware\nnov
> >a.tests.api.openstack.compute.contrib.test_admin_actions\nnova.tests.api.o
> >penstack.compute.contrib.test_agents\nnova.tests.api.openstack.compute.con
> >trib.test_aggregates\nnova.tests.api.openstack.compute.contrib.test_attach
> >_interfaces\nnova.tests.api.openstack.compute.contrib.test_availability_zo
> >ne\nnova.tests.api.openstack.compute.contrib.test_baremetal_nodes\nnova.te
> >sts.api.openstack.compute.contrib.test_cells\
> nnova.tests.api.openstack.com
> >pute.contrib.test_certificates\nnova.tests.api.openstack.compute.contrib.t
> >est_cloudpipe\nnova.tests.api.openstack.compute.contrib.test_cloudpipe_upd
> >ate\nnova.tests.api.openstack.compute.contrib.test_config_drive\nnova.test
> >s.api.openstack.compute.contrib.test_console_auth_tokens\nnova.tests.api.o
> >penstack.compute.contrib.test_console_output\
> nnova.tests.api.openstack.com
> >pute.contrib.test_consoles\nnova.tests.api.openstack.compute.contrib.test_
> >createserverext\nnova.tests.api.openstack.compute.contrib.test_deferred_de
> >lete\nnova.tests.api.openstack.compute.contrib.test_disk_config\nnova.test
> >s.api.openstack.compute.contrib.test_evacuate\
> nnova.tests.api.openstack.co
> >mpute.contrib.test_extended_availability_zone\
> nnova.tests.api.openstack.co
> >mpute.contrib.test_extended_evacuate_find_host\nnova.tests.api.openstack.c
> >ompute.contrib.test_extended_hypervisors\nnova.tests.api.openstack.compute
> >.contrib.test_extended_ips\nnova.tests.api.openstack.compute.contrib.test_
> >extended_ips_mac\nnova.tests.api.openstack.compute.contrib.test_extended_r
> >escue_with_image\nnova.tests.api.openstack.compute.contrib.test_extended_s
> >erver_attributes\nnova.tests.api.openstack.compute.contrib.test_extended_s
> >tatus\nnova.tests.api.openstack.compute.contrib.test_extended_virtual_inte
> >rfaces_net\nnova.tests.api.openstack.compute.contrib.test_extended_volumes
> >\nnova.tests.api.openstack.compute.contrib.test_fixed_ips\nnova.tests.api.
> >openstack.compute.contrib.test_flavor_access\
> nnova.tests.api.openstack.com
> >pute.contrib.test_flavor_disabled\nnova.tests.api.openstack.compute.contri
> >b.test_flavor_manage\nnova.tests.api.openstack.compute.contrib.test_flavor
> >_rxtx\nnova.tests.api.openstack.compute.contrib.test_flavor_swap\nnova.tes
> >ts.api.openstack.compute.contrib.test_flavorextradata\nnova.tests.api.open
> >stack.compute.contrib.test_flavors_extra_specs\nnova.tests.api.openstack.c
> >ompute.contrib.test_floating_ip_bulk\nnova.tests.api.open

Re: [openstack-dev] unable to run unit tests

2014-08-23 Thread Doug Wiegley
That’s testr’s friendly way of telling you that you have an import error.

Run it with the py26 environment and you’ll get a useful error.

No, don’t ask me why.
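
One quick way to surface the underlying error without the testr wrapper is to
import one of the modules from the "import errors" list by hand, e.g.
(illustrative):

import traceback

try:
    __import__('nova.tests.api.ec2.test_api')   # any module from the list
except Exception:
    traceback.print_exc()   # prints the real ImportError/UnicodeDecodeError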

Doug



On 8/23/14, 2:33 PM, "Alex Leonhardt"  wrote:

>Hi All,
>
>
>after some fighting with installing all the bits - I can finally execute
>run_tests.sh but I still cannot seem to get it to work properly. This is
>what it returns:
>
>
>Non-zero exit code (2) from test listing.
>stdout='\xb3)\x01@B8nova.tests.test_matchers.TestDictListMatches.test__str
>__\xde|\xe2\x9e\xb3)\x01@P@Enova.tests.test_matchers.TestDictListMatches.t
>est_describe_difference\xfeL\xc1\xc4\xb3)\x01@I?nova.tests.test_matchers.T
>estDictListMatches.test_matches_match\x93\xfau\xb5\xb3)\x01@M@Bnova.tests.
>test_matchers.TestDictListMatches.test_mismatch_details\x18`>_\xb3)\x01=4n
>ova.tests.test_matchers.TestDictMatches.test__str__\xca\x9bK\xb9\xb3)\x01@
>L@Anova.tests.test_matchers.TestDictMatches.test_describe_difference\x94\x
>b6\x89\\\xb3)\x01@E;nova.tests.test_matchers.TestDictMatches.test_matches_
>match^V\xc2v\xb3)\x01@H>nova.tests.test_matchers.TestDictMatches.test_mism
>atch_details\x1bV\x9e\xff\xb3)\x01=4nova.tests.test_matchers.TestIsSubDict
>Of.test__str__\xb8\x06h(\xb3)\x01@L@Anova.tests.test_matchers.TestIsSubDic
>tOf.test_describe_difference\xd2\xdd3\xea\xb3)\x01@E;nova.tests.test_match
>ers.TestIsSubDictOf.test_matches_match~qC\xd0\xb3)\x01@H>nova.tests.test_m
>atchers.TestIsSubDictOf.test_mismatch_detailsA1\xa3l\xb3)\x01<3nova.tests.
>test_matchers.TestXMLMatches.test__str__C\xd1E\x13\xb3)\x01@K@@nova.tests.
>test_matchers.TestXMLMatches.test_describe_difference\xe0\xa0\xb2\x80\xb3)
>\x01@D:nova.tests.test_matchers.TestXMLMatches.test_matches_matchDT\xe9h\x
>b3)\x01@G=nova.tests.test_matchers.TestXMLMatches.test_mismatch_detailse\x
>16\xb1\xad\xb3
> `\x80N\xdb\x17text/plain;charset=utf8\rimport
>errors\x80N\xa8nova.tests.api.ec2.test_api\nnova.tests.api.ec2.test_cinder
>_cloud\nnova.tests.api.ec2.test_cloud\nnova.tests.api.ec2.test_ec2_validat
>e\nnova.tests.api.ec2.test_ec2utils\nnova.tests.api.ec2.test_error_respons
>e\nnova.tests.api.ec2.test_faults\nnova.tests.api.ec2.test_middleware\nnov
>a.tests.api.openstack.compute.contrib.test_admin_actions\nnova.tests.api.o
>penstack.compute.contrib.test_agents\nnova.tests.api.openstack.compute.con
>trib.test_aggregates\nnova.tests.api.openstack.compute.contrib.test_attach
>_interfaces\nnova.tests.api.openstack.compute.contrib.test_availability_zo
>ne\nnova.tests.api.openstack.compute.contrib.test_baremetal_nodes\nnova.te
>sts.api.openstack.compute.contrib.test_cells\nnova.tests.api.openstack.com
>pute.contrib.test_certificates\nnova.tests.api.openstack.compute.contrib.t
>est_cloudpipe\nnova.tests.api.openstack.compute.contrib.test_cloudpipe_upd
>ate\nnova.tests.api.openstack.compute.contrib.test_config_drive\nnova.test
>s.api.openstack.compute.contrib.test_console_auth_tokens\nnova.tests.api.o
>penstack.compute.contrib.test_console_output\nnova.tests.api.openstack.com
>pute.contrib.test_consoles\nnova.tests.api.openstack.compute.contrib.test_
>createserverext\nnova.tests.api.openstack.compute.contrib.test_deferred_de
>lete\nnova.tests.api.openstack.compute.contrib.test_disk_config\nnova.test
>s.api.openstack.compute.contrib.test_evacuate\nnova.tests.api.openstack.co
>mpute.contrib.test_extended_availability_zone\nnova.tests.api.openstack.co
>mpute.contrib.test_extended_evacuate_find_host\nnova.tests.api.openstack.c
>ompute.contrib.test_extended_hypervisors\nnova.tests.api.openstack.compute
>.contrib.test_extended_ips\nnova.tests.api.openstack.compute.contrib.test_
>extended_ips_mac\nnova.tests.api.openstack.compute.contrib.test_extended_r
>escue_with_image\nnova.tests.api.openstack.compute.contrib.test_extended_s
>erver_attributes\nnova.tests.api.openstack.compute.contrib.test_extended_s
>tatus\nnova.tests.api.openstack.compute.contrib.test_extended_virtual_inte
>rfaces_net\nnova.tests.api.openstack.compute.contrib.test_extended_volumes
>\nnova.tests.api.openstack.compute.contrib.test_fixed_ips\nnova.tests.api.
>openstack.compute.contrib.test_flavor_access\nnova.tests.api.openstack.com
>pute.contrib.test_flavor_disabled\nnova.tests.api.openstack.compute.contri
>b.test_flavor_manage\nnova.tests.api.openstack.compute.contrib.test_flavor
>_rxtx\nnova.tests.api.openstack.compute.contrib.test_flavor_swap\nnova.tes
>ts.api.openstack.compute.contrib.test_flavorextradata\nnova.tests.api.open
>stack.compute.contrib.test_flavors_extra_specs\nnova.tests.api.openstack.c
>ompute.contrib.test_floating_ip_bulk\nnova.tests.api.openstack.compute.con
>trib.test_floating_ip_dns\nnova.tests.api.openstack.compute.contrib.test_f
>loating_ip_pools\nnova.tests.api.openstack.compute.contrib.test_floating_i
>ps\nnova.tests.api.openstack.compute.contrib.test_fping\nnova.tests.api.op
>enstack.compute.contrib.test_hide_server_addresses\nnova.tests.api.opensta
>ck.compute.contrib.test_hosts\nnova.tests.api.openstack.compute.contrib.te
>st_

[openstack-dev] unable to run unit tests

2014-08-23 Thread Alex Leonhardt
Hi All,

after some fighting with installing all the bits - I can finally execute
run_tests.sh but I still cannot seem to get it to work properly. This is
what it returns:

Non-zero exit code (2) from test listing.
stdout='\xb3)\x01@B8nova.tests.test_matchers.TestDictListMatches.test__str__
\xde|\xe2\x9e\xb3)\x01@P
@Enova.tests.test_matchers.TestDictListMatches.test_describe_difference\xfeL\xc1\xc4\xb3)\x01@I
?nova.tests.test_matchers.TestDictListMatches.test_matches_match\x93\xfau\xb5\xb3)\x01@M
@Bnova.tests.test_matchers.TestDictListMatches.test_mismatch_details\x18`>_\xb3)\x01=4nova.tests.test_matchers.TestDictMatches.test__str__\xca\x9bK\xb9\xb3)\x01@L
@Anova.tests.test_matchers.TestDictMatches.test_describe_difference\x94\xb6\x89\\\xb3)\x01@E
;nova.tests.test_matchers.TestDictMatches.test_matches_match^V\xc2v\xb3)\x01@H
>nova.tests.test_matchers.TestDictMatches.test_mismatch_details\x1bV\x9e\xff\xb3)\x01=4nova.tests.test_matchers.TestIsSubDictOf.test__str__\xb8\x06h(\xb3)\x01@L
@Anova.tests.test_matchers.TestIsSubDictOf.test_describe_difference\xd2\xdd3\xea\xb3)\x01@E
;nova.tests.test_matchers.TestIsSubDictOf.test_matches_match~qC\xd0\xb3)\x01@H
>nova.tests.test_matchers.TestIsSubDictOf.test_mismatch_detailsA1\xa3l\xb3)\x01<3nova.tests.test_matchers.TestXMLMatches.test__str__C\xd1E\x13\xb3)\x01@K
@@nova.tests.test_matchers.TestXMLMatches.test_describe_difference\xe0\xa0\xb2\x80\xb3)\x01@D
:nova.tests.test_matchers.TestXMLMatches.test_matches_matchDT\xe9h\xb3)\x01@G=nova.tests.test_matchers.TestXMLMatches.test_mismatch_detailse\x16\xb1\xad\xb3
`\x80N\xdb\x17text/plain;charset=utf8\rimport
errors\x80N\xa8nova.tests.api.ec2.test_api\nnova.tests.api.ec2.test_cinder_cloud\nnova.tests.api.ec2.test_cloud\nnova.tests.api.ec2.test_ec2_validate\nnova.tests.api.ec2.test_ec2utils\nnova.tests.api.ec2.test_error_response\nnova.tests.api.ec2.test_faults\nnova.tests.api.ec2.test_middleware\nnova.tests.api.openstack.compute.contrib.test_admin_actions\nnova.tests.api.openstack.compute.contrib.test_agents\nnova.tests.api.openstack.compute.contrib.test_aggregates\nnova.tests.api.openstack.compute.contrib.test_attach_interfaces\nnova.tests.api.openstack.compute.contrib.test_availability_zone\nnova.tests.api.openstack.compute.contrib.test_baremetal_nodes\nnova.tests.api.openstack.compute.contrib.test_cells\nnova.tests.api.openstack.compute.contrib.test_certificates\nnova.tests.api.openstack.compute.contrib.test_cloudpipe\nnova.tests.api.openstack.compute.contrib.test_cloudpipe_update\nnova.tests.api.openstack.compute.contrib.test_config_drive\nnova.tests.api.openstack.compute.contrib.test_console_auth_tokens\nnova.tests.api.openstack.compute.contrib.test_console_output\nnova.tests.api.openstack.compute.contrib.test_consoles\nnova.tests.api.openstack.compute.contrib.test_createserverext\nnova.tests.api.openstack.compute.contrib.test_deferred_delete\nnova.tests.api.openstack.compute.contrib.test_disk_config\nnova.tests.api.openstack.compute.contrib.test_evacuate\nnova.tests.api.openstack.compute.contrib.test_extended_availability_zone\nnova.tests.api.openstack.compute.contrib.test_extended_evacuate_find_host\nnova.tests.api.openstack.compute.contrib.test_extended_hypervisors\nnova.tests.api.openstack.compute.contrib.test_extended_ips\nnova.tests.api.openstack.compute.contrib.test_extended_ips_mac\nnova.tests.api.openstack.compute.contrib.test_extended_rescue_with_image\nnova.tests.api.openstack.compute.contrib.test_extended_server_attributes\nnova.tests.api.openstack.compute.contrib.test_extended_status\nnova.tests.api.openstack.compute.contrib.test_extended_virtual_interfaces_net\nnova.tests.api.openstack.compute.contrib.test_extended_volumes\nnova.tests.api.openstack.compute.contrib.test_fixed_ips\nnova.tests.api.openstack.compute.contrib.test_flavor_access\nnova.tests.api.openstack.compute.contrib.test_flavor_disabled\nnova.tests.api.openstack.compute.contrib.test_flavor_manage\nnova.tests.api.openstack.compute.contrib.test_flavor_rxtx\nnova.tests.api.openstack.compute.contrib.test_flavor_swap\nnova.tests.api.openstack.compute.contrib.test_flavorextradata\nnova.tests.api.openstack.compute.contrib.test_flavors_extra_specs\nnova.tests.api.openstack.compute.contrib.test_floating_ip_bulk\nnova.tests.api.openstack.compute.contrib.test_floating_ip_dns\nnova.tests.api.openstack.compute.contrib.test_floating_ip_pools\nnova.tests.api.openstack.compute.contrib.test_floating_ips\nnova.tests.api.openstack.compute.contrib.test_fping\nnova.tests.api.openstack.compute.contrib.test_hide_server_addresses\nnova.tests.api.openstack.compute.contrib.test_hosts\nnova.tests.api.openstack.compute.contrib.test_hypervisor_status\nnova.tests.api.openstack.compute.contrib.test_hypervisors\nnova.tests.api.openstack.compute.contrib.test_image_size\nnova.tests.api.openstack.compute.contrib.test_instance_actions\nnova.tests.api.openstack.compute.contrib.test_instance_usage_audit_log\nnova.tests.api.openstack.compute.contrib.test_keypairs\nnova.tests.

[openstack-dev] [neutron] Runtime checks vs Sanity checks

2014-08-23 Thread Maru Newby
Kevin Benton has proposed adding a runtime check for netns permission problems:

https://review.openstack.org/#/c/109736/

There seems to be consensus on the patch that this is something that we want to 
do at runtime, but that would seem to run counter to the precedent that 
host-specific issues such as this one be considered a deployment-time 
responsibility.  The addition of the sanity check framework would seem to 
support the latter case:

https://github.com/openstack/neutron/blob/master/neutron/cmd/sanity_check.py
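
For anyone skimming, the probe being discussed amounts to something like the
following (a minimal sketch assuming the 'ip' utility is on the path and the
check runs with the agent's privileges; the real check in the patch above goes
through neutron's own wrappers and rootwrap):

    import subprocess
    import uuid


    def netns_is_usable():
        """Return True if a network namespace can be created and used."""
        name = "probe-" + uuid.uuid4().hex
        try:
            subprocess.check_call(["ip", "netns", "add", name])
            # Exercise the namespace the way the agents would.
            subprocess.check_call(
                ["ip", "netns", "exec", name, "ip", "link", "show"])
            return True
        except (OSError, subprocess.CalledProcessError):
            return False
        finally:
            # Best-effort cleanup; harmless if the add itself failed.
            subprocess.call(["ip", "netns", "delete", name])


    if __name__ == "__main__":
        print("netns usable: %s" % netns_is_usable())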

Thoughts?


Maru


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [infra] Glance review patterns and their impact on the gate

2014-08-23 Thread Sean Dague
On 08/22/2014 07:22 AM, Daniel P. Berrange wrote:
> On Fri, Aug 22, 2014 at 07:09:10AM -0500, Sean Dague wrote:
>> Earlier this week the freshness checks (the ones that required passing
>> results within 24 hrs for a change to go into the gate) were removed to
>> try to conserve nodes as we get to crunch time. The hope was that
>> review teams had enough of a handle on when code in their program got
>> into a state where it *could not* pass its own unit tests to not approve
>> that code.
> 
> Doh, I had no idea that we disabled the freshness checks. Since those
> checks have been in place I think reviewers have somewhat come to rely
> on them existing. I know I've certainly approved newly uploaded patches
> for merge now without waiting for 'check' jobs to finish, since we've
> been able to rely on the fact that the freshness checks will ensure it
> doesn't get into the 'gate' jobs queue if there was a problem. I see
> other reviewers do this reasonably frequently too. Of course this
> reliance does not work out for 3rd party jobs, so people shouldn't
> really do that for changes where such jobs are important, but it is
> hard to resist in general.

We still require a +1 from Jenkins for a change to move to the gate. We
don't require that it was within the last 24 hrs (that was the freshness
check that was removed).

So fast approval of trivial patches is still a valid workflow, as a change
won't go into the gate until it has already passed Jenkins once.

>> Realistically Glance was the biggest offender of this in the past, and
>> honestly the top reason for putting freshness checks in the gate in the
>> first place.
> 
> I'm not commenting about the past transgressions, but as long as those
> freshness jobs exist I think they sort of serve to actually reinforce
> the bad behaviour you describe because people can start to rely on them.
> 
> Regards,
> Daniel
> 


-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Server Group API: add 'action' to authorizer?

2014-08-23 Thread Christopher Yeoh
On Sat, 23 Aug 2014 03:56:27 -0500
Joe Cropper  wrote:

> Hi Folks,
> 
> Would anyone be opposed to adding the 'action' checking to the v2/v3
> authorizers?  This would allow administrators more fine-grained
> control over who can read vs. create/update/delete server groups.
> 
> Thoughts?
> 
> If folks are supportive, I'd be happy to add this... but not sure if
> we'd treat this as a 'bug' or whether there is a blueprint under which
> this could be done?

Long term we want to have a separate authorizer for every method. Alex
had a nova-spec proposed for this, but it unfortunately did not make
Juno:

https://review.openstack.org/#/c/92326/

Also, since the feature proposal deadline has passed, it'll have to wait
until Kilo.
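
For illustration, the per-action split being asked for amounts to keying the
policy check on the method rather than on the extension as a whole. A
self-contained sketch (the rule names and toy enforcer below are illustrative
only, not nova's actual policy code):

    # Map each server group action to its own policy rule (toy rules).
    RULES = {
        "server_groups:index": "admin_or_owner",
        "server_groups:show": "admin_or_owner",
        "server_groups:create": "admin_or_owner",
        "server_groups:delete": "admin_only",
    }


    def authorize(roles, action):
        """Return True if the caller's roles satisfy the rule for `action`."""
        rule = RULES.get("server_groups:%s" % action, "admin_only")
        if rule == "admin_only":
            return "admin" in roles
        # admin_or_owner: any authenticated caller passes in this toy model.
        return bool(roles)


    # A plain member may list groups but may not delete them.
    assert authorize({"member"}, "index")
    assert not authorize({"member"}, "delete")
    assert authorize({"admin"}, "delete")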

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] What tests are required to be run

2014-08-23 Thread Kevin Benton
Can you disable posting of results directly from your Jenkins/Zuul setup
and have a script that just checks the log file for special markers to
determine if the vote should be FAILED/PASSED/SKIPPED? Another advantage of
this approach is that it gives you an opportunity to detect when a job just
failed to set up due to infrastructure reasons and trigger a recheck without
ever first posting a failure to gerrit.
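
Roughly, the reporting side could look like this (a hedged sketch; the marker
strings and vote names are assumptions, not anything the infra tooling
actually defines):

    import sys


    def decide_vote(log_path):
        """Pick a vote from markers the test job wrote into its console log."""
        with open(log_path) as f:
            log = f.read()
        if "MARKER: TESTS_SKIPPED" in log:
            return "SKIPPED"    # comment on the review, but do not vote
        if "MARKER: INFRA_SETUP_FAILED" in log:
            return "RECHECK"    # re-queue the job, post nothing to gerrit
        if "MARKER: TEMPEST_PASSED" in log:
            return "PASSED"
        return "FAILED"


    if __name__ == "__main__":
        print(decide_vote(sys.argv[1]))

The posting script would then translate SKIPPED/RECHECK into "no vote" instead
of letting Jenkins' own exit status drive the gerrit comment.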


On Fri, Aug 22, 2014 at 3:06 PM, Dane Leblanc (leblancd)  wrote:

> Thanks Edgar for updating the APIC status!!!
>
> Edgar and Kyle: **PLEASE NOTE**  I need your understanding and
> advice on the following:
>
> We are still stuck with a problem stemming from a design limitation of
> Jenkins that prevents us from being compliant with Neutron 3rd Party CI
> requirements for our DFA CI.
>
> The issue is that Jenkins only allows our scripts to (programmatically)
> return either Success or Fail. There is no option to return "Aborted", "Not
> Tested", or "Skipped".
>
> Why does this matter? The DFA plugin is just being introduced, and initial
> DFA-enabling change sets have not yet been merged. Therefore, all other
> change sets will fail our Tempest tests, since they are not DFA-enabled.
>
> Similarly, we were recently blocked in our APIC CI with a critical bug,
> causing all change sets without this fix to fail on our APIC testbed.
>
> In these cases, we would like to enter a "throttled" or "partially
> blocked" mode, where we would skip testing on change sets we know will
> fail, and (in an ideal world) signal this shortcoming to Gerrit e.g. by
> returning a "Skipped" status. Unfortunately, this option is not available
> in Jenkins scripts, as Jenkins is currently designed. The only options we
> have available are "Success" or "Fail", which are both misleading. We
> would also incorrectly report success or fail on one of the following test
> commits:
> https://review.openstack.org/#/c/114393/
> https://review.openstack.org/#/c/40296/
>
> I've brought this issue up on the openstack-infra IRC, and jeblair
> confirmed the Jenkins limitation, but asked me to get consensus from the
> Neutron community as to this being a problem/requirement. I've also sent
> out an e-mail on the Neutron ML trying to start a discussion on this
> problem (no traction). I plan on bringing this up in the 3rd Party CI IRC
> on Monday, assuming there is time permitted in the open discussion.
>
> I'm also investigating
>
> For the short term, I would like to propose the following:
> * We bring this up on the 3rd Party CI IRC on Monday to get a solution or
> workaround, if available. If a solution is available, let's consider
> including that as a hint when we come up with CI requirements for handling
> CIs blocked by some critical fix.
> * I'm also looking into using a REST API to cancel a Jenkins job
> programmatically.
> * If no solution or workaround is available, we work with infra team or
> with Jenkins team to create a solution.
> * Until a solution is available, for plugins which are blocked by a
> critical bug, we post a status/notes indicating the plugin's situation on
> our 3rd party CI status wiki, e.g.:
>
> Vendor          Plugin/Driver Name  Contact Name       Status                                                     Notes
> My Vendor Name  My Plugin CI        My Contact Person  Throttled / Partially blocked / Awaiting Initial Commits
>
> The status/notes should be clear and understood by the Neutron team.  The
> console logs for change sets where the tests were skipped should also
> contain a message that all testing is being skipped for that commit.
>
> Note that when the DFA initial commits are merged, then this issue would
> go away for the DFA CI. However, this problem will reappear every time a
> blocking critical bug shows up for a 3rd party CI setup, or a new plugin is
> introduced and the hardware-enabling commits are not yet merged.  (That is,
> until we have a solution for the Jenkins limitation).
>
> Let me know what you think.
>
> Thanks,
> Dane
>
> -Original Message-
> From: Edgar Magana [mailto:edgar.mag...@workday.com]
> Sent: Friday, August 22, 2014 1:57 PM
> To: Dane Leblanc (leblancd); OpenStack Development Mailing List (not for
> usage questions)
> Subject: Re: [openstack-dev] [neutron] [third-party] What tests are
> required to be run
>
> Sorry my bad but I just changed.
>
> Edgar
>
> On 8/21/14, 2:13 PM, "Dane Leblanc (leblancd)"  wrote:
>
> >Edgar:
> >
> >I'm still seeing the comment "Results are not accurate. Needs
> >clarification..."
> >
> >Dane
> >
> >-Original Message-
> >From: Edgar Magana [mailto:edgar.mag...@workday.com]
> >Sent: Thursday, August 21, 2014 2:58 PM
> >To: Dane Leblanc (leblancd); OpenStack Development Mailing List (not
> >for usage questions)
> >Subject: Re: [openstack-dev] [neutron] [third-party] What tests are
> >required to be run
> >
> >Dane,
> >
> >Wiki has been updated.
> >
> >Thanks,
> >
> >Edgar
> >
> >On 8/21/14, 7:57 AM, "Dane Leblanc (leblancd)" 
> wrote:
> >
> >>

[openstack-dev] [nova] Server Group API: add 'action' to authorizer?

2014-08-23 Thread Joe Cropper
Hi Folks,

Would anyone be opposed to adding the 'action' checking to the v2/v3
authorizers?  This would allow administrators more fine-grained
control over who can read vs. create/update/delete server groups.

Thoughts?

If folks are supportive, I'd be happy to add this... but not sure if
we'd treat this as a 'bug' or whether there is a blueprint under which
this could be done?

Thanks,
Joe

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-23 Thread Tim Bell
> -Original Message-
> From: John Dickinson [mailto:m...@not.mn]
> Sent: 23 August 2014 03:20
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale 
> PTLs
> 
> I think Anne makes some excellent points about the pattern being proposed
> being unlikely to be commonly implemented across all the programs (or, at 
> best,
> very difficult). Let's not try to formalize another "best practice" that 
> works many
> times and force it to work every time. Here's an alternate proposal:
> 
> Let's let PTLs be PTLs and effectively coordinate and manage the activity in 
> their
> respective projects. And let's get the PTLs together for one or two days every
> cycle to discuss project issues. Just PTLs, and let's focus on the project
> management stuff and some cross-project issues.
> 
> Getting the PTLs together would allow them to discuss cross-project issues,
> share frustrations and solutions about what does and doesn't work. Basically,
> think of it as a mid-cycle meetup, but for PTLs. (Perhaps we could even ask 
> the
> Foundation to sponsor it.)
> 
> --John
> 

As part of the user feedback loop, we've found the PTL role extremely useful 
for channelling feedback.  The operator-PTL discussions during the Atlanta summit 
helped to clarify a number of areas where the PTL can then take the points back 
to the design summit. It is not clear how czars would address the outward-facing 
part of the PTL role, which is clearly needed in view of the various 
discussions around program management and priorities.

If we have lots of czars or major differences in how each project is 
structured, it is not clear how we channel user feedback to the project teams. 
Would there be a user czar on each project?

I have no problem with lots of czars to delegate activities across the projects, 
but having a single accountable and elected PTL who can choose the level of 
delegation (and be assessed on that) seems to be a very good feature.

Tim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev