Re: [openstack-dev] Gate issues - what you can do to help

2013-10-06 Thread Akihiro Motoki
Hi Gary,

Almost at the same time, I posted another way to fix this:
https://review.openstack.org/#/c/49942/

Both try to fix the same issue.
Gary's patch changes neutronclient itself, while mine changes the quantumclient proxy.
I am not sure which is the right direction, but at least one of them should
be merged ASAP
to fix the failure that is blocking stable/grizzly.

Thanks,


On Sun, Oct 6, 2013 at 5:45 PM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 Can some Neutron cores please look at
 https://review.openstack.org/#/c/49943/. I have tested this locally and it
 addresses the issues that I have encountered.
 Thanks
 Gary

On Fri, Oct 4, 2013 at 2:06 AM, Akihiro Motoki amot...@gmail.com wrote:
 Hi,

 I would like to share what Gary and I have investigated, although the issue is
 not addressed yet.

 The cause is the failure of quantum-debug command in setup_quantum_debug

 (https://github.com/openstack-dev/devstack/blob/stable/grizzly/stack.sh#L996).
 We can reproduce the issue in local environment by setting
 Q_USE_DEBUG_COMMAND=True in localrc.

 Mark proposed a patch https://review.openstack.org/#/c/49584/ but it
 does not address the issue.
 We need another way to proxy quantumclient to neutronclient.

 Note that in some cases the devstack log in the gate does not contain
 the end of the console output.
 In
 http://logs.openstack.org/39/48539/5/check/check-tempest-devstack-vm-neutron/b9e6559/,
 the last command logged is quantum subnet-create, but the quantum-debug
 command was actually executed after it, and it failed.

 Thanks,
 Akihiro

 On Fri, Oct 4, 2013 at 12:05 AM, Alan Pevec ape...@gmail.com wrote:
 The problems occur when the following line is invoked:

 https://github.com/openstack-dev/devstack/blob/stable/grizzly/lib/quantum#L302

 But that line is reached only if baremetal is enabled, which isn't
 the case in the gate, is it?

 Cheers,
 Alan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Akihiro MOTOKI amot...@gmail.com



--
Akihiro MOTOKI amot...@gmail.com




-- 
Akihiro MOTOKI amot...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-06 Thread Akihiro Motoki
Regarding https://review.openstack.org/#/c/49942/ (against the
quantumclient branch),
the gate for the quantumclient branch of python-neutronclient seems broken.
It seems the script expects the master branch of python-neutronclient.
I am not sure of the right way to propose a patch to the
quantumclient branch.

Gary's patch https://review.openstack.org/#/c/49943/ looks like the shorter
path to fixing the gate issue,
since the gate will be unblocked as soon as it is merged.
I am fine with the patch as a temporary solution.

Thanks,
Akihiro


On Sun, Oct 6, 2013 at 5:51 PM, Akihiro Motoki amot...@gmail.com wrote:
 Hi Gary,

 Almost at the same time, I posted another way to fix this:
 https://review.openstack.org/#/c/49942/

 Both try to fix the same issue.
 Gary's patch changes neutronclient itself, while mine changes the quantumclient proxy.
 I am not sure which is the right direction, but at least one of them should
 be merged ASAP
 to fix the failure that is blocking stable/grizzly.

 Thanks,


 On Sun, Oct 6, 2013 at 5:45 PM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 Can some Neutron cores please look at
 https://review.openstack.org/#/c/49943/. I have tested this locally and it
 addresses the issues that I have encountered.
 Thanks
 Gary

On Fri, Oct 4, 2013 at 2:06 AM, Akihiro Motoki amot...@gmail.com wrote:
 Hi,

 I would like to share what Gary and I have investigated, although the issue is
 not addressed yet.

 The cause is the failure of quantum-debug command in setup_quantum_debug

 (https://github.com/openstack-dev/devstack/blob/stable/grizzly/stack.sh#L996).
 We can reproduce the issue in local environment by setting
 Q_USE_DEBUG_COMMAND=True in localrc.

 Mark proposed a patch https://review.openstack.org/#/c/49584/ but it
 does not address the issue.
 We need another way to proxy quantumclient to neutronclient.

 Note that in some cases the devstack log in the gate does not contain
 the end of the console output.
 In
 http://logs.openstack.org/39/48539/5/check/check-tempest-devstack-vm-neutron/b9e6559/,
 the last command logged is quantum subnet-create, but the quantum-debug
 command was actually executed after it, and it failed.

 Thanks,
 Akihiro

 On Fri, Oct 4, 2013 at 12:05 AM, Alan Pevec ape...@gmail.com wrote:
 The problems occur when the following line is invoked:

 https://github.com/openstack-dev/devstack/blob/stable/grizzly/lib/quantum#L302

 But that line is reached only if baremetal is enabled, which isn't
 the case in the gate, is it?

 Cheers,
 Alan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Akihiro MOTOKI amot...@gmail.com



--
Akihiro MOTOKI amot...@gmail.com




 --
 Akihiro MOTOKI amot...@gmail.com



-- 
Akihiro MOTOKI amot...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Gary Kotton
Please see https://review.openstack.org/#/c/49483/

From: Matt Riedemann mrie...@us.ibm.com
Date: Wednesday, October 2, 2013 7:19 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Cc: Administrator gkot...@vmware.com
Subject: Re: [openstack-dev] Gate issues - what you can do to help

I'm tracking that with this bug:

https://bugs.launchpad.net/openstack-ci/+bug/1234181

There are a lot of sys.exit(1) calls in the neutron code on stable/grizzly (and 
in master too for that matter) so I'm wondering if something is puking but the 
error doesn't get logged before the process exits.


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development


Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com

3605 Hwy 52 N
Rochester, MN 55901-1407
United States






From: Alan Pevec ape...@gmail.com
To: Gary Kotton gkot...@vmware.com,
Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date:10/02/2013 10:45 AM
Subject:Re: [openstack-dev] Gate issues - what you can do to help




Hi,

quantumclient is now fixed for stable/grizzly but there are issues
with check-tempest-devstack-vm-neutron job where devstack install is
dying in the middle of create_quantum_initial_network() without trace
e.g. 
http://logs.openstack.org/71/49371/1/check/check-tempest-devstack-vm-neutron/6da159d/console.html

Any ideas?

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Alan Pevec
2013/10/3 Gary Kotton gkot...@vmware.com:
 Please see https://review.openstack.org/#/c/49483/

That's s/quantum/neutron/ on stable - I'm confused as to why that is; it
should have been quantum everywhere in Grizzly.
Could you please expand on your reasoning in the commit message? It also
doesn't help: the check-tempest-devstack-vm-neutron job still failed.

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Gary Kotton


On 10/3/13 11:59 AM, Alan Pevec ape...@gmail.com wrote:

2013/10/3 Gary Kotton gkot...@vmware.com:
 Please see https://review.openstack.org/#/c/49483/

That's s/quantum/neutron/ on stable - I'm confused as to why that is; it
should have been quantum everywhere in Grizzly.

Please see below:

nicira@os-devstack:/opt/stack/neutron/etc$ git branch
  master
* stable/grizzly
nicira@os-devstack:/opt/stack/neutron/etc$ pwd
/opt/stack/neutron/etc
nicira@os-devstack:/opt/stack/neutron/etc$



The stable/grizzly devstack code was trying to copy from quantum. The
destination directories are still quantum, just the source directories are
neutron.


Could you please expand on your reasoning in the commit message? It also
doesn't help: the check-tempest-devstack-vm-neutron job still failed.

I need to check this. Locally things work for me now where they did not
prior to the patch. It looks like I have missed some places with the
root-wrap:

ROOTWRAP_SUDOER_CMD='/usr/local/bin/quantum-rootwrap /etc/neutron/rootwrap.conf *'


I'll post another version soon.

Thanks
Gary


Cheers,
Alan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Gary Kotton
Hi,
This seems to be my bad. I have abandoned the patch and am still looking
into the problems.
Thanks
Gary

On 10/3/13 12:14 PM, Gary Kotton gkot...@vmware.com wrote:



On 10/3/13 11:59 AM, Alan Pevec ape...@gmail.com wrote:

2013/10/3 Gary Kotton gkot...@vmware.com:
 Please see https://review.openstack.org/#/c/49483/

That's s/quantum/neutron/ on stable - I'm confused as to why that is; it
should have been quantum everywhere in Grizzly.

Please see below:

nicira@os-devstack:/opt/stack/neutron/etc$ git branch
  master
* stable/grizzly
nicira@os-devstack:/opt/stack/neutron/etc$ pwd
/opt/stack/neutron/etc
nicira@os-devstack:/opt/stack/neutron/etc$



The stable/grizzly devstack code was trying to copy from quantum. The
destination directories are still quantum, just the source directories are
neutron.


Could you please expand on your reasoning in the commit message? It also
doesn't help: the check-tempest-devstack-vm-neutron job still failed.

I need to check this. Locally things work for me now where they did not
prior to the patch. It looks like I have missed some places with the
root-wrap:

ROOTWRAP_SUDOER_CMD='/usr/local/bin/quantum-rootwrap /etc/neutron/rootwrap.conf *'


I'll post another version soon.

Thanks
Gary


Cheers,
Alan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Gary Kotton
Hi,
I think that I may have stumbled upon the problem, but need the help from
someone on the infra team. Then again I may just be completely mistaken.

Prior to the little hiccup we are having at the moment, the VMs used for
devstack on the infra side had one interface:

2013-09-10 06:32:22.208 | Triggered by: https://review.openstack.org/40359
patchset 4
2013-09-10 06:32:22.208 | Pipeline: gate
2013-09-10 06:32:22.208 | IP configuration of this host:
2013-09-10 06:32:22.209 | 1: lo: LOOPBACK,UP,LOWER_UP mtu 16436 qdisc
noqueue state UNKNOWN
2013-09-10 06:32:22.209 | inet 127.0.0.1/8 scope host lo
2013-09-10 06:32:22.209 | 2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu
1500 qdisc pfifo_fast state UP qlen 1000
2013-09-10 06:32:22.209 | inet 10.2.144.147/15 brd 10.3.255.255 scope
global eth0
2013-09-10 06:32:23.193 | Running devstack
2013-09-10 06:32:23.866 | Using mysql database backend



In the latest version they have 2:

2013-10-03 11:33:54.298 | 1: lo: LOOPBACK,UP,LOWER_UP mtu 16436 qdisc
noqueue state UNKNOWN
2013-10-03 11:33:54.299 | inet 127.0.0.1/8 scope host lo
2013-10-03 11:33:54.300 | 2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu
1500 qdisc pfifo_fast state UP qlen 1000
2013-10-03 11:33:54.301 | inet 162.242.160.129/24 brd 162.242.160.255
scope global eth0
2013-10-03 11:33:54.302 | 3: eth1: BROADCAST,MULTICAST,UP,LOWER_UP mtu
1500 qdisc pfifo_fast state UP qlen 1000
2013-10-03 11:33:54.302 | inet 10.208.24.83/17 brd 10.208.127.255
scope global eth1
2013-10-03 11:33:57.539 | Running devstack
2013-10-03 11:33:58.780 | Using mysql database backend



The problems occur when the following line is invoked:

https://github.com/openstack-dev/devstack/blob/stable/grizzly/lib/quantum#L302

Anyone able to clarify why an additional interface was added?

Thanks
Gary


On 10/3/13 12:58 PM, Gary Kotton gkot...@vmware.com wrote:

Hi,
This seems to be my bad. I have abandoned the patch and am still looking
into the problems.
Thanks
Gary

On 10/3/13 12:14 PM, Gary Kotton gkot...@vmware.com wrote:



On 10/3/13 11:59 AM, Alan Pevec ape...@gmail.com wrote:

2013/10/3 Gary Kotton gkot...@vmware.com:
 Please see https://review.openstack.org/#/c/49483/

That's s/quantum/neutron/ on stable - I'm confused as to why that is; it
should have been quantum everywhere in Grizzly.

Please see below:

nicira@os-devstack:/opt/stack/neutron/etc$ git branch
  master
* stable/grizzly
nicira@os-devstack:/opt/stack/neutron/etc$ pwd
/opt/stack/neutron/etc
nicira@os-devstack:/opt/stack/neutron/etc$



The stable/grizzly devstack code was trying to copy from quantum. The
destination directories are still quantum, just the source directories
are
neutron.


Could you please expand on your reasoning in the commit message? It also
doesn't help: the check-tempest-devstack-vm-neutron job still failed.

I need to check this. Locally things work for me now where they did not
prior to the patch. It looks like I have missed some places with the
root-wrap:

ROOTWRAP_SUDOER_CMD='/usr/local/bin/quantum-rootwrap /etc/neutron/rootwrap.conf *'


I'll post another version soon.

Thanks
Gary


Cheers,
Alan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Jeremy Stanley
On 2013-10-03 06:54:10 -0700 (-0700), Gary Kotton wrote:
 I think that I may have stumbled upon the problem, but need the help from
 someone on the infra team.
[...]

I'm manually launching the test script on a fresh VM now and should
have something shortly.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Alan Pevec
 The problems occur when the following line is invoked:

 https://github.com/openstack-dev/devstack/blob/stable/grizzly/lib/quantum#L302

But that line is reached only if baremetal is enabled, which isn't
the case in the gate, is it?

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Akihiro Motoki
Hi,

I would like to share what Gary and I have investigated, although the issue is
not addressed yet.

The cause is the failure of quantum-debug command in setup_quantum_debug
(https://github.com/openstack-dev/devstack/blob/stable/grizzly/stack.sh#L996).
We can reproduce the issue in local environment by setting
Q_USE_DEBUG_COMMAND=True in localrc.

Mark proposed a patch https://review.openstack.org/#/c/49584/ but it
does not address the issue.
We need another way to proxy quantumclient to neutronclient.
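
To illustrate the idea (a rough sketch only, with assumed module layout; it
is not the code proposed in either review), the quantumclient package could
simply re-export the matching neutronclient module so that legacy imports
keep resolving:

# quantumclient/quantum/v2_0/__init__.py -- hypothetical proxy sketch.
# Re-export everything from the neutronclient equivalent so that legacy
# imports such as `from quantumclient.quantum import v2_0 as quantumv20`
# still find helpers like find_resourceid_by_name_or_id().
from neutronclient.neutron.v2_0 import *  # noqa
from neutronclient.neutron.v2_0 import find_resourceid_by_name_or_id  # noqa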

Note that in some cases the devstack log in the gate does not contain
the end of the console output.
In
http://logs.openstack.org/39/48539/5/check/check-tempest-devstack-vm-neutron/b9e6559/,
the last command logged is quantum subnet-create, but the quantum-debug
command was actually executed after it, and it failed.

Thanks,
Akihiro

On Fri, Oct 4, 2013 at 12:05 AM, Alan Pevec ape...@gmail.com wrote:
 The problems occur when the following line is invoked:

 https://github.com/openstack-dev/devstack/blob/stable/grizzly/lib/quantum#L302

 But that line is reached only if baremetal is enabled, which isn't
 the case in the gate, is it?

 Cheers,
 Alan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Akihiro MOTOKI amot...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-02 Thread Alan Pevec
Hi,

quantumclient is now fixed for stable/grizzly but there are issues
with check-tempest-devstack-vm-neutron job where devstack install is
dying in the middle of create_quantum_initial_network() without trace
e.g. 
http://logs.openstack.org/71/49371/1/check/check-tempest-devstack-vm-neutron/6da159d/console.html

Any ideas?

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-02 Thread Matt Riedemann
I'm tracking that with this bug:

https://bugs.launchpad.net/openstack-ci/+bug/1234181 

There are a lot of sys.exit(1) calls in the neutron code on stable/grizzly 
(and in master too for that matter) so I'm wondering if something is 
puking but the error doesn't get logged before the process exits.
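
As a purely illustrative sketch (the helper name is made up, and this is not
a proposed patch), the pattern that would make such failures visible is to
log the reason before exiting instead of calling sys.exit(1) directly:

import logging
import sys

LOG = logging.getLogger(__name__)

def die(msg, *args):
    # Log why we are giving up, so the gate console log shows the cause
    # instead of the process just ending silently.
    LOG.error(msg, *args)
    sys.exit(1)

# instead of a bare:
#     sys.exit(1)
# the caller would do something like:
#     die("Interface driver %s could not be loaded", driver_name)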


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Alan Pevec ape...@gmail.com
To: Gary Kotton gkot...@vmware.com, 
Cc: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date:   10/02/2013 10:45 AM
Subject:Re: [openstack-dev] Gate issues - what you can do to help



Hi,

quantumclient is now fixed for stable/grizzly but there are issues
with check-tempest-devstack-vm-neutron job where devstack install is
dying in the middle of create_quantum_initial_network() without trace
e.g. 
http://logs.openstack.org/71/49371/1/check/check-tempest-devstack-vm-neutron/6da159d/console.html


Any ideas?

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-01 Thread Alan Pevec
 1) Please *do not* Approve or Reverify stable/* patches. The pyparsing
 requirements conflict with neutron client from earlier in the week is still
 not resolved on stable/*.

 Also there's an issue with quantumclient and Nova stable/grizzly:
 https://jenkins01.openstack.org/job/periodic-nova-python27-stable-grizzly/34/console
 ...nova/network/security_group/quantum_driver.py, line 101, in get
  id = quantumv20.find_resourceid_by_name_or_id(
 AttributeError: 'module' object has no attribute 
 'find_resourceid_by_name_or_id'

That should be fixed by https://review.openstack.org/49006 + new
quantumclient release, thanks Matt!

Adam, Thierry - given that stable/grizzly is still blocked by this, I
suppose we should delay 2013.1.4 freeze (was planned this Thursday)
until stable/grizzly is back in shape?


Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-01 Thread Thierry Carrez
Alan Pevec wrote:
 1) Please *do not* Approve or Reverify stable/* patches. The pyparsing
 requirements conflict with neutron client from earlier in the week is still
 not resolved on stable/*.

 Also there's an issue with quantumclient and Nova stable/grizzly:
 https://jenkins01.openstack.org/job/periodic-nova-python27-stable-grizzly/34/console
 ...nova/network/security_group/quantum_driver.py, line 101, in get
  id = quantumv20.find_resourceid_by_name_or_id(
 AttributeError: 'module' object has no attribute 
 'find_resourceid_by_name_or_id'
 
 That should be fixed by https://review.openstack.org/49006 + new
 quantumclient release, thanks Matt!
 
 Adam, Thierry - given that stable/grizzly is still blocked by this, I
 suppose we should delay 2013.1.4 freeze (was planned this Thursday)
 until stable/grizzly is back in shape?

Given the gate slowness these days, RC1 being late and the advice not to
approve too much stable/* stuff, I think it makes sense to defer for at
least a week. Adam ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-01 Thread Gary Kotton
Yes, it should. I'll mark the bug as a stable backport
Thanks

On 10/1/13 10:51 AM, Alan Pevec ape...@gmail.com wrote:

Hi Gary,

2013/9/29 Gary Kotton gkot...@vmware.com:
 Not related to the stable branches, but related to trunk. At the moment
I am
 working on https://review.openstack.org/#/c/47788/. This patch adds a
 timeout to the access to the OVS database.

There are ovs-vsctl calls on stable/grizzly too, shouldn't this be
backported?

Cheers,
Alan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-01 Thread Adam Gandelman
On 10/01/2013 12:02 AM, Alan Pevec wrote:
 1) Please *do not* Approve or Reverify stable/* patches. The pyparsing
 requirements conflict with neutron client from earlier in the week is still
 not resolved on stable/*.
 Also there's an issue with quantumclient and Nova stable/grizzly:
 https://jenkins01.openstack.org/job/periodic-nova-python27-stable-grizzly/34/console
 ...nova/network/security_group/quantum_driver.py, line 101, in get
  id = quantumv20.find_resourceid_by_name_or_id(
 AttributeError: 'module' object has no attribute 
 'find_resourceid_by_name_or_id'
 That should be fixed by https://review.openstack.org/49006 + new
 quantumclient release, thanks Matt!

 Adam, Thierry - given that stable/grizzly is still blocked by this, I
 suppose we should delay 2013.1.4 freeze (was planned this Thursday)
 until stable/grizzly is back in shape?


 Cheers,
 Alan


This sounds okay to me.  To be clear:

2013.1.4 Freeze goes into effect Oct. 10th
2013.1.4 Release Oct. 17th

- Adam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-09-29 Thread 孔令贤
I hit the same problem; I hope the situation is getting better. How can I
help?

2013-09-29 07:05:55.248 | Traceback (most recent call last):
2013-09-29 07:05:55.248 |   File
/home/jenkins/workspace/gate-nova-python27/nova/tests/api/openstack/compute/contrib/test_quantum_security_groups.py,
line 200, in test_associate
2013-09-29 07:05:55.248 | self.manager._addSecurityGroup(req, '1', body)
2013-09-29 07:05:55.248 |   File
/home/jenkins/workspace/gate-nova-python27/nova/api/openstack/compute/contrib/security_groups.py,
line 448, in _addSecurityGroup
2013-09-29 07:05:55.249 | context, id, group_name)
2013-09-29 07:05:55.249 |   File
/home/jenkins/workspace/gate-nova-python27/nova/api/openstack/compute/contrib/security_groups.py,
line 430, in _invoke
2013-09-29 07:05:55.249 | method(context, instance, group_name)
2013-09-29 07:05:55.249 |   File
/home/jenkins/workspace/gate-nova-python27/nova/compute/api.py, line
163, in wrapped
2013-09-29 07:05:55.249 | return func(self, context, target,
*args, **kwargs)
2013-09-29 07:05:55.249 |   File
/home/jenkins/workspace/gate-nova-python27/nova/network/security_group/quantum_driver.py,
line 333, in add_to_instance
2013-09-29 07:05:55.249 | security_group_id =
quantumv20.find_resourceid_by_name_or_id(
2013-09-29 07:05:55.250 | AttributeError: 'module' object has no
attribute 'find_resourceid_by_name_or_id'



2013/9/29 Alan Pevec ape...@gmail.com

  1) Please *do not* Approve or Reverify stable/* patches. The pyparsing
  requirements conflict with neutron client from earlier in the week is
 still
  not resolved on stable/*.

 Also there's an issue with quantumclient and Nova stable/grizzly:

 https://jenkins01.openstack.org/job/periodic-nova-python27-stable-grizzly/34/console
 ...nova/network/security_group/quantum_driver.py, line 101, in get
  id = quantumv20.find_resourceid_by_name_or_id(
 AttributeError: 'module' object has no attribute
 'find_resourceid_by_name_or_id'

 Relevant difference in pip freeze from last good is:
 -python-quantumclient==2.2.3
 +python-neutronclient==2.3.1
 +python-quantumclient==2.2.4.2

 Looks like the new quantumclient compatibility layer is missing a few methods.

 Cheers,
 Alan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Lingxian Kong
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-09-29 Thread Gary Kotton
Hi,
Not related to the stable branches, but related to trunk. At the moment I am 
working on https://review.openstack.org/#/c/47788/. This patch adds a timeout 
to the access to the OVS database.
The neutron gate fails on this - 
http://logs.openstack.org/88/47788/4/check/check-tempest-devstack-vm-neutron/47328d3/
 (at times it passes).
Is anyone aware of a change in the OVS version over the last few days?
Thanks
Gary
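
To illustrate the kind of timeout the patch above adds (a sketch only, under
my own naming; it is not the code in review 47788), ovs-vsctl already has a
--timeout option that can be passed when the agent shells out:

import subprocess

def ovs_vsctl(args, timeout=10):
    # Bound each ovs-vsctl call with its --timeout option so a wedged
    # ovsdb-server cannot hang the caller indefinitely.
    cmd = ['ovs-vsctl', '--timeout=%d' % timeout] + args
    return subprocess.check_output(cmd)

# e.g. ovs_vsctl(['list-ports', 'br-int'])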

From: 孔令贤 anlin.k...@gmail.com
Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: Sunday, September 29, 2013 11:15 AM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Gate issues - what you can do to help

I hit the same problem; I hope the situation is getting better. How can I help?


2013-09-29 07:05:55.248 | Traceback (most recent call last):
2013-09-29 07:05:55.248 |   File 
/home/jenkins/workspace/gate-nova-python27/nova/tests/api/openstack/compute/contrib/test_quantum_security_groups.py,
 line 200, in test_associate
2013-09-29 07:05:55.248 | self.manager._addSecurityGroup(req, '1', body)
2013-09-29 07:05:55.248 |   File 
/home/jenkins/workspace/gate-nova-python27/nova/api/openstack/compute/contrib/security_groups.py,
 line 448, in _addSecurityGroup
2013-09-29 07:05:55.249 | context, id, group_name)
2013-09-29 07:05:55.249 |   File 
/home/jenkins/workspace/gate-nova-python27/nova/api/openstack/compute/contrib/security_groups.py,
 line 430, in _invoke
2013-09-29 07:05:55.249 | method(context, instance, group_name)
2013-09-29 07:05:55.249 |   File 
/home/jenkins/workspace/gate-nova-python27/nova/compute/api.py, line 163, in 
wrapped
2013-09-29 07:05:55.249 | return func(self, context, target, *args, 
**kwargs)
2013-09-29 07:05:55.249 |   File 
/home/jenkins/workspace/gate-nova-python27/nova/network/security_group/quantum_driver.py,
 line 333, in add_to_instance
2013-09-29 07:05:55.249 | security_group_id = 
quantumv20.find_resourceid_by_name_or_id(
2013-09-29 07:05:55.250 | AttributeError: 'module' object has no attribute 
'find_resourceid_by_name_or_id'


2013/9/29 Alan Pevec ape...@gmail.com
 1) Please *do not* Approve or Reverify stable/* patches. The pyparsing
 requirements conflict with neutron client from earlier in the week is still
 not resolved on stable/*.

Also there's an issue with quantumclient and Nova stable/grizzly:
https://jenkins01.openstack.org/job/periodic-nova-python27-stable-grizzly/34/console
...nova/network/security_group/quantum_driver.py, line 101, in get
 id = quantumv20.find_resourceid_by_name_or_id(
AttributeError: 'module' object has no attribute 'find_resourceid_by_name_or_id'

Relevant difference in pip freeze from last good is:
-python-quantumclient==2.2.3
+python-neutronclient==2.3.1
+python-quantumclient==2.2.4.2

Looks like the new quantumclient compatibility layer is missing a few methods.

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Lingxian Kong
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-09-28 Thread Alan Pevec
 1) Please *do not* Approve or Reverify stable/* patches. The pyparsing
 requirements conflict with neutron client from earlier in the week is still
 not resolved on stable/*.

Also there's an issue with quantumclient and Nova stable/grizzly:
https://jenkins01.openstack.org/job/periodic-nova-python27-stable-grizzly/34/console
...nova/network/security_group/quantum_driver.py, line 101, in get
 id = quantumv20.find_resourceid_by_name_or_id(
AttributeError: 'module' object has no attribute 'find_resourceid_by_name_or_id'

Relevant difference in pip freeze from last good is:
-python-quantumclient==2.2.3
+python-neutronclient==2.3.1
+python-quantumclient==2.2.4.2

Looks like the new quantumclient compatibility layer is missing a few methods.
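
A quick local check (a sketch; the import is assumed to match the one in
Nova's quantum_driver.py shown in the traceback above) to see which helpers
the installed compatibility layer fails to expose:

from quantumclient.quantum import v2_0 as quantumv20

expected = ['find_resourceid_by_name_or_id']
missing = [name for name in expected if not hasattr(quantumv20, name)]
print('missing from the compat layer: %s' % missing)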

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gate issues - what you can do to help

2013-09-27 Thread Sean Dague
Right now the gate is extremely full, there are a number of reasons for 
that. What's more important right now is to try not to make it worse 
while we get resolution on things.


1) Please *do not* Approve or Reverify stable/* patches. The pyparsing 
requirements conflict with neutron client from earlier in the week is 
still not resolved on stable/*. A stable/* approved patch will not get 
through the gate, and it will just reset everyone else, adding 40 
minutes to the gate time for all patches in the queue.


2) Please help with the 2 bugs that Joe Gordon posted earlier.

There is a Neutron issue, which looks like it might be a db deadlock - 
https://bugs.launchpad.net/neutron/+bug/1230407 which is bouncing jobs a 
lot in the gate


There is a Volumes test fail 
https://bugs.launchpad.net/tempest/+bug/1226337, which looks like it is 
tgt quietly failing to spin up a volume, and cinder not noticing. We've 
got a change in the check queue to try to enable more debugging to get 
to the bottom of it. However more help on it would be useful.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-09-27 Thread Monty Taylor


On 09/28/2013 12:08 AM, Gareth wrote:
 
 
 
 On Fri, Sep 27, 2013 at 9:43 PM, Sean Dague s...@dague.net wrote:
 
 Right now the gate is extremely full, there are a number of reasons
 for that. What's more important right now is to try not to make it
 worse while we get resolution on things.
 
 1) Please *do not* Approve or Reverify stable/* patches. The
 pyparsing requirements conflict with neutron client from earlier in
 the week is still not resolved on stable/*. A stable/* approved
 patch will not get through the gate, and it will just reset everyone
 else, adding 40 minutes to the gate time for all patches in the queue.
 
 
 This is not the first time we have been asked not to reverify or approve. Can
 we add a feature in the future to temporarily block all patches except
 tempest or some specified ones? Or give some important patches priority
 in the gate.

Agree with the sentiment. We had a good brainstorming session in IRC
about better ways to handle this systemically. We're going to test out a
couple of them and hopefully have a better answer soonish.

 2) Please help with the 2 bugs that Joe Gordon posted earlier.
 
 There is a Neutron issue, which looks like it might be a db deadlock
 - https://bugs.launchpad.net/neutron/+bug/1230407 which is bouncing
 jobs a lot in the gate
 
 There is a Volumes test fail
 https://bugs.launchpad.net/tempest/+bug/1226337, which looks like
 it is tgt quietly failing to spin up a volume, and cinder not
 noticing. We've got a change in the check queue to try to enable
 more debugging to get to the bottom of it. However more help on it
 would be useful.
 
 -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 _
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Gareth
 
 Cloud Computing, OpenStack, Fitness, Basketball
 OpenStack contributor
 Company: UnitedStack http://www.ustack.com/
 My promise: if you find any spelling or grammar mistakes in my email
 from Mar 1 2013, notify me
 and I'll donate $1 or ¥1 to an open organization you specify.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev