[Yahoo-eng-team] [Bug 1463790] [NEW] lbaasv2 help menu

2015-06-10 Thread Alex Syafeyev
Public bug reported:

neutron lbaas-pool-update --help should show the same output as neutron 
lbaas-pool-create --help, 
or the attributes that are editable should be marked in the output of the 
neutron lbaas-pool-show POOLID command.

__
neutron lbaas-pool-create -h
usage: neutron lbaas-pool-create [-h] [-f {shell,table,value}] [-c COLUMN]
 [--max-width integer] [--prefix PREFIX]
 [--request-format {json,xml}]
 [--tenant-id TENANT_ID] [--admin-state-down]
 [--description DESCRIPTION]
 [--session-persistence 
type=TYPE[,cookie_name=COOKIE_NAME]]
 [--name NAME] --lb-algorithm
 {ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP}
 --listener LISTENER --protocol
 {HTTP,HTTPS,TCP}

LBaaS v2 Create a pool.

optional arguments:
  -h, --help            show this help message and exit
  --request-format {json,xml}
The XML or JSON request format.
  --tenant-id TENANT_ID
The owner tenant ID.
  --admin-state-down    Set admin state up to false.
  --description DESCRIPTION
Description of the pool.
  --session-persistence type=TYPE[,cookie_name=COOKIE_NAME]
The type of session persistence to use and associated
cookie name
  --name NAME   The name of the pool.
  --lb-algorithm {ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP}
The algorithm used to distribute load between the
members of the pool.
  --listener LISTENER   The listener to associate with the pool
  --protocol {HTTP,HTTPS,TCP}
Protocol for balancing.

output formatters:
  output formatter options

  -f {shell,table,value}, --format {shell,table,value}
the output format, defaults to table
  -c COLUMN, --column COLUMN
specify the column(s) to include, can be repeated

table formatter:
  --max-width integer
Maximum display width, 0 to disable

shell formatter:
  a format a UNIX shell can parse (variable=value)

  --prefix PREFIX   add a prefix to all variable names


neutron lbaas-pool-update -h
usage: neutron lbaas-pool-update [-h] [--request-format {json,xml}] POOL

LBaaS v2 Update a given pool.

positional arguments:
  POOL  ID or name of pool to update.

optional arguments:
  -h, --help            show this help message and exit
  --request-format {json,xml}
The XML or JSON request format.

__

openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
python-neutron-lbaas-2015.1.0-3.el7ost.noarch

python-neutron-2015.1.0-1.el7ost.noarch
openstack-neutron-openvswitch-2015.1.0-1.el7ost.noarch
python-neutronclient-2.4.0-1.el7ost.noarch
openstack-neutron-2015.1.0-1.el7ost.noarch
openstack-neutron-ml2-2015.1.0-1.el7ost.noarch
openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
openstack-neutron-common-2015.1.0-1.el7ost.noarch
python-neutron-lbaas-2015.1.0-3.el7ost.noarch
python-neutron-fwaas-2015.1.0-3.el7ost.noarch
[root@puma07 ~(keystone_redhat]#

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463790

Title:
  lbaasv2 help menu

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  neutron lbaas-pool-update --help should show the same output as neutron 
lbaas-pool-create --help, 
  or the attributes that are editable should be marked in the output of the 
neutron lbaas-pool-show POOLID command.

  
__
  neutron lbaas-pool-create -h
  usage: neutron lbaas-pool-create [-h] [-f {shell,table,value}] [-c COLUMN]
   [--max-width integer] [--prefix PREFIX]
   [--request-format {json,xml}]
   [--tenant-id TENANT_ID] [--admin-state-down]
   [--description DESCRIPTION]
   [--session-persistence 
type=TYPE[,cookie_name=COOKIE_NAME]]
   [--name NAME] --lb-algorithm
   {ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP}
   --listener LISTENER --protocol
   {HTTP,HTTPS,TCP}

  LBaaS v2 

[Yahoo-eng-team] [Bug 1463791] [NEW] JSCS doesn't skip external libs

2015-06-10 Thread Rob Cresswell
Public bug reported:

JSCS currently doesn't skip external libs, and thus throws many errors
that we cannot fix.
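
A minimal sketch of the kind of fix involved, assuming a .jscsrc config; the
glob is illustrative, not Horizon's actual layout:

    {
        "excludeFiles": ["horizon/static/horizon/lib/**"]
    }

JSCS skips anything matched by "excludeFiles", so pointing it at the bundled
third-party libraries keeps their style errors out of the report.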

** Affects: horizon
 Importance: Undecided
 Assignee: Rob Cresswell (robcresswell)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1463791

Title:
  JSCS doesn't skip external libs

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  JSCS currently doesn't skip external libs, and thus throws many errors
  that we cannot fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1463791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463746] [NEW] vm status incorrect if hypervisor is broken

2015-06-10 Thread Andre Naehring
Public bug reported:

If a nova-compute service is down (power failure) the instances shown
in nova list are still in active state, while nova service-list reports
down for the corresponding hypervisor.

nova list should check the hypervisor state and report unknown /
undefined for instances running on a hypervisor where the nova-compute
is down.
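
A minimal sketch of the proposed behavior (illustrative names, not nova's
actual code):

    def effective_vm_state(instance, compute_service):
        # if the host's nova-compute is down, the stored 'active' state
        # is stale at best, so report it as unknown instead
        if compute_service.state == 'down':
            return 'UNKNOWN'
        return instance.vm_state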

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463746

Title:
  vm status incorrect if hypervisor is broken

Status in OpenStack Compute (Nova):
  New

Bug description:
  If a nova-compute service is down (power failure) the instances shown
  in nova list are still in active state, while nova service-list
  reports down for the corresponding hypervisor.

  nova list should check the hypervisor state and report unknown /
  undefined for instances running on a hypervisor where the nova-compute
  is down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463816] [NEW] Healthmonitor misconfiguration - Lbaasv2

2015-06-10 Thread Alex Syafeyev
Public bug reported:

Configured an lbaasv2 health monitor to send ping requests; it sends HTTP
requests instead.

neutron lbaas-healthmonitor-create --delay 1 --max-retries 3 --timeout 3
--type PING --pool 10240065-efc0-4390-abd8-28266ccbaa37


[root@puma07 ~(keystone_redhat]# ip netns
qlbaas-43072366-21e9-4d78-8a75-ff3152cbfc70
qrouter-40aacb08-eceb-459c-9f99-d585793df812
qdhcp-6939cdc0-078c-41e5-9f91-301400af464a
[root@puma07 ~(keystone_redhat]# ip netns exec 
qlbaas-43072366-21e9-4d78-8a75-ff3152cbfc70
No command specified
[root@puma07 ~(keystone_redhat]# ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 1171  bytes 89156 (87.0 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1171  bytes 89156 (87.0 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapd6c6f4d7-47: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.1.10  netmask 255.255.255.0  broadcast 192.168.1.255
inet6 fe80::f816:3eff:fe77:c357  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:77:c3:57  txqueuelen 0  (Ethernet)
RX packets 683386  bytes 50012943 (47.6 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1346240  bytes 93670853 (89.3 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@puma07 ~(keystone_redhat]# tcpdump -i tapd6c6f4d7-47 tcp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tapd6c6f4d7-47, link-type EN10MB (Ethernet), capture size 65535 
bytes
15:25:23.483803 IP puma07.scl.lab.tlv.redhat.com.34860 > 192.168.1.7.http: 
Flags [S], seq 3945578266, win 14600, options [mss 1460,sackOK,TS val 187127554 
ecr 0,nop,wscale 7], length 0
15:25:23.483854 IP puma07.scl.lab.tlv.redhat.com.55473 > 192.168.1.9.http: 
Flags [S], seq 3472828468, win 14600, options [mss 1460,sackOK,TS val 187127554 
ecr 0,nop,wscale 7], length 0
15:25:23.483909 IP puma07.scl.lab.tlv.redhat.com.45636 > 192.168.1.8.http: 
Flags [S], seq 3334280218, win 14600, options [mss 1460,sackOK,TS val 187127554 
ecr 0,nop,wscale 7], length 0
15:25:23.484181 IP 192.168.1.8.http > puma07.scl.lab.tlv.redhat.com.45636: 
Flags [S.], seq 1218837381, ack 3334280219, win 13480, options [mss 
1360,sackOK,TS val 86951874 ecr 187127554,nop,wscale 7], length 0
15:25:23.484237 IP 192.168.1.9.http > puma07.scl.lab.tlv.redhat.com.55473: 
Flags [S.], seq 1626993153, ack 3472828469, win 13480, options [mss 
1360,sackOK,TS val 86951510 ecr 187127554,nop,wscale 7], length 0
15:25:23.484285 IP puma07.scl.lab.tlv.redhat.com.45636 > 192.168.1.8.http: 
Flags [R.], seq 1, ack 1, win 115, options [nop,nop,TS val 187127555 ecr 
86951874], length 0
15:25:23.484319 IP 192.168.1.7.http > puma07.scl.lab.tlv.redhat.com.34860: 
Flags [S.], seq 1919638368, ack 3945578267, win 13480, options [mss 
1360,sackOK,TS val 86951619 ecr 187127554,nop,wscale 7], length 0
15:25:23.484357 IP puma07.scl.lab.tlv.redhat.com.55473 > 192.168.1.9.http: 
Flags [R.], seq 1, ack 1, win 115, options [nop,nop,TS val 187127555 ecr 
86951510], length 0
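
For context, a hedged reading of the capture: haproxy (the lbaasv2 reference
backend) has no ICMP check, so a PING-type monitor presumably falls back to a
plain TCP connect probe on the member port, which matches the SYN / SYN-ACK /
RST exchanges against port 80 above rather than a true ping. An illustrative
backend stanza, not the driver's actual rendered config:

    backend pool_10240065
        # PING and TCP monitor types render to a bare TCP connect check;
        # only HTTP/HTTPS types would add "option httpchk"
        server member1 192.168.1.7:80 check inter 1s fall 3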


python-neutron-2015.1.0-1.el7ost.noarch
openstack-neutron-openvswitch-2015.1.0-1.el7ost.noarch
python-neutronclient-2.4.0-1.el7ost.noarch
openstack-neutron-2015.1.0-1.el7ost.noarch
openstack-neutron-ml2-2015.1.0-1.el7ost.noarch
openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
openstack-neutron-common-2015.1.0-1.el7ost.noarch
python-neutron-lbaas-2015.1.0-3.el7ost.noarch
python-neutron-fwaas-2015.1.0-3.el7ost.noarch

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas lbaasv2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463816

Title:
  Healthmonitor misconfiguration - Lbaasv2

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Configured an lbaasv2 health monitor to send ping requests; it sends
  HTTP requests instead.

  neutron lbaas-healthmonitor-create --delay 1 --max-retries 3 --timeout
  3 --type PING --pool 10240065-efc0-4390-abd8-28266ccbaa37


  [root@puma07 ~(keystone_redhat]# ip netns
  qlbaas-43072366-21e9-4d78-8a75-ff3152cbfc70
  qrouter-40aacb08-eceb-459c-9f99-d585793df812
  qdhcp-6939cdc0-078c-41e5-9f91-301400af464a
  [root@puma07 ~(keystone_redhat]# ip netns exec 
qlbaas-43072366-21e9-4d78-8a75-ff3152cbfc70
  No command specified
  [root@puma07 ~(keystone_redhat]# ifconfig
  lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
  inet 127.0.0.1  netmask 255.0.0.0
  inet6 ::1  prefixlen 128  scopeid 0x10<host>
  loop  txqueuelen 0  (Local Loopback)
  RX packets 1171  bytes 89156 (87.0 KiB)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 1171  bytes 89156 (87.0 KiB)
  TX errors 0  dropped 

[Yahoo-eng-team] [Bug 1460054] Re: Instance details: switch between tabs is not possible anymore

2015-06-10 Thread Rob Cresswell
** Changed in: horizon
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1460054

Title:
  Instance details: switch between tabs is not possible anymore

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Issue
  =====
  It's not possible anymore to switch between the tabs in the Instance details 
view.

  Steps to reproduce
  ==================

  * launch an instance
  * go to the details view of that instance
  * click on tab "Console"

  Expected behavior
  =================

  The tab "Console" opens

  Actual behavior
  ===============

  The tab "Overview" is still in focus. The behavior is independent of
  the chosen tab. IOW, it's the same behavior when clicked on tabs "Log"
  or "Action Log".

  Firebug shows:
  TypeError: horizon.conf.spinner_options is undefined

  Horizon log:
  2015-05-28 15:54:55.100358 Not Found: 
/horizon/lib/bootstrap_datepicker/datepicker3.css
  2015-05-28 15:54:55.100930 Not Found: /horizon/lib/rickshaw.css

  Same behavior with browsers Firefox and Chrome


  Logs & Env.
  ===========

  * Devstack
  * Last horizon commit: 65db6d33aa40a202cd16ad60e08273f715a67745
  * Last Nova commit: e5c169d15528a8e2eadb8eca668ea0d183cf8648

  References
  ==

  -

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1460054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463806] [NEW] Neutron database: cisco_csr_identifier_map.ipsec_site_conn_id has wrong length: 64 vs 36 in ipsec_site_connections.id

2015-06-10 Thread Vladislav Belogrudov
Public bug reported:

cisco_csr_identifier_map.ipsec_site_conn_id is a foreign key to
ipsec_site_connections.id. The former is varchar(64) while the latter
is varchar(36). Some database engines (e.g. NDB / MySQL Cluster) are
very strict about mismatched sizes of foreign keys, and the migration script
neutron/db/migration/alembic_migrations/versions/24c7ea5160d7_cisco_csr_vpnaas.py
fails.
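
A hedged sketch of the kind of migration fix involved (illustrative, not the
actual patch under review):

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # shrink the FK column to match the varchar(36) primary key it
        # references, so strict engines accept the constraint
        op.alter_column('cisco_csr_identifier_map', 'ipsec_site_conn_id',
                        type_=sa.String(36), existing_type=sa.String(64),
                        existing_nullable=True)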

** Affects: neutron
 Importance: Undecided
 Assignee: Vladislav Belogrudov (vlad-belogrudov)
 Status: In Progress


** Tags: database mysql ndb neutron

** Changed in: neutron
 Assignee: (unassigned) => Vladislav Belogrudov (vlad-belogrudov)

** Changed in: neutron
   Status: New => In Progress

** Tags added: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463806

Title:
  Neutron database: cisco_csr_identifier_map.ipsec_site_conn_id has
  wrong length: 64 vs 36 in ipsec_site_connections.id

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  cisco_csr_identifier_map.ipsec_site_conn_id is a foreign key to
  ipsec_site_connections.id. The former is varchar(64) while the latter
  is varchar(36). Some database engines (e.g. NDB / MySQL Cluster) are
  very strict about mismatched sizes of foreign keys, and the migration
  script
  neutron/db/migration/alembic_migrations/versions/24c7ea5160d7_cisco_csr_vpnaas.py
  fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463800] [NEW] Confusing errors appear after running netns-cleanup with --force attribute

2015-06-10 Thread Toni Freger
Public bug reported:

The setup: Controller, Compute and 2 Network nodes 
KILO - VRRP on RHEL7.1

Trying to delete all alive namespaces (alive = router with attached interface 
to the network).
The command succeeded, but a lot of error messages appear.

 neutron-netns-cleanup --config-file=/etc/neutron/neutron.conf 
--config-file=/etc/neutron/dhcp_agent.ini --force
2015-06-07 11:41:14.760 2623 INFO neutron.common.config [-] Logging enabled!
2015-06-07 11:41:14.761 2623 INFO neutron.common.config [-] 
/usr/bin/neutron-netns-cleanup version 2015.1.0
2015-06-07 11:41:16.777 2623 WARNING oslo_config.cfg [-] Option 
"use_namespaces" from group "DEFAULT" is deprecated for removal.  Its value may 
be silently ignored in the future.
2015-06-07 11:41:17.193 2623 ERROR neutron.agent.linux.utils [-] 
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-0239082e-0817-430f-a183-581cc995da28', 'ip', 'link', 
'delete', 'qr-c3e98790-f6']
Exit code: 2
Stdin: 
Stdout: 
Stderr: RTNETLINK answers: Operation not supported

2015-06-07 11:41:17.592 2623 ERROR neutron.agent.linux.utils [-] 
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-0239082e-0817-430f-a183-581cc995da28', 'ip', 'link', 
'delete', 'ha-1a34b88b-13']
Exit code: 2
Stdin: 
Stdout: 
Stderr: RTNETLINK answers: Operation not supported

2015-06-07 11:41:18.655 2623 ERROR neutron.agent.linux.utils [-] 
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-c3fb97a3-8547-4008-9360-daa940906da3', 'ip', 'link', 
'delete', 'qr-ffdaf269-a2']
Exit code: 2
Stdin: 
Stdout: 
Stderr: RTNETLINK answers: Operation not supported

2015-06-07 11:41:19.052 2623 ERROR neutron.agent.linux.utils [-] 
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-c3fb97a3-8547-4008-9360-daa940906da3', 'ip', 'link', 
'delete', 'ha-e4a43b4c-79']
Exit code: 2
Stdin: 
Stdout: 
Stderr: RTNETLINK answers: Operation not supported

2015-06-07 11:41:22.065 2623 ERROR neutron.agent.linux.utils [-] 
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-082e1275-c4d5-445b-a9c3-6bb0b7fe0b6a', 'ip', 'link', 
'delete', 'qr-14f1c00c-6a']
Exit code: 2
Stdin: 
Stdout: 
Stderr: RTNETLINK answers: Operation not supported

2015-06-07 11:41:22.490 2623 ERROR neutron.agent.linux.utils [-] 
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-082e1275-c4d5-445b-a9c3-6bb0b7fe0b6a', 'ip', 'link', 
'delete', 'qg-72ded7f8-ec']
Exit code: 2
Stdin: 
Stdout: 
Stderr: RTNETLINK answers: Operation not supported

2015-06-07 11:41:22.881 2623 ERROR neutron.agent.linux.utils [-] 
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-082e1275-c4d5-445b-a9c3-6bb0b7fe0b6a', 'ip', 'link', 
'delete', 'ha-9630f5a6-73']
Exit code: 2
Stdin: 
Stdout: 
Stderr: RTNETLINK answers: Operation not supported
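
For context, reproducing one of the failing calls by hand (a sketch using the
first router ID from the log above):

    # qr-/ha-/qg- devices are OVS internal ports, so "ip link delete" is
    # presumably expected to fail on them; removal goes through ovs-vsctl
    sudo ip netns exec qrouter-0239082e-0817-430f-a183-581cc995da28 \
        ip link delete qr-c3e98790-f6
    # RTNETLINK answers: Operation not supported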

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463800

Title:
  Confusing errors appear after running netns-cleanup with --force
  attribute

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The setup: Controller, Compute and 2 Network nodes 
  KILO - VRRP on RHEL7.1

  Trying to delete all alive namespaces (alive = router with attached 
interface to the network).
  The command succeeded, but a lot of error messages appear.

   neutron-netns-cleanup --config-file=/etc/neutron/neutron.conf 
--config-file=/etc/neutron/dhcp_agent.ini --force
  2015-06-07 11:41:14.760 2623 INFO neutron.common.config [-] Logging enabled!
  2015-06-07 11:41:14.761 2623 INFO neutron.common.config [-] 
/usr/bin/neutron-netns-cleanup version 2015.1.0
  2015-06-07 11:41:16.777 2623 WARNING oslo_config.cfg [-] Option 
"use_namespaces" from group "DEFAULT" is deprecated for removal.  Its value may 
be silently ignored in the future.
  2015-06-07 11:41:17.193 2623 ERROR neutron.agent.linux.utils [-] 
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-0239082e-0817-430f-a183-581cc995da28', 'ip', 'link', 
'delete', 'qr-c3e98790-f6']
  Exit code: 2
  Stdin: 
  Stdout: 
  Stderr: RTNETLINK answers: Operation not supported

  2015-06-07 11:41:17.592 2623 ERROR neutron.agent.linux.utils [-] 
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-0239082e-0817-430f-a183-581cc995da28', 'ip', 'link', 
'delete', 'ha-1a34b88b-13']
  Exit code: 2
  Stdin: 
  Stdout: 
  Stderr: RTNETLINK answers: Operation not supported

  2015-06-07 11:41:18.655 2623 ERROR neutron.agent.linux.utils [-] 
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 

[Yahoo-eng-team] [Bug 1463729] [NEW] refactor checking env. var INTEGRATION_TESTS

2015-06-10 Thread Martin Pavlásek
Public bug reported:

There is a positive branch, much bigger than the negative one (which just
raises an exception). It's not very clear, and this nesting is not necessary
at all.
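
A minimal sketch of the guard-clause style this suggests (names and the exact
environment check are assumptions, not Horizon's actual code):

    import os
    import unittest

    class BaseIntegrationTest(unittest.TestCase):
        def setUp(self):
            # handle the negative branch first and bail out early,
            # so the former "positive branch" continues un-nested
            if os.environ.get('INTEGRATION_TESTS') != '1':
                raise unittest.SkipTest('INTEGRATION_TESTS not set')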

** Affects: horizon
 Importance: Undecided
 Assignee: Martin Pavlásek (mpavlase)
 Status: In Progress


** Tags: integration-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1463729

Title:
  refactor checking env. var INTEGRATION_TESTS

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There is a positive branch, much bigger than the negative one (which just
  raises an exception). It's not very clear, and this nesting is not
  necessary at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1463729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463372] Re: nova secgroup-list-rules shows empty table

2015-06-10 Thread Markus Zoeller
** Also affects: python-novaclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463372

Title:
   nova secgroup-list-rules shows empty table

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Nova:
  New

Bug description:
We see no security group rules with the nova command.
We should see the existing rules even with the nova command, especially if we 
see the rules in the GUI via the Compute tab.

  1. see security groups with

  neutron security-group-rule-list

  2. see security groups with nova command

  nova secgroup-list-rules GROUPID

  nova secgroup-list-rules 54db0a3c-fc5d-4faf-8b1a
  +-------------+-----------+---------+----------+--------------+
  | IP Protocol | From Port | To Port | IP Range | Source Group |
  +-------------+-----------+---------+----------+--------------+
  |             |           |         |          | default      |
  |             |           |         |          | default      |
  +-------------+-----------+---------+----------+--------------+

  
  neutron security-group-rule-list

  +--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
  | id                                   | security_group | direction | ethertype | protocol/port | remote          |
  +--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
  | 0e1cdfae-38d6-4d58-b624-011c2c05e165 | default        | ingress   | IPv6      | any           | default (group) |
  | 13c64385-ac4c-4321-bd3f-ec3e0ca939e1 | default        | ingress   | IPv4      | any           | default (group) |
  | 261ae2ec-686c-4e53-9578-1f55d92e280d | default        | egress    | IPv4      | any           | any             |
  | 41071f04-db2c-4e36-b5f0-8da2331e0382 | sec_group      | egress    | IPv4      | icmp          | any             |
  | 45639c5d-cf4d-4231-a462-b180b9e52eaf | default        | egress    | IPv6      | any           | any             |
  | 5bab336e-410f-4323-865a-eeafee3fc3eb | sec_group      | ingress   | IPv4      | icmp          | any             |
  | 5e0cb33f-0a3c-41f8-8562-a549163d655e | sec_group      | egress    | IPv6      | any           | any             |
  | 67409c83-3b62-4ba5-9e0d-93b23a81722a | default        | egress    | IPv4      | any           | any             |
  | 82676e25-f37c-4c57-9f7e-ffbe481501b5 | sec_group      | egress    | IPv4      | any           | any             |
  | 89c232f4-ec90-46ba-989f-87d7348a9ea9 | default        | ingress   | IPv4      | any           | default (group) |
  | ad50904e-3cd4-43e2-9ab4-c7cb5277cc4d | sec_group      | egress    | IPv4      | 1-65535/tcp   | any             |
  | c3386b79-06a8-4609-8db7-2924e092e5e9 | default        | egress    | IPv6      | any           | any             |
  | c37fe4d0-01b4-40f9-a069-15c8f3edffe4 | default        | egress    | IPv6      | any           | any             |
  | c51371f1-d3ae-4223-a044-f7b9b2eeb8a1 | sec_group      | ingress   | IPv4      | 1-65535/udp   | any             |
  | d3d6c1b3-bde5-45ce-a950-5bfd0fc7fc5c | default        | ingress   | IPv6      | any           | default (group) |
  | d4888c02-0b56-412e-bf02-dfd27ce84580 | sec_group      | egress    | IPv4      | 1-65535/udp   | any             |
  | d7e0aee8-eee4-4ca1-b67e-ec4864a71492 | default        | ingress   | IPv4      | any           | default (group) |
  | df6504e5-0adb-411a-9313-4bad7074c42e | default        | ingress   | IPv6      | any           | default (group) |
  | e0ef6e04-575b-43ed-8179-c221d1e4f962 | default        | egress    | IPv4      | any           | any             |
  | e828f2ef-518f-4c67-a328-6dafc16431b9 | sec_group      | ingress   | IPv4      | 1-65535/tcp   | any             |
  +--------------------------------------+----------------+-----------+-----------+---------------+-----------------+


  Kilo+rhel7.1
  python-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.0-1.el7ost.noarch
  python-neutronclient-2.4.0-1.el7ost.noarch
  openstack-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-ml2-2015.1.0-1.el7ost.noarch
  openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-common-2015.1.0-1.el7ost.noarch
  python-neutron-lbaas-2015.1.0-3.el7ost.noarch
  python-neutron-fwaas-2015.1.0-3.el7ost.noarch

  
  openstack-nova-common-2015.1.0-4.el7ost.noarch
  openstack-nova-cert-2015.1.0-4.el7ost.noarch
  openstack-nova-compute-2015.1.0-4.el7ost.noarch
  openstack-nova-console-2015.1.0-4.el7ost.noarch
  python-nova-2015.1.0-4.el7ost.noarch
  openstack-nova-scheduler-2015.1.0-4.el7ost.noarch
  python-novaclient-2.23.0-1.el7ost.noarch
  openstack-nova-api-2015.1.0-4.el7ost.noarch
  

[Yahoo-eng-team] [Bug 1463713] [NEW] Creating tasks using CURL raises 500 internal server Error

2015-06-10 Thread GB21
Public bug reported:

When trying to create tasks by importing an image using the curl method I
got an Internal Server Error. Possible causes might include some unhandled
exception. We could consider changing this behavior to return some kind
of 40x message.

Steps to reproduce
URL:  http://10.0.2.15:9292/v2/tasks
Headers:
  X-Auth-Token: XYZ
  Content-Type: application/json
  Accept-Encoding: gzip, deflate
  Accept: */*
  User-Agent: python-glanceclient
  Connection: keep-alive
Data:
['{"type": "import", "input": {"import_from": 
"https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img",
 "import_from_format": "raw", "image_properties": {"name": 
"test-conversion-1", "container_format": "bare"}}}']

Result: 500 Internal Server Error
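
For reference, a hedged reconstruction of the request as an explicit curl call
(the token and endpoint are the placeholders from the report):

    curl -i -X POST http://10.0.2.15:9292/v2/tasks \
      -H "X-Auth-Token: XYZ" \
      -H "Content-Type: application/json" \
      -d '{"type": "import",
           "input": {"import_from": "https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img",
                     "import_from_format": "raw",
                     "image_properties": {"name": "test-conversion-1",
                                          "container_format": "bare"}}}'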

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1463713

Title:
  Creating tasks using CURL raises 500 internal server Error

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  When trying to create tasks by importing an image using the curl method I
  got an Internal Server Error. Possible causes might include some
  unhandled exception. We could consider changing this behavior to
  return some kind of 40x message.

  Steps to reproduce
  URL:  http://10.0.2.15:9292/v2/tasks
  Headers:
X-Auth-Token: XYZ
Content-Type: application/json
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: python-glanceclient
Connection: keep-alive
  Data:
  ['{"type": "import", "input": {"import_from": 
"https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img",
 "import_from_format": "raw", "image_properties": {"name": 
"test-conversion-1", "container_format": "bare"}}}']

  Result: 500 Internal Server Error

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1463713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463688] [NEW] Periodic task shuts down instance

2015-06-10 Thread Gary Kotton
Public bug reported:

When the periodic task ran, it detected that an instance was in a conflicting 
state: Nova was under the impression that the instance was running when it 
actually was not. This was due to an outage on the backend. When the 
instance was starting up again, the periodic task forced the instance to shut down.
In some cases this is too extreme, and the admin should decide on the action to 
take.
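
As a hedged illustration of the operator-side knob involved: the power-state
sync is a periodic task whose interval is configurable in nova.conf, and a
negative value disables it (verify the option name against your release):

    [DEFAULT]
    # interval (seconds) of the _sync_power_states periodic task;
    # -1 disables the task entirely
    sync_power_state_interval = -1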

** Affects: nova
 Importance: High
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463688

Title:
  Periodic task shuts down instance

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When the periodic task ran, it detected that an instance was in a conflicting 
state: Nova was under the impression that the instance was running when it 
actually was not. This was due to an outage on the backend. When the 
instance was starting up again, the periodic task forced the instance to shut down.
  In some cases this is too extreme, and the admin should decide on the action 
to take.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463727] [NEW] unicode is not string

2015-06-10 Thread Dave Chen
Public bug reported:

In the testcase of test_v3_token_data_helper_populate_audit_info_string
[1], we generate audit_info via base64.urlsafe_b64encode; the value
returned is a string, but actually the audit_info should be unicode
instead of a string type.

This will hide some issues that the testcase should detect [2], instead
of simply passing the test.


[1] 
https://github.com/openstack/keystone/blob/master/keystone/tests/unit/token/test_token_data_helper.py#L31
[2] 
https://review.openstack.org/#/c/125410/11/keystone/token/providers/common.py
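
A minimal sketch of the mismatch on Python 2 (illustrative, not the testcase
itself):

    import base64

    audit_id = base64.urlsafe_b64encode(b'\x00' * 16).rstrip(b'=')
    print(type(audit_id))                # <type 'str'> on Python 2, not unicode
    audit_id = audit_id.decode('utf-8')  # presumably what the test should feed in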

** Affects: keystone
 Importance: Undecided
 Assignee: Dave Chen (wei-d-chen)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Dave Chen (wei-d-chen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1463727

Title:
  unicode is not string

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In the testcase of
  test_v3_token_data_helper_populate_audit_info_string [1], we generate
  audit_info via base64.urlsafe_b64encode; the value returned is a
  string, but actually the audit_info should be unicode instead of a
  string type.

  This will hide some issues that the testcase should detect [2],
  instead of simply passing the test.


  [1] 
https://github.com/openstack/keystone/blob/master/keystone/tests/unit/token/test_token_data_helper.py#L31
  [2] 
https://review.openstack.org/#/c/125410/11/keystone/token/providers/common.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1463727/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463733] [NEW] Image metadata update view doesn't search for image property key name

2015-06-10 Thread Markus Zoeller
Public bug reported:

Issue
=====
The image metadata update view doesn't search for the key names of image 
metadata (like "os_command_line"). It only searches for the image metadata 
*display* names.

Steps to reproduce
==================

* start devstack
* login to horizon as admin
* go to Admin -> System -> Images 
* for the image "cirros ..." select the action "update metadata"
* search for "os_command_line" in the available metadata

Expected behavior
=================

The metadata "Kernel Command Line" is selected. That's the display name
of the key "os_command_line".

Actual behavior
===============

No metadata is found.

Logs & Env.
===========

* Devstack
* Horizon version
commit bd80fb930be6b8e605e11d9ee9dd36d5b3571ca4
Merge: 084ab1b 0b60758
Author: Jenkins jenk...@review.openstack.org
Date:   Wed Jun 10 05:27:29 2015 +

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1463733

Title:
  Image metadata update view doesn't search for image property key name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Issue
  =====
  The image metadata update view doesn't search for the key names of image 
metadata (like "os_command_line"). It only searches for the image 
metadata *display* names.

  Steps to reproduce
  ==================

  * start devstack
  * login to horizon as admin
  * go to Admin -> System -> Images 
  * for the image "cirros ..." select the action "update metadata"
  * search for "os_command_line" in the available metadata

  Expected behavior
  =================

  The metadata "Kernel Command Line" is selected. That's the display
  name of the key "os_command_line".

  Actual behavior
  ===============

  No metadata is found.

  Logs & Env.
  ===========

  * Devstack
  * Horizon version
  commit bd80fb930be6b8e605e11d9ee9dd36d5b3571ca4
  Merge: 084ab1b 0b60758
  Author: Jenkins jenk...@review.openstack.org
  Date:   Wed Jun 10 05:27:29 2015 +

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1463733/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463830] [NEW] FWaaS has a missing table

2015-06-10 Thread Csaba Kallai
Public bug reported:

Hi guys,

I installed FWaaS under OpenStack Kilo. I am using the official Ubuntu cloud 
repository. Every component is on the latest version.
I configured everything, added rules, and assigned them to a policy, but I can 
not create a firewall in Horizon or via the CLI.

I checked the neutron-server.log and this was the result:

2015-06-10 15:05:49.114 10628 TRACE neutron.api.v2.resource raise 
errorclass, errorvalue
2015-06-10 15:05:49.114 10628 TRACE neutron.api.v2.resource ProgrammingError: 
(ProgrammingError) (1146, Table 'neutron.firewall_router_associations' doesn't 
exist) 'SELECT firewall_router_associations.router_id AS 
firewall_router_associations_router_id \nFROM firewall_router_associations 
\nWHERE firewall_router_associations.router_id IN (%s) AND 
firewall_router_associations.fw_id IS NOT NULL' 
('30d71169-18ff-4c88-8a7b-e0521549c067',)

The firewall_router_associations table seems to be missing from the neutron db.
I tried the neutron-db-manage command to upgrade the db (with the upgrade head 
argument), but it did not solve the problem.

Please help. Many thanks!
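
A hedged suggestion: Kilo moved the FWaaS migrations into the neutron-fwaas
package, so they live in their own migration branch and a plain "upgrade head"
may skip them. The exact invocation can vary by packaging:

    neutron-db-manage --service fwaas upgrade head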

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463830

Title:
  FWaaS has a missing table

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi guys,

  I installed FWaaS under OpenStack Kilo. I am using the official Ubuntu 
cloud repository. Every component is on the latest version.
  I configured everything, added rules, and assigned them to a policy, but I can 
not create a firewall in Horizon or via the CLI.

  I checked the neutron-server.log and this was the result:

  2015-06-10 15:05:49.114 10628 TRACE neutron.api.v2.resource raise 
errorclass, errorvalue
  2015-06-10 15:05:49.114 10628 TRACE neutron.api.v2.resource ProgrammingError: 
(ProgrammingError) (1146, Table 'neutron.firewall_router_associations' doesn't 
exist) 'SELECT firewall_router_associations.router_id AS 
firewall_router_associations_router_id \nFROM firewall_router_associations 
\nWHERE firewall_router_associations.router_id IN (%s) AND 
firewall_router_associations.fw_id IS NOT NULL' 
('30d71169-18ff-4c88-8a7b-e0521549c067',)

  The firewall_router_associations table seems to be missing from the neutron db.
  I tried the neutron-db-manage command to upgrade the db (with the upgrade 
head argument), but it did not solve the problem.

  Please help. Many thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463831] [NEW] neutron DVR poor performance

2015-06-10 Thread Matthias
Public bug reported:

Scenario:
2 VMs of same tenant but in different subnets talk to each other. The traffic 
flow is ...

Traffic VM1 to VM2:
===== CN1 =====          ===== CN2 =====
VM1---br-int---Router---br-int---br-tun---br-tun---br-int---VM2

Traffic VM2 to VM1:
===== CN2 =====          ===== CN1 =====
VM2---br-int---Router---br-int---br-tun---br-tun---br-int---VM1

This works as designed; however obviously br-int of CN1 never gets
traffic from Router of CN1 (except the very first ARP response), same
for br-int of CN2. This might lead to flow (or mac?) timeout on br-int
after 300 secs and degrades performance massively because traffic is
flooded.

Changing the mac-addr aging timer influences the issue; change it to 30 (default 
300) and the issue occurs after 30 seconds (instead of 300):
#ovs-vsctl set bridge br-int other_config:mac-aging-time=30
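
One way to observe the aging directly (a sketch; fa:16:3e is just the usual
OpenStack MAC prefix):

    # dump br-int's learned-MAC table; once the peer's entry ages out,
    # return traffic is flooded until the MAC is relearned
    ovs-appctl fdb/show br-int | grep fa:16:3e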

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463831

Title:
  neutron DVR poor performance

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Scenario:
  2 VMs of same tenant but in different subnets talk to each other. The traffic 
flow is ...

  Traffic VM1 to VM2:
  ===== CN1 =====          ===== CN2 =====
  VM1---br-int---Router---br-int---br-tun---br-tun---br-int---VM2

  Traffic VM2 to VM1:
  ===== CN2 =====          ===== CN1 =====
  VM2---br-int---Router---br-int---br-tun---br-tun---br-int---VM1

  This works as designed; however obviously br-int of CN1 never gets
  traffic from Router of CN1 (except the very first ARP response), same
  for br-int of CN2. This might lead to flow (or mac?) timeout on br-int
  after 300 secs and degrades performance massively because traffic is
  flooded.

  Changing the mac-addr aging timer influences the issue; change it to 30 (default 
300) and the issue occurs after 30 seconds (instead of 300):
  #ovs-vsctl set bridge br-int other_config:mac-aging-time=30

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463831/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440773] Re: Remove WritableLogger as eventlet has a real logger interface in 0.17.2

2015-06-10 Thread Igor Pugovkin
** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Igor Pugovkin (ipugovkin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440773

Title:
  Remove WritableLogger as eventlet has a real logger interface in
  0.17.2

Status in Cinder:
  New
Status in Manila:
  In Progress
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in Logging configuration library for OpenStack:
  Fix Released

Bug description:
  Info from Sean on IRC:

  the patch to use a real logger interface in eventlet has been released
  in 0.17.2, which means we should be able to phase out
  https://github.com/openstack/oslo.log/blob/master/oslo_log/loggers.py

  Eventlet PR was:
  https://github.com/eventlet/eventlet/pull/75

  thanks,
  dims

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1440773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463838] [NEW] [data processing] Stack trace for invalid cluster details page

2015-06-10 Thread Chad Roberts
Public bug reported:

If you take the URL for a cluster and edit it to reference a cluster ID
that does not exist, you get a stack trace page rather than a regular
Horizon page with a red box error message.

Easiest way to reproduce

Go to a cluster details page:  (sample url) 
/project/data_processing/clusters/2eeaf268-3bd7-4ffc-b7df-0bc43b9c126e
Tweak the ID in the url (1 character change should be plenty).
Note that the page is a stack trace rather than a regular Horizon page with a 
red box error message.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1463838

Title:
  [data processing] Stack trace for invalid cluster details page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If you take the URL for a cluster and edit it to reference a cluster
  ID that does not exist, you get a stack trace page rather than a
  regular Horizon page with a red box error message.

  Easiest way to reproduce

  Go to a cluster details page:  (sample url) 
/project/data_processing/clusters/2eeaf268-3bd7-4ffc-b7df-0bc43b9c126e
  Tweak the ID in the url (1 character change should be plenty).
  Note that the page is a stack trace rather than a regular Horizon page with a 
red box error message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1463838/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463891] [NEW] VRRP: admin_state down on HA port causes management failure to agent without proper log

2015-06-10 Thread Roey Dekel
Public bug reported:

Tried to check how admin_state down affects HA ports.
Noticed that management data between them stopped and caused them to become 
master, although traffic to the connected floating IP remained working.
Problem is: no log on the OVS agent indicated why it's processing a port update 
or why it's setting its VLAN tag to 4095.
(06:39:44 PM) amuller: there should be an INFO level log saying something like: 
"Setting port admin_state to {True/False}"
(06:39:56 PM) amuller: with the port ID of course

Current log:
2015-06-08 10:25:25.782 1055 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Port 'ha-8e0f96c5-78' has lost its 
vlan tag '1'!
2015-06-08 10:25:25.783 1055 INFO neutron.agent.securitygroups_rpc 
[req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Refresh firewall rules
2015-06-08 10:25:26.784 1055 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Port 
8e0f96c5-7891-46a4-8420-778454949bd0 updated. Details: {u'profile': {}, 
u'allowed_address_pairs': [], u'admin_state_up': False, u'network_id': 
u'6a5116a2-39f7-45bc-a432-3d624765d602', u'segmentation_id': 10, 
u'device_owner': u'network:router_ha_interface', u'physical_network': None, 
u'mac_address': u'fa:16:3e:02:cb:47', u'device': 
u'8e0f96c5-7891-46a4-8420-778454949bd0', u'port_security_enabled': True, 
u'port_id': u'8e0f96c5-7891-46a4-8420-778454949bd0', u'fixed_ips': 
[{u'subnet_id': u'f81913ba-328f-4374-96f2-1a7fd44d7fb1', u'ip_address': 
u'169.254.192.3'}], u'network_type': u'vxlan'}
2015-06-08 10:25:26.940 1055 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Configuration for device 
8e0f96c5-7891-46a4-8420-778454949bd0 completed.
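
A sketch of the log line suggested above, placed in the agent's port-update
path (placement and variable names are assumptions, not the merged fix):

    LOG.info(_LI("Setting admin_state_up to %(state)s for port %(port)s"),
             {'state': details['admin_state_up'],
              'port': details['port_id']})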

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463891

Title:
  VRRP: admin_state down on HA port causes management failure to agent
  without proper log

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Tried to check how admin_state down affects HA ports.
  Noticed that management data between them stopped and caused them to become 
master, although traffic to the connected floating IP remained working.
  Problem is: no log on the OVS agent indicated why it's processing a port 
update or why it's setting its VLAN tag to 4095.
  (06:39:44 PM) amuller: there should be an INFO level log saying something 
like: "Setting port admin_state to {True/False}"
  (06:39:56 PM) amuller: with the port ID of course

  Current log:
  2015-06-08 10:25:25.782 1055 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Port 'ha-8e0f96c5-78' has lost its 
vlan tag '1'!
  2015-06-08 10:25:25.783 1055 INFO neutron.agent.securitygroups_rpc 
[req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Refresh firewall rules
  2015-06-08 10:25:26.784 1055 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Port 
8e0f96c5-7891-46a4-8420-778454949bd0 updated. Details: {u'profile': {}, 
u'allowed_address_pairs': [], u'admin_state_up': False, u'network_id': 
u'6a5116a2-39f7-45bc-a432-3d624765d602', u'segmentation_id': 10, 
u'device_owner': u'network:router_ha_interface', u'physical_network': None, 
u'mac_address': u'fa:16:3e:02:cb:47', u'device': 
u'8e0f96c5-7891-46a4-8420-778454949bd0', u'port_security_enabled': True, 
u'port_id': u'8e0f96c5-7891-46a4-8420-778454949bd0', u'fixed_ips': 
[{u'subnet_id': u'f81913ba-328f-4374-96f2-1a7fd44d7fb1', u'ip_address': 
u'169.254.192.3'}], u'network_type': u'vxlan'}
  2015-06-08 10:25:26.940 1055 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Configuration for device 
8e0f96c5-7891-46a4-8420-778454949bd0 completed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453074] Re: [OSSA 2015-010] help_text parameter of fields is vulnerable to arbitrary html injection (CVE-2015-3219)

2015-06-10 Thread Tristan Cacqueray
All patches are now merged; shouldn't a series task be added to Horizon?

** Changed in: ossa
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453074

Title:
  [OSSA 2015-010] help_text parameter of fields is vulnerable to
  arbitrary html injection (CVE-2015-3219)

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  The Field class help_text attribute is vulnerable to code injection if
  the text is somehow taken from the user input.

  Heat UI allows to create stacks from the user input which define
  parameters. Those parameters are then converted to the input field
  which are vulnerable.

  The heat stack example exploit:

  description: Does not matter
  heat_template_version: '2013-05-23'
  outputs: {}
  parameters:
    param1:
      type: string
      label: normal_label
      description: hack=<script>alert('YOUR HORIZON IS PWNED')</script>
  resources: {}
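
A hedged sketch of the class of fix (the actual Horizon patch may differ):
treat help_text as untrusted input and escape it before rendering.

    from django.utils.html import escape

    # markup injected via a Heat parameter description is neutralized here
    field.help_text = escape(untrusted_help_text)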

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1453074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463898] [NEW] Unable to delete artifact with custom id

2015-06-10 Thread Alexey Galkin
Public bug reported:

Steps to reproduce:

1. Create new artifact with custom id:
{"description": "This is a sample plugin", "name": "Demo Artifact", 
"version": "42", "id": "123456789"}
2. Get list of artifacts. We can see new artifact with id: 123456789.
3. Delete artifact with id: 123456789
4. Get list of artifacts. 

Expected result:
We expected list without id: 123456789

Actual result:
Object with id: 123456789 exists in database and list with artifacts.

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1463898

Title:
  Unable to delete artifact with custom id

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Steps to reproduce:

  1. Create new artifact with custom id:
  {"description": "This is a sample plugin", "name": "Demo Artifact", 
"version": "42", "id": "123456789"}
  2. Get list of artifacts. We can see new artifact with id: 123456789.
  3. Delete artifact with id: 123456789
  4. Get list of artifacts. 

  Expected result:
  We expected list without id: 123456789

  Actual result:
  Object with id: 123456789 exists in database and list with artifacts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1463898/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426341] Re: loadbalancer not able to associate monitor from the horizon dashboard

2015-06-10 Thread Brad Pokorny
*** This bug is a duplicate of bug 1398754 ***
https://bugs.launchpad.net/bugs/1398754

** This bug is no longer a duplicate of bug 1404471
   Can't associate a health monitor for a neutron load balance pool
** This bug has been marked a duplicate of bug 1398754
   LBaas v1 Associate Monitor to Pool Fails

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1426341

Title:
  loadbalancer not able to associate monitor from the horizon dashboard

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  I tried to search around but can't find any answer; I even tried to post
  it on ask.openstack.org but no luck. Currently running on Juno. From
  the dashboard, when we create a monitor and click on the
  monitor name, we can see it has been assigned to a pool. But when we run
  the command neutron lb-healthmonitor-show MONITORID it shows that the
  monitor hasn't been assigned to any pools yet. From the dashboard, we
  cannot associate a pool with a monitor as it does not show the list of
  available pools; there is not even a disassociate monitor menu. But
  when we manually associate a monitor to the pool from the cli, the
  result of lb-healthmonitor-show MONITORID will show that the monitor
  is associated to the pool, and from the dashboard we can see that
  there is a menu to disassociate the monitor whereas the associate monitor
  menu is not shown anymore. I believe this is something wrong with
  horizon. Please advise if there is something that we can do on our
  side to solve this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1426341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463911] [NEW] IPV6 fragmentation and mtu issue

2015-06-10 Thread Gyula Halmos
Public bug reported:

Fragmented IPv6 packets are REJECTED by ip6tables on compute nodes. The
traffic is going through an intra-VM network and the packet loss is
hurting the system.

There is a patch for this issue:
http://patchwork.ozlabs.org/patch/434957/

I would like to know if there is any bug report or official release date
for this issue.

This is pretty critical for my deployment.

Thanks in advance,

BR,

Gyula

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ip6tables neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463911

Title:
  IPV6 fragmentation and mtu issue

Status in OpenStack Compute (Nova):
  New

Bug description:
  Fragmented IPv6 packets are REJECTED by ip6tables on compute nodes.
  The traffic is going through an intra-VM network and the packet loss
  is hurting the system.

  There is a patch for this issue:
  http://patchwork.ozlabs.org/patch/434957/

  I would like to know if there is any bug report or official release
  date for this issue.

  This is pretty critical for my deployment.

  Thanks in advance,

  BR,

  Gyula

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463911/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463948] [NEW] MySQL backend: Updating flavor extra-specs with altered case throws KeyErrors

2015-06-10 Thread Nicolas Simonds
Public bug reported:

The DB query for flavor extra-specs is case-insensitive, but the code to
handle update vs. create is case-sensitive.  This causes unexpected
behavior when trying to fix case on extra-specs:

Steps to reproduce:

Stand up a devstack with the MySQL backend.

nova flavor-key set 1 aaa=haha
nova flavor-key set 1 AAA=lolz

Expected results:

Option 1: Two extra specs, named aaa and AAA, with distinct values

Option 2: The extra spec named aaa replaced with AAA

Actual results:

a 409 Error from the client, and an exception thrown on
the backend
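
A hedged illustration of the mismatch, not nova's actual code: the DB lookup
is case-insensitive under MySQL's default *_ci collations, while the Python
update-vs-create dispatch compares spec names exactly.

    existing = {'aaa': 'haha'}   # specs returned by the case-insensitive query
    updates = {'AAA': 'lolz'}
    for key, value in updates.items():
        # False on the Python side ('AAA' != 'aaa'), so the create path is
        # taken for a row the database considers a duplicate
        if key in existing:
            existing[key] = value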

** Affects: nova
 Importance: Undecided
 Assignee: Nicolas Simonds (nicolas.simonds)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463948

Title:
  MySQL backend: Updating flavor extra-specs with altered case throws
  KeyErrors

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The DB query for flavor extra-specs is case-insensitive, but the code
  to handle update vs. create is case-sensitive.  This causes unexpected
  behavior when trying to fix case on extra-specs:

  Steps to reproduce:

  Stand up a devstack with the MySQL backend.

  nova flavor-key set 1 aaa=haha
  nova flavor-key set 1 AAA=lolz

  Expected results:

  Option 1: Two extra specs, named aaa and AAA, with distinct values

  Option 2: The extra spec named aaa replaced with AAA

  Actual results:

  a 409 Error from the client, and an exception thrown
  on the backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463918] [NEW] nova.scheduler.filters.utils is tested in two separate files

2015-06-10 Thread Ming Yang
Public bug reported:

Two separate tests [nova/tests/unit/scheduler/test_filters_utils.py] and
[nova/tests/unit/scheduler/filters/test_utils.py] appear to be testing
methods of the same file [nova/scheduler/filters/utils.py] in a very
similar fashion. They should be consolidated.

** Affects: nova
 Importance: Undecided
 Assignee: Ming Yang (mingy)
 Status: New


** Tags: low-hanging-fruit

** Changed in: nova
 Assignee: (unassigned) => Ming Yang (mingy)

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463918

Title:
  nova.scheduler.filters.utils is tested in two separate files

Status in OpenStack Compute (Nova):
  New

Bug description:
  Two separate tests [nova/tests/unit/scheduler/test_filters_utils.py]
  and [nova/tests/unit/scheduler/filters/test_utils.py] appear to be
  testing methods of the same file [nova/scheduler/filters/utils.py] in
  a very similar fashion. They should be consolidated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464023] [NEW] XML namespaces for API extensions have been deprecated

2015-06-10 Thread Sean M. Collins
Public bug reported:

The XML namespaces declared by Neutron API extensions are no longer used
and can be removed from the codebase:

http://lists.openstack.org/pipermail/openstack-dev/2015-June/066219.html
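
For context, a hedged sketch of the pattern being removed (illustrative
class and URI, not taken from a specific extension): each extension
descriptor still carries a get_namespace() classmethod left over from
the removed XML API:

class Myextension(object):
    """Illustrative extension descriptor, not a real Neutron extension."""

    @classmethod
    def get_alias(cls):
        return "my-extension"

    @classmethod
    def get_namespace(cls):
        # XML namespace URI -- dead weight now that the XML API is gone.
        return "http://docs.openstack.org/ext/my-extension/api/v1.0"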

** Affects: neutron
 Importance: Undecided
 Assignee: Sean M. Collins (scollins)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464023

Title:
  XML namespaces for API extensions have been deprecated

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The XML namespaces declared by Neutron API extensions are no longer
  used and can be removed from the codebase:

  http://lists.openstack.org/pipermail/openstack-dev/2015-June/066219.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464023/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464034] [NEW] Kilo glance image create stuck in saving and queued state

2015-06-10 Thread Alfred Shen
Public bug reported:

In the Kilo release, glance image-create (with or without
--os-image-api-version) leaves the image stuck in the queued status,
with the debug output below. There is no error in the keystone-api or
registry logs; other glance CLI commands work fine.

A similar symptom was reported in https://bugs.launchpad.net/bugs/1146830,
but it appears to have a different cause.
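
Before digging into the logs, a quick status poll can confirm the image
never leaves 'queued'. A minimal sketch, assuming the Kilo-era
python-glanceclient v2 API; the token and image id below are
placeholders:

import time
from glanceclient import Client

glance = Client('2', endpoint='http://3.39.89.230:9292', token='TOKEN')

for _ in range(30):
    image = glance.images.get('IMAGE_ID')
    print(image.status)          # stays 'queued' on an affected setup
    if image.status == 'active':
        break
    time.sleep(2)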

# dpkg -l | grep glance
ii  glance  1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Image Registry and Delivery Service - Daemons
ii  glance-api  1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Image Registry and Delivery Service - API
ii  glance-common   1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Image Registry and Delivery Service - Common
ii  glance-registry 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Image Registry and Delivery Service - Registry
ii  python-glance   1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Image Registry and Delivery Service - Python library
ii  python-glance-store 0.4.0-0ubuntu1~cloud0 
all  OpenStack Image Service store library - Python 2.x
ii  python-glanceclient 1:0.15.0-0ubuntu1~cloud0  
all  Client library for Openstack glance server.


$  glance --debug --os-image-api-version 2 image-create --file 
/tmp/cirros-0.3.4-x86_64-disk.img  --disk-format qcow2 --container-format bare  
--progress 
curl -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 
'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: 
{SHA1}252d465682ed3a3d092b0ce05954601afb56c7df' -H 'Content-Type: 
application/octet-stream' http://3.39.89.230:9292/v2/schemas/image

HTTP/1.0 200 OK
content-length: 3867
via: 1.0 sjc1intproxy01 (squid/3.1.10)
x-cache: MISS from sjc1intproxy01
x-cache-lookup: MISS from sjc1intproxy01:8080
connection: keep-alive
date: Wed, 10 Jun 2015 21:41:24 GMT
content-type: application/json; charset=UTF-8
x-openstack-request-id: req-req-d0430e7a-5e36-466d-8d79-afefe9737695

{"additionalProperties": {"type": "string"}, "name": "image", "links":
[{"href": "{self}", "rel": "self"}, {"href": "{file}", "rel":
"enclosure"}, {"href": "{schema}", "rel": "describedby"}], "properties":
{"status": {"enum": ["queued", "saving", "active", "killed", "deleted",
"pending_delete"], "type": "string", "description": "Status of the image
(READ-ONLY)"}, "tags": {"items": {"type": "string", "maxLength": 255},
"type": "array", "description": "List of strings related to the image"},
"kernel_id": {"pattern":
"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$",
"type": "string", "description": "ID of image stored in Glance that
should be used as the kernel when booting an AMI-style image.",
"is_base": false}, "container_format": {"enum": [null, "ami", "ari",
"aki", "bare", "ovf", "ova"], "type": [null, "string"], "description":
"Format of the container"}, "min_ram": {"type": "integer",
"description": "Amount of ram (in MB) required to boot image."},
"ramdisk_id": {"pattern":
"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$",
"type": "string", "description": "ID of image stored in Glance that
should be used as the ramdisk when booting an AMI-style image.",
"is_base": false}, "locations": {"items": {"required": ["url",
"metadata"], "type": "object", "properties": {"url": {"type": "string",
"maxLength": 255}, "metadata": {"type": "object"}}}, "type": "array",
"description": "A set of URLs to access the image file kept in external
store"}, "visibility": {"enum": ["public", "private"], "type": "string",
"description": "Scope of image accessibility"}, "updated_at": {"type":
"string", "description": "Date and time of the last image modification
(READ-ONLY)"}, "owner": {"type": [null, "string"], "description":
"Owner of the image", "maxLength": 255}, "file": {"type": "string",
"description": "(READ-ONLY)"}, "min_disk": {"type": "integer",
"description": "Amount of disk space (in GB) required to boot image."},
"virtual_size": {"type": [null, "integer"], "description": "Virtual
size of image in bytes (READ-ONLY)"}, "id": {"pattern":
"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$",
"type": "string", "description": "An identifier for the image"},
"size": {"type": [null, "integer"], "description": "Size of image file
in bytes (READ-ONLY)"}, "instance_uuid": {"type": "string",
"description": "ID of instance used to create this image.", "is_base":
false}, "os_distro": {"type": "string", "description": "Common name of
operating system distribution as specified in
http://docs.openstack.org/trunk/openstack-compute/admin/content/adding-images.html",
"is_base": false}, "name": {"type": [null, "string"], "description":
"Descriptive name for the image", "maxLength": 255}, "checksum":
{"type": [null, "string"], "description": "md5 hash of image contents.
(READ-ONLY)", "maxLength": 32}, "created_at": {"type": "string",
"description": "Date and time of image registration (READ-ONLY)"},
"disk_format": {"enum": [null, "ami", "ari", "aki", "vhd", "vmdk",
"raw", "qcow2", "vdi", "iso"], "type": [null, "string"], "description":
"Format of the disk"},

[Yahoo-eng-team] [Bug 1463447] Re: Incorrect column name in Endpoint class in keystone/catalog/backends/sql.py

2015-06-10 Thread Zhenzan Zhou
In my devstack environment, the endpoint table has 'region_id', not
'region'.

mysql> describe endpoint;
++--+--+-+-+---+
| Field  | Type | Null | Key | Default | Extra |
++--+--+-+-+---+
| id | varchar(64)  | NO   | PRI | NULL|   |
| legacy_endpoint_id | varchar(64)  | YES  | | NULL|   |
| interface  | varchar(8)   | NO   | | NULL|   |
| service_id | varchar(64)  | NO   | MUL | NULL|   |
| url| text | NO   | | NULL|   |
| extra  | text | YES  | | NULL|   |
| enabled| tinyint(1)   | NO   | | 1   |   |
| region_id  | varchar(255) | YES  | MUL | NULL|   |
++--+--+-+-+---+
8 rows in set (0.00 sec)

It looks like you didn't upgrade your DB schema before creating the
endpoint. Please look at
keystone/common/sql/migrate_repo/versions/053_endpoint_to_region_association.py:

Migrated the endpoint 'region' column to 'region_id'.
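
For reference, a hedged sketch of what that migration does (illustrative
code, not the verbatim file): it adds the new region_id column, copies
the old values across, and drops region, which is why an un-migrated
schema rejects INSERTs that name region_id. Running the migrations (e.g.
keystone-manage db_sync) before creating endpoints avoids the error.

# Hedged sketch of migration 053 (sqlalchemy-migrate style, as Keystone
# used in this era); column sizes and data handling are simplified.
import migrate  # noqa: enables the .create()/.drop() column extensions
from sqlalchemy import Column, MetaData, String, Table

def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    endpoint = Table('endpoint', meta, autoload=True)
    # Add the new column, copy the old values over, drop the old column.
    Column('region_id', String(255), nullable=True).create(endpoint)
    migrate_engine.execute(
        endpoint.update().values(region_id=endpoint.c.region))
    endpoint.c.region.drop()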


** Changed in: keystone
 Assignee: (unassigned) => Zhenzan Zhou (zhenzan-zhou)

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1463447

Title:
  Incorrect column name in Endpoint class in
  keystone/catalog/backends/sql.py

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  I faced this problem while following this tutorial:
  
http://docs.openstack.org/kilo/install-guide/install/apt/content/keystone-install.html
  I created the database for keystone and then ran 'openstack endpoint
  create' to populate the 'endpoint' table inside the 'keystone' database.
  As I can see from the mysql shell, there is a column named 'region' in
  the 'endpoint' table:
  http://imgur.com/a/L4GuW
  But:
  As one can see in
  
https://github.com/openstack/keystone/blob/master/keystone/catalog/backends/sql.py
  the attributes of the Endpoint class are ['id', 'interface', 'region_id',
  'service_id', 'url', 'legacy_endpoint_id', 'enabled'], so 'region_id' is
  among them. I think it's a bug, because when I run 'openstack endpoint
  create' I get this error in /var/log/apache2/keystone-error.log:

  2015-06-09 09:29:06.354991 2015-06-09 09:29:06.353 1153 TRACE 
keystone.common.wsgi Traceback (most recent call last):
  ...
  ...
  OperationalError: (OperationalError) (1054, "Unknown column 'region_id' in
  'field list'") 'INSERT INTO endpoint (id, legacy_endpoint_id, interface,
  region_id, service_id, url, enabled, extra) VALUES (%s, %s, %s, %s, %s, %s,
  %s, %s)' ('7df45cff79a6419a80ba22902494c7d3',
  '903840e1b95e4649b9eef739f26ca249', 'admin', None,
  '6cde7e716f6945428b0ee91db0c76a77', 'http://controller:35357/v2.0', 1, '{}')

  So, maybe 'region_id' inside sql.py should be just 'region'?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1463447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447884] Re: Boot from volume, block device allocate timeout cause VM error, but volume would be available later

2015-06-10 Thread Matt Riedemann
** Changed in: nova
   Importance: Low => Medium

** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => In Progress

** Changed in: nova/kilo
 Assignee: (unassigned) => Lan Qi song (lqslan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447884

Title:
  Boot from volume, block device allocate timeout cause VM error, but
  volume would be available later

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  In Progress

Bug description:
  When we try to boot multiple instances from volume (with a large image
  source) at the same time, we usually get a block device allocation
  error, as shown in nova-compute.log:

  2015-03-30 23:22:46.920 6445 WARNING nova.compute.manager [-] Volume id: 
551ea616-e1c4-4ef2-9bf3-b0ca6d4474dc finished being created but was not set as 
'available'
  2015-03-30 23:22:47.131 6445 ERROR nova.compute.manager [-] [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] Instance failed block device setup
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] Traceback (most recent call last):
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1829, in 
_prep_block_device
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] do_check_attach=do_check_attach)
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 407, in 
attach_block_devices
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] map(_log_and_attach, 
block_device_mapping)
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 405, in 
_log_and_attach
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] bdm.attach(*attach_args, 
**attach_kwargs)
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 339, in 
attach
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] do_check_attach=do_check_attach)
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 46, in 
wrapped
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] ret_val = method(obj, context, *args, 
**kwargs)
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 229, in 
attach
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] volume_api.check_attach(context, 
volume, instance=instance)
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
"/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 305, in 
check_attach
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] raise 
exception.InvalidVolume(reason=msg)
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] InvalidVolume: Invalid volume: status 
must be 'available'
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]
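
  The failure mode is a polling timeout: the compute node gives the
  volume a fixed window to become 'available' and errors out if the
  image-to-volume copy is still in flight. A minimal sketch of the
  pattern (illustrative names, not Nova's exact code; in Nova the window
  is governed by the block_device_allocate_retries options):

  import time

  def wait_for_volume_available(volume_api, context, volume_id,
                                max_attempts=60, interval=3):
      # Poll the volume status a bounded number of times, as the compute
      # manager does while preparing block devices.
      for _ in range(max_attempts):
          volume = volume_api.get(context, volume_id)
          if volume['status'] == 'available':
              return volume
          time.sleep(interval)
      # A large image-to-volume copy can outlive this window, so the VM
      # fails even though the volume turns 'available' shortly afterwards.
      raise Exception('Volume %s not available after %d attempts'
                      % (volume_id, max_attempts))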

  This error leaves the VM in ERROR status:
  
  +--------------------------------------+--------+--------+----------------------+-------------+----------+
  | ID                                   | Name   | Status | Task State           | Power State | Networks |
  +--------------------------------------+--------+--------+----------------------+-------------+----------+
  | 1fa2d7aa-8bd9-4a22-8538-0a07d9dae8aa | inst02 | ERROR  | block_device_mapping | NOSTATE     |          |
  +--------------------------------------+--------+--------+----------------------+-------------+----------+
  But the volume was in available status:
  +--------------------------------------+-----------+------+------+-------------+----------+--------------+
  |  ID                                  |   Status  | Name | Size | Volume Type | Bootable | Attached to  |