[Yahoo-eng-team] [Bug 1427014] [NEW] Images with same created timestamp breaks paging

2015-03-01 Thread Kahou Lei
Public bug reported:

If several images are created with the same timestamp, paging back and
forth through the image list scrambles their order.
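
Problems like this usually come down to a non-unique sort key. A minimal
sketch (illustrative only, not Horizon's actual pagination code) of a
marker-based pager that breaks created_at ties with the image id:

    def _paging_key(image):
        # created_at alone is not unique; the image id breaks ties
        return (image.created_at, image.id)

    def page(images, marker=None, limit=20, backwards=False):
        ordered = sorted(images, key=_paging_key, reverse=True)
        if backwards:
            ordered = list(reversed(ordered))
        if marker is not None:
            ids = [img.id for img in ordered]
            ordered = ordered[ids.index(marker) + 1:]
        return ordered[:limit]

With a total order like this, paging forwards and then backwards returns the
same sequence even when many images share one timestamp.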

** Affects: horizon
 Importance: Undecided
 Assignee: Kahou Lei (kahou82)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1427014

Title:
  Images with same created timestamp breaks paging

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If several images are created with the same timestamp, paging back and
  forth through the image list scrambles their order.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1427014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427015] [NEW] too many subnet-create cause q-dhcp failure

2015-03-01 Thread watanabe.isao
Public bug reported:

[reproduce]
1. neutron net-create testnet
2. neutron dhcp-agent-network-add dhcp_agent_id testnet
3. neutron subnet-create testnet CIDR1 --name testsub1
4. neutron subnet-create testnet CIDR2 --name testsub2
5. neutron subnet-create testnet CIDR3 --name testsub3
6. neutron subnet-create testnet CIDR4 --name testsub4
7. neutron subnet-create testnet CIDR5 --name testsub5
Since the default value of max_fixed_ips_per_port is 5, everything works up to this point.
8. neutron subnet-create testnet CIDR6 --name testsub6
An error is then logged repeatedly in q-dhcp.log (see the trace below).
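
A possible operator-side workaround (a sketch, assuming more than five
subnets per network are really needed) is to raise the limit in neutron.conf
and restart neutron-server; the network's DHCP port needs one fixed IP per
subnet, which is what hits the limit here:

    [DEFAULT]
    # default is 5
    max_fixed_ips_per_port = 10

The report itself is about the agent not handling the failure gracefully, so
this only avoids the symptom.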

[trace log]
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent Traceback (most 
recent call last):
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/dhcp/agent.py, line 112, in call_driver
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent getattr(driver, 
action)(**action_kwargs)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 132, in restart
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent self.enable()
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 205, in enable
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent interface_name 
= self.device_manager.setup(self.network)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 919, in setup
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent port = 
self.setup_dhcp_port(network)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 863, in setup_dhcp_port
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent 'fixed_ips': 
port_fixed_ips}})
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/dhcp/agent.py, line 441, in update_dhcp_port
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent 
port_id=port_id, port=port, host=self.host)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py, line 
156, in call
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent 
retry=self.retry)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py, line 90, 
in _send
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent 
timeout=timeout, retry=retry)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py, 
line 349, in send
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent retry=retry)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py, 
line 340, in _send
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent raise result
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent RemoteError: Remote 
error: InvalidInput Invalid input for operation: Exceeded maximim amount of 
fixed ips per port.
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent [u'Traceback (most 
recent call last):\n', u'  File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
142, in _dispatch_and_reply\nexecutor_callback))\n', u'  File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
186, in _dispatch\nexecutor_callback)\n', u'  File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
130, in _do_dispatch\nresult = func(ctxt, **new_args)\n', u'  File 
/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py, line 312, in 
update_dhcp_port\nreturn self._port_action(plugin, context, port, 
\'update_port\')\n', u'  File 
/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py, line 75, in 
_port_action\nreturn plugin.update_port(context, port[\'id\'], port)\n', u' 
 File /opt/stack/neutron/neutron/plugins/ml2/plugin.py, line 1014, in 
update_port\nport)\n', u'  File /opt/stack/neutron/neut
 ron/db/db_base_plugin_v2.py, line 1389, in update_port\n
original[\'mac_address\'], port[\'device_owner\'])\n', u'  File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 466, in 
_update_ips_for_port\nraise n_exc.InvalidInput(error_message=msg)\n', 
u'InvalidInput: Invalid input for operation: Exceeded maximim amount of fixed 
ips per port.\n'].
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent
2015-02-28 00:31:45.553 DEBUG oslo_concurrency.lockutils 
[req-41e2c225-2f9f-4e82-a18e-c79faf13cc49 admin admin] Lock dhcp-agent 
released by subnet_update_end :: held 0.358s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:442
2015-02-28 00:31:45.732 3011 DEBUG 

[Yahoo-eng-team] [Bug 1426867] [NEW] Remove orphaned tables: iscsi_targets, volumes

2015-03-01 Thread Attila Fazekas
Public bug reported:

The `iscsi_targets` and `volumes` tables were used by nova-volume,
which has been deprecated and removed, but the tables are still created.
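
A minimal sketch of the cleanup (assuming a sqlalchemy-migrate style
migration, as Nova used at the time; file placement and downgrade handling
are left out). iscsi_targets is dropped first in case it still carries a
foreign key to volumes:

    from sqlalchemy import MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        # drop the child table first, then the orphaned volumes table
        for name in ('iscsi_targets', 'volumes'):
            Table(name, meta, autoload=True).drop()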

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1426867

Title:
  Remove orphaned tables: iscsi_targets, volumes

Status in OpenStack Compute (Nova):
  New

Bug description:
  The `iscsi_targets` and `volumes` tables were used by nova-volume,
  which has been deprecated and removed, but the tables are still created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1426867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426873] [NEW] Remove shadow tables

2015-03-01 Thread Attila Fazekas
Public bug reported:

$ nova-manage db archive_deleted_rows 1
The command fails with an integrity error.

Nobody wants to preserve those records in the shadow tables. Instead of
fixing the archiving issue, the tables should be removed.

Later, an archive-to-/dev/null function should be added to nova-manage.
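
For context, archiving conceptually copies soft-deleted rows into the
per-table shadow_* copies and then removes them from the live table, roughly
(illustrative SQL only, not the exact statements nova-manage issues):

    INSERT INTO shadow_instances
        SELECT * FROM instances WHERE deleted != 0 LIMIT 1;
    DELETE FROM instances WHERE deleted != 0 LIMIT 1;

An integrity error of this kind typically shows up when the row being removed
from the live table is still referenced by another live row, which is why the
report proposes dropping the shadow tables rather than patching the archiver.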

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1426873

Title:
  Remove shadow tables

Status in OpenStack Compute (Nova):
  New

Bug description:
  $ nova-manage db archive_deleted_rows 1
  The command fails with an integrity error.

  Nobody wants to preserve those records in the shadow tables. Instead of
  fixing the archiving issue, the tables should be removed.

  Later, an archive-to-/dev/null function should be added to nova-manage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1426873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427022] [NEW] not releasing IP address when deleting VM, port, LB VIP

2015-03-01 Thread Roh Tae Won
Public bug reported:


I am testing with Juno 2014.2.1 release.

When I delete a neutron port, a VM, or an LB VIP, the
neutron.ipavailabilityranges table is not updated as expected
(i.e. the subnet IPs are not released for reuse).


 
##case 1. deleting neutron port


1-1 create network/subnet

# neutron net-create twroh_22net
Created a new network:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| 1e4e010a-3984-4058-88c8-59ceff79cf3d |
| name  | twroh_22net  |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 22   |
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tenant_id | 13aae994774947298d231cfd361c1ca2 |
+---+--+


# neutron subnet-create --name twroh_22subnet twroh_22net 22.22.22.0/24
Created a new subnet:
+---++
| Field | Value  |
+---++
| allocation_pools  | {start: 22.22.22.2, end: 22.22.22.254} |
| cidr  | 22.22.22.0/24  |
| dns_nameservers   ||
| enable_dhcp   | True   |
| gateway_ip| 22.22.22.1 |
| host_routes   ||
| id| 34f7be13-c34c-45fb-b2bc-fa1b64da1d25   |
| ip_version| 4  |
| ipv6_address_mode ||
| ipv6_ra_mode  ||
| name  | twroh_22subnet |
| network_id| 1e4e010a-3984-4058-88c8-59ceff79cf3d   |
| tenant_id | 13aae994774947298d231cfd361c1ca2   |
+---++

1-2 select neutron.ipavailabilityranges table

MariaDB [(none)] select * from neutron.ipavailabilityranges where 
allocation_pool_id like '1e5d%';
+--++--+
| allocation_pool_id   | first_ip   | last_ip  |
+--++--+
| 1e5d0bec-11aa-422e-955c-6b96000e334d | 22.22.22.2 | 22.22.22.254 |
+--++--+

1-3 create a port with IP 22.22.22.100

# NETID=1e4e010a-3984-4058-88c8-59ceff79cf3d
# SUBNETID=34f7be13-c34c-45fb-b2bc-fa1b64da1d25 
# neutron port-create --fixed-ip subnet_id=$SUBNETID,ip_address=22.22.22.100 
$NETID
Created a new port:
+---+-+
| Field | Value 
  |
+---+-+
| admin_state_up| True  
  |
| allowed_address_pairs |   
  |
| binding:host_id   |   
  |
| binding:profile   | {}
  |
| binding:vif_details   | {}
  |
| binding:vif_type  | unbound   
  |
| binding:vnic_type | normal
  |
| device_id |   
  |
| device_owner  |   
  |
| fixed_ips | {subnet_id: 34f7be13-c34c-45fb-b2bc-fa1b64da1d25, 
ip_address: 22.22.22.100} |
| id| 5ec5b197-d730-4e1b-8c99-057d43c0aa64  
  |
| mac_address  

[Yahoo-eng-team] [Bug 1427028] [NEW] Test bug with unicode - wyjście

2015-03-01 Thread Davanum Srinivas (DIMS)
Public bug reported:

testing nova-bugs

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- Test bug with unicode ’ 
+ Test bug with unicode - wyjście

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1427028

Title:
  Test bug with unicode - wyjście

Status in OpenStack Compute (Nova):
  New

Bug description:
  testing nova-bugs

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1427028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427032] [NEW] Enable neutron network integration testing

2015-03-01 Thread Wu Hong Guang
Public bug reported:

horizon.conf should include something like this:

#Set to neutron to test the network dashboard.
#There is no network testing by default; the default
#is nova-network.
#network=nova-network
network=neutron
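
A minimal sketch of how the integration tests could branch on that flag
(assuming horizon.conf is read with Python's stdlib ConfigParser and a
[network] section; both are illustrative, not the actual Horizon test
harness):

    import ConfigParser  # Python 2 stdlib, matching the tooling of that era

    parser = ConfigParser.SafeConfigParser()
    parser.read('horizon.conf')

    backend = 'nova-network'  # assumed default when the option is absent
    if parser.has_section('network') and parser.has_option('network', 'network'):
        backend = parser.get('network', 'network')

    if backend == 'neutron':
        # run the neutron-specific network dashboard tests
        pass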

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1427032

Title:
  Enable neutron network integration testing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  horizon.conf should include something like this:

  #Set to neutron to test the network dashboard.
  #There is no network testing by default; the default
  #is nova-network.
  #network=nova-network
  network=neutron

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1427032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427028] Re: Test bug with unicode café

2015-03-01 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1427028

Title:
  Test bug with unicode café

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  testing nova-bugs

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1427028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427052] [NEW] Problems with 'Other' option in Daily Report page

2015-03-01 Thread Yamini Sardana
Public bug reported:

The 'Other' option for the Period field on the Admin - Resource Usage - Daily
Report page has the following issues:
1. The calendar that appears when From/To is clicked does not close
automatically once a date has been picked; we have to click somewhere else on
the screen to make it disappear.
2. When we generate a report for a selected period, it does not show the
filtered information; it shows all the information available.

Expected Behavior
1. Once a date is selected in the To and From fields, the calendar should
close automatically.
2. When a time period is selected, only the information for that time period
should be displayed.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1427052

Title:
  Problems with 'Other' option in Daily Report page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The 'Other' option for the Period field on the Admin - Resource Usage -
  Daily Report page has the following issues:
  1. The calendar that appears when From/To is clicked does not close
  automatically once a date has been picked; we have to click somewhere else
  on the screen to make it disappear.
  2. When we generate a report for a selected period, it does not show the
  filtered information; it shows all the information available.

  Expected Behavior
  1. Once a date is selected in the To and From fields, the calendar should
  close automatically.
  2. When a time period is selected, only the information for that time
  period should be displayed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1427052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427053] [NEW] Browsers get 404 after creating new volume_type's extra_specs

2015-03-01 Thread Masahito Muroi
Public bug reported:

Browsers get a 404 after creating a new volume_type extra_spec because the
view for that operation returns an absolute URL. The error only occurs when
Horizon is deployed under a root path other than /, e.g. /horizon.

How to reproduce it:

1. Deploy Horizon under a root path other than /, e.g. /horizon.
2. Log in as an admin user.
3. Create a new volume_type extra_spec.

After the extra_spec is created successfully, Horizon should instead build a
relative URL using the reverse method.
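
A minimal sketch of that suggestion (the URL name and argument are
illustrative, not Horizon's actual view code): reverse() builds the redirect
from the named URL pattern, so the /horizon prefix is honoured, while a
hard-coded absolute path is not:

    from django.core.urlresolvers import reverse

    # hard-coded absolute path: breaks when Horizon is served under /horizon
    bad_url = '/admin/volumes/volume_types/123/extras/'

    # reverse() prepends the configured script prefix; the URL name and the
    # volume type id here are assumptions for the sake of the example
    good_url = reverse('horizon:admin:volumes:volume_types:extras:index',
                       args=['123'])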

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1427053

Title:
  Browsers get 404 after creating new volume_type's extra_specs

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Browsers get a 404 after creating a new volume_type extra_spec because the
  view for that operation returns an absolute URL. The error only occurs
  when Horizon is deployed under a root path other than /, e.g. /horizon.

  How to reproduce it:

  1. Deploy Horizon under a root path other than /, e.g. /horizon.
  2. Log in as an admin user.
  3. Create a new volume_type extra_spec.

  After the extra_spec is created successfully, Horizon should instead build
  a relative URL using the reverse method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1427053/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427054] [NEW] no way to know what IP spoofing rule is applied

2015-03-01 Thread Akihiro Motoki
Public bug reported:

[From the discussion in neutronclient bug 1182629]

The issue is that there is no way to confirm or update the IP-spoofing rules
(which are established by neutron implicitly).
The original bug was reported about two years ago, and I am not sure we need
to fix it now, but I think it is still worth discussing when we consider the
next step of the API.

The following are quoted from neutronclient bug 1182629.
-

Robert Collins (lifeless) wrote on 2013-05-23: 
Sure, I appreciate what the rules do - but the security-group-rule-list is 
showing no details, and the rules that are there are not described usefully. 
The port lists for DHCP in and out for instance, should be shown, but aren't. 
The IP addresses are wildcard for the most part - but not on the ip spoofing 
rule. So I don't understand why they shouldn't be shown in a useful manner.

Aaron Rosen (arosen) wrote on 2013-05-23: 
[snip related to the first point]
The second thing is that in order to use security groups you need ip spoofing 
enabled. The reason for this is if ip spoofing was not enabled an instance 
could change it's source ip in order to get around a security group rule. IMO 
displaying the ip spoofing rules does us no good.

Robert Collins (lifeless) wrote on 2013-05-25: 
[snip related to the first point]
Secondly, ip spoofing is definitely important - but we can modify the DHCP rule 
like so:
  -A quantum-openvswi-oaa210549-d -m mac --mac-source FA:16:3E:7F:4F:76 -s 
0.0.0.0/32 -p udp -m udp --sport 68 --dport 67 -j RETURN
To be more tight: 0.0.0.0/32 is the address for DHCP requests; only that and 
the assigned address may be used.

Akihiro Motoki (amotoki) wrote on 2013-06-05: 
[snip related to the first point]
Regarding the second point, specifying the source MAC actually changes nothing 
since a rule preventing source mac spoofing is evaluated before DHCP request 
allow rule, but it is better to add the source mac since the rules becomes more 
robust (e.g., we can consider a case where there is no rule for source mac 
spoofing).

** Affects: neutron
 Importance: Wishlist
 Status: New


** Tags: sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427054

Title:
  no way to know what IP spoofing rule is applied

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  [From the discussion in neutronclient bug 1182629]

  The issue is that there is no way to confirm or update the IP-spoofing
  rules (which are established by neutron implicitly).
  The original bug was reported about two years ago, and I am not sure we
  need to fix it now, but I think it is still worth discussing when we
  consider the next step of the API.

  The following are quoted from neutronclient bug 1182629.
  -

  Robert Collins (lifeless) wrote on 2013-05-23: 
  Sure, I appreciate what the rules do - but the security-group-rule-list is 
showing no details, and the rules that are there are not described usefully. 
The port lists for DHCP in and out for instance, should be shown, but aren't. 
The IP addresses are wildcard for the most part - but not on the ip spoofing 
rule. So I don't understand why they shouldn't be shown in a useful manner.

  Aaron Rosen (arosen) wrote on 2013-05-23: 
  [snip related to the first point]
  The second thing is that in order to use security groups you need ip spoofing 
enabled. The reason for this is if ip spoofing was not enabled an instance 
could change it's source ip in order to get around a security group rule. IMO 
displaying the ip spoofing rules does us no good.

  Robert Collins (lifeless) wrote on 2013-05-25: 
  [snip related to the first point]
  Secondly, ip spoofing is definitely important - but we can modify the DHCP 
rule like so:
-A quantum-openvswi-oaa210549-d -m mac --mac-source FA:16:3E:7F:4F:76 -s 
0.0.0.0/32 -p udp -m udp --sport 68 --dport 67 -j RETURN
  To be more tight: 0.0.0.0/32 is the address for DHCP requests; only that and 
the assigned address may be used.

  Akihiro Motoki (amotoki) wrote on 2013-06-05: 
  [snip related to the first point]
  Regarding the second point, specifying the source MAC actually changes 
nothing since a rule preventing source mac spoofing is evaluated before DHCP 
request allow rule, but it is better to add the source mac since the rules 
becomes more robust (e.g., we can consider a case where there is no rule for 
source mac spoofing).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1427054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427056] [NEW] shelved_image_id is deleted before completing unshelving instance on compute node

2015-03-01 Thread Pranali Deore
Public bug reported:

Steps to reproduce:

1. Boot an instance from an image.
2. Shelve the instance; it goes to the SHELVED_OFFLOADED state.
3. Unshelve the instance.
   3.1 nova-conductor sends an RPC cast to nova-compute.
       Suppose a failure then happens in nova-compute, e.g. an Instance
       failed to spawn error from libvirt.
   3.2 nova-conductor deletes instance_system_metadata.shelved_image_id right
       after the RPC cast for unshelving the instance.
   3.3 The instance becomes SHELVED_OFFLOADED again via revert_task_state,
       but instance_system_metadata.shelved_image_id has already been deleted.

Problems:
1. Because shelved_image_id is gone, unshelving the instance again fails
   while fetching the image metadata in the libvirt driver, and the instance
   stays in the SHELVED_OFFLOADED state.

2. Because shelved_image_id is gone, deleting the instance tries to delete
   the image with image_id=None from glance; glance returns a 404, the
   instance is deleted successfully, and the shelved image is left behind.
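
A conceptual sketch of the ordering the report argues for (hypothetical
helper names, not Nova's actual conductor/compute code): keep
shelved_image_id until the spawn has succeeded, then clean up the metadata
key and the snapshot image:

    def unshelve(instance):
        image_id = instance.system_metadata.get('shelved_image_id')
        spawn_from_image(instance, image_id)   # may raise on the compute node
        # only reached when the spawn succeeded
        instance.system_metadata.pop('shelved_image_id', None)
        delete_glance_image(image_id)

With this ordering, a failed unshelve can be retried and the snapshot can
still be removed when the instance is deleted.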

** Affects: nova
 Importance: Undecided
 Assignee: Pranali Deore (pranali-deore)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Pranali Deore (pranali-deore)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1427056

Title:
  shelved_image_id is deleted before completing unshelving instance on
  compute node

Status in OpenStack Compute (Nova):
  New

Bug description:
  Steps to reproduce:

  1. Boot an instance from an image.
  2. Shelve the instance; it goes to the SHELVED_OFFLOADED state.
  3. Unshelve the instance.
     3.1 nova-conductor sends an RPC cast to nova-compute.
         Suppose a failure then happens in nova-compute, e.g. an Instance
         failed to spawn error from libvirt.
     3.2 nova-conductor deletes instance_system_metadata.shelved_image_id
         right after the RPC cast for unshelving the instance.
     3.3 The instance becomes SHELVED_OFFLOADED again via revert_task_state,
         but instance_system_metadata.shelved_image_id has already been
         deleted.

  Problems:
  1. Because shelved_image_id is gone, unshelving the instance again fails
     while fetching the image metadata in the libvirt driver, and the
     instance stays in the SHELVED_OFFLOADED state.

  2. Because shelved_image_id is gone, deleting the instance tries to delete
     the image with image_id=None from glance; glance returns a 404, the
     instance is deleted successfully, and the shelved image is left behind.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1427056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427060] [NEW] Unnecessary bdm entry is created if same volume is attached twice to an instance

2015-03-01 Thread Ankit Agrawal
Public bug reported:

When we try to attach an already attached volume to an instance, it raises an
InvalidVolume exception but still creates an entry in the
block_device_mapping table with the deleted flag set to true. Ideally, when
the volume attach fails, no database entry should be created.

Steps to reproduce:

1. Create an instance named test_vm_1.
2. Create a volume named test_volume_1.
3. Verify that the instance is in the active state and the volume is in the
   available state.
4. Attach the volume:
$ nova volume-attach instance_id volume_id
5. Confirm that the volume is in the 'in-use' status:
$ cinder list
6. Run the volume-attach command again with the same volume_id:
$ nova volume-attach instance_id volume_id

Step 6 raises an InvalidVolume exception and the attached volume keeps
working normally, which is correct. But when you check the
block_device_mapping table with the SQL query below, you will find an
additional bdm entry that should not have been created.

select * from block_device_mapping where instance_uuid='ee94830b-
5d39-42a7-b8c2-6175bb77563a';
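
A conceptual sketch of the ordering that would avoid the stray row
(hypothetical helper names, not Nova's actual API): validate the volume
before touching block_device_mapping, so a rejected attach never writes a row
that immediately has to be soft-deleted:

    class InvalidVolume(Exception):
        """Stand-in for Nova's InvalidVolume exception."""

    def attach_volume(context, instance, volume_id):
        volume = get_volume(context, volume_id)   # hypothetical cinder lookup
        if volume['status'] != 'available':
            raise InvalidVolume('volume %s is already attached' % volume_id)
        bdm = create_bdm_entry(context, instance, volume_id)  # after validation
        return do_attach(context, instance, bdm)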

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ntt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1427060

Title:
  Unnecessary bdm entry is created if same volume is attached twice to
  an instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  When we try to attach an already attached volume to an instance it
  raises InvalidVolume exception but creates an entry in
  block_device_mapping table with deleted flag set to true. Ideally when
  attach volume fails it should not create any entries in database.

  Steps to reproduce:

  1. Created an instance named test_vm_1.
  2. Created a volume named test_volume_1.
  3. Verified that instance is in active state and volume is in available state.
  4. Attach volume using below command:
  $ nova volume-attach instance_id volume_id.
  5. Confirmed that volume is in 'in-use' status using below command:
  $ cinder list.
  6. Execute volume-attach command again with same volume_id.
  $ nova volume-attach instance_id volume_id.
 
  After executing step 6 it raises Invalid volume exception and attached 
volume can be used normally which is correct. But when you check  
block_device_mapping table using below sql query, you will find an additional 
bdm entry which should not be created.

  select * from block_device_mapping where instance_uuid='ee94830b-
  5d39-42a7-b8c2-6175bb77563a';

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1427060/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427092] [NEW] preallocate_images can be arbitrary characters to enable preallocate feature

2015-03-01 Thread Eli Qiao
Public bug reported:

The Raw and Qcow2 image types support image preallocation.
To enable it, preallocate_images must be set in the config file.
Currently it can be set to any string other than 'none' to enable the feature.
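
The usual guard for this kind of option (a sketch, not the merged Nova patch)
is to declare it with an explicit choices list so oslo.config rejects
unsupported values at startup; 'space' is the other value Nova documents for
this option:

    from oslo_config import cfg

    preallocate_opt = cfg.StrOpt('preallocate_images',
                                 default='none',
                                 choices=('none', 'space'),
                                 help='VM image preallocation mode')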

** Affects: nova
 Importance: Undecided
 Assignee: Eli Qiao (taget-9)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1427092

Title:
  preallocate_images can be arbitrary characters to enable preallocate
  feature

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The Raw and Qcow2 image types support image preallocation.
  To enable it, preallocate_images must be set in the config file.
  Currently it can be set to any string other than 'none' to enable the
  feature.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1427092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426427] Re: Improve tunnel_sync server side rpc to handle race conditions

2015-03-01 Thread Romil Gupta
** Description changed:

- We  have a concern that we may end up with db conflict errors due to
- multiple parallel requests incoming.
+ We  have a concern that we may have race conditions with the following
+ code snippet:
  
- Consider two threads (A and B), each receiving tunnel_sync with host set to 
HOST1 and HOST2. The race scenario is:
- A checks whether tunnel exists and receives nothing.
- B checks whether tunnel exists and receives nothing.
- A adds endpoint with HOST1.
- B adds endpoint with HOST2.
+ if host:
+ host_endpoint = driver.obj.get_endpoint_by_host(host)
+ ip_endpoint = driver.obj.get_endpoint_by_ip(tunnel_ip)
  
- Now we have two endpoints for the same IP address with different hosts (I 
guess that is not what we would expect).
- I think the only way to avoid it is check for tunnel existence under the same 
transaction that will update it, if present. Probably meaning, making 
add_endpoint aware of potential tunnel existence.
+ if (ip_endpoint and ip_endpoint.host is None
+ and host_endpoint is None):
+ driver.obj.delete_endpoint(ip_endpoint.ip_address)
+ elif (ip_endpoint and ip_endpoint.host != host):
+ msg = (_(Tunnel IP %(ip)s in use with host %(host)s),
+{'ip': ip_endpoint.ip_address,
+ 'host': ip_endpoint.host})
+ raise exc.InvalidInput(error_message=msg)
+ elif (host_endpoint and host_endpoint.ip_address != 
tunnel_ip):
+ # Notify all other listening agents to delete stale 
tunnels
+ self._notifier.tunnel_delete(rpc_context,
+ host_endpoint.ip_address, tunnel_type)
+ driver.obj.delete_endpoint(host_endpoint.ip_address)
+ 
+ Consider two threads (A and B), where for
+ 
+ Thread A we have following use case:
+ if Host is passed from an agent and it is not found in DB but the passed 
tunnel_ip is found, delete the endpoint from DB and add the endpoint with 
+ (tunnel_ip, host), it's an upgrade case.
+ 
+ whereas for Thread B we have following use case:
+ if passed host and tunnel_ip are not found in the DB, it is a new endpoint.
+ 
+ Both threads will do the following in the end:
+ 
+ tunnel = driver.obj.add_endpoint(tunnel_ip, host)
+ tunnels = driver.obj.get_endpoints()
+ entry = {'tunnels': tunnels}
+ # Notify all other listening agents
+ self._notifier.tunnel_update(rpc_context, tunnel.ip_address,
+  tunnel_type)
+ # Return the list of tunnels IP's to the agent
+ return entry
+ 
+ 
+ Since Thread A first deletes the endpoint and then adds it back, Thread B
+ may not see that endpoint in its get_endpoints call during the race.
+ 
+ One way to overcome this problem would be, instead of doing
+ delete_endpoint, to introduce an update_endpoint method in the
+ type_drivers.
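
A minimal sketch of such an update_endpoint (illustrative only; the model
attribute and session handling are assumptions about the ml2 tunnel type
drivers, not the actual implementation). Updating the row in a single
transaction means a concurrent tunnel_sync never observes the endpoint as
missing:

    from neutron.db import api as db_api

    def update_endpoint(self, ip, host):
        session = db_api.get_session()
        with session.begin(subtransactions=True):
            (session.query(self.endpoint_model).
             filter_by(ip_address=ip).
             update({'host': host}))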

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

** Changed in: neutron
    Status: Invalid => New

** Tags added: ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426427

Title:
  Improve tunnel_sync server side rpc to handle race conditions

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We  have a concern that we may have race conditions with the following
  code snippet:

  if host:
  host_endpoint = driver.obj.get_endpoint_by_host(host)
  ip_endpoint = driver.obj.get_endpoint_by_ip(tunnel_ip)

  if (ip_endpoint and ip_endpoint.host is None
  and host_endpoint is None):
  driver.obj.delete_endpoint(ip_endpoint.ip_address)
  elif (ip_endpoint and ip_endpoint.host != host):
  msg = (_(Tunnel IP %(ip)s in use with host %(host)s),
 {'ip': ip_endpoint.ip_address,
  'host': ip_endpoint.host})
  raise exc.InvalidInput(error_message=msg)
  elif (host_endpoint and host_endpoint.ip_address != 
tunnel_ip):
  # Notify all other listening agents to delete stale 
tunnels
  self._notifier.tunnel_delete(rpc_context,
  host_endpoint.ip_address, tunnel_type)
  driver.obj.delete_endpoint(host_endpoint.ip_address)

  Consider two threads (A and B), where for

  Thread A we have following use case:
  if Host is passed from an agent and it is not found in DB but the passed 
tunnel_ip is found, delete the endpoint from DB and add the endpoint with 
  (tunnel_ip, host), it's an upgrade case.

  whereas for 

[Yahoo-eng-team] [Bug 1426904] [NEW] Can't start Neutron service when max VNI value is defined in ml2_conf.ini file

2015-03-01 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The VNI value is 24 bits wide, i.e. 1 – 16777215

When I defined the max value in the ml2_conf.ini file:
[ml2_type_vxlan]
vni_ranges = 1001:16777215

Neutron service fails to start.
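
The MemoryError in the traceback below is consistent with the vxlan type
driver materialising every VNI in the configured range at once while syncing
its allocation table. A hedged sketch of the usual mitigation (chunked
processing; the function and the allocation_model parameter are illustrative,
not the actual driver code):

    def sync_allocations_in_chunks(session, allocation_model,
                                   vni_min, vni_max, chunk=10000):
        # walk the ~16M-entry VNI range in slices instead of building one
        # huge in-memory set of all values
        for start in xrange(vni_min, vni_max + 1, chunk):
            stop = min(start + chunk, vni_max + 1)
            with session.begin(subtransactions=True):
                for vni in xrange(start, stop):
                    session.add(allocation_model(vxlan_vni=vni))

A real change would also have to reconcile rows that already exist; the
sketch only illustrates the chunking idea.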

2015-02-28 17:11:33.793 CRITICAL neutron [-] MemoryError

2015-02-28 17:11:33.793 TRACE neutron Traceback (most recent call last):
2015-02-28 17:11:33.793 TRACE neutron   File /usr/local/bin/neutron-server, 
line 9, in module
2015-02-28 17:11:33.793 TRACE neutron 
load_entry_point('neutron==2014.2.2.dev438', 'console_scripts', 
'neutron-server')()
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/server/__init__.py, line 48, in main
2015-02-28 17:11:33.793 TRACE neutron neutron_api = 
service.serve_wsgi(service.NeutronApiService)
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/service.py, line 105, in serve_wsgi
2015-02-28 17:11:33.793 TRACE neutron LOG.exception(_('Unrecoverable error: 
please check log '
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/openstack/common/excutils.py, line 82, in __exit__
2015-02-28 17:11:33.793 TRACE neutron six.reraise(self.type_, self.value, 
self.tb)
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/service.py, line 102, in serve_wsgi
2015-02-28 17:11:33.793 TRACE neutron service.start()
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/service.py, line 73, in start
2015-02-28 17:11:33.793 TRACE neutron self.wsgi_app = 
_run_wsgi(self.app_name)
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/service.py, line 168, in _run_wsgi
2015-02-28 17:11:33.793 TRACE neutron app = config.load_paste_app(app_name)
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/common/config.py, line 185, in load_paste_app
2015-02-28 17:11:33.793 TRACE neutron app = deploy.loadapp(config:%s % 
config_path, name=app_name)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 247, in 
loadapp
2015-02-28 17:11:33.793 TRACE neutron return loadobj(APP, uri, name=name, 
**kw)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 272, in 
loadobj
2015-02-28 17:11:33.793 TRACE neutron return context.create()
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 710, in 
create
2015-02-28 17:11:33.793 TRACE neutron return self.object_type.invoke(self)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 144, in 
invoke
2015-02-28 17:11:33.793 TRACE neutron **context.local_conf)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py, line 55, in 
fix_call
2015-02-28 17:11:33.793 TRACE neutron val = callable(*args, **kw)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/urlmap.py, line 25, in 
urlmap_factory
2015-02-28 17:11:33.793 TRACE neutron app = loader.get_app(app_name, 
global_conf=global_conf)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 350, in 
get_app
2015-02-28 17:11:33.793 TRACE neutron name=name, 
global_conf=global_conf).create()
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 710, in 
create
2015-02-28 17:11:33.793 TRACE neutron return self.object_type.invoke(self)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 144, in 
invoke
2015-02-28 17:11:33.793 TRACE neutron **context.local_conf)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py, line 55, in 
fix_call
2015-02-28 17:11:33.793 TRACE neutron val = callable(*args, **kw)
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/auth.py, line 71, in pipeline_factory
2015-02-28 17:11:33.793 TRACE neutron app = loader.get_app(pipeline[-1])
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 350, in 
get_app
2015-02-28 17:11:33.793 TRACE neutron name=name, 
global_conf=global_conf).create()
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 710, in 
create
2015-02-28 17:11:33.793 TRACE neutron return self.object_type.invoke(self)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 146, in 
invoke
2015-02-28 17:11:33.793 TRACE neutron return fix_call(context.object, 
context.global_conf, **context.local_conf)
2015-02-28 17:11:33.793 TRACE neutron   File 

[Yahoo-eng-team] [Bug 1426904] [NEW] Can't start Neutron service when max VNI value is defined in ml2_conf.ini file

2015-03-01 Thread Romil Gupta
Public bug reported:

The VNI value is 24 bits wide, i.e. 1 – 16777215

When I defined the max value in the ml2_conf.ini file:
[ml2_type_vxlan]
vni_ranges = 1001:16777215

Neutron service fails to start.

2015-02-28 17:11:33.793 CRITICAL neutron [-] MemoryError

2015-02-28 17:11:33.793 TRACE neutron Traceback (most recent call last):
2015-02-28 17:11:33.793 TRACE neutron   File /usr/local/bin/neutron-server, 
line 9, in module
2015-02-28 17:11:33.793 TRACE neutron 
load_entry_point('neutron==2014.2.2.dev438', 'console_scripts', 
'neutron-server')()
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/server/__init__.py, line 48, in main
2015-02-28 17:11:33.793 TRACE neutron neutron_api = 
service.serve_wsgi(service.NeutronApiService)
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/service.py, line 105, in serve_wsgi
2015-02-28 17:11:33.793 TRACE neutron LOG.exception(_('Unrecoverable error: 
please check log '
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/openstack/common/excutils.py, line 82, in __exit__
2015-02-28 17:11:33.793 TRACE neutron six.reraise(self.type_, self.value, 
self.tb)
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/service.py, line 102, in serve_wsgi
2015-02-28 17:11:33.793 TRACE neutron service.start()
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/service.py, line 73, in start
2015-02-28 17:11:33.793 TRACE neutron self.wsgi_app = 
_run_wsgi(self.app_name)
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/service.py, line 168, in _run_wsgi
2015-02-28 17:11:33.793 TRACE neutron app = config.load_paste_app(app_name)
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/common/config.py, line 185, in load_paste_app
2015-02-28 17:11:33.793 TRACE neutron app = deploy.loadapp(config:%s % 
config_path, name=app_name)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 247, in 
loadapp
2015-02-28 17:11:33.793 TRACE neutron return loadobj(APP, uri, name=name, 
**kw)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 272, in 
loadobj
2015-02-28 17:11:33.793 TRACE neutron return context.create()
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 710, in 
create
2015-02-28 17:11:33.793 TRACE neutron return self.object_type.invoke(self)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 144, in 
invoke
2015-02-28 17:11:33.793 TRACE neutron **context.local_conf)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py, line 55, in 
fix_call
2015-02-28 17:11:33.793 TRACE neutron val = callable(*args, **kw)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/urlmap.py, line 25, in 
urlmap_factory
2015-02-28 17:11:33.793 TRACE neutron app = loader.get_app(app_name, 
global_conf=global_conf)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 350, in 
get_app
2015-02-28 17:11:33.793 TRACE neutron name=name, 
global_conf=global_conf).create()
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 710, in 
create
2015-02-28 17:11:33.793 TRACE neutron return self.object_type.invoke(self)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 144, in 
invoke
2015-02-28 17:11:33.793 TRACE neutron **context.local_conf)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py, line 55, in 
fix_call
2015-02-28 17:11:33.793 TRACE neutron val = callable(*args, **kw)
2015-02-28 17:11:33.793 TRACE neutron   File 
/opt/stack/neutron/neutron/auth.py, line 71, in pipeline_factory
2015-02-28 17:11:33.793 TRACE neutron app = loader.get_app(pipeline[-1])
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 350, in 
get_app
2015-02-28 17:11:33.793 TRACE neutron name=name, 
global_conf=global_conf).create()
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 710, in 
create
2015-02-28 17:11:33.793 TRACE neutron return self.object_type.invoke(self)
2015-02-28 17:11:33.793 TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 146, in 
invoke
2015-02-28 17:11:33.793 TRACE neutron return fix_call(context.object, 
context.global_conf, **context.local_conf)
2015-02-28 17:11:33.793 TRACE neutron   File