[Yahoo-eng-team] [Bug 1183259] Re: deleted (then replaced) flavors keep the old name in nova show
Looks like this was fixed in Icehouse or Havana. I'm marking this bug as Fix Released. If you are able to reproduce it in a supported version, please reopen.

** Changed in: nova
   Status: Incomplete => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1183259

Title: deleted (then replaced) flavors keep the old name in nova show

Status in OpenStack Compute (Nova): Fix Released

Bug description:
If I create a flavor, then delete it and create it anew, the original name comes out in nova show and elsewhere. I suspect that the DB query is lacking a (deleted=0) filter.

To reproduce:
OS_TENANT_NAME=admin nova flavor-create foo 1010 1024 10 1
OS_TENANT_NAME=admin nova flavor-delete foo
OS_TENANT_NAME=admin nova flavor-create bar 1010 2048 10 1
nova boot --flavor=bar --image=$WHATEVER baz
# See how the output says that the flavor is foo (1010)
nova show baz -- also says foo

This is using: 1:2013.1-0ubuntu2.1~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1183259/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
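A minimal sketch of the reporter's suspicion, not Nova's actual query code: when several flavor rows share an id (soft-deleted predecessors plus the live replacement), a lookup that does not filter on deleted == 0 can return the stale name. The row layout below is a hypothetical stand-in for the instance_types table.

```python
def get_flavor_name(rows, flavor_id):
    """Return the name of the live (non-deleted) flavor with flavor_id.

    `rows` stands in for the flavors table; each row is a dict with
    'flavorid', 'name', and 'deleted' keys (deleted == 0 means live).
    """
    live = [r for r in rows if r["flavorid"] == flavor_id and r["deleted"] == 0]
    if not live:
        raise LookupError("flavor %s not found" % flavor_id)
    return live[0]["name"]

rows = [
    {"flavorid": 1010, "name": "foo", "deleted": 1},  # deleted predecessor
    {"flavorid": 1010, "name": "bar", "deleted": 0},  # live replacement
]
print(get_flavor_name(rows, 1010))  # bar, not the stale foo
```

Without the `r["deleted"] == 0` condition, the first matching row ("foo") could be returned, which is exactly the symptom reported above.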
[Yahoo-eng-team] [Bug 1324395] [NEW] Rename Openstack to OpenStack
Public bug reported:

Rename Openstack to OpenStack

** Affects: glance
   Importance: Undecided
   Assignee: ling-yun (zengyunling)
   Status: New

** Changed in: glance
   Assignee: (unassigned) => ling-yun (zengyunling)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1324395

Title: Rename Openstack to OpenStack

Status in OpenStack Image Registry and Delivery Service (Glance): New

Bug description:
Rename Openstack to OpenStack

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1324395/+subscriptions
[Yahoo-eng-team] [Bug 1020799] Re: Figure out a way to package LESS in friendlier manner
** Changed in: horizon
   Status: Confirmed => Won't Fix

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1020799

Title: Figure out a way to package LESS in friendlier manner

Status in OpenStack Dashboard (Horizon): Won't Fix

Bug description:
The way we're packaging LESS right now (in the /bin dir) doesn't make us a great citizen towards downstream packagers. In an ideal world, the lessc script would be installed as a standard script and moved to the platform's scripts dir (e.g. /usr/local/bin). This can easily be accomplished by listing it as a script in setup.py. However, the path to the related .js files is hardcoded into the lessc file and assumes that they are in the top level of the adjacent lib directory. Forcing setuptools/distutils to place those files there is beyond the scope of my knowledge.

If we can solve that bit, then the path to the lessc binary in settings.py can be written as a check for the file at the expected path, with a fallback to the output of `which lessc` on the shell.

Either way, the current method of just shoving horizon's bin directory into the package isn't very friendly and will probably need to be fixed before the final Folsom release. That said, I defer to actual downstream packagers to shed more light on the situation; I know just enough about packaging to get by.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1020799/+subscriptions
[Yahoo-eng-team] [Bug 1324400] [NEW] Invalid EC2 instance type for a volume backed instance
Public bug reported:

Because nova.virt.driver:LibvirtDriver.get_guest_config prepends the instance root_device_name with a 'dev' prefix, root_device_name may not coincide with device_name in the block device mapping structure. In this case the describe-instances operation reports the wrong instance type: instance-store instead of ebs.

Environment: DevStack

Steps to reproduce:
1. Create a volume-backed instance, passing vda as the root device name (I used a cirros ami image):
$ cinder create --image-id xxx 1
$ nova boot --flavor m1.nano --image xxx --block-device-mapping vda=yyy:::1 inst
2. Describe instances:
$ euca-describe-instances
Look at the instance type. It must be ebs, but it is instance-store in the output.

Note: if euca-describe-instances crashes on an ebs instance, apply https://review.openstack.org/#/c/95580/

** Affects: nova
   Importance: Undecided
   Assignee: Feodor Tersin (ftersin)
   Status: New

** Changed in: nova
   Assignee: (unassigned) => Feodor Tersin (ftersin)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324400

Title: Invalid EC2 instance type for a volume backed instance

Status in OpenStack Compute (Nova): New

Bug description:
Because nova.virt.driver:LibvirtDriver.get_guest_config prepends the instance root_device_name with a 'dev' prefix, root_device_name may not coincide with device_name in the block device mapping structure. In this case the describe-instances operation reports the wrong instance type: instance-store instead of ebs.

Environment: DevStack

Steps to reproduce:
1. Create a volume-backed instance, passing vda as the root device name (I used a cirros ami image):
$ cinder create --image-id xxx 1
$ nova boot --flavor m1.nano --image xxx --block-device-mapping vda=yyy:::1 inst
2. Describe instances:
$ euca-describe-instances
Look at the instance type. It must be ebs, but it is instance-store in the output.

Note: if euca-describe-instances crashes on an ebs instance, apply https://review.openstack.org/#/c/95580/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1324400/+subscriptions
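An illustrative sketch of the mismatch described above, not Nova's actual helper: if the root device name has been rewritten to '/dev/vda' while the block device mapping still stores 'vda', a plain string comparison misses the match. Normalizing both sides before comparing avoids that. All function names here are hypothetical.

```python
def strip_dev(device_name):
    """Normalize a device name by removing a leading '/dev/'."""
    if device_name and device_name.startswith("/dev/"):
        return device_name[len("/dev/"):]
    return device_name

def is_ebs_root(root_device_name, bdm_device_names):
    """Report True when the root device matches a mapped volume's device.

    This is the kind of check describe-instances needs to classify an
    instance as 'ebs' rather than 'instance-store'.
    """
    root = strip_dev(root_device_name)
    return any(strip_dev(d) == root for d in bdm_device_names)

# The prefixed root name still matches the unprefixed mapping entry:
print(is_ebs_root("/dev/vda", ["vda"]))  # True => reported as 'ebs'
```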
[Yahoo-eng-team] [Bug 1324407] [NEW] Rename Openstack to OpenStack
Public bug reported:

Rename Openstack to OpenStack

** Affects: nova
   Importance: Undecided
   Assignee: ling-yun (zengyunling)
   Status: New

** Changed in: nova
   Assignee: (unassigned) => ling-yun (zengyunling)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324407

Title: Rename Openstack to OpenStack

Status in OpenStack Compute (Nova): New

Bug description:
Rename Openstack to OpenStack

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1324407/+subscriptions
[Yahoo-eng-team] [Bug 1324418] [NEW] debug level logs should not be translated
Public bug reported:

According to the OpenStack translation policy available at https://wiki.openstack.org/wiki/LoggingStandards, debug messages should not be translated.

** Affects: nova
   Importance: Undecided
   Assignee: ling-yun (zengyunling)
   Status: New

** Changed in: nova
   Assignee: (unassigned) => ling-yun (zengyunling)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324418

Title: debug level logs should not be translated

Status in OpenStack Compute (Nova): New

Bug description:
According to the OpenStack translation policy available at https://wiki.openstack.org/wiki/LoggingStandards, debug messages should not be translated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1324418/+subscriptions
[Yahoo-eng-team] [Bug 1324417] [NEW] fwaas:shared firewall rule is not able to use when it is already attached in other tenant's firewall policy
Public bug reported:

DESCRIPTION: a firewall rule shared by admin cannot be used in a tenant's firewall policy when the rule is already attached to another tenant's or admin's firewall policy.

Steps to Reproduce:
1. Create a firewall rule r1 with shared = true from the admin tenant.
2. Create a firewall policy p1 and attach the above firewall rule r1 from the admin tenant.
3. Try to create a firewall policy from another tenant with the above firewall rule r1.

Actual Results: the CLI throws an error saying the rule is in use, and doesn't create the firewall policy.

root@IGA-OSC:~# fwrc --protocol icmp --action deny --name a2 --shared
Created a new firewall_rule:
| action                 | deny                                 |
| description            |                                      |
| destination_ip_address |                                      |
| destination_port       |                                      |
| enabled                | True                                 |
| firewall_policy_id     |                                      |
| id                     | 15f3c1a8-f813-4809-ab44-00d12f7ff8ad |
| ip_version             | 4                                    |
| name                   | a2                                   |
| position               |                                      |
| protocol               | icmp                                 |
| shared                 | True                                 |
| source_ip_address      |                                      |
| source_port            |                                      |
| tenant_id              | 0ad385e00e97476e9456945c079a21ea     |

root@IGA-OSC:~# fwpc ap --firewall-rule a2
Created a new firewall_policy:
| audited        | False                                |
| description    |                                      |
| firewall_rules | 15f3c1a8-f813-4809-ab44-00d12f7ff8ad |
| id             | 800bea29-f165-421e-8e56-a0ec9af2bfc0 |
| name           | ap                                   |
| shared         | False                                |
| tenant_id      | 0ad385e00e97476e9456945c079a21ea     |

root@IGA-OSC:~# fwrs a2
| action                 | deny                                 |
| description            |                                      |
| destination_ip_address |                                      |
| destination_port       |                                      |
| enabled                | True                                 |
| firewall_policy_id     | 800bea29-f165-421e-8e56-a0ec9af2bfc0 |
| id                     | 15f3c1a8-f813-4809-ab44-00d12f7ff8ad |
| ip_version             | 4                                    |
| name                   | a2                                   |
| position               | 1                                    |
| protocol               | icmp                                 |
| source_ip_address      |                                      |
| source_port            |                                      |
| tenant_id              | 0ad385e00e97476e9456945c079a21ea     |

From the other tenant:
root@IGA-OSC:~# fwpc p3 --firewall-rule a2
409-{u'NeutronError': {u'message': u'Firewall Rule 15f3c1a8-f813-4809-ab44-00d12f7ff8ad is being used.', u'type': u'FirewallRuleInUse', u'detail': u''}}

** Affects: neutron
   Importance: Undecided
   Status: New

** Tags: fwaas

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1324417

Title: fwaas:shared firewall rule is not able to use when it is already attached in other tenant's firewall policy

Status in OpenStack Neutron (virtual network service): New

Bug description:
DESCRIPTION: a firewall rule shared by admin cannot be used in a tenant's firewall policy when the rule is already attached to another tenant's or admin's firewall policy.

Steps to Reproduce:
1. Create a firewall rule r1 with shared = true from the admin tenant.
2. Create a firewall policy p1 and attach the above firewall rule r1 from the admin tenant.
3. Try to create a firewall policy from another tenant with the above firewall rule r1.

Actual Results: cli throws error as
[Yahoo-eng-team] [Bug 1322139] Re: VXLAN kernel requirement check for openvswitch agent is not working
** Changed in: neutron
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1322139

Title: VXLAN kernel requirement check for openvswitch agent is not working

Status in OpenStack Neutron (virtual network service): Fix Released

Bug description:
On RHEL7 beta, an agent set to use VXLAN tunneling does not start. I'm using RDO packages; if you want I can check the upstream version somewhere, but it seems the code that checks the version is still in github.

In openvswitch-agent.log I see:

2014-05-21 13:53:21.762 1814 ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-ce4eadcb-4cbb-4c09-a404-98ecb5383fa5 None] Agent terminated
2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 231, in _check_ovs_version
2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     ovs_lib.check_ovs_vxlan_version(self.root_helper)
2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ovs_lib.py", line 551, in check_ovs_vxlan_version
2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     'kernel', 'VXLAN')
2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ovs_lib.py", line 529, in _compare_installed_and_required_version
2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     raise SystemError(msg)
2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent SystemError: Unable to determine kernel version for Open vSwitch with VXLAN support. To use VXLAN tunnels with OVS, please ensure that the version is 1.10 or newer!
2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent

It seems that the minimum kernel version required to use VXLAN is set to 3.13 (shouldn't 3.9 be enough?). RHEL7 ships only 3.10. The vxlan module however is present and working, and even if the only properly working module is in the 3.13 kernel, the check doesn't take backported features into consideration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1322139/+subscriptions
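An illustrative sketch of the kind of check the report questions (not the actual ovs_lib code): a plain numeric comparison against a minimum kernel version rejects RHEL7's 3.10 kernel even when the VXLAN module has been backported into it, which is exactly the reporter's complaint.

```python
def parse_kernel_version(release):
    """Turn a release string like '3.10.0-123.el7.x86_64' into (3, 10, 0)."""
    numeric = release.split("-")[0]
    return tuple(int(p) for p in numeric.split(".")[:3])

# The threshold the reporter says is too high; a hypothetical constant.
MIN_VXLAN_KERNEL = (3, 13)

def kernel_supports_vxlan(release):
    """Naive version-only check: blind to backported features."""
    return parse_kernel_version(release)[:2] >= MIN_VXLAN_KERNEL

print(kernel_supports_vxlan("3.10.0-123.el7.x86_64"))  # False, despite backport
```

A more robust check would probe for the vxlan module itself (e.g. attempting to create a VXLAN port) rather than comparing version strings.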
[Yahoo-eng-team] [Bug 1324428] [NEW] net_create fail without definite segmentation_id
Public bug reported:

If I define a provider network but no segmentation_id, net-create fails. Why not allocate a segmentation_id automatically?

~$ neutron net-create test --provider:network_type=vlan --provider:physical_network=default
Invalid input for operation: segmentation_id required for VLAN provider network.

** Affects: neutron
   Importance: Undecided
   Status: New

** Changed in: oslo
   Assignee: (unassigned) => Xurong Yang (idopra)
** Project changed: oslo => neutron
** Changed in: neutron
   Assignee: Xurong Yang (idopra) => (unassigned)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1324428

Title: net_create fail without definite segmentation_id

Status in OpenStack Neutron (virtual network service): New

Bug description:
If I define a provider network but no segmentation_id, net-create fails. Why not allocate a segmentation_id automatically?

~$ neutron net-create test --provider:network_type=vlan --provider:physical_network=default
Invalid input for operation: segmentation_id required for VLAN provider network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1324428/+subscriptions
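A sketch of the behavior the reporter asks about, under the assumption that auto-allocation means picking any free VLAN id from a configured range (Neutron does this for tenant networks; for provider networks it instead requires an explicit id, which is why the command above is rejected). The function name and range are hypothetical.

```python
def allocate_segmentation_id(used_ids, vlan_range=(1, 4094)):
    """Return the lowest free VLAN id in vlan_range not present in used_ids."""
    lo, hi = vlan_range
    for vid in range(lo, hi + 1):
        if vid not in used_ids:
            return vid
    raise RuntimeError("No free VLAN ids in range %d-%d" % (lo, hi))

print(allocate_segmentation_id({1, 2, 3}))  # 4
```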
[Yahoo-eng-team] [Bug 1323151] Re: Revises error in neutron / neutron / db / migration / alembic_migrations / versions / havana_release.py (stable/havana branch)
** Changed in: neutron
   Status: In Progress => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323151

Title: Revises error in neutron/neutron/db/migration/alembic_migrations/versions/havana_release.py (stable/havana branch)

Status in OpenStack Neutron (virtual network service): Invalid

Bug description:
In the stable/havana branch:
https://github.com/openstack/neutron/blob/stable/havana/neutron/db/migration/alembic_migrations/versions/havana_release.py

We have the following code:

"""havana

Revision ID: havana
Revises: 1341ed32cc1e
Create Date: 2013-10-02 00:00:00.00
"""

# revision identifiers, used by Alembic.
revision = 'havana'
down_revision = '27ef74513d33'

"Revises" should be 27ef74513d33 here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1323151/+subscriptions
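For reference, a sketch of what a self-consistent Alembic release-marker module would look like: the docstring's "Revises:" line matches `down_revision`, and Alembic itself only reads the variables, which is presumably why the bug was ultimately marked Invalid (the docstring mismatch is cosmetic). The empty upgrade/downgrade stubs are an assumption about a release marker with no schema changes.

```python
"""havana

Revision ID: havana
Revises: 27ef74513d33
Create Date: 2013-10-02 00:00:00.00
"""

# revision identifiers, used by Alembic. Only these variables matter
# to the migration engine; the docstring is documentation.
revision = 'havana'
down_revision = '27ef74513d33'

def upgrade():
    pass  # release marker: no schema changes

def downgrade():
    pass
```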
[Yahoo-eng-team] [Bug 1324436] [NEW] Changing the ml2 network type result in internal server error while performing delete/update operation on the pre existing resources
Public bug reported:

DESCRIPTION: changing the ml2 network type results in an internal server error while performing delete/update operations on the pre-existing resources.

Steps to Reproduce:
1. Configure the ml2 plug-in with network type vxlan in /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population
2. Create a network:
neutron net-create Net1
| admin_state_up            | True                                 |
| id                        | 4ea83d79-95f4-4a97-bb2e-b8599fa27723 |
| name                      | Net1                                 |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 500                                  |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | e261d031311b484a9ddb177291fab164     |
3. Update the ml2 plugin network type to vlan in /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,linuxbridge,l2population
4. Create a network again:
neutron net-create n1
Created a new network:
| admin_state_up            | True                                 |
| id                        | d8593185-9e0f-435f-a96e-3d6deb13c5e4 |
| name                      | n1                                   |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 3541                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | e261d031311b484a9ddb177291fab164     |
5. List the networks: both networks are listed.
6. Try to delete the vxlan-type network created earlier:
neutron net-delete Net1
Request Failed: internal server error while processing your request.

** Affects: neutron
   Importance: Undecided
   Status: New

** Tags: ml2
[Yahoo-eng-team] [Bug 1309195] Re: IPv6 prefix shouldn't be added in the NAT table
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New
** Also affects: neutron/havana
   Importance: Undecided
   Status: New
** Changed in: neutron/havana
   Status: New => Fix Committed
** Changed in: neutron/icehouse
   Status: New => Fix Committed
** Changed in: ossa
   Importance: Undecided => High

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1309195

Title: IPv6 prefix shouldn't be added in the NAT table

Status in OpenStack Neutron (virtual network service): Fix Committed
Status in neutron havana series: Fix Committed
Status in neutron icehouse series: Fix Committed
Status in OpenStack Security Advisories: Confirmed

Bug description:
SNAT rules with IPv6 prefixes are added into the NAT table, which causes a failure in the call to iptables-restore:

Stderr: iptables-restore v1.4.18: invalid mask `64' specified\nError occurred at line: 22\nTry `iptables-restore -h' or 'iptables-restore --help' for more information.\n

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1309195/+subscriptions
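An illustrative sketch of the fix's idea, not the actual Neutron patch: the iptables NAT table is IPv4-only, so SNAT rule generation should skip IPv6 prefixes entirely instead of emitting a rule with a /64 mask that iptables-restore rejects. The chain name and rule fragment below are hypothetical.

```python
import ipaddress

def snat_rules(prefixes):
    """Yield SNAT rule fragments for IPv4 prefixes only.

    IPv6 prefixes are filtered out because they are invalid in the
    (IPv4-only) NAT table.
    """
    for prefix in prefixes:
        if ipaddress.ip_network(prefix).version == 4:
            yield "-A hypothetical-snat-chain -s %s -j SNAT" % prefix

rules = list(snat_rules(["10.0.0.0/24", "fd00::/64"]))
print(rules)  # only the 10.0.0.0/24 rule survives
```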
[Yahoo-eng-team] [Bug 1324450] [NEW] add delete operations for the ODL MechanismDriver
Public bug reported:

The delete operations (networks, subnets and ports) haven't been managed since the 12th review of the initial support. It seems sync_single_resource only implements create and update operations.

** Affects: neutron
   Importance: Undecided
   Status: New

** Patch added: "mechanism_odl.py_delete.diff"
   https://bugs.launchpad.net/bugs/1324450/+attachment/4122020/+files/mechanism_odl.py_delete.diff

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1324450

Title: add delete operations for the ODL MechanismDriver

Status in OpenStack Neutron (virtual network service): New

Bug description:
The delete operations (networks, subnets and ports) haven't been managed since the 12th review of the initial support. It seems sync_single_resource only implements create and update operations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1324450/+subscriptions
[Yahoo-eng-team] [Bug 1324459] [NEW] Unable to connect multiple subnets of same network to one router
Public bug reported:

This scenario was checked in the Icehouse release.

Build used:
neutron-common:         1:2014.1-0ubuntu1~cloud0
neutron-dhcp-agent:     1:2014.1-0ubuntu1~cloud0
neutron-l3-agent:       1:2014.1-0ubuntu1~cloud0
neutron-metadata-agent: 1:2014.1-0ubuntu1~cloud0
neutron-plugin-ml2:     1:2014.1-0ubuntu1~cloud0
neutron-plugin-nicira:  1:2014.1-0ubuntu1~cloud0
neutron-plugin-vmware:  1:2014.1-0ubuntu1~cloud0
neutron-server:         1:2014.1-0ubuntu1~cloud0
python-neutron:         1:2014.1-0ubuntu1~cloud0
python-neutronclient:   2:2.3.4.46.g07bcee8+git201404070301~trusty-0ubuntu1

Steps performed:
1. Through Horizon, create one network with one subnet. The name of this subnet is subnet1 and its network address is 85.85.85.0/24.
2. Attach this subnet to the router's interface.
3. Now through Horizon, create a second subnet for that network. Its name is subnet2 and its network address is 95.95.95.0/24.
4. Try to attach this subnet to the router's interface.

Observed behavior:
1. Unable to connect multiple subnets of the same network to one router; only one subnet can be attached to the router's interface.
2. An error appears when we try to add a second subnet of the same network to the router (please refer to the attached screenshot for reference).
3. We can attach these subnets to two different routers.

** Affects: neutron
   Importance: Undecided
   Status: New

** Attachment added: "Screen shot showing the error message"
   https://bugs.launchpad.net/bugs/1324459/+attachment/4122032/+files/screen-shot-1.jpg

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1324459

Title: Unable to connect multiple subnets of same network to one router

Status in OpenStack Neutron (virtual network service): New

Bug description:
This scenario was checked in the Icehouse release.

Steps performed:
1. Through Horizon, create one network with one subnet. The name of this subnet is subnet1 and its network address is 85.85.85.0/24.
2. Attach this subnet to the router's interface.
3. Now through Horizon, create a second subnet for that network. Its name is subnet2 and its network address is 95.95.95.0/24.
4. Try to attach this subnet to the router's interface.

Observed behavior:
1. Unable to connect multiple subnets of the same network to one router; only one subnet can be attached to the router's interface.
2. An error appears when we try to add a second subnet of the same network to the router (please refer to the attached screenshot for reference).
3. We can attach these subnets to two different routers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1324459/+subscriptions
[Yahoo-eng-team] [Bug 1324479] [NEW] Fails to launch instance with create volume from image
Public bug reported:

In Icehouse something has changed in Glance, and when I try to launch an instance with the "create volume from image" option it fails, because some attributes are absent from the image dict returned by Glance. Different types of images were tried. RDO packages are used.

From /var/log/cinder/api.log:

2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault   File "/usr/lib/python2.6/site-packages/cinder/image/glance.py", line 434, in _extract_attributes
2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault     output[attr] = getattr(image, attr)
2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault   File "/usr/lib/python2.6/site-packages/warlock/model.py", line 69, in __getattr__
2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault     raise AttributeError(key)
2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault AttributeError: owner

The image's owner in the database was indeed NULL (and that should be ok). If I add an owner to the image, then another attribute will also not be found:

2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault   File "/usr/lib/python2.6/site-packages/cinder/image/glance.py", line 434, in _extract_attributes
2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault     output[attr] = getattr(image, attr)
2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault   File "/usr/lib/python2.6/site-packages/warlock/model.py", line 69, in __getattr__
2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault     raise AttributeError(key)
2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault AttributeError: deleted

The image dict returned from Glance was:

{u'status': u'active', u'tags': [], u'container_format': u'bare', u'min_ram': 0, u'updated_at': u'2014-05-22T13:24:49Z', u'visibility': u'public', u'file': u'/v2/images/ad385533-0bbb-40d8-a4db-669c76677e24/file', u'min_disk': 0, u'id': u'ad385533-0bbb-40d8-a4db-669c76677e24', u'size': 3145728, u'name': u'img04', u'checksum': u'a5c6d1997966f85908c5640c5dfd7b79', u'created_at': u'2014-05-22T13:24:48Z', u'disk_format': u'raw', u'protected': False, u'direct_url': u'rbd://2485eec9-d30a-4258-b959-937359ed61e8/images/ad385533-0bbb-40d8-a4db-669c76677e24/snap', u'schema': u'/v2/schemas/image'}

I have no idea why some image attributes are absent, but one of the possible fixes is (for the Icehouse branch):

--- /a/cinder/image/glance.py	2014-04-21 12:58:43.0 -0700
+++ /b/cinder/image/glance.py	2014-05-29 03:23:31.0 -0700
@@ -431,7 +431,7 @@
         elif attr == 'checksum' and output['status'] != 'active':
             output[attr] = None
         else:
-            output[attr] = getattr(image, attr)
+            output[attr] = getattr(image, attr, None)
     output['properties'] = getattr(image, 'properties', {})

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: cinder glance

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324479

Title: Fails to launch instance with create volume from image

Status in OpenStack Compute (Nova): New

Bug description:
In Icehouse something has changed in Glance, and when I try to launch an instance with the "create volume from image" option it fails, because some attributes are absent from the image dict returned by Glance. Different types of images were tried. RDO packages are used.

From /var/log/cinder/api.log:

2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault AttributeError: owner

The image's owner in the database was indeed NULL (and that should be ok). If I add an owner to the image, then another attribute will also not be found:

2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault AttributeError: deleted

The image dict returned from Glance was:
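A minimal demonstration of what the proposed patch relies on: `getattr` with a third argument returns that default instead of raising AttributeError for attributes the image object does not expose. The `Image` class here is a hypothetical stand-in for the warlock-wrapped Glance image.

```python
class Image(object):
    """Stand-in for the warlock model that raises AttributeError
    for schema attributes absent from the response, such as 'owner'."""
    name = "img04"
    status = "active"

image = Image()

# Two-argument form raises for a missing attribute:
try:
    getattr(image, "owner")
except AttributeError as exc:
    print("raised: %s" % exc)

# Three-argument form (the patched call) degrades to None instead:
print(getattr(image, "owner", None))  # None
print(getattr(image, "name", None))   # img04
```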
[Yahoo-eng-team] [Bug 877046] Re: Misleading error message --bridge interface is required
Unable to confirm the bug, marking as invalid. If this was done in error and it can be reproduced in a supported version, please reopen. ** Changed in: nova Status: Confirmed = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/877046 Title: Misleading error message --bridge interface is required Status in OpenStack Compute (Nova): Invalid Bug description: When I had these settings in nova.conf:

--network_manage=nova.network.manager.VlanManager
--flat_network_bridge=br100

I see this error in the nova_manage.log: Fail --bridge_interface is required to create a network interface. While the settings above are incorrect, the error message is even more incorrect. BUG: There is no --bridge_interface parameter. Outcome: This OpenStack beginner was confused and frustrated. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/877046/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
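For readers hitting the same error: a hedged sketch of what a working nova-network VlanManager configuration of that era might look like. The flag names below (in particular vlan_interface, which is likely what the misnamed "--bridge_interface" message was actually asking for) are my assumption; check your release's documentation.

```ini
# Hypothetical nova.conf sketch -- values are illustrative only.
network_manager=nova.network.manager.VlanManager
# The physical NIC that VLAN sub-interfaces and bridges are built on.
vlan_interface=eth0
flat_network_bridge=br100
```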
[Yahoo-eng-team] [Bug 872522] Re: NetworkNotFound should be raised when a network is not found in network manager
Believe this 3 year old bug is long-fixed. If the bug recurs, please reopen. ** Summary changed: - NetworkNotFound should be raised when a network is not fould in network manager + NetworkNotFound should be raised when a network is not found in network manager ** Changed in: nova Status: Confirmed = Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/872522 Title: NetworkNotFound should be raised when a network is not found in network manager Status in OpenStack Compute (Nova): Fix Released Status in OpenStack QA: Won't Fix Bug description: db.network_get_by_cidr() can return None. In that case some methods in the network manager, such as delete_network, raise AttributeError. But this is not informative. In this case, the network manager should throw an exception. The following methods have the same problem: - FloatingIP.associate_floating_ip() - NetworkManager._get_dhcp_ip() - NetworkManager.allocate_fixed_ip() - NetworkManager.deallocate_fixed_ip() - NetworkManager.lease_fixed_ip() - NetworkManager.release_fixed_ip() - NetworkManager.validate_networks() - VlanManager.allocate_fixed_ip() To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/872522/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
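A minimal sketch of the guard the reporter asks for. NetworkNotFound, network_get_by_cidr, and the dict-backed "db" here are stand-ins for the real Nova code, not its actual API; the point is raising an informative exception instead of letting a later attribute access on None blow up with AttributeError.

```python
class NetworkNotFound(Exception):
    """Stand-in for nova.exception.NetworkNotFound."""
    def __init__(self, cidr):
        super().__init__("Network with CIDR %s could not be found." % cidr)


def network_get_by_cidr(db, cidr):
    # db is a plain dict here; the real call hits the Nova database
    # and returns None when no row matches.
    return db.get(cidr)


def delete_network(db, cidr):
    network = network_get_by_cidr(db, cidr)
    if network is None:
        # Raise an informative, typed error instead of crashing later
        # with "AttributeError: 'NoneType' object has no attribute ...".
        raise NetworkNotFound(cidr)
    del db[cidr]
    return network
```

The same None-check would apply to each of the methods listed in the bug description.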
[Yahoo-eng-team] [Bug 885165] Re: Add option to do remote host SSL cert verification in nova-objectstore
This is now solved with CONF.s3_use_ssl ** Changed in: nova Status: Triaged = Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/885165 Title: Add option to do remote host SSL cert verification in nova-objectstore Status in OpenStack Compute (Nova): Fix Released Bug description: This bug is related to another bug which I am about to report. In nova/image/s3.py the _conn static method of the S3ImageService class passes in is_secure=False, when creating a new boto.s3.connection.S3Connection. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/885165/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
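For reference, the flag mentioned in the comment is set in nova.conf; a minimal sketch (the value shown is illustrative):

```ini
# Use SSL when connecting to the S3/objectstore backend instead of
# the old hard-coded is_secure=False in S3ImageService._conn.
s3_use_ssl=True
```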
[Yahoo-eng-team] [Bug 1324487] [NEW] nuage plugin ini file is not included in setup.cfg, and it doesn't get installed to etc, neither can be packaged
Public bug reported: setup.cfg doesn't include the data_files entries to get the etc/neutron/plugins/nuage/nuage_plugin.ini configuration file installed into system's /etc https://github.com/openstack/neutron/blob/master/setup.cfg#L24 ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1324487 Title: nuage plugin ini file is not included in setup.cfg, and it doesn't get installed to etc, neither can be packaged Status in OpenStack Neutron (virtual network service): New Bug description: setup.cfg doesn't include the data_files entries to get the etc/neutron/plugins/nuage/nuage_plugin.ini configuration file installed into system's /etc https://github.com/openstack/neutron/blob/master/setup.cfg#L24 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1324487/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
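A sketch of the kind of [files] data_files entry the report says is missing, modeled on the entries other plugins already carry in setup.cfg. Treat the exact target path as an assumption; packagers may relocate /etc.

```ini
[files]
data_files =
    etc/neutron/plugins/nuage =
        etc/neutron/plugins/nuage/nuage_plugin.ini
```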
[Yahoo-eng-team] [Bug 1324452] Re: While creating the VM/Instances, we can not add these instances to second subnet of a Network. It always attached to first subnet.
Using the Nova CLI you can pre-create the port with a specific fixed IP and then use the port UUID in the nova boot command. That said, the use case where a tenant network (which models a single L2 broadcast domain) needs to be mapped to two logical IP subnets has always baffled me. ** Also affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1324452 Title: While creating the VM/Instances, we can not add these instances to second subnet of a Network. It always attached to first subnet. Status in OpenStack Dashboard (Horizon): New Status in OpenStack Neutron (virtual network service): New Bug description: This scenario was checked in the Icehouse release. Build used: === neutron-common : 1:2014.1-0ubuntu1~cloud0 neutron-dhcp-agent : 1:2014.1-0ubuntu1~cloud0 neutron-l3-agent : 1:2014.1-0ubuntu1~cloud0 neutron-metadata-agent : 1:2014.1-0ubuntu1~cloud0 neutron-plugin-ml2 : 1:2014.1-0ubuntu1~cloud0 neutron-plugin-nicira : 1:2014.1-0ubuntu1~cloud0 neutron-plugin-vmware : 1:2014.1-0ubuntu1~cloud0 neutron-server : 1:2014.1-0ubuntu1~cloud0 python-neutron : 1:2014.1-0ubuntu1~cloud0 python-neutronclient : 2:2.3.4.46.g07bcee8+git201404070301~trusty-0ubuntu1 Steps performed: 1. Through Horizon, create one network with one subnet. The name of this subnet is subnet1 and its network address is 85.85.85.0/24. 2. Attach this subnet to the router's interface. 3. Through Horizon or the nova command, create 2 to 3 VMs/instances. All these VMs are correctly created and connected to subnet1. They get correct IP addresses from subnet1. 4. Now through Horizon, create a second subnet. Its name is subnet2 and its network address is 95.95.95.0/24. 5. Through Horizon or the nova command, create 2 VMs/instances and check which subnet they are connected to. Observed behavior: = 1. 
While creating the VM, we cannot add the VM to the second subnet of the network. 2. It is always added to the first subnet only. 3. There is no option for this in either the OpenStack Horizon UI or the nova command. If there are multiple subnets for the same network, there is no option to specify a particular subnet while creating an instance. 4. All created VMs are automatically attached to the first subnet only. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1324452/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
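The workaround described in the first comment can be sketched as a CLI transcript. Network names, IPs, and the bracketed UUIDs are placeholders, not real values:

```shell
# Pre-create a port on the desired subnet with a specific fixed IP...
neutron port-create mynet --fixed-ip subnet_id=<subnet2-uuid>,ip_address=95.95.95.10
# ...then boot the instance against that specific port.
nova boot --flavor m1.small --image <image-uuid> --nic port-id=<port-uuid> myvm
```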
[Yahoo-eng-team] [Bug 1324496] [NEW] shared firewall policies and rules are not displayed in horizon
Public bug reported: This bug is an extension to https://bugs.launchpad.net/neutron/+bug/1323322 As a normal user, shared firewall policies and rules which are created by the admin are listed by the CLI commands, but they are not visible in the Horizon UI. Steps to reproduce: As admin: Create a firewall rule and mark it as shared. Create a firewall policy and mark it as shared. Now, as a normal user: Try the CLI commands: neutron firewall-rule-list neutron firewall-policy-list They will list all the policies and rules, including shared ones. Now log in to Horizon as the normal user: in the Firewall panel, nothing is displayed under firewall policies and firewall rules. Expected results: Shared firewall policies and rules should be listed as in the CLI, and modification of those should be disabled for all users other than the admin. ** Affects: neutron Importance: Undecided Assignee: Shivakumar M (shiva075gowda) Status: New ** Changed in: neutron Assignee: (unassigned) = Shivakumar M (shiva075gowda) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1324496 Title: shared firewall policies and rules are not displayed in horizon Status in OpenStack Neutron (virtual network service): New Bug description: This bug is an extension to https://bugs.launchpad.net/neutron/+bug/1323322 As a normal user, shared firewall policies and rules which are created by the admin are listed by the CLI commands, but they are not visible in the Horizon UI. Steps to reproduce: As admin: Create a firewall rule and mark it as shared. Create a firewall policy and mark it as shared. Now, as a normal user: Try the CLI commands: neutron firewall-rule-list neutron firewall-policy-list They will list all the policies and rules, including shared ones. Now log in to Horizon as the normal user: in the Firewall panel, nothing is displayed under firewall policies and firewall rules. 
Expected results: Shared firewall policies and rules should be listed as in the CLI, and modification of those should be disabled for all users other than the admin. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1324496/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324452] Re: While creating the VM/Instances, we can not add these instances to second subnet of a Network. It always attached to first subnet.
I think it's not neutron's bug. Marking as Invalid for neutron. ** Changed in: neutron Status: New = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1324452 Title: While creating the VM/Instances, we can not add these instances to second subnet of a Network. It always attached to first subnet. Status in OpenStack Dashboard (Horizon): New Status in OpenStack Neutron (virtual network service): Invalid Bug description: This scenario was checked in the Icehouse release. Build used: === neutron-common : 1:2014.1-0ubuntu1~cloud0 neutron-dhcp-agent : 1:2014.1-0ubuntu1~cloud0 neutron-l3-agent : 1:2014.1-0ubuntu1~cloud0 neutron-metadata-agent : 1:2014.1-0ubuntu1~cloud0 neutron-plugin-ml2 : 1:2014.1-0ubuntu1~cloud0 neutron-plugin-nicira : 1:2014.1-0ubuntu1~cloud0 neutron-plugin-vmware : 1:2014.1-0ubuntu1~cloud0 neutron-server : 1:2014.1-0ubuntu1~cloud0 python-neutron : 1:2014.1-0ubuntu1~cloud0 python-neutronclient : 2:2.3.4.46.g07bcee8+git201404070301~trusty-0ubuntu1 Steps performed: 1. Through Horizon, create one network with one subnet. The name of this subnet is subnet1 and its network address is 85.85.85.0/24. 2. Attach this subnet to the router's interface. 3. Through Horizon or the nova command, create 2 to 3 VMs/instances. All these VMs are correctly created and connected to subnet1. They get correct IP addresses from subnet1. 4. Now through Horizon, create a second subnet. Its name is subnet2 and its network address is 95.95.95.0/24. 5. Through Horizon or the nova command, create 2 VMs/instances and check which subnet they are connected to. Observed behavior: = 1. While creating the VM, we cannot add the VM to the second subnet of the network. 2. It is always added to the first subnet only. 3. There is no option for this in either the OpenStack Horizon UI or the nova command. 
If there are multiple subnets for the same network, there is no option to specify a particular subnet while creating an instance. 4. All created VMs are automatically attached to the first subnet only. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1324452/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324502] [NEW] Looking For The Best
Public bug reported: The PlayStation 3 or PS3 jazz in the part of a lot of life now, notwithstanding at here it has small behind and get new and reinforced structure and sure doesn't dissatisfy. Initially, the individual playstation was at low priced. As excavation it's a tierce slender and floats than the bulky early type, and also a bag else energy resourceful, in the module somewhat small streaming expenses and, further histrion indispensable, ambient to quiet Emotional Ikon Experts Assemble video files from recording circle or USB among JPEG images presenting as with the pictures showcased in striking trough feigning. Additionally PlayStation 3 Slenderize act as a digital media heart and is set to flowing is elementary to from any of DLNA humble scheme instrumentality. Looking at the slim-console; you present be discovering an HDMI result that is an (SPDIF) optical digital frequence signal. For some gamers who were to greatly ready and optimistically Sony will hold to found IR acquirer that would get to permit contestant to admittance someone playstation with an world IR coupler distant. The remaining one abstract roughly controller playstation is that one may not attempt PS2 games on PS3 Slenderize gamers Consol because of the regardant compatibility is somewhat soothe intend discommode with games housing. The patronising news is that stylish joke post slim run at a lot of low temperature and the numerically according shape is instant at that indication. Itfs extremely archaic on the days sure to be completely certain but the new covering ornament and the gears bang to be intentional to reduction the overheating thing of the human models. In one surface, there is the incomparable Xbox 360 table, the favored 250GB, and in the separate endorse, we someone choose PS3 table, the Slim-250GB. Currently Sony has brought the issuance. Both the companion soul rationalized the trend uses wicked matte windup, still the PS3 console has been a recent trice. 
In turns the Xbox 360 consol perception suchlike superannuated and yet boasts that enormous influence brick. http://dietaslimfast.com/ ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1324502 Title: Looking For The Best Status in OpenStack Neutron (virtual network service): New Bug description: The PlayStation 3 or PS3 jazz in the part of a lot of life now, notwithstanding at here it has small behind and get new and reinforced structure and sure doesn't dissatisfy. Initially, the individual playstation was at low priced. As excavation it's a tierce slender and floats than the bulky early type, and also a bag else energy resourceful, in the module somewhat small streaming expenses and, further histrion indispensable, ambient to quiet Emotional Ikon Experts Assemble video files from recording circle or USB among JPEG images presenting as with the pictures showcased in striking trough feigning. Additionally PlayStation 3 Slenderize act as a digital media heart and is set to flowing is elementary to from any of DLNA humble scheme instrumentality. Looking at the slim-console; you present be discovering an HDMI result that is an (SPDIF) optical digital frequence signal. For some gamers who were to greatly ready and optimistically Sony will hold to found IR acquirer that would get to permit contestant to admittance someone playstation with an world IR coupler distant. The remaining one abstract roughly controller playstation is that one may not attempt PS2 games on PS3 Slenderize gamers Consol because of the regardant compatibility is somewhat soothe intend discommode with games housing. The patronising news is that stylish joke post slim run at a lot of low temperature and the numerically according shape is instant at that indication. 
Itfs extremely archaic on the days sure to be completely certain but the new covering ornament and the gears bang to be intentional to reduction the overheating thing of the human models. In one surface, there is the incomparable Xbox 360 table, the favored 250GB, and in the separate endorse, we someone choose PS3 table, the Slim-250GB. Currently Sony has brought the issuance. Both the companion soul rationalized the trend uses wicked matte windup, still the PS3 console has been a recent trice. In turns the Xbox 360 consol perception suchlike superannuated and yet boasts that enormous influence brick. http://dietaslimfast.com/ To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1324502/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324502] Re: Looking For The Best
** Project changed: neutron = null-and-void ** Changed in: null-and-void Status: New = Invalid ** Information type changed from Public to Private Security -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1324502 Title: Looking For The Best Status in NULL Project: Invalid Bug description: The PlayStation 3 or PS3 jazz in the part of a lot of life now, notwithstanding at here it has small behind and get new and reinforced structure and sure doesn't dissatisfy. Initially, the individual playstation was at low priced. As excavation it's a tierce slender and floats than the bulky early type, and also a bag else energy resourceful, in the module somewhat small streaming expenses and, further histrion indispensable, ambient to quiet Emotional Ikon Experts Assemble video files from recording circle or USB among JPEG images presenting as with the pictures showcased in striking trough feigning. Additionally PlayStation 3 Slenderize act as a digital media heart and is set to flowing is elementary to from any of DLNA humble scheme instrumentality. Looking at the slim-console; you present be discovering an HDMI result that is an (SPDIF) optical digital frequence signal. For some gamers who were to greatly ready and optimistically Sony will hold to found IR acquirer that would get to permit contestant to admittance someone playstation with an world IR coupler distant. The remaining one abstract roughly controller playstation is that one may not attempt PS2 games on PS3 Slenderize gamers Consol because of the regardant compatibility is somewhat soothe intend discommode with games housing. The patronising news is that stylish joke post slim run at a lot of low temperature and the numerically according shape is instant at that indication. 
Itfs extremely archaic on the days sure to be completely certain but the new covering ornament and the gears bang to be intentional to reduction the overheating thing of the human models. In one surface, there is the incomparable Xbox 360 table, the favored 250GB, and in the separate endorse, we someone choose PS3 table, the Slim-250GB. Currently Sony has brought the issuance. Both the companion soul rationalized the trend uses wicked matte windup, still the PS3 console has been a recent trice. In turns the Xbox 360 consol perception suchlike superannuated and yet boasts that enormous influence brick. http://dietaslimfast.com/ To manage notifications about this bug go to: https://bugs.launchpad.net/null-and-void/+bug/1324502/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1303933] Re: openvswitch plugin does not supports rpc_workers
The OVS plugin is earmarked for removal. I don't think there's any value in making this change now. Please consider using the ML2 plugin. ** Changed in: neutron Status: In Progress = Opinion -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1303933 Title: openvswitch plugin does not supports rpc_workers Status in OpenStack Neutron (virtual network service): Opinion Bug description: The rpc_workers option enables multiple RPC worker processes and requires the plugin to implement the start_rpc_listener method, which starts the RPC workers. The ml2 plugin implements this method, so RPC workers work there, but the openvswitch plugin doesn't implement it, so the rpc_workers option is silently discarded. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1303933/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1314968] Re: Installing test-requirements fails because pysendfile.2.0.0.tar.gz cannot be found
** Changed in: glance Importance: Undecided = High ** Also affects: glance/icehouse Importance: Undecided Status: New ** Changed in: glance/icehouse Status: New = In Progress ** Changed in: glance/icehouse Importance: Undecided = High ** Changed in: glance/icehouse Assignee: (unassigned) = Alan Pevec (apevec) ** Changed in: glance/icehouse Milestone: None = 2014.1.1 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1314968 Title: Installing test-requirements fails because pysendfile.2.0.0.tar.gz cannot be found Status in OpenStack Image Registry and Delivery Service (Glance): Fix Committed Status in Glance icehouse series: In Progress Bug description:

2014-05-01 01:43:10.804 | Downloading/unpacking pysendfile==2.0.0 (from -r /home/jenkins/workspace/gate-glance-pep8/test-requirements.txt (line 23))
2014-05-01 01:43:10.804 | http://pypi.openstack.org/openstack/pysendfile/2.0.0 uses an insecure transport scheme (http). Consider using https if pypi.openstack.org has it available
2014-05-01 01:43:10.804 | http://pypi.openstack.org/openstack/pysendfile/ uses an insecure transport scheme (http). Consider using https if pypi.openstack.org has it available
2014-05-01 01:43:10.804 | http://pysendfile.googlecode.com/files/pysendfile-2.0.0.tar.gz uses an insecure transport scheme (http). Consider using https if pysendfile.googlecode.com has it available
2014-05-01 01:43:10.804 | HTTP error 404 while getting http://pysendfile.googlecode.com/files/pysendfile-2.0.0.tar.gz (from -f)
2014-05-01 01:43:10.804 | Cleaning up...
2014-05-01 01:43:10.804 | Exception:
2014-05-01 01:43:10.804 | Traceback (most recent call last):
2014-05-01 01:43:10.804 |   File "/home/jenkins/workspace/gate-glance-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
2014-05-01 01:43:10.804 |     status = self.run(options, args)
2014-05-01 01:43:10.805 |   File "/home/jenkins/workspace/gate-glance-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run
2014-05-01 01:43:10.805 |     requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
2014-05-01 01:43:10.805 |   File "/home/jenkins/workspace/gate-glance-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/req.py", line 1197, in prepare_files
2014-05-01 01:43:10.805 |     do_download,
2014-05-01 01:43:10.805 |   File "/home/jenkins/workspace/gate-glance-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/req.py", line 1375, in unpack_url
2014-05-01 01:43:10.805 |     self.session,
2014-05-01 01:43:10.805 |   File "/home/jenkins/workspace/gate-glance-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/download.py", line 547, in unpack_http_url
2014-05-01 01:43:10.805 |     resp.raise_for_status()
2014-05-01 01:43:10.805 |   File "/home/jenkins/workspace/gate-glance-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/models.py", line 773, in raise_for_status
2014-05-01 01:43:10.805 |     raise HTTPError(http_error_msg, response=self)
2014-05-01 01:43:10.805 | HTTPError: 404 Client Error: Not Found
2014-05-01 01:43:10.805 |
2014-05-01 01:43:10.806 | Storing debug log for failure in /home/jenkins/.pip/pip.log
2014-05-01 01:43:10.806 |
2014-05-01 01:43:10.806 | ERROR: could not install deps [-r/home/jenkins/workspace/gate-glance-pep8/requirements.txt, -r/home/jenkins/workspace/gate-glance-pep8/test-requirements.txt]

The following fix would be submitted against this bug:

index bef062d..986b853 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -19,7 +19,6 @@ psutil>=1.1.1
 # Optional packages that should be installed when testing
 MySQL-python
 psycopg2
--f http://pysendfile.googlecode.com/files/pysendfile-2.0.0.tar.gz
 pysendfile==2.0.0
 qpid-python
 xattr>=0.4

To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1314968/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1257273] Re: Glance download fails when size is 0
** Also affects: glance/icehouse Importance: Undecided Status: New ** Changed in: glance/icehouse Status: New = In Progress ** Changed in: glance/icehouse Importance: Undecided = High ** Changed in: glance/icehouse Assignee: (unassigned) = s iwata (s-iwata) ** Changed in: glance/icehouse Milestone: None = 2014.1.1 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1257273 Title: Glance download fails when size is 0 Status in OpenStack Image Registry and Delivery Service (Glance): Fix Committed Status in Glance icehouse series: In Progress Bug description: Glance images are not being fetched by glance's API v1 when the size is 0. There are 2 things wrong with this behaviour: 1) Active images should always be ready to be downloaded, regardless they're locally or remotely stored. 2) The size shouldn't be the way to verify whether an image has some data or not. https://git.openstack.org/cgit/openstack/glance/tree/glance/api/v1/images.py#n455 This is happening in the API v1, but it doesn't seem to be true for v2. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1257273/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
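The report's point can be sketched in a few lines. The image here is a stand-in dict, not Glance's real model; the contrast is between gating download on size (which drops valid zero-byte or remotely stored images) and gating it on status:

```python
def can_download_buggy(image):
    # Roughly the v1 behaviour being reported: size 0 treated as "no data".
    return image["status"] == "active" and image["size"] > 0


def can_download_fixed(image):
    # Active images should always be downloadable, regardless of size
    # and regardless of where the data is stored.
    return image["status"] == "active"
```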
[Yahoo-eng-team] [Bug 1318528] Re: DHCP agent creates new instance of driver for each action
The object class is imported only once, whereas I agree the driver's object is instantiated every time. I suspect it's going to be a bit of work to restructure the code to accommodate a single instance of the driver; I am not opposed to the idea but I wonder if we can do some preliminary profiling to establish whether this refactoring is really worth it. If we managed to shave like 10-20% of execution times during this ops then great but I suspect that numbers are not going to look that great. ** Changed in: neutron Status: Confirmed = Opinion -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1318528 Title: DHCP agent creates new instance of driver for each action Status in OpenStack Neutron (virtual network service): Opinion Bug description: Working on rootwrap daemon [0] I've found out that DCHP agent asks for root_helper too often. [1] shows traceback for each place where get_root_helper is being called. It appeared that in [2] DHCP agent creates an instance of driver class for every single action it needs to run. That involves both lots of initialization code and very expensive dynamic import_object routine being run. [2] shows that the only thing that changes between driver instances is a network. I suggest we make network an argument for every action instead to avoid expensive dynamic driver instantiation. Links: [0] https://review.openstack.org/84667 [1] http://logs.openstack.org/67/84667/20/check/check-tempest-dsvm-neutron/3a7768e/logs/screen-q-dhcp.txt.gz?level=INFO [2] https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp_agent.py#L122 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1318528/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
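The refactoring under discussion can be sketched as follows. DummyDriver stands in for the real dhcp driver (which the agent loads via an expensive dynamic import_object); the point is moving instantiation out of the per-action path and passing the network as a call argument instead:

```python
import functools


class DummyDriver:
    """Stand-in for the dhcp driver; counts how often it is constructed."""
    instantiations = 0

    def __init__(self):
        DummyDriver.instantiations += 1

    def enable(self, network):
        return "enabled %s" % network


def call_driver_per_action(action, network):
    # What the agent does today, per the report: a fresh driver
    # (dynamic import + __init__) for every single action.
    driver = DummyDriver()
    return getattr(driver, action)(network)


@functools.lru_cache(maxsize=1)
def get_driver():
    # Suggested change: instantiate once, reuse for every action.
    return DummyDriver()


def call_driver_cached(action, network):
    return getattr(get_driver(), action)(network)
```

Whether the saved instantiations matter in practice is exactly the profiling question raised in the comment above.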
[Yahoo-eng-team] [Bug 1324218] Re: Empty Daily Report Page
** Also affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1324218 Title: Empty Daily Report Page Status in OpenStack Telemetry (Ceilometer): New Status in OpenStack Dashboard (Horizon): New Bug description: Is the Daily Report tab implemented yet? OpenStack Icehouse on CentOS 6.5 (2.6.32-431.11.2.el6.x86_64 #1 SMP) I can see and select both resource usage tabs, however the Daily Report page is empty (for over 2 weeks). I see no errors in apache, keystone or ceilometer logs on the cloud controller node. On the compute nodes, I see many of these:

2014-05-27 09:17:21.625 18382 WARNING ceilometer.transformer.conversions [-] dropping sample with no predecessor: (<ceilometer.sample.Sample object at 0x24ce810>,)

Here are package versions of relevant RPMs on the controller node:

Name: openstack-ceilometer-collector  Relocations: (not relocatable)
Version: 2014.1  Vendor: Fedora Project  Release: 2.el6
Build Date: Wed 07 May 2014 11:33:07 AM PDT  Install Date: Mon 12 May 2014 06:07:35 PM PDT  Build Host: buildvm-16.phx2.fedoraproject.org

Name: openstack-ceilometer-api  Relocations: (not relocatable)
Version: 2014.1  Vendor: Fedora Project  Release: 2.el6
Build Date: Wed 07 May 2014 11:33:07 AM PDT  Install Date: Mon 12 May 2014 06:07:35 PM PDT  Build Host: buildvm-16.phx2.fedoraproject.org

Name: python-ceilometerclient  Relocations: (not relocatable)
Version: 1.0.8  Vendor: Fedora Project  Release: 1.el6
Build Date: Mon 16 Dec 2013 11:20:29 AM PST  Install Date: Mon 05 May 2014 12:23:26 PM PDT  Build Host: buildvm-25.phx2.fedoraproject.org

Name: openstack-ceilometer-notification  Relocations: (not relocatable)
Version: 2014.1  Vendor: Fedora Project  Release: 2.el6
Build Date: Wed 07 May 2014 11:33:07 AM PDT  Install Date: Mon 12 May 2014 06:07:26 PM PDT  Build Host: buildvm-16.phx2.fedoraproject.org

Name: openstack-ceilometer-central  Relocations: (not relocatable)
Version: 2014.1  Vendor: Fedora Project  Release: 2.el6
Build Date: Wed 07 May 2014 11:33:07 AM PDT  Install Date: Mon 12 May 2014 06:07:35 PM PDT  Build Host: buildvm-16.phx2.fedoraproject.org

Name: openstack-ceilometer-common  Relocations: (not relocatable)
Version: 2014.1  Vendor: Fedora Project  Release: 2.el6
Build Date: Wed 07 May 2014 11:33:07 AM PDT  Install Date: Mon 12 May 2014 06:07:26 PM PDT  Build Host: buildvm-16.phx2.fedoraproject.org

Name: openstack-ceilometer-alarm  Relocations: (not relocatable)
Version: 2014.1  Vendor: Fedora Project  Release: 2.el6
Build Date: Wed 07 May 2014 11:33:07 AM PDT  Install Date: Mon 12 May 2014 06:07:35 PM PDT  Build Host: buildvm-16.phx2.fedoraproject.org

Name: python-ceilometer  Relocations: (not relocatable)
Version: 2014.1  Vendor: Fedora Project  Release: 2.el6
Build Date: Wed 07 May 2014 11:33:07 AM PDT  Install Date: Mon 12 May 2014 06:07:26 PM PDT  Build Host: buildvm-16.phx2.fedoraproject.org

Name: mongodb  Relocations: (not relocatable)
Version: 2.4.6  Vendor: Fedora Project  Release: 1.el6
Build Date: Thu 19 Sep 2013 12:24:03 PM PDT  Install Date: Mon 05 May 2014 05:45:00 PM PDT  Build Host: buildvm-09.phx2.fedoraproject.org

Relevant RPMs on the compute nodes:

Name: python-ceilometer  Relocations: (not relocatable)
Version: 2014.1  Vendor: Fedora Project  Release: 2.el6
Build Date: Wed 07 May 2014 11:33:07 AM PDT  Install Date: Mon 12 May 2014 09:47:30 AM PDT  Build Host: buildvm-16.phx2.fedoraproject.org

Name: python-ceilometerclient  Relocations: (not relocatable)
Version: 1.0.8  Vendor: Fedora Project  Release: 1.el6
Build Date: Mon 16 Dec 2013 11:20:29 AM PST  Install Date: Mon 12 May 2014 09:47:28 AM PDT  Build Host: buildvm-25.phx2.fedoraproject.org

Name: openstack-ceilometer-common  Relocations: (not relocatable)
Version: 2014.1
[Yahoo-eng-team] [Bug 1324479] Re: Fails to launch instance with create volume from image
** Project changed: nova => cinder -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1324479 Title: Fails to launch instance with create volume from image Status in Cinder: New Bug description: In Icehouse something has changed in Glance, and when I try to launch an instance with the "create volume from image" option it fails, because some attributes are absent from the image dict returned by Glance. Different types of images were tried. RDO packages are used. From /var/log/cinder/api.log:
2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault   File "/usr/lib/python2.6/site-packages/cinder/image/glance.py", line 434, in _extract_attributes
2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault     output[attr] = getattr(image, attr)
2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault   File "/usr/lib/python2.6/site-packages/warlock/model.py", line 69, in __getattr__
2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault     raise AttributeError(key)
2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault AttributeError: owner
The image's owner in the database was indeed NULL (and that should be ok).
If I add an owner to the image, then another attribute will also not be found:
2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault   File "/usr/lib/python2.6/site-packages/cinder/image/glance.py", line 434, in _extract_attributes
2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault     output[attr] = getattr(image, attr)
2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault   File "/usr/lib/python2.6/site-packages/warlock/model.py", line 69, in __getattr__
2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault     raise AttributeError(key)
2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault AttributeError: deleted
The image dict returned from Glance was: {u'status': u'active', u'tags': [], u'container_format': u'bare', u'min_ram': 0, u'updated_at': u'2014-05-22T13:24:49Z', u'visibility': u'public', u'file': u'/v2/images/ad385533-0bbb-40d8-a4db-669c76677e24/file', u'min_disk': 0, u'id': u'ad385533-0bbb-40d8-a4db-669c76677e24', u'size': 3145728, u'name': u'img04', u'checksum': u'a5c6d1997966f85908c5640c5dfd7b79', u'created_at': u'2014-05-22T13:24:48Z', u'disk_format': u'raw', u'protected': False, u'direct_url': u'rbd://2485eec9-d30a-4258-b959-937359ed61e8/images/ad385533-0bbb-40d8-a4db-669c76677e24/snap', u'schema': u'/v2/schemas/image'}
I have no idea why some image attributes are absent, but one of the possible fixes is (for the Icehouse branch):
--- /a/cinder/image/glance.py 2014-04-21 12:58:43.0 -0700
+++ /b/cinder/image/glance.py 2014-05-29 03:23:31.0 -0700
@@ -431,7 +431,7 @@
         elif attr == 'checksum' and output['status'] != 'active':
             output[attr] = None
         else:
-            output[attr] = getattr(image, attr)
+            output[attr] = getattr(image, attr, None)
         output['properties'] = getattr(image, 'properties', {})
To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1324479/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : 
https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
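The proposed one-line patch above works because Python's three-argument `getattr` returns a default instead of raising. A minimal sketch of the difference (the `Image` stand-in and attribute list are illustrative, not the actual warlock or cinder code):

```python
class Image:
    """Stand-in for the warlock model Glance returns (hypothetical)."""

    def __init__(self, **attrs):
        self.__dict__.update(attrs)

    def __getattr__(self, key):
        # Like warlock, raise AttributeError for keys with no value set.
        raise AttributeError(key)


def extract_attributes(image, attrs=('name', 'owner', 'deleted')):
    # Two-argument getattr would propagate AttributeError (the reported
    # bug); the three-argument form substitutes None for missing attrs.
    return {attr: getattr(image, attr, None) for attr in attrs}


print(extract_attributes(Image(name='img04')))
# {'name': 'img04', 'owner': None, 'deleted': None}
```

With the two-argument form, the same call would raise `AttributeError: owner`, matching the traceback in the report.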
[Yahoo-eng-team] [Bug 1301337] Re: We have three duplicated tests for check_ovs_vxlan_version
** Changed in: neutron Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1301337 Title: We have three duplicated tests for check_ovs_vxlan_version Status in OpenStack Neutron (virtual network service): Fix Released Bug description: test_ovs_lib, test_ovs_neutron_agent and test_ofa_neutron_agent duplicate the same unit tests for check_ovs_vxlan_version. The only difference is SystemError (from ovs_lib) versus SystemExit (from the agents). The tested logic is 99% the same, and the unit tests in the ovs/ofa agents look unnecessary. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1301337/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1294437] Re: GET role by name OS-KSADM call not functional
** Changed in: keystone Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1294437 Title: GET role by name OS-KSADM call not functional Status in OpenStack Identity (Keystone): Won't Fix Bug description: A get-role-by-name call against the adminurl returns all the roles instead of the filtered role:
GET /v2.0/OS-KSADM/roles?name=KeystoneServiceAdmin HTTP/1.1
Host: 10.127.101.67:35357
X-Auth-Token:
Content-Type: application/json
Accept-Encoding: gzip, deflate, compress
Accept: application/json
User-Agent: python-requests/2.2.1 CPython/2.7.4 Linux/3.13.0-17-generic

HTTP/1.1 200 OK
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 1624
Date: Wed, 19 Mar 2014 02:07:49 GMT

{"roles": [{"description": "test description", "name": "delme6699871", "id": "0c388669b5f24b6b83e06a53132ac8f3"}, {"description": "test description", "name": "delme5826205", "id": "13a0e7d803674798ba4d7162a687ee78"}, {"id": "1a5c8dae44f047a8ac972f0c5281f9a8", "name": "admin"}, {"description": "test description", "name": "delme2440856", "id": "2ac7708dbabe495b8b521d522789ac69"}, {"description": "test description", "name": "delme6598029", "id": "345941408ff14352852c6effeeafffd6"}, {"description": "test description", "name": "delme3997949", "id": "6defa1fe084d475abefdba18bc16da7b"}, {"id": "7d07bbea97b040cf9aa70cd2c0519494", "name": "cls", "cls": "break"}, {"description": "test description", "name": "delme8497993", "id": "7e32e8d74fa44429975b66f2034e0e8f"}, {"description": "test description", "name": "delme3345843", "id": "831fcc12e1c749eba4082e24f6e33c02"}, {"description": "test description", "name": "delme1401640", "id": "85dbba78a47c4eedbbd3825febc15907"}, {"description": "test description", "name": "delme4184222", "id": "93309675c7ff459b8d13bc1e1df3ee03"}, {"description": "test description", "name": "delme7697947", "id": "9a473c7f75144da5b53799487d16d13d"}, {"enabled": true, "description": "Default role for project membership", "name": "_member_", "id": "9fe2ff9ee4384b1894a90878d3e92bab"}, {"id": "e75fbe7b349c4200995026598768aae7", "name": "KeystoneAdmin"}, {"id": "eccfbb67d68f45d3bf3fba02017bd091", "name": "Member"}, {"description": "test description", "name": "delme3796526", "id": "f72e90114945476eb081940e98c44976"}, {"id": "fc4fec9f825c44d3ba2a0501287d27e0", "name": "KeystoneServiceAdmin"}]}
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1294437/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
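For reference, the filtering the `?name=` query parameter is expected to trigger server-side amounts to the following (a hypothetical helper illustrating the intended behavior, not Keystone code):

```python
def filter_roles(roles, name=None):
    """Return only the roles matching the name filter; all roles if no
    filter is given. Sketch of what the OS-KSADM call should do but,
    per this bug, does not."""
    if name is None:
        return roles
    return [role for role in roles if role.get('name') == name]


roles = [
    {'id': '1a5c8dae44f047a8ac972f0c5281f9a8', 'name': 'admin'},
    {'id': 'fc4fec9f825c44d3ba2a0501287d27e0', 'name': 'KeystoneServiceAdmin'},
]
print(filter_roles(roles, name='KeystoneServiceAdmin'))
# [{'id': 'fc4fec9f825c44d3ba2a0501287d27e0', 'name': 'KeystoneServiceAdmin'}]
```

The observed response is what `filter_roles(roles)` with no name would return: the entire role list.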
[Yahoo-eng-team] [Bug 1319640] Re: Console to instance persists even after logging out of Horizon
From a Nova perspective this is not a security issue. When console access is requested, a token is returned, as Thierry mentioned, and as long as a valid token is used to access the console it doesn't matter whether a user is logged into Horizon or has reopened the tab. Essentially, authorization is wrapped up in the token returned by Nova, not Horizon. There could be a feature request to provide token revocation which Horizon could use on logout, though. ** Changed in: nova Status: New => Invalid ** Changed in: nova Importance: Undecided => Wishlist ** Changed in: nova Status: Invalid => Confirmed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1319640 Title: Console to instance persists even after logging out of Horizon Status in OpenStack Dashboard (Horizon): Incomplete Status in OpenStack Compute (Nova): Confirmed Status in OpenStack Security Advisories: Incomplete Bug description: Steps to recreate the bug:
1. Log in through the Horizon dashboard
2. Create an instance and wait till it is running
3. Open a console to the VM from the drop-down menu for the instance
4. Open the console in a new window.
5. Now log out of the dashboard
6. Scenario 1: Observe that the instance console session still persists
7. Copy the URL of the console window.
8. Close the console window
9. Scenario 2: Reopen the window (in my case CTRL+SHIFT+T) in the browser - you will get access to the instance console.
10. Scenario 3: Pass the copied URL to other LAN users and ask them to use it - they will get access to the instance console
I would expect the console session to end once the console is closed.
Multiple sessions must not be allowed (referring to Scenario 3). To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1319640/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324586] [NEW] v1 image delete returns 200 with empty body instead of 204
Public bug reported: When successfully deleting the image on v1 Image API glance will return 200 with empty body instead of 204. As we are this close to deprecating Image API v1 I would assume this to be closed as Won't Fix. ** Affects: glance Importance: Undecided Status: Won't Fix ** Description changed: When successfully deleting the image on v1 Image API glance will return 200 with empty body instead of 204. + + As we are this close to deprecating Image API v1 I would assume this to be + closed as Won't Fix. -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1324586 Title: v1 image delete returns 200 with empty body instead of 204 Status in OpenStack Image Registry and Delivery Service (Glance): Won't Fix Bug description: When successfully deleting the image on v1 Image API glance will return 200 with empty body instead of 204. As we are this close to deprecating Image API v1 I would assume this to be closed as Won't Fix. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1324586/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324586] Re: v1 image delete returns 200 with empty body instead of 204
** Changed in: glance Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1324586 Title: v1 image delete returns 200 with empty body instead of 204 Status in OpenStack Image Registry and Delivery Service (Glance): Won't Fix Bug description: When successfully deleting the image on v1 Image API glance will return 200 with empty body instead of 204. As we are this close to deprecating Image API v1 I would assume this to be closed as Won't Fix. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1324586/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1257235] Re: handle ipgenerationfailure exception during router operations
This no longer pops up in the logs: http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiSXBBZGRyZXNzR2VuZXJhdGlvbkZhaWx1cmU6IE5vIG1vcmUgSVAgYWRkcmVzc2VzIGF2YWlsYWJsZSBvbiBuZXR3b3JrXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tcS1zdmMudHh0XCIgQU5EIG1lc3NhZ2U6XCJfY3JlYXRlX3JvdXRlcl9nd19wb3J0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDEzNzczODcxNjQsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0= Not sure if I fixed it and forgot about it, or it just went away on its own ;) ** Changed in: neutron Status: Triaged => Invalid ** Changed in: neutron Assignee: Armando Migliaccio (armando-migliaccio) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1257235 Title: handle ipgenerationfailure exception during router operations Status in OpenStack Neutron (virtual network service): Invalid Bug description: The following stacktrace has been observed: [req-b632de6b-6cac-4923-a07f-d0b8983a60de a24057b8b690406986c6289d1e49ea66 6c6b7ffd1b28458d98fb7d4eb6db4be4] create failed
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource Traceback (most recent call last):
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/api/v2/resource.py", line 84, in resource
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 411, in create
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource     obj = obj_creator(request.context, **kwargs)
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 132, in create_router
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource     self._update_router_gw_info(context, router_db['id'], gw_info)
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/l3_gwmode_db.py", line 62, in _update_router_gw_info
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource     context, router_id, info, router=router)
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 215, in _update_router_gw_info
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource     self._create_router_gw_port(context, router, network_id)
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 163, in _create_router_gw_port
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource     'name': ''}})
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 562, in create_port
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource     result = super(Ml2Plugin, self).create_port(context, port)
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py", line 1333, in create_port
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource     ips = self._allocate_ips_for_port(context, network, port)
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py", line 718, in _allocate_ips_for_port
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource     result = NeutronDbPluginV2._generate_ip(context, subnets)
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py", line 482, in _generate_ip
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource     raise q_exc.IpAddressGenerationFailure(net_id=subnets[0]['network_id'])
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource IpAddressGenerationFailure: No more IP addresses available on network 76bb0b9c-db58-40c1-85cd-fb980d80ef6a.
2013-12-02 21:06:55.567 3055 TRACE neutron.api.v2.resource In the context of triaging for bug: 1243726 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1257235/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
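The bug title asks that `IpAddressGenerationFailure` be handled during router operations rather than allowed to escape as an unhandled server error. A minimal sketch of that kind of handling (the exception class and return shape are simplified stand-ins, not neutron's actual code):

```python
class IpAddressGenerationFailure(Exception):
    """Simplified stand-in for neutron's exception."""


def create_gw_port(allocate_ip):
    """Run the IP allocation and map address exhaustion to a
    client-visible 409-style conflict instead of a 500."""
    try:
        return {'status': 201, 'fixed_ip': allocate_ip()}
    except IpAddressGenerationFailure:
        return {'status': 409, 'error': 'No more IP addresses available'}


def exhausted():
    raise IpAddressGenerationFailure()


print(create_gw_port(exhausted)['status'])  # 409
```

An unhedged allocator call, by contrast, would surface exactly the traceback shown above.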
[Yahoo-eng-team] [Bug 1323383] Re: Ubuntu source package for neutron can not be rebuild
This is a package issue rather than a neutron one. ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1323383 Title: Ubuntu source package for neutron can not be rebuild Status in OpenStack Neutron (virtual network service): Invalid Bug description: Ubuntu's source package for neutron cannot be rebuilt twice: 1. There is no proper clean target. 2. neutron.egg-info is included in neutron_2013.2.3.orig.tar.gz (regardless of .gitignore in the original git). That causes a problem when the package is built twice from the same source. The 1st build is fine; the 2nd causes the following errors (each type of error cited once):
1. dpkg-source: warning: newly created empty file 'build/lib.linux-x86_64-2.7/neutron/openstack/common/__init__.py' will not be represented in diff
2. dpkg-source: error: cannot represent change to neutron/__init__.pyc: binary file contents changed
3. dpkg-source: info: local changes detected, the modified files are: neutron-2013.2.3/neutron.egg-info/entry_points.txt neutron-2013.2.3/neutron.egg-info/requires.txt
Errors 1 and 2 are caused by the lack of a clean target. The 3rd error is more problematic:
tar -tzvf neutron_2013.2.3.orig.tar.gz|grep egg
drwxrwxr-x jenkins/jenkins     0 2014-04-03 20:49 neutron-2013.2.3/neutron.egg-info/
-rw-rw-r-- jenkins/jenkins  1800 2014-04-03 20:49 neutron-2013.2.3/neutron.egg-info/PKG-INFO
-rw-rw-r-- jenkins/jenkins     1 2014-04-03 20:49 neutron-2013.2.3/neutron.egg-info/dependency_links.txt
-rw-rw-r-- jenkins/jenkins    16 2014-04-03 20:49 neutron-2013.2.3/neutron.egg-info/top_level.txt
-rw-rw-r-- jenkins/jenkins 52753 2014-04-03 20:49 neutron-2013.2.3/neutron.egg-info/SOURCES.txt
-rw-rw-r-- jenkins/jenkins  3654 2014-04-03 20:49 neutron-2013.2.3/neutron.egg-info/entry_points.txt
-rw-rw-r-- jenkins/jenkins     1 2014-04-03 20:49 neutron-2013.2.3/neutron.egg-info/not-zip-safe
-rw-rw-r-- jenkins/jenkins   406 2014-04-03 20:49 neutron-2013.2.3/neutron.egg-info/requires.txt
But the git repository states it should not be included in source/git: https://github.com/openstack/neutron/blob/stable/havana/.gitignore (neutron.egg-info/). To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1323383/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1323769] Re: nec plugin: AttributeError: No such RPC function 'update_floatingip_statuses'
** Also affects: neutron/icehouse Importance: Undecided Status: New ** Changed in: neutron/icehouse Status: New => In Progress ** Changed in: neutron/icehouse Importance: Undecided => Medium ** Changed in: neutron/icehouse Assignee: (unassigned) => Akihiro Motoki (amotoki) ** Changed in: neutron/icehouse Milestone: None => 2014.1.1 ** Tags removed: icehouse-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1323769 Title: nec plugin: AttributeError: No such RPC function 'update_floatingip_statuses' Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: In Progress Bug description: In the NEC plugin with the l3-agent (icehouse), "AttributeError: No such RPC function 'update_floatingip_statuses'" occurs. update_floatingip_statuses was implemented in Icehouse and the RPC callback version related to L3RpcCallbackMixin was bumped to 1.1, but the version of L3RpcCallback in the NEC plugin was not bumped to 1.1 yet. The update_floatingip_statuses RPC call from the l3-agent expects RPC version 1.1. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1323769/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
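The failure mode is a plain version mismatch: the l3-agent makes the `update_floatingip_statuses` call against RPC API 1.1, but the plugin's callback class still advertises 1.0. A simplified sketch of the dispatch check (illustrative, not oslo.messaging's actual implementation):

```python
class L3RpcCallback:
    """Callback class that forgot the version bump (mirrors the bug)."""

    RPC_API_VERSION = '1.0'

    def can_dispatch(self, method, requested_version):
        """A call succeeds only if the class advertises at least the
        requested API version and implements the method."""
        have = tuple(int(p) for p in self.RPC_API_VERSION.split('.'))
        want = tuple(int(p) for p in requested_version.split('.'))
        return want <= have and hasattr(self, method)

    def update_floatingip_statuses(self, statuses):
        return statuses


class FixedL3RpcCallback(L3RpcCallback):
    """The fix: bump the advertised version to 1.1."""

    RPC_API_VERSION = '1.1'


print(L3RpcCallback().can_dispatch('update_floatingip_statuses', '1.1'))       # False
print(FixedL3RpcCallback().can_dispatch('update_floatingip_statuses', '1.1'))  # True
```

The method exists in both classes; only the advertised version decides whether the 1.1 call is accepted, which is why the fix is a one-line version bump.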
[Yahoo-eng-team] [Bug 1315138] Re: stable backports failing with sub_unit.log was > 50 MB of uncompressed data!!!
All patches with topic:bug/1315138 have been merged. ** Also affects: nova/icehouse Importance: Undecided Status: New ** Changed in: nova/icehouse Status: New => Fix Committed ** Changed in: nova/icehouse Importance: Undecided => Low ** Changed in: nova Importance: Undecided => Low ** Changed in: nova/icehouse Assignee: (unassigned) => Matt Riedemann (mriedem) ** Changed in: nova/icehouse Milestone: None => 2014.1.1 ** Changed in: nova Status: In Progress => Fix Committed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1315138 Title: stable backports failing with sub_unit.log was > 50 MB of uncompressed data!!! Status in OpenStack Neutron (virtual network service): New Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) icehouse series: Fix Committed Status in OpenStack Core Infrastructure: Invalid Bug description: Since this merged today: https://review.openstack.org/#/c/85797/2 We have jobs failing in the stable branches which are backports: http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiKyBlY2hvICdzdWJfdW5pdC5sb2cgd2FzID4gNTAgTUIgb2YgdW5jb21wcmVzc2VkIGRhdGEhISEnXCIgQU5EIHRhZ3M6Y29uc29sZSIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5ODk3NTA0Njc4MH0= Seems this should only be enforced on master. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1315138/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1312858] Re: Keystone + Devstack fail when KEYSTONE_TOKEN_FORMAT=UUID
** Changed in: python-keystoneclient Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1312858 Title: Keystone + Devstack fail when KEYSTONE_TOKEN_FORMAT=UUID Status in devstack - openstack dev environments: Invalid Status in OpenStack Identity (Keystone): Invalid Status in Python client library for Keystone: Fix Released Bug description: Running devstack in fresh Ubuntu 12.04 virtual machine with: $ cat local_rc KEYSTONE_TOKEN_FORMAT=UUID ...fails to start Keystone. Despite being configured for the UUID provider, keystone attempts to read `/etc/keystone/ssl/certs/signing_cert.pem` and fails (because it doesn't exist):
2014-04-25 10:36:25.289 INFO eventlet.wsgi.server [-] 192.168.121.46 - - [25/Apr/2014 10:36:25] GET /v2.0/tokens/69da781ae31c405e9aaa7adbf8f6f806 HTTP/1.1 200 3988 0.009096
2014-04-25 10:36:25.294 DEBUG keystone.middleware.core [-] RBAC: auth_context: {'project_id': u'7fab1d7a9ba741208bd748749a394902', 'user_id': u'8d21c5353bdd4eb7a1a805cb3b7fd1b2', 'roles': [u'_member_', u'service']} from (pid=13334) process_request /opt/stack/keystone/keystone/middleware/core.py:281
2014-04-25 10:36:25.296 DEBUG keystone.common.wsgi [-] arg_dict: {} from (pid=13334) __call__ /opt/stack/keystone/keystone/common/wsgi.py:181
2014-04-25 10:36:25.296 DEBUG keystone.common.controller [-] RBAC: Authorizing identity:revocation_list() from (pid=13334) _build_policy_check_credentials /opt/stack/keystone/keystone/common/controller.py:54
2014-04-25 10:36:25.297 DEBUG keystone.common.controller [-] RBAC: using auth context from the request environment from (pid=13334) _build_policy_check_credentials /opt/stack/keystone/keystone/common/controller.py:59
2014-04-25 10:36:25.297 DEBUG keystone.policy.backends.rules [-] enforce identity:revocation_list: {'project_id': u'7fab1d7a9ba741208bd748749a394902', 'user_id': u'8d21c5353bdd4eb7a1a805cb3b7fd1b2', 'roles': [u'_member_', u'service']} from (pid=13334) enforce /opt/stack/keystone/keystone/policy/backends/rules.py:101
2014-04-25 10:36:25.297 DEBUG keystone.openstack.common.policy [-] Rule identity:revocation_list will be now enforced from (pid=13334) enforce /opt/stack/keystone/keystone/openstack/common/policy.py:287
2014-04-25 10:36:25.298 DEBUG keystone.common.controller [-] RBAC: Authorization granted from (pid=13334) inner /opt/stack/keystone/keystone/common/controller.py:151
2014-04-25 10:36:25.309 ERROR keystoneclient.common.cms [-] Signing error: Error opening signer certificate /etc/keystone/ssl/certs/signing_cert.pem
140424564475552:error:02001002:system library:fopen:No such file or directory:bss_file.c:398:fopen('/etc/keystone/ssl/certs/signing_cert.pem','r')
140424564475552:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:400:
unable to load certificate
2014-04-25 10:36:25.310 ERROR keystone.common.wsgi [-] Command 'openssl' returned non-zero exit status 3
2014-04-25 10:36:25.310 TRACE keystone.common.wsgi Traceback (most recent call last):
2014-04-25 10:36:25.310 TRACE keystone.common.wsgi   File "/opt/stack/keystone/keystone/common/wsgi.py", line 207, in __call__
2014-04-25 10:36:25.310 TRACE keystone.common.wsgi     result = method(context,
[Yahoo-eng-team] [Bug 1324625] [NEW] column label Updated At isn't clear or grammatically correct
Public bug reported: On the panel Admin->System Panel->System Info, on the tab "Compute Services" (and after https://review.openstack.org/#/c/95852/ merges the same column will be on the new "Cinder Services" tab), the label "Updated At" is not descriptive. It's also unusual to end a column label with a preposition. The meaning of this column is "Time the service has been in its current state" (maybe status too?). When the value 0 is shown in the column that implies the service has always been in the current state/no state change has been seen. I think this should get special treatment like the Project->Orchestration->Stacks table does with "Updated", so that instead of 0 something like "Never", "Always", or "Forever" is shown (depending on the label at the top). ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1324625 Title: column label Updated At isn't clear or grammatically correct Status in OpenStack Dashboard (Horizon): New Bug description: On the panel Admin->System Panel->System Info, on the tab "Compute Services" (and after https://review.openstack.org/#/c/95852/ merges the same column will be on the new "Cinder Services" tab), the label "Updated At" is not descriptive. It's also unusual to end a column label with a preposition. The meaning of this column is "Time the service has been in its current state" (maybe status too?). When the value 0 is shown in the column that implies the service has always been in the current state/no state change has been seen. I think this should get special treatment like the Project->Orchestration->Stacks table does with "Updated", so that instead of 0 something like "Never", "Always", or "Forever" is shown (depending on the label at the top). 
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1324625/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
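The special-casing the report suggests amounts to a small display helper; the function and label text below are illustrative, not Horizon code:

```python
def format_time_in_state(seconds):
    """Render the 'Updated At' (time-in-current-state) column value.

    A value of 0 means no state change has ever been observed, so show
    a word ('Never' here, one of the report's suggested labels) instead
    of a misleading bare 0.
    """
    if not seconds:
        return 'Never'
    return '%d seconds' % seconds


print(format_time_in_state(0))   # Never
print(format_time_in_state(90))  # 90 seconds
```

Which word fits ('Never', 'Always', or 'Forever') depends on the column heading ultimately chosen, exactly as the report notes.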
[Yahoo-eng-team] [Bug 1250617] Re: Limited use trusts
** Changed in: python-keystoneclient Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1250617 Title: Limited use trusts Status in OpenStack Identity (Keystone): Fix Released Status in Python client library for Keystone: Fix Released Bug description: If a trust has been created with the sole purpose of supporting a one-time process, the trust should not be able to get more than one token. A generic implementation of this would allow a trust to be created with a counter of potential uses, with the trust inactivated after the last usage. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1250617/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
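The proposal amounts to a decrementing use counter on the trust. The sketch below is illustrative, not Keystone code; the field name `remaining_uses` is an assumption chosen to match the semantics described in the report:

```python
class Trust:
    def __init__(self, remaining_uses=None):
        # None means an unlimited trust; an integer caps token issuance.
        self.remaining_uses = remaining_uses

    def issue_token(self):
        """Issue a token, consuming one use of a limited trust."""
        if self.remaining_uses is not None:
            if self.remaining_uses <= 0:
                raise PermissionError('trust has been consumed')
            self.remaining_uses -= 1
        return 'scoped-token'


one_shot = Trust(remaining_uses=1)
print(one_shot.issue_token())  # scoped-token
# A second issue_token() call now raises PermissionError:
# the trust is inactivated after its last usage.
```

An unlimited trust (`remaining_uses=None`) keeps the pre-existing behavior, so the counter is purely opt-in.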
[Yahoo-eng-team] [Bug 1174499] Re: Keystone token hashing is MD5
** Changed in: python-keystoneclient Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1174499 Title: Keystone token hashing is MD5 Status in OpenStack Dashboard (Horizon): New Status in OpenStack Identity (Keystone): Fix Committed Status in OpenStack API documentation site: Confirmed Status in Python client library for Keystone: Fix Released Bug description: https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/common/cms.py
def cms_hash_token(token_id):
    """return: for ans1_token, returns the hash of the passed in token
    otherwise, returns what it was passed in.
    """
    if token_id is None:
        return None
    if is_ans1_token(token_id):
        hasher = hashlib.md5()
        hasher.update(token_id)
        return hasher.hexdigest()
    else:
        return token_id
MD5 is a deprecated mechanism; it should be replaced with at least SHA1, if not SHA256. Keystone should be able to support multiple hash types, and the auth_token middleware should query Keystone to find out which type is in use. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1174499/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
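The request is for a configurable hash rather than hard-coded MD5. A sketch using `hashlib.new`, which accepts the algorithm name as a string; the `mode` parameter and the helper itself are illustrative, not the keystoneclient API:

```python
import hashlib


def hash_token(token_id, mode='md5'):
    """Hash a token body with a configurable algorithm.

    'md5' stays the default for backward compatibility; a deployment
    could opt into 'sha1' or 'sha256' once both keystone and the
    auth_token middleware agree on the algorithm in use.
    """
    hasher = hashlib.new(mode)
    hasher.update(token_id.encode('utf-8'))
    return hasher.hexdigest()


token = 'MIICertBlob...'
assert hash_token(token) == hashlib.md5(token.encode('utf-8')).hexdigest()
assert len(hash_token(token, mode='sha256')) == 64
```

The negotiation step the report asks for (middleware querying Keystone for the hash type) is exactly what makes `mode` a parameter here instead of a constant.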
[Yahoo-eng-team] [Bug 1255321] Re: v3 token requests result in 500 error when run in apache
** Changed in: python-keystoneclient Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1255321 Title: v3 token requests result in 500 error when run in apache Status in OpenStack Identity (Keystone): Confirmed Status in Python client library for Keystone: Fix Released Bug description: A 500 Internal Server Error is generated when requests are issued to /v3/auth/tokens
mkdir /var/www/cgi-bin/keystone
ln /usr/share/keystone/keystone.wsgi /var/www/cgi-bin/keystone/main
ln /usr/share/keystone/keystone.wsgi /var/www/cgi-bin/keystone/admin
/etc/httpd/conf.d/wsgi-keystone.conf:
Listen 5000
<VirtualHost *:5000>
    WSGIScriptAlias / /var/www/cgi-bin/keystone/main
    <Location />
        AuthType None
    </Location>
</VirtualHost>
Listen 35357
<VirtualHost *:35357>
    WSGIScriptAlias / /var/www/cgi-bin/keystone/admin
    <Location />
        AuthType None
    </Location>
</VirtualHost>
Using version: python-keystone-2013.2-1.el6ost
keystone.conf:
[signing]
token_format = PKI
testuser:
keystone user-create --name tester --pass tester --email tes...@test.com
keystone role-create --name tester
keystone tenant-create --name tester
keystone user-role-add --user-id {USER_ID} --role-id {ROLE_ID} --tenant-id {TENANT_ID}
Request:
curl -H "Content-type: application/json" -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"domain": {"name": "Default"}, "name": "tester", "password": "tester"}}}, "scope": {"project": {"domain": {"name": "Default"}, "name": "tester"}}}}' http://127.0.0.1:5000/v3/auth/tokens
Note: The issue is not present if UUID tokens are used. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1255321/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324635] [NEW] Fix the policy check for Delete Subnet
Public bug reported: Opened to track the issue discussed on: https://review.openstack.org/#/c/86417/3/openstack_dashboard/dashboards/admin/networks/subnets/tables.py ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1324635 Title: Fix the policy check for Delete Subnet Status in OpenStack Dashboard (Horizon): New Bug description: Opened to track the issue discussed on: https://review.openstack.org/#/c/86417/3/openstack_dashboard/dashboards/admin/networks/subnets/tables.py To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1324635/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324634] [NEW] Fix the policy check for Delete Subnet
Public bug reported: Opened to track the issue discussed on: https://review.openstack.org/#/c/86417/3/openstack_dashboard/dashboards/admin/networks/subnets/tables.py ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1324634 Title: Fix the policy check for Delete Subnet Status in OpenStack Dashboard (Horizon): New Bug description: Opened to track the issue discussed on: https://review.openstack.org/#/c/86417/3/openstack_dashboard/dashboards/admin/networks/subnets/tables.py To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1324634/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1314129] Re: jsonutils should use simplejson on python 2.6 if available
Removed python-keystoneclient from this bug due to launchpad issues - fix released in python-keystoneclient 0.9.0. ** No longer affects: python-keystoneclient -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1314129 Title: jsonutils should use simplejson on python 2.6 if available Status in OpenStack Telemetry (Ceilometer): Fix Committed Status in Cinder: In Progress Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Status in Orchestration API (Heat): In Progress Status in OpenStack Dashboard (Horizon): Fix Committed Status in OpenStack Bare Metal Provisioning Service (Ironic): Fix Committed Status in OpenStack Identity (Keystone): In Progress Status in OpenStack Message Queuing Service (Marconi): Fix Committed Status in OpenStack Neutron (virtual network service): Fix Committed Status in OpenStack Compute (Nova): In Progress Status in Oslo - a Library of Common OpenStack Code: Fix Committed Status in Messaging API for OpenStack: In Progress Status in Python client library for Neutron: In Progress Status in Python client library for Nova: Fix Committed Status in OpenStack Data Processing (Sahara, ex. Savanna): In Progress Status in Taskflow for task-oriented systems.: Fix Committed Status in Openstack Database (Trove): In Progress Status in Tuskar: Fix Committed Bug description: Python 2.6 ships 'json' module that is very slow because it's written in pure Python. Python 2.7 updated [1] its 'json' module from simplejson PyPI repo with a version that is based on C extension (and quick). Quoting: Updated module: The json module was upgraded to version 2.0.9 of the simplejson package, which includes a C extension that makes encoding and decoding faster. (Contributed by Bob Ippolito; issue 4136.) We should strive to use simplejson library when running on Python 2.6. 
[1]: https://docs.python.org/dev/whatsnew/2.7.html To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1314129/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
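The conditional import the bug asks for is small; a minimal sketch of the pattern (an assumption about shape — the actual oslo jsonutils change may differ):

```python
import sys

# Prefer the C-accelerated simplejson on Python 2.6, where the stdlib
# json module is pure Python and slow; on later versions the stdlib
# json already includes the C extension.
if sys.version_info[:2] == (2, 6):
    try:
        import simplejson as json
    except ImportError:
        import json  # fall back if simplejson isn't installed
else:
    import json

print(json.dumps({"fixed": True}))
```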
[Yahoo-eng-team] [Bug 1294210] Re: InvalidCPUInfo exception not catched in conductor _live_migrate
** Also affects: nova/icehouse Importance: Undecided Status: New ** Changed in: nova/icehouse Status: New => In Progress ** Changed in: nova/icehouse Importance: Undecided => Medium ** Changed in: nova/icehouse Assignee: (unassigned) => Chuck Short (zulcss) ** Changed in: nova/icehouse Milestone: None => 2014.1.1 ** Tags removed: icehouse-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1294210 Title: InvalidCPUInfo exception not catched in conductor _live_migrate Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) icehouse series: In Progress Bug description: Environment: - fresh devstack multinode installation using nova-network. - one controller node with a compute node (node_1) - one compute node (node_2) Both compute nodes have different CPU features. When live migrating from node_1 to node_2, an InvalidCPUInfo exception is raised in the _live_migrate method of the conductor's manager. The exception is not caught, and the rollback of the task_state is not performed. The instance stays in the 'migrating' state. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1294210/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1317180] Re: Hyper-v fails to attach volumes when using v1 volume utilites
** Changed in: nova Assignee: (unassigned) => Petrut Lucian (petrutlucian94) ** Changed in: nova Importance: Undecided => Medium ** Also affects: nova/icehouse Importance: Undecided Status: New ** Changed in: nova/icehouse Status: New => In Progress ** Changed in: nova/icehouse Importance: Undecided => Medium ** Changed in: nova/icehouse Assignee: (unassigned) => Jay Bryant (jsbryant) ** Changed in: nova/icehouse Milestone: None => 2014.1.1 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1317180 Title: Hyper-v fails to attach volumes when using v1 volume utilites Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) icehouse series: In Progress Bug description: The following patch https://github.com/openstack/nova/commit/4c2f36bfe006cb0ef89ca7a706223f30488a182e#diff-5c6ee11140977e63b54542e2ff5763d3R22 caused a regression by replacing eventlet.subprocess.Popen with the builtin subprocess.Popen (by using the nova.utils execute method) without changing the way the args were parsed. In this module, the execution args were split on whitespace, which is not allowed by the builtin subprocess.Popen, causing a "not found" error. This error is returned, for example, when attaching a volume, at the point where the iscsicli tool is used to log in to the iSCSI target or portal. Trace: http://paste.openstack.org/show/79418/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1317180/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1321082] Re: libvirt driver detach_volume fails after migration failure
** Also affects: nova/icehouse Importance: Undecided Status: New ** Changed in: nova/icehouse Status: New => In Progress ** Changed in: nova/icehouse Importance: Undecided => Medium ** Changed in: nova/icehouse Assignee: (unassigned) => Qin Zhao (zhaoqin) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1321082 Title: libvirt driver detach_volume fails after migration failure Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) icehouse series: In Progress Bug description: When a VM with an attached iSCSI disk fails to migrate, the rollback method does not detach the disk from the target host. What happens is that _lookup_by_name() fails, since the VM does not exist on the target host. detach_volume() is supposed to print a warning based on the error code returned, instead of throwing the exception. However, this is not happening, because _lookup_by_name() throws an InstanceNotFound exception rather than a libvirt.libvirtError exception. So we also need to catch the InstanceNotFound exception, so that detach_volume() can continue to execute as expected.
Here's the exception log that I have: 2014-05-16 16:30:22.328 41419 WARNING nova.compute.manager [req-3db28fed-c287-4b41-ac95-9a37a619c75c 0 4be9915c10c8426cbfe948940f7c8af1] [instance: 3e1d0d56-3370-4d05-8210-0485fa31757c] Detaching volume from unknown instance 2014-05-16 16:30:22.331 41419 ERROR nova.compute.manager [req-3db28fed-c287-4b41-ac95-9a37a619c75c 0 4be9915c10c8426cbfe948940f7c8af1] [instance: 3e1d0d56-3370-4d05-8210-0485fa31757c] Failed to detach volume 98a940e5-051f-4d0f-a8c7-859a5079d95e from /dev/vdb 2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 3e1d0d56-3370-4d05-8210-0485fa31757c] Traceback (most recent call last): 2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 3e1d0d56-3370-4d05-8210-0485fa31757c] File /usr/lib/python2.7/site-packages/nova/compute/manager.py, line 4218, in _detach_volume 2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 3e1d0d56-3370-4d05-8210-0485fa31757c] encryption=encryption) 2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 3e1d0d56-3370-4d05-8210-0485fa31757c] File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 1356, in detach_volume 2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 3e1d0d56-3370-4d05-8210-0485fa31757c] virt_dom = self._lookup_by_name(instance_name) 2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 3e1d0d56-3370-4d05-8210-0485fa31757c] File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 3477, in _lookup_by_name 2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 3e1d0d56-3370-4d05-8210-0485fa31757c] raise exception.InstanceNotFound(instance_id=instance_name) 2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 3e1d0d56-3370-4d05-8210-0485fa31757c] InstanceNotFound: Instance rhel65_113-3e1d0d56-0002 could not be found. 
2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 3e1d0d56-3370-4d05-8210-0485fa31757c] To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1321082/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
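The fix described above amounts to widening the except clause around the lookup in detach_volume(). A simplified, self-contained sketch — the class and function names echo the traceback but are stand-ins, not nova's actual driver code:

```python
class InstanceNotFound(Exception):
    """Stand-in for nova.exception.InstanceNotFound."""

class LibvirtError(Exception):
    """Stand-in for libvirt.libvirtError."""

def lookup_by_name(name, domains):
    # Raises InstanceNotFound when the VM is absent, e.g. on the
    # target host after a failed migration.
    if name not in domains:
        raise InstanceNotFound(name)
    return domains[name]

def detach_volume(name, domains):
    try:
        dom = lookup_by_name(name, domains)
        dom.detach()
    except (LibvirtError, InstanceNotFound):
        # Catch both: warn instead of aborting, so the iSCSI cleanup
        # below still runs and the rollback can complete.
        print("Detaching volume from unknown instance %s" % name)
    # ...disconnect the iSCSI session here regardless...
    return "disconnected"

print(detach_volume("rhel65_113-3e1d0d56-0002", {}))
```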
[Yahoo-eng-team] [Bug 1299517] Re: quota-class-update
** Changed in: horizon Assignee: (unassigned) = Sergio Cazzolato (sergio-j-cazzolato) ** Also affects: python-novaclient Importance: Undecided Status: New ** Changed in: python-novaclient Assignee: (unassigned) = Sergio Cazzolato (sergio-j-cazzolato) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1299517 Title: quota-class-update Status in OpenStack Dashboard (Horizon): New Status in OpenStack Compute (Nova): In Progress Status in Python client library for Nova: New Bug description: Cant update default quota: root@blade1-1-live:~# nova --debug quota-class-update --ram -1 default REQ: curl -i 'http://XXX.XXX.XXX.XXX:8774/v2/1eaf475499f8479d94d5ed7a4af68703/os-quota-class-sets/default' -X PUT -H X-Auth-Project-Id: admin -H User-Agent: python-novaclient -H Content-Type: application/json -H Accept: application/json -H X-Auth-Token: 62837311542a42a495442d911cc8b12a -d '{quota_class_set: {ram: -1}}' New session created for: (http://XXX.XXX.XXX.XXX:8774) INFO (connectionpool:258) Starting new HTTP connection (1): XXX.XXX.XXX.XXX DEBUG (connectionpool:375) Setting read timeout to 600.0 DEBUG (connectionpool:415) PUT /v2/1eaf475499f8479d94d5ed7a4af68703/os-quota-class-sets/default HTTP/1.1 404 52 RESP: [404] CaseInsensitiveDict({'date': 'Sat, 29 Mar 2014 17:17:32 GMT', 'content-length': '52', 'content-type': 'text/plain; charset=UTF-8'}) RESP BODY: 404 Not Found The resource could not be found. 
DEBUG (shell:777) Not found (HTTP 404) Traceback (most recent call last): File /usr/lib/python2.7/dist-packages/novaclient/shell.py, line 774, in main OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:])) File /usr/lib/python2.7/dist-packages/novaclient/shell.py, line 710, in main args.func(self.cs, args) File /usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py, line 3378, in do_quota_class_update _quota_update(cs.quota_classes, args.class_name, args) File /usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py, line 3164, in _quota_update manager.update(identifier, **updates) File /usr/lib/python2.7/dist-packages/novaclient/v1_1/quota_classes.py, line 44, in update 'quota_class_set') File /usr/lib/python2.7/dist-packages/novaclient/base.py, line 165, in _update _resp, body = self.api.client.put(url, body=body) File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 289, in put return self._cs_request(url, 'PUT', **kwargs) File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 260, in _cs_request **kwargs) File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 242, in _time_request resp, body = self.request(url, method, **kwargs) File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 236, in request raise exceptions.from_response(resp, body, url, method) NotFound: Not found (HTTP 404) ERROR: Not found (HTTP 404) To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1299517/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1292173] Re: Remove list events API from Cisco N1kv neutron
** Also affects: neutron/icehouse Importance: Undecided Status: New ** Changed in: neutron/icehouse Status: New => In Progress ** Changed in: neutron/icehouse Importance: Undecided => Medium ** Changed in: neutron/icehouse Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka) ** Tags removed: icehouse-potential-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1292173 Title: Remove list events API from Cisco N1kv neutron Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: In Progress Bug description: Earlier, Cisco used the list-events API to poll policies from the VSM. That was inefficient and caused processing delays, so Cisco has now switched to list-profiles to poll policies from the VSM. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1292173/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1301035] Re: Nova notifier thread does not run in rpc_worker sub-processes
** Also affects: neutron/icehouse Importance: Undecided Status: New ** Changed in: neutron/icehouse Status: New => In Progress ** Changed in: neutron/icehouse Importance: Undecided => Medium ** Changed in: neutron/icehouse Assignee: (unassigned) => Assaf Muller (amuller) ** Tags removed: icehouse-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1301035 Title: Nova notifier thread does not run in rpc_worker sub-processes Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: In Progress Bug description: This was reported to me today by Maru. When an rpc worker is spawned as a sub-process, that happens after the nova notifier thread has already started. eventlet.hubs.use_hub() is the call in neutron/openstack/common/service.py that causes all thread execution to stop. From the eventlet documentation: "Make sure to do this before the application starts doing any I/O! Calling use_hub completely eliminates the old hub, and any file descriptors or timers that it had been managing will be forgotten." Maru's observation is that this means threads should not be spawned before forking the process if they need to run in the child process. I agree. The reason the thread spawns early is that the plugin gets loaded prior to forking, and the thread for the nova notifier is started in the __init__ method of a sub-class of the plugin. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1301035/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1300570] Re: dhcp_agent fails in RPC communication with neutron-server under Metaplugin
** Also affects: neutron/icehouse Importance: Undecided Status: New ** Changed in: neutron/icehouse Status: New = In Progress ** Changed in: neutron/icehouse Importance: Undecided = Medium ** Changed in: neutron/icehouse Assignee: (unassigned) = Itsuro Oda (oda-g) ** Tags removed: icehouse-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1300570 Title: dhcp_agent fails in RPC communication with neutron-server under Metaplugin Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: In Progress Bug description: This problem occurs when ml2 plugin runs under Metaplugin. error log of dhcp_agent is as follows: --- 2014-03-28 18:57:17.062 ERROR neutron.agent.dhcp_agent [req-9c53d7a6-d850-42de-896f-184827b33bfd None None] Failed reporting state! 2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent Traceback (most recent call last): 2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent File /opt/stack/neutron/neutron/agent/dhcp_agent.py, line 564, in _report_state 2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent self.state_rpc.report_state(ctx, self.agent_state, self.use_call) 2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent File /opt/stack/neutron/neutron/agent/rpc.py, line 72, in report_state 2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent return self.call(context, msg, topic=self.topic) 2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent File /opt/stack/neutron/neutron/openstack/common/rpc/proxy.py, line 129, in call 2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent exc.info, real_topic, msg.get('method')) 2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent Timeout: Timeout while waiting on RPC response - topic: q-plugin, RPC method: report_state info: unknown 2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent --- This problem is brought by the patch: 
https://review.openstack.org/#/c/72565/ because the ml2 plugin no longer opens an RPC connection at plugin initialization. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1300570/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324659] [NEW] dump_flows_for_table() needs to handle run_ofctl failing
Public bug reported: The function dump_flows_for_table() calls run_ofctl() to run the dump-flows command. In the case of this occurring during an OVS restart, run_ofctl() will fail, and an attempt to run splitlines() is made on a None object. ** Affects: neutron Importance: High Assignee: Kyle Mestery (mestery) Status: In Progress ** Changed in: neutron Importance: Undecided => High ** Changed in: neutron Assignee: (unassigned) => Kyle Mestery (mestery) ** Changed in: neutron Milestone: None => juno-1 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1324659 Title: dump_flows_for_table() needs to handle run_ofctl failing Status in OpenStack Neutron (virtual network service): In Progress Bug description: The function dump_flows_for_table() calls run_ofctl() to run the dump-flows command. In the case of this occurring during an OVS restart, run_ofctl() will fail, and an attempt to run splitlines() is made on a None object. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1324659/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
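The guard that bug 1324659 asks for can be sketched as follows (the function signature here is assumed for illustration; the real neutron agent code differs):

```python
def dump_flows_for_table(table, run_ofctl):
    """Return the flow dump for a table, or None if ovs-ofctl failed."""
    retval = run_ofctl("dump-flows", ["table=%s" % table])
    if retval is None:
        # run_ofctl can fail (e.g. while OVS is restarting); without
        # this check, retval.splitlines() raises AttributeError.
        return None
    # Keep only real flow lines, dropping the NXST header.
    return "\n".join(line for line in retval.splitlines()
                     if "NXST" not in line)

# Simulate ovs-ofctl failing during an OVS restart: no crash, just None.
print(dump_flows_for_table(23, lambda *args: None))
```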
[Yahoo-eng-team] [Bug 1324661] [NEW] Navigatiton elements shouldn't include the word Panel
Public bug reported: In the Admin dashboard the panel groups include the word Panel. (System Panel and Identity Panel). In the Project dashboard the parallel elements don't include Panel in their label. Panel should be removed from the Admin dashboard navigation for consistency. ** Affects: horizon Importance: Undecided Assignee: Doug Fish (drfish) Status: New ** Changed in: horizon Assignee: (unassigned) = Doug Fish (drfish) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1324661 Title: Navigatiton elements shouldn't include the word Panel Status in OpenStack Dashboard (Horizon): New Bug description: In the Admin dashboard the panel groups include the word Panel. (System Panel and Identity Panel). In the Project dashboard the parallel elements don't include Panel in their label. Panel should be removed from the Admin dashboard navigation for consistency. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1324661/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324662] [NEW] Deleting currently scoped project breaks session
Public bug reported: Horizon gets very unhappy if you delete the currently scoped project. Here's how to reproduce: 1. Create a new project 2. Add current user to this project as an admin user 3. Change to the new project context 4. Delete the project from Admin/Identity/Projects Horizon will return to the Projects view with an error message, but many views are broken due to the invalidation of the current token. Behavior will range from ISE to error messages. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1324662 Title: Deleting currently scoped project breaks session Status in OpenStack Dashboard (Horizon): New Bug description: Horizon gets very unhappy if you delete the currently scoped project. Here's how to reproduce: 1. Create a new project 2. Add current user to this project as an admin user 3. Change to the new project context 4. Delete the project from Admin/Identity/Projects Horizon will return to the Projects view with an error message, but many views are broken due to the invalidation of the current token. Behavior will range from ISE to error messages. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1324662/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1269418] Re: [OSSA 2014-017] nova rescue doesn't put VM into RESCUE status on vmware (CVE-2014-2573)
** Summary changed: - nova rescue doesn't put VM into RESCUE status on vmware (CVE-2014-2573) + [OSSA 2014-017] nova rescue doesn't put VM into RESCUE status on vmware (CVE-2014-2573) ** Changed in: ossa Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1269418 Title: [OSSA 2014-017] nova rescue doesn't put VM into RESCUE status on vmware (CVE-2014-2573) Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) havana series: Fix Committed Status in OpenStack Compute (nova) icehouse series: Fix Committed Status in The OpenStack VMwareAPI subTeam: In Progress Status in OpenStack Security Advisories: Fix Released Bug description: nova rescue of a VM on vmWare will create an additional VM ($ORIGINAL_ID-rescue), but after that, the original VM has status ACTIVE. This leads to [root@jhenner-node ~(keystone_admin)]# nova unrescue foo ERROR: Cannot 'unrescue' while instance is in vm_state stopped (HTTP 409) (Request-ID: req-792cabb2-2102-47c5-9b15-96c74a9a4819) The original can be deleted, which then causes the -rescue VM to leak. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1269418/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324699] [NEW] hosts aggregate panel provides no way to modify metadata
Public bug reported: The Hosts Aggregate panel allows you to create a new host aggregate, and edit it, but provides no way to modify the metadata that would be used to match flavors to the aggregate. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1324699 Title: hosts aggregate panel provides no way to modify metadata Status in OpenStack Dashboard (Horizon): New Bug description: The Hosts Aggregate panel allows you to create a new host aggregate, and edit it, but provides no way to modify the metadata that would be used to match flavors to the aggregate. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1324699/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1306727] Re: versions controller requests with a body log ERRORs
** Also affects: nova/icehouse Importance: Undecided Status: New ** Changed in: nova/icehouse Importance: Undecided = Medium ** Changed in: nova/icehouse Status: New = In Progress ** Changed in: nova/icehouse Assignee: (unassigned) = Mathieu Rohon (mathieu-rohon) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1306727 Title: versions controller requests with a body log ERRORs Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) icehouse series: In Progress Bug description: Using Nova trunk (Juno). I'm seeing the following nova-api.log errors when unauthenticated /versions controller POST requests are made with a request body: - Apr 11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 2014-04-11 07:04:06.235 27044 ERROR nova.api.openstack.wsgi [-] Exception handling resource: index() got an unexpected keyword argument 'body' Apr 11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 2014-04-11 07:04:06.235 27044 TRACE nova.api.openstack.wsgi Traceback (most recent call last): Apr 11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 2014-04-11 07:04:06.235 27044 TRACE nova.api.openstack.wsgi File /opt/stack/venvs/nova/lib/python2.7/site-packages/nova/api/openstack/wsgi.py, line 983, in _process_stack Apr 11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 2014-04-11 07:04:06.235 27044 TRACE nova.api.openstack.wsgi action_result = self.dispatch(meth, request, action_args) Apr 11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 2014-04-11 07:04:06.235 27044 TRACE nova.api.openstack.wsgi File /opt/stack/venvs/nova/lib/python2.7/site-packages/nova/api/openstack/wsgi.py, line 1070, in dispatch Apr 11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 2014-04-11 07:04:06.235 27044 TRACE nova.api.openstack.wsgi return method(req=request, **action_args) Apr 
11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 2014-04-11 07:04:06.235 27044 TRACE nova.api.openstack.wsgi TypeError: index() got an unexpected keyword argument 'body' - Both the index() and multi() actions in the versions controller are susceptible to this behavior. Ideally we wouldn't be logging stack traces when this happens. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1306727/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
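The TypeError in bug 1306727 above is easy to reproduce in isolation: the WSGI dispatcher forwards a 'body' keyword to an index() that does not declare it. A toy illustration with hypothetical names, not nova's actual code:

```python
class VersionsController:
    # Hypothetical stand-in for nova's versions controller: index()
    # does not accept a 'body' keyword.
    def index(self, req):
        return {"versions": []}

controller = VersionsController()
try:
    # The dispatcher calls method(req=request, **action_args), and
    # action_args includes 'body' whenever the request carried one.
    controller.index(req=object(), body='{"x": 1}')
except TypeError as exc:
    # e.g. "index() got an unexpected keyword argument 'body'"
    print(exc)
```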
[Yahoo-eng-team] [Bug 1297962] Re: [sru] Nova-compute doesnt start
** Also affects: nova/icehouse Importance: Undecided Status: New ** Changed in: nova/icehouse Status: New = In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1297962 Title: [sru] Nova-compute doesnt start Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) icehouse series: In Progress Status in “nova” package in Ubuntu: New Status in “nova” source package in Trusty: Fix Committed Bug description: 2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup File /usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 77, in tworker 2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup rv = meth(*args,**kwargs) 2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup File /usr/lib/python2.7/dist-packages/libvirt.py, line 3127, in baselineCPU 2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup if ret is None: raise libvirtError ('virConnectBaselineCPU() failed', conn=self) 2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup libvirtError: this function is not supported by the connection driver: virConnectBaselineCPU To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1297962/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1271426] Re: protected property change not rejected if a subsequent rule match accepts them
Reopening this OSSN bug. The workaround in the OSSN has been reported to not work. Details from the reporter to come shortly. ** Changed in: ossn Status: Fix Released = In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1271426 Title: protected property change not rejected if a subsequent rule match accepts them Status in OpenStack Image Registry and Delivery Service (Glance): Fix Released Status in Glance havana series: Fix Released Status in OpenStack Security Notes: In Progress Bug description: See initial report here: http://lists.openstack.org/pipermail /openstack-dev/2014-January/024861.html What is happening is that if there is a specific rule that would reject an action and a less specific rule that comes after that would accept the action, then the action is being accepted. It should be rejected. This is because we iterate through the property protection rules rather than just finding the first match. This bug does not occur when policies are used to determine property protections, only when roles are used directly. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1271426/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324348] Re: Server_group shouldn't have same policies in it
** Also affects: tempest Importance: Undecided Status: New ** Changed in: tempest Assignee: (unassigned) = wingwj (wingwj) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1324348 Title: Server_group shouldn't have same policies in it Status in OpenStack Compute (Nova): In Progress Status in Tempest: In Progress Bug description: A server group can currently be created with the same policy listed several times. That doesn't make sense; the duplicate policies need to be ignored.
stack@devaio:~$ nova server-group-create --policy affinity --policy affinity wjsg1
+--------------------------------------+-------+----------------------------+---------+----------+
| Id                                   | Name  | Policies                   | Members | Metadata |
+--------------------------------------+-------+----------------------------+---------+----------+
| 4f6679b7-f6b1-4d1e-92cd-1a54e1fe0f3d | wjsg1 | [u'affinity', u'affinity'] | []      | {}       |
+--------------------------------------+-------+----------------------------+---------+----------+
stack@devaio:~$
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1324348/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
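The fix the report asks for amounts to de-duplicating the policy list while preserving order; a minimal sketch of that validation step (the function name is an assumption, not Nova's API):

```python
def normalize_policies(policies):
    """Drop duplicate policy names, keeping first-seen order, as the
    bug suggests the server-group API should do."""
    seen = set()
    unique = []
    for policy in policies:
        if policy not in seen:
            seen.add(policy)
            unique.append(policy)
    return unique
```

With this, `--policy affinity --policy affinity` would be stored as just `[u'affinity']`. (Rejecting the request outright, instead of silently de-duplicating, would be an equally defensible design.)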
[Yahoo-eng-team] [Bug 1304481] Re: Nova operates network failed by Maximum attempts reached error when Cloud under heavy workload
** Changed in: nova Status: Invalid = New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1304481 Title: Nova operates network failed by Maximum attempts reached error when Cloud under heavy workload Status in OpenStack Neutron (virtual network service): Won't Fix Status in OpenStack Compute (Nova): New Bug description: In my deployment, when I provision or terminate numerous VMs (e.g. 100+), I get "Maximum attempts reached" failures inside Nova, which happen when Nova operates on the network/Neutron via neutronclient. See the following log snippet for details. I propose adding a neutron_http_retries option to Nova and making each neutron client object honor it (in nova/network/neutronv2/api.py). IMO we could base the fix on this change: https://review.openstack.org/#/c/71464/ FYI, in my deployment I verified that when I set the HTTP retries of neutronclient to 5 (most of the time 3 is enough), the problem is gone. 
2014-04-08 02:16:09.844 3777 ERROR oslo.messaging._executors.base [-] Exception during message handling
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base Traceback (most recent call last):
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base   File "/usr/lib/python2.6/site-packages/oslo/messaging/_executors/base.py", line 36, in _dispatch
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base     incoming.reply(self.callback(incoming.ctxt, incoming.message))
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in __call__
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base     return self._dispatch(endpoint, method, ctxt, args)
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 92, in _dispatch
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base     result = getattr(endpoint, method)(ctxt, **new_args)
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base     payload)
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base     six.reraise(self.type_, self.value, self.tb)
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base     return f(self, context, *args, **kw)
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 249, in decorated_function
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base     pass
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base     six.reraise(self.type_, self.value, self.tb)
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 235, in decorated_function
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base     return function(self, context, *args, **kwargs)
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 300, in decorated_function
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base     function(self, context, *args, **kwargs)
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 277, in decorated_function
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base     e, sys.exc_info())
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base     six.reraise(self.type_, self.value, self.tb)
2014-04-08 02:16:09.844 3777 TRACE oslo.messaging._executors.base   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 264, in decorated_function
2014-04-08 02:16:09.844
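The proposed neutron_http_retries option boils down to a bounded retry around transient neutron calls. A minimal sketch of such a wrapper — the function itself is an assumption for illustration; python-neutronclient accepts a retry count at client-construction time rather than using a wrapper like this:

```python
import time


def call_with_retries(func, *args, retries=3, delay=0.0, **kwargs):
    """Invoke func(*args, **kwargs), retrying up to `retries` extra times
    on any exception. Re-raises the last failure once attempts run out."""
    last_exc = None
    for _attempt in range(1 + retries):
        try:
            return func(*args, **kwargs)
        except Exception as exc:  # in real code, catch the client's error type
            last_exc = exc
            if delay:
                time.sleep(delay)
    raise last_exc
```

This mirrors the reporter's observation that a retry count of 3-5 makes the "Maximum attempts reached" failures disappear under heavy provisioning load.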
[Yahoo-eng-team] [Bug 1324382] Re: Cannot edit a firewall rule with protocol 'ANY'
** Also affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1324382 Title: Cannot edit a firewall rule with protocol 'ANY' Status in OpenStack Dashboard (Horizon): In Progress Status in OpenStack Neutron (virtual network service): New Bug description: This issue can be reproduced in the Havana and Icehouse releases. Create a Neutron firewall rule with the protocol field set to 'ANY', then try to edit the rule by clicking the 'Edit Rule' button. An error message pops up reading 'Error: An error occurred. Please try again later.' Editing a firewall rule with protocol ANY should be allowed. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1324382/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324755] [NEW] disk consumption report incorrect in host-describe and simple-tenant-usage
Public bug reported: simple-tenant-usage and host use resource['disk_gb'] += (instance['root_gb'] + instance['ephemeral_gb']) to report disk size; however, ephemeral_gb is the maximum that can be allocated to an instance, not its current usage. ** Affects: nova Importance: Undecided Assignee: jichenjc (jichenjc) Status: New ** Tags: api ** Changed in: nova Assignee: (unassigned) = jichenjc (jichenjc) ** Tags added: api -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1324755 Title: disk consumption report incorrect in host-describe and simple-tenant-usage Status in OpenStack Compute (Nova): New Bug description: simple-tenant-usage and host use resource['disk_gb'] += (instance['root_gb'] + instance['ephemeral_gb']) to report disk size; however, ephemeral_gb is the maximum that can be allocated to an instance, not its current usage. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1324755/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
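The calculation the report quotes can be reproduced in a few lines. It sums the flavor's *allocated* sizes, which is why ephemeral space shows as fully consumed even when none of it is written — the instance records below are illustrative, not real data:

```python
# Two hypothetical instances: one with a 20 GB ephemeral allowance
# (possibly empty on disk), one with none.
instances = [
    {'root_gb': 10, 'ephemeral_gb': 20},
    {'root_gb': 10, 'ephemeral_gb': 0},
]

resource = {'disk_gb': 0}
for instance in instances:
    # The summation host-describe / simple-tenant-usage perform today:
    # it reports the allocation ceiling, not actual consumption.
    resource['disk_gb'] += instance['root_gb'] + instance['ephemeral_gb']
```

Here `resource['disk_gb']` comes out as 40 GB even if the first instance has written nothing to its ephemeral disk — the over-reporting the bug describes.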
[Yahoo-eng-team] [Bug 1324382] Re: Cannot edit a firewall rule with protocol 'ANY'
Havana and Icehouse releases need to be fixed too. Should I submit separate reviews? ** Changed in: neutron Assignee: (unassigned) = Gary Duan (gduan) ** No longer affects: neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1324382 Title: Cannot edit a firewall rule with protocol 'ANY' Status in OpenStack Dashboard (Horizon): In Progress Bug description: This issue can be reproduced in the Havana and Icehouse releases. Create a Neutron firewall rule with the protocol field set to 'ANY', then try to edit the rule by clicking the 'Edit Rule' button. An error message pops up reading 'Error: An error occurred. Please try again later.' Editing a firewall rule with protocol ANY should be allowed. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1324382/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324751] [NEW] Can't delete baremetal instance when the baremetal vm is deleted in advance
You have been subscribed to a public bug: When trying to destroy the undercloud, I used 'virsh undefine baremetal_0' to destroy the domain first, and then on the seed ran 'heat stack-delete undercloud' to delete the stack, but it didn't succeed; 'nova delete undercloud' can't delete the instance either. It stays in this state:
root@ubuntu:~# nova list
+--------------------------------------+------------------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                               | Status | Task State | Power State | Networks            |
+--------------------------------------+------------------------------------+--------+------------+-------------+---------------------+
| 24a9e5a4-4147-483f-ba36-05bc7478f92d | undercloud-undercloud-j5g5kbp5xpio | ERROR  | deleting   | Running     | ctlplane=192.0.2.3  |
+--------------------------------------+------------------------------------+--------+------------+-------------+---------------------+
root@ubuntu:~# heat stack-list
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| 7ca4c5fa-b26d-4169-aff8-912ba545fc81 | undercloud | DELETE_IN_PROGRESS | 2014-05-26T03:56:26Z |
+--------------------------------------+------------+--------------------+----------------------+
** Affects: nova Importance: Undecided Status: New -- Can't delete baremetal instance when the baremetal vm is deleted in advance https://bugs.launchpad.net/bugs/1324751 You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324751] Re: Can't delete baremetal instance when the baremetal vm is deleted in advance
** Project changed: tripleo = nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1324751 Title: Can't delete baremetal instance when the baremetal vm is deleted in advance Status in OpenStack Compute (Nova): New Bug description: When trying to destroy the undercloud, I used 'virsh undefine baremetal_0' to destroy the domain first, and then on the seed ran 'heat stack-delete undercloud' to delete the stack, but it didn't succeed; 'nova delete undercloud' can't delete the instance either. It stays in this state:
root@ubuntu:~# nova list
+--------------------------------------+------------------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                               | Status | Task State | Power State | Networks            |
+--------------------------------------+------------------------------------+--------+------------+-------------+---------------------+
| 24a9e5a4-4147-483f-ba36-05bc7478f92d | undercloud-undercloud-j5g5kbp5xpio | ERROR  | deleting   | Running     | ctlplane=192.0.2.3  |
+--------------------------------------+------------------------------------+--------+------------+-------------+---------------------+
root@ubuntu:~# heat stack-list
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| 7ca4c5fa-b26d-4169-aff8-912ba545fc81 | undercloud | DELETE_IN_PROGRESS | 2014-05-26T03:56:26Z |
+--------------------------------------+------------+--------------------+----------------------+
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1324751/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1324764] [NEW] Add more comments to VPN plugin and Cisco svc driver
Public bug reported: Clarify the plugin and service driver flow with more comments. ** Affects: neutron Importance: Undecided Assignee: Ly Loi (lyloi) Status: New ** Tags: cisco comments plugin vpn ** Changed in: neutron Assignee: (unassigned) = Ly Loi (lyloi) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1324764 Title: Add more comments to VPN plugin and Cisco svc driver Status in OpenStack Neutron (virtual network service): New Bug description: Clarify the plugin and service driver flow with more comments. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1324764/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1316475] Re: [SRU] CloudSigma DS for causes hangs when serial console present
** Also affects: diskimage-builder Importance: Undecided Status: New ** Changed in: diskimage-builder Status: New = In Progress ** Changed in: diskimage-builder Assignee: (unassigned) = Adam Gandelman (gandelman-a) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1316475 Title: [SRU] CloudSigma DS for causes hangs when serial console present Status in Init scripts for use on cloud images: Confirmed Status in Openstack disk image builder: In Progress Status in tripleo - openstack on openstack: Triaged Status in “cloud-init” package in Ubuntu: New Bug description: SRU Justification
Impact: The CloudSigma datasource reads and writes to /dev/ttyS1 if present; the datasource does not have a timeout. On non-CloudSigma clouds or systems with a /dev/ttyS1, cloud-init will block pending a response, which may never come. Further, it is dangerous for a default datasource to write blindly on a serial console, as other control-plane software and clouds use /dev/ttyS1 for communication.
Fix: The patch disables CloudSigma by default.
Verification:
1. Purge cloud-init.
2. Install from -proposed.
3. Look in /etc/cloud/cloud.cfg.d/90_dpkg.cfg and confirm CloudSigma is not in the list of datasources.
Regression: The risk is low, except on CloudSigma targets which try to use new images generated with the new cloud-init version.
[Original Report]
DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 3 (xid=0x7e777c23)
DHCPREQUEST of 10.22.157.186 on eth2 to 255.255.255.255 port 67 (xid=0x7e777c23)
DHCPOFFER of 10.22.157.186 from 10.22.157.149
DHCPACK of 10.22.157.186 from 10.22.157.149
bound to 10.22.157.186 -- renewal in 39589 seconds. 
 * Starting Mount network filesystems [ OK ]
 * Starting configure network device [ OK ]
 * Stopping Mount network filesystems [ OK ]
 * Stopping DHCP any connected, but unconfigured network interfaces [ OK ]
 * Starting configure network device [ OK ]
 * Stopping DHCP any connected, but unconfigured network interfaces [ OK ]
 * Starting configure network device [ OK ]
And it stops there. I see this on about 10% of deploys. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1316475/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
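The root cause in the SRU text is an unbounded read on /dev/ttyS1. The missing guard is a bounded read, which can be sketched with select() — the function name is illustrative, and a pipe stands in for the serial device (the actual cloud-init fix was to disable the datasource by default, not this):

```python
import os
import select


def read_with_timeout(fd, timeout=2.0, max_bytes=1024):
    """Return bytes available on fd within `timeout` seconds, else None.

    A non-CloudSigma host simply never answers on ttyS1, so a None
    result lets the datasource bail out instead of hanging boot.
    """
    ready, _, _ = select.select([fd], [], [], timeout)
    if not ready:
        return None
    return os.read(fd, max_bytes)
```

Probing a silent pipe with a short timeout returns None promptly, while a responsive one yields its data — the behavior the hung 10% of deploys was missing.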