Re: [Openstack] Installing Dashboard standalone
Hi Guillermo, Would not modifying local_settings.py and changing OPENSTACK_HOST to reference a node other than 127.0.0.1 resolve the issue? Cheers David On Thu, Dec 20, 2012 at 1:49 AM, Guillermo Alvarado guillermoalvarad...@gmail.com wrote: BTW, I am trying to use my own version of openstack-dashboard/horizon because I made some modifications to the GUI. My version is based on the Essex release. Can anybody please help me with this? 2012/12/19 Guillermo Alvarado guillermoalvarad...@gmail.com I installed the openstack-dashboard but I have this error in the Apache logs: ImproperlyConfigured: Error importing middleware horizon.middleware: cannot import name users 2012/12/19 Guillermo Alvarado guillermoalvarad...@gmail.com Hi everyone, I want to install openstack-dashboard/horizon standalone; I mean, I want to have a node for compute, a node for the controller and a node for the dashboard. How can I achieve this? Thanks in advance, Best Regards. ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
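To make David's suggestion concrete, here is a minimal sketch of the relevant lines in horizon's local_settings.py. The controller address 192.168.0.10 is an assumed placeholder, and the Keystone port/path match the Essex-era default; substitute your own values:

```python
# local_settings.py (horizon) -- illustrative values, substitute your own
OPENSTACK_HOST = "192.168.0.10"  # the controller node, not 127.0.0.1
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
```

With OPENSTACK_HOST pointing at the controller, the dashboard node only needs network access to the Keystone and service API endpoints.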
Re: [Openstack] Installing Dashboard standalone
On 12/20/2012 02:49 AM, Guillermo Alvarado wrote: BTW, I am trying to use my own version of openstack-dashboard/horizon because I made some modifications to the GUI. My version is based on the Essex release. Can anybody please help me with this? 2012/12/19 Guillermo Alvarado guillermoalvarad...@gmail.com I installed the openstack-dashboard but I have this error in the Apache logs: ImproperlyConfigured: Error importing middleware horizon.middleware: cannot import name users 1. You've made a modification. 2. You see an error. Would you mind showing the modification you made? Otherwise this can't get very far. Dashboard reads the service endpoints from Keystone. If Keystone is configured correctly, you shouldn't see issues. Matthias 2012/12/19 Guillermo Alvarado guillermoalvarad...@gmail.com Hi everyone, I want to install openstack-dashboard/horizon standalone; I mean, I want to have a node for compute, a node for the controller and a node for the dashboard. How can I achieve this? Thanks in advance, Best Regards.
Re: [Openstack] Debugging Quantum
Hi: This post can be useful. Check out: https://www.ibm.com/developerworks/mydeveloperworks/blogs/e93514d3-c4f0-4aa0-8844-497f370090f5/entry/openstack_nova_api?lang=en Regards, JuanFra. 2012/12/20 Trinath Somanchi trinath.soman...@gmail.com Hi Stackers- As a student starting to understand the code, I have too many dots to connect in understanding the REQUEST processing flow from the client to the Quantum API server. I have the following doubts at the source code level. I'm a little bit aware of the WSGI, WebOb and Paste frameworks. [1] How can I trace how WSGI captures the request the client sent? [2] How can I see what the CLIENT is sending via the RESTful calls to the API server, since we don't have any logs for the client? [3] How do WSGI, WebOb and Paste help in framing and (de)serializing the request/response? Can any code enthusiast help me understand these doubts at the code level? I feel a few comments in the source code would be very helpful... thanks in advance... kindly please help me understand these. -- Regards, -- Trinath Somanchi, +91 9866 235 130
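On [1] and [2]: each element of the API's paste pipeline is just a WSGI callable, so one way to see what the client sends is to wrap the pipeline in a tiny logging middleware. The sketch below is stdlib-only and is not actual Quantum code; in a real deployment you would register a filter like this in api-paste.ini:

```python
from wsgiref.util import setup_testing_defaults

class LoggingMiddleware:
    """Print each request's method and path before passing it downstream."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # environ is the plain dict every WSGI server hands to the app;
        # webob.Request(environ) would expose the same data as an object.
        print("%s %s" % (environ["REQUEST_METHOD"], environ["PATH_INFO"]))
        return self.app(environ, start_response)

def demo_app(environ, start_response):
    # Stand-in for the real API application at the end of the pipeline.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

# Drive the wrapped app by hand, the way a WSGI server would.
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/v2.0/networks"
captured = {}

def start_response(status, headers):
    captured["status"] = status

body = b"".join(LoggingMiddleware(demo_app)(environ, start_response))
```

The same trick answers [3] in miniature: WSGI defines the environ/start_response contract, while WebOb and Paste are conveniences layered on top of it (request/response objects and pipeline assembly, respectively).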
Re: [Openstack] Windows 2012 Server
Brilliant. I'll try your step-by-step with 2012 and, if it works, I'll drop you a line. Kind regards -- joe. On 18 December 2012 20:49, Lloyd Dewolf lloydost...@gmail.com wrote: On Tue, Dec 18, 2012 at 10:41 AM, Joe Warren-Meeks joe.warren.me...@gmail.com wrote: Hi guys, I've created a windows 2012 image and uploaded it ok. Pretty much following this example: http://docs.openstack.org/trunk/openstack-compute/admin/content/creating-a-windows-image.html When I go to launch an instance, it works ok and nova list and nova show look healthy. If I VNC to it as soon as it starts to boot, I get to see the BIOS and then the new Windows logo, but then the screen goes black and nothing seems to happen. Sending ctrl-alt-del elicits no response and it doesn't look like the network has DHCP'ed either. Has anyone else seen this and if so, any idea what I can do to fix it? We haven't had a customer ask for help with Windows 2012, but have had good success with Windows 2008 created using KVM. https://airframeaid.pistoncloud.com/entries/21838261-creating-windows-images -- @lloyddewolf http://www.pistoncloud.com/
Re: [Openstack] Windows 2012 Server
I don't think it is that, as I can see it booting and the splash screen, but then the screen is just blank. Thanks for the pointer -- joe. On 18 December 2012 19:39, Vishvananda Ishaya vishvana...@gmail.com wrote: A number of things could be going wrong, but I did notice this bug recently: https://bugs.launchpad.net/nova/+bug/1086352 I think this only affects installs on old versions of xp. Perhaps there is some incompatibility between the virtio drivers and windows server 2012? Vish On Dec 18, 2012, at 10:41 AM, Joe Warren-Meeks joe.warren.me...@gmail.com wrote: Hi guys, I've created a windows 2012 image and uploaded it ok. Pretty much following this example: http://docs.openstack.org/trunk/openstack-compute/admin/content/creating-a-windows-image.html When I go to launch an instance, it works ok and nova list and nova show look healthy. If I VNC to it as soon as it starts to boot, I get to see the BIOS and then the new Windows logo, but then the screen goes black and nothing seems to happen. Sending ctrl-alt-del elicits no response and it doesn't look like the network has DHCP'ed either. Has anyone else seen this and if so, any idea what I can do to fix it? Kind regards -- joe.
Re: [Openstack] Windows 2012 Server
Hey Lloyd, I can confirm that your step-by-step guide works fine! Thank you for the pointer. Now, anyone know an easy way to make dnsmasq send out a different default gateway? I've tried the following in /etc/dnsmasq-nova.conf and included it in nova.conf, but it doesn't seem to make a difference. The networks are labelled correctly. dhcp-option=tag:'production',option:router,10.0.33.1 dhcp-option=tag:'dmz',option:router,10.0.21.1 On 20 December 2012 10:59, Joe Warren-Meeks joe.warren.me...@gmail.com wrote: Brilliant. I'll try your step-by-step with 2012 and, if it works, I'll drop you a line. Kind regards -- joe. On 18 December 2012 20:49, Lloyd Dewolf lloydost...@gmail.com wrote: On Tue, Dec 18, 2012 at 10:41 AM, Joe Warren-Meeks joe.warren.me...@gmail.com wrote: Hi guys, I've created a windows 2012 image and uploaded it ok. Pretty much following this example: http://docs.openstack.org/trunk/openstack-compute/admin/content/creating-a-windows-image.html When I go to launch an instance, it works ok and nova list and nova show look healthy. If I VNC to it as soon as it starts to boot, I get to see the BIOS and then the new Windows logo, but then the screen goes black and nothing seems to happen. Sending ctrl-alt-del elicits no response and it doesn't look like the network has DHCP'ed either. Has anyone else seen this and if so, any idea what I can do to fix it? We haven't had a customer ask for help with Windows 2012, but have had good success with Windows 2008 created using KVM. https://airframeaid.pistoncloud.com/entries/21838261-creating-windows-images -- @lloyddewolf http://www.pistoncloud.com/
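One thing worth checking (an assumption on my part, not verified against the options nova generates): dnsmasq matches tag names literally, so the quotes around the tag names may prevent them from matching the network labels. Without quotes the lines would read:

```
dhcp-option=tag:production,option:router,10.0.33.1
dhcp-option=tag:dmz,option:router,10.0.21.1
```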
Re: [Openstack] Instances can't reach metadata server in network HA mode
Vish, if you could help: I realized that all internal routes of my VMs point to the cloud controller. If I change the default route to the node address, everything works perfectly. How can I make the node IP the default route? Thanks for all help! On Wed, Dec 19, 2012 at 2:34 PM, Gui Maluf guimal...@gmail.com wrote: Yes, it's in multi_host=true. In nova.conf and in the database multi_host is set to True. 10.5.5.32 isn't the gateway; instead it is the private network. LoL, out of nowhere my instances can now reach metadata. But when I log in and ping www.google.com, the VM can resolve names but there is no answer back; all packets are lost. And I've attached a floating IP to two VMs on different nodes, and they don't even ping back on the same node. This is so confusing! I'll do some tcpdump to check what is happening! On Wed, Dec 19, 2012 at 2:05 PM, Vishvananda Ishaya vishvana...@gmail.com wrote: Are you sure your network has multi_host = True? It seems like it isn't, since the gateway listed by the guest is 10.5.5.32. In multi_host mode each node should be getting an ip from the fixed range and the guest should be using that as the gateway. Vish On Wed, Dec 19, 2012 at 1:13 PM, Vishvananda Ishaya vishvana...@gmail.com wrote: There should be a redirect in iptables from 169.254.169.254:80 to $my_ip:8775 (where nova-api-metadata is running) So: a) can you curl $my_ip:8775 (should 404) CloudController and nodes answer in the same way: 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 2009-04-04 b) if you do sudo iptables -t nat -L -n -v do you see the forward rule? Is it getting hit properly? There is the correct rule, but it never gets hit: controller 0 0 DNAT tcp -- * * 0.0.0.0/0 169.254.169.254 tcp dpt:80 to:200.131.6.250:8775 nodes 0 0 DNAT tcp -- * * 0.0.0.0/0 169.254.169.254 tcp dpt:80 to:200.131.6.248:8775 0 0 DNAT tcp -- * * 0.0.0.0/0 169.254.169.254 tcp dpt:80 to:200.131.6.249:8775 Thanks for appearing Vish! 
I was hoping for your help! Vish On Dec 19, 2012, at 6:39 AM, Gui Maluf guimal...@gmail.com wrote: My setup is nova-network HA (http://docs.openstack.org/trunk/openstack-compute/admin/content/existing-ha-networking-options.html), so each of my nodes runs nova-{api-metadata,network,compute,volume}; my controller runs all of this plus the rest of the things it should run. Each of my nodes is the gateway for its own instances. They all have the same network config and ip_forwarding. The main issue is that I can't telnet to the nodes on port 80, which should redirect to the metadata server. The metadata IP is set correctly on eth0, but port 80 is not open. My doubt is: should I create an endpoint for each node's api-metadata service? Should I install Apache on the nodes? I really don't know what to do anymore. This only happens on the nodes; on the cloud controller all instances run smoothly: they get the floating IP, metadata service, etc. Thanks in advance! I will put as much info as I can here. root@oxala:~# nova-manage service list Binary Host Zone Status State Updated_At nova-compute xango nova enabled :-) 2012-12-18 20:34:21 nova-network xango nova enabled :-) 2012-12-18 20:34:20 nova-compute oxossi nova enabled :-) 2012-12-18 20:34:15 nova-network oxossi nova enabled :-) 2012-12-18 20:34:20 nova-volume oxossi nova enabled :-) 2012-12-18 20:34:18 nova-volume xango nova enabled :-) 2012-12-18 20:34:19 nova-consoleauth oxala nova enabled :-) 2012-12-18 20:34:24 nova-scheduler oxala nova enabled :-) 2012-12-18 20:34:25 nova-cert oxala nova enabled :-) 2012-12-18 20:34:25 nova-volume oxala nova enabled :-) 2012-12-18 20:34:25 nova-network oxala nova enabled :-) 2012-12-18 20:34:17 nova-compute oxala nova enabled :-) 2012-12-18 20:34:10 *controller nova.conf* #NETWORK --allow_same_net_traffic=true --network_manager=nova.network.manager.FlatDHCPManager --firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver --public_interface=eth0 --flat_interface=eth1 --flat_network_bridge=br100 --fixed_range=10.5.5.32/27 
--network_size=32 --flat_network_dhcp_start=10.5.5.33 --my_ip=200.131.6.250 --multi_host=True #--enabled_apis=ec2,osapi_compute,osapi_volume,metadata --dhcpbridge_flagfile=/etc/nova/nova.conf --dhcpbridge=/usr/bin/nova-dhcpbridge
[Openstack] two or more NFS / gluster mounts
Hi, If I have /nfs1mount and /nfs2mount or /nfs1mount and /glustermount can I control where openstack puts the disk files? Thanks, Andrew
Re: [Openstack] two or more NFS / gluster mounts
Hi Andrew, Is this for glance or nova? For nova, change: state_path = /var/lib/nova lock_path = /var/lib/nova/tmp in your nova.conf. For glance I'm unsure; it may be easier to just mount gluster right onto /var/lib/glance (similarly you could do the same for /var/lib/nova). And just my £0.02: I've had no end of problems getting gluster to play nice on small POC clusters (3-5 nodes; I've tried NFS, tried GlusterFS, tried 2-replica N-distribute setups, with many a random glusterfs death), and as such I have opted for using Ceph. From the brief reading I've been doing, Ceph's RADOS can also be used with Cinder. Cheers David On Thu, Dec 20, 2012 at 1:53 PM, Andrew Holway a.hol...@syseleven.de wrote: Hi, If I have /nfs1mount and /nfs2mount or /nfs1mount and /glustermount can I control where openstack puts the disk files? Thanks, Andrew
Re: [Openstack] two or more NFS / gluster mounts
Hi David, It is for nova. I'm not sure I understand. I want to be able to say to openstack: openstack, please install this instance (A) on this mountpoint and please install this instance (B) on this other mountpoint. I am planning on having two NFS / Gluster based stores, a fast one and a slow one. I probably will not want to say please every time :) Thanks, Andrew On Dec 20, 2012, at 3:42 PM, David Busby wrote: Hi Andrew, Is this for glance or nova? For nova, change: state_path = /var/lib/nova lock_path = /var/lib/nova/tmp in your nova.conf. For glance I'm unsure; it may be easier to just mount gluster right onto /var/lib/glance (similarly you could do the same for /var/lib/nova). And just my £0.02: I've had no end of problems getting gluster to play nice on small POC clusters (3-5 nodes; I've tried NFS, tried GlusterFS, tried 2-replica N-distribute setups, with many a random glusterfs death), and as such I have opted for using Ceph. From the brief reading I've been doing, Ceph's RADOS can also be used with Cinder. Cheers David On Thu, Dec 20, 2012 at 1:53 PM, Andrew Holway a.hol...@syseleven.de wrote: Hi, If I have /nfs1mount and /nfs2mount or /nfs1mount and /glustermount can I control where openstack puts the disk files? Thanks, Andrew
Re: [Openstack] Instances can't reach metadata server in network HA mode
Found out! I had a /etc/dnsmasq-nova.conf file defining the default route as my controller node! Now everything is working perfectly! :D On Thu, Dec 20, 2012 at 11:07 AM, Gui Maluf guimal...@gmail.com wrote: Vish, if you could help: I realized that all internal routes of my VMs point to the cloud controller. If I change the default route to the node address, everything works perfectly. How can I make the node IP the default route? Thanks for all help! On Wed, Dec 19, 2012 at 2:34 PM, Gui Maluf guimal...@gmail.com wrote: Yes, it's in multi_host=true. In nova.conf and in the database multi_host is set to True. 10.5.5.32 isn't the gateway; instead it is the private network. LoL, out of nowhere my instances can now reach metadata. But when I log in and ping www.google.com, the VM can resolve names but there is no answer back; all packets are lost. And I've attached a floating IP to two VMs on different nodes, and they don't even ping back on the same node. This is so confusing! I'll do some tcpdump to check what is happening! On Wed, Dec 19, 2012 at 2:05 PM, Vishvananda Ishaya vishvana...@gmail.com wrote: Are you sure your network has multi_host = True? It seems like it isn't, since the gateway listed by the guest is 10.5.5.32. In multi_host mode each node should be getting an ip from the fixed range and the guest should be using that as the gateway. Vish On Wed, Dec 19, 2012 at 1:13 PM, Vishvananda Ishaya vishvana...@gmail.com wrote: There should be a redirect in iptables from 169.254.169.254:80 to $my_ip:8775 (where nova-api-metadata is running) So: a) can you curl $my_ip:8775 (should 404) CloudController and nodes answer in the same way: 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 2009-04-04 b) if you do sudo iptables -t nat -L -n -v do you see the forward rule? Is it getting hit properly? 
There is the correct rule, but it never gets hit: controller 0 0 DNAT tcp -- * * 0.0.0.0/0 169.254.169.254 tcp dpt:80 to:200.131.6.250:8775 nodes 0 0 DNAT tcp -- * * 0.0.0.0/0 169.254.169.254 tcp dpt:80 to:200.131.6.248:8775 0 0 DNAT tcp -- * * 0.0.0.0/0 169.254.169.254 tcp dpt:80 to:200.131.6.249:8775 Thanks for appearing Vish! I was hoping for your help! Vish On Dec 19, 2012, at 6:39 AM, Gui Maluf guimal...@gmail.com wrote: My setup is nova-network HA (http://docs.openstack.org/trunk/openstack-compute/admin/content/existing-ha-networking-options.html), so each of my nodes runs nova-{api-metadata,network,compute,volume}; my controller runs all of this plus the rest of the things it should run. Each of my nodes is the gateway for its own instances. They all have the same network config and ip_forwarding. The main issue is that I can't telnet to the nodes on port 80, which should redirect to the metadata server. The metadata IP is set correctly on eth0, but port 80 is not open. My doubt is: should I create an endpoint for each node's api-metadata service? Should I install Apache on the nodes? I really don't know what to do anymore. This only happens on the nodes; on the cloud controller all instances run smoothly: they get the floating IP, metadata service, etc. Thanks in advance! I will put as much info as I can here. 
root@oxala:~# nova-manage service list Binary Host Zone Status State Updated_At nova-compute xango nova enabled :-) 2012-12-18 20:34:21 nova-network xango nova enabled :-) 2012-12-18 20:34:20 nova-compute oxossi nova enabled :-) 2012-12-18 20:34:15 nova-network oxossi nova enabled :-) 2012-12-18 20:34:20 nova-volume oxossi nova enabled :-) 2012-12-18 20:34:18 nova-volume xango nova enabled :-) 2012-12-18 20:34:19 nova-consoleauth oxala nova enabled :-) 2012-12-18 20:34:24 nova-scheduler oxala nova enabled :-) 2012-12-18 20:34:25 nova-cert oxala nova enabled :-) 2012-12-18 20:34:25 nova-volume oxala nova enabled :-) 2012-12-18 20:34:25 nova-network oxala nova enabled :-) 2012-12-18 20:34:17 nova-compute oxala nova enabled :-) 2012-12-18 20:34:10 *controller nova.conf* #NETWORK --allow_same_net_traffic=true --network_manager=nova.network.manager.FlatDHCPManager --firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver --public_interface=eth0 --flat_interface=eth1 --flat_network_bridge=br100 --fixed_range=10.5.5.32/27 --network_size=32
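For anyone hitting the same thing: the culprit was a line of this shape in /etc/dnsmasq-nova.conf (the address shown is the controller's my_ip from the config above, used here as an illustration). In multi_host mode you should not pin option:router at all; each node's dnsmasq then advertises its own node as the gateway, which is what the instances need:

```
# /etc/dnsmasq-nova.conf -- remove (or comment out) any line like:
# dhcp-option=option:router,200.131.6.250
```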
[Openstack] Vlanned networking setup
Hi, I am thinking about the following network setup:

+----------------+
| vlan101 (eth0) |
+----------------+
        |
   +--------+
   | br0101 |
   +--------+
    /   |   \
+----+ +----+ +----+
| vm | | vm | | vm |
+----+ +----+ +----+
    \   |   /
   +--------+
   | br1101 |
   +--------+
        |
+----------------+
| vlan101 (eth1) |
+----------------+

Basically, public IP addresses will go over eth1 and private stuff over eth0. This would mean that openstack would have to create two VLANs and two bridges. Is this possible? please create this vlanned network on eth0 (10.141) and create this other one (10.142) on eth1 Thanks, Andrew
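If this ends up on nova-network's VlanManager, the request might look roughly like the following. This is a hedged sketch: the flag names are from Essex-era nova-manage, and the labels, ranges and bridge names are illustrative, so check `nova-manage network create --help` on your version:

```
# one bridge per network; --bridge_interface picks the physical NIC
nova-manage network create --label=private --fixed_range_v4=10.141.0.0/24 \
    --vlan=101 --bridge=br0101 --bridge_interface=eth0
nova-manage network create --label=public --fixed_range_v4=10.142.0.0/24 \
    --vlan=101 --bridge=br1101 --bridge_interface=eth1
```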
Re: [Openstack] floating ip takes a long time to be accessible
Hi Nate, Thanks for your reply. But I am not running in multi-host mode now. The cluster uses one nova-network service on the controller node. Xin On 12/17/2012 9:31 PM, Nathanael Burton wrote: On Dec 17, 2012 2:05 PM, Xin Zhao xz...@bnl.gov wrote: Hello, I allocate 2 public IPs to instances; the first one becomes accessible almost immediately, but the second one always takes a long time to be pingable. It doesn't matter which specific IP is assigned first or second, it's always the second one that is slow to be reachable, although the corresponding iptables rules are all set on the controller node almost immediately. Has anyone seen similar behavior? I am using Essex (2012.1) on RHEL6. Thanks, Xin Xin, You may need to use send_arp_for_ha=true to send a gratuitous ARP if you're using multi-host networking. Nate
Re: [Openstack] Windows 2012 Server
Ignore that last question. I found this: https://review.openstack.org/#/c/10468/1/nova/network/linux_net.py Hacking that into my linux_net.py worked. Kind regards On 20 December 2012 12:49, Joe Warren-Meeks joe.warren.me...@gmail.com wrote: Hey Lloyd, I can confirm that your step-by-step guide works fine! Thank you for the pointer. Now, anyone know an easy way to make dnsmasq send out a different default gateway? I've tried the following in /etc/dnsmasq-nova.conf and included it in nova.conf, but it doesn't seem to make a difference. The networks are labelled correctly. dhcp-option=tag:'production',option:router,10.0.33.1 dhcp-option=tag:'dmz',option:router,10.0.21.1 On 20 December 2012 10:59, Joe Warren-Meeks joe.warren.me...@gmail.com wrote: Brilliant. I'll try your step-by-step with 2012 and, if it works, I'll drop you a line. Kind regards -- joe. On 18 December 2012 20:49, Lloyd Dewolf lloydost...@gmail.com wrote: On Tue, Dec 18, 2012 at 10:41 AM, Joe Warren-Meeks joe.warren.me...@gmail.com wrote: Hi guys, I've created a windows 2012 image and uploaded it ok. Pretty much following this example: http://docs.openstack.org/trunk/openstack-compute/admin/content/creating-a-windows-image.html When I go to launch an instance, it works ok and nova list and nova show look healthy. If I VNC to it as soon as it starts to boot, I get to see the BIOS and then the new Windows logo, but then the screen goes black and nothing seems to happen. Sending ctrl-alt-del elicits no response and it doesn't look like the network has DHCP'ed either. Has anyone else seen this and if so, any idea what I can do to fix it? We haven't had a customer ask for help with Windows 2012, but have had good success with Windows 2008 created using KVM. 
https://airframeaid.pistoncloud.com/entries/21838261-creating-windows-images -- @lloyddewolf http://www.pistoncloud.com/
Re: [Openstack] two or more NFS / gluster mounts
Hi Andrew, An interesting idea, but I am unaware if nova supports storage affinity in any way; it does support host affinity, IIRC. As a kludge you could have, say, some nova compute nodes using your slow mount and reserve the fast-mount nodes as required, perhaps even defining separate zones for deployment? Cheers David On Thu, Dec 20, 2012 at 2:53 PM, Andrew Holway a.hol...@syseleven.de wrote: Hi David, It is for nova. I'm not sure I understand. I want to be able to say to openstack; openstack, please install this instance (A) on this mountpoint and please install this instance (B) on this other mountpoint. I am planning on having two NFS / Gluster based stores, a fast one and a slow one. I probably will not want to say please every time :) Thanks, Andrew On Dec 20, 2012, at 3:42 PM, David Busby wrote: Hi Andrew, Is this for glance or nova ? For nova change: state_path = /var/lib/nova lock_path = /var/lib/nova/tmp in your nova.conf For glance I'm unsure, may be easier to just mount gluster right onto /var/lib/glance (similarly could do the same for /var/lib/nova). And just my £0.02 I've had no end of problems getting gluster to play nice on small POC clusters (3 - 5 nodes, I've tried nfs tried glusterfs, tried 2 replica N distribute setups with many a random glusterfs death), as such I have opted for using ceph. ceph's rados can also be used with cinder from the brief reading I've been doing into it. Cheers David On Thu, Dec 20, 2012 at 1:53 PM, Andrew Holway a.hol...@syseleven.de wrote: Hi, If I have /nfs1mount and /nfs2mount or /nfs1mount and /glustermount can I control where openstack puts the disk files? 
Thanks, Andrew
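The zone kludge mentioned above could be wired up roughly like this. It is a hedged sketch: the zone name is illustrative, and on Essex the client flag may be spelled --availability_zone, so check `nova help boot` on your version:

```
# nova.conf on the compute nodes backed by the fast mount:
#   node_availability_zone=fast
# then target that zone when booting an instance:
nova boot --image <image-id> --flavor m1.small --availability-zone fast fastvm
```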
Re: [Openstack] two or more NFS / gluster mounts
Ah shame. You can specify different storage domains in oVirt. On Dec 20, 2012, at 4:16 PM, David Busby wrote: Hi Andrew, An interesting idea, but I am unaware if nova supports storage affinity in any way; it does support host affinity, IIRC. As a kludge you could have, say, some nova compute nodes using your slow mount and reserve the fast-mount nodes as required, perhaps even defining separate zones for deployment? Cheers David On Thu, Dec 20, 2012 at 2:53 PM, Andrew Holway a.hol...@syseleven.de wrote: Hi David, It is for nova. I'm not sure I understand. I want to be able to say to openstack; openstack, please install this instance (A) on this mountpoint and please install this instance (B) on this other mountpoint. I am planning on having two NFS / Gluster based stores, a fast one and a slow one. I probably will not want to say please every time :) Thanks, Andrew On Dec 20, 2012, at 3:42 PM, David Busby wrote: Hi Andrew, Is this for glance or nova ? For nova change: state_path = /var/lib/nova lock_path = /var/lib/nova/tmp in your nova.conf For glance I'm unsure, may be easier to just mount gluster right onto /var/lib/glance (similarly could do the same for /var/lib/nova). And just my £0.02 I've had no end of problems getting gluster to play nice on small POC clusters (3 - 5 nodes, I've tried nfs tried glusterfs, tried 2 replica N distribute setups with many a random glusterfs death), as such I have opted for using ceph. ceph's rados can also be used with cinder from the brief reading I've been doing into it. Cheers David On Thu, Dec 20, 2012 at 1:53 PM, Andrew Holway a.hol...@syseleven.de wrote: Hi, If I have /nfs1mount and /nfs2mount or /nfs1mount and /glustermount can I control where openstack puts the disk files? 
Thanks, Andrew
Re: [Openstack] two or more NFS / gluster mounts
I may of course be entirely wrong :) which would be cool if this is achievable / on the roadmap. At the very least if this is not already in discussion I'd raise it on launchpad as a potential feature. On Thu, Dec 20, 2012 at 3:19 PM, Andrew Holway a.hol...@syseleven.de wrote: Ah shame. You can specify different storage domains in oVirt. On Dec 20, 2012, at 4:16 PM, David Busby wrote: Hi Andrew, An interesting idea, but I am unaware if nova supports storage affinity in any way; it does support host affinity, IIRC. As a kludge you could have, say, some nova compute nodes using your slow mount and reserve the fast-mount nodes as required, perhaps even defining separate zones for deployment? Cheers David On Thu, Dec 20, 2012 at 2:53 PM, Andrew Holway a.hol...@syseleven.de wrote: Hi David, It is for nova. I'm not sure I understand. I want to be able to say to openstack; openstack, please install this instance (A) on this mountpoint and please install this instance (B) on this other mountpoint. I am planning on having two NFS / Gluster based stores, a fast one and a slow one. I probably will not want to say please every time :) Thanks, Andrew On Dec 20, 2012, at 3:42 PM, David Busby wrote: Hi Andrew, Is this for glance or nova ? For nova change: state_path = /var/lib/nova lock_path = /var/lib/nova/tmp in your nova.conf For glance I'm unsure, may be easier to just mount gluster right onto /var/lib/glance (similarly could do the same for /var/lib/nova). And just my £0.02 I've had no end of problems getting gluster to play nice on small POC clusters (3 - 5 nodes, I've tried nfs tried glusterfs, tried 2 replica N distribute setups with many a random glusterfs death), as such I have opted for using ceph. ceph's rados can also be used with cinder from the brief reading I've been doing into it. 
Cheers David On Thu, Dec 20, 2012 at 1:53 PM, Andrew Holway a.hol...@syseleven.de wrote: Hi, If I have /nfs1mount and /nfs2mount or /nfs1mount and /glustermount can I control where openstack puts the disk files? Thanks, Andrew
Re: [Openstack] two or more NFS / gluster mounts
Good plan.

https://blueprints.launchpad.net/openstack-ci/+spec/multiple-storage-domains
Re: [Openstack] two or more NFS / gluster mounts
mmm... not sure if the concept of oVirt multiple storage domains is something that can be implemented in Nova as it is, but I would like to share my thoughts, because it's something that -from my point of view- matters.

If you want to change the folder where the nova instances are stored, you have to modify the 'instances_path' option in nova-compute.conf. If you look at that folder (/var/lib/nova/instances/ by default) you will see a structure like this:

drwxrwxr-x 2 nova nova 73 Dec  4 12:16 _base
drwxrwxr-x 2 nova nova  5 Oct 16 13:34 instance-0002
...
drwxrwxr-x 2 nova nova  5 Nov 26 17:38 instance-005c
drwxrwxr-x 2 nova nova  6 Dec 11 15:38 instance-0065

If you have shared storage for that folder, then your fstab entry looks like this one:

10.15.100.3:/volumes/vol1/zone1/instances /var/lib/nova/instances nfs defaults 0 0

So, I think that it could be possible to implement something like 'storage domains', but tenant/project oriented. Instead of having multiple generic mountpoints, each tenant would have a private mountpoint for his/her instances. So /var/lib/nova/instances could look like this sample:

/instances
+ /tenantID1
++ /instance-X
++ /instance-Y
++ /instance-Z
+ /tenantID2
++ /instance-A
++ /instance-B
++ /instance-C
...
+ /tenantIDN
++ /instance-A
++ /instance-B
++ /instance-C

And /etc/fstab would contain something like this sample too:

10.15.100.3:/volumes/vol1/zone1/instances/tenantID1 /var/lib/nova/instances/tenantID1 nfs defaults 0 0
10.15.100.3:/volumes/vol1/zone1/instances/tenantID2 /var/lib/nova/instances/tenantID2 nfs defaults 0 0
...
10.15.100.3:/volumes/vol1/zone1/instances/tenantIDN /var/lib/nova/instances/tenantIDN nfs defaults 0 0

With this approach, we could have something like per-tenant QoS on shared storage, to resell different storage capabilities on a tenant basis. I would love to hear feedback, drawbacks, improvements...
Cheers Diego

--
Diego Parrilla, CEO
www.stackops.com | diego.parri...@stackops.com | +34 649 94 43 29 | skype:diegoparrilla
http://www.stackops.com/
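Diego's per-tenant layout lends itself to automation. A minimal sketch of a helper that emits one fstab line per tenant — the NFS export path and tenant IDs below are illustrative placeholders taken from the example above, not values from a real deployment:

```python
# Sketch: generate per-tenant NFS fstab entries for nova instance storage.
# The NFS export path and tenant IDs are illustrative, not real values.

def tenant_fstab_entries(nfs_base, tenant_ids,
                         local_base="/var/lib/nova/instances",
                         options="defaults"):
    """Return one fstab line per tenant, mounting a private NFS subtree."""
    lines = []
    for tenant in tenant_ids:
        remote = "%s/%s" % (nfs_base, tenant)   # per-tenant export on the filer
        local = "%s/%s" % (local_base, tenant)  # per-tenant instances dir
        lines.append("%s %s nfs %s 0 0" % (remote, local, options))
    return lines

entries = tenant_fstab_entries(
    "10.15.100.3:/volumes/vol1/zone1/instances",
    ["tenantID1", "tenantID2"])
for line in entries:
    print(line)
```

Each generated line matches the fstab format in Diego's sample, so per-tenant QoS could be applied on the filer side per export.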
[Openstack] [keystone] IBM DB2 configuration
(raising to the mailing list)

Which DB2 driver are you using? I was referring to: http://code.google.com/p/ibm-db/wiki/README ... which shows an example connection string for sqlalchemy as:

db2 = sqlalchemy.create_engine('ibm_db_sa://db2inst1:sec...@host.name.com:5/pydev')

-Dolph

On Thu, Dec 20, 2012 at 4:05 AM, Kevin-Yang benbenzhufore...@gmail.com wrote:

Dolph, Really appreciate your response. My VM configuration is:

OS - Red Hat Enterprise Linux Server release 6.3 (Santiago)
DB2 - Informational tokens are DB2 v9.7.0.0, s090521, LINUXAMD6497, and Fix Pack 0
ibm_db - http://pypi.python.org/packages/source/i/ibm_db/ibm_db-2.0.0.tar.gz#md5=709c576c0ec2379ca15049f5c861beb1
ibm_db_sa - When I changed from ibm_db to ibm_db_sa, I came up with a different error - Could not determine dialect for 'ibm_db_sa'.

##
Traceback (most recent call last):
  File /usr/local/bin/keystone-manage, line 5, in module
    pkg_resources.run_script('keystone==2012.2', 'keystone-manage')
  File build/bdist.linux-x86_64/egg/pkg_resources.py, line 499, in run_script
  File build/bdist.linux-x86_64/egg/pkg_resources.py, line 1239, in run_script
  File /usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/EGG-INFO/scripts/keystone-manage, line 28, in module
    cli.main(argv=sys.argv, config_files=config_files)
  File /usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/cli.py, line 164, in main
    return run(cmd, (args[:1] + args[2:]))
  File /usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/cli.py, line 147, in run
    return CMDS[cmd](argv=args).run()
  File /usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/cli.py, line 35, in run
    return self.main()
  File /usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/cli.py, line 56, in main
    driver.db_sync()
  File /usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/identity/backends/sql.py, line 136, in db_sync
    migration.db_sync()
  File /usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/common/sql/migration.py, line 49, in db_sync
    current_version = db_version()
  File /usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/common/sql/migration.py, line 61, in db_version
    return versioning_api.db_version(CONF.sql.connection, repo_path)
  File string, line 2, in db_version
  File /usr/local/lib/python2.7/site-packages/migrate/versioning/util/__init__.py, line 155, in with_engine
    engine = construct_engine(url, **kw)
  File /usr/local/lib/python2.7/site-packages/migrate/versioning/util/__init__.py, line 140, in construct_engine
    return create_engine(engine, **kwargs)
  File build/bdist.linux-x86_64/egg/sqlalchemy/engine/__init__.py, line 338, in create_engine
  File build/bdist.linux-x86_64/egg/sqlalchemy/engine/strategies.py, line 50, in create
  File build/bdist.linux-x86_64/egg/sqlalchemy/engine/url.py, line 123, in get_dialect
sqlalchemy.exc.ArgumentError: Could not determine dialect for 'ibm_db_sa'.
##

--
You received this bug notification because you are a bug assignee. https://bugs.launchpad.net/bugs/987121

Title: strict constraint for database table creation
Status in OpenStack Identity (Keystone): Fix Released

Bug description: OpenStack claims that any type of database supporting SQLAlchemy can be taken as the database for OpenStack use. In some databases, if a column is defined as UNIQUE, it must be specified NOT NULL at the same time, e.g. IBM DB2, which is SQLAlchemy supporting. I am doing some tests with DB2 now. For the tables TENANT, USER and ROLE, they all have the column NAME, but they don't define this column NOT NULL. For databases like MySQL, this is permitted, and keystone-manage db_sync works well. However, for a database with strict constraints, like IBM DB2, this is not allowed. Running keystone-manage db_sync will prompt an error, which tells me UNIQUE and NOT NULL must be specified together for the column NAME.
Suggestion: In the code keystone/identity/backends/sql.py, we have

name = sql.Column(sql.String(64), unique=True)

If we change it into

name = sql.Column(sql.String(64), unique=True, nullable=False)

this issue will be solved.

To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/987121/+subscriptions
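The effect of the suggested `nullable=False` can be seen with any strict database. Here is a minimal illustration using Python's built-in sqlite3 (standing in for DB2) showing that a UNIQUE column declared NOT NULL rejects NULL names — the behavior DB2 insists on pairing with UNIQUE:

```python
import sqlite3

# Minimal illustration (sqlite3 standing in for DB2): a UNIQUE column that is
# also NOT NULL rejects NULL names, matching the suggested keystone fix
#   name = sql.Column(sql.String(64), unique=True, nullable=False)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE role (id TEXT PRIMARY KEY, "
             "name TEXT UNIQUE NOT NULL)")

conn.execute("INSERT INTO role VALUES ('abc123', 'admin')")  # accepted

try:
    conn.execute("INSERT INTO role VALUES ('def456', NULL)")
    rejected = False
except sqlite3.IntegrityError:
    # NOT NULL constraint fires, as it would under DB2's stricter rules
    rejected = True

print(rejected)  # True: a NULL name is rejected
```

Without `NOT NULL`, most databases (including sqlite and MySQL) would silently accept any number of NULL names in a UNIQUE column, which is exactly the mismatch the bug describes.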
Re: [Openstack] two or more NFS / gluster mounts
Yes, I really agree with Diego. It would be a good choice to submit a blueprint for this tenant-based storage feature. The current quota controls limit the:

- Number of volumes which may be created
- Total size of all volumes within a project, as measured in GB
- Number of instances which may be launched
- Number of processor cores which may be allocated
- Publicly accessible IP addresses

Another new feature related to shared storage we had thought about is to include an option for choosing whether an instance has to be replicated or not, i.e. in a MooseFS scenario, to indicate the goal (number of replicas). It's useful for example in testing or demo projects, where HA is not required.

Regards, JuanFra.
Re: [Openstack] Vlanned networking setup
There is no need for nova to create the vlans; you could use FlatDHCP, manually create the vlans, and specify them when you create your networks:

nova-manage network-create --bridge br0101 --bridge_interface eth0.101
nova-manage network-create --bridge br1101 --bridge_interface eth1.101

Note that exposing two networks to the guest can be tricky, so most people just use the first bridge and do the public addresses with floating IPs:

nova-manage floating-create --ip_range ip_range --interface eth1.101

(no bridge is needed in this case)

Vish

On Dec 20, 2012, at 6:56 AM, Andrew Holway a.hol...@syseleven.de wrote:

Hi, I am thinking about the following network setup:

vlan101(eth0)
      |
    br0101
      |
 [vm] [vm] [vm]
      |
    br1101
      |
vlan101(eth1)

Basically public IP addresses will go over eth1 and private stuff over eth0. This would mean that openstack would have to create two vlans and two bridges. Is this possible? Please create this vlanned network on eth0 (10.141) and create this other one (10.142) on eth1. Thanks, Andrew
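Vish's per-VLAN commands follow a regular pattern, so generating them for many VLANs is mechanical. A small sketch that builds the nova-manage invocations — the br&lt;nic&gt;&lt;vlan&gt; bridge naming follows the example above and is only a convention, not anything nova enforces:

```python
# Sketch: build the nova-manage network-create commands for a list of VLAN IDs,
# using the br<nic><vlan> naming from the example above (a convention only).

def network_create_cmds(vlan_ids, nic_index=0):
    cmds = []
    for vlan in vlan_ids:
        bridge = "br%d%d" % (nic_index, vlan)   # e.g. br0101 for eth0, vlan 101
        iface = "eth%d.%d" % (nic_index, vlan)  # e.g. eth0.101
        cmds.append("nova-manage network-create --bridge %s "
                    "--bridge_interface %s" % (bridge, iface))
    return cmds

print(network_create_cmds([101])[0])
# nova-manage network-create --bridge br0101 --bridge_interface eth0.101
```

Real deployments would pass additional arguments (label, CIDR, etc.); this only shows the bridge/interface pairing from the message above.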
Re: [Openstack] two or more NFS / gluster mounts
Hi John,

Yes, that's a really good solution. It is exactly what the StackOps Enterprise Edition offers out of the box. It's a simpler alternative, assuming you are big enough to have several clusters of compute nodes, each cluster with a different quality of service preassigned. And it works... if the scheduler function works.

My proposal about a hierarchy of folders for shared storage comes from the requirements of some customers that want to be able to control the IO on a tenant basis and want to use very cheap, scalable shared storage. Let's say that StackOps EE follows a static approach now, and we would like to have a dynamic one ;-)

Cheers Diego

--
Diego Parrilla, CEO
www.stackops.com | diego.parri...@stackops.com | +34 649 94 43 29 | skype:diegoparrilla
http://www.stackops.com/
Re: [Openstack] two or more NFS / gluster mounts
John Griffith wrote:
> Yes, I'm really agree with Diego. It would be a good choice for submitting a blueprint with this storage feature based on tenants.

I think the key is that the File/Object service should be enabled similarly to how volumes are enabled, with similar tenant scoping and granularity. So an NFS export would be enabled for a VM much the way a volume is, with the only difference being that an NFS export *can* be shared. But when it is not shared, it should be just as eligible for local storage as a cinder volume is.

To the extent that this is not just a migrate-to-local-storage feature, it needs to be integrated with Quantum as well. The network needs to be configured so that *only* this set of clients has access to the virtual network where the specified exports are enabled.

This does a lot to solve the multi-tenant problem as well. Each export can be governed by a single tenant. If the NAS traffic is all on different virtual networks, there are never any conflicts over UIDs and GIDs.
Re: [Openstack] [keystone] IBM DB2 configuration
What I think we need is a simple way to run our current body of unit tests, to include the sql Migration tests, against a Live database, kindof the same way as I have et up for the live LDAP test. The steps: create a file under keystone/tests that doesn't trigger the nameing scheme that matches for unit tests. Since I use _ldap_livetest.py for LDAP, I would recommend _db_livetests.py. That should then import test_backend_sql and test_sql upgrade. They would pull in a custom config file that is .gitignored but that has the DB connection info for DB2 etc. We could post sample ones for smokestack etc to pull in for integration testing. A user then could run against those test with ./run_tests.sh -x _db_livetests On 12/20/2012 11:55 AM, Dolph Mathews wrote: (raising to the mailing list) Which DB2 driver are you using? I was referring to: http://code.google.com/p/ibm-db/wiki/README ... which shows an example connection string for sqlalchemy as: db2 = sqlalchemy.create_engine('ibm_db_sa://db2inst1:sec...@host.name.com:5/pydev http://db2inst1:sec...@host.name.com:5/pydev') -Dolph On Thu, Dec 20, 2012 at 4:05 AM, Kevin-Yang benbenzhufore...@gmail.com mailto:benbenzhufore...@gmail.com wrote: Dolph, Really appreciated for your response. My VM configuration is: OS - Red Hat Enterprise Linux Server release 6.3 (Santiago) DB2 - Informational tokens are DB2 v9.7.0.0, s090521, LINUXAMD6497, and Fix Pack 0 ibm_db - http://pypi.python.org/packages/source/i/ibm_db/ibm_db-2.0.0.tar.gz#md5=709c576c0ec2379ca15049f5c861beb1 ibm_db_sa - When i changed from ibmdb - ibm_db_sa, I came with a different error - Could not determine dialect for 'ibm_db_sa'. 
##
Traceback (most recent call last):
  File "/usr/local/bin/keystone-manage", line 5, in <module>
    pkg_resources.run_script('keystone==2012.2', 'keystone-manage')
  File "build/bdist.linux-x86_64/egg/pkg_resources.py", line 499, in run_script
  File "build/bdist.linux-x86_64/egg/pkg_resources.py", line 1239, in run_script
  File "/usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/EGG-INFO/scripts/keystone-manage", line 28, in <module>
    cli.main(argv=sys.argv, config_files=config_files)
  File "/usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/cli.py", line 164, in main
    return run(cmd, (args[:1] + args[2:]))
  File "/usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/cli.py", line 147, in run
    return CMDS[cmd](argv=args).run()
  File "/usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/cli.py", line 35, in run
    return self.main()
  File "/usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/cli.py", line 56, in main
    driver.db_sync()
  File "/usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/identity/backends/sql.py", line 136, in db_sync
    migration.db_sync()
  File "/usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/common/sql/migration.py", line 49, in db_sync
    current_version = db_version()
  File "/usr/local/lib/python2.7/site-packages/keystone-2012.2-py2.7.egg/keystone/common/sql/migration.py", line 61, in db_version
    return versioning_api.db_version(CONF.sql.connection, repo_path)
  File "<string>", line 2, in db_version
  File "/usr/local/lib/python2.7/site-packages/migrate/versioning/util/__init__.py", line 155, in with_engine
    engine = construct_engine(url, **kw)
  File "/usr/local/lib/python2.7/site-packages/migrate/versioning/util/__init__.py", line 140, in construct_engine
    return create_engine(engine, **kwargs)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/__init__.py", line 338, in create_engine
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/strategies.py", line 50, in create
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/url.py", line 123, in get_dialect
sqlalchemy.exc.ArgumentError: Could not determine dialect for 'ibm_db_sa'.
##
-- You received this bug notification because you are a bug assignee. https://bugs.launchpad.net/bugs/987121 Title: strict constraint for database table creation Status in OpenStack Identity (Keystone): Fix Released Bug description: OpenStack claims that any type of database supporting SQLAlchemy can be taken as the database for OpenStack use. In some databases, if a column is defined as UNIQUE, it must be specified NOT NULL at the same time, e.g. IBM DB2, which SQLAlchemy supports. I am
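The UNIQUE/NOT NULL point in the bug description can be shown with a small, hedged example; SQLite stands in for the real backend here, since the pairing of the two constraints is what matters for DB2-style engines:

```python
# Declaring UNIQUE together with NOT NULL keeps the table definition
# portable to backends (such as DB2) that require both; SQLite is used
# here only as a convenient stand-in engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user (id TEXT PRIMARY KEY, name TEXT NOT NULL UNIQUE)"
)
conn.execute("INSERT INTO user VALUES ('u1', 'admin')")

try:
    conn.execute("INSERT INTO user VALUES ('u2', NULL)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)  # the NOT NULL half of the pair does its job
```

A backend that only enforced UNIQUE would happily accept multiple NULL names; declaring both avoids relying on that behavior.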
[Openstack] Nova API for getting information on compute nodes
Hello all, This is regarding information about the compute nodes: I can see that there are Nova APIs (os-hosts and os-hosts/hostname) available to provide data about the hosts. But when I tried using the APIs in my environment, I did not get any information about the compute nodes. What input parameters do I need to pass to these APIs? I am using OpenStack Essex. Thanks Krishnaprasad ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
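For reference, the call shape usually looks like the sketch below (host, port, tenant id and token are placeholders, not real values); the os-hosts extension normally requires an admin token, and an empty result often means the policy/role check failed rather than that no hosts exist:

```shell
# Placeholders for your own environment:
TOKEN="ADMIN_TOKEN"
TENANT_ID="TENANT_ID"
NOVA="http://controller:8774/v2"

LIST_URL="$NOVA/$TENANT_ID/os-hosts"            # all hosts + service roles
HOST_URL="$NOVA/$TENANT_ID/os-hosts/compute1"   # one host's resource usage

echo "curl -s -H \"X-Auth-Token: $TOKEN\" \"$LIST_URL\""
echo "curl -s -H \"X-Auth-Token: $TOKEN\" \"$HOST_URL\""
```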
Re: [Openstack] Can somebody offer some help regarding Keystone interaction with LDAP in Essex?
Well, keystone token-get works fine, as far as it goes. I've found a few other problems here. First I had a left-over dc=example,dc=com in the Role tree option, which didn't help. Now that I don't, if I use SERVICE_ENDPOINT and SERVICE_TOKEN to authenticate directly to keystone, I can see that the roles exist:

# keystone role-get 037b09e5e80d4d31a275be084f27b5c3
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           description            |
|      id     | 037b09e5e80d4d31a275be084f27b5c3 |
|     name    | 037b09e5e80d4d31a275be084f27b5c3 |
+-------------+----------------------------------+

Of course, there's a problem here; I can't put anything in the LDAP server that OpenStack will recognize as giving this role a name other than simply the role ID, which is probably a show stopper itself. As if that weren't enough, it still isn't apparently looking up any role information when I try to authenticate. I now suspect that this is because the tenant's enabled attribute is also set by default to something ridiculous that won't fit in the default LDAP schema, so there's no enabled value. Perhaps it interprets this to mean that the tenant is disabled and refuses to do more. I've attempted to change the tenant enabled attribute, but it seems that that option is only available post-Essex. If any of this is close to correct, I suppose we may just need to go with the SQL identity backend until such time as we get Folsom set up and find that it actually has useful LDAP support. Any other ideas? Chris - Original Message - From: Dolph Mathews dolph.math...@gmail.com To: Christopher Smith csm...@wolfram.com Cc: openstack openstack@lists.launchpad.net Sent: Tuesday, December 18, 2012 5:57:59 PM Subject: Re: [Openstack] Can somebody offer some help regarding Keystone interaction with LDAP in Essex? Make sure you're specifying a tenant (e.g. OS_TENANT_NAME) in order to receive authorization (e.g. the admin role) to perform nova list.
You can debug the authn/authz process using keystone token-get (this doc is for Folsom, but should work for Essex, although the arguments may have changed; check keystone --help): http://docs.openstack.org/trunk/openstack-compute/install/apt/content/verifying-identity-install.html If you're running into issues between your LDAP schema and what keystone Essex expects, it's worth pointing out that keystone became a lot more flexible in terms of LDAP configuration in Folsom. -Dolph On Tue, Dec 18, 2012 at 3:07 PM, Christopher Smith csm...@wolfram.com wrote: Hey everybody, We're trying very hard to build an Openstack cluster here, and I've been running into some trouble with the Keystone LDAP identity backend. I have every expectation that this is just something I have misconfigured, but honestly the documentation seems somewhat lacking for this, so I haven't been able to figure out what is going wrong. Here's the current situation: We have gotten an entire Openstack installation working, using the SQL backends to keystone. We're currently trying to move the identity into LDAP. This has caused a few problems, but the one I'm stuck on at the moment is that the admin user seems not to be associated with the admin role. Nor does keystone seem to be attempting at all to look up role information. The relevant section of keystone.conf looks like:

[ldap]
url = ldap://ldap.wolfram.com
tree_dn = ou=OpenStack,dc=wolfram,dc=com
user_tree_dn = ou=Users,ou=OpenStack,dc=wolfram,dc=com
role_tree_dn = ou=Roles,ou=OpenStack,dc=example,dc=com
tenant_tree_dn = ou=Groups,ou=OpenStack,dc=wolfram,dc=com
user = cn=directory,ou=misc,ou=OpenStack,dc=wolfram,dc=com
password = redacted (but the bind is successful)
suffix = cn=wolfram,cn=com

Now, I've captured the traffic between keystone and ldap for when I execute any nova operation, say, nova list.
What I get is a set of successful binds as cn=directory, a search request against ou=Users, one against ou=Groups, one for admin's UID in ou=Users, then a re-bind as the admin object -- also successful, so I assume I'm authenticated. Next I have a search by ID for the admin group. This, depending on the operation, might be repeated along with additional lookups for the lists of users and groups against the corresponding OUs. Anyway, all of this looks reasonable to me, except that it doesn't appear to ever be trying to assign roles, which I'd like for it to do. My LDAP structure looks like this:

ou=OpenStack
  ou=Groups -- Contains a set of groupOfNames objects with cn=id, ou=name, member=DN of member. Our tenants are stored here.
  ou=misc -- This is just a place to stick the keystone directory user for the initial bind.
  ou=Roles -- Contains a set of organizationalRole objects with ou=name, cn=id, roleOccupant=DN of
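For concreteness, entries matching the layout described above might look like the following LDIF sketch (DNs, ids and names are illustrative placeholders, not taken from the actual directory):

```
# Tenant (ou=Groups): a groupOfNames whose members are user DNs
dn: cn=<tenant-id>,ou=Groups,ou=OpenStack,dc=wolfram,dc=com
objectClass: groupOfNames
cn: <tenant-id>
ou: demo-tenant
member: cn=admin,ou=Users,ou=OpenStack,dc=wolfram,dc=com

# Role (ou=Roles): an organizationalRole whose occupants are user DNs
dn: cn=<role-id>,ou=Roles,ou=OpenStack,dc=wolfram,dc=com
objectClass: organizationalRole
cn: <role-id>
ou: admin
roleOccupant: cn=admin,ou=Users,ou=OpenStack,dc=wolfram,dc=com
```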
Re: [Openstack] Vlanned networking setup
Hi Vish, Manually creating vlans would be quite tiresome if you are using a vlan per project, and I'm not sure flatdhcp is good for serious use in multi-tenanted production environments. (thoughts?) I tested the vlan manager functionality and this is *really* great for when you want to keep a customer on its own logical network with its own subnet, but if you want to have an instance on more than one network you seem kinda screwed. This starts to be a problem when you think about DMZs and proxies and stuff. Thanks, Andrew On Dec 20, 2012, at 6:35 PM, Vishvananda Ishaya wrote: There is no need for nova to create the vlans; you could use flatdhcp and manually create the vlans and specify the vlans when you create your networks: nova-manage network-create --bridge br0101 --bridge_interface eth0.101 nova-manage network-create --bridge br1101 --bridge_interface eth1.101 Note that exposing two networks to the guest can be tricky, so most people just use the first bridge and do the public addresses with floating ips: nova-manage floating-create --ip_range ip_range --interface eth1.101 (no bridge is needed in this case) Vish On Dec 20, 2012, at 6:56 AM, Andrew Holway a.hol...@syseleven.de wrote: Hi, I am thinking about the following network setup:

+----------------+
| vlan101 (eth0) |
+----------------+
        |
+----------------+
|     br0101     |
+----------------+
   |     |     |
+----+ +----+ +----+
| vm | | vm | | vm |
+----+ +----+ +----+
   |     |     |
+----------------+
|     br1101     |
+----------------+
        |
+----------------+
| vlan101 (eth1) |
+----------------+

Basically public IP addresses will go over eth1 and private stuff over eth0. This would mean that openstack would have to create two vlans and two bridges. Is this possible?
please create this vlanned network on eth0 (10.141) and create this other one (10.142) on eth1 Thanks, Andrew
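The "manually create the vlans" step Vish mentions could be sketched roughly like this (interface and bridge names mirror his nova-manage example; this is an assumed recipe using standard iproute2/bridge-utils commands, not something tested against that setup):

```shell
# Pre-create the vlan interface and bridge that
# --bridge/--bridge_interface will point at:
ip link add link eth0 name eth0.101 type vlan id 101
brctl addbr br0101
brctl addif br0101 eth0.101
ip link set eth0.101 up
ip link set br0101 up
```

The same pattern on eth1 would produce br1101 for the second network.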
Re: [Openstack] [swift] RAID Performance Issue
Yes, that's why I was careful to clarify that I was talking about parity RAID. Performance should be fine otherwise. -- Chuck On Wed, Dec 19, 2012 at 8:26 PM, Hua ZZ Zhang zhu...@cn.ibm.com wrote: Chuck, David, Thanks for your explanation and sharing. Since RAID 0 doesn't have parity or mirroring to provide low-level redundancy, which indicates there's no write penalty, it can improve overall performance for concurrent IO across multiple disks. I'm wondering if it makes sense to use such a RAID level without parity/mirroring to increase R/W performance and leave replication and distribution to the higher level of Swift. Chuck Thier cth...@gmail.com Sent by: openstack-bounces+zhuadl=cn.ibm@lists.launchpad.net 2012-12-20 上午 12:33 To David Busby d.bu...@saiweb.co.uk, cc openstack@lists.launchpad.net Subject Re: [Openstack] [swift] RAID Performance Issue There are a couple of things to think about when using RAID (or more specifically parity RAID) with swift. The first has already been identified in that the workload for swift is very write heavy with small random IO, which is very bad for most parity RAID. In our testing, under heavy workloads, the overall RAID performance would degrade to be as slow as a single drive. It is very common for servers to have many hard drives (our first servers that we did testing with had 24 2T drives). During testing, RAID rebuilds were looking like they would take 2 weeks or so, which was not acceptable. While the array was in a degraded state, the overall performance of that box would suffer dramatically, which would have ripple effects across the rest of the cluster.
We tried to make things work well with RAID 5 for quite a while, as it would have made operations easier and the code simpler, since we wouldn't have had to handle many of the failure scenarios. Looking back, having to not rely on RAID has made swift a much more robust and fault-tolerant platform. -- Chuck On Wed, Dec 19, 2012 at 4:32 AM, David Busby d.bu...@saiweb.co.uk wrote: Hi Zang, As JuanFra points out, there's not much sense in using Swift on top of RAID as Swift handles this itself; extending on this, RAID introduces a write penalty (http://theithollow.com/2012/03/21/understanding-raid-penalty/), which in turn leads to performance issues; refer to the link for the write penalty per configuration. As I recall (though this was from way back in October 2010), the suggested method of deploying swift is onto standalone XFS drives, leaving swift to handle the replication and distribution. Cheers David On Wed, Dec 19, 2012 at 9:12 AM, JuanFra Rodriguez Cardoso juanfra.rodriguez.card...@gmail.com wrote: Hi Zang: Basically, it makes no sense to use Swift on top of RAID because Swift already delivers the replication schema. Regards, JuanFra. 2012/12/19 Hua ZZ Zhang zhu...@cn.ibm.com Hi, I have read the admin document of Swift and found there's a recommendation against using RAID 5 or 6, because swift performance degrades quickly with it. Can anyone explain why this happens? If the RAID is done by a hardware RAID controller, will the performance issue still exist? Can anyone share this kind of experience of using RAID with Swift? Appreciated for any suggestion from you.
-Zhang Hua
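The write-penalty arithmetic David links to can be sketched in a few lines (the penalty factors are the commonly cited per-level values; the drive count, per-drive IOPS and write fraction below are illustrative assumptions, with 90% writes standing in for a swift-like workload):

```python
# Rough effective-IOPS model for a write-heavy, small-random-IO workload.
# Commonly cited write penalties: RAID 0: 1, RAID 1/10: 2, RAID 5: 4, RAID 6: 6.
RAID_WRITE_PENALTY = {"raid0": 1, "raid10": 2, "raid5": 4, "raid6": 6}

def effective_iops(drives, iops_per_drive, write_fraction, level):
    """Raw IOPS discounted by the back-end cost of each logical write."""
    raw = drives * iops_per_drive
    penalty = RAID_WRITE_PENALTY[level]
    # Each read costs 1 back-end IO; each write costs `penalty` of them.
    return raw / ((1 - write_fraction) + write_fraction * penalty)

# 24 drives x ~100 IOPS each, 90% writes (assumed figures):
for level in ("raid0", "raid10", "raid5", "raid6"):
    print(level, round(effective_iops(24, 100, 0.9, level)))
```

Under these assumptions a RAID 5 array delivers well under a third of its raw write throughput, which is consistent with Chuck's observation that parity RAID degraded toward single-drive speed under heavy load.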
Re: [Openstack] [swift] RAID Performance Issue
It's always nice to have the benefit of a nice, big, fat BBU cache :) On Dec 21, 2012, at 12:03 AM, Chuck Thier wrote: Yes, that's why I was careful to clarify that I was talking about parity RAID. Performance should be fine otherwise. -- Chuck
Re: [Openstack] Vlanned networking setup
On Dec 20, 2012, at 2:24 PM, Andrew Holway a.hol...@syseleven.de wrote: Hi Vish, Manually creating vlans would be quite tiresome if you are using a vlan per project, and I'm not sure flatdhcp is good for serious use in multi-tenanted production environments. (thoughts?) Personally I think vlan isolation just makes people feel better. But you can always go the quantum route if you want to make sure your networks are isolated. I tested the vlan manager functionality and this is *really* great for when you want to keep a customer on its own logical network with its own subnet, but if you want to have an instance on more than one network you seem kinda screwed. This starts to be a problem when you think about DMZs and proxies and stuff. Why not just use vlan mode and normal floating ips for public addresses? Vish
[Openstack] Article posted: OpenStack Comes of Age
All, I just posted this article on OpenStack coming of age here: http://www.activestate.com/blog/2012/12/openstack-comes-age I'm looking forward to a great new year and the Grizzly release (if we make it past the end of the Mayan calendar tomorrow)! Thanks for all your efforts and for making Folsom so awesome! Kind Regards, Diane Mueller Director, Cloud Evangelism ActiveState Stackato
[Openstack] Stopping Devstack from running
Hi, I installed Devstack on my Ubuntu system from http://devstack.org/ and it runs every time I boot my system. Is there a way to stop that? Nothing is mentioned on their website. I have been able to stop the Apache server running in the background with the command: apachectl -k stop but I am unable to stop the MySQL daemon. The conflict is because I sometimes need to run XAMPP for Linux on my system. Please suggest any ways to do this. Thanks, Sankha
Re: [Openstack] Stopping Devstack from running
On Thu, Dec 20, 2012 at 9:10 PM, Sankha Narayan Guria sankh...@gmail.com wrote: I installed Devstack on my Ubuntu system from http://devstack.org/ and it runs every time I boot my system. Is there a way to stop that? Nothing is mentioned on their website. The only bits of DevStack that are set to automatically run at boot are the service packages provided by the OS, such as Apache (as you found), the database server (MySQL by default), the queue server (RabbitMQ by default) and tgt if you have cinder enabled. Are you saying that you get all of the services running in screen on boot? That would mean that stack.sh or rejoin-stack.sh is being run by a boot script, and it shouldn't be. Oddly enough, there are a number of people who wish DevStack would survive a reboot. We actively discourage that to keep it from being used for more than development. dt -- Dean Troyer dtro...@gmail.com
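For the immediate question, one plausible way on the Ubuntu releases of that era to stop those OS-provided services and keep them from starting at boot is sketched below (service names and init mechanisms are assumptions; check how each package is managed on your system first):

```shell
# Stop the service packages DevStack installed (names assumed):
sudo service mysql stop
sudo service rabbitmq-server stop
sudo service apache2 stop        # same effect as apachectl -k stop

# Keep upstart-managed mysql from starting at boot:
echo manual | sudo tee /etc/init/mysql.override

# Keep sysvinit-managed rabbitmq from starting at boot:
sudo update-rc.d -f rabbitmq-server remove
```

Deleting the override file (or re-running update-rc.d with defaults) restores autostart when you want DevStack's services back.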
[Openstack] [OpenStack] Support attach CD-ROM to instance
According to the comments in https://review.openstack.org/#/c/18469/, I summarize the following work items that need to be done; please give me your suggestions: 1. I prefer to provide a new attribute when running a new instance, for example: --cdrom image id 2. This image is from glance and can be any format, not only ISO. -- Best regards, David Geng
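A minimal sketch of what item 1 could look like on a boot-style command; only the --cdrom option follows the proposal above, while the parser itself is illustrative and not the actual novaclient code:

```python
# Hypothetical CLI shape for the proposed attribute; everything except
# the --cdrom name is an assumption made for illustration.
import argparse

parser = argparse.ArgumentParser(prog="nova boot")
parser.add_argument(
    "--cdrom",
    metavar="<image-id>",
    default=None,
    help="Glance image (any format, not only ISO) to attach as a CD-ROM",
)

args = parser.parse_args(["--cdrom", "IMAGE_ID"])
print(args.cdrom)
```

Making the flag optional with a None default keeps existing boot invocations unchanged, which matches the spirit of adding a new attribute rather than altering current behavior.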
[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_nova_trunk #330
Title: raring_grizzly_nova_trunk General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/330/Project:raring_grizzly_nova_trunkDate of build:Thu, 20 Dec 2012 05:32:07 -0500Build duration:9 min 30 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0Changesaddress uuid overwritingby sdagueeditnova/tests/integrated/test_api_samples.pyConsole Output[...truncated 12160 lines...]dch -a [ae7bef3] Use service fixture in DB servicegroup tests.dch -a [a4b5aad] Reset the IPv6 API backend when resetting the conf stack.dch -a [f4ca695] fix test_nbd using stubsdch -a [1a7a0e1] Imported Translations from Transifexdch -a [21a86ef] Properly remove the time override in quota tests.dch -a [2e01dc0] Fix API samples generation.dch -a [ef5d2af] Move TimeOverride to the general reusable-test-helper place.dch -a [aef4f3c] Added conf support for security groupsdch -a [07af4ce] Add accounting for orphans to resource tracker.dch -a [37d42ca] Add more association support to network APIdch -a [6ab2790] Remove the WillNotSchedule exception.dch -a [aef9802] Replace fixtures.DetailStream with fixtures.StringStream.dch -a [19558ab] Move network_driver into new nova.network.driverdch -a [06f0e45] Move DNS manager options into network.managerdch -a [2f8ffcc] Move agent_build_get_by_triple to conductordch -a [20f0c60] Move provider_fw_rule_get_all to conductordch -a [20811e9] Move security_group operations in VirtAPI to conductordch -a [6da1dbc] Retry NBD device allocation.dch -a [4abc8cc] Use testr to run nova unittests.dch -a [694bcb7] Update command on devref docdch -a [921eec9] Fixed deleting instance booted from invalid voldch -a [461a966] Add the missing replacement text in devref doc.dch -a [a5b12b6] Add syslogging to nova-rootwrapdch -a [a630ea4] Ensure that sql_dbpool_enable is a boolean valuedch -a [d6aa0cc] Remove the deprecated quantum v1 code and directory.dch -a 
[a2101c4] Allows an instance to post encrypted passworddebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC nova_2013.1+git201212200532~raring-0ubuntu1_source.changessbuild -d raring-grizzly -n -A nova_2013.1+git201212200532~raring-0ubuntu1.dscTraceback (most recent call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1+git201212200532~raring-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1+git201212200532~raring-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications More help : https://help.launchpad.net/ListHelp
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_nova_trunk #328
Title: precise_grizzly_nova_trunk General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/328/Project:precise_grizzly_nova_trunkDate of build:Thu, 20 Dec 2012 09:01:03 -0500Build duration:6 min 26 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesDefine a product, vendor package strings in version.pyby berrangeeditnova/version.pyRemove obsolete VCS version info completelyby berrangeeditnova/version.pyeditnova/vnc/xvp_proxy.pyeditnova/tests/test_versions.pyeditnova/service.pyeditbin/nova-manageAllow loading of product/vendor/package info from external fileby berrangeeditnova/tests/test_versions.pyeditnova/version.pyaddetc/nova/release.sampleConsole Output[...truncated 10399 lines...]dch -a [aef4f3c] Added conf support for security groupsdch -a [07af4ce] Add accounting for orphans to resource tracker.dch -a [37d42ca] Add more association support to network APIdch -a [6ab2790] Remove the WillNotSchedule exception.dch -a [aef9802] Replace fixtures.DetailStream with fixtures.StringStream.dch -a [19558ab] Move network_driver into new nova.network.driverdch -a [06f0e45] Move DNS manager options into network.managerdch -a [2f8ffcc] Move agent_build_get_by_triple to conductordch -a [20f0c60] Move provider_fw_rule_get_all to conductordch -a [20811e9] Move security_group operations in VirtAPI to conductordch -a [6da1dbc] Retry NBD device allocation.dch -a [4abc8cc] Use testr to run nova unittests.dch -a [bd7fb1c] Add a developer trap for api samplesdch -a [694bcb7] Update command on devref docdch -a [921eec9] Fixed deleting instance booted from invalid voldch -a [461a966] Add the missing replacement text in devref doc.dch -a [5019de6] Allow xenapi to work with empty image metadatadch -a [18817c7] Imported Translations from Transifexdch -a [76588ed] Fix for broken switch for config_drivedch -a [98a7161] Fix use of osapi_compute_extension 
option in api_samples.dch -a [6c9d9ab] Fix errors in used_limits extensiondch -a [a5b12b6] Add syslogging to nova-rootwrapdch -a [a630ea4] Ensure that sql_dbpool_enable is a boolean valuedch -a [503d572] Fixes KeyError: 'sr_uuid' when booting from volume on xenapidch -a [d6aa0cc] Remove the deprecated quantum v1 code and directory.dch -a [a2101c4] Allows an instance to post encrypted passworddebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC nova_2013.1+git201212200901~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A nova_2013.1+git201212200901~precise-0ubuntu1.dscTraceback (most recent call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201212200901~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201212200901~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications More help : https://help.launchpad.net/ListHelp
[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_nova_trunk #331
Title: raring_grizzly_nova_trunk General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/331/Project:raring_grizzly_nova_trunkDate of build:Thu, 20 Dec 2012 09:01:04 -0500Build duration:8 min 43 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesDefine a product, vendor package strings in version.pyby berrangeeditnova/version.pyRemove obsolete VCS version info completelyby berrangeeditnova/tests/test_versions.pyeditnova/version.pyeditnova/vnc/xvp_proxy.pyeditnova/service.pyeditbin/nova-manageAllow loading of product/vendor/package info from external fileby berrangeeditnova/version.pyeditnova/tests/test_versions.pyaddetc/nova/release.sampleConsole Output[...truncated 12171 lines...]dch -a [ae7bef3] Use service fixture in DB servicegroup tests.dch -a [a4b5aad] Reset the IPv6 API backend when resetting the conf stack.dch -a [f4ca695] fix test_nbd using stubsdch -a [1a7a0e1] Imported Translations from Transifexdch -a [21a86ef] Properly remove the time override in quota tests.dch -a [2e01dc0] Fix API samples generation.dch -a [ef5d2af] Move TimeOverride to the general reusable-test-helper place.dch -a [aef4f3c] Added conf support for security groupsdch -a [07af4ce] Add accounting for orphans to resource tracker.dch -a [37d42ca] Add more association support to network APIdch -a [6ab2790] Remove the WillNotSchedule exception.dch -a [aef9802] Replace fixtures.DetailStream with fixtures.StringStream.dch -a [19558ab] Move network_driver into new nova.network.driverdch -a [06f0e45] Move DNS manager options into network.managerdch -a [2f8ffcc] Move agent_build_get_by_triple to conductordch -a [20f0c60] Move provider_fw_rule_get_all to conductordch -a [20811e9] Move security_group operations in VirtAPI to conductordch -a [6da1dbc] Retry NBD device allocation.dch -a [4abc8cc] Use testr to run nova unittests.dch -a [694bcb7] Update command on 
devref docdch -a [921eec9] Fixed deleting instance booted from invalid voldch -a [461a966] Add the missing replacement text in devref doc.dch -a [a5b12b6] Add syslogging to nova-rootwrapdch -a [a630ea4] Ensure that sql_dbpool_enable is a boolean valuedch -a [d6aa0cc] Remove the deprecated quantum v1 code and directory.dch -a [a2101c4] Allows an instance to post encrypted passworddebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC nova_2013.1+git201212200902~raring-0ubuntu1_source.changessbuild -d raring-grizzly -n -A nova_2013.1+git201212200902~raring-0ubuntu1.dscTraceback (most recent call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1+git201212200902~raring-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1+git201212200902~raring-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications More help : https://help.launchpad.net/ListHelp
[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_nova_trunk #332
Title: raring_grizzly_nova_trunk

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/332/
Project: raring_grizzly_nova_trunk
Date of build: Thu, 20 Dec 2012 09:31:03 -0500
Build duration: 8 min 57 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- Make configdrive.py use version.product_string() (by berrange): edit nova/virt/configdrive.py
- Export custom SMBIOS info to QEMU/KVM guests (by berrange): edit nova/virt/libvirt/driver.py, nova/tests/test_libvirt.py

Console Output
[...truncated 12177 lines...]
dch -a [ae7bef3] Use service fixture in DB servicegroup tests.
dch -a [a4b5aad] Reset the IPv6 API backend when resetting the conf stack.
dch -a [f4ca695] fix test_nbd using stubs
dch -a [1a7a0e1] Imported Translations from Transifex
dch -a [21a86ef] Properly remove the time override in quota tests.
dch -a [2e01dc0] Fix API samples generation.
dch -a [ef5d2af] Move TimeOverride to the general reusable-test-helper place.
dch -a [aef4f3c] Added conf support for security groups
dch -a [07af4ce] Add accounting for orphans to resource tracker.
dch -a [37d42ca] Add more association support to network API
dch -a [6ab2790] Remove the WillNotSchedule exception.
dch -a [aef9802] Replace fixtures.DetailStream with fixtures.StringStream.
dch -a [19558ab] Move network_driver into new nova.network.driver
dch -a [06f0e45] Move DNS manager options into network.manager
dch -a [2f8ffcc] Move agent_build_get_by_triple to conductor
dch -a [20f0c60] Move provider_fw_rule_get_all to conductor
dch -a [20811e9] Move security_group operations in VirtAPI to conductor
dch -a [6da1dbc] Retry NBD device allocation.
dch -a [4abc8cc] Use testr to run nova unittests.
dch -a [694bcb7] Update command on devref doc
dch -a [921eec9] Fixed deleting instance booted from invalid vol
dch -a [461a966] Add the missing replacement text in devref doc.
dch -a [a5b12b6] Add syslogging to nova-rootwrap
dch -a [a630ea4] Ensure that sql_dbpool_enable is a boolean value
dch -a [d6aa0cc] Remove the deprecated quantum v1 code and directory.
dch -a [a2101c4] Allows an instance to post encrypted password
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC nova_2013.1+git201212200932~raring-0ubuntu1_source.changes
sbuild -d raring-grizzly -n -A nova_2013.1+git201212200932~raring-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1+git201212200932~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1+git201212200932~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_nova_trunk #330
Title: precise_grizzly_nova_trunk

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/330/
Project: precise_grizzly_nova_trunk
Date of build: Thu, 20 Dec 2012 12:01:03 -0500
Build duration: 7 min 12 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- Remove fake_tests opt from test.py. (by dprince): edit nova/test.py

Console Output
[...truncated 10408 lines...]
dch -a [07af4ce] Add accounting for orphans to resource tracker.
dch -a [37d42ca] Add more association support to network API
dch -a [6ab2790] Remove the WillNotSchedule exception.
dch -a [aef9802] Replace fixtures.DetailStream with fixtures.StringStream.
dch -a [19558ab] Move network_driver into new nova.network.driver
dch -a [06f0e45] Move DNS manager options into network.manager
dch -a [2f8ffcc] Move agent_build_get_by_triple to conductor
dch -a [20f0c60] Move provider_fw_rule_get_all to conductor
dch -a [20811e9] Move security_group operations in VirtAPI to conductor
dch -a [6da1dbc] Retry NBD device allocation.
dch -a [4abc8cc] Use testr to run nova unittests.
dch -a [bd7fb1c] Add a developer trap for api samples
dch -a [694bcb7] Update command on devref doc
dch -a [921eec9] Fixed deleting instance booted from invalid vol
dch -a [461a966] Add the missing replacement text in devref doc.
dch -a [5019de6] Allow xenapi to work with empty image metadata
dch -a [18817c7] Imported Translations from Transifex
dch -a [76588ed] Fix for broken switch for config_drive
dch -a [98a7161] Fix use of osapi_compute_extension option in api_samples.
dch -a [6c9d9ab] Fix errors in used_limits extension
dch -a [a5b12b6] Add syslogging to nova-rootwrap
dch -a [a630ea4] Ensure that sql_dbpool_enable is a boolean value
dch -a [503d572] Fixes KeyError: 'sr_uuid' when booting from volume on xenapi
dch -a [d6aa0cc] Remove the deprecated quantum v1 code and directory.
dch -a [a2101c4] Allows an instance to post encrypted password
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC nova_2013.1+git201212201202~precise-0ubuntu1_source.changes
INFO:root:Destroying schroot.
sbuild -d precise-grizzly -n -A nova_2013.1+git201212201202~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201212201202~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201212201202~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_nova_trunk #333
Title: raring_grizzly_nova_trunk

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/333/
Project: raring_grizzly_nova_trunk
Date of build: Thu, 20 Dec 2012 12:01:04 -0500
Build duration: 9 min 11 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- Remove fake_tests opt from test.py. (by dprince): edit nova/test.py

Console Output
[...truncated 12180 lines...]
dch -a [ae7bef3] Use service fixture in DB servicegroup tests.
dch -a [a4b5aad] Reset the IPv6 API backend when resetting the conf stack.
dch -a [f4ca695] fix test_nbd using stubs
dch -a [1a7a0e1] Imported Translations from Transifex
dch -a [21a86ef] Properly remove the time override in quota tests.
dch -a [2e01dc0] Fix API samples generation.
dch -a [ef5d2af] Move TimeOverride to the general reusable-test-helper place.
dch -a [aef4f3c] Added conf support for security groups
dch -a [07af4ce] Add accounting for orphans to resource tracker.
dch -a [37d42ca] Add more association support to network API
dch -a [6ab2790] Remove the WillNotSchedule exception.
dch -a [aef9802] Replace fixtures.DetailStream with fixtures.StringStream.
dch -a [19558ab] Move network_driver into new nova.network.driver
dch -a [06f0e45] Move DNS manager options into network.manager
dch -a [2f8ffcc] Move agent_build_get_by_triple to conductor
dch -a [20f0c60] Move provider_fw_rule_get_all to conductor
dch -a [20811e9] Move security_group operations in VirtAPI to conductor
dch -a [6da1dbc] Retry NBD device allocation.
dch -a [4abc8cc] Use testr to run nova unittests.
dch -a [694bcb7] Update command on devref doc
dch -a [921eec9] Fixed deleting instance booted from invalid vol
dch -a [461a966] Add the missing replacement text in devref doc.
dch -a [a5b12b6] Add syslogging to nova-rootwrap
dch -a [a630ea4] Ensure that sql_dbpool_enable is a boolean value
dch -a [d6aa0cc] Remove the deprecated quantum v1 code and directory.
dch -a [a2101c4] Allows an instance to post encrypted password
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC nova_2013.1+git201212201202~raring-0ubuntu1_source.changes
sbuild -d raring-grizzly -n -A nova_2013.1+git201212201202~raring-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1+git201212201202~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1+git201212201202~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
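Every failure in these reports surfaces through the wrapper's `raise e` at build-package line 141 rather than at the failing sbuild call site. A hedged sketch (hypothetical wrapper, not the actual openstack-ubuntu-testing script) of that pattern, using a bare `raise`, which keeps the traceback pointed at the original failure when re-raising:

```python
import subprocess

def build_package(cmd):
    """Run one packaging step; on failure, re-raise for the caller."""
    try:
        subprocess.check_call(cmd)
    except subprocess.CalledProcessError:
        # A bare `raise` preserves the traceback of the check_call
        # failure; `raise e` (as in the log above) repoints the
        # traceback at this line under Python 2.
        raise

try:
    build_package(["false"])   # any command exiting non-zero
except subprocess.CalledProcessError as e:
    print("exit status:", e.returncode)
```

Run under the Python 2.7 interpreter shown in these logs, the `raise e` variant is why each report's traceback names the wrapper line instead of the sbuild invocation.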
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_nova_trunk #331
Title: precise_grizzly_nova_trunk

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/331/
Project: precise_grizzly_nova_trunk
Date of build: Thu, 20 Dec 2012 14:31:04 -0500
Build duration: 7 min 2 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- Move baremetal database tests to fixtures. (by devananda.vdv): edit nova/test.py, nova/tests/baremetal/db/base.py
- New Baremetal provisioning framework. (by devananda.vdv): edit nova/tests/baremetal/__init__.py, nova/virt/baremetal/db/sqlalchemy/api.py, nova/tests/baremetal/db/test_bm_interface.py, nova/tests/baremetal/db/__init__.py; add nova/virt/baremetal/driver.py, nova/virt/baremetal/base.py, nova/virt/baremetal/utils.py, nova/virt/baremetal/fake.py; edit nova/virt/baremetal/__init__.py, nova/virt/baremetal/db/api.py, etc/nova/rootwrap.d/compute.filters; add nova/tests/baremetal/test_driver.py; edit nova/tests/baremetal/db/utils.py; add nova/virt/baremetal/baremetal_states.py, nova/virt/baremetal/interfaces.template
- Fix a test isolation error in compute.test_compute. (by robertc): edit nova/tests/compute/test_compute.py, nova/tests/compute/test_compute_utils.py, nova/tests/test_notifications.py

Console Output
[...truncated 10449 lines...]
dch -a [07af4ce] Add accounting for orphans to resource tracker.
dch -a [37d42ca] Add more association support to network API
dch -a [6ab2790] Remove the WillNotSchedule exception.
dch -a [aef9802] Replace fixtures.DetailStream with fixtures.StringStream.
dch -a [19558ab] Move network_driver into new nova.network.driver
dch -a [06f0e45] Move DNS manager options into network.manager
dch -a [2f8ffcc] Move agent_build_get_by_triple to conductor
dch -a [20f0c60] Move provider_fw_rule_get_all to conductor
dch -a [20811e9] Move security_group operations in VirtAPI to conductor
dch -a [6da1dbc] Retry NBD device allocation.
dch -a [4abc8cc] Use testr to run nova unittests.
dch -a [bd7fb1c] Add a developer trap for api samples
dch -a [694bcb7] Update command on devref doc
dch -a [921eec9] Fixed deleting instance booted from invalid vol
dch -a [461a966] Add the missing replacement text in devref doc.
dch -a [5019de6] Allow xenapi to work with empty image metadata
dch -a [18817c7] Imported Translations from Transifex
dch -a [76588ed] Fix for broken switch for config_drive
dch -a [98a7161] Fix use of osapi_compute_extension option in api_samples.
dch -a [6c9d9ab] Fix errors in used_limits extension
dch -a [a5b12b6] Add syslogging to nova-rootwrap
dch -a [a630ea4] Ensure that sql_dbpool_enable is a boolean value
dch -a [503d572] Fixes KeyError: 'sr_uuid' when booting from volume on xenapi
dch -a [d6aa0cc] Remove the deprecated quantum v1 code and directory.
dch -a [a2101c4] Allows an instance to post encrypted password
INFO:root:Destroying schroot.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC nova_2013.1+git201212201432~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A nova_2013.1+git201212201432~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201212201432~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201212201432~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_python-novaclient_trunk #34
Title: precise_grizzly_python-novaclient_trunk

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_python-novaclient_trunk/34/
Project: precise_grizzly_python-novaclient_trunk
Date of build: Thu, 20 Dec 2012 15:01:01 -0500
Build duration: 2 min 44 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 1 out of the last 5 builds failed. Score: 80

Changes
- Use requests module for HTTP/HTTPS (by dtroyer): edit novaclient/v1_1/servers.py, tests/v1_1/fakes.py, novaclient/shell.py, tests/test_shell.py, tests/utils.py, novaclient/v1_1/client.py, tests/v1_1/test_auth.py, tools/pip-requires, tests/test_http.py, tests/test_auth_plugins.py, novaclient/exceptions.py, novaclient/client.py

Console Output
[...truncated 1965 lines...]
Job: python-novaclient_2.10.0.35.gaa1df04+git201212201501~precise-0ubuntu1.dsc
Machine Architecture: amd64
Package: python-novaclient
Package-Time: 99
Source-Version: 1:2.10.0.35.gaa1df04+git201212201501~precise-0ubuntu1
Space: 1816
Status: attempted
Version: 1:2.10.0.35.gaa1df04+git201212201501~precise-0ubuntu1
Finished at 20121220-1503
Build needed 00:01:39, 1816k disc space
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'python-novaclient_2.10.0.35.gaa1df04+git201212201501~precise-0ubuntu1.dsc']' returned non-zero exit status 2
ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'python-novaclient_2.10.0.35.gaa1df04+git201212201501~precise-0ubuntu1.dsc']' returned non-zero exit status 2
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/python-novaclient/grizzly /tmp/tmp5cEC06/python-novaclient
mk-build-deps -i -r -t apt-get -y /tmp/tmp5cEC06/python-novaclient/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 993741988804fcba6efbab0cf182300c779c00e5..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/python-novaclient/precise-grizzly --force
dch -b -D precise --newversion 1:2.10.0.35.gaa1df04+git201212201501~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [aa1df04] Use requests module for HTTP/HTTPS
dch -a [e6e22db] Make --tenant a required arg for quota-show
dch -a [c59de35] Add support for the coverage extension.
dch -a [eaf3c36] Specify some arguments by name.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC python-novaclient_2.10.0.35.gaa1df04+git201212201501~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A python-novaclient_2.10.0.35.gaa1df04+git201212201501~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'python-novaclient_2.10.0.35.gaa1df04+git201212201501~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'python-novaclient_2.10.0.35.gaa1df04+git201212201501~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Failure: raring_grizzly_python-novaclient_trunk #32
Title: raring_grizzly_python-novaclient_trunk

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_python-novaclient_trunk/32/
Project: raring_grizzly_python-novaclient_trunk
Date of build: Thu, 20 Dec 2012 15:01:02 -0500
Build duration: 4 min 8 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 1 out of the last 5 builds failed. Score: 80

Changes
- Use requests module for HTTP/HTTPS (by dtroyer): edit tests/test_auth_plugins.py, novaclient/client.py, tests/utils.py, tests/test_shell.py, novaclient/v1_1/client.py, tools/pip-requires, tests/v1_1/test_auth.py, novaclient/exceptions.py, tests/test_http.py, novaclient/shell.py, tests/v1_1/fakes.py, novaclient/v1_1/servers.py

Console Output
[...truncated 2843 lines...]
Job: python-novaclient_2.10.0.35.gaa1df04+git201212201501~raring-0ubuntu1.dsc
Machine Architecture: amd64
Package: python-novaclient
Package-Time: 132
Source-Version: 1:2.10.0.35.gaa1df04+git201212201501~raring-0ubuntu1
Space: 1804
Status: attempted
Version: 1:2.10.0.35.gaa1df04+git201212201501~raring-0ubuntu1
Finished at 20121220-1505
Build needed 00:02:12, 1804k disc space
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'python-novaclient_2.10.0.35.gaa1df04+git201212201501~raring-0ubuntu1.dsc']' returned non-zero exit status 2
ERROR:root:Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'python-novaclient_2.10.0.35.gaa1df04+git201212201501~raring-0ubuntu1.dsc']' returned non-zero exit status 2
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/python-novaclient/grizzly /tmp/tmpmCR9qg/python-novaclient
mk-build-deps -i -r -t apt-get -y /tmp/tmpmCR9qg/python-novaclient/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 993741988804fcba6efbab0cf182300c779c00e5..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/python-novaclient/raring-grizzly --force
dch -b -D raring --newversion 1:2.10.0.35.gaa1df04+git201212201501~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a [aa1df04] Use requests module for HTTP/HTTPS
dch -a [e6e22db] Make --tenant a required arg for quota-show
dch -a [c59de35] Add support for the coverage extension.
dch -a [eaf3c36] Specify some arguments by name.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC python-novaclient_2.10.0.35.gaa1df04+git201212201501~raring-0ubuntu1_source.changes
sbuild -d raring-grizzly -n -A python-novaclient_2.10.0.35.gaa1df04+git201212201501~raring-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'python-novaclient_2.10.0.35.gaa1df04+git201212201501~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'python-novaclient_2.10.0.35.gaa1df04+git201212201501~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_nova_trunk #334
Title: precise_grizzly_nova_trunk

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/334/
Project: precise_grizzly_nova_trunk
Date of build: Thu, 20 Dec 2012 23:31:03 -0500
Build duration: 6 min 24 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- Imported Translations from Transifex (by Jenkins): edit nova/locale/nova.pot

Console Output
[...truncated 10537 lines...]
dch -a [37d42ca] Add more association support to network API
dch -a [6ab2790] Remove the WillNotSchedule exception.
dch -a [aef9802] Replace fixtures.DetailStream with fixtures.StringStream.
dch -a [19558ab] Move network_driver into new nova.network.driver
dch -a [06f0e45] Move DNS manager options into network.manager
dch -a [2f8ffcc] Move agent_build_get_by_triple to conductor
dch -a [20f0c60] Move provider_fw_rule_get_all to conductor
dch -a [20811e9] Move security_group operations in VirtAPI to conductor
dch -a [6da1dbc] Retry NBD device allocation.
dch -a [4abc8cc] Use testr to run nova unittests.
dch -a [bd7fb1c] Add a developer trap for api samples
dch -a [694bcb7] Update command on devref doc
dch -a [921eec9] Fixed deleting instance booted from invalid vol
dch -a [461a966] Add the missing replacement text in devref doc.
dch -a [5019de6] Allow xenapi to work with empty image metadata
dch -a [18817c7] Imported Translations from Transifex
dch -a [76588ed] Fix for broken switch for config_drive
INFO:root:Destroying schroot.
dch -a [98a7161] Fix use of osapi_compute_extension option in api_samples.
dch -a [6c9d9ab] Fix errors in used_limits extension
dch -a [a5b12b6] Add syslogging to nova-rootwrap
dch -a [a630ea4] Ensure that sql_dbpool_enable is a boolean value
dch -a [503d572] Fixes KeyError: 'sr_uuid' when booting from volume on xenapi
dch -a [d6aa0cc] Remove the deprecated quantum v1 code and directory.
dch -a [a2101c4] Allows an instance to post encrypted password
dch -a [44d543b] Volume backed live migration w/o shared storage
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC nova_2013.1+git201212202331~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A nova_2013.1+git201212202331~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201212202331~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201212202331~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_nova_trunk #337
Title: raring_grizzly_nova_trunk

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/337/
Project: raring_grizzly_nova_trunk
Date of build: Thu, 20 Dec 2012 23:31:03 -0500
Build duration: 8 min 31 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- Imported Translations from Transifex (by Jenkins): edit nova/locale/nova.pot

Console Output
[...truncated 12309 lines...]
dch -a [f4ca695] fix test_nbd using stubs
dch -a [1a7a0e1] Imported Translations from Transifex
dch -a [21a86ef] Properly remove the time override in quota tests.
dch -a [2e01dc0] Fix API samples generation.
dch -a [ef5d2af] Move TimeOverride to the general reusable-test-helper place.
dch -a [aef4f3c] Added conf support for security groups
dch -a [07af4ce] Add accounting for orphans to resource tracker.
dch -a [37d42ca] Add more association support to network API
dch -a [6ab2790] Remove the WillNotSchedule exception.
dch -a [aef9802] Replace fixtures.DetailStream with fixtures.StringStream.
dch -a [19558ab] Move network_driver into new nova.network.driver
dch -a [06f0e45] Move DNS manager options into network.manager
dch -a [2f8ffcc] Move agent_build_get_by_triple to conductor
dch -a [20f0c60] Move provider_fw_rule_get_all to conductor
dch -a [20811e9] Move security_group operations in VirtAPI to conductor
dch -a [6da1dbc] Retry NBD device allocation.
dch -a [4abc8cc] Use testr to run nova unittests.
dch -a [694bcb7] Update command on devref doc
dch -a [921eec9] Fixed deleting instance booted from invalid vol
dch -a [461a966] Add the missing replacement text in devref doc.
dch -a [a5b12b6] Add syslogging to nova-rootwrap
dch -a [a630ea4] Ensure that sql_dbpool_enable is a boolean value
dch -a [d6aa0cc] Remove the deprecated quantum v1 code and directory.
dch -a [a2101c4] Allows an instance to post encrypted password
INFO:root:Destroying schroot.
dch -a [44d543b] Volume backed live migration w/o shared storage
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC nova_2013.1+git201212202331~raring-0ubuntu1_source.changes
sbuild -d raring-grizzly -n -A nova_2013.1+git201212202331~raring-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1+git201212202331~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1+git201212202331~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_nova_trunk #335
Title: precise_grizzly_nova_trunk

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/335/
Project: precise_grizzly_nova_trunk
Date of build: Fri, 21 Dec 2012 00:01:03 -0500
Build duration: 6 min 28 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- Update exceptions to pass correct kwargs. (by dprince): edit nova/api/ec2/cloud.py, nova/network/manager.py, nova/crypto.py, nova/virt/libvirt/driver.py, nova/db/sqlalchemy/api.py, nova/virt/fake.py, nova/compute/api.py
- Raise old exception instance instead of new one. (by dprince): edit nova/conductor/api.py

Console Output
[...truncated 10543 lines...]
dch -a [37d42ca] Add more association support to network API
dch -a [6ab2790] Remove the WillNotSchedule exception.
dch -a [aef9802] Replace fixtures.DetailStream with fixtures.StringStream.
dch -a [19558ab] Move network_driver into new nova.network.driver
dch -a [06f0e45] Move DNS manager options into network.manager
dch -a [2f8ffcc] Move agent_build_get_by_triple to conductor
dch -a [20f0c60] Move provider_fw_rule_get_all to conductor
dch -a [20811e9] Move security_group operations in VirtAPI to conductor
dch -a [6da1dbc] Retry NBD device allocation.
dch -a [4abc8cc] Use testr to run nova unittests.
dch -a [bd7fb1c] Add a developer trap for api samples
dch -a [694bcb7] Update command on devref doc
dch -a [921eec9] Fixed deleting instance booted from invalid vol
dch -a [461a966] Add the missing replacement text in devref doc.
dch -a [5019de6] Allow xenapi to work with empty image metadata
INFO:root:Destroying schroot.
dch -a [18817c7] Imported Translations from Transifex
dch -a [76588ed] Fix for broken switch for config_drive
dch -a [98a7161] Fix use of osapi_compute_extension option in api_samples.
dch -a [6c9d9ab] Fix errors in used_limits extension
dch -a [a5b12b6] Add syslogging to nova-rootwrap
dch -a [a630ea4] Ensure that sql_dbpool_enable is a boolean value
dch -a [503d572] Fixes KeyError: 'sr_uuid' when booting from volume on xenapi
dch -a [d6aa0cc] Remove the deprecated quantum v1 code and directory.
dch -a [a2101c4] Allows an instance to post encrypted password
dch -a [44d543b] Volume backed live migration w/o shared storage
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC nova_2013.1+git201212210001~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A nova_2013.1+git201212210001~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201212210001~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201212210001~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure