Re: [Openstack] Why my vm often change into shut off status by itself?

2012-12-06 Thread pyw
Generally, if I use virsh to restart the virtual machine, it runs for some time
before shutting off again.

$ date
Thu Dec  6 17:04:41 CST 2012

$ virsh start instance-006e
Domain instance-006e started

$ virsh list
 Id    Name                State
----------------------------------
 158   instance-006e       running

/var/log/libvirt/qemu$ sudo tail -f instance-006e.log
2012-12-03 06:14:13.488+0000: shutting down
qemu: terminating on signal 15 from pid 1957
2012-12-03 06:14:59.819+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -cpu
core2duo,+lahf_lm,+aes,+popcnt,+sse4.2,+sse4.1,+cx16,-monitor,-vme
-enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name
instance-006e -uuid d7798df8-e225-4178-9d0b-f6691d78ce18 -nodefconfig
-nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-006e.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -drive
file=/data0/instances/instance-006e/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
-device
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=25,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=fa:16:3e:17:ca:dd,bus=pci.0,addr=0x3
-chardev
file,id=charserial0,path=/data0/instances/instance-006e/console.log
-device isa-serial,chardev=charserial0,id=serial0 -chardev
pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -usb
-device usb-tablet,id=input0 -vnc 0.0.0.0:2 -k en-us -vga cirrus -incoming
fd:23 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
char device redirected to /dev/pts/27
qemu: terminating on signal 15 from pid 1957
2012-12-04 06:54:27.150+0000: shutting down
2012-12-06 09:02:46.343+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -cpu
core2duo,+lahf_lm,+aes,+popcnt,+sse4.2,+sse4.1,+cx16,-monitor,-vme
-enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name
instance-006e -uuid d7798df8-e225-4178-9d0b-f6691d78ce18 -nodefconfig
-nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-006e.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -drive
file=/data0/instances/instance-006e/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
-device
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=23,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=fa:16:3e:17:ca:dd,bus=pci.0,addr=0x3
-chardev
file,id=charserial0,path=/data0/instances/instance-006e/console.log
-device isa-serial,chardev=charserial0,id=serial0 -chardev
pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -usb
-device usb-tablet,id=input0 -vnc 0.0.0.0:2 -k en-us -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
char device redirected to /dev/pts/30

We can see these two log entries:
...
2012-12-03 06:14:59.819+0000: starting up
...
2012-12-04 06:54:27.150+0000: shutting down

The VM shut off after running for one day.


2012/12/6 Veera Reddy veerare...@gmail.com



 On Thu, Dec 6, 2012 at 8:29 AM, pyw pengyu...@gmail.com wrote:

 instance-0040    shut off

 Hi,

 Try to start the vm with the virsh command:

  virsh start instance-0040

 With this we can see what the actual problem is.

 Regards,
 Veera.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Why my vm often change into shut off status by itself?

2012-12-06 Thread Wangpan
Are all the VMs shutting down at the same time, i.e. at or near
'2012-12-04 06:54:27.150+0000: shutting down'?
If this is true, I guess it may be the host's problem.

2012-12-06



Wangpan



From: pyw
Sent: 2012-12-06 17:10
Subject: Re: [Openstack] Why my vm often change into shut off status by itself?
To: Veera Reddy veerare...@gmail.com
Cc: openstack openstack@lists.launchpad.net



Re: [Openstack] Why my vm often change into shut off status by itself?

2012-12-06 Thread Stephen Gran
Hello,

On Thu, 2012-12-06 at 17:09 +0800, pyw wrote:


 We can see these two log entries:
 ...
 2012-12-03 06:14:59.819+0000: starting up
 ...
 2012-12-04 06:54:27.150+0000: shutting down

 The VM shut off after running for one day.

Without being sure, I am going to guess that you have a morning cron job
for something like log rotation that signals a process to restart or
reload.  That process seems to have PID 1957.

Maybe you can work out what's happening from that.

Cheers,
-- 
Stephen Gran
Senior Systems Integrator - guardian.co.uk

Please consider the environment before printing this email.
--
Visit guardian.co.uk - website of the year
 
www.guardian.co.uk www.observer.co.uk www.guardiannews.com 
 
On your mobile, visit m.guardian.co.uk or download the Guardian
iPhone app www.guardian.co.uk/iphone and iPad edition www.guardian.co.uk/iPad 
 
Save up to 37% by subscribing to the Guardian and Observer - choose the papers 
you want and get full digital access. 
Visit guardian.co.uk/subscribe
 
-
This e-mail and all attachments are confidential and may also
be privileged. If you are not the named recipient, please notify
the sender and delete the e-mail and all attachments immediately.
Do not disclose the contents to another person. You may not use
the information for any purpose, or store, or copy, it in any way.
 
Guardian News & Media Limited is not liable for any computer
viruses or other material transmitted with or as part of this
e-mail. You should employ virus checking software.
 
Guardian News & Media Limited
 
A member of Guardian Media Group plc
Registered Office
PO Box 68164
Kings Place
90 York Way
London
N1P 2AP
 
Registered in England Number 908396




Re: [Openstack] Why my vm often change into shut off status by itself?

2012-12-06 Thread pyw
To Stephen Gran:

$ ps -ef|grep 1957
root  1957 1  0 Nov22 ?02:33:06 /usr/sbin/libvirtd -d

1957 is libvirtd...

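As a side note not in the original thread: the "signal 15" in the qemu log is
SIGTERM, a normal termination request rather than a crash, which fits libvirtd
(PID 1957) deliberately stopping the guest. Python's standard signal module
confirms the number-to-name mapping:

```python
import signal

# Map the raw signal number from the qemu log line
# "terminating on signal 15 from pid 1957" to its symbolic name.
print(signal.Signals(15).name)   # SIGTERM on Linux
print(signal.SIGTERM == 15)      # True on Linux
```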

other host info:
$ free -m
 total   used   free sharedbuffers cached
Mem: 96675  55961  40713  0561  41103
-/+ buffers/cache:  14296  82378
Swap:12134  5  12129

$ df -h
Filesystem  Size  Used Avail Use% Mounted on
/dev/sda119G  4.7G   14G  27% /
udev 48G   12K   48G   1% /dev
tmpfs19G  492K   19G   1% /run
none5.0M 0  5.0M   0% /run/lock
none 48G 0   48G   0% /run/shm
/dev/sda2   9.4G  519M  8.5G   6% /tmp
/dev/sda5   9.4G  3.2G  5.8G  36% /var
cgroup   48G 0   48G   0% /sys/fs/cgroup
/dev/sdb1   5.4T  147G  5.0T   3% /data0
(/data0 is nova instances directory)

$ cat /etc/nova/nova.conf|grep libvirt
compute_driver=libvirt.LibvirtDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
libvirt_type=kvm
libvirt_cpu_mode=host-model



2012/12/6 Stephen Gran stephen.g...@guardian.co.uk



Re: [Openstack] Why my vm often change into shut off status by itself?

2012-12-06 Thread pyw
Individual virtual machines shutting off automatically happens frequently, but
this time all of the virtual machines shut off at the same time.

If nova fails to delete a virtual machine, could that cause the virtual
machine to be shut down?


2012/12/6 Wangpan hzwang...@corp.netease.com

 Are all the VMs shutting down at the same time, i.e. at or near
 '2012-12-04 06:54:27.150+0000: shutting down'?
 If this is true, I guess it may be the host's problem.



Re: [Openstack] Why my vm often change into shut off status by itself?

2012-12-06 Thread Wangpan
qemu: terminating on signal 15 from pid 1957
This means the VM was shut off by libvirtd / the libvirt API; the log of my VM
is the same as this, so you should check who calls libvirt to shut down your
VMs. I have no other ideas now, good luck, guy!

2012-12-06



Wangpan



From: pyw
Sent: 2012-12-06 17:34
Subject: Re: Re: [Openstack] Why my vm often change into shut off status by itself?
To: Wangpan hzwang...@corp.netease.com
Cc: openstack openstack@lists.launchpad.net


Re: [Openstack] Accessing Nova DB from the Compute Host

2012-12-06 Thread Razique Mahroua
Check this, Trinath: https://bugs.launchpad.net/keystone/+bug/860885
Regards,
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com - Tel: +33 9 72 37 94 15

On 5 Dec 2012, at 12:08, Razique Mahroua razique.mahr...@gmail.com wrote:

Hi Trinath, just add the right credentials into your .bashrc or any file the
system user can source:

export SERVICE_TOKEN=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://$keystone-IP:5000/v2.0/
export SERVICE_ENDPOINT=http://$keystone-IP:35357/v2.0/

and it would work.
Regards,
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com - Tel: +33 9 72 37 94 15

On 5 Dec 2012, at 12:04, Trinath Somanchi trinath.soman...@gmail.com wrote:

Hi -

Is there any way, without using the nova-client from the compute host, to
access the nova database? I tried using the /nova/db/api.py and
/nova/db/sqlalchemy/api.py class definitions for accessing the database but
failed to get the data.

I get this error for the sample def I have written:

  File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 5263, in sampledata_by_host
    filter(models.Instance.host == host_name).all()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2115, in all
    return list(self)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2227, in __iter__
    return self._execute_and_instances(context)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2240, in _execute_and_instances
    close_with_result=True)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2231, in _connection_from_session
    **kw)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 730, in connection
    close_with_result=close_with_result)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 736, in _connection_for_bind
    return engine.contextual_connect(**kwargs)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2490, in contextual_connect
    self.pool.connect(),
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 224, in connect
    return _ConnectionFairy(self).checkout()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 387, in __init__
    rec = self._connection_record = pool._do_get()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 802, in _do_get
    return self._create_connection()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 188, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 270, in __init__
    self.connection = self.__connect()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 330, in __connect
    connection = self.__pool._creator()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", line 80, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 281, in connect
    return self.dbapi.connect(*cargs, **cparams)
OperationalError: (OperationalError) unable to open database file None None

Can anyone help with when this error occurs and how to resolve it? Thanks in
advance.

-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130



[Openstack] Openstack Nova/Quantum :; api-paste.ini file

2012-12-06 Thread Trinath Somanchi
Hi-

What is the significance of the api-paste.ini file in the configuration of
nova, quantum, and the other modules of OpenStack?

How is this configuration parsed and used, and by which API of the OpenStack
modules?

My quantum api-paste.ini looks this way.

[composite:quantum]
use = egg:Paste#urlmap
/: quantumversions
/v2.0: quantumapi_v2_0

[composite:quantumapi_v2_0]
use = call:quantum.auth:pipeline_factory
noauth = extensions quantumapiapp_v2_0
keystone = authtoken keystonecontext extensions quantumapiapp_v2_0

[filter:keystonecontext]
paste.filter_factory = quantum.auth:QuantumKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = localhost
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = password

[filter:extensions]
paste.filter_factory =
quantum.extensions.extensions:plugin_aware_extension_middleware_factory

[app:quantumversions]
paste.app_factory = quantum.api.versions:Versions.factory

[app:quantumapiapp_v2_0]
paste.app_factory = quantum.api.v2.router:APIRouter.factory

What do these [  ] section headers mean?

Can anyone kindly help me understand these doubts?

Thanks in advance.

-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130


[Openstack] Openstack Folsom and New kind of setup

2012-12-06 Thread Trinath Somanchi
Hi Stackers-

I have a doubt about a new kind of setup with Folsom, and I need your help
with it.

Suppose we install the nova, quantum, glance, keystone, mysql-database
and Horizon components of the controller on individual machines.

How will Horizon be able to get and set the configuration data?

Can anyone help me understand this...

Thanks in advance.

-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130


Re: [Openstack] Openstack Nova/Quantum :; api-paste.ini file

2012-12-06 Thread Kevin L. Mitchell
On Thu, 2012-12-06 at 16:11 +0530, Trinath Somanchi wrote:
 What is the significance of api-paste.ini file in the configuration of
 nova and quantum and other modules of openstack? 
 
 How this configuration is parsed and used? by which api of the
 openstack modules? 

So, api-paste.ini is parsed by the PasteDeploy package.  As a first step
to understanding this file, see this section of the PasteDeploy
documentation:

http://pythonpaste.org/deploy/#config-uris

(Note: the file is formatted as a standard INI file, and I believe
PasteDeploy uses the standard Python package ConfigParser to read it…)
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com




Re: [Openstack] Openstack Nova/Quantum :; api-paste.ini file

2012-12-06 Thread Trinath Somanchi
Hi Kevin-

Thanks for the reply, but a few of my doubts remain:

[1] What is the significance of the api-paste.ini file in the configuration
of nova/quantum and the other modules of OpenStack?

[2] How do the modules use these API configuration options, and how are they
different from normal .conf files?

Kindly help me understand these.

-
Trinath


On Thu, Dec 6, 2012 at 10:19 PM, Kevin L. Mitchell 
kevin.mitch...@rackspace.com wrote:





-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130


Re: [Openstack] Openstack Nova/Quantum :; api-paste.ini file

2012-12-06 Thread Kevin L. Mitchell
Honestly, I don't understand your questions; I figured the documentation
I pointed you to would answer them, and the fact it doesn't suggests
that you're not asking what I thought you were asking.  Maybe an
approach from the beginning:

Nova, Quantum, Glance, Keystone, etc. all have, as components, a REST
API.  They use the PasteDeploy package to build this API; PasteDeploy
provides a means of building a WSGI stack (WSGI is the Python Web Server
Gateway Interface, an interface by which HTTP requests can be presented
to a Python application; it allows for not only an application, but also
a set of middleware, which wraps the application and can provide
enhancements).

The various configuration files you reference are used by PasteDeploy to
construct the WSGI stack for the API; that is, the configuration file
tells PasteDeploy that the nova-api service is composed of a specified
controller, wrapped by middleware that implements exception trap
translation, authentication checks, ratelimit enforcement, etc., all in
a specific order.  In essence, the configuration file acts sort of like
code, rather than configuration; it expresses the structure of the final
application.  (Although configuration can also be expressed in the file,
we're trying to avoid that, so that we don't mix configuration with
code.)
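To make this concrete, here is a toy sketch (my own illustration, not Nova's actual code) of the kind of nesting PasteDeploy builds from such a file: an innermost application wrapped by a made-up piece of auth middleware:

```python
# Illustrative WSGI stack: FakeAuthMiddleware and the inner app are
# invented examples, standing in for the middleware/app sections that
# api-paste.ini would name.

def app(environ, start_response):
    # Innermost "controller": answers every request with a fixed body.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from the API\n']

class FakeAuthMiddleware(object):
    """Toy middleware: reject requests lacking an auth token header."""

    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        if 'HTTP_X_AUTH_TOKEN' not in environ:
            start_response('401 Unauthorized',
                           [('Content-Type', 'text/plain')])
            return [b'missing token\n']
        # Otherwise, pass the request down the stack unchanged.
        return self.application(environ, start_response)

# PasteDeploy would build this nesting from the config file instead of
# our doing it by hand:
stack = FakeAuthMiddleware(app)
```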

Does that help you some?

On Thu, 2012-12-06 at 22:29 +0530, Trinath Somanchi wrote:
 [1] What is the significance of the api-paste.ini file in the
 configuration of nova/quantum and other modules of OpenStack?
 
 
 [2] How do the modules use these API configuration options? How is
 their use different from normal .conf files?

-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com




Re: [Openstack] Openstack Nova/Quantum :; api-paste.ini file

2012-12-06 Thread Kevin L. Mitchell
It's probably best to ask these sorts of questions on the email list, as
it gives an opportunity to others to answer them, as well as allowing
others who may have similar questions to see the answers in the first
place.

On Thu, 2012-12-06 at 23:24 +0530, Trinath Somanchi wrote:
 [1] In nova or quantum api,
 We can access the .conf params,
 
 This way...
 
 cfg.CONF.x, as per the source code... We can get
 api-paste-config too... But I wonder how we can make the paste API conf
 values, like admin_user, accessible this same way.

PasteDeploy passes configuration options as arguments to the
constructors/factories for the various applications and middleware.
But, as I say, we're trying to avoid relying on this data in nova; the
only consumer of it I am aware of is the Keystone auth_token middleware,
and it has the capability now of specifying its necessary configuration
in the [keystone_authtoken] section of the nova/glance/quantum/cinder
configuration files.  (I suspect the Keystone team is deprecating the
configuration through api-paste.ini.)  This should all be documented in
the PasteDeploy manual…

 [2] since nova/quantum run as services, how do webob and wsgi play a
 role to prepare the request dict?

At this point, we leave behind PasteDeploy.  To answer your second
question first, WSGI is an interface specification; it describes how a
web application can be called by the server which receives the HTTP
request.  You can find out more about WSGI from PEP-333, at:

http://www.python.org/dev/peps/pep-0333/
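For illustration, the smallest possible WSGI application looks like this (a toy example of mine, not code from nova):

```python
# Minimal PEP-333 application: a callable taking the environ dict and a
# start_response callback, and returning an iterable of byte strings.
def simple_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    path = environ.get('PATH_INFO', '/')
    return [('You requested %s\n' % path).encode('utf-8')]

# The stdlib can serve it directly, e.g.:
#   from wsgiref.simple_server import make_server
#   make_server('', 8000, simple_app).serve_forever()
```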

As for webob, that is another package used by nova, etc., which changes
the interface we actually implement; that is, a WSGI application is a
callable taking a dictionary with the environment and a start_response
callback, but webob takes these two arguments and encapsulates them in a
Request class, which provides simplified access to the environment data
and some utility methods.  In essence, webob implements the
strange-looking parts of the WSGI interface spec for us, and we can
concentrate on getting the job done.
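Roughly, in pure Python (webob itself is not used below; `Request` and `wsgify_like` are invented stand-ins that only mirror the idea of webob's `Request` and `wsgify`):

```python
# Pure-Python analogy for what webob does for us: wrap the raw
# (environ, start_response) calling convention so a handler can work
# with a request object instead.

class Request(object):
    """Tiny stand-in for webob.Request: convenient access to environ."""

    def __init__(self, environ):
        self.environ = environ
        self.path = environ.get('PATH_INFO', '/')
        self.method = environ.get('REQUEST_METHOD', 'GET')

def wsgify_like(handler):
    """Turn handler(request) -> (status, body) into a WSGI callable."""
    def wsgi_app(environ, start_response):
        status, body = handler(Request(environ))
        start_response(status, [('Content-Type', 'text/plain')])
        return [body]
    return wsgi_app

@wsgify_like
def show_path(request):
    # The handler never touches environ or start_response directly.
    return '200 OK', ('path is %s\n' % request.path).encode('utf-8')
```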

 [3] When does( at what level )keystone authentication happens for
 given RESTful request...

Keystone authentication happens, for most projects, in two separate
pieces of middleware.  The first is auth_token, contained in the
python-keystoneclient package (it was just moved from the keystone
package); this piece of middleware grabs the token out of the incoming
request, verifies that it is a valid and unexpired token, then inserts
various authentication data needed by the project (user and tenant IDs,
for instance).

The second piece of authentication is more or less a shim between the
Keystone auth_token and the project; it extracts the data that
auth_token injected into the request, then builds a project-specific
authentication context.  This context is how the various projects keep
track of what user made the request, and is used in authorization checks
(Does this user have permission to take this action on this
resource?).
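A heavily simplified sketch of those two stages (the class names, header names, and token store below are invented stand-ins, not Keystone's actual code):

```python
# Stage 1 validates the token and injects identity headers; stage 2 is
# the project-side shim that turns those headers into a context.  All
# names here are illustrative only.

VALID_TOKENS = {'tok-123': {'user_id': 'u1', 'tenant_id': 't1'}}  # fake store

class TokenMiddleware(object):
    """Like auth_token: verify the token, inject identity data."""

    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        info = VALID_TOKENS.get(environ.get('HTTP_X_AUTH_TOKEN'))
        if info is None:
            start_response('401 Unauthorized', [])
            return [b'invalid token']
        environ['HTTP_X_USER_ID'] = info['user_id']
        environ['HTTP_X_TENANT_ID'] = info['tenant_id']
        return self.application(environ, start_response)

class ContextMiddleware(object):
    """Project shim: build a request context from the injected headers."""

    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        environ['example.context'] = {
            'user_id': environ.get('HTTP_X_USER_ID'),
            'tenant_id': environ.get('HTTP_X_TENANT_ID'),
        }
        return self.application(environ, start_response)

def app(environ, start_response):
    # A controller would use the context for authorization checks.
    ctx = environ['example.context']
    start_response('200 OK', [])
    return [('user=%s tenant=%s'
             % (ctx['user_id'], ctx['tenant_id'])).encode('utf-8')]

pipeline = TokenMiddleware(ContextMiddleware(app))
```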
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com




[Openstack] Problem with keystone on Debian

2012-12-06 Thread Guilherme Russi
Hello Guys,

 I'm trying to install OpenStack on Debian Wheezy, but I'm running into
problems with keystone: when I try to create a user in a tenant using
the --tenant-id option, it returns the error: keystone: error:
unrecognized arguments: --service-id service-id number.

 Does anybody know if it is a version problem? I've tried on Ubuntu 12.10
and it worked.

Hope someone can help me.

Thank you.


Re: [Openstack] Openstack Nova/Quantum :; api-paste.ini file

2012-12-06 Thread Trinath Somanchi
Thanks a lot, Kevin..

Your explanation has cleared my doubts..

Putting together what I understand...

Suppose we have a request to Nova...

The following steps are performed...

1. The request is captured by webob and is authenticated by keystone and is
decorated to wsgi app
2. Nova-api maps the url params to extensions
3. Nova-api extensions return the data dict.. Which webob returns as
response to the request in json/xml format...
4. Paste-api helps the keystone and other modules for update of the
request...

Thus the request is served...

In the above steps, I might be misunderstanding the concepts..

Kindly help me by validating my understanding...

-
Trinath
On Dec 6, 2012 11:42 PM, Kevin L. Mitchell kevin.mitch...@rackspace.com
wrote:

 It's probably best to ask these sorts of questions on the email list, as
 it gives an opportunity to others to answer them, as well as allowing
 others who may have similar questions to see the answers in the first
 place.

 On Thu, 2012-12-06 at 23:24 +0530, Trinath Somanchi wrote:
  [1] In nova or quantum api,
  We can access the .conf params,
 
  This way...
 
  cfg.CONF.x, as per the source code... We can get
  api-paste-config too... But I wonder how we can make the paste API conf
  values, like admin_user, accessible this same way.

 PasteDeploy passes configuration options as arguments to the
 constructors/factories for the various applications and middleware.
 But, as I say, we're trying to avoid relying on this data in nova; the
 only consumer of it I am aware of is the Keystone auth_token middleware,
 and it has the capability now of specifying its necessary configuration
 in the [keystone_authtoken] section of the nova/glance/quantum/cinder
 configuration files.  (I suspect the Keystone team is deprecating the
 configuration through api-paste.ini.)  This should all be documented in
 the PasteDeploy manual…

  [2] since nova/quantum run as services, how do webob and wsgi play a
  role to prepare the request dict?

 At this point, we leave behind PasteDeploy.  To answer your second
 question first, WSGI is an interface specification; it describes how a
 web application can be called by the server which receives the HTTP
 request.  You can find out more about WSGI from PEP-333, at:

 http://www.python.org/dev/peps/pep-0333/

 As for webob, that is another package used by nova, etc., which changes
 the interface we actually implement; that is, a WSGI application is a
 callable taking a dictionary with the environment and a start_response
 callback, but webob takes these two arguments and encapsulates them in a
 Request class, which provides simplified access to the environment data
 and some utility methods.  In essence, webob implements the
 strange-looking parts of the WSGI interface spec for us, and we can
 concentrate on getting the job done.

  [3] When does( at what level )keystone authentication happens for
  given RESTful request...

 Keystone authentication happens, for most projects, in two separate
 pieces of middleware.  The first is auth_token, contained in the
 python-keystoneclient package (it was just moved from the keystone
 package); this piece of middleware grabs the token out of the incoming
 request, verifies that it is a valid and unexpired token, then inserts
 various authentication data needed by the project (user and tenant IDs,
 for instance).

 The second piece of authentication is more or less a shim between the
 Keystone auth_token and the project; it extracts the data that
 auth_token injected into the request, then builds a project-specific
 authentication context.  This context is how the various projects keep
 track of what user made the request, and is used in authorization checks
 (Does this user have permission to take this action on this
 resource?).
 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com





Re: [Openstack] Problem with keystone on Debian

2012-12-06 Thread Alberto Molina Coballes
On Thu, Dec 06, 2012 at 04:21:49PM -0200, Guilherme Russi wrote:
 Hello Guys,
 
  I'm trying to install OpenStack on Debian Wheezy, but I'm running into
 problems with keystone: when I try to create a user in a tenant using
 the --tenant-id option, it returns the error: keystone: error:
 unrecognized arguments: --service-id service-id number.
 

Are you using essex (official repo) or folsom (backport repo)?

If you are using essex, you must use the command:

keystone user-role-add --user user_id --role role_id --tenant_id tenant_id

where the *_id values are the user, role, and tenant ids previously created.

Cheers

Alberto



Re: [Openstack] Openstack Nova/Quantum :; api-paste.ini file

2012-12-06 Thread Michael Basnight
Seems like a good start to a wiki page to me :)

Sent from my digital shackles

On Dec 6, 2012, at 11:27 AM, Kevin L. Mitchell kevin.mitch...@rackspace.com 
wrote:

 Honestly, I don't understand your questions; I figured the documentation
 I pointed you to would answer them, and the fact it doesn't suggests
 that you're not asking what I thought you were asking.  Maybe an
 approach from the beginning:
 
 Nova, Quantum, Glance, Keystone, etc. all have, as components, a REST
 API.  They use the PasteDeploy package to build this API; PasteDeploy
 provides a means of building a WSGI stack (WSGI is the Python Web Server
 Gateway Interface, an interface by which HTTP requests can be presented
 to a Python application; it allows for not only an application, but also
 a set of middleware, which wraps the application and can provide
 enhancements).
 
 The various configuration files you reference are used by PasteDeploy to
 construct the WSGI stack for the API; that is, the configuration file
 tells PasteDeploy that the nova-api service is composed of a specified
 controller, wrapped by middleware that implements exception trap
 translation, authentication checks, ratelimit enforcement, etc., all in
 a specific order.  In essence, the configuration file acts sort of like
 code, rather than configuration; it expresses the structure of the final
 application.  (Although configuration can also be expressed in the file,
 we're trying to avoid that, so that we don't mix configuration with
 code.)
 
 Does that help you some?
 
 On Thu, 2012-12-06 at 22:29 +0530, Trinath Somanchi wrote:
 [1] What is the significance of the api-paste.ini file in the
 configuration of nova/quantum and other modules of OpenStack?
 
 
 [2] How do the modules use these API configuration options? How is
 their use different from normal .conf files?
 
 -- 
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 
 



Re: [Openstack] Do we have any schema for keystone v3.0 request/responses

2012-12-06 Thread Gabriel Hurley
It sounds like you *could* start updating and submitting it, but with the 
knowledge that you’ll have to continue to tweak it just as the JSON spec is 
being tweaked during development. So your options are to maintain it as such or 
wait until it’s declared FINAL and then do the work later on but only once.


-  Gabriel

From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net 
[mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net] On 
Behalf Of Jorge Williams
Sent: Wednesday, December 05, 2012 7:04 PM
To: heckj; Ali, Haneef
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Do we have any schema for keystone v3.0 
request/responses

I was waiting for things to stabilize. Give me the go-ahead, Joe, and I'll update
and submit.

Sent from my Motorola Smartphone on the Now Network from Sprint!


-Original message-
From: heckj he...@mac.com
To: Ali, Haneef haneef@hp.com
Cc: openstack@lists.launchpad.net
Sent: Wed, Dec 5, 2012 18:32:03 CST
Subject: Re: [Openstack] Do we have any schema for keystone v3.0 
request/responses
Hey Ali,

We don't have an XSD for the V3 API sets - we've been holding off on finalizing
it while we make small implementation changes as we're getting it into
practice and learning what ideas worked, and which didn't. Jorge (Rackspace) 
has something and offered to do more, but hasn't submitted it up for review, 
and I don't know what state it's in.

We also have modifications to the /token portion of the API that are pending 
final implementation (Guang is working on these now) - when that's complete, 
we'd very much welcome you helping us construct an XSD for ongoing use.

-joe


On Dec 5, 2012, at 4:16 PM, Ali, Haneef haneef@hp.com wrote:
Hi,

Do we have any XSD file for the keystone v3.0 api?  All the examples show only
json format.  I don’t see even a single request/response example using xml. 
Does keystone v3.0 support xml content-type?  If so what is the namespace for 
the v3.0 schema?

Thanks
Haneef



Re: [Openstack] [Cinder] New volume status stuck at Creating after creation in Horizon

2012-12-06 Thread Ahmed Al-Mehdi
Hi Razique,

Following is the info you requested:

root@novato:~# pvdisplay

  --- Physical volume ---
  PV Name   /dev/loop2
  VG Name   cinder-volumes
  PV Size   10.00 GiB / not usable 4.00 MiB
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  2559
  Free PE   2559
  Allocated PE  0
  PV UUID   fYtaeo-MAg8-inx0-vqut-GUw6-behR-bKI3Q7

root@novato:~# vgdisplay
  --- Volume group ---
  VG Name   cinder-volumes
  System ID
  Formatlvm2
  Metadata Areas1
  Metadata Sequence No  1
  VG Access read/write
  VG Status resizable
  MAX LV0
  Cur LV0
  Open LV   0
  Max PV0
  Cur PV1
  Act PV1
  VG Size   10.00 GiB
  PE Size   4.00 MiB
  Total PE  2559
  Alloc PE / Size   0 / 0
  Free  PE / Size   2559 / 10.00 GiB
  VG UUID   kDlol2-KqAx-4E26-ebXR-4ppS-na5M-9vBeqd

root@novato:~# cat /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinderUser:cinderPass@10.176.20.102/cinder
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper=ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
#osapi_volume_listen_port=5900
root@novato:~#


Regards,
Ahmed.



On Wed, Dec 5, 2012 at 12:57 AM, Razique Mahroua
razique.mahr...@gmail.com wrote:

 Hi Ahmed,
 can you run
 $ pvdisplay
 and
 $ vgdisplay

 can we see /etc/cinder/cinder.conf ?

 thanks,
 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 5 déc. 2012 à 09:54, Ahmed Al-Mehdi ahmedalme...@gmail.com a écrit :

 I posted the cinder-scheduler logs in my first post, but here they are
 again.  They were generated right around the time when I created the
 volume.  I am trying to understand the error message VolumeNotFound:
 Volume 9dd360bf-9ef2-499f-ac6e-893abf5dc5ce could not be found.  Is this
 error message related to the volume_group cinder-volumes or to the new
 volume I just created?


 2012-12-04 09:05:02 23552 DEBUG cinder.openstack.common.rpc.
 amqp [-] received {u'_context_roles': [u'Member', u'admin'],
 u'_context_request_id': u'req-1b122042-c3e4-4c1e-8285-ad148c8c2367',
 u'_context
 _quota_class': None, u'args': {u'topic': u'cinder-volume', u'image_id':
 None, u'snapshot_id': None, u'volume_id':
 u'9dd360bf-9ef2-499f-ac6e-893abf5dc5ce'}, u'_context_auth_token':
 'SANITIZED', u'_co
 ntext_is_admin': False, u'_context_project_id':
 u'70e5c14a28a14666a86e85b62ca6ae18', u'_context_timestamp':
 u'2012-12-04T17:05:02.375789', u'_context_read_deleted': u'no',
 u'_context_user_id': u'386d0
 f02d6d045e7ba49d8edac7bb43f', u'method': u'create_volume',
 u'_context_remote_address': u'10.176.20.102'} _safe_log
 /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/common.py:195
 2012-12-04 09:05:02 23552 DEBUG cinder.openstack.common.rpc.amqp [-]
 unpacked context: {'user_id': u'386d0f02d6d045e7ba49d8edac7bb43f', 'roles':
 [u'Member', u'admin'], 'timestamp': u'2012-12-04T17:05:
 02.375789', 'auth_token': 'SANITIZED', 'remote_address':
 u'10.176.20.102', 'quota_class': None, 'is_admin': False, 'request_id':
 u'req-1b122042-c3e4-4c1e-8285-ad148c8c2367', 'project_id': u'70e5c14a
 28a14666a86e85b62ca6ae18', 'read_deleted': u'no'} _safe_log
 /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/common.py:195
 2012-12-04 09:05:02 23552 ERROR cinder.openstack.common.rpc.amqp [-]
 Exception during message handling
 2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp Traceback
 (most recent call last):
 2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py,
 line 276, in _process_data
 2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp rval
 = self.proxy.dispatch(ctxt, version, method, **args)
 2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/dispatcher.py,
 line 145, in dispatch
 2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp
 return getattr(proxyobj, method)(ctxt, **kwargs)
 2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/cinder/scheduler/manager.py, line 98, in
 _schedule
 2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp
 db.volume_update(context, volume_id, {'status': 'error'})
 2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/cinder/db/api.py, line 256, in
 volume_update
 2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp
 return IMPL.volume_update(context, volume_id, values)
 2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp   File
 

Re: [Openstack] Openstack Nova/Quantum :; api-paste.ini file

2012-12-06 Thread Kevin L. Mitchell
On Thu, 2012-12-06 at 23:58 +0530, Trinath Somanchi wrote:
 Suppose we have a request to Nova...
 
 The following steps are performed...
 
 1. The request is captured by webob and is authenticated by keystone
 and is decorated to wsgi app

Not quite correct; webob decorates (some of) the functions called, so
all functions in the WSGI stack end up having the WSGI calling
convention (func(env, start_response)).  The bulk of the middleware
uses the webob wsgify decorator, but there are some exceptions
(auth_token being one of them).  Other than that point, this is correct.

 2. Nova-api maps the url params to extensions

nova-api maps the URIs to controller classes and methods on those
classes (it uses the routes package to accomplish this).  Some of those
classes are extensions, rather than core; some of those interfaces are
further extended by the extensions (the extensions infrastructure can
accomplish both).  IOW, you are essentially correct…
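As a toy illustration of that mapping step (the real code uses the routes package; this stand-in only shows the idea, and the controller names are invented):

```python
# Map request paths to (controller, action, match variables), loosely
# imitating what the routes package does for nova-api.
import re

ROUTES = [
    (re.compile(r'^/servers$'), 'ServersController', 'index'),
    (re.compile(r'^/servers/(?P<id>[^/]+)$'), 'ServersController', 'show'),
]

def match(path):
    """Return (controller, action, vars) for the first matching route."""
    for pattern, controller, action in ROUTES:
        m = pattern.match(path)
        if m:
            return controller, action, m.groupdict()
    return None
```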

 3. Nova-api extensions return the data dict.. Which webob returns as
 response to the request in json/xml format...

Well, it's nova that serializes the data dict to the appropriate format;
webob just handles the mechanics of sending the serialized data back,
along with appropriate HTTP headers.  The serialization framework is a
little complicated, so let's omit it for now…

 4. Paste-api helps the keystone and other modules for update of the
 request...

PasteDeploy builds the processing pipeline based on the values in
api-paste.ini and friends, putting the middleware into the correct
order, with the final application at the end of the chain.  (Note that
middleware is *not* extension, but rather additional processing done on
the request as a whole.)

 Kindly help me by validating my understanding...

I think you've fairly well understood most of it, aside from some
subtleties that I've tried to correct above.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Fwd: [swift3] api - boto and libcloud = AccessDenied

2012-12-06 Thread Blair Bethwaite
Hi Antonio,

It sounds like you might be using the wrong credentials. The S3 layer works
with the EC2 credentials.


On 6 December 2012 06:33, Antonio Messina arcimbo...@gmail.com wrote:


 Hi all,

 I'm trying to access SWIFT using the S3 API compatibility layer, but I
 always get an AccessDenied.

 I'm running folsom on ubuntu precise 12.04 LTS, packages are from
 ubuntu-cloud.archive.canonical.com repository. Swift is correctly
 configured, login and password have been tested with the web interface and
 from command line. Glance uses it to store the images.

 I've installed swift-plugin-s3 and I've configured proxy-server.conf as
 follow:

 pipeline = catch_errors healthcheck cache ratelimit authtoken keystoneauth
 swift3  proxy-logging proxy-server
 [filter:swift3]
 use = egg:swift3#swift3

 I've then tried to connect using my keystone login and password (and I've
 also tried with the EC2 tokens, with the same result).

 The code I'm using is:

 from libcloud.storage.types import Provider as StorageProvider
 from libcloud.storage.providers import get_driver as get_storage_driver

 s3driver = get_storage_driver(StorageProvider.S3)
 s3 = s3driver(ec2access, ec2secret, secure=False, host=s3host, port=8080)
 s3.list_containers()

 What I get is:

 Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
   File
 /home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/storage/drivers/s3.py,
 line 176, in list_containers
 response = self.connection.request('/')
   File
 /home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/common/base.py,
 line 605, in request
 connection=self)
   File
 /home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/common/base.py,
 line 93, in __init__
 raise Exception(self.parse_error())
   File
 /home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/storage/drivers/s3.py,
 line 68, in parse_error
 raise InvalidCredsError(self.body)
 libcloud.common.types.InvalidCredsError: '<?xml version="1.0"
 encoding="UTF-8"?>\r\n<Error>\r\n  <Code>AccessDenied</Code>\r\n
 <Message>Access denied</Message>\r\n</Error>'


 Using boto instead:

  import boto
  s3conn = boto.s3.connection.S3Connection( aws_access_key_id=ec2access,
 aws_secret_access_key=ec2secret, port=s3port, host=s3host,
 is_secure=False,debug=3)
  s3conn.get_all_buckets()
 send: 'GET / HTTP/1.1\r\nHost: cloud-storage1:8080\r\nAccept-Encoding:
 identity\r\nDate: Wed, 05 Dec 2012 19:25:00 GMT\r\nContent-Length:
 0\r\nAuthorization: AWS
 7c67d5b35b5a4127887c5da319c70a18:WXVx9AONXvIkDiIdg8rUnfncFnM=\r\nUser-Agent:
 Boto/2.6.0 (linux2)\r\n\r\n'
 reply: 'HTTP/1.1 403 Forbidden\r\n'
 header: Content-Type: text/xml; charset=UTF-8
 header: Content-Length: 124
 header: X-Trans-Id: tx7a823c742f624f2682bfddb19f31bcc2
 header: Date: Wed, 05 Dec 2012 19:24:42 GMT
 Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
   File
 /home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/boto/s3/connection.py,
 line 364, in get_all_buckets
 response.status, response.reason, body)
 boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
 <?xml version="1.0" encoding="UTF-8"?>
 <Error>
   <Code>AccessDenied</Code>
   <Message>Access denied</Message>
 </Error>

 Login and password work when using the command line tool `swift`.

 I think I may be missing something very basic here, but I couldn't find
 much documentation...

 Thanks in advance

 .a.

 --
 antonio.s.mess...@gmail.com
 arcimbo...@gmail.com
 GC3: Grid Computing Competence Center
 http://www.gc3.uzh.ch/
 University of Zurich
 Winterthurerstrasse 190
 CH-8057 Zurich Switzerland





-- 
Cheers,
~Blairo


[Openstack] [Quantum] Quantum OVS Plugin doubt

2012-12-06 Thread Trinath Somanchi
Hi Stackers-

I have a doubt with respect to the Quantum OVS Plugin.

[1] Do all the APIs of Quantum use the Quantum OVS plugin to get data
from the database, or do they contact the database directly?

Looking at the ovs_quantum_plugin.py code, I have seen that it has
create_network and update_network methods which use the db api.

Is it that the OVS Quantum plugin APIs are only partially used by the
Quantum APIs for getting data from the database?

Kindly help me understand these areas of Quantum.

Thanks in advance

-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130


Re: [Openstack] Openstack Nova/Quantum :; api-paste.ini file

2012-12-06 Thread Trinath Somanchi
Hi Kevin-

Thanks for the reply and making me understand the data flow.

I have one more doubt in plate.

I see that in api-paste.ini, going by the documentation available online
at
https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst

the auth_host, auth_port, admin_tenant_name, admin_user,
admin_password, etc. are all hard-coded.

But then, suppose we have N compute nodes where we have agents running
which might need these values; on each node, these options must be
manually hard-coded.

We also set these values as environment variables.

How can we reference those environment variables from the
configuration files?
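For example (just my own guess at a workaround, not an OpenStack feature), I imagine rendering the file from a template at deploy time, substituting the environment variables; the variable names below are only examples:

```python
# Hypothetical deploy-time templating: fill api-paste.ini values from
# environment variables, since the file itself is plain INI and (as far
# as I can tell) is not expanded against the environment.
import os
from string import Template

TEMPLATE = Template(
    "[filter:authtoken]\n"
    "admin_tenant_name = $ADMIN_TENANT\n"
    "admin_user = $ADMIN_USER\n"
    "admin_password = $ADMIN_PASSWORD\n"
)

def render(env=None):
    """Substitute $VARS from the given mapping (default: os.environ)."""
    return TEMPLATE.substitute(env if env is not None else os.environ)
```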

Can you guide me to understand this issue?

Thanks in advance

-
Trinath


On Fri, Dec 7, 2012 at 2:09 AM, Kevin L. Mitchell 
kevin.mitch...@rackspace.com wrote:

 On Thu, 2012-12-06 at 23:58 +0530, Trinath Somanchi wrote:
  Suppose we have a request to Nova...
 
  The following steps are performed...
 
  1. The request is captured by webob and is authenticated by keystone
  and is decorated to wsgi app

 Not quite correct; webob decorates (some of) the functions called, so
 all functions in the WSGI stack end up having the WSGI calling
 convention (func(env, start_response)).  The bulk of the middleware
 uses the webob wsgify decorator, but there are some exceptions
 (auth_token being one of them).  Other than that point, this is correct.

  2. Nova-api maps the url params to extensions

 nova-api maps the URIs to controller classes and methods on those
 classes (it uses the routes package to accomplish this).  Some of those
 classes are extensions, rather than core; some of those interfaces are
 further extended by the extensions (the extensions infrastructure can
 accomplish both).  IOW, you are essentially correct…

  3. Nova-api extensions return the data dict.. Which webob returns as
  response to the request in json/xml format...

 Well, it's nova that serializes the data dict to the appropriate format;
 webob just handles the mechanics of sending the serialized data back,
 along with appropriate HTTP headers.  The serialization framework is a
 little complicated, so let's omit it for now…

  4. Paste-api helps the keystone and other modules for update of the
  request...

 PasteDeploy builds the processing pipeline based on the values in
 api-paste.ini and friends, putting the middleware into the correct
 order, with the final application at the end of the chain.  (Note that
 middleware is *not* extension, but rather additional processing done on
 the request as a whole.)

  Kindly help me by validating my understanding...

 I think you've fairly well understood most of it, aside from some
 subtleties that I've tried to correct above.
 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com






-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_quantum_trunk #110

2012-12-06 Thread openstack-testing-bot
Title: precise_grizzly_quantum_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/110/
Project: precise_grizzly_quantum_trunk
Date of build: Thu, 06 Dec 2012 07:01:01 -0500
Build duration: 1 min 42 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: All recent builds failed (score: 0)
Changes: Add router testcases that missing in L3NatDBTestCase (edited: quantum/tests/unit/test_l3_plugin.py)
Console Output: [...truncated 2782 lines...]
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpMzouem/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmpMzouem/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log d0a284de2ff272ae65a62bc2722c0274ff1be7d2..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/quantum/precise-grizzly --force
dch -b -D precise --newversion 2013.1+git201212060701~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [fbf50ad] Add router testcases that missing in L3NatDBTestCase
dch -a [b939b07] Returns more appropriate error when address pool is exhausted
dch -a [64f2a38] Add VIF binding extensions
dch -a [4aaf0fe] Sort router testcases as group for L3NatDBTestCase
dch -a [0dea610] Refactor resources listing testcase for test_db_plugin.py
dch -a [b836e71] l3 agent rpc
dch -a [4ec139e] Fix rootwrap cfg for src installed metadata proxy.
dch -a [643a36b] Add metadata_agent.ini to config_path in setup.py.
dch -a [e56f174] add state_path sample back to l3_agent.ini file
dch -a [06b2b2b] plugin/ryu: make live-migration work with Ryu plugin
dch -a [797036f] Remove __init__.py from bin/ and tools/.
dch -a [e4ee84f] Removes unused code in quantum.common
dch -a [d06b511] Fixes import order nits
dch -a [dc107a5] update state_path default to be the same value
dch -a [87e9b62] Use /usr/bin/ for the metadata proxy in l3.filters
dch -a [681d7d3] prevent deletion of router interface if it is needed by a floating ip
dch -a [58cb6ce] Completes coverage of quantum.agent.linux.utils
dch -a [ac81d9d] Fixes Rpc related exception in NVP plugin
dch -a [0c3dd5a] add metadata proxy support for Quantum Networks
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in <module>
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-52cd13de-540c-432d-a52c-3ce19560e8ae', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in <module>
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-52cd13de-540c-432d-a52c-3ce19560e8ae', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_quantum_trunk #112

2012-12-06 Thread openstack-testing-bot
Title: raring_grizzly_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_quantum_trunk/112/
Project: raring_grizzly_quantum_trunk
Date of build: Thu, 06 Dec 2012 07:01:01 -0500
Build duration: 2 min 25 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. (score: 0)

Changes
Add router testcases that missing in L3NatDBTestCase (by review)
  edit: quantum/tests/unit/test_l3_plugin.py

Console Output
[...truncated 3239 lines...]
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpmaOFpC/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmpmaOFpC/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log d0a284de2ff272ae65a62bc2722c0274ff1be7d2..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/quantum/raring-grizzly --force
dch -b -D raring --newversion 2013.1+git201212060701~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a [fbf50ad] Add router testcases that missing in L3NatDBTestCase
dch -a [b939b07] Returns more appropriate error when address pool is exhausted
dch -a [64f2a38] Add VIF binding extensions
dch -a [4aaf0fe] Sort router testcases as group for L3NatDBTestCase
dch -a [0dea610] Refactor resources listing testcase for test_db_plugin.py
dch -a [b836e71] l3 agent rpc
dch -a [4ec139e] Fix rootwrap cfg for src installed metadata proxy.
dch -a [643a36b] Add metadata_agent.ini to config_path in setup.py.
dch -a [e56f174] add state_path sample back to l3_agent.ini file
dch -a [06b2b2b] plugin/ryu: make live-migration work with Ryu plugin
dch -a [797036f] Remove __init__.py from bin/ and tools/.
dch -a [e4ee84f] Removes unused code in quantum.common
dch -a [d06b511] Fixes import order nits
dch -a [dc107a5] update state_path default to be the same value
dch -a [87e9b62] Use /usr/bin/ for the metadata proxy in l3.filters
dch -a [681d7d3] prevent deletion of router interface if it is needed by a floating ip
dch -a [58cb6ce] Completes coverage of quantum.agent.linux.utils
dch -a [ac81d9d] Fixes Rpc related exception in NVP plugin
dch -a [0c3dd5a] add metadata proxy support for Quantum Networks
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in <module>
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-dc4aa834-370b-4076-966d-ef5ee7e5cc85', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in <module>
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-dc4aa834-370b-4076-966d-ef5ee7e5cc85', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #1372

2012-12-06 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/1372/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe /tmp/hudson508026125332658966.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/ca-versions/gather-versions.py", line 177, in <module>
    versions = versions_from_packages(os_release, p, staging_versions.keys())
  File "/var/lib/jenkins/tools/ca-versions/gather-versions.py", line 77, in versions_from_packages
    resp, content = req.request(url, "GET")
  File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1543, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1293, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1267, in _conn_request
    conn.connect()
  File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 889, in connect
    raise socket.error, msg
socket.error: [Errno 110] Connection timed out
Build step 'Execute shell' marked build as failure
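The [Errno 110] above is the OS-level TCP connect timeout: httplib2 blocks until the kernel gives up (on the order of two minutes) unless a timeout is passed to `httplib2.Http`. As a minimal, hypothetical sketch (not the actual gather-versions.py code), a script like this could bound each attempt and retry transient socket errors:

```python
import socket

def with_retry(fn, attempts=3):
    """Call fn(), retrying on socket.error; re-raise the last failure.

    Pair this with httplib2.Http(timeout=10) so each attempt fails fast
    instead of hanging until the kernel reports [Errno 110].
    """
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except socket.error as err:  # OSError alias; covers timeouts
            last_err = err
    raise last_err
```

Usage would look like `with_retry(lambda: httplib2.Http(timeout=10).request(url, "GET"))`; the helper name, timeout, and attempt count are illustrative assumptions, not part of the Jenkins job.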

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Jenkins build is back to normal : cloud-archive_folsom_version-drift #1373

2012-12-06 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/1373/


-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_quantum_trunk #111

2012-12-06 Thread openstack-testing-bot
Title: precise_grizzly_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/111/
Project: precise_grizzly_quantum_trunk
Date of build: Thu, 06 Dec 2012 09:31:01 -0500
Build duration: 1 min 39 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. (score: 0)

Changes
Drop duplicated port_id check in remove_router_interface() (by motoki)
  edit: quantum/db/l3_db.py
  edit: quantum/common/exceptions.py
  edit: quantum/api/v2/base.py
  edit: quantum/extensions/l3.py

Console Output
[...truncated 2785 lines...]
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpYI1x4N/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmpYI1x4N/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log d0a284de2ff272ae65a62bc2722c0274ff1be7d2..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/quantum/precise-grizzly --force
dch -b -D precise --newversion 2013.1+git201212060931~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [fbf50ad] Add router testcases that missing in L3NatDBTestCase
dch -a [73161b0] Drop duplicated port_id check in remove_router_interface()
dch -a [b939b07] Returns more appropriate error when address pool is exhausted
dch -a [64f2a38] Add VIF binding extensions
dch -a [4aaf0fe] Sort router testcases as group for L3NatDBTestCase
dch -a [0dea610] Refactor resources listing testcase for test_db_plugin.py
dch -a [b836e71] l3 agent rpc
dch -a [4ec139e] Fix rootwrap cfg for src installed metadata proxy.
dch -a [643a36b] Add metadata_agent.ini to config_path in setup.py.
dch -a [e56f174] add state_path sample back to l3_agent.ini file
dch -a [06b2b2b] plugin/ryu: make live-migration work with Ryu plugin
dch -a [797036f] Remove __init__.py from bin/ and tools/.
dch -a [e4ee84f] Removes unused code in quantum.common
dch -a [d06b511] Fixes import order nits
dch -a [dc107a5] update state_path default to be the same value
dch -a [87e9b62] Use /usr/bin/ for the metadata proxy in l3.filters
dch -a [681d7d3] prevent deletion of router interface if it is needed by a floating ip
dch -a [58cb6ce] Completes coverage of quantum.agent.linux.utils
dch -a [ac81d9d] Fixes Rpc related exception in NVP plugin
dch -a [0c3dd5a] add metadata proxy support for Quantum Networks
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in <module>
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-f7f207fd-1526-4389-96b7-2260e39b18b7', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in <module>
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-f7f207fd-1526-4389-96b7-2260e39b18b7', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_quantum_trunk #113

2012-12-06 Thread openstack-testing-bot
Title: raring_grizzly_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_quantum_trunk/113/
Project: raring_grizzly_quantum_trunk
Date of build: Thu, 06 Dec 2012 09:31:00 -0500
Build duration: 2 min 23 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. (score: 0)

Changes
Drop duplicated port_id check in remove_router_interface() (by motoki)
  edit: quantum/extensions/l3.py
  edit: quantum/db/l3_db.py
  edit: quantum/api/v2/base.py
  edit: quantum/common/exceptions.py

Console Output
[...truncated 3242 lines...]
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpFTlXbj/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmpFTlXbj/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log d0a284de2ff272ae65a62bc2722c0274ff1be7d2..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/quantum/raring-grizzly --force
dch -b -D raring --newversion 2013.1+git201212060931~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a [fbf50ad] Add router testcases that missing in L3NatDBTestCase
dch -a [73161b0] Drop duplicated port_id check in remove_router_interface()
dch -a [b939b07] Returns more appropriate error when address pool is exhausted
dch -a [64f2a38] Add VIF binding extensions
dch -a [4aaf0fe] Sort router testcases as group for L3NatDBTestCase
dch -a [0dea610] Refactor resources listing testcase for test_db_plugin.py
dch -a [b836e71] l3 agent rpc
dch -a [4ec139e] Fix rootwrap cfg for src installed metadata proxy.
dch -a [643a36b] Add metadata_agent.ini to config_path in setup.py.
dch -a [e56f174] add state_path sample back to l3_agent.ini file
dch -a [06b2b2b] plugin/ryu: make live-migration work with Ryu plugin
dch -a [797036f] Remove __init__.py from bin/ and tools/.
dch -a [e4ee84f] Removes unused code in quantum.common
dch -a [d06b511] Fixes import order nits
dch -a [dc107a5] update state_path default to be the same value
dch -a [87e9b62] Use /usr/bin/ for the metadata proxy in l3.filters
dch -a [681d7d3] prevent deletion of router interface if it is needed by a floating ip
dch -a [58cb6ce] Completes coverage of quantum.agent.linux.utils
dch -a [ac81d9d] Fixes Rpc related exception in NVP plugin
dch -a [0c3dd5a] add metadata proxy support for Quantum Networks
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in <module>
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-4e79f604-d640-4164-9ebc-66e120f7599b', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in <module>
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-4e79f604-d640-4164-9ebc-66e120f7599b', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: raring_grizzly_quantum_trunk #115

2012-12-06 Thread openstack-testing-bot
Title: raring_grizzly_quantum_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_quantum_trunk/115/
Project: raring_grizzly_quantum_trunk
Date of build: Thu, 06 Dec 2012 11:09:31 -0500
Build duration: 8 min 19 sec
Build cause: Started by user james-page
Built on: pkg-builder

Health Report
Build stability: 4 out of the last 5 builds failed. (score: 20)

Changes
No Changes

Console Output
[...truncated 9411 lines...]
deleting and forgetting pool/main/q/quantum/quantum-plugin-openvswitch_2013.1+git201211291601~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-ryu-agent_2013.1+git201211291601~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-ryu_2013.1+git201211291601~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-server_2013.1+git201211291601~raring-0ubuntu1_all.deb
INFO:root:Pushing changes back to bzr testing branch
DEBUG:root:['bzr', 'push', 'lp:~openstack-ubuntu-testing/quantum/raring-grizzly']
Pushed up to revision 117.
INFO:root:Storing current commit for next build: fbf50ad8cba857b62ecb24fccf778175db5b5534
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpdyb2py/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmpdyb2py/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log d0a284de2ff272ae65a62bc2722c0274ff1be7d2..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/quantum/raring-grizzly --force
dch -b -D raring --newversion 2013.1+git201212061109~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a [fbf50ad] Add router testcases that missing in L3NatDBTestCase
dch -a [7011f7f] Releasing resources of context manager functions if exceptions occur
dch -a [73161b0] Drop duplicated port_id check in remove_router_interface()
dch -a [b939b07] Returns more appropriate error when address pool is exhausted
dch -a [64f2a38] Add VIF binding extensions
dch -a [4aaf0fe] Sort router testcases as group for L3NatDBTestCase
dch -a [0dea610] Refactor resources listing testcase for test_db_plugin.py
dch -a [b836e71] l3 agent rpc
dch -a [4ec139e] Fix rootwrap cfg for src installed metadata proxy.
dch -a [643a36b] Add metadata_agent.ini to config_path in setup.py.
dch -a [e56f174] add state_path sample back to l3_agent.ini file
dch -a [06b2b2b] plugin/ryu: make live-migration work with Ryu plugin
dch -a [797036f] Remove __init__.py from bin/ and tools/.
dch -a [e4ee84f] Removes unused code in quantum.common
dch -a [d06b511] Fixes import order nits
dch -a [dc107a5] update state_path default to be the same value
dch -a [87e9b62] Use /usr/bin/ for the metadata proxy in l3.filters
dch -a [681d7d3] prevent deletion of router interface if it is needed by a floating ip
dch -a [58cb6ce] Completes coverage of quantum.agent.linux.utils
dch -a [ac81d9d] Fixes Rpc related exception in NVP plugin
dch -a [0c3dd5a] add metadata proxy support for Quantum Networks
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC quantum_2013.1+git201212061109~raring-0ubuntu1_source.changes
sbuild -d raring-grizzly -n -A quantum_2013.1+git201212061109~raring-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/grizzly-trunk-testing quantum_2013.1+git201212061109~raring-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include raring-grizzly quantum_2013.1+git201212061109~raring-0ubuntu1_amd64.changes
bzr push lp:~openstack-ubuntu-testing/quantum/raring-grizzly
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_nova_trunk #241

2012-12-06 Thread openstack-testing-bot
Title: precise_grizzly_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/241/
Project: precise_grizzly_nova_trunk
Date of build: Thu, 06 Dec 2012 12:05:02 -0500
Build duration: 3 min 14 sec
Build cause: Started by user james-page
Built on: pkg-builder

Health Report
Build stability: 1 out of the last 5 builds failed. (score: 80)

Changes
No Changes

Console Output
[...truncated 264 lines...]
remote: Compressing objects:  90% (27792/30880)
remote: Compressing objects:  91% (28101/30880)
remote: Compressing objects:  92% (28410/30880)
remote: Compressing objects:  93% (28719/30880)
remote: Compressing objects:  94% (29028/30880)
remote: Compressing objects:  95% (29336/30880)
remote: Compressing objects:  96% (29645/30880)
remote: Compressing objects:  97% (29954/30880)
remote: Compressing objects:  98% (30263/30880)
remote: Compressing objects:  99% (30572/30880)
remote: Compressing objects: 100% (30880/30880)
remote: Compressing objects: 100% (30880/30880), done.
Receiving objects:   0% (1/144042)
Receiving objects:   0% (336/144042), 92.00 KiB | 93 KiB/s
Receiving objects:   0% (669/144042), 204.00 KiB | 64 KiB/s
Receiving objects:   0% (731/144042), 220.00 KiB | 45 KiB/s
Receiving objects:   0% (788/144042), 236.00 KiB | 7 KiB/s
Receiving objects:   0% (900/144042), 268.00 KiB | 5 KiB/s
Receiving objects:   0% (957/144042), 268.00 KiB | 5 KiB/s
Receiving objects:   0% (1173/144042), 348.00 KiB | 6 KiB/s
Receiving objects:   0% (1222/144042), 364.00 KiB | 4 KiB/s
Receiving objects:   0% (1281/144042), 380.00 KiB | 2 KiB/s
Receiving objects:   0% (1389/144042), 380.00 KiB | 2 KiB/s
Receiving objects:   1% (1441/144042), 380.00 KiB | 2 KiB/s
Receiving objects:   1% (1442/144042), 428.00 KiB | 1 KiB/s
Receiving objects:   1% (1546/144042), 428.00 KiB | 1 KiB/s
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
	at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:771)
	... 19 more
Trying next repository
ERROR: Could not clone repository
FATAL: Could not clone
hudson.plugins.git.GitException: Could not clone
	at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1041)
	at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:970)
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:1994)
	at hudson.remoting.UserRequest.perform(UserRequest.java:118)
	at hudson.remoting.UserRequest.perform(UserRequest.java:48)
	at hudson.remoting.Request$2.run(Request.java:287)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at hudson.remoting.Engine$1$1.run(Engine.java:60)
	at java.lang.Thread.run(Thread.java:722)
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: raring_grizzly_nova_trunk #243

2012-12-06 Thread openstack-testing-bot
Title: raring_grizzly_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/243/
Project: raring_grizzly_nova_trunk
Date of build: Thu, 06 Dec 2012 12:04:50 -0500
Build duration: 5 min 22 sec
Build cause: Started by user james-page
Built on: pkg-builder

Health Report
Build stability: 1 out of the last 5 builds failed. (score: 80)

Changes
No Changes

Console Output
[...truncated 8 lines...]
Cloning repository origin
ERROR: Error cloning remote repo 'origin' : Could not clone https://github.com/openstack/nova.git
hudson.plugins.git.GitException: Could not clone https://github.com/openstack/nova.git
	at hudson.plugins.git.GitAPI.clone(GitAPI.java:245)
	at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1029)
	at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:970)
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:1994)
	at hudson.remoting.UserRequest.perform(UserRequest.java:118)
	at hudson.remoting.UserRequest.perform(UserRequest.java:48)
	at hudson.remoting.Request$2.run(Request.java:287)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at hudson.remoting.Engine$1$1.run(Engine.java:60)
	at java.lang.Thread.run(Thread.java:722)
Caused by: hudson.plugins.git.GitException: Error performing command: git clone --progress -o origin https://github.com/openstack/nova.git /var/lib/jenkins/slave/workspace/raring_grizzly_nova_trunk/nova
Command "git clone --progress -o origin https://github.com/openstack/nova.git /var/lib/jenkins/slave/workspace/raring_grizzly_nova_trunk/nova" returned status code 143: Cloning into '/var/lib/jenkins/slave/workspace/raring_grizzly_nova_trunk/nova'...
	at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:776)
	at hudson.plugins.git.GitAPI.access$000(GitAPI.java:38)
	at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:241)
	at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:221)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitAPI.clone(GitAPI.java:221)
	... 13 more
Caused by: hudson.plugins.git.GitException: Command "git clone --progress -o origin https://github.com/openstack/nova.git /var/lib/jenkins/slave/workspace/raring_grizzly_nova_trunk/nova" returned status code 143: Cloning into '/var/lib/jenkins/slave/workspace/raring_grizzly_nova_trunk/nova'...
	at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:771)
	... 19 more
Trying next repository
ERROR: Could not clone repository
FATAL: Could not clone
hudson.plugins.git.GitException: Could not clone
	at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1041)
	at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:970)
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:1994)
	at hudson.remoting.UserRequest.perform(UserRequest.java:118)
	at hudson.remoting.UserRequest.perform(UserRequest.java:48)
	at hudson.remoting.Request$2.run(Request.java:287)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at hudson.remoting.Engine$1$1.run(Engine.java:60)
	at java.lang.Thread.run(Thread.java:722)
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp
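The clone failures above ("early EOF", "The remote end hung up unexpectedly") are typically transient network drops, and the git plugin makes only one attempt per configured repository. A minimal sketch of the usual workaround, retrying the command a few times before failing the build (the command, attempt count, and delay are illustrative assumptions, not what these jobs actually run):

```python
import subprocess
import time

def run_with_retry(cmd, attempts=3, delay=0):
    """Run cmd (an argv list) until it exits 0; True on success.

    For a flaky clone, cmd would be something like
    ['git', 'clone', '--progress', '-o', 'origin', url, dest];
    the delay gives an overloaded mirror a moment to recover.
    """
    for _ in range(attempts):
        if subprocess.call(cmd) == 0:
            return True
        time.sleep(delay)
    return False
```

A wrapper like this sits naturally in the job's "Execute shell" step rather than in the SCM checkout, since Jenkins marks the build failed as soon as the plugin's single clone attempt dies.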


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_grizzly_keystone_trunk #48

2012-12-06 Thread openstack-testing-bot
Title: precise_grizzly_keystone_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/48/
Project: precise_grizzly_keystone_trunk
Date of build: Thu, 06 Dec 2012 12:12:02 -0500
Build duration: 5 min 20 sec
Build cause: Started by user james-page
Built on: pkg-builder

Health Report
Build stability: 3 out of the last 5 builds failed. (score: 40)

Changes
No Changes

Console Output
[...truncated 21234 lines...]
gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) "
Checking signature on .changes
Good signature on /tmp/tmpuIVOnu/keystone_2013.1+git201212061212~precise-0ubuntu1_source.changes.
Checking signature on .dsc
Good signature on /tmp/tmpuIVOnu/keystone_2013.1+git201212061212~precise-0ubuntu1.dsc.
Uploading to ppa (via ftp to ppa.launchpad.net):
  Uploading keystone_2013.1+git201212061212~precise-0ubuntu1.dsc: done.
  Uploading keystone_2013.1+git201212061212~precise.orig.tar.gz: done.
  Uploading keystone_2013.1+git201212061212~precise-0ubuntu1.debian.tar.gz: done.
  Uploading keystone_2013.1+git201212061212~precise-0ubuntu1_source.changes: done.
Successfully uploaded packages.
INFO:root:Installing build artifacts into /var/lib/jenkins/www/apt
DEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'keystone_2013.1+git201212061212~precise-0ubuntu1_amd64.changes']
Exporting indices...
Successfully created '/var/lib/jenkins/www/apt/dists/precise-grizzly/Release.gpg.new'
Successfully created '/var/lib/jenkins/www/apt/dists/precise-grizzly/InRelease.new'
Deleting files no longer referenced...
deleting and forgetting pool/main/k/keystone/keystone-doc_2013.1+git201212030501~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/k/keystone/keystone_2013.1+git201212030501~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/k/keystone/python-keystone_2013.1+git201212030501~precise-0ubuntu1_all.deb
INFO:root:Pushing changes back to bzr testing branch
DEBUG:root:['bzr', 'push', 'lp:~openstack-ubuntu-testing/keystone/precise-grizzly']
Pushed up to revision 166.
INFO:root:Storing current commit for next build: c858c1b304cae6310f08a220cf54c763f684fc42
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/keystone/grizzly /tmp/tmpuIVOnu/keystone
mk-build-deps -i -r -t apt-get -y /tmp/tmpuIVOnu/keystone/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log af8761d9e0add62a83604b77ab015f5a8b3120a9..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/keystone/precise-grizzly --force
dch -b -D precise --newversion 2013.1+git201212061212~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [c858c1b] Only 'import *' from 'core' modules
dch -a [77dee93] use keystone test and change config during setUp
dch -a [84a0b2d] Bug 1075090 -- Fixing log messages in python source code to support internationalization.
dch -a [8c15e3e] Added documentation for the external auth support
dch -a [5b73757] Validate password type (bug 1081861)
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC keystone_2013.1+git201212061212~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A keystone_2013.1+git201212061212~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/grizzly-trunk-testing keystone_2013.1+git201212061212~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-grizzly keystone_2013.1+git201212061212~precise-0ubuntu1_amd64.changes
bzr push lp:~openstack-ubuntu-testing/keystone/precise-grizzly
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_folsom_keystone_stable #75

2012-12-06 Thread openstack-testing-bot
Title: precise_folsom_keystone_stable
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_folsom_keystone_stable/75/
Project: precise_folsom_keystone_stable
Date of build: Thu, 06 Dec 2012 12:13:01 -0500
Build duration: 5 min 51 sec
Build cause: Started by user james-page
Built on: pkg-builder
Health Report: Build stability: 2 out of the last 3 builds failed. Score: 33
Changes: No Changes
Console Output
[...truncated 5184 lines...]
Finished at 20121206-1218
Build needed 00:03:53, 12340k disc space
INFO:root:Uploading package to ppa:openstack-ubuntu-testing/folsom-stable-testing
gpg: Signature made Thu Dec  6 12:14:50 2012 EST using RSA key ID 9935ACDC
gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) <ja...@shingle-house.org.uk>"
gpg: Signature made Thu Dec  6 12:14:50 2012 EST using RSA key ID 9935ACDC
gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) <ja...@shingle-house.org.uk>"
Checking signature on .changes
Good signature on /tmp/tmpeJUumo/keystone_2012.2.2+git201212061213~precise-0ubuntu1_source.changes.
Checking signature on .dsc
Good signature on /tmp/tmpeJUumo/keystone_2012.2.2+git201212061213~precise-0ubuntu1.dsc.
Uploading to ppa (via ftp to ppa.launchpad.net):
  Uploading keystone_2012.2.2+git201212061213~precise-0ubuntu1.dsc: done.
  Uploading keystone_2012.2.2+git201212061213~precise.orig.tar.gz: done.
  Uploading keystone_2012.2.2+git201212061213~precise-0ubuntu1.debian.tar.gz: done.
  Uploading keystone_2012.2.2+git201212061213~precise-0ubuntu1_source.changes: done.
Successfully uploaded packages.
INFO:root:Installing build artifacts into /var/lib/jenkins/www/apt
Exporting indices...
Successfully created '/var/lib/jenkins/www/apt/dists/precise-folsom/Release.gpg.new'
Successfully created '/var/lib/jenkins/www/apt/dists/precise-folsom/InRelease.new'
Deleting files no longer referenced...
deleting and forgetting pool/main/k/keystone/keystone-doc_2012.2.2+git201211300431~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/k/keystone/keystone_2012.2.2+git201211300431~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/k/keystone/python-keystone_2012.2.2+git201211300431~precise-0ubuntu1_all.deb
INFO:root:Pushing changes back to bzr testing branch
Pushed up to revision 156.
INFO:root:Storing current commit for next build: 7869c3ecd7b1ff72bfe817e97024fe1c7673fba4
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/keystone/folsom /tmp/tmpeJUumo/keystone
mk-build-deps -i -r -t apt-get -y /tmp/tmpeJUumo/keystone/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
bzr merge lp:~openstack-ubuntu-testing/keystone/precise-folsom --force
dch -b -D precise --newversion 2012.2.2+git201212061213~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a No change rebuild.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC keystone_2012.2.2+git201212061213~precise-0ubuntu1_source.changes
sbuild -d precise-folsom -n -A keystone_2012.2.2+git201212061213~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/folsom-stable-testing keystone_2012.2.2+git201212061213~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-folsom keystone_2012.2.2+git201212061213~precise-0ubuntu1_amd64.changes
bzr push lp:~openstack-ubuntu-testing/keystone/precise-folsom
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: raring_grizzly_nova_trunk #244

2012-12-06 Thread openstack-testing-bot
Title: raring_grizzly_nova_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/244/
Project: raring_grizzly_nova_trunk
Date of build: Thu, 06 Dec 2012 12:10:23 -0500
Build duration: 19 min
Build cause: Started by user james-page
Built on: pkg-builder
Health Report: Build stability: 1 out of the last 5 builds failed. Score: 80
Changes: No Changes
Console Output
[...truncated 36203 lines...]
deleting and forgetting pool/main/n/nova/nova-api_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-cert_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-common_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute-kvm_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute-lxc_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute-qemu_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute-uml_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute-xcp_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute-xen_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-conductor_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-console_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-consoleauth_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-doc_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-network_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-novncproxy_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-objectstore_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-scheduler_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-volume_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-xcp-network_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-xcp-plugins_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-xvpvncproxy_2013.1+git201212060902~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/python-nova_2013.1+git201212060902~raring-0ubuntu1_all.deb
INFO:root:Pushing changes back to bzr testing branch
DEBUG:root:['bzr', 'push', 'lp:~openstack-ubuntu-testing/nova/raring-grizzly']
Pushed up to revision 509.
INFO:root:Storing current commit for next build: 59f6d3b63b3a32875a1835cb3ff9bbddf2b192ee
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/nova/grizzly /tmp/tmpA13fjb/nova
mk-build-deps -i -r -t apt-get -y /tmp/tmpA13fjb/nova/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
bzr merge lp:~openstack-ubuntu-testing/nova/raring-grizzly --force
dch -b -D raring --newversion 2013.1+git201212061210~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a No change rebuild.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC nova_2013.1+git201212061210~raring-0ubuntu1_source.changes
sbuild -d raring-grizzly -n -A nova_2013.1+git201212061210~raring-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/grizzly-trunk-testing nova_2013.1+git201212061210~raring-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include raring-grizzly nova_2013.1+git201212061210~raring-0ubuntu1_amd64.changes
bzr push lp:~openstack-ubuntu-testing/nova/raring-grizzly
+ [ ! 0 ]
+ jenkins-cli build raring_grizzly_deploy
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: precise_folsom_coverage #336

2012-12-06 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/precise_folsom_coverage/336/

--
Started by command line
Building on master
No emails were triggered.
[workspace] $ /bin/bash -x /tmp/hudson4115235006440770384.sh
+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/inspect_environment.sh
Inspecting deployed environment.
Writing envrc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/jenkins-scripts/collate-versions.py", line 45, in <module>
    pk, vers = l.split(',')
ValueError: need more than 1 value to unpack
+ exit 1
Build step 'Execute shell' marked build as failure
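
The failure above is an unguarded tuple unpack: `l.split(',')` returns a one-element list when a line contains no comma, so the two-target assignment raises exactly this ValueError. A minimal sketch of the crash site and a defensive variant (the input data here is hypothetical, not taken from collate-versions.py):

```python
def parse_versions(lines):
    """Map package -> version from 'package,version' lines, skipping malformed ones."""
    result = {}
    for l in lines:
        parts = l.split(',')
        # The original 'pk, vers = l.split(",")' unpacked unconditionally
        # and raised ValueError on any line without exactly one comma.
        if len(parts) != 2:
            continue
        pk, vers = parts
        result[pk] = vers
    return result

print(parse_versions(["keystone,2012.2.2", "malformed-line"]))  # -> {'keystone': '2012.2.2'}
```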

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Jenkins build is back to normal : precise_folsom_coverage #337

2012-12-06 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/precise_folsom_coverage/337/


-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: raring_grizzly_python-cinderclient_trunk #13

2012-12-06 Thread openstack-testing-bot
Title: raring_grizzly_python-cinderclient_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_python-cinderclient_trunk/13/
Project: raring_grizzly_python-cinderclient_trunk
Date of build: Thu, 06 Dec 2012 20:31:01 -0500
Build duration: 2 min 35 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 1 out of the last 2 builds failed. Score: 50
Changes:
Align cinderclient version code. (by mordred)
  edit openstack-common.conf
  edit cinderclient/__init__.py
  edit .gitignore
  add cinderclient/openstack/common/version.py
  edit setup.py
  edit MANIFEST.in
Update to swapped versioninfo logic. (by mordred)
  edit cinderclient/openstack/common/setup.py
Console Output
[...truncated 1155 lines...]
Download error on http://pypi.python.org/simple/setuptools-git/: timed out -- Some packages may not be found!
Couldn't find index page for 'setuptools-git' (maybe misspelled?)
Download error on http://pypi.python.org/simple/: timed out -- Some packages may not be found!
No local packages or download links found for setuptools-git>=0.4
Traceback (most recent call last):
  File "setup.py", line 58, in <module>
    "console_scripts": ["cinder = cinderclient.shell:main"]
  File "/usr/lib/python2.7/distutils/core.py", line 112, in setup
    _setup_distribution = dist = klass(attrs)
  File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 221, in __init__
    self.fetch_build_eggs(attrs.pop('setup_requires'))
  File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 245, in fetch_build_eggs
    parse_requirements(requires), installer=self.fetch_build_egg
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 580, in resolve
    dist = best[req.key] = env.best_match(req, self, installer)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 825, in best_match
    return self.obtain(req, installer) # try and download/install
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 837, in obtain
    return installer(requirement)
  File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 295, in fetch_build_egg
    return cmd.easy_install(req)
  File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 611, in easy_install
    raise DistutilsError(msg)
distutils.errors.DistutilsError: Could not find suitable distribution for Requirement.parse('setuptools-git>=0.4')
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-8b5a0401-5b40-42a2-ac73-d93facb5041d', '-u', 'jenkins', '--', 'python', 'setup.py', 'sdist']' returned non-zero exit status 1
ERROR:root:Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-8b5a0401-5b40-42a2-ac73-d93facb5041d', '-u', 'jenkins', '--', 'python', 'setup.py', 'sdist']' returned non-zero exit status 1
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/python-cinderclient/grizzly /tmp/tmpaTGAXx/python-cinderclient
mk-build-deps -i -r -t apt-get -y /tmp/tmpaTGAXx/python-cinderclient/debian/control
python setup.py sdist
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in <module>
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-8b5a0401-5b40-42a2-ac73-d93facb5041d', '-u', 'jenkins', '--', 'python', 'setup.py', 'sdist']' returned non-zero exit status 1
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in <module>
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-8b5a0401-5b40-42a2-ac73-d93facb5041d', '-u', 'jenkins', '--', 'python', 'setup.py', 'sdist']' returned non-zero exit status 1
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
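
The root cause above is `setup_requires`: setuptools tries to easy_install any missing build dependency at build time, and in a network-restricted schroot the PyPI fetch times out, killing the sdist step. Checking build dependencies up front makes the missing package obvious before any network access is attempted; this is a sketch, not the bot's actual code:

```python
import pkg_resources

def missing_build_deps(requirements):
    """Return the requirement strings that are not satisfied locally."""
    missing = []
    for req in requirements:
        try:
            pkg_resources.require(req)
        except (pkg_resources.DistributionNotFound, pkg_resources.VersionConflict):
            missing.append(req)
    return missing

# In the failing chroot this would have reported ['setuptools-git>=0.4'],
# pointing straight at the package to pre-install (e.g. via mk-build-deps)
# instead of letting easy_install attempt a doomed download.
print(missing_build_deps(['setuptools-git>=0.4']))
```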
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_python-cinderclient_trunk #16

2012-12-06 Thread openstack-testing-bot
Title: precise_grizzly_python-cinderclient_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_python-cinderclient_trunk/16/
Project: precise_grizzly_python-cinderclient_trunk
Date of build: Thu, 06 Dec 2012 20:31:00 -0500
Build duration: 2 min 53 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 1 out of the last 2 builds failed. Score: 50
Changes:
Align cinderclient version code. (by mordred)
  edit cinderclient/__init__.py
  edit setup.py
  edit MANIFEST.in
  edit openstack-common.conf
  add cinderclient/openstack/common/version.py
  edit .gitignore
Update to swapped versioninfo logic. (by mordred)
  edit cinderclient/openstack/common/setup.py
Console Output
[...truncated 1792 lines...]
Job: python-cinderclient_1.0.1.2.g9201cee+git201212062031~precise-0ubuntu1.dsc
Machine Architecture: amd64
Package: python-cinderclient
Package-Time: 98
Source-Version: 1:1.0.1.2.g9201cee+git201212062031~precise-0ubuntu1
Space: 648
Status: attempted
Version: 1:1.0.1.2.g9201cee+git201212062031~precise-0ubuntu1
Finished at 20121206-2033
Build needed 00:01:38, 648k disc space
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'python-cinderclient_1.0.1.2.g9201cee+git201212062031~precise-0ubuntu1.dsc']' returned non-zero exit status 2
ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'python-cinderclient_1.0.1.2.g9201cee+git201212062031~precise-0ubuntu1.dsc']' returned non-zero exit status 2
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/python-cinderclient/grizzly /tmp/tmpZQ4EAa/python-cinderclient
mk-build-deps -i -r -t apt-get -y /tmp/tmpZQ4EAa/python-cinderclient/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log c01e7822f9de3b17b8cca8d0b10cbf03e6890f8f..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/python-cinderclient/precise-grizzly --force
dch -b -D precise --newversion 1:1.0.1.2.g9201cee+git201212062031~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [9201cee] Update to swapped versioninfo logic.
dch -a [5adf791] Align cinderclient version code.
dch -a [62eb92a] Pin pep8 to 1.3.3
dch -a [79dc21d] show help when calling without arguments
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC python-cinderclient_1.0.1.2.g9201cee+git201212062031~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A python-cinderclient_1.0.1.2.g9201cee+git201212062031~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in <module>
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'python-cinderclient_1.0.1.2.g9201cee+git201212062031~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in <module>
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'python-cinderclient_1.0.1.2.g9201cee+git201212062031~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
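
Both failure mails end in the same build-package traceback: the script runs each packaging step in order and re-raises the first CalledProcessError, which Jenkins then reports as a shell-step failure. The control flow can be sketched as follows (the step list is abridged and hypothetical, not the real script's command table):

```python
import subprocess

def run_steps(steps, runner=subprocess.check_call):
    """Run packaging commands in order; the first non-zero exit aborts the
    whole build, surfacing as the CalledProcessError seen in the log."""
    for cmd in steps:
        runner(cmd)  # raises subprocess.CalledProcessError on failure

# Abridged, hypothetical step list mirroring the bot's command log:
steps = [
    ["python", "setup.py", "sdist"],
    ["bzr", "builddeb", "-S", "--", "-sa", "-us", "-uc"],
    ["sbuild", "-d", "precise-grizzly", "-n", "-A", "example.dsc"],
]
```

The fail-fast design means a single flaky step (a PyPI timeout, a test failure inside sbuild) marks the entire build as failed rather than uploading a partial result.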
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp