[Openstack] OpenStack for Android Tablets

2013-01-14 Thread Desta Haileselassie Hagos
Dear All,
Is there any support for OpenStack technology on tablets (for example,
Android tablets with native Android interfaces)?


With best regards,

Desta
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] NFS + RDMA == stuck at Booting from hard disk

2013-01-14 Thread Andrew Holway
Hey,

I had NFS over RDMA working but hit this bug with o_direct. I cannot remember 
the exact circumstances of this cropping up.

https://bugzilla.linux-nfs.org/show_bug.cgi?id=228

You can work around this with KVM by explicitly specifying the block size in 
the XML. I do not know how you would implement this in openstack. With 
difficulty I imagine :)

<qemu:commandline>
  <qemu:arg value='-set'/>
  <qemu:arg value='-set'/>
  <qemu:arg value='-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,logical_block_size=4096'/>
</qemu:commandline>

In general RDMA is best avoided for production setups. NFS over RDMA support has 
been dumped from Mellanox OFED and Red Hat support is really choppy. I saw weird 
kernel panics here and there and other general unhappiness.

IPoIB is generally fast enough as long as you have it in connected mode and set 
the frame size appropriately; try setting it a wee bit bigger than the block 
size of your filesystem.
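
For example, a minimal sketch of that tuning, assuming the IPoIB interface is 
ib0 and a 4096-byte filesystem block size (interface name and MTU value are 
only illustrative):

echo connected > /sys/class/net/ib0/mode    # put IPoIB into connected mode
ip link set dev ib0 mtu 5000                # a wee bit above the 4096-byte block size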

ta for now

Andrew Holway



On Jan 10, 2013, at 10:14 PM, Mark Lehrer wrote:

 
 Has anyone here been able to make Openstack + KVM work with Infiniband  
 NFS-RDMA?
 
 I have spent a couple of days here trying to make it work, but with no luck.  
 At first I thought the problem was NFS3 and lockd, but I tried NFSv4 and I 
 have the same problem.  I also disabled AppArmor just as a test but that 
 didn't help.
 
 Things work fine with Infiniband if I don't use RDMA, and I have many 
 non-libvirt QEMU-KVM VMs working fine using RDMA.
 
 When this problem occurs, the qemu process is CPU locked, strace shows calls 
 to FUTEX, select, and lots of EAGAIN messages.  If nobody else here is using 
 this setup, I'll keep digging to find out exactly which option is causing 
 this problem.
 
 Thanks,
 Mark
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova root wrapper understanding

2013-01-14 Thread Thierry Carrez
Kun Huang wrote:
 Thanks, Thierry Carrez. Your explanation is easy to understand. I have
 got why we need such a mechanism.
 
 BTW, is root-wrap a general or popular way to maintain security? I have no
 experience with security, but I have heard that root should be banned
 for security reasons. Ideally, should we ban root on the nodes and just use
 the root-wrapped nova user for the tasks that need it?

Ideally, we should run all services as an unprivileged user (nova). In
reality, given the low-level tasks generally needed to bootstrap
infrastructure resources, it's difficult to achieve. So we should strive
to only escalate when really needed, and filter properly to ensure
escalation is limited. Rootwrap provides a framework for that filtering.
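
For illustration, a filter entry looks something like this (a minimal sketch in 
the Folsom-style filters format; the command and path are only an example):

# /etc/nova/rootwrap.d/compute.filters
[Filters]
# nova, running as the unprivileged user, may run exactly this binary as root
kpartx: CommandFilter, /sbin/kpartx, root

The service then calls "sudo nova-rootwrap /etc/nova/rootwrap.conf kpartx ...";
anything that does not match a filter is rejected.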

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] NFS + RDMA == stuck at Booting from hard disk

2013-01-14 Thread Andrew Holway
Also,

You might be better off asking on the KVM list for this low level stuff.

http://www.linux-kvm.org/page/Lists,_IRC#Mailing_Lists




On Jan 14, 2013, at 11:15 AM, Andrew Holway wrote:

 Hey,
 
 I had NFS over RDMA working but hit this bug with o_direct. I cannot remember 
 the exact circumstances of this cropping up.
 
 https://bugzilla.linux-nfs.org/show_bug.cgi?id=228
 
 You can work around this with KVM by explicitly specifying the block size in 
 the XML. I do not know how you would implement this in openstack. With 
 difficulty I imagine :)
 
 <qemu:commandline>
   <qemu:arg value='-set'/>
   <qemu:arg value='-set'/>
   <qemu:arg value='-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,logical_block_size=4096'/>
 </qemu:commandline>
 
 In general RDMA is best avoided for production setups. NFSoverRDMA support 
 has been dumped from Mellanox OFED and Redhat support is really choppy. I saw 
 weird kernel panics here and there and other general unhappiness.
 
 ipoib is generally fast enough as long as you have it in connected mode and 
 set the frame size appropriately try setting it a wee bit bigger than the 
 block size of your filesystem.
 
 ta for now
 
 Andrew Holway
 
 
 
 On Jan 10, 2013, at 10:14 PM, Mark Lehrer wrote:
 
 
 Has anyone here been able to make Openstack + KVM work with Infiniband  
 NFS-RDMA?
 
 I have spent a couple of days here trying to make it work, but with no luck. 
  At first I thought the problem was NFS3 and lockd, but I tried NFSv4 and I 
 have the same problem.  I also disabled AppArmor just as a test but that 
 didn't help.
 
 Things work fine with Infinband if I don't use RDMA, and I have many 
 non-libvirt QEMU-KVM vm's working fine using RDMA.
 
 When this problem occurs, the qemu process is CPU locked, strace shows calls 
 to FUTEX, select, and lots of EAGAIN messages.  If nobody else here is using 
 this setup, I'll keep digging to find out exactly which option is causing 
 this problem.
 
 Thanks,
 Mark
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
Hello all,


I'm trying to upload 200GB of 200KB files to Swift. I'm using 4 clients
(each hosted on a different machine) with 10 threads each uploading files
using the official python-swiftclient. Each thread is uploading to a
separate container.

I have 5 storage nodes and 1 proxy node. The nodes are all running with a
replication factor of 3. Each node has a quad-core i3 processor, 4GB of RAM
and a gigabit network interface.

Is there any way I can speed up this process? At the moment it takes about
20 seconds per file or more.
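
For reference, each client is doing roughly the equivalent of the following (a 
sketch using the swift CLI bundled with python-swiftclient; the auth URL, 
credentials and container name are placeholders):

swift -A http://proxy:8080/auth/v1.0 -U account:user -K secret \
    upload --object-threads 10 container_01 /path/to/files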

Regards,

Leander
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] How to make the subinterface connection in a VLAN environment?

2013-01-14 Thread Ray Sun
My OpenStack environment is running under the traditional VLAN model.
Currently, I have two VMs running Ubuntu 12.04 server, and their
fixed IPs are:

Host 1: eth0: 172.16.0.3
Host 2: eth0: 172.16.0.5

Then I add a subinterface to each VM like this:

Host 1: eth0:1 192.168.2.2
Host 2: eth0:1 192.168.2.3

My network configuration looks like:
auto eth0
iface eth0 inet dhcp

auto eth0:1
iface eth0:1 inet static
address 192.168.2.3
netmask 255.255.255.0

Also, in Access & Security I opened ICMP, but I don't think this is
related to my issue. Then I tried pinging host2 from host1, and it works.

Then on Host1, I run:

ping 172.16.0.5 -I 192.168.2.2  ->  NO RESPONSE

ping 192.168.2.3  ->  Destination Host Unreachable
I didn't modify the route table, and it looks like:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.16.0.4      0.0.0.0         UG    100    0        0 eth0
172.16.0.0      *               255.255.255.0   U     0      0        0 eth0
192.168.2.0     *               255.255.255.0   U     0      0        0 eth0

Does anyone know how I can access host2 through my subinterface? Thanks a
lot.


- Ray
Yours faithfully, Kind regards.

CIeNET Technologies (Beijing) Co., Ltd
Email: qsun01...@cienet.com.cn
Office Phone: +86-01081470088-7079
Mobile Phone: +86-13581988291
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Glance, boto and image id

2013-01-14 Thread Antonio Messina
Apparently not. This is the output of glance image-show:

+---+--+
| Property  | Value|
+---+--+
| Property 'kernel_id'  | e1c78f4d-eca9-4979-9ee0-54019d5f79c2 |
| Property 'ramdisk_id' | 99c8443e-c3b2-4aef-8bf8-79cc58f127a2 |
| checksum  | 2f81976cae15c16ef0010c51e3a6c163 |
| container_format  | ami  |
| created_at| 2012-12-04T22:59:13  |
| deleted   | False|
| disk_format   | ami  |
| id| 67b612ac-ab20-4227-92fc-adf92841ba8b |
| is_public | True |
| min_disk  | 0|
| min_ram   | 0|
| name  | cirros-0.3.0-x86_64-uec  |
| owner | ab267870ac72450d925a437f9b7c064a |
| protected | False|
| size  | 25165824 |
| status| active   |
| updated_at| 2012-12-04T22:59:14  |
+---+--+


Looking at the code in
https://github.com/openstack/nova/blob/master/nova/api/ec2/ec2utils.py#L70 and
https://github.com/openstack/nova/blob/master/nova/api/ec2/ec2utils.py#L126 it
seems that the conversion is simply done by getting an id of type
integer (not uuid-like string) and then converting it to hex form and
appending it to the string 'ami-'
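
For illustration, that conversion amounts to something like the following (16
is just an arbitrary integer id):

$ printf 'ami-%08x\n' 16
ami-00000010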

The question is: where does this id come from, and is there any way to show it
in the Horizon web interface?

.a.


On Sun, Jan 13, 2013 at 10:17 PM, Jay Pipes jaypi...@gmail.com wrote:

 The EC2-style image ID would probably be stored in the custom key/value
 pairs in the Glance image record for the image... so if you do a glance
 image-show IMAGE_UUID you should see the EC2 image ID in there...

 -jay

 On 01/11/2013 09:51 AM, Antonio Messina wrote:
  Hi all,
 
  I am using the boto library to access a Folsom installation, and I have a
  few doubts regarding image IDs.
 
  I understand that boto uses EC2-style ids for images (something like
  ami-<16-digit number>) and that the nova API converts Glance ids to EC2 ids.
  However, it seems that there is no way from the Horizon web interface
  nor from euca-tools to get this mapping.
 
  How can I know the EC2 id of an image, having access only to the web
  interface or boto?
 
  I could use the *name* of the instance instead of the ID, but the name
  is not unique...
 
  .a.
 
  --
  antonio.s.mess...@gmail.com
  GC3: Grid Computing Competence Center
  http://www.gc3.uzh.ch/
  University of Zurich
  Winterthurerstrasse 190
  CH-8057 Zurich Switzerland
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




-- 
antonio.s.mess...@gmail.com
GC3: Grid Computing Competence Center
http://www.gc3.uzh.ch/
University of Zurich
Winterthurerstrasse 190
CH-8057 Zurich Switzerland
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
I forgot to mention that I'm also using the suggestions mentioned here:
http://docs.openstack.org/developer/swift/deployment_guide.html#general-system-tuning


On Mon, Jan 14, 2013 at 11:02 AM, Leander Bessa Beernaert 
leande...@gmail.com wrote:

 Hello all,


 I'm trying to upload 200GB of 200KB files to Swift. I'm using 4 clients
 (each hosted on a different machine) with 10 threads each uploading files
 using the official python-swiftclient. Each thread is uploading to a
 separate container.

 I have 5 storage nodes and 1 proxy node. The nodes are all running with a
 replication factor of 3. Each node has a quad-core i3 processor, 4GB of RAM
 and a gigabit network interface.

 Is there any way I can speed up this process? At the moment it takes about
 20 seconds per file or more.

 Regards,

 Leander




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] Change the partition power to recreate the RING

2013-01-14 Thread Alejandro Comisario
Chuck / John.
We are handling 50,000 requests per minute (where 10,000+ are PUTs of small
objects, from 10KB to 150KB).

We are using swift 1.7.4 with keystone token caching, so no latency over
there.
We have 12 proxies and 24 datanodes divided into 4 zones (each
datanode has 48GB of RAM, 2 hexacore CPUs and 4 devices of 3TB each).

The workers that are putting objects into swift are seeing awful
performance, and so are we,
with peaks of 2 to 15 seconds per PUT operation coming from the datanodes.
We tuned db_preallocation, disable_fallocate, workers and concurrency, but
we can't reach the request rate that we need (24,000 PUTs per minute of
small objects), and we can't seem to find where the problem is, other than
in the datanodes.

Maybe worth pasting our config over here?
Thanks in advance.

alejandro
On 12 Jan 2013 02:01, Chuck Thier cth...@gmail.com wrote:

 Looking at this from a different perspective.  Having 2500 partitions
 per drive shouldn't be an absolutely horrible thing either.  Do you
 know how many objects you have per partition?  What types of problems
 are you seeing?

 --
 Chuck

 On Fri, Jan 11, 2013 at 3:28 PM, John Dickinson m...@not.mn wrote:
  In effect, this would be a complete replacement of your rings, and that
 is essentially a whole new cluster. All of the existing data would need to
 be rehashed into the new ring before it is available.
 
  There is no process that rehashes the data to ensure that it is still in
 the correct partition. Replication only ensures that the partitions are on
 the right drives.
 
  To change the number of partitions, you will need to GET all of the data
 from the old ring and PUT it to the new ring. A more complicated (but
 perhaps more efficient) solution may include something like walking each
 drive and rehashing+moving the data to the right partition and then letting
 replication settle it down.
 
  Either way, 100% of your existing data will need to at least be rehashed
 (and probably moved). Your CPU (hashing), disks (read+write), RAM
 (directory walking), and network (replication) may all be limiting factors
 in how long it will take to do this. Your per-disk free space may also
 determine what method you choose.
 
  I would not expect any data loss while doing this, but you will probably
 have availability issues, depending on the data access patterns.
 
  I'd like to eventually see something in swift that allows for changing
 the partition power in existing rings, but that will be
 hard/tricky/non-trivial.
 
  Good luck.
 
  --John
 
 
  On Jan 11, 2013, at 1:17 PM, Alejandro Comisario 
 alejandro.comisa...@mercadolibre.com wrote:
 
  Hi guys.
   We created a swift cluster several months ago; the thing is that
  right now we can't add hardware, and we configured lots of partitions thinking
  about the final picture of the cluster.
 
   Today each datanode has 2500+ partitions per device, and even after
  tuning the background processes (replicator, auditor & updater) we really
  want to try to lower the partition power.
 
   Since it's not possible to do that without recreating the ring, we can
  have the luxury of recreating it with a much lower partition power, and
  rebalancing / deploying the new ring.
 
   The question is: having a working cluster with *existing data*, is it
  possible to do this and wait for the data to move around *without data
  loss*?
   If so, can we expect an improvement in the overall
  cluster performance?
 
   We have no problem having a non-working cluster (while moving the
  data), even for an entire weekend.
 
  Cheers.
 
 
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How to make the subinterface connection in a VLAN environment?

2013-01-14 Thread Tomokazu Hirai
Hi,

Which network component do you use? nova-network? quantum?

If my understanding is correct, you need to use quantum to have
multiple NICs on a VM. When you create multiple subnets, the VM will boot
with multiple NICs, and these VMs can communicate with each
other over each NIC.
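
For example, a sketch of booting a VM with two NICs under quantum (the image,
flavor and net-id values are placeholders):

nova boot --image <image> --flavor <flavor> \
    --nic net-id=<uuid-of-first-net> --nic net-id=<uuid-of-second-net> my-vm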

Best Regards,

-- Tomokazu Hirai @jedipunkz

From: Ray Sun qsun01...@cienet.com.cn
Subject: [Openstack] How to make the subinterface connection in a VLAN 
environment?
Date: Mon, 14 Jan 2013 19:00:19 +0800

 My OpenStack environment is running under traditional VLAN model.
 Currently, I have two VMs which is running Ubuntu 12.04 server and their
 Fixed IP is:
 
 Host 1: eth0: 172.16.0.3
 Host 2: eth0: 172.16.0.5
 
 Then I add a subinterface to each VM like this:
 
 Host 1: eth0:1 192.168.2.2
 Host 2: eth0:1 192.168.2.3
 
 My network configuration looks like:
 auto eth0
 iface eth0 inet dhcp
 
 auto eth0:1
 iface eth0:1 inet static
 address *192.168.2.3*
 netmask 255.255.255.0
 
 Also in the access  security I open icmp there, but I don't think this is
 related to my issue. Then I try to run ping host2 from host1, and it works.
 
 Then on Host1, I run:
 
 ping 172.16.0.5 -I 192.168.2.2 - *NO RESPONSE*
 
 ping 192.168.2.3 - *Destination Host Unreachable*
 *
 *
 I didn't modify the route table, and it looks like:
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 default 172.16.0.4  0.0.0.0 UG10000 eth0
 172.16.0.0  *   255.255.255.0   U 0  00 eth0
 192.168.2.0 *   255.255.255.0   U 0  00 eth0
 
 Anyone know how can I access the host2 through my subinterface? Thanks a
 lot.
 
 
 - Ray
 Yours faithfully, Kind regards.
 
 CIeNET Technologies (Beijing) Co., Ltd
 Email: qsun01...@cienet.com.cn
 Office Phone: +86-01081470088-7079
 Mobile Phone: +86-13581988291

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Robert van Leeuwen
On Mon, Jan 14, 2013 at 11:02 AM, Leander Bessa Beernaert 
leande...@gmail.com wrote:
Hello all,


I'm trying to upload 200GB of 200KB files to Swift. I'm using 4 clients (each 
hosted on a different machine) with 10 threads each uploading files using the 
official python-swiftclient. Each thread is uploading to a separate container.

I have 5 storage nodes and 1 proxy node. The nodes are all running with a 
replication factor of 3. Each node has a quad-core i3 processor, 4GB of RAM and 
a gigabit network interface.

Is there any way I can speed up this process? At the moment it takes about 20 
seconds per file or more.


It is very likely the system is starved for IO.
As a temporary workaround you can stop the object-replicator and object-auditor 
during the import so that fewer daemons compete for IO.

Some general troubleshooting tips:
Use iotop to look for the processes consuming io's

Assuming you use XFS:
Make sure the filesystem is created with the appropriate inode size as 
described in the docs.
(e.g. mkfs.xfs -i size=1024)

Also, with lots of files you need quite a bit of memory to cache the
inodes.
Use the XFS runtime stats to get some indication about the cache:
http://xfs.org/index.php/Runtime_Stats
xs_dir_lookup and xs_ig_missed will give some indication of how many IOs are 
spent on the inode lookups.

You can look at slabtop to see how much memory is used by the inode cache.
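
For example, a rough sketch of those checks on a storage node (paths and slab 
names can differ per distribution):

iotop -o                                   # only show processes currently doing IO
grep -E '^(dir|ig) ' /proc/fs/xfs/stat     # xs_dir_lookup / xs_ig_missed counters
slabtop -o | grep xfs_inode                # memory used by the XFS inode cache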

Cheers,
Robert

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
By stopping, do you mean halt the service (kill the process) or is it a
change in the configuration file?


On Mon, Jan 14, 2013 at 1:20 PM, Robert van Leeuwen 
robert.vanleeu...@spilgames.com wrote:

  On Mon, Jan 14, 2013 at 11:02 AM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Hello all,


  I'm trying to upload 200GB of 200KB files to Swift. I'm using 4 clients
 (each hosted on a different machine) with 10 threads each uploading files
 using the official python-swiftclient. Each thread is uploading to a
 separate container.

  I have 5 storage nodes and 1 proxy node. The nodes are all running with
 a replication factor of 3. Each node has a quad-core i3 processor, 4GB of
 RAM and a gigabit network interface.

  Is there any way I can speed up this process? At the moment it takes
 about 20 seconds per file or more.


 It is very likely the system is starved on IO's.
 As a temporary workaround you can stop the object-replicator and
 object-auditor during the import to have less daemons competing for IO's.

 Some general troubleshooting tips:
 Use iotop to look for the processes consuming io's

 Assuming you use XFS:
 Make sure the filesystem is created with the appropriate inode size as
 described in the docs.
 (e.g. mkfs.xfs -i size=1024)

 Also with lots of files you need quite a bit of memory to cache the inodes
 into memory.
 Use the xfs runtime stats to get some indication about the cache:
 http://xfs.org/index.php/Runtime_Stats
 xs_dir_lookup and xs_ig_missed will give some indication how much IO's are
 spend on the inode lookups

 You can look at slabtop to see how much memory is used by the inode cache.

 Cheers,
 Robert


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Robert van Leeuwen
 By stopping, do you mean halt the service (kill the process) or is it a 
 change in the configuration file?

Just halt the service.
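
For example, assuming the daemons are managed with swift-init (run on each 
storage node):

swift-init object-replicator stop
swift-init object-auditor stop
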
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Robert van Leeuwen
 According to the info below, I think the current size is 256, right?
 If I format the storage partition, will that automatically clear all the 
 contents from the storage or do I need to clean something else as well?

 Output from xfs_info:
 meta-data=/dev/sda3             isize=256    agcount=4, agsize=13309312 blks

Yes, this is the wrong inode size.

When you format the disk on one node it will start to sync the removed data 
back from the other nodes to this machine (as long as the object-replicator is 
running on the other nodes).
Note that this can take a pretty long time (our nodes, with millions of files, 
took more than a week to sync).

If you want to throw away everything and start with an empty cluster, my guess is 
you would need to stop everything and also remove the account and container 
databases.
I've never done this so I can't tell you for sure if that does not break 
anything or if it needs any manual intervention to re-create the databases.
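
A very rough, untested sketch of what that could look like, assuming the devices 
are mounted under /srv/node and /dev/sda3 is the storage partition from your 
earlier mail (repeat per device, on every node):

swift-init all stop                   # stop all swift daemons
umount /srv/node/sda3
mkfs.xfs -f -i size=1024 /dev/sda3    # recreate with the proper inode size
mount /dev/sda3 /srv/node/sda3
chown swift:swift /srv/node/sda3
swift-init all start

Reformatting every device this way also wipes the account and container 
databases stored on it.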

Cheers,
Robert
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How to make the subinterface connection in a VLAN environment?

2013-01-14 Thread Ray Sun
Hirai,
Thanks for your quick response. Currently I am using nova-network in VLAN
mode. Actually, the real requirement is that the user wants to bind many
subinterfaces to one NIC and connect to the other server using
every subinterface, so we don't want to create so many VLANs for one
project.

- Ray
Yours faithfully, Kind regards.

CIeNET Technologies (Beijing) Co., Ltd
Email: qsun01...@cienet.com.cn
Office Phone: +86-01081470088-7079
Mobile Phone: +86-13581988291


On Mon, Jan 14, 2013 at 8:37 PM, Tomokazu Hirai tomokazu.hi...@gmail.com wrote:

 Hi,

 Which network compornent do you use ? nova-network ? quantum ?

 If my perception is correct, you need to use quantum to have
 multiple nic on VM. When you create multiple subnet, VM will boot
 with multiple NICs and these VMs can take communicate with each
 others with each NICs.

 Best Regards,

 -- Tomokazu Hirai @jedipunkz

 From: Ray Sun qsun01...@cienet.com.cn
 Subject: [Openstack] How to make the subinterface connection in a VLAN
 environment?
 Date: Mon, 14 Jan 2013 19:00:19 +0800

  My OpenStack environment is running under traditional VLAN model.
  Currently, I have two VMs which is running Ubuntu 12.04 server and their
  Fixed IP is:
 
  Host 1: eth0: 172.16.0.3
  Host 2: eth0: 172.16.0.5
 
  Then I add a subinterface to each VM like this:
 
  Host 1: eth0:1 192.168.2.2
  Host 2: eth0:1 192.168.2.3
 
  My network configuration looks like:
  auto eth0
  iface eth0 inet dhcp
 
  auto eth0:1
  iface eth0:1 inet static
  address *192.168.2.3*
  netmask 255.255.255.0
 
  Also in the access  security I open icmp there, but I don't think this
 is
  related to my issue. Then I try to run ping host2 from host1, and it
 works.
 
  Then on Host1, I run:
 
  ping 172.16.0.5 -I 192.168.2.2 - *NO RESPONSE*
 
  ping 192.168.2.3 - *Destination Host Unreachable*
  *
  *
  I didn't modify the route table, and it looks like:
  Kernel IP routing table
  Destination Gateway Genmask Flags Metric RefUse
  Iface
  default 172.16.0.4  0.0.0.0 UG10000
 eth0
  172.16.0.0  *   255.255.255.0   U 0  00
 eth0
  192.168.2.0 *   255.255.255.0   U 0  00
 eth0
 
  Anyone know how can I access the host2 through my subinterface? Thanks a
  lot.
 
 
  - Ray
  Yours faithfully, Kind regards.
 
  CIeNET Technologies (Beijing) Co., Ltd
  Email: qsun01...@cienet.com.cn
  Office Phone: +86-01081470088-7079
  Mobile Phone: +86-13581988291

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OpenStack][Swift] Reset Swift | Clear Swift and Account Database

2013-01-14 Thread Leander Bessa Beernaert
Hello all,

I've come to realize that my swift storage partitions are set up with the
wrong inode size. The only way for me to fix this is to reformat the
partitions. I was wondering how I could reset swift (remove all stored
data) without having to reinstall it.

Regards,

Leander
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
I see. With replication switched off during upload, does inserting into
various containers speed up the process or is it irrelevant?


On Mon, Jan 14, 2013 at 1:49 PM, Robert van Leeuwen 
robert.vanleeu...@spilgames.com wrote:

   According to the info below, i think the current size is 256 right?
  If I format the storage partition, will that automatically clear all the
 contents from the storage or do I need to clean something else as well?
 
  Output from xfs_info:
  meta-data=/dev/sda3  isize=256agcount=4, agsize=13309312
 blks

 Yes, this is the wrong inode size.

 When you format the disk on one node it will start to sync the removed
 data back from the other nodes to this machine (as long as the
 object-replicator is running on the other nodes).
 Note that this can take a pretty long time. (our nodes with millions of
 files more then a week to sync)

 If you want to throw away everything and start with a empty cluster my
 guess is you would need to stop everything and also remove the account and
 container databases.
 I've never done this so I can't tell you for sure if that does not break
 anything or if it needs any manual intervention to re-create the databases.

 Cheers,
 Robert

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Openstack - cinder - volume driver NFS

2013-01-14 Thread Benoit ML
Hello,

I've installed OpenStack Folsom on CentOS with the EPEL repo.
Basically everything works in a 4-node configuration (controller,
network and 2 compute). Quantum is configured with GRE and L3
services.

At this point, I'd like to go further on the storage part.

I'm trying to use the cinder NFS driver to manage volumes.  From what I
read, the driver is basic: create/delete volumes on an NFS share.
But I didn't manage to get it to work ...  can you help me please?
What must I do?

When I look into cinder/volume.log, I see that cinder first creates
an LV, then tries to stat /nfsPath/bigUUID and fails. Why does cinder try
to create an LV, and try to access a directory on the NFS share that
was not created before?


Moreover, if you have any advice for a good basic shared-storage
architecture with OpenStack, please share the information ;)

In cinder.conf :
-
volume_driver=cinder.volume.nfs.NfsDriver
state_path = /var/lib/cinder
nfs_shares_config=/etc/cinder/shares.conf
nfs_mount_point_base = /mnt/exports/volumes/
#nfs_sparsed_volumes = True
#nfs_disk_util = df
lock_path = /var/lib/cinder/tmp
-


Thank you in advance.

Regards,

-- 
--
Benoit

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Robert van Leeuwen
 I see. With replication switched off during upload, does inserting into 
 various containers speed up the process
 or is it irrelevant?

I'm not sure what your question is, but maybe this helps:
In short:
The replication daemon is walking across your files to check if any files 
need to be replicated to other nodes
(e.g. when a node was broken or a new node is added to the cluster).
Because it is scanning your filesystem it eats up io's. That is why it speeds 
up the system if you turn this off.

The number of objects in a container impacts the SQLite databases. Each 
container is a SQLite database.
The SQLite database will get slower with more objects in it.
I think the recommendation for spinning disks is a maximum of a few million per 
container; when using SSDs for the databases you can go quite a bit higher 
though.

Cheers,
Robert
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
Allow me to rephrase. I've read somewhere (can't remember where) that it
would be faster to upload files if they were uploaded to separate
containers. This was suggested for a standard swift installation with a
certain replication factor. Since I'll be uploading the files with the
replicators turned off, does it really matter if I insert a group of them
in separate containers?


On Mon, Jan 14, 2013 at 2:27 PM, Robert van Leeuwen 
robert.vanleeu...@spilgames.com wrote:

   I see. With replication switched off during upload, does inserting
 into various containers speed up the process
  or is it irrelevant?

 I'm not sure what's your question but maybe this helps:
 In short:
 The replication daemon is walking across your files to check if any
 files need to be replicated to other nodes
 (e.g. when a node was broken or a new node is added to the cluster).
 Because it is scanning your filesystem it eats up io's. That is why it
 speeds up the system if you turn this off.

 The number off objects in a container impacts the sqlite databases. Each
 container is a SQLite database.
 The SQLite database will get slower with more objects in them.
 I think the recommendations for spinning disks is max a few million for
 one container, when using SSD's for the databases you can go quite a bit
 higher though.

 Cheers,
 Robert

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [heat] Grizzly-2 development milestone available for Heat

2013-01-14 Thread Martinx - ジェームズ
Hi!

 Is Grizzly-2 available on Ubuntu Raring Ringtail (13.04) daily builds?

Tks!
Thiago

On 10 January 2013 19:44, Steven Dake sd...@redhat.com wrote:

 Hi folks,

 The OpenStack release team has released the second milestone of the
 Grizzly development cycle (grizzly-2).  This is a significant step in
 Heat's incubation, as it is our first milestone release leading to the
 final delivery of OpenStack 2013.1 scheduled for April 4, 2013.

 You can find the full list of new features and fixed bugs, as well as
 tarball downloads at:

  https://launchpad.net/heat/grizzly/grizzly-2

 Features and bugs may be resolved until the next milestone, grizzly-3,
 which will be delivered on February 21st.  Come join the growing
 orchestration development community by contributing to Heat and making
 orchestration in OpenStack world class!

 Regards
 -steve

  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
Ok, thanks for all the tips/help.

Regards,

Leander


On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen 
robert.vanleeu...@spilgames.com wrote:

   Allow me to rephrase.
  I've read somewhere (can't remember where) that it would be faster to
 upload files if they would be uploaded to separate containeres.
  This was suggested for a standard swift installation with a certain
 replication factor.
  Since I'll be uploading the files with the replicators turned off, does
 it really matter if I insert a group of them in separate containeres?

 My guess is this concerns the SQLite database load distribution.
 So yes, it still matters.

 Just to be clear: turning replicators off does not matter at ALL when
 putting files in a healthy cluster.
 Files will be replicated / put on all required nodes at the moment the
 put request is done.
 The put request will only give an OK when there is quorum writing the file
 (the file is stored on more than half of the required object nodes)
 The replicator daemons do not have anything to do with this.

 Cheers,
 Robert

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Robert van Leeuwen
 Allow me to rephrase.
 I've read somewhere (can't remember where) that it would be faster to upload 
 files if they would be uploaded to separate containeres.
 This was suggested for a standard swift installation with a certain 
 replication factor.
 Since I'll be uploading the files with the replicators turned off, does it 
 really matter if I insert a group of them in separate containeres?

My guess is this concerns the SQLite database load distribution.
So yes, it still matters.

Just to be clear: turning replicators off does not matter at ALL when putting 
files in a healthy cluster.
Files will be replicated / put on all required nodes at the moment the put 
request is done.
The put request will only give an OK when there is quorum writing the file (the 
file is stored on more than half of the required object nodes).
The replicator daemons do not have anything to do with this.

Cheers,
Robert
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Glance, boto and image id

2013-01-14 Thread Jay Pipes
On 01/14/2013 06:06 AM, Antonio Messina wrote:
 Apparently not. This is the output of glance image-show:
 
 +---+--+
 | Property  | Value|
 +---+--+
 | Property 'kernel_id'  | e1c78f4d-eca9-4979-9ee0-54019d5f79c2 |
 | Property 'ramdisk_id' | 99c8443e-c3b2-4aef-8bf8-79cc58f127a2 |
 | checksum  | 2f81976cae15c16ef0010c51e3a6c163 |
 | container_format  | ami  |
 | created_at| 2012-12-04T22:59:13  |
 | deleted   | False|
 | disk_format   | ami  |
 | id| 67b612ac-ab20-4227-92fc-adf92841ba8b |
 | is_public | True |
 | min_disk  | 0|
 | min_ram   | 0|
 | name  | cirros-0.3.0-x86_64-uec  |
 | owner | ab267870ac72450d925a437f9b7c064a |
 | protected | False|
 | size  | 25165824 |
 | status| active   |
 | updated_at| 2012-12-04T22:59:14  |
 +---+--+

:( Oh well, was worth a shot.

 Looking at the code in
 https://github.com/openstack/nova/blob/master/nova/api/ec2/ec2utils.py#L70
 and
 https://github.com/openstack/nova/blob/master/nova/api/ec2/ec2utils.py#L126
 it seems that the conversion is simply done by getting an id of type
 integer (not uuid-like string) and then converting it to hex form and
 appending it to the string 'ami-'
 
 Question is: where this id comes from and is there any way to show it in
 the horizon web interface?

There is an integer key in the s3_images table that stores the mapping
between the UUID and the AMI image id:

https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L964

Not sure this is available via Horizon... sorry.
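
If you have database access, a quick way to see the mapping (a sketch; this 
assumes the default nova MySQL database and uses the image UUID shown above):

mysql nova -e "SELECT id, uuid FROM s3_images WHERE uuid = '67b612ac-ab20-4227-92fc-adf92841ba8b';"

The EC2-style id is then 'ami-' plus the hex form of that integer id.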

Best,
-jay

 .a.
 
 
 On Sun, Jan 13, 2013 at 10:17 PM, Jay Pipes jaypi...@gmail.com wrote:
 
 The EC2-style image ID would probably be stored in the custom key/value
 pairs in the Glance image record for the image... so if you do a glance
 image-show IMAGE_UUID you should see the EC2 image ID in there...
 
 -jay
 
 On 01/11/2013 09:51 AM, Antonio Messina wrote:
  Hi all,
 
  I am using boto library to access an folsom installation, and I have a
  few doubts regarding image IDs.
 
  I understand that boto uses ec2-style id for images (something
 like ami
  ami-16digit number) and that nova API converts glance IDs to EC2 id.
  However, it seems that there is no way from the horizon web interface
  nor from euca-tools to get this mapping.
 
  How can I know the EC2 id of an image, having access only to the web
  interface or boto?
 
  I could use the *name* of the instance instead of the ID, but the name
  is not unique...
 
  .a.
 
  --
  antonio.s.mess...@gmail.com
  GC3: Grid Computing Competence Center
  http://www.gc3.uzh.ch/
  University of Zurich
  Winterthurerstrasse 190
  CH-8057 Zurich Switzerland
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 
 
 -- 
 antonio.s.mess...@gmail.com
 GC3: Grid Computing Competence Center
 http://www.gc3.uzh.ch/
 University of Zurich
 Winterthurerstrasse 190
 CH-8057 Zurich Switzerland

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack - cinder - volume driver NFS

2013-01-14 Thread Ray Sun
Here's my configuration for cinder using NFS. Also, I submitted a bug about
creating a volume snapshot and a volume from a snapshot here; I already fixed it
locally but haven't submitted the fix yet:
https://bugs.launchpad.net/cinder/+bug/1097266

/etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinder:cinder-administra...@controller.cienet.com.cn/cinder
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper=ietadm
volume_name_template = volume-%s
volume_group = vg_cinder
#volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
#osapi_volume_listen_port=5900
volumes_dir=/var/lib/cinder/volumes/
rabbit_host=controller.cienet.com.cn
volume_driver=cinder.volume.nfs.NfsDriver
# defined in cinder.volume.drivers.nfs
nfs_shares_config=/var/lib/cinder/nfsshare
nfs_mount_point_base=/var/lib/cinder/volumes
nfs_disk_util=df
nfs_sparsed_volumes=true
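
For reference, the /var/lib/cinder/nfsshare file above just lists one NFS 
export per line; the server address and path here are only examples:

192.168.0.10:/exports/cinder-volumes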

/etc/cinder/rootwrap.d/volume.filters (add):
# cinder/volume/nfs.py
stat: CommandFilter, /usr/bin/stat, root
mount: CommandFilter, /bin/mount, root
df: CommandFilter, /bin/df, root
truncate: CommandFilter, /usr/bin/truncate, root
chmod: CommandFilter, /bin/chmod, root
rm: CommandFilter, /bin/rm, root

/etc/nova/nova.conf (add):
# for cinder use volume as nfs
libvirt_volume_drivers=iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver,local=nova.virt.libvirt.volume.LibvirtVolumeDriver,fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver,rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,nfs=nova.virt.libvirt.volume_nfs.NfsVolumeDriver

- Ray
Yours faithfully, Kind regards.

CIeNET Technologies (Beijing) Co., Ltd
Email: qsun01...@cienet.com.cn
Office Phone: +86-01081470088-7079
Mobile Phone: +86-13581988291


On Mon, Jan 14, 2013 at 10:22 PM, Benoit ML ben4...@gmail.com wrote:

 Hello,

 I've installed openstack Folsom on centos with the epel repo.
 Basically the all things works on 4 nodes configuration(controller,
 network and 2 compute). Quantum is configured with GRE and L3
 services.

 At my point, i'd like to go futher on the storage part.

 I'm trying to use the cinder NFS driver to manage volume.  From what I
 read, the driver is basic : créate/delete volume on an NFS share.
 Well but I didn't manage to get it work ...  can you help me please ?
 what I must do ?

 When I look into the cinder/volume.log, I see that cinder first create
 a LV, and  try to stat /nfsPath/bigUUID  and fail. Why cinder try
 to créate a LV ? and try to access a directory on the nfs share thaht
 is not create before ?


 Moreover if you have any advice for a good basic shared storage
 architecture with Openstack, plz shared the information ;)

 In cinder.conf :
 -
 volume_driver=cinder.volume.nfs.NfsDriver
 state_path = /var/lib/cinder
 nfs_shares_config=/etc/cinder/shares.conf
 nfs_mount_point_base = /mnt/exports/volumes/
 #nfs_sparsed_volumes = True
 #nfs_disk_util = df
 lock_path = /var/lib/cinder/tmp
 -


 Thank you in advance.

 Regards,

 --
 --
 Benoit

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] Change the partition power to recreate the RING

2013-01-14 Thread Tong Li

John and swifters,
I see this as a big problem, and I think that the scenario
described by Alejandro is a very common one. I am wondering if it is
possible to have two rings (one with the new extended power, one
with the existing ring power): when significant changes are made to the
hardware or partitioning, a new ring gets started with a command, and new data
into Swift uses the new ring, while existing data on the existing ring
stays available and slowly (without impacting normal use) but automatically
moves to the new ring; once the existing ring shrinks to size zero,
that ring can be removed. The idea is to have two virtual
Swift systems working side by side, with the migration from the existing ring
to the new ring being done without interrupting the service. Can we put this
topic/feature up for discussion at the next summit, and consider it a
high-priority feature to work on for coming releases?

Thanks.

Tong Li
Emerging Technologies & Standards
Building 501/B205
liton...@us.ibm.com



From:   John Dickinson m...@not.mn
To: Alejandro Comisario alejandro.comisa...@mercadolibre.com,
Cc: openstack-operat...@lists.openstack.org
openstack-operat...@lists.openstack.org, openstack
openstack@lists.launchpad.net
Date:   01/11/2013 04:28 PM
Subject:    Re: [Openstack] [SWIFT] Change the partition power to recreate
the RING
Sent by:    openstack-bounces+litong01=us.ibm@lists.launchpad.net



In effect, this would be a complete replacement of your rings, and that is
essentially a whole new cluster. All of the existing data would need to be
rehashed into the new ring before it is available.

There is no process that rehashes the data to ensure that it is still in
the correct partition. Replication only ensures that the partitions are on
the right drives.

To change the number of partitions, you will need to GET all of the data
from the old ring and PUT it to the new ring. A more complicated (but
perhaps more efficient) solution may include something like walking each
drive and rehashing+moving the data to the right partition and then letting
replication settle it down.

Either way, 100% of your existing data will need to at least be rehashed
(and probably moved). Your CPU (hashing), disks (read+write), RAM
(directory walking), and network (replication) may all be limiting factors
in how long it will take to do this. Your per-disk free space may also
determine what method you choose.

I would not expect any data loss while doing this, but you will probably
have availability issues, depending on the data access patterns.

I'd like to eventually see something in swift that allows for changing the
partition power in existing rings, but that will be
hard/tricky/non-trivial.

Good luck.

--John


On Jan 11, 2013, at 1:17 PM, Alejandro Comisario
alejandro.comisa...@mercadolibre.com wrote:

 Hi guys.
 We've created a swift cluster several months ago, the things is that righ
now we cant add hardware and we configured lots of partitions thinking
about the final picture of the cluster.

 Today each datanodes is having 2500+ partitions per device, and even
tuning the background processes ( replicator, auditor  updater ) we really
want to try to lower the partition power.

 Since its not possible to do that without recreating the ring, we can
have the luxury of recreate it with a very lower partition power, and
rebalance / deploy the new ring.

 The question is, having a working cluster with *existing data* is it
possible to do this and wait for the data to move around *without data
loss* ???
 If so, it might be true to wait for an improvement in the overall cluster
performance ?

 We have no problem to have a non working cluster (while moving the data)
even for an entire weekend.

 Cheers.



[attachment smime.p7s deleted by Tong Li/Raleigh/IBM]
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Add NIC to running instance?

2013-01-14 Thread Wojciech Dec
Hi All,

is there a nova command to add a NIC to a running instance (i.e. without the
need to do nova boot ... --nic 1 --nic new-nic)?

The documentation doesn't turn up anything...

Regards,
W. Dec
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack - cinder - volume driver NFS

2013-01-14 Thread Benoit ML
Hello,

Thank you for your configuration, it will help me a lot.

One last question: just to confirm, what exactly goes in the file referenced by
nfs_shares_config (/var/lib/cinder/nfsshare), please?

Thx in advance

2013/1/14 Ray Sun qsun01...@cienet.com.cn:
 Here's my configuration for cinder using NFS, also I submit bug for creating
 volume snapshot and volume from snapshot here, I already fixed it in my
 local, not submit yet:
 https://bugs.launchpad.net/cinder/+bug/1097266

 /etc/cinder/cinder.conf
 [DEFAULT]
 rootwrap_config=/etc/cinder/rootwrap.conf
 sql_connection =
 mysql://cinder:cinder-administra...@controller.cienet.com.cn/cinder
 api_paste_confg = /etc/cinder/api-paste.ini
 iscsi_helper=ietadm
 volume_name_template = volume-%s
 volume_group = vg_cinder
 #volume_group = cinder-volumes
 verbose = True
 auth_strategy = keystone
 #osapi_volume_listen_port=5900
 volumes_dir=/var/lib/cinder/volumes/
 rabbit_host=controller.cienet.com.cn
 volume_driver=cinder.volume.nfs.NfsDriver
  defined in cinder.volume.drivers.nfs 
 nfs_shares_config=/var/lib/cinder/nfsshare
 nfs_mount_point_base=/var/lib/cinder/volumes
 nfs_disk_util=df
 nfs_sparsed_volumes=true

 /etc/cinder/rootwrap.d/volume.filters add
 # cinder/volume/nfs.py
 stat: CommandFilter, /usr/bin/stat, root
 mount: CommandFilter, /bin/mount, root
 df: CommandFilter, /bin/df, root
 truncate: CommandFilter, /usr/bin/truncate, root
 chmod: CommandFilter, /bin/chmod, root
 rm: CommandFilter, /bin/rm, root

 /etc/nova/nova.conf add
 # for cinder use volume as nfs
 libvirt_volume_drivers=iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver,local=nova.virt.libvirt.volume.LibvirtVolumeDriver,fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver,rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,nfs=nova.virt.libvirt.volume_nfs.NfsVolumeDriver

 - Ray
 Yours faithfully, Kind regards.

 CIeNET Technologies (Beijing) Co., Ltd
 Email: qsun01...@cienet.com.cn
 Office Phone: +86-01081470088-7079
 Mobile Phone: +86-13581988291


 On Mon, Jan 14, 2013 at 10:22 PM, Benoit ML ben4...@gmail.com wrote:

 Hello,

 I've installed openstack Folsom on centos with the epel repo.
 Basically the all things works on 4 nodes configuration(controller,
 network and 2 compute). Quantum is configured with GRE and L3
 services.

 At my point, i'd like to go futher on the storage part.

 I'm trying to use the cinder NFS driver to manage volume.  From what I
 read, the driver is basic : créate/delete volume on an NFS share.
 Well but I didn't manage to get it work ...  can you help me please ?
 what I must do ?

 When I look into the cinder/volume.log, I see that cinder first create
 a LV, and  try to stat /nfsPath/bigUUID  and fail. Why cinder try
 to créate a LV ? and try to access a directory on the nfs share thaht
 is not create before ?


 Moreover if you have any advice for a good basic shared storage
 architecture with Openstack, plz shared the information ;)

 In cinder.conf :
 -
 volume_driver=cinder.volume.nfs.NfsDriver
 state_path = /var/lib/cinder
 nfs_shares_config=/etc/cinder/shares.conf
 nfs_mount_point_base = /mnt/exports/volumes/
 #nfs_sparsed_volumes = True
 #nfs_disk_util = df
 lock_path = /var/lib/cinder/tmp
 -


 Thank you in advance.

 Regards,

 --
 --
 Benoit

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





-- 
--
Benoit

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] Change the partition power to recreate the RING

2013-01-14 Thread John Dickinson
Yes, I think it would be a great topic for the summit.

--John


On Jan 14, 2013, at 7:54 AM, Tong Li liton...@us.ibm.com wrote:

 John and swifters,
 I see this problem as a big problem and I think that the scenario described 
 by Alejandro is a very common scenario. I am thinking if it is possible to 
 have like two rings (one with the newer extended power, one with the existing 
 ring power), when significant changes made to the hardware, partition, a new 
 ring get started with a command, and new data into Swift will use the new 
 ring, and existing data on the existing ring still available and slowly (not 
 impact the normal use) but automatically moves to the new ring, once the 
 existing ring shrinks to the size zero, then that ring can be removed. The 
 idea is to sort of having two virtual Swift systems working side by side, the 
 migration from existing ring to new ring being done without interrupting the 
 service. Can we put this topic/feature as one to be discussed during the next 
 summit and to be considered as a high priority feature to work on for coming 
 releases?
 
 Thanks.
 
 Tong Li
 Emerging Technologies  Standards
 Building 501/B205
 liton...@us.ibm.com
 
 John Dickinson ---01/11/2013 04:28:47 PM---In effect, this would 
 be a complete replacement of your rings, and that is essentially a whole new c
 
 From: John Dickinson m...@not.mn
 To:   Alejandro Comisario alejandro.comisa...@mercadolibre.com, 
 Cc:   openstack-operat...@lists.openstack.org 
 openstack-operat...@lists.openstack.org, openstack 
 openstack@lists.launchpad.net
 Date: 01/11/2013 04:28 PM
 Subject:  Re: [Openstack] [SWIFT] Change the partition power to recreate 
 the  RING
 Sent by:  openstack-bounces+litong01=us.ibm@lists.launchpad.net
 
 
 
 In effect, this would be a complete replacement of your rings, and that is 
 essentially a whole new cluster. All of the existing data would need to be 
 rehashed into the new ring before it is available.
 
 There is no process that rehashes the data to ensure that it is still in the 
 correct partition. Replication only ensures that the partitions are on the 
 right drives.
 
 To change the number of partitions, you will need to GET all of the data from 
 the old ring and PUT it to the new ring. A more complicated (but perhaps more 
 efficient) solution may include something like walking each drive and 
 rehashing+moving the data to the right partition and then letting replication 
 settle it down.
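
In python-swiftclient terms, a minimal sketch of that GET-and-re-PUT approach
could look like the following; the storage URLs, token and container handling
here are placeholder assumptions, not a tested migration tool:

from swiftclient import client

# Placeholder endpoints/token: old cluster (old ring) and new cluster (new ring).
OLD_URL = 'http://old-proxy:8080/v1/AUTH_tenant'
NEW_URL = 'http://new-proxy:8080/v1/AUTH_tenant'
TOKEN = 'AUTH_tk...'

def copy_container(container):
    # Full listing of the container as seen through the old ring.
    _headers, objects = client.get_container(OLD_URL, TOKEN, container,
                                             full_listing=True)
    for obj in objects:
        name = obj['name']
        # GET from the old cluster, PUT into the cluster running the new ring.
        headers, body = client.get_object(OLD_URL, TOKEN, container, name)
        client.put_object(NEW_URL, TOKEN, container, name, contents=body,
                          content_type=headers.get('content-type'))

_headers, containers = client.get_account(OLD_URL, TOKEN, full_listing=True)
for c in containers:
    client.put_container(NEW_URL, TOKEN, c['name'])
    copy_container(c['name'])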
 
 Either way, 100% of your existing data will need to at least be rehashed (and 
 probably moved). Your CPU (hashing), disks (read+write), RAM (directory 
 walking), and network (replication) may all be limiting factors in how long 
 it will take to do this. Your per-disk free space may also determine what 
 method you choose.
 
 I would not expect any data loss while doing this, but you will probably have 
 availability issues, depending on the data access patterns.
 
 I'd like to eventually see something in swift that allows for changing the 
 partition power in existing rings, but that will be hard/tricky/non-trivial.
 
 Good luck.
 
 --John
 
 
 On Jan 11, 2013, at 1:17 PM, Alejandro Comisario 
 alejandro.comisa...@mercadolibre.com wrote:
 
  Hi guys.
  We've created a swift cluster several months ago, the things is that righ 
  now we cant add hardware and we configured lots of partitions thinking 
  about the final picture of the cluster.
  
  Today each datanodes is having 2500+ partitions per device, and even tuning 
  the background processes ( replicator, auditor  updater ) we really want 
  to try to lower the partition power.
  
  Since its not possible to do that without recreating the ring, we can have 
  the luxury of recreate it with a very lower partition power, and rebalance 
  / deploy the new ring.
  
  The question is, having a working cluster with *existing data* is it 
  possible to do this and wait for the data to move around *without data 
  loss* ???
  If so, it might be true to wait for an improvement in the overall cluster 
  performance ?
  
  We have no problem to have a non working cluster (while moving the data) 
  even for an entire weekend.
  
  Cheers.
  
  
 
 [attachment smime.p7s deleted by Tong Li/Raleigh/IBM] 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
Well, I've fixed the node size and disabled all the replicator and
auditor processes. However, it is even slower now than it was before :/.
Any suggestions?


On Mon, Jan 14, 2013 at 3:23 PM, Leander Bessa Beernaert 
leande...@gmail.com wrote:

 Ok, thanks for all the tips/help.

 Regards,

 Leander


 On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen 
 robert.vanleeu...@spilgames.com wrote:

   Allow me to rephrase.
  I've read somewhere (can't remember where) that it would be faster to
 upload files if they would be uploaded to separate containers.
  This was suggested for a standard swift installation with a certain
 replication factor.
  Since I'll be uploading the files with the replicators turned off, does
 it really matter if I insert a group of them in separate containers?

 My guess is this concerns the SQLite database load distribution.
 So yes, it still matters.
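
For what it's worth, a minimal sketch of spreading PUTs over several containers
with python-swiftclient; the auth endpoint, credentials, container naming and
container count below are made-up placeholders, just to show the idea of hashing
the object name to pick a container:

import hashlib
from swiftclient import client

AUTH_URL = 'http://keystone:5000/v2.0'   # placeholder endpoint
NUM_CONTAINERS = 20                      # spread uploads over this many containers

def pick_container(name):
    # Hash the object name so PUTs (and the container SQLite updates behind
    # them) spread evenly over NUM_CONTAINERS containers.
    digest = int(hashlib.md5(name.encode('utf-8')).hexdigest(), 16)
    return 'uploads_%02d' % (digest % NUM_CONTAINERS)

conn = client.Connection(authurl=AUTH_URL, user='demo', key='secret',
                         tenant_name='demo', auth_version='2.0')

def upload(path, name):
    container = pick_container(name)
    conn.put_container(container)   # effectively a no-op if it already exists
    with open(path, 'rb') as f:
        conn.put_object(container, name, contents=f)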

 Just to be clear: turning replicators off does not matter at ALL when
 putting files in a healthy cluster.
 Files will be replicated / put on all required nodes at the moment the
 put request is done.
 The put request will only give an OK when there is quorum writing the
 file (the file is stored on more than half of the required object nodes)
 The replicator daemons do not have anything to do with this.

 Cheers,
 Robert

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] Change the partition power to recreate the RING

2013-01-14 Thread Chuck Thier
Hi Alejandro,

I really doubt that partition size is causing these issues.  It can be
difficult to debug these types of issues without access to the
cluster, but I can think of a couple of things to look at.

1.  Check your disk io usage and io wait on the storage nodes.  If
that seems abnormally high, then that could be one of the sources of
problems.  If this is the case, then the first things that I would
look at are the auditors, as they can use up a lot of disk io if not
properly configured.  I would try turning them off for a bit
(swift-*-auditor) and see if that makes any difference.

2.  Check your network io usage.  You haven't described what type of
network you have going to the proxies, but if they share a single GigE
interface then, if my quick calculations are correct, you could be
saturating the network (rough numbers in the sketch below).

3.  Check your CPU usage.  I listed this one last as you have said
that you have already worked at tuning the number of workers (though I
would be interested to hear how many workers you have running for each
service).  The main thing to look for, is to see if all of your
workers are maxed out on CPU, if so, then you may need to bump
workers.

4.  SSL Termination?  Where are you terminating the SSL connection?
If you are terminating SSL in Swift directly with the swift proxy,
then that could also be a source of issue.  This was only meant for
dev and testing, and you should use an SSL terminating load balancer
in front of the swift proxies.

That's what I could think of right off the top of my head.
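
To put rough numbers on point 2 above, using the figures quoted below (10,000+
PUTs per minute of objects up to roughly 150 KB) and assuming 3 replicas:

puts_per_min = 10000              # from the numbers quoted below
object_size = 150 * 1024          # worst case, ~150 KB per object
replicas = 3                      # assumed replica count

bytes_per_sec = puts_per_min * object_size * replicas / 60.0
print('%.0f MB/s' % (bytes_per_sec / (1024 * 1024)))   # ~73 MB/s
print('%.2f Gbit/s' % (bytes_per_sec * 8 / 1e9))       # ~0.61 Gbit/s, most of a GigE link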

--
Chuck

On Mon, Jan 14, 2013 at 5:45 AM, Alejandro Comisario
alejandro.comisa...@mercadolibre.com wrote:
 Chuck / John.
 We are handling 50,000 requests per minute ( where 10,000+ are PUTs of small
 objects, from 10KB to 150KB )

 We are using swift 1.7.4 with keystone token caching so no latency over
 there.
 We have 12 proxies and 24 datanodes divided into 4 zones ( each datanode
 has 48GB of RAM, 2 hexacore CPUs and 4 devices of 3TB each )

 The workers that are putting objects into swift are seeing awful
 performance, and so are we,
 with peaks of 2 to 15 seconds per PUT operation coming from the datanodes.
 We tuned db_preallocation, disable_fallocate, workers and concurrency, but we
 can't reach the request rate that we need ( we need 24,000 PUTs per minute of
 small objects ) and we don't seem to find where the problem is, other than in
 the datanodes.

 Maybe worth pasting our config over here?
 Thanks in advance.

 alejandro

 On 12 Jan 2013 02:01, Chuck Thier cth...@gmail.com wrote:

 Looking at this from a different perspective.  Having 2500 partitions
 per drive shouldn't be an absolutely horrible thing either.  Do you
 know how many objects you have per partition?  What types of problems
 are you seeing?

 --
 Chuck

 On Fri, Jan 11, 2013 at 3:28 PM, John Dickinson m...@not.mn wrote:
  If effect, this would be a complete replacement of your rings, and that
  is essentially a whole new cluster. All of the existing data would need to
  be rehashed into the new ring before it is available.
 
  There is no process that rehashes the data to ensure that it is still in
  the correct partition. Replication only ensures that the partitions are on
  the right drives.
 
  To change the number of partitions, you will need to GET all of the data
  from the old ring and PUT it to the new ring. A more complicated, but
  perhaps more efficient) solution may include something like walking each
  drive and rehashing+moving the data to the right partition and then letting
  replication settle it down.
 
  Either way, 100% of your existing data will need to at least be rehashed
  (and probably moved). Your CPU (hashing), disks (read+write), RAM 
  (directory
  walking), and network (replication) may all be limiting factors in how long
  it will take to do this. Your per-disk free space may also determine what
  method you choose.
 
  I would not expect any data loss while doing this, but you will probably
  have availability issues, depending on the data access patterns.
 
  I'd like to eventually see something in swift that allows for changing
  the partition power in existing rings, but that will be
  hard/tricky/non-trivial.
 
  Good luck.
 
  --John
 
 
  On Jan 11, 2013, at 1:17 PM, Alejandro Comisario
  alejandro.comisa...@mercadolibre.com wrote:
 
  Hi guys.
  We've created a swift cluster several months ago, the things is that
  righ now we cant add hardware and we configured lots of partitions 
  thinking
  about the final picture of the cluster.
 
  Today each datanodes is having 2500+ partitions per device, and even
  tuning the background processes ( replicator, auditor  updater ) we 
  really
  want to try to lower the partition power.
 
  Since its not possible to do that without recreating the ring, we can
  have the luxury of recreate it with a very lower partition power, and
  rebalance / deploy the new ring.
 
  The question is, having a working cluster with *existing data* is it
  possible to do 

Re: [Openstack] [OpenStack][Swift] Reset Swift | Clear Swift and Account Database

2013-01-14 Thread Chuck Thier
Hi Leander,

The following assumes that the cluster isn't in production yet:

1.  Stop all services on all machines
2.  Format and remount all storage devices
3.  Re-create rings with the correct partition size (example commands below)
4.  Push new rings out to all servers
5.  Start services back up and test.
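
For step 3, the swift-ring-builder invocations look roughly like the following;
the partition power (14 here), replica count, min_part_hours, zones, IPs, device
names and weights are placeholders to adapt to your hardware:

swift-ring-builder object.builder create 14 3 1
swift-ring-builder object.builder add z1-192.168.1.10:6000/sdb1 100
swift-ring-builder object.builder rebalance

(repeat for container.builder on port 6001 and account.builder on port 6002,
then copy the resulting *.ring.gz files to every node)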

--
Chuck

On Mon, Jan 14, 2013 at 8:02 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 Hello all,

 I've come to realize that my swift storage partitions are setup with the
 wrong node size. The only way for me to fix this is to format the
 partitions. I was wondering how I could reset swift (remove all data from
 stored files) without having to install it again.

 Regards,

 Leander

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
Hey Leander,

Can you post what performance you are getting?  If they are all
sharing the same GigE network, you might also check that the links
aren't being saturated, as it is pretty easy to saturate pushing 200k
files around.

--
Chuck

On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 Well, I've fixed the node size and disabled the all the replicator and
 auditor processes. However, it is even slower now than it was before :/. Any
 suggestions?


 On Mon, Jan 14, 2013 at 3:23 PM, Leander Bessa Beernaert
 leande...@gmail.com wrote:

 Ok, thanks for all the tips/help.

 Regards,

 Leander


 On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen
 robert.vanleeu...@spilgames.com wrote:

  Allow me to rephrase.
  I've read somewhere (can't remember where) that it would be faster to
  upload files if they would be uploaded to separate containeres.
  This was suggested for a standard swift installation with a certain
  replication factor.
  Since I'll be uploading the files with the replicators turned off, does
  it really matter if I insert a group of them in separate containeres?

 My guess is this concerns the SQLite database load distribution.
 So yes, it still matters.

 Just to be clear: turning replicators off does not matter at ALL when
 putting files in a healthy cluster.
 Files will be replicated / put on all required nodes at the moment the
 put request is done.
 The put request will only give an OK when there is quorum writing the
 file (the file is stored on more than half of the required object nodes)
 The replicator daemons do not have anything to do with this.

 Cheers,
 Robert

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
I'm getting around 5-6.5 GB a day of bytes written on Swift. I calculated
this by calling swift stat && sleep 60s && swift stat. I did some
calculation based on those values to get to the end result.

Currently I'm resetting swift with a node size of 64, since 90% of the
files are less than 70KB in size. I think that might help.


On Mon, Jan 14, 2013 at 4:34 PM, Chuck Thier cth...@gmail.com wrote:

 Hey Leander,

 Can you post what performance you are getting?  If they are all
 sharing the same GigE network, you might also check that the links
 aren't being saturated, as it is pretty easy to saturate pushing 200k
 files around.

 --
 Chuck

 On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  Well, I've fixed the node size and disabled the all the replicator and
  auditor processes. However, it is even slower now than it was before :/.
 Any
  suggestions?
 
 
  On Mon, Jan 14, 2013 at 3:23 PM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
 
  Ok, thanks for all the tips/help.
 
  Regards,
 
  Leander
 
 
  On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen
  robert.vanleeu...@spilgames.com wrote:
 
   Allow me to rephrase.
   I've read somewhere (can't remember where) that it would be faster to
   upload files if they would be uploaded to separate containeres.
   This was suggested for a standard swift installation with a certain
   replication factor.
   Since I'll be uploading the files with the replicators turned off,
 does
   it really matter if I insert a group of them in separate containeres?
 
  My guess is this concerns the SQLite database load distribution.
  So yes, it still matters.
 
  Just to be clear: turning replicators off does not matter at ALL when
  putting files in a healthy cluster.
  Files will be replicated / put on all required nodes at the moment
 the
  put request is done.
  The put request will only give an OK when there is quorum writing the
  file (the file is stored on more than half of the required object
 nodes)
  The replicator daemons do not have anything to do with this.
 
  Cheers,
  Robert
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 
 
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
Using swift stat probably isn't the best way to determine cluster
performance, as those stats are updated async, and could be delayed
quite a bit as you are heavily loading the cluster.  It also might be
worthwhile to use a tool like swift-bench to test your cluster to make
sure it is properly setup before loading data into the system.

--
Chuck

On Mon, Jan 14, 2013 at 10:38 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 I'm getting around 5-6.5 GB a day of bytes written on Swift. I calculated
 this by calling swift stat  sleep 60s  swift stat. I did some
 calculation based on those values to get to the end result.

 Currently I'm resetting swift with a node size of 64, since 90% of the files
 are less than 70KB in size. I think that might help.


 On Mon, Jan 14, 2013 at 4:34 PM, Chuck Thier cth...@gmail.com wrote:

 Hey Leander,

 Can you post what performance you are getting?  If they are all
 sharing the same GigE network, you might also check that the links
 aren't being saturated, as it is pretty easy to saturate pushing 200k
 files around.

 --
 Chuck

 On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  Well, I've fixed the node size and disabled the all the replicator and
  auditor processes. However, it is even slower now than it was before :/.
  Any
  suggestions?
 
 
  On Mon, Jan 14, 2013 at 3:23 PM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
 
  Ok, thanks for all the tips/help.
 
  Regards,
 
  Leander
 
 
  On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen
  robert.vanleeu...@spilgames.com wrote:
 
   Allow me to rephrase.
   I've read somewhere (can't remember where) that it would be faster
   to
   upload files if they would be uploaded to separate containeres.
   This was suggested for a standard swift installation with a certain
   replication factor.
   Since I'll be uploading the files with the replicators turned off,
   does
   it really matter if I insert a group of them in separate
   containeres?
 
  My guess is this concerns the SQLite database load distribution.
  So yes, it still matters.
 
  Just to be clear: turning replicators off does not matter at ALL when
  putting files in a healthy cluster.
  Files will be replicated / put on all required nodes at the moment
  the
  put request is done.
  The put request will only give an OK when there is quorum writing the
  file (the file is stored on more than half of the required object
  nodes)
  The replicator daemons do not have anything to do with this.
 
  Cheers,
  Robert
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 
 
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
I'm currently using the swift client to upload files, would you recommend
another approach?


On Mon, Jan 14, 2013 at 4:43 PM, Chuck Thier cth...@gmail.com wrote:

 Using swift stat probably isn't the best way to determine cluster
 performance, as those stats are updated async, and could be delayed
 quite a bit as you are heavily loading the cluster.  It also might be
 worthwhile to use a tool like swift-bench to test your cluster to make
 sure it is properly setup before loading data into the system.

 --
 Chuck

 On Mon, Jan 14, 2013 at 10:38 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  I'm getting around 5-6.5 GB a day of bytes written on Swift. I calculated
  this by calling swift stat  sleep 60s  swift stat. I did some
  calculation based on those values to get to the end result.
 
  Currently I'm resetting swift with a node size of 64, since 90% of the
 files
  are less than 70KB in size. I think that might help.
 
 
  On Mon, Jan 14, 2013 at 4:34 PM, Chuck Thier cth...@gmail.com wrote:
 
  Hey Leander,
 
  Can you post what performance you are getting?  If they are all
  sharing the same GigE network, you might also check that the links
  aren't being saturated, as it is pretty easy to saturate pushing 200k
  files around.
 
  --
  Chuck
 
  On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
   Well, I've fixed the node size and disabled the all the replicator and
   auditor processes. However, it is even slower now than it was before
 :/.
   Any
   suggestions?
  
  
   On Mon, Jan 14, 2013 at 3:23 PM, Leander Bessa Beernaert
   leande...@gmail.com wrote:
  
   Ok, thanks for all the tips/help.
  
   Regards,
  
   Leander
  
  
   On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen
   robert.vanleeu...@spilgames.com wrote:
  
Allow me to rephrase.
I've read somewhere (can't remember where) that it would be faster
to
upload files if they would be uploaded to separate containeres.
This was suggested for a standard swift installation with a
 certain
replication factor.
Since I'll be uploading the files with the replicators turned off,
does
it really matter if I insert a group of them in separate
containeres?
  
   My guess is this concerns the SQLite database load distribution.
   So yes, it still matters.
  
   Just to be clear: turning replicators off does not matter at ALL
 when
   putting files in a healthy cluster.
   Files will be replicated / put on all required nodes at the moment
   the
   put request is done.
   The put request will only give an OK when there is quorum writing
 the
   file (the file is stored on more than half of the required object
   nodes)
   The replicator daemons do not have anything to do with this.
  
   Cheers,
   Robert
  
   ___
   Mailing list: https://launchpad.net/~openstack
   Post to : openstack@lists.launchpad.net
   Unsubscribe : https://launchpad.net/~openstack
   More help   : https://help.launchpad.net/ListHelp
  
  
  
  
   ___
   Mailing list: https://launchpad.net/~openstack
   Post to : openstack@lists.launchpad.net
   Unsubscribe : https://launchpad.net/~openstack
   More help   : https://help.launchpad.net/ListHelp
  
 
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Reset Swift | Clear Swift and Account Database

2013-01-14 Thread Leander Bessa Beernaert
Thanks!

Regards,

Leander


On Mon, Jan 14, 2013 at 4:29 PM, Chuck Thier cth...@gmail.com wrote:

 Hi Leander,

 The following assumes that the cluster isn't in production yet:

 1.  Stop all services on all machines
 2.  Format and remount all storage devices
 3.  Re-create rings with the correct partition size
 4.  Push new rings out to all servers
 5.  Start services back up and test.

 --
 Chuck

 On Mon, Jan 14, 2013 at 8:02 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  Hello all,
 
  I've come to realize that my swift storage partitions are setup with the
  wrong node size. The only way for me to fix this is to format the
  partitions. I was wondering how I could reset swift (remove all data from
  stored files) without having to install it again.
 
  Regards,
 
  Leander
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Total Network Confusion

2013-01-14 Thread James Condron
Hi all,

I've recently started playing with (and working with) OpenStack with a view to 
migrate our production infrastructure from esx 4 to Essex.

My issue, or at least utter idiocy, is in the network configuration. Basically 
I can't work out whether in the configuration of OpenStack I have done 
something daft, on the network something daft or I've not understood the 
technology properly.

NB: I can get to the outside world from my VMs; I don't want to confuse things 
further.

As attached is a diagram I knocked up to hopefully make this simpler, though I 
hope I can explain it simply with:

*
Given both public and private interfaces on my server being on the same network 
and infrastructure how would one go about accessing VMs via their internal IP 
and not have to worry about a VPN or Public IPs?
*

My corporate network  works on simple vlans; I have a vlan for my production 
boxen, one for development, one for PCs, telephony, etc. etc. These are pretty 
standard.

The public, eth0 NIC on my compute node (Single node setup, nothing overly 
fancy; pretty vanilla) is on my production vlan and everything is accessible.
the second nic, eth1, is supposedly on a vlan for this specific purpose.

I am hoping to be able to access these internal IPs on their... Internal IPs 
(For want of a better phrase). Is this possible? I'm reasonably confident this 
isn't a routing issue as I can ping the eth1 IP from the switch:

#ping 10.12.0.1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.12.0.1, timeout is 2 seconds:
!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/8 ms

But none of the ones assigned to VMs:

#ping 10.12.0.4

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.12.0.4, timeout is 2 seconds:
.
Success rate is 0 percent (0/5)

Or for those looking at the attached diagram: vlan101 is great and works 
fine; what do I need to do (If at all possible) to get vlan102 listening?
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Total Network Confusion

2013-01-14 Thread James Condron
Brilliant; sorry- I didn't attach the diagram.

On 14 Jan 2013, at 16:52, James Condron 
james.cond...@simplybusiness.co.uk 
wrote:

Hi all,

I've recently started playing with (and working with) OpenStack with a view to 
migrate our production infrastructure from esx 4 to Essex.

My issue, or at least utter idiocy, is in the network configuration. Basically 
I can't work out whether in the configuration of OpenStack I have done 
something daft, on the network something daft or I've not understood the 
technology properly.

NB: I can get to the outside world form my VMs; I don't want to confuse things 
further.

As attached is a diagram I knocked up to hopefully make this simpler, though I 
hope I can explain it simply with:

*
Given both public and private interfaces on my server being on the same network 
and infrastructure how would one go about accessing VMs via their internal IP 
and not have to worry about a VPN or Public IPs?
*

My corporate network  works on simple vlans; I have a vlan for my production 
boxen, one for development, one for PCs, telephony, etc. etc. These are pretty 
standard.

The public, eth0 NIC on my compute node (Single node setup, nothing overly 
fancy; pretty vanilla) is on my production vlan and everything is accessible.
the second nic, eth1, is supposedly on a vlan for this specific purpose.

I am hoping to be able to access these internal IPs on their... Internal IPs 
(For want of a better phrase). Is this possible? I'm reasonably confident this 
isn't a routing issue as I can ping the eth1 IP from the switch:

#ping 10.12.0.1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.12.0.1, timeout is 2 seconds:
!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/8 ms

But none of the ones assigned to VMs:

#ping 10.12.0.4

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.12.0.4, timeout is 2 seconds:
.
Success rate is 0 percent (0/5)

Or for those looking at the attached diagram: vlan101 is great and works 
fine; what do I need to do (If at all possible) to get vlan102 listening?
[inline attachment: simplified_net.png]

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
I currently have 4 machines running 10 clients each uploading 1/40th of the
data. More than 40 simultaneous clients start to severely affect
Keystone's ability to handle these operations.


On Mon, Jan 14, 2013 at 4:58 PM, Chuck Thier cth...@gmail.com wrote:

 That should be fine, but it doesn't have any way of reporting stats
 currently.  You could use tools like ifstat to look at how much
 bandwidth you are using.  You can also look at how much cpu the swift
 tool is using.  Depending on how your data is setup, you could run
 several swift-client processes in parallel until you max either your
 network or cpu.  I would start with one client first, until you max it
 out, then move on to the next.

 --
 Chuck

 On Mon, Jan 14, 2013 at 10:45 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  I'm currently using the swift client to upload files, would you recommend
  another approach?
 
 
  On Mon, Jan 14, 2013 at 4:43 PM, Chuck Thier cth...@gmail.com wrote:
 
  Using swift stat probably isn't the best way to determine cluster
  performance, as those stats are updated async, and could be delayed
  quite a bit as you are heavily loading the cluster.  It also might be
  worthwhile to use a tool like swift-bench to test your cluster to make
  sure it is properly setup before loading data into the system.
 
  --
  Chuck
 
  On Mon, Jan 14, 2013 at 10:38 AM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
   I'm getting around 5-6.5 GB a day of bytes written on Swift. I
   calculated
   this by calling swift stat  sleep 60s  swift stat. I did some
   calculation based on those values to get to the end result.
  
   Currently I'm resetting swift with a node size of 64, since 90% of the
   files
   are less than 70KB in size. I think that might help.
  
  
   On Mon, Jan 14, 2013 at 4:34 PM, Chuck Thier cth...@gmail.com
 wrote:
  
   Hey Leander,
  
   Can you post what performance you are getting?  If they are all
   sharing the same GigE network, you might also check that the links
   aren't being saturated, as it is pretty easy to saturate pushing 200k
   files around.
  
   --
   Chuck
  
   On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert
   leande...@gmail.com wrote:
Well, I've fixed the node size and disabled the all the replicator
and
auditor processes. However, it is even slower now than it was
 before
:/.
Any
suggestions?
   
   
On Mon, Jan 14, 2013 at 3:23 PM, Leander Bessa Beernaert
leande...@gmail.com wrote:
   
Ok, thanks for all the tips/help.
   
Regards,
   
Leander
   
   
On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen
robert.vanleeu...@spilgames.com wrote:
   
 Allow me to rephrase.
 I've read somewhere (can't remember where) that it would be
 faster
 to
 upload files if they would be uploaded to separate containeres.
 This was suggested for a standard swift installation with a
 certain
 replication factor.
 Since I'll be uploading the files with the replicators turned
 off,
 does
 it really matter if I insert a group of them in separate
 containeres?
   
My guess is this concerns the SQLite database load distribution.
So yes, it still matters.
   
Just to be clear: turning replicators off does not matter at ALL
when
putting files in a healthy cluster.
Files will be replicated / put on all required nodes at the
moment
the
put request is done.
The put request will only give an OK when there is quorum writing
the
file (the file is stored on more than half of the required object
nodes)
The replicator daemons do not have anything to do with this.
   
Cheers,
Robert
   
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
   
   
   
   
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
   
  
  
 
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
Also, I'm unable to run the swift-bench with keystone.

I always get this error:

Traceback (most recent call last):
  File "/usr/bin/swift-bench", line 149, in <module>
    controller.run()
  File "/usr/lib/python2.7/dist-packages/swift/common/bench.py", line 159, in run
    puts = BenchPUT(self.logger, self.conf, self.names)
  File "/usr/lib/python2.7/dist-packages/swift/common/bench.py", line 241, in __init__
    Bench.__init__(self, logger, conf, names)
  File "/usr/lib/python2.7/dist-packages/swift/common/bench.py", line 55, in __init__
    auth_version=self.auth_version)
  File "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 281, in get_auth
    if (kwargs['os_options'].get('object_storage_url') and
KeyError: 'os_options'


Here's my config file:

[bench]
auth = http://192.168.111.202:5000/v2.0
user = [tenant]:[user]
key = [key]
auth_version = 2
log-level = INFO
timeout = 10

put_concurrency = 10
get_concurrency = 10

lower_object_size = 20
upper_object_size = 2


num_objects = 1000
num_gets = 1
num_containers = 20

delete = yes


On Mon, Jan 14, 2013 at 5:01 PM, Leander Bessa Beernaert 
leande...@gmail.com wrote:

 I currently have 4 machines running 10 clients each uploading 1/40th of
 the data. More than 40 simultaneous clientes starts to severely affect
 Keystone's ability to handle these operations.


 On Mon, Jan 14, 2013 at 4:58 PM, Chuck Thier cth...@gmail.com wrote:

 That should be fine, but it doesn't have any way of reporting stats
 currently.  You could use tools like ifstat to look at how much
 bandwidth you are using.  You can also look at how much cpu the swift
 tool is using.  Depending on how your data is setup, you could run
 several swift-client processes in parallel until you max either your
 network or cpu.  I would start with one client first, until you max it
 out, then move on to the next.

 --
 Chuck

 On Mon, Jan 14, 2013 at 10:45 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  I'm currently using the swift client to upload files, would you
 recommend
  another approach?
 
 
  On Mon, Jan 14, 2013 at 4:43 PM, Chuck Thier cth...@gmail.com wrote:
 
  Using swift stat probably isn't the best way to determine cluster
  performance, as those stats are updated async, and could be delayed
  quite a bit as you are heavily loading the cluster.  It also might be
  worthwhile to use a tool like swift-bench to test your cluster to make
  sure it is properly setup before loading data into the system.
 
  --
  Chuck
 
  On Mon, Jan 14, 2013 at 10:38 AM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
   I'm getting around 5-6.5 GB a day of bytes written on Swift. I
   calculated
   this by calling swift stat  sleep 60s  swift stat. I did some
   calculation based on those values to get to the end result.
  
   Currently I'm resetting swift with a node size of 64, since 90% of
 the
   files
   are less than 70KB in size. I think that might help.
  
  
   On Mon, Jan 14, 2013 at 4:34 PM, Chuck Thier cth...@gmail.com
 wrote:
  
   Hey Leander,
  
   Can you post what performance you are getting?  If they are all
   sharing the same GigE network, you might also check that the links
   aren't being saturated, as it is pretty easy to saturate pushing
 200k
   files around.
  
   --
   Chuck
  
   On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert
   leande...@gmail.com wrote:
Well, I've fixed the node size and disabled the all the replicator
and
auditor processes. However, it is even slower now than it was
 before
:/.
Any
suggestions?
   
   
On Mon, Jan 14, 2013 at 3:23 PM, Leander Bessa Beernaert
leande...@gmail.com wrote:
   
Ok, thanks for all the tips/help.
   
Regards,
   
Leander
   
   
On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen
robert.vanleeu...@spilgames.com wrote:
   
 Allow me to rephrase.
 I've read somewhere (can't remember where) that it would be
 faster
 to
 upload files if they would be uploaded to separate
 containeres.
 This was suggested for a standard swift installation with a
 certain
 replication factor.
 Since I'll be uploading the files with the replicators turned
 off,
 does
 it really matter if I insert a group of them in separate
 containeres?
   
My guess is this concerns the SQLite database load distribution.
So yes, it still matters.
   
Just to be clear: turning replicators off does not matter at ALL
when
putting files in a healthy cluster.
Files will be replicated / put on all required nodes at the
moment
the
put request is done.
The put request will only give an OK when there is quorum
 writing
the
file (the file is stored on more than half of the required
 object
nodes)
The replicator daemons do not have anything to do with this.
   
Cheers,
Robert
   
___
Mailing list: 

Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
That should be fine, but it doesn't have any way of reporting stats
currently.  You could use tools like ifstat to look at how much
bandwidth you are using.  You can also look at how much cpu the swift
tool is using.  Depending on how your data is setup, you could run
several swift-client processes in parallel until you max either your
network or cpu.  I would start with one client first, until you max it
out, then move on to the next.

--
Chuck

On Mon, Jan 14, 2013 at 10:45 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 I'm currently using the swift client to upload files, would you recommend
 another approach?


 On Mon, Jan 14, 2013 at 4:43 PM, Chuck Thier cth...@gmail.com wrote:

 Using swift stat probably isn't the best way to determine cluster
 performance, as those stats are updated async, and could be delayed
 quite a bit as you are heavily loading the cluster.  It also might be
 worthwhile to use a tool like swift-bench to test your cluster to make
 sure it is properly setup before loading data into the system.

 --
 Chuck

 On Mon, Jan 14, 2013 at 10:38 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  I'm getting around 5-6.5 GB a day of bytes written on Swift. I
  calculated
  this by calling swift stat  sleep 60s  swift stat. I did some
  calculation based on those values to get to the end result.
 
  Currently I'm resetting swift with a node size of 64, since 90% of the
  files
  are less than 70KB in size. I think that might help.
 
 
  On Mon, Jan 14, 2013 at 4:34 PM, Chuck Thier cth...@gmail.com wrote:
 
  Hey Leander,
 
  Can you post what performance you are getting?  If they are all
  sharing the same GigE network, you might also check that the links
  aren't being saturated, as it is pretty easy to saturate pushing 200k
  files around.
 
  --
  Chuck
 
  On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
   Well, I've fixed the node size and disabled the all the replicator
   and
   auditor processes. However, it is even slower now than it was before
   :/.
   Any
   suggestions?
  
  
   On Mon, Jan 14, 2013 at 3:23 PM, Leander Bessa Beernaert
   leande...@gmail.com wrote:
  
   Ok, thanks for all the tips/help.
  
   Regards,
  
   Leander
  
  
   On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen
   robert.vanleeu...@spilgames.com wrote:
  
Allow me to rephrase.
I've read somewhere (can't remember where) that it would be
faster
to
upload files if they would be uploaded to separate containeres.
This was suggested for a standard swift installation with a
certain
replication factor.
Since I'll be uploading the files with the replicators turned
off,
does
it really matter if I insert a group of them in separate
containeres?
  
   My guess is this concerns the SQLite database load distribution.
   So yes, it still matters.
  
   Just to be clear: turning replicators off does not matter at ALL
   when
   putting files in a healthy cluster.
   Files will be replicated / put on all required nodes at the
   moment
   the
   put request is done.
   The put request will only give an OK when there is quorum writing
   the
   file (the file is stored on more than half of the required object
   nodes)
   The replicator daemons do not have anything to do with this.
  
   Cheers,
   Robert
  
   ___
   Mailing list: https://launchpad.net/~openstack
   Post to : openstack@lists.launchpad.net
   Unsubscribe : https://launchpad.net/~openstack
   More help   : https://help.launchpad.net/ListHelp
  
  
  
  
   ___
   Mailing list: https://launchpad.net/~openstack
   Post to : openstack@lists.launchpad.net
   Unsubscribe : https://launchpad.net/~openstack
   More help   : https://help.launchpad.net/ListHelp
  
 
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
On Mon, Jan 14, 2013 at 11:01 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 I currently have 4 machines running 10 clients each uploading 1/40th of the
 data. More than 40 simultaneous clientes starts to severely affect
 Keystone's ability to handle these operations.

You might also double check that you are running a very recent version
of keystone that includes the update to use the swift memcache servers
and the correct configuration.  This will cache tokens and prevent
having to make a call to keystone for every single request.  If that
is an issue, that is likely causing a lot of latency to each request.

--
Chuck

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 Also, I'm unable to run the swift-bench with keystone.


Hrm... That was supposed to be fixed with this bug:
https://bugs.launchpad.net/swift/+bug/1011727

My keystone dev instance isn't working at the moment, but I'll see if
I can get one of the team to take a look at it.

--
Chuck

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Glance, boto and image id

2013-01-14 Thread Vishvananda Ishaya

On Jan 14, 2013, at 7:49 AM, Jay Pipes jaypi...@gmail.com wrote:

 
 There is an integer key in the s3_images table that stores the map
 between the UUID and the AMI image id:
 
 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L964
 
 Not sure this is available via Horizon... sorry.

Correct. Here are some options:

a) query the db directly for the mapping (rough sketch below)

b) write an api extension to nova that exposes the mapping

c) write an external utility that syncs the info from the nova db into glance 
metadata

d) modify horizon to list images through the ec2 api instead of glance
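
For option a), a minimal sketch of the lookup; the table and column names are
the ones from the model linked above, the 'ami-%08x' formatting mirrors what
the EC2 layer does (worth double-checking against ec2utils in your release),
and the connection details and UUID are placeholders:

import MySQLdb

# Placeholder credentials/UUID; point these at your real nova database.
db = MySQLdb.connect(host='novadb', user='nova', passwd='secret', db='nova')
cur = db.cursor()
cur.execute("SELECT id, uuid FROM s3_images WHERE uuid = %s",
            ("11111111-2222-3333-4444-555555555555",))
row = cur.fetchone()
if row:
    int_id, image_uuid = row
    # The EC2 layer renders the integer id as a hex ami id (e.g. ami-0000000a).
    print("glance %s  ->  %s" % (image_uuid, "ami-%08x" % int_id))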

Vish


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Add NIC to running instance?

2013-01-14 Thread Vishvananda Ishaya
This doesn't exist yet, but I thought at one point it was being worked on. 
Hot-adding nics would be a great feature for the quantum integration especially.

Blueprint here:

https://blueprints.launchpad.net/nova/+spec/network-adapter-hotplug

There was work done here:

https://review.openstack.org/#/c/11071/

But it hasn't been touched for a while. Not sure what happened to it.

Vish

On Jan 14, 2013, at 8:00 AM, Wojciech Dec wdec.i...@gmail.com wrote:

 Hi All,
 
 is there a nova command to add a NIC to a running instance (ie without the 
 need to do nova boot ... --nic 1 --nic new-nic) ?
 
 Documentation not showing up anything...
 
 Regards,
 W. Dec
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Glance, boto and image id

2013-01-14 Thread Antonio Messina
On Mon, Jan 14, 2013 at 6:18 PM, Vishvananda Ishaya
vishvana...@gmail.comwrote:


 On Jan 14, 2013, at 7:49 AM, Jay Pipes jaypi...@gmail.com wrote:

 
  There is an integer key in the s3_images table that stores the map
  between the UUID and the AMI image id:
 
 
 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L964
 
  Not sure this is available via Horizon... sorry.

 Correct. Here are some options:

 a) query the db directly for the mapping

 b) write an api extension to nova that exposes the mapping

 c) write an external utility that syncs the info from the nova db into
 glance metadata

 d) modify horizon to list images through the ec2 api instead of glance


I guess d) depends on b), since  we cannot assume horizon is running on the
same machine as the nova-api service.

.a.


-- 
antonio.s.mess...@gmail.com
GC3: Grid Computing Competence Center
http://www.gc3.uzh.ch/
University of Zurich
Winterthurerstrasse 190
CH-8057 Zurich Switzerland
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
I'm using the ubuntu 12.04 packages of the folsom repository by the way.


On Mon, Jan 14, 2013 at 5:18 PM, Chuck Thier cth...@gmail.com wrote:

 On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  Also, I'm unable to run the swift-bench with keystone.
 

 Hrm... That was supposed to be fixed with this bug:
 https://bugs.launchpad.net/swift/+bug/1011727

 My keystone dev instance isn't working at the moment, but I'll see if
 I can get one of the team to take a look at it.

 --
 Chuck

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
Are you by any chance referring to this topic
https://lists.launchpad.net/openstack/msg08639.html regarding the keystone
token cache? If so I've already added the configuration line and have not
noticed any speedup :/




On Mon, Jan 14, 2013 at 5:19 PM, Leander Bessa Beernaert 
leande...@gmail.com wrote:

 I'm using the ubuntu 12.04 packages of the folsom repository by the way.


 On Mon, Jan 14, 2013 at 5:18 PM, Chuck Thier cth...@gmail.com wrote:

 On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  Also, I'm unable to run the swift-bench with keystone.
 

 Hrm... That was supposed to be fixed with this bug:
 https://bugs.launchpad.net/swift/+bug/1011727

 My keystone dev instance isn't working at the moment, but I'll see if
 I can get one of the team to take a look at it.

 --
 Chuck



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Glance, boto and image id

2013-01-14 Thread Vishvananda Ishaya

On Jan 14, 2013, at 9:28 AM, Antonio Messina antonio.s.mess...@gmail.com 
wrote:

 On Mon, Jan 14, 2013 at 6:18 PM, Vishvananda Ishaya vishvana...@gmail.com 
 wrote:
 
 On Jan 14, 2013, at 7:49 AM, Jay Pipes jaypi...@gmail.com wrote:
 
 
  There is an integer key in the s3_images table that stores the map
  between the UUID and the AMI image id:
 
  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L964
 
  Not sure this is available via Horizon... sorry.
 
 Correct. Here are some options:
 
 a) query the db directly for the mapping
 
 b) write an api extension to nova that exposes the mapping
 
 c) write an external utility that syncs the info from the nova db into glance 
 metadata
 
 d) modify horizon to list images through the ec2 api instead of glance
 
 I guess d) depends on b), since  we cannot assume horizon is running on the 
 same machine as the nova-api service.
 

Not really. The ec2 api exposes ec2_style ids instead of uuids. It seems better 
to just provide one view of ids to your users. If you are suggesting they use 
the ec2 api then the uuids may not be needed.

Vish

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Total Network Confusion

2013-01-14 Thread Jay Pipes
I'd recommend Folsom over Essex :) And I'd highly recommend these
articles from Mirantis which really step through the networking setup in
VLANManager. Read through them in the following order and I promise at
the end you will have a much better understanding of networking in Nova.

http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/
http://www.mirantis.com/blog/openstack-networking-single-host-flatdhcpmanager/
http://www.mirantis.com/blog/openstack-networking-vlanmanager/
http://www.mirantis.com/blog/vlanmanager-network-flow-analysis/

All the best,
-jay

On 01/14/2013 11:52 AM, James Condron wrote:
 Hi all,
 
 I've recently started playing with (and working with) OpenStack with a
 view to migrate our production infrastructure from esx 4 to Essex.
 
 My issue, or at least utter idiocy, is in the network configuration.
 Basically I can't work out whether in the configuration of OpenStack I
 have done something daft, on the network something daft or I've not
 understood the technology properly.
 
 *NB: *I can get to the outside world form my VMs; I don't want to
 confuse things further.
 
 As attached is a diagram I knocked up to hopefully make this simpler,
 though I hope I can explain it simply with:
 
 *
 *Given both public and private interfaces on my server being on the same
 network and infrastructure how would one go about accessing VMs via
 their internal IP and not have to worry about a VPN or Public IPs?*
 *
 
 My corporate network  works on simple vlans; I have a vlan for my
 production boxen, one for development, one for PCs, telephony, etc. etc.
 These are pretty standard.
 
 The public, eth0 NIC on my compute node (Single node setup, nothing
 overly fancy; pretty vanilla) is on my production vlan and everything is
 accessible.
 the second nic, eth1, is supposedly on a vlan for this specific purpose.
 
 I am hoping to be able to access these internal IPs on their... Internal
 IPs (For want of a better phrase). Is this possible? I'm reasonably
 confident this isn't a routing issue as I can ping the eth1 IP from the
 switch:
 
 #ping 10.12.0.1
 
 Type escape sequence to abort.
 Sending 5, 100-byte ICMP Echos to 10.12.0.1, timeout is 2 seconds:
 !
 Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/8 ms
 
 But none of the ones assigned to VMs:
 
 #ping 10.12.0.4
 
 Type escape sequence to abort.
 Sending 5, 100-byte ICMP Echos to 10.12.0.4, timeout is 2 seconds:
 .
 Success rate is 0 percent (0/5)
 
 Or for those looking at the attached diagram: vlan101 is great and
 works fine; what do I need to do (If at all possible) to get vlan102
 listening?
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Glance, boto and image id

2013-01-14 Thread Antonio Messina
On Mon, Jan 14, 2013 at 7:07 PM, Vishvananda Ishaya
vishvana...@gmail.comwrote:


 On Jan 14, 2013, at 9:28 AM, Antonio Messina antonio.s.mess...@gmail.com
 wrote:

 On Mon, Jan 14, 2013 at 6:18 PM, Vishvananda Ishaya vishvana...@gmail.com
  wrote:


 On Jan 14, 2013, at 7:49 AM, Jay Pipes jaypi...@gmail.com wrote:

 
  There is an integer key in the s3_images table that stores the map
  between the UUID and the AMI image id:
 
 
 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L964
 
  Not sure this is available via Horizon... sorry.

 Correct. Here are some options:

 a) query the db directly for the mapping

 b) write an api extension to nova that exposes the mapping

 c) write an external utility that syncs the info from the nova db into
 glance metadata

 d) modify horizon to list images through the ec2 api instead of glance


 I guess d) depends on b), since  we cannot assume horizon is running on
 the same machine as the nova-api service.


 Not really. The ec2 api exposes ec2_style ids instead of uuids. It seems
 better to just provide one view of ids to your users. If you are suggesting
 they use the ec2 api then the uuids may not be needed.


I just misread: instead of d), I've read something like

e) modify horizon to list ec2 image ids together with glance uuids

I will try to better explain the issue:

I want my users to be able to customize some of the images already present
on our cloud by creating snapshots. Then, they should be able to use our
software (which uses EC2 api) to run their jobs. Our software is
non-interactive, so I can't print a list from which the user can choose the
correct image; the user must write the id in a configuration file.

I think d) or e) would be fine, but d) will make our use case hard to apply
to other clouds, while if OpenStack would accept a patch for e) we would be
able to use other clouds as well...

.a.

-- 
antonio.s.mess...@gmail.com
GC3: Grid Computing Competence Center
http://www.gc3.uzh.ch/
University of Zurich
Winterthurerstrasse 190
CH-8057 Zurich Switzerland
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Glance, boto and image id

2013-01-14 Thread Vishvananda Ishaya
On Jan 14, 2013, at 10:15 AM, Antonio Messina antonio.s.mess...@gmail.com 
wrote:

 
 On Mon, Jan 14, 2013 at 7:07 PM, Vishvananda Ishaya vishvana...@gmail.com 
 wrote:
 
 On Jan 14, 2013, at 9:28 AM, Antonio Messina antonio.s.mess...@gmail.com 
 wrote:
 
 On Mon, Jan 14, 2013 at 6:18 PM, Vishvananda Ishaya vishvana...@gmail.com 
 wrote:
 
 On Jan 14, 2013, at 7:49 AM, Jay Pipes jaypi...@gmail.com wrote:
 
 
  There is an integer key in the s3_images table that stores the map
  between the UUID and the AMI image id:
 
  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L964
 
  Not sure this is available via Horizon... sorry.
 
 Correct. Here are some options:
 
 a) query the db directly for the mapping
 
 b) write an api extension to nova that exposes the mapping
 
 c) write an external utility that syncs the info from the nova db into 
 glance metadata
 
 d) modify horizon to list images through the ec2 api instead of glance
 
 I guess d) depends on b), since  we cannot assume horizon is running on the 
 same machine as the nova-api service.
 
 
 Not really. The ec2 api exposes ec2_style ids instead of uuids. It seems 
 better to just provide one view of ids to your users. If you are suggesting 
 they use the ec2 api then the uuids may not be needed.
 
 I just misread: instead of d), I've read something like 
 
 e) modify horizon to list ec2 images id together with glance uuid
 
 I will try to better explain the issue: 
 
 I want my users to be able to customize some of the images already present on 
 our cloud by creating snapshots. Then, they should be able to use our 
 software (which uses EC2 api) to run their jobs. Our software is 
 non-interactive, so I can't print a list from which the user can chose the 
 correct image, the user must write the id on a configuration file.
 
 I thing d) or e) would be fine, but d) will make our use case hard to apply 
 to other clouds, while if OpenStack would accept a patch for e) we could be 
 able to use other clouds as well...

Understood. An api extension to get the mapping seems perfectly reasonable.
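
In the meantime, if you need the mapping today, option a) is just one query
against the nova database (the sketch below assumes MySQL and the default
'nova' schema; the EC2-style id is just the integer id rendered as ami-%08x):

mysql -u root -p nova -e \
  "SELECT id, uuid FROM s3_images WHERE uuid = '<glance-image-uuid>';"
# e.g. id 7  ->  ami-00000007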

Vish

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Dolph Mathews
If memcache is being utilized by your keystone middleware, you should see
keystone attaching to it on the first incoming request, e.g.:

  keystoneclient.middleware.auth_token [INFO]: Using Keystone memcache for
caching token

You may also want to use auth_token from keystoneclient >= v0.2.0 if you're
not already (instead of from keystone itself).
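
For reference, a rough sketch of the relevant proxy-server.conf sections
(values are placeholders; if memory serves, the cache = swift.cache line is what
lets auth_token reuse Swift's own memcache pool instead of opening its own
connection -- double-check the option names against your keystoneclient version):

[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystoneauth proxy-server

[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = swift
admin_password = <password>
cache = swift.cache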


-Dolph


On Mon, Jan 14, 2013 at 11:43 AM, Leander Bessa Beernaert 
leande...@gmail.com wrote:

 Are you by any chance referring to this topic
 https://lists.launchpad.net/openstack/msg08639.html regarding the
 keystone token cache? If so I've already added the configuration line and
 have not noticed any speedup :/




 On Mon, Jan 14, 2013 at 5:19 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 I'm using the ubuntu 12.04 packages of the folsom repository by the way.


 On Mon, Jan 14, 2013 at 5:18 PM, Chuck Thier cth...@gmail.com wrote:

 On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  Also, I'm unable to run the swift-bench with keystone.
 

 Hrm... That was supposed to be fixed with this bug:
 https://bugs.launchpad.net/swift/+bug/1011727

 My keystone dev instance isn't working at the moment, but I'll see if
 I can get one of the team to take a look at it.

 --
 Chuck




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
You would have to look at the proxy log to see if a request is being
made.  The results from the swift command line are just the calls that
the client makes.  The server still has to validate the token on
every request.

--
Chuck

On Mon, Jan 14, 2013 at 12:37 PM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 Below is an output from Swift stat, since I don't see any requests to
 keystone, I'm assuming that memcache is being used right?

 REQ: curl -i http://192.168.111.215:8080/v1/AUTH_[ID] -X HEAD -H
 X-Auth-Token: [TOKEN]

 DEBUG:swiftclient:REQ: curl -i http://192.168.111.215:8080/v1/AUTH_[ID] -X
 HEAD -H X-Auth-Token: [TOKEN]

 RESP STATUS: 204

 DEBUG:swiftclient:RESP STATUS: 204

Account: AUTH_[ID]
 Containers: 44
Objects: 4818
  Bytes: 112284450
 Accept-Ranges: bytes
 X-Timestamp: 1358184925.20885
 X-Trans-Id: tx8cffb469c9c542be830db10a2b90d901




 On Mon, Jan 14, 2013 at 6:31 PM, Dolph Mathews dolph.math...@gmail.com
 wrote:

 If memcache is being utilized by your keystone middleware, you should see
 keystone attaching to it on the first incoming request, e.g.:

   keystoneclient.middleware.auth_token [INFO]: Using Keystone memcache for
 caching token

 You may also want to use auth_token from keystoneclient = v0.2.0 if
 you're not already (instead of from keystone itself).


 -Dolph


 On Mon, Jan 14, 2013 at 11:43 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:

 Are you by any chance referring to this topic
 https://lists.launchpad.net/openstack/msg08639.html regarding the keystone
 token cache? If so I've already added the configuration line and have not
 noticed any speedup :/




 On Mon, Jan 14, 2013 at 5:19 PM, Leander Bessa Beernaert
 leande...@gmail.com wrote:

 I'm using the ubuntu 12.04 packages of the folsom repository by the way.


 On Mon, Jan 14, 2013 at 5:18 PM, Chuck Thier cth...@gmail.com wrote:

 On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  Also, I'm unable to run the swift-bench with keystone.
 

 Hrm... That was supposed to be fixed with this bug:
 https://bugs.launchpad.net/swift/+bug/1011727

 My keystone dev instance isn't working at the moment, but I'll see if
 I can get one of the team to take a look at it.

 --
 Chuck




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Leander Bessa Beernaert
Neither keystone nor swift proxy are producing any logs. I'm not sure what
to do :S
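
Could it simply be that nothing on the box is catching the syslog facility the
proxy logs to? A minimal sketch of what I understand should be needed (assuming
the default LOG_LOCAL0 log_facility in proxy-server.conf):

# /etc/rsyslog.d/10-swift.conf, then restart rsyslog
local0.*    /var/log/swift/proxy.log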


On Mon, Jan 14, 2013 at 6:50 PM, Chuck Thier cth...@gmail.com wrote:

 You would have to look at the proxy log to see if a request is being
 made.  The results from the swift command line are just the calls that
 the client makes.  The server still haves to validate the token on
 every request.

 --
 Chuck

 On Mon, Jan 14, 2013 at 12:37 PM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  Below is an output from Swift stat, since I don't see any requests to
  keystone, I'm assuming that memcache is being used right?
 
  REQ: curl -i http://192.168.111.215:8080/v1/AUTH_[ID] -X HEAD -H
  X-Auth-Token: [TOKEN]
 
  DEBUG:swiftclient:REQ: curl -i http://192.168.111.215:8080/v1/AUTH_[ID]-X
  HEAD -H X-Auth-Token: [TOKEN]
 
  RESP STATUS: 204
 
  DEBUG:swiftclient:RESP STATUS: 204
 
 Account: AUTH_[ID]
  Containers: 44
 Objects: 4818
   Bytes: 112284450
  Accept-Ranges: bytes
  X-Timestamp: 1358184925.20885
  X-Trans-Id: tx8cffb469c9c542be830db10a2b90d901
 
 
 
 
  On Mon, Jan 14, 2013 at 6:31 PM, Dolph Mathews dolph.math...@gmail.com
  wrote:
 
  If memcache is being utilized by your keystone middleware, you should
 see
  keystone attaching to it on the first incoming request, e.g.:
 
keystoneclient.middleware.auth_token [INFO]: Using Keystone memcache
 for
  caching token
 
  You may also want to use auth_token from keystoneclient = v0.2.0 if
  you're not already (instead of from keystone itself).
 
 
  -Dolph
 
 
  On Mon, Jan 14, 2013 at 11:43 AM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
 
  Are you by any chance referring to this topic
  https://lists.launchpad.net/openstack/msg08639.html regarding the
 keystone
  token cache? If so I've already added the configuration line and have
 not
  noticed any speedup :/
 
 
 
 
  On Mon, Jan 14, 2013 at 5:19 PM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
 
  I'm using the ubuntu 12.04 packages of the folsom repository by the
 way.
 
 
  On Mon, Jan 14, 2013 at 5:18 PM, Chuck Thier cth...@gmail.com
 wrote:
 
  On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
   Also, I'm unable to run the swift-bench with keystone.
  
 
  Hrm... That was supposed to be fixed with this bug:
  https://bugs.launchpad.net/swift/+bug/1011727
 
  My keystone dev instance isn't working at the moment, but I'll see if
  I can get one of the team to take a look at it.
 
  --
  Chuck
 
 
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 
 
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
Rather than ping-ponging emails back and forth on this list, it would
be easier if you could hop on to the #openstack-swift IRC channel on
freenode to discuss further.

--
Chuck

On Mon, Jan 14, 2013 at 1:00 PM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 Neither keystone nor swift proxy are producing any logs. I'm not sure what
 to do :S


 On Mon, Jan 14, 2013 at 6:50 PM, Chuck Thier cth...@gmail.com wrote:

 You would have to look at the proxy log to see if a request is being
 made.  The results from the swift command line are just the calls that
 the client makes.  The server still haves to validate the token on
 every request.

 --
 Chuck

 On Mon, Jan 14, 2013 at 12:37 PM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  Below is an output from Swift stat, since I don't see any requests to
  keystone, I'm assuming that memcache is being used right?
 
  REQ: curl -i http://192.168.111.215:8080/v1/AUTH_[ID] -X HEAD -H
  X-Auth-Token: [TOKEN]
 
  DEBUG:swiftclient:REQ: curl -i http://192.168.111.215:8080/v1/AUTH_[ID]
  -X
  HEAD -H X-Auth-Token: [TOKEN]
 
  RESP STATUS: 204
 
  DEBUG:swiftclient:RESP STATUS: 204
 
 Account: AUTH_[ID]
  Containers: 44
 Objects: 4818
   Bytes: 112284450
  Accept-Ranges: bytes
  X-Timestamp: 1358184925.20885
  X-Trans-Id: tx8cffb469c9c542be830db10a2b90d901
 
 
 
 
  On Mon, Jan 14, 2013 at 6:31 PM, Dolph Mathews dolph.math...@gmail.com
  wrote:
 
  If memcache is being utilized by your keystone middleware, you should
  see
  keystone attaching to it on the first incoming request, e.g.:
 
keystoneclient.middleware.auth_token [INFO]: Using Keystone memcache
  for
  caching token
 
  You may also want to use auth_token from keystoneclient = v0.2.0 if
  you're not already (instead of from keystone itself).
 
 
  -Dolph
 
 
  On Mon, Jan 14, 2013 at 11:43 AM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
 
  Are you by any chance referring to this topic
  https://lists.launchpad.net/openstack/msg08639.html regarding the
  keystone
  token cache? If so I've already added the configuration line and have
  not
  noticed any speedup :/
 
 
 
 
  On Mon, Jan 14, 2013 at 5:19 PM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
 
  I'm using the ubuntu 12.04 packages of the folsom repository by the
  way.
 
 
  On Mon, Jan 14, 2013 at 5:18 PM, Chuck Thier cth...@gmail.com
  wrote:
 
  On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
   Also, I'm unable to run the swift-bench with keystone.
  
 
  Hrm... That was supposed to be fixed with this bug:
  https://bugs.launchpad.net/swift/+bug/1011727
 
  My keystone dev instance isn't working at the moment, but I'll see
  if
  I can get one of the team to take a look at it.
 
  --
  Chuck
 
 
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 
 
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] Change the partition power to recreate the RING

2013-01-14 Thread Alejandro Comisario
Chuck et All.

Let me go through the points one by one.

#1 Even though the object-auditor always runs and never stops, we
stopped the swift-*-auditor and didn't see any improvement. Across all the
datanodes we have an average of 8% IO-wait (using iostat); the only thing
we see is that the xfsbuf pid runs once in a while, causing 99% iowait for
a second. We delayed the runtime for that process and didn't see any change
either.

Our object-auditor config for all devices is as follow :

[object-auditor]
files_per_second = 5
zero_byte_files_per_second = 5
bytes_per_second = 300

#2 Our 12 proxies are 6 physical and 6 KVM instances running on nova;
checking iftop we are at an average of 15Mb/s of bandwidth usage, so I don't
think we are saturating the network.
#3 The overall idle CPU on all datanodes is 80%. I'm not sure how to check
the CPU usage per worker; let me paste the config for a device for object,
account and container.

*object-server.conf*
*--*
[DEFAULT]
devices = /srv/node/sda3
mount_check = false
bind_port = 6010
user = swift
log_facility = LOG_LOCAL2
log_level = DEBUG
workers = 48
disable_fallocate = true

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:swift#object

[object-replicator]
vm_test_mode = yes
concurrency = 8
run_pause = 600

[object-updater]
concurrency = 8

[object-auditor]
files_per_second = 5
zero_byte_files_per_second = 5
bytes_per_second = 300

*account-server.conf*
*---*
[DEFAULT]
devices = /srv/node/sda3
mount_check = false
bind_port = 6012
user = swift
log_facility = LOG_LOCAL2
log_level = DEBUG
workers = 48
db_preallocation = on
disable_fallocate = true

[pipeline:main]
pipeline = account-server

[app:account-server]
use = egg:swift#account

[account-replicator]
vm_test_mode = yes
concurrency = 8
run_pause = 600

[account-auditor]

[account-reaper]

*container-server.conf*
*-*
[DEFAULT]
devices = /srv/node/sda3
mount_check = false
bind_port = 6011
user = swift
workers = 48
log_facility = LOG_LOCAL2
allow_versions = True
disable_fallocate = true

[pipeline:main]
pipeline = container-server

[app:container-server]
use = egg:swift#container
allow_versions = True

[container-replicator]
vm_test_mode = yes
concurrency = 8
run_pause = 500

[container-updater]
concurrency = 8

[container-auditor]

#4 We don't use SSL for Swift, so no latency there.
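
(Re #3, would something as simple as a ps snapshot be the right way to look
at per-worker CPU? A sketch of what I mean, assuming the standard swift
process names:)

ps -eo pid,pcpu,rss,args --sort=-pcpu \
  | grep -E 'swift-(object|container|account)-server' | grep -v grep | head -20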

Hope you guys can shed some light.


Alejandro Comisario
#melicloud CloudBuilders
Arias 3751, Piso 7 (C1430CRG)
Ciudad de Buenos Aires - Argentina
Cel: +549(11) 15-3770-1857
Tel : +54(11) 4640-8443


On Mon, Jan 14, 2013 at 1:23 PM, Chuck Thier cth...@gmail.com wrote:

 Hi Alejandro,

 I really doubt that partition size is causing these issues.  It can be
 difficult to debug these types of issues without access to the
 cluster, but I can think of a couple of things to look at.

 1.  Check your disk io usage and io wait on the storage nodes.  If
 that seems abnormally high, then that could be one of the sources of
 problems.  If this is the case, then the first things that I would
 look at are the auditors, as they can use up a lot of disk io if not
 properly configured.  I would try turning them off for a bit
 (swift-*-auditor) and see if that makes any difference.

 2.  Check your network io usage.  You haven't described what type of
 network you have going to the proxies, but if they share a single GigE
 interface, if my quick calculations are correct, you could be
 saturating the network.

 3.  Check your CPU usage.  I listed this one last as you have said
 that you have already worked at tuning the number of workers (though I
 would be interested to hear how many workers you have running for each
 service).  The main thing to look for, is to see if all of your
 workers are maxed out on CPU, if so, then you may need to bump
 workers.

 4.  SSL Termination?  Where are you terminating the SSL connection?
 If you are terminating SSL in Swift directly with the swift proxy,
 then that could also be a source of issue.  This was only meant for
 dev and testing, and you should use an SSL terminating load balancer
 in front of the swift proxies.

 That's what I could think of right off the top of my head.

 --
 Chuck

 On Mon, Jan 14, 2013 at 5:45 AM, Alejandro Comisario
 alejandro.comisa...@mercadolibre.com wrote:
  Chuck / John.
  We are having 50.000 request per minute ( where 10.000+ are put from
 small
  objects, from 10KB to 150KB )
 
  We are using swift 1.7.4 with keystone token caching so no latency over
  there.
  We are having 12 proxyes and 24 datanodes divided in 4 zones ( each
 datanode
  has 48gb of ram, 2 hexacore and 4 devices of 3TB each )
 
  The workers that are puting objects in swift are seeing an awful
  performance, and we too.
  With peaks of 2secs to 15secs per put operations coming from the
 datanodes.
  We tunes db_preallocation, disable_fallocate, workers and concurrency
 but we
  cant reach the request that we need ( we 

[Openstack] Bark logging middleware

2013-01-14 Thread Kevin L. Mitchell
I have just completed writing a piece of middleware for logging requests
in WSGI stacks.  I have dubbed this useful piece of code, Bark, and it
is available on PyPi.  Here are the links:

  * http://pypi.python.org/pypi/bark
  * https://github.com/klmitch/bark

I've written an extensive README describing what Bark does and how it
does it, but here's a quick summary:

Bark is a logging middleware.  That is, you place it into your WSGI
pipeline (typically at the head of the pipeline, rather than close to
the application at the tail) and define one or more log streams.  Each
log stream is configured with an Apache-compatible format string.  Log
streams can send the formatted log messages to files, syslog, TCP or UDP
sockets, even email.  Bark is also easily extensible; it is possible to
add both new format string conversions and log stream types by simply
defining new entry points.
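
A rough sketch of the idea (don't copy these option names verbatim -- the
README has the exact configuration keys; the ones below are only meant to show
the shape of a file-backed stream with an Apache-style format):

[pipeline:main]
pipeline = bark faultwrap ... osapi_compute_app_v2

[filter:bark]
paste.filter_factory = bark.middleware:bark_filter
# one file-backed log stream with an Apache-style format string
access.type = file
access.filename = /var/log/bark/access.log
access.format = %h %l %u %t "%r" %>s %b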

Why use Bark?  Bark can be used with any WSGI application (not just
nova) and can log virtually any information associated with the request,
and do it independently of normal application logging.  Moreover, since
the format strings are Apache-compatible, it should be possible to use
any tool designed to analyze Apache logs with Bark-generated log files.
Bark also implements proxy validation, to allow the proper originating
IP address of a client to be recorded.

Caveats: Bark can only log data provided by the underlying WSGI
implementation.  For instance, the normal WSGI server used by Nova makes
the remote IP address available in the REMOTE_ADDR environment variable,
but the port number is not made available (Bark expects it to be placed
in REMOTE_PORT if available).  Also, certain Apache conversions and
modifiers don't make sense for Bark (they are ignored for
compatibility).

For a full write-up, see the README, available at:

http://pypi.python.org/pypi/bark
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] Change the partition power to recreate the RING

2013-01-14 Thread Chuck Thier
Hey Alejandro,

Those were the most common issues that people run into when they are having
performance issues with swift.  The other thing to check is to look at the
logs to make sure there are no major issues (like bad drives, misconfigured
nodes, etc.), which could add latency to the requests.  After that, I'm
starting to run out of the common issues that people run into, and it might
be worth contracting with one of the many swift consulting companies to
help you out.  If you have time and can hop on #openstack-swift on
freenode IRC, we might be able to have a little more interactive discussion,
or someone else may come up with some ideas.

--
Chuck


On Mon, Jan 14, 2013 at 2:01 PM, Alejandro Comisario 
alejandro.comisa...@mercadolibre.com wrote:

 Chuck et All.

 Let me go through the point one by one.

 #1 Even seeing that object-auditor allways runs and never stops, we
 stoped the swift-*-auditor and didnt see any improvements, from all the
 datanodes we have an average of 8% IO-WAIT (using iostat), the only thing
 that we see is the pid xfsbuf runs once in a while causing 99% iowait for
 a sec, we delayed the runtime for that process, and didnt see changes
 either.

 Our object-auditor config for all devices is as follow :

 [object-auditor]
 files_per_second = 5
 zero_byte_files_per_second = 5
 bytes_per_second = 300

 #2 Our 12 proxyes are 6 physical and 6 kvm instances running on nova,
 checking iftop we are at an average of 15Mb/s of bandwidth usage so i dont
 think we are saturating the networking.
 #3 The overall Idle CPU on all datanodes is 80%, im not sure how to check
 the CPU usage per worker, let me paste the config for a device for object,
 account and container.

 *object-server.conf*
 *--*
 [DEFAULT]
 devices = /srv/node/sda3
 mount_check = false
 bind_port = 6010
 user = swift
 log_facility = LOG_LOCAL2
 log_level = DEBUG
 workers = 48
 disable_fallocate = true

 [pipeline:main]
 pipeline = object-server

 [app:object-server]
 use = egg:swift#object

 [object-replicator]
 vm_test_mode = yes
 concurrency = 8
 run_pause = 600

 [object-updater]
 concurrency = 8

 [object-auditor]
 files_per_second = 5
 zero_byte_files_per_second = 5
 bytes_per_second = 300

 *account-server.conf*
 *---*
 [DEFAULT]
 devices = /srv/node/sda3
 mount_check = false
 bind_port = 6012
 user = swift
 log_facility = LOG_LOCAL2
 log_level = DEBUG
 workers = 48
 db_preallocation = on
 disable_fallocate = true

 [pipeline:main]
 pipeline = account-server

 [app:account-server]
 use = egg:swift#account

 [account-replicator]
 vm_test_mode = yes
 concurrency = 8
 run_pause = 600

 [account-auditor]

 [account-reaper]

 *container-server.conf*
 *-*
 [DEFAULT]
 devices = /srv/node/sda3
 mount_check = false
 bind_port = 6011
 user = swift
 workers = 48
 log_facility = LOG_LOCAL2
 allow_versions = True
 disable_fallocate = true

 [pipeline:main]
 pipeline = container-server

 [app:container-server]
 use = egg:swift#container
 allow_versions = True

 [container-replicator]
 vm_test_mode = yes
 concurrency = 8
 run_pause = 500

 [container-updater]
 concurrency = 8

 [container-auditor]

 #4 We dont use SSL for swift so, no latency over there.

 Hope you guys can shed some light.


 *
 *
 *
 *
 *Alejandro Comisario
 #melicloud CloudBuilders*
 Arias 3751, Piso 7 (C1430CRG)
 Ciudad de Buenos Aires - Argentina
 Cel: +549(11) 15-3770-1857
 Tel : +54(11) 4640-8443


 On Mon, Jan 14, 2013 at 1:23 PM, Chuck Thier cth...@gmail.com wrote:

 Hi Alejandro,

 I really doubt that partition size is causing these issues.  It can be
 difficult to debug these types of issues without access to the
 cluster, but I can think of a couple of things to look at.

 1.  Check your disk io usage and io wait on the storage nodes.  If
 that seems abnormally high, then that could be one of the sources of
 problems.  If this is the case, then the first things that I would
 look at are the auditors, as they can use up a lot of disk io if not
 properly configured.  I would try turning them off for a bit
 (swift-*-auditor) and see if that makes any difference.

 2.  Check your network io usage.  You haven't described what type of
 network you have going to the proxies, but if they share a single GigE
 interface, if my quick calculations are correct, you could be
 saturating the network.

 3.  Check your CPU usage.  I listed this one last as you have said
 that you have already worked at tuning the number of workers (though I
 would be interested to hear how many workers you have running for each
 service).  The main thing to look for, is to see if all of your
 workers are maxed out on CPU, if so, then you may need to bump
 workers.

 4.  SSL Termination?  Where are you terminating the SSL connection?
 If you are terminating SSL in Swift directly with the swift proxy,
 then that could also be a source of issue.  This was only meant for
 dev and testing, and you should use an SSL terminating load 

[Openstack] Quantum client and SSL support

2013-01-14 Thread Piyanai Saowarattitada
Hi,

Could someone please confirm whether or not Quantum client
(top-of-trunk) supports SSL ?

http://wiki.openstack.org/SecureClientConnections appears to suggest
that SSL is not supported.

Snippet from the wiki page:
...
quantumclient (not started)

replace httplib2 with requests
add --os-cacert and OS_CACERT support
provide ca_cert to keystone clinet for authentication
...

Not sure how up to date the wiki page is. Quick scanning of the
blueprint list comes up empty...

If the answer is no, are there any plans for SSL support in Quantum client ?

This email thread https://lists.launchpad.net/openstack/msg09982.html
from last April gives me an impression that the SSL support was there
before the Quantum client got restructured in Folsom (?)
If that's the case, off hand, does anyone know whether the code from
before the Quantum client got restructured would work with the current
version of the API (v2)?

Thanks!

Piyanai

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-community] Calling all user group and meetup organizers

2013-01-14 Thread Marton Kiss
Hi,

The meeting time shows me Jan 25 as scheduled date, is it right?
  Meeting status: Not started
  Starting date:  Friday, January 25, 2013
  Starting time:  12:00 am, Europe Time (Berlin, GMT+01:00)
  Duration:       2 hours
  Host's name:    Sean Roberts


Sean, do you have an agenda? I'd like to suggest two topics:
- If possible I'd like to get some supporters for the groups.openstack.org site,
including content writers and some developer resources from the community.
We have a lot of knowledge about running / starting a user group, and need
to write it down, publish it on the site and share it with newcomers.

- Also, it could be cool to start a discussion about the group approval process. If
we plan to start a public site where everybody can create a new user group,
we need to create some policy to differentiate real, working groups from
mistakenly created ones.

Regards,
  Márton Kiss
  Hungarian OpenStack usergroup




2013/1/9 Sean Roberts sean...@yahoo-inc.com

 We are going to have a planning meeting on Tuesday, 15 Jan 2012 11:00am to
 1:00pm PST. RSVP via  http://www.meetup.com/openstack/events/93593062/
  Connect remotely via webex
 https://yahoomeetings.webex.com/yahoomeetings/j.php?ED=160663792UID=492396097RT=MiM0
 If this time doesn't work for you, get ahold of me directly via
 skype:seanroberts66, email, mobile, irc:sarob, twitter:sarob, or carrier
 pigeon.

 Stefano and Thierry will be joining us. I want to get input from people
 that run user groups and meetups.

 See you then!

 Sean Roberts
 Infrastructure Strategy
 sean...@yahoo-inc.com
 Direct (408) 349-5234  Mobile (925) 980-4729

 701 First Avenue, Sunnyvale, CA, 94089-0703, US
 Phone (408) 349-3300  Fax (408) 349-3301


 ___
 Community mailing list
 commun...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/community


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [heat] Grizzly-2 development milestone available for Heat

2013-01-14 Thread Martinx - ジェームズ
Yes!  https://launchpad.net/ubuntu/raring/+source/nova

:-P

On 14 January 2013 13:20, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Hi!

  Is Grizzly-2 available on Ubuntu Raring Ringtail (13.04) daily builds?

 Tks!
 Thiago


 On 10 January 2013 19:44, Steven Dake sd...@redhat.com wrote:

 Hi folks,

 The OpenStack release team has released the second milestone of the
 Grizzly development cycle (grizzly-2).  This is a significant step in
 Heat's incubation, as it is our first milestone release leading to the
 final delivery of OpenStack 2013.1 scheduled for April 4, 2013.

 You can find the full list of new features and fixed bugs, as well as
 tarball downloads at:

 https://launchpad.net/heat/grizzly/grizzly-2

 Features and bugs may be resolved until the next milestone, grizzly-3,
 which will be delivered on February 21st.  Come join the growing
 orchestration development community by contributing to Heat and making
 orchestration in OpenStack world class!

 Regards
 -steve

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-community] Calling all user group and meetup organizers

2013-01-14 Thread Sean Roberts
Reminder for the global user group planning meetup, tomorrow, Tuesday, 15 Jan 
2012 11:00am to 1:00pm PST  http://www.meetup.com/openstack/events/93593062/
Join us!


Sean Roberts
Infrastructure Strategy
sean...@yahoo-inc.com
Direct (408) 349-5234  Mobile (925) 980-4729

701 First Avenue, Sunnyvale, CA, 94089-0703, US
Phone (408) 349-3300  Fax (408) 349-3301

On 1/9/13 1:05 PM, Tim Horgan tim.hor...@cit.ie wrote:

Hi Séan,

I run the OpenStack Ireland meetup group, just wondering if your meeting is 
targeting only groups in the East Coast of the USA?

Kind Regards,
Tim

On 9 Jan 2013, at 00:54, Sean Roberts sean...@yahoo-inc.com wrote:

We are going to have a planning meeting on Tuesday, 15 Jan 2012 11:00am to 
1:00pm PST. RSVP via  http://www.meetup.com/openstack/events/93593062/  Connect 
remotely via webex 
https://yahoomeetings.webex.com/yahoomeetings/j.php?ED=160663792UID=492396097RT=MiM0
If this time doesn't work for you, get ahold of me directly via 
skype:seanroberts66, email, mobile, irc:sarob, twitter:sarob, or carrier pigeon.

Stefano and Thierry will be joining us. I want to get input from people that 
run user groups and meetups.

See you then!

Sean Roberts
Infrastructure Strategy
sean...@yahoo-inc.com
Direct (408) 349-5234  Mobile (925) 980-4729

701 First Avenue, Sunnyvale, CA, 94089-0703, US
Phone (408) 349-3300  Fax (408) 349-3301

___
Community mailing list
commun...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/community




Regards,
Tim


-
TIM HORGAN
Head of Cloud Computing Centre of Excellence
Extended Campus Office
Cork Institute of Technology, Cork, Ireland
phone: +353 21 4335120 | mobile: +353 87 9439333
twitter: @timhorgan | skype: timothy.horgan
linkedin: http://ie.linkedin.com/in/timhorgan | web: http://cloud.cit.ie
-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-community] Calling all user group and meetup organizers

2013-01-14 Thread Colin McNamara
The scheduled webex still says it is for the 24th. You may want to adjust it so 
it will launch on time. 

(I will be in LA tomorrow so I have to dial in vs attending in person)
Regards,

Colin

If you would like to schedule a time to speak with me, please click here to see 
my calendar and pick a time that works for your schedule. The system will 
automatically send us both an outlook meeting invite.

Colin McNamara
(858)208-8105
CCIE #18233,VCP
http://www.colinmcnamara.com
http://www.linkedin.com/in/colinmcnamara

The difficult we do immediately, the impossible just takes a little
longer





On Jan 14, 2013, at 2:07 PM, Sean Roberts sean...@yahoo-inc.com wrote:

 Reminder for the global user group planning meetup, tomorrow, Tuesday, 15 Jan 
 2012 11:00am to 1:00pm PST  http://www.meetup.com/openstack/events/93593062/  
 Join us!
 
 
 Sean Roberts
 Infrastructure Strategy
 sean...@yahoo-inc.com
 Direct (408) 349-5234  Mobile (925) 980-4729
  
 701 First Avenue, Sunnyvale, CA, 94089-0703, US
 Phone (408) 349-3300  Fax (408) 349-3301
  
 
 
 On 1/9/13 1:05 PM, Tim Horgan tim.hor...@cit.ie wrote:
 
 Hi Séan,
 
 I run the OpenStack Ireland meetup group, just wondering if your meeting is 
 targeting only groups in the East Coast of the USA?
 
 Kind Regards,
 Tim
 
 On 9 Jan 2013, at 00:54, Sean Roberts sean...@yahoo-inc.com wrote:
 
 We are going to have a planning meeting on Tuesday, 15 Jan 2012 11:00am to 
 1:00pm PST. RSVP via  http://www.meetup.com/openstack/events/93593062/  
 Connect remotely via webex 
 https://yahoomeetings.webex.com/yahoomeetings/j.php?ED=160663792UID=492396097RT=MiM0
 If this time doesn't work for you, get ahold of me directly via 
 skype:seanroberts66, email, mobile, irc:sarob, twitter:sarob, or carrier 
 pigeon.
 
 Stefano and Thierry will be joining us. I want to get input from people 
 that run user groups and meetups.
 
 See you then!
 
 Sean Roberts
 Infrastructure Strategy
 sean...@yahoo-inc.com
 Direct (408) 349-5234  Mobile (925) 980-4729
  
 701 First Avenue, Sunnyvale, CA, 94089-0703, US
 Phone (408) 349-3300  Fax (408) 349-3301
  
 
 ___
 Community mailing list
 commun...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/community
 
 
 
 
 Regards,
 Tim
 
 
 -
 TIM HORGAN
 Head of Cloud Computing Centre of Excellence
 Extended Campus Office
 Cork Institute of Technology, Cork, Ireland
 phone: +353 214335120 | mobile: +353 87 9439333
 twitter: @timhorgan | skype: timothy.horgan
 linkedin: http://ie.linkedin.com/in/timhorgan | web: http://cloud.cit.ie
 -
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-community] Calling all user group and meetup organizers

2013-01-14 Thread Colin McNamara
Here are the action items that I captured:

Public Apt-Repo's yum repos built off trunk - OpenStack
Puppet manifest on a VM (easy openstack)
Rolling upgrade Continuous deployment
Mentoring user groups leader
Checklists 
Standard Slides preso 
users group / restricted to college
Redirect funding from Sponsorships to PHD students
Bug Squash Competition


Regards,

Colin

If you would like to schedule a time to speak with me, please click here to see 
my calendar and pick a time that works for your schedule. The system will 
automatically send us both an outlook meeting invite.

Colin McNamara
(858)208-8105
CCIE #18233,VCP
http://www.colinmcnamara.com
http://www.linkedin.com/in/colinmcnamara

The difficult we do immediately, the impossible just takes a little
longer





On Jan 14, 2013, at 2:20 PM, Sean Roberts sean...@yahoo-inc.com wrote:

 You caught me being lazy and reusing the webex meeting without updating the 
 time. It is definitely tomorrow, Tuesday, 15 Jan 2012 11:00am to 1:00pm PST. 
 I have updated the webex meet to reflect the actual time. 
 
 I will be forwarding an agenda in a few minutes and I will include your two 
 items.
 
 Sean Roberts
 Infrastructure Strategy
 sean...@yahoo-inc.com
 Direct (408) 349-5234  Mobile (925) 980-4729
  
 701 First Avenue, Sunnyvale, CA, 94089-0703, US
 Phone (408) 349-3300  Fax (408) 349-3301
  
 
 
 On 1/14/13 1:38 PM, Marton Kiss marton.k...@gmail.com wrote:
 
 Hi,
 
 The meeting time shows me Jan 25 as scheduled date, is it right? 
  Meeting status: Not started
   Starting date: Friday, January 25, 2013
   Starting time: 12:00 am, Europe Time (Berlin, GMT+01:00)
   Duration:  2 hours
   Host's name:   Sean Roberts 
 
 
 Sean, do you have an agenda? I like to suggest two topics:
 - If possible I like to get some supporters for groups.openstack.org site, 
 including content writers and some developer resources from the community. 
 We have a lot of knowledge about running / starting an user group, and need 
 to write it down, publish on the site and share it with newcomers. 
 
 - Also could be cool to start a discussion about group approval process. If 
 we plan to start a public site where everybody can create a new user group 
 we need to create some policy to differentiate real-working groups from 
 mistakenly created ones.
 
 Regards,
   Márton Kiss
   Hungarian OpenStack usergroup
 
 
 
 
 2013/1/9 Sean Roberts sean...@yahoo-inc.com
 We are going to have a planning meeting on Tuesday, 15 Jan 2012 11:00am to 
 1:00pm PST. RSVP via  http://www.meetup.com/openstack/events/93593062/  
 Connect remotely via webex 
 https://yahoomeetings.webex.com/yahoomeetings/j.php?ED=160663792UID=492396097RT=MiM0
 If this time doesn't work for you, get ahold of me directly via 
 skype:seanroberts66, email, mobile, irc:sarob, twitter:sarob, or carrier 
 pigeon.
 
 Stefano and Thierry will be joining us. I want to get input from people 
 that run user groups and meetups.
 
 See you then!
 
 Sean Roberts
 Infrastructure Strategy
 sean...@yahoo-inc.com
 Direct (408) 349-5234  Mobile (925) 980-4729
  
 701 First Avenue, Sunnyvale, CA, 94089-0703, US
 Phone (408) 349-3300  Fax (408) 349-3301
  
 
 
 ___
 Community mailing list
 commun...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/community
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-community] Calling all user group and meetup organizers

2013-01-14 Thread Sean Roberts
Agenda for tomorrow's user group / meetup planning meeting

 *
 *   Review of user group and meetup template
 *   Meetups fit into the larger plan of user committee and summit to summit 
development how?
 *   Where would be the location for social materials like videos, meetup logs, 
and other meeting related items?
 *   Where do operators go to figure out how to run OpenStack in production?
 *   Can the Foundation start working with some Universities for meeting space, 
sponsorship, and student participation? Is the Foundation interested in Phd 
thesis sponsorship?
 *   If possible I like to get some supporters for a groups.openstack.org site, 
including content writers and some developer resources from the community. We 
have a lot of knowledge about running / starting an user group, and need to 
write it down, publish on the site and share it with newcomers.  I think this 
is answered by the user group template information that will be published to 
http://wiki.openstack.org/OpenStackUserGroups.
 *   Should there be an OpenStack user group and/or meetup approval process? 
How do we clean up user groups and meetups that are abandoned?

Sean Roberts
Infrastructure Strategy
sean...@yahoo-inc.com
Direct (408) 349-5234  Mobile (925) 980-4729

701 First Avenue, Sunnyvale, CA, 94089-0703, US
Phone (408) 349-3300  Fax (408) 349-3301

On 1/14/13 2:07 PM, Sean Roberts sean...@yahoo-inc.com wrote:

Reminder for the global user group planning meetup, tomorrow, Tuesday, 15 Jan 
2012 11:00am to 1:00pm PST  http://www.meetup.com/openstack/events/93593062/
Join us!


Sean Roberts
Infrastructure Strategy
sean...@yahoo-inc.com
Direct (408) 349-5234  Mobile (925) 980-4729

701 First Avenue, Sunnyvale, CA, 94089-0703, US
Phone (408) 349-3300  Fax (408) 349-3301

On 1/9/13 1:05 PM, Tim Horgan tim.hor...@cit.ie wrote:

Hi Séan,

I run the OpenStack Ireland meetup group, just wondering if your meeting is 
targeting only groups in the East Coast of the USA?

Kind Regards,
Tim

On 9 Jan 2013, at 00:54, Sean Roberts sean...@yahoo-inc.com wrote:

We are going to have a planning meeting on Tuesday, 15 Jan 2012 11:00am to 
1:00pm PST. RSVP via  http://www.meetup.com/openstack/events/93593062/  Connect 
remotely via webex 
https://yahoomeetings.webex.com/yahoomeetings/j.php?ED=160663792UID=492396097RT=MiM0
If this time doesn't work for you, get ahold of me directly via 
skype:seanroberts66, email, mobile, irc:sarob, twitter:sarob, or carrier pigeon.

Stefano and Thierry will be joining us. I want to get input from people that 
run user groups and meetups.

See you then!

Sean Roberts
Infrastructure Strategy
sean...@yahoo-inc.com
Direct (408) 349-5234  Mobile (925) 980-4729

701 First Avenue, Sunnyvale, CA, 94089-0703, US
Phone (408) 349-3300  Fax (408) 349-3301

___
Community mailing list
commun...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/community




Regards,
Tim


-
TIM HORGAN
Head of Cloud Computing Centre of Excellence
Extended Campus Office
Cork Institute of Technology, Cork, Ireland
phone: +353 21 4335120 | mobile: +353 87 9439333
twitter: @timhorgan | skype: timothy.horgan
linkedin: http://ie.linkedin.com/in/timhorgan | web: http://cloud.cit.ie
-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-community] Calling all user group and meetup organizers

2013-01-14 Thread Bruce Lok
Hi,
I am sorry that I may not be able to join the meeting, but the Hong Kong OpenStack
User Group and Hong Kong Cyberport will keep you updated with any information if
necessary.

May I offer a couple of comments on the following agenda items:

Can the Foundation start working with some Universities for meeting space, 
sponsorship, and student participation? Is the Foundation interested in Phd 
thesis sponsorship
I strongly agree on this point, since more and more university students
and researchers are starting to explore open source cloud software, as most
commercial cloud software/services on the market are really expensive for them to
get started with.  Also, university students and researchers are usually more
earnest about exploring open source software because it is free and cloud computing
is quite new to them.  And this is a good starting point to educate them to
contribute to open source projects rather than just using them as freeware.

Should there be an OpenStack user group and/or meetup approval process? How do 
we clean up user groups and meetups that are abandoned?
I think there must be an official central coordinator to manage all the
user groups.

Thank you for the attention, and wish you all the best in year 2013.

Regards,
Bruce Lok
Coordinator of HKOSUG
Engineer | Technology Centre
Hong Kong Cyberport Management Co. Ltd.
Tel: +852 3166-3728
Fax: +852 3027-0099


From: Sean Roberts [mailto:sean...@yahoo-inc.com]
Sent: Tuesday, 15 January, 2013 07:26
To: OpenStack community; openstack
Cc: rob_hirschf...@dell.com; kamesh_pemmar...@dell.com; andi_a...@dell.com; 
trevor.low...@gmail.com; stephon.strip...@dreamhost.com; ca...@hq.newdream.net; 
brent.scot...@rackspace.com; kmest...@cisco.com; lloydost...@gmail.com; 
freedom...@gmail.com; duyujie@gmail.com; santiagoc...@outlook.com; 
sc...@kent.ac.uk; moha...@egyptcloudforum.com; ilkka.turu...@jamk.fi; 
bere...@b1-systems.de; Bruce Lok; marton.k...@xemeti.com; 
deepakgarg.i...@gmail.com; fr...@meruvian.org; tim.hor...@cit.ie; 
fen...@ubuntu.com; mypa...@gmail.com; muha...@lbox.cc; hang.t...@dtt.vn
Subject: Re: [openstack-community] Calling all user group and meetup organizers

Agenda for tomorrow's user group / meetup planning meeting

  *
  *   Review of user group and meetup template
  *   Meetups fit into the larger plan of user committee and summit to summit 
development how?
  *   Where would be the location for social materials like videos, meetup 
logs, and other meeting related items?
  *   Where do operators go to figure out how to run OpenStack in production?
  *   Can the Foundation start working with some Universities for meeting 
space, sponsorship, and student participation? Is the Foundation interested in 
Phd thesis sponsorship?
  *   If possible I like to get some supporters for a groups.openstack.org 
site, including content writers and some developer resources from the 
community. We have a lot of knowledge about running / starting an user group, 
and need to write it down, publish on the site and share it with newcomers.  I 
think this is answered by the user group template information that will be 
published to http://wiki.openstack.org/OpenStackUserGroups.
  *   Should there be an OpenStack user group and/or meetup approval process? 
How do we clean up user groups and meetups that are abandoned?

Sean Roberts
Infrastructure Strategy
sean...@yahoo-inc.commailto:sean...@yahoo-inc.com
Direct (408) 349-5234  Mobile (925) 980-4729

701 First Avenue, Sunnyvale, CA, 94089-0703, US
Phone (408) 349-3300  Fax (408) 349-3301

[http://forgood.zenfs.com/logos/yahoo.png]

On 1/14/13 2:07 PM, Sean Roberts 
sean...@yahoo-inc.commailto:sean...@yahoo-inc.com wrote:

Reminder for the global user group planning meetup, tomorrow, Tuesday, 15 Jan 
2012 11:00am to 1:00pm PST  http://www.meetup.com/openstack/events/93593062/
Join us!


Sean Roberts
Infrastructure Strategy
sean...@yahoo-inc.commailto:sean...@yahoo-inc.com
Direct (408) 349-5234  Mobile (925) 980-4729

701 First Avenue, Sunnyvale, CA, 94089-0703, US
Phone (408) 349-3300  Fax (408) 349-3301

[http://forgood.zenfs.com/logos/yahoo.png]

On 1/9/13 1:05 PM, Tim Horgan tim.hor...@cit.iemailto:tim.hor...@cit.ie 
wrote:

Hi Séan,

I run the OpenStack Ireland meetup group, just wondering if your meeting is 
targeting only groups in the East Coast of the USA?

Kind Regards,
Tim

On 9 Jan 2013, at 00:54, Sean Roberts 
sean...@yahoo-inc.commailto:sean...@yahoo-inc.com wrote:


We are going to have a planning meeting on Tuesday, 15 Jan 2012 11:00am to 
1:00pm PST. RSVP via  http://www.meetup.com/openstack/events/93593062/  Connect 
remotely via webex 
https://yahoomeetings.webex.com/yahoomeetings/j.php?ED=160663792UID=492396097RT=MiM0
If this time doesn't work for you, get ahold of me directly via 
skype:seanroberts66, email, mobile, irc:sarob, twitter:sarob, or carrier pigeon.

Stefano and Thierry will be joining us. I want to get input from people that 
run user groups and meetups.

See you then!


Re: [Openstack] [openstack-community] Calling all user group and meetup organizers

2013-01-14 Thread Yujie Du
2013/1/15 Bruce Lok bruce...@cyberport.hk

  Hi,

 I am sorry that I may not able to join the meeting but the Hong Kong
 OpenStack User Group and Hong Kong Cyberport will be updating you any
 information if necessary.

 ** **

 May I have a bit comment on the following agenda:

 ** **

 *Can the Foundation start working with some Universities for meeting
 space, sponsorship, and student participation? Is the Foundation interested
 in Phd thesis sponsorship*

 I strongly agree on this point, since there is getting more university
 students and researchers who starts to explore the open source cloud
 softwares as most commercial cloud softwares/services on market are really
 expensive for them to get start.   Also, university students and
 researchers are usually more sincere to explore the open source software
 because it is free and cloud computing is quite new for them.  And this is
 a good starting point to educate them to contribute to open source stuffs
 rather than just using it as freeware.

 **

+1

 **

 *Should there be an OpenStack user group and/or meetup approval process?
 How do we clean up user groups and meetups that are abandoned? *

 I think there must be have an official central coordinator to manage all
 the user groups.

 **

One more thing that I think should be discussed is that we had a lot of
community events last year, but didn't use the meetup.com online services,
due to problems such as user habits. Can you give some advice? Or accept
our own style as a user group meetup format?

 **

 Thank you for the attention, and wish you all the best in year 2013.

 ** **

 Regards,

 *Bruce Lok*

 Coordinator of HKOSUG

 Engineer | Technology Centre

 Hong Kong Cyberport Management Co. Ltd.

 Tel: +852 3166-3728

 Fax: +852 3027-0099

 ** **

 ** **

 *From:* Sean Roberts [mailto:sean...@yahoo-inc.com]
 *Sent:* Tuesday, 15 January, 2013 07:26
 *To:* OpenStack community; openstack
 *Cc:* rob_hirschf...@dell.com; kamesh_pemmar...@dell.com;
 andi_a...@dell.com; trevor.low...@gmail.com;
 stephon.strip...@dreamhost.com; ca...@hq.newdream.net;
 brent.scot...@rackspace.com; kmest...@cisco.com; lloydost...@gmail.com;
 freedom...@gmail.com; duyujie@gmail.com; santiagoc...@outlook.com;
 sc...@kent.ac.uk; moha...@egyptcloudforum.com; ilkka.turu...@jamk.fi;
 bere...@b1-systems.de; Bruce Lok; marton.k...@xemeti.com;
 deepakgarg.i...@gmail.com; fr...@meruvian.org; tim.hor...@cit.ie;
 fen...@ubuntu.com; mypa...@gmail.com; muha...@lbox.cc; hang.t...@dtt.vn

 *Subject:* Re: [openstack-community] Calling all user group and meetup
 organizers

  ** **

 Agenda for tomorrow's user group / meetup planning meeting

- ** **
- Review of user group and meetup template
- Meetups fit into the larger plan of user committee and summit to
summit development how? 
- Where would be the location for social materials like videos, meetup
logs, and other meeting related items?
- Where do operators go to figure out how to run OpenStack in
production?
- Can the Foundation start working with some Universities for meeting
space, sponsorship, and student participation? Is the Foundation interested
in Phd thesis sponsorship?
- If possible I like to get some supporters for a 
 groups.openstack.orgsite, including content writers and some developer 
 resources from the
community. We have a lot of knowledge about running / starting an user
group, and need to write it down, publish on the site and share it with
newcomers. * I think this is answered by the user group template
information that will be published to
http://wiki.openstack.org/OpenStackUserGroups. *
- Should there be an OpenStack user group and/or meetup approval
process? How do we clean up user groups and meetups that are abandoned?


   ** **

 *Sean Roberts*
 Infrastructure Strategy
 sean...@yahoo-inc.com

 Direct (408) 349-5234  Mobile (925) 980-4729

 701 First Avenue, Sunnyvale, CA, 94089-0703, US
 Phone (408) 349-3300  Fax (408) 349-3301

 

 ** **

 On 1/14/13 2:07 PM, Sean Roberts sean...@yahoo-inc.com wrote:

 ** **

Reminder for the global user group planning meetup, tomorrow, Tuesday,
 15 Jan 2012 11:00am to 1:00pm PST
 http://www.meetup.com/openstack/events/93593062/  

 Join us!

 ** **

 ** **

 *Sean Roberts*
 Infrastructure Strategy
 sean...@yahoo-inc.com

 Direct (408) 349-5234  Mobile (925) 980-4729

 701 First Avenue, Sunnyvale, CA, 94089-0703, US
 Phone (408) 349-3300  Fax (408) 349-3301

 

 ** **

 On 1/9/13 1:05 PM, Tim Horgan tim.hor...@cit.ie wrote:

 ** **

  Hi Séan,

 ** **

 I run the OpenStack Ireland meetup group, just wondering if your meeting
 is targeting only groups in the East Coast of the USA?

 ** **

 Kind Regards,

 Tim

 ** **

 On 9 Jan 2013, at 00:54, Sean Roberts sean...@yahoo-inc.com wrote:



 


[Openstack] [openstack]Test if i have join this maillist, don't read

2013-01-14 Thread harryxiyou
Test if i have join this maillist, don't read

-- 
Thanks
Harry Wei

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] How can I boot an instance with multiple NICs and limit them to a single VLAN?

2013-01-14 Thread Ray Sun
I tried to boot a new instance with two NICs, but it seems I can't set the
network_id to a single network; that means I have to create at least two VLANs
for the NICs. Is there any way I can boot my instance with two NICs and
assign both IPs from a single VLAN? And why is this not allowed currently?
Thanks.
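
(From what I can tell, the 400 comes from nova's network request validation,
which refuses any request that names the same network twice; a rough paraphrase
of that check, not the exact nova source:)

# paraphrase of the check behind "Duplicate networks ... are not allowed"
def validate_requested_networks(requested_networks):
    seen = set()
    for net_id, fixed_ip in requested_networks:
        if net_id in seen:
            raise ValueError("Duplicate networks (%s) are not allowed" % net_id)
        seen.add(net_id)

validate_requested_networks([
    ("5fbe4145-f48e-420b-9493-866714d08376", "172.16.4.20"),
    ("5fbe4145-f48e-420b-9493-866714d08376", "172.16.4.21"),
])  # raises, which the API surfaces as the HTTP 400 below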

Here's the error message I saw from output:
nova --debug boot --flavor 11033 --image
39b66c00-2dfd-4310-99f2-76ea8905e820 --availability-zone CL_PUBLIC --nic
net-id=5fbe4145-f48e-420b-9493-866714d08376,v4-fixed-ip=172.16.4.20 --nic
net-id=5fbe4145-f48e-420b-9493-866714d08376,v4-fixed-ip=172.16.4.21
--security-groups default MYTEST

DEBUG (shell:534) Duplicate networks (5fbe4145-f48e-420b-9493-866714d08376)
are not allowed (HTTP 400) (Request-ID:
req-0bf32ea6-afc4-4ffa-94dd-d4e4a0d3a8eb)
Traceback (most recent call last):
  File
/usr/local/lib/python2.7/dist-packages/python_novaclient-2.9.0.24-py2.7.egg/novaclient/shell.py,
line 531, in main
OpenStackComputeShell().main(sys.argv[1:])
  File
/usr/local/lib/python2.7/dist-packages/python_novaclient-2.9.0.24-py2.7.egg/novaclient/shell.py,
line 467, in main
args.func(self.cs, args)
  File
/usr/local/lib/python2.7/dist-packages/python_novaclient-2.9.0.24-py2.7.egg/novaclient/v1_1/shell.py,
line 228, in do_boot
server = cs.servers.create(*boot_args, **boot_kwargs)
  File
/usr/local/lib/python2.7/dist-packages/python_novaclient-2.9.0.24-py2.7.egg/novaclient/v1_1/servers.py,
line 498, in create
**boot_kwargs)
  File
/usr/local/lib/python2.7/dist-packages/python_novaclient-2.9.0.24-py2.7.egg/novaclient/v1_1/base.py,
line 162, in _boot
return_raw=return_raw, **kwargs)
  File
/usr/local/lib/python2.7/dist-packages/python_novaclient-2.9.0.24-py2.7.egg/novaclient/base.py,
line 148, in _create
_resp, body = self.api.client.post(url, body=body)
  File
/usr/local/lib/python2.7/dist-packages/python_novaclient-2.9.0.24-py2.7.egg/novaclient/client.py,
line 244, in post
return self._cs_request(url, 'POST', **kwargs)
  File
/usr/local/lib/python2.7/dist-packages/python_novaclient-2.9.0.24-py2.7.egg/novaclient/client.py,
line 228, in _cs_request
**kwargs)
  File
/usr/local/lib/python2.7/dist-packages/python_novaclient-2.9.0.24-py2.7.egg/novaclient/client.py,
line 210, in _time_request
resp, body = self.request(url, method, **kwargs)
  File
/usr/local/lib/python2.7/dist-packages/python_novaclient-2.9.0.24-py2.7.egg/novaclient/client.py,
line 204, in request
raise exceptions.from_response(resp, body)
BadRequest: Duplicate networks (5fbe4145-f48e-420b-9493-866714d08376) are
not allowed (HTTP 400) (Request-ID:
req-0bf32ea6-afc4-4ffa-94dd-d4e4a0d3a8eb)
ERROR: Duplicate networks (5fbe4145-f48e-420b-9493-866714d08376) are not
allowed (HTTP 400) (Request-ID: req-0bf32ea6-afc4-4ffa-94dd-d4e4a0d3a8eb)

- Ray
Yours faithfully, Kind regards.

CIeNET Technologies (Beijing) Co., Ltd
Email: qsun01...@cienet.com.cn
Office Phone: +86-01081470088-7079
Mobile Phone: +86-13581988291
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-community] Calling all user group and meetup organizers

2013-01-14 Thread Atul Jha
Hi All,

snip
Hi,
I am sorry that I may not able to join the meeting but the Hong Kong OpenStack 
User Group and Hong Kong Cyberport will be updating you any information if 
necessary.

May I have a bit comment on the following agenda:

Can the Foundation start working with some Universities for meeting space, 
sponsorship, and student participation? Is the Foundation interested in Phd 
thesis sponsorship
I strongly agree on this point, since a growing number of university students 
and researchers are starting to explore open source cloud software, as most 
commercial cloud software/services on the market are too expensive for them to 
get started with.  Also, university students and researchers are usually more 
willing to explore open source software because it is free and cloud computing 
is quite new to them.  This is also a good starting point to educate them to 
contribute to open source projects rather than just using them as freeware.

Should there be an OpenStack user group and/or meetup approval process? How do 
we clean up user groups and meetups that are abandoned?
I think there must be an official central coordinator to manage all the 
user groups.

Thank you for your attention, and I wish you all the best in 2013.

Regards,
Bruce Lok
Coordinator of HKOSUG
Engineer | Technology Centre
Hong Kong Cyberport Management Co. Ltd.
Tel: +852 3166-3728
Fax: +852 3027-0099


From: Sean Roberts [mailto:sean...@yahoo-inc.com]
Sent: Tuesday, 15 January, 2013 07:26
To: OpenStack community; openstack
Cc: rob_hirschf...@dell.com; kamesh_pemmar...@dell.com; andi_a...@dell.com; 
trevor.low...@gmail.com; stephon.strip...@dreamhost.com; ca...@hq.newdream.net; 
brent.scot...@rackspace.com; kmest...@cisco.com; lloydost...@gmail.com; 
freedom...@gmail.com; duyujie@gmail.com; santiagoc...@outlook.com; 
sc...@kent.ac.uk; moha...@egyptcloudforum.com; ilkka.turu...@jamk.fi; 
bere...@b1-systems.de; Bruce Lok; marton.k...@xemeti.com; 
deepakgarg.i...@gmail.com; fr...@meruvian.org; tim.hor...@cit.ie; 
fen...@ubuntu.com; mypa...@gmail.com; muha...@lbox.cc; hang.t...@dtt.vn
Subject: Re: [openstack-community] Calling all user group and meetup organizers

Agenda for tomorrow's user group / meetup planning meeting

  *   Review of user group and meetup template
  *   How do meetups fit into the larger plan of the user committee and 
summit-to-summit development?
  *   Where should social materials such as videos, meetup logs, and other 
meeting-related items be kept?
  *   Where do operators go to figure out how to run OpenStack in production?
  *   Can the Foundation start working with some Universities for meeting 
space, sponsorship, and student participation? Is the Foundation interested in 
PhD thesis sponsorship?
  *   If possible, I would like to get some supporters for a groups.openstack.org 
site, including content writers and some developer resources from the 
community. We have a lot of knowledge about running / starting a user group, 
and we need to write it down, publish it on the site and share it with newcomers.  I 
think this is answered by the user group template information that will be 
published to http://wiki.openstack.org/OpenStackUserGroups.
  *   Should there be an OpenStack user group and/or meetup approval process? 
How do we clean up user groups and meetups that are abandoned?

Sean Roberts
Infrastructure Strategy
sean...@yahoo-inc.com
Direct (408) 349-5234  Mobile (925) 980-4729

701 First Avenue, Sunnyvale, CA, 94089-0703, US
Phone (408) 349-3300  Fax (408) 349-3301


On 1/14/13 2:07 PM, Sean Roberts sean...@yahoo-inc.com wrote:

Reminder for the global user group planning meetup, tomorrow, Tuesday, 15 Jan 
2013 11:00am to 1:00pm PST  http://www.meetup.com/openstack/events/93593062/
Join us!


Sean Roberts
Infrastructure Strategy
sean...@yahoo-inc.com
Direct (408) 349-5234  Mobile (925) 980-4729

701 First Avenue, Sunnyvale, CA, 94089-0703, US
Phone (408) 349-3300  Fax (408) 349-3301


On 1/9/13 1:05 PM, Tim Horgan tim.hor...@cit.ie wrote:

Hi Séan,

I run the OpenStack Ireland meetup group; just wondering if your meeting is 
targeting only groups on the East Coast of the USA?

Kind Regards,
Tim

On 9 Jan 2013, at 00:54, Sean Roberts sean...@yahoo-inc.com wrote:


We are going to have a planning meeting on Tuesday, 15 Jan 2013 11:00am to 
1:00pm PST. RSVP via  http://www.meetup.com/openstack/events/93593062/  Connect 
remotely via webex 
https://yahoomeetings.webex.com/yahoomeetings/j.php?ED=160663792UID=492396097RT=MiM0
If this time doesn't work for you, get ahold of me directly via 
skype:seanroberts66, email, mobile, irc:sarob, twitter:sarob, or carrier pigeon.

Stefano and Thierry will be joining us. I want to get input from people that 
run user groups and meetups.

See 

[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_quantum_trunk #194

2013-01-14 Thread openstack-testing-bot
 text/html; charset=UTF-8: Unrecognized 
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_nova_trunk #469

2013-01-14 Thread openstack-testing-bot
Title: raring_grizzly_nova_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/469/Project:raring_grizzly_nova_trunkDate of build:Mon, 14 Jan 2013 07:01:03 -0500Build duration:3 min 53 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesMove libvirt VIF XML config into designer.pyby berrangeeditnova/virt/libvirt/vif.pyaddnova/virt/libvirt/designer.pyeditnova/virt/libvirt/config.pyAdd some constants to the network model for drivers to useby berrangeeditnova/network/model.pyMake nova network manager fill in vif_typeby berrangeeditnova/network/manager.pyeditnova/tests/network/test_manager.pyConsole Output[...truncated 6856 lines...]dch -a [f5a94c7] Handle waiting for conductor in nova.service.dch -a [b09daba] Allow forcing local conductor.dch -a [42f3d0b] Make pinging conductor a part of conductor API.dch -a [6ea3082] Fix some conductor manager return values.dch -a [89f91da] Handle directory conflicts with html output.dch -a [ac8b949] Fix error in NovaBase.save() methoddch -a [d17a741] Remove more unused opts from nova.scheduler.driverdch -a [7974ad3] Fix quota updating when admin deletes common user's instancedch -a [9d3f524] Correct the calculating of disk size when using lvm disk backend.dch -a [39f80b8] Adding configdrive to xenapi.dch -a [038ca9b] Fix libvirt resume function call to get_domain_xmldch -a [97f0ec7] Access instance as dict, not object in xenapidch -a [745335b] Move logic from os-api-host into computedch -a [7a77dd7] Create a directory for servicegroup drivers.dch -a [5731852] Move update_instance_info_cache to conductor.dch -a [bda08f6] Change ComputerDriver.legacy_nwinfo to raise by default.dch -a [3934fa5] Cleanup pyflakes in nova-managedch -a [ce098cc] Add user/tenant shim to RequestContextdch -a [3a0eb6d] Add host name to log message for _local_deletedch -a [f863697] Make nova network manager fill in vif_typedch -a [67cd497] Add some constants to the network model for drivers to usedch -a [bcb9983] Move libvirt VIF XML config into designer.pydch -a [567bbd1] Remove bogus 'unplug' calls from libvirt VIF testdch -a [abc9a0d] Update instance's cell_name in API cell.dch -a [f362b36] Fix test cases in integrated.test_multiprocess_apidch -a [35328dd] Added sample tests to FlavorSwap API.dch -a [477722a] Fix lintstack check for multi-patch reviewsdch -a [680a3ce] xenapi: Remove dead code, moves, testsdch -a [4ff4edd] Upgrade WebOb to 1.2.3dch -a [e87c241] Allow pinging own float when using fixed gatewaydebcommitTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['debcommit']' returned non-zero exit status 7Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['debcommit']' returned non-zero exit status 7Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
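
On the recurring debcommit failure above: build-package re-raises the bare
subprocess.CalledProcessError from line 141, so the notification only shows the
exit status and not what debcommit actually printed (the secondary apport error
appears to be a side effect of the working directory having been removed before
the hook ran). A small sketch of a wrapper that logs the failing command's
output before re-raising, offered as an illustration rather than the actual
build-package code, could look like this:

# Sketch only: run the packaging steps (debcommit, sbuild, ...) through a
# wrapper that records the command output when it fails, instead of losing
# it behind "returned non-zero exit status N".
import logging
import subprocess

log = logging.getLogger('build-package')

def run_logged(cmd):
    try:
        return subprocess.check_output(cmd, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as exc:
        log.error('%s failed with status %s:\n%s',
                  ' '.join(cmd), exc.returncode, exc.output)
        raise

# Example: the step that fails in the builds above.
# run_logged(['debcommit'])
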
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_nova_trunk #465

2013-01-14 Thread openstack-testing-bot
Title: precise_grizzly_nova_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/465/Project:precise_grizzly_nova_trunkDate of build:Mon, 14 Jan 2013 07:01:02 -0500Build duration:9 min 13 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 3 out of the last 5 builds failed.40ChangesMove libvirt VIF XML config into designer.pyby berrangeeditnova/virt/libvirt/config.pyaddnova/virt/libvirt/designer.pyeditnova/virt/libvirt/vif.pyAdd some constants to the network model for drivers to useby berrangeeditnova/network/model.pyMake nova network manager fill in vif_typeby berrangeeditnova/network/manager.pyeditnova/tests/network/test_manager.pyConsole Output[...truncated 11238 lines...]Distribution: precise-grizzlyFail-Stage: buildHost Architecture: amd64Install-Time: 50Job: nova_2013.1+git201301140701~precise-0ubuntu1.dscMachine Architecture: amd64Package: novaPackage-Time: 410Source-Version: 2013.1+git201301140701~precise-0ubuntu1Space: 116260Status: attemptedVersion: 2013.1+git201301140701~precise-0ubuntu1Finished at 20130114-0710Build needed 00:06:50, 116260k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201301140701~precise-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201301140701~precise-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/nova/grizzly /tmp/tmpPlTv3o/novamk-build-deps -i -r -t apt-get -y /tmp/tmpPlTv3o/nova/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hbzr merge lp:~openstack-ubuntu-testing/nova/precise-grizzly --forcedch -b -D precise --newversion 2013.1+git201301140701~precise-0ubuntu1 Automated Ubuntu testing build:dch -a No change rebuild.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC nova_2013.1+git201301140701~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A nova_2013.1+git201301140701~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201301140701~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201301140701~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5120

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5120/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson1895817599462925974.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure
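
For context on this recurring failure: the drift check flags any package whose
Cloud Archive staging version sorts lower than the version now in the Ubuntu
archive, which is the case here because 0.9.17-0ubuntu1~cloud0 sorts before
0.9.17-0ubuntu1.1 (a tilde component always sorts earliest). A minimal sketch
of that comparison with python-apt, as an illustration of the idea rather than
the actual gather-versions.py / ca-versions.py code, looks like this:

# Sketch only: compare a Cloud Archive staging version against the Ubuntu
# archive version the way dpkg would, using python-apt.
import apt_pkg

apt_pkg.init_system()

def is_superseded(staging_version, ubuntu_version):
    # version_compare() is negative when the first argument is the older one.
    return apt_pkg.version_compare(staging_version, ubuntu_version) < 0

# The python-eventlet case reported above evaluates to True:
print(is_superseded('0.9.17-0ubuntu1~cloud0', '0.9.17-0ubuntu1.1'))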

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5121

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5121/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson6544004505538030305.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5122

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5122/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson7303099275704079374.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_nova_trunk #470

2013-01-14 Thread openstack-testing-bot
Title: raring_grizzly_nova_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/470/Project:raring_grizzly_nova_trunkDate of build:Mon, 14 Jan 2013 08:31:04 -0500Build duration:3 min 50 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesMake Quantum plugin fill in the bridge nameby berrangeeditnova/virt/libvirt/vif.pyeditnova/network/quantumv2/api.pyeditnova/tests/test_libvirt_vif.pyMake it clearer that network.api.API is nova-network specific.by robertceditnova/network/api.pyConsole Output[...truncated 6863 lines...]dch -a [42f3d0b] Make pinging conductor a part of conductor API.dch -a [6ea3082] Fix some conductor manager return values.dch -a [89f91da] Handle directory conflicts with html output.dch -a [ac8b949] Fix error in NovaBase.save() methoddch -a [d17a741] Remove more unused opts from nova.scheduler.driverdch -a [7974ad3] Fix quota updating when admin deletes common user's instancedch -a [9d3f524] Correct the calculating of disk size when using lvm disk backend.dch -a [39f80b8] Adding configdrive to xenapi.dch -a [038ca9b] Fix libvirt resume function call to get_domain_xmldch -a [96c04dd] Make it clearer that network.api.API is nova-network specific.dch -a [97f0ec7] Access instance as dict, not object in xenapidch -a [745335b] Move logic from os-api-host into computedch -a [7a77dd7] Create a directory for servicegroup drivers.dch -a [5731852] Move update_instance_info_cache to conductor.dch -a [bda08f6] Change ComputerDriver.legacy_nwinfo to raise by default.dch -a [3934fa5] Cleanup pyflakes in nova-managedch -a [ce098cc] Add user/tenant shim to RequestContextdch -a [3a0eb6d] Add host name to log message for _local_deletedch -a [4babf7d] Make Quantum plugin fill in the 'bridge' namedch -a [f863697] Make nova network manager fill in vif_typedch -a [67cd497] Add some constants to the network model for drivers to usedch -a [bcb9983] Move libvirt VIF XML config into designer.pydch -a [567bbd1] Remove bogus 'unplug' calls from libvirt VIF testdch -a [abc9a0d] Update instance's cell_name in API cell.dch -a [f362b36] Fix test cases in integrated.test_multiprocess_apidch -a [35328dd] Added sample tests to FlavorSwap API.dch -a [477722a] Fix lintstack check for multi-patch reviewsdch -a [680a3ce] xenapi: Remove dead code, moves, testsdch -a [4ff4edd] Upgrade WebOb to 1.2.3dch -a [e87c241] Allow pinging own float when using fixed gatewaydebcommitTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['debcommit']' returned non-zero exit status 7Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['debcommit']' returned non-zero exit status 7Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_grizzly_nova_trunk #466

2013-01-14 Thread openstack-testing-bot
Title: precise_grizzly_nova_trunk
General InformationBUILD SUCCESSBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/466/Project:precise_grizzly_nova_trunkDate of build:Mon, 14 Jan 2013 08:31:03 -0500Build duration:11 minBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 2 out of the last 5 builds failed.60ChangesMake Quantum plugin fill in the bridge nameby berrangeeditnova/tests/test_libvirt_vif.pyeditnova/virt/libvirt/vif.pyeditnova/network/quantumv2/api.pyMake it clearer that network.api.API is nova-network specific.by robertceditnova/network/api.pyConsole Output[...truncated 19833 lines...]deleting and forgetting pool/main/n/nova/nova-cells_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-cert_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-common_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute-kvm_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute-lxc_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute-qemu_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute-uml_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute-xcp_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute-xen_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-conductor_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-console_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-consoleauth_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-doc_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-network_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-novncproxy_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-objectstore_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-scheduler_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-volume_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-xcp-network_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-xcp-plugins_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-xvpvncproxy_2013.1+git201301140131~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/python-nova_2013.1+git201301140131~precise-0ubuntu1_all.debINFO:root:Pushing changes back to bzr testing branchDEBUG:root:['bzr', 'push', 'lp:~openstack-ubuntu-testing/nova/precise-grizzly']Pushed up to revision 542.INFO:root:Storing current commit for next build: 2f4616c2be4fb03864c74cdbb1d1ec9b2bdbf235INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/nova/grizzly /tmp/tmpEI8GQ5/novamk-build-deps -i -r -t apt-get -y /tmp/tmpEI8GQ5/nova/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hbzr merge 
lp:~openstack-ubuntu-testing/nova/precise-grizzly --forcedch -b -D precise --newversion 2013.1+git201301140831~precise-0ubuntu1 Automated Ubuntu testing build:dch -a No change rebuild.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC nova_2013.1+git201301140831~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A nova_2013.1+git201301140831~precise-0ubuntu1.dscdput ppa:openstack-ubuntu-testing/grizzly-trunk-testing nova_2013.1+git201301140831~precise-0ubuntu1_source.changesreprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-grizzly nova_2013.1+git201301140831~precise-0ubuntu1_amd64.changesbzr push lp:~openstack-ubuntu-testing/nova/precise-grizzly+ [ ! 0 ]+ jenkins-cli build precise_grizzly_deployEmail was triggered for: FixedTrigger Success was overridden by another trigger and will not send an email.Sending email for trigger: Fixed-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5123

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5123/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson1296521835439203482.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5124

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5124/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson8734854266742836545.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_nova_trunk #471

2013-01-14 Thread openstack-testing-bot
Title: raring_grizzly_nova_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/471/Project:raring_grizzly_nova_trunkDate of build:Mon, 14 Jan 2013 09:01:04 -0500Build duration:4 min 23 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesFix pyflakes issues in integrated testsby cbehrenseditnova/tests/integrated/test_extensions.pyeditnova/tests/integrated/test_api_samples.pyConsole Output[...truncated 6866 lines...]dch -a [6ea3082] Fix some conductor manager return values.dch -a [89f91da] Handle directory conflicts with html output.dch -a [ac8b949] Fix error in NovaBase.save() methoddch -a [d17a741] Remove more unused opts from nova.scheduler.driverdch -a [7974ad3] Fix quota updating when admin deletes common user's instancedch -a [9d3f524] Correct the calculating of disk size when using lvm disk backend.dch -a [39f80b8] Adding configdrive to xenapi.dch -a [038ca9b] Fix libvirt resume function call to get_domain_xmldch -a [96c04dd] Make it clearer that network.api.API is nova-network specific.dch -a [97f0ec7] Access instance as dict, not object in xenapidch -a [745335b] Move logic from os-api-host into computedch -a [7a77dd7] Create a directory for servicegroup drivers.dch -a [5731852] Move update_instance_info_cache to conductor.dch -a [bda08f6] Change ComputerDriver.legacy_nwinfo to raise by default.dch -a [3934fa5] Cleanup pyflakes in nova-managedch -a [ce098cc] Add user/tenant shim to RequestContextdch -a [3a0eb6d] Add host name to log message for _local_deletedch -a [4babf7d] Make Quantum plugin fill in the 'bridge' namedch -a [f863697] Make nova network manager fill in vif_typedch -a [67cd497] Add some constants to the network model for drivers to usedch -a [bcb9983] Move libvirt VIF XML config into designer.pydch -a [567bbd1] Remove bogus 'unplug' calls from libvirt VIF testdch -a [abc9a0d] Update instance's cell_name in API cell.dch -a [f362b36] Fix test cases in integrated.test_multiprocess_apidch -a [35328dd] Added sample tests to FlavorSwap API.dch -a [73cb9f6] Fix pyflakes issues in integrated testsdch -a [477722a] Fix lintstack check for multi-patch reviewsdch -a [680a3ce] xenapi: Remove dead code, moves, testsdch -a [4ff4edd] Upgrade WebOb to 1.2.3dch -a [e87c241] Allow pinging own float when using fixed gatewaydebcommitTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['debcommit']' returned non-zero exit status 7Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['debcommit']' returned non-zero exit status 7Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_nova_trunk #467

2013-01-14 Thread openstack-testing-bot
Title: precise_grizzly_nova_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/467/Project:precise_grizzly_nova_trunkDate of build:Mon, 14 Jan 2013 09:01:04 -0500Build duration:9 min 43 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 2 out of the last 5 builds failed.60ChangesFix pyflakes issues in integrated testsby cbehrenseditnova/tests/integrated/test_extensions.pyeditnova/tests/integrated/test_api_samples.pyConsole Output[...truncated 11240 lines...]Distribution: precise-grizzlyFail-Stage: buildHost Architecture: amd64Install-Time: 54Job: nova_2013.1+git201301140902~precise-0ubuntu1.dscMachine Architecture: amd64Package: novaPackage-Time: 413Source-Version: 2013.1+git201301140902~precise-0ubuntu1Space: 116284Status: attemptedVersion: 2013.1+git201301140902~precise-0ubuntu1Finished at 20130114-0910Build needed 00:06:53, 116284k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201301140902~precise-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201301140902~precise-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/nova/grizzly /tmp/tmprv3J87/novamk-build-deps -i -r -t apt-get -y /tmp/tmprv3J87/nova/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hbzr merge lp:~openstack-ubuntu-testing/nova/precise-grizzly --forcedch -b -D precise --newversion 2013.1+git201301140902~precise-0ubuntu1 Automated Ubuntu testing build:dch -a No change rebuild.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC nova_2013.1+git201301140902~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A nova_2013.1+git201301140902~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201301140902~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201301140902~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5125

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5125/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson8659063694713879122.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5126

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5126/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson1933592077357983603.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5127

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5127/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson1860915769796582513.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5128

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5128/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson7670778259837257597.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5129

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5129/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson5741044338060832235.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5130

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5130/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson6118770960325836242.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5136

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5136/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson8799501169259009443.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_grizzly_keystone_trunk #80

2013-01-14 Thread openstack-testing-bot
Title: precise_grizzly_keystone_trunk
General InformationBUILD SUCCESSBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/80/Project:precise_grizzly_keystone_trunkDate of build:Mon, 14 Jan 2013 12:01:04 -0500Build duration:4 min 44 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 1 out of the last 5 builds failed.80ChangesImported Translations from Transifexby Jenkinseditkeystone/locale/keystone.potConsole Output[...truncated 5185 lines...]gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) "Checking signature on .changesGood signature on /tmp/tmpEfIu3r/keystone_2013.1+git201301141201~precise-0ubuntu1_source.changes.Checking signature on .dscGood signature on /tmp/tmpEfIu3r/keystone_2013.1+git201301141201~precise-0ubuntu1.dsc.Uploading to ppa (via ftp to ppa.launchpad.net):  Uploading keystone_2013.1+git201301141201~precise-0ubuntu1.dsc: done.  Uploading keystone_2013.1+git201301141201~precise.orig.tar.gz: done.  Uploading keystone_2013.1+git201301141201~precise-0ubuntu1.debian.tar.gz: done.  Uploading keystone_2013.1+git201301141201~precise-0ubuntu1_source.changes: done.Successfully uploaded packages.INFO:root:Installing build artifacts into /var/lib/jenkins/www/aptDEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'keystone_2013.1+git201301141201~precise-0ubuntu1_amd64.changes']Exporting indices...Successfully created '/var/lib/jenkins/www/apt/dists/precise-grizzly/Release.gpg.new'Successfully created '/var/lib/jenkins/www/apt/dists/precise-grizzly/InRelease.new'Deleting files no longer referenced...deleting and forgetting pool/main/k/keystone/keystone-doc_2013.1+git201301112036~precise-0ubuntu1_all.debdeleting and forgetting pool/main/k/keystone/keystone_2013.1+git201301112036~precise-0ubuntu1_all.debdeleting and forgetting pool/main/k/keystone/python-keystone_2013.1+git201301112036~precise-0ubuntu1_all.debINFO:root:Pushing changes back to bzr testing branchDEBUG:root:['bzr', 'push', 'lp:~openstack-ubuntu-testing/keystone/precise-grizzly']Pushed up to revision 169.INFO:root:Storing current commit for next build: 3a38ecfc8868c95ad3df43de22b29f08f2c9d4cfINFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/keystone/grizzly /tmp/tmpEfIu3r/keystonemk-build-deps -i -r -t apt-get -y /tmp/tmpEfIu3r/keystone/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 9af1d7bebd04a5cebefbaa6f6f9885cd33e2bbd7..HEAD --no-merges --pretty=format:[%h] %sbzr merge lp:~openstack-ubuntu-testing/keystone/precise-grizzly --forcedch -b -D precise --newversion 2013.1+git201301141201~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [3a38ecf] Validated URLs in v2 endpoint creation APIdch -a [7490cab] Correct spelling errors / typos in test namesdch -a [617b700] Imported Translations from Transifexdch -a [ba6b1c3] Revert "shorten pep8 output"dch -a [949fbbb] Removed unused variablesdebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC keystone_2013.1+git201301141201~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A keystone_2013.1+git201301141201~precise-0ubuntu1.dscdput ppa:openstack-ubuntu-testing/grizzly-trunk-testing keystone_2013.1+git201301141201~precise-0ubuntu1_source.changesreprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-grizzly keystone_2013.1+git201301141201~precise-0ubuntu1_amd64.changesbzr push 
lp:~openstack-ubuntu-testing/keystone/precise-grizzlyEmail was triggered for: FixedTrigger Success was overridden by another trigger and will not send an email.Sending email for trigger: Fixed-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5137

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5137/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson5279458807317819463.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: raring_grizzly_glance_trunk #73

2013-01-14 Thread openstack-testing-bot
Title: raring_grizzly_glance_trunk
General InformationBUILD SUCCESSBuild URL:https://jenkins.qa.ubuntu.com/job/raring_grizzly_glance_trunk/73/Project:raring_grizzly_glance_trunkDate of build:Mon, 14 Jan 2013 12:10:56 -0500Build duration:12 minBuild cause:Started by user james-pageBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 1 out of the last 5 builds failed.80ChangesNo ChangesConsole Output[...truncated 7478 lines...]gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) "gpg: Signature made Mon Jan 14 12:13:18 2013 EST using RSA key ID 9935ACDCgpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) "Checking signature on .changesGood signature on /tmp/tmpMqRDNa/glance_2013.1+git201301141211~raring-0ubuntu1_source.changes.Checking signature on .dscGood signature on /tmp/tmpMqRDNa/glance_2013.1+git201301141211~raring-0ubuntu1.dsc.Uploading to ppa (via ftp to ppa.launchpad.net):  Uploading glance_2013.1+git201301141211~raring-0ubuntu1.dsc: done.  Uploading glance_2013.1+git201301141211~raring.orig.tar.gz: done.  Uploading glance_2013.1+git201301141211~raring-0ubuntu1.debian.tar.gz: done.  Uploading glance_2013.1+git201301141211~raring-0ubuntu1_source.changes: done.Successfully uploaded packages.INFO:root:Installing build artifacts into /var/lib/jenkins/www/aptDEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'raring-grizzly', 'glance_2013.1+git201301141211~raring-0ubuntu1_amd64.changes']Exporting indices...Successfully created '/var/lib/jenkins/www/apt/dists/raring-grizzly/Release.gpg.new'Successfully created '/var/lib/jenkins/www/apt/dists/raring-grizzly/InRelease.new'Deleting files no longer referenced...deleting and forgetting pool/main/g/glance/glance-api_2013.1+git20130731~raring-0ubuntu1_all.debdeleting and forgetting pool/main/g/glance/glance-common_2013.1+git20130731~raring-0ubuntu1_all.debdeleting and forgetting pool/main/g/glance/glance-registry_2013.1+git20130731~raring-0ubuntu1_all.debdeleting and forgetting pool/main/g/glance/glance_2013.1+git20130731~raring-0ubuntu1_all.debdeleting and forgetting pool/main/g/glance/python-glance-doc_2013.1+git20130731~raring-0ubuntu1_all.debdeleting and forgetting pool/main/g/glance/python-glance_2013.1+git20130731~raring-0ubuntu1_all.debINFO:root:Pushing changes back to bzr testing branchDEBUG:root:['bzr', 'push', 'lp:~openstack-ubuntu-testing/glance/raring-grizzly']Pushed up to revision 231.INFO:root:Storing current commit for next build: 26a13983b8cc3b0276b1115057585126443f0a02INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/glance/grizzly /tmp/tmpMqRDNa/glancemk-build-deps -i -r -t apt-get -y /tmp/tmpMqRDNa/glance/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hbzr merge lp:~openstack-ubuntu-testing/glance/raring-grizzly --forcedch -b -D raring --newversion 2013.1+git201301141211~raring-0ubuntu1 Automated Ubuntu testing build:dch -a No change rebuild.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC glance_2013.1+git201301141211~raring-0ubuntu1_source.changessbuild -d raring-grizzly -n -A glance_2013.1+git201301141211~raring-0ubuntu1.dscdput ppa:openstack-ubuntu-testing/grizzly-trunk-testing glance_2013.1+git201301141211~raring-0ubuntu1_source.changesreprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include raring-grizzly glance_2013.1+git201301141211~raring-0ubuntu1_amd64.changesbzr push lp:~openstack-ubuntu-testing/glance/raring-grizzlyEmail was triggered for: 
FixedTrigger Success was overridden by another trigger and will not send an email.Sending email for trigger: Fixed-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5138

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5138/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson1029321532974771193.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5139

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5139/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson5012153280788242984.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_quantum_trunk #191

2013-01-14 Thread openstack-testing-bot
 text/html; charset=UTF-8: Unrecognized 
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_quantum_trunk #195

2013-01-14 Thread openstack-testing-bot
 text/html; charset=UTF-8: Unrecognized 
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5141

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5141/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson7929871280516769598.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_folsom_version-drift #5142

2013-01-14 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5142/

--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe 
/tmp/hudson1204781299510230500.sh
+ OS_RELEASE=folsom
+ /var/lib/jenkins/tools/ca-versions/gather-versions.py folsom
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ /var/lib/jenkins/tools/ca-versions/ca-versions.py -c -r folsom
---
The following Cloud Archive packages for folsom
have been superseded newer versions in Ubuntu!

python-eventlet:
Ubuntu: 0.9.17-0ubuntu1.1
Cloud Archive staging: 0.9.17-0ubuntu1~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp

