Re: [Openstack] Management tools survey

2012-08-08 Thread Nick Lothian
Around 70. Some questions allowed people to skip answering them so the
numbers weren't the same for every question.

On Wed, Aug 8, 2012 at 10:58 AM, Matt Joyce matt.jo...@cloudscaling.com wrote:

 Do you have a final count on how many people responded?


 On Tue, Aug 7, 2012 at 4:09 AM, Nick Lothian nick.loth...@gmail.com wrote:

 For those that are interested, I've done a write-up of the results from
 this: http://fifthvertex.com/2012/08/07/cloud-tools-survey/

 Thanks to all who responded.

 Nick


 On Thu, Jul 12, 2012 at 1:28 PM, Nick Lothian nick.loth...@gmail.com wrote:

 Yes, I'll be happy to share the results.

 On Wed, Jul 11, 2012 at 6:33 PM, Nick Barcet nick.bar...@canonical.com wrote:

 On 07/11/2012 05:18 AM, Nick Lothian wrote:
  Hi,
 
  I'm trying to understand how people are doing management of servers
 and
  storage across multiple clouds (or perhaps it is only me that has this
  problem!).
 
  I've created a short survey I'd appreciate any responses on:
  http://www.surveymonkey.com/s/8PJCK9H
 
  Responses via email are fine too!

 Hello Nick,

 I am sure there are others, like me, interested in your findings in this
 area.  Will you share the results of the survey?

 Thanks,
 Nick


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp









Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-08 Thread Michael Still
On 08/08/12 14:33, Eric Windisch wrote:

 The solution here may be to use libguestfs, which seems to be a modern
 alternative to mtools, but to use it as a non-privileged user and to
 forego any illusions of mounting the filesystem anywhere via the kernel
 or FUSE.

Looking at the docs for the libguestfs python bindings, I think it
provides everything I need for config drive out of the box, so that's
certainly an option...
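For anyone curious, here is a rough sketch of what config-drive style file injection might look like through the libguestfs Python bindings. This is untested and the helper name is made up; the guestfs calls follow the libguestfs API docs. The point is that the image is only ever opened inside the libguestfs appliance VM, never mounted on the host.

```python
# Hypothetical helper (untested sketch): inject a file into a disk
# image via the libguestfs Python bindings.
def inject_file(image_path, guest_path, content):
    import guestfs  # provided by the python-guestfs package
    g = guestfs.GuestFS()
    g.add_drive_opts(image_path, readonly=0)  # attach the disk image
    g.launch()                                # boot the appliance VM
    roots = g.inspect_os()                    # locate the guest root fs
    g.mount(roots[0], "/")                    # mounted inside the appliance
    g.write(guest_path, content)              # content travels over RPC
    g.shutdown()
    g.close()
```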

Mikal




[Openstack] Help with meta-data

2012-08-08 Thread Simon Walter


Hi all,

I've completed the excruciating Launchpad process of subscribing to a 
mailing list to ask for your help with having my instances access their 
meta-data.


I'm new to OpenStack. So please forgive my n00bness.

I installed OpenStack on Ubuntu 12.04 by following stackgeek's 10 minute 
method and using their scripts: 
http://stackgeek.com/guides/gettingstarted.html


That got me quite far. I had to fix some of the networking setup. Now I 
can launch instances and ping them.


However, they cannot access their meta-data:

Begin: Running /scripts/init-bottom ... done.
cloud-init start-local running: Wed, 08 Aug 2012 07:33:07 +. up 8.32 seconds

no instance data found in start-local

ci-info: lo: 1 127.0.0.1   255.0.0.0   .

ci-info: eth1  : 0 .   .   fa:16:3e:5a:f3:05

ci-info: eth0  : 1 192.168.1.205   255.255.255.0   fa:16:3e:23:d7:7c

ci-info: route-0: 0.0.0.0 192.168.1.1 0.0.0.0 eth0   UG

ci-info: route-1: 192.168.1.0 0.0.0.0 255.255.255.0   eth0   U

cloud-init start running: Wed, 08 Aug 2012 07:33:10 +. up 11.95 seconds

2012-08-08 07:33:54,243 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:33:57,242 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [5/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:01,246 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [10/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:04,246 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [13/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:07,246 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [16/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:10,246 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [19/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:13,246 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [22/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:16,246 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [25/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:21,250 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [30/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:24,250 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [33/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:29,254 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [38/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:35,258 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [44/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:41,261 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:47,266 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [56/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:53,269 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [62/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:34:59,274 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [68/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:35:06,278 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [75/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:35:13,282 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [82/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:35:20,285 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [89/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:35:27,289 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [96/120s]: url 
error [[Errno 113] No route to host]

2012-08-08 07:35:34,294 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [103/120s]: 
url error [[Errno 113] No route to host]

2012-08-08 07:35:42,297 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [111/120s]: 
url error [[Errno 113] No route to host]

2012-08-08 07:35:50,302 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: 
url error [[Errno 113] No route to host]

2012-08-08 07:35:55,308 - DataSourceEc2.py[CRITICAL]: giving up on md after 124 
seconds




Re: [Openstack] [Netstack] [Quantum] Using VirtIO Driver with KVM

2012-08-08 Thread Emilien Macchi
Thank you for the bug, Yaguang!

You've been faster than me ;)


Regards


On Wed, Aug 8, 2012 at 4:16 AM, heut2008 heut2...@gmail.com wrote:

 I have filed a bug: https://bugs.launchpad.net/nova/+bug/1034216
 and uploaded a patch: https://review.openstack.org/#/c/11008/

 2012/8/8 Dan Wendlandt d...@nicira.com

  Hi Emilien,

 I know of customers using virtio with Quantum, but I think they may have
 modified the template directly, as you mention below.

 This code in nova changed quite a bit from Essex to Folsom.  Looking at
 nova/virt/libvirt/vif.py, I noticed that whoever added the
 libvirt_use_virtio_for_bridges flag did not add it to the
 LibvirtOpenVswitchDriver class.  Based on the XML you show below, my guess
 would be that if you add the following lines prior to the return statement
 of the plug() method of that class, you would get the right XML:

 if FLAGS.libvirt_use_virtio_for_bridges:
     conf.model = "virtio"

 That said, I haven't tested it.
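Dan's two-line suggestion can be exercised standalone with mocked-up stand-ins for nova's objects. Everything below is hypothetical scaffolding; only the flag-check lines inside plug() come from the message above:

```python
# Mocked-up sketch: Flags and InterfaceConfig are stand-ins for nova's
# real FLAGS / vif config objects, used here only to show the flag check.
class Flags:
    libvirt_use_virtio_for_bridges = True

FLAGS = Flags()

class InterfaceConfig:
    model = None  # rendered as <model type='virtio'/> when set

def plug(conf):
    # ... the real LibvirtOpenVswitchDriver.plug() would set up the
    # OVS port here before returning the config ...
    if FLAGS.libvirt_use_virtio_for_bridges:
        conf.model = "virtio"
    return conf

conf = plug(InterfaceConfig())  # conf.model is now "virtio"
```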

 Can you file a bug on this in nova?  Thanks,

 Dan


 On Tue, Aug 7, 2012 at 2:57 PM, Emilien Macchi 
 emilien.mac...@stackops.com wrote:

 Hi Stackers,


 I'm talking with Sebastien Han
 (http://www.sebastien-han.fr/blog/2012/07/19/make-the-network-of-your-vms-fly-with-virtio-driver)
 about the VirtIO driver for network interfaces.

 We have two setups:

 1) Ubuntu 12.04 / Essex / KVM
network_manager=nova.network.manager.VlanManager
libvirt_use_virtio_for_bridges=true

 2) Ubuntu 12.04 / Essex / KVM
network_manager=nova.network.quantum.manager.QuantumManager
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
libvirt_use_virtio_for_bridges=true

 What we have seen:

 - VMs booted from the first setup use the VirtIO driver for networking
 (Gigabit ready). Here is the network part of libvirt.xml:

 <interface type='bridge'>
 (...)
 <model type='virtio'/>
 (...)
 </interface>

 - VMs booted from the second setup (with Quantum) don't use the VirtIO
 driver (no Gigabit). Here is the network part of libvirt.xml:

 <interface type='ethernet'>
 <target dev='tapx' />
 <mac address='' />
 <script path='' />
 </interface>

 The only way I found to use the VirtIO driver with Quantum is to modify
 the /usr/share/pyshared/nova/virt/libvirt.xml.template file and add:

 <model type='virtio'/>
 (after line 125)

 So you end up with something like this:
 <interface type='ethernet'>
 <target dev='${nic.name}' />
 <mac address='${nic.mac_address}' />
 <script path='${nic.script}' />
 <model type='virtio'/>
 </interface>

 And restart LibVirt service :
 service libvirt-bin restart


 What do you think? Should we modify the template to get Gigabit
 capacity, or is there something wrong in my configuration?


 Best regards

 --
 Emilien Macchi
 System Engineer
 www.stackops.com | emilien.mac...@stackops.com | skype:emilien.macchi


[Openstack] adding security groups to running virtual machines

2012-08-08 Thread Wolfgang Hennerbichler

hi,

is it me, or is it OpenStack that can't modify security groups for running 
virtual machines?

nova help | grep sec
doesn't give me a clue.

thanks for a hint,
Wolfgang

--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbich...@risc-software.at
http://www.risc-software.at



Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-08 Thread Pádraig Brady
On 08/08/2012 02:35 AM, Michael Still wrote:
 On 08/08/12 11:08, Pádraig Brady wrote:
 
 If supporting either of the above cases, it would be great to
 reuse the existing image loopback mounting code:

 virt.disk.setup_container(image_file)
 virt.disk.inject_file()
 other tweaks
 virt.disk.destroy_container(image_file)
 
 This code doesn't seem to support _reading_ from the container though.
 The current process (if you specify a glance image) is:
 
 - fetch image from glance
 - mount it
 - inject the data into it
 - _copy_ the entire directory structure from the mounted image into the
 config disk image
 
 It's that final step that I think is hard with the containers code,
 unless I am missing something.


 What's the security vulnerability here? It's writing to something which
 might be a symlink to somewhere special, right?

That's one vector.
Even mounting the image is a potential vector.
Anyway these issues should be kept within virt.disk.api
(which can use libguestfs as it is).

 Would it be better for example to mount the image from glance, copy its
 contents to the config disk image (skipping symlinks), and then umount
 it? The data could then be written to the config disk instead of to the
 image from glance. That would mean if there was a symlink pointing
 somewhere special in the glance image it couldn't be exploited.

That would help but, as mentioned above, the loop mount itself
can be dangerous. So just using disk.setup_container()
as suggested will help, and at least avoids reimplementing the
loopback mounting code.

Keeping symlinks could be a useful feature BTW.
Perhaps {cp,tar,rsync} --one-file-system could be
leveraged to merge trees in a more secure way.
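The --one-file-system idea can also be sketched in Python (a hypothetical helper, not nova code): the walk compares st_dev so it never descends into a mount point planted inside the tree, and symlinks are recreated rather than followed.

```python
import os
import shutil

def merge_tree(src, dst):
    """Copy src into dst without crossing filesystem boundaries and
    without following symlinks (links are recreated, never dereferenced)."""
    src_dev = os.lstat(src).st_dev
    for root, dirs, files in os.walk(src):
        # Prune subdirectories on a different filesystem (mount points
        # planted inside the tree) and symlinked directories.
        dirs[:] = [d for d in dirs
                   if not os.path.islink(os.path.join(root, d))
                   and os.lstat(os.path.join(root, d)).st_dev == src_dev]
        rel = os.path.relpath(root, src)
        target = dst if rel == "." else os.path.join(dst, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            s, d = os.path.join(root, name), os.path.join(target, name)
            if os.path.islink(s):
                os.symlink(os.readlink(s), d)  # keep symlinks as symlinks
            else:
                shutil.copy2(s, d)
```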

cheers,
Pádraig.



Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-08 Thread Pádraig Brady
On 08/08/2012 05:37 AM, Eric Windisch wrote:
 
 Also notice that libguestfs is supported as an injection mechanism
 which mounts images in a separate VM, with one of the big advantages
 of that being better security.
 
 
 Are you sure about this? Reading the driver source, it appears to be using 
 'guestmount' as glue between libguestfs and FUSE. Worse, this is done as 
 root.  This mounts the filesystem in userspace on the host, but the userspace 
 process runs as root.  Because the filesystem is mounted, all reads and 
 writes must also happen as root, leading to potential escalation scenarios.
 
 It does seem that libguestfs could be used securely, but it isn't.

The image is handled in a separate VM.
guestmount sets up communication with this VM.

cheers,
Pádraig.



Re: [Openstack] KVM live block migration: stability, future, docs

2012-08-08 Thread Daniel P. Berrange
On Wed, Aug 08, 2012 at 09:50:20AM +0800, Huang Zhiteng wrote:
  But to the contrary. I tested live-migrate (without block migrate)
  last night using a guest with 8GB RAM (almost fully committed) and
  lost any access/contact with the guest for over 4 minutes - it was
  paused for the duration. Not something I'd want to do to a user's
  web-server on a regular basis...
 
 Four minutes of pause (downtime)?  That's way too long.  Even if there
 were a crazy memory-intensive workload inside the VM being migrated, the
 worst case is that KVM has to pause the VM and transmit all 8 GB of
 memory (all memory dirty, which is very rare).  If you have a 1GbE link
 between the two hosts, that worst-case pause period (downtime) is less
 than 2 minutes.  My previous experience: the downtime for migrating one
 idle (almost no memory access) 8GB VM via 1GbE is less than 1 second;
 the downtime for migrating an 8 GB VM whose pages get dirtied really
 quickly is about 60 seconds.  FYI.
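As a sanity check on the figures above (assuming an ideal 1GbE link with no protocol overhead):

```python
# Worst case quoted above: pause the VM and push all 8 GB of dirty
# guest memory across an ideal 1GbE link.
ram_bits = 8 * 1024**3 * 8   # 8 GiB of guest RAM, in bits
link_bps = 10**9             # 1GbE, ideal throughput
pause_seconds = ram_bits / link_bps
print(round(pause_seconds, 1))  # -> 68.7, i.e. well under 2 minutes
```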

KVM has a tunable setting for the maximum allowable live migration
downtime, which IIRC defaults to something very small like 250ms.

If the migration can't be completed within this downtime limit,
KVM will simply never complete migration. Given that Nova does
not tune this downtime setting, I don't see how you'd see 4 mins
downtime unless it was not truly live migration, or there was
something else broken (e.g. the network bridge device had a delay
inserted by the STP protocol which made the VM /appear/ to be
unresponsive on the network even though it was running fine).

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [Openstack] KVM live block migration: stability, future, docs

2012-08-08 Thread Daniel P. Berrange
On Tue, Aug 07, 2012 at 04:13:22PM -0400, Jay Pipes wrote:
 On 08/07/2012 08:57 AM, Blair Bethwaite wrote:
  I also feel a little concern about this statement:
 
   It don't work so well, it complicates migration code, and we are building
  a replacement that works.
 
 
  I have to go further with my tests, maybe we could share some ideas, use
  case etc...
  
  I think it may be worth asking about this on the KVM lists, unless
  anyone here has further insights...?
  
  I grabbed the KVM 1.0 source from Ubuntu Precise and vanilla KVM 1.1.1
  from Sourceforge, block migration appears to remain in place despite
  those (sparse) comments from the KVM meeting minutes (though I am
  naive to the source layout and project structure, so could have easily
  missed something). In any case, it seems unlikely Precise would see a
  forced update to the 1.1.x series.
 
 cc'd Daniel Berrange, who seems to be keyed in on upstream KVM/Qemu
 activity. Perhaps Daniel could shed some light.

Block migration is a part of KVM that none of the upstream developers
really like; it is not entirely reliable, and most distros typically do not
want to support it due to its poor design (e.g. not supported in RHEL).

It is quite likely that it will be removed in favour of an alternative
implementation. What that alternative impl will be, and when it will
arrive, I can't say right now. A lot of the work (possibly all) will
probably be pushed up into libvirt, or even the higher level mgmt apps
using libvirt. It could well involve the mgmt app having to setup an
NBD or iSCSI server on the source host, and then launching QEMU on the
destination host configured to stream the data across from the NBD/iSCSI
server in parallel with the migration stream. But this is all just talk
for now, no firm decisions have been made, beyond a general desire to
kill the current block migration code.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [Openstack] [OpenStack][Nova] Question regarding multiple network interfaces

2012-08-08 Thread Leander Bessa Beernaert
So I have set up a small proof of concept: one controller node and two
compute nodes. Since the switches do not support VLANs, I'm using FlatDHCP.
Each machine has two network interfaces: eth0 is connected to the public
switch and eth1 to the private switch. The private switch has no access to
the internet.

Both compute nodes have /proc/sys/net/ipv4/ip_forward set to 1.
However, I still can't make an instance connect to the outside.

Any thoughts?
On Tue, Aug 7, 2012 at 11:32 PM, Sébastien Han han.sebast...@gmail.com wrote:

 It's part of the operating system

 # echo 1 > /proc/sys/net/ipv4/ip_forward

 Then edit your /etc/sysctl.conf and uncomment net.ipv4.ip_forward=1 to
 make this persistent after reboot.

 Finally run -- # sysctl -p

 That's all, cheers!


 On Tue, Aug 7, 2012 at 11:50 PM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  Is there a flag in the nova.conf file or is this something that needs to
 be
  done on the operating system?
 
 
  On Tue, Aug 7, 2012 at 8:26 PM, Sébastien Han han.sebast...@gmail.com
  wrote:
 
  Hi,
 
  If eth0 is connected to the public switch and if eth1 is connected to
  the private switch you can enable the ipv4 forwarding on the compute
  node. Thanks to this the VMs will have access to the outside world and
  the packet will be routed from eth1 to eth0 :).
 
  Cheers!
 
  On Tue, Aug 7, 2012 at 5:18 PM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
   Hello,
  
   I have a question regarding the use of two network interfaces.
 According
   to
   the official documentation, one of the interfaces is used for public
   access
   and the other for internal access (inter-vm communication). What i'd
   like to
   know is how does an instance connect to the outside world (internet
   access)?
   Is it done through the switch connected to the private interface or
 the
   public interface?
  
   --
   Cumprimentos / Regards,
   Leander
  
  
 
 
 
 
  --
  Cumprimentos / Regards,
  Leander




-- 
Cumprimentos / Regards,
Leander


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-08 Thread Daniel P. Berrange
On Wed, Aug 08, 2012 at 12:33:57AM -0400, Eric Windisch wrote:
 
 
  What's the security vulnerability here? It's writing to something which
  might be a symlink to somewhere special, right?
 
 
 Mounting filesystems tends to be a source of vulnerabilities in and of
 itself. There are userspace tools as an alternative, but a standard OS
 mount is clearly not secure. While libguestfs is such a userspace
 alternative, and guestmount is in some ways safer than a standard mount, it
 is not used by Nova in a way that has any clear advantage to a standard
 mount as it runs as root.
 
 As this CVE indicates, injecting data into a mounted filesystem has its own
 problems, whether or not that filesystem is mounted directly in-kernel or
 via FUSE. There are also solutions here, some very complex, few if any are
 foolproof.
 
 The solution here may be to use libguestfs, which seems to be a modern
 alternative to mtools, but to use it as a non-privileged user and to forego
 any illusions of mounting the filesystem anywhere via the kernel or FUSE.

Yes, ideally Nova would use the libguestfs API directly to inject files
and stop using guestmount, at which point things are strongly confined,
since everything takes place inside a VM which can only see the guest FS.
All files from the host are uploaded into the guest FS using an RPC
mechanism.  Even using the libguestfs API though, applications need
to be somewhat careful about what they do. The libguestfs manpage
highlights important security considerations:

  http://libguestfs.org/guestfs.3.html#security

Also note that current work is being done to make libguestfs use
libvirt to launch its appliance VMs, at which point libguestfs VMs
will be strongly confined by sVirt (SELinux/AppArmour), and also
able to run as a separate user ID.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[Openstack] Gold Member Election Update

2012-08-08 Thread John_Igoe
Recently the gold member formation committee met to discuss the election of 
directors for the gold members and agreed on the mechanics and timing of our 
election. We will be holding our election using cumulative voting the week 
before the individual member elections. There was some earlier discussion on 
the list about the ordering of director selection, and this will make it clear 
who will be the gold directors by the end of the week before the individual 
member elections open.

Because we have a small number of members, the election will be administered by 
the Foundation's legal counsel at DLA Piper. There are a total of 11 gold 
members at this point in time and 8 board members will be elected. They will 
tally our votes and publish the results. We expect to complete the voting on a 
single day during the week of August 13th, although the exact date has not yet 
been set.

Please let us know if you have any questions regarding this effort.

Openstack Foundation Gold Member Formation Committee:

Randy Bias
Joshua McKenty
Adam Waters
Dale David
Boris Renski
Cliff Young
John Igoe
Lew Tucker
Mark Collier
Simon Anderson
Sean Roberts
Alan Clark
Patrick Fu
Winston Damarillo



Re: [Openstack] KVM live block migration: stability, future, docs

2012-08-08 Thread Kiall Mac Innes
From memory (a fuzzy memory at that!), Nova will fall back to block
migration if it believes shared storage is unavailable.

This would explain the delay, but someone who's read the code recently can
confirm...

Thanks,
Kiall
On Aug 8, 2012 11:08 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Wed, Aug 08, 2012 at 09:50:20AM +0800, Huang Zhiteng wrote:
   But to the contrary. I tested live-migrate (without block migrate)
   last night using a guest with 8GB RAM (almost fully committed) and
   lost any access/contact with the guest for over 4 minutes - it was
   paused for the duration. Not something I'd want to do to a user's
   web-server on a regular basis...
 
  Four minutes of pause (downtime)?  That's way too long.  Even if there
  were a crazy memory-intensive workload inside the VM being migrated, the
  worst case is that KVM has to pause the VM and transmit all 8 GB of
  memory (all memory dirty, which is very rare).  If you have a 1GbE link
  between the two hosts, that worst-case pause period (downtime) is less
  than 2 minutes.  My previous experience: the downtime for migrating one
  idle (almost no memory access) 8GB VM via 1GbE is less than 1 second;
  the downtime for migrating an 8 GB VM whose pages get dirtied really
  quickly is about 60 seconds.  FYI.

 KVM has a tunable setting for the maximum allowable live migration
 downtime, which IIRC defaults to something very small like 250ms.

 If the migration can't be completed within this downtime limit,
 KVM will simply never complete migration. Given that Nova does
 not tune this downtime setting, I don't see how you'd see 4 mins
 downtime unless it was not truly live migration, or there was
 something else broken (e.g. the network bridge device had a delay
 inserted by the STP protocol which made the VM /appear/ to be
 unresponsive on the network even though it was running fine).

 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/:|
 |: http://libvirt.org  -o- http://virt-manager.org:|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/:|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc:|



[Openstack] swift installation in multi storage nodes

2012-08-08 Thread sarath zacharia
Hi,

We installed Swift 1.6.1 according to the OpenStack documentation. I
can use the storage system with a single node, but when we give the
node's local IP address it does not work; it only works on the
loopback address. The dashboard shows the error given below.

  timeout at /nova/containers/

Request Method: GET
Request URL: http://192.168.100.1/nova/containers/

Django Version: 1.3.1
Python Version: 2.7.3
Installed Applications:
['openstack_dashboard',
 'django.contrib.sessions',
 'django.contrib.messages',
 'django.contrib.staticfiles',
 'django_nose',
 'horizon',
 'horizon.dashboards.nova',
 'horizon.dashboards.syspanel',
 'horizon.dashboards.settings']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
 'django.middleware.csrf.CsrfViewMiddleware',
 'django.contrib.sessions.middleware.SessionMiddleware',
 'django.contrib.messages.middleware.MessageMiddleware',
 'openstack_dashboard.middleware.DashboardLogUnhandledExceptionsMiddleware',
 'horizon.middleware.HorizonMiddleware',
 'django.middleware.doc.XViewMiddleware',
 'django.middleware.locale.LocaleMiddleware')


Traceback:
File /usr/lib/python2.7/dist-packages/django/core/handlers/base.py in
get_response
  111. response = callback(request, *callback_args,
**callback_kwargs)
File /usr/lib/python2.7/dist-packages/horizon/decorators.py in dec
  40. return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/horizon/decorators.py in dec
  55. return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/horizon/decorators.py in dec
  40. return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/horizon/decorators.py in dec
  129. return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/django/views/generic/base.py in view
  47. return self.dispatch(request, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/django/views/generic/base.py in
dispatch
  68. return handler(request, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/horizon/tables/views.py in get
  105. handled = self.construct_tables()
File /usr/lib/python2.7/dist-packages/horizon/tables/views.py in
construct_tables
  96. handled = self.handle_table(table)
File /usr/lib/python2.7/dist-packages/horizon/tables/views.py in
handle_table
  68. data = self._get_data_dict()
File /usr/lib/python2.7/dist-packages/horizon/tables/views.py in
_get_data_dict
  132. self._data = {self.table_class._meta.name:
self.get_data()}
File
/usr/lib/python2.7/dist-packages/horizon/dashboards/nova/containers/views.py
in get_data
  58. exceptions.handle(self.request, msg)
File
/usr/lib/python2.7/dist-packages/horizon/dashboards/nova/containers/views.py
in get_data
  55.
marker=marker)
File /usr/lib/python2.7/dist-packages/horizon/api/swift.py in
swift_get_containers
  73.marker=marker)
File /usr/lib/python2.7/dist-packages/cloudfiles/connection.py in
get_all_containers
  304. return ContainerResults(self,
self.list_containers_info(**parms))
File /usr/lib/python2.7/dist-packages/cloudfiles/connection.py in
list_containers_info
  384. response = self.make_request('GET', [''], parms=parms)
File /usr/lib/python2.7/dist-packages/cloudfiles/connection.py in
make_request
  192. response = retry_request()
File /usr/lib/python2.7/dist-packages/cloudfiles/connection.py in
retry_request
  186. return self.connection.getresponse()
File /usr/lib/python2.7/httplib.py in getresponse
  1030. response.begin()
File /usr/lib/python2.7/httplib.py in begin
  407. version, status, reason = self._read_status()
File /usr/lib/python2.7/httplib.py in _read_status
  365. line = self.fp.readline()
File /usr/lib/python2.7/socket.py in readline
  430. data = recv(1)

Exception Type: timeout at /nova/containers/
Exception Value: timed out
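For reference, the "timed out" at the bottom of that traceback means the client opened a TCP connection to the endpoint but never received an HTTP response before the socket timeout fired. The failure mode can be reproduced in isolation with a sketch like the following (shown with Python 3's http.client; the httplib calls in the traceback are the Python 2 equivalent — the endpoint here is a stand-in, not your actual proxy):

```python
import http.client
import socket
import threading

def start_silent_server():
    """Accept TCP connections but never answer the HTTP request,
    mimicking a proxy that is reachable but not responding."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        conn.recv(65536)            # read the request...
        threading.Event().wait(3)   # ...but never send a response
        conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

port = start_silent_server()
conn = http.client.HTTPConnection("127.0.0.1", port, timeout=1)
conn.request("GET", "/nova/containers/")
try:
    conn.getresponse()              # blocks in recv() until the timeout fires
except socket.timeout:
    print("timed out")              # the same "timed out" the dashboard reports
finally:
    conn.close()
```

A common cause of this symptom is a service bound only to 127.0.0.1, or the proxy blocking on a backend (such as memcached) it cannot reach via the non-loopback address — worth checking the bind settings and backend reachability, though that is only a guess without the proxy logs.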

please help .

-- 
with Thanks and Regards

Sarath Zacharia
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] adding security groups to running virtual machines

2012-08-08 Thread Rafael Durán Castañeda

On 08/08/2012 11:06 AM, Wolfgang Hennerbichler wrote:

hi,

is it me or is it openstack who can't modify security groups for 
running virtual machines?

nova help | grep sec
doesn't give me a clue.

thanks for a hint,
Wolfgang


I'm getting this:

 nova help | grep sec
secgroup-add-group-rule
Add a source group rule to a security group.
secgroup-add-rule   Add a rule to a security group.
secgroup-create Create a security group.
secgroup-delete Delete a security group.
secgroup-delete-group-rule
Delete a source group rule from a security group.
secgroup-delete-rule
Delete a rule from a security group.
secgroup-list   List security groups for the current tenant.
secgroup-list-rules
List rules for a security group.
dpkg -l | grep novaclient
ii  python-novaclient  2012.2~f1~20120410.558-0ubuntu0~oneiric26  client library for OpenStack Compute API


What's your nova version?





Re: [Openstack] adding security groups to running virtual machines

2012-08-08 Thread Wolfgang Hennerbichler

it's not me :)
http://forums.openstack.org/viewtopic.php?f=10&t=719

On 08/08/2012 11:06 AM, Wolfgang Hennerbichler wrote:

hi,

is it me or is it openstack who can't modify security groups for running
virtual machines?
nova help | grep sec
doesn't give me a clue.

thanks for a hint,
Wolfgang




--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbich...@risc-software.at
http://www.risc-software.at



Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-08 Thread Thierry Carrez
Eric Windisch wrote:
 Unfortunately, this won't be the end of vulnerabilities coming from this 
 feature.

Indeed. I would like to see evil file injection die, and be replaced by
cloud-init / config-drive. That's the safest way.

If we can't totally get rid of file injection, I'd like it to be a clear
second-class citizen that you should enable only if you absolutely need it.

The first step towards that shinier future is to have a very solid and
featureful config-drive implementation, which I hope Michael can
complete in time for Folsom. Then maybe we can convert more people to a
view of the world where direct file injection is not useful and should
only be enabled as a last resort.

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-08 Thread Daniel P. Berrange
On Wed, Aug 08, 2012 at 02:17:30PM +0200, Thierry Carrez wrote:
 Eric Windisch wrote:
  Unfortunately, this won't be the end of vulnerabilities coming from this 
  feature.
 
 Indeed. I would like to see evil file injection die, and be replaced by
 cloud-init / config-drive. That's the safest way.
 
 If we can't totally get rid of file injection, I'd like it to be a clear
 second-class citizen that you should enable only if you absolutely need it.

If we used the libguestfs APIs instead of guestmount program, then the
security characteristics of file injection would be pretty much equivalent
to config drive IMHO. In both cases you would be primarily relying on
the containment of the QEMU process for security.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[Openstack] [Quantum] Removing quantum-rootwrap

2012-08-08 Thread Thierry Carrez
Hi everyone,

Quantum currently contains bin/quantum-rootwrap, a copy of nova-rootwrap
supposed to control its privilege escalation to run commands as root.

However quantum-rootwrap is currently non-functional, missing a lot of
filter definitions that are necessary for it to work correctly. Quantum
is generally run with root_helper=sudo and a wildcard sudoers file. That
means Quantum is not ready to deprecate in Folsom (and remove in
Grizzly) its ability to run with root_helper=sudo, like Nova and Cinder do.
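For readers unfamiliar with the mechanism under discussion: rootwrap sits between the unprivileged service and privileged execution, and only runs a command as root if the command line matches a filter on a whitelist. A rough sketch of the core idea (illustrative only — not the actual nova-rootwrap code or its filter format):

```python
# Simplified sketch of the rootwrap concept: commands requested by the
# unprivileged service are only escalated if they match a whitelist filter.

class CommandFilter:
    def __init__(self, exe_path, *fixed_args):
        self.exe_path = exe_path      # absolute path the command must match
        self.fixed_args = fixed_args  # leading arguments that must match exactly

    def matches(self, cmd):
        if not cmd or cmd[0] != self.exe_path:
            return False
        tail = cmd[1:1 + len(self.fixed_args)]
        return tuple(tail) == self.fixed_args

def allowed(filters, cmd):
    """Return True if any whitelist filter accepts the command line."""
    return any(f.matches(cmd) for f in filters)

# Hypothetical whitelist entries for a networking agent:
filters = [
    CommandFilter("/sbin/ip", "link"),
    CommandFilter("/usr/bin/ovs-vsctl"),
]

print(allowed(filters, ["/sbin/ip", "link", "set", "tap0", "up"]))  # True
print(allowed(filters, ["/bin/rm", "-rf", "/"]))                    # False
```

The problem described above is exactly a stale whitelist: if filter definitions are missing for commands the agents actually run, matching fails and those commands are refused, which is why a rootwrap with incomplete filters is non-functional in practice.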

I discussed this with Dan, and it appears that the sanest approach would
be to remove quantum-rootwrap from Quantum and only support
root_helper=sudo (the only option that works). I suspect nobody is
actually using quantum-rootwrap right now anyway, given how broken it
seems to be. For the first official release of Quantum as an OpenStack
core project, I would prefer not to ship half-working options :)

Quantum would then wait for rootwrap to move to openstack-common (should
be done in Grizzly) to reconsider using it.

Let me know if any of you see issues with that approach.
(posted to the general list to get the widest feedback).

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] [Quantum] Removing quantum-rootwrap

2012-08-08 Thread Chuck Short
Hi,

How much work would be needed to get this added to Quantum?

Thanks
chuck


On Wed, 08 Aug 2012 15:31:59 +0200
Thierry Carrez thie...@openstack.org wrote:

 Hi everyone,
 
 Quantum currently contains bin/quantum-rootwrap, a copy of
 nova-rootwrap supposed to control its privilege escalation to run
 commands as root.
 
 However quantum-rootwrap is currently non-functional, missing a lot of
 filter definitions that are necessary for it to work correctly.
 Quantum is generally run with root_helper=sudo and a wildcard sudoers
 file. That means Quantum is not ready to deprecate in Folsom (and
 remove in Grizzly) its ability to run with root_helper=sudo, like
 Nova and Cinder do.
 
 I discussed this with Dan, and it appears that the sanest approach
 would be to remove quantum-rootwrap from Quantum and only support
 root_helper=sudo (the only option that works). I suspect nobody is
 actually using quantum-rootwrap right now anyway, given how broken it
 seems to be. For the first official release of Quantum as an OpenStack
 core project, I would prefer not to ship half-working options :)
 
 Quantum would then wait for rootwrap to move to openstack-common
 (should be done in Grizzly) to reconsider using it.
 
 Let me know if any of you see issues with that approach.
 (posted to the general list to get the widest feedback).
 




Re: [Openstack] [Quantum] Removing quantum-rootwrap

2012-08-08 Thread Robert Kukura
On 08/08/2012 09:31 AM, Thierry Carrez wrote:
 Hi everyone,
 
 Quantum currently contains bin/quantum-rootwrap, a copy of nova-rootwrap
 supposed to control its privilege escalation to run commands as root.
 
 However quantum-rootwrap is currently non-functional, missing a lot of
 filter definitions that are necessary for it to work correctly. 

Is missing definitions the only issue? Those may need updating for F-3,
but this can certainly be done.

 Quantum
 is generally run with root_helper=sudo and a wildcard sudoers file.

What is your basis for this statement? The packaging of Essex Quantum
for Fedora and RHEL/EPEL do configure root_helper to use
quantum-rootwrap. If another distribution doesn't do this, I would
consider that a distribution bug, not an upstream problem.

 That
 means Quantum is not ready to deprecate in Folsom (and remove in
 Grizzly) its ability to run with root_helper=sudo, like Nova and Cinder do.

What's involved in deprecating this ability in Folsom? Is it that
difficult? If Nova and Cinder are doing it, why shouldn't Quantum?

 
 I discussed this with Dan, and it appears that the sanest approach would
 be to remove quantum-rootwrap from Quantum and only support
 root_helper=sudo (the only option that works). I suspect nobody is
 actually using quantum-rootwrap right now anyway, given how broken it
 seems to be. For the first official release of Quantum as an OpenStack
 core project, I would prefer not to ship half-working options :)

The quantum-rootwrap configuration in Essex is being used by anyone who
uses the official Fedora or EPEL RPMs. It may not provide fine-grained
validation of command parameters, but I haven't heard complaints that
it's broken. Isn't it better than nothing?


 
 Quantum would then wait for rootwrap to move to openstack-common (should
 be done in Grizzly) to reconsider using it.
 
 Let me know if any of you see issues with that approach.
 (posted to the general list to get the widest feedback).
 

I do have an issue with Folsom dropping a capability that is being used
in Essex. If the existing rootwrap really does more harm than good, this
might be justified, but I don't think you can argue nobody has used it.

-Bob





Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-08 Thread Scott Moser
On Wed, 8 Aug 2012, Michael Still wrote:

 On 08/08/12 07:38, Eric Windisch wrote:
 
  Pádraig Brady from Red Hat discovered that the fix implemented for
  CVE-2012-3361 (OSSA-2012-008) was not covering all attack
  scenarios. By crafting a malicious image with root-readable-only
  symlinks and requesting a server based on it, an authenticated user
  could still corrupt arbitrary files (all setups affected) or inject
  arbitrary files (Essex and later setups with OpenStack API enabled
  and a libvirt-based hypervisor) on the host filesystem, potentially
  resulting in full compromise of that compute node.
 
 
  Unfortunately, this won't be the end of vulnerabilities coming from
  this feature.
 
  Even if all the edge-cases around safely writing files are handled
  (and I'm not sure they are), simply mounting a filesystem is a very
  dangerous operation for the host.
 
  The idea had been suggested early-on to supporting ISO9660
  filesystems created with mkisofs, which can be created in userspace,
  are read-only, and fairly safe to produce, even as root on compute
  host.

 I am in the process of re-writing the config drive code as we speak. The
 re-write supports (and defaults to) providing the config drive as an
 iso9660 image.

 There are two places that mounting occurs with the new code:

  - if the user wants a vfat config drive, as I couldn't find a way to
 create a vfat filesystem from a directory using userspace code. This
 should be relatively safe though because the filesystem which is
 mounted is created by the code just before the mount. [1]

For sufficiently archaic filesystems, there are user-space tools that
understand how to manipulate them.  vfat has reached such a state.
mtools can read and write vfat filesystems without any help from the
kernel.

Here's an example of how to use it, as I've been confused by the
documentation before.

 # get a directory to copy
 $ wget 
https://launchpad.net/python-novaclient/trunk/2.6.10/+download/python-novaclient-2.6.10.tar.gz
 $ tar -xvzf python-novaclient-2.6.10.tar.gz
 $ dir=python-novaclient-2.6.10

 # create a 64M file and put vfat filesystem on it
 $ img=my-vfat.img
 $ rm -f $img;
 $ truncate --size 64M $img;
 $ mkfs.vfat -n MY-LABEL $img

 # copy contents to image and then off that to a new dir
 $ mcopy -ospmi $img $dir/* ::
 $ mcopy -ospmi $img :: $dir.new
 $ du -hs $dir.new $dir
 736K  python-novaclient-2.6.10.new
 736K  python-novaclient-2.6.10
 $ diff -Naur $dir $dir.new && echo the same
 the same

Outside of the dependency on mtools, this seems like a much better
solution for limited vfat filesystem writing than libguestfs or kernel
mount.

I carried this example on to actually boot a KVM guest with my-vfat.img
attached and mounted it, and it looked good there also.  The only issue I
saw was the timestamp on the directory
python-novaclient-2.6.10.new.

  - if the user specifies an image from glance for the injection to occur
 to. This is almost certainly functionality that you're not going to like
 for the reasons stated above. It's there because v1 did it, and I'm

Either you or I are reading the existing nova/virt/libvirt/driver.py
'config_drive_id' code incorrectly.  That code and
nova/compute/api.py:_create_instance seem to me to indicate that if
config_drive_id is true, then config_drive is not true.

the mount and add path is only taken if config_drive is true.


 willing to remove it if there is a consensus that's the right thing to
 do. However, file IO on this image mount is done as the nova user, not
 root, so that's a tiny bit safer (I hope).

 https://review.openstack.org/#/c/10934/

As I've said in the review, I think this function should not exist in
config-drive-v2.

I do think that attach volume by volume id at boot function should be
added to nova (it may already be), but it has no relevance to
config-drive.

Scott


Re: [Openstack] [Netstack] [openstack-dev] [Quantum] Multi-host implementation

2012-08-08 Thread MURAOKA Yusuke
Hi,

I've updated the blueprint to correspond with the current design spec:
 https://blueprints.launchpad.net/quantum/+spec/quantum-multihost-dhcp


I'd like to hear about any cases where this fails or is improper or unsound.
Anyway, comments and discussions are welcome.

Thanks.

--  
MURAOKA Yusuke

Mail: yus...@jbking.org


Date: Tuesday, August 7, 2012, 2:47, From: Nachi Ueno:

 Hi Dan
  
 Thank you for pointing this.
  
 Yusuke updated design spec.
 https://blueprints.launchpad.net/quantum/+spec/quantum-multihost-dhcp
  
 2012/8/6 Dan Wendlandt d...@nicira.com (mailto:d...@nicira.com):
  Hi Nachi,
   
  I've reviewed the code and added comments. I'd like to see at least a basic
  spec describing the proposed approach (need only be a couple paragraphs,
  perhaps with a diagram) linked to the blueprint so we can have a design
  discussion around it. Thanks,
   
  Dan
   
   
  On Fri, Aug 3, 2012 at 1:03 PM, Nachi Ueno na...@nttmcl.com 
  (mailto:na...@nttmcl.com) wrote:

   Hi folks

   Sorry.
   I added openstack-...@lists.openstack.org 
   (mailto:openstack-...@lists.openstack.org) in this discussion.

   2012/8/3 Nati Ueno nati.u...@gmail.com (mailto:nati.u...@gmail.com):
Hi folks
 
 Gary
Thank you for your comment. I wanna discuss your point on the mailing
list.
 
Yusuke pushed Multi-host implementation for review.
https://review.openstack.org/#/c/10766/2
This patch changes only quantum-dhcp-agent side.
 
Gary's point is we should have host attribute on the port for
scheduling.
I agree with Gary.
 
 In Nova, a VM has an availability_zone used for scheduling.
 So instead of using host properties,
 how about using availability_zone for the port?
 
Format of availability_zone is something like this
available_zone=zone_name:host.
 
We can also add availability_zone attribute for the network as a
default value of port.
We can write this until next Monday.
 However I'm not sure whether the quantum community will accept this, so I'm
asking here.
 
If there are no objections, we will push zone version for review.
Thanks
Nachi
 



   
   
   
   
   
   
  --
  ~~~
  Dan Wendlandt
  Nicira, Inc: www.nicira.com (http://www.nicira.com)
  twitter: danwendlandt
  ~~~
   
   
  
  
  






Re: [Openstack] [Quantum] Removing quantum-rootwrap

2012-08-08 Thread Thierry Carrez
Robert Kukura wrote:
 On 08/08/2012 09:31 AM, Thierry Carrez wrote:
 Quantum currently contains bin/quantum-rootwrap, a copy of nova-rootwrap
 supposed to control its privilege escalation to run commands as root.

 However quantum-rootwrap is currently non-functional, missing a lot of
 filter definitions that are necessary for it to work correctly. 
 
 Is missing definitions the only issue? Those may need updating for F-3,
 but this can certainly be done.

Those are the only issues I spotted. Making Quantum compatible with the
latest version of rootwrap as shipped in Nova/Cinder, though, is a lot
more work.

 Quantum
 is generally run with root_helper=sudo and a wildcard sudoers file.
 
 What is your basis for this statement? The packaging of Essex Quantum
 for Fedora and RHEL/EPEL do configure root_helper to use
 quantum-rootwrap. If another distribution doesn't do this, I would
 consider that a distribution bug, not an upstream problem.

Given that quantum-rootwrap is currently non-working, I suspected that
everyone running Quantum *on Folsom* was using sudo and not the
rootwrap. If most people do that, it probably means it's a bit early to
deprecate root_helper=sudo support in Folsom.

 That
 means Quantum is not ready to deprecate in Folsom (and remove in
 Grizzly) its ability to run with root_helper=sudo, like Nova and Cinder do.
 
 What's involved in deprecating this ability in Folsom? Is it that
 difficult? If Nova and Cinder are doing it, why shouldn't Quantum?

As a quick grep will show, there is much more adherence to root_helper
in Quantum than in Nova/Cinder, where it was used in a single place.
It's definitely doable, but I'd say a bit dangerous (and too late) 4
days before F3. I certainly won't have enough time for it...

 I do have an issue with Folsom dropping a capability that is being used
 in Essex. If the existing rootwrap really does more harm than good, this
 might be justified, but I don't think you can argue nobody has used it.

Fair point, it was definitely used in Essex.

We have three options at this point:

* Remove it (but is it acceptable to lose functionality compared to
Essex, even if Essex is not a core release for Quantum ?)

* Just fix it by adding missing filters (but then accept that
quantum-rootwrap doesn't behave like nova-rootwrap and cinder-rootwrap,
which is bad for consistency)

* Align quantum-rootwrap with nova-rootwrap and deprecate usage of
root_helper, by overhauling how root_helper is pervasively used
throughout Quantum code (lots of work, and introducing a lot of
disruption that late in the cycle)

Personally I think only the first two options are realistic. So this
boils down to losing functionality from Essex vs. hurting Folsom core
consistency.

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



[Openstack] [ceilometer] Metering meeting agenda for Thursday at 16:00 UTC (Aug 9th, 2012)

2012-08-08 Thread Nick Barcet
Hi,

The metering project team holds a meeting in #openstack-meeting,
Thursdays at 1600 UTC
http://www.timeanddate.com/worldclock/fixedtime.html?hour=16&min=0&sec=0.

Everyone is welcome.

Agenda:
http://wiki.openstack.org/Meetings/MeteringAgenda

 * Review last week's actions
   - jaypipes to create ceilometer cookbook
   - jd_ to publish results of PTL election on general ml sometime tomorrow
   - jtran to open a ticket for the DB access work
   - nijaba create a diagram of Ceilometer architecture

 * Discuss Doug's API change proposal

 * Discuss priority of maintaining Essex support and find contributor to
work on it if we are going to do it

 * Discuss integration with Heat

 * Open discussion

If you are not able to attend or have additional topic you would like to
cover, please update the agenda on the wiki.

Cheers,
--
Nick Barcet nick.bar...@canonical.com
aka: nijaba, nicolas









Re: [Openstack] adding security groups to running virtual machines

2012-08-08 Thread Dan Wendlandt
Hi Wolfgang,

Yes, currently Nova only allows associating security groups at boot, though
you can change the rules in a security group post boot.

Quantum (new openstack networking service) will be adding a more advanced
notion of security groups which allows changing groups of booted instances
and other improvements.  This had been targeted for Folsom-3, but looks
very unlikely to make it in time for Folsom.

The difference between Nova security groups and Quantum security groups is
similar to the difference between traditional Amazon security groups and
Amazon VPC security groups (see: http://aws.amazon.com/vpc/faqs/#S2)
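To make the distinction concrete, here is a toy model (illustrative only, not OpenStack code) of why editing a group's rules affects running instances while group *membership* is fixed at boot:

```python
# Toy model: instances reference security groups, so rule edits propagate
# to every member; what Nova does not expose is changing an instance's
# group membership after boot.

class SecurityGroup:
    def __init__(self, name):
        self.name = name
        self.rules = []                  # e.g. ("tcp", 22, "0.0.0.0/0")

    def add_rule(self, proto, port, cidr):
        self.rules.append((proto, port, cidr))

class Instance:
    def __init__(self, name, groups):
        self.name = name
        self.groups = list(groups)       # associated at boot, then fixed

    def effective_rules(self):
        return [r for g in self.groups for r in g.rules]

default = SecurityGroup("default")
vm = Instance("vm1", [default])

# Editing the rules of an already-associated group works today:
default.add_rule("tcp", 22, "0.0.0.0/0")
print(vm.effective_rules())              # [('tcp', 22, '0.0.0.0/0')]

# What is missing (and what Quantum plans to add) is mutating
# vm.groups post-boot, e.g. attaching a new "web" group.
```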

Dan


On Wed, Aug 8, 2012 at 4:32 AM, Wolfgang Hennerbichler 
wolfgang.hennerbich...@risc-software.at wrote:

 it's not me :)
 http://forums.openstack.org/viewtopic.php?f=10&t=719


 On 08/08/2012 11:06 AM, Wolfgang Hennerbichler wrote:

 hi,

 is it me or is it openstack who can't modify security groups for running
 virtual machines?
 nova help | grep sec
 doesn't give me a clue.

 thanks for a hint,
 Wolfgang



 --
 DI (FH) Wolfgang Hennerbichler
 Software Development
 Unit Advanced Computing Technologies
 RISC Software GmbH
 A company of the Johannes Kepler University Linz

 IT-Center
 Softwarepark 35
 4232 Hagenberg
 Austria

 Phone: +43 7236 3343 245
 Fax: +43 7236 3343 250
 wolfgang.hennerbich...@risc-software.at
 http://www.risc-software.at





-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-08 Thread Christoph Hellwig
On Tue, 2012-08-07 at 17:38 -0400, Eric Windisch wrote:
  Pádraig Brady from Red Hat discovered that the fix implemented for
  CVE-2012-3361 (OSSA-2012-008) was not covering all attack scenarios. By
  crafting a malicious image with root-readable-only symlinks and
  requesting a server based on it, an authenticated user could still
  corrupt arbitrary files (all setups affected) or inject arbitrary files
  (Essex and later setups with OpenStack API enabled and a libvirt-based
  hypervisor) on the host filesystem, potentially resulting in full
  compromise of that compute node.
   
 
 Unfortunately, this won't be the end of vulnerabilities coming from
 this feature.
 
 Even if all the edge-cases around safely writing files are handled (and
 I'm not sure they are), simply mounting a filesystem is a very
 dangerous operation for the host.
 
 The idea had been suggested early-on to supporting ISO9660 filesystems
 created with mkisofs, which can be created in userspace, are read-only,
 and fairly safe to produce, even as root on compute host.
 
 That idea was apparently shot-down because, the people who
 documented/requested the blueprint requested a read-write filesystem
 that you cannot obtain with ISO9660.  Now, everyone has to live with a
 serious technical blunder.

Why do we ever read a filesystem touched by a guest in the host?

I think the first step is to make sure that a filesystem that the guest
touched never gets used by the host again; not doing so is just way too
much of a security risk.
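On the "edge-cases around safely writing files" point raised earlier in the thread: any injection path that does touch a guest-controlled tree has to refuse guest-planted symlinks at every path component. A defensive sketch of that check (my illustration, not Nova's actual fix; O_NOFOLLOW semantics are POSIX/Linux):

```python
import os

def safe_inject(guest_root, rel_path, data):
    """Write data to rel_path under guest_root while refusing to follow
    any symlink the (untrusted) guest image may have planted.
    Illustrative sketch only -- not Nova's actual injection code."""
    guest_root = os.path.realpath(guest_root)
    target = os.path.join(guest_root, rel_path.lstrip("/"))
    # Resolve the parent directory; if a guest symlink makes it point
    # outside the image root (e.g. at the host's /etc), refuse.
    parent = os.path.realpath(os.path.dirname(target))
    if parent != guest_root and not parent.startswith(guest_root + os.sep):
        raise ValueError("path escapes the guest filesystem")
    # O_NOFOLLOW makes open() fail (ELOOP) if the final path component
    # itself is a symlink.
    fd = os.open(os.path.join(parent, os.path.basename(target)),
                 os.O_WRONLY | os.O_CREAT | os.O_NOFOLLOW, 0o644)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
```

Even with checks like this, the mount itself still exposes the host kernel's filesystem parsers to hostile data, which is the deeper point being made here.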

Second, there are lots of options to create filesystems entirely in
userspace with contents that can later be written to:

 - mformat for vfat
 - growisofs or others for udf
 - genext2fs for ext2
 - e2tools to copy files into an ext2/ext3 filesystem previously created
   by mke2fs

UDF is an especially interesting option, as just about any modern
operating system supports it.  The same is true for vfat, but vfat is
fairly limiting for many use cases.





Re: [Openstack] [Quantum] Removing quantum-rootwrap

2012-08-08 Thread Dan Wendlandt
On Wed, Aug 8, 2012 at 9:22 AM, Thierry Carrez thie...@openstack.org wrote:

 Robert Kukura wrote:
  On 08/08/2012 09:31 AM, Thierry Carrez wrote:
  Quantum currently contains bin/quantum-rootwrap, a copy of nova-rootwrap
  supposed to control its privilege escalation to run commands as root.
 
  However quantum-rootwrap is currently non-functional, missing a lot of
  filter definitions that are necessary for it to work correctly.
 
  Is missing definitions the only issue? Those may need updating for F-3,
  but this can certainly be done.

 Those are the only issues I spotted. Making Quantum compatible with the
 latest version of rootwrap as shipped in Nova/Cinder, though, is a lot
 more work.

  Quantum
  is generally run with root_helper=sudo and a wildcard sudoers file.
 
  What is your basis for this statement? The packaging of Essex Quantum
  for Fedora and RHEL/EPEL do configure root_helper to use
  quantum-rootwrap. If another distribution doesn't do this, I would
  consider that a distribution bug, not an upstream problem.

 Given that quantum-rootwrap is currently non-working, I suspected that
 everyone running Quantum *on Folsom* was using sudo and not the
  rootwrap. If most people do that, it probably means it's a bit early to
 deprecate root_helper=sudo support in Folsom.

  That
  means Quantum is not ready to deprecate in Folsom (and remove in
  Grizzly) its ability to run with root_helper=sudo, like Nova and Cinder
 do.
 
  What's involved in deprecating this ability in Folsom? Is it that
  difficult? If Nova and Cinder are doing it, why shouldn't Quantum?

 As a quick grep will show, there is much more adherence to root_helper
 in Quantum than in Nova/Cinder, where it was used in a single place.
 It's definitely doable, but I'd say a bit dangerous (and too late) 4
 days before F3. I certainly won't have enough time for it...

  I do have an issue with Folsom dropping a capability that is being used
  in Essex. If the existing rootwrap really does more harm than good, this
  might be justified, but I don't think you can argue nobody has used it.

 Fair point, it was definitely used in Essex.

 We have three options at this point:

 * Remove it (but is it acceptable to lose functionality compared to
 Essex, even if Essex is not a core release for Quantum ?)

 * Just fix it by adding missing filters (but then accept that
 quantum-rootwrap doesn't behave like nova-rootwrap and cinder-rootwrap,
 which is bad for consistency)

 * Align quantum-rootwrap with nova-rootwrap and deprecate usage of
 root_helper, by overhauling how root_helper is pervasively used
 throughout Quantum code (lots of work, and introducing a lot of
 disruption that late in the cycle)

 Personally I think only the first two options are realistic. So this
 boils down to losing functionality from Essex vs. hurting Folsom core
 consistency.


If someone (Bob?) has the immediate cycles to make rootwrap work in Folsom
with low to medium risk of disruption, I'd be open to exploring that, even
if it meant inconsistent usage in quantum vs. nova/cinder.

I also think we need to develop basic guidelines that should be enforced by
reviewers with respect to correctly using rootwrap moving forward.  Is
there a quick pointer we have for developers and reviewers to use?

Dan





 --
 Thierry Carrez (ttx)
 Release Manager, OpenStack





-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-08 Thread Eric Windisch
 
  I think the first step is to make sure that a filesystem that the guest
  touched never gets used by the host again; not doing so is just way too
  much of a security risk.
 
 Second there are lots of options to create filesystem entirely in
 userspace with contents that can later be written to:
 
  udf especially is a very interesting option, as just about any modern
  operating system supports it. The same is true for vfat, but vfat is
  fairly limiting for many use cases.


Agreed on all points. 

 
 Why do we ever read a filesystem touched by a guest in the host?
I believe this is more about reading filesystems that were uploaded by users into 
glance. However, it is essentially the same thing.

I don't think we need to do this and don't think we should do this. Clearly, 
however, someone somewhere, at some point, thought they wanted this.

Regards,
Eric Windisch






Re: [Openstack] [Quantum] Removing quantum-rootwrap

2012-08-08 Thread jrd
From: Dan Wendlandt d...@nicira.com
Date: Wed, 8 Aug 2012 10:28:37 -0700

On Wed, Aug 8, 2012 at 9:22 AM, Thierry Carrez thie...@openstack.org 
 wrote:

Robert Kukura wrote:
 On 08/08/2012 09:31 AM, Thierry Carrez wrote:
 Quantum currently contains bin/quantum-rootwrap, a copy of 
 nova-rootwrap
 supposed to control its privilege escalation to run commands as 
 root.

 However quantum-rootwrap is currently non-functional, missing a lot 
 of
 filter definitions that are necessary for it to work correctly.

 Is missing definitions the only issue? Those may need updating for 
 F-3,
 but this can certainly be done.
   
Those are the only issues I spotted. Making Quantum compatible with the
latest version of rootwrap as shipped in Nova/Cinder, though, is a lot
more work.
   
 Quantum
 is generally run with root_helper=sudo and a wildcard sudoers file.

 What is your basis for this statement? The packaging of Essex Quantum
 for Fedora and RHEL/EPEL do configure root_helper to use
 quantum-rootwrap. If another distribution doesn't do this, I would
 consider that a distribution bug, not an upstream problem.
   
Given that quantum-rootwrap is currently non-working, I suspected that
everyone running Quantum *on Folsom* was using sudo and not the
rootwrap. If most people do that, it probably means it's a bit early to
deprecate root_helper=sudo support in Folsom.
   
 That
 means Quantum is not ready to deprecate in Folsom (and remove in
 Grizzly) its ability to run with root_helper=sudo, like Nova and 
 Cinder do.

 What's involved in deprecating this ability in Folsom? Is it that
 difficult? If Nova and Cinder are doing it, why shouldn't Quantum?
   
As a quick grep will show, there is much more adherence to root_helper
in Quantum than in Nova/Cinder, where it was used in a single place.
It's definitely doable, but I'd say a bit dangerous (and too late) 4
days before F3. I certainly won't have enough time for it...
   
 I do have an issue with Folsom dropping a capability that is being 
 used
 in Essex. If the existing rootwrap really does more harm than good, 
 this
 might be justified, but I don't think you can argue nobody has used 
 it.
   
Fair point, it was definitely used in Essex.
   
We have three options at this point:
   
* Remove it (but is it acceptable to lose functionality compared to
Essex, even if Essex is not a core release for Quantum ?)
   
* Just fix it by adding missing filters (but then accept that
quantum-rootwrap doesn't behave like nova-rootwrap and cinder-rootwrap,
which is bad for consistency)
   
* Align quantum-rootwrap with nova-rootwrap and deprecate usage of
root_helper, by overhauling how root_helper is pervasively used
throughout Quantum code (lots of work, and introducing a lot of
disruption that late in the cycle)
   
Personally I think only the first two options are realistic. So this
boils down to losing functionality from Essex vs. hurting Folsom core
consistency.

If someone (Bob?) has the immediate cycles to make rootwrap work in Folsom 
 with low to medium
risk of disruption, I'd be open to exploring that, even if it meant 
 inconsistent usage in quantum
vs. nova/cinder.  


Hi Dan.  I've been working with Bob, getting myself up to speed on
quantum.  I've just talked it over with Bob, and I'll take a crack at
this one.  My approach is going to be to get the quantum rootwrap
stuff up to parity with nova.  It sounded like some further work might
get done in this area for Grizzly, but for the short term, this ought
to be fairly non-disruptive.

I also think we need to develop basic guidelines that should be enforced 
 by reviewers with
respect to correctly using rootwrap moving forward.  Is there a quick 
 pointer we have for
developers and reviewers to use?  

Dan

 

--
Thierry Carrez (ttx)
Release Manager, OpenStack
   

--
~~~
Dan Wendlandt 
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~



[Openstack] Making the RPC backend a required configuration parameter

2012-08-08 Thread Eric Windisch
I believe that the RPC backend should no longer have any default.



Historically, it seems that the Kombu driver is default only because it existed 
before all others and before there was an abstraction. With multiple 
implementations now available, it may be time for a change.

Why?
* A default skews the attitudes and subsequent architectures toward a specific 
implementation


* A default skews the practical testing scenarios, ensuring maturity of one 
driver over others.
* The kombu driver does not work out of the box, so it is no more reasonable 
as a default than impl_fake.
* The RPC code is now in openstack-common, so addressing this later will only 
create additional technical debt.

My proposal is that for Folsom, we introduce a future_required flag on the 
configuration option, rpc_backend. This will trigger a WARNING message if the 
rpc_backend configuration value is not set.  In Grizzly, we would make the 
rpc_backend variable mandatory in the configuration.
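A minimal sketch of the proposed warning behavior (the future_required flag is a proposal, not an existing option attribute; the function and default names below are illustrative, not real OpenStack code):

```python
import logging

LOG = logging.getLogger(__name__)

# The current implicit default; the proposal would warn whenever an
# operator relies on it, then make the option mandatory in Grizzly.
DEFAULT_RPC_BACKEND = "nova.rpc.impl_kombu"

def resolve_rpc_backend(conf):
    """Return the configured RPC backend, warning when the deprecated
    implicit default is being relied upon."""
    backend = conf.get("rpc_backend")
    if not backend:
        LOG.warning("rpc_backend is not set; defaulting to %s. "
                    "This default will be removed in a future release.",
                    DEFAULT_RPC_BACKEND)
        return DEFAULT_RPC_BACKEND
    return backend
```

In Grizzly the warning branch would instead raise a configuration error, making the option mandatory.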

Mark McLoughlin wisely suggested this come before the mailing list, as it will 
affect a great many people. I welcome feedback and discussion.

Regards,
Eric Windisch





[Openstack] good link for jcloud

2012-08-08 Thread chaohua wang
Hi Folks,

I am a newbie with jclouds. Do you guys know of a good link or book to
read?

Thanks,

Chaohua


Re: [Openstack] adding security groups to running virtual machines

2012-08-08 Thread Dan Wendlandt
Actually, I was wrong about adding/removing security groups at run-time.
 Apparently, it works already :)   Thanks for pointing that out Mohammed!

Dan
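The commands Mohammed mentions can be used roughly as follows (a sketch assuming a trunk python-novaclient; "web1" and "web-sg" are hypothetical instance and group names):

```shell
# Create a group and a rule, then attach/detach it to a *running* instance.
nova secgroup-create web-sg "allow web traffic"
nova secgroup-add-rule web-sg tcp 80 80 0.0.0.0/0
nova add-secgroup web1 web-sg
nova remove-secgroup web1 web-sg
```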


On Wed, Aug 8, 2012 at 1:23 PM, Mohammed Naser mna...@vexxhost.com wrote:

 Hi,

 You can actually add/remove security groups from an instance anytime
 even when it's running.

 The nova client also had the add-secgroup and remove-secgroup commands
 in it (at least in trunk), not sure if this is not supported in
 specific releases.

 Regards,
 Mohammed

 On Wed, Aug 8, 2012 at 1:20 PM, Dan Wendlandt d...@nicira.com wrote:
  Hi Wolfgang,
 
   Yes, currently Nova only allows associating security groups at boot,
   though you can change the rules in a security group post-boot.
  
   Quantum (the new OpenStack networking service) will be adding a more
   advanced notion of security groups which allows changing the groups of
   booted instances and other improvements.  This had been targeted for
   Folsom-3, but looks very unlikely to make it in time for Folsom.
 
   The difference between Nova security groups and Quantum security groups
   is similar to the difference between traditional Amazon security groups
   and Amazon VPC security groups (see: http://aws.amazon.com/vpc/faqs/#S2)
 
  Dan
 
 
  On Wed, Aug 8, 2012 at 4:32 AM, Wolfgang Hennerbichler
  wolfgang.hennerbich...@risc-software.at wrote:
 
  it's not me :)
  http://forums.openstack.org/viewtopic.php?f=10t=719
 
 
  On 08/08/2012 11:06 AM, Wolfgang Hennerbichler wrote:
 
  hi,
 
  is it me or is it openstack who can't modify security groups for
 running
  virtual machines?
  nova help | grep sec
  doesn't give me a clue.
 
  thanks for a hint,
  Wolfgang
 
 
 
  --
  DI (FH) Wolfgang Hennerbichler
  Software Development
  Unit Advanced Computing Technologies
  RISC Software GmbH
  A company of the Johannes Kepler University Linz
 
  IT-Center
  Softwarepark 35
  4232 Hagenberg
  Austria
 
  Phone: +43 7236 3343 245
  Fax: +43 7236 3343 250
  wolfgang.hennerbich...@risc-software.at
  http://www.risc-software.at
 
 
 
 
 
  --
  ~~~
  Dan Wendlandt
  Nicira, Inc: www.nicira.com
  twitter: danwendlandt
  ~~~
 
 
 



 --
 Mohammed Naser — vexxhost
 -
 E. mna...@vexxhost.com
 W. http://vexxhost.com




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~


Re: [Openstack] Making the RPC backend a required configuration parameter

2012-08-08 Thread Andrew Clay Shafer
Is there a good reason NOT to do this?


On Wed, Aug 8, 2012 at 4:35 PM, Eric Windisch e...@cloudscaling.com wrote:

 I believe that the RPC backend should no longer have any default.



 Historically, it seems that the Kombu driver is default only because it
 existed before all others and before there was an abstraction. With
 multiple implementations now available, it may be time for a change.

 Why?
 * A default skews the attitudes and subsequent architectures toward a
 specific implementation


 * A default skews the practical testing scenarios, ensuring maturity of
 one driver over others.
 * The kombu driver does not work out of the box, so it is no more
 reasonable as a default than impl_fake.
 * The RPC code is now in openstack-common, so addressing this later will
 only create additional technical debt.

 My proposal is that for Folsom, we introduce a future_required flag on
 the configuration option, rpc_backend. This will trigger a WARNING
 message if the rpc_backend configuration value is not set.  In Grizzly, we
 would make the rpc_backend variable mandatory in the configuration.

 Mark McLoughlin wisely suggested this come before the mailing list, as it
 will affect a great many people. I welcome feedback and discussion.

 Regards,
 Eric Windisch





Re: [Openstack] Help with meta-data

2012-08-08 Thread Jay Pipes
On 08/08/2012 03:57 AM, Simon Walter wrote:
 Hi all,
 
 I've completed the excruciating Launchpad process of subscribing to a 
 mailing list to ask for your help with having my instances access their 
 meta-data.

What was excruciating about the subscription process?

 However, they cannot access their meta-data:
 
 Begin: Running /scripts/init-bottom ... done.
 cloud-init start-local running: Wed, 08 Aug 2012 07:33:07 +. up 8.32 
 seconds
 no instance data found in start-local
 ci-info: lo: 1 127.0.0.1   255.0.0.0   .
 ci-info: eth1  : 0 .   .   fa:16:3e:5a:f3:05
 ci-info: eth0  : 1 192.168.1.205   255.255.255.0   fa:16:3e:23:d7:7c
 ci-info: route-0: 0.0.0.0 192.168.1.1 0.0.0.0 eth0   UG
 ci-info: route-1: 192.168.1.0 0.0.0.0 255.255.255.0   eth0   U
 cloud-init start running: Wed, 08 Aug 2012 07:33:10 +. up 11.95 seconds
 2012-08-08 07:33:54,243 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: 
 url error [[Errno 113] No route to host]
snip
 2012-08-08 07:35:55,308 - DataSourceEc2.py[CRITICAL]: giving up on md after 
 124 seconds
 no instance data found in start
 Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
 
 I can see something on the host:
 curl http://169.254.169.254:8775/
 1.0
 2007-01-19
 2007-03-01
 2007-08-29
 2007-10-10
 2007-12-15
 2008-02-01
 2008-09-01
 2009-04-04

Where are you curl'ing from? The compute node or the host running the
nova-ec2-metadata service?

 But doing something like:
 
 I get a HTTP 500 error.

I think you're missing a paste above :) doing something like what?

 I don't know if the problem is routing or with the meta-data service.

Well, it's unlikely it's an issue with the metadata service because the
metadata service is clearly responding properly to at least ONE host, as
evidenced above. It's more likely a routing issue.

Can you SSH into the VM in question and try pinging the EC2 metadata
service URL? (http://169.254.169.254:8775/)

Best,
-jay
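To help separate a routing problem from a service problem, both paths are worth checking (a hedged sketch; with nova-network, guests normally reach the metadata service on port 80 of 169.254.169.254, which an iptables DNAT rule redirects to port 8775 on the metadata host):

```shell
# On the host: verify a DNAT rule for the metadata address exists
# (chain names vary by release; this is illustrative).
sudo iptables -t nat -L -n | grep 169.254.169.254

# Inside the guest: guests use port 80, not 8775.
curl -m 5 http://169.254.169.254/2009-04-04/meta-data/instance-id
```

If the host-side curl on :8775 works but the guest-side curl on :80 fails, the missing or wrong NAT rule (or a routing problem) is the likely culprit.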

 Any help is appreciated. I'm running this all on one box. Here is my 
 nova.conf:
 --dhcpbridge_flagfile=/etc/nova/nova.conf
 --dhcpbridge=/usr/bin/nova-dhcpbridge
 --logdir=/var/log/nova
 --state_path=/var/lib/nova
 --lock_path=/var/lock/nova
 --allow_admin_api=true
 --use_deprecated_auth=false
 --auth_strategy=keystone
 --scheduler_driver=nova.scheduler.simple.SimpleScheduler
 --s3_host=192.168.1.14
 --ec2_host=192.168.1.14
 --rabbit_host=192.168.1.14
 --cc_host=192.168.1.14
 --nova_url=http://192.168.1.14:8774/v1.1/
 --routing_source_ip=192.168.1.14
 --glance_api_servers=192.168.1.14:9292
 --image_service=nova.image.glance.GlanceImageService
 --iscsi_ip_prefix=192.168.22
 --sql_connection=mysql://nova:s7ack3d@127.0.0.1/nova
 --ec2_url=http://192.168.1.14:8773/services/Cloud
 --keystone_ec2_url=http://192.168.1.14:5000/v2.0/ec2tokens
 --api_paste_config=/etc/nova/api-paste.ini
 --libvirt_type=kvm
 --libvirt_use_virtio_for_bridges=true
 --start_guests_on_host_boot=true
 --resume_guests_state_on_host_boot=true
 --vnc_enabled=true
 --vncproxy_url=http://192.168.1.14:6080
 --vnc_console_proxy_url=http://192.168.1.14:6080
 # network specific settings
 --network_manager=nova.network.manager.FlatDHCPManager
 --public_interface=eth0
 --flat_interface=eth1
 --flat_network_bridge=br100
 --fixed_range=10.0.2.0/24
 --floating_range=192.168.1.30/27
 --network_size=32
 --flat_network_dhcp_start=10.0.2.1
 --flat_injected=False
 --force_dhcp_release
 --iscsi_helper=tgtadm
 --connection_type=libvirt
 --root_helper=sudo nova-rootwrap
 --verbose
 
 I have a question about VNC as well, but this is by far more important.
 
 Thanks for your help,
 
 Simon
 



Re: [Openstack] Making the RPC backend a required configuration parameter

2012-08-08 Thread Eric Windisch
 
 
 Regardless of the actual default in openstack-common, the devstack 
 default is going to skew all of this as well (if not more so), and 
 devstack does need a default. Much like db backend.

Devstack doesn't need a default, necessarily. Or more clearly, it doesn't need 
to have a hard default. It could have a soft-default, via a prompt on first-run 
unless defined in the localrc, similar to how passwords are currently handled.
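A soft default of that kind might look roughly like the following sketch (illustrative, not actual devstack code; RPC_BACKEND is a hypothetical variable name):

```shell
# Use the value from localrc if already set; otherwise prompt on first run.
read_rpc_backend() {
    if [ -z "$RPC_BACKEND" ]; then
        printf "Select an RPC backend (kombu, qpid, zmq): "
        read RPC_BACKEND
    fi
    echo "$RPC_BACKEND"
}
```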

Regards,
Eric Windisch






[Openstack] Images API v2 Versioning and Glance

2012-08-08 Thread Brian Waldon
tl;dr - We're going to use minor versioning for the v2 Images API and take a 
more iterative approach to API design and implementation

Up until this point we have depended on big design up front for the Images API. 
We spent most of Essex talking about v2 without writing any code. At the 
beginning of the Folsom release cycle I took it upon myself to start 
implementing the v2 Images API (which has been rather slow-going). Several 
people have helped, for which I am grateful, but ultimately we didn't finish 
what was needed. Not only did we not finish everything, but there are several 
things in the spec that need to revisit.

The (possibly incorrect) assumption I have been working under is that we would 
release the v2 API and that would be it - just a major version, no minor 
versioning, no extensions. If we wanted to make changes, we would put them in 
v3. 

That won't work. It has taken two releases to get to this point and we still 
aren't done. The solution as I see it is to use a more iterative 
design/implementation cycle for our APIs. We can't be designing massive API 
specs up front - it doesn't always work. We need to have the flexibility to 
move fast and get to something that works that we are proud of. 

We are going to use minor versioning for the v2 Images API. This means that the 
Folsom release will implement a v2.0 spec, with additional (and already 
identified) features in a v2.1 sometime early in Grizzly.

Please voice any and all concerns ASAP since we're on a pretty short timeline.


Brian Waldon
The People's PTL


Re: [Openstack] [Quantum] Removing quantum-rootwrap

2012-08-08 Thread Dan Wendlandt
On Wed, Aug 8, 2012 at 1:20 PM, j...@redhat.com wrote:

  
 If someone (Bob?) has the immediate cycles to make rootwrap work in
 Folsom with low to medium
 risk of disruption, I'd be open to exploring that, even if it meant
 inconsistent usage in quantum
 vs. nova/cinder.
 

 Hi Dan.  I've been working with Bob, getting myself up to speed on
 quantum.  I've just talked it over with Bob, and I'll take a crack at
 this one.  My approach is going to be to get the quantum rootwrap
 stuff up to parity with nova.  It sounded like some further work might
 get done in this area for Grizzly, but for the short term, this ought
 to be fairly non-disruptive.


Nice to meet you, glad you'll be helping here.  Let's stay in close sync
about this change, as I'd like to get a better understanding of how
disruptive/risky this change is if we're thinking of putting it in
Folsom.

Dan




 I also think we need to develop basic guidelines that should be
 enforced by reviewers with
 respect to correctly using rootwrap moving forward.  Is there a quick
 pointer we have for
 developers and reviewers to use?
 
 Dan
 
 
 
 --
 Thierry Carrez (ttx)
 Release Manager, OpenStack
 
 
 --
 ~~~
 Dan Wendlandt
 Nicira, Inc: www.nicira.com
 twitter: danwendlandt
 ~~~
 
 
 




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~


Re: [Openstack] Images API v2 Versioning and Glance

2012-08-08 Thread Gabriel Hurley
With the stipulation that clients will be able to talk to all versions of the 
API from here on forward, I am totally in favor of this.

- Gabriel

 -Original Message-
 From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
 [mailto:openstack-
 bounces+gabriel.hurley=nebula@lists.launchpad.net] On Behalf Of
 Brian Waldon
 Sent: Wednesday, August 08, 2012 3:51 PM
 To: openstack@lists.launchpad.net
 Subject: [Openstack] Images API v2 Versioning and Glance
 
 tl;dr - We're going to use minor versioning for the v2 Images API and take a
 more iterative approach to API design and implementation
 
 Up until this point we have depended on big design up front for the Images
 API. We spent most of Essex talking about v2 without writing any code. At
 the beginning of the Folsom release cycle I took it upon myself to start
 implementing the v2 Images API (which has been rather slow-going). Several
 people have helped, for which I am grateful, but ultimately we didn't finish
 what was needed. Not only did we not finish everything, but there are
  several things in the spec that we need to revisit.
 
 The (possibly incorrect) assumption I have been working under is that we
 would release the v2 API and that would be it - just a major version, no minor
 versioning, no extensions. If we wanted to make changes, we would put
 them in v3.
 
 That won't work. It has taken two releases to get to this point and we still
 aren't done. The solution as I see it is to use a more iterative
 design/implementation cycle for our APIs. We can't be designing massive API
 specs up front - it doesn't always work. We need to have the flexibility to
 move fast and get to something that works that we are proud of.
 
 We are going to use minor versioning for the v2 Images API. This means that
 the Folsom release will implement a v2.0 spec, with additional (and already
 identified) features in a v2.1 sometime early in Grizzly.
 
 Please voice any and all concerns ASAP since we're on a pretty short timeline.
 
 
 Brian Waldon
 The People's PTL





Re: [Openstack] Images API v2 Versioning and Glance

2012-08-08 Thread Brian Waldon
Great. The backwards-compatibility requirement you bring up isn't something new 
- we aren't dropping support for Images API v1 (v1.0, v1.1) in Glance or in 
python-glanceclient any time soon. What we will need to do is add better 
version negotiation code to handle failures when clients expect later versions 
than a server can provide.
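Client-side negotiation of this sort could be sketched as follows (illustrative, not python-glanceclient's actual logic; it assumes the server advertises version strings like "v2.0" and picks the newest mutually supported version within a major version):

```python
def negotiate_version(server_versions, client_max, major=2):
    """Pick the highest server-advertised version that is at most the
    client's maximum, within the requested major version.
    Returns None when no compatible version exists."""
    def parse(v):
        # "v2.1" -> (2, 1); tolerate a missing minor ("v2" -> (2, 0)).
        maj, _, minor = v.lstrip("v").partition(".")
        return int(maj), int(minor or 0)

    limit = parse(client_max)
    candidates = [parse(v) for v in server_versions
                  if parse(v)[0] == major and parse(v) <= limit]
    if not candidates:
        return None
    maj, minor = max(candidates)
    return "v%d.%d" % (maj, minor)
```

With this shape, a v2.1-capable client talking to a Folsom server advertising only v2.0 degrades gracefully instead of failing outright.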

Waldon

On Aug 8, 2012, at 4:10 PM, Gabriel Hurley wrote:

 With the stipulation that clients will be able to talk to all versions of the 
 API from here on forward, I am totally in favor of this.
 
- Gabriel
 
 -Original Message-
 From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
 [mailto:openstack-
 bounces+gabriel.hurley=nebula@lists.launchpad.net] On Behalf Of
 Brian Waldon
 Sent: Wednesday, August 08, 2012 3:51 PM
 To: openstack@lists.launchpad.net
 Subject: [Openstack] Images API v2 Versioning and Glance
 
 tl;dr - We're going to use minor versioning for the v2 Images API and take a
 more iterative approach to API design and implementation
 
 Up until this point we have depended on big design up front for the Images
 API. We spent most of Essex talking about v2 without writing any code. At
 the beginning of the Folsom release cycle I took it upon myself to start
 implementing the v2 Images API (which has been rather slow-going). Several
 people have helped, for which I am grateful, but ultimately we didn't finish
 what was needed. Not only did we not finish everything, but there are
  several things in the spec that we need to revisit.
 
 The (possibly incorrect) assumption I have been working under is that we
 would release the v2 API and that would be it - just a major version, no 
 minor
 versioning, no extensions. If we wanted to make changes, we would put
 them in v3.
 
 That won't work. It has taken two releases to get to this point and we still
 aren't done. The solution as I see it is to use a more iterative
 design/implementation cycle for our APIs. We can't be designing massive API
 specs up front - it doesn't always work. We need to have the flexibility to
 move fast and get to something that works that we are proud of.
 
 We are going to use minor versioning for the v2 Images API. This means that
 the Folsom release will implement a v2.0 spec, with additional (and already
 identified) features in a v2.1 sometime early in Grizzly.
 
 Please voice any and all concerns ASAP since we're on a pretty short 
 timeline.
 
 
 Brian Waldon
 The People's PTL
 
 




[Openstack] Help on dnsmasq

2012-08-08 Thread Sébastien Han
Hi guys,

Any ideas on this?

https://bugs.launchpad.net/nova/+bug/1033675

https://answers.launchpad.net/nova/+question/205136

Any advice/tip will be truly appreciated :)

Cheers!


Re: [Openstack] Help with meta-data

2012-08-08 Thread Simon Walter


On 08/09/2012 06:45 AM, Jay Pipes wrote:

What was excruciating about the subscription process?


There's many more steps to subscribing to a Launchpad mailing list than 
good ol' mailman and the like. I'm just whinging off topic. Sorry... 
Thanks for your reply though!



However, they cannot access their meta-data:

Begin: Running /scripts/init-bottom ... done.
cloud-init start-local running: Wed, 08 Aug 2012 07:33:07 +. up 8.32 seconds
no instance data found in start-local
ci-info: lo: 1 127.0.0.1   255.0.0.0   .
ci-info: eth1  : 0 .   .   fa:16:3e:5a:f3:05
ci-info: eth0  : 1 192.168.1.205   255.255.255.0   fa:16:3e:23:d7:7c
ci-info: route-0: 0.0.0.0 192.168.1.1 0.0.0.0 eth0   UG
ci-info: route-1: 192.168.1.0 0.0.0.0 255.255.255.0   eth0   U
cloud-init start running: Wed, 08 Aug 2012 07:33:10 +. up 11.95 seconds
2012-08-08 07:33:54,243 - util.py[WARNING]: 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: url 
error [[Errno 113] No route to host]

snip

2012-08-08 07:35:55,308 - DataSourceEc2.py[CRITICAL]: giving up on md after 124 
seconds
no instance data found in start
Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd

I can see something on the host:
curl http://169.254.169.254:8775/
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04


Where are you curl'ing from? The compute node or the host running the
nova-ec2-metadata service?


It's all on one box. So the same one I suppose.




But doing something like:

I get a HTTP 500 error.


I think you're missing a paste above :) doing something like what?



My bad. Something like:
curl http://169.254.169.254:8775/1.0/
or
curl http://169.254.169.254:8775/2009-04-04/
or
curl http://169.254.169.254:8775/2009-04-04/meta-data/instance-id

Also I notice that the error message above does not contain the port. Is 
that normal, or is it really not accessing the correct port?



I don't know if the problem is routing or with the meta-data service.


Well, it's unlikely it's an issue with the metadata service because the
metadata service is clearly responding properly to at least ONE host, as
evidenced above. It's more likely a routing issue.

Can you SSH into the VM in question and try pinging the EC2 metadata
service URL? (http://169.254.169.254:8775/)


I guess I'll have to build a VM from scratch, as I was relying on the 
SSH key, which is apparently supplied by the meta-data service, to be 
able to ssh into the VM.


If that is the case, I can use it without the meta-data service, though 
it sure would be nice to have it working properly eventually.


Cheers,

Simon


--
simonsmicrophone.com



Re: [Openstack] good link for jcloud

2012-08-08 Thread Everett Toews
Naturally the best place to start is http://www.jclouds.org/

They do have some OpenStack specific docs at 
http://www.jclouds.org/documentation/quickstart/openstack/ but the latest 
release 1.4.1 which the docs are based on only supports the Nova API v1.1. The 
next release 1.5.0 has support for the Nova API v2 but it's still a work in 
progress, although it's already in beta.

If you're working with Essex or later, I would recommend making the effort to 
use the beta release. Have a look at the installation guide at 
http://www.jclouds.org/documentation/userguide/installation-guide/ and replace 
alpha.1 with beta.9.
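For Maven users, pulling in the beta mentioned above would look roughly like this (a sketch only; the artifactId and the exact version string are assumptions that should be checked against the jclouds installation guide):

```xml
<dependency>
  <groupId>org.jclouds</groupId>
  <artifactId>jclouds-allcompute</artifactId>
  <version>1.5.0-beta.9</version>
</dependency>
```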

A few more links you'll find useful:
Google group: https://groups.google.com/forum/?fromgroups#!forum/jclouds
Example code: https://github.com/jclouds/jclouds-examples

Hope this helps,
Everett


From: openstack-bounces+everett.toews=rackspace@lists.launchpad.net 
[openstack-bounces+everett.toews=rackspace@lists.launchpad.net] on behalf 
of chaohua wang [chwang...@gmail.com]
Sent: Wednesday, August 08, 2012 3:48 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] good link for jcloud

Hi Folks,

I am a newbie with jclouds. Do you guys know of a good link or book to read?

Thanks,

Chaohua




Re: [Openstack] Help with meta-data

2012-08-08 Thread tacy lee
try adding metadata_host to nova.conf

On Wed, Aug 8, 2012 at 3:57 PM, Simon Walter si...@gikaku.com wrote:


 Hi all,

 I've completed the excruciating Launchpad process of subscribing to a
 mailing list to ask for your help with having my instances access their
 meta-data.

 I'm new to OpenStack. So please forgive my n00bness.

 I installed OpenStack on Ubuntu 12.04 by following stackgeek's 10 minute
 method and using their scripts:
 http://stackgeek.com/guides/gettingstarted.html

 Which got me quite far. I had to fix some of the networking setup. Now I
 can launch instances and ping them.

 However, they cannot access their meta-data:

 Begin: Running /scripts/init-bottom ... done.
 cloud-init start-local running: Wed, 08 Aug 2012 07:33:07 +. up 8.32
 seconds

 no instance data found in start-local

 ci-info: lo: 1 127.0.0.1   255.0.0.0   .

 ci-info: eth1  : 0 .   .   fa:16:3e:5a:f3:05

 ci-info: eth0  : 1 192.168.1.205   255.255.255.0   fa:16:3e:23:d7:7c

 ci-info: route-0: 0.0.0.0 192.168.1.1 0.0.0.0 eth0   UG

 ci-info: route-1: 192.168.1.0 0.0.0.0 255.255.255.0   eth0   U

 cloud-init start running: Wed, 08 Aug 2012 07:33:10 +. up 11.95 seconds

 2012-08-08 07:33:54,243 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:33:57,242 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [5/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:01,246 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [10/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:04,246 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [13/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:07,246 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [16/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:10,246 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [19/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:13,246 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [22/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:16,246 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [25/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:21,250 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [30/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:24,250 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [33/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:29,254 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [38/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:35,258 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [44/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:41,261 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:47,266 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [56/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:53,269 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [62/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:34:59,274 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [68/120s]: url error [[Errno 113] No route to host]

 2012-08-08 07:35:06,278 - util.py[WARNING]: 'http://169.254.169.254/2009-*