[Openstack] Bundle running instance?

2011-08-31 Thread Darren Govoni

Hi,
  Is there a tutorial somewhere showing how to re-bundle a running
instance (e.g., Ubuntu) and register it as a new image in OpenStack?

thanks,
Darren

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Bundle running instance?

2011-08-31 Thread Wayne A. Walls
I think this is probably the best place to start:
http://docs.openstack.org/cactus/openstack-compute/admin/content/creating-a-linux-image.html
If you find anything that is inaccurate in your efforts, ping
a...@openstack.org and she can get that updated :)

Cheers,


Wayne



Re: [Openstack] Bundle running instance?

2011-08-31 Thread Pedro Navarro Pérez
What about the image-management section in the starter guide:

http://docs.openstack.org/cactus/openstack-compute/starter/content/Creating_a_Linux_Image_-_Ubuntu_Fedora-d1e1287.html



Re: [Openstack] Bundle running instance?

2011-08-31 Thread Wayne A. Walls
Just realized you asked about bundling a running instance, not creating a
new image.  Apologies on that.  I used this script in the past to bundle a
running instance.  It is likely outdated, but it should give you a good
starting point:

#!/bin/sh

# this is a script for easy image creation

. /root/creds/novarc
SYSTEM=$(uname -r)
read -p "Please enter your bucket/container name: " BUCKET_NAME

euca-bundle-vol --no-inherit -d /tmp/image -e /mnt,/tmp
losetup /dev/loop3 /tmp/image/image.img
mount /dev/loop3 /mnt
# Point the root filesystem entry at /dev/vda1 instead of the host's UUID
sed -i 's/^UUID=[a-z0-9]\{8\}-[a-z0-9]\{4\}-[a-z0-9]\{4\}-[a-z0-9]\{4\}-[a-z0-9]\{12\}[\t]* \//\/dev\/vda1\t\//1' /mnt/etc/fstab
# Point the swap entry at the swap file created below
sed -i 's/^UUID=[a-z0-9]\{8\}-[a-z0-9]\{4\}-[a-z0-9]\{4\}-[a-z0-9]\{4\}-[a-z0-9]\{12\}[\t]* none/\/mnt\/swap.file\tnone/1' /mnt/etc/fstab
cp /mnt/etc/network/interfaces /mnt/root/interfaces.bak
cat > /mnt/etc/network/interfaces << INTERFACE_UPDATE
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
INTERFACE_UPDATE

dd if=/dev/zero of=/mnt/swap.file bs=1024 count=512000
sleep 1
mkswap /mnt/swap.file
sleep 1
umount /mnt

euca-bundle-image -i /boot/initrd.img-$SYSTEM -d /tmp/ramdisk --ramdisk true
euca-bundle-image -i /boot/vmlinuz-$SYSTEM -d /tmp/kernel --kernel true
euca-upload-bundle -m /tmp/kernel/vmlinuz-$SYSTEM.manifest.xml -b $BUCKET_NAME
euca-upload-bundle -m /tmp/ramdisk/initrd.img-$SYSTEM.manifest.xml -b $BUCKET_NAME
KERNEL_IMAGE=$(euca-register $BUCKET_NAME/vmlinuz-$SYSTEM.manifest.xml | awk '{print $2}')
RAMDISK_IMAGE=$(euca-register $BUCKET_NAME/initrd.img-$SYSTEM.manifest.xml | awk '{print $2}')
euca-bundle-image -i /tmp/image/image.img --kernel $KERNEL_IMAGE --ramdisk $RAMDISK_IMAGE -d /tmp/imagebuild
euca-upload-bundle -m /tmp/imagebuild/image.img.manifest.xml -b $BUCKET_NAME
AMI_IMAGE=$(euca-register $BUCKET_NAME/image.img.manifest.xml | awk '{print $2}')
echo "Image is decrypting and untarring for usage."

sleep 180

euca-run-instances $AMI_IMAGE





Re: [Openstack] Bundle running instance?

2011-08-31 Thread Darren Govoni

Much appreciated! I will give it a try.



Re: [Openstack] Bundle running instance?

2011-08-31 Thread Everett Toews
We have a similar script at

https://github.com/canarie/vm-toolkit/blob/master/bundle/vmbundle.py

that tries to take the pain out of bundling a running instance for our less
experienced users.

Everett



[Openstack] Keystone integration in glance

2011-08-31 Thread Kevin L. Mitchell
We recently finished up functional tests for the keystone integration
added to glance, and I wanted to send a quick description of the
specifics for everyone's reference; it could also give people an idea of
how to integrate other projects with keystone.

tl;dr version: The code added to glance performs integration with
keystone, adds the concept of ownership to images, and allows images to
be shared with others.

Now, the details:

First, glance is integrated with keystone.  This requires adding a
couple of sections to the glance-api.conf and glance-registry.conf
configuration files and modifying the pipeline.  (The plugin WSGI
middleware pieces themselves are shipped with keystone.)  This
integration makes the username, tenant, and is_admin flag available.
There is also an abstract concept of an owner--by default, this is the
tenant, but a configuration option allows it to be switched to be the
user name.
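For concreteness, the pipeline change amounts to something like the sketch of glance-api.conf below. The filter names, factory path, and option names here are my assumptions about what the keystone middleware shipped at the time; check the examples distributed with keystone for the real ones:

```ini
# Hypothetical sketch only -- verify names against your keystone distribution
[pipeline:glance-api]
pipeline = versionnegotiation authtoken context apiv1app

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
```

The glance-registry.conf change is analogous, with the same authtoken and context filters inserted ahead of the registry app.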

The second piece is that all images now have an owner field (separate
from image properties).  The owner is based on that abstract concept
of owner in the glance keystone integration--i.e., by default, images
are owned by tenants, but flipping that configuration option makes them
owned by individual users.  Either way, the owner field of an image is
set to the owner abstraction of the creating user and cannot be
modified except by an admin.  Also, the is_public flag that was
already available for images is now interpreted relative to owner: if
the image is owned by the authenticated user, it shows up in lists even
if is_public is False, while other users can neither see nor access the
image (with the exception of admins).  There is one special behavior I
implemented: if the owner of an image is set to nothing, the image is
always accessible, but is_public determines whether the image is visible
in lists.  The point of this special behavior is to allow providers to
publish alpha and beta images that they want some users to be able to
use, but which they don't want to appear in lists of usable images.
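Restating those visibility rules as code may help; this is a sketch only, ignoring sharing and admin-API subtleties, and every name in it is invented for illustration rather than taken from glance's actual internals:

```python
# Illustrative sketch of the visibility rules described above.
# None of these names are glance's real internals.

def image_is_accessible(image_owner, is_public, request_owner, is_admin=False):
    """May this user fetch the image at all?"""
    if is_admin or image_owner is None:
        # Admins see everything; unowned images are always accessible.
        return True
    return image_owner == request_owner or is_public

def image_is_listed(image_owner, is_public, request_owner, is_admin=False):
    """Does the image show up in this user's image lists?"""
    if image_owner is None:
        # Unowned images are accessible, but listed only when public --
        # the "alpha/beta image" behavior described above.
        return is_public
    if is_admin or image_owner == request_owner:
        # Owners see their own images in lists even when private.
        return True
    return is_public
```

So an unowned, non-public image is usable by anyone who knows its ID but never appears in listings, which is exactly the provider alpha/beta case.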

The third piece is image sharing.  Images can be shared to other users;
these third-party users cannot change the image, but they are able to
see and use images shared with them, and may be delegated permission to
further share images.  Again, access controls are based on the owner
abstraction.

Now, the really long technical details:

The is_public attribute already existed in images; the owner
attribute has been added.  It may not be changed; attempts to change
owner are silently ignored (I wanted to allow the possibility that
users take the image description, change a value, and PUT it back
whole).  The glance command line tool has been updated to allow admins
to manipulate owner.

Sharing is a little more complicated.  There are a total of 5 operations
added: listing who an image has been shared with (image members);
listing images shared with someone; adding to or updating an entry in an
image's sharing list (the image's membership list); replacing an
image's membership list; and deleting an entry from an image's
membership list.

Retrieving an image's membership list is a GET
to /images/{image_id}/members; the returned JSON entry looks like:

{"members": [
    {"member_id": MEMBER,
     "can_share": SHARE_PERMISSION},
    ...
]}

Likewise, retrieving the list of images shared with a given owner is a
GET to /shared-images/{member}, and the returned JSON looks like:

{"shared_images": [
    {"image_id": IMAGE,
     "can_share": SHARE_PERMISSION},
    ...
]}

The entire membership list of an image may be replaced by doing a PUT
to /images/{image_id}/members with a JSON body like the following:

{"memberships": [
    {"member_id": MEMBER,
     "can_share": SHARE_PERMISSION},
    ...
]}

(Note that the can_share attribute is optional here; if not provided,
old entries preserve their can_share setting and new entries default
to False.  Any entries not in the body will be removed.)

Finally, individual entries can be added, updated, or deleted using PUT
or DELETE requests to /images/{image_id}/members/{member}.  For the PUT
operation, the body is optional, for specifying the can_share setting
of the membership:

{"member": {"can_share": SHARE_PERMISSION}}

If the body is not provided, existing entries have their can_share
setting preserved and new entries have it default to False.  Obviously,
no body can be provided when deleting an entry with the DELETE
operation.
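Putting the sharing endpoints together, a client-side sketch follows. The paths and payload shapes come from the description above; the helper names and the plain-tuple return values are my own invention for illustration:

```python
import json

# Build the (method, path, body) triples for the sharing operations
# described above. Endpoint shapes are from the message; everything
# else is illustrative.

def replace_members_request(image_id, members):
    """PUT /images/{image_id}/members -- replace the whole membership list."""
    body = json.dumps({"memberships": [
        {"member_id": member, "can_share": can_share}
        for member, can_share in members
    ]})
    return "PUT", "/images/%s/members" % image_id, body

def add_member_request(image_id, member, can_share=None):
    """PUT /images/{image_id}/members/{member} -- add or update one entry.

    The body is optional: omitting it preserves an existing entry's
    can_share and defaults a new entry's can_share to False."""
    path = "/images/%s/members/%s" % (image_id, member)
    if can_share is None:
        return "PUT", path, None
    return "PUT", path, json.dumps({"member": {"can_share": can_share}})

def delete_member_request(image_id, member):
    """DELETE /images/{image_id}/members/{member} -- no body allowed."""
    return "DELETE", "/images/%s/members/%s" % (image_id, member), None
```

For example, `add_member_request("42", "tenant-a", can_share=True)` yields a PUT to /images/42/members/tenant-a with body {"member": {"can_share": true}}.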

I should note that attempting to delete an image or update image
attributes when you do not own the image will result in a 404 error,
instead of a 403 error.  The sharing operations listed above return 403,
but the way glance is currently architected makes it more difficult to
return a 403 for the image update/delete operations.  I expect this to
be addressed by future work.
-- 
Kevin L. Mitchell 

[Openstack] Diablo RBP

2011-08-31 Thread Vishvananda Ishaya
Hey Everyone,

We managed to get a lot of features in for Diablo-4, and a few have gone in 
afterwards.  There are still a few that I would like to see land:

https://code.launchpad.net/~cbehrens/nova/rpc-kombu/+merge/73096
(fixes a number of bugs with rpc, leaves the carrot code in place so we can 
revert to carrot as default if issues arise)

https://code.launchpad.net/~cloudbuilders/nova/os-keypair-integration/+merge/72140
https://code.launchpad.net/~cloudbuilders/nova/os-user_id-description/+merge/72233
(These are extensions that aren't touching much and are key for feature parity 
in dashboard)

https://code.launchpad.net/~danwent/nova/qmanager-new/+merge/72526
(This is the quantum manager and allows integration with quantum.  It leaves 
the old managers in place so shouldn't break any existing functionality)

There are also lots of bugfix and cleanup proposals that need to be reviewed 
and merged as well.

Aside from the above branches, we should have no other feature branches go in. 
So it is time to focus on testing and filing and fixing bugs. I'm triaging and 
targeting bugs to the RBP.  You can find the list here:

https://launchpad.net/nova/+milestone/diablo-rbp
(More will be added as we find them)

There are two critical ones so far:

https://bugs.launchpad.net/nova/+bug/833552
(Rackspace-ozone is tackling this one)

https://bugs.launchpad.net/nova/+bug/834189
(I'm sending out an email discussing this one)

There are a number of bugs that haven't been triaged or targeted yet.  Here is 
a mostly complete list:

https://bugs.launchpad.net/nova/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_supervisor=&field.bug_commenter=&field.subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on

Please feel free to assign yourself to a bug and get cracking on it.  We don't 
have a lot of time left, so let's stabilize the best we can.

Vish


[Openstack] libvirt vs. Xen driver handling of local storage

2011-08-31 Thread Vishvananda Ishaya
Hey guys,

We have a very annoying discrepancy between how local space is used in the xen 
driver vs the libvirt driver.  I think it is vital that this is rectified 
before the Diablo release.  We already have a few functional gaps between the 
drivers, but the fact that disks are partitioned completely differently between 
the two is very confusing to users.

Bug is here: https://bugs.launchpad.net/nova/+bug/834189

The libvirt driver:

* downloads the image from glance
* resizes the image to 10G if it is < 10G
(in the case of a separate kernel and ramdisk image it extends the filesystem 
as well.  In the case of a whole-disk image it just resizes the file because it 
doesn't know enough to change the filesystem)
* attaches a second disk the size of local_gb to the image
(when using block device mapping through the ec2 api, more swap/ephemeral disks 
can be attached as volumes as well)

The XenServer driver (I'm less familiar with this code so please correct me if 
i am wrong here):
* downloads the image from glance
* creates a vdi from the base image
* resizes the vdi to the size of local_gb

The first method, resizing to 10G with a separate local_gb disk, is essentially 
the strategy taken by AWS.

Drawbacks of the first method:

1) The actual space used by the image is local_gb + 10G (or more if the base 
image is larger than 10G) which is inconsistent.

2) The guest has to deal with the annoyance of not having one large filesystem. 
 It is easier for the user if they can just use all the space that they have 
without thinking about it.

Drawbacks of the second method:

1) Limits cloud images to a particular format.  We can't always guarantee that 
we can resize the image properly.
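To make drawback 1 concrete, here is the space accounting for the two methods as plain arithmetic (a sketch of the behavior described above, nothing nova-specific; the 10G floor is the libvirt driver's resize target):

```python
MIN_ROOT_GB = 10  # the libvirt driver's minimum root-disk size

def libvirt_provisioned_gb(base_image_gb, local_gb):
    """Method 1: root grown to at least 10G, plus a separate local_gb disk."""
    return max(base_image_gb, MIN_ROOT_GB) + local_gb

def xenserver_provisioned_gb(base_image_gb, local_gb):
    """Method 2: a single VDI resized to local_gb (never below the base image)."""
    return max(base_image_gb, local_gb)
```

So a 2G base image on a flavor with local_gb=20 consumes 30G under the libvirt scheme but 20G under the XenServer scheme, which is the inconsistency described above.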


We need to decide on a common strategy and use it for both hypervisors.

Vish


Re: [Openstack] libvirt vs. Xen driver handling of local storage

2011-08-31 Thread Soren Hansen
2011/8/31 Vishvananda Ishaya vishvana...@gmail.com:
 Hey guys,
 We have a very annoying discrepancy between how local space is used in the
 xen driver vs the libvirt driver.  I think it is vital that this is
 rectified before the Diablo release.  We already have a few functional gaps
 between the drivers, but the fact that disks are partitioned completely
 differently between the two is very confusing to users.

Great! As you point out, there are a lot of these, and I'm very happy
that we're starting to sort those out, so thanks for raising this.

 Bug is here: https://bugs.launchpad.net/nova/+bug/834189
 The libvirt driver:
 * downloads the image from glance
 * resizes the image to 10G if it is < 10G
 (in the case of a separate kernel and ramdisk image it extends the
 filesystem as well.  In the case of a whole-disk image it just resizes the
 file because it doesn't know enough to change the filesystem)
 * attaches a second disk the size of local_gb to the image
 (when using block device mapping through the ec2 api, more swap/ephemeral
 disks can be attached as volumes as well)
 The XenServer driver (I'm less familiar with this code so please correct me
 if i am wrong here):
 * downloads the image from glance
 * creates a vdi from the base image
 * resizes the vdi to the size of local_gb
 The first method of resize to 10G and having separate local_gb is
 essentially the strategy taken by aws.
[...]
 Drawbacks of the second method:
 1) Limits cloud images to a particular format.  We can't always guarantee
 that we can resize the image properly.

Can you elaborate on this? Both methods resize the disk, how does the
second method impose more limitations than the first?


-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



Re: [Openstack] libvirt vs. Xen driver handling of local storage

2011-08-31 Thread Soren Hansen
2011/8/31 Vishvananda Ishaya vishvana...@gmail.com:
 Can you elaborate on this? Both methods resize the disk, how does the
 second method impose more limitations than the first?
 In the first case it is perfectly reasonable to have a whole disk image that 
 is of a decent size for the base image, so you can get by just fine with the 
 secondary attached disk if the resize does nothing. So to me that is a lot 
 more flexible.

Ah, gotcha. Makes sense.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



Re: [Openstack] libvirt vs. Xen driver handling of local storage

2011-08-31 Thread Chris Behrens
Vish,

I think Rackspace ozone/titan has some upcoming resize work for XenServer that 
might close some of the gap.

I think we need some options (flags) if we are to synchronize libvirt/xen.  At 
some point, Rackspace also needs an API extension to support a couple different 
ways of handling resizes.  Until we get there, we at least need an option to 
keep the xenserver code working as-is for now.  I assume others need the 
current libvirt implementation to stay as well.

That said, I think it's probably not too difficult to do the 'libvirt way' for 
Xen, but I don't know about it making diablo.
Adding support into libvirt to do the 'xen way' should be easier, I'd think.  
But I'm the opposite of you, Vish.  I don't know the libvirt layer as well. :)

If we can FLAG the way it works... and make these options work in both 
libvirt/xen, I think we can all remain happy.

- Chris



Re: [Openstack] Diablo RBP

2011-08-31 Thread Brian Lamar
I've heard this a couple of times recently and it's a fine idea. That being said, 
I'm not aware of anyone currently working on it. This would be a great thing 
to add for Essex and a great topic for the upcoming Design Summit in 
October.

Is this something you want to work on? Either way it would be a good idea to 
add a blueprint w/ wiki page describing what you'd like to get out of the 
documentation. Then we can discuss and prioritize for the next release.

-Original Message-
From: Joshua Harlow harlo...@yahoo-inc.com
Sent: Wednesday, August 31, 2011 7:09pm
To: Vishvananda Ishaya vishvana...@gmail.com, openstack 
openstack@lists.launchpad.net
Subject: Re: [Openstack] Diablo RBP

Is there any plan to document the internal-rpc format that will be used (with 
kombu, carrot...)

It would seem very important to have that and I haven't seen any documentation 
for it (maybe per release?)
