Re: [Openstack] use php to make container public or not

2011-12-06 Thread pf shineyear
Sorry, I did not use Keystone.

On Tue, Dec 6, 2011 at 7:10 PM, crayon_z crayon@gmail.com wrote:

 Hi, pf shineyear.

 It seems that you use swauth or tempauth as the auth system for Swift.
 Have you tried the keystone auth system? How to make a container private
 with keystone auth system? It seems that all the containers created are
 public.

 Thanks.

 --
 Crayon

  *Sender:* pf shineyear shin...@gmail.com
 *Date:* Wednesday, November 23, 2011, 12:47 PM
 *To:* openstack openstack@lists.launchpad.net
 *Subject:* [Openstack] use php to make container public or not
 PHP's curl binding has an annoying limitation: it cannot post an empty HTTP
 header. So if you want to perform a command like this from PHP:

 swift -A http://10.38.10.127:8080/auth/v1.0 -U AUTH_testa27:AUTH_testu27
 -K testp post -r ' ' mycontainer2

 the X-Container-Read header will not be sent, so you cannot switch a
 container from public back to private.

 I wrote some example code showing how to make it work in PHP: just use
 single quotes instead of double quotes, and append \n at the end of the
 header.

 Hope this helps.


 <?php

 $prefix = "AUTH_";
 $account_name = $prefix."testa27";
 $user_name = $prefix."testu27";
 $user_pass = "testp";
 $container = "mycontainer2";
 $user_token = "AUTH_tkb21f5fcfea144cf9a99ed1de9a577db2";
 $path = "/v1/".$account_name."/".$container;


 # if you want to make the container public, use this:
 #$http_headers = array("X-Auth-Token: ".$user_token,
 #                      'X-Container-Read: .r:*');

 # if you want to make the container private, use this:
 $http_headers = array("X-Auth-Token: ".$user_token,
                       'X-Container-Read: \n');

 print_r($http_headers);

 $ch = curl_init();

 curl_setopt($ch, CURLOPT_URL, "http://10.38.10.127:8080".$path);
 curl_setopt($ch, CURLOPT_POST, 1);
 curl_setopt($ch, CURLOPT_HTTPHEADER, $http_headers);
 curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
 curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 60);
 curl_setopt($ch, CURLINFO_HEADER_OUT, true);

 print "test set container public\n";

 curl_exec($ch);

 $headers = curl_getinfo($ch, CURLINFO_HEADER_OUT);

 print_r($headers);

 if (!curl_errno($ch))
 {
     $info = curl_getinfo($ch);

     if ((int)($info['http_code'] / 100) != 2)
     {
         print "set container public returned error ".$info['http_code']."\n";
         return -1;
     }
 }
 else
 {
     print "set container public error\n";
     return -2;
 }

 curl_close($ch);

 ?>
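 For comparison, here is a rough Python sketch of the same two operations.
 Python's HTTP client has no empty-header limitation, so a plain empty
 X-Container-Read value is enough to make the container private again. The
 host, account, and container values are the example ones from the post
 above; set_container_acl is an illustrative helper, not part of any Swift
 client library.

```python
import http.client

def build_acl_headers(token, public):
    # ".r:*" grants anonymous read access; an empty value clears the ACL
    # so only the owning account can read the container again.
    return {"X-Auth-Token": token,
            "X-Container-Read": ".r:*" if public else ""}

def set_container_acl(host, account, container, token, public):
    # POST to /v1/<account>/<container> with the ACL header set.
    conn = http.client.HTTPConnection(host)
    conn.request("POST", "/v1/%s/%s" % (account, container),
                 headers=build_acl_headers(token, public))
    status = conn.getresponse().status
    conn.close()
    return status

# e.g. set_container_acl("10.38.10.127:8080", "AUTH_testa27",
#                        "mycontainer2", user_token, public=False)
```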

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] use php to make container public or not

2011-12-06 Thread pf shineyear
Why don't you try swauth? It's great enough, I think...



[Openstack] On documentation management

2011-12-06 Thread Arnaud Quette
Dear Anne, Stefano and stackers,

I've seen a few mails passing around, on the documentation topic.

I've quickly looked at the source, and found my comment would still be relevant.
So here we go:

you seem to have the exact same documentation needs and issues I
faced some time ago:
- being able to version documentation, through a simple source format
- being able to generate a wide range of outputs (PDF, single / multi
page HTML, ePub, Groff manpage, DocBook, ...)
- having a lightweight and maintainable format (not DocBook, for
example) that allows you to focus on the content, not the technical
bits.

I found the light with Asciidoc, 2 years ago (pointed to it by ESR):
http://www.methods.co.nz/asciidoc/
It has an RST-like syntax, provides tons of plugins for output, allows
conditional and modularized inclusion, and is flexible and very nice in
general.
I've since revamped all the NUT documentation, including the
website, using it:
- the website and documentation:
http://www.networkupstools.org/documentation.html
- the source code (all .txt files + Makefile.am for the rules):
http://anonscm.debian.org/viewvc/nut/trunk/docs/

If you are interested, I can elaborate more.

See you soon on the power management topic...

cheers,
Arnaud
-- 
Linux / Unix Expert R&D - Eaton - http://powerquality.eaton.com
Network UPS Tools (NUT) Project Leader - http://www.networkupstools.org/
Debian Developer - http://www.debian.org
Free Software Developer - http://arnaud.quette.free.fr/



Re: [Openstack] [GLANCE] Proposal: Combine the container_format and disk_format fields in 2.0 Images API

2011-12-06 Thread Donal Lafferty
Hi Soren,

Had a quick look at the quote and your question below.

From reading the mailing list, it appears that Scott Mosen and possibly Jay 
Pipes are the experts on 'container type'.  

I'll let them step in and field this question...


DL
 

 -Original Message-
 From: Soren Hansen [mailto:so...@linux2go.dk]
 Sent: 05 December 2011 20:25
 To: Donal Lafferty
 Cc: Jay Pipes; openstack@lists.launchpad.net
 Subject: Re: [Openstack] [GLANCE] Proposal: Combine the container_format
 and disk_format fields in 2.0 Images API
 
 2011/12/5 Donal Lafferty donal.laffe...@citrix.com:
   Perhaps the finer details of what MIME-style categorization is are lost
   on me.
  Can you elaborate? Your original example was vhd/x-ms-tools which,
  to my eye, is simply a container type string with a vendor part
  added. What am I missing?
  'vhd' isn't a container type.  It's a disk format.  See
  http://glance.openstack.org/formats.html
 
 Err..
 
 a) What on earth counts as a container type then?
 
 b) That's really beside the point. It's a something with a vendor part 
 added.
 
 --
 Soren Hansen        | http://linux2go.dk/
 Ubuntu Developer    | http://www.ubuntu.com/
 OpenStack Developer | http://www.openstack.org/


Re: [Openstack] Cloud Computing StackExchange site proposal

2011-12-06 Thread Thierry Carrez
Stefano Maffulli wrote:
 On Mon, 2011-12-05 at 09:51 +0100, Thierry Carrez wrote:
 AskBot (Python/Django, GPL) - http://askbot.com/
 Used by Fedora at http://ask.fedoraproject.org/questions/, can use
 Launchpad OpenID for auth.
 
 I looked at AskBot; it doesn't seem to have any mechanism to build a FAQ or
 to mark a question as 'solved' or 'obsolete'. I noticed that people edit
 the questions, adding the word [solved] to the question, which is not
 very elegant.
 
 I also couldn't make the authentication work with Launchpad, but it worked
 with my other OpenID provider (claimID): is Launchpad having problems with
 OpenID? Twitter's OAuth worked. The customization of the UI seems to be
 limited, but it will need more investigation.

Launchpad OpenID worked for me. The URLs are of the form:
https://launchpad.net/~USER

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] S3 API for Swift

2011-12-06 Thread Khaled Ben Bahri

Hi Anne,

I think that's it :)

Thanks for your help

Best regards
Khaled

 From: a...@openstack.org
 Date: Mon, 5 Dec 2011 07:46:25 -0600
 Subject: Re: [Openstack] S3 API for Swift
 To: khaled-...@hotmail.com
 CC: openstack@lists.launchpad.net
 
 Hi Khaled -
 Is this what you are looking for?
  "The Swift3 middleware emulates the S3 REST API on top of Object Storage."
  from
  http://docs.openstack.org/diablo/openstack-object-storage/admin/content/configuring-openstack-object-storage-with-s3_api.html
 
 Hope this is helpful.
 Anne
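  For reference, the page Anne links boils down to a proxy-server.conf
  change along these lines (a sketch only; the exact pipeline contents and
  filter entry point depend on your swift3 version and auth setup):

```ini
# proxy-server.conf (sketch): put swift3 in the pipeline ahead of auth
[pipeline:main]
pipeline = catch_errors healthcheck swift3 swauth proxy-server

[filter:swift3]
use = egg:swift3#swift3
```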
 
 On Mon, Dec 5, 2011 at 4:08 AM, Khaled Ben Bahri khaled-...@hotmail.com 
 wrote:
  Hi all,
 
  I wonder if there is any API dedicated to OpenStack Swift that can be
  used by applications that use Amazon S3, in order to have more portability.

  I can't find enough information about this.
  Does anyone have information about this, please?
 
  Thanks in advance
 
  Best regards
  Khaled
 
 


Re: [Openstack] HPC with Openstack?

2011-12-06 Thread Daniel P. Berrange
On Mon, Dec 05, 2011 at 09:07:06PM -0500, Lorin Hochstein wrote:
 
 
 On Dec 4, 2011, at 7:46 AM, Soren Hansen wrote:
 
  2011/12/4 Lorin Hochstein lo...@isi.edu:
  Some of the LXC-related issues we've run into:
  
  - The CPU affinity issue on LXC you mention. Running LXC with OpenStack, 
  you
  don't get proper space sharing out of the box, each instance actually 
  sees
  all of the available CPUs. It's possible to restrict this, but that
  functionality doesn't seem to be exposed through libvirt, so it would have
  to be implemented in nova.

I recently added support for CPU affinity to the libvirt LXC driver. It will
be in libvirt 0.9.8. I also wired up various other cgroups tunables including
NUMA memory binding, block I/O tuning and CPU quota/period caps.
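In domain XML terms, the pinning Daniel describes ends up looking something
like this (a sketch; <cputune> and <vcpupin> are the libvirt elements
involved, the cpuset values are purely illustrative):

```xml
<domain type='lxc'>
  <vcpu cpuset='0-3'>2</vcpu>
  <cputune>
    <!-- pin each vCPU to a subset of host CPUs -->
    <vcpupin vcpu='0' cpuset='0-1'/>
    <vcpupin vcpu='1' cpuset='2-3'/>
    <!-- relative CPU weight (cgroups cpu.shares) -->
    <shares>1024</shares>
  </cputune>
</domain>
```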

  - LXC doesn't currently support volume attachment through libvirt. We were
  able to implement a workaround by invoking lxc-attach inside of OpenStack
  instead  (e.g., see
  https://github.com/usc-isi/nova/blob/hpc-testing/nova/virt/libvirt/connection.py#L482.
  But to be able to use lxc-attach, we had to upgrade the Linux kernel in
  RHEL6.1 from 2.6.32 to 2.6.38. This kernel isn't supported by SGI, which
  means that we aren't able to load the SGI numa-related kernel modules.

Can you clarify what you mean by volume attachment?

Are you talking about passing through host block devices, or hotplug of
further filesystems for the container ?

  Why not address these couple of issues in libvirt itself?

If you let me know what issues you have with libvirt + LXC in OpenStack,
I'll put them on my todo list.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[Openstack] Reminder: OpenStack general meeting - 21:00 UTC

2011-12-06 Thread Thierry Carrez
Hello everyone,

Our general meeting will take place at 21:00 UTC this Tuesday in
#openstack-meeting on IRC. PTLs, if you can't make it, please name a
substitute on [2].

We will focus on reviewing progress towards essex-2. We are one week
away from cutting the milestone-proposed branch, so new features need to
be proposed now in order to make it in time.

You can double-check what 21:00 UTC means for your timezone at [1]:
[1] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20111206T21

See the meeting agenda, edit the wiki to add new topics for discussion:
[2] http://wiki.openstack.org/Meetings/TeamMeeting

Cheers,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



[Openstack] keystone unscoped token

2011-12-06 Thread heut2008
hi,all

When I use credentials to ask for an unscoped token, should Keystone return
more info, such as endpoints and user info? Is this a bug, or is it intended
for another use?


Re: [Openstack] [RFC] Common config options module

2011-12-06 Thread Gary Kotton
Hi,
Will this module sync the configuration between the different nodes in
the system? That is, if the cloud has a number of compute nodes
running, will an updated configuration on one of them be replicated to
the additional nodes? If so, that's great; if not, would it be
possible to address this?
Thanks
Gary

-Original Message-
From: openstack-bounces+garyk=radware@lists.launchpad.net
[mailto:openstack-bounces+garyk=radware@lists.launchpad.net] On
Behalf Of Vishvananda Ishaya
Sent: Tuesday, December 06, 2011 1:36 AM
To: Mark McLoughlin; John Dickinson
Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
Subject: Re: [Openstack] [RFC] Common config options module

Just read through the description and the code.  I don't have any issues
with the way it is implemented, although others may have some
suggestions/tweaks.  I think it is most important to get the common code
established, so I'm up for implementing your changes in Nova.  I think it
is important to get buy-in from Jay and the Glance team ASAP as well.

It would also be great if the Swift team could do a quick review and at
least give us a heads up on whether there are any blockers to moving to
this eventually.  They have a huge install base, so changing their
config files could be significantly more difficult, but it doesn't look
too different from what they are doing.  John, thoughts?

Vish

On Nov 28, 2011, at 7:09 AM, Mark McLoughlin wrote:

 Hey,
 
 I've just posted this blueprint:
 
  https://blueprints.launchpad.net/openstack-common/+spec/common-config
  http://wiki.openstack.org/CommonConfigModule
 
 The idea is to unify option handling across projects with this new
API.
 The module would eventually (soon?) live in openstack-common.
 
 Code and unit tests here:
 
  https://github.com/markmc/nova/blob/common-config/nova/common/cfg.py

https://github.com/markmc/nova/blob/common-config/nova/tests/test_cfg.py
 
 And patches to make both Glance and Nova use it are on the
 'common-config' branches of my github forks:
 
  https://github.com/markmc/nova/commits/common-config
  https://github.com/markmc/glance/commits/common-config
 
 Glance and (especially) Nova still need a bunch of work to be fully
 switched over to the new model, but both trees do actually appear to
 work fine and could be merged now.
 
 Lots of detail in there, but all comments are welcome :)
 
 Thanks,
 Mark.
 
 




Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread Thierry Carrez
So the general consensus so far on this discussion seems to be:

(0) The 2011.3 release PPA bears false expectations and should be
removed now. In the future, we should not provide such PPAs: 0-day
packages for the release should be available from the last milestone
PPA anyway.

(1) OpenStack, as an upstream project, should focus on development
rather than on providing a production-ready distribution.

(2) We could provide daily builds from the stable/diablo branch for a
variety of releases (much like what we do for the master branch), but
those should be clearly marked "not for production use" and be
best-effort only (like our master branch builds).

(3) This should not prevent a group in the community from working on a
project providing an "OpenStack on Lucid" production-ready distribution
if it so wishes. This project would just be another distribution of
OpenStack.

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] On documentation management

2011-12-06 Thread Anne Gentle
Hi Arnaud -
Asciidoc is great and we can work it into our toolchain. O'Reilly
offers authors the option of either authoring in DocBook (which is
most of our source on docs.openstack.org) or Asciidoc.

I like to accept documentation in any format - even Word docs printed
and handed to me from a briefcase at the Design Summit. We had many
pages of documentation already in DocBook when I joined. I'm
interested in building a community of contributors and finding ways to
enable their contributions, so all formats are welcomed.

Feel free to contact me with specific ideas about Asciidoc contributions.

Thanks,
Anne

On Tue, Dec 6, 2011 at 2:20 AM, Arnaud Quette aquette@gmail.com wrote:
 Dear Anne, Stefano and stackers,

 I've seen a few mails passing around, on the documentation topic.

 I've quickly looked at the source, and found my comment would still be 
 relevant.
 So here we go:

 you seem to have the exact same documentation needs and issues I've
 faced some time ago:
 - being able to version documentation, through a simple source format
 - being able to generate a wide range of output (PDF, single / multi
 page HTML, ePub, Groff manpage, Docbook, ...)
 - have a lightweight and maintainable format (not docbook for
 example), that allows to focus on the content, and not the technical
 bits.

 I found the light with Asciidoc, 2 years ago (pointed by ESR):
 http://www.methods.co.nz/asciidoc/
 It has an RST like syntax, provides tons of plugins for output, allows
 to condition and modularize inclusion is flexible and very nice in
 general.
 I've since then revamped the whole NUT documentations, including the
 website, using this:
 - the website and documentations:
 http://www.networkupstools.org/documentation.html
 - the source code (all .txt files + Makefile.am for the rules):
 http://anonscm.debian.org/viewvc/nut/trunk/docs/

 If you are interested in, I can elaborate more.

 See you soon on the power management topic...

 cheers,
 Arnaud
 --
 Linux / Unix Expert RD - Eaton - http://powerquality.eaton.com
 Network UPS Tools (NUT) Project Leader - http://www.networkupstools.org/
 Debian Developer - http://www.debian.org
 Free Software Developer - http://arnaud.quette.free.fr/



Re: [Openstack] keystone unscoped token

2011-12-06 Thread Dolph Mathews
The unscoped token keystone returns to you allows you to call GET /tenants
and exchange your unscoped token for one scoped to a tenant. This is
documented in the API developer guide, but the following functional test
illustrates the flow from a client perspective pretty well:

keystone.test.functional.test_auth.TestServiceAuthentication
  test_user_auth_with_role_on_tenant()


Link:
https://github.com/openstack/keystone/blob/master/keystone/test/functional/test_auth.py
See line ~119
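In client terms, the flow Dolph describes is two POSTs to /tokens with a
GET /tenants in between. A rough sketch against the v2.0 API follows; the
function names are illustrative, and the request bodies follow the shape of
the 2.0 developer guide:

```python
import json
import urllib.request

def unscoped_auth_body(username, password):
    # Step 1: credentials only -> Keystone returns an unscoped token.
    return {"auth": {"passwordCredentials":
                     {"username": username, "password": password}}}

def scoped_auth_body(unscoped_token, tenant_name):
    # Step 3: exchange the unscoped token for one scoped to a tenant.
    return {"auth": {"token": {"id": unscoped_token},
                     "tenantName": tenant_name}}

def post_tokens(keystone_url, body):
    # POST the auth body to the tokens resource and decode the reply.
    req = urllib.request.Request(
        keystone_url + "/v2.0/tokens",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Step 2 (not shown): GET /v2.0/tenants with the unscoped token in an
# X-Auth-Token header to list the tenants you may scope to.
```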

-Dolph

On Tue, Dec 6, 2011 at 5:34 AM, heut2008 heut2...@gmail.com wrote:

 hi,all

 when use crendentials to ask for  a unscoped token,should  keystone offers
  more info such as endpoints and user info? is it a bug or for other use?






Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread Tim Bell

Thierry,

I'm not clear on who will be maintaining the stable/diablo branch. The people 
packaging for EPEL on Red Hat systems, for example, need something with the 
appropriate bug fixes backported.

There are an increasing number of sites looking to deploy in production and 
cannot follow the latest development version.

Tim



Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread Duncan McGreggor
On 06 Dec 2011 - 10:11, Duncan McGreggor wrote:
 On 06 Dec 2011 - 14:28, Thierry Carrez wrote:
  So the general consensus so far on this discussion seems to be:
 
  (0) The 2011.3 release PPA bears false expectations and should be
  removed now. In the future, we should not provide such PPAs: 0-day
  packages for the release should be available from the last milestone
  PPA anyway.
 
  (1) OpenStack, as an upstream project, should focus on development
  rather than on providing a production-ready distribution.
 
  (2) We could provide daily builds from the stable/diablo branch for a
  variety of releases (much like what we do for the master branch), but
  those should be clearly marked not for production use and be
  best-effort only (like our master branch builds).
 
  (3) This should not prevent a group in the community from working on a
  project providing an openstack on Lucid production-ready distribution
  if they so wishes. This project would just be another distribution of
  OpenStack.

 This doesn't seem like enough to me. OpenStack isn't just a library;
 it's a fairly substantial collection of software and services, intended
 to be used as a product. If it can't be used as a product, what's the
 use?

 Someone

It was Loic Dachary. He said:


However, as it evolves towards a system widely used in production, it
will face new challenges and the communities working on packaging for
each distribution will provide valuable input to developers. Creating a
packaging team with representatives for each distribution and electing
someone to represent them in the Policy Board could achieve this.


d

 made the suggestion that a new OpenStack group be started, one
 whose focus is on producing a production-ready, distribution-ready,
 release of the software. So can we add one more (need some help with
 wording, here...):

 (4) OpenStack will accept and foster a new project, one that is not
 focused on development, but rather on the distribution and its general
 stability. This distro project will be responsible for advocating on
 behalf of various operating systems/distros/sponsoring vendors for bugs
 that affect performance and stability of OpenStack, or prevent an
 operating system from running OpenStack.

 Thoughts?

 d



Re: [Openstack] Swift - set preferred proxy/datanodes (cross datacenter schema)

2011-12-06 Thread Caitlin Bestler
Leandro Reox asked:


 We're replicating our datacenter in another location (something like Amazon's 
 east and west coast regions). Thinking about our applications and
 our use of Swift: is there any way we can set up weights for our 
 datanodes so that if a request enters via, for example, DATACENTER 1,
 the main copy of the data is written to a datanode in the 
 SAME datacenter, or read from the same datacenter, so
 that when we read it via a proxy node of the same datacenter we 
 don't add the latency between the two datacenters?
 The motto is: if a request to write or read enters via DATACENTER 1, it is 
 served via proxy nodes/datanodes located in DATACENTER 1,
 and the replicas then get copied across zones over both datacenters.
 Routing the request to specific proxy nodes is easy, but I don't know if 
 Swift has a way to manage this internally for the datanodes too.

I don't see how you would accomplish that with the current Swift infrastructure.

An object is hashed to a partition, and the ring determines where replicas of 
that partition are stored.

What you seem to be suggesting is that when an object is created in region X 
it should be assigned to a partition that is primarily stored in region X,
while if the same object had been created in region Y it would be assigned to a 
partition that is primarily stored in region Y.

The problem is that where this object was first created is not a contributor 
to the hash algorithm, nor could it be, since there is no way for someone
trying to get that object to know where it was first created.

What I think you are looking for is a solution where you have *two* rings, 
DATACENTER-WEST and DATACENTER-EAST. Both of these rings would have
an adequate number of replicas to function independently, but would 
asynchronously update each other to provide eventual consistency.

That would use more disk space, but avoids making all updates wait for the data 
to be updated at each site.
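Caitlin's point — that placement depends only on the object's name hash,
never on where the request entered — can be seen in a toy ring model (a
simplified sketch, not Swift's actual ring code):

```python
import hashlib

def partition_for(path, part_power=8):
    # A ring maps an object path to a partition purely from a hash of
    # the name; nothing about the requesting datacenter is involved.
    digest = hashlib.md5(path.encode()).hexdigest()
    return int(digest, 16) >> (128 - part_power)

def replicas_for(partition, devices, replica_count=3):
    # Toy placement: walk the device list starting at the partition index.
    return [devices[(partition + i) % len(devices)]
            for i in range(replica_count)]

devices = ["dc1-node1", "dc1-node2", "dc2-node1", "dc2-node2"]
part = partition_for("/AUTH_test/mycontainer2/obj")
# Whichever datacenter a request enters through, the same name always
# yields the same partition and therefore the same replica devices:
assert replicas_for(part, devices) == replicas_for(part, devices)
```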



Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread Chuck Short
On Tue, 6 Dec 2011 10:11:28 -0800
Duncan McGreggor dun...@dreamhost.com wrote:

 On 06 Dec 2011 - 14:28, Thierry Carrez wrote:
  So the general consensus so far on this discussion seems to be:
 
  (0) The 2011.3 release PPA bears false expectations and should be
  removed now. In the future, we should not provide such PPAs: 0-day
  packages for the release should be available from the last
  milestone PPA anyway.
 
  (1) OpenStack, as an upstream project, should focus on development
  rather than on providing a production-ready distribution.
 
  (2) We could provide daily builds from the stable/diablo branch
  for a variety of releases (much like what we do for the master
  branch), but those should be clearly marked not for production
  use and be best-effort only (like our master branch builds).
 
  (3) This should not prevent a group in the community from working
  on a project providing an openstack on Lucid production-ready
  distribution if they so wishes. This project would just be another
  distribution of OpenStack.
 
 This doesn't seem like enough to me. OpenStack isn't just a library;
 it's a fairly substantial collection of software and services,
 intended to be used as a product. If it can't be used as a product,
 what's the use?
 
 Someone made the suggestion that a new OpenStack group be started, one
 whose focus is on producing a production-ready, distribution-ready,
 release of the software. So can we add one more (need some help with
 wording, here...):
 
  (4) OpenStack will accept and foster a new project, one that is not
  focused on development, but rather on the distribution and its general
  stability. This distro project will be responsible for advocating on
 behalf of various operating systems/distros/sponsoring vendors for
 bugs that affect performance and stability of OpenStack, or prevent an
 operating system from running OpenStack.
 
 Thoughts?
 
 d

Hi

We already have a little informal channel on freenode called
#openstack-packaging.

Regards
chuck




Re: [Openstack] Swift - set preferred proxy/datanodes (cross datacenter schema)

2011-12-06 Thread Leandro Reox
Caitlin,

Thanks for your quick response. So there isn't any way to avoid it if a request
comes in to DATACENTER 1 and Swift's placement mechanism says the object
should be fetched from DATACENTER 2?
What about container sync?

Regards

On Tue, Dec 6, 2011 at 3:49 PM, Caitlin Bestler caitlin.best...@nexenta.com
 wrote:

  Lendro Reox asked:



  We're replicating our datacenter in another location (something like
 Amazons east and west coast) , thinking about our applications and

  our use of Swift, is there any way that we can set up weights for our
 datanodes so if a request enter via for example DATACENTER 1 ,

   then we want the main copy of the data being written on a datanode on
 the SAME datacenter o read from the same datacenter, so

  when we want to read it and comes from a proxy node of the same
 datacenter we dont add delay of the latency between the two datacenters.
  The moto is if a request to write or read enters via DATACENTER 1 then
 is served via proxynodes/datanodes located on DATACENTER 1,
  then the replicas gets copied across zones over both datacenters.

  Routing the request to especific proxy nodes is easy, but dont know if
 swift has a way to manage this internally too for the datanodes 

 I don't see how you would accomplish that with the current Swift
 infrastructure.

 An object is hashed to a partition, and the ring determines where replicas
 of that partition are stored.

 What you seem to be suggesting is that when an object is created in region
 X it should be assigned to a partition that is primarily stored in region
 X, while if the same object had been created in region Y it would be
 assigned to a partition that is primarily stored in region Y.

 The problem is that "where this object was first created" is not a
 contributor to the hash algorithm, nor could it be, since there is no way
 for someone trying to get that object to know where it was first created.

 What I think you are looking for is a solution where you have *two* rings,
 DATACENTER-WEST and DATACENTER-EAST. Both of these rings would have an
 adequate number of replicas to function independently, but would
 asynchronously update each other to provide eventual consistency.

 That would use more disk space, but avoids making all updates wait for the
 data to be updated at each site.
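The hash-to-partition mapping Caitlin describes can be sketched in a few lines. This is an illustrative reduction of what the ring does, not Swift's actual code (real Swift also mixes a cluster-wide hash-path suffix into the path before hashing, and the part power here is an arbitrary example):

```python
import hashlib
import struct

PART_POWER = 18  # illustrative; a real cluster fixes this when the ring is built

def object_partition(account, container, obj, part_power=PART_POWER):
    # Hash the object path and keep only the top `part_power` bits of the
    # MD5 digest; the ring then maps that partition number to devices.
    path = "/%s/%s/%s" % (account, container, obj)
    digest = hashlib.md5(path.encode("utf-8")).digest()
    return struct.unpack_from(">I", digest)[0] >> (32 - part_power)

part = object_partition("AUTH_test", "mycontainer", "photo.jpg")
```

Because only the path feeds the hash, the same object lands on the same partition no matter which datacenter the request enters through, which is exactly why the requested locality cannot be expressed in a single ring.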



Re: [Openstack] [RFC] Common config options module

2011-12-06 Thread John Dickinson
Overall, I think it's a great thing to have commonality between the projects on 
option names and environment variables. I think it's worthwhile to push for 
that in the swift cli tool in the essex timeframe.

On the topic of common config libraries, though, I think the differences are 
less important. Mark's wiki page describing a common config library sounds 
interesting, but I agree with Monty that it may be better to be a separate (ie 
non-openstack) module.

--John


On Dec 5, 2011, at 3:36 PM, Vishvananda Ishaya wrote:

 Just read through the description and the code.  I don't have any issues with 
 the way it is implemented, although others may have some suggestions/tweaks.  
 I think it is most important to get the common code established, so I'm up 
 for implementing your changes in Nova.  I think it is important to get buy-in 
 from Jay and the Glance team ASAP as well.
 
 It would also be great if the Swift team could do a quick review and at least 
 give us a heads up on whether there are any blockers to moving to this 
 eventually.  They have a huge install base, so changing their config files 
 could be significantly more difficult, but it doesn't look too different 
 from what they are doing.  John, thoughts?
 
 Vish
 
 On Nov 28, 2011, at 7:09 AM, Mark McLoughlin wrote:
 
 Hey,
 
 I've just posted this blueprint:
 
 https://blueprints.launchpad.net/openstack-common/+spec/common-config
 http://wiki.openstack.org/CommonConfigModule
 
 The idea is to unify option handling across projects with this new API.
 The module would eventually (soon?) live in openstack-common.
 
 Code and unit tests here:
 
 https://github.com/markmc/nova/blob/common-config/nova/common/cfg.py
 https://github.com/markmc/nova/blob/common-config/nova/tests/test_cfg.py
 
 And patches to make both Glance and Nova use it are on the
 'common-config' branches of my github forks:
 
 https://github.com/markmc/nova/commits/common-config
 https://github.com/markmc/glance/commits/common-config
 
 Glance and (especially) Nova still need a bunch of work to be fully
 switched over to the new model, but both trees do actually appear to
 work fine and could be merged now.
 
 Lots of detail in there, but all comments are welcome :)
 
 Thanks,
 Mark.
 
 


Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread Michael Pittaro
On Tue, Dec 6, 2011 at 10:11 AM, Duncan McGreggor dun...@dreamhost.com wrote:
 On 06 Dec 2011 - 14:28, Thierry Carrez wrote:
 So the general consensus so far on this discussion seems to be:

 (0) The 2011.3 release PPA bears false expectations and should be
 removed now. In the future, we should not provide such PPAs: 0-day
 packages for the release should be available from the last milestone
 PPA anyway.

 (1) OpenStack, as an upstream project, should focus on development
 rather than on providing a production-ready distribution.

 (2) We could provide daily builds from the stable/diablo branch for a
 variety of releases (much like what we do for the master branch), but
 those should be clearly marked not for production use and be
 best-effort only (like our master branch builds).

 (3) This should not prevent a group in the community from working on a
 project providing an openstack on Lucid production-ready distribution
 if it so wishes. This project would just be another distribution of
 OpenStack.

 This doesn't seem like enough to me. OpenStack isn't just a library;
 it's a fairly substantial collection of software and services, intended
 to be used as a product. If it can't be used as a product, what's the
 use?

 Someone made the suggestion that a new OpenStack group be started, one
 whose focus is on producing a production-ready, distribution-ready,
 release of the software. So can we add one more (need some help with
 wording, here...):

 (4) OpenStack will accept and foster a new project, one that is not
 focused on development, but rather on the distribution and its general
 stability. This distro project will be responsible for advocating on
 behalf of various operating systems/distros/sponsoring vendors for bugs
 that affect performance and stability of OpenStack, or prevent an
 operating system from running OpenStack.

 Thoughts?


+1 on this idea - I think it has a lot of benefits in coordinating
distro activity.

mike



Re: [Openstack] Swift - set preferred proxy/datanodes (cross datacenter schema)

2011-12-06 Thread andi abes
You could try to use the container sync added in 1.4.4.

The scheme would be to setup 2 separate clusters in each data center.
Obviously requests will be satisfied locally.
You will also setup your containers identically, and configure them to
sync, to make sure data is available in both DC's.

You might want to consider how many replicas you want in each data center,
and how you'd recover from failures, rather than just setting up 2 DC x 3-5
replicas for each object.
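A minimal sketch of what the sync setup looks like at the HTTP level, assuming the 1.4.4 container-sync headers (the token, URL, and key values are made-up placeholders):

```python
# Illustrative sketch of enabling container sync between two clusters.
# The X-Container-Sync-* header names are Swift's container-sync API;
# everything else here is a placeholder.
def container_sync_headers(token, peer_container_url, sync_key):
    # POST these to the local container; repeat the mirror image on the
    # peer cluster to get two-way synchronization.
    return {
        "X-Auth-Token": token,
        "X-Container-Sync-To": peer_container_url,
        "X-Container-Sync-Key": sync_key,  # shared secret, identical on both sides
    }

headers = container_sync_headers(
    "AUTH_tk0123456789",
    "https://dc2.example.com:8080/v1/AUTH_acct/mycontainer",
    "secret-sync-key",
)
```

Each cluster then pushes new objects in the container to its peer asynchronously, which gives the local-read/local-write behavior at the cost of eventual (not immediate) consistency between the datacenters.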

a.






Re: [Openstack] [RFC] Common config options module

2011-12-06 Thread Jay Pipes
On Tue, Dec 6, 2011 at 2:22 PM, John Dickinson
john.dickin...@rackspace.com wrote:
 Overall, I think it's a great thing to have commonality between the projects 
 on option names and environment variables. I think it's worthwhile to push 
 for that in the swift cli tool in the essex timeframe.

 On the topic of common config libraries, though, I think the differences are 
 less important. Mark's wiki page describing a common config library sounds 
 interesting, but I agree with Monty that it may be better to be a separate 
 (ie non-openstack) module.

Could you explain the above a bit more? Mark has already posted code
for a common config module and I think the idea is to have it live in
the openstack-common Python library and have Nova, Glance and Swift
import that openstack.common library...
-jay



Re: [Openstack] [DODCS] HPC with Openstack?

2011-12-06 Thread Daniel P. Berrange
On Tue, Dec 06, 2011 at 12:04:53PM -0800, Dong-In David Kang wrote:
 
 
 - Original Message -
  On Mon, Dec 05, 2011 at 09:07:06PM -0500, Lorin Hochstein wrote:
  
  
   On Dec 4, 2011, at 7:46 AM, Soren Hansen wrote:
  
2011/12/4 Lorin Hochstein lo...@isi.edu:
Some of the LXC-related issues we've run into:
   
- The CPU affinity issue on LXC you mention. Running LXC with
OpenStack, you
don't get proper space sharing out of the box, each instance
actually sees
all of the available CPUs. It's possible to restrict this, but
that
functionality doesn't seem to be exposed through libvirt, so it
would have
to be implemented in nova.
  
  I recently added support for CPU affinity to the libvirt LXC driver.
  It will
  be in libvirt 0.9.8. I also wired up various other cgroups tunables
  including
  NUMA memory binding, block I/O tuning and CPU quota/period caps.
 
   Great news! 
  We are also looking forward to seeing SElinux 'sVirt' support for
 LXC by libvirt.
 When do you think it will be available? 
 In libvirt-0.9.8?

0.9.8 is due out any day now, so not that. My goal is to get it
done by the Fedora 17 development freeze, so hopefully 0.9.9,
or 0.9.10 worst case.
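For readers wanting to try those cgroups tunables once they land, a hedged sketch of the relevant fragment of an LXC guest definition (the element names are libvirt's; all values are illustrative):

```xml
<domain type='lxc'>
  <name>instance-000001</name>
  <memory>524288</memory>
  <!-- pin the container's tasks to host CPUs 0-1 -->
  <vcpu cpuset='0-1'>1</vcpu>
  <cputune>
    <shares>1024</shares>      <!-- relative weight vs. other guests -->
    <period>100000</period>    <!-- scheduling period, in microseconds -->
    <quota>50000</quota>       <!-- cap at half of one CPU -->
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <!-- os/init, filesystem and device elements omitted -->
</domain>
```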

  By volume attachment, yes, we mean passing through host block devices that 
 is dynamically created by 
 nova-volume service (using iscsi).
 
 
Why not address these couple of issues in libvirt itself?
  
  If you let me know what issues you have with libvirt + LXC in
  OpenStack,
  I'll put them on my todo list.
  
 
  As Lorin said we implemented it using lxc-attach. 
 With lxc-attach we could pass the major/minor number of the (dynamically 
 created) devices to the LXC instance.
 And with lxc-attach we could do mknod inside of the LXC instance.
 I think supporting that by libvirt would be very useful.
 However, it needs lxc-attach working for the Linux kernel. 
 We had to upgrade and patch Linux kernel for that purpose.
 If there is a better way, it would be wonderful.
 But I don't know if there is a way other than using lxc-attach.

Yeah, I don't see any practical way to do hotplug with LXC without
having the kernel support merged for attaching to all types of
namespace. Once that's available it will be simple to do it via
libvirt.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread Thierry Carrez
Tim Bell wrote:
 I'm not clear on who will be maintaining the stable/diablo  branch.  The 
 people such as EPEL for RedHat systems need to have something with the 
 appropriate bug fixes back ported.
 
 There are an increasing number of sites looking to deploy in production and 
 cannot follow the latest development version.

Agreed on the need, we discussed this at length during the design
summit. The stable branches have been established and are maintained by
the OpenStack Stable Branch Maintainers team. Currently this team is
mostly made of distribution members (Ubuntu and Fedora/RedHat, mostly)
collaborating on a single branch to avoid duplication of effort.

See:
https://launchpad.net/~openstack-stable-maint
http://wiki.openstack.org/StableBranch

Regards,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread ghe. rivero


 (4) OpenStack will accept and foster a new project, one that is not
 focused on development, but rather on the distribution and its general
 stability. This distro project will be responsible for advocating on
 behalf of various operating systems/distros/sponsoring vendors for bugs
 that affect performance and stability of OpenStack, or prevent an
 operating system from running OpenStack.


+1

Every system/distro has its own way to work, but most of them share a
common way to do things. I think (as a Debian OpenStack packaging team
member) that this group could help a lot to improve the deployment of
OpenStack, focusing only on packaging/distribution, not development. It
should be a win-win for every distro and for OpenStack itself.

Ghe Rivero

-- 
 .''`.  Pienso, Luego Incordio
: :' :
`. `'
  `-www.debian.orgwww.hispalinux.es

GPG Key: 26F020F7
GPG fingerprint: 4986 39DA D152 050B 4699  9A71 66DB 5A36 26F0 20F7


Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread Thierry Carrez
Duncan McGreggor wrote:
 This doesn't seem like enough to me. OpenStack isn't just a library;
 it's a fairly substantial collection of software and services, intended
 to be used as a product. If it can't be used as a product, what's the
 use?
 
 Someone made the suggestion that a new OpenStack group be started, one
 whose focus is on producing a production-ready, distribution-ready,
 release of the software. So can we add one more (need some help with
 wording, here...):
 
 (4) OpenStack will accept and foster a new project, one that is not
 focused on development, but rather on the distribution and its general
 stability. This distro project will be responsible for advocating on
 behalf of various operating systems/distros/sponsoring vendors for bugs
 that affect performance and stability of OpenStack, or prevent an
 operating system from running OpenStack.

I don't think you need a project (openstack projects are about upstream
software): you need a *team* to coordinate distribution efforts and make
sure openstack projects are packageable etc.

Like zul said, that team actually already informally exists and has an
IRC channel :)

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Swift - set preferred proxy/datanodes (cross datacenter schema)

2011-12-06 Thread andi abes
sorry, should have included the link:
http://swift.openstack.org/overview_container_sync.html


On Tue, Dec 6, 2011 at 2:49 PM, andi abes andi.a...@gmail.com wrote:

 You could try to use the container sync added in 1.4.4.

 The scheme would be to setup 2 separate clusters in each data center.
 Obviously requests will be satisfied locally.
 You will also setup your containers identically, and configure them to
 sync, to make sure data is available in both DC's.

 You might want to consider how many replicas you want in each data center,
 and how you'd recover from failures, rather than just setting up 2 DC x 3-5
 replicas for each object.

 a.







Re: [Openstack] [DODCS] HPC with Openstack?

2011-12-06 Thread Chuck Short
Hi,

So I was the developer who initially added LXC support; I have some
comments inline.

 On Tue, 6 Dec 2011 12:04:53 -0800 (PST)
Dong-In David Kang dk...@isi.edu wrote:

 
 
 - Original Message -
  On Mon, Dec 05, 2011 at 09:07:06PM -0500, Lorin Hochstein wrote:
  
  
   On Dec 4, 2011, at 7:46 AM, Soren Hansen wrote:
  
2011/12/4 Lorin Hochstein lo...@isi.edu:
Some of the LXC-related issues we've run into:
   
- The CPU affinity issue on LXC you mention. Running LXC with
OpenStack, you
don't get proper space sharing out of the box, each instance
actually sees
all of the available CPUs. It's possible to restrict this, but
that
functionality doesn't seem to be exposed through libvirt, so it
would have
to be implemented in nova.
  
  I recently added support for CPU affinity to the libvirt LXC driver.
  It will
  be in libvirt 0.9.8. I also wired up various other cgroups tunables
  including
  NUMA memory binding, block I/O tuning and CPU quota/period caps.


We are running libvirt 0.9.7 in Ubuntu for Pangolin  and I expect to
have 0.9.8 before Pangolin is released.

   Great news! 
  We are also looking forward to seeing SElinux 'sVirt' support for
 LXC by libvirt.
 When do you think it will be available? 
 In libvirt-0.9.8?
 
  
- LXC doesn't currently support volume attachment through
libvirt. We were
able to implement a workaround by invoking lxc-attach inside
of OpenStack
instead (e.g., see
https://github.com/usc-isi/nova/blob/hpc-testing/nova/virt/libvirt/connection.py#L482.
But to be able to use lxc-attach, we had to upgrade the Linux
kernel in
RHEL6.1 from 2.6.32 to 2.6.38. This kernel isn't supported by
SGI, which
means that we aren't able to load the SGI numa-related kernel
modules.
  
  Can you clarify what you mean by volume attachment ?
  
  Are you talking about passing through host block devices, or hotplug
  of
  further filesystems for the container ?
  
 
  We tried both libvirt-0.9.3 and libvirt-0.9.7.
 For both versions, attachvolume called by OpenStack failed when the
 target instance is an LXC instance. In
 nova/virt/libvirt/connection.py, virt_dom.attachDevice(xml) failed
 for an LXC instance. virt_dom.attachDevice(xml) is calling libvirt
 API.
 
  By volume attachment, yes, we mean passing through host block
 devices that is dynamically created by nova-volume service (using
 iscsi).
 

This is on my todo list for essex. 

Why not address these couple of issues in libvirt itself?
  
  If you let me know what issues you have with libvirt + LXC in
  OpenStack,
  I'll put them on my todo list.
  
 
  As Lorin said we implemented it using lxc-attach. 
 With lxc-attach we could pass the major/minor number of the
 (dynamically created) devices to the LXC instance. And with lxc-attach
 we could do mknod inside of the LXC instance. I think supporting
 that by libvirt would be very useful. However, it needs lxc-attach
 working for the Linux kernel. We had to upgrade and patch Linux
 kernel for that purpose. If there is a better way, it would be
 wonderful. But I don't know if there is a way other than using
 lxc-attach.
 
  Thanks,
 
  David.
 
 
  Regards,
  Daniel
  --
  |: http://berrange.com -o-
  http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org
  -o- http://virt-manager.org :| |: http://autobuild.org -o-
  http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org
  -o- http://live.gnome.org/gtk-vnc :|
  ___ DODCS mailing list
  do...@mailman.isi.edu
  http://mailman.isi.edu/mailman/listinfo/dodcs
 




Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread Tim Bell

We need more than 'just' packaging: it is the testing, documentation, 
and above all the care to produce *and* maintain a stable release that 
production sites can rely on for 6-12 months, knowing that others are 
relying on it too.

Who is going to make the judgement that a bug fix to the latest Essex 
development branch is a valid patch for a backport to stable/diablo and does 
not break production sites ?

Diablo 2011.3 brought much functionality but also some useful points to 
consider for the future as to how we organise the project.

Tim

 
 (4) OpenStack will accept and foster a new project, one that is not 
 focused on development, but rather on the distribution and its general 
 stability. This distro project will be responsible for advocating on 
 behalf of various operating systems/distros/sponsoring vendors for 
 bugs that affect performance and stability of OpenStack, or prevent an 
 operating system from running OpenStack.
 
 Thoughts?
 
 d

Hi

We already have a little informal channel on freenode called
#openstack-packaging.

Regards
chuck




Re: [Openstack] [GLANCE] Proposal: Combine the container_format and disk_format fields in 2.0 Images API

2011-12-06 Thread Soren Hansen
2011/12/6 Jay Pipes jaypi...@gmail.com:
 On Fri, Dec 2, 2011 at 3:48 AM, Soren Hansen so...@linux2go.dk wrote:
 2011/12/1 Jay Pipes jaypi...@gmail.com:
 There are basically two things that are relevant: The image type and the
 container format.

 The image type can be either of kernel, ramdisk, filesystem, iso9660,
 disk, or other.
 What value does other give the caller?

It's meant to denote something that isn't a kernel, ramdisk, filesystem,
iso9660, nor disk. Maybe swap space, maybe a raw block device for
Oracle.. Something that's distinct from the other things, but isn't
common enough to warrant its own designation.

 The container type can be: raw, cow, qcow, qcow2, vhd, vmdk, vdi or qed
 (and probably others I've forgotten).
 What about OVA? As I understand it, OVA is the single, tar'd file
 format that may store one or more virtual appliances that are in
 formats like VHD.

I consider that a transport format. Maybe my choice of nomenclature is
off, but an OVA clearly (based on your description) holds a number of
somethings which in turn hold (typically) disk images.

I'd much rather have Glance extract all the relevant information
from the OVA, store the disk images (setting the appropriate type and
format in the process), and then discard the OVA. Much like how we treat
AMIs: it's a transport format.

 How does the following sound? Would this work for folks?

 type field, with the following possible values:

 * kernel
 * filesystem
 * ramdisk
 * disk
 * iso9660

Sure, I can live without the other type.

 format field, with the following possible values:

 * raw - This is an unstructured disk image format
[...]
 * qcow2 - A disk format supported by the QEMU emulator that can expand
 dynamically and supports Copy on Write

(Already responded about OVA).

You're missing qed, qemu's next gen disk format.

 Should there be another format value of:

 * iso - An archive format for the data contents of an optical disc (e.g. 
 CDROM).

 to correspond to the iso9660 type?

No, iso isn't a format in the same sense as vmdk and qcow2, etc.


 Or should images with the iso9660 type have a raw format value?

Yes, your garden variety .iso is a raw-formatted iso9660 filesystem. It
could technically be converted to any of the other formats, but seeing
as they're tightly packed (no need for sparseness) and read-only (no
need for sparseness nor copy-on-write), there's rarely much gained from
that (other than confusion).
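Pulling the thread's conclusions together, a hedged sketch of the resulting validation (the set contents follow the lists proposed above, with Soren's amendments folded in; the function name and error messages are made up):

```python
# Image "type" and "format" values as discussed in this thread: no
# "other" type, qed added, and iso9660 images stored with format "raw".
VALID_TYPES = {"kernel", "filesystem", "ramdisk", "disk", "iso9660"}
VALID_FORMATS = {"raw", "cow", "qcow", "qcow2", "vhd", "vmdk", "vdi", "qed"}

def validate_image_meta(meta):
    # Reject unknown values outright rather than guessing.
    if meta.get("type") not in VALID_TYPES:
        raise ValueError("unknown type: %r" % meta.get("type"))
    if meta.get("format") not in VALID_FORMATS:
        raise ValueError("unknown format: %r" % meta.get("format"))
    # An .iso is a raw-formatted iso9660 filesystem, per the exchange above.
    if meta["type"] == "iso9660" and meta["format"] != "raw":
        raise ValueError("iso9660 images should use format 'raw'")
    return meta

validate_image_meta({"type": "disk", "format": "qcow2"})
```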


-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread Duncan McGreggor
On 06 Dec 2011 - 13:52, Duncan McGreggor wrote:
 On 06 Dec 2011 - 21:14, Thierry Carrez wrote:
  Tim Bell wrote:
   I'm not clear on who will be maintaining the stable/diablo  branch.
   The people such as EPEL for RedHat systems need to have something
   with the appropriate bug fixes back ported.
  
   There are an increasing number of sites looking to deploy in
   production and cannot follow the latest development version.
 
  Agreed on the need, we discussed this at length during the design
  summit. The stable branches have been established and are maintained by
  the OpenStack Stable Branch Maintainers team. Currently this team is
  mostly made of distribution members (Ubuntu and Fedora/RedHat, mostly)
  collaborating on a single branch to avoid duplication of effort.
 
  See:
  https://launchpad.net/~openstack-stable-maint
  http://wiki.openstack.org/StableBranch

 Okay, I think this mostly addresses item #4 that I wanted to add to your
 summary, Thierry.

 I do have the following minor concerns, though:

   * that wiki page's summary (intro sentence) only specifically mentions
 Diablo; I'd like to see something along the lines of "currently
 focusing on Diablo; if these processes evolve into a successful
 model, they will be applied to all future releases."

   * the discussion on the page treats this as an experiment (this is
 good!), but I'd like to see a phrase along the lines of "if this
 experiment is successful, we will do X to ensure these processes
 become an official part of the workflow."

 These are tiny things, but I think they will better set expectations and
 give more warm fuzzies to organizations thinking about deploying
 OpenStack in production environments, seeing that we're considering the
 long-term (given success of the experiment).

 In addition, I would like to emphasize Tim's point from earlier, though:
 it's not just packaging... he said it very well, so I'll quote:

  Tim Bell wrote:
   We need more than 'just' packaging it is using the testing,
   documentation and above all care to produce *and* maintain a stable
   release that production sites can rely on for 6-12 months and know
   that others are relying on it too.

 I would like to see verbiage reflecting Tim's concerns added to the wiki
 page as well.

  * What is the QA/testing story?

  * What is the documentation story?

  * What is the support cycle story?

Yikes! I forgot an incredibly important one:

 * What is the migration path story (diablo to essex, essex to f, etc.)?

d

 Ghe Rivero, Michael Pittaro, Tim Bell: does the stable maintenance team
 address your concerns?

 d

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [RFC] Common config options module

2011-12-06 Thread Monty Taylor


On 12/06/2011 12:06 PM, Jay Pipes wrote:
 On Tue, Dec 6, 2011 at 2:22 PM, John Dickinson
 john.dickin...@rackspace.com wrote:
 Overall, I think it's a great thing to have commonality between the projects 
 on option names and environment variables. I think it's worthwhile to push 
 for that in the swift cli tool in the essex timeframe.

 On the topic of common config libraries, though, I think the differences are 
 less important. Mark's wiki page describing a common config library sounds 
 interesting, but I agree with Monty that it may be better to be a separate 
 (ie non-openstack) module.
 
 Could you explain the above a bit more? Mark has already posted code
 for a common config module and I think the idea is to have it live in
 the openstack-common Python library and have Nova, Glance and Swift
 import that openstack.common library...

I think (iirc) that markmc and I were talking about both an ultimate
ending point and a way to get there.

As in - start with the code in openstack-common for right now, get
things working, get everything happy.

THEN - because this is actually missing from the python world in general,
we can perhaps pull out the functionality into a library that we land
either on pypi or even (if I have my way) in python core. Then
openstack.common could just consume that.

But a decision on the eventual home of a sensible library which handles
both config files and command line options shouldn't really block us
from moving forward on intra-project collaboration at the moment.

Monty



Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread Vishvananda Ishaya
The purpose of the stable branch and the maint team that Thierry mentioned 
earlier is to vet patches.  Are you suggesting that we need a point release 
system for openstack outside of relying on distros to pick release points?

Vish

On Dec 6, 2011, at 1:21 PM, Tim Bell wrote:

 
  We need more than 'just' packaging; it is using the testing, documentation 
  and above all care to produce *and* maintain a stable release that production 
  sites can rely on for 6-12 months and know that others are relying on it too.
 
 Who is going to make the judgement that a bug fix to the latest Essex 
 development branch is a valid patch for a backport to stable/diablo and does 
 not break production sites ?
 
 Diablo 2011.3 brought much functionality but also some useful points to 
 consider for the future as to how we organise the project.
 
 Tim
 
 
  (4) OpenStack will accept and foster a new project, one that is not 
  focused on development, but rather the distribution and its general 
  stability. This distro project will be responsible for advocating on 
  behalf of various operating systems/distros/sponsoring vendors for 
  bugs that affect performance and stability of OpenStack, or prevent an 
  operating system from running OpenStack.
 
 Thoughts?
 
 d
 
 Hi
 
  We already have a little informal channel on freenode called 
  #openstack-packaging.
 
 Regards
 chuck
 
 




Re: [Openstack] [DODCS] HPC with Openstack?

2011-12-06 Thread Dong-In David Kang


- Original Message -
 Hi,
 
 So I was the developer who added support for LXC support initially, I
 have some comments in line
 
  On Tue, 6 Dec 2011 12:04:53 -0800 (PST)
 Dong-In David Kang dk...@isi.edu wrote:
 
  
  
  - Original Message -
   On Mon, Dec 05, 2011 at 09:07:06PM -0500, Lorin Hochstein wrote:
   
   
On Dec 4, 2011, at 7:46 AM, Soren Hansen wrote:
   
 2011/12/4 Lorin Hochstein lo...@isi.edu:
 Some of the LXC-related issues we've run into:

  - The CPU affinity issue on LXC you mention. Running LXC with
  OpenStack, you don't get proper space sharing out of the box; each
  instance actually sees all of the available CPUs. It's possible to
  restrict this, but that functionality doesn't seem to be exposed
  through libvirt, so it would have to be implemented in nova.
   
   I recently added support for CPU affinity to the libvirt LXC
   driver.
   It will
   be in libvirt 0.9.8. I also wired up various other cgroups
   tunables
   including
   NUMA memory binding, block I/O tuning and CPU quota/period caps.
 
 
  We are running libvirt 0.9.7 in Ubuntu for Pangolin, and I expect to
  have 0.9.8 before Pangolin is released.
 
Great news!
    We are also looking forward to seeing SELinux 'sVirt' support for
   LXC in libvirt. When do you think it will be available? In
   libvirt-0.9.8?
  
   
  - LXC doesn't currently support volume attachment through libvirt.
  We were able to implement a workaround by invoking lxc-attach inside
  of OpenStack instead (e.g., see
  https://github.com/usc-isi/nova/blob/hpc-testing/nova/virt/libvirt/connection.py#L482).
  But to be able to use lxc-attach, we had to upgrade the Linux kernel
  in RHEL6.1 from 2.6.32 to 2.6.38. This kernel isn't supported by SGI,
  which means that we aren't able to load the SGI numa-related kernel
  modules.
   
   Can you clarify what you mean by volume attachment ?
   
   Are you talking about passing through host block devices, or
   hotplug
   of
   further filesystems for the container ?
   
  
    We tried both libvirt-0.9.3 and libvirt-0.9.7.
   For both versions, attaching a volume via OpenStack failed when the
   target instance is an LXC instance. In
   nova/virt/libvirt/connection.py, virt_dom.attachDevice(xml) failed
   for an LXC instance; virt_dom.attachDevice(xml) calls the libvirt
   API.
  
    By volume attachment, yes, we mean passing through host block
   devices that are dynamically created by the nova-volume service (using
   iscsi).
  
 
 This is on my todo list for essex.
 

 Is there something we can help with?
We've implemented volume support for LXC in our branch.
https://github.com/usc-isi/nova/blob/hpc-testing/nova/virt/libvirt/connection.py

It has some extra features and the code needs to be polished.
Please let us know if there is something we can help with.


 Why not address these couple of issues in libvirt itself?
   
   If you let me know what issues you have with libvirt + LXC in
   OpenStack,
   I'll put them on my todo list.
   
  
    As Lorin said, we implemented it using lxc-attach.
   With lxc-attach we could pass the major/minor numbers of the
   (dynamically created) devices to the LXC instance, and with
   lxc-attach we could do mknod inside of the LXC instance. I think
   supporting this in libvirt would be very useful. However, it needs
   lxc-attach working for the Linux kernel. We had to upgrade and patch
   the Linux kernel for that purpose. If there is a better way, it would
   be wonderful. But I don't know of a way other than using lxc-attach.
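The workaround described above can be sketched in a few lines of Python (all names here are illustrative, not taken from the usc-isi branch: the container name, device paths, and major/minor numbers are assumptions; in the real flow the host device would come from an iSCSI attach by nova-volume):

```python
import os
import shlex


def host_device_numbers(host_dev):
    """Read the major/minor numbers of a block device on the host."""
    st = os.stat(host_dev)
    return os.major(st.st_rdev), os.minor(st.st_rdev)


def mknod_command(container, major, minor, container_dev):
    """Build the lxc-attach invocation that creates a matching block
    device node inside the container (run on the host, as root)."""
    return [
        "lxc-attach", "-n", container, "--",
        "mknod", container_dev, "b", str(major), str(minor),
    ]


# Illustrative usage: pass major/minor 8,16 through as /dev/vdb.
cmd = mknod_command("instance-00000001", 8, 16, "/dev/vdb")
print(shlex.join(cmd))
# lxc-attach -n instance-00000001 -- mknod /dev/vdb b 8 16
```

Actually running the command of course requires root and a live container; the sketch only shows the shape of the mknod pass-through.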
  
   Thanks,
  
   David.
  
  
    Regards,
    Daniel
    --
    |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
    |: http://libvirt.org -o- http://virt-manager.org :|
    |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
    |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
   ___ DODCS mailing
   list
   do...@mailman.isi.edu
   http://mailman.isi.edu/mailman/listinfo/dodcs
  
 



Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread ghe. rivero
Packaging is just a minor step and the last one, but also an important
one. Without proper packaging, installation and updates can be a real
pain. We should give packaging a lot of love, but there are people much
better prepared to do it, who, with a little help, can do a great job.

When one installs a stable release in a production environment, one expects
it to have minor updates and bugs fixed for a not-so-small period of time
(6 months looks too short to me).  Having new releases every 6 months can
be really painful in terms of keeping every release bug-free for its
lifetime. (A lifetime of 1 year implies maintaining 3 releases
while working on a new one: very time-consuming.)  Maybe not every release
should be considered equal in terms of production readiness.

Ghe Rivero

On Tue, Dec 6, 2011 at 10:21 PM, Tim Bell tim.b...@cern.ch wrote:


  We need more than 'just' packaging; it is using the testing,
  documentation and above all care to produce *and* maintain a stable release
  that production sites can rely on for 6-12 months and know that others are
  relying on it too.

 Who is going to make the judgement that a bug fix to the latest Essex
 development branch is a valid patch for a backport to stable/diablo and
 does not break production sites ?

 Diablo 2011.3 brought much functionality but also some useful points to
 consider for the future as to how we organise the project.

 Tim

 
   (4) OpenStack will accept and foster a new project, one that is not
   focused on development, but rather the distribution and its general
   stability. This distro project will be responsible for advocating on
   behalf of various operating systems/distros/sponsoring vendors for
   bugs that affect performance and stability of OpenStack, or prevent an
   operating system from running OpenStack.
 
  Thoughts?
 
  d

 Hi

  We already have a little informal channel on freenode called
  #openstack-packaging.

 Regards
 chuck






-- 
 .''`.  Pienso, Luego Incordio
: :' :
`. `'
  `-    www.debian.org  www.hispalinux.es

GPG Key: 26F020F7
GPG fingerprint: 4986 39DA D152 050B 4699  9A71 66DB 5A36 26F0 20F7


Re: [Openstack] On documentation management

2011-12-06 Thread Arnaud Quette
Hi Anne,

2011/12/6 Anne Gentle a...@openstack.org

 Hi Arnaud -
 Asciidoc is great and we can work it into our toolchain. O'Reilly
 offers authors the option of either authoring in DocBook (which is
 most of our source on docs.openstack.org) or Asciidoc.


great thing!


 I like to accept documentation in any format - even Word docs printed
 and handed to me from a briefcase at the Design Summit. We had many
 pages of documentation already in DocBook when I joined. I'm
 interested in building a community of contributors and finding ways to
 enable their contributions, so all formats are welcomed.


good to know, for future contributions.

Feel free to contact me with specific ideas about Asciidoc contributions.


As per your comment above, I'm still not sure whether it means you're
considering a switch from Sphinx+RST (seen in the Nova sources) and DocBook
to Asciidoc. Any clarification would be appreciated.

As far as I've seen, you already have a good level of build automation with
Sphinx + RST.
But things could probably be improved, in terms of build rules
simplification, removal of redundant content (if any), and general
simplification.

cheers,
Arnaud
-- 
Linux / Unix Expert R&D - Eaton - http://powerquality.eaton.com
Network UPS Tools (NUT) Project Leader - http://www.networkupstools.org/
Debian Developer - http://www.debian.org
Free Software Developer - http://arnaud.quette.free.fr/


[Openstack] swift urlencode

2011-12-06 Thread pf shineyear
hi all, i just found that swift uses urllib.quote and urllib.unquote to
process request urls.

but in php, the result of urlencode or rawurlencode is very different from
python's

for example:

org:  http://www.brighthost.com/test space~.html

php urlencode:  http%3A%2F%2Fwww.brighthost.com%2Ftest+space%7E.html

php rawurlencode: http%3A%2F%2Fwww.brighthost.com%2Ftest%20space~.html

python quote:  http%3A//www.brighthost.com/test%20space%7E.html


so, if you want to send a request to swift, you'd better write an encoding
function yourself.
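For reference, the Python side above can be reproduced in a couple of lines. (Python 3 shown; note that Python 2's urllib.quote, which Swift used at the time, additionally escaped '~' as %7E, which is why the email's output differs on that one character.)

```python
from urllib.parse import quote

url = "http://www.brighthost.com/test space~.html"

# Default: "/" is in the safe set and left alone; spaces become %20.
print(quote(url))
# http%3A//www.brighthost.com/test%20space~.html

# With safe="", "/" is escaped too, which is closest to PHP's
# rawurlencode (PHP's urlencode instead turns spaces into "+").
print(quote(url, safe=""))
# http%3A%2F%2Fwww.brighthost.com%2Ftest%20space~.html
```

So when talking to Swift from PHP, rawurlencode applied per path segment (so that "/" separators survive) comes closest to what the Python side expects.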


Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread Duncan McGreggor
On 06 Dec 2011 - 23:56, Loic Dachary wrote:
 On 12/06/2011 09:24 PM, Thierry Carrez wrote:
  Duncan McGreggor wrote:
  On 06 Dec 2011 - 14:28, Thierry Carrez wrote:
  So the general consensus so far on this discussion seems to be:
 
  (0) The 2011.3 release PPA bears false expectations and should be
  removed now. In the future, we should not provide such PPAs: 0-day
  packages for the release should be available from the last milestone
  PPA anyway.
 
  (1) OpenStack, as an upstream project, should focus on development
  rather than on providing a production-ready distribution.
 
   (2) We could provide daily builds from the stable/diablo branch for a
   variety of releases (much like what we do for the master branch), but
   those should be clearly marked "not for production use" and be
   best-effort only (like our master branch builds).
 
   (3) This should not prevent a group in the community from working on a
   project providing an "openstack on Lucid" production-ready distribution
   if they so wish. This project would just be another distribution of
   OpenStack.
  This doesn't seem like enough to me. OpenStack isn't just a library;
  it's a fairly substantial collection of software and services, intended
  to be used as a product. If it can't be used as a product, what's the
  use?
 
  Someone made the suggestion that a new OpenStack group be started, one
  whose focus is on producing a production-ready, distribution-ready,
  release of the software. So can we add one more (need some help with
  wording, here...):
 
   (4) OpenStack will accept and foster a new project, one that is not
   focused on development, but rather the distribution and its general
   stability. This distro project will be responsible for advocating on
   behalf of various operating systems/distros/sponsoring vendors for bugs
   that affect performance and stability of OpenStack, or prevent an
   operating system from running OpenStack.
  I don't think you need a project (openstack projects are about upstream
  software): you need a *team* to coordinate distribution efforts and make
  sure openstack projects are packageable etc.
 
  Like zul said, that team actually already informally exists and has an
  IRC channel :)
 Packaging is a tremendous amount of work, I'm sure you agree on this
 otherwise this thread would not exist ;-) It is not upstream code
 development indeed. However the people working to package openstack
 provide valuable input to developers and patches that are not only
 essential to packaging but also to the useability of the components.

  Creating a packaging team that acknowledges their contribution to the
  upstream project will show that the packagers' contributions are an
  integral part of openstack development; it would motivate new
  packagers to contribute their changes upstream instead of keeping them
  in a patch directory within the package.

 I think there is an opportunity to leverage the momentum that is
 growing in each distribution by creating an openstack team for them to
 meet. Maybe Stefano Maffulli has an idea about how to go in this
 direction. The IRC channel was a great idea and it could become more.

 Good packages make a huge difference when it comes to deploying a
 solution made of numerous components. A packaging team that spans all
 openstack components would reduce the workload as intended while
 keeping the subject on the agenda.

 Cheers

Wow. I'm so +1 on this. Very well said; sums up my feelings on the
matter too.

Maybe this could be made an agenda item for the next PPB meeting?


d










[Openstack] 404 for account on nodes

2011-12-06 Thread Pete Zaitcev
Greetings:

This is most likely to be an administration problem, but I am trying to
use it as a hook to gain understanding into workings of Swift.

I have a test cluster with 2 VMs and 4 nodes. At some point, I reinstalled
one of the nodes, and now this 404 happens from time to time:

Dec  6 19:08:19 kvm-ni account-server 192.168.129.18 - - [07/Dec/2011:02:08:19 
+] HEAD /vdb/18239/AUTH_1 404 - - - - 0.0010 

The root cause, apparently, is that this account does not exist.
On the system which shows no such symptom, we have:

[root@kvm-rei log]# ls -l /srv/node/vdb/accounts/
total 12
drwxrwxrwx. 3 swift swift 4096 Nov 19 00:22 124217
drwxrwxrwx. 3 swift swift 4096 Nov 30 15:36 18239
drwxrwxrwx. 3 swift swift 4096 Nov 18 18:44 236930

On the problem node, we have:

[root@kvm-ni log]# ls -l /srv/node/vdb/accounts/
total 8
drwxrwxrwx. 3 swift swift 4096 Nov 18 22:22 124217
drwxrwxrwx. 3 swift swift 4096 Nov 18 16:50 236930

All I know at present is this blurb in the Administration Guide:

  Replication updates are push based. For object replication,
  updating is just a matter of rsyncing files to the peer.
  Account and container replication push missing records over
  HTTP or rsync whole database files.

So, the suspicion is that the account updater is not running.

Why is it that the accounts are not synchronized? Or, put in a more
interesting way: what part of the code, if any, is supposed to be
responsible for keeping the cluster consistent?
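As an aside, the numeric components seen above ("18239" in the request path and the directory names under accounts/) are ring partitions. A rough sketch of how Swift derives a partition from an account name follows; the hash suffix and part power are per-cluster settings, so the values below are assumptions and the printed number is illustrative only, not necessarily 18239:

```python
import hashlib
import struct

# Assumed cluster settings: the real values come from
# /etc/swift/swift.conf (hash suffix) and the ring files (part power).
HASH_PATH_SUFFIX = b"changeme"
PART_POWER = 18


def account_partition(account):
    # Swift hashes the request path plus a cluster-wide suffix, then
    # keeps the top PART_POWER bits of the first 4 bytes of the md5.
    key = b"/" + account.encode("utf-8") + HASH_PATH_SUFFIX
    digest = hashlib.md5(key).digest()
    return struct.unpack_from(">I", digest)[0] >> (32 - PART_POWER)


part = account_partition("AUTH_1")
print(part)  # some value in [0, 2**18)
```

The point is that the same account always maps to the same partition on every node, so when a partition directory is missing on one node, it is the replicators' job to push it back, not the hashing's.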

-- Pete



[Openstack] Nova Subteam Changes

2011-12-06 Thread Vishvananda Ishaya
Hello Everyone,

The Nova subteams have now been active for a month and a half.  Some things are 
going very well, and others could use a little improvement.  To keep things 
moving forward, I'd like to make the following changes:

1) Weekly meeting for team leads. This is a time for us to discuss blueprint 
progress, multiple-team-related issues, etc. Going to shoot for Mondays at 2100 
for this one.  I really need the subteam leads to commit to making this 
meeting. We can discuss at the first meeting and decide if there is a better 
time for this to occur.

2) Closing down the team mailinglists.  Some of the lists have been a bit 
active, but I think the approach that soren has been using of sending messages 
to the regular list with a team [header] is a better approach. Examples:
[db] Should we use zookeeper?
[scaling] Plans for bursting
I realize that this will lead to a little more noise on some of the channels, 
but I think it makes sure that we don't isolate knowledge too much.

3) Closing teams. Some of the teams haven't really started having much 
activity.  I'm going to move these teams into a WANTED section on the teams 
page here:
http://wiki.openstack.org/Teams
For now, people can just discuss stuff on the main mailing list, but we won't 
target blueprints to those teams specifically. We can make the teams official 
again once they have some activity and a clear person to be responsible for 
running them.  Please keep in mind that I'm not saying that this stuff isn't 
important, just that there is no need to separate it out if there isn't a lot 
of activity.
Specifically the teams which haven't really been very active are:
Nova Upgrades Team
Nova Auth Team
Nova Security Improvements Team
Nova Operational Support Team
Nova EC2 API Team

I'm going to leave the ec2 team for now, because it is relatively new.  If 
anyone feels that the other teams above should not be folded back in, please 
let me know.

Hopefully these changes will help us continue to rock essex!

Vish



[Openstack] slogging GMT time

2011-12-06 Thread pf shineyear
all of swift uses UTC time (gmtime), not local time

so if anyone wants to analyze the logs or write a plugin for slogging,
please confirm you have already converted your times to UTC

or your totals will not be calculated correctly
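For example, in Python the conversion is a one-liner: render epoch timestamps in UTC rather than local time, which then lines up with the CLF-style timestamps Swift writes (e.g. "[07/Dec/2011:02:08:19 +0000]"):

```python
import time
from datetime import datetime, timezone

ts = time.time()  # epoch seconds are timezone-free

# Wrong for comparing against Swift logs: local wall-clock time.
local = datetime.fromtimestamp(ts)

# Right: render in UTC, matching Swift's gmtime-based log format.
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
stamp = utc.strftime("%d/%b/%Y:%H:%M:%S +0000")
print(stamp)
```

Any per-hour or per-day aggregation then buckets on the UTC stamps directly, with no local-offset skew.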


Re: [Openstack] [RFC] Common config options module

2011-12-06 Thread Mark McLoughlin
On Mon, 2011-12-05 at 15:36 -0800, Vishvananda Ishaya wrote:
 Just read through the description and the code.  I don't have any
 issues with the way it is implemented, although others may have some
 suggestions/tweaks.  I think it is most important to get the common
 code established, so I'm up for implementing your changes in Nova.

Cool, thanks.

 I think it is important to get buy in from Jay and the Glance team
 ASAP as well.
 
 It would also be great if the Swift team could do a quick review and
 at least give us a heads up on whether there are any blockers to
 moving to this eventually.  They have a huge install base, so
 changing their config files could be significantly more difficult, but
 it doesn't look too different from what they are doing.  John,
 thoughts?

So, it sounds like folks are happy enough with the direction. I'll push
the Nova and Glance changes to gerrit soon and, hopefully, start looking
at Swift, Keystone, Quantum, Melange, etc. soon too.

Cheers,
Mark.




Re: [Openstack] Providing packages for stable releases of OpenStack

2011-12-06 Thread Tim Bell

The stable team with Duncan's additions would fully address my concerns.

Tim




Re: [Openstack] [RFC] Common config options module

2011-12-06 Thread Mark McLoughlin
On Mon, 2011-12-05 at 15:32 -0800, Andy Smith wrote:
 Took a look over the wiki for this. The approach is very similar to one
 I've used recently so I wanted to bring up something that looks like it may
 have been overlooked.
 
  In testing, it is common to want to control global config state and
  overwrite it on the fly (for example when you have dynamic ports being
  assigned to a service); passing config dicts around everywhere makes
  this task a bit harder. My solution was to force all instances being
  handed an options dict to assign it to the .options attribute on
  itself, and to use a common name for chained applications so that it is
  possible to traverse the tree:
 
 https://github.com/termie/keystonelight/blob/master/keystonelight/test.py#L107

Glance already was passing a config dict around so, for tests, I found
it quite handy to have a little helper which actually wrote out a
temporary config file with overrides:

  
https://github.com/markmc/glance/blob/31bcda2f09d49a8214fc1a56ffc3543fcad590aa/glance/tests/utils.py#L32

With Nova, it'll be quite a challenge to get to a point where we're not
using a global variable for accessing config. For Nova tests, I added
set_default() and set_override() methods:

  
https://github.com/markmc/nova/blob/101dfcb0fe4f2c5db20e9c8305e8ca87bb5b7e54/nova/common/cfg.py#L855
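The idea behind set_default()/set_override() can be sketched in a few lines. This is a hypothetical minimal version for illustration only, not the API of the cfg.py linked above:

```python
class Config:
    """Sketch of a config object whose values tests can override on
    the fly, without threading dicts through every constructor."""

    def __init__(self, defaults):
        self._defaults = dict(defaults)
        self._overrides = {}

    def get(self, name):
        # Overrides (typically set by tests) win over defaults.
        if name in self._overrides:
            return self._overrides[name]
        return self._defaults[name]

    def set_default(self, name, value):
        self._defaults[name] = value

    def set_override(self, name, value):
        self._overrides[name] = value

    def clear_override(self, name):
        self._overrides.pop(name, None)


conf = Config({"bind_port": 9292})
conf.set_override("bind_port", 0)   # e.g. a dynamically assigned test port
print(conf.get("bind_port"))        # 0
conf.clear_override("bind_port")
print(conf.get("bind_port"))        # 9292
```

Keeping overrides in a separate layer means a test teardown only has to clear the override layer to restore the configured state.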

Cheers,
Mark.

