Re: [Openstack-community] Beyond the wiki page: planning an International Community Portal

2012-05-11 Thread Haisam Ido
I agree with Tim.  I have the skill set to maintain major aspects of Drupal
but maintaining a portal requires a team.  The selection of a tool really
depends on the portal's requirements.

On Fri, May 11, 2012 at 1:25 AM, Tim Bell tim.b...@cern.ch wrote:


 We use Drupal at CERN and it is extremely flexible for setting up branded
 themes, communities, RSS feeds, OpenID, dynamic content etc..

 However, it is also quite complex and so would require some specialised
 skills to set up and maintain.  This is fine if you have them available but
 it is a steep learning curve for this specific purpose.

 Tim

  -Original Message-
  From: openstack-community-bounces+tim.bell=cern...@lists.launchpad.net
  [mailto:openstack-community-bounces+tim.bell=cern...@lists.launchpad.net
 ]
  On Behalf Of Atul Jha
  Sent: 10 May 2012 22:31
  To: Stefano Maffulli; Haisam Ido
  Cc: User Groups Community, OpenStack
  Subject: Re: [Openstack-community] Beyond the wiki page: planning an
  International Community Portal
 
  Hi,
  
  From: openstack-community-
  bounces+atul.jha=csscorp@lists.launchpad.net [openstack-community-
  bounces+atul.jha=csscorp@lists.launchpad.net] on behalf of Stefano
  Maffulli [stef...@openstack.org]
  Sent: Thursday, May 10, 2012 10:34 PM
  To: Haisam Ido
  Cc: User Groups Community,  OpenStack
  Subject: Re: [Openstack-community] Beyond the wiki page: planning an
  International Community Portal
 
  On Thu 10 May 2012 04:23:34 AM PDT, Haisam Ido wrote:
   http://drupal.org/ might be a good choice for the portal.  Here's a
   list of Drupal users http://whousesdrupal.com/home
 
  Depending on what the needs are, Drupal may be overkill. For example,
 
  +1
 
  Going ahead with Drupal would be a bad idea for the same reason Stefano
  mentioned.
 
  if we decide not to host and provide web apps for any of the local groups
  (mlists, blog, forums, etc), we may well do with a much smaller
 application
  that's easier to manage than Drupal. Do you think we need to offer these
  applications? Who needs to use them?
 
  By the way, do you have experience running a large Drupal site? Would you
 be
  interested in running this project (should we decide to)?
 
  thanks,
  stef
 

-- 
Mailing list: https://launchpad.net/~openstack-community
Post to : openstack-community@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-community
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-community] Beyond the wiki page: planning an International Community Portal

2012-05-11 Thread Stefano Maffulli
On Fri 11 May 2012 11:54:21 AM PDT, Doug Hellmann wrote:
 I certainly don't see anything about that in the terms of
 service. http://www.meetup.com/terms/

Right, there is nothing there. I have heard stories from people that 
ran groups on meetup.com and had problems closing/migrating away from 
there.

Basically, to close a group you need to ask meetup.com to do it, after 
you've advertised your intentions and got consensus among other admins. 
The process is described on 
http://www.meetup.com/help/How-do-I-delete-my-Meetup-Group/

They may have made things easier than in the past but I don't think 
this is relevant for our discussion.  Allow me to rephrase what I said 
before:

 Of course it's your choice to use it or not but I'm not
 comfortable advocating for it as a solution

What I meant to say is that I'm in favor of each team picking its own 
preferred tool to keep the local community engaged, informed and 
ultimately happy. I'll stay neutral about which tools the local 
communities choose. I can provide suggestions if asked, but the choice 
should always be made by the local coordinators.

Haisam:

 Shouldn't the requirements be considered prior to selecting a portal?

Yes, indeed. Let's go back to the beginning; below is the list of 
requirements I identified. Is this all we need?

Basic needs

  • A directory of OpenStack user groups (OSUG) that is more flexible
and appealing than a wiki page.
  • A system to get in touch with members (all members or just the
coordinators/leaders?) of the international communities.

Features

  • Register users using SSO
∘ as a user I would like to be able to associate my profile from
Launchpad, LinkedIn or Google with the site
  • Support content in multiple languages (language switcher and automatic
detection via the browser's language settings)
  • Support roles: managers of the groups can add resources, members can
sign up as members, anonymous can read all content
  • Show activity from all groups in my own language on the portal home 
page
  • Directory of OSUGroups, with geographic representation
∘ be able to view the groups on a chart
∘ display also the full list of groups
  • Manage content (pages) of generic interest
∘ to host content like how to start a group, general, policies,
trademark stuff, generic icons, etc
  • For each group:
∘ allow users to add events; each group will expose its iCal feed
∘ list additional resources for the group: mailing lists,
forums, wiki pages, home page, blog URLs
∘ import RSS feeds from blogs to aggregate content on the group's page
∘ display photostreams from Flickr and the like on the home page


-- 
Mailing list: https://launchpad.net/~openstack-community
Post to : openstack-community@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-community
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Metering] Meeting agenda for today 16:00 UTC (May 10th, 2012)

2012-05-11 Thread Loic Dachary
On 05/10/2012 02:14 PM, Loic Dachary wrote:
 Hi,

 The metering project team holds a meeting in #openstack-meeting, Thursdays at 
 1600 UTC 
 http://www.timeanddate.com/worldclock/fixedtime.html?hour=16&min=0&sec=0. 
 Everyone is welcome.
 I propose an agenda based on the discussions we had on this list.

 http://wiki.openstack.org/Meetings/MeteringAgenda
 Topic: external API definition

  * API defaults and API extensions
  * API extension
* extension=<name> loads the <name> python module
* the <name> method query is called with the
  * QUERY_STRING
  * a handler to the storage
  * a pointer to the configuration
  * API calls common arguments
* Datetime range : start and end
  * Transparent cache for aggregation
  * API defaults http://wiki.openstack.org/EfficientMetering#API
* GET list components
* GET list components meters (argument : name of the component)
* GET list accounts
* GET list of meter_type
* GET list of events per account
* GET sum of (meter_volume, meter_duration) for meter_type and account_id
* other ?
  * open discussion

For the record. There were too many issues raised during the meeting to agree 
on the API. Instead, another meeting was scheduled and the agenda calendar 
postponed for a week. 
http://wiki.openstack.org/Meetings/MeteringAgenda?action=diff&rev2=20&rev1=19

==
#openstack-meeting Meeting
==


Meeting started by dachary at 16:00:22 UTC.  The full logs are available
at
http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-05-10-16.00.log.html
.



Meeting summary
---

* LINK: https://lists.launchpad.net/openstack/msg11523.html  (dachary,
  16:00:22)
* actions from previous meetings  (dachary, 16:00:22)
  * LINK:

http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-05-03-16.00.html
(dachary, 16:00:22)
  * dachary removed obsolete comment about floating IP
http://wiki.openstack.org/EfficientMetering?action=diff&rev2=70&rev1=69
(dachary, 16:00:22)
  * dachary o6 : note that the resource_id is the container id.
http://wiki.openstack.org/EfficientMetering?action=diff&rev2=71&rev1=70
(dachary, 16:00:23)
  * The discussion about adding the source notion to the schema took
place on the mailing list
https://lists.launchpad.net/openstack/msg11217.html  (nijaba,
16:01:25)
  * The conclusion was to add a source field to the event record, but no
additional record type to list existing sources.  (nijaba, 16:01:25)
  * jd___ add Swift counters, add resource ID info in counter
definition, describe the table
http://wiki.openstack.org/EfficientMetering?action=diff&rev2=57&rev1=54
(jd___, 16:03:08)

* meeting organisation  (dachary, 16:04:37)
  * This is 2/5 meetings to decide the details of the architecture of
the Metering project https://launchpad.net/ceilometer  (dachary,
16:04:37)
  * Today's focus is on the definition of external REST API  (dachary,
16:04:37)
  * There has not been enough discussions on the list to cover all
aspects and the focus of this meeting was modified to cope with it.
(dachary, 16:04:37)
  * The meeting is time boxed and there will not be enough time to
introduce innovative ideas and research for solutions.  (dachary,
16:04:37)
  * The debate will be about the pro and cons of the options already
discussed on the mailing list.  (dachary, 16:04:38)
  * LINK: https://lists.launchpad.net/openstack/msg11368.html  (dachary,
16:04:38)

* API defaults and API extensions  (dachary, 16:05:28)
  * AGREED: Ceilometer shouldn't invent its own API extensions
mechanism... it should use the system in Nova.  (dachary, 16:09:20)
  * LINK: https://github.com/cloudbuilders/openstack-munin  (dachary,
16:10:18)
  * LINK: https://github.com/sileht/openstack-munin  (dachary, 16:10:23)
  * ACTION: dachary add info to the wiki on the topic of poll versus
push  (dachary, 16:12:17)

* API defaults  (dachary, 16:13:08)
  * GET list components  (dachary, 16:13:14)
  * GET list components meters (argument : name of the component)
(dachary, 16:13:14)
  * GET list [user_id|project_id|source]  (dachary, 16:13:14)
  * GET list of meter_type  (dachary, 16:13:14)
  * GET list of events per [user_id|project_id|source] ( allow to
specify user_id or project_id or both )  (dachary, 16:13:14)
  * GET sum of (meter_volume, meter_duration) for meter_type and
[user_id|project_id|source]  (dachary, 16:13:15)
  * other ?  (dachary, 16:13:16)
  * GET list of events per user_id and project_id  (dachary, 16:14:20)
  * LINK: http://wiki.openstack.org/OpenStackRESTAPI  (dachary,
16:15:48)
  * LINK: http://wiki.openstack.org/EfficientMetering#Meters  (dachary,
16:16:48)
  * LINK: http://wiki.openstack.org/EfficientMetering#API  (nijaba,
16:23:45)
  * AGREED: all meters have a [start,end[  ( start <= timestamp < end )
that limits the returned result 

Re: [Openstack] OpenStack support: KVM vs. QEMU

2012-05-11 Thread Christoph Hellwig
On Tue, 2012-05-08 at 16:08 -0400, Lorin Hochstein wrote:
 Are there any Nova features that work with KVM but don't work with
 QEMU? Either way, I'd like to capture this in the documentation
 
 I know that KVM is faster than QEMU because of hardware support, but I
 don't know if there's additional functionality that only works with
 KVM. The Hypervisor support matrix wiki page
 http://wiki.openstack.org/HypervisorSupportMatrix has no specific
 information on OpenStack features supported by KVM but not QEMU

The first question is what do you mean by KVM?  Any qemu binary using
kernel acceleration via the kvm kernel module, or the qemu-kvm fork
that's slowly getting merged back into qemu mainline (it's down to about
5000 LOC difference now), in either kvm or tcg mode?

There should be no feature difference between using hardware accelerated
KVM mode and TCG emulation, unless you call "boots a reasonable cloud in
your lifetime" a feature, or, looking at it the other way around, running
foreign-architecture OS images, which is something KVM can't do.

There are features not yet merged back into mainline qemu, with device
assignment being the last one still missing from current mainline, and
MSI(-X) and I/O thread support among those missing from commonly released
versions.  In theory these should also work using TCG in qemu-kvm,
although they are unlikely to be tested very well.
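
As an aside, a quick way to check on a compute node whether hardware
acceleration is actually available and loaded (generic commands, nothing
Nova-specific):

    egrep -c '(vmx|svm)' /proc/cpuinfo   # CPU virtualization extensions present?
    lsmod | grep kvm                     # kvm_intel / kvm_amd modules loaded?
    ls -l /dev/kvm                       # device node qemu needs for acceleration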



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Swift Object Storage ACLs with KeyStone

2012-05-11 Thread ZhangJialong
Hello, everyone.

I encountered some problems when I set permissions (ACLs) on OpenStack 
Swift containers.
I installed swift-1.4.8 (Essex) and use keystone-2012.1 as the 
authentication system on CentOS 6.2.

My swift proxy-server.conf and keystone.conf are here:
http://pastebin.com/dUnHjKSj

Then, I use the script named 
opensatck_essex_data.sh (http://pastebin.com/LWGVZrK0) to 
initialize keystone.

After these operations, I got the tokens of demo:demo and newuser:newuser:

curl -s -H 'Content-type: application/json' \
-d '{"auth": {"tenantName": "demo", "passwordCredentials": {"username": 
"demo", "password": "admin"}}}' \
http://127.0.0.1:5000/v2.0/tokens | python -mjson.tool

curl -s -H 'Content-type: application/json' \
-d '{"auth": {"tenantName": "newuser", "passwordCredentials": {"username": 
"newuser", "password": "admin"}}}' \
http://127.0.0.1:5000/v2.0/tokens | python -mjson.tool

Then, enable read access for newuser:newuser:

curl -X PUT -i \
-H "X-Auth-Token: token of demo:demo" \
-H "X-Container-Read: newuser:newuser" \
http://127.0.0.1:8080/v1/AUTH_f1723800c821453d9f22d42d1fbb334b/demodirc

Check the permission of the container:

curl -k -v -H 'X-Auth-Token:token of demo:demo' \
http://127.0.0.1:8080/v1/AUTH_f1723800c821453d9f22d42d1fbb334b/demodirc

This is the reply of the operation:

HTTP/1.1 200 OK
X-Container-Object-Count: 1
X-Container-Read: newuser:newuser
X-Container-Bytes-Used: 2735
Accept-Ranges: bytes
Content-Length: 24
Content-Type: text/plain; charset=utf-8
Date: Fri, 11 May 2012 07:30:23 GMT

opensatck_essex_data.sh

Now, the user newuser:newuser visits the container of demo:demo:

curl -k -v -H 'X-Auth-Token:token of newuser:newuser' \
http://127.0.0.1:8080/v1/AUTH_f1723800c821453d9f22d42d1fbb334b/demodirc

However, I got a 403 error. Can someone help me?

--
 Best Regards

 ZhangJialong
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Accessing VMs in Flat DHCP mode with multiple host

2012-05-11 Thread Michaël Van de Borne

Hi again,

So the problem is now solved.
I hereby post the solution for people from the future.

1. the ping between the compute and the controller was using an IP 
route, so the ping wasn't using only layer 2. This means that no DHCP 
request was arriving at the network controller.

2. the hosts and the VMs should be in the same subnet
3. we needed to killall dnsmasq and restart nova-network

tcpdump on br100 is useful to track dhcp requests. ARP tables are useful 
as well in order to be sure each host sees the other on layer 2.
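
For reference, roughly the commands involved (adjust the bridge name to
your own setup; service names as on Ubuntu 12.04):

    tcpdump -n -i br100 port 67 or port 68   # watch DHCP requests on the bridge
    arp -n                                   # check layer-2 visibility of the other hosts
    killall dnsmasq && service nova-network restart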


thank you all,

yours,

michaël



Michaël Van de Borne
R&D Engineer, SOA team, CETIC
Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli
www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi


On 10/05/2012 15:31, Yong Sheng Gong wrote:

HI,
First you have to make sure the network between your control node's 
br100 and your compute node's br100 are connected.

and then can you show the output on control node:
ps -ef | grep dnsmasq
brctl show
ifconfig
2. can you login to your vm by vnc to see the eth0 configuration and 
then try to run udhcpc?


Thanks
-openstack-bounces+gongysh=cn.ibm@lists.launchpad.net wrote: -

To: openstack@lists.launchpad.net openstack@lists.launchpad.net
From: Michaël Van de Borne michael.vandebo...@cetic.be
Sent by: openstack-bounces+gongysh=cn.ibm@lists.launchpad.net
Date: 05/10/2012 09:03PM
Subject: [Openstack] Accessing VMs in Flat DHCP mode with multiple
host

Hello,

I'm running into troubles accessing my instances.
I have 3 nodes:
1. proxmox that virtualizes in KVM my controller node
1.1 the controller node (10.10.200.50) runs keystone,
nova-api, network, scheduler, vncproxy and volumes but NOT compute
as it is already a VM
2. glance in a physical node
3. compute in a physical node

my nova.conf network config is:
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--routing_source_ip=10.10.200.50
--libvirt_use_virtio_for_bridges=true
--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=eth0
--flat_interface=eth1
--flat_network_bridge=br100
--fixed_range=192.168.200.0/24
--floating_range=10.10.200.0/24
--network_size=256
--flat_network_dhcp_start=192.168.200.5
--flat_injected=False
--force_dhcp_release
--network_host=10.10.200.50

I even explicitly allow icmp and tcp port 22 traffic like this:
euca-authorize -P icmp -t -1:-1 default
euca-authorize -P tcp -p 22 default

before setting these rules, I was getting 'Operation not
permitted' when pinging the VM from the compute node. After
setting these, I just get no output at all (not even 'Destination
Host Unreachable')


The network was created like this:
nova-manage network create private
--fixed_range_v4=192.168.200.0/24 --bridge=br100
--bridge_interface=eth1 --num_networks=1 --network_size=256

However I cannot ping or ssh my instances once they're active. I
have already set up such an Essex environment but the controller
node was physical. Moreover, every example in the doc presents a
controller node that runs nova-compute.

So I'm wondering if either:
- having the controller in a VM
- or not running compute on the controller
would prevent things from working properly.

What can I check? iptables? is dnsmasq unable to give the VM an
address?

I'm running out of ideas. Any suggestion would be highly appreciated.

Thank you,

michaël




-- 
Michaël Van de Borne

RD Engineer, SOA team, CETIC
Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype:
mikemowgli
www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova subsystem branches and feature branches

2012-05-11 Thread Mark McLoughlin
Hey,

cdub sent on these interesting links:

  http://lwn.net/Articles/328438/
  https://lkml.org/lkml/2012/3/22/489

tl;dr on those is that you're likely to be flamed as a f*cking moron by
Linus unless you manage to understand every little nuance about how he
thinks git should be used :-)

It's really quite interesting to dig into the way Linus does things ...

Some random observations:

  - There's a transition from work being in a git rebase phase to a 
phase where it should never rebase again. Allegedly, this transition
happens when you publish a git tree with the work. It seems to me, 
though, that the transition is really when a subsystem maintainer 
merges your tree into a tree that is known to be their non-rebasing 
release tree that gets sent to linus regularly.

  - What can also happen is that a maintainer picks up a patch, applies 
it to their rebasing next tree, adds their Signed-off-by and may 
happily rebase it until they merge it into their release tree.

  - The Signed-off-by thing is interesting - the first Signed-off-by in 
a commit should match the Author: field and the last should match 
the Committer: field. Once the commit transitions from the git 
rebase phase, no more Signed-off-bys are added. There may be 
multiple Signed-off-bys where a patch has passed through several 
peoples' hands before transitioning out of the git rebase phase.

  - There's a strange emphasis on sharing work-in-progress as patches
rather than via git. It's not hard-and-fast, but there's an element 
of "while the work may still rebase, it should be shared mostly as 
patches".

The root of this is the whole "never rebase a public git tree" 
thing. If you want a patch merged into the kernel, the usual way is
for the maintainer to apply your patch from the mailing list rather
than merge a git tree from you.

I find this strange - there's a big flap about "NEVER destroy other 
people's history", yet there's a preference for folks to submit 
changes in a way that destroys the history of what exact parent 
commit their work was based on?

  - Kernel maintainers mostly assume the responsibility of resolving 
merge conflicts. Certainly they do so after the transition out of 
the git rebase phase, but often before that too.

How does this compare to our process?

  - All our changes come in as git commits, not patches

  - Typically, only the author of the patch ever re-bases it

  - Our merge commits never resolve any conflicts. If a merge attempt 
results in conflicts, we ask the author to rebase

  - The transition out of git rebase phase happens when the commit is 
merged into master. There's no concept of a commit transitioning 
out of that phase earlier

  - If we used Signed-off-by with our current process, there would only 
be multiple Signed-off-bys where the work has multiple authors

  - Our history is far from "clean history", it's pretty disgusting 
really. The ratio of interesting commits to merge commits is 
roughly 3:2, which is pretty cluttered. With the kernel, it's more 
like 15:1 (the one-liners below show how to check)
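
Roughly, in a checkout of each repository (adjust the branch name as
needed):

    git rev-list --no-merges --count origin/master   # non-merge commits
    git rev-list --merges --count origin/master      # merge commits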

Anyway, food for thought.

Cheers,
Mark.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Glance][Nova] Can't list images

2012-05-11 Thread Leander Bessa Beernaert
David, thx for the tip. When I changed to DEBUG I caught this in keystone:
http://paste.openstack.org/show/16935/  and when I try to access glance
directly I get this: http://paste.openstack.org/show/16936/

On Thu, May 10, 2012 at 6:48 PM, David Kranz david.kr...@qrclab.com wrote:

  I don't know if this is your problem, but the default log level in
 /etc/keystone/logging.conf is WARNING (at least in Ubuntu). I had a similar
 issue and saw important stuff in the log after changing that to DEBUG.

  -David


 On 5/10/2012 11:32 AM, Dolph Mathews wrote:

 Concerning your keystone.log being empty (empty for the duration of the
 request, or completely empty?)... is logging to a specific file configured
 in your keystone.conf? If not, keystone just logs to stdout.

  -Dolph

 On Thu, May 10, 2012 at 10:20 AM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Can anyone pinpoint what exactly is wrong with this. I've been stuck here
 for the past three days, and nothing i do seems to be working :/


 On Tue, May 8, 2012 at 12:11 PM, Leander Bessa leande...@gmail.comwrote:

 I fixed the swift ip and i'm still getting the same error.

  Here are the log files and the config files:

  nova-api  http://paste.openstack.org/show/16176/

  glance-api.log

  2012-05-08 11:39:55 6143 INFO [eventlet.wsgi.server] Starting
 single process server

  2012-05-08 11:40:01 6255 INFO [eventlet.wsgi.server] Starting
 single process server


 glance-registery.log  http://paste.openstack.org/show/16180/

  glance-api.conf  http://paste.openstack.org/show/16184/

  glance-registry.conf  http://paste.openstack.org/show/16185/

  glance-api-paste.ini  http://paste.openstack.org/show/16186/

  glance-registry-paste.ini  http://paste.openstack.org/show/16187/

  keystone log is empty.

  Regards,

  Leander

 On Mon, May 7, 2012 at 4:51 PM, Dolph Mathews 
 dolph.math...@gmail.comwrote:

 There's not enough information in those logs to say (check your glance
 config and glance/keystone logs) -- but you'll definitely need to recreate
 that endpoint with SWIFT_HOST defined in your env to use swift through your
 service catalog.
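
 Illustrative only: roughly the shape of the command to recreate it with
 the Essex keystone CLI. Every value below is a placeholder, and the exact
 flag spellings and tenant_id templating may differ on your install:

     keystone endpoint-create --region nova \
       --service_id <swift-service-id> \
       --publicurl   "http://<swift-host>:8080/v1/AUTH_\$(tenant_id)s" \
       --internalurl "http://<swift-host>:8080/v1/AUTH_\$(tenant_id)s" \
       --adminurl    "http://<swift-host>:8080/v1"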

  -Dolph


 On Mon, May 7, 2012 at 9:11 AM, Leander Bessa leande...@gmail.comwrote:

 Does that mean that glance is somehow configured to use swift as
 storage instead of the local file system or is does the error simply occur
 due to the a parsing error because of ${SWIFT_HOST}?


  On Mon, May 7, 2012 at 2:59 PM, Dolph Mathews 
 dolph.math...@gmail.com wrote:

 Your swift endpoint appears to be literally configured in keystone as
 http://${SWIFT_HOST}:8080/v1/...; -- I'm guessing that's
 unreachable :)

  Based on your logs, I'm not certain that will fix your 500, however.

  -Dolph

 On Mon, May 7, 2012 at 5:23 AM, Leander Bessa leande...@gmail.comwrote:

 This is as much as i can capture at the moment.
 http://paste.openstack.org/show/15899/

  For some reason, nothing is written to the logs, am i forgetting a
 flag or something?


 On Fri, May 4, 2012 at 11:30 PM, Yuriy Taraday 
 yorik@gmail.comwrote:

 Please post to http://paste.openstack.org error text and backtrace
 from nova-api.log.

 Kind regards, Yuriy.


 On Fri, May 4, 2012 at 6:13 PM, Leander Bessa leande...@gmail.com
 wrote:
  Hello,
 
  I seem to be unable to list the images available in glance. I'm
 not sure why
  this is happening. I've check the logs for nova-api, glance-api
 and
  glance-registry and am unable to found anything out of the
 ordinary.
 
  Below is an output from the command 'nova image-list'
 
  REQ: curl -i http://192.168.164.128:5000/v2.0/tokens -X POST -H
  Content-Type: application/json -H Accept: application/json -H
  User-Agent: python-novaclient
  REQ BODY: {auth: {tenantName: admin, passwordCredentials:
  {username: admin, password: nova}}}
  RESP:{'date': 'Fri, 04 May 2012 14:08:53 GMT',
 'transfer-encoding':
  'chunked', 'status': '200', 'vary': 'X-Auth-Token',
 'content-type':
  'application/json'} {access: {token: {expires:
 2012-05-05T14:08:53Z,
  id: c6d3145f1e924982982b54e52b97bec9, tenant:
 {description: null,
  enabled: true, id: 765a2012198f4751b8457c49932ec80d,
 name:
  admin}}, serviceCatalog: [{endpoints: [{adminURL:
  http://192.168.164.128:8776/v2/765a2012198f4751b8457c49932ec80d;,
 region:
  nova, internalURL:
  http://192.168.164.128:8776/v2/765a2012198f4751b8457c49932ec80d
 ,
  publicURL:
  http://192.168.164.128:8776/v2/765a2012198f4751b8457c49932ec80d
 }],
  endpoints_links: [], type: volume, name: volume},
 {endpoints:
  [{adminURL:
 
 http://${SWIFT_HOST}:8080/v1/AUTH_765a2012198f4751b8457c49932ec80d;
 ,
  region: nova, internalURL: http://127.0.0.1:8080;,
 publicURL:
 
 http://${SWIFT_HOST}:8080/v1/AUTH_765a2012198f4751b8457c49932ec80d;
 }],
  endpoints_links: [], type: storage, name: swift},
 {endpoints:
  [{adminURL: http://192.168.164.128:9292/v1;, region:
 nova,
  internalURL: http://192.168.164.128:9292/v1;, publicURL:
  http://192.168.164.128:9292/v1}], endpoints_links: [],
 

[Openstack] instance hangs when registering image

2012-05-11 Thread Staicu Gabriel
Hi,

I have a server on which I run all the services of OpenStack.
The characteristics of the server are:
-cpu:12 cores
-ram:24 GB
-hdd: 500GB
-ubuntu12.04 64bits
-openstack essex installed from default repository

I observed the following behavior:
If, while I am registering an image, I am working on a running WinXP 
instance (for example, starting an application), the working instance hangs. 
So far I have not found a way to regain access to it. I tried to reboot it 
but without success.

In the same context, while Glance is registering the image, iostat shows 
100% I/O utilization.

Has anyone else observed the issue?


Regards,
Gabriel
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Keystone] Blueprint to store quota data in Keystone

2012-05-11 Thread Joe Topjian
Hi Everett,


 1. For the keystone CLI I'm proposing using JSON for batch create, update,
 and delete of quotas. I don't believe this is done anywhere else in
 OpenStack. Good idea? Bad idea?
 My plan is to go with the JSON.


IMO, using JSON on the command line is pretty unconventional with regards
to classic CLI commands, but I do think it is interesting.

With regard to your dot notation, couldn't multiple --quota args be used?
For example:

keystone quota-create --quota nova.ram=102400 --quota nova.instances=20
--quota swift.total=1073741824 tenant-id

This is definitely possible programmatically with Python and the
opt-parsing modules, but I was wondering if you chose not to use it as an
example for other non-programmatic reasons.
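
Purely for comparison, a guess at what the JSON batch form might look
like; the subcommand and flag here are hypothetical, not Everett's actual
proposal:

keystone quota-update --tenant-id <tenant-id> --json '{
    "nova":  {"ram": 102400, "instances": 20},
    "swift": {"total": 1073741824}
}'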

Secondly, with regard to quota-create and quota-update, is there a huge
difference between the two besides one would ultimately do an insert and
one would do an update? If that is the only difference, could the two be
combined into a single quota-set subcommand?

Thanks,
Joe

-- 
Joe Topjian
Systems Administrator
Cybera Inc.

www.cybera.ca

Cybera is a not-for-profit organization that works to spur and support
innovation, for the economic benefit of Alberta, through the use
of cyberinfrastructure.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova subsystem branches and feature branches

2012-05-11 Thread Mark McLoughlin
On Fri, 2012-05-04 at 17:11 -0700, Chris Wright wrote:

 * Mark McLoughlin (mar...@redhat.com) wrote:

- Subsystem branches would not rebase unless the project dictator 
  outright rejects a merge request from the subsystem branch (i.e.
  I'm not merging commit abcdef0! Fix it and rebase!). This means 
  the subsystem maintainer will need to regularly (probably only when 
  there are conflicts to be dealt with) merge master back into the
  subsystem branch.
 
 Any branch that has a downstream user (someone who's cloned that branch)
 should not rebase.

There are lots of public branches which do rebase though, right? As in,
no matter what the theory says about "never rebase a public branch",
people do make their stuff available in public git branches before those
commits make the transition into the never-rebase phase.

The way I think about it is that it's as valid to publish a commit in a
public git repo as it is to send a patch to a mailing list. Indeed, the
former is more useful because you have the context of the parent commit.

What I don't fully understand is how folks make it clear to others
exactly which of their public branches can be expected to never rebase.
Convention? Tribal knowledge?

 If the subsystem-master merge prop is rejected, the subsystem should
 generate a new branch, re-apply the relevant changes (dropping,
 rearranging, etc as necessary), and start a new merge prop.

And anyone who had based their work on the subsystem branch would have
to rebase it on to the new subsystem branch.

 I'm unclear on where gerrit fits on that rebase step?

A subsystem maintainer would accept changes into the subsytem branch
through gerrit but also propose merges into master using gerrit. If the
master merge prop was rejected, you'd need to create a new subsystem
branch and re-propose everything again.

 Regular merging master back is one culprit in your screaming for the
 hills review of kernel git history.  In fact, it's discouraged there.
 The workflow is helped by the merge windows which makes for discrete
 merge points.

Yeah, you don't merge master back at arbitrary times, or where your
branch will conflict with master. But you would merge back at regular 
checkpoints (e.g. milestone releases every 6 weeks), and also if there 
was something new in master needed by new stuff in the subsystem.
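
In practice something like the following, on a hypothetical scheduler
subsystem branch (branch names made up for illustration):

    git checkout scheduler-next
    git merge master    # only at a checkpoint, or when the branch needs something from master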

  Should there be a DB subsystem? Probably not, because that would 
  mean every new feature needs to come through this branch or, 
  alternatively, the DB maintainer would need to accept DB schema 
  additions without the context of how it's being used higher up the 
  stack.
 
 This does touch on the reality that some changes will span subsystems
 which will require coordination between subsystems.  Having a mechanism
 for this coordination ahead of time is helpful.  In Linux it's one job
 of the linux-next tree, but that doesn't address jenkins side.

Could you talk through a complicated example of co-ordination and how
linux-next solves that?

e.g. some new feature added to the x86 subsystem for KVM. The stuff gets
accepted into Ingo's x86 tree first, then Avi merges that into his tree,
bases the new KVM stuff on it and waits for Ingo to propose his tree
before proposing the KVM tree?

Hmm, where does linux-next fit in there? Or is linux-next just good for
finding unintentional conflicts between stuff in different subsystem
trees? How are those conflicts then resolved?

- When a feature branch is ready to be merged into a subsystem 
  branch, the patch series is submitted for review. The subsystem 
  maintainer will likely require changes to individual patches and 
  these changes would be made on the feature branch and squashed back 
  into the appropriate individual patch.
 
  (Ideally gerrit's topic review feature will get upstream and 
  we'll use that. This would mean that a patch series could be 
  proposed for review as a single logical unit while still keeping 
  individual patches as separate commits)
 
 One topic that came up at the design summit that I think you mean by the
 above is the to squash or not to squash? question.
 
 The series may need rework (rebase, fix+internal squash, etc) while
 it's a feature branch, but it gets merge prop'd as a real stand alone,
bisectable patchset (minus the true work-in-progress fixups/changes,
 of course).  Put another way... 'git rebase -i' is your friend ;)

Yes, what I was referring to is that gerrit will hopefully gain a
proposed topic reviews feature which will make it easier for reviewers
to review a patch series - i.e. rather than appearing in gerrit as
individual patch reviews with a fairly obscure dependencies widget in
each review, you'd actually be able to go to a review page representing
the entire series of patches and go through each one in order.

Cheers,
Mark.


___
Mailing list: https://launchpad.net/~openstack
Post to 

Re: [Openstack] instance hangs when registering image

2012-05-11 Thread Jay Pipes

Hi!

What are the specific commands you are running and what, if any, error 
output are you getting? In addition, can you post the relevant nova-api 
and glance-api log entries to look over?


Thanks,
-jay

On 05/11/2012 08:49 AM, Staicu Gabriel wrote:

Hi,

I have a server on which I run all the the services of openstack.
The characteristics of the server are:
-cpu:12 cores
-ram:24 GB
-hdd: 500GB
-ubuntu12.04 64bits
-openstack essex installed from default repository

I observed the following behavior:
If while I am registering an image, I am working on a winxp running
instance (for example, starting an application) the working instance
hangs. Since now I didn't discover a method to regain access to it. I
tried to reboot it but without success.
In the same context while the glance is registering the image the iostat
shows 100% utilization for I/O.

Has anyone else observed the issue?

Regards,
Gabriel


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [metering] public API design

2012-05-11 Thread Doug Hellmann
During yesterday's meeting we discussed the API proposal at
http://wiki.openstack.org/EfficientMetering#API and came up with a few
missing items and other points for discussion. We should try to work out
those details on the list before the meeting next week.

The original proposal included these API calls:

GET list components
GET list components meters (argument : name of the component)
GET list [user_id|project_id|source]
GET list of meter_type
GET list of events per [user_id|project_id|source] ( allow to specify
user_id or project_id or both )
GET sum of (meter_volume, meter_duration) for meter_type and
[user_id|project_id|source]

They can be broken down into three groups:

- Discover the sorts of things the server can provide:

  - list the components providing metering data
  - list the meters for a given component
  - list the known users
  - list the known projects
  - list the known sources
  - list the types of meters known

- Fetch raw event data, without aggregation:

  - per user
  - per project
  - per source
  - per user and project

- Produce aggregate views of the data:

  - sum volume field for meter type over a time period
- per user
- per project
- per source
  - sum duration field for meter type over a period of time
- per user
- per project
- per source

We agreed that all of the queries related to metering data would need
to have parameters to set the start and end times with the start time
inclusive and the end time exclusive (i.e., start <= timestamp < end).

Callers also are likely to want to filter the raw data queries based
on the meter type, so we need to add that as an optional argument
there. The aggregate values have the meter type as a required argument
(because it does not make sense to aggregate data from separate meters
together).
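
To make that concrete, a sketch only (the host, path layout and parameter
names below are assumptions, since none of this has been agreed yet):

    curl -s -H "X-Auth-Token: <token>" \
      "http://<metering-host>/v1/projects/<project_id>/meters/<meter_type>/duration?start=2012-05-01T00:00:00&end=2012-05-08T00:00:00"

i.e. sum the duration field of one meter type for one project over the
half-open [start, end) range.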

There are a few other queries missing from that list. The items below
are based on the meeting notes and my own thoughts from yesterday,
please add to the list if I have left anything out (or suggest why we
should not implement something suggested here, of course):

- list discrete events that may not have a duration (instance
  creation, IP allocation, etc.)
- list raw event data for a resource (what happened with a specific
  instance?)
- aggregate event data per meter type for a resource over a period of
  time (what costs are related to this instance?)
- sum volume for meter type over a time period for a specific resource
  (how much total bandwidth was used by a VIF?)
- sum duration for meter type over a time period for a specific
  resource (how long did an instance run?)
- metadata for resources (such as location of instances)
- aggregating averages in addition to sums

Some of these items may be better handled in the consumer of this API
(the system that actually bills the user). I am curious to know how
others are handling auditing or other billing detail reporting for
users.

In order to support the discrete events, we need to capture
them. Perhaps those could be implemented as additional counter
types. Thoughts?

In order to provide the metadata we need to capture it. Some metadata
may change (location), so we need to support updates.

- The interesting metadata for a resource may depend on the type of
  resource. Do we need separate tables for that or can we normalize
  somehow?
- How do we map a resource to the correct version of its metadata at
  any given time? Timestamps seem brittle.
- Do we need to reflect the metadata in the aggregation API?

We also discussed extensions, but we should start another thread for
that topic.

Doug
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Network Routing issues.

2012-05-11 Thread Kieran David Evans
Hi all,

I'm having a few issues with my install here. My instances can't access
anything outside the cloud, and even after adding the correct rules to the
security group and assigning a public IP, the instance isn't accessible
from the outside world. I've had OpenStack running on this hardware before
using the StackOps distro, but I've installed Ubuntu 12.04 and Essex to
test it out as StackOps isn't on Essex yet.

I've included the relevant (I think) info below. I'm not sure where/what
to check next, I'm not so good with network debugging unfortunately.

Could someone help, advise, or just generally point me in the right
direction?

Thanks!

/Kieran

I have it set to use FlatDHCP:
# network specific settings
--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=bond0
--flat_interface=eth2
--flat_network_bridge=br100
--fixed_range=10.0.0.0/8
--floating_range=131.251.172.0/24
--network_size=256
--flat_network_dhcp_start=10.0.0.2
--flat_injected=False
--force_dhcp_release
--iscsi_helper=tgtadm
--connection_type=libvirt
--root_helper=sudo nova-rootwrap
--verbose

bond0 is a bonded interface on a public network. I can access the
Internet through that interface. eth2 is on a network connected to the
other hosts, each of which has eth2 connected to this network.

brctl shows eth2 is part of br100.

nova-network:
 brctl show
bridge name bridge id   STP enabled interfaces
br100   8000.001b21cda0d1   no  eth2


nova-compute-1 (with the instance on it):
brctl show
bridge name bridge id   STP enabled interfaces
br100   8000.001b21add0a1   no  eth2
vnet0
virbr0  8000.   yes


I checked through this (
http://docs.openstack.org/trunk/openstack-compute/admin/content/associating-public-ip.html)
and everything looks correct (I think).

  nova secgroup-list-rules default
+-+---+-+---+--+
| IP Protocol | From Port | To Port |  IP Range | Source Group |
+-+---+-+---+--+
| icmp| -1| -1  | 0.0.0.0/0 |  |
| tcp | 22| 22  | 0.0.0.0/0 |  |
+-+---+-+---+--+


The instance IP is 10.0.0.2, so (public IPs hidded):

sudo iptables -L -nv -t nat | grep 10.0.0.2
0 0 DNAT   all  --  *  *   0.0.0.0/0   
x.y.172.22   to:10.0.0.2
   20  1656 DNAT   all  --  *  *   0.0.0.0/0   
x.y.172.22   to:10.0.0.2
0 0 SNAT   all  --  *  *   10.0.0.2
0.0.0.0/0to:x.y.172.22


from ip add:


4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
br100 state UP qlen 1000
link/ether 00:1b:21:cd:a0:d1 brd ff:ff:ff:ff:ff:ff
inet6 fe80::21b:21ff:fecd:a0d1/64 scope link
   valid_lft forever preferred_lft forever


16: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP
link/ether 00:1b:21:6d:ef:00 brd ff:ff:ff:ff:ff:ff
inet x.y.172.2/24 brd 131.251.172.255 scope global bond0
inet x.y.172.22/32 scope global bond0
inet6 fe80::21b:21ff:fe6d:ef00/64 scope link
   valid_lft forever preferred_lft forever
17: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP
link/ether 00:1b:21:cd:a0:d1 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global br100
inet6 fe80::1c2b:8bff:fe38:2003/64 scope link
   valid_lft forever preferred_lft forever

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Network Routing issues.

2012-05-11 Thread Kieran David Evans
On 11/05/12 17:24, Kieran David Evans wrote:
 Hi all,

 I'm having a few issues with my install here. My instances can't
 access anything outside the cloud, and adding the correct rules to the
 security group and assigning a public IP, the instance isn't
 accessible from the outside world. I've had openstack running on this
 hardware before using the Stackops Distro, but I've intalled Ubuntu
 12.04 and Essex to test it out as Stackops aren't on essex yet.

 I've included the relevant (I think) info below. I'm not sure
 where/what to check next, I'm not so good with network debugging
 unfortunately.

 Could someone help, advise, or just generally point me in the right
 direction?

 Thanks!

 /Kieran

 I have it set to use FlatDHCP:
 # network specific settings
 --network_manager=nova.network.manager.FlatDHCPManager
 --public_interface=bond0
 --flat_interface=eth2
 --flat_network_bridge=br100
 --fixed_range=10.0.0.0/8
 --floating_range=131.251.172.0/24
 --network_size=256
 --flat_network_dhcp_start=10.0.0.2
 --flat_injected=False
 --force_dhcp_release
 --iscsi_helper=tgtadm
 --connection_type=libvirt
 --root_helper=sudo nova-rootwrap
 --verbose

 bond0 is a bonded interface on a public network. I can access the
 Internet through that interface. eth2 is on a network connected to the
 other hosts, each of which has eth2 connected to this network.

 brctl shows eth2 is part of br100.

 nova-network:
  brctl show
 bridge name bridge id   STP enabled interfaces
 br100   8000.001b21cda0d1   no  eth2


 nova-compute-1 (with the instance on it):
 brctl show
 bridge name bridge id   STP enabled interfaces
 br100   8000.001b21add0a1   no  eth2
 vnet0
 virbr0  8000.   yes


 I checked through this (
 http://docs.openstack.org/trunk/openstack-compute/admin/content/associating-public-ip.html)
 and everything looks correct (I think).

   nova secgroup-list-rules default
 +-+---+-+---+--+
 | IP Protocol | From Port | To Port |  IP Range | Source Group |
 +-+---+-+---+--+
 | icmp| -1| -1  | 0.0.0.0/0 |  |
 | tcp | 22| 22  | 0.0.0.0/0 |  |
 +-+---+-+---+--+


 The instance IP is 10.0.0.2, so (public IPs hidded):

 sudo iptables -L -nv -t nat | grep 10.0.0.2
 0 0 DNAT   all  --  *  *   0.0.0.0/0   
 x.y.172.22   to:10.0.0.2
20  1656 DNAT   all  --  *  *   0.0.0.0/0   
 x.y.172.22   to:10.0.0.2
 0 0 SNAT   all  --  *  *   10.0.0.2
 0.0.0.0/0to:x.y.172.22


 from ip add:

 
 4: eth2: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq master
 br100 state UP qlen 1000
 link/ether 00:1b:21:cd:a0:d1 brd ff:ff:ff:ff:ff:ff
 inet6 fe80::21b:21ff:fecd:a0d1/64 scope link
valid_lft forever preferred_lft forever
 
 
 16: bond0: BROADCAST,MULTICAST,MASTER,UP,LOWER_UP mtu 1500 qdisc
 noqueue state UP
 link/ether 00:1b:21:6d:ef:00 brd ff:ff:ff:ff:ff:ff
 inet x.y.172.2/24 brd 131.251.172.255 scope global bond0
 inet x.y.172.22/32 scope global bond0
 inet6 fe80::21b:21ff:fe6d:ef00/64 scope link
valid_lft forever preferred_lft forever
 17: br100: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue
 state UP
 link/ether 00:1b:21:cd:a0:d1 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.1/24 brd 10.0.0.255 scope global br100
 inet6 fe80::1c2b:8bff:fe38:2003/64 scope link
valid_lft forever preferred_lft forever

Seems I failed at both spelling, and hiding out public ip addresses
there. D'oh!

/Kieran
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] 'admin' role hard-coded in keystone and nova, and policy.json

2012-05-11 Thread Vishvananda Ishaya
Most of nova is configurable via policy.json, but there is the issue with
context.is_admin checks that still exist in a few places. We definitely
need to modify that.

Joshua, the idea is that policy.json will ultimately be managed in keystone
as well. Currently the policy.json is checked for modifications, so it
would be possible to throw it on shared storage and modify it for every
node at once without having to restart the nodes.  This is an interim
solution until we allow for creating and retrieving policies inside of
keystone.

Vish

On Thu, May 10, 2012 at 7:13 PM, Joshua Harlow harlo...@yahoo-inc.comwrote:

  I was also wondering about this, it seems there are lots of policy.json
 files with hard coded roles in them, which is weird since keystone supports
 the creation of roles and such, but if u create a role which isn’t in a
 policy.json then u have just caused yourself a problem, which isn’t very
 apparent...


 On 5/10/12 2:32 PM, Salman A Baset saba...@us.ibm.com wrote:

 It seems that 'admin' role is hard-coded cross nova and horizon. As a
 result if I want to define 'myadmin' role, and grant it all the admin
 privileges, it does not seem possible. Is this a recognized limitation?

 Further, is there some good documentation on policy.json for nova,
 keystone, and glance?

 Thanks.

 Best Regards,

 Salman A. Baset
 Research Staff Member, IBM T. J. Watson Research Center
 Tel: +1-914-784-6248



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift Object Storage ACLs with KeyStone

2012-05-11 Thread Vishvananda Ishaya
I'm not totally sure about this, but you might have to use the project_id
from keystone instead of the project_name when setting up acls.   The same
may be true of user_id.
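
Something along these lines (placeholders only; substitute the UUIDs
keystone returns for the demo tenant and the newuser user):

    curl -X PUT -i \
    -H "X-Auth-Token: <token of demo:demo>" \
    -H "X-Container-Read: <demo_tenant_id>:<newuser_user_id>" \
    http://127.0.0.1:8080/v1/AUTH_f1723800c821453d9f22d42d1fbb334b/demodirc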

Vish

On Fri, May 11, 2012 at 12:51 AM, 张家龙 zhan...@awcloud.com wrote:


 Hello, everyone.

 I encountered some problems when i set permissions (ACLs) on Openstack
 Swift containers.
 I installed swift-1.4.8(essex) and use keystone-2012.1 as
 authentication system on CentOS 6.2 .

 My swift proxy-server.conf and keystone.conf are here:
 http://pastebin.com/dUnHjKSj

 Then,I use the script named opensatck_essex_data.sh(
 http://pastebin.com/LWGVZrK0) to
 initialize keystone.

 After these operations,I got the token of demo:demo and newuser:newuser

 curl -s -H 'Content-type: application/json' \
 -d '{auth: {tenantName: demo, passwordCredentials:
 {username: demo, password: admin}}}' \
 http://127.0.0.1:5000/v2.0/tokens | python -mjson.tool

 curl -s -H 'Content-type: application/json' \
 -d '{auth: {tenantName: newuser, passwordCredentials:
 {username: newuser, password: admin}}}' \
 http://127.0.0.1:5000/v2.0/tokens | python -mjson.tool

 Then,enable read access to newuser:newuser

 curl –X PUT -i \
 -H X-Auth-Token: token of demo:demo \
 -H X-Container-Read: newuser:newuser \

 http://127.0.0.1:8080/v1/AUTH_f1723800c821453d9f22d42d1fbb334b/demodirc

 Check the permission of the container:

 curl -k -v -H 'X-Auth-Token:token of demo:demo' \

 http://127.0.0.1:8080/v1/AUTH_f1723800c821453d9f22d42d1fbb334b/demodirc

 This is the reply of the operation:

 HTTP/1.1 200 OK
 X-Container-Object-Count: 1
 X-Container-Read: newuser:newuser
 X-Container-Bytes-Used: 2735
 Accept-Ranges: bytes
 Content-Length: 24
 Content-Type: text/plain; charset=utf-8
 Date: Fri, 11 May 2012 07:30:23 GMT

 opensatck_essex_data.sh

 Now,the user newuser:newuser visit the container of demo:demo

 curl -k -v -H 'X-Auth-Token:token of newuser:newuser' \

 http://127.0.0.1:8080/v1/AUTH_f1723800c821453d9f22d42d1fbb334b/demodirc

 While,I got 403 error.Can someone help me?

 **
 --
 Best Regards

 ZhangJialong
 **


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [client] Event handling

2012-05-11 Thread Doug Hellmann
On Thu, May 10, 2012 at 5:58 PM, Matt Joyce matt.jo...@cloudscaling.comwrote:

 How are we doing event handling in the client?  Is there a blueprint on
 this somewhere?


What sort of event handling?

Doug
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova subsystem branches and feature branches

2012-05-11 Thread Mark McLoughlin
Hi James,

On Tue, 2012-05-08 at 14:03 -0700, James E. Blair wrote:
 Mark McLoughlin mar...@redhat.com writes:
 
  Hey,
 
  We discussed this during the baking area for features design summit
  session. I found that discussion fairly frustrating because there were
  so many of us involved and we all were either wanting to discuss
  slightly different things or had a slightly different understanding of
  what we were discussing. So, here's my attempt to put some more
  structure on the discussion.
 
  tl;dr - subsystem branches are managed by trusted domain experts and
  feature branches are just temporary rebasing branches on personal github
  forks. We've got a tonne of work to do figuring out how this would all
  work. We should probably pick a single subsystem and start with that.
 
  ...
 
  Firstly, problem definition:
 
- Nova is big, complex and has a fairly massive rate of churn. While 
  the nova-core team is big, there isn't enough careful review going 
  on by experts in particular areas and there's a consistently large
  backlog of reviews.
 
- Developers working on features are very keen to have their work 
  land somewhere and this leads to half-finished features being 
  merged onto master rather than developers collaborating to get a 
  feature to a level of completeness and polish before merging into 
  master.
 
  Some assumptions about the solution:
 
- There should be a small number of domain experts who can approve 
  changes to each of major subsystems. This will encourage 
  specialization and give more clear lines of responsibility.
 
- There should be a small number of project dictators who have final 
  approval on merge proposals, but who are not expected to review 
  every patch in great detail. This is good because we need someone 
  with an overall view of the project who can make sure efforts in 
  the various subsystems are coordinated, without that someone being 
  massively overloaded.
 
- New features should be developed on a branch and brought to a level 
  of completeness before being merged into master. This is good 
  because we don't want half-baked stuff in master but also because 
  it encourages developers to break their features into stages where 
  each stage of the work can be brought to completion and merged 
  before moving on to the next stage.
 
- In essence, we're assuming some variation of the kernel distributed 
  development model.
 
  (FWIW, my instinct is to avoid the kernel model on projects. Mostly 
  because it's extremely complex and massive overkill for most 
  projects. Looking at the kernel history with gitk is enough to send 
  anyone screaming for the hills. However, Nova seems to be big 
  enough that we're experiencing the same pressures that drove the 
  kernel to adopt their model)
 
  Ok, what are subsystem branches and how would they work?
 
- Subsystem branches would have a small number of maintainers who can 
  approve a change. These would be domain experts providing strong 
  oversight over a particular area.
 
  (In gerrit, this is a branch with a small team or single person who 
  can +1 approve a review)
 
 Agree.  We can create a branch, say:
 
   subsystem/scheduler/next

Subsystem repos probably make more sense than subsystem branches. A mass
of branches in the one repo isn't ideal, you should be able to choose
which subsystems you're fetching from.

So, that would be:

  openstack/nova/scheduler.git

But another option worth keeping an open mind about is:

  sandywalsh/nova/scheduler.git

In the kernel model, it's all about the subsystem maintainer rather than
the subsystem. 

e.g. rather than the PTL take on the responsibility of deciding what the
subsystems are and who the subsystem maintainers, you allow people to
step up and prove themselves in that role. The PTL decides on who the
subsystem maintainers are simply by deciding who to pull from.

Or put it another way, if the PTL anoints someone as a subsystem
maintainer and finds that maintainer is pretty much AWOL, then we have
to go through this unnecessarily painful process of closing down the
subsystem branch or finding a new maintainer. I see parallels with what
happened with the Nova sub-teams formed at the Essex summit.

 Which is the linux-next equivalent for that subsystem, where new
 features should be proposed and merged in preparation for the next
 merge window.  Merge window in the case of folsom may be 6 months
 long, but would shrink if we move to a more rolling release model.

Hmm, I don't think you want to confuse the issue thinking about next
branches yet. We need subsystem branches for what's going into *this*
release before worrying about branches for the next release.

We could well have an integration thing similar to linux-next, but not
just for next branches.

 We can have subsystem feature branches, 

Re: [Openstack] 'admin' role hard-coded in keystone and nova, and policy.json

2012-05-11 Thread Joshua Harlow
Cool, I'm glad that is the ultimate goal.

It seems like nova should be asking keystone for an initial policy template of 
some kind, which nova then fills in with its specifics, or policies can be 
fully defined in keystone; either works.

People should just be aware that making custom roles might not mean much if 
policy.json files are not also updated.
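
As a purely illustrative sketch (this is not nova's policy engine, and the
rule names are made up), here is roughly why a new role stays powerless until
the policy files mention it, assuming the list-of-lists policy.json format of
the time (outer list = OR, inner list = AND, checks like "role:admin"):

    import json

    # Hypothetical policy.json fragment: each rule lists the role checks
    # that allow the action.
    POLICY_JSON = """
    {
        "example:admin_only_action": [["role:admin"]],
        "example:admin_or_myadmin": [["role:admin"], ["role:myadmin"]]
    }
    """

    def check_matches(check, user_roles):
        kind, _, value = check.partition(":")
        return kind == "role" and value in user_roles

    def is_allowed(policy, action, user_roles):
        # Outer list: any alternative may pass (OR); inner list: every
        # check in the alternative must pass (AND).
        return any(all(check_matches(c, user_roles) for c in alternative)
                   for alternative in policy.get(action, []))

    policy = json.loads(POLICY_JSON)
    print(is_allowed(policy, "example:admin_only_action", ["myadmin"]))  # False
    print(is_allowed(policy, "example:admin_or_myadmin", ["myadmin"]))   # True

The custom 'myadmin' role only works for the second rule, because that is the
only rule that was edited to name it.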

On 5/11/12 10:51 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:

Most of nova is configurable via policy.json, but there is the issue with 
context.is_admin checks that still exist in a few places. We definitely need to 
modify that.

Joshua, the idea is that policy.json will ultimately be managed in keystone as 
well. Currently the policy.json is checked for modifications, so it would be 
possible to throw it on shared storage and modify it for every node at once 
without having to restart the nodes.  This is an interim solution until we 
allow for creating and retrieving policies inside of keystone.

Vish

On Thu, May 10, 2012 at 7:13 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:
I was also wondering about this, it seems there are lots of policy.json files 
with hard coded roles in them, which is weird since keystone supports the 
creation of roles and such, but if u create a role which isn't in a policy.json 
then u have just caused yourself a problem, which isn't very apparent...


On 5/10/12 2:32 PM, Salman A Baset saba...@us.ibm.com wrote:

It seems that the 'admin' role is hard-coded across nova and horizon. As a result, if 
I want to define a 'myadmin' role and grant it all the admin privileges, it does 
not seem possible. Is this a recognized limitation?

Further, is there some good documentation on policy.json for nova, keystone, 
and glance?

Thanks.

Best Regards,

Salman A. Baset
Research Staff Member, IBM T. J. Watson Research Center
Tel: +1-914-784-6248





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [client] Event handling

2012-05-11 Thread Matt Joyce
Well, every time we make an API query in the shell we get a return
result from the query.  How are we handling those return results and
evaluating codes, etc.?

On Fri, May 11, 2012 at 12:19 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:



 On Thu, May 10, 2012 at 5:58 PM, Matt Joyce 
 matt.jo...@cloudscaling.comwrote:

 How are we doing event handling in the client?  Is there a blueprint on
 this somewhere?


 What sort of event handling?

 Doug


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [client] Event handling

2012-05-11 Thread Doug Hellmann
On Fri, May 11, 2012 at 3:26 PM, Matt Joyce matt.jo...@cloudscaling.comwrote:

 Well, every time we make an API query in the shell we get a return
 result from the query.  How are we handling those return results and
 evaluating codes, etc.?


Don't the client libraries handle that? I feel like I'm still missing
something...




 On Fri, May 11, 2012 at 12:19 PM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:



 On Thu, May 10, 2012 at 5:58 PM, Matt Joyce 
 matt.jo...@cloudscaling.comwrote:

 How are we doing event handling in the client?  Is there a blueprint on
 this somewhere?


 What sort of event handling?

 Doug



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [metering] public API design

2012-05-11 Thread Loic Dachary
On 05/11/2012 05:55 PM, Doug Hellmann wrote:
 During yesterday's meeting we discussed the API proposal at
 http://wiki.openstack.org/EfficientMetering#API and came up with a few
 missing items and other points for discussion. We should try to work out
 those details on the list before the meeting next week.

 The original proposal included these API calls:

 GET list components
 GET list components meters (argument : name of the component)
 GET list [user_id|project_id|source]
 GET list of meter_type
 GET list of events per [user_id|project_id|source] ( allow to specify user_id 
 or project_id or both )
 GET sum of (meter_volume, meter_duration) for meter_type and 
 [user_id|project_id|source]

 They can be broken down into three groups:

 - Discover the sorts of things the server can provide:

 - list the components providing metering data
 - list the meters for a given component
 - list the known users
 - list the known projects
 - list the known sources
 - list the types of meters known

 - Fetch raw event data, without aggregation:

 - per user
 - per project
 - per source
 - per user and project

 - Produce aggregate views of the data:

 - sum volume field for meter type over a time period
 - per user
 - per project
 - per source
 - sum duration field for meter type over a period of time
 - per user
 - per project
 - per source
I updated the wiki with these three groups, it is a good starting point for 
discussion.
http://wiki.openstack.org/EfficientMetering?action=diff&rev2=75&rev1=74

 We agreed that all of the queries related to metering data would need
 to have parameters to set the start and end times with the start time
 inclusive and the end time exclusive (i.e., start <= timestamp < end).

 Callers also are likely to want to filter the raw data queries based
 on the meter type, so we need to add that as an optional argument
 there. The aggregate values have the meter type as a required argument
 (because it does not make sense to aggregate data from separate meters
 together).
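
As a minimal in-memory sketch of that rule (field names follow the proposal;
everything else is an assumption, not ceilometer code), the half-open window
looks like this:

    import datetime

    def sum_volume(events, meter_type, project_id, start, end):
        """Sum meter_volume for one meter type and project over [start, end)."""
        return sum(e["meter_volume"] for e in events
                   if e["meter_type"] == meter_type
                   and e["project_id"] == project_id
                   and start <= e["timestamp"] < end)

    events = [
        {"meter_type": "instance", "project_id": "p1", "meter_volume": 1,
         "timestamp": datetime.datetime(2012, 5, 11, 10, 0)},
        {"meter_type": "instance", "project_id": "p1", "meter_volume": 1,
         "timestamp": datetime.datetime(2012, 5, 12, 0, 0)},
    ]

    # The second event is stamped exactly at the end of the window, so it
    # is excluded here and counted in the next period instead.
    print(sum_volume(events, "instance", "p1",
                     datetime.datetime(2012, 5, 11, 0, 0),
                     datetime.datetime(2012, 5, 12, 0, 0)))  # -> 1

That exclusion is what lets adjacent periods add up without double counting.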

 There are a few other queries missing from that list. The items below
 are based on the meeting notes and my own thoughts from yesterday,
 please add to the list if I have left anything out (or suggest why we
 should not implement something suggested here, of course):

 - list discrete events that may not have a duration (instance
 creation, IP allocation, etc.)
 - list raw event data for a resource (what happened with a specific
 instance?)
 - aggregate event data per meter type for a resource over a period of
 time (what costs are related to this instance?)
 - sum volume for meter type over a time period for a specific resource
 (how much total bandwidth was used by a VIF?)
 - sum duration for meter type over a time period for a specific
 resource (how long did an instance run?)
 - metadata for resources (such as location of instances)
 - aggregating averages in addition to sums

I've added this list as a note in the API chapter.
http://wiki.openstack.org/EfficientMetering?action=diff&rev2=76&rev1=75
 Some of these items may be better handled in the consumer of this API
 (the system that actually bills the user). I am curious to know how
 others are handling auditing or other billing detail reporting for
 users.
I will be in a meeting with the participants of the 
http://www.opencloudware.org/ project early next week and I'll ask them. Maybe 
Daniel Dyer could share a practical experience with us.

 In order to support the discrete events, we need to capture
 them. Perhaps those could be implemented as additional counter
 types. Thoughts?
Are there resources that should be billed (hence metered) if their lifespan is 
shorter than the polling / publishing event period? I mean that if someone 
creates a VM and immediately destroys it, does it really matter? While writing 
this statement, I thought of a way to abuse a system that does not account for 
discrete events like "created a VM" but only accounts for "this VM has run for 
at least X minutes" :-) One could write software to launch short-lived VMs at 
specific times and get computing power for free.

Do discrete events need to be separate? Back to the example above, someone 
creates a VM: that does not create a record in the ceilometer storage. The VM 
is destroyed a few seconds later: it creates an event, as if it was polled / 
published to measure the uptime of the VM. The c1 meter would show the VM has 
been up for 15 seconds since it was created. If the c1 meter is generated from 
an "exists" event captured from http://wiki.openstack.org/SystemUsageData that 
occurs every 5 minutes, the uptime will be 5 minutes more than the previous one.
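
To make that concrete, here is a toy model (not ceilometer code; the
timestamps and the 300 second period are assumptions) comparing accounting
only at polling ticks with also emitting a sample on the destroy event:

    POLL_PERIOD = 300   # seconds, i.e. a 5 minute "exists" event period

    def uptime_polling_only(created_at, destroyed_at):
        """Uptime credited when the VM is only observed on polling ticks."""
        ticks = [t for t in range(0, destroyed_at + 1, POLL_PERIOD)
                 if created_at <= t <= destroyed_at]
        return ticks[-1] - created_at if ticks else 0

    def uptime_with_destroy_event(created_at, destroyed_at):
        """Uptime credited when the destroy event also produces a sample."""
        return destroyed_at - created_at

    # A VM that lives 15 seconds, entirely between two polling ticks:
    print(uptime_polling_only(10, 25))        # -> 0: never sampled, never billed
    print(uptime_with_destroy_event(10, 25))  # -> 15

The first variant is the loophole described above; the second is the behaviour
sketched in this paragraph, where destroying the VM produces a sample of its own.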

While thinking about this use case I also had a simple idea to optimize the 
storage of events and added it to the wiki
http://wiki.openstack.org/EfficientMetering?action=diff&rev2=77&rev1=76

Cheers

-- 
Loïc Dachary Chief Research Officer
// eNovance labs   http://labs.enovance.com
// ✉ l...@enovance.com  ☎ +33 1 49 70 99 82


Re: [Openstack] 'admin' role hard-coded in keystone and nova, and policy.json

2012-05-11 Thread Gabriel Hurley
In addition to these hardcoded admin (and Member) role names, for legacy 
reasons there are also several roles in the keystone sample data which have 
never been used in OpenStack (e.g. netadmin, etc.):

https://github.com/openstack/keystone/blob/master/tools/sample_data.sh#L119

Just sayin', ;-)


-  Gabriel

From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net 
[mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net] On 
Behalf Of Joshua Harlow
Sent: Thursday, May 10, 2012 7:13 PM
To: Salman A Baset; openstack
Subject: Re: [Openstack] 'admin' role hard-coded in keystone and nova, and 
policy.json

I was also wondering about this, it seems there are lots of policy.json files 
with hard coded roles in them, which is weird since keystone supports the 
creation of roles and such, but if u create a role which isn't in a policy.json 
then u have just caused yourself a problem, which isn't very apparent...

On 5/10/12 2:32 PM, Salman A Baset saba...@us.ibm.com wrote:
It seems that the 'admin' role is hard-coded across nova and horizon. As a result, if 
I want to define a 'myadmin' role and grant it all the admin privileges, it does 
not seem possible. Is this a recognized limitation?

Further, is there some good documentation on policy.json for nova, keystone, 
and glance?

Thanks.

Best Regards,

Salman A. Baset
Research Staff Member, IBM T. J. Watson Research Center
Tel: +1-914-784-6248

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] 'admin' role hard-coded in keystone and nova, and policy.json

2012-05-11 Thread Dolph Mathews
On Fri, May 11, 2012 at 2:25 PM, Joshua Harlow harlo...@yahoo-inc.comwrote:

  Cool, I’m glad that is the ultimate goal.


Working on it! https://blueprints.launchpad.net/keystone/+spec/rbac-keystone



 It seems like nova should be asking keystone for an initial policy
 template of some kind, which nova then fills in with its “specifics”, or
 policies can be fully defined in keystone; either works.


Policy will be fully defined in keystone, and the results will be passed to
nova, etc as part of the auth validation response (what capabilities can
this user perform on this tenant?).



 People should just be aware that making custom roles might not mean much
 if policy.json files are not also updated.


Today, that's completely true.




 On 5/11/12 10:51 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:

 Most of nova is configurable via policy.json, but there is the issue with
 context.is_admin checks that still exist in a few places. We definitely
 need to modify that.

 Joshua, the idea is that policy.json will ultimately be managed in
 keystone as well. Currently the policy.json is checked for modifications,
 so it would be possible to throw it on shared storage and modify it for
 every node at once without having to restart the nodes.  This is an interim
 solution until we allow for creating and retrieving policies inside of
 keystone.

 Vish

 On Thu, May 10, 2012 at 7:13 PM, Joshua Harlow harlo...@yahoo-inc.com
 wrote:

 I was also wondering about this, it seems there are lots of policy.json
 files with hard coded roles in them, which is weird since keystone supports
 the creation of roles and such, but if u create a role which isn’t in a
 policy.json then u have just caused yourself a problem, which isn’t very
 apparent...


 On 5/10/12 2:32 PM, Salman A Baset saba...@us.ibm.com wrote:

 It seems that the 'admin' role is hard-coded across nova and horizon. As a
 result, if I want to define a 'myadmin' role and grant it all the admin
 privileges, it does not seem possible. Is this a recognized limitation?

 Further, is there some good documentation on policy.json for nova,
 keystone, and glance?

 Thanks.

 Best Regards,

 Salman A. Baset
 Research Staff Member, IBM T. J. Watson Research Center
 Tel: +1-914-784-6248



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [metering] resources metadata (was: public API design)

2012-05-11 Thread Loic Dachary

 - The interesting metadata for a resource may depend on the type of
 resource. Do we need separate tables for that or can we normalize
 somehow?
 - How do we map a resource to the correct version of its metadata at
 any given time? Timestamps seem brittle.
 - Do we need to reflect the metadata in the aggregation API?

Hi,

I started a new thread for the metadata topic; I suspect it deserves it. 
Although I was reluctant to acknowledge that the metadata should be stored by 
the metering, yesterday's meeting made me realize that it was mandatory. The 
compelling reason (for me ;-) is that it would make it much more difficult to 
implement a billing system if the metering does not provide a simple way to 
extract metadata and display it in a human-readable way (or one meaningful to 
accountants?).

I see two separate questions:

a) how to store and query metadata?
b) what are the semantics of metadata for a given resource?

My hunch is that there will never be a definitive answer to b) and that the 
best we can do is to provide a format and leave the semantics to the 
documentation of the metering system, which explains the metadata of each resource.

Regarding the storage of the metadata, the metering could listen for / poll the 
events that create / update / delete a given resource and store a history log 
indexed by the resource id. Something like:

{ meter_type: TTT,
  resource_id: RRR,
  metadata: [{ version: VVV,
               timestamp: TIME1,
               payload: PAYLOAD1 },
             { version: VVV,
               timestamp: TIME3,
               payload: PAYLOAD2 }]
}

With PAYLOAD being the resource-dependent metadata, whose content depends on the 
type of the resource. The metadata array is an ordered list of the successive 
states of the resource over time, with the VVV version accounting for changes 
in the format of the payload.

The query would be :

GET /resource/meter_type/resource_id/TIME2

and it would return PAYLOAD1 if TIME2 is in the range [TIME1,TIME3[
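
A minimal lookup sketch for that query, assuming the record layout above and
timestamps that sort in ascending order (the bisect approach is only an
illustration, not an agreed design):

    import bisect

    def payload_at(metadata, when):
        """Return the payload that was current at 'when', or None if the
        resource did not exist yet."""
        times = [entry["timestamp"] for entry in metadata]   # ascending
        i = bisect.bisect_right(times, when) - 1
        return metadata[i]["payload"] if i >= 0 else None

    history = [
        {"version": "VVV", "timestamp": 100, "payload": "PAYLOAD1"},
        {"version": "VVV", "timestamp": 300, "payload": "PAYLOAD2"},
    ]

    print(payload_at(history, 200))   # -> PAYLOAD1 (200 falls in [100, 300))
    print(payload_at(history, 300))   # -> PAYLOAD2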

I'm not sure why you think timestamps are brittle. Maybe I'm missing something.

Cheers

-- 
Loïc Dachary Chief Research Officer
// eNovance labs   http://labs.enovance.com
// ✉ l...@enovance.com  ☎ +33 1 49 70 99 82


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova subsystem branches and feature branches

2012-05-11 Thread Mark McLoughlin
Hey,

So, one thing really stuck out to me when comparing our process to
the kernel process:

  In the kernel process, maintainers are responsible for running 
  'git-merge' and they see it as their job to resolve conflicts.

  In our process, Jenkins runs 'git-merge' and runs away screaming at 
  the first sign of conflict.

The kernel developers I talked to see this as the biggest hindrance to
scaling and are fairly incredulous we've managed to scale this far with
it.

The problem is we're pushing the responsibility for resolving merge
conflicts back on the author of the original commit. This creates a
pretty crazy scenario where folks are racing each other to get merged
and the loser gets to rebase again, and maybe again and again.

With subsystem branches, we're making a tiny step away from that. Either
the subsystem maintainer or main branch maintainer will need to resolve
the conflicts since the subsystem branch can't just be rebased.

So, I'd like to see:

  - Subsystem maintainer proposes a tree for merging into master.

(How can we make this proposal appear in the maintainer's queue as a
single "merge this work" item?)

  - A main branch maintainer (the PTL?) reviews the proposal, pulls
    the commit into his master branch in his local repo (i.e. creates a
    new merge commit) and resolves any conflicts

  - Main branch maintainer pushes this to Jenkins for gating and final 
merging into master (unless another commit gets merged in the 
meantime, this will be a fast-forward merge)

In the longer term, I think I'd like to see gating work more like
this:

  - Developers submit patches to gerrit for review

  - Jenkins picks these patches up and runs the gating tests on them

  - When the code has been acked by reviewers and Jenkins, it goes into 
the maintainer's merge queue

  - The maintainer merges stuff in batches from his queue in gerrit into
his local repo, resolving conflicts etc.

  - The maintainer pushes the resulting tree to a private repo somewhere

  - Jenkins picks up the tree, runs the gating tests and fast-forward 
merges it into master

There's a whole bunch of positives to that scheme:

  - Less frustrating pushback on developers when there are merge 
conflicts, removing that scaling limitation

  - There's less of a rush by folks to get stuff merged, because merge 
conflicts don't cause such a merry-go-round of rebasing

  - It fits with what we need for the subsystem branch scheme

  - The PTL gets to cast his eye over everything, allowing him to more 
easily flame people if a patch wasn't adequately reviewed

  - The PTL has a lot more discretion to e.g. merge a security patch 
quickly

  - The PTL's batches serve as better synchronization points, so the 
history looks cleaner

And for completeness, a variant of the scheme above would be where the
maintainer may cherry-pick/rebase commits (rather than merge them) with
some exceptions like subsystem merge props or long patch series. This
would be a massive improvement in the cleanliness of our history and is
exactly analogous to what the kernel maintainers do when applying
patches submitted to a mailing list.

Cheers,
Mark.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [metering] licensing

2012-05-11 Thread Doug Hellmann
I was very surprised to see the change to license ceilometer as AGPL [1].
Why are we not using the same Apache v2 license that all other OpenStack
projects are using?

Doug

[1] https://review.stackforge.org/#/c/29/
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [metering] licensing

2012-05-11 Thread Doug Hellmann
And for reference, from Article VIII of the bylaws for the Foundation:

"The Project shall not accept contributions of software code unless such
contribution is made on the terms of the Apache 2.0 license, and the
contributor has executed the applicable Contributor License Agreement."

http://wiki.openstack.org/Governance/Foundation/Bylaws

On Fri, May 11, 2012 at 4:01 PM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:

 I was very surprised to see the change to license ceilometer as AGPL [1].
 Why are we not using the same Apache v2 license that all other OpenStack
 projects are using?

 Doug

 [1] https://review.stackforge.org/#/c/29/


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Mailing-list split

2012-05-11 Thread Duncan McGreggor
On Fri, Apr 27, 2012 at 9:44 AM, Duncan McGreggor dun...@dreamhost.com wrote:
 On Fri, Apr 27, 2012 at 8:46 AM, Monty Taylor mord...@inaugust.com wrote:
 Hey everyone!

 On 04/27/2012 05:04 AM, Thierry Carrez wrote:

 To avoid Launchpad list slowness, we would run the new openstack-dev
 list off lists.openstack.org. Given the potential hassle of dealing with
 spam and delivery issues on mission-critical MLs, we are looking into
 the possibility of outsourcing the maintenance of lists.openstack.org to
 a group with established expertise running mailman instances. Please let
 us know ASAP if you could offer such services. We are not married to
 mailman either -- if an alternative service offers good performance and
 better integration (like OpenID-based subscription to integrate with our
 SSO), we would definitely consider it.

 Just to be clear - I definitely think that mailing lists are an
 important part of dev infrastructure and would love for this to be a
 fully integrated part of all of the rest of our tools. However, the
 current set of active infrastructure team members have huge todo lists
 at the moment. So the biggest home run from my perspective would be if
 someone out there had time or resources and wanted to join us on the
 infra team to manage this on our existing resources (turns out we have
 plenty of servers for running this, and even a decent amount of
 expertise, just missing manpower). The existing team would be more than
 happy to be involved, and it would help avoid get-hit-by-a-truck issues.
 We're a pretty friendly bunch, I promise.

 Any takers? Anybody want to pony up somebody with some bandwidth to
 admin a mailman? Respond back here or just find us in #openstack-infra
 and we'll get you plugged in and stuff.

 Thanks!
 Monty

 Count me in, Monty.

 I've been managing mailman lists for about 12 years now (and,
 incidentally, Barry and I are bruthas from anutha mutha), so I'd be
 quite comfortable handling those responsibilities. I can couple it
 with the python.org SIG mail list that I manage, so there'd be zero
 context switching.

 d

Hey folks, quick update for ya...

Here are some Etherpads for this effort:
 * http://etherpad.openstack.org/openstack-dev-ml-prefixes
 * http://etherpad.openstack.org/lists-changes
 * http://etherpad.openstack.org/mailmain-install-and-notes
 * http://etherpad.openstack.org/mailmain-migration-notes

Mailman is installed, with the site-wide mailman list set up.

James Blair has been working on the Exim set up, and I'll be following
up on his changes when I get some time this weekend.

It would be great to migrate our old mail list data:
 * from the old host to the new one
 * from LP archives (don't know if that's possible)

Some of us know folks who maintain launchpad (and mailman, for that
matter), so we'll be reaching out to folks at Canonical to see what we
can do here.

James Blair also has a good plan for DNS, setting a hostname set up
for testing, and then rolling over once we're ready to go live.

ttx had some good input on making sure the list description had the
necessary info about tagging subjects in list emails. Stef had some
ideas about customizing the look and feel. That's all in the notes
linked above.

There have been other conversations in various places, and I'll gather
those up and send out another email.

More soon,

d

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [metering] public API design

2012-05-11 Thread Doug Hellmann
On Fri, May 11, 2012 at 3:40 PM, Loic Dachary l...@enovance.com wrote:

 On 05/11/2012 05:55 PM, Doug Hellmann wrote:
  During yesterday's meeting we discussed the API proposal at
  http://wiki.openstack.org/EfficientMetering#API and came up with a few
  missing items and other points for discussion. We should try to work out
  those details on the list before the meeting next week.
 
  The original proposal included these API calls:
 
  GET list components
  GET list components meters (argument : name of the component)
  GET list [user_id|project_id|source]
  GET list of meter_type
  GET list of events per [user_id|project_id|source] ( allow to specify
 user_id or project_id or both )
  GET sum of (meter_volume, meter_duration) for meter_type and
 [user_id|project_id|source]
 
  They can be broken down into three groups:
 
  - Discover the sorts of things the server can provide:
 
  - list the components providing metering data
  - list the meters for a given component
  - list the known users
  - list the known projects
  - list the known sources
  - list the types of meters known
 
  - Fetch raw event data, without aggregation:
 
  - per user
  - per project
  - per source
  - per user and project
 
  - Produce aggregate views of the data:
 
  - sum volume field for meter type over a time period
  - per user
  - per project
  - per source
  - sum duration field for meter type over a period of time
  - per user
  - per project
  - per source
 I updated the wiki with these three groups, it is a good starting point
 for discussion.
 http://wiki.openstack.org/EfficientMetering?action=diff&rev2=75&rev1=74
 
  We agreed that all of the queries related to metering data would need
  to have parameters to set the start and end times with the start time
  inclusive and the end time exclusive (i.e., start <= timestamp < end).
 
  Callers also are likely to want to filter the raw data queries based
  on the meter type, so we need to add that as an optional argument
  there. The aggregate values have the meter type as a required argument
  (because it does not make sense to aggregate data from separate meters
  together).
 
  There are a few other queries missing from that list. The items below
  are based on the meeting notes and my own thoughts from yesterday,
  please add to the list if I have left anything out (or suggest why we
  should not implement something suggested here, of course):
 
  - list discrete events that may not have a duration (instance
  creation, IP allocation, etc.)
  - list raw event data for a resource (what happened with a specific
  instance?)
  - aggregate event data per meter type for a resource over a period of
  time (what costs are related to this instance?)
  - sum volume for meter type over a time period for a specific resource
  (how much total bandwidth was used by a VIF?)
  - sum duration for meter type over a time period for a specific
  resource (how long did an instance run?)
  - metadata for resources (such as location of instances)
  - aggregating averages in addition to sums
 
 I've added this list as a note in the API chapter.
 http://wiki.openstack.org/EfficientMetering?action=diff&rev2=76&rev1=75
  Some of these items may be better handled in the consumer of this API
  (the system that actually bills the user). I am curious to know how
  others are handling auditing or other billing detail reporting for
  users.
 I will be in a meeting with the participants of the
 http://www.opencloudware.org/ project early next week and I'll ask them.
 Maybe Daniel Dyer could share a practical experience with us.

  In order to support the discrete events, we need to capture
  them. Perhaps those could be implemented as additional counter
  types. Thoughts?
 Are there resources that should be billed (hence metered) if their
 lifespan is shorter than the polling / publishing event period? I mean
 that if someone creates a VM and immediately destroys it, does it really
 matter? While writing this statement, I thought of a way to abuse a system
 that does not account for discrete events like "created a VM" but only
 accounts for "this VM has run for at least X minutes" :-) One could write
 software to launch short-lived VMs at specific times and get computing power
 for free.


Exactly. We may bill the customer for the first X minutes of use for a VM
just for creating it, and then do the math necessary to avoid double
billing them during the first billing cycle.
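
As a toy illustration of that math (the 10 minute minimum is a made-up "X",
and none of this is agreed billing logic):

    MINIMUM_MINUTES = 10   # hypothetical "X" billed up front at creation time

    def extra_billable_minutes(total_runtime_minutes, minimum=MINIMUM_MINUTES):
        """Minutes to bill at the end of the first cycle, on top of the
        minimum already charged when the VM was created."""
        return max(total_runtime_minutes - minimum, 0)

    print(extra_billable_minutes(3))    # short-lived VM: 0 extra, minimum covers it
    print(extra_billable_minutes(95))   # long-running VM: 85 more minutes billed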

Do discrete events need to be separate? Back to the example above, someone
 creates a VM: that does not create a record in the ceilometer storage. The
 VM is destroyed a few seconds later: it creates an event, as if it was
 polled / published to measure the uptime of the VM. The c1 meter would show
 the VM has been up for 15 seconds since it was created. If the c1 meter is
 generated from an "exists" event captured from
 http://wiki.openstack.org/SystemUsageData that occurs every 5 minutes,
 the uptime will be 5 minutes more than the previous one.

Re: [Openstack] Mailing-list split

2012-05-11 Thread Duncan McGreggor
Re-sending to the list...

On Fri, May 4, 2012 at 6:07 PM, Stefano Maffulli stef...@openstack.org wrote:

 On Fri 04 May 2012 12:38:16 PM PDT, Duncan McGreggor wrote:

 Oh, gmane is fine, I just said the first thing that came into my head.
 Now that I think about it, I'm a Emacs/Gnus guy, so I guess if I did
 have an opinion, it would probably be go gmane.  :)

 they're all suboptimal but so much better than pipermail... or not...
 the situation is just bad. Let's wait for MM3 or something else.

 I can't find the openstack list on gmane: is there an archive there now?

We'll subscribe to gmane (which can also add us to mail-archive.com)
as soon as the lists are in their new locations.

 We need to talk about migrating data:
  * from the old host to the new one
  * from LP archives (don't know if that's possible)

 we can migrate data from LP archives (we can ask them for the mbox). I
 wonder if we need to do that though.

It'd be nice to offer folks as a download, so they can search mail
archives off-line. I've certainly done that in the past.

d

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [metering] public API design

2012-05-11 Thread Doug Hellmann
On Fri, May 11, 2012 at 3:40 PM, Loic Dachary l...@enovance.com wrote:

 On 05/11/2012 05:55 PM, Doug Hellmann wrote:


 While thinking about this use case I also had a simple idea to optimize
 the storage of events and added it to the wiki
 http://wiki.openstack.org/EfficientMetering?action=diff&rev2=77&rev1=76


It will probably be just as efficient to handle that during aggregate
calculation one time, rather than trying to do on-the-fly aggregation.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova subsystem branches and feature branches

2012-05-11 Thread James E. Blair
Mark McLoughlin mar...@redhat.com writes:

 Hey,

 So, one thing really stuck out to me when comparing our process to
 the kernel process:

   In the kernel process, maintainers are responsible for running 
   'git-merge' and they see it as their job to resolve conflicts.

   In our process, Jenkins runs 'git-merge' and runs away screaming at 
   the first sign of conflict.

Gerrit is what is responsible for merging commits, not Jenkins.  Jenkins
is listed as the author of some merge commits because it has instructed
Gerrit to merge those changes.

Gerrit fast-forwards branches when it is able, and automatically merges
when it is unable to fast-forward.  Every merge commit you see in the
history that is made by Jenkins represents a rebase that _did not_ have
to be done by a human.  As you may have noted, it's quite a lot
actually.  Ultimately, I think it represents a large savings in human
work.

 The kernel developers I talked to see this as the biggest hindrance to
 scaling and are fairly incredulous we've managed to scale this far with
 it.

I might agree with them if I had only read your description above.

Keep in mind that the Kernel has something like six times the rate of
change and number of contributors as all of OpenStack.  The Android Open
Source Project also uses Gerrit and it too is quite a bit larger than we
are.

Vish, Thierry, and I spent some time together this week at UDS trying to
reconcile their needs and your suggestions.  I believe Thierry is going
to write that up and send it to the list soon.

-Jim

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp