Re: [openstack-dev] [neutron] Why is network and subnet modeled separately?

2013-08-15 Thread P Balaji-B37839
Though we can add multiple subnets to a network, we don't have an option to 
attach/select a specific subnet for a port on that network. It is observed that 
when all the IP addresses in the first subnet are consumed, allocation simply 
moves on to the next subnet available on the network.

More flexibility in selecting a subnet within a network would be a good thing to have.

Any thoughts, or use-case information explaining why there is no option to 
select a subnet within a network?

Regards,
Balaji.P

From: Lorin Hochstein [mailto:lo...@nimbisservices.com]
Sent: Thursday, August 15, 2013 1:43 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] Why is network and subnet modeled 
separately?

Here's a neutron implementation question: why does neutron model network and 
subnet as separate entities?

Or, to ask another way, are there any practical use cases where you would 
*not* have a one-to-one relationship between neutron networks and neutron 
subnets in an OpenStack deployment? (e.g. one neutron network associated with 
multiple neutron subnets, or one neutron network associated with zero neutron 
subnets)?

Lorin
--
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
http://www.nimbisservices.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Why is network and subnet modeled separately?

2013-08-15 Thread Stephen Gran

Hi,

On 14/08/13 21:12, Lorin Hochstein wrote:

Here's a neutron implementation question: why does neutron model
network and subnet as separate entities?

Or, to ask another way, are there any practical use cases where you
would *not* have a one-to-one relationship between neutron networks and
neutron subnets in an OpenStack deployment? (e.g. one neutron network
associated with multiple neutron subnets, or one neutron network
associated with zero neutron subnets)?


Different tenants might both use the same subnet range on different 
layer 2 networks.

On one layer 2 network, you might run dual-stacked, i.e. IPv4 and IPv6.

Supporting these use cases necessitates modeling them separately.
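As a concrete illustration of the dual-stack case, using the neutron CLI of the
day (names and prefixes invented):

    neutron net-create dual-stack-net
    neutron subnet-create dual-stack-net 192.0.2.0/24 --ip-version 4
    neutron subnet-create dual-stack-net 2001:db8::/64 --ip-version 6

One network, two subnets -- a relationship a merged model couldn't express.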

Cheers,
--
Stephen Gran
Senior Systems Integrator - theguardian.com

--


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-15 Thread Kieran Spear

On 15/08/2013, at 3:02 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/14/2013 12:25 PM, Mac Innes, Kiall wrote:
  So, are we saying that UIs built on OpenStack APIs shouldn't be able to
 show traditional pagination controls? Or am I missing how this should
 work with marker/limit?
 
 No, not quite what I'm saying. The operation to get the total number of pages 
 -- or more explicitly, the operation to get the *exact* number of pages in a 
 list result -- is expensive, and in order to be reasonably efficient, some 
 level of caching is almost always needed.
 
 However, being able to page forwards and backwards is absolutely possible 
 with limit/marker solutions. It simply requires the paging client (in this 
 case, Horizon) to store the list of previously seen page links returned in 
 listing results (there are next and prev links in the returned image-list 
 results, for example).

Is there a prev link? It's optional in Compute v2 and Images v1 and Nova/Glance 
don't seem to be returning it. I suspect Nova v3 isn't either but I haven't 
tested it. There's no mention of it in Images v2 at all and no prev returned 
there either.

If there is a next and prev link in the returned results then Horizon wouldn't 
need to store anything - the links can be rendered straight into the page.
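For readers who haven't seen them, such links look roughly like this in a
paginated listing (an illustrative shape only, not any one API's exact schema):

    {
      "images": [{"id": "uuid1"}, {"id": "uuid2"}],
      "links": [
        {"rel": "next", "href": "/v2/images?marker=uuid2&limit=2"},
        {"rel": "prev", "href": "/v2/images?marker=uuid0&limit=2"}
      ]
    }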

Anyway, you're right, this is all secondary to the need for decent filtering.

 
 e.g. for 11 pages of content, something like: 1 2 3 .. 10 11
 
 Yeah, that's not an efficient operation unless you have some sort of caching 
 in place. You can use things like MySQL's SQL_CALC_FOUND_ROWS, but that is 
 not efficient since instead of stopping the query after LIMIT rows, you end 
 up executing the entire query to determine the number of rows that *would* 
 have been returned if no LIMIT was applied. In order to make such a thing 
 efficient, you'd want to cache the value of SQL_CALC_FOUND_ROWS in the 
 session and use that to calculate the number of pages.
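A hedged illustration of the MySQL-specific pattern described above (the cursor
is an ordinary DB-API cursor; the table name is invented):

    # SQL_CALC_FOUND_ROWS makes MySQL materialize the full result set just
    # to count it, even though only LIMIT rows are actually returned.
    cursor.execute("SELECT SQL_CALC_FOUND_ROWS id, name FROM images"
                   " ORDER BY id LIMIT 20")
    page = cursor.fetchall()
    cursor.execute("SELECT FOUND_ROWS()")
    total = cursor.fetchone()[0]     # the value you'd cache in the session
    num_pages = (total + 19) // 20   # e.g. to render "1 2 3 .. 10 11"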
 
 It's something that can be done, but isn't, IMHO, worth it to get the 
 traditional UI you describe. Instead, a good filter/search UI would be 
 better, with just next/prev links.
 
 Best,
 -jay
 
 Thanks,
 Kiall
 
 On 13/08/13 22:45, Jay Pipes wrote:
 On 08/13/2013 05:04 PM, Gabriel Hurley wrote:
 I have been one of the earliest, loudest, and most consistent PITAs about 
 pagination, so I probably oughta speak up. I would like to state three 
 facts:
 
 1. Marker + limit (i.e. forward-only) pagination is horrific for building 
 a user interface.
 2. Pagination doesn't scale.
 3. OpenStack's APIs have historically had useless filtering capabilities.
 
 In a world where pagination is a must-have feature we need to have page 
 number + limit pagination in order to build a reasonable UI. Ironically 
 though, I'm in favor of ditching pagination altogether. It's the 
 lowest common denominator, used because we as a community haven't buckled 
 down and built meaningful ways for our users to get to the data they 
 really want.
 
 Filtering is great, but it's only 1/3 of the solution. Let me break it 
 down with problems and high level solutions:
 
 Problem 1: I know what I want and I need to find it.
 Solution: filtering/search systems.
 
 This is a good place to start. Glance has excellent filtering/search
 capabilities -- built in to the API from early on in the Essex
 timeframe, and only expanded over the last few releases.
 
 Pagination solutions should build on a solid filtering/search
 functionality in the API, where there is a consistent sort key and
 direction (either hard-coded or user-determined, doesn't matter).
 
 Limit/offset pagination solutions (forward and backwards paging, random
 skip-to-a-page) are inefficient from a SQL query perspective and should
 be a last resort, IMO, compared to limit/marker. With some smart
 session-storage of a page's markers, backwards paging with limit/marker
 APIs is certainly possible -- just store the previous page's marker.
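A minimal sketch of the limit/marker pattern being described (table and column
names invented; MySQL-style DB-API placeholders):

    def list_page(cursor, marker=None, limit=20):
        # The marker is the last id on the previous page, so the database
        # stops scanning after `limit` rows instead of counting everything.
        if marker is None:
            cursor.execute("SELECT id, name FROM images"
                           " ORDER BY id LIMIT %s", (limit,))
        else:
            cursor.execute("SELECT id, name FROM images"
                           " WHERE id > %s ORDER BY id LIMIT %s",
                           (marker, limit))
        return cursor.fetchall()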
 
 Problem 2: I don't know what I want, and it may or may not exist.
 Solution: tailored discovery mechanisms.
 
 This should not be a use case that we spend much time on. Frankly, this
 use case can be summarized as the window shopper scenario. Providing a
 quality search/filtering mechanism, including the *API* itself providing
 REST-ful discovery of the filters and search criteria the API supports,
 is way more important...
 
 Problem 3: I need to know something about *all* the data in my system.
 Solution: reporting systems.
 
 Sure, no disagreement there.
 
 We've got the better part of none of that.
 
 I disagree. Some of the APIs have support for a good bit of
 search/filtering. We just need to bring all the projects up to search
 speed, Captain.
 
 Best,
 -jay
 
 p.s. I very often go to the second and third pages of Google searches.
 :) But I never skip to the 127th page of results.
 
But I'd like to solve these issues. I have lots of thoughts on all of
 those, and I think the UX and design 

[openstack-dev] Multiple workers for neutron API server

2013-08-15 Thread Yingjun Li
Hi, all.

Currently, there is only one process (pid) running for neutron-server. That's
not enough to handle the requests under heavy API load, so multiple
workers for neutron-server are urgently needed.

Please refer to
https://blueprints.launchpad.net/neutron/+spec/multi-workers-for-api-server to
get more details, and the BP needs approval from the core team.
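As a rough illustration of the pre-fork model such a blueprint implies (a
minimal sketch only, not the proposed implementation; port and worker count
are arbitrary):

    import os

    import eventlet
    from eventlet import wsgi


    def app(environ, start_response):
        # Stand-in for the real neutron-server WSGI application.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok\n']

    # The parent binds the socket once; forked children inherit it and
    # accept() concurrently, spreading requests across all processes.
    sock = eventlet.listen(('0.0.0.0', 9696))

    for _ in range(3):  # fork 3 children; the parent serves too
        if os.fork() == 0:
            break

    wsgi.server(sock, app)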

Thanks!

Best,
Yingjun
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for approving Auto HA development blueprint.

2013-08-15 Thread 한승진
Hi Konglingxian

1. evacuate
 - A Nova user requests the evacuate operation manually.
 - Evacuate calls the rebuild method in compute.

2. auto-ha
 - All operations happen automatically.
 - The administrator's only roles are registering auto-ha hosts, fixing the
broken host, and restoring migrated VMs.
 - Auto-ha calls the stop/start method (the start method is modified to run
the scheduler when called: https://wiki.openstack.org/wiki/Start ).


The important thing is the operation level.

If one of the compute nodes is broken, it cannot receive any RPC call or
perform any action.

We need an automatic operation model for when a compute node breaks, because
administrators cannot monitor services all the time.

I hope this is enough of an answer; if anything is unclear, please ask me.

Thanks.

2013/8/14 Konglingxian konglingx...@huawei.com

  Hi yongiman:

 I wonder what's the difference between your 'auto HA' API and 'evacuate'?

 Lingxian Kong
 Huawei Technologies Co., LTD.
 IT Product Line CloudOS PDU
 China, Xi'an
 Mobile: +86-18602962792
 Email: konglingx...@huawei.com

 From: yongi...@gmail.com [mailto:yongi...@gmail.com]
 Sent: Tuesday, August 13, 2013 9:12 PM
 To: OpenStack Development Mailing List
 Cc: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] Reply: Proposal for approving Auto HA
 development blueprint.

 To realize the auto-HA function, we need a monitoring service like Ceilometer.

 Ceilometer monitors the status of compute nodes (network interface
 connection, health check, etc.).

 What I focus on is that this operation happens automatically.

 Nova exposes an auto-HA API. When Nova receives an auto-HA API call, VMs
 automatically migrate to the auto-HA host (an extra compute node reserved
 only for auto-HA).

 All auto-HA information is stored in the auto_ha_hosts table.

 In this table, the 'used' column of the auto-HA host is changed to true.

 The administrator checks the broken compute node and fixes (or replaces) it.

 After the compute node is fixed, the VMs are migrated back to operating
 compute nodes. Now the auto-HA host is empty again.

 When the number of running VMs on the auto-HA host is zero, a periodic task
 sets the 'used' column back to false so the host can be used again.

 Integration with a monitoring service is important. However, in this
 blueprint I want to realize Nova's auto-HA operation.

 My wiki page is still being built; I will fill it out as soon as possible.

 I am looking forward to your advice. Thank you very much!


 Sent from my iPad


  On 2013. 8. 13., at 8:01 PM, balaji patnala patnala...@gmail.com wrote:

   A potential candidate for a new OpenStack service, like Ceilometer or Heat,
  providing high availability of VMs. A good topic to discuss at the
  Summit for implementation after the Havana release.

 On Tue, Aug 13, 2013 at 12:03 PM, Alex Glikson glik...@il.ibm.com wrote:
 

 Agree. Some enhancements to Nova might be still required (e.g., to handle
 resource reservations, so that there is enough capacity), but the
 end-to-end framework probably should be outside of existing services,
 probably talking to Nova, Ceilometer and potentially other components
 (maybe Cinder, Neutron, Ironic), and 'orchestrating' failure detection,
 fencing and recovery.
 Probably worth a discussion at the upcoming summit.


 Regards,
 Alex



  From: Konglingxian konglingx...@huawei.com
  To: OpenStack Development Mailing List
  openstack-dev@lists.openstack.org
  Date: 13/08/2013 07:07 AM
  Subject: [openstack-dev] Reply: Proposal for approving Auto HA
  development blueprint.




 Hi yongiman:

 Your idea is good, but I think the auto HA operation is not OpenStack’s
 business. IMO, Ceilometer offers ‘monitoring’, Nova  offers ‘evacuation’,
 and you can combine them to realize HA operation.

 So, I’m afraid I can’t understand the specific implementation details very
 well.

 Any different opinions?

 From: yongi...@gmail.com [mailto:yongi...@gmail.com]
 Sent: August 12, 2013 20:52
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Proposal for approving Auto HA development
 blueprint.



  Hi,

  Now, I am developing an auto-HA operation for VM high availability.

  This function progresses entirely automatically.

  It needs another service, like Ceilometer.

  Ceilometer monitors compute nodes.

  When Ceilometer detects a broken compute node, it sends an API call to Nova,
  which exposes an API for auto-HA.

  When the auto-HA call is received, Nova performs the auto-HA operation.

  All auto-HA-enabled VMs running on the broken host are migrated
  to the auto-HA host, which is an extra compute node used only for the
  Auto-HA function.

  Below is my blueprint and 

Re: [openstack-dev] Code review study

2013-08-15 Thread Gareth
That's an interesting article, and also meaningful for coders. If I have a
patch of more than 200 or 300 lines, splitting it may be a good idea.

Sometimes, even an easy patch with a few more lines can deter reviewers
from looking at it.


On Thu, Aug 15, 2013 at 10:12 AM, Robert Collins
robe...@robertcollins.net wrote:

 This may interest data-driven types here.


 https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/

 Note specifically the citation of 200-400 lines as the knee of the review
 effectiveness curve: that's lower than I thought - I thought 200 was
 clearly fine - but no.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Gareth

*Cloud Computing, OpenStack, Fitness, Basketball*
*OpenStack contributor*
*Company: UnitedStack http://www.ustack.com*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-15 Thread Anne Gentle
On Thu, Aug 15, 2013 at 7:12 AM, Christopher Yeoh cbky...@gmail.com wrote:


 On Thu, Aug 15, 2013 at 11:42 AM, Robert Collins 
 robe...@robertcollins.net wrote:

 This may interest data-driven types here.


 https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/

 Note specifically the citation of 200-400 lines as the knee of the review
 effectiveness curve: that's lower than I thought - I thought 200 was
 clearly fine - but no.


 Very interesting article. One other point which I think is pretty relevant
 is point 4 about getting authors to annotate the code better (and for those
 who haven't read it, they don't mean comments in the code but separately)
 because it results in the authors picking up more bugs before they even
 submit the code.


+one million



 So I wonder if its worth asking people to write more detailed commit logs
 which include some reasoning about why some of the more complex changes
 were done in a certain way and not just what is implemented or fixed. As it
 is many of the commit messages are often very succinct so I think it would
 help on the review efficiency side too.

 Chris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-15 Thread Christopher Yeoh
On Thu, Aug 15, 2013 at 9:54 PM, Daniel P. Berrange
berra...@redhat.com wrote:

 Commit message quality has improved somewhat since I first wrote & published
 that page, but there's definitely still scope to improve things further. What
 it really needs is for more reviewers to push back against badly written
 commit messages, to nudge authors into the habit of being more verbose in
 their commits.


Agreed. There is often "what" and sometimes "why", but not very often
"how" in commit messages.

 Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-15 Thread Dolph Mathews
On Thu, Aug 15, 2013 at 7:57 AM, Christopher Yeoh cbky...@gmail.com wrote:

 On Thu, Aug 15, 2013 at 9:54 PM, Daniel P. Berrange
 berra...@redhat.com wrote:

  Commit message quality has improved somewhat since I first wrote & published
  that page, but there's definitely still scope to improve things further. What
 it really needs is for more reviewers to push back against badly written
 commit messages, to nudge authors into the habit of being more verbose in
 their commits.


 Agreed. There is often "what" and sometimes "why", but not very often
 "how" in commit messages.


++

Beyond the one line summary (which *should* describe what changed),
describing what changed in the commit message is entirely redundant with
the commit itself.



  Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-15 Thread Daniel P. Berrange
On Thu, Aug 15, 2013 at 09:46:07AM -0500, Dolph Mathews wrote:
 On Thu, Aug 15, 2013 at 7:57 AM, Christopher Yeoh cbky...@gmail.com wrote:
 
  On Thu, Aug 15, 2013 at 9:54 PM, Daniel P. Berrange
  berra...@redhat.com wrote:

   Commit message quality has improved somewhat since I first wrote & published
   that page, but there's definitely still scope to improve things further. What
  it really needs is for more reviewers to push back against badly written
  commit messages, to nudge authors into the habit of being more verbose in
  their commits.
 
 
   Agreed. There is often "what" and sometimes "why", but not very often
   "how" in commit messages.
 
 
 ++
 
 Beyond the one line summary (which *should* describe what changed),
 describing what changed in the commit message is entirely redundant with
 the commit itself.

It isn't that clear-cut, actually. It is quite often helpful to summarize
what changed in the commit message, particularly for changes touching
large areas of code, or many files. The diffs can't always be assumed
to be easily readable - for example if you re-indented a large area of
code, the actual what can be clear as mud. Or if there are related
changes spread across many files & functions, a description of what is
being done will aid reviewers. Just be pragmatic about deciding when
a change is complex enough that it merits summarizing the 'what', as
well as the 'why'.
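As an invented example of the kind of summary being described (all details
hypothetical):

    Consolidate volume driver connection setup

    The five volume driver classes each duplicated the same connection
    setup logic. Move it into a common base class method; each driver
    now only supplies its protocol name.

    The diff is large mostly because of the re-indentation this implies;
    no functional change is intended.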


Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org :|
|: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-15 Thread Dolph Mathews
On Thu, Aug 15, 2013 at 9:56 AM, Daniel P. Berrange berra...@redhat.comwrote:

 On Thu, Aug 15, 2013 at 09:46:07AM -0500, Dolph Mathews wrote:
  On Thu, Aug 15, 2013 at 7:57 AM, Christopher Yeoh cbky...@gmail.com
 wrote:
 
    On Thu, Aug 15, 2013 at 9:54 PM, Daniel P. Berrange
    berra...@redhat.com wrote:

     Commit message quality has improved somewhat since I first wrote & published
     that page, but there's definitely still scope to improve things further. What
   it really needs is for more reviewers to push back against badly
 written
   commit messages, to nudge authors into the habit of being more
 verbose in
   their commits.
  
  
    Agreed. There is often "what" and sometimes "why", but not very often
    "how" in commit messages.
  
 
  ++
 
  Beyond the one line summary (which *should* describe what changed),
  describing what changed in the commit message is entirely redundant
 with
  the commit itself.

 It isn't that clearcut actually. It is quite often helpful to summarize
 what changed in the commit message, particularly for changes touching
 large areas of code, or many files.


Acknowledged, but keyword: *summarize*! If it really won't fit in a one
line summary, that's a giant red flag that you should be producing multiple
commits.


  The diffs can't always be assumed
  to be easily readable - for example if you re-indented a large area of
  code, the actual what can be clear as mud.


Multiple commits!


 Or if there are related
 changes spread across many files  functions, a description of what is
 being done will aid reviewers.


This doesn't necessarily belong in the commit message... this can fall
under item #4 in the article -- annotate your code review before it begins.


 Just be pragmatic about deciding when
 a change is complex enough that it merits summarizing the 'what', as
 well as the 'why'.


 Daniel
 --
  |: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
  |: http://libvirt.org  -o-  http://virt-manager.org :|
  |: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
  |: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] Team meeting reminder August 15 18:00 UTC

2013-08-15 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting today, as usual, in the
#openstack-meeting-alt channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_August.2C_15

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20130815T18

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [ceilometer] Periodic Auditing In Glance

2013-08-15 Thread Alex Meade
I don't know any actual numbers but I would have the concern that images tend 
to stick around longer than instances. For example, if someone takes daily 
snapshots of their server and keeps them around for a long time, the number of 
exists events would go up and up.

Just a thought, could be a valid avenue of concern.

-Alex

-Original Message-
From: Doug Hellmann doug.hellm...@dreamhost.com
Sent: Thursday, August 15, 2013 11:17am
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] [ceilometer] Periodic Auditing In Glance

Nova generates a single exists event for each instance, and that doesn't
cause a lot of trouble as far as I've been able to see.

What is the relative number of images compared to instances in a typical
cloud?

Doug


On Tue, Aug 13, 2013 at 7:20 PM, Neal, Phil phil.n...@hp.com wrote:

 I'm a little concerned that a batch payload won't align with exists
 events generated from other services. To my recollection, Cinder, Trove and
 Neutron all emit exists events on a per-instance basis... a consumer would
 have to figure out a way to handle/unpack these separately if they needed a
 granular feed. Not the end of the world, I suppose, but a bit inconsistent.

 And a minor quibble: batching would also make it a much bigger issue if a
 consumer missed a notification... though I guess you could counteract that
 by increasing the frequency (but wouldn't that defeat the purpose?)

 
 
 
  On 08/13/2013 04:35 PM, Andrew Melton wrote:
    I'm just concerned with the type of notification you'd send. It has to
    be fine-grained enough that we don't lose too much information.
  
   It's a tough situation, sending out an image.exists for each image with
   the same payload as say image.upload would likely create TONS of
 traffic.
   Personally, I'm thinking about a batch payload, with a bare minimum of
 the
   following values:
  
   'payload': [{'id': 'uuid1', 'owner': 'tenant1', 'created_at':
   'some_date', 'size': 1},
  {'id': 'uuid2', 'owner': 'tenant2', 'created_at':
   'some_date', 'deleted_at': 'some_other_date', 'size': 2}]
  
    That way the audit job/task could be configured to emit in batches,
    and a deployer could tweak the settings so as not to emit too many
    messages.
   I definitely welcome other ideas as well.
 
  Would it be better to group by tenant vs. image?
 
  One .exists per tenant that contains all the images owned by that tenant?
 
  -S
 
 
   Thanks,
   Andrew Melton
  
  
   On Tue, Aug 13, 2013 at 4:27 AM, Julien Danjou jul...@danjou.info
   mailto:jul...@danjou.info wrote:
  
   On Mon, Aug 12 2013, Andrew Melton wrote:
  
So, my question to the Ceilometer community is this, does this
   sound like
something Ceilometer would find value in and use? If so, would
 this be
something
we would want most deployers turning on?
  
   Yes. I think we would definitely be happy to have the ability to
 drop
   our pollster at some time.
    I'm just concerned with the type of notification you'd send. It has to
    be fine-grained enough that we don't lose too much information.
  
   --
   Julien Danjou
   // Free Software hacker / freelance consultant
   // http://julien.danjou.info
  
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Pipeline Retry Semantics ...

2013-08-15 Thread Sandy Walsh
Recently I've been focused on ensuring we don't drop notifications in
CM. But problems still exist downstream, after we've captured the raw
event.

From the efforts going on with the Ceilometer sample pipeline, the new
dispatcher model and the upcoming trigger pipeline, the discussion
around retry semantics has been coming up a lot.

In other words: "What happens when step 4 of a 10-step pipeline fails?"

As we get more into processing billing events, we really need to have a
solid understanding of how we prevent double-counting or dropping events.

I've started writing down some thoughts here:
https://wiki.openstack.org/wiki/DuplicateWorkCeilometer

It's a little scattered and I'd like some help tuning it.

Hopefully it'll help grease the skids for the Icehouse Summit talks.

Thanks!
-S

cc/ Josh, I think the State Management team can really help out here.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] as-update-policy implementation details

2013-08-15 Thread Chan, Winson C
I updated the implementation section of 
https://wiki.openstack.org/wiki/Heat/Blueprints/as-update-policy regarding 
instance naming to support UpdatePolicy. In the case of a LaunchConfiguration 
change, all the instances need to be replaced; to support 
MinInstancesInService, handle_update should create new instances first, before 
deleting old ones, in batches of MaxBatchSize (e.g. a group capacity of 2 with 
MaxBatchSize=2 and MinInstancesInService=2). Please review, as I may not 
understand the original motivation for the existing instance-naming scheme.  
Thanks.
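For readers unfamiliar with the AWS-style syntax under discussion, the policy
looks roughly like this (a sketch; the resource name is invented and the
required Properties are omitted):

    "WebServerGroup": {
      "Type": "AWS::AutoScaling::AutoScalingGroup",
      "UpdatePolicy": {
        "AutoScalingRollingUpdate": {
          "MinInstancesInService": "2",
          "MaxBatchSize": "2",
          "PauseTime": "PT1M"
        }
      }
    }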

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-15 Thread Dolph Mathews
On Thu, Aug 15, 2013 at 12:00 PM, Mark Washenberger 
mark.washenber...@markwash.net wrote:




 On Thu, Aug 15, 2013 at 5:12 AM, Christopher Yeoh cbky...@gmail.com wrote:


 On Thu, Aug 15, 2013 at 11:42 AM, Robert Collins 
 robe...@robertcollins.net wrote:

 This may interest data-driven types here.


 https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/

 Note specifically the citation of 200-400 lines as the knee of the
 review effectiveness curve: that's lower than I thought - I thought 200 was
 clearly fine - but no.


 Very interesting article. One other point which I think is pretty
 relevant is point 4 about getting authors to annotate the code better (and
 for those who haven't read it, they don't mean comments in the code but
 separately) because it results in the authors picking up more bugs before
 they even submit the code.

 So I wonder if its worth asking people to write more detailed commit logs
 which include some reasoning about why some of the more complex changes
 were done in a certain way and not just what is implemented or fixed. As it
 is many of the commit messages are often very succinct so I think it would
 help on the review efficiency side too.


 Good commit messages are important, but I wonder if a more direct approach
 is for submitters to put review notes for their patches directly in gerrit.
 That allows annotation to be directly in place, without the burden of
 over-commenting.


++ I've done this myself when I can anticipate the questions that reviewers
are going to ask anyway. The best way to get your code merged quickly is to
make it as easy to review as possible!




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] ack(), reject() and requeue() support in rpc ...

2013-08-15 Thread Sandy Walsh


On 08/15/2013 02:00 PM, Eric Windisch wrote:
 On Wed, Aug 14, 2013 at 4:08 PM, Sandy Walsh sandy.wa...@rackspace.com 
 wrote:
 At Eric's request in https://review.openstack.org/#/c/41979/ I'm
 bringing this to the ML for feedback.
 
 Thank you Sandy.
 
 Currently, oslo-common rpc behaviour is to always ack() a message no
 matter what.
 
 Actually, the Qpid and Kombu drivers default to this. The behavior and
 the expectation of the abstraction itself is different, in my opinion.
 The ZeroMQ driver doesn't presently support acknowledgements and
 they're not supported or exposed by the abstraction itself.

Hmm, that's interesting. I'd be curious to know how it deals with a
worker that can't process a message and needs to requeue.

 The reason I've asked for a mailing list post is because
 acknowledgements aren't presently baked into the RPC abstraction/API.
 You're suggesting that the idea of acknowledgements leaks into the
 abstraction. It isn't necessarily bad, but it is significant enough I
 felt it warranted visibility here on the list.

Yep, makes sense.

 Since each notification has a unique message_id, it's easy to detect
 events we've seen before and .reject() them.
 
 Only assuming you have a very small number of consumers or
 store/lookup the seen-messages in a global state store such as
 memcache. That might work in the limited use-cases you intend to
 deploy this, but might not be appropriate at the level of a general
 abstraction. I've seen that features we support such as fanout() get
 horribly abused simply because they're available, used outside their
 respective edge-cases, for patterns they don't work well for.

Actually, we're letting the db deal with it via a unique key constraint.
So it'll still work with a large number of consumers. But you're
correct, if we never had a simple way of detecting dups this would be a
problem.
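A minimal sketch of the unique-key approach described above (SQLAlchemy-style;
the model and column names are invented):

    from sqlalchemy import Column, Integer, String, Text, exc
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class RawEvent(Base):
        __tablename__ = 'rawevent'
        id = Column(Integer, primary_key=True)
        message_id = Column(String(64), nullable=False, unique=True)
        payload = Column(Text)

    def store_once(session, message_id, payload):
        # The unique key makes the database the arbiter of "seen before":
        # a second consumer inserting the same message_id gets an error.
        try:
            session.add(RawEvent(message_id=message_id, payload=payload))
            session.commit()
            return True        # first delivery
        except exc.IntegrityError:
            session.rollback()
            return False       # duplicate delivery; safe to reject()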

The direction I see ceilometer going suggests we'll have a large number
of consumers (Collectors) processing and post-processing messages vs.
just the one or two we need now.

 I suppose there is much to be said about giving people the leverage to
 shoot themselves in their own feet, but I'm interested in knowing more
 about how you intend to implement the rejection mechanism. I assume
 you intend to implement this at the consumer level within a project
 (i.e. Ceilometer), or is this something you intend to put into
 service.py?

Sort of. The consumer decides this is a bad message and wants to kill
it. The current mechanism is for the consumer to throw a
RejectMessageException and have the messaging layer reject it (since
messages themselves are not part of the abstraction either). If we were
to make the message itself an API entity, the consumer could call
.reject()/.requeue() directly.

https://review.openstack.org/#/c/40618/
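Sketched in miniature, the consumer-side flow looks something like this (the
exception name comes from the review above; the surrounding names are invented):

    class RejectMessageException(Exception):
        """Raised by a consumer so the messaging layer calls reject()."""

    def on_notification(payload, seen_ids):
        # seen_ids stands in for the duplicate check, e.g. the unique-key
        # insert discussed elsewhere in this thread.
        if payload['message_id'] in seen_ids:
            raise RejectMessageException()
        seen_ids.add(payload['message_id'])
        # ...normal processing; the layer ack()s once this returns cleanly.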

The whole thing falls into a broader set of problems I outline here:
http://lists.openstack.org/pipermail/openstack-dev/2013-August/013710.html

 Also, fyi, I'm not actually terribly opposed to this patch. It makes
 some sense. I just want to make sure we don't foul up the abstraction
 in some way or unintentionally give developers rope they'll inevitably
 strangle themselves on.

That's fair.

I think we can keep this from bothering the rpc side of the fence in the
olso.common.messaging project if notifications have a separate
abstraction from rpc calls. Sadly, for oslo.common.rpc, we have to live
with supporting both.

Cheers!
-S


 --
 Regards,
 Eric Windisch
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] v3 api remove security_groups extension (was Re: security_groups extension in nova api v3)

2013-08-15 Thread Melanie Witt
On Aug 13, 2013, at 3:35 PM, Melanie Witt wrote:

 On Aug 13, 2013, at 2:11 AM, Day, Phil wrote:
 
  If we really want to get clean separation between Nova and Neutron in the V3 
  API, should we consider making the Nova V3 API only accept lists of port ids 
  in the server create command?
 
  That way there would be no need to ever pass security group information 
  into Nova.
 
 Any cross project co-ordination (for example automatically creating ports) 
 could be handled in the client layer, rather than inside Nova.
 
 Server create is always (until there's a separate layer) going to go cross 
 project calling other apis like neutron and cinder while an instance is being 
 provisioned. For that reason, I tend to think it's ok to give some extra 
 convenience of automatically creating ports if needed, and being able to 
 specify security groups.
 
 For the associate and disassociate, the only convenience is being able to use 
 the instance display name and security group name, which is already handled 
 at the client layer. It seems a clearer case of duplicating what neutron 
 offers.

Thinking about this more, it seems like the security_groups extension should 
probably be removed in the v3 api. Originally, we considered not porting it to 
v3 because it's a network-related extension whose actions can be accomplished 
through neutron directly.

Then, it seemed associate/disassociate with the instance would be needed in 
nova, and those actions alone were ported. However, looking into the code more 
I found that's simply a neutron port update (append security group to port). 
Server create is similar.

It seems like the extension isn't really needed in v3. Does anyone have any 
objection to removing it?

Melanie


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-15 Thread Sam Harwell
I like to take a different approach. If my commit message is going to take more 
than a couple lines for people to understand the decisions I made, I go and 
make an issue in the issue tracker before committing locally and then reference 
that issue in the commit message. This helps in a few ways:


1. If I find a technical or grammatical error in the commit message, it 
can be corrected.

2. Developers can provide feedback on the subject matter independently of 
the implementation, as well as feedback on the implementation itself.

3. I like the ability to include formatting and hyperlinks in my 
documentation of the commit.

Sam

From: Christopher Yeoh [mailto:cbky...@gmail.com]
Sent: Thursday, August 15, 2013 7:12 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Code review study


On Thu, Aug 15, 2013 at 11:42 AM, Robert Collins 
robe...@robertcollins.net wrote:
This may interest data-driven types here.

https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
Note specifically the citation of 200-400 lines as the knee of the review 
effectiveness curve: that's lower than I thought - I thought 200 was clearly 
fine - but no.

Very interesting article. One other point which I think is pretty relevant is 
point 4 about getting authors to annotate the code better (and for those who 
haven't read it, they don't mean comments in the code but separately) because 
it results in the authors picking up more bugs before they even submit the code.

So I wonder if its worth asking people to write more detailed commit logs which 
include some reasoning about why some of the more complex changes were done in 
a certain way and not just what is implemented or fixed. As it is many of the 
commit messages are often very succinct so I think it would help on the review 
efficiency side too.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hacking 0.7 Released (for a bug fix)

2013-08-15 Thread Joe Gordon
Hi All,

Hacking 0.7 has just been released, and merged into openstack/requirements (
https://review.openstack.org/#/c/41523/),  due to a bug in hacking 0.6 that
made H202, 'assertRaises Exception too broad', not work (
https://bugs.launchpad.net/hacking/+bug/1206302).

Additionally Hacking 0.7 has support to specify import exceptions in
tox.ini so you don't have to put #noqa everywhere  (
https://bugs.launchpad.net/hacking/+bug/1206189).  This can be used to
ignore the gettext line that doesn't import a module.  In nova this looks
like https://review.openstack.org/#/c/38851/2/tox.ini.
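For reference, the tox.ini knob looks something like this (based on the nova
change linked above; the exact module path depends on the project):

    [hacking]
    import_exceptions = nova.openstack.common.gettextutils._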


best,
Joe Gordon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Blueprint proposal - Import / Export images with user properties

2013-08-15 Thread Emilien Macchi
Mark,

As you suggested, I've created a single blueprint :
https://blueprints.launchpad.net/glance/+spec/export-import-image-metadata-ovf

I don't have any idea about its dependencies, maybe could you fix the
blueprint if needed.


I think you can also delete :
https://blueprints.launchpad.net/glance/+spec/export-import-image-metadata-ovf
https://blueprints.launchpad.net/glance/+spec/export-export-image-metadata-ovf

..which are now deprecated. (I'm not able to do it).

Let me know if the blueprint looks good or if we need to discuss more.
Also, we could be interested by implementing it.

Thank you,

Emilien Macchi

# OpenStack Engineer
// eNovance Inc.  http://enovance.com
// ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
// 10 rue de la Victoire 75009 Paris

On 08/14/2013 07:39 PM, Mark Washenberger wrote:
 I think this could fit alongside a current blueprint we've discussed
 (https://blueprints.launchpad.net/glance/+spec/iso-image-metadata)
 that does similar things for metadata in isos.

 In general, I think the sane way to add a feature like this is as an
 optional container-format-specific plugin for import and export. Since
 the import and export features are still in pretty early stages of
 development (advanced on design though!), I don't expect such a
 feature would land until mid Icehouse, just fyi.

 Can you restructure these blueprints as a single bp feature to
 export/import metadata in ovf?


 On Wed, Aug 14, 2013 at 10:09 AM, Emilien Macchi
  emilien.mac...@enovance.com wrote:

 Hi Mark,


  I was thinking of the OVF container format first since, as far as I
  know, it does support metadata.


  Thanks,


 Emilien Macchi
 
 # OpenStack Engineer
 // eNovance Inc.  http://enovance.com
  // ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
 // 10 rue de la Victoire 75009 Paris

 On 08/14/2013 06:34 PM, Mark Washenberger wrote:
 Lets dig into this a bit more so that I can understand it.

 Given that we have properties that we want to export with an
 image, where would those properties be stored? Somewhere in the
 image data itself? I believe some image formats support metadata,
 but I can't imagine all of them would. Is there a specific format
 you're thinking of using?


 On Wed, Aug 14, 2013 at 8:36 AM, Emilien Macchi
  emilien.mac...@enovance.com wrote:

 Hi,


 I would like to discuss here about two blueprint proposal
 (maybe could I merge them into one if you prefer) :

 
 https://blueprints.launchpad.net/glance/+spec/api-v2-export-properties
 
 https://blueprints.launchpad.net/glance/+spec/api-v2-import-properties

 *Use case* :
 I would like to set specific properties to an image which
 could represent a signature, and useful for licensing
 requirements for example.
 To do that, I should be able to export an image with user
 properties included.

 Then, a user could reuse the exported image in the public
 cloud, and Glance will be aware about its properties.
 Obviously, we need the import / export feature.

 The idea here is to be able to identify an image after
 cloning or whatever with a property field. Of course, the
 user could break it in editing the image manually, but I
 consider he / she won't.


 Let me know if you have any thoughts and if the blueprint is
 valuable.

 Regards,

 -- 
 Emilien Macchi
 
 # OpenStack Engineer
 // eNovance Inc.  http://enovance.com
  // ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
 // 10 rue de la Victoire 75009 Paris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Blueprint proposal - Import / Export images with user properties

2013-08-15 Thread Emilien Macchi
Wrong copy paste, sorry.

We can delete :
https://blueprints.launchpad.net/glance/+spec/api-v2-export-properties
https://blueprints.launchpad.net/glance/+spec/api-v2-import-properties

Thanks,

Emilien Macchi

# OpenStack Engineer
// eNovance Inc.  http://enovance.com
// ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
// 10 rue de la Victoire 75009 Paris

On 08/14/2013 07:39 PM, Mark Washenberger wrote:
 I think this could fit alongside a current blueprint we've discussed
 (https://blueprints.launchpad.net/glance/+spec/iso-image-metadata)
 that does similar things for metadata in isos.

 In general, I think the sane way to add a feature like this is as an
 optional container-format-specific plugin for import and export. Since
 the import and export features are still in pretty early stages of
 development (advanced on design though!), I don't expect such a
 feature would land until mid Icehouse, just fyi.

 Can you restructure these blueprints as a single bp feature to
 export/import metadata in ovf?


 On Wed, Aug 14, 2013 at 10:09 AM, Emilien Macchi
  emilien.mac...@enovance.com wrote:

 Hi Mark,


  I was thinking of the OVF container format first since, as far as I
  know, it does support metadata.


  Thanks,


 Emilien Macchi
 
 # OpenStack Engineer
 // eNovance Inc.  http://enovance.com
  // ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
 // 10 rue de la Victoire 75009 Paris

 On 08/14/2013 06:34 PM, Mark Washenberger wrote:
 Lets dig into this a bit more so that I can understand it.

 Given that we have properties that we want to export with an
 image, where would those properties be stored? Somewhere in the
 image data itself? I believe some image formats support metadata,
 but I can't imagine all of them would. Is there a specific format
 you're thinking of using?


 On Wed, Aug 14, 2013 at 8:36 AM, Emilien Macchi
  emilien.mac...@enovance.com wrote:

 Hi,


 I would like to discuss here about two blueprint proposal
 (maybe could I merge them into one if you prefer) :

 
 https://blueprints.launchpad.net/glance/+spec/api-v2-export-properties
 
 https://blueprints.launchpad.net/glance/+spec/api-v2-import-properties

 *Use case* :
 I would like to set specific properties to an image which
 could represent a signature, and useful for licensing
 requirements for example.
 To do that, I should be able to export an image with user
 properties included.

 Then, a user could reuse the exported image in the public
 cloud, and Glance will be aware about its properties.
 Obviously, we need the import / export feature.

 The idea here is to be able to identify an image after
 cloning or whatever with a property field. Of course, the
 user could break it in editing the image manually, but I
 consider he / she won't.


 Let me know if you have any thoughts and if the blueprint is
 valuable.

 Regards,

 -- 
 Emilien Macchi
 
 # OpenStack Engineer
 // eNovance Inc.  http://enovance.com
  // ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
 // 10 rue de la Victoire 75009 Paris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-08-15 Thread Christopher Armstrong
*Introduction and Requirements*

So there's kind of a perfect storm happening around autoscaling in Heat
right now. It's making it really hard to figure out how I should compose
this email. There are a lot of different requirements, a lot of different
cool ideas, and a lot of projects that want to take advantage of
autoscaling in one way or another: Trove, OpenShift, TripleO, just to name
a few...

I'll try to list the requirements from various people/projects that may be
relevant to autoscaling or scaling in general.

1. Some users want a service like Amazon's Auto Scaling or Rackspace's
Otter -- a simple API that doesn't really involve orchestration.
2. If such a API exists, it makes sense for Heat to take advantage of its
functionality instead of reimplementing it.
3. If Heat integrates with that separate API, however, that API will need
two ways to do its work:
   1. native instance-launching functionality, for the simple use
   2. a way to talk back to Heat to perform orchestration-aware scaling
operations.
4. There may be things that are different than AWS::EC2::Instance that we
would want to scale (I have personally been playing around with the concept
of a ResourceGroup, which would maintain a nested stack of resources based
on an arbitrary template snippet).
5. Some people would like to be able to perform manual operations on an
instance group -- such as Clint Byrum's recent example of "remove instance
4 from resource group A".

Please chime in with your additional requirements if you have any! Trove
and TripleO people, I'm looking at you :-)


*TL;DR*

Point 3.2. above is the main point of this email: exactly how should the
autoscaling API talk back to Heat to tell it to add more instances? I
included the other points so that we keep them in mind while considering a
solution.

*Possible Solutions*

I have heard at least three possibilities so far:

1. the autoscaling API should maintain a full template of all the nodes in
the autoscaled nested stack, manipulate it locally when it wants to add or
remove instances, and post an update-stack to the nested-stack associated
with the InstanceGroup.

Pros: It doesn't require any changes to Heat.

Cons: It puts a lot of burden of state management on the autoscale API, and
it arguably spreads out the responsibility of orchestration to the
autoscale API. Also arguable is that automated agents outside of Heat
shouldn't be managing an internal template, which are typically developed
by devops people and kept in version control.

2. There should be a new custom-built API for doing exactly what the
autoscaling service needs on an InstanceGroup, named something unashamedly
specific -- like instance-group-adjust.

Pros: It'll do exactly what it needs to do for this use case; very little
state management in autoscale API; it lets Heat do all the orchestration
and only give very specific delegation to the external autoscale API.

Cons: The API grows an additional method for a specific use case.

3. the autoscaling API should update the Size Property of the
InstanceGroup resource in the stack that it is placed in. This would
require the ability to PATCH a specific piece of a template (an operation
isomorphic to update-stack).

Pros: The API modification is generic, simply a more optimized version of
update-stack; very little state management required in autoscale API.

Cons: This would essentially require manipulating the user-provided
template.  (unless we have a concept of private properties, which perhaps
wouldn't appear in the template as provided by the user, but can be
manipulated with such an update stack operation?)
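To make option 3 concrete, the hypothetical operation might look something
like this (purely illustrative -- no such endpoint exists today, and all
names are invented):

    PATCH /v1/{tenant_id}/stacks/mystack/resources/MyInstanceGroup
    {"properties": {"Size": 5}}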


*Addenda*

Keep in mind that there are use cases which require other types of
manipulation of the InstanceGroup -- not just the autoscaling API. For
example, see Clint's #5 above.


Also, about implementation: Andrew Plunk and I have begun work on Heat
resources for Rackspace's Otter, which I think will be a really good proof
of concept for how this stuff should work in the Heat-native autoscale API.
I am trying to gradually work the design into the native Heat autoscaling
design, and we will need to solve the autoscale-controlling-InstanceGroup
issue soon.

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] devstack exercise test failed at euca-register

2013-08-15 Thread XINYU ZHAO
Updated every project to the latest, but each time I ran devstack, the
exercise test failed at the same place, bundle.sh.
Any hints?

In console.log

Uploaded image as testbucket/bundle.img.manifest.xml
++ euca-register testbucket/bundle.img.manifest.xml
++ cut -f2
+ AMI='S3ResponseError: Unknown error occured.'
+ die_if_not_set 57 AMI 'Failure registering testbucket/bundle.img'
+ local exitcode=0
++ set +o
++ grep xtrace
+ FXTRACE='set -o xtrace'
+ set +o xtrace
+ timeout 15 sh -c 'while euca-describe-images | grep S3ResponseError:
Unknown error occured. | grep -q available; do sleep 1; done'
grep: Unknown: No such file or directory
grep: error: No such file or directory
grep: occured.: No such file or directory
close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr
+ euca-deregister S3ResponseError: Unknown error occured.
Only 1 argument (image_id) permitted
+ die 65 'Failure deregistering S3ResponseError: Unknown error occured.'
+ local exitcode=1
+ set +o xtrace
[Call Trace]
/opt/stack/new/devstack/exercises/bundle.sh:65:die
[ERROR] /opt/stack/new/devstack/exercises/bundle.sh:65 Failure
deregistering S3ResponseError: Unknown error occured.



Here is what was recorded in the n-api log.

2013-08-15 15:44:20.331 27003 DEBUG nova.utils [-] Reloading cached
file /etc/nova/policy.json read_cached_file
/opt/stack/new/nova/nova/utils.py:814
2013-08-15 15:44:20.363 DEBUG nova.api.ec2
[req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] action:
RegisterImage __call__
/opt/stack/new/nova/nova/api/ec2/__init__.py:325
2013-08-15 15:44:20.364 DEBUG nova.api.ec2
[req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] arg:
Architectureval: i386 __call__
/opt/stack/new/nova/nova/api/ec2/__init__.py:328
2013-08-15 15:44:20.364 DEBUG nova.api.ec2
[req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] arg:
ImageLocation   val: testbucket/bundle.img.manifest.xml __call__
/opt/stack/new/nova/nova/api/ec2/__init__.py:328
2013-08-15 15:44:20.370 CRITICAL nova.api.ec2
[req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] Unexpected
S3ResponseError raised
2013-08-15 15:44:20.370 CRITICAL nova.api.ec2
[req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] Environment:
{CONTENT_TYPE: application/x-www-form-urlencoded; charset=UTF-8,
SCRIPT_NAME: /services/Cloud, REQUEST_METHOD: POST,
HTTP_HOST: 127.0.0.1:8773, PATH_INFO: /, SERVER_PROTOCOL:
HTTP/1.0, HTTP_USER_AGENT: Boto/2.10.0 (linux2),
RAW_PATH_INFO: /services/Cloud/, REMOTE_ADDR: 127.0.0.1,
REMOTE_PORT: 44294, wsgi.url_scheme: http, SERVER_NAME:
127.0.0.1, SERVER_PORT: 8773, GATEWAY_INTERFACE: CGI/1.1,
HTTP_ACCEPT_ENCODING: identity}
2013-08-15 15:44:20.371 DEBUG nova.api.ec2.faults
[req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] EC2 error
response: S3ResponseError: Unknown error occured. ec2_error_response
/opt/stack/new/nova/nova/api/ec2/faults.py:31
2013-08-15 15:44:20.371 INFO nova.api.ec2
[req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] 0.109800s
127.0.0.1 POST /services/Cloud/ CloudController:RegisterImage 400
[Boto/2.10.0 (linux2)] application/x-www-form-urlencoded text/xml
2013-08-15 15:44:20.379 INFO nova.ec2.wsgi.server
[req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] 127.0.0.1 POST
/services/Cloud/ HTTP/1.1 status: 400 len: 317 time: 0.1177399


Executing manually on the machine:

euca-register testbucket/bundle.img.manifest.xml --debug
2013-08-15 17:00:19,446 euca2ools [DEBUG]:Using access key provided by client.
2013-08-15 17:00:19,446 euca2ools [DEBUG]:Using secret key provided by client.
2013-08-15 17:00:19,446 euca2ools [DEBUG]:Method: POST
2013-08-15 17:00:19,447 euca2ools [DEBUG]:Path: /services/Cloud/
2013-08-15 17:00:19,447 euca2ools [DEBUG]:Data:
2013-08-15 17:00:19,447 euca2ools [DEBUG]:Headers: {}
2013-08-15 17:00:19,447 euca2ools [DEBUG]:Host: 127.0.0.1:8773
2013-08-15 17:00:19,447 euca2ools [DEBUG]:Params: {'Action':
'RegisterImage', 'Version': '2009-11-30', 'Architecture': 'i386',
'ImageLocation': 'testbucket/bundle.img.manifest.xml'}
2013-08-15 17:00:19,447 euca2ools [DEBUG]:establishing HTTP
connection: kwargs={'timeout': 70}
2013-08-15 17:00:19,447 euca2ools [DEBUG]:Token: None
2013-08-15 17:00:19,447 euca2ools [DEBUG]:using _calc_signature_2
2013-08-15 17:00:19,448 euca2ools [DEBUG]:query string:
AWSAccessKeyId=4b14f2d81b9045fdb3a0c989d283ebbeAction=RegisterImageArchitecture=i386ImageLocation=testbucket%2Fbundle.img.manifest.xmlSignatureMethod=HmacSHA256SignatureVersion=2Timestamp=2013-08-16T00%3A00%3A19ZVersion=2009-11-30
2013-08-15 17:00:19,448 euca2ools [DEBUG]:string_to_sign: POST127.0.0.1:8773
/services/Cloud/
AWSAccessKeyId=4b14f2d81b9045fdb3a0c989d283ebbeAction=RegisterImageArchitecture=i386ImageLocation=testbucket%2Fbundle.img.manifest.xmlSignatureMethod=HmacSHA256SignatureVersion=2Timestamp=2013-08-16T00%3A00%3A19ZVersion=2009-11-30
2013-08-15 17:00:19,448 euca2ools [DEBUG]:len(b64)=44
2013-08-15 17:00:19,448 euca2ools [DEBUG]:base64 encoded digest:
WXvtpXvmsFBMpsEE1u4FT33DBq3SuFEzC+AEOhwU7+g=

Re: [openstack-dev] [nova] v3 api remove security_groups extension (was Re: security_groups extension in nova api v3)

2013-08-15 Thread Melanie Witt
On Aug 15, 2013, at 1:13 PM, Joe Gordon wrote:

 +1 from me as long as this wouldn't change anything for the EC2 API's 
 security groups support, which I assume it won't.

Correct, it's unrelated to the ec2 api.

We discussed briefly in the nova meeting today and there was consensus that 
removing the standalone associate/disassociate actions should happen.

Now the question is whether to keep the server create piece and not remove the 
extension entirely. The concern is about a delay in the newly provisioned 
instance being associated with the desired security groups. With the extension, 
the instance gets the desired security groups before the instance is active (I 
think). Without the extension, the client would receive the active instance and 
then call neutron to associate it with the desired security groups.

Would such a delay in associating with security groups be a problem?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev