Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-05 Thread Ladislav Smola

Hello,

+1 to core update. There are still enough Tuskar-UI guys in the core 
team I think.


Ladislav

On 12/04/2013 08:12 AM, Robert Collins wrote:

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this month's review:
  - Ghe Rivero for -core
  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the changes above in reply to this email.

Ghe, please let me know if you're willing to be in tripleo-core. Jan,
Jordan, Martyn, Jiri & Jaromir, if you are planning on becoming
substantially more active in TripleO reviews in the short term, please
let us know.

My approach to this caused some confusion a while back, so I'm going
to throw in some boilerplate here for a few more editions... - I'm
going to talk about stats here, but they
are only part of the picture: folk that aren't really being /felt/ as
effective reviewers won't be asked to take on -core responsibility,
and folk who are less active than needed but still very connected to
the project may still keep them: it's not pure numbers.

Also, it's a vote: that is direct representation by the existing -core
reviewers as to whether they are ready to accept a new reviewer as
core or not. This mail from me merely kicks off the proposal for any
changes.

But, the metrics provide an easy fingerprint - they are a useful tool
to avoid bias (e.g. remembering folk who are just short-term active) -
human memory can be particularly treacherous - see 'Thinking, Fast and
Slow'.

With that prelude out of the way:

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up so they
aren't caught by surprise.

Our merger with Tuskar has now had plenty of time to bed down; folk
from the Tuskar project who have been reviewing widely within TripleO
for the last three months are not in any way disadvantaged vs previous
core reviewers when merely looking at the stats; and they've had three
months to get familiar with the broad set of codebases we maintain.

90 day active-enough stats:

+--+---++
| Reviewer | Reviews   -2  -1  +1  +2  +A+/- % | Disagreements* |
+--+---++
|   lifeless **| 521   16 181   6 318 14162.2% |   16 (  3.1%)  |
| cmsj **  | 4161  30   1 384 20692.5% |   22 (  5.3%)  |
| clint-fewbar **  | 3792  83   0 294 12077.6% |   11 (  2.9%)  |
|derekh ** | 1960  36   2 158  7881.6% |6 (  3.1%)  |
|slagle ** | 1650  36  94  35  1478.2% |   15 (  9.1%)  |
|ghe.rivero| 1500  26 124   0   082.7% |   17 ( 11.3%)  |
|rpodolyaka| 1420  34 108   0   076.1% |   21 ( 14.8%)  |
|lsmola ** | 1011  15  27  58  3884.2% |4 (  4.0%)  |
|ifarkas **|  950  10   8  77  2589.5% |4 (  4.2%)  |
| jistr ** |  951  19  16  59  2378.9% |5 (  5.3%)  |
|  markmc  |  940  35  59   0   062.8% |4 (  4.3%)  |
|pblaho ** |  831  13  45  24   983.1% |   19 ( 22.9%)  |
|marios ** |  720   7  32  33  1590.3% |6 (  8.3%)  |
|   tzumainn **|  670  17  15  35  1574.6% |3 (  4.5%)  |
|dan-prince|  590  10  35  14  1083.1% |7 ( 11.9%)  |
|   jogo   |  570   6  51   0   089.5% |2 (  3.5%)  |


This is a massive improvement over last month's report. \o/ Yay. The
cutoff line here is pretty arbitrary - I extended a couple of rows
below one-per-work-day because Dan and Joe were basically there - and
there is a somewhat bigger gap to the next most active reviewer below
that.

About half of Ghe's reviews are in the last 30 days, and ~85% in the
last 60 - but he has been doing significant numbers of thoughtful
reviews over the whole three months - I'd like to propose him for
-core.
Roman has very similar numbers here, but I don't feel quite as
confident yet - I think he is still coming up to speed on the codebase
(nearly all his reviews are in the last 60 days only) - but I'm
confident that he'll be thoroughly indoctrinated in another month :).
Mark is contributing great, thoughtful reviews, but the vast majority
are very recent - like Roman, I want to give him some 

Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-05 Thread Ladislav Smola

Hello,

so what is the plan? Tuskar-UI stays in TripleO until TripleO is part of 
the integrated release?


Thanks,
Ladislav

On 12/04/2013 08:44 PM, Lyle, David wrote:

On 5 December 2013 12:10, Robert Collins robe...@robertcollins.net wrote:

-snip-


That said, perhaps we should review these projects.

Tuskar as an API to drive deployment and ops clearly belongs in
TripleO - though we need to keep pushing features out of it into more
generalised tools like Heat, Nova and Solum. TuskarUI though, as far
as I know all the other programs have their web UI in Horizon itself -
perhaps TuskarUI belongs in the Horizon program as a separate code
base for now, and merge them once Tuskar begins integration?


This sounds reasonable to me.  The code base for TuskarUI builds on 
Horizon, and we are planning on integrating TuskarUI into Horizon once TripleO 
is part of the integrated release.  The review skills and focus for TuskarUI are 
certainly more consistent with Horizon than with the rest of the TripleO program.
  

-Rob


--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystoneclient] [Keystone] [Solum] Last released version of keystoneclient does not work with python33

2013-12-05 Thread Jamie Lennox
On Thu, 2013-12-05 at 05:08 +, Adrian Otto wrote:
 Hi Morgan!
 
 
 Stackforge projects can be configured to make the CLA optional. I am
 willing to speak to the HTTPretty maintainers about the benefits of
 stackforge. Do we happen to know any of them? If not I can track them
 down through email.
 
 
 Adrian
 

The main guy's email is fairly easy to find from his github page. But
I'm pretty sure he won't be interested. He's not involved in OpenStack
at all. 

Feel free to try though. 

Jamie
 
 
 --
 Adrian
 
 
  Original message 
 From: Morgan Fainberg 
 Date: 12/04/2013 6:17 PM (GMT-08:00) 
 To: Jamie Lennox, OpenStack Development Mailing List (not for usage
 questions) 
 Subject: Re: [openstack-dev] [Keystoneclient] [Keystone] [Solum] Last
 released version of keystoneclient does not work with python33 
 
 
 
 On December 4, 2013 at 18:05:07, Jamie Lennox (jamielen...@redhat.com)
 wrote:
 
  
  On Wed, 2013-12-04 at 20:48 -0500, David Stanek wrote: 
   On Wed, Dec 4, 2013 at 6:44 PM, Adrian Otto 
   adrian.o...@rackspace.com wrote: 
   Jamie, 
   
   Thanks for the guidance here. I am checking to see if any of 
   our developers might take an interest in helping with the 
   upstream work. At the very least, it might be nice to have 
   some understanding of how much work there is to be done in 
   HTTPretty. 
   
   
   (Dolph correct me if I am wrong, but...) 
   
   
   I don't think that there is much work to be done beyond getting
  that 
   pull request merged upstream. Dolph ran the tests using the code
  from 
   the pull request somewhat successfully. The errors that we saw
  were 
   just in keystoneclient code. 
  
  But I don't think that their own test suite runs under py33 with
  that 
  branch. So they've hit the main issues, but we won't get a release
  in 
  that state. 
 Should we offer to bring HTTPretty under something like stackforge and
 leverage our CI infrastructure?  Not sure how open the
 owner/maintainers would be to this, but it would help to solve that
 issue… downside is that pull requests are no longer used (gerrit instead)
 and IIRC a CLA is still required for stackforge projects (might be
 a detractor).  Just a passing thought (that might be irrelevant
 depending on the owner/maintainer’s point of view).
 
 
 
 
 —Morgan
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] python 2.6 gate-nova-python26 failing

2013-12-05 Thread Rohan Kanade
Hey guys, I have an odd situation with my review
https://review.openstack.org/#/c/59860/: all python 2.6 unit tests are
failing, but all python 2.7 tests pass. It seems to be some issue on the CI side.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-05 Thread Isaku Yamahata
On Wed, Dec 04, 2013 at 12:37:19PM +0900,
Maru Newby ma...@redhat.com wrote:

 On Dec 4, 2013, at 11:57 AM, Clint Byrum cl...@fewbar.com wrote:
 
  Excerpts from Maru Newby's message of 2013-12-03 08:08:09 -0800:
  I've been investigating a bug that is preventing VM's from receiving IP 
  addresses when a Neutron service is under high load:
  
  https://bugs.launchpad.net/neutron/+bug/1192381
  
  High load causes the DHCP agent's status updates to be delayed, causing 
  the Neutron service to assume that the agent is down.  This results in the 
  Neutron service not sending notifications of port addition to the DHCP 
  agent.  At present, the notifications are simply dropped.  A simple fix is 
  to send notifications regardless of agent status.  Does anybody have any 
  objections to this stop-gap approach?  I'm not clear on the implications 
  of sending notifications to agents that are down, but I'm hoping for a 
  simple fix that can be backported to both havana and grizzly (yes, this 
  bug has been with us that long).
  
  Fixing this problem for real, though, will likely be more involved.  The 
  proposal to replace the current wsgi framework with Pecan may increase the 
  Neutron service's scalability, but should we continue to use a 'fire and 
  forget' approach to notification?  Being able to track the success or 
  failure of a given action outside of the logs would seem pretty important, 
  and allow for more effective coordination with Nova than is currently 
  possible.
  
  
  Dropping requests without triggering a user-visible error is a pretty
  serious problem. You didn't mention if you have filed a bug about that.
  If not, please do or let us know here so we can investigate and file
  a bug.
 
 There is a bug linked to in the original message that I am already working 
 on.  The fact that that bug title is 'dhcp agent doesn't configure ports' 
 rather than 'dhcp notifications are silently dropped' is incidental.
 
  
  It seems to me that they should be put into a queue to be retried.
  Sending the notifications blindly is almost as bad as dropping them,
  as you have no idea if the agent is alive or not.
 
 This is more the kind of discussion I was looking for.  
 
 In the current architecture, the Neutron service handles RPC and WSGI with a 
 single process and is prone to being overloaded such that agent heartbeats 
 can be delayed beyond the limit for the agent being declared 'down'.  Even if 
 we increased the agent timeout as Yongsheg suggests, there is no guarantee 
 that we can accurately detect whether an agent is 'live' with the current 
 architecture.  Given that amqp can ensure eventual delivery - it is a queue - 
 is sending a notification blind such a bad idea?  In the best case the agent 
 isn't really down and can process the notification.  In the worst case, the 
 agent really is down but will be brought up eventually by a deployment's 
 monitoring solution and process the notification when it returns.  What am I 
 missing? 
 

Do you mean overload of the neutron server, not the neutron agent?
So even though the agent sends periodic 'live' reports, they pile up
unprocessed on the server side.
When the server goes to send a notification, it wrongly considers the agent
dead - not because the agent failed to send live reports due to its own overload.
Is this understanding correct?


 Please consider that while a good solution will track notification delivery 
 and success, we may need 2 solutions:
 
 1. A 'good-enough', minimally-invasive stop-gap that can be back-ported to 
 grizzly and havana.

How about tweaking DhcpAgent._periodic_resync_helper?
If no notification has been received from the server since the last sleep,
it calls self.sync_state() even if self.needs_resync = False. That way the
inconsistency between agent and server caused by a lost notification
will be repaired.
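
A rough sketch of that tweak - the _notification_seen flag and the class
shell are hypothetical, added only for illustration; this is not the actual
Neutron code:

    import eventlet

    class DhcpAgentSketch(object):
        def _periodic_resync_helper(self):
            while True:
                eventlet.sleep(self.conf.resync_interval)
                if self.needs_resync or not self._notification_seen:
                    # Resync on recorded errors OR when the server has been
                    # silent since the last sleep (possible lost notification).
                    self.needs_resync = False
                    self.sync_state()
                self._notification_seen = False

The notification handlers would set _notification_seen = True, so a silent
interval triggers a full sync_state() even when no error was recorded.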


 2. A 'best-effort' refactor that maximizes the reliability of the DHCP agent.
 
 I'm hoping that coming up with a solution to #1 will allow us the breathing 
 room to work on #2 in this cycle.

Loss of notifications is somewhat inevitable, I think (unless tasks are
logged to stable storage shared between server and agent), and
unconditionally sending notifications would cause problems.

You mentioned agent crashes. Server crashes should also be taken care of
for reliability, and admins sometimes want to restart the neutron
server/agents for their own reasons.
An agent can crash after receiving a notification but before starting the
actual work. The server can crash after committing changes to the DB but
before sending the notification. In such cases the notification is lost.
Polling to resync would be necessary somewhere.

- notification loss isn't considered:
  the resync is not always run.
  Some optimization is possible, for example:
  - detect loss by sequence number
  - polling can be postponed while notifications arrive without loss.

- periodic resync spawns threads but doesn't wait for their completion,
  so if a resync takes a long time the next resync can start while the
  previous one is still going on.


Re: [openstack-dev] creating a default for oslo config variables within a project?

2013-12-05 Thread Julien Danjou
On Wed, Dec 04 2013, Sean Dague wrote:

 Honestly, I'd love us to be clever and figure out a not dangerous way
 through this, even if unwise (where we can yell at the user in the LOGs
 loudly, and fail them in J if lock_dir=/tmp) that lets us progress
 through this while gracefully bringing configs into line.

Correct me if I'm wrong, but I think the correct way to deal with that
security problem is to use an atomic operation using open(2) with:
  open(pathname, O_CREAT | O_EXCL)

or mkstemp(3).

That should be doable in Python too.
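
A minimal Python sketch of both variants (the lock path and prefix are just
for illustration):

    import os
    import tempfile

    # Exclusive create: raises OSError (EEXIST) if the path already exists,
    # so a file pre-planted in a shared /tmp cannot be silently reused.
    fd = os.open('/tmp/myservice.lock',
                 os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    os.close(fd)

    # Or let the library pick an unpredictable name atomically:
    fd, path = tempfile.mkstemp(prefix='myservice-')
    os.close(fd)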

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Unicode strings in Python3

2013-12-05 Thread Julien Danjou
On Wed, Dec 04 2013, Georgy Okrokvertskhov wrote:

 Quick summary: you can't use the unicode() function and u' ' strings in Python3.

Not that it's advised, but you can use u' ' back again with Python 3.3.
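
A tiny illustration of both points (six is the compatibility shim commonly
used across OpenStack):

    import six

    # u'' literals work on Python 2 and, thanks to PEP 414, again on
    # Python 3.3+ (they were a syntax error on 3.0-3.2).
    greeting = u'caf\u00e9'

    # unicode() is gone on Python 3; six.text_type is unicode on Python 2
    # and str on Python 3.
    text = six.text_type(42)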

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [policy] Let's make topics mandatory for openstack-dev

2013-12-05 Thread Roman Prykhodchenko
Hi folks,

The OpenStack community grows continuously, bringing more people and with
them new initiatives and new projects. This growing number of people,
initiatives and projects increases the amount of discussion on our mailing
list.

The problem which I'm talking about is that controlling the mailing list
gets harder with the growth of the community. It takes too much time to
check for important emails and delete/archive the rest even now. And it
does not tend to get any easier in the future.

Most of the email services and email clients support filtering incoming
emails. So one can automatically get rid of certain emails by creating
appropriate filters. Topics in subjects seem to be the best objects for
creating rules, i.e., when someone is interested only in Keystone he can
create an email filter for '[keystone]' substring in the subject.

The problem with the topics is that a lot of emails in openstack-dev do
not contain topics in their subjects, which makes this kind of filtering
very ineffective.

My proposal is to create an automated rule that rejects new emails, if
they do not contain any topic in their subject. What do you guys think?


- Roman Prykhodchenko
- romcheg on freenode.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request-id in API response

2013-12-05 Thread Maru Newby

On Dec 3, 2013, at 12:18 AM, Joe Gordon joe.gord...@gmail.com wrote:

 
 
 
 On Sun, Dec 1, 2013 at 7:04 PM, John Dickinson m...@not.mn wrote:
 Just to add to the story, Swift uses X-Trans-Id and generates it in the 
 outer-most catch_errors middleware.
 
 Swift's catch errors middleware is responsible for ensuring that the 
 transaction id exists on each request, and that all errors previously 
 uncaught, anywhere in the pipeline, are caught and logged. If there is not a 
 common way to do this, yet, I submit it as a great template for solving this 
 problem. It's simple, scalable, and well-tested (ie tests and running in prod 
 for years).
 
 https://github.com/openstack/swift/blob/master/swift/common/middleware/catch_errors.py
 
 Leaving aside error handling and only focusing on the transaction id (or 
 request id) generation, since OpenStack services are exposed to untrusted 
 clients, how would you propose communicating the appropriate transaction id 
 to a different service? I can see great benefit to having a glance 
 transaction ID carry through to Swift requests (and so on), but how should 
 the transaction id be communicated? It's not sensitive info, but I can 
 imagine a pretty big problem when trying to track down errors if a client 
 application decides to set eg the X-Set-Transaction-Id header on every 
 request to the same thing.
 
 -1 to cross service request IDs, for the reasons John mentions above.
 
 
 Thanks for bringing this up, and I'd welcome a patch in Swift that would use 
 a common library to generate the transaction id, if it were installed. I can 
 see that there would be huge advantage to operators to trace requests through 
 multiple systems.
 
 Another option would be for each system that calls an another OpenStack 
 system to expect and log the transaction ID for the request that was given. 
 This would be looser coupling and be more forgiving for a heterogeneous 
 cluster. Eg when Glance makes a call to Swift, Glance could log the 
 transaction id that Swift used (from the Swift response). Likewise, when 
 Swift makes a call to Keystone, Swift could log the Keystone transaction id. 
 This wouldn't result in a single transaction id across all systems, but it 
 would provide markers so an admin could trace the request.
 
 There was a session on this at the summit, and although the notes are a 
 little scarce this was the conclusion we came up with.  Every time a cross 
 service call is made, we will log and send a notification for ceilometer to 
 consume, with the request ids of both requests.  One of the benefits of 
 this approach is that we can easily generate a tree of all the API calls that 
 are made (and clearly show when multiple calls are made to the same service), 
 something that just a cross service request id would have trouble with.

Is it wise to trust anything a client provides to ensure traceability?  If a user 
receives a request id back from Nova, then submits that request id in an 
unrelated request to Neutron, the traceability would be effectively corrupted.  
If the consensus is that we don't want to securely deliver request ids for 
inter-service calls, how about requiring a service to log its request id along 
with the request id returned from a call to another service to achieve a 
similar result?  The catch is that every call point (or client instantiation?) 
would have to be modified to pass the request id instead of just logging at one 
place in each service.  Is that a cost worth paying?


m.


 
 https://etherpad.openstack.org/p/icehouse-summit-qa-gate-debugability 
 
 
 With that in mind I think having a standard x-openstack-request-id makes 
 things a little more uniform, and means that adding new services doesn't 
 require new logic to handle new request ids.
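
A minimal sketch of the "log the peer's id alongside your own" idea above -
the x-openstack-request-id header name comes from the thread, while the
function and logger names are purely illustrative:

    import logging
    import requests

    LOG = logging.getLogger(__name__)

    def call_peer_service(url, local_request_id):
        # Make the cross-service call as usual...
        resp = requests.get(url)
        # ...then record the id the peer generated next to our own id, so
        # an operator can join the two services' logs after the fact.
        peer_id = resp.headers.get('x-openstack-request-id')
        LOG.info('request %s -> peer request %s', local_request_id, peer_id)
        return resp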





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as first-class citizen - SSL Termination (Revised)

2013-12-05 Thread Samuel Bercovici
Hi Stephen,

To make sure I understand, which model is fine: Basic/Simple or New?

Thanks,
-Sam.


-Original Message-
From: Stephen Gran [mailto:stephen.g...@theguardian.com] 
Sent: Thursday, December 05, 2013 8:22 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as 
first-class citizen - SSL Termination (Revised)

Hi,

I would be happy with this model.  Yes, longer term it might be nice to have an 
independent certificate store so that when you need to be able to validate ssl 
you can, but this is a good intermediate step.

Cheers,

On 02/12/13 09:16, Vijay Venkatachalam wrote:

 LBaaS enthusiasts: Your vote on the revised model for SSL Termination?

 Here is a comparison between the original and revised model for SSL 
 Termination:

 ***
 Original Basic Model that was proposed in summit
 ***
 * Certificate parameters introduced as part of VIP resource.
 * This model is for basic config and there will be a model introduced in 
 future for detailed use case.
 * Each certificate is created for one and only one VIP.
 * Certificate params not stored in DB and sent directly to loadbalancer.
 * In case of failures, there is no way to restart the operation from details 
 stored in DB.
 ***
 Revised New Model
 ***
 * Certificate parameters will be part of an independent certificate resource. 
 A first-class citizen handled by LBaaS plugin.
 * It is a forward-looking model and aligns with AWS for uploading server 
 certificates.
 * A certificate can be reused in many VIPs.
 * Certificate params stored in DB.
 * In case of failures, parameters stored in DB will be used to restore the 
 system.

 A more detailed comparison can be viewed in the following link
   
 https://docs.google.com/document/d/1fFHbg3beRtmlyiryHiXlpWpRo1oWj8FqVe
 ZISh07iGs/edit?usp=sharing


--
Stephen Gran
Senior Systems Integrator - theguardian.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [policy] Let's make topics mandatory for openstack-dev

2013-12-05 Thread Robert Collins
On 5 December 2013 22:35, Roman Prykhodchenko
rprikhodche...@mirantis.com wrote:
 Hi folks,

 My proposal is to create an automated rule that rejects new emails, if
 they do not contain any topic in their subject. What do you guys think?

My expectation is that [ALL] would be the result, for things that
currently have no topic.

- Why not filter for just topics locally, wouldn't that be equivalent?

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [policy] Let's make topics mandatory for openstack-dev

2013-12-05 Thread Matthew Booth
On Thu, 2013-12-05 at 11:35 +0200, Roman Prykhodchenko wrote:
 Hi folks,
 
 Open Stack community grows continuously bringing more people and so new
 initiatives and new projects. This growing amount of people, initiatives
 and projects causes increasing the amount of discussions in our mailing
 list.
 
 The problem which I'm talking about is that controlling the mailing list
 gets harder with the growth of the community. It takes too much time to
 check for important emails and delete/archive the rest even now. And it
 does not tend to get any easier in the future.
 
 Most of the email services and email clients support filtering incoming
 emails. So one can automatically get rid of certain emails by creating
 appropriate filters. Topics in subjects seem to be the best objects for
 creating rules, i.e., when someone is interested only in Keystone he can
 create an email filter for '[keystone]' substring in the subject.
 
 The problem with the topics is that a lot of emails in openstack-dev do
 not contain topics in their subjects, which makes this kind of filtering
 very ineffective.
 
 My proposal is to create an automated rule that rejects new emails, if
 they do not contain any topic in their subject. What do you guys think?

Alternatively, separate mailing lists for each current topic area. A
top-level mailing list could subscribe to all of them for anybody who
truly wants the fire hose.

Matt


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [policy] Let's make topics mandatory for openstack-dev

2013-12-05 Thread Ronak Shah
I like the proposal and I share the pain of checking for the important
emails in the pile of all and potentially missing on some in timely fashion.

My 2 cents on top of creating an automated rule:
Most of the people (not all) I see on dev-list are active on either 1 or 2
project specific discussions.
Why not create openstack-dev-project lists and have people subscribe to
project list(s) that they are interested in.
Biggest advantage I see with this approach is that one can actively
participate by getting individual emails for the project mailing list that
they are working on.
In parallel they can opt for digest for rest of the project(s) where they
act only in read-only mode.

Thoughts?

Ronak



On Thu, Dec 5, 2013 at 3:05 PM, Roman Prykhodchenko 
rprikhodche...@mirantis.com wrote:

 Hi folks,

 Open Stack community grows continuously bringing more people and so new
 initiatives and new projects. This growing amount of people, initiatives
 and projects causes increasing the amount of discussions in our mailing
 list.

 The problem which I'm talking about is that controlling the mailing list
 gets harder with the growth of the community. It takes too much time to
 check for important emails and delete/archive the rest even now. And it
 does not tend to get any easier in the future.

 Most of the email services and email clients support filtering incoming
 emails. So one can automatically get rid of certain emails by creating
 appropriate filters. Topics in subjects seem to be the best objects for
 creating rules, i.e., when someone is interested only in Keystone he can
 create an email filter for '[keystone]' substring in the subject.

 The problem with the topics is that a lot of emails in openstack-dev do
 not contain topics in their subjects, which makes this kind of filtering
 very ineffective.

 My proposal is to create an automated rule that rejects new emails, if
 they do not contain any topic in their subject. What do you guys think?


 - Roman Prykhodchenko
 - romcheg on freenode.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [policy] Let's make topics mandatory for openstack-dev

2013-12-05 Thread Roman Prykhodchenko
We can create a list of official topics that correspond to certain
projects, i.e., [keystone], [nova], . [org].

If none of them is in the subject the email can be rejected. It's also
possible to create an [other] topic and pre-moderate it.

On 05.12.13 12:02, Robert Collins wrote:
 On 5 December 2013 22:35, Roman Prykhodchenko
 rprikhodche...@mirantis.com wrote:
 Hi folks,
 
 My proposal is to create an automated rule that rejects new emails, if
 they do not contain any topic in their subject. What do you guys think?
 
 My expectation is that [ALL] would be the result, for things that
 currently have no topic.
 
 - Why not filter for just topics locally, wouldn't that be equivalent?
 
 -Rob
 
 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as first-class citizen - SSL Termination (Revised)

2013-12-05 Thread Stephen Gran

Hi,

Right, sorry, I see that wasn't clear - I blame lack of coffee :)

I would prefer the Revised New Model.  I much prefer the ability to 
restore a loadbalancer from config in the event of node failure, and the 
ability to do basic sharing of certificates between VIPs.


I think that a longer term plan may involve putting the certificates in 
a smarter system if we decide we want to do things like evaluate trust 
models, but just storing them locally for now will do most of what I 
think people want to do with SSL termination.


Cheers,

On 05/12/13 09:57, Samuel Bercovici wrote:

Hi Stephen,

To make sure I understand, which model is fine Basic/Simple or New.

Thanks,
-Sam.


-Original Message-
From: Stephen Gran [mailto:stephen.g...@theguardian.com]
Sent: Thursday, December 05, 2013 8:22 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as 
first-class citizen - SSL Termination (Revised)

Hi,

I would be happy with this model.  Yes, longer term it might be nice to have an 
independent certificate store so that when you need to be able to validate ssl 
you can, but this is a good intermediate step.

Cheers,

On 02/12/13 09:16, Vijay Venkatachalam wrote:


LBaaS enthusiasts: Your vote on the revised model for SSL Termination?

Here is a comparison between the original and revised model for SSL Termination:

***
Original Basic Model that was proposed in summit
***
* Certificate parameters introduced as part of VIP resource.
* This model is for basic config and there will be a model introduced in future 
for detailed use case.
* Each certificate is created for one and only one VIP.
* Certificate params not stored in DB and sent directly to loadbalancer.
* In case of failures, there is no way to restart the operation from details 
stored in DB.
***
Revised New Model
***
* Certificate parameters will be part of an independent certificate resource. A 
first-class citizen handled by LBaaS plugin.
* It is a forward-looking model and aligns with AWS for uploading server 
certificates.
* A certificate can be reused in many VIPs.
* Certificate params stored in DB.
* In case of failures, parameters stored in DB will be used to restore the 
system.

A more detailed comparison can be viewed in the following link

https://docs.google.com/document/d/1fFHbg3beRtmlyiryHiXlpWpRo1oWj8FqVe
ZISh07iGs/edit?usp=sharing


--
Stephen Gran
Senior Systems Integrator - theguardian.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-05 Thread Jan Provaznik

On 12/04/2013 08:12 AM, Robert Collins wrote:

And the 90 day not-active-enough status:

|   jprovazn **|  220   5  10   7   177.3% |2 (  9.1%)  |
|jomara ** |  210   2   4  15  1190.5% |2 (  9.5%)  |
|mtaylor **|  173   6   0   8   847.1% |0 (  0.0%)  |
|   jtomasek **|  100   0   2   8  10   100.0% |1 ( 10.0%)  |
|jcoufal **|   53   1   0   1   320.0% |0 (  0.0%)  |

Jan, Jordan, Martyn, Jiri and Jaromir are still actively contributing
to TripleO and OpenStack, but I don't think they are tracking /
engaging in the code review discussions enough to stay in -core: I'd
be delighted if they want to rejoin as core - as we discussed last
time, after a shorter than usual ramp up period if they get stuck in.



I will pay more attention to reviews in the future. Only a nit: it's quite a 
challenge to find something to review - most mornings when I check the 
pending patches, everything is already reviewed ;).


Jan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [policy] Let's make topics mandatory for openstack-dev

2013-12-05 Thread Sylvain Bauza

Le 05/12/2013 11:03, Matthew Booth a écrit :

On Thu, 2013-12-05 at 11:35 +0200, Roman Prykhodchenko wrote:

Hi folks,

Open Stack community grows continuously bringing more people and so new
initiatives and new projects. This growing amount of people, initiatives
and projects causes increasing the amount of discussions in our mailing
list.

The problem which I'm talking about is that controlling the mailing list
gets harder with the growth of the community. It takes too much time to
check for important emails and delete/archive the rest even now. And it
does not tend to get any easier in the future.

Most of the email services and email clients support filtering incoming
emails. So one can automatically get rid of certain emails by creating
appropriate filters. Topics in subjects seem to be the best objects for
creating rules, i.e., when someone is interested only in Keystone he can
create an email filter for '[keystone]' substring in the subject.

The problem with the topics is that a lot of emails in openstack-dev do
not contain topics in their subjects, which makes this kind of filtering
very ineffective.

My proposal is to create an automated rule that rejects new emails, if
they do not contain any topic in their subject. What do you guys think?

Alternatively, separate mailing lists for each current topic area. A
top-level mailing list could subscribe to all of them for anybody who
truly wants the fire hose.

Matt



And so we create silos... :-)

I would give a -1 to Roman's proposal but amend it: why not tag such 
emails as [Nosubject] and filter them accordingly in our inboxes, i.e. with 
low priority?

-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] heat topic now exists on openstack-dev

2013-12-05 Thread Pavlo Shchelokovskyy
Hi all,

for those interested, there is now a dedicated Heat topic available
for subscription in the openstack-dev list.

To set your topic preferences log in to your Mailman settings at
http://lists.openstack.org/cgi-bin/mailman/options/openstack-dev/

Best,
Pavlo.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] This request was rate-limited. (HTTP 413)

2013-12-05 Thread Arindam Choudhury
Hi,

When I try to create a big hadoop cluster (21 nodes), sometimes I am getting 
this error: 

2013-12-05 12:17:57.920 29553 ERROR savanna.context [-] Thread 
'cluster-creating-8d093d9b-c675-4222-b53a-3319d54bc61f' fails with exception: 
'This request was rate-limited. (HTTP 413)'
2013-12-05 12:17:57.920 29553 TRACE savanna.context Traceback (most recent call 
last):
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/context.py, line 128, in wrapper
2013-12-05 12:17:57.920 29553 TRACE savanna.context func(*args, **kwargs)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/service/api.py, line 123, in _provision_cluster
2013-12-05 12:17:57.920 29553 TRACE savanna.context 
i.create_cluster(cluster)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/service/instances.py, line 56, in create_cluster
2013-12-05 12:17:57.920 29553 TRACE savanna.context 
_rollback_cluster_creation(cluster, ex)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/usr/lib64/python2.6/contextlib.py, line 23, in __exit__
2013-12-05 12:17:57.920 29553 TRACE savanna.context self.gen.next()
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/service/instances.py, line 36, in create_cluster
2013-12-05 12:17:57.920 29553 TRACE savanna.context 
_create_instances(cluster)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/service/instances.py, line 111, in _create_instances
2013-12-05 12:17:57.920 29553 TRACE savanna.context _run_instance(cluster, 
node_group, idx, aa_groups, userdata)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/service/instances.py, line 173, in _run_instance
2013-12-05 12:17:57.920 29553 TRACE savanna.context 
key_name=cluster.user_keypair_id)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/.tox/venv/lib/python2.6/site-packages/novaclient/v1_1/servers.py,
 line 658, in create
2013-12-05 12:17:57.920 29553 TRACE savanna.context **boot_kwargs)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/.tox/venv/lib/python2.6/site-packages/novaclient/base.py, line 
402, in _boot
2013-12-05 12:17:57.920 29553 TRACE savanna.context return_raw=return_raw, 
**kwargs)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/.tox/venv/lib/python2.6/site-packages/novaclient/base.py, line 
145, in _create
2013-12-05 12:17:57.920 29553 TRACE savanna.context _resp, body = 
self.api.client.post(url, body=body)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/.tox/venv/lib/python2.6/site-packages/novaclient/client.py, 
line 232, in post
2013-12-05 12:17:57.920 29553 TRACE savanna.context return 
self._cs_request(url, 'POST', **kwargs)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/.tox/venv/lib/python2.6/site-packages/novaclient/client.py, 
line 213, in _cs_request
2013-12-05 12:17:57.920 29553 TRACE savanna.context **kwargs)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/.tox/venv/lib/python2.6/site-packages/novaclient/client.py, 
line 195, in _time_request
2013-12-05 12:17:57.920 29553 TRACE savanna.context resp, body = 
self.request(url, method, **kwargs)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/.tox/venv/lib/python2.6/site-packages/novaclient/client.py, 
line 189, in request
2013-12-05 12:17:57.920 29553 TRACE savanna.context raise 
exceptions.from_response(resp, body, url, method)
2013-12-05 12:17:57.920 29553 TRACE savanna.context OverLimit: This request was 
rate-limited. (HTTP 413)
2013-12-05 12:17:57.920 29553 TRACE savanna.context 

How I can prevent this to happen? Any help will be highly appreciated.

Regards,

Arindam
  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] Anti-affinity

2013-12-05 Thread Arindam Choudhury
HI,

I have 11 compute nodes. I want to create a hadoop cluster with 1 
master (namenode+jobtracker) and 20 workers (datanode+tasktracker).

How do I configure anti-affinity so the master runs on one host while 
each of the other hosts runs two workers?

I tried some configurations, but I cannot achieve it.

Regards,
Arindam
  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Anti-affinity

2013-12-05 Thread Dmitry Mescheryakov
Arindam,

It is not achievable with the current Savanna. The anti-affinity feature
only allows running one VM per compute node. It cannot evenly distribute
VMs when the number of compute nodes is lower than the desired size of the
Hadoop cluster.

Dmitry


2013/12/5 Arindam Choudhury arin...@live.com

 HI,

 I have 11 compute nodes. I want to create a hadoop cluster with 1
 master(namenode+jobtracker) with 20 worker (datanode+tasktracker).

 How to configure the Anti-affinty so I can run the master in one host,
 while others will be hosting two worker?

 I tried some configuration, but I can not achieve it.

 Regards,
 Arindam

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Anti-affinity

2013-12-05 Thread Alexander Ignatov
Arindam,

What exact anti-affinity configuration did you use for your cluster? Did you 
configure the scheduler filters in Nova as described here?
http://docs.openstack.org/developer/savanna/userdoc/features.html#anti-affinity
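
For reference, the linked doc boils down to enabling the affinity filters for
the nova filter scheduler. An illustrative, non-authoritative nova.conf
fragment for a Grizzly/Havana-era setup (keep whatever filters your
deployment already uses):

    [DEFAULT]
    # The different_host/same_host scheduler hints used for anti-affinity
    # only take effect if these filters are enabled.
    scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,DifferentHostFilter,SameHostFilter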
 
Also, please send Savanna usage-related questions to the [openstack] list 
(openst...@lists.openstack.org), not to openstack-dev.

Regards,
Alexander Ignatov



On 05 Dec 2013, at 15:30, Arindam Choudhury arin...@live.com wrote:

 HI,
 
 I have 11 compute nodes. I want to create a hadoop cluster with 1 
 master(namenode+jobtracker) with 20 worker (datanode+tasktracker).
 
 How to configure the Anti-affinty so I can run the master in one host, while 
 others will be hosting two worker?
 
 I tried some configuration, but I can not achieve it.
 
 Regards,
 Arindam
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Anti-affinity

2013-12-05 Thread Arindam Choudhury
Hi,

Thanks a lot for your reply.

Regards,

Arindam

Date: Thu, 5 Dec 2013 15:41:33 +0400
From: dmescherya...@mirantis.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [savanna] Anti-affinity

Arindam,
It is not achievable with the current Savanna. The anti-affinity feature allows 
to run one VM per compute-node only. It can not evenly distribute VMs in case 
the number of compute nodes is lower than the desired size of Hadoop cluster.

Dmitry

2013/12/5 Arindam Choudhury arin...@live.com




HI,

I have 11 compute nodes. I want to create a hadoop cluster with 1 
master(namenode+jobtracker) with 20 worker (datanode+tasktracker).

How to configure the Anti-affinty so I can run the master in one host, while 
others will be hosting two worker?


I tried some configuration, but I can not achieve it.

Regards,
Arindam
  

___

OpenStack-dev mailing list

OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev   
  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] This request was rate-limited. (HTTP 413)

2013-12-05 Thread Arindam Choudhury
Hi,

I introduced this in my nova api-paste.ini:

[pipeline:openstack_compute_api_v2]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2

[pipeline:openstack_volume_api_v1]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1

[filter:ratelimit]
paste.filter_factory = 
nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits =(POST, "*", .*, 100, MINUTE);(POST, "*/servers", ^/servers, 500, 
DAY);(PUT, "*", .*, 100, MINUTE);(GET, "*changes-since*", .*changes-since.*, 
30, MINUTE);(DELETE, "*", .*, 100, MINUTE)

I am getting this error when I try to restart openstack-nova-api:


2013-12-05 12:51:20.035 2350 ERROR nova.wsgi [-] Ambiguous section names 
['composite:openstack_compute_api_v2', 'pipeline:openstack_compute_api_v2'] for 
section 'openstack_compute_api_v2' (prefixed by 'app' or 'application' or 
'composite' or 'composit' or 'pipeline' or 'filter-app') found in config 
/etc/nova/api-paste.ini
2013-12-05 12:51:20.044 2549 INFO nova.ec2.wsgi.server [-] (2549) wsgi starting 
up on http://0.0.0.0:8773/

2013-12-05 12:51:20.045 2350 CRITICAL nova [-] Could not load paste app 
'osapi_compute' from /etc/nova/api-paste.ini
2013-12-05 12:51:20.045 2350 TRACE nova Traceback (most recent call last):
2013-12-05 12:51:20.045 2350 TRACE nova   File /usr/bin/nova-api, line 61, in 
module
2013-12-05 12:51:20.045 2350 TRACE nova server = service.WSGIService(api, 
use_ssl=should_use_ssl)
2013-12-05 12:51:20.045 2350 TRACE nova   File 
/usr/lib/python2.6/site-packages/nova/service.py, line 598, in __init__
2013-12-05 12:51:20.045 2350 TRACE nova self.app = 
self.loader.load_app(name)
2013-12-05 12:51:20.045 2350 TRACE nova   File 
/usr/lib/python2.6/site-packages/nova/wsgi.py, line 485, in load_app
2013-12-05 12:51:20.045 2350 TRACE nova raise 
exception.PasteAppNotFound(name=name, path=self.config_path)
2013-12-05 12:51:20.045 2350 TRACE nova PasteAppNotFound: Could not load paste 
app 'osapi_compute' from /etc/nova/api-paste.ini
2013-12-05 12:51:20.045 2350 TRACE nova 
2013-12-05 12:51:20.407 2549 INFO nova.service [-] Parent process has died 
unexpectedly, exiting


Regards,

Arindam

Date: Thu, 5 Dec 2013 15:37:21 +0400
From: dmescherya...@mirantis.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [savanna] This request was rate-limited. (HTTP 
413)

Hello Arindam,
While deploying a Hadoop cluster, Savanna makes quite a lot of API requests to 
Nova. Naturally, the number of requests is directly proportional to the size of 
the cluster.


On the other hand, Nova has protection against users abusing the API with too 
many requests. It is called rate limiting. You need to set the limits higher 
than they are right now if you want to spin up a cluster of that size. You can find 
details in the Nova docs:
http://docs.openstack.org/grizzly/openstack-compute/admin/content//configuring-compute-API.html


Dmitry

2013/12/5 Arindam Choudhury arin...@live.com




Hi,

When I try to create a big hadoop cluster (21 nodes), sometimes I am getting 
this error: 

2013-12-05 12:17:57.920 29553 ERROR savanna.context [-] Thread 
'cluster-creating-8d093d9b-c675-4222-b53a-3319d54bc61f' fails with exception: 
'This request was rate-limited. (HTTP 413)'

2013-12-05 12:17:57.920 29553 TRACE savanna.context Traceback (most recent call 
last):
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/context.py, line 128, in wrapper
2013-12-05 12:17:57.920 29553 TRACE savanna.context func(*args, **kwargs)

2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/service/api.py, line 123, in _provision_cluster
2013-12-05 12:17:57.920 29553 TRACE savanna.context 
i.create_cluster(cluster)

2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/service/instances.py, line 56, in create_cluster
2013-12-05 12:17:57.920 29553 TRACE savanna.context 
_rollback_cluster_creation(cluster, ex)

2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/usr/lib64/python2.6/contextlib.py, line 23, in __exit__
2013-12-05 12:17:57.920 29553 TRACE savanna.context self.gen.next()
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/service/instances.py, line 36, in create_cluster

2013-12-05 12:17:57.920 29553 TRACE savanna.context 
_create_instances(cluster)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/service/instances.py, line 111, in _create_instances

2013-12-05 12:17:57.920 29553 TRACE savanna.context _run_instance(cluster, 
node_group, idx, aa_groups, userdata)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/service/instances.py, line 173, in _run_instance

2013-12-05 12:17:57.920 29553 TRACE savanna.context 
key_name=cluster.user_keypair_id)
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 

Re: [openstack-dev] [qa] Moving the QA meeting time

2013-12-05 Thread Sean Dague
On 12/04/2013 10:01 PM, Christopher Yeoh wrote:
 On Thu, Dec 5, 2013 at 11:06 AM, Kenichi Oomichi
 oomi...@mxs.nes.nec.co.jp mailto:oomi...@mxs.nes.nec.co.jp wrote:
 
 
 Hi Matthew,
 
 Thank you for picking this up.
 
  -Original Message-
  From: Matthew Treinish [mailto:mtrein...@kortar.org
 mailto:mtrein...@kortar.org]
  Sent: Thursday, December 05, 2013 6:04 AM
  To: openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
  Subject: [openstack-dev] [qa] Moving the QA meeting time
 
  Hi everyone,
 
  I'm looking at changing our weekly QA meeting time to make it more
 globally
  attendable. Right now the current time of 17:00 UTC doesn't really
 work for
  people who live in Asia Pacific timezones. (which includes a third
 of the
  current core review team) There are 2 approaches that I can see
 taking here:
 
   1. We could either move the meeting time later so that it makes
 it easier for
  people in the Asia Pacific region to attend.
 
   2. Or we move to a alternating meeting time, where every other
 week the meeting
  time changes. So we keep the current slot and alternate with
 something more
  friendly for other regions.
 
  I think trying to stick to a single meeting time would be a better
 call just for
  simplicity. But it gets difficult to appease everyone that way
 which is where the
  appeal of the 2nd approach comes in.
 
  Looking at the available time slots here:
 https://wiki.openstack.org/wiki/Meetings
  there are plenty of open slots before 1500 UTC which would be
 early for people in
  the US and late for people in the Asia Pacific region. There are
 plenty of slots
  starting at 2300 UTC which is late for people in Europe.
 
  Would something like 2200 UTC on Wed. or Thurs work for everyone?
 
  What are people's opinions on this?
 
 I am in JST.
 Is Chris in CST, and Marc in CET?
 
 Here is timezone difference.
 15:00 UTC - 07:00 PST - 01:30 CST - 16:00 CET - 24:00 JST
 22:00 UTC - 14:00 PST - 08:30 CST - 23:00 CET - 07:00 JST
 23:00 UTC - 15:00 PST - 09:30 CST - 24:00 CET - 08:00 JST
 
 I feel 22:00 would be nice.
 
 
 22:00 UTC would be fine with me. I have another meeting on at 22:00 UTC
 on Wednesdays (as does Matt now),
 so Thursday is probably better if possible. Otherwise it may be possible
 to move the meeting that Matt and I need to attend.

+1 for Thursday. I'd have to miss one meeting a month on Wed due to a
standing conflict.

22:00 UTC would be fine for me.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Moving the QA meeting time

2013-12-05 Thread Sean Dague
On 12/05/2013 02:37 AM, Koderer, Marc wrote:
 Hi all!
 
 -Original Message-
 From: Kenichi Oomichi [mailto:oomi...@mxs.nes.nec.co.jp]
 Sent: Thursday, 5 December 2013 01:37
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [qa] Moving the QA meeting time


 Hi Matthew,

 Thank you for picking this up.

 -Original Message-
 From: Matthew Treinish [mailto:mtrein...@kortar.org]
 Sent: Thursday, December 05, 2013 6:04 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [qa] Moving the QA meeting time

 Hi everyone,

 I'm looking at changing our weekly QA meeting time to make it more
 globally attendable. Right now the current time of 17:00 UTC doesn't
 really work for people who live in Asia Pacific timezones. (which
 includes a third of the current core review team) There are 2
 approaches that I can see taking here:

  1. We could either move the meeting time later so that it makes it
 easier for
 people in the Asia Pacific region to attend.

  2. Or we move to a alternating meeting time, where every other week
 the meeting
 time changes. So we keep the current slot and alternate with
 something more
 friendly for other regions.

 I think trying to stick to a single meeting time would be a better
 call just for simplicity. But it gets difficult to appease everyone
 that way which is where the appeal of the 2nd approach comes in.

 Looking at the available time slots here:
 https://wiki.openstack.org/wiki/Meetings
 there are plenty of open slots before 1500 UTC which would be early
 for people in the US and late for people in the Asia Pacific region.
 There are plenty of slots starting at 2300 UTC which is late for
 people in Europe.

 Would something like 2200 UTC on Wed. or Thurs work for everyone?

 What are people's opinions on this?

 I am in JST.
 Is Chris in CST, and Marc in CET?
 
 Yes, Giulio and I are in CET. And Attila too, right?
 

 Here is timezone difference.
 15:00 UTC - 07:00 PST - 01:30 CST - 16:00 CET - 24:00 JST
 22:00 UTC - 14:00 PST - 08:30 CST - 23:00 CET - 07:00 JST
 23:00 UTC - 15:00 PST - 09:30 CST - 24:00 CET - 08:00 JST

 I feel 22:00 would be nice.
 
 I'd prefer to have two slots since 22 UTC is quite late. But I am ok with it 
 if all others are fine.

The other option would be to oscillate on opposite weeks with Ceilometer
- https://wiki.openstack.org/wiki/Meetings/Ceilometer - they already have
a well-defined every-other-week cadence.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Ubuntu element switched to Saucy

2013-12-05 Thread Chris Jones
Hi

Apologies for being a little late announcing this, but the Ubuntu element
in diskimage-builder has been switched[1] to defaulting to the Saucy
release (i.e. 13.10). Please file bugs if you find any regressions!

[1] https://review.openstack.org/#/c/58714/

-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [policy] Let's make topics mandatory for openstack-dev

2013-12-05 Thread Sean Dague
On 12/05/2013 05:09 AM, Ronak Shah wrote:
 I like the proposal and I share the pain of checking for the important
 emails in the pile of everything and potentially missing some of them in a timely fashion.
 
 My 2 cents on top of creating an automated rule:
 Most of the people (not all) I see on dev-list are active on either 1 or
 2 project specific discussions. 
 Why not create openstack-dev-project lists and have people subscribe to
 the project list(s) that they are interested in?
 The biggest advantage I see with this approach is that one can actively
 participate by getting individual emails for the project mailing list
 that they are working on.
 In parallel they can opt for a digest for the rest of the project(s) where
 they act only in read-only mode.
 
 Thoughts?

We already have that functionality with the Topics support in mailman.
You can sign up for just specific topics there and only get those
topics. Emails are tagged with an X-Topics: header for filtering (much
more reliable than subject lines).

For instance:

X-Topics: Keystone

If you only subscribe to some topics, anything that doesn't match those
will not be delivered to you. That assumes everyone tags everything
correctly (which they don't), so do so at your own peril.
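
For anyone who wants to do the filtering locally rather than at
subscription time, here is a tiny illustrative script (not an official
tool; the mbox path and topic are just arguments you pass in) that picks
messages out of a local mbox by their X-Topics header:

import mailbox
import sys


def subjects_for_topic(mbox_path, topic):
    # yield the Subject of every message whose X-Topics header mentions topic
    for msg in mailbox.mbox(mbox_path):
        topics = msg.get('X-Topics', '')
        if topic.lower() in topics.lower():
            yield msg.get('Subject', '(no subject)')


if __name__ == '__main__':
    for subject in subjects_for_topic(sys.argv[1], sys.argv[2]):
        print(subject)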

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] This request was rate-limited. (HTTP 413)

2013-12-05 Thread Dmitry Mescheryakov
Hmm, not sure, I am not an expert in Nova. By the way, the link I gave you
is for Grizzly. If you are running a different release, take a look at that
release's docs as the configuration might look different there.

Dmitry


2013/12/5 Arindam Choudhury arin...@live.com

 Hi,

 I introduced this in my nova api-paste.ini:

 [pipeline:openstack_compute_api_v2]
 pipeline = faultwrap authtoken keystonecontext ratelimit
 osapi_compute_app_v2

 [pipeline:openstack_volume_api_v1]
 pipeline = faultwrap authtoken keystonecontext ratelimit
 osapi_volume_app_v1

 [filter:ratelimit]
 paste.filter_factory =
 nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
 limits =(POST, *, .*, 100, MINUTE);(POST, */servers, ^/servers, 500,
 DAY);(PUT, *, .*, 100, MINUTE);(GET, *changes-since*,
 .*changes-since.*, 30, MINUTE);(DELETE, *, .*, 100, MINUTE)

 I am getting this error when I try to restart openstack-nova-api:


 2013-12-05 12:51:20.035 2350 ERROR nova.wsgi [-] Ambiguous section names
 ['composite:openstack_compute_api_v2', 'pipeline:openstack_compute_api_v2']
 for section 'openstack_compute_api_v2' (prefixed by 'app' or 'application'
 or 'composite' or 'composit' or 'pipeline' or 'filter-app') found in config
 /etc/nova/api-paste.ini
 2013-12-05 12:51:20.044 2549 INFO nova.ec2.wsgi.server [-] (2549) wsgi
 starting up on http://0.0.0.0:8773/

 2013-12-05 12:51:20.045 2350 CRITICAL nova [-] Could not load paste app
 'osapi_compute' from /etc/nova/api-paste.ini
 2013-12-05 12:51:20.045 2350 TRACE nova Traceback (most recent call last):
 2013-12-05 12:51:20.045 2350 TRACE nova   File /usr/bin/nova-api, line
 61, in module
 2013-12-05 12:51:20.045 2350 TRACE nova server =
 service.WSGIService(api, use_ssl=should_use_ssl)
 2013-12-05 12:51:20.045 2350 TRACE nova   File
 /usr/lib/python2.6/site-packages/nova/service.py, line 598, in __init__
 2013-12-05 12:51:20.045 2350 TRACE nova self.app =
 self.loader.load_app(name)
 2013-12-05 12:51:20.045 2350 TRACE nova   File
 /usr/lib/python2.6/site-packages/nova/wsgi.py, line 485, in load_app
 2013-12-05 12:51:20.045 2350 TRACE nova raise
 exception.PasteAppNotFound(name=name, path=self.config_path)
 2013-12-05 12:51:20.045 2350 TRACE nova PasteAppNotFound: Could not load
 paste app 'osapi_compute' from /etc/nova/api-paste.ini
 2013-12-05 12:51:20.045 2350 TRACE nova
 2013-12-05 12:51:20.407 2549 INFO nova.service [-] Parent process has died
 unexpectedly, exiting
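
 (For context: the 'Ambiguous section names' failure above means paste found
 two sections - 'composite:openstack_compute_api_v2' and the newly added
 'pipeline:openstack_compute_api_v2' - that resolve to the same name, so it
 cannot tell which one to load. A small illustrative helper, not part of
 Nova, for spotting such collisions in an api-paste.ini before restarting
 the service might look like this:

 import collections
 import sys
 import ConfigParser  # Python 2 module name, matching the Grizzly-era tooling


 def find_ambiguous_sections(path):
     # group paste sections by the name after the 'type:' prefix and report
     # any name that is defined by more than one section
     parser = ConfigParser.RawConfigParser()
     parser.read(path)
     by_name = collections.defaultdict(list)
     for section in parser.sections():
         by_name[section.split(':', 1)[-1]].append(section)
     return dict((name, secs) for name, secs in by_name.items() if len(secs) > 1)


 if __name__ == '__main__':
     for name, sections in find_ambiguous_sections(sys.argv[1]).items():
         print('%s is defined by: %s' % (name, ', '.join(sections))))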


 Regards,

 Arindam

 --
 Date: Thu, 5 Dec 2013 15:37:21 +0400
 From: dmescherya...@mirantis.com
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [savanna] This request was rate-limited.
 (HTTP 413)


 Hello Arindam,

 While deploying a Hadoop cluster, Savanna makes quite a few API requests to
 Nova. Naturally, the number of requests is directly proportional to the
 size of the cluster.

 On the other hand, Nova has protection against users abusing the API with too
 many requests. It is called rate limiting. You need to set the limits higher
 than they are right now if you want to spin up a cluster of that size. You
 can find details in the Nova docs:

 http://docs.openstack.org/grizzly/openstack-compute/admin/content//configuring-compute-API.html

 Dmitry


 2013/12/5 Arindam Choudhury arin...@live.com

 Hi,

 When I try to create a big hadoop cluster (21 nodes), sometimes I am
 getting this error:

 2013-12-05 12:17:57.920 29553 ERROR savanna.context [-] Thread
 'cluster-creating-8d093d9b-c675-4222-b53a-3319d54bc61f' fails with
 exception: 'This request was rate-limited. (HTTP 413)'
 2013-12-05 12:17:57.920 29553 TRACE savanna.context Traceback (most recent
 call last):
 2013-12-05 12:17:57.920 29553 TRACE savanna.context   File
 /root/savanna/savanna/context.py, line 128, in wrapper
 2013-12-05 12:17:57.920 29553 TRACE savanna.context func(*args,
 **kwargs)
 2013-12-05 12:17:57.920 29553 TRACE savanna.context   File
 /root/savanna/savanna/service/api.py, line 123, in _provision_cluster
 2013-12-05 12:17:57.920 29553 TRACE savanna.context
 i.create_cluster(cluster)
 2013-12-05 12:17:57.920 29553 TRACE savanna.context   File
 /root/savanna/savanna/service/instances.py, line 56, in create_cluster
 2013-12-05 12:17:57.920 29553 TRACE savanna.context
 _rollback_cluster_creation(cluster, ex)
 2013-12-05 12:17:57.920 29553 TRACE savanna.context   File
 /usr/lib64/python2.6/contextlib.py, line 23, in __exit__
 2013-12-05 12:17:57.920 29553 TRACE savanna.context self.gen.next()
 2013-12-05 12:17:57.920 29553 TRACE savanna.context   File
 /root/savanna/savanna/service/instances.py, line 36, in create_cluster
 2013-12-05 12:17:57.920 29553 TRACE savanna.context
 _create_instances(cluster)
 2013-12-05 12:17:57.920 29553 TRACE savanna.context   File
 /root/savanna/savanna/service/instances.py, line 111, in _create_instances
 2013-12-05 12:17:57.920 29553 TRACE savanna.context
 _run_instance(cluster, node_group, idx, aa_groups, userdata)
 2013-12-05 12:17:57.920 29553 TRACE 

Re: [openstack-dev] [TripleO] capturing build details in images

2013-12-05 Thread Chris Jones
Hi

On 4 December 2013 22:19, Robert Collins robe...@robertcollins.net wrote:

 So - what about us capturing this information outside the image: we
 can create a uuid for the build, and write a file in the image with
 that uuid, and outside the image we can write


 +1

I think having a UUID inside the image is a spectacularly good idea
generally, and this seems like a good way to solve the general problem of
what to put in the image.
It would also be nice to capture the build logs automatically to
$UUID-build.log or something, for folk who really really care about audit
trails and reproducibility.


-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] This request was rate-limited. (HTTP 413)

2013-12-05 Thread Arindam Choudhury
Hi,

I am using openstack grizzly.

I just commented out
[pipeline:openstack_compute_api_v2]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2

[pipeline:openstack_volume_api_v1]

pipeline = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1

and it's working fine.

Thanks a lot.

Regards,
Arindam
Date: Thu, 5 Dec 2013 16:34:17 +0400
From: dmescherya...@mirantis.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [savanna] This request was rate-limited. (HTTP 
413)

Hmm, not sure, I am not an expert in Nova. By the way, the link I gave you is 
for Grizzly. If you are running a different release, take a look at that 
release's docs as the configuration might look different there.

Dmitry

2013/12/5 Arindam Choudhury arin...@live.com




Hi,

I introduced this in my nova api-paste.ini:

[pipeline:openstack_compute_api_v2]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2

[pipeline:openstack_volume_api_v1]

pipeline = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1

[filter:ratelimit]
paste.filter_factory = 
nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits =(POST, *, .*, 100, MINUTE);(POST, */servers, ^/servers, 500, 
DAY);(PUT, *, .*, 100, MINUTE);(GET, *changes-since*, .*changes-since.*, 
30, MINUTE);(DELETE, *, .*, 100, MINUTE)


I am getting this error when I try to restart openstack-nova-api:


2013-12-05 12:51:20.035 2350 ERROR nova.wsgi [-] Ambiguous section names 
['composite:openstack_compute_api_v2', 'pipeline:openstack_compute_api_v2'] for 
section 'openstack_compute_api_v2' (prefixed by 'app' or 'application' or 
'composite' or 'composit' or 'pipeline' or 'filter-app') found in config 
/etc/nova/api-paste.ini

2013-12-05 12:51:20.044 2549 INFO nova.ec2.wsgi.server [-] (2549) wsgi starting 
up on http://0.0.0.0:8773/

2013-12-05 12:51:20.045 2350 CRITICAL nova [-] Could not load paste app 
'osapi_compute' from /etc/nova/api-paste.ini

2013-12-05 12:51:20.045 2350 TRACE nova Traceback (most recent call last):
2013-12-05 12:51:20.045 2350 TRACE nova   File /usr/bin/nova-api, line 61, in 
module
2013-12-05 12:51:20.045 2350 TRACE nova server = service.WSGIService(api, 
use_ssl=should_use_ssl)

2013-12-05 12:51:20.045 2350 TRACE nova   File 
/usr/lib/python2.6/site-packages/nova/service.py, line 598, in __init__
2013-12-05 12:51:20.045 2350 TRACE nova self.app = 
self.loader.load_app(name)
2013-12-05 12:51:20.045 2350 TRACE nova   File 
/usr/lib/python2.6/site-packages/nova/wsgi.py, line 485, in load_app

2013-12-05 12:51:20.045 2350 TRACE nova raise 
exception.PasteAppNotFound(name=name, path=self.config_path)
2013-12-05 12:51:20.045 2350 TRACE nova PasteAppNotFound: Could not load paste 
app 'osapi_compute' from /etc/nova/api-paste.ini

2013-12-05 12:51:20.045 2350 TRACE nova 
2013-12-05 12:51:20.407 2549 INFO nova.service [-] Parent process has died 
unexpectedly, exiting


Regards,

Arindam

Date: Thu, 5 Dec 2013 15:37:21 +0400

From: dmescherya...@mirantis.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [savanna] This request was rate-limited. (HTTP 
413)


Hello Arindam,
While deploying a Hadoop cluster, Savanna makes quite a few API requests to Nova. 
Naturally, the number of requests is directly proportional to the size of the 
cluster.



On the other hand, Nova has protection against users abusing the API with too many 
requests. It is called rate limiting. You need to set the limits higher than they 
are right now if you want to spin up a cluster of that size. You can find 
details in the Nova docs:

http://docs.openstack.org/grizzly/openstack-compute/admin/content//configuring-compute-API.html



Dmitry

2013/12/5 Arindam Choudhury arin...@live.com




Hi,

When I try to create a big hadoop cluster (21 nodes), sometimes I am getting 
this error: 

2013-12-05 12:17:57.920 29553 ERROR savanna.context [-] Thread 
'cluster-creating-8d093d9b-c675-4222-b53a-3319d54bc61f' fails with exception: 
'This request was rate-limited. (HTTP 413)'


2013-12-05 12:17:57.920 29553 TRACE savanna.context Traceback (most recent call 
last):
2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/context.py, line 128, in wrapper
2013-12-05 12:17:57.920 29553 TRACE savanna.context func(*args, **kwargs)


2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/service/api.py, line 123, in _provision_cluster
2013-12-05 12:17:57.920 29553 TRACE savanna.context 
i.create_cluster(cluster)


2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/root/savanna/savanna/service/instances.py, line 56, in create_cluster
2013-12-05 12:17:57.920 29553 TRACE savanna.context 
_rollback_cluster_creation(cluster, ex)


2013-12-05 12:17:57.920 29553 TRACE savanna.context   File 
/usr/lib64/python2.6/contextlib.py, line 23, in __exit__
2013-12-05 12:17:57.920 29553 TRACE savanna.context self.gen.next()

Re: [openstack-dev] [Tempest] [qa] Which is the best way for skipping tests?

2013-12-05 Thread Sean Dague
On 12/04/2013 06:59 PM, Brant Knudson wrote:
 
 In Keystone, we've got some tests that raise self.skipTest('...') in
 the test class setUp() method (not setUpClass). My testing shows that if
 there are several tests in the class then it shows all of those tests as
 skipped (not just 1 skip). Does this do what you want?
 
 Here's an example:
 http://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/test_ipv6.py?id=73dbc00e6ac049f19d0069ecb07ca8ed75627dd5#n30
 
 http://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/core.py?id=73dbc00e6ac049f19d0069ecb07ca8ed75627dd5#n500

Our skipIf on the method does the same thing, because we rarely have a
setUp. Realistically, Tempest tests are often a bit strange relative to
the testing you will see in project trees, because Tempest runs tests
against real resources (computes, cinder volumes, networks) which can be
very expensive to set up. This means that the bulk of the time in Tempest
tests is spent on setup.

As a bunch of these tests aren't destructive to those resources, we
cheat a little and build them once in setUpClass. A naive conversion to
setUp would probably increase the Tempest run times by a factor of 6 to
10. We have ideas about how to do a non-naive conversion... but it will be
late cycle at best.
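
For readers outside the QA team, here is a self-contained illustration of
the pattern (not code from the Tempest tree): expensive fixtures are built
once per class, and raising SkipTest in setUpClass marks every test in the
class as skipped up front.

import unittest

FEATURE_ENABLED = False  # stand-in for a real deployment/config flag


class ServerActionsTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        if not FEATURE_ENABLED:
            raise unittest.SkipTest('feature disabled in this deployment')
        # in Tempest this is where the expensive resources (servers,
        # volumes, networks) would be created once and shared
        cls.shared_server = {'id': 'fake-server'}

    def test_reboot(self):
        self.assertIn('id', self.shared_server)

    def test_resize(self):
        self.assertIn('id', self.shared_server)


if __name__ == '__main__':
    unittest.main()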

All of this is a bit of a digression, but it seemed like useful context.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-05 Thread David Chadwick
Hi Arvind

On 04/12/2013 19:04, Tiwari, Arvind wrote:
 Hi David,
 
 The biggest problems in my opinion are,
 
 1. We are overloading and adding extra complexities on role name to
 maintain the generalization for role-def data model. 

On the contrary, having the same role name with multiple definitions (as
in your proposal) is overloading role name.

 2. Name spacing the role name is not going to resolve all the issues listed in the BP.

It is not designed to. Each issue has its own resolution.
Hierarchical naming resolves the issue of unique role names, that's all.
It allows different entities to create the same role (sub)name that are
actually different roles and have different global names.

 3. All the namespaces are derived from mutable strings (domain name,
 project name, service name, etc.), which makes the role name
 fragile.

So why are role IDs immutable and role names mutable? (Ditto
project/domain/service IDs and names.) What is the rationale for that?
Maybe you should not allow the names to change once they have been created.

 
 I think it is time to break generic role-def data model to
 accommodate more specialized use cases.

That is counter-intuitive. You don't break a model to accommodate more
specialized cases. You enhance the model to take the new functionality
into account, and preferably in a backwards compatible way.

regards

David

 
 
 Thanks, Arvind
 
 -Original Message- From: David Chadwick
 [mailto:d.w.chadw...@kent.ac.uk] Sent: Wednesday, December 04, 2013
 10:41 AM To: Adam Young; Tiwari, Arvind; OpenStack Development
 Mailing List (not for usage questions) Cc: Henry Nash;
 dolph.math...@gmail.com Subject: Re: [openstack-dev] [keystone]
 Service scoped role definition
 
 Hi Adam
 
  I understand your problem: when a project and a service have the same
  name, the lineage of a role containing this name is not deterministically
  known without some other rule or syntax that can differentiate between
  the two.
 
  Since domains contain projects, which contain services, isn't the
  containment hierarchy already known and predetermined? If it is,
  then:
 
  4 name components mean it is a service specified role
  3 name components mean it is a project specified role
  2 name components mean it is a domain specified role
  1 name component means it is globally named role (from the default domain)
 
 a null string means the default domain or all projects in a domain.
 You would never have null for a service name.
 
  admin means the global admin role
  /admin ditto
  x/admin means the admin of the X domain
  x/y/admin means the admin role for the y project in domain x
  //x/admin means admin for service x from the default domain
  etc.
 
 will that work?
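 
  To make the scheme concrete, here is a rough sketch in Python (purely
  illustrative, not Keystone code) of how a consumer could interpret such
  hierarchical names, with an empty component standing for the default
  domain or all projects as described above:
 
  def classify_role(name):
      parts = name.split('/')
      # None stands for "default domain" / "all projects" per the rule above
      defaulted = [p if p else None for p in parts]
      if len(parts) == 1:
          return {'scope': 'global', 'role': parts[0]}
      if len(parts) == 2:
          return {'scope': 'domain', 'domain': defaulted[0], 'role': parts[1]}
      if len(parts) == 3:
          return {'scope': 'project', 'domain': defaulted[0],
                  'project': defaulted[1], 'role': parts[2]}
      if len(parts) == 4:
          return {'scope': 'service', 'domain': defaulted[0],
                  'project': defaulted[1], 'service': parts[2],
                  'role': parts[3]}
      raise ValueError('too many name components: %s' % name)
 
  # classify_role('admin')      -> global admin
  # classify_role('x/admin')    -> admin of domain x
  # classify_role('x/y/admin')  -> admin of project y in domain x
  # classify_role('//x/admin')  -> admin for service x in the default domain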
 
 regards
 
 David
 
 
 On 04/12/2013 15:04, Adam Young wrote:
 On 12/04/2013 04:08 AM, David Chadwick wrote:
 I am happy with this as far as it goes. I would like to see it
 being made more general, where domains, services and projects can
 also own and name roles
 Domains should be OK, but services would confuse the matter.  You'd
 have to end up with something like LDAP
 
 role=  domain=default,service=glance
 
 vs
 
 role=  domain=default,project=glance
 
 unless we have unambiguous implicit ordering, we'll need to make
 it explicit, which is messy.
 
 I'd rather do:
 
  One segment: globally defined roles.  These could also be considered roles defined in the default domain.
  Two segments: service defined roles in the default domain.
  Three segments: service defined roles from non-default domain.
 
 To do domain scoped roles we could do something like:
 
 domX//admin
 
 
 But It seems confusing.
 
 Perhaps a better approach for project roles is to have the rule
 that the default domain can show up as an empty string.  Thus,
 project scoped roles from the default domain  would be:
 
 \glance\admin
 
 and from a non default domain
 
 domX\glance\admin
 
 
 
 
 
 
 
 
 regards
 
 David
 
 
 On 04/12/2013 01:51, Adam Young wrote:
 I've been thinking about your comment that nested roles are
 confusing
 
 
 What if we backed off and said the following:
 
 
  Some role-definitions are owned by services.  If a role
  definition is owned by a service, then in role assignment lists in
  tokens, those roles will be prefixed by the service name.  / is a
  reserved character and will be used as the divider between
  segments of the role definition.
 
 That drops arbitrary nesting, and provides a reasonable
 namespace.  Then a role def would look like:
 
 glance/admin  for the admin role on the glance project.
 
 
 
 In theory, we could add the domain to the namespace, but that
 seems unwieldy.  If we did, a role def would then look like
 this
 
 
 default/glance/admin  for the admin role on the glance
 project.
 
 Is that clearer than the nested roles?
 
 
 
 On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:
 Hi Adam,
 
 Based on our discussion over IRC, I have updated the below
 etherpad with proposal for nested role definition
 
 

Re: [openstack-dev] [policy] Let's make topics mandatory for openstack-dev

2013-12-05 Thread Dean Troyer
On Thu, Dec 5, 2013 at 6:23 AM, Sean Dague s...@dague.net wrote:

 We already have that functionality with the Topics support in mailman.
 You can sign up for just specific topics there and only get those
 topics. Emails are tagged with an X-Topics: header for filtering (much
 more reliable than subject lines).


This topic comes up regularly these days, and this message and another from
Stefano[1] are helpful for using topics, so I've added them to the MailingList
wiki page[2], plus pointers to split-the-list threads so we can maybe
refresh on why things are the way they are.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-November/019233.html
[2] https://wiki.openstack.org/wiki/Mailing_Lists#Future_Development

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Regular LBaaS subteam meeting

2013-12-05 Thread Eugene Nikanorov
Hi, lbaas folks.

Let's meet as usual at #openstack-meeting at 14-00 UTC today.
We'll discuss the current progress with features and third party testing.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as first-class citizen - SSL Termination (Revised)

2013-12-05 Thread Eugene Nikanorov
Hi,

My vote is for a separate resource (e.g. the 'New Model'). Also I'd like to see
certificate handling as a separate extension/db mixin (in fact, a persistence
driver) similar to the service_type extension.

Thanks,
Eugene.


On Thu, Dec 5, 2013 at 2:13 PM, Stephen Gran
stephen.g...@theguardian.comwrote:

 Hi,

 Right, sorry, I see that wasn't clear - I blame lack of coffee :)

 I would prefer the Revised New Model.  I much prefer the ability to
 restore a loadbalancer from config in the event of node failure, and the
 ability to do basic sharing of certificates between VIPs.

 I think that a longer term plan may involve putting the certificates in a
 smarter system if we decide we want to do things like evaluate trust
 models, but just storing them locally for now will do most of what I think
 people want to do with SSL termination.

 Cheers,


 On 05/12/13 09:57, Samuel Bercovici wrote:

 Hi Stephen,

 To make sure I understand, which model is fine: Basic/Simple or New?

 Thanks,
 -Sam.


 -Original Message-
 From: Stephen Gran [mailto:stephen.g...@theguardian.com]
 Sent: Thursday, December 05, 2013 8:22 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Vote required for
 certificate as first-class citizen - SSL Termination (Revised)

 Hi,

 I would be happy with this model.  Yes, longer term it might be nice to
 have an independent certificate store so that when you need to be able to
 validate ssl you can, but this is a good intermediate step.

 Cheers,

 On 02/12/13 09:16, Vijay Venkatachalam wrote:


 LBaaS enthusiasts: Your vote on the revised model for SSL Termination?

 Here is a comparison between the original and revised model for SSL
 Termination:

 ***
 Original Basic Model that was proposed in summit
 ***
 * Certificate parameters introduced as part of VIP resource.
 * This model is for basic config and there will be a model introduced in
 future for detailed use case.
 * Each certificate is created for one and only one VIP.
 * Certificate params not stored in DB and sent directly to loadbalancer.
 * In case of failures, there is no way to restart the operation from
 details stored in DB.
 ***
 Revised New Model
 ***
 * Certificate parameters will be part of an independent certificate
 resource. A first-class citizen handled by LBaaS plugin.
 * It is a forward-looking model and aligns with AWS for uploading
 server certificates.
 * A certificate can be reused in many VIPs.
 * Certificate params stored in DB.
 * In case of failures, parameters stored in DB will be used to restore
 the system.

 A more detailed comparison can be viewed in the following link

 https://docs.google.com/document/d/1fFHbg3beRtmlyiryHiXlpWpRo1oWj8FqVe
 ZISh07iGs/edit?usp=sharing
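
 Purely as an illustration of the Revised New Model (the field names below
 are assumptions, not the agreed API): the certificate becomes its own
 persisted resource, and VIPs reference it by id so it can be reused and
 restored after a failure.

 certificate = {
     "id": "0b8c6a8e-...",                        # assigned by the LBaaS plugin
     "name": "web-frontend-cert",
     "certificate": "-----BEGIN CERTIFICATE-----\n...",
     "private_key": "-----BEGIN RSA PRIVATE KEY-----\n...",
 }

 vip = {
     "name": "web-vip",
     "protocol": "HTTPS",
     "certificate_id": certificate["id"],         # many VIPs can share one cert
 }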


 --
 Stephen Gran
 Senior Systems Integrator - theguardian.com
 Please consider the environment before printing this email.

 --


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-05 Thread Imre Farkas

On 12/04/2013 08:12 AM, Robert Collins wrote:

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - Ghe Rivero for -core

+1


  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core
As many of them expressed their further interest in TripleO, I vote 
for keeping them in core for now; let's wait a couple of weeks or a month to 
see how the stats change.


Imre

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] User registrations

2013-12-05 Thread Saju M
Hi,
I added a new blueprint
https://blueprints.launchpad.net/horizon/+spec/user-registration.

Please check attached file for the plan.

That local DB is optional; we can save extra information in the 'text'
field of Keystone's 'user' table as a JSON object.





Regards
Saju Madhavan
+91 09535134654


dia_user_signup.pdf
Description: Adobe PDF document
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Performance Regression in Neutron/Havana compared to Quantum/Grizzly

2013-12-05 Thread Nathani, Sreedhar (APS)
Hello Marun,

Please find the details about my setup and the tests which I have done so far.

Setup
  - One Physical Box with 16c, 256G memory. 2 VMs created on this Box - One for 
Controller and One for Network Node
  - 16x compute nodes (each has 16c, 256G memory)
  - All the systems are installed with Ubuntu Precise + Havana Bits from Ubuntu 
Cloud Archive

Steps to simulate the issue
  1) Concurrently create 30 Instances (m1.small) using REST API with mincount=30
  2) sleep for 20min and repeat the step (1)


Issue 1
In Havana, once we cross 150 instances (5 batches x 30), during the 6th batch some 
instances go into ERROR state
because their network ports cannot be created, and some instances get duplicate 
IP addresses.

Per Maru Newby this issue might be related to this bug
https://bugs.launchpad.net/bugs/1192381

I did the same with Grizzly on the same environment 2 months back, 
where I was able to deploy close to 240 instances without any errors.
Initially Grizzly also showed the same behavior, but with these tunings based 
on this bug
https://bugs.launchpad.net/neutron/+bug/1160442, I never had issues (tested more 
than 10 times):
   sqlalchemy_pool_size = 60
   sqlalchemy_max_overflow = 120
   sqlalchemy_pool_timeout = 2
   agent_down_time = 60
   report_interval = 20

In Havana, I have tuned the same tunables but I could never get past 150+ 
instances. Without the tunables I could not get past
100 instances. We are getting many timeout errors from the DHCP agent and 
the neutron clients.

NOTE: After tuning agent_down_time to 60 and report_interval to 20, we no 
longer get these error messages:
   2013-12-02 11:44:43.421 28201 WARNING neutron.scheduler.dhcp_agent_scheduler 
[-] No more DHCP agents
   2013-12-02 11:44:43.439 28201 WARNING neutron.scheduler.dhcp_agent_scheduler 
[-] No more DHCP agents
   2013-12-02 11:44:43.452 28201 WARNING neutron.scheduler.dhcp_agent_scheduler 
[-] No more DHCP agents


In the compute node openvswitch agent logs, we see these errors repeating 
continuously

2013-12-04 06:46:02.081 3546 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Timeout: Timeout while 
waiting on RPC response - topic: q-plugin, RPC method: 
security_group_rules_for_devices info: unknown
and WARNING neutron.openstack.common.rpc.amqp [-] No calling threads waiting 
for msg_id

DHCP agent has below errors

2013-12-02 15:35:19.557 22125 ERROR neutron.agent.dhcp_agent [-] Unable to 
reload_allocations dhcp.
2013-12-02 15:35:19.557 22125 TRACE neutron.agent.dhcp_agent Timeout: Timeout 
while waiting on RPC response - topic: q-plugin, RPC method: get_dhcp_port 
info: unknown

2013-12-02 15:35:34.266 22125 ERROR neutron.agent.dhcp_agent [-] Unable to sync 
network state.
2013-12-02 15:35:34.266 22125 TRACE neutron.agent.dhcp_agent Timeout: Timeout 
while waiting on RPC response - topic: q-plugin, RPC method: 
get_active_networks_info info: unknown


In Havana, I have merged the code from this patch and set api_workers to 8 (My 
Controller VM has 8cores/16Hyperthreads)
https://review.openstack.org/#/c/37131/
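
For reference, a minimal neutron.conf fragment for that setting (the option
name comes from the patch above; the section is my assumption):

[DEFAULT]
# run 8 separate neutron-server API worker processes (one per core here)
api_workers = 8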

After this patch and starting 8 neutron-server worker threads, during the batch 
creation of 240 instances with 30 concurrent requests during each batch,
238 instances became active and 2 instances went into error. Interestingly, these 
2 instances which went into error state are from the same compute node.

Unlike earlier this time, the errors are due to 'Too Many Connections' to the 
MySQL database.
2013-12-04 17:07:59.877 21286 AUDIT nova.compute.manager 
[req-26d64693-d1ef-40f3-8350-659e34d5b1d7 c4d609870d4447c684858216da2f8041 
9b073211dd5c4988993341cc955e200b] [instance: 
c14596fd-13d5-482b-85af-e87077d4ed9b] Terminating instance
2013-12-04 17:08:00.578 21286 ERROR nova.compute.manager 
[req-26d64693-d1ef-40f3-8350-659e34d5b1d7 c4d609870d4447c684858216da2f8041 
9b073211dd5c4988993341cc955e200b] [instance: 
c14596fd-13d5-482b-85af-e87077d4ed9b] Error: Remote error: OperationalError 
(OperationalError) (1040, 'Too many connections') None None

We need to backport the patch 'https://review.openstack.org/#/c/37131/' to 
address the Neutron scaling issues in Havana.
Carl is already backporting this patch into Havana -
https://review.openstack.org/#/c/60082/ - which is good.

Issue 2
Grizzly :
During the concurrent instance creation in Grizzly, once we cross 210 
instances, during the subsequent 30-instance creation some of
the instances could not get their IP address during the first boot within the 
first few minutes. Instance MAC and IP address details
were updated in the dnsmasq hosts file, but with a delay. Instances were 
eventually able to get their IP address, with a delay.

If we reboot the instance using 'nova reboot', the instance gets its IP address.
* The amount of delay depends on the number of network ports and is in the 
range of 8 seconds to 2 minutes.


Havana :
But in Havana only 81 instances could get the IP during the first boot. Port is 
getting created and 

Re: [openstack-dev] [savanna] Anti-affinity

2013-12-05 Thread Arindam Choudhury
Hi,

Is it possible using anti-affinity to reserve a compute node only for 
master(namenode+jobtracker)?

Regards,
Arindam

From: arin...@live.com
To: openstack-dev@lists.openstack.org
Date: Thu, 5 Dec 2013 12:52:23 +0100
Subject: Re: [openstack-dev] [savanna] Anti-affinity




Hi,

Thanks a lot for your reply.

Regards,

Arindam

Date: Thu, 5 Dec 2013 15:41:33 +0400
From: dmescherya...@mirantis.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [savanna] Anti-affinity

Arindam,
It is not achievable with the current Savanna. The anti-affinity feature only 
allows running one VM per compute node. It cannot evenly distribute VMs when 
the number of compute nodes is lower than the desired size of the Hadoop cluster.

Dmitry

2013/12/5 Arindam Choudhury arin...@live.com




HI,

I have 11 compute nodes. I want to create a Hadoop cluster with 1 
master (namenode+jobtracker) and 20 workers (datanode+tasktracker).

How do I configure anti-affinity so I can run the master on one host, while 
the others each host two workers?


I tried some configuration, but I can not achieve it.

Regards,
Arindam
  

___

OpenStack-dev mailing list

OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev   
  

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev   
  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] capturing build details in images

2013-12-05 Thread James Slagle
On Wed, Dec 4, 2013 at 5:19 PM, Robert Collins
robe...@robertcollins.net wrote:
 This is a follow up to https://review.openstack.org/59621 to get
 broader discussion..

 So at the moment we capture a bunch of details in the image - what
 parameters the image was built with and some environment variables.

 Last week we were capturing everything, which there is broad consensus
 was too much, but it seems to me that that is based on two things:
  - the security ramifications of unanticipated details being baked
 into the image
  - many variables being irrelevant most of the time

 I think those are both good points. But... the problem with diagnostic
 information is you don't know that you need it until you don't have
 it.

 I'm particularly worried that things like bad http proxies, and third
 party elements that need variables we don't know about will be
 undiagnosable. Forcing everything through a DIB_FOO variable thunk
 seems like just creating work for ourselves - I'd like to avoid that.

 Further, some variables we should capture (like http_proxy) have
 passwords embedded in them, so even whitelisting what variables to
 capture doesn't solve the general problem.

 So - what about us capturing this information outside the image: we
 can create a uuid for the build, and write a file in the image with
 that uuid, and outside the image we can write:
  - all variables (no security ramifications now as this file can be
 kept by whomever built the image)
  - command line args
  - version information for the toolchain etc.

+1.  I like this idea a lot.

What about making the uuid file written outside of the image be in
json format so it's easily machine parseable?

Something like:

dib-uuid.json would contain:

{
  environment : {
  DIB_NO_TMPFS: 1,
  ...
   },
  dib : {
 command-line : ,
 version: .
  }
}

Could keep adding additional things like list of elements used, build time, etc.
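
A rough sketch of how that could look (names are illustrative, not
diskimage-builder's actual interface): write the manifest next to the
image, keyed by a UUID that is also dropped inside the image.

import json
import os
import sys
import uuid


def write_build_manifest(output_dir):
    # the same build_id would also be written to a file inside the image so
    # the image can be matched back to this manifest later
    build_id = str(uuid.uuid4())
    manifest = {
        'environment': dict(os.environ),   # full env, kept outside the image
        'dib': {
            'command-line': ' '.join(sys.argv),
            'version': '0.0.1',            # placeholder toolchain version
        },
    }
    path = os.path.join(output_dir, 'dib-%s.json' % build_id)
    with open(path, 'w') as f:
        json.dump(manifest, f, indent=2, sort_keys=True)
    return build_id, path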

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [policy] Let's make topics mandatory for openstack-dev

2013-12-05 Thread Thierry Carrez
Dean Troyer wrote:
 On Thu, Dec 5, 2013 at 6:23 AM, Sean Dague s...@dague.net
 mailto:s...@dague.net wrote:
 
 We already have that functionality with the Topics support in mailman.
 You can sign up for just specific topics there and only get those
 topics. Emails are tagged with an X-Topics: header for filtering (much
 more reliable than subject lines).
 
 This topic comes up regularly these days and this and another message
 from Stefano[1] are helpful to using topics so I've added it to the
 MailingList wiki page[2].  Plus pointers to split-the-list threads so we
 can maybe refresh on why things are the way they are.
 
 [1] 
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/019233.html
 [2] https://wiki.openstack.org/wiki/Mailing_Lists#Future_Development

Mailman Topics are good enough at this point -- the main issue with them
is that if you fail to provide any topic with your email, mailman
defaults to sending it to everyone... so people not following the topic
rules actually get more visibility than people who do.

I think we should just extend the use of sane subject lines and fight
people who fail to provide a topic (if what they are posting is not of
cross-project interest).

As someone who proposed a basic split recently, I'll admit that adopting
a more violent approach to ignoring threads based on their subject lines
worked for me over the last weeks. The harder part is to accept that you
can't read everything. Once you do that, aggressively ignoring stuff
you're less interested in is not that hard.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-05 Thread Jiří Stránský

On 4.12.2013 08:12, Robert Collins wrote:

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - Ghe Rivero for -core


+1


  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.


I vote for keeping in -core those who have already expressed or will express 
an intention to be more active in reviews, and removing the rest.


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-05 Thread James Slagle
On Wed, Dec 4, 2013 at 2:10 PM, Robert Collins
robe...@robertcollins.net wrote:
 On 5 December 2013 06:55, James Slagle james.sla...@gmail.com wrote:
 On Wed, Dec 4, 2013 at 2:12 AM, Robert Collins
 Jan, Jordan, Martyn, Jiri and Jaromir are still actively contributing
 to TripleO and OpenStack, but I don't think they are tracking /
 engaging in the code review discussions enough to stay in -core: I'd
 be delighted if they want to rejoin as core - as we discussed last
 time, after a shorter than usual ramp up period if they get stuck in.

 What's the shorter than usual ramp up period?

 You know, we haven't actually put numbers on it. But I'd be
 comfortable with a few weeks of sustained involvement.

+1.  Sounds reasonable.


 In general, I agree with your points about removing folks from core.

 We do have a situation though where some folks weren't reviewing as
 frequently when the Tuskar UI/API development slowed a bit post-merge.
  Since that is getting ready to pick back up, my concern with removing
 this group of folks is that it leaves fewer people on core who are
 deeply familiar with that code base.  Maybe that's ok, especially if
 the fast track process to get them back on core is reasonable.

 Well, I don't think we want a situation where, when a single org
 decides to tackle something else for a bit, no one can comfortably
 fix bugs in e.g. Tuskar - or worse, the whole thing stalls - that's why
 I've been so keen to get /everyone/ in TripleO-core familiar with the
 entire collection of codebases we're maintaining.

 So I think after 3 months that other cores should be reasonably familiar too 
 ;).

Well, it's not so much about just fixing bugs.  I'm confident our set
of cores could fix bugs in almost any OpenStack related project, and
in fact most do.  It was more just a comment around people who worked
on the initial code being removed from core.  But, if others don't
share that concern, and in fact Ladislav's comment about having
confidence in the number of tuskar-ui guys still on core pretty much
mitigates my concern :).

 That said, perhaps we should review these projects.

 Tuskar as an API to drive deployment and ops clearly belongs in
 TripleO - though we need to keep pushing features out of it into more
 generalised tools like Heat, Nova and Solum. TuskarUI though, as far
 as I know all the other programs have their web UI in Horizon itself -
 perhaps TuskarUI belongs in the Horizon program as a separate code
 base for now, and merge them once Tuskar begins integration?

IMO, I'd like to see Tuskar UI stay in tripleo for now, given that we
are very focused on the deployment story.  And our reviewers are
likely to have strong opinions on that :).  Not that we couldn't go
review in Horizon if we wanted to, but I don't think we need the churn
of making that change right now.

So, I'll send my votes on the other folks after giving them a little
more time to reply.

Thanks.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Anti-affinity

2013-12-05 Thread Dmitry Mescheryakov
No, anti-affinity does not work that way. It lets you distribute nodes
running the same process, but you can't separate nodes running different
processes (i.e. master and workers).
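
For reference, the shape of the knob being discussed (field names are from
my recollection of the Savanna docs of this era, so treat them as
assumptions): anti_affinity lists the process names whose instances should
be spread across different hosts, which is why it cannot pin the master to
a dedicated node.

cluster = {
    "name": "hadoop-cluster",
    "plugin_name": "vanilla",
    "hadoop_version": "1.2.1",
    "anti_affinity": ["datanode", "tasktracker"],
    # node group definitions (master and worker groups) would follow here
}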

Dmitry


2013/12/5 Arindam Choudhury arin...@live.com

 Hi,

 Is it possible using anti-affinity to reserve a compute node only for
 master(namenode+jobtracker)?

 Regards,
 Arindam

 --
 From: arin...@live.com
 To: openstack-dev@lists.openstack.org
 Date: Thu, 5 Dec 2013 12:52:23 +0100

 Subject: Re: [openstack-dev] [savanna] Anti-affinity

 Hi,

 Thanks a lot for your reply.

 Regards,

 Arindam

 --
 Date: Thu, 5 Dec 2013 15:41:33 +0400
 From: dmescherya...@mirantis.com
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [savanna] Anti-affinity

 Arindam,

 It is not achievable with the current Savanna. The anti-affinity feature
 allows to run one VM per compute-node only. It can not evenly distribute
 VMs in case the number of compute nodes is lower than the desired size of
 Hadoop cluster.

 Dmitry


 2013/12/5 Arindam Choudhury arin...@live.com

 HI,

 I have 11 compute nodes. I want to create a hadoop cluster with 1
 master(namenode+jobtracker) with 20 worker (datanode+tasktracker).

 How to configure the Anti-affinty so I can run the master in one host,
 while others will be hosting two worker?

 I tried some configuration, but I can not achieve it.

 Regards,
 Arindam

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___ OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___ OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-05 Thread Maru Newby

On Dec 5, 2013, at 6:43 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 I have offered up https://review.openstack.org/#/c/60082/ as a
 backport to Havana.  Interest was expressed in the blueprint for doing
 this even before this thread.  If there is consensus for this as the
 stop-gap then it is there for the merging.  However, I do not want to
 discourage discussion of other stop-gap solutions like what Maru
 proposed in the original post.
 
 Carl

Awesome.  No worries, I'm still planning on submitting a patch to improve 
notification reliability.

We seem to be cpu bound now in processing RPC messages.  Do you think it would 
be reasonable to run multiple processes for RPC?


m.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-05 Thread Carl Baldwin
Creating separate processes for API workers does allow a bit more room
for RPC message processing in the main process.  If this isn't enough
and the main process is still bound on CPU and/or green
thread/sqlalchemy blocking then creating separate worker processes for
RPC processing may be the next logical step to scale.  I'll give it
some thought today and possibly create a blueprint.
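
A very rough sketch of that idea (not Neutron code): fork a configurable
number of worker processes, each of which would own its own RPC connection
and consume from the shared plugin topic so the broker spreads messages
across them.

import multiprocessing
import time


def serve_rpc(worker_index):
    # in the real service this would create the RPC connection, register the
    # plugin callbacks and consume messages; here it is only a placeholder loop
    while True:
        time.sleep(1)


def launch_rpc_workers(count):
    workers = []
    for index in range(count):
        proc = multiprocessing.Process(target=serve_rpc, args=(index,))
        proc.daemon = True
        proc.start()
        workers.append(proc)
    return workers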

Carl

On Thu, Dec 5, 2013 at 7:13 AM, Maru Newby ma...@redhat.com wrote:

 On Dec 5, 2013, at 6:43 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 I have offered up https://review.openstack.org/#/c/60082/ as a
 backport to Havana.  Interest was expressed in the blueprint for doing
 this even before this thread.  If there is consensus for this as the
 stop-gap then it is there for the merging.  However, I do not want to
 discourage discussion of other stop-gap solutions like what Maru
 proposed in the original post.

 Carl

 Awesome.  No worries, I'm still planning on submitting a patch to improve 
 notification reliability.

 We seem to be cpu bound now in processing RPC messages.  Do you think it 
 would be reasonable to run multiple processes for RPC?


 m.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Moving the QA meeting time

2013-12-05 Thread David Kranz

On 12/05/2013 07:16 AM, Sean Dague wrote:

On 12/05/2013 02:37 AM, Koderer, Marc wrote:

Hi all!


-Original Message-
From: Kenichi Oomichi [mailto:oomi...@mxs.nes.nec.co.jp]
Sent: Thursday, December 5, 2013 01:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] Moving the QA meeting time


Hi Matthew,

Thank you for picking this up.


-Original Message-
From: Matthew Treinish [mailto:mtrein...@kortar.org]
Sent: Thursday, December 05, 2013 6:04 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [qa] Moving the QA meeting time

Hi everyone,

I'm looking at changing our weekly QA meeting time to make it more
globally attendable. Right now the current time of 17:00 UTC doesn't
really work for people who live in Asia Pacific timezones. (which
includes a third of the current core review team) There are 2

approaches that I can see taking here:

  1. We could either move the meeting time later so that it makes it

easier for

 people in the Asia Pacific region to attend.

  2. Or we move to an alternating meeting time, where every other week

the meeting

 time changes. So we keep the current slot and alternate with

something more

 friendly for other regions.

I think trying to stick to a single meeting time would be a better
call just for simplicity. But it gets difficult to appease everyone
that way which is where the appeal of the 2nd approach comes in.

Looking at the available time slots here:
https://wiki.openstack.org/wiki/Meetings
there are plenty of open slots before 1500 UTC which would be early
for people in the US and late for people in the Asia Pacific region.
There are plenty of slots starting at 2300 UTC which is late for

people in Europe.

Would something like 2200 UTC on Wed. or Thurs work for everyone?

What are people's opinions on this?

I am in JST.
Is Chris in CST, and Marc in CET?

Yes, Giulio and I are in CET. And Attila too, right?


Here is timezone difference.
15:00 UTC - 07:00 PST - 01:30 CST - 16:00 CET - 24:00 JST
22:00 UTC - 14:00 PST - 08:30 CST - 23:00 CET - 07:00 JST
23:00 UTC - 15:00 PST - 09:30 CST - 24:00 CET - 08:00 JST

I feel 22:00 would be nice.

I'd prefer to have two slots since 22 UTC is quite late. But I am ok with it if 
all others are fine.

The other option would be to oscillate on opposite weeks with Ceilometer
- https://wiki.openstack.org/wiki/Meetings/Ceilometer - they already have
a well-defined every-other-week cadence.

-Sean



Either option works for me.

 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] capturing build details in images

2013-12-05 Thread Jay Dobies



On 12/05/2013 08:38 AM, James Slagle wrote:

On Wed, Dec 4, 2013 at 5:19 PM, Robert Collins
robe...@robertcollins.net wrote:

This is a follow up to https://review.openstack.org/59621 to get
broader discussion..

So at the moment we capture a bunch of details in the image - what
parameters the image was built with and some environment variables.

Last week we were capturing everything, which there is broad consensus
was too much, but it seems to me that that is based on two things:
  - the security ramifications of unanticipated details being baked
into the image
  - many variables being irrelevant most of the time

I think those are both good points. But... the problem with diagnostic
information is you don't know that you need it until you don't have
it.

I'm particularly worried that things like bad http proxies, and third
party elements that need variables we don't know about will be
undiagnosable. Forcing everything through a DIB_FOO variable thunk
seems like just creating work for ourselves - I'd like to avoid that.

Further, some variables we should capture (like http_proxy) have
passwords embedded in them, so even whitelisting what variables to
capture doesn't solve the general problem.

So - what about us capturing this information outside the image: we
can create a uuid for the build, and write a file in the image with
that uuid, and outside the image we can write:
  - all variables (no security ramifications now as this file can be
kept by whomever built the image)
  - command line args
  - version information for the toolchain etc.


+1.  I like this idea a lot.

What about making the uuid file written outside of the image be in
json format so it's easily machine parseable?

Something like:

dib-uuid.json would contain:

{
   environment : {
   DIB_NO_TMPFS: 1,
   ...
},
   dib : {
  command-line : ,
  version: .
   }
}

Could keep adding additional things like list of elements used, build time, etc.


+1 to having a machine parsable version. Is that going to be a standard 
schema for all images or will there be an open-ended section that 
contains key/value pairs that are contingent on the actual type of image 
being built?





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Icehouse-1 milestone candidates available

2013-12-05 Thread Sergey Lukjanov
It was created for Savanna too.

During the icehouse-1 milestone we implemented 7 blueprints and fixed 18
bugs - https://launchpad.net/savanna/+milestone/icehouse-1
So, it’s time to test the milestone-proposed branch -
https://github.com/openstack/savanna/tree/milestone-proposed
Tarballs available here -
http://tarballs.openstack.org/savanna/savanna-milestone-proposed.tar.gz

P.S. python-savannaclient 0.4.0 that supports i1 features is available too.

Thanks.


On Wed, Dec 4, 2013 at 7:08 PM, Thierry Carrez thie...@openstack.orgwrote:

 Hi everyone,

 Milestone-proposed branches were created for Keystone, Glance, Nova,
 Horizon, Neutron, Cinder, Ceilometer, Heat and and Trove in preparation
 for the icehouse-1 milestone publication tomorrow.

 During this milestone (since the opening of the Icehouse development
 cycle) we implemented 69 blueprints and fixed 738 bugs
 (so far).

 Please test proposed deliveries to ensure no critical regression found
 its way in. Milestone-critical fixes will be backported to the
 milestone-proposed branch until final delivery of the milestone, and
 will be tracked using the icehouse-1 milestone targeting.

 You can find candidate tarballs at:
 http://tarballs.openstack.org/keystone/keystone-milestone-proposed.tar.gz
 http://tarballs.openstack.org/glance/glance-milestone-proposed.tar.gz
 http://tarballs.openstack.org/nova/nova-milestone-proposed.tar.gz
 http://tarballs.openstack.org/horizon/horizon-milestone-proposed.tar.gz
 http://tarballs.openstack.org/neutron/neutron-milestone-proposed.tar.gz
 http://tarballs.openstack.org/cinder/cinder-milestone-proposed.tar.gz

 http://tarballs.openstack.org/ceilometer/ceilometer-milestone-proposed.tar.gz
 http://tarballs.openstack.org/heat/heat-milestone-proposed.tar.gz
 http://tarballs.openstack.org/trove/trove-milestone-proposed.tar.gz

 You can also access the milestone-proposed branches directly at:
 https://github.com/openstack/keystone/tree/milestone-proposed
 https://github.com/openstack/glance/tree/milestone-proposed
 https://github.com/openstack/nova/tree/milestone-proposed
 https://github.com/openstack/horizon/tree/milestone-proposed
 https://github.com/openstack/neutron/tree/milestone-proposed
 https://github.com/openstack/cinder/tree/milestone-proposed
 https://github.com/openstack/ceilometer/tree/milestone-proposed
 https://github.com/openstack/heat/tree/milestone-proposed
 https://github.com/openstack/trove/tree/milestone-proposed

 Regards,

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
(sent from random device)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Webex Recording of IPv6 / Neutron Sync-up

2013-12-05 Thread Shixiong Shang
Hi, Jeremy:

Thank you very much for your kind words! I saw you many times before at the 
meetup, but didn’t get a chance to talk to you. Will pick your brain next time. 
:)

For other stackers who are also interested in last night's presentation, here is 
the link to the Webex recording:

https://cisco.webex.com/ciscosales/lsr.php?AT=pbSP=ECrID=73557207rKey=73240b2ec6577fc4

Shixiong




On Dec 4, 2013, at 9:54 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2013-12-04 21:44:53 -0500 (-0500), Shixiong Shang wrote:
 [...]
 I was swamped today to prepare for the presentation that Randy and
 I jointly delivered tonight to local OpenStack community in the
 meet-up sponsored by Cisco.
 [...]
 
 Also, just wanted to congratulate you--the presentation was
 excellent, and I'm quite impressed to see the awesome things you're
 doing with IPv6 and OpenStack (locally, no less!).
 -- 
 Jeremy Stanley
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Rally] Project structure (organization) update

2013-12-05 Thread Boris Pavlovic
Hi Stackers,

One of the major goals of the Rally team is to make the process of joining
the Rally team as simple as possible.  It means not just being able to
get your patches merged. It means being totally involved in the Rally team and
fully understanding what is happening in the project and what our goals are.


We are working hard on refactoring our organization process, to resolve the
major issues that we have faced. Usually it is not clear:
1) what was done in project recently?
2) what is the goal of the project?
3) our high level roadmap?
4) where can you discuss interesting themes
5) how to use Rally
6) how to start contributing to Rally
7) what are currently open tasks?
8) who & what is contributing now

So actually it is impossible to cover all these steps effectively using one
service. As a result of our investigation and work, we have started using a few
tools:

1) Wiki page - with a lot of tutorials about how to install, use and extend
Rally functionality
https://wiki.openstack.org/wiki/Rally/
After the v0.1 release we are going to add video instructions as well.

2) Launchpad - where we are tracking blueprints/bugs and trying to present
the roadmap in blueprints that are integrated with the development process
(patches, reviews & contributors).
https://wiki.openstack.org/wiki/Rally/BenchmarkScenarios

3) Etherpad https://etherpad.openstack.org/p/Rally_Main - which contains the
most important information for developers:
3.1) Roadmaps
3.2) Links to active discussions
3.3) Main project pages
I think this is the right place for any architecture discussions.

4) Trello.com:
This board makes it easy to quickly see who is working on what in the project
at any given moment, and to see which tasks are open and which are already
assigned.

P.S. We will add every contributor to the board who is doing anything related
to Rally that should be tracked (so it will be open de facto).

5) Project news feed:
A page that contains weekly, high-level, human-written updates on the project:
https://wiki.openstack.org/wiki/Rally/Updates

P.S. We probably shouldn't keep using a wiki page for this and should find
something more suitable...



I hope this email helps everybody get more involved in Rally.
By the way, any ideas on how to improve the current project organization
would be really useful. Thanks.



Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request-id in API response

2013-12-05 Thread John Dickinson

On Dec 5, 2013, at 1:36 AM, Maru Newby ma...@redhat.com wrote:

 
 On Dec 3, 2013, at 12:18 AM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
 
 On Sun, Dec 1, 2013 at 7:04 PM, John Dickinson m...@not.mn wrote:
 Just to add to the story, Swift uses X-Trans-Id and generates it in the 
 outer-most catch_errors middleware.
 
 Swift's catch errors middleware is responsible for ensuring that the 
 transaction id exists on each request, and that all errors previously 
 uncaught, anywhere in the pipeline, are caught and logged. If there is not a 
 common way to do this, yet, I submit it as a great template for solving this 
 problem. It's simple, scalable, and well-tested (ie tests and running in 
 prod for years).
 
 https://github.com/openstack/swift/blob/master/swift/common/middleware/catch_errors.py
 
 Leaving aside error handling and only focusing on the transaction id (or 
 request id) generation, since OpenStack services are exposed to untrusted 
 clients, how would you propose communicating the appropriate transaction id 
 to a different service? I can see great benefit to having a glance 
 transaction ID carry through to Swift requests (and so on), but how should 
 the transaction id be communicated? It's not sensitive info, but I can 
 imagine a pretty big problem when trying to track down errors if a client 
 application decides to set eg the X-Set-Transaction-Id header on every 
 request to the same thing.
 
 -1 to cross service request IDs, for the reasons John mentions above.
 
 
 Thanks for bringing this up, and I'd welcome a patch in Swift that would use 
 a common library to generate the transaction id, if it were installed. I can 
 see that there would be huge advantage to operators to trace requests 
 through multiple systems.
 
 Another option would be for each system that calls an another OpenStack 
 system to expect and log the transaction ID for the request that was given. 
 This would be looser coupling and be more forgiving for a heterogeneous 
 cluster. E.g. when Glance makes a call to Swift, Glance could log the 
 transaction id that Swift used (from the Swift response). Likewise, when 
 Swift makes a call to Keystone, Swift could log the Keystone transaction id. 
 This wouldn't result in a single transaction id across all systems, but it 
 would provide markers so an admin could trace the request.
 
 There was a session on this at the summit, and although the notes are a 
 little scarce, this was the conclusion we came up with: every time a cross 
 service call is made, we will log and send a notification for ceilometer to 
 consume, with the request ids of both sides of the call.  One of the benefits of 
 this approach is that we can easily generate a tree of all the API calls 
 that are made (and clearly show when multiple calls are made to the same 
 service), something that just a cross service request id would have trouble 
 with.
 
 Is it wise to trust anything a client provides to ensure traceability?  If a 
 user receives a request id back from Nova, then submits that request id in an 
 unrelated request to Neutron, the traceability would be effectively 
 corrupted.  If the consensus is that we don't want to securely deliver 
 request ids for inter-service calls, how about requiring a service to log its 
 request id along with the request id returned from a call to another service 
 to achieve a similar result?

Yes, this is what I was proposing. I think this is the best path forward.


 The catch is that every call point (or client instantiation?) would have to 
 be modified to pass the request id instead of just logging at one place in 
 each service.  Is that a cost worth paying?

Perhaps this is my ignorance of how other projects work today, but does this 
not already happen? Is it possible to get a response from an API call to an 
OpenStack project that doesn't include a request id?
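
For illustration, here is a minimal sketch of the "log both ids" approach
being discussed; the header and function names are illustrative only (the
exact header name still differs per service today), not code from any
project:

import logging
import uuid

import requests  # any HTTP client would do; requests is used for brevity

LOG = logging.getLogger(__name__)


def call_remote_service(url, local_request_id=None):
    """Call another OpenStack service and log both request ids."""
    local_request_id = local_request_id or 'req-' + str(uuid.uuid4())
    resp = requests.get(url)
    # The remote id comes back in a response header: x-openstack-request-id,
    # x-compute-request-id, or Swift's x-trans-id, depending on the service.
    remote_request_id = (resp.headers.get('x-openstack-request-id')
                         or resp.headers.get('x-trans-id'))
    LOG.info('local request %s -> remote request %s (%s)',
             local_request_id, remote_request_id, url)
    return resp

An admin can then grep the logs for either id and reconstruct the chain of
calls, without any service having to trust an id supplied by a client.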

 
 
 m.
 
 
 
 https://etherpad.openstack.org/p/icehouse-summit-qa-gate-debugability 
 
 
 With that in mind I think having a standard x-openstack-request-id makes 
 things a little more uniform, and means that adding new services doesn't 
 require new logic to handle new request ids.
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Clint Byrum
Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?
 

I've asked that question a few times, and I think I can collate the
responses I've received below. I think enhancing glance to do these
things is on the table:

1. Glance is for big blobs of data not tiny templates.
2. Versioning of a single resource is desired.
3. Tagging/classifying/listing/sorting
4. Glance is designed to expose the uploaded blobs to nova, not users

My responses:

1: Irrelevant. Smaller things will fit in it just fine.

2: The swift API supports versions. We could also have git as a
backend. This feels like something we can add as an optional feature
without exploding Glance's scope and I imagine it would actually be a
welcome feature for image authors as well. Think about Ubuntu maintaining
official images. If they can keep the ID the same and just add a version
(allowing users to lock down to a version if updated images cause issue)
that seems like a really cool feature for images _and_ templates.

3: I'm sure glance image users would love to have those too.

4: Irrelevant. Heat will need to download templates just like nova, and
making images publicly downloadable is also a thing in glance.

It strikes me that this might be a silo problem instead of an
actual design problem. Folk should not be worried about jumping into
Glance and adding features. Unless of course the Glance folk have
reservations? (adding glance tag to the subject)
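
For a concrete sense of point 1, here is a rough sketch of pushing a small
template body through glance; the endpoint, token and file name are
placeholders, and the v2 python-glanceclient calls (images.create / upload /
data) are an assumption about its current interface rather than a quote from
its docs:

import io

from glanceclient import Client

glance = Client('2', 'http://glance.example.com:9292',
                token='a-keystone-token')

template_body = open('WordPress_Native.yaml', 'rb').read()

# A few kB of YAML is just another (very small) blob as far as Glance cares.
image = glance.images.create(name='wordpress-template',
                             disk_format='raw',
                             container_format='bare')
glance.images.upload(image.id, io.BytesIO(template_body))

# Heat (or a user) could later fetch it back the same way Nova fetches images.
fetched = b''.join(glance.images.data(image.id))
assert fetched == template_body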

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Heat API v2 - Removal of template_url?

2013-12-05 Thread ELISHA, Moshe (Moshe)
Hey,

I really liked the v2 Heat API (as proposed in the Create a new v2 Heat API
blueprint: https://blueprints.launchpad.net/heat/+spec/v2api) and I think it makes a 
lot of sense.

One of the proposed changes is to Remove template_url from the request POST, 
so the template will be passed using the template parameter in the request 
body.

Could someone please elaborate how exactly Heat Orchestration Templates written 
in YAML will be embedded in the body?

As I understand it, the YAML template would have to be inserted as a string,
otherwise JSON parsers will not be able to parse the JSON body.
If the template is indeed inserted as a string then, as far as I know, JSON does
not support multiline strings, and the available workarounds are not so pretty
and require escaping.
The escaping issue gets more complicated when UserData is used in the YAML.

Will template_url be removed, and if so, how will the template parameter
contain the YAML template?

Thanks.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-05 Thread Tiwari, Arvind
Hi David,

Let me capture these details in the etherpad. I will drop an email after adding 
them there.

Thanks,
Arvind

-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk] 
Sent: Thursday, December 05, 2013 4:15 AM
To: Tiwari, Arvind; Adam Young; OpenStack Development Mailing List (not for 
usage questions)
Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

Hi Arvind

we are making good progress, but what I don't like about your proposal
below is that the role name is not unique. There can be multiple roles
with the same name, but different IDs, and different scopes. I don't like
this, and I think it would be confusing to users/administrators. I think
the role names should be different as well. This is not difficult to
engineer if the names are hierarchically structured based on the name of
the role creator. The creator might be the owner of the resource that is
being scoped, but it need not necessarily be. Assuming it was, then in
your examples below we might have role names of NovaEast.admin and
NovaWest.admin. Since these are strings, policies can be easily adapted
to match on NovaWest.admin instead of admin.
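
As a toy illustration of that point (plain Python, not Keystone or
oslo.policy code), matching a hierarchically named role is just a string
comparison on the qualified name:

def has_role(token_roles, required_role):
    """Exact match on a fully qualified role name, e.g. 'NovaWest.admin'."""
    return required_role in token_roles


def has_role_for_service(token_roles, service_prefix, role_suffix):
    """Match a role scoped to a given creator/service, e.g. 'NovaEast.admin'."""
    return any(r == '%s.%s' % (service_prefix, role_suffix)
               for r in token_roles)


roles = ['NovaEast.admin', 'Member']
print(has_role(roles, 'NovaWest.admin'))                 # False
print(has_role_for_service(roles, 'NovaEast', 'admin'))  # True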

regards

david

On 04/12/2013 17:21, Tiwari, Arvind wrote:
 Hi Adam,
 
 I have added my comments in line. 
 
 As per my request yesterday and David's proposal, the following role-def data 
 model looks generic enough and seems flexible enough to accommodate future 
 extensions.
 
 {
   role: {
 id: 76e72a,
 name: admin, (you can give whatever name you like)
 scope: {
   id: ---id--, (the ID should be 1-to-1 mapped with the resource in type and 
 must be an immutable value)
   type: service | file | domain etc., (the type can be any type of 
 resource and explains the scoping context)
   interface: --interface--  (We still need to work on this field. 
 My idea for this optional field is to indicate the interface of the resource 
 (endpoint for a service, path for a file, ...) for which the role-def is 
created; it can be empty.)
 }
   }
 }
 
 Based on the above data model, two admin roles for Nova for two separate 
 regions would be as below:
 
 {
   role: {
 id: 76e71a,
 name: admin,
 scope: {
   id: 110, (suppose 110 is Nova serviceId)
   interface: 1101, (suppose 1101 is Nova region East endpointId)
   type: service
 }
   }
 }
 
 {
   role: {
 id: 76e72a,
 name: admin,
 scope: {
   id: 110, 
   interface: 1102,(suppose 1102 is Nova region West endpointId)
   type: service
 }
   }
 }
 
 This way we can keep role assignments abstracted from the resource on which the 
 assignment is created. This also opens the door to service- and/or endpoint-
 scoped tokens, as I mentioned in https://etherpad.openstack.org/p/1Uiwcbfpxq.
 
 David, I have updated 
 https://etherpad.openstack.org/p/service-scoped-role-definition line #118 
 explaining the rationale behind the field.
 I would also appreciate your view on 
 https://etherpad.openstack.org/p/1Uiwcbfpxq too, which supports the 
 https://blueprints.launchpad.net/keystone/+spec/service-scoped-tokens BP.
 
 
 Thanks,
 Arvind
 
 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com] 
 Sent: Tuesday, December 03, 2013 6:52 PM
 To: Tiwari, Arvind; OpenStack Development Mailing List (not for usage 
 questions)
 Cc: Henry Nash; dolph.math...@gmail.com; David Chadwick
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition
 
 I've been thinking about your comment that nested roles are confusing
 AT: Thanks for considering my comment about nested role-def.
 
 What if we backed off and said the following:
 
 
 Some role-definitions are owned by services.  If a role definition is 
 owned by a service, then in role assignment lists in tokens those roles will 
 be prefixed by the service name.  '/' is a reserved character and will be 
 used as the divider between segments of the role definition. 
 
 That drops arbitrary nesting, and provides a reasonable namespace.  Then 
 a role def would look like:
 
 glance/admin  for the admin role on the glance project.
 
 AT: It seems this approach is not going to help; a service rename would impact 
 all the role-defs for a particular service, and we are back to the same 
 problem.
 
 In theory, we could add the domain to the namespace, but that seems 
 unwieldy.  If we did, a role def would then look like this
 
 
 default/glance/admin  for the admin role on the glance project.
 
 Is that clearer than the nested roles?
 AT: It is definitely clearer, but it will create the same problems as what we 
 are trying to fix. 
 
 
 
 On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:
 Hi Adam,

 Based on our discussion over IRC, I have updated the below etherpad with 
 proposal for nested role definition

 https://etherpad.openstack.org/p/service-scoped-role-definition

 Please take a look @ Proposal (Ayoung) - Nested role 

Re: [openstack-dev] [heat] Heat API v2 - Removal of template_url?

2013-12-05 Thread Stephen Gran

On 05/12/13 16:11, ELISHA, Moshe (Moshe) wrote:

Hey,

I really liked the v2 Heat API (as proposed in Create a new v2 Heat API
https://blueprints.launchpad.net/heat/+spec/v2api) and I think it
makes a lot of sense.

One of the proposed changes is to “Remove template_url from the request
POST”, so the template will be passed using the “template” parameter in
the request body.

Could someone please elaborate how exactly Heat Orchestration Templates
written in YAML will be embedded in the body?

As I understand the YAML template should be inserted as string otherwise
JSON parsers will not be able to parse the JSON body.

If indeed the template is inserted as string, as far as I know, JSON
does not support multiline strings and the available workarounds are not
so pretty and require escaping.

The escaping issue gets more complicated when UserData is used in the
YAML.

Will the “template_url” be removed and if so how will the “template”
contain the YAML template?


Oh, that would be sad indeed.  We're just looking at this pattern in AWS:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-stack.html

And using that in heat as well would be very welcome.

Cheers,
--
Stephen Gran
Senior Systems Integrator - theguardian.com
Please consider the environment before printing this email.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Unicode strings in Python3

2013-12-05 Thread Christopher Armstrong
On Thu, Dec 5, 2013 at 3:26 AM, Julien Danjou jul...@danjou.info wrote:

 On Wed, Dec 04 2013, Georgy Okrokvertskhov wrote:

  Quick summary: you can't use the unicode() function and u' ' strings in
 Python 3.

 Not that it's advised, but you can use u' ' back again with Python 3.3.


And this is a very useful feature for projects that want to have a single
codebase that runs on both python 2 and python 3, so it's worth taking
advantage of.

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Unicode strings in Python3

2013-12-05 Thread Georgy Okrokvertskhov
On Thu, Dec 5, 2013 at 8:32 AM, Christopher Armstrong 
chris.armstr...@rackspace.com wrote:

 On Thu, Dec 5, 2013 at 3:26 AM, Julien Danjou jul...@danjou.info wrote:

 On Wed, Dec 04 2013, Georgy Okrokvertskhov wrote:

   Quick summary: you can't use the unicode() function and u' ' strings in
  Python 3.

 Not that it's advised, but you can use u' ' back again with Python 3.3.


 And this is a very useful feature for projects that want to have a single
 codebase that runs on both python 2 and python 3, so it's worth taking
 advantage of.


You are right. PEP 414 reintroduces u'' literals in Python 3.3. The unicode()
function still does not exist in Python 3 and should be avoided in the code, though.
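
For reference, a small sketch of the usual pattern for a single 2/3 codebase:
six.text_type in place of the removed unicode() builtin, plus u'' literals
(valid again since PEP 414 / Python 3.3):

import six

greeting = u'hello'           # works on 2.6+/2.7 and on 3.3+
as_text = six.text_type(42)   # u'42' on py2, '42' on py3

assert isinstance(greeting, six.text_type)
assert as_text == u'42'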

-- 
 IRC: radix
 Christopher Armstrong
 Rackspace





-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2013-12-05 Thread Henry Gessau
Dinner plans

Anita tricked me into volunteering to be responsible for dinner arrangements
during the sprint. :)

My suggestion is to get away from our keyboards in the evenings and eat at a
restaurant together. I look forward to socializing with fellow Openstack
developers in a less code-centric environment.

Every day (Jan 15, 16, 17) I will oversee voting for a restaurant and make
reservations. Attendance is completely voluntary. Each attendee will be
responsible for their own bill, but contact me privately if you have budget
constraints.

-- 
Henry

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Heat API v2 - Removal of template_url?

2013-12-05 Thread Steven Hardy
On Thu, Dec 05, 2013 at 04:11:37PM +, ELISHA, Moshe (Moshe) wrote:
 Hey,
 
 I really liked the v2 Heat API (as proposed in the Create a new v2 Heat API 
 blueprint: https://blueprints.launchpad.net/heat/+spec/v2api) and I think it makes 
 a lot of sense.
 
 One of the proposed changes is to Remove template_url from the request 
 POST, so the template will be passed using the template parameter in the 
 request body.
 
 Could someone please elaborate how exactly Heat Orchestration Templates 
 written in YAML will be embedded in the body?

In exactly the same way they are now, try creating a stack using a HOT yaml
template, with --debug enabled via python heatclient and you'll see what I
mean:

wget 
https://raw.github.com/openstack/heat-templates/master/hot/F18/WordPress_Native.yaml

heat --debug stack-create -P key_name=userkey -f ./WordPress_Native.yaml wp1

This works fine now, so there's nothing to do to support this.

 As I understand the YAML template should be inserted as string otherwise JSON 
 parsers will not be able to parse the JSON body.
 If indeed the template is inserted as string, as far as I know, JSON does not 
 support multiline strings and the available workarounds are not so pretty and 
 require escaping.
 The escaping issue gets more complicated when UserData is used in the YAML.

It's just a string, we do the parsing inside the heat-engine, with either
json or yaml parser, depending on the content of the string.
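
As a quick illustration of that point, the YAML body is just a string value in
the JSON request and json.dumps() handles the newline escaping, so "multiline
strings" are not actually a problem. The body keys below mirror the current v1
stack-create request and are used here only as an example:

import json

template = """\
heat_template_version: 2013-05-23
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      flavor: m1.small
"""

body = json.dumps({'stack_name': 'wp1',
                   'template': template,
                   'parameters': {'key_name': 'userkey'}})

# The engine side simply reverses the process and parses the string.
parsed = json.loads(body)
assert parsed['template'] == template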

 Will the template_url be removed and if so how will the template contain 
 the YAML template?

Well, this is a good opportunity to discuss it, since removing it was only
one persons idea (mine ;) and we haven't discussed it much in the team.

My argument for removing it, is:

- We already resolve URLs for environment files in python-heatclient, and
  pass them in the files parameter, so let's do the same for the template.

- In real-world deployments, assuming the heat-api has access to
  whatever URL you pass may not be reasonable (or secure; ideally we don't
  want heat hitting random user-provided URLs)

- We are tying up service resources doing a chunked download of a template,
  when this overhead can be in the client, then we just have to check the
  string length provided in the request in the API. When you consider many
  concurrent requests to heat containing template_url, this will probably
  result in a significant performance advantage.

The python CLI/API could be unmodified, if someone passes template_url, we just
download it in the client and pass the result in the POST to the heat API.

So essentially it shouldn't impact users at all, it's just moving the
processing of template_url from heat-api to python-heatclient.
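
A rough sketch of what that client-side resolution could look like (function
and parameter names here are illustrative, not actual python-heatclient code):

import json

import requests


def create_stack(heat_endpoint, token, stack_name, template=None,
                 template_url=None, parameters=None):
    if template is None and template_url:
        # The client, not heat-api, downloads the template.
        template = requests.get(template_url).text
    body = {'stack_name': stack_name,
            'template': template,
            'parameters': parameters or {}}
    return requests.post(heat_endpoint + '/stacks',
                         headers={'X-Auth-Token': token,
                                  'Content-Type': 'application/json'},
                         data=json.dumps(body))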

If anyone has some counter-arguments, let's discuss! :)

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Randall Burt
On Dec 5, 2013, at 10:10 AM, Clint Byrum cl...@fewbar.com
 wrote:

 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?
 
 
 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:
 
 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users
 
 My responses:
 
 1: Irrelevant. Smaller things will fit in it just fine.

Fitting is one thing, optimizations around particular assumptions about the 
size of data and the frequency of reads/writes might be an issue, but I admit 
to ignorance about those details in Glance.

 2: The swift API supports versions. We could also have git as a
 backend. This feels like something we can add as an optional feature
 without exploding Glance's scope and I imagine it would actually be a
 welcome feature for image authors as well. Think about Ubuntu maintaining
 official images. If they can keep the ID the same and just add a version
 (allowing users to lock down to a version if updated images cause issue)
 that seems like a really cool feature for images _and_ templates.

Agreed, though one could argue that using image names and looking up IDs or 
just using IDs as appropriate sort of handles this use case, but I agree that 
having image versioning seems a reasonable feature for Glance to have as well.

 3: I'm sure glance image users would love to have those too.

And image metadata is already there so we don't have to go through those 
discussions all over again ;).

 4: Irrelevant. Heat will need to download templates just like nova, and
 making images publicly downloadable is also a thing in glance.

Yeah, this was the kicker for me. I'd been thinking of adding the 
tenancy/public/private templates use case to the HeatR spec and realized that 
this was a good argument for Glance since it already has this feature.

 It strikes me that this might be a silo problem instead of an
 actual design problem. Folk should not be worried about jumping into
 Glance and adding features. Unless of course the Glance folk have
 reservations? (adding glance tag to the subject)

Perhaps, and if these use cases make sense for the Glance users in general, I 
wouldn't want to re-invent all those wheels either. I admit there's some appeal 
to being able to pass a template ID to stack-create or as the type of a 
provider resource and have an actual API to call that's already got a known, 
tested client that's already part of the OpenStack ecosystem.

In the end, though, even if some and not all of our use cases make sense for 
the Glance folks, we still have the option of creating the HeatR service and 
having Glance as a possible back-end data store.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Barbican] IRC Meeting Today at 2 Central (2000 UTC)

2013-12-05 Thread Jarret Raim
Barbican is having our official IRC meeting in #openstack-meeting-alt today
at 2 central or 2000 UTC.

The main topic of conversation will be the tasks / comments that came up
from our incubation request. We welcome anyone with additional questions or
comments to come join us.



Thanks,

Jarret Raim   |Security Intrapreneur
-
5000 Walzem RoadOffice: 210.312.3121
San Antonio, TX 78218   Cellular: 210.437.1217
-
rackspace hosting   |   experience fanatical support




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting Dec 5

2013-12-05 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in #openstack-meeting-alt
channel.

Agenda:
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_December.2C_5

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meetingiso=20131205T18

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] creating a default for oslo config variables within a project?

2013-12-05 Thread Clint Byrum
Excerpts from Julien Danjou's message of 2013-12-05 01:22:00 -0800:
 On Wed, Dec 04 2013, Sean Dague wrote:
 
  Honestly, I'd love us to be clever and figure out a not dangerous way
  through this, even if unwise (where we can yell at the user in the LOGs
  loudly, and fail them in J if lock_dir=/tmp) that lets us progress
  through this while gracefully bringing configs into line.
 
 Correct me if I'm wrong, but I think the correct way to deal with that
 security problem is to use an atomic operation using open(2) with:
   open(pathname, O_CREAT | O_EXCL)
 

DOS by a malicious user creating it first is still trivial.

 or mkstemp(3).
 

Can't use mkstemp as the point is this needs to be something shared
between processes.
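
For illustration, a minimal sketch of the O_CREAT | O_EXCL idea from the
quoted message, including why the DoS concern applies when the directory
(e.g. /tmp) is world-writable; the function name is made up:

import errno
import os


def create_lock_file(path):
    try:
        # Atomic: succeeds only if the file does not already exist.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
    except OSError as e:
        if e.errno == errno.EEXIST:
            # Somebody else (possibly a malicious user) created it first --
            # the service is denied its lock file.
            raise RuntimeError('%s already exists; refusing to reuse it' % path)
        raise
    return fd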

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Performance Regression in Neutron/Havana compared to Quantum/Grizzly

2013-12-05 Thread Peter Feiner
On Thu, Dec 5, 2013 at 8:23 AM, Nathani, Sreedhar (APS)
sreedhar.nath...@hp.com wrote:
 Hello Marun,



 Please find the details about my setup and tests which i have done so far



 Setup

   - One Physical Box with 16c, 256G memory. 2 VMs created on this Box - One
 for Controller and One for Network Node

   - 16x compute nodes (each has 16c, 256G memory)

   - All the systems are installed with Ubuntu Precise + Havana Bits from
 Ubuntu Cloud Archive



 Steps to simulate the issue

   1) Concurrently create 30 Instances (m1.small) using REST API with
 mincount=30

   2) sleep for 20min and repeat the step (1)





 Issue 1

 In Havana, once we cross 150 instances (5 batches x 30), during the 6th batch
 some instances go into ERROR state

 because the network port could not be created, and some instances get
 duplicate IP addresses



 Per Maru Newby this issue might related to this bug

 https://bugs.launchpad.net/bugs/1192381



 I did the same test with Grizzly on the same environment 2 months back,
 where I was able to deploy close to 240 instances without any errors.

 Initially Grizzly also showed the same behavior, but with these tunings
 based on this bug

 https://bugs.launchpad.net/neutron/+bug/1160442, I never had issues (tested
 more than 10 times):

sqlalchemy_pool_size = 60

sqlalchemy_max_overflow = 120

sqlalchemy_pool_timeout = 2

agent_down_time = 60

report_interval = 20



 In Havana, I have tuned the same tunables but I could never get past 150+
 instances. Without the tunables I was not able to get past

 100 instances. We get many timeout errors from the DHCP agent and
 neutron clients



 NOTE: After tuning the agent_down_time to 60 and report_interval to 20, we
 no longer get these error messages

2013-12-02 11:44:43.421 28201 WARNING
 neutron.scheduler.dhcp_agent_scheduler [-] No more DHCP agents

2013-12-02 11:44:43.439 28201 WARNING
 neutron.scheduler.dhcp_agent_scheduler [-] No more DHCP agents

2013-12-02 11:44:43.452 28201 WARNING
 neutron.scheduler.dhcp_agent_scheduler [-] No more DHCP agents





 In the compute node openvswitch agent logs, we see these errors repeating
 continuously



 2013-12-04 06:46:02.081 3546 TRACE
 neutron.plugins.openvswitch.agent.ovs_neutron_agent Timeout: Timeout while
 waiting on RPC response - topic: q-plugin, RPC method:
 security_group_rules_for_devices info: unknown

 and WARNING neutron.openstack.common.rpc.amqp [-] No calling threads waiting
 for msg_id



 DHCP agent has below errors



 2013-12-02 15:35:19.557 22125 ERROR neutron.agent.dhcp_agent [-] Unable to
 reload_allocations dhcp.

 2013-12-02 15:35:19.557 22125 TRACE neutron.agent.dhcp_agent Timeout:
 Timeout while waiting on RPC response - topic: q-plugin, RPC method:
 get_dhcp_port info: unknown



 2013-12-02 15:35:34.266 22125 ERROR neutron.agent.dhcp_agent [-] Unable to
 sync network state.

 2013-12-02 15:35:34.266 22125 TRACE neutron.agent.dhcp_agent Timeout:
 Timeout while waiting on RPC response - topic: q-plugin, RPC method:
 get_active_networks_info info: unknown





 In Havana, I have merged the code from this patch and set api_workers to 8
 (My Controller VM has 8cores/16Hyperthreads)

 https://review.openstack.org/#/c/37131/



 After this patch and starting 8 neutron-server worker threads, during the
 batch creation of 240 instances with 30 concurrent requests in each
 batch,

 238 instances became active and 2 instances went into error. Interestingly,
 the 2 instances which went into error state are from the same compute
 node.



 Unlike earlier, this time the errors are due to 'Too many connections' to
 the MySQL database.

 2013-12-04 17:07:59.877 21286 AUDIT nova.compute.manager
 [req-26d64693-d1ef-40f3-8350-659e34d5b1d7 c4d609870d4447c684858216da2f8041
 9b073211dd5c4988993341cc955e200b] [instance:
 c14596fd-13d5-482b-85af-e87077d4ed9b] Terminating instance

 2013-12-04 17:08:00.578 21286 ERROR nova.compute.manager
 [req-26d64693-d1ef-40f3-8350-659e34d5b1d7 c4d609870d4447c684858216da2f8041
 9b073211dd5c4988993341cc955e200b] [instance:
 c14596fd-13d5-482b-85af-e87077d4ed9b] Error: Remote error: OperationalError
 (OperationalError) (1040, 'Too many connections') None None



 The patch 'https://review.openstack.org/#/c/37131/' needs to be backported to
 address the Neutron scaling issues in Havana.

 Carl is already backporting this patch into Havana at
 https://review.openstack.org/#/c/60082/, which is good.



 Issue 2

 Grizzly :

 During concurrent instance creation in Grizzly, once we cross 210
 instances, during the subsequent batch of 30 some of

 the instances could not get their IP address within the first few minutes
 of first boot. The instance MAC and IP address details

 were updated in the dnsmasq hosts file, but with a delay, so instances
 eventually got their IP address.



 If we rebooted the instance using 'nova reboot', the instance would get its
 IP address.

 * 

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Randall Burt
On Dec 5, 2013, at 11:10 AM, Clint Byrum cl...@fewbar.com
 wrote:

 Excerpts from James Slagle's message of 2013-12-05 08:35:12 -0800:
 On Thu, Dec 5, 2013 at 11:10 AM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?
 
 
 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:
 
 I'm actually interested in the use cases laid out by Heater from both
 a template perspective and image perspective.  For the templates, as
 Robert mentioned, Tuskar needs a solution for this requirement, since
 it's deploying using templates.  For the images, we have the concept
 of a golden image in TripleO and are heavily focused on image based
 deployments.  Therefore, it seems to make sense that TripleO also
 needs a way to version/tag known good images.
 
 Given that, I think it makes sense  to do this in a way so that it's
 consumable for things other than just templates.  In fact, you can
 almost s/template/image/g on the Heater wiki page, and it pretty well
 lays out what I'd like to see for images as well.
 
 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users
 
 My responses:
 
 1: Irrelevant. Smaller things will fit in it just fine.
 
 2: The swift API supports versions. We could also have git as a
 backend.
 
 I would definitely like to see a git backend for versioning.  No
 reason to reimplement a different solution for what already works
 well.  I'm not sure we'd want to put a whole image into git though.
 Perhaps just its manifest (installed components, software versions,
 etc) in json format would go into git, and that would be associated
 back to the binary image via uuid.  That would even make it easy to
 diff changes between versions, etc.
 
 
 Right, git for a big 'ol image makes little sense.
 
 I'm suggesting that one might want to have two glances, one for images
 which just uses swift versions and would just expose a list of versions,
 and one for templates which would use git and thus expose more features
 like a git remote for the repo. I'm not sure if glance has embraced the
 extension paradigm yet, but this would fall nicely into it.

Alternatively, Glance could have configurable backends for each image type 
allowing for optimization without the (often times messy) extension mechanism? 
This is assuming it doesn't do this already - I really need to start digging 
here. In the spirit of general OpenStack architectural paradigms, as long as 
the service exposes a consistent interface for templates that includes 
versioning support, the back-end store and (possibly) the versioning engine 
should certainly be configurable. Swift probably makes a decent first/default 
implementation.

 This feels like something we can add as an optional feature
 without exploding Glance's scope and I imagine it would actually be a
 welcome feature for image authors as well. Think about Ubuntu maintaining
 official images. If they can keep the ID the same and just add a version
 (allowing users to lock down to a version if updated images cause issue)
 that seems like a really cool feature for images _and_ templates.
 
 3: I'm sure glance image users would love to have those too.
 
 4: Irrelevant. Heat will need to download templates just like nova, and
 making images publicly downloadable is also a thing in glance.
 
 It strikes me that this might be a silo problem instead of an
 actual design problem. Folk should not be worried about jumping into
 Glance and adding features. Unless of course the Glance folk have
 reservations? (adding glance tag to the subject)
 
 I'm +1 for adding these types of features to glance, or at least
 something common, instead of making it specific to Heat templates.
 
 
 Right, it may be that glance is too limited, but from what I've seen,
 it is not and it already has the base concepts that HeaTeR wants to
 have available.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Tim Schnell

On 12/5/13 11:33 AM, Randall Burt randall.b...@rackspace.com wrote:

On Dec 5, 2013, at 11:10 AM, Clint Byrum cl...@fewbar.com
 wrote:

 Excerpts from James Slagle's message of 2013-12-05 08:35:12 -0800:
 On Thu, Dec 5, 2013 at 11:10 AM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?
 
 
 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:
 
 I'm actually interested in the use cases laid out by Heater from both
 a template perspective and image perspective.  For the templates, as
 Robert mentioned, Tuskar needs a solution for this requirement, since
 it's deploying using templates.  For the images, we have the concept
 of a golden image in TripleO and are heavily focused on image based
 deployments.  Therefore, it seems to make sense that TripleO also
 needs a way to version/tag known good images.
 
 Given that, I think it makes sense  to do this in a way so that it's
 consumable for things other than just templates.  In fact, you can
 almost s/template/image/g on the Heater wiki page, and it pretty well
 lays out what I'd like to see for images as well.
 
 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users
 
 My responses:
 
 1: Irrelevant. Smaller things will fit in it just fine.
 
 2: The swift API supports versions. We could also have git as a
 backend.
 
 I would definitely like to see a git backend for versioning.  No
 reason to reimplement a different solution for what already works
 well.  I'm not sure we'd want to put a whole image into git though.
 Perhaps just it's manifest (installed components, software versions,
 etc) in json format would go into git, and that would be associated
 back to the binary image via uuid.  That would even make it easy to
 diff changes between versions, etc.
 
 
 Right, git for a big 'ol image makes little sense.
 
 I'm suggesting that one might want to have two glances, one for images
 which just uses swift versions and would just expose a list of versions,
 and one for templates which would use git and thus expose more features
 like a git remote for the repo. I'm not sure if glance has embraced the
 extension paradigm yet, but this would fall nicely into it.

Alternatively, Glance could have configurable backends for each image
type allowing for optimization without the (often times messy) extension
mechanism? This is assuming it doesn't do this already - I really need to
start digging here. In the spirit of general OpenStack architectural
paradigms, as long as the service exposes a consistent interface for
templates that includes versioning support, the back-end store and
(possibly) the versioning engine should certainly be configurable.
Swift probably makes a decent first/default implementation.

I'm not sure why we are attempting to fit a round peg in a square hole
here. Glance's entire design is built around serving images and there
absolutely is a difference between serving BLOBs versus relatively small
templates. Just look at the existing implementations: Heat already stores
the template directly in the database, while Glance stores the blob in an
external file system.

I'm not opposed to having Glance as an optional configurable backend for
Heater but if you look at the existing database models for Glance, an
entirely separate schema would have to be created to support templates. I
also think that having Heater as a layer above Glance or whatever other
backend would allow for the flexibility for Heater's requirements to
diverge in the future from requirements that might make sense for images
but not templates.

-Tim


 This feels like something we can add as an optional feature
 without exploding Glance's scope and I imagine it would actually be a
 welcome feature for image authors as well. Think about Ubuntu
maintaining
 official images. If they can keep the ID the same and just add a
version
 (allowing users to lock down to a version if updated images cause
issue)
 that seems like a really cool feature for images _and_ templates.
 
 3: I'm sure glance image users would love to have those too.
 
 4: Irrelevant. Heat will need to download templates just like nova,
and
 making images publicly downloadable is also a thing in glance.
 
 It strikes me that this might be a silo problem instead of an
 actual design problem. Folk should not be worried about jumping into
 Glance and adding features. Unless of course the Glance folk have
 reservations? (adding glance tag to the subject)
 
 I'm +1 for adding these types of features to glance, or at least
 something common, instead of making it specific to Heat templates.
 
 
 Right, it may be that glance is too limited, but from what I've seen,
 it is not and it 

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Fox, Kevin M
My 2 cents. Glance currently deals with single file images.

A user is going to want the heat repo to operate at the stack level, i.e. I want 
to launch stack foo.

For all but the most trivial cases, a stack is made up of more than one 
template. These templates should be versioned as a set (stack), not separately.

So, glance would probably need to change to support images with multiple 
files. This could be done, but may be a bigger architectural change than 
originally considered.

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Thursday, December 05, 2013 8:10 AM
To: openstack-dev
Subject: Re: [openstack-dev] [heat] [glance] Heater Proposal

Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?


I've asked that question a few times, and I think I can collate the
responses I've received below. I think enhancing glance to do these
things is on the table:

1. Glance is for big blobs of data not tiny templates.
2. Versioning of a single resource is desired.
3. Tagging/classifying/listing/sorting
4. Glance is designed to expose the uploaded blobs to nova, not users

My responses:

1: Irrelevant. Smaller things will fit in it just fine.

2: The swift API supports versions. We could also have git as a
backend. This feels like something we can add as an optional feature
without exploding Glance's scope and I imagine it would actually be a
welcome feature for image authors as well. Think about Ubuntu maintaining
official images. If they can keep the ID the same and just add a version
(allowing users to lock down to a version if updated images cause issue)
that seems like a really cool feature for images _and_ templates.

3: I'm sure glance image users would love to have those too.

4: Irrelevant. Heat will need to download templates just like nova, and
making images publicly downloadable is also a thing in glance.

It strikes me that this might be a silo problem instead of an
actual design problem. Folk should not be worried about jumping into
Glance and adding features. Unless of course the Glance folk have
reservations? (adding glance tag to the subject)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Clint Byrum
Excerpts from Tim Schnell's message of 2013-12-05 09:49:03 -0800:
 
 On 12/5/13 11:33 AM, Randall Burt randall.b...@rackspace.com wrote:
 
 On Dec 5, 2013, at 11:10 AM, Clint Byrum cl...@fewbar.com
  wrote:
 
  Excerpts from James Slagle's message of 2013-12-05 08:35:12 -0800:
  On Thu, Dec 5, 2013 at 11:10 AM, Clint Byrum cl...@fewbar.com wrote:
  Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
  Why not just use glance?
  
  
  I've asked that question a few times, and I think I can collate the
  responses I've received below. I think enhancing glance to do these
  things is on the table:
  
  I'm actually interested in the use cases laid out by Heater from both
  a template perspective and image perspective.  For the templates, as
  Robert mentioned, Tuskar needs a solution for this requirement, since
  it's deploying using templates.  For the images, we have the concept
  of a golden image in TripleO and are heavily focused on image based
  deployments.  Therefore, it seems to make sense that TripleO also
  needs a way to version/tag known good images.
  
  Given that, I think it makes sense  to do this in a way so that it's
  consumable for things other than just templates.  In fact, you can
  almost s/template/image/g on the Heater wiki page, and it pretty well
  lays out what I'd like to see for images as well.
  
  1. Glance is for big blobs of data not tiny templates.
  2. Versioning of a single resource is desired.
  3. Tagging/classifying/listing/sorting
  4. Glance is designed to expose the uploaded blobs to nova, not users
  
  My responses:
  
  1: Irrelevant. Smaller things will fit in it just fine.
  
  2: The swift API supports versions. We could also have git as a
  backend.
  
  I would definitely like to see a git backend for versioning.  No
  reason to reimplement a different solution for what already works
  well.  I'm not sure we'd want to put a whole image into git though.
  Perhaps just it's manifest (installed components, software versions,
  etc) in json format would go into git, and that would be associated
  back to the binary image via uuid.  That would even make it easy to
  diff changes between versions, etc.
  
  
  Right, git for a big 'ol image makes little sense.
  
  I'm suggesting that one might want to have two glances, one for images
  which just uses swift versions and would just expose a list of versions,
  and one for templates which would use git and thus expose more features
  like a git remote for the repo. I'm not sure if glance has embraced the
  extension paradigm yet, but this would fall nicely into it.
 
 Alternatively, Glance could have configurable backends for each image
 type allowing for optimization without the (often times messy) extension
 mechanism? This is assuming it doesn't do this already - I really need to
 start digging here. In the spirit of general OpenStack architectural
 paradigms, as long as the service exposes a consistent interface for
 templates that includes versioning support, the back-end store and
 (possibly) the versioning engine should certainly be configurable.
 Swift probably makes a decent first/default implementation.
 
 I'm not sure why we are attempting to fit a round peg in a square hole
 here. Glance's entire design is built around serving images and there
 absolutely is a difference between serving BLOBs versus relatively small
 templates. Just look at the existing implementations, Heat already stores
 the template directly in the database while Glance stores the blob in an
 external file system.
 

What aspects of circles and squares do you find match glance and the
Heater problem space? :-P

If this were as simple as geometric shape matching, my 4-year-old could
do it, right? :) Let's use analogies, I _love_ analogies, but perhaps
let's be more careful about whether they actually aid the discussion.

The blobs in the heat database are going to be a problem actually. I've
dealt with a very similar relatively large system, storing millions
of resumes in a single table for a job candidate database. It performs
horribly even if you shard. These weren't 50MB resumes, these were 30 -
50 kB resumes. That is because every time you need to pull all the rows
you have to pull giant amounts of data and generally just blow out all
of the caches, buffers, etc. Keystone's token table for the sql token
backend has the same problem. RDBMS's are good at storing and retrieving
rows of relatively predictably sized data. They suck for documents.

Heat would do well to just store template references and fetch them in
the rare cases where the raw templates are needed (updates, user asks to
see it, etc). That could so easily be a glance reference.  And if Glance
was suddenly much smarter and more capable of listing/searching/etc. the
things in its database, then Heat gets a win, and so does Nova.
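
As a toy sketch of that idea (the model and column names below are made up,
not Heat's actual schema), the stack row would carry only a small, fixed-size
reference to the template instead of the document itself:

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Stack(Base):
    __tablename__ = 'stack'

    id = Column(Integer, primary_key=True)
    name = Column(String(255), nullable=False)
    # A pointer (e.g. a glance image/template UUID) rather than the raw
    # multi-kilobyte template body in every row; the body is fetched only
    # when it is actually needed (updates, user asks to see it, etc).
    template_ref = Column(String(36), nullable=False)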

 I'm not opposed to having Glance as an optional configurable backend for
 Heater but if you look at the existing database 

Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as first-class citizen - SSL Termination (Revised)

2013-12-05 Thread Nachi Ueno
Hi folks

OK, it looks like we have consensus on
the separate-resource approach.

Best
Nachi

2013/12/5 Eugene Nikanorov enikano...@mirantis.com:
 Hi,

 My vote is for a separate resource (e.g. the 'New Model'). Also I'd like to see
 certificate handling as a separate extension/db mixin (in fact, a persistence
 driver) similar to the service_type extension.

 Thanks,
 Eugene.


 On Thu, Dec 5, 2013 at 2:13 PM, Stephen Gran stephen.g...@theguardian.com
 wrote:

 Hi,

 Right, sorry, I see that wasn't clear - I blame lack of coffee :)

 I would prefer the Revised New Model.  I much prefer the ability to
 restore a loadbalancer from config in the event of node failure, and the
 ability to do basic sharing of certificates between VIPs.

 I think that a longer term plan may involve putting the certificates in a
 smarter system if we decide we want to do things like evaluate trust models,
 but just storing them locally for now will do most of what I think people
 want to do with SSL termination.

 Cheers,


 On 05/12/13 09:57, Samuel Bercovici wrote:

 Hi Stephen,

 To make sure I understand, which model is fine: Basic/Simple or New?

 Thanks,
 -Sam.


 -Original Message-
 From: Stephen Gran [mailto:stephen.g...@theguardian.com]
 Sent: Thursday, December 05, 2013 8:22 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Vote required for
 certificate as first-class citizen - SSL Termination (Revised)

 Hi,

 I would be happy with this model.  Yes, longer term it might be nice to
 have an independent certificate store so that when you need to be able to
 validate ssl you can, but this is a good intermediate step.

 Cheers,

 On 02/12/13 09:16, Vijay Venkatachalam wrote:


 LBaaS enthusiasts: Your vote on the revised model for SSL Termination?

 Here is a comparison between the original and revised model for SSL
 Termination:

 ***
 Original Basic Model that was proposed in summit
 ***
 * Certificate parameters introduced as part of VIP resource.
 * This model is for basic config and there will be a model introduced in
 future for detailed use case.
 * Each certificate is created for one and only one VIP.
 * Certificate params not stored in DB and sent directly to loadbalancer.
 * In case of failures, there is no way to restart the operation from
 details stored in DB.
 ***
 Revised New Model
 ***
 * Certificate parameters will be part of an independent certificate
 resource. A first-class citizen handled by LBaaS plugin.
 * It is a forwarding looking model and aligns with AWS for uploading
 server certificates.
 * A certificate can be reused in many VIPs.
 * Certificate params stored in DB.
 * In case of failures, parameters stored in DB will be used to restore
 the system.

 A more detailed comparison can be viewed in the following link

 https://docs.google.com/document/d/1fFHbg3beRtmlyiryHiXlpWpRo1oWj8FqVe
 ZISh07iGs/edit?usp=sharing


 --
 Stephen Gran
 Senior Systems Integrator - theguardian.com
 Please consider the environment before printing this email.

 --


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-05 Thread Tiwari, Arvind
All,

I have captured almost all of the email conversation (between Arvind, David and 
Adam) in the etherpad (lines #54-126) and moved the old conversation below line 
#130.

https://etherpad.openstack.org/p/service-scoped-role-definition


At the beginning (lines #1 to 51), I have captured where we are right now and 
the open questions, along with my thoughts.

Please take a look and share your comments/suggestion.

Regards,
Arvind

-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk] 
Sent: Thursday, December 05, 2013 5:45 AM
To: Tiwari, Arvind; Adam Young
Cc: OpenStack Development Mailing List (not for usage questions); 
dolph.math...@gmail.com
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

Almost, but not quite. The role name cannot be anything you like. It
must be globally unique, and named hierarchically. There is a proposal
in another message of this thread for what this could be, based on a 4
component naming scheme with / separators.

regards
david

On 04/12/2013 19:42, Tiwari, Arvind wrote:
 Thanks David,
 
 I appended my reply at line #119. 'endpoint' sounds perfect to me.
 
 In a nutshell, we are agreeing on the following new data model for role-def: 
 
 {
   role: {
 id: 76e72a,
 name: admin, (you can give whatever name you like)
 scope: {
   id: ---id--, (the ID should be 1-to-1 mapped with the resource in type and 
 must be an immutable value)
   type: service | file | domain etc., (the type can be any type of 
 resource and explains the scoping context)
   endpoint: --endpoint--  (an optional field to indicate the 
 interface of the resource (endpoint for a service, path for a file, ...) for 
 which the role-def is created.)
 }
   }
 }
 
 If other community members are cool with this, I will start drafting the API 
 specs.
 
 
 Regards,
 Arvind
 
 
 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk] 
 Sent: Wednesday, December 04, 2013 11:42 AM
 To: Tiwari, Arvind; Adam Young
 Cc: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition
 
 
 
 On 04/12/2013 17:28, Tiwari, Arvind wrote:
 Hi David,

 Thanks for your valuable comments.

 I have updated
 https://etherpad.openstack.org/p/service-scoped-role-definition line
 #118 explaining the rationale behind the field.
 
 #119 for my reply
 

 I would also appreciate your thoughts on
 https://etherpad.openstack.org/p/1Uiwcbfpxq too,
 
 I have added a comment to the original bug report -
 https://bugs.launchpad.net/keystone/+bug/968696
 
 I think you should be going for simplifying Keystone's RBAC model rather
 than making it more complex. In essence this would mean that assigning
 permissions to roles and users to roles are separate and independent
 processes and that roles on creation do not have to have any baggage or
 restrictions tied to them. Here are my suggestions:
 
 1. Allow different entities to create roles, and use hierarchical role
 naming to maintain global uniqueness and to show which entity created
 (owns) the role definition. Creating a role does not imply anything
 about a role's subsequent permissions unless a scope field is included
 in the definition.
 
 2. When a role is created allow the creator to optionally add a scope
 field which will limit the permissions that can be assigned to the role
 to the prescribed scope.
 
 3. Permissions will be assigned to roles in policy files by resource
 owners. They can assign any permissions to their resources to the role
 that they want to, except that they cannot override the scope field (i.e.
 grant permissions to resources which are out of the role's scope). (See
 the sketch after this list.)
 
 4. Remove any linkage of roles to tenants/projects on creation. This is
 unnecessary baggage and only complicates the model for no good
 functional reason.
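 
 To illustrate point 3, a minimal sketch of a policy.json rule keyed to a
 scoped role (the rule name and the role name here are only assumptions for
 illustration, not an agreed format):
 
     {
         "compute:start": "role:acme/nova/publicURL/admin"
     }
 
 The resource owner grants whatever permissions they like, but only to a role
 whose scope already covers that resource.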
 
 regards
 
 David
 
 
  which is to support the
 https://blueprints.launchpad.net/keystone/+spec/service-scoped-tokens
 BP.


 Thanks, Arvind

 -Original Message- From: David Chadwick
 [mailto:d.w.chadw...@kent.ac.uk] Sent: Wednesday, December 04, 2013
 2:16 AM To: Tiwari, Arvind; OpenStack Development Mailing List (not
 for usage questions); Adam Young Subject: Re: [openstack-dev]
 [keystone] Service scoped role definition

 I have added comments 111 to 122

 david

 On 03/12/2013 23:58, Tiwari, Arvind wrote:
 Hi David,

 I have added my comments underneath line # 97 till line #110, it is
 mostly aligned with your proposal with some modification.

 https://etherpad.openstack.org/p/service-scoped-role-definition


 Thanks for your time, Arvind



 -Original Message- From: Tiwari, Arvind Sent: Monday,
 December 02, 2013 4:22 PM To: Adam Young; OpenStack Development
 Mailing List (not for usage questions); David Chadwick Subject: Re:
 [openstack-dev] [keystone] Service scoped role definition

 Hi Adam and David,

 Thank you so much for all the great comments, seems we are making
 good progress.

 I have replied 

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Clint Byrum
Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
 On Dec 5, 2013, at 10:10 AM, Clint Byrum cl...@fewbar.com
  wrote:
 
  Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
  Why not just use glance?
  
  
  I've asked that question a few times, and I think I can collate the
  responses I've received below. I think enhancing glance to do these
  things is on the table:
  
  1. Glance is for big blobs of data not tiny templates.
  2. Versioning of a single resource is desired.
  3. Tagging/classifying/listing/sorting
  4. Glance is designed to expose the uploaded blobs to nova, not users
  
  My responses:
  
  1: Irrelevant. Smaller things will fit in it just fine.
 
 Fitting is one thing, optimizations around particular assumptions about the 
 size of data and the frequency of reads/writes might be an issue, but I admit 
 to ignorance about those details in Glance.
 

Optimizations can be improved for various use cases. The design, however,
has no assumptions that I know about that would invalidate storing blobs
of yaml/json vs. blobs of kernel/qcow2/raw image.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Meeting Tuesday December 3rd at 19:00 UTC

2013-12-05 Thread Elizabeth Krumbach Joseph
On Mon, Dec 2, 2013 at 9:36 AM, Elizabeth Krumbach Joseph
l...@princessleia.com wrote:
 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting tomorrow, Tuesday December 3rd, at 19:00 UTC in
 #openstack-meeting

Meeting minutes and logs:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-12-03-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-12-03-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-12-03-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Interfaces file format, was [Tempest] Need to prepare the IPv6 environment for static IPv6 injection test case

2013-12-05 Thread Vishvananda Ishaya
Hi Ian,

The rendered network template was a legacy item that got stuck onto the config 
drive so we could remove file injection. It is not intended that this is the 
correct way to do net config. We have intended in the past to put a generic 
form of network info into the metadata service and config drive. Cloud-init can 
parse this data and have code to set up networking config on different 
operating systems.

We actually discussed doing this during the Havana summit, but no one ever made 
any progress. There was some debate about whether there was an existing xml 
format and someone was going to investigate. Since this has not happened, I 
propose we scrap that idea and just produce the network info in json.
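
As a very rough sketch (field names here are purely illustrative, not a settled
schema), such network info might look like:

    {
        "networks": [
            {"link": "eth0", "type": "ipv4", "address": "10.0.0.5",
             "netmask": "255.255.255.0", "gateway": "10.0.0.1",
             "dns": ["8.8.8.8"]},
            {"link": "eth0", "type": "ipv6", "address": "2001:db8::5/64",
             "gateway": "2001:db8::1"}
        ]
    }

Cloud-init (or anything else reading the config drive) could then render this
into whatever native format the guest OS uses.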

Nova should be able to populate this data from its cached network info. It 
might also be nice to stick it in a known location on the metadata server so 
the neutron proxy could potentially overwrite it with more current network data 
if it wanted to.

Vish

On Dec 4, 2013, at 8:26 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 We seem to have bound our config drive file formats to those used by the 
 operating system we're running, which doesn't seem like the right approach to 
 take.
 
 Firstly, the above format doesn't actually work even for Debian-based systems 
 - if you have a network without ipv6, ipv6 ND will be enabled on the 
 ipv4-only interfaces, which strikes me as wrong.  (This is a feature of Linux 
 - ipv4 is enabled on interfaces which are specifically configured with ipv4, 
 but ipv6 is enabled on all interfaces that are brought up.)
 
 But more importantly, the above file template only works for Debian-based 
 machines - not Redhat, not Windows, not anything else - and we seem to have 
 made that a feature of Openstack from the relatively early days of file 
 injection.  That's not an ipv6 only thing but a general statement.  It seems 
 wrong to have to extend Openstack's config drive injection for every OS that 
 might come along, so is there a way we can make this work without tying the 
 two things together?  Are we expecting the cloud-init code in whatever OS to 
 parse and understand this file format, or are they supposed to use other 
 information?  In general, what would the recommendation be for someone using 
 a VM where this config format is not native?
 
 -- 
 Ian.
 
 
 On 2 December 2013 03:01, Yang XY Yu yuyan...@cn.ibm.com wrote:
 Hi all stackers, 
 
 Currently the Neutron/Nova code supports static IPv6 injection, but 
 there is no tempest scenario coverage for the IPv6 injection test case. So 
 I finished the test case and ran it successfully in my local environment, 
 and already submitted the code review to the community: 
 https://review.openstack.org/#/c/58721/. However, the community Jenkins env does 
 not support IPv6, and there are still a few prerequisites, described below, for 
 running the test case correctly: 
 
 1. A special image is needed to support IPv6 via cloud-init; currently the 
 cirros image used by tempest does not have cloud-init installed. 
 
 2. Prepare interfaces.template file below on compute node. 
 edit  /usr/share/nova/interfaces.template 
 
 # Injected by Nova on instance boot 
 # 
 # This file describes the network interfaces available on your system 
 # and how to activate them. For more information, see interfaces(5). 
 
 # The loopback network interface 
 auto lo 
 iface lo inet loopback 
 
 {% for ifc in interfaces -%} 
 auto {{ ifc.name }} 
 {% if use_ipv6 -%} 
 iface {{ ifc.name }} inet6 static 
 address {{ ifc.address_v6 }} 
 netmask {{ ifc.netmask_v6 }} 
 {%- if ifc.gateway_v6 %} 
 gateway {{ ifc.gateway_v6 }} 
 {%- endif %} 
 {%- endif %} 
 
 {%- endfor %} 
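 
 For reference, with a single IPv6 interface the template above would render 
 to something like this (addresses are placeholders, and this assumes 
 netmask_v6 is passed as a prefix length): 
 
     auto lo 
     iface lo inet loopback 
 
     auto eth0 
     iface eth0 inet6 static 
     address 2001:db8::5 
     netmask 64 
     gateway 2001:db8::1 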
 
 
 So considering these two pre-requisites, what should be done to enable this 
 patch for IPv6 injection? Should I open a bug for cirros to enable 
 cloud-init?   Or skip the test case because of this bug ? 
 Any comments are appreciated! 
 
 Thanks & Best Regards,
 
 Yang Yu(于杨)
 Cloud Solutions and OpenStack Development
 China Systems & Technology Laboratory Beijing
 E-mail: yuyan...@cn.ibm.com 
 Tel: 86-10-82452757
 Address: Ring Bldg. No.28 Building, Zhong Guan Cun Software Park, 
 No. 8 Dong Bei Wang West Road, ShangDi, Haidian District, Beijing 100193, 
 P.R.China 
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [qa] Moving the QA meeting time

2013-12-05 Thread Rochelle.Grober
Hey, guys.

We've got a team in China that is focused mainly on Test and at least one or 
two would like to attend the meetings.  22:00 UTC is a bit early for them so I 
think it would be better to alternate to get reasonable participation.

The team wants to get more involved in the test effort and we have some senior 
Test Engineers on the OpenStack project.  I'm in PST, so if the meeting 
alternated, we could theoretically cover them all.

Thanks,
--Rocky

-Original Message-
From: David Kranz [mailto:dkr...@redhat.com] 
Sent: Thursday, December 05, 2013 6:46 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] Moving the QA meeting time

On 12/05/2013 07:16 AM, Sean Dague wrote:
 On 12/05/2013 02:37 AM, Koderer, Marc wrote:
 Hi all!

 -Ursprüngliche Nachricht-
 Von: Kenichi Oomichi [mailto:oomi...@mxs.nes.nec.co.jp]
 Gesendet: Donnerstag, 5. Dezember 2013 01:37
 An: OpenStack Development Mailing List (not for usage questions)
 Betreff: Re: [openstack-dev] [qa] Moving the QA meeting time


 Hi Matthew,

 Thank you for picking this up.

 -Original Message-
 From: Matthew Treinish [mailto:mtrein...@kortar.org]
 Sent: Thursday, December 05, 2013 6:04 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [qa] Moving the QA meeting time

 Hi everyone,

 I'm looking at changing our weekly QA meeting time to make it more
 globally attendable. Right now the current time of 17:00 UTC doesn't
 really work for people who live in Asia Pacific timezones. (which
 includes a third of the current core review team) There are 2
 approaches that I can see taking here:
   1. We could either move the meeting time later so that it makes it
 easier for
  people in the Asia Pacific region to attend.

   2. Or we move to a alternating meeting time, where every other week
 the meeting
  time changes. So we keep the current slot and alternate with
 something more
  friendly for other regions.

 I think trying to stick to a single meeting time would be a better
 call just for simplicity. But it gets difficult to appease everyone
 that way which is where the appeal of the 2nd approach comes in.

 Looking at the available time slots here:
 https://wiki.openstack.org/wiki/Meetings
 there are plenty of open slots before 1500 UTC which would be early
 for people in the US and late for people in the Asia Pacific region.
 There are plenty of slots starting at 2300 UTC which is late for
 people in Europe.
 Would something like 2200 UTC on Wed. or Thurs work for everyone?

 What are people's opinions on this?
 I am in JST.
 Is Chris in CST, and Marc in CET?
 Yes, Giulio and I are in CET. And Attila too, right?

 Here is timezone difference.
 15:00 UTC - 07:00 PST - 01:30 CST - 16:00 CET - 24:00 JST
 22:00 UTC - 14:00 PST - 08:30 CST - 23:00 CET - 07:00 JST
 23:00 UTC - 15:00 PST - 09:30 CST - 24:00 CET - 08:00 JST

 I feel 22:00 would be nice.
 I'd prefer to have two slots since 22 UTC is quite late. But I am ok with it 
 if all others are fine.
 The other option would be to oscillate on opposite weeks with Ceilometer
 - https://wiki.openstack.org/wiki/Meetings/Ceilometer they already have
 a well defined every other cadence.

   -Sean


Either option works for me.

  -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Performance Regression in Neutron/Havana compared to Quantum/Grizzly

2013-12-05 Thread Nathani, Sreedhar (APS)
Hello Peter,

Thanks for the info. I will do the tests with your code changes.

What surprises me is that when I did the tests in Grizzly, up to 210 instances 
could get an IP during the first boot. 
Once we crossed 210 active instances, some instances in the next batch could 
not get an IP. As the number of active instances grew, more instances 
could not get an IP.
But once I restarted those instances, they could get an IP address. I did the tests 
close to 10 times, so this behavior was consistent every time. 

But in Havana, instances are not able to get an IP once we cross 80 instances. 
Moreover, we need to restart the dnsmasq process for instances to get an IP during 
the next reboot.

Thanks & Regards,
Sreedhar Nathani


-Original Message-
From: Peter Feiner [mailto:pe...@gridcentric.ca] 
Sent: Thursday, December 05, 2013 10:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Performance Regression in Neutron/Havana compared 
to Quantum/Grizzly

On Thu, Dec 5, 2013 at 8:23 AM, Nathani, Sreedhar (APS) 
sreedhar.nath...@hp.com wrote:
 Hello Marun,



 Please find the details about my setup and tests which i have done so 
 far



 Setup

   - One Physical Box with 16c, 256G memory. 2 VMs created on this Box 
 - One for Controller and One for Network Node

   - 16x compute nodes (each has 16c, 256G memory)

   - All the systems are installed with Ubuntu Precise + Havana Bits 
 from Ubuntu Cloud Archive



 Steps to simulate the issue

   1) Concurrently create 30 Instances (m1.small) using REST API with
 mincount=30

   2) sleep for 20min and repeat the step (1)
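
 For reference, a rough sketch of how step (1) might be issued with 
 python-novaclient (credentials, image and flavor IDs are placeholders):

     from novaclient.v1_1 import client

     nova = client.Client('admin', 'password', 'demo',
                          'http://controller:5000/v2.0')
     # min_count/max_count ask Nova to build the whole batch in one API call
     nova.servers.create(name='perf-test', image='<IMAGE_ID>',
                         flavor='<FLAVOR_ID_for_m1.small>',
                         min_count=30, max_count=30)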





 Issue 1

 In Havana, once we cross 150 instances (5 batches x 30), during the 6th 
 batch some instances are going into ERROR state

 due to the network port failing to be created, and some instances are getting 
 duplicate IP addresses



 Per Maru Newby this issue might related to this bug

 https://bugs.launchpad.net/bugs/1192381



 I have done similar tests with Grizzly on the same environment 2 months 
 back, where I was able to deploy close to 240 instances without any 
 errors.

 Initially I saw the same behavior on Grizzly as well, but with these 
 tunings based on this bug

 https://bugs.launchpad.net/neutron/+bug/1160442, never had issues 
 (tested more than 10 times)

sqlalchemy_pool_size = 60

sqlalchemy_max_overflow = 120

sqlalchemy_pool_timeout = 2

agent_down_time = 60

report_interval = 20



 In Havana, I have tuned the same tunables but I could never get past 
 150+ instances. Without the tunables I could not get past

 100 instances. We are getting many timeout errors from the DHCP agent 
 and neutron clients.



 NOTE: After tuning the agent_down_time to 60 and report_interval to 
 20, we are no longer getting these error messages:

2013-12-02 11:44:43.421 28201 WARNING 
 neutron.scheduler.dhcp_agent_scheduler [-] No more DHCP agents

2013-12-02 11:44:43.439 28201 WARNING 
 neutron.scheduler.dhcp_agent_scheduler [-] No more DHCP agents

2013-12-02 11:44:43.452 28201 WARNING 
 neutron.scheduler.dhcp_agent_scheduler [-] No more DHCP agents





 In the compute node openvswitch agent logs, we see these errors 
 repeating continuously



 2013-12-04 06:46:02.081 3546 TRACE
 neutron.plugins.openvswitch.agent.ovs_neutron_agent Timeout: Timeout 
 while waiting on RPC response - topic: q-plugin, RPC method:
 security_group_rules_for_devices info: unknown

 and WARNING neutron.openstack.common.rpc.amqp [-] No calling threads 
 waiting for msg_id



 DHCP agent has below errors



 2013-12-02 15:35:19.557 22125 ERROR neutron.agent.dhcp_agent [-] 
 Unable to reload_allocations dhcp.

 2013-12-02 15:35:19.557 22125 TRACE neutron.agent.dhcp_agent Timeout:
 Timeout while waiting on RPC response - topic: q-plugin, RPC method:
 get_dhcp_port info: unknown



 2013-12-02 15:35:34.266 22125 ERROR neutron.agent.dhcp_agent [-] 
 Unable to sync network state.

 2013-12-02 15:35:34.266 22125 TRACE neutron.agent.dhcp_agent Timeout:
 Timeout while waiting on RPC response - topic: q-plugin, RPC method:
 get_active_networks_info info: unknown





 In Havana, I have merged the code from this patch and set api_workers 
 to 8 (My Controller VM has 8cores/16Hyperthreads)

 https://review.openstack.org/#/c/37131/



 After this patch and starting 8 neutron-server worker threads, during 
 the batch creation of 240 instances with 30 concurrent requests during 
 each batch,

 238 instances became active and 2 instances went into error. 
 Interestingly, the 2 instances which went into error state are from the 
 same compute node.



 Unlike earlier this time, the errors are due to 'Too Many Connections' 
 to the MySQL database.

 2013-12-04 17:07:59.877 21286 AUDIT nova.compute.manager
 [req-26d64693-d1ef-40f3-8350-659e34d5b1d7 
 c4d609870d4447c684858216da2f8041 9b073211dd5c4988993341cc955e200b] [instance:
 c14596fd-13d5-482b-85af-e87077d4ed9b] Terminating instance

 

[openstack-dev] [barbican] Barbican 2014.1.b1 (Icehouse) is released

2013-12-05 Thread John Wood
Hello everyone,

It is my pleasure to announce the first milestone delivery for the OpenStack 
Icehouse series.

Information on the milestone and its associated tarball are available at: 
https://launchpad.net/barbican/+milestone/icehouse-1

With this milestone, 2 blueprints have been implemented and 5 bugs fixed. 

An updated Python client library and command line interface for Barbican is 
also available in Pypi here: https://pypi.python.org/pypi/python-barbicanclient/

Thanks to Bryan Payne, Wyllys Ingersoll, Dolph Mathews, Jarret Raim, Andrew 
Hartnett, Douglas Sims, Sheena Gregson, John Vrbanac, Steve Heyman, Douglas 
Mendizabal, Paul Kehrer, Constanze Kratel, Arash Ghoreyshi, and John Wood for 
their contributions to this milestone.

-- 
John Wood

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Brad Topol
Lots of good discussion on this topic.   One thing I would like to point 
out is that we get feedback that OpenStack has too many projects as it is 
and customers get confused on how much of OpenStack they need to install. 
So in the spirit of trying to help insure OpenStack does not continue to 
reinforce this perception, I am hoping that Heater functionality finds a 
home in either Glance or Heat.  I don't have a preference of which. Either 
of these is superior to starting a new project if it can be avoided.

Thanks,

Brad

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Randall Burt randall.b...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date:   12/05/2013 12:09 PM
Subject:Re: [openstack-dev] [heat] [glance] Heater Proposal



On Dec 5, 2013, at 10:10 AM, Clint Byrum cl...@fewbar.com
 wrote:

 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?
 
 
 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:
 
 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users
 
 My responses:
 
 1: Irrelevant. Smaller things will fit in it just fine.

Fitting is one thing, optimizations around particular assumptions about 
the size of data and the frequency of reads/writes might be an issue, but 
I admit to ignorance about those details in Glance.

 2: The swift API supports versions. We could also have git as a
 backend. This feels like something we can add as an optional feature
 without exploding Glance's scope and I imagine it would actually be a
 welcome feature for image authors as well. Think about Ubuntu 
maintaining
 official images. If they can keep the ID the same and just add a version
 (allowing users to lock down to a version if updated images cause issue)
 that seems like a really cool feature for images _and_ templates.

Agreed, though one could argue that using image names and looking up ID's 
or just using ID's as appropriate sort of handle this use case, but I 
agree that having image versioning seems a reasonable feature for Glance 
to have as well.

 3: I'm sure glance image users would love to have those too.

And image metadata is already there so we don't have to go through those 
discussions all over again ;).

 4: Irrelevant. Heat will need to download templates just like nova, and
 making images publicly downloadable is also a thing in glance.

Yeah, this was the kicker for me. I'd been thinking of adding the 
tenancy/public/private templates use case to the HeatR spec and realized 
that this was a good argument for Glance since it already has this 
feature.

 It strikes me that this might be a silo problem instead of an
 actual design problem. Folk should not be worried about jumping into
 Glance and adding features. Unless of course the Glance folk have
 reservations? (adding glance tag to the subject)

Perhaps, and if these use cases make sense for the Glance users in 
general, I wouldn't want to re-invent all those wheels either. I admit 
there's some appeal to being able to pass a template ID to stack-create or 
as the type of a provider resource and have an actual API to call that's 
already got a known, tested client that's already part of the OpenStack 
ecosystem

In the end, though, even if some and not all of our use cases make sense 
for the Glance folks, we still have the option of creating the HeatR 
service and having Glance as a possible back-end data store.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as first-class citizen - SSL Termination (Revised)

2013-12-05 Thread Samuel Bercovici
Correct.

Evgeny will update the WIKI accordingly.
We will add a flag in the SSL Certificate to allow specifying that the private 
key can't be persisted. And in this case, the private key could be passed when 
associating the cert_id with the VIP.
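
Purely as an illustrative sketch (field names are assumptions until the wiki is
updated), the certificate resource and the association might then look like:

    {
        "certificate": {
            "id": "cert-123",
            "name": "www.example.com",
            "certificate_chain": "-----BEGIN CERTIFICATE-----...",
            "persist_private_key": false
        }
    }

and, when associating cert-123 with a VIP where the key was not persisted:

    {
        "vip_ssl_association": {
            "vip_id": "vip-456",
            "certificate_id": "cert-123",
            "private_key": "-----BEGIN RSA PRIVATE KEY-----..."
        }
    }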

Regards,
-Sam.

-Original Message-
From: Nachi Ueno [mailto:na...@ntti3.com] 
Sent: Thursday, December 05, 2013 8:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as 
first-class citizen - SSL Termination (Revised)

Hi folks

OK, It looks like we get consensus on
separate resource way.

Best
Nachi

2013/12/5 Eugene Nikanorov enikano...@mirantis.com:
 Hi,

 My vote is for separate resource (e.g. 'New Model'). Also I'd like to 
 see certificate handling as a separate extension/db mixing(in fact, 
 persistence
 driver) similar to service_type extension.

 Thanks,
 Eugene.


 On Thu, Dec 5, 2013 at 2:13 PM, Stephen Gran 
 stephen.g...@theguardian.com
 wrote:

 Hi,

 Right, sorry, I see that wasn't clear - I blame lack of coffee :)

 I would prefer the Revised New Model.  I much prefer the ability to 
 restore a loadbalancer from config in the event of node failure, and 
 the ability to do basic sharing of certificates between VIPs.

 I think that a longer term plan may involve putting the certificates 
 in a smarter system if we decide we want to do things like evaluate 
 trust models, but just storing them locally for now will do most of 
 what I think people want to do with SSL termination.

 Cheers,


 On 05/12/13 09:57, Samuel Bercovici wrote:

 Hi Stephen,

 To make sure I understand, which model is fine Basic/Simple or New.

 Thanks,
 -Sam.


 -Original Message-
 From: Stephen Gran [mailto:stephen.g...@theguardian.com]
 Sent: Thursday, December 05, 2013 8:22 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Vote required for 
 certificate as first-class citizen - SSL Termination (Revised)

 Hi,

 I would be happy with this model.  Yes, longer term it might be nice 
 to have an independent certificate store so that when you need to be 
 able to validate ssl you can, but this is a good intermediate step.

 Cheers,

 On 02/12/13 09:16, Vijay Venkatachalam wrote:


 LBaaS enthusiasts: Your vote on the revised model for SSL Termination?

 Here is a comparison between the original and revised model for SSL
 Termination:

 ***
 Original Basic Model that was proposed in summit
 ***
 * Certificate parameters introduced as part of VIP resource.
 * This model is for basic config and there will be a model 
 introduced in future for detailed use case.
 * Each certificate is created for one and only one VIP.
 * Certificate params not stored in DB and sent directly to loadbalancer.
 * In case of failures, there is no way to restart the operation 
 from details stored in DB.
 ***
 Revised New Model
 ***
 * Certificate parameters will be part of an independent certificate 
 resource. A first-class citizen handled by LBaaS plugin.
 * It is a forwarding looking model and aligns with AWS for 
 uploading server certificates.
 * A certificate can be reused in many VIPs.
 * Certificate params stored in DB.
 * In case of failures, parameters stored in DB will be used to 
 restore the system.

 A more detailed comparison can be viewed in the following link

 https://docs.google.com/document/d/1fFHbg3beRtmlyiryHiXlpWpRo1oWj8F
 qVe
 ZISh07iGs/edit?usp=sharing


 --
 Stephen Gran
 Senior Systems Integrator - theguardian.com


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

[openstack-dev] [Neutron][IPv6] Subteam meeting agenda for Dec 5 2013 2100UTC

2013-12-05 Thread Collins, Sean (Contractor)
Agenda has been posted - look forward to speaking to you all soon.

https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] UI Wireframes - close to implementation start

2013-12-05 Thread Matt Wagner
On Tue Dec  3 06:53:04 2013, Jaromir Coufal wrote:
 Wireframes walkthrough: https://www.youtube.com/enhance?v=oRtL3aCuEEc

 On 2013/03/12 10:25, Jaromir Coufal wrote:
 Hey folks,

 I opened 2 issues on UX discussion forum with TripleO UI topics:

Hey Jarda, thanks for sharing these! Some comments inline below:

 Resource Management:
 http://ask-openstackux.rhcloud.com/question/95/tripleo-ui-resource-management/
 - this section was already reviewed before, there is not much
 surprises, just smaller updates
 - we are about to implement this area

I've somehow overlooked the 'Node tags' previously. I'm curious what
format these would take, or if this is something we've discussed. I
remember hearing us kick around an idea for key-value pairs for storing
arbitrary information, maybe ram=64g or rack=c6. Is that what the tags
you have are intended for?


 http://ask-openstackux.rhcloud.com/question/96/tripleo-ui-deployment-management/
 - these are completely new views and they need a lot of attention so
 that in time we don't change direction drastically
 - any feedback here is welcome

One thing I notice here -- and I really hope I'm not opening a can of
worms -- is that this seems to require that you manage many nodes. I
know that's our focus, and I entirely agree with it. But with the way
things are represented in this, it doesn't seem like it would be
possible for a user to set up an all-in-one system (say, for testing)
that ran compute and storage on the same node.

I think it would be very fair to say that's just something we're not
focusing on at this point, and that to start out we're just going to
handle the simple case of wanting to install many nodes, each with only
one distinct type. But I just wanted to clarify that we are, indeed,
making that decision?

Overall I think these look good. Thanks!

-- 
Matt Wagner
Software Engineer, Red Hat



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Moving the QA meeting time

2013-12-05 Thread Matthew Treinish
On Wed, Dec 04, 2013 at 04:04:22PM -0500, Matthew Treinish wrote:
 Hi everyone,
 
 I'm looking at changing our weekly QA meeting time to make it more globally
 attendable. Right now the current time of 17:00 UTC doesn't really work for
 people who live in Asia Pacific timezones. (which includes a third of the
 current core review team) There are 2 approaches that I can see taking here:
 
  1. We could either move the meeting time later so that it makes it easier for
 people in the Asia Pacific region to attend.
 
  2. Or we move to a alternating meeting time, where every other week the 
 meeting
 time changes. So we keep the current slot and alternate with something 
 more
 friendly for other regions.
 
 I think trying to stick to a single meeting time would be a better call just 
 for
 simplicity. But it gets difficult to appease everyone that way which is where 
 the
 appeal of the 2nd approach comes in.
 
 Looking at the available time slots here: 
 https://wiki.openstack.org/wiki/Meetings
 there are plenty of open slots before 1500 UTC which would be early for 
 people in
 the US and late for people in the Asia Pacific region. There are plenty of 
 slots
 starting at 2300 UTC which is late for people in Europe.
 
 Would something like 2200 UTC on Wed. or Thurs work for everyone?
 

So during the QA meeting today we decided to adopt an oscillating meeting time.
Starting next week the meeting will be at 2200 UTC on Thurs., and we will
alternate between that time and 1700 UTC on Thurs. (our current meeting time)
every other week. I've updated the wiki page with the new schedule, and I'll
update the meeting ical feed soon.

I will send a reminder about the new schedule next week before the meeting.

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] ExtraSpecs format bug

2013-12-05 Thread Costantino, Leandro I

Hi!

I am working on the Horizon 'side' of 
https://bugs.launchpad.net/nova/+bug/1256119 , where basically
if you create an ExtraSpec key containing '/', then it cannot be deleted 
anymore.


Is there any restriction on this?
Should the format of the keys be limited to some specific pattern, or should 
any combination be valid?


For instance, Heat uses this pattern for stack names: 
[a-zA-Z][a-zA-Z0-9_.-]* .
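
As a purely illustrative sketch (the allowed character set here is an
assumption, not an agreed rule; note that scoped extra spec keys legitimately
contain ':'), a similar validation in Python could look like:

    import re

    # Assumption: allow letters, digits and a few separators, but not '/',
    # since '/' breaks the DELETE .../os-extra_specs/<key> URL path.
    EXTRA_SPEC_KEY_RE = re.compile(r'^[a-zA-Z0-9_.:\-]+$')

    def is_valid_extra_spec_key(key):
        """Return True if the key is safe to use in the extra-specs URL."""
        return bool(EXTRA_SPEC_KEY_RE.match(key))

    print(is_valid_extra_spec_key('quota:cpu_shares'))  # True
    print(is_valid_extra_spec_key('foo/bar'))           # False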



Regards
Leandro




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Action-Requested: In progress bugs/blueprints/abandoned patches

2013-12-05 Thread Alex Meade
Hey Glance folks,

It seems there are a fairly large number of abandoned patches that are still 
valuable. Some were auto-abandoned by jenkins without negative reviews and 
others were -2'd for the Havana feature freeze then abandoned and forgotten. 

I want to ask everyone to look over their abandoned patches and restore any 
that are still relevant. 

Use this link to see your abandoned patches, just replace alex-meade with 
your own launchpad id.

https://review.openstack.org/#/q/status:abandoned+project:openstack/glance+owner:alex-meade,n,z

Also, if you have any in progress bugs/blueprints assigned to you that you 
are no longer working on, please unassign yourself.

Thanks,

-Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Tim Bell
Completely agree with Brad... a new project for this is not what is needed.

From an operator's point of view, it is a REAL, REAL, REAL pain to be 
configuring yet another project, yet another set of Puppet/Chef recipes, 
additional monitoring, service nodes, new databases, more documentation, 
further packages, pre-requisites and tough divergent roles follow...

While I appreciate that there is an impact from having reviews, this should not 
be the cause for creating more complexity for those who wish to use the 
functionality.

I'd also suggest the Heater guys talk to the Murano guys since my ideal 
scenario is for a simple user kiosk where I can ask for an app, be asked for 
some configuration details and then deploy. The two projects appear to be 
heading towards a converged solution.

Tim

From: Brad Topol [mailto:bto...@us.ibm.com]
Sent: 05 December 2013 20:06
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [heat] [glance] Heater Proposal

Lots of good discussion on this topic.   One thing I would like to point out is 
that we get feedback that OpenStack has too many projects as it is and 
customers get confused on how much of OpenStack they need to install.  So in 
the spirit of trying to help insure OpenStack does not continue to reinforce 
this perception, I am hoping that Heater functionality finds a home in either 
Glance or Heat.  I don't have a preference of which.   Either of these is 
superior to starting a new project if it can be avoided.

Thanks,

Brad

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.commailto:bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:Randall Burt 
randall.b...@rackspace.commailto:randall.b...@rackspace.com
To:OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date:12/05/2013 12:09 PM
Subject:Re: [openstack-dev] [heat] [glance] Heater Proposal




On Dec 5, 2013, at 10:10 AM, Clint Byrum 
cl...@fewbar.commailto:cl...@fewbar.com
wrote:

 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?


 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:

 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users

 My responses:

 1: Irrelevant. Smaller things will fit in it just fine.

Fitting is one thing, optimizations around particular assumptions about the 
size of data and the frequency of reads/writes might be an issue, but I admit 
to ignorance about those details in Glance.

 2: The swift API supports versions. We could also have git as a
 backend. This feels like something we can add as an optional feature
 without exploding Glance's scope and I imagine it would actually be a
 welcome feature for image authors as well. Think about Ubuntu maintaining
 official images. If they can keep the ID the same and just add a version
 (allowing users to lock down to a version if updated images cause issue)
 that seems like a really cool feature for images _and_ templates.

Agreed, though one could argue that using image names and looking up ID's or 
just using ID's as appropriate sort of handle this use case, but I agree that 
having image versioning seems a reasonable feature for Glance to have as well.

 3: I'm sure glance image users would love to have those too.

And image metadata is already there so we don't have to go through those 
discussions all over again ;).

 4: Irrelevant. Heat will need to download templates just like nova, and
 making images publicly downloadable is also a thing in glance.

Yeah, this was the kicker for me. I'd been thinking of adding the 
tenancy/public/private templates use case to the HeatR spec and realized that 
this was a good argument for Glance since it already has this feature.

 It strikes me that this might be a silo problem instead of an
 actual design problem. Folk should not be worried about jumping into
 Glance and adding features. Unless of course the Glance folk have
 reservations? (adding glance tag to the subject)

Perhaps, and if these use cases make sense for the Glance users in general, I 
wouldn't want to re-invent all those wheels either. I admit there's some appeal 
to being able to pass a template ID to stack-create or as the type of a 
provider resource and have an actual API to call that's already got a known, 
tested client that's already part of the OpenStack ecosystem

In the end, though, even if some and not all of our use cases make sense for 
the Glance folks, we still have the option of creating the HeatR service and 
having Glance as a possible 

[openstack-dev] [savanna] team meeting minutes Dec 5

2013-12-05 Thread Sergey Lukjanov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:

Minutes: 
savanna.2013-12-05-18.05.htmlhttp://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-12-05-18.05.html
Log: 
savanna.2013-12-05-18.05.log.htmlhttp://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-12-05-18.05.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Andrew Plunk
Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
 On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
  wrote:

  Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
  Why not just use glance?
 
 
  I've asked that question a few times, and I think I can collate the
  responses I've received below. I think enhancing glance to do these
  things is on the table:
 
  1. Glance is for big blobs of data not tiny templates.
  2. Versioning of a single resource is desired.
  3. Tagging/classifying/listing/sorting
  4. Glance is designed to expose the uploaded blobs to nova, not users
 
  My responses:
 
  1: Irrelevant. Smaller things will fit in it just fine.

 Fitting is one thing, optimizations around particular assumptions about the 
 size of data and the frequency of reads/writes might be an issue, but I 
 admit to ignorance about those details in Glance.


Optimizations can be improved for various use cases. The design, however,
has no assumptions that I know about that would invalidate storing blobs
of yaml/json vs. blobs of kernel/qcow2/raw image.

I think we are getting out into the weeds a little bit here. It is important to 
think about these apis in terms of what they actually do, before the decision 
of combining them or not can be made.

I think of HeatR as a template storage service, it provides extra data and 
operations on templates. HeatR should not care about how those templates are 
stored.
Glance is an image storage service, it provides extra data and operations on 
images (not blobs), and it happens to use swift as a backend.

If HeatR and Glance were combined, it would result in taking two very different 
types of data (template metadata vs image metadata) and mashing them into one 
service. How would adding the complexity of HeatR benefit Glance, when they are 
dealing with conceptually two very different types of data? For instance, 
should a template ever care about the field minRam that is stored with an 
image? Combining them adds a huge development complexity with a very small 
operations payoff, and OpenStack is already so operationally complex that the 
additional burden of HeatR as a separate service would be negligible. Only 
clients of Heat will ever care about data and operations on templates, so I 
move that HeatR becomes its own service, or becomes part of Heat.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] icehouse-1 development milestone available

2013-12-05 Thread Sergey Lukjanov
Hi folks,

The first dev milestone of Icehouse cycle is now available for Savanna.

Here is a full list of new features and fixed bugs, as well as tarball
downloads:

https://launchpad.net/savanna/icehouse/icehouse-1

There are 7 blueprints implemented and 18 bugs fixed during the milestone.
It includes savanna, savanna-dashboard, savanna-image-elements and
savanna-extra sub-projects. In addition python-savannaclient 0.4.0 that was
released several days ago.

Please, note that the next milestone, icehouse-2, is scheduled for January
23rd.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Vishvananda Ishaya

On Dec 5, 2013, at 12:42 PM, Andrew Plunk andrew.pl...@rackspace.com wrote:

 Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
 On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
 wrote:
 
 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?
 
 
 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:
 
 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users
 
 My responses:
 
 1: Irrelevant. Smaller things will fit in it just fine.
 
 Fitting is one thing, optimizations around particular assumptions about the 
 size of data and the frequency of reads/writes might be an issue, but I 
 admit to ignorance about those details in Glance.
 
 
 Optimizations can be improved for various use cases. The design, however,
 has no assumptions that I know about that would invalidate storing blobs
 of yaml/json vs. blobs of kernel/qcow2/raw image.
 
 I think we are getting out into the weeds a little bit here. It is important 
 to think about these apis in terms of what they actually do, before the 
 decision of combining them or not can be made.
 
 I think of HeatR as a template storage service, it provides extra data and 
 operations on templates. HeatR should not care about how those templates are 
 stored.
 Glance is an image storage service, it provides extra data and operations on 
 images (not blobs), and it happens to use swift as a backend.

This is not completely correct. Glance already supports something akin to 
templates. You can create an image with metadata properties that specify a 
complex block device mapping, which would allow for multiple volumes and images 
to be connected to the vm at boot time. This is functionally a template for a 
single vm.
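
As a rough illustration (the property value below is hand-written for this
example, not output from a real image), such an image might carry a property
along these lines:

    block_device_mapping = [
        {"device_name": "vda", "snapshot_id": "<SNAPSHOT_ID>",
         "volume_size": 20, "delete_on_termination": true},
        {"device_name": "vdb", "volume_id": "<VOLUME_ID>",
         "delete_on_termination": false}
    ]

Nova reads this metadata at boot time and builds the volumes and attachments it
describes, which is what makes it behave like a single-vm template.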

Glance is pretty useless if it is just an image storage service; we already have 
other places that can store bits (swift, cinder). It is much more valuable as a 
searchable repository of bootable templates. I don't see any reason why this 
idea couldn't be extended to include more complex templates that could include 
more than one vm.

We have discussed the future of glance a number of times, and if it is really 
just there to serve up blobs of data + metadata about images, it should go 
away. Glance should offer something more like the AWS image search console. And 
this could clearly support more than just images, you should be able to search 
for and launch more complicated templates as well.

 
 If HeatR and Glance were combined, it would result in taking two very 
 different types of data (template metadata vs image metadata) and mashing 
 them into one service. How would adding the complexity of HeatR benefit 
 Glance, when they are dealing with conceptually two very different types of 
 data? For instance, should a template ever care about the field minRam that 
 is stored with an image?


I don't see these as significantly different types of metadata. Metadata for 
heat templates might be a bit more broad (minFlavor?) I would think that a 
template would care about constraints like this, especially when you consider 
that a user might want to give a command to launch a template but then override 
certain characteristics.

Vish

 Combining them adds a huge development complexity with a very small 
 operations payoff, and OpenStack is already so operationally complex that the 
 additional burden of HeatR as a separate service would be negligible. Only 
 clients of Heat will ever care about data and operations on templates, so I 
 move that HeatR becomes its own service, or becomes part of Heat.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Tim Schnell
On 12/5/13 12:17 PM, Clint Byrum cl...@fewbar.com wrote:


Excerpts from Tim Schnell's message of 2013-12-05 09:49:03 -0800:
 
 On 12/5/13 11:33 AM, Randall Burt randall.b...@rackspace.com wrote:
 
 On Dec 5, 2013, at 11:10 AM, Clint Byrum cl...@fewbar.com
  wrote:
 
  Excerpts from James Slagle's message of 2013-12-05 08:35:12 -0800:
  On Thu, Dec 5, 2013 at 11:10 AM, Clint Byrum cl...@fewbar.com
wrote:
  Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
  Why not just use glance?
  
  
  I've asked that question a few times, and I think I can collate the
  responses I've received below. I think enhancing glance to do these
  things is on the table:
  
  I'm actually interested in the use cases laid out by Heater from
both
  a template perspective and image perspective.  For the templates, as
  Robert mentioned, Tuskar needs a solution for this requirement,
since
  it's deploying using templates.  For the images, we have the concept
  of a golden image in TripleO and are heavily focused on image
based
  deployments.  Therefore, it seems to make sense that TripleO also
  needs a way to version/tag known good images.
  
  Given that, I think it makes sense  to do this in a way so that it's
  consumable for things other than just templates.  In fact, you can
  almost s/template/image/g on the Heater wiki page, and it pretty
well
  lays out what I'd like to see for images as well.
  
  1. Glance is for big blobs of data not tiny templates.
  2. Versioning of a single resource is desired.
  3. Tagging/classifying/listing/sorting
  4. Glance is designed to expose the uploaded blobs to nova, not
users
  
  My responses:
  
  1: Irrelevant. Smaller things will fit in it just fine.
  
  2: The swift API supports versions. We could also have git as a
  backend.
  
  I would definitely like to see a git backend for versioning.  No
  reason to reimplement a different solution for what already works
  well.  I'm not sure we'd want to put a whole image into git though.
  Perhaps just it's manifest (installed components, software versions,
  etc) in json format would go into git, and that would be associated
  back to the binary image via uuid.  That would even make it easy to
  diff changes between versions, etc.
  
  
  Right, git for a big 'ol image makes little sense.
  
  I'm suggesting that one might want to have two glances, one for
images
  which just uses swift versions and would just expose a list of
versions,
  and one for templates which would use git and thus expose more
features
  like a git remote for the repo. I'm not sure if glance has embraced
the
  extension paradigm yet, but this would fall nicely into it.
 
 Alternatively, Glance could have configurable backends for each image
 type allowing for optimization without the (often times messy)
extension
 mechanism? This is assuming it doesn't do this already - I really need
to
 start digging here. In the spirit of general OpenStack architectural
 paradigms, as long as the service exposes a consistent interface for
 templates that includes versioning support, the back-end store and
 (possibly) the versioning engine should certainly be configurable.
 Swift probably makes a decent first/default implementation.
 
 I'm not sure why we are attempting to fit a round peg in a square hole
 here. Glance's entire design is built around serving images and there
 absolutely is a difference between serving BLOBs versus relatively small
 templates. Just look at the existing implementations, Heat already
stores
 the template directly in the database while Glance stores the blob in an
 external file system.
 

What aspects of circles and squares do you find match glance and the
Heater problem space? :-P

If this were as simple as geometric shape matching, my 4-year old could
do it, right? :) Let's use analogies, I _love_ analogies, but perhaps
lets be more careful about whether they actually aid the discussion.

The blobs in the heat database are going to be a problem actually. I've
dealt with a very similar relatively large system, storing millions
of resumes in a single table for a job candidate database. It performs
horribly even if you shard. These weren't 50MB resumes, these were 30 -
50 kB resumes. That is because every time you need to pull all the rows
you have to pull giant amounts of data and generally just blow out all
of the caches, buffers, etc. Keystone's token table for the sql token
backend has the same problem. RDBMS's are good at storing and retrieving
rows of relatively predictably sized data. They suck for documents.

Heat would do well to just store template references and fetch them in
the rare cases where the raw templates are needed (updates, user asks to
see it, etc). That could so easily be a glance reference.  And if Glance
was suddenly much smarter and more capable of listing/searching/etc. the
things in its database, then Heat gets a win, and so does Nova.

 I'm not opposed to having Glance as an optional configurable 

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Clint Byrum
Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:
 Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
  On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
   wrote:
 
   Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
   Why not just use glance?
  
  
   I've asked that question a few times, and I think I can collate the
   responses I've received below. I think enhancing glance to do these
   things is on the table:
  
   1. Glance is for big blobs of data not tiny templates.
   2. Versioning of a single resource is desired.
   3. Tagging/classifying/listing/sorting
   4. Glance is designed to expose the uploaded blobs to nova, not users
  
   My responses:
  
   1: Irrelevant. Smaller things will fit in it just fine.
 
  Fitting is one thing, optimizations around particular assumptions about 
  the size of data and the frequency of reads/writes might be an issue, but 
  I admit to ignorance about those details in Glance.
 
 
 Optimizations can be improved for various use cases. The design, however,
 has no assumptions that I know about that would invalidate storing blobs
 of yaml/json vs. blobs of kernel/qcow2/raw image.
 
 I think we are getting out into the weeds a little bit here. It is important 
 to think about these apis in terms of what they actually do, before the 
 decision of combining them or not can be made.
 
 I think of HeatR as a template storage service, it provides extra data and 
 operations on templates. HeatR should not care about how those templates are 
 stored.
 Glance is an image storage service, it provides extra data and operations on 
 images (not blobs), and it happens to use swift as a backend.
 
 If HeatR and Glance were combined, it would result in taking two very 
 different types of data (template metadata vs image metadata) and mashing 
 them into one service. How would adding the complexity of HeatR benefit 
 Glance, when they are dealing with conceptually two very different types of 
 data? For instance, should a template ever care about the field minRam that 
 is stored with an image? Combining them adds a huge development complexity 
 with a very small operations payoff, and OpenStack is already so 
 operationally complex that the additional burden of HeatR as a separate 
 service would be negligible. Only clients of Heat will ever care about data 
 and operations on templates, so I move that HeatR becomes its own service, 
 or becomes part of Heat.
 

I spoke at length via G+ with Randall and Tim about this earlier today.
I think I understand the impetus for all of this a little better now.

Basically what I'm suggesting is that Glance is only narrow in scope
because that was the only object that OpenStack needed a catalog for
before now.

However, the overlap between a catalog of images and a catalog of
templates is quite comprehensive. The individual fields that matter to
images are different than the ones that matter to templates, but that
is a really minor detail isn't it?

I would suggest that Glance be slightly expanded in scope to be an
object catalog. Each object type can have its own set of fields that
matter to it.
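
Purely as a thought experiment (every field name below is invented for
illustration), such a catalog entry for a template might look like:

    {
        "id": "0a1b2c",
        "type": "heat-template",
        "name": "wordpress-single-node",
        "version": "2013-12-05",
        "tags": ["wordpress", "single-node"],
        "blob_ref": "swift://templates/wordpress-single-node/2013-12-05"
    }

while an image entry would keep its existing image-specific fields under the
same catalog machinery.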

This doesn't have to be a minor change to glance to still have many
advantages over writing something from scratch and asking people to
deploy another service that is 99% the same as Glance.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Tim Schnell
The original intention was to propose the Heater functionality as part of Heat. 
I would just like to clarify that having Heater as a separate project is not 
because of the impact of the gerrit reviews. The proposal for a separate Core 
team or sub-project team was to solve the impact on reviews. Heater is now 
being proposed as a separate project from Heat because previous conversations 
with the Heat core team have led us to believe that Heat is not in the business 
of managing templates.

Thanks,
Tim

From: Tim Bell tim.b...@cern.ch
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, December 5, 2013 2:13 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [heat] [glance] Heater Proposal

Completely agree with Brad… a new project for this is not what is needed.

From an operator’s point of view, it is a REAL, REAL, REAL pain to be 
configuring yet another project: yet another set of Puppet/Chef recipes, 
additional monitoring, service nodes, new databases, more documentation, 
further packages, pre-requisites, and all the divergent roles that follow…

While I appreciate that there is an impact from having reviews, this should not 
be the cause for creating more complexity for those who wish to use the 
functionality.

I’d also suggest the Heater guys talk to the Murano guys since my ideal 
scenario is for a simple user kiosk where I can ask for an app, be asked for 
some configuration details and then deploy. The two projects appear to be 
heading towards a converged solution.

Tim

From: Brad Topol [mailto:bto...@us.ibm.com]
Sent: 05 December 2013 20:06
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [heat] [glance] Heater Proposal

Lots of good discussion on this topic. One thing I would like to point out is 
that we get feedback that OpenStack has too many projects as it is, and 
customers get confused about how much of OpenStack they need to install. So in 
the spirit of trying to help ensure OpenStack does not continue to reinforce 
this perception, I am hoping that the Heater functionality finds a home in 
either Glance or Heat. I don't have a preference as to which. Either of these 
is superior to starting a new project if it can be avoided.

Thanks,

Brad

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet: bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From: Randall Burt randall.b...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: 12/05/2013 12:09 PM
Subject: Re: [openstack-dev] [heat] [glance] Heater Proposal




On Dec 5, 2013, at 10:10 AM, Clint Byrum cl...@fewbar.com
wrote:

 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?


 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:

 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users

 My responses:

 1: Irrelevant. Smaller things will fit in it just fine.

Fitting is one thing, optimizations around particular assumptions about the 
size of data and the frequency of reads/writes might be an issue, but I admit 
to ignorance about those details in Glance.

 2: The swift API supports versions. We could also have git as a
 backend. This feels like something we can add as an optional feature
 without exploding Glance's scope and I imagine it would actually be a
 welcome feature for image authors as well. Think about Ubuntu maintaining
 official images. If they can keep the ID the same and just add a version
 (allowing users to lock down to a version if updated images cause issue)
 that seems like a really cool feature for images _and_ templates.

Agreed, though one could argue that using image names and looking up IDs, or 
just using IDs as appropriate, sort of handles this use case; but I agree that 
having image versioning seems a reasonable feature for Glance to have as well.
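
As a side note on point 2, Swift's versioned-writes feature already behaves
roughly this way today. Below is a minimal sketch with python-swiftclient; the
auth URL, credentials and container names are made up for illustration.

# Sketch of Swift object versioning via the X-Versions-Location container
# header; endpoint, credentials and container names are placeholders.
from swiftclient import client as swift_client

conn = swift_client.Connection(
    authurl="http://swift.example.com:8080/auth/v1.0",
    user="tenant:user", key="secret")

# Overwritten copies of objects in 'templates' are archived in 'templates-versions'.
conn.put_container("templates-versions")
conn.put_container("templates",
                   headers={"X-Versions-Location": "templates-versions"})

# Each PUT to the same object name keeps the previous copy around, so a
# template (or image) could keep a stable name/ID while its content evolves.
conn.put_object("templates", "wordpress.yaml",
                contents="heat_template_version: 2013-05-23\n")
conn.put_object("templates", "wordpress.yaml",
                contents="heat_template_version: 2013-05-23\n# revised\n")

headers, latest = conn.get_object("templates", "wordpress.yaml")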

 3: I'm sure glance image users would love to have those too.

And image metadata is already there so we don't have to go through those 
discussions all over again ;).

 4: Irrelevant. Heat will need to download templates just like nova, and
 making images publicly downloadable is also a thing in glance.

Yeah, this was the kicker for me. I'd been thinking of adding the 

Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-05 Thread Mark McLoughlin
On Mon, 2013-12-02 at 11:00 -0500, Doug Hellmann wrote:

 I have updated the Oslo wiki page with these details and would appreciate
 feedback on the wording used there.
 
 https://wiki.openstack.org/wiki/Oslo#Graduation

Thanks Doug, that sounds perfect to me.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Mark Washenberger
On Thu, Dec 5, 2013 at 1:05 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:


 On Dec 5, 2013, at 12:42 PM, Andrew Plunk andrew.pl...@rackspace.com
 wrote:

  Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
  On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
  wrote:
 
  Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
  Why not just use glance?
 
 
  I've asked that question a few times, and I think I can collate the
  responses I've received below. I think enhancing glance to do these
  things is on the table:
 
  1. Glance is for big blobs of data not tiny templates.
  2. Versioning of a single resource is desired.
  3. Tagging/classifying/listing/sorting
  4. Glance is designed to expose the uploaded blobs to nova, not users
 
  My responses:
 
  1: Irrelevant. Smaller things will fit in it just fine.
 
  Fitting is one thing, optimizations around particular assumptions
 about the size of data and the frequency of reads/writes might be an issue,
 but I admit to ignorance about those details in Glance.
 
 
  Optimizations can be improved for various use cases. The design,
 however,
  has no assumptions that I know about that would invalidate storing blobs
  of yaml/json vs. blobs of kernel/qcow2/raw image.
 
  I think we are getting out into the weeds a little bit here. It is
 important to think about these apis in terms of what they actually do,
 before the decision of combining them or not can be made.
 
  I think of HeatR as a template storage service, it provides extra data
 and operations on templates. HeatR should not care about how those
 templates are stored.
  Glance is an image storage service, it provides extra data and
 operations on images (not blobs), and it happens to use swift as a backend.

 This is not completely correct. Glance already supports something akin to
 templates. You can create an image with metadata properties that
 specify a complex block device mapping, which would allow for multiple
 volumes and images to be connected to the VM at boot time. This is
 functionally a template for a single VM.
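
To illustrate the kind of image metadata being described here, a minimal
sketch assuming the Havana-era v1 python-glanceclient; the image ID and the
property name/format are illustrative assumptions, not a definitive schema.

# Hedged sketch: attach a block-device-mapping-style property to an existing
# Glance image so that booting from it implies extra volumes. The property
# name and its JSON layout here are illustrative, not an official contract.
import json
from glanceclient import Client

glance = Client('1', endpoint="http://glance.example.com:9292", token="AUTH_TOKEN")

image = glance.images.get("img-1")  # hypothetical image ID
glance.images.update(image, properties={
    "block_device_mapping": json.dumps([
        {"device_name": "vda", "source_type": "image", "boot_index": 0},
        {"device_name": "vdb", "source_type": "blank", "volume_size": 10},
    ]),
})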

 Glance is pretty useless if it is just an image storage service; we already
 have other places that can store bits (Swift, Cinder). It is much more
 valuable as a searchable repository of bootable templates. I don't see any
 reason why this idea couldn't be extended to include more complex templates
 that could include more than one VM.


FWIW I agree with all of this. I think Glance's real role in OpenStack is
as a helper and optionally as a gatekeeper for the category of stuff Nova
can boot. So any parameter that affects what Nova is going to boot should
in my view be something Glance can be aware of. This list of parameters
*could* grow to include multiple device images, attached volumes, and other
things that currently live in the realm of flavors such as extra hardware
requirements and networking aspects.

Just so things don't go too crazy, I'll add that since Nova is generally
focused on provisioning individual VMs, anything above the level of an
individual VM should be out of scope for Glance.

I think Glance should alter its approach to be less generally agnostic
about the contents of the objects it hosts. Right now, we are just starting
to do this with images, as we slowly advance on offering server side format
conversion. We could find similar use cases for single vm templates.

It would be fantastic if we could figure out how to turn this idea into
some actionable work in late I/early J. It could be a fun thing to work on
at the midcycle meetup.



 We have discussed the future of glance a number of times, and if it is
 really just there to serve up blobs of data + metadata about images, it
 should go away. Glance should offer something more like the AWS image
 search console. And this could clearly support more than just images, you
 should be able to search for and launch more complicated templates as well.

 
  If HeatR and Glance were combined, it would result in taking two very
 different types of data (template metadata vs image metadata) and mashing
 them into one service. How would adding the complexity of HeatR benefit
 Glance, when they are dealing with conceptually two very different types of
 data? For instance, should a template ever care about the field minRam
 that is stored with an image?


 I don't see these as significantly different types of metadata. Metadata
 for Heat templates might be a bit broader (minFlavor?). I would think
 that a template would care about constraints like this, especially when you
 consider that a user might want to give a command to launch a template but
 then override certain characteristics.

 Vish

  Combining them adds a huge development complexity with a very small
 operations payoff, and while OpenStack is already operationally complex, the
 extra cost of HeatR as a separate service would be manageable. Only clients of
 Heat will ever care about data and operations on templates, so I 

[openstack-dev] [heat] Stack convergence first steps

2013-12-05 Thread Anderson Mesquita
Hey stackers,

We've been working towards making stack convergence (
https://blueprints.launchpad.net/heat/+spec/stack-convergence) ready, one step
at a time.  After the first patch was submitted we got positive feedback on it
as well as some good suggestions as to how to move it forward.

The first step (https://blueprints.launchpad.net/heat/+spec/stack-check) is
to get all the statuses back from the real-world resources and update our
stacks accordingly, so that we'll be able to move on to the next step:
converging the stack to its desired state, fixing any errors that may have happened.

We just submitted another WIP for review, and as we were doing it, a few
questions were raised that we'd like to get everybody's input on. Our
main concern is around the use and purpose of the `status` of a
stack/resource.  `status` currently appears to represent the status of the
last action taken, and it seems that we may need to repurpose it or
possibly create something else to represent a stack's health (i.e.
everything is up and running as expected, something smells fishy, something
broke, the stack is doomed).  We described this thoroughly here:
https://etherpad.openstack.org/p/heat-convergence
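
To make the status-versus-health distinction concrete, here is a rough sketch
of what a check pass could record. This is not Heat code; the data layout,
function names and health values below are assumptions for discussion only.

# Rough sketch only, not Heat's implementation. It keeps the status of the
# last action untouched and derives a separate "health" value by observing
# the real-world resources, which is the distinction raised above.
def check_stack(stack, observe):
    # observe(resource) -> True if the real resource looks healthy (hypothetical)
    unhealthy = [res["name"] for res in stack["resources"] if not observe(res)]
    if not unhealthy:
        stack["health"] = "HEALTHY"
    elif len(unhealthy) < len(stack["resources"]):
        stack["health"] = "DEGRADED"      # "something smells fishy"
    else:
        stack["health"] = "BROKEN"        # "stack is doomed"
    return stack["health"], unhealthy

# Example: the last action succeeded, but the server has since died.
stack = {"status": "CREATE_COMPLETE",
         "resources": [{"name": "server", "type": "OS::Nova::Server"},
                       {"name": "sec_group", "type": "OS::Neutron::SecurityGroup"}]}
health, bad = check_stack(stack, observe=lambda r: r["name"] != "server")
print(stack["status"], health, bad)       # CREATE_COMPLETE DEGRADED ['server']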

Any thoughts?

Cheers,

andersonvom/rblee88
pairing
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Clint Byrum
Excerpts from Tim Schnell's message of 2013-12-05 13:07:17 -0800:
 On 12/5/13 12:17 PM, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Tim Schnell's message of 2013-12-05 09:49:03 -0800:
  
  On 12/5/13 11:33 AM, Randall Burt randall.b...@rackspace.com wrote:
  
  On Dec 5, 2013, at 11:10 AM, Clint Byrum cl...@fewbar.com
   wrote:
 I would love to hear about future requirements for Heater that won't
 fit into Glance. But thus far, I am struggling to see where this could
 go that wouldn't be a place Glance also wants to go.
 
 I know we covered this a bit in the hangout but one possibility that I
 think I failed to articulate is the fact that the template is a parseable
 data representation while an image is a binary object. In my original
 email I mentioned having a discussion about moving some of the template
 metadata like application information and documentation into the HOT
 specification.
 

I've gone on record before as saying that having things in the template
schema that should be in the catalog is a mistake. I stand by that
assertion. However, that is just for HOT; for a full-blown packaging format
that may not be the case, and so I agree there may be a case for parsing
the contents of the uploaded object.

 This has important ramifications for the user experience as well as
 whether or not abstracting Glance to one-size-fits-all is actually
 feasible at an implementation level. Ideally the Heat template would serve
 as the source of truth for information like application-version and
 template documentation. In the original Heater proposal this is
 important because this data would have to be extracted from the template
 and stored in a metadata field if it is relevant to the way that Heater
 allows you to search or index different templates.
 

Given a separate service, how would you design things differently from
Glance to make this parsing work? As I recall, when you create an
image for Glance, it just makes a record in the registry and a place to
upload it to in the storage. Would you do this differently? Does that
invalidate parsing?

Presumably after uploading your template to storage, you would need a
process which inspects the data and populates the registry. How would
Glance's design preclude this, given the more generic object schema that
we discussed?
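
As a concrete, if simplified, sketch of that inspect-and-populate step; the
registry record layout below is an illustrative assumption rather than a
proposed schema, and only the template keys mirror real HOT sections.

# Simplified sketch of "upload the blob, then a process inspects it and
# populates the registry". Record fields are made up for this discussion.
import yaml

UPLOADED_TEMPLATE = """\
heat_template_version: 2013-05-23
description: A single WordPress server
parameters:
  flavor: {type: string, default: m1.small}
resources:
  server: {type: OS::Nova::Server}
"""

def populate_registry(object_id, raw_text):
    # Parse the stored blob and derive searchable metadata from it.
    doc = yaml.safe_load(raw_text)
    return {
        "id": object_id,
        "object_type": "template",
        "template_version": str(doc.get("heat_template_version")),
        "description": doc.get("description", ""),
        "parameters": sorted(doc.get("parameters", {})),
        "resource_types": sorted({r["type"] for r in doc.get("resources", {}).values()}),
    }

record = populate_registry("tpl-1", UPLOADED_TEMPLATE)
# {'template_version': '2013-05-23', 'parameters': ['flavor'],
#  'resource_types': ['OS::Nova::Server'], ...}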

 If we had to pass a duplicate version of this metadata in the HTTP Request
 we could avoid grabbing this information from the template but we open up
 data integrity issues. What happens when the end-user updates the metadata
 but not the template? At the same time, I would reiterate that we can't
 just make this data a Heater/Template Catalog metadata issue because we
 want the experience of the template to be consistent regardless of how the
 end-user obtains the template. This is why I would consider this
 information to belong as part of the HOT specification.
 

Don't take this the wrong way, but that would be absurd. Either you
maintain it as the source of truth, or it is derived from the source
of truth. What if I upload lolcatz instead of a template? I can
has meta-data where there is none? Let's stick to the main question:
can Glance's scope be expanded to have data derived from the objects
themselves? I think the answer is a resounding yes.

 This is relevant to the Glance conversation because, just by the template
 existing in a different format than an image, we have already introduced
 logic in Glance that cannot be abstracted. We could most likely build an
 abstraction layer that covers some large percentage of the overlap in CRUD
 operations between images and templates but I do not think that we can
 expect to do this for every use case with Heat templates.
 

Who said that there would not be logic per object type? The point was to
have a generic object registry, but not an abstract object registry. You
still instantiate them by uploading content and in doing so the objects
take on all of the attributes of their class. So images are opaque.
Things that allow parsing are not.
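
A tiny sketch of that "generic registry, per-type logic" idea; the handler
table and type names are made up for illustration, not an actual design.

# One generic registry, but each object class decides whether its content is
# opaque or parseable. Images are stored untouched; templates get inspected.
TYPE_HANDLERS = {
    "image": None,                                  # opaque: store, never parse
    "template": lambda raw: {"first_line": raw.splitlines()[0] if raw else "",
                             "size": len(raw)},     # parseable: derive metadata
}

def register(object_type, raw):
    handler = TYPE_HANDLERS[object_type]
    derived = handler(raw) if handler is not None else {}
    return {"object_type": object_type, "derived_metadata": derived}

print(register("template", "heat_template_version: 2013-05-23\n"))
print(register("image", ""))   # binary image content is simply not inspected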

 I know that I cannot provide a future use case that would exemplify this
 problem but I can imagine that if we choose this design path going
 forward, then we will continue to increase the scope of Glance with every
 new type of object (maybe Solum assemblies?).
 

If Heater were its own service, and Solum assemblies were added,
requiring some new things like multi-file objects, would you support
Solum developers wanting to write a new registry because their use-case
is slightly different?

 I do agree that in general, we should continue to attempt to drive
 consistency and congruency across Openstack projects when possible. I am
 just concerned that this design will take us down rabbit holes both now
 and in the future when it comes to the implementation details. In the
 bigger picture I am attempting to weigh the value of building this
 complexity into Glance against putting Heater into its own project or Heat.
 

Writing a new 

Re: [openstack-dev] [Nova][TripleO] Nested resources

2013-12-05 Thread Mark McLoughlin
Hi Kevin,

On Mon, 2013-12-02 at 12:39 -0800, Fox, Kevin M wrote:
 Hi all,
 
 I just want to run a crazy idea up the flag pole. TripleO has the
 concept of an under and over cloud. In starting to experiment with
 Docker, I see a pattern start to emerge.
 
  * As a User, I may want to allocate a BareMetal node so that it is
 entirely mine. I may want to run multiple VM's on it to reduce my own
 cost. Now I have to manage the BareMetal nodes myself or nest
 OpenStack into them.
  * As a User, I may want to allocate a VM. I then want to run multiple
 Docker containers on it to use it more efficiently. Now I have to
 manage the VM's myself or nest OpenStack into them.
  * As a User, I may want to allocate a BareMetal node so that it is
 entirely mine. I then want to run multiple Docker containers on it to
 use it more efficiently. Now I have to manage the BareMetal nodes
 myself or nest OpenStack into them.
 
 I think this can then be generalized to:
 As a User, I would like to ask for resources of one type (One AZ?),
 and be able to delegate resources back to Nova so that I can use Nova
 to subdivide and give me access to my resources as a different type.
 (As a different AZ?)
 
 I think this could potentially cover some of the TripleO stuff without
 needing an over/under cloud. For that use case, all the BareMetal
 nodes could be added to Nova as such, allocated by the services
 tenant as running a nested VM image type resource stack, and then made
 available to all tenants. Sys admins then could dynamically shift
 resources from VM providing nodes to BareMetal Nodes and back as
 needed.
 
 This allows a user to allocate some raw resources as a group, then
 schedule higher level services to run only in that group, all with the
 existing api.
 
 Just how crazy an idea is this?

FWIW, I don't think it's a crazy idea at all - indeed I mumbled
something similar a few times in conversation with random people over
the past few months :)

With the increasing interest in containers, it makes a tonne of sense -
you provision a number of VMs and now you want to carve them up by
allocating containers on them. Right now, you'd need to run your own
instance of Nova for that ... which is far too heavyweight.

It is a little crazy in the sense that it's a tonne of work, though.
There's not a whole lot of point in discussing it too much until someone
shows signs of wanting to implement it :)

One problem is how the API would model this nesting; another problem is
making the scheduler aware that some nodes are only available to the
tenant which owns them; but maybe a bigger problem is the security model
around allowing a node managed by an untrusted tenant to become a compute node.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Heater Proposal

2013-12-05 Thread Randall Burt
On Dec 5, 2013, at 4:08 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com
 wrote:

Hi,

I am really glad to see a line of thinking close to what we at Murano see as 
the right direction for OpenStack development. This is a good initiative which 
will potentially be useful for other projects.  We have a very similar idea about 
a repository in the Murano project, and we have even implemented the first version 
of it. We are very open to collaboration and exchanging ideas.


In terms of overlap with Murano, I can see overlap in the area of the Murano 
Metadata Repository. We have already done some work in this area and you can 
find the detailed description here: 
https://wiki.openstack.org/wiki/Murano/SimplifiedMetadataRepository. The 
implementation of the first version is already done and we plan to include it 
in the Murano 0.4 release, which will go out in a week.


For the future roadmap, with more advanced functionality, we have created an 
etherpad:  https://etherpad.openstack.org/p/MuranoMetadata


My concerns around Heater lie in two areas:

- Fit for OpenStack Orchestration program

Do you mean to imply that a repository of orchestration templates is a bad fit 
for the orchestration program?

- Too narrow a focus as formulated right now, making it hard for other projects 
like Murano to take advantage of this service as a general-purpose metadata 
repository

That's what the discussion around using Glance is about, though. The proposal 
started out as a separate service, but arguments are being made that the use 
cases fit into Glance. The use cases don't change, as they're focused on templates 
and not general object cataloging, but that's something to sort out if/when we land 
on an implementation.

I am not sure how a metadata repository relates to the orchestration program, as it 
does not orchestrate anything. I would rather consider creating a separate 
Service Catalog/Metadata Repository program, or consider storage programs like 
Glance or Swift, as Heater has a similar feature set. If you replace 
“template” with “object” you are actually proposing a new Swift implementation, 
replacing Swift’s existing versioning, ACLs, and metadata for objects.

Doesn't that same argument hold for the Murano Metadata Repository as well? And, as 
initially proposed, it's not a generic metadata repository but a template 
cataloging system. The difference may be academic, but I think it's important. 
That being said, maybe there's a case for something even more generic (store 
some meta information about some consumable artifact and a pointer to where to 
get it), but IMO the arguments for Glance then become more compelling (not 
that I've bought into that completely yet).

Murano as an Application Catalog could also be a fit, but I don’t insist :)

It sounds to me like conceptually it would suffer from the same scoping issues 
we're already discussing though.

At the moment Heat is not opinionated about template placement, and this 
provides a lot of flexibility for other projects which use Heat under the hood. 
With your proposal, you are creating a new metadata repository solution for the 
specific use case of template storage, making Heat much more prescriptive.

I'm not sure where this impression comes from. The Heat orchestration 
API/engine would in no way be dependent on the repository. Heat would still 
accept and orchestrate any template you passed it. At best, Heat would be 
extended to be aware of catalog URLs and template IDs, but it was in no way 
ever meant to imply that Heat would be modified to *only* work with 
templates from a catalog, or require any of the catalog metadata to function in 
its core role as an orchestration engine.

Combined with TC policy, which requires projects to use existing code, this 
could be a big problem, because other projects might want to keep not only the 
Heat template but other components and metadata as well. Murano is a good example 
of that: it has multiple objects associated with an Application, and the Heat 
template is only one of them.  That would mean that other projects would either 
need to duplicate the functionality of the catalog or significantly restrict 
their own functionality.

Or Murano could also extend Glance functionality to include these sorts of 
composite artifacts similar to how Vish described earlier in the thread.

The scariest thing for me is that you propose to add metadata information to 
the HOT template. In Murano we keep UI information in a separate file, and this 
gives us flexibility in how to render the UI for the same Heat template in 
different Applications. This is a question of separation of concerns: Heat, as 
deployment orchestration, should not interfere with the UI.

While I agree that things related to specific UI strategies don't really belong 
in the template, I think you may have misconstrued the example cited. As I 
understood it, it was an if we did this, then wouldn't we have to do this 
unsightly thing? example. Could be that I 

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Randall Burt
On Dec 5, 2013, at 4:45 PM, Steve Baker sba...@redhat.com
 wrote:

On 12/06/2013 10:46 AM, Mark Washenberger wrote:



On Thu, Dec 5, 2013 at 1:05 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:

On Dec 5, 2013, at 12:42 PM, Andrew Plunk andrew.pl...@rackspace.com wrote:

 Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
 On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
 wrote:

 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?


 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:

 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users

 My responses:

 1: Irrelevant. Smaller things will fit in it just fine.

 Fitting is one thing, optimizations around particular assumptions about the 
 size of data and the frequency of reads/writes might be an issue, but I 
 admit to ignorance about those details in Glance.


 Optimizations can be improved for various use cases. The design, however,
 has no assumptions that I know about that would invalidate storing blobs
 of yaml/json vs. blobs of kernel/qcow2/raw image.

 I think we are getting out into the weeds a little bit here. It is important 
 to think about these apis in terms of what they actually do, before the 
 decision of combining them or not can be made.

 I think of HeatR as a template storage service, it provides extra data and 
 operations on templates. HeatR should not care about how those templates are 
 stored.
 Glance is an image storage service, it provides extra data and operations on 
 images (not blobs), and it happens to use swift as a backend.

This is not completely correct. Glance already supports something akin to 
templates. You can create an image with metadata properties that specify a 
complex block device mapping, which would allow for multiple volumes and images 
to be connected to the VM at boot time. This is functionally a template for a 
single VM.

Glance is pretty useless if it is just an image storage service; we already have 
other places that can store bits (Swift, Cinder). It is much more valuable as a 
searchable repository of bootable templates. I don't see any reason why this 
idea couldn't be extended to include more complex templates that could include 
more than one VM.

FWIW I agree with all of this. I think Glance's real role in OpenStack is as a 
helper and optionally as a gatekeeper for the category of stuff Nova can 
boot. So any parameter that affects what Nova is going to boot should in my 
view be something Glance can be aware of. This list of parameters *could* grow 
to include multiple device images, attached volumes, and other things that 
currently live in the realm of flavors such as extra hardware requirements and 
networking aspects.

Just so things don't go too crazy, I'll add that since Nova is generally 
focused on provisioning individual VMs, anything above the level of an 
individual VM should be out of scope for Glance.

I think Glance should alter its approach to be less generally agnostic about 
the contents of the objects it hosts. Right now, we are just starting to do 
this with images, as we slowly advance on offering server side format 
conversion. We could find similar use cases for single vm templates.


The average heat template would provision more than one VM, plus any number of 
other cloud resources.

An image is required to provision a single nova server;
a template is required to provision a single heat stack.

Hopefully the above single vm policy could be reworded to be agnostic to the 
service which consumes the object that glance is storing.

To add to this, is it that Glance wants to be *more* integrated and geared 
towards VM or container images, or that Glance wishes to have more intimate 
knowledge of the things it's cataloging *regardless of what those things 
actually might be*? The reason I ask is that Glance supporting only single-VM 
templates, when Heat orchestrates the entire (or almost entire) spectrum of 
core and integrated projects, means that its suitability as a candidate for a 
template repository plummets quite a bit.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

