Re: [openstack-dev] [Nova] [Ironic] [Infra] Making Ironic vote as a third-party Nova driver

2014-05-29 Thread Joshua Hesketh

On 5/29/14 8:52 AM, James E. Blair wrote:

Devananda van der Veen devananda@gmail.com writes:


Hi all!

This is a follow-up to several summit discussions on
how-do-we-deprecate-baremetal, a summary of the plan forward, a call to
raise awareness of the project's status, and hopefully gain some interest
from folks on nova-core to help with spec and code reviews.

The nova.virt.ironic driver lives in Ironic's git tree today [1]. We're
cleaning it up and submitting it to Nova again this cycle. I've posted
specs [2] outlining the design and planned upgrade process. Earlier today,
we enabled voting in Ironic's check and gate queues for the
tempest-dsvm-virtual-ironic job. This runs a tempest scenario test [3]
against devstack, exercising Nova with the Ironic driver to PXE boot a
virtual machine. It has been running for a few months on Ironic, and has
been stable for more than a month. However, because Ironic is not
integrated, we also can't vote in check/gate queues on integrated projects
(like Nova). We can - and do - report the test result in a non-voting way,
though that's easy to miss, since it looks like every other non-voting test.

At the summit [4], it was suggested that we make this job report as though
it were a third-party CI test for a Nova driver. This would be removed at
the time that Ironic graduates and the job is allowed to vote in the gate.
Until that time, I'm happy to have the nova.virt.ironic driver reporting as
a third-party driver (even though it's not) simply to help raise awareness
(third-party CI jobs are watched more closely than non-voting jobs) and
decrease the likelihood that Nova developers will inadvertently break
Ironic's gate.

Given that there's a concrete plan forward, why am I sending this email to
all three teams? A few reasons:
- document the plan that we discussed
- many people from infra and nova were not present during the discussion
and may not be aware of the details
- I may have gotten something wrong (it was a long week)
- and mostly because I don't technically know how to make an upstream job
report as though it's a third-party job, and am hoping someone wants to
volunteer to help figure that out

I think it's a reasonable plan.  To elaborate a bit, I think we
identified three categories of jobs that we run:

a) jobs that are voting
b) jobs that are non-voting because they are advisory
c) jobs that are non-voting for policy reasons but we feel fairly
strongly about

There's a pretty subtle distinction between b and c.  Ideally, there
shouldn't be any.  We've tried to minimize the number of non-voting jobs
to make sure that people don't ignore them.  Nonetheless, it seems that
a large enough number of people still do that non-voting jobs are
considered ineffective in Nova.  I think it's worth noting the potential
danger of de-emphasizing the actual results.  It may make other
non-voting jobs even less effective than they already are.

The intent is to make the jobs described by (c) into voting jobs, but in
a way that they can still be overridden if need be.  The aim is to help
new (eg, incubated) projects join the integrated gate in a way that lets
them prove they are sufficiently mature to do so without impacting the
currently integrated projects.  I believe we're currently thinking that
point is after their integration approval.  If we are comfortable with
incubated projects being able to block the integrated gate earlier, we
could simply make the non-voting jobs voting instead.

Back to the proposal at hand.  I think we should call the kinds of jobs
described in (c) non-binding.

The best way to do that is to register a second user with Gerrit for
Zuul to use, and have it report non-binding jobs with a +/- 1 vote in
the check queue that is separate from the normal Jenkins vote.  In
order to do that, we will have to modify Zuul to be able to support a
second user, and associate that user with a pipeline.  Then configure a
new non-binding pipeline to use that user and run the desired jobs.
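
To make that concrete, here is a toy sketch (not Zuul's actual code or
configuration; the class and account names are made up) of what binding a
reporting account to a pipeline amounts to, so that a non-binding pipeline
votes from a second Gerrit user while check and gate keep using the normal one:

    # Toy sketch only -- not Zuul internals; a real implementation reports
    # to Gerrit over SSH/REST rather than printing.
    class GerritAccount(object):
        def __init__(self, username):
            self.username = username

        def review(self, change, vote, message):
            print('%s votes %+d on %s: %s' % (self.username, vote, change, message))

    class Pipeline(object):
        def __init__(self, name, account):
            self.name = name
            self.account = account

        def report(self, change, success):
            self.account.review(change, 1 if success else -1,
                                'Results of the %s pipeline' % self.name)

    check = Pipeline('check', GerritAccount('jenkins'))
    non_binding = Pipeline('check-non-binding', GerritAccount('jenkins-non-binding'))
    non_binding.report('12345,1', success=False)

The whole trick is the separation of accounts: a -1 from the non-binding
account is visible at a glance but does not gate anything.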

Note that a similar problem of curation may occur with the non-binding
jobs.  If we run jobs for the incubated projects Foo and Bar, they will
share a vote in Gerrit, and Nova developers will have to examine the
results of -1 votes; if Bar consistently fails tests, it may need to be
made non-voting or removed to avoid obscuring Foo's results.

I expect the Zuul modification to take an experienced Zuul developer
about 2-3 days to write, or an inexperienced one about a week.  If no
one else has started it by then, I will probably have some time around
the middle of the cycle to hack on it.  In the mean time, we may want to
make sure that the number of non-voting jobs is at a minimum (and
further reduce them if possible), and emphasize to reviewers the
importance of checking posted results.


I like this plan. I can make a start on implementing this :-)

Cheers,
Josh



-Jim

___
OpenStack-dev mailing list

Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-05-29 Thread James Polley
Thanks for putting this together Jaromir!

August 1 may be a problem for those of us from Australia - it's the day of
the OpenStack miniconf at Pycon-AU.

I don't know if any of us intended to go to that, but if we did we'd need
to leave no later than the 4:40pm flight on July 30 in order to make it
back in time - and that would have us arriving in Brisbane at 7am on the
day of the miniconf.

If we stayed till midday Friday and caught the 4:40pm flight out, we'd
arrive in BNE at 7am on the 3rd - just in time for the last day of talks
and the two days of sprints.

As I said, I'm not sure how many other people from this part of the world
had intended to go to Pycon-AU and the openstack miniconf, so I'm not sure
how much of a problem this is.


On Wed, May 28, 2014 at 9:05 PM, Jaromir Coufal jcou...@redhat.com wrote:

 Hi to all,

 after previous TripleO & Ironic mid-cycle meetup, which I believe was
 beneficial for all, I would like to suggest that we meet again in the
 middle of Juno cycle to discuss current progress, blockers, next steps and
 of course get some beer all together :)

 Last time, TripleO and Ironic merged their meetings together and I think
 it was great idea. This time I would like to invite also Heat team if they
 want to join. Our cooperation is increasing and I think it would be great,
 if we can discuss all issues together.

 Red Hat offered to host this event, so I am very happy to invite you all
 and I would like to ask, who would come if there was a mid-cycle meetup in
 following dates and place:

 * July 28 - Aug 1
 * Red Hat office, Raleigh, North Carolina

 If you are intending to join, please, fill yourselves into this etherpad:
 https://etherpad.openstack.org/p/juno-midcycle-meetup

 Cheers
 -- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-05-29 Thread Tihomir Trifonov
Hi Gabriel,

thanks for joining the conversation. It is quite an old discussion, one that was
also raised at the Atlanta design summit, and it seems that the moment for
this to happen has finally come. It is always nice to hear the opinion of a
real Django core developer.

Just a follow-up from the discussions in Atlanta for those who didn't
attend - since there are some new Web UI projects in OpenStack, there is
a good reason to make a common web framework to be used by all
projects (something like a common layer on top of Django, shared across
projects), so that each project only needs to add its custom logic and
views. Currently there is a lot of duplicated and similar code in different
projects that could be unified. This will also make it very easy to add new
openstack-related dashboard sites, keeping them consistent with other sites,
and hence make contributions from other projects' developers easier.

Back on the naming thing - since Horizon has from the very beginning kept the
common files (base templates and scripts), isn't it better to keep its
name? Since Horizon is the base on which the different Dashboard
projects will be built, I'd propose a kind of broader naming concept -
what comes on the horizon? A *Rainbow*? A *Storm*? This would allow the
easy selection of new names when needed, and all will be related to Horizon
in some way. I'm not sure if this makes sense, but I personally like it when
project names have something in common :)

About the plan - it seems reasonable to me; count me in for the big rush.



On Wed, May 28, 2014 at 11:03 PM, Gabriel Hurley
gabriel.hur...@nebula.com wrote:

 It's sort of a silly point, but as someone who would likely consume the
 split-off package outside of the OpenStack context, please give it a proper
 name instead of django_horizon. The module only works in Django, the name
 adds both clutter and confusion, and it's against the Django community's
 conventions to have the python import name be prefixed with django_.

 If the name horizon needs to stay with the reference implementation of
 the dashboard rather than keeping it with the framework as-is that's fine,
 but choose a new real name for the framework code.

 Just to get the ball rolling, and as a nod to the old-timers, I'll propose
 the runner up in the original naming debate: bourbon. ;-)

 If there are architectural questions I can help with in this process let
 me know. I'll try to keep an eye on the progress as it goes.

 All the best,

- Gabriel

  -Original Message-
  From: Radomir Dopieralski [mailto:openst...@sheep.art.pl]
  Sent: Wednesday, May 28, 2014 5:55 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [horizon][infra] Plan for the splitting of
 Horizon into
  two repositories
 
  Hello,
 
  we plan to finally do the split in this cycle, and I started some
 preparations for
  that. I also started to prepare a detailed plan for the whole operation,
 as it
  seems to be a rather big endeavor.
 
  You can view and amend the plan at the etherpad at:
  https://etherpad.openstack.org/p/horizon-split-plan
 
  It's still a little vague, but I plan to gradually get it more detailed.
  All the points are up for discussion, if anybody has any good ideas or
  suggestions, or can help in any way, please don't hesitate to add to this
  document.
 
  We still don't have any dates or anything -- I suppose we will work that
 out
  soonish.
 
  Oh, and great thanks to all the people who have helped me so far with
 it, I
  wouldn't even dream about trying such a thing without you. Also thanks in
  advance to anybody who plans to help!
 
  --
  Radomir Dopieralski
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards,
Tihomir Trifonov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] BPs for Juno-1

2014-05-29 Thread mar...@redhat.com
On 28/05/14 17:57, Kyle Mestery wrote:
 On Wed, May 28, 2014 at 12:41 AM, mar...@redhat.com mandr...@redhat.com 
 wrote:
 On 27/05/14 17:14, Kyle Mestery wrote:
 Hi Neutron developers:

 I've spent some time cleaning up the BPs for Juno-1, and they are
 documented at the link below [1]. There are a large number of BPs
 currently under review right now in neutron-specs. If we land some of
 those specs this week, it's possible some of these could make it into
 Juno-1, pending review cycles and such. But I just wanted to highlight
 that I removed a large number of BPs from targeting Juno-1 now which
 did not have specifications linked to them nor specifications which
 were actively under review in neutron-specs.

 Also, a gentle reminder that the process for submitting specifications
 to Neutron is documented here [2].

 Thanks, and please reach out to me if you have any questions!


 Hi Kyle,

 Can you please consider my PUT /subnets/subnet allocation_pools:{}
 review at [1] for Juno-1? Also, I see you have included a bug [2] and
 associated review [3] that I've worked on, but the review is already
 pushed to master. Is it there for any pending backports?

 Done, I've added the bug referenced in [2] to Juno-1.

 Thanks!

 
 With regards to [3] below, are you saying you would like to submit
 that as a backport to stable?

No I was more asking if that was the reason it was included (as it has
already been merged) - though I can do that if you think it's a good idea,

thanks, marios


 
 thanks! marios

 [1] https://review.openstack.org/#/c/62042/
 [2] https://bugs.launchpad.net/neutron/+bug/1255338
 [3] https://review.openstack.org/#/c/59212/



 Kyle

 [1] https://launchpad.net/neutron/+milestone/juno-1
 [2] https://wiki.openstack.org/wiki/Blueprints#Neutron



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-05-29 Thread Robert Collins
Is there any wiggle room on those dates? As James Polley says, PyCon
AU (and the Openstack miniconf, which I'm organising with JHesketh)
overlap significantly - and I can't be in two places at once.

However July 21-25th would be totally doable :)

-Rob

On 28 May 2014 23:05, Jaromir Coufal jcou...@redhat.com wrote:
 Hi to all,

 after previous TripleO & Ironic mid-cycle meetup, which I believe was
 beneficial for all, I would like to suggest that we meet again in the middle
 of Juno cycle to discuss current progress, blockers, next steps and of
 course get some beer all together :)

 Last time, TripleO and Ironic merged their meetings together and I think it
 was great idea. This time I would like to invite also Heat team if they want
 to join. Our cooperation is increasing and I think it would be great, if we
 can discuss all issues together.

 Red Hat offered to host this event, so I am very happy to invite you all and
 I would like to ask, who would come if there was a mid-cycle meetup in
 following dates and place:

 * July 28 - Aug 1
 * Red Hat office, Raleigh, North Carolina

 If you are intending to join, please, fill yourselves into this etherpad:
 https://etherpad.openstack.org/p/juno-midcycle-meetup

 Cheers
 -- Jarda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SSL VPN Implementation

2014-05-29 Thread Clark, Robert Graham
Just chiming in with a side-note.

I always liked the idea of Postern for things like this, though the crypto
geek in me always worries about making key retrieval too easy for
developers; bad things happen down that road.

The OSSG can help with overall secure design and would be happy to consult
if you'd like. We have a separate list for security-specific stuff but I
think a conversation on -dev would make more sense, just tag it [OSSG] :)

-Rob


On 28/05/2014 23:57, Nachi Ueno na...@ntti3.com wrote:

Hi Zang

Since the SSL-VPN blueprint for Juno is approved in neutron-specs,
I would like to restart this work.
Could you share your code if possible?
Also, let's discuss here how we can collaborate.

Best
Nachi


2014-05-01 14:40 GMT-07:00 Nachi Ueno na...@ntti3.com:
 Hi folks

 Clint
 Thanks, things get clear for me now :)





 2014-05-01 13:21 GMT-07:00 John Wood john.w...@rackspace.com:
 I was going to bring up Postern [1] as well, Clint. Unfortunately not
much work has been done on it though.

 [1] https://github.com/cloudkeep/postern

 Thanks,
 John



 
 From: Clint Byrum [cl...@fewbar.com]
 Sent: Thursday, May 01, 2014 2:22 PM
 To: openstack-dev
  Subject: Re: [openstack-dev] [Neutron] SSL VPN Implementation

 Excerpts from Nachi Ueno's message of 2014-05-01 12:04:23 -0700:
 Ah, I got it now!
 So even if the HDD gets stolen, we can keep the password safe.

 However, I'm still not sure why this is more secure...
 Anyway, the ID/PW to access Barbican will be written in neutron.conf,
 right?


 Yes. However, you can surround the secret in policies. You'll have an
 audit trail of when and where it was accessed, and you can even
restrict
 access, so that out of band you have to open up access with barbican.

 So while the server may have access, that access is now audited and
 limited by policy, instead of just being dependent on the security
 measures you can take to protect a file.

 Furthermore, the ID/PW for MySQL will be written in a conf file,
 so if we can't trust Unix file system protection, there is no security
 in OpenStack.

 The ID/PW for mysql only grants you access to mysql for as long as that
 id/pw are enabled for access. However, the encryption keys for OpenVPN
 will grant any passive listener access for as long as they keep any
 sniffed traffic. They'll also grant an attacker the ability to MITM
 traffic between peers.

 So when an encryption key has been accessed, from where, etc, is quite
 a bit more crucial than knowing when a username/password combo have
 been accessed.

 Producing a trustworthy audit log for access to
/etc/neutron/neutron.conf
 is a lot harder than producing an audit log for a REST API.

 So it isn't so much that file system permissions aren't enough, it is
 that file system observability is expensive.

 Note that at some point there was a POC to have a FUSE driver backed by
 Barbican called 'Postern' I think. That would make these discussions a
 lot simpler. :)


 2014-05-01 10:31 GMT-07:00 Clint Byrum cl...@fewbar.com:
  I think you'd do something like this (note that I don't know off the top
  of my head the barbican CLI or openvpn CLI switches... just
  pseudo-code):
 
  # Keep the rendered config on a throwaway tmpfs so the key never hits disk.
  oconf=$(mktemp -d /tmp/openvpnconfig.XXXXXX)
  mount -t tmpfs -o size=1M tmpfs $oconf
  # Fetch the secret config from Barbican (pseudo-command) and write it out.
  barbican get my-secret-openvpn-conf > $oconf/foo.conf
  openvpn --config $oconf/foo.conf --daemon
  # Once OpenVPN has read its config, tear the tmpfs down again.
  umount $oconf
  rmdir $oconf
 
  Excerpts from Nachi Ueno's message of 2014-05-01 10:15:26 -0700:
  Hi Robert
 
  Thank you for your suggestion.
  So your suggestion is to let the OpenVPN process download the key into memory
  directly from Barbican?
 
  2014-05-01 9:42 GMT-07:00 Clark, Robert Graham
robert.cl...@hp.com:
   Excuse me interrupting, but couldn't you treat the key as largely
   ephemeral: pull it down from Barbican, start the OpenVPN process and
   then purge the key?  It would of course still be resident in the memory
   of the OpenVPN process but should otherwise be protected against
   filesystem disk-residency issues.
  
  
   -Original Message-
   From: Nachi Ueno [mailto:na...@ntti3.com]
   Sent: 01 May 2014 17:36
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [Neutron] SSL VPN Implementation
  
   Hi Jarret
  
   IMO, Zang's point is the issue of saving the plain private key in the
   filesystem for OpenVPN.
   Isn't this the same even if we use Barbican?
  
  
  
  
  
   2014-05-01 2:56 GMT-07:00 Jarret Raim
jarret.r...@rackspace.com:
    Zang mentioned that part of the issue is that the private key has to
    be stored in the OpenVPN config file. If the config files are
    generated and can be stored, then storing the whole config file in
    Barbican protects the private key (and any other settings) without
    having to try to deliver the key to the OpenVPN endpoint in some
    non-standard way.
   
   
Jarret
   
On 4/30/14, 6:08 PM, Nachi Ueno na...@ntti3.com wrote:
   
Jarret
   
   Thanks!
   Currently, the 

Re: [openstack-dev] Recommended way of having a project admin

2014-05-29 Thread Ajaya Agrawal
Hi All,

The reason I ask this question on openstack-dev is that there is a lot of
confusion around other projects moving to keystone v3. Would it be a
problem if I use keystone v3 api for authenticating and point other
services to use keystone v2 with keystone v3 token? The main issue here is
keystone v2 does not support RBAC policies and other services don't
understand keystone v3.
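
For what it's worth, the kind of v3 policy rule we are after might look roughly
like the sketch below (written as a Python dict for readability; the rule and
target names are my assumptions, not an official Keystone sample):

    # Illustrative only -- rule and target names are assumptions.
    project_admin_policy = {
        "project_admin": "role:project_admin and project_id:%(target.project.id)s",
        "identity:create_grant": "rule:admin_required or rule:project_admin",
        "identity:list_users": "rule:admin_required or rule:project_admin",
    }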

Cheers,
Ajaya


On Wed, May 28, 2014 at 6:28 PM, Ajaya Agrawal ajku@gmail.com wrote:

 Hi All,

 We want to introduce a role of project admin in our cloud who can add
 users only in the project in which he is an admin. AFAIK RBAC policies are
 not supported by keystone v2 api. So I suppose we will need to use keystone
 v3 to support the concept of project admin. But I hear things like all the
 projects don't talk keystone v3 as of now.

 What is the recommended way of doing it?

 Cheers,
 Ajaya

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Removing Get and Delete Messages by ID

2014-05-29 Thread Flavio Percoco

On 28/05/14 17:01 +, Kurt Griffiths wrote:

Crew, as discussed in the last team meeting, I have updated the API v1.1 spec
to remove the ability to get one or more messages by ID. This was done to
remove unnecessary complexity from the API, and to make it easier to support
different types of message store backends. 


However, this now leaves us with asymmetric semantics. On the one hand, we do
not allow retrieving messages by ID, but we still support deleting them by ID.
It seems to me that deleting a message only makes sense in the context of a
claim or pop operation. In the case of a pop, the message is already deleted by
the time the client receives it, so I don’t see a need for including a message
ID in the response. When claiming a batch of messages, however, the client
still needs some way to delete each message after processing it. In this case,
we either need to allow the client to delete an entire batch of messages using
the claim ID, or we still need individual message IDs (hrefs) that can be
DELETEd. 


Deleting a batch of messages can be accomplished in V1.0 using “delete multiple
messages by ID”. Regardless of how it is done, I’ve been wondering if it is
actually an anti-pattern; if a worker crashes after processing N messages, but
before deleting those same N messages, the system is left with several messages
that another worker will pick up and potentially reprocess, although the work
has already been done. If the work is idempotent, this isn’t a big deal.
Otherwise, the client will have to have a way to check whether a message has
already been processed, ignoring it if it has. But whether it is 1 message or N
messages left in a bad state by the first worker, the other worker has to
follow the same logic, so perhaps it would make sense after all to simply allow
deleting entire batches of claimed messages by claim ID, and not worrying about
providing individual message hrefs/IDs for deletion.


There are some risks related to claiming a set of messages and processing
them in batch rather than processing 1 message at a time. However,
some of those risks are valid for both scenarios. For instance, if a
worker claims just 1 message and dies before deleting it, the server
will be left with an already processed message.

I believe this is very specific to each use-case. Based on their
needs, users will have to choose between popping messages out of the
queue or claiming them. One way to provide more info to the user is by
keeping track of how many times (or even the last time) a message has
been claimed. I'm not a big fan of this because it'll add more
complexity and, more importantly, we won't be able to support this on
the AMQP driver.

It's common to have this kind of 'tolerance' implemented in the
client-side. The server must guarantee the delivery mechanism whereas
the client must be tolerant enough based on the use-case.
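
To make the trade-off concrete, a purely illustrative worker loop (the queue
object and its method names are hypothetical, not Marconi's actual API) that
claims a batch, skips already-seen messages for idempotency, and releases
everything with a single delete-by-claim call could look like this:

    # Hypothetical client sketch -- these method names are not Marconi's API.
    def drain(queue, seen_ids, handle, batch=10):
        claim = queue.claim(limit=batch, ttl=60, grace=30)
        for msg in claim.messages:
            if msg.id in seen_ids:
                # Idempotency guard: another worker may have processed this
                # message and crashed before it could be deleted.
                continue
            handle(msg.body)
            seen_ids.add(msg.id)
        queue.delete_claimed(claim.id)  # one call releases the whole batch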



With all this in mind, I’m starting to wonder if I should revert my changes to
the spec, and wait to address these changes in the v2.0 API, since it seems
that to do this right, we need to make some changes that are anything but
“minor” (for a minor release).

What does everyone think? Should we postpone this work to 2.0? 


I think this is quite a big change to have in a minor release. I vote
for doing this in v2.0 and I'd also like us to put more thought into
it. For example, accessing messages by-id seems to be an important
feature in SQS. I'm not saying the decision must be based on that but
since both Marconi's and SQS's targets are very similar, we should
probably take a deeper look at the utility. Unfortunately, as
mentioned above, it'll be hard to support this on the AMQP driver.

Flavio

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Neutron] heal_instance_info_cache_interval - Can we kill it?

2014-05-29 Thread Day, Phil
Could we replace the refresh from the periodic task with a timestamp in the
network cache of when it was last updated, so that we refresh it only when it's
accessed and older than X?
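
Something along those lines might look like the following minimal sketch (not
Nova code; the names are made up):

    import time

    MAX_CACHE_AGE = 600  # seconds; purely illustrative

    class InstanceNetworkCache(object):
        def __init__(self, fetch_from_neutron):
            self._fetch = fetch_from_neutron  # callable that hits the Neutron API
            self._network_info = None
            self._updated_at = 0.0

        def get(self):
            # Refresh lazily: only call Neutron when the cached copy is stale,
            # instead of sweeping one instance per node per periodic-task run.
            if time.time() - self._updated_at > MAX_CACHE_AGE:
                self._network_info = self._fetch()
                self._updated_at = time.time()
            return self._network_info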

From: Aaron Rosen [mailto:aaronoro...@gmail.com]
Sent: 29 May 2014 01:47
To: Assaf Muller
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Neutron] heal_instance_info_cache_interval 
- Can we kill it?



On Wed, May 28, 2014 at 7:39 AM, Assaf Muller 
amul...@redhat.com wrote:


- Original Message -
 Hi,

 Sorry somehow I missed this email. I don't think you want to disable it,
 though we can definitely have it run less often. The issue with disabling it
 is if one of the notifications from neutron-nova never gets sent
 successfully to nova (neutron-server is restarted before the event is sent
 or some other internal failure). Nova will never update its cache if the
 heal_instance_info_cache_interval is set to 0.
The thing is, this periodic healing doesn't imply correctness either.
In the case where you lose a notification and the compute node hosting
the VM is hosting a non-trivial number of VMs, it can take (with the default
of 60 seconds) dozens of minutes to update the cache, since you only
update a VM a minute. I could understand the use of a sanity check if it
was performed much more often, but as it is now it seems useless to me
since you can't really rely on it.

I agree with you. That's why we implemented the event callback so that the 
cache would be more up to date. In honesty you can probably safely disable the  
heal_instance_info_cache_interval and things will probably be fine as we 
haven't seen many failures where events from neutron fail to send. If we find 
out this is the case we can definitely make the event sending notification 
logic in neutron much more robust by persisting events to the db and 
implementing retry logic on failure there to help ensure nova gets the 
notification.

What I'm trying to say is that with the inefficiency of the implementation,
coupled with Neutron's default plugin inability to cope with a large
amount of API calls, I feel like the disadvantages outweigh the
advantages when it comes to the cache healing.

Right, the current heal_instance implementation has scaling issues, as every
compute node runs this task querying neutron. The more compute nodes you have,
the more querying. Hopefully the nova v3 API should solve this issue, though, as
the networking information will no longer have to live in nova. So
someone interested in this network data can query neutron directly and we
can avoid these types of caching issues altogether :)

How would you feel about disabling it, optimizing the implementation
(For example by introducing a new networking_for_instance API verb to Neutron?)
then enabling it again?

I think this is a good idea; we should definitely implement something like this
so nova can interface with fewer API calls.

 The neutron-nova events help
 to ensure that the nova info_cache is up to date sooner by having neutron
 inform nova whenever a port's data has changed (@Joe Gordon - this happens
 regardless of virt driver).

 If you're using the libvirt virt driver the neutron-nova events will also be
 used to ensure that the networking is 'ready' before the instance is powered
 on.

 Best,

 Aaron

 P.S: we're working on making the heal_network call to neutron a lot less
 expensive as well in the future.




 On Tue, May 27, 2014 at 7:25 PM, Joe Gordon
 joe.gord...@gmail.com wrote:






 On Wed, May 21, 2014 at 6:21 AM, Assaf Muller
 amul...@redhat.com wrote:


 Dear Nova aficionados,

 Please make sure I understand this correctly:
 Each nova-compute instance selects a single VM out of all of the VMs
 that it hosts, and every heal_instance_info_cache_interval seconds
 queries Neutron for all of its networking information, then updates
 Nova's DB.

 If the information above is correct, then I fail to see how that
 is in any way useful. For example, for a compute node hosting 20 VMs,
 it would take 20 minutes to update the last one. Seems unacceptable
 to me.

 Considering Icehouse's Neutron to Nova notifications, my question
 is if we can change the default to 0 (Disable the feature), deprecate
 it, then delete it in the K cycle. Is there a good reason not to do this?

 Based on the patch that introduced this function [0] you may be on to
 something, but AFAIK unfortunately the neutron to nova notifications only
 work in libvirt right now [1], so I don't think we can fully deprecate this
 periodic task. That being said turning it off by default may be an option.
 Have you tried disabling this feature and seeing what happens (in the gate
 and/or in production)?

We've disabled it in a scale lab and didn't observe any black holes forming
or other catastrophes.


 [0] https://review.openstack.org/#/c/4269/
 [1] 

Re: [openstack-dev] [Fuel-dev] [Openstack-dev] New RA for Galera

2014-05-29 Thread Bogdan Dobrelya
On 05/27/14 16:44, Bartosz Kupidura wrote:
 Hello,
 Responses inline.
 
 
 Message from Vladimir Kuklin vkuk...@mirantis.com written on 27 May
 2014 at 15:12:
 
 Hi, Bartosz

 First of all, we are using openstack-dev for such discussions.

 Second, there is also Percona's RA for Percona XtraDB Cluster, which looks
 pretty similar, although it is written in Perl. Maybe we could derive
 something useful from it.

 Next, if you are working on this stuff, let's make it as open for the 
 community as possible. There is a blueprint for Galera OCF script: 
 https://blueprints.launchpad.net/fuel/+spec/reliable-galera-ocf-script. It 
 would be awesome if you wrote down the specification and sent the newer galera
 OCF code change request to the fuel-library gerrit.
 
 Sure, I will update this blueprint. 
 Change request in fuel-library: https://review.openstack.org/#/c/95764/

That is a really nice catch, Bartosz, thank you. I believe we should
review the new OCF script thoroughly and consider omitting
cs_commits/cs_shadows as well. What would be the downsides?

 

 Speaking of crm_attribute stuff. I am very surprised that you are saying 
 that node attributes are altered by crm shadow commit. We are using similar 
 approach in our scripts and have never faced this issue.
 
 This is probably because you update crm_attribute very rarely. And with my
 approach the GTID attribute is updated every 60s on every node (3 updates per 60s
 in a standard HA setup).

 You can try updating any attribute in a loop while deploying the cluster to
 trigger a failure with the corosync diff.

It sounds reasonable and we should verify it.
I've updated the statuses for related bugs and attached them to the
aforementioned blueprint as well:
https://bugs.launchpad.net/fuel/+bug/1283062/comments/7
https://bugs.launchpad.net/fuel/+bug/1281592/comments/6


 

 Corosync 2.x support is on our roadmap, but we are not sure that we will use
 Corosync 2.x before the start of the 6.x release series.

 Yeah, moreover corosync CMAP is not synced between cluster nodes (or maybe I'm
 doing something wrong?). So we need another solution for this...
 

We should use CMAN for Corosync 1.x, perhaps.



 On Tue, May 27, 2014 at 3:08 PM, Bartosz Kupidura bkupid...@mirantis.com 
 wrote:
 Hello guys!
 I would like to start a discussion on a new resource agent for
 galera/pacemaker.

 Main features:
 * Support cluster bootstrap
 * Support rebooting any node in the cluster
 * Support rebooting the whole cluster
 * To determine which node has the latest DB version, we should use the galera GTID
 (Global Transaction ID)
 * The node with the latest GTID is the galera PC (primary component) in case of
 reelection
 * The administrator can manually set a node as PC

 GTID:
 * get GTID from mysqld --wsrep-recover or SQL query SHOW STATUS LIKE
 'wsrep_local_state_uuid'
 * store GTID as crm_attribute for node (crm_attribute --node $HOSTNAME 
 --lifetime $LIFETIME --name gtid --update $GTID)
 * on every monitor/stop/start action update GTID for given node
 * GTID can have 3 formats:
  - ----:123 - standard cluster-id:commit-id
  - ----:-1 - standard non-initialized
 cluster, ----:-1
  - ----:INF - commit-id manually set to INF,
 forces the RA to create a new cluster, with the master on the given node

 Check if reelection of PC is needed:
 * (node is located in partition with quorum OR we have only 1 node 
 configured in cluster) AND galera resource is not running on any node
 * GTID is manually set to INF on given node

 Check if given node is PC:
 * have the highest GTID in the cluster; in case we have more than one node with
 the "highest" GTID, we use CRC32 to choose the proper PC (see the sketch below).
 * GTID is manually set to INF
 * in case the node with the highest GTID does not come back after a cluster reboot
 (for example disk failure), the administrator should set GTID to INF on another node
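
 For clarity, here is a simplified sketch of that selection rule (the real RA is a
 shell OCF script; this is just the logic in Python with made-up hostnames):
 the highest commit-id wins, INF forces a node, and CRC32 breaks ties deterministically.

     import zlib

     INF = float('inf')

     def parse_seq(gtid):
         # GTID is 'cluster-uuid:commit-id'; commit-id may be -1 or 'INF'.
         _, _, seq = gtid.partition(':')
         return INF if seq == 'INF' else int(seq)

     def choose_primary(gtids):
         # gtids: dict of hostname -> GTID string; returns the bootstrap node.
         best = max(parse_seq(g) for g in gtids.values())
         candidates = [h for h, g in gtids.items() if parse_seq(g) == best]
         # CRC32 of the hostname keeps the tie-break deterministic on every node.
         return max(candidates, key=lambda host: zlib.crc32(host.encode()))

     print(choose_primary({'node-1': 'uuid:120',
                           'node-2': 'uuid:123',
                           'node-3': 'uuid:123'}))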

 I have almost ready RA: http://zynzel.spof.pl/mysql-wss

 Tested with vanilla CentOS galera/pacemaker/corosync - OK
 Tested with Fuel 4.1 - Fail


 Fuel 4.1 with that RA will not deploy correctly, because we use
 crm_attribute to store GTID, and in the manifests we use cs_shadow/cs_commit for
 every pacemaker resource.
 This leads to a cs_commit problem with different configurations in the shadow copy
 and the running configuration (the running config is changed by the RA):
 "Could not commit shadow instance [..] to the CIB: Application of an update
 diff failed"

 To solve this we can go one of 2 ways:
 1) don't use cs_commit/cs_shadow in manifests
 2) store GTID some other way than crm_attribute

 IMHO 2) is better (less invasive) and we can store GTID in corosync CMAP
 (http://www.polarhome.com/service/man/generic.php?qf=corosync-cmapctl), but
 this requires corosync 2.x


 --
 Mailing list: https://launchpad.net/~fuel-dev
 Post to : fuel-...@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~fuel-dev
 More help   : https://help.launchpad.net/ListHelp



 -- 
 

[openstack-dev] [Neutron]net-create fail without definite segmentation_id

2014-05-29 Thread Xurong Yang
Hi, stackers,
If I define provider attributes when creating a network, but no segmentation_id,
net-create fails. Why not allocate a segmentation_id automatically?
~$ neutron net-create test --provider:network_type=vlan --provider:physical_network=default
Invalid input for operation: segmentation_id required for VLAN provider
network.

Thanks & regards,
Xurong Yang
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [Ironic] [Ceilometer] [Horizon] [TripleO] Nodes Management UI - designs

2014-05-29 Thread Jaromir Coufal

Hey Mainn,

Mostly it is driven by the following requirements:
https://etherpad.openstack.org/p/ironic-ui

plus what you already know from the Tuskar point of view - which is simply
monitoring, monitoring, monitoring :)


Hope it helps
-- Jarda

On 2014/29/05 05:51, Tzu-Mainn Chen wrote:

Hi Jarda,

These look pretty good!  However, I'm having trouble evaluating them from a purely
functional point of view, as I'm not entirely sure what the requirements
driving these designs are.  Would it be possible to list those out...?

Thanks,
Tzu-Mainn Chen

- Original Message -

Hi All,

There are a lot of tags in the subject of this e-mail, but believe me that
all the listed projects (and even more) are relevant to the designs which I
am sending out.

The Nodes management section in Horizon has been expected for a while, and
finally I am sharing the results of designing around it.

http://people.redhat.com/~jcoufal/openstack/horizon/nodes/2014-05-28_nodes-ui.pdf

These views are based on a modular approach and a combination of multiple
services; for example:
* Ironic - HW details and management
* Ceilometer - Monitoring graphs
* TripleO/Tuskar - Deployment Roles
etc.

Whenever some service is missing, that particular functionality should
be disabled and not displayed to a user.

I am sharing this without much description so that I can get
feedback on whether people can get oriented in the UI without hints. Of
course you cannot get each and every detail without exploring, having
tooltips, etc. But the goal is for each view to express at
least its main purpose without explanation. If it does not, it needs to
be fixed.

Next week I will organize a recorded broadcast where I will walk you
through the designs, explain high-level vision, details and I will try
to answer questions if you have any. So feel free to comment anything or
ask whatever comes to your mind here in this thread, so that I can cover
your concerns. Any feedback is very welcome - positive so that I know
what you think that works, as well as negative so that we can improve
the result before implementation.

Thank you all
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Selecting more carefully our dependencies

2014-05-29 Thread Thomas Goirand
Hi everyone,

Recently, wrapt was added as a dependency. The Python module suffers
from obvious design issues, like for example:
- Lack of Python 3.4 support
- Broken with Python 3.2
- Upstream sources are in src instead of wrapt, so running py.test
doesn't work unless you do ln -s src wrapt and then PYTHONPATH=.
py.test tests to run the tests.
- Unit tests not included in the pypi module

That'd be fine, if upstream was comprehensive, and willing to fix
things. It seems like he's going to approve our patches for Python 3.2
and 3.4. But ...

There's an embedded copy of six.py in the code. I've been trying to
convince upstream to remove it, and provided a patch for it. But it's
looking like upstream simply refuses to remove the embedded copy of
six.py. This means that, on each new upstream release, I may have to
rebase my Debian specific patch to remove the copy. See comments here:

https://github.com/GrahamDumpleton/wrapt/pull/24

I've still packaged and uploaded the module to Debian, but the situation
isn't great with upstream, which doesn't seem to understand the problem; this will
inevitably lead to more (useless) work for downstream distros.

So I'm wondering: are we being careful enough when selecting
dependencies? In this case, I think we haven't, and I would recommend
against using wrapt. Not only because it embeds six.py, but because
upstream looks uncooperative, and bound to its own use cases.

In a more general case, I would vouch for avoiding *any* Python package
which is embedding a copy of another one. This should IMO be solved
before the Python module reaches our global-requirements.txt.

Thoughts anyone?

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Designate Incubation Request

2014-05-29 Thread Thierry Carrez
Sean Dague wrote:
 I honestly just think we might want to also use it as a time to rethink
 our program concept. Because all our programs that include projects that
 are part of the integrated release are 1 big source tree, and maybe a
 couple of little trees that orbit it (client and now specs repos). If we
 always expect that to be the case, I'm not really sure why we built this
 intermediate grouping.

Programs were established to solve two problems. The first one is the
confusion around project types. We used to have project types[1] that
were trying to reflect and include all code repositories that we wanted
to make official. That kept on changing, was very confusing, and did
not allow flexibility for each team in how they preferred to organize
their code repositories. The second problem they solved was to recognize
non-integrated-project efforts which were still essential to the
production of OpenStack, like Infra or Docs.

[1] https://wiki.openstack.org/wiki/ProjectTypes

Programs just let us bless goals and teams and let them organize code
however they want, with contribution to any code repo under that
umbrella being considered official and ATC-status-granting. I would be
a bit reluctant to come back to the projecttypes mess and create
categories of programs (integrated projects on one side, and others).

Back to the topic, the tension here is because DNS is seen as a
network thing and therefore it sounds like it makes sense under
Networking. But programs are not categories or themes. They are
teams aligned on a mission statement. If the teams are different
(Neutron and Designate) then it doesn't make sense to artificially merge
them just because you think of networking as a theme. If the teams
converge, yes it makes sense. If they don't, we should just create a new
program. They are cheap and should reflect how we work, not the other
way around.

-- 
Thierry Carrez (ttx)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-05-29 Thread Jaromir Coufal

Hi James,

that's a good point. I just restructured the etherpad so that there are 
3 sections: one is for July 28 - Aug 1, the second is for July 21-25, and the
third one is for those who want to attend but can't make it within the
suggested dates. So I would like to encourage everybody who is willing
to attend to put themselves into the slots they can make (so if both weeks
work for you, please enter yourself into both lists).


Thanks
-- Jarda

On 2014/29/05 08:07, James Polley wrote:

Thanks for putting this together Jaromir!

August 1 may be a problem for those of us from Australia - it's the day
of the OpenStack miniconf at Pycon-AU.

I don't know if any of us intended to go to that, but if we did we'd
need to leave no later than the 4:40pm flight on July 30 in order to
make it back in time - and that would have us arriving in Brisbane at
7am on the day of the miniconf.

If we stayed till midday Friday and caught the 4:40pm flight out, we'd
arrive in BNE at 7am on the 3rd - just in time for the last day of talks
and the two days of sprints.

As I said, I'm not sure how many other people from this part of the
world had intended to go to Pycon-AU and the openstack miniconf, so I'm
not sure how much of a problem this is.


On Wed, May 28, 2014 at 9:05 PM, Jaromir Coufal jcou...@redhat.com wrote:

Hi to all,

after previous TripleO & Ironic mid-cycle meetup, which I believe
was beneficial for all, I would like to suggest that we meet again
in the middle of Juno cycle to discuss current progress, blockers,
next steps and of course get some beer all together :)

Last time, TripleO and Ironic merged their meetings together and I
think it was great idea. This time I would like to invite also Heat
team if they want to join. Our cooperation is increasing and I think
it would be great, if we can discuss all issues together.

Red Hat offered to host this event, so I am very happy to invite you
all and I would like to ask, who would come if there was a mid-cycle
meetup in following dates and place:

* July 28 - Aug 1
* Red Hat office, Raleigh, North Carolina

If you are intending to join, please, fill yourselves into this
etherpad:
https://etherpad.openstack.org/p/juno-midcycle-meetup

Cheers
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug status and our 1st Bug Day for Juno

2014-05-29 Thread Kashyap Chamarthy
On Thu, May 29, 2014 at 04:29:34AM +, Tracy Jones wrote:
 Hi Folks - I spoke with Michael at the summit about bug management for
 Juno.   Other than tagging the untagged bugs each week,

I'm trying to do that (plus some triage/root-cause analysis for
libvirt/QEMU/KVM-based bugs) and would like to continue to focus on
keeping an eye on bugs for this release cycle.

 I will also be driving a top ten list of bugs at the nova meeting. The
 meeting is every Wednesday for 1/2 hour at 1630 UTC.  Attendance has
 dropped off since the end of icehouse - in fact no one attended at all
 yesterday.  I'm guessing people are focused on BPs right now - but
 losing focus on bugs is a bad idea.

Agreed, FWIW. I hope there was some consensus at the summit about focusing a
good chunk of the release cycle on tackling _existing_ bugs that are
affecting real users.

 Nova currently has about 1200 bugs open (new, triaged, confirmed, in
 progress).  Of those, 556 have no owner (46%), which (usually) means
 they are not being worked on.
 
  I will be gathering better stats over the next week or so, but my
  sense is that we need to focus a bit more on bugs.  To that end, I
  would like to propose a Bug Day on next Wednesday 6/4.

Count me in. Maybe this existing etherpad[1] can be used to capture
notes.


  [1] https://etherpad.openstack.org/p/nova-bug-management


 Bug day is a day that (regardless of time zone), people spend time on
 either fixing or reviewing bugs.
 
 During that day we work on bugs and review bugs
 
 We hang out on  #openstack-bugday
 
 We admire our progress on http://status.openstack.org/bugday/
 
 In terms of today's top ten bugs: this week we have 1 regression from
 Havana which is not assigned to anyone.
 
 https://bugs.launchpad.net/nova/+bug/1299517
 
 
 Please let me know if you have questions or comments.
 


-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-05-29 Thread Xurong Yang
Hi, Folks,

When we configure a VXLAN range of [1, 16M], the neutron-server service takes a long
time and the CPU rate is very high (100%) during initialization. One test based on
PostgreSQL has been verified: more than 1h when the VXLAN range is [1, 1M].

So, is there any good solution to this performance issue?

Thanks,
Xurong Yang
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Designate Incubation Request

2014-05-29 Thread Sean Dague
On 05/29/2014 05:26 AM, Thierry Carrez wrote:
 Sean Dague wrote:
 I honestly just think we might want to also use it as a time to rethink
 our program concept. Because all our programs that include projects that
 are part of the integrated release are 1 big source tree, and maybe a
 couple of little trees that orbit it (client and now specs repos). If we
 always expect that to be the case, I'm not really sure why we built this
 intermediate grouping.
 
 Programs were established to solve two problems. First one is the
 confusion around project types. We used to have project types[1] that
 were trying to reflect and include all code repositories that we wanted
 to make official. That kept on changing, was very confusing, and did
 not allow flexibility for each team in how they preferred to organize
 their code repositories. The second problem that solved was to recognize
 non-integrated-project efforts which were still essential to the
 production of OpenStack, like Infra or Docs.
 
 [1] https://wiki.openstack.org/wiki/ProjectTypes
 
 Programs just let us bless goals and teams and let them organize code
 however they want, with contribution to any code repo under that
 umbrella being considered official and ATC-status-granting. I would be
 a bit reluctant to come back to the projecttypes mess and create
 categories of programs (integrated projects on one side, and others).
 
 Back to the topic, the tension here is because DNS is seen as a
 network thing and therefore it sounds like it makes sense under
 Networking. But programs are not categories or themes. They are
 teams aligned on a mission statement. If the teams are different
 (Neutron and Designate) then it doesn't make sense to artificially merge
 them just because you think of networking as a theme. If the teams
 converge, yes it makes sense. If they don't, we should just create a new
 program. They are cheap and should reflect how we work, not the other
 way around.

Ok, that's fair. My confusion might stem from the fact that in nova,
novaclient really is being done by a largely distinct subset of folks, and
tends to have minimal overlap with nova itself. That might just be the
size of the effort, and also will hopefully be addressed by the unified
SDK/client getting off the ground.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-05-29 Thread Radomir Dopieralski
On 05/28/2014 10:03 PM, Gabriel Hurley wrote:
 It's sort of a silly point, but as someone who would likely consume the
 split-off package outside of the OpenStack context, please give it a proper
 name instead of django_horizon. The module only works in Django, the name
 adds both clutter and confusion, and it's against the Django community's
 conventions to have the python import name be prefixed with django_.

Since the name is completely irrelevant from the technical point of
view, and everybody will naturally have their own opinions about it,
I would prefer to skip the whole discussion and just settle on the most
mundane, boring, uninspired and obvious name that we can use, just to
avoid bikeshedding and surprises.

I didn't realize it would violate Django community's conventions, can
you point to where they are documented?

I don't think this is important, but since we have some time until the
patches for static files and other stuff clear, we could have a poll for
the name. Gabriel, would you like to run that?

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [Openstack-dev] New RA for Galera

2014-05-29 Thread Vladimir Kuklin
Maybe the problem is that you are using lifetime crm attributes instead
of 'reboot' ones. We use shadow/commit because we need transactional
behaviour in some cases. If you turn crm_shadow off, then you will
experience problems with multi-state resources and
location/colocation/order constraints. So we need to find a way to make
commits transactional. There are two ways:
1) rewrite the corosync providers to use the crm_diff command and apply it instead
of a shadow commit, which can sometimes swallow cluster attributes
2) store 'reboot' attributes instead of lifetime ones



 On Thu, May 29, 2014 at 12:42 PM, Bogdan Dobrelya bdobre...@mirantis.com wrote:

 On 05/27/14 16:44, Bartosz Kupidura wrote:
  Hello,
  Responses inline.
 
 
  Message from Vladimir Kuklin vkuk...@mirantis.com written on 27 May
 2014 at 15:12:
 
  Hi, Bartosz
 
  First of all, we are using openstack-dev for such discussions.
 
  Second, there is also Percona's RA for Percona XtraDB Cluster, which
 looks like pretty similar, although it is written in Perl. May be we could
 derive something useful from it.
 
  Next, if you are working on this stuff, let's make it as open for the
 community as possible. There is a blueprint for Galera OCF script:
 https://blueprints.launchpad.net/fuel/+spec/reliable-galera-ocf-script.
 It would be awesome if you wrote down the specification and sent  newer
 galera ocf code change request to fuel-library gerrit.
 
  Sure, I will update this blueprint.
  Change request in fuel-library: https://review.openstack.org/#/c/95764/

 That is a really nice catch, Bartosz, thank you. I believe we should
 review the new OCF script thoroughly and consider omitting
 cs_commits/cs_shadows as well. What would be the downsides?

 
 
  Speaking of crm_attribute stuff. I am very surprised that you are
 saying that node attributes are altered by crm shadow commit. We are using
 similar approach in our scripts and have never faced this issue.
 
  This is probably because you update crm_attribute very rarely. And with
 my approach GTID attribute is updated every 60s on every node (3 updates in
 60s, in standard HA setup).
 
  You can try to update any attribute in loop during deploying cluster to
 trigger fail with corosync diff.

 It sounds reasonable and we should verify it.
 I've updated the statuses for related bugs and attached them to the
 aforementioned blueprint as well:
 https://bugs.launchpad.net/fuel/+bug/1283062/comments/7
 https://bugs.launchpad.net/fuel/+bug/1281592/comments/6


 
 
  Corosync 2.x support is in our roadmap, but we are not sure that we
 will use Corosync 2.x earlier than 6.x release series start.
 
  Yeah, moreover corosync CMAP is not synced between cluster nodes (or
 maybe im doing something wrong?). So we need other solution for this...
 

 We should use CMAN for Corosync 1.x, perhaps.

 
 
  On Tue, May 27, 2014 at 3:08 PM, Bartosz Kupidura 
 bkupid...@mirantis.com wrote:
  Hello guys!
  I would like to start discussion on a new resource agent for
 galera/pacemaker.
 
  Main features:
  * Support cluster boostrap
  * Support reboot any node in cluster
  * Support reboot whole cluster
  * To determine which node have latest DB version, we should use galera
 GTID (Global Transaction ID)
  * Node with latest GTID is galera PC (primary component) in case of
 reelection
  * Administrator can manually set node as PC
 
  GTID:
  * get GTID from mysqld --wsrep-recover or SQL query SHOW STATUS LIKE
 'wsrep_local_state_uuid'
  * store GTID as crm_attribute for node (crm_attribute --node $HOSTNAME
 --lifetime $LIFETIME --name gtid --update $GTID)
  * on every monitor/stop/start action update GTID for given node
  * GTID can have 3 format:
   - ----:123 - standard
 cluster-id:commit-id
   - ----:-1 - standard non initialized
 cluster, ----:-1
   - ----:INF - commit-id manually set to
 INF, force RA to create new cluster, with master on given node
 
  Check if reelection of PC is needed:
  * (node is located in partition with quorum OR we have only 1 node
 configured in cluster) AND galera resource is not running on any node
  * GTID is manually set to INF on given node
 
  Check if given node is PC:
  * have the highest GTID in the cluster; in case we have more than one node with
 the "highest" GTID, we use CRC32 to choose the proper PC.
  * GTID is manually set to INF
  * in case node with highest GTID will not come back after cluster
 reboot (for example disk failure) administrator should set GTID to INF on
 other node
 
  I have almost ready RA: http://zynzel.spof.pl/mysql-wss
 
  Tested with vanilla centos galera/pacemaker/corosync - OK
  Tested with Fuel 4.1 - Fail
 
 
  Fuel 4.1 with that RA will not deploy correctly, because we use
 crm_attribute to store GTID, and in the manifest we use cs_shadow/cs_commit for
 every pacemaker resource.
  This leads to cs_commit problems with 

Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-05-29 Thread ZZelle
Hi,


VXLAN networks are inserted/verified in the DB one by one, which could explain
the time required.

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_vxlan.py#L138-L172

Cédric
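
For illustration, here is a rough Python/SQLAlchemy sketch of how that per-VNI
loop could be batched: query the existing allocations once and bulk-insert the
missing rows. The VxlanAllocation model and session handling below are
simplified assumptions, not the actual Neutron code.

    # Illustrative sketch only, not the actual Neutron implementation.
    # Assumes a SQLAlchemy session and a VxlanAllocation-like model with
    # 'vxlan_vni' and 'allocated' columns, similar to type_vxlan.py.
    def sync_vxlan_allocations(session, vni_ranges, VxlanAllocation):
        # Build the full set of VNIs that should exist, in memory.
        wanted = set()
        for start, end in vni_ranges:
            wanted.update(range(start, end + 1))

        with session.begin(subtransactions=True):
            # One query instead of one query per VNI.
            existing = set(vni for (vni,) in
                           session.query(VxlanAllocation.vxlan_vni))

            # One bulk INSERT (executemany) instead of adding rows one by one.
            missing = wanted - existing
            if missing:
                session.execute(
                    VxlanAllocation.__table__.insert(),
                    [{'vxlan_vni': vni, 'allocated': False}
                     for vni in sorted(missing)])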



On Thu, May 29, 2014 at 12:01 PM, Xurong Yang ido...@gmail.com wrote:

 Hi, Folks,

 When we configure a VXLAN range of [1, 16M], the neutron-server service takes a
 long time and CPU usage is very high (100%) during initialization. One test
 based on PostgreSQL has been verified: more than 1h when the VXLAN range is [1, 1M].

 So, is there any good solution for this performance issue?

 Thanks,
 Xurong Yang



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API meeting

2014-05-29 Thread Christopher Yeoh
Hi,

Just a reminder that the weekly Nova API meeting is being held tomorrow
Friday UTC . 

We encourage cloud operators and those who use the REST API, such as
SDK developers and others who are interested in the future of the
API, to participate.

In other timezones the meeting is at:

EST 20:00 (Thu)
Japan 09:00 (Fri)
China 08:00 (Fri)
ACDT 9:30 (Fri)

The proposed agenda and meeting details are here: 

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda. 

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [Ironic] [Ceilometer] [Horizon] [TripleO] Nodes Management UI - designs

2014-05-29 Thread Lucas Alvares Gomes
On Wed, May 28, 2014 at 10:18 PM, Jaromir Coufal jcou...@redhat.com wrote:
 Hi All,

 There are a lot of tags in the subject of this e-mail, but believe me, all the
 listed projects (and even more) are relevant to the designs which I am
 sending out.

 The nodes management section in Horizon has been expected for a while, and
 I am finally sharing the results of the design work around it.

 http://people.redhat.com/~jcoufal/openstack/horizon/nodes/2014-05-28_nodes-ui.pdf

 These views are based on modular approach and combination of multiple
 services together; for example:
 * Ironic - HW details and management

Just giving a heads up about work going on in Ironic which affects
the way we enroll nodes [1].

tl;dr: We want Ironic to support different provisioning methods using
the same flavor. E.g. the deploy kernel and deploy ramdisk used by the
ironic-python-agent [2] are different from the ones used by the default
PXE driver, so we are moving the deploy kernel/ramdisk from the Flavor's
extra_specs and adding it to the Node's driver_info attribute (used
for storing driver-level information). This will require clients to
pass the deploy kernel/ramdisk to the node when enrolling it.

[1] 
https://review.openstack.org/#/c/95701/1/specs/juno/add-node-instance-info.rst
[2] https://wiki.openstack.org/wiki/Ironic-python-agent
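
For anyone updating tooling, a rough sketch of what enrollment could look like
once the deploy kernel/ramdisk lives in driver_info. The key names, driver name
and client call below are assumptions for illustration, not the final API.

    # Rough sketch: enroll a node with the deploy kernel/ramdisk passed in
    # driver_info instead of the flavor's extra_specs. Key names, driver
    # name and the exact python-ironicclient signature are assumptions.
    from ironicclient import client as ironic_client

    ironic = ironic_client.get_client(1,
                                      os_auth_token='ADMIN_TOKEN',
                                      ironic_url='http://ironic-api:6385/')

    node = ironic.node.create(
        driver='pxe_ipmitool',
        driver_info={
            'deploy_kernel': '<glance-uuid-of-deploy-kernel>',
            'deploy_ramdisk': '<glance-uuid-of-deploy-ramdisk>',
            'ipmi_address': '10.0.0.42',
            'ipmi_username': 'admin',
            'ipmi_password': 'secret',
        })
    print(node.uuid)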

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [Openstack-dev] New RA for Galera

2014-05-29 Thread Bartosz Kupidura
Hello,


Message written by Vladimir Kuklin vkuk...@mirantis.com on 29 May
2014, at 12:09:

 Maybe the problem is that you are using lifetime crm attributes instead of
 'reboot' ones. shadow/commit is used by us because we need transactional
 behaviour in some cases. If you turn crm_shadow off, then you will experience
 problems with multi-state resources and location/colocation/order
 constraints. So we need to find a way to make commits transactional. There
 are two ways:
 1) rewrite the corosync providers to use the crm_diff command and apply it instead of
 the shadow commit, which can sometimes swallow cluster attributes

In the PoC I removed all cs_commit/cs_shadow, and it looks like everything is working.
But as you say, this can lead to problems with more complicated deployments.
This needs to be verified.

 2) store 'reboot' attributes instead of lifetime ones

I tested with --lifetime forever and reboot. No difference for the cs_commit/cs_shadow
failure.

Moreover, we need a method to store the GTID permanently (to support whole-cluster
reboot).
If we want to stick to cs_commit/cs_shadow, we need a method other than
crm_attribute to store the GTID.
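
For reference, a minimal sketch of the GTID handling discussed above (recover
the position with --wsrep-recover, then persist it with crm_attribute). The
'Recovered position' log format and the attribute lifetime are assumptions for
illustration only.

    # Minimal sketch only. Recovers the last committed GTID and stores it
    # as a pacemaker node attribute, as described in the thread.
    import re
    import socket
    import subprocess

    def recover_gtid():
        proc = subprocess.run(['mysqld', '--wsrep-recover'],
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                              universal_newlines=True)
        # Assumed log format: "WSREP: Recovered position: <uuid>:<seqno>"
        match = re.search(r'Recovered position:\s*(\S+:-?\d+)',
                          proc.stderr + proc.stdout)
        return match.group(1) if match else None

    def store_gtid(gtid, lifetime='reboot'):
        subprocess.check_call(
            ['crm_attribute', '--node', socket.gethostname(),
             '--lifetime', lifetime, '--name', 'gtid', '--update', gtid])

    if __name__ == '__main__':
        gtid = recover_gtid()
        if gtid:
            store_gtid(gtid)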

 
 
 
 On Thu, May 29, 2014 at 12:42 PM, Bogdan Dobrelya bdobre...@mirantis.com 
 wrote:
 On 05/27/14 16:44, Bartosz Kupidura wrote:
  Hello,
  Responses inline.
 
 
  Message written by Vladimir Kuklin vkuk...@mirantis.com on 27
  May 2014, at 15:12:
 
  Hi, Bartosz
 
  First of all, we are using openstack-dev for such discussions.
 
  Second, there is also Percona's RA for Percona XtraDB Cluster, which looks
  pretty similar, although it is written in Perl. Maybe we could
  derive something useful from it.
 
  Next, if you are working on this stuff, let's make it as open for the 
  community as possible. There is a blueprint for Galera OCF script: 
  https://blueprints.launchpad.net/fuel/+spec/reliable-galera-ocf-script. It 
  would be awesome if you wrote down the specification and sent  newer 
  galera ocf code change request to fuel-library gerrit.
 
  Sure, I will update this blueprint.
  Change request in fuel-library: https://review.openstack.org/#/c/95764/
 
 That is a really nice catch, Bartosz, thank you. I believe we should
 review the new OCF script thoroughly and consider omitting
 cs_commits/cs_shadows as well. What would be the downsides?
 
 
 
  Speaking of crm_attribute stuff. I am very surprised that you are saying 
  that node attributes are altered by crm shadow commit. We are using 
  similar approach in our scripts and have never faced this issue.
 
  This is probably because you update crm_attribute very rarely. And with my 
  approach GTID attribute is updated every 60s on every node (3 updates in 
  60s, in standard HA setup).
 
  You can try to update any attribute in loop during deploying cluster to 
  trigger fail with corosync diff.
 
 It sounds reasonable and we should verify it.
 I've updated the statuses for related bugs and attached them to the
 aforementioned blueprint as well:
 https://bugs.launchpad.net/fuel/+bug/1283062/comments/7
 https://bugs.launchpad.net/fuel/+bug/1281592/comments/6
 
 
 
 
  Corosync 2.x support is in our roadmap, but we are not sure that we will 
  use Corosync 2.x earlier than 6.x release series start.
 
  Yeah, moreover corosync CMAP is not synced between cluster nodes (or maybe
  I'm doing something wrong?). So we need another solution for this...
 
 
 We should use CMAN for Corosync 1.x, perhaps.
 
 
 
  On Tue, May 27, 2014 at 3:08 PM, Bartosz Kupidura bkupid...@mirantis.com 
  wrote:
  Hello guys!
  I would like to start discussion on a new resource agent for 
  galera/pacemaker.
 
  Main features:
  * Support cluster bootstrap
  * Support reboot any node in cluster
  * Support reboot whole cluster
  * To determine which node has the latest DB version, we should use galera
  GTID (Global Transaction ID)
  * Node with latest GTID is galera PC (primary component) in case of 
  reelection
  * Administrator can manually set node as PC
 
  GTID:
  * get GTID from mysqld --wsrep-recover or SQL query SHOW STATUS LIKE
  'wsrep_local_state_uuid'
  * store GTID as crm_attribute for node (crm_attribute --node $HOSTNAME 
  --lifetime $LIFETIME --name gtid --update $GTID)
  * on every monitor/stop/start action update GTID for given node
  * GTID can have 3 formats:
   - ----:123 - standard cluster-id:commit-id
   - ----:-1 - standard non initialized 
  cluster, ----:-1
   - ----:INF - commit-id manually set to 
  INF, force RA to create new cluster, with master on given node
 
  Check if reelection of PC is needed:
  * (node is located in partition with quorum OR we have only 1 node 
  configured in cluster) AND galera resource is not running on any node
  * GTID is manually set to INF on given node
 
  Check if given node is PC:
  * have highest GTID in cluster, in 

Re: [openstack-dev] [nova] Question about addit log in nova-compute.log

2014-05-29 Thread Murray, Paul (HP Cloud)
Comment inline at bottom of message...

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: 06 May 2014 18:44
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Question about addit log in nova-compute.log

On 05/06/2014 01:37 PM, Jiang, Yunhong wrote:
 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: Monday, May 05, 2014 6:19 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Question about addit log in 
 nova-compute.log

 On 05/05/2014 04:19 PM, Jiang, Yunhong wrote:
 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: Monday, May 05, 2014 9:50 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Question about addit log in 
 nova-compute.log

 On 05/04/2014 11:09 PM, Chen CH Ji wrote:
 Hi
    I saw the following lines in my compute.log, which looked
  strange to me at first; the negative Free resource confused me, so I
  took a look at the existing code.
    The logic looks correct to me and the calculation doesn't
  have a problem, but the output 'Free' is confusing.

    Is this on purpose, or might it need to be enhanced?

  2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-] Free ram (MB): -1559
  2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 29
  2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-] Free VCPUS: -3

 Hi Kevin,

 I think changing free to available might make things a little 
 more clear. In the above case, it may be that your compute worker 
 has both CPU and RAM overcommit enabled.

 Best,
 -jay

 HI, Jay,
  I don't think changing 'free' to 'available' will make it clearer.
  IMHO, the calculation of 'free' is bogus. When reporting the status in
  the periodic task, the resource tracker has no idea of the
  over-commit ratio at all; it simply subtracts the total RAM
  assigned to instances from the RAM provided by the
  hypervisor w/o considering the over-commitment at all. So this number is really
  meaningless.

  Agreed that in its current state, it's meaningless. But... that
  said, the numbers *could* be used to show oversubscription
  percentage, and you don't need to know the max overcommit ratio in
  order to calculate that with the numbers already known.

  I don't think the user can use these numbers to calculate the 'available'. The user
  has to know the max overcommit ratio to know the 'available'. Also, it's
  really ironic to provide meaningless information and then have the user
  do a calculation to get something meaningful.

  This is related to https://bugs.launchpad.net/nova/+bug/1300775 . I think it
  would be better if we could have the resource tracker know about the ratio.

Sorry, you misunderstood me... I was referring to the resource tracker above, 
not a regular user. The resource tracker already knows the total amount of 
physical resources available on each compute node, and it knows the resource 
usage reported by each compute node. Therefore, the resource tracker already 
has all the information it needs to understand the *actual* overcommit ratio of 
CPU and memory on each compute node, regardless of the settings of the 
*maximum* overcommit ratio on a compute node (which is in each compute node's 
nova.conf).

Hope that makes things a bit clearer! Sorry for the confusion :)

[pmurray]: In the past, when debugging problems in our systems, I have found it 
useful to know what the free value actually is. So I think a percentage/ratio 
value would be less helpful to me as an operator. Additionally, if the message 
changed depending on the sign of the value it would make it harder to search 
for it in the logs. So from that point of view I would say the message is fine 
as it is. I can see that free being negative seems odd. Could we add (a negative
value means an overcommitted resource) to the message?

Paul
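
To make the arithmetic in this thread concrete, here is a small worked example.
The totals and the 1.5 allocation ratio are made-up illustration values; only
the -1559 figure comes from the log above.

    # Worked example of "free" vs "available" with overcommit.
    total_mb = 8192                 # physical RAM reported by the hypervisor (assumed)
    used_mb = 9751                  # RAM assigned to instances (assumed)
    ram_allocation_ratio = 1.5      # max overcommit from nova.conf (assumed)

    free_mb = total_mb - used_mb                               # -1559, what the AUDIT line prints
    available_mb = total_mb * ram_allocation_ratio - used_mb   # 2537, real headroom left
    actual_ratio = used_mb / float(total_mb)                   # ~1.19, actual overcommit in use

    print(free_mb, available_mb, actual_ratio)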

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]net-create fail without definite segmentation_id

2014-05-29 Thread Henry Gessau
On 5/29/2014 4:41 AM, Xurong Yang wrote:
 Hi, stackers
 If I define a provider when creating a network, but no segmentation_id,
 net-create fails. Why not allocate the segmentation_id automatically?
 ~$ neutron net-create test --provider:network_type=vlan 
 --provider:physical_network=default
 Invalid input for operation: segmentation_id required for VLAN provider 
 network.

There is a blueprint [1] to address this, with a spec [2] and some code proposed
that you can find by searching for topic:bp/provider-network-partial-specs.
It is only for the ML2 plugin.

[1] 
https://blueprints.launchpad.net/neutron/+spec/provider-network-partial-specs
[2] https://review.openstack.org/91540
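
Purely as an illustration of what automatic allocation could look like (this is
not the ML2 code), the plugin would simply pick the first unused id from the
configured range when segmentation_id is omitted:

    # Illustration only, not the ML2 implementation.
    def pick_segmentation_id(requested_id, allocated_ids, vlan_range=(1, 4094)):
        if requested_id is not None:
            if requested_id in allocated_ids:
                raise ValueError("segmentation_id %s already in use" % requested_id)
            return requested_id
        # No id given: allocate the first free one from the range.
        for vid in range(vlan_range[0], vlan_range[1] + 1):
            if vid not in allocated_ids:
                return vid
        raise ValueError("no free segmentation_id left in the range")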


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: subprojects

2014-05-29 Thread Alexander Ignatov
On 28 May 2014, at 20:02, Sergey Lukjanov slukja...@mirantis.com wrote:

 Hey folks,
 
 it's a small wrap-up for the topic Sahara subprojects releasing and
 versioning that was discussed partially on summit and requires some
 more discussions. You can find details in [0].
 
 common
 
 We'll include only one tarball for sahara to the release launchpad
 pages. All other links will be provided in docs.

+1. And keep python-saharaclient on the corresponding launchpad page.

 
 sahara-dashboard
 
 The merging to Horizon process is now in progress. We've decided that
 j1 is the deadline for merging main code parts and during the j2 all
 the code should be merged into Horizon, so, if in time of j2 we'll
 have some work on merging sahara-dashboard to Horizon not done we'll
 need to fallback to the separated sahara-dashboard repo release for
 Juno cycle and continue merging the code into the Horizon to be able
 to completely kill sahara-dashboard repo in K release.
 
 Where we should keep our UI integration tests?

While the sahara-dashboard code is not yet merged into Horizon, we could keep
the integration tests in the same repo. Once the dashboard code is merged we
could keep the tests in the sahara-extra repo. AFAIR we have plans to convert
our UI tests to Horizon-capable tests with mocked rest calls. So we could
keep non-converted UI tests in sahara-extra until they are done.

 
 sahara-image-elements
 
 We're agreed that some common parts should be merged into the
 diskimage-builder repo (like java support, ssh, etc.). The main issue
 of keeping -image-elements separated is how to release them and
 provide mapping sahara version - elements version. You can find
 different options in etherpad [0], I'll write here about the option
 that I think will work best for us.
 
 So, the idea is that sahara-image-elements is a bunch of scripts and
 tools for building images for Sahara. It's high coupled with plugins's
 code in Sahara, so, we need to align them good. Current default
 decision is to keep aligned versioning like 2014.1 and etc. It'll be
 discussed on the weekly irc team meeting May 29.

I vote to keep sahara-image-elements as a separate repo and release it
as you, Sergey, propose. I see problems with sahara-ci running the whole bunch
of integration tests for checking image-elements and core sahara code
on each patch sent to the sahara repo if the two repos are collapsed.

 
 sahara-extra
 
 Keep it as is, no need to stop releasing, because we're not publishing
 anything to pypi. No real need for tags.

+1. Also I think we can move our rest-api-samples from sahara to sahara-extra
repo as well.
 
 
 open questions
 
 If you have any objections for this model, please, share your thoughts
 before June 3 due to the Juno-1 (June 12) to have enough time to apply
 selected approach.
 
 [0] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward
 
 Thanks.
 
 -- 
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mahout-as-a-service [sahara]

2014-05-29 Thread Matthew Farrellee

On 05/28/2014 12:37 PM, Dat Tran wrote:

Hi everyone,

I have an idea for a new project: Mahout-as-a-service.
The main idea of this project:
- Install OpenStack
- Deploy the OpenStack Sahara source
- Deploy Mahout on the Sahara OpenStack system
- Build the API

Through a web or mobile interface, users can:
- Enable / disable Mahout on a Hadoop cluster
- Run Mahout jobs
- Get monitoring information related to Mahout jobs
- Get statistics and service costs over time and total resource use

Definitely!!! APIs will be public. I look forward to your comments.
Hopefully this summer we can do something together.

Thank you very much! :)


dat,

since mahout is a great ml library that leverages mapreduce (and now
spark and h2o), it may be simpler for you to make sure that mahout is
installed by the various sahara plugins.


in fact, i bet you could run mahout jobs using edp and the java action 
right now in sahara. if that's true it's probably a bit clunky and worth 
the effort to streamline.


best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: subprojects

2014-05-29 Thread Matthew Farrellee

On 05/29/2014 07:23 AM, Alexander Ignatov wrote:

On 28 May 2014, at 20:02, Sergey Lukjanov slukja...@mirantis.com wrote:



sahara-image-elements


We're agreed that some common parts should be merged into the
diskimage-builder repo (like java support, ssh, etc.). The main issue
of keeping -image-elements separated is how to release them and
provide mapping sahara version - elements version. You can find
different options in etherpad [0], I'll write here about the option
that I think will work best for us.

So, the idea is that sahara-image-elements is a bunch of scripts and
tools for building images for Sahara. It's high coupled with plugins's
code in Sahara, so, we need to align them good. Current default
decision is to keep aligned versioning like 2014.1 and etc. It'll be
discussed on the weekly irc team meeting May 29.


I vote to keep sahara-image-elements as separate repo and release it
as you Sergey propose. I see problems with sahara-ci when running all bunch
of integration tests for checking image-elements and core sahara code
on each patch sent to sahara repo in case of collapsed two repos.


this problem was raised during the design summit and i thought the 
resolution was that the sahara-ci could be smart about which set of 
itests it ran. for instance, a change in the elements would trigger 
image rebuild, a change outside the elements would trigger service 
itests. a change that covered both elements and the service could 
trigger all tests.


is that still possible?


best,


matt
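
A rough sketch of the path-based selection matt describes, assuming the CI job
can get the changed-file list from git; the directory names and job names are
illustrative only:

    # Rough sketch for sahara-ci: pick which test jobs to run based on the
    # paths touched by the change. Directory and job names are assumptions.
    import subprocess

    def changed_files(base='origin/master'):
        out = subprocess.check_output(
            ['git', 'diff', '--name-only', base, 'HEAD'],
            universal_newlines=True)
        return [line for line in out.splitlines() if line]

    def jobs_to_run(files):
        touched_elements = any(f.startswith('elements/') for f in files)
        touched_other = any(not f.startswith('elements/') for f in files)
        jobs = set()
        if touched_elements:
            jobs.add('image-build-itests')
        if touched_other:
            jobs.add('service-itests')
        return jobs or {'service-itests'}

    if __name__ == '__main__':
        print(sorted(jobs_to_run(changed_files())))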

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: backward compat

2014-05-29 Thread Alexander Ignatov

On 28 May 2014, at 17:14, Sergey Lukjanov slukja...@mirantis.com wrote:
 1. How should we handle addition of new functionality to the API,
 should we bump minor version and just add new endpoints?

Agree with most of folks. No new versions on adding new endpoints. 
Semantic changes require new major version of rest api.

 2. For which period of time should we keep deprecated API and client for it?

One release cycle for deprecation period.

 3. How to publish all images and/or keep stability of building images
 for plugins?
 

We should keep all images for all plugins (non-deprecated as Matt mentioned) 
for each release. In addition we could keep  at least one image which could be 
downloaded and used with master branch of Sahara. Plugin vendors could keep 
its own set of images and we can reflect it in the docs.

Regards,
Alexander Ignatov




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Policy for linking bug or bp in commit message

2014-05-29 Thread Ihar Hrachyshka

On 28/05/14 23:03, Nachi Ueno wrote:
 Hi folks
 
 OK, so it looks like this is the consensus in the community:
 
 Link a bug or bp for most commits. Exceptions for not linking a bug:
 (1) infra sync
 (2) minor fix (typo, small code refactor, doc string fix, etc.)
 
 Ihar, Assaf
 Sorry for taking time on this discussion; I'll remove my comment
 requesting a bug report.
 

There's nothing wrong with being cautious and asking the community. :) As a
benefit, we now have this issue kinda settled and can link to the
discussion on similar occasions when a similar point is raised.

Thanks for tracking the concern,
/Ihar


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Flavors] Flavor framework implementation questions

2014-05-29 Thread mar...@redhat.com
On 29/05/14 00:48, Eugene Nikanorov wrote:
 Hi,
 
 I have two questions that previously were briefly discussed. Both of them
 still cause discussions within advanced services community, so I'd like to
 make final clarification in this email thread.
 
 1. Usage of Service Type Framework
  I think right now there is a consensus that letting the user choose a vendor
  is not what neutron should do. To be more specific, having a public
  'provider' attribute on a resource may create certain problems
  for operators because it binds resource and implementation so tightly that
  it can't be maintained without changing user configuration.
 The question that was discussed is if it's ok to remove this public
 attribute and still use the rest of framework?
 I think the answer is yes: the binding between resource and implementing
 driver is ok if it's read-only and visible only to admin. This still serves
 as hint for dispatching requests to driver and also may help operator to
 troubleshoot issues.
 I'd like to hear other opinions on this because there are patches for vpn
 and fwaas that use STF with the difference described above.

pardon my ignorance, I don't know what STF is. I missed the summit
discussions ('provider attribute' on resources must have come up there).
My take on the 'specific vendor' issue is that there isn't one, given
the current proposal. From the discussion there I think there is a use
case for a user saying 'i want a firewall from foo_vendor' and as you
said it's just a specialised use-case of the flavor framework.

Furthermore, right now the 'capabilities' for Flavors are very loosely
defined (just key:value tags on Flavor resource). Why can't we just also
define a 'vendor:foo' capability and use it for that purpose. I imagine
I'm oversimplifying somewhere but would that not address the concerns?

marios
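
To illustrate the question, here is a minimal sketch of flavor capability tags
(including a hypothetical vendor:foo_vendor tag) being matched against
driver-advertised capabilities. This is not the proposed framework code, just
the matching idea:

    # Minimal illustration of capability-tag matching. The vendor tag and
    # the driver registry below are hypothetical.
    FLAVOR = {'name': 'gold-firewall',
              'capabilities': {'ha': 'true', 'vendor': 'foo_vendor'}}

    DRIVERS = [
        {'name': 'foo_fwaas',
         'capabilities': {'ha': 'true', 'vendor': 'foo_vendor'}},
        {'name': 'bar_fwaas',
         'capabilities': {'ha': 'false', 'vendor': 'bar_vendor'}},
    ]

    def select_driver(flavor, drivers):
        wanted = flavor['capabilities']
        for driver in drivers:
            caps = driver['capabilities']
            if all(caps.get(key) == value for key, value in wanted.items()):
                return driver
        return None

    print(select_driver(FLAVOR, DRIVERS)['name'])  # -> foo_fwaas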

 
 2. Choosing implementation and backend
 This topic was briefly touched at design session dedicated to flavors.
 
 Current proposal that was discussed on advanced services meetings was to
 employ 2-step scheduling as described in the following sample code:
 https://review.openstack.org/#/c/83055/7/neutron/tests/unit/test_flavors.pyline
 38-56
 
 In my opinion the proposed way has the following benefits:
 - it delegates actual backend choosing to a driver.
 This way different vendors may not be required to agree on how to bind
 resource to a backend or any kind of other common implementation stuff that
 usually leads to lots of discussions.
 - allows different configurable vendor-specific algorithms to be used when
 binding resource to a backend
 - some drivers don't have notion of a backend because external entities
 manage backends for them
 
 Another proposal is to have single-step scheduling where each driver
 exposes the list of backends
 and then scheduling just chooses the backend based on capabilities in the
 flavor.
 
 I'd like to better understand the benefits of the second approach (this is
 all implementation so from user standpoint it doesn't make difference)
 
 So please add your opinions on those questions.
 
 My suggestion is also to have a short 30 min meeting sometime this or next
 week to finalize those questions. There are several patches and blueprints
 that depend on them.
 
 Thanks,
 Eugene.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]TLS API support for authentication

2014-05-29 Thread Samuel Bercovici
+1 to Carlos.

In addition, it should be possible for LBaaS (it might be only the
LBaaS drivers) to get the information, including the private key, back so that
the backend can use it.
This means that a trusted communication channel between the driver and
Barbican needs to be established so that such information can be passed.
I am waiting for the code review in barbican to check that such capabilities
will be available.

Regards,
-Sam.



-Original Message-
From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
Sent: Wednesday, May 28, 2014 7:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]TLS API support for authentication


On May 27, 2014, at 9:13 PM, Stephen Balukoff sbaluk...@bluebox.net
 wrote:

 Hi y'all!
 
 I would advocate that if the user asks the front-end API for the private key 
 information (ie. GET request), what they get back is the key's modulus and 
 nothing else. This should work to verify whether a given key matches a given 
 cert, and doesn't break security requirements for those who are never allowed 
 to actually touch private key data. And if a user added the key themselves 
 through the front-end API, I think it's safe to assume the responsibility for 
 keeping a copy of the key they can access lies with the user.

I'm thinking at this point all user interaction with their cert and key should be
handled by barbican directly instead of through our API. It seems like we've
punted everything but the IDs to barbican. Returning the modulus is still RSA
centric though.


 
 Having said this, of course anything which spins up virtual appliances, or 
 configures physical appliances is going to need access to the actual private 
 key. So any back-end API(s) will probably need to have different behavior 
 here.
 
 One thing I do want to point out is that with the 'transient' nature of 
 back-end guests / virtual servers, it's probably going to be important to 
 store the private key data in something non-volatile, like barbican's store. 
 While it may be a good idea to add a feature that generates a private key and 
 certificate signing request via our API someday for certain organizations' 
 security requirements, one should never have the only store for this private 
 key be a single virtual server. This is also going to be important if a 
 certificate + key combination gets re-used in another listener in some way, 
 or when horizontal scaling features get added.

I don't think our API needs to handle the CSRs; it looks like barbican
aspires to do this, so our API really is pretty insulated.
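
For reference, a minimal sketch of the key-matches-cert check being discussed,
using the Python cryptography package; as noted above, comparing moduli like
this is RSA-specific:

    # Sketch of verifying that a private key matches a certificate by
    # comparing RSA moduli. Assumes PEM input and the 'cryptography' package.
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import serialization

    def key_matches_cert(key_pem, cert_pem):
        key = serialization.load_pem_private_key(
            key_pem, password=None, backend=default_backend())
        cert = x509.load_pem_x509_certificate(cert_pem, default_backend())
        key_modulus = key.private_numbers().public_numbers.n
        cert_modulus = cert.public_key().public_numbers().n
        return key_modulus == cert_modulus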

 
 Thanks,
 Stephen
 
 -- 
 Stephen Balukoff 
 Blue Box Group, LLC 
 (800)613-4305 x807
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova default quotas

2014-05-29 Thread Day, Phil


From: Kieran Spear [mailto:kisp...@gmail.com]
Sent: 28 May 2014 06:05
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] nova default quotas

Hi Joe,

On 28/05/2014, at 11:21 AM, Joe Gordon 
joe.gord...@gmail.commailto:joe.gord...@gmail.com wrote:




On Tue, May 27, 2014 at 1:30 PM, Kieran Spear 
kisp...@gmail.commailto:kisp...@gmail.com wrote:


On 28/05/2014, at 6:11 AM, Vishvananda Ishaya 
vishvana...@gmail.commailto:vishvana...@gmail.com wrote:

 Phil,

 You are correct and this seems to be an error. I don't think in the earlier 
 ML thread[1] that anyone remembered that the quota classes were being used 
 for default quotas. IMO we need to revert this removal as we (accidentally) 
 removed a Havana feature with no notification to the community. I've 
 reactivated a bug[2] and marked it critical.

+1.

We rely on this to set the default quotas in our cloud.

Hi Kieran,

Can you elaborate on this point? Do you actually use the full quota-class
functionality that allows for quota classes, and if so, what provides the quota
classes? If you only use this for setting the default quotas, why do you prefer
the API over setting the config file?

We just need the defaults. My comment was more to indicate that yes, this is 
being used by people. I'm sure we could switch to using the config file, and 
generally I prefer to keep configuration in code, but finding out about this 
half way through a release cycle isn't ideal.

I notice that only the API has been removed in Icehouse, so I'm assuming the 
impact is limited to *changing* the defaults, which we don't do often. I was 
initially worried that after upgrading to Icehouse we'd be left with either no 
quotas or whatever the config file defaults are, but it looks like this isn't 
the case.

Unfortunately the API removal in Nova was followed by similar changes in 
novaclient and Horizon, so fixing Icehouse at this point is probably going to 
be difficult.

[Day, Phil]  I think we should revert the changes in all three systems then.
We have the rules about not breaking API compatibility in place for a reason;
if we want to be taken seriously as a stable API then we need to be prepared to
roll back when we goof up.

Joe - was there a nova-specs BP for the change? I'm wondering how this one
slipped through.
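
For context, the Havana-era mechanism being discussed looked roughly like the
sketch below. It assumes a python-novaclient release that still ships the
quota_classes manager, i.e. before the removal under discussion.

    # Sketch of updating the special 'default' quota class, which is what
    # set the default quotas before the API/client removal. Assumes an
    # older python-novaclient with the quota_classes manager.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'password', 'admin',
                         'http://keystone.example.com:5000/v2.0/')

    # Roughly equivalent to: nova quota-class-update --instances 20 default
    nova.quota_classes.update('default', instances=20, cores=40, ram=51200)
    print(nova.quota_classes.get('default'))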


Cheers,
Kieran




Kieran


 Vish

 [1] 
 http://lists.openstack.org/pipermail/openstack-dev/2014-February/027574.html
 [2] https://bugs.launchpad.net/nova/+bug/1299517

 On May 27, 2014, at 12:19 PM, Day, Phil 
 philip@hp.commailto:philip@hp.com wrote:

 Hi Vish,

 I think quota classes have been removed from Nova now.

 Phil


 Sent from Samsung Mobile


  Original message 
 From: Vishvananda Ishaya
 Date:27/05/2014 19:24 (GMT+00:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] nova default quotas

 Are you aware that there is already a way to do this through the cli using 
 quota-class-update?

 http://docs.openstack.org/user-guide-admin/content/cli_set_quotas.html (near 
 the bottom)

 Are you suggesting that we also add the ability to use just regular 
 quota-update? I'm not sure i see the need for both.

 Vish

 On May 20, 2014, at 9:52 AM, Cazzolato, Sergio J 
 sergio.j.cazzol...@intel.commailto:sergio.j.cazzol...@intel.com wrote:

 I would like to hear your thoughts about an idea to add a way to manage the
 default quota values through the API.

 The idea is to use the current quota api, but sending 'default' instead of
 the tenant_id. This change would apply to the quota-show and quota-update
 methods.

 This approach will help to simplify the implementation of another blueprint 
 named per-flavor-quotas

 Feedback? Suggestions?


 Sergio Juan Cazzolato
 Intel Software Argentina

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev 

Re: [openstack-dev] Designate Incubation Request

2014-05-29 Thread Mac Innes, Kiall
On Thu, 2014-05-29 at 11:26 +0200, Thierry Carrez wrote:
 Back to the topic, the tension here is because DNS is seen as a
 network thing and therefore it sounds like it makes sense under
 Networking. But programs are not categories or themes. They are
 teams aligned on a mission statement. If the teams are different
 (Neutron and Designate) then it doesn't make sense to artificially
 merge
 them just because you think of networking as a theme. If the teams
 converge, yes it makes sense. If they don't, we should just create a
 new
 program. They are cheap and should reflect how we work, not the other
 way around.

+1

This is exactly how I feel about programs, and couldn't have said it
better myself :)

Thanks Thierry!

Kiall
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Designate Incubation Request

2014-05-29 Thread Mac Innes, Kiall
On Thu, 2014-05-29 at 08:00 +0930, Michael Davies wrote:
 On Thu, May 29, 2014 at 4:22 AM, Sean Dague s...@dague.net wrote:
  I would agree this doesn't make sense in Neutron.
 
  I do wonder if it makes sense in the Network program. I'm getting
  suspicious of the programs for projects model if every new project
  incubating in seems to need a new program. Which isn't really a
  reflection on designate, but possibly on our program structure.
 
 I also agree here - DNS isn't a program by itself in my opinion, it
 should be in a group of  other Network Application Services.

I disagree - but Thierry put it better than I could have ever said at
[1], so I'll just refer to his email :)

Thanks,
Kiall

[1]:
http://lists.openstack.org/pipermail/openstack-dev/2014-May/036213.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: backward compat

2014-05-29 Thread Sergey Lukjanov
Bunch of responses/thoughts:

 API

I'm ok that semantic additions could be done within one API version w/o
increasing the minor version. I like the idea of keeping only major API
versions starting from the next API version.
RE the backward compat period, for now 1-2 cycles is ok.

 Images

Agreed that we should publish release images for non-deprecated
plugins (somehow, all on infra or vendor-based).

On Thu, May 29, 2014 at 3:59 PM, Alexander Ignatov
aigna...@mirantis.com wrote:

 On 28 May 2014, at 17:14, Sergey Lukjanov slukja...@mirantis.com wrote:
 1. How should we handle addition of new functionality to the API,
 should we bump minor version and just add new endpoints?

 Agree with most of folks. No new versions on adding new endpoints.
 Semantic changes require new major version of rest api.

 2. For which period of time should we keep deprecated API and client for it?

 One release cycle for deprecation period.

 3. How to publish all images and/or keep stability of building images
 for plugins?


 We should keep all images for all plugins (non-deprecated as Matt mentioned)
 for each release. In addition we could keep  at least one image which could be
 downloaded and used with master branch of Sahara. Plugin vendors could keep
 its own set of images and we can reflect it in the docs.

 Regards,
 Alexander Ignatov




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: subprojects

2014-05-29 Thread Sergey Lukjanov
Re sahara-image-elements, we found a bunch of issues that we should
solve, and that's why I think that keeping the current releasing is still
the best option.

- We should test it better and depend on a stable diskimage-builder version.
dib is now published to pypi, so we could make
sahara-image-elements dib-style and publish it to pypi in the same
way. That lets us add some sanity tests for image checking
and add gate jobs for running them (it could be done anyway, but this
approach with a separate repo looks more consistent). By developing
sahara-image-elements as a pip-installable project we could add
diskimage-builder to its requirements.txt and manage its
version; that'll give us good flexibility - for example, we'll be
able to specify the latest released dib.
- All scripts and dib will not be installed with sahara (50/50).

On Thu, May 29, 2014 at 3:57 PM, Matthew Farrellee m...@redhat.com wrote:
 On 05/29/2014 07:23 AM, Alexander Ignatov wrote:

 On 28 May 2014, at 20:02, Sergey Lukjanov slukja...@mirantis.com wrote:


 sahara-image-elements


 We're agreed that some common parts should be merged into the
 diskimage-builder repo (like java support, ssh, etc.). The main issue
 of keeping -image-elements separated is how to release them and
 provide mapping sahara version - elements version. You can find
 different options in etherpad [0], I'll write here about the option
 that I think will work best for us.

 So, the idea is that sahara-image-elements is a bunch of scripts and
 tools for building images for Sahara. It's high coupled with plugins's
 code in Sahara, so, we need to align them good. Current default
 decision is to keep aligned versioning like 2014.1 and etc. It'll be
 discussed on the weekly irc team meeting May 29.


 I vote to keep sahara-image-elements as separate repo and release it
 as you Sergey propose. I see problems with sahara-ci when running all
 bunch
 of integration tests for checking image-elements and core sahara code
 on each patch sent to sahara repo in case of collapsed two repos.


 this problem was raised during the design summit and i thought the
 resolution was that the sahara-ci could be smart about which set of itests
 it ran. for instance, a change in the elements would trigger image rebuild,
 a change outside the elements would trigger service itests. a change that
 covered both elements and the service could trigger all tests.

 is that still possible?


 best,


 matt


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: subprojects

2014-05-29 Thread Matthew Farrellee

On 05/29/2014 09:59 AM, Trevor McKay wrote:

below, sahara-extra



sahara-extra


Keep it as is, no need to stop releasing, because we're not publishing
anything to pypi. No real need for tags.


Even if we keep the repo for now, I think we could simplify a little
bit.  The edp-examples could be moved to the Sahara repo.  Some of those
examples we use in the integration tests anyway -- why have them
duplicated?


+1


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]net-create fail without definite segmentation_id

2014-05-29 Thread ZZelle
Hi,

The blueprint lets admins provide some provider attributes and lets neutron
choose the remaining attributes.
The blueprint specification and associated implementation are under
review [1].

[1]
https://review.openstack.org/#/q/topic:bp/provider-network-partial-specs,n,z


On Thu, May 29, 2014 at 1:23 PM, Henry Gessau ges...@cisco.com wrote:

 On 5/29/2014 4:41 AM, Xurong Yang wrote:
  Hi, stackers
  if i define provider when creating network, but no segmentation_id,
 net-create fail. why not allocate segmentation_id automatically?
  ~$ neutron net-create test --provider:network_type=vlan
 --provider:physical_network=default
  Invalid input for operation: segmentation_id required for VLAN provider
 network.

 There is a blueprint[1] to address this, with a spec[2] and some code
 proposed
 that you can find by searching for topic:bp/provider-network-partial-specs.
 Only for the ML2 plugin.

 [1]
 https://blueprints.launchpad.net/neutron/+spec/provider-network-partial-specs
 [2] https://review.openstack.org/91540


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: subprojects

2014-05-29 Thread Sergey Lukjanov
 client

We have a separate launchpad project for the client and we're publishing
client releases to it.

 extra

I'm neutral about moving job examples and API samples between sahara and extra

 ui tests

If we are able to remove sahara-dashboard before good integration
tests for our pages are done in horizon, then we could move them to
the extra repo as a temporary place, but sahara is the wrong place for
UI tests due to the different layers.

On Thu, May 29, 2014 at 5:59 PM, Trevor McKay tmc...@redhat.com wrote:
 below, sahara-extra

 On Wed, 2014-05-28 at 20:02 +0400, Sergey Lukjanov wrote:
 Hey folks,

 it's a small wrap-up for the topic Sahara subprojects releasing and
 versioning that was discussed partially on summit and requires some
 more discussions. You can find details in [0].

  common

 We'll include only one tarball for sahara to the release launchpad
 pages. All other links will be provided in docs.

  sahara-dashboard

 The merging to Horizon process is now in progress. We've decided that
 j1 is the deadline for merging main code parts and during the j2 all
 the code should be merged into Horizon, so, if in time of j2 we'll
 have some work on merging sahara-dashboard to Horizon not done we'll
 need to fallback to the separated sahara-dashboard repo release for
 Juno cycle and continue merging the code into the Horizon to be able
 to completely kill sahara-dashboard repo in K release.

 Where we should keep our UI integration tests?

  sahara-image-elements

 We're agreed that some common parts should be merged into the
 diskimage-builder repo (like java support, ssh, etc.). The main issue
 of keeping -image-elements separated is how to release them and
 provide mapping sahara version - elements version. You can find
 different options in etherpad [0], I'll write here about the option
 that I think will work best for us.

 So, the idea is that sahara-image-elements is a bunch of scripts and
 tools for building images for Sahara. It's high coupled with plugins's
 code in Sahara, so, we need to align them good. Current default
 decision is to keep aligned versioning like 2014.1 and etc. It'll be
 discussed on the weekly irc team meeting May 29.

  sahara-extra

 Keep it as is, no need to stop releasing, because we're not publishing
 anything to pypi. No real need for tags.

 Even if we keep the repo for now, I think we could simplify a little
 bit.  The edp-examples could be moved to the Sahara repo.  Some of those
 examples we use in the integration tests anyway -- why have them
 duplicated?



  open questions

 If you have any objections for this model, please, share your thoughts
 before June 3 due to the Juno-1 (June 12) to have enough time to apply
 selected approach.

 [0] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward

 Thanks.




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: subprojects

2014-05-29 Thread Trevor McKay
below, sahara-extra

On Wed, 2014-05-28 at 20:02 +0400, Sergey Lukjanov wrote:
 Hey folks,
 
 it's a small wrap-up for the topic Sahara subprojects releasing and
 versioning that was discussed partially on summit and requires some
 more discussions. You can find details in [0].
 
  common
 
 We'll include only one tarball for sahara to the release launchpad
 pages. All other links will be provided in docs.
 
  sahara-dashboard
 
 The merging to Horizon process is now in progress. We've decided that
 j1 is the deadline for merging main code parts and during the j2 all
 the code should be merged into Horizon, so, if in time of j2 we'll
 have some work on merging sahara-dashboard to Horizon not done we'll
 need to fallback to the separated sahara-dashboard repo release for
 Juno cycle and continue merging the code into the Horizon to be able
 to completely kill sahara-dashboard repo in K release.
 
 Where we should keep our UI integration tests?
 
  sahara-image-elements
 
 We're agreed that some common parts should be merged into the
 diskimage-builder repo (like java support, ssh, etc.). The main issue
 of keeping -image-elements separated is how to release them and
 provide mapping sahara version - elements version. You can find
 different options in etherpad [0], I'll write here about the option
 that I think will work best for us.
 
 So, the idea is that sahara-image-elements is a bunch of scripts and
 tools for building images for Sahara. It's high coupled with plugins's
 code in Sahara, so, we need to align them good. Current default
 decision is to keep aligned versioning like 2014.1 and etc. It'll be
 discussed on the weekly irc team meeting May 29.
 
  sahara-extra
 
 Keep it as is, no need to stop releasing, because we're not publishing
 anything to pypi. No real need for tags.

Even if we keep the repo for now, I think we could simplify a little
bit.  The edp-examples could be moved to the Sahara repo.  Some of those
examples we use in the integration tests anyway -- why have them
duplicated?

 
 
  open questions
 
 If you have any objections for this model, please, share your thoughts
 before June 3 due to the Juno-1 (June 12) to have enough time to apply
 selected approach.
 
 [0] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward
 
 Thanks.
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: subprojects

2014-05-29 Thread Michael McCune


- Original Message -
   sahara-extra
  
  Keep it as is, no need to stop releasing, because we're not publishing
  anything to pypi. No real need for tags.
 
 Even if we keep the repo for now, I think we could simplify a little
 bit.  The edp-examples could be moved to the Sahara repo.  Some of those
 examples we use in the integration tests anyway -- why have them
 duplicated?

+1

i think having the examples in the sahara repo makes it much easier for new 
users.

mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: subprojects

2014-05-29 Thread Michael McCune


- Original Message -
 Re sahara-image-elements we found a bunch of issues that we should
 solve and that's why I think that keeping current releasing is still
 the best option.
 
 - we should test it better and depend on stable diskimage-builder version
 The dib is now published to pypi, so, we could make
 sahara-image-elements in dib-style and publish it to pypi in the same
 style. It makes us able to add some sanity tests for images checking
 and add gate jobs for running them (it could be done anyway, but this
 approach with separated repo looks more consistent). Developing
 sahara-image-elements as a pip-installable project we could add
 diskimage-builder to the requirements.txt of it and manage it's
 version, it'll provide us good flexibility - for example, we'll be
 able to specify to use latest release dib.
 - all scripts and dib will not be installed with sahara (50/50)

I think if we are going to make sahara-image-elements into a full-fledged pypi
package we should refactor diskimage-create.sh into a python script. It will
give us better options for argument parsing and, I feel, more control over the
flow of operations.

mike
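
A bare-bones sketch of what such an entry point could look like; the flag names,
plugin list and element names are illustrative, not the script's actual
interface:

    # Bare-bones sketch of a Python replacement for diskimage-create.sh.
    # Flag names, plugin list and element names are assumptions.
    import argparse
    import sys

    def main(argv=None):
        parser = argparse.ArgumentParser(
            description='Build Sahara images with diskimage-builder.')
        parser.add_argument('-p', '--plugin', required=True,
                            choices=['vanilla', 'hdp', 'spark'],
                            help='Sahara plugin to build an image for')
        parser.add_argument('-v', '--hadoop-version',
                            help='Hadoop version to install')
        parser.add_argument('-i', '--base-image', default='ubuntu',
                            choices=['ubuntu', 'fedora', 'centos'],
                            help='base OS for the image')
        args = parser.parse_args(argv)

        # Here the real script would assemble the element list and shell
        # out to disk-image-create from diskimage-builder.
        elements = ['vm', args.base_image, 'hadoop-' + args.plugin]
        print('would run: disk-image-create ' + ' '.join(elements))
        return 0

    if __name__ == '__main__':
        sys.exit(main())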

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: backward compat

2014-05-29 Thread Sergey Lukjanov
So, it looks like we have an agreement on all questions.

There is only one technical question - keeping release images means
that we need to keep the whole matrix of images: plugin X version X
OS [X root-password]. I'll take a look at their total size and the
ability to publish them on OS infra.

On Thu, May 29, 2014 at 5:54 PM, Trevor McKay tmc...@redhat.com wrote:
 Catching up...

 On Thu, 2014-05-29 at 15:59 +0400, Alexander Ignatov wrote:
 On 28 May 2014, at 17:14, Sergey Lukjanov slukja...@mirantis.com wrote:
  1. How should we handle addition of new functionality to the API,
  should we bump minor version and just add new endpoints?

 Agree with most of folks. No new versions on adding new endpoints.
 Semantic changes require new major version of rest api.

 +1 this and previous comments.  I don't think we'll generate too many
 semantic changes (but I could be wrong :) )

 I agree with Mike that we should have simple version numbers, v1, v2, v3

  2. For which period of time should we keep deprecated API and client for 
  it?

 One release cycle for deprecation period.

 +1.  If we give folks N cycles, they will always wait until the Nth
 cycle to move away.  Might as well be 1.


  3. How to publish all images and/or keep stability of building images
  for plugins?
 

 We should keep all images for all plugins (non-deprecated as Matt mentioned)
 for each release. In addition we could keep  at least one image which could 
 be
 downloaded and used with master branch of Sahara. Plugin vendors could keep
 its own set of images and we can reflect it in the docs.

 I agree with keeping all images grouped with a release for all supported
 plugins in that release.

 Are we suggesting here that there are 2 places to find images, one in
 the Sahara releases and a second in a vendor repo listed in the docs?

 Regards,
 Alexander Ignatov




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: subprojects

2014-05-29 Thread Matthew Farrellee

On 05/29/2014 10:15 AM, Michael McCune wrote:



- Original Message -

Re sahara-image-elements we found a bunch of issues that we should
solve and that's why I think that keeping current releasing is still
the best option.

- we should test it better and depend on stable diskimage-builder version
The dib is now published to pypi, so, we could make
sahara-image-elements in dib-style and publish it to pypi in the same
style. It makes us able to add some sanity tests for images checking
and add gate jobs for running them (it could be done anyway, but this
approach with separated repo looks more consistent). Developing
sahara-image-elements as a pip-installable project we could add
diskimage-builder to the requirements.txt of it and manage it's
version, it'll provide us good flexibility - for example, we'll be
able to specify to use latest release dib.
- all scripts and dib will not be installed with sahara (50/50)


I think if we are going to make sahara-image-elements into a
full-fledged pypi package we should refactor diskimage-create.sh into
a python script. It will give up better options for argument parsing
and I feel more control over the flow of operations.

mike


the image-elements is too unstable to be used by anyone but an expert at 
this point. imho we should make sure the experts produce working images 
first, it's what our users will need in the first place, then make the 
image generation more stable.


best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: subprojects

2014-05-29 Thread Michael McCune


- Original Message -
 the image-elements is too unstable to be used by anyone but an expert at
 this point. imho we should make sure the experts produce working images
 first, it's what our users will need in the first place, then make the
 image generation more stable.

+1

mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: backward compat

2014-05-29 Thread Matthew Farrellee

On 05/29/2014 10:22 AM, Sergey Lukjanov wrote:

So, it looks like we have an agreement on all question.

There is only one technical question - keeping release images means
that we need to keep the whole matrix of images: plugin X version X
OSy [X root-passwdord]. I'll take a look on total size of them and
ability to publish them on OS infra.


that's definitely an upper bound. in practice it will be considerably less.

for juno we'd have -

 . vanilla hadoop1 fedora
 . vanilla hadoop1 ubuntu
 . vanilla hadoop1 centos6
 . ?vanilla hadoop1 centos7?
 . vanilla hadoop2 fedora
 . vanilla hadoop2 ubuntu
 . vanilla hadoop2 centos6
 . ?vanilla hadoop2 centos7?

 . hdp hadoop1 centos
 . hdp hadoop2 centos

 . spark ubuntu
 . ?spark fedora?
 . ?spark centos?
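
(purely as an illustration of the full cross product vs. a curated list like
the one above - the dimension values below are examples, not an agreed
support matrix - a quick enumeration looks like this:)

# Illustration only: compare the plugin x version x OS cross product
# (the upper bound) with a curated list such as the one above.
import itertools

plugins = ["vanilla", "hdp", "spark"]
hadoop_versions = ["hadoop1", "hadoop2"]
distros = ["fedora", "ubuntu", "centos6", "centos7"]

upper_bound = list(itertools.product(plugins, hadoop_versions, distros))
curated = [
    ("vanilla", "hadoop1", "ubuntu"), ("vanilla", "hadoop2", "ubuntu"),
    ("hdp", "hadoop1", "centos6"), ("hdp", "hadoop2", "centos6"),
    ("spark", "n/a", "ubuntu"),
]
print("upper bound: %d images, curated: %d images"
      % (len(upper_bound), len(curated)))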

i do not think we should release any images that have a root password 
set (essentially a backdoor).


for K we should deprecate the hadoop1 versions and thus significantly 
cut the size of the new image artifact.


best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] BPs for Juno-1

2014-05-29 Thread Tomoe Sugihara
Hello Neutron core team,

We have three specs[1][2][3] submitted last week, which have gotten +1s
from non-core contributors.
It would be great if core devs could review them and give us some feedback.
(I noticed that I didn't include links to the gerrit reviews in the
launchpad BPs and just fixed them.)

They are all plugin-specific changes on our side, so they shouldn't affect
the core or other parts of Neutron.
Note that our 3rd party CI system is now running in good shape, so I
believe there are no major issues with these specs going through the
process.

If there are issues, we'd like to address them as soon as possible.
I look forward to getting your feedback.

Best Regards,
Tomoe

[1] Add blueprint for MidoNet quotas ext support:
https://review.openstack.org/#/c/94553/
[2] Add blueprint for MidoNet L3-ext-gw-modes support:
https://review.openstack.org/#/c/94785/
[3] Add spec for Update MidoNet plugin in Juno release:
https://review.openstack.org/#/c/95100/



On Thu, May 29, 2014 at 3:39 PM, mar...@redhat.com mandr...@redhat.com
wrote:

 On 28/05/14 17:57, Kyle Mestery wrote:
  On Wed, May 28, 2014 at 12:41 AM, mar...@redhat.com mandr...@redhat.com
 wrote:
  On 27/05/14 17:14, Kyle Mestery wrote:
  Hi Neutron developers:
 
  I've spent some time cleaning up the BPs for Juno-1, and they are
  documented at the link below [1]. There are a large number of BPs
  currently under review right now in neutron-specs. If we land some of
  those specs this week, it's possible some of these could make it into
  Juno-1, pending review cycles and such. But I just wanted to highlight
  that I removed a large number of BPs from targeting Juno-1 now which
  did not have specifications linked to them nor specifications which
  were actively under review in neutron-specs.
 
  Also, a gentle reminder that the process for submitting specifications
  to Neutron is documented here [2].
 
  Thanks, and please reach out to me if you have any questions!
 
 
  Hi Kyle,
 
  Can you please consider my PUT /subnets/subnet allocation_pools:{}
  review at [1] for Juno-1? Also, I see you have included a bug [2] and
  an associated review [3] that I've worked on, but that review is already
  pushed to master. Is it there for any pending backports?
 
  Done, I've added the bug referenced in [2] to Juno-1.

  Thanks!

 
  With regards to [3] below, are you saying you would like to submit
  that as a backport to stable?

 No I was more asking if that was the reason it was included (as it has
 already been merged) - though I can do that if you think it's a good idea,

 thanks, marios


 
  thanks! marios
 
  [1] https://review.openstack.org/#/c/62042/
  [2] https://bugs.launchpad.net/neutron/+bug/1255338
  [3] https://review.openstack.org/#/c/59212/
 
 
 
  Kyle
 
  [1] https://launchpad.net/neutron/+milestone/juno-1
  [2] https://wiki.openstack.org/wiki/Blueprints#Neutron
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: backward compat

2014-05-29 Thread Alexander Ignatov
On 29 May 2014, at 18:43, Matthew Farrellee m...@redhat.com wrote:

 i do not think we should release any images that have a root password set 
 (essentially a backdoor).
 
 for K we should deprecate the hadoop1 versions and thus significantly cut the 
 size of the new image artifact.
 


Agreed, don't publish images with a root password. This option is made for debug
purposes, and if needed users may build their own image for that.


Regards,
Alexander Ignatov




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Flavors] Flavor framework implementation questions

2014-05-29 Thread Gary Duan
Hi, Marios,

STF stands for 'service type framework'. It's the current way to dispatch
calls to different drivers based on the 'provider' attribute of the LBaaS
service instance. The Firewall and VPN implementations were not upstreamed
because we want to move to the Flavor Framework.

I think the flavor framework does allow the operator to expose vendor names
if the operator chooses to.
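
For anyone catching up on the thread, here is a minimal, hypothetical sketch
of that kind of flavor-based dispatch plus the two-step scheduling discussed
in Eugene's message below; the class and capability names are made up for
illustration and are not Neutron's actual STF or flavor code:

# Hypothetical illustration (not Neutron code): step 1 picks a driver whose
# advertised capabilities satisfy the flavor; step 2 lets that driver choose
# one of its own backends for the resource.

class Flavor(object):
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities  # e.g. {"vendor": "foo", "ha": True}


class Driver(object):
    capabilities = {}

    def matches(self, flavor):
        # The driver satisfies the flavor if it provides every capability
        # the flavor asks for.
        return all(self.capabilities.get(k) == v
                   for k, v in flavor.capabilities.items())

    def schedule_backend(self, resource):
        raise NotImplementedError


class FooLoadBalancerDriver(Driver):
    capabilities = {"vendor": "foo", "ha": True}

    def schedule_backend(self, resource):
        # Step 2: vendor-specific choice of backend, invisible to the user.
        return "foo-appliance-1"


def dispatch(resource, flavor, drivers):
    # Step 1: resource-to-driver binding based on the flavor.
    for driver in drivers:
        if driver.matches(flavor):
            return driver, driver.schedule_backend(resource)
    raise LookupError("no driver satisfies flavor %s" % flavor.name)


gold = Flavor("gold", {"vendor": "foo", "ha": True})
print(dispatch({"id": "lb-1"}, gold, [FooLoadBalancerDriver()]))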

Thanks,
Gary


 On Thu, May 29, 2014 at 5:38 AM, mar...@redhat.com mandr...@redhat.com wrote:

 On 29/05/14 00:48, Eugene Nikanorov wrote:
  Hi,
 
  I have two questions that previously were briefly discussed. Both of them
  still cause discussions within advanced services community, so I'd like
 to
  make final clarification in this email thread.
 
  1. Usage of Service Type Framework
  I think right now there is a consensus that letting the user choose a
  vendor is not what neutron should do. To be more specific, having a public
  'provider' attribute on a resource may create certain problems for
  operators because it binds the resource and the implementation so tightly
  that it can't be maintained without changing user configuration.
  The question that was discussed is whether it's ok to remove this public
  attribute and still use the rest of the framework.
  I think the answer is yes: the binding between the resource and the
  implementing driver is ok if it's read-only and visible only to the admin.
  This still serves as a hint for dispatching requests to a driver and may
  also help the operator troubleshoot issues.
  I'd like to hear other opinions on this because there are patches for vpn
  and fwaas that use STF with the difference described above.

 pardon my ignorance, I don't know what STF is. I missed the summit
 discussions ('provider attribute' on resources must have come up there).
 My take on the 'specific vendor' issue is that there isn't one, given
 the current proposal. From the discussion there I think there is a use
 case for a user saying 'i want a firewall from foo_vendor' and as you
 said it's just a specialised use-case of the flavor framework.

 Furthermore, right now the 'capabilities' for Flavors are very loosely
 defined (just key:value tags on Flavor resource). Why can't we just also
 define a 'vendor:foo' capability and use it for that purpose. I imagine
 I'm oversimplifying somewhere but would that not address the concerns?

 marios

 
  2. Choosing implementation and backend
  This topic was briefly touched at design session dedicated to flavors.
 
  Current proposal that was discussed on advanced services meetings was to
  employ 2-step scheduling as described in the following sample code:
 
 https://review.openstack.org/#/c/83055/7/neutron/tests/unit/test_flavors.pyline
  38-56
 
  In my opinion the proposed way has the following benefits:
  - it delegates actual backend choosing to a driver.
  This way different vendors may not be required to agree on how to bind
  resource to a backend or any kind of other common implementation stuff
 that
  usually leads to lots of discussions.
  - allows different configurable vendor-specific algorithms to be used
 when
  binding resource to a backend
  - some drivers don't have notion of a backend because external entities
  manage backends for them
 
  Another proposal is to have single-step scheduling where each driver
  exposes the list of backends
  and then scheduling just chooses the backend based on capabilities in the
  flavor.
 
  I'd like to better understand the benefits of the second approach (this
 is
  all implementation so from user standpoint it doesn't make difference)
 
  So please add your opinions on those questions.
 
  My suggestion is also to have a short 30 min meeting sometime this or
 next
  week to finalize those questions. There are several patches and
 blueprints
  that depend on them.
 
  Thanks,
  Eugene.
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: subprojects

2014-05-29 Thread Chad Roberts
I agree with this.  No real sense in leaving image generation up to novice 
users at this point.
+1

- Original Message -
From: Michael McCune mimcc...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Thursday, May 29, 2014 10:39:50 AM
Subject: Re: [openstack-dev] [sahara] summit wrap-up: subprojects



- Original Message -
 the image-elements is too unstable to be used by anyone but an expert at
 this point. imho we should make sure the experts produce working images
 first, it's what our users will need in the first place, then make the
 image generation more stable.

+1

mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-29 Thread Paul Ward

Well, for my specific error, it was an intermittent ssl handshake error
before the request was ever sent to the
neutron-server.  In our case, we saw that 4 out of 5 resize operations
worked, while the fifth failed with this ssl
handshake error in neutronclient.

I certainly think a GET is safe to retry, and I agree with your statement
that PUTs and DELETEs probably
are as well.  This still leaves a change needed in nova to
actually a) specify a conf option and
b) pass it to neutronclient where appropriate.
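
To make that concrete, here is a minimal sketch of an opt-in,
idempotent-only retry wrapper; this is not the actual neutronclient
retry_request() code, and the function and option names are assumptions:

# Hypothetical sketch: retry only idempotent calls a configurable number of
# times on transient SSL errors. Names are illustrative, not neutronclient's
# real API.
import ssl
import time

IDEMPOTENT_METHODS = ("GET", "PUT", "DELETE")


def request_with_retries(do_request, method, url, retries=0, delay=1,
                         **kwargs):
    # POSTs still get a single attempt, so a lost response cannot cause a
    # duplicate create; only idempotent methods honour the retries option.
    attempts = retries + 1 if method in IDEMPOTENT_METHODS else 1
    for attempt in range(attempts):
        try:
            return do_request(method, url, **kwargs)
        except ssl.SSLError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)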


Aaron Rosen aaronoro...@gmail.com wrote on 05/28/2014 07:38:56 PM:

 From: Aaron Rosen aaronoro...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 05/28/2014 07:44 PM
 Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient

 Hi,

 I'm curious if other openstack clients implement this type of retry
 thing. I think retrying on GET/DELETES/PUT's should probably be okay.

 What types of errors do you see in the neutron-server when it fails
 to respond? I think it would be better to move the retry logic into
 the server around the failures rather than the client (or better yet
 if we fixed the server :)). Most of the times I've seen this type of
 failure is due to deadlock errors caused between (sqlalchemy and
 eventlet *i think*) which cause the client to eventually timeout.

 Best,

 Aaron


 On Wed, May 28, 2014 at 11:51 AM, Paul Ward wpw...@us.ibm.com wrote:
 Would it be feasible to make the retry logic only apply to read-only
 operations?  This would still require a nova change to specify the
 number of retries, but it'd also prevent invokers from shooting
 themselves in the foot if they call for a write operation.



 Aaron Rosen aaronoro...@gmail.com wrote on 05/27/2014 09:40:00 PM:

  From: Aaron Rosen aaronoro...@gmail.com

  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 05/27/2014 09:44 PM

  Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient
 
  Hi,

 
  Is it possible to detect when the ssl handshaking error occurs on
  the client side (and only retry for that)? If so I think we should
  do that rather than retrying multiple times. The danger here is
  mostly for POST operations (as Eugene pointed out) where it's
  possible for the response to not make it back to the client and for
  the operation to actually succeed.
 
  Having this retry logic nested in the client also prevents things
  like nova from handling these types of failures individually since
  this retry logic is happening inside of the client. I think it would
  be better not to have this internal mechanism in the client and
  instead make the user of the client implement retry so they are
  aware of failures.
 
  Aaron
 

  On Tue, May 27, 2014 at 10:48 AM, Paul Ward wpw...@us.ibm.com wrote:
  Currently, neutronclient is hardcoded to only try a request once in
  retry_request by virtue of the fact that it uses self.retries as the
  retry count, and that's initialized to 0 and never changed.  We've
  seen an issue where we get an ssl handshaking error intermittently
  (seems like more of an ssl bug) and a retry would probably have
  worked.  Yet, since neutronclient only tries once and gives up, it
  fails the entire operation.  Here is the code in question:
 
  https://github.com/openstack/python-neutronclient/blob/master/
  neutronclient/v2_0/client.py#L1296
 
  Does anybody know if there's some explicit reason we don't currently
  allow configuring the number of retries?  If not, I'm inclined to
  propose a change for just that.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Hide CI comments in Gerrit

2014-05-29 Thread Yuriy Taraday
On Tue, May 27, 2014 at 6:07 PM, James E. Blair jebl...@openstack.org wrote:

 I wonder if it would
 be possible to detect them based on the presence of a Verified vote?


Not all CIs always add a vote. Only 3 or so of Neutron's over 9000 CIs put
their +/-1s on the change.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Use of AngularJS

2014-05-29 Thread Musso, Veronica A
Hello,

During the last Summit the use of AngularJS in Horizon was discussed, and there
is an intention to make better use of it in the dashboards.
I think this blueprint could help:
https://blueprints.launchpad.net/horizon/+spec/django-angular-integration,
since it proposes the integration of Django-Angular
(http://django-angular.readthedocs.org/en/latest/index.html).
I would like to know the community's opinion about it, since I could start its
implementation.

Thanks!

Best Regards,
Verónica Musso

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Policy for linking bug or bp in commit message

2014-05-29 Thread Yuriy Taraday
On Wed, May 28, 2014 at 3:54 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Fri, May 23, 2014 at 1:13 PM, Nachi Ueno na...@ntti3.com wrote:

 (2) Avoid duplication of works
 I have several experience of this.  Anyway, we should encourage people
 to check listed bug before
 writing patches.


 That's a very good point, but I don't think requiring a bug/bp for every
 patch is a good way to address this. Perhaps there is another way.


We can require developers to either link to a bp/bug or explicitly add a
Minor-fix line to the commit message.
I think that would force the commit author to at least think about whether
the commit is worth a bug/bp or not.
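
As a rough illustration of how such a rule could be checked automatically,
here is a minimal commit-message validator; the accepted tags and the script
itself are hypothetical, not an existing gate job:

#!/usr/bin/env python
# Hypothetical commit-msg check: require a bug/blueprint reference or an
# explicit Minor-fix tag. Tag names are illustrative, not an agreed policy.
import re
import sys

REQUIRED = (
    re.compile(r"^Closes-Bug:\s*#?\d+", re.MULTILINE),
    re.compile(r"^Partial-Bug:\s*#?\d+", re.MULTILINE),
    re.compile(r"^(Implements|Partially-Implements):\s*blueprint\s+\S+",
               re.MULTILINE | re.IGNORECASE),
    re.compile(r"^Minor-fix\s*$", re.MULTILINE),
)


def main(path):
    with open(path) as f:
        message = f.read()
    if any(pattern.search(message) for pattern in REQUIRED):
        return 0
    sys.stderr.write("Commit message needs a bug/blueprint reference "
                     "or a Minor-fix line.\n")
    return 1


if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))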

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Tuskar-UI] Location for common dashboard code?

2014-05-29 Thread Lyle, David
I think this falls in line with other items we are working toward in
Horizon, namely more pluggable components on panels.

I think creating a directory in openstack_dashboard for these reusable
components makes a lot of sense, and usage should eventually be moved
there. I would suggest something as mundane as "openstack_dashboard/common".
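
A hedged sketch of what a reusable component in such a directory might look
like, assuming Horizon's DataTable API; the module path and column set are
purely illustrative:

# Hypothetical openstack_dashboard/common/flavor_tables.py: a shared table
# that both the admin dashboard and tuskar-ui could extend, instead of
# tuskar-ui importing classes from the admin dashboard directly.
from django.utils.translation import ugettext_lazy as _

from horizon import tables


class BaseFlavorsTable(tables.DataTable):
    name = tables.Column("name", verbose_name=_("Flavor Name"))
    vcpus = tables.Column("vcpus", verbose_name=_("VCPUs"))
    ram = tables.Column("ram", verbose_name=_("RAM (MB)"))

    class Meta:
        name = "flavors"
        verbose_name = _("Flavors")


# A dashboard-specific table would then extend the common one:
class AdminFlavorsTable(BaseFlavorsTable):
    class Meta:
        name = "flavors"
        verbose_name = _("Flavors")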

David

On 5/28/14, 10:36 AM, Tzu-Mainn Chen tzuma...@redhat.com wrote:

Heya,

Tuskar-UI is currently extending classes directly from
openstack-dashboard.  For example, right now
our UI for Flavors extends classes in both
openstack_dashboard.dashboards.admin.flavors.tables and
openstack_dashboard.dashboards.admin.flavors.workflows.  In the future,
this sort of pattern will
increase; we anticipate doing similar things with Heat code in
openstack-dashboard.

However, since tuskar-ui is intended to be a separate dashboard that has
the potential to live
away from openstack-dashboard, it does feel odd to directly extend
openstack-dashboard dashboard
components.  Is there a separate place where such code might live?
Something similar in concept
to 
https://github.com/openstack/horizon/tree/master/openstack_dashboard/usage
 ?


Thanks,
Tzu-Mainn Chen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Tuskar-UI] Location for common dashboard code?

2014-05-29 Thread Lyle, David
We are in the process of removing the redundancy between Project and Admin
by using RBAC to allow sharing of one code base for multiple roles. This
is a WIP.

David

On 5/28/14, 1:53 PM, Tzu-Mainn Chen tzuma...@redhat.com wrote:

Hi Doug,

Thanks for the response!  I agree with you in the cases where we are
extending
things like panels; if you're extending those, you're extending the
dashboard
itself.  However, things such as workflows feel like they could
reasonably live
independently of the dashboard for re-use elsewhere.

Incidentally, I know that within openstack_dashboard there are cases
where, say,
the admin dashboard extends instances tables from the project dashboard.
That
feels a bit odd to me; wouldn't it be cleaner to have both dashboards
extend
some common instances table that lives independently of either dashboard?

Thanks,
Tzu-Mainn Chen

- Original Message -
 Hey Tzu-Mainn,
 
 I've actually discouraged people from doing this sort of thing when
 customizing Horizon.  IMO it's risky to extend those panels because they
 really aren't intended as extension points.  We intend Horizon to be
 extensible by adding additional panels or dashboards.  I know you are
 closely involved in Horizon development, so you are able to manage
 that better than most customizers.
 
 Still, I wonder if we can better address this for Tuskar-UI as well as
 other situations by defining extensibility points in the dashboard
panels
 and workflows themselves.  Like well defined ways to add/show a column
of
 data, add/hide row actions, add/skip a workflow step, override text
 elements, etc.  Is it viable to create a few well defined extension
points
 and meet your need to modify existing dashboard panels?
 
 In any case, it seems to me that if you are overriding the dashboard
 panels, it's reasonable that tuskar-ui should be dependent on the
 dashboard.
 
 Doug Fish
 
 
 
 
 
 From:Tzu-Mainn Chen tzuma...@redhat.com
 To:  OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date:05/28/2014 11:40 AM
 Subject: [openstack-dev] [Horizon][Tuskar-UI] Location for common
 dashboardcode?
 
 
 
 Heya,
 
 Tuskar-UI is currently extending classes directly from
openstack-dashboard.
 For example, right now
 our UI for Flavors extends classes in both
 openstack_dashboard.dashboards.admin.flavors.tables and
 openstack_dashboard.dashboards.admin.flavors.workflows.  In the future,
 this sort of pattern will
 increase; we anticipate doing similar things with Heat code in
 openstack-dashboard.
 
 However, since tuskar-ui is intended to be a separate dashboard that has
 the potential to live
 away from openstack-dashboard, it does feel odd to directly extend
 openstack-dashboard dashboard
 components.  Is there a separate place where such code might live?
 Something similar in concept
 to
 
https://github.com/openstack/horizon/tree/master/openstack_dashboard/usag
e
 ?
 
 
 Thanks,
 Tzu-Mainn Chen
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Specifying file encoding

2014-05-29 Thread Martin Geisler
Ryan Brady rbr...@redhat.com writes:

 Would you want to merge the patches that simply remove the unneeded
 lines and then let me followup with patches that remove © along with
 the then unnecessary coding lines?

 If I was in your position, I'd update the patches that remove lines to
 include all of the affected files and also remove the ©. I'd abandon
 the patches that simply update the line for Emacs compatibility.

Done: https://review.openstack.org/96123/ Let me know what you think.

 I'm asking since it seems that Gerrit encourages a different style of
 development than most other projects I know -- single large commits
 instead of a series of smaller commits, each one logical step
 building on the previous.
 

 When patches are too complex, breaking them down makes it easier to
 review, easier to test and easier to revert. In this case, I don't
 think you're adding complexity by changing a line and a character in
 comments for each file in the scope of a project. Opinions may vary
 project to project.

Yeah, that makes sense and I've already heard some different opinions.
That's of course fine with me!

I'm a developer in the Mercurial project, and we have strict policies
about patches doing one thing only. Exactly what "one thing" means can
then sometimes be up for debate :)

Thanks for the help so far!

-- 
Martin Geisler

https://plus.google.com/+MartinGeisler/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Infra] Mid-Cycle Meet-up

2014-05-29 Thread Matthew Treinish

Hi Everyone,

So we'd like to announce to everyone that we're going to be doing a combined
Infra and QA program mid-cycle meet-up. It will be the week of July 14th in
Darmstadt, Germany at Deutsche Telekom, which has graciously offered to sponsor
the event. The plan is to use the week both as time for face-to-face
collaboration for each program and as a couple of days of bootstrapping for new
users/contributors. The intent is that this will be useful for people who are
interested in contributing to either Infra or QA, and for those who are running
third-party CI systems.

The current breakdown for the week that we're looking at is:

July 14th: Infra
July 15th: Infra
July 16th: Bootstrapping for new users
July 17th: More bootstrapping
July 18th: QA

We still have to work out more details, and will follow up once we have them.
But, we thought it would be better to announce the event earlier so people can
start to plan travel if they need it.


Thanks,

Matt Treinish
Jim Blair


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Policy for linking bug or bp in commit message

2014-05-29 Thread Nachi Ueno
I like the idea

2014-05-29 8:33 GMT-07:00 Yuriy Taraday yorik@gmail.com:
 On Wed, May 28, 2014 at 3:54 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Fri, May 23, 2014 at 1:13 PM, Nachi Ueno na...@ntti3.com wrote:

 (2) Avoid duplication of works
 I have several experience of this.  Anyway, we should encourage people
 to check listed bug before
 writing patches.


 That's a very good point, but I don't think requiring a bug/bp for every
 patch is a good way to address this. Perhaps there is another way.


 We can require developer to either link to bp/bug or explicitly add
 Minor-fix line to the commit message.
 I think that would force commit author to at least think about whether
 commit worth submitting a bug/bp or not.

 --

 Kind regards, Yuriy.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] BPs for Juno-1

2014-05-29 Thread Edgar Magana Perdomo (eperdomo)
I will take them!

Edgar

From: Tomoe Sugihara to...@midokura.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, May 29, 2014 at 7:57 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] BPs for Juno-1

Hello Neutron core team,

We have three specs[1][2][3] submitted last week, for which have gotten +1s 
from non core contributors.
It would be great if core devs could review them and give us some feedback.
(I noticed that I didn't include links to gerrit reviews in launchpad BPs and 
just fixed them)

They are all our plugin specific changes, so it shouldn't affect the core or 
the other parts of the Neutron.
Note that our 3rd party CI system is now running in a good shape, I believe 
there's no major
issues with them going through the process.

If there are issues, we'd like to address them as soon as possible.
I look forward to getting your feedback.

Best Regards,
Tomoe

[1] Add blueprint for MidoNet quotas ext support: 
https://review.openstack.org/#/c/94553/
[2] Add blueprint for MidoNet L3-ext-gw-modes support: 
https://review.openstack.org/#/c/94785/
[3] Add spec for Update MidoNet plugin in Juno release: 
https://review.openstack.org/#/c/95100/



On Thu, May 29, 2014 at 3:39 PM, mar...@redhat.com mandr...@redhat.com wrote:
On 28/05/14 17:57, Kyle Mestery wrote:
 On Wed, May 28, 2014 at 12:41 AM, mar...@redhat.com mandr...@redhat.com wrote:
 On 27/05/14 17:14, Kyle Mestery wrote:
 Hi Neutron developers:

 I've spent some time cleaning up the BPs for Juno-1, and they are
 documented at the link below [1]. There are a large number of BPs
 currently under review right now in neutron-specs. If we land some of
 those specs this week, it's possible some of these could make it into
 Juno-1, pending review cycles and such. But I just wanted to highlight
 that I removed a large number of BPs from targeting Juno-1 now which
 did not have specifications linked to them nor specifications which
 were actively under review in neutron-specs.

 Also, a gentle reminder that the process for submitting specifications
 to Neutron is documented here [2].

 Thanks, and please reach out to me if you have any questions!


 Hi Kyle,

 Can you please consider my PUT /subnets/subnet allocation_pools:{}
 review at [1] for Juno-1? Also, I see you have included a bug [1] and
 associated review [2] that I've worked on but the review is already
 pushed to master. Is it there for any pending backports?

 Done, I've added the bug referenced in [2] to Juno-1.

 Thanks!


 With regards to [3] below, are you saying you would like to submit
 that as a backport to stable?

No I was more asking if that was the reason it was included (as it has
already been merged) - though I can do that if you think it's a good idea,

thanks, marios



 thanks! marios

 [1] https://review.openstack.org/#/c/62042/
 [2] https://bugs.launchpad.net/neutron/+bug/1255338
 [3] https://review.openstack.org/#/c/59212/



 Kyle

 [1] https://launchpad.net/neutron/+milestone/juno-1
 [2] https://wiki.openstack.org/wiki/Blueprints#Neutron

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

2014-05-29 Thread YAMAMOTO Takashi
as per discussions on the l3 subteam meeting today, i started
a bgp speakers comparison wiki page for this bp.

https://wiki.openstack.org/wiki/Neutron/BGPSpeakersComparison

Artem, can you add other requirements as columns?

as one of ryu developers, i'm naturally biased to ryu bgp.
i appreciate if someone provides more info for other bgp speakers.

YAMAMOTO Takashi

 Good afternoon Neutron developers!
 
 There has been a discussion about dynamic routing in Neutron for the past few 
 weeks in the L3 subteam weekly meetings. I've submitted a review request of 
 the blueprint documenting the proposal of this feature: 
 https://review.openstack.org/#/c/90833/. If you have any feedback or 
 suggestions for improvement, I would love to hear your comments and include 
 your thoughts in the document.
 
 Thank you.
 
 Sincerely,
 Artem Dmytrenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-29 Thread Armando M.
Hi Paul,

Just out of curiosity, I am assuming you are using the client that
still relies on httplib2. Patch [1] replaced httplib2 with requests,
but I believe that a new client that incorporates this change has not
yet been published. I wonder if the failures you are referring to
manifest themselves with the former http library rather than the
latter. Could you clarify?

Thanks,
Armando

[1] - https://review.openstack.org/#/c/89879/

On 29 May 2014 17:25, Paul Ward wpw...@us.ibm.com wrote:
 Well, for my specific error, it was an intermittent ssl handshake error
 before the request was ever sent to the
 neutron-server.  In our case, we saw that 4 out of 5 resize operations
 worked, the fifth failed with this ssl
 handshake error in neutronclient.

 I certainly think a GET is safe to retry, and I agree with your statement
 that PUTs and DELETEs probably
 are as well.  This still leaves a change in nova needing to be made to
 actually a) specify a conf option and
 b) pass it to neutronclient where appropriate.


 Aaron Rosen aaronoro...@gmail.com wrote on 05/28/2014 07:38:56 PM:

 From: Aaron Rosen aaronoro...@gmail.com


 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 05/28/2014 07:44 PM

 Subject: Re: [openstack-dev] [neutron] Supporting retries in neutronclient

 Hi,

 I'm curious if other openstack clients implement this type of retry
 thing. I think retrying on GET/DELETES/PUT's should probably be okay.

 What types of errors do you see in the neutron-server when it fails
 to respond? I think it would be better to move the retry logic into
 the server around the failures rather than the client (or better yet
 if we fixed the server :)). Most of the times I've seen this type of
 failure is due to deadlock errors caused between (sqlalchemy and
 eventlet *i think*) which cause the client to eventually timeout.

 Best,

 Aaron


 On Wed, May 28, 2014 at 11:51 AM, Paul Ward wpw...@us.ibm.com wrote:
 Would it be feasible to make the retry logic only apply to read-only
 operations?  This would still require a nova change to specify the
 number of retries, but it'd also prevent invokers from shooting
 themselves in the foot if they call for a write operation.



 Aaron Rosen aaronoro...@gmail.com wrote on 05/27/2014 09:40:00 PM:

  From: Aaron Rosen aaronoro...@gmail.com

  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 05/27/2014 09:44 PM

  Subject: Re: [openstack-dev] [neutron] Supporting retries in
  neutronclient
 
  Hi,

 
  Is it possible to detect when the ssl handshaking error occurs on
  the client side (and only retry for that)? If so I think we should
  do that rather than retrying multiple times. The danger here is
  mostly for POST operations (as Eugene pointed out) where it's
  possible for the response to not make it back to the client and for
  the operation to actually succeed.
 
  Having this retry logic nested in the client also prevents things
  like nova from handling these types of failures individually since
  this retry logic is happening inside of the client. I think it would
  be better not to have this internal mechanism in the client and
  instead make the user of the client implement retry so they are
  aware of failures.
 
  Aaron
 

  On Tue, May 27, 2014 at 10:48 AM, Paul Ward wpw...@us.ibm.com wrote:
  Currently, neutronclient is hardcoded to only try a request once in
  retry_request by virtue of the fact that it uses self.retries as the
  retry count, and that's initialized to 0 and never changed.  We've
  seen an issue where we get an ssl handshaking error intermittently
  (seems like more of an ssl bug) and a retry would probably have
  worked.  Yet, since neutronclient only tries once and gives up, it
  fails the entire operation.  Here is the code in question:
 
  https://github.com/openstack/python-neutronclient/blob/master/
  neutronclient/v2_0/client.py#L1296
 
  Does anybody know if there's some explicit reason we don't currently
  allow configuring the number of retries?  If not, I'm inclined to
  propose a change for just that.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] Designate Incubation Request

2014-05-29 Thread Zane Bitter

On 29/05/14 05:26, Thierry Carrez wrote:

Sean Dague wrote:

I honestly just think we might want to also use it as a time to rethink
our program concept. Because all our programs that include projects that
are part of the integrated release are 1 big source tree, and maybe a
couple of little trees that orbit it (client and now specs repos). If we
always expect that to be the case, I'm not really sure why we built this
intermediate grouping.


Programs were established to solve two problems. The first one was the
confusion around project types. We used to have project types[1] that
were trying to reflect and include all code repositories that we wanted
to make official. That kept on changing, was very confusing, and did
not allow flexibility for each team in how they preferred to organize
their code repositories. The second problem they solved was recognizing
non-integrated-project efforts which were still essential to the
production of OpenStack, like Infra or Docs.

[1] https://wiki.openstack.org/wiki/ProjectTypes

Programs just let us bless goals and teams and let them organize code
however they want, with contribution to any code repo under that
umbrella being considered official and ATC-status-granting.


This is definitely how it *should* work.

I think the problem is that we still have elements of the 'project' 
terminology around from the bad old days of the pointless 
core/core-but-don't-call-it-core/library/gating/supporting project 
taxonomy, where project == repository. The result is that every time a 
new project gets incubated, the reaction is always Oh man, you want a 
new *program* too? That sounds really *heavyweight*. If people treated 
the terms 'program' and 'project' as interchangeable and just referred 
to repositories by another name ('repositories', perhaps?) then this 
wouldn't keep coming up.


(IMHO the quickest way to effect this change in mindset would be to drop 
the term 'program' and call the programs projects. In what meaningful 
sense is e.g. Infra or Docs not a project?)



I would be
a bit reluctant to come back to the projecttypes mess and create
categories of programs (integrated projects on one side, and others).


I agree, but why do we need different categories? Is anybody at all 
confused about this? Are there people out there installing our custom 
version of Gerrit and wondering why it won't boot VMs?


The categories existed largely because of the aforementioned strange 
definition of 'project' and the need to tightly control the membership 
of the TC. Now that the latter is no longer an issue, we could eliminate 
the distinction between programs and projects without bringing the 
categories back.



Back to the topic, the tension here is because DNS is seen as a
network thing and therefore it sounds like it makes sense under
Networking. But programs are not categories or themes. They are
teams aligned on a mission statement. If the teams are different
(Neutron and Designate) then it doesn't make sense to artificially merge
them just because you think of networking as a theme. If the teams
converge, yes it makes sense. If they don't, we should just create a new
program. They are cheap and should reflect how we work, not the other
way around.


+1

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Nikita Konovalov for storyboard-core

2014-05-29 Thread James E. Blair
jebl...@openstack.org (James E. Blair) writes:

 Nikita Konovalov has been reviewing changes to both storyboard and
 storyboard-webclient for some time.  He is the second most active
 storyboard reviewer and is very familiar with the codebase (having
 written a significant amount of the server code).  He regularly provides
 good feedback, understands where the project is heading, and in general
 is in accord with the current core team, which has been treating his +1s
 as +2s for a while now.

 Please respond with +1s or concerns, and if the consensus is in favor, I
 will add him to the group.

 Nikita, thank you very much for your work!

Nikita is now in storyboard-core.  Congratulations!

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Redesign of Keystone Federation

2014-05-29 Thread Morgan Fainberg
I agree that there is room for improvement on the Federation design within
Keystone. I would like to reiterate what Adam said: we are already seeing
efforts to fully integrate further protocol support (OpenID Connect, etc.)
within the current system. Let's be sure that whatever redesign work is proposed
and accepted takes into account the current stakeholders (those who are really
using Federation) and ensures full backwards compatibility.

I firmly believe we can work within the Apache module framework for Juno.
Moving beyond Juno we may need to start implementing the more native modules
(proposal #2). Let's be sure whatever redesign work we perform this cycle
doesn't lock us exclusively into one path or another. It shouldn't be too hard
to continue making incremental improvements (agile methodology) and keeping the
stakeholders engaged.

David and Kristy, the slides and summit session are a great starting place for 
this work. Now we need to get the proposal drafted up in the new Keystone-Specs 
repository ( https://git.openstack.org/cgit/openstack/keystone-specs ) so we 
can keep this conversation on track. Having the specification clearly outlined 
and targeted will help us address any concerns with the proposal/redesign as we 
move into implementation.

Cheers,
Morgan
—
Morgan Fainberg


From: Adam Young ayo...@redhat.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: May 28, 2014 at 09:24:26
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [keystone] Redesign of Keystone Federation  

On 05/28/2014 11:59 AM, David Chadwick wrote:  
 Hi Everyone  
  
 at the Atlanta meeting the following slides were presented during the  
 federation session  
  
 http://www.slideshare.net/davidwchadwick/keystone-apach-authn  
  
 It was acknowledged that the current design is sub-optimal, but was a
 best first effort to get something working in time for the Icehouse
 release, which it did successfully.
  
 Now is the time to redesign federated access in Keystone in order to  
 allow for:  
 i) the inclusion of more federation protocols such as OpenID and OpenID  
 Connect via Apache plugins  

These are underway: Steve Mar just posted review for OpenID connect.  
 ii) federating together multiple Keystone installations  
I think Keystone should be dealt with separately. Keystone is not a good  
stand-alone authentication mechanism.  

 iii) the inclusion of federation protocols directly into Keystone where
 good Apache plugins don't yet exist, e.g. IETF ABFAB
I thought this was mostly pulling together other protocols such as Radius?
http://freeradius.org/mod_auth_radius/

  
 The Proposed Design (1) in the slide show is the simplest change to  
 make, in which the Authn module has different plugins for different  
 federation protocols, whether via Apache or not.  

I'd like to avoid doing non-HTTPD modules for as long as possible.  

  
 The Proposed Design (2) is cleaner since the plugins are directly into  
 Keystone and not via the Authn module, but it requires more  
 re-engineering work, and it was questioned in Atlanta whether that  
 effort exists or not.  

The method parameter is all that is going to vary for most of the Auth  
mechanisms. X509 and Kerberos both require special set up of the HTTP  
connection to work, which means client and server sides need to be in  
sync: SAML, OpenID and the rest have no such requirements.  

  
 Kent therefore proposes that we go with Proposed Design (1). Kent will  
 provide drafts of the revised APIs and the re-engineered code for  
 inspection and approval by the group, if the group agrees to go with  
 this revised design.  
  
 If you have any questions about the proposed re-design, please don't  
 hesitate to ask  
  
 regards  
  
 David and Kristy  
  
 ___  
 OpenStack-dev mailing list  
 OpenStack-dev@lists.openstack.org  
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  


___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Joshua Hesketh for infra-core

2014-05-29 Thread James E. Blair
jebl...@openstack.org (James E. Blair) writes:

 The Infrastructure program has a unique three-tier team structure:
 contributors (that's all of us!), core members (people with +2 ability
 on infra projects in Gerrit) and root members (people with
 administrative access).  Read all about it here:

   http://ci.openstack.org/project.html#team

 Joshua Hesketh has been reviewing a truly impressive number of infra
 patches for quite some time now.  He has an excellent grasp of how the
 CI system functions, no doubt in part because he runs a copy of it and
 has been doing significant work on evolving it to continue to scale.
 His reviews of python projects are excellent and particularly useful,
 but he also has a grasp of how the whole system fits together, which is
 a key thing for a member of infra-core.

 Please respond with any comments or concerns.

 Thanks, Joshua, for all your work!

Joshua is now in infra-core.  Congratulations!

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Sergey Lukjanov for infra-root

2014-05-29 Thread James E. Blair
jebl...@openstack.org (James E. Blair) writes:

 The Infrastructure program has a unique three-tier team structure:
 contributors (that's all of us!), core members (people with +2 ability
 on infra projects in Gerrit) and root members (people with
 administrative access).  Read all about it here:

   http://ci.openstack.org/project.html#team

 Sergey has been an extremely valuable member of infra-core for some time
 now, providing reviews on a wide range of infrastructure projects which
 indicate a growing familiarity with the large number of complex systems
 that make up the project infrastructure.  In particular, Sergey has
 expertise in systems related to the configuration of Jenkins jobs, Zuul,
 and Nodepool which is invaluable in diagnosing and fixing operational
 problems as part of infra-root.

 Please respond with any comments or concerns.

 Thanks again Sergey for all your work!

Sergey is now in infra-root.  Congratulations!

-Jim
(And Jeremy is no longer the new guy!)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-29 Thread Paul Ward
Yes, we're still on a code level that uses httplib2.  I noticed that as
well, but wasn't sure if that would really
help here as it seems like an ssl thing itself.  But... who knows??  I'm
not sure how consistently we can
recreate this, but if we can, I'll try using that patch to use requests and
see if that helps.
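
For reference, if we do end up on requests, bounding retries at the transport
level looks roughly like the sketch below; this is generic requests usage, not
a proposed neutronclient change, and the host/token values are placeholders:

# Generic requests/urllib3 sketch: mount an adapter that retries connection
# level failures (such as transient SSL handshake errors) a configurable
# number of times. Not actual neutronclient code; values are illustrative.
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
adapter = HTTPAdapter(max_retries=3)
session.mount("https://", adapter)

# Idempotent read; a failed handshake would be retried up to 3 times.
resp = session.get("https://neutron.example.com:9696/v2.0/networks",
                   headers={"X-Auth-Token": "example-token"})
print(resp.status_code)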



Armando M. arma...@gmail.com wrote on 05/29/2014 11:52:34 AM:

 From: Armando M. arma...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 05/29/2014 11:58 AM
 Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient

 Hi Paul,

 Just out of curiosity, I am assuming you are using the client that
 still relies on httplib2. Patch [1] replaced httplib2 with requests,
 but I believe that a new client that incorporates this change has not
 yet been published. I wonder if the failures you are referring to
 manifest themselves with the former http library rather than the
 latter. Could you clarify?

 Thanks,
 Armando

 [1] - https://review.openstack.org/#/c/89879/

 On 29 May 2014 17:25, Paul Ward wpw...@us.ibm.com wrote:
  Well, for my specific error, it was an intermittent ssl handshake error
  before the request was ever sent to the
  neutron-server.  In our case, we saw that 4 out of 5 resize operations
  worked, the fifth failed with this ssl
  handshake error in neutronclient.
 
  I certainly think a GET is safe to retry, and I agree with your
statement
  that PUTs and DELETEs probably
  are as well.  This still leaves a change in nova needing to be made to
  actually a) specify a conf option and
  b) pass it to neutronclient where appropriate.
 
 
  Aaron Rosen aaronoro...@gmail.com wrote on 05/28/2014 07:38:56 PM:
 
  From: Aaron Rosen aaronoro...@gmail.com
 
 
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 05/28/2014 07:44 PM
 
  Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient
 
  Hi,
 
  I'm curious if other openstack clients implement this type of retry
  thing. I think retrying on GET/DELETES/PUT's should probably be okay.
 
  What types of errors do you see in the neutron-server when it fails
  to respond? I think it would be better to move the retry logic into
  the server around the failures rather than the client (or better yet
  if we fixed the server :)). Most of the times I've seen this type of
  failure is due to deadlock errors caused between (sqlalchemy and
  eventlet *i think*) which cause the client to eventually timeout.
 
  Best,
 
  Aaron
 
 
  On Wed, May 28, 2014 at 11:51 AM, Paul Ward wpw...@us.ibm.com wrote:
  Would it be feasible to make the retry logic only apply to read-only
  operations?  This would still require a nova change to specify the
  number of retries, but it'd also prevent invokers from shooting
  themselves in the foot if they call for a write operation.
 
 
 
  Aaron Rosen aaronoro...@gmail.com wrote on 05/27/2014 09:40:00 PM:
 
   From: Aaron Rosen aaronoro...@gmail.com
 
   To: OpenStack Development Mailing List (not for usage questions)
   openstack-dev@lists.openstack.org,
   Date: 05/27/2014 09:44 PM
 
   Subject: Re: [openstack-dev] [neutron] Supporting retries in
   neutronclient
  
   Hi,
 
  
   Is it possible to detect when the ssl handshaking error occurs on
   the client side (and only retry for that)? If so I think we should
   do that rather than retrying multiple times. The danger here is
   mostly for POST operations (as Eugene pointed out) where it's
   possible for the response to not make it back to the client and for
   the operation to actually succeed.
  
   Having this retry logic nested in the client also prevents things
   like nova from handling these types of failures individually since
   this retry logic is happening inside of the client. I think it would
   be better not to have this internal mechanism in the client and
   instead make the user of the client implement retry so they are
   aware of failures.
  
   Aaron
  
 
   On Tue, May 27, 2014 at 10:48 AM, Paul Ward wpw...@us.ibm.com
wrote:
   Currently, neutronclient is hardcoded to only try a request once in
   retry_request by virtue of the fact that it uses self.retries as the
   retry count, and that's initialized to 0 and never changed.  We've
   seen an issue where we get an ssl handshaking error intermittently
   (seems like more of an ssl bug) and a retry would probably have
   worked.  Yet, since neutronclient only tries once and gives up, it
   fails the entire operation.  Here is the code in question:
  
   https://github.com/openstack/python-neutronclient/blob/master/
   neutronclient/v2_0/client.py#L1296
  
   Does anybody know if there's some explicit reason we don't currently
   allow configuring the number of retries?  If not, I'm inclined to
   propose a change for just that.
  
   ___
   OpenStack-dev mailing list
   

Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-05-29 Thread Devananda van der Veen
On Wed, May 28, 2014 at 11:42 PM, Robert Collins
robe...@robertcollins.netwrote:

 Is there any wiggle room on those dates? As James Polley says, PyCon
 AU (and the Openstack miniconf, which I'm organising with JHesketh)
 overlap significantly - and I can't be in two places at once.

 However July 21-25th would be totally doable :)


FWIW, that's a direct overlap with OSCON, which may be an issue for some
folks (like me).

-Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-05-29 Thread Devananda van der Veen
Hi Jaromir,

I agree that the midcycle meetup with TripleO and Ironic was very
beneficial last cycle, but this cycle, Ironic is co-locating its sprint
with Nova. Our focus needs to be working with them to merge the
nova.virt.ironic driver. Details will be forthcoming as we work out the
exact details with Nova. That said, I'll try to make the TripleO sprint as
well -- assuming the dates don't overlap.

Cheers,
Devananda


On Wed, May 28, 2014 at 4:05 AM, Jaromir Coufal jcou...@redhat.com wrote:

 Hi to all,

 after the previous TripleO & Ironic mid-cycle meetup, which I believe was
 beneficial for all, I would like to suggest that we meet again in the
 middle of the Juno cycle to discuss current progress, blockers, next steps
 and of course get some beer all together :)

 Last time, TripleO and Ironic merged their meetings together and I think
 it was a great idea. This time I would like to also invite the Heat team if
 they want to join. Our cooperation is increasing and I think it would be
 great if we can discuss all issues together.

 Red Hat offered to host this event, so I am very happy to invite you all
 and I would like to ask, who would come if there was a mid-cycle meetup in
 following dates and place:

 * July 28 - Aug 1
 * Red Hat office, Raleigh, North Carolina

 If you are intending to join, please, fill yourselves into this etherpad:
 https://etherpad.openstack.org/p/juno-midcycle-meetup

 Cheers
 -- Jarda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-05-29 Thread Mike Spreitzer
Devananda van der Veen devananda@gmail.com wrote on 05/29/2014 
01:26:12 PM:

 Hi Jaromir,
 
 I agree that the midcycle meetup with TripleO and Ironic was very 
 beneficial last cycle, but this cycle, Ironic is co-locating its 
 sprint with Nova. Our focus needs to be working with them to merge 
 the nova.virt.ironic driver. Details will be forthcoming as we work 
 out the exact details with Nova. That said, I'll try to make the 
 TripleO sprint as well -- assuming the dates don't overlap.
 
 Cheers,
 Devananda
 

 On Wed, May 28, 2014 at 4:05 AM, Jaromir Coufal jcou...@redhat.com 
wrote:
 Hi to all,
 
 after previous TripleO  Ironic mid-cycle meetup, which I believe 
 was beneficial for all, I would like to suggest that we meet again 
 in the middle of Juno cycle to discuss current progress, blockers, 
 next steps and of course get some beer all together :)
 
 Last time, TripleO and Ironic merged their meetings together and I 
 think it was great idea. This time I would like to invite also Heat 
 team if they want to join. Our cooperation is increasing and I think
 it would be great, if we can discuss all issues together.
 
 Red Hat offered to host this event, so I am very happy to invite you
 all and I would like to ask, who would come if there was a mid-cycle
 meetup in following dates and place:
 
 * July 28 - Aug 1
 * Red Hat office, Raleigh, North Carolina
 
 If you are intending to join, please, fill yourselves into this 
etherpad:
 https://etherpad.openstack.org/p/juno-midcycle-meetup
 
 Cheers
 -- Jarda

Given the organizers, I assume this will be strongly focused on TripleO 
and Ironic.
Would this be a good venue for all the mid-cycle discussion that will be 
relevant to Heat?
Is anyone planning a distinct Heat-focused mid-cycle meetup?

Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Redesign of Keystone Federation

2014-05-29 Thread Brad Topol
+1!   Excellent summary and analysis Morgan!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Morgan Fainberg morgan.fainb...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   05/29/2014 01:07 PM
Subject:Re: [openstack-dev] [keystone] Redesign of Keystone 
Federation



I agree that there is room for improvement on the Federation design within 
Keystone. I would like to reiterate what Adam said: we are already 
seeing efforts to fully integrate further protocol support (OpenID 
Connect, etc.) within the current system. Let's be sure that whatever 
redesign work is proposed and accepted takes into account the current 
stakeholders (those who are really using Federation) and ensures full backwards 
compatibility.

I firmly believe we can work within the Apache module framework for Juno. 
Moving beyond Juno we may need to start implementing the more native 
modules (proposal #2). Let's be sure whatever redesign work we perform this 
cycle doesn’t lock us exclusively into one path or another. It shouldn’t 
be too hard to continue making incremental improvements (agile 
methodology) and keeping the stakeholders engaged.

David and Kristy, the slides and summit session are a great starting place 
for this work. Now we need to get the proposal drafted up in the new 
Keystone-Specs repository ( 
https://git.openstack.org/cgit/openstack/keystone-specs ) so we can keep 
this conversation on track. Having the specification clearly outlined and 
targeted will help us address any concerns with the proposal/redesign as 
we move into implementation.

Cheers,
Morgan
—
Morgan Fainberg

From: Adam Young ayo...@redhat.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: May 28, 2014 at 09:24:26
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [keystone] Redesign of Keystone Federation 

On 05/28/2014 11:59 AM, David Chadwick wrote: 
 Hi Everyone 
 
 at the Atlanta meeting the following slides were presented during the 
 federation session 
 
 http://www.slideshare.net/davidwchadwick/keystone-apach-authn 
 
 It was acknowledged that the current design is sub-optimal, but it was a 
 best first effort to get something working in time for the IceHouse 
 release, which it did successfully. 
 
 Now is the time to redesign federated access in Keystone in order to 
 allow for: 
 i) the inclusion of more federation protocols such as OpenID and OpenID 
 Connect via Apache plugins 

These are underway: Steve Mar just posted a review for OpenID Connect. 
 ii) federating together multiple Keystone installations 
I think Keystone should be dealt with separately. Keystone is not a good 
stand-alone authentication mechanism. 

 iii) the inclusion of federation protocols directly into Keystone where 
 good Apache plugins don't yet exist e.g. IETF ABFAB 
I thought this was mostly pulling together other protocols such as Radius? 
http://freeradius.org/mod_auth_radius/ 

 
 The Proposed Design (1) in the slide show is the simplest change to 
 make, in which the Authn module has different plugins for different 
 federation protocols, whether via Apache or not. 

I'd like to avoid doing non-HTTPD modules for as long as possible. 

 
 The Proposed Design (2) is cleaner since the plugins are directly into 
 Keystone and not via the Authn module, but it requires more 
 re-engineering work, and it was questioned in Atlanta whether that 
 effort exists or not. 

The method parameter is all that is going to vary for most of the Auth 
mechanisms. X509 and Kerberos both require special set up of the HTTP 
connection to work, which means client and server sides need to be in 
sync: SAML, OpenID and the rest have no such requirements. 
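
As a rough illustration of that point (a hand-written sketch, not code from
the Keystone tree; the user name, domain and secret below are made up), only
the "methods" list and its matching per-method block change between
mechanisms, while the rest of the v3 authentication request stays the same:

    # Hand-written sketch; names and secrets below are made up.
    password_auth = {
        "auth": {
            "identity": {
                "methods": ["password"],      # the part that varies per mechanism
                "password": {                 # matching per-method payload
                    "user": {
                        "name": "demo",
                        "domain": {"id": "default"},
                        "password": "secret",
                    },
                },
            },
        },
    }
    # Another mechanism would keep the same envelope and simply swap the
    # method name and its payload block; X509 and Kerberos additionally need
    # the HTTP connection itself to be set up specially, as noted above.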

 
 Kent therefore proposes that we go with Proposed Design (1). Kent will 
 provide drafts of the revised APIs and the re-engineered code for 
 inspection and approval by the group, if the group agrees to go with 
 this revised design. 
 
 If you have any questions about the proposed re-design, please don't 
 hesitate to ask 
 
 regards 
 
 David and Kristy 
 
 ___ 
 OpenStack-dev mailing list 
 OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 


___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Nova] [Ironic] [Infra] Making Ironic vote as a third-party Nova driver

2014-05-29 Thread Devananda van der Veen
On Wed, May 28, 2014 at 10:54 PM, Joshua Hesketh 
joshua.hesk...@rackspace.com wrote:

 On 5/29/14 8:52 AM, James E. Blair wrote:

 Devananda van der Veen devananda@gmail.com writes:

  Hi all!

 This is a follow-up to several summit discussions on
 how-do-we-deprecate-baremetal, a summary of the plan forward, a call to
 raise awareness of the project's status, and hopefully gain some interest
 from folks on nova-core to help with spec and code reviews.

 The nova.virt.ironic driver lives in Ironic's git tree today [1]. We're
 cleaning it up and submitting it to Nova again this cycle. I've posted
 specs [2] outlining the design and planned upgrade process. Earlier
 today,
 we enabled voting in Ironic's check and gate queues for the
 tempest-dsvm-virtual-ironic job. This runs a tempest scenario test [3]
 against devstack, exercising Nova with the Ironic driver to PXE boot a
 virtual machine. It has been running for a few months on Ironic, and has
 been stable for more than a month. However, because Ironic is not
 integrated, we also can't vote in check/gate queues on integrated
 projects
 (like Nova). We can - and do - report the test result in a non-voting
 way,
 though that's easy to miss, since it looks like every other non-voting
 test.

 At the summit [4], it was suggested that we make this job report as
 though
 it were a third-party CI test for a Nova driver. This would be removed at
 the time that Ironic graduates and the job is allowed to vote in the
 gate.
 Until that time, I'm happy to have the nova.virt.ironic driver reporting
 as
 a third-party driver (even though it's not) simply to help raise
 awareness
 (third-party CI jobs are watched more closely than non-voting jobs) and
 decrease the likelihood that Nova developers will inadvertently break
 Ironic's gate.

 Given that there's a concrete plan forward, why am I sending this email
 to
 all three teams? A few reasons:
 - document the plan that we discussed
 - many people from infra and nova were not present during the discussion
 and may not be aware of the details
 - I may have gotten something wrong (it was a long week)
 - and mostly because I don't technically know how to make an upstream job
 report as though it's a third-party job, and am hoping someone wants to
 volunteer to help figure that out

 I think it's a reasonable plan.  To elaborate a bit, I think we
 identified three categories of jobs that we run:

 a) jobs that are voting
 b) jobs that are non-voting because they are advisory
 c) jobs that are non-voting for policy reasons but we feel fairly
 strongly about

 There's a pretty subtle distinction between b and c.  Ideally, there
 shouldn't be any.  We've tried to minimize the number of non-voting jobs
 to make sure that people don't ignore them.  Nonetheless, it seems that
 a large enough number of people still do that non-voting jobs are
 considered ineffective in Nova.  I think it's worth noting the potential
 danger of de-emphasizing the actual results.  It may make other
 non-voting jobs even less effective than they already are.

 The intent is to make the jobs described by (c) into voting jobs, but in
 a way that they can still be overridden if need be.  The aim is to help
 new (eg, incubated) projects join the integrated gate in a way that lets
 them prove they are sufficiently mature to do so without impacting the
 currently integrated projects.  I believe we're currently thinking that
 point is after their integration approval.  If we are comfortable with
 incubated projects being able to block the integrated gate earlier, we
 could simply make the non-voting jobs voting instead.

 Back to the proposal at hand.  I think we should call the kinds of jobs
 described in (c) non-binding.

 The best way to do that is to register a second user with Gerrit for
 Zuul to use, and have it report non-binding jobs with a +/- 1 vote in
 the check queue that is separate from the normal Jenkins vote.  In
 order to do that, we will have to modify Zuul to be able to support a
 second user, and associate that user with a pipeline.  Then configure a
 new non-binding pipeline to use that user and run the desired jobs.

 Note that a similar problem of curation may occur with the non-binding
 jobs.  If we run jobs for the incubated projects Foo and Bar, they will
 share a vote in Gerrit, and Nova developers will have to examine the
 results of -1 votes; if Bar consistently fails tests, it may need to be
 made non-voting or removed to avoid obscuring Foo's results.

 I expect the Zuul modification to take an experienced Zuul developer
 about 2-3 days to write, or an inexperienced one about a week.  If no
 one else has started it by then, I will probably have some time around
 the middle of the cycle to hack on it.  In the mean time, we may want to
 make sure that the number of non-voting jobs is at a minimum (and
 further reduce them if possible), and emphasize to reviewers the
 importance of checking posted results.


 I like 

Re: [openstack-dev] Selecting more carefully our dependencies

2014-05-29 Thread Joshua Harlow
Hi Thomas,

Since I'm the one that added wrapt to the requirements list I thought it
would be appropriate for me to respond :)

So looking over the pull request, it seems like there was agreement that
the issues will be fixed and the adjustments will occur (that seemed to be
the case last time I checked). So that's good news, and I'd like to
thank you for raising these issues with the author (who seemed to need a
little educating about what to do here, which is just how this works --
teaching others and explaining are as much a part of being in the
community as anything else).

So I'm thankful you worked through that, and it seems to be going OK there
(correct me if I am misreading it).

As for why I pulled it in (in case you are wondering): wrapt allows a
single decorator to be aware of whether it is being applied to an instance
method, a plain function, a static method, or even a class. This, IMHO, has
been one of the painful things lacking in Python decorators, and it made it
hard to write a *single* deprecation decorator (the review for taskflow for
this is @ https://review.openstack.org/#/c/87055/) that doesn't require a
specialization for each possible target (one deprecation decorator for
methods, another one for functions, another one for...). So wrapt seems to
solve this, which makes it IMHO really nice 'syntax sugar' for this
problem. That was my usage of it (the review still isn't in yet, soon I
hope).
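
To make that concrete, here is a minimal sketch of the pattern (not the
taskflow patch itself, just an illustration of the documented
wrapt.decorator API; the decorator name and the message are made up):

    import warnings

    import wrapt


    def deprecated(message):
        @wrapt.decorator
        def wrapper(wrapped, instance, args, kwargs):
            # The single `instance` argument is what lets one decorator tell
            # apart a plain function (instance is None), an instance method
            # (instance is the object) and a classmethod (instance is the
            # class) without writing a specialized decorator for each case.
            warnings.warn("%s is deprecated: %s" % (wrapped.__name__, message),
                          DeprecationWarning, stacklevel=2)
            return wrapped(*args, **kwargs)
        return wrapper


    class Widget(object):
        @deprecated("use new_method() instead")
        def old_method(self):
            return 42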

If you feel that others need to jump on that pull request or talk to the
author, maybe we should work through this instead of just 'abandoning ship'
first. I am pretty sure this won't be the last time we have to 'teach'
others good practices to help them improve their own code.

Btw, the six inclusion actually seems to be something the six authors
endorse, so I'm not sure I can blame others for just following that
endorsement (even if it's not really good practice to copy code around).

Six supports every Python version since 2.5.  It is contained in only one
Python file, so it can be easily copied into your project. (The copyright
and license notice must be retained.) (from
https://pypi.python.org/pypi/six)

-Original Message-
From: Thomas Goirand z...@debian.org
Organization: Debian
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, May 29, 2014 at 2:25 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: [openstack-dev] Selecting more carefully our dependencies

Hi everyone,

Recently, wrapt was added as a dependency. The Python module suffers
from obvious design issues, for example:
- Lack of Python 3.4 support
- Broken with Python 3.2
- Upstream sources live in src instead of wrapt, so running py.test
doesn't work unless you do ln -s src wrapt and then PYTHONPATH=.
py.test tests to run the tests.
- Unit tests not included in the pypi module

That'd be fine, if upstream was understanding and willing to fix
things. It seems like he's going to approve our patches for Python 3.2
and 3.4. But ...

There's an embedded copy of six.py in the code. I've been trying to
convince upstream to remove it, and provided a patch for it. But it's
looking like upstream simply refuses to remove the embedded copy of
six.py. This means that, on each new upstream release, I may have to
rebase my Debian specific patch to remove the copy. See comments here:

https://github.com/GrahamDumpleton/wrapt/pull/24

I've still packaged and uploaded the module to Debian, but the situation
isn't great with upstream, which doesn't seem to understand the problem and
which will inevitably lead to more (useless) work for downstream distros.

So I'm wondering: are we being careful enough when selecting
dependencies? In this case, I think we haven't, and I would recommend
against using wrapt. Not only because it embeds six.py, but because
upstream looks uncooperative, and bound to its own use cases.

In the more general case, I would argue for avoiding *any* Python package
which embeds a copy of another one. This should IMO be solved
before the Python module reaches our global-requirements.txt.

Thoughts anyone?

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova default quotas

2014-05-29 Thread Matt Riedemann



On 5/27/2014 4:44 PM, Vishvananda Ishaya wrote:

I’m not sure that this is the right approach. We really have to add the old 
extension back for compatibility, so it might be best to simply keep that 
extension instead of adding a new way to do it.

Vish

On May 27, 2014, at 1:31 PM, Cazzolato, Sergio J sergio.j.cazzol...@intel.com 
wrote:


I have created a blueprint to add this functionality to nova.

https://review.openstack.org/#/c/94519/


-Original Message-
From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
Sent: Tuesday, May 27, 2014 5:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] nova default quotas

Phil,

You are correct and this seems to be an error. I don't think in the earlier ML 
thread[1] that anyone remembered that the quota classes were being used for 
default quotas. IMO we need to revert this removal as we (accidentally) removed 
a Havana feature with no notification to the community. I've reactivated a 
bug[2] and marked it critical.

Vish

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-February/027574.html
[2] https://bugs.launchpad.net/nova/+bug/1299517

On May 27, 2014, at 12:19 PM, Day, Phil philip@hp.com wrote:


Hi Vish,

I think quota classes have been removed from Nova now.

Phil


Sent from Samsung Mobile


 Original message 
From: Vishvananda Ishaya
Date:27/05/2014 19:24 (GMT+00:00)
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] nova default quotas

Are you aware that there is already a way to do this through the CLI using 
quota-class-update?

http://docs.openstack.org/user-guide-admin/content/cli_set_quotas.html (near 
the bottom)
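
For reference, the same mechanism is reachable from the Python bindings as
well; this is only a hedged sketch (it assumes the quota_classes manager
present in python-novaclient at the time, and the credentials and values
are placeholders), mirroring what quota-class-update does:

    from novaclient.v1_1 import client

    # Credentials below are placeholders.
    nova = client.Client("admin", "password", "admin",
                         "http://keystone:5000/v2.0")

    # Update the defaults that apply to any tenant without its own override.
    nova.quota_classes.update("default", instances=20, cores=40)

    # Read the current defaults back.
    print(nova.quota_classes.get("default").instances)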

Are you suggesting that we also add the ability to use just regular 
quota-update? I'm not sure I see the need for both.

Vish

On May 20, 2014, at 9:52 AM, Cazzolato, Sergio J sergio.j.cazzol...@intel.com 
wrote:


I would like to hear your thoughts about an idea to add a way to manage the default 
quota values through the API.

The idea is to use the current quota API, but send 'default' instead of the 
tenant_id. This change would apply to the quota-show and quota-update methods.

This approach will help to simplify the implementation of another blueprint 
named per-flavor-quotas

Feedback? Suggestions?


Sergio Juan Cazzolato
Intel Software Argentina

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The reverted series for nova on master is here [1].

Once that's merged I can work on backporting the revert for the API 
change to stable/icehouse, which will be a little tricky given conflicts 
from master.


[1] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:restore-quota-class,n,z


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Joshua Hesketh for infra-core

2014-05-29 Thread Sergey Lukjanov
And /me is no longer the new guy :)

On Thu, May 29, 2014 at 9:05 PM, James E. Blair jebl...@openstack.org wrote:
 jebl...@openstack.org (James E. Blair) writes:

 The Infrastructure program has a unique three-tier team structure:
 contributors (that's all of us!), core members (people with +2 ability
 on infra projects in Gerrit) and root members (people with
 administrative access).  Read all about it here:

   http://ci.openstack.org/project.html#team

 Joshua Hesketh has been reviewing a truly impressive number of infra
 patches for quite some time now.  He has an excellent grasp of how the
 CI system functions, no doubt in part because he runs a copy of it and
 has been doing significant work on evolving it to continue to scale.
 His reviews of python projects are excellent and particularly useful,
 but he also has a grasp of how the whole system fits together, which is
 a key thing for a member of infra-core.

 Please respond with any comments or concerns.

 Thanks, Joshua, for all your work!

 Joshua is now in infra-core.  Congratulations!

 -Jim

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Redesign of Keystone Federation

2014-05-29 Thread Tim Bell

A further vote to maintain compatibility. One of the key parts of a good 
federation design is using it in the field and encountering real-life 
problems.

Production sites expect stability of interfaces and functions. If this cannot 
be reasonably ensured, federation will see only very limited deployment and 
remain lightly used. Without usage, the real end-user functional gaps and 
additional requirements cannot be determined.

Tim

From: Brad Topol [mailto:bto...@us.ibm.com]
Sent: 29 May 2014 19:31
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] Redesign of Keystone Federation

+1!   Excellent summary and analysis Morgan!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:    Morgan Fainberg morgan.fainb...@gmail.com
To:    OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org,
Date:    05/29/2014 01:07 PM
Subject:    Re: [openstack-dev] [keystone] Redesign of Keystone Federation




I agree that there is room for improvement on the Federation design within 
Keystone. I would like to re-iterate what Adam said that we are already seeing 
efforts to fully integrate further protocol support (OpenID Connect, etc) 
within the current system. Lets be sure that whatever redesign work is proposed 
and accepted takes into account the current stakeholders (that are really using 
Federation) and ensure full backwards compatibility.

I firmly believe we can work within the Apache module framework for Juno. 
Moving beyond Juno we may need to start implementing the more native modules 
(proposal #2). Lets be sure whatever redesign work we perform this cycle 
doesn’t lock us exclusively into one path or another. It shouldn’t be too hard 
to continue making incremental improvements (agile methodology) and keeping the 
stakeholders engaged.

David and Kristy, the slides and summit session are a great starting place for 
this work. Now we need to get the proposal drafted up in the new Keystone-Specs 
repository ( https://git.openstack.org/cgit/openstack/keystone-specs ) so we 
can keep this conversation on track. Having the specification clearly outlined 
and targeted will help us address any concerns with the proposal/redesign as we 
move into implementation.

Cheers,
Morgan

—
Morgan Fainberg

From: Adam Young ayo...@redhat.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: May 28, 2014 at 09:24:26
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [keystone] Redesign of Keystone Federation

On 05/28/2014 11:59 AM, David Chadwick wrote:
 Hi Everyone

 at the Atlanta meeting the following slides were presented during the
 federation session

 http://www.slideshare.net/davidwchadwick/keystone-apach-authn

 It was acknowledged that the current design is sub-optimal, but was a
 best first efforts to get something working in time for the IceHouse
 release, which it did successfully.

 Now is the time to redesign federated access in Keystone in order to
 allow for:
 i) the inclusion of more federation protocols such as OpenID and OpenID
 Connect via Apache plugins

These are underway: Steve Mar just posted review for OpenID connect.
 ii) federating together multiple Keystone installations
I think Keystone should be dealt with separately. Keystone is not a good
stand-alone authentication mechanism.

 iii) the inclusion of federation protocols directly into Keystone where
 good Apache plugins dont yet exist e.g. IETF ABFAB
I though this was mostly pulling together other protocols such as Radius?
http://freeradius.org/mod_auth_radius/


 The Proposed Design (1) in the slide show is the simplest change to
 make, in which the Authn module has different plugins for different
 federation protocols, whether via Apache or not.

I'd like to avoid doing non-HTTPD modules for as long as possible.


 The Proposed Design (2) is cleaner since the plugins are directly into
 Keystone and not via the Authn module, but it requires more
 re-engineering work, and it was questioned in Atlanta whether that
 effort exists or not.

The method parameter is all that is going to vary for most of the Auth
mechanisms. X509 and Kerberos both require special set up of the HTTP
connection to work, which means client and server sides need to be in
sync: SAML, OpenID and the rest have no such requirements.


 Kent therefore proposes that we go with Proposed Design (1). Kent will
 provide drafts of the revised APIs and the re-engineered 

Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-29 Thread Armando M.
mishandling of SSL was the very reason why I brought that change
forward, so I wouldn't rule it out completely ;)

A.

On 29 May 2014 19:15, Paul Ward wpw...@us.ibm.com wrote:
 Yes, we're still on a code level that uses httplib2.  I noticed that as
 well, but wasn't sure if that would really
 help here as it seems like an ssl thing itself.  But... who knows??  I'm not
 sure how consistently we can
 recreate this, but if we can, I'll try using that patch to use requests and
 see if that helps.



 Armando M. arma...@gmail.com wrote on 05/29/2014 11:52:34 AM:

 From: Armando M. arma...@gmail.com


 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 05/29/2014 11:58 AM

 Subject: Re: [openstack-dev] [neutron] Supporting retries in neutronclient

 Hi Paul,

 Just out of curiosity, I am assuming you are using the client that
 still relies on httplib2. Patch [1] replaced httplib2 with requests,
 but I believe that a new client that incorporates this change has not
 yet been published. I wonder if the failures you are referring to
 manifest themselves with the former http library rather than the
 latter. Could you clarify?

 Thanks,
 Armando

 [1] - https://review.openstack.org/#/c/89879/

 On 29 May 2014 17:25, Paul Ward wpw...@us.ibm.com wrote:
  Well, for my specific error, it was an intermittent ssl handshake error
  before the request was ever sent to the
  neutron-server.  In our case, we saw that 4 out of 5 resize operations
  worked, the fifth failed with this ssl
  handshake error in neutronclient.
 
  I certainly think a GET is safe to retry, and I agree with your
  statement
  that PUTs and DELETEs probably
  are as well.  This still leaves a change in nova needing to be made to
  actually a) specify a conf option and
  b) pass it to neutronclient where appropriate.
 
 
  Aaron Rosen aaronoro...@gmail.com wrote on 05/28/2014 07:38:56 PM:
 
  From: Aaron Rosen aaronoro...@gmail.com
 
 
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 05/28/2014 07:44 PM
 
  Subject: Re: [openstack-dev] [neutron] Supporting retries in
  neutronclient
 
  Hi,
 
  I'm curious if other openstack clients implement this type of retry
  thing. I think retrying on GET/DELETES/PUT's should probably be okay.
 
  What types of errors do you see in the neutron-server when it fails
  to respond? I think it would be better to move the retry logic into
  the server around the failures rather than the client (or better yet
  if we fixed the server :)). Most of the times I've seen this type of
  failure is due to deadlock errors caused between (sqlalchemy and
  eventlet *i think*) which cause the client to eventually timeout.
 
  Best,
 
  Aaron
 
 
  On Wed, May 28, 2014 at 11:51 AM, Paul Ward wpw...@us.ibm.com wrote:
  Would it be feasible to make the retry logic only apply to read-only
  operations?  This would still require a nova change to specify the
  number of retries, but it'd also prevent invokers from shooting
  themselves in the foot if they call for a write operation.
 
 
 
  Aaron Rosen aaronoro...@gmail.com wrote on 05/27/2014 09:40:00 PM:
 
   From: Aaron Rosen aaronoro...@gmail.com
 
   To: OpenStack Development Mailing List (not for usage questions)
   openstack-dev@lists.openstack.org,
   Date: 05/27/2014 09:44 PM
 
   Subject: Re: [openstack-dev] [neutron] Supporting retries in
   neutronclient
  
   Hi,
 
  
   Is it possible to detect when the ssl handshaking error occurs on
   the client side (and only retry for that)? If so I think we should
   do that rather than retrying multiple times. The danger here is
   mostly for POST operations (as Eugene pointed out) where it's
   possible for the response to not make it back to the client and for
   the operation to actually succeed.
  
   Having this retry logic nested in the client also prevents things
   like nova from handling these types of failures individually since
   this retry logic is happening inside of the client. I think it would
   be better not to have this internal mechanism in the client and
   instead make the user of the client implement retry so they are
   aware of failures.
  
   Aaron
  
 
   On Tue, May 27, 2014 at 10:48 AM, Paul Ward wpw...@us.ibm.com
   wrote:
   Currently, neutronclient is hardcoded to only try a request once in
   retry_request by virtue of the fact that it uses self.retries as the
   retry count, and that's initialized to 0 and never changed.  We've
   seen an issue where we get an ssl handshaking error intermittently
   (seems like more of an ssl bug) and a retry would probably have
   worked.  Yet, since neutronclient only tries once and gives up, it
   fails the entire operation.  Here is the code in question:
  
   https://github.com/openstack/python-neutronclient/blob/master/
   neutronclient/v2_0/client.py#L1296
  
   Does anybody know if there's some explicit reason we don't currently
   allow 

Re: [openstack-dev] [keystone] Redesign of Keystone Federation

2014-05-29 Thread Dolph Mathews
On Thu, May 29, 2014 at 12:59 PM, Tim Bell tim.b...@cern.ch wrote:



 A further vote to maintain compatibility . One of the key parts to a good
 federation design is to be using it in the field and encountering real life
 problems.



 Production sites expect stability of interfaces and functions. If this
 cannot be reasonably ensured, the federation function deployment scope will
 be very limited and remain lightly used. Without usage, the real end user
 functional gaps and additional requirements cannot be determined.


+1

Maintaining compatibility with OS-FEDERATION is not something we can
compromise on: backwards compatibility should be guaranteed. If we made a
terrible decision in the established groundwork that precludes solving a
use case with sufficiently high demand (and I have not seen any evidence
suggesting that to be the case), we'll have to build an alternative
solution in parallel - not redesign OS-FEDERATION.




 Tim



 *From:* Brad Topol [mailto:bto...@us.ibm.com]
 *Sent:* 29 May 2014 19:31

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [keystone] Redesign of Keystone Federation



 +1!   Excellent summary and analysis Morgan!

 --Brad


 Brad Topol, Ph.D.
 IBM Distinguished Engineer
 OpenStack
 (919) 543-0646
 Internet:  bto...@us.ibm.com
 Assistant: Kendra Witherspoon (919) 254-0680



 From:Morgan Fainberg morgan.fainb...@gmail.com
 To:OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date:05/29/2014 01:07 PM
 Subject:Re: [openstack-dev] [keystone] Redesign of Keystone
 Federation
  --




 I agree that there is room for improvement on the Federation design within
 Keystone. I would like to re-iterate what Adam said that we are already
 seeing efforts to fully integrate further protocol support (OpenID Connect,
 etc) within the current system. Lets be sure that whatever redesign work is
 proposed and accepted takes into account the current stakeholders (that are
 really using Federation) and ensure full backwards compatibility.

 I firmly believe we can work within the Apache module framework for Juno.
 Moving beyond Juno we may need to start implementing the more native
 modules (proposal #2). Lets be sure whatever redesign work we perform this
 cycle doesn’t lock us exclusively into one path or another. It shouldn’t be
 too hard to continue making incremental improvements (agile methodology)
 and keeping the stakeholders engaged.

 David and Kristy, the slides and summit session are a great starting place
 for this work. Now we need to get the proposal drafted up in the new
 Keystone-Specs repository (
 https://git.openstack.org/cgit/openstack/keystone-specs ) so we can keep
 this conversation on track. Having the specification clearly outlined and
 targeted will help us address any concerns with the proposal/redesign as we
 move into implementation.

 Cheers,
 Morgan


 *— Morgan Fainberg*


 From: Adam Young ayo...@redhat.com
 Reply: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: May 28, 2014 at 09:24:26
 To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [keystone] Redesign of Keystone Federation

 On 05/28/2014 11:59 AM, David Chadwick wrote:
  Hi Everyone
 
  at the Atlanta meeting the following slides were presented during the
  federation session
 
  http://www.slideshare.net/davidwchadwick/keystone-apach-authn
 
  It was acknowledged that the current design is sub-optimal, but was a
  best first efforts to get something working in time for the IceHouse
  release, which it did successfully.
 
  Now is the time to redesign federated access in Keystone in order to
  allow for:
  i) the inclusion of more federation protocols such as OpenID and OpenID
  Connect via Apache plugins

 These are underway: Steve Mar just posted review for OpenID connect.
  ii) federating together multiple Keystone installations
 I think Keystone should be dealt with separately. Keystone is not a good
 stand-alone authentication mechanism.

  iii) the inclusion of federation protocols directly into Keystone where
  good Apache plugins dont yet exist e.g. IETF ABFAB
 I though this was mostly pulling together other protocols such as Radius?
 http://freeradius.org/mod_auth_radius/

 
  The Proposed Design (1) in the slide show is the simplest change to
  make, in which the Authn module has different plugins for different
  federation protocols, whether via Apache or not.

 I'd like to avoid doing non-HTTPD modules for as long as possible.

 
  The Proposed Design (2) is cleaner since the plugins are directly into
  Keystone and not via the Authn module, but it requires more
  re-engineering work, and it was questioned in Atlanta whether that
  effort exists or not.

 The method parameter is all that is going to vary for most of the Auth
 

[openstack-dev] [Murano] A new version of python-muranoclient (0.5.2) is to be released

2014-05-29 Thread Timur Sufiev
Hi there!

Recently we've changed the way application packages are paginated in
murano-dashboard (AppCatalog and Package Definitions panels) -
adopting the Glance approach with generators and 'next_marker'
property. This change concerns all 3 murano components - murano-api
[1], python-muranoclient [2] and murano-dashboard [3]. In order to use the
proper version of python-muranoclient in murano-dashboard, a new
version of python-muranoclient will be released, namely 0.5.2.
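
For anyone curious what the pattern looks like, here is a rough sketch of
the Glance-style marker pagination mentioned above (the callable and field
names are illustrative, not the exact python-muranoclient API):

    def iterate_packages(fetch_page, page_size=20):
        """Yield packages one page at a time until no next_marker comes back.

        `fetch_page` is a hypothetical callable wrapping one client request;
        it returns a (packages, next_marker) pair.
        """
        marker = None
        while True:
            packages, next_marker = fetch_page(limit=page_size, marker=marker)
            for package in packages:
                yield package
            if not next_marker:
                break
            marker = next_marker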

[1] https://review.openstack.org/#/c/93900/
[2] https://review.openstack.org/#/c/93899/
[3] https://review.openstack.org/#/c/93939/

-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-05-29 Thread Zane Bitter

On 29/05/14 13:33, Mike Spreitzer wrote:

Devananda van der Veen devananda@gmail.com wrote on 05/29/2014
01:26:12 PM:

  Hi Jaromir,
 
  I agree that the midcycle meetup with TripleO and Ironic was very
  beneficial last cycle, but this cycle, Ironic is co-locating its
  sprint with Nova. Our focus needs to be working with them to merge
  the nova.virt.ironic driver. Details will be forthcoming as we work
  out the exact details with Nova. That said, I'll try to make the
  TripleO sprint as well -- assuming the dates don't overlap.
 
  Cheers,
  Devananda
 

  On Wed, May 28, 2014 at 4:05 AM, Jaromir Coufal jcou...@redhat.com
wrote:
  Hi to all,
 
  after previous TripleO & Ironic mid-cycle meetup, which I believe
  was beneficial for all, I would like to suggest that we meet again
  in the middle of Juno cycle to discuss current progress, blockers,
  next steps and of course get some beer all together :)
 
  Last time, TripleO and Ironic merged their meetings together and I
  think it was great idea. This time I would like to invite also Heat
  team if they want to join. Our cooperation is increasing and I think
  it would be great, if we can discuss all issues together.
 
  Red Hat offered to host this event, so I am very happy to invite you
  all and I would like to ask, who would come if there was a mid-cycle
  meetup in following dates and place:
 
  * July 28 - Aug 1
  * Red Hat office, Raleigh, North Carolina
 
  If you are intending to join, please, fill yourselves into this etherpad:
  https://etherpad.openstack.org/p/juno-midcycle-meetup
 
  Cheers
  -- Jarda

Given the organizers, I assume this will be strongly focused on TripleO
and Ironic.
Would this be a good venue for all the mid-cycle discussion that will be
relevant to Heat?
Is anyone planning a distinct Heat-focused mid-cycle meetup?


We haven't had one in the past, but the project is getting bigger so, 
given our need to sync with the TripleO folks anyway, this may be a good 
opportunity to try. Certainly it's unlikely that any Heat developers 
attending will spend the _whole_ week working with the TripleO team, so 
there should be time to do something like what you're suggesting. I 
think we just need to see who is willing and able to attend, and work out 
an agenda on that basis.


For my part, I will certainly be there for the whole week if it's July 
28 - Aug 1. If it's the week before I may not be able to make it at all.


BTW one timing option I haven't seen mentioned is to follow Pycon-AU's 
model of running e.g. Friday-Tuesday (July 25-29). I know nobody wants 
to be stuck in Raleigh, NC on a weekend (I've lived there, I understand 
;), but for folks who have a long ways to travel it's one weekend lost 
instead of two.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][mysql] IMPORTANT: MySQL Galera does *not* support SELECT ... FOR UPDATE

2014-05-29 Thread Robert Collins
I just bent Jay's ear on IRC about this for a bit...

On 21 May 2014 05:07, Jay Pipes jaypi...@gmail.com wrote:

 We are one of those operators that use Galera for replicating our mysql
 databases. We used to  see issues with deadlocks when having multiple
 mysql writers in our mysql cluster. As a workaround we have our haproxy
 configuration in an active-standby configuration for our mysql VIP.

 I seem to recall we had a lot of the deadlocks happen through Neutron.
 When we go through our Icehouse testing, we will redo our multimaster
 mysql setup and provide feedback on the issues we see.


 Thanks very much, Sridar, much appreciated.

 This issue was raised at the Neutron IRC meeting yesterday, and we've agreed
 to take a staged approach. We will first work on documentation to add to the
 operations guide that explains the issues (and the tradeoffs of going to a
 single-writer cluster configuration vs. just having the clients retry some
 request). Later stages will work on a non-locking quota-management
 algorithm, possibly in conjunction with Climate, and looking into how to use
 coarser-grained file locks or a distributed lock manager for handling
 cross-component deterministic reads in Neutron.

So - correct my if I've (still :)) got it wrong, but there are two
orthogonal issues here:

a) conflicts in SQL are a normal fact of life - even with SELECT FOR
UPDATE on a single-node MySQL deployment. There is a standard
signalling mechanism for them, and the Galera behaviour here is
in-spec. It differs from the single-node situation in only two ways:
 1) It *always* happens when one client COMMITs rather than sometimes
happening on the SELECT FOR UPDATE and sometimes on COMMIT
 2) It happens to all clients implicated in the data being replicated,
rather than just the unlucky schmuck who came along second
It is worth calling out that the DB itself remains atomic and
consistent - there is no data integrity issue at the RDBMS layer.

b) SELECT FOR UPDATE makes us see more conflicts, but see (a) -
conflicts are a normal part of using a SQL storage layer.

So while I'm keen to see us reduce the frequency with which we trigger
replication conflicts in Galera, I'd like to see the staged approach
be:

A) Documentation
B) Fix / add retry support pervasively through both Neutron and
OpenStack as a whole. It's baseline sanity for SQL usage
C) Implement more sophisticated schemas/update logic to
reduce/eliminate SELECT FOR UPDATE.

C seems like substantially more review and design work than B; while B
isn't 'easy', it's *still necessary to be correct*.
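
To sketch what (B) means in practice (illustrative only, not Neutron code;
it assumes oslo.db's DBDeadlock exception, or whatever conflict error the
backend actually surfaces):

    import time

    from oslo.db import exception as db_exc


    def retry_on_deadlock(func, attempts=5, delay=0.5):
        # Re-run a transactional callable a bounded number of times when the
        # database reports a conflict, whether that comes from a Galera
        # certification failure or a plain single-node deadlock.
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except db_exc.DBDeadlock:
                    if attempt == attempts - 1:
                        raise
                    time.sleep(delay * (attempt + 1))
        return wrapper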

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-05-29 Thread Anita Kuno
On 05/28/2014 08:54 AM, Radomir Dopieralski wrote:
 Hello,
 
 we plan to finally do the split in this cycle, and I started some
 preparations for that. I also started to prepare a detailed plan for the
 whole operation, as it seems to be a rather big endeavor.
 
 You can view and amend the plan at the etherpad at:
 https://etherpad.openstack.org/p/horizon-split-plan
 
 It's still a little vague, but I plan to gradually get it more detailed.
 All the points are up for discussion, if anybody has any good ideas or
 suggestions, or can help in any way, please don't hesitate to add to
 this document.
 
 We still don't have any dates or anything -- I suppose we will work that
 out soonish.
 
 Oh, and great thanks to all the people who have helped me so far with
 it, I wouldn't even dream about trying such a thing without you. Also
 thanks in advance to anybody who plans to help!
 
I'd like to confirm that we are all aware that this patch creates 16 new
repos under the administration of horizon-ptl and horizon-core:
https://review.openstack.org/#/c/95716/

If I'm late to the party and the only one this is news to, that is
fine. Sixteen additional repos seems like it will mean a lot of additional
reviews.

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] dealing with M:N relashionships for Pools and Listeners

2014-05-29 Thread Samuel Bercovici
Before solving everything, I would like first to itemize the things we should 
solve/consider.
So please focus first on what it is that we need to pay attention to, and less 
on how to solve such issues.

Here is the list of items:

* Provisioning status/state
  - Should it only be on the load balancer?
  - Do we need a more granular status per logical child object?
  - Update process
    - What happens when a logical child object is modified?
    - Where can a user check the success of the update?

* Operation status/state - this refers to information returning from the 
load balancing back end / driver
  - How is the status of a member that failed its health monitor reflected, 
on which LBaaS object, and how can a user understand the failure?

* Administrator state management
  - How does a change in admin_state on a member, pool, or listener get managed?
  - Do we expect a change in the operation state to reflect this?

* Statistics consumption
  - From which object will the user poll to get statistics for the different 
sub-objects in the model (ex: load balancer)?
  - How can statistics from a shared logical object be obtained in the context 
of the load balancer (ex: pool statistics for a specific listener in a specific 
load balancer)?

* Deletion of shared objects
  - Do we support deletion of shared objects that will cascade delete?

Regards,
-Sam.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-05-29 Thread Gabriel Hurley
Forgive me if I'm misunderstanding, but those all look like repositories that 
are strictly tracking upstreams. They're not maintained by the 
Horizon/OpenStack developers whatsoever. Is this intentional/necessary?

- Gabriel

 -Original Message-
 From: Anita Kuno [mailto:ante...@anteaya.info]
 Sent: Thursday, May 29, 2014 12:30 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [horizon][infra] Plan for the splitting of 
 Horizon
 into two repositories
 
 On 05/28/2014 08:54 AM, Radomir Dopieralski wrote:
  Hello,
 
  we plan to finally do the split in this cycle, and I started some
  preparations for that. I also started to prepare a detailed plan for
  the whole operation, as it seems to be a rather big endeavor.
 
  You can view and amend the plan at the etherpad at:
  https://etherpad.openstack.org/p/horizon-split-plan
 
  It's still a little vague, but I plan to gradually get it more detailed.
  All the points are up for discussion, if anybody has any good ideas or
  suggestions, or can help in any way, please don't hesitate to add to
  this document.
 
  We still don't have any dates or anything -- I suppose we will work
  that out soonish.
 
  Oh, and great thanks to all the people who have helped me so far with
  it, I wouldn't even dream about trying such a thing without you. Also
  thanks in advance to anybody who plans to help!
 
 I'd like to confirm that we are all aware that this patch creates 16 new repos
 under the administration of horizon-ptl and horizon-core:
 https://review.openstack.org/#/c/95716/
 
 If I'm late to the party and the only one that this is news to, that is fine.
 Sixteen additional repos seems like a lot of additional reviews will be 
 needed.
 
 Thanks,
 Anita.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-05-29 Thread Anita Kuno
On 05/29/2014 03:45 PM, Gabriel Hurley wrote:
 Forgive me if I'm misunderstanding, but those all look like repositories that 
 are strictly tracking upstreams. They're not maintained by the 
 Horizon/OpenStack developers whatsoever. Is this intentional/necessary?
 
 - Gabriel
The permissions on all the new repositories require +A from horizon-core
and tagging from horizon-ptl:
https://review.openstack.org/#/c/95716/4/modules/openstack_project/files/gerrit/acls/stackforge/xstatic.config

Anita.
 
 -Original Message-
 From: Anita Kuno [mailto:ante...@anteaya.info]
 Sent: Thursday, May 29, 2014 12:30 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [horizon][infra] Plan for the splitting of 
 Horizon
 into two repositories

 On 05/28/2014 08:54 AM, Radomir Dopieralski wrote:
 Hello,

 we plan to finally do the split in this cycle, and I started some
 preparations for that. I also started to prepare a detailed plan for
 the whole operation, as it seems to be a rather big endeavor.

 You can view and amend the plan at the etherpad at:
 https://etherpad.openstack.org/p/horizon-split-plan

 It's still a little vague, but I plan to gradually get it more detailed.
 All the points are up for discussion, if anybody has any good ideas or
 suggestions, or can help in any way, please don't hesitate to add to
 this document.

 We still don't have any dates or anything -- I suppose we will work
 that out soonish.

 Oh, and great thanks to all the people who have helped me so far with
 it, I wouldn't even dream about trying such a thing without you. Also
 thanks in advance to anybody who plans to help!

 I'd like to confirm that we are all aware that this patch creates 16 new 
 repos
 under the administration of horizon-ptl and horizon-core:
 https://review.openstack.org/#/c/95716/

 If I'm late to the party and the only one that this is news to, that is fine.
 Sixteen additional repos seems like a lot of additional reviews will be 
 needed.

 Thanks,
 Anita.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Barbican][Heat] Reviews requested for Barbican resources

2014-05-29 Thread Randall Burt
Hello Barbican devs. I was wondering if we could get some of you to weigh in on 
a couple of reviews for adding Barbican support in Heat. We seem to be churning 
a bit around current and future features supported by the resources and could 
use some expert opinions.

Blueprint: https://blueprints.launchpad.net/heat/+spec/barbican-resources
Order Resource: https://review.openstack.org/81906
Secret Resource: https://review.openstack.org/79355

Thanks in advance for your time.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican][Heat] Reviews requested for Barbican resources

2014-05-29 Thread John Wood
Hello Randall,

We'll take a look at these CRs for sure, thanks.

Thanks,
John


From: Randall Burt [randall.b...@rackspace.com]
Sent: Thursday, May 29, 2014 3:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Barbican][Heat] Reviews requested for Barbican
resources

Hello Barbican devs. I was wondering if we could get some of you to weigh in on 
a couple of reviews for adding Barbican support in Heat. We seem to be churning 
a bit around current and future features supported by the resources and could 
use some expert opinions.

Blueprint: https://blueprints.launchpad.net/heat/+spec/barbican-resources
Order Resource: https://review.openstack.org/81906
Secret Resource: https://review.openstack.org/79355

Thanks in advance for your time.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-29 Thread Kevin Benton
Httplib2 does very little directly with ssl other than a wrap_socket call.
Unless requests has special ssl error handling and retry logic, it will be
exposed to the same set of underlying errors from the ssl library so a
retry that at least catches socket and ssl errors is a good idea.
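
For illustration, a minimal sketch of that idea (not the actual
neutronclient change; the helper name and the retries knob are made up)
which retries only on transport-level failures and is meant to wrap
idempotent calls such as GETs:

    import socket
    import ssl
    import time


    def retry_on_transport_error(func, retries=3, delay=1):
        # Retry `func` when the SSL handshake or the socket layer fails
        # before a response comes back; leave non-idempotent calls (POSTs)
        # unwrapped.
        def wrapper(*args, **kwargs):
            for attempt in range(retries + 1):
                try:
                    return func(*args, **kwargs)
                except (ssl.SSLError, socket.error):
                    if attempt == retries:
                        raise
                    time.sleep(delay)
        return wrapper

    # e.g. only wrap read-only operations:
    # list_networks = retry_on_transport_error(neutron.list_networks)
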
On May 29, 2014 1:17 PM, Paul Ward wpw...@us.ibm.com wrote:

 Yes, we're still on a code level that uses httplib2.  I noticed that as
 well, but wasn't sure if that would really
 help here as it seems like an ssl thing itself.  But... who knows??  I'm
 not sure how consistently we can
 recreate this, but if we can, I'll try using that patch to use requests
 and see if that helps.



 Armando M. arma...@gmail.com wrote on 05/29/2014 11:52:34 AM:

  From: Armando M. arma...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 05/29/2014 11:58 AM
  Subject: Re: [openstack-dev] [neutron] Supporting retries in
 neutronclient
 
  Hi Paul,
 
  Just out of curiosity, I am assuming you are using the client that
  still relies on httplib2. Patch [1] replaced httplib2 with requests,
  but I believe that a new client that incorporates this change has not
  yet been published. I wonder if the failures you are referring to
  manifest themselves with the former http library rather than the
  latter. Could you clarify?
 
  Thanks,
  Armando
 
  [1] - https://review.openstack.org/#/c/89879/
 
  On 29 May 2014 17:25, Paul Ward wpw...@us.ibm.com wrote:
   Well, for my specific error, it was an intermittent ssl handshake error
   before the request was ever sent to the
   neutron-server.  In our case, we saw that 4 out of 5 resize operations
   worked, the fifth failed with this ssl
   handshake error in neutronclient.
  
   I certainly think a GET is safe to retry, and I agree with your
 statement
   that PUTs and DELETEs probably
   are as well.  This still leaves a change in nova needing to be made to
   actually a) specify a conf option and
   b) pass it to neutronclient where appropriate.
  
  
   Aaron Rosen aaronoro...@gmail.com wrote on 05/28/2014 07:38:56 PM:
  
   From: Aaron Rosen aaronoro...@gmail.com
  
  
   To: OpenStack Development Mailing List (not for usage questions)
   openstack-dev@lists.openstack.org,
   Date: 05/28/2014 07:44 PM
  
   Subject: Re: [openstack-dev] [neutron] Supporting retries in
 neutronclient
  
   Hi,
  
   I'm curious if other openstack clients implement this type of retry
   thing. I think retrying on GET/DELETES/PUT's should probably be okay.
  
   What types of errors do you see in the neutron-server when it fails
   to respond? I think it would be better to move the retry logic into
   the server around the failures rather than the client (or better yet
   if we fixed the server :)). Most of the times I've seen this type of
   failure is due to deadlock errors caused between (sqlalchemy and
   eventlet *i think*) which cause the client to eventually timeout.
  
   Best,
  
   Aaron
  
  
   On Wed, May 28, 2014 at 11:51 AM, Paul Ward wpw...@us.ibm.com
 wrote:
   Would it be feasible to make the retry logic only apply to read-only
   operations?  This would still require a nova change to specify the
   number of retries, but it'd also prevent invokers from shooting
   themselves in the foot if they call for a write operation.
  
  
  
   Aaron Rosen aaronoro...@gmail.com wrote on 05/27/2014 09:40:00 PM:
  
From: Aaron Rosen aaronoro...@gmail.com
  
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date: 05/27/2014 09:44 PM
  
Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient
   
Hi,
  
   
Is it possible to detect when the ssl handshaking error occurs on
the client side (and only retry for that)? If so I think we should
do that rather than retrying multiple times. The danger here is
mostly for POST operations (as Eugene pointed out) where it's
possible for the response to not make it back to the client and for
the operation to actually succeed.
   
Having this retry logic nested in the client also prevents things
like nova from handling these types of failures individually since
this retry logic is happening inside of the client. I think it would
be better not to have this internal mechanism in the client and
instead make the user of the client implement retry so they are
aware of failures.
   
Aaron
   
  
On Tue, May 27, 2014 at 10:48 AM, Paul Ward wpw...@us.ibm.com
 wrote:
Currently, neutronclient is hardcoded to only try a request once in
retry_request by virtue of the fact that it uses self.retries as the
retry count, and that's initialized to 0 and never changed.  We've
seen an issue where we get an ssl handshaking error intermittently
(seems like more of an ssl bug) and a retry would probably have
worked.  Yet, since neutronclient only tries 

Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-05-29 Thread Lyle, David
The idea here is to decouple 3rd party static files from being embedded in
the Horizon repo. There are several reasons for this move. With embedded
3rd party static files, upgrading the static files is cumbersome, versions
can be difficult to track, and updates can be difficult to synchronize.
This change encourages developers to fix bugs upstream at the source,
rather than edit the static copies in the horizon repo. With the proposed
repo split, both the existing horizon and openstack_dashboard components
may have a common dependency on these static packages.

There are several more xstatic packages that horizon will pull in that are
maintained outside openstack. The packages added are only those that did
not have existing xstatic packages. These packages will be updated very
sparingly, only when updating say bootstrap or jquery versions.

We are certainly open to feedback.

David
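
For anyone not familiar with xstatic, the consumption side is roughly this
(a simplified sketch; the exact entries Horizon ends up using may differ):

    # Simplified sketch of the mechanism: each XStatic-* package reports
    # where its files live, and Django's staticfiles is pointed at that
    # directory instead of at files embedded in the horizon repo.
    import xstatic.pkg.jquery

    STATICFILES_DIRS = [
        ('horizon/lib/jquery', xstatic.pkg.jquery.BASE_DIR),
    ]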

 
On 5/29/14, 1:53 PM, Anita Kuno ante...@anteaya.info wrote:

On 05/29/2014 03:45 PM, Gabriel Hurley wrote:
 Forgive me if I'm misunderstanding, but those all look like
repositories that are strictly tracking upstreams. They're not
maintained by the Horizon/OpenStack developers whatsoever. Is this
intentional/necessary?
 
 - Gabriel
The permissions on all the new repositories require +A from horizon-core
and tagging from horizon-ptl:
https://review.openstack.org/#/c/95716/4/modules/openstack_project/files/g
errit/acls/stackforge/xstatic.config

Anita.
 
 -Original Message-
 From: Anita Kuno [mailto:ante...@anteaya.info]
 Sent: Thursday, May 29, 2014 12:30 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [horizon][infra] Plan for the splitting
of Horizon
 into two repositories

 On 05/28/2014 08:54 AM, Radomir Dopieralski wrote:
 Hello,

 we plan to finally do the split in this cycle, and I started some
 preparations for that. I also started to prepare a detailed plan for
 the whole operation, as it seems to be a rather big endeavor.

 You can view and amend the plan at the etherpad at:
 https://etherpad.openstack.org/p/horizon-split-plan

 It's still a little vague, but I plan to gradually get it more
detailed.
 All the points are up for discussion, if anybody has any good ideas or
 suggestions, or can help in any way, please don't hesitate to add to
 this document.

 We still don't have any dates or anything -- I suppose we will work
 that out soonish.

 Oh, and great thanks to all the people who have helped me so far with
 it, I wouldn't even dream about trying such a thing without you. Also
 thanks in advance to anybody who plans to help!

 I'd like to confirm that we are all aware that this patch creates 16
new repos
 under the administration of horizon-ptl and horizon-core:
 https://review.openstack.org/#/c/95716/

 If I'm late to the party and the only one that this is news to, that
is fine.
 Sixteen additional repos seems like a lot of additional reviews will
be needed.

 Thanks,
 Anita.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-05-29 Thread Anita Kuno
On 05/29/2014 04:55 PM, Lyle, David wrote:
 The idea here is to decouple 3rd party static files from being embedded in
 the Horizon repo. There are several reasons for this move. With embedded
 3rd party static files, upgrading the static files is cumbersome, versions
 can be difficult to track, and updates can be difficult to synchronize.
 This change encourages developers to fix bugs upstream at the source,
 rather than edit the static copies in the horizon repo. With the proposed
 repo split, both the existing horizon and openstack_dashboard components
 may have a common dependency on these static packages.
 
 There are several more xstatic packages that horizon will pull in that are
 maintained outside openstack. The packages added are only those that did
 not have existing xstatic packages. These packages will be updated very
 sparingly, only when updating say bootstrap or jquery versions.
 
 We are certainly open to feedback.
 
 David
All I needed to hear was that you and the rest of the core reviewers are
aware that you will be administering these new repos. That was my
concern.

I'll let the rest of the design discussions proceed.

Thanks David,
Anita.
 
  
 On 5/29/14, 1:53 PM, Anita Kuno ante...@anteaya.info wrote:
 
 On 05/29/2014 03:45 PM, Gabriel Hurley wrote:
 Forgive me if I'm misunderstanding, but those all look like
 repositories that are strictly tracking upstreams. They're not
 maintained by the Horizon/OpenStack developers whatsoever. Is this
 intentional/necessary?

 - Gabriel
 The permissions on all the new repositories require +A from horizon-core
 and tagging from horizon-ptl:
 https://review.openstack.org/#/c/95716/4/modules/openstack_project/files/g
 errit/acls/stackforge/xstatic.config

 Anita.

 -Original Message-
 From: Anita Kuno [mailto:ante...@anteaya.info]
 Sent: Thursday, May 29, 2014 12:30 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [horizon][infra] Plan for the splitting
 of Horizon
 into two repositories

 On 05/28/2014 08:54 AM, Radomir Dopieralski wrote:
 Hello,

 we plan to finally do the split in this cycle, and I started some
 preparations for that. I also started to prepare a detailed plan for
 the whole operation, as it seems to be a rather big endeavor.

 You can view and amend the plan at the etherpad at:
 https://etherpad.openstack.org/p/horizon-split-plan

 It's still a little vague, but I plan to gradually get it more
 detailed.
 All the points are up for discussion, if anybody has any good ideas or
 suggestions, or can help in any way, please don't hesitate to add to
 this document.

 We still don't have any dates or anything -- I suppose we will work
 that out soonish.

 Oh, and great thanks to all the people who have helped me so far with
 it, I wouldn't even dream about trying such a thing without you. Also
 thanks in advance to anybody who plans to help!

 I'd like to confirm that we are all aware that this patch creates 16
 new repos
 under the administration of horizon-ptl and horizon-core:
 https://review.openstack.org/#/c/95716/

 If I'm late to the party and the only one that this is news to, that
 is fine.
 Sixteen additional repos seems like a lot of additional reviews will
 be needed.

 Thanks,
 Anita.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] VM Scheduling v/s Network as input any consideration ?

2014-05-29 Thread Carl Baldwin
Keshava,

How much of a problem is routing prefix fragmentation for you?
 Fragmentation causes routing table bloat and may reduce the performance of
the routing table.  It also increases the amount of information traded by
the routing protocol.  Which aspect(s) is (are) affecting you?  Can you
quantify this effect?

A major motivation for my interest in employing a dynamic routing protocol
within a datacenter is to enable IP mobility so that I don't need to worry
about doing things like scheduling instances based on their IP addresses.
 Also, I believe that it can make floating ips more floaty so that they
can cross network boundaries without having to statically configure routers.

To get this mobility, it seems inevitable to accept the fragmentation in
the routing prefixes.  This level of fragmentation would be contained to a
well-defined scope, like within a datacenter.  Is it your opinion that
trading off fragmentation for mobility is a bad trade-off?  Maybe it depends
on the capabilities of the TOR switches and routers that you have.  Maybe
others can chime in here.

Carl


On Wed, May 28, 2014 at 10:11 PM, A, Keshava keshav...@hp.com wrote:

  Hi,

 The motivation behind this requirement is to achieve VM prefix aggregation
 using a routing protocol (BGP/OSPF), so that the prefixes advertised from
 the cloud to upstream will be aggregated.



 I do not know how the current scheduler is implemented.

 But the scheduler would need to maintain some kind of "network to node to VM"
 mapping.

 Based on that mapping, when a new VM is being hosted, it could be given a
 prefix on those nodes according to the input preference.



 It would be a great help to us on the routing side if this were available in
 the infrastructure.

 I am available for review/technical discussion/meeting.





 Thanks & regards,

 Keshava.A



 *From:* jcsf31...@gmail.com [mailto:jcsf31...@gmail.com]
 *Sent:* Thursday, May 29, 2014 9:14 AM
 *To:* openstack-dev@lists.openstack.org; Carl Baldwin; Kyle Mestery;
 OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron][L3] VM Scheduling v/s Network as
 input any consideration ?



 Hi keshava,



 This is an area that I am interested in.   I'd be happy to collaborate
 with you on a blueprint.  This would require enhancements to the
 scheduler as you suggested.



 There are a number of use cases for this.





 ‎John.



 Sent from my  smartphone.

 *From: *A, Keshava‎

 *Sent: *Tuesday, May 27, 2014 10:58 AM

 *To: *Carl Baldwin; Kyle Mestery; OpenStack Development Mailing List (not
 for usage questions)

 *Reply To: *OpenStack Development Mailing List (not for usage questions)

 *Subject: *[openstack-dev] [neutron][L3] VM Scheduling v/s Network as
 input any consideration ?



 Hi,

 I have a basic question about the Nova Scheduler in the following scenario.

 Whenever a new VM is to be hosted, is there any consideration of network
 attributes?

 For example, let us say all the VMs with 10.1.x are under TOR-1, and 20.1.xy
 are under TOR-2.

 A new CN node is inserted under TOR-2, and at the same time a new tenant VM
 needs to be hosted on the 10.1.xa network.



 Then is it possible to mandate that the new VM (10.1.xa) be hosted under
 TOR-1 instead of being scheduled under TOR-2 (where CN-23 is completely free
 from a resource perspective)?

 This is required to achieve prefix/route aggregation and to avoid network
 broadcast (in case the VMs are scattered across different TORs/switches).
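
 As a purely illustrative sketch of the kind of placement decision being asked
 for (the mapping data, and how the scheduler would obtain it, are open design
 questions):

    # Purely illustrative: prefer hosts under the TOR that already serves the
    # VM's prefix. Where this mapping comes from is an open design question.
    import ipaddress   # stdlib in Python 3; a backport exists for Python 2

    TOR_PREFIXES = {'tor-1': ipaddress.ip_network(u'10.1.0.0/16'),
                    'tor-2': ipaddress.ip_network(u'20.1.0.0/16')}
    TOR_HOSTS = {'tor-1': {'cn-1', 'cn-2'}, 'tor-2': {'cn-23'}}

    def candidate_hosts(vm_ip):
        """Hosts whose TOR already serves vm_ip's prefix (empty = no hint)."""
        addr = ipaddress.ip_address(vm_ip)
        for tor, prefix in TOR_PREFIXES.items():
            if addr in prefix:
                return TOR_HOSTS[tor]
        return set()

    print(candidate_hosts(u'10.1.0.10'))   # {'cn-1', 'cn-2'}: stay on TOR-1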









 Thanks & regards,

 Keshava.A






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] [Heat] Glance Metadata Catalog for Capabilities and Tags

2014-05-29 Thread Tripp, Travis S
Hello everyone!

At the summit in Atlanta we demonstrated the Graffiti project concepts.  We 
received very positive feedback from members of multiple dev projects as well 
as numerous operators.  We were specifically asked multiple times about getting 
the Graffiti metadata catalog concepts into Glance so that we can start to 
officially support the ideas we demonstrated in Horizon.

After a number of additional meetings at the summit and working through ideas 
the past week, we've created the initial proposal for adding a Metadata Catalog 
to Glance for capabilities and tags.  This is distinct from the Artifact 
Catalog, but we do see that capability and tag catalog can be used with the 
artifact catalog.

We've detailed our initial proposal in the following Google Doc.  Mark 
Washenberger agreed that this was a good place to capture the initial proposal 
and we can later move it over to the Glance spec repo which will be integrated 
with Launchpad blueprints soon.

https://docs.google.com/document/d/1cS2tJZrj748ZsttAabdHJDzkbU9nML5S4oFktFNNd68

Please take a look and let's discuss!

Also, the following video is a brief recap of what was demo'd at the summit.
It should help convey much of the thinking behind the ideas in the proposal.

https://www.youtube.com/watch?v=Dhrthnq1bnw


Thank you!

Travis Tripp (HP)
Murali Sundar (Intel)


A Few Related Blueprints
https://blueprints.launchpad.net/horizon/+spec/instance-launch-using-capability-filtering
https://blueprints.launchpad.net/horizon/+spec/tagging
https://blueprints.launchpad.net/horizon/+spec/faceted-search
https://blueprints.launchpad.net/horizon/+spec/host-aggregate-update-metadata
https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Short term scaling strategies for large Heat stacks

2014-05-29 Thread Clint Byrum
Hello!

I am writing to get some brainstorming started on how we might mitigate
some of the issues we've seen while deploying large stacks on Heat. I am
sending this to the dev list because it may involve landing fixes rather
than just using different strategies. The problems outlined here are
well known and reported as bugs or feature requests, but there may be
more that we can do.

First off, we've pushed Heat quite a bit further than it ever was able
to go before. This is due to the fantastic work done across the Heat
development community to respond to issues we've reported. We are really
excited to get started on the effort to move Heat toward a convergence
model, but it is quite clearly a medium-term strategy. What we are
looking for is a short term bridge to get us through the near-term
problems while code lands to fix things in the mid-term.

We have a desire to deploy a fairly wide stack of servers. OpenStack's
simplest architecture has a few controllers which keep state and need
to be HA, and then lots of compute nodes.

What we've seen is that while deploying a cluster with a single controller
and n compute nodes, the probability of stack failure goes up as n
goes up. So we want to mitigate the impact and enable a deployer to
manage a cluster like this with Heat.

We have also seen that the single thread that must manage an action will
take quite a lot of CPU power to process a large stack, which makes
operations on a large stack take a long time and thus increases the
impact of any changes that must be made.

Strategies:

Abandon + Adopt
===

In this strategy, a failure will be responded to by abandoning the stack
in Heat, leaving the successful resources in place. Then the resulting
abandon serialization will be editted to match reality, and the stack
adopted. This suffers from a bug where in-instance users created inside
the stack, while still valid, will not be given access to the metadata.
fix the bugs in abandon/adopt to make sure this works.

Pros: * Exists today

Cons: * Bugs must be fixed
  * Manual process is undefined and requires engineering effort to
recover.

Multiple Stacks
===

We could break the stack up between controllers and compute nodes. The
controller will be less likely to fail because it will probably be 3 nodes
for a reasonably sized cloud. The compute nodes would then live in their
own stack of (n) nodes. We could further break that up into chunks of
compute nodes, which would further mitigate failure. If a small chunk of
compute nodes fails, we can just migrate off of them. One challenge here
is that compute nodes need to know about all of the other compute nodes
to support live migration. We would have to do a second stack update after
creation to share data between all of these stacks to make this work.

Pros: * Exists today

Cons: * Complicates host awareness
  * Still vulnerable to stack failure (just reduces probability and
impact).
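
A rough sketch of the chunking part, assuming python-heatclient's
stacks.create() keywords and a compute-chunk template that takes a count
parameter (all names here are placeholders):

    # Placeholder names throughout; assumes heat.stacks.create() accepts
    # stack_name/template/parameters as keyword arguments.
    CHUNK_SIZE = 50

    def create_compute_chunks(heat, total_nodes, chunk_template):
        """Split the compute nodes into CHUNK_SIZE-node stacks."""
        created = []
        for start in range(0, total_nodes, CHUNK_SIZE):
            count = min(CHUNK_SIZE, total_nodes - start)
            created.append(heat.stacks.create(
                stack_name='compute-chunk-%d' % (start // CHUNK_SIZE),
                template=chunk_template,
                parameters={'compute_count': count}))
        return created

As noted above, a second pass (a stack update pushing the full host list into
each chunk) would still be needed so compute nodes know about each other for
live migration.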

Manual State Manipulation
=

We could create tools for administrators to go into the Heat database
and fix the stack. This is basically the same approach as
abandon/adopt, but it is lighter weight and works around the issue of
losing track of in-instance users.

Pros: * Light weight
  * Possible today

Cons: * Violates API layers
  * Requires out of band access to Heat data store.
  * Will not survive database schema changes

update-failure-recovery
===

This is a blueprint I believe Zane is working on to land in Juno. It will
allow us to retry a failed create or update action. Combined with the
separate controller/compute node strategy, this may be our best option,
but it is unclear whether that code will be available soon or not. The
chunking is definitely required, because with 500 compute nodes, if
node #250 fails, the remaining 249 nodes that are IN_PROGRESS will be
cancelled, which makes the impact of a transient failure quite extreme.
Also without chunking, we'll suffer from some of the performance
problems we've seen where a single engine process will have to do all of
the work to bring up a stack.

Pros: * Uses blessed strategy

Cons: * Implementation is not complete
  * Still suffers from heavy impact of failure
  * Requires chunking to be feasible


Anyway, these are the strategies I have available today. Does anyone
else have some ideas to help us make use of the current Heat to deploy
large stacks? Thanks!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] How about deprecate cfg.CONF.allow_overlapping_ips?

2014-05-29 Thread Nachi Ueno
Hi folks

Today, we can choose whether or not to allow overlapping IPs via configuration.
This has an impact on the database design, and in practice the flag complicates
the implementation.
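
Purely as an illustration of the complication (this is not the actual Neutron
code path), the flag forces two validation behaviours to coexist:

    # Illustrative only -- not Neutron's actual implementation.
    import ipaddress
    from oslo.config import cfg

    cfg.CONF.register_opts([
        cfg.BoolOpt('allow_overlapping_ips', default=False,
                    help='Allow overlapping IP support in Neutron'),
    ])

    def validate_subnet_cidr(new_cidr, existing_cidrs):
        if cfg.CONF.allow_overlapping_ips:
            return  # overlap allowed; namespaces keep tenants isolated
        new_net = ipaddress.ip_network(new_cidr)
        for cidr in existing_cidrs:
            if new_net.overlaps(ipaddress.ip_network(cidr)):
                raise ValueError('%s overlaps with %s' % (new_cidr, cidr))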

The reason we have this flag is historical: it was needed when many OSes did
not support namespaces. However, most OSes support namespaces now.

So, IMO, we can deprecate it.
Any thought on this?

Best
Nachi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleO] Should #tuskar business be conducted in the #tripleo channel?

2014-05-29 Thread James Slagle
On Thu, May 29, 2014 at 12:25 PM, Anita Kuno ante...@anteaya.info wrote:
 As I was reviewing this patch today:
 https://review.openstack.org/#/c/96160/

 It occurred to me that the tuskar project is part of the tripleo
 program:
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n247

 I wondered if business, including bots posting to irc for #tuskar, is
 best conducted in the #tripleo channel. I spoke with Chris Jones in
 #tripleo and he said the topic hadn't come up before. He asked me if I
 wanted to kick off the email thread, so here we are.

 Should #tuskar business be conducted in the #tripleo channel?

I'd say yes. I don't think the additional traffic would be a large
distraction at all to normal TripleO business.

I can, however, see how it might be nice to have #tuskar so that tuskar-api
and tuskar-ui discussion stays in the same channel. Do folks usually do that?
Or is tuskar-ui conversation already happening in #openstack-horizon?

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Short term scaling strategies for large Heat stacks

2014-05-29 Thread Mike Spreitzer
Clint Byrum cl...@fewbar.com wrote on 05/29/2014 07:52:07 PM:

 I am writing to get some brainstorming started on how we might mitigate
 some of the issues we've seen while deploying large stacks on Heat. I am
 sending this to the dev list because it may involve landing fixes rather
 than just using different strategies. The problems outlined here are
 well known and reported as bugs or feature requests, but there may be
 more that we can do.
 
 ...
 
 Strategies:
 
 ...
 
 update-failure-recovery
 ===
 
 This is a blueprint I believe Zane is working on to land in Juno. It 
will
 allow us to retry a failed create or update action. Combined with the
 separate controller/compute node strategy, this may be our best option,
 but it is unclear whether that code will be available soon or not. The
 chunking is definitely required, because with 500 compute nodes, if
 node #250 fails, the remaining 249 nodes that are IN_PROGRESS will be
 cancelled, which makes the impact of a transient failure quite extreme.
 Also without chunking, we'll suffer from some of the performance
 problems we've seen where a single engine process will have to do all of
 the work to bring up a stack.
 
 Pros: * Uses blessed strategy
 
 Cons: * Implementation is not complete
   * Still suffers from heavy impact of failure
   * Requires chunking to be feasible

I like this one.  As I remarked in the convergence discussion, I think the 
first step there is a DB schema change to separate desired and observed 
state.  Once that is done, failure on one resource need not wedge a stack; 
non-dependent resources (like the peer compute nodes) can still be 
created.
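
A hypothetical sketch of that first step (not Heat's actual schema): keep the
desired definition and the last observed state side by side on each resource
row.

    # Hypothetical sketch only -- not Heat's real schema. The point is that a
    # failed resource records its observed state without invalidating the
    # desired state of its non-dependent peers.
    from sqlalchemy import Column, Integer, String, Text, create_engine
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Resource(Base):
        __tablename__ = 'resource'
        id = Column(Integer, primary_key=True)
        stack_id = Column(String(36), nullable=False)
        name = Column(String(255), nullable=False)
        desired_definition = Column(Text)    # what the template asks for
        observed_state = Column(String(16))  # e.g. COMPLETE or FAILED
        observed_properties = Column(Text)   # what actually exists

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)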

This does not address the issue of putting a lot of work in one process; 
that requires a more radical change.

Regards,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][CI] reduced capacity, rebuilding hp1 region

2014-05-29 Thread Robert Collins
Hi, the HP1 tripleo test cloud region has been systematically failing
and rather than flogging it along we're going to strip it down and
bring it back up with some of the improvements that have happened over
the last $months, as well as changing the undercloud to deploy via
Ironic and other goodness.

There's plenty to do to help move this along - we'll be spinning up a
list of automation issues and glitches that need fixing here -
https://etherpad.openstack.org/p/tripleo-ci-hp1-rebuild

My goal is to have the entirety of each step automated, so we're not
carrying odd quirks or workarounds.

If you are interested please do jump into #tripleo and chat to myself
or DerekH about how you can help out.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

