Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-17 Thread Dmitry Tantsur
On Tue, 2014-09-16 at 15:42 -0400, Zane Bitter wrote:
 On 16/09/14 15:24, Devananda van der Veen wrote:
  On Tue, Sep 16, 2014 at 11:44 AM, Zane Bitter zbit...@redhat.com wrote:
  On 16/09/14 13:56, Devananda van der Veen wrote:
 
  On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy sha...@redhat.com wrote:
 
  For example, today, I've been looking at the steps required for driving
  autodiscovery:
 
  https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno
 
  Driving this process looks a lot like application orchestration:
 
  1. Take some input (IPMI credentials and MAC addresses)
  2. Maybe build an image and ramdisk(could drop credentials in)
  3. Interact with the Ironic API to register nodes in maintenance mode
  4. Boot the nodes, monitor state, wait for a signal back containing some
   data obtained during discovery (same as WaitConditions or
   SoftwareDeployment resources in Heat..)
  5. Shutdown the nodes and mark them ready for use by nova
 
 
  My apologies if the following sounds snarky -- but I think there are a
  few misconceptions that need to be cleared up about how and when one
  might use Ironic. I also disagree that 1..5 looks like application
  orchestration. Step 4 is a workflow, which I'll go into in a bit, but
  this doesn't look at all like describing or launching an application
  to me.
 
 
  +1 (Although step 3 does sound to me like something that matches Heat's
  scope.)
 
  I think it's a simplistic use case, and Heat supports a lot more
  complexity than is necessary to enroll nodes with Ironic.
 
 
  Step 1 is just parse a text file.
 
  Step 2 should be a prerequisite to doing -anything- with Ironic. Those
  images need to be built and loaded in Glance, and the image UUID(s)
  need to be set on each Node in Ironic (or on the Nova flavor, if going
  that route) after enrollment. Sure, Heat can express this
  declaratively (ironic.node.driver_info must contain key:deploy_kernel
  with value:), but are you suggesting that Heat build the images,
  or just take the UUIDs as input?
 
  Step 3 is, again, just parse a text file
 
  I'm going to make an assumption here [*], because I think step 4 is
  misleading. You shouldn't boot a node using Ironic -- you do that
  through Nova. And you _dont_ get to specify which node you're booting.
  You ask Nova to provision an _instance_ on a _flavor_ and it picks an
  available node from the pool of nodes that match the request.
 
 
  I think your assumption is incorrect. Steve is well aware that provisioning
  a bare-metal Ironic server is done through the Nova API. What he's
  suggesting here is that the nodes would be booted - not Nova-booted, but
  booted in the sense of having power physically applied - while in
  maintenance mode in order to do autodiscovery of their capabilities,
 
  Except simply applying power doesn't, in itself, accomplish anything
  besides causing the machine to power on. Ironic will only prepare the
  PXE boot environment when initiating a _deploy_.
 
  From what I gather elsewhere in this thread, the autodiscovery stuff is 
 a proposal for the future, not something that exists in Ironic now, and 
 that may be the source of the confusion.
 
 In any case, the etherpad linked at the top of this email was written by 
 someone in the Ironic team and _clearly_ describes PXE booting a 
 discovery image in maintenance mode in order to obtain hardware 
 information about the box.
It was written by me, and it seems to be my fault that I didn't state
more clearly there that this work is not, and probably will not be, merged
into Ironic upstream. Sorry for the confusion.

That said, my experiments proved it quite possible (though not without some
network-related hacks as of now) to follow these steps and collect (aka
discover) the hardware information required for scheduling from a node,
knowing only its IPMI credentials.
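
For illustration, the ramdisk-side half of that flow amounts to something
like the sketch below: collect the facts needed for scheduling and POST
them back to a collector. The collector URL and field names are
placeholders rather than the actual PoC interface; only Python 2 stdlib is
used, so the ramdisk needs nothing extra.

    import json
    import subprocess
    import urllib2

    COLLECTOR_URL = 'http://192.0.2.1:8080/v1/discovery'  # hypothetical endpoint

    def _sh(cmd):
        # Run a shell pipeline and return its stripped stdout.
        return subprocess.check_output(cmd, shell=True).strip()

    facts = {
        'cpus': int(_sh('nproc')),
        'memory_mb': int(_sh("awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo")),
        'local_gb': int(_sh("awk '/ sda$/ {print int($3/1024/1024)}' /proc/partitions")),
        'macs': _sh('cat /sys/class/net/*/address').split(),
    }

    req = urllib2.Request(COLLECTOR_URL, json.dumps(facts),
                          {'Content-Type': 'application/json'})
    urllib2.urlopen(req)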

 
 cheers,
 Zane.
 
  which
  is presumably hard to do automatically when they're turned off.
 
  Vendors often have ways to do this while the power is turned off, eg.
  via the OOB management interface.
 
  He's also
  suggesting that Heat could drive this process, which I happen to disagree
  with because it is a workflow not an end state.
 
  +1
 
  However the main takeaway
  here is that you guys are talking completely past one another, and have 
  been
  for some time.
 
 
  Perhaps more detail in the expected interactions with Ironic would be
  helpful and avoid me making (perhaps incorrect) assumptions.
 
  -D
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread Richard Jones
You're quite probably correct - going through the OWASP threat list in more
detail is on my TODO. That was just off the top of my head as something
that has me concerned but I've not investigated it thoroughly.

On 17 September 2014 14:15, Adam Young ayo...@redhat.com wrote:

  On 09/16/2014 08:56 PM, Richard Jones wrote:

 CORS for all of OpenStack is possible once the oslo middleware lands*, but
 as you note it's only one of many elements to be considered when exposing
 the APIs to browsers. There is no current support for CSRF protection in
 the OpenStack APIs, for example. I believe that sort of functionality
 belongs in an intermediary between the APIs and the browser.


 Typically, CSRF protection is done by requiring a custom header.  Why wouldn't
 the X-Auth-Token header qualify?  It's not a cookie, so it isn't automatically
 added.  So, CORS support would be necessary for Horizon to send the token on a
 request to Nova, but no other site would be able to do that.  No?




  Richard

  * https://review.openstack.org/#/c/120964/

 On 17 September 2014 08:59, Gabriel Hurley gabriel.hur...@nebula.com
 wrote:

 This is generally the right plan. The hard parts are in getting people to
 deploy it correctly and securely, and handling fallback cases for lack of
 browser support, etc.

 What we really don't want to do is to encourage people to set
 Access-Control-Allow-Origin: * type headers or other such nonsense simply
 because it's too much work to do things correctly. This becomes especially
 challenging for federated clouds.

 I would encourage looking at the problem of adding all the necessary
 headers for CORS as an OpenStack-wide issue. Once you figure it out for
 Keystone, the next logical step is to want to make calls from the browser
 directly to all the other service endpoints, and each service is going to
 have to respond with the correct CORS headers
 (Access-Control-Allow-Methods and Access-Control-Allow-Headers are
 particularly fun ones for projects like Glance or Swift). A common
 middleware and means of configuring it will go a long way to easing user
 pain and spurring adoption of the new mechanisms. It will help the Horizon
 team substantially in the long run to do it consistently and predictably
 across the stack.
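
 To make the middleware idea concrete, here is a minimal sketch of the kind
 of common CORS middleware being described, assuming a plain WSGI pipeline
 and a configured whitelist of origins; the real change under review is more
 featureful, so treat this purely as an illustration.

    ALLOWED_ORIGINS = ['https://horizon.example.com']  # deployment-specific, never '*'

    class CORSMiddleware(object):
        # Add CORS headers for whitelisted origins around any WSGI app.

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            origin = environ.get('HTTP_ORIGIN')

            def cors_start_response(status, headers, exc_info=None):
                if origin in ALLOWED_ORIGINS:
                    headers.extend([
                        ('Access-Control-Allow-Origin', origin),
                        ('Access-Control-Allow-Methods',
                         'GET, POST, PUT, DELETE, OPTIONS'),
                        ('Access-Control-Allow-Headers',
                         'Content-Type, X-Auth-Token'),
                    ])
                return start_response(status, headers, exc_info)

            # Preflight requests can be answered without hitting the API itself.
            if environ.get('REQUEST_METHOD') == 'OPTIONS' and origin:
                cors_start_response('200 OK', [('Content-Length', '0')])
                return ['']
            return self.app(environ, cors_start_response)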

 As a side-note, once we're in the realm of handling all this sensitive
 data with the browser as a middleman, encouraging people to configure
 things like CSP is probably also a good idea to make sure we're not loading
 malicious scripts or other resources.

 Securing a browser-centric world is a tricky realm... let's make sure we
 get it right. :-)

  - Gabriel

  -Original Message-
  From: Adam Young [mailto:ayo...@redhat.com]
  Sent: Tuesday, September 16, 2014 3:40 PM
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] [Keystone][Horizon] CORS and Federation
 
  Phase one for dealing with Federation can be done with CORS support
  solely for Keystone/Horizon integration:
 
  1.  Horizon Login page creates Javascript to do AJAX call to Keystone
  2.  Keystone generates a token
  3.  Javascript reads token out of response and sends it to Horizon.
 
  This should support Kerberos, X509, and Password auth;  the Keystone team
  is discussing how to advertise mechanisms, let's leave the onus on us to
  solve that one and get back in a timely manner.
 
  For Federation, the handshake is a little more complex, and there might
  be a need for some sort of popup window for the user to log in to their
  home SAML provider.  It's several more AJAX calls, but the end effect
  should be the same:  get a standard Keystone token and hand it to Horizon.
 
  This would mean that Horizon would have to validate tokens the same way
  as any other endpoint.  That should not be too hard, but there is a
  little bit of create a user, get a token, make a call logic that
  currently lives only in keystonemiddleware/auth_token;  It's a solvable
  problem.
 
  This approach will support the straight Javascript approach that Richard
  Jones discussed;  Keystone behind a proxy will work this way without CORS
  support.  If CORS can be sorted out for the other services, we can do
  straight Javascript without the Proxy.  I see it as a phased approach
  with this being the first phase.
 
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [sahara] weekly team meeting Sept 18 1800 UTC

2014-09-17 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140918T18
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140911T18


-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] weekly team meeting Sept 18 1800 UTC

2014-09-17 Thread Sergey Lukjanov
The correct time link is:

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140918T18

On Wednesday, September 17, 2014, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi folks,

 We'll be having the Sahara team meeting as usual in
 #openstack-meeting-alt channel.

 Agenda:
 https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings


 http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140918T18
 http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140911T18


 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Set WIP for stale patches?

2014-09-17 Thread mar...@redhat.com
Hi,

as part of general housekeeping on our reviews, it was discussed at last
week's meeting [1] that we should set workflow -1 for stale reviews
(like gerrit used to do when I were a lad).

The specific criteria discussed was 'items that have a -1 from a core
but no response from author for 14 days'. This topic came up again
during today's meeting and it wasn't clear if the intention was for
cores to start enforcing this? So:

Do we start setting WIP/workflow -1 for those reviews that have a -1
from a core but no response from the author for 14 days?

thanks, marios

[1]
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-09-09-19.04.log.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-17 Thread Steven Hardy
On Tue, Sep 16, 2014 at 02:06:59PM -0700, Devananda van der Veen wrote:
 On Tue, Sep 16, 2014 at 12:42 PM, Zane Bitter zbit...@redhat.com wrote:
  On 16/09/14 15:24, Devananda van der Veen wrote:
 
  On Tue, Sep 16, 2014 at 11:44 AM, Zane Bitter zbit...@redhat.com wrote:
 
  On 16/09/14 13:56, Devananda van der Veen wrote:
 
 
  On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy sha...@redhat.com wrote:
 
 
  For example, today, I've been looking at the steps required for driving
  autodiscovery:
 
  https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno
 
  Driving this process looks a lot like application orchestration:
 
  1. Take some input (IPMI credentials and MAC addresses)
  2. Maybe build an image and ramdisk(could drop credentials in)
  3. Interact with the Ironic API to register nodes in maintenance mode
  4. Boot the nodes, monitor state, wait for a signal back containing
  some
   data obtained during discovery (same as WaitConditions or
   SoftwareDeployment resources in Heat..)
  5. Shutdown the nodes and mark them ready for use by nova
 
 
  My apologies if the following sounds snarky -- but I think there are a
  few misconceptions that need to be cleared up about how and when one
  might use Ironic. I also disagree that 1..5 looks like application
  orchestration. Step 4 is a workflow, which I'll go into in a bit, but
  this doesn't look at all like describing or launching an application
  to me.
 
 
 
  +1 (Although step 3 does sound to me like something that matches Heat's
  scope.)
 
 
  I think it's a simplistic use case, and Heat supports a lot more
  complexity than is necessary to enroll nodes with Ironic.
 
 
  Step 1 is just parse a text file.
 
  Step 2 should be a prerequisite to doing -anything- with Ironic. Those
  images need to be built and loaded in Glance, and the image UUID(s)
  need to be set on each Node in Ironic (or on the Nova flavor, if going
  that route) after enrollment. Sure, Heat can express this
  declaratively (ironic.node.driver_info must contain key:deploy_kernel
  with value:), but are you suggesting that Heat build the images,
  or just take the UUIDs as input?
 
  Step 3 is, again, just parse a text file
 
  I'm going to make an assumption here [*], because I think step 4 is
  misleading. You shouldn't boot a node using Ironic -- you do that
  through Nova. And you _dont_ get to specify which node you're booting.
  You ask Nova to provision an _instance_ on a _flavor_ and it picks an
  available node from the pool of nodes that match the request.
 
 
 
  I think your assumption is incorrect. Steve is well aware that
  provisioning
  a bare-metal Ironic server is done through the Nova API. What he's
  suggesting here is that the nodes would be booted - not Nova-booted, but
  booted in the sense of having power physically applied - while in
  maintenance mode in order to do autodiscovery of their capabilities,
 
 
  Except simply applying power doesn't, in itself, accomplish anything
  besides causing the machine to power on. Ironic will only prepare the
  PXE boot environment when initiating a _deploy_.
 
 
  From what I gather elsewhere in this thread, the autodiscovery stuff is a
  proposal for the future, not something that exists in Ironic now, and that
  may be the source of the confusion.
 
  In any case, the etherpad linked at the top of this email was written by
  someone in the Ironic team and _clearly_ describes PXE booting a discovery
  image in maintenance mode in order to obtain hardware information about the
  box.
 
 
 Huh. I should have looked at that earlier in the discussion. It is
 referring to out-of-tree code whose spec was not approved during Juno.
 
 Apparently, and unfortunately, throughout much of this discussion,
 folks have been referring to potential features Ironic might someday
 have, whereas I have been focused on the features we actually support
 today. That is probably why it seems we are talking past each other.

FWIW I think a big part of the problem has been that you've been focussing
on the fact that my solution doesn't match your preconceived ideas of how
Ironic should interface with the world, while completely ignoring the
use-case, i.e. the actual problem I'm trying to solve.

That is why I'm referring to features Ironic might someday have - because
Ironic currently does not solve my problem, so I'm looking for a workable
way to change that.

When I posted the draft Ironic resources, I did fail to provide detailed
use-case info, so my bad there, but since I've posted the spec I don't
really feel like the discussion has been much more productive - I've tried,
repeatedly, to get you to understand my use-case, and you've tried,
repeatedly, to tell me my implementation is wrong (without providing any
fully-formed alternative; I call this unqualified "your idea sucks", a
common and destructive review anti-pattern IMO).

It wasn't until Jay Faulkner's message earlier in this thread that someone
actually proposed a 

[openstack-dev] [neutron] Creating resources for non-existent tenants

2014-09-17 Thread Elena Ezhova
Hi, all!

I have been looking at the bug
https://bugs.launchpad.net/neutron/+bug/1338885 and it turned out that it
is relevant not only for firewall rules but for all resources that take
a tenant-id for create and update.

I need a piece of advice on a preferable way of solving the problem.

First of all, there may be two situations:

1. Neutron using Keystone

2. Neutron working without it

In the second case there is obviously nothing to be done.

But when Neutron uses Keystone, the tenant-id should be checked against
existing Keystone tenants. I can think of 2 ways of doing this: either by
calling the keystone client directly from Neutron while preparing the
request body [1], or by moving the check to the keystone middleware. In
either case, such a check would be performed during each create or update
operation, preventing an admin from providing non-existent tenants. For now
I think that calling the keystone client from Neutron code is not the best
idea and prefer the second option. I would really appreciate
recommendations about the best way of making the check.
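
As a rough sketch of what option 1 would look like (assuming an admin
token and endpoint are available from configuration; exactly where this
hook would live in the create/update path is the open question):

    from keystoneclient import exceptions as ks_exc
    from keystoneclient.v2_0 import client as ks_client

    def tenant_exists(tenant_id, admin_token, keystone_endpoint):
        # Ask Keystone whether the tenant referenced in the request body exists.
        keystone = ks_client.Client(token=admin_token, endpoint=keystone_endpoint)
        try:
            keystone.tenants.get(tenant_id)
            return True
        except ks_exc.NotFound:
            return False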

This still leaves the situation where an existing tenant is deleted from
Keystone and its resources are left orphaned, but that is being dealt with
by [2].

Thanks,

Elena


[1]
https://github.com/openstack/neutron/blob/master/neutron/api/v2/base.py#L545

[2] https://blueprints.launchpad.net/neutron/+spec/tenant-delete
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-17 Thread Lucas Alvares Gomes
 1. Not everyone will have an enterprise CMDB, so there should be some way
 to input inventory without one (even if it is a text file fed into
 ironicclient). The bulk-loading format to do this is TBD.

 2. A way to generate that inventory in an automated way is desirable for
 some folks, but looks likely to be out-of-scope for Ironic.  Folks are -1
 on using heat to drive this process, so we'll probably end up with some
 scary shell scripts instead, or maybe a mistral workflow in future.
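
(For what item 1 could look like in practice, a rough sketch of a
text-file bulk enrolment follows; the CSV columns, endpoint and the
pxe_ipmitool driver choice are illustrative, and the bulk-loading format
itself is, as noted, still TBD.)

    import csv
    import json
    import requests

    IRONIC = 'http://ironic.example.com:6385'        # illustrative endpoint
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN', 'Content-Type': 'application/json'}

    with open('inventory.csv') as f:                 # ipmi_address,user,password,mac
        for ipmi_address, user, password, mac in csv.reader(f):
            # Register the node with just enough driver_info to reach its BMC.
            node = requests.post(IRONIC + '/v1/nodes', headers=HEADERS,
                                 data=json.dumps({
                                     'driver': 'pxe_ipmitool',
                                     'driver_info': {'ipmi_address': ipmi_address,
                                                     'ipmi_username': user,
                                                     'ipmi_password': password},
                                 })).json()
            # Register the MAC so the node can be matched when it PXE boots.
            requests.post(IRONIC + '/v1/ports', headers=HEADERS,
                          data=json.dumps({'node_uuid': node['uuid'],
                                           'address': mac}))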

FWIW, I'm not against having an automatic way to discover the
hardware properties in Ironic.

Let me try to be clear: if Ironic needs to know the disk size, amount of
memory, number of CPUs and CPU arch to be able to deploy a machine, and
BMCs like DRAC, iLO, MM, etc. provide an OOB endpoint which Ironic can
use to get such information, then I don't see why we should not consume
that in Ironic. Note that I don't think Ironic should store *all* the
information about the hardware, only the information it's going to use
to be able to *deploy* that machine.

So the reasons I think we should do this are:

1) First, because Ironic is an abstraction layer for the hardware, so
if it's a feature that the hardware provides I don't see why we should
not abstract it and create an API for it. We already have the
vendor_passthru endpoint where things like that can be used by the
vendor to expose their new shiny hardware capabilities, and if other
drivers start implementing the same thing in their vendor_passthru we
can then go and promote that feature to a common API. So we already
even have a strategy for that.


2) To be less error prone - this is about quality. Why would I rely on
a human to input all this information into Ironic if I can go and
interrogate the hardware directly and be sure that this is the real
amount of resources I have there? @Jim even said in this thread that
dealing with incorrect data from the vendor, DC team, etc. is the hard
part of registering these things. So I don't want to rely on them; if I
have a means to talk to the hardware directly and get this information,
why not do it?
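
A very rough sketch of the shape I have in mind: ask a driver's
vendor_passthru to interrogate the BMC out-of-band, then record only the
scheduling-relevant properties on the node. The 'get_inventory' method
name is hypothetical; only the vendor_passthru and node PATCH endpoints
are real, and what the vendor method returns is entirely up to the driver.

    import json
    import requests

    IRONIC = 'http://ironic.example.com:6385'          # illustrative endpoint
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN', 'Content-Type': 'application/json'}

    def introspect(node_uuid):
        # Hypothetical vendor-specific call exposed through vendor_passthru.
        facts = requests.post(
            '%s/v1/nodes/%s/vendor_passthru?method=get_inventory'
            % (IRONIC, node_uuid),
            headers=HEADERS, data=json.dumps({})).json()

        # Keep only what the scheduler needs, as JSON-patch updates on the node.
        patch = [{'op': 'add', 'path': '/properties/%s' % key, 'value': facts[key]}
                 for key in ('cpus', 'memory_mb', 'local_gb', 'cpu_arch')]
        requests.patch('%s/v1/nodes/%s' % (IRONIC, node_uuid),
                       headers=HEADERS, data=json.dumps(patch))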


 3. Vendor-specific optimization of nodes for particular roles will be
 handled via Ironic drivers, which expose capabilities which can be selected
 via nova flavors. (is there a BP for this?)

 4. Stuff like RAID configuration will be handled via in-band config
 management tools, nobody has offered any solution for using management
 interfaces to do this, and drac-raid-mgmt is unlikely to land in Ironic
 (where would such an interface be appropriate then?)

IMO, this falls under the same argument: Ironic is an abstraction layer
for hardware, so if the BMC supports it and configuring RAID is
something that makes sense to do prior to deploying a node, I don't see
why we should not abstract it. Again, we have a strategy for that: let
vendors expose it first in their vendor_passthru interface, and if more
drivers see it as a nice feature to have and implement it in their
vendor_passthru as well, we can go and promote it to a common API
interface.


 5. Nobody has offered any solution for management and convergence of BIOS
 and firmware levels (would this be part of the Ironic driver mentioned in
 (3), or are we punting the entire problem to in-band provision-time tooling?)

 If anyone can help by providing existing BP's related to the above (which I
 can follow and/or contribute to) that would be great - I'm happy to drop
 the whole Heat resource thing, but only if there's a clear path to solving
 the problems in some other/better way.

 Thanks,

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-17 Thread Dmitry Tantsur
On Wed, 2014-09-17 at 10:36 +0100, Steven Hardy wrote:
 On Tue, Sep 16, 2014 at 02:06:59PM -0700, Devananda van der Veen wrote:
  On Tue, Sep 16, 2014 at 12:42 PM, Zane Bitter zbit...@redhat.com wrote:
   On 16/09/14 15:24, Devananda van der Veen wrote:
  
   On Tue, Sep 16, 2014 at 11:44 AM, Zane Bitter zbit...@redhat.com wrote:
  
   On 16/09/14 13:56, Devananda van der Veen wrote:
  
  
   On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy sha...@redhat.com 
   wrote:
  
  
   For example, today, I've been looking at the steps required for 
   driving
   autodiscovery:
  
   https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno
  
   Driving this process looks a lot like application orchestration:
  
   1. Take some input (IPMI credentials and MAC addresses)
   2. Maybe build an image and ramdisk(could drop credentials in)
   3. Interact with the Ironic API to register nodes in maintenance mode
   4. Boot the nodes, monitor state, wait for a signal back containing
   some
data obtained during discovery (same as WaitConditions or
SoftwareDeployment resources in Heat..)
   5. Shutdown the nodes and mark them ready for use by nova
  
  
   My apologies if the following sounds snarky -- but I think there are a
   few misconceptions that need to be cleared up about how and when one
   might use Ironic. I also disagree that 1..5 looks like application
   orchestration. Step 4 is a workflow, which I'll go into in a bit, but
   this doesn't look at all like describing or launching an application
   to me.
  
  
  
   +1 (Although step 3 does sound to me like something that matches Heat's
   scope.)
  
  
   I think it's a simplistic use case, and Heat supports a lot more
   complexity than is necessary to enroll nodes with Ironic.
  
  
   Step 1 is just parse a text file.
  
   Step 2 should be a prerequisite to doing -anything- with Ironic. Those
   images need to be built and loaded in Glance, and the image UUID(s)
   need to be set on each Node in Ironic (or on the Nova flavor, if going
   that route) after enrollment. Sure, Heat can express this
   declaratively (ironic.node.driver_info must contain key:deploy_kernel
   with value:), but are you suggesting that Heat build the images,
   or just take the UUIDs as input?
  
   Step 3 is, again, just parse a text file
  
   I'm going to make an assumption here [*], because I think step 4 is
   misleading. You shouldn't boot a node using Ironic -- you do that
   through Nova. And you _dont_ get to specify which node you're booting.
   You ask Nova to provision an _instance_ on a _flavor_ and it picks an
   available node from the pool of nodes that match the request.
  
  
  
   I think your assumption is incorrect. Steve is well aware that
   provisioning
   a bare-metal Ironic server is done through the Nova API. What he's
   suggesting here is that the nodes would be booted - not Nova-booted, but
   booted in the sense of having power physically applied - while in
   maintenance mode in order to do autodiscovery of their capabilities,
  
  
   Except simply applying power doesn't, in itself, accomplish anything
   besides causing the machine to power on. Ironic will only prepare the
   PXE boot environment when initiating a _deploy_.
  
  
   From what I gather elsewhere in this thread, the autodiscovery stuff is a
   proposal for the future, not something that exists in Ironic now, and that
   may be the source of the confusion.
  
   In any case, the etherpad linked at the top of this email was written by
   someone in the Ironic team and _clearly_ describes PXE booting a 
   discovery
   image in maintenance mode in order to obtain hardware information about 
   the
   box.
  
  
  Huh. I should have looked at that earlier in the discussion. It is
  referring to out-of-tree code whose spec was not approved during Juno.
  
  Apparently, and unfortunately, throughout much of this discussion,
  folks have been referring to potential features Ironic might someday
  have, whereas I have been focused on the features we actually support
  today. That is probably why it seems we are talking past each other.
 
 FWIW I think a big part of the problem has been that you've been focussing
 on the fact that my solution doesn't match your preconceived ideas of how
 Ironic should interface with the world, while completely ignoring the
 use-case, e.g the actual problem I'm trying to solve.
 
 That is why I'm referring to features Ironic might someday have - because
 Ironic currently does not solve my problem, so I'm looking for a workable
 way to change that.
 
 When I posted the draft Ironic resources, I did fail to provide detailed
 use-case info, so my bad there, but since I've posted the spec I don't
 really feel like the discussion has been much more productive - I've tried,
 repeatedly, to get you to understand my use-case, and you've tried,
 repeatedly, to tell me my implementation is wrong (without providing any
 fully-formed alternative, I 

Re: [openstack-dev] [Cinder][Nova][Oslo] Moving Brick out of Cinder

2014-09-17 Thread Ivan Kolodyazhny
Thanks for the feedback!

I need to take a closer look at the Cinder Agent specs.

Walter,
I hope I can help you and Duncan with making this happen.

Regards,
Ivan Kolodyazhny

On Wed, Sep 17, 2014 at 2:32 AM, Mathieu Gagné mga...@iweb.com wrote:

 On 2014-09-16 7:03 PM, Walter A. Boring IV wrote:

  The upside to brick not making it in Nova is that it has given us some
 time to rethink things a bit.  What I would actually
 like to see happen now is to create a new cinder/storage agent instead
 of just a brick library.   The agent would run on every cinder node,
 nova node and potentially ironic nodes to do LUN discovery. Duncan and I
 are looking into this for the Kilo release.


 Thanks for reviving this idea [1] [2] [3]. I wish to say that I like it
 because months ago, we found use cases for it and wished cinder-agent was a
 thing.

 One use case we have is the need to rescan an iSCSI target used by a Nova
 instance after an in-use volume has been extended in order to reflect its
 new size at the hypervisor level.

 During the implementation, we quickly saw the code duplication hell that
 exists between Nova and Cinder, both implementing iSCSI management. Due
 to the amount of work required to introduce a cinder-agent and due to my
 inexperience with OpenStack back then, we went down another path instead.

 We addressed our needs by introducing a Cinder-Nova interaction through
 custom code in Cinder and an API extension in Nova: Cinder triggers the
 rescan through the Nova API (instead of cinder-agent).

 With cinder-agent, Cinder would be able to remotely trigger a rescan of
 the iSCSI target without relying on a custom API extension in Nova. I feel
 this implementation would be much more resilient and reduce code
 duplication in the long term.

 I'm sure there are more use cases. This is mine.
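
 For concreteness, the rescan in our use case boils down to something an
 agent could run locally against the volume's connection info (target IQN
 and portal), e.g.:

    import subprocess

    def rescan_iscsi_target(iqn, portal):
        # Ask open-iscsi to re-read the target's LUN sizes so the host (and the
        # hypervisor on top of it) sees the extended volume's new size.
        subprocess.check_call(['iscsiadm', '-m', 'node',
                               '-T', iqn, '-p', portal, '--rescan'])

    # e.g. rescan_iscsi_target('iqn.2010-10.org.openstack:volume-1234',
    #                          '192.0.2.10:3260')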

 [1] https://blueprints.launchpad.net/cinder/+spec/cinder-agent
 [2] https://lists.launchpad.net/openstack/msg19825.html
 [3] https://etherpad.openstack.org/p/cinder-agent

 --
 Mathieu


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Feature freeze + Juno-3 milestone candidates available

2014-09-17 Thread Akihiro Motoki
A bit late reply.

As one of the translators, I can say wrong strings are worse.
String Freeze is a soft freeze and it just declares that we don't make big
changes, mainly due to feature additions. It does not necessarily prevent
fixes, including string changes.

In the Horizon case, after translations started, quite a few bugs in the
*strings* themselves were found and a number of string fixes are on-going.
As far as I remember from Horizon reviews, over 100 strings were added by
FFE blueprints and over 50 strings have been fixed or changed by bug
fixes. It is the same as what happened in past releases.
What I would like to mention is that most of such changes are done early,
in the RC1 phase.

Note that a small number of string changes to error messages would not be
a problem if the context of the strings is clear (from a translator's
perspective).
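
To make the translator's perspective concrete: the English literal in the
code is the msgid looked up in the .po catalogs, so changing it after the
catalogs are translated makes the lookup miss and the message falls back
to English until it is re-translated. (OpenStack projects wrap this via
oslo.i18n; plain gettext is shown below only for brevity.)

    import gettext

    _ = gettext.translation('myproject', fallback=True).gettext

    def quota_error(requested, available):
        # The literal below is the msgid; any change to it invalidates the
        # existing translations of this message.
        return _("Quota exceeded: requested %(req)d, only %(avail)d available") % {
            'req': requested, 'avail': available}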

Thanks,
Akihiro


On Sat, Sep 6, 2014 at 11:36 PM, Thierry Carrez thie...@openstack.org wrote:
 In that precise case, given how early it is in the freeze, I think
 giving a quick heads-up to the -i18n team/list should be enough :) Also
 /adding/ a string is not as disruptive to their work as modifying a
 potentially-already-translated one.

 Joe Cropper wrote:
 +1 to what Jay said.

 I’m not sure whether the string freeze applies to bugs, but the defect
 that Matt mentioned (for which I authored the fix) adds a string, albeit
 to fix a bug.  Hoping it’s more desirable to have an untranslated
 correct message than a translated incorrect message.  :-)

 - Joe
 On Sep 5, 2014, at 3:41 PM, Jay Bryant jsbry...@electronicjungle.net
 mailto:jsbry...@electronicjungle.net wrote:

 Matt,

 I don't think that is the right solution.

 If the string changes I think the only problem is it won't be
 translated if it is thrown.   That is better than breaking the coding
 standard imho.

 Jay

 On Sep 5, 2014 3:30 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
 mailto:mrie...@linux.vnet.ibm.com wrote:



 On 9/5/2014 5:10 AM, Thierry Carrez wrote:

 Hi everyone,

 We just hit feature freeze[1], so please do not approve changes that add
 features or new configuration options unless those have been granted a
 feature freeze exception.

 This is also string freeze[2], so you should avoid changing translatable
 strings. If you have to modify a translatable string, you should give a
 heads-up to the I18N team.

 Finally, this is also DepFreeze[3], so you should avoid adding new
 dependencies (bumping oslo or openstack client libraries is OK until
 RC1). If you have a new dependency to add, raise a thread on
 openstack-dev about it.

 The juno-3 development milestone was tagged, it contains more than 135
 features and 760 bugfixes added since the juno-2 milestone 6 weeks ago
 (not even counting the Oslo libraries in the mix). You can find the full
 list of new features and fixed bugs, as well as tarball downloads, at:

 https://launchpad.net/keystone/juno/juno-3
 https://launchpad.net/glance/juno/juno-3
 https://launchpad.net/nova/juno/juno-3
 https://launchpad.net/horizon/juno/juno-3
 https://launchpad.net/neutron/juno/juno-3
 https://launchpad.net/cinder/juno/juno-3
 https://launchpad.net/ceilometer/juno/juno-3
 https://launchpad.net/heat/juno/juno-3
 https://launchpad.net/trove/juno/juno-3
 https://launchpad.net/sahara/juno/juno-3

 Many thanks to all the PTLs and release management liaisons who made us
 reach this important milestone in the Juno development cycle. Thanks in
 particular to John Garbutt, who keeps on doing an amazing job at the
 impossible task of keeping the Nova ship straight in troubled waters
 while we head toward the Juno release port.

 Regards,

 [1] https://wiki.openstack.org/wiki/FeatureFreeze
 [2] https://wiki.openstack.org/wiki/StringFreeze
 [3] https://wiki.openstack.org/wiki/DepFreeze


 I should probably know this, but at least I'm asking first. :)

 Here is an example of a new translatable 

[openstack-dev] [heat] Convergence - persistence desired and observed state

2014-09-17 Thread Gurjar, Unmesh
Hi All,

The convergence blueprint (https://review.openstack.org/#/c/95907/) introduces 
two new database tables (resource_observed and resource_properties_observed) 
for storing the observed state of a resource (currently under review: 
https://review.openstack.org/#/c/109012/).

However, it can be simplified by storing the desired and observed state of a 
resource in the resource table itself (two columns in the form of blobs 
storing JSON). Please let me know your concerns or suggestions about this 
approach.
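
For clarity, a sketch of the simplification being proposed, assuming 
Heat's SQLAlchemy models (the column names and types here are illustrative 
only):

    import sqlalchemy as sa

    resource = sa.Table(
        'resource', sa.MetaData(),
        sa.Column('id', sa.Integer, primary_key=True),
        # ... existing columns ...
        sa.Column('desired_state', sa.Text),    # JSON-encoded desired properties
        sa.Column('observed_state', sa.Text),   # JSON-encoded observed properties
    )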

Thanks,
Unmesh G.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] Tomorrow IRC meeting

2014-09-17 Thread Ilya Sviridov
Hello stackers,

MagnetoDB team is having IRC meeting tomorrow 13:00 UTC

The agenda can be found here[1]

Feel free to join and add items to agenda

[1] https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda

Have a nice day,
Ilya Sviridov
isviridov @ FreeNode
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][nova] VM restarting on host failure in convergence

2014-09-17 Thread Jastrzebski, Michal
All,

Currently OpenStack does not have a built-in HA mechanism for tenant
instances which could restore virtual machines in case of a host
failure. OpenStack assumes every app is designed for failure and can
handle instance failure and will self-remediate, but that is rarely
the case for the very large Enterprise application ecosystem.
Many existing enterprise applications are stateful, and assume that
the physical infrastructure is always on.

Even the OpenStack controller services themselves do not gracefully
handle failure.

When these applications were virtualized, they were virtualized on
platforms that enabled very high SLAs for each virtual machine,
allowing the application to not be rewritten as the IT team moved them
from physical to virtual. Now while these apps cannot benefit from
methods like automatic scaleout, the application owners will greatly
benefit from the self-service capabilities they will receive as they
utilize the OpenStack control plane.

I'd like to suggest to expand heat convergence mechanism to enable
self-remediation of virtual machines and other heat resources.

convergence specs: https://review.openstack.org/#/c/95907/

Basic flow would look like this:

1. Nova detects host failure and posts a notification
Nova's service_group API implements a host health monitor. We will
use it as the notification source when a host goes down. Afaik there
are some issues with that, and we might need to fix them. We need a
host-health notification source with low latency and good
reliability (when we get a host-down notification, we will be 100%
sure that it's actually down).
2. Nova sends notifications about affected resources
Nova generates a list of affected resources (VMs for example) and
notifies that they are down.
3. Convergence listens for resource-health notifications
It schedules a rebuild of the affected resources, for example VMs on
a given host.
4. We introduce different, configurable methods for resource rescue
The client might want to cover different resources with different
levels of SLA. For example, an http edge server may be fault tolerant
and all we want is to simply recreate it on a different node and add
it to the LBaaS pool to regain quorum, while a DB server has to be
evacuated.
5. We call nova evacuate if the server is configured to use it
By evacuate I mean nova evacuate --on-shared-storage, so
in fact we'll boot up the same VM (from its existing disk), keeping
addresses, data and so on. This will allow pet-servers to minimize
downtime caused by host failure.
We might stumble upon the fencing problem in this case. Nova already
has some form of safeguard implemented (it deletes evacuated
instances when the host comes back up). We might want to add a more
reliable form of fencing (storage locking?) to nova in the future.
6. Heat makes sure that all the needed configuration is applied
Volumes attached, processes running and so on.

In short, what we'll need from nova is a 100% reliable
host-health monitor and an equally reliable rebuild/evacuate mechanism
with fencing and scheduling. In heat we need a scalable and reliable
event listener and an engine to decide which action to perform in a
given situation.
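
As an illustration of the primitives involved today (steps 1 and 5 done by
polling with python-novaclient rather than by notifications, and without
any fencing), a rough sketch only:

    from novaclient.v1_1 import client as nova_client

    nova = nova_client.Client('admin', 'secret', 'admin',
                              'http://keystone.example.com:5000/v2.0')  # illustrative

    for svc in nova.services.list(binary='nova-compute'):
        if svc.state != 'down':
            continue
        # Every instance on the dead host is a candidate for rescue.
        for server in nova.servers.list(search_opts={'host': svc.host,
                                                     'all_tenants': 1}):
            target_host = 'compute-spare-01'  # picking a target is itself scheduling
            # nova evacuate --on-shared-storage: reboot the same instance from its
            # existing disk on another host, keeping addresses and data.
            nova.servers.evacuate(server, target_host, on_shared_storage=True)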

Regards,
Michał inc0 Jastrzębski
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Convergence - persistence desired and observed state

2014-09-17 Thread Qiming Teng
On Wed, Sep 17, 2014 at 12:27:34PM +, Gurjar, Unmesh wrote:
 Hi All,
 
 The convergence blueprint (https://review.openstack.org/#/c/95907/) 
 introduces two new database tables (resource_observed and 
 resource_properties_observed ) for storing the observed state of a resource 
 (currently under review: https://review.openstack.org/#/c/109012/).
 
 However, it can be simplified by storing the desired and observed state of a 
 resource in the resource table itself (two columns in the form of a blob 
 storing a JSON). Please let me know your concerns or suggestions about this 
 approach.
 
 Thanks,
 Unmesh G.

It doesn't sound like a good idea to me unless we have some plans to
handle potential concurrency and compatibility issues.

Regards,
Qiming


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][TC][Zaqar] Another graduation attempt, new lessons learned

2014-09-17 Thread Flavio Percoco
Greetings,

As probably many of you already know, Zaqar (formerly known as Marconi)
has recently been evaluated for integration. This is the second time
this project (and team) has gone through this process and just like last
time, it wasn't as smooth as we all would have liked it to be.

I thought about sending this email - regardless of what the result is - to
give a summary of what the experience has been like from the project
side. Some things were quite frustrating and I think they could be
revisited and improved, hence this email and ideas as to how I think we
could make them better.

## Misunderstanding of the project goals:

For both graduation attempts, the goals of the project were not
clear. It felt like the communication between the TC and the PTL was
insufficient to convey enough data to make an informed decision.

I think we need to work on a better plan to follow up with incubated
projects. I think these projects should have a schedule and specific
incubated milestones in addition to the integrated release milestones.
For example, it'd be good to have at least 3 TC meetings where the
project shows its progress, the goals that have been achieved and where
it is standing on the integration requirements.

These meetings should be used to address concerns right away. Based on
Zaqar's experience, it's clear that graduating is more than just meeting
the requirements listed here[0]. The requirements may change and other
project-specific concerns may also be raised. The important thing here,
though, is to be all on the same page of what's needed.

I suggested after the Juno summit that we should have a TC representative
for each incubated project[1]. I still think that's a good idea and we
should probably evaluate a way to make that, or something like that,
happen. We tried to put it into practice during Juno - Devananda
volunteered to be Zaqar's representative. Thanks for doing this - but it
didn't work out as we expected. It would probably be a better idea,
given the fact that we're all overloaded with things to do, to have a
sub-team of 2 or 3 TC members assigned to a project. These TC
representatives could lead incubated projects through the process and
work as a bridge between the TC and the project.

Would a plan like the one mentioned above scale for the current TC and
the number of incubated projects?

[0]
https://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements.rst#n79

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-May/035341.html

## Have a better structured meeting

One of the hard things about attending a TC meeting as a representative for
a project is that you get 13 people asking several different questions
at the same time, which is impossible to keep up with. I think, for
future integration/incubation/whatever reviews/meetings, there should be
a better structure. One of the things I'd recommend is to move all the
*long* technical discussions to the mailing list and avoid having them
during the graduation meeting. IRC discussions are great but I'd
probably advise having them in the project channel or during the project
meeting time and definitely before the graduation meeting.

What makes this `all-against-one` thing harder are the parallel
discussions that normally happen during these meetings. We should really
work hard on avoiding these kinds of parallel discussions because they
distract attendees and make the real discussion harder and more frustrating.

I don't have a good solution for this because everyone's questions are
important. I wonder if there's a good way to give folks a voice during the
meeting. Also, would it be useful to work out a list of *must discuss*
questions before the meeting?

I think the graduation review meeting should be just about that, doing a
final review. Getting to the graduation meeting with open questions
means the project wasn't followed closely enough throughout the
incubation period. Regardless of this, I still think project's sync
meetings with the TC would be more fruitful if we had a better structure
that would allow everyone to clear out existing doubts.

## Keep feedback constructive

This point here is more a heads up for the whole community and not
specific to the graduation process.

We all agree that the more constructive we are, the easier it'll be to
find the right solution. The more constructive we are, the less painful
the process will be and, most importantly, the less burned out we'll all
be at the end of the journey.

We are all humans and we get caught up in discussions and it's even
harder when we're defending our own opinions. This sometimes leads us to
nonconstructive communication, which leads to frustration and no
solution at all.

There were a couple of things that I think affected the communication
during this process, especially during the meetings so I wrote a list of
what I think are good things to keep in mind while having these kind of
discussions:

- Prefer questions that have specific answers 

Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread David Chadwick
Hi Adam

Kristy has already added support to Horizon for federated login to
Keystone. She will send you details of how she did this.

One issue that arose was this:
in order to give the user the list of IDPs/protocols that are trusted,
the call to Keystone needs to be authenticated. But the user is not yet
authenticated. So Horizon has to have its own credentials for logging
into Keystone so that it can retrieve the list of IdPs for the user.
This works, but it is not ideal.

The situation is far worse for the Keystone command line client. The
user is not logged in and the Keystone client does not have its own
account on Keystone, so it cannot retrieve the list of IdPs for the
user. The only way that Kristy could solve this, was to remove the
requirement for authentication to the API that retrieves the list of
IdPs. But this is not a standard solution as it requires modifying the
core Keystone code.

We need a fix to address this issue. My suggestion would be to make the
API for retrieving the list of trusted IDPs publicly accessible, so that
no credentials are needed for this.
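
For reference, the call in question is the OS-FEDERATION identity provider
listing; today it requires a token, which is exactly the chicken-and-egg
problem described above (the host below is illustrative, the path is the
standard OS-FEDERATION one):

    import requests

    KEYSTONE = 'https://keystone.example.com:5000'

    resp = requests.get(KEYSTONE + '/v3/OS-FEDERATION/identity_providers',
                        headers={'X-Auth-Token': 'SERVICE_TOKEN'})  # would go away
    idps = resp.json()['identity_providers']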

regards

David


On 16/09/2014 23:39, Adam Young wrote:
 Phase one for dealing with Federation can be done with CORS support
 solely for Keystone/Horizon  integration:
 
 1.  Horizon Login page creates Javascript to do AJAX call to Keystone
 2.  Keystone generates a token
 3.  Javascript reads token out of response and sends it to Horizon.
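
 (For reference, the same exchange outside a browser, shown with
 Python/requests just to make the HTTP flow explicit; the v3 password
 payload and the X-Subject-Token response header are standard Keystone v3
 behaviour, the endpoint is illustrative.)

    import json
    import requests

    KEYSTONE = 'https://keystone.example.com:5000'

    body = {'auth': {'identity': {'methods': ['password'],
                                  'password': {'user': {'name': 'demo',
                                                        'domain': {'id': 'default'},
                                                        'password': 'secret'}}}}}
    resp = requests.post(KEYSTONE + '/v3/auth/tokens',
                         data=json.dumps(body),
                         headers={'Content-Type': 'application/json'})
    token = resp.headers['X-Subject-Token']   # this is what gets handed to Horizon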
 
 This should support Kerberos, X509, and Password auth;  the Keystone
 team is discussing how to advertise mechanisms, lets leave the onus on
 us to solve that one and get back in a timely manner.
 
 For Federation, the handshake is a little more complex, and there might
 be a need for some sort of popup window for the user to log in to their
 home SAML provider.  It's several more AJAX calls, but the end effect
 should be the same:  get a standard Keystone token and hand it to Horizon.
 
 This would mean that Horizon would have to validate tokens the same way
 as any other endpoint.  That should not be too hard, but there is a
 little bit of create a user, get a token, make a call logic that
 currently lives only in keystonemiddleware/auth_token;  It's a solvable
 problem.
 
 This approach will support the straight Javascript approach that Richard
 Jones discussed;  Keystone behind a proxy will work this way without
 CORS support.  If CORS  can be sorted out for the other services, we can
 do straight Javascript without the Proxy.  I see it as phased approach
 with this being the first phase.
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Set WIP for stale patches?

2014-09-17 Thread Charles Crouch


- Original Message -
 Hi,
 
 as part of general housekeeping on our reviews, it was discussed at last
 week's meeting [1] that we should set workflow -1 for stale reviews
 (like gerrit used to do when I were a lad).
 
 The specific criteria discussed was 'items that have a -1 from a core
 but no response from author for 14 days'. This topic came up again
 during today's meeting and it wasn't clear if the intention was for
 cores to start enforcing this? So:
 
 Do we start setting WIP/workflow -1 for those reviews that have a -1
 from a core but no response from author for 14 days
 
 thanks, marios
 
 [1]
 http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-09-09-19.04.log.html

So it looks like this has already started..

https://review.openstack.org/#/c/105275/

I think we need to document on the wiki *precisely* the criteria for setting 
WIP/workflow -1. For example, that review above has a Jenkins failure but no
core reviews at all.
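
As a rough sketch of how the criterion could be checked mechanically
against Gerrit's REST API ('age:2w' matches changes not updated for two
weeks, which only approximates "no response from the author for 14 days";
the project name is just an example, and telling core -1s apart from other
-1s would still need a list of the core group's members):

    import json
    import requests

    QUERY = ('project:openstack/tripleo-heat-templates status:open '
             'label:Code-Review<=-1 age:2w')
    resp = requests.get('https://review.openstack.org/changes/',
                        params={'q': QUERY, 'n': 100})
    # Gerrit prefixes JSON responses with a ")]}'" line that must be stripped.
    changes = json.loads(resp.text.split('\n', 1)[1])
    for change in changes:
        print('%s %s' % (change['_number'], change['subject']))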

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread Steve Martinelli
++ to your suggestion David, I think making the list of trusted IdPs
publicly available makes sense.

- Steve

David Chadwick d.w.chadw...@kent.ac.uk wrote on 09/17/2014 09:37:21 AM:

 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org, Kristy Siu k.w.s@kent.ac.uk
 Date: 09/17/2014 09:42 AM
 Subject: Re: [openstack-dev] [Keystone][Horizon] CORS and Federation
 
 Hi Adam
 
 Kristy has already added support to Horizon for federated login to
 Keystone. She will send you details of how she did this.
 
 One issue that arose was this:
 in order to give the user the list of IDPs/protocols that are trusted,
 the call to Keystone needs to be authenticated. But the user is not yet
 authenticated. So Horizon has to have its own credentials for logging
 into Keystone so that it can retrieve the list of IdPs for the user.
 This works, but it is not ideal.

 The situation is far worse for the Keystone command line client. The
 user is not logged in and the Keystone client does not have its own
 account on Keystone, so it cannot retrieve the list of IdPs for the
 user. The only way that Kristy could solve this, was to remove the
 requirement for authentication to the API that retrieves the list of
 IdPs. But this is not a standard solution as it requires modifying the
 core Keystone code.

 We need a fix to address this issue. My suggestion would be to make the
 API for retrieving the list of trusted IDPs publicly accessible, so that
 no credentials are needed for this.
 
 regards
 
 David
 
 
 On 16/09/2014 23:39, Adam Young wrote:
  Phase one for dealing with Federation can be done with CORS support
  solely for Keystone/Horizon integration:

  1. Horizon Login page creates Javascript to do AJAX call to Keystone
  2. Keystone generates a token
  3. Javascript reads token out of response and sends it to Horizon.

  This should support Kerberos, X509, and Password auth; the Keystone
  team is discussing how to advertise mechanisms, let's leave the onus on
  us to solve that one and get back in a timely manner.

  For Federation, the handshake is a little more complex, and there might
  be a need for some sort of popup window for the user to log in to their
  home SAML provider. It's several more AJAX calls, but the end effect
  should be the same: get a standard Keystone token and hand it to Horizon.

  This would mean that Horizon would have to validate tokens the same way
  as any other endpoint. That should not be too hard, but there is a
  little bit of create a user, get a token, make a call logic that
  currently lives only in keystonemiddleware/auth_token; It's a solvable
  problem.

  This approach will support the straight Javascript approach that Richard
  Jones discussed; Keystone behind a proxy will work this way without
  CORS support. If CORS can be sorted out for the other services, we can
  do straight Javascript without the Proxy. I see it as a phased approach
  with this being the first phase.
  
  
  
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Set WIP for stale patches?

2014-09-17 Thread Derek Higgins
On 17/09/14 14:40, Charles Crouch wrote:
 
 
 - Original Message -
 Hi,

 as part of general housekeeping on our reviews, it was discussed at last
 week's meeting [1] that we should set workflow -1 for stale reviews
 (like gerrit used to do when I were a lad).

 The specific criteria discussed was 'items that have a -1 from a core
 but no response from author for 14 days'. This topic came up again
 during today's meeting and it wasn't clear if the intention was for
 cores to start enforcing this? So:

 Do we start setting WIP/workflow -1 for those reviews that have a -1
 from a core but no response from author for 14 days

 thanks, marios

 [1]
 http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-09-09-19.04.log.html
 
 So it looks like this has already started..
 
 https://review.openstack.org/#/c/105275/
 
 I think we need to document on the wiki *precisely* the criteria for setting 
 WIP/workflow -1.
Yup, we definitely should

 For example that review above has a Jenkins failure but no
 core reviews at all.
FWIW I reckon a jenkins -1 should also start the 2 week clock, but in the
case you've linked the -1 was only 2 days ago, so it should have remained
untouched.

 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Adding hp1 back running tripleo CI

2014-09-17 Thread Derek Higgins
On 15/09/14 22:37, Gregory Haynes wrote:
 This is a total shot in the dark, but a couple of us ran into issues
 with the Ubuntu Trusty kernel (I know I hit it on HP hardware) that was
 causing severely degraded performance for TripleO. This fixed with a
 recently released kernel in Trusty... maybe you could be running into
 this?

thanks Greg,

To try this out, I've redeployed the new testenv image and ran 35
overcloud jobs on it (32 passed); the average time for these was 130
minutes, so unfortunately no major difference.

The old kernel was
3.13.0-33-generic #58-Ubuntu SMP Tue Jul 29 16:45:05 UTC 2014 x86_64
and the new one is
3.13.0-35-generic #62-Ubuntu SMP Fri Aug 15 01:58:42 UTC 2014 x86_64

Derek

 
 -Greg
 
 Also it's worth noting the test I have been using to compare jobs is the
 F20 overcloud job; something has happened recently causing this job to
 run slower than it used to run (possibly up to 30 minutes slower), I'll
 now try to get to the bottom of this. So the times may not end up being
 as high as referenced above but I'm assuming the relative differences
 between the two clouds won't change.

 thoughts?
 Derek

 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday September 18th at 22:00 UTC

2014-09-17 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, September 18th at 22:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

It's also worth noting that a few weeks ago we started having a regular
dedicated Devstack topic during the meetings. So if anyone is interested in
Devstack development please join the meetings to be a part of the discussion.

To help people figure out what time 22:00 UTC is in other timezones tomorrow's
meeting will be at:

18:00 EDT
07:00 JST
07:30 ACST
0:00 CEST
17:00 CDT
15:00 PDT

-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Confused about the future of health maintenance and OS::Heat::HARestarter

2014-09-17 Thread Mike Spreitzer
Background: Health maintenance is very important to users, and I have 
users who want to do it now and into the future.  Today a Heat user can 
write a template that maintains the health of a resource R.  The detection 
of a health problem can be done by anything that hits a webhook.  That 
generality is important; it is not sufficient to determine health by 
looking at what physical and/or virtual resources exist, it is also highly 
desirable to test whether these things are functioning well (e.g., the URL 
based health checking possible through an OS::Neutron::Pool; e.g., the 
user has his own external system that detects health problems).  The 
webhook is provided by an OS::Heat::HARestarter (note the name bug: such a 
thing does not restart anything, rather it deletes and re-creates a given 
resource and all its dependents) that deletes and re-creates R and its 
health detection/recovery wiring.  For a more specific example, consider 
the case of detection using the services of an OS::Neutron::Pool.  Note 
that it is not even necessary for there to be workload traffic through the 
associated OS::Neutron::LoadBalancer; all we are using here is the 
monitoring prescribed by the Pool's OS::Neutron::HealthMonitor.  The 
user's template has, in addition to R, three things: (1) an 
OS::Neutron::PoolMember that puts R in the Pool, (2) an 
OS::Heat::HARestarter that deletes and re-creates R and all its 
dependents, and (3) a Ceilometer alarm that detects when Neutron is 
reporting that the PoolMember is unhealthy and responds by hitting the 
HARestarter's webhook.  Note that all three of those are dependent on R, 
and thus are deleted and re-created when the HARestarter's webhook is hit; 
this avoids most of the noted issues with HARestarter.  R can be a stack 
that includes both a Nova server and an OS::Neutron::Port, to work around 
a Nova bug with implicit ports.
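
To make that wiring concrete, here is a rough, untested sketch of the
three-resource pattern, written as a Python dict in HOT-like form purely
for illustration; the resource property names, the meter name and the
thresholds are assumptions, not a verified template.

    import yaml

    template = {
        'heat_template_version': '2013-05-23',
        'parameters': {'pool_id': {'type': 'string'}},
        'resources': {
            # R: the resource whose health is being maintained
            'R': {
                'type': 'OS::Nova::Server',
                'properties': {'image': 'my-image', 'flavor': 'm1.small'},
            },
            # (1) puts R into the monitored Neutron pool
            'member': {
                'type': 'OS::Neutron::PoolMember',
                'properties': {
                    'pool_id': {'get_param': 'pool_id'},
                    'address': {'get_attr': ['R', 'first_address']},
                    'protocol_port': 80,
                },
            },
            # (2) deletes and re-creates R and all its dependents when hit
            'restarter': {
                'type': 'OS::Heat::HARestarter',
                'properties': {'InstanceId': {'get_resource': 'R'}},
            },
            # (3) alarm that hits the restarter's webhook when Neutron
            # reports the pool member unhealthy (meter name and thresholds
            # are illustrative only)
            'member_down_alarm': {
                'type': 'OS::Ceilometer::Alarm',
                'properties': {
                    'meter_name': 'network.services.lb.member',
                    'statistic': 'avg',
                    'period': 60,
                    'evaluation_periods': 1,
                    'threshold': 1,
                    'comparison_operator': 'lt',
                    'alarm_actions': [{'get_attr': ['restarter', 'AlarmUrl']}],
                },
            },
        },
    }
    print(yaml.safe_dump(template, default_flow_style=False))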

There is a movement afoot to remove HARestarter.  My concern is what can 
users do, now and into the future.  The first and most basic issue is 
this: at every step in the roadmap, it must be possible for users to 
accomplish health maintenance.  The second issue is easing the impact on 
what users write.  It would be pretty bad if the roadmap looks like this: 
before point X, users can only accomplish health maintenance as I outlined 
above, and from point X onward the user has to do something different. 
That is, there should be a transition period during which users can do 
things either the old way or the new way.  It would be even better if we, 
or a cloud provider, could provide an abstraction that will be usable 
throughout the roadmap (once that abstraction becomes available).  For 
example, if there were a resource type OS::Heat::ReliableAutoScalingGroup 
that adds health maintenance functionality (with detection by an 
OS::Neutron::Pool and exposure of per-member webhooks usable by anything) 
to OS::Heat::AutoScalingGroup.  Once some other way to do that maintenance 
becomes available, the implementation of 
OS::Heat::ReliableAutoScalingGroup could switch to that without requiring 
any changes to users' templates.  If at some point in the future 
OS::Heat::ReliableAutoScalingGroup becomes exactly equivalent to 
OS::Heat::AutoScalingGroup then we could deprecate 
OS::Heat::ReliableAutoScalingGroup and, at a later time, remove it.  Even 
better: since health maintenance is not logically connected to scaling 
group membership, make the abstraction be simply OS::Heat::HealthyResource 
(i.e., it is about a single resource regardless of whether it is a member 
of a scaling group) rather than OS::Heat::ReliableAutoScalingGroup. 
Question: would that abstraction (including the higher level detection and 
exposure of re-creation webhook) be implementable (or a no-op) in the 
planned future?

To aid in understanding: while it may be distasteful for a resource like 
HARestarter to tweak its containing stack, the critical question is 
whether it will remain *possible* throughout a transition period.  Is 
there an issue with such hacks being *possible* throughout a reasonable 
transition period?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread Marek Denis



On 17.09.2014 15:45, Steve Martinelli wrote:

++ to your suggestion David, I think making the list of trusted IdPs
publicly available makes sense.


I think this might be useful in the academic/science world, but on the 
other hand most cloud providers from the 'business' world might be very 
reluctant to expose a list of their clients for free.



cheers,

Marek.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread David Chadwick


On 17/09/2014 14:55, Marek Denis wrote:
 
 
 On 17.09.2014 15:45, Steve Martinelli wrote:
 ++ to your suggestion David, I think making the list of trusted IdPs
 publicly available makes sense.
 
 I think this might be useful in an academic/science world but on the
 other hand most cloud providers from the 'business' world might be very
 reluctant to expose list of their clients for free.
 

It is interesting that this latter comment came from the
academic/science world, whereas the supportive one came from the
business world :-)

So maybe there could be a config parameter in keystone to determine
which option is installed?

regards

David

 
 cheers,
 
 Marek.
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread Tim Bell
Has Kristy's patch made it into Juno ? 


Tim

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: 17 September 2014 15:37
 To: openstack-dev@lists.openstack.org; Kristy Siu
 Subject: Re: [openstack-dev] [Keystone][Horizon] CORS and Federation
 
 Hi Adam
 
 Kristy has already added support to Horizon for federated login to Keystone. 
 She
 will send you details of how she did this.
 
 One issue that arose was this:
 in order to give the user the list of IDPs/protocols that are trusted, the 
 call to
 Keystone needs to be authenticated. But the user is not yet authenticated. So
 Horizon has to have its own credentials for logging into Keystone so that it 
 can
 retrieve the list of IdPs for the user.
 This works, but it is not ideal.
 
 The situation is far worse for the Keystone command line client. The user is 
 not
 logged in and the Keystone client does not have its own account on Keystone, 
 so
 it cannot retrieve the list of IdPs for the user. The only way that Kristy 
 could
 solve this, was to remove the requirement for authentication to the API that
 retrieves the list of IdPs. But this is not a standard solution as it requires
 modifying the core Keystone code.
 
 We need a fix to address this issue. My suggestion would be to make the API 
 for
 retrieving the list of trusted IDPs publicly accessible, so that no 
 credentials are
 needed for this.
 
 regards
 
 David
 
 
 On 16/09/2014 23:39, Adam Young wrote:
  Phase one for dealing with Federation can be done with CORS support
  solely for Keystone/Horizon  integration:
 
  1.  Horizon Login page creates Javascript to do AJAX call to Keystone
  2.  Keystone generates a token 3.  Javascript reads token out of
  response and sends it to Horizon.
 
  This should support Kerberos, X509, and Password auth;  the Keystone
  team is discussing how to advertise mechanisms, lets leave the onus on
  us to solve that one and get back in a timely manner.
 
  For Federation, the handshake is a little more complex, and there
  might be a need for some sort of popup window for the user to log in
  to their home SAML provider.  Its several more AJAX calls, but the end
  effect should be the same:  get a standard Keystone token and hand it to
 Horizon.
 
  This would mean that Horizon would have to validate tokens the same
  way as any other endpoint.  That should not be too hard, but there is
  a little bit of create a user, get a token, make a call logic that
  currently lives only in keystonemiddleware/auth_token;  Its a solvable
  problem.
 
  This approach will support the straight Javascript approach that
  Richard Jones discussed;  Keystone behind a proxy will work this way
  without CORS support.  If CORS  can be sorted out for the other
  services, we can do straight Javascript without the Proxy.  I see it
  as phased approach with this being the first phase.
 
 
 
 
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread Adam Young

On 09/17/2014 10:07 AM, David Chadwick wrote:


On 17/09/2014 14:55, Marek Denis wrote:


On 17.09.2014 15:45, Steve Martinelli wrote:

++ to your suggestion David, I think making the list of trusted IdPs
publicly available makes sense.

I think this might be useful in an academic/science world but on the
other hand most cloud providers from the 'business' world might be very
reluctant to expose list of their clients for free.


It is interesting that this latter comment came from the
academic/science world, whereas the supportive one came from the
business world :-)

So maybe there could be a config parameter in keystone to determine
which option is installed?


My thought was that there would be a public list, which is a subset of 
the overall list.


For non-publicized IdPs, the end users would get a URL out of band and 
enter that in when prompted; if they enter an invalid URL, they would 
get a warning message.


It wouldn't hide the fact that a customer was registered with a given 
cloud provider, but wouldn't advertise it, either.
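
For illustration, such a public list would map onto the existing
OS-FEDERATION identity-providers call; today it needs a token, which is
what forces Horizon to hold credentials of its own. A minimal sketch of
the current, authenticated dance (the Keystone URL and the Horizon
service account are placeholder assumptions):

    import requests

    KEYSTONE = 'https://keystone.example.org:5000/v3'

    # Horizon first has to authenticate with credentials of its own ...
    resp = requests.post(
        KEYSTONE + '/auth/tokens',
        json={'auth': {'identity': {'methods': ['password'],
                                    'password': {'user': {
                                        'name': 'horizon',  # assumed service user
                                        'domain': {'id': 'default'},
                                        'password': 'secret'}}}}})
    token = resp.headers['X-Subject-Token']

    # ... purely so that it can list the trusted IdPs on behalf of a user
    # who has not logged in yet; this is the call being proposed as public.
    idps = requests.get(KEYSTONE + '/OS-FEDERATION/identity_providers',
                        headers={'X-Auth-Token': token}).json()
    print([idp['id'] for idp in idps['identity_providers']])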






regards

David


cheers,

Marek.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread Steve Martinelli
I had the same thought :)

Config option or some documentation that outlines changes in policy.json.
Either way, we should support both.
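
As a hedged illustration of the policy.json route (the rule name matches
Keystone's policy target for this API; note that relaxing policy only
opens the call to any authenticated user, so making it truly
credential-free, as David suggests, would need more than this):

    import json

    policy_override = {
        # empty rule == allow any authenticated caller
        "identity:list_identity_providers": "",
    }
    print(json.dumps(policy_override, indent=4))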

David Chadwick d.w.chadw...@kent.ac.uk wrote
on 09/17/2014 10:07:43 AM:

 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org, 
 Date: 09/17/2014 10:10 AM
 Subject: Re: [openstack-dev] [Keystone][Horizon]
CORS and Federation
 
 
 
 On 17/09/2014 14:55, Marek Denis wrote:
  
  
  On 17.09.2014 15:45, Steve Martinelli wrote:
  ++ to your suggestion David, I think making the list of trusted
IdPs
  publicly available makes sense.
  
  I think this might be useful in an academic/science world but
on the
  other hand most cloud providers from the 'business' world might
be very
  reluctant to expose list of their clients for free.
  
 
 It is interesting that this latter comment came from the
 academic/science world, whereas the supportive one came from the
 business world :-)
 
 So maybe there could be a config parameter in keystone to determine
 which option is installed?
 
 regards
 
 David
 
  
  cheers,
  
  Marek.
  



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread David Chadwick
this would work as well, but wouldn't it require two different API calls?

On 17/09/2014 15:17, Adam Young wrote:
 On 09/17/2014 10:07 AM, David Chadwick wrote:

 On 17/09/2014 14:55, Marek Denis wrote:

 On 17.09.2014 15:45, Steve Martinelli wrote:
 ++ to your suggestion David, I think making the list of trusted IdPs
 publicly available makes sense.
 I think this might be useful in an academic/science world but on the
 other hand most cloud providers from the 'business' world might be very
 reluctant to expose list of their clients for free.

 It is interesting that this latter comment came from the
 academic/science world, whereas the supportive one came from the
 business world :-)

 So maybe there could be a config parameter in keystone to determine
 which option is installed?
 
 My thought was that there would be a public list, which is a subset of
 the overall list.
 
 For non-publicized IdPs, the end users would get an URL out of  band and
 enter that in when prompted;  if they enter an invalid URL, they would
 get an warning message.
 
 It wouldn't hide the fact that a customer was registered with a given
 cloud provider, but wouldn't advertise it, either.
 
 
 

 regards

 David

 cheers,

 Marek.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] zeromq work for kilo

2014-09-17 Thread Doug Hellmann
This thread [1] has turned more “future focused”, so I’m moving the 
conversation to the -dev list where we usually have those sorts of discussions.

[1] http://lists.openstack.org/pipermail/openstack/2014-September/009253.html

On Sep 17, 2014, at 7:54 AM, James Page james.p...@ubuntu.com wrote:

 Signed PGP part
 Hi Li
 
 On 17/09/14 11:58, Li Ma wrote:
  The scale potential is very appealing and is something I want to
  test - - hopefully in the next month or so.
 
  Canonical are interested in helping to maintain this driver and
  hopefully we can help with any critical issues prior to the Juno release.
 
 
  That sounds good. I just went through all the bugs reported in the
  community.
 
  The only critical bug which makes ZeroMQ malfunction is
  https://bugs.launchpad.net/oslo.messaging/+bug/1301723 and the
  corresponding review is under way:
  https://review.openstack.org/#/c/84938/
 
 Agreed
 
  Others are tagged to 'zmq' in
  https://bugs.launchpad.net/oslo.messaging
 
 Looking through Doug's suggested list of information and collating
 what I know of from our work last week:
 
 1) documentation for how to configure and use zeromq with
 oslo.messaging (note, not the version in oslo-incubator, the version
 in the messaging library repository)
 
 As part of our sprint, I worked on automating deployment of OpenStack
 + 0MQ using Ubuntu + Juju (service orchestration tool). I can re-jig
 that work into some general documentation on how best to configure
 ZeroMQ with OpenStack - the current documentation is a bit raw and
 does not talk about how to configure the oslo-messaging-zmq-receiver
 at all.
 
 I also plan some packaging updates for Debian/Ubuntu in our next dev
 cycle to make this a little easier to configure and digest - for
 example, right now no systemd unit/upstart configuration/sysv init
 script is provided to manage the zmq-receiver.
 
 I'd also like to document the current design of the ZMQ driver - Doug
  - where is the best place to do this? I thought in the source tree
 somewhere.

The documentation in the oslo.messaging repository [2] would be a good place to 
start for that. If we decide deployers/operators need the information we can 
either refer to it from the guides managed by the documentation team or we can 
move/copy the information. How about if you start a new drivers subdirectory 
there, and add information about zmq. We can have other driver authors provide 
similar detail about their drivers in the same directory.

[2] http://git.openstack.org/cgit/openstack/oslo.messaging/tree/doc/source
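
As a tiny illustration of the kind of usage those driver docs would need
to cover, a minimal sketch of selecting the zmq driver from application
code; the transport URL, topic, server name and the 'ping' endpoint are
placeholders, and the separate oslo-messaging-zmq-receiver proxy still
has to be running on the target host.

    from oslo.config import cfg
    from oslo import messaging

    # Select the ZeroMQ driver via the transport URL scheme.
    transport = messaging.get_transport(cfg.CONF, url='zmq://')

    target = messaging.Target(topic='demo_topic', server='demo-host')
    client = messaging.RPCClient(transport, target)

    # Blocking call to a hypothetical 'ping' endpoint exposed by a server
    # listening on that topic.
    print(client.call({}, 'ping', payload='hello'))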

 
 2) a list of the critical bugs that need to be fixed + any existing
 patches associated with those bugs, so they can be reviewed early in kilo
 
 This blocks operation of nova+neutron environements:
 
 https://bugs.launchpad.net/oslo.messaging/+bug/1301723
   Summary: Message was sent to wrong node with zmq as rpc_backend
   Patch: https://review.openstack.org/84938
 
 Also notifcations are effectively unimplemented which prevents use
 with Ceilometer so I'd also add:
 
 https://bugs.launchpad.net/oslo.messaging/+bug/1368154
   Summary: https://bugs.launchpad.net/oslo.messaging/+bug/
   Patch: https://review.openstack.org/120745

That’s a good list, and shorter than I expected. I have added these bugs to the 
next-kilo milestone.

 
 3) an analysis of what it would take to be able to run functional
 tests for zeromq on our CI infrastructure, not necessarily the full
 tempest run or devstack-gate job, probably functional tests we place
 in the tree with the driver (we will be doing this for all of the
 drivers) + besides writing new functional tests, we need to bring the
 unit tests for zeromq into the oslo.messaging repository
 
 Kapil Thangavelu started work on both functional tests for the ZMQ
 driver last week; the output from the sprint is here:
 
https://github.com/ostack-musketeers/oslo.messaging
 
 it covers the ZMQ driver (including messaging through the zmq-receiver
  proxy) and the associated MatchMakers (local, ring, redis) at
 varying levels of coverage, but I feel it moves things in the right
 direction - Kapil's going to raise a review for this in the next
 couple of days.
 
 Doug - has any structure been agreed within the oslo.messaging tree
 for unit/functional test splits? Right now we have them all in one place.

I think we will want them split up, but we don’t have an agreed existing 
structure for that. I would like to see a test framework of some sort that 
defines the tests in a way that can be used to run the same functional tests for all 
of the drivers as separate jobs (with appropriate hooks for ensuring the needed 
services are running, etc.). Setting that up warrants its own spec, because 
there are going to be quite a few details to work out. We will also need to 
participate in the larger conversation about how to set up those functional 
test jobs to be consistent with the other projects.

 
 Edward Hope-Morley also worked on getting devstack working 

Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread David Chadwick
Hi Tim

I don't believe she has pushed this through the official channel yet as
we were very pushed for time to get something working for our GIANT
CLASSe project. We only did the work in the latter half of August.  I
also don't know if we are too late for Juno or not.

regards

David

On 17/09/2014 15:14, Tim Bell wrote:
 Has Kristy's patch made it into Juno ? 
 
 
 Tim
 
 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: 17 September 2014 15:37
 To: openstack-dev@lists.openstack.org; Kristy Siu
 Subject: Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

 Hi Adam

 Kristy has already added support to Horizon for federated login to Keystone. 
 She
 will send you details of how she did this.

 One issue that arose was this:
 in order to give the user the list of IDPs/protocols that are trusted, the 
 call to
 Keystone needs to be authenticated. But the user is not yet authenticated. So
 Horizon has to have its own credentials for logging into Keystone so that it 
 can
 retrieve the list of IdPs for the user.
 This works, but it is not ideal.

 The situation is far worse for the Keystone command line client. The user is 
 not
 logged in and the Keystone client does not have its own account on Keystone, 
 so
 it cannot retrieve the list of IdPs for the user. The only way that Kristy 
 could
 solve this, was to remove the requirement for authentication to the API that
 retrieves the list of IdPs. But this is not a standard solution as it 
 requires
 modifying the core Keystone code.

 We need a fix to address this issue. My suggestion would be to make the API 
 for
 retrieving the list of trusted IDPs publicly accessible, so that no 
 credentials are
 needed for this.

 regards

 David


 On 16/09/2014 23:39, Adam Young wrote:
 Phase one for dealing with Federation can be done with CORS support
 solely for Keystone/Horizon  integration:

 1.  Horizon Login page creates Javascript to do AJAX call to Keystone
 2.  Keystone generates a token 3.  Javascript reads token out of
 response and sends it to Horizon.

 This should support Kerberos, X509, and Password auth;  the Keystone
 team is discussing how to advertise mechanisms, lets leave the onus on
 us to solve that one and get back in a timely manner.

 For Federation, the handshake is a little more complex, and there
 might be a need for some sort of popup window for the user to log in
 to their home SAML provider.  Its several more AJAX calls, but the end
 effect should be the same:  get a standard Keystone token and hand it to
 Horizon.

 This would mean that Horizon would have to validate tokens the same
 way as any other endpoint.  That should not be too hard, but there is
 a little bit of create a user, get a token, make a call logic that
 currently lives only in keystonemiddleware/auth_token;  Its a solvable
 problem.

 This approach will support the straight Javascript approach that
 Richard Jones discussed;  Keystone behind a proxy will work this way
 without CORS support.  If CORS  can be sorted out for the other
 services, we can do straight Javascript without the Proxy.  I see it
 as phased approach with this being the first phase.






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread Adam Young

On 09/17/2014 10:14 AM, Tim Bell wrote:

Has Kristy's patch made it into Juno ?
I don't see any patches from Kristy in either the merged or pending 
review state for non-keystone projects;


https://review.openstack.org/#/q/owner:%22Kristy+Siu%22,n,z

So I'm guessing it is proof-of-concept code that has not been yet submitted.

How do you propose going from Horizon to Keystone using  SAML creds?




Tim


-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
Sent: 17 September 2014 15:37
To: openstack-dev@lists.openstack.org; Kristy Siu
Subject: Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

Hi Adam

Kristy has already added support to Horizon for federated login to Keystone. She
will send you details of how she did this.

One issue that arose was this:
in order to give the user the list of IDPs/protocols that are trusted, the call 
to
Keystone needs to be authenticated. But the user is not yet authenticated. So
Horizon has to have its own credentials for logging into Keystone so that it can
retrieve the list of IdPs for the user.
This works, but it is not ideal.

The situation is far worse for the Keystone command line client. The user is not
logged in and the Keystone client does not have its own account on Keystone, so
it cannot retrieve the list of IdPs for the user. The only way that Kristy could
solve this, was to remove the requirement for authentication to the API that
retrieves the list of IdPs. But this is not a standard solution as it requires
modifying the core Keystone code.

We need a fix to address this issue. My suggestion would be to make the API for
retrieving the list of trusted IDPs publicly accessible, so that no credentials 
are
needed for this.

regards

David


On 16/09/2014 23:39, Adam Young wrote:

Phase one for dealing with Federation can be done with CORS support
solely for Keystone/Horizon  integration:

1.  Horizon Login page creates Javascript to do AJAX call to Keystone
2.  Keystone generates a token 3.  Javascript reads token out of
response and sends it to Horizon.

This should support Kerberos, X509, and Password auth;  the Keystone
team is discussing how to advertise mechanisms, lets leave the onus on
us to solve that one and get back in a timely manner.

For Federation, the handshake is a little more complex, and there
might be a need for some sort of popup window for the user to log in
to their home SAML provider.  Its several more AJAX calls, but the end
effect should be the same:  get a standard Keystone token and hand it to

Horizon.

This would mean that Horizon would have to validate tokens the same
way as any other endpoint.  That should not be too hard, but there is
a little bit of create a user, get a token, make a call logic that
currently lives only in keystonemiddleware/auth_token;  Its a solvable
problem.

This approach will support the straight Javascript approach that
Richard Jones discussed;  Keystone behind a proxy will work this way
without CORS support.  If CORS  can be sorted out for the other
services, we can do straight Javascript without the Proxy.  I see it
as phased approach with this being the first phase.







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Release Report

2014-09-17 Thread mar...@redhat.com
1. os-apply-config: release: 0.1.21 -- 0.1.22
-- https://pypi.python.org/pypi/os-apply-config/0.1.22
--
http://tarballs.openstack.org/os-apply-config/os-apply-config-0.1.22.tar.gz

2. os-refresh-config:   no changes, 0.1.7

3. os-collect-config:   no changes, 0.1.28

4. os-cloud-config: release: 0.1.9 -- 0.1.10
-- https://pypi.python.org/pypi/os-cloud-config/0.1.10
--
http://tarballs.openstack.org/os-cloud-config/os-cloud-config-0.1.10.tar.gz

5. diskimage-builder:   release: 0.1.30 -- 0.1.31
-- https://pypi.python.org/pypi/diskimage-builder/0.1.31
--
http://tarballs.openstack.org/diskimage-builder/diskimage-builder-0.1.31.tar.gz

6. dib-utils:   no changes, 0.0.6

7. tripleo-heat-templates:  release: 0.7.6 -- 0.7.7
-- https://pypi.python.org/pypi/tripleo-heat-templates/0.7.7
--
http://tarballs.openstack.org/tripleo-heat-templates/tripleo-heat-templates-0.7.7.tar.gz

8: tripleo-image-elements:  release: 0.8.6 -- 0.8.7
-- https://pypi.python.org/pypi/tripleo-image-elements/0.8.7
--
http://tarballs.openstack.org/tripleo-image-elements/tripleo-image-elements-0.8.7.tar.gz

9: tuskar:  release 0.4.11 -- 0.4.12
-- https://pypi.python.org/pypi/tuskar/0.4.12
-- http://tarballs.openstack.org/tuskar/tuskar-0.4.12.tar.gz

10. python-tuskarclient:release 0.1.11 -- 0.1.12
-- https://pypi.python.org/pypi/python-tuskarclient/0.1.12
--
http://tarballs.openstack.org/python-tuskarclient/python-tuskarclient-0.1.12.tar.gz

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] are we going to remove the novaclient v3 shell or what?

2014-09-17 Thread Matt Riedemann
This has come up a couple of times in IRC now but the people that 
probably know the answer aren't available.


There are python-novaclient patches that are adding new CLIs to the v2 
(v1_1) and v3 shells, but now that we have the v2.1 API (v2 on v3) why 
do we still have a v3 shell in the client?  Are there plans to remove that?


I don't really care either way, but need to know for code reviews.

One example: [1]

[1] https://review.openstack.org/#/c/108942/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread Adam Young

On 09/17/2014 10:35 AM, David Chadwick wrote:

this would work as well, but wouldn't it require two different API calls?


I think it would be 2 calls no matter what.

OK,  lets talk this through:

1. Configure Horizon to return a generic login page, with a button that 
says "Or do Federated".
2.  User clicks that button and gets the Federated UI.  No new HTTP 
request needed for this, still just static Javascript.  Form has an edit 
field for the user to enter the SAML IdP, and a button to fetch the list 
of the public IdPs from Keystone.
3.  Assume the user clicks the list of public IdPs; those fill out a 
drop-down box.  If the user clicks on one, it populates the textbox with 
the URL for the IdP.
3a.  However, if the user's IdP is not in the list, they go back to the 
email they got from their IT guy that says "Paste this URL into the IdP 
edit box".


4. User clicks OK.

Window pops up with the WebUI for the user to visit the SAML IdP URL. 
This will be of the form

GET https://keystone/main/OS-FEDERATION/identity_providers/{identity_provider}/protocols/{protocol}/auth

which will perform the necessary redirects to get the user the SAML 
assertion, and return it to Keystone.


5.  Javascript picks up the Federated unscoped token from the response 
at the end of step 4 and uses that to get a scoped token.


6.  Javascript submits the scoped token to Horizon and the user is logged in.

Whew.
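
For what it's worth, step 5 is one more call against the standard v3 auth
API; a minimal sketch (the Keystone URL and project id are placeholders):

    import requests

    KEYSTONE = 'https://keystone.example.org:5000/v3'
    unscoped_token = 'TOKEN_FROM_STEP_4'
    project_id = 'PROJECT_ID'

    body = {'auth': {'identity': {'methods': ['token'],
                                  'token': {'id': unscoped_token}},
                     'scope': {'project': {'id': project_id}}}}
    resp = requests.post(KEYSTONE + '/auth/tokens', json=body)
    scoped_token = resp.headers['X-Subject-Token']
    # Step 6: this is what the Javascript hands to Horizon.
    print(scoped_token)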




On 17/09/2014 15:17, Adam Young wrote:

On 09/17/2014 10:07 AM, David Chadwick wrote:

On 17/09/2014 14:55, Marek Denis wrote:

On 17.09.2014 15:45, Steve Martinelli wrote:

++ to your suggestion David, I think making the list of trusted IdPs
publicly available makes sense.

I think this might be useful in an academic/science world but on the
other hand most cloud providers from the 'business' world might be very
reluctant to expose list of their clients for free.


It is interesting that this latter comment came from the
academic/science world, whereas the supportive one came from the
business world :-)

So maybe there could be a config parameter in keystone to determine
which option is installed?

My thought was that there would be a public list, which is a subset of
the overall list.

For non-publicized IdPs, the end users would get an URL out of  band and
enter that in when prompted;  if they enter an invalid URL, they would
get an warning message.

It wouldn't hide the fact that a customer was registered with a given
cloud provider, but wouldn't advertise it, either.




regards

David


cheers,

Marek.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-17 Thread Matt Riedemann



On 9/16/2014 1:01 PM, Joe Gordon wrote:


On Sep 15, 2014 8:31 PM, Jay Pipes jaypi...@gmail.com wrote:
 
  On 09/15/2014 08:07 PM, Jeremy Stanley wrote:
 
  On 2014-09-15 17:59:10 -0400 (-0400), Jay Pipes wrote:
  [...]
 
  Sometimes it's pretty hard to determine whether something in the
  E-R check page is due to something in the infra scripts, some
  transient issue in the upstream CI platform (or part of it), or
  actually a bug in one or more of the OpenStack projects.
 
  [...]
 
  Sounds like an NP-complete problem, but if you manage to solve it
  let me know and I'll turn it into the first line of triage for Infra
  bugs. ;)
 
 
  LOL, thanks for making me take the last hour reading Wikipedia pages
about computational complexity theory! :P
 
  No, in all seriousness, I wasn't actually asking anyone to boil the
ocean, mathematically. I think doing a couple things just making the
categorization more obvious (a UI thing, really) and doing some
(hopefully simple?) inspection of some control group of patches that we
know do not introduce any code changes themselves and comparing to
another group of patches that we know *do* introduce code changes to
Nova, and then seeing if there are a set of E-R issues that consistently
appear in *both* groups. That set of E-R issues has a higher likelihood
of not being due to Nova, right?

We use launchpad's affected projects listings on the elastic recheck
page to say what may be causing the bug.  Tagging projects to bugs is a
manual process, but one that works pretty well.

UI: The elastic recheck UI definitely could use some improvements. I am
very poor at writing UIs, so patches welcome!

 
  OK, so perhaps it's not the most scientific or well-thought out plan,
but hey, it's a spark for thought... ;)
 
  Best,
  -jay
 
 



I'm not great with UIs either, but would a dropdown of the affected 
projects be helpful? Then people could filter on their favorite 
project, and the page would still be sorted by top offenders as we have today.


There are times when the top bugs are infra issues (pip timeouts for 
example), so you have to scroll a ways before finding something for your 
project (nova isn't the only one).


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] client release deadline - Sept 18th

2014-09-17 Thread Matt Riedemann



On 9/15/2014 12:57 PM, Matt Riedemann wrote:



On 9/10/2014 11:08 AM, Kyle Mestery wrote:

On Wed, Sep 10, 2014 at 10:01 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:



On 9/9/2014 4:19 PM, Sean Dague wrote:


As we try to stabilize OpenStack Juno, many server projects need to get
out final client releases that expose new features of their servers.
While this seems like not a big deal, each of these clients releases
ends up having possibly destabilizing impacts on the OpenStack whole
(as
the clients do double duty in cross communicating between services).

As such in the release meeting today it was agreed clients should have
their final release by Sept 18th. We'll start applying the dependency
freeze to oslo and clients shortly after that, all other requirements
should be frozen at this point unless there is a high priority bug
around them.

 -Sean



Thanks for bringing this up. We do our own packaging and need time
for legal
clearances and having the final client releases done in a reasonable
time
before rc1 is helpful.  I've been pinging a few projects to do a final
client release relatively soon.  python-neutronclient has a release this
week and I think John was planning a python-cinderclient release this
week
also.


Just a slight correction: python-neutronclient will have a final
release once the L3 HA CLI changes land [1].

Thanks,
Kyle

[1] https://review.openstack.org/#/c/108378/


--

Thanks,

Matt Riedemann





python-cinderclient 1.1.0 was released on Saturday:

https://pypi.python.org/pypi/python-cinderclient/1.1.0



python-novaclient 2.19.0 was released yesterday [1].

List of changes:

mriedem@ubuntu:~/git/python-novaclient$ git log 2.18.1..2.19.0 --oneline 
--no-merges

cd56622 Stop using intersphinx
d96f13d delete python bytecode before every test run
4bd0c38 quota delete tenant_id parameter should be required
3d68063 Don't display duplicated security groups
2a1c07e Updated from global requirements
319b61a Fix test mistake with requests-mock
392148c Use oslo.utils
e871bd2 Use Token fixtures from keystoneclient
aa30c13 Update requirements.txt to include keystoneclient
bcc009a Updated from global requirements
f0beb29 Updated from global requirements
cc4f3df Enhance network-list to allow --fields
fe95fe4 Adding Nova Client support for auto find host APIv2
b3da3eb Adding Nova Client support for auto find host APIv3
3fa04e6 Add filtering by service to hosts list command
c204613 Quickstart (README) doc should refer to nova
9758ffc Updated from global requirements
53be1f4 Fix listing of flavor-list (V1_1) to display swap value
db6d678 Use adapter from keystoneclient
3955440 Fix the return code of the command delete
c55383f Fix variable error for nova --service-type
caf9f79 Convert to requests-mock
33058cb Enable several checks and do not check docs/source/conf.py
abae04a Updated from global requirements
68f357d Enable check for E131
b6afd59 Add support for security-group-default-rules
ad9a14a Fix rxtx_factor name for creating a flavor
ff4af92 Allow selecting the network for doing the ssh with
9ce03a9 fix host resource repr to use 'host' attribute
4d25867 Enable H233
60d1283 Don't log sensitive auth data
d51b546 Enabled hacking checks H305 and H307
8ec2a29 Edits on help strings
c59a0c8 Add support for new fields in network create
67585ab Add version-list for listing REST API versions
0ff4afc Description is mandatory parameter when creating Security Group
6ee0b28 Filter endpoints by region whenever possible
32d13a6 Add missing parameters for server rebuild
f10d8b6 Fixes typo in error message of do_network_create
9f1ee12 Mention keystoneclient.Session use in docs
58cdcab Fix booting from volume when using api v3
52c5ad2 Sync apiclient from oslo-incubator
2acfb9b Convert server tests to httpretty
762bf69 Adding cornercases for set_metadata
313a2f8 Add way to specify key-name from environ

[1] https://pypi.python.org/pypi/python-novaclient/2.19.0

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Thomas Goirand
Hi,

I'm horrified by what I just found. I have just found out this in
glanceclient:

  File bla/tests/test_ssl.py, line 19, in module
from requests.packages.urllib3 import poolmanager
ImportError: No module named packages.urllib3

Please *DO NOT* do this. Instead, please use urllib3 from ... urllib3.
Not from requests. The fact that requests is embedding its own version
of urllib3 is a heresy. In Debian, the embedded version of urllib3 is
removed from requests.

In Debian, we spend a lot of time un-vendorizing stuff, because
that's a security nightmare. I don't want to have to patch all of
OpenStack to do it there as well.

And no, there's no good excuse here...

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Donald Stufft
I don't know the specific situation but it's appropriate to do this if you're 
using requests and wish to interact with the urllib3 that requests is using.

 On Sep 17, 2014, at 11:15 AM, Thomas Goirand z...@debian.org wrote:
 
 Hi,
 
 I'm horrified by what I just found. I have just found out this in
 glanceclient:
 
  File bla/tests/test_ssl.py, line 19, in module
from requests.packages.urllib3 import poolmanager
 ImportError: No module named packages.urllib3
 
 Please *DO NOT* do this. Instead, please use urllib3 from ... urllib3.
 Not from requests. The fact that requests is embedding its own version
 of urllib3 is an heresy. In Debian, the embedded version of urllib3 is
 removed from requests.
 
 In Debian, we spend a lot of time to un-vendorize stuff, because
 that's a security nightmare. I don't want to have to patch all of
 OpenStack to do it there as well.
 
 And no, there's no good excuse here...
 
 Thomas Goirand (zigo)
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Donald Stufft
Looking at the code on my phone it looks completely correct to use the vendored 
copy here and it wouldn't actually work otherwise. 
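
A quick sketch of the point being made (whether a standalone urllib3 is
installed at all is an assumption):

    import requests
    from requests.packages import urllib3 as vendored
    # On a distribution that strips the bundled copy without aliasing it,
    # the import above is exactly what fails -- that is the traceback
    # quoted in this thread.

    try:
        import urllib3 as standalone
    except ImportError:
        standalone = None

    # False with stock upstream requests: two separate copies of the
    # library, so patching or inspecting the top-level urllib3 would not
    # touch the HTTP layer that requests actually uses.
    print(standalone is vendored)

    # The pool manager class requests' adapters really use lives here,
    # which is why glanceclient's SSL test imports the vendored path.
    print(vendored.poolmanager.PoolManager)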

 On Sep 17, 2014, at 11:17 AM, Donald Stufft don...@stufft.io wrote:
 
 I don't know the specific situation but it's appropriate to do this if you're 
 using requests and wish to interact with the urllib3 that requests is using.
 
 On Sep 17, 2014, at 11:15 AM, Thomas Goirand z...@debian.org wrote:
 
 Hi,
 
 I'm horrified by what I just found. I have just found out this in
 glanceclient:
 
 File bla/tests/test_ssl.py, line 19, in module
   from requests.packages.urllib3 import poolmanager
 ImportError: No module named packages.urllib3
 
 Please *DO NOT* do this. Instead, please use urllib3 from ... urllib3.
 Not from requests. The fact that requests is embedding its own version
 of urllib3 is an heresy. In Debian, the embedded version of urllib3 is
 removed from requests.
 
 In Debian, we spend a lot of time to un-vendorize stuff, because
 that's a security nightmare. I don't want to have to patch all of
 OpenStack to do it there as well.
 
 And no, there's no good excuse here...
 
 Thomas Goirand (zigo)
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][TC][Zaqar] Another graduation attempt, new lessons learned

2014-09-17 Thread Eoghan Glynn

Thanks for bring this to the list, Flavio.

A few thoughts in line ...

 Greetings,
 
 As probably many of you already know, Zaqar (formerly known as Marconi)
 has recently been evaluated for integration. This is the second time
 this project (and team) has gone through this process and just like last
 time, it wasn't as smooth as we all would have liked it to be.
 
 I thought about sending this email - regardless of what result is - to
 give a summary of what the experience have been like from the project
 side. Some things were quite frustrating and I think they could be
 revisited and improved, hence this email and ideas as to how I think we
 could make them better.
 
 ## Misunderstanding of the project goals:
 
 For both graduation attempts, the goals of the project were not
 clear. It felt like the communication between TC and PTL was
 insufficient to convey enough data to make an informed decision.
 
 I think we need to work on a better plan to follow-up with incubated
 projects. I think these projects should have a schedule and specific
 incubated milestones in addition to the integrated release milestones.
  For example, it'd be good to have at least 3 TC meetings where the
 project shows the progress, the goals that have been achieved and where
 it is standing on the integration requirements.
 
 These meetings should be used to address concerns right away. Based on
 Zaqar's experience, it's clear that graduating is more than just meeting
 the requirements listed here[0]. The requirements may change and other
 project-specific concerns may also be raised. The important thing here,
 though, is to be all on the same page of what's needed.
 
 I suggested after the Juno summit that we should have TC representative
 for each incubated project[1]. I still think that's a good idea and we
 should probably evaluate a way to make that, or something like that,
 happen. We tried to put it in practice during Juno - Devananda
 volunteered to be Zaqar's representative. Thanks for doing this - but it
 didn't work out as we expected. It would probably be a better idea,
 given the fact that we're all overloaded with things to do, to have a
 sub-team of 2 or 3 TC members assigned to a project. These TC
 representatives could lead incubated projects through the process and
 work as a bridge between the TC and the project.
 
 Would a plan like the one mentioned above scale for the current TC and
 the number of incubated projects?

Agreed that the expectations on the TC representative should be made
clearer. It would be best IMO if this individual (or small sub-team)
could commit to doing a deep-dive on the project and be ready to act
as a mediator with the rest of the TC around the project's intended
use-cases, architecture, APIs etc.

There need not necessarily be an expectation that the representative(s)
would champion the project, but they should ensure that there aren't
"what the heck is this thing?" style questions still being asked right
at the end of the incubation cycle. 

 [0]
 https://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements.rst#n79
 
 [1] http://lists.openstack.org/pipermail/openstack-dev/2014-May/035341.html
 
 ## Have a better structured meeting
 
 One of the hard things of attending a TC meeting as a representative for
 a project is that you get 13 people asking several different questions
 at the same time, which is impossible to keep up with. I think, for
 future integration/incubation/whatever reviews/meetings, there should be
 a better structure. One of the things I'd recommend is to move all the
 *long* technical discussions to the mailing list and avoid having them
 during the graduation meeting. IRC discussions are great but I'd
 probably advice having them in the project channel or during the project
 meeting time and definitely before the graduation meeting.
 
 What makes this `all-against-one` thing harder are the parallel
 discussions that normally happen during these meetings. We should really
 work hard on avoiding these kind of parallel discussions because they
 distract attendees and make the really discussion harder and frustrating.

As an observer from afar at those TC meetings, the tone of *some* of
the discussion seemed a bit adversarial, or at least very challenging
to respond to in a coherent way. I wouldn't relish the job of trying 
to field rapid-fire, overlapping questions in real-time, some of which
cast doubts on very fundamental aspects of the project. While I agree
every TC member's questions are important, that approach isn't the most
effective way of ensuring good answers are forthcoming.

+1 that this could be improved by ensuring that most of the detailed
technical discussion has already been aired well in advance on the ML.

In addition, I wonder might there be some mileage in approaches such
as:

 * encouraging the TC to register fundamental concerns/doubts in
   advance via an etherpad so that the project team gets a 

Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread David Chadwick


On 17/09/2014 15:38, Adam Young wrote:
 On 09/17/2014 10:14 AM, Tim Bell wrote:
 Has Kristy's patch made it into Juno ?
 I don't see any patches from Kristy in either the merged or pending
 review state for non-keystone projects;
 
 https://review.openstack.org/#/q/owner:%22Kristy+Siu%22,n,z
 
 So I'm guessing it is proof-of-concept code that has not been yet
 submitted.

That is correct

 
 How do you propose going from Horizon to Keystone using  SAML creds?

The flow is something like this (participants: Browser, Horizon,
Keystone/Apache, IdP):

 1. Browser -> Horizon: get login page
 2. Horizon -> Keystone: retrieve the list of IdPs
 3. Keystone -> Horizon: list of IdPs
 4. Horizon -> Browser: display login page
 5. Browser -> Horizon: choose federated login
 6. Horizon -> Browser: redirect to Keystone
 7. Browser -> Keystone: federated login request
    (Keystone remembers the address of Horizon)
 8. Keystone -> Browser -> IdP: SAML Authn request (redirect)
 9. User logs in at the IdP
10. IdP -> Browser -> Keystone: SAML Authn response (redirect)
11. Keystone -> Browser -> Horizon: redirect back to Horizon, using the
    remembered address

The modification that Kristy had to make to Keystone was that the address
of Horizon for the last redirection had to be remembered when the initial
request was received.

We would like to make our proof of concept code available to the Horizon
experts for them to re-engineer/toughen to industry standards so that it
can be released asap to the public.

We have documentation that describes our release which you can download
from here

https://classe.sec.cs.kent.ac.uk/juno-server.pdf

Note that a newer version will be uploaded very shortly as the above
version contains partial VO documentation which will be stripped out
until we can complete it later this month or next.

regards

David


 
 

 Tim

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: 17 September 2014 15:37
 To: openstack-dev@lists.openstack.org; Kristy Siu
 Subject: Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

 Hi Adam

 Kristy has already added support to Horizon for federated login to
 Keystone. She
 will send you details of how she did this.

 One issue that arose was this:
 in order to give the user the list of IDPs/protocols that are
 trusted, the call to
 Keystone needs to be authenticated. But the user is not yet
 authenticated. So
 Horizon has to have its own credentials for logging into Keystone so
 that it can
 retrieve the list of IdPs for the user.
 This works, but it is not ideal.

 The situation is far worse for the Keystone command line client. The
 user is not
 logged in and the Keystone client does not have its own account on
 Keystone, so
 it cannot retrieve the list of IdPs for the user. The only way that
 Kristy could
 solve this, was to remove the requirement for authentication to the
 API that
 retrieves the list of IdPs. But this is not a standard solution as it
 requires
 modifying the core Keystone code.

 We need a fix to address this issue. My suggestion would be to make
 the API for
 retrieving the list of trusted IDPs publicly accessible, so that no
 credentials are
 needed for this.

 regards

 David


 On 16/09/2014 23:39, Adam Young wrote:
 Phase one for dealing with Federation can be done with CORS support
 solely for Keystone/Horizon  integration:

 1.  Horizon Login page creates Javascript to do AJAX call to Keystone
 2.  Keystone generates a token 3.  Javascript reads token out of
 response and sends it to Horizon.

 This should support Kerberos, X509, and Password auth;  the Keystone
 team is discussing how to advertise mechanisms, lets leave the onus on
 us to solve that one and get back in a timely manner.

 For Federation, the handshake is a little more complex, and there
 might be a need for some sort of popup window for the user to log in
 to their home SAML provider.  Its several more AJAX calls, but the end
 effect should be the same:  get a standard Keystone token and hand
 it to
 Horizon.
 This would mean that Horizon would have to validate tokens the same
 way as any other endpoint.  That should not be too hard, but there is
 a little bit of create a user, get a token, make a call logic that
 currently lives only in keystonemiddleware/auth_token;  Its a solvable
 problem.

 This approach will support the straight Javascript approach that
 Richard Jones discussed;  Keystone behind a proxy will work this way
 without CORS support.  If CORS  can be sorted out for the other
 services, we can do straight Javascript without the Proxy.  I see it
 as phased approach with this being the first phase.





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Re: [openstack-dev] [TripleO] Set WIP for stale patches?

2014-09-17 Thread Sullivan, Jon Paul
 -Original Message-
 From: Derek Higgins [mailto:der...@redhat.com]
 Sent: 17 September 2014 14:49
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [TripleO] Set WIP for stale patches?
 
 On 17/09/14 14:40, Charles Crouch wrote:
 
 
  - Original Message -
  Hi,
 
  as part of general housekeeping on our reviews, it was discussed at
  last week's meeting [1] that we should set workflow -1 for stale
  reviews (like gerrit used to do when I were a lad).
 
  The specific criteria discussed was 'items that have a -1 from a core
  but no response from author for 14 days'. This topic came up again
  during today's meeting and it wasn't clear if the intention was for
  cores to start enforcing this? So:
 
  Do we start setting WIP/workflow -1 for those reviews that have a -1
  from a core but no response from author for 14 days
 
  thanks, marios
 
  [1]
  http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-09-
  09-19.04.log.html
 
  So it looks like this has already started..
 
  https://review.openstack.org/#/c/105275/
 
  I think we need to document on the wiki *precisely* the criteria for
  setting WIP/workflow -1.
 Yup, we definitely should
 
  For example that review above has a Jenkins failure but no core
  reviews at all.
 FWIW I reckon a jenkins -1 should also start the 2 week clock but in the
 case you've linked the -1 was only 2 days ago so should have remained
 untouched.

I think this highlights exactly why this should be an automated process.  No 
errors in application, and no errors in interpretation of what has happened.

So the -1 from Jenkins was a reaction to the comment created by adding the 
workflow -1.  This is going to happen on all of the patches that have their 
workflow value altered (tests will run, and the result will be whatever the 
tests produce, of course).

I also agree that the Jenkins vote should not be included in the 
determination of marking a patch WIP, but a human review should (so the 
Code-Review column, not the Verified column).

And in fact, for the specific example to hand, the last Jenkins vote was 
actually a +1, so as I understand it, it should not have been marked WIP.

 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks, 
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud

Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park, Galway.
Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John Rogerson's 
Quay, Dublin 2. 
Registered Number: 361933
 
The contents of this message and any attachments to it are confidential and may 
be legally privileged. If you have received this message in error you should 
delete it from your system immediately and advise the sender.

To any recipient of this message within HP, unless otherwise stated, you should 
consider this message and attachments as HP CONFIDENTIAL.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] reopen a change / pull request for nova-pythonclient ?

2014-09-17 Thread Alex Leonhardt
hi,

how does one re-open an abandoned change / pull request? It just timed
out and was then abandoned -

https://review.openstack.org/#/c/57834/

please let me know

thanks!
alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-17 Thread Duncan Thomas
On 16 September 2014 01:28, Nathan Kinder nkin...@redhat.com wrote:
 The idea would be to leave normal tokens with a smaller validity period
 (like the current default of an hour), but also allow one-time use
 tokens to be requested.

Cinder backup makes many requests to swift during a backup, one per
chunk to be uploaded plus one or more for the metadata file.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread Marek Denis

Hi,

First of all, we should clarify whether your JS client wants to 
implement the ECP or the WebSSO workflow. They are slightly different.


I feel JS is smart enough to implement the ECP flow, and it 
could simply implement what we already have in the keystoneclient [0]. 
This + some discovery service described by Adam would be required. 
However, some problems may arise when it comes to ADFS support (we 
needed a separate plugin in keystoneclient) and other protocols which 
should work by default from browsers.


[0] 
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/contrib/auth/v3/saml2.py#L29
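
(For context, a rough sketch of what [0] already gives a Python client; the 
plugin name and constructor arguments below are from memory and should be 
treated as assumptions to check against saml2.py, not as the definitive API:)

from keystoneclient import session
from keystoneclient.contrib.auth.v3 import saml2

# Drives the SAML2/ECP exchange against the IdP and ends up with an unscoped
# federated token from Keystone; argument names are assumptions.
auth = saml2.Saml2UnscopedToken(
    auth_url='https://keystone.example.com:5000/v3',
    identity_provider='testidp',                          # as registered in Keystone
    identity_provider_url='https://idp.example.com/ECP',  # the IdP's ECP endpoint
    username='user',
    password='secret')

sess = session.Session(auth=auth)
unscoped_token = sess.get_token()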


Rest of the comments inlined.

On 17.09.2014 17:00, Adam Young wrote:

On 09/17/2014 10:35 AM, David Chadwick wrote:

this would work as well, but wouldn't it require two different API calls?


I think it would be 2 calls no matter what.

OK, let's talk this through:

1. Configure Horizon to return a generic login page, with a button that
says "Or do Federated"
2.  User clicks that button and gets the Federated UI.  No new HTTP
request needed for this, still just static Javascript.  The form has an edit
field for the user to enter the SAML IdP, and a button to fetch the list
of the public IdPs from Keystone.
3.  Assume the user clicks the list of public IdPs; those fill out a
drop-down box.  If the user clicks on one, it populates the textbox with
the URL for the IdP.
3a.  However, if the user's IdP is not in the list, they go back to the
email they got from their IT guy that says "Paste this URL into the IdP
edit box".


Well, we don't keep any IdP URLs in the Keystone backend at the moment.
However, this can be easily fixed.


4. User clicks OK.


OK


A window pops up with the WebUI for the user to visit the SAML IdP URL.
This will be of the form:
GET
https://keystone/main/OS-FEDERATION/identity_providers/{identity_provider}/protocols/{protocol}/auth

Which will perform the necessary redirects to get the user the SAML
assertion, and return it to Keystone.


Let's assume this step would work for now. I am interested in how the 
SP-IdP-SP workflow would look from the JS perspective. In classic WebSSO, 
where the user uses only their browser:


1) Go to the protected resource 
(/v3/OS-FEDERATION/identity_providers/{identity_provider}/protocols/{protocol}/auth 
in our case)


2) (skipping the DS step for now) get redirected to the IdP
3) Authenticate with the IdP
4) Get redirected to the SP
5) Read your protected content, because you have been authenticated and 
authorized to do so (get an unscoped, federated token issued by Keystone 
in our case)


Now, I can imagine, if that's the WebSSO approach, can we somehow make JS 
mimic the browser with all its blessings, so the user would actually see 
the real HTML website? If so, that would be enough to make it work.



5.  Javascript picks up the Federated unscoped token from the response
at the end of step 4 and uses that to get a scoped token.
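
(A rough sketch of the HTTP calls behind steps 2-4; shown with Python requests 
for brevity, although in Horizon this would be AJAX. The federation auth URL is 
the one quoted above, the identity_providers listing is the standard v3 
OS-FEDERATION call, and the IdP/protocol names are made up:)

import requests

KEYSTONE = 'https://keystone.example.com:5000/v3'

# Steps 2/3: fetch the list of IdPs to fill the drop-down. Note David's point
# elsewhere in the thread: today this call still requires credentials.
idps = requests.get(KEYSTONE + '/OS-FEDERATION/identity_providers').json()

# Step 4: hit the federated auth endpoint for the chosen IdP/protocol. In the
# WebSSO case the browser (or popup) follows the IdP redirects from here; when
# the dance finishes, Keystone returns an unscoped token in X-Subject-Token.
resp = requests.get(KEYSTONE +
    '/OS-FEDERATION/identity_providers/myidp/protocols/saml2/auth')
unscoped_token = resp.headers['X-Subject-Token']
# Step 5, scoping, is the ordinary token-for-token exchange (sketched further
# down the thread).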


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] reopen a change / pull request for nova-pythonclient ?

2014-09-17 Thread Russell Bryant
On 09/17/2014 11:47 AM, Alex Leonhardt wrote:
 hi,
 
 how does one re-open a abandoned change / pull request ? it just timed
 out and was then abandoned - 
 
 https://review.openstack.org/#/c/57834/
 
 please let me know

I re-opened it.  You should be able to update it now.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread Matthieu Huin
Hi,

- Original Message -
 From: Adam Young ayo...@redhat.com
 To: openstack-dev@lists.openstack.org
 Sent: Wednesday, September 17, 2014 5:00:16 PM
 Subject: Re: [openstack-dev] [Keystone][Horizon] CORS and Federation
 
 On 09/17/2014 10:35 AM, David Chadwick wrote:
 
 
 
 this would work as well, but wouldn't it require two different API calls?
 
 I think it would be 2 calls no matter what.
 
 OK, lets talk this through:
 
 1. Configure Horizon to return a generic login page, with a button that says
 "Or do Federated"
 2. User clicks that button and gets the Federated UI. No new HTTP request
 needed for this, still just static Javascript. The form has an edit field for
 the user to enter the SAML IdP, and a button to fetch the list of the public
 IdPs from Keystone.
 3. Assume the user clicks the list of public IdPs; those fill out a drop-down
 box. If the user clicks on one, it populates the textbox with the URL for
 the IdP.
 3a. However, if the user's IdP is not in the list, they go back to the email
 they got from their IT guy that says "Paste this URL into the IdP edit box".
 
 4. User clicks OK.
 
 Window pops up with the WebUI for the user to visit the SAML IdP URL. This
 will be of the form
 
 GET
 https://keystone/main/OS-FEDERATION/identity_providers/{identity_provider}/protocols/{protocol}/auth
 
 Which will perform the necessary redirects to get the user the SAML
 assertion, and return it to Keystone.
 
 5. Javascript picks up the Federated unscoped token from the response at the
 end of step 4 and use that to get a scoped token.
 

I think the jump to step 6 isn't necessary: logging in to Horizon requires only 
a username and a password; 
unless I am wrong, scoping is done afterwards by selecting a project in a list. 
Besides, should we expect
a federated user to necessarily know beforehand to which domain/project she was 
mapped?

 
 6. Javascript submits the scoped token to Horizon and the user is logged in.

Horizon will also need to keep track of the fact that a federated auth was used:

* to list projects and domains available to the user
* to scope the unscoped SAML token

As these are done through the federation API rather than the usual one.
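
(A sketch of those two federation-specific calls with Python requests; the 
paths follow the v3 OS-FEDERATION API as I understand it and should be 
double-checked against a real deployment:)

import requests

KEYSTONE = 'https://keystone.example.com:5000/v3'
UNSCOPED = 'unscoped-federated-token'   # obtained as in the flow above
headers = {'X-Auth-Token': UNSCOPED}

# List the projects (and, analogously, /OS-FEDERATION/domains) the federated
# user may scope to; note this is not the usual GET /v3/projects call.
projects = requests.get(KEYSTONE + '/OS-FEDERATION/projects',
                        headers=headers).json()['projects']

# Scope the unscoped federated token to whichever project the user picks.
resp = requests.post(KEYSTONE + '/auth/tokens', json={
    'auth': {
        'identity': {'methods': ['token'], 'token': {'id': UNSCOPED}},
        'scope': {'project': {'id': projects[0]['id']}},
    }})
scoped_token = resp.headers['X-Subject-Token']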

 Whew.

Whew indeed ! 

 
 
 
 
 
 On 17/09/2014 15:17, Adam Young wrote:
 
 
 
 On 09/17/2014 10:07 AM, David Chadwick wrote:
 
 
 
 On 17/09/2014 14:55, Marek Denis wrote:
 
 
 
 On 17.09.2014 15:45, Steve Martinelli wrote:
 
 
 
 ++ to your suggestion David, I think making the list of trusted IdPs
 publicly available makes sense.
 I think this might be useful in an academic/science world but on the
 other hand most cloud providers from the 'business' world might be very
 reluctant to expose list of their clients for free.
 It is interesting that this latter comment came from the
 academic/science world, whereas the supportive one came from the
 business world :-)
 
 So maybe there could be a config parameter in keystone to determine
 which option is installed?
 My thought was that there would be a public list, which is a subset of
 the overall list.
 
 For non-publicized IdPs, the end users would get a URL out of band and
 enter that in when prompted;  if they enter an invalid URL, they would
 get a warning message.
 
 It wouldn't hide the fact that a customer was registered with a given
 cloud provider, but wouldn't advertise it, either.
 
 
 
 regards
 
 David
 
 
 
 cheers,
 
 Marek.
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread K . W . S . Siu
Hi all,

The code for my proof of concept software is at 
https://github.com/kwss/horizon/tree/federated (templates)

And

https://github.com/kwss/django_openstack_auth/tree/federated (federation 
handling). 

Please note that the horizon branch also contains some additional panels for 
managing IdPs and mappings. 

Regards,
Kristy

 On 17 Sep 2014, at 16:34, David Chadwick d.w.chadw...@kent.ac.uk wrote:
 
 
 
 On 17/09/2014 15:38, Adam Young wrote:
 On 09/17/2014 10:14 AM, Tim Bell wrote:
 Has Kristy's patch made it into Juno ?
 I don't see any patches from Kristy in either the merged or pending
 review state for non-keystone projects;
 
 https://review.openstack.org/#/q/owner:%22Kristy+Siu%22,n,z
 
 So I'm guessing it is proof-of-concept code that has not been yet
 submitted.
 
 That is correct
 
 
 How do you propose going from Horizon to Keystone using  SAML creds?
 
 The flow is something like this (participants: Browser, Horizon,
 Keystone/Apache, IdP):
 
 1. Browser -> Horizon: get login page
 2. Horizon -> Keystone: retrieve IdPs
 3. Keystone -> Horizon: list of IdPs
 4. Horizon -> Browser: display login page
 5. Browser -> Horizon: choose federated login
 6. Horizon -> Browser: redirect to Keystone
 7. Browser -> Keystone: federated login request
    (Keystone remembers the address of Horizon)
 8. Keystone -> Browser: SAML Authn request (redirect to the IdP)
 9. Browser <-> IdP: user logs in
 10. IdP -> Browser: SAML Authn response (redirect back to Keystone)
 11. Browser -> Keystone: deliver the SAML Authn response
 12. Keystone -> Browser: redirect back to Horizon, using the remembered address
 
 The modification that Kristy had to make to Keystone was that the last
 redirection request to Horizon had to be remembered when the initial
 request was received.
 
 We would like to make our proof of concept code available to the Horizon
 experts for them to re-engineer/toughen to industry standards so that it
 can be released asap to the public.
 
 We have documentation that describes our release which you can download
 from here
 
 https://classe.sec.cs.kent.ac.uk/juno-server.pdf
 
 Note that a newer version will be uploaded very shortly as the above
 version contains partial VO documentation which will be stripped out
 until we can complete it later this month or next.
 
 regards
 
 David
 
 
 
 
 
 Tim
 
 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: 17 September 2014 15:37
 To: openstack-dev@lists.openstack.org; Kristy Siu
 Subject: Re: [openstack-dev] [Keystone][Horizon] CORS and Federation
 
 Hi Adam
 
 Kristy has already added support to Horizon for federated login to
 Keystone. She
 will send you details of how she did this.
 
 One issue that arose was this:
 in order to give the user the list of IDPs/protocols that are
 trusted, the call to
 Keystone needs to be authenticated. But the user is not yet
 authenticated. So
 Horizon has to have its own credentials for logging into Keystone so
 that it can
 retrieve the list of IdPs for the user.
 This works, but it is not ideal.
 
 The situation is far worse for the Keystone command line client. The
 user is not
 logged in and the Keystone client does not have its own account on
 Keystone, so
 it cannot retrieve the list of IdPs for the user. The only way that
 Kristy could
 solve this, was to remove the requirement for authentication to the
 API that
 retrieves the list of IdPs. But this is not a standard solution as it
 requires
 modifying the core Keystone code.
 
 We need a fix to address this issue. My suggestion would be to make
 the API for
 retrieving the list of trusted IDPs publicly accessible, so that no
 credentials are
 needed for this.
 
 regards
 
 David
 
 
 On 16/09/2014 23:39, Adam Young wrote:
 Phase one for dealing with Federation can be done with CORS support
 solely for Keystone/Horizon  integration:
 
 1.  Horizon Login page creates Javascript to do AJAX call to Keystone
 2.  Keystone generates a token 3.  Javascript reads token out of
 response and sends it to Horizon.
 
 This should support Kerberos, X509, and Password auth;  the Keystone
 team is discussing how to advertise mechanisms, lets leave the onus on
 us to solve that one and get back in a timely manner.
 
 For Federation, the handshake is a little more complex, and there
 might be a need for some sort of popup window for the user to log in
 to their home SAML provider.  Its several more AJAX calls, but the end
 effect should be the same:  get a standard Keystone token and hand
 it to
 Horizon.
 This would mean that Horizon would have to validate tokens the same
 way as any other endpoint.  That should not be too hard, but there is
 a little bit of create a user, get a token, make a call logic that
 currently lives only in keystonemiddleware/auth_token;  Its a solvable
 problem.
 
 This approach will support the straight Javascript approach that
 Richard Jones discussed;  Keystone behind a proxy will work this way
 without CORS support.  If CORS  can be sorted out for the other
 services, we can do straight 

Re: [openstack-dev] reopen a change / pull request for nova-pythonclient ?

2014-09-17 Thread Daniel P. Berrange
On Wed, Sep 17, 2014 at 04:47:06PM +0100, Alex Leonhardt wrote:
 hi,
 
 how does one re-open a abandoned change / pull request ? it just timed
 out and was then abandoned -
 
 https://review.openstack.org/#/c/57834/
 
 please let me know

Just re-upload the change, maintaining the same Change-Id line in the
commit message.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [metrics] New version of the Activity Board

2014-09-17 Thread Daniel Izquierdo

Hi everyone,

I'd like to introduce the new activity board look and feel and other 
improvements on the metrics side.



* What is it?


Activity board is the place where you can find development metrics of 
the OpenStack Foundation projects.



* Where to get it?
===

There is a live version at [1].

If you want to run a version by yourself, you should first download 
the OpenStack activity-board git repository at [2]
and clone the daily updated dataset of JSON files at [3] under the 
~/activity-board/browser/data/json dir. Then use your favourite web server.



* What's changed?


Main changes include:
- Navigation is now focused on projects. [4]
- Information is now divided into the several releases of the project 
(top right menu).

- Organizations page site improved [5]
- Personal page site improved [6]
- Mailing lists hotspots [7]. Hot topics are calculated over the last 30 
days, 365 days and the whole history.



* Further work
=
- Add Juno release information
- Allow to have projects navigation per release
- Add Askbot data per release
- Add IRC data per release
- Improve navigation (feedback is more than welcome here).
- Update documentation


Cheers,
Daniel.


[1] http://activity.openstack.org/dash/browser/
[2] http://git.openstack.org/cgit/openstack-infra/activity-board/
[3] https://github.com/Bitergia/openstack-dashboard-json
[4] Example navigating through the metrics of Nova: 
http://activity.openstack.org/dash/browser/project.html?project=nova
[5] 
http://activity.openstack.org/dash/browser/company.html?company=OpenStack%20Foundation

[6] http://activity.openstack.org/dash/browser/people.html?id=18
[7] http://activity.openstack.org/dash/browser/mls.html


--
Daniel Izquierdo Cortazar, PhD
Chief Data Officer
-
Software Analytics for your peace of mind
www.bitergia.com
@bitergia


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] reopen a change / pull request for nova-pythonclient ?

2014-09-17 Thread Russell Bryant
On 09/17/2014 11:56 AM, Daniel P. Berrange wrote:
 On Wed, Sep 17, 2014 at 04:47:06PM +0100, Alex Leonhardt wrote:
 hi,

 how does one re-open a abandoned change / pull request ? it just timed
 out and was then abandoned -

 https://review.openstack.org/#/c/57834/

 please let me know
 
 Just re-upload the change, maintaining the same Change-Id line in the
 commit message.

Gerrit will reject it if it's still abandoned.  You have to restore it
first.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] CORS and Federation

2014-09-17 Thread David Chadwick


On 17/09/2014 16:53, Marek Denis wrote:
 Hi,
 
 First of all, we should clarify whether your JS client wants to
 implement ECP or WebSSO workflow. They are slightly different.

Our modification to Horizon uses WebSSO since this is the obvious
profile for a browser to use as it can handle redirects automatically.

Your modification for Keystone client uses ECP which is the obvious one
for this to use, as redirects are not required.

However, we also have a mod for Keystone client which uses WebSSO, in
case this is all the keystone server supports

regards

David

 
 I feel JS is smart enough to implement the ECP flow and then and it
 could simply implement what we already have in the keystoneclient [0].
 This + some discovery service described by Adam would be required.
 However, some problems may arise when it comes to ADFS  support (we
 needed separate plugin in keystoneclient) and other protocols which
 should work by default from browsers.
 
 [0]
 https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/contrib/auth/v3/saml2.py#L29
 
 
 Rest of the comments inlined.
 
 On 17.09.2014 17:00, Adam Young wrote:
 On 09/17/2014 10:35 AM, David Chadwick wrote:
 this would work as well, but wouldn't it require two different API
 calls?

 I think it would be 2 calls no matter what.

 OK,  lets talk this through:

 1. Configure Horizon to return a generic login page, with a button that
 says Or do Federated
 2.  Use clicks that button and gets the Federated UI.  No new HTTP
 request needed for this, still just static Javascript.  Form has an edit
 field for the user to enter the SAML IdP, anda button to fetch the list
 of the public IdPs from Keystone.
 3.  Assume user clicks the list of public IdPs, those fill out a
 drop-down box.  If the user clicks on one, it populates the textbox with
 the URL for the IdP.
 3a.  However, if the users IdP is not in the list, they go back to the
 email they got from their IT guy that has Paste this URL into the IdP
 edit box
 
 Well, we don't keep any IdPs' URL in Keystone backend at the moment.
 However, this can be easily fixed.
 
 4. User clicks OK.
 
 OK
 
 Window pops up with the WebUI for the user to visit the SAML IdP URL.
 This will be of the form
 |
 GET
 htps://keystone/main/OS-FEDERATION/identity_providers/{identity_provider}/protocols/{protocol}/auth|


 Which will perform the necessary redirects to get the user the SAML
 assertion, and return it to Keystone.
 
 Let's assume this step would work for now. I am interested how would the
 SP-IDP-SP workflow look like from the JS perspective. In classic WebSSO,
 where user uses only his browser:
 
 1) Go to the protected resource
 (/v3/OS-FEDERATION/identity_providers/{identity_provider}/protocols/{protocol}/auth
 in our case)
 
 2) (skipping the DS step for now) get redirected to the IdP
 3) Authenticate with the IdP
 4) Get redirected to the SP
 5) Read your protected content, because you have been authenticated and
 authorized to do so (get an unscoped, federated token issued by Keystone
 in our case)
 
 Now, I can imagine, if that's the websso approach can we somehow make JS
 mimic the browser with all its blessing? So the user would actualy see
 the real HTML website? If so, that would be enough to make it work.
 
 5.  Javascript picks up the Federated unscoped token from the response
 at the end of step 4 and use that to get a scoped token.
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Oslo] Moving Brick out of Cinder

2014-09-17 Thread Doug Hellmann

On Sep 16, 2014, at 6:02 PM, Flavio Percoco fla...@redhat.com wrote:

 On 09/16/2014 11:55 PM, Ben Nemec wrote:
 Based on my reading of the wiki page about this it sounds like it should
 be a sub-project of the Storage program.  While it is targeted for use
 by multiple projects, it's pretty specific to interacting with Cinder,
 right?  If so, it seems like Oslo wouldn't be a good fit.  We'd just end
 up adding all of cinder-core to the project anyway. :-)
 
 +1 I think the same arguments and conclusions we had on glance-store
 make sense here. I'd probably go with having it under the Block Storage
 program.

I agree. I’m sure we could find some Oslo contributors to give you advice about 
APIs if you like, but I don’t think the library needs to be part of Oslo to be 
reusable.

Doug

 
 Flavio
 
 
 -Ben
 
 On 09/16/2014 12:49 PM, Ivan Kolodyazhny wrote:
 Hi Stackers!
 
 I'm working on moving Brick out of Cinder for K release.
 
 There're a lot of open questions for now:
 
   - Should we move it to oslo or somewhere on stackforge?
   - Better architecture of it to fit all Cinder and Nova requirements
   - etc.
 
 Before starting discussion, I've created some proof-of-concept to try it. I
 moved Brick to some lib named oslo.storage for testing only. It's only one
 of the possible solutions to start work on it.
 
 All sources are available on GitHub [1], [2].
 
 [1] - I'm not sure that this place and name is good for it, it's just a PoC.
 
 [1] https://github.com/e0ne/oslo.storage
 [2] https://github.com/e0ne/cinder/tree/brick - some tests still failed.
 
 Regards,
 Ivan Kolodyazhny
 
 On Mon, Sep 8, 2014 at 4:35 PM, Ivan Kolodyazhny e...@e0ne.info wrote:
 
 Hi All!
 
 I would like to start moving Cinder Brick [1] to oslo as was described at the
 Cinder mid-cycle meetup [2]. Unfortunately I missed the meetup so I want to be
 sure that nobody has started it and we are on the same page.
 
 According to the Juno 3 release, there was not enough time to discuss [3]
 on the latest Cinder weekly meeting and I would like to get some feedback
 from the all OpenStack community, so I propose to start this discussion on
 mailing list for all projects.
 
 If nobody has started it yet and it is useful at least for both Nova and
 Cinder, I would like to start this work according to the oslo guidelines [4] and
 create the needed blueprints to get it finished before Kilo 1 is over.
 
 
 
 [1] https://wiki.openstack.org/wiki/CinderBrick
 [2] https://etherpad.openstack.org/p/cinder-meetup-summer-2014
 [3]
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/044608.html
 [4] https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary
 
 Regards,
 Ivan Kolodyazhny.
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 -- 
 @flaper87
 Flavio Percoco
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Clint Byrum
This is where Debian's one urllib3 to rule them all model fails in
a modern fast paced world. Debian is arguably doing the right thing by
pushing everyone to use one API, and one library, so that when that one
library is found to be vulnerable to security problems, one update covers
everyone. Also, this is an HTTP/HTTPS library.. so nobody can make the
argument that security isn't paramount in this context.

But we all know that the app store model has started to bleed down into
backend applications, and now you just ship the virtualenv or docker
container that has your app as you tested it, and if that means you're
20 versions behind on urllib3, that's your problem, not the OS vendor's.

I think it is _completely_ irresponsible of requests, a library, to
embed another library. But I don't know if we can avoid making use of
it if we are going to be exposed to objects that are attached to it.
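
(For client code that needs the urllib3 objects requests is actually using, one 
defensive pattern is a guarded import; a sketch only, and it papers over rather 
than solves the class-identity problem:)

# Prefer the copy of urllib3 that requests is actually using, but fall back to
# the system urllib3 on platforms (e.g. Debian) that de-vendor it.  Purely a
# sketch: it does not fix the deeper issue that exception and class identity
# differ between the two copies.
try:
    from requests.packages.urllib3 import poolmanager
except ImportError:
    from urllib3 import poolmanager

manager = poolmanager.PoolManager(num_pools=10)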

Anyway, Thomas, if you're going to send the mob with pitchforks and
torches somewhere, I'd say send them to wherever requests makes its
home. OpenStack is just buying their mutated product.

Excerpts from Donald Stufft's message of 2014-09-17 08:22:48 -0700:
 Looking at the code on my phone it looks completely correct to use the 
 vendored copy here and it wouldn't actually work otherwise. 
 
  On Sep 17, 2014, at 11:17 AM, Donald Stufft don...@stufft.io wrote:
  
  I don't know the specific situation but it's appropriate to do this if 
  you're using requests and wish to interact with the urllib3 that requests 
  is using.
  
  On Sep 17, 2014, at 11:15 AM, Thomas Goirand z...@debian.org wrote:
  
  Hi,
  
  I'm horrified by what I just found. I have just found out this in
  glanceclient:
  
   File "bla/tests/test_ssl.py", line 19, in <module>
from requests.packages.urllib3 import poolmanager
  ImportError: No module named packages.urllib3
  
  Please *DO NOT* do this. Instead, please use urllib3 from ... urllib3.
  Not from requests. The fact that requests is embedding its own version
  of urllib3 is an heresy. In Debian, the embedded version of urllib3 is
  removed from requests.
  
  In Debian, we spend a lot of time to un-vendorize stuff, because
  that's a security nightmare. I don't want to have to patch all of
  OpenStack to do it there as well.
  
  And no, there's no good excuse here...
  
  Thomas Goirand (zigo)
  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Davanum Srinivas
I was trying to requests-ify oslo.vmware and ran into this as well:
https://review.openstack.org/#/c/121956/

And we don't seem to have urllib3 in global-requirements either.
Should we do that first?

-- dims

On Wed, Sep 17, 2014 at 1:05 PM, Clint Byrum cl...@fewbar.com wrote:
 This is where Debian's one urllib3 to rule them all model fails in
 a modern fast paced world. Debian is arguably doing the right thing by
 pushing everyone to use one API, and one library, so that when that one
 library is found to be vulnerable to security problems, one update covers
 everyone. Also, this is an HTTP/HTTPS library.. so nobody can make the
 argument that security isn't paramount in this context.

 But we all know that the app store model has started to bleed down into
 backend applications, and now you just ship the virtualenv or docker
 container that has your app as you tested it, and if that means you're
 20 versions behind on urllib3, that's your problem, not the OS vendor's.

 I think it is _completely_ irresponsible of requests, a library, to
 embed another library. But I don't know if we can avoid making use of
 it if we are going to be exposed to objects that are attached to it.

 Anyway, Thomas, if you're going to send the mob with pitchforks and
 torches somewhere, I'd say send them to wherever requests makes its
 home. OpenStack is just buying their mutated product.

 Excerpts from Donald Stufft's message of 2014-09-17 08:22:48 -0700:
 Looking at the code on my phone it looks completely correct to use the 
 vendored copy here and it wouldn't actually work otherwise.

  On Sep 17, 2014, at 11:17 AM, Donald Stufft don...@stufft.io wrote:
 
  I don't know the specific situation but it's appropriate to do this if 
  you're using requests and wish to interact with the urllib3 that requests 
  is using.
 
  On Sep 17, 2014, at 11:15 AM, Thomas Goirand z...@debian.org wrote:
 
  Hi,
 
  I'm horrified by what I just found. I have just found out this in
  glanceclient:
 
  File bla/tests/test_ssl.py, line 19, in module
from requests.packages.urllib3 import poolmanager
  ImportError: No module named packages.urllib3
 
  Please *DO NOT* do this. Instead, please use urllib3 from ... urllib3.
  Not from requests. The fact that requests is embedding its own version
  of urllib3 is an heresy. In Debian, the embedded version of urllib3 is
  removed from requests.
 
  In Debian, we spend a lot of time to un-vendorize stuff, because
  that's a security nightmare. I don't want to have to patch all of
  OpenStack to do it there as well.
 
  And no, there's no good excuse here...
 
  Thomas Goirand (zigo)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Oslo] Moving Brick out of Cinder

2014-09-17 Thread Davanum Srinivas
+1 to Doug's comments.

On Wed, Sep 17, 2014 at 1:02 PM, Doug Hellmann d...@doughellmann.com wrote:

 On Sep 16, 2014, at 6:02 PM, Flavio Percoco fla...@redhat.com wrote:

 On 09/16/2014 11:55 PM, Ben Nemec wrote:
 Based on my reading of the wiki page about this it sounds like it should
 be a sub-project of the Storage program.  While it is targeted for use
 by multiple projects, it's pretty specific to interacting with Cinder,
 right?  If so, it seems like Oslo wouldn't be a good fit.  We'd just end
 up adding all of cinder-core to the project anyway. :-)

 +1 I think the same arguments and conclusions we had on glance-store
 make sense here. I'd probably go with having it under the Block Storage
 program.

 I agree. I’m sure we could find some Oslo contributors to give you advice 
 about APIs if you like, but I don’t think the library needs to be part of 
 Oslo to be reusable.

 Doug


 Flavio


 -Ben

 On 09/16/2014 12:49 PM, Ivan Kolodyazhny wrote:
 Hi Stackers!

 I'm working on moving Brick out of Cinder for K release.

 There're a lot of open questions for now:

   - Should we move it to oslo or somewhere on stackforge?
   - Better architecture of it to fit all Cinder and Nova requirements
   - etc.

 Before starting discussion, I've created some proof-of-concept to try it. I
 moved Brick to some lib named oslo.storage for testing only. It's only one
 of the possible solution to start work on it.

 All sources are aviable on GitHub [1], [2].

 [1] - I'm not sure that this place and name is good for it, it's just a 
 PoC.

 [1] https://github.com/e0ne/oslo.storage
 [2] https://github.com/e0ne/cinder/tree/brick - some tests still failed.

 Regards,
 Ivan Kolodyazhny

 On Mon, Sep 8, 2014 at 4:35 PM, Ivan Kolodyazhny e...@e0ne.info wrote:

 Hi All!

 I would to start moving Cinder Brick [1] to oslo as was described on
 Cinder mid-cycle meetup [2]. Unfortunately I missed meetup so I want be
 sure that nobody started it and we are on the same page.

 According to the Juno 3 release, there was not enough time to discuss [3]
 on the latest Cinder weekly meeting and I would like to get some feedback
 from the all OpenStack community, so I propose to start this discussion on
 mailing list for all projects.

 I anybody didn't started it and it is useful at least for both Nova and
 Cinder I would to start this work according oslo guidelines [4] and
 creating needed blueprints to make it finished until Kilo 1 is over.



 [1] https://wiki.openstack.org/wiki/CinderBrick
 [2] https://etherpad.openstack.org/p/cinder-meetup-summer-2014
 [3]
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/044608.html
 [4] https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary

 Regards,
 Ivan Kolodyazhny.




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] callback on workflow completion

2014-09-17 Thread Dmitri Zimine
Use case: 
The client software fires the workflow execution and needs to know when the 
workflow is complete. There is no good polling strategy, as a workflow can take 
an arbitrary time from ms to days. Callback notification is needed. 

The solution is a webhook.

Option 1: pass callback URL as part of starting workflow execution:
POST /executions
workflow_name=flow
callback= {
events: [[on-task-complete, on-execution-complete]
url: http://bla.com
method:POST
headers: {}
… other stuff to form proper HTTP call, like API tokens, etc ...
}
…..
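
(Rendered as a valid request body, option 1 might look roughly like the 
following; the endpoint and field names just follow the sketch above and are 
hypothetical, since none of this exists in Mistral yet:)

import requests

# Hypothetical client call for option 1; the URL and fields are illustrative.
body = {
    "workflow_name": "flow",
    "callback": {
        "events": ["on-task-complete", "on-execution-complete"],
        "url": "http://bla.com/callbacks/mistral",
        "method": "POST",
        "headers": {"X-Auth-Token": "..."},
    },
}
requests.post("http://mistral.example.com:8989/v2/executions", json=body)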


Option 2: webhook endpoint
Mistral exposes /webhook controller 
Client creates a webhook and receives events for all executions for selected 
workflows. 
{  
  name: web,
  active: true,
  events: [  ]
  “workflows”: [wf1, wf2] 
  url: http://example.com/webhook;,  
}

Opinions: 

DZ: my opinion: 
Option 1, for it is simple and convenient for a client. 
It seems like an optimal solution for the “tracking executions and tasks” use case.

Option 2 is overkill: it makes it harder for a client (post workflow, post 
webhook, post execution, delete workflow, delete webhook) and introduces lifecycle 
management problems (e.g., workflow deleted - webhook orphaned).

I vaguely recall someone from Heat compared these options and regretted one of 
them for security reasons, but can’t remember details.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][nova] VM restarting on host failure in convergence

2014-09-17 Thread Russell Bryant
On 09/17/2014 09:03 AM, Jastrzebski, Michal wrote:
 In short, what we'll need from nova is to have 100% reliable
 host-health monitor and equally reliable rebuild/evacuate mechanism
 with fencing and scheduler. In heat we need a scalable and reliable
 event listener and engine to decide which action to perform in given
 situation.

Unfortunately, I don't think Nova can provide this alone.  Nova only
knows about whether or not the nova-compute daemon is currently
communicating with the rest of the system.  Even if the nova-compute
daemon drops out, the compute node may still be running all instances
just fine.  We certainly don't want to impact those running workloads
unless absolutely necessary.

I understand that you're suggesting that we enhance Nova to be able to
provide that level of knowledge and control.  I actually don't think
Nova should have this knowledge of its underlying infrastructure.

I would put the host monitoring infrastructure (to determine if a host
is down) and fencing capability as out of scope for Nova and as a part
of the supporting infrastructure.  Assuming those pieces can properly
detect that a host is down and fence it, then all that's needed from
Nova is the evacuate capability, which is already there.  There may be
some enhancements that could be done to it, but surely it's quite close.

There's also the part where a notification needs to go out saying that
the instance has failed.  Some thing (which could be Heat in the case of
this proposal) can react to that, either directly or via ceilometer, for
example.  There is an API today to hard reset the state of an instance
to ERROR.  After a host is fenced, you could use this API to mark all
instances on that host as dead.  I'm not sure if there's an easy way to
do that for all instances on a host today.  That's likely an enhancement
we could make to python-novaclient, similar to the "evacuate all
instances on a host" enhancement that was done in novaclient.
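
To make that concrete, a rough sketch of the "mark everything on a fenced host 
as failed and evacuate it" step with today's python-novaclient; the exact 
method signatures are from memory, so treat them as assumptions to verify 
against the client version you run:

from novaclient import client

# Classic novaclient constructor arguments; adjust for your client version.
nova = client.Client('2', 'admin', 'secret', 'admin',
                     'http://keystone.example.com:5000/v2.0')

failed_host = 'compute-03.example.com'   # already fenced by the external monitor
for server in nova.servers.list(search_opts={'host': failed_host,
                                             'all_tenants': 1}):
    nova.servers.reset_state(server, 'error')            # the "hard reset to ERROR" API
    nova.servers.evacuate(server, host='compute-04.example.com')
    # shared-storage/password arguments omitted; some client versions can also
    # let the scheduler pick the target host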

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Expand resource name allowed characters

2014-09-17 Thread Day, Phil
 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: 12 September 2014 19:37
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Expand resource name allowed
 characters
 
 Had to laugh about the PILE OF POO character :) Comments inline...

Can we get support for that in gerrit ?
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] callback on workflow completion

2014-09-17 Thread Renat Akhmerov
Ok, here is what I think...

I totally support the first option for its easiness in terms of understanding 
how it all should work (no need to figure out if some additional objects must 
be deleted if a workflow has been removed, etc.). We actually have two BPs 
[0] and [1] where the idea was similar to your option #2. But I admit that 
they’ve been around for a while and I think they are obsolete (even though they 
eventually have the same goal of notifying the outside world about execution/task 
events).

The only thing I would like to suggest is how we define a callback (keeping in 
mind it should be a valid JSON in reality):

POST /executions
workflow_name=flow
callbacks=[{
events: [[on-task-complete, on-execution-complete]
action: std.http url=‘http://foo.bar.com' method=POST headers=‘{}' ##
},
   {# another callback}
   ]

and/or

POST /executions
workflow_name=flow
callbacks=[{
events: [[on-task-complete, on-execution-complete]
action: std.http
parameters: {
url: http://foo.bar.com,
method: POST
headers: {
# Whatever headers we need.
} 
}
},
   {# another callback} 
   ]

In other words, we can trivially generalise this so that:
- we can use not only webhooks but any action accessible in Mistral (e.g. it may 
be another transport)
- it is consistent with our DSL

We might even want to allow “workflow” as well as “action” but not sure if we 
need to get that far for now.

Thoughts?

[0] https://blueprints.launchpad.net/mistral/+spec/mistral-event-listeners-amqp
[1] https://blueprints.launchpad.net/mistral/+spec/mistral-event-listeners-http

Renat Akhmerov
@ Mirantis Inc.



On 17 Sep 2014, at 10:36, Dmitri Zimine dzim...@stackstorm.com wrote:

 Use case: 
 The client software fires the workflow execution and needs to be know when 
 the workflow is complete. There is no good pool strategy as workflow can take 
 arbitrary time from ms to days. Callback notification is needed. 
 
 Solution is a webhook
 
 Option 1: pass callback URL as part of starting workflow execution:
 POST /executions
 workflow_name=flow
 callback= {
 events: [[on-task-complete, on-execution-complete]
 url: http://bla.com
 method:POST
 headers: {}
 … other stuff to form proper HTTP call, like API tokens, etc ...
 }
 …..
 
 
 Option 2: webhook endpoint
 Mistral exposes /webhook controller 
 Client creates a webhook and receives events for all executions for selected 
 workflows. 
 {  
   name: web,
   active: true,
   events: [  ]
   “workflows”: [wf1, wf2] 
   url: http://example.com/webhook;,  
 }
 
 Opinions: 
 
 DZ: my opinion: 
 Option 1 for it is simple and convenient for a client. 
 It seems like an optimal solution for “tracking executions and tasks” use 
 case.
 
 Option 2 is an overkill: makes it harder for a client (post workflow, post 
 webhook, post execution, delete workflow, delete webhook) introduces 
 lifecycle management problems (e.g., workflow deleted - webhook orphaned).
 
 I vaguely recall someone from Heat compared these options and regretted one 
 of them for security reasons, but can’t remember details.
 
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Clint Byrum
Excerpts from Davanum Srinivas's message of 2014-09-17 10:15:29 -0700:
 I was trying request-ifying oslo.vmware and ran into this as well:
 https://review.openstack.org/#/c/121956/
 
 And we don't seem to have urllib3 in global-requirements either.
 Should we do that first?

Honestly, after reading this:

https://github.com/kennethreitz/requests/pull/1812

I think we might want to consider requests a poor option. Its author
clearly doesn't understand the role a _library_ plays in software
development and considers requests an application, not a library.

For instance, why is requests exposing internal implementation details
at all?  It should be wrapping any exceptions or objects to avoid
forcing users to make this choice at all.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][nova] VM restarting on host failure in convergence

2014-09-17 Thread Clint Byrum
Excerpts from Jastrzebski, Michal's message of 2014-09-17 06:03:06 -0700:
 All,
 
 Currently OpenStack does not have a built-in HA mechanism for tenant
 instances which could restore virtual machines in case of a host
 failure. Openstack assumes every app is designed for failure and can
 handle instance failure and will self-remediate, but that is rarely
 the case for the very large Enterprise application ecosystem.
 Many existing enterprise applications are stateful, and assume that
 the physical infrastructure is always on.
 

There is a fundamental debate that OpenStack's vendors need to work out
here. Existing applications are well served by existing virtualization
platforms. Turning OpenStack into a work-alike to oVirt is not the end
goal here. It's a happy accident that traditional apps can sometimes be
bent onto the cloud without much modification.

The thing that clouds do is they give development teams a _limited_
infrastructure that lets IT do what they're good at (keep the
infrastructure up) and lets development teams do what they're good at (run
their app). By putting HA into the _app_, and not the _infrastructure_,
the dev teams get agility and scalability. No more waiting weeks for
allocating specialized servers with hardware fencing setups and fibre
channel controllers to house a shared disk system so the super reliable
virtualization can hide HA from the user.

Spin up vms. Spin up volumes.  Run some replication between regions,
and be resilient.

So, as long as it is understood that whatever is being proposed should
be an application centric feature, and not an infrastructure centric
feature, this argument remains interesting in the cloud context.
Otherwise, it is just an invitation for OpenStack to open up direct
competition with behemoths like vCenter.

 Even the OpenStack controller services themselves do not gracefully
 handle failure.
 

Which ones?

 When these applications were virtualized, they were virtualized on
 platforms that enabled very high SLAs for each virtual machine,
 allowing the application to not be rewritten as the IT team moved them
 from physical to virtual. Now while these apps cannot benefit from
 methods like automatic scaleout, the application owners will greatly
 benefit from the self-service capabilities they will receive as they
 utilize the OpenStack control plane.
 

These apps were virtualized for IT's benefit. But the application authors
and users are now stuck in high-cost virtualization. The cloud is best
utilized when IT can control that cost and shift the burden of uptime
to the users by offering them more overall capacity and flexibility with
the caveat that the individual resources will not be as reliable.

So what I'm most interested in is helping authors change their apps to
be resilient on their own, not in putting more burden on IT.

 I'd like to suggest to expand heat convergence mechanism to enable
 self-remediation of virtual machines and other heat resources.
 

Convergence is still nascent. I don't know if I'd pile on to what might
take another 12 - 18 months to get done anyway. We're just now figuring
out how to get started where we thought we might already be 1/3 of the
way through. Just something to consider.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Adding hp1 back running tripleo CI

2014-09-17 Thread Clint Byrum
Excerpts from Derek Higgins's message of 2014-09-17 06:53:25 -0700:
 On 15/09/14 22:37, Gregory Haynes wrote:
  This is a total shot in the dark, but a couple of us ran into issues
  with the Ubuntu Trusty kernel (I know I hit it on HP hardware) that was
   causing severely degraded performance for TripleO. This was fixed with a
  recently released kernel in Trusty... maybe you could be running into
  this?
 
 thanks Greg,
 
 To try this out, I've redeployed the new testenv image and ran 35
 overcloud jobs on it (32 passed); the average time for these was 130
 minutes, so unfortunately no major difference.
 
 The old kernel was
 3.13.0-33-generic #58-Ubuntu SMP Tue Jul 29 16:45:05 UTC 2014 x86_64

This kernel definitely had the kvm bugs Greg and I experienced in the
past

 the new one is
 3.13.0-35-generic #62-Ubuntu SMP Fri Aug 15 01:58:42 UTC 2014 x86_64
 

Darn. This one does not. Is it possible the hardware is just less
powerful?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Oslo] Moving Brick out of Cinder

2014-09-17 Thread Ivan Kolodyazhny
Thanks a lot for the comments!

As discussed in IRC (#openstack-cinder), moving Brick to Oslo or Stackforge
isn't the best solution.

We're moving on to making a Cinder Agent (or Cinder Storage Agent) [1] based on
the Brick code instead of making Brick a separate Python library used in
Cinder and Nova.

I'll deprecate my oslo.storage GitHub repo and rename it so as not to confuse
anybody in the future.

[1] https://etherpad.openstack.org/p/cinder-storage-agent

Regards,
Ivan Kolodyazhny,
Web Developer,
http://blog.e0ne.info/,
http://notacash.com/,
http://kharkivpy.org.ua/

On Wed, Sep 17, 2014 at 8:16 PM, Davanum Srinivas dava...@gmail.com wrote:

 +1 to Doug's comments.

 On Wed, Sep 17, 2014 at 1:02 PM, Doug Hellmann d...@doughellmann.com
 wrote:
 
  On Sep 16, 2014, at 6:02 PM, Flavio Percoco fla...@redhat.com wrote:
 
  On 09/16/2014 11:55 PM, Ben Nemec wrote:
  Based on my reading of the wiki page about this it sounds like it
 should
  be a sub-project of the Storage program.  While it is targeted for use
  by multiple projects, it's pretty specific to interacting with Cinder,
  right?  If so, it seems like Oslo wouldn't be a good fit.  We'd just
 end
  up adding all of cinder-core to the project anyway. :-)
 
  +1 I think the same arguments and conclusions we had on glance-store
  make sense here. I'd probably go with having it under the Block Storage
  program.
 
  I agree. I’m sure we could find some Oslo contributors to give you
 advice about APIs if you like, but I don’t think the library needs to be
 part of Oslo to be reusable.
 
  Doug
 
 
  Flavio
 
 
  -Ben
 
  On 09/16/2014 12:49 PM, Ivan Kolodyazhny wrote:
  Hi Stackers!
 
  I'm working on moving Brick out of Cinder for K release.
 
  There're a lot of open questions for now:
 
- Should we move it to oslo or somewhere on stackforge?
- Better architecture of it to fit all Cinder and Nova requirements
- etc.
 
  Before starting discussion, I've created some proof-of-concept to try
 it. I
  moved Brick to some lib named oslo.storage for testing only. It's
 only one
  of the possible solution to start work on it.
 
  All sources are aviable on GitHub [1], [2].
 
  [1] - I'm not sure that this place and name is good for it, it's just
 a PoC.
 
  [1] https://github.com/e0ne/oslo.storage
  [2] https://github.com/e0ne/cinder/tree/brick - some tests still
 failed.
 
  Regards,
  Ivan Kolodyazhny
 
  On Mon, Sep 8, 2014 at 4:35 PM, Ivan Kolodyazhny e...@e0ne.info
 wrote:
 
  Hi All!
 
  I would to start moving Cinder Brick [1] to oslo as was described on
  Cinder mid-cycle meetup [2]. Unfortunately I missed meetup so I want
 be
  sure that nobody started it and we are on the same page.
 
  According to the Juno 3 release, there was not enough time to
 discuss [3]
  on the latest Cinder weekly meeting and I would like to get some
 feedback
  from the all OpenStack community, so I propose to start this
 discussion on
  mailing list for all projects.
 
  I anybody didn't started it and it is useful at least for both Nova
 and
  Cinder I would to start this work according oslo guidelines [4] and
  creating needed blueprints to make it finished until Kilo 1 is over.
 
 
 
  [1] https://wiki.openstack.org/wiki/CinderBrick
  [2] https://etherpad.openstack.org/p/cinder-meetup-summer-2014
  [3]
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/044608.html
  [4] https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary
 
  Regards,
  Ivan Kolodyazhny.
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  --
  @flaper87
  Flavio Percoco
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Mike Bayer

On Sep 17, 2014, at 2:46 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Davanum Srinivas's message of 2014-09-17 10:15:29 -0700:
 I was trying request-ifying oslo.vmware and ran into this as well:
 https://review.openstack.org/#/c/121956/
 
 And we don't seem to have urllib3 in global-requirements either.
 Should we do that first?
 
 Honestly, after reading this:
 
 https://github.com/kennethreitz/requests/pull/1812
 
 I think we might want to consider requests a poor option. Its author
 clearly doesn't understand the role a _library_ plays in software
 development and considers requests an application, not a library.
 
 For instance, why is requests exposing internal implementation details
 at all?  It should be wrapping any exceptions or objects to avoid
 forcing users to make this choice at all.

That link is horrifying.  I’m really surprised Requests does this, and that 
nobody has complained very loudly about it.  It’s wrong on every level, not the 
least of which is the huge security implications.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-17 Thread Joe Gordon
On Tue, Sep 16, 2014 at 8:02 AM, Kurt Griffiths 
kurt.griffi...@rackspace.com wrote:

  Right, graphing those sorts of variables has always been part of our
 test plan. What I’ve done so far was just some pilot tests, and I realize
 now that I wasn’t very clear on that point. I wanted to get a rough idea of
 where the Redis driver sat in case there were any obvious bug fixes that
 needed to be taken care of before performing more extensive testing. As it
 turns out, I did find one bug that has since been fixed.

  Regarding latency, saying that it is “not important” is an exaggeration;
 it is definitely important, just not the* only *thing that is important.
 I have spoken with a lot of prospective Zaqar users since the inception of
 the project, and one of the common threads was that latency needed to be
 reasonable. For the use cases where they see Zaqar delivering a lot of
 value, requests don't need to be as fast as, say, ZMQ, but they do need
 something that isn’t horribly *slow,* either. They also want HTTP,
 multi-tenant, auth, durability, etc. The goal is to find a reasonable
 amount of latency given our constraints and also, obviously, be able to
 deliver all that at scale.


Can you further quantify what you would consider too slow? Is 100ms too
slow?



  In any case, I’ve continue working through the test plan and will be
 publishing further test results shortly.

   graph latency versus number of concurrent active tenants

  By tenants do you mean in the sense of OpenStack Tenants/Project-ID's or
 in  the sense of “clients/workers”? For the latter case, the pilot tests
 I’ve done so far used multiple clients (though not graphed), but in the
 former case only one “project” was used.


multiple  Tenant/Project-IDs



   From: Joe Gordon joe.gord...@gmail.com
 Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
 Date: Friday, September 12, 2014 at 1:45 PM
 To: OpenStack Dev openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

  If zaqar is like amazon SQS, then the latency for a single message and
 the throughput for a single tenant is not important. I wouldn't expect
 anyone who has latency sensitive work loads or needs massive throughput to
 use zaqar, as these people wouldn't use SQS either. The consistency of the
 latency (shouldn't change under load) and zaqar's ability to scale
 horizontally matter much more. What would be great is to see some other
 things benchmarked instead:

  * graph latency versus number of concurrent active tenants
 * graph latency versus message size
 * How throughput scales as you scale up the number of assorted zaqar
 components. If one of the benefits of zaqar is its horizontal scalability,
 let's see it.
  * How does this change with message batching?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] DVR Tunnel Design Question

2014-09-17 Thread Armando M.
VLAN is on the radar, vxlan/gre was done to start with.

I believe Vivek mentioned the rationale in some other thread. The gist
of it below:

In the current architecture, we use a unique DVR MAC per compute node
to forward DVR Routed traffic directly to destination compute node.
The DVR routed traffic from the source compute node will carry
'destination VMs underlay VLAN' in the frame, but the Source Mac in
that same frame will be the DVR Unique MAC. So, same DVR Unique Mac is
used for potentially a number of overlay network VMs that would exist
on that same source compute node.

The underlay infrastructure switches will see the same DVR Unique MAC
being associated with different VLANs on incoming frames, and so this
would result in VLAN Thrashing on the switches in the physical cloud
infrastructure. Since tunneling protocols carry the entire DVR routed
inner frames as tunnel payloads, there is no thrashing effect on
underlay switches.

There will still be thrashing effect on endpoints on CNs themselves,
when they try to learn that association between inner frame source MAC
and the TEP port on which the tunneled frame is received. But that we
have addressed in L2 Agent by having a 'DVR Learning Blocker' table,
which ensures that learning for DVR routed packets alone is
side-stepped.

As a result, VLAN was not promoted as a supported underlay for the
initial DVR architecture.

Cheers,
Armando

On 16 September 2014 20:35, 龚永生 gong...@unitedstack.com wrote:
 I think the VLAN should also be supported later.  The tunnel should not be
 the prerequisite for the DVR feature.


 -- Original --
 From:  Steve Wormleyopenst...@wormley.com;
 Date:  Wed, Sep 17, 2014 10:29 AM
 To:  openstack-devopenstack-dev@lists.openstack.org;
 Subject:  [openstack-dev] [neutron] DVR Tunnel Design Question

 In our environment using VXLAN/GRE would make it difficult to keep some of
 the features we currently offer our customers. So for a while now I've been
 looking at the DVR code, blueprints and Google drive docs and other than it
 being the way the code was written I can't find anything indicating why a
 Tunnel/Overlay network is required for DVR or what problem it was solving.

 Basically I'm just trying to see if I missed anything as I look into doing a
 VLAN/OVS implementation.

 Thanks,
 -Steve Wormley


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Ian Cordasco
On 9/17/14, 1:46 PM, Clint Byrum cl...@fewbar.com wrote:

Excerpts from Davanum Srinivas's message of 2014-09-17 10:15:29 -0700:
 I was trying request-ifying oslo.vmware and ran into this as well:
 https://review.openstack.org/#/c/121956/
 
 And we don't seem to have urllib3 in global-requirements either.
 Should we do that first?

Honestly, after reading this:

https://github.com/kennethreitz/requests/pull/1812

I think we might want to consider requests a poor option. Its author
clearly doesn't understand the role a _library_ plays in software
development and considers requests an application, not a library.

Yes, that is Kenneth’s opinion. That is not the opinion of the core
developers, though. We see it as a library, but this is something we aren’t
going to change any time soon.

For instance, why is requests exposing internal implementation details
at all?

Where exactly are we exposing internal implementation details? A normal
user (even advanced users) can use requests without ever digging into
requests.packages. What implementation details are we exposing and where?

It should be wrapping any exceptions or objects to avoid
forcing users to make this choice at all.

We do. Occasionally (like in 2.4.0) urllib3 adds an exception that we
fail to notice and it slips through. We released 2.4.1 a couple of days
later with the fix for that. Pretty much every error we’ve seen or know
about is caught and rewrapped as a requests exception. I’m not sure what
you’re arguing here, unless of course you have not used requests.

That aside, I’ve been mulling over how effectively the clients use
requests. I haven’t investigated all of them, but many seem to reach into
implementation details on their own. If I remember nova client has
something it has commented as “connection pooling” while requests and
urllib3 do that automatically. I haven’t started to investigate exactly
why they do this. Likewise, glance client has custom certificate
verification in glanceclient.common.https. Why? I’m not exactly certain
yet. It seems for the most part from what little I’ve seen that requests
is too high-level a library for OpenStack’s needs at best, and actively
obscures details OpenStack developers need (or don’t realize requests
provides in most cases).
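
To be concrete about what I mean by the public surface, compare the two styles
below; the glance URL and the exact urllib3 submodule are placeholders for
illustration only:

# Reaching into the vendored copy, as glanceclient does today; this couples
# the client to requests' internal layout and to whatever urllib3 happens to
# be bundled (the submodule picked here is just an example).
from requests.packages.urllib3 import poolmanager  # noqa

# Staying on the public surface: requests already pools connections per host
# and rewraps lower-level errors as requests exceptions.
import requests

session = requests.Session()
try:
    session.get('https://glance.example.com/v2/images', timeout=5)
except requests.exceptions.RequestException as exc:
    print('request failed: %s' % exc)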

Circling back to the issue of vendoring though: it’s a conscious decision
to do this, and in the last two years there have been 2 CVEs reported for
requests. There have been none for urllib3 and none for chardet. (Frankly
I don’t think either urllib3 or chardet have had any CVEs reported against
them, but let’s ignore that for now.) While security is typically the
chief concern with vendoring, none of the libraries we use have had
security issues rendering it a moot point in my opinion. The benefits of
vendoring for us as a team have been numerous and we will likely continue
to do it until it stops benefiting us and our users.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-17 Thread Gordon Sim

On 09/16/2014 08:55 AM, Flavio Percoco wrote:

pub/sub doesn't necessarily guarantee message delivery; it really
depends on the implementation.


As I understand it, the model for pub-sub in Zaqar is to have multiple 
subscribers polling the queue with gets, and have the messages removed 
from the queue only when they expire. Is that right?


If the ttl of the messages is long enough, a subscriber can start 
getting the queue from where they left off (if they have or can recover 
their last used marker) or from the head of the queue.


So although not acknowledged, subscribers can retry on failover 
providing they do so before the message expires.



That said, there are ways to guarantee
that depending on the method used. For example, if the subscriber is a
webhook, we can use the response status code to ack the message. if it
has a persistent connection like websocket or even (long|short)-poll an
ack may be needed.


In the pub-sub case, to remove a message based on acks you need to wait 
until all known subscribers have acked it. With the current model there 
is no explicit concept of subscriber (nor of ack in the non-competing 
consumer case). Without changing that I don't think you can use the 
response of a webhook anyway (unless of course there are no get-style 
subscribers on the queue).
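
To make sure I have the model right, here is a rough sketch of the subscriber
loop I am describing; the URL, headers and response fields are assumptions in
the spirit of the v1 HTTP API rather than exact details:

import time

import requests

BASE = 'http://localhost:8888/v1/queues/events/messages'   # assumed endpoint
HEADERS = {'Client-ID': 'subscriber-42'}

def handle(body):
    # Placeholder for real work; a careful consumer would also dedupe here.
    print('got message: %s' % body)

marker = None   # a real subscriber would reload its last marker from storage

while True:
    params = {'limit': 10}
    if marker:
        params['marker'] = marker
    resp = requests.get(BASE, params=params, headers=HEADERS)
    if resp.status_code == 204 or not resp.content:
        time.sleep(1)                      # nothing new yet; poll again
        continue
    payload = resp.json()
    for msg in payload.get('messages', []):
        handle(msg.get('body'))
    # Persist where we got to, so a restarted subscriber can resume from here
    # as long as the messages' TTL has not expired. The field name is assumed;
    # the real API may hand back the next marker via a paging link instead.
    marker = payload.get('marker', marker)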


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit planning

2014-09-17 Thread Maish Saidel-Keesing
This looks great - but I am afraid that something might be missing.

As part of the Design summit in Atlanta there was an Ops Meetup track.
[1] I do not see where this fits into the current planning process that
has been posted.
I would like to assume that part of the purpose of the summit is to also
collect feedback from Enterprise Operators and also from smaller ones as
well.

If that is so then I would kindly request that there be some other way
of allowing that part of the community to voice their concerns, and
provide feedback.

Perhaps a track that is not only Operator centric - but also an End-user
focused one as well (mixing the two would be fine as well)

Most of them are not on the openstack-dev list and they do not
participate in the IRC team meetings, simply because they have no idea
that these exist or maybe do not feel comfortable there. So they will
not have any exposure to the process.

My 0.02 Shekels.

[1] - http://junodesignsummit.sched.org/overview/type/ops+meetup



On 12/09/2014 18:42, Thierry Carrez wrote:
 Eoghan Glynn wrote:
 If you think this is wrong and think the design summit suggestion
 website is a better way to do it, let me know why! If some programs
 really can't stand the 'etherpad/IRC' approach I'll see how we can spin
 up a limited instance.
 +1 on a collaborative scheduling process within each project.

 That's pretty much what we did within the ceilometer core group for
 the Juno summit, except that we used a googledocs spreadsheet instead
 of an etherpad.

 So I don't think we need to necessarily mandate usage of an etherpad,
 just let every project decide whatever shared document format they
 want to use.

 FTR the benefit of a googledocs spreadsheet in my view would include
 the ease of totalling votes & session slots, color-coding candidate
 sessions for merging etc.
 Good point. I've replaced the wording in the wiki page -- just use
 whatever suits you best, as long as it's a public document and you can
 link to it.


-- 
Maish Saidel-Keesing


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Dean Troyer
Interestingly enough, the distros are doing exactly what they don't want us
to do, ie, rebuilding things to use 'their' tested version of dependencies
rather than the included one...

On Wed, Sep 17, 2014 at 2:42 PM, Ian Cordasco ian.corda...@rackspace.com
wrote:

 That aside, I’ve been mulling over how effectively the clients use
 requests. I haven’t investigated all of them, but many seem to reach into
 implementation details on their own. If I remember nova client has
 something it has commented as “connection pooling” while requests and
 urllib3 do that automatically. I haven’t started to investigate exactly
 why they do this. Likewise, glance client has custom certificate
 verification in glanceclient.common.https. Why? I’m not exactly certain
 yet. It seems for the most part from what little I’ve seen that requests
 is too high-level a library for OpenStack’s needs at best, and actively
 obscures details OpenStack developers need (or don’t realize requests
 provides in most cases).


Part of that is my doing, the initial conversion from httplib2 to requests
was intended to be as simple as possible in order to get the benefits of
proper certificate verification.  glanceclient never got this (maybe until
recently?) because it uses OpenSSL.  The come-back-and-clean-things-up work
was intended to be Alessio's apiclient stuff that I think is still in
oslo-incubator.  That was never finished for a variety of reasons.  Since
that time you're seeing the results of other fixes (connection-pooling
being one) that look at the existing code and not at the proper re-factor
to push that stuff into requests.

The real fix for the clients is to start over and re-build them on top of
(in this case) requests to utilize all that it brings.  This is already
happening...

FWIW I totally understand the desire for vendoring...I want to do the same
thing with OSC because of the number of times we've been broken by
requests, prettytable and others changing out from under us.  It is easy
enough for me to fix my box, but a cloud user that just wants to get his VMs
running isn't going to be happy, especially on Windows.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit planning

2014-09-17 Thread Anita Kuno
On 09/17/2014 04:01 PM, Maish Saidel-Keesing wrote:
 This looks great - but I am afraid that something might be missing.
 
 As part of the Design summit in Atlanta there was an Ops Meetup track.
 [1] I do not see where this fits into the current planning process that
 has been posted.
 I would like to assume that part of the purpose of the summit is to also
 collect feedback from Enterprise Operators and also from smaller ones as
 well.
 
 If that is so then I would kindly request that there be some other way
 of allowing that part of the community to voice their concerns, and
 provide feedback.
 
 Perhaps a track that is not only Operator centric - but also an End-user
 focused one as well (mixing the two would be fine as well)
 
 Most of them are not on the openstack-dev list and they do not
 participate in the IRC team meetings, simply because they have no idea
 that these exist or maybe do not feel comfortable there. So they will
 not have any exposure to the process.
 
 My 0.02 Shekels.
 
 [1] - http://junodesignsummit.sched.org/overview/type/ops+meetup
 
Hi Maish:

This thread is about the Design Summit, the Operators Track is a
different thing.

In Atlanta the Operators Track was organized by Tom Fifield and I have
every confidence he is working hard to ensure the operators have a voice
in Paris and that those interested can participate.

Last summit the Operators Track ran on the Monday and the Friday, giving
folks who usually spend most of their time at the Design Summit a chance to
participate and hear the operators' voices. I know I did and I found it
highly educational.

Thanks,
Anita.

 
 
 On 12/09/2014 18:42, Thierry Carrez wrote:
 Eoghan Glynn wrote:
 If you think this is wrong and think the design summit suggestion
 website is a better way to do it, let me know why! If some programs
 really can't stand the 'etherpad/IRC' approach I'll see how we can spin
 up a limited instance.
 +1 on a collaborative scheduling process within each project.

 That's pretty much what we did within the ceilometer core group for
 the Juno summit, except that we used a googledocs spreadsheet instead
 of an etherpad.

 So I don't think we need to necessarily mandate usage of an etherpad,
 just let every project decide whatever shared document format they
 want to use.

 FTR the benefit of a googledocs spreadsheet in my view would include
  the ease of totalling votes & session slots, color-coding candidate
 sessions for merging etc.
 Good point. I've replaced the wording in the wiki page -- just use
 whatever suits you best, as long as it's a public document and you can
 link to it.

 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Mike Bayer

On Sep 17, 2014, at 3:42 PM, Ian Cordasco ian.corda...@rackspace.com wrote:

 
 Circling back to the issue of vendoring though: it’s a conscious decision
 to do this, and in the last two years there have been 2 CVEs reported for
 requests. There have been none for urllib3 and none for chardet. (Frankly
 I don’t think either urllib3 or chardet have had any CVEs reported against
 them, but let’s ignore that for now.) While security is typically the
 chief concern with vendoring, none of the libraries we use have had
 security issues rendering it a moot point in my opinion.

That’s just amazing.  Requests actually deals with security features 
*directly*: certificates, TLS connections, everything. The attitude that 
“well, there’ve been hardly any security issues in a *whole two years*, so 
I’m not so concerned” is really not one that is acceptable from serious 
development teams.

Wouldn’t it be a problem for *you* if Requests itself were vendored?   You fix 
a major security hole, but your consuming projects don’t respond, their 
developers are on vacation, sorry, so that hole just keeps right on going.   
People make sure to upgrade their Requests libraries locally, but for all those 
poor saps who have *no idea* they have widely used apps that are bundling it 
silently, they remain totally open to vulnerabilities and the black hats have 
disneyland at their disposal.   The blame keeps going right to you as well.  Is 
that really how things should be done?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Ian Cordasco
On 9/17/14, 3:11 PM, Mike Bayer mba...@redhat.com wrote:


On Sep 17, 2014, at 3:42 PM, Ian Cordasco ian.corda...@rackspace.com
wrote:

 
 Circling back to the issue of vendoring though: it’s a conscious
decision
 to do this, and in the last two years there have been 2 CVEs reported
for
 requests. There have been none for urllib3 and none for chardet.
(Frankly
 I don’t think either urllib3 or chardet have had any CVEs reported
against
 them, but let’s ignore that for now.) While security is typically the
 chief concern with vendoring, none of the libraries we use have had
 security issues rendering it a moot point in my opinion.

That’s just amazing.  Requests actually deals with security features
*directly*: certificates, TLS connections, everything. The attitude
that “well, there’ve been hardly any security issues in a *whole
two years*, so I’m not so concerned” is really not one that is acceptable
from serious development teams.

I said 2 years, because I wasn’t involved much before that, but looking at
the histories of the involved projects there aren’t mentions of CVEs
before then either.

Wouldn’t it be a problem for *you* if Requests itself were vendored?
You fix a major security hole, but your consuming projects don’t respond,
their developers are on vacation, sorry, so that hole just keeps right on
going. 

Isn’t the whole point of distributing a library to let people use it as
they see fit? If requests fixes it immediately and releases it, it’s not
our responsibility to search out every piece of software to fix it for
them. We took all of the appropriate measures to document the two CVEs
that were reported earlier this year. Software that vendored requests
and is still vulnerable to those two exposures is responsible for
its own updates. Further, let’s consider this potential situation:

Project X pins a version of requests. Alice doesn’t know anything about
requests and does pip install X. Until Alice takes a more active role in
the development of Project X and looks into requests, she will never know
she’s installed software that has exposures in it. In all likelihood, any
person who just uses something that pins requests will never check for it.
If they just use pip and Project X never updates, it’s not our
responsibility for anything that happens to the user.

People make sure to upgrade their Requests libraries locally, but for all
those poor saps who have *no idea* they have widely used apps that are
bundling it silently, they remain totally open to vulnerabilities and the
black hats have disneyland at their disposal.

I think more applications bundle it than you realize. You’re likely using
one daily that does it.

The blame keeps going right to you as well.  Is that really how things
should be done?

And yeah, we’ll continue to take the blame for the mistake that was made
for those two exposures. As for “Is that how things should be done?”
that’s not for me to answer. More than enough projects do it and do it out
of necessity. The reality is that by vendoring its dependencies, requests
allows its users more flexibility than other projects. Even if we didn’t,
users would still likely find ways to vendor requests and its dependencies
for their own use and in doing so would have to modify requests to rewrite
the import statements to point at those vendored dependencies. The fact is
that vendoring is a real solution and it’s deployed more often than you
likely realize. It benefits our project and it benefits our users.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-17 Thread Joe Gordon
Hi All,

My understanding of Zaqar is that it's like SQS. SQS uses distributed
queues, which have a few unusual properties [0]:
Message Order

Amazon SQS makes a best effort to preserve order in messages, but due to
the distributed nature of the queue, we cannot guarantee you will receive
messages in the exact order you sent them. If your system requires that
order be preserved, we recommend you place sequencing information in each
message so you can reorder the messages upon receipt.
At-Least-Once Delivery

Amazon SQS stores copies of your messages on multiple servers for
redundancy and high availability. On rare occasions, one of the servers
storing a copy of a message might be unavailable when you receive or delete
the message. If that occurs, the copy of the message will not be deleted on
that unavailable server, and you might get that message copy again when you
receive messages. Because of this, you must design your application to be
idempotent (i.e., it must not be adversely affected if it processes the
same message more than once).
Message Sample

The behavior of retrieving messages from the queue depends whether you are
using short (standard) polling, the default behavior, or long polling. For
more information about long polling, see Amazon SQS Long Polling
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-long-polling.html
.

With short polling, when you retrieve messages from the queue, Amazon SQS
samples a subset of the servers (based on a weighted random distribution)
and returns messages from just those servers. This means that a particular
receive request might not return all your messages. Or, if you have a small
number of messages in your queue (less than 1000), it means a particular
request might not return any of your messages, whereas a subsequent request
will. If you keep retrieving from your queues, Amazon SQS will sample all
of the servers, and you will receive all of your messages.

The following figure shows short polling behavior of messages being
returned after one of your system components makes a receive request.
Amazon SQS samples several of the servers (in gray) and returns the
messages from those servers (Message A, C, D, and B). Message E is not
returned to this particular request, but it would be returned to a
subsequent request.



Presumably SQS has these properties because they make the system scalable.
If so, does Zaqar have the same properties (not just making these same
guarantees in the API, but actually having these properties in the
backends)? And if not, why? I looked on the wiki [1] for information on
this, but couldn't find anything.





[0]
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/DistributedQueues.html
[1] https://wiki.openstack.org/wiki/Zaqar
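
(As an aside, the at-least-once property above is normally absorbed by making
the consumer idempotent; a generic sketch, with an invented message shape and
handler, looks something like this:)

# Processing is keyed on a message id, so a redelivered copy becomes a no-op.
seen_ids = set()   # in production this would live in durable shared storage

def do_real_work(body):
    print('handling %s' % body)   # stand-in for the real side effect

def process(message):
    msg_id = message['id']
    if msg_id in seen_ids:
        return                       # duplicate delivery; already handled
    do_real_work(message['body'])
    seen_ids.add(msg_id)             # record only after the work succeeded

# The same message delivered twice is only processed once.
msg = {'id': 'abc123', 'body': {'action': 'resize', 'server': '42'}}
process(msg)
process(msg)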
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit planning

2014-09-17 Thread Maish Saidel-Keesing

On 17/09/2014 23:12, Anita Kuno wrote:
 On 09/17/2014 04:01 PM, Maish Saidel-Keesing wrote:
 This looks great - but I am afraid that something might be missing.

 As part of the Design summit in Atlanta there was an Ops Meetup track.
 [1] I do not see where this fits into the current planning process that
 has been posted.
 I would like to assume that part of the purpose of the summit is to also
 collect feedback from Enterprise Operators and also from smaller ones as
 well.

 If that is so then I would kindly request that there be some other way
 of allowing that part of the community to voice their concerns, and
 provide feedback.

 Perhaps a track that is not only Operator centric - but also an End-user
 focused one as well (mixing the two would be fine as well)

 Most of them are not on the openstack-dev list and they do not
 participate in the IRC team meetings, simply because they have no idea
 that these exist or maybe do not feel comfortable there. So they will
 not have any exposure to the process.

 My 0.02 Shekels.

 [1] - http://junodesignsummit.sched.org/overview/type/ops+meetup

 Hi Maish:

 This thread is about the Design Summit, the Operators Track is a
 different thing.

 In Atlanta the Operators Track was organized by Tom Fifield and I have
 every confidence he is working hard to ensure the operators have a voice
 in Paris and that those interested can participate.

 Last summit the Operators Track ran on the Monday and the Friday, giving
 folks who usually spend most of their time at the Design Summit a chance to
 participate and hear the operators' voices. I know I did and I found it
 highly educational.

 Thanks,
 Anita.
Thanks for the clarification Anita :)
Maish

 On 12/09/2014 18:42, Thierry Carrez wrote:
 Eoghan Glynn wrote:
 If you think this is wrong and think the design summit suggestion
 website is a better way to do it, let me know why! If some programs
 really can't stand the 'etherpad/IRC' approach I'll see how we can spin
 up a limited instance.
 +1 on a collaborative scheduling process within each project.

 That's pretty much what we did within the ceilometer core group for
 the Juno summit, except that we used a googledocs spreadsheet instead
 of an etherpad.

 So I don't think we need to necessarily mandate usage of an etherpad,
 just let every project decide whatever shared document format they
 want to use.

 FTR the benefit of a googledocs spreadsheet in my view would include
  the ease of totalling votes & session slots, color-coding candidate
 sessions for merging etc.
 Good point. I've replaced the wording in the wiki page -- just use
 whatever suits you best, as long as it's a public document and you can
 link to it.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Maish Saidel-Keesing


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] Networking Service for Sahara

2014-09-17 Thread Sharan Kumar M
Hi all,

What is the default networking service for Sahara? Is it Nova Network or
Neutron? I referred this page
http://docs.openstack.org/developer/sahara/userdoc/features.html#neutron-and-nova-network-support
and it says Nova Network. Is that right?

Thanks,
Sharan Kumar M
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Robert Collins
On 18 September 2014 08:01, Dean Troyer dtro...@gmail.com wrote:
 Interestingly enough, the distros are doing exactly what they don't want us
 to do, ie, rebuilding things to use 'their' tested version of dependencies
 rather than the included one...

Indeed - but the distros are solving for two specific issues:

1) Effort by the distro team

1a) 'how to minimise effort delivering up-to-date packages when the
package count is 20k+'.  This is a pure numbers game: update one
binary on a users system, or 10, or 20 etc. Things deep down a
dependency tree can turn up in huge numbers of places, if vendoring is
commonplace.

1b) 'how to security fix packages where the upstream has stopped being
responsive' - updating vendored trees is often harder than just
unpacking a new release, since they may have deltas in addition to
being vendored - and vendoring may also require patches (depending on
the language). Just waiting for a new vendor release can be a long
process sometimes :)

And both of these are in the context of

2) how to fix things promptly for users

2a) binary packages are often quite substantial - particularly for
some c++ programs - a non-binary delta based approach (and thats what
all the distros started with) will consume a tonne of bandwidth if you
have to multiply out the uses of a package.

2b) distros were privileged in our modern responsible disclosure world
(via the vendor-sec list - I'm not sure what the current state of play
is) - but at one point they found out about security issues *before*
small consumers of packages did, and would fix them before the upstream
release was made.

You can see, I think, how vendoring plays havoc with the amount of
effort a small team has to exert to keep a large set of packages
patched ahead of upstream releases of the vendored libraries. It's not
an intrinsic problem - it's a problem we've constructed by centralising
and limiting notifications of CVEs: unless the requests authors are part
of the urllib3 security response team, they can never respond to CVEs
in as timely a manner *while vendoring is in use*.

...
 FWIW I totally understand the desire for vendoring...I want to do the same
 thing with OSC because of the number of times we've been broken by requests,
 prettytable and others changing out from under us.  It is easy enough for me
 to fix my box, but a cloud user that just wants to get his VMs running isn't
 going to be happy, especially on Windows.

 dt

OOI, were those changes API breaks or were we depending on nonpublic aspects?

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] client releases 0.7.2 & 0.7.3

2014-09-17 Thread Sergey Lukjanov
Hi folks, 0.7.2 has been released with the main changes - synced oslo,
updated requirements and support for security groups.

The 0.7.2 release introduced a stable/icehouse incompatibility, so
we've released the 0.7.3 version with a fix for it.

Thanks.

P.S. Some links:

https://launchpad.net/python-saharaclient/0.7.x/0.7.3
http://tarballs.openstack.org/python-saharaclient/python-saharaclient-0.7.3.tar.gz

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] Core developer nomination

2014-09-17 Thread Ilya Sviridov
Hello magnetodb contributors,

I'm glad to nominate Charles Wang to core developers of MagnetoDB.

He is the top non-core reviewer [1], implemented notifications [2] in mdb, and
made great progress with performance, stability and scalability testing
of MagnetoDB.

[1] http://stackalytics.com/report/contribution/magnetodb/90
[2] https://blueprints.launchpad.net/magnetodb/+spec/magnetodb-notifications

Welcome to the team, Charles!
Looking forward to your contributions.

--
Ilya Sviridov
isviridov @ FreeNode
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Dean Troyer
On Wed, Sep 17, 2014 at 3:53 PM, Robert Collins robe...@robertcollins.net
wrote:

 On 18 September 2014 08:01, Dean Troyer dtro...@gmail.com wrote:
  Interestingly enough, the distros are doing exactly what they don't want
 us
  to do, ie, rebuilding things to use 'their' tested version of
 dependencies
  rather than the included one...

 Indeed - but the distros are solving for two specific issues:


No argument, just observing the recursive nature of this...

Also, if we pin to a version, is the downstream consequence different?
 IIRC Thomas has had to do this with Django (1.7?) and Horizon, probably
with others too.

As a provider of an app package directly to users, dealing with the
front-line consequences of changing dependencies falls on me.  And its one
reason with this hat on I want static linking, or a Python equivalent of it
(bundling/vendoring) at the app level.

As an upstream to a distro, I'm happy to let them deal with all of that.
 Isn't it fun being in the middle?

OOI, were those changes API breaks or were we depending on nonpublic
 aspects?


prettytable was packaging once and I don't recall the other.  requests,
aside from the recent 2.4.0 release, was the 1.0.0 release when we weren't
expecting it and nothing was pinned <1.0.0.  I think that was an API change
that bit us.  The 1.0.0 version was clear, but not having the control over
the timing of the change is what makes me understand Kenneth's position on
urllib3 and why those who bundle requests do that too...

Is my Go-ness showing yet?

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Set WIP for stale patches?

2014-09-17 Thread James Polley
On Wed, Sep 17, 2014 at 6:26 PM, mar...@redhat.com mandr...@redhat.com
wrote:

 Hi,

 as part of general housekeeping on our reviews, it was discussed at last
 week's meeting [1] that we should set workflow -1 for stale reviews
 (like gerrit used to do when I were a lad).

 The specific criteria discussed was 'items that have a -1 from a core
 but no response from author for 14 days'. This topic came up again
 during today's meeting and it wasn't clear if the intention was for
 cores to start enforcing this? So:

 Do we start setting WIP/workflow -1 for those reviews that have a -1
 from a core but no response from the author for 14 days?


I'm in favour of doing this; as long as we make it clear that we're doing
it to help us focus review effort on things that are under active
development - it doesn't mean we think the patch shouldn't land, it just
means we know it's not ready yet so we don't want reviewers to be looking
at it until it moves forward.

For the sake of making sure new developers don't get put off, I'd like to
see us leaving a comment explaining why we're WIPing the change and noting
that uploading a new revision will remove the WIP automatically
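
If it helps, something along these lines could generate the candidate list.
The query details are assumptions (project filter, limits), and Gerrit can't
tell us on its own whether the -1 came from a core reviewer, so that part
still needs a human or a follow-up API call:

import json

import requests

GERRIT = 'https://review.openstack.org'
# Open TripleO changes carrying a Code-Review -1 that haven't been touched
# for two weeks.
QUERY = 'status:open project:^openstack/tripleo-.* label:Code-Review<=-1 age:2w'

resp = requests.get(GERRIT + '/changes/', params={'q': QUERY, 'n': 100})
# Gerrit prefixes its JSON responses with a ")]}'" line to defeat XSSI.
changes = json.loads(resp.text.split('\n', 1)[1])
for change in changes:
    print(change['_number'], change['updated'], change['subject'])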


 thanks, marios

 [1]

 http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-09-09-19.04.log.html

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] DVR Tunnel Design Question

2014-09-17 Thread Kevin Benton
Can you clarify what you mean with the thrashing condition? MAC addresses
only need to be unique per-VLAN so I don't see how the same MAC on multiple
VLANs from the same physical port would lead to any issues.

On Wed, Sep 17, 2014 at 12:41 PM, Armando M. arma...@gmail.com wrote:

 VLAN is on the radar, vxlan/gre was done to start with.

 I believe Vivek mentioned the rationale in some other thread. The gist
 of it below:

 In the current architecture, we use a unique DVR MAC per compute node
 to forward DVR Routed traffic directly to destination compute node.
 The DVR routed traffic from the source compute node will carry
 'destination VMs underlay VLAN' in the frame, but the Source Mac in
 that same frame will be the DVR Unique MAC. So, same DVR Unique Mac is
 used for potentially a number of overlay network VMs that would exist
 on that same source compute node.

 The underlay infrastructure switches will see the same DVR Unique MAC
 being associated with different VLANs on incoming frames, and so this
 would result in VLAN Thrashing on the switches in the physical cloud
 infrastructure. Since tunneling protocols carry the entire DVR routed
 inner frames as tunnel payloads, there is no thrashing effect on
 underlay switches.

 There will still be thrashing effect on endpoints on CNs themselves,
 when they try to learn that association between inner frame source MAC
 and the TEP port on which the tunneled frame is received. But that we
 have addressed in L2 Agent by having a 'DVR Learning Blocker' table,
 which ensures that learning for DVR routed packets alone is
 side-stepped.

 As a result, VLAN was not promoted as a supported underlay for the
 initial DVR architecture.

 Cheers,
 Armando

 On 16 September 2014 20:35, 龚永生 gong...@unitedstack.com wrote:
  I think the VLAN should also be supported later.  The tunnel should not
 be
  the prerequisite for the DVR feature.
 
 
  -- Original --
  From:  Steve Wormleyopenst...@wormley.com;
  Date:  Wed, Sep 17, 2014 10:29 AM
  To:  openstack-devopenstack-dev@lists.openstack.org;
  Subject:  [openstack-dev] [neutron] DVR Tunnel Design Question
 
  In our environment using VXLAN/GRE would make it difficult to keep some
 of
  the features we currently offer our customers. So for a while now I've
 been
  looking at the DVR code, blueprints and Google drive docs and other than
 it
  being the way the code was written I can't find anything indicating why a
  Tunnel/Overlay network is required for DVR or what problem it was
 solving.
 
  Basically I'm just trying to see if I missed anything as I look into
 doing a
  VLAN/OVS implementation.
 
  Thanks,
  -Steve Wormley
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Networking Service for Sahara

2014-09-17 Thread Andrew Lazarev
Hi Sharan,

Sahara works with either network service installed in OpenStack. If
OpenStack uses neutron - sahara will use neutron too. If nova network is
used, Sahara supports that as well.

Thanks,
Andrew.

On Wed, Sep 17, 2014 at 1:38 PM, Sharan Kumar M sharan.monikan...@gmail.com
 wrote:

 Hi all,

 What is the default networking service for Sahara? Is it Nova Network or
 Neutron? I referred this page
 http://docs.openstack.org/developer/sahara/userdoc/features.html#neutron-and-nova-network-support
 and it says Nova Network. Is that right?

 Thanks,
 Sharan Kumar M

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Oslo] Moving Brick out of Cinder

2014-09-17 Thread Walter A. Boring IV
Thanks for the effort Ivan.   Your interest in brick is also helping us 
push forward with the idea of the agent that we've had in mind for quite 
some time.


For those interested, I have created an etherpad that discusses some of 
the requirements and design decisions/discussion on the cinder/storage 
agent

here:
https://etherpad.openstack.org/p/cinder-storage-agent


Walt



Thanks a lot for a comments!

As discussed in IRC (#openstack-cinder), moving Brick to Oslo or 
Stackforge isn't the best solution.


We're moving on to making a Cinder Agent (or Cinder Storage agent) [1] 
based on the Brick code instead of making Brick a separate python 
library used in Cinder and Nova.


I'll deprecate my oslo.storage GitHub repo and rename it to not 
confuse anybody in the future.


[1] https://etherpad.openstack.org/p/cinder-storage-agent

Regards,
Ivan Kolodyazhny,
Web Developer,
http://blog.e0ne.info/,
http://notacash.com/,
http://kharkivpy.org.ua/

On Wed, Sep 17, 2014 at 8:16 PM, Davanum Srinivas dava...@gmail.com 
mailto:dava...@gmail.com wrote:


+1 to Doug's comments.

On Wed, Sep 17, 2014 at 1:02 PM, Doug Hellmann
d...@doughellmann.com mailto:d...@doughellmann.com wrote:

 On Sep 16, 2014, at 6:02 PM, Flavio Percoco fla...@redhat.com
mailto:fla...@redhat.com wrote:

 On 09/16/2014 11:55 PM, Ben Nemec wrote:
 Based on my reading of the wiki page about this it sounds like
it should
 be a sub-project of the Storage program. While it is targeted
for use
 by multiple projects, it's pretty specific to interacting with
Cinder,
 right?  If so, it seems like Oslo wouldn't be a good fit. 
We'd just end

 up adding all of cinder-core to the project anyway. :-)

 +1 I think the same arguments and conclusions we had on
glance-store
 make sense here. I'd probably go with having it under the Block
Storage
 program.

 I agree. I’m sure we could find some Oslo contributors to give
you advice about APIs if you like, but I don’t think the library
needs to be part of Oslo to be reusable.

 Doug


 Flavio


 -Ben

 On 09/16/2014 12:49 PM, Ivan Kolodyazhny wrote:
 Hi Stackers!

 I'm working on moving Brick out of Cinder for K release.

 There're a lot of open questions for now:

   - Should we move it to oslo or somewhere on stackforge?
   - Better architecture of it to fit all Cinder and Nova
requirements
   - etc.

 Before starting discussion, I've created some
proof-of-concept to try it. I
 moved Brick to some lib named oslo.storage for testing only.
It's only one
 of the possible solution to start work on it.

 All sources are aviable on GitHub [1], [2].

 [1] - I'm not sure that this place and name is good for it,
it's just a PoC.

 [1] https://github.com/e0ne/oslo.storage
 [2] https://github.com/e0ne/cinder/tree/brick - some tests
still failed.

 Regards,
 Ivan Kolodyazhny

 On Mon, Sep 8, 2014 at 4:35 PM, Ivan Kolodyazhny
e...@e0ne.info mailto:e...@e0ne.info wrote:

 Hi All!

 I would like to start moving Cinder Brick [1] to oslo, as was described at
 the Cinder mid-cycle meetup [2]. Unfortunately I missed the meetup, so I
 want to be sure that nobody has started it and we are on the same page.

 According to the Juno 3 release, there was not enough time to discuss [3]
 this on the latest Cinder weekly meeting, and I would like to get some
 feedback from the whole OpenStack community, so I propose to start this
 discussion on the mailing list for all projects.

 If nobody has started it and it is useful at least for both Nova and
 Cinder, I would like to start this work according to the oslo guidelines
 [4], creating the needed blueprints to finish it before Kilo-1 is over.



 [1] https://wiki.openstack.org/wiki/CinderBrick
 [2] https://etherpad.openstack.org/p/cinder-meetup-summer-2014
 [3]


http://lists.openstack.org/pipermail/openstack-dev/2014-September/044608.html
 [4] https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary

 Regards,
 Ivan Kolodyazhny.




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [zaqar] Signing Off from Core

2014-09-17 Thread Alejandro Cabrera
Hey all,

I am officially removing myself as a core member of the Zaqar project.

Thanks for all the good times, friends, and I wish you the best for the future!

Cheers,
- Alej


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] reopen a change / pull request for nova-pythonclient ?

2014-09-17 Thread Alex Leonhardt
Thanks guys!

Alex

On 17 September 2014 17:16, Russell Bryant rbry...@redhat.com wrote:

 On 09/17/2014 11:56 AM, Daniel P. Berrange wrote:
  On Wed, Sep 17, 2014 at 04:47:06PM +0100, Alex Leonhardt wrote:
  hi,
 
  how does one re-open a abandoned change / pull request ? it just timed
  out and was then abandoned -
 
  https://review.openstack.org/#/c/57834/
 
  please let me know
 
  Just re-upload the change, maintaining the same Change-Id line in the
  commit message.

 Gerrit will reject it if it's still abandoned.  You have to restore it
 first.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Oslo] Moving Brick out of Cinder

2014-09-17 Thread John Griffith
On Sep 17, 2014 3:48 PM, Walter A. Boring IV walter.bor...@hp.com wrote:

 Thanks for the effort Ivan.   Your interest in brick is also helping us
push forward with the idea of the agent that we've had in mind for quite
some time.

 For those interested, I have created an etherpad that discusses some of
the requirements and design decisions/discussion on the cinder/storage
agent
 here:
 https://etherpad.openstack.org/p/cinder-storage-agent


 Walt


 Thanks a lot for a comments!

 As discussed in IRC (#openstack-cinder), moving Brick to Oslo or
Stackforge isn't the best solution.

 We're moving on making Cinder Agent (or Cinder Storage agent) [1]  based
on Brick code instead of making Brick as a separate python library used in
Cinder and Nova.

 I'll deprecate my oslo.storage GitHub repo and rename it to not confuse
anybody in a future.

 [1] https://etherpad.openstack.org/p/cinder-storage-agent

 Regards,
 Ivan Kolodyazhny,
 Web Developer,
 http://blog.e0ne.info/,
 http://notacash.com/,
 http://kharkivpy.org.ua/

 On Wed, Sep 17, 2014 at 8:16 PM, Davanum Srinivas dava...@gmail.com
wrote:

 +1 to Doug's comments.

 On Wed, Sep 17, 2014 at 1:02 PM, Doug Hellmann d...@doughellmann.com
wrote:
 
  On Sep 16, 2014, at 6:02 PM, Flavio Percoco fla...@redhat.com wrote:
 
  On 09/16/2014 11:55 PM, Ben Nemec wrote:
  Based on my reading of the wiki page about this it sounds like it
should
  be a sub-project of the Storage program.  While it is targeted for
use
  by multiple projects, it's pretty specific to interacting with
Cinder,
  right?  If so, it seems like Oslo wouldn't be a good fit.  We'd
just end
  up adding all of cinder-core to the project anyway. :-)
 
  +1 I think the same arguments and conclusions we had on glance-store
  make sense here. I'd probably go with having it under the Block
Storage
  program.
 
  I agree. I’m sure we could find some Oslo contributors to give you
advice about APIs if you like, but I don’t think the library needs to be
part of Oslo to be reusable.
 
  Doug
 
 
  Flavio
 
 
  -Ben
 
  On 09/16/2014 12:49 PM, Ivan Kolodyazhny wrote:
  Hi Stackers!
 
  I'm working on moving Brick out of Cinder for K release.
 
  There're a lot of open questions for now:
 
- Should we move it to oslo or somewhere on stackforge?
- Better architecture of it to fit all Cinder and Nova
requirements
- etc.
 
  Before starting discussion, I've created some proof-of-concept to
try it. I
  moved Brick to some lib named oslo.storage for testing only. It's
only one
  of the possible solution to start work on it.
 
  All sources are aviable on GitHub [1], [2].
 
  [1] - I'm not sure that this place and name is good for it, it's
just a PoC.
 
  [1] https://github.com/e0ne/oslo.storage
  [2] https://github.com/e0ne/cinder/tree/brick - some tests still
failed.
 
  Regards,
  Ivan Kolodyazhny
 
  On Mon, Sep 8, 2014 at 4:35 PM, Ivan Kolodyazhny e...@e0ne.info
wrote:
 
  Hi All!
 
  I would to start moving Cinder Brick [1] to oslo as was described
on
  Cinder mid-cycle meetup [2]. Unfortunately I missed meetup so I
want be
  sure that nobody started it and we are on the same page.
 
  According to the Juno 3 release, there was not enough time to
discuss [3]
  on the latest Cinder weekly meeting and I would like to get some
feedback
  from the all OpenStack community, so I propose to start this
discussion on
  mailing list for all projects.
 
  I anybody didn't started it and it is useful at least for both
Nova and
  Cinder I would to start this work according oslo guidelines [4]
and
  creating needed blueprints to make it finished until Kilo 1 is
over.
 
 
 
  [1] https://wiki.openstack.org/wiki/CinderBrick
  [2] https://etherpad.openstack.org/p/cinder-meetup-summer-2014
  [3]
 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044608.html
  [4] https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary
 
  Regards,
  Ivan Kolodyazhny.
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  --
  @flaper87
  Flavio Percoco
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Davanum Srinivas :: https://twitter.com/dims

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 

[openstack-dev] pbr alpha and dev version handling

2014-09-17 Thread Doug Hellmann
Earlier today we discovered a problem with the way pbr is generating dev 
version numbers for commits following tags using alpha pre-version suffixes 
[1]. Basically what’s happening is a commit following a tag like 1.3.0.0a3 is 
coming out as a 1.3.0.devX version, which then appears to be older than the 
alpha tag, causing the version of the library installed from source to be 
replaced by a published package which is actually older than the source 
version. That downgrade then potentially loses fixes or features added to the 
source repository but not yet released.
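
To see the ordering problem concretely, here is a quick sketch using setuptools'
version parsing (the dev counter is arbitrary):

from pkg_resources import parse_version

dev = parse_version('1.3.0.dev10')    # a post-tag dev build generated by pbr
alpha = parse_version('1.3.0.0a3')    # the alpha pre-version tag it follows

# True: the dev build sorts as *older* than the alpha, so installers happily
# "upgrade" to the published alpha and drop whatever is only in the source tree.
print(dev < alpha)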

The problem is related to a change in pbr that has been around for several 
weeks now, but we just noticed that it was happening today while debugging 
another issue. There are a few approaches to fixing the problem, but it’s not 
clear which is best, so I’m starting this thread to get some help working that 
out. I see 3 options myself, maybe there are others:


1. We could revert the semver-related changes in pbr that caused the problem 
and go back to the way of calculating dev versions until we can sort everything 
out properly.

I think that would include these changes:

dc62764 Only consider tags that look like versions.
449f0ab Accept capitalized Sem-Ver headers
85ba960 Handle more legacy version numbers
c1c99a7 Look for and process sem-ver pseudo headers in git
c7e00a3 Raise an error if preversion versions are too low
81c2000 Teach pbr about post versioned dev versions.
1758998 Handle more local dev version cases
5957364 Introduce a SemanticVersion object

There are some changes related to the way the ChangeLog is generated that may 
also be affected. It may be possible to leave a lot of the semver code in place 
and just bypass its use, I haven’t looked into that yet.


2. We could allow pbr to consider dev versions of pre-releases. This might, for 
example, lead to a version number 1.3.0.0a3.dev10.

This is apparently not supported by semver, but I don’t care as much about 
someone else’s standard as I do about creating something that works reliably 
for our needs. It’s not clear how a version like that would be converted to a 
deb or rpm version string, which is part of the point of the changes we’ve been 
working on this cycle. 


3. We could live with the problem for a few more days and not use pre-release 
versions for kilo.

I know this is a popular option with some people, but it is not a simple 
decision. Many people have suggested that we should not depend on alpha 
versions of libraries. However, the Oslo libraries are under development just 
as the applications are. We are not using alpha versions as an indicator of 
quality, and stopping using alpha versions will not magically make the quality 
of the libraries in any way better. We purposefully chose to use alpha versions 
as a way to prevent new releases of libraries from automatically being used in 
stable deployments, before those same libraries had been tested against trunk 
for some time. Before we change that system, we need to provide an alternate 
solution.



I’m tempted to try option 1 as the most expedient. Option 2 seems appealing as 
well because it follows our intent with our choice of version numbers. Option 3 
will require more thought and planning, and I expect we’ll have that 
conversation anyway, but I don’t want to rush into that sort of change.

What do you all think?

Doug


[1] https://bugs.launchpad.net/pbr/+bug/1370608
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zaqar] Signing Off from Core

2014-09-17 Thread Flavio Percoco
On 09/17/2014 11:50 PM, Alejandro Cabrera wrote:
 Hey all,
 
 I am officially removing myself as a core member of the Zaqar project.
 
 Thanks for all the good times, friends, and I wish you the best for the 
 future!

Alejandro,

I think I speak for everyone when I say the project is where it is also
because of your contributions and support. You played a key role in the
project's growth and we all thank you a lot for that.

I'm sad to see you go and I wish you luck on your new adventures :)
Cheers,
Flavio


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zaqar] Signing Off from Core

2014-09-17 Thread Victoria Martínez de la Cruz
Thanks for everything Alej!

Besides your contributions to the Zaqar team you also shown to be a great
person.

I'm truly grateful that I could have you as a mentor during GSoC.

All the best :)

2014-09-17 19:14 GMT-03:00 Flavio Percoco fla...@redhat.com:

 On 09/17/2014 11:50 PM, Alejandro Cabrera wrote:
  Hey all,
 
  I am officially removing myself as a core member of the Zaqar project.
 
  Thanks for all the good times, friends, and I wish you the best for the
 future!

 Alejandro,

 I think I speak for everyone when I say the project is where it is also
 because of your contributions and support. You played a key role in the
 project's growth and we all thank you a lot for that.

 I'm sad to see you go and I wish you luck on your new adventures :)
 Cheers,
 Flavio


 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-17 Thread Kurt Griffiths
Great question. Some use cases, like the guest agent one, would like to see 
something around ~20ms if the agent needs to respond to requests from a control 
surface/panel while a user clicks around. I also spoke with a social media 
company that was interested in low latency simply because they have a big 
volume of messages to slog through in a timely manner or they will get behind 
(long-polling or websocket support was something they would like to see).

Other use cases should be fine with, say, 100ms. I want to say Heat’s needs 
probably fall into that latter category, but I’m only speculating.

Some other feedback we got a while back was that people would like a knob to 
tweak queue attributes, e.g. the tradeoff between durability and performance. 
That led to the work on queue “flavors” that Flavio has been doing this past 
cycle, so I’ll let him chime in on that.
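
For anyone who wants to check a deployment against those budgets, a rough 
sketch (not part of our benchmark harness; the URL below is hypothetical) is 
simply to time a burst of requests and look at the median and the tail:

import time

import requests  # assumes the transport under test is plain HTTP

URL = "http://localhost:8888/v1/health"  # hypothetical endpoint to poke

samples = []
for _ in range(100):
    start = time.time()
    requests.get(URL)
    samples.append((time.time() - start) * 1000.0)  # round trip in milliseconds

samples.sort()
print("median: %.1f ms   p95: %.1f ms" % (samples[50], samples[94]))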

From: Joe Gordon joe.gord...@gmail.com
Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
Date: Wednesday, September 17, 2014 at 2:32 PM
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

Can you further quantify what you would consider too slow? Is 100ms too slow?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Mike Bayer

On Sep 17, 2014, at 4:31 PM, Ian Cordasco ian.corda...@rackspace.com wrote:

 Project X pins a version of requests. Alice doesn’t know anything about
 requests and does pip install X. Until Alice takes a more active role in
 the development of Project X and looks into requests, she will never know
 she’s installed software that has exposures in it.

If a vulnerability is reported in urllib3 1.9.1, Alice, as well as I and 
everyone else who is not a novice, will at least know we need to run:

$ pip show urllib3 
---
Name: urllib3
Version: 1.9.1
Location: 
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
Requires: 


and we know right there we have to upgrade.  We upgrade, and we’re done.  If 
we see that some library is pinning it, we will know.  We will complain loudly 
to that library’s author and/or replace that library.  The tools are there to 
give us what we need to be aware of the problem and to escalate it.

When a library silently bundles the source code and bypasses any normal means 
of our knowing it’s present (short of reading the source code or scouring the 
documentation), we have no way to know we’re affected.  Some applications, 
particularly pip, have to do this; however, that should only be for technical 
reasons.  It should not be because you don’t want novice users to have to learn 
something, or because you’re angling to have lots of downloads on PyPI.
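
As a rough illustration (my own sketch, not from the thread) of how little 
visibility the bundling gives you: the only way to even notice a divergence is 
to go poking at both copies yourself.

import urllib3

try:
    # The copy (or alias) that requests exposes under requests.packages.
    from requests.packages import urllib3 as bundled_urllib3
except ImportError:
    bundled_urllib3 = None  # e.g. a distro build that strips the bundle

print("system urllib3: ", urllib3.__version__)
if bundled_urllib3 is not None:
    print("bundled urllib3:", bundled_urllib3.__version__)
else:
    print("no bundled copy found")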


 People make sure to upgrade their Requests libraries locally, but for all
 those poor saps who have *no idea* they have widely used apps that are
 bundling it silently, they remain totally open to vulnerabilities and the
 black hats have disneyland at their disposal.
 
 I think more applications bundle it than you realize. You’re likely using
 one daily that does it.


SQLAlchemy itself vendorizes Queue and some fragments of six, but that is of a 
much smaller scale, and is for technical reasons, rather than appeasing-newbie 
reasons.   But HTTP has a lot of security-critical surface area.   If I were to 
just bundle my own fork of an HMAC library with a few of my own special 
enhancements, that would be seen as a problem.


 And yeah, we’ll continue to take the blame for the mistake that was made
 for those two exposures. As for “Is that how things should be done?”
 that’s not for me to answer. More than enough projects do it and do it out
 of necessity. The reality is that by vendoring its dependencies, requests
 allows its users more flexibility than other projects.

I haven’t seen the technical reason for Requests doing this; I’ve only seen 
this one: “I want my users to be free to not use packaging if they don't want 
to. They can just grab the tarball and go.”  If that’s really the only reason, 
then I fail to see how it has anything to do with flexibility, other than the 
flexibility to remain lazy and ignorant of basic computer programming skills - 
and Requests is a library *for programmers*.  It doesn’t do anything without 
typing code.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] keep old specs

2014-09-17 Thread Aaron Rosen
I agree as well. I think moving them to an unimplemented folder makes sense
and would be helpful for review if a blueprint is re-proposed.

On Mon, Sep 15, 2014 at 7:20 AM, Russell Bryant rbry...@redhat.com wrote:

 On 09/15/2014 10:01 AM, Kevin Benton wrote:
  Some of the specs had a significant amount of detail and thought put
  into them. It seems like a waste to bury them in a git tree history.
 
  By having them in a place where external parties (e.g. operators) can
  easily find them, they could get more visibility and feedback for any
  future revisions. Just being able to see that a feature was previously
  designed out and approved can prevent a future person from wasting a
  bunch of time typing up a new spec for the same feature. Hardly anyone
  is going to search deleted specs from two cycles ago if it requires
  checking out a commit.
 
  Why just restrict the whole repo to being documentation of what went
  in?  If that's all the specs are for, why don't we just wait to create
  them until after the code merges?

 FWIW, I agree with you that it makes sense to keep them in a directory
 that makes it clear that they were not completed.

 There's a ton of useful info in them.  Even if they get re-proposed,
 it's still useful to see the difference in the proposal as it evolved
 between releases.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: retrying)

2014-09-17 Thread Joshua Harlow
On a related and slightly less problematic case is another one like this...

https://github.com/rholder/retrying/issues/11
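
For the glanceclient import quoted below, the usual defensive pattern is a 
guarded import that works whether or not the bundled copy is present; a minimal 
sketch (not necessarily what glanceclient ended up merging):

try:
    # Works where requests still ships its bundled copy of urllib3.
    from requests.packages.urllib3 import poolmanager
except ImportError:
    # Works on e.g. Debian, where the bundled copy is stripped out.
    from urllib3 import poolmanager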

On Sep 17, 2014, at 8:15 AM, Thomas Goirand z...@debian.org wrote:

 Hi,
 
 I'm horrified by what I just found. I have just found out this in
 glanceclient:
 
   File "bla/tests/test_ssl.py", line 19, in <module>
     from requests.packages.urllib3 import poolmanager
  ImportError: No module named packages.urllib3
 
 Please *DO NOT* do this. Instead, please use urllib3 from ... urllib3.
 Not from requests. The fact that requests embeds its own version
 of urllib3 is a heresy. In Debian, the embedded version of urllib3 is
 removed from requests.
 
 In Debian, we spend a lot of time un-vendorizing stuff, because
 that's a security nightmare. I don't want to have to patch all of
 OpenStack to do it there as well.
 
 And no, there's no good excuse here...
 
 Thomas Goirand (zigo)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zaqar] Signing Off from Core

2014-09-17 Thread Fei Long Wang
Thanks for all your effort on Zaqar, Alej, and for all the help you gave me
when I got involved in Zaqar. I assume you will not be far away from the team
and OpenStack :)


On 18/09/14 09:50, Alejandro Cabrera wrote:
 Hey all,

 I am officially removing myself as a core member of the Zaqar project.

 Thanks for all the good times, friends, and I wish you the best for the 
 future!

 Cheers,
 - Alej


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Set WIP for stale patches?

2014-09-17 Thread James Polley
On Thu, Sep 18, 2014 at 8:24 AM, James E. Blair cor...@inaugust.com wrote:

 Sullivan, Jon Paul jonpaul.sulli...@hp.com writes:

  I think this highlights exactly why this should be an automated
  process.  No errors in application, and no errors in interpretation of
  what has happened.
 
  So the -1 from Jenkins was a reaction to the comment created by adding
  the workflow -1.  This is going to happen on all of the patches that
  have their workflow value altered (tests will run, result would be
  whatever the result of the test was, of course).

 Jenkins only runs tests in reaction to comments if they say recheck.

  But I also agree that the Jenkins vote should not be included in the
  determination of marking a patch WIP, but a human review should (So
  Code-Review and not Verified column).
 
  And in fact, for the specific example at hand, the last Jenkins vote
  was actually a +1, so as I understand it, it should not have been marked
  WIP.

 I'd like to help you see the reviews you want to see without
 externalizing your individual workflow onto contributors.  What tool do
 you use to find reviews?


We updated our wiki back in June to point people at:

https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Ftripleo-incubator+OR+project%3Aopenstack%2Ftripleo-image-elements+OR+project%3Aopenstack%2Ftripleo-heat-templates+OR+project%3Aopenstack%2Ftripleo-specs+OR+project%3Aopenstack%2Fos-apply-config+OR+project%3Aopenstack%2Fos-collect-config+OR+project%3Aopenstack%2Fos-refresh-config+OR+project%3Aopenstack%2Fos-cloud-config+OR+project%3Aopenstack%2Fdiskimage-builder+OR+project%3Aopenstack%2Fdib-utils+OR+project%3Aopenstack-infra%2Ftripleo-ci+OR+project%3Aopenstack%2Ftuskar+OR+project%3Aopenstack%2Fpython-tuskarclient%29+status%3Aopen+NOT+label%3AWorkflow%3C%3D-1+NOT+label%3ACode-Review%3C%3D-2title=TripleO+InboxMy+Patches+Requiring+Attention=owner%3Aself+%28label%3AVerified-1%252cjenkins+OR+label%3ACode-Review-1%29TripleO+Specs=NOT+owner%3Aself+project%3Aopenstack%2Ftripleo-specsNeeds+Approval=label%3AVerified%3E%3D1%252cjenkins+NOT+owner%3Aself+label%3ACode-Review%3E%3D2+NOT+label%3ACode-Review-15+Days+Without+Feedback=label%3AVerified%3E%3D1%252cjenkins+NOT+owner%3Aself+NOT+project%3Aopenstack%2Ftripleo-specs+NOT+label%3ACode-Review%3C%3D2+age%3A5dNo+Negative+Feedback=label%3AVerified%3E%3D1%252cjenkins+NOT+owner%3Aself+NOT+project%3Aopenstack%2Ftripleo-specs+NOT+label%3ACode-Review%3C%3D-1+NOT+label%3ACode-Review%3E%3D2+limit%3A50Other=label%3AVerified%3E%3D1%252cjenkins+NOT+owner%3Aself+NOT+project%3Aopenstack%2Ftripleo-specs+label%3ACode-Review-1+limit%3A20

It's probably not the only thing people use, but it should give you an idea
of what we'd find useful.


 If it's gerrit's webui, have you tried using the Review Inbox dashboard?
 Here it is for the tripleo-image-elements project:


 https://review.openstack.org/#/projects/openstack/tripleo-image-elements,dashboards/important-changes:review-inbox-dashboard


That gave me an error: Error in operator label:Code-Review=0,self

After trying a few times I realised I hadn't logged in yet. After logging
in, it works. It'd be nice if it triggered the login sequence instead of
just giving me an error.



 If you would prefer something else, we can customize those dashboards to
 do whatever you want, including ignoring changes that have not been
 updated in 2 weeks.


I'd love to be able to sort by least-recently-updated, to encourage a FIFO
pattern. At present, even on our dashboard above, the best we've been able
to do is highlight things that are (for instance) 2 weeks without review -
but even then, the things that are just barely two weeks without review sit
at the top, while things that have been 60 days or more without review are
hidden further down.
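
One way to approximate that ordering with what Gerrit gives us today (a sketch 
only; the section names and cut-offs are made up, and the queries would still 
need URL-escaping like the dashboard above) is to bucket sections by the age: 
operator, so the oldest changes at least get their own clearly separated list:

2-4 Weeks Without Update = status:open age:2w -age:4w NOT label:Code-Review<=-1
4+ Weeks Without Update  = status:open age:4w NOT label:Code-Review<=-1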



 -Jim

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Log Rationalization -- Bring it on!

2014-09-17 Thread Rochelle.RochelleGrober
TL;DR:  I consider the poor state of log consistency a major impediment to 
more widespread adoption of OpenStack and would like to volunteer to own this 
cross-functional process to begin to unify and standardize logging messages and 
attributes for Kilo while dealing with the most egregious issues as the 
community identifies them.



Recap from some mail threads:



From Sean Dague on Kilo cycle goals:

2. Consistency in southbound interfaces (Logging first)



Logging and notifications are southbound interfaces from OpenStack, providing 
information to people, or machines, about what is going on.

There is also a third proposed southbound interface, osprofiler.



For Kilo: I think it's reasonable to complete the logging standards and 
implement them. I expect notifications (which haven't quite kicked off) are 
going to take 2 cycles.



I'd honestly *really* love to see a unification path for all the southbound 
parts, logging, osprofiler, notifications, because there is quite a bit of 
overlap in the instrumentation/annotation inside the main code for all of these.


And from Doug Hellmann:
1. Sean has done a lot of analysis and started a spec on standardizing logging 
guidelines where he is gathering input from developers, deployers, and 
operators [1]. Because it is far enough along for us to see real progress, it's a 
good place for us to start experimenting with how to drive cross-project 
initiatives involving code and policy changes from outside of a single project. 
We have a couple of potentially related specs in Oslo as part of the oslo.log 
graduation work [2] [3], but I think most of the work will be within the 
applications.

[1] https://review.openstack.org/#/c/91446/
[2] 
https://blueprints.launchpad.net/oslo.log/+spec/app-agnostic-logging-parameters
[3] https://blueprints.launchpad.net/oslo.log/+spec/remove-context-adapter



And from James Blair:

1) Improve log correlation and utility



If we're going to improve the stability of OpenStack, we have to be able to 
understand what's going on when it breaks.  That's true both for developers 
trying to diagnose a failure in an integration test and for operators who are 
all too often diagnosing the same failure in a real 
deployment.  Consistency in logging across projects as well as a cross-project 
request token would go a long way toward this.

While I am not currently managing an OpenStack deployment, writing tests or 
code, or debugging the stack, I have spent many years doing just that.  Through 
QA, Ops and Customer support, I have come to revel in good logging and log 
messages and curse the holes and vagaries in many systems.

Defining/refining logs to be useful and usable is a cross-functional effort 
that needs to include:

· Operators

· QA

· End Users

· Community managers

· Tech Pubs

· Translators

· Developers

· TC (which provides the forum and impetus for all the projects to 
cooperate on this)

At the moment, I think this effort may best work under the auspices of Oslo 
(oslo.log), I'd love to hear other proposals.

Here are the beginnings of my proposal for how to attack and subdue the painful 
state of logs:


· Post this email to the MLs (dev, ops, enduser) to get feedback, 
garner support and participants in the process
(Done;-)

· In parallel:

o   Collect up problems, issues, ideas, solutions on an etherpad 
https://etherpad.openstack.org/p/Log-Rationalization where anyone in the 
communities can post.

o   Categorize reported log issues into classes (classes already identified):

§  Format Consistency across projects

§  Log level definition and categorization across classes

§  Time syncing entries across tens of logfiles

§  Relevancy/usefulness of information provided within messages

§  Etc (missing a lot here, but I'm sure folks will speak up)

o   Analyze existing log message formats, standards across integrated projects

o   File bugs where issues identified are actual project bugs

o   Build a session outline for an F2F working session at the Paris Design Summit

· At the Paris Design Summit, use a session and/or pod discussions to 
set priorities, recruit contributors, start and/or flesh out specs and 
blueprints

· Proceed according to priorities, specs, blueprints, contributions and 
changes as needed as the work progresses.

· Keep an active and open rapport and reporting process for the user 
community to comment and participate in the processes.
Measures of success:

· Log messages provide enough consistency of format for productive 
mining through operator-writable scripts

· Problem debugging is simplified through the ability to trust 
timestamps across all OpenStack logs (and use scripts to get to the time you 
want in any/all of the logfiles)

· Standards for format, content, levels and translations have been 
proposed and agreed to be adopted across all 

[openstack-dev] PostgreSQL jobs slow in the gate

2014-09-17 Thread Clark Boylan
Hello,

Recent sampling of test run times shows that our tempest jobs run
against clouds using PostgreSQL are significantly slower than jobs run
against clouds using MySQL.

(check|gate)-tempest-dsvm-full has an average run time of 52.9 minutes
(stddev 5.92 minutes) over 516 runs.
(check|gate)-tempest-dsvm-postgres-full has an average run time of 73.78
minutes (stddev 11.01 minutes) over 493 runs.
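
As a quick back-of-the-envelope check (my arithmetic on the numbers above, 
nothing more) that the gap is real and not run-to-run noise:

import math

mysql = {"mean": 52.90, "std": 5.92, "n": 516}   # minutes, full job
pg    = {"mean": 73.78, "std": 11.01, "n": 493}  # minutes, postgres-full job

diff = pg["mean"] - mysql["mean"]  # ~20.9 minutes
se = math.sqrt(mysql["std"] ** 2 / mysql["n"] + pg["std"] ** 2 / pg["n"])
print("difference: %.1f min, standard error: %.2f min" % (diff, se))
# The gap is dozens of standard errors wide, so it is not sampling noise.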

I think this is a bug, and an important one to solve prior to release
if we want to continue the care and feeding of PostgreSQL support. I
haven't filed a bug in LP because I am not sure where the slowness is,
and creating a bug against all the projects is painful. (If there are
suggestions for how to do this in a non-painful way I will happily go
file a proper bug.)

Is there interest in fixing this? If not we should probably reconsider
removing these PostgreSQL jobs from the gate.

Note, a quick spot check indicates the increase in job time is not
related to job setup. Total time before running tempest appears to be
just over 18 minutes in the jobs I checked.

Thank you,
Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >