[openstack-dev] [mistral] Multi-tenancy and ceilometer triggers

2014-06-10 Thread Angus Salkeld

Hi

I was looking at 
https://blueprints.launchpad.net/mistral/+spec/mistral-ceilometer-integration
and trying to figure out how to implement that.

I can see some problems:
- At the moment the trust is created when you PUT the workbook definition.
  This means that if a totally different user executes the workbook, it will
  be run as the user that created the workbook :-O
  https://github.com/stackforge/mistral/blob/master/mistral/services/workbooks.py#L27
  https://github.com/stackforge/mistral/blob/master/mistral/engine/data_flow.py#L92
- Workbooks can't be sharable if the trust is created at workbook create time.
- If the trust is not created at workbook create time, how do you use
  triggers?

It seems to me that it is a mistake putting the triggers in the workbook
because there are three entities here:
1) the shareable workbook with tasks (a template really that could be stored in 
glance)
2) the execution entity (keeps track of the running tasks)
3) the person / trigger that initiates the execution
   - execution context
   - authenticated token

If we put 3) into 1) we are going to have authentication issues and
potentially give up on the idea of sharing workbooks.

I'd suggest we have a new entity (and endpoint) for triggers.
- This would associate 3 things: trust_id, workbook and trigger rule.
- This could then also be used to generate a URL for ceilometer or solum to
  call in an autonomous way (see the sketch below).
- One issue: if your workflow takes a *really* long time and you don't use
  the trigger, then you won't have a trust, but a normal user token. But
  maybe if the user manually initiates the execution, we can create a manual
  trigger in the background?
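
To make that concrete, here is a rough sketch of what such a trigger resource
might carry - every field name below is hypothetical and only illustrates the
three associations, it is not a proposed API:

# Hypothetical POST body for a /v1/triggers endpoint (illustrative only).
trigger = {
    "name": "scale-up-on-cpu-alarm",
    "workbook_name": "autoscale_workbook",  # the shareable workbook (1)
    "trust_id": "<keystone-trust-id>",      # credentials to execute with (3)
    "rule": "ceilometer.alarm",             # what is allowed to fire this trigger
}
# The service could then hand back a webhook URL, e.g.
# /v1/triggers/<trigger-id>/execute, for ceilometer or solum to call
# autonomously.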

I can also help out with:
https://blueprints.launchpad.net/mistral/+spec/mistral-multitenancy
I believe all that needs to be done is to filter the db items by the
project_id that is in the user context.
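
For illustration, a minimal sketch of that kind of filtering with SQLAlchemy
(the model and helper names are made up, this is not Mistral's actual code):

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Workbook(Base):
    __tablename__ = 'workbooks'
    id = sa.Column(sa.String(36), primary_key=True)
    name = sa.Column(sa.String(255))
    project_id = sa.Column(sa.String(36), index=True)

def get_workbooks(session, project_id):
    # Scope every listing to the project_id taken from the request context.
    return session.query(Workbook).filter_by(project_id=project_id).all()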

Any thoughts on the above (or better ways of moving forward)?

-Angus


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][gate] ceilometer unit test frequently failing in gate

2014-06-10 Thread Eoghan Glynn


 Over the last 7 days ceilometer unit test jobs have had an 18% failure rate
 in the gate queue [0]. While we expect to see some failures in integration
 testing, unit tests should not be failing in the gate with such a high
 frequency (and for so long).
 
 It looks like these failures are due to two bugs [1] [2]. I would like to
 propose that, until these bugs are resolved, ceilometer refrain from
 approving patches so as not to negatively impact the gate queue, which is
 already in a tenuous state.

Hi Joe,

Thanks for raising this.

We have approved patches addressing both persistent failures
in the verification queue:

  https://review.openstack.org/98953
  https://review.openstack.org/98820

BTW these data on per-project unit test failure rates sound
interesting and useful. Are these rates surfaced somewhere
easily consumable (by folks outside of the QA team)?

Cheers,
Eoghan
 
 best,
 Joe
 
 [0]
 http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiRmluaXNoZWQ6XCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIgQU5EIHByb2plY3Q6XCJvcGVuc3RhY2svY2VpbG9tZXRlclwiIEFORCBidWlsZF9xdWV1ZTpcImdhdGVcIiBBTkQgKGJ1aWxkX25hbWU6XCJnYXRlLWNlaWxvbWV0ZXItcHl0aG9uMjdcIiBPUiAgYnVpbGRfbmFtZTpcImdhdGUtY2VpbG9tZXRlci1weXRob24yNlwiKSIsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50Iiwib2Zmc2V0IjowLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwMjM1NjkyMDE2MCwibW9kZSI6InNjb3JlIiwiYW5hbHl6ZV9maWVsZCI6ImJ1aWxkX3N0YXR1cyJ9
 [1] https://bugs.launchpad.net/ceilometer/+bug/1323524
 [2] https://bugs.launchpad.net/ceilometer/+bug/1327344
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] DRBD integration as volume driver

2014-06-10 Thread Philipp Marek
Hello Duncan,

 The best thing to do with the code is push up a gerrit review! No need
 to be shy, and you're very welcome to push up code before the
 blueprint is in, it just won't get merged.
thank you for your encouragement!


I pushed another fix for Cinder last week (2 lines, allowing the services to
be started via pudb) by committing


commit 7b6c6685ba3fb40b6ed65d8e3697fa9aac899d85
Author: Philipp Marek philipp.ma...@linbit.com
Date:   Fri Jun 6 11:48:52 2014 +0200

Make starting cinder services possible with pudb, too.


I had that rebased to be on top of
6ff7d035bf507bf2ec9d066e3fcf81f29b4b481c
(the then-master HEAD), and pushed to
refs/for/master
on
ssh://phma...@review.openstack.org:29418/openstack/cinder

but couldn't find the commit in Gerrit anywhere ..

Even a search
https://review.openstack.org/#/q/owner:self,n,z
is empty.


Clicking around I found
https://review.openstack.org/#/admin/projects/openstack/cinder
which says
  "Require a valid contributor agreement to upload: TRUE"
but to the best of my knowledge this requirement is already satisfied:
https://review.openstack.org/#/settings/agreements
says
  "Verified   ICLA   OpenStack Individual Contributor License Agreement"


So I'm a bit confused right now - what am I doing wrong?



 I'm very interested in this code.
As soon as I've figured out how this Gerrit thing works you can take a 
look ... (or even sooner, see the github link in my previous mail.)



Regards,

Phil


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-sdk-php] Pending reviews

2014-06-10 Thread Jamie Hannaford
Hey folks,

Could we get these two patches reviewed either today or tomorrow? The first is 
array syntax:

https://review.openstack.org/#/c/94323/5

The second is removing the “bin” and “scripts” directories from the top-level
tree, as discussed in last week’s meeting:

https://review.openstack.org/#/c/98048/

Neither has code changes, so they should be fairly simple to review. Thanks!

Jamie



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] AggregateMultiTenancyIsolation scheduler filter - bug, or new feature proposal?

2014-06-10 Thread Jesse Pretorius
On 9 June 2014 15:18, Belmiro Moreira moreira.belmiro.email.li...@gmail.com
 wrote:

 I would say that is a documentation bug for the
 “AggregateMultiTenancyIsolation” filter.


Great, thanks. I've logged a bug for this:
https://bugs.launchpad.net/openstack-manuals/+bug/1328400


 When this was implemented the objective was to schedule only instances
 from specific tenants for those aggregates but not make them exclusive.


 That’s why the work on
 https://blueprints.launchpad.net/nova/+spec/multi-tenancy-isolation-only-aggregates
 started but was left on hold because it was believed
 https://blueprints.launchpad.net/nova/+spec/whole-host-allocation had
 some similarities and eventually could solve the problem in a more generic
 way.


 However p-clouds implementation is marked as “slow progress” and I believe
 there is no active work at the moment.


 It is probably a good time to review the ProjectsToAggregateFilter filter
 again. The implementation and reviews are available at
 https://review.openstack.org/#/c/28635/


Agreed. p-clouds is a much greater framework with much deeper and wider
effects. The isolated aggregate which you submitted code for is exactly
what we're looking for and actually what we're using in production today.

I'm proposing that we put together the nova-spec for
https://blueprints.launchpad.net/nova/+spec/multi-tenancy-isolation-only-aggregates,
but as suggested in my earlier message I think a simpler approach would be
to modify the existing filter to meet our needs by simply using an
additional metadata tag to designate the aggregate as an exclusive one. In
the blueprint you did indicate that you were going to put together a
nova-spec for it, but I couldn't find one in the specs repository - either
merged or WIP.
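
To illustrate the behaviour I have in mind, here is a rough sketch; it is not
the existing filter code, and the filter_tenant_exclusive key is purely
hypothetical:

# Decide whether a host may receive an instance for tenant_id.
# 'aggregates' maps aggregate name -> metadata dict; 'host_aggregates' is the
# set of aggregate names the host belongs to.
def tenant_allowed_on_host(tenant_id, host_aggregates, aggregates):
    def tenants_of(metadata):
        return {t.strip()
                for t in metadata.get('filter_tenant_id', '').split(',') if t}

    # Current behaviour: an aggregate listing tenants only accepts those tenants.
    for name in host_aggregates:
        allowed = tenants_of(aggregates[name])
        if allowed and tenant_id not in allowed:
            return False

    # Proposed addition: if the tenant appears in any aggregate flagged as
    # exclusive, it may only land on hosts inside such an aggregate.
    exclusive = {name for name, metadata in aggregates.items()
                 if metadata.get('filter_tenant_exclusive') == 'true'
                 and tenant_id in tenants_of(metadata)}
    if exclusive and not (host_aggregates & exclusive):
        return False
    return True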


 One of the problems raised was performance concerns considering the number
 of DB queries required. However this can be documented if people intend to
 enable the filter.


As suggested by Phil Day in https://review.openstack.org/#/c/28635/ there
is now a caching capability (landed in
https://review.openstack.org/#/c/33720/) which reduces the number of DB
calls.

Can I suggest that we collaborate on the spec? Perhaps we can discuss this
on IRC? My nick is odyssey4me and I'm in #openstack much of the typical
working day and often in the evenings. My time zone is GMT+2.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Containers] Notice: Containers Team meeting cancelled this week

2014-06-10 Thread Adrian Otto
Team,

Due to a number of expected absences (DockerCon plenaries conflict with our 
regularly scheduled meeting), we will skip our Containers Team Meeting this 
week. Please accept my sincere apologies for the short notice. Our next 
scheduled meeting is:

2014-06-17 2200 UTC

I look forward to meeting with you then.

Regrets,

Adrian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] 2014.1.1 preparation

2014-06-10 Thread Sergey Lukjanov
All patches merged now. Here is the launchpad page for the sahara
2014.1.1 release - https://launchpad.net/sahara/+milestone/2014.1.1

We're doing some final testing now and the release tag will be pushed later today.

Thanks.

On Thu, Jun 5, 2014 at 4:34 PM, Sergey Lukjanov slukja...@mirantis.com wrote:
 All patches are now approved but stuck in the gate, so the 2014.1.1
 release for Sahara will happen right after all changes are merged into the
 stable/icehouse branch.

 On Tue, Jun 3, 2014 at 2:30 PM, Sergey Lukjanov slukja...@mirantis.com 
 wrote:
 Okay, it makes sense, I've updated the etherpad -
 https://etherpad.openstack.org/p/sahara-2014.1.1

 Here is the chain of backports for 2014.1.1 -
 https://review.openstack.org/#/q/topic:sahara-2014.1.1,n,z

  Reviews appreciated; all changes are cherry-picked and only one conflict
  was in
  https://review.openstack.org/#/c/97458/1/sahara/swift/swift_helper.py,cm
  due to the multi-region support addition.

 Thanks.

 On Tue, Jun 3, 2014 at 1:48 PM, Dmitry Mescheryakov
 dmescherya...@mirantis.com wrote:
 I agree with Andrew and actually think that we do need to have
 https://review.openstack.org/#/c/87573 (Fix running EDP job on
 transient cluster) fixed in stable branch.

 We also might want to add https://review.openstack.org/#/c/93322/
 (Create trusts for admin user with correct tenant name). This is
 another fix for transient clusters, but it is not even merged into
 master branch yet.

 Thanks,

 Dmitry

 2014-06-03 13:27 GMT+04:00 Sergey Lukjanov slukja...@mirantis.com:
 Here is etherpad to track preparation -
 https://etherpad.openstack.org/p/sahara-2014.1.1

 On Tue, Jun 3, 2014 at 10:08 AM, Sergey Lukjanov slukja...@mirantis.com 
 wrote:
 /me proposing to backport:

 Docs:

 https://review.openstack.org/#/c/87531/ Change IRC channel name to
 #openstack-sahara
 https://review.openstack.org/#/c/96621/ Added validate_edp method to
 Plugin SPI doc
 https://review.openstack.org/#/c/89647/ Updated architecture diagram in 
 docs

 EDP:

 https://review.openstack.org/#/c/93564/ 
 https://review.openstack.org/#/c/93564/

 On Tue, Jun 3, 2014 at 10:03 AM, Sergey Lukjanov slukja...@mirantis.com 
 wrote:
 Hey folks,

 this Thu, June 5 is the date for 2014.1.1 release. We already have
 some back ported patches to the stable/icehouse branch, so, the
 question is do we need some more patches to back port? Please, propose
 them here.

 2014.1 - stable/icehouse diff:
 https://github.com/openstack/sahara/compare/2014.1...stable/icehouse

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.




 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-sdk-php] Use of final and private keywords to limit extending

2014-06-10 Thread Choi, Sam
Regarding use of the final keyword and limiting extending in general, a few 
thoughts below:

- While I found the blog post about final classes to be informative, I'd take 
it with a grain of salt. The author bills himself as a consultant who works 
with enterprise web applications. Briefly looking at his background, 
practically all his gigs were short consulting jobs. I don't see a track record 
for open source projects so it would appear that his views are likely more 
applicable for enterprise developers working within closed systems.
- The Java community has already beaten this subject to death over the past 
decade. During recent years, it seems that the debate has waned. I occasionally 
see enterprise Java devs use final to communicate their intent so that their 
system isn't butchered many years down the road when it becomes poorly 
understood legacy code. 
- On the other hand, I hardly ever see final classes in open source code/APIs. 
- Regarding future-proofing, I agree that it's easier to switch from final to 
not using final than the other way around. However, I've actually had cases 
where I needed to extend a final class in an API and was simply annoyed by the 
author's desire to control how I use the API. I also understood that if the 
author were to change certain things, my extension may have to be refactored. 
That's a risk I'm certainly willing to take to get the job done.

Thanks,
--
Sam Choi
Hewlett-Packard Co. 
HP Cloud Services
+1 650 316 1652 / Office


-Original Message-
From: Matthew Farina [mailto:m...@mattfarina.com] 
Sent: Monday, June 09, 2014 7:09 AM
To: Jamie Hannaford
Cc: Shaunak Kashyap; Glen Campbell; OpenStack Development Mailing List (not for 
usage questions); Choi, Sam; Farina, Matt
Subject: Re: [openstack-sdk-php] Use of final and private keywords to limit 
extending

If you don't mind I'd like to step back for a moment and talk about the end
users of this codebase and the types of code it will be used in.

We're looking to make application developers successful in PHP. The top 10% of
PHP application developers aren't an issue. Whether they have an SDK or not,
they will build amazing things. It's the long tail of app devs that concerns
me. Many of these developers don't know things we might take for granted, like
dependency injection. A lot of them may be writing spaghetti procedural code. I
use these examples because I've run into them in the past couple of months. We
need to make these folks successful in a cost-effective and low-barrier-to-entry
manner.

When I've gotten into the world of closed source PHP (or any other language for 
that matter) and work that's not in the popular space I've seen many things 
that aren't clean or pretty. But, they work.

That means this SDK needs to be useful in the modern frameworks (which vary 
widely on opinions) and in environments we may not like.

The other thing I'd like to talk about is the protected keyword. I use this a
lot. Using protected means an outside caller can't access the method; only
other methods on the class, or on classes that extend it, can.
This is an easy way to have an API and internals.

Private is different. Private means it's part of the class but not there for
extended classes. It's not just about controlling the public API for callers,
but also about not letting classes that extend this one access the functionality.

Given the scope of who our users are...

- Any place we use the `final` scoping we need to explain how to extend it 
properly. It's a teaching moment for someone who might not come to a direction 
on what to do very quickly. Think about the long tail of developers and 
projects, most of which are not open source.

Note, I said I'm not opposed to using final. It's an intentional decision. For 
the kinds of things we're doing I can't see all too many use cases for using
final. We need to enable users to be successful without controlling how they 
write applications because this is an add-on to help them not a driver for 
their architecture.

- For scoping private and public APIs, `protected` is a better keyword unless
we intend to block extension. If we block extension we should explain
how to handle overriding things that are likely to happen in real world
applications that are not ideally written or architected.

At the end of the day, applications that successfully do what they need to do 
while using OpenStack on the backend is what will make OpenStack more 
successful. We need to help make it easy for the developers, no matter how they 
choose to code, to be successful. I find it useful to focus on end users and 
their practical cases over the theory of how to design something.

Thoughts,
Matt


On Fri, Jun 6, 2014 at 10:01 AM, Jamie Hannaford 
jamie.hannaf...@rackspace.com wrote:
 So this is an issue that’s been heavily discussed recently in the PHP 
 community.

 Based on personal opinion, I heavily favor and use private properties 
 in software I write. I haven’t, however, used the “final” 

Re: [openstack-dev] Promoting healing script to scheme migration script?

2014-06-10 Thread Anna Kamyshnikova
Hi,

Here is a link to the WIP healing script: https://review.openstack.org/96438.
The idea it is based on is shown in this spec:
https://review.openstack.org/95738.
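
For anyone curious how the comparison itself can work, here is a minimal
sketch of the underlying idea using Alembic's autogenerate comparison (the
model import and database URL are placeholders; this is not the script under
review):

import sqlalchemy as sa
from alembic.migration import MigrationContext
from alembic.autogenerate import compare_metadata

# Placeholder: the declarative Base that holds the project's current models.
from myproject.db.models import Base

engine = sa.create_engine('mysql://user:password@localhost/mydb')
with engine.connect() as conn:
    context = MigrationContext.configure(conn)
    # Each entry describes an add/drop/alter needed to make the live schema
    # match the models in the codebase.
    for diff in compare_metadata(context, Base.metadata):
        print(diff)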

Regards,
Ann


On Mon, Jun 9, 2014 at 7:07 PM, Johannes Erdfelt johan...@erdfelt.com
wrote:

 On Mon, Jun 09, 2014, Jakub Libosvar libos...@redhat.com wrote:
  I'd like to get some opinions on the following idea:
 
  Because we currently have (thanks to Ann) a WIP healing script capable
  of changing the database schema by comparing tables in the database to
  the models in the current codebase, I started to think whether it could
  be used generally for db upgrades instead of generating migration scripts.

 Do you have a link to these healing scripts?

  If I understand correctly, the purpose of migration scripts used to be to:
  1) separate changes according to plugins
  2) upgrade the database schema
  3) migrate data according to the changed schema
 
  Since we dropped conditional migrations, we can cross out no.1).
  The healing script is capable of doing no.2) without any manual effort
  and without adding a migration script.
 
  That means if we decide to go along with using the script for updating the
  database schema, migration scripts will be needed only for data
  migration (no.3)), which is in my experience rare.
 
  Another benefit would be that we won't need to store all the database
  models from the Icehouse release, which we would probably need in case we
  want to heal the database in order to achieve an idempotent Icehouse
  database schema with the Juno codebase.
 
  Please share your ideas and reveal potential glitches in the proposal.

 I'm actually working on a project to implement declarative schema
 migrations for Nova using the existing model we currently maintain.

 The main goals for our project are to reduce the amount of work
 maintaining the database schema but also to reduce the amount of
 downtime during software upgrades by doing schema changes online (where
 possible).

 I'd like to see what others have done and are working on in the future so
 we don't unnecessarily duplicate work :)

 JE



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-06-10 Thread Clint Byrum
Excerpts from Jaromir Coufal's message of 2014-06-08 16:44:58 -0700:
 Hi,
 
 it looks that there is no more activity on the survey for mid-cycle 
 dates so I went forward to evaluate it.
 
 I created a table view into the etherpad [0] and results are following:
 * option1 (Jul 28 - Aug 1): 27 attendees - collides with Nova/Ironic
 * option2 (Jul 21-25) : 27 attendees
 * option3 (Jul 25-29) : 17 attendees - collides with Nova/Ironic
 * option4 (Aug 11-15) : 13 attendees
 
 I think that we can remove options 3 and 4 from the consideration, 
 because there is lot of people who can't make it. So we have option1 and 
 option2 left. Since Robert and Devananda (PTLs on the projects) can't 
 make option1, which also conflicts with Nova/Ironic meetup, I think it 
 is pretty straightforward.
 
 Based on the survey the winning date for the mid-cycle meetup is 
 option2: July 21th - 25th.
 
 Does anybody have very strong reason why we shouldn't fix the date for 
 option2 and proceed forward with the organization for the meetup?
 

July 21-25 is also the shortest notice. I will not be able to attend, as
plans have already been made for the summer and I've already been travelling
quite a bit recently; after all, we were all just at the summit a few weeks
ago.

I question the reasoning that being close to FF (feature freeze) is a bad
thing, and suggest adding much later dates. But I understand that since the
chosen dates are so close, there is a need to make a decision immediately.

Alternatively, I suggest that we split Heat out of this, and aim at
later dates in August.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-10 Thread Flavio Percoco

On 09/06/14 19:31 +, Kurt Griffiths wrote:

Folks, this may be a bit of a bombshell, but I think we have been dancing
around the issue for a while now and we need to address it head on. Let me
start with some background.

Back when we started designing the Marconi API, we knew that we wanted to
support several messaging patterns. We could do that using a unified queue
resource, combining both task distribution and feed semantics. Or we could
create disjoint resources in the API, or even create two separate services
altogether, one each for the two semantic groups.

The decision was made to go with a unified API for these reasons:

 • It would afford hybrid patterns, such as auditing or diagnosing a task
   distribution queue
 • Once you implement guaranteed delivery for a message feed over HTTP,
   implementing task distribution is a relatively straightforward addition. If
   you want both types of semantics, you don’t necessarily gain anything by
   implementing them separately.

Lately we have been talking about writing drivers for traditional message
brokers that will not be able to support the message feeds part of the API.
I’ve started to think that having a huge part of the API that may or may not
“work”, depending on how Marconi is deployed, is not a good story for users,
esp. in light of the push to make different clouds more interoperable. 


Therefore, I think we have a very big decision to make here as a team and a
community. I see three options right now. I’ve listed several—but by no means
conclusive—pros and cons for each, as well as some counterpoints, based on past
discussions.

Option A. Allow drivers to only implement part of the API

For:

 • Allows for a wider variety of backends. (counter: may create subtle
   differences in behavior between deployments)
 • May provide opportunities for tuning deployments for specific workloads

 • Simplifies client implementation and API


Against:

 • Makes it hard for users to create applications that work across multiple
   clouds, since critical functionality may or may not be available in a given
   deployment. (counter: how many users need cross-cloud compatibility? Can
   they degrade gracefully?)


This is definitely unfortunate but I believe it's a fair trade-off. I
believe the same happens in other services that have support for
different drivers.

We said we'd come up with a set of features that we considered core
for Marconi and that based on that we'd evaluate everything. Victoria
has been doing a great job with identifying what endpoints can/cannot
be supported by AMQP brokers. I believe this is a key thing to have
before we make any decision here.



Option B. Split the service in two. Different APIs, different services. One
would be message feeds, while the other would be something akin to Amazon’s
SQS.

For:

 • Same as Option A, plus creates a clean line of functionality for deployment
   (deploy one service or the other, or both, with clear expectations of what
   messaging patterns are supported in any case).

Against:

 • Removes support for hybrid messaging patterns (counter: how useful are such
   patterns in the first place?)
 • Operators now have two services to deploy and support, rather than just one
   (counter: can scale them independently, perhaps leading to gains in
   efficiency)



Strong -1 for having 2 separate services. IMHO, this would just
complicate things from an admin / user perspective.



Option C. Require every backend to support the entirety of the API as it now
stands. 


For:

 • Least disruptive in terms of the current API design and implementation
 • Affords a wider variety of messaging patterns (counter: YAGNI?)
 • Reuses code in drivers and API between feed and task distribution
   operations (counter: there may be ways to continue sharing some code if the
   API is split)

Against:

 • Requires operators to deploy a NoSQL cluster (counter: many operators are
   comfortable with NoSQL today)
 • Currently requires MongoDB, which is AGPL (counter: a Redis driver is under
   development)
 • A unified API is hard to tune for performance (counter: Redis driver should
   be able to handle high-throughput use cases, TBD)



A and C are both reasonable solutions. I personally prefer A with a
well-defined set of optional features. In addition to that, we have
discussed having feature discoverability that will allow users to
know which features are supported. This makes developing applications on top
of Marconi a bit harder, though.
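
As a purely illustrative example of what that discoverability could look like
from a client's point of view (no such endpoint exists today; the URL and
field names are made up):

import requests

caps = requests.get('https://queues.example.com/v1/capabilities').json()

if 'claims' in caps.get('operations', []):
    # Task-distribution semantics (claim/ack) are available here.
    use_worker_pattern = True
else:
    # Fall back to feed-style consumption only.
    use_worker_pattern = False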

That said, I believe what needs to be done is to re-think some of the
existing endpoints in order to embrace existing technologies /
protocols. With the addition of flavors, we'll have the same issue.
For instance, a user with 2 queues on different storage drivers - 1 on
mongodb and 1 on, say, rabbitmq - will likely have to develop based on
the features exposed by the driver with fewer supported features -
unless the application is segmented. In other words, the user won't be
able to 

Re: [openstack-dev] [oslo] oslo-specs approval process

2014-06-10 Thread Flavio Percoco

On 09/06/14 20:59 -0400, Doug Hellmann wrote:

On Mon, Jun 9, 2014 at 5:56 PM, Ben Nemec openst...@nemebean.com wrote:

Hi all,

While the oslo-specs repository has been available for a while and a
number of specs proposed, we hadn't agreed on a process for actually
approving them (i.e. the normal 2 +2's or something else).  This was
discussed at the Oslo meeting last Friday and the method decided upon by
the people present was that only the PTL (Doug Hellmann, dhellmann on
IRC) would approve specs.

However, he noted that he would still like to see at _least_ 2 +2's on a
spec, and +1's from interested users are always appreciated as well.
Basically he's looking for a consensus from the reviewers.

This e-mail is intended to notify anyone interested in the oslo-specs
process of how it will work going forward, and to provide an opportunity
for anyone not at the meeting to object if they so desire.  Barring a
significant concern being raised, the method outlined above will be
followed from now on.

Meeting discussion log:
http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-06-16.00.log.html#l-66

Thanks.

-Ben


Thanks, Ben.

As Ben said, everyone is welcome to review the plans but I would
*especially* like all liaisons from other programs to take a look
through the specs with an eye for potential issues.

Thanks in advance for your feedback!
Doug


Thanks for the heads up.
Flavio







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-10 Thread Clint Byrum
Excerpts from Vijay Venkatachalam's message of 2014-06-09 21:48:43 -0700:
 
 My vote is for option #2 (without the registration). It is simpler to start 
 with this approach. How is delete handled though?
 
  Ex. What is the expectation when a user attempts to delete a
  certificate/container which is referred to by an entity like an LBaaS listener?
 
 
 1.   Will there be validation in Barbican to prevent this? *OR*
 
 2.   LBaaS listener will have a dangling reference/pointer to certificate?
 

Dangling reference. To avoid that, one should update all references
before deleting.
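
For illustration, that ordering would look roughly like this; both client
objects and method names below are hypothetical, not either project's real
API:

def rotate_certificate(lbaas, barbican, listener_id, new_ref, old_ref):
    # 1. Re-point the listener (and any other consumers) at the new container...
    lbaas.update_listener(listener_id, tls_container_ref=new_ref)
    # 2. ...and only then delete the old container, so nothing is left dangling.
    barbican.delete_container(old_ref)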

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of the word certified

2014-06-10 Thread Mark McLoughlin
On Mon, 2014-06-09 at 20:14 -0400, Doug Hellmann wrote:
 On Mon, Jun 9, 2014 at 6:11 PM, Eoghan Glynn egl...@redhat.com wrote:
 
 
  Based on the discussion I'd like to propose these options:
  1. Cinder-certified driver - This is an attempt to move the certification
  to the project level.
  2. CI-tested driver - This is probably the most accurate, at least for what
  we're trying to achieve for Juno: Continuous Integration of Vendor-specific
  Drivers.
 
  Hi Ramy,
 
  Thanks for these constructive suggestions.
 
  The second option is certainly a very direct and specific reflection of
  what is actually involved in getting the Cinder project's imprimatur.
 
 I do like tested.
 
 I'd like to understand what the foundation is planning for
 certification as well, to know how big of an issue this really is.
 Even if they aren't going to certify drivers, I have heard discussions
 around training and possibly other areas so I would hate for us to
 introduce confusion by having different uses of that term in similar
 contexts. Mark, do you know who is working on that within the board or
 foundation?

http://blogs.gnome.org/markmc/2014/05/17/may-11-openstack-foundation-board-meeting/

Boris Renski raised the possibility of the Foundation attaching the
trademark to a verified, certified or tested status for drivers. It
wasn't discussed at length because board members hadn't been briefed in
advance, but I think it's safe to say there was a knee-jerk negative
reaction from a number of members. This is in the context of the
DriverLog report:

  http://stackalytics.com/report/driverlog
  
http://www.mirantis.com/blog/cloud-drivers-openstack-driverlog-part-1-solving-driver-problem/
  http://www.mirantis.com/blog/openstack-will-open-source-vendor-certifications/

AIUI the CI tested phrase was chosen in DriverLog to avoid the
controversial area Boris describes in the last link above. I think that
makes sense. Claiming this CI testing replaces more traditional
certification programs is a sure way to bog potentially useful
collaboration down in vendor politics.

Avoiding dragging the project into those sorts of politics is something
I'm really keen on, and why I think the word certification is best
avoided so we can focus on what we're actually trying to achieve.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Diagnostics spec

2014-06-10 Thread Gary Kotton
Hi,
Any chance of getting a review on https://review.openstack.org/84691.
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-10 Thread Mark McLoughlin
On Mon, 2014-06-09 at 19:31 +, Kurt Griffiths wrote:
 Lately we have been talking about writing drivers for traditional
 message brokers that will not be able to support the message feeds
 part of the API. I’ve started to think that having a huge part of the
 API that may or may not “work”, depending on how Marconi is deployed,
 is not a good story for users, esp. in light of the push to make
 different clouds more interoperable.

Perhaps the first point to get super clear on is why drivers for
traditional message brokers are needed. What problems would such drivers
address? Who would the drivers help? Would the Marconi team recommend
using any of those drivers for a production queuing service? Would the
subset of Marconi's API which is implementable by these drivers really
be useful for application developers?

I'd like to understand that in more detail because I worry the Marconi
team is being pushed into adding these drivers without truly believing
they will be useful. And if that would not be a sane context to make a
serious architectural change.

OTOH if there are real, valid use cases for these drivers, then
understanding those would inform the architecture decision.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ml2] Too much shim rest proxy mechanism drivers in ML2

2014-06-10 Thread Irena Berezovsky
Hi Luke,
Please see my comments inline.

BR,
Irena
From: Luke Gorrie [mailto:l...@tail-f.com]
Sent: Monday, June 09, 2014 12:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ml2] Too much shim rest proxy 
mechanism drivers in ML2

On 6 June 2014 10:17, henry hly 
henry4...@gmail.commailto:henry4...@gmail.com wrote:
ML2 mechanism drivers are becoming another kind of plugin. Although they can
be loaded together, they cannot work with each other.
[...]
Could we remove all device-related adaptation (rest/ssh/netconf/of... proxy) from
these mechanism drivers to the agent side, leaving only the necessary code in the
plugin?

In the Snabb NFV mech driver [*] we are trying a design that you might find 
interesting.

We stripped the mech driver down to bare bones and declared that the agent has 
to access the Neutron configuration independently.

In practice this means that our out-of-tree agent is connecting to Neutron's
MySQL database directly instead of being fed config changes by custom sync code
in ML2. This means there is very little work for the mech driver to do (in our
case checking configuration and performing special port binding).
[IrenaB] The DB access approach was previously used by OVS and LinuxBridge 
Agents and at some point (~Grizzly Release) was changed to use RPC 
communication. You can see the reasons and design details covered by [1] and 
the patch itself in [2]

[1]: 
https://docs.google.com/document/d/1MbcBA2Os4b98ybdgAw2qe_68R1NG6KMh8zdZKgOlpvg/edit?pli=1
[2] https://review.openstack.org/#/c/9591/

We are also trying to avoid running an OVS/LinuxBridge-style agent on the 
compute hosts in order to keep the code footprint small. I hope we will succeed 
-- I'd love to hear if somebody else is running agent-less? Currently we 
depend on a really ugly workaround to make VIF binding succeed and we are 
looking for a clean alternative: 
https://github.com/lukego/neutron/commit/31d6d0657aeae9fd97a63e4d53da34fb86be92f7
[IrenaB] I think that for “Non SDN Controller” Mechanism Drivers there will be 
a need for some sort of agent to handle port update events even though it might 
not be required in order to bind the port.

[*] Snabb NFV mech driver code: https://review.openstack.org/95711

Cheers,
-Luke


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] Re: NFV in OpenStack use cases and context

2014-06-10 Thread MENDELSOHN, ITAI (ITAI)
Shall we continue this discussion?

Itai

On 6/9/14 8:54 PM, Steve Gordon sgor...@redhat.com wrote:

- Original Message -
 From: Steve Gordon sgor...@redhat.com
 To: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com,
OpenStack Development Mailing List (not for usage
 
 Just adding openstack-dev to the CC for now :).
 
 - Original Message -
  From: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com
  Subject: Re: NFV in OpenStack use cases and context
  
  Can we look at them one by one?
  
  Use case 1 - It's pure IaaS.
  Use case 2 - Virtual network function as a service. It's actually about
  exposing services to end customers (enterprises) by the service provider.
  Use case 3 - VNPaaS - similar to #2 but at the service level, at larger
  scale and not at the app level only.
  Use case 4 - VNF forwarding graphs. It's actually about dynamic
  connectivity between apps.
  Use case 5 - vEPC and vIMS - those are very specific (good) examples of SP
  services to be deployed.
  Use case 6 - Virtual mobile base station. Another very specific example,
  with different characteristics than the other two above.
  Use case 7 - Home virtualisation.
  Use case 8 - Virtual CDN.
  
  As I see it those have totally different relevancy to OpenStack.
  Assuming we don't want to boil the ocean here...
  
  1-3 seem to me less relevant here.
  4 seems to be a Neutron area.
  5-8 seem to be useful for understanding the needs of the NFV apps. The use
  cases can help to map those needs.
  
  For 4 I guess the main part is about chaining and Neutron between DCs.
  Some may call it SDN in WAN...
  
  For 5-8, at the end, an option is to map all those into:
  - performance (net BW, storage BW mainly). That can be mapped to SR-IOV,
    NUMA, etc.
  - determinism. Shall we especially minimise noisy neighbours? Not sure
    how NFV is special here, but for sure it's a major concern for lots of
    SPs. That can be mapped to huge pages, cache QoS, etc.
  - overcoming of short-term hurdles (just because of app migration
    issues). A small example is the need to define the tick policy of KVM
    just because that's what the app needs. Again, not sure how special to
    NFV it is, and again a major concern mainly of application owners in the
    NFV domain.
  
  Make sense?

Hi Itai,

This makes sense to me. I think what we need to expand upon, with the
ETSI NFV documents as a reference, is a two to three paragraph
explanation of each use case explained at a more basic level - ideally on
the Wiki page. It seems that use case 5 might make a particularly good
initial target to work on fleshing out as an example? We could then look
at linking the use case to concrete requirements based on this, I suspect
we might want to break them down into:

a) The bare minimum requirements for OpenStack to support the use case at
all. That is, requirements that without which the VNF simply can not
function.

b) The requirements that are not mandatory but would be beneficial for
OpenStack to support the use case. In particular, these might be
requirements that would improve VNF performance or reliability by some
margin (possibly significantly) but which the VNF can function without if
absolutely required.

Thoughts?

Steve




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ServiceVM] servicevm IRC meeting minutes (June 10 Tuesday 5:00(AM)UTC-)

2014-06-10 Thread Isaku Yamahata
Meeting minutes of June 10

* Announcement
  - started to create repos in stackforge
review is on-going

* nfv follow up
  blueprints
  - VLAN-aware-VLAN, l2-gateway, network-truncking
https://review.openstack.org/#/c/97714/
https://review.openstack.org/#/c/94612/
https://blueprints.launchpad.net/neutron/+spec/l2-gateway
ACTION: yamahata update BP, see how review goes unless eric does. 
(yamahata, 05:13:45)
ACTION: s3wong ping rossella_s (yamahata, 05:13:57)

  - unfirewall port
https://review.openstack.org/97715 (yamahata, 05:15:21)
= collect use case/requirements on neutron ports
ACTION: anyone update wiki page with usecase/requirement (yamahata, 
05:28:43)
https://wiki.openstack.org/wiki/ServiceVM/neutron-port-attributes 
(yamahata, 05:28:06)

* open discussion (yamahata, 05:37:12)
  some ideas
  relationship with NFV team

thanks,

On Mon, Jun 09, 2014 at 03:16:27PM +0900,
Isaku Yamahata isaku.yamah...@gmail.com wrote:

 Hi. This is a reminder mail for the servicevm IRC meeting
 June 10, 2014 Tuesdays 5:00(AM)UTC-
 #openstack-meeting on freenode
 https://wiki.openstack.org/wiki/Meetings/ServiceVM
 
 agenda: (feel free to add your items)
 * project incubation
 * NFV meeting follow up
 * open discussion
 
 -- 
 Isaku Yamahata isaku.yamah...@gmail.com

-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ml2] Too much shim rest proxy mechanism drivers in ML2

2014-06-10 Thread Luke Gorrie
Hi Irena,

Thanks for the very interesting perspective!

On 10 June 2014 10:57, Irena Berezovsky ire...@mellanox.com wrote:

  *[IrenaB] The DB access approach was previously used by OVS and
 LinuxBridge Agents and at some point (~Grizzly Release) was changed to use
 RPC communication.*


That is very interesting. I've been involved in OpenStack since the Havana
cycle and was not familiar with the old design.

I'm optimistic about the scalability of our implementation. We have
sanity-tested with 300 compute nodes and a 300ms sync interval. I am sure
we will find some parts that we need to spend optimization energy on,
however.

The other scalability aspect we are being careful of is the cost of
individual update operations. (In LinuxBridge that would be the iptables,
ebtables, etc commands.) In our implementation the compute nodes preprocess
the Neutron config into a small config file for the local traffic plane and
then load that in one atomic operation (SIGHUP style). Again, I am sure
we will find cases that we need to spend optimization effort on, but the
design seems scalable to me thanks to the atomicity.

For concreteness, here is the agent we are running on the DB node to make
the Neutron config available:
https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-master

and here is the agent that pulls it onto the compute node:
https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-agent

TL;DR we snapshot the config with mysqldump and distribute it with git.
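
For readers who want the gist without opening those links, the master side
boils down to something like the following; this is a simplified Python
rendering of the idea, not the actual agent, and the repo path and database
name are placeholders:

import subprocess

def snapshot_and_publish(repo='/var/lib/neutron-sync'):
    # Snapshot the Neutron configuration from MySQL...
    dump = subprocess.check_output(['mysqldump', '--skip-comments', 'neutron'])
    with open(repo + '/neutron.sql', 'wb') as f:
        f.write(dump)
    # ...and distribute it through git; the compute-node agents pull and
    # preprocess it as described above.
    subprocess.check_call(['git', '-C', repo, 'add', 'neutron.sql'])
    subprocess.check_call(['git', '-C', repo, 'commit', '--allow-empty', '-m', 'sync'])
    subprocess.check_call(['git', '-C', repo, 'push'])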

Here's the sanity test I referred to:
https://groups.google.com/d/msg/snabb-devel/blmDuCgoknc/PP_oMgopiB4J

I will be glad to report on our experience and what we change based on our
deployment experience during the Juno cycle.

*[IrenaB] I think that for “Non SDN Controller” Mechanism Drivers there
 will be need for some sort of agent to handle port update events even
 though it might not be required in order to bind the port.*


True. Indeed, we do have an agent running on the compute host, and we
are synchronizing it with port updates based on the mechanism described
above.

Really what I mean is: Can we keep our agent out-of-tree and apart from ML2
and decide for ourselves how to keep it synchronized (instead of using the
MQ)? Is there a precedent for doing things this way in an ML2 mech driver
(e.g. one of the SDNs)?

Cheers!
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Diagnostics spec

2014-06-10 Thread John Garbutt
We have stopped reviewing specs (at least that was the plan), to get
Juno-1 out the door before Thursday.

Hopefully on Friday, it will be full steam ahead with nova-specs reviews.

John

On 10 June 2014 09:44, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 Any chance of getting a review on https://review.openstack.org/84691.
 Thanks
 Gary



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] mocking policy

2014-06-10 Thread Maxime Vidori
+1 for the use of mock.

Is mox3 really needed? Or can we move our Python 3 tests to mock, and use
this library for every test under Python 3?

- Original Message -
From: David Lyle david.l...@hp.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, June 10, 2014 5:58:07 AM
Subject: Re: [openstack-dev] [horizon] mocking policy

I have no problem with this proposal.

David

On 6/4/14, 6:41 AM, Radomir Dopieralski openst...@sheep.art.pl wrote:

Hello,

I'd like to start a discussion about the use of mocking libraries in
Horizon's tests, in particular, mox and mock.

As you may know, Mox is the library that has been used so far, and we
have a lot of tests written using it. It is based on a similar Java
library and does very strict checking, although its error reporting may
leave something to be desired.

Mock is a more pythonic library, included in the stdlib of recent Python
versions, but also available as a separate library for older pythons. It
has a much more relaxed approach, allowing you to only test the things
that you actually care about and to write tests that don't have to be
rewritten after each and every refactoring.

Some OpenStack projects, such as Nova, seem to have adopted an approach
that favors Mock in newly written tests, but allows use of Mox for older
tests, or when it's more suitable for the job.

In Horizon we only use Mox, and Mock is not even in requirements.txt. I
would like to propose to add Mock to requirements.txt and start using it
in new tests where it makes more sense than Mox -- in particular, when
we are writing unit tests that only test a small part of the code.
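
To show the kind of test I mean, here is a minimal, self-contained sketch;
the stubbed API call is a stand-in, not real Horizon code:

import unittest
from unittest import mock   # or 'import mock' on older Pythons


class QuotaPanelTest(unittest.TestCase):
    def test_stubs_only_what_it_needs(self):
        nova_api = mock.Mock()
        nova_api.tenant_quota_get.return_value = {'instances': 10}

        quotas = nova_api.tenant_quota_get('fake-request', 'fake-tenant')

        self.assertEqual({'instances': 10}, quotas)
        nova_api.tenant_quota_get.assert_called_once_with('fake-request',
                                                          'fake-tenant')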

Thoughts?
-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-10 Thread Flavio Percoco

On 10/06/14 09:48 +0100, Mark McLoughlin wrote:

On Mon, 2014-06-09 at 19:31 +, Kurt Griffiths wrote:

Lately we have been talking about writing drivers for traditional
message brokers that will not be able to support the message feeds
part of the API. I’ve started to think that having a huge part of the
API that may or may not “work”, depending on how Marconi is deployed,
is not a good story for users, esp. in light of the push to make
different clouds more interoperable.


Perhaps the first point to get super clear on is why drivers for
traditional message brokers are needed. What problems would such drivers
address? Who would the drivers help? Would the Marconi team recommend
using any of those drivers for a production queuing service? Would the
subset of Marconi's API which is implementable by these drivers really
be useful for application developers?



These are all very good questions that should be taken into account
when evaluating not just the drivers under discussion but also future
drivers.

As mentioned in my previous email on this thread, I don't think we are
ready to make a final/good decision here because it's still not clear
what the trade-off is. Some things are clear, though:

1. Marconi relies on a store-forward message delivery model
2. As of now (v1.x) it relies on Queues as a first-class resource in
the API.

Ideally, those drivers would be used when higher throughput is
needed, since they're known to be faster and were created to solve this
problem.

There's something else that Marconi brings to those technologies,
which is the ability to create clusters of storage - as of now that
storage would be segmented on a pre-configured, per-queue basis. In
other words, Marconi has support for storage shards. This helps
solve some of the scaling issues in some of the existing queuing
technologies. This is probably not really relevant but worth
mentioning.


I'd like to understand that in more detail because I worry the Marconi
team is being pushed into adding these drivers without truly believing
they will be useful. And if that would not be a sane context to make a
serious architectural change.

OTOH if there are real, valid use cases for these drivers, then
understanding those would inform the architecture decision.


Completely agreed with the feeling.

I'm one of those willing to support AMQP brokers but, FWIW, not
blindly. For now, the AMQP driver is a research driver that should
help answering some of the questions you asked above. The work on that
driver is happening in a separate repo.

Your questions, as mentioned above, are a good starting point for this
discussion.

Thanks for the great feedback.

Flavio

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-06-10 Thread Tomas Sedovic
On 10/06/14 10:25, Clint Byrum wrote:
 Excerpts from Jaromir Coufal's message of 2014-06-08 16:44:58 -0700:
 Hi,

 it looks that there is no more activity on the survey for mid-cycle 
 dates so I went forward to evaluate it.

 I created a table view into the etherpad [0] and results are following:
 * option1 (Jul 28 - Aug 1): 27 attendees - collides with Nova/Ironic
 * option2 (Jul 21-25) : 27 attendees
 * option3 (Jul 25-29) : 17 attendees - collides with Nova/Ironic
 * option4 (Aug 11-15) : 13 attendees

 I think that we can remove options 3 and 4 from the consideration, 
 because there is lot of people who can't make it. So we have option1 and 
 option2 left. Since Robert and Devananda (PTLs on the projects) can't 
 make option1, which also conflicts with Nova/Ironic meetup, I think it 
 is pretty straightforward.

 Based on the survey the winning date for the mid-cycle meetup is 
 option2: July 21th - 25th.

 Does anybody have very strong reason why we shouldn't fix the date for 
 option2 and proceed forward with the organization for the meetup?

 
 July 21-25 is also the shortest notice. I will not be able to attend
 as plans have already been made for the summer and I've already been
 travelling quite a bit recently, after all we were all just at the summit
 a few weeks ago.
 
 I question the reasoning that being close to FF is a bad thing, and
 suggest adding much later dates. But I understand since the chosen dates
 are so close, there is a need to make a decision immediately.
 
 Alternatively, I suggest that we split Heat out of this, and aim at
 later dates in August.

Apologies for not participating earlier; I wasn't sure I'd be able to go
until now.

July 21st - 25th doesn't work for me at all (wedding). Any later date
should be okay, so I second both of Clint's suggestions.



 
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] DRBD integration as volume driver

2014-06-10 Thread Philipp Marek
So, I now tried to push the proof-of-concept driver to Gerrit,
and got this:

 Downloading/unpacking dbus (from -r /home/jenkins/workspace/gate-
cinder-pep8/requirements.txt (line 32))
   http://pypi.openstack.org/openstack/dbus/ uses an insecure transport 
scheme (http). Consider using https if pypi.openstack.org has it 
available
   Could not find any downloads that satisfy the requirement dbus (from 
-r /home/jenkins/workspace/gate-cinder-pep8/requirements.txt (line 32))


So, how would I get additional modules (dbus and its dependencies) onto 
pypi.openstack.org? I couldn't find a process for that.


Regards,

Phil


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] DRBD integration as volume driver

2014-06-10 Thread Philipp Marek
Hrmpf, sent too fast again.

I guess https://wiki.openstack.org/wiki/Requirements is the link I was 
looking for.


Sorry for the noise.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Gate still backed up - need assistance with nova-network logging enhancements

2014-06-10 Thread Michael Still
https://review.openstack.org/99002 adds more logging to
nova/network/manager.py, but I think you're not going to love the
debug log level. Was this the sort of thing you were looking for
though?
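
For anyone following along, the kind of work-unit logging being discussed would
look roughly like the sketch below. The module paths and helper names are
assumptions for illustration only; this is not the content of review 99002.

from nova.openstack.common import log as logging
from nova import utils

LOG = logging.getLogger(__name__)


def add_fixed_ip(interface, address):
    # Log the unit of work before doing it...
    LOG.debug("Adding fixed IP %(address)s to %(interface)s",
              {'address': address, 'interface': interface})
    utils.execute('ip', 'addr', 'add', address, 'dev', interface,
                  run_as_root=True)
    # ...and verify the command actually had the intended effect,
    # instead of assuming it did.
    out, _err = utils.execute('ip', 'addr', 'show', 'dev', interface)
    if address.split('/')[0] not in out:
        LOG.warning("Fixed IP %(address)s does not appear on %(interface)s "
                    "after configuration",
                    {'address': address, 'interface': interface})
    else:
        LOG.debug("Verified %(address)s is configured on %(interface)s",
                  {'address': address, 'interface': interface})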

Michael

On Mon, Jun 9, 2014 at 11:45 PM, Sean Dague s...@dague.net wrote:
 Based on some back of envelope math the gate is basically processing 2
 changes an hour, failing one of them. So if you want to know how long
 the gate is, take the length / 2 in hours.

 Right now we're doing a lot of revert roulette, trying to revert things
 that we think landed about the time things went bad. I call this
 roulette because in many cases the actual issue isn't well understood. A
 key reason for this is:

 *nova network is a blackhole*

 There is no work unit logging in nova-network, and no attempted
 verification that the commands it ran did a thing. Most of these
 failures that we don't have good understanding of are the network not
 working under nova-network.

 So we could *really* use a volunteer or two to prioritize getting that
 into nova-network. Without it we might manage to turn down the failure
 rate by reverting things (or we might not) but we won't really know why,
 and we'll likely be here again soon.

 -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] AggregateMultiTenancyIsolation scheduler filter - bug, or new feature proposal?

2014-06-10 Thread Jesse Pretorius
On 10 June 2014 12:11, John Garbutt j...@johngarbutt.com wrote:

 There was a spec I read that was related to this idea of excluding
 things that don't match the filter. I can't seem to find that, but the
 general idea makes total sense.

 As a heads up, the scheduler split means we are wanting to change how
 the aggregate filters are working, more towards this direction,
 proposed by Jay:
 https://review.openstack.org/#/c/98664/


Thanks John. If I understand it correctly, once Jay's patch lands (assuming
it does), then we can simply use the aggregate data returned through the
same method to get the aggregate filter_tenant_id and filter_tenant_exclusive
values for all the hosts in one go, making it much quicker and simpler to
pass or fail the hosts?
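
For context, a filter that consumes that aggregate data directly off the host
state might look roughly like this; the attribute and metadata key names below
are assumptions for illustration, not the actual nova code (and the exclusive
flag would be handled the same way):

class TenantIsolationFilter(object):
    """Minimal sketch of an aggregate-metadata-driven tenant filter."""

    def host_passes(self, host_state, filter_properties):
        tenant_id = filter_properties['context'].project_id

        # Aggregate metadata is assumed to already be present on the
        # host state, so no per-host DB lookups are needed.
        allowed = set()
        for metadata in getattr(host_state, 'aggregates_metadata', []):
            tenants = metadata.get('filter_tenant_id')
            if tenants:
                allowed.update(t.strip() for t in tenants.split(','))

        # Unrestricted hosts accept everyone; restricted hosts only
        # accept the tenants listed in their aggregates.
        return not allowed or tenant_id in allowed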
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-10 Thread Dina Belova
Hello, stackers!


Oslo.messaging is the future of how different OpenStack components communicate
with each other, and I'd really love to start a discussion about how we can
make this library even better than it is now and how we can refactor it to make
it more production-ready.


As we all remember, oslo.messaging was initially conceived as
a logical continuation of nova.rpc - as a separate library, with lots of
transports supported, etc. That’s why oslo.messaging inherited not only the
advantages of how nova.rpc did its work (and there were lots of them), but
also some architectural decisions that currently sometimes lead to
performance issues (we met some of them during the Ceilometer performance
testing [1] in Icehouse).


For instance, a simple test messaging server (with connection pool and
eventlet) can process 700 messages per second. The same functionality
implemented using plain kombu (without connection pool and eventlet)
processes ten times more - 7000-8000 messages per second.
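
(For reference, the plain-kombu baseline mentioned above is roughly of the
following shape; this is a sketch of the idea, not the actual benchmark code.)

from kombu import Connection, Exchange, Queue

exchange = Exchange('test_exchange', type='topic')
queue = Queue('test_queue', exchange, routing_key='test')


def on_message(body, message):
    message.ack()


with Connection('amqp://guest:guest@localhost:5672//') as connection:
    with connection.Consumer(queue, callbacks=[on_message]):
        while True:
            # Single connection, no pool, no eventlet - just drain messages.
            connection.drain_events()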


So we have the following suggestions about how we may make this process
better and quicker (and really I’d love to collect your feedback, folks):


1) Currently we have the main loop running in the Executor class, and I think
it would be much better to move it to the Server class, as that would make the
relationship between the classes easier to follow and leave the Executor only
one task - process the message and that’s it (in blocking or eventlet mode);
a rough sketch follows this list. Moreover, this will make further refactoring
much easier.

2) Some of the driver implementations (such as impl_rabbit and impl_qpid,
for instance) are full of needless separate classes that in reality could be
folded into other ones. There are already some changes making the whole
structure simpler [2], and once the first issue is solved the Dispatcher and
Listener will also be able to be refactored.

3) If we separate the RPC functionality from the messaging functionality, it
will make the code base cleaner and easier to reuse.

4) The connection pool can be refactored to implement more efficient connection
reuse.
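
(A very rough sketch of suggestion 1, with assumed class and method names
rather than the current oslo.messaging API: the server owns the receive loop,
and the executor only decides how each message is processed.)

class MessageHandlingServer(object):

    def __init__(self, listener, dispatcher, executor):
        self._listener = listener      # driver-level listener, poll() assumed
        self._dispatcher = dispatcher  # maps a message to an endpoint method
        self._executor = executor      # blocking or eventlet strategy
        self._running = False

    def start(self):
        self._running = True
        while self._running:
            incoming = self._listener.poll()
            if incoming is not None:
                # The executor no longer owns the loop; it only runs the work.
                self._executor.submit(self._dispatcher.dispatch, incoming)

    def stop(self):
        self._running = False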


Folks, are you ok with such a plan? Alexey Kornienko already started some
of this work [2], but really we want to be sure that we chose the correct
vector of development here.


Thanks!


[1]
https://docs.google.com/document/d/1ARpKiYW2WN94JloG0prNcLjMeom-ySVhe8fvjXG_uRU/edit?usp=sharing

[2]
https://review.openstack.org/#/q/status:open+owner:akornienko+project:openstack/oslo.messaging,n,z

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

2014-06-10 Thread Evgeny Fedoruk
Hi All,

Carlos, Vivek, German, thanks for reviewing the RST doc.
There are some issues on which I want to pin down a final decision here, on
the ML, before writing it into the doc.
Other issues will be commented on in the document itself.


1.   Support/No support in JUNO

Referring to summit's etherpad 
https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7,

a.   An SNI certificates list was decided to be supported. Was a decision made 
not to support it?
A single certificate with multiple domains can only partly address the need for 
SNI; different applications
on the back-end will still need different certificates.

b.  Back-end re-encryption was decided to be supported. Was a decision made 
not to support it?

c.   With front-end client authentication and back-end server 
authentication not supported,
should certificate chains be supported?

2.   Barbican TLS containers

a.   TLS containers are immutable.

b.  A TLS container is allowed to be deleted, always.

     i.   Even when it is used by an LBaaS VIP listener (or other service).

     ii.  Metadata on the TLS container will help the tenant understand that
the container is in use by the LBaaS service/VIP listener.

     iii. If every VIP listener registers itself in the metadata while
retrieving the container, how will that registration be removed when the VIP
listener stops using the certificate?

Please comment on these points and review the document on Gerrit 
(https://review.openstack.org/#/c/98640).
I will update the document with the decisions on the above topics.

Thank you!
Evgeny


From: Evgeny Fedoruk
Sent: Monday, June 09, 2014 2:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit


Hi All,



A spec RST document for LBaaS TLS support was added to Gerrit for review:

https://review.openstack.org/#/c/98640



You are welcome to start commenting on it for any open discussions.

I tried to address each aspect being discussed; please add comments about 
anything that is missing.



Thanks,

Evgeny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-10 Thread Julien Danjou
On Mon, Jun 09 2014, Doug Hellmann wrote:

 We went with a single large storage API in ceilometer initially, but
 we had some discussions at the Juno summit about it being a bad
 decision because it resulted in storing some data like alarm
 definitions in database formats that just didn't make sense for that.
 Julien and Eoghan may want to fill in more details.

 Keystone has separate backends for tenants, tokens, the catalog, etc.,
 so you have precedent there for splitting up the features in a way
 that makes it easier for driver authors and for building features on
 appropriate backends.

+1

Use the kind of storage that best fits your access pattern. If SQL is a
solution for part of your features, use that. If it's not, use something
else.

Don't try to shoehorn your whole feature set into a single driver system. We
did that for Ceilometer, basically, and it has proven to be a
mistake.

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-10 Thread Clark, Robert Graham
It looks like this has come full circle and we are back at the simplest case.

# Containers are immutable
# Changing a cert means creating a new container and, when ready, pointing 
LBaaS at the new container

This makes a lot of sense to me: it removes a lot of handholding and keeps 
Barbican and LBaaS nicely decoupled. It also keeps certificate lifecycle 
management firmly in the hands of the user, which IMHO is a good thing. With 
this model it’s fairly trivial to provide guidance / additional tooling for 
lifecycle management if required, but at the same time the simplest case (I want 
a cert and I want LBaaS) is met without massive code overhead for edge cases.
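
In code, the user-driven rotation flow is as simple as the sketch below. The
helper names are hypothetical and merely stand in for the real barbican/neutron
client calls:

def rotate_listener_certificate(barbican, lbaas, listener_id,
                                new_cert_pem, new_key_pem):
    # 1. Store a brand new immutable TLS container for the new cert/key.
    new_container = barbican.create_tls_container(certificate=new_cert_pem,
                                                  private_key=new_key_pem)

    # 2. When ready, point the listener at the new container reference.
    lbaas.update_listener(listener_id, tls_container_ref=new_container.ref)

    # 3. The old container is left untouched; the user deletes it later,
    #    once nothing still references it.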


From: Vijay Venkatachalam 
vijay.venkatacha...@citrix.commailto:vijay.venkatacha...@citrix.com
Reply-To: OpenStack List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, 10 June 2014 05:48
To: OpenStack List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas


My vote is for option #2 (without the registration). It is simpler to start 
with this approach. How is delete handled though?

E.g. what is the expectation when a user attempts to delete a 
certificate/container which is referred to by an entity like an LBaaS listener?


1.   Will there be validation in Barbican to prevent this? *OR*

2.   Will the LBaaS listener have a dangling reference/pointer to the certificate?

Thanks,
Vijay V.

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, June 10, 2014 7:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

Weighing in here:

I'm all for option #2 as well.

Stephen

On Mon, Jun 9, 2014 at 4:42 PM, Clint Byrum 
cl...@fewbar.commailto:cl...@fewbar.com wrote:
Excerpts from Douglas Mendizabal's message of 2014-06-09 16:08:02 -0700:
 Hi all,

 I’m strongly in favor of having immutable TLS-typed containers, and very
 much opposed to storing every revision of changes done to a container.  I
 think that storing versioned containers would add too much complexity to
 Barbican, where immutable containers would work well.

Agree completely. Create a new one for new values. Keep the old ones
while they're still active.


 I’m still not sold on the idea of registering services with Barbican, even
 though (or maybe especially because) Barbican would not be using this data
 for anything.  I understand the problem that we’re trying to solve by
 associating different resources across projects, but I don’t feel like
 Barbican is the right place to do this.

Agreed also, this is simply not Barbican or Neutron's role. Be a REST
API for secrets and networking, not all-singing, all-dancing nannies that
prevent any possibly dangerous behavior with said APIs.

 It seems we’re leaning towards option #2, but I would argue that
 orchestration of services is outside the scope of Barbican’s role as a
 secret-store.  I think this is a problem that may need to be solved at a
 higher level.  Maybe an openstack-wide registry of dependend entities
 across services?
An optional OpenStack-wide registry of dependent entities is called
Heat.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-10 Thread Davanum Srinivas
Dina, Alexey,

Do you mind filing some spec(s) please?

http://markmail.org/message/yqhndsr3zrqcfwq4
http://markmail.org/message/kpk35uikcnodq3jb

thanks,
dims

On Tue, Jun 10, 2014 at 7:03 AM, Dina Belova dbel...@mirantis.com wrote:
 Hello, stackers!


 Oslo.messaging is future of how different OpenStack components communicate
 with each other, and really I’d love to start discussion about how we can
 make this library even better then it’s now and how can we refactor it make
 more production-ready.


 As we all remember, oslo.messaging was initially inspired to be created as a
 logical continuation of nova.rpc - as a separated library, with lots of
 transports supported, etc. That’s why oslo.messaging inherited not only
 advantages of now did the nova.rpc work (and it were lots of them), but also
 some architectural decisions that currently sometimes lead to the
 performance issues (we met some of them while Ceilometer performance testing
 [1] during the Icehouse).


 For instance, simple testing messaging server (with connection pool and
 eventlet) can process 700 messages per second. The same functionality
 implemented using plain kombu (without connection pool and eventlet)  driver
 is processing ten times more - 7000-8000 messages per second.


 So we have the following suggestions about how we may make this process
 better and quicker (and really I’d love to collect your feedback, folks):


 1) Currently we have main loop running in the Executor class, and I guess
 it’ll be much better to move it to the Server class, as it’ll make
 relationship between the classes easier and will leave Executor only one
 task - process the message and that’s it (in blocking or eventlet mode).
 Moreover, this will make further refactoring much easier.

 2) Some of the drivers implementations (such as impl_rabbit and impl_qpid,
 for instance) are full of useless separated classes that in reality might be
 included to other ones. There are already some changes making the whole
 structure easier [2], and after the 1st issue will be solved Dispatcher and
 Listener also will be able to be refactored.

 3) If we’ll separate RPC functionality and messaging functionality it’ll
 make code base clean and easily reused.

 4) Connection pool can be refactored to implement more efficient connection
 reusage.


 Folks, are you ok with such a plan? Alexey Kornienko already started some of
 this work [2], but really we want to be sure that we chose the correct
 vector of development here.


 Thanks!


 [1]
 https://docs.google.com/document/d/1ARpKiYW2WN94JloG0prNcLjMeom-ySVhe8fvjXG_uRU/edit?usp=sharing

 [2]
 https://review.openstack.org/#/q/status:open+owner:akornienko+project:openstack/oslo.messaging,n,z


 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Fuel] relationship btw TripleO and Fuel

2014-06-10 Thread Mike Scherbakov
That's right, we (Fuel devs) are contributing to TripleO. Our devs
contribute to all areas where overlap occurs, and these include TripleO,
Ironic and other projects.
As we work with the TripleO team, the Fuel team will continue to enhance
Fuel based on user demand.  Since Fuel has been shipping as part of
customer-facing products for about 18 months now, we’re getting some great
feedback about what deployment and operation capabilities are needed in
production environments at both small and large scale. This makes Fuel a
good alternative to TripleO until the two projects reach parity.

The main architectural difference, to me, is the deployment approach. Fuel
deploys (and configures) OpenStack in an HA fashion, along with other
components (such as MongoDB), using Puppet. TripleO uses prepared images,
which already contain a predefined set of roles, and after they are rolled
out, only post-configuration is applied to the system. TripleO, in this way,
seems to be more suited to homogeneous deployments, with a predefined set of
roles for nodes. For Fuel, it was a requirement from day one to support
heterogeneous nodes, with the ability to deploy different combinations of
roles. While Fuel is moving away from OS installers to image-based
provisioning [1], we see a large demand for the ability to support
user-defined combinations of roles. This leads to requirements such as network
setup and disk partitioning layouts based on the roles applied, with allowance
for the user to tweak defaults if necessary.

As for the Fuel roadmap, we've recently enabled Fuel master node upgrades with
the use of LXC and Docker. The current focus is on OpenStack patching and
upgrades and a pluggable architecture for Fuel, so as to make it easily
extendable. Obviously, there is a bunch of things we would love to see
implemented sooner rather than later [2], but those mentioned are the most
important. Another important goal is to increase focus on Rally and Tempest
to proactively identify areas in OpenStack which need fixing in order to
increase robustness, performance, HA-awareness and scalability of the
OpenStack core to meet increasing user demand.

[1]
http://docs-draft.openstack.org/75/95575/23/check/gate-fuel-specs-docs/52ace98/doc/build/html/specs/5.1/image-based-os-provisioning.html
[2] https://blueprints.launchpad.net/fuel

Thanks,


On Mon, Jun 9, 2014 at 1:56 AM, Robert Collins robe...@robertcollins.net
wrote:

 Well, fuel devs are also hacking on TripleO :) I don't know the exact
 timelines but I'm certainly hopeful that we'll see long term
 convergence - as TripleO gets more capable, more and more of Fuel
 could draw on TripleO facilities, for instance.

 -Rob

 On 9 June 2014 19:41, LeslieWang wqyu...@hotmail.com wrote:
  Dear all,
 
  Seems like both Fuel and TripleO are designed to solve problem of complex
  Openstack installation and Deployment. TripleO is using Heat for
  orchestration. If we can define network creation, OS provision and
  deployment in Heat template, seems like they can achieve similar goal. So
  can anyone explain the difference of these two projects, and future
 roadmap
  of each of them? Thanks!
 
  TripleO is a program aimed at installing, upgrading and operating
 OpenStack
  clouds using OpenStack's own cloud facilities as the foundations -
 building
  on nova, neutron and heat to automate fleet management at datacentre
 scale
  (and scaling down to as few as 2 machines).
 
  Fuel is an all-in-one control plane for automated hardware discovery,
  network verification, operating systems provisioning and deployment of
  OpenStack. It provides user-friendly Web interface for installations
  management, simplifying OpenStack installation up to a few clicks.
 
  Best Regards
  Leslie
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-10 Thread Dina Belova
Dims,

No problem with creating the specs, we just want to understand if the
community is OK with our suggestions in general :)
If so, I'll create the appropriate specs and we'll discuss them :)

Thanks
-- Dina


On Tue, Jun 10, 2014 at 3:31 PM, Davanum Srinivas dava...@gmail.com wrote:

 Dina, Alexey,

 Do you mind filing some spec(s) please?

 http://markmail.org/message/yqhndsr3zrqcfwq4
 http://markmail.org/message/kpk35uikcnodq3jb

 thanks,
 dims

 On Tue, Jun 10, 2014 at 7:03 AM, Dina Belova dbel...@mirantis.com wrote:
  Hello, stackers!
 
 
  Oslo.messaging is future of how different OpenStack components
 communicate
  with each other, and really I’d love to start discussion about how we can
  make this library even better then it’s now and how can we refactor it
 make
  more production-ready.
 
 
  As we all remember, oslo.messaging was initially inspired to be created
 as a
  logical continuation of nova.rpc - as a separated library, with lots of
  transports supported, etc. That’s why oslo.messaging inherited not only
  advantages of now did the nova.rpc work (and it were lots of them), but
 also
  some architectural decisions that currently sometimes lead to the
  performance issues (we met some of them while Ceilometer performance
 testing
  [1] during the Icehouse).
 
 
  For instance, simple testing messaging server (with connection pool and
  eventlet) can process 700 messages per second. The same functionality
  implemented using plain kombu (without connection pool and eventlet)
  driver
  is processing ten times more - 7000-8000 messages per second.
 
 
  So we have the following suggestions about how we may make this process
  better and quicker (and really I’d love to collect your feedback, folks):
 
 
  1) Currently we have main loop running in the Executor class, and I guess
  it’ll be much better to move it to the Server class, as it’ll make
  relationship between the classes easier and will leave Executor only one
  task - process the message and that’s it (in blocking or eventlet mode).
  Moreover, this will make further refactoring much easier.
 
  2) Some of the drivers implementations (such as impl_rabbit and
 impl_qpid,
  for instance) are full of useless separated classes that in reality
 might be
  included to other ones. There are already some changes making the whole
  structure easier [2], and after the 1st issue will be solved Dispatcher
 and
  Listener also will be able to be refactored.
 
  3) If we’ll separate RPC functionality and messaging functionality it’ll
  make code base clean and easily reused.
 
  4) Connection pool can be refactored to implement more efficient
 connection
  reusage.
 
 
  Folks, are you ok with such a plan? Alexey Kornienko already started
 some of
  this work [2], but really we want to be sure that we chose the correct
  vector of development here.
 
 
  Thanks!
 
 
  [1]
 
 https://docs.google.com/document/d/1ARpKiYW2WN94JloG0prNcLjMeom-ySVhe8fvjXG_uRU/edit?usp=sharing
 
  [2]
 
 https://review.openstack.org/#/q/status:open+owner:akornienko+project:openstack/oslo.messaging,n,z
 
 
  Best regards,
 
  Dina Belova
 
  Software Engineer
 
  Mirantis Inc.
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Davanum Srinivas :: http://davanum.wordpress.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-10 Thread Flavio Percoco

On 10/06/14 15:03 +0400, Dina Belova wrote:

Hello, stackers!


Oslo.messaging is future of how different OpenStack components communicate with
each other, and really I’d love to start discussion about how we can make this
library even better then it’s now and how can we refactor it make more
production-ready.


As we all remember, oslo.messaging was initially inspired to be created as a
logical continuation of nova.rpc - as a separated library, with lots of
transports supported, etc. That’s why oslo.messaging inherited not only
advantages of now did the nova.rpc work (and it were lots of them), but also
some architectural decisions that currently sometimes lead to the performance
issues (we met some of them while Ceilometer performance testing [1] during the
Icehouse).


For instance, simple testing messaging server (with connection pool and
eventlet) can process 700 messages per second. The same functionality
implemented using plain kombu (without connection pool and eventlet)  driver is
processing ten times more - 7000-8000 messages per second.


So we have the following suggestions about how we may make this process better
and quicker (and really I’d love to collect your feedback, folks):


1) Currently we have main loop running in the Executor class, and I guess it’ll
be much better to move it to the Server class, as it’ll make relationship
between the classes easier and will leave Executor only one task - process the
message and that’s it (in blocking or eventlet mode). Moreover, this will make
further refactoring much easier.


To some extent, the executors are part of the server class since the
latter is the one actually controlling them. If I understood your
proposal correctly, the server class would implement the event loop, which
means we would have an EventletServer / BlockingServer, right?

If what I said is what you meant, then I disagree. Executors keep the
event loop isolated from other parts of the library and this is really
important for us. One of the reasons is to easily support multiple
Python versions - by having different event loops.

Is my assumption correct? Could you elaborate more?



2) Some of the drivers implementations (such as impl_rabbit and impl_qpid, for
instance) are full of useless separated classes that in reality might be
included to other ones. There are already some changes making the whole
structure easier [2], and after the 1st issue will be solved Dispatcher and
Listener also will be able to be refactored.


This was done on purpose. The idea was to focus on backwards
compatibility rather than cleaning up/improving the drivers. That
said, it sounds like those drivers could use some cleanup. However, I
think we should first extend the test suite a bit more before hacking
on the existing drivers.




3) If we’ll separate RPC functionality and messaging functionality it’ll make
code base clean and easily reused.


What do you mean with this?



4) Connection pool can be refactored to implement more efficient connection
reusage.


Please, elaborate. What changes do you envision?

As Dims suggested, I think filing some specs for this (and keeping the
proposals separate) would help a lot in understanding what the exact
plan is.

Glad to know you're looking forward to help improving oslo.messaging.

Thanks,
Flavio


Folks, are you ok with such a plan? Alexey Kornienko already started some of
this work [2], but really we want to be sure that we chose the correct vector
of development here.


Thanks!


[1] https://docs.google.com/document/d/
1ARpKiYW2WN94JloG0prNcLjMeom-ySVhe8fvjXG_uRU/edit?usp=sharing

[2] https://review.openstack.org/#/q/
status:open+owner:akornienko+project:openstack/oslo.messaging,n,z


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-10 Thread Gordon Sim

On 06/10/2014 09:48 AM, Mark McLoughlin wrote:

Perhaps the first point to get super clear on is why drivers for
traditional message brokers are needed. What problems would such drivers
address? Who would the drivers help? Would the Marconi team recommend
using any of those drivers for a production queuing service? Would the
subset of Marconi's API which is implementable by these drivers really
be useful for application developers?

I'd like to understand that in more detail because I worry the Marconi
team is being pushed into adding these drivers without truly believing
they will be useful.


Your questions are all good ones to ask, and I would certainly agree 
that doing anything without truly believing it to be useful is not a 
recipe for success.


To lay my cards on the table, my background is in message brokers, 
however I would certainly not want to appear to be pushing anyone into 
anything.


My question (in addition to those above) would be in what way is Marconi 
different to 'traditional message brokers' (which after all have been 
providing 'a production queuing service' for some time)?


I understand that having HTTP as the protocol used by clients is of 
central importance. However many 'traditional message brokers' have 
offered that as well. Will Marconi only support HTTP as a transport, or 
will it add other protocols as well?


Scalability is another focus as I understand it. There is more than one 
dimension to scaling when it comes to messaging however. Can anyone 
describe what is unique about the Marconi design with respect to 
scalability?


I sincerely hope these question don't sound argumentative, since that is 
not my intent. My background may blind me to what is obvious to others, 
and my questions may not be helpful. Please ignore them if that is the 
case! :-)


--Gordon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-10 Thread Gordon Sim

On 06/09/2014 08:31 PM, Kurt Griffiths wrote:

Lately we have been talking about writing drivers for traditional
message brokers that will not be able to support the message feeds part
of the API.


Could you elaborate a little on this point? In some sense of the term at 
least, handling message feeds is what 'traditional' message brokers are 
all about. What are 'message feeds' in the Marconi context, in more 
detail? And what aspect of them is it that message brokers don't support?



I’ve started to think that having a huge part of the API
that may or may not “work”, depending on how Marconi is deployed, is not
a good story for users


I agree, that certainly doesn't sound good.

--Gordon.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-10 Thread Alexei Kornienko

Hi,

Please find some answers inline.

Regards,
Alexei

On 06/10/2014 03:06 PM, Flavio Percoco wrote:

On 10/06/14 15:03 +0400, Dina Belova wrote:

Hello, stackers!


Oslo.messaging is future of how different OpenStack components 
communicate with
each other, and really I'd love to start discussion about how we can 
make this

library even better then it's now and how can we refactor it make more
production-ready.


As we all remember, oslo.messaging was initially inspired to be 
created as a

logical continuation of nova.rpc - as a separated library, with lots of
transports supported, etc. That's why oslo.messaging inherited not only
advantages of now did the nova.rpc work (and it were lots of them), 
but also
some architectural decisions that currently sometimes lead to the 
performance
issues (we met some of them while Ceilometer performance testing [1] 
during the

Icehouse).


For instance, simple testing messaging server (with connection pool and
eventlet) can process 700 messages per second. The same functionality
implemented using plain kombu (without connection pool and eventlet)  
driver is

processing ten times more - 7000-8000 messages per second.


So we have the following suggestions about how we may make this 
process better

and quicker (and really I'd love to collect your feedback, folks):


1) Currently we have main loop running in the Executor class, and I 
guess it'll
be much better to move it to the Server class, as it'll make 
relationship
between the classes easier and will leave Executor only one task - 
process the
message and that's it (in blocking or eventlet mode). Moreover, this 
will make

further refactoring much easier.


To some extent, the executors are part of the server class since the
later is the one actually controlling them. If I understood your
proposal, the server class would implement the event loop, which means
we would have an EventletServer / BlockingServer, right?

If what I said is what you meant, then I disagree. Executors keep the
eventloop isolated from other parts of the library and this is really
important for us. One of the reason is to easily support multiple
python versions - by having different event loops.

Is my assumption correct? Could you elaborate more?
No, that's not how we plan it. The server will run the loop and pass each 
received message to the dispatcher and executor. It means that we would still 
have a blocking executor and an eventlet executor used by the same server 
class. We would just change the implementation to make it more consistent and 
easier to control.






2) Some of the drivers implementations (such as impl_rabbit and 
impl_qpid, for

instance) are full of useless separated classes that in reality might be
included to other ones. There are already some changes making the whole
structure easier [2], and after the 1st issue will be solved 
Dispatcher and

Listener also will be able to be refactored.


This was done on purpose. The idea was to focus on backwards
compatibility rather than cleaning up/improving the drivers. That
said, sounds like those drivers could user some clean up. However, I
think we should first extend the test suite a bit more before hacking
the existing drivers.




3) If we'll separate RPC functionality and messaging functionality 
it'll make

code base clean and easily reused.


What do you mean with this?
We mean that the current drivers are written with RPC code hardcoded inside 
(ReplyWaiter, etc.). That's not how a messaging library is supposed to 
work. We can move RPC to a separate layer, and this would be beneficial 
for both the RPC part (the code will become cleaner and less error-prone) and 
the core messaging part (we'll be able to implement messaging in a way that 
will work much faster).
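
A crude sketch of the layering we have in mind follows; the names are
illustrative only, not the actual oslo.messaging classes:

class MessagingDriver(object):
    """Pure messaging layer: moves opaque payloads, knows nothing about RPC."""

    def send(self, target, payload):
        raise NotImplementedError

    def listen(self, target):
        raise NotImplementedError


class RPCClient(object):
    """RPC semantics (method calls, reply correlation) built on top."""

    def __init__(self, driver, reply_target='rpc.replies'):
        self._driver = driver
        self._reply_target = reply_target

    def call(self, target, method, **kwargs):
        # The ReplyWaiter-style logic lives here, once, instead of being
        # re-implemented inside every driver.
        self._driver.send(target, {'method': method,
                                   'args': kwargs,
                                   'reply_to': self._reply_target})
        for reply in self._driver.listen(self._reply_target):
            return reply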




4) Connection pool can be refactored to implement more efficient 
connection

reusage.


Please, elaborate. What changes do you envision?
Currently there is a class called ConnectionContext that is used 
to manage the pool. Additionally, it can be accessed/configured in several 
other places. If we refactor it a little bit, it will be much easier to 
use connections from the pool.
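
For instance, something with an explicit acquire/release contract along these
lines (a minimal sketch that assumes a connection factory callable, not the
current ConnectionContext code):

import contextlib
import threading


class ConnectionPool(object):

    def __init__(self, connection_factory, max_free=10):
        self._factory = connection_factory
        self._max_free = max_free
        self._lock = threading.Lock()
        self._free = []

    @contextlib.contextmanager
    def acquire(self):
        with self._lock:
            conn = self._free.pop() if self._free else self._factory()
        try:
            yield conn
        finally:
            with self._lock:
                if len(self._free) < self._max_free:
                    self._free.append(conn)   # reuse instead of reconnecting
                else:
                    conn.close()

Usage would then be a simple "with pool.acquire() as conn: ..." block wherever
a connection is needed.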


As Dims suggested, I think filing some specs for this (and keeping the
proposals separate) would help a lot in understanding what the exact
plan is.

Glad to know you're looking forward to help improving oslo.messaging.

Thanks,
Flavio

Folks, are you ok with such a plan? Alexey Kornienko already started 
some of
this work [2], but really we want to be sure that we chose the 
correct vector

of development here.


Thanks!


[1] https://docs.google.com/document/d/
1ARpKiYW2WN94JloG0prNcLjMeom-ySVhe8fvjXG_uRU/edit?usp=sharing

[2] https://review.openstack.org/#/q/
status:open+owner:akornienko+project:openstack/oslo.messaging,n,z


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-10 Thread Gordon Sim

On 06/10/2014 12:03 PM, Dina Belova wrote:

Hello, stackers!


Oslo.messaging is future of how different OpenStack components
communicate with each other, and really I’d love to start discussion
about how we can make this library even better then it’s now and how can
we refactor it make more production-ready.


As we all remember, oslo.messaging was initially inspired to be created
as a logical continuation of nova.rpc - as a separated library, with
lots of transports supported, etc. That’s why oslo.messaging inherited
not only advantages of now did the nova.rpc work (and it were lots of
them), but also some architectural decisions that currently sometimes
lead to the performance issues (we met some of them while Ceilometer
performance testing [1] during the Icehouse).


For instance, simple testing messaging server (with connection pool and
eventlet) can process 700 messages per second. The same functionality
implemented using plain kombu (without connection pool and eventlet)
driver is processing ten times more - 7000-8000 messages per second.


So we have the following suggestions about how we may make this process
better and quicker (and really I’d love to collect your feedback, folks):


1) Currently we have main loop running in the Executor class, and I
guess it’ll be much better to move it to the Server class, as it’ll make
relationship between the classes easier and will leave Executor only one
task - process the message and that’s it (in blocking or eventlet mode).
Moreover, this will make further refactoring much easier.

2) Some of the drivers implementations (such as impl_rabbit and
impl_qpid, for instance) are full of useless separated classes that in
reality might be included to other ones. There are already some changes
making the whole structure easier [2], and after the 1st issue will be
solved Dispatcher and Listener also will be able to be refactored.

3) If we’ll separate RPC functionality and messaging functionality it’ll
make code base clean and easily reused.

4) Connection pool can be refactored to implement more efficient
connection reusage.


Folks, are you ok with such a plan? Alexey Kornienko already started
some of this work [2], but really we want to be sure that we chose the
correct vector of development here.


For the impl_qpid driver, I think there would need to be quite 
significant changes to make it efficient. At present there are several 
synchronous roundtrips for every RPC call made[1]. Notifications are not 
treated any differently than RPCs (and sending a call is no different to 
sending a cast).


I agree the connection pooling is not efficient. For qpid at least it 
creates too many connections for no real benefit[2].


I think this may be a result of trying to fit the same high-level design 
to two entirely different underlying APIs.


For me at least, this also makes it hard to spot issues by reading the 
code. The qpid-specific 'unit' tests for oslo.messaging also fail for me 
every time an actual qpidd broker is running (I haven't yet got to 
the bottom of that).


I'm personally not sure that the changes to impl_qpid you linked to have 
much impact on either efficiency or readability, safety of the code. I 
think there could be a lot of work required to significantly improve 
that driver, and I wonder if that would be better spent on e.g. the AMQP 
1.0 driver which I believe will perform much better and will offer more 
choice in deployment.


--Gordon

[1] For both the request and the response, the sender is created every 
time, which results in at least one roundtrip to the broker. Again, for 
both the request and the response, the message is then sent with a 
blocking send, meaning a further synchronous round trip for each. So for 
an RPC call, instead of just one roundtrip, there are at least four.
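
To make that concrete, the difference is roughly between creating a sender per
message and caching it, as in the sketch below (written against a
qpid.messaging-style API; treat the exact calls as assumptions rather than a
drop-in patch):

_senders = {}


def _get_sender(session, address):
    sender = _senders.get(address)
    if sender is None:
        sender = session.sender(address)   # broker roundtrip paid only once
        _senders[address] = sender
    return sender


def send_message(session, address, message):
    # sync=False avoids a further synchronous roundtrip per send; delivery
    # can then be confirmed in batches rather than per message.
    _get_sender(session, address).send(message, sync=False)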


[2] In my view, what matters more than per-connection throughput for 
olso.messaging, is the scalability of the system as you add many RPC 
clients and servers. Excessive creation of connections by each process 
will have a negative impact on this. I don't believe the current code 
gets anywhere close to the limits of the underlying connection and 
suspect it would be more efficient and faster to multiplex different 
streams down the same connection. This would be especially true where 
using eventlet I suspect.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Using saltstack as orchestrator for fuel

2014-06-10 Thread Mike Scherbakov
Interesting stuff.
Do you think that we can get rid of Astute at some point, having it purely
replaced by Salt, listening for the commands from Fuel?

Can you please clarify: does the suggested approach imply that we can
have both Puppet and SaltStack? Even if you ever switch to anything
different, it is important to provide a smooth and step-by-step way to do so.



On Mon, Jun 9, 2014 at 6:05 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 Hi folks,

 I know that sometime ago saltstack was evaluated to be used as
 orchestrator in fuel, so I've prepared some initial specification, that
 addresses basic points of integration, and general requirements for
 orchestrator.

 In my opinion saltstack perfectly fits our needs, and we can benefit from
 using mature orchestrator, that has its own community. I still dont have
 all the answers, but , anyway, i would like to ask all of you to start a
 review for specification


 https://docs.google.com/document/d/1uOHgxM9ZT_2IdcmWvgpEfCMoV8o0Fk7BoAlsGHEoIfs/edit?usp=sharing

 I will place it in fuel-docs repo as soon as specification will be full
 enough to start POC, or if you think that spec should placed there as is, i
 can do it now

 Thank you

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of the word certified

2014-06-10 Thread Duncan Thomas
On 10 June 2014 09:33, Mark McLoughlin mar...@redhat.com wrote:

 Avoiding dragging the project into those sort of politics is something
 I'm really keen on, and why I think the word certification is best
 avoided so we can focus on what we're actually trying to achieve.

Avoiding those sorts of politics - 'XXX says it is a certified config,
it doesn't work, cinder is junk' - is why I'd rather the cinder core
team had a certification program; at least we have some control then and
*other* people can't impose their idea of certification on us. I think
politics happens whether you will it or not, so a far more sensible
stance is to play it out in advance.


-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of the word certified

2014-06-10 Thread Anita Kuno
On 06/10/2014 04:33 AM, Mark McLoughlin wrote:
 On Mon, 2014-06-09 at 20:14 -0400, Doug Hellmann wrote:
 On Mon, Jun 9, 2014 at 6:11 PM, Eoghan Glynn egl...@redhat.com wrote:


 Based on the discussion I'd like to propose these options:
 1. Cinder-certified driver - This is an attempt to move the certification
 to the project level.
 2. CI-tested driver - This is probably the most accurate, at least for what
 we're trying to achieve for Juno: Continuous Integration of Vendor-specific
 Drivers.

 Hi Ramy,

 Thanks for these constructive suggestions.

 The second option is certainly a very direct and specific reflection of
 what is actually involved in getting the Cinder project's imprimatur.

 I do like tested.

 I'd like to understand what the foundation is planning for
 certification as well, to know how big of an issue this really is.
 Even if they aren't going to certify drivers, I have heard discussions
 around training and possibly other areas so I would hate for us to
 introduce confusion by having different uses of that term in similar
 contexts. Mark, do you know who is working on that within the board or
 foundation?
 
 http://blogs.gnome.org/markmc/2014/05/17/may-11-openstack-foundation-board-meeting/
 
 Boris Renski raised the possibility of the Foundation attaching the
 trademark to a verified, certified or tested status for drivers. It
 wasn't discussed at length because board members hadn't been briefed in
 advance, but I think it's safe to say there was a knee-jerk negative
 reaction from a number of members. This is in the context of the
 DriverLog report:
 
   http://stackalytics.com/report/driverlog
   
 http://www.mirantis.com/blog/cloud-drivers-openstack-driverlog-part-1-solving-driver-problem/
   
 http://www.mirantis.com/blog/openstack-will-open-source-vendor-certifications/
 
 AIUI the CI tested phrase was chosen in DriverLog to avoid the
 controversial area Boris describes in the last link above. I think that
 makes sense. Claiming this CI testing replaces more traditional
 certification programs is a sure way to bog potentially useful
 collaboration down in vendor politics.
Actually, FWIW, DriverLog is not posting accurate information; I came
upon two instances yesterday where I found the information
questionable at best. I know I questioned it. Kyle and I have agreed
not to rely on the DriverLog information as it currently stands as a way
of assessing the fitness of third-party CI systems. I'll add some
footnotes for those who want more details. [%%], [++], []
 
 Avoiding dragging the project into those sort of politics is something
 I'm really keen on, and why I think the word certification is best
 avoided so we can focus on what we're actually trying to achieve.
 
 Mark.
I agree with Mark: every time we try to 'abstract' away from logs and put
a new interface on them, the focus moves to the interface and folks stop
paying attention to the logs. We archive and link to artifacts for a
reason, and I think we need to encourage and support people to access
these artifacts and draw their own conclusions, which is in keeping with
our license.

Copy/pasting Mark here:
Also AIUI certification implies some level of warranty or guarantee,
which goes against the pretty clear language WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND in our license :) [**]

Thanks,
Anita.

Anita's footnotes:
[%%]
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2014-06-09.log
timestamp 2014-06-09T20:09:56 and timestamp 2014-06-09T20:11:24
[++]
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-06-09-21.01.log.html
timestamp 21:49:47
[]
http://lists.openstack.org/pipermail/openstack-dev/2014-June/037064.html
[**]
http://lists.openstack.org/pipermail/openstack-dev/2014-June/036963.html
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [driverlog] Tail-f CI and it's lack of running and it's DriverLog status

2014-06-10 Thread Kyle Mestery
On Mon, Jun 9, 2014 at 9:22 PM, Luke Gorrie l...@tail-f.com wrote:
 Howdy Kyle,

 On 9 June 2014 22:37, Kyle Mestery mest...@noironetworks.com wrote:

 After talking with various infra folks, we've noticed the Tail-f CI
 system is not voting anymore. According to some informal research, the
 last run for this CI setup was in April [1]. Can you verify this
 system is still running? We will need this to be working by the middle
 of Juno-2, with a history of voting or we may remove the Tail-f driver
 from the tree.


 The history is that I have debugged the CI setup using the Sandbox repo
 hooks. Then I shut that down. The next step is to bring it up and connect it
 to the Neutron project Gerrit hook. I'll get on to that -- thanks for the
 prod.

Can you provide links to your voting history on the sandbox repository please?

 I am being very conservative about making changes to the way I interact with
 the core CI infrastructure because frankly I am scared of accidentally
 creating unintended wide-reaching consequences :).

Yes, it's good to have caution so you don't spam other reviews with
superfluous comments.

 Also, along these lines, I'm curious why DriverLog reports this driver
 Green and as tested [2]. What is the criteria for this? I'd like to
 propose a patch changing this driver from Green to something else
 since it's not running for the past few months.


 Fair question. I am happy to make the DriverLog reflect reality. Is
 DriverLog doing this based on the presence of a 'ci' section in
 default_data.json? (Is the needed patch to temporarily remove that section?)

Yes, I believe you can just submit a patch to DriverLog to reflect the
current status.

 I'll focus on getting my CI hooked up to the Neutron project hook in order
 to moot this issue anyway.

We'll keep an eye on this as well.

Thanks,
Kyle

 Cheers,
 -Luke



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-10 Thread Alexei Kornienko

On 06/10/2014 03:59 PM, Gordon Sim wrote:

On 06/10/2014 12:03 PM, Dina Belova wrote:

Hello, stackers!


Oslo.messaging is future of how different OpenStack components
communicate with each other, and really I’d love to start discussion
about how we can make this library even better then it’s now and how can
we refactor it make more production-ready.


As we all remember, oslo.messaging was initially inspired to be created
as a logical continuation of nova.rpc - as a separated library, with
lots of transports supported, etc. That’s why oslo.messaging inherited
not only advantages of now did the nova.rpc work (and it were lots of
them), but also some architectural decisions that currently sometimes
lead to the performance issues (we met some of them while Ceilometer
performance testing [1] during the Icehouse).


For instance, simple testing messaging server (with connection pool and
eventlet) can process 700 messages per second. The same functionality
implemented using plain kombu (without connection pool and eventlet)
driver is processing ten times more - 7000-8000 messages per second.


So we have the following suggestions about how we may make this process
better and quicker (and really I’d love to collect your feedback, 
folks):



1) Currently we have main loop running in the Executor class, and I
guess it’ll be much better to move it to the Server class, as it’ll make
relationship between the classes easier and will leave Executor only one
task - process the message and that’s it (in blocking or eventlet mode).
Moreover, this will make further refactoring much easier.

2) Some of the drivers implementations (such as impl_rabbit and
impl_qpid, for instance) are full of useless separated classes that in
reality might be included to other ones. There are already some changes
making the whole structure easier [2], and after the 1st issue will be
solved Dispatcher and Listener also will be able to be refactored.

3) If we’ll separate RPC functionality and messaging functionality it’ll
make code base clean and easily reused.

4) Connection pool can be refactored to implement more efficient
connection reusage.


Folks, are you ok with such a plan? Alexey Kornienko already started
some of this work [2], but really we want to be sure that we chose the
correct vector of development here.


For the impl_qpid driver, I think there would need to be quite 
significant changes to make it efficient. At present there are several 
synchronous roundtrips for every RPC call made[1]. Notifications are 
not treated any differently than RPCs (and sending a call is no 
different to sending a cast).


I agree the connection pooling is not efficient. For qpid at least it 
creates too many connections for no real benefit[2].


I think this may be a result of trying to fit the same high-level 
design to two entirely different underlying APIs.


For me at least, this also makes it hard to spot issues by reading the 
code. The qpid specific 'unit' tests for oslo.messaging also fail for 
me everytime when an actual qpidd broker is running (I haven't yet got 
to the bottom of that).


I'm personally not sure that the changes to impl_qpid you linked to 
have much impact on either efficiency or readability, safety of the code. 
Indeed, it was only to remove some of the unnecessary complexity of the 
code. We'll see more improvement after we implement points 1 and 2 from 
the original email (since they will allow us to proceed with further 
improvement).


I think there could be a lot of work required to significantly improve 
that driver, and I wonder if that would be better spent on e.g. the 
AMQP 1.0 driver which I believe will perform much better and will 
offer more choice in deployment.
I agree with you on this. However, I'm not sure that we can make such a 
decision. If we focus on the AMQP driver only, we should say so 
explicitly and deprecate the qpid driver completely. There is no point in 
keeping a driver that is not really functional.


--Gordon

[1] For both the request and the response, the sender is created every 
time, which results in at least one roundtrip to the broker. Again, 
for both the request and the response, the message is then sent with a 
blocking send, meaning a further synchronous round trip for each. So 
for an RPC call, instead of just one roundtrip, there are at least four.


[2] In my view, what matters more than per-connection throughput for 
olso.messaging, is the scalability of the system as you add many RPC 
clients and servers. Excessive creation of connections by each process 
will have a negative impact on this. I don't believe the current code 
gets anywhere close to the limits of the underlying connection and 
suspect it would be more efficient and faster to multiplex different 
streams down the same connection. This would be especially true where 
using eventlet I suspect.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] use of the word certified

2014-06-10 Thread Mark McLoughlin
On Tue, 2014-06-10 at 14:06 +0100, Duncan Thomas wrote:
 On 10 June 2014 09:33, Mark McLoughlin mar...@redhat.com wrote:
 
  Avoiding dragging the project into those sort of politics is something
  I'm really keen on, and why I think the word certification is best
  avoided so we can focus on what we're actually trying to achieve.
 
 Avoiding those sorts of politics - 'XXX says it is a certified config,
 it doesn't work, cinder is junk' - is why I'd rather the cinder core
 team had a certification program, at least we've some control then and
 *other* people can't impose their idea of certification on us. I think
 politics happens, whether you will it or not, so a far more sensible
 stance is to play it out in advance.

Exposing which configurations are actively tested is a perfectly sane
thing to do. I don't see why you think calling this certification is
necessary to achieve your goals. I don't know what you mean by others
imposing their idea of certification.

Mark.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-06-10 Thread Jaromir Coufal

On 2014/10/06 10:25, Clint Byrum wrote:

Excerpts from Jaromir Coufal's message of 2014-06-08 16:44:58 -0700:

Hi,

it looks that there is no more activity on the survey for mid-cycle
dates so I went forward to evaluate it.

I created a table view into the etherpad [0] and results are following:
* option1 (Jul 28 - Aug 1): 27 attendees - collides with Nova/Ironic
* option2 (Jul 21-25) : 27 attendees
* option3 (Jul 25-29) : 17 attendees - collides with Nova/Ironic
* option4 (Aug 11-15) : 13 attendees

I think that we can remove options 3 and 4 from consideration, 
because there are a lot of people who can't make them. So we have option 1 and 
option 2 left. Since Robert and Devananda (the PTLs of the projects) can't 
make option 1, which also conflicts with the Nova/Ironic meetup, I think the 
choice is pretty straightforward.

Based on the survey, the winning date for the mid-cycle meetup is 
option 2: July 21st - 25th.

Does anybody have a very strong reason why we shouldn't fix the date as 
option 2 and proceed with organizing the meetup?



July 21-25 is also the shortest notice. I will not be able to attend,
as plans have already been made for the summer and I've already been
travelling quite a bit recently; after all, we were all just at the summit
a few weeks ago.

I question the reasoning that being close to FF is a bad thing, and
suggest adding much later dates. But I understand that, since the chosen dates
are so close, there is a need to make a decision immediately.

Alternatively, I suggest that we split Heat out of this, and aim at
later dates in August.


Hi Clint,

here is the challenge:

I tried to get feedback as soon as possible. We are not able to satisfy 
everybody's restrictions. During the summer everybody is planning vacation 
and it is impossible to find a time which works for all. I was reading your 
restrictions in the etherpad and they exclude options from the end of July 
until the second half of August, so I didn't expect that option 2 would 
conflict with your plans.


In August folks usually have private time off scheduled already, so it will be 
even harder than in July to find a date. Just look at the restrictions and at 
the proposed August dates: the number of attendees decreased.


Furthermore, the reason I am against being close to FF is that TripleO itself 
is not restricted by FF, but the project depends on various other 
projects to which FF does apply. So being in the middle of the feature 
proposal milestone makes sense for us in order to plan forward. What benefit 
do we get from the meetup if we can't discuss, propose and get needed 
features in?


Options 1, 3 and 4 are out of the question. I am sure that if we do it 
at the end of August we will lose various attendees (probably 
more than in July), and furthermore we will not be able to propose new 
features, nor get them in.


As you mentioned, we need to make a decision immediately. The etherpad 
has been up for a while and I had various discussions in the meantime. It 
seems that July 21-25 makes the most sense and allows the majority of people 
to attend.


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-sdk-php] Transport Clients, Service Clients, and state

2014-06-10 Thread Matthew Farina
Those are some good questions and I pondered them last night even
before I read your email.

Right now, if no transport layer is passed in, a default one is used.
The 'don't make me think' implementation is already there. If
you want to use something other than the default one, then you need to
inject one. I even have an idea of how to tweak our use of the default one
to improve performance in the network layer. More on that after some
testing.

The idea that a class either holds state or performs functionality is
not universally agreed upon. Some people like that approach. For example,
Rich Hickey talked about it at Strangeloop in his talk "Simple Made Easy"
(http://www.infoq.com/presentations/Simple-Made-Easy). Alternatively,
there are definitions of OOP floating around like, "Rather than
structure programs as code and data, an object-oriented system
integrates the two using the concept of an “object”. An object is an
abstract data type that has state (data) and behavior (code)." This is
from http://spf13.com/post/is-go-object-oriented which talks about the
Go programming language.

While there is some debate about this, when I read about object oriented
programming on Wikipedia it starts out saying, "Object-oriented
programming (OOP) is a programming paradigm that represents the
concept of objects that have data fields (attributes that describe
the object) and associated procedures known as methods."

A transport layer in this case does have state and functionality. The
state is related to its ability to transport things - things like a
proxy server or a timeout. Those are state. The functionality is moving
data.

There are other transport clients that don't have state about
what they are transporting (like a base URL). For example, in the Go
standard library there is an HTTP client (with a lower level transport
mechanism) and neither of these layers has state about what they are
transporting. The state is around how they transport it.

The library, as it was brought into Stackforge (before the author of
that component had ever seen Go), had a transport client that worked this
way. It was a singleton, but its design handled transporting this way.

I was poking around the popular request library used in node.js/JS
(and used by pkgcloud for OpenStack) and it operates in a similar
manner to what we're talking about for a transport layer. Sure, you can
set default headers; that's useful for setting things like the user
agent. Looking at the example use of request, the way pkgcloud uses it
is similar to how I've described using a transport layer.

Transport layers like this are not unheard of.

I'm aware of a number of codebases where Guzzle is used as a stateless
transport layer. Using curl with connection pooling across platforms
in PHP is hard. The API isn't easy to use. Guzzle gives it a good API
and makes it easy. Lots of people like that and use it that way. Since
the codebases that came to mind were not open source I poked around at
open source codebases. I found a lot of projects that use it as a
stateless client. I looked at both Guzzle 4, which doesn't have much uptake
yet, and Guzzle 3.

To answer the question, "Do we really feel confident building our
transport client like nobody else has before?"... I feel confident
doing it this way, and it is like others that came before. It's just not
the hot, talked-about thing on the conference circuit right now.

Hope this helps clarify my thinking.

- Matt




On Mon, Jun 9, 2014 at 11:54 AM, Jamie Hannaford
jamie.hannaf...@rackspace.com wrote:
 So what you’re proposing is effectively a stateless transport client that
 accepts input and then works on it? I didn’t realize you meant this until
 your previous e-mail - the more I think about your idea, the more I like it.
 I think it’s great for two reasons:

 - It really ties in with our idea of a pluggable transport layer. My idea of
 having transport state doesn’t actually go against this (nor does it break
 single responsibility/blur separation of concerns like you mention - but
 that’s another debate!) - but your idea enforces it in a stronger way.

 - In OOP, many people believe that a class should either hold state or
 perform functionality - but not both. If we remove state from our client,
 we’d have a smaller and more coherent scope of responsibility.

 However, as much as I like the idea, I do have some major reservations which
 we need to think about:

 - Nearly all HTTP clients out there hold state - whether it’s base URLs,
 default headers, whatever. Guzzle does this, ZF2 does this, Symfony does
 this, Python’s base HTTP client does this. Do we really feel confident
 building our transport client like nobody else has before? Maybe there are
 good reasons why they chose to maintain state in their clients for reasons
 we can’t immediately see now.

 - Do we really expect our end-users to create, manage and inject the
 transport clients? You yourself admitted that most of our users will not
 understand or use dependency injection - so why push a pattern that will
 

[openstack-dev] [NFV] Sub-team Meeting Reminder - Wednesday June 11 @ 1400 utc

2014-06-10 Thread Steve Gordon
Hi all,

Just a reminder that the next meeting of the NFV sub-team is scheduled for 
Wednesday June 11 @ 1400 UTC in #openstack-meeting-alt.

Agenda:

* Review actions from last week 
  * russellb: NFV topic on ML
  * russellb: #openstack-nfv setup
  * bauzas: gerrit dashboard
  * cdub: Review use case documentation developed over the last week
* Discuss a small set of blueprints
  * blueprint list TBD
  * open discussion

The agenda is also available at 
https://etherpad.openstack.org/p/nfv-meeting-agenda for editing.

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [driverlog] Tail-f CI and it's lack of running and it's DriverLog status

2014-06-10 Thread Luke Gorrie
Howdy!

Here is a successful Sandbox test from right now:
https://review.openstack.org/#/c/99061/. I don't immediately see how to
list all historical sandbox tests. (The previous ones are from before the
Summit anyway.)

I enabled the CI for the openstack/neutron Gerrit feed now. Here is a
change that it tested right now: https://review.openstack.org/#/c/95526/.
(Voting is disabled on the account, and the config is conservatively set to
vote 0 on failure.)

Yes, I believe you can just submit a patch to DriverLog to reflect the
 current status.


DriverLog is sourced from default_data.json (
https://github.com/stackforge/driverlog/blob/master/etc/default_data.json#L1007)?
If so then it does reflect the current status:

"ci": {
    "id": "tailfncs",
    "success_pattern": "Successful",
    "failure_pattern": "Failed"
}

i.e. it specifies which CI account is associated with this driver, and that
corresponds to a CI that is now up and running.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-sdk-php] Pending reviews

2014-06-10 Thread Matthew Farina
The reviews are in and they are both merged. Thanks for the reminder.


On Tue, Jun 10, 2014 at 3:12 AM, Jamie Hannaford 
jamie.hannaf...@rackspace.com wrote:

  Hey folks,

  Could we get these two patches reviewed either today or tomorrow? The
 first is array syntax:

  https://review.openstack.org/#/c/94323
 https://review.openstack.org/#/c/94323/5

  The second is removing the “bin” and “scripts” directories from
 top-level tree, as discussed in last week’s meeting:

  https://review.openstack.org/#/c/98048/

  Neither has code changes, so should be fairly simple to review. Thanks!

  Jamie



   Jamie Hannaford
 Software Developer III - CH
 Tel: +41434303908  Mob: +41791009767
 Rackspace




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-10 Thread Solly Ross
Response inline

- Original Message -
 From: Alex Glikson glik...@il.ibm.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, June 9, 2014 3:13:52 PM
 Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation 
 ratio out of scheduler
 
  So maybe the problem isn’t having the flavors so much, but in how the user
  currently has to specify an exact match from that list.
 If the user could say “I want a flavor with these attributes” and then the
 system would find a “best match” based on criteria set by the cloud admin
 then would that be a more user friendly solution ?
 
 Interesting idea.. Thoughts how this can be achieved?

Well, that is *essentially* what a scheduler does -- you give it a set of 
parameters
and it finds a chunk of resources (in this case, a flavor) to match those 
features.
I'm *not* suggesting that we reuse any scheduling code; it's just one way to 
think about it.

Another way to think about it would be to produce a distance score and choose 
the
flavor with the smallest distance, discounting flavors that couldn't fit the 
target
configuration.  The distance score would simply be a sum of distances between 
the individual
resources for the target and flavor.
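
For illustration, here is a minimal sketch of that distance idea (assuming 
flavors and the requested configuration are plain dicts of numeric resource 
values; a real implementation would also want to normalise units before 
summing):

    def distance(flavor, requested):
        """Return a distance score, or None if the flavor can't fit the request."""
        if any(flavor[k] < requested[k] for k in requested):
            return None  # discount flavors that couldn't fit the target
        return sum(flavor[k] - requested[k] for k in requested)

    def best_match(flavors, requested):
        scored = [(distance(spec, requested), name)
                  for name, spec in flavors.items()]
        scored = [(d, name) for d, name in scored if d is not None]
        return min(scored)[1] if scored else None

    flavors = {
        "small":  {"vcpus": 1, "ram_mb": 2048, "disk_gb": 20},
        "medium": {"vcpus": 2, "ram_mb": 4096, "disk_gb": 40},
        "large":  {"vcpus": 4, "ram_mb": 8192, "disk_gb": 80},
    }
    # "small" can't fit the request, so "medium" wins on the smallest distance.
    print(best_match(flavors, {"vcpus": 2, "ram_mb": 3000, "disk_gb": 20}))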

Best Regards,
Solly Ross

 
 Alex
 
 
 
 
 From: Day, Phil philip@hp.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 06/06/2014 12:38 PM
 Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation
 ratio out of scheduler
 
 
 
 
 
 From: Scott Devoid [ mailto:dev...@anl.gov ]
 Sent: 04 June 2014 17:36
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation
 ratio out of scheduler
 
 Not only live upgrades but also dynamic reconfiguration.
 
 Overcommitting affects the quality of service delivered to the cloud user. In
 this situation in particular, as in many situations in general, I think we
 want to enable the service provider to offer multiple qualities of service.
 That is, enable the cloud provider to offer a selectable level of
 overcommit. A given instance would be placed in a pool that is dedicated to
 the relevant level of overcommit (or, possibly, a better pool if the
 selected one is currently full). Ideally the pool sizes would be dynamic.
 That's the dynamic reconfiguration I mentioned preparing for.
 
 +1 This is exactly the situation I'm in as an operator. You can do different
 levels of overcommit with host-aggregates and different flavors, but this
 has several drawbacks:
 1. The nature of this is slightly exposed to the end-user, through
 extra-specs and the fact that two flavors cannot have the same name. One
 scenario we have is that we want to be able to document our flavor
 names--what each name means, but we want to provide different QoS standards
 for different projects. Since flavor names must be unique, we have to create
 different flavors for different levels of service. Sometimes you do want to
 lie to your users!
 [Day, Phil] I agree that there is a problem with having every new option we
 add in extra_specs leading to a new set of flavors. There are a number of
 changes up for review to expose more hypervisor capabilities via extra_specs
 that also have this potential problem. What I’d really like to be able to
 ask for as a user is something like “a medium instance with a side order of
 overcommit”, rather than have to choose from a long list of variations. I
 did spend some time trying to think of a more elegant solution – but as the
 user wants to know what combinations are available it pretty much comes down
 to needing that full list of combinations somewhere. So maybe the problem
 isn’t having the flavors so much, but in how the user currently has to
 specify an exact match from that list.
 If the user could say “I want a flavor with these attributes” and then the
 system would find a “best match” based on criteria set by the cloud admin
 (for example I might or might not want to allow a request for an
 overcommitted instance to use my not-overcommited flavor depending on the
 roles of the tenant) then would that be a more user friendly solution ?
 
 2. If I have two pools of nova-compute HVs with different overcommit
 settings, I have to manage the pool sizes manually. Even if I use puppet to
 change the config and flip an instance into a different pool, that requires
 me to restart nova-compute. Not an ideal situation.
 [Day, Phil] If the pools are aggregates, and the overcommit is defined by
 aggregate meta-data then I don’t see why you need to restart nova-compute.
 3. If I want to do anything complicated, like 3 overcommit tiers with good,
 better, best performance and allow the scheduler to pick better for a
 good instance if the good pool is full, this is very hard and
 complicated to do with the current system.
 [Day, 

Re: [openstack-dev] [qa] shared review dashboard proposal

2014-06-10 Thread David Kranz

On 06/09/2014 02:24 PM, Sean Dague wrote:

On 06/09/2014 01:38 PM, David Kranz wrote:

On 06/02/2014 06:57 AM, Sean Dague wrote:

Towards the end of the summit there was a discussion about us using a
shared review dashboard to see if a common view by the team would help
accelerate people looking at certain things. I spent some time this
weekend working on a tool to make building custom dashboard urls much
easier.

My current proposal is the following, and would like comments on it:
https://github.com/sdague/gerrit-dash-creator/blob/master/dashboards/qa-program.dash

All items in the dashboard are content that you've not voted on in the
current patch revision, that you don't own, and that have passing
Jenkins test results.

1. QA Specs - these need more eyes, so we highlight them at top of page
2. Patches that are older than 5 days, with no code review
3. Patches that you are listed as a reviewer on, but haven't voting on
current version
4. Patches that already have a +2, so should be landable if you agree.
5. Patches that have no negative code review feedback on them
6. Patches older than 2 days, with no code review

Thanks, Sean. This is working great for me, but I think there is another
important item that is missing and hope it is possible to add, perhaps
even as among the most important items:

Patches that you gave a -1, but the response is a comment explaining why
the -1 should be withdrawn rather than a new patch.

So how does one automatically detect those using the gerrit query language?

-Sean
Based on the docs I looked at, you can't. The one downside of everyone 
using a dashboard like this is that if a patch does not show in your 
view, it is as if it does not exist for you. So at least for now, if you 
want someone to remove a -1 based on some argument, you have to ping 
them directly. Not the end of the world.


 -David



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] BPs and bugs for Juno-1

2014-06-10 Thread Kyle Mestery
Neutron devs:

The list of targeted BPs and bugs for Juno-1 is here [1], as we
discussed in the team meeting yesterday [2]. Per discussion with ttx
today, of the 4 remaining BPs targeted for Juno-1, any which do not
have code in flight to be merged by EOD today will be re-targeted for
Juno-2. For the bugs targeted for Juno-1, the same will apply. I'm
going to scrub the bugs today and ones which do not have any patches
will be moved to Juno-2.

Core reviewers: please focus on trying to merge the final pieces of Juno-1.

Thanks!
Kyle

[1] https://launchpad.net/neutron/+milestone/juno-1
[2] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-06-09-21.01.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of the word certified

2014-06-10 Thread Duncan Thomas
On 10 June 2014 15:07, Mark McLoughlin mar...@redhat.com wrote:

 Exposing which configurations are actively tested is a perfectly sane
 thing to do. I don't see why you think calling this certification is
 necessary to achieve your goals.

What is certification except a formal way of saying 'we tested it'? At
least when you test it enough to have some degree of confidence in
your testing.

That's *exactly* what certification means.

 I don't know what you mean be others
 imposing their idea of certification.

I mean that if some company or vendor starts claiming 'Product X is
certified for use with cinder', that is bad for the cinder core team,
since we didn't define what got tested or to what degree.

Whether we like it or not, when something doesn't work in cinder, it
is rare for people to blame the storage vendor in their complaints.
'Cinder is broken' is what we hear (and I've heard it, even though
what they meant is 'my storage vendor hasn't tested or updated their
driver in two releases', that isn't what they /said/). Since cinder,
and therefore cinder-core, is going to get the blame, I feel we should
try to maintain some degree of control over the claims.

If we run our own minimal certification program, which is what we've
started doing (we started with a script which did a test run and tried to
require vendors to run it; that didn't work out well, so we're now
requiring CI integration instead), then we at least have the option of
saying 'You're running a non-certified product, go talk to your
vendor' when dealing with the cases we have no control over. Vendors
that don't follow the CI and cert requirements eventually get their
driver removed; it's that simple.



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] - follow up on scheduling discussion

2014-06-10 Thread Tim Hinrichs
Hi all,

I see that many of the use cases require information from different OS 
components, e.g. networking, compute, and storage.  One thing to think about is 
where those constraints are written/stored and how the data the constraints 
depend on is pulled together.  The Congress project might be helpful here, and 
I’m happy to help explore options.  Let me know if you’re interested.

https://wiki.openstack.org/wiki/Congress

Tim  



On Jun 4, 2014, at 11:25 AM, ramki Krishnan r...@brocade.com wrote:

 All,
 
 Thanks for the interest in the NFV scheduling topic. Please find a proposal 
 on Smart Scheduler (Solver Scheduler) enhancements for NFV: Use Cases, 
 Constraints etc.. 
 https://docs.google.com/document/d/1k60BQXOMkZS0SIxpFOppGgYp416uXcJVkAFep3Oeju8/edit#heading=h.wlbclagujw8c
 
 Based on this proposal, we are planning to enhance the existing 
 solver-scheduler blueprint 
 https://blueprints.launchpad.net/nova/+spec/solver-scheduler.
 
 Would love to hear your comments and thoughts. Would be glad to arrange a 
 conference call if needed.
 
 Thanks,
 Ramki
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [driverlog] Tail-f CI and it's lack of running and it's DriverLog status

2014-06-10 Thread Ilya Shakhat
Hi!

The Tail-f driver seems to be configured correctly. DriverLog will poll Gerrit
in the next 4 hours and update the driver details screen.

Regarding the green mark on the summary screen - it is shown for those drivers
that have a configured CI which ran at least once, but it doesn't take into
account when the last successful run was. I suppose this mark may be
changed to something like 'CI health' and be green only if tests passed on
master and were run during the last month.

Thanks,
Ilya


2014-06-10 18:33 GMT+04:00 Luke Gorrie l...@tail-f.com:

 Howdy!

 Here is a successful Sandbox test from right now:
 https://review.openstack.org/#/c/99061/. I don't immediately see how to
 list all historical sandbox tests. (The previous ones are from before the
 Summit anyway.)

 I enabled the CI for the openstack/neutron Gerrit feed now. Here is a
 change that it tested right now: https://review.openstack.org/#/c/95526/.
 (Voting is disabled on the account and the config conservatively is set to
 vote 0 on failure.)

 Yes, I believe you can just submit a patch to DriverLog to reflect the
 current status.


 DriverLog is sourced from default_data.json (
 https://github.com/stackforge/driverlog/blob/master/etc/default_data.json#L1007)?
 If so then it does reflect the current status:

 ci: {
 id: tailfncs,
 success_pattern: Successful,
 failure_pattern: Failed
 }

 i.e. it specifies which CI account is associated with this driver, and
 that corresponds to a CI that is now up and running.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [driverlog] Tail-f CI and it's lack of running and it's DriverLog status

2014-06-10 Thread Kyle Mestery
On Tue, Jun 10, 2014 at 10:15 AM, Ilya Shakhat ishak...@mirantis.com wrote:
 Hi!

 Tail-f driver seems to be configured correctly. DriverLog will poll Gerrit
 in the next 4 hours and update driver details screen.

 Regarding green mark on summary screen - it is shown for those drivers that
 have configured CI and CI ran at least once. But it doesn't take into
 account when the last successful run was. I suppose this mark may be changed
 to something like CI health and be green only if tests passed on master
 and were run during the last month.

It would be great if this could be within the last few days. Third party
CI systems should be testing enough upstream commits that they
run at least every few days.

 Thanks,
 Ilya


 2014-06-10 18:33 GMT+04:00 Luke Gorrie l...@tail-f.com:

 Howdy!

 Here is a successful Sandbox test from right now:
 https://review.openstack.org/#/c/99061/. I don't immediately see how to list
 all historical sandbox tests. (The previous ones are from before the Summit
 anyway.)

 I enabled the CI for the openstack/neutron Gerrit feed now. Here is a
 change that it tested right now: https://review.openstack.org/#/c/95526/.
 (Voting is disabled on the account and the config conservatively is set to
 vote 0 on failure.)

 Yes, I believe you can just submit a patch to DriverLog to reflect the
 current status.


 DriverLog is sourced from default_data.json
 (https://github.com/stackforge/driverlog/blob/master/etc/default_data.json#L1007)?
 If so then it does reflect the current status:

 ci: {
 id: tailfncs,
 success_pattern: Successful,
 failure_pattern: Failed
 }

 i.e. it specifies which CI account is associated with this driver, and
 that corresponds to a CI that is now up and running.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] - follow up on scheduling discussion

2014-06-10 Thread ramki Krishnan
Hi Tim,

Agree, Congress is a good place to store the scheduling constraints.

Thanks,
Ramki

-Original Message-
From: Tim Hinrichs [mailto:thinri...@vmware.com] 
Sent: Tuesday, June 10, 2014 8:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Norival Figueira; Debo Dutta (dedutta) (dedu...@cisco.com)
Subject: Re: [openstack-dev] [NFV] - follow up on scheduling discussion

Hi all,

I see that many of the use cases require information from different OS 
components, e.g. networking, compute, and storage.  One thing to think about is 
where those constraints are written/stored and how the data the constraints 
depend on is pulled together.  The Congress project might be helpful here, and 
I'm happy to help explore options.  Let me know if you're interested.

https://wiki.openstack.org/wiki/Congress

Tim  



On Jun 4, 2014, at 11:25 AM, ramki Krishnan r...@brocade.com wrote:

 All,
 
 Thanks for the interest in the NFV scheduling topic. Please find a proposal 
 on Smart Scheduler (Solver Scheduler) enhancements for NFV: Use Cases, 
 Constraints etc.. 
 https://docs.google.com/document/d/1k60BQXOMkZS0SIxpFOppGgYp416uXcJVkAFep3Oeju8/edit#heading=h.wlbclagujw8c
 
 Based on this proposal, we are planning to enhance the existing 
 solver-scheduler blueprint 
 https://blueprints.launchpad.net/nova/+spec/solver-scheduler.
 
 Would love to hear your comments and thoughts. Would be glad to arrange a 
 conference call if needed.
 
 Thanks,
 Ramki
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] - follow up on scheduling discussion

2014-06-10 Thread Yathiraj Udupi (yudupi)
Hi Tim,



In our current implementation of the Smart (Solver) Scheduler, the constraints are 
defined as pluggable modules (just like filter definitions in the filter 
scheduler) and are pulled in together when necessary to make the scheduling 
decision. Regarding the data that we get from different services such as storage 
(cinder), network and so on, we are currently using their clients to directly 
get the data to use along with the constraints.



I believe a policy implementation can specify which constraints to use and 
which data. So the data can potentially be saved in the individual services.



The Gantt project is also planning to have an internal DB which will be used for 
scheduling. That is another option for where the unified data can live, when we want 
to do the unified scheduling that we describe in our Smart Scheduler project.



I am open to exploring options as to where Congress will fit in here, but 
currently I feel it is one layer above this.



Thanks,

Yathi.







Sent from my LG G2, an ATT 4G LTE smartphone





-- Original message--

From: Tim Hinrichs

Date: Tue, Jun 10, 2014 8:27 AM

To: OpenStack Development Mailing List (not for usage questions);

Cc: Norival Figueira;Debo Dutta (dedutta);

Subject:Re: [openstack-dev] [NFV] - follow up on scheduling discussion



Hi all,

I see that many of the use cases require information from different OS 
components, e.g. networking, compute, and storage.  One thing to think about is 
where those constraints are written/stored and how the data the constraints 
depend on is pulled together.  The Congress project might be helpful here, and 
I’m happy to help explore options.  Let me know if you’re interested.

https://wiki.openstack.org/wiki/Congress

Tim



On Jun 4, 2014, at 11:25 AM, ramki Krishnan r...@brocade.com wrote:

 All,

 Thanks for the interest in the NFV scheduling topic. Please find a proposal 
 on Smart Scheduler (Solver Scheduler) enhancements for NFV: Use Cases, 
 Constraints etc.. 
 https://docs.google.com/document/d/1k60BQXOMkZS0SIxpFOppGgYp416uXcJVkAFep3Oeju8/edit#heading=h.wlbclagujw8c

 Based on this proposal, we are planning to enhance the existing 
 solver-scheduler blueprint 
 https://blueprints.launchpad.net/nova/+spec/solver-scheduler.

 Would love to hear your comments and thoughts. Would be glad to arrange a 
 conference call if needed.

 Thanks,
 Ramki


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [driverlog] Tail-f CI and it's lack of running and it's DriverLog status

2014-06-10 Thread Collins, Sean
On Tue, Jun 10, 2014 at 10:33:49AM EDT, Luke Gorrie wrote:
 Howdy!
 
 Here is a successful Sandbox test from right now:
 https://review.openstack.org/#/c/99061/. I don't immediately see how to
 list all historical sandbox tests. (The previous ones are from before the
 Summit anyway.)

One of the links that is posted in that review comment for the Tail-f
NCS Jenkins timed out for me.

http://egg.snabb.co:8080/job/jenkins-ncs/19/

I notice that there is another link included in that review that does
work and has the logs:

http://88.198.8.227:81/html/ci-logs/2014-06-10_15-52-09

Can you comment on what the egg.snabb.co URL is for?

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [driverlog] Tail-f CI and it's lack of running and it's DriverLog status

2014-06-10 Thread Luke Gorrie
Hi Sean,

On 10 June 2014 18:09, Collins, Sean sean_colli...@cable.comcast.com
wrote:

 One of the links that is posted in that review comment for the Tail-f
 NCS Jenkins timed out for me.

 http://egg.snabb.co:8080/job/jenkins-ncs/19/

 I notice that there is another link included in that review that does
 work and has the logs:

 http://88.198.8.227:81/html/ci-logs/2014-06-10_15-52-09

 Can you comment on what the egg.snabb.co URL is for?


I saw that too and removed the inaccessible link. The more recent reviews
show only the link to the logs.

The egg.snabb.co URL was inserted by the BUILD_STATS string in Jenkins's
template for Gerrit messages. It's a link to the Jenkins web interface that
I use for administration, and the timeout is because it's protected behind
a firewall.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: Re: [openstack-tc] use of the word certified

2014-06-10 Thread Jay Pipes

Sorry, replied to wrong ML...

 Original Message 
Subject: Re: [openstack-tc] [openstack-dev] use of the word certified
Date: Tue, 10 Jun 2014 11:37:38 -0400
From: Jay Pipes jaypi...@gmail.com
To: openstack...@lists.openstack.org

On 06/10/2014 09:53 AM, Sean Dague wrote:

On 06/10/2014 09:14 AM, Anita Kuno wrote:

On 06/10/2014 04:33 AM, Mark McLoughlin wrote:

On Mon, 2014-06-09 at 20:14 -0400, Doug Hellmann wrote:

On Mon, Jun 9, 2014 at 6:11 PM, Eoghan Glynn egl...@redhat.com wrote:




Based on the discussion I'd like to propose these options:
1. Cinder-certified driver - This is an attempt to move the certification
to the project level.
2. CI-tested driver - This is probably the most accurate, at least for what
we're trying to achieve for Juno: Continuous Integration of Vendor-specific
Drivers.


Hi Ramy,

Thanks for these constructive suggestions.

The second option is certainly a very direct and specific reflection of
what is actually involved in getting the Cinder project's imprimatur.


I do like tested.

I'd like to understand what the foundation is planning for
certification as well, to know how big of an issue this really is.
Even if they aren't going to certify drivers, I have heard discussions
around training and possibly other areas so I would hate for us to
introduce confusion by having different uses of that term in similar
contexts. Mark, do you know who is working on that within the board or
foundation?


http://blogs.gnome.org/markmc/2014/05/17/may-11-openstack-foundation-board-meeting/

Boris Renski raised the possibility of the Foundation attaching the
trademark to a verified, certified or tested status for drivers. It
wasn't discussed at length because board members hadn't been briefed in
advance, but I think it's safe to say there was a knee-jerk negative
reaction from a number of members. This is in the context of the
DriverLog report:

   http://stackalytics.com/report/driverlog
   
http://www.mirantis.com/blog/cloud-drivers-openstack-driverlog-part-1-solving-driver-problem/
   
http://www.mirantis.com/blog/openstack-will-open-source-vendor-certifications/

AIUI the CI tested phrase was chosen in DriverLog to avoid the
controversial area Boris describes in the last link above. I think that
makes sense. Claiming this CI testing replaces more traditional
certification programs is a sure way to bog potentially useful
collaboration down in vendor politics.

Actually FWIW the DriverLog is not posting accurate information; I came
upon two instances yesterday where I found the information
questionable at best. I know I questioned it. Kyle and I have agreed
not to rely on the DriverLog information as it currently stands as a way
of assessing the fitness of third party CI systems. I'll add some
footnotes for those who want more details. [%%], [++], []


Avoiding dragging the project into those sort of politics is something
I'm really keen on, and why I think the word certification is best
avoided so we can focus on what we're actually trying to achieve.

Mark.

I agree with Mark: every time we try to 'abstract' away from logs and put
a new interface on it, the focus moves to the interface and folks stop
paying attention to logs. We archive and have links to artifacts for a
reason, and I think we need to encourage and support people to access
these artifacts and draw their own conclusions, which is in keeping with
our license.

Copy/pasting Mark here:
Also AIUI certification implies some level of warranty or guarantee,
which goes against the pretty clear language WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND in our license :) [**]


Honestly, the bigger issue I've got at this point is that driverlog is
horribly inaccurate. Based on DriverLog you'd see that we don't test KVM
or QEMU at all, only XenAPI.


Then shouldn't the focus be on both reporting bugs to DriverLog [1] and
fixing these inaccuracies? DriverLog doesn't use the term certified
anywhere, for the record.

It is an honest best effort to provide some insight into the testability
of various drivers in the OpenStack ecosystem in a more up-to-date way
than outdated wiki pages showing matrixes of support for something.

It's an alpha project that can and will have bugs. I can absolutely
guarantee you that the developers of the DriverLog project are more
interested in getting accurate information shown in the interface than
with any of the politics around the word certified.

Best,

-jay

[1] https://bugs.launchpad.net/driverlog




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] versioning and releases

2014-06-10 Thread Doug Hellmann
As part of the push to release code from the oslo incubator in
stand-alone libraries, we have had several different discussions about
versioning and release schedules. This is an attempt to collect all of
the decisions we have made in those discussions and to lay out the
rationale for the approach we are taking. I don't expect any of this
to be a surprise, since we've talked about it, but I haven't actually
written it all down in one place before so some of you might not have
seen all of the points. Please let me know if you see any issues with
the proposal or have questions. If everyone agrees that this makes
sense, I'll put it in the wiki.

Doug



Background:

We have two types of oslo libraries. Libraries like oslo.config and
oslo.messaging were created by extracting incubated code, updating the
public API, and packaging it. Libraries like cliff and taskflow were
created as standalone packages from the beginning, and later adopted
by the oslo team to manage their development and maintenance.

Incubated libraries have been released at the end of a release cycle,
as with the rest of the integrated packages. Adopted libraries have
historically been released as needed during their development. We
would like to synchronize these so that all oslo libraries are
officially released with the rest of the software created by OpenStack
developers.

The first release of oslo.config was 1.1.0, as part of the grizzly
release. The first release of oslo.messaging was 1.2.0, as part of the
havana release. oslo.config was also updated to 1.2.0 during havana.
All current adopted libraries have release numbers less than 1.0.0.

In the past, alpha releases of oslo libraries have been distributed as
tarballs on an openstack server, with official releases going to PyPI.
Applications that required the alpha release specified the tarball in
their requirements list, followed by a version specifier. This allowed
us to prepare alpha releases, without worrying that their release
would break continuous-deployment systems by making new library
releases available to pip. This approach still made it difficult for
an application developer to rely on new features of an oslo library,
until an alpha version was produced.
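
For concreteness, a requirements line for one of those alpha tarballs looked
roughly like the following (the project and version number here are
illustrative, quoted from memory rather than from an actual requirements file):

    http://tarballs.openstack.org/oslo.messaging/oslo.messaging-1.3.0a4.tar.gz#egg=oslo.messaging-1.3.0a4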

When the PyPI mirror was introduced in our CI system, relying on
tarballs not available on the mirror conflicted with our desire to
have the gate system install *only* from the package mirror. As we are
now installing only from the mirror, we need to publish our alpha
releases in a format that will work with the mirror.

We already gate OpenStack applications and oslo libraries with
integration tests using the normal devstack-gate jobs. During Icehouse
we had a couple of oslo library releases that broke unit tests of
applications after the library was released. We plan to address that
with separate gating jobs during Juno. In addition to that gating, we
need to support developers who want to use new features of oslo
libraries before official releases are available.

A Version Numbering Scheme:

At the Juno summit, Josh proposed that we use semantic versioning
(SemVer) for oslo libraries [1]. Part of that proposal also included
ideas for allowing breaking backwards compatibility at some release
boundaries, and I am explicitly *not* addressing
backwards-incompatible changes beyond saying that we do not expect to
have any during Juno. We do need to solve the problem of breaking API
compatibility, but I want to take one step at a time. The first step
is choosing a rational release versioning scheme.

SemVer is widely used and gives us relatively clear guidelines about
choosing new version numbers. It supports alpha releases, which are
going to be key to meeting some of our other requirements. I propose
that we adopt pbr's modified SemVer [2] for new releases, beginning
with Juno.

The versions for existing libraries oslo.config and oslo.messaging
will be incremented from their Icehouse versions by updating the
minor number (1.x.0) at the end of the Juno cycle.

All adopted libraries using numbers less than 1.0 will be released as
1.0.0 at the end of the Juno cycle, based on the fact that we expect
deployers to use them in production.

Releases during Juno should *all* be marked as alphas of the
anticipated upcoming SemVer-based release number (1.0.0.0aN or
1.2.0.0aN or whatever). The new CI system can create packages as
Python wheels and publish them to the appropriate servers, which means
projects will no longer need to refer explicitly to pre-release
tarballs.
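
As a quick sanity check of how those alpha numbers sort (using setuptools'
parse_version; the specific version strings are only examples, and as I
understand it pip skips pre-releases by default unless explicitly requested):

    from pkg_resources import parse_version

    # An alpha sorts after the previous final release but before the final
    # release it anticipates, so publishing it should not disturb consumers
    # pinned to released versions.
    assert parse_version("1.2.0") < parse_version("1.3.0.0a1")
    assert parse_version("1.3.0.0a1") < parse_version("1.3.0")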

Releases after Juno will follow a similar pattern, incrementing the
minor number and using alpha releases within the development cycle.

[1] https://etherpad.openstack.org/p/juno-oslo-semantic-versioning
[2] http://docs.openstack.org/developer/pbr/semver.html

Frequent Alpha Releases:

While we can run gate jobs using the master branch of oslo libraries,
developers will have to take extra steps to run unit tests this way
locally. To reduce this process overhead, 

Re: [openstack-dev] [NFV] Re: NFV in OpenStack use cases and context

2014-06-10 Thread Steve Gordon
- Original Message -
 From: Stephen Wong stephen.kf.w...@gmail.com
 To: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com, OpenStack 
 Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 
 Hi,
 
 Perhaps I have missed it somewhere in the email thread? Where is the
 use case = bp document we are supposed to do for this week? Has it been
 created yet?
 
 Thanks,
 - Stephen

Hi,

Itai is referring to the ETSI NFV use cases document [1] and the discussion is 
around how we distill those - or a subset of them - into a more consumable 
format for an OpenStack audience on the Wiki. At this point I think the best 
approach is to simply start entering one of them (perhaps #5) into the Wiki and 
go from there. Ideally this would form a basis for discussing the format etc.

Thanks,

Steve

[1] 
http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf

 On Tue, Jun 10, 2014 at 2:00 AM, MENDELSOHN, ITAI (ITAI) 
 itai.mendels...@alcatel-lucent.com wrote:
 
  Shall we continue this discussion?
 
  Itai
 
  On 6/9/14 8:54 PM, Steve Gordon sgor...@redhat.com wrote:
 
  - Original Message -
   From: Steve Gordon sgor...@redhat.com
   To: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com,
  OpenStack Development Mailing List (not for usage
  
   Just adding openstack-dev to the CC for now :).
  
   - Original Message -
From: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com
Subject: Re: NFV in OpenStack use cases and context
   
Can we look at them one by one?
   
Use case 1 - It's pure IaaS
Use case 2 - Virtual network function as a service. It's actually
  about
exposing services to end customers (enterprises) by the service
  provider.
Use case 3 - VNPaaS - is similar to #2 but at the service level. At
  larger
scale and not at the app level only.
Use case 4 - VNF forwarding graphs. It's actually about dynamic
connectivity between apps.
Use case 5 - vEPC and vIMS - Those are very specific (good) examples
  of SP
services to be deployed.
Use case 6 - virtual mobile base station. Another very specific
  example,
with different characteristics than the other two above.
Use case 7 - Home virtualisation.
Use case 8 - Virtual CDN
   
As I see it those have totally different relevancy to OpenStack.
Assuming we don't want to boil the ocean here...
   
1-3 seems to me less relevant here.
4 seems to be a Neutron area.
5-8 seems to be usefully to understand the needs of the NFV apps. The
  use
case can help to map those needs.
   
For 4 I guess the main part is about chaining and Neutron between DCs.
Soma may call it SDN in WAN...
   
For 5-8 at the end an option is to map all those into:
-performance (net BW, storage BW mainly). That can be mapped to
  SR-IOV,
NUMA. Etc'
-determinism. Shall we especially minimise noisy neighbours. Not
  sure
how NFV is special here, but for sure it's a major concern for lot of
  SPs.
That can be mapped to huge pages, cache QOS, etc'.
-overcoming of short term hurdles (just because of apps migrations
issues). Small example is the need to define the tick policy of KVM
  just
because that's what the app needs. Again, not sure how NFV special it
  is,
and again a major concern of mainly application owners in the NFV
  domain.
   
Make sense?
  
  Hi Itai,
  
  This makes sense to me. I think what we need to expand upon, with the
  ETSI NFV documents as a reference, is a two to three paragraph
  explanation of each use case explained at a more basic level - ideally on
  the Wiki page. It seems that use case 5 might make a particularly good
  initial target to work on fleshing out as an example? We could then look
  at linking the use case to concrete requirements based on this, I suspect
  we might want to break them down into:
  
  a) The bare minimum requirements for OpenStack to support the use case at
  all. That is, requirements that without which the VNF simply can not
  function.
  
  b) The requirements that are not mandatory but would be beneficial for
  OpenStack to support the use case. In particularly that might be
  requirements that would improve VNF performance or reliability by some
  margin (possibly significantly) but which it can function without if
  absolutely required.
  
  Thoughts?
  
  Steve
  
  
 
 
 

-- 
Steve Gordon, RHCE
Product Manager, Red Hat Enterprise Linux OpenStack Platform
Red Hat Canada (Toronto, Ontario)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ml2] Too much shim rest proxy mechanism drivers in ML2

2014-06-10 Thread Irena Berezovsky
Hi Luke,
Very impressive solution!

I do not think there is a problem with keeping the agent out of the tree in the 
short term, but I would highly recommend putting it upstream in the longer term. 
You will benefit from quite valuable community review, and most importantly it 
will allow you to keep your code as closely aligned with the neutron code base 
as possible. Once general changes are made by other people, your code will 
be taken into account and won't be broken accidentally.
I would like to mention that there is a Modular L2 Agent initiative driven by the 
ML2 team that you may be interested to follow: 
https://etherpad.openstack.org/p/modular-l2-agent-outline

Best Regards,
Irena

From: luk...@gmail.com [mailto:luk...@gmail.com] On Behalf Of Luke Gorrie
Sent: Tuesday, June 10, 2014 12:48 PM
To: Irena Berezovsky
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ml2] Too much shim rest proxy 
mechanism drivers in ML2

Hi Irena,

Thanks for the very interesting perspective!

On 10 June 2014 10:57, Irena Berezovsky 
ire...@mellanox.commailto:ire...@mellanox.com wrote:
[IrenaB] The DB access approach was previously used by OVS and LinuxBridge 
Agents and at some point (~Grizzly Release) was changed to use RPC 
communication.

That is very interesting. I've been involved in OpenStack since the Havana 
cycle and was not familiar with the old design.

I'm optimistic about the scalability of our implementation. We have 
sanity-tested with 300 compute nodes and a 300ms sync interval. I am sure we 
will find some parts that we need to spend optimization energy on, however.

The other scalability aspect we are being careful of is the cost of individual 
update operations. (In LinuxBridge that would be the iptables, ebtables, etc 
commands.) In our implementation the compute nodes preprocess the Neutron 
config into a small config file for the local traffic plane and then load that 
in one atomic operation (SIGHUP style). Again, I am sure we will find cases 
that we need to spend optimization effort on, but the design seems scalable to 
me thanks to the atomicity.

For concreteness, here is the agent we are running on the DB node to make the 
Neutron config available:
https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-master

and here is the agent that pulls it onto the compute node:
https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-agent

TL;DR we snapshot the config with mysqldump and distribute it with git.

Here's the sanity test I referred to: 
https://groups.google.com/d/msg/snabb-devel/blmDuCgoknc/PP_oMgopiB4J

I will be glad to report on our experience and what we change based on our 
deployment experience during the Juno cycle.

[IrenaB] I think that for “Non SDN Controller” Mechanism Drivers there will be 
need for some sort of agent to handle port update events even though it might 
not be required in order to bind the port.

True. Indeed, we do have an agent running on the compute host, and it we are 
synchronizing it with port updates based on the mechanism described above.

Really what I mean is: Can we keep our agent out-of-tree and apart from ML2 and 
decide for ourselves how to keep it synchronized (instead of using the MQ)? Is 
there a precedent for doing things this way in an ML2 mech driver (e.g. one of 
the SDNs)?

Cheers!
-Luke


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-10 Thread Kurt Griffiths
 What are 'message feeds' in the Marconi context, in more detail? And
what aspect of them is it that message brokers don't support?

Great question. When I say “feeds” I mean a “feed” in the sense of RSS or
Atom. People do, in fact, use Atom to implement certain messaging
patterns. You can think of Marconi’s current API design as taking the idea
of message syndication and including the SQS-like semantics around
claiming a batch of messages for a period of time, after which the
messages return to the “pool” unless they are deleted in the interim.

I think the crux of the issue is that Marconi follows the REST
architectural style. As such, the client must track the state of where it
is in the queue it is consuming (to keep the server stateless). So, it
must be given some kind of marker, allowing it to page through messages in
the queue. 

Also noteworthy is that simply reading a message does not also delete
it, which affords the pub-sub messaging pattern. One could imagine
taking a more prescriptive approach to pub-sub by introducing some
sort of “exchange” resource, but the REST style generally encourages
working at the level of affordances (not to say we couldn’t stray if
need be; I am describing the API as it stands today).

To my knowledge, this API can’t be mapped directly to AMQP. Perhaps
there are other types of brokers that can do it?
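
To make the marker/claim semantics concrete, here is a rough client-side
sketch in Python using the requests library. The paths, headers and payload
fields below are meant to illustrate the style of the API rather than its
exact contract:

    import json
    import requests

    BASE = 'http://marconi.example.com/v1/queues/demo'
    HEADERS = {'Client-ID': 'client-1234', 'X-Project-Id': 'tenant-1'}

    # Page through messages without consuming them; the client carries the
    # marker between requests, so the server stays stateless.
    marker = None
    while True:
        params = {'limit': 10}
        if marker:
            params['marker'] = marker
        resp = requests.get(BASE + '/messages', params=params, headers=HEADERS)
        if resp.status_code == 204:      # no more messages
            break
        page = resp.json()
        for msg in page.get('messages', []):
            print(msg['body'])
        marker = page.get('marker')      # where to resume on the next page
        if not marker:
            break

    # Claim a batch of messages for 60 seconds; anything not deleted before
    # the claim expires returns to the pool for other consumers.
    requests.post(BASE + '/claims', params={'limit': 5},
                  data=json.dumps({'ttl': 60, 'grace': 30}),
                  headers=dict(HEADERS, **{'Content-Type': 'application/json'}))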

On 6/10/14, 7:17 AM, Gordon Sim g...@redhat.com wrote:

On 06/09/2014 08:31 PM, Kurt Griffiths wrote:
 Lately we have been talking about writing drivers for traditional
 message brokers that will not be able to support the message feeds part
 of the API.

Could you elaborate a little on this point? In some sense of the term at
least, handling message feeds is what 'traditional' message brokers are
all about. What are 'message feeds' in the Marconi context, in more
detail? And what aspect of them is it that message brokers don't support?

 I’ve started to think that having a huge part of the API
 that may or may not “work”, depending on how Marconi is deployed, is not
 a good story for users

I agree, that certainly doesn't sound good.

--Gordon.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements] Odd behavior from requirements checks

2014-06-10 Thread Kevin L. Mitchell
I've been seeing failures from the requirements gating check on changes
proposed by the requirements bot.  It's actually complaining that the
proposed changes don't match what's in global-requirements.txt, even
though they are textually identical.  An example is here:


http://logs.openstack.org/68/96268/6/check/gate-python-novaclient-requirements/ea3cdf2/console.html

Anyone have any insights into this?  I've seen the same thing happening
on nova.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-10 Thread Jay Pipes

On 06/10/2014 12:32 PM, Sean Dague wrote:

On 06/10/2014 11:37 AM, Jay Pipes wrote:

On 06/10/2014 09:53 AM, Sean Dague wrote:

On 06/10/2014 09:14 AM, Anita Kuno wrote:

On 06/10/2014 04:33 AM, Mark McLoughlin wrote:

On Mon, 2014-06-09 at 20:14 -0400, Doug Hellmann wrote:

On Mon, Jun 9, 2014 at 6:11 PM, Eoghan Glynn
egl...@redhat.com wrote:




Based on the discussion I'd like to propose these
options: 1. Cinder-certified driver - This is an
attempt to move the certification to the project
level. 2. CI-tested driver - This is probably the most
accurate, at least for what we're trying to achieve for
Juno: Continuous Integration of Vendor-specific
Drivers.


Hi Ramy,

Thanks for these constructive suggestions.

The second option is certainly a very direct and
specific reflection of what is actually involved in
getting the Cinder project's imprimatur.


I do like tested.

I'd like to understand what the foundation is planning for
certification as well, to know how big of an issue this
really is. Even if they aren't going to certify drivers, I
have heard discussions around training and possibly other
areas so I would hate for us to introduce confusion by
having different uses of that term in similar contexts.
Mark, do you know who is working on that within the board
or foundation?


http://blogs.gnome.org/markmc/2014/05/17/may-11-openstack-foundation-board-meeting/





Boris Renski raised the possibility of the Foundation attaching the

trademark to a verified, certified or tested status for
drivers. It wasn't discussed at length because board members
hadn't been briefed in advance, but I think it's safe to say
there was a knee-jerk negative reaction from a number of
members. This is in the context of the DriverLog report:

http://stackalytics.com/report/driverlog

http://www.mirantis.com/blog/cloud-drivers-openstack-driverlog-part-1-solving-driver-problem/



http://www.mirantis.com/blog/openstack-will-open-source-vendor-certifications/





AIUI the CI tested phrase was chosen in DriverLog to avoid the

controversial area Boris describes in the last link above. I
think that makes sense. Claiming this CI testing replaces
more traditional certification programs is a sure way to bog
potentially useful collaboration down in vendor politics.

Actually FWIW the DriverLog is not posting accurate
information, I came upon two instances yesterday where I found
the information questionable at best. I know I questioned it.
Kyle and I have agreed to not rely on the DriverLog information
as it currently stands as a way of assessing the fitness of
third party CI systems. I'll add some footnotes for those who
want more details. [%%], [++], []


Avoiding dragging the project into those sort of politics is
something I'm really keen on, and why I think the word
certification is best avoided so we can focus on what we're
actually trying to achieve.

Mark.

I agree with Mark, every time we try to 'abstract' away from
logs and put a new interface on it, the focus moves to the
interface and folks stop paying attention to logs. We archive
and have links to artifacts for a reason and I think we need to
encourage and support people to access these artifacts and draw
their own conclusions, which is in keeping with our license.

Copy/pasting Mark here: Also AIUI certification implies some
level of warranty or guarantee, which goes against the pretty
clear language WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND
in our license :) [**]


Honestly, the bigger issue I've got at this point is that
driverlog is horribly inaccurate. Based on DriverLog you'd see
that we don't test KVM or QEMU at all, only XenAPI.


Then shouldn't the focus be on both reporting bugs to DriverLog [1]
and fixing these inaccuracies? DriverLog doesn't use the term
certified anywhere, for the record.

It is an honest best effort to provide some insight into the
testability of various drivers in the OpenStack ecosystem in a more
up-to-date way than outdated wiki pages showing matrixes of support
for something.

It's an alpha project that can and will have bugs. I can
absolutely guarantee you that the developers of the DriverLog
project are more interested in getting accurate information shown
in the interface than with any of the politics around the word
certified.


That seemed like a pretty obvious error. :)


I'd rather have the errors be obvious and correctable than obscure and
hidden behind some admin curtain.


If we're calling it alpha than perhaps it shouldn't be presented to
users when they go to stackalytics, which has largely become the
de facto place where press and analysts go to get project statistics?

I'm fine with it being alpha, and treated as such, off in a corner.
But it seems to be presented front and center with stackalytics, so
really needs to be held to a higher standard.


So, if this is all about the placement of the DriverLog button on the
stackalytics page, then we should talk about that separately from a
discussion of the data that is 

Re: [openstack-dev] Fwd: Re: [openstack-tc] use of the word certified

2014-06-10 Thread Boris Renski
Thanks Jay.

Whatever inaccuracies or errors you see with DriverLog, please file a bug
or an update request:
https://wiki.openstack.org/wiki/DriverLog#How_To:_Add_a_new_driver_to_DriverLog.


Also, we are more than happy to hear any suggestions on what information to
display and how to call what. As pointed out earlier in the thread, for the
exact reasons raised by Anita and Eoghan, there is no mention of certified
anywhere in DriverLog.

-Boris


On Tue, Jun 10, 2014 at 9:22 AM, Jay Pipes jaypi...@gmail.com wrote:

 Sorry, replied to wrong ML...

  Original Message 
 Subject: Re: [openstack-tc] [openstack-dev] use of the word certified
 Date: Tue, 10 Jun 2014 11:37:38 -0400
 From: Jay Pipes jaypi...@gmail.com
 To: openstack...@lists.openstack.org

 On 06/10/2014 09:53 AM, Sean Dague wrote:

 On 06/10/2014 09:14 AM, Anita Kuno wrote:

 On 06/10/2014 04:33 AM, Mark McLoughlin wrote:

 On Mon, 2014-06-09 at 20:14 -0400, Doug Hellmann wrote:

 On Mon, Jun 9, 2014 at 6:11 PM, Eoghan Glynn egl...@redhat.com
 wrote:



  Based on the discussion I'd like to propose these options:
 1. Cinder-certified driver - This is an attempt to move the
 certification
 to the project level.
 2. CI-tested driver - This is probably the most accurate, at least
 for what
 we're trying to achieve for Juno: Continuous Integration of
 Vendor-specific
 Drivers.


 Hi Ramy,

 Thanks for these constructive suggestions.

 The second option is certainly a very direct and specific reflection
 of
 what is actually involved in getting the Cinder project's imprimatur.


 I do like tested.

 I'd like to understand what the foundation is planning for
 certification as well, to know how big of an issue this really is.
 Even if they aren't going to certify drivers, I have heard discussions
 around training and possibly other areas so I would hate for us to
 introduce confusion by having different uses of that term in similar
 contexts. Mark, do you know who is working on that within the board or
 foundation?


 http://blogs.gnome.org/markmc/2014/05/17/may-11-openstack-
 foundation-board-meeting/

 Boris Renski raised the possibility of the Foundation attaching the
 trademark to a verified, certified or tested status for drivers. It
 wasn't discussed at length because board members hadn't been briefed in
 advance, but I think it's safe to say there was a knee-jerk negative
 reaction from a number of members. This is in the context of the
 DriverLog report:

http://stackalytics.com/report/driverlog
http://www.mirantis.com/blog/cloud-drivers-openstack-
 driverlog-part-1-solving-driver-problem/
http://www.mirantis.com/blog/openstack-will-open-source-
 vendor-certifications/

 AIUI the CI tested phrase was chosen in DriverLog to avoid the
 controversial area Boris describes in the last link above. I think that
 makes sense. Claiming this CI testing replaces more traditional
 certification programs is a sure way to bog potentially useful
 collaboration down in vendor politics.

 Actually FWIW the DriverLog is not posting accurate information, I came
 upon two instances yesterday where I found the information
 questionable at best. I know I questioned it. Kyle and I have agreed
 to not rely on the DriverLog information as it currently stands as a way
 of assessing the fitness of third party CI systems. I'll add some
 footnotes for those who want more details. [%%], [++], []


 Avoiding dragging the project into those sort of politics is something
 I'm really keen on, and why I think the word certification is best
 avoided so we can focus on what we're actually trying to achieve.

 Mark.

 I agree with Mark, every time we try to 'abstract' away from logs and put
 a new interface on it, the focus moves to the interface and folks stop
 paying attention to logs. We archive and have links to artifacts for a
 reason and I think we need to encourage and support people to access
 these artifacts and draw their own conclusions, which is in keeping with
 our license.

 Copy/pasting Mark here:
 Also AIUI certification implies some level of warranty or guarantee,
 which goes against the pretty clear language WITHOUT WARRANTIES OR
 CONDITIONS OF ANY KIND in our license :) [**]


 Honestly, the bigger issue I've got at this point is that driverlog is
 horribly inaccurate. Based on DriverLog you'd see that we don't test KVM
 or QEMU at all, only XenAPI.


 Then shouldn't the focus be on both reporting bugs to DriverLog [1] and
 fixing these inaccuracies? DriverLog doesn't use the term certified
 anywhere, for the record.

 It is an honest best effort to provide some insight into the testability
 of various drivers in the OpenStack ecosystem in a more up-to-date way
 than outdated wiki pages showing matrixes of support for something.

 It's an alpha project that can and will have bugs. I can absolutely
 guarantee you that the developers of the DriverLog project are more
 interested in getting accurate information shown in the interface than

Re: [openstack-dev] use of the word certified

2014-06-10 Thread Ben Nemec
On 06/10/2014 10:09 AM, Duncan Thomas wrote:
 On 10 June 2014 15:07, Mark McLoughlin mar...@redhat.com wrote:
 
 Exposing which configurations are actively tested is a perfectly sane
 thing to do. I don't see why you think calling this certification is
 necessary to achieve your goals.
 
 What is certification except a formal way of saying 'we tested it'? At
 least when you test it enough to have some degree of confidence in
 your testing.
 
 That's *exactly* what certification means.

I think maybe the issue people are having with the word certification
is that the way it's frequently used in the industry also implies a
certain level of support that we as a community can't/won't provide.  To
me it's a pretty strong word and I can understand the concern that users
might read more into it than we intend.

 
 I don't know what you mean by others
 imposing their idea of certification.
 
 I mean that if some company or vendor starts claiming 'Product X is
 certified for use with cinder', that is bad for the cinder core team,
 since we didn't define what got tested or to what degree.

I wonder if they can even legally do that, but assuming they can what's
to stop them from claiming whatever crazy thing they want to, even if
there is a certified set of Cinder drivers?  No matter what term we
use to describe our testing process, someone is going to come up with
something that sounds similar but has no real meaning from our
perspective.  If we certify drivers, someone else will verify or
validate or whatever.

Without a legal stick to use for that situation I don't see that we can
really do much about the problem, other than to tell people they need to
talk to the company making the claims if they're having problems.

 
 Whether we like it or not, when something doesn't work in cinder, it
 is rare for people to blame the storage vendor in their complaints.
 'Cinder is broken' is what we hear (and I've heard it, even though
 what they meant is 'my storage vendor hasn't tested or updated their
 driver in two releases', that isn't what they /said/). Since cinder,
 and therefore cinder-core, is going to get the blame, I feel we should
 try to maintain some degree of control over the claims.
 
 If we run our own minimal certification program, which is what we've
 started doing (started with a script which did a test run and tried to
 require vendors to run it, that didn't work out well so we're now
 requiring CI integration instead), then we at least have the option of
 saying 'You're running an non-certified product, go talk to your
 vendor' when dealing with the cases we have no control over. Vendors
 that don't follow the CI & cert requirements eventually get their
 driver removed, that simple.

How would You're using an untested product, go talk to your vendor not
accomplish the same thing?

-Ben


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Re: [openstack-tc] use of the word certified

2014-06-10 Thread Sean Dague
Sorry, I do feel like it's kind of crazy and irresponsible to throw data
out there with something as wrong as 'OpenStack doesn't test QEMU' and
then follow that up with 'Oh, file a bug to fix it!'.

Then promote it to something as prominent as stackalytics.

I mean... guys... seriously? :)

-Sean

On 06/10/2014 12:48 PM, Boris Renski wrote:
 Thanks Jay.
 
 Whatever inaccuracies or errors you see with DriverLog, please file a
 bug or an update request:
 https://wiki.openstack.org/wiki/DriverLog#How_To:_Add_a_new_driver_to_DriverLog.
 
 
 Also, we are more than happy to hear any suggestions on what information
 to display and how to call what. As pointed out earlier in the thread,
 for the exact reasons raised by Anita and Eoghan, there is no mention of
 certified anywhere in DriverLog.
 
 -Boris
 
 
 On Tue, Jun 10, 2014 at 9:22 AM, Jay Pipes jaypi...@gmail.com wrote:
 
 Sorry, replied to wrong ML...
 
  Original Message 
 Subject: Re: [openstack-tc] [openstack-dev] use of the word certified
 Date: Tue, 10 Jun 2014 11:37:38 -0400
 From: Jay Pipes jaypi...@gmail.com
 To: openstack-tc@lists.openstack.org
 
 On 06/10/2014 09:53 AM, Sean Dague wrote:
 
 On 06/10/2014 09:14 AM, Anita Kuno wrote:
 
 On 06/10/2014 04:33 AM, Mark McLoughlin wrote:
 
 On Mon, 2014-06-09 at 20:14 -0400, Doug Hellmann wrote:
 
 On Mon, Jun 9, 2014 at 6:11 PM, Eoghan Glynn
 egl...@redhat.com wrote:
 
 
 
 Based on the discussion I'd like to propose
 these options:
 1. Cinder-certified driver - This is an
 attempt to move the certification
 to the project level.
 2. CI-tested driver - This is probably the
 most accurate, at least for what
 we're trying to achieve for Juno: Continuous
 Integration of Vendor-specific
 Drivers.
 
 
 Hi Ramy,
 
 Thanks for these constructive suggestions.
 
 The second option is certainly a very direct and
 specific reflection of
 what is actually involved in getting the Cinder
 project's imprimatur.
 
 
 I do like tested.
 
 I'd like to understand what the foundation is
 planning for
 certification as well, to know how big of an issue
 this really is.
 Even if they aren't going to certify drivers, I have
 heard discussions
 around training and possibly other areas so I would
 hate for us to
 introduce confusion by having different uses of that
 term in similar
 contexts. Mark, do you know who is working on that
 within the board or
 foundation?
 
 
 
 http://blogs.gnome.org/markmc/2014/05/17/may-11-openstack-foundation-board-meeting/
 
 Boris Renski raised the possibility of the Foundation
 attaching the
 trademark to a verified, certified or tested status for
 drivers. It
 wasn't discussed at length because board members hadn't
 been briefed in
 advance, but I think it's safe to say there was a
 knee-jerk negative
 reaction from a number of members. This is in the
 context of the
 DriverLog report:
 
   http://stackalytics.com/report/driverlog
  
  
 http://www.mirantis.com/blog/cloud-drivers-openstack-driverlog-part-1-solving-driver-problem/
  
  
 http://www.mirantis.com/blog/openstack-will-open-source-vendor-certifications/
 
 AIUI the CI tested phrase was chosen in DriverLog to
 avoid the
 controversial area Boris describes in the last link
 above. I think that
 makes sense. Claiming this CI testing replaces more
 

Re: [openstack-dev] use of the word certified

2014-06-10 Thread Mark McLoughlin
On Tue, 2014-06-10 at 16:09 +0100, Duncan Thomas wrote:
 On 10 June 2014 15:07, Mark McLoughlin mar...@redhat.com wrote:
 
  Exposing which configurations are actively tested is a perfectly sane
  thing to do. I don't see why you think calling this certification is
  necessary to achieve your goals.
 
 What is certification except a formal way of saying 'we tested it'? At
 least when you test it enough to have some degree of confidence in
 your testing.
 
 That's *exactly* what certification means.

I disagree. I think the word has substantially more connotations than
simply this has been tested.

http://lists.openstack.org/pipermail/openstack-dev/2014-June/036963.html

  I don't know what you mean by others
  imposing their idea of certification.
 
 I mean that if some company or vendor starts claiming 'Product X is
 certified for use with cinder',

On what basis would any vendor claim such certification?

  that is bad for the cinder core team,
 since we didn't define what got tested or to what degree.

That sounds like you mean "Storage technology X is certified for use
with Vendor Y OpenStack"?

i.e. that Vendor Y has certified the driver for use with their version
of OpenStack but the Cinder team has no influence over what that means
in practice?

 Whether we like it or not, when something doesn't work in cinder, it
 is rare for people to blame the storage vendor in their complaints.
 'Cinder is broken' is what we hear (and I've heard it, even though
 what they meant is 'my storage vendor hasn't tested or updated their
 driver in two releases', that isn't what they /said/).

Presumably people are complaining about that driver not working with
some specific downstream version of OpenStack, right? Not e.g.
stable/icehouse devstack or something?

i.e. even aside from the driver, we're already talking about something
we as an upstream project don't control the quality of.

 Since cinder,
 and therefore cinder-core, is going to get the blame, I feel we should
 try to maintain some degree of control over the claims.

I'm starting to see where you're coming from, but I fear this
certification thing will make it even worse.

Right now you can easily shrug off any responsibility for the quality of
a third party driver or an untested in-tree driver. Sure, some people
may have unreasonable expectations about such things, but you can't stop
people being idiots. You can better communicate expectations, though,
and that's excellent.

But as soon as you certify that driver cinder-core takes on a
responsibility that I would think is unreasonable even if the driver was
tested. But you said it's certified!

Is cinder-core really ready to take on responsibility for every issue
users see with certified drivers and downstream OpenStack products?

 If we run our own minimal certification program, which is what we've
 started doing (started with a script which did a test run and tried to
 require vendors to run it, that didn't work out well so we're now
 requiring CI integration instead), then we at least have the option of
 saying 'You're running an non-certified product, go talk to your
 vendor' when dealing with the cases we have no control over. Vendors
 that don't follow the CI & cert requirements eventually get their
 driver removed, that simple.

What about issues with a certified driver? Don't talk to the vendor,
talk to us instead?

If it's an out-of-tree driver then we say talk to your vendor. 

If it's an in-tree driver, those actively maintaining the driver provide
best effort community support like anything else.

If it's an in-tree driver and isn't being actively maintained, and best
effort community support isn't being provided, then we need a way to
communicate that unmaintained status. The level of testing it receives
is what we currently see as the most important aspect, but it's not the
only aspect.

If the user is actually using a distro or other downstream product
rather than pure upstream, it's completely normal for upstream to say
talk to your distro maintainers or product vendor.

Upstream projects can only provide limited support for even motivated
and clueful users, particularly when those users are actually using
downstream variants of the project. It certainly makes sense to clarify
that, but a certification program will actually raise the expectations
users have about the level of support upstream will provide.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova-compute vfsguestfs

2014-06-10 Thread abhishek jain
Hi Rich

I'm able to solve the problem regarding PAPR in libguestfs on my powerpc
ubuntu. By default libguestfs was configuring a pseries machine and
afterwards I changed it to my original machine, i.e. ppce500. The changes are
performed in the ./src/guestfs-internal.h file.

However, my VM is still stuck in the spawning state. The compute node is not
able to generate the xml file required for running the instances, which I have
checked by comparing the nova-compute logs of the controller node as well as the
compute node, since I'm able to run a VM on the controller node.

Thanks for the help, I'll let you know if there are any further issues.



On Sat, Jun 7, 2014 at 5:06 PM, Richard W.M. Jones rjo...@redhat.com
wrote:

 On Tue, May 27, 2014 at 03:25:10PM +0530, abhishek jain wrote:
  Hi Daniel
 
  Thanks for the help.
  The end result of my setup is that the VM is stucking at Spawning state
 on
  my compute node whereas it is working fine on the controller node.
  Therefore I'm comparing nova-compute logs of both compute node as well as
  controller node and trying to proceed step by step.
  I'm having all the above packages enabled
 
  Do you have any idea regarding reason for VM stucking at spawning state.

 The most common reason is that nested virt is broken.  libguestfs is the
 canary
 in the mine here, not the cause of the problem.

 Rich.

 
 
  On Tue, May 27, 2014 at 2:38 PM, Daniel P. Berrange berra...@redhat.com
 wrote:
 
   On Tue, May 27, 2014 at 12:04:23PM +0530, abhishek jain wrote:
Hi
Below is the code to which I'm going to reffer to..
   
 vim /opt/stack/nova/nova/virt/disk/vfs/api.py
   
#
   
    try:
        LOG.debug(_("Trying to import guestfs"))
        importutils.import_module("guestfs")
        hasGuestfs = True
    except Exception:
        pass

    if hasGuestfs:
        LOG.debug(_("Using primary VFSGuestFS"))
        return importutils.import_object(
            "nova.virt.disk.vfs.guestfs.VFSGuestFS",
            imgfile, imgfmt, partition)
    else:
        LOG.debug(_("Falling back to VFSLocalFS"))
        return importutils.import_object(
            "nova.virt.disk.vfs.localfs.VFSLocalFS",
            imgfile, imgfmt, partition)
   
###
   
When I'm launching  VM from the controller node onto compute node,the
nova compute logs on the compute node displays...Falling back to
VFSLocalFS and the result is that the VM is stuck in spawning state.
However When I'm trying to launch a VM onto controller node form the
controller node itself,the nova compute logs on the controller node
dislpays ...Using primary VFSGuestFS and I'm able to launch VM on
controller node.
Is there any module in the kernel or any package that i need to
enable.Please help regarding this.
  
   VFSGuestFS requires the libguestfs python module & corresponding native
   package to be present, and only works with KVM/QEMU enabled hosts.
  
   VFSLocalFS requires loopback module, nbd module, qemu-nbd, kpartx and
   a few other misc host tools
  
   Neither of these should cause a VM getting stuck in the spawning
   state, even if stuff they need is missing.
  
   Regards,
   Daniel
   --
   |: http://berrange.com  -o-
 http://www.flickr.com/photos/dberrange/:|
   |: http://libvirt.org  -o-
 http://virt-manager.org:|
   |: http://autobuild.org   -o-
 http://search.cpan.org/~danberr/:|
   |: http://entangle-photo.org   -o-
 http://live.gnome.org/gtk-vnc:|
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Richard Jones, Virtualization Group, Red Hat
 http://people.redhat.com/~rjones
 Read my programming and virtualization blog: http://rwmj.wordpress.com
 virt-builder quickly builds VMs from scratch
 http://libguestfs.org/virt-builder.1.html

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-10 Thread Gordon Sim

On 06/10/2014 05:27 PM, Kurt Griffiths wrote:

I think the crux of the issue is that Marconi follows the REST
architectural style. As such, the client must track the state of where it
is in the queue it is consuming (to keep the server stateless). So, it
must be given some kind of marker, allowing it to page through messages in
the queue.


Isn't this true both for task distribution and feed semantics?

[...]

To my knowledge, this API can’t be mapped directly to AMQP.


I believe you are right, you could not map the HTTP based interface 
defined for Marconi onto standard AMQP. As well as the explicit paging 
through the queue you mention above, the concepts of claims and direct 
deletion of messages are not exposed through AMQP[1].


AMQP was designed as an alternative interface into a messaging service, 
one tailored for asynchronous behaviour. It isn't intended as a direct 
interface to message storage.



Perhaps there are other types of brokers that can do it?


There are certainly brokers that support a RESTful HTTP based 
interface[2] alongside other protocols. I don't know of any standard 
protocol designed for controlling message storage however.


--Gordon.

[1] You could use a filter (aka a selector) to retrieve/dequeue a 
specific message by id or some other property. You could also define 
behaviour where an acquired message gets automatically released by the 
server after a configurable time. However, in my view, even with such 
extensions AMQP would still not be an ideal interface for message 
*storage* as opposed to message transfer between two processes.


[2] http://activemq.apache.org/rest.html, http://hornetq.jboss.org/rest 
and I'm sure there are others.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-10 Thread Kurt Griffiths
 Will Marconi only support HTTP as a transport, or will it add other
protocols as well?

We are focusing on HTTP for Juno, but are considering adding a
lower-level, persistent transport (perhaps based on WebSocket) in the K
cycle.

 Can anyone describe what is unique about the Marconi design with respect
to scalability?

TBH, I don’t know that there is anything terribly “unique” about it. :)

First of all, since Marconi uses HTTP and follows the REST architectural
style, you get all the associated scaling benefits from that.

Regarding the backend, Marconi has a notion of “pools”, across which
queues can be sharded. Messages for an individual queue may not be
sharded across  multiple pools, but a single queue may be sharded
within a given pool,  depending on whether the driver supports it. In
any case, you can imagine each pool as encapsulating a single DB or
broker cluster. Once you reach the limits of scalability within your
initial pool (due to networking, hard limitations in the given backend,
etc.), you can provision other pools as needed.
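
As a toy illustration of that idea (this is not Marconi's actual pool
catalogue, just a sketch of routing each queue to exactly one backend):

    import hashlib

    # Hypothetical pool registry: each pool wraps one DB or broker cluster.
    POOLS = {
        'pool-a': 'mongodb://cluster-a:27017',
        'pool-b': 'mongodb://cluster-b:27017',
    }

    def pool_for_queue(project_id, queue_name):
        # Deterministically pin a (project, queue) pair to a single pool,
        # so a queue's messages never span pools.
        key = ('%s/%s' % (project_id, queue_name)).encode('utf-8')
        digest = int(hashlib.md5(key).hexdigest(), 16)
        names = sorted(POOLS)
        return names[digest % len(names)]

    # Every operation on this queue is then routed to the same backend:
    print(pool_for_queue('tenant-1', 'orders'))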

 in what way is Marconi different to 'traditional message brokers' (which
after all have been providing 'a production queuing service' for some
time)?

That’s a great question. As I have said before, I think there is certainly
room for some kind of broker-as-a-service in the OpenStack ecosystem that
is more along the lines of Trove. Such a service would provision
single-tenant instances of a given broker and provide a framework for
adding value-add management features for said broker.

For some use cases, such a service could be a cost-effective solution, but
I don’t think it is the right answer for everyone. Not only due to cost,
but also because many developers (and sys admins) actually prefer an
HTTP-based API. Marconi’s multi-tenant, REST API was designed to serve
this market. 

 I understand that having HTTP as the protocol used by clients is of
central importance. However many 'traditional message brokers’ have
offered that as well.

Great point. In fact, I’d love to get more info on brokers that offer
support for HTTP. What are some examples? Do they support multi-tenancy?

-KG


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [driverlog] Tail-f CI and it's lack of running and it's DriverLog status

2014-06-10 Thread Collins, Sean
Cool,

Thanks.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Re: [openstack-tc] use of the word certified

2014-06-10 Thread Jay Pipes

On 06/10/2014 01:00 PM, Sean Dague wrote:

Sorry, I do feel like it's kind of crazy and irresponsible to throw data
out there with something as wrong as 'OpenStack doesn't test QEMU' and
then follow that up with 'Oh, file a bug to fix it!'.

Then promote it to something as prominent as stackalytics.

I mean... guys... seriously? :)


Wow, you sound quite condescending there. All I'm trying to point out is 
that while you may argue it is crazy and irresponsible to have put 
driverlog up on stackalytics, we had tried to get people to comment on 
driverlog for over a month, and nobody responded or cared. Now that it's 
on stackalytics main page, here comes the uproar.


We can take it out from the main stackalytics home page, but AFAICT, the 
added exposure that this has created is exactly the sort of attention 
that should have been given to the thing --- more than a month ago when 
we asked for it.


I take offense to the implication that somehow Mirantis has done this 
stuff off in some dark corner. We've been begging for input on this 
stuff at the board and dev list level for a while now.


-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

2014-06-10 Thread Carlos Garza
 Ok but we still need input from Stephen Balukoff and Jorge to see how this 
will integrate with the API being proposed. I'm not sure if they were intending 
to use the attributes you're discussing as well as which object was going to 
contain them.
On Jun 10, 2014, at 6:13 AM, Evgeny Fedoruk evge...@radware.com
wrote:

 Hi All,
 
 Carlos, Vivek, German, thanks for reviewing the RST doc.
 There are some issues I want to pinpoint final decision on them here, in ML, 
 before writing it down in the doc.
 Other issues will be commented on the document itself.
 
 1.   Support/No support in JUNO
 Referring to summit’s etherpad 
 https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7,
 a.   SNI certificates list was decided to be supported. Was decision made 
 not to support it?
 Single certificate with multiple domains can only partly address the need for 
 SNI, still, different applications 
 on back-end will need different certificates.
 b.  Back-end re-encryption was decided to be supported. Was decision made 
 not to support it?
 c.   With front-end client authentication and back-end server 
 authentication not supported, 
 Should certificate chains be supported?
 2.   Barbican TLS containers
 a.   TLS containers are immutable.
 b.  TLS container is allowed to be deleted, always.
      i.   Even when it is used by LBaaS VIP listener (or other service).
      ii.  Meta data on TLS container will help tenant to understand that 
           container is in use by LBaaS service/VIP listener
      iii. If every VIP listener will “register” itself in meta-data while 
           retrieving container, how that “registration” will be removed when 
           VIP listener stops using the certificate?
 
 Please comment on these points and review the document on gerrit 
 (https://review.openstack.org/#/c/98640)
 I will update the document with decisions on above topics.
 
 Thank you!
 Evgeny
 
 
 From: Evgeny Fedoruk 
 Sent: Monday, June 09, 2014 2:54 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit
 
 Hi All,
 
 A Spec. RST  document for LBaaS TLS support was added to Gerrit for review
 https://review.openstack.org/#/c/98640
 
 You are welcome to start commenting it for any open discussions.
 I tried to address each aspect being discussed, please add comments about 
 missing things.
 
 Thanks,
 Evgeny
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] RabbitMQ (AMQP 0.9) driver for Marconi

2014-06-10 Thread Janczuk, Tomasz
In the last few days I attempted to implement a RabbitMQ (AMQP 0.9) storage 
driver for Marconi. These are the take-aways from this experiment. High level, 
it showed that current Marconi APIs *cannot* be mapped onto the AMQP 0.9 
abstractions. In fact, currently it is not even possible to support a subset of 
functionality that would allow both message publication and consumption.

  1.  Marconi exposes HTTP APIs that allow messages to be listed without 
consuming them. This API cannot be implemented on top of AMQP 0.9 which 
implements a strict queueing semantics.

  2.  Marconi exposes HTTP APIs that allow random access to messages by ID. 
This API cannot be implemented on top of AMQP 0.9 which does not allow random 
access to messages, and the message ID concept is not present in the model.

  3.  Marconi exposes HTTP APIs that allow queues to be created, deleted, and 
listed. Queue creation and deletion can be implemented with AMQP 0.9, but 
listing queues is not possible with AMQP. However, listing queues can be 
implemented by accessing RabbitMQ management plugin over proprietary HTTP APIs 
that Rabbit exposes.

  4.  Marconi message publishing APIs return server-assigned message IDs. 
Message IDs are absent from the AMQP 0.9 model and so the result of message 
publication cannot provide them.

  5.  Marconi message consumption API creates a “claim ID” for a set of 
consumed messages, up to a limit. In the AMQP 0.9 model (as well as SQS and 
Azure Queues), “claim ID” maps onto the concept of “delivery tag” which has a 
1-1 relationship with a message. Since there is no way to represent the 1-N 
mapping between claimID and messages in the AMQP 0.9 model, it effectively 
restrict consumption of messages to one per claimID. This in turn prevents 
batch consumption benefits.

  6.  Marconi message consumption acknowledgment requires both claimID and 
messageID to be provided. MessageID concept is missing in AMQP 0.9. In order to 
implement this API, assuming the artificial 1-1 restriction of claim-message 
mapping from #5 above, this API could be implemented by requiring that 
messageID === claimID. This is really a workaround.

  7.  RabbitMQ message acknowledgment MUST be sent over the same AMQP channel 
instance on which the message was originally received. This requires that the 
two Marconi HTTP calls that receive and acknowledge a message are affinitized 
to the same Marconi backend. It either substantially complicates driver 
implementation (server-side reverse proxying of requests) or adds new 
requirements onto the Marconi deployment (server affinity through load 
balancing).

  8.  Currently Marconi does not support an HTTP API that allows a message to 
be consumed with immediate acknowledgement (such API is in the pipeline 
however). Despite the fact that such API would not even support the 
at-least-once guarantee, combined with the restriction from #7 it means that 
there is simply *no way* currently for a RabbitMQ based driver to implement any 
form of message consumption using today’s HTTP API.

If Marconi aspires to support a range of implementation choices for the HTTP 
APIs it prescribes, the HTTP APIs will likely need to be re-factored and 
simplified. Key issues are related to the APIs that allow messages to be looked 
up without consuming them, the explicit modeling of message IDs (unnecessary in 
systems with strict queuing semantics), and the acknowledgment (claim) model 
that is different from most acknowledgment models out there (SQS, Azure 
Queues, AMQP).

I believe Marconi would benefit from a small set of core HTTP APIs that reflect 
a strict messaging semantics, providing a scenario parity with SQS or Azure 
Storage Queues.
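
To make point 7 concrete, here is a minimal pika-based sketch (illustrative
only, not part of any proposed driver): the delivery tag returned when a
message is fetched is only meaningful on the channel that fetched it, so a
later, stateless HTTP request handled by a different process or connection
cannot acknowledge the message.

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    channel = conn.channel()
    channel.queue_declare(queue='demo')
    channel.basic_publish(exchange='', routing_key='demo', body='hello')

    # Fetch one message without auto-ack; the broker hands back a
    # channel-scoped delivery tag, not a durable message ID.
    method, properties, body = channel.basic_get(queue='demo')
    if method is not None:
        # The ack MUST go over this same channel. There is no way to ack
        # the message later from another connection, which is what a
        # stateless two-request claim/delete HTTP flow would require.
        channel.basic_ack(delivery_tag=method.delivery_tag)

    conn.close()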

Thanks,
Tomasz Janczuk
@tjanczuk
HP


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TC] [Murano] Follow up on cross-project session

2014-06-10 Thread Ruslan Kamaldinov
Hi community and TC members!

First a little bit of history:

Murano applied for incubation in February 2014 [1]. TC discussion [2]
finished the following resolution (quote from ttx):
Murano is slightly too far up the stack at this point to meet the
measured progression of openstack as a whole requirement. We'll
facilitate collaboration in that space by setting up a cross-project
session to advance this at the next design summit.

Thanks to the TC, we had two official cross-project sessions at the
Atlanta summit. We also had a follow-up session with a more focused
group of people.

-

I would like to share the results of these sessions with the community
and with the TC members. The official etherpads from the session is
located here: https://etherpad.openstack.org/p/solum-murano-heat-session-results

Now, more details. Here is the outcome of the first two (official) sessions:
* We addressed questions about use-cases and user roles of Murano [3]
* We clarified that Murano is already using existing projects and will
use other projects (such as Heat and Mistral) to the fullest extent
possible to avoid any functional duplication
* We reached a consensus and vision on non-overlapping scopes of
Murano and Solum

Here is the link to the session etherpad [4].

Our follow-up session was very productive. The goal of this session
was to define and document a clear scope for each project [5]. This is
the main document in regards to the goals set by TC in response to
Murano incubation request. We identified a clear scope for each
project and possible points for integration and collaboration:
* Application Development - Solum
* Application Catalog/Composition - Murano (see
https://etherpad.openstack.org/p/wGy9S94l3Q for details)
* Application Deployment/Configuration - Heat

We hope that we addressed major concerns raised at the TC meeting [2]
initiated by the previous Murano incubation request. We will now focus
on stabilizing and maturing the project itself with the end goal of
again applying for incubation by the end of the Juno cycle.


[1] http://lists.openstack.org/pipermail/openstack-dev/2014-February/027736.html
[2] http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-03-04-20.02.log.txt
[3] https://etherpad.openstack.org/p/wGy9S94l3Q
[4] https://etherpad.openstack.org/p/9XQ7Q2NQdv
[5] https://etherpad.openstack.org/p/solum-murano-heat-session-results

Thanks,
Ruslan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DriverLog] What to fix and when

2014-06-10 Thread Anita Kuno
On 06/10/2014 01:58 PM, Jay Pipes wrote:
 Stackers,
 
 OK, we are fully aware that there are problems with the early DriverLog
 data that is shown in the dashboard. Notably, the Nova driver stuff is
 not correct for the default virt drivers. We will work on fixing that ASAP.
 
 Our focus to date has mostly been on the Cinder and Neutron driver
 stats, since it is those communities that have most recently been
 working on standing up external CI systems. The Nova driver systems
 simply, and embarrassingly, fell through the cracks. My apologies to
 anyone who was offended or misled.
 
 Please, if you have any problems with DriverLog, be sure to log a bug on
 Launchpad [1] and we'll get things fixed ASAP.
 
 Best,
 -jay
 
 [1] https://launchpad.net/driverlog
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Thanks Jay:

Can we have a driver log representative present at third-party meetings
to participate in discussions? That would be great.

Thanks Jay,
Anita

https://wiki.openstack.org/wiki/Meetings/ThirdParty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Re: [openstack-tc] use of the word certified

2014-06-10 Thread Stefano Maffulli
On 06/10/2014 10:39 AM, Jay Pipes wrote:
 We've been begging for input on this
 stuff at the board and dev list level for a while now.

And people are all ear now and leaving comments, which is good :) I
think adding a clear warning on stackalytics.com that the data from
DriverLog may not be accurate is a fair request.

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] Re: NFV in OpenStack use cases and context

2014-06-10 Thread MENDELSOHN, ITAI (ITAI)
#5 is a good reference point for the type of apps we can encounter in NFV.
I guess it's a good idea to start with it.

Itai

Sent from my iPhone

 On Jun 10, 2014, at 7:16 PM, Steve Gordon sgor...@redhat.com wrote:
 
 - Original Message -
 From: Stephen Wong stephen.kf.w...@gmail.com
 To: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com, 
 OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 
 Hi,
 
Perhaps I have missed it somewhere in the email thread? Where is the
 use case = bp document we are supposed to do for this week? Has it been
 created yet?
 
 Thanks,
 - Stephen
 
 Hi,
 
 Itai is referring to the ETSI NFV use cases document [1] and the discussion 
 is around how we distill those - or a subset of them - into a more consumable 
 format for an OpenStack audience on the Wiki. At this point I think the best 
 approach is to simply start entering one of them (perhaps #5) into the Wiki 
 and go from there. Ideally this would form a basis for discussing the format 
 etc.
 
 Thanks,
 
 Steve
 
 [1] 
 http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf
 
 On Tue, Jun 10, 2014 at 2:00 AM, MENDELSOHN, ITAI (ITAI) 
 itai.mendels...@alcatel-lucent.com wrote:
 
 Shall we continue this discussion?
 
 Itai
 
 On 6/9/14 8:54 PM, Steve Gordon sgor...@redhat.com wrote:
 
 - Original Message -
 From: Steve Gordon sgor...@redhat.com
 To: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com,
 OpenStack Development Mailing List (not for usage
 
 Just adding openstack-dev to the CC for now :).
 
 - Original Message -
 From: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com
 Subject: Re: NFV in OpenStack use cases and context
 
 Can we look at them one by one?
 
 Use case 1 - It's pure IaaS
 Use case 2 - Virtual network function as a service. It's actually
 about
 exposing services to end customers (enterprises) by the service
 provider.
 Use case 3 - VNPaaS - is similar to #2 but at the service level. At
 larger
 scale and not at the app level only.
 Use case 4 - VNF forwarding graphs. It's actually about dynamic
 connectivity between apps.
 Use case 5 - vEPC and vIMS - Those are very specific (good) examples
 of SP
 services to be deployed.
 Use case 6 - virtual mobile base station. Another very specific
 example,
 with different characteristics than the other two above.
 Use case 7 - Home virtualisation.
 Use case 8 - Virtual CDN
 
 As I see it those have totally different relevancy to OpenStack.
 Assuming we don't want to boil the ocean here…
 
 1-3 seems to me less relevant here.
 4 seems to be a Neutron area.
 5-8 seems to be usefully to understand the needs of the NFV apps. The
 use
 case can help to map those needs.
 
 For 4 I guess the main part is about chaining and Neutron between DCs.
 Some may call it SDN in WAN...
 
 For 5-8 at the end an option is to map all those into:
 -performance (net BW, storage BW mainly). That can be mapped to
 SR-IOV,
 NUMA. Etc'
 -determinism. Shall we especially minimise noisy neighbours. Not
 sure
 how NFV is special here, but for sure it's a major concern for lot of
 SPs.
 That can be mapped to huge pages, cache QOS, etc'.
 -overcoming of short term hurdles (just because of apps migrations
 issues). Small example is the need to define the tick policy of KVM
 just
 because that's what the app needs. Again, not sure how NFV special it
 is,
 and again a major concern of mainly application owners in the NFV
 domain.
 
 Make sense?
 
 Hi Itai,
 
 This makes sense to me. I think what we need to expand upon, with the
 ETSI NFV documents as a reference, is a two to three paragraph
 explanation of each use case explained at a more basic level - ideally on
 the Wiki page. It seems that use case 5 might make a particularly good
 initial target to work on fleshing out as an example? We could then look
 at linking the use case to concrete requirements based on this, I suspect
 we might want to break them down into:
 
 a) The bare minimum requirements for OpenStack to support the use case at
 all. That is, requirements that without which the VNF simply can not
 function.
 
 b) The requirements that are not mandatory but would be beneficial for
 OpenStack to support the use case. In particularly that might be
 requirements that would improve VNF performance or reliability by some
 margin (possibly significantly) but which it can function without if
 absolutely required.
 
 Thoughts?
 
 Steve
 
 -- 
 Steve Gordon, RHCE
 Product Manager, Red Hat Enterprise Linux OpenStack Platform
 Red Hat Canada (Toronto, Ontario)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Name proposals

2014-06-10 Thread Radomir Dopieralski
Hello everyone.

We have collected a fine number of name proposals for the library part
of Horizon, and now it is time to vote for them. I have set up a poll on
CIVS, and if you contributed to Horizon within the last year, you should
receive an e-mail with the link that lets you vote.

If you didn't receive an e-mail, but you would like to vote anyways,
you can do so by visiting the following URL:

http://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_ea99af9511f3f255akey=e4c064fca36f8d26

Please rank the names from the best to the worst. The names that were
obvious conflicts or intended for the dashboard part were removed from
the list. Once we get the results, we will consult with the OpenStack
Foundation and select the first valid name with the highest ranking.

The poll will end at the next Horizon team meeting, Tuesday, June 17, at
16:00 UTC.

Thank you,
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-10 Thread Brandon Logan
Any core neutron people have a chance to give their opinions on this
yet?

Thanks,
Brandon

On Thu, 2014-06-05 at 15:28 +, Buraschi, Andres wrote:
 Thanks, Kyle. Great.
 
 -Original Message-
 From: Kyle Mestery [mailto:mest...@noironetworks.com] 
 Sent: Thursday, June 05, 2014 11:27 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API
 
 On Wed, Jun 4, 2014 at 4:27 PM, Brandon Logan brandon.lo...@rackspace.com 
 wrote:
  Hi Andres,
  I've assumed (and we know how assumptions work) that the deprecation 
  would take place in Juno and after a cyle or two it would totally be 
  removed from the code.  Even if #1 is the way to go, the old /vips 
  resource would be deprecated in favor of /loadbalancers and /listeners.
 
  I agree #2 is cleaner, but I don't want to start on an implementation 
  (though I kind of already have) that will fail to be merged in because 
  of the strategy.  The strategies are pretty different so one needs to 
  be decided on.
 
  As for where LBaaS is intended to end up, I don't want to speak for 
  Kyle, so this is my understanding; It will end up outside of the 
  Neutron code base but Neutron and LBaaS and other services will all 
  fall under a Networking (or Network) program.  That is my 
  understanding and I could be totally wrong.
 
 That's my understanding as well, I think Brandon worded it perfectly.
 
  Thanks,
  Brandon
 
  On Wed, 2014-06-04 at 20:30 +, Buraschi, Andres wrote:
  Hi Brandon, hi Kyle!
  I'm a bit confused about the deprecation (btw, thanks for sending this 
  Brandon!), as I (wrongly) assumed #1 would be the chosen path for the new 
  API implementation. I understand the proposal and #2 sounds actually 
  cleaner.
 
  Just out of curiosity, Kyle, where is LBaaS functionality intended to end 
  up, if long-term plans are to remove it from Neutron?
 
  (Nit question, I must clarify)
 
  Thank you!
  Andres
 
  -Original Message-
  From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
  Sent: Wednesday, June 04, 2014 2:18 PM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API
 
  Thanks for your feedback Kyle.  I will be at that meeting on Monday.
 
  Thanks,
  Brandon
 
  On Wed, 2014-06-04 at 11:54 -0500, Kyle Mestery wrote:
   On Tue, Jun 3, 2014 at 3:01 PM, Brandon Logan 
   brandon.lo...@rackspace.com wrote:
This is an LBaaS topic bud I'd like to get some Neutron Core 
members to give their opinions on this matter so I've just 
directed this to Neutron proper.
   
The design for the new API and object model for LBaaS needs to be 
locked down before the hackathon in a couple of weeks and there 
are some questions that need answered.  This is pretty urgent to 
come on to a decision on and to get a clear strategy defined so 
we can actually do real code during the hackathon instead of 
wasting some of that valuable time discussing this.
   
   
Implementation must be backwards compatible
   
There are 2 ways that have come up on how to do this:
   
1) New API and object model are created in the same extension and 
plugin as the old.  Any API requests structured for the old API 
will be translated/adapted to the into the new object model.
PROS:
-Only one extension and plugin
-Mostly true backwards compatibility
-Do not have to rename unchanged resources and models
CONS:
-May end up being confusing to an end-user.
-Separation of old api and new api is less clear
-Deprecating and removing old api and object model will take a bit more work
-This is basically API versioning the wrong way
   
2) A new extension and plugin are created for the new API and 
object model.  Each API would live side by side.  New API would 
need to have different names for resources and object models from 
Old API resources and object models.
PROS:
-Clean demarcation point between old and new
-No translation layer needed
-Do not need to modify existing API and object model, no new bugs
-Drivers do not need to be immediately modified
-Easy to deprecate and remove old API and object model later
CONS:
-Separate extensions and object model will be confusing to end-users
-Code reuse by copy paste since old extension and plugin will be deprecated and removed.
-This is basically API versioning the wrong way
   
Now if #2 is chosen to be feasible and acceptable then there are 
a number of ways to actually do that.  I won't bring those up 
until a clear decision is made on which strategy above is the most 
acceptable.
   
   Thanks for sending this out Brandon. I'm in favor of option #2 
   above, especially considering the long-term plans to remove LBaaS 
   from Neutron. That approach will help the eventual end goal there. 
   I am also curious on what 

Re: [openstack-dev] [TC] [Murano] Follow up on cross-project session

2014-06-10 Thread Thierry Carrez
Ruslan Kamaldinov wrote:
 Hi community and TC members!
 [...]

Please only follow-up on -dev! This shall keep this thread consistent.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-10 Thread Stephen Balukoff
Adam--

Wouldn't the user see the duplicate key/cert copy in their barbican
interface, or are you proposing storing these secrets in a
not-assigned-to-the-tenant kind of way?

In any case, it strikes me as misleading to have an explicit delete command
sent to Barbican not have the effect of making the key unusable in all
other contexts. It would be less surprising behavior, IMO, to have a
deleted barbican container result in connected load balancing services
breaking. (Though without notification to LBaaS, the connected service
might break weeks or months after the key disappeared from barbican, which
would be more surprising behavior.)

Personally, I like your idea, as I think most of our users would rather
have LBaaS issue warnings when the user has done something stupid like this
rather than break entirely. I know our support staff would rather it
behaved this way.

What's your proposal for cleaning up copied secrets when they're actually
no longer in use by any LB?

Stephen


On Tue, Jun 10, 2014 at 12:04 PM, Adam Harwell adam.harw...@rackspace.com
wrote:

 So, it looks like any sort of validation on Deletes in Barbican is going
 to be a no-go. I'd like to propose a third option, which might be the
 safest route to take for LBaaS while still providing some of the
 convenience of using Barbican as a central certificate store. Here is a
 diagram of the interaction sequence to create a loadbalancer:
 http://bit.ly/1pgAC7G

 Summary: Pass the Barbican TLS Container ID to the LBaaS create call,
 get the container from Barbican, and store a shadow-copy of the
 container again in Barbican, this time on the LBaaS service account.
 The secret will now be duplicated (it still exists on the original tenant,
 but also exists on the LBaaS tenant), but we're not talking about a huge
 amount of data here -- just a few kilobytes. With this approach, we retain
 most of the advantages we wanted to get from using Barbican -- we don't
 need to worry about taking secret data through the LBaaS API (we still
 just take a barbicanID from the user), and the user can still use a single
 barbicanID (the original one they created -- the copies are invisible to
 them) when passing their TLS info to other services. We gain the
 additional advantage that it no longer matters what happens to the
 original TLS container -- it could be deleted and it would not impact our
 service.
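
  To make the flow above concrete, here is a minimal sketch of the
  shadow-copy step, assuming hypothetical helpers (get_barbican_client,
  fetch_container, lbaas_service_context, store_tls_copy, save_listener);
  none of these are real Neutron/LBaaS or python-barbicanclient names:

     def create_listener_with_tls(user_context, listener_spec):
         # 1. Fetch the container the user referenced, using the user's token.
         user_client = get_barbican_client(user_context)
         container = fetch_container(user_client,
                                     listener_spec['tls_container_id'])

         # 2. Re-store the cert/key under the LBaaS service account, so the
         #    copy survives even if the user later deletes the original.
         svc_client = get_barbican_client(lbaas_service_context())
         shadow_ref = store_tls_copy(svc_client, container)

         # 3. Persist both references: the original is what the user sees,
         #    the shadow is what re-provisioning falls back to.
         save_listener(listener_spec,
                       tls_container_id=listener_spec['tls_container_id'],
                       shadow_container_id=shadow_ref)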

 What do you guys think of that option?



 As an addendum (not saying we necessarily want to do this, but it's an
 option), we COULD retain both the original and the copied barbicanID in
 our database and attempt to fetch them in that order when we need the TLS
 info again, which would allow us to do some alerting if the user does
 delete their key. For example: the user has deleted their key because it
 was compromised, but they forgot they used it on their loadbalancer - a
 failover event occurs and we attempt to fetch the info from Barbican -
 the first fetch fails, but the second fetch to our copy succeeds - the
 failover of the LB is successful, and we send an alert to notify the user
 that their LB is using TLS info that has been deleted from Barbican.
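
  A sketch of what that ordered fetch might look like (helper and exception
  names here are hypothetical, not existing LBaaS code):

     def fetch_tls_container(listener):
         try:
             # Try the container the user originally supplied.
             return get_container(listener.tls_container_id)
         except ContainerNotFound:
             # The original is gone: alert the user, then fall back to the
             # LBaaS-owned shadow copy so the failover can still complete.
             notify_user(listener.tenant_id,
                         "Listener %s references TLS data that was deleted "
                         "from Barbican" % listener.id)
             return get_container(listener.shadow_container_id)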


 --Adam


 https://keybase.io/rm_you





 On 6/10/14 6:21 AM, Clark, Robert Graham robert.cl...@hp.com wrote:

 It looks like this has come full circle and we are back at the simplest
 case.
 
 # Containers are immutable
 # Changing a cert means creating a new container and, when ready,
 pointing LBaaS at the new container
 
 This makes a lot of sense to me, it removes a lot of handholding and
 keeps Barbican and LBaaS nicely decoupled. It also keeps certificate
 lifecycle management firmly in the hands of the user, which imho is a
 good thing. With this model it's fairly trivial to provide guidance /
 additional tooling for lifecycle management if required but at the same
 time the simplest case (I want a cert and I want LBaaS) is met without
 massive code overhead for edge-cases.
 
 
 From: Vijay Venkatachalam vijay.venkatacha...@citrix.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Tuesday, 10 June 2014 05:48
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
 Integration Ideas
 
 
 My vote is for option #2 (without the registration). It is simpler to
 start with this approach. How is delete handled though?
 
 Ex. What is the expectation when a user attempts to delete a
 certificate/container which is referenced by an entity like an LBaaS listener?
 
 
 1.   Will there be validation in Barbican to prevent this? *OR*
 
 2.   LBaaS listener will have a dangling reference/pointer to
 certificate?
 
 Thanks,
 Vijay V.
 
 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
 Sent: Tuesday, June 10, 2014 7:43 AM
 To: OpenStack Development Mailing List (not for usage 

Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

2014-06-10 Thread Stephen Balukoff
I responded in the other thread just now, but I did want to say:

The problem with a dangling reference is this might mean that the
associated Listener breaks at some random time after the barbican container
goes away. While this is intuitive and expected behavior if it happens
shortly after the barbican container disappears, I think it would be very
unexpected if it happens weeks or months after the barbican container goes
away (and would probably result in a troubleshooting nightmare).  Having
had to deal with many of these in the course of my career, I really dislike
ticking time bombs like this, so I'm starting to be convinced that the
approach Adam Harwell recommended in the other thread (ie. keep shadow
copies of containers) is probably the most graceful way to handle dangling
references, even if it does mean that when a user deletes a container, it
isn't really gone yet.

So! If we're not going to tie into an eventing or notification system which
would cause a Listener to break immediately after a connected barbican
container is deleted, then I think Adam's approach is the next best
alternative.

Also: Samuel: I agree that it would be great to be able to add meta-data so
that the user can be made easily aware of which of their barbican
containers are in use by LBaaS listeners.

Stephen


On Tue, Jun 10, 2014 at 12:17 PM, Carlos Garza carlos.ga...@rackspace.com
wrote:

 See Adam's message re: Re: [openstack-dev] [Neutron][LBaaS] Barbican
 Neutron LBaaS Integration Ideas.
 He's advocating keeping a shadow copy of the private key that is owned by
 the LBaaS service so that, in case a key is tampered with during an
 LB update, migration, etc., we can still check the shadow backup and
 compare it to the user-owned TLS container; if the user's copy is not
 there, the shadow copy can be used.

 On Jun 10, 2014, at 12:47 PM, Samuel Bercovici samu...@radware.com
  wrote:

  To elaborate on the case where containers get deleted while LBaaS still
 references it.
  We think that the following approach will do:
  · The end user can delete a container and leave a “dangling”
 reference in LBaaS.
  · It would be nice to allow adding meta data on the container so
 that the user will be aware which listeners use this container. This is
 optional. It can also be optional for LBaaS to implement adding the
 listeners ID automatically into this metadata just for information.
  · In LBaaS, if an update happens which requires pulling the
 container from Barbican and the ID references a non-existing container,
 the update will fail and will indicate that the referenced certificate no
 longer exists. This validation could be implemented in the LBaaS API
 itself as well as by the driver who will actually need the container.
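
  A minimal sketch of that validation, assuming a hypothetical
  fetch_container helper and ContainerNotFound exception (this is not
  existing LBaaS or barbicanclient code):

     class ListenerUpdateError(Exception):
         """Raised when a listener update references missing TLS data."""

     def validate_tls_ref(container_ref):
         try:
             # fetch_container / ContainerNotFound stand in for whatever the
             # Barbican client actually provides.
             return fetch_container(container_ref)
         except ContainerNotFound:
             raise ListenerUpdateError(
                 "Referenced TLS container %s no longer exists in Barbican"
                 % container_ref)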
 
  Regards,
  -Sam.
 
 
  From: Evgeny Fedoruk
  Sent: Tuesday, June 10, 2014 2:13 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document
 on Gerrit
 
  Hi All,
 
  Carlos, Vivek, German, thanks for reviewing the RST doc.
  There are some issues I want to pinpoint final decision on them here, in
 ML, before writing it down in the doc.
  Other issues will be commented on the document itself.
 
  1.   Support/No support in JUNO
  Referring to summit’s etherpad
 https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7,
   a.   The SNI certificates list was decided to be supported. Was a decision
  made not to support it?
   A single certificate with multiple domains can only partly address the
  need for SNI; still, different applications
   on the back-end will need different certificates.
   b.  Back-end re-encryption was decided to be supported. Was a decision
  made not to support it?
   c.   With front-end client authentication and back-end server
  authentication not supported,
   should certificate chains be supported?
   2.   Barbican TLS containers
   a.   TLS containers are immutable.
   b.  A TLS container is allowed to be deleted, always.
        i.   Even when it is used by an LBaaS VIP listener (or other service).
        ii.  Meta data on the TLS container will help the tenant understand that the container
  is in use by an LBaaS service/VIP listener.
        iii. If every VIP listener will “register” itself in meta-data while retrieving the
  container, how will that “registration” be removed when the VIP listener stops
  using the certificate?
 
  Please comment on these points and review the document on gerrit (
 https://review.openstack.org/#/c/98640)
  I will update the document with decisions on above topics.
 
  Thank you!
  Evgeny
 
 
  From: Evgeny Fedoruk
  Sent: Monday, June 09, 2014 2:54 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Neutron][LBaaS] TLS support RST document on
 

Re: [openstack-dev] Fwd: Re: [openstack-tc] use of the word certified

2014-06-10 Thread Jay Pipes

On 06/10/2014 02:57 PM, Stefano Maffulli wrote:

On 06/10/2014 10:39 AM, Jay Pipes wrote:

We've been begging for input on this
stuff at the board and dev list level for a while now.


And people are all ear now and leaving comments, which is good :) I
think adding a clear warning on stackalytics.com that the data from
DriverLog may not be accurate is a fair request.


Yep, a totally fair request. I'll see what can be done in short order.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm Management

2014-06-10 Thread Martinez, Christian
Here is my feedback regarding the designs:

Page 2:

* I think that the admin would probably want to filter alarms per user, 
project, name, meter_name, current_alarm_state (ok = alarm ready; 
insufficient data = alarm not ready; alarm = alarm triggered), but we 
don't have all those columns in the table. Maybe it would be better just to add 
columns for those fields, or have other tables or tabs that could allow the 
admin to see the alarms based on those parameters (a rough sketch of such a 
table follows this list).

* I would add a delete alarm button as a table action

* Nice to have: if we are thinking about combining alarms, maybe 
having a combine alarm button as a table action that gets activated when the 
admin selects two or more alarms.

o   When the button is clicked, it should show something like the Add Alarm 
dialog, allowing the user to create a new combined alarm, based on their 
previous alarm selection
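
A rough Horizon-style sketch of the kind of table I mean (column and
attribute names are guesses, not the actual Ceilometer alarm fields):

    from django.utils.translation import ugettext_lazy as _
    from horizon import tables

    class AlarmsTable(tables.DataTable):
        name = tables.Column("name", verbose_name=_("Name"))
        project = tables.Column("project_id", verbose_name=_("Project"))
        user = tables.Column("user_id", verbose_name=_("User"))
        meter = tables.Column("meter_name", verbose_name=_("Meter"))
        state = tables.Column("state", verbose_name=_("State"))

        class Meta:
            name = "alarms"
            verbose_name = _("Alarms")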

Page 3-5:

* Love the workflow!

* A couple of things related to the Alarm When setup:

o   Depending on the resource that is selected (from page 2) you would have a 
list of the possible meters to be considered. For example, if your resource is 
an instance, you would have the following list of meters: number of instances, 
cpu time used, Average CPU utilization, memory, etc. This will also affect the 
threshold unit to be used. In the design, there is a textbox that has a 
percentage label (%) right next to it. The thing is that this threshold 
could be a percentage (for example, CPU utilization), but it could be a flat 
number as well (for example, number of instances on the project).

o   (Related to your point 5) There are two things related to combined alarms 
that we need to consider:

1) The combination can be between any type of alarm: you could combine 
alarms associated with different resources, meters, users, etc. (a Ceilometer 
expert will know). You could even combine combined alarms with other alarms as well. 
The AND and OR operations between alarms can be used for combined alarms; 
for instance, combine two alarms with an OR operator.

2) Adding two rules to match to a single alarm is not supported by 
Ceilometer. For that, you use combined alarms :). The idea of adding triggering 
rules to the alarm creation dialog is great for me, but I'm not sure if 
Ceilometer supports that.
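
To illustrate the two points above, here is roughly what the Ceilometer v2
alarm payloads look like (field names are from memory and may not be exact):
a threshold alarm carries exactly one rule, and combining conditions is done
by creating a second alarm of type "combination" that references existing
alarm IDs.

    threshold_alarm = {
        "name": "cpu_high",
        "type": "threshold",
        "threshold_rule": {
            "meter_name": "cpu_util",
            "threshold": 70.0,
            "comparison_operator": "gt",
            "statistic": "avg",
            "period": 600,
            "evaluation_periods": 3,
        },
    }

    combined_alarm = {
        "name": "cpu_high_or_disk_full",
        "type": "combination",
        "combination_rule": {
            # IDs of two alarms that already exist
            "alarm_ids": ["<alarm-id-1>", "<alarm-id-2>"],
            "operator": "or",
        },
    }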

Page 6:

* Really liked the way that actions and state could be set, but we 
should see how the notifications will be handled. Maybe these actions could be 
set by default in our first version and after that, start thinking about 
setting custom actions for alarm states in the future (same for email add-on  
at the user settings)

Page 7:  Viewing Alarm History A.K.A: the alarms that have occurred.

* Same as page 2: I think that the admin would probably want to filter 
alarms per user, project, name, meter_name, etc. (for instance, to see what 
alarms have been triggered on project X), but we don't have those columns 
in the table. Maybe it would be better just to add columns for those fields, or 
have other tables or tabs that could allow the admin to see the alarms based 
on those parameters.

* Is the alarm date column referring to the date in which the alarm was 
created or the date in which the alarm was triggered?

* Is the alarm name content a link or simple text? What would happen 
when the admin selects an alarm? Is it going to show the update alarm dialog? 
Are there any actions associated with the rows?

* Maybe change the name of the tab to Activated alarms or something that 
is actually interpreted as here you can see the alarms that have 
occurred.

Hope it helps

Cheers,
H

From: Liz Blanchard [mailto:lsure...@redhat.com]
Sent: Monday, June 9, 2014 2:36 PM
To: Eoghan Glynn
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm 
Management

Hi all,

Thanks again for the great comments on the initial cut of wireframes. I've 
updated them a fair amount based on feedback in this e-mail thread along with 
the feedback written up here:
https://etherpad.openstack.org/p/alarm-management-page-design-discussion

Here is a link to the new version:
http://people.redhat.com/~lsurette/OpenStack/Alarm%20Management%20-%202014-06-05.pdf

And a quick explanation of the updates that I made from the last version:

1) Removed severity.

2) Added Status column. I also added details around the fact that users can 
enable/disable alerts.

3) Updated Alarm creation workflow to include choosing the project and user 
(optionally for filtering the resource list), choosing resource, and allowing 
for choose of amount of time to monitor for alarming.
 -Perhaps we could be even more sophisticated for how we let users filter 
down to find the right resources that they want to monitor for alarms?

4) As for notifying users...I've updated the Alarms section to be Alarms 
History. The point here is to 

Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

2014-06-10 Thread Stephen Balukoff
Hi Evgeny,

Comments inline.

On Tue, Jun 10, 2014 at 4:13 AM, Evgeny Fedoruk evge...@radware.com wrote:

  Hi All,



 Carlos, Vivek, German, thanks for reviewing the RST doc.

 There are some issues I want to pinpoint final decision on them here, in
 ML, before writing it down in the doc.

 Other issues will be commented on the document itself.



 1.   Support/No support in JUNO

 Referring to summit’s etherpad
 https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7,

  a.   The SNI certificates list was decided to be supported. Was a decision
  made not to support it?
  A single certificate with multiple domains can only partly address the need
  for SNI; still, different applications
  on the back-end will need different certificates.

SNI support is a show stopper for us if it's not there. We have too many
production services which rely on SNI working for us not to have this
feature going forward.

I'm not sure I understand what you're proposing when you say Single
certificate with multiple domains can only partly address the need for SNI,
still, different applications on back-end will need different certificates.

In order to fully support SNI, you simply need to be able to specify
alternate certificate/key(s) to use indexed by the hostname with which the
non-default certificate(s) correspond. And honestly, if you're going to
implement TLS termination support at all, then adding SNI is a pretty
trivial next step.
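
As a small illustration of "certificates indexed by hostname" (not LBaaS
code, just Python's ssl module on 3.4+, with made-up file names):

    import ssl

    default_ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    default_ctx.load_cert_chain("default.pem", "default.key")

    # One extra context per SNI hostname.
    sni_contexts = {}
    for host in ("www.example.com", "api.example.com"):
        ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
        ctx.load_cert_chain("%s.pem" % host, "%s.key" % host)
        sni_contexts[host] = ctx

    def pick_certificate(sock, server_name, context):
        # Called during the handshake: swap in the per-hostname context if
        # we have one; otherwise the default certificate is served.
        if server_name in sni_contexts:
            sock.context = sni_contexts[server_name]

    default_ctx.set_servername_callback(pick_certificate)

The listener just needs to keep that hostname-to-certificate map alongside
its default certificate.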

  b.  Back-end re-encryption was decided to be supported. Was a decision
  made not to support it?

So, some operators have said they need this eventually, but I think the
plan was to hold off on both re-encryption and any kind of client
certificate authentication in this release cycle.  I could be remembering
the discussion on this incorrectly, though, so others should feel free to
correct me on this.

  c.   With front-end client authentication and back-end server
  authentication not supported,
  should certificate chains be supported?

Certificate chains are a different beast entirely. What I mean by that is:
 Any given front-end (ie. server) certificate issued by an authoritative
CA today may need intermediate certificates supplied for most browsers to
trust the issued certificate implicitly. (Most do, in my experience.)
Therefore, in order to effectively do TLS offload at all, we need to be
able to supply an intermediate CA chain which will be supplied with the
server cert when a client connects to the service.
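
A tiny sketch of assembling such a bundle (file names are made up): the leaf
certificate goes first, followed by the intermediates, and the combined PEM
is what gets handed to the TLS terminator.

    def build_pem_bundle(server_cert_path, intermediate_paths):
        parts = [open(server_cert_path).read()]
        parts.extend(open(path).read() for path in intermediate_paths)
        return "".join(parts)

    bundle = build_pem_bundle("server.pem",
                              ["intermediate1.pem", "intermediate2.pem"])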

If you're talking about the CA bundle or chain which will be used to verify
client certificates when we offer that feature...  then no, we don't need
that until we offer the feature.


  2.   Barbican TLS containers

 a.   TLS containers are immutable.

 b.  TLS container is allowed to be deleted, always.

        i.   Even when it is used by an LBaaS VIP listener (or other service).

        ii.  Meta data on the TLS container will help the tenant understand that the container is in
  use by an LBaaS service/VIP listener.

        iii. If every VIP listener will “register” itself in meta-data while retrieving the
  container, how will that “registration” be removed when the VIP listener stops
  using the certificate?


So, there's other discussion of these points in previous replies in this
thread, but to summarize:

* There are multiple strategies for handling barbican container deletion,
and I favor the one suggested by Adam of creating shadow versions of any
referenced containers. I specifically do not like the one which allows for
dangling references, as this could mean the associated listener breaks
weeks or months after the barbican container was deleted (assuming no
eventing system is used, which it sounds like people are against.)

* Meta data on container is a good idea, IMO. Perhaps we might consider
simply leaving generic meta-data which is essentially a note to the
barbican system, or any GUI referencing the cert to check with the LBaaS
service to see which listeners are using the cert?  This wouldn't need to
be cleaned up, because if the container actually isn't in use by LBaaS
anymore, then LBaaS would simply respond that nothing is using the cert
anymore.




 Please comment on these points and review the document on gerrit (
 https://review.openstack.org/#/c/98640)

 I will update the document with decisions on above topics.


I shall try to make sure I have time to review the document later today or
tomorrow.

Stephen



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-10 Thread Adam Harwell
Doug: The reasons an LB might be reprovisioned are fairly important — mostly 
around HA, for failovers or capacity — exactly the times we're trying to avoid a 
failure.

Stephen: yes, I am talking about storing the copy in a non-tenant way (on the 
tenant-id for the LBaaS Service Account, not visible to the user). We would 
have to delete our shadow-copy when the loadbalancer was updated with a new 
barbicanID by the user, and make a copy of the new container to take its place.
Also, yes, I think it would be quite surprising (and far from ideal) when the 
LB you set up breaks weeks or months later when an HA event occurs and you 
haven't actually made any changes in quite a long time. Unfortunately, 
making the key unusable in all other contexts on a Barbican delete isn't 
really an option, so this is the best fallback I can think of.

--Adam

https://keybase.io/rm_you


From: Doug Wiegley do...@a10networks.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, June 10, 2014 2:53 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

 In any case, it strikes me as misleading to have an explicit delete command 
 sent to Barbican not have the effect of making the key unusable in all other 
 contexts. It would be less surprising behavior, IMO, to have a deleted 
 barbican container result in connected load balancing services breaking. 
 (Though without notification to LBaaS, the connected service might break 
 weeks or months after the key disappeared from barbican, which would be more 
 surprising behavior.)

Since a delete in barbican will not affect neutron/lbaas, and since most 
backends will have had to make their own copy of the key at lb provision time, 
the barbican delete will not result in lbaas breaking, I think.  The shadow 
copy would only get used if the lb had to be re-provisioned for some reason 
before it was given a new key id, which seems a fair bit of complexity for what 
is gained.

doug


From: Stephen Balukoff sbaluk...@bluebox.net
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, June 10, 2014 at 1:47 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

Adam--

Wouldn't the user see the duplicate key/cert copy in their barbican interface, 
or are you proposing storing these secrets in a not-assigned-to-the-tenant kind 
of way?

In any case, it strikes me as misleading to have an explicit delete command 
sent to Barbican not have the effect of making the key unusable in all other 
contexts. It would be less surprising behavior, IMO, to have a deleted barbican 
container result in connected load balancing services breaking. (Though without 
notification to LBaaS, the connected service might break weeks or months after 
the key disappeared from barbican, which would be more surprising behavior.)

Personally, I like your idea, as I think most of our users would rather have 
LBaaS issue warnings when the user has done something stupid like this rather 
than break entirely. I know our support staff would rather it behaved this way.

What's your proposal for cleaning up copied secrets when they're actually no 
longer in use by any LB?

Stephen


On Tue, Jun 10, 2014 at 12:04 PM, Adam Harwell 
adam.harw...@rackspace.commailto:adam.harw...@rackspace.com wrote:
So, it looks like any sort of validation on Deletes in Barbican is going
to be a no-go. I'd like to propose a third option, which might be the
safest route to take for LBaaS while still providing some of the
convenience of using Barbican as a central certificate store. Here is a
diagram of the interaction sequence to create a loadbalancer:
http://bit.ly/1pgAC7G

Summary: Pass the Barbican TLS Container ID to the LBaaS create call,
get the container from Barbican, and store a shadow-copy of the
container again in Barbican, this time on the LBaaS service account.
The secret will now be duplicated (it still exists on the original tenant,
but also exists on the LBaaS tenant), but we're not talking about a huge
amount of data here -- just a few kilobytes. With this approach, we retain
most of the advantages we wanted to get from using Barbican -- we don't
need to worry about taking secret data through the LBaaS API (we still
just take a barbicanID from the user), and the user can still use a single
barbicanID (the original one they created -- the copies are invisible to
them) when passing their TLS info to other services. We 

[openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-10 Thread Mohammad Banikazemi

Following the discussions in the ML2 subgroup weekly meetings, I have added
more information on the etherpad [1] describing the proposed architecture
for modular L2 agents. I have also posted some code fragments at [2]
sketching the implementation of the proposed architecture. Please have a
look when you get a chance and let us know if you have any comments.

[1] https://etherpad.openstack.org/p/modular-l2-agent-outline
[2] https://review.openstack.org/#/c/99187/___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-10 Thread Susanne Balle
What was discussed at yesterday's Neutron core meeting?



On Tue, Jun 10, 2014 at 3:38 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 Any core neutron people have a chance to give their opinions on this
 yet?

 Thanks,
 Brandon

 On Thu, 2014-06-05 at 15:28 +, Buraschi, Andres wrote:
  Thanks, Kyle. Great.
 
  -Original Message-
  From: Kyle Mestery [mailto:mest...@noironetworks.com]
  Sent: Thursday, June 05, 2014 11:27 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API
 
  On Wed, Jun 4, 2014 at 4:27 PM, Brandon Logan 
 brandon.lo...@rackspace.com wrote:
   Hi Andres,
   I've assumed (and we know how assumptions work) that the deprecation
   would take place in Juno and after a cycle or two it would totally be
   removed from the code.  Even if #1 is the way to go, the old /vips
   resource would be deprecated in favor of /loadbalancers and /listeners.
  
   I agree #2 is cleaner, but I don't want to start on an implementation
   (though I kind of already have) that will fail to be merged in because
   of the strategy.  The strategies are pretty different so one needs to
   be decided on.
  
   As for where LBaaS is intended to end up, I don't want to speak for
   Kyle, so this is my understanding; It will end up outside of the
   Neutron code base but Neutron and LBaaS and other services will all
   fall under a Networking (or Network) program.  That is my
   understanding and I could be totally wrong.
  
  That's my understanding as well, I think Brandon worded it perfectly.
 
   Thanks,
   Brandon
  
   On Wed, 2014-06-04 at 20:30 +, Buraschi, Andres wrote:
   Hi Brandon, hi Kyle!
   I'm a bit confused about the deprecation (btw, thanks for sending
 this Brandon!), as I (wrongly) assumed #1 would be the chosen path for the
 new API implementation. I understand the proposal and #2 sounds actually
 cleaner.
  
   Just out of curiosity, Kyle, where is LBaaS functionality intended to
 end up, if long-term plans are to remove it from Neutron?
  
   (Nit question, I must clarify)
  
   Thank you!
   Andres
  
   -Original Message-
   From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
   Sent: Wednesday, June 04, 2014 2:18 PM
   To: openstack-dev@lists.openstack.org
   Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API
  
   Thanks for your feedback Kyle.  I will be at that meeting on Monday.
  
   Thanks,
   Brandon
  
   On Wed, 2014-06-04 at 11:54 -0500, Kyle Mestery wrote:
On Tue, Jun 3, 2014 at 3:01 PM, Brandon Logan
brandon.lo...@rackspace.com wrote:
 This is an LBaaS topic but I'd like to get some Neutron Core
 members to give their opinions on this matter so I've just
 directed this to Neutron proper.

 The design for the new API and object model for LBaaS needs to be
 locked down before the hackathon in a couple of weeks and there
  are some questions that need to be answered.  This is pretty urgent: we need
  to come to a decision and to get a clear strategy defined so
 we can actually do real code during the hackathon instead of
 wasting some of that valuable time discussing this.


 Implementation must be backwards compatible

 There are 2 ways that have come up on how to do this:

 1) New API and object model are created in the same extension and
 plugin as the old.  Any API requests structured for the old API
  will be translated/adapted into the new object model.
 PROS:
 -Only one extension and plugin
 -Mostly true backwards compatibility -Do not have to rename
 unchanged resources and models
 CONS:
 -May end up being confusing to an end-user.
 -Separation of old api and new api is less clear -Deprecating and
 removing old api and object model will take a bit more work -This
 is basically API versioning the wrong way

 2) A new extension and plugin are created for the new API and
 object model.  Each API would live side by side.  New API would
 need to have different names for resources and object models from
 Old API resources and object models.
 PROS:
 -Clean demarcation point between old and new -No translation
 layer needed -Do not need to modify existing API and object
 model, no new bugs -Drivers do not need to be immediately
 modified -Easy to deprecate and remove old API and object model
 later
 CONS:
 -Separate extensions and object model will be confusing to
 end-users -Code reuse by copy paste since old extension and
 plugin will be deprecated and removed.
 -This is basically API versioning the wrong way

 Now if #2 is chosen to be feasible and acceptable then there are
 a number of ways to actually do that.  I won't bring those up
 until a clear decision is made on which strategy above is the
 most acceptable.

Thanks for sending this out 

Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-10 Thread Stephen Balukoff
Yep, I'd like to know here, too--  as knowing the answer to this unblocks
implementation work for us.


On Tue, Jun 10, 2014 at 12:38 PM, Brandon Logan brandon.lo...@rackspace.com
 wrote:

 Any core neutron people have a chance to give their opinions on this
 yet?

 Thanks,
 Brandon

 On Thu, 2014-06-05 at 15:28 +, Buraschi, Andres wrote:
  Thanks, Kyle. Great.
 
  -Original Message-
  From: Kyle Mestery [mailto:mest...@noironetworks.com]
  Sent: Thursday, June 05, 2014 11:27 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API
 
  On Wed, Jun 4, 2014 at 4:27 PM, Brandon Logan 
 brandon.lo...@rackspace.com wrote:
   Hi Andres,
   I've assumed (and we know how assumptions work) that the deprecation
    would take place in Juno and after a cycle or two it would totally be
   removed from the code.  Even if #1 is the way to go, the old /vips
   resource would be deprecated in favor of /loadbalancers and /listeners.
  
   I agree #2 is cleaner, but I don't want to start on an implementation
   (though I kind of already have) that will fail to be merged in because
   of the strategy.  The strategies are pretty different so one needs to
   be decided on.
  
   As for where LBaaS is intended to end up, I don't want to speak for
   Kyle, so this is my understanding; It will end up outside of the
   Neutron code base but Neutron and LBaaS and other services will all
   fall under a Networking (or Network) program.  That is my
   understanding and I could be totally wrong.
  
  That's my understanding as well, I think Brandon worded it perfectly.
 
   Thanks,
   Brandon
  
   On Wed, 2014-06-04 at 20:30 +, Buraschi, Andres wrote:
   Hi Brandon, hi Kyle!
   I'm a bit confused about the deprecation (btw, thanks for sending
 this Brandon!), as I (wrongly) assumed #1 would be the chosen path for the
 new API implementation. I understand the proposal and #2 sounds actually
 cleaner.
  
   Just out of curiosity, Kyle, where is LBaaS functionality intended to
 end up, if long-term plans are to remove it from Neutron?
  
   (Nit question, I must clarify)
  
   Thank you!
   Andres
  
   -Original Message-
   From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
   Sent: Wednesday, June 04, 2014 2:18 PM
   To: openstack-dev@lists.openstack.org
   Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API
  
   Thanks for your feedback Kyle.  I will be at that meeting on Monday.
  
   Thanks,
   Brandon
  
   On Wed, 2014-06-04 at 11:54 -0500, Kyle Mestery wrote:
On Tue, Jun 3, 2014 at 3:01 PM, Brandon Logan
brandon.lo...@rackspace.com wrote:
 This is an LBaaS topic but I'd like to get some Neutron Core
 members to give their opinions on this matter so I've just
 directed this to Neutron proper.

 The design for the new API and object model for LBaaS needs to be
 locked down before the hackathon in a couple of weeks and there
  are some questions that need to be answered.  This is pretty urgent: we need
  to come to a decision and to get a clear strategy defined so
 we can actually do real code during the hackathon instead of
 wasting some of that valuable time discussing this.


 Implementation must be backwards compatible

 There are 2 ways that have come up on how to do this:

 1) New API and object model are created in the same extension and
 plugin as the old.  Any API requests structured for the old API
  will be translated/adapted into the new object model.
 PROS:
 -Only one extension and plugin
 -Mostly true backwards compatibility -Do not have to rename
 unchanged resources and models
 CONS:
 -May end up being confusing to an end-user.
 -Separation of old api and new api is less clear -Deprecating and
 removing old api and object model will take a bit more work -This
 is basically API versioning the wrong way

 2) A new extension and plugin are created for the new API and
 object model.  Each API would live side by side.  New API would
 need to have different names for resources and object models from
 Old API resources and object models.
 PROS:
 -Clean demarcation point between old and new -No translation
 layer needed -Do not need to modify existing API and object
 model, no new bugs -Drivers do not need to be immediately
 modified -Easy to deprecate and remove old API and object model
 later
 CONS:
 -Separate extensions and object model will be confusing to
 end-users -Code reuse by copy paste since old extension and
 plugin will be deprecated and removed.
 -This is basically API versioning the wrong way

 Now if #2 is chosen to be feasible and acceptable then there are
 a number of ways to actually do that.  I won't bring those up
 until a clear decision is made on which strategy above is the
 most 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-10 Thread Stephen Balukoff
Hi Adam,

If nothing else, we could always write a garbage collector process which
periodically scans for shadow containers that are not in use by any
listeners anymore and cleans them up. I suppose that's actually not a
difficult problem to solve.
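
Something along these lines, with hypothetical helper names (list_listeners,
list_shadow_containers, delete_container):

    def collect_unused_shadow_copies():
        in_use = set(l.shadow_container_id for l in list_listeners()
                     if l.shadow_container_id)
        for ref in list_shadow_containers():
            if ref not in in_use:
                delete_container(ref)

Run periodically (or after listener deletes), that keeps the service
account's copies bounded by what is actually deployed.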

Anyway, yes, I'm liking your suggestion quite a bit:  It leaves barbican
and LBaaS neatly de-coupled, and prevents a ticking time bomb from
exploding in the face of a user who deletes a container they didn't know
was still in use.

Stephen


On Tue, Jun 10, 2014 at 1:19 PM, Adam Harwell adam.harw...@rackspace.com
wrote:

  Doug: The reasons an LB might be reprovisioned are fairly important —
 mostly around HA, for failovers or capacity — exactly the times we're
 trying to avoid a failure.

  Stephen: yes, I am talking about storing the copy in a non-tenant way
 (on the tenant-id for the LBaaS Service Account, not visible to the user).
 We would have to delete our shadow-copy when the loadbalancer was updated
 with a new barbicanID by the user, and make a copy of the new container to
 take its place.
 Also, yes, I think it would be quite surprising (and far from ideal) when
 the LB you set up breaks weeks or months later when an HA event occurs and
 you haven't actually made any changes in quite a long time.
 Unfortunately, making the key unusable in all other contexts on a
 Barbican delete isn't really an option, so this is the best fallback I can
 think of.

  --Adam

  https://keybase.io/rm_you


   From: Doug Wiegley do...@a10networks.com

 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 10, 2014 2:53 PM

 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
 Integration Ideas

 In any case, it strikes me as misleading to have an explicit delete
 command sent to Barbican not have the effect of making the key unusable in
 all other contexts. It would be less surprising behavior, IMO, to have a
 deleted barbican container result in connected load balancing services
 breaking. (Though without notification to LBaaS, the connected service
 might break weeks or months after the key disappeared from barbican, which
 would be more surprising behavior.)

  Since a delete in barbican will not affect neutron/lbaas, and since most
 backends will have had to make their own copy of the key at lb provision
 time, the barbican delete will not result in lbaas breaking, I think.  The
 shadow copy would only get used if the lb had to be re-provisioned for some
 reason before it was given a new key id, which seems a fair bit of
 complexity for what is gained.

  doug


   From: Stephen Balukoff sbaluk...@bluebox.net
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 10, 2014 at 1:47 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
 Integration Ideas

   Adam--

  Wouldn't the user see the duplicate key/cert copy in their barbican
 interface, or are you proposing storing these secrets in a
 not-assigned-to-the-tenant kind of way?

  In any case, it strikes me as misleading to have an explicit delete
 command sent to Barbican not have the effect of making the key unusable in
 all other contexts. It would be less surprising behavior, IMO, to have a
 deleted barbican container result in connected load balancing services
 breaking. (Though without notification to LBaaS, the connected service
 might break weeks or months after the key disappeared from barbican, which
 would be more surprising behavior.)

  Personally, I like your idea, as I think most of our users would rather
 have LBaaS issue warnings when the user has done something stupid like this
 rather than break entirely. I know our support staff would rather it
 behaved this way.

  What's your proposal for cleaning up copied secrets when they're
 actually no longer in use by any LB?

  Stephen


 On Tue, Jun 10, 2014 at 12:04 PM, Adam Harwell adam.harw...@rackspace.com
  wrote:

 So, it looks like any sort of validation on Deletes in Barbican is going
 to be a no-go. I'd like to propose a third option, which might be the
 safest route to take for LBaaS while still providing some of the
 convenience of using Barbican as a central certificate store. Here is a
 diagram of the interaction sequence to create a loadbalancer:
 http://bit.ly/1pgAC7G

 Summary: Pass the Barbican TLS Container ID to the LBaaS create call,
 get the container from Barbican, and store a shadow-copy of the
 container again in Barbican, this time on the LBaaS service account.
 The secret will now be duplicated (it still exists on the original tenant,
 but also exists on the LBaaS tenant), but we're not talking about a huge
 amount of data here -- just a few 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-10 Thread Doug Wiegley
 Doug: The reasons an LB might be reprovisioned are fairly important — mostly 
 around HA, for failovers or capacity — exactly the times we're trying to avoid 
 a failure.

Certainly the ticking time bomb is a bad idea, but HA seems cleaner to do in 
the backend, rather than at the openstack API level (the dangling reference 
doesn’t kick in until the lbaas api is used to accomplish that failover.)  And 
the lbaas api also doesn’t have any provisions for helping to shuffle for 
capacity, so that also becomes a backend issue.  And the backend won’t be 
natively dealing with a barbican reference.

However, couple this with service VM’s, and I guess the issue pops back into 
the forefront.  This is going to be an issue that everyone using ssl certs in 
barbican is going to have, so it seems we’re applying a band-aid in the wrong 
place.

Doug


From: Adam Harwell adam.harw...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, June 10, 2014 at 2:19 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

Doug: The reasons an LB might be reprovisioned are fairly important — mostly 
around HA, for failovers or capacity — exactly the times we're trying to avoid a 
failure.

Stephen: yes, I am talking about storing the copy in a non-tenant way (on the 
tenant-id for the LBaaS Service Account, not visible to the user). We would 
have to delete our shadow-copy when the loadbalancer was updated with a new 
barbicanID by the user, and make a copy of the new container to take its place.
Also, yes, I think it would be quite surprising (and far from ideal) when the 
LB you set up breaks weeks or months later when an HA event occurs and you 
haven't actually made any changes in quite a long time. Unfortunately, 
making the key unusable in all other contexts on a Barbican delete isn't 
really an option, so this is the best fallback I can think of.

--Adam

https://keybase.io/rm_you


From: Doug Wiegley do...@a10networks.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, June 10, 2014 2:53 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

 In any case, it strikes me as misleading to have an explicit delete command 
 sent to Barbican not have the effect of making the key unusable in all other 
 contexts. It would be less surprising behavior, IMO, to have a deleted 
 barbican container result in connected load balancing services breaking. 
 (Though without notification to LBaaS, the connected service might break 
 weeks or months after the key disappeared from barbican, which would be more 
 surprising behavior.)

Since a delete in barbican will not affect neutron/lbaas, and since most 
backends will have had to make their own copy of the key at lb provision time, 
the barbican delete will not result in lbaas breaking, I think.  The shadow 
copy would only get used if the lb had to be re-provisioned for some reason 
before it was given a new key id, which seems a fair bit of complexity for what 
is gained.

doug


From: Stephen Balukoff sbaluk...@bluebox.net
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, June 10, 2014 at 1:47 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

Adam--

Wouldn't the user see the duplicate key/cert copy in their barbican interface, 
or are you proposing storing these secrets in a not-assigned-to-the-tenant kind 
of way?

In any case, it strikes me as misleading to have an explicit delete command 
sent to Barbican not have the effect of making the key unusable in all other 
contexts. It would be less surprising behavior, IMO, to have a deleted barbican 
container result in connected load balancing services breaking. (Though without 
notification to LBaaS, the connected service might break weeks or months after 
the key disappeared from barbican, which would be more surprising behavior.)

Personally, I like your idea, as I think most of our users would rather have 
LBaaS issue warnings when the user has done something stupid like this rather 
than break entirely. I know our support staff would rather it behaved this way.

What's your proposal 

[openstack-dev] [nova][pci] A couple of questions

2014-06-10 Thread Robert Li (baoli)
Hi Yunhong  Yongli,

In the routine _prepare_pci_devices_for_use(), it’s referring to 
dev[‘hypervisor_name’]. I didn’t see any code that sets it up, nor does the libvirt 
nodedev XML include hypervisor_name. Is this specific to Xen?

Another question is about the issue that was raised in this review: 
https://review.openstack.org/#/c/82206/. It’s about the use of node id or host 
name in the PCI device table. I’d like to know you guys’ thoughts on that.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-10 Thread Janczuk, Tomasz
Using processes to isolate tenants is certainly possible. There is a range
of isolation mechanisms that can be used, from VM level isolation
(basically a separate deployment of the broker per-tenant), to process
level isolation, to sub-process isolation. The higher the density the
lower the overall marginal cost of adding a tenant to the system, and
overall cost of operating it. From the cost perspective it is therefore
desirable to provide a sub-process multi-tenancy mechanism; at the same time
this is the most challenging approach.
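
As a toy sketch of the process-level end of that range (nothing
Marconi-specific; run_broker_for_tenant is a stand-in for starting a
single-tenant broker instance):

    import multiprocessing

    def run_broker_for_tenant(tenant_id):
        # Placeholder for a single-tenant broker's main loop.
        print("broker serving tenant %s" % tenant_id)

    brokers = {}
    for tenant in ("tenant-a", "tenant-b"):
        proc = multiprocessing.Process(target=run_broker_for_tenant,
                                       args=(tenant,))
        proc.start()
        brokers[tenant] = proc

Each tenant can then only exhaust its own process's resources, but the
per-tenant memory and operational overhead is what drives the marginal cost
up compared to a sub-process mechanism.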

On 6/10/14, 1:39 PM, Gordon Sim g...@redhat.com wrote:

On 06/10/2014 06:33 PM, Janczuk, Tomasz wrote:
  From my perspective the key promise of Marconi is to provide a
 *multi-tenant*, *HTTP*-based queuing system. Think an OpenStack equivalent
 of SQS or Azure Storage Queues.

 As far as I know there are no off-the-shelf message brokers out there
 that fit that bill.

Indeed. The brokers I'm familiar with don't have multi-tenancy built
into them. But rather than have one broker process support multiple
tenants, wouldn't it be possible to just have separate processes (even
separate containers) for each tenant?

 Note that when I say “multi-tenant” I don’t mean just having the
 multi-tenancy concept reflected in the APIs. The key aspect of multi-tenancy is
 security hardening against a variety of attacks absent in single-tenant
 broker deployments. For example, an authenticated DOS attack.

Understood, ensuring that one tenant is completely isolated from being
impacted by anything another tenant might try to do.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

