Re: [openstack-dev] question of vcpu-memory-hotplug progress

2014-01-18 Thread Wangshen (Peter)
 -Original Message-
 From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
 Sent: Friday, January 17, 2014 1:01 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Jinbo (Justin)
 Subject: Re: [openstack-dev] question of vcpu-memory-hotplug progress

 Can you provide more information on how this is implemented? Support in
 kvm seems spotty and there are likely guests that don't support it at all. The
 memory hotplug idea doesn't even seem to exist.

The CPU hotplug feature has been implemented since QEMU (version 1.6.2) and 
libvirt (version 1.2). Based on them we did a preliminary implementation (using 
libvirt's setVcpusFlags interface) and tested it successfully on guests (both 
Fedora 18 and Windows Server 2008 Datacenter Edition).
Although some older operating systems do not support CPU hotplug, we still 
think the feature is useful for users whose operating systems do support it.
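
For reference, a minimal sketch of the libvirt call involved (illustrative 
only: the domain name and vCPU counts are made up, and the guest must have 
been defined with a sufficiently high maximum vCPU count):

    import libvirt

    # Hot-add vCPUs to a running guest named "demo" (hypothetical name).
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('demo')
    # Raise the active vCPU count to 4 on the live domain.
    dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    conn.close()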

 Note that you need to set a milestone before it will be put in the queue to
 be reviewed.
 
 https://wiki.openstack.org/wiki/Blueprints#Blueprint_Review_Criteria
 
 In this particular case I think there needs to be a lot more information
 about the plan for implementation before I would think it is valuable to
 spend time on this feature.
 

The current blueprint covers both the CPU hotplug and memory hotplug features, 
but the two features are independent of each other. Would it be better to split 
it into two blueprints, one for CPU hotplug and one for memory hotplug, to 
accelerate progress and make each easier to follow?
If you agree with our suggestion, we will submit a new blueprint for CPU 
hotplug, per the Blueprint_Review_Criteria.

Thanks. 



Re: [openstack-dev] [Neutron] Relationship between Neutron LBaaS and Libra

2014-01-18 Thread Andrew Hutchings

On 17 Jan 2014, at 19:53, Jay Pipes jaypi...@gmail.com wrote:

 On Fri, 2014-01-17 at 17:03 +, Andrew Hutchings wrote:
 On 17 Jan 2014, at 16:10, Jay Pipes jaypi...@gmail.com wrote:
 
 On Fri, 2014-01-17 at 14:34 +0100, Thomas Herve wrote:
 Hi all,
 
 I've been looking at Neutron default LBaaS provider using haproxy, and 
 while it's working nicely, it seems to have several shortcomings in terms 
 of scalability and high availability. The Libra project seems to offer a 
 more robust alternative, at least for scaling. The haproxy implementation 
 in Neutron seems to continue to evolve (like with 
 https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy), but I'm 
 wondering why we can't leverage Libra. The APIs are a bit different, but 
 the goals look very similar, and there is a waste of effort with 2 
 different implementations. Maybe we could see a Libra driver for Neutron 
 LBaaS for example?
 
 Yep, it's a completely duplicative and wasteful effort.
 
 It would be great for Libra developers to contribute to Neutron LBaaS.
 
 Hi Jay and Thomas,
 
 I am the outgoing technical lead of Libra for HP.  But will reply whilst the 
 new technical lead (Marc Pilon) gets subscribed to this.
 
 :( I had no idea, Andrew!

Not a big deal; I have some cool open source stuff in HP I’m moving on to 
which I’m excited about and can help OpenStack in other ways.  You should hear 
about that in a few months’ time.

 I would go as far as duplicative or wasteful. Libra existed before Neutron 
 LBaaS and is originally based on the Atlas API specifications.  Neutron 
 LBaaS has started duplicating some of our features recently which we find 
 quite flattering.
 
 I presume you meant you would *not* go as far as duplicative or
 wasteful :)

Yes, sorry, got back from Seattle a couple of days ago and am extremely jet 
lagged :)

 So, please don't take this the wrong way... but does anyone other than
 HP run Libra? Likewise, does anyone other than Rackspace run Atlas?

No one else runs it in production that I know about, but there are several 
trying it out and appearing to want to contribute.

 I find it a little difficult to comprehend why, if Libra preceded work
 on Neutron LBaaS, it wasn't used as the basis of Neutron's LBaaS
 work. I can understand this for Atlas, since it's Java, but Libra is
 Python code... so it's even more confusing to me.
 
 Granted, I don't know the history of Neutron LBaaS, but it just seems to
 me that this particular area (LBaaS) has such blatantly overlapping
 codebases with separate contributor teams. Just baffling really.
 
 Any background or history you can give me (however opinionated!) would
 be very much appreciated :)

I don’t know if we pre-dated the planning phase; judging by a later email on 
this thread, our planning phases ran at the same time.  I wasn’t a big part of 
the planning phase, so it is difficult to comment there.  But we had something 
we could use before we were in a place where we could even try out Neutron.  
Also, to start with, our API was Java based (a Python replacement came later).

 After the 5.x release of Libra has been stabilised we will be working 
 towards integration with Neutron.  It is a very important thing on our 
 roadmap and we are already working with 2 other large companies in OpenStack 
 to figure that piece out.
 
 Which large OpenStack companies? Are these companies currently deploying
 Libra?

I’m not sure what is public and what isn’t so I won’t name names.  They are 
currently talking to us about the best ways of working with us.  Both companies 
want to use Libra in different and interesting ways.  They are not currently 
deploying it but are both playing with it.  It is early days, they both 
approached us just before the Christmas break.

We know that working with the wider community with Libra has not been our 
strong point.  It is something I want the team to rectify and they are showing 
signs of making that happen.  People that are interested in Libra are welcome 
to hang out in the #stackforge-libra IRC channel to talk to us.


Re: [openstack-dev] [qa] [Tempest - Stress Test] : implement a full SSH connection on ssh_floating.py and improve it

2014-01-18 Thread Koderer, Marc
Hello Julien,

maybe my blog post helps you with some more details:

http://telekomcloud.github.io/2013/09/11/new-ways-of-tempest-stress-testing.html

You can run a single test if you add a new JSON file with the test function 
you want to test, like:
https://github.com/openstack/tempest/blob/master/tempest/stress/etc/sample-unit-test.json

With that you can launch them with the parameters you already described.
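
For reference, such a file looks roughly like the following (paraphrased from 
the sample file linked above; check that file for the exact keys):

    [{"action": "tempest.stress.actions.unit_test.UnitTest",
      "threads": 4,
      "use_admin": false,
      "use_isolated_tenants": false,
      "kwargs": {"test_method": "tempest.api.volume.test_volumes_actions.VolumesActionsTest.test_attach_detach_volume_to_instance",
                 "class_setup_per": "process"}}]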

Regards,
Marc


From: LELOUP Julien [julien.lel...@3ds.com]
Sent: Friday, January 17, 2014 3:49 PM
To: Koderer, Marc
Cc: openstack-dev@lists.openstack.org
Subject: RE: [Tempest - Stress Test] : implement a full SSH connection on 
ssh_floating.py and improve it

Hi Marc,

The Etherpad you provided was helpful for understanding the current state of 
the stress tests.

I admit that I have some difficulty understanding how I can run a single test 
built with the @stresstest decorator (while not a beginner in Python, I still 
have things to learn about this technology, and a lot more about 
OpenStack/Tempest :) ).
I used to run my test using ./run_stress.py -t [JSON configuration pointing at 
my action .py script] -d [duration], which allowed me to run only one test 
with a dedicated configuration (number of threads, ...).

From what I understand now in Tempest, I have only managed to run all tests, 
using ./run_tests.sh, and the only configuration I found related to stress 
tests was the [stress] section in tempest.conf.

For example: let's say I ported my SSH stress test as a scenario test with the 
@stresstest decorator.
How can I launch this test (and only this one) and use a dedicated 
configuration file like the ones found in tempest/stress/etc?

Another question I have now: if I have to use run_tests.sh and not 
run_stress.py anymore, how do I get the test run statistics I used to have, 
and where should I put code to improve them?

Once I have cleared up all these practical details, maybe I should add some 
content to the wiki about stress tests in Tempest.

Best Regards,

Julien LELOUP
julien.lel...@3ds.com

-Original Message-
From: Koderer, Marc [mailto:m.kode...@telekom.de]
Sent: Friday, January 17, 2014 3:07 PM
To: LELOUP Julien
Cc: openstack-dev@lists.openstack.org
Subject: RE: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection 
on ssh_floating.py and improve it

Hi Julien,

most of the cases in tempest/stress are already covered by existing tests in 
/api or /scenario. The only thing that is missing is the decorator on them.

BTW here is the Etherpad from the summit talk that we had:
https://etherpad.openstack.org/p/icehouse-summit-qa-stress-tests

It may help to understand the current state. I didn't manage to work on the 
action items that are left.

Your suggestions sound good, so I'd be happy to see some patches :)

Regards
Marc

From: LELOUP Julien [julien.lel...@3ds.com]
Sent: Friday, January 17, 2014 11:52 AM
To: Koderer, Marc
Cc: openstack-dev@lists.openstack.org
Subject: RE: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection 
on ssh_floating.py and improve it

Hello Marc,

Thanks for your answer.

At the moment I'm willing to spend some time on this kind of scenario so I will 
see how to use the stress decorator inside a scenario test.
Does this means that all stress tests available in tempest/stress should be 
ported as scenario tests with this decorator ?

I do have some ideas about features on stress test that I find useful for my 
own use case : like adding more statistics on stress test runs in order to use 
them as benchmarks.
I don't know if this kind of feature was already discussed in the OpenStack 
community but since stress tests are a bit deprecated now, maybe there is some 
room for this kind of improvement on fresh stress tests.

Best Regards,

Julien LELOUP

-Original Message-
From: Koderer, Marc [mailto:m.kode...@telekom.de]
Sent: Friday, January 17, 2014 9:45 AM
To: LELOUP Julien
Cc: openstack-dev@lists.openstack.org
Subject: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection on 
ssh_floating.py and improve it

Hello Julien,

I forwarded your mail to the correct mailing list. Please do not use the qa 
list any longer.

I am happy that you are interested in stress tests. All the tests in 
tempest/stress/actions are more or less deprecated. So what you should use 
instead is the stress decorator (e.g. 
https://github.com/openstack/tempest/blob/master/tempest/api/volume/test_volumes_actions.py#L55).
Unfortunately it's not yet used for scenarios like the one you describe. I'd 
suggest building a scenario test in tempest/scenario and using this decorator 
on it.
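
For illustration, marking a test this way looks roughly like the linked 
example (a sketch assuming the decorator is exposed via tempest.test; the 
class and method names here are hypothetical):

    from tempest.scenario import manager
    from tempest import test

    class TestSshFloating(manager.OfficialClientTest):

        @test.stresstest(class_setup_per='process')
        def test_ssh_via_floating_ip(self):
            # boot a server, associate a floating IP, then SSH into it
            ...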

Any patch like that is welcome on Gerrit. If you are planning to work in that 
area for more than just one patch, a blueprint would be nice. The QA meeting is 
also a good way to coordinate your efforts 
(https://wiki.openstack.org/wiki/Meetings/QATeamMeeting)


Re: [openstack-dev] [Neutron] Relationship between Neutron LBaaS and Libra

2014-01-18 Thread Andrew Hutchings
Hi,

I haven’t read through those (I need to go spend time with family, so I'm 
replying quickly), but given the dates, the planning phases for Quantum/Neutron 
LBaaS and Libra LBaaS happened at the same time.

There was an internal evaluation of the current LBaaS solutions done at the 
time, and the people evaluating believed that Atlas was a good place to start.  
I came in just as that evaluation had finished (late August 2012) and the 
decision was pretty much made.  In retrospect I might personally have gone with 
Mirantis LBaaS as a starting point.  But either way, there were some good 
starting places.

Libra was initially spread across a few trees, so its history is hard to 
track, but we had something in production by December of that year.

In response to Alex, the Libra team in HP is growing (it is currently still 
pretty small) and that should give us more engineers to work with the Neutron 
team.  The current goal is to get 5.x out of the door, which adds things like 
metering/billing support, and then to plan how we can integrate well with 
Neutron.  I believe the Libra team has a few diagrams flying around on a 
mutually beneficial way of doing that.

Kind Regards
Andrew

On 17 Jan 2014, at 23:00, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com 
wrote:

 Hi,
 
 
 Here are the e-mail threads which keep the history of the LBaaS decisions:
 LBaaS IRC meeting minutes: 
 http://lists.openstack.org/pipermail/openstack-dev/2012-August/000390.html
 LBaaS e-mail discussion: 
 http://lists.openstack.org/pipermail/openstack-dev/2012-August/000785.html
 
 As you can see, there was a comparison of the LBaaS solutions existing at 
 that moment:
  * Atlas-LB
  * Mirantis LBaaS
  * eBay LBaaS
 
 Git history shows that the initial commit for Libra was on September 10th 
 2012. This commit contains a few files without any LBaaS functionality.
 
 I think it is quite fair to say that the OpenStack community did a great job 
 of carefully evaluating existing, working LBaaS projects and made a decision 
 to add some of the existing functionality to Quantum.
 
 Thanks
 Georgy
 
 
 On Fri, Jan 17, 2014 at 1:12 PM, Alex Freedland afreedl...@mirantis.com 
 wrote:
 Andrew, Jay and all,
 
 Thank you for bringing this topic up. Incidentally, just a month ago at 
 OpenStack Israel I spoke to Monty and other HP folks about getting the Libra 
 initiatives integrated into LBaaS.  I am happy that this discussion is now 
 happening on the mailing list. 
 
 I remember the history of how this got started. Mirantis was working with a 
 number of customers (GAP, PayPal, and a few others) who were asking for an 
 LBaaS feature. At that time, Atlas was the default choice in the community, 
 but its Java-based implementation did not agree with the rest of OpenStack. 
 
 There was no Libra anywhere in the OpenStack sandbox, so we proposed a set 
 of blueprints, and Eugene Nikonorov and the team started moving ahead with 
 the implementation. Even before the code was accepted into Quantum, a number 
 of customers started to use it, and a number of vendors (F5, Radware, etc.) 
 joined the community to add their own plugins. 
 
 Consequently, the decision was made to add LBaaS to Quantum (aka Neutron). 
 
 We would love to see the Libra developers join the Neutron team and 
 collaborate on the ways to bring the two initiatives together.
 
 
 Alex Freedland
 Community Team
 Mirantis, Inc.
 
 
 
 
 On Fri, Jan 17, 2014 at 11:53 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Fri, 2014-01-17 at 17:03 +, Andrew Hutchings wrote:
  On 17 Jan 2014, at 16:10, Jay Pipes jaypi...@gmail.com wrote:
 
   On Fri, 2014-01-17 at 14:34 +0100, Thomas Herve wrote:
   Hi all,
  
   I've been looking at Neutron default LBaaS provider using haproxy, and 
   while it's working nicely, it seems to have several shortcomings in 
   terms of scalability and high availability. The Libra project seems to 
   offer a more robust alternative, at least for scaling. The haproxy 
   implementation in Neutron seems to continue to evolve (like with 
   https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy), but 
   I'm wondering why we can't leverage Libra. The APIs are a bit different, 
   but the goals look very similar, and there is a waste of effort with 2 
   different implementations. Maybe we could see a Libra driver for Neutron 
   LBaaS for example?
  
   Yep, it's a completely duplicative and wasteful effort.
  
   It would be great for Libra developers to contribute to Neutron LBaaS.
 
  Hi Jay and Thomas,
 
  I am the outgoing technical lead of Libra for HP.  But will reply whilst 
  the new technical lead (Marc Pilon) gets subscribed to this.
 
 :( I had no idea, Andrew!
 
  I would go as far as duplicative or wasteful. Libra existed before Neutron 
  LBaaS and is originally based on the Atlas API specifications.  Neutron 
  LBaaS has started duplicating some of our features recently which we find 
  quite flattering.
 
 I presume you meant you would *not* go as far as duplicative or
 wasteful 

Re: [openstack-dev] a common client library

2014-01-18 Thread Doug Hellmann
On Fri, Jan 17, 2014 at 11:03 PM, John Utz john@wdc.com wrote:

 Outlook Web MUA, forgive the top post. :-(

 While a single import line that brings all the good stuff in at one shot
 is very convenient for the creation of an application, it would muddy the
 security picture *substantially* for the exact type of developer/customer
 that you would be targeting with this sort of syntactic sugar.

 As Jesse alludes to below, the expanding tree of dependencies would be
 masked by the aggregation.

 So, most likely, they would be pulling in vast numbers of things that they
 don't require to get their simple app done (there's an idea! an eclipse
 plugin that helpfully points out all the things that you are *not* using
 and offers to redo your imports for you :-) ).


I'm not sure what vast numbers of things you mean. The point is to make
one thing to talk to an OpenStack cloud consistently, instead of a separate
library for every facet of the cloud. Surely building on a common code
base will have the opposite effect you suggest -- it should reduce the
number of dependencies, and make it easier to track security updates in
those dependencies.

Doug




 As a result, when a security defect is published concerning one of those
 hidden dependencies, they will not have any reason to think that it affects
 them.

 just my us$0.02;

 johnu
 
 From: Jesse Noller [jesse.nol...@rackspace.com]
 Sent: Thursday, January 16, 2014 5:42 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] a common client library

 On Jan 16, 2014, at 4:59 PM, Renat Akhmerov rakhme...@mirantis.com
 mailto:rakhme...@mirantis.com wrote:


 Since it’s pretty easy to get lost among all the opinions I’d like to
 clarify/ask a couple of things:


   *   Keeping all the clients physically separate / combining them into a
       single library. Two things here:
       *   In case of combining them, what exact projects are we considering?
           If this list is limited to core projects like nova and keystone,
           what policy could we have for other projects to join this list?
           (Incubation, graduation, something else?)
       *   In terms of granularity and ease of development I’m for keeping
           them separate but having them use the same boilerplate code;
           basically we need an OpenStack REST Client Framework which is
           flexible enough to address all the needs in an abstract,
           domain-agnostic manner. I would assume that combining them would
           be an additional organizational burden that every stakeholder
           would have to deal with.

 Keeping them separate is awesome for *us* but really, really, really sucks
 for users trying to use the system.

 You may be right but not sure that adding another line into
 requirements.txt is a huge loss of usability.


 It is when that 1 dependency pulls in 6 others that pull in 10 more -
 every little barrier or potential failure, from the inability to make a
 static binary to how each tool acts differently, is a paper cut of
 frustration for an end user.

 Most of the time the clients don't even install properly because of
 dependencies on setuptools plugins and other things. For developers (as
 I've said) the story is worse: you have potentially 22+ individual packages
 and their dependencies to deal with if you want to use a complete
 openstack install from your code.

 So it doesn't boil down to just 1 dependency: it's a long laundry list of
 things that make consumers' lives more difficult and painful.

 This doesn't even touch on the fact that there aren't blessed SDKs or tools
 pointing users to consume openstack in their preferred programming language.

 Shipping an API isn't enough - but it can be fixed easily enough.

 Renat Akhmerov


Re: [openstack-dev] a common client library

2014-01-18 Thread Doug Hellmann
I like the idea of a fresh start, but I don't think that's incompatible
with the other work to clean up the existing clients. That cleanup work
could help with creating the backwards compatibility layer, if a new
library needs to include one, for example.

As far as namespace packages and separate client libraries, I'm torn. It
makes sense, and I originally assumed we would want to take that approach.
The more I think about it, though, the more I like the approach Dean took
with the CLI, creating a single repository with a team responsible for
managing consistency in the UI.

Doug


On Sat, Jan 18, 2014 at 1:00 AM, Jamie Lennox jamielen...@redhat.comwrote:

 I can't see any reason that all of these situations can't be met.

 We can finally take the openstack pypi namespace, move keystoneclient ->
 openstack.keystone and similar for the other projects. Have them all based
 upon openstack.base and probably an openstack.transport for transport.

 For the all-in-one users we can then just have openstack.client which
 depends on all of the openstack.x projects. This would satisfy the
 requirement of keeping projects separate, but having the one entry point
 for newer users. Similar to the OSC project (which could actually rely on
 the new all-in-one).

 This would also satisfy a lot of the clients which, I know, are looking
 to move to a version 2 and break compatibility with some of the crap from
 the early days.

 I think what is most important here is deciding what we want from our
 clients and discussing a common base that we are happy to support - not
 just renaming the existing ones.

 (I don't buy the problem with large numbers of dependencies; if you have a
 meta-package you just have one line in requirements and pip will figure the
 rest out.)

 Jamie

 - Original Message -
  From: Jonathan LaCour jonathan-li...@cleverdevil.org
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Saturday, 18 January, 2014 4:00:58 AM
  Subject: Re: [openstack-dev] a common client library
 
  On Thu, Jan 16, 2014 at 1:23 PM, Donald Stufft  don...@stufft.io 
 wrote:
 
 
 
 
  On Jan 16, 2014, at 4:06 PM, Jesse Noller  jesse.nol...@rackspace.com 
  wrote:
 
 
 
 
 
  On Jan 16, 2014, at 2:22 PM, Renat Akhmerov  rakhme...@mirantis.com 
 wrote:
 
 
 
 
  Since it’s pretty easy to get lost among all the opinions I’d like to
  clarify/ask a couple of things:
 
 
 
  * Keeping all the clients physically separate/combining them in to a
  single library. Two things here:
  * In case of combining them, what exact project are we
 considering?
  If this list is limited to core projects like nova and keystone
 what
  policy could we have for other projects to join this list?
  (Incubation, graduation, something else?)
  * In terms of granularity and easiness of development I’m for
 keeping
  them separate but have them use the same boilerplate code,
 basically
  we need a OpenStack Rest Client Framework which is flexible
 enough
  to address all the needs in an abstract domain agnostic manner. I
  would assume that combining them would be an additional
  organizational burden that every stakeholder would have to deal
  with.
 
  Keeping them separate is awesome for *us* but really, really, really
 sucks
  for users trying to use the system.
 
  I agree. Keeping them separate trades user usability for developer
 usability,
  I think user usability is a better thing to strive for.
  100% agree with this. In order for OpenStack to be its most successful, I
  believe firmly that a focus on end-users and deployers needs to take the
  forefront. That means making OpenStack clouds as easy to
 consume/leverage as
  possible for users and tool builders, and simplifying/streamlining as
 much
  as possible.
 
  I think that a single, common client project, based upon package
 namespaces,
  with a unified, cohesive feel is a big step in this direction.
 
  --
  Jonathan LaCour
 
 


Re: [openstack-dev] a common client library

2014-01-18 Thread Jesse Noller

On Jan 18, 2014, at 8:13 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote:

I like the idea of a fresh start, but I don't think that's incompatible with 
the other work to clean up the existing clients. That cleanup work could help 
with creating the backwards compatibility layer, if a new library needs to 
include one, for example.

As far as namespace packages and separate client libraries, I'm torn. It makes 
sense, and I originally assumed we would want to take that approach. The more I 
think about it, though, the more I like the approach Dean took with the CLI, 
creating a single repository with a team responsible for managing consistency 
in the UI.

Doug



I’m going to pursue the latter - but at the same time the current effort to 
clean things up seems to be running afoul of the keystone client changes in 
flight?


On Sat, Jan 18, 2014 at 1:00 AM, Jamie Lennox jamielen...@redhat.com wrote:
I can't see any reason that all of these situations can't be met.

We can finally take the openstack pypi namespace, move keystoneclient -> 
openstack.keystone and similar for the other projects. Have them all based upon 
openstack.base and probably an openstack.transport for transport.

For the all-in-one users we can then just have openstack.client which depends 
on all of the openstack.x projects. This would satisfy the requirement of 
keeping projects separate, but having the one entry point for newer users. 
Similar to the OSC project (which could actually rely on the new all-in-one).

This would also satisfy a lot of the clients which, I know, are looking to 
move to a version 2 and break compatibility with some of the crap from the 
early days.

I think what is most important here is deciding what we want from our clients 
and discussing a common base that we are happy to support - not just renaming 
the existing ones.

(I don't buy the problem with large numbers of dependencies; if you have a 
meta-package you just have one line in requirements and pip will figure the 
rest out.)

Jamie

- Original Message -
 From: Jonathan LaCour jonathan-li...@cleverdevil.org
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Saturday, 18 January, 2014 4:00:58 AM
 Subject: Re: [openstack-dev] a common client library

 On Thu, Jan 16, 2014 at 1:23 PM, Donald Stufft don...@stufft.io wrote:




 On Jan 16, 2014, at 4:06 PM, Jesse Noller jesse.nol...@rackspace.com 
 wrote:
 wrote:





 On Jan 16, 2014, at 2:22 PM, Renat Akhmerov rakhme...@mirantis.com wrote:




 Since it’s pretty easy to get lost among all the opinions I’d like to
 clarify/ask a couple of things:



 * Keeping all the clients physically separate/combining them in to a
 single library. Two things here:
 * In case of combining them, what exact project are we considering?
 If this list is limited to core projects like nova and keystone what
 policy could we have for other projects to join this list?
 (Incubation, graduation, something else?)
 * In terms of granularity and easiness of development I’m for keeping
 them separate but have them use the same boilerplate code, basically
 we need a OpenStack Rest Client Framework which is flexible enough
 to address all the needs in an abstract domain agnostic manner. I
 would assume that combining them would be an additional
 organizational burden that every stakeholder would have to deal
 with.

 Keeping them separate is awesome for *us* but really, really, really sucks
 for users trying to use the system.

 I agree. Keeping them separate trades user usability for developer usability,
 I think user usability is a better thing to strive for.
 100% agree with this. In order for OpenStack to be its most successful, I
 believe firmly that a focus on end-users and deployers needs to take the
 forefront. That means making OpenStack clouds as easy to consume/leverage as
 possible for users and tool builders, and simplifying/streamlining as much
 as possible.

 I think that a single, common client project, based upon package namespaces,
 with a unified, cohesive feel is a big step in this direction.

 --
 Jonathan LaCour



Re: [openstack-dev] a common client library

2014-01-18 Thread Jesse Noller

On Jan 18, 2014, at 12:00 AM, Jamie Lennox jamielen...@redhat.com wrote:

 I can't see any reason that all of these situations can't be met. 
 
 We can finally take the openstack pypi namespace, move keystoneclient -> 
 openstack.keystone and similar for the other projects. Have them all based 
 upon openstack.base and probably an openstack.transport for transport.
 
 For the all-in-one users we can then just have openstack.client which depends 
 on all of the openstack.x projects. This would satisfy the requirement of 
 keeping projects separate, but having the one entry point for newer users. 
 Similar to the OSC project (which could actually rely on the new all-in-one).
 
 This would also satisfy a lot of the clients which, I know, are looking to 
 move to a version 2 and break compatibility with some of the crap from the 
 early days.
 
 I think what is most important here is deciding what we want from our clients 
 and discussing a common base that we are happy to support - not just renaming 
 the existing ones.
 
 (I don't buy the problem with large numbers of dependencies; if you have a 
 meta-package you just have one line in requirements and pip will figure the 
 rest out.)

You’re assuming:

1: Pip works when installing the entire dependency graph (it often doesn’t)
2: For some of these requirements, the user has a compiler installed (they 
don’t)
3: Installing 1 “meta package” that installs N+K dependencies makes end user 
consumers happy (it doesn’t)
4: All of these dependencies make shipping a single binary deployment easier 
(it doesn’t)
5: Installing and using all of these things makes using openstack within my 
code conceptually simpler (it doesn’t)

We can start with *not* renaming the sub-clients (meaning collapsing them into 
the singular namespace); but the problem is that every one of those 
sub-dependencies is a potential liability to someone using this single client. 

If we could target only fedora and rely on yum & rpm, I’d agree with you 
- but for python application dependencies across multiple OSes and developers 
doing ci/cd using these systems I can’t. I also don’t want users to stumble into 
the nuanced vagaries of the sub-clients when writing application code; writing 
glue code to bind them all together does not work very well (we know this from 
experience).


 
 Jamie
 
 - Original Message -
 From: Jonathan LaCour jonathan-li...@cleverdevil.org
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Saturday, 18 January, 2014 4:00:58 AM
 Subject: Re: [openstack-dev] a common client library
 
 On Thu, Jan 16, 2014 at 1:23 PM, Donald Stufft  don...@stufft.io  wrote:
 
 
 
 
 On Jan 16, 2014, at 4:06 PM, Jesse Noller  jesse.nol...@rackspace.com 
 wrote:
 
 
 
 
 
 On Jan 16, 2014, at 2:22 PM, Renat Akhmerov  rakhme...@mirantis.com  wrote:
 
 
 
 
 Since it’s pretty easy to get lost among all the opinions I’d like to
 clarify/ask a couple of things:
 
 
 
* Keeping all the clients physically separate/combining them in to a
single library. Two things here:
* In case of combining them, what exact project are we considering?
If this list is limited to core projects like nova and keystone what
policy could we have for other projects to join this list?
(Incubation, graduation, something else?)
* In terms of granularity and easiness of development I’m for keeping
them separate but have them use the same boilerplate code, basically
we need a OpenStack Rest Client Framework which is flexible enough
to address all the needs in an abstract domain agnostic manner. I
would assume that combining them would be an additional
organizational burden that every stakeholder would have to deal
with.
 
 Keeping them separate is awesome for *us* but really, really, really sucks
 for users trying to use the system.
 
 I agree. Keeping them separate trades user usability for developer usability,
 I think user usability is a better thing to strive for.
 100% agree with this. In order for OpenStack to be its most successful, I
 believe firmly that a focus on end-users and deployers needs to take the
 forefront. That means making OpenStack clouds as easy to consume/leverage as
 possible for users and tool builders, and simplifying/streamlining as much
 as possible.
 
 I think that a single, common client project, based upon package namespaces,
 with a unified, cohesive feel is a big step in this direction.
 
 --
 Jonathan LaCour
 
 

Re: [openstack-dev] a common client library

2014-01-18 Thread Sean Dague
On 01/18/2014 01:06 AM, Robert Collins wrote:
 On 17 January 2014 09:22, Renat Akhmerov rakhme...@mirantis.com wrote:
 Since it’s pretty easy to get lost among all the opinions I’d like to
 clarify/ask a couple of things:

 Keeping all the clients physically separate/combining them in to a single
 library. Two things here:

 In case of combining them, what exact project are we considering? If this
 list is limited to core projects like nova and keystone what policy could we
 have for other projects to join this list? (Incubation, graduation,
 something else?)
 In terms of granularity and easiness of development I’m for keeping them
 separate but have them use the same boilerplate code, basically we need a
 OpenStack Rest Client Framework which is flexible enough to address all the
 needs in an abstract domain agnostic manner. I would assume that combining
 them would be an additional organizational burden that every stakeholder
 would have to deal with.

 Has anyone ever considered an idea of generating a fully functional REST
 client automatically based on an API specification (WADL could be used for
 that)? Not sure how convenient it would be, it really depends on a
 particular implementation, but as an idea it could be at least thought of.
 Sounds a little bit crazy though, I recognize it :).
 
 Launchpadlib which builds on wadllib did *exactly* that. It worked
 fairly well with the one caveat that it fell into the ORM trap - just
 in time lookups for everything with crippling roundtrips.

-1 if the answer to anything is WADL. It's terrible, and an abandoned
Sun RFC at this point. We've made some real progress in getting JSON
Schema in place in Nova (for validation; it's incremental steps, but
good ones), which I think is much more fruitful.
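
As a rough illustration of that validation style, a sketch using the python 
jsonschema library (the schema here is made up, not one of Nova's):

    import jsonschema

    # Hypothetical schema for a server-create request body.
    schema = {
        'type': 'object',
        'properties': {
            'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
            'imageRef': {'type': 'string'},
            'flavorRef': {'type': 'string'},
        },
        'required': ['name', 'flavorRef'],
        'additionalProperties': False,
    }

    body = {'name': 'demo', 'flavorRef': '42'}
    jsonschema.validate(body, schema)  # raises ValidationError on bad input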

I also think auto-generated clients have a lot of challenges, in the same
way that full-javascript pages in browsers have. If you screw up in a
subtle way you can completely prevent your clients from connecting to
your server (or block them from using bits of a lesser-used function
path because a minor bit of schema was fat-fingered). So without a ton
of additional verification on that path, I wouldn't want that anywhere
near openstack. Especially with vendor extensions, which mean that
enabling a network driver might break all your clients.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Diesel] Proposal for new project

2014-01-18 Thread Adrian Otto
Rob,

Please contact me; you have completely misunderstood the Solum project.

Thanks,

Adrian

On Jan 17, 2014, at 9:31 PM, Raymond, Rob rob.raym...@hp.com
 wrote:

 
 Hi Raj
 
 As I see it, Solum is a set of utilities aimed at developers using
 OpenStack clouds, but it will not be part of OpenStack proper, while
 Diesel is meant to be a service provided by an OpenStack cloud (at some
 point becoming part of OpenStack itself). It defines a contract and a
 division of responsibility between developer and cloud.
 
 Rob
 
  Original message 
 From: Rajesh Ramchandani
 Date:01/17/2014 8:04 PM (GMT-08:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: openstack-dev at lists.openstack.org
 Subject: Re: [openstack-dev] [Diesel] Proposal for new project
 
 Hi Rob - there seems to be overlap with project Solum. Can you please outline
 the high-level differences between Diesel and Solum?
 
 


Re: [openstack-dev] a common client library

2014-01-18 Thread Donald Stufft

On Jan 18, 2014, at 12:58 AM, Robert Collins robe...@robertcollins.net wrote:

 Out of interest - what's the overhead of running TLS compression
 against compressed data? Is it really noticeable?

The overhead doesn’t really matter much, as you want TLS
compression disabled because of CRIME anyway. Most Linux
distros and such ship with it disabled by default now, IIRC.
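
For anyone wanting to be explicit about it in Python code, a minimal sketch 
(assuming a Python/OpenSSL recent enough to expose the flag):

    import ssl

    # Build a TLS context with compression explicitly disabled,
    # mitigating CRIME-style attacks.
    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    ctx.options |= ssl.OP_NO_COMPRESSION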

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [openstack-dev] a common client library

2014-01-18 Thread Donald Stufft

On Jan 18, 2014, at 9:58 AM, Jesse Noller jesse.nol...@rackspace.com wrote:

 
 On Jan 18, 2014, at 12:00 AM, Jamie Lennox jamielen...@redhat.com wrote:
 
 I can't see any reason that all of these situations can't be met. 
 
 We can finally take the openstack pypi namespace, move keystoneclient -> 
 openstack.keystone and similar for the other projects. Have them all based 
 upon openstack.base and probably an openstack.transport for transport.
 
 For the all-in-one users we can then just have openstack.client which 
 depends on all of the openstack.x projects. This would satisfy the 
 requirement of keeping projects separate, but having the one entry point for 
 newer users. Similar to the OSC project (which could actually rely on the 
 new all-in-one).
 
 This would also satisfy a lot of the clients which, I know, are looking to 
 move to a version 2 and break compatibility with some of the crap from the 
 early days.
 
 I think what is most important here is deciding what we want from our 
 clients and discussing a common base that we are happy to support - not just 
 renaming the existing ones.
 
 (I don't buy the problem with large numbers of dependencies; if you have a 
 meta-package you just have one line in requirements and pip will figure the 
 rest out.)
 
 You’re assuming:
 
 1: Pip works when installing the entire dependency graph (it often doesn’t)
 2: For some of these requirements, the user has a compiler installed (they 
 don’t)
 3: Installing 1 “meta package” that installs N+K dependencies makes end user 
 consumers happy (it doesn’t)
 4: All of these dependencies make shipping a single binary deployment easier 
 (it doesn’t)
 5: Installing and using all of these things makes using openstack within my 
 code conceptually simpler (it doesn’t)
 
 We can start with *not* renaming the sub-clients (meaning collapsing them 
 into the singular namespace); but the problem is that every one of those 
 sub-dependencies is a potential liability to someone using this single client. 
 
 If we could target only fedora and rely on yum & rpm, I’d agree with 
 you - but for python application dependencies across multiple OSes and 
 developers doing ci/cd using these systems I can’t. I also don’t want users to 
 stumble into the nuanced vagaries of the sub-clients when writing application 
 code; writing glue code to bind them all together does not work very well (we 
 know this from experience).
 

As much as I would like to say (with my pip developer and PyPI admin hat on) 
that depending on 22+ libraries in a single client will be a seamless 
experience for end users, I can’t in good faith say that it would be yet. We’re 
working on trying to make that true, but honestly each dependency in a graph 
does introduce risk.

As of right now there is no real dependency solver in pip, so if someone 
depends on the openstack client themselves, and then depends on something else 
that depends on one of the sub-clients as well, and those version specs don’t 
match up, there is a very good chance that the end user will run into a very 
confusing message at runtime. OpenStack itself has run into this problem, and it 
was a big motivator for the global requirements project.

Additionally, it’s not uncommon for users to have policy-driven requirements 
that force them to get every dependency they pull in checked for compliance 
(license, security, etc.). Having a 22+ node dependency graph makes this issue 
much harder in general.

I also believe that, in general, it’s asking for user confusion. It’s much 
simpler to document a single way of doing things; splitting the clients up and 
then wrapping them with a single “openstack” client means that you have at 
least two ways of doing something: the direct “use just a single library” 
approach and the “use the openstack wrapper” approach. Don’t underestimate the 
confusion this will cause end users.

Keeping them all under one project will make it far easier to have a cohesive 
API amongst the various services; it will reduce duplication of effort and make 
it easier to track security updates, and I believe it will deliver a wholly 
superior end-user experience.

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [openstack-dev] [Diesel] Proposal for new project

2014-01-18 Thread Jay Pipes
On Sat, 2014-01-18 at 03:31 +, Raymond, Rob wrote:
 I would like to gauge interest in a new project named Diesel.
 
 https://wiki.openstack.org/wiki/Diesel
 
 If you are already familiar with Savanna, the best way to describe it is:
 Savanna is to map reduce applications as Diesel is to web applications.
 
 The mission of Diesel is to allow OpenStack clouds to run applications.
 The cloud administrator can control the non functional aspects, freeing up
 the application developer to focus on their application and its
 functionality.
 
 In the spirit of Google App Engine, Heroku, Engine Yard and others, Diesel
 runs web applications in the cloud. It can be used by cloud administrators
 to define the application types that they support. They are also
 responsible for defining through Diesel how these applications run on top
 of their cloud infrastructure. Diesel will control the availability and
 scalability of the web application deployment.

Hi Rob,

So I've read through the above wiki page a couple of times, and I'm really
having trouble understanding how Diesel differs from Solum.

In the wiki page, you mention:

Solum - Solum is focused on the development lifecycle for the
application. The application may be one that Diesel can run.

Could you elaborate on how you envision Diesel differing from Solum in
its basic intent? Solum deploys applications into a cloud. Diesel is
intended to run applications in clouds, but I'm not sure there is
a difference between deploying an application into a cloud and running
one.

Perhaps I'm just missing something, though, so perhaps you might
elaborate?

Best,
-jay




Re: [openstack-dev] [Neutron] Relationship between Neutron LBaaS and Libra

2014-01-18 Thread Jay Pipes
On Sat, 2014-01-18 at 09:06 +, Andrew Hutchings wrote:
 
 On 17 Jan 2014, at 19:53, Jay Pipes jaypi...@gmail.com wrote:
 
  On Fri, 2014-01-17 at 17:03 +, Andrew Hutchings wrote:
   On 17 Jan 2014, at 16:10, Jay Pipes jaypi...@gmail.com wrote:
   
On Fri, 2014-01-17 at 14:34 +0100, Thomas Herve wrote:
 Hi all,
 
 I've been looking at Neutron default LBaaS provider using
 haproxy, and while it's working nicely, it seems to have
 several shortcomings in terms of scalability and high
 availability. The Libra project seems to offer a more robust
 alternative, at least for scaling. The haproxy implementation
 in Neutron seems to continue to evolve (like with
 https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy), but 
 I'm wondering why we can't leverage Libra. The APIs are a bit 
 different, but the goals look very similar, and there is a waste of 
 effort with 2 different implementations. Maybe we could see a Libra 
 driver for Neutron LBaaS for example?

Yep, it's a completely duplicative and wasteful effort.

It would be great for Libra developers to contribute to Neutron
LBaaS.
   
   Hi Jay and Thomas,
   
   I am the outgoing technical lead of Libra for HP.  But will reply
   whilst the new technical lead (Marc Pilon) gets subscribed to
   this.
  
  :( I had no idea, Andrew!
 
 Not a big deal, I have some cool stuff open source stuff in HP I’m
 moving on to which I’m excited about and can help Openstack in other
 ways.  You should hear about that in a few months time.

Cool. I look forward to that -- around summit time, eh?

  So, please don't take this the wrong way... but does anyone other
  than HP run Libra? Likewise, does anyone other than Rackspace run
  Atlas?
 
 No one else runs it in production that I know about, but there are
 several trying it out and appearing to want to contribute.

OK.

 I find it a little difficult to comprehend why, if Libra preceded work
  on Neutron LBaaS, it wasn't used as the basis of Neutron's
  LBaaS work. I can understand this for Atlas, since it's Java, but
  Libra is Python code... so it's even more confusing to me.
  
  Granted, I don't know the history of Neutron LBaaS, but it just
  seems to me that this particular area (LBaaS) has such blatantly
  overlapping codebases with separate contributor teams. Just baffling
  really.
  
  Any background or history you can give me (however opinionated!)
  would be very much appreciated :)
 
 I don’t know if we pre-dated the planning phase, judging by a later
 email on this thread our planning phases were at the same time.  I
 wasn’t a big part of the planning phase so it is difficult to comment
 there.  But we had something we could use before we were in a place
 where we could even try out Neutron.  Also to start with our API was
 Java based (a Python replacement came later).

K, good to understand.

   After the 5.x release of Libra has been stabilised we will be
   working towards integration with Neutron.  It is a very important
   thing on our roadmap and we are already working with 2 other large
   companies in Openstack to figure that piece out.
  
  Which large OpenStack companies? Are these companies currently
  deploying Libra?
 
 I’m not sure what is public and what isn’t so I won’t name names.
 They are currently talking to us about the best ways of working with
 us.  Both companies want to use Libra in different and interesting
 ways.  They are not currently deploying it but are both playing with
 it.  It is early days, they both approached us just before the
 Christmas break.

Did these companies say they had looked at Neutron LBaaS and found its
design or implementation lacking in some way?

 We know that working with the wider community with Libra has not been
 our strong point.  It is something I want the team to rectify and they
 are showing signs of making that happen.  People that are interested
 in Libra are welcome to hang out in the #stackforge-libra IRC channel
 to talk to us.

Cutting to the chase... have there been any discussions about the
long-term direction of Libra and Neutron LBaaS? I see little point in
having two OpenStack endpoints that implement the same basic load
balancing functionality.

Is the main problem in aligning Libra and Neutron the fact that Libra is
a wholly-separate endpoint/project and Neutron LBaaS is part of the
Neutron project?

Best,
-jay 





Re: [openstack-dev] [keystone] domain admin role query

2014-01-18 Thread Florent Flament
Hi,

Following up on this thread (although late), I have detailed the steps
for properly setting up Keystone with multiple domains:
http://www.florentflament.com/blog/setting-keystone-v3-domains.html

I hope it will be useful for people wanting to play with the
Identity v3 API and domains.
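
For readers following along, the key step discussed below is requesting a
domain-scoped token. A v3 token request body (POST /v3/auth/tokens) looks
roughly like this -- the names here are placeholders; see the Identity v3
spec linked further down for the authoritative examples:

    {"auth": {
        "identity": {
            "methods": ["password"],
            "password": {"user": {"name": "cloud_admin",
                                  "domain": {"name": "example_domain"},
                                  "password": "secret"}}},
        "scope": {"domain": {"name": "example_domain"}}}}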

Florent Flament

On Wed, 2013-12-18 at 12:10 -0800, Ravi Chunduru wrote:
 Thanks Dolph,
  It worked now. I specified domain id in the scope.
 
 
 -Ravi.
 
 
 On Wed, Dec 18, 2013 at 12:05 PM, Ravi Chunduru ravi...@gmail.com
 wrote:
 Hi Dolph,
   I don't have a project yet to use in the scope. The intention
 is to get a token using domain admin credentials and create a
 project using it.
 
 
 Thanks,
 -Ravi.
 
 
 On Wed, Dec 18, 2013 at 11:39 AM, Dolph Mathews
 dolph.math...@gmail.com wrote:
 
 On Wed, Dec 18, 2013 at 12:48 PM, Ravi Chunduru
 ravi...@gmail.com wrote:
 Thanks all for the information.
 I now have v3 policies in place; the issue is
 that as a domain admin I could not create a
 project in the domain. I get a 403 Unauthorized
 status.
 
 
 I see that when I, as a 'domain admin', request a
 token, the response does not have any roles.
 In the token request, I couldn't specify the
 project, as we are about to create the
 project in the next step.
 
 
 Specify a domain as the scope to obtain domain-level
 authorization in the resulting token.
 
 
 See the third example under Scope:
 
 
   
 https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#scope-scope
  
 
 
 Here is the complete request/response of all
 the steps done.
 https://gist.github.com/kumarcv/8015275
 
 
 
 I am assuming it's a bug. Please let me know
 your opinions.
 
 
 Thanks,
 -Ravi.
 
 
 
 
 
 
 On Thu, Dec 12, 2013 at 3:00 PM, Henry Nash
 hen...@linux.vnet.ibm.com wrote:
 Hi
 
 So the idea wasn't that you create a
 domain with the id of
 'domain_admin_id', but rather that you
 create the domain that you plan to use
 for your admin domain, and then paste
 its (auto-generated) domain_id into
 the policy file.
 
 Henry
 On 12 Dec 2013, at 03:11, Paul
 Belanger
 paul.belan...@polybeacon.com wrote:
 
  On 13-12-11 11:18 AM, Lyle, David
 wrote:
  +1 on moving the domain admin role
 rules to the default policy.json
 
  -David Lyle
 
  From: Dolph Mathews
 [mailto:dolph.math...@gmail.com]
  Sent: Wednesday, December 11, 2013
 9:04 AM
  To: OpenStack Development Mailing
 List (not for usage questions)
  Subject: Re: [openstack-dev]
 [keystone] domain admin role query
 
 
  On Tue, Dec 10, 2013 at 10:49 PM,
 Jamie Lennox jamielen...@redhat.com
 wrote:
  Using the default policies it will
 simply check for the admin role and
  

Re: [openstack-dev] [Neutron] Relationship between Neutron LBaaS and Libra

2014-01-18 Thread Jay Pipes
On Fri, 2014-01-17 at 13:12 -0800, Alex Freedland wrote:
 Andrew, Jay and all,
 
 Thank you for bringing this topic up. Incidentally, just a month ago
 at OpenStack Israel I spoke to Monty and other HP folks about getting
 the Libra initiatives integrated into LBaaS.  I am happy that this
 discussion is now happening on the mailing list. 
 
 I remember the history of how this got started. Mirantis was working
 with a number of customers (GAP, PayPal, and a few others) who were
  asking for an LBaaS feature. At that time, Atlas was the default choice
 in the community, but its Java-based implementation did not agree with
 the rest of OpenStack. 
 
  There was no Libra anywhere in the OpenStack sandbox, so we
  proposed a set of blueprints and Eugene Nikonorov and the team started
  moving ahead with the implementation. Even before the code was
  accepted into Quantum, a number of customers started to use it and a
  number of vendors (F5, Radware, etc.) joined the community to add
  their own plugins. 
 
 Consequently, the decision was made to add LBaaS to Quantum (aka
 Neutron).

Thanks for the above history, Alex, appreciated.

 We would love to see the Libra developers join the Neutron team and
 collaborate on the ways to bring the two initiatives together.

Agreed. However, I'm afraid progress towards alignment won't happen
until there's a concerted effort between the two groups to resolve any
architectural differences and create a set of blueprints that outlines
the work required to bring the best of both worlds together.

Best,
-jay 





Re: [openstack-dev] [ironic] Disk Eraser

2014-01-18 Thread Pádraig Brady
On 01/15/2014 02:42 PM, Alexei Kornienko wrote:
 If you are working on a Linux system, the following can help you:
 
 dd if=/dev/urandom of=/dev/sda bs=4k

That's going to be slow.
The shred tool should already be installed on most Linux systems;
it uses an internal PRNG to write either zeros or random data quickly.
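
For example (illustrative; check your shred man page for the defaults):

  # three random passes, then a final pass of zeros, with progress output
  shred -v -n 3 -z /dev/sda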

The cinder LVM driver has support for this already as an option,
by setting volume_clear=shred. That defaults to writing a random
pattern _3 times_.

Depending on your requirements, writing zeros might suffice,
in which case the default volume_clear=zero functionality
which uses dd if=/dev/zero ... could be used.
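
For reference, a minimal cinder.conf sketch of the options mentioned above
(option names as documented for the LVM driver at the time; defaults may
differ between releases):

  [DEFAULT]
  # wipe deleted volumes: none, zero, or shred
  volume_clear = shred
  # how much of the volume to wipe, in MiB (0 = everything)
  volume_clear_size = 0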

thanks,
Pádraig.




Re: [openstack-dev] a common client library

2014-01-18 Thread Robert Collins
On 19 January 2014 04:48, Sean Dague s...@dague.net wrote:
 On 01/18/2014 01:06 AM, Robert Collins wrote:

 Launchpadlib which builds on wadllib did *exactly* that. It worked
 fairly well with the one caveat that it fell into the ORM trap - just
 in time lookups for everything with crippling roundtrips.

 -1 if the answer to anything is WADL. It's terrible, and an abandoned
 sun rfc at this point. We've got some real progress in getting JSON
 schema in place in Nova (for validation, but it's incremental steps, and
 good ones), which I think is much more fruitful.

 I also think auto-generated clients have a lot of challenges in the same
 way that full javascript pages in browsers have. If you screw up in a
 subtle way you can just completely disable your clients from connecting
 to your server entirely (or block them from using bits of the lesser-used
 function paths because a minor bit of schema was fat-fingered). So
 without a ton of additional verification on that path, I wouldn't want
 that anywhere near openstack. Especially with vendor extensions, which
 mean that enabling a network driver might break all your clients.

To be clear: I'm not advocating this approach. Just answering the
question 'has it been tried', with 'yes, and this is what was wrong'.

The actual code generation and execution worked very well, and most
features added to the server-side APIs were available immediately in
the client with no upgrades. However, any new /type/ of feature
required changes to the compiler.

Personally, I think hand-crafted clients have a much better feel to
them - idiomatic to the language they are in, easier to predict, and
much easier to understand.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Naming of a deployment

2014-01-18 Thread Yuriy Taraday
Hi all.

I might be a little out of context, but isn't that thing deployed on some
kind of cloud?


 * cluster -- is too generic, but also has connotations in HPC and
 various other technologies (databases, MQs, etc).

 * installation -- reminds me of a piece of performance art ;)

 * instance -- too much cross-terminology with server instance in Nova
 and Ironic


In which case I'd suggest borrowing another option from TripleO:
overcloud.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Relationship between Neutron LBaaS and Libra

2014-01-18 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-01-18 11:02:12 -0800:
 Cutting to the chase... have there been any discussions about the
 long-term direction of Libra and Neutron LBaaS? I see little point in
 having two OpenStack endpoints that implement the same basic load
 balancing functionality.
 
 Is the main problem in aligning Libra and Neutron the fact that Libra is
 a wholly-separate endpoint/project and Neutron LBaaS is part of the
 Neutron project?

In the past I've been told the only reason Libra/Atlas persist is that
Neutron wasn't ready. To me that means 'let's go make Neutron better'.
But then, I haven't been given any crazy short lead times for rolling
out LBaaS.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] Undefined attributes in WSME

2014-01-18 Thread Yuriy Taraday
On Tue, Jan 14, 2014 at 6:09 PM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:

 On Mon, Jan 13, 2014 at 9:36 PM, Jamie Lennox jamielen...@redhat.com wrote:

 On Mon, 2014-01-13 at 10:05 -0500, Doug Hellmann wrote:
  What requirement(s) led to keystone supporting this feature?

 I've got no idea where the requirement came from; however, it is
 something that is supported now, and so not something we can back out of.


 If it's truly a requirement, we can look into how to make that work. The
 data is obviously present in the request, so we would just need to preserve
 it.


We've seen a use case for arbitrary attributes in Keystone objects. A cloud
administrator might want to store some metadata along with a user object,
for example a customer name/id and a couple of additional fields for
contact information. The same might apply to projects and domains.

So this is a very nice feature that should be kept around. It might be
wrapped in some way (like an explicit unchecked metadata attribute) in a
new API version though.
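
As a rough illustration (the class and attribute names here are
hypothetical, and this assumes WSME's DictType), such a wrapped attribute
might look like:

    from wsme import types as wtypes


    class User(wtypes.Base):
        id = wtypes.text
        name = wtypes.text
        # All deployer-specific metadata (customer id, contact info, ...)
        # travels in one explicit, unchecked dict attribute instead of
        # arbitrary undeclared top-level attributes.
        extra = wtypes.DictType(wtypes.text, wtypes.text)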
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [wsme] Undefined attributes in WSME

2014-01-18 Thread Morgan Fainberg
Yes, this feature is used in real deployments just as Yuriy described. I
really want to avoid a new API version since we're just now getting solidly
into V3 being used more extensively. Is it unreasonable to have wsme allow
extra values in some manner? (I think that is the crux: is it something
that can even be expected?)

--Morgan

On Saturday, January 18, 2014, Yuriy Taraday yorik@gmail.com wrote:


 On Tue, Jan 14, 2014 at 6:09 PM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:

 On Mon, Jan 13, 2014 at 9:36 PM, Jamie Lennox jamielen...@redhat.com wrote:

 On Mon, 2014-01-13 at 10:05 -0500, Doug Hellmann wrote:
  What requirement(s) led to keystone supporting this feature?

 I've got no idea where the requirement came from; however, it is
 something that is supported now, and so not something we can back out of.


 If it's truly a requirement, we can look into how to make that work. The
 data is obviously present in the request, so we would just need to preserve
 it.


 We've seen a use case for arbitrary attributes in Keystone objects. A cloud
 administrator might want to store some metadata along with a user object,
 for example a customer name/id and a couple of additional fields for
 contact information. The same might apply to projects and domains.

 So this is a very nice feature that should be kept around. It might be
 wrapped in some way (like an explicit unchecked metadata attribute) in a
 new API version though.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] paramiko requirement of >= 1.9.0?

2014-01-18 Thread Matthew Farrellee

jon,

please confirm a suspicion of mine.

the neutron-private-net-provisioning bp impl added a sock= parameter to 
the ssh.connect call in remote.py 
(https://github.com/openstack/savanna/commit/9afb5f60).


we currently require paramiko >= 1.8.0, but it looks like the sock param 
was only added in paramiko 1.9.0 
(https://github.com/paramiko/paramiko/commit/31ea4f0734a086f2345aaea57fd6fc1c3ea4a87e)
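
for context, a minimal sketch of the kind of connect() call that needs the
sock parameter, e.g. tunnelling to an instance on a private net through a
gateway (the host names, user and key file here are made up):

    import paramiko

    # ProxyCommand hands connect() a socket-like object tunnelled through
    # a gateway host; the sock= keyword only exists in paramiko >= 1.9.0.
    proxy = paramiko.ProxyCommand('ssh gateway-host nc 10.0.0.5 22')

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect('10.0.0.5', username='savanna',
                key_filename='instance_key.pem', sock=proxy)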


do we need paramiko >= 1.9.0 as our requirement?

also, what version are you using in your installation?

best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-18 Thread Yair Fried
MT: Is your issue here that it's just called basic ops and you don't think
that's reflective of what is being tested in that file anymore?

No.
My issue is that the current scenario is, in fact, at least 2 separate 
scenarios:
1. original basic_ops
2. reassociate_floating_ip
to which I would like to add ( https://review.openstack.org/#/c/55146/ ):
3. check external/internal connectivity
4. update dns

While #2 includes #1 as part of its setup, its failing shouldn't prevent #1 
from passing. The obvious solution would be to create separate modules for each 
test case, but since they all share the same setup sequence, IMO, they should 
at least share code.
Notice that in my patch, #2 still includes #1.

Actually, the more network scenarios we get, the more we will need to do 
something in that direction, since most of the scenarios will require the setup 
of a VM with a floating IP to ssh into.

So either we do this, or we move all of this code to scenario.manager, which is 
also becoming very complicated.

Yair

- Original Message -
From: Matthew Treinish mtrein...@kortar.org
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Friday, January 17, 2014 6:17:55 AM
Subject: Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down 
NetworkBasicOps to smaller test cases

On Wed, Jan 15, 2014 at 11:20:22AM -0500, Yair Fried wrote:
 Hi Guys
 As Maru pointed out - NetworkBasicOps scenario has grown out of proportion 
 and is no longer basic ops.

Is your issue here that it's just called basic ops and you don't think that's
reflective of what is being tested in that file anymore? If that's the case
then just change the name.

 
 So, I started breaking it down into smaller test cases that can fail 
 independently.

I'm not convinced this is needed. Some scenarios are going to be very involved
and complex. Each scenario test is designed to simulate real use cases in the
cloud, so obviously some of them will be fairly large. The solution for making
debugging easier in these cases is to make sure that any failures have clear
messages. Also make sure there are plenty of signposting debug log messages so
when something goes wrong you know what state the test was in. 

If you split things up into smaller individual tests you'll most likely end up
making tests that really aren't scenario tests. They'd be closer to API
tests, just using the official clients, which really shouldn't be in the
scenario tests.

 
 Those test cases share the same setup and tear-down code:
 1. create network resources (and verify them)
 2. create VM with floating IP.
 
 I see 2 options to manage these resources:
 1. Completely isolated - resources are created and cleaned using setUp() and 
 tearDown() methods [1]. Moving cleanup to tearDown revealed this bug [2]. 
 Apparently the previous way (with tearDownClass) wasn't as fast. This has 
 the side effect of having expensive resources (i.e. VMs and floating IPs) 
 created and discarded multiple times even though they are unchanged.
 
 2. Shared Resources - Using the idea of (or actually using) Fixtures - use 
 the same resources unless a test case fails, in which case resources are 
 deleted and re-created by the next test case [3].

If you're doing this and splitting things into smaller tests then it has to be
option 1. Scenarios have to be isolated; if there are resources shared between
scenario tests, that really is only one scenario and it shouldn't be split. In
fact I've been working on a change that fixes the scenario test tearDowns and
has the side effect of enforcing this policy.
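
For illustration, a minimal sketch of option 1, where every test builds and
registers cleanup for its own resources (the _create/_delete helpers are
hypothetical):

    import testtools


    class TestNetworkBasicOps(testtools.TestCase):

        def setUp(self):
            super(TestNetworkBasicOps, self).setUp()
            # Each test gets its own resources; cleanups run in reverse
            # order even if the test fails part way through.
            self.network = self._create_network()
            self.addCleanup(self._delete_network, self.network)
            self.server = self._boot_server(self.network)
            self.addCleanup(self._delete_server, self.server)
            self.fip = self._associate_floating_ip(self.server)
            self.addCleanup(self._release_floating_ip, self.fip)

        def test_basic_ops(self):
            self._check_ssh_connectivity(self.fip)

        def test_reassociate_floating_ip(self):
            self._reassociate(self.fip)
            self._check_ssh_connectivity(self.fip)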

Also just for the record we've tried doing option 2 in the past, for example
there used to be a tenant-reuse config option. The problem with doing that was
that it actually tends to cause more non-deterministic failures or to add a not
insignificant wait time to ensure the state is clean when you start the next
test, which is why we ended up pulling it out of tree. What ends up happening
is that you get leftover state from previous tests and the second test ends up
failing because things aren't in the clean state that the test case assumes. If
you look at some of the oneserver files in the API tests, that is the only
place we still do this in tempest, and we've had many issues making that work
reliably. Those tests are in a relatively good place now but those are much
simpler tests. Also between each test, setUp has to check and ensure that the
shared server is in the proper state. If it's not then the shared server has to
be rebuilt. This methodology would become far more involved for the scenario
tests where you have to manage more than one shared resource.
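
A rough sketch (hypothetical names, fragment only) of the state check that
option 2 forces into every setUp():

    def setUp(self):
        super(SharedServerScenario, self).setUp()
        # Leftover state from a previous test invalidates the shared
        # server, so verify it and rebuild it before this test runs.
        if not self._server_is_clean(self.shared_server):
            self._rebuild_shared_server()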

 
 I would like to hear your opinions, and know if anyone has any thoughts or 
 ideas on which direction is best, and why.
 
 Once this is completed, we can move on to other scenarios as well
 
 Regards
 Yair
 
 [1] fully isolated - https://review.openstack.org/#/c/66879/
 [2] 

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-18 Thread Irena Berezovsky
Hi Robert, Yunhong,
Although the network XML solution (option 1) is very elegant, it has one major 
disadvantage. As Robert mentioned, the disadvantage of the network XML is the 
inability to know which SR-IOV PCI device was actually allocated. When neutron 
is responsible for setting networking configuration, managing admin status, and 
setting security groups, it should be able to identify the SR-IOV PCI device to 
apply configuration. Within the current libvirt network XML implementation, 
that does not seem possible.
Between options (2) and (3), I do not have any preference; it should be as 
simple as possible.
Option (3) that I raised can be achieved by renaming the network interface of 
the Virtual Function via 'ip link set ... name'. The interface's logical name 
can be based on the neutron port UUID. This will allow neutron to discover 
devices, if the backend plugin requires it. When a VM is migrating, a suitable 
Virtual Function on the target node should be allocated, and then its 
corresponding network interface should be renamed to the same logical name. 
This can be done without rebooting the system. We still need to check how the 
Virtual Function's corresponding network interface can be returned to its 
original name once it is no longer used as a VM vNIC.
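
A rough sketch of that rename step (the 'prt' prefix and helper name are
assumptions, not an agreed convention):

    import subprocess

    def rename_vf_netdev(current_name, port_uuid):
        # Derive a logical name from the neutron port UUID; kernel
        # interface names are limited to 15 characters (IFNAMSIZ - 1).
        new_name = 'prt' + port_uuid.replace('-', '')[:12]
        # The link must be down while it is renamed.
        subprocess.check_call(['ip', 'link', 'set', 'dev', current_name, 'down'])
        subprocess.check_call(['ip', 'link', 'set', 'dev', current_name,
                               'name', new_name])
        subprocess.check_call(['ip', 'link', 'set', 'dev', new_name, 'up'])
        return new_name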

Regards,
Irena 

-Original Message-
From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com] 
Sent: Friday, January 17, 2014 9:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Robert, thanks for your long reply. Personally I'd prefer option 2/3 as it 
keeps Nova the only entity for PCI management.

Glad you are ok with Ian's proposal and that we have a solution for the libvirt 
network scenario in that framework.

Thanks
--jyh

 -Original Message-
 From: Robert Li (baoli) [mailto:ba...@cisco.com]
 Sent: Friday, January 17, 2014 7:08 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network 
 support
 
 Yunhong,
 
 Thank you for bringing up the live migration support. In 
 addition to the two solutions you mentioned, Irena has a different 
 solution. Let me put all of them here again:
 1. network xml/group based solution.
 In this solution, each host that supports a provider 
 net/physical net can define a SRIOV group (it's hard to avoid the term, 
 as you can see from the suggestion you made based on the PCI flavor 
 proposal). For each SRIOV group supported on a compute node, a network 
 XML will be created the first time the nova compute service is running 
 on that node.
 * nova will conduct scheduling, but not PCI device allocation
 * it's a simple and clean solution, documented in libvirt as 
 the way to support live migration with SRIOV. In addition, a network 
 xml is nicely mapped into a provider net.
 2. network xml per PCI device based solution
 This is the solution you brought up in this email, and Ian 
 mentioned this to me as well. In this solution, a network xml is 
 created when a VM is created. The network xml needs to be removed once 
 the VM is removed. This hasn't been tried out as far as I know.
 3. interface xml/interface rename based solution
 Irena brought this up. In this solution, the ethernet interface 
 name corresponding to the PCI device attached to the VM needs to be 
 renamed. One way to do so without requiring a system reboot is to change 
 the udev rules file for interface renaming, followed by a udev 
 reload.
 
 Now, with the first solution, Nova doesn't seem to have control over 
 or visibility of the PCI device allocated for the VM before the VM is 
 launched. This needs to be confirmed against the libvirt support, to see 
 if such a capability can be provided. This may be a potential drawback 
 if a neutron plugin requires detailed PCI device information for operation.
 Irena may provide more insight into this. Ideally, neutron shouldn't 
 need this information because the device configuration can be done by 
 libvirt invoking the PCI device driver.
 
 The other two solutions are similar. For example, you can view the 
 second solution as one way to rename an interface, or camouflage an 
 interface under a network name. They all require additional work 
 before the VM is created and after the VM is removed.
 
 I also agree with you that we should take a look at XenAPI on this.
 
 
 With regard to your suggestion on how to implement the first solution 
 with some predefined group attribute, I think it definitely can be 
 done. As I pointed out earlier, the PCI flavor proposal is 
 actually a generalized version of the PCI group. In other words, in 
 the PCI group proposal, we have one predefined attribute called PCI 
 group, and everything else works on top of that. In the PCI flavor 
 proposal, attributes are arbitrary. So certainly we can define a 
 particular attribute for networking, which let's temporarily 

Re: [openstack-dev] [rally] Naming of a deployment

2014-01-18 Thread Oleg Gelbukh
Yuriy, the idea is to choose something more or less general. 'Overcloud'
would be too specific for my taste. It could also create confusion for
users who want to deploy test targets with other tools, like Fuel or
Devstack.

--
Best regards,
Oleg Gelbukh


On Sun, Jan 19, 2014 at 1:17 AM, Yuriy Taraday yorik@gmail.com wrote:

 Hi all.

 I might be a little out of context, but isn't that thing deployed on some
 kind of cloud?


 * cluster -- is too generic, but also has connotations in HPC and
 various other technologies (databases, MQs, etc).

 * installation -- reminds me of a piece of performance art ;)

 * instance -- too much cross-terminology with server instance in Nova
 and Ironic


 In which case I'd suggest borrowing another option from TripleO:
 overcloud.

 --

 Kind regards, Yuriy.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev