Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-10 Thread Mike Spreitzer
Regarding Alex's question of which component does holistic infrastructure 
scheduling, I hesitate to simply answer "heat".  Heat is about 
orchestration, and infrastructure scheduling is another matter.  I have 
attempted to draw pictures to sort this out, see 
https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U 
and 
https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-GQQ1bRVgBpJdstpu0lH_TONw6g 
.  In those you will see that I identify holistic infrastructure 
scheduling as separate functionality from infrastructure orchestration 
(the main job of today's heat engine) and also separate from software 
orchestration concerns.  However, I also see a close relationship between 
holistic infrastructure scheduling and heat, as should be evident in those 
pictures too.

Alex made a remark about the needed inputs, and I agree but would like to 
expand a little on the topic.  One thing any scheduler needs is knowledge 
of the amount, structure, and capacity of the hosting thingies (I wish I 
could say "resources", but that would be confusing) onto which the 
workload is to be scheduled.  Scheduling decisions are made against 
available capacity.  I think the most practical way to determine available 
capacity is to separately track raw capacity and current (plus already 
planned!) allocations from that capacity, finally subtracting the latter 
from the former.
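
As a toy illustration of that bookkeeping (all the names here are made up,
not any service's API):

    raw_capacity = {"vcpus": 128, "memory_mb": 524288}
    current_allocations = [{"vcpus": 8, "memory_mb": 16384},
                           {"vcpus": 4, "memory_mb": 8192}]
    planned_allocations = [{"vcpus": 16, "memory_mb": 32768}]

    def available_capacity(raw, allocations):
        # subtract the union of current and planned allocations from raw
        return dict((k, raw[k] - sum(a.get(k, 0) for a in allocations))
                    for k in raw)

    print(available_capacity(raw_capacity,
                             current_allocations + planned_allocations))
    # {'vcpus': 100, 'memory_mb': 466944}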

In Nova, for example, sensing raw capacity is handled by the various 
nova-compute agents reporting that information.  I think a holistic 
infrastructure scheduler should get that information from the various 
individual services (Nova, Cinder, etc) that it is concerned with 
(presumably they have it anyway).

A holistic infrastructure scheduler can keep track of the allocations it 
has planned (regardless of whether they have been executed yet).  However, 
there may also be allocations that did not originate in the holistic 
infrastructure scheduler.  The individual underlying services should be 
able to report (to the holistic infrastructure scheduler, even if lowly 
users are not so authorized) all the allocations currently in effect.  An 
accurate union of the current and planned allocations is what we want to 
subtract from raw capacity to get available capacity.

If there is a long delay between planning and executing an allocation, 
there can be nasty surprises from competitors --- if there are any 
competitors.  Actually, there can be nasty surprises anyway.  Any 
scheduler should be prepared for nasty surprises, and react by some 
sensible retrying.  If nasty surprises are rare, we are pretty much done. 
If nasty surprises due to the presence of competing managers are common, 
we may be able to combat the problem by changing the long delay to a short 
one --- by moving the allocation execution earlier into a stage that is 
only about locking in allocations, leaving all the other work involved in 
creating virtual resources to later (perhaps Climate will be good for 
this).  If the delay between planning and executing an allocation is short 
and there are many nasty surprises due to competing managers, then you 
have too much competition between managers --- don't do that.
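
A toy sketch of that retry loop (every name here is illustrative, with
stand-ins for the real planning and execution steps):

    import random

    def available_capacity():
        # stand-in for querying the underlying services
        return {"vcpus": random.randint(0, 16)}

    def make_plan(request, capacity):
        return request if request["vcpus"] <= capacity["vcpus"] else None

    def try_lock_allocations(plan):
        # stand-in for executing the allocation; a competing manager
        # may have taken the capacity in the meantime
        return plan is not None and random.random() > 0.3

    def schedule_with_retry(request, max_attempts=5):
        for _ in range(max_attempts):
            plan = make_plan(request, available_capacity())
            if try_lock_allocations(plan):
                return plan
            # nasty surprise: refresh our view of capacity and replan
        raise RuntimeError("too much competition between managers")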

Debo wants a simpler nova-centric story.  OK, how about the following. 
This is for the first step in the roadmap, where scheduling decisions are 
still made independently for each VM instance.  For the client/service 
interface, I think we can do this with a simple clean two-phase interface 
when traditional software orchestration is in play, a one-phase interface 
when slick new software orchestration is used.  Let me outline the 
two-phase flow.  We extend the Nova API with CRUD operations on VRTs 
(top-level groups).  For example, the CREATE operation takes a definition 
of a top-level group and all its nested groups, definitions (excepting 
stuff like userdata) of all the resources (only VM instances, for now) 
contained in those groups, all the relationships among those 
groups/resources, and all the applications of policy to those groups, 
resources, and relationships.  This is a REST-style interface; the CREATE
operation takes a definition of the thing (a top-level group and all that 
it contains) being created; the UPDATE operation takes a revised 
definition of the whole thing.  Nova records the presented information; 
the familiar stuff is stored essentially as it is today (but marked as 
being in some new sort of tentative state), and the grouping, 
relationship, and policy stuff is stored according to a model like the one 
Debo&Yathi wrote.  The CREATE operation returns a UUID for the newly 
created top-level group.  The invocation of the top-level group CRUD is a 
single operation and it is the first of the two phases.  In the second 
phase of a CREATE flow, the client creates individual resources with the 
same calls as are used today, except that each VM instance create call is 
augmented with a pointer into the policy information.  T
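
To make the shape of the first phase concrete, here is a hypothetical
CREATE payload (every field name is illustrative, not a proposed schema):

    top_level_group = {
        "name": "my-app",
        "groups": [
            {"name": "db-tier",
             "resources": [
                 {"type": "vm", "name": "db-1", "flavor": "m1.large"},
                 {"type": "vm", "name": "db-2", "flavor": "m1.large"}]}],
        "relationships": [
            {"from": "db-1", "to": "db-2", "type": "anti-collocation"}],
        "policies": [
            {"applies_to": "db-tier", "policy": "spread-across-racks"}],
    }
    # Phase 1: CREATE the whole definition; Nova stores it in a tentative
    # state and returns a UUID for the top-level group.
    # Phase 2: create each VM with today's calls, each augmented with a
    # pointer into the stored policy information.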

Re: [openstack-dev] Newbie python novaclient question

2013-10-10 Thread Alex
Yes, this method seems to look for the corresponding action, but it still
doesn't seem to be the one actually calling it.

Regards
Al



On Oct 10, 2013, at 11:07 PM, Noorul Islam K M  wrote:

> Alex  writes:
> 
>> Thank you Noorul. I looked at the review. My question is: in
>> OpenStackComputeShell.main, which line calls into v1_1/shell.py?
>> 
> 
> I would look at the get_subcommand_parser() method.
> 
> Thanks and Regards
> Noorul
> 
>> 
>> 
>> On Oct 10, 2013, at 9:03 PM, Noorul Islam K M  wrote:
>> 
>>> A L  writes:
>>> 
 Dear Openstack Dev Gurus,
 
 I am trying to understand the python novaclient code. Can someone please
 point me to where, in the OpenStackComputeShell class in shell.py, the
 actual function or class related to an argument gets called?
 
>>> 
>>> This review [1] is something I submitted and it adds a sub command.
>>> Maybe this will give you some clue.
>>> 
>>> [1] https://review.openstack.org/#/c/40181/
>>> 
>>> Thanks and Regards
>>> Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newbie python novaclient question

2013-10-10 Thread Noorul Islam K M
Alex  writes:

> Thank you Noorul. I looked at the review. My question is: in
> OpenStackComputeShell.main, which line calls into v1_1/shell.py?
>

I would look at the get_subcommand_parser() method.
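
That builds the subparser table; the actual dispatch happens later in
main(). The pattern is roughly this (a simplified sketch, not the actual
novaclient code):

    import argparse

    def do_list(cs, args):
        # in real novaclient, do_* functions live in v1_1/shell.py
        print("listing servers for", cs)

    def get_subcommand_parser():
        parser = argparse.ArgumentParser(prog="nova")
        subparsers = parser.add_subparsers()
        sub = subparsers.add_parser("list")
        sub.set_defaults(func=do_list)   # register the do_* callback
        return parser

    def main(argv):
        parser = get_subcommand_parser()
        args = parser.parse_args(argv)
        cs = "<client>"        # stands in for the authenticated client
        args.func(cs, args)    # <-- the line that calls into v1_1/shell.py

    main(["list"])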

Thanks and Regards
Noorul

>
>
> On Oct 10, 2013, at 9:03 PM, Noorul Islam K M  wrote:
>
>> A L  writes:
>> 
>>> Dear Openstack Dev Gurus,
>>> 
>>> I am trying to understand the python novaclient code. Can someone please
>>> point me to where, in the OpenStackComputeShell class in shell.py, the
>>> actual function or class related to an argument gets called?
>>> 
>> 
>> This review [1] is something I submitted and it adds a sub command.
>> Maybe this will give you some clue.
>> 
>> [1] https://review.openstack.org/#/c/40181/
>> 
>> Thanks and Regards
>> Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newbie python novaclient question

2013-10-10 Thread Alex
Thank you Noorul. I looked at the review. My question is: in
OpenStackComputeShell.main, which line calls into v1_1/shell.py?


Thanks
Al


On Oct 10, 2013, at 9:03 PM, Noorul Islam K M  wrote:

> A L  writes:
> 
>> Dear Openstack Dev Gurus,
>> 
>> I am trying to understand the python novaclient code. Can someone please
>> point me to where, in the OpenStackComputeShell class in shell.py, the
>> actual function or class related to an argument gets called?
>> 
> 
> This review [1] is something I submitted and it adds a sub command.
> Maybe this will give you some clue.
> 
> [1] https://review.openstack.org/#/c/40181/
> 
> Thanks and Regards
> Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newbie python novaclient question

2013-10-10 Thread Noorul Islam K M
A L  writes:

> Dear Openstack Dev Gurus,
>
> I am trying to understand the python novaclient code. Can someone please
> point me to where, in the OpenStackComputeShell class in shell.py, the
> actual function or class related to an argument gets called?
>

This review [1] is something I submitted and it adds a sub command.
Maybe this will give you some clue.

[1] https://review.openstack.org/#/c/40181/

Thanks and Regards
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][powervm] my notes from the meeting on powervm CI

2013-10-10 Thread Matthew Treinish
On Thu, Oct 10, 2013 at 07:39:37PM -0700, Joe Gordon wrote:
> On Thu, Oct 10, 2013 at 7:28 PM, Matt Riedemann  wrote:
> > >
> > > > 4. What is the max amount of time for us to report test results?  Dan
> > > > didn't seem to think 48 hours would fly. :)
> > >
> > > Honestly, I think that 12 hours during peak times is the upper limit of
> > > what could be considered useful. If it's longer than that, many patches
> > > could go into the tree without a vote, which defeats the point.
> >
> > Yeah, I was just joking about the 48 hour thing, 12 hours seems excessive
> > but I guess that has happened when things are super backed up with gate
> > issues and rechecks.
> >
> > Right now things take about 4 hours, with Tempest being around 1.5 hours
> > of that. The rest of the time is setup and install, which includes heat
> > and ceilometer. So I guess that raises another question, if we're really
> > setting this up right now because of nova, do we need to have heat and
> > ceilometer installed and configured in the initial delivery of this if
> > we're not going to run tempest tests against them (we don't right now)?
> >
> 
> 
> In general the faster the better, and if things get so slow that we
> have to wait for powervm CI to report back, I
> think it's reasonable to go ahead and approve things without hearing back.
> In reality, if you can report back in under 12 hours this will rarely
> happen (I think).
> 
> 
> >
> > I think some aspect of the slow setup time is related to DB2 and how
> > the migrations perform with some of that, but the overall time is not
> > considerably different from when we were running this with MySQL so
> > I'm reluctant to blame it all on DB2.  I think some of our topology
> > could have something to do with it too since the IVM hypervisor is running
> > on a separate system and we are gated on how it's performing at any
> > given time.  I think that will be our biggest challenge for the scale
> > issues with community CI.
> >
> > >
> > > > 5. What are the minimum tests that need to run (excluding APIs that the
> > > > powervm driver doesn't currently support)?
> > > > - smoke/gate/negative/whitebox/scenario/cli?  Right now we have
> > > > 1152 tempest tests running, those are only within api/scenario/cli and
> > > > we don't run everything.

Well that's almost a full run right now; the full tempest jobs have 1290 tests,
of which we skip 65 because of bugs or configuration (we don't run neutron API
tests without neutron). That number is actually pretty high given that you are
running with neutron. Right now the neutron gating jobs only have 221 tests and
skip 8 of those. Can you share the list of things you've got working with
neutron so we can up the number of gating tests?

> > >
> > > I think that "a full run of tempest" should be required. That said, if
> > > there are things that the driver legitimately doesn't support, it makes
> > > sense to exclude those from the tempest run, otherwise it's not useful.
> >
> 
> ++
> 
> 
> 
> >  >
> > > I think you should publish the tempest config (or config script, or
> > > patch, or whatever) that you're using so that we can see what it means
> > > in terms of the coverage you're providing.
> >
> > Just to clarify, do you mean publish what we are using now or publish
> > once it's all working?  I can certainly attach our nose.cfg and
> > latest x-unit results xml file.
> >
> 
> We should publish all logs, similar to what we do for upstream (
> http://logs.openstack.org/96/48196/8/gate/gate-tempest-devstack-vm-full/70ae562/
> ).

Yes, and part of that is the devstack logs, which show all the configuration
steps for getting an environment up and running. This is sometimes very useful
for debugging, so it's probably information that you'll want to replicate in
whatever the logging output for the powervm jobs ends up being.

> > >
> > > > 6. Network service? We're running with openvswitch 1.10 today so we
> > > > probably want to continue with that if possible.
> > >
> > > Hmm, so that means neutron? AFAIK, not much of tempest runs with
> > > Nova/Neutron.
> > >
> > > I kinda think that since nova-network is our default right now (for
> > > better or worse) that the run should include that mode, especially if
> > > using neutron excludes a large portion of the tests.
> > >
> > > I think you said you're actually running a bunch of tempest right now,
> > > which conflicts with my understanding of neutron workiness. Can you
> > clarify?
> >
> > Correct, we're running with neutron using the ovs plugin. We basically have
> > the same issues that the neutron gate jobs have, which is related to
> > concurrency
> > issues and tenant isolation (we're doing the same as devstack with neutron
> > in that we don't run tempest with tenant isolation).  We are running most
> > of the nova and most of the neutron API tests though (we don't have all
> > of the neutron-dependent scenario tests working though, probably more due
> > to incompetence in sett

Re: [openstack-dev] [nova][powervm] my notes from the meeting on powervm CI

2013-10-10 Thread Matthew Treinish
On Thu, Oct 10, 2013 at 04:55:51PM -0500, Matt Riedemann wrote:
> Based on the discussion with Russell and Dan Smith in the nova meeting 
> today, here are some of my notes from the meeting that can continue the 
> discussion.  These are all pretty rough at the moment so please bear with 
> me, this is more to just get the ball rolling on ideas.
> 
> Notes on powervm CI:
> 
> 1. What OS to run on?  Fedora 19, RHEL 6.4?
> - Either of those is probably fine, we use RHEL 6.4 right now 
> internally.

I'd say use Fedora 19 over RHEL 6.4; that way you can use Python 2.7 and run
tempest with testr instead of nose. While you won't be able to run things in
parallel if you're using neutron right now, moving forward that should
hopefully be fixed soon. Running in parallel may help with the execution time
a bit.
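
For reference, the testr workflow is roughly this (assuming a tempest
checkout that's already configured; testr comes from the testrepository
package):

    $ testr init                       # once per checkout
    $ testr run --parallel tempest     # full run across worker processes
    $ testr run tempest.api.compute    # or a filtered run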


-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Newbie python novaclient question

2013-10-10 Thread A L
Dear Openstack Dev Gurus,

I am trying to understand the python novaclient code. Can someone please
point me to where, in the OpenStackComputeShell class in shell.py, the
actual function or class related to an argument gets called?

Thanks a bunch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] HTTPUnauthorized at /project/instances/

2013-10-10 Thread 苌智
I installed ceilometer according to
https://fedoraproject.org/wiki/QA:Testcase_OpenStack_ceilometer_install .
After using the admin user to create a user, the dashboard shows an
"HTTPUnauthorized at /project/instances/" error. I think something in
ceilometer has gone wrong. Could someone give me some advice?

Thanks a lot.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Change I3e080c30: Fix "resource" length in project_user_quotas table for Havana?

2013-10-10 Thread Joshua Hesketh

Hi there,

I've been reviewing this change which is currently proposed for master 
and I think it needs to be considered for the next Havana RC.


Change I3e080c30: Fix "resource" length in project_user_quotas table
https://review.openstack.org/#/c/47299/

I'm new to the process around these kinds of patches but I imagine that 
we should use one of the placeholder migrations in the havana branch and 
cherry-pick it back into master?
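
If I understand the convention, those placeholder slots are just no-op
sqlalchemy-migrate scripts, i.e. something like the following, which the
real migration would then replace (path and number are hypothetical):

    # e.g. nova/db/sqlalchemy/migrate_repo/versions/2xx_placeholder.py

    def upgrade(migrate_engine):
        pass

    def downgrade(migrate_engine):
        pass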


Cheers,
Josh

--
Rackspace Australia


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][powervm] my notes from the meeting on powervm CI

2013-10-10 Thread Joe Gordon
On Thu, Oct 10, 2013 at 7:28 PM, Matt Riedemann  wrote:

>
>
>
>
> Dan Smith  wrote on 10/10/2013 08:26:14 PM:
>
> > From: Dan Smith 
> > To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>,
> > Date: 10/10/2013 08:31 PM
> > Subject: Re: [openstack-dev] [nova][powervm] my notes from the
> > meeting on powervm CI
> >
> > > 4. What is the max amount of time for us to report test results?  Dan
> > > didn't seem to think 48 hours would fly. :)
> >
> > Honestly, I think that 12 hours during peak times is the upper limit of
> > what could be considered useful. If it's longer than that, many patches
> > could go into the tree without a vote, which defeats the point.
>
> Yeah, I was just joking about the 48 hour thing, 12 hours seems excessive
> but I guess that has happened when things are super backed up with gate
> issues and rechecks.
>
> Right now things take about 4 hours, with Tempest being around 1.5 hours
> of that. The rest of the time is setup and install, which includes heat
> and ceilometer. So I guess that raises another question, if we're really
> setting this up right now because of nova, do we need to have heat and
> ceilometer installed and configured in the initial delivery of this if
> we're not going to run tempest tests against them (we don't right now)?
>


In general the faster the better, and if things get so slow that we
have to wait for powervm CI to report back, I
think it's reasonable to go ahead and approve things without hearing back.
In reality, if you can report back in under 12 hours this will rarely
happen (I think).


>
> I think some aspect of the slow setup time is related to DB2 and how
> the migrations perform with some of that, but the overall time is not
> considerably different from when we were running this with MySQL so
> I'm reluctant to blame it all on DB2.  I think some of our topology
> could have something to do with it too since the IVM hypervisor is running
> on a separate system and we are gated on how it's performing at any
> given time.  I think that will be our biggest challenge for the scale
> issues with community CI.
>
> >
> > > 5. What are the minimum tests that need to run (excluding APIs that the
> > > powervm driver doesn't currently support)?
> > > - smoke/gate/negative/whitebox/scenario/cli?  Right now we have
> > > 1152 tempest tests running, those are only within api/scenario/cli and
> > > we don't run everything.
> >
> > I think that "a full run of tempest" should be required. That said, if
> > there are things that the driver legitimately doesn't support, it makes
> > sense to exclude those from the tempest run, otherwise it's not useful.
>

++



>  >
> > I think you should publish the tempest config (or config script, or
> > patch, or whatever) that you're using so that we can see what it means
> > in terms of the coverage you're providing.
>
> Just to clarify, do you mean publish what we are using now or publish
> once it's all working?  I can certainly attach our nose.cfg and
> latest x-unit results xml file.
>

We should publish all logs, similar to what we do for upstream (
http://logs.openstack.org/96/48196/8/gate/gate-tempest-devstack-vm-full/70ae562/
).



>
> >
> > > 6. Network service? We're running with openvswitch 1.10 today so we
> > > probably want to continue with that if possible.
> >
> > Hmm, so that means neutron? AFAIK, not much of tempest runs with
> > Nova/Neutron.
> >
> > I kinda think that since nova-network is our default right now (for
> > better or worse) that the run should include that mode, especially if
> > using neutron excludes a large portion of the tests.
> >
> > I think you said you're actually running a bunch of tempest right now,
> > which conflicts with my understanding of neutron workiness. Can you
> clarify?
>
> Correct, we're running with neutron using the ovs plugin. We basically have
> the same issues that the neutron gate jobs have, which is related to
> concurrency
> issues and tenant isolation (we're doing the same as devstack with neutron
> in that we don't run tempest with tenant isolation).  We are running most
> of the nova and most of the neutron API tests though (we don't have all
> of the neutron-dependent scenario tests working though, probably more due
> to incompetence in setting up neutron than anything else).
>
> >
> > > 7. Cinder backend? We're running with the storwize driver, but what do
> > > we do about the remote v7000?
> >
> > Is there any reason not to just run with a local LVM setup like we do in
> > the real gate? I mean, additional coverage for the v7000 driver is
> > great, but if it breaks and causes you to not have any coverage at all,
> > that seems, like, bad to me :)
>
> Yeah, I think we'd just run with a local LVM setup, that's what we do for
> x86_64 and s390x tempest runs. For whatever reason we thought we'd do
> storwize for our ppc64 runs, probably just to have a matrix of coverage.
>
> >
> > > Again, just getting some thoughts ou

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Joe Gordon
On Thu, Oct 10, 2013 at 5:43 PM, Tim Smith  wrote:

> On Thu, Oct 10, 2013 at 1:50 PM, Russell Bryant wrote:
>
>
>> Please understand that I only want to help here.  Perhaps a good way for
>> you to get more review attention is get more karma in the dev community
>> by helping review other patches.  It looks like you don't really review
>> anything outside of your own stuff, or patches that touch hyper-v.  In
>> the absence of significant interest in hyper-v from others, the only way
>> to get more attention is by increasing your karma.
>>
>
> NB: I don't have any vested interest in this discussion except that I want
> to make sure OpenStack stays "Open", i.e. inclusive. I believe the concept
> of "reviewer karma", while seemingly sensible, is actually subtly counter
> to the goals of openness, innovation, and vendor neutrality, and would also
> lead to overall lower commit quality.
>
>
The way I see it there are a few parts to 'karma' including:

* The ratio of reviewers to open patches is way off. In nova there are only
21 reviewers who have done on average two reviews a day for the past 30
days [1], and there are 226 open reviews, 125 of which are waiting for a
reviewer [2].  So one part of karma is helping out the team as a whole with
the review workload (and the more insightful the review the better).  If
we have more reviewers, more patches get looked at faster.
* The more I see someone being active, through reviews or through patches,
the more I trust their +1/-1s and patches.


While there are some potentially negative sides to karma, I don't see how
the properties above, which to me are the major elements of karma, can be
considered negative.


[1] http://www.russellbryant.net/openstack-stats/nova-reviewers-30.txt
[2] http://www.russellbryant.net/openstack-stats/nova-openreviews.html



> Brian Kernighan famously wrote: "Debugging is twice as hard as writing the
> code in the first place." A corollary is that constructing a mental model
> of code is hard; perhaps harder than writing the code in the first place.
> It follows that reviewing code is not an easy task, especially if one has
> not been intimately involved in the original development of the code under
> review. In fact, if a reviewer is not intimately familiar with the code
> under review, and therefore only able to perform the functions of human
> compiler and style-checker (functions which can be and typically are
> performed by automatic tools), the rigor of their review is at best
> less-than-ideal, and at worst purely symbolic.
>
>
FWIW, we have automatic style-checking.



> It is logical, then, that a reviewer should review changes to code that
> he/she is familiar with. Attempts to gamify the implicit review
> prioritization system through a "karma" scheme are sadly doomed to fail, as
> contributors hoping to get their patches reviewed will have no option but
> to "build karma" reviewing patches in code they are unfamiliar with,
> leading to a higher number of low quality reviews.
>
> So, if a cross-functional "karma" system won't accomplish the desired
> result (high-quality reviews of commits across all functional units), what
> will it accomplish (besides overall lower commit quality)?
>
> Because the "karma" system inherently favors entrenched (read: heavily
> deployed) code, it forms a slippery slope leading to a mediocre
> "one-size-fits-all" stack, where contributors of new technologies,
> approaches, and hardware/software drivers will see their contributions die
> on the vine due to lack of core reviewer attention. If the driver team for
> a widely deployed hypervisor (outside of the OpenStack space - they can't
> really be expected to have wide OpenStack deployment without a mature
> driver) is having difficulty with reviews due to an implicit "karma"
> deficit, imagine the challenges that will be faced by the future
> SDN/SDS/SDx innovators of the world hoping to find a platform for their
> innovation in OpenStack.
>
> Again, I don't have any vested interest in this discussion, except that I
> believe the concept of "reviewer karma" to be counter to both software
> quality and openness. In this particular case it would seem that the
> simplest solution to this problem would be to give one of the hyper-v team
> members core reviewer status, but perhaps there are consequences to that
> that elude me.
>

> Regards,
> Tim
>
>
>
>> https://review.openstack.org/#/q/reviewer:3185+project:openstack/nova,n,z
>>
>> --
>> Russell Bryant
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://li

Re: [openstack-dev] [nova][powervm] my notes from the meeting on powervm CI

2013-10-10 Thread Matt Riedemann
Dan Smith  wrote on 10/10/2013 08:26:14 PM:

> From: Dan Smith 
> To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>,
> Date: 10/10/2013 08:31 PM
> Subject: Re: [openstack-dev] [nova][powervm] my notes from the 
> meeting on powervm CI
> 
> > 4. What is the max amount of time for us to report test results?  Dan
> > didn't seem to think 48 hours would fly. :)
> 
> Honestly, I think that 12 hours during peak times is the upper limit of
> what could be considered useful. If it's longer than that, many patches
> could go into the tree without a vote, which defeats the point.

Yeah, I was just joking about the 48 hour thing, 12 hours seems excessive
but I guess that has happened when things are super backed up with gate
issues and rechecks.

Right now things take about 4 hours, with Tempest being around 1.5 hours
of that. The rest of the time is setup and install, which includes heat
and ceilometer. So I guess that raises another question, if we're really
setting this up right now because of nova, do we need to have heat and
ceilometer installed and configured in the initial delivery of this if
we're not going to run tempest tests against them (we don't right now)?
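
If they do end up staying installed, I believe tempest can at least be
told not to exercise them via tempest.conf; something like the following,
though the option names are from memory and worth double-checking against
the tempest version in use:

    [service_available]
    heat = false
    ceilometer = false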

I think some aspect of the slow setup time is related to DB2 and how
the migrations perform with some of that, but the overall time is not
considerably different from when we were running this with MySQL so
I'm reluctant to blame it all on DB2.  I think some of our topology
could have something to do with it too since the IVM hypervisor is running
on a separate system and we are gated on how it's performing at any
given time.  I think that will be our biggest challenge for the scale
issues with community CI.

> 
> > 5. What are the minimum tests that need to run (excluding APIs that the
> > powervm driver doesn't currently support)?
> > - smoke/gate/negative/whitebox/scenario/cli?  Right now we have
> > 1152 tempest tests running, those are only within api/scenario/cli and
> > we don't run everything.
> 
> I think that "a full run of tempest" should be required. That said, if
> there are things that the driver legitimately doesn't support, it makes
> sense to exclude those from the tempest run, otherwise it's not useful.
> 
> I think you should publish the tempest config (or config script, or
> patch, or whatever) that you're using so that we can see what it means
> in terms of the coverage you're providing.

Just to clarify, do you mean publish what we are using now or publish
once it's all working?  I can certainly attach our nose.cfg and
latest x-unit results xml file.
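
For reference, a minimal nose.cfg of the sort in question might look
like this (illustrative values only):

    [nosetests]
    verbosity = 2
    with-xunit = 1
    xunit-file = tempest-results.xml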

> 
> > 6. Network service? We're running with openvswitch 1.10 today so we
> > probably want to continue with that if possible.
> 
> Hmm, so that means neutron? AFAIK, not much of tempest runs with
> Nova/Neutron.
> 
> I kinda think that since nova-network is our default right now (for
> better or worse) that the run should include that mode, especially if
> using neutron excludes a large portion of the tests.
> 
> I think you said you're actually running a bunch of tempest right now,
> which conflicts with my understanding of neutron workiness. Can you
> clarify?

Correct, we're running with neutron using the ovs plugin. We basically have
the same issues that the neutron gate jobs have, which are related to
concurrency issues and tenant isolation (we're doing the same as devstack
with neutron in that we don't run tempest with tenant isolation).  We are
running most of the nova and most of the neutron API tests though (we don't
have all of the neutron-dependent scenario tests working though, probably
more due to incompetence in setting up neutron than anything else).

> 
> > 7. Cinder backend? We're running with the storwize driver, but what do
> > we do about the remote v7000?
> 
> Is there any reason not to just run with a local LVM setup like we do in
> the real gate? I mean, additional coverage for the v7000 driver is
> great, but if it breaks and causes you to not have any coverage at all,
> that seems, like, bad to me :)

Yeah, I think we'd just run with a local LVM setup, that's what we do for
x86_64 and s390x tempest runs. For whatever reason we thought we'd do
storwize for our ppc64 runs, probably just to have a matrix of coverage.

> 
> > Again, just getting some thoughts out there to help us figure out our
> > goals for this, especially around 4 and 5.
> 
> Yeah, thanks for starting this discussion!
> 
> --Dan
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Joe Gordon
On Thu, Oct 10, 2013 at 6:57 PM, Matt Riedemann  wrote:

> Getting integration testing hooked up for the hyper-v driver with tempest
> should go a long way here which is a good reason to have it.  As has been
> mentioned, there is a core team of people that understand the internals of
> the hyper-v driver and the subtleties of when it won't work, and only those
> with a vested interest in using it will really care about it.
>
> My team has the same issue with the powervm driver.  We don't have
> community integration testing hooked up yet.  We run tempest against it
> internally so we know what works and what doesn't, but besides standard
> code review practices that apply throughout everything (strong unit test
> coverage, consistency with other projects, hacking rules, etc), any other
> reviewer has to generally take it on faith that what's in there works as
> it's supposed to.  Sure, there is documentation available on what the
> native commands do and anyone can dig into those to figure it out, but I
> wouldn't expect that low-level of review from anyone that doesn't regularly
> work on the powervm driver.  I think the same is true for anything here.
>  So the equalizer is a rigorously tested and broad set of integration
> tests, which is where we all need to get to with tempest and continuous
> integration.
>

Well said, I couldn't agree more!


>
> We've had the same issues as mentioned in the original note about things
> slipping out of releases or taking a long time to get reviewed, and we've
> had to fork code internally because of it which we then have to continue to
> try and get merged upstream - and it's painful, but it is what it is,
> that's the nature of the business.
>
> Personally my experience has been that the more I give the more I get.
>  The more I'm involved in what others are doing and the more I review
> other's code, the more I can build a relationship which is mutually
> beneficial.  Sometimes I can only say 'hey, you need unit tests for this or
> this doesn't seem right but I'm not sure', but unless you completely
> automate code coverage metrics and build that back into reviews, e.g. does
> your 1000 line blueprint have 95% code coverage in the tests, you still
> need human reviewers on everything, regardless of context.  Even then it's
> not going to be enough, there will always be a need for people with a
> broader vision of the project as a whole that can point out where things
> are going in the wrong direction even if it fixes a bug.
>
> The point is I see both sides of the argument, I'm sure many people do.
>  In a large complicated project like this it's inevitable.  But I think the
> quality and adoption of OpenStack speaks for itself and I believe a key
> component of that is the review system and that's only as good as the
> people which are going to uphold the standards across the project.  I've
> been on enough development projects that give plenty of lip service to code
> quality and review standards which are always the first thing to go when a
> deadline looms, and those projects are always ultimately failures.
>
>
>
> Thanks,
>
> MATT RIEDEMANN
> Advisory Software Engineer
> Cloud Solutions and OpenStack Development
> --
> Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
> E-mail: mrie...@us.ibm.com
>
> 3605 Hwy 52 N
> Rochester, MN 55901-1407
> United States
>
>
>
>
>
> From: Tim Smith 
> To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>,
> Date: 10/10/2013 07:48 PM
> Subject: Re: [openstack-dev] [Hyper-V] Havana status
> --
>
>
>
> On Thu, Oct 10, 2013 at 1:50 PM, Russell Bryant <rbry...@redhat.com>
> wrote:
>
> Please understand that I only want to help here.  Perhaps a good way for
> you to get more review attention is get more karma in the dev community
> by helping review other patches.  It looks like you don't really review
> anything outside of your own stuff, or patches that touch hyper-v.  In
> the absence of significant interest in hyper-v from others, the only way
> to get more attention is by increasing your karma.
>
> NB: I don't have any vested interest in this discussion except that I want
> to make sure OpenStack stays "Open", i.e. inclusive. I believe the concept
> of "reviewer karma", while seemingly sensible, is actually subtly counter
> to the goals of openness, innovation, and vendor neutrality, and would also
> lead to overall lower commit quality.
>
> Brian Kernighan famously wrote: "Debugging is twice as hard as writing the
> code in the first place." A corollary is that constructing a mental model
> of code is hard; perhaps harder than writing the code in the first place.
> It follows that reviewing code is not an easy task, especially if one has
> not been intimately involved in the original development of the code under
> review. In fact, if a reviewer is not intimately familiar with the code
>

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Joe Gordon
On Thu, Oct 10, 2013 at 11:20 AM, Alessandro Pilotti <
apilo...@cloudbasesolutions.com> wrote:

>  Hi all,
>
>  As the Havana release date is approaching fast, I'm sending this email
> to sum up the situation for pending bugs and reviews related to the Hyper-V
> integration in OpenStack.
>
>  In the past weeks we diligently marked bugs that are related to Havana
> features with the "havana-rc-potential" tag, which, at least as far as Nova
> is concerned, had absolutely no effect.
> Our code is sitting in the review queue as usual and, not being tagged for
> a release or prioritised, there's no guarantee that anybody will take a
> look at the patches in time for the release. Needless to say, this starts
> to feel like a Kafka novel. :-)
> The goal for us is to make sure that our efforts are directed to the main
> project tree, avoiding the need to focus on a separate fork with more
> advanced features and updated code, even if this means slowing down a lot
> our pace. Due to the limited review bandwidth available in Nova we had to
> postpone to Icehouse blueprints which were already implemented for Havana,
> which is fine, but we definitely cannot leave bug fixes behind (even if
> they are just a small number, like in this case).
>
>  Some of those bugs are critical for Hyper-V support in Havana, while the
> related fixes typically consist in small patches with very few line changes.
>
>  Here's the detailed status:
>
>
>  --Nova--
>
>  The following bugs have already been fixed and are waiting for review:
>
>
>  VHD format check is not properly performed for fixed disks in the
> Hyper-V driver
>
>  https://bugs.launchpad.net/nova/+bug/1233853
> https://review.openstack.org/#/c/49269/
>
>
> Deploy instances failed on Hyper-V with Chinese locale
>
>  https://bugs.launchpad.net/nova/+bug/1229671
> https://review.openstack.org/#/c/48267/
>
>
>  Nova Hyper-V driver volumeutils iscsicli ListTargets contains a typo
>
>  https://bugs.launchpad.net/nova/+bug/1237432
> https://review.openstack.org/#/c/50671/
>

This link is incorrect; it should read
https://review.openstack.org/#/c/50482/


>
>
>Hyper-V driver needs tests for WMI WQL instructions
>
>https://bugs.launchpad.net/nova/+bug/1220256
>https://review.openstack.org/#/c/48940/
>

As a core reviewer, if I see no +1 from any Hyper-V people on a patch, I am
inclined to come back to it later. Like this one.


>
>
>target_iqn is referenced before assignment after exceptions in
> hyperv/volumeop.py attch_volume()
>https://bugs.launchpad.net/nova/+bug/1233837
>https://review.openstack.org/#/c/49259/
>
>  --Neutron--
>
>  Waiting for review
>
>ml2 plugin may let hyperv agents ports to build status
>
>https://bugs.launchpad.net/neutron/+bug/1224991
>https://review.openstack.org/#/c/48306/
>
>
>
>  The following two bugs are still requiring some work, but will be done
> in the next few days.
>
>Hyper-V fails to spawn snapshots
>
> https://bugs.launchpad.net/nova/+bug/1234759
>https://review.openstack.org/#/c/50439/
>
>VHDX snapshot from Hyper-V driver is bigger than original instance
>
>https://bugs.launchpad.net/nova/+bug/1231911
>https://review.openstack.org/#/c/48645/
>
>
>  As usual, thanks for your help!
>
>  Alessandro
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Matt Riedemann
Getting integration testing hooked up for the hyper-v driver with tempest 
should go a long way here which is a good reason to have it.  As has been 
mentioned, there is a core team of people that understand the internals of 
the hyper-v driver and the subtleties of when it won't work, and only 
those with a vested interest in using it will really care about it.

My team has the same issue with the powervm driver.  We don't have 
community integration testing hooked up yet.  We run tempest against it 
internally so we know what works and what doesn't, but besides standard 
code review practices that apply throughout everything (strong unit test 
coverage, consistency with other projects, hacking rules, etc), any other 
reviewer has to generally take it on faith that what's in there works as 
it's supposed to.  Sure, there is documentation available on what the 
native commands do and anyone can dig into those to figure it out, but I 
wouldn't expect that low-level of review from anyone that doesn't 
regularly work on the powervm driver.  I think the same is true for 
anything here.  So the equalizer is a rigorously tested and broad set of 
integration tests, which is where we all need to get to with tempest and 
continuous integration.

We've had the same issues as mentioned in the original note about things 
slipping out of releases or taking a long time to get reviewed, and we've 
had to fork code internally because of it which we then have to continue 
to try and get merged upstream - and it's painful, but it is what it is, 
that's the nature of the business.

Personally my experience has been that the more I give the more I get. The 
more I'm involved in what others are doing and the more I review other's 
code, the more I can build a relationship which is mutually beneficial. 
Sometimes I can only say 'hey, you need unit tests for this or this 
doesn't seem right but I'm not sure', but unless you completely automate 
code coverage metrics and build that back into reviews, e.g. does your 
1000 line blueprint have 95% code coverage in the tests, you still need 
human reviewers on everything, regardless of context.  Even then it's not 
going to be enough, there will always be a need for people with a broader 
vision of the project as a whole that can point out where things are going 
in the wrong direction even if it fixes a bug.

The point is I see both sides of the argument, I'm sure many people do. In 
a large complicated project like this it's inevitable.  But I think the 
quality and adoption of OpenStack speaks for itself and I believe a key 
component of that is the review system and that's only as good as the 
people which are going to uphold the standards across the project.  I've 
been on enough development projects that give plenty of lip service to 
code quality and review standards which are always the first thing to go 
when a deadline looms, and those projects are always ultimately failures.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Tim Smith 
To:     OpenStack Development Mailing List <openstack-dev@lists.openstack.org>, 
Date:   10/10/2013 07:48 PM
Subject: Re: [openstack-dev] [Hyper-V] Havana status



On Thu, Oct 10, 2013 at 1:50 PM, Russell Bryant  
wrote:
 
Please understand that I only want to help here.  Perhaps a good way for
you to get more review attention is get more karma in the dev community
by helping review other patches.  It looks like you don't really review
anything outside of your own stuff, or patches that touch hyper-v.  In
the absence of significant interest in hyper-v from others, the only way
to get more attention is by increasing your karma.

NB: I don't have any vested interest in this discussion except that I want 
to make sure OpenStack stays "Open", i.e. inclusive. I believe the concept 
of "reviewer karma", while seemingly sensible, is actually subtly counter 
to the goals of openness, innovation, and vendor neutrality, and would 
also lead to overall lower commit quality.

Brian Kernighan famously wrote: "Debugging is twice as hard as writing the 
code in the first place." A corollary is that constructing a mental model 
of code is hard; perhaps harder than writing the code in the first place. 
It follows that reviewing code is not an easy task, especially if one has 
not been intimately involved in the original development of the code under 
review. In fact, if a reviewer is not intimately familiar with the code 
under review, and therefore only able to perform the functions of human 
compiler and style-checker (functions which can be and typically are 
performed by automatic tools), the rigor of their review is at best 
less-than-ideal, and at worst purely symbolic.

It is logical, then, that a reviewer should review changes to code that 
he/she is familiar with. Attempts to gamify the impl

Re: [openstack-dev] [nova][powervm] my notes from the meeting on powervm CI

2013-10-10 Thread Dan Smith
> 4. What is the max amount of time for us to report test results?  Dan
> didn't seem to think 48 hours would fly. :)

Honestly, I think that 12 hours during peak times is the upper limit of
what could be considered useful. If it's longer than that, many patches
could go into the tree without a vote, which defeats the point.

> 5. What are the minimum tests that need to run (excluding APIs that the
> powervm driver doesn't currently support)?
> - smoke/gate/negative/whitebox/scenario/cli?  Right now we have
> 1152 tempest tests running, those are only within api/scenario/cli and
> we don't run everything.

I think that "a full run of tempest" should be required. That said, if
there are things that the driver legitimately doesn't support, it makes
sense to exclude those from the tempest run, otherwise it's not useful.

I think you should publish the tempest config (or config script, or
patch, or whatever) that you're using so that we can see what it means
in terms of the coverage you're providing.

> 6. Network service? We're running with openvswitch 1.10 today so we
> probably want to continue with that if possible.

Hmm, so that means neutron? AFAIK, not much of tempest runs with
Nova/Neutron.

I kinda think that since nova-network is our default right now (for
better or worse) that the run should include that mode, especially if
using neutron excludes a large portion of the tests.

I think you said you're actually running a bunch of tempest right now,
which conflicts with my understanding of neutron workiness. Can you clarify?

> 7. Cinder backend? We're running with the storwize driver, but what do we
> do about the remote v7000?

Is there any reason not to just run with a local LVM setup like we do in
the real gate? I mean, additional coverage for the v7000 driver is
great, but if it breaks and causes you to not have any coverage at all,
that seems, like, bad to me :)

> Again, just getting some thoughts out there to help us figure out our
> goals for this, especially around 4 and 5.

Yeah, thanks for starting this discussion!

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Tim Smith
On Thu, Oct 10, 2013 at 1:50 PM, Russell Bryant  wrote:


> Please understand that I only want to help here.  Perhaps a good way for
> you to get more review attention is get more karma in the dev community
> by helping review other patches.  It looks like you don't really review
> anything outside of your own stuff, or patches that touch hyper-v.  In
> the absence of significant interest in hyper-v from others, the only way
> to get more attention is by increasing your karma.
>

NB: I don't have any vested interest in this discussion except that I want
to make sure OpenStack stays "Open", i.e. inclusive. I believe the concept
of "reviewer karma", while seemingly sensible, is actually subtly counter
to the goals of openness, innovation, and vendor neutrality, and would also
lead to overall lower commit quality.

Brian Kernighan famously wrote: "Debugging is twice as hard as writing the
code in the first place." A corollary is that constructing a mental model
of code is hard; perhaps harder than writing the code in the first place.
It follows that reviewing code is not an easy task, especially if one has
not been intimately involved in the original development of the code under
review. In fact, if a reviewer is not intimately familiar with the code
under review, and therefore only able to perform the functions of human
compiler and style-checker (functions which can be and typically are
performed by automatic tools), the rigor of their review is at best
less-than-ideal, and at worst purely symbolic.

It is logical, then, that a reviewer should review changes to code that
he/she is familiar with. Attempts to gamify the implicit review
prioritization system through a "karma" scheme are sadly doomed to fail, as
contributors hoping to get their patches reviewed will have no option but
to "build karma" reviewing patches in code they are unfamiliar with,
leading to a higher number of low quality reviews.

So, if a cross-functional "karma" system won't accomplish the desired
result (high-quality reviews of commits across all functional units), what
will it accomplish (besides overall lower commit quality)?

Because the "karma" system inherently favors entrenched (read: heavily
deployed) code, it forms a slippery slope leading to a mediocre
"one-size-fits-all" stack, where contributors of new technologies,
approaches, and hardware/software drivers will see their contributions die
on the vine due to lack of core reviewer attention. If the driver team for
a widely deployed hypervisor (outside of the OpenStack space - they can't
really be expected to have wide OpenStack deployment without a mature
driver) is having difficulty with reviews due to an implicit "karma"
deficit, imagine the challenges that will be faced by the future
SDN/SDS/SDx innovators of the world hoping to find a platform for their
innovation in OpenStack.

Again, I don't have any vested interest in this discussion, except that I
believe the concept of "reviewer karma" to be counter to both software
quality and openness. In this particular case it would seem that the
simplest solution to this problem would be to give one of the hyper-v team
members core reviewer status, but perhaps there are consequences to that
that elude me.

Regards,
Tim



> https://review.openstack.org/#/q/reviewer:3185+project:openstack/nova,n,z
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Common requirements for services' discussion

2013-10-10 Thread Rudra Rugge
Here are the blueprints (mentioned by Harshad below) to add complete AWS
VPC compatibility in OpenStack. AWS EC2 compatibility already exists in 
OpenStack.

https://blueprints.launchpad.net/neutron/+spec/ipam-extensions-for-neutron
https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron
https://blueprints.launchpad.net/nova/+spec/aws-vpc-support

The services extension is relevant to NATaaS (or Natasha :)) and VPNaaS
in AWS VPC.

Regards,
Rudra

On Oct 10, 2013, at 6:15 AM, Harshad Nakil <hna...@contrailsystems.com> wrote:

Agree,
I like what AWS has done: have the concept of a NAT instance. 90% of use
cases are solved by just specifying inside and outside networks for the
NAT instance.

If one wants fancier NAT config, they can always use the NATaaS API(s)
to configure this instance.

There is a blueprint for bringing Amazon VPC API compatibility to nova, and
the related extensions to quantum already propose the concept of a NAT
instance.

How the NAT instance is implemented is left to the plugin.

Regards
-Harshad


On Oct 10, 2013, at 1:47 AM, Salvatore Orlando <sorla...@nicira.com> wrote:

Can I just ask you to not call it NATaaS... if you want to pick a name for it, 
go for Natasha :)

By the way, the idea of a NAT service plugin was first introduced at the 
Grizzly summit in San Diego.
One hurdle, not a big one however, would be that the external gateway and 
floating IP features of the L3 extension already implicitly implement NAT.
It will be important to find a solution that ensures NAT can be configured 
explicitly as well, while allowing external gateways and floating IPs to be 
configured through the API in the same way that we do today.

Apart from this, another interesting aspect would be to see if we can come 
up with an approach that results in an API abstracting the networking 
aspects as much as possible. In other words, I would like to avoid an API 
which ends up being "iptables over REST", if possible.

Regards,
Salvatore


On 10 October 2013 09:55, Bob Melander (bmelande) <bmela...@cisco.com> wrote:
Hi Edgar,

I'm also interested in a broadening of NAT capability in Neutron using the 
evolving service framework.

Thanks,
Bob

From: Edgar Magana <emag...@plumgrid.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Wednesday, 9 October 2013 21:38
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] Common requirements for services' 
discussion

Hello all,

Is anyone working on NATaaS?
I know we have some developers working on Router as a Service and they probably 
want to include NAT functionality, but I have some interest in having NAT as a 
Service.

Please respond if somebody is interested in having some discussions about it.

Thanks,

Edgar

From: Sumit Naiksatam <sumitnaiksa...@gmail.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Tuesday, October 8, 2013 8:30 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Neutron] Common requirements for services' discussion

Hi All,

We had a VPNaaS meeting yesterday and it was felt that we should have a 
separate meeting to discuss the topics common to all services. So, in 
preparation for the Icehouse summit, I am proposing an IRC meeting on Oct 14th 
22:00 UTC (immediately after the Neutron meeting) to discuss common aspects 
related to the FWaaS, LBaaS, and VPNaaS.

We will begin with the service insertion and chaining discussion, and I hope 
we can collect requirements for other common aspects such as service agents, 
service instances, etc. as well.

Etherpad for service insertion & chaining can be found here:
https://etherpad.openstack.org/icehouse-neutron-service-insertion-chaining

Hope you all can join.

Thanks,
~Sumit.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Notifications from non-local exchanges

2013-10-10 Thread Sandy Walsh


On 10/10/2013 06:16 PM, Neal, Phil wrote:
> Greetings all, I'm looking at how to expand the ability of our CM
> instance to consume notifications and have a quick question about
> the configuration and flow...
> 
> For the notifications central agent, we rely on the services (i.e.
> glance, cinder) to drop messages on the same messaging host as used
> by Ceilometer. From there the listener picks them up and cycles through
> the plugin logic to convert them to samples. It's apparent that we
> can't pass an alternate hostname via the control_exchange values, so
> is there another method for harvesting messages off of other
> instances (e.g. another compute node)?

Hey Phil,

You don't really need to specify the exchange name to consume
notifications. It will default to the control-exchange if not specified
anyway.

How it works isn't so obvious.

Depending on the priority of the notification, the oslo notifier will
publish on <topic>.<priority> using the service's control-exchange. If
that queue doesn't exist it'll create it and bind the control-exchange
to it. This is so we can publish even if there are no consumers yet.

Oslo.rpc creates a 1:1 mapping of routing_key and queue to topic (no
wildcards). So we get

<topic>.<priority> -> binding: routing_key "<topic>.<priority>" ->
queue "<topic>.<priority>"

(essentially, 1 queue per priority)

Which is why, if you want to enable services to generate notifications,
you just have to set the driver and the topic(s) to publish on. Exchange
is implied and routing key/queue are inferred from topic.

Likewise we only have to specify the queue name to consume, since we
only need an exchange to publish.

I have a bare-bones oslo notifier consumer and client here if you want
to mess around with it (and a bare-bones kombu version in the parent).

https://github.com/SandyWalsh/amqp_sandbox/tree/master/oslo
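
To make the topology concrete, here's a bare-bones sketch (not the code in
the repo above; the broker URL, the "nova" exchange name and the
"notifications.info" topic are assumptions for illustration) of a kombu
consumer bound the way oslo lays things out:

    import kombu

    # Assumed names: oslo derives the queue and routing_key from the topic,
    # and the exchange from the service's control_exchange setting.
    connection = kombu.Connection('amqp://guest:guest@localhost:5672//')
    exchange = kombu.Exchange('nova', type='topic', durable=False)
    queue = kombu.Queue('notifications.info', exchange,
                        routing_key='notifications.info')

    def on_message(body, message):
        # body is the notification dict published by the oslo notifier
        print(body.get('event_type'))
        message.ack()

    with kombu.Consumer(connection, queues=[queue], callbacks=[on_message]):
        while True:
            connection.drain_events()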

Not sure if that answered your question or made it worse? :)

Cheers
-S


> 
> 
> - Phil
> 
> ___ OpenStack-dev mailing
> list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] VMWareapi patch status - 10/10

2013-10-10 Thread Tracy Jones

Hi Folks - the current state of VMware API patches is as follows - ordered by 
readiness for review

Out of 17 patches we have
6 ready for core reviewers to take a look
11 needing +1 review
1 needing revision


Ordered by fitness for review:

== needs one more +2/approval ==
* https://review.openstack.org/49842
title: 'VMwareVCDriver Fix sparse disk copy error on spawn'
votes: +2:1, +1:2, -1:0, -2:0. +6 days in progress, revision: 2 is 0 
days old 
* https://review.openstack.org/50352
title: 'VMware: Network fallback in case specified one not found'
votes: +2:1, +1:2, -1:0, -2:0. +2 days in progress, revision: 1 is 2 
days old 
* https://review.openstack.org/50761
title: 'Fix vmwareapi driver get_diagnostics calls'
votes: +2:1, +1:3, -1:0, -2:0. +1 days in progress, revision: 1 is 1 
days old 
* https://review.openstack.org/47743
title: 'VMWare: bug fix for Vim exception handling'
votes: +2:1, +1:3, -1:0, -2:0. +18 days in progress, revision: 8 is 8 
days old 

== ready for core ==
* https://review.openstack.org/47289
title: 'Fixes datastore selection bug'
votes: +2:0, +1:5, -1:0, -2:0. +22 days in progress, revision: 15 is 4 
days old 
* https://review.openstack.org/49465
title: 'VMware: fix regression attaching iscsi cinder volumes'
votes: +2:0, +1:7, -1:0, -2:0. +8 days in progress, revision: 3 is 6 
days old 

== needs review ==
* https://review.openstack.org/50560
title: 'VMware: Fix bug for root disk size'
votes: +2:0, +1:2, -1:0, -2:0. +1 days in progress, revision: 1 is 1 
days old 
* https://review.openstack.org/50375
title: 'VMware: Network fallback in case specified one not found'
votes: +2:0, +1:3, -1:0, -2:0. +2 days in progress, revision: 2 is 0 
days old 
* https://review.openstack.org/50780
title: 'Make the vmware pause/unpause unit tests actually test so...'
votes: +2:0, +1:3, -1:0, -2:0. +1 days in progress, revision: 1 is 1 
days old 
* https://review.openstack.org/50053
title: 'VMware: fix bug with booting from volumes'
votes: +2:0, +1:3, -1:0, -2:0. +3 days in progress, revision: 2 is 3 
days old 
* https://review.openstack.org/48544
title: 'Fix issue when a image upload is interrupted it is not cl...'
votes: +2:0, +1:2, -1:0, -2:0. +14 days in progress, revision: 6 is 3 
days old 
* https://review.openstack.org/50563
title: 'VMWare: Disabling linked clone doesn't cache images'
votes: +2:0, +1:2, -1:0, -2:0. +1 days in progress, revision: 1 is 1 
days old 
* https://review.openstack.org/49695
title: 'VMware: remove deprecated configuration variable'
votes: +2:0, +1:3, -1:0, -2:0. +6 days in progress, revision: 2 is 5 
days old 
* https://review.openstack.org/49692
title: 'VMware: iscsi target discovery fails while attaching volumes'
votes: +2:0, +1:3, -1:0, -2:0. +6 days in progress, revision: 2 is 6 
days old 
* https://review.openstack.org/46231
title: 'VMware: fix VM resize bug'
votes: +2:0, +1:3, -1:0, -2:0. +28 days in progress, revision: 9 is 9 
days old 
* https://review.openstack.org/43270
title: 'vmware driver selection of vm_folder_ref.'
votes: +2:0, +1:1, -1:0, -2:0. +49 days in progress, revision: 1 is 49 
days old 
* https://review.openstack.org/46400
title: 'VMware: booting multiple instances fails if image is not ...'
votes: +2:0, +1:2, -1:0, -2:0. +28 days in progress, revision: 7 is 20 
days old 

== needs revision ==
* https://review.openstack.org/50841
title: 'VMware: fix bug for reporting instance UUID's'
votes: +2:0, +1:1, -1:1, -2:0. +1 days in progress, revision: 3 is 0 
days old 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Alessandro Pilotti




On Oct 10, 2013, at 23:50, Russell Bryant <rbry...@redhat.com> wrote:

On 10/10/2013 02:20 PM, Alessandro Pilotti wrote:
Hi all,

As the Havana release date is approaching fast, I'm sending this email
to sum up the situation for pending bugs and reviews related to the
Hyper-V integration in OpenStack.

In the past weeks we diligently marked bugs that are related to Havana
features with the "havana-rc-potential" tag, which at least for what
Nova is concerned, had absolutely no effect.
Our code is sitting in the review queue as usual and, not being tagged
for a release or prioritised, there's no guarantee that anybody will
take a look at the patches in time for the release. Needless to say,
this starts to feel like a Kafka novel. :-)
The goal for us is to make sure that our efforts are directed to the
main project tree, avoiding the need to focus on a separate fork with
more advanced features and updated code, even if this means slowing down
a lot our pace. Due to the limited review bandwidth available in Nova we
had to postpone to Icehouse blueprints which were already implemented
for Havana, which is fine, but we definitely cannot leave bug fixes
behind (even if they are just a small number, like in this case).

Some of those bugs are critical for Hyper-V support in Havana, while the
related fixes typically consist in small patches with very few line changes.

Does the rant make you feel better?  :-)


Hi Russell,

This was definitely not meant to sound like a rant, I apologise if it came 
across that way. :-)

With a more general view of nova review performance, our averages are
very good right now and are meeting our goals for review turnaround times:

http://russellbryant.net/openstack-stats/nova-openreviews.html

--> Total Open Reviews: 230
--> Waiting on Submitter: 105
--> Waiting on Reviewer: 125

--> Stats since the latest revision:
> Average wait time: 3 days, 12 hours, 14 minutes
> Median wait time: 1 days, 12 hours, 31 minutes
> Number waiting more than 7 days: 19

--> Stats since the last revision without -1 or -2 (ignoring jenkins):
> Average wait time: 5 days, 10 hours, 57 minutes
> Median wait time: 2 days, 13 hours, 27 minutes


Usually when this type of discussion comes up, the first answer that I hear is 
some defensive data about how well project X ranks compared to some metric or 
the whole OpenStack average.
I'm not questioning how much or how well you guys are working (I 
actually firmly believe that you DO work very well), I'm just discussing 
the way in which blueprints and bugs get prioritised.

Working on areas like Hyper-V inside of the OpenStack ecosystem is currently 
quite peculiar from a project management perspective due to the fragmentation 
of the commits among a number of larger projects.
Our bits are spread all over between Nova, Neutron, Cinder, Ceilometer, Windows 
Cloud-Init and, let's not forget, Crowbar and OpenVSwitch, although the last two 
are not strictly part of OpenStack. Except obviously for Windows Cloud-Init, in 
none of those projects does our contribution reach the critical mass required for 
the project to be somehow dependent on what we do, or for us to reach a "core" 
status that would grant sufficient autonomy. Furthermore, to complicate things more, with 
every release we are adding features to more projects.

On the other side, to get our code reviewed and merged we are always dependent 
on the good will and best effort of core reviewers who don't necessarily know 
or care about specific driver, plugin or agent internals. This leads to even 
longer review cycles, even considering that reviewers are clearly doing their 
best to understand the patches, and we couldn't be more thankful.

"Best effort" has also a very specific meaning: in Nova all the Havana Hyper-V 
blueprints were marked as "low priority" (which can be translated in: "the only 
way to get them merged is to beg for reviews or maybe commit them on day 1 of 
the release cycle and pray") while most of the Hyper-V bugs had no priority at 
all (which can be translated in "make some noise on the ML and IRC or nobody 
will care"). :-)

This reality unfortunately applies to most of the sub-projects (not only 
Hyper-V) and can IMHO be solved only by delegating more autonomy to the 
sub-project teams in their specific areas of competence across OpenStack as a 
whole. Hopefully we'll manage to find a solution during the design summit, as we 
are definitely not the only ones feeling this way, judging by various 
threads in this ML.

I personally consider that in a large project like this one there are multiple 
ways to work towards the achievement of the "greater good". Our call obviously 
consists in bringing OpenStack to the Microsoft world, which so far has worked 
very well; I'd just prefer to be able to dedicate more resources to adding 
features, fixing bugs and making users happy instead of to useless waits.

Also note that there are no hyper-v patches that are in the top 5 of any
o

Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-10 Thread Clint Byrum
Excerpts from Angus Salkeld's message of 2013-10-10 15:27:48 -0700:
> On 10/10/13 11:59 +0400, Stan Lagun wrote:
> >This raises a number of questions:
> >
> >1. What about conditional dependencies? Like config3 depends on config1 AND
> >config2 OR config3.
> 
> We have the AND, but not an OR. To depend on two resources you just
> have 2 references to the 2 resources.
> 

AND is concrete. OR is not. I don't actually think it is useful for what
Heat is intended to do. This is not packaging, this is deploying.
For deploying, Heat needs to know _what to do_, not "what is possible".

> >
> >2. How do I pass values between configs? For example config1 requires value
> >from user input and config2 needs an output value obtained from applying
> >config1
> 
> {Fn::GetAtt: [config2, the_name_of_the_attribute]}
> 

This is a little bit misleading. Heat does not have any good ways to
get a "value obtained from applying config1". The "data" attribute of
the WaitCondition is the only way I know, and it is really unwieldy,
as it can basically only dump a json string of all of the things each
signaler has fed back in.

That said, I wonder how many of the "value obtained from applying config1"
would be satisfied by the recently proposed "random string generation"
resource. Most of the time what people want to communicate back is just
auth details. If we push auth details in from Heat to both sides, that
alleviates all of my current use cases for this type of feature.

> >
> >3. How would you do error handling? For example config3 on server3 requires
> >config1 to be applied on server1 and config2 on server2. Suppose that there
> >was an error while applying config2 (and config1 succeeded). How do I
> >specify reaction for that? Maybe I need then to try to apply config4 to
> >server2 and continue or maybe just roll everything back
> 
> We currently have no "on_error" but it is not out of scope. The
> current action is either to rollback the stack or leave it in the
> failed state (depending on what you choose).
> 

Right, I can definitely see more actions being added as we identify the
commonly desired options.

> >
> >4. How these config dependencies play with nested stacks and resources like
> >LoadBalancer that create such stacks? How do I specify that myConfig
> >depends on HA proxy being configured if that config was declared in nested
> >stack that is generated by resource's Python code and is not declared in my
> >HOT template?
> 
> It is normally based on the actual data/variable that you are
> dependant on.
> loadbalancer: depends on autoscaling instance_list
> (actually in the loadbalancer config would be a "GetAtt: [scalegroup, 
> InstanceList]")
> 
> Then if you want to depend on that config you could depend on an
> attribute of that resource that changes on reconfigure.
> 
> config1:
>   type: OS::SoftwareConfig::Ssh
>   properties:
>     script: {GetAtt: [scalegroup, InstanceList]}
>     hosted_on: loadbalancer
>     ...
> 
> config2:
>   type: OS::SoftwareConfig::Ssh
>   properties:
>     script: {GetAtt: [config1, ConfigAppliedCount]}
>     hosted_on: somewhere_else
>     ...
> 
> I am sure we could come up with some better syntax for this. But
> the logic seems easily possible to me.
> 
> As far as nested stacks go: you just need an output to be useable
> externally - basically design your API.
> 
> >
> >5. The solution is not generic. For example I want to write HOT template
> >for my custom load-balancer and a scalable web-servers group. Load balancer
> >config depends on all configs of web-servers. But web-servers are created
> >dynamically (autoscaling). That means dependency graph needs to be also
> >dynamically modified. But if you explicitly list config names instead of
> >something like "depends on all configs of web-farm X" you have no way to
> >describe such rule. In other words we need generic dependency, not just
> >dependency on particular config
> 
> Why wouldn't just depending on the scaling group be enough? If it needs
> to be updated it will update all within the group before progressing
> to the dependants.
> 

In the example, loadbalancer doesn't have to depend on all of the nodes
being configured.  Why would it? It gets a signal when the list changes,
but it can be created as soon as the _group_ is created.

Anyway, no dependency is needed. Your LB has health checks, you feed
them in, and when the webservers are configured, they pass, and it sends
traffic there.

> >
> >6. What would you do on STACK UPDATE that modifies the dependency graph?
> >
> >The notation of configs and there
> 
> What we normally do is go through the resources and see what can be updated:
> - without replacement
> - needs deleting
> - is new
> - requires updating
> 
> Each resource type can define what will require replacing or not.
> 
> I think we can achieve what you want with some small improvements to
> the HOT format and with some new resource types - IMHO.

Agree with Angus here. I think we're closer to yo

Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-10 Thread Angus Salkeld

On 10/10/13 11:59 +0400, Stan Lagun wrote:

This raises a number of questions:

1. What about conditional dependencies? Like config3 depends on config1 AND
config2 OR config3.


We have the AND, but not an OR. To depend on two resources you just
have 2 references to the 2 resources.



2. How do I pass values between configs? For example config1 requires value
from user input and config2 needs an output value obtained from applying
config1


{Fn::GetAtt: [config2, the_name_of_the_attribute]}



3. How would you do error handling? For example config3 on server3 requires
config1 to be applied on server1 and config2 on server2. Suppose that there
was an error while applying config2 (and config1 succeeded). How do I
specify reaction for that? Maybe I need then to try to apply config4 to
server2 and continue or maybe just roll everything back


We currently have no "on_error" but it is not out of scope. The
current action is either to rollback the stack or leave it in the
failed state (depending on what you choose).



4. How these config dependencies play with nested stacks and resources like
LoadBalancer that create such stacks? How do I specify that myConfig
depends on HA proxy being configured if that config was declared in nested
stack that is generated by resource's Python code and is not declared in my
HOT template?


It is normally based on the actual data/variable that you are
dependant on.
loadbalancer: depends on autoscaling instance_list
(actually in the loadbalancer config would be a "GetAtt: [scalegroup, 
InstanceList]")

Then if you want to depend on that config you could depend on an
attribute of that resource that changes on reconfigure.

config1:
  type: OS::SoftwareConfig::Ssh
  properties:
script: {GetAtt: [scalegroup, InstanceList]}
hosted_on: loadbalancer
...

config2:
  type: OS::SoftwareConfig::Ssh
  properties:
script: {GetAtt: [config1, ConfigAppliedCount]}
hosted_on: somewhere_else
...

I am sure we could come up with some better syntax for this. But
the logic seems easily possible to me.

As far as nested stacks go: you just need an output to be useable
externally - basically design your API.




5. The solution is not generic. For example I want to write HOT template
for my custom load-balancer and a scalable web-servers group. Load balancer
config depends on all configs of web-servers. But web-servers are created
dynamically (autoscaling). That means dependency graph needs to be also
dynamically modified. But if you explicitly list config names instead of
something like "depends on all configs of web-farm X" you have no way to
describe such rule. In other words we need generic dependency, not just
dependency on particular config


Why wouldn't just depending on the scaling group be enough? If it needs
to be updated it will update all within the group before progressing
to the dependants.



6. What would you do on STACK UPDATE that modifies the dependency graph?

The notation of configs and there


What we normally do is go through the resources and see what can be updated:
- without replacement
- needs deleting
- is new
- requires updating

Each resource type can define what will require replacing or not.

I think we can achieve what you want with some small improvements to
the HOT format and with some new resource types - IMHO.

-Angus



On Thu, Oct 10, 2013 at 4:25 AM, Angus Salkeld  wrote:


On 09/10/13 19:31 +0100, Steven Hardy wrote:


On Wed, Oct 09, 2013 at 06:59:22PM +0200, Alex Rudenko wrote:


Hi everyone,

I've read this thread and I'd like to share some thoughts. In my opinion,
workflows (which run on VMs) can be integrated with heat templates as
follows:

   1. workflow definitions should be defined separately and processed by
   stand-alone workflow engines (chef, puppet etc).



I agree, and I think this is the direction we're headed with the
software-config blueprints - essentially we should end up with some new
Heat *resources* which encapsulate software configuration.



Exactly.

I think we need a software-configuration-aas sub-project that knows
how to take puppet/chef/salt/... config and deploy it. Then Heat just
has Resources for these (OS::SoftwareConfig::Puppet).
We should even move our WaitConditions and Metadata over to that
yet-to-be-made service so that Heat is totally clean of software config.

How would this solve ordering issues:

resources:
 config1:
   type: OS::SoftwareConfig::Puppet
   hosted_on: server1
   ...
 config2:
   type: OS::SoftwareConfig::Puppet
   hosted_on: server1
   depends_on: config3
   ...
 config3:
   type: OS::SoftwareConfig::Puppet
   hosted_on: server2
   depends_on: config1
   ...
 server1:
   type: OS::Nova::Server
   ...
 server2:
   type: OS::Nova::Server
   ...


Heat knows all about ordering:
It starts the resources:
server1, server2
config1
config3
config2
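
To illustrate, a quick sketch of that ordering rule (just the dependency
relation from the template above applied by hand, not Heat's actual
scheduler code):

    deps = {
        'server1': set(), 'server2': set(),
        'config1': {'server1'},               # hosted_on server1
        'config2': {'server1', 'config3'},    # hosted_on + depends_on
        'config3': {'server2', 'config1'},    # hosted_on + depends_on
    }

    def creation_order(deps):
        order, done = [], set()
        while len(done) < len(deps):
            ready = sorted(r for r, d in deps.items()
                           if r not in done and d <= done)
            if not ready:
                raise ValueError('circular dependency')
            order.append(ready)  # each batch can be started in parallel
            done.update(ready)
        return order

    print(creation_order(deps))
    # [['server1', 'server2'], ['config1'], ['config3'], ['config2']]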

There is the normal contract in the client:
we post the config to software-config-service
and we wait for the state == ACTIVE (when the config is app

[openstack-dev] [Neutron] Extraroute and router extensions

2013-10-10 Thread Artem Dmytrenko
Hi Rudra, Nachi.

Glad to see this discussion on the mailing list! The ExtraRoute routes are 
fairly
limited and it would be great to be able to store more complete routing
information in Neutron. I've submitted a blueprint proposing expanding 
ExtraRoute
parameters to include more information (extended-route-params). But it still
has a problem where routes are stored in a list and are not indexed. So an 
update
could be painful.
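
To make the pain concrete, a sketch with python-neutronclient (the
credentials, router ID and route values are placeholders): even a one-route
change has to re-send the full list, because the routes attribute is a
single unindexed JSON list:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    router_id = 'ROUTER_UUID'  # placeholder
    routes = neutron.show_router(router_id)['router']['routes']
    routes.append({'destination': '10.20.0.0/24',
                   'nexthop': '192.168.1.254'})

    # The whole list goes back on the wire, not just the new entry:
    neutron.update_router(router_id, {'router': {'routes': routes}})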

Could you share what attributes would you like to see in your RIB API?

Thanks!
Artem

P.S. I'm OpenStack newbie, looking forward to learning from and working with 
you!

>Hi Rudra
>
>ExtraRoute bp was designed for adding some "extra" routing for the router.
>The spec is very handy for simple and small use cases.
>However it won't fit large use cases, because it takes all route in a Json 
>List.
># It means we need to send full route for updating.
>
>As Salvatore suggests, we need to keep backward compatibility.
>so, IMO, we should create Routing table extension.
>
>I'm thinking about this in the context of L3VPN (MPLS) extension.
>My Idea is to have a RIB API in the Neutron.
>For vpnv4 routes it may have RT or RDs.
>
>Best
>Nachi ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-10 Thread Regnier, Greg J

The use cases defined (so far) cover these cases:
- Single service instance in a single service VM (agree this avoids complexity 
pointed out by Harshad)
- Multiple service instances on a single service VM (provides flexibility, 
extensibility)

Not explicitly covered is the case of a logical service across >1 VM.
This seems like a potentially common case, and can be added.
But implementation-wise, when a service wants to span multiple service VMs, it 
seems that is a policy and scheduling decision to be made by the service 
plugin. Question: Does the multiple VM use case put any new requirements on 
this framework (within its scope as a helper library for service plugins)?

Thx,
Greg


From: Bob Melander (bmelande) [mailto:bmela...@cisco.com]
Sent: Thursday, October 10, 2013 12:48 PM
To: OpenStack Development Mailing List
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

Possibly but not necessarily. Some VMs have a large footprint and 
multi-service capability, and physical devices with capabilities sufficient for 
tenant isolation are not that rare (especially if tenants can only indirectly 
"control" them through a cloud service API).

My point is that if we take into account, in the design, the case where 
multiple service instances are hosted by a single service VM we'll be well 
positioned to support other use cases. But that is not to say the 
implementation effort should target that aspect initially.

Thanks,
 Bob

On 10 Oct 2013, at 15:12, "Harshad Nakil" <hna...@contrailsystems.com> wrote:
Won't it be simpler to keep a service instance as one or more VMs, rather than 
one VM being many service instances?
Usually an appliance is collectively (all its functions) providing a service, 
like a firewall or load balancer. An appliance is packaged as a VM.
It will be easier to manage,
it will be easier for the provider to charge,
and it will be easier to control resource allocation.
Once an appliance is a physical device you have all of the above issues, and 
usually the multi-tenancy implementation is weak in most physical appliances.

Regards
-Harshad


On Oct 10, 2013, at 12:44 AM, "Bob Melander (bmelande)" <bmela...@cisco.com> wrote:
Harshad,

By service instance I referred to the logical entities that Neutron creates 
(e.g. Neutron's router). I see a service VM as a (virtual) host where one or 
several service instances can be placed.
The service VM (at least if managed through Nova) will belong to a tenant and 
the service instances are owned by tenants.

If the service VM tenant is different from service instance tenants (which is a 
simple way to "hide" the service VM from the tenants owning the service 
instances) then it is not clear to me how the existing access control in 
openstack will support pinning the service VM to a particular tenant owning a 
service instance.

Thanks,
Bob

From: Harshad Nakil <hna...@contrailsystems.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Wednesday 9 October 2013 18:56
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

Admin creating a service instance for a tenant could be a common use case. But 
ownership of the service can be controlled via the already existing access 
control mechanism in openstack. If the service instance belonged to a particular 
project then other tenants by definition should not be able to use this 
instance.
On Tue, Oct 8, 2013 at 11:34 PM, Bob Melander (bmelande) <bmela...@cisco.com> wrote:
For use case 2, ability to "pin" an admin/operator owned VM to a particular 
tenant can be useful.
I.e., the service VMs are owned by the operator but a particular service VM 
will only allow service instances from a single tenant.

Thanks,
Bob

From: Regnier, Greg J <greg.j.regn...@intel.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Tuesday 8 October 2013 23:48
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Neutron] Service VM discussion - Use Cases

Hi,

Re: blueprint:  
https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms

Before going into more detail on the mechanics, would like to nail down use 
cases.

Based on input and feedback, here is what I see so far.



Assumptions:



- a 'Service VM' hosts one or more 'Service Instances'

- each Service Instance has one or more Data Ports that plug into Neutron 
networks

- each Service Instance has a Service Management i/f for Service management 
(e.g. FW rules)

- each Service Instance has a VM Management i/f for VM management (e.g. health 
monitor)



Use case 1: Private Service VM

Owned by tenant

VM hosts one or more service instances

Ports of each service instance only

[openstack-dev] [nova][powervm] my notes from the meeting on powervm CI

2013-10-10 Thread Matt Riedemann
Based on the discussion with Russell and Dan Smith in the nova meeting 
today, here are some of my notes from the meeting that can continue the 
discussion.  These are all pretty rough at the moment so please bear with 
me, this is more to just get the ball rolling on ideas.

Notes on powervm CI:

1. What OS to run on?  Fedora 19, RHEL 6.4?
- Either of those is probably fine, we use RHEL 6.4 right now 
internally.
2. Deployment - RDO? SmokeStack? Devstack?
- SmokeStack is preferable since it packages rpms which is what 
we're using internally.
3. Backing database - mysql or DB2 10.5?
- Prefer DB2 since that's what we want to support in Icehouse and 
it's what we use internally, but there are differences in how long it 
takes to create a database with DB2 versus MySQL so when you multiply that 
times 7 databases (keystone, cinder, glance, nova, heat, neutron, 
ceilometer) it's going to add up unless we can figure out a better way to 
do it (single database with multiple schemas?).  Internally we use a 
pre-created image with the DB2 databases already created, we just run the 
migrate scripts against them so we don't have to wait for the create times 
every run (a rough sketch follows the list) - would that fly in community?
4. What is the max amount of time for us to report test results?  Dan 
didn't seem to think 48 hours would fly. :)
5. What are the minimum tests that need to run (excluding APIs that the 
powervm driver doesn't currently support)?
- smoke/gate/negative/whitebox/scenario/cli?  Right now we have 
1152 tempest tests running, those are only within api/scenario/cli and we 
don't run everything.
6. Network service? We're running with openvswitch 1.10 today so we 
probably want to continue with that if possible.
7. Cinder backend? We're running with the storwize driver but what do we do 
about the remote v7000?
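
For item 3, a rough sketch of what I mean by running only the migrate 
scripts against pre-created databases (the command spellings are indicative 
of the current *-manage tools; each service's config is assumed to already 
point at its pre-created DB2 database):

    import subprocess

    SYNC_COMMANDS = [
        ['keystone-manage', 'db_sync'],
        ['glance-manage', 'db_sync'],
        ['nova-manage', 'db', 'sync'],
        ['cinder-manage', 'db', 'sync'],
    ]

    for cmd in SYNC_COMMANDS:
        # Schema migration only; the expensive CREATE DATABASE step was
        # done once when the image was built.
        subprocess.check_call(cmd)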

Again, just getting some thoughts out there to help us figure out our 
goals for this, especially around 4 and 5.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-10 Thread Debojyoti Dutta
Alex, agree with your comments. I think we need to think of both 1.
and 2. as the eventual outcome and the destination. If we decide to
improve upon scheduling/policies at the heat level, that should be a
very nice and independent endeavor and we can all learn from it. I
don't think we can design this all upfront.

IMO the simple enough road is to do
1. simple resource group extension and show how it can be used to
specify groups of resources - no matter what you do on top of this,
you will need to specify groups of resources (e.g. API proposal from
Yathi/Garyk/Mike/Me)
2. have policies which can be simple scheduler hints for now (see the
sketch after the references below)
3. have some notion of intelligent scheduling (a simple example is a
solver scheduler)
4. have some notion of fast state management (like Boris' proposal)

Ref:
[api] 
https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit
[overall] 
https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit

debo

Mike: I agree with most of your ideas of extensions to
heat/policies/ordering/dependencies etc but I wish we could start with
a simple API from the nova side that will grow into a cross-services
thing while you could start from the Heat side and then eventually
come to the same midpoint. I somehow feel we are almost there wrt the
1st cut of the API \cite{api}.

On Wed, Oct 9, 2013 at 11:11 PM, Alex Glikson  wrote:
> Thanks for the pointer -- was not able to attend that meeting,
> unfortunately. Couple of observations, based on what I've heard till now.
> 1. I think it is important not to restrict the discussion to Nova resources.
> So, I like the general direction in [1] to target a generic mechanism and
> API. However, once we start following that path, it becomes more challenging
> to figure out which component should manage those cross-resource constructs
> (Heat sounds like a reasonable candidate -- which seems consistent with the
> proposal at [2]), and what should be the API between it and the services
> deciding on the actual placement of individual resources (nova, cinder,
> neutron).
> 2. Moreover, we should take into account that we may need to take into
> consideration multiple sources of topology -- physical (maybe provided by
> Ironic, affecting availability -- hosts, racks, etc), virtual-compute
> (provided by Nova, affecting resource isolation -- mainly hosts),
> virtual-network (affecting connectivity and bandwidth/latency.. think of SDN
> policies enforcing routing and QoS almost orthogonally to physical
> topology), virtual-storage (affecting VM-to-volume connectivity and
> bandwidth/latency.. think of FC network implying topology different than the
> physical one and the IP network one).
>
> I wonder whether we will be able to come up with a simple-enough initial
> approach & implementation, which would not limit the ability to extend &
> customize it going forward to cover all the above.
>
> Regards,
> Alex
>
> [1]
> https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit
> [2] https://wiki.openstack.org/wiki/Heat/PolicyExtension
>
> 
> Alex Glikson
> Manager, Cloud Operating System Technologies, IBM Haifa Research Lab
> http://w3.haifa.ibm.com/dept/stt/cloud_sys.html |
> http://www.research.ibm.com/haifa/dept/stt/cloud_sys.shtml
> Email: glik...@il.ibm.com | Phone: +972-4-8281085 | Mobile: +972-54-647
> | Fax: +972-4-8296112
>
>
>
>
> From: Mike Spreitzer
> To: OpenStack Development Mailing List
> Date: 10/10/2013 07:59 AM
> Subject: Re: [openstack-dev] [scheduler] APIs for Smart Resource
> Placement - Updated Instance Group Model and API extension model - WIP Draft
> 
>
>
>
> Yes, there is more than the northbound API to discuss.  Gary started us
> there in the Scheduler chat on Oct 1, when he broke the issues down like
> this:
>
> 11:12:22 AM garyk: 1. a user facing API
> 11:12:41 AM garyk: 2. understanding which resources need to be tracked
> 11:12:48 AM garyk: 3. backend implementation
>
> The full transcript is at
> http://eavesdrop.openstack.org/meetings/scheduling/2013/scheduling.2013-10-01-15.08.log.html
>
> Alex Glikson  wrote on 10/09/2013 02:14:03 AM:
>>
>> Good summary. I would also add that in A1 the schedulers (e.g., in
>> Nova and Cinder) could talk to each other to coordinate. Besides
>> defining the policy, and the user-facing APIs, I think we should
>> also outline those cross-component APIs (need to think whether they
>> have to be user-visible, or can be admin).
>>
>> Regards,
>> Alex ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.opens

Re: [openstack-dev] [Ceilometer] Notifications from non-local exchanges

2013-10-10 Thread Neal, Phil
Greetings all,
I'm looking at how to expand the ability of our CM instance to consume 
notifications and have a quick question about the configuration and flow...

For the notifications central agent, we rely on the services (i.e. glance, 
cinder) to drop messages on the same messaging host as used by Ceilometer. 
From there the listener picks it up and cycles through the plugin logic to 
convert it to a sample. It's apparent that we can't pass an alternate hostname 
via the control_exchange values, so is there another method for harvesting 
messages off of other instances (e.g. another compute node)?


- Phil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] tempest blueprint cleanup for icehouse - please update your blueprint if you are working on it

2013-10-10 Thread Sean Dague
We're trying to clean up the tempest blueprints prior to the icehouse 
summit to ensure that what's in https://blueprints.launchpad.net/tempest 
has some reflection on reality.


There are a large number in Delivery: Unknown states (most in the New 
status). If these are really things people are working on, please update 
the Delivery status to something beyond Unknown (Not Started is fair, it 
at least demonstrates that someone is keeping tabs on the blueprint).


Anything left in an Unknown state on Monday has the risk of being marked 
invalid. We'd like to get down to a small number of actual in flight 
blueprints before we start adding more for icehouse.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] CD cloud MVP1 completed

2013-10-10 Thread Robert Collins
That is, we're now deploying a trunk KVM based OpenStack using heat +
nova baremetal on a continuous basis into the test rack we have, it's
been running stably for 12 hours or so. Yay!

I think we should do a retrospective
(http://finding-marbles.com/retr-o-mat/what-is-a-retrospective/) at
the next TripleO meeting
https://wiki.openstack.org/wiki/TripleO/TripleOCloud/MVP1Retrospective
is where we'll capture the info from it.

We now have a choice - we can spend a bit of time on consolidation -
polish such as better instrumentation (e.g. logstash, deploy timing
data etc), or we can get stuck into MVP2: stateful upgrades where we
are updating a cloud rather than completely rebuilding. Also MVP2 may
be overly aggressive (in that there may be dragons hidden there
that lead us to get stuck in a quagmire).

I think that we should get stateful upgrades going, and then
consolidate: a cloud that resets every 50m is a bit too interrupted to
expect users to use it, so we really have three key things to do to
get folk using the cloud [at all]:
 - externally accessible networking
 - preserve the users and vm's etc in the cloud during a deploy
 - live migrate or otherwise avoid breaking active VMs that are
running during a deploy

And after that we can get into serious incremental value delivery.

So, my proposal is that we do external networking (I think I can knock
that off today); then we reframe MVP2 as preserving users and vm's
etc, and add another overcloud MVP for keeping VM's running.

Thoughts?

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Russell Bryant
On 10/10/2013 02:20 PM, Alessandro Pilotti wrote:
> Hi all,
> 
> As the Havana release date is approaching fast, I'm sending this email
> to sum up the situation for pending bugs and reviews related to the
> Hyper-V integration in OpenStack.
> 
> In the past weeks we diligently marked bugs that are related to Havana
> features with the "havana-rc-potential" tag, which at least for what
> Nova is concerned, had absolutely no effect.
> Our code is sitting in the review queue as usual and, not being tagged
> for a release or prioritised, there's no guarantee that anybody will
> take a look at the patches in time for the release. Needless to say,
> this starts to feel like a Kafka novel. :-)
> The goal for us is to make sure that our efforts are directed to the
> main project tree, avoiding the need to focus on a separate fork with
> more advanced features and updated code, even if this means slowing down
> a lot our pace. Due to the limited review bandwidth available in Nova we
> had to postpone to Icehouse blueprints which were already implemented
> for Havana, which is fine, but we definitely cannot leave bug fixes
> behind (even if they are just a small number, like in this case).
> 
> Some of those bugs are critical for Hyper-V support in Havana, while the
> related fixes typically consist in small patches with very few line changes.

Does the rant make you feel better?  :-)

With a more general view of nova review performance, our averages are
very good right now and are meeting our goals for review turnaround times:

http://russellbryant.net/openstack-stats/nova-openreviews.html

--> Total Open Reviews: 230
--> Waiting on Submitter: 105
--> Waiting on Reviewer: 125

--> Stats since the latest revision:
> Average wait time: 3 days, 12 hours, 14 minutes
> Median wait time: 1 days, 12 hours, 31 minutes
> Number waiting more than 7 days: 19

--> Stats since the last revision without -1 or -2 (ignoring jenkins):
> Average wait time: 5 days, 10 hours, 57 minutes
> Median wait time: 2 days, 13 hours, 27 minutes

Also note that there are no hyper-v patches that are in the top 5 of any
of the lists of patches waiting the longest.  So, you are certainly not
being singled out here.

Please understand that I only want to help here.  Perhaps a good way for
you to get more review attention is to get more karma in the dev community
by helping review other patches.  It looks like you don't really review
anything outside of your own stuff, or patches that touch hyper-v.  In
the absence of significant interest in hyper-v from others, the only way
to get more attention is by increasing your karma.

https://review.openstack.org/#/q/reviewer:3185+project:openstack/nova,n,z

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2013-10-10 Thread Anita Kuno

Confirmed.

On 10/10/2013 10:14 PM, Boris Pavlovic wrote:


Dear Stackers, I would like to put my candidacy for a position on the 
OpenStack Technical Committee. I have been an active OpenStack 
contributor for over a year, my work mostly concentrated around 
improving existing OpenStack code (unifying common parts of OpenStack 
projects and putting them to oslo, improving performance and test 
coverage, fixing bugs). In the previous cycles I have been focusing on 
improving OpenStack Database code, improving its performance, fixing a 
lot of nasty bugs, making it more maintainable, durable and backward 
compatible. I led the community effort across all core projects to 
centralize database


code into oslo-incubator (others will be switched in IceHouse). In 
addition to being an active contributor, I spend a lot of time helping 
newcomers to OpenStack to become better Open Source citizens. During 
Havana I coordinated the activies of 16 of my team members across 
several projects (nova, oslo, cinder and glance) which helped Mirantis 
to make a meaningful contribution to OpenStack: 
http://stackalytics.com/?release=havana&metric=commits&project_type=core 
Currently I am focusing on the goal of consistently improving 
OpenStack performance at scale, arguably one of the biggest challenges 
across all of OpenStack. I believe that the problem with scale and 
performance could be solved easily by the community. The main problem 
is that contributors don't have an easy way to see how their commits 
affect performance at scale. This is why two months ago (with the help 
of four of my colleagues) I started work on project Rally (The 
OpenStack Benchmark System). Rally allows everyone to see, close to 
real life, the performance of the OpenStack cloud at scale. This 
system will be closely integrated with the existing OpenStack CI, 
making the process of tuning OpenStack scalability and performance 
simple and transparent for everybody.


Next Monday, in collaboration with our colleagues from Bluehost and 
IBM, we are going to release the first version of Rally.


I believe that at this point in time, the focus of the community ought 
to shift to enabling our customers to adopt OpenStack in real 
production use cases. This means that such issues as performance, 
quality, reliability, maintainability and scalability should get a 
higher priority, and as a member of the OpenStack TC, I would like to 
become a strong advocate for making OpenStack production ready. Links: 
1. Rally Wiki https://wiki.openstack.org/wiki/Rally


2. Rally Launchpad https://launchpad.net/rally

3. Example of Rally results: 
https://docs.google.com/a/mirantis.com/file/d/0B7XIUFtx6EISTEpPb0tRSTFIaFk/edit?usp=drive_web 
4. My Launchpad https://launchpad.net/~boris-42 
5. My Contribution 
https://review.openstack.org/#/q/owner:boris-42+status:merged,n,z 
6. My Contribution statistics: 
http://stackalytics.com/?release=all&metric=commits&project_type=All&user_id=boris-42 



Best regards, Boris Pavlovic --- Mirantis Inc.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC candidacy

2013-10-10 Thread Boris Pavlovic
Dear Stackers, I would like to put my candidacy for a position on the
OpenStack Technical Committee. I have been an active OpenStack contributor
for over a year, my work mostly concentrated around improving existing
OpenStack code (unifying common parts of OpenStack projects and putting
them to oslo, improving performance and test coverage, fixing bugs). In the
previous cycles I have been focusing on improving OpenStack Database code,
improving its performance, fixing a lot of nasty bugs, making it more
maintainable, durable and backward compatible. I led the community effort
across all core projects to centralize database

code into oslo-incubator (others will be switched in IceHouse). In addition
to being an active contributor, I spend a lot of time helping newcomers to
OpenStack to become better Open Source citizens. During Havana I
coordinated the activities of 16 of my team members across several projects
(nova, oslo, cinder and glance) which helped Mirantis to make a meaningful
contribution to OpenStack:
http://stackalytics.com/?release=havana&metric=commits&project_type=core
Currently I am focusing on the goal of consistently improving OpenStack
performance at scale, arguably one of the biggest challenges across all of
OpenStack. I believe that the problem with scale and performance could be
solved easily by the community. The main problem is that contributors don’t
have an easy way to see how their commits affect performance at scale. This
is why two months ago (with the help of four of my colleagues) I started
work on project Rally (The OpenStack Benchmark System). Rally allows
everyone to see, close to real life, the performance of the OpenStack cloud
at scale. This system will be closely integrated with the existing
OpenStack CI, making the process of tuning OpenStack scalability and
performance simple and transparent for everybody.

Next Monday, in collaboration with our colleagues from Bluehost and IBM, we
are going to release the first version of Rally.

I believe that at this point in time, the focus of the community ought to
shift to enabling our customers to adopt OpenStack in real production use
cases. This means that such issues as performance, quality, reliability,
maintainability and scalability should get a higher priority, and as a
member of the OpenStack TC, I would like to become a strong advocate for
making OpenStack production ready. Links: 1. Rally Wiki
https://wiki.openstack.org/wiki/Rally

2. Rally Launchpad https://launchpad.net/rally

3. Example of Rally results:
https://docs.google.com/a/mirantis.com/file/d/0B7XIUFtx6EISTEpPb0tRSTFIaFk/edit?usp=drive_web
4. My Launchpad https://launchpad.net/~boris-42
5. My Contribution https://review.openstack.org/#/q/owner:boris-42+status:merged,n,z
6. My Contribution statistics:
http://stackalytics.com/?release=all&metric=commits&project_type=All&user_id=boris-42

Best regards, Boris Pavlovic --- Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-10 Thread Bob Melander (bmelande)
Possibly but not necessarily. Some VMs have a large footprint and 
multi-service capability, and physical devices with capabilities sufficient for 
tenant isolation are not that rare (especially if tenants can only indirectly 
"control" them through a cloud service API).

My point is that if we take into account, in the design, the case where 
multiple service instances are hosted by a single service VM we'll be well 
positioned to support other use cases. But that is not to say the 
implementation effort should target that aspect initially.

Thanks,
 Bob

On 10 Oct 2013, at 15:12, "Harshad Nakil" <hna...@contrailsystems.com> wrote:

Won't it be simpler to keep a service instance as one or more VMs, rather than 
one VM being many service instances?
Usually an appliance is collectively (all its functions) providing a service, 
like a firewall or load balancer. An appliance is packaged as a VM.
It will be easier to manage,
it will be easier for the provider to charge,
and it will be easier to control resource allocation.
Once an appliance is a physical device you have all of the above issues, and 
usually the multi-tenancy implementation is weak in most physical appliances.

Regards
-Harshad


On Oct 10, 2013, at 12:44 AM, "Bob Melander (bmelande)" <bmela...@cisco.com> wrote:

Harshad,

By service instance I referred to the logical entities that Neutron creates 
(e.g. Neutron's router). I see a service VM as a (virtual) host where one or 
several service instances can be placed.
The service VM (at least if managed through Nova) will belong to a tenant and 
the service instances are owned by tenants.

If the service VM tenant is different from service instance tenants (which is a 
simple way to "hide" the service VM from the tenants owning the service 
instances) then it is not clear to me how the existing access control in 
openstack will support pinning the service VM to a particular tenant owning a 
service instance.

Thanks,
Bob

From: Harshad Nakil <hna...@contrailsystems.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Wednesday 9 October 2013 18:56
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

Admin creating a service instance for a tenant could be a common use case. But 
ownership of the service can be controlled via the already existing access 
control mechanism in openstack. If the service instance belonged to a particular 
project then other tenants by definition should not be able to use this 
instance.

On Tue, Oct 8, 2013 at 11:34 PM, Bob Melander (bmelande) <bmela...@cisco.com> wrote:
For use case 2, ability to "pin" an admin/operator owned VM to a particular 
tenant can be useful.
I.e., the service VMs are owned by the operator but a particular service VM 
will only allow service instances from a single tenant.

Thanks,
Bob

From: Regnier, Greg J <greg.j.regn...@intel.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Tuesday 8 October 2013 23:48
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Neutron] Service VM discussion - Use Cases

Hi,

Re: blueprint:  
https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms

Before going into more detail on the mechanics, would like to nail down use 
cases.

Based on input and feedback, here is what I see so far.



Assumptions:



- a 'Service VM' hosts one or more 'Service Instances'

- each Service Instance has one or more Data Ports that plug into Neutron 
networks

- each Service Instance has a Service Management i/f for Service management 
(e.g. FW rules)

- each Service Instance has a VM Management i/f for VM management (e.g. health 
monitor)



Use case 1: Private Service VM

Owned by tenant

VM hosts one or more service instances

Ports of each service instance only plug into network(s) owned by tenant



Use case 2: Shared Service VM

Owned by admin/operator

VM hosts multiple service instances

The ports of each service instance plug into one tenants network(s)

Service instance provides isolation from other service instances within VM



Use case 3: Multi-Service VM

Either Private or Shared Service VM

Support multiple service types (e.g. FW, LB, …)


-  Greg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova] Looking for clarification on the diagnostics API

2013-10-10 Thread Matt Riedemann
Looks like this has been brought up a couple of times:

https://lists.launchpad.net/openstack/msg09138.html 

https://lists.launchpad.net/openstack/msg08555.html 

But they seem to kind of end up in the same place I already am - it seems 
to be an open-ended API that is hypervisor-specific.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Matt Riedemann/Rochester/IBM
To: "OpenStack Development Mailing List" 
, 
Date:   10/10/2013 02:12 PM
Subject:[nova] Looking for clarification on the diagnostics API


Tempest recently got some new tests for the nova diagnostics API [1] which 
failed when I was running against the powervm driver since it doesn't 
implement that API.  I started looking at other drivers that did and found 
that libvirt, vmware and xenapi at least had code for the get_diagnostics 
method.  I found that the vmware driver was re-using its get_info method 
for get_diagnostics which led to bug 1237622 [2] but overall caused some 
confusion about the difference between the compute driver's get_info and 
get_diagnostics methods.  It looks like get_info is mainly just used to get 
the power_state of the instance.

First, the get_info method has a nice docstring for what it needs returned 
[3] but the get_diagnostics method doesn't [4].  From looking at the API 
docs [5], the diagnostics API basically gives an example of values to get 
back which is completely based on what the libvirt driver returns. Looking 
at the xenapi driver code, it looks like it does things a bit differently 
than the libvirt driver (maybe doesn't return the exact same keys, but it 
returns information based on what Xen provides).

I'm thinking about implementing the diagnostics API for the powervm driver 
but I'd like to try and get some help on defining just what should be 
returned from that call.  There are some IVM commands available to the 
powervm driver for getting hardware resource information about an LPAR so 
I think I could implement this pretty easily.

I think it basically comes down to providing information about the 
processor, memory, storage and network interfaces for the instance but if 
anyone has more background information on that API I'd like to hear it.

[1] 
https://github.com/openstack/tempest/commit/da0708587432e47f85241201968e6402190f0c5d
 

[2] https://bugs.launchpad.net/nova/+bug/1237622 
[3] 
https://github.com/openstack/nova/blob/2013.2.rc1/nova/virt/driver.py#L144 

[4] 
https://github.com/openstack/nova/blob/2013.2.rc1/nova/virt/driver.py#L299 

[5] http://paste.openstack.org/show/48236/ 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Looking for clarification on the diagnostics API

2013-10-10 Thread Matt Riedemann
Tempest recently got some new tests for the nova diagnostics API [1] which 
failed when I was running against the powervm driver since it doesn't 
implement that API.  I started looking at other drivers that did and found 
that libvirt, vmware and xenapi at least had code for the get_diagnostics 
method.  I found that the vmware driver was re-using its get_info method 
for get_diagnostics which led to bug 1237622 [2] but overall caused some 
confusion about the difference between the compute driver's get_info and 
get_diagnostics methods.  It looks like get_info is mainly just used to get 
the power_state of the instance.

First, the get_info method has a nice docstring for what it needs returned 
[3] but the get_diagnostics method doesn't [4].  From looking at the API 
docs [5], the diagnostics API basically gives an example of values to get 
back which is completely based on what the libvirt driver returns. Looking 
at the xenapi driver code, it looks like it does things a bit differently 
than the libvirt driver (maybe doesn't return the exact same keys, but it 
returns information based on what Xen provides).

I'm thinking about implementing the diagnostics API for the powervm driver 
but I'd like to try and get some help on defining just what should be 
returned from that call.  There are some IVM commands available to the 
powervm driver for getting hardware resource information about an LPAR so 
I think I could implement this pretty easily.

I think it basically comes down to providing information about the 
processor, memory, storage and network interfaces for the instance but if 
anyone has more background information on that API I'd like to hear it.

[1] 
https://github.com/openstack/tempest/commit/da0708587432e47f85241201968e6402190f0c5d
 

[2] https://bugs.launchpad.net/nova/+bug/1237622 
[3] 
https://github.com/openstack/nova/blob/2013.2.rc1/nova/virt/driver.py#L144 

[4] 
https://github.com/openstack/nova/blob/2013.2.rc1/nova/virt/driver.py#L299 

[5] http://paste.openstack.org/show/48236/ 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
<>___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Icehouse design summit proposal deadline

2013-10-10 Thread Russell Bryant
Greetings,

We already have more proposals for the Nova design summit track than
time slots.  Please get your proposals in as soon as possible, and
ideally no later than 1 week from today - Thursday, October 17.  At that
point we will be focusing on putting a schedule together in order to
have the schedule completed at least a week in advance of the summit.

Thanks!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Candidacy

2013-10-10 Thread Anita Kuno

Confirmed.

On 10/10/2013 08:18 PM, Mark McClain wrote:
> I'd like to announce my candidacy to continue serving on the Technical
> Committee.
> [...]



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Alessandro Pilotti
Hi all,

As the Havana release date is approaching fast, I'm sending this email to sum 
up the situation for pending bugs and reviews related to the Hyper-V 
integration in OpenStack.

In the past weeks we diligently marked bugs that are related to Havana features 
with the "havana-rc-potential" tag, which, at least as far as Nova is concerned, 
had absolutely no effect.
Our code is sitting in the review queue as usual and, not being tagged for a 
release or prioritised, there's no guarantee that anybody will take a look at 
the patches in time for the release. Needless to say, this starts to feel like 
a Kafka novel. :-)
The goal for us is to make sure that our efforts are directed to the main 
project tree, avoiding the need to focus on a separate fork with more advanced 
features and updated code, even if this means slowing down our pace quite a 
lot. Due to the limited review bandwidth available in Nova we had to postpone 
to Icehouse blueprints that were already implemented for Havana, which is fine, 
but we definitely cannot leave bug fixes behind (even if they are just a small 
number, like in this case).

Some of those bugs are critical for Hyper-V support in Havana, while the 
related fixes typically consist of small patches with very few line changes.

Here's the detailed status:


--Nova--

The following bugs have already been fixed and are waiting for review:


VHD format check is not properly performed for fixed disks in the Hyper-V driver

https://bugs.launchpad.net/nova/+bug/1233853
https://review.openstack.org/#/c/49269/


Deploy instances failed on Hyper-V with Chinese locale

https://bugs.launchpad.net/nova/+bug/1229671
https://review.openstack.org/#/c/48267/


Nova Hyper-V driver volumeutils iscsicli ListTargets contains a typo

https://bugs.launchpad.net/nova/+bug/1237432
https://review.openstack.org/#/c/50671/


Hyper-V driver needs tests for WMI WQL instructions

https://bugs.launchpad.net/nova/+bug/1220256
https://review.openstack.org/#/c/48940/


target_iqn is referenced before assignment after exceptions in 
hyperv/volumeop.py attch_volume()

https://bugs.launchpad.net/nova/+bug/1233837
https://review.openstack.org/#/c/49259/


--Neutron--

Waiting for review

ml2 plugin may let hyperv agents ports to build status

https://bugs.launchpad.net/neutron/+bug/1224991
https://review.openstack.org/#/c/48306/



The following two bugs still require some work, but will be done in the 
next few days.

Hyper-V fails to spawn snapshots

https://bugs.launchpad.net/nova/+bug/1234759
https://review.openstack.org/#/c/50439/

VHDX snapshot from Hyper-V driver is bigger than original instance

https://bugs.launchpad.net/nova/+bug/1231911
https://review.openstack.org/#/c/48645/


As usual, thanks for your help!

Alessandro



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC Candidacy

2013-10-10 Thread Mark McClain
All-

I'd like to announce my candidacy to continue serving on the Technical 
Committee.

About Me
-
I am currently a member of the Technical Committee and the Networking (Neutron) 
PTL.  I began working as a developer on OpenStack during the Essex cycle. In 
addition to my work on Neutron, I have contributed code and reviews to most of 
the other integrated projects. I believe that cross project contributions are 
essential to foster collaboration and sharing of ideas within our community.  
I'm a member of the Stable Release Team and the Requirements Team.  My work on 
both teams provides a cross project view of our community with an eye towards 
stability and consistency.  Outside of development, I travel worldwide to 
conferences to advocate and educate on OpenStack and interact with users and 
deployers.  I've been professionally developing Python software for 13 years.


Platform
---
OpenStack is one community comprised of many parts and we must function as one 
unit to continue our growth.  As a TC member, I will continue to place the 
interests of the larger community first when making decisions.  There are 
several key areas I'd like to see the TC focus on:

Unified Experience
For OpenStack to be successful we must strive to provide a unified 
experience for both users and deployers.  Users want tools that are well 
documented and easy to use.  The documentation and tools must be portable 
across deployments (both public and private) so that users do not need to 
concern themselves with the implementation details.  At the same time, 
deployers should be able to upgrade between releases and maintain a known level 
of compatibility.  Our community has worked hard to improve this experience and 
this should remain a focus going forward.

Development
The Technical Committee should serve as a high level forum to 
facilitate defining cross project technical and procedural requirements.  While 
many of our programs share commonalities, there are still differences in 
policies and technical decisions.  The TC should work to build consensus and 
reduce the differences between the projects so the community can function as 
one.

Scope
The issue of scope was a recurring theme during my recent term on the 
TC.  As the OpenStack ecosystem grows beyond Infrastructure as a Service, the 
committee needs to more clearly define the criteria used to determine the kind 
of projects and programs that fit within the scope of integrated releases and 
how they move through the progression of incubation to graduation.  In addition 
to defining the criteria, the Technical Committee should work to develop 
policies and procedures to provide some guidance to projects which are outside 
of the scope of an integrated release, but valuable to our community.


We have built a very special community through the contributions of many.  
These contributions have powered our phenomenal growth and I'm excited about 
our future!

Thanks,
mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Blueprint for IPAM and Policy extensions in Neutron

2013-10-10 Thread Nachi Ueno
Hi Rudra

2013/10/8 Rudra Rugge :
> Hi Nachi,
>
> Please see inline:
>
> On Oct 8, 2013, at 10:42 AM, Nachi Ueno  wrote:
>
>> Hi Rudra
>>
>> Thanks!
>>
>> Some questions and comments
>>
>> -  name and fq_name
>> How do we use name and fq_name?
>> IMO, we should avoid using shortened names.
>>
> [Rudra] 'name' meets all the current Neutron models like (network, subnet, 
> etc).
> 'fq_name' is a free string added for plugins to use in their own context. 
> fq_name
> hierarchy could be different in each plugin.
> Example:
> name: test_policy
> fq_name: [default-domain:test-project:test_policy]
> while a different plugin may use it as
> fq_name: [test-project:test_policy]

So this is a kind of ID?

>> - "src_ports": ["80-80"],
>> For API consistency, we should use similar way of the security groups
>> http://docs.openstack.org/api/openstack-network/2.0/content/POST_createSecGroupRule__security-group-rules_security-groups-ext.html
>>
> [Rudra] This is a list of start and end ports, e.g. if the source port ranges
> to be allowed are [100-200] and [1000-1200]. Security groups support only a
> single range.

Subnet.allocation_pools has ranges, so maybe we should use this style.
http://docs.openstack.org/api/openstack-network/2.0/content/Concepts-d1e369.html#Network

[{"start":100, "end":200}, {"start":1000,"end":1200}

>
>> - PolicyRuleCreate
>> Could you add more example if the action contains services.
>>
>> "action_list": ["simple_action-pass"],
> [Rudra] Will update the spec with more examples.
>
>>
This spec is also related to the service framework discussion,
so I want to know the details and how it differs from the service framework.
>>
> [Rudra] Could you please point me to the service framework spec/discussion. 
> Thanks.

This is BP.
https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering
We will have some IRC meeting after next neutron core meeting also.

Best
Nachi

>
>> It would also be helpful if we could have a full list of examples.
> [Rudra] Will add more examples.
>
> Cheers,
> Rudra
>
>>
>> Best
>> Nachi
>>
>>
>>
>>
>>
>> 2013/10/7 Rudra Rugge :
>>> Hi Nachi,
>>>
>>> I have split the spec for policy and VPN wiki served as a good reference 
>>> point. Please review and provide comments:
>>> https://wiki.openstack.org/wiki/Blueprint-policy-extensions-for-neutron
>>>
>>> Thanks,
>>> Rudra
>>>
>>> On Oct 4, 2013, at 4:56 PM, Nachi Ueno  wrote:
>>>
 2013/10/4 Rudra Rugge :
> Hi Nachi,
>
> Inline response
>
> On 10/4/13 12:54 PM, "Nachi Ueno"  wrote:
>
>> Hi Rudra
>>
>> inline responded
>>
>> 2013/10/4 Rudra Rugge :
>>> Hi Nachi,
>>>
>>> Thanks for reviewing the BP. Please see inline:
>>>
>>> On 10/4/13 11:30 AM, "Nachi Ueno"  wrote:
>>>
 Hi Rudra

 Two comment from me

 (1) The IPAM and Network policy extensions look like independent extensions,
 so the IPAM part and the Network policy part should be divided into two blueprints.
>>>
>>> [Rudra] I agree that these need to be split into two blueprints. I will
>>> create another BP.
>>
>> Thanks
>>

 (2) The term IPAM is too general a word. IMO we should use a more specific
 word.
 How about SubnetGroup?
>>>
>>> [Rudra] IPAM holds more information.
>>>   - All DHCP attributes for this IPAM subnet
>>>   - DNS server configuration
>>>   - In future address allocation schemes
>>
>> Actually, Neutron Subnet has dhcp, DNS, and ip allocation schemes.
>> If I understand your proposal correctly, IPAM is a group of subnets
>> which share common parameters.
>> Also, you can propose to extend existing subnet.
>
> [Rudra] Neutron subnet requires a network as I understand. IPAM info
> should not have such a dependency. Similar to the Amazon VPC model where all
> IPAM information can be stored even if a network is not created.
> Association to networks can happen at a later time.

 OK I got it. However IPAM is still too general a word.
 Don't you have any alternatives?

 Best
 Nachi

> Rudra
>
>
>
>>

 (3) Network Policy Resource
 I would like to know more details of this api

 I would like to know resource definition and
 sample API request and response json.

 (This is one example
 https://wiki.openstack.org/wiki/Quantum/VPNaaS )

 Especially, I'm interested in src-addresses, dst-addresses, action-list
 properties.
 Also, how can we express any port in your API?
>>>
>>> [Rudra] Will add the details of the resources and APIs after separating
>>> the blueprint.
>>
>> Thanks!
>>
>> Best
>> Nachi
>>
>>> Regards,
>>> Rudra
>>>

 Best
 Nachi


 2013/10/4 Rudra Rugge :
> Hi All,
>
>>>

Re: [openstack-dev] [neutron] Extraroute and router extensions

2013-10-10 Thread Nachi Ueno
Hi Rudra

The ExtraRoute bp was designed for adding some "extra" routing to the router.
The spec is very handy for simple and small use cases.
However it won't fit large use cases, because it takes all routes in a JSON list.
# It means we need to send the full route list for every update.

As Salvatore suggests, we need to keep backward compatibility.
so, IMO, we should create Routing table extension.

I'm thinking about this in the context of an L3VPN (MPLS) extension.
My idea is to have a RIB API in Neutron.
For vpnv4 routes it may have RTs or RDs.
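
As a purely hypothetical sketch (every resource and field name below is
invented for illustration, not a proposal of concrete syntax), a route entry
in such a RIB API might look like:

    {"route": {"routetable_id": "<uuid>",
               "destination": "10.0.1.0/24",
               "nexthop": "10.0.0.1",
               "route_targets": ["64512:100"],
               "route_distinguisher": "64512:1"}}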

Best
Nachi

2013/10/9 Ronak Shah :
> Hi Rudra,
> Please see inline:
>
> Thanks,
> Ronak
>
>>
>> --
>>
>> Message: 3
>> Date: Wed, 9 Oct 2013 18:25:15 +
>> From: Rudra Rugge 
>> To: OpenStack Development Mailing List
>> 
>> Subject: [openstack-dev] [neutron] Extraroute and router extensions
>> Message-ID: <172af81c-43c5-42e1-b389-00af74003...@juniper.net>
>> Content-Type: text/plain; charset="us-ascii"
>>
>>
>> Updated the subject [neutron]
>>
>> Hi All,
>>
>> Is the extra route extension always tied to the router extension or
>> can it live in a separate route-table container.
>
> [RONAK] - Yes, extra-route is actually a router's route, as per the following
> definition from extraroute_db.py:
> class RouterRoute(model_base.BASEV2, models_v2.Route):
>     router_id = sa.Column(sa.String(36),
>                           sa.ForeignKey('routers.id',
>                                         ondelete="CASCADE"),
>                           primary_key=True)
>> If extra-route routes
>> are available in a separate container, then sharing of such
>> containers across networks is possible.
>
> [RONAK] - Agreed with the idea. Will the following work?
> class NetworkRoute(model_base.BASEV2, models_v2.Route):
>     network_id = sa.Column(sa.String(36),
>                            sa.ForeignKey('networks.id',
>                                          ondelete="CASCADE"),
>                            primary_key=True)
>
>> Another reason to remove the dependency would be to have
>> next hops that are not CIDRs. Next-hops should be allowed as
>> interface or a VM instance such as NAT instance. This would
>> make the extra route extension more generic.
>
> [RONAK] - Agreed. From a db standpoint, the nexthop is just a 64 byte string. But
> in extraroute_db.py there is a restriction that the nexthop be only a CIDR, which
> should be relaxed. Also, I feel taking out this restriction would mean that you
> could assign a static route with a nexthop prior to an incarnation of the
> nexthop (i.e. VM instance, host interface etc).
>
>>
>> This way an extra-route container can be attached/bound to
>> either a router extension or to a network as well. Many plugins
>> do not need a separate router entity for most of the inter-network
>> routing.
>
> [RONAK] - What changes do you have in mind?
>
>> Thanks,
>> Rudra
>>
>>
>>
>>
>> --
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stable/grizzly

2013-10-10 Thread Adam Gandelman
On 10/10/2013 02:20 AM, Alan Pevec wrote:
> 2013/10/10 Gary Kotton :
>> The problem seems to be with the boto python library. I am not really
>> familiar with this but I have seen this is the last – we may need to update
>> the requirements again to exclude a specific version.
> Yeah, it's bad boto update again:
> -boto==2.13.3
> +boto==2.14.0
>
> Let's cap it as a quickfix, it's stable/grizzly freeze today so we
> need gates fixed asap!
>
> Cheers,
> Alan
>
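
For reference, such a cap amounts to a one-line change in the requirements
file, e.g. something like (version bounds illustrative):

    boto>=2.4.0,!=2.14.0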

Gary just proposed:

https://review.openstack.org/#/c/50985/1

I'm in favor of this approach rather than adding a strict version
requirement to boto in stable. IMHO, adding such an explicit requirement
in a stable update is a bad idea that should be added to the stable
branch blacklist.

Adam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] neutron unittests hang in OL6/py26

2013-10-10 Thread Bhuvan Arumugam
Hello,

Test environment:
Neutron version: 2014.1.a61.gc7db907
Python version: v2.6 (we could replicate this issue with v2.7 as well)
OS: Oracle Linux 6.3
No of CPUs: 1
No of testr workers: 1
pip freeze: refer to bottom of email
pip index: http://pypi.openstack.org/openstack

We are unable to execute the neutron unittests. Each test is very slow to
execute and the run never completes. Does anyone here use OpenStack in an
identical environment and face similar issues?

This is tracked in https://bugs.launchpad.net/neutron/+bug/1234857. The
workaround proposed in this bug, to mock.patch
FixedIntervalLoopingCall, doesn't help. If you are interested to take a
peek, here is the testr output:
http://livecipher.com/testr-1234857.log.gz
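
For reference, the workaround from the bug amounts to something like this in
a test's setUp (the module path is the oslo-incubator copy as it currently
sits in the neutron tree; adjust if it has moved):

    import mock

    # inside the test case's setUp():
    patcher = mock.patch(
        'neutron.openstack.common.loopingcall.FixedIntervalLoopingCall')
    patcher.start()
    self.addCleanup(patcher.stop)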

If you analyze the testr output, the tests started at 2013-10-08 20:42:45
and by 2013-10-08 23:43:09 (3hrs) only 4730 tests had been executed. If I
let it run, it never completes. Individual tests randomly take over 10mins
each. None of the tests had failed though. Note: as per the testr output,
the last test 
neutron.tests.unit.nec.test_security_group.TestNecSecurityGroups.test_delete_default_security_group_admin
had failed, because I terminated tox.

For instance, 
neutron.tests.unit.nec.test_security_group.TestNecSecurityGroups.test_create_security_group_rule_ethertype_invalid_as_number
took 12mins, as per the above testr output (23:23 - 23:35). However, if I
run this test or similar tests manually, it executes within a few seconds.

The pip list (pip freeze) is identical to the upstream gate job. If you haven't
faced this issue in your environment, any suggestion to debug it
would be helpful.

Babel==1.3 Jinja2==2.7.1 Mako==0.9.0 MarkupSafe==0.18 Paste==1.7.5.1
PasteDeploy==1.5.0 Pygments==1.6 Routes==1.13 SQLAlchemy==0.7.10
Sphinx==1.2b3 WebOb==1.2.3 WebTest==2.0.6 alembic==0.6.0 amqp==1.0.13
amqplib==1.0.2 anyjson==0.3.3 argparse==1.2.1 beautifulsoup4==4.3.2
cliff==1.4.5 cmd2==0.6.7 configobj==4.7.2 coverage==3.7 discover==0.4.0
docutils==0.11 eventlet==0.14.0 extras==0.0.3 fixtures==0.3.14 flake8==2.0
greenlet==0.4.1 hacking==0.7.2 httplib2==0.8 importlib==1.0.2
iso8601==0.1.4 jsonrpclib==0.1.3 kombu==2.5.15 mccabe==0.2.1 mock==1.0.1
mox==0.5.3 netaddr==0.7.10 neutron==2014.1.a61.gc7db907 ordereddict==1.1
oslo.config==1.2.1 pbr==0.5.21 pep8==1.4.5 prettytable==0.7.2
pyflakes==0.7.3 pyparsing==2.0.1 python-keystoneclient==0.3.2
python-mimeparse==0.1.4 python-neutronclient==2.3.1
python-novaclient==2.15.0 python-subunit==0.0.15 pytz==2013.7
pyudev==0.16.1 repoze.lru==0.6 requests==2.0.0 setuptools-git==1.0
simplejson==3.3.1 six==1.4.1 stevedore==0.12 testrepository==0.0.17
testtools==0.9.32 waitress==0.8.5


-- 
Regards,
Bhuvan Arumugam
www.livecipher.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Re: Service VM discussion - mgmt ifs

2013-10-10 Thread Isaku Yamahata
On Thu, Oct 03, 2013 at 11:33:35PM +,
"Regnier, Greg J"  wrote:

> RE: vlan trunking support for network tunnels
> Copying to dev mailing list.
>   - Greg
> 
> -Original Message-
> From: Kyle Mestery (kmestery) [mailto:kmest...@cisco.com] 
> Sent: Thursday, October 03, 2013 6:33 AM
> To: Bob Melander (bmelande)
> Cc: Regnier, Greg J
> Subject: Re: Service VM discussion - mgmt ifs
> 
> On Oct 3, 2013, at 1:56 AM, Bob Melander (bmelande)  
> wrote:
> > 
> > The N1kv plugin only uses VXLAN but for that tunneling method the VLAN 
> > trunking is supported. The way it works is that each VXLAN is mapped to a 
> > *link local* VLAN. That technique is pretty much amenable to any tunneling 
> > method.
> > 
> > There is a blueprint for trunking support in Neutron written by Kyle 
> > (https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api).
> >  I think that it would be very useful for the service VM framework if at 
> > least the ML2 and OVS plugins would implement the above blueprint. 
> > 
> I think this blueprint would be worth shooting for in Icehouse. I can flesh 
> it out a bit more so there is more to see on it and we can target it for 
> Icehouse if you guys think this makes sense. I think not only would it help 
> the service VM approach being taken here, but for running "OpenStack on 
> OpenStack" deployments, having a trunk port to the VM makes a lot of sense 
> and enables more networking options for that type of testing.
> 
> Thanks,
> Kyle

Hi Kyle.
Can you please elaborate on how a service VM sees packets from such ports?
Say, in the case of VLAN trunking, should the service VM understand the VLAN tag?

By looking at the BP, I don't understand the relation between network interfaces
in guests, OVS ports (in the case of the OVS plugin), and the neutron port you
are proposing. (Maybe this is the reason for this discussion, though.)
So far they are 1:1:1, but you'd like to make it more flexible, I guess.

thanks,


> > We actually have an implementation also for the OVS plugin that supports 
> > its tunneling methods. But we have not yet attempted to upstream it.
> > 
> > Thanks,
> > Bob
> > 
> > Ps. Thanks for inserting the email comments into the document. If we can 
> > extend it further in the coming weeks to get a full(er) picture, then during 
> > the summit we can identify/discuss suitable pieces to implement in phases 
> > during the Icehouse timeframe. 
> > 
> > 
> > 3 okt 2013 kl. 01:13 skrev "Regnier, Greg J" :
> > 
> >> Hi Bob,
> >>  
> >> Does the VLAN trunking solution work with tenant networks that use (VxLAN, 
> >> NVGRE) tunnels?
> >>  
> >> Thanks,
> >> Greg
> >>  
> >> From: Bob Melander (bmelande) [mailto:bmela...@cisco.com]
> >> Sent: Wednesday, September 25, 2013 2:57 PM
> >> To: Regnier, Greg J; Sumit Naiksatam; Rudrajit Tapadar (rtapadar); 
> >> David Chang (dwchang); Joseph Swaminathan; Elzur, Uri; Marc Benoit; 
> >> Sridar Kandaswamy (skandasw); Dan Florea (dflorea); Kanzhe Jiang; 
> >> Kuang-Ching Wang; Gary Duan; Yi Sun; Rajesh Mohan; Maciocco, 
> >> Christian; Kyle Mestery (kmestery)
> >> Subject: Re: Service VM discussion - mgmt ifs ... The service VM 
> >> framework scheduler should preferably also allow selection of VIFs to 
> >> host a logical resource's logical interfaces. To clarify the last 
> >> statement, one "use case"
> >> could be to spin up a VM with more VIFs than are needed initially 
> >> (e.g., if the VM does not support vif hot-plugging). Another "use 
> >> case" is if the plugin supports VLAN trunking and attachement of the 
> >> logical resource's logical interface to a network corresponds to trunking 
> >> of a network on a VIF.
> >>  
> >> There are at least three (or four) ways to dynamically plug a logical 
> >> service resource inside a VM to networks:
> >> - Create a VM VIF on demand for the logical interface of the service 
> >> resource
> >> ("hot-plugging")
> >> - Pre-populate the VM with a set of VIFs that can be allocated to 
> >> logical interfaces of the service resources
> >> - Create a set of VM VIFs (on demand or during VM creation) that 
> >> carry VLAN trunks for which logical (VLAN) interfaces are created and 
> >> allocated to service resources.
> >>  
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2013-10-10 Thread Thierry Carrez

John Dickinson wrote:
> I'd like to announce my candidacy to the OpenStack Technical
> Committee.

Confirmed.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation request for Manila

2013-10-10 Thread Thierry Carrez
Swartzlander, Ben wrote:
> Please consider our formal request for incubation status of the Manila
> project:
> 
> https://wiki.openstack.org/wiki/Manila_Overview

Note that with the TC elections under way, this request won't be
examined until the new TC is in place.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Change ListOpt and DictOpt default values

2013-10-10 Thread Flavio Percoco

On 10/10/13 15:29 +0100, Mark McLoughlin wrote:

Hi Flavio,

On Thu, 2013-10-10 at 14:40 +0200, Flavio Percoco wrote:

Greetings,

I'd like to propose changing both the ListOpt and DictOpt default values
to [] and {} respectively. These values are, IMHO, saner defaults than
None for these 2 options, and behavior won't be altered - unless `is not
None` is being used.

Since I may be missing some history, I'd like to ask if there's a
reason why None was kept as the default `default` value for these 2 options?

As mentioned above, this change may be backward incompatible in cases
like:

if conf.my_list_opt is None:


Does anyone know if there are cases like this?


I'd need a lot of persuasion that this won't break some use of
oslo.config somewhere. Not "why would anyone do that?" hand-waving.
People do all sorts of weird stuff with APIs.


Agreed, that's what I'd like to find out, I'm sure there are cases
like:

   for item in conf.my_list_opt:
   

Which means that they're already using `default=[]` but I'm not 100%
sure about the backward incompatible change I mentioned.



If people really think this is a big issue, I'd make it opt-in. Another
boolean flag like the recently added validate_default_values.


TBH, I don't think this will be a really big issue but I'll do more
research on this if we agree the change makes sense. AFAICT, usages
like the one mentioned may result in something like:

   if conf.my_list_opt is None:
   val = [] # Or something else

   # OR

   val = conf.my_list_opt or []

In which case, using `default=[]` would have made more sense. I
haven't done a lot of research on this yet.
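
A quick way to see the current behavior (a minimal sketch against oslo.config
as released today; under the proposal the last line would print [] instead of
None):

    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opt(cfg.ListOpt('my_list_opt'))  # note: no default given

    CONF([])  # parse an empty command line

    print(CONF.my_list_opt)  # currently prints None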


As regards bumping the major number and making incompatible changes - I
think we should only consider that when there's a tonne of legacy
compatibility stuff that we want to get rid of. For example, if we had a
bunch of opt-in flags like these, then there'd come a point where we'd
say "let's do 2.0 and clean those up". However, doing such a thing is
disruptive and I'd only be in favour of it if the backwards compat
support was really getting in our way.



Agreed here as well!

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC candidacy

2013-10-10 Thread John Dickinson
I'd like to announce my candidacy to the OpenStack Technical Committee.

As the Swift PTL, I've been involved in the TC for a while (and the PPB before 
that and the POC before that). I've seen OpenStack grow from the very beginning, 
and I'm very proud to be a part of it.

As we all know, OpenStack has grown tremendously since it started. Open source, 
design, development, and community give people the ability to have ownership of 
their data. These core principles are why I think OpenStack will continue to 
change how people build and use technology for many years to come.

Of course, principles and ideas don't run in production. Code does. Therefore I 
think that a very important role of the TC is to ensure that all of the 
OpenStack projects do work, work well together, and promote the vision of 
OpenStack to provide ubiquitous cloud infrastructure. 

I believe that OpenStack is a unified project that provides independent 
OpenStack solutions to hard problems.

I believe that OpenStack needs to clearly define its scope so that it can stay 
focused on fulfilling its mission.

I believe that OpenStack is good for both public and private clouds, and that 
the private cloud use case (i.e. deploying OpenStack internally for internal 
users only) will be the dominant deployment pattern for OpenStack.

If elected, I will continue to promote these goals for the TC. Thank you for 
your support.

--John





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable/grizzly

2013-10-10 Thread Adam Gandelman
On 10/10/2013 04:42 AM, Gary Kotton wrote:
> Trunk - https://review.openstack.org/50904
> Stable/Grizzly - https://review.openstack.org/#/c/50905/
> There is an alternative patch - https://review.openstack.org/#/c/50873/7
> I recall seeing the same problem a few month ago and the bot version was
> excluded - not sure why the calling code was not updated. Maybe someone
> who is familiar with that can chime in.
> Thanks
> Gary
>
> On 10/10/13 12:20 PM, "Alan Pevec"  wrote:
>

Missed the chance to weigh in on the stable review, but is there a
reason we're also bumping the minimum required boto version for a stable
point update? 


Adam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Incubation request for Manila

2013-10-10 Thread Swartzlander, Ben
Please consider our formal request for incubation status of the Manila project:
https://wiki.openstack.org/wiki/Manila_Overview

thanks!
-Ben Swartzlander

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Neutron support for passthrough of networking devices?

2013-10-10 Thread Chris Friesen

On 10/10/2013 01:19 AM, Prashant Upadhyaya wrote:

Hi Chris,

I note two of your comments --



When we worked on H release, we target for basic PCI support
like accelerator card or encryption card etc.



PU> So I note that you are already solving the PCI pass through
usecase somehow ? How ? If you have solved this already in terms of
architecture then SRIOV should not be difficult.


Notice the double indent...that was actually Jiang's statement that I
quoted.



Do we run into the same complexity if we have spare physical NICs
on the host that get passed in to the guest?



PU> In part you are correct. However there is one additional thing.
When we have multiple physical NIC's, the Compute Node's linux is
still in control over those.





In case of SRIOV, you can dice up a single
physical NIC into multiple NIC's (effectively), and expose each of
these diced up NIC's to a VM each. This means that the VM will now
'directly' access the NIC bypassing the Hypervisor.





But if there are two
physical NIC's which were diced up with SRIOV, then VM's on the diced
parts of the first  physical NIC cannot communicate easily with the
VM's on the diced parts of the second physical NIC. So a native
implementation has to be there on the Compute Node which will aid
this (this native implementation will take over the Physical
Function, PF of each NIC) and will be able to 'switch' the packets
between VM's of different physical diced up NIC's [if we need that
usecase]


Is this strictly necessary?  It seems like it would be simpler to let 
the packets be sent out over the wire and have the switch/router send 
them back to the other NIC.  Of course this would result in higher use 
of the physical link, but on the other hand it would mean less work for 
the CPU on the compute node.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Change ListOpt and DictOpt default values

2013-10-10 Thread Mark McLoughlin
Hi Flavio,

On Thu, 2013-10-10 at 14:40 +0200, Flavio Percoco wrote:
> Greetings,
> 
> I'd like to propose to change both ListOpt and DictOpt default values
> to [] and {} respectively. These values are, IMHO, saner defaults than
> None for these 2 options, and behavior won't be altered - unless `is not
> None` is being used.
> 
> Since I may be missing some history, I'd like to ask if there's a
> reason why None was kept as the default `default` value for these 2 options?
> 
> As mentioned above, this change may be backward incompatible in cases
> like:
> 
> if conf.my_list_opt is None:
> 
> 
> Does anyone know if there are cases like this?

I'd need a lot of persuasion that this won't break some use of
oslo.config somewhere. Not "why would anyone do that?" hand-waving.
People do all sorts of weird stuff with APIs.

If people really think this is a big issue, I'd make it opt-in. Another
boolean flag like the recently added validate_default_values.

As regards bumping the major number and making incompatible changes - I
think we should only consider that when there's a tonne of legacy
compatibility stuff that we want to get rid of. For example, if we had a
bunch of opt-in flags like these, then there'd come a point where we'd
say "let's do 2.0 and clean those up". However, doing such a thing is
disruptive and I'd only be in favour of it if the backwards compat
support was really getting in our way.

Thanks,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports

2013-10-10 Thread Mathieu Rohon
Nevertheless, it should be ok if you change the port on every agent,
and use the same port.

regards

On Thu, Oct 10, 2013 at 4:04 PM, P Balaji-B37839  wrote:
> Hi Rohon,
>
> Thanks for confirmation.
>
> We will file a bug on this.
>
> Regards,
> Balaji.P
>
>> -Original Message-
>> From: Mathieu Rohon [mailto:mathieu.ro...@gmail.com]
>> Sent: Thursday, October 10, 2013 6:43 PM
>> To: OpenStack Development Mailing List
>> Subject: Re: [openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports
>>
>> hi,
>>
>> good point Balaji, the dst_port is the port on which ovs is listening for
>> vxlan packets, but I don't know what is the option in ovs-vsctl to set
>> the remote port of the vxlan unicast tunnel interface.
>> But it looks like a bug since you're right, tunnel_sync and tunnel_update
>> RPC messages should handle the VXLAN udp port, additionally to the vxlan
>> remote ip.
>>
>> Kyle is away for the moment, he should know the good option to set in
>> ovs-vsctl to specify a remote port for a vxlan tunnel. But you should
>> file a bug for that, to track this issue.
>>
>> Thanks for playing with vxlan, and  helping us to debug it!
>>
>>
>> On Wed, Oct 9, 2013 at 6:55 AM, P Balaji-B37839 
>> wrote:
>> > Any comments on the below from community using OVS will be helpful.
>> >
>> > Regards,
>> > Balaji.P
>> >
>> >> -Original Message-
>> >> From: P Balaji-B37839
>> >> Sent: Tuesday, October 08, 2013 2:31 PM
>> >> To: OpenStack Development Mailing List; Addepalli Srini-B22160
>> >> Subject: [openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports
>> >>
>> >> Hi,
>> >>
>> >> Current OVS Agent is creating tunnel with dst_port as the port
>> >> configured in INI file on Compute Node. If all the compute nodes on
>> >> VXLAN network are configured for DEFAULT port it is fine.
>> >>
>> >> When any of the Compute Nodes are configured for CUSTOM udp port as
>> >> VXLAN UDP Port, Then how does the tunnel will be established with
>> remote IP.
>> >>
>> >> It is observed that the fan-out RPC message is not having the
>> >> destination port information.
>> >>
>> >> Regards,
>> >> Balaji.P
>> >
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Should packstack configure host network interfaces?

2013-10-10 Thread Lars Kellogg-Stedman
If you deploy OpenStack with Neutron using Packstack and do something
like this...

packstack ...  --neutron-ovs-bridge-interfaces=br-eth1:eth1

...packstack will happily add interface eth1 to br-eth1, but will
neither (a) ensure that it is up right now nor (b) ensure that it is
up after a reboot.  This is in contrast to Nova networking, which in
general takes care of bringing up the necessary interfaces at runtime.

Should packstack set up the necessary host configuration to ensure
that the interfaces are up?  Or is this the responsibility of the
local administrator?
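
For reference, persisting this by hand on a RHEL-family system means writing
an ifcfg file for the bridge member, something like the sketch below
(assuming the Open vSwitch network-scripts integration is installed):

    # /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    ONBOOT=yes
    DEVICETYPE=ovs
    TYPE=OVSPort
    OVS_BRIDGE=br-eth1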

Thanks,

-- 
Lars Kellogg-Stedman 



pgpEhsBnmW49t.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports

2013-10-10 Thread P Balaji-B37839
Hi Rohon,

Thanks for confirmation.

We will file a bug on this.

Regards,
Balaji.P

> -Original Message-
> From: Mathieu Rohon [mailto:mathieu.ro...@gmail.com]
> Sent: Thursday, October 10, 2013 6:43 PM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports
> 
> hi,
> 
> good point Balaji, the dst_port is the port on which ovs is listening for
> vxlan packets, but I don't know what is the option in ovs-vsctl to set
> the remote port of the vxlan unicast tunnel interface.
> But it looks like a bug since you're right, tunnel_sync and tunnel_update
> RPC messages should handle the VXLAN udp port, additionally to the vxlan
> remote ip.
> 
> Kyle is away for the moment, he should know the good option to set in
> ovs-vsctl to specify a remote port for a vxlan tunnel. But you should
> file a bug for that, to track this issue.
> 
> Thanks for playing with vxlan, and  helping us to debug it!
> 
> 
> On Wed, Oct 9, 2013 at 6:55 AM, P Balaji-B37839 
> wrote:
> > Any comments on the below from community using OVS will be helpful.
> >
> > Regards,
> > Balaji.P
> >
> >> -Original Message-
> >> From: P Balaji-B37839
> >> Sent: Tuesday, October 08, 2013 2:31 PM
> >> To: OpenStack Development Mailing List; Addepalli Srini-B22160
> >> Subject: [openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports
> >>
> >> Hi,
> >>
> >> Current OVS Agent is creating tunnel with dst_port as the port
> >> configured in INI file on Compute Node. If all the compute nodes on
> >> VXLAN network are configured for DEFAULT port it is fine.
> >>
> >> When any of the Compute Nodes are configured for CUSTOM udp port as
> >> VXLAN UDP Port, Then how does the tunnel will be established with
> remote IP.
> >>
> >> It is observed that the fan-out RPC message does not have the
> >> destination port information.
> >>
> >> Regards,
> >> Balaji.P
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Change ListOpt and DictOpt default values

2013-10-10 Thread David Ripton

On 10/10/2013 09:45 AM, Ben Nemec wrote:

On 2013-10-10 07:40, Flavio Percoco wrote:

Greetings,

I'd like to propose to change both ListOpt and DictOpt default values
to [] and {} respectively. These values are, IMHO, saner defaults than
None for these 2 options, and behavior won't be altered - unless `is not
None` is being used.

Since I may be missing some history, I'd like to ask if there's a
reason why None was kept as the default `default` value for these 2
options?

As mentioned above, this change may be backward incompatible in cases
like:

   if conf.my_list_opt is None:
   

Does anyone know if there are cases like this?

Also, I know it is possible to do:

   cfg.ListOpt('option', default=[])

This is not terrible, TBH, but it doesn't feel right. I've made the
mistake of ignoring the `default` keyword myself, although I know `[]`
is not the default option for `ListOpt`. As already said, I'd expect
`[]` to be the default, non-set value for `ListOpt`.

Thoughts?

Cheers,
FF

P.S: I'm not sure I'll make it to tomorrow's meeting so, starting the
discussion here made more sense.


Since this is technically an incompatible API change, would a major
version bump be needed for oslo.config if we did this?  Maybe nobody's
relying on the existing behavior, but since oslo.config is a released
library its API is supposed to be stable.


+1.  boto just broke our builds by making an incompatible API change in 
version 2.14.  We can't make every project in the world not do that, but 
we sure should avoid doing it ourselves.


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Change ListOpt and DictOpt default values

2013-10-10 Thread Ben Nemec

On 2013-10-10 07:40, Flavio Percoco wrote:

Greetings,

I'd like to propose to change both ListOpt and DictOpt default values
to [] and {} respectively. These values are, IMHO, saner defaults than
None for these 2 options, and behavior won't be altered - unless `is not
None` is being used.

Since I may be missing some history, I'd like to ask if there's a
reason why None was kept as the default `default` value for these 2 
options?


As mentioned above, this change may be backward incompatible in cases
like:

   if conf.my_list_opt is None:
   

Does anyone know if there are cases like this?

Also, I know it is possible to do:

   cfg.ListOpt('option', default=[])

This is not terrible, TBH, but it doesn't feel right. I've made the
mistake of ignoring the `default` keyword myself, although I know `[]`
is not the default option for `ListOpt`. As already said, I'd expect
`[]` to be the default, non-set value for `ListOpt`.

Thoughts?

Cheers,
FF

P.S: I'm not sure I'll make it to tomorrow's meeting so, starting the
discussion here made more sense.


Since this is technically an incompatible API change, would a major 
version bump be needed for oslo.config if we did this?  Maybe nobody's 
relying on the existing behavior, but since oslo.config is a released 
library its API is supposed to be stable.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Common requirements for services' discussion

2013-10-10 Thread Harshad Nakil
Agree,
I like what AWS has done: have a concept of a NAT instance. 90% of use cases
are solved by just specifying
inside and outside networks for the NAT instance.

If one wants fancier NAT config, they can always use the NATaaS API(s)
to configure this instance.
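
As a purely hypothetical sketch of what that minimal API could look like
(the resource and field names below are invented for illustration only):

    POST /v2.0/nat-instances
    {"nat_instance": {"inside_network_id": "<net-uuid>",
                      "outside_network_id": "<ext-net-uuid>"}}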

There is a blueprint for bringing Amazon VPC API compatibility to nova, and
related extensions to quantum already propose the concept of a NAT instance.

How the NAT instance is implemented is left to the plugin.

Regards
-Harshad


On Oct 10, 2013, at 1:47 AM, Salvatore Orlando  wrote:

Can I just ask you to not call it NATaas... if you want to pick a name for
it, go for Natasha :)

By the way, the idea of a NAT service plugin was first introduced at the
Grizzly summit in San Diego.
One hurdle, not a big one however, would be that the external gateway and
floating IP features of the L3 extension already implicitly implement NAT.
It will be important to find a solution to ensure NAT can be configured
explicitly as well while allowing for configuring external gateway and
floating IPs through the API in the same way that we do today.

Apart from this, another interesting aspect would be to be see if we can
come up with an approach which will result in an API which abstracts as
much as possible networking aspects. In other words, I would like to avoid
an API which ends up being "iptables over rest", if possible.

Regards,
Salvatore


On 10 October 2013 09:55, Bob Melander (bmelande) wrote:

>  Hi Edgar,
>
>  I'm also interested in a broadening of NAT capability in Neutron using
> the evolving service framework.
>
>  Thanks,
> Bob
>
>   From: Edgar Magana 
> Reply-To: OpenStack Development Mailing List <
> openstack-dev@lists.openstack.org>
> Date: onsdag 9 oktober 2013 21:38
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Neutron] Common requirements for services'
> discussion
>
>   Hello all,
>
>  Is anyone working on NATaaS?
> I know we have some developers working on Router as a Service and they
> probably want to include NAT functionality, but I have some interest in
> having NAT as a Service.
>
>  Please respond if somebody is interested in having some discussions
> about it.
>
>  Thanks,
>
>  Edgar
>
>   From: Sumit Naiksatam 
> Reply-To: OpenStack List 
> Date: Tuesday, October 8, 2013 8:30 PM
> To: OpenStack List 
> Subject: [openstack-dev] [Neutron] Common requirements for services'
> discussion
>
>  Hi All,
>
>  We had a VPNaaS meeting yesterday and it was felt that we should have a
> separate meeting to discuss the topics common to all services. So, in
> preparation for the Icehouse summit, I am proposing an IRC meeting on Oct
> 14th 22:00 UTC (immediately after the Neutron meeting) to discuss common
> aspects related to the FWaaS, LBaaS, and VPNaaS.
>
>  We will begin with service insertion and chaining discussion, and I hope
> we can collect requirements for other common aspects such as service
> agents, services instances, etc. as well.
>
>  Etherpad for service insertion & chaining can be found here:
>
> https://etherpad.openstack.org/icehouse-neutron-service-insertion-chaining
>
>  Hope you all can join.
>
>  Thanks,
> ~Sumit.
>
>
>  ___ OpenStack-dev mailing
> list OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Change ListOpt and DictOpt default values

2013-10-10 Thread Davanum Srinivas
Flavio,

sounds good to me.

-- dims


On Thu, Oct 10, 2013 at 8:46 AM, Julien Danjou  wrote:

> On Thu, Oct 10 2013, Flavio Percoco wrote:
>
> > This is not terrible, TBH, but it doesn't feel right. I've made the
> > mistake to ignore the `default` keyword myself, although I know `[]`
> > is not the default option for `ListOpt`. As already said, I'd expect
> > `[]` to be the default, non-set value for `ListOpt`.
> >
> > Thoughts?
>
> Sounds like a good idea; I can't think of any con.
>
> --
> Julien Danjou
> /* Free Software hacker * independent consultant
>http://julien.danjou.info */
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Davanum Srinivas :: http://davanum.wordpress.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports

2013-10-10 Thread Mathieu Rohon
hi,

good point Balaji, the dst_port is the port on which ovs is listening
for vxlan packets, but I don't know what is the option in ovs-vsctl to
set the remote port of the vxlan unicast tunnel interface.
But it looks like a bug since you're right, tunnel_sync and
tunnel_update RPC messages should handle the VXLAN udp port,
additionally to the vxlan remote ip.

Kyle is away for the moment, he should know the good option to set in
ovs-vsctl to specify a remote port for a vxlan tunnel. But you should
file a bug for that, to track this issue.

Thanks for playing with vxlan, and  helping us to debug it!
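
For what it's worth, recent Open vSwitch releases accept a dst_port option on
the vxlan interface itself, so the agent side would look something like the
sketch below (IP and port values illustrative); the RPC messages would still
need to carry the port for custom values to work across nodes:

    ovs-vsctl add-port br-tun vxlan-1 -- set interface vxlan-1 \
        type=vxlan options:remote_ip=192.0.2.10 options:dst_port=4789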


On Wed, Oct 9, 2013 at 6:55 AM, P Balaji-B37839  wrote:
> Any comments on the below from community using OVS will be helpful.
>
> Regards,
> Balaji.P
>
>> -Original Message-
>> From: P Balaji-B37839
>> Sent: Tuesday, October 08, 2013 2:31 PM
>> To: OpenStack Development Mailing List; Addepalli Srini-B22160
>> Subject: [openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports
>>
>> Hi,
>>
>> Current OVS Agent is creating tunnel with dst_port as the port configured
>> in INI file on Compute Node. If all the compute nodes on VXLAN network
>> are configured for DEFAULT port it is fine.
>>
>> When any of the Compute Nodes are configured for a CUSTOM udp port as the VXLAN
>> UDP Port, then how will the tunnel be established with the remote IP?
>>
>> It is observed that the fan-out RPC message does not have the destination
>> port information.
>>
>> Regards,
>> Balaji.P
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-10 Thread Harshad Nakil
Won't it be simpler to keep a service instance as one or more VMs, rather
than one VM being many service instances?
Usually an appliance is collectively (all its functions) providing a
service, like a firewall or load balancer. An appliance is packaged as a VM.
It will be easier to manage,
it will be easier for the provider to charge,
and it will be easier to control resource allocation.
Once an appliance is a physical device you have all of the above issues,
and usually the multi-tenancy implementation is weak in most physical
appliances.

Regards
-Harshad


On Oct 10, 2013, at 12:44 AM, "Bob Melander (bmelande)" 
wrote:

 Harshad,

 By service instance I referred to the logical entities that Neutron
creates (e.g. Neutron's router). I see a service VM as a (virtual) host
where one or several service instances can be placed.
The service VM (at least if managed through Nova) will belong to a tenant
and the service instances are owned by tenants.

 If the service VM tenant is different from service instance tenants (which
is a simple way to "hide" the service VM from the tenants owning the
service instances) then it is not clear to me how the existing access
control in openstack will support pinning the service VM to a particular
tenant owning a service instance.

 Thanks,
Bob

  From: Harshad Nakil 
Reply-To: OpenStack Development Mailing List <
openstack-dev@lists.openstack.org>
Date: onsdag 9 oktober 2013 18:56
To: OpenStack Development Mailing List 
Subject: Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

  An admin creating a service instance for a tenant could be a common use case. But
ownership of a service can be controlled via the already existing access control
mechanisms in openstack. If the service instance belongs to a particular
project, then by definition other tenants should not be able to use
this instance.

On Tue, Oct 8, 2013 at 11:34 PM, Bob Melander (bmelande)  wrote:

>  For use case 2, ability to "pin" an admin/operator owned VM to a
> particular tenant can be useful.
> I.e., the service VMs are owned by the operator but a particular service
> VM will only allow service instances from a single tenant.
>
>  Thanks,
> Bob
>
>   From: , Greg J 
> Reply-To: OpenStack Development Mailing List <
> openstack-dev@lists.openstack.org>
> Date: tisdag 8 oktober 2013 23:48
> To: "openstack-dev@lists.openstack.org"  >
> Subject: [openstack-dev] [Neutron] Service VM discussion - Use Cases
>
>   Hi,
>
>
> Re: blueprint:
> https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
>
> Before going into more detail on the mechanics, would like to nail down
> use cases.  
>
> Based on input and feedback, here is what I see so far.  
>
>
> Assumptions:
>
>  
>
> - a 'Service VM' hosts one or more 'Service Instances'
>
> - each Service Instance has one or more Data Ports that plug into Neutron
> networks
>
> - each Service Instance has a Service Management i/f for Service
> management (e.g. FW rules)
>
> - each Service Instance has a VM Management i/f for VM management (e.g.
> health monitor)
>
>  
>
> Use case 1: Private Service VM 
>
> Owned by tenant
>
> VM hosts one or more service instances
>
> Ports of each service instance only plug into network(s) owned by tenant
>
>  
>
> Use case 2: Shared Service VM
>
> Owned by admin/operator
>
> VM hosts multiple service instances
>
> The ports of each service instance plug into one tenants network(s)
>
> Service instance provides isolation from other service instances within VM
> 
>
>  
>
> Use case 3: Multi-Service VM
>
> Either Private or Shared Service VM
>
> Support multiple service types (e.g. FW, LB, …)
>
>
> -  Greg
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable/grizzly

2013-10-10 Thread Alan Pevec
2013/10/10 Sean Dague :
> Hmph. So boto changed their connection function signatures to have a 3rd
> argument, and put it second, and nothing has defaults.

So isn't that a boto bug? Not sure what their backward-compatibility
statement is, but it is silly to break the API just like that [1].


Cheers,
Alan

[1] https://github.com/boto/boto/commit/789ace93be380ecd36220b7009f0b497dacdc1cb

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Change ListOpt and DictOpt default values

2013-10-10 Thread Julien Danjou
On Thu, Oct 10 2013, Flavio Percoco wrote:

> This is not terrible, TBH, but it doesn't feel right. I've made the
> mistake of ignoring the `default` keyword myself, although I know `[]`
> is not the default value for `ListOpt`. As already said, I'd expect
> `[]` to be the default, non-set value for `ListOpt`.
>
> Thoughts?

Sounds like a good idea; I can't think of any con.

-- 
Julien Danjou
/* Free Software hacker * independent consultant
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo] Change ListOpt and DictOpt default values

2013-10-10 Thread Flavio Percoco

Greetings,

I'd like to propose changing both ListOpt and DictOpt default values
to [] and {} respectively. These values are, IMHO, saner defaults than
None for these two options, and behavior won't be altered - unless `is not
None` is being used.

Since I may be missing some history, I'd like to ask if there's a
reason why None was kept as the default `default` value for these two options?

As mentioned above, this change may be backward incompatible in cases
like:

   if conf.my_list_opt is None:
       # ... handle the unset case ...

Does anyone know if there are cases like this?

Also, I know it is possible to do:

   cfg.ListOpt('option', default=[])

This is not terrible, TBH, but it doesn't feel right. I've made the
mistake of ignoring the `default` keyword myself, although I know `[]`
is not the default value for `ListOpt`. As already said, I'd expect
`[]` to be the default, non-set value for `ListOpt`.
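
As a minimal sketch (editor's illustration; the option names are made up,
and the oslo.config import path is the one in use at the time of this
thread), the difference between the implicit None and an explicit []
default looks like this:

    from oslo.config import cfg

    opts = [
        cfg.ListOpt('implicit_list'),              # no default: yields None today
        cfg.ListOpt('explicit_list', default=[]),  # the workaround shown above
    ]

    conf = cfg.ConfigOpts()
    conf.register_opts(opts)
    conf([])  # parse an empty argument list

    print(conf.implicit_list)  # None -> callers must guard with `is None`
    print(conf.explicit_list)  # []   -> safe to iterate without a guard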

Thoughts?

Cheers,
FF

P.S.: I'm not sure I'll make it to tomorrow's meeting, so starting the
discussion here made more sense.

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][Libvirt] Disabling nova-compute when a connection to libvirt is broken.

2013-10-10 Thread Vladik Romanovsky
Hello everyone,

I have been recently working on a migration bug in nova (Bug #1233184). 

I noticed that the compute service remains available even if the connection to
libvirt is broken.
I thought that it might be better to disable the service (using
conductor.manager.update_service()) and resume it once it's connected again
(maybe keep the host_stats periodic task running, or create a dedicated one;
once it succeeds, the service will become available again).
This way new VMs won't be scheduled nor migrated to the disconnected host.
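
A minimal sketch of that idea (editor's illustration; `service_api` and its
update_service() call are hypothetical stand-ins for the conductor API
mentioned above):

    import libvirt

    def libvirt_alive(uri='qemu:///system'):
        # True when a usable libvirt connection can be established.
        try:
            conn = libvirt.open(uri)
            alive = bool(conn is not None and conn.isAlive())
            if conn is not None:
                conn.close()
            return alive
        except libvirt.libvirtError:
            return False

    def sync_compute_service_state(service_api, host):
        # Periodic task body: disable nova-compute while libvirt is down,
        # re-enable it as soon as a connection succeeds again.
        service_api.update_service(host, binary='nova-compute',
                                   disabled=not libvirt_alive())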

Any thoughts on that?
Is anyone already working on that?

Thank you,
Vladik

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable/grizzly

2013-10-10 Thread Sean Dague

On 10/10/2013 06:00 AM, Thierry Carrez wrote:

Alan Pevec wrote:

2013/10/10 Gary Kotton :

The problem seems to be with the boto python library. I am not really
familiar with this but I have seen this in the past – we may need to update
the requirements again to exclude a specific version.


Yeah, it's bad boto update again:
-boto==2.13.3
+boto==2.14.0

Let's cap it as a quickfix, it's stable/grizzly freeze today so we
need gates fixed asap!


Do we have a bug filed for this yet? I'd like to mention it to QA/CI
folks when they are up.


Hmph. So boto changed their connection function signatures to have a 3rd 
argument, and put it second, and nothing has defaults.


The unit tests that blow up here already have some boto special casing, 
my inclination is to add more of it for the version at hand. I'm going 
to propose something and see if that fixes it (which should be easy to 
backport).


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable/grizzly

2013-10-10 Thread Gary Kotton
Trunk - https://review.openstack.org/50904
Stable/Grizzly - https://review.openstack.org/#/c/50905/
There is an alternative patch - https://review.openstack.org/#/c/50873/7
I recall seeing the same problem a few months ago and the boto version was
excluded - I am not sure why the calling code was not updated. Maybe someone
who is familiar with that can chime in.
Thanks
Gary

On 10/10/13 12:20 PM, "Alan Pevec"  wrote:

>2013/10/10 Gary Kotton :
>> The problem seems to be with the boto python library. I am not really
>> familiar with this but I have seen this in the past – we may need to
>> update the requirements again to exclude a specific version.
>
>Yeah, it's bad boto update again:
>-boto==2.13.3
>+boto==2.14.0
>
>Let's cap it as a quickfix, it's stable/grizzly freeze today so we
>need gates fixed asap!
>
>Cheers,
>Alan
>
>___
>Openstack-stable-maint mailing list
>openstack-stable-ma...@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Jenkins failed for all nova submits due to library boto change method signature

2013-10-10 Thread Gary Kotton
Hi,
I am dealing with this – please see https://review.openstack.org/50904
Thanks
Gary

From: Chang Bo Guo <guoc...@cn.ibm.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Thursday, October 10, 2013 2:15 PM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [nova] Jenkins failed for all nova submits due to
library boto change method signature

Hi ALL,

Recently, Jenkins builds failed for all nova submits. This is due to boto
changing a method signature:
get_http_connection(host, is_secure) ---> get_http_connection(host, port,
is_secure)
see 
https://boto.readthedocs.org/en/latest/ref/boto.html#boto.connection.AWSAuthConnection.get_http_connection

I opened a bug https://bugs.launchpad.net/nova/+bug/1237825, and while I'm not
a boto expert I submitted a temporary fix for this:
https://review.openstack.org/#/c/50873/
Hope this can be merged, or others can help fix this urgent issue ASAP.

Best Regards
---
Eric Guo  郭长波
Cloud Solutions and Openstack Development
China System & Technology Laboratories (CSTL), IBM
Tel:86-10-82452019
Internet Mail: guoc...@cn.ibm.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Jenkins failed for all nova submits due to library boto change method signature

2013-10-10 Thread Chang Bo Guo
Hi ALL,

Recently, Jenkins builds failed for all nova submits. This is due to boto
changing a method signature:
get_http_connection(host, is_secure) ---> get_http_connection(host, port,
is_secure)
see 
https://boto.readthedocs.org/en/latest/ref/boto.html#boto.connection.AWSAuthConnection.get_http_connection
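
A minimal sketch (editor's illustration, not the actual fix under review) of
calling code that tolerates both signatures, assuming `conn` is a boto
AWSAuthConnection:

    import inspect

    def get_http_connection_compat(conn, host, port, is_secure):
        # boto >= 2.14.0 takes (host, port, is_secure); older releases
        # take (host, is_secure).
        args = inspect.getargspec(conn.get_http_connection).args
        if 'port' in args:
            return conn.get_http_connection(host, port, is_secure)
        return conn.get_http_connection(host, is_secure)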

I opened a bug https://bugs.launchpad.net/nova/+bug/1237825, and while I'm not
a boto expert I submitted a temporary fix for this:
https://review.openstack.org/#/c/50873/
Hope this can be merged, or others can help fix this urgent issue ASAP.

Best Regards
---
Eric Guo  郭长波
Cloud Solutions and Openstack Development
China System & Technology Laboratories (CSTL), IBM
Tel:86-10-82452019
Internet Mail: guoc...@cn.ibm.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable/grizzly

2013-10-10 Thread Thierry Carrez
Alan Pevec wrote:
> 2013/10/10 Gary Kotton :
>> The problem seems to be with the boto python library. I am not really
>> familiar with this but I have seen this in the past – we may need to update
>> the requirements again to exclude a specific version.
> 
> Yeah, it's bad boto update again:
> -boto==2.13.3
> +boto==2.14.0
> 
> Let's cap it as a quickfix, it's stable/grizzly freeze today so we
> need gates fixed asap!

Do we have a bug filed for this yet? I'd like to mention it to QA/CI
folks when they are up.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stable/grizzly

2013-10-10 Thread Thierry Carrez
Alan Pevec wrote:
> 2013/10/10 Gary Kotton :
>> The problem seems to be with the boto python library. I am not really
>> familiar with this but I have seen this in the past – we may need to update
>> the requirements again to exclude a specific version.
> 
> Yeah, it's bad boto update again:
> -boto==2.13.3
> +boto==2.14.0
> 
> Let's cap it as a quickfix, it's stable/grizzly freeze today so we
> need gates fixed asap!

My understanding is that this affects nova unit tests; is any other project
affected?

Push the quickfix to master and stable/grizzly so that it's ready to be
reviewed and accepted when core devs get up.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stable/grizzly

2013-10-10 Thread Alan Pevec
2013/10/10 Gary Kotton :
> The problem seems to be with the boto python library. I am not really
> familiar with this but I have seen this in the past – we may need to update
> the requirements again to exclude a specific version.

Yeah, it's bad boto update again:
-boto==2.13.3
+boto==2.14.0

Let's cap it as a quickfix, it's stable/grizzly freeze today so we
need gates fixed asap!

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-10 Thread Thomas Spatzier
Hi all,

Lakshminaraya Renganarayana  wrote on 10.10.2013
01:34:41:
> From: Lakshminaraya Renganarayana 
> To: Joshua Harlow ,
> Cc: OpenStack Development Mailing List

> Date: 10.10.2013 01:37
> Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
> proposal for workflows
>
> Hi Joshua,
>
> I agree that there is an element of taskflow in what I described.
> But, I am aiming for something much more lightweight which can be
> naturally blended with HOT constructs and the Heat engine. To be a bit
> more specific, Heat already has dependency and coordination
> mechanisms. So, I am aiming for maybe just one additional construct
> in Heat/HOT and some logic in Heat that would support coordination.

First of all, the use case you presented in your earlier mail is really
good and illustrative. And I agree that there should be constructs in HOT
to declare those kinds of dependencies. So how to define this in HOT is one
work item.
How this gets implemented is another item, and yes, maybe this is something
that Heat can delegate to taskflow. Because if taskflow has those
capabilities, why re-implement it.

>
> Thanks,
> LN
>
> _
> Lakshminarayanan Renganarayana
> Research Staff Member
> IBM T.J. Watson Research Center
> http://researcher.ibm.com/person/us-lrengan
>
>
> Joshua Harlow  wrote on 10/09/2013 03:55:00 PM:
>
> > From: Joshua Harlow
> > To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>,
> > Lakshminaraya Renganarayana/Watson/IBM@IBMUS
> > Date: 10/09/2013 03:55 PM
> > Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
> > proposal for workflows
> >
> > Your example sounds a lot like what taskflow is built for doing.
> >
> > https://github.com/stackforge/taskflow/blob/master/taskflow/examples/calculate_in_parallel.py
> > is a decent example.
> >
> > In that one, tasks are created and input/output dependencies are
> > specified (provides, rebind, and the execute function arguments
> > themselves).
> >
> > This is combined into the taskflow concept of a flow; one of those
> > flow types is a dependency graph.
> >
> > Using a parallel engine (similar in concept to a heat engine) we can
> > run all non-dependent tasks in parallel.
> >
> > An example that I just created shows this (and shows it
> > running) and more closely matches your example.
> >
> > Program (this will work against the current taskflow codebase):
> > http://paste.openstack.org/show/48156/
> >
> > Output @ http://paste.openstack.org/show/48157/
> >
> > -Josh
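
To make the taskflow pattern referenced above concrete, here is a minimal
sketch (editor's illustration; the task names are made up and the taskflow
API is assumed as of this era):

    from taskflow import engines, task
    from taskflow.patterns import graph_flow

    class ProduceConfig(task.Task):
        # Publishes 'a1_out' into flow storage for dependent tasks.
        default_provides = 'a1_out'

        def execute(self):
            return 'config-value-from-a1'

    class ConsumeConfig(task.Task):
        # Declares its dependency simply by naming 'a1_out' as an argument.
        def execute(self, a1_out):
            print('b1 configured with %s' % a1_out)

    flow = graph_flow.Flow('demo')
    flow.add(ProduceConfig('a1'), ConsumeConfig('b1'))
    engines.run(flow)  # orders a1 before b1 based on the data dependency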
> >
> > From: Lakshminaraya Renganarayana 
> > Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
> > Date: Wednesday, October 9, 2013 11:31 AM
> > To: OpenStack Development Mailing List

> > Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
> > proposal for workflows
> >
> > Steven Hardy  wrote on 10/09/2013 05:24:38 AM:
> >
> > >
> > > So as has already been mentioned, Heat defines an internal workflow,
based
> > > on the declarative model defined in the template.
> > >
> > > The model should define dependencies, and Heat should convert those
> > > dependencies into a workflow internally.  IMO if the user also needs
to
> > > describe a workflow explicitly in the template, then we've probably
failed
> > > to provide the right template interfaces for describing
depenendencies.
> >
> > I agree with Steven here, models should define the dependencies and Heat
> > should realize/enforce them. An important design issue is the granularity at
> > which dependencies are defined and enforced. I am aware of the wait-condition
> > and signal constructs in Heat, but I find them a bit low-level as they are prone
> > to the classic dead-lock and race condition problems.  I would like to have
> > higher level constructs that support finer-granularity dependences which
> > are needed for software orchestration. Reading through the various discussions
> > on this topic in this mailing list, I see that many would like to have such
> > higher level constructs for coordination.
> >
> > In our experience with software orchestration using our own DSL and also with
> > some extensions to Heat, we found the granularity of VMs or Resources to be
> > too coarse for defining dependencies for software orchestration. For example,
> > consider a two VM app, with VMs vmA, vmB, and a set of software components
> > (ai's and bi's) to be installed on them:
> >
> > vmA = base-vmA + a1 + a2 + a3
> > vmB = base-vmB + b1 + b2 + b3
> >
> > let us say that software component b1 of vmB requires a config value produced
> > by software component a1 of vmA. How to declaratively model this dependence?
> > Clearly, modeling a dependence between just base-vmA and base-vmB is not
> > enough. However, defining a dependence between the whole of vmA and vmB is too
> > coarse. It would be ideal to be able to define a dependence at the granularity
> > of software components, i.e.,
> > vm

Re: [openstack-dev] TC Candidacy

2013-10-10 Thread Thierry Carrez
Chmouel Boudjnah wrote:
> Hi,
> 
> I'd like to put my candidacy for a position on the OpenStack Technical
> Committee.

Confirmed.

> I have been involved with OpenStack since before it actually existed
> while working on Swift at the Rackspace Cloud.
> 
> My experience with OpenSource started in the late 90s while working on
> the Linux-Mandrake distribution taking care of to the low-level
> components of the distribution and contributing to numerous OpenSource
> projects.
> 
> My focus in OpenStack has been mainly on Swift and Keystone and
> Devstack making sure Swift (acting as a core-dev there) has been well
> integrated within OpenStack.
> 
> I have been speaking at conferences and meetups on OpenStack around
> the world for the last couple of years.
> 
> I have written numerous articles on my blog[1] about Swift and
> Keystone and OpenStack in general, which seem to be pretty popular.
> 
> In addition to that I am the lead dev of a large team doing OpenStack
> development at my company, working full-time on various OpenStack
> matters.
> 
> I strongly believe in OpenStack as a whole and I was pretty happy when
> we moved over to programs, recognizing all the work that has been done
> by the various people making OpenStack this success.
> 
> I want to help the TC with integrating new projects and be more
> active when choosing new projects, helping them to get well
> integrated with the other OpenStack projects.
> 
> I would be honored to be a member of the TC, helping on technical
> matters based on the feedback I am getting from the different people
> I meet every day talking about OpenStack.
> 
> Thanks for taking me into consideration.
> 
> Chmouel.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2013-10-10 Thread Thierry Carrez
Chris Behrens wrote:
> Hi all,
> 
> I'd like to announce my candidacy for a seat on the OpenStack
> Technical Committee.

Confirmed.

> 
> - General background -
> 
> I have over 15 years of experience designing and building distributed
> systems.  I am currently a Principal Engineer at Rackspace, where
> I have been for a little over 3 years now.  Most of my time at
> Rackspace has been spent working on OpenStack as both a developer
> and a technical leader.  My first week at Rackspace was spent at
> the very first OpenStack Design Summit in Austin where the project
> was announced.
> 
> Prior to working at Rackspace, I held various roles over 14 years
> at Concentric Network Corporation/XO Communications including Senior
> Software Architect and eventually Director of Engineering.  My main
> focus there was on an award winning web/email hosting platform which
> we'd built to be extremely scalable and fault tolerant.  While my
> name is not on this patent, I was heavily involved with the development
> and design that led to US6611861.
> 
> - Why am I interested? -
> 
> This is my 3rd time running and I don't want to be considered a failure!
> 
> But seriously, as I have mentioned in the past, I have strong
> feelings for OpenStack and I want to help as much as possible to
> take it to the next level.  I have a lot of technical knowledge and
> experience building scalable distributed systems.  I would like to
> use this knowledge for good, not evil.
> 
> - OpenStack contributions -
> 
> As I mentioned above, I was at the very first design summit, so
> I've been involved with the project from the beginning.  I started
> the initial work for nova-scheduler shortly after the project was
> opened.  I also implemented the RPC support for kombu, making sure
> to properly support reconnecting and so forth which didn't work
> quite so well with the carrot code.  I've contributed a number of
> improvements designed to make nova-api more performant.  I've worked
> on the filter scheduler as well as designing and implementing the
> first version of the Zones replacement that we named 'Cells'.  And
> most recently, I was involved in the design and implementation of
> the unified objects code in nova.
> 
> During Icehouse, I'm hoping to focus on performance and stabilization
> while also helping to finish objects conversion.
> 
> - Summary -
> 
> I feel my years of experience contributing to and leading large scale
> technical projects along with my knowledge of the OpenStack projects
> will provide a good foundation for technical leadership.
> 
> Thanks,
> 
> - Chris
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Candidacy

2013-10-10 Thread Thierry Carrez
Robert Collins wrote:
> I'm interested in serving on the OpenStack TC.

Confirmed.

> 
> # About me
> 
> I've been working on OpenStack for only a year now, since joining
> Monty's merry gang of reprobates at HP. However I've been
> entirely focused on networking and distributed systems since ~2000 -
> having as highlights: core membership in the squid HTTP cache team,
> one of the founders of the Bazaar DVCS project, and a huge mix of
> testing and development efficiency thrown into the mix :). Earlier
> this year I was privileged to become a Python Software Foundation
> member, and I'm keen to see us collaborating more with upstream,
> particularly around testing.
> 
> I live in New Zealand, giving me overlap with the US and with a lot of
> Asia, but talking with Europe requires planning :)
> 
> # Platform
> 
> At the recent TripleO sprint in Seattle I was told I should apply for
> the TC; after some soul searching, I think yes, I should :).
> 
> Three key things occurred to me:
> 
> All of our joint hard work to develop OpenStack is wasted if users
> can't actually obtain and deploy it. This is why we're working on
> making deployment a systematic, rigorous and repeatable upstream
> activity: we need to know as part of the CI gate that what we're
> developing is usable, in real world scenarios. This is a
> multi-component problem: we can't bolt 'be deployable' on after all
> the code is written : and thats why during the last two cycles I've
> been talking about the problems deploying from trunk at the summits,
> and will continue to do so. This cross-program, cross-project effort
> ties into the core of what we do, and it's imperative we have folk on
> the TC that are actually deploying OpenStack (TripleO is running a
> live cloud - https://wiki.openstack.org/wiki/TripleO/TripleOCloud - all
> TripleO devs are helping deploy a production cloud).
> 
> I have a -lot- of testing experience, and ongoing unit and functional
> testing evolution will continue to play a significant role in
> OpenStack quality; the TC can help advise across all projects about
> automated testing; I'd be delighted to assist with that.
> 
> Finally, and I'm going to quote Monty here: "As a TC member, I will
> place OpenStack's interests over the interests of any individual
> project if a conflict between the project and OpenStack, or a project
> with another project, should arise." - I think this is a key attitude
> we should all hold: we're building an industry changing platform, and
> we need to think of the success of the whole platform as being *the*
> primary thing to aim for.
> 
> Thank you for your consideration,
> Rob
> 


-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2013-10-10 Thread Thierry Carrez
Joshua Harlow wrote:
> Howdy ya'll!
> 
> ==Who am I==
> 
> I'd like to also put myself up for the Technical Committee candidate
> position, via one of the seats that are being made available. 

Confirmed.

> I have been active with OpenStack since ~around~ diablo and have helped
> lead the effort to 'marry' OpenStack and Y! in a way that benefits both
> the community and Yahoo.
> 
> ==What I have done==
> 
> I have been/am an active contributor to nova, glance, cinder.
>  - History @ https://review.openstack.org/#/q/owner:harlowja,n,z
> 
> I have also helped create the following (along with others in the
> community):
>  - https://launchpad.net/anvil (a tool like devstack, that automatically
> builds OpenStack code & dependencies into packages)
>  - https://launchpad.net/taskflow (my active project, that has big
> plans/goals)
> 
> I also am a major contributor to: https://launchpad.net/cloud-init
> 
> ==What else I do== 
> 
> In my spare time I code more than I should, mountain bike, ski, and rock
> climb.
> 
> ==Background and Experience==
> 
> I work at Yahoo! as one of the technical leads on the OpenStack team
> where we have been working to get better involved in the OpenStack
> community and establishing OpenStack internally. We are focused on scale
> (tens of thousands of servers), reliability, security, and making the
> best software that is humanly possible (who doesn't want that)!
> 
> A few examples of projects that I have been on:
>  - Sponsored search stack (~8000 hosts across ~5 datacenters)
>  - Frontpage stack [www.yahoo.com] (millions of page views, huge scale)
>  - OpenStack (many users, lots of hypervisors, lots of vms, 4+ datacenters)
> 
> ==What I think I can bring==
> 
> I have been on various engineering teams at Yahoo! for the last 6 years.
> I have designed/architected and implemented code that runs on
> http://www.yahoo.com, the ad systems, the social network backends. Each
> project has required understanding how scale and reliability can be
> achieved, so that it’s possible to maximize uptime (thus getting more
> customers).
> 
> Currently I have been working on establishing OpenStack in Yahoo! and
> making sure Yahoo! keeps on being an active and innovative contributor.
> I believe I can help out in scale (how far can eventlet go...),
> architectural decisions (more services or less??) and help OpenStack be
> as reliable and manageable as possible (taskflow I think has a great
> potential for helping here).
> 
> I also believe that we as a community need to continue encouraging the
> growth of innovative projects and continue building OpenStack as a
> platform that drives the infrastructure of many (if not all) of the
> companies in the world (small and big). I believe the TC can help guide
> OpenStack into this direction (and continue guiding it) and I hope with
> myself on the TC (if voted in) that my unique experiences at Y! (ranging
> from deploying OpenStack, supporting it and developing future features
> for it) will be useful in guiding the general direction.
> 
> Thanks for taking me into consideration!
> 
> -Josh
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Common requirements for services' discussion

2013-10-10 Thread Salvatore Orlando
Can I just ask you to not call it NATaas... if you want to pick a name for
it, go for Natasha :)

By the way, the idea of a NAT service plugin was first introduced at the
Grizzly summit in San Diego.
One hurdle, not a big one however, is that the external gateway and
floating IP features of the L3 extension already implicitly implement NAT.
It will be important to find a solution to ensure NAT can be configured
explicitly as well while allowing for configuring external gateway and
floating IPs through the API in the same way that we do today.

Apart from this, another interesting aspect would be to see if we can
come up with an approach which results in an API which abstracts
networking aspects as much as possible. In other words, I would like to
avoid an API which ends up being "iptables over REST", if possible.

Regards,
Salvatore


On 10 October 2013 09:55, Bob Melander (bmelande) wrote:

>  Hi Edgar,
>
>  I'm also interested in a broadening of NAT capability in Neutron using
> the evolving service framework.
>
>  Thanks,
> Bob
>
>   From: Edgar Magana 
> Reply-To: OpenStack Development Mailing List <
> openstack-dev@lists.openstack.org>
> Date: Wednesday 9 October 2013 21:38
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Neutron] Common requirements for services'
> discussion
>
>   Hello all,
>
>  Is anyone working on NATaaS?
> I know we have some developer working on Router as a Service and they
> probably want to include NAT functionality but I have some interest in
> having NAT as a Service.
>
>  Please respond if somebody is interested in having some discussions
> about it.
>
>  Thanks,
>
>  Edgar
>
>   From: Sumit Naiksatam 
> Reply-To: OpenStack List 
> Date: Tuesday, October 8, 2013 8:30 PM
> To: OpenStack List 
> Subject: [openstack-dev] [Neutron] Common requirements for services'
> discussion
>
>  Hi All,
>
>  We had a VPNaaS meeting yesterday and it was felt that we should have a
> separate meeting to discuss the topics common to all services. So, in
> preparation for the Icehouse summit, I am proposing an IRC meeting on Oct
> 14th 22:00 UTC (immediately after the Neutron meeting) to discuss common
> aspects related to the FWaaS, LBaaS, and VPNaaS.
>
>  We will begin with service insertion and chaining discussion, and I hope
> we can collect requirements for other common aspects such as service
> agents, services instances, etc. as well.
>
>  Etherpad for service insertion & chaining can be found here:
>
> https://etherpad.openstack.org/icehouse-neutron-service-insertion-chaining
>
>  Hope you all can join.
>
>  Thanks,
> ~Sumit.
>
>
>  ___ OpenStack-dev mailing
> list OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-10 Thread Stan Lagun
This raises a number of questions:

1. What about conditional dependencies? Like config3 depends on (config1 AND
config2) OR config4.

2. How do I pass values between configs? For example config1 requires a value
from user input and config2 needs an output value obtained from applying
config1.

3. How would you do error handling? For example config3 on server3 requires
config1 to be applied on server1 and config2 on server2. Suppose that there
was an error while applying config2 (and config1 succeeded). How do I
specify the reaction to that? Maybe I then need to try to apply config4 to
server2 and continue, or maybe just roll everything back.

4. How do these config dependencies play with nested stacks and resources like
LoadBalancer that create such stacks? How do I specify that myConfig
depends on HA proxy being configured if that config was declared in a nested
stack that is generated by the resource's Python code and is not declared in
my HOT template?

5. The solution is not generic. For example, I want to write a HOT template
for my custom load-balancer and a scalable web-server group. The load
balancer config depends on all configs of the web-servers. But web-servers
are created dynamically (autoscaling). That means the dependency graph also
needs to be dynamically modified. But if you explicitly list config names
instead of something like "depends on all configs of web-farm X", you have no
way to describe such a rule. In other words we need a generic dependency, not
just a dependency on a particular config.

6. What would you do on STACK UPDATE that modifies the dependency graph?

The notation of configs and there


On Thu, Oct 10, 2013 at 4:25 AM, Angus Salkeld  wrote:

> On 09/10/13 19:31 +0100, Steven Hardy wrote:
>
>> On Wed, Oct 09, 2013 at 06:59:22PM +0200, Alex Rudenko wrote:
>>
>>> Hi everyone,
>>>
>>> I've read this thread and I'd like to share some thoughts. In my opinion,
>>> workflows (which run on VMs) can be integrated with heat templates as
>>> follows:
>>>
>>>1. workflow definitions should be defined separately and processed by
>>>stand-alone workflow engines (chef, puppet etc).
>>>
>>
>> I agree, and I think this is the direction we're headed with the
>> software-config blueprints - essentially we should end up with some new
>> Heat *resources* which encapsulate software configuration.
>>
>
> Exactly.
>
> I think we need a software-configuration-aas sub-project that knows
> how to take puppet/chef/salt/... config and deploy it. Then Heat just
> has Resources for these (OS::SoftwareConfig::Puppet).
> We should even move our WaitConditions and Metadata over to that
> yet-to-be-made service so that Heat is totally clean of software config.
>
> How would this solve ordering issues:
>
> resources:
>   config1:
>     type: OS::SoftwareConfig::Puppet
>     hosted_on: server1
>     ...
>   config2:
>     type: OS::SoftwareConfig::Puppet
>     hosted_on: server1
>     depends_on: config3
>     ...
>   config3:
>     type: OS::SoftwareConfig::Puppet
>     hosted_on: server2
>     depends_on: config1
>     ...
>   server1:
>     type: OS::Nova::Server
>     ...
>   server2:
>     type: OS::Nova::Server
>     ...
>
>
> Heat knows all about ordering:
> It starts the resources:
> server1, server2
> config1
> config3
> config2
>
> There is the normal contract in the client:
> we post the config to the software-config-service
> and we wait for the state == ACTIVE (when the config is applied)
> before progressing to a resource that is dependent on it.
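
A minimal sketch (editor's illustration; the client and its method names are
hypothetical) of that post-then-wait contract:

    import time

    def wait_for_active(client, config_id, poll_interval=5, timeout=600):
        # Poll the (hypothetical) software-config service until the config
        # reaches ACTIVE, i.e. it has actually been applied on the server.
        deadline = time.time() + timeout
        while time.time() < deadline:
            state = client.get_config(config_id)['state']
            if state == 'ACTIVE':
                return
            if state == 'FAILED':
                raise RuntimeError('config %s failed to apply' % config_id)
            time.sleep(poll_interval)
        raise RuntimeError('timed out waiting for config %s' % config_id)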
>
> -Angus
>
>
>
>> IMO there is some confusion around the scope of HOT, we should not be
>> adding functionality to it which already exists in established config
>> management tools IMO, instead we should focus on better integration with
>> existing tools at the resource level, and identifying template interfaces
>> which require more flexibility (for example serialization primitives)
>>
>> 2. the HOT resources should reference workflows which they require,
>>>specifying a type of workflow and the way to access a workflow
>>> definition.
>>>The workflow definition might be provided along with HOT.
>>>
>>
>> So again, I think this actually has very little to do with HOT.  The
>> *Heat* resources may define software configuration, or possibly some sort
>> of workflow, which is acted upon by $thing which is not Heat.
>>
>> So in the example provided by the OP, maybe you'd have a Murano resource,
>> which knows how to define the input to the Murano API, which might trigger
>> workflow type actions to happen in the Murano service.
>>
>> 3. Heat should treat the orchestration templates as transactions (i.e.
>>>Heat should be able to rollback in two cases: 1) if something goes
>>> wrong
>>>during processing of an orchestration workflow 2) when a stand-alone
>>>workflow engine reports an error during processing of a workflow
>>> associated
>>>with a resource)
>>>
>>
>> So we already have the capability for resources to recieve signals, which
>> would allow (2) in the asynchronous case. 

Re: [openstack-dev] [Neutron] Common requirements for services' discussion

2013-10-10 Thread Bob Melander (bmelande)
Hi Edgar,

I'm also interested in a broadening of NAT capability in Neutron using the 
evolving service framework.

Thanks,
Bob

From: Edgar Magana <emag...@plumgrid.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Wednesday 9 October 2013 21:38
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] Common requirements for services'
discussion

Hello all,

Is anyone working on NATaaS?
I know we have some developers working on Router as a Service and they
probably want to include NAT functionality, but I have some interest in having
NAT as a Service.

Please respond if somebody is interested in having some discussions about it.

Thanks,

Edgar

From: Sumit Naiksatam <sumitnaiksa...@gmail.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Tuesday, October 8, 2013 8:30 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Neutron] Common requirements for services' discussion
Subject: [openstack-dev] [Neutron] Common requirements for services' discussion

Hi All,

We had a VPNaaS meeting yesterday and it was felt that we should have a 
separate meeting to discuss the topics common to all services. So, in 
preparation for the Icehouse summit, I am proposing an IRC meeting on Oct 14th 
22:00 UTC (immediately after the Neutron meeting) to discuss common aspects 
related to the FWaaS, LBaaS, and VPNaaS.

We will begin with service insertion and chaining discussion, and I hope we can 
collect requirements for other common aspects such as service agents, services 
instances, etc. as well.

Etherpad for service insertion & chaining can be found here:
https://etherpad.openstack.org/icehouse-neutron-service-insertion-chaining

Hope you all can join.

Thanks,
~Sumit.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-10 Thread Bob Melander (bmelande)
Harshad,

By service instance I referred to the logical entities that Neutron creates 
(e.g. Neutron's router). I see a service VM as a (virtual) host where one or 
several service instances can be placed.
The service VM (at least if managed through Nova) will belong to a tenant and 
the service instances are owned by tenants.

If the service VM tenant is different from service instance tenants (which is
a simple way to "hide" the service VM from the tenants owning the service
instances) then it is not clear to me how the existing access control in
OpenStack will support pinning the service VM to a particular tenant owning a
service instance.

Thanks,
Bob

From: Harshad Nakil <hna...@contrailsystems.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Wednesday 9 October 2013 18:56
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases
Subject: Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

Admin creating a service instance for a tenant could be a common use case. But
ownership of a service can be controlled via the already existing access
control mechanisms in OpenStack. If the service instance belonged to a
particular project, then other tenants should by definition not be able to use
this instance.

On Tue, Oct 8, 2013 at 11:34 PM, Bob Melander (bmelande)
<bmela...@cisco.com> wrote:
For use case 2, ability to "pin" an admin/operator owned VM to a particular 
tenant can be useful.
I.e., the service VMs are owned by the operator but a particular service VM 
will only allow service instances from a single tenant.

Thanks,
Bob

From: , Greg J <greg.j.regn...@intel.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Tuesday 8 October 2013 23:48
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Neutron] Service VM discussion - Use Cases
Subject: [openstack-dev] [Neutron] Service VM discussion - Use Cases

Hi,

Re: blueprint:  
https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms

Before going into more detail on the mechanics, would like to nail down use 
cases.

Based on input and feedback, here is what I see so far.



Assumptions:



- a 'Service VM' hosts one or more 'Service Instances'

- each Service Instance has one or more Data Ports that plug into Neutron 
networks

- each Service Instance has a Service Management i/f for Service management 
(e.g. FW rules)

- each Service Instance has a VM Management i/f for VM management (e.g. health 
monitor)



Use case 1: Private Service VM

Owned by tenant

VM hosts one or more service instances

Ports of each service instance only plug into network(s) owned by tenant



Use case 2: Shared Service VM

Owned by admin/operator

VM hosts multiple service instances

The ports of each service instance plug into one tenants network(s)

Service instance provides isolation from other service instances within VM



Use case 3: Multi-Service VM

Either Private or Shared Service VM

Support multiple service types (e.g. FW, LB, …)


-  Greg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-10 Thread Bob Melander (bmelande)
While specification of which networks a service VM has interfaces on indicates
which tenant(s) it serves, that by itself does not allow setting constraints
on which tenants that VM will accept to serve.
Setting such constraints could be taken a long way, almost like ACLs. However,
I'm not proposing something that extensive. The ability to flag that a certain
VM should only be allowed to serve a single tenant (but still multiple service
instances for that tenant) would cover a requirement we've been given in work
we've done.
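
A minimal sketch (editor's illustration; the data model is hypothetical) of
such a single-tenant constraint:

    class ServiceVM(object):
        def __init__(self, vm_id, owner_tenant, pinned_tenant=None):
            self.vm_id = vm_id
            self.owner_tenant = owner_tenant  # e.g. the admin/operator tenant
            self.pinned_tenant = pinned_tenant

        def can_host(self, instance_tenant):
            # An unpinned VM may serve any tenant; a pinned VM serves
            # service instances from exactly one tenant.
            return (self.pinned_tenant is None or
                    self.pinned_tenant == instance_tenant)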

Thanks,
Bob


From: Sumit Naiksatam <sumitnaiksa...@gmail.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Wednesday 9 October 2013 23:09
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases
Subject: Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

Thanks Bob, I agree this is an important aspect of the implementation. However, 
apart from being able to specify which network(s) the VM has interfaces on, 
what more needs to be done specifically in the proposed library to achieve the 
tenant level isolation?

Thanks,
~Sumit.


On Tue, Oct 8, 2013 at 11:34 PM, Bob Melander (bmelande)
<bmela...@cisco.com> wrote:
For use case 2, ability to "pin" an admin/operator owned VM to a particular 
tenant can be useful.
I.e., the service VMs are owned by the operator but a particular service VM 
will only allow service instances from a single tenant.

Thanks,
Bob

From: , Greg J <greg.j.regn...@intel.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Tuesday 8 October 2013 23:48
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Neutron] Service VM discussion - Use Cases
Subject: [openstack-dev] [Neutron] Service VM discussion - Use Cases

Hi,

Re: blueprint:  
https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms

Before going into more detail on the mechanics, would like to nail down use 
cases.

Based on input and feedback, here is what I see so far.



Assumptions:



- a 'Service VM' hosts one or more 'Service Instances'

- each Service Instance has one or more Data Ports that plug into Neutron 
networks

- each Service Instance has a Service Management i/f for Service management 
(e.g. FW rules)

- each Service Instance has a VM Management i/f for VM management (e.g. health 
monitor)



Use case 1: Private Service VM

Owned by tenant

VM hosts one or more service instances

Ports of each service instance only plug into network(s) owned by tenant



Use case 2: Shared Service VM

Owned by admin/operator

VM hosts multiple service instances

The ports of each service instance plug into one tenants network(s)

Service instance provides isolation from other service instances within VM



Use case 3: Multi-Service VM

Either Private or Shared Service VM

Support multiple service types (e.g. FW, LB, …)


-  Greg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Neutron support for passthrough of networking devices?

2013-10-10 Thread Prashant Upadhyaya
Hi Chris,

I note two of your comments --

> > When we worked on the H release, we targeted basic PCI support like
> > accelerator cards or encryption cards etc.

PU> So I note that you are already solving the PCI pass-through use case
somehow? How? If you have solved this already in terms of architecture, then
SR-IOV should not be difficult.

> Do we run into the same complexity if we have spare physical NICs on
> the host that get passed in to the guest?

PU> In part you are correct. However there is one additional thing. When we
have multiple physical NICs, the Compute Node's Linux is still in control of
those. So the data into and out of the VM still travels all those tunneling
devices and finally goes out of these physical NICs. The NIC is _not_ exposed
directly to the VM. The VM still has the emulated NIC which interfaces out
with the tap and over the Linux bridge.
In case of SR-IOV, you can dice up a single physical NIC into multiple NICs
(effectively), and expose each of these diced-up NICs to a VM each. This means
that the VM will now 'directly' access the NIC, bypassing the Hypervisor.
Similar to PCI pass-through, but now you have one pass-through for each VM
with the diced NIC. So that is a major consideration to keep in mind, because
this means that we will bypass all those tunneling devices in the middle. But
since you say that you are working with PCI passthrough and seem to have
solved it, this is a mere extension of that.

Further, for a single physical NIC which is diced up and is connected to VMs
on a single Compute Node, the NIC provides a 'switch' using which these VMs
can talk to each other. This can aid us because we have bypassed all the
tunneling devices.
But if there are two physical NICs which were diced up with SR-IOV, then VMs
on the diced parts of the first physical NIC cannot communicate easily with
the VMs on the diced parts of the second physical NIC.
So a native implementation has to be there on the Compute Node which will aid
this (this native implementation will take over the Physical Function, PF, of
each NIC) and will be able to 'switch' the packets between VMs of different
physical diced-up NICs [if we need that use case].
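
As a minimal sketch (editor's illustration; assumes libvirt-python and a local
qemu URI) of how the host side can discover SR-IOV physical functions (PFs)
and their diced-up virtual functions (VFs):

    import libvirt

    conn = libvirt.open('qemu:///system')
    for name in conn.listDevices('pci', 0):
        dev = conn.nodeDeviceLookupByName(name)
        xml = dev.XMLDesc(0)
        if 'virt_functions' in xml:    # a PF advertising SR-IOV VFs
            print('SR-IOV PF: %s' % name)
        elif 'phys_function' in xml:   # a VF pointing back to its PF
            print('  SR-IOV VF: %s' % name)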

Regards
-Prashant

-Original Message-
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Thursday, October 10, 2013 12:15 PM
To: Jiang, Yunhong; Chris Friesen; openst...@lists.openstack.org
Cc: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: Re: [openstack-dev] [Openstack] Neutron support for passthrough of 
networking devices?

Hi Chris, Jiang,
We are also looking into enhancement of basic PCI pass-through to provide
SR-IOV based networking.
In order to support automatic provisioning, it requires awareness of which
virtual network to connect the requested SR-IOV device to.
This should be considered by the scheduler in order to run the VM on a Host
that is connected to the physical network.
It requires Neutron to be aware of the PCI pass-through allocated device and
to allocate a port on the virtual network.
It will require some sort of VIF Driver to manage the libvirt device settings.
It may also require a neutron agent to apply port policy on the device. I
think it makes sense to support this as part of the ML2 neutron plugin (via a
mechanism driver).
In case you plan to attend the design summit, maybe it is worth collaborating
there and discussing what can be done in the coming Icehouse release?

Regards,
Irena

-Original Message-
From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
Sent: Thursday, October 10, 2013 2:26 AM
To: Chris Friesen; openst...@lists.openstack.org
Subject: Re: [Openstack] Neutron support for passthrough of networking devices?

Several things come to mind:
a) The NIC needs more information, like the switch, and this information needs
to be managed by nova also. We have basic support, but it is not fully
implemented.
b) How to set up the device, including the MAC address or 802.1Qbh etc.
Libvirt has several options to support it; more work is needed to support
them, and we also need to consider other virt drivers like xenapi etc.
c) How to achieve the isolation of tenants, and how to set up things like the
router in Neutron. I'm not well versed in Neutron, but I think others may have
more ideas on it.

Thanks
--jyh

> -Original Message-
> From: Chris Friesen [mailto:chris.frie...@windriver.com]
> Sent: Wednesday, October 09, 2013 11:53 AM
> To: openst...@lists.openstack.org
> Subject: Re: [Openstack] Neutron support for passthrough of networking
> devices?
>
> On 10/09/2013 12:31 PM, Jiang, Yunhong wrote:
> > When we worked on H release, we target for basic PCI support like
> > accelerator card or encryption card etc. I think SR-IOV network
> > support is more complex and requires more effort, in both Nova side
> > and Neutron side. We are working on some enhancement in Nova side
> > now. But the whole picture may need more time/discussion.
>
> Can you elaborate on the complexities?  Assuming you
