[openstack-dev] [nova] nova backup not working in stable/icehouse?

2014-08-28 Thread Preston L. Bannister
Looking to put a proper implementation of instance backup into OpenStack.
Started by writing a simple set of baseline tests and running against the
stable/icehouse branch. They failed!

https://github.com/dreadedhill-work/openstack-backup-scripts

Scripts and configuration are in the above. Simple tests.

At first I assumed there was a configuration error in my Devstack ... but
at this point I believe the errors are in fact in OpenStack. (Also I have
rather more colorful things to say about what is and is not logged.)

Try to backup bootable Cinder volumes attached to instances ... and all
fail. Try to backup instances booted from images, and all-but-one fail
(without logged errors, so far as I see).

Was concerned about preserving existing behaviour (as I am currently
hacking the Nova backup API), but ... if the existing is badly broken, this
may not be a concern. (Makes my job a bit simpler.)

If someone is using "nova backup" successfully (more than one backup at a
time), I *would* rather like to know!

Anyone with different experience?
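(For anyone reproducing: the semantics under test are those of the `nova backup`
call — keep at most `rotation` images of a given backup type, pruning oldest
first. Below is a minimal Python sketch of that rotation logic; it is purely
illustrative and not nova's actual implementation.)

```python
def prune_backups(backups, backup_type, rotation):
    """Return the backup records to delete so that at most `rotation`
    backups of `backup_type` remain, pruning the oldest first.

    `backups` is a list of (name, backup_type, created_at) tuples,
    where created_at is any sortable timestamp.
    """
    same_type = sorted(
        (b for b in backups if b[1] == backup_type),
        key=lambda b: b[2],  # oldest first
    )
    excess = len(same_type) - rotation
    return same_type[:excess] if excess > 0 else []
```

For example, with rotation=2 and three existing daily backups, only the oldest
daily backup is selected for deletion; weekly backups are untouched.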
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Review change to nova api pretty please?

2014-08-28 Thread Alex Leonhardt
Hi All,

Could someone please do the honor :)
https://review.openstack.org/#/c/116472/ ?
PEP8 failed, but that's not my fault ;) hehe

Thanks!
Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Kevin Benton
I see. Then if a group's ultimate goal is their own project, would the
Neutron incubator even make sense as a first step?


On Thu, Aug 28, 2014 at 6:48 PM, Kyle Mestery  wrote:

> On Thu, Aug 28, 2014 at 5:55 PM, Kevin Benton  wrote:
> > I think we need some clarification here too about the difference between
> the
> > general OpenStack Incubation and the Neutron incubation. From my
> > understanding, the Neutron incubation isn't the path to a separate
> project
> > and independence from Neutron. It's a process to get into Neutron. So if
> you
> > want to keep it as a separate project with its own cores and a PTL,
> Neutron
> > incubation would not be the way to go.
>
> That's not true, there are 3 ways out of incubation: 1) The project
> withers and dies on its own. 2) The project is spun back into
> Neutron. 3) The project is spun out into its own project.
>
> However, it's worth noting that if the project is spun out into its
> own entity, it would have to go through incubation to become a fully
> functioning OpenStack project of its own.
>
> >
> >
> > On Thu, Aug 28, 2014 at 3:04 PM, Susanne Balle 
> > wrote:
> >>
> >> Just for us to learn about the incubator status, here are some of the
> info
> >> on incubation:
> >>
> >> https://wiki.openstack.org/wiki/Governance/Approved/Incubation
> >> https://wiki.openstack.org/wiki/Governance/NewProjects
> >>
> >> Susanne
> >>
> >>
> >> On Thu, Aug 28, 2014 at 5:57 PM, Susanne Balle 
> >> wrote:
> >>>
> >>>  I would like to discuss the pros and cons of putting Octavia into the
> >>> Neutron LBaaS incubator project right away. If it is going to be the
> >>> reference implementation for LBaaS v2 then I believe Octavia belongs in
> >>> the Neutron LBaaS v2 incubator.
> >>>
> >>> The Pros:
> >>> * Octavia is in Openstack incubation right away along with the lbaas v2
> >>> code. We do not have to apply for incubation later on.
> >>> * As an incubated project we have our own core and should be able to
> >>> commit our code
> >>> * We are starting out as an OpenStack incubated project
> >>>
> >>> The Cons:
> >>> * Not sure of the velocity of the project
> >>> * Incubation not well defined.
> >>>
> >>> If Octavia starts as a standalone stackforge project, we are assuming that
> >>> it would be looked on favorably when it is time to move it into incubated
> >>> status.
> >>>
> >>> Susanne
> >>>
> >>>
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Kevin Benton
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CEILOMETER] Trending Alarm

2014-08-28 Thread Nejc Saje



On 08/28/2014 08:07 PM, Henrique Truta wrote:


Hello, everyone!

I want to have an alarm that is triggered by some kind of trend. For
example, an alarm that is triggered when the CPU utilization is growing
steadily (for example, has grown approximately 10% per 5 minutes, where
the percentage and time window would be parameters, but then I would
also evaluate more complex ways to compute trends). Is there any way to
do this kind of task?

I took a brief look on the code and saw that new evaluators can be
created. So, I thought about two possibilities: the former involves
creating a new Evaluator that considers a given window size, and the
latter involves adding a "change rate" comparator, which would make it
possible to set the growth rate as the threshold.

What do you think about it?


What about adding a new rate_of_change transformer on the cpu_util samples?

http://docs.openstack.org/developer/ceilometer/configuration.html#rate-of-change-transformer

That way you would have a new meter that denotes the rate-of-change of 
the cpu utilization and could set up alarms on that.
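The transformer's arithmetic is simple: from two successive cumulative
samples it emits a scaled per-second growth rate. A minimal Python sketch
(illustrative only — the real transformer lives in Ceilometer's pipeline and
is configured in pipeline.yaml; the scale expression shown in the usage note
is an assumption that depends on your deployment):

```python
def rate_of_change(prev, curr, scale=1.0):
    """Compute a rate meter from two successive cumulative samples.

    prev and curr are (timestamp_seconds, value) pairs; returns the
    scaled per-second growth, i.e. roughly what a rate_of_change
    transformer would emit as the derived meter's value.
    """
    (t0, v0), (t1, v1) = prev, curr
    if t1 <= t0:
        raise ValueError("samples must be time-ordered")
    return (v1 - v0) / (t1 - t0) * scale
```

For cpu → cpu_util, the documented scale is along the lines of
`100.0 / (10**9 * cpu_number)`, converting cumulative CPU nanoseconds into a
utilization percentage per vCPU.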


Cheers,
Nejc



Best Regards




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] Migration review update

2014-08-28 Thread Brandon Logan
Updated the migration review:
https://review.openstack.org/#/c/114671/3

Added the active_node and spare_node table.  Not sure if this is what
people had in mind for this, but we do need a table to map load balancer
to VMs/nodes/devices, so I figured an active_node table would be
appropriate for that.  Please comment on what needs to be changed if I have
the wrong idea.

Thanks,
Brandon
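
(A minimal sketch of such a mapping table, using sqlite3 for illustration —
the column names here are assumptions for discussion, not the actual
migration's schema.)

```python
import sqlite3

# Illustrative load-balancer-to-node mapping table, in the spirit of the
# proposed active_node table: one row per VM/node backing a load balancer,
# with the compute host recorded for affinity/anti-affinity decisions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE active_node (
        id INTEGER PRIMARY KEY,
        load_balancer_id TEXT NOT NULL,
        node_id TEXT NOT NULL,   -- VM/node/device backing the LB
        host TEXT,               -- compute host, useful for anti-affinity
        UNIQUE (load_balancer_id, node_id)
    )
""")
conn.execute(
    "INSERT INTO active_node (load_balancer_id, node_id, host) VALUES (?, ?, ?)",
    ("lb-1", "vm-a", "host-1"),
)
rows = conn.execute(
    "SELECT node_id, host FROM active_node WHERE load_balancer_id = ?",
    ("lb-1",),
).fetchall()
print(rows)
```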
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBass] Design sessions for Neutron LBaaS. What do we want/need?

2014-08-28 Thread Brandon Logan
Adding correct subject tags because I replied to the original email.  I
blame you Susanne!

On Thu, 2014-08-28 at 23:47 -0500, Brandon Logan wrote:
> I'm not sure exactly how many design sessions will be available but it
> seems like 2 for Neutron LBaaS and 2 for Octavia will be hard to
> accomplish.  Neutron LBaaS had 2 in Atlanta didn't it?  One broad one
> for Neutron LBaaS and one more specific to TLS and L7.  I'm totally on
> board for having 2 for each though.  I just think since Octavia is still
> just an idea at this point, it'd be hard getting space and time for a
> design session for it, much less 2.  Doesn't stop us from doing the pods
> or ad hoc sessions though.
> 
> As for topics:
> Neutron LBaaS
> 1) I've been wanting to try and solve the problem (at least I think it
> is a problem) of drivers being responsible for managing the status of
> entities.  In my opinion, Neutron LBaaS should be as consistent as
> possible no matter what drivers are being used.  This is caused by
> supporting both Asynchronous and Synchronous drivers.  I've got some
> ideas on how to solve this.
> 2) Different status types on entities.  Operating status and
> Provisioning status.
> 
> Octavia
> I hope we have gotten far enough along this to have some really detailed
> design discussions.  Hopefully we are within reach of a 0.5 milestone.
> Other than that, too early to tell what exact kind of design talks we
> will need.
> 
> Thanks,
> Brandon
> 
> On Thu, 2014-08-28 at 10:49 -0400, Susanne Balle wrote:
> > 
> > 
> > LBaaS team,
> > 
> > 
> > As we discussed in the Weekly LBaaS meeting this morning we should
> > make sure we get the design sessions scheduled that we are interested
> > in. 
> > 
> > 
> > We currently agreed on the following:
> > 
> > 
> > * Neutron LBaaS. we want to schedule 2 sessions. I am assuming that we
> > want to go over status and also the whole incubator thingy and how we
> > will best move forward. 
> > 
> > 
> > * Octavia: We want to schedule 2 sessions. 
> > ---  During one of the sessions I would like to discuss the pros and
> > cons of putting Octavia into the Neutron LBaaS incubator project right
> > away. If it is going to be the reference implementation for LBaaS v2
> > then I believe Octavia belongs in the Neutron LBaaS v2 incubator. 
> > 
> > 
> > * Flavors which should be coordinated with markmcclain and
> > enikanorov. 
> > --- https://review.openstack.org/#/c/102723/
> > 
> > 
> > Is this too many sessions given the constraints? I am assuming that we
> > can also meet at the pods like we did at the last summit. 
> > 
> > 
> > thoughts?
> > 
> > 
> > Regards Susanne
> > 
> > Thierry Carrez wrote to the OpenStack list on Aug 27:
> > 
> > Hi everyone,
> > 
> > I've been thinking about what changes we can bring to
> > the Design Summit
> > format to make it more productive. I've heard the feedback from the
> > mid-cycle meetups and would like to apply some of those ideas for
> > Paris,
> > within the constraints we have (already booked space and time). Here
> > is
> > something we could do:
> > 
> > Day 1. Cross-project sessions / incubated projects / other projects
> > 
> > I think that worked well last time. 3 parallel rooms where we can
> > address top cross-project questions, discuss the results of the
> > various
> > experiments we conducted during juno. Don't hesitate to schedule 2
> > slots
> > for discussions, so that we have time to come to the bottom of those
> > issues. Incubated projects (and maybe "other" projects, if space
> > allows)
> > occupy the remaining space on day 1, and could occupy "pods" on the
> > other days.
> > 
> > Day 2 and Day 3. Scheduled sessions for various programs
> > 
> > That's our traditional scheduled space. We'll have 33% fewer slots
> > available. So, rather than trying to cover all the scope, the idea
> > would
> > be to focus those sessions on specific issues which really require
> > face-to-face discussion (which can't be solved on the ML or using spec
> > discussion) *or* require a lot of user feedback. That way, appearing
> > in
> > the general schedule is very helpful. This will require us to be a lot
> > stricter on what we accept there and what we don't -- we won't have
> > space for courtesy sessions anymore, and traditional/unnecessary
> > sessions (like my traditional "release schedule" one) should just move
> > to the mailing-list.
> > 
> > Day 4. Contributors meetups
> > 
> > On the last day, we could try to split the space so that we can
> > conduct
> > parallel midcycle-meetup-like contributors gatherings, with no time
> > boundaries and an open agenda. Large projects could get a full day,
> > smaller projects would get half a day (but could continue the
> > discussion
> > in a local bar). Ideally that meetup would end with some alignment on
> > release goals, but the idea is to make the best of that time together
> > to
> > solve the issues you have. Friday would finish with the design summit
> > feedback session, for those who are still around.

Re: [openstack-dev] Design sessions for Neutron LBaaS. What do we want/need?

2014-08-28 Thread Brandon Logan
I'm not sure exactly how many design sessions will be available but it
seems like 2 for Neutron LBaaS and 2 for Octavia will be hard to
accomplish.  Neutron LBaaS had 2 in Atlanta didn't it?  One broad one
for Neutron LBaaS and one more specific to TLS and L7.  I'm totally on
board for having 2 for each though.  I just think since Octavia is still
just an idea at this point, it'd be hard getting space and time for a
design session for it, much less 2.  Doesn't stop us from doing the pods
or ad hoc sessions though.

As for topics:
Neutron LBaaS
1) I've been wanting to try and solve the problem (at least I think it
is a problem) of drivers being responsible for managing the status of
entities.  In my opinion, Neutron LBaaS should be as consistent as
possible no matter what drivers are being used.  This is caused by
supporting both Asynchronous and Synchronous drivers.  I've got some
ideas on how to solve this.
2) Different status types on entities.  Operating status and
Provisioning status.

Octavia
I hope we have gotten far enough along this to have some really detailed
design discussions.  Hopefully we are within reach of a 0.5 milestone.
Other than that, too early to tell what exact kind of design talks we
will need.

Thanks,
Brandon

On Thu, 2014-08-28 at 10:49 -0400, Susanne Balle wrote:
> 
> 
> LBaaS team,
> 
> 
> As we discussed in the Weekly LBaaS meeting this morning we should
> make sure we get the design sessions scheduled that we are interested
> in. 
> 
> 
> We currently agreed on the following:
> 
> 
> * Neutron LBaaS. we want to schedule 2 sessions. I am assuming that we
> want to go over status and also the whole incubator thingy and how we
> will best move forward. 
> 
> 
> * Octavia: We want to schedule 2 sessions. 
> ---  During one of the sessions I would like to discuss the pros and
> cons of putting Octavia into the Neutron LBaaS incubator project right
> away. If it is going to be the reference implementation for LBaaS v2
> then I believe Octavia belongs in the Neutron LBaaS v2 incubator. 
> 
> 
> * Flavors which should be coordinated with markmcclain and
> enikanorov. 
> --- https://review.openstack.org/#/c/102723/
> 
> 
> Is this too many sessions given the constraints? I am assuming that we
> can also meet at the pods like we did at the last summit. 
> 
> 
> thoughts?
> 
> 
> Regards Susanne
> 
> Thierry Carrez wrote to the OpenStack list on Aug 27:
> 
> Hi everyone,
> 
> I've been thinking about what changes we can bring to
> the Design Summit
> format to make it more productive. I've heard the feedback from the
> mid-cycle meetups and would like to apply some of those ideas for
> Paris,
> within the constraints we have (already booked space and time). Here
> is
> something we could do:
> 
> Day 1. Cross-project sessions / incubated projects / other projects
> 
> I think that worked well last time. 3 parallel rooms where we can
> address top cross-project questions, discuss the results of the
> various
> experiments we conducted during juno. Don't hesitate to schedule 2
> slots
> for discussions, so that we have time to come to the bottom of those
> issues. Incubated projects (and maybe "other" projects, if space
> allows)
> occupy the remaining space on day 1, and could occupy "pods" on the
> other days.
> 
> Day 2 and Day 3. Scheduled sessions for various programs
> 
> That's our traditional scheduled space. We'll have 33% fewer slots
> available. So, rather than trying to cover all the scope, the idea
> would
> be to focus those sessions on specific issues which really require
> face-to-face discussion (which can't be solved on the ML or using spec
> discussion) *or* require a lot of user feedback. That way, appearing
> in
> the general schedule is very helpful. This will require us to be a lot
> stricter on what we accept there and what we don't -- we won't have
> space for courtesy sessions anymore, and traditional/unnecessary
> sessions (like my traditional "release schedule" one) should just move
> to the mailing-list.
> 
> Day 4. Contributors meetups
> 
> On the last day, we could try to split the space so that we can
> conduct
> parallel midcycle-meetup-like contributors gatherings, with no time
> boundaries and an open agenda. Large projects could get a full day,
> smaller projects would get half a day (but could continue the
> discussion
> in a local bar). Ideally that meetup would end with some alignment on
> release goals, but the idea is to make the best of that time together
> to
> solve the issues you have. Friday would finish with the design summit
> feedback session, for those who are still around.
> 
> 
> I think this proposal makes the best use of our setup: discuss clear
> cross-project issues, address key specific topics which need
> face-to-face time and broader attendance, then try to replicate the
> success of midcycle meetup-like open unscheduled time to discuss
> whatever is hot at this point.
> 
> There are still details to work out.

Re: [openstack-dev] [cinder]pylint errors with hashlib

2014-08-28 Thread John Griffith
On Mon, Aug 25, 2014 at 8:47 PM, Clark Boylan  wrote:

> On Mon, Aug 25, 2014, at 06:45 PM, Murali Balcha wrote:
> > Pylint on my patch is failing with the following error:
> >
> > Module 'hashlib' has no 'sha256'
> >
> > Cinder pylint already has following exceptions,
> >
> >
> > pylint_exceptions:["Instance of 'sha1' has no 'update' member", ""]
> >
> > pylint_exceptions:["Module 'hashlib' has no 'sha224' member", ""]
> >
> >
> > So I think "hashlib has no 'sha256'" should be added to the exception
> > list as well. How can I update the exception list?
> >
> >
> > Thanks,
> >
> > Murali Balcha
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> I think this may be related to your install of python. Mine does not
> have this problem.
>
> $ python
> Python 2.7.6 (default, Mar 22 2014, 22:59:56)
> [GCC 4.8.2] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import hashlib
> >>> hashlib.sha256
> 
> >>> hashlib.sha224
> 
> >>> s = hashlib.sha1()
> >>> s.update('somestring')
> >>>
>
> You should not need to treat these as acceptable failures.
>
> Clark
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
The error pointed out by Murali is actually showing up in the gate [1].  I
think adding the pylint exception is fine in this case.

[1]:
http://logs.openstack.org/68/110068/8/check/gate-cinder-pylint/8c6813d/console.html
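
(For context: this is a static-analysis gap rather than a runtime problem —
hashlib constructs its sha* functions dynamically at import time, which is why
pylint can miss them even though they exist. A quick runtime check:)

```python
import hashlib

# hashlib's digest constructors (sha224, sha256, ...) are created
# dynamically when the module is imported, so static analyzers like
# pylint may not see them even though they work fine at runtime.
digest = hashlib.sha256(b"somestring").hexdigest()
print(len(digest))   # sha256 hex digests are 64 characters long
print(hasattr(hashlib, "sha224"))
```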
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [LBaaS] LBaaS v2 API syntax additions/changes

2014-08-28 Thread Brandon Logan
Hi Yair,

On Thu, 2014-08-28 at 07:47 -0400, Yair Fried wrote:
> I would like to add a question to John's list
> 
> 
> 
> - Original Message -
> > From: "John Schwarz" 
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> > 
> > Sent: Tuesday, August 26, 2014 2:22:33 PM
> > Subject: Re: [openstack-dev] [Neutron] [LBaaS] LBaaS v2 API syntax  
> > additions/changes
> > 
> > 
> > 
> > On 08/25/2014 10:06 PM, Brandon Logan wrote:
> > >>
> > >> 2. Therefore, there should be some configuration to specifically enable
> > >> either version (not both) in case LBaaS is needed. In this case, the
> > >> other version is disabled (ie. a REST query for non-active version
> > >> should return a "not activated" error). Additionally, adding a
> > >> 'lb-version' command to return the version currently active seems like a
> > >> good user-facing idea. We should see how this doesn't negatively affect
> > >> the db migration process (for example, allowing read-only commands for
> > >> both versions?)
> > > 
> > > A /version endpoint can be added for both v1 and v2 extensions and
> > > service plugins.  If it doesn't already exist, it would be nice if
> > > neutron had an endpoint that would return the list of loaded extensions
> > > and their versions.
> > > 
> > There is 'neutron ext-list', but I'm not familiar enough with it or with
> > the REST API to say if we can use that.
> > >>
> > >> 3. Another decision that needs to be made is the syntax for v2. As
> > >> mentioned, the current new syntax is 'neutron lbaas-<resource>-<action>'
> > >> (against the old 'lb-<resource>-<action>'), keeping in mind that once v1
> > >> is deprecated, a syntax like 'lbv2-<resource>-<action>' would probably be
> > >> unwanted. Is 'lbaas-<resource>-<action>' okay with everyone?
> > > 
> > > That is the reason we went with lbaas, because lbv2 looks ugly and we'd
> > > be stuck with it for the lifetime of v2, unless we did another migration
> > > back to lb for it.  Which seemed wrong to do, since then we'd have to
> > > accept both lbv2 and lb commands, and then deprecate lbv2.
> > > 
> > > I assume this also means you are fine with the prefix in the API
> > > resource of /lbaas as well then?
> > > 
> > I don't mind, as long as there is a similar mechanism which disables the
> > non-active REST API commands. Does anyone disagree?
> > >>
> > >> 4. If we are going for different API between versions, appropriate
> > >> patches also need to be written for lbaas-related scripts and also
> > >> Tempest, and their maintainers should probably be notified.
> > > 
> > > Could you elaborate on this? I don't understand what you mean by
> > > "different API between version."
> > > 
> > The intention was that the change of the user-facing API also forces
> > changes on other levels - not only neutronclient needs to be modified
> > accordingly, but also tempest system tests, horizon interface regarding
> > LBaaS...
> 
> 
> 5. If we accept #3 and #4 to mean that the python-client API and CLI must be 
> changed/updated and so does Tempest clients and tests, then what about other 
> projects consuming the Neutron API? How are Heat and Ceilometer going to be 
> affected by this change?

That's a good question about Heat and Ceilometer, and honestly it hasn't
been discussed.  It definitely should be something that should be
researched.  I think once the incubator dust has settled and we know
what goes where, we can dive into this further.  Thanks for bringing it
up.

> 
> Yair
> 
> 
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report

2014-08-28 Thread Jyoti Ranjan
Yes, with versioning and the agent ensuring it picks the correct (most
recent) data, Swift's eventual consistency should not cause an issue.


On Thu, Aug 28, 2014 at 2:59 AM, Steve Baker  wrote:

> On 28/08/14 03:41, Zane Bitter wrote:
> > On 27/08/14 11:04, Steven Hardy wrote:
> >> On Wed, Aug 27, 2014 at 07:54:41PM +0530, Jyoti Ranjan wrote:
> >>> I am little bit skeptical about using Swift for this use case
> >>> because of
> >>> its eventual consistency issue. I am not sure Swift cluster is
> >>> good to be
> >>> used for this kind of problem. Please note that Swift cluster
> >>> may give you
> >>> old data at some point of time.
> >>
> >> This is probably not a major problem, but it's certainly worth
> >> considering.
> >>
> >> My assumption is that the latency of making the replicas consistent
> >> will be
> >> small relative to the timeout for things like SoftwareDeployments, so
> >> all
> >> we need is to ensure that instances  eventually get the new data, act on
> >
> > That part is fine, but if they get the new data and then later get the
> > old data back again... that would not be so good.
>
> It would be fairly easy for the agent to check last modified headers and
> ignore data which is older than the most recently fetched metadata.
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
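
(A minimal sketch of the guard Steve describes — remember the newest
Last-Modified value seen and discard anything older. Illustrative only; this
is not the actual agent implementation.)

```python
from email.utils import parsedate_to_datetime

class MetadataPoller:
    """Agent-side guard against Swift eventual consistency: track the
    newest Last-Modified header seen and ignore responses served from a
    replica that is older than metadata already processed.
    """

    def __init__(self):
        self.last_seen = None

    def accept(self, last_modified_header):
        """Return True if this response is newer than anything seen so far."""
        ts = parsedate_to_datetime(last_modified_header)
        if self.last_seen is not None and ts <= self.last_seen:
            return False  # stale replica data (or a repeat): skip it
        self.last_seen = ts
        return True
```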
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [LBaaS] LBaaS v2 API syntax additions/changes

2014-08-28 Thread Brandon Logan
On Tue, 2014-08-26 at 14:22 +0300, John Schwarz wrote:
> 
> On 08/25/2014 10:06 PM, Brandon Logan wrote:
> >>
> >> 2. Therefore, there should be some configuration to specifically enable
> >> either version (not both) in case LBaaS is needed. In this case, the
> >> other version is disabled (ie. a REST query for non-active version
> >> should return a "not activated" error). Additionally, adding a
> >> 'lb-version' command to return the version currently active seems like a
> >> good user-facing idea. We should see how this doesn't negatively affect
> >> the db migration process (for example, allowing read-only commands for
> >> both versions?)
> > 
> > A /version endpoint can be added for both v1 and v2 extensions and
> > service plugins.  If it doesn't already exist, it would be nice if
> > neutron had an endpoint that would return the list of loaded extensions
> > and their versions.
> > 
> There is 'neutron ext-list', but I'm not familiar enough with it or with
> the REST API to say if we can use that.

Looks like this will be sufficient.  No new rest endpoint needed.

> >>
> >> 3. Another decision that needs to be made is the syntax for v2. As
> >> mentioned, the current new syntax is 'neutron lbaas-<resource>-<action>'
> >> (against the old 'lb-<resource>-<action>'), keeping in mind that once v1
> >> is deprecated, a syntax like 'lbv2-<resource>-<action>' would probably be
> >> unwanted. Is 'lbaas-<resource>-<action>' okay with everyone?
> > 
> > That is the reason we went with lbaas, because lbv2 looks ugly and we'd
> > be stuck with it for the lifetime of v2, unless we did another migration
> > back to lb for it.  Which seemed wrong to do, since then we'd have to
> > accept both lbv2 and lb commands, and then deprecate lbv2.
> > 
> > I assume this also means you are fine with the prefix in the API
> > resource of /lbaas as well then?
> > 
> I don't mind, as long as there is a similar mechanism which disables the
> non-active REST API commands. Does anyone disagree?
> >>
> >> 4. If we are going for different API between versions, appropriate
> >> patches also need to be written for lbaas-related scripts and also
> >> Tempest, and their maintainers should probably be notified.
> > 
> > Could you elaborate on this? I don't understand what you mean by
> > "different API between version."
> > 
> The intention was that the change of the user-facing API also forces
> changes on other levels - not only neutronclient needs to be modified
> accordingly, but also tempest system tests, horizon interface regarding
> LBaaS...

Oh yes this is in the works.  Miguel is spearheading the tempest tests
and has made good progress on it.  Horizon integration hasn't begun yet
though.  Definitely something we want to get in though.  Have to wait
until more information about the incubator comes out and where these
patches for other products need to go.

> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Brandon Logan
We may have to put quotas or something built into Octavia.  Since we are
keeping the concept of tenant in Octavia, this may as well be done.
That probably doesn't totally solve the problem though.

ServerGroups will be great to use for keeping VMs off the same host for
HA.  It will be tough to use them for colocation and apolocation of
loadbalancers, though not impossible from my shallow research of it.

P.S. I feel like I wrote this before so sorry if this is a duplicate,
but I must be losing my mind.

Thanks,
Brandon

On Thu, 2014-08-28 at 17:36 -0400, Susanne Balle wrote:
> We need to be careful. I believe that a user can use these filters to
> keep requesting VMs and, in nova's case, effectively probe the size of
> your cloud.
> 
> 
> Also given that nova now has ServerGroups let's not make a quick
> decision on using something that is being replaced with something
> better. I suggest we investigate ServerGroups a little more before we
> discard it.  
> 
> The operator should really decide how he/she wants Anti-affinity by
> setting the right filters in nova.
> 
> 
> Susanne
> 
> 
> On Thu, Aug 28, 2014 at 5:12 PM, Brandon Logan
>  wrote:
> Trevor and I just worked through some scenarios to make sure
> it can
> handle colocation and apolocation.  It looks like it does,
> however not
> everything will so simple, especially when we introduce
> horizontal
> scaling.  Trevor's going to write up an email about some of
> the caveats
> but so far just using a table to track what LB has what VMs
> and on what
> hosts will be sufficient.
> 
> Thanks,
> Brandon
> 
> On Thu, 2014-08-28 at 13:49 -0700, Stephen Balukoff wrote:
> > I'm trying to think of a use case that wouldn't be satisfied
> using
> > those filters and am not coming up with anything. As such, I
> don't see
> > a problem using them to fulfill our requirements around
> colocation and
> > apolocation.
> >
> >
> > Stephen
> >
> >
> > On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan
> >  wrote:
> > Yeah we were looking at the SameHost and
> DifferentHost filters
> > and that
> > will probably do what we need.  Though I was hoping
> we could
> > do a
> > combination of both but we can make it work with
> those filters
> > I
> > believe.
> >
> > Thanks,
> > Brandon
> >
> > On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle
> wrote:
> > > Brandon
> > >
> > >
> > > I am not sure how ready that nova feature is for
> general use
> > and have
> > > asked our nova lead about that. He is on vacation
> but should
> > be back
> > > by the start of next week. I believe this is the
> right
> > approach for us
> > > moving forward.
> > >
> > >
> > >
> > > We cannot make it mandatory to run the 2 filters
> but we can
> > say in the
> > > documentation that if these two filters aren't set
> that we
> > cannot
> > > guaranty Anti-affinity or Affinity.
> > >
> > >
> > > The other way we can implement this is by using
> availability
> > zones and
> > > host aggregates. This is one technique we use to
> make sure
> > we deploy
> > > our in-cloud services in an HA model. This also
> would assume
> > that the
> > > operator is setting up Availability zones which we
> can't.
> > >
> > >
> > >
> >
>  
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> > >
> > >
> > >
> > > Sahara is currently using the following filters to
> support
> > host
> > > affinity which is probably due to the fact that
> they did the
> > work
> > > before ServerGroups. I am not advocating the use
> of those
> > filters but
> > > just showing you that we can document the feature
> and it
> > will be up to
> > > the operator to set it up to get the right
> behavior.
> > >
> > >
> > >
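
(A minimal sketch of the hint semantics being discussed — a candidate host
passes DifferentHostFilter only if none of the instances named in the hint
live on it, and SameHostFilter only if they all do. Illustrative only, not
nova's actual filter code.)

```python
def different_host_passes(candidate_host, hint_instances, instance_host_map):
    """Mimic DifferentHostFilter: the candidate host passes only if none
    of the instances named in the 'different_host' hint live on it."""
    return all(instance_host_map.get(i) != candidate_host
               for i in hint_instances)

def same_host_passes(candidate_host, hint_instances, instance_host_map):
    """Mimic SameHostFilter: the candidate host passes only if all of the
    instances named in the 'same_host' hint already live on it."""
    return all(instance_host_map.get(i) == candidate_host
               for i in hint_instances)
```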

Re: [openstack-dev] [nova] [neutron] Specs for K release

2014-08-28 Thread Brandon Logan
Kyle,
Does this apply to blueprints that are destined for the incubator as
well?  I assume the incubator does require a spec process too.

Thanks,
Brandon

On Thu, 2014-08-28 at 08:37 -0500, Kyle Mestery wrote:
> On Thu, Aug 28, 2014 at 8:30 AM, Michael Still  wrote:
> > For nova we haven't gotten around to doing this, but it shouldn't be a
> > big deal. I'll add it to the agenda for today's meeting.
> >
> > Michael
> >
> For Neutron, I have not gone through and removed specs which merged
> and haven't made it yet. I'll do that today with a review to
> neutron-specs, and once we hit FF next week I'll make another pass to
> remove things which didn't make Juno. Keep in mind if your spec
> doesn't make Juno you will have to re-propose it for Kilo.
> 
> Thanks!
> Kyle
> 
> > On Thu, Aug 28, 2014 at 2:07 AM, Andreas Scheuring
> >  wrote:
> >> Hi,
> >> is it already possible to submit specs (nova & neutron) for the K
> >> release? Would be great for getting early feedback and tracking
> >> comments. Or should I just commit it to the juno folder?
> >>
> >> Thanks,
> >> Andreas
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > --
> > Rackspace Australia
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Author tags

2014-08-28 Thread Baohua Yang
+1.


On Wed, Aug 27, 2014 at 10:25 PM, Kyle Mestery  wrote:

> On Wed, Aug 27, 2014 at 8:24 AM, Gary Kotton  wrote:
> > Hi,
> > A few cycles ago the Nova group decided to remove @author from copyright
> > statements. This is due to the fact that this information is stored in
> git.
> > After adding a similar hacking rule to Neutron it has stirred up some
> > debate.
> > Does anyone have any reason to for us not to go ahead with
> > https://review.openstack.org/#/c/112329/.
> > Thanks
> > Gary
> >
> My main concern is around landing a change like this during feature
> freeze week, I think at best this should land at the start of Kilo.
>
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best wishes!
Baohua
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Workflow on-finish

2014-08-28 Thread Renat Akhmerov
Yes, it’s just a regular task that sends a request. Something like:

notify_about_completion:
  action: std.http
  parameters:
url: whatever_we_need.org
method: GET

You can also take a look at webhooks examples in mistral-extra.

Renat Akhmerov
@ Mirantis Inc.



On 29 Aug 2014, at 01:22, W Chan  wrote:

> Is there an example somewhere that I can reference on how to define this 
> special task?  Thanks!
> 
> 
> On Wed, Aug 27, 2014 at 10:02 PM, Renat Akhmerov  
> wrote:
> Right now, you can just include a special task into a workflow that, for 
> example, sends an HTTP request to whatever you need to notify about workflow 
> completion. Although, I see it rather as a hack (not so horrible though).
> 
> Renat Akhmerov
> @ Mirantis Inc.
> 
> 
> 
> On 28 Aug 2014, at 12:01, Renat Akhmerov  wrote:
> 
>> There are two blueprints that I supposed to use for this purpose:
>> https://blueprints.launchpad.net/mistral/+spec/mistral-event-listeners-http
>> https://blueprints.launchpad.net/mistral/+spec/mistral-event-listeners-amqp
>> 
>> So my opinion:
>> This functionality should be orthogonal to what we configure in DSL.
>> The mechanism of listeners is more generic and would cover your requirement 
>> as a special case.
>> At this point, I see that we may want to implement a generic 
>> transport-agnostic listener mechanism internally (not that hard task) and 
>> then implement required transport specific plugins to it.
>> 
>> Inviting everyone to discussion.
>> 
>> Thanks
>> 
>> Renat Akhmerov
>> @ Mirantis Inc.
>> 
>> 
>> 
>> On 28 Aug 2014, at 06:17, W Chan  wrote:
>> 
>>> Renat,
>>> 
>>> It will be helpful to perform a callback on completion of the async 
>>> workflow.  Can we add on-finish to the workflow spec and when workflow 
>>> completes, runs task(s) defined in the on-finish section of the spec?  This 
>>> will allow the workflow author to define how the callback is to be done.
>>> 
>>> Here's the bp link. 
>>> https://blueprints.launchpad.net/mistral/+spec/mistral-workflow-on-finish
>>> 
>>> Thanks.
>>> Winson
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Some thoughts about Horizon's test suite

2014-08-28 Thread Richard Jones
I'm relatively new to Horizon, but I come with a bunch of experience with
Python more generally. I've contributed a couple of small patches to
Horizon in an effort to get more familiar with the codebase. Julie Pichon's
blog post about testing in Horizon has been invaluable <
http://www.jpichon.net/blog/2013/07/testing-in-horizon/>.

Very recently I attempted to fix a simple bug in which a Panel was being
displayed when it shouldn't have been. The resultant 5-line fix ended up
breaking 498 of the 1048 unit tests in the suite. I estimated that it would
take about a week's effort to address all the failing tests. For more
information see <
http://mechanicalcat.net/richard/log/Python/When_testing_goes_bad>

In talking about the issues I've hit I found quite a few people both in
#openstack-horizon and other places who are quite forthright in their
desire to get rid of all use of mox. I heartily endorse this motion.
Indeed, I would be willing to start the process of weeding mox out of the
suite entirely, one test module at a time, if there is support for such an
effort from Horizon core/TL.
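
For what it's worth, most mox tests convert mechanically. A minimal sketch of
the mock-based style (the `client` and flavor objects here are invented for
illustration, not Horizon's real API; on Python 2 this is the standalone
`mock` library rather than `unittest.mock`):

```python
from unittest import mock  # "import mock" with the standalone library


def flavor_names(client):
    # code under test: a hypothetical helper that lists flavor names
    return [f.name for f in client.flavors.list()]


def test_flavor_names():
    client = mock.Mock()
    # gotcha: name= is special in Mock's constructor, so set it afterwards
    tiny, small = mock.Mock(), mock.Mock()
    tiny.name, small.name = 'm1.tiny', 'm1.small'
    client.flavors.list.return_value = [tiny, small]

    # no record/replay/verify dance as with mox; just call, then assert
    assert flavor_names(client) == ['m1.tiny', 'm1.small']
    client.flavors.list.assert_called_once_with()


test_flavor_names()
```

Module-level dependencies convert the same way, using mock.patch as a
decorator or context manager instead of mox's StubOutWithMock.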

Removing mox is one (admittedly very large) piece of the puzzle though. As
I mentioned in my blog post, there are other structural issues that would
need to be addressed also; the use of unittest.TestCase and the varying
methods of mocking.

At the moment, making a fix like the one I attempted is prohibitively
expensive to do, and thus won't be done because Horizon's test suite has
become too restrictive to change (unnecessarily so). I'd like to see that
change.


 Richard
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [Horizon] [Heat] Merlin project (formerly known as cross-project UI library for Heat/Mistral/Murano/Solum) plans for PoC and more

2014-08-28 Thread Zane Bitter

On 28/08/14 13:31, Drago Rosson wrote:

You are in luck, because I have just now open-sourced Barricade! Check it
out [4].

[4]https://github.com/rackerlabs/barricade


Please add a license (preferably ASL 2.0). "Open Source" doesn't mean 
"the source is on GitHub", it means that the code is licensed under a 
particular set of terms.


- ZB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Stephen Balukoff
Susanne--

I think you are conflating the difference between "OpenStack incubation"
and "Neutron incubator." These are two very different matters and should be
treated separately. So, addressing each one individually:

*"OpenStack Incubation"*
I think this has been the end-goal of Octavia all along and continues to be
the end-goal. Under this scenario, Octavia is its own stand-alone project
with its own PTL and core developer team, its own governance, and should
eventually become part of the integrated OpenStack release. No project ever
starts out as "OpenStack incubated."

*"Neutron Incubator"*
This has only become a serious discussion in the last few weeks and has yet
to land, so there are many assumptions about this which don't pan out
(either because of purposeful design and governance decisions, or because
of how this project actually ends up being implemented from a practical
standpoint). But given the inherent limitations about making statements
with so many unknowns, the following seem fairly clear from what has been
shared so far:

   - Neutron incubator is the on-ramp for projects which should eventually
   become a part of Neutron itself.
   - Projects which enter the Neutron incubator on-ramp should be fairly
   close to maturity in their final form. I think the intent here is for them
   to live in incubator for 1 or 2 cycles before either being merged into
   Neutron core, or being ejected (as abandoned, or as a separate project).
   - Neutron incubator projects effectively do not have their own PTL and
   core developer team, and do not have their own governance.

In addition we know the following about Neutron LBaaS and Octavia:

   - It's already (informally?) agreed that the ultimate long-term place
   for a LBaaS solution is probably to be spun out into its own project, which
   might appropriately live under a yet-to-be-defined master "Networking"
   project. (This would make Neutron, LBaaS, VPNaaS, FWaaS, etc. effective
   "peer" projects under the Networking umbrella.)  Since this "Networking"
   umbrella project has even less defined about it than Neutron incubator,
   it's impossible to know whether being a part of Neutron incubator would be
   of any benefit to Octavia (or, conversely, to Neutron incubator) at all as
   an on-ramp to becoming part of "Networking." Presumably, Octavia *might* fit
   well under the "Networking" umbrella-- but, again, with nothing defined
   there it's impossible to draw any reasonable conclusions at this time.
   - When the LBaaS component spins out of Neutron, it will more than
   likely not be Octavia.  Octavia is *intentionally* less friendly to 3rd
   party load balancer vendors both because it's envisioned that Octavia would
   just be another implementation which lives along-side said 3rd party vendor
   products (plugging into a higher level LBaaS layer via a driver), and
   because we don't want to have to compromise certain design features of
   Octavia to meet the lowest common denominator 3rd party vendor product.
   (3rd party vendors are welcome, but we will not make design compromises to
   meet the needs of a proprietary product-- compatibility with available
   open-source products and standards trumps this.)
   - The end-game for the above point is: In the future I see "Openstack
   LBaaS" (or whatever the project calls itself) being a separate but
   complimentary project to Octavia.
   - While its true that we would like Octavia to become the reference
   implementation for Neutron LBaaS, we are nowhere near being able to deliver
   on that. Attempting to become a part of Neutron LBaaS right now is likely
   just to create frustration (and very little merged code) for both the
   Octavia and Neutron teams.



So given that the only code in Octavia right now are a few database
migrations, we are very, very far away from being ready for either
OpenStack incubation or the Neutron incubator project. I don't think it's
very useful to be spending time right now worrying about either of these
outcomes:  We should be working on Octavia!

Please also understand:  I realize that probably the reason you're asking
this right now is because you have a mandate within your organization to
use only "official" OpenStack branded components, and if Octavia doesn't
fall within that category, you won't be able to use it.  Of course everyone
working on this project wants to make that happen too, so we're doing
everything we can to make sure we don't jeopardize that possibility. And
there are enough voices in this project that want that to happen, so I
think if we strayed from the path to get there, there would be sufficient
clangor over this that it would be hard to miss. But I don't think there's
anyone at all at this time that can honestly give you a promise that
Octavia definitely will be incubated and will definitely end up in the
integrated OpenStack release.

If you want to increase the chances of that happening, please help push the
project forward!

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Kyle Mestery
On Thu, Aug 28, 2014 at 5:55 PM, Kevin Benton  wrote:
> I think we need some clarification here too about the difference between the
> general OpenStack Incubation and the Neutron incubation. From my
> understanding, the Neutron incubation isn't the path to a separate project
> and independence from Neutron. It's a process to get into Neutron. So if you
> want to keep it as a separate project with its own cores and a PTL, Neutron
> incubation would not be the way to go.

That's not true; there are 3 ways out of incubation: 1) The project
withers and dies on its own. 2) The project is spun back into
Neutron. 3) The project is spun out into its own project.

However, it's worth noting that if the project is spun out into its
own entity, it would have to go through incubation to become a fully
functioning OpenStack project of its own.

>
>
> On Thu, Aug 28, 2014 at 3:04 PM, Susanne Balle 
> wrote:
>>
>> Just for us to learn about the incubator status, here are some of the info
>> on incubation:
>>
>> https://wiki.openstack.org/wiki/Governance/Approved/Incubation
>> https://wiki.openstack.org/wiki/Governance/NewProjects
>>
>> Susanne
>>
>>
>> On Thu, Aug 28, 2014 at 5:57 PM, Susanne Balle 
>> wrote:
>>>
>>>  I would like to discuss the pros and cons of putting Octavia into the
>>> Neutron LBaaS incubator project right away. If it is going to be the
>>> reference implementation for LBaaS v 2 then I believe Octavia belong in
>>> Neutron LBaaS v2 incubator.
>>>
>>> The Pros:
>>> * Octavia is in Openstack incubation right away along with the lbaas v2
>>> code. We do not have to apply for incubation later on.
>>> * As incubation project we have our own core and should be able to commit
>>> our code
>>> * We are starting out as an OpenStack incubated project
>>>
>>> The Cons:
>>> * Not sure of the velocity of the project
>>> * Incubation not well defined.
>>>
>>> If Octavia starts as a standalone stackforge project we are assuming that
>>> it would be looked favorable on when time is to move it into incubated
>>> status.
>>>
>>> Susanne
>>>
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Kevin Benton
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Stefano Maffulli
On 08/28/2014 03:04 PM, Susanne Balle wrote:
> Just for us to learn about the incubator status, here are some of the
> info on incubation:
> 
> https://wiki.openstack.org/wiki/Governance/Approved/Incubation
> https://wiki.openstack.org/wiki/Governance/NewProjects

These are not the correct documents for the Neutron incubator.

You should look at this instead:

https://wiki.openstack.org/wiki/Network/Incubator

(which is modeled after the Oslo incubator
https://wiki.openstack.org/wiki/Oslo#Incubation)

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Adam Harwell
Yeah, I think I agree there. If we were to go the Neutron-incubator route, we'd 
end up with Neutron-Octavia, and I don't think that's what we want, right?
I believe to be "Openstack-Octavia" we need to be incubated as a separate 
project.

--Adam

https://keybase.io/rm_you


From: Kevin Benton mailto:blak...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, August 28, 2014 3:55 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas][octavia]

I think we need some clarification here too about the difference between the 
general OpenStack Incubation and the Neutron incubation. From my understanding, 
the Neutron incubation isn't the path to a separate project and independence 
from Neutron. It's a process to get into Neutron. So if you want to keep it as 
a separate project with its own cores and a PTL, Neutron incubation would not 
be the way to go.


On Thu, Aug 28, 2014 at 3:04 PM, Susanne Balle 
mailto:sleipnir...@gmail.com>> wrote:
Just for us to learn about the incubator status, here are some of the info on 
incubation:

https://wiki.openstack.org/wiki/Governance/Approved/Incubation
https://wiki.openstack.org/wiki/Governance/NewProjects

Susanne


On Thu, Aug 28, 2014 at 5:57 PM, Susanne Balle 
mailto:sleipnir...@gmail.com>> wrote:
 I would like to discuss the pros and cons of putting Octavia into the Neutron 
LBaaS incubator project right away. If it is going to be the reference 
implementation for LBaaS v 2 then I believe Octavia belong in Neutron LBaaS v2 
incubator.

The Pros:
* Octavia is in Openstack incubation right away along with the lbaas v2 code. 
We do not have to apply for incubation later on.
* As incubation project we have our own core and should be able to commit our 
code
* We are starting out as an OpenStack incubated project

The Cons:
* Not sure of the velocity of the project
* Incubation not well defined.

If Octavia starts as a standalone stackforge project we are assuming that it 
would be looked favorable on when time is to move it into incubated status.

Susanne




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Joe Gordon
On Thu, Aug 28, 2014 at 3:27 PM, Chris Friesen 
wrote:

> On 08/28/2014 04:01 PM, Joe Gordon wrote:
>
>>
>>
>>
>> On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh
>> mailto:alan.kavan...@ericsson.com>> wrote:
>>
>> I share Donald's points here, I believe what would help is to
>> clearly describe in the Wiki the process and workflow for the BP
>> approval process and build in this process how to deal with
>> discrepancies/disagreements and build timeframes for each stage and
>> process of appeal etc.
>> The current process would benefit from some fine tuning and helping
>> to build safe guards and time limits/deadlines so folks can expect
>> responses within a reasonable time and not be left waiting in the
>> cold.
>>
>>
>> This is a resource problem, the nova team simply does not have enough
>> people doing enough reviews to make this possible.
>>
>
> All the more reason to make it obvious which reviews are not being
> addressed in a timely fashion.  (I'm thinking something akin to the order
> screen at a fast food restaurant that starts blinking in red and beeping if
> an order hasn't been filled in a certain amount of time.)
>
> Perhaps by making it clear that reviews are a bottleneck this will
> actually help to address the problem.


Yes, better tracking of when a review goes stale (nova and nova-specs) is a
great idea. Now we just need a volunteer to work on a good way to identify
them, make that information easy to consume, and make a plan to help us
wrangle the reviews.

Russell has some numbers (and code) around this, Nova has 618 open reviews,
of which 225 are waiting for reviewers [0].

[0] http://russellbryant.net/openstack-stats/nova-openreviews.html
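
For anyone who wants to take a stab at this: Gerrit's REST API makes the data
easy to get at. Responses are JSON prefixed with a `)]}'` line (an anti-XSSI
guard), so a script only needs to strip that prefix and filter on each
change's `updated` timestamp. A rough sketch (the query URL is illustrative
and the staleness threshold arbitrary):

```python
import json
from datetime import datetime, timedelta

XSSI_PREFIX = ")]}'"


def parse_gerrit_json(text):
    # Gerrit prepends ")]}'"  to every JSON response to defeat XSSI attacks
    if text.startswith(XSSI_PREFIX):
        text = text[len(XSSI_PREFIX):]
    return json.loads(text)


def stale_changes(changes, days=14, now=None):
    # a change is "stale" if its last update is older than `days` days;
    # Gerrit timestamps look like "2014-08-01 12:00:00.000000000"
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    return [c for c in changes
            if datetime.strptime(c['updated'][:19],
                                 '%Y-%m-%d %H:%M:%S') < cutoff]


# e.g. fetched from (illustrative query):
#   https://review.openstack.org/changes/?q=project:openstack/nova+status:open
raw = ")]}'\n" + json.dumps([
    {'_number': 1, 'updated': '2014-08-01 12:00:00.000000000'},
    {'_number': 2, 'updated': '2014-08-27 12:00:00.000000000'},
])
old = stale_changes(parse_gerrit_json(raw),
                    days=14, now=datetime(2014, 8, 28))
assert [c['_number'] for c in old] == [1]
```

From there it's a small step to sort by age and publish the list somewhere
reviewers will actually see it.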


>
>
> Chris
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] Specs for K release

2014-08-28 Thread Kevin Benton
Submit it as a patch to the specs repo!


On Thu, Aug 28, 2014 at 2:47 PM, Alan Kavanagh 
wrote:

> That's a fairly good point Michael, and if that can get correlated to the
> proposed incubation section for that project then I believe this would help
> alleviate a lot of frustration and help folks understand what to expect and
> what are the next steps etc.
> How do we get this formulated and agreed so we can have this approved and
> proceed?
> /Alan
> -Original Message-
> From: Michael Still [mailto:mi...@stillhq.com]
> Sent: August-28-14 6:51 PM
> To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [nova] [neutron] Specs for K release
>
> On Thu, Aug 28, 2014 at 6:53 AM, Daniel P. Berrange 
> wrote:
> > On Thu, Aug 28, 2014 at 11:51:32AM +, Alan Kavanagh wrote:
> >> How to do we handle specs that have slipped through the cracks
> >> and did not make it for Juno?
> >
> > Rebase the proposal so it is under the 'kilo' directory path
> > instead of 'juno' and submit it for review again. Make sure
> > to keep the ChangeId line intact so people see the history
> > of any review comments in the earlier Juno proposal.
>
> Yes, but...
>
> I think we should talk about tweaking the structure of the juno
> directory. Something like having proposed, approved, and implemented
> directories. That would provide better signalling to operators about
> what we actually did, what we thought we'd do, and what we didn't do.
>
> I worry that gerrit is a terrible place to archive the things which
> were proposed by not approved. If someone else wants to pick something
> up later, its super hard for them to find.
>
> Michael
>
> --
> Rackspace Australia
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Kevin Benton
I think we need some clarification here too about the difference between
the general OpenStack Incubation and the Neutron incubation. From my
understanding, the Neutron incubation isn't the path to a separate project
and independence from Neutron. It's a process to get into Neutron. So if
you want to keep it as a separate project with its own cores and a PTL,
Neutron incubation would not be the way to go.


On Thu, Aug 28, 2014 at 3:04 PM, Susanne Balle 
wrote:

> Just for us to learn about the incubator status, here are some of the info
> on incubation:
>
> https://wiki.openstack.org/wiki/Governance/Approved/Incubation
> https://wiki.openstack.org/wiki/Governance/NewProjects
>
> Susanne
>
>
> On Thu, Aug 28, 2014 at 5:57 PM, Susanne Balle 
> wrote:
>
>>  I would like to discuss the pros and cons of putting Octavia into the
>> Neutron LBaaS incubator project right away. If it is going to be the
>> reference implementation for LBaaS v 2 then I believe Octavia belong in
>> Neutron LBaaS v2 incubator.
>>
>> The Pros:
>> * Octavia is in Openstack incubation right away along with the lbaas v2
>> code. We do not have to apply for incubation later on.
>> * As incubation project we have our own core and should be able to commit
>> our code
>> * We are starting out as an OpenStack incubated project
>>
>> The Cons:
>> * Not sure of the velocity of the project
>> * Incubation not well defined.
>>
>> If Octavia starts as a standalone stackforge project we are assuming that
>> it would be looked favorable on when time is to move it into incubated
>> status.
>>
>> Susanne
>>
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Trevor Vardeman
Hello all,

TL;DR
Using the SameHostFilter and DifferentHostFilter will work functionally
for what Octavia needs for colocation, apolocation, and HA
anti-affinity.  There are a couple topics that need discussed:

How should VMs be allocated per host when evaluating colocation, if each
load balancer minimally has 2 VMs?  (Active-Active or Active-Passive)

How would a spare node pool handle affinity (i.e. will every host have a
separate spare node pool)?



Brandon and I spent a little time white-boarding our thoughts on this
affinity/anti-affinity problem.  Basically we came up with a couple
tables we'll need in the DB, and one table representing information
retrieved from nova, as follows:
Note:  The tables were written with "fixed width" text.  Looks really
bad in HTML.


LB Table
+---+--+---+
| LB_ID | colocate | apolocate |
+---+--+---+
|   1   |  |   |
+---+--+---+
|   2   |  | 1 |
+---+--+---+
|   3   |2 |   |
+---+--+---+
|   4   |1 | 3 |
+---+--+---+

DB Association Table
+---+---+-+
| LB_ID | VM_ID | HOST_ID |
+---+---+-+
|   1   |   A   |I|
+---+---+-+
|   1   |   B   |II   |
+---+---+-+
|   2   |   C   |   III   |
+---+---+-+
|   2   |   D   |IV   |
+---+---+-+
|   3   |   E   |   III   |
+---+---+-+
|   3   |   F   |IV   |
+---+---+-+
|   4   |   G   |I|
+---+---+-+
|   4   |   H   |II   |
+---+---+-+

Nova Information Table
+---++-+-+
| VM_ID | SameHostFilter | DifferentHostFilter | HOST_ID |
+---++-+-+
|   A   || |I|
+---++-+-+
|   B   ||  A  |II   |
+---++-+-+
|   C   || A B |   III   |
+---++-+-+
|   D   ||A B C|IV   |
+---++-+-+
|   E   |  C D   | |   III   |
+---++-+-+
|   F   |  C D   |  E  |IV   |
+---++-+-+
|   G   |  A B   | E F |I|
+---++-+-+
|   H   |  A B   |E F G|II   |
+---++-+-+

The first thing we discussed was an Active-Active setup.  Above you can
see I enforce that the first VM will not be on the same host as the
second.  In the first table, I've given some ideas about what LB will
colocate/apolocate with another, and configured them in the association
table appropriately.  Can you see any configuration combination we might
have over-looked?

As for scaling, we considered adding of VMs in an Active-Active setup to
be just as trivial as the initial creation.  Just include another VM id
in the list for DifferentHostFilter and it'll guarantee a different host
assignment.
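
For reference, those two filters essentially just compare a scheduler hint
against the instances already running on each candidate host, so the
behaviour in the tables above can be sketched in a few lines (host/VM names
taken from the tables; this is an illustration, not nova's actual filter
code):

```python
def allowed_hosts(candidates, vm_host, hints):
    """Mimic nova's SameHostFilter / DifferentHostFilter.

    candidates: host ids to consider
    vm_host:    mapping of existing VM id -> host id
    hints:      {'same_host': [...], 'different_host': [...]}
    """
    same = {vm_host[v] for v in hints.get('same_host', [])}
    diff = {vm_host[v] for v in hints.get('different_host', [])}
    # a host passes if it carries one of the same_host VMs (when that hint
    # is given) and carries none of the different_host VMs
    return [h for h in candidates
            if (not same or h in same) and h not in diff]


# scheduling VM H from the table above: colocate with A/B, avoid E/F/G
vm_host = {'A': 'I', 'B': 'II', 'E': 'III', 'F': 'IV', 'G': 'I'}
hints = {'same_host': ['A', 'B'], 'different_host': ['E', 'F', 'G']}
assert allowed_hosts(['I', 'II', 'III', 'IV'], vm_host, hints) == ['II']
```

In practice the hints are passed at boot time, e.g.
`nova boot --hint different_host=<vm-uuid> ...`.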

The second discussion was for Active-Passive, and we decided it would be
very similar to Active-Active in accordance to appending to a list for
filtering.  For each required Active node created for scaling, standing
up another Passive node would happen with just another VM id
specification in the filter.  This keeps all the Active/Passive on
different hosts.  One could just as easily write some logic to keep all
the Passives on one Host and all the Actives on a different, though this
would potentially cause other problems.

One thing that just popped into my head would be scaling on different
hosts to different degrees.  Example:  I already have 2 load balancers,
1 active and 1 passive VM each (so 4 VMs total right now).  My scaling solution
could call for another 4 VMs to be stood up in the same fashion, but the
hosts matching up like the following table:

+---+---+-++
| LB_ID | VM_ID | HOST_ID | ACTIVE |
+---+---+-++
|   1   |   A   |I|1   |
+---+---+-++
|   1   |   B   |II   |0   |
+---+---+-++
|   2   |   C   |   III   |1   |
+---+---+-++
|   2   |   D   |IV   |0   |
+---+---+-++
|   1   |   E   |I|1   |
+---+---+-++
|   1   |   F   |II   |0   |
+---+---+-++
|   2   |   G   |   III   |1   |
+---+---+-++
|   2   |   H   |IV   |0   |
+---+---

[openstack-dev] [nova] Kilo Specs Schedule

2014-08-28 Thread Joe Gordon
We just finished discussing when to open up Kilo specs at the nova meeting
today [0], and Kilo specs will open right after we cut Juno RC1 (around
Sept 25th [1]). Additionally, the spec template will most likely be revised.

We still have a huge amount of work to do for Juno and the nova team is
mostly concerned with the 50 blueprints we have up for review [2] and the
1000 open bugs [3] (186 of which have patches up for review). The RC1
timeframe is the right fit for when we can start to move our focus out to
upcoming kilo items.


[0]
http://eavesdrop.openstack.org/meetings/nova/2014/nova.2014-08-28-21.01.log.html
[1] https://wiki.openstack.org/wiki/Juno_Release_Schedule
[2] https://blueprints.launchpad.net/nova/juno
[3] http://54.201.139.117/nova-bugs.html

best,
Joe
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-28 Thread James Polley
On Thu, Aug 28, 2014 at 10:40 PM, Thierry Carrez 
wrote:

> James Polley wrote:
> >>> Point of clarification:  I've heard PTL=Project Technical Lead
> >>> and PTL=Program Technical Lead. Which is it?  It is kind of
> >>> important as OpenStack grows, because the first is responsible
> >>> for *a* project, and the second is responsible for all projects
> >>> within a program.
> >>
> >> Now Program, formerly Project.
> >
> > I think this is worthy of more exploration. Our docs seem to be very
> > inconsistent about what a PTL is - and more broadly, what the difference
> > is between a Project and a Program.
> >
> > Just a few examples:
> >
> > https://wiki.openstack.org/wiki/PTLguide says "Program Technical
> > Lead". https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
> > simply says PTL - but does say that each PTL is elected by/for a
> > Program. However, Thierry pointed
> > to https://wiki.openstack.org/wiki/Governance/Foundation/Structure which
> > still refers to Project Technical Leads and says explicitly that they
> > lead individual projects, not programs. I actually have edit access to
> > that page, so I could at least update that with a simple
> > "s/Project/Program/", if I was sure that was the right thing to do.
>
> Don't underestimate how stale wiki pages can become! Yes, fix it.
>

I don't know if I've fixed it, but I've certainly replaced all uses of the
word Project with Program.

Whether or not it now matches reality, I'm not sure.

I also removed (what I assume is) a stale reference to the PPB and added a
new heading for the TC.


> > http://www.openstack.org/ has a link in the bottom nav that says
> > "Projects"; it points to http://www.openstack.org/projects/ which
> > redirects to http://www.openstack.org/software/ which has a list of
> > things like "Compute" and "Storage" - which as far as I know are
> > Programs, not Projects. I don't know how to update that link in the nav
> > panel.
>
> That's because the same word ("compute") is used for two different
> things: a program name ("Compute") and an "official OpenStack name" for
> a project ("OpenStack Compute a.k.a. Nova"). Basically official
> OpenStack names reduce confusion for newcomers ("What is Nova ?"), but
> they confuse old-timers at some point ("so the Compute program produces
> Nova a.k.a. OpenStack Compute ?").
>

That's confusing to me. I had thought that part of the reason for the
separation was to enable a level of indirection - if the Compute program
team decide that a new project called (for example) SuperNova should be the
main project, that just means that OpenStack Compute is now a pointer to a
different project, supported by the same program team.

It sounds like that isn't the intent though?


> > I wasn't around when the original Programs/Projects discussion was
> > happening - which, I suspect, has a lot to do with why I'm confused
> > today - it seems as though people who were around at the time understand
> > the difference, but people who have joined since then are relying on
> > multiple conflicting verbal definitions. I believe, though,
> > that
> http://lists.openstack.org/pipermail/openstack-dev/2013-June/010821.html
> > was one of the earliest starting points of the discussion. That page
> > points at https://wiki.openstack.org/wiki/Projects, which today contains
> > a list of Programs. That page does have a definition of what a Program
> > is, but doesn't explain what a Project is or how they relate to
> > Programs. This page seems to be locked down, so I can't edit it.
>
> https://wiki.openstack.org/wiki/Projects was renamed to
> https://wiki.openstack.org/wiki/Programs with the wiki helpfully leaving
> a redirect behind. So the content you are seeing here is the "Programs"
> wiki page, which is why it doesn't define "projects".
>
> We don't really use the word "project" that much anymore, we prefer to
> talk about code repositories. Programs are teams working on a set of
> code repositories. Some of those code repositories may appear in the
> integrated release.
>

This explanation of the difference between projects and programs sounds
like it would be useful to add to /Programs - but I can't edit that page.

>
> > That page does mention projects, once. The context makes it read, to me,
> > as though a program can follow one process to "become part of OpenStack"
> > and then another process to "become an Integrated project and part of
> > the OpenStack coordinated release" - when my understanding of reality is
> > that the second process applies to Projects, not Programs.
> >
> > I've tried to find any other page that talks about what a Project is and
> > how they relate to Programs, but I haven't been able to find anything.
> > Perhaps there's some definition locked up in a mailing list thread or
> > some TC minutes, but I haven't been able to find it.
> >
> > During the previous megathread, I got the feeling that at least some of
> > the diff

Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Alan Kavanagh
+1 my sentiments exactly, and this will actually help folks contribute in a 
more meaningful and productive way.
/Alan

-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com] 
Sent: August-29-14 12:28 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

On 08/28/2014 04:01 PM, Joe Gordon wrote:
>
>
>
> On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh wrote:
>
> I share Donald's points here, I believe what would help is to
> clearly describe in the Wiki the process and workflow for the BP
> approval process and build in this process how to deal with
> discrepancies/disagreements and build timeframes for each stage and
> process of appeal etc.
> The current process would benefit from some fine tuning and helping
> to build safe guards and time limits/deadlines so folks can expect
> responses within a reasonable time and not be left waiting in the cold.
>
>
> This is a resource problem, the nova team simply does not have enough 
> people doing enough reviews to make this possible.

All the more reason to make it obvious which reviews are not being addressed in 
a timely fashion.  (I'm thinking something akin to the order screen at a fast 
food restaurant that starts blinking in red and beeping if an order hasn't been 
filled in a certain amount of time.)

Perhaps by making it clear that reviews are a bottleneck this will actually 
help to address the problem.

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Chris Friesen

On 08/28/2014 04:01 PM, Joe Gordon wrote:




On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh wrote:

I share Donald's points here, I believe what would help is to
clearly describe in the Wiki the process and workflow for the BP
approval process and build in this process how to deal with
discrepancies/disagreements and build timeframes for each stage and
process of appeal etc.
The current process would benefit from some fine tuning and helping
to build safe guards and time limits/deadlines so folks can expect
responses within a reasonable time and not be left waiting in the cold.


This is a resource problem, the nova team simply does not have enough
people doing enough reviews to make this possible.


All the more reason to make it obvious which reviews are not being 
addressed in a timely fashion.  (I'm thinking something akin to the 
order screen at a fast food restaurant that starts blinking in red and 
beeping if an order hasn't been filled in a certain amount of time.)


Perhaps by making it clear that reviews are a bottleneck this will 
actually help to address the problem.


Chris




Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Boris Pavlovic
Joe,


This is a resource problem, the nova team simply does not have enough
> people doing enough reviews to make this possible.


Adding more bureaucracy (specs) in such a case is not the best way to resolve
team throughput issues...

my 2cents


Best regards,
Boris Pavlovic


On Fri, Aug 29, 2014 at 2:01 AM, Joe Gordon  wrote:

>
>
>
> On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh  > wrote:
>
>> I share Donald's points here, I believe what would help is to clearly
>> describe in the Wiki the process and workflow for the BP approval process
>> and build in this process how to deal with discrepancies/disagreements and
>> build timeframes for each stage and process of appeal etc.
>> The current process would benefit from some fine tuning and helping to
>> build safe guards and time limits/deadlines so folks can expect responses
>> within a reasonable time and not be left waiting in the cold.
>>
>
>
> This is a resource problem, the nova team simply does not have enough
> people doing enough reviews to make this possible.
>
>
>> My 2cents!
>> /Alan
>>
>> -Original Message-
>> From: Dugger, Donald D [mailto:donald.d.dug...@intel.com]
>> Sent: August-28-14 10:43 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
>>
>> I would contend that that right there is an indication that there's a
>> problem with the process.  You submit a BP and then you have no idea of
>> what is happening and no way of addressing any issues.  If the priority is
>> wrong I can explain why I think the priority should be higher, getting
>> stonewalled leaves me with no idea what's wrong and no way to address any
>> problems.
>>
>> I think, in general, almost everyone is more than willing to adjust
>> proposals based upon feedback.  Tell me what you think is wrong and I'll
>> either explain why the proposal is correct or I'll change it to address the
>> concerns.
>>
>> Trying to deal with silence is really hard and really frustrating.
>> Especially given that we're not supposed to spam the mailing list, it's really
>> hard to know what to do.  I don't know the solution but we need to do
>> something.  More core team members would help, maybe something like an
>> automatic timeout where BPs/patches with no negative scores and no activity
>> for a week get flagged for special handling.
>>
>> I feel we need to change the process somehow.
>>
>> --
>> Don Dugger
>> "Censeo Toto nos in Kansa esse decisse." - D. Gale
>> Ph: 303/443-3786
>>
>> -Original Message-
>> From: Jay Pipes [mailto:jaypi...@gmail.com]
>> Sent: Thursday, August 28, 2014 1:44 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
>>
>> On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
>> > I'll try and not whine about my pet project but I do think there is a
>> > problem here.  For the Gantt project to split out the scheduler there
>> > is a crucial BP that needs to be implemented (
>> > https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP
>> > has been rejected and we'll have to try again for Kilo.  My question
>> > is did we do something wrong or is the process broken?
>> >
>> > Note that we originally proposed the BP on 4/23/14, went through 10
>> > iterations to the final version on 7/25/14 and the final version got
>> > three +1s and a +2 by 8/5.  Unfortunately, even after reaching out to
>> > specific people, we didn't get the second +2, hence the rejection.
>> >
>> > I understand that reviews are a burden and very hard but it seems
>> > wrong that a BP with multiple positive reviews and no negative reviews
>> > is dropped because of what looks like indifference.
>>
>> I would posit that this is not actually indifference. The reason that
>> there may not have been >1 +2 from a core team member may very well have
>> been that the core team members did not feel that the blueprint's priority
>> was high enough to put before other work, or that the core team members did
>> not have the time to comment on the spec (due to them not feeling the blueprint
>> had the priority to justify the time to do a full review).
>>
>> Note that I'm not a core drivers team member.
>>
>> Best,
>> -jay
>>
>>

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Susanne Balle
Just for us to learn about the incubator status, here are some of the info
on incubation:

https://wiki.openstack.org/wiki/Governance/Approved/Incubation
https://wiki.openstack.org/wiki/Governance/NewProjects

Susanne


On Thu, Aug 28, 2014 at 5:57 PM, Susanne Balle 
wrote:

>  I would like to discuss the pros and cons of putting Octavia into the
> Neutron LBaaS incubator project right away. If it is going to be the
> reference implementation for LBaaS v2 then I believe Octavia belongs in the
> Neutron LBaaS v2 incubator.
>
> The Pros:
> * Octavia is in Openstack incubation right away along with the lbaas v2
> code. We do not have to apply for incubation later on.
> * As an incubated project we have our own core and should be able to commit
> our code
> * We are starting out as an OpenStack incubated project
>
> The Cons:
> * Not sure of the velocity of the project
> * Incubation not well defined.
>
> If Octavia starts as a standalone stackforge project, we are assuming that
> it would be looked on favorably when it is time to move it into incubated
> status.
>
> Susanne
>
>
>


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Joe Gordon
On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh 
wrote:

> I share Donald's points here, I believe what would help is to clearly
> describe in the Wiki the process and workflow for the BP approval process
> and build in this process how to deal with discrepancies/disagreements and
> build timeframes for each stage and process of appeal etc.
> The current process would benefit from some fine tuning and helping to
> build safe guards and time limits/deadlines so folks can expect responses
> within a reasonable time and not be left waiting in the cold.
>


This is a resource problem, the nova team simply does not have enough
people doing enough reviews to make this possible.


> My 2cents!
> /Alan
>
> -Original Message-
> From: Dugger, Donald D [mailto:donald.d.dug...@intel.com]
> Sent: August-28-14 10:43 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
>
> I would contend that that right there is an indication that there's a
> problem with the process.  You submit a BP and then you have no idea of
> what is happening and no way of addressing any issues.  If the priority is
> wrong I can explain why I think the priority should be higher, getting
> stonewalled leaves me with no idea what's wrong and no way to address any
> problems.
>
> I think, in general, almost everyone is more than willing to adjust
> proposals based upon feedback.  Tell me what you think is wrong and I'll
> either explain why the proposal is correct or I'll change it to address the
> concerns.
>
> Trying to deal with silence is really hard and really frustrating.
> Especially given that we're not supposed to spam the mailing list, it's really
> hard to know what to do.  I don't know the solution but we need to do
> something.  More core team members would help, maybe something like an
> automatic timeout where BPs/patches with no negative scores and no activity
> for a week get flagged for special handling.
>
> I feel we need to change the process somehow.
>
> --
> Don Dugger
> "Censeo Toto nos in Kansa esse decisse." - D. Gale
> Ph: 303/443-3786
>
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Thursday, August 28, 2014 1:44 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
>
> On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
> > I'll try and not whine about my pet project but I do think there is a
> > problem here.  For the Gantt project to split out the scheduler there
> > is a crucial BP that needs to be implemented (
> > https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP
> > has been rejected and we'll have to try again for Kilo.  My question
> > is did we do something wrong or is the process broken?
> >
> > Note that we originally proposed the BP on 4/23/14, went through 10
> > iterations to the final version on 7/25/14 and the final version got
> > three +1s and a +2 by 8/5.  Unfortunately, even after reaching out to
> > specific people, we didn't get the second +2, hence the rejection.
> >
> > I understand that reviews are a burden and very hard but it seems
> > wrong that a BP with multiple positive reviews and no negative reviews
> > is dropped because of what looks like indifference.
>
> I would posit that this is not actually indifference. The reason that
> there may not have been >1 +2 from a core team member may very well have
> been that the core team members did not feel that the blueprint's priority
> was high enough to put before other work, or that the core team members did
> not have the time to comment on the spec (due to them not feeling the blueprint
> had the priority to justify the time to do a full review).
>
> Note that I'm not a core drivers team member.
>
> Best,
> -jay
>
>


[openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Susanne Balle
 I would like to discuss the pros and cons of putting Octavia into the
Neutron LBaaS incubator project right away. If it is going to be the
reference implementation for LBaaS v2 then I believe Octavia belongs in the
Neutron LBaaS v2 incubator.

The Pros:
* Octavia is in Openstack incubation right away along with the lbaas v2
code. We do not have to apply for incubation later on.
* As an incubated project we have our own core and should be able to commit
our code
* We are starting out as an OpenStack incubated project

The Cons:
* Not sure of the velocity of the project
* Incubation not well defined.

If Octavia starts as a standalone stackforge project, we are assuming that
it would be looked on favorably when it is time to move it into incubated
status.

Susanne


Re: [openstack-dev] [nova] [neutron] Specs for K release

2014-08-28 Thread Alan Kavanagh
That's a fairly good point, Michael, and if that can get correlated to the 
proposed incubation section for that project then I believe this would help 
alleviate a lot of frustration and help folks understand what to expect and 
what the next steps are, etc.
How do we get this formulated and agreed so we can have this approved and 
proceed?
/Alan
-Original Message-
From: Michael Still [mailto:mi...@stillhq.com] 
Sent: August-28-14 6:51 PM
To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [nova] [neutron] Specs for K release

On Thu, Aug 28, 2014 at 6:53 AM, Daniel P. Berrange  wrote:
> On Thu, Aug 28, 2014 at 11:51:32AM +, Alan Kavanagh wrote:
>> How to do we handle specs that have slipped through the cracks
>> and did not make it for Juno?
>
> Rebase the proposal so it is under the 'kilo' directory path
> instead of 'juno' and submit it for review again. Make sure
> to keep the Change-Id line intact so people see the history
> of any review comments in the earlier Juno proposal.

Yes, but...

I think we should talk about tweaking the structure of the juno
directory. Something like having proposed, approved, and implemented
directories. That would provide better signalling to operators about
what we actually did, what we thought we'd do, and what we didn't do.

I worry that gerrit is a terrible place to archive the things which
were proposed but not approved. If someone else wants to pick something
up later, its super hard for them to find.

Michael

-- 
Rackspace Australia
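A structure along the lines Michael suggests might look something like the
following (purely illustrative; the directory names are assumptions, not an
agreed layout):

```
specs/kilo/
    proposed/      <- submitted for review, not yet approved
    approved/      <- approved for the cycle, being worked on
    implemented/   <- completed during the cycle
```

At release time, anything still sitting in proposed/ or approved/ would then
be an explicit record of what was not done, rather than something buried in
gerrit.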



Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Alan Kavanagh
I share Donald's points here, I believe what would help is to clearly describe 
in the Wiki the process and workflow for the BP approval process and build in 
this process how to deal with discrepancies/disagreements and build timeframes 
for each stage and process of appeal etc.
The current process would benefit from some fine-tuning and helping to build 
safeguards and time limits/deadlines so folks can expect responses within a 
reasonable time and not be left waiting in the cold. 
My 2cents!
/Alan

-Original Message-
From: Dugger, Donald D [mailto:donald.d.dug...@intel.com] 
Sent: August-28-14 10:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

I would contend that that right there is an indication that there's a problem 
with the process.  You submit a BP and then you have no idea of what is 
happening and no way of addressing any issues.  If the priority is wrong I can 
explain why I think the priority should be higher, getting stonewalled leaves 
me with no idea what's wrong and no way to address any problems.

I think, in general, almost everyone is more than willing to adjust proposals 
based upon feedback.  Tell me what you think is wrong and I'll either explain 
why the proposal is correct or I'll change it to address the concerns.

Trying to deal with silence is really hard and really frustrating.  Especially 
given that we're not supposed to spam the mailing list, it's really hard to know what 
to do.  I don't know the solution but we need to do something.  More core team 
members would help, maybe something like an automatic timeout where BPs/patches 
with no negative scores and no activity for a week get flagged for special 
handling.

I feel we need to change the process somehow.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Thursday, August 28, 2014 1:44 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
> I'll try and not whine about my pet project but I do think there is a 
> problem here.  For the Gantt project to split out the scheduler there 
> is a crucial BP that needs to be implemented ( 
> https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP 
> has been rejected and we'll have to try again for Kilo.  My question 
> is did we do something wrong or is the process broken?
>
> Note that we originally proposed the BP on 4/23/14, went through 10 
> iterations to the final version on 7/25/14 and the final version got 
> three +1s and a +2 by 8/5.  Unfortunately, even after reaching out to 
> specific people, we didn't get the second +2, hence the rejection.
>
> I understand that reviews are a burden and very hard but it seems 
> wrong that a BP with multiple positive reviews and no negative reviews 
> is dropped because of what looks like indifference.

I would posit that this is not actually indifference. The reason that there may 
not have been >1 +2 from a core team member may very well have been that the 
core team members did not feel that the blueprint's priority was high enough to 
put before other work, or that the core team members did not have the time to 
comment on the spec (due to them not feeling the blueprint had the priority to 
justify the time to do a full review).

Note that I'm not a core drivers team member.

Best,
-jay


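Don's "automatic timeout" suggestion above could be prototyped as a small job
run over the output of a Gerrit query. The sketch below is illustrative only:
the change dictionaries are simplified stand-ins (Gerrit's real REST output
uses different field names and fractional-second timestamps), so treat the
shape as an assumption rather than the actual API.

```python
from datetime import datetime, timedelta

def flag_stale_reviews(changes, now, max_idle_days=7):
    """Flag changes with no negative votes and no activity for a week.

    Each entry in `changes` is a simplified stand-in for a Gerrit change:
    an 'id', an 'updated' timestamp string, and a 'labels' dict mapping
    label names to lists of vote values.
    """
    cutoff = now - timedelta(days=max_idle_days)
    flagged = []
    for change in changes:
        updated = datetime.strptime(change['updated'], '%Y-%m-%d %H:%M:%S')
        # Flatten all votes across labels (e.g. Code-Review, Workflow).
        votes = [v for vs in change['labels'].values() for v in vs]
        if updated < cutoff and not any(v < 0 for v in votes):
            flagged.append(change['id'])
    return flagged
```

A list like this, posted weekly to the meeting agenda, would at least turn
silence into an explicit queue for special handling.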


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Alan Kavanagh
+1, that would be the most pragmatic way to address this. Silence has different 
meanings to different people; a response would clarify the ambiguity and 
misunderstanding.
/Alan

-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com] 
Sent: August-28-14 11:18 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

On 08/28/2014 03:02 PM, Jay Pipes wrote:

> I understand your frustration about the silence, but the silence from 
> core team members may actually be a loud statement about where their 
> priorities are.

Or it could be that they haven't looked at it, aren't aware of it, or haven't 
been paying attention.

I think it would be better to make feedback explicit and remove any 
uncertainty/ambiguity.

Chris



Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Alan Kavanagh
I don't think silence ever helps; it's better to respond, even if it is to 
disagree, one on one with the person.
Alan

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: August-28-14 11:02 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?


On 08/28/2014 04:42 PM, Dugger, Donald D wrote:
> I would contend that that right there is an indication that there's a 
> problem with the process.  You submit a BP and then you have no idea 
> of what is happening and no way of addressing any issues.  If the 
> priority is wrong I can explain why I think the priority should be 
> higher, getting stonewalled leaves me with no idea what's wrong and no 
> way to address any problems.
>
> I think, in general, almost everyone is more than willing to adjust 
> proposals based upon feedback.  Tell me what you think is wrong and 
> I'll either explain why the proposal is correct or I'll change it to 
> address the concerns.

In many of the Gantt IRC meetings as well as the ML, I and others have 
repeatedly raised concerns about the scheduler split being premature and not a 
priority compared to the cleanup of the internal interfaces around the resource 
tracker and scheduler. This feedback was echoed in the mid-cycle meetup session 
as well. Sylvain and I have begun the work of cleaning up those interfaces and 
fixing the bugs around non-versioned data structures and inconsistent calling 
interfaces in the scheduler and resource tracker. Progress is being made 
towards these things.

> Trying to deal with silence is really hard and really frustrating.
> Especially given that we're not supposed to spam the mailing list, it's 
> really hard to know what to do.  I don't know the solution but we need 
> to do something.  More core team members would help, maybe something 
> like an automatic timeout where BPs/patches with no negative scores 
> and no activity for a week get flagged for special handling.

Yes, I think flagging blueprints for special handling would be a good thing. 
Keep in mind, though, that there are an enormous number of proposed 
specifications, with the vast majority of folks only caring about their own 
proposed specs, and very few doing reviews on anything other than their own 
patches or specific area of interest.

Doing reviews on other folks' patches and blueprints would certainly help in 
this regard. If cores only see someone contributing to a small, isolated 
section of the code or only to their own blueprints/patches, they generally 
tend to implicitly down-play that person's reviews in favor of 
patches/blueprints from folks that are reviewing non-related patches and 
contributing to reduce the total review load.

I understand your frustration about the silence, but the silence from core team 
members may actually be a loud statement about where their priorities are.

Best,
-jay

> I feel we need to change the process somehow.
>
> -- Don Dugger "Censeo Toto nos in Kansa esse decisse." - D. Gale Ph:
> 303/443-3786
>
> -Original Message- From: Jay Pipes [mailto:jaypi...@gmail.com] 
> Sent: Thursday, August 28, 2014 1:44 PM
> To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] 
> [nova] Is the BP approval process broken?
>
> On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
>> I'll try and not whine about my pet project but I do think there is a 
>> problem here.  For the Gantt project to split out the scheduler there 
>> is a crucial BP that needs to be implemented ( 
>> https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP 
>> has been rejected and we'll have to try again for Kilo.  My question 
>> is did we do something wrong or is the process broken?
>>
>> Note that we originally proposed the BP on 4/23/14, went through
>> 10 iterations to the final version on 7/25/14 and the final version 
>> got three +1s and a +2 by 8/5.  Unfortunately, even after reaching 
>> out to specific people, we didn't get the second +2, hence the 
>> rejection.
>>
>> I understand that reviews are a burden and very hard but it seems 
>> wrong that a BP with multiple positive reviews and no negative 
>> reviews is dropped because of what looks like indifference.
>
> I would posit that this is not actually indifference. The reason that 
> there may not have been >1 +2 from a core team member may very well 
> have been that the core team members did not feel that the blueprint's 
> priority was high enough to put before other work, or that the core 
> team members did not have the time to comment on the spec (due to them not 
> feeling the blueprint had the priority to justify the time to do a 
> full review).
>
> Note that I'm not a core drivers team member.
>
> Best, -jay
>
>

Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Susanne Balle
We need to be careful. I believe that a user can use these filters to keep
requesting VMs and, in the case of nova, probe the size of your cloud.

Also given that nova now has ServerGroups let's not make a quick decision
on using something that is being replaced with something better. I suggest
we investigate ServerGroups a little more before we discard it.

The operator should really decide how he/she wants Anti-affinity by setting
the right filters in nova.

Susanne


On Thu, Aug 28, 2014 at 5:12 PM, Brandon Logan 
wrote:

> Trevor and I just worked through some scenarios to make sure it can
> handle colocation and apolocation.  It looks like it does, however not
> everything will so simple, especially when we introduce horizontal
> scaling.  Trevor's going to write up an email about some of the caveats
> but so far just using a table to track what LB has what VMs and on what
> hosts will be sufficient.
>
> Thanks,
> Brandon
>
> On Thu, 2014-08-28 at 13:49 -0700, Stephen Balukoff wrote:
> > I'm trying to think of a use case that wouldn't be satisfied using
> > those filters and am not coming up with anything. As such, I don't see
> > a problem using them to fulfill our requirements around colocation and
> > apolocation.
> >
> >
> > Stephen
> >
> >
> > On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan
> >  wrote:
> > Yeah we were looking at the SameHost and DifferentHost filters
> > and that
> > will probably do what we need.  Though I was hoping we could
> > do a
> > combination of both but we can make it work with those filters
> > I
> > believe.
> >
> > Thanks,
> > Brandon
> >
> > On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> > > Brandon
> > >
> > >
> > > I am not sure how ready that nova feature is for general use
> > and have
> > > asked our nova lead about that. He is on vacation but should
> > be back
> > > by the start of next week. I believe this is the right
> > approach for us
> > > moving forward.
> > >
> > >
> > >
> > > We cannot make it mandatory to run the 2 filters but we can
> > say in the
> > > documentation that if these two filters aren't set that we
> > cannot
> > > guarantee Anti-affinity or Affinity.
> > >
> > >
> > > The other way we can implement this is by using availability
> > zones and
> > > host aggregates. This is one technique we use to make sure
> > we deploy
> > > our in-cloud services in an HA model. This also would assume
> > that the
> > > operator is setting up Availabiltiy zones which we can't.
> > >
> > >
> > >
> >
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> > >
> > >
> > >
> > > Sahara is currently using the following filters to support
> > host
> > > affinity which is probably due to the fact that they did the
> > work
> > > before ServerGroups. I am not advocating the use of those
> > filters but
> > > just showing you that we can document the feature and it
> > will be up to
> > > the operator to set it up to get the right behavior.
> > >
> > >
> > > Regards
> > >
> > >
> > > Susanne
> > >
> > >
> > >
> > > Anti-affinity
> > > One of the problems in Hadoop running on OpenStack is that
> > there is no
> > > ability to control where machine is actually running. We
> > cannot be
> > > sure that two new virtual machines are started on different
> > physical
> > > machines. As a result, any replication with cluster is not
> > reliable
> > > because all replicas may turn up on one physical machine.
> > > Anti-affinity feature provides an ability to explicitly tell
> > Sahara to
> > > run specified processes on different compute nodes. This is
> > especially
> > > useful for Hadoop datanode process to make HDFS replicas
> > reliable.
> > > The Anti-Affinity feature requires certain scheduler filters
> > to be
> > > enabled on Nova. Edit your/etc/nova/nova.conf in the
> > following way:
> > >
> > > [DEFAULT]
> > >
> > > ...
> > >
> > >
> > scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> > > scheduler_default_filters=DifferentHostFilter,SameHostFilter
> > > This feature is supported by all plugins out of the box.
> > >
> > >
> > >
> > http://docs.openstack.org/developer/sahara/userdoc/features.html
> > >
> > >
> > >
> 

Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Chris Friesen

On 08/28/2014 03:02 PM, Jay Pipes wrote:


I understand your frustration about the silence, but the silence from
core team members may actually be a loud statement about where their
priorities are.


Or it could be that they haven't looked at it, aren't aware of it, or 
haven't been paying attention.


I think it would be better to make feedback explicit and remove any 
uncertainty/ambiguity.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Brandon Logan
Trevor and I just worked through some scenarios to make sure it can
handle colocation and apolocation.  It looks like it does; however, not
everything will be so simple, especially once we introduce horizontal
scaling.  Trevor's going to write up an email about some of the caveats,
but so far just using a table to track which LB has which VMs, and on
which hosts, will be sufficient.

Thanks,
Brandon
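
[Editor's sketch: the placement table Brandon describes could look roughly
like the following. This is hypothetical illustration code, not Octavia's
actual implementation; all class and variable names are made up.]

```python
# Hypothetical sketch of a placement table: track which hosts already run
# a VM for a given load balancer, so an anti-affinity placement can
# exclude them (and an affinity placement can prefer them).
from collections import defaultdict


class PlacementTable:
    def __init__(self):
        # lb_id -> {vm_id: host}
        self._placements = defaultdict(dict)

    def record(self, lb_id, vm_id, host):
        self._placements[lb_id][vm_id] = host

    def hosts_for(self, lb_id):
        return set(self._placements[lb_id].values())

    def anti_affinity_candidates(self, lb_id, all_hosts):
        # Hosts that do not already run a VM for this LB.
        return set(all_hosts) - self.hosts_for(lb_id)


table = PlacementTable()
table.record("lb-1", "vm-a", "host-1")
table.record("lb-1", "vm-b", "host-2")
print(table.anti_affinity_candidates("lb-1", ["host-1", "host-2", "host-3"]))
# -> {'host-3'}
```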

On Thu, 2014-08-28 at 13:49 -0700, Stephen Balukoff wrote:
> I'm trying to think of a use case that wouldn't be satisfied using
> those filters and am not coming up with anything. As such, I don't see
> a problem using them to fulfill our requirements around colocation and
> apolocation.
> 
> 
> Stephen
> 
> 
> On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan
>  wrote:
> Yeah we were looking at the SameHost and DifferentHost filters
> and that
> will probably do what we need.  Though I was hoping we could
> do a
> combination of both but we can make it work with those filters
> I
> believe.
> 
> Thanks,
> Brandon
> 
> On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> > Brandon
> >
> >
> > I am not sure how ready that nova feature is for general use
> and have
> > asked our nova lead about that. He is on vacation but should
> be back
> > by the start of next week. I believe this is the right
> approach for us
> > moving forward.
> >
> >
> >
> > We cannot make it mandatory to run the 2 filters but we can
> say in the
> > documentation that if these two filters aren't set that we
> cannot
> > guaranty Anti-affinity or Affinity.
> >
> >
> > The other way we can implement this is by using availability
> zones and
> > host aggregates. This is one technique we use to make sure
> we deploy
> > our in-cloud services in an HA model. This also would assume
> that the
> > operator is setting up Availabiltiy zones which we can't.
> >
> >
> >
> 
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> >
> >
> >
> > Sahara is currently using the following filters to support
> host
> > affinity which is probably due to the fact that they did the
> work
> > before ServerGroups. I am not advocating the use of those
> filters but
> > just showing you that we can document the feature and it
> will be up to
> > the operator to set it up to get the right behavior.
> >
> >
> > Regards
> >
> >
> > Susanne
> >
> >
> >
> > Anti-affinity
> > One of the problems in Hadoop running on OpenStack is that
> there is no
> > ability to control where machine is actually running. We
> cannot be
> > sure that two new virtual machines are started on different
> physical
> > machines. As a result, any replication with cluster is not
> reliable
> > because all replicas may turn up on one physical machine.
> > Anti-affinity feature provides an ability to explicitly tell
> Sahara to
> > run specified processes on different compute nodes. This is
> especially
> > useful for Hadoop datanode process to make HDFS replicas
> reliable.
> > The Anti-Affinity feature requires certain scheduler filters
> to be
> > enabled on Nova. Edit your/etc/nova/nova.conf in the
> following way:
> >
> > [DEFAULT]
> >
> > ...
> >
> >
> scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> > scheduler_default_filters=DifferentHostFilter,SameHostFilter
> > This feature is supported by all plugins out of the box.
> >
> >
> >
> http://docs.openstack.org/developer/sahara/userdoc/features.html
> >
> >
> >
> >
> >
> > On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan
> >  wrote:
> > Nova scheduler has ServerGroupAffinityFilter and
> > ServerGroupAntiAffinityFilter which does the
> colocation and
> > apolocation
> > for VMs.  I think this is something we've discussed
> before
> > about taking
> > advantage of nova's scheduling.  I need to verify
> that this
> > will work
> > with what we (RAX) plan to do, but I'd like to get
> everyone
> > else's
> > thoughts.  Also, if we do decide this works for
> everyone
>  

Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Jay Pipes


On 08/28/2014 04:42 PM, Dugger, Donald D wrote:

I would contend that that right there is an indication that there's a
problem with the process.  You submit a BP and then you have no idea
of what is happening and no way of addressing any issues.  If the
priority is wrong I can explain why I think the priority should be
higher, getting stonewalled leaves me with no idea what's wrong and
no way to address any problems.

I think, in general, almost everyone is more than willing to adjust
proposals based upon feedback.  Tell me what you think is wrong and
I'll either explain why the proposal is correct or I'll change it to
address the concerns.


In many of the Gantt IRC meetings as well as the ML, I and others have 
repeatedly raised concerns about the scheduler split being premature and 
not a priority compared to the cleanup of the internal interfaces around 
the resource tracker and scheduler. This feedback was echoed in the 
mid-cycle meetup session as well. Sylvain and I have begun the work of 
cleaning up those interfaces and fixing the bugs around non-versioned 
data structures and inconsistent calling interfaces in the scheduler and 
resource tracker. Progress is being made towards these things.



Trying to deal with silence is really hard and really frustrating.
Especially given that we're not supposed to spam the mailing list, it's
really hard to know what to do.  I don't know the solution but we
need to do something.  More core team members would help, maybe
something like an automatic timeout where BPs/patches with no
negative scores and no activity for a week get flagged for special
handling.


Yes, I think flagging blueprints for special handling would be a good 
thing. Keep in mind, though, that there are an enormous number of 
proposed specifications, with the vast majority of folks only caring 
about their own proposed specs, and very few doing reviews on anything 
other than their own patches or specific area of interest.


Doing reviews on other folks' patches and blueprints would certainly 
help in this regard. If cores only see someone contributing to a small, 
isolated section of the code or only to their own blueprints/patches, 
they generally tend to implicitly down-play that person's reviews in 
favor of patches/blueprints from folks that are reviewing non-related 
patches and contributing to reduce the total review load.


I understand your frustration about the silence, but the silence from 
core team members may actually be a loud statement about where their 
priorities are.


Best,
-jay


I feel we need to change the process somehow.

-- Don Dugger "Censeo Toto nos in Kansa esse decisse." - D. Gale Ph:
303/443-3786

-Original Message- From: Jay Pipes
[mailto:jaypi...@gmail.com] Sent: Thursday, August 28, 2014 1:44 PM
To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev]
[nova] Is the BP approval process broken?

On 08/27/2014 09:04 PM, Dugger, Donald D wrote:

I'll try and not whine about my pet project but I do think there is
a problem here.  For the Gantt project to split out the scheduler
there is a crucial BP that needs to be implemented (
https://review.openstack.org/#/c/89893/ ) and, unfortunately, the
BP has been rejected and we'll have to try again for Kilo.  My
question is did we do something wrong or is the process broken?

Note that we originally proposed the BP on 4/23/14, went through
10 iterations to the final version on 7/25/14 and the final version
got three +1s and a +2 by 8/5.  Unfortunately, even after reaching
out to specific people, we didn't get the second +2, hence the
rejection.

I understand that reviews are a burden and very hard but it seems
wrong that a BP with multiple positive reviews and no negative
reviews is dropped because of what looks like indifference.


I would posit that this is not actually indifference. The reason that
there may not have been >1 +2 from a core team member may very well
have been that the core team members did not feel that the
blueprint's priority was high enough to put before other work, or
that the core team members did have the time to comment on the spec
(due to them not feeling the blueprint had the priority to justify
the time to do a full review).

Note that I'm not a core drivers team member.

Best, -jay








Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Chris Friesen

On 08/28/2014 02:25 PM, Jay Pipes wrote:

On 08/28/2014 04:05 PM, Chris Friesen wrote:

On 08/28/2014 01:44 PM, Jay Pipes wrote:

On 08/27/2014 09:04 PM, Dugger, Donald D wrote:



I understand that reviews are a burden and very hard but it seems wrong
that a BP with multiple positive reviews and no negative reviews is
dropped because of what looks like indifference.


I would posit that this is not actually indifference. The reason that
there may not have been >1 +2 from a core team member may very well have
been that the core team members did not feel that the blueprint's
priority was high enough to put before other work, or that the core team
members did have the time to comment on the spec (due to them not
feeling the blueprint had the priority to justify the time to do a full
review).


The overall "scheduler-lib" Blueprint is marked with a "high" priority
at "http://status.openstack.org/release/";.  Hopefully that would apply
to sub-blueprints as well.


a) There are no sub-blueprints to that scheduler-lib blueprint


I guess my terminology was wrong.  The original email referred to 
"https://review.openstack.org/#/c/89893/"; as the "crucial BP that needs 
to be implemented".  That is part of 
"https://review.openstack.org/#/q/topic:bp/isolate-scheduler-db,n,z";, 
which is listed as a Gerrit topic in the "scheduler-lib" blueprint that 
I pointed out.



b) If there were sub-blueprints, that does not mean that they would
necessarily take the same priority as their parent blueprint


I'm not sure how that would work.  If we have a high-priority blueprint 
depending on work that is considered low-priority, that would seem to 
set up a classic priority inversion scenario.



c) There's no reason priorities can't be revisited when necessary


Sure, but in that case it might be a good idea to make the updated 
priority explicit.


Chris



Re: [openstack-dev] [oslo.messaging] Request to include AMQP 1.0 support in Juno-3

2014-08-28 Thread Ken Giusti
On Thu, 28 Aug 2014 13:36:46 +0100, Mark McLoughlin wrote:
> On Thu, 2014-08-28 at 13:24 +0200, Flavio Percoco wrote:
> > On 08/27/2014 03:35 PM, Ken Giusti wrote:
> > > Hi All,
> > >
> > > I believe Juno-3 is our last chance to get this feature [1] included
> > > into olso.messaging.
> > >

> >
> >
> > Hi Ken,
> >
> > Thanks a lot for your hard work here. As I stated in my last comment on
> > the driver's review, I think we should let this driver land and let
> > future patches improve it where/when needed.
> >
> > I agreed on letting the driver land as-is based on the fact that there
> > are patches already submitted ready to enable the gates for this driver.
>
> I feel bad that the driver has been in a pretty complete state for quite
> a while but hasn't received a whole lot of reviews. There's a lot of
> promise to this idea, so it would be ideal if we could unblock it.
>
> One thing I've been meaning to do this cycle is add concrete advice for
> operators on the state of each driver. I think we'd be a lot more
> comfortable merging this in Juno if we could somehow make it clear to
> operators that it's experimental right now. My idea was:
>
>   - Write up some notes which discusses the state of each driver e.g.
>
>   - RabbitMQ - the default, used by the majority of OpenStack
> deployments, perhaps list some of the known bugs, particularly
> around HA.
>
>   - Qpid - suitable for production, but used in a limited number of
> deployments. Again, list known issues. Mention that it will
> probably be removed with the amqp10 driver matures.
>
>   - Proton/AMQP 1.0 - experimental, in active development, will
> support  multiple brokers and topologies, perhaps a pointer to a
> wiki page with the current TODO list
>
>   - ZeroMQ - unmaintained and deprecated, planned for removal in
> Kilo

Sounds like a plan - I'll take on the Qpid and Proton notes.  I've
been (trying) to keep the status of the Proton stuff up to date on the
blueprint page:

https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation

Is there a more appropriate home for these notes?  Etherpad?

>
>   - Propose this addition to the API docs and ask the operators list
> for feedback
>
>   - Propose a patch which adds a load-time deprecation warning to the
> ZeroMQ driver
>
>   - Include a load-time experimental warning in the proton driver

Done!
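
[Editor's sketch: a load-time "experimental" warning along the lines Mark
suggests could look something like the following. The class name and
message text are illustrative only, not the actual oslo.messaging proton
driver code.]

```python
import logging

LOG = logging.getLogger(__name__)


class ProtonDriver(object):
    """Hypothetical stand-in for an AMQP 1.0 (proton) messaging driver."""

    def __init__(self, conf):
        # Warn at load time, so operators see the notice in their logs
        # even before any messages flow through the driver.
        LOG.warning("The AMQP 1.0 (proton) driver is experimental and "
                    "under active development; it is not yet recommended "
                    "for production use.")
        self.conf = conf


logging.basicConfig()  # route the warning to stderr for this demo
driver = ProtonDriver(conf=None)
```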

>
> Thoughts on that?
>
> (I understand the ZeroMQ situation needs further discussion - I don't
> think that's on-topic for the thread, I was just using it as example of
> what kind of advice we'd be giving in these docs)
>
> Mark.
>
> -
Ken Giusti  (kgiu...@gmail.com)



Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Stephen Balukoff
I'm trying to think of a use case that wouldn't be satisfied using those
filters and am not coming up with anything. As such, I don't see a problem
using them to fulfill our requirements around colocation and apolocation.

Stephen


On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan 
wrote:

> Yeah we were looking at the SameHost and DifferentHost filters and that
> will probably do what we need.  Though I was hoping we could do a
> combination of both but we can make it work with those filters I
> believe.
>
> Thanks,
> Brandon
>
> On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> > Brandon
> >
> >
> > I am not sure how ready that nova feature is for general use and have
> > asked our nova lead about that. He is on vacation but should be back
> > by the start of next week. I believe this is the right approach for us
> > moving forward.
> >
> >
> >
> > We cannot make it mandatory to run the 2 filters but we can say in the
> > documentation that if these two filters aren't set that we cannot
> > guaranty Anti-affinity or Affinity.
> >
> >
> > The other way we can implement this is by using availability zones and
> > host aggregates. This is one technique we use to make sure we deploy
> > our in-cloud services in an HA model. This also would assume that the
> > operator is setting up Availabiltiy zones which we can't.
> >
> >
> >
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> >
> >
> >
> > Sahara is currently using the following filters to support host
> > affinity which is probably due to the fact that they did the work
> > before ServerGroups. I am not advocating the use of those filters but
> > just showing you that we can document the feature and it will be up to
> > the operator to set it up to get the right behavior.
> >
> >
> > Regards
> >
> >
> > Susanne
> >
> >
> >
> > Anti-affinity
> > One of the problems in Hadoop running on OpenStack is that there is no
> > ability to control where machine is actually running. We cannot be
> > sure that two new virtual machines are started on different physical
> > machines. As a result, any replication with cluster is not reliable
> > because all replicas may turn up on one physical machine.
> > Anti-affinity feature provides an ability to explicitly tell Sahara to
> > run specified processes on different compute nodes. This is especially
> > useful for Hadoop datanode process to make HDFS replicas reliable.
> > The Anti-Affinity feature requires certain scheduler filters to be
> > enabled on Nova. Edit your/etc/nova/nova.conf in the following way:
> >
> > [DEFAULT]
> >
> > ...
> >
> > scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> > scheduler_default_filters=DifferentHostFilter,SameHostFilter
> > This feature is supported by all plugins out of the box.
> >
> >
> > http://docs.openstack.org/developer/sahara/userdoc/features.html
> >
> >
> >
> >
> >
> > On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan
> >  wrote:
> > Nova scheduler has ServerGroupAffinityFilter and
> > ServerGroupAntiAffinityFilter which does the colocation and
> > apolocation
> > for VMs.  I think this is something we've discussed before
> > about taking
> > advantage of nova's scheduling.  I need to verify that this
> > will work
> > with what we (RAX) plan to do, but I'd like to get everyone
> > else's
> > thoughts.  Also, if we do decide this works for everyone
> > involved,
> > should we make it mandatory that the nova-compute services are
> > running
> > these two filters?  I'm also trying to see if we can use this
> > to also do
> > our own colocation and apolocation on load balancers, but it
> > looks like
> > it will be a bit complex if it can even work.  Hopefully, I
> > can have
> > something definitive on that soon.
> >
> > Thanks,
> > Brandon
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Dugger, Donald D
I would contend that that right there is an indication that there's a problem 
with the process.  You submit a BP and then you have no idea of what is 
happening and no way of addressing any issues.  If the priority is wrong I can 
explain why I think the priority should be higher, getting stonewalled leaves 
me with no idea what's wrong and no way to address any problems.

I think, in general, almost everyone is more than willing to adjust proposals 
based upon feedback.  Tell me what you think is wrong and I'll either explain 
why the proposal is correct or I'll change it to address the concerns.

Trying to deal with silence is really hard and really frustrating.  Especially 
given that we're not supposed to spam the mailing list, it's really hard to know what 
to do.  I don't know the solution but we need to do something.  More core team 
members would help, maybe something like an automatic timeout where BPs/patches 
with no negative scores and no activity for a week get flagged for special 
handling.

I feel we need to change the process somehow.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Thursday, August 28, 2014 1:44 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
> I'll try and not whine about my pet project but I do think there is a 
> problem here.  For the Gantt project to split out the scheduler there 
> is a crucial BP that needs to be implemented ( 
> https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP 
> has been rejected and we'll have to try again for Kilo.  My question 
> is did we do something wrong or is the process broken?
>
> Note that we originally proposed the BP on 4/23/14, went through 10 
> iterations to the final version on 7/25/14 and the final version got 
> three +1s and a +2 by 8/5.  Unfortunately, even after reaching out to 
> specific people, we didn't get the second +2, hence the rejection.
>
> I understand that reviews are a burden and very hard but it seems 
> wrong that a BP with multiple positive reviews and no negative reviews 
> is dropped because of what looks like indifference.

I would posit that this is not actually indifference. The reason that there may 
not have been >1 +2 from a core team member may very well have been that the 
core team members did not feel that the blueprint's priority was high enough to 
put before other work, or that the core team members did have the time to 
comment on the spec (due to them not feeling the blueprint had the priority to 
justify the time to do a full review).

Note that I'm not a core drivers team member.

Best,
-jay




Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Jay Pipes

On 08/28/2014 04:05 PM, Chris Friesen wrote:

On 08/28/2014 01:44 PM, Jay Pipes wrote:

On 08/27/2014 09:04 PM, Dugger, Donald D wrote:



I understand that reviews are a burden and very hard but it seems wrong
that a BP with multiple positive reviews and no negative reviews is
dropped because of what looks like indifference.


I would posit that this is not actually indifference. The reason that
there may not have been >1 +2 from a core team member may very well have
been that the core team members did not feel that the blueprint's
priority was high enough to put before other work, or that the core team
members did have the time to comment on the spec (due to them not
feeling the blueprint had the priority to justify the time to do a full
review).


The overall "scheduler-lib" Blueprint is marked with a "high" priority
at "http://status.openstack.org/release/";.  Hopefully that would apply
to sub-blueprints as well.


a) There are no sub-blueprints to that scheduler-lib blueprint

b) If there were sub-blueprints, that does not mean that they would 
necessarily take the same priority as their parent blueprint


c) There's no reason priorities can't be revisited when necessary

-jay



Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Brandon Logan
Yeah we were looking at the SameHost and DifferentHost filters and that
will probably do what we need.  Though I was hoping we could do a
combination of both, I believe we can make it work with those filters.

Thanks,
Brandon
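
[Editor's sketch: for concreteness, this is roughly how the scheduler hints
for these filters are exercised from the nova CLI, using circa
Icehouse/Juno novaclient syntax; image, flavor, instance names, and UUID
values are placeholders.]

```shell
# ServerGroup (anti-)affinity: create a group with a policy, then boot
# instances into it via the "group" scheduler hint.
nova server-group-create lb-group anti-affinity
nova boot --image cirros --flavor m1.small \
    --hint group=<server-group-uuid> amphora-1

# SameHostFilter / DifferentHostFilter: pass existing instance UUIDs
# as scheduler hints on boot.
nova boot --image cirros --flavor m1.small \
    --hint same_host=<instance-uuid> amphora-2
nova boot --image cirros --flavor m1.small \
    --hint different_host=<instance-uuid> amphora-3
```

Both hint styles only take effect if the corresponding filters are enabled
in the scheduler's filter list, as Susanne notes above.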

On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> Brandon
> 
> 
> I am not sure how ready that nova feature is for general use and have
> asked our nova lead about that. He is on vacation but should be back
> by the start of next week. I believe this is the right approach for us
> moving forward.
> 
> 
> 
> We cannot make it mandatory to run the 2 filters but we can say in the
> documentation that if these two filters aren't set that we cannot
> guaranty Anti-affinity or Affinity. 
> 
> 
> The other way we can implement this is by using availability zones and
> host aggregates. This is one technique we use to make sure we deploy
> our in-cloud services in an HA model. This also would assume that the
> operator is setting up Availabiltiy zones which we can't.
> 
> 
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> 
> 
> 
> Sahara is currently using the following filters to support host
> affinity which is probably due to the fact that they did the work
> before ServerGroups. I am not advocating the use of those filters but
> just showing you that we can document the feature and it will be up to
> the operator to set it up to get the right behavior.
> 
> 
> Regards
> 
> 
> Susanne 
> 
> 
> 
> Anti-affinity
> One of the problems in Hadoop running on OpenStack is that there is no
> ability to control where machine is actually running. We cannot be
> sure that two new virtual machines are started on different physical
> machines. As a result, any replication with cluster is not reliable
> because all replicas may turn up on one physical machine.
> Anti-affinity feature provides an ability to explicitly tell Sahara to
> run specified processes on different compute nodes. This is especially
> useful for Hadoop datanode process to make HDFS replicas reliable.
> The Anti-Affinity feature requires certain scheduler filters to be
> enabled on Nova. Edit your/etc/nova/nova.conf in the following way:
> 
> [DEFAULT]
> 
> ...
> 
> scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> scheduler_default_filters=DifferentHostFilter,SameHostFilter
> This feature is supported by all plugins out of the box.
> 
> 
> http://docs.openstack.org/developer/sahara/userdoc/features.html
> 
> 
> 
> 
> 
> On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan
>  wrote:
> Nova scheduler has ServerGroupAffinityFilter and
> ServerGroupAntiAffinityFilter which does the colocation and
> apolocation
> for VMs.  I think this is something we've discussed before
> about taking
> advantage of nova's scheduling.  I need to verify that this
> will work
> with what we (RAX) plan to do, but I'd like to get everyone
> else's
> thoughts.  Also, if we do decide this works for everyone
> involved,
> should we make it mandatory that the nova-compute services are
> running
> these two filters?  I'm also trying to see if we can use this
> to also do
> our own colocation and apolocation on load balancers, but it
> looks like
> it will be a bit complex if it can even work.  Hopefully, I
> can have
> something definitive on that soon.
> 
> Thanks,
> Brandon
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [QA] Picking a Name for the Tempest Library

2014-08-28 Thread Matthew Treinish
On Fri, Aug 22, 2014 at 11:26:25AM -0400, Matthew Treinish wrote:
> On Fri, Aug 15, 2014 at 03:14:21PM -0400, Matthew Treinish wrote:
> > Hi Everyone,
> > 
> > So as part of splitting out common functionality from tempest into a 
> > library [1]
> > we need to create a new repository. Which means we have the fun task of 
> > coming
> > up with something to name it. I'm personally thought we should call it:
> > 
> >  - mesocyclone
> > 
> > Which has the advantage of being a cloud/weather thing, and the name sort of
> > fits because it's a precursor to a tornado. Also, it's an available 
> > namespace on
> > both launchpad and pypi. But there has been expressed concern that both it 
> > is a
> > bit on the long side (which might have 80 char line length implications) and
> > it's unclear from the name what it does. 
> > 
> > During the last QA meeting some alternatives were also brought up:
> > 
> >  - tempest-lib / lib-tempest
> >  - tsepmet
> >  - blackstorm
> >  - calm
> >  - tempit
> >  - integration-test-lib
> > 
> > (although I'm not entirely sure I remember which ones were serious 
> > suggestions
> > or just jokes)
> > 
> > So as a first step I figured that I'd bring it up on the ML to see if 
> > anyone had
> > any other suggestions. (or maybe get a consensus around one choice) I'll 
> > take
> > the list, check if the namespaces are available, and make a survey so that
> > everyone can vote and hopefully we'll have a clear choice for a name from 
> > that.
> > 
> 
> Since the consensus was for renaming tempest and making tempest the library 
> name,
> which wasn't really feasible, I opened up a survey to poll everyone on the 
> which
> name to use:
> 
> https://www.surveymonkey.com/s/RLLZRGJ
> 
> The choices were taken from the initial list I posted and from the suggestions
> which people posted based on the availability of the names.
> 
> I'll keep it open for about a week, or until a clear favorite emerges.
> 

So I just closed the survey because one name had a commanding lead and in the
past 48hrs there was only 1 vote. The winner is tempest-lib, with 13 of 33
votes. The results from the survey are:

tempest-lib: 13
lib-tempest: 2
libtempest: 4
mesocyclone: 4
blackstorm: 4
caliban: 2
tempit: 3
pocovento: 1

-Matt Treinish




Re: [openstack-dev] [all] Design Summit reloaded

2014-08-28 Thread Doug Hellmann

On Aug 28, 2014, at 3:31 PM, Sean Dague  wrote:

> On 08/28/2014 03:06 PM, Jay Pipes wrote:
>> On 08/28/2014 02:21 PM, Sean Dague wrote:
>>> On 08/28/2014 01:58 PM, Jay Pipes wrote:
 On 08/27/2014 11:34 AM, Doug Hellmann wrote:
> 
> On Aug 27, 2014, at 8:51 AM, Thierry Carrez 
> wrote:
> 
>> Hi everyone,
>> 
>> I've been thinking about what changes we can bring to the Design
>> Summit format to make it more productive. I've heard the feedback
>> from the mid-cycle meetups and would like to apply some of those
>> ideas for Paris, within the constraints we have (already booked
>> space and time). Here is something we could do:
>> 
>> Day 1. Cross-project sessions / incubated projects / other
>> projects
>> 
>> I think that worked well last time. 3 parallel rooms where we can
>> address top cross-project questions, discuss the results of the
>> various experiments we conducted during juno. Don't hesitate to
>> schedule 2 slots for discussions, so that we have time to come to
>> the bottom of those issues. Incubated projects (and maybe "other"
>> projects, if space allows) occupy the remaining space on day 1, and
>> could occupy "pods" on the other days.
> 
> If anything, I’d like to have fewer cross-project tracks running
> simultaneously. Depending on which are proposed, maybe we can make
> that happen. On the other hand, cross-project issues is a big theme
> right now so maybe we should consider devoting more than a day to
> dealing with them.
 
 I agree with Doug here. I'd almost say having a single cross-project
 room, with serialized content would be better than 3 separate
 cross-project tracks. By nature, the cross-project sessions will attract
 developers that work or are interested in a set of projects that looks
 like a big Venn diagram. By having 3 separate cross-project tracks, we
 would increase the likelihood that developers would once more have to
 choose among simultaneous sessions that they have equal interest in. For
 Infra and QA folks, this likelihood is even greater...
 
 I think I'd prefer a single cross-project track on the first day.
>>> 
>>> So the fallout of that is there will be 6 or 7 cross-project slots for
>>> the design summit. Maybe that's the right mix if the TC does a good job
>>> picking the top 5 things we want accomplished from a cross project
>>> standpoint during the cycle. But it's going to have to be a pretty
>>> directed pick. I think last time we had 21 slots, and with a couple of
>>> doubling up that gave 19 sessions. (about 30 - 35 proposals for that
>>> slot set).
>> 
>> I'm not sure that would be a bad thing :)
>> 
>> I think one of the reasons the mid-cycles have been successful is that
>> they have adequately limited the scope of discussions and I think by
>> doing our homework by fully vetting and voting on cross-project sessions
>> and being OK with saying "No, not this time.", we will be more
>> productive than if we had 20+ cross-project sessions.
>> 
>> Just my two cents, though..
> 
> I'm not sure it would be a bad thing either. I just wanted to be
> explicit about what we are saying the cross projects sessions are for in
> this case: the 5 key cross project activities the TC believes should be
> worked on this next cycle.

We’ve talked about several cross-project needs recently. Let’s start a list of 
things we think we’re ready to make significant progress on during Kilo (not 
just things we *need* to do, but things we think we *can* do *now*):

1. logging cleanup and standardization


> 
> The other question is if we did that what's running in competition to
> cross project day? Is it another free form pod day for people not
> working on those things?

That seems like a good use of time.

> 
>   -Sean
> 
>> 
>> -jay
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> -- 
> Sean Dague
> http://dague.net
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Chris Friesen

On 08/28/2014 01:44 PM, Jay Pipes wrote:

On 08/27/2014 09:04 PM, Dugger, Donald D wrote:



I understand that reviews are a burden and very hard but it seems wrong
that a BP with multiple positive reviews and no negative reviews is
dropped because of what looks like indifference.


I would posit that this is not actually indifference. The reason that
there may not have been >1 +2 from a core team member may very well have
been that the core team members did not feel that the blueprint's
priority was high enough to put before other work, or that the core team
members did have the time to comment on the spec (due to them not
feeling the blueprint had the priority to justify the time to do a full
review).


The overall "scheduler-lib" Blueprint is marked with a "high" priority 
at "http://status.openstack.org/release/";.  Hopefully that would apply 
to sub-blueprints as well.


Chris



Re: [openstack-dev] [all] Design Summit reloaded

2014-08-28 Thread Anita Kuno
On 08/28/2014 03:31 PM, Sean Dague wrote:
> On 08/28/2014 03:06 PM, Jay Pipes wrote:
>> On 08/28/2014 02:21 PM, Sean Dague wrote:
>>> On 08/28/2014 01:58 PM, Jay Pipes wrote:
 On 08/27/2014 11:34 AM, Doug Hellmann wrote:
>
> On Aug 27, 2014, at 8:51 AM, Thierry Carrez 
> wrote:
>
>> Hi everyone,
>>
>> I've been thinking about what changes we can bring to the Design
>> Summit format to make it more productive. I've heard the feedback
>> from the mid-cycle meetups and would like to apply some of those
>> ideas for Paris, within the constraints we have (already booked
>> space and time). Here is something we could do:
>>
>> Day 1. Cross-project sessions / incubated projects / other
>> projects
>>
>> I think that worked well last time. 3 parallel rooms where we can
>> address top cross-project questions, discuss the results of the
>> various experiments we conducted during juno. Don't hesitate to
>> schedule 2 slots for discussions, so that we have time to come to
>> the bottom of those issues. Incubated projects (and maybe "other"
>> projects, if space allows) occupy the remaining space on day 1, and
>> could occupy "pods" on the other days.
>
> If anything, I’d like to have fewer cross-project tracks running
> simultaneously. Depending on which are proposed, maybe we can make
> that happen. On the other hand, cross-project issues is a big theme
> right now so maybe we should consider devoting more than a day to
> dealing with them.

 I agree with Doug here. I'd almost say having a single cross-project
 room, with serialized content would be better than 3 separate
 cross-project tracks. By nature, the cross-project sessions will attract
 developers that work or are interested in a set of projects that looks
 like a big Venn diagram. By having 3 separate cross-project tracks, we
 would increase the likelihood that developers would once more have to
 choose among simultaneous sessions that they have equal interest in. For
 Infra and QA folks, this likelihood is even greater...

 I think I'd prefer a single cross-project track on the first day.
>>>
>>> So the fallout of that is there will be 6 or 7 cross-project slots for
>>> the design summit. Maybe that's the right mix if the TC does a good job
>>> picking the top 5 things we want accomplished from a cross project
>>> standpoint during the cycle. But it's going to have to be a pretty
>>> directed pick. I think last time we had 21 slots, and with a couple of
>>> doubling up that gave 19 sessions. (about 30 - 35 proposals for that
>>> slot set).
>>
>> I'm not sure that would be a bad thing :)
>>
>> I think one of the reasons the mid-cycles have been successful is that
>> they have adequately limited the scope of discussions and I think by
>> doing our homework by fully vetting and voting on cross-project sessions
>> and being OK with saying "No, not this time.", we will be more
>> productive than if we had 20+ cross-project sessions.
>>
>> Just my two cents, though..
> 
> I'm not sure it would be a bad thing either. I just wanted to be
> explicit about what we are saying the cross projects sessions are for in
> this case: the 5 key cross project activities the TC believes should be
> worked on this next cycle.
> 
> The other question is if we did that what's running in competition to
> cross project day? Is it another free form pod day for people not
> working on those things?
> 
>   -Sean
> 
>>
>> -jay
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
I'm curious to know how many people would be expected to be all in the
same room? And what percentage of these folks are participating in the
conversation and how many are audience?

One of the issues that seem to be universal in the identified discontent
area with summit sessions currently (which gets discussed after each of
the mid-cycles) is that 30 people talking in a room with an audience of
200 isn't very efficient. I wonder if this well intentioned direction
might end up with this result which many folks I talked to don't want.

The other issue that comes to mind for me is trying to allow everyone to
be included in the discussion while keeping it focused and reducing the
side conversations. If folks are impatient to have their point (or off
topic joke) heard, they won't wait for a turn from whoever is chairing,
they will just start talking. This can create tension for the rest of
the folks who *are* patiently trying to wait their turn. I chaired a day
and a half of discussions at the qa/infra mid-cycle (the res

Re: [openstack-dev] [oslo] change to deprecation policy in the incubator

2014-08-28 Thread Doug Hellmann

On Aug 28, 2014, at 12:14 PM, Doug Hellmann  wrote:

> Before Juno we set a deprecation policy for graduating libraries that said 
> the incubated versions of the modules would stay in the incubator repository 
> for one full cycle after graduation. This gives projects time to adopt the 
> libraries and still receive bug fixes to the incubated version (see 
> https://wiki.openstack.org/wiki/Oslo#Graduation).
> 
> That policy worked well early on, but has recently introduced some challenges 
> with the low level modules. Other modules in the incubator are still 
> importing the incubated versions of, for example, timeutils, and so tests 
> that rely on mocking out or modifying the behavior of timeutils do not work 
> as expected when different parts of the application code end up calling 
> different versions of timeutils. We had similar issues with the notifiers and 
> RPC code, and I expect to find other cases as we continue with the 
> graduations.
> 
> To deal with this problem, I propose that for Kilo we delete graduating 
> modules as soon as the new library is released, rather than waiting to the 
> end of the cycle. We can update the other incubated modules at the same time, 
> so that the incubator will always use the new libraries and be consistent.
> 
> We have not had a lot of patches where backports were necessary, but there 
> have been a few important ones, so we need to retain the ability to handle 
> them and allow projects to adopt libraries at a reasonable pace. To handle 
> backports cleanly, we can “freeze” all changes to the master branch version 
> of modules slated for graduation during Kilo (we would need to make a good 
> list very early in the cycle), and use the stable/juno branch for backports.
> 
> The new process would be:
> 
> 1. Declare which modules we expect to graduate during Kilo.
> 2. Changes to those pre-graduation modules could be made in the master branch 
> before their library is released, as long as the change is also backported to 
> the stable/juno branch at the same time (we should enforce this by having 
> both patches submitted before accepting either).
> 3. When graduation for a library starts, freeze those modules in all branches 
> until the library is released.
> 4. Remove modules from the incubator’s master branch after the library is 
> released.
> 5. Land changes in the library first.
> 6. Backport changes, as needed, to stable/juno instead of master.
> 
> It would be better to begin the export/import process as early as possible in 
> Kilo to keep the window where point 2 applies very short.
> 
> If there are objections to using stable/juno, we could introduce a new branch 
> with a name like backports/kilo, but I am afraid having the extra branch to 
> manage would just cause confusion.
> 
> I would like to move ahead with this plan by creating the stable/juno branch 
> and starting to update the incubator as soon as the oslo.log repository is 
> imported (https://review.openstack.org/116934).

That change has merged and the oslo.log repository has been created.

Doug

> 
> Thoughts?
> 
> Doug
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


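For the record, the stable/juno mechanics in Doug's numbered list can be sketched with plain git. Everything below is illustrative — the module name, commit messages, and scratch repo stand in for the real openstack/oslo-incubator layout, and the gerrit/git-review side of the workflow is left out:

```shell
# Sketch of the proposed backport flow: a fix lands on master and is
# cherry-picked to stable/juno at the same time (point 2 in the list above).
# All names here are illustrative.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q incubator
cd incubator
git config user.email dev@example.com
git config user.name "Oslo Dev"

# An incubated module exists before graduation.
echo "original timeutils" > timeutils.py
git add timeutils.py
git commit -qm "import timeutils"

# Cut the backport branch (stable/juno) from this point.
git branch stable/juno

# A bug fix lands on master first...
echo "fixed timeutils" > timeutils.py
git commit -qam "fix timeutils drift handling"
fix=$(git rev-parse HEAD)

# ...and is backported to stable/juno at the same time, with -x recording
# the original commit id in the backport's message.
git checkout -q stable/juno
git cherry-pick -x "$fix" >/dev/null
git log --oneline
```

Once the library is released, the module would be deleted from master (point 4) and only the stable/juno copy would keep receiving such cherry-picks (point 6).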


Re: [openstack-dev] [all] Design Summit reloaded

2014-08-28 Thread Jay Pipes

On 08/28/2014 03:31 PM, Sean Dague wrote:

On 08/28/2014 03:06 PM, Jay Pipes wrote:

On 08/28/2014 02:21 PM, Sean Dague wrote:

On 08/28/2014 01:58 PM, Jay Pipes wrote:

On 08/27/2014 11:34 AM, Doug Hellmann wrote:


On Aug 27, 2014, at 8:51 AM, Thierry Carrez 
wrote:


Hi everyone,

I've been thinking about what changes we can bring to the Design
Summit format to make it more productive. I've heard the feedback
from the mid-cycle meetups and would like to apply some of those
ideas for Paris, within the constraints we have (already booked
space and time). Here is something we could do:

Day 1. Cross-project sessions / incubated projects / other
projects

I think that worked well last time. 3 parallel rooms where we can
address top cross-project questions, discuss the results of the
various experiments we conducted during juno. Don't hesitate to
schedule 2 slots for discussions, so that we have time to come to
the bottom of those issues. Incubated projects (and maybe "other"
projects, if space allows) occupy the remaining space on day 1, and
could occupy "pods" on the other days.


If anything, I’d like to have fewer cross-project tracks running
simultaneously. Depending on which are proposed, maybe we can make
that happen. On the other hand, cross-project issues is a big theme
right now so maybe we should consider devoting more than a day to
dealing with them.


I agree with Doug here. I'd almost say having a single cross-project
room, with serialized content would be better than 3 separate
cross-project tracks. By nature, the cross-project sessions will attract
developers that work or are interested in a set of projects that looks
like a big Venn diagram. By having 3 separate cross-project tracks, we
would increase the likelihood that developers would once more have to
choose among simultaneous sessions that they have equal interest in. For
Infra and QA folks, this likelihood is even greater...

I think I'd prefer a single cross-project track on the first day.


So the fallout of that is there will be 6 or 7 cross-project slots for
the design summit. Maybe that's the right mix if the TC does a good job
picking the top 5 things we want accomplished from a cross project
standpoint during the cycle. But it's going to have to be a pretty
directed pick. I think last time we had 21 slots, and with a couple of
doubling up that gave 19 sessions. (about 30 - 35 proposals for that
slot set).


I'm not sure that would be a bad thing :)

I think one of the reasons the mid-cycles have been successful is that
they have adequately limited the scope of discussions and I think by
doing our homework by fully vetting and voting on cross-project sessions
and being OK with saying "No, not this time.", we will be more
productive than if we had 20+ cross-project sessions.

Just my two cents, though..


I'm not sure it would be a bad thing either. I just wanted to be
explicit about what we are saying the cross projects sessions are for in
this case: the 5 key cross project activities the TC believes should be
worked on this next cycle.


Yes.


The other question is if we did that what's running in competition to
cross project day? Is it another free form pod day for people not
working on those things?


It could be a pod day, sure. Or just an extended hallway session day... :)

-jay



Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Jay Pipes

On 08/27/2014 09:04 PM, Dugger, Donald D wrote:

I’ll try and not whine about my pet project but I do think there is a
problem here.  For the Gantt project to split out the scheduler there is
a crucial BP that needs to be implemented (
https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP has
been rejected and we’ll have to try again for Kilo.  My question is did
we do something wrong or is the process broken?

Note that we originally proposed the BP on 4/23/14, went through 10
iterations to the final version on 7/25/14 and the final version got
three +1s and a +2 by 8/5.  Unfortunately, even after reaching out to
specific people, we didn’t get the second +2, hence the rejection.

I understand that reviews are a burden and very hard but it seems wrong
that a BP with multiple positive reviews and no negative reviews is
dropped because of what looks like indifference.


I would posit that this is not actually indifference. The reason that 
there may not have been >1 +2 from a core team member may very well have 
been that the core team members did not feel that the blueprint's 
priority was high enough to put before other work, or that the core team 
members did have the time to comment on the spec (due to them not 
feeling the blueprint had the priority to justify the time to do a full 
review).


Note that I'm not a core drivers team member.

Best,
-jay




Re: [openstack-dev] [all] gate debugging

2014-08-28 Thread Doug Hellmann

On Aug 28, 2014, at 2:16 PM, Sean Dague  wrote:

> On 08/28/2014 02:07 PM, Joe Gordon wrote:
>> 
>> 
>> 
>> On Thu, Aug 28, 2014 at 10:17 AM, Sean Dague wrote:
>> 
>>On 08/28/2014 12:48 PM, Doug Hellmann wrote:
>>> 
>>> On Aug 27, 2014, at 5:56 PM, Sean Dague wrote:
>>> 
 On 08/27/2014 05:27 PM, Doug Hellmann wrote:
> 
> On Aug 27, 2014, at 2:54 PM, Sean Dague wrote:
> 
>> Note: thread intentionally broken, this is really a different
>>topic.
>> 
>> On 08/27/2014 02:30 PM, Doug Hellmann wrote:
>>> On Aug 27, 2014, at 1:30 PM, Chris Dent wrote:
>>> 
 On Wed, 27 Aug 2014, Doug Hellmann wrote:
 
> I have found it immensely helpful, for example, to have a
>>written set
> of the steps involved in creating a new library, from
>>importing the
> git repo all the way through to making it available to other
>>projects.
> Without those instructions, it would have been much harder
>>to split up
> the work. The team would have had to train each other by word of
> mouth, and we would have had constant issues with inconsistent
> approaches triggering different failures. The time we spent
>>building
> and verifying the instructions has paid off to the extent
>>that we even
> had one developer not on the core team handle a graduation
>>for us.
 
 +many more for the relatively simple act of just writing
>>stuff down
>>> 
>>> "Write it down.” is my theme for Kilo.
>> 
>> I definitely get the sentiment. "Write it down" is also hard
>>when you
>> are talking about things that do change around quite a bit.
>>OpenStack as
>> a whole sees 250 - 500 changes a week, so the interaction
>>pattern moves
>> around enough that it's really easy to have *very* stale
>>information
>> written down. Stale information is even more dangerous than no
>> information some times, as it takes people down very wrong paths.
>> 
>> I think we break down on communication when we get into a
>>conversation
>> of "I want to learn gate debugging" because I don't quite know
>>what that
>> means, or where the starting point of understanding is. So those
>> intentions are well meaning, but tend to stall. The reality was
>>there
>> was no road map for those of us that dive in, it's just
>>understanding
>> how OpenStack holds together as a whole and where some of the
>>high risk
>> parts are. And a lot of that comes with days staring at code
>>and logs
>> until patterns emerge.
>> 
>> Maybe if we can get smaller more targeted questions, we can
>>help folks
>> better? I'm personally a big fan of answering the targeted
>>questions
>> because then I also know that the time spent exposing that
>>information
>> was directly useful.
>> 
>> I'm more than happy to mentor folks. But I just end up finding
>>the "I
>> want to learn" at the generic level something that's hard to
>>grasp onto
>> or figure out how we turn it into action. I'd love to hear more
>>ideas
>> from folks about ways we might do that better.
> 
> You and a few others have developed an expertise in this
>>important skill. I am so far away from that level of expertise that
>>I don’t know the questions to ask. More often than not I start with
>>the console log, find something that looks significant, spend an
>>hour or so tracking it down, and then have someone tell me that it
>>is a red herring and the issue is really some other thing that they
>>figured out very quickly by looking at a file I never got to.
> 
> I guess what I’m looking for is some help with the patterns.
>>What made you think to look in one log file versus another? Some of
>>these jobs save a zillion little files, which ones are actually
>>useful? What tools are you using to correlate log entries across all
>>of those files? Are you doing it by hand? Is logstash useful for
>>that, or is that more useful for finding multiple occurrences of the
>>same issue?
> 
> I realize there’s not a way to write a how-to that will live
>>forever. Maybe one way to deal with that is to write up the research
>>done on bugs soon after they are solved, and publish that to the
>>mailing list. Even the retrospective view is useful because we can
>>all learn from it without having to live through it. The mailing
>>list is a fairly ephemeral medium, and something very old in the
>>archives is understood to have a good chance of being out of date so
>>we don’t have to keep adding disclaimers.
 
 Sure. Matt's actually working up a blog post describing the thing he
 nailed earlier in

Re: [openstack-dev] [all] gate debugging

2014-08-28 Thread Doug Hellmann

On Aug 28, 2014, at 2:15 PM, Sean Dague  wrote:

> On 08/28/2014 01:48 PM, Doug Hellmann wrote:
>> 
>> On Aug 28, 2014, at 1:17 PM, Sean Dague  wrote:
>> 
>>> On 08/28/2014 12:48 PM, Doug Hellmann wrote:
 
 On Aug 27, 2014, at 5:56 PM, Sean Dague  wrote:
 
> On 08/27/2014 05:27 PM, Doug Hellmann wrote:
>> 
>> On Aug 27, 2014, at 2:54 PM, Sean Dague  wrote:
>> 
>>> Note: thread intentionally broken, this is really a different topic.
>>> 
>>> On 08/27/2014 02:30 PM, Doug Hellmann wrote:
 On Aug 27, 2014, at 1:30 PM, Chris Dent  wrote:
 
> On Wed, 27 Aug 2014, Doug Hellmann wrote:
> 
>> I have found it immensely helpful, for example, to have a written set
>> of the steps involved in creating a new library, from importing the
>> git repo all the way through to making it available to other 
>> projects.
>> Without those instructions, it would have been much harder to split 
>> up
>> the work. The team would have had to train each other by word of
>> mouth, and we would have had constant issues with inconsistent
>> approaches triggering different failures. The time we spent building
>> and verifying the instructions has paid off to the extent that we 
>> even
>> had one developer not on the core team handle a graduation for us.
> 
> +many more for the relatively simple act of just writing stuff down
 
 "Write it down.” is my theme for Kilo.
>>> 
>>> I definitely get the sentiment. "Write it down" is also hard when you
>>> are talking about things that do change around quite a bit. OpenStack as
>>> a whole sees 250 - 500 changes a week, so the interaction pattern moves
>>> around enough that it's really easy to have *very* stale information
>>> written down. Stale information is even more dangerous than no
>>> information some times, as it takes people down very wrong paths.
>>> 
>>> I think we break down on communication when we get into a conversation
>>> of "I want to learn gate debugging" because I don't quite know what that
>>> means, or where the starting point of understanding is. So those
>>> intentions are well meaning, but tend to stall. The reality was there
>>> was no road map for those of us that dive in, it's just understanding
>>> how OpenStack holds together as a whole and where some of the high risk
>>> parts are. And a lot of that comes with days staring at code and logs
>>> until patterns emerge.
>>> 
>>> Maybe if we can get smaller more targeted questions, we can help folks
>>> better? I'm personally a big fan of answering the targeted questions
>>> because then I also know that the time spent exposing that information
>>> was directly useful.
>>> 
>>> I'm more than happy to mentor folks. But I just end up finding the "I
>>> want to learn" at the generic level something that's hard to grasp onto
>>> or figure out how we turn it into action. I'd love to hear more ideas
>>> from folks about ways we might do that better.
>> 
>> You and a few others have developed an expertise in this important 
>> skill. I am so far away from that level of expertise that I don’t know 
>> the questions to ask. More often than not I start with the console log, 
>> find something that looks significant, spend an hour or so tracking it 
>> down, and then have someone tell me that it is a red herring and the 
>> issue is really some other thing that they figured out very quickly by 
>> looking at a file I never got to.
>> 
>> I guess what I’m looking for is some help with the patterns. What made 
>> you think to look in one log file versus another? Some of these jobs 
>> save a zillion little files, which ones are actually useful? What tools 
>> are you using to correlate log entries across all of those files? Are 
>> you doing it by hand? Is logstash useful for that, or is that more 
>> useful for finding multiple occurrences of the same issue?
>> 
>> I realize there’s not a way to write a how-to that will live forever. 
>> Maybe one way to deal with that is to write up the research done on bugs 
>> soon after they are solved, and publish that to the mailing list. Even 
>> the retrospective view is useful because we can all learn from it 
>> without having to live through it. The mailing list is a fairly 
>> ephemeral medium, and something very old in the archives is understood 
>> to have a good chance of being out of date so we don’t have to keep 
>> adding disclaimers.
> 
> Sure. Matt's actually working up a blog post describing the thing he
> nailed earlier in the week.
 
 Yes, I appreciate that both of you are responding to my questions. :-)
 
 I have some more specific questions/comments b

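On the logstash question a few messages up: one concrete, self-documenting pattern in the gate today is the elastic-recheck project, where each recurring failure gets a small YAML query file run against the logstash index. The bug number, message text, and log file name below are invented for illustration:

```yaml
# queries/1234567.yaml -- hypothetical elastic-recheck fingerprint,
# named for the Launchpad bug it tracks.
query: >
  message:"Timed out waiting for volume to become available" AND
  tags:"screen-n-cpu.txt" AND
  build_status:"FAILURE"
```

Writing a fingerprint down this way is itself a form of the retrospective Doug asks for: the query records exactly which log file and which message identified the failure.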
Re: [openstack-dev] [Octavia] Octavia VM image design

2014-08-28 Thread Susanne Balle
I agree with Michael. We need to use the OpenStack tooling.

Sahara is encountering some of the same issues we are as they build
up their Hadoop VMs/clusters.

See

http://docs.openstack.org/developer/sahara/userdoc/vanilla_plugin.html
http://docs.openstack.org/developer/sahara/userdoc/diskimagebuilder.html

for inspiration,

Susanne



On Wed, Aug 27, 2014 at 6:21 PM, Michael Johnson 
wrote:

> I am investigating building scripts that use diskimage-builder
> (https://github.com/openstack/diskimage-builder) to create a "purpose
> built" image.  This should allow some flexibility in the base image
> and the output image format (including a path to docker).
>
> The definition of "purpose built" is open at this point.  I will
> likely try to have a minimal Ubuntu based VM image as a starting
> point/test case and we can add/change as necessary.
>
> Michael
>
>
> On Wed, Aug 27, 2014 at 2:12 PM, Dustin Lundquist 
> wrote:
> > It seems to me there are two major approaches to the Octavia VM design:
> >
> > Start with a standard Linux distribution (e.g. Ubuntu 14.04 LTS) and
> install
> > HAProxy 1.5 and Octavia control layer
> > Develop a minimal purpose driven distribution (similar to m0n0wall) with
> > just HAProxy, iproute2 and a Python runtime for the control layer.
> >
> > The primary difference here is additional development effort for option 2,
> > versus the increased image size of option 1. Using Ubuntu and CirrOS images
> > as representatives of the two options, it looks like the image is about 20
> > times larger for a full-featured distribution. If one of the HA models is
> > to spin up a replacement instance on failure, the image size could
> > significantly affect fail-over time.
> >
> > For initial work I think starting with a standard distribution would be
> > sensible, but we should target systemd (Debian adopted systemd as new
> > default, and Ubuntu is following suit). I wanted to find out if there is
> > interest in a minimal Octavia image, and if so this may affect design
> > decisions on the instance control plane component.
> >
> >
> > -Dustin
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
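Along those lines, a first diskimage-builder experiment for the amphora image might look like the sketch below. The "ubuntu" and "vm" elements are standard dib elements; "octavia-haproxy" is a hypothetical local element we would still have to write to install HAProxy 1.5 and the control-plane agent. The guard keeps this a dry run unless a build is explicitly requested:

```shell
# Assemble a disk-image-create invocation for a minimal amphora image.
# "ubuntu" and "vm" are stock diskimage-builder elements; "octavia-haproxy"
# is a hypothetical element of our own.
set -e
export DIB_RELEASE=trusty          # Ubuntu release for the "ubuntu" element
ELEMENTS="ubuntu vm octavia-haproxy"
OUTPUT=octavia-amphora             # dib produces octavia-amphora.qcow2
CMD="disk-image-create -o $OUTPUT $ELEMENTS"

# Dry run unless explicitly asked to build (a real build needs
# diskimage-builder installed and the octavia-haproxy element on its path).
if [ "${DO_BUILD:-0}" = "1" ]; then
    $CMD
else
    echo "would run: $CMD"
fi
```

Swapping the base distribution or the output format (including a docker path, as Michael notes) then becomes a matter of changing the element list rather than rebuilding the scripts.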


Re: [openstack-dev] [all] Design Summit reloaded

2014-08-28 Thread Anne Gentle
On Wed, Aug 27, 2014 at 7:51 AM, Thierry Carrez 
wrote:

> Hi everyone,
>
> I've been thinking about what changes we can bring to the Design Summit
> format to make it more productive. I've heard the feedback from the
> mid-cycle meetups and would like to apply some of those ideas for Paris,
> within the constraints we have (already booked space and time). Here is
> something we could do:
>
> Day 1. Cross-project sessions / incubated projects / other projects
>
> I think that worked well last time. 3 parallel rooms where we can
> address top cross-project questions, discuss the results of the various
> experiments we conducted during juno. Don't hesitate to schedule 2 slots
> for discussions, so that we have time to come to the bottom of those
> issues. Incubated projects (and maybe "other" projects, if space allows)
> occupy the remaining space on day 1, and could occupy "pods" on the
> other days.
>
>
Yep, I think this works in theory, the tough part will be when all the
incubating projects realize they're sending people for a single day? Maybe
it'll work out differently than I think though. It means fitting ironic,
barbican, designate, manila, marconi in a day?

Also since QA, Infra, and Docs are cross-project AND Programs, where do
they land?


> Day 2 and Day 3. Scheduled sessions for various programs
>
> That's our traditional scheduled space. We'll have a 33% less slots
> available. So, rather than trying to cover all the scope, the idea would
> be to focus those sessions on specific issues which really require
> face-to-face discussion (which can't be solved on the ML or using spec
> discussion) *or* require a lot of user feedback. That way, appearing in
> the general schedule is very helpful. This will require us to be a lot
> stricter on what we accept there and what we don't -- we won't have
> space for courtesy sessions anymore, and traditional/unnecessary
> sessions (like my traditional "release schedule" one) should just move
> to the mailing-list.
>

I like thinking about what we can move to the mailing lists. Nice.


>
> Day 4. Contributors meetups
>
> On the last day, we could try to split the space so that we can conduct
> parallel midcycle-meetup-like contributors gatherings, with no time
> boundaries and an open agenda. Large projects could get a full day,
> smaller projects would get half a day (but could continue the discussion
> in a local bar). Ideally that meetup would end with some alignment on
> release goals, but the idea is to make the best of that time together to
> solve the issues you have. Friday would finish with the design summit
> feedback session, for those who are still around.
>
>
Sounds good.


>
> I think this proposal makes the best use of our setup: discuss clear
> cross-project issues, address key specific topics which need
> face-to-face time and broader attendance, then try to replicate the
> success of midcycle meetup-like open unscheduled time to discuss
> whatever is hot at this point.
>
> There are still details to work out (is it possible to split the space,
> should we use the usual design summit CFP website to organize the
> "scheduled" time...), but I would first like to have your feedback on
> this format. Also if you have alternative proposals that would make a
> better use of our 4 days, let me know.
>
> Cheers,
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit reloaded

2014-08-28 Thread Sean Dague
On 08/28/2014 03:06 PM, Jay Pipes wrote:
> On 08/28/2014 02:21 PM, Sean Dague wrote:
>> On 08/28/2014 01:58 PM, Jay Pipes wrote:
>>> On 08/27/2014 11:34 AM, Doug Hellmann wrote:

 On Aug 27, 2014, at 8:51 AM, Thierry Carrez 
 wrote:

> Hi everyone,
>
> I've been thinking about what changes we can bring to the Design
> Summit format to make it more productive. I've heard the feedback
> from the mid-cycle meetups and would like to apply some of those
> ideas for Paris, within the constraints we have (already booked
> space and time). Here is something we could do:
>
> Day 1. Cross-project sessions / incubated projects / other
> projects
>
> I think that worked well last time. 3 parallel rooms where we can
> address top cross-project questions, discuss the results of the
> various experiments we conducted during juno. Don't hesitate to
> schedule 2 slots for discussions, so that we have time to come to
> the bottom of those issues. Incubated projects (and maybe "other"
> projects, if space allows) occupy the remaining space on day 1, and
> could occupy "pods" on the other days.

 If anything, I’d like to have fewer cross-project tracks running
 simultaneously. Depending on which are proposed, maybe we can make
 that happen. On the other hand, cross-project issues is a big theme
 right now so maybe we should consider devoting more than a day to
 dealing with them.
>>>
>>> I agree with Doug here. I'd almost say having a single cross-project
>>> room, with serialized content would be better than 3 separate
>>> cross-project tracks. By nature, the cross-project sessions will attract
>>> developers that work or are interested in a set of projects that looks
>>> like a big Venn diagram. By having 3 separate cross-project tracks, we
>>> would increase the likelihood that developers would once more have to
>>> choose among simultaneous sessions that they have equal interest in. For
>>> Infra and QA folks, this likelihood is even greater...
>>>
>>> I think I'd prefer a single cross-project track on the first day.
>>
>> So the fallout of that is there will be 6 or 7 cross-project slots for
>> the design summit. Maybe that's the right mix if the TC does a good job
>> picking the top 5 things we want accomplished from a cross project
>> standpoint during the cycle. But it's going to have to be a pretty
>> directed pick. I think last time we had 21 slots, and with a couple of
>> doubling up that gave 19 sessions. (about 30 - 35 proposals for that
>> slot set).
> 
> I'm not sure that would be a bad thing :)
> 
> I think one of the reasons the mid-cycles have been successful is that
> they have adequately limited the scope of discussions and I think by
> doing our homework by fully vetting and voting on cross-project sessions
> and being OK with saying "No, not this time," we will be more
> productive than if we had 20+ cross-project sessions.
> 
> Just my two cents, though..

I'm not sure it would be a bad thing either. I just wanted to be
explicit about what we are saying the cross projects sessions are for in
this case: the 5 key cross project activities the TC believes should be
worked on this next cycle.

The other question is if we did that what's running in competition to
cross project day? Is it another free form pod day for people not
working on those things?

-Sean

> 
> -jay
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBass] Design sessions for Neutron LBaaS. What do we want/need?

2014-08-28 Thread Susanne Balle
Let's use a different email thread to discuss if Octavia should be part of
the Neutron incubator project right away or not. I would like to keep the
two discussions separate.



Susanne


On Thu, Aug 28, 2014 at 3:20 PM, Stephen Balukoff 
wrote:

> Hi Susanne--
>
> Regarding the Octavia sessions:  I think we probably will have enough to
> discuss that we could use two design sessions.  However, I also think that
> we can probably come to conclusions on whether Octavia should become a part
> of Neutron Incubator right away via discussion on this mailing list.  Do we
> want to have that discussion in another thread, or should we use this one?
>
> Stephen
>
>
> On Thu, Aug 28, 2014 at 7:51 AM, Susanne Balle 
> wrote:
>
>> With a corrected Subject. Susanne
>>
>>
>>
>> On Thu, Aug 28, 2014 at 10:49 AM, Susanne Balle 
>> wrote:
>>
>>>
>>> LBaaS team,
>>>
>>> As we discussed in the Weekly LBaaS meeting this morning we should make
>>> sure we get the design sessions scheduled that we are interested in.
>>>
>>> We currently agreed on the following:
>>>
>>> * Neutron LBaaS: we want to schedule 2 sessions. I am assuming that we
>>> want to go over status and also the whole incubator thingy and how we will
>>> best move forward.
>>>
>>> * Octavia: We want to schedule 2 sessions.
>>> ---  During one of the sessions I would like to discuss the pros and
>>> cons of putting Octavia into the Neutron LBaaS incubator project right
>>> away. If it is going to be the reference implementation for LBaaS v2, then
>>> I believe Octavia belongs in the Neutron LBaaS v2 incubator.
>>>
>>> * Flavors which should be coordinated with markmcclain and enikanorov.
>>> --- https://review.openstack.org/#/c/102723/
>>>
>>> Is this too many sessions given the constraints? I am assuming that we
>>> can also meet at the pods like we did at the last summit.
>>>
>>> thoughts?
>>>
>>> Regards Susanne
>>>
>>> Thierry Carrez 
>>> Aug 27 (1 day ago)
>>>  to OpenStack
>>>  Hi everyone,
>>>
>>> I've been thinking about what changes we can bring to the Design Summit
>>> format to make it more productive. I've heard the feedback from the
>>> mid-cycle meetups and would like to apply some of those ideas for Paris,
>>> within the constraints we have (already booked space and time). Here is
>>> something we could do:
>>>
>>> Day 1. Cross-project sessions / incubated projects / other projects
>>>
>>> I think that worked well last time. 3 parallel rooms where we can
>>> address top cross-project questions, discuss the results of the various
>>> experiments we conducted during juno. Don't hesitate to schedule 2 slots
>>> for discussions, so that we have time to come to the bottom of those
>>> issues. Incubated projects (and maybe "other" projects, if space allows)
>>> occupy the remaining space on day 1, and could occupy "pods" on the
>>> other days.
>>>
>>> Day 2 and Day 3. Scheduled sessions for various programs
>>>
>>> That's our traditional scheduled space. We'll have 33% fewer slots
>>> available. So, rather than trying to cover all the scope, the idea would
>>> be to focus those sessions on specific issues which really require
>>> face-to-face discussion (which can't be solved on the ML or using spec
>>> discussion) *or* require a lot of user feedback. That way, appearing in
>>> the general schedule is very helpful. This will require us to be a lot
>>> stricter on what we accept there and what we don't -- we won't have
>>> space for courtesy sessions anymore, and traditional/unnecessary
>>> sessions (like my traditional "release schedule" one) should just move
>>> to the mailing-list.
>>>
>>> Day 4. Contributors meetups
>>>
>>> On the last day, we could try to split the space so that we can conduct
>>> parallel midcycle-meetup-like contributors gatherings, with no time
>>> boundaries and an open agenda. Large projects could get a full day,
>>> smaller projects would get half a day (but could continue the discussion
>>> in a local bar). Ideally that meetup would end with some alignment on
>>> release goals, but the idea is to make the best of that time together to
>>> solve the issues you have. Friday would finish with the design summit
>>> feedback session, for those who are still around.
>>>
>>>
>>> I think this proposal makes the best use of our setup: discuss clear
>>> cross-project issues, address key specific topics which need
>>> face-to-face time and broader attendance, then try to replicate the
>>> success of midcycle meetup-like open unscheduled time to discuss
>>> whatever is hot at this point.
>>>
>>> There are still details to work out (is it possible to split the space,
>>> should we use the usual design summit CFP website to organize the
>>> "scheduled" time...), but I would first like to have your feedback on
>>> this format. Also if you have alternative proposals that would make a
>>> better use of our 4 days, let me know.
>>>
>>> Cheers,
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] Design sessions for Neutron LBaaS. What do we want/need?

2014-08-28 Thread Susanne Balle
Let's use a different email thread to discuss if Octavia should be part of
the Neutron incubator project right away or not. I would like to keep the
two discussions separate.

Susanne


On Thu, Aug 28, 2014 at 10:49 AM, Susanne Balle 
wrote:

>
> LBaaS team,
>
> As we discussed in the Weekly LBaaS meeting this morning we should make
> sure we get the design sessions scheduled that we are interested in.
>
> We currently agreed on the following:
>
> * Neutron LBaaS: we want to schedule 2 sessions. I am assuming that we
> want to go over status and also the whole incubator thingy and how we will
> best move forward.
>
> * Octavia: We want to schedule 2 sessions.
> ---  During one of the sessions I would like to discuss the pros and cons
> of putting Octavia into the Neutron LBaaS incubator project right away. If
> it is going to be the reference implementation for LBaaS v2, then I believe
> Octavia belongs in the Neutron LBaaS v2 incubator.
>
> * Flavors which should be coordinated with markmcclain and enikanorov.
> --- https://review.openstack.org/#/c/102723/
>
> Is this too many sessions given the constraints? I am assuming that we can
> also meet at the pods like we did at the last summit.
>
> thoughts?
>
> Regards Susanne
>
> Thierry Carrez 
> Aug 27 (1 day ago)
>  to OpenStack
>  Hi everyone,
>
> I've been thinking about what changes we can bring to the Design Summit
> format to make it more productive. I've heard the feedback from the
> mid-cycle meetups and would like to apply some of those ideas for Paris,
> within the constraints we have (already booked space and time). Here is
> something we could do:
>
> Day 1. Cross-project sessions / incubated projects / other projects
>
> I think that worked well last time. 3 parallel rooms where we can
> address top cross-project questions, discuss the results of the various
> experiments we conducted during juno. Don't hesitate to schedule 2 slots
> for discussions, so that we have time to come to the bottom of those
> issues. Incubated projects (and maybe "other" projects, if space allows)
> occupy the remaining space on day 1, and could occupy "pods" on the
> other days.
>
> Day 2 and Day 3. Scheduled sessions for various programs
>
> That's our traditional scheduled space. We'll have 33% fewer slots
> available. So, rather than trying to cover all the scope, the idea would
> be to focus those sessions on specific issues which really require
> face-to-face discussion (which can't be solved on the ML or using spec
> discussion) *or* require a lot of user feedback. That way, appearing in
> the general schedule is very helpful. This will require us to be a lot
> stricter on what we accept there and what we don't -- we won't have
> space for courtesy sessions anymore, and traditional/unnecessary
> sessions (like my traditional "release schedule" one) should just move
> to the mailing-list.
>
> Day 4. Contributors meetups
>
> On the last day, we could try to split the space so that we can conduct
> parallel midcycle-meetup-like contributors gatherings, with no time
> boundaries and an open agenda. Large projects could get a full day,
> smaller projects would get half a day (but could continue the discussion
> in a local bar). Ideally that meetup would end with some alignment on
> release goals, but the idea is to make the best of that time together to
> solve the issues you have. Friday would finish with the design summit
> feedback session, for those who are still around.
>
>
> I think this proposal makes the best use of our setup: discuss clear
> cross-project issues, address key specific topics which need
> face-to-face time and broader attendance, then try to replicate the
> success of midcycle meetup-like open unscheduled time to discuss
> whatever is hot at this point.
>
> There are still details to work out (is it possible to split the space,
> should we use the usual design summit CFP website to organize the
> "scheduled" time...), but I would first like to have your feedback on
> this format. Also if you have alternative proposals that would make a
> better use of our 4 days, let me know.
>
> Cheers,
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBass] Design sessions for Neutron LBaaS. What do we want/need?

2014-08-28 Thread Stephen Balukoff
Hi Susanne--

Regarding the Octavia sessions:  I think we probably will have enough to
discuss that we could use two design sessions.  However, I also think that
we can probably come to conclusions on whether Octavia should become a part
of Neutron Incubator right away via discussion on this mailing list.  Do we
want to have that discussion in another thread, or should we use this one?

Stephen


On Thu, Aug 28, 2014 at 7:51 AM, Susanne Balle 
wrote:

> With a corrected Subject. Susanne
>
>
>
> On Thu, Aug 28, 2014 at 10:49 AM, Susanne Balle 
> wrote:
>
>>
>> LBaaS team,
>>
>> As we discussed in the Weekly LBaaS meeting this morning we should make
>> sure we get the design sessions scheduled that we are interested in.
>>
>> We currently agreed on the following:
>>
>> * Neutron LBaaS: we want to schedule 2 sessions. I am assuming that we
>> want to go over status and also the whole incubator thingy and how we will
>> best move forward.
>>
>> * Octavia: We want to schedule 2 sessions.
>> ---  During one of the sessions I would like to discuss the pros and cons
>> of putting Octavia into the Neutron LBaaS incubator project right away. If
>> it is going to be the reference implementation for LBaaS v2, then I believe
>> Octavia belongs in the Neutron LBaaS v2 incubator.
>>
>> * Flavors which should be coordinated with markmcclain and enikanorov.
>> --- https://review.openstack.org/#/c/102723/
>>
>> Is this too many sessions given the constraints? I am assuming that we
>> can also meet at the pods like we did at the last summit.
>>
>> thoughts?
>>
>> Regards Susanne
>>
>> Thierry Carrez 
>> Aug 27 (1 day ago)
>>  to OpenStack
>>  Hi everyone,
>>
>> I've been thinking about what changes we can bring to the Design Summit
>> format to make it more productive. I've heard the feedback from the
>> mid-cycle meetups and would like to apply some of those ideas for Paris,
>> within the constraints we have (already booked space and time). Here is
>> something we could do:
>>
>> Day 1. Cross-project sessions / incubated projects / other projects
>>
>> I think that worked well last time. 3 parallel rooms where we can
>> address top cross-project questions, discuss the results of the various
>> experiments we conducted during juno. Don't hesitate to schedule 2 slots
>> for discussions, so that we have time to come to the bottom of those
>> issues. Incubated projects (and maybe "other" projects, if space allows)
>> occupy the remaining space on day 1, and could occupy "pods" on the
>> other days.
>>
>> Day 2 and Day 3. Scheduled sessions for various programs
>>
>> That's our traditional scheduled space. We'll have 33% fewer slots
>> available. So, rather than trying to cover all the scope, the idea would
>> be to focus those sessions on specific issues which really require
>> face-to-face discussion (which can't be solved on the ML or using spec
>> discussion) *or* require a lot of user feedback. That way, appearing in
>> the general schedule is very helpful. This will require us to be a lot
>> stricter on what we accept there and what we don't -- we won't have
>> space for courtesy sessions anymore, and traditional/unnecessary
>> sessions (like my traditional "release schedule" one) should just move
>> to the mailing-list.
>>
>> Day 4. Contributors meetups
>>
>> On the last day, we could try to split the space so that we can conduct
>> parallel midcycle-meetup-like contributors gatherings, with no time
>> boundaries and an open agenda. Large projects could get a full day,
>> smaller projects would get half a day (but could continue the discussion
>> in a local bar). Ideally that meetup would end with some alignment on
>> release goals, but the idea is to make the best of that time together to
>> solve the issues you have. Friday would finish with the design summit
>> feedback session, for those who are still around.
>>
>>
>> I think this proposal makes the best use of our setup: discuss clear
>> cross-project issues, address key specific topics which need
>> face-to-face time and broader attendance, then try to replicate the
>> success of midcycle meetup-like open unscheduled time to discuss
>> whatever is hot at this point.
>>
>> There are still details to work out (is it possible to split the space,
>> should we use the usual design summit CFP website to organize the
>> "scheduled" time...), but I would first like to have your feedback on
>> this format. Also if you have alternative proposals that would make a
>> better use of our 4 days, let me know.
>>
>> Cheers,
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit reloaded

2014-08-28 Thread Jay Pipes

On 08/28/2014 02:21 PM, Sean Dague wrote:

On 08/28/2014 01:58 PM, Jay Pipes wrote:

On 08/27/2014 11:34 AM, Doug Hellmann wrote:


On Aug 27, 2014, at 8:51 AM, Thierry Carrez 
wrote:


Hi everyone,

I've been thinking about what changes we can bring to the Design
Summit format to make it more productive. I've heard the feedback
from the mid-cycle meetups and would like to apply some of those
ideas for Paris, within the constraints we have (already booked
space and time). Here is something we could do:

Day 1. Cross-project sessions / incubated projects / other
projects

I think that worked well last time. 3 parallel rooms where we can
address top cross-project questions, discuss the results of the
various experiments we conducted during juno. Don't hesitate to
schedule 2 slots for discussions, so that we have time to come to
the bottom of those issues. Incubated projects (and maybe "other"
projects, if space allows) occupy the remaining space on day 1, and
could occupy "pods" on the other days.


If anything, I’d like to have fewer cross-project tracks running
simultaneously. Depending on which are proposed, maybe we can make
that happen. On the other hand, cross-project issues is a big theme
right now so maybe we should consider devoting more than a day to
dealing with them.


I agree with Doug here. I'd almost say having a single cross-project
room, with serialized content would be better than 3 separate
cross-project tracks. By nature, the cross-project sessions will attract
developers that work or are interested in a set of projects that looks
like a big Venn diagram. By having 3 separate cross-project tracks, we
would increase the likelihood that developers would once more have to
choose among simultaneous sessions that they have equal interest in. For
Infra and QA folks, this likelihood is even greater...

I think I'd prefer a single cross-project track on the first day.


So the fallout of that is there will be 6 or 7 cross-project slots for
the design summit. Maybe that's the right mix if the TC does a good job
picking the top 5 things we want accomplished from a cross project
standpoint during the cycle. But it's going to have to be a pretty
directed pick. I think last time we had 21 slots, and with a couple of
doubling up that gave 19 sessions. (about 30 - 35 proposals for that
slot set).


I'm not sure that would be a bad thing :)

I think one of the reasons the mid-cycles have been successful is that 
they have adequately limited the scope of discussions. I think that by 
doing our homework, fully vetting and voting on cross-project sessions, 
and being OK with saying "No, not this time," we will be more 
productive than if we had 20+ cross-project sessions.


Just my two cents, though..

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Susanne Balle
Brandon

I am not sure how ready that nova feature is for general use and have asked
our nova lead about that. He is on vacation but should be back by the start
of next week. I believe this is the right approach for us moving forward.

We cannot make it mandatory to run the two filters, but we can say in the
documentation that if these two filters aren't set, we cannot guarantee
anti-affinity or affinity.

The other way we can implement this is by using availability zones and host
aggregates. This is one technique we use to make sure we deploy our
in-cloud services in an HA model. This would also assume that the operator
has set up availability zones, which we cannot require.

http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/

Sahara currently uses the following filters to support host affinity,
probably because that work predates ServerGroups. I am not advocating the
use of those filters, just showing that we can document the feature and
leave it up to the operator to set things up to get the right behavior.

Regards

Susanne

Anti-affinity
One of the problems with Hadoop running on OpenStack is that there is no
ability to control where a machine is actually running. We cannot be sure
that two new virtual machines are started on different physical machines.
As a result, any replication within the cluster is not reliable because all
replicas may turn up on one physical machine. The anti-affinity feature
provides the ability to explicitly tell Sahara to run specified processes on
different compute nodes. This is especially useful for the Hadoop datanode
process, to make HDFS replicas reliable.

The Anti-Affinity feature requires certain scheduler filters to be enabled
in Nova. Edit your /etc/nova/nova.conf in the following way:

[DEFAULT]

...

scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters=DifferentHostFilter,SameHostFilter

This feature is supported by all plugins out of the box.
http://docs.openstack.org/developer/sahara/userdoc/features.html
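For illustration, the placement decision these anti-affinity filters make can be sketched in a few lines of Python. This is a simplified model under assumed data structures (hosts as strings, the server group as the set of hosts its members already occupy), not the actual Nova filter code:

```python
# Simplified sketch of an anti-affinity scheduler filter decision.
# NOT the real Nova implementation: hosts are plain strings and the
# server group is modeled as the set of hosts its members already use.

def anti_affinity_passes(candidate_host, hosts_with_group_members):
    """A host passes only if no member of the server group runs on it."""
    return candidate_host not in hosts_with_group_members

def pick_host(candidate_hosts, hosts_with_group_members):
    """Return the first host satisfying anti-affinity, or None."""
    for host in candidate_hosts:
        if anti_affinity_passes(host, hosts_with_group_members):
            return host
    return None

if __name__ == "__main__":
    hosts = ["compute1", "compute2", "compute3"]
    occupied = {"compute1", "compute2"}  # group members already placed here
    print(pick_host(hosts, occupied))    # -> compute3
```

For affinity the membership test simply inverts: the candidate host must already hold a group member. The real filters perform the same kind of membership check against the hosts reported for the server group's instances.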



On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan 
wrote:

> Nova scheduler has ServerGroupAffinityFilter and
> ServerGroupAntiAffinityFilter which does the colocation and apolocation
> for VMs.  I think this is something we've discussed before about taking
> advantage of nova's scheduling.  I need to verify that this will work
> with what we (RAX) plan to do, but I'd like to get everyone else's
> thoughts.  Also, if we do decide this works for everyone involved,
> should we make it mandatory that the nova-compute services are running
> these two filters?  I'm also trying to see if we can use this to also do
> our own colocation and apolocation on load balancers, but it looks like
> it will be a bit complex if it can even work.  Hopefully, I can have
> something definitive on that soon.
>
> Thanks,
> Brandon
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow

2014-08-28 Thread Kevin Benton
Yes, in theory all of the plugins should be removable from the core neutron
repo. So then it would only need to be responsible for the APIs, db models,
etc. However, IIRC there are no plans to move any reference plugins from
the tree.


On Thu, Aug 28, 2014 at 11:20 AM, Jeremy Stanley  wrote:

> On 2014-08-28 08:31:26 -0700 (-0700), Kevin Benton wrote:
> [...]
> > DVR completely changed the reference L3 service plugin, which
> > lives in the main tree.
> >
> > A well-defined, versioned internal API would not have helped any
> > of the issues I brought up.
> [...]
>
> Except, perhaps, insofar as that (in some ideal world) it might
> allow the reference L3 service plugin to be extracted from the main
> tree and developed within a separate source code repository with its
> own life cycle.
> --
> Jeremy Stanley
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-28 Thread Joe Gordon
On Thu, Aug 28, 2014 at 2:40 AM, Daniel P. Berrange 
wrote:

> On Thu, Aug 28, 2014 at 01:04:57AM +, Dugger, Donald D wrote:
> > I'll try and not whine about my pet project but I do think there
> > is a problem here.  For the Gantt project to split out the scheduler
> > there is a crucial BP that needs to be implemented (
> > https://review.openstack.org/#/c/89893/ ) and, unfortunately, the
> > BP has been rejected and we'll have to try again for Kilo.  My question
> > is did we do something wrong or is the process broken?
> >
> > Note that we originally proposed the BP on 4/23/14, went through 10
> > iterations to the final version on 7/25/14 and the final version got
> > three +1s and a +2 by 8/5.  Unfortunately, even after reaching out
> > to specific people, we didn't get the second +2, hence the rejection.
>
> I see at that it did not even get one +2 at the time of the feature
> proposal approval freeze. You then successfully requested an exception
> and after a couple more minor updates got a +2 from John but from no
> one else.
>
> I do think this shows a flaw in our (core teams) handling of the
> blueprint. When we agreed upon the freeze exception, that should
> have included a firm commitment for a least 2 core devs to review
> it. IOW I think it is reasonable to say that either your feature
> should have ended up with two +2s and +A, or you should have seen
> a -1 from another core dev. I don't think it is acceptable that
> after the exception was approved it only got feedback from one
> core dev.   I actually thought that when approving exceptions, we
> always got 2 cores to agree to review the item to avoid this, so
> I'm not sure why we failed here.
>
> > I understand that reviews are a burden and very hard but it seems
> > wrong that a BP with multiple positive reviews and no negative
> > reviews is dropped because of what looks like indifference.  Given
> > that there is still time to review the actual code patches it seems
> > like there should be a simpler way to get a BP approved.  Without
> > an approved BP it's difficult to even start the coding process.
>

So the question "is the BP approval process broken?" doesn't have a simple
answer. There are definitely things we should change, but in this case I
think the process sort of worked. The problem you hit is we just don't have
enough people doing reviews. Your blueprint didn't get approved in part
because the ratio of reviews needed to reviewers is off. If we don't even
have enough bandwidth to approve this spec we certainly don't have enough
bandwidth to review the code associated with the spec.



> >
> > I see 2 possibilities here:
> >
> >
> > 1)  This is an isolated case specific to this BP.  If so,
> > there's no need to change the procedures but I would like to
> > know what we should be doing differently.  We got a +2 review
> > on 8/4 and then silence for 3 weeks.
> >
> > 2)  This is a process problem that other people encounter.
> > Maybe there are times when silence means assent.  Something
> > like a BP with multiple +1s and at least one +2 should
> > automatically be accepted if no one reviews it 2 weeks after
> > the +2 is given.
>
> My two thoughts are
>
>  - When we approve something for exception should actively monitor
>progress of review to ensure it gets the neccessary attention to
>either approve or reject it. It makes no sense to approve an
>exception and then let it lie silently waiting for weeks with no
>attention. I'd expect that any time exceptions are approved we
>should babysit them and actively review their status in the weekly
>meeting to ensure they are followed up on.
>
>  - Core reviewers should prioritize reviews of things which already
>have a +2 on them. I wrote about this in the context of code reviews
>last week, but all my points apply equally to spec reviews I believe.
>
>
> http://lists.openstack.org/pipermail/openstack-dev/2014-August/043657.html
>
> Also note that in Kilo the process will be slightly less heavyweight in
> that we're going to try allow some features changes into tree without
> first requiring a spec/blueprint to be written. I can't say offhand
> whether this particular feature would have qualified for the lighter
> process, but in general by reducing need for specs for the more trivial
> items, we'll have more time available for review of things which do
> require specs.
>

Under the proposed changes to the spec/blueprint process, this would still
need a spec.


>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Mistral] Workflow on-finish

2014-08-28 Thread W Chan
Is there an example somewhere that I can reference on how to define this
special task?  Thanks!
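
For what it's worth, the "special task" hack described below can be sketched
roughly like this in the v2 DSL (a hedged sketch only: the workflow/task names
and callback URL are invented, and the exact std.http parameter syntax was
still settling at the time):

```yaml
version: '2.0'

my_workflow:
  type: direct
  tasks:
    do_work:
      action: std.noop
      on-success:
        - notify_done

    # The "special task": notify an external endpoint once the real
    # work has finished.
    notify_done:
      action: std.http url='http://example.com/callback' method='POST'
```

The listener blueprints mentioned below would make this kind of inline
notification task unnecessary.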


On Wed, Aug 27, 2014 at 10:02 PM, Renat Akhmerov 
wrote:

> Right now, you can just include a special task into a workflow that, for
> example, sends an HTTP request to whatever you need to notify about
> workflow completion. Although, I see it rather as a hack (not so horrible
> though).
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
> On 28 Aug 2014, at 12:01, Renat Akhmerov  wrote:
>
> There are two blueprints that I supposed to use for this purpose:
> https://blueprints.launchpad.net/mistral/+spec/mistral-event-listeners-http
> https://blueprints.launchpad.net/mistral/+spec/mistral-event-listeners-amqp
>
> So my opinion:
>
>- This functionality should be orthogonal to what we configure in DSL.
>- The mechanism of listeners is more generic and would cover your
>requirement as a special case.
>- At this point, I see that we may want to implement a generic
>transport-agnostic listener mechanism internally (not that hard a task) and
>then implement the required transport-specific plugins to it.
>
>
> Inviting everyone to discussion.
>
> Thanks
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
> On 28 Aug 2014, at 06:17, W Chan  wrote:
>
> Renat,
>
> It will be helpful to perform a callback on completion of the async
> workflow.  Can we add on-finish to the workflow spec so that, when the workflow
> completes, it runs task(s) defined in the on-finish section of the spec?  This
> will allow the workflow author to define how the callback is to be done.
>
> Here's the bp link.
> https://blueprints.launchpad.net/mistral/+spec/mistral-workflow-on-finish
>
> Thanks.
> Winson
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit reloaded

2014-08-28 Thread Sean Dague
On 08/28/2014 01:58 PM, Jay Pipes wrote:
> On 08/27/2014 11:34 AM, Doug Hellmann wrote:
>>
>> On Aug 27, 2014, at 8:51 AM, Thierry Carrez 
>> wrote:
>>
>>> Hi everyone,
>>>
>>> I've been thinking about what changes we can bring to the Design
>>> Summit format to make it more productive. I've heard the feedback
>>> from the mid-cycle meetups and would like to apply some of those
>>> ideas for Paris, within the constraints we have (already booked
>>> space and time). Here is something we could do:
>>>
>>> Day 1. Cross-project sessions / incubated projects / other
>>> projects
>>>
>>> I think that worked well last time. 3 parallel rooms where we can
>>> address top cross-project questions, discuss the results of the
>>> various experiments we conducted during juno. Don't hesitate to
>>> schedule 2 slots for discussions, so that we have time to come to
>>> the bottom of those issues. Incubated projects (and maybe "other"
>>> projects, if space allows) occupy the remaining space on day 1, and
>>> could occupy "pods" on the other days.
>>
>> If anything, I’d like to have fewer cross-project tracks running
>> simultaneously. Depending on which are proposed, maybe we can make
>> that happen. On the other hand, cross-project issues is a big theme
>> right now so maybe we should consider devoting more than a day to
>> dealing with them.
> 
> I agree with Doug here. I'd almost say having a single cross-project
> room, with serialized content would be better than 3 separate
> cross-project tracks. By nature, the cross-project sessions will attract
> developers that work or are interested in a set of projects that looks
> like a big Venn diagram. By having 3 separate cross-project tracks, we
> would increase the likelihood that developers would once more have to
> choose among simultaneous sessions that they have equal interest in. For
> Infra and QA folks, this likelihood is even greater...
> 
> I think I'd prefer a single cross-project track on the first day.

So the fallout of that is there will be 6 or 7 cross-project slots for
the design summit. Maybe that's the right mix if the TC does a good job
picking the top 5 things we want accomplished from a cross project
standpoint during the cycle. But it's going to have to be a pretty
directed pick. I think last time we had 21 slots, and with a couple of them
doubled up that gave 19 sessions (about 30-35 proposals for that
slot set).

>>> Day 2 and Day 3. Scheduled sessions for various programs
>>>
>>> That's our traditional scheduled space. We'll have a 33% less
>>> slots available. So, rather than trying to cover all the scope, the
>>> idea would be to focus those sessions on specific issues which
>>> really require face-to-face discussion (which can't be solved on
>>> the ML or using spec discussion) *or* require a lot of user
>>> feedback. That way, appearing in the general schedule is very
>>> helpful. This will require us to be a lot stricter on what we
>>> accept there and what we don't -- we won't have space for courtesy
>>> sessions anymore, and traditional/unnecessary sessions (like my
>>> traditional "release schedule" one) should just move to the
>>> mailing-list.
>>
>> The message I’m getting from this change in available space is that
>> we need to start thinking about and writing up ideas early, so teams
>> can figure out which upcoming specs need more discussion and which
>> don’t.
> 
> ++
> 
> Also, I think as a community we need to get much better about saying
> "No" for certain things. No to sessions that don't have much specific
> details to them. No to blueprints that don't add much functionality that
> cannot be widely used or taken advantage of. No to specs that don't have
> a narrow-enough scope, etc.
> 
> I also think we need to be better at saying "Yes" to other things,
> though... but that's a different thread ;)
> 
>>> Day 4. Contributors meetups
>>>
>>> On the last day, we could try to split the space so that we can
>>> conduct parallel midcycle-meetup-like contributors gatherings, with
>>> no time boundaries and an open agenda. Large projects could get a
>>> full day, smaller projects would get half a day (but could continue
>>> the discussion in a local bar). Ideally that meetup would end with
>>> some alignment on release goals, but the idea is to make the best
>>> of that time together to solve the issues you have. Friday would
>>> finish with the design summit feedback session, for those who are
>>> still around.
>>
>> This is a good compromise between needing to allow folks to move
>> around between tracks (including speaking at the conference) and
>> having a large block of unstructured time for deep dives.
> 
> Agreed.
> 
> Best,
> -jay
> 
>>> I think this proposal makes the best use of our setup: discuss
>>> clear cross-project issues, address key specific topics which need
>>> face-to-face time and broader attendance, then try to replicate
>>> the success of midcycle meetup-like open unscheduled time to
>>> discuss whatever is hot a

[openstack-dev] Launch of a instance failed

2014-08-28 Thread Nikesh Kumar Mahalka
Hi, I am deploying DevStack Juno on an Ubuntu 14.04 server virtual machine.
After installation, when I try to launch an instance, it fails.
I am getting a "host not found" error.
Below is part of /opt/stack/logs/screen/screen-n-cond.log


Below is the error:
2014-08-28 23:44:59.448 ERROR nova.scheduler.utils
[req-6f220296-8ec2-4e49-821d-0d69d3acc315 admin admin] [instance:
7f105394-414c-4458-b1a1-6f37d6cff87a] Error from last host:
juno-devstack-server (node juno-devstack-server): [u'Traceback (most recent
call last):\n', u'  File "/opt/stack/nova/nova/compute/manager.py", line
1932, in do_build_and_run_instance\nfilter_properties)\n', u'  File
"/opt/stack/nova/nova/compute/manager.py", line 2067, in
_build_and_run_instance\ninstance_uuid=instance.uuid,
reason=six.text_type(e))\n', u'RescheduledException: Build of instance
7f105394-414c-4458-b1a1-6f37d6cff87a was re-scheduled: not all arguments
converted during string formatting\n']
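
(For anyone hitting the same trace: "not all arguments converted during string
formatting" is the generic TypeError Python raises when a %-format string gets
more arguments than it has placeholders, so here it is masking the real
scheduling error. A minimal reproduction, unrelated to Nova's actual code:)

```python
# Reproduce the TypeError buried in the traceback above: a %-format
# string with fewer placeholders than supplied arguments.
try:
    "instance %s failed" % ("abc123", "extra-arg")
except TypeError as exc:
    # Prints: not all arguments converted during string formatting
    print(exc)
```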







Regards
Nikesh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow

2014-08-28 Thread Jeremy Stanley
On 2014-08-28 08:31:26 -0700 (-0700), Kevin Benton wrote:
[...]
> DVR completely changed the reference L3 service plugin, which
> lives in the main tree. 
> 
> A well-defined, versioned internal API would not have helped any
> of the issues I brought up.
[...]

Except, perhaps, insofar as that (in some ideal world) it might
allow the reference L3 service plugin to be extracted from the main
tree and developed within a separate source code repository with its
own life cycle.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] gate debugging

2014-08-28 Thread Joe Gordon
On Thu, Aug 28, 2014 at 5:17 AM, Thierry Carrez 
wrote:

> David Kranz wrote:
> > On 08/27/2014 03:43 PM, Sean Dague wrote:
> >> On 08/27/2014 03:33 PM, David Kranz wrote:
> >>> Race conditions are what makes debugging very hard. I think we are in
> >>> the process of experimenting with such an idea: asymetric gating by
> >>> moving functional tests to projects, making them deeper and more
> >>> extensive, and gating against their own projects. The result should be
> >>> that when a code change is made, we will spend much more time running
> >>> tests of code that is most likely to be growing a race bug from the
> >>> change. Of course there is a risk that we will impair integration
> >>> testing and we will have to be vigilant about that. One mitigating
> >>> factor is that if cross-project interaction uses apis (official or not)
> >>> that are well tested by the functional tests, there is less risk that a
> >>> bug will only show up only when those apis are used by another project.
> >>
> >> So, sorry, this is really not about systemic changes (we're running
> >> those in parallel), but more about skills transfer in people getting
> >> engaged. Because we need both. I guess that's the danger of breaking the
> >> thread is apparently I lost part of the context.
> >>
> > I agree we need both. I made the comment because if we can make gate
> > debugging less daunting
> > then less skill will be needed and I think that is key. Honestly, I am
> > not sure the full skill you have can be transferred. It was gained
> > partly through learning in simpler times.
>
> I think we could develop tools and visualizations that would help the
> debugging tasks. We could make those tasks more visible, and therefore
> more appealing to the brave souls that step up to tackle them. Sean and
> Joe did a ton of work improving the raw data, deriving graphs from it,
> highlighting log syntax or adding helpful Apache footers. But those days
> they spend so much time fixing the issues themselves, they can't
> continue on improving those tools.
>

Some tooling improvements I would like to do but probably don't have the
time for:

* Add the ability to filter http://status.openstack.org/elastic-recheck/ by
project. So a neutron dev can see the list of bugs that are neutron related
* Make the list of open reviews on
http://status.openstack.org/elastic-recheck/ easier to find
* Create an up to date diagram of what OpenStack looks like when running,
how services interact etc.
http://docs.openstack.org/training-guides/content/figures/5/figures/image31.jpg
 and
http://docs.openstack.org/admin-guide-cloud/content/figures/2/figures/openstack-arch-havana-logical-v1.jpg
are out of date
* Make http://jogo.github.io/gate easier to understand. This is what I
check to see the health of the gate.
* Build a request-id tracker for logs. Make it easier to find the logs for
a given request-id across multiple services (nova-api, nova-scheduler, etc.)


>
> And that's part of where the gate burnout comes from: spending so much
> time on the issues themselves that you can no longer work on preventing
> them from happening, or making the job of handling the issues easier, or
> documenting/mentoring other people so that they can do it in your place.
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] gate debugging

2014-08-28 Thread Sean Dague
On 08/28/2014 02:07 PM, Joe Gordon wrote:
> 
> 
> 
> On Thu, Aug 28, 2014 at 10:17 AM, Sean Dague  > wrote:
> 
> On 08/28/2014 12:48 PM, Doug Hellmann wrote:
> >
> > On Aug 27, 2014, at 5:56 PM, Sean Dague  > wrote:
> >
> >> On 08/27/2014 05:27 PM, Doug Hellmann wrote:
> >>>
> >>> On Aug 27, 2014, at 2:54 PM, Sean Dague  > wrote:
> >>>
>  Note: thread intentionally broken, this is really a different
> topic.
> 
>  On 08/27/2014 02:30 PM, Doug Hellmann wrote:>
> > On Aug 27, 2014, at 1:30 PM, Chris Dent  > wrote:
> >
> >> On Wed, 27 Aug 2014, Doug Hellmann wrote:
> >>
> >>> I have found it immensely helpful, for example, to have a
> written set
> >>> of the steps involved in creating a new library, from
> importing the
> >>> git repo all the way through to making it available to other
> projects.
> >>> Without those instructions, it would have been much harder
> to split up
> >>> the work. The team would have had to train each other by word of
> >>> mouth, and we would have had constant issues with inconsistent
> >>> approaches triggering different failures. The time we spent
> building
> >>> and verifying the instructions has paid off to the extent
> that we even
> >>> had one developer not on the core team handle a graduation
> for us.
> >>
> >> +many more for the relatively simple act of just writing
> stuff down
> >
> > "Write it down.” is my theme for Kilo.
> 
>  I definitely get the sentiment. "Write it down" is also hard
> when you
>  are talking about things that do change around quite a bit.
> OpenStack as
>  a whole sees 250 - 500 changes a week, so the interaction
> pattern moves
>  around enough that it's really easy to have *very* stale
> information
>  written down. Stale information is even more dangerous than no
>  information some times, as it takes people down very wrong paths.
> 
>  I think we break down on communication when we get into a
> conversation
>  of "I want to learn gate debugging" because I don't quite know
> what that
>  means, or where the starting point of understanding is. So those
>  intentions are well meaning, but tend to stall. The reality was
> there
>  was no road map for those of us that dive in, it's just
> understanding
>  how OpenStack holds together as a whole and where some of the
> high risk
>  parts are. And a lot of that comes with days staring at code
> and logs
>  until patterns emerge.
> 
>  Maybe if we can get smaller more targeted questions, we can
> help folks
>  better? I'm personally a big fan of answering the targeted
> questions
>  because then I also know that the time spent exposing that
> information
>  was directly useful.
> 
>  I'm more than happy to mentor folks. But I just end up finding
> the "I
>  want to learn" at the generic level something that's hard to
> grasp onto
>  or figure out how we turn it into action. I'd love to hear more
> ideas
>  from folks about ways we might do that better.
> >>>
> >>> You and a few others have developed an expertise in this
> important skill. I am so far away from that level of expertise that
> I don’t know the questions to ask. More often than not I start with
> the console log, find something that looks significant, spend an
> hour or so tracking it down, and then have someone tell me that it
> is a red herring and the issue is really some other thing that they
> figured out very quickly by looking at a file I never got to.
> >>>
> >>> I guess what I’m looking for is some help with the patterns.
> What made you think to look in one log file versus another? Some of
> these jobs save a zillion little files, which ones are actually
> useful? What tools are you using to correlate log entries across all
> of those files? Are you doing it by hand? Is logstash useful for
> that, or is that more useful for finding multiple occurrences of the
> same issue?
> >>>
> >>> I realize there’s not a way to write a how-to that will live
> forever. Maybe one way to deal with that is to write up the research
> done on bugs soon after they are solved, and publish that to the
> mailing list. Even the retrospective view is useful because we can
> all learn from it without having to live through it. The mailing
> list is a fairly ephemeral medium, and something very old in the
> archives is understood to have a good chance of being out

Re: [openstack-dev] [all] gate debugging

2014-08-28 Thread Sean Dague
On 08/28/2014 01:48 PM, Doug Hellmann wrote:
> 
> On Aug 28, 2014, at 1:17 PM, Sean Dague  wrote:
> 
>> On 08/28/2014 12:48 PM, Doug Hellmann wrote:
>>>
>>> On Aug 27, 2014, at 5:56 PM, Sean Dague  wrote:
>>>
 On 08/27/2014 05:27 PM, Doug Hellmann wrote:
>
> On Aug 27, 2014, at 2:54 PM, Sean Dague  wrote:
>
>> Note: thread intentionally broken, this is really a different topic.
>>
>> On 08/27/2014 02:30 PM, Doug Hellmann wrote:>
>>> On Aug 27, 2014, at 1:30 PM, Chris Dent  wrote:
>>>
 On Wed, 27 Aug 2014, Doug Hellmann wrote:

> I have found it immensely helpful, for example, to have a written set
> of the steps involved in creating a new library, from importing the
> git repo all the way through to making it available to other projects.
> Without those instructions, it would have been much harder to split up
> the work. The team would have had to train each other by word of
> mouth, and we would have had constant issues with inconsistent
> approaches triggering different failures. The time we spent building
> and verifying the instructions has paid off to the extent that we even
> had one developer not on the core team handle a graduation for us.

 +many more for the relatively simple act of just writing stuff down
>>>
>>> "Write it down.” is my theme for Kilo.
>>
>> I definitely get the sentiment. "Write it down" is also hard when you
>> are talking about things that do change around quite a bit. OpenStack as
>> a whole sees 250 - 500 changes a week, so the interaction pattern moves
>> around enough that it's really easy to have *very* stale information
>> written down. Stale information is even more dangerous than no
>> information some times, as it takes people down very wrong paths.
>>
>> I think we break down on communication when we get into a conversation
>> of "I want to learn gate debugging" because I don't quite know what that
>> means, or where the starting point of understanding is. So those
>> intentions are well meaning, but tend to stall. The reality was there
>> was no road map for those of us that dive in, it's just understanding
>> how OpenStack holds together as a whole and where some of the high risk
>> parts are. And a lot of that comes with days staring at code and logs
>> until patterns emerge.
>>
>> Maybe if we can get smaller more targeted questions, we can help folks
>> better? I'm personally a big fan of answering the targeted questions
>> because then I also know that the time spent exposing that information
>> was directly useful.
>>
>> I'm more than happy to mentor folks. But I just end up finding the "I
>> want to learn" at the generic level something that's hard to grasp onto
>> or figure out how we turn it into action. I'd love to hear more ideas
>> from folks about ways we might do that better.
>
> You and a few others have developed an expertise in this important skill. 
> I am so far away from that level of expertise that I don’t know the 
> questions to ask. More often than not I start with the console log, find 
> something that looks significant, spend an hour or so tracking it down, 
> and then have someone tell me that it is a red herring and the issue is 
> really some other thing that they figured out very quickly by looking at 
> a file I never got to.
>
> I guess what I’m looking for is some help with the patterns. What made 
> you think to look in one log file versus another? Some of these jobs save 
> a zillion little files, which ones are actually useful? What tools are 
> you using to correlate log entries across all of those files? Are you 
> doing it by hand? Is logstash useful for that, or is that more useful for 
> finding multiple occurrences of the same issue?
>
> I realize there’s not a way to write a how-to that will live forever. 
> Maybe one way to deal with that is to write up the research done on bugs 
> soon after they are solved, and publish that to the mailing list. Even 
> the retrospective view is useful because we can all learn from it without 
> having to live through it. The mailing list is a fairly ephemeral medium, 
> and something very old in the archives is understood to have a good 
> chance of being out of date so we don’t have to keep adding disclaimers.

 Sure. Matt's actually working up a blog post describing the thing he
 nailed earlier in the week.
>>>
>>> Yes, I appreciate that both of you are responding to my questions. :-)
>>>
>>> I have some more specific questions/comments below. Please take all of this 
>>> in the spirit of trying to make this process easier by pointing out where 
>>> I’ve found it hard, and not just me complaining. I’d like to work on fixing 
>>> any of 

[openstack-dev] [CEILOMETER] Trending Alarm

2014-08-28 Thread Henrique Truta


Hello, everyone!

I want to have an alarm that is triggered by some kind of trend. For 
example, an alarm that is triggered when the CPU utilization is growing 
steadily (for example, has grown approximately 10% per 5 minutes, where 
the percentage and time window would be parameters, but then I would 
also evaluate more complex ways to compute trends). Is there any way to 
do this kind of task?


I took a brief look at the code and saw that new evaluators can be 
created. So, I thought about two possibilities: the former involves 
creating a new Evaluator that considers a given window size, and the 
latter involves adding a "change rate" comparator, which would make it 
possible to set the growth rate as the threshold.
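
As a rough illustration of the window-based idea (this is only a sketch of the
arithmetic, not Ceilometer's evaluator API; the function and parameter names
are invented):

```python
def trend_exceeds(samples, rate_threshold):
    """Return True if the average relative growth per sample interval
    across the window exceeds rate_threshold (e.g. 0.10 for 10%)."""
    if len(samples) < 2 or samples[0] == 0:
        return False  # not enough data, or no baseline to grow from
    total_growth = (samples[-1] - samples[0]) / float(samples[0])
    per_interval = total_growth / (len(samples) - 1)
    return per_interval > rate_threshold

# CPU utilization samples taken every 5 minutes:
print(trend_exceeds([50.0, 60.0, 72.0], 0.10))  # True  (~22% per interval)
print(trend_exceeds([50.0, 51.0, 52.0], 0.10))  # False (~2% per interval)
```

A real evaluator would presumably pull the samples from the statistics API for
the configured window and could swap in a least-squares slope instead of the
endpoint ratio used here.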


What do you think about it?

Best Regards
--
Henrique Truta

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] gate debugging

2014-08-28 Thread Dean Troyer
On Thu, Aug 28, 2014 at 12:44 PM, Doug Hellmann 
wrote:

> I usually use the functions for editing ENABLED_SERVICES. Is it still
> common to edit the variable directly?
>
Not generally.  It was looking at log files to see what was/was not
enabled where I started to think about that.  The default is already pretty
long; however, having full words might make the scan easier than x- does.
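
(For reference, the functions in question are DevStack's enable_service /
disable_service helpers, used from local.conf rather than editing
ENABLED_SERVICES by hand; the particular services toggled here are just an
example:)

```shell
# local.conf fragment: the localrc section is sourced as shell, so the
# DevStack helper functions are available here.
[[local|localrc]]
disable_service n-net
enable_service q-svc q-agt q-dhcp q-l3 q-meta
enable_service tempest
```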

> I've started scratching out a plan to migrate to full names and will get
> it into an Etherpad soon.  Also simplifying the log file configuration vars
> and locations.
>
> https://etherpad.openstack.org/p/devstack-logging


> Cool. Let us know if we can make any changes in oslo.log to simplify that
> work.
>

I don't think oslo.log is involved; this is all of the log files that
DevStack generates or captures: screen windows and the stack.sh run itself.
 There might be room to optimize if we're capturing something that is also
being logged elsewhere, but when using screen people seem to want it all in
a window (see horizon and recent keystone windows) anyway.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] gate debugging

2014-08-28 Thread Joe Gordon
On Thu, Aug 28, 2014 at 10:17 AM, Sean Dague  wrote:

> On 08/28/2014 12:48 PM, Doug Hellmann wrote:
> >
> > On Aug 27, 2014, at 5:56 PM, Sean Dague  wrote:
> >
> >> On 08/27/2014 05:27 PM, Doug Hellmann wrote:
> >>>
> >>> On Aug 27, 2014, at 2:54 PM, Sean Dague  wrote:
> >>>
>  Note: thread intentionally broken, this is really a different topic.
> 
>  On 08/27/2014 02:30 PM, Doug Hellmann wrote:>
> > On Aug 27, 2014, at 1:30 PM, Chris Dent  wrote:
> >
> >> On Wed, 27 Aug 2014, Doug Hellmann wrote:
> >>
> >>> I have found it immensely helpful, for example, to have a written
> set
> >>> of the steps involved in creating a new library, from importing the
> >>> git repo all the way through to making it available to other
> projects.
> >>> Without those instructions, it would have been much harder to
> split up
> >>> the work. The team would have had to train each other by word of
> >>> mouth, and we would have had constant issues with inconsistent
> >>> approaches triggering different failures. The time we spent
> building
> >>> and verifying the instructions has paid off to the extent that we
> even
> >>> had one developer not on the core team handle a graduation for us.
> >>
> >> +many more for the relatively simple act of just writing stuff down
> >
> > "Write it down.” is my theme for Kilo.
> 
>  I definitely get the sentiment. "Write it down" is also hard when you
>  are talking about things that do change around quite a bit. OpenStack
> as
>  a whole sees 250 - 500 changes a week, so the interaction pattern
> moves
>  around enough that it's really easy to have *very* stale information
>  written down. Stale information is even more dangerous than no
>  information some times, as it takes people down very wrong paths.
> 
>  I think we break down on communication when we get into a conversation
>  of "I want to learn gate debugging" because I don't quite know what
> that
>  means, or where the starting point of understanding is. So those
>  intentions are well meaning, but tend to stall. The reality was there
>  was no road map for those of us that dive in, it's just understanding
>  how OpenStack holds together as a whole and where some of the high
> risk
>  parts are. And a lot of that comes with days staring at code and logs
>  until patterns emerge.
> 
>  Maybe if we can get smaller more targeted questions, we can help folks
>  better? I'm personally a big fan of answering the targeted questions
>  because then I also know that the time spent exposing that information
>  was directly useful.
> 
>  I'm more than happy to mentor folks. But I just end up finding the "I
>  want to learn" at the generic level something that's hard to grasp
> onto
>  or figure out how we turn it into action. I'd love to hear more ideas
>  from folks about ways we might do that better.
> >>>
> >>> You and a few others have developed an expertise in this important
> skill. I am so far away from that level of expertise that I don’t know the
> questions to ask. More often than not I start with the console log, find
> something that looks significant, spend an hour or so tracking it down, and
> then have someone tell me that it is a red herring and the issue is really
> some other thing that they figured out very quickly by looking at a file I
> never got to.
> >>>
> >>> I guess what I’m looking for is some help with the patterns. What made
> you think to look in one log file versus another? Some of these jobs save a
> zillion little files, which ones are actually useful? What tools are you
> using to correlate log entries across all of those files? Are you doing it
> by hand? Is logstash useful for that, or is that more useful for finding
> multiple occurrences of the same issue?
> >>>
> >>> I realize there’s not a way to write a how-to that will live forever.
> Maybe one way to deal with that is to write up the research done on bugs
> soon after they are solved, and publish that to the mailing list. Even the
> retrospective view is useful because we can all learn from it without
> having to live through it. The mailing list is a fairly ephemeral medium,
> and something very old in the archives is understood to have a good chance
> of being out of date so we don’t have to keep adding disclaimers.
> >>
> >> Sure. Matt's actually working up a blog post describing the thing he
> >> nailed earlier in the week.
> >
> > Yes, I appreciate that both of you are responding to my questions. :-)
> >
> > I have some more specific questions/comments below. Please take all of
> this in the spirit of trying to make this process easier by pointing out
> where I’ve found it hard, and not just me complaining. I’d like to work on
> fixing any of these things that can be fixed, by writing or reviewing
> patches for early in kilo.
> >
> >>
> >> Here is my off the cuff

Re: [openstack-dev] [all] Design Summit reloaded

2014-08-28 Thread Jay Pipes

On 08/27/2014 11:34 AM, Doug Hellmann wrote:


On Aug 27, 2014, at 8:51 AM, Thierry Carrez 
wrote:


Hi everyone,

I've been thinking about what changes we can bring to the Design
Summit format to make it more productive. I've heard the feedback
from the mid-cycle meetups and would like to apply some of those
ideas for Paris, within the constraints we have (already booked
space and time). Here is something we could do:

Day 1. Cross-project sessions / incubated projects / other
projects

I think that worked well last time. 3 parallel rooms where we can
address top cross-project questions, discuss the results of the
various experiments we conducted during juno. Don't hesitate to
schedule 2 slots for discussions, so that we have time to come to
the bottom of those issues. Incubated projects (and maybe "other"
projects, if space allows) occupy the remaining space on day 1, and
could occupy "pods" on the other days.


If anything, I’d like to have fewer cross-project tracks running
simultaneously. Depending on which are proposed, maybe we can make
that happen. On the other hand, cross-project issues is a big theme
right now so maybe we should consider devoting more than a day to
dealing with them.


I agree with Doug here. I'd almost say having a single cross-project 
room, with serialized content would be better than 3 separate 
cross-project tracks. By nature, the cross-project sessions will attract 
developers that work or are interested in a set of projects that looks 
like a big Venn diagram. By having 3 separate cross-project tracks, we 
would increase the likelihood that developers would once more have to 
choose among simultaneous sessions that they have equal interest in. For 
Infra and QA folks, this likelihood is even greater...


I think I'd prefer a single cross-project track on the first day.


Day 2 and Day 3. Scheduled sessions for various programs

That's our traditional scheduled space. We'll have a 33% less
slots available. So, rather than trying to cover all the scope, the
idea would be to focus those sessions on specific issues which
really require face-to-face discussion (which can't be solved on
the ML or using spec discussion) *or* require a lot of user
feedback. That way, appearing in the general schedule is very
helpful. This will require us to be a lot stricter on what we
accept there and what we don't -- we won't have space for courtesy
sessions anymore, and traditional/unnecessary sessions (like my
traditional "release schedule" one) should just move to the
mailing-list.


The message I’m getting from this change in available space is that
we need to start thinking about and writing up ideas early, so teams
can figure out which upcoming specs need more discussion and which
don’t.


++

Also, I think as a community we need to get much better about saying 
"No" to certain things. No to sessions that don't have much specific 
detail to them. No to blueprints whose functionality cannot be widely 
used or taken advantage of. No to specs that don't have 
a narrow-enough scope, etc.


I also think we need to be better at saying "Yes" to other things, 
though... but that's a different thread ;)



Day 4. Contributors meetups

On the last day, we could try to split the space so that we can
conduct parallel midcycle-meetup-like contributors gatherings, with
no time boundaries and an open agenda. Large projects could get a
full day, smaller projects would get half a day (but could continue
the discussion in a local bar). Ideally that meetup would end with
some alignment on release goals, but the idea is to make the best
of that time together to solve the issues you have. Friday would
finish with the design summit feedback session, for those who are
still around.


This is a good compromise between needing to allow folks to move
around between tracks (including speaking at the conference) and
having a large block of unstructured time for deep dives.


Agreed.

Best,
-jay


I think this proposal makes the best use of our setup: discuss
clear cross-project issues, address key specific topics which need
face-to-face time and broader attendance, then try to replicate
the success of midcycle meetup-like open unscheduled time to
discuss whatever is hot at this point.

There are still details to work out (is it possible to split the
space, should we use the usual design summit CFP website to
organize the "scheduled" time...), but I would first like to have
your feedback on this format. Also if you have alternative
proposals that would make a better use of our 4 days, let me know.

Cheers,

-- Thierry Carrez (ttx)

___ OpenStack-dev
mailing list OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___ OpenStack-dev mailing
list OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




_

Re: [openstack-dev] [all] gate debugging

2014-08-28 Thread Doug Hellmann

On Aug 28, 2014, at 1:17 PM, Sean Dague  wrote:

> On 08/28/2014 12:48 PM, Doug Hellmann wrote:
>> 
>> On Aug 27, 2014, at 5:56 PM, Sean Dague  wrote:
>> 
>>> On 08/27/2014 05:27 PM, Doug Hellmann wrote:
 
 On Aug 27, 2014, at 2:54 PM, Sean Dague  wrote:
 
> Note: thread intentionally broken, this is really a different topic.
> 
> On 08/27/2014 02:30 PM, Doug Hellmann wrote:>
>> On Aug 27, 2014, at 1:30 PM, Chris Dent  wrote:
>> 
>>> On Wed, 27 Aug 2014, Doug Hellmann wrote:
>>> 
 I have found it immensely helpful, for example, to have a written set
 of the steps involved in creating a new library, from importing the
 git repo all the way through to making it available to other projects.
 Without those instructions, it would have been much harder to split up
 the work. The team would have had to train each other by word of
 mouth, and we would have had constant issues with inconsistent
 approaches triggering different failures. The time we spent building
 and verifying the instructions has paid off to the extent that we even
 had one developer not on the core team handle a graduation for us.
>>> 
>>> +many more for the relatively simple act of just writing stuff down
>> 
>> "Write it down.” is my theme for Kilo.
> 
> I definitely get the sentiment. "Write it down" is also hard when you
> are talking about things that do change around quite a bit. OpenStack as
> a whole sees 250 - 500 changes a week, so the interaction pattern moves
> around enough that it's really easy to have *very* stale information
> written down. Stale information is even more dangerous than no
> information some times, as it takes people down very wrong paths.
> 
> I think we break down on communication when we get into a conversation
> of "I want to learn gate debugging" because I don't quite know what that
> means, or where the starting point of understanding is. So those
> intentions are well meaning, but tend to stall. The reality was there
> was no road map for those of us that dive in, it's just understanding
> how OpenStack holds together as a whole and where some of the high risk
> parts are. And a lot of that comes with days staring at code and logs
> until patterns emerge.
> 
> Maybe if we can get smaller more targeted questions, we can help folks
> better? I'm personally a big fan of answering the targeted questions
> because then I also know that the time spent exposing that information
> was directly useful.
> 
> I'm more than happy to mentor folks. But I just end up finding the "I
> want to learn" at the generic level something that's hard to grasp onto
> or figure out how we turn it into action. I'd love to hear more ideas
> from folks about ways we might do that better.
 
 You and a few others have developed an expertise in this important skill. 
 I am so far away from that level of expertise that I don’t know the 
 questions to ask. More often than not I start with the console log, find 
 something that looks significant, spend an hour or so tracking it down, 
 and then have someone tell me that it is a red herring and the issue is 
 really some other thing that they figured out very quickly by looking at a 
 file I never got to.
 
 I guess what I’m looking for is some help with the patterns. What made you 
 think to look in one log file versus another? Some of these jobs save a 
 zillion little files, which ones are actually useful? What tools are you 
 using to correlate log entries across all of those files? Are you doing it 
 by hand? Is logstash useful for that, or is that more useful for finding 
 multiple occurrences of the same issue?
 
 I realize there’s not a way to write a how-to that will live forever. 
 Maybe one way to deal with that is to write up the research done on bugs 
 soon after they are solved, and publish that to the mailing list. Even the 
 retrospective view is useful because we can all learn from it without 
 having to live through it. The mailing list is a fairly ephemeral medium, 
 and something very old in the archives is understood to have a good chance 
 of being out of date so we don’t have to keep adding disclaimers.
>>> 
>>> Sure. Matt's actually working up a blog post describing the thing he
>>> nailed earlier in the week.
>> 
>> Yes, I appreciate that both of you are responding to my questions. :-)
>> 
>> I have some more specific questions/comments below. Please take all of this 
>> in the spirit of trying to make this process easier by pointing out where 
>> I’ve found it hard, and not just me complaining. I’d like to work on fixing 
>> any of these things that can be fixed, by writing or reviewing patches for 
>> early in kilo.
>> 
>>> 
>>> Here is my off the cuff set of guidelines:

Re: [openstack-dev] [all] gate debugging

2014-08-28 Thread Doug Hellmann

On Aug 28, 2014, at 1:00 PM, Dean Troyer  wrote:

> On Thu, Aug 28, 2014 at 11:48 AM, Doug Hellmann  wrote:
> In my case, a neutron call failed. Most of the other services seem to have a 
> *-api.log file, but neutron doesn’t. It took a little while to find the 
> API-related messages in screen-q-svc.txt (I’m glad I’ve been around long 
> enough to know it used to be called “quantum”). I get that screen-n-*.txt 
> would collide with nova. Is it necessary to abbreviate those filenames at all?
> 
> Cleaning up the service names has been a background conversation for some 
> time and came up again last night in IRC.  I abbreviated them in the first 
> place to try to get them all in my screen status bar, so that was a while 
> ago...
> 
> I don't think the current ENABLED_SERVICES is scaling well and using full 
> names (nova-api, glance-registry, etc.) will make it even harder to read. Maybe 
> that is a misplaced concern? I do think, though, that making the logfile names 
> and locations more obvious in the gate results will be helpful.

I usually use the functions for editing ENABLED_SERVICES. Is it still common to 
edit the variable directly?
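For readers following along, the function-based approach Doug mentions can be sketched as below. This is a simplified stand-in for devstack's real enable_service/disable_service helpers (the actual implementations in functions-common are more thorough); it only illustrates how they edit the comma-separated ENABLED_SERVICES list so you never maintain it by hand:

```shell
# Simplified sketch of devstack's service-toggle helpers -- illustrative only.
ENABLED_SERVICES="g-api,g-reg,n-api,n-net"

enable_service() {
    local svc
    for svc in "$@"; do
        case ",$ENABLED_SERVICES," in
            *,"$svc",*) ;;  # already enabled, nothing to do
            *) ENABLED_SERVICES="$ENABLED_SERVICES,$svc" ;;
        esac
    done
}

disable_service() {
    local svc
    for svc in "$@"; do
        # pad with commas so every entry is delimited, drop the match, unpad
        ENABLED_SERVICES=",$ENABLED_SERVICES,"
        ENABLED_SERVICES="${ENABLED_SERVICES/,$svc,/,}"
        ENABLED_SERVICES="${ENABLED_SERVICES#,}"
        ENABLED_SERVICES="${ENABLED_SERVICES%,}"
    done
}

# e.g. swap nova-network for neutron services
disable_service n-net
enable_service q-svc q-agt
echo "$ENABLED_SERVICES"   # g-api,g-reg,n-api,q-svc,q-agt
```

In a real local.conf you would just call enable_service/disable_service in the localrc section and let devstack do the list surgery.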

> 
> I've started scratching out a plan to migrate to full names and will get it 
> into an Etherpad soon.  Also simplifying the log file configuration vars and 
> locations.

Cool. Let us know if we can make any changes in oslo.log to simplify that work.

Doug

> 
> dt
> 
> -- 
> 
> Dean Troyer
> dtro...@gmail.com
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Request to include AMQP 1.0 support in Juno-3

2014-08-28 Thread Doug Hellmann

On Aug 28, 2014, at 8:36 AM, Mark McLoughlin  wrote:

> On Thu, 2014-08-28 at 13:24 +0200, Flavio Percoco wrote:
>> On 08/27/2014 03:35 PM, Ken Giusti wrote:
>>> Hi All,
>>> 
>>> I believe Juno-3 is our last chance to get this feature [1] included
>>> into olso.messaging.
>>> 
>>> I honestly believe this patch is about as low risk as possible for a
>>> change that introduces a whole new transport into oslo.messaging.  The
>>> patch shouldn't affect the existing transports at all, and doesn't
>>> come into play unless the application specifically turns on the new
>>> 'amqp' transport, which won't be the case for existing applications.
>>> 
>>> The patch includes a set of functional tests which exercise all the
>>> messaging patterns, timeouts, and even broker failover. These tests do
>>> not mock out any part of the driver - a simple test broker is included
>>> which allows the full driver codepath to be executed and verified.
>>> 
>>> AFAIK, the only remaining technical block to adding this feature,
>>> aside from core reviews [2], is sufficient infrastructure test coverage.
>>> We discussed this a bit at the last design summit.  The root of the
>>> issue is that this feature is dependent on a platform-specific library
>>> (proton) that isn't in the base repos for most of the CI platforms.
>>> But it is available via EPEL, and the Apache QPID team is actively
>>> working towards getting the packages into Debian (a PPA is available
>>> in the meantime).
>>> 
>>> In the interim I've proposed a non-voting CI check job that will
>>> sanity check the new driver on EPEL based systems [3].  I'm also
>>> working towards adding devstack support [4], which won't be done in
>>> time for Juno but nevertheless I'm making it happen.
>>> 
>>> I fear that this feature's inclusion is stuck in a chicken/egg
>>> deadlock: the driver won't get merged until there is CI support, but
>>> the CI support won't run correctly (and probably won't get merged)
>>> until the driver is available.  The driver really has to be merged
>>> first, before I can continue with CI/devstack development.
>>> 
>>> [1] 
>>> https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation
>>> [2] https://review.openstack.org/#/c/75815/
>>> [3] https://review.openstack.org/#/c/115752/
>>> [4] https://review.openstack.org/#/c/109118/
>> 
>> 
>> Hi Ken,
>> 
>> Thanks a lot for your hard work here. As I stated in my last comment on
>> the driver's review, I think we should let this driver land and let
>> future patches improve it where/when needed.
>> 
>> I agreed on letting the driver land as-is based on the fact that there
>> are patches already submitted ready to enable the gates for this driver.
> 
> I feel bad that the driver has been in a pretty complete state for quite
> a while but hasn't received a whole lot of reviews. There's a lot of
> promise to this idea, so it would be ideal if we could unblock it.
> 
> One thing I've been meaning to do this cycle is add concrete advice for
> operators on the state of each driver. I think we'd be a lot more
> comfortable merging this in Juno if we could somehow make it clear to
> operators that it's experimental right now. My idea was:
> 
>  - Write up some notes which discusses the state of each driver e.g.
> 
>  - RabbitMQ - the default, used by the majority of OpenStack 
>deployments, perhaps list some of the known bugs, particularly 
>around HA.
> 
>  - Qpid - suitable for production, but used in a limited number of 
>deployments. Again, list known issues. Mention that it will 
>probably be removed with the amqp10 driver matures.
> 
>  - Proton/AMQP 1.0 - experimental, in active development, will
>support  multiple brokers and topologies, perhaps a pointer to a
>wiki page with the current TODO list
> 
>  - ZeroMQ - unmaintained and deprecated, planned for removal in
>Kilo
> 
>  - Propose this addition to the API docs and ask the operators list 
>for feedback
> 
>  - Propose a patch which adds a load-time deprecation warning to the 
>ZeroMQ driver
> 
>  - Include a load-time experimental warning in the proton driver
> 
> Thoughts on that?

By "API docs" do you mean the ones in the oslo.messaging repository? Would it 
be better to put this information in the operator’s guide?

Other than the question of where to put it, I definitely think this is the sort 
of guidance we should document, including through the logged warnings.
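As a sketch of the load-time warning idea, something like the following could live in the driver module. The class and module layout here are hypothetical, not oslo.messaging's actual structure; the real patch would of course go through review:

```python
# Sketch of a load-time deprecation warning for a messaging driver.
# "ZmqDriver" is an illustrative stand-in, not oslo.messaging's real class.
import warnings


class ZmqDriver(object):
    """Hypothetical stand-in for the ZeroMQ driver class."""

    def __init__(self, conf=None):
        # Emitted once per driver instantiation, i.e. at service start-up.
        warnings.warn(
            "The ZeroMQ driver is unmaintained and deprecated; it is "
            "planned for removal in Kilo.",
            DeprecationWarning,
            stacklevel=2,
        )
        self.conf = conf
```

An "experimental" warning for the proton driver would look the same with a different message; note DeprecationWarning is hidden by default in Python, so a logged warning at driver load may reach operators more reliably.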

Doug

> 
> (I understand the ZeroMQ situation needs further discussion - I don't
> think that's on-topic for the thread, I was just using it as example of
> what kind of advice we'd be giving in these docs)
> 
> Mark.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [UX] [Horizon] [Heat] Merlin project (formerly known as cross-project UI library for Heat/Mistral/Murano/Solum) plans for PoC and more

2014-08-28 Thread Drago Rosson
Timur,

Composable entities can be a real need for Heat if provider templates
(which allow templates to be used as a resource, with a template’s
parameters and outputs becoming properties and attributes, respectively)
are to be included in the app. A provider template resource, since it is a
template itself, would be composed of resources which would require a
composable entity. What is great about D3’s force graph is that it’s nodes
and links can be completely arbitrary - meaning they can be any JavaScript
object (including an SVG or DOM element). Additionally, the force graph
simulation updates x and y properties on those elements and calls a
user-defined “tick” function. The tick function can use the x and y
properties in any way it wants to do the *actual* update to the position
of each element. For example, this is how multiple foci can be implemented
[1]. Lots of other customization is available, including starting and
stopping the simulation, updating the node and link data, and having
per-element control of most (all?) properties such as charge or link
distance.

Composability can be achieved using SVG’s <g> elements to group multiple
graphical elements together. The tick function would need to update the
<g>’s transform attribute [2]. This is how it is done in my app since my
nodes and links are composed of icons, labels, backgrounds, etc. I think
that D3’s force graph is not a limiting factor since it itself does not
concern itself with graphics at all. Therefore, the question seems to be
whether D3 can do everything graphically that Merlin needs. D3 is not a
graphics API, but it does have support for graphical manipulation,
animations, and events. They have sufficed for me so far. Plus, D3 can do
these things without having to use its fancy data transformations so it
can be used as a low-level SVG library where necessary. D3 can do a lot
[3] so hopefully it could also do what Merlin needs.

You are in luck, because I have just now open-sourced Barricade! Check it
out [4]. I am working on getting documentation written for it but to see
some ways it can be used, look at its test suite [5].

[1] http://bl.ocks.org/mbostock/1021953
[2] node.attr("transform", function (d) {
return "translate(" + d.x + ", " + d.y + ")";
});
[3] http://christopheviau.com/d3list/
[4] https://github.com/rackerlabs/barricade

[5] https://github.com/rackerlabs/barricade/blob/master/test/barricade_Spec.js
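To make the tick/transform idea concrete, here is a standalone sketch of the update logic described above. The node data and function names are illustrative; in the browser, groupTransform would be passed to d3 via node.attr("transform", groupTransform) inside force.on("tick", ...), rather than applied through a callback as done here to keep the example runnable without a DOM:

```javascript
// Build the transform string that positions a whole <g> group
// (icon + label + background) from the force simulation's x/y.
function groupTransform(d) {
    return "translate(" + d.x + ", " + d.y + ")";
}

// Sketch of a tick handler: apply the transform to every node. The
// applyTransform callback stands in for d3's selection.attr().
function tick(nodes, applyTransform) {
    nodes.forEach(function (d) {
        applyTransform(d, groupTransform(d));
    });
}

// Simulated use outside the browser: record what would be set on each <g>.
var nodes = [{id: "server", x: 10, y: 20}, {id: "volume", x: 30, y: 40}];
var applied = {};
tick(nodes, function (d, transform) { applied[d.id] = transform; });
// applied.server is "translate(10, 20)", applied.volume "translate(30, 40)"
```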

On 8/28/14, 10:03 AM, "Timur Sufiev"  wrote:

>Hello, Drago!
>
>I'm extremely interested in learning more about your HOT graphical
>builder. The screenshots you had attached look gorgeous! Yours visual
>representation of Heat resources is much more concise and simple than
>I had drawn in Merlin PoC mock-ups [1]. On the other hand I have some
>doubts about whether D3.js is a good fit for the general-purpose UI toolkit
>Merlin aims to provide. Please don't get me wrong, D3.js is a great
>library which can do fantastic things with data - in case your
>data<->visualization use-case maps to the one of the facilities D3.js
>provides out of the box. In case it doesn't, there are 2 options:
>either change your approach to what should be visualized/how it should
>be visualized, or tweak some inner machinery of D3.js
>
>While bending the design towards the facilities of D3.js doesn't seem
>a viable choice, changing D3.js from inside can be painful too. AFAIK
>force-directed graph layout from D3.js doesn't provide the means to
>represent composable entities (which isn't a big problem for Heat, but
>is a very serious issue for Murano) out of the box. By composable I
>mean something like [2] - but with much more complex inner structure
>(imagine the Resource entity [3] having as its properties other
>Resource entities which are shown as simple rounded rectangles with
>labels on that picture, but are expanded into complex objects similar
>to [3] once the user, say, clicks on them). As far as I understand,
>you are visualizing that kind of composition via arrow links, but I'd
>like to try another design options (especially in case of Murano) and
>fear that D3.js will constrain me here. I've been thinking a bit about
>using more low-level SVG js-framework like Raphael.js - it doesn't
>offer most of the goodies D3.js does, but also it doesn't force me to
>create the design based on some data transformations in a way that
>D3.js does, providing the good old procedural API instead. Of course,
>I may be wrong, perhaps more time and efforts invested into Merlin PoC
>would allow me to realize it (or not).

>Yet you are totally right having stressed the importance of right tool
>for implementing the underlying object model (or JSON-wrapper as you
>called it) - Barricade.js. That's the second big part of work Merlin
>had to do, and I can't overstate how beneficial it would be for
>Merlin to leverage some of the facilities that Barricade.js provides.
>I'll gladly look at the demo of template builder and Barricade. Is
>there any chance I co

Re: [openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths

2014-08-28 Thread Gil Vernik
Hi Michael,

I have an update to this patch with tempauth authentication as well, but 
it's not yet submitted.
I am not aware of v3 support.

All the best,
Gil.



From:   Michael McCune 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   28/08/2014 08:14 PM
Subject:Re: [openstack-dev] [sahara] Notes on developing Sahara 
Spark EDP to work with swift:// paths



hi Gil,

that's cool about the patch to Spark. Has there been any talk about 
upgrading that patch to include Keystone v3 operations?

- Original Message -
> Hi,
> 
> In case this is helpful for you, this is the patch i submitted to Spark
> about Swift and Spark integration ( about to be merged )
> https://github.com/apache/spark/pull/1010
> 
> I sent information about this patch to this mailing list about two 
months
> ago.
> 
> All the best,
> Gil.
> 
> 
> 
> 
> 
> From:   Trevor McKay 
> To: OpenStack Development Mailing List
> 
> Date:   28/08/2014 06:22 PM
> Subject:[openstack-dev] [sahara] Notes on developing Sahara 
Spark
> EDP to work with swift:// paths
> 
> 
> 
> Hi folks,
> 
>   I've updated this etherpad with notes from an investigation of
> Spark/Swift and the hadoop-openstack plugin carried in the sahara-extra
> repo.
> 
>   Following the notes there, I was able to access swift:// paths from
> Spark jobs on a Spark standalone cluster launched from Sahara and then
> fixed up by hand.
> 
>   Comments welcome.  This is a POC at this point imho, we have work to
> do to fully integrate this into Sahara.
> 
> https://etherpad.openstack.org/p/sahara_spark_edp
> 
> Best,
> 
> Trevor
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] Specs for K release

2014-08-28 Thread Mandeep Dhami
+1

I agree that this is a good idea.

Regards,
Mandeep





On Thu, Aug 28, 2014 at 10:13 AM, Jay Pipes  wrote:

> On 08/28/2014 12:50 PM, Michael Still wrote:
>
>> On Thu, Aug 28, 2014 at 6:53 AM, Daniel P. Berrange 
>> wrote:
>>
>>> On Thu, Aug 28, 2014 at 11:51:32AM +, Alan Kavanagh wrote:
>>>
 How do we handle specs that have slipped through the cracks
 and did not make it for Juno?

>>>
>>> Rebase the proposal so it is under the 'kilo' directory path
>>> instead of 'juno' and submit it for review again. Make sure
>>> to keep the ChangeId line intact so people see the history
>>> of any review comments in the earlier Juno proposal.
>>>
>>
>> Yes, but...
>>
>> I think we should talk about tweaking the structure of the juno
>> directory. Something like having proposed, approved, and implemented
>> directories. That would provide better signalling to operators about
>> what we actually did, what we thought we'd do, and what we didn't do.
>>
>
> I think this would be really useful.
>
> -jay
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow

2014-08-28 Thread Richard Woo
I have another question about the incubator proposal, regarding CLI and GUI: do
we imply that an incubated feature will need to branch python-neutronclient,
Horizon, and/or Nova (if changes are needed)?




On Tue, Aug 26, 2014 at 7:09 PM, James E. Blair  wrote:

> Hi,
>
> After reading https://wiki.openstack.org/wiki/Network/Incubator I have
> some thoughts about the proposed workflow.
>
> We have quite a bit of experience and some good tools around splitting
> code out of projects and into new projects.  But we don't generally do a
> lot of importing code into projects.  We've done this once, to my
> recollection, in a way that preserved history, and that was with the
> switch to keystone-lite.
>
> It wasn't easy; it's major git surgery and would require significant
> infra-team involvement any time we wanted to do it.
>
> However, reading the proposal, it occurred to me that it's pretty clear
> that we expect these tools to be able to operate outside of the Neutron
> project itself, to even be releasable on their own.  Why not just stick
> with that?  In other words, the goal of this process should be to create
> separate projects with their own development lifecycle that will
> continue indefinitely, rather than expecting the code itself to merge
> into the neutron repo.
>
> This has advantages in simplifying workflow and making it more
> consistent.  Plus it builds on known integration mechanisms like APIs
> and python project versions.
>
> But more importantly, it helps scale the neutron project itself.  I
> think that a focused neutron core upon which projects like these can
> build on in a reliable fashion would be ideal.
>
> -Jim
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] gate debugging

2014-08-28 Thread Sean Dague
On 08/28/2014 12:48 PM, Doug Hellmann wrote:
> 
> On Aug 27, 2014, at 5:56 PM, Sean Dague  wrote:
> 
>> On 08/27/2014 05:27 PM, Doug Hellmann wrote:
>>>
>>> On Aug 27, 2014, at 2:54 PM, Sean Dague  wrote:
>>>
 Note: thread intentionally broken, this is really a different topic.

 On 08/27/2014 02:30 PM, Doug Hellmann wrote:>
> On Aug 27, 2014, at 1:30 PM, Chris Dent  wrote:
>
>> On Wed, 27 Aug 2014, Doug Hellmann wrote:
>>
>>> I have found it immensely helpful, for example, to have a written set
>>> of the steps involved in creating a new library, from importing the
>>> git repo all the way through to making it available to other projects.
>>> Without those instructions, it would have been much harder to split up
>>> the work. The team would have had to train each other by word of
>>> mouth, and we would have had constant issues with inconsistent
>>> approaches triggering different failures. The time we spent building
>>> and verifying the instructions has paid off to the extent that we even
>>> had one developer not on the core team handle a graduation for us.
>>
>> +many more for the relatively simple act of just writing stuff down
>
> "Write it down.” is my theme for Kilo.

 I definitely get the sentiment. "Write it down" is also hard when you
 are talking about things that do change around quite a bit. OpenStack as
 a whole sees 250 - 500 changes a week, so the interaction pattern moves
 around enough that it's really easy to have *very* stale information
 written down. Stale information is even more dangerous than no
 information some times, as it takes people down very wrong paths.

 I think we break down on communication when we get into a conversation
 of "I want to learn gate debugging" because I don't quite know what that
 means, or where the starting point of understanding is. So those
 intentions are well meaning, but tend to stall. The reality was there
 was no road map for those of us that dive in, it's just understanding
 how OpenStack holds together as a whole and where some of the high risk
 parts are. And a lot of that comes with days staring at code and logs
 until patterns emerge.

 Maybe if we can get smaller more targeted questions, we can help folks
 better? I'm personally a big fan of answering the targeted questions
 because then I also know that the time spent exposing that information
 was directly useful.

 I'm more than happy to mentor folks. But I just end up finding the "I
 want to learn" at the generic level something that's hard to grasp onto
 or figure out how we turn it into action. I'd love to hear more ideas
 from folks about ways we might do that better.
>>>
>>> You and a few others have developed an expertise in this important skill. I 
>>> am so far away from that level of expertise that I don’t know the questions 
>>> to ask. More often than not I start with the console log, find something 
>>> that looks significant, spend an hour or so tracking it down, and then have 
>>> someone tell me that it is a red herring and the issue is really some other 
>>> thing that they figured out very quickly by looking at a file I never got 
>>> to.
>>>
>>> I guess what I’m looking for is some help with the patterns. What made you 
>>> think to look in one log file versus another? Some of these jobs save a 
>>> zillion little files, which ones are actually useful? What tools are you 
>>> using to correlate log entries across all of those files? Are you doing it 
>>> by hand? Is logstash useful for that, or is that more useful for finding 
>>> multiple occurrences of the same issue?
>>>
>>> I realize there’s not a way to write a how-to that will live forever. Maybe 
>>> one way to deal with that is to write up the research done on bugs soon 
>>> after they are solved, and publish that to the mailing list. Even the 
>>> retrospective view is useful because we can all learn from it without 
>>> having to live through it. The mailing list is a fairly ephemeral medium, 
>>> and something very old in the archives is understood to have a good chance 
>>> of being out of date so we don’t have to keep adding disclaimers.
>>
>> Sure. Matt's actually working up a blog post describing the thing he
>> nailed earlier in the week.
> 
> Yes, I appreciate that both of you are responding to my questions. :-)
> 
> I have some more specific questions/comments below. Please take all of this 
> in the spirit of trying to make this process easier by pointing out where 
> I’ve found it hard, and not just me complaining. I’d like to work on fixing 
> any of these things that can be fixed, by writing or reviewing patches for 
> early in kilo.
> 
>>
>> Here is my off the cuff set of guidelines:
>>
>> #1 - is it a test failure or a setup failure
>>
>> This should be pretty easy to figure out. Test failures come at the end
>> of

Re: [openstack-dev] [nova] [neutron] Specs for K release

2014-08-28 Thread Jay Pipes

On 08/28/2014 12:50 PM, Michael Still wrote:

On Thu, Aug 28, 2014 at 6:53 AM, Daniel P. Berrange  wrote:

On Thu, Aug 28, 2014 at 11:51:32AM +, Alan Kavanagh wrote:

How do we handle specs that have slipped through the cracks
and did not make it for Juno?


Rebase the proposal so it is under the 'kilo' directory path
instead of 'juno' and submit it for review again. Make sure
to keep the ChangeId line intact so people see the history
of any review comments in the earlier Juno proposal.
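Mechanically, that rebase is just a git mv plus an amended commit, since --amend reuses the existing commit message (and therefore the Change-Id footer Gerrit keys on). A self-contained sketch, with a scratch repo and a placeholder spec name:

```shell
# Sketch: move a slipped spec from the juno directory to kilo while keeping
# its Gerrit Change-Id. Repo layout, spec name, and identity are placeholders.
git init -q spec-repo && cd spec-repo
git config user.email "dev@example.com"
git config user.name "Example Dev"
mkdir -p specs/juno specs/kilo
echo "Example spec" > specs/juno/example-spec.rst
git add specs
git commit -qm "Add example spec

Change-Id: I0123456789abcdef0123456789abcdef01234567"

# The actual rework: relocate the file and amend, reusing the old commit
# message -- and therefore the old Change-Id -- unchanged.
git mv specs/juno/example-spec.rst specs/kilo/example-spec.rst
git commit -q --amend --no-edit
# then: git review   (resubmits under the same Gerrit change)
```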


Yes, but...

I think we should talk about tweaking the structure of the juno
directory. Something like having proposed, approved, and implemented
directories. That would provide better signalling to operators about
what we actually did, what we thought we'd do, and what we didn't do.


I think this would be really useful.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths

2014-08-28 Thread Michael McCune
hi Gil,

that's cool about the patch to Spark. Has there been any talk about upgrading 
that patch to include Keystone v3 operations?

- Original Message -
> Hi,
> 
> In case this is helpful for you, this is the patch i submitted to Spark
> about Swift and Spark integration ( about to be merged )
> https://github.com/apache/spark/pull/1010
> 
> I sent information about this patch to this mailing list about two months
> ago.
> 
> All the best,
> Gil.
> 
> 
> 
> 
> 
> From:   Trevor McKay 
> To: OpenStack Development Mailing List
> 
> Date:   28/08/2014 06:22 PM
> Subject:[openstack-dev] [sahara] Notes on developing Sahara Spark
> EDP to work with swift:// paths
> 
> 
> 
> Hi folks,
> 
>   I've updated this etherpad with notes from an investigation of
> Spark/Swift and the hadoop-openstack plugin carried in the sahara-extra
> repo.
>  
>   Following the notes there, I was able to access swift:// paths from
> Spark jobs on a Spark standalone cluster launched from Sahara and then
> fixed up by hand.
> 
>   Comments welcome.  This is a POC at this point imho, we have work to
> do to fully integrate this into Sahara.
> 
> https://etherpad.openstack.org/p/sahara_spark_edp
> 
> Best,
> 
> Trevor
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths

2014-08-28 Thread Trevor McKay
Gil,

  thanks! I'll take a look.

Trevor

On Thu, 2014-08-28 at 19:31 +0300, Gil Vernik wrote:
> Hi, 
> 
> In case this is helpful for you, this is the patch i submitted to
> Spark about Swift and Spark integration ( about to be merged ) 
> https://github.com/apache/spark/pull/1010 
> 
> I sent information about this patch to this mailing list about two
> months ago. 
> 
> All the best, 
> Gil. 
> 
> 
> 
> 
> 
> From:Trevor McKay  
> To:OpenStack Development Mailing List
>  
> Date:28/08/2014 06:22 PM 
> Subject:[openstack-dev] [sahara] Notes on developing Sahara
> Spark EDP to work with swift:// paths 
> 
> __
> 
> 
> 
> Hi folks,
> 
>  I've updated this etherpad with notes from an investigation of
> Spark/Swift and the hadoop-openstack plugin carried in the
> sahara-extra
> repo.
>  
>  Following the notes there, I was able to access swift:// paths from
> Spark jobs on a Spark standalone cluster launched from Sahara and then
> fixed up by hand.
> 
>  Comments welcome.  This is a POC at this point imho, we have work to
> do to fully integrate this into Sahara.
> 
> https://etherpad.openstack.org/p/sahara_spark_edp
> 
> Best,
> 
> Trevor
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Spam] Re: [Openstack][TripleO] [Ironic] What if undercloud machines down, can we reboot overcloud machines?

2014-08-28 Thread Jim Rollenhagen


On August 28, 2014 8:58:11 AM PDT, Clint Byrum  wrote:
>Excerpts from Jyoti Ranjan's message of 2014-08-27 21:20:19 -0700:
>> I do agree, but it creates an extra requirement for the undercloud if
>> high availability is an important criterion. Because of this, the
>> undercloud has to be there 24x7, 365 days, and to make it available we
>> need HA for it as well. So you indirectly mean that the undercloud
>> should also be designed with high availability in mind.
>
>I'm worried that you may be overstating the needs of a typical cloud.
>
>The undercloud needs to be able to reach a state of availability when
>you need to boot boxes. Even if you are doing CD and _constantly_
>rebooting boxes, you can take your undercloud down for an hour, as long
>as it can be brought back up for emergencies.
>
>However, Ironic has already been designed this way. I believe that
>Ironic has a nice dynamic hash ring of server ownership, and if you
>mark a conductor down, the other conductors will assume ownership of
>the machines that it was holding. So the path to making this HA is
>basically "add one more undercloud server."
>
>Ironic experts, please tell me this is true, and not just something I
>inserted into my own distorted version of reality to help me sleep at
>night.

This is correct, HA is achieved in Ironic by having multiple conductors and API 
servers. It isn't perfect today, but Greg Haynes is working on some of this and 
it is planned to land in Juno. 

>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

// jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] gate debugging

2014-08-28 Thread Dean Troyer
On Thu, Aug 28, 2014 at 11:48 AM, Doug Hellmann 
wrote:

> In my case, a neutron call failed. Most of the other services seem to have
> a *-api.log file, but neutron doesn’t. It took a little while to find the
> API-related messages in screen-q-svc.txt (I’m glad I’ve been around long
> enough to know it used to be called “quantum”). I get that screen-n-*.txt
> would collide with nova. Is it necessary to abbreviate those filenames at
> all?
>

Cleaning up the service names has been a background conversation for some
time and came up again last night in IRC.  I abbreviated them in the first
place to try to get them all in my screen status bar, so that was a while
ago...

I don't think the current ENABLED_SERVICES is scaling well and using full
names (nova-api, glance-registry, etc) will make it even harder to read.
 Maybe that is a misplaced concern? I do think, though, that making the logfile
names and locations more obvious in the gate results will be helpful.

I've started scratching out a plan to migrate to full names and will get it
into an Etherpad soon.  Also simplifying the log file configuration vars
and locations.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] Specs for K release

2014-08-28 Thread Michael Still
On Thu, Aug 28, 2014 at 6:53 AM, Daniel P. Berrange  wrote:
> On Thu, Aug 28, 2014 at 11:51:32AM +, Alan Kavanagh wrote:
>> How do we handle specs that have slipped through the cracks
>> and did not make it for Juno?
>
> Rebase the proposal so it is under the 'kilo' directory path
> instead of 'juno' and submit it for review again. Make sure
> to keep the ChangeId line intact so people see the history
> of any review comments in the earlier Juno proposal.

Yes, but...

I think we should talk about tweaking the structure of the juno
directory. Something like having proposed, approved, and implemented
directories. That would provide better signalling to operators about
what we actually did, what we thought we'd do, and what we didn't do.

I worry that gerrit is a terrible place to archive the things which
were proposed but not approved. If someone else wants to pick something
up later, it's super hard for them to find.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] gate debugging

2014-08-28 Thread Doug Hellmann

On Aug 27, 2014, at 5:56 PM, Sean Dague  wrote:

> On 08/27/2014 05:27 PM, Doug Hellmann wrote:
>> 
>> On Aug 27, 2014, at 2:54 PM, Sean Dague  wrote:
>> 
>>> Note: thread intentionally broken, this is really a different topic.
>>> 
>>> On 08/27/2014 02:30 PM, Doug Hellmann wrote:>
 On Aug 27, 2014, at 1:30 PM, Chris Dent  wrote:
 
> On Wed, 27 Aug 2014, Doug Hellmann wrote:
> 
>> I have found it immensely helpful, for example, to have a written set
>> of the steps involved in creating a new library, from importing the
>> git repo all the way through to making it available to other projects.
>> Without those instructions, it would have been much harder to split up
>> the work. The team would have had to train each other by word of
>> mouth, and we would have had constant issues with inconsistent
>> approaches triggering different failures. The time we spent building
>> and verifying the instructions has paid off to the extent that we even
>> had one developer not on the core team handle a graduation for us.
> 
> +many more for the relatively simple act of just writing stuff down
 
 "Write it down.” is my theme for Kilo.
>>> 
>>> I definitely get the sentiment. "Write it down" is also hard when you
>>> are talking about things that do change around quite a bit. OpenStack as
>>> a whole sees 250 - 500 changes a week, so the interaction pattern moves
>>> around enough that it's really easy to have *very* stale information
>>> written down. Stale information is even more dangerous than no
>>> information some times, as it takes people down very wrong paths.
>>> 
>>> I think we break down on communication when we get into a conversation
>>> of "I want to learn gate debugging" because I don't quite know what that
>>> means, or where the starting point of understanding is. So those
>>> intentions are well meaning, but tend to stall. The reality was there
>>> was no road map for those of us that dive in, it's just understanding
>>> how OpenStack holds together as a whole and where some of the high risk
>>> parts are. And a lot of that comes with days staring at code and logs
>>> until patterns emerge.
>>> 
>>> Maybe if we can get smaller more targeted questions, we can help folks
>>> better? I'm personally a big fan of answering the targeted questions
>>> because then I also know that the time spent exposing that information
>>> was directly useful.
>>> 
>>> I'm more than happy to mentor folks. But I just end up finding the "I
>>> want to learn" at the generic level something that's hard to grasp onto
>>> or figure out how we turn it into action. I'd love to hear more ideas
>>> from folks about ways we might do that better.
>> 
>> You and a few others have developed an expertise in this important skill. I 
>> am so far away from that level of expertise that I don’t know the questions 
>> to ask. More often than not I start with the console log, find something 
>> that looks significant, spend an hour or so tracking it down, and then have 
>> someone tell me that it is a red herring and the issue is really some other 
>> thing that they figured out very quickly by looking at a file I never got to.
>> 
>> I guess what I’m looking for is some help with the patterns. What made you 
>> think to look in one log file versus another? Some of these jobs save a 
>> zillion little files, which ones are actually useful? What tools are you 
>> using to correlate log entries across all of those files? Are you doing it 
>> by hand? Is logstash useful for that, or is that more useful for finding 
>> multiple occurrences of the same issue?
>> 
>> I realize there’s not a way to write a how-to that will live forever. Maybe 
>> one way to deal with that is to write up the research done on bugs soon 
>> after they are solved, and publish that to the mailing list. Even the 
>> retrospective view is useful because we can all learn from it without having 
>> to live through it. The mailing list is a fairly ephemeral medium, and 
>> something very old in the archives is understood to have a good chance of 
>> being out of date so we don’t have to keep adding disclaimers.
> 
> Sure. Matt's actually working up a blog post describing the thing he
> nailed earlier in the week.

Yes, I appreciate that both of you are responding to my questions. :-)

I have some more specific questions/comments below. Please take all of this in 
the spirit of trying to make this process easier by pointing out where I’ve 
found it hard, and not just me complaining. I’d like to work on fixing any of 
these things that can be fixed, by writing or reviewing patches for early in 
kilo.

> 
> Here is my off the cuff set of guidelines:
> 
> #1 - is it a test failure or a setup failure
> 
> This should be pretty easy to figure out. Test failures come at the end
> of console log and say that tests failed (after you see a bunch of
> passing tempest tests).
> 
> Always start at *the end* of files and wor

Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them

2014-08-28 Thread Sean Dague
On 08/28/2014 12:22 PM, Doug Hellmann wrote:
> 
> On Aug 28, 2014, at 6:41 AM, Radomir Dopieralski  
> wrote:
> 
>> On 27/08/14 16:31, Sean Dague wrote:
>>
>> [snip]
>>
>>> In python 2.7 (using pip) namespaces are a bolt on because of the way
>>> importing modules works. And depending on how you install things in a
>>> namespace will overwrite the base __init__.py for the top level part of
>>> the namespace in such a way that you can't get access to the submodules.
>>>
>>> It's well known, and every conversation with dstuft that I've had in the
>>> past was "don't use namespaces".
>>
>> I think this is actually a solved problem. You just need a single line
>> in your __init__.py files:
>>
>> https://bitbucket.org/thomaswaldmann/xstatic-jquery/src/tip/xstatic/__init__.py
>>
> 
> The problem is that the setuptools implementation of namespace packages 
> breaks in a way that is repeatable but difficult to debug when a common 
> OpenStack installation pattern is used. So the fix is “don’t do that” where I 
> thought “that” meant the installation pattern and Sean thought it meant “use 
> namespace packages”. :-)

Stupid english... be more specific!

Yeh, Doug provides the most concise statement of where we failed on
communication (I take a big chunk of that blame). Hopefully now it's a
lot clearer what's going on, and why it hurts if you do it.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths

2014-08-28 Thread Gil Vernik
Hi,

In case this is helpful for you, this is the patch i submitted to Spark 
about Swift and Spark integration ( about to be merged )
https://github.com/apache/spark/pull/1010

I sent information about this patch to this mailing list about two months 
ago.

All the best,
Gil.





From:   Trevor McKay 
To: OpenStack Development Mailing List 

Date:   28/08/2014 06:22 PM
Subject:[openstack-dev] [sahara] Notes on developing Sahara Spark 
EDP to work with swift:// paths



Hi folks,

  I've updated this etherpad with notes from an investigation of
Spark/Swift and the hadoop-openstack plugin carried in the sahara-extra
repo.
 
  Following the notes there, I was able to access swift:// paths from
Spark jobs on a Spark standalone cluster launched from Sahara and then
fixed up by hand.

  Comments welcome.  This is a POC at this point imho, we have work to
do to fully integrate this into Sahara.

https://etherpad.openstack.org/p/sahara_spark_edp

Best,

Trevor


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them

2014-08-28 Thread Doug Hellmann

On Aug 28, 2014, at 6:41 AM, Radomir Dopieralski  wrote:

> On 27/08/14 16:31, Sean Dague wrote:
> 
> [snip]
> 
>> In python 2.7 (using pip) namespaces are a bolt on because of the way
>> importing modules works. And depending on how you install things in a
>> namespace will overwrite the base __init__.py for the top level part of
>> the namespace in such a way that you can't get access to the submodules.
>> 
>> It's well known, and every conversation with dstuft that I've had in the
>> past was "don't use namespaces".
> 
> I think this is actually a solved problem. You just need a single line
> in your __init__.py files:
> 
> https://bitbucket.org/thomaswaldmann/xstatic-jquery/src/tip/xstatic/__init__.py
> 

The problem is that the setuptools implementation of namespace packages breaks 
in a way that is repeatable but difficult to debug when a common OpenStack 
installation pattern is used. So the fix is “don’t do that” where I thought 
“that” meant the installation pattern and Sean thought it meant “use namespace 
packages”. :-)

Doug
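For concreteness, the namespace-package trick Radomir points to can be
demonstrated end-to-end with the stdlib `pkgutil.extend_path` variant. The
package names (`demo_ns`, `alpha`, `beta`) and install layout below are
invented for the demo; the point is only that one line in each `__init__.py`
lets two install locations contribute subpackages to the same top-level
package.

```python
import os
import sys
import tempfile

# Two separate "install locations", each contributing one subpackage to
# the same top-level package 'demo_ns'.
root = tempfile.mkdtemp()
for site, sub in [('site_a', 'alpha'), ('site_b', 'beta')]:
    pkg = os.path.join(root, site, 'demo_ns')
    os.makedirs(os.path.join(pkg, sub))
    with open(os.path.join(pkg, '__init__.py'), 'w') as f:
        # The single magic line: merge every demo_ns/ dir found on sys.path.
        f.write('from pkgutil import extend_path\n'
                '__path__ = extend_path(__path__, __name__)\n')
    with open(os.path.join(pkg, sub, '__init__.py'), 'w') as f:
        f.write('NAME = %r\n' % sub)
    sys.path.insert(0, os.path.join(root, site))

# Both subpackages import, even though they live in different directories.
import demo_ns.alpha
import demo_ns.beta
```

Note this sketches the `pkgutil` mechanism, not the setuptools
`declare_namespace` one whose interaction with mixed pip/system installs is
the breakage being discussed.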



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow

2014-08-28 Thread Mark McClain

On Aug 28, 2014, at 10:45 AM, Jay Pipes  wrote:

> On 08/27/2014 04:28 PM, Kevin Benton wrote:
>> What are you talking about? The only reply was from me clarifying that
>> one of the purposes of the incubator was for components of neutron that
>> are experimental but are intended to be merged.
> 
> Right. The special unicorns.
> 
> > In that case it might
>> not make sense to have a life cycle of their own in another repo
>> indefinitely.
> 
> The main reasons these "experimental components" don't make sense to live in 
> their own repo indefinitely are:
> 
> a) Neutron's design doesn't make it easy or straightforward to build/layer 
> other things on top of it, or:
> 

Correct, and this is something I want the team to address in Kilo.  Many of the L7 
services would be easier to build if we invest some time early in the cycle to 
establishing a well defined interface for a few items.  I’m sure the LBaaS team 
has good feedback to share with everyone.

> b) The experimental piece of code intends to replace whole-hog a large chunk 
> of Neutron's existing codebase, or:
> 
> c) The experimental piece of code relies so heavily on inconsistent, 
> unversioned internal interface and plugin calls that it cannot be designed 
> externally due to the fragility of those interfaces

I’m glad Jim reminded us of the pain of merging histories and the availability 
of feature branches.  I think for items where we’re replacing large chunks of 
code, feature branches make lots of sense.

> 
> Fixing a) is the solution to these problems. An incubator area where 
> "experimental components" can live will just continue to mask the true 
> problem domain, which is that Neutron's design is cumbersome to build on top 
> of, and its cross-component interfaces need to be versioned, made consistent, 
> and cleaned up to use versioned data structures instead of passing random 
> nested dicts of randomly-prefixed string key/values.
> 
> Frankly, we're going through a similar problem in Nova right now. There is a 
> group of folks who believe that separating the nova-scheduler code into the 
> Gantt project will magically make placement decision code and solver 
> components *easier* to work on (because the pace of coding can be increased 
> if there wasn't that pesky nova-core review process). But this is not 
> correct, IMO. Separating out the scheduler into its own project before 
> internal interfaces and data structures are cleaned up and versioned will 
> just lead to greater technical debt and an increase in frustration on the 
> part of Nova developers and scheduler developers alike.

Right, hopefully the incubator will allow us to develop components that should 
be independent from the start without incurring too much debt.

> 
> -jay
> 
>> On Wed, Aug 27, 2014 at 11:52 AM, Jay Pipes > > wrote:
>> 
>>On 08/26/2014 07:09 PM, James E. Blair wrote:
>> 
>>Hi,
>> 
>>After reading
>>https://wiki.openstack.org/wiki/Network/Incubator
>> I have
>>some thoughts about the proposed workflow.
>> 
>>We have quite a bit of experience and some good tools around
>>splitting
>>code out of projects and into new projects.  But we don't
>>generally do a
>>lot of importing code into projects.  We've done this once, to my
>>recollection, in a way that preserved history, and that was with the
>>switch to keystone-lite.
>> 
>>It wasn't easy; it's major git surgery and would require significant
>>infra-team involvement any time we wanted to do it.
>> 
>>However, reading the proposal, it occurred to me that it's
>>pretty clear
>>that we expect these tools to be able to operate outside of the
>>Neutron
>>project itself, to even be releasable on their own.  Why not
>>just stick
>>with that?  In other words, the goal of this process should be
>>to create
>>separate projects with their own development lifecycle that will
>>continue indefinitely, rather than expecting the code itself to
>>merge
>>into the neutron repo.
>> 
>>This has advantages in simplifying workflow and making it more
>>consistent.  Plus it builds on known integration mechanisms like
>>APIs
>>and python project versions.
>> 
>>But more importantly, it helps scale the neutron project itself.  I
>>think that a focused neutron core upon which projects like these can
>>build on in a reliable fashion would be ideal.
>> 
>> 
>>Despite replies to you saying that certain branches of Neutron
>>development work are special unicorns, I wanted to say I *fully*
>>support your above statement.
>> 
>>Best,
>>-jay
>> 
>> 
>> 

[openstack-dev] [oslo] change to deprecation policy in the incubator

2014-08-28 Thread Doug Hellmann
Before Juno we set a deprecation policy for graduating libraries that said the 
incubated versions of the modules would stay in the incubator repository for 
one full cycle after graduation. This gives projects time to adopt the 
libraries and still receive bug fixes to the incubated version (see 
https://wiki.openstack.org/wiki/Oslo#Graduation).

That policy worked well early on, but has recently introduced some challenges 
with the low level modules. Other modules in the incubator are still importing 
the incubated versions of, for example, timeutils, and so tests that rely on 
mocking out or modifying the behavior of timeutils do not work as expected when 
different parts of the application code end up calling different versions of 
timeutils. We had similar issues with the notifiers and RPC code, and I expect 
to find other cases as we continue with the graduations.
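The failure mode is easy to reproduce in miniature: when two copies of the
same helper module are importable, patching one leaves callers of the other
unaffected. The module names below are stand-ins for the incubated copy vs.
the released library, not actual oslo paths.

```python
import sys
import types
from unittest import mock


def make_timeutils(name):
    # Simulate a separately-importable copy of the same helper module,
    # as happens when oslo-incubator code is copied into each project.
    m = types.ModuleType(name)
    m.utcnow = lambda: 'real-now'
    sys.modules[name] = m
    return m


incubated = make_timeutils('myproj.openstack.common.timeutils')
library = make_timeutils('oslo_utils_timeutils')


def code_under_test():
    # Part of the application still calls through the incubated copy...
    return sys.modules['myproj.openstack.common.timeutils'].utcnow()


with mock.patch.object(library, 'utcnow', return_value='frozen-now'):
    # ...so a test that patches the *library* copy does not freeze the
    # clock the code under test actually sees.
    result = code_under_test()
```

Here `result` is still the unpatched value, which is exactly the
"different parts of the application call different versions" problem.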

To deal with this problem, I propose that for Kilo we delete graduating modules 
as soon as the new library is released, rather than waiting to the end of the 
cycle. We can update the other incubated modules at the same time, so that the 
incubator will always use the new libraries and be consistent.

We have not had a lot of patches where backports were necessary, but there have 
been a few important ones, so we need to retain the ability to handle them and 
allow projects to adopt libraries at a reasonable pace. To handle backports 
cleanly, we can “freeze” all changes to the master branch version of modules 
slated for graduation during Kilo (we would need to make a good list very early 
in the cycle), and use the stable/juno branch for backports.

The new process would be:

1. Declare which modules we expect to graduate during Kilo.
2. Changes to those pre-graduation modules could be made in the master branch 
before their library is released, as long as the change is also backported to 
the stable/juno branch at the same time (we should enforce this by having both 
patches submitted before accepting either).
3. When graduation for a library starts, freeze those modules in all branches 
until the library is released.
4. Remove modules from the incubator’s master branch after the library is 
released.
5. Land changes in the library first.
6. Backport changes, as needed, to stable/juno instead of master.

It would be better to begin the export/import process as early as possible in 
Kilo to keep the window where point 2 applies very short.

If there are objections to using stable/juno, we could introduce a new branch 
with a name like backports/kilo, but I am afraid having the extra branch to 
manage would just cause confusion.

I would like to move ahead with this plan by creating the stable/juno branch 
and starting to update the incubator as soon as the oslo.log repository is 
imported (https://review.openstack.org/116934).

Thoughts?

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Gerrit Downtime on August 30, 2014

2014-08-28 Thread James E. Blair
Flavio Percoco  writes:

> On 08/28/2014 05:39 PM, James E. Blair wrote:
>> Hi,
>> 
>> Gerrit will be unavailable starting at 1600-1630 UTC on Saturday,
>> August 30, 2014 to rename the glance.store project to glancestore.
>
> I went with glance_store
>
> Hope that's fine!

Even better!

> Thanks a lot for addressing this so quickly.

No problem.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Spam] Re: [Openstack][TripleO] [Ironic] What if undercloud machines down, can we reboot overcloud machines?

2014-08-28 Thread Clint Byrum
Excerpts from Jyoti Ranjan's message of 2014-08-27 21:20:19 -0700:
> I do agree, but it creates an extra requirement for the undercloud if high
> availability is an important criterion. Because of this, the undercloud has to
> be there 24x7, 365 days, and to make it available we need HA for it as well.
> So you indirectly mean that the undercloud should also be designed with high
> availability in mind.

I'm worried that you may be overstating the needs of a typical cloud.

The undercloud needs to be able to reach a state of availability when
you need to boot boxes. Even if you are doing CD and _constantly_
rebooting boxes, you can take your undercloud down for an hour, as long
as it can be brought back up for emergencies.

However, Ironic has already been designed this way. I believe that
Ironic has a nice dynamic hash ring of server ownership, and if you
mark a conductor down, the other conductors will assume ownership of
the machines that it was holding. So the path to making this HA is
basically "add one more undercloud server."

Ironic experts, please tell me this is true, and not just something I
inserted into my own distorted version of reality to help me sleep at
night.
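The property being described can be sketched with a toy consistent hash ring.
This is not Ironic's actual implementation (class and node names here are
invented); it just shows that marking a conductor down reassigns only the
machines that conductor owned, while every other mapping stays put.

```python
import bisect
import hashlib


def _hash(key):
    # Stable integer hash for ring placement.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class HashRing(object):
    """Toy consistent hash ring mapping bare-metal nodes to conductors."""

    def __init__(self, conductors, replicas=16):
        self.replicas = replicas
        self.ring = {}          # ring position -> conductor
        self.sorted_keys = []
        for c in conductors:
            self.add_conductor(c)

    def add_conductor(self, conductor):
        for i in range(self.replicas):
            pos = _hash('%s-%d' % (conductor, i))
            self.ring[pos] = conductor
            bisect.insort(self.sorted_keys, pos)

    def remove_conductor(self, conductor):
        for i in range(self.replicas):
            pos = _hash('%s-%d' % (conductor, i))
            del self.ring[pos]
            self.sorted_keys.remove(pos)

    def get_conductor(self, node):
        # First ring position at or after the node's hash, wrapping around.
        idx = bisect.bisect(self.sorted_keys, _hash(node)) % len(self.sorted_keys)
        return self.ring[self.sorted_keys[idx]]


ring = HashRing(['conductor-1', 'conductor-2', 'conductor-3'])
nodes = ['node-%d' % i for i in range(20)]
before = {n: ring.get_conductor(n) for n in nodes}
ring.remove_conductor('conductor-2')   # mark a conductor down
after = {n: ring.get_conductor(n) for n in nodes}
# Only nodes previously owned by conductor-2 change owners.
moved = [n for n in nodes if before[n] != after[n]]
```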

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] VPNaaS pending state handling

2014-08-28 Thread Sridhar Ramaswamy
https://bugs.launchpad.net/neutron/+bug/1355360

I'm working on this vpn vendor bug and am looking for guidance on the
approach. I'm also relatively new to neutron development so bear with some
newbie gaffs :)

The problem reported in this bug, in a nutshell, is that the policies in the
neutron vpn db and the virtual machine implementing the vpn go out of sync when
the agent restarts (the restart could be either operator-driven or due to a
software error).

The CSR vpn device driver currently doesn't do a sync when it comes up. I'm
going to add that as part of this bug fix. Still, it will only partially
solve the problem, as it will take care of new connections (which go to
PENDING_CREATE state) and updates to existing connections made while the
agent was down, but NOT deletes. For deletes, the connection entry gets
deleted right at the vpn_db level.

My proposal is to introduce PENDING_DELETE state for vpn site-to-site
connection.  Implementing pending_delete will involve,

1) Moving the delete operation from vpn_db into service driver
2) Changing the reference ipsec service driver to handle PENDING_DELETE
state. For now we can just do a simple db delete to preserve the existing
behavior.
3) CSR device driver will make use of PENDING_DELETE to correctly delete
the entries in the CSR device when the agent comes up.
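A rough sketch of the lifecycle the steps above imply (all names here are
illustrative stand-ins, not the actual Neutron vpn_db or driver code):

```python
# Illustrative connection states; PENDING_DELETE is the proposed addition.
PENDING_CREATE = 'PENDING_CREATE'
ACTIVE = 'ACTIVE'
PENDING_DELETE = 'PENDING_DELETE'


class FakeVpnDb(object):
    """Stand-in for vpn_db: rows survive until the device driver confirms."""

    def __init__(self):
        self.connections = {}

    def create(self, conn_id):
        self.connections[conn_id] = PENDING_CREATE

    def mark_delete(self, conn_id):
        # Instead of deleting the row immediately (today's behavior),
        # flag it so a restarted agent can finish the job.
        self.connections[conn_id] = PENDING_DELETE

    def purge(self, conn_id):
        del self.connections[conn_id]


def sync_device(db, device_connections):
    """What a device driver could do when the agent comes up."""
    for conn_id, state in list(db.connections.items()):
        if state == PENDING_DELETE:
            device_connections.discard(conn_id)  # remove from the device
            db.purge(conn_id)                    # then drop the DB row
        elif state == PENDING_CREATE and conn_id not in device_connections:
            device_connections.add(conn_id)
            db.connections[conn_id] = ACTIVE


db = FakeVpnDb()
db.create('conn-1')
db.create('conn-2')
device = set()
sync_device(db, device)      # both connections created on the device
db.mark_delete('conn-1')     # delete requested while the agent was down
sync_device(db, device)      # restart: the stale device entry is cleaned up
```

With the row kept around in PENDING_DELETE, the restarted agent has enough
state to delete the matching entry on the device before dropping the row.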

Sounds reasonable? Any thoughts?

thanks,
- Sridhar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >