Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-09-05 Thread Arkady_Kanevsky
Please, drive new multi projects requirements thru use cases of Product WG.
Thanks,
Arkady

-Original Message-
From: joehuang [mailto:joehu...@huawei.com]
Sent: Tuesday, August 23, 2016 9:01 PM
To: OpenStack Development Mailing List (not for usage questions) ; 
openstack-operators
Cc: discovery-...@inria.fr
Subject: Re: [openstack-dev] [all][massively distributed][architecture] 
Coordination between actions/WGs

Hello, Adrien,

How about a different focus for each working group? For example, the "massively 
distributed" working group could focus on identifying the use cases, challenges, and 
issues in current OpenStack to support such fog/edge computing scenarios, 
even including the use cases/scenarios from ETSI mobile edge computing 
(http://www.etsi.org/technologies-clusters/technologies/mobile-edge-computing, 
https://portal.etsi.org/portals/0/tbpages/mec/docs/mobile-edge_computing_-_introductory_technical_white_paper_v1%2018-09-14.pdf).
 For the "architecture" working group, how about focusing on discussing technology 
solutions/proposals to address these issues/challenges?

We have discussed/exchanged ideas a lot before, during, and after the Austin summit. As 
Tricircle has worked in the multisite area for several cycles, a lot of use 
cases/challenges/issues have also been identified; the Tricircle proposal 
could be one basis for discussion in the "architecture" working group, and other 
proposals are also welcome.

Best Regards
Chaoyi Huang (joehuang)


From: lebre.adr...@free.fr [lebre.adr...@free.fr]
Sent: 23 August 2016 18:17
To: OpenStack Development Mailing List; openstack-operators
Cc: discovery-...@inria.fr
Subject: [openstack-dev] [all][massively distributed][architecture] 
Coordination between actions/WGs

Hi Folks,

During the last summit, we suggested creating a new working group that deals 
with the massively distributed use case:
how can OpenStack be "slightly" revised to operate Fog/Edge Computing 
infrastructures, i.e. infrastructures composed of several sites?
The first meeting, in Austin, showed us that additional materials were 
needed to better understand the scope as well as the actions we can perform 
in this working group.

After exchanging ideas with different people and institutions, we have identified 
several actions that we would like to achieve and that make the creation of 
such a working group relevant from our point of view.

Among the list of possible actions, we would like to identify major scalability 
issues and clarify intra-site vs inter-site exchanges between the different 
services of OpenStack in a multi-site context (i.e. with the vanilla OpenStack 
code).
Such information will enable us to better understand how and where each service 
should be deployed and whether it should be revised.

We have started an action with the Performance WG with the ultimate goal of 
analysing how OpenStack behaves from the performance perspective, as well as the 
interactions between the various services, in such a context.

Meanwhile, we saw during this summer Clint's proposal for the 
Architecture WG.

Although we are very excited about this WG (we are convinced it will be 
valuable for the whole community), we are wondering whether the actions we 
envision in the Massively Distributed WG would overlap with the ones 
(scalability, multi-site operations, ...) that could be performed in the 
Architecture WG.

The goal of this email is to:

(i) understand whether the fog/edge computing use case is in the scope of the 
Architecture WG.

(ii) if not, whether it makes sense to create a working group that focuses on 
scalability and multi-site challenges (folks from Orange Labs and British 
Telecom, for instance, have already told us that they are interested in such a 
use case).

(iii) what is the best way to coordinate our efforts with the actions performed 
in other WGs such as the Performance and Architecture ones (e.g., actions 
performed/decisions taken in the Architecture WG can have impacts on the 
massively distributed WG and thus drive the way we should perform actions to 
progress towards the Fog/Edge Computing target).


Depending on the feedback, we will create dedicated wiki pages for the 
massively distributed WG.
Remarks/comments welcome.

Ad_rien_
Further information regarding the Fog/Edge Computing use-case we target is 
available at http://beyondtheclouds.github.io

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-09-05 Thread Arkady_Kanevsky
That is the question of how many releases back the community is willing to 
“support”.
The current answer is 1 year. If a customer wants something older, they have to do it 
themselves – or, to be precise, work with the vendor whose driver they want updated. 
This creates vendor-driver mismatch issues for validation of older versions. The 
only two options I see are 3rd-party distros or OpenStack changing its policy.
The former is being done now.
Thus, it works.
Thanks,
Arkady


From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
Sent: Monday, August 22, 2016 3:45 PM
To: OpenStack Development Mailing List 
Subject: Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for 
drivers


What is the logic for that? It's a massive duplication of effort, and it leads 
to de facto forks and inconsistencies between clouds - exactly what the 
OpenStack mission is against.

Many/most of the clouds actually in production are already out of upstream 
stable policy. The more convergence we can get on what happens after that the 
better. There are zero advantages I can see to each vendor going it alone.

On 22 Aug 2016 19:31, wrote:
Sorry if I touch the third rail.
But shouldn't backporting bug fixes to older releases be done in distros and not 
upstream?

-Original Message-
From: Walter A. Boring IV 
[mailto:walter.bor...@hpe.com]
Sent: Tuesday, August 09, 2016 12:34 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for 
drivers

On 08/08/2016 02:28 PM, Ihar Hrachyshka wrote:
> Duncan Thomas wrote:
>
>> On 8 August 2016 at 21:12, Matthew Treinish
>> wrote:
>> Ignoring all that, this is also contrary to how we perform testing in
>> OpenStack.
>> We don't turn off entire classes of testing we have so we can land
>> patches, that's just a recipe for disaster.
>>
>> But is it more of a disaster (for the consumers) than zero testing,
>> zero review, scattered around the internet
>> if-you're-lucky-with-a-good-wind you'll maybe get the right patch
>> set? Because that's where we are right now, and vendors, distributors
>> and the cinder core team are all saying it's a disaster.
>
> If consumers rely on upstream releases, then they are expected to
> migrate to newer releases after EOL, not switch to a random branch on
> the internet. If they rely on some commercial product, then they
> usually have an extended period of support and certification for their
> drivers, so it’s not a problem for them.
>
> Ihar
This is entirely unrealistic. Forcing customers to upgrade? Good luck
explaining to a bank that in order to get their cinder driver fix in, they have 
to upgrade their entire OpenStack deployment. Real world customers simply will 
balk at this all day long.

Walt





Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-22 Thread Arkady_Kanevsky
Sorry if I touch the third rail.
But shouldn't backporting bug fixes to older releases be done in distros and not 
upstream?

-Original Message-
From: Walter A. Boring IV [mailto:walter.bor...@hpe.com]
Sent: Tuesday, August 09, 2016 12:34 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for 
drivers

On 08/08/2016 02:28 PM, Ihar Hrachyshka wrote:
> Duncan Thomas wrote:
>
>> On 8 August 2016 at 21:12, Matthew Treinish
>> wrote:
>> Ignoring all that, this is also contrary to how we perform testing in
>> OpenStack.
>> We don't turn off entire classes of testing we have so we can land
>> patches, that's just a recipe for disaster.
>>
>> But is it more of a disaster (for the consumers) than zero testing,
>> zero review, scattered around the internet
>> if-you're-lucky-with-a-good-wind you'll maybe get the right patch
>> set? Because that's where we are right now, and vendors, distributors
>> and the cinder core team are all saying it's a disaster.
>
> If consumers rely on upstream releases, then they are expected to
> migrate to newer releases after EOL, not switch to a random branch on
> the internet. If they rely on some commercial product, then they
> usually have an extended period of support and certification for their
> drivers, so it's not a problem for them.
>
> Ihar
This is entirely unrealistic. Forcing customers to upgrade? Good luck
explaining to a bank that in order to get their cinder driver fix in, they have 
to upgrade their entire OpenStack deployment. Real world customers simply will 
balk at this all day long.

Walt




Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-17 Thread Arkady_Kanevsky
What is the goal of undercloud?
Primarily to deploy and manage/upgrade/update overcloud.
It is not targeted for multitenancy, and the only "application" running on it is 
the overcloud.
While it may have a couple of VMs running in the undercloud, that is more a 
convenience than an actual need.

So which OpenStack projects need to run in the undercloud to achieve its 
primary goal?

Having a robust undercloud that can handle faults, like node or network 
failures, is more important than being able to deploy all OpenStack services on 
it.

Arkady

-Original Message-
From: Dan Prince [mailto:dpri...@redhat.com]
Sent: Friday, August 05, 2016 6:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

On Fri, 2016-08-05 at 12:27 +0200, Dmitry Tantsur wrote:
> On 08/04/2016 11:48 PM, Dan Prince wrote:
> >
> > Last week I started some prototype work on what could be a new way
> > to install the Undercloud. The driving force behind this was some of
> > the recent "composable services" work we've done in TripleO, so
> > initially I called it composable undercloud. There is an etherpad
> > here with links to some of the patches already posted upstream (many
> > of which stand as general improvements on their own outside the
> > scope of what I'm talking about here).
> >
> > https://etherpad.openstack.org/p/tripleo-composable-undercloud
> >
> > The idea in short is that we could spin up a small single-process
> > all-in-one heat-all (engine and API) and thereby avoid things like
> > Rabbit and MySQL. Then we can use Heat templates to drive the
> > Undercloud deployment just like we do in the Overcloud.
> I don't want to sound rude, but please no. The fact that you have a
> hammer does not mean everything around is nails :( What problem are
> you trying to solve by doing it?

Several problems I think.

One is that TripleO has gradually moved away from elements. And while we still use 
DIB elements for some things, we no longer favor that tool and instead rely on 
Heat and config management tooling to do our stepwise deployment ordering. This 
leaves us using instack-undercloud, a tool built specifically to install 
elements locally, as a means to create our undercloud. It works... and I do 
think we've packaged it nicely, but it isn't the best architectural fit for 
where we are going, I think. I actually think that from an end-user contribution 
standpoint using t-h-t could be quite nice for adding features to the 
Undercloud.

Second would be re-use. We just spent a huge amount of time in Newton (and some 
in Mitaka) refactoring t-h-t around composable services. So say you add a new 
composable service for Barbican in the Overcloud...
wouldn't it be nice to be able to consume the same thing in your Undercloud as 
well? Right now you can't, you have to do some of the work twice and in quite 
different formats I think. Sure, there is some amount of shared puppet work but 
that is only part of the picture I think.

There are new features to think about here too. Once upon a time TripleO 
supported multi-node underclouds. When we switched to instack-undercloud we 
moved away from that. By switching back to tripleo-heat-templates we could 
structure our templates around abstractions like resource groups and the new 
'deployed-server' trick that allows you to create machines either locally or 
perhaps via Ironic too. We could avoid Ironic entirely and always install the 
Undercloud on existing servers via 'deployed-server' as well.

Lastly, there is container work ongoing for the Overcloud. Again, I'd like to 
see us adopt a format that would allow it to be used in the Undercloud as well, 
as opposed to having to re-implement features in the Over and Under clouds all 
the time.

>
> Undercloud installation is already sometimes fragile, but it's
> probably the least fragile part right now (at least from my
> experience). And at the very least it's pretty obviously debuggable in
> most cases. THT is hard to understand and often impossible to debug.
> I'd prefer we move away from THT completely rather than trying to fix
> it in one more place where heat does not fit.

What tool did you have in mind? FWIW I started with Heat because by using just 
Heat I was able to take the initial steps to prototype this.

In my mind Mistral might be next here, and in fact it already supports the 
single-process launching idea. Keeping the undercloud installer as light 
as possible would be ideal though.

Dan

>
> >
> >
> > I created a short video demonstration which goes over some of the
> > history behind the approach, and shows a live demo of all of this
> > working with the patches above:
> >
> > https://www.youtube.com/watch?v=y1qMDLAf26Q
> >
> > Thoughts? Would it be cool to have a session to discuss this more in
> > Barcelona?
> >
> > Dan Prince (dprince)

Re: [openstack-dev] [tripleo] service validation during deployment steps

2016-08-02 Thread Arkady_Kanevsky
What about the ability for a service expert to plug in a remediation module?
If the remediation action succeeds, proceed; if not, stop.
The remediation module can be extended independently from the main flow.
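A minimal sketch of such a pluggable remediation hook (hypothetical names; not actual TripleO code) could look like:

```python
# Hypothetical sketch (not TripleO code): wrap each deployment step's
# validation with an optional, independently maintained remediation hook.
from typing import Callable, Optional

def run_step(validate: Callable[[], bool],
             remediate: Optional[Callable[[], None]] = None) -> bool:
    """Return True if the step may proceed, False to stop the deployment."""
    if validate():
        return True                  # service is healthy, move on
    if remediate is not None:
        remediate()                  # service expert's plug-in action
        return validate()            # proceed only if remediation fixed it
    return False                     # no remediation available: stop
```

Since the remediation module here is just a callable, service experts could extend it without touching the main deployment flow.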
Thanks,
Arkady

-Original Message-
From: Steven Hardy [mailto:sha...@redhat.com]
Sent: Wednesday, July 27, 2016 3:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tripleo] service validation during deployment 
steps

Hi Emilien,

On Tue, Jul 26, 2016 at 03:59:33PM -0400, Emilien Macchi wrote:
> I would love to hear some feedback about $topic, thanks.

Sorry for the slow response; we did discuss this on IRC, but providing that 
feedback and some other comments below:

> On Fri, Jul 15, 2016 at 11:31 AM, Emilien Macchi wrote:
> > Hi,
> >
> > Some people on the field brought interesting feedback:
> >
> > "As a TripleO User, I would like the deployment to stop immediately
> > after a resource creation failure during a step of the deployment
> > and be able to easily understand what service or resource failed to
> > be installed".
> >
> > Example:
> > If during step4 Puppet tries to deploy Neutron and OVS, but OVS
> > fails to start for some reasons, deployment should stop at the end
> > of the step.

I don't think anyone will argue against this use-case, we absolutely want to 
enable a better "fail fast" for deployment problems, as well as better 
surfacing of why it failed.

> > So there are 2 things in this user story:
> >
> > 1) Be able to run some service validation within a step deployment.
> > Note about the implementation: make the validation composable per
> > service (OVS, nova, etc) and not per role (compute, controller, etc).

+1, now we have composable services we need any validations to be
associated with the services, not the roles.

That said, it's fairly easy to imagine an interface like 
step_config/config_settings could be used to wire in composable service 
validations on a per-role basis, e.g. similar to what we do here, but
per-step:

https://github.com/openstack/tripleo-heat-templates/blob/master/overcloud.yaml#L1144

Similar to what was proposed (but never merged) here:

https://review.openstack.org/#/c/174150/15/puppet/controller-post-puppet.yaml
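As a purely illustrative sketch of that interface idea (the service_validation key below is hypothetical, not an existing tripleo-heat-templates output), a per-service validation could sit next to the real step_config/config_settings outputs of a composable service template:

```yaml
# Hypothetical sketch only: "service_validation" is NOT an existing
# tripleo-heat-templates key; it shows how a per-step check could be
# exposed alongside the real step_config output of a composable service.
outputs:
  role_data:
    value:
      service_name: neutron_ovs_agent
      step_config: |
        include ::tripleo::profile::base::neutron::ovs
      service_validation:
        step_4: 'ovs-vsctl show'
```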

> > 2) Make this information readable and easy to access and understand
> > for our users.
> >
> > I have a proof-of-concept for 1) and partially 2), with the example
> > of
> > OVS: https://review.openstack.org/#/c/342202/
> > This patch will make sure OVS is actually usable at step 4 by
> > running 'ovs-vsctl show' during the Puppet catalog and if it's
> > working, it will create a Puppet anchor. This anchor is currently
> > not useful but could be in future if we want to rely on it for 
> > orchestration.
> > I wrote the service validation in Puppet 2 years ago when doing
> > Spinal Stack with eNovance:
> > https://github.com/openstack/puppet-openstacklib/blob/master/manifes
> > ts/service_validation.pp I think we could re-use it very easily, it
> > has been proven to work.
> > Also, the code is within our Puppet profiles, so it's by design
> > composable and we don't need to make any connection with our current
> > services with some magic. Validation will reside within Puppet
> > manifests.
> > If you look my PoC, this code could even live in puppet-vswitch
> > itself (we already have this code for puppet-nova, and some others).

I think having the validations inside the puppet implementation is OK, but 
ideally I think we do want it to be part of the puppet modules themselves (not 
part of the puppet-tripleo abstraction layer).

The issue I'd have with putting it in puppet-tripleo is that if we're going to 
do this in a tripleo-specific way, it should probably be done via a method 
that's more config-tool agnostic. Otherwise we'll have to recreate the same 
validations for future implementations (I'm thinking specifically about 
containers here, and possibly ansible [1]).

So, in summary, I'm +1 on getting this integrated if it can be done with little 
overhead and it's something we can leverage via the puppet modules vs 
puppet-tripleo.

> >
> > Ok now, what if validation fails?
> > I'm testing it here: https://review.openstack.org/#/c/342205/
> > If you look at /var/log/messages, you'll see:
> >
> > Error:
> > /Stage[main]/Tripleo::Profile::Base::Neutron::Ovs/Openstacklib::Serv
> > ice_validation[openvswitch]/Exec[execute
> > openvswitch validation]/returns: change from notrun to 0 failed
> >
> > So it's pretty clear by looking at the logs that the openvswitch service
> > validation failed and something is wrong. You'll also notice in the
> > logs that deployment stopped at step 4 since OVS is not considered to
> > be running.
> > It's partially addressing 2) because we need to make it more
> > explicit and readable. Dan Prince had the idea to use
> > https://github.com/ripienaar/puppet-reportprint to print a nice
> > report of Puppet catalog result (we haven't tried it yet). We could
> > also use 

Re: [openstack-dev] [TripleO] TripleO deep dive hour?

2016-07-10 Thread Arkady_Kanevsky
+2

From: Emilien Macchi [mailto:emil...@redhat.com]
Sent: Tuesday, June 28, 2016 6:01 PM
To: OpenStack Development Mailing List 
Subject: Re: [openstack-dev] [TripleO] TripleO deep dive hour?


Excellent idea, it would also be a good opportunity to take notes and improve 
our documentation.

---
Emilien Macchi

On Jun 28, 2016 6:24 PM, "Qasim Sarfraz" wrote:
+2, that would be great.

On Wednesday, June 29, 2016, James Slagle wrote:
We've got some new contributors around TripleO recently, and I'd like
to offer up a "TripleO deep dive hour".

The idea is to spend 1 hour a week in a high bandwidth environment
(Google Hangouts / Bluejeans / ???) to deep dive on a TripleO related
topic. The topic could be anything TripleO related, such as general
onboarding, CI, networking, new features, etc.

I'm by no means an expert on all those things, but I'd like to
facilitate the conversation and I'm happy to lead the first few
"dives" and share what I know. If it proves to be a popular format,
hopefully I can convince some other folks to lead discussions on
various topics.

I think it'd be appropriate to record these sessions so that what is
discussed is available to all. However, I don't intend these to be a
presentation format, and instead more of a Q&A discussion. If I don't
get any ideas for topics though, I may choose to prepare something to
present :).

Our current meeting time of day at 1400 UTC seems to suit a lot of
folks, so how about 1400 UTC on Thursdays? If folks think this is
something that would be valuable and want to do it, we could start
next Thursday, July 7th.


--
-- James Slagle
--



--
Regards,
Qasim Sarfraz




Re: [openstack-dev] [all] Proposal: Architecture Working Group

2016-06-30 Thread Arkady_Kanevsky
+1

-Original Message-
From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
Sent: Monday, June 20, 2016 10:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] Proposal: Architecture Working Group

+1, great idea.

If we can add a mission/objective based on the nice definitions you added, it will 
help a long way in cross-project architecture evolution.
Moreover, I'd like this to be an integration point for OpenStack projects (and 
not a silo) so that we can build the shared understanding we really need to 
build.

On 6/17/16 5:52 PM, Clint Byrum wrote:
> ar·chi·tec·ture
> ˈärkəˌtek(t)SHər/
> noun
> noun: architecture
>
> 1.
>
> the art or practice of designing and constructing buildings.
>
> synonyms: building design, building style, planning, building,
> construction;
>
> formal: architectonics
>
> "modern architecture"
>
> the style in which a building is designed or constructed, especially with 
> regard to a specific period, place, or culture.
>
> plural noun: architectures
>
> "Victorian architecture"
>
> 2.
>
> the complex or carefully designed structure of something.
>
> "the chemical architecture of the human brain"
>
> the conceptual structure and logical organization of a computer or 
> computer-based system.
>
> "a client/server architecture"
>
> synonyms: structure, construction, organization, layout, design,
> build, anatomy, makeup;
>
> informal: setup
>
> "the architecture of a computer system"
>
>
> Introduction
> ============
>
> OpenStack is a big system. We have debated what it actually is [1],
> and there are even t-shirts to poke fun at the fact that we don't have
> good answers.
>
> But this isn't what any of us wants. We'd like to be able to point at
> something and proudly tell people "This is what we designed and
> implemented."
>
> And for each individual project, that is a possibility. Neutron can
> tell you they designed how their agents and drivers work. Nova can
> tell you that they designed the way conductors handle communication
> with API nodes and compute nodes. But when we start talking about how
> they interact with each other, it's clearly just a coincidental mash
> of de-facto standards and specs that don't help anyone make decisions
> when refactoring or adding on to the system.
>
> Oslo and cross-project initiatives have brought some peace and order
> to the implementation and engineering processes, but not to the design
> process. New ideas still start largely in the project where they are
> needed most, and often conflict with similar decisions and ideas in
> other projects [dlm, taskflow, tooz, service discovery, state
> machines, glance tasks, messaging patterns, database patterns, etc.
> etc.]. Often times this creates a log jam where none of the projects
> adopt a solution that would align with others. Most of the time when
> things finally come to a head these things get done in a piecemeal
> fashion, where it's half done here,
> 1/3 over there, 1/4 there, and 3/4 over there..., which to the outside
> looks like chaos, because that's precisely what it is.
>
> And this isn't always a technical design problem. OpenStack, for
> instance, isn't really a micro service architecture. Of course, it
> might look like that in diagrams [2], but we all know it really isn't.
> The compute node is home to agents for every single concern, and the
> API interactions between the services is too tightly woven to consider
> many of them functional without the same lockstep version of other
> services together. A game to play is ask yourself what would happen if
> a service was isolated on its own island, how functional would its API
> be, if at all. Is this something that we want? No. But there doesn't
> seem to be a place where we can go to actually design, discuss,
> debate, and ratify changes that would help us get to the point of
> gathering the necessary will and capability to enact these efforts.
>
> Maybe nova-compute should be isolated from nova, with an API that
> nova, cinder and neutron talk to. Maybe we should make the scheduler
> cross-project aware and capable of scheduling more than just nova
> instances. Maybe we should have experimental groups that can look at
> how some of this functionality could perhaps be delegated to
> non-openstack projects. We hear about Mesos, for example, to help with
> the scheduling aspects, but how do we discuss these without hijacking
> threads on the mailing list? These are things that we all discuss in
> the hallways and bars and parties at the summit, but because they
> cross projects at the design level, and are inherently a lot of social
> and technical and exploratory work, Many of us fear we never get to a
> place of turning our dreams into reality.
>
> So, with that, I'd like to propose the creation of an Architecture
> Working Group. This group's charge would not be design by committee,
> but a place for architects to share their designs and gain support
> across projects to 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-30 Thread Arkady_Kanevsky
There is a version of Tempest that is released as part of each OpenStack release.
I agree with Mark that we should stick to version parity.

-Original Message-
From: Mark Voelker [mailto:mvoel...@vmware.com]
Sent: Monday, June 20, 2016 8:25 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tempest][nova][defcore] Add option to disable 
some strict response checking for interop testing


> On Jun 20, 2016, at 8:46 AM, Doug Hellmann wrote:
>
> Excerpts from Mark Voelker's message of 2016-06-16 20:33:36 +:
>
>> On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
>>
>>
>>> I don't think DefCore actually needs to change old versions of
>>> Tempest, but maybe Chris or Mark can verify that?
>>
>> So if I'm grokking this correctly, there's kind of two scenarios being
>> painted here. One is the "LCD" approach where we use the
>> $osversion-eol version of Tempest, where $osversion matches the
>> oldest version covered in a Guideline. The other is to use the
>> start-of-$osversion version of Tempest where $osversion is the
>> OpenStack version after the most recent one in the Guideline. The
>> former may result in some fairly long-lived flags, and the latter is
>> actually not terribly different than what we do today I think.
>> Let me try to talk through both...
>>
>> In some cases, tests get flagged in the Guidelines because of bugs in
>> the test or because the test needs refactoring. The underlying
>> Capabilities those tests are testing actually work fine. Once we
>> identify such an issue, the test can be fixed...in master. Under the
>> first scenario, this potentially creates some very long-lived flags:
>>
>> 2016.01, the most current Guideline right now, covers Juno, Kilo, and
>> Liberty (and Mitaka after it was released). It's one of the two
>> Guidelines that you can use if you want an OpenStack Powered license
>> from the Foundation. Say $vendor wants to run it against their shiny new
>> Mitaka cloud. They run the Juno-EOL version of Tempest (tag=8), they
>> find a test issue, and we flag it. A few weeks later, a fix lands in
>> Tempest. Several months later the next Guideline rolls
>> around: the oldest covered release is Kilo and we start telling
>> people to use the Kilo-EOL version of Tempest. That doesn't have the
>> fix, so the flag stays. Another six months goes by and we get a
>> Guideline and we're up to the Liberty-EOL version of Tempest. No
>> fix, flag stays. Six more months, and now we're at Mitaka-EOL, and
>> that's the first version that includes the fix.
>>
>> Generally speaking long lived flags aren't so great because it means
>> the tests are not required...which means there's less or no assurance
>> that the capabilities they test for actually work in the clouds that
>> adhere to those Guidelines. So, the worst-case scenario here looks
>> kind of ugly.
>>
>> As Matt correctly pointed out though, the capabilities DefCore
>> selects for are generally pretty stable APIs that are long-lived
>> across many releases, so we haven't run into a lot of issues running
>> pretty new versions of Tempest against older clouds to date. In fact
>> I'm struggling to think of a time we've flagged something because
>> someone complained the test wasn't runnable against an older release
>> covered by the Guideline in question. I can think of plenty of times
>> where we've flagged something due to a test issue though...keep in mind
>> we're still in pretty formative times with DefCore here where these
>> tests are starting to be used in a new way for the first time.
>> Anyway, as Matt points out we could potentially use a much newer
>> Tempest tag: tag=11 (which is the start of Newton development and is
>> a roughly 2 month old version of Tempest). Next Guideline rolls
>> around, we use the tag for start-of-ocata, and we get the fix and can
>> drop the flag.
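
The flag lifetime described above can be stated mechanically: under the
pin-to-oldest-covered-release policy, a flag can only be dropped once the
Tempest version tied to the Guideline's oldest covered release contains the
fix. A toy sketch of that condition follows; the release list and the function
are illustrative, not actual DefCore tooling:

```python
# Ordered release names from the example above (oldest to newest).
RELEASES = ["juno", "kilo", "liberty", "mitaka", "newton"]


def flag_can_drop(oldest_covered, fix_landed_in):
    """True once the Guideline's pinned Tempest contains the fix.

    A Guideline runs the Tempest version EOL'd with its oldest covered
    release, so a fix only reaches the Guideline once the release the
    fix landed in becomes the oldest covered one.
    """
    return RELEASES.index(oldest_covered) >= RELEASES.index(fix_landed_in)


# Walking the worst case above: the fix landed during Mitaka development,
# so three successive Guidelines still carry the flag.
for oldest in ["juno", "kilo", "liberty", "mitaka"]:
    print(oldest, flag_can_drop(oldest, "mitaka"))
# prints: juno False, kilo False, liberty False, mitaka True
```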
>>
>> Today, the RefStack client by default checks out a specific SHA of
>> Tempest [1] (it actually did use a tag at some point in the past, and
>> still can). When we see a fix for a flagged test go in, we or the
>> Refstack folks can do a quick test to make sure everything's in order
>> and then update that SHA to match the version with the fix. That way
>> we're relatively sure we have a version that works today, and will
>> work when we drop the flag in the next Guideline too. When we
>> finalize that next Guideline, we also update the test-repositories
>> section of the new Guideline that Matt pointed to earlier to reflect
>> the best-known version on the day the Guideline was sent to the Board
>> for approval. One added benefit of this approach is that people
>> running the tests today may get a version of Tempest that includes a
>> fix for a flagged test. A flagged test isn't required, but it does
>> get run, and now will show a passing result, so we have data that says
>> "this provider actually does support this capability (even though
>> it's flagged), and the test does indeed seem to be 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-29 Thread Arkady_Kanevsky
Chris,
If we add OpenStack config to the RefStack submission, will that provide
sufficient info for the "interoperability" logo?
That would include the version of APIs for each service.
https://review.openstack.org/#/c/300057/

Thanks,
Arkady

-Original Message-
From: Chris Hoge [mailto:ch...@openstack.org]
Sent: Wednesday, June 22, 2016 1:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tempest][nova][defcore] Add option to disable 
some strict response checking for interop testing


> On Jun 22, 2016, at 11:24 AM, Sean Dague wrote:
>
> On 06/22/2016 01:59 PM, Chris Hoge wrote:
>>
>>> On Jun 20, 2016, at 5:10 AM, Sean Dague wrote:
>>>
>>> On 06/14/2016 07:19 PM, Chris Hoge wrote:

> On Jun 14, 2016, at 3:59 PM, Edward Leafe wrote:
>
> On Jun 14, 2016, at 5:50 PM, Matthew Treinish wrote:
>
>> But, if we add another possible state on the defcore side like
>> conditional pass, warning, yellow, etc. (the name doesn't matter)
>> which is used to indicate that things on product X could only
>> pass when strict validation was disabled (and be clear about
>> where and why) then my concerns would be alleviated.
>> I just do
>> not want this to end up not being visible to end users trying to
>> evaluate interoperability of different clouds using the test
>> results.
>
> +1
>
> Don't fail them, but don't cover up their incompatibility, either.
> -- Ed Leafe

 That's not my proposal. My requirement is that vendors who want to
 do this state exactly which APIs are sending back additional data,
 and that this information be published.

 There are different levels of incompatibility. A response with
 additional data that can be safely ignored is different from a
 changed response that would cause a client to fail.
>>>
>>> It's actually not different. It's really not.
>>>
>>> This idea that it's safe to add response data is based on an
>>> assumption that software versions only move forward. If you have a
>>> single deploy of software, that's fine.
>>>
>>> However as noted, we've got production clouds on Juno <-> Mitaka in
>>> the wild. Which means if we want to support horizontal transfer
>>> between clouds, the user experienced timeline might be start on a
>>> Mitaka cloud, then try to move to Juno. So anything added from Juno
>>> -> Mitaka without signaling has exactly the same client breaking
>>> behavior as removing attributes.
>>>
>>> Which is why microversions are needed for attribute adds.
>>
>> I'd like to note that Nova v2.0 is still a supported API, which as
>> far as I understand allows for additional attributes and extensions.
>> That Tempest doesn't allow for disabling strict checking when using a
>> v2.0 endpoint is a problem.
>>
>> The reporting of v2.0 in the Marketplace (which is what we do right
>> now) is also a signal to a user that there may be vendor additions to
>> the API.
>>
>> DefCore doesn't disallow the use of a 2.0 endpoint as part of the
>> interoperability standard.
>
> This is a point of confusion.
>
> The API definition did not allow that. The implementation of the API
> stack did.

And downstream vendors took advantage of that. We may not like it, but it's a 
reality in the current ecosystem.
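
For readers following along: the microversion signaling discussed above works
by the client pinning a version in a request header, so the response schema
cannot silently change between clouds of different ages. A minimal client-side
sketch follows; the token and default version are placeholders, and the header
name is the one Nova used for compute microversions:

```python
def microversion_headers(token, version="2.25"):
    """Build request headers that pin a Nova API microversion.

    A pinned client gets exactly the response schema of that
    microversion: attributes added in later versions are not returned,
    so moving the same client between a Mitaka and a Juno cloud cannot
    silently change the payload shape.
    """
    return {
        "X-Auth-Token": token,
        # Nova's compute microversion header; omitting it yields the
        # minimum supported version.
        "X-OpenStack-Nova-API-Version": version,
    }


# Hypothetical usage with an HTTP client (not executed here):
#   requests.get(endpoint + "/servers/detail",
#                headers=microversion_headers(token, "2.25"))
headers = microversion_headers("example-token")
```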

> In Liberty the v2.0 API is optionally provided by a different backend
> stack that doesn't support extensions.
> In Mitaka the default v2.0 API is on a non-extensions backend. In Newton
> the old backend is deleted.
>
> From Newton forward there is still a v2.0 API, but all the code hooks
> that provided facilities for extensions are gone.

It's really important that the current documentation reflect the code and 
intent of the dev team. As of writing this e-mail,

"* v2 (SUPPORTED) and v2 extensions (SUPPORTED) (Will be deprecated in the near 
future.)"[1]

Even with this being removed in Newton, DefCore still has to allow for it in 
every supported version.

-Chris

[1] http://docs.openstack.org/developer/nova/

> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [refstack] configuration user story

2016-03-31 Thread Arkady_Kanevsky
Per our discussion at the midcycle,
I have submitted a user story for configuration info use cases.
https://review.openstack.org/#/c/300057/
Looking forward to reviews.
Once the user story settles we will start work on blueprints.
Thanks,
Arkady

Arkady Kanevsky, Ph.D.
Director of SW Development
Dell ESG
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 723 5264



[openstack-dev] [Rally][Product] Rally roadmap slide updated

2016-03-25 Thread Arkady_Kanevsky
Team,
I have updated the slide based on Rally team feedback:
https://docs.google.com/presentation/d/1GmJ2iaDfLkQda_TGxHpFEEIQgrm_scvDsvep_wXVeUQ/edit#slide=id.g84f9afb4d_0_0

Thanks,
Arkady


Re: [openstack-dev] [tripleo] Contributing to TripleO is challenging

2016-03-19 Thread Arkady_Kanevsky
Emilien,
Agree on the rant, but the concrete proposal to fix it is not clear.
"Spend more time fixing CI and use Tempest as a gate" is a bit vague.
Unless we test a known working version of each project in TripleO CI, you are
dependent on the health of other components.
Thanks,
Arkady

-Original Message-
From: Emilien Macchi [mailto:emil...@redhat.com]
Sent: Friday, March 04, 2016 8:23 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [tripleo] Contributing to TripleO is challenging

That's not the name of any Summit's talk, it's just an e-mail I wanted to write 
for a long time.

It is an attempt to expose facts or things I've heard a lot, and bring
constructive thoughts about why it's challenging to contribute to the TripleO
project.


1/ "I don't review this patch, we don't have CI coverage."

One thing I've noticed in TripleO is that very few people are involved in CI
work.
In my opinion, CI system is more critical than any feature in a product.
Developing software without tests is a bit like http://goo.gl/OlgFRc. All
people in the project, especially cores, should be involved in CI work. If you
are a TripleO core and you don't contribute to CI, you might ask yourself why.


2/ "I don't review this patch, CI is broken."

Another thing I've noticed in TripleO is that when CI is broken, again, very
few people are actually working on fixing the failures.
My experience over the last years taught me to stop my daily work when CI is 
broken and fix it asap.


3/ "I don't review it, because this feature / code is not my area".

My first thought is "Aren't we supposed to be engineers and learn new areas?"
My second thought is that I think we have a problem with TripleO Heat
Templates. THT, or TripleO Heat Templates, is 80% Puppet / Hiera code. If
TripleO cores say "I'm not familiar with Puppet", we have a problem here,
don't we?
Maybe we should split this repository? Or revisit the list of people who can
+2 patches on THT.


4/ Patches are stalled. Most of the time.

Over the last 12 months, I've pushed a lot of patches in TripleO, and one
thing I've noticed is that if I don't ping people, my patch gets no review.
And I have to rebase it every week, because the interface changed. I got a +2,
cool! Oh, merge conflict. Rebasing. Waiting for +2 again... and so on.

I personally spent 20% of my time to review code, every day.
I wrote a blog post about how I'm doing review, with Gertty:
http://my1.fr/blog/reviewing-puppet-openstack-patches/
I suggest TripleO folks spend more time on reviews, for several reasons:

* decreasing frustration from contributors
* accelerate development process
* teach new contributors to work on TripleO, and eventually scale-up the core 
team. It's a time investment, but worth it.

In Puppet team, we have weekly triage sessions and it's pretty helpful.


5/ Most of the tests are run... manually.

How many times I've heard "I've tested this patch locally, and it does not work 
so -1".

The only test we do in current CI is a ping to an instance. Seriously?
Most OpenStack CIs (Fuel included) run Tempest, testing APIs and real
scenarios. And we run a ping.
That's similar to 1/ but I wanted to raise it too.



If we don't change our way to work on TripleO, people will be more frustrated 
and reduce contributions at some point.
I hope from here we can have an open and constructive discussion to try to
improve the TripleO project.

Thank you for reading so far.
--
Emilien Macchi


Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-02 Thread Arkady_Kanevsky
Rally is not part of the gate.
Also, performance testing without 3rd-party CI will not be very useful.
It is a good idea to run Rally performance and scenario testing, but outside
the gate process.

From: Ivan Kolodyazhny [mailto:e...@e0ne.info]
Sent: Wednesday, March 02, 2016 8:36 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [cinder] Proposal: changes to our current testing 
process

Eric,

There are Gorka's patches [10] to remove API Races


[10] 
https://review.openstack.org/#/q/project:openstack/cinder+branch:master+topic:fix/api-races-simplified

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Wed, Mar 2, 2016 at 4:27 PM, Eric Harney wrote:
On 03/02/2016 06:25 AM, Ivan Kolodyazhny wrote:
> Hi Team,
>
> Here are my thoughts and proposals how to make Cinder testing process
> better. I won't cover "3rd party CI's" topic here. I will share my opinion
> about current and feature jobs.
>
>
> Unit-tests
>
>- Long-running tests. I hope everybody will agree that unit tests must
>be quite simple and very fast. Unit tests which take more than 3-5 seconds
>should be refactored and/or moved to 'integration' tests.
>Thanks to Tom Barron for several fixes like [1]. IMO, it would be
>good to have some hacking checks to prevent such issues in the future.
>
>- Test coverage. We don't check it in an automatic way on gates.
>Usually, we require adding some unit tests during the code review process.
>Why can't we add a coverage job to our CI and not merge new patches which
>would decrease the test coverage rate? Maybe such a job could be voting in
>the future so it isn't ignored. For now, there is no simple way to check
>coverage because 'tox -e cover' output is not useful [2].
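
One lightweight alternative to a hacking check for the slow-test point above
is to enforce a per-test time budget at runtime. A sketch follows; the
decorator name and the 3-second limit are illustrative, not existing Cinder
code:

```python
import functools
import time


def time_limited(max_seconds=3.0):
    """Fail a unit test that exceeds its time budget.

    Mirrors the 3-5 second rule of thumb above: anything slower should
    be refactored or promoted to the integration test suite.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            result = fn(*args, **kwargs)
            elapsed = time.monotonic() - start
            if elapsed > max_seconds:
                raise AssertionError(
                    "%s took %.2fs (limit %.2fs); consider moving it to "
                    "the integration tests" % (fn.__name__, elapsed,
                                               max_seconds))
            return result
        return wrapper
    return decorator


@time_limited(max_seconds=3.0)
def test_volume_math():
    # Placeholder for a real unit test body.
    assert 2 + 2 == 4
```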
>
>
> Functional tests for Cinder
>
> We introduced some functional tests last month [3]. Here is a patch to
> infra to add a new job [4]. Because these tests were moved from unit tests,
> I think we're OK to make this job voting. Such tests should not be a
> replacement for Tempest. They could even test Cinder with the Fake Driver
> to make them faster and not dependent on storage backend issues.
>
>
> Tempest in-tree tests
>
> Sean started work on it [5] and I think it's a good idea to get them in
> Cinder repo to run them on Tempest jobs and 3rd-party CIs against a real
> backend.
>
>
> Functional tests for python-brick-cinderclient-ext
>
> There are patches that introduces functional tests [6] and new job [7].
>
>
> Functional tests for python-cinderclient
>
> We've got a very limited set of such tests and a non-voting job. IMO, we can
> run them even with the Cinder Fake Driver to make them not dependent on a
> storage backend and make them faster. I believe we can make this job voting
> soon. Also, we need more contributors to this kind of tests.
>
>
> Integrated tests for python-cinderclient
>
> We need such tests to make sure that we won't break Nova, Heat or other
> python-cinderclient consumers with the next merged patch. There is a thread
> in openstack-dev ML about such tests [8] and proposal [9] to introduce them
> to python-cinderclient.
>
>
> Rally tests
>
> IMO, it would be good to have new Rally scenarios for every patch like
> 'improves performance', 'fixes concurrency issues', etc. Even if we as a
> Cinder community don't have enough time to implement them, we have to ask
> for them in reviews, openstack-dev ML, file Rally bugs and blueprints if
> needed.
>
Are there any recent examples of a fix like this recently where it would
seem like a reasonable task to write a Rally scenario along with the patch?

Not being very familiar with Rally (as I think most of us aren't), I'm
having a hard time picturing this.

>
> [1] https://review.openstack.org/#/c/282861/
> [2] http://paste.openstack.org/show/488925/
> [3] https://review.openstack.org/#/c/267801/
> [4] https://review.openstack.org/#/c/287115/
> [5] https://review.openstack.org/#/c/274471/
> [6] https://review.openstack.org/#/c/265811/
> [7] https://review.openstack.org/#/c/265925/
> [8]
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088027.html
> [9] https://review.openstack.org/#/c/279432/
>
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
>
>
>



Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-21 Thread Arkady_Kanevsky
With Nova and Keystone both at v3, it helps to have consistent versioning
across all projects.
We still need documentation for transitioning clients from one API version to
the next.
With new functionality not available in the previous version, it should be
easier than API changes.


-Original Message-
From: Walter A. Boring IV [mailto:walter.bor...@hpe.com]
Sent: Friday, February 19, 2016 4:18 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] adding a new /v3 endpoint for 
api-microversions


>> But, there are no such clients today. And there is no library that
>> does this yet. It will be 4 - 6 months (or even more likely 12+)
>> until that's in the ecosystem. Which is why adding the header
>> validation to existing
>> v2 API, and backporting to liberty / kilo, will provide really
>> substantial coverage for the concern the bswartz is bringing forward.
> Yeah, I have to agree with that. We can certainly have the protection
> out in time.
>
> The only concern there is the admin who set up his Kilo initial
> release cloud and doesn't want to touch it for updates. But they
> likely have more pressing issues than this any way.
>
>> -Sean
>>
>>

Not that I'm adding much to this conversation that hasn't been said already,
but I am pro v2 API, purely because of how painful and long it's been to get
the official OpenStack projects to adopt the v2 API from v1. I know we need to
be sort of concerned about other 'clients'
that call the API, but for me that's way down the list of concerns.
If we go to a v3 API, most likely it's going to be another 3+ years before
folks can use the new Cinder features that the microversioned changes will
provide. This in effect invalidates the microversion capability in Cinder's
API completely.

/sadness
Walt
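
The server-side header validation discussed in the quoted text (rejecting
microversion requests on deployments that cannot honor them) could look
roughly like the sketch below. The header name follows the
OpenStack-API-Version convention, and the error handling is simplified: a real
service would return an HTTP error rather than raise ValueError.

```python
def validate_version_header(headers, supported=None):
    """Reject microversion requests on an endpoint that cannot honor them.

    Backporting a check like this to older, v2-only deployments means a
    client asking for a microversion gets an explicit error instead of a
    silently ignored header.
    """
    requested = headers.get("OpenStack-API-Version")
    if requested is None:
        return None  # legacy client, nothing to validate
    if supported is None:
        raise ValueError(
            "this endpoint does not support microversions: %s" % requested)
    if requested not in supported:
        raise ValueError("unsupported microversion: %s" % requested)
    return requested
```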



Re: [openstack-dev] [TripleO] Stable branch policy for Mitaka

2016-02-15 Thread Arkady_Kanevsky
I like any formal documented process.
Having only bug fixes in stable releases is a good and consistent stance.

But we are bumping into a fundamental issue of the integrated release.
By the time a release comes out, the new functionality of Nova, Neutron, or
most other components does not have TripleO Heat templates that deploy and
configure that functionality for the overcloud. So as soon as somebody tries
to use TripleO (or RDO Manager, OSP Director, or any other derivative) they
try to remedy this shortcoming. Being good OpenStack citizens, these folks
submit Heat templates upstream. We can state that we will not take them into a
stable release, but that will push this community away from upstream and they
will diverge their deployments from it. Or they will pound on their OpenStack
provider (if it is not upstream) for a place to maintain the templates for
them. Either way it is ugly, and it will make upgrades and updates for these
customers a nightmare.

TripleO is not unique in this. Horizon, and others, have it in spades.

I do not know if we will be able to fix it in OpenStack itself or whether we
rely on distros to handle it.
But we will need some place for users to provide their fixes and extensions
of TripleO's current capabilities, to support existing configurations and new
functionality of the released core OpenStack.

Thanks,
Arkady

-Original Message-
From: Steven Hardy [mailto:sha...@redhat.com]
Sent: Monday, February 15, 2016 3:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Stable branch policy for Mitaka

On Wed, Feb 10, 2016 at 07:05:41PM +0100, James Slagle wrote:
> On Wed, Feb 10, 2016 at 4:57 PM, Steven Hardy wrote:
>
> Hi all,
>
> We discussed this in our meeting[1] this week, and agreed a ML
> discussion
> to gain consensus and give folks visibility of the outcome would be a
> good
> idea.
>
> In summary, we adopted a more permissive "release branch" policy[2] for
> our
> stable/liberty branches, where feature backports would be allowed,
> provided
> they worked with liberty and didn't break backwards compatibility.
>
> The original idea was really to provide a mechanism to "catch up" where
> features are added e.g to liberty OpenStack components late in the cycle
> and TripleO requires changes to integrate with them.
>
> However, the reality has been that the permissive backport policy has
> been
> somewhat abused (IMHO) with a large number of major features being
> proposed
> for backport, and in a few cases this has broken downstream (RDO)
> consumers
> of TripleO.
>
> Thus, I would propose that from Mitaka, we revise our backport policy to
> simply align with the standard stable branch model observed by all
> projects[3].
>
> Hopefully this will allow us to retain the benefits of the stable branch
> process, but provide better stability for downstream consumers of these
> branches, and minimise confusion regarding what is a permissable
> backport.
>
> If we do this, only backports that can reasonably be considered
> "Appropriate fixes"[4] will be valid backports - in the majority of
> cases
> this will mean bugfixes only, and large features where the risk of
> regression is significant will not be allowed.
>
> What are peoples thoughts on this?
>
> I'm in agreement. I think this change is needed and will help set
> better expectations around what will be included in which release.
>
> If we adopt this as the new policy, then the immediate followup is to set
> and communicate when we'll be cutting the stable branches, so that it's
> understood when the features have to be done/committed. I'd suggest that
> we more or less completely adopt the integrated release schedule[1]. Which
> I believe means the week of RC1 for cutting the stable/mitaka branches,
> which is March 14th-18th.
>
> It seems to follow logically then that we'd then want to also be more
> aggresively aligned with other integrated release events such as the
> feature freeze date, Feb 29th - March 4th.

Yes, agreeing a backport policy is the first step, and aligning all our release 
policies with the rest of OpenStack is the logical next step.

> An alternative to strictly following the schedule, would be to say that
> TripleO lags the integrated release dates by some number of weeks (1 or 2
> I'd think), to allow for some "catchup" time since TripleO is often
> consuming features from projects part of the integrated release.

The risk with this approach is there remains some confusion about our 
deadlines, and there is an increased risk that our 1-2 weeks window slips and 
we end up with a similar problem to that which we have now.

I'd propose we align with whatever schedule the puppet community observes, 
given that (with out current implementation at least), it's unlikely we can 
land any features actually related to new-feature-in-$service type patches 
without that feature already having support in the puppet modules?

Perhaps we can seek out some guidance from Emilien, as 

Re: [openstack-dev] Proposal: Separate design summits from OpenStack conferences

2016-02-15 Thread Arkady_Kanevsky
I think this will be very detrimental to the development community.
The best feedback we get is from the user/customer community, who are at the
summit but most likely will not attend a separate design summit.

I will ignore the financial implications of 2 separate summits.

-Original Message-
From: Eoghan Glynn [mailto:egl...@redhat.com]
Sent: Monday, February 15, 2016 3:36 AM
To: James Bottomley
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tc] Proposal: Separate design summits from 
OpenStack conferences



> > Honestly I don't know of any communication between two cores at a +2
> > party that couldn't have just as easily happened surrounded by other
> > contributors. Nor, I hope, does anyone put in the substantial
> > reviewing effort required to become a core in order to score a few
> > free beers and see some local entertainment. Similarly for the TC,
> > one would hope that dinner doesn't figure in the system incentives
> > that drives folks to throw their hat into the ring.
>
> Heh, you'd be surprised.
>
> I don't object to the proposal, just the implication that there's
> something wrong with parties for specific groups: we did abandon the
> speaker party at Plumbers because the separation didn't seem to be
> useful and concentrated instead on doing a great party for everyone.
>
> > In any case, I've derailed the main thrust of the discussion here,
> > which I believe could be summed up by:
> >
> > "let's dial down the glitz a notch, and get back to basics"
> >
> > That sentiment I support in general, but I'd just be more selective
> > as to which social events should be first in line to be culled in
> > order to create a better atmosphere at summit.
> >
> > And I'd be far more concerned about getting the choice of location,
> > cadence, attendees, and format right, than in questions of who
> > drinks with whom.
>
> OK, so here's a proposal, why not reinvent the Cores party as a Meet
> the Cores Party instead (open to all design summit attendees)? Just
> make sure it's advertised in a way that could only possibly appeal to
> design summit attendees (so the suits don't want to go), use the same
> buget (which will necessitate a dial down) and it becomes an inclusive
> event that serves a useful purpose.

Sure, I'd be totally agnostic on the branding as long as the widest audience is 
invited ... e.g. all ATCs, or even all summit attendees.

Actually that distinction between ATCs and other attendees just sparked another 
thought ...

Traditionally all ATCs earn a free pass for summit, whereas the other attendees 
pay $600 or more for entry. I'm wondering if (a) there's some 
cross-subsidization going on here and (b) if the design summit was cleaved off, 
would the loss of the income from the non-ATCs sound the death-knell for the 
traditional ATC free pass?

From my PoV, that would not be an excellent outcome. Someone with better
visibility on the financial structure of summit funding might be able to
clarify that.

Cheers,
Eoghan





Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Arkady_Kanevsky
Either 1 or 3.
2 does not solve anything.

-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com]
Sent: Friday, January 15, 2016 10:01 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] Proposal: copyright-holders file in each 
project, or copyright holding forced to the OpenStack Foundation

On 01/15/2016 10:38 AM, Daniel P. Berrange wrote:
> On Fri, Jan 15, 2016 at 08:48:21PM +0800, Thomas Goirand wrote:
>> This isn't the first time I'm calling for it. Let's hope this time,
>> I'll be heard.
>>
>> Randomly, contributors put their company names into source code. When
>> they do, then effectively, this tells that a given source file
>> copyright holder is whatever is claimed, even though someone from
>> another company may have patched it.
>>
>> As a result, we have a huge mess. It's impossible for me, as a
>> package maintainer, to accurately set the copyright holder names in
>> the debian/copyright file, which is a required by the Debian FTP masters.
>
> I don't think OpenStack is in a different situation to the vast
> majority of open source projects I've worked with or seen. Except for
> those projects requiring copyright assignment to a single entity, it
> is normal for source files to contain an unreliable random splattering
> of Copyright notices. This hasn't seemed to create a blocking problem
> for their maintenance in Debian. Looking at the debian/copyright
> files I see most of them have just done a grep for the 'Copyright'
> statements & included as is - IOW just ignored the fact that this is
> essentially worthless info and included it regardless.

Agree. debian/copyright should be a best effort - but it can only be as good as 
the input data available to the packager. Try getting an accurate 
debian/copyright file for the MySQL source tree at some point.
(and good luck)

>> I see 2 ways forward:
>> 1/ Require everyone to give-up copyright holding, and give it to the
>> OpenStack Foundation.
>> 2/ Maintain a copyright-holder file in each project.

> 3/ Do nothing, just populate debian/copyright with the random
> set of 'Copyright' lines that happen to be in the source files,
> as appears to be common practice across many Debian packages
>
> eg the kernel package
>
>
> http://metadata.ftp-master.debian.org/changelogs/main/l/linux/linux_3.
> 16.7-ckt11-1+deb8u3_copyright
>
> "Copyright: 1991-2012 Linus Torvalds and many others"
>
> if its good enough for the Debian kernel package, it should be
> good enough for openstack packages too IMHO.

I vote for 3
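
Option 3 amounts to harvesting whatever notices exist in the tree without
judging their accuracy. A sketch of such a harvest follows; the `.py` filter
and the pattern are simplifications, and a real debian/copyright generator
would do more normalization:

```python
import os
import re


def collect_copyright_lines(root):
    """Collect unique 'Copyright' lines from Python files under root.

    This mirrors option 3 above: grep the notices that happen to be
    present, however unreliable, and include them as-is.
    """
    pattern = re.compile(r"Copyright.*")
    found = set()
    for dirpath, _dirs, files in os.walk(root):
        for fname in files:
            if not fname.endswith(".py"):
                continue
            path = os.path.join(dirpath, fname)
            with open(path, errors="ignore") as f:
                for line in f:
                    match = pattern.search(line)
                    if match:
                        found.add(match.group().strip())
    return sorted(found)
```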




Re: [openstack-dev] [TripleO] Is Swift a good choice of database for the TripleO API?

2016-01-06 Thread Arkady_Kanevsky
When you state Swift, I assume you mean TripleO will use Swift APIs to access
objects in it (templates, metadata, etc.).
There are many environments where there is no Swift, but Ceph or another
object store can be used for it via Swift APIs.

Will all of these be under a single user/project that TripleO creates for the
overcloud?
While a deployment is in progress or operational, the "files" will be
read-only.
That would work for production but not for development.

It feels like we are moving toward identity management for users of Heat
templates for TripleO.
Arkady

From: Jiri Tomasek [mailto:jtoma...@redhat.com]
Sent: Wednesday, January 06, 2016 5:27 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] Is Swift a good choice of database for 
the TripleO API?

On 01/06/2016 11:48 AM, Dougal Matthews wrote:


On 5 January 2016 at 17:09, Jiri Tomasek wrote:
On 12/23/2015 07:40 PM, Steven Hardy wrote:
On Wed, Dec 23, 2015 at 11:05:05AM -0600, Ben Nemec wrote:
On 12/23/2015 10:26 AM, Steven Hardy wrote:
On Wed, Dec 23, 2015 at 09:28:59AM -0600, Ben Nemec wrote:
On 12/23/2015 03:19 AM, Dougal Matthews wrote:

On 22 December 2015 at 17:59, Ben Nemec wrote:

 Can we just do git like I've been suggesting all along? ;-)

 More serious discussion inline. :-)

 On 12/22/2015 09:36 AM, Dougal Matthews wrote:
 > Hi all,
 >
 > This topic came up in the 2015-12-15 meeting[1], and again briefly
 today.
 > After working with the code that came out of the deployment library
 > spec[2] I
 > had some concerns with how we are storing the templates.
 >
 > Simply put, when we are dealing with 100+ files from
 tripleo-heat-templates
 > how can we ensure consistency in Swift without any atomicity or
 > transactions.
 > I think this is best explained with a couple of examples.
 >
 >  - When we create a new deployment plan (upload all the templates
 to swift)
 >how do we handle the case where there is an error? For example,
 if we are
 >uploading 10 files - what do we do if the 5th one fails for
 some reason?
 >There is a patch to do a manual rollback[3], but I have
 concerns about
 >doing this in Python. If Swift is completely inaccessible for a
 short
 >period the rollback wont work either.
 >
 >  - When deploying to Heat, we need to download all the YAML files from
 > Swift.
 >This can take a couple of seconds. What happens if somebody
 starts to
 >upload a new version of the plan in the middle? We could end up
 trying to
 >deploy half old and half new files. We wouldn't have a
 consistent view of
 >the database.
 >
 > We had a few suggestions in the meeting:
 >
 >  - Add a locking mechanism. I would be concerned about deadlocks or
 > having to
 >lock for the full duration of a deploy.
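The first concern above -- a partial upload followed by a best-effort rollback -- can be sketched as below. This is illustrative only, not the tripleo-common code; the `swift` argument stands in for a python-swiftclient connection, and note that the rollback itself fails if Swift becomes unreachable, which is exactly the window the concern describes:

```python
class SwiftUnavailable(Exception):
    """Stand-in for a swiftclient error."""


def upload_plan(swift, container, files):
    """Upload every file of a plan; on any failure, attempt to roll
    back the objects uploaded so far.

    `swift` is assumed to expose put_object()/delete_object() in the
    style of python-swiftclient (a hypothetical simplification).
    """
    uploaded = []
    try:
        for name, contents in files.items():
            swift.put_object(container, name, contents)
            uploaded.append(name)
    except Exception:
        # Best-effort rollback: if Swift is unreachable here too, the
        # plan is left half-uploaded -- the inconsistency the thread
        # worries about.
        for name in uploaded:
            try:
                swift.delete_object(container, name)
            except Exception:
                pass
        raise
```

If the fifth of ten objects fails, the first four are deleted again and the error is re-raised, so the plan is either fully present or (Swift willing) fully absent.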

 There should be no need to lock the plan for the entire deploy.  It's
 not like we're re-reading the templates at the end of the deploy today.
  It's a one-shot read and then the plan could be unlocked, at least as
 far as I know.


Good point. That would be holding the lock for longer than we need.

 The only option where we wouldn't need locking at all is the
 read-copy-update model Clint mentions, which might be a valid option as
 well.  Whatever we do, there are going to be concurrency issues though.
  For example, what happens if two users try to make updates to the plan
 at the same time?  If you don't either merge the changes or disallow one
 of them completely then one user's changes might be lost.
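The read-copy-update model mentioned above could look roughly like this: every read returns the plan version it saw, and a commit only lands if that version is still current; otherwise the caller must re-read and retry (or merge). A minimal in-process sketch with hypothetical names, not TripleO code:

```python
import threading


class PlanConflict(Exception):
    """Raised when a commit was based on a stale plan version."""


class PlanStore:
    """Toy read-copy-update plan store: readers get a (version, copy)
    pair; a commit only lands if it was based on the current version."""

    def __init__(self, files=None):
        self._lock = threading.Lock()  # protects _files and _version
        self._files = dict(files or {})
        self._version = 0

    def read(self):
        """Return the current version and a private copy of the plan."""
        with self._lock:
            return self._version, dict(self._files)

    def commit(self, based_on, files):
        """Replace the plan atomically, or raise if it changed since
        `based_on` was read (optimistic concurrency control)."""
        with self._lock:
            if based_on != self._version:
                raise PlanConflict('plan changed since version %d'
                                   % based_on)
            self._files = dict(files)
            self._version += 1
            return self._version
```

Two users reading at the same version can both prepare changes, but only the first commit succeeds; the second gets a conflict and must re-read -- which is essentially the "rebase error" behavior described for the git approach.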

 TBH, this is further convincing me that we should just make this git
 backed and let git handle the merging and conflict resolution (never
 mind the fact that it gets us a well-understood version control system
 for "free").  For updates that don't conflict with other changes, git
 can merge them automatically, but for merge conflicts you just return a
 rebase error to the user and make them resolve it.  I have a feeling
 this is the behavior we'll converge on eventually anyway, and rather
 than reimplement git, let's just use the real thing.


I'd be curious to hear more how you would go about doing this with git. I've
never automated git to this level, so I am concerned about what issues we
might hit.
TBH I haven't thought it through to that extent yet.  I'm mostly
suggesting it because it seems like a fit for the template storage
requirements - we know we want version control, we want to be able to
merge changes from multiple sources, and we want some way to handle
merge conflicts.  Git does all of this already.

That said, I'm not sure about everything here.  

Re: [openstack-dev] Gerrit Upgrade 12/16

2015-12-17 Thread Arkady_Kanevsky
Hear, hear.
Even a trivial thing like the review submit button is hard to find. The new UI 
is much less intuitive than the old one.

From: Vikram Choudhary [mailto:viks...@gmail.com]
Sent: Thursday, December 17, 2015 3:59 AM
To: OpenStack Development Mailing List (not for usage questions) 
; n...@spencerkrum.com
Subject: Re: [openstack-dev] Gerrit Upgrade 12/16

Hi All,

The old interface was better. Is it possible to restore it?

Thanks
Vikram

On Tue, Dec 15, 2015 at 1:23 AM, Spencer Krum wrote:
This is a gentle reminder that the downtime will be this Wednesday
starting at 17:00 UTC.

Thank you for your patience,
Spencer

--
  Spencer Krum
  n...@spencerkrum.com

On Tue, Dec 1, 2015, at 10:19 PM, Stefano Maffulli wrote:
> On 12/01/2015 06:38 PM, Spencer Krum wrote:
> > There is a thread beginning here:
> > http://lists.openstack.org/pipermail/openstack-dev/2015-October/076962.html
> > which covers what to expect from the new software.
>
> Nice! This is awesome: the new review panel lets you edit files on the
> web interface. No more `git review -d` and subsequent commit to fix a
> typo. I think this is huge for documentation and all sort of nitpicking
> :)
>
> And while I'm at it, I respectfully bow to the infra team: keeping pace
> with frequent software upgrades at this size is no small feat and a rare
> accomplishment. Good job.
>
> /stef
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [cinder] Dependencies of snapshots on volumes

2015-12-09 Thread Arkady_Kanevsky
You can do a lazy copy that happens only when the volume or snapshot is deleted.
You will need to keep a refcount on the metadata.
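The lazy-copy idea amounts to reference counting the shared backing data: deleting either the volume or a snapshot just drops a reference, and the data is reclaimed when the last reference goes. A toy sketch with illustrative names, not a real Cinder driver:

```python
class BackingStore:
    """Toy reference-counted backing data for lazily shared
    volume/snapshot contents (hypothetical, not a real driver)."""

    def __init__(self):
        self._refs = {}  # backing-data id -> reference count

    def add_ref(self, data_id):
        """A volume or snapshot starts sharing this backing data."""
        self._refs[data_id] = self._refs.get(data_id, 0) + 1

    def release(self, data_id):
        """Drop one reference; return True when the backing data can
        really be reclaimed (the last reference is gone)."""
        self._refs[data_id] -= 1
        if self._refs[data_id] == 0:
            del self._refs[data_id]
            return True
        return False
```

With this shape, deleting a volume that still has snapshots only drops its reference; the actual copy/reclaim is deferred until the last dependent object goes away.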

-Original Message-
From: Li, Xiaoyan [mailto:xiaoyan...@intel.com]
Sent: Tuesday, December 08, 2015 10:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [cinder] Dependencies of snapshots on volumes

Hi all,

Currently, when deleting a volume, Cinder checks whether there are snapshots 
created from it; if so, deletion is prohibited. But it allows extending the 
volume without checking whether there are snapshots of it.

The two behaviors in Cinder are not consistent from my viewpoint.

In backend storage, the behaviors are the same.
For a full snapshot, while copying is still in progress, neither extend nor 
deletion is allowed. Once the snapshot copy finishes, both extend and deletion 
are allowed.
For an incremental snapshot, neither extend nor deletion is allowed.

As a result, this raises two concerns:
1. Make such operations behave consistently in Cinder.
2. I prefer to let the storage driver decide the dependencies, rather than the 
generic core code.

Meanwhile, if we let the driver decide the dependencies, the following change 
needs to be made in Cinder:
1. When creating a snapshot from a volume, all of the volume's metadata needs 
to be copied to the snapshot. Currently it isn't.
If you see any other potential issues, please let me know.

Any input will be appreciated.

Best wishes
Lisa




Re: [openstack-dev] [puppet] [tripleo] stop maintaining puppet-tuskar

2015-12-03 Thread Arkady_Kanevsky
Emilien,
So what is the statement we (OpenStack) want to provide about a UI for OOO?
If we do not provide one, people outside our community and competitors to 
OpenStack will fill the void.
Thanks,
Arkady

-Original Message-
From: Emilien Macchi [mailto:emil...@redhat.com]
Sent: Wednesday, December 02, 2015 10:38 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [puppet] [tripleo] stop maintaining puppet-tuskar

Hi,

I don't find any official statement on the Internet, but I've heard Tuskar is 
not going to be maintained anymore (tell me if I'm wrong).

If I'm not wrong, I suggest we stop maintaining puppet-tuskar, and 
stable/liberty would be the latest release that we have maintained. I would 
also drop all the code in master and update the README explaining the module is 
not maintained anymore.

Thanks for your help,
--
Emilien Macchi


Re: [openstack-dev] [ironic]Ironic operations on nodes in maintenance mode

2015-11-24 Thread Arkady_Kanevsky
Other use cases for a node in maintenance are:

* HW component replacement, e.g. a NIC or a disk

* FW upgrade/downgrade - we should be able to use the Ironic FW management 
API/CLI for it.

* HW configuration change, like re-provisioning a server or changing the RAID 
configuration. Again, we should be able to use the Ironic FW management API/CLI 
for it.

Thanks,
Arkady

-Original Message-
From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
Sent: Tuesday, November 24, 2015 9:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ironic]Ironic operations on nodes in maintenance 
mode

On Mon, Nov 23, 2015 at 03:35:58PM -0800, Shraddha Pandhe wrote:
> Hi,
>
> I would like to know how everyone is using maintenance mode and what
> is expected from admins about nodes in maintenance. The reason I am
> bringing up this topic is because, most of the ironic operations,
> including manual cleaning are not allowed for nodes in maintenance. That's a
> problem for us.
>
> The way we use it is as follows:
>
> We allow users to put nodes in maintenance mode (indirectly) if they
> find something wrong with the node. They also provide a maintenance
> reason along with it, which gets stored as "user_reason" under
> maintenance_reason. So basically we tag it as user specified reason.
>
> To debug what happened to the node our operators use manual cleaning
> to re-image the node. By doing this, they can find out all the issues
> related to re-imaging (dhcp, ipmi, image transfer, etc). This
> debugging process applies to all the nodes that were put in
> maintenance either by user, or by system (due to power cycle failure or due 
> to cleaning failure).

Interesting; do you let the node go through cleaning between the user nuking 
the instance and doing this manual cleaning stuff?

At Rackspace, we leverage the fact that maintenance mode will not allow the 
node to proceed through the state machine. If a user reports a hardware issue, 
we maintenance the node on their behalf, and when they delete it, it boots the 
agent for cleaning and begins heartbeating.
Heartbeats are ignored in maintenance mode, which gives us time to investigate 
the hardware, fix things, etc. When the issue is resolved, we remove 
maintenance mode, it goes through cleaning, then back in the pool.

We used to enroll nodes in maintenance mode, back when the API put them in the 
available state immediately, to avoid them being scheduled to until we knew 
they were good to go. The enroll state solved this for us.

Last, we use maintenance mode on available nodes if we want to temporarily pull 
them from the pool for a manual process or some testing. This can also be 
solved by the manageable state.

// jim
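The gating described above -- heartbeats ignored while a node is in maintenance, normal processing resuming once the flag clears -- reduces to a guard at the top of the heartbeat handler. A schematic sketch only; the function and field names are illustrative, not Ironic's actual conductor code:

```python
def handle_heartbeat(node, proceed):
    """Drop heartbeats from nodes in maintenance; otherwise let the
    node continue through the state machine via `proceed` (e.g. start
    cleaning)."""
    if node.get('maintenance'):
        # An operator set maintenance (with a reason); park the node
        # until the flag is cleared.
        return 'heartbeat ignored: %s' % node.get(
            'maintenance_reason', 'no reason given')
    return proceed(node)
```

While the operator investigates the hardware, the agent keeps heartbeating harmlessly; removing maintenance mode lets the very next heartbeat push the node through cleaning and back into the pool.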



Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-03 Thread Arkady_Kanevsky
Agree with Thierry.
Let's define the requirements.
We are trying to solve HA, not to scale to an infinite number of running 
Cinder instances.
Thanks,
Arkady

-Original Message-
From: Gorka Eguileor [mailto:gegui...@redhat.com]
Sent: Monday, August 03, 2015 3:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

On Mon, Aug 03, 2015 at 10:22:42AM +0200, Thierry Carrez wrote:
 Flavio Percoco wrote:
  [...]
  So, to summarize, I love the effort behind this. But, as others have
  mentioned, I'd like us to take a step back, run this accross teams
  and come up with an opinonated solution that would work for everyone.
 
  Starting this discussion now would allow us to prepare enough
  material to reach an agreement in Tokyo and work on a single
  solution for Mikata. This sounds like a good topic for a cross-project 
  session.

 +1

 The last thing we want is to rush a solution that would only solve a
 particular project use case. Personally I'd like us to pick the
 simplest solution that can solve most of the use cases. Each of the
 solutions bring something to the table -- Zookeeper is mature, Consul
 is featureful, etcd is lean and simple... Let's not dive into the best
 solution but clearly define the problem space first.

 --
 Thierry Carrez (ttx)


I don't see those as different solutions from the point of view of
Cinder, they are different implementations to the same solution case,
using a DLM to lock resources.

We keep circling back to the fancy names like moths to a flame, when we
are still discussing whether we need or want a DLM for the solution. I
think we should stop doing that, we need to decide on the solution from
an abstract point of view (like you say, define the problem space) and
not get caught up on discussions of which one of those is best. If we
end up deciding to use a DLM, which is unlikely, then we can look into
available drivers in Tooz and if we are not convinced with the ones we
have (Redis, ZooKeeper, etc.) then we discuss which one we should be
using instead and just add it to Tooz.
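Whichever backend wins, from Cinder's point of view the pattern is just "take a lock named after the resource, do the work, release it" -- which is what Tooz abstracts. The sketch below shows that shape with an in-process stand-in coordinator; a real deployment would obtain the coordinator from Tooz with a ZooKeeper/etcd/Redis URL instead (names here are illustrative, not Cinder code):

```python
import threading
from contextlib import contextmanager


class LocalCoordinator:
    """In-process stand-in for a DLM coordinator (the role Tooz plays
    with a ZooKeeper/etcd/Redis backend): get_lock() hands out one
    shared lock object per name."""

    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()

    def get_lock(self, name):
        with self._guard:
            return self._locks.setdefault(name, threading.Lock())


@contextmanager
def volume_locked(coordinator, volume_id):
    # Every service working on the same volume id contends on the same
    # named lock -- across nodes too, once a real DLM backend is used.
    with coordinator.get_lock('cinder-volume-%s' % volume_id):
        yield
```

The point of the abstraction is that the calling code never names ZooKeeper or etcd: it only names the resource, which is why the backend choice can be deferred.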

Cheers,
Gorka.



Re: [openstack-dev] [cinder] Some Changes to Cinder Core

2015-05-26 Thread Arkady_Kanevsky
Well deserved.
Congrats!

-Original Message-
From: yang, xing [mailto:xing.y...@emc.com]
Sent: Friday, May 22, 2015 8:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] Some Changes to Cinder Core

Definitely +1!

Thanks,
Xing


-Original Message-
From: Mike Perez [mailto:thin...@gmail.com]
Sent: Friday, May 22, 2015 7:34 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [cinder] Some Changes to Cinder Core

This is long overdue, but it gives me great pleasure to nominate Sean McGinnis 
for Cinder core.

Reviews:
https://review.openstack.org/#/q/reviewer:+%22Sean+McGinnis%22,n,z

Contributions:
https://review.openstack.org/#/q/owner:+%22Sean+McGinnis%22,n,z

30/90 day review stats:
http://stackalytics.com/report/contribution/cinder-group/30
http://stackalytics.com/report/contribution/cinder-group/90

As new contributors step up to help in the project, some move onto other 
things. I would like to recognize Avishay Traeger for his contributions, and 
now unfortunately departure from the Cinder core team.

Cinder core, please reply with a +1 for approval. This will be left open until 
May 29th. Assuming there are no objections, this will go forward after voting 
is closed.

--
Mike Perez



[openstack-dev] [RefStackl] - http://refstack.org/ - does not resolve

2015-05-06 Thread Arkady_Kanevsky
What are we doing to get the name resolved?
Meanwhile, what IP address can it be reached at?
Do we really expect people to submit results to that web site?
Thanks,
Arkady


Re: [openstack-dev] [tripleo] Building images separation and moving images into right place at right time

2015-04-17 Thread Arkady_Kanevsky
Dell - Internal Use - Confidential 

We need the ability for an admin to add/remove images at will, to deploy new 
overcloud images at any time.
I expect that this is standard Glance functionality.


-Original Message-
From: Jaromir Coufal [mailto:jcou...@redhat.com] 
Sent: Friday, April 17, 2015 8:51 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [tripleo] Building images separation and moving images 
into right place at right time

Hi All,

at the moment we are building the discovery, deploy, and overcloud images all 
at once. Then we make the user deal with uploading all images in one step.

The user should not be exposed to the discovery/deploy images. These should be 
handled automatically during undercloud installation as a post-config step, so 
that the undercloud is usable.

Once the user installs the undercloud (and has the discovery & deploy images in 
place), he should be able to build / download / create overcloud images (by 
overcloud images I mean overcloud-full.*). This is what the user should deal with.

For this we will need to separate the building process for discovery+deploy 
images from that for the overcloud images. Is that possible?

-- Jarda



Re: [openstack-dev] [tripleo] Building images separation and moving images into right place at right time

2015-04-17 Thread Arkady_Kanevsky

If the images under consideration are for overcloud nodes, then they will be 
changing all the time as well. It all depends on which layer you are working on.
Just consider images for overcloud nodes as a cloud application, and then 
follow the rules you would apply to any cloud application development.

I feel that we are in agreement on the goal, and only the specific work to be 
done is under discussion.

Do you have a blueprint or spec? That would simplify the discussion.
Thanks,
Arkady

-Original Message-
From: Jaromir Coufal [mailto:jcou...@redhat.com] 
Sent: Friday, April 17, 2015 10:54 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tripleo] Building images separation and moving 
images into right place at right time

Hey Arkady,

yes, this should stay as a fundamental requirement. This is standard Glance 
functionality; I just want to separate the discovery and deploy images, since 
these are not very likely to change and they belong in the undercloud 
installation stage.

That's why I want to separate building the overcloud images (which is actually 
already there), so that the user can easily replace this image with a different one.

-- Jarda

On 17/04/15 16:18, arkady_kanev...@dell.com wrote:
 We need an ability for an Admin to add/remove new images at will to deploy 
 new overcloud images at any time.
 Expect that it is standard glance functionality.


 -Original Message-
 From: Jaromir Coufal [mailto:jcou...@redhat.com]
 Sent: Friday, April 17, 2015 8:51 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [tripleo] Building images separation and 
 moving images into right place at right time

 Hi All,

 at the moment we are building discovery, deploy and overcloud images all at 
 once. Then we face user to deal with uploading all images at one step.

 User should not be exposed to discovery/deploy images. This should happen 
 automatically for the user during undercloud installation as post-config 
 step, so that undercloud is usable.

 Once user installs undercloud (and have discovery & deploy images at their 
 place) he should be able to build / download / create overcloud images (by 
 overcloud images I mean overcloud-full.*). This is what user should deal with.

 For this we will need to separate building process for discovery+deploy 
 images and for overcloud images. Is that possible?

 -- Jarda

 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread Arkady_Kanevsky
Another scenario.
The default LVM driver is local to the Cinder service. Thus, while it may work 
fine there, as soon as you go outside the controller node it does not.
We had a discussion on choosing a different default driver, and I expect that 
discussion to continue.

Not all drivers support all features. We have a table that lists which features 
each driver supports.

The question I would ask is whether the driver is the right place to set which 
tests to skip.
Why not specify it in the Tempest configuration that the driver runs against?
Then we can set up rules for when drivers should remove themselves from that 
exclusion list.
That is easier to track, and can be cleanly used by DefCore and for tagging.
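For what it's worth, Tempest already lets a CI system declare missing backend features in its configuration rather than carrying local skips; a fragment along these lines (the exact option names depend on the Tempest version, so treat them as assumptions to verify):

```ini
# tempest.conf on a third-party CI whose backend lacks these features
[volume-feature-enabled]
snapshot = False
backup = False
```

Because the skipped capabilities then live in a reviewable config file rather than in patched test code, it is straightforward to audit what a given CI actually exercises.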

Thanks,
Arkady

From: John Griffith [mailto:john.griffi...@gmail.com]
Sent: Monday, March 30, 2015 8:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on 
the goal and skipping tests



On Mon, Mar 30, 2015 at 6:21 PM, Rochelle Grober rochelle.gro...@huawei.com wrote:
Top posting… I believe the main issue was a problem with snapshots that caused 
false negatives for most Cinder drivers. But that got fixed. Unfortunately, we 
haven't yet established a good process to notify third parties when skipped 
tests are fixed and should be "unskipped". Maybe tagging the tests can help 
with this. But I really do think this round was a bit of first-run gotchas and 
rookie mistakes on all sides. A good post-mortem on how to better communicate 
changes and deadlines may go a long way toward smoothing these out in the next round.

--Rocky

John Griffith on Monday, March 30, 2015 15:36 wrote:
On Mon, Mar 30, 2015 at 4:06 PM, Doug Wiegley doug...@parksidesoftware.com wrote:
A few reasons, I’m sure there are others:

- Broken tests that hardcode something about the ref implementation. The test 
needs to be fixed, of course, but in the meantime, a constantly failing CI is 
worthless (hello, lbaas scenario test.)
Certainly... but that's relatively easy to fix (bug/patch to Tempest). Although 
that's not actually the case in this particular context, as there are a handful 
of third-party devices that run the full set of tests that the ref driver runs, 
with no additional skips or modifications.

- Test relies on some “optional” feature, like overlapping IP subnets that the 
backend doesn’t support.  I’d argue it’s another case of broken tests if they 
require an optional feature, but it still needs skipping in the meantime.

This may be something specific to Neutron, perhaps? In Cinder, LVM is pretty 
much the lowest common denominator. I'm not aware of any volume tests in 
Tempest that rely on optional features that don't pick this up automatically 
from the config (like multi-backend, for example).

- Some new feature added to an interface, in the presence of shims/decomposed 
drivers/plugins (e.g. adding TLS termination support to lbaas.) Those 
implementations will lag the feature commit, by definition.

Yeah, certainly I think this highlights some of the differences between Cinder 
and Neutron, and the differences in complexity.
Thanks for the feedback... I don't disagree per se; however, Cinder is set up a 
bit differently here in terms of expectations for base functionality 
requirements and compatibility. But your points are definitely well taken.

Thanks,
doug


On Mar 30, 2015, at 2:54 PM, John Griffith john.griffi...@gmail.com wrote:

This may have already been raised/discussed, but I'm kinda confused so thought 
I'd ask on the ML here.  The whole point of third party CI as I recall was to 
run the same tests that we run in the official Gate against third party 
drivers.  To me that would imply that a CI system/device that marks itself as 
GOOD doesn't do things like add skips locally that aren't in the tempest code 
already?

In other words, it seems like cheating to say "My CI passes and all is good, 
except for the tests that don't work, which I skip... but pay no attention to 
those, please."

Did I miss something? Isn't the whole point of third-party CI to demonstrate 
that a third party's backend is tested and functions to the same degree that 
the reference implementations do? The goal (using Cinder for example) was to be 
able to say that any API call that works on the LVM reference driver will work 
on the drivers listed in DriverLog, and that we know this because they run the 
same Tempest API tests.

Don't get me wrong, I'm certainly not saying there's malice or that things 
should be marked as "no good"... but if the practice is to skip what you can't 
do, then maybe that should be documented in the DriverLog submission, as 
opposed to just stating "Yeah, we run CI successfully."

Thanks,
John

Re: [openstack-dev] Cinder Review Inbox Was RE: Cinder Third-Party CI: what next? (was Re: [cinder] Request exemption for removal of NetApp FC drivers (no voting CI))

2015-03-24 Thread Arkady_Kanevsky
The link from the Cinder page to the Cinder review inbox 
(https://review.openstack.org/#/dashboard/) on 
https://wiki.openstack.org/wiki/Cinder#Resources is empty.

Link to bugs does work.

-Original Message-
From: Mike Perez [mailto:thin...@gmail.com]
Sent: Tuesday, March 24, 2015 3:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Cinder Review Inbox Was RE: Cinder Third-Party CI: 
what next? (was Re: [cinder] Request exemption for removal of NetApp FC drivers 
(no voting CI))

On 19:38 Tue 24 Mar , Asselin, Ramy wrote:
 Changing the topic.

 Mike Perez did include a Cinder Review Inbox link on the main Cinder
 Wiki page.
 https://wiki.openstack.org/wiki/Cinder#Resources

 Not sure how many people/cores use that link but that could be a step
 towards what you're asking for.
 The query should probably be proposed to
 http://git.openstack.org/cgit/stackforge/gerrit-dash-creator/tree/dash
 boards

 Ramy


 From: Rochelle Grober [mailto:rochelle.gro...@huawei.com]
 Sent: Tuesday, March 24, 2015 12:02 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Cinder Third-Party CI: what next? (was
 Re: [cinder] Request exemption for removal of NetApp FC drivers (no
 voting CI))

 One other item I'd like to bring up.

 While Nova and Neutron are well distributed around the globe and have Core 
 Reviewers on IRC in the Asia daytime, some other projects are not so well 
 distributed as yet. A problem I noticed a number of times is that an Asia 
 based developer will post to the mailing list to get some attention for 
 his/her patch. This is frowned upon in the community, but when there are few 
 to no Core Reviewers in IRC, getting that first core review can be 
 difficult. Emailing the PTL is something I'm sure the PTLs would like to 
 limit as they are already swamped. So, how do we get timely first core 
 review of patches in areas of the world where Core presence in IRC is slim 
 to none?

 I can think of a few options but they don't seem great:

 * A filter for dashboards that flags reviews with multiple +1s and no core 
 along with a commitment of the Core team to perform a review within x number 
 of days

 * A separate mailing list for project review requests

 * Somehow queueing requests in the IRC channel so that offline developers 
 can easily find review requests when looking at channel logs

 * ???

 Solving this issue could help not just Third Party developers, but all of 
 OpenStack and make the community more inviting to Asian and Australian (and 
 maybe European and African) developers.

 --Rocky

Yes thanks Ramy. That's exactly what this dashboard helps with and has existed 
since January. I brought it up at the Cinder midcycle meetup, multiple times in 
meetings, reviewathons, etc. I was letting it sit and have people try it out 
before proposing to see if anyone had feedback. Since it has been sitting for a 
while without feedback, I'll go ahead and propose it.

--
Mike Perez



Re: [openstack-dev] Cinder Third-Party CI: what next? (was Re: [cinder] Request exemption for removal of NetApp FC drivers (no voting CI))

2015-03-24 Thread Arkady_Kanevsky
+2 on Mike’s job

From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
Sent: Tuesday, March 24, 2015 3:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Cinder Third-Party CI: what next? (was Re: 
[cinder] Request exemption for removal of NetApp FC drivers (no voting CI))

On 23 March 2015 at 22:50, Mike Perez thin...@gmail.com wrote:

I've talked to folks at the OpenStack foundation to get feedback on my
communication, and was told this was good, and even better than previous
communication to controversial changes.

Nevertheless, I expected people to be angry with me and blame me regardless of
my attempts to help people be successful and move the community forward.

As somebody who has previously attempted to drive 3rd party CI in Cinder 
forward and completely burnt out on the process, I have to applaud Mike's 
efforts. We needed a line in the sand to force the issue, or we were never 
going to get to 100% coverage of drivers, which was what we desperately needed.

For those who have had issues getting their CI to be stable, this is a genuine 
reflection of the stability of OpenStack in general; but it is not something 
we're going to be able to make progress on without CI systems exposing 
problems. That's the entire point of the third-party CI program - we knew there 
were bugs, stability issues, and drivers that just plain didn't work, but we 
couldn't do much about it without being able to test changes.

Thanks to Mike for the final push on this, and to all of the various people in 
both Cinder and Infra who've been very active in helping people get their CI 
running, sometimes in very trying circumstances.

--
Duncan Thomas


Re: [openstack-dev] [oslo][cinder][nova][neutron] going forward to oslo-config-generator ...

2015-03-21 Thread Arkady_Kanevsky
Jay,
That sounds reasonable.
We will need to document, in a guide for driver developers, what to do when a 
new option is added or deprecated in a driver's conf file.
I expect that nothing extra will need to be done beyond what we do now when new 
functionality is added to or deprecated from the scheduler/default driver and 
percolates into drivers a release later.

I can also comment directly on the patch if it makes sense.
Thanks,
Arkady

-Original Message-
From: Jay S. Bryant [mailto:jsbry...@electronicjungle.net]
Sent: Friday, March 20, 2015 5:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [oslo][cinder][nova][neutron] going forward to 
oslo-config-generator ...

All,

Let me start with the TLDR;

Cinder, Nova and Neutron have lots of configuration options that need to be 
processed by oslo-config-generator to create the .conf.sample file. There are a 
couple of different ways this could be done. I have one proposal out, which has 
raised concerns; there is a second approach that could be taken, which I 
propose below. Please read on if you have a strong opinion on the precedent 
we will try to set in Cinder. :-)

We discussed in the oslo meeting a couple of weeks ago a plan for how Cinder 
was going to blaze a trail to the new oslo-config-generator. The result of that 
discussion and work is here: [1] It needs some more work but has the bare bones 
pieces there to move to using oslo-config-generator.

With the change I have written extensive hacking checks that ensure that any 
lists that are registered with register_opts() are included in the base 
cinder/opts.py file that is then a single entry point that pulls all of the 
options together to generate the cinder.conf.sample file.
This has raised concern, however, that whenever a developer adds a new list of 
configuration options, they are going to have to know to go back to 
cinder/opts.py and add their module and option list there. The hacking check 
should catch this before code is submitted, but we are possibly setting 
ourselves up for cases where the patch will fail in the gate because updates 
are not made in all the correct places and because
pep8 isn't run before the patch is pushed.
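To make the discussion concrete, here is a minimal, hypothetical sketch of what such a single aggregation module might look like. The module and option names are invented for illustration, and plain strings stand in for oslo.config Opt objects so the sketch stays self-contained:

```python
# Hypothetical sketch of a cinder/opts.py-style aggregation module.
# In real code, the lists would hold oslo.config Opt objects imported
# from each module that calls register_opts(); plain strings are used
# here so the example runs on its own.

# Stand-ins for option lists defined elsewhere, e.g. the results of
# "from cinder.volume import driver" and "from cinder import scheduler".
volume_driver_opts = ["volume_backend_name", "max_over_subscription_ratio"]
scheduler_opts = ["scheduler_default_filters"]


def list_opts():
    """Single entry point the config generator calls to collect
    every registered option list in one place."""
    return [
        ("DEFAULT", volume_driver_opts + scheduler_opts),
    ]
```

The developer burden described above is exactly the maintenance of this one file: each new option list must be imported and added to the returned tuple by hand.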

It is important to note that this will not happen every time a configuration 
option is changed or added, as was the case with the old check-uptodate.sh 
script; it happens only when a new list of configuration options is added, 
which is a much less likely occurrence. To avoid this happening at all, the 
Cinder team proposed that we use the code I wrote for the hacking checks to 
dynamically go through the files and create cinder/opts.py whenever 'tox 
-egenconfig' is 
run. Doing this makes me uncomfortable as it is not consistent with anything 
else I am familiar with in OpenStack and is not consistent with what other 
projects are doing to handle this problem. In discussion with Doug Hellman, the 
approach also seemed to cause him concern. So, I don't believe that is the 
right solution.

An alternative that may be a better solution was proposed by Doug:

We could even further reduce the occurrence of such issues by moving the
list_opts() function down into each driver and have an entry point for 
oslo.config.opts in setup.cfg for each of the drivers. As with the currently 
proposed solution, the developer doesn't have to edit a top level file for a 
new configuration option. This solution adds that the developer doesn't have to 
edit a top level file to add a new configuration item list to their driver. 
With this approach the change would happen in the driver's list_opts() 
function, rather than in cinder/opts.py . The only time that setup.cfg would 
need to be edited is when a new package is added or when a new driver is added. 
This would reduce some of the already minimal burden on the developer. We, 
however, would need to agree upon some method for aggregating together the 
options lists on a per package (i.e. cinder.scheduler, cinder.api) level. This 
approach, however, also has the advantage of providing a better indication in 
the sample config file of where the options are coming from. That is an 
improvement over what I have currently proposed.
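As a rough sketch of the per-driver alternative (the entry-point name, module path, and option names below are all hypothetical, and plain strings again stand in for Opt objects to keep the example self-contained):

```python
# Hypothetical per-driver list_opts(), advertised to the generator via
# a setup.cfg entry point along the lines of:
#
#   [entry_points]
#   oslo.config.opts =
#       cinder.my_driver = cinder.volume.drivers.my_driver:list_opts
#
# The driver then owns its own option list, so adding a new list only
# touches the driver module; setup.cfg changes only for new drivers.

my_driver_opts = ["my_driver_pool_name", "my_driver_api_timeout"]


def list_opts():
    # Grouping under a driver-specific section makes the generated
    # sample config show where each option comes from.
    return [("my_driver", my_driver_opts)]
```

This is the "better indication in the sample config file of where the options are coming from" mentioned above: each driver's options appear under its own group rather than being flattened into one aggregate list.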

Does Doug's proposal sound more agreeable to everyone? It is important to note 
that the requirement for some manual intervention to 'plumb' in new 
configuration options is by design. A little more work is required to make 
options available to oslo-config-generator, but the new generator added the 
ability to use different namespaces, different sample configs, etc., which 
other projects had requested. So, moving to this design does have the 
potential for more long-term gain.

Thanks for taking the time to consider this!

Jay

[1] https://review.openstack.org/#/c/165431/



[openstack-dev] licenses

2014-11-25 Thread Arkady_Kanevsky
What is the license that should be used for specs and for the code?
While all the code I have seen is under the Apache 2 license, many of the specs 
are under the CCPL-3 license.
Is that the guidance?
Thanks,
Arkady
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NOVA] https://review.openstack.org/91486

2014-07-18 Thread Arkady_Kanevsky
Why was this blueprint dropped?
We would really like it to be in the code base.
It was submitted for Icehouse and has been ready for review for more than two 
months.
The code is ready for review, and we have been using it internally for a while.
Thanks,
Arkady

Arkady Kanevsky, Ph.D.
Director of SW development
Dell ESG
Dell Inc. One Dell Way, MS PS2-47
Round Rock, TX 78682, USA
Phone: 512 723 5264



Re: [openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-16 Thread Arkady_Kanevsky
Why is OOO being singled out for backwards compatibility?

-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Monday, June 16, 2014 11:42 AM
To: jr...@redhat.com; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [TripleO] Backwards compatibility policy for our 
projects

On 16 June 2014 17:30, Jason Rist jr...@redhat.com wrote:
 I'm going to have to agree with Tomas here.  There doesn't seem to be 
 any reasonable expectation of backwards compatibility for the reasons 
 he outlined, despite some downstream releases that may be impacted.


Backward compatibility is a hard habit to get into, and easy to put off. If 
you're not making any guarantees now, when are you going to start making them? 
How much breakage can users expect? Without wanting to look entirely like a 
troll, should TripleO be dropped as an official project until it can start 
making such guarantees? I think every other official OpenStack project has a 
stable API policy of some kind, even if they don't entirely match...

