Re: [openstack-dev] [rally] Moving OpenStack plugins into separate repo

2018-04-11 Thread Boris Pavlovic
Andrey,

Great news!

Best regards,
Boris Pavlovic

On Wed, Apr 11, 2018 at 9:14 AM, Andrey Kurilin 
wrote:

> Hi Stackers!
>
> Today I am happy to announce great news!
>
> From a historical perspective, Rally is a testing (benchmarking) tool for
> OpenStack, but that has changed. More and more users want to use Rally for
> different platforms and environments, and our pluggable system makes this
> possible.
> To make the framework lightweight and to simplify our release model, we
> decided to move the OpenStack plugins to a separate repository[1].
>
> [1] https://git.openstack.org/cgit/openstack/rally-openstack
>
> We cut the first release, 1.0.0, two weeks ago, and it is published on
> PyPI[2].
>
> [2] https://pypi.python.org/pypi/rally-openstack
>
> If you are a Rally consumer and do not have custom plugins, the migration
> should be simple. Just install the rally-openstack package instead of rally
> and everything will work as before. rally-openstack depends on rally, so
> installing that one package is all you need.
>
> If you have custom plugins, do not worry, the migration should be simple
> for you too. The first release has the same structure as the rally
> repository had. The only change required is to import rally_openstack
> instead of rally.plugins.openstack.
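For a custom plugin, the change is roughly a one-line diff per import (the module path after the package root is illustrative and depends on your plugin):

```diff
-from rally.plugins.openstack.scenarios.nova import utils
+from rally_openstack.scenarios.nova import utils
```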
>
> --
> Best regards,
> Andrey Kurilin.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [QA][LCOO] MEX-ops-meetup: OpenStack Extreme Testing

2017-11-01 Thread Boris Pavlovic
Sam,

 Etherpad: https://etherpad.openstack.org/p/SYD-extreme-testing


I really don't want to sound like a person who says "use Rally, my best-ever
project, blah blah" and other BS.
I think the "reinventing the wheel" approach is how humanity evolves, and
that's why I like this effort in any case.

But honestly, I read the etherpad carefully, and in the description of
Eris I see plain Rally as is:

- Rally allows you to create tests as YAML
- Rally allows you to inject various actions during the load (Rally
Hooks), which makes it easy to do chaos and other kinds of testing
- Rally is pluggable, and you can even write your own Runners (scenario
executors) that generate the load pattern you need
- Rally has SLA plugins (that can deeply analyze the results of test cases)
and report whether they passed or not
- We are working on a feature that allows you to mix different workloads in
parallel (and generate more realistic load)
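For illustration, a minimal Rally task in YAML combining a scenario, a runner, and an SLA might look roughly like this (the scenario name and arguments are only an example):

```yaml
NovaServers.boot_and_delete_server:
  - args:
      flavor:
        name: m1.tiny
      image:
        name: cirros
    runner:
      type: constant
      times: 10
      concurrency: 2
    sla:
      failure_rate:
        max: 0
```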

So it would be really nice if you could share the gaps you faced that are
blocking you from using Rally directly.

Thanks!

Best regards,
Boris Pavlovic


On Tue, Oct 31, 2017 at 10:50 PM, Sam P  wrote:

> Hi All,
>
>  Sending out a gentle reminder of Sydney Summit Forum Session
> regarding this topic.
>
>  Extreme/Destructive Testing
>  Tuesday, November 7, 1:50pm-2:30pm
>  Sydney Convention and Exhibition Centre - Level 4 - C4.11
>  [https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20470/extremedestructive-testing]
>  Etherpad: https://etherpad.openstack.org/p/SYD-extreme-testing
>
>  Your participation in this session would be greatly appreciated.
> --- Regards,
> Sampath
>
>
>
> On Mon, Aug 14, 2017 at 11:43 PM, Tim Bell  wrote:
> > +1 for Boris’ suggestion. Many of us use Rally to probe our clouds and
> have
> > significant tooling behind it to integrate with local availability
> reporting
> > and trouble ticketing systems. It would be much easier to deploy new
> > functionality such as you propose if it was integrated into an existing
> > project framework (such as Rally).
> >
> >
> >
> > Tim
> >
> >
> >
> > From: Boris Pavlovic 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: Monday, 14 August 2017 at 12:57
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Cc: openstack-operators 
> > Subject: Re: [openstack-dev] [QA][LCOO] MEX-ops-meetup: OpenStack Extreme
> > Testing
> >
> >
> >
> > Sam,
> >
> >
> >
> > Seems like a good plan and huge topic ;)
> >
> >
> >
> > I would also suggest taking a look at similar efforts in
> OpenStack:
> >
> > - Failure injection: https://github.com/openstack/os-faults
> >
> > - Rally Hooks Mechanism (to inject in rally scenarios failures):
> > https://rally.readthedocs.io/en/latest/plugins/implementation/hook_and_trigger_plugins.html
> >
> >
> >
> >
> >
> > Best regards,
> > Boris Pavlovic
> >
> >
> >
> >
> >
> > On Mon, Aug 14, 2017 at 2:35 AM, Sam P  wrote:
> >
> > Hi All,
> >
> > This is a follow up for OpenStack Extreme Testing session[1]
> > we did in MEX-ops-meetup.
> >
> > Quick intro for those who were not there:
> > In this work, we proposed adding a new testing framework for OpenStack.
> > This framework will provide tools for creating tests with destructive
> > scenarios that check the High Availability, failover, and
> > recovery of an OpenStack cloud.
> > Please refer to the link at the top of [1] for further details.
> >
> > Follow up:
> > We are planning a periodic IRC meeting and an IRC
> > channel for discussion. I will get back to you with those details soon.
> >
> > At that session, we did not have time to discuss the last 3 items:
> > Reference architectures
> >  We are discussing the reference architecture in [2].
> >
> > What sort of failures do you see today in your environment?
> >  Currently we are considering service failures, backend service (MQ,
> > DB, etc.) failures, network switch failures, etc. To begin the
> > implementation, we are planning to start with
> >  service failures. Please let us know which failures are most frequent
> > in your environment.
> >
> > Emulation/Simulation mechanisms, etc.
> >  Rather than doing actual scale, load, or performance tests, we are
> > thinking of building an emulation/simulation mechanism
> > to get predictions of how OpenStack will behave in such situations.

[openstack-dev] [openstack-infra][stable][urgent][rally] Someone deleted Rally stable branch, we need to restore

2017-10-26 Thread Boris Pavlovic
Hi,

Someone somehow deleted the 0.9.* branches from the GitHub repo
https://github.com/openstack/rally
A lot of end users are using these branches and are affected by this
change.

Can someone help to restore them?


Best regards,
Boris Pavlovic


Re: [openstack-dev] [Tripleo] Containerizing tempest

2017-10-09 Thread Boris Pavlovic
Chandan,

I am not a big expert in Kolla, but I had the same task when
containerizing Rally.
The solution is simple: just use an ENTRYPOINT as done here:
https://github.com/openstack/rally/blob/master/Dockerfile#L38 (but for
tempest)
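A minimal sketch of the same pattern applied to a tempest image (the base image and install step are illustrative, not the actual Kolla build):

```dockerfile
FROM python:3.6-slim

# Illustrative install step; a Kolla image would build tempest its own way.
RUN pip install tempest

# Make the container behave like the tempest CLI itself, so that
#   docker run <image> run ...
# is equivalent to running `tempest run ...` inside the container.
ENTRYPOINT ["tempest"]
CMD ["--help"]
```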


Best regards,
Boris Pavlovic

On Mon, Oct 9, 2017 at 10:08 PM, Chandan kumar  wrote:

> Hello,
>
> I am planning to containerize tempest for TripleO.
> The Kolla project provides a tempest kolla image [1.].
> On a containerized TripleO deployment, the Kolla tempest image will be
> available on the undercloud and the end user should be able to run tempest
> from there using the tempest CLI.
>
> I need some help on how to proceed:
> [1.] Where to make changes in TripleO in order to make the Kolla tempest
> image available on the undercloud?
> [2.] Since Tempest is not a service but an application, how to expose
> the tempest CLI on the undercloud without entering the
> tempest kolla image?
>
> Links:
> [1.] https://github.com/openstack/kolla/tree/master/docker/tempest
>
> Thanks,
>
> Chandan Kumar
>
>


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-14 Thread Boris Pavlovic
Jay,

OK, I'll bite.


This doesn't sound like a constructive discussion. Bye Bye.

Best regards,
Boris Pavlovic

On Thu, Sep 14, 2017 at 8:50 AM, Jay Pipes  wrote:

> OK, I'll bite.
>
> On 09/13/2017 08:56 PM, Boris Pavlovic wrote:
>
>> Jay,
>>
>> All that you say exactly explains the reason why more and more companies
>> are leaving OpenStack.
>>
>
> All that I say? The majority of what I was "saying" was actually asking
> you to back up your statements with actual proof points instead of making
> wild conjectures.
>
>> Companies and actual end users care only about their own things and how
>> they can get their job done. They want a thing that they can run and
>> support easily and that resolves their problems.
>>
>
> No disagreement from me. That said, I fail to see what the above statement
> has to do with anything I wrote.
>
>> They initially think that it's a good idea to take OpenStack as a
>> framework and build a sort of product on top of it because it's so open
>> and large and everybody uses it...
>>
>
> End users of OpenStack don't "build sort of product on top". End users of
> OpenStack call APIs or use Horizon to launch VMs, create networks, volumes,
> and whatever else those end users need for their own use cases.
>
>> Soon they understand that OpenStack has very complicated operations,
>> because it's not designed to be a product but rather a framework, and
>> that the complexity of running OpenStack is similar to developing an
>> in-house solution; as time goes by they have only a few options: move to
>> a public cloud or some other private cloud solution...
>>
>
> Deployers of OpenStack use the method of installing and configuring
> OpenStack that matches best their cultural fit, experience and level of
> comfort with underlying technologies and vendors (packages vs. source vs.
> images, using a vendor distribution vs. going it alone, Chef vs. Puppet vs.
> Ansible vs. SaltStack vs. Terraform, etc). The way they configure OpenStack
> services is entirely dependent on the use cases they wish to support for
> their end users. And, to repeat myself, there is NO SINGLE USE CASE for
> infrastructure services like OpenStack. Therefore there is zero chance for
> a "standard deployment" of OpenStack becoming a reality.
>
> Just like there are myriad ways of deploying and configuring OpenStack,
> there are myriad ways of deploying and configuring k8s. Why? Because
> deploying and configuring highly distributed systems is a hard problem to
> solve. And maintaining and operating those systems is an even harder
> problem to solve.
>
> We as a community can continue saying that the current OpenStack approach
>> is the best
>>
>
> Nobody is saying that the current OpenStack approach is the best. I
> certainly have never said this. All that I have asked is that you actually
> back up your statements with proof points that demonstrate how and why a
> different approach to building software will lead to specific improvements
> in quality or user experience.
>
>> and keep losing customers/users/community, or change something
>> drastically, like bringing technical leadership to the OpenStack
>> Foundation that is going to act like a benevolent dictator who focuses
>> OpenStack effort on shrinking use cases, redesigning the architecture,
>> and moving in the right direction...
>>
>
> What *specifically* is the "right direction" for OpenStack to take?
> Please, as I asked you in the original response, provide actual details
> other than "we should have a monolithic application". Provide an argument
> as to how and why *your* direction is "right" for every user of OpenStack.
>
> When you say "technical leadership", what specifically are you wanting to
> see?
>
>
>> I know this all sounds like a big change, but let's be honest: the
>> current situation doesn't look healthy...
>> By the way, almost all successful open source projects have a benevolent
>> dictator, and everybody is OK with that being how things work...
>>
>
> Who is the benevolent dictator of k8s? Who is the benevolent dictator of
> MySQL? Of PostgreSQL? Of etcd?
>
> You have a particularly myopic view of what "successful" is for open
> source, IMHO.
>
> Awesome news. I will keep this in mind when users (like GoDaddy) ask
>> Nova to never break anything ever and keep behaviour like scheduler
>> retries that represent giant technical debt.
>>
>> I am writing here on my own behalf (using my personal email, if you
>> haven't noticed). Are we actually Open Source, or Enterprise Source?

Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-13 Thread Boris Pavlovic
Jay,

All that you say exactly explains the reason why more and more companies
are leaving OpenStack.

Companies and actual end users care only about their own things and how
they can get their job done. They want a thing that they can run and
support easily and that resolves their problems.

They initially think that it's a good idea to take OpenStack as a
framework and build a sort of product on top of it because it's so open and
large and everybody uses it...

Soon they understand that OpenStack has very complicated operations,
because it's not designed to be a product but rather a framework, and that
the complexity of running OpenStack is similar to developing an in-house
solution; as time goes by they have only a few options: move to a public
cloud or some other private cloud solution...

We as a community can continue saying that the current OpenStack approach
is the best and keep losing customers/users/community, or change something
drastically, like bringing technical leadership to the OpenStack Foundation
that is going to act like a benevolent dictator who focuses OpenStack
effort on shrinking use cases, redesigning the architecture, and moving in
the right direction...

I know this all sounds like a big change, but let's be honest: the current
situation doesn't look healthy...
By the way, almost all successful open source projects have a benevolent
dictator, and everybody is OK with that being how things work...


Awesome news. I will keep this in mind when users (like GoDaddy) ask Nova
> to never break anything ever and keep behaviour like scheduler retries that
> represent giant technical debt.


I am writing here on my own behalf (using my personal email, if you haven't
noticed). Are we actually Open Source, or Enterprise Source?

Moreover, I don't think that what you say is going to be an issue for
GoDaddy, at least not soon, because we still can't upgrade, since upgrading
is an NP-complete problem (even if you run just the core projects), which
is what my email was about; and I have seen the same stories at a bunch of
other companies.


Yes, let's definitely go the opposite direction of microservices and
> loosely coupled domains which is the best practices of software development
> over the last two decades. While we're at it, let's rewrite OpenStack
> projects in COBOL.


I really don't want to answer this provocation, because it shifts the
focus from the major topic. But I really can't stop myself ;)

- There is no silver bullet in programming. For example, would Git or Linux
be better if they were written using a microservices approach?
- Microservices are obsolete; you should use the new hype thing called
FaaS. (I am just curious when these FaaS fellows are going to implement
modules for FaaS, and when they are going to understand that they will
actually need everything developed in programming languages (OOP, AOP,
DI, ...) to glue these things together ;) )
- I was talking about architectural changes, not a programming language, so
it's a sort of big type mismatch and logically wrong. However, what's wrong
with COBOL? If you use the right architecture and the right algorithms, it
will definitely work better than a program implemented in any other
language with the wrong architecture and bad algorithms... so I am not sure
that I understand this point/joke...


Best regards,
Boris Pavlovic

On Wed, Sep 13, 2017 at 10:44 AM, Jay Pipes  wrote:

> On 09/12/2017 06:53 PM, Boris Pavlovic wrote:
>
>> Mike,
>>
>> Great initiative; unfortunately I wasn't able to attend it, however I
>> have some thoughts...
>> You can't simplify OpenStack just by fixing the few issues described in
>> the etherpad.
>>
>> The TC should work on shrinking the OpenStack use cases and moving
>> towards a complete product (box) solution instead of a bunch of barely
>> related pieces.
>>
>
> OpenStack is not a product. It's a collection of projects that represent a
> toolkit for various cloud-computing functionality.
>
>> *Simple things to improve: *
>> /This is going to allow the community to work together, actually get
>> feedback in a standard way, and incrementally improve quality. /
>>
>> 1) There should be one and only one:
>> 1.1) deployment/packaging (maybe docker) and upgrade mechanism used by
>> everybody
>>
>
> Good luck with that :) The likelihood of the deployer/packager community
> agreeing on a single solution is zero.
>
> 1.2) monitoring/logging/tracing mechanism used by everybody
>>
>
> Also close to zero chance of agreeing on a single solution. Better to
> focus instead on ensuring various service projects are monitorable and
> transparent.
>
> 1.3) way to configure all services (e.g. k8 etcd way)
>>
>
> Are you referring to the way to configure k8s services or the way to
> configure/setup

Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-12 Thread Boris Pavlovic
Mike,

Great initiative; unfortunately I wasn't able to attend it, however I have
some thoughts...
You can't simplify OpenStack just by fixing the few issues described in the
etherpad.

The TC should work on shrinking the OpenStack use cases and moving towards
a complete product (box) solution instead of a bunch of barely related
pieces.

*Simple things to improve: *
*This is going to allow the community to work together, actually get
feedback in a standard way, and incrementally improve quality. *

1) There should be one and only one:
1.1) deployment/packaging (maybe docker) and upgrade mechanism used by
everybody
1.2) monitoring/logging/tracing mechanism used by everybody
1.3) way to configure all services (e.g. the k8s etcd way)
2) Projects must have a standardized interface that allows them to be used
in the same way.
3) Testing & R&D should be performed only against this standard deployment

*Hard things to improve: *

OpenStack projects were split in a far-from-ideal way, which leads to the
bunch of gaps that we have now:
1.1) Code & functional duplication: quotas, schedulers, reservations,
health checks, logging, tracing, ...
1.2) Non-optimal workflows (booting a VM takes 400 DB requests) because
data is stored across Cinder, Nova, and Neutron
1.3) Lack of resources (as every project is doing the same work on the
same parts again and again)

What we can do:

*1) Simplify internal communication *
1.1) Instead of AMQP for internal communication inside projects, use just
HTTP with load balancing & retries.
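As a toy illustration (plain Python, not OpenStack code) of the "HTTP + load balancing + retries" idea, the caller-side retry logic could look like this, where the caller retries failed calls (ideally landing on another endpoint behind a load balancer) instead of relying on a broker to redeliver:

```python
import time

def call_with_retries(func, attempts=3, backoff=0.1):
    """Call an HTTP-style operation, retrying on transient failures."""
    last_exc = None
    for attempt in range(attempts):
        try:
            return func()
        except (ConnectionError, TimeoutError) as exc:
            last_exc = exc
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise last_exc

# Example: a flaky "endpoint" that fails twice, then succeeds.
calls = {"n": 0}
def flaky_endpoint():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("connection refused")
    return 200

print(call_with_retries(flaky_endpoint, backoff=0))  # prints 200
```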

*2) Use API Gateway pattern *
2.1) Provide one high-level API at one IP address with one client
2.2) Allows a significant reduction of load on Keystone, because tokens are
checked only in the API gateway
2.3) Simplifies communication between projects (they are now in a trusted
network, with no need to check tokens)
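To make the pattern concrete, here is a toy sketch (plain Python, no real OpenStack code) of a gateway that validates the token once at the edge and forwards to per-service backends over a trusted network:

```python
def api_gateway(request, validate_token, backends):
    """Toy gateway: check the token once, then route by URL prefix
    to a per-service backend; backends trust the gateway and do not
    re-check tokens."""
    if not validate_token(request["token"]):
        return {"status": 401}
    service = request["path"].split("/")[1]   # "/compute/servers" -> "compute"
    # The token is stripped before forwarding over the trusted network.
    return backends[service]({"path": request["path"]})

# Toy usage with a fake "compute" backend.
backends = {"compute": lambda req: {"status": 200, "path": req["path"]}}
resp = api_gateway(
    {"token": "valid", "path": "/compute/servers"},
    validate_token=lambda t: t == "valid",
    backends=backends,
)
print(resp["status"])  # prints 200
```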

*3) Fix the OpenStack split *
3.1) Move common functionality to separate internal services: scheduling,
logging, monitoring, tracing, quotas, reservations (it would be even better
if this had a more or less monolithic architecture)
3.2) Somehow deal with the fragmentation of resources, e.g. VM, Volume, and
Network data, which is heavily interconnected.


*4) Don't be afraid to break things*
Maybe it's time for OpenStack 2:

   - In any case, most people provide an API on top of OpenStack for usage
   - In any case, there is no standard and easy way to upgrade

So basically we are not losing anything even if we make
non-backward-compatible changes and completely rethink the architecture
and API.


I know this sounds like science fiction, but I believe the community will
appreciate steps in this direction...


Best regards,
Boris Pavlovic

On Tue, Sep 12, 2017 at 2:33 PM, Mike Perez  wrote:

> Hey all,
>
> The session is over. I’m hanging near registration if anyone wants to
> discuss things. Shout out to John for coming by on discussions with
> simplifying dependencies. I welcome more packagers to join the
> discussion.
>
> https://etherpad.openstack.org/p/simplifying-os
>
> —
> Mike Perez
>
>
> On September 12, 2017 at 11:45:05, Mike Perez (thin...@gmail.com) wrote:
> > Hey all,
> >
> > Back in a joint meeting with the TC, UC, Foundation and the Board, it
> > was decided that an area of OpenStack to focus on was Simplifying
> > OpenStack. This was intentionally very broad so the community could
> > kick-start the conversation and help tackle some broad feedback we get.
> >
> > Unfortunately yesterday there was a low turnout in the Simplification
> > room. A group of people from the Swift team, Kevin Fox, and Swimingly
> > were nice enough to start the conversation and give some feedback. You
> > can see our initial etherpad work here:
> >
> > https://etherpad.openstack.org/p/simplifying-os
> >
> > There are efforts happening every day helping with this goal, and our
> > team has made some documented improvements that can be found in our
> > report to the board within the etherpad. I would like to take this
> > opportunity to step back and have in-person discussions to identify
> > which areas of simplification are worthwhile. I'm taking a break from
> > the room at the moment for lunch, but I encourage people to meet at
> > 13:30 local time at the simplification room, level B, in the Big
> > Thompson room. Thank you!
> >
> > —
> > Mike Perez
>
>


Re: [openstack-dev] [QA][LCOO] MEX-ops-meetup: OpenStack Extreme Testing

2017-08-14 Thread Boris Pavlovic
Sam,

Seems like a good plan and huge topic ;)

I would also suggest taking a look at similar efforts in OpenStack:
- Failure injection: https://github.com/openstack/os-faults
- Rally Hooks Mechanism (to inject in rally scenarios failures):
https://rally.readthedocs.io/en/latest/plugins/implementation/hook_and_trigger_plugins.html


Best regards,
Boris Pavlovic


On Mon, Aug 14, 2017 at 2:35 AM, Sam P  wrote:

> Hi All,
>
> This is a follow up for OpenStack Extreme Testing session[1]
> we did in MEX-ops-meetup.
>
> Quick intro for those who were not there:
> In this work, we proposed adding a new testing framework for OpenStack.
> This framework will provide tools for creating tests with destructive
> scenarios that check the High Availability, failover, and
> recovery of an OpenStack cloud.
> Please refer to the link at the top of [1] for further details.
>
> Follow up:
> We are planning a periodic IRC meeting and an IRC
> channel for discussion. I will get back to you with those details soon.
>
> At that session, we did not have time to discuss the last 3 items:
> Reference architectures
>  We are discussing the reference architecture in [2].
>
> What sort of failures do you see today in your environment?
>  Currently we are considering service failures, backend service (MQ,
> DB, etc.) failures, network switch failures, etc. To begin the
> implementation, we are planning to start with
>  service failures. Please let us know which failures are most frequent
> in your environment.
>
> Emulation/Simulation mechanisms, etc.
>  Rather than doing actual scale, load, or performance tests, we are
> thinking of building an emulation/simulation mechanism
> to get predictions of how OpenStack will behave in such
> situations.
> This interesting idea was proposed by Gautam and needs more
> discussion.
>
> Please let us know your questions or comments.
>
> Request to Mike Perez:
>  We discussed synergies with OpenStack assertion tags and other
> efforts to do similar testing in OpenStack.
>  Could you please give some info or pointers to previous discussions?
>
> [1] https://etherpad.openstack.org/p/MEX-ops-extreme-testing
> [2] https://openstack-lcoo.atlassian.net/wiki/spaces/LCOO/pages/15477787/Extreme+Testing-Vision+Arch
>
> --- Regards,
> Sampath
>
>


Re: [openstack-dev] [oslo][performance] Proposing tail-based sampling in OSProfiler

2017-08-04 Thread Boris Pavlovic
Ilya,

Continuous tracing is a cool story, but before proceeding it would be good
> to estimate the overhead. There will be an additional delay introduced by
> OSProfiler library itself and delay caused by events transfer to consumer.
> OSProfiler overhead is critical to minimize. E.g. VM creation produces >1k
> events, which gives almost 2 times performance penalty in DevStack. Would
> be definitely nice to have the same test run on real environment --
> something that Performance Team could help with.


As far as I understand, the idea of continuous tracing is to collect as few
metrics as possible (not all tracepoints) to get insights into the request.
If you keep only API, RPC, and driver calls, it is going to
drastically reduce the amount of metrics collected.

As well, one of the things that should be done is sending the metrics in
bulk, asynchronously, after the request; that way we won't slow down the UX
or add too much load on the underlying infrastructure.


Rajul,

ICYMI, Boris is father of OSprofiler in OpenStack [1]
>
>> This is why I was excited to get the first response from him and curious
>> about his stance. Really looking forward to getting more on this from
>> him. Also, Josh's response on the other tracing thread piqued my
>> curiosity further.
>
>
I'll try to elaborate on my points. From a monitoring perspective it's
going to be super beneficial to have continuous tracing, and I fully
support the effort. However, it won't help the community much in fixing the
real problems in the architecture (in my opinion it's too late). For
example, creating a VM performs ~400 DB requests... and yep, this is going
to be slow; and now what? How can you fix that?..

Best regards,
Boris Pavlovic



On Fri, Aug 4, 2017 at 1:12 PM, Rajul Kumar 
wrote:

> Hi Vinh
>
> For the `agent idea`, I think it is very good.
>
> However, in OpenStack, that idea may be really hard for us.
>
> The reason is the same as what Boris thinks.
>
>
> Thanks. We did a PoC and are working to integrate it with OSProfiler
> without affecting any of the services.
> I understand this will be difficult.
>
> For tail-based and adaptive sampling, it is another story.
>
> Exactly. This needs some major changes. We will need this if we want to
> have effective tracing and any kind of automated analysis of the system.
>
> However, in a naïve way, we can use the sampling abilities of other
> OpenTracing-compatible tracers
>
> such as Uber Jaeger, Appdash, Zipkin (has an open pull request), LightStep
> … by making OSprofiler
>
> compatible with the OpenTracing API.
>
> I agree. Initially, this can be done.
> However, the limitations of the traces they generate are another story,
> and we are working on another blueprint for that.
>
> ICYMI, Boris is father of OSprofiler in OpenStack [1]
>
> This is why I was excited to get the first response from him and curious
> about his stance. Really looking forward to getting more on this from
> him. Also, Josh's response on the other tracing thread piqued my
> curiosity further.
>
> Thanks
> Rajul
>
>
>
>
>
> On Thu, Aug 3, 2017 at 10:04 PM, vin...@vn.fujitsu.com <
> vin...@vn.fujitsu.com> wrote:
>
>> Hi Rajul,
>>
>>
>>
>> For the `agent idea`, I think it is very good.
>>
>> However, in OpenStack, that idea may be really hard for us.
>>
>> The reason is the same as what Boris thinks.
>>
>>
>>
>> For the sampling part, head-based sampling can be implemented in
>> OSprofiler.
>>
>> For tail-based and adaptive sampling, it is another story.
>>
>> However, in a naïve way, we can use the sampling abilities of other
>> OpenTracing-compatible tracers
>>
>> such as Uber Jaeger, Appdash, Zipkin (has an open pull request), LightStep
>> … by making OSprofiler
>>
>> compatible with the OpenTracing API.
>>
>>
>>
>> ICYMI, Boris is father of OSprofiler in OpenStack [1]
>>
>>
>>
>> [1] https://specs.openstack.org/openstack/oslo-specs/specs/mitaka/osprofiler-cross-service-project-profiling.html
>>
>>
>>
>> Best regards,
>>
>>
>>
>> Vinh Nguyen Trong
>>
>> PODC – Fujitsu Vietnam Ltd.
>>
>>
>>
>> *From:* Rajul Kumar [mailto:kumar.r...@husky.neu.edu]
>> *Sent:* Friday, 04 August, 2017 03:49
>> *To:* OpenStack Development Mailing List (not for usage questions) <
>> openstack-dev@lists.openstack.org>
>> *Subject:* Re: [openstack-dev] [oslo][performance] Proposing tail-based
>> sampling in OSProfiler
>>
>>
>>
>> Hi Boris
>>
>>
>>
>> That is a point of concern.
>>
>> Can you please dir

Re: [openstack-dev] [python-openstacksdk] Status of python-openstacksdk project

2017-08-04 Thread Boris Pavlovic
Monty,


* drop stevedore/plugin support. An OpenStack REST client has no need for
> plugins. All services are welcome. *note below*


Back to a 1960s style of development? Just copy-paste things? No plugins?
No architecture? No libs?

That's not going to work for dozens of OpenStack projects. It just won't
scale. Every project should maintain the plugin for its own project, and it
should be enough to say "pip install python-client" to
extend the core OpenStack python client and add support for new commands.
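A sketch of how such pip-installable extensions are typically discovered via Python entry points (the group name `openstack.client.plugin` is made up for illustration; it is not a real OpenStack group):

```python
from importlib.metadata import entry_points

def load_client_plugins(group="openstack.client.plugin"):
    """Discover plugins registered under `group` entry points.

    Any package that declares an entry point in this group (via its
    setup.cfg/pyproject packaging metadata) is picked up automatically
    after a plain `pip install`, with no changes to the core client.
    """
    eps = entry_points()
    if hasattr(eps, "select"):          # Python 3.10+ API
        selected = eps.select(group=group)
    else:                               # Python 3.8/3.9 dict-like API
        selected = eps.get(group, [])
    return {ep.name: ep.load() for ep in selected}

plugins = load_client_plugins()
print(sorted(plugins))  # [] unless matching plugins are installed
```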

The whole core part should only be about making the plugin interface easy
to extend, providing a nice user experience on both sides (shell & python)
and a nice, stable interface for client developers.

By the way, stevedore provides a really bad plugin experience and
definitely should not be used.

Best regards,
Boris Pavlovic

On Fri, Aug 4, 2017 at 12:05 PM, Joshua Harlow 
wrote:

> Also note that this appears to exist:
>
> https://github.com/openstack/python-openstackclient/blob/master/requirements.txt#L10
>
> So even if python-openstacksdk is not a top level project, I would assume
> that it being a requirement would imply that it is? Or perhaps neither the
> python-openstackclient or python-openstacksdk should really be used? I've
> been telling people that python-openstackclient should be good to use (I
> hope that is still correct, though I do have to tell people to *not* use
> python-openstackclient from python itself, and only use it from bash/other
> shell).
>
>
> Michael Johnson wrote:
>
>> Hi OpenStack developers,
>>
>> I was wondering what is the current status of the python-openstacksdk
>> project.  The Octavia team has posted some patches implementing our new
>> Octavia v2 API [1] in the SDK, but we have not had any reviews.  I have
>> also
>> asked some questions in #openstack-sdks with no responses.
>> I see that there are some maintenance patches getting merged and a pypi
>> release was made 6/14/17 (though not through releases project).  I'm not
>> seeing any mailing list traffic and the IRC meetings seem to have ended in
>> 2016.
>>
>> With all the recent contributor changes, I want to make sure the project
>> isn't adrift in the sea of OpenStack before we continue to spend
>> development
>> time implementing the SDK for Octavia. We were also planning to use it as
>> the backing for our dashboard project.
>>
>> Since it's not in the governance projects list I couldn't determine who
>> the
>> PTL to ping would be, so I decided to ping the dev mailing list.
>>
>> My questions:
>> 1. Is this project abandoned?
>> 2. Is there a plan to make it an official project?
>> 3. Should we continue to develop for it?
>>
>> Thanks,
>> Michael (johnsom)
>>
>> [1]
>> https://review.openstack.org/#/q/project:openstack/python-op
>> enstacksdk+statu
>> s:open+topic:%255Eoctavia.*
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][performance] Proposing tail-based sampling in OSProfiler

2017-08-03 Thread Boris Pavlovic
Rajul,

May I ask why you think so?


The issues exposed by OSProfiler are going to be really hard to fix in the
current OpenStack architecture.

Best regards,
Boris Pavlovic

On Thu, Aug 3, 2017 at 12:56 PM, Rajul Kumar 
wrote:

> Hi Boris
>
> Good to hear from you.
> May I ask why you think so?
>
> We do see some potential with OSProfiler for this and further objectives.
>
> Thanks
> Rajul
>
> On Thu, Aug 3, 2017 at 3:48 PM, Boris Pavlovic  wrote:
>
>> Rajul,
>>
>> It makes sense! However, maybe it's a bit too late... ;)
>>
>> Best regards,
>> Boris Pavlovic
>>
>> On Thu, Aug 3, 2017 at 12:16 PM, Rajul Kumar 
>> wrote:
>>
>>> Hello everyone
>>>
>>> I have added a blueprint on having tail-based sampling as a sampling
>>> option for continuous tracing in OSProfiler. It would be really helpful to
>>> have some thoughts, ideas, comments on this from the community.
>>>
>>> Continuous tracing provides a good insight on how various transactions
>>> behave across in a distributed system. Currently, OpenStack doesn't have a
>>> defined solution for continuous tracing. Though, it has OSProfiler that
>>> does generates selective traces, it may not capture the occurrence. Even if
>>> we have OSProfiler running continuously [1], we need to sample the traces
>>> so as to cut down the data generated and still keep the useful info.
>>>
>>> Head based sampling can be applied that decides initially whether a
>>> trace should be saved or not. However, it may miss out on some useful
>>> traces. I propose to have tail-based sampling [2] mechanism that makes the
>>> decision at the end of the transaction and tends to keep all the useful
>>> traces. This may require a lot of changes depending on what all type of
>>> info is required and the solution that we pick to implement it [2]. This
>>> may not affect the current working of any of the services on OpenStack as
>>> it will be off the critical path [3].
>>>
>>> Please share your thoughts on this and what solution should be preferred
>>> in a broader OpenStack's perspective.
>>> This is a step in the process of having an automated diagnostic solution
>>> for OpenStack cluster.
>>>
>>> [1] https://blueprints.launchpad.net/osprofiler/+spec/osprof
>>> iler-overhead-control
>>> [2] https://blueprints.launchpad.net/osprofiler/+spec/tail-b
>>> ased-coherent-sampling
>>> [3] https://blueprints.launchpad.net/osprofiler/+spec/asynch
>>> ronous-trace-collection
>>>
>>> Thanks
>>> Rajul Kumar
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][performance] Proposing tail-based sampling in OSProfiler

2017-08-03 Thread Boris Pavlovic
Rajul,

It makes sense! However, maybe it's a bit too late... ;)

Best regards,
Boris Pavlovic

On Thu, Aug 3, 2017 at 12:16 PM, Rajul Kumar 
wrote:

> Hello everyone
>
> I have added a blueprint on having tail-based sampling as a sampling
> option for continuous tracing in OSProfiler. It would be really helpful to
> have some thoughts, ideas, comments on this from the community.
>
> Continuous tracing provides a good insight on how various transactions
> behave across in a distributed system. Currently, OpenStack doesn't have a
> defined solution for continuous tracing. Though, it has OSProfiler that
> does generates selective traces, it may not capture the occurrence. Even if
> we have OSProfiler running continuously [1], we need to sample the traces
> so as to cut down the data generated and still keep the useful info.
>
> Head based sampling can be applied that decides initially whether a trace
> should be saved or not. However, it may miss out on some useful traces. I
> propose to have tail-based sampling [2] mechanism that makes the decision
> at the end of the transaction and tends to keep all the useful traces. This
> may require a lot of changes depending on what all type of info is required
> and the solution that we pick to implement it [2]. This may not affect the
> current working of any of the services on OpenStack as it will be off the
> critical path [3].
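A minimal sketch of the tail-based keep/drop decision described above; the field names and the latency threshold are illustrative, not OSProfiler's actual data model:

```python
def keep_trace(spans, latency_threshold_ms=500):
    """Tail-based decision: inspect the *finished* trace and keep it
    only if it is interesting (an error occurred or it was slow)."""
    total = sum(s.get("duration_ms", 0) for s in spans)
    has_error = any(s.get("error") for s in spans)
    return has_error or total >= latency_threshold_ms

fast_ok = [{"duration_ms": 40}, {"duration_ms": 60}]
slow    = [{"duration_ms": 300}, {"duration_ms": 450}]
failed  = [{"duration_ms": 10, "error": True}]

print(keep_trace(fast_ok), keep_trace(slow), keep_trace(failed))
# False True True
```

A head-based sampler would have to make this choice before any of the durations or errors were known, which is exactly why it can miss the useful traces.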
>
> Please share your thoughts on this and what solution should be preferred
> in a broader OpenStack's perspective.
> This is a step in the process of having an automated diagnostic solution
> for OpenStack cluster.
>
> [1] https://blueprints.launchpad.net/osprofiler/+spec/osprofiler-overhead-
> control
> [2] https://blueprints.launchpad.net/osprofiler/+spec/tail-based-coherent-
> sampling
> [3] https://blueprints.launchpad.net/osprofiler/+spec/asynchronous-trace-
> collection
>
> Thanks
> Rajul Kumar
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance][rally] Disabling Glance Testing in Rally gates

2017-07-13 Thread Boris Pavlovic
Hi stackers,


Unfortunately, what was discussed in the other thread (the situation in
Glance is critical) has happened.
Glance stopped working, and the Rally team is forced to disable testing of
it in the Rally gates.

P.S. This patch seems to be causing the problems:
https://github.com/openstack-dev/devstack/commit/1fa653635781cd975a1031e212b35b6c38196ba4

Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-26 Thread Boris Pavlovic
Mike,

I was recently helping an intern join the OpenStack community and make a
contribution.

I found that the current workflow is extremely complex, and I think not all
people who want to contribute can get through it.

The current workflow is:
- Go to Gerrit, sign in
- Find out how to contribute via Gerrit (fail at this because there is no SSH key)
- Find where in Gerrit to upload the SSH key (fail because there is no agreement)
- Find where in Gerrit to accept the License Agreement (fail because the
agreement is invalid and contact info should be provided in Gerrit)
- "Server can't accept contact information" (is what you see in Gerrit)
- Go to OpenStack.org, sign in (to fix the problem with Gerrit)
- Update contact information
- Try to contribute your first commit (if you already created it, you won't
be able to until you do git commit --amend, so git review can add the
Change-Id)

Overall it would take 1-2 days for people not familiar with OpenStack.


What about making a "Sign-Up" page:

1) A few steps: provide username, contact info, agreement, SSH key (and it
will do all the work for you: set up Gerrit, OpenStack, ...)
2) After finishing the form, one gets instructions for one's OS on how to
set up and properly run git review
3) Maybe a few tutorials (how to find a bug, how to test it, and where the
docs, devstack, ... are)

That would simplify the onboarding process...

Best regards,
Boris Pavlovic

On Mon, Jun 26, 2017 at 2:45 AM, Alexandra Settle 
wrote:

> I think this is a good idea :) thanks Mike. We get a lot of people coming
> to the docs chan or ML asking for help/where to start and sometimes it’s
> difficult to point them in the right direction.
>
>
>
> Just from experience working with contributor documentation, I’d avoid all
> screen shots if you can – updating them whenever the process changes
> (surprisingly often) is a lot of unnecessary technical debt.
>
>
>
> The docs team put a significant amount of effort in a few releases back
> writing a pretty comprehensive Contributor Guide. For the purposes you
> describe below, I imagine a lot of the content here could be adapted. The
> process of setting up for code and docs is exactly the same:
> http://docs.openstack.org/contributor-guide/index.html
>
>
>
> I also wonder if we could include a ‘what is openstack’ 101 for new
> contributors. I find that there is a **lot** of material out there, but
> it is often very hard to explain to people what each project does, how they
> all interact, why we install from different sources, why do we have
> official and unofficial projects etc. It doesn’t have to be seriously
> in-depth, but an overview that points people who are interested in the
> right directions. Often this will help people decide on what project they’d
> like to undertake.
>
>
>
> Cheers,
>
>
>
> Alex
>
>
>
> *From: *Mike Perez 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Friday, June 23, 2017 at 9:17 PM
> *To: *OpenStack Development Mailing List  openstack.org>
> *Cc: *Wes Wilson , "ild...@openstack.org" <
> ild...@openstack.org>, "knel...@openstack.org" 
> *Subject: *[openstack-dev] [docs][all][ptl] Contributor Portal and Better
> New Contributor On-boarding
>
>
>
> Hello all,
>
>
>
> Every month we have people asking on IRC or the dev mailing list having
> interest in working on OpenStack, and sometimes they're given different
> answers from people, or worse, no answer at all.
>
>
>
> Suggestion: lets work our efforts together to create some common
> documentation so that all teams in OpenStack can benefit.
>
>
>
> First it’s important to note that we’re not just talking about code
> projects here. OpenStack contributions come in many forms such as running
> meet ups, identifying use cases (product working group), documentation,
> testing, etc. We want to make sure those potential contributors feel
> welcomed too!
>
>
>
> What is common documentation? Things like setting up Git, the many
> accounts you need to setup to contribute (gerrit, launchpad, OpenStack
> foundation account). Not all teams will use some common documentation, but
> the point is one or more projects will use them. Having the common
> documentation worked on by various projects will better help prevent
> duplicated efforts, inconsistent documentation, and hopefully just more
> accurate information.
>
>
>
> A team might use special tools to do their work. These can also be
> integrated in this idea as well.
>
>
>
> Once we have common documentation we can have something like:
>
> 1. Choose your own adventure: I want to contribute by code
>
> 2. What service type are you interested in? (Database, Block storage,
> compute)

Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-19 Thread Boris Pavlovic
Hi,

Doesn't this look too complicated and a bit over-designed?

For example, why can't we store all the data in the memory of a single
Python application with a simple REST API, and have a
simple plugin mechanism for filtering? Basically, there is no problem at
all with storing it on a single host.

Even if we have 100k hosts and every host takes about 10KB, that is -> 1GB
of RAM (I could just use a phone).

There are easy ways to copy the state across different instances (sharing
updates).

And I thought that the Placement project was going to be such a centralized,
small, simple app for collecting all
the resource information and doing this very, very simple and easy placement
selection...
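The single-process design suggested above can be sketched in a few lines; the state fields and the filter are invented for illustration, not Nova's actual host state:

```python
class InMemoryPlacement:
    """Toy version of the idea: all host state in one process,
    selection = a chain of filter plugins over a dict."""

    def __init__(self):
        self.hosts = {}    # host name -> state dict, e.g. {"ram_mb": ...}
        self.filters = []  # callables(host_state, request) -> bool

    def update_host(self, name, **state):
        self.hosts.setdefault(name, {}).update(state)

    def add_filter(self, fn):
        self.filters.append(fn)

    def select(self, request):
        # Keep only hosts that pass every registered filter plugin.
        return [name for name, state in self.hosts.items()
                if all(f(state, request) for f in self.filters)]

def enough_ram(state, request):
    return state.get("ram_mb", 0) >= request["ram_mb"]

p = InMemoryPlacement()
p.add_filter(enough_ram)
p.update_host("node-1", ram_mb=2048)
p.update_host("node-2", ram_mb=512)
print(p.select({"ram_mb": 1024}))   # ['node-1']
```

Replicating this state to a second instance for HA would then be a matter of streaming the same update_host() calls to it.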


Best regards,
Boris Pavlovic

On Mon, Jun 19, 2017 at 5:05 PM, Edward Leafe  wrote:

> On Jun 19, 2017, at 5:27 PM, Jay Pipes  wrote:
>
>
> It was from the straw man example. Replacing the $FOO_UUID with UUIDs, and
> then stripping out all whitespace resulted in about 1500 bytes. Your
> example, with whitespace included, is 1600 bytes.
>
>
> It was the "per compute host" that I objected to.
>
>
> I guess it would have helped to see an example of the data returned for
> multiple compute nodes. The straw man example was for a single compute node
> with SR-IOV, NUMA and shared storage. There was no indication how multiple
> hosts meeting the requested resources would be returned.
>
> -- Ed Leafe
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally][no-admin] Finally Rally can be run without admin user

2017-06-13 Thread Boris Pavlovic
Hi stackers,

*Intro*

Initially, Rally was targeted at developers, which means running it as
admin was OK.
Admin was basically used to simplify preparing the environment for testing:
creating and setting up users/tenants, networks, quotas and other resources
that require the admin role.
It was also used to clean up all resources after the test was executed.

*Problem*

More and more operators were running Rally against their production
environments, and they were not happy about having to provide admin
credentials; they would rather prepare the environment by hand and provide
already existing users than allow Rally to mess around with admin rights =)

*Solution*

After years of refactoring we changed almost everything ;) and we managed to
keep Rally as simple as it was while supporting both operators' and
developers' needs.

Now Rally supports 3 different modes:

   - admin mode -> Rally manages users that are used for testing
   - admin + existing users mode -> Rally uses existing users for testing
   (if no user context)
   - *[new one] existing users mode *-> Rally uses existing users for
   testing

In every mode the input task will look the same; however, in
existing-users-only mode you won't be able to use plugins that require the
admin role.
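For the new existing-users-only mode, the deployment configuration looks roughly like this (the field values are examples; check the Rally docs for your version for the exact schema):

```json
{
    "type": "ExistingCloud",
    "auth_url": "http://keystone.example.net:5000/v3",
    "region_name": "RegionOne",
    "users": [
        {
            "username": "tester1",
            "password": "secret",
            "project_name": "demo",
            "user_domain_name": "Default",
            "project_domain_name": "Default"
        }
    ]
}
```

Note that there is no "admin" section at all; Rally will use only the listed users.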

This patch finishes the work: https://review.openstack.org/#/c/465495/

Thanks to everybody that was involved in this huge effort!


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Boris Pavlovic
Zane,


> This is your periodic reminder that we have ~50 applications sharing the
> same database and not only do none of them know how the deployer will
> configure the database, most will not even have an idea which set of
> assumptions the other ~49 are making about how the deployer will configure
> the database.


And how can someone who is trying to deploy OpenStack understand/find the
right config for the DB? Or is that an Ops task that the community doesn't
care about?

I would rather give Ops one config and say that everything should work
with it, and find a way to align everybody in the community and make it
the default for all projects.

Best regards,
Boris Pavlovic

On Tue, May 23, 2017 at 2:18 PM, Zane Bitter  wrote:

> On 21/05/17 15:38, Monty Taylor wrote:
>
>> One might argue that HA strategies are an operator concern, but in
>> reality the set of workable HA strategies is tightly constrained by how
>> the application works, and the pairing an application expecting one HA
>> strategy with a deployment implementing a different one can have
>> negative results ranging from unexpected downtime to data corruption.
>>
>
> This is your periodic reminder that we have ~50 applications sharing the
> same database and not only do none of them know how the deployer will
> configure the database, most will not even have an idea which set of
> assumptions the other ~49 are making about how the deployer will configure
> the database.
>
> (Ditto for RabbitMQ.)
>
> - ZB
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally] Project Update Slides (Pika)

2017-05-14 Thread Boris Pavlovic
Hi stackers,

Here are the slides from summit
https://docs.google.com/presentation/d/1QxsQh8E7Tr46KkLPV7QTwBAbTKFtB6dVEv-pQwg6LUA/edit#slide=id.p4

Here is the video
https://www.openstack.org/videos/boston-2017/project-update-rally


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-06 Thread Boris Pavlovic
Sagi,

I think Rally or Browbeat and other performance oriented solutions won't
> serve our needs, because we run TripleO CI on virtualized environment with
> very limited resources. Actually we are pretty close to full utilizing
> these resources when deploying openstack, so very little is available for
> test.


You can run Rally with any load, including just starting a single smallest VM.


It may be useful to run a "limited edition" of API tests that maximize
> coverage and don't duplicate, for example just to check service working
> basically, without covering all its functionality. It will take very little
> time (i.e. 5 tests for each service) and will give a general picture of
> deployment success. It will cover fields that are not covered by pingtest
> as well.


You can actually pick a few of the scenarios that we have in Rally and
cover most of the functionality.
If you specify what exactly you want to test, I can help with writing a
Rally task for that (it will use as few resources as possible).
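For example, a minimal Rally task booting and deleting a single tiny server could look roughly like this (the flavor and image names are environment-specific placeholders):

```yaml
---
NovaServers.boot_and_delete_server:
  - args:
      flavor:
        name: "m1.tiny"
      image:
        name: "cirros"
    runner:
      type: "constant"
      times: 1
      concurrency: 1
```

With times/concurrency of 1, this exercises the Nova boot path while consuming only a single small VM's worth of resources at a time.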


Best regards,
Boris Pavlovic



On Thu, Apr 6, 2017 at 2:38 AM, Dmitry Tantsur  wrote:

> On 04/05/2017 10:49 PM, Emilien Macchi wrote:
>
>> Greetings dear owls,
>>
>> I would like to bring back an old topic: running tempest in the gate.
>>
>> == Context
>>
>> Right now, TripleO gate is running something called pingtest to
>> validate that the OpenStack cloud is working. It's an Heat stack, that
>> deploys a Nova server, some volumes, a glance image, a neutron network
>> and sometimes a little bit more.
>> To deploy the pingtest, you obviously need Heat deployed in your
>> overcloud.
>>
>> == Problems:
>>
>> Although pingtest has been very helpful over the last years:
>> - easy to understand, it's an Heat template, like an OpenStack user
>> would do to deploy their apps.
>> - fast: the stack takes a few minutes to be created and validated
>>
>> It has some limitations:
>> - Limitation to what Heat resources support (example: some OpenStack
>> resources can't be managed from Heat)
>> - Impossible to run a dynamic workflow (test a live migration for example)
>>
>> == Solutions
>>
>> 1) Switch pingtest to Tempest run on some specific tests, with feature
>> parity of what we had with pingtest.
>> For example, we could imagine to run the scenarios that deploys VM and
>> boot from volume. It would test the same thing as pingtest (details
>> can be discussed here).
>> Each scenario would run more tests depending on the service that they
>> run (scenario001 is telemetry, so it would run some tempest tests for
>> Ceilometer, Aodh, Gnocchi, etc).
>> We should work at making the tempest run as short as possible, and the
>> close as possible from what we have with a pingtest.
>>
>
> A lot of work is going into Tempest itself and its various plugins, so
> that it becomes a convenient and universal tool to test OpenStack clouds.
> While we're not quite there in terms of convenience, it's hard to match the
> coverage of tempest + plugins. I'd prefer TripleO use (some subset of)
> Tempest test suite(s).
>
>
>> 2) Run custom scripts in TripleO CI tooling, called from the pingtest
>> (heat template), that would run some validations commands (API calls,
>> etc).
>> It has been investigated in the past but never implemented AFIK.
>>
>> 3) ?
>>
>
> Unless you want to duplicate all the work that goes into Tempest ecosystem
> now, this is probably not a good idea.
>
>
>> I tried to make this text short and go straight to the point, please
>> bring feedback now. I hope we can make progress on $topic during Pike,
>> so we can increase our testing coverage and detect deployment issues
>> sooner.
>>
>> Thanks,
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] Open HTML Task report in other machine

2017-03-24 Thread Boris Pavlovic
Fernando,

Yes of course.

``rally task report`` generates a standalone report that you can open
anywhere.

By default it will require internet access (for the JS libs), but you can use
--html-static
https://github.com/openstack/rally/blob/master/rally/cli/commands/task.py#L627
which merges the JS libs into the HTML report (making it work without
internet).


Best regards,
Boris Pavlovic

On Fri, Mar 24, 2017 at 6:27 AM, Fernando López 
wrote:

> Dear all,
>
> I just started to play with Rally and I wonder if I could get the output
> html file with the results of a executed task and open in other machine. Is
> it possible?
>
> Best regards,
> Fernando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack New Years Resolutions

2016-12-13 Thread Boris Pavlovic
Jean-Philippe,


> s/COBOL/go/g and it makes your comment even funnier.


It makes it scary =)

Best regards,
Boris Pavlovic

On Tue, Dec 13, 2016 at 2:24 AM, Jean-Philippe Evrard <
jean-philippe.evr...@rackspace.co.uk> wrote:

> s/COBOL/go/g and it makes your comment even funnier.
>
> Best regards,
> JP
>
> On 13/12/2016, 00:00, "Jay Pipes"  wrote:
>
> On 12/12/2016 06:40 PM, Nick Chase wrote:
> > OK, so if you were putting together New Year's Resolutions for
> OpenStack
> > development for 2017, what would they be?
>
> My resolution will be to rewrite Nova in COBOL. Oh wait, no, that's for
> April 1st, not New Years...
>
> Best,
> -jay
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> 
> Rackspace Limited is a company registered in England & Wales (company
> registered number 03897010) whose registered office is at 5 Millington
> Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy
> can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail
> message may contain confidential or privileged information intended for the
> recipient. Any dissemination, distribution or copying of the enclosed
> material is prohibited. If you receive this transmission in error, please
> notify us immediately by e-mail at ab...@rackspace.com and delete the
> original message. Your cooperation is appreciated.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Destructive / HA / fail-over scenarios

2016-11-30 Thread Boris Pavlovic
Adam,

So Rally actually filled the gap specified in that video.

Now there are so-called "Hooks" that allow you to trigger other tools/code
during the load (after some amount of time or on a specific iteration).

Basically, reliability testing & load testing during upgrades can be
implemented as a single Rally task, which is quite convenient.
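A hooks section in a task file might look roughly like this; the field names follow the in-progress spec and may differ in the final implementation, and the restart command is only an example:

```yaml
hooks:
  - name: sys_call
    description: "Restart a service in the middle of the load"
    args: "systemctl restart openstack-nova-api"
    trigger:
      name: event
      args:
        unit: iteration
        at: [10, 50, 100]
```

Here the shell command would fire at iterations 10, 50, and 100 of the running workload, so the report shows how latency and errors behave around each injected failure.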


Best regards,
Boris Pavlovic

On Wed, Nov 30, 2016 at 6:02 AM, Dulko, Michal 
wrote:

> On Mon, 2016-11-28 at 15:51 +0300, Timur Nurlygayanov wrote:
> > Hi OpenStack developers and operators,
> >
> > we are going to create the test suite for destructive testing of
> > OpenStack clouds. We want to hear your feedback and ideas
> > about possible destructive and failover scenarios which we need
> > to check.
>
> In Cinder we're pursuing A/A for our cinder-volume service. It would be
> useful to run some destructive tests on patch chain [1] to make sure no
> volume operations are failing while clustered cinder-volume service
> gets killed. In the future we should have a CI testing that in periodic
> zuul queue.
>
> [1] https://review.openstack.org/#/c/355968
>
> >
> > Which scenarios we need to check if we want to make sure that
> > some OpenStack cluster is configured in High Availability mode
> > and can be published as a "production/enterprise" cluster.
> >
> > Your ideas are welcome, let's discuss the ideas of test scenarios in
> > this email thread.
> >
> > The spec for High Availability testing is on review: [1]
> > The user story for destructive testing of OpenStack clouds is
> > on review: [2].
> >
> > Thank you!
> >
> > [1] https://review.openstack.org/#/c/399618/
> > [2] https://review.openstack.org/#/c/396142
> >
> > --
> >
> > Timur,
> > QA Manager
> > OpenStack Projects
> > Mirantis Inc
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cloud Reliability and Resilience for OpenStack (Fault Injection, Chaos Engineering, and Google SRE)

2016-09-01 Thread Boris Pavlovic
Hi Jorge,

The Rally team is working on a feature called "Hooks".
"Hooks" will allow you to use Rally to run workloads and inject arbitrary
actions (including via existing chaos frameworks).

Here is the patch: https://review.openstack.org/#/c/352276/
Here is merged spec:
https://github.com/openstack/rally/blob/master/doc/specs/in-progress/hook_section.rst


You are very welcome to join this effort and help the Rally team deliver it
faster.

Thanks!

Best regards,
Boris Pavlovic

On Wed, Aug 31, 2016 at 11:55 PM, Jorge Cardoso (Cloud Operations and
Analytics, IT R&D Division)  wrote:

>
>
> Hi all,
>
>
>
> Is there any work being done on Reliability for OpenStack using e.g.
> fault-injection, Chaos Engineering from Netflix, and Site Reliability
> Engineering principles from Google?
>
>
>
> I only found this page in the documentation http://docs.openstack.org/
> developer/performance-docs/test_results/reliability/index.html#openstack-
> reliability-testing.
>
>
>
> I am working on Cloud Reliability and Resilience and I would like to
> explore this area for OpenStack.
>
> You can check some of my interests and work at:
> http://jorge-cardoso.github.io/research/
>
>
>
> Any interest from you guys?
>
> Any suggestions on how to proceed?
>
>
>
>
>
> Best Regards,
>
> Jorge Cardoso
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Replace OSTF with Rally

2016-06-27 Thread Boris Pavlovic
Igor,


I wonder what are the benefits of using Rally then? We can run whatever we
> want by means of MCollective or Ansible. Is there something that could help
> us? I don't know, maybe some dashboard integration with per test results or
> running by tests by tag?


The benefits are in the Rally framework, engine, and integrated tooling,
which do very hard things to provide simple interfaces for writing simple
plugins that emulate complicated test cases.

The major benefits are:

*1) Generalization*
1.1) One tool with one reporting system and one output format for all kinds
of testing strategies (functional, load, perf, scale, ...)
1.2) One set of plugins (code) that can be used to generate all kinds of
testing strategies
1.3) One API for all kinds of testing strategies

*2) Simplicity*
2.1) Plugins are really simple to write, usually requiring only one method
to be implemented
2.2) Auto-discovery: adding a plugin == adding code in a special (or
specified) directory

*3) Reusability & data-driven approach*
3.1) Split code (plugins) & test cases (YAML files)
3.2) A test case is a mixture of different plugins
3.3) Plugins accept arguments

*4) Integrated tooling*
4.1) All results are persisted in the Rally DB and you can access them at
any moment
4.2) Results can be exported in different formats (you can even write your
own plugins to simplify integration)
4.3) Detailed HTML reports with task result overviews and trends
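As an illustration of the data-driven approach in (3), a Rally test case is just data fed to plugins. A sketch of a task file follows; the scenario name and argument values here are illustrative, so adjust them to the plugins actually available in your deployment:

```yaml
---
NovaServers.boot_and_delete_server:
  - args:
      flavor:
        name: "m1.tiny"
      image:
        name: "cirros-0.3.4-x86_64-uec"
    runner:
      type: "constant"
      times: 20
      concurrency: 2
    context:
      users:
        tenants: 2
        users_per_tenant: 2
```

The same plugin (scenario) can be reused for functional, load, or scale testing simply by changing the runner and context sections.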




Before switching Fuel from ostf to rally, I would like to see feature
> parity comparison. It's very necessary to understand how much work we need
> to spend to rewrite all our tests in rally way.


Totally agree, let's do it.


Best regards,
Boris Pavlovic



On Mon, Jun 27, 2016 at 8:58 AM, Vladimir Kuklin 
wrote:

> +1 to initial suggestion, but I guess we need to have a full feature
> equality (e.g. HA tests for mysql and rabbitmq replication) before
> switching to Rally.
>
> On Mon, Jun 27, 2016 at 6:17 PM, Sergii Golovatiuk <
> sgolovat...@mirantis.com> wrote:
>
>> Hi,
>>
>> Before switching Fuel from ostf to rally, I would like to see feature
>> parity comparison. It's very necessary to understand how much work we need
>> to spend to rewrite all our tests in rally way.
>>
>>
>>
>> --
>> Best regards,
>> Sergii Golovatiuk,
>> Skype #golserge
>> IRC #holser
>>
>> On Mon, Jun 27, 2016 at 4:32 PM, Alexander Kostrikov <
>> akostri...@mirantis.com> wrote:
>>
>>> Hello, everybody!
>>> Hello, Alex!
>>> >I thought Rally was more for benchmarking.  Wouldn't Tempest make more
>>> sense?
>>> Rally is a good tool with nice api/usage/extensibility.
>>> I really liked "up and running tests in 5 minutes" in Rally with clear
>>> picture of what I am doing.
>>> So, I 100% for a Rally as a QA.
>>>
>>> Another note:
>>> We will need to implement some HA tests, probably not in Rally.
>>>
>>> On Mon, Jun 27, 2016 at 4:57 PM, Andrey Kurilin 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Mon, Jun 27, 2016 at 4:46 PM, Igor Kalnitsky <
>>>> ikalnit...@mirantis.com> wrote:
>>>>
>>>>>
>>>>> > On Jun 27, 2016, at 16:23, Alex Schultz 
>>>>> wrote:
>>>>> >
>>>>> > I thought Rally was more for benchmarking.  Wouldn't Tempest make
>>>>> more sense?
>>>>>
>>>>> According to Rally wiki page [1], it seems they have a verification
>>>>> layer (Tempest so far). Hm, I wonder does it mean we will need to rewrite
>>>>> our scenarios for Tempest?
>>>>>
>>>>>
>>>> Rally consists of two main components: Rally Task and Rally
>>>> Verification. They are totally separated.
>>>> Task component is fully pluggable and you can run there whatever you
>>>> want in whatever you want way.
>>>> Verification component is hardcoded now. It was designed for
>>>> managing(install, configure) and launching(execute and store results)
>>>> Tempest. But we have a spec to make this component pluggable too.
>>>>
>>>>
>>>>> - igor
>>>>>
>>>>>
>>>>> [1] https://wiki.openstack.org/wiki/Rally
>>>>>

Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-10 Thread Boris Pavlovic
Morgan,


> When there were failures, the failures were both not looked at by the
> Rally team and was not performance reasons at the time, it was rally not
> able to be setup/run at all.


Please back this up, because I don't believe it's true. I agree there were such situations (very
rarely, actually) and we fixed them quickly (because we have voting
jobs in our CI pipelines), so in most cases Rally did not fail because
of a Rally issue.


OSProfiler etc had security concerns and issues that were basically left in
> "review state" after being given clear "do X to have it approved". I want
> to point out that once the performance team came back and addressed the
> issues we landed support for OSProfiler, and it is in keystone. It is not
> enabled by default (profiling should be opt in, and I stand by that), but
> you are correct we landed it.


There were no security issues, since we introduced the HMAC approach, which was
implemented in June 2014 *// 2 years ago*:
-
https://github.com/openstack/osprofiler/commit/76a99f2ccc32ea4426717333bbb75fb8b533

- Also, it has been possible to disable osprofiler since
https://github.com/openstack/osprofiler/commit/988909f112ffe79f8855c4713c2c791dd274bb8d
*Jul 2014 // 2 years ago*

So for some reason it took that long to land even something minimal in the
keystone code base (and, by the way, the osprofiler team was forced to modify it
in a very bad way)
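For context, the HMAC approach boils down to signing trace messages with a shared secret, so only holders of the key can enable profiling. A minimal sketch using Python's stdlib follows; the payload shape and key are made up for illustration and are not OSProfiler's actual wire format:

```python
import hashlib
import hmac


def sign(data: bytes, key: bytes) -> str:
    # Sign the trace payload with a shared secret (HMAC-SHA1 here).
    return hmac.new(key, data, hashlib.sha1).hexdigest()


def verify(data: bytes, key: bytes, signature: str) -> bool:
    # compare_digest() avoids leaking information via comparison timing.
    return hmac.compare_digest(sign(data, key), signature)


payload = b'{"base_id": "8ff0", "parent_id": "a3c1"}'  # illustrative trace info
secret = b"SECRET_KEY"

sig = sign(payload, secret)
assert verify(payload, secret, sig)            # signed request is accepted
assert not verify(payload, b"wrong-key", sig)  # forged request is rejected
```

A service that receives an unsigned or badly signed trace header simply ignores it, which is why profiling is effectively opt-in for anyone without the key.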



I'm super happy to see a consistent report leaving data about performance,
> specifically in a consistent environment that isn't going to vary massively
> between runs (hopefully). Longterm I'd like to also see this [if it isn't
> already] do a delta-over-time of keystone performance on merged patches, so
> we can see the timeline of performance.


By the way, Rally stores all results in its DB (it always has) and is
now capable of building performance trends across many runs

Best regards,
Boris Pavlovic

>
>
>
>
>
On Fri, Jun 10, 2016 at 3:58 PM, Morgan Fainberg 
wrote:

>
>
> On Fri, Jun 10, 2016 at 3:26 PM, Lance Bragstad 
> wrote:
>
>>
>>1. I care about performance. I just believe that a big hurdle has
>>been finding infrastructure that allows us to run performance tests in a
>>consistent manner. Dedicated infrastructure plays a big role in this,
>> which is hard (if not impossible) to obtain in the gate - making the gate
>>a suboptimal place for performance testing. Consistency is also an issue
>>because the gate is comprised of resources donated from several different
>>providers. Matt lays this out pretty well in his reply above. This sounds
>>like a TODO to hook rally into the keystone-performance/ansible pipeline,
>>then we would have rally and keystone running on bare metal.
>>
>> This was one of the BIGGEST reasons rally was not given much credence in
> keystone. The wild variations made the rally data mostly noise. We can't
> even tell if the data from similar nodes (same provider/same az) was
> available. This made it a best guess effort of "is this an issue with a
> node being slow, or the patch" at the time the gate was enabled. This is
> also why I wouldn't support re-enabling rally as an in-infra gate/check
> job. The data was extremely difficult to consume as a developer because I'd
> have to either directly run rally here locally (fine, but why waste infra
> resources then?) or try and correlate data from across different patches
> and different AZ providers. It's great to see this being addressed here.
>
>>
>>1.
>>2. See response to #5.
>>3. What were the changes made to keystone that caused rally to fail?
>>If you have some links I'd be curious to revisit them and improve them if 
>> I
>>can.
>>
>> When there were failures, the failures were both not looked at by the
> Rally team and was not performance reasons at the time, it was rally not
> able to be setup/run at all.
>
>>
>>1. Blocked because changes weren't reviewed? As far as I know
>>OSProfiler is in keystone's default pipeline.
>>
>> OSProfiler etc had security concerns and issues that were basically left
> in "review state" after being given clear "do X to have it approved". I
> want to point out that once the performance team came back and addressed
> the issues we landed support for OSProfiler, and it is in keystone. It is
> not enabled by default (profiling should be opt in, and I stand by that),
> but you are correct we landed it.
>
>>
>>1. It doesn't look like there are any open patches for rally
>>integration with keystone [0]. The closed ones have either been
>>merged [1][2][3][4] 

Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-10 Thread Boris Pavlovic
Lance,

I am just sharing how it looked from my side.
I really support your idea (no matter which tooling you pick: your own,
Rally, JMeter); it is very valuable, especially if it becomes a voting job.
This really should be done by someone.


Best regards,
Boris Pavlovic

On Fri, Jun 10, 2016 at 3:26 PM, Lance Bragstad  wrote:

>
>1. I care about performance. I just believe that a big hurdle has been
>finding infrastructure that allows us to run performance tests in a
>consistent manner. Dedicated infrastructure plays a big role in this,
> which is hard (if not impossible) to obtain in the gate - making the gate
>a suboptimal place for performance testing. Consistency is also an issue
>because the gate is comprised of resources donated from several different
>providers. Matt lays this out pretty well in his reply above. This sounds
>like a TODO to hook rally into the keystone-performance/ansible pipeline,
>then we would have rally and keystone running on bare metal.
>2. See response to #5.
>3. What were the changes made to keystone that caused rally to fail?
>If you have some links I'd be curious to revisit them and improve them if I
>can.
>4. Blocked because changes weren't reviewed? As far as I know
>OSProfiler is in keystone's default pipeline.
>5. It doesn't look like there are any open patches for rally
>integration with keystone [0]. The closed ones have either been
>merged [1][2][3][4] or abandon [5][6][7][8] because they are
>work-in-progress or unattended.
>
> I'm only looking for this bot to leave a comment. I don't intend on it
> being a voting job any time soon, it's just providing a datapoint for
> patches that we suspect to have an impact on performance. It's running on
> dedicated hardware, but only from a single service provider - so mileage
> may vary depending on where and how you run keystone. But, it does take us
> a step in the right direction. People don't have to use it if they don't
> want to.
>
> Thanks for the feedback!
>
> [0]
> https://review.openstack.org/#/q/project:openstack/keystone+message:%22%255E%2540rally%2540%22
> [1] https://review.openstack.org/#/c/240251/
> [2] https://review.openstack.org/#/c/188457/
> [3] https://review.openstack.org/#/c/188352/
> [4] https://review.openstack.org/#/c/90405/
> [5] https://review.openstack.org/#/c/301367/
> [6] https://review.openstack.org/#/c/188479/
> [7] https://review.openstack.org/#/c/98836/
> [8] https://review.openstack.org/#/c/91677/
>
> On Fri, Jun 10, 2016 at 4:26 PM, Boris Pavlovic  wrote:
>
>> Lance,
>>
>> It is amazing effort, I am wishing you good luck with Keystone team,
>> however i faced some issues when I started similar effort
>> about 3 years ago with Rally. Here are some points, that are going to be
>> very useful for you:
>>
>>1. I think that Keystone team doesn't care about performance &
>>scalability at all
>>2. Keystone team ignored/discard all help from Rally team to make
>>this effort successful
>>3. When Rally job started failing, because of introduced performance
>>issues in Keystone, they decided to remove job
>>    4. They blocked almost forever work on OSProfiler so we are blind and
>>can't see where is the issue in code
>>5. They didn't help to develop any Rally plugin or even review the
>>Rally test cases that we proposed to them
>>
>>
>> Best regards,
>> Boris Pavlovic
>>
>> On Mon, Jun 6, 2016 at 10:45 AM, Clint Byrum  wrote:
>>
>>> Excerpts from Brant Knudson's message of 2016-06-03 15:16:20 -0500:
>>> > On Fri, Jun 3, 2016 at 2:35 PM, Lance Bragstad 
>>> wrote:
>>> >
>>> > > Hey all,
>>> > >
>>> > > I have been curious about impact of providing performance feedback
>>> as part
>>> > > of the review process. From what I understand, keystone used to have
>>> a
>>> > > performance job that would run against proposed patches (I've only
>>> heard
>>> > > about it so someone else will have to keep me honest about its
>>> timeframe),
>>> > > but it sounds like it wasn't valued.
>>> > >
>>> > >
>>> > We had a job running rally for a year (I think) that nobody ever
>>> looked at
>>> > so we decided it was a waste and stopped running it.
>>> >
>>> > > I think revisiting this topic is valuable, but it raises a series of
>>> > > questions.
>>> > >
>

Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-10 Thread Boris Pavlovic
Lance,

It is amazing effort, I am wishing you good luck with Keystone team,
however i faced some issues when I started similar effort
about 3 years ago with Rally. Here are some points, that are going to be
very useful for you:

   1. I think that Keystone team doesn't care about performance &
   scalability at all
   2. Keystone team ignored/discard all help from Rally team to make this
   effort successful
   3. When Rally job started failing, because of introduced performance
   issues in Keystone, they decided to remove job
   4. They blocked almost forever work on OSProfiler so we are blind and
   can't see where is the issue in code
   5. They didn't help to develop any Rally plugin or even review the Rally
   test cases that we proposed to them


Best regards,
Boris Pavlovic

On Mon, Jun 6, 2016 at 10:45 AM, Clint Byrum  wrote:

> Excerpts from Brant Knudson's message of 2016-06-03 15:16:20 -0500:
> > On Fri, Jun 3, 2016 at 2:35 PM, Lance Bragstad 
> wrote:
> >
> > > Hey all,
> > >
> > > I have been curious about impact of providing performance feedback as
> part
> > > of the review process. From what I understand, keystone used to have a
> > > performance job that would run against proposed patches (I've only
> heard
> > > about it so someone else will have to keep me honest about its
> timeframe),
> > > but it sounds like it wasn't valued.
> > >
> > >
> > We had a job running rally for a year (I think) that nobody ever looked
> at
> > so we decided it was a waste and stopped running it.
> >
> > > I think revisiting this topic is valuable, but it raises a series of
> > > questions.
> > >
> > > Initially it probably only makes sense to test a reasonable set of
> > > defaults. What do we want these defaults to be? Should they be
> determined
> > > by DevStack, openstack-ansible, or something else?
> > >
> > >
> > A performance test is going to depend on the environment (the machines,
> > disks, network, etc), the existing data (tokens, revocations, users,
> etc.),
> > and the config (fernet, uuid, caching, etc.). If these aren't consistent
> > between runs then the results are not going to be usable. (This is the
> > problem with running rally on infra hardware.) If the data isn't
> realistic
> > (1000s of tokens, etc.) then the results are going to be at best not
> useful
> > or at worst misleading.
> >
>
> That's why I started the counter-inspection spec:
>
>
> http://specs.openstack.org/openstack/qa-specs/specs/devstack/counter-inspection.html
>
> It just tries to count operations, and graph those. I've, unfortunately,
> been pulled off to other things of late, but I do intend to loop back
> and hit this hard over the next few months to try and get those graphs.
>
> What we'd get initially is just graphs of how many messages we push
> through RabbitMQ, and how many rows/queries/transactions we push through
> mysql. We may also want to add counters like how many API requests
> happened, and how many retries happen inside the code itself.
>
> There's a _TON_ we can do now to ensure that we know what the trends are
> when something gets "slow", so we can look for a gradual "death by 1000
> papercuts" trend or a hockey stick that can be tied to a particular
> commit.
>
> > What does the performance test criteria look like and where does it live?
> > > Does it just consist of running tempest?
> > >
> > >
> > I don't think tempest is going to give us numbers that we're looking for
> > for performance. I've seen a few scripts and have my own for testing
> > performance of token validation, token creation, user creation, etc.
> which
> > I think will do the exact tests we want and we can get the results
> > formatted however we like.
> >
>
> Agreed that tempest will only give a limited view. Ideally one would
> also test things like "after we've booted 1000 vms, do we end up reading
> 1000 more rows, or 1000 * 1000 more rows.
>
> > From a contributor and reviewer perspective, it would be nice to have the
> > > ability to compare performance results across patch sets. I understand
> that
> > > keeping all performance results for every patch for an extended period
> of
> > > time is unrealistic. Maybe we take a daily performance snapshot against
> > > master and use that to map performance patterns over time?
> > >
> > >
> > Where are you planning to store the results?
> >
>
> Infra has a graphite/statsd clu

Re: [openstack-dev] [rally] "Failed to create the requested number of tenants" error

2016-06-09 Thread Boris Pavlovic
Nate,

This looks quite strange. Could you share the information from the keystone
catalog?

It seems you didn't set up an admin endpoint for keystone in that region.

Best regards,
Boris Pavlovic

On Thu, Jun 9, 2016 at 12:41 PM, Nate Johnston 
wrote:

> Rally folks,
>
> I am working with an engineer to get him up to speed on Rally on a new
> development.  He is trying out running a few tests from the samples
> directory, like samples/tasks/scenarios/nova/list-hypervisors.yaml - but
> he keeps getting the error "Completed: Exit context: `users`\nTask
> config is invalid: `Unable to setup context 'users': 'Failed to create
> the requested number of tenants.'`"
>
> This is against an Icehouse environment with Mitaka Rally; When I run
> Rally with debug logging I see:
>
> 2016-06-08 18:59:24.692 11197 ERROR rally.common.broker EndpointNotFound:
> admin endpoint for identity service in  region not found
>
> However I note that $OS_AUTH_URL is set in the Rally deployment... see
> http://paste.openstack.org/show/509002/ for the full log.
>
> Any ideas you could give me would be much appreciated.  Thanks!
>
> --N.
>
>
>


Re: [openstack-dev] [Rally] Could you share reference for Mitaka Upadates

2016-05-16 Thread Boris Pavlovic
Yuki,

Sorry for the late reply.
Here are the slides:
https://docs.google.com/presentation/d/1g5fd2BJXc40yienjCB0Fuc8NaB0e5pQO0XbUHQ1CY0s

Best regards,
Boris Pavlovic

On Tue, May 10, 2016 at 12:16 AM, Yuki Nisiwaki 
wrote:

> To: Mr. Boris
> cc: OpenStacker who is interested in Rally
>
>
> Hello, Mr Boris.
>
> I was attend your session on Austin Summit  "Rally: Overview of Mitaka
> results and Newton Goals".
>
> And I want to recheck out your presentation material in order to grasp
> current rally status.
>
> So could you share the presentation material of ""Rally: Overview of
> Mitaka results and Newton Goals" ?
>
>
> Yuki Nishiwaki
> NTT Communitions
> Technology development
> Cloud Core Technology Unit
>
>


Re: [openstack-dev] [nova] Proposing Andrey Kurilin for python-novaclient core

2016-04-13 Thread Boris Pavlovic
Great!

On Wed, Apr 13, 2016 at 2:56 PM, Sylvain Bauza  wrote:

>
>
> Le 13/04/2016 19:53, Matt Riedemann a écrit :
>
>> I'd like to propose that we make Andrey Kurilin core on python-novaclient.
>>
>> He's been doing a lot of the maintenance the last several months and a
>> lot of times is the first to jump on any major issue, does a lot of the
>> microversion work, and is also working on cleaning up docs and helping me
>> with planning releases.
>>
>> His work is here [1].
>>
>> Review stats for the last 4 months (although he's been involved in the
>> project longer than that) [2].
>>
>> Unless there is disagreement I plan to make Andrey core by the end of the
>> week.
>>
>> [1]
>> https://review.openstack.org/#/q/owner:akurilin%2540mirantis.com+project:openstack/python-novaclient
>> [2] http://stackalytics.com/report/contribution/python-novaclient/120
>>
>>
> +1.
>
>


Re: [openstack-dev] [Rally] Term "workload" has two clashing meanings

2016-04-11 Thread Boris Pavlovic
Alex,

I would suggest calling it "dataplane" because it obviously points to
dataplane testing.

Best regards,
Boris Pavlovic

On Mon, Apr 11, 2016 at 11:10 AM, Roman Vasilets 
wrote:

> Hi all, personally I want to suggest* crossload. *Concept is similar to
> cross training(training in two or more sports in order to improve fitness
> and performance, especially in a main sport.) in sport. By that template
> - crossload is load in two or more areas in order to improve durability
> and performance, especially in a main area.
> Thanks, Roman.
>
> On Mon, Apr 11, 2016 at 6:38 PM, Aleksandr Maretskiy <
> amarets...@mirantis.com> wrote:
>
>> Hi all,
>>
>> this is about terminology, we have term "workload" in Rally that appears
>> in two clashing meanings:
>>
>>  1. module rally.plugins.workload
>> <https://github.com/openstack/rally/tree/master/rally/plugins/workload>
>> which collects plugins for cross-VM testing
>>  2. workload replaces term "scenario" in our new input task format
>> <https://github.com/openstack/rally/blob/master/doc/specs/in-progress/new_rally_input_task_format.rst>
>> (task->scenarios is replaced with task->subtasks->workloads)
>>
>> Let's introduce new term as replacement of "1." (or maybe "2." but I
>> suppose this is not the best option).
>>
>> Maybe rename rally.plugins.workload to:
>>    rally.plugins.*vmload*
>>    rally.plugins.*vmperf*
>>    rally.plugins.*shaker*
>>    rally.plugins.*vmworkload*
>>    ...more ideas?
>>
>>
>


[openstack-dev] [tc][ptl][keystone] Proposal to split authentication part out of Keystone to separated project

2016-04-06 Thread Boris Pavlovic
Hi stackers,

I would like to suggest a very simple idea: splitting the authentication
part of Keystone out into a separate project.

Such a change has 2 positive outcomes:
1) It will be quite simple to create a scalable, high-performance
authentication service based on very mature projects like Kerberos[1] and
OpenLDAP[2].

2) This will reduce the scope of Keystone, which means 2 things:
2.1) a smaller code base that has fewer issues and is simpler to test
2.2) the Keystone team would be able to concentrate more on fixing the
perf/scalability issues of authorization, which is crucial at the moment
for large clouds.

Thoughts?

[1] http://web.mit.edu/kerberos/
[2] http://ldapcon.org/2011/downloads/hummel-slides.pdf

Best regards,
Boris Pavlovic


[openstack-dev] [python-openstackclient] I tried it and....

2016-03-28 Thread Boris Pavlovic
Hi stackers,

Recently I tried openstackclient and it has amazing UX!

Thanks to everybody involved in this!
You are doing a great job, keep going!


Best regards,
Boris Pavlovic


Re: [openstack-dev] [Tempest] [Devstack] Where to keep tempest configuration?

2016-03-19 Thread Boris Pavlovic
There is another way to deal with this as well.

In Rally we have a "rally verify" command that you can use to run Tempest
and auto-configure it.
We can just extend it to new projects; that way we simplify life a lot
for everybody who wants to use Tempest (ops, devops, devs, ...)



Best regards,
Boris Pavlovic

On Thu, Mar 17, 2016 at 4:50 AM, Jordan Pittier 
wrote:

>
>
> On Thu, Mar 17, 2016 at 12:24 PM, Vasyl Saienko 
> wrote:
>
>> Hello Community,
>>
>> We started using tempest/devstack plugins. They allows to do not bother
>> other teams when Project specific changes need to be done. Tempest
>> configuration is still performed at devstack [0].
>> So I would like to rise the following questions:
>>
>>
>>- Where we should keep Projects specific tempest configuration?
>>Example [1]
>>
>> This iniset calls should be made from a devstack-plugin. See [1]
>
>>
>>- Where to keep shared between projects tempest configuration?
>>Example [2]
>>
>> Again, in a devstack plugin. You shouldn't make the iniset call directly
> but instead define some variables in a "setting" file (sourced by
> devstack). Hopefully these variables will be read by lib/tempest (in
> devstack) when the iniset calls will be made.  See [2]
>
>>
>>-
>>
>> As for me it would be good to move Projects related tempest configuration
>> to Projects repositories.
>>
> That's the general idea. It should be possible already right now.
>
>>
>> [0] https://github.com/openstack-dev/devstack/blob/master/lib/tempest
>> [1]
>> https://github.com/openstack-dev/devstack/blob/master/lib/tempest#L509-L513
>> [2]
>> https://github.com/openstack-dev/devstack/blob/master/lib/tempest#L514-L523
>>
>> Thank you in advance,
>> Vasyl Saienko
>>
>> [1]
> https://github.com/openstack/manila/blob/9834c802b8bf565099abf357fe054e086978cf6e/devstack/plugin.sh#L665
>
> [2]
> https://github.com/openstack/devstack-plugin-ceph/blob/18ee55a0a7de7948c41d066cd4a692e56fe8c425/devstack/settings#L14
>


Re: [openstack-dev] [all][infra] revert new gerrit

2016-03-18 Thread Boris Pavlovic
Hi everybody,

What if we just created a new project for an alternative Gerrit WebUI and
used it?
With the current set of web frameworks, I don't think it would be too hard.

Best regards,
Boris Pavlovic

On Fri, Mar 18, 2016 at 9:50 AM, Andrew Laski  wrote:

>
>
>
> On Fri, Mar 18, 2016, at 10:31 AM, Andrey Kurilin wrote:
>
> Hi all!
> I want to start this thread because I'm tired. I spent a lot of time, but
> I can't review as easy as it was with old interface. New Gerrit is awful.
> Here are several issues:
>
> * It is not possible to review patches at mobile phone. "New" "modern"
> theme is not adopted for small screens.
> * Leaving comments is a hard task. Position of page can jump anytime.
> * It is impossible to turn off hot-keys. position of page changed->I don't
> see that comment pop-up is closed->continue type several letters->make
> unexpected things(open edit mode, modify something, save, exit...)
> * patch-dependency tree is not user-friendly
> * summary table doesn't include status of patch(I need list to the end of
> a page to know if patch is merged or not)
> * there is no button "Comment"/"Reply" at the end of page(after all
> comments).
> * it is impossible to turn off "new" search mechanism
>
> Does it possible to return old, classic theme? It was a good time when we
> have old and new themes together...
>
>
> I spent a little bit of time investigating the possibility of a chrome
> extension to turn off the keyboard shortcuts and search mechanism a little
> while ago. I eventually gave up but what I learned is that those changes
> are apparently related to the inline edit ability that was added. The edit
> window only loads in what is currently visible so regular browser search
> would not work.
>
> I've adapted to the new interface and really like some of the new
> capabilities it provides, but having the page jump around while I'm
> commenting has been a huge annoyance.
>
>
>
>
> --
> Best regards,
> Andrey Kurilin.
>


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-16 Thread Boris Pavlovic
Nikhil,

Thank you for rising this question.

IMHO quotas should be moved into a separate service (this is the right
microservices approach).

It will make a lot of things simpler:
1) it removes a lot of logic/code from the projects
2) cross-project atomic quota reservation becomes possible
   (e.g. if we would like to reserve all required quotas before running a
Heat stack)
3) it will have better UX (you can change projects' quotas from one place,
in a unified way)
4) simpler migrations for the projects (we don't need to maintain DB
migrations in each project);
just imagine a change in a quotas lib that requires DB migrations: we would
need to run migrations in every project.
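The atomic, all-or-nothing reservation in (2) can be sketched as a toy model. The class name, method, and resource names below are purely illustrative, not an existing OpenStack API:

```python
class QuotaExceeded(Exception):
    """Raised when a reservation would push a resource over its limit."""


class QuotaService:
    # Toy in-memory model of a standalone quota service.
    def __init__(self, limits):
        self.limits = dict(limits)
        self.used = {resource: 0 for resource in limits}

    def reserve(self, requests):
        # All-or-nothing: first check every resource, then apply.
        # If any check fails, no usage is recorded at all.
        for resource, amount in requests.items():
            if self.used[resource] + amount > self.limits[resource]:
                raise QuotaExceeded(resource)
        for resource, amount in requests.items():
            self.used[resource] += amount


quotas = QuotaService({"instances": 10, "volumes": 5})
quotas.reserve({"instances": 2, "volumes": 1})        # e.g. for a Heat stack
try:
    quotas.reserve({"instances": 1, "volumes": 10})   # volumes over limit
except QuotaExceeded:
    pass
assert quotas.used == {"instances": 2, "volumes": 1}  # nothing was taken
```

A real service would of course need per-project scoping and transactional storage, but the point stands: one place to check and commit quotas for resources spanning several projects.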


Best regards,
Boris Pavlovic


On Tue, Mar 15, 2016 at 11:25 PM, Nikhil Komawar 
wrote:

> Hello everyone,
>
> tl;dr;
> I'm writing to request some feedback on whether the cross project Quotas
> work should move ahead as a service or a library or going to a far
> extent I'd ask should this even be in a common repository, would
> projects prefer to implement everything from scratch in-tree? Should we
> limit it to a guideline spec?
>
> But before I ask anymore, I want to specifically thank Doug Hellmann,
> Joshua Harlow, Davanum Srinivas, Sean Dague, Sean McGinnis and  Andrew
> Laski for the early feedback that has helped provide some good shape to
> the already discussions.
>
> Some more context on what the happenings:
> We've this in progress spec [1] up for providing context and platform
> for such discussions. I will rephrase it to say that we plan to
> introduce a new 'entity' in the Openstack realm that may be a library or
> a service. Both concepts have trade-offs and the WG wanted to get more
> ideas around such trade-offs from the larger community.
>
> Service:
> This would entail creating a new project and will introduce managing
> tables for quotas for all the projects that will use this service. For
> example if Nova, Glance, and Cinder decide to use it, this 'entity' will
> be responsible for handling the enforcement, management and DB upgrades
> of the quotas logic for all resources for all three projects. This means
> less pain for projects during the implementation and maintenance phase,
> holistic view of the cloud and almost a guarantee of best practices
> followed (no clutter or guessing around what different projects are
> doing). However, it results in a big dependency; all projects rely on
> this one service for correct enforcement, race avoidance (if they do not
> incline towards implementing some of that in-tree) and DB
> migrations/upgrades. It will be at the core of the cloud and prone to
> attack vectors, bugs and margin of error.
>
> Library:
> A library could be thought of in two different ways:
> 1) Something that does not deal with backed DB models, provides a
> generic enforcement and management engine. To think ahead a little bit
> it may be a ABC or even a few standard implementation vectors that can
> be imported into a project space. The project will have its own API for
> quotas and the drivers will enforce different types of logic, e.g. a flat
> quota driver or a hierarchical quota driver with custom/project-specific
> logic in the project tree. The project maintains its own DB and
> upgrades thereof.
> 2) A library that has models for DB tables that the project can import
> from. Thus the individual projects will have a handy outline of what the
> tables should look like, implicitly considering the right table values,
> arguments, etc. The project has its own API and implements drivers in-tree
> by importing this semi-defined structure. The project maintains its own
> upgrades but will be somewhat influenced by the common repo.
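To make option (1) above concrete, here is a minimal, hypothetical sketch of what such an ABC with pluggable drivers could look like. All class and method names here are invented for illustration only and are not from the spec:

```python
import abc


class QuotaDriver(abc.ABC):
    """Hypothetical base class for option (1): enforcement logic with
    no backed DB models; the project supplies its own storage."""

    @abc.abstractmethod
    def get_limit(self, project_id, resource):
        """Return the configured limit for a resource, or None if unlimited."""

    @abc.abstractmethod
    def get_usage(self, project_id, resource):
        """Return current usage; the project reads this from its own DB."""

    def check(self, project_id, resource, requested):
        """Generic enforcement shared by all drivers."""
        limit = self.get_limit(project_id, resource)
        if limit is None:
            return True
        return self.get_usage(project_id, resource) + requested <= limit


class FlatQuotaDriver(QuotaDriver):
    """Flat (non-hierarchical) driver backed by in-memory dicts for the demo."""

    def __init__(self, limits, usage):
        self._limits = limits   # {(project, resource): limit}
        self._usage = usage     # {(project, resource): used}

    def get_limit(self, project_id, resource):
        return self._limits.get((project_id, resource))

    def get_usage(self, project_id, resource):
        return self._usage.get((project_id, resource), 0)


driver = FlatQuotaDriver({("p1", "volumes"): 10}, {("p1", "volumes"): 8})
print(driver.check("p1", "volumes", 2))  # True: 8 + 2 <= 10
print(driver.check("p1", "volumes", 3))  # False: 8 + 3 > 10
```

A hierarchical driver would subclass the same base and only change how limits and usage are resolved, which is the point of keeping enforcement generic.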
>
> Library would keep things simple for the common repository and sourcing
> of code can be done asynchronously as per project plans and priorities
> without having a strong dependency. On the other hand, there is a
> likelihood of re-implementing similar patterns in different projects
> with individual projects taking responsibility to keep things up to
> date. Attack vectors, bugs and margin of error are project responsibilities.
>
> Third option is to avoid all of this and simply give guidelines, best
> practices, right packages to each projects to implement quotas in-house.
> Somewhat undesirable at this point, I'd say. But we're all ears!
>
> Thank you for reading and I anticipate more feedback.
>
> [1] https://review.openstack.org/#/c/284454/
>
> --
>
> Thanks,
> Nikhil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http:

[openstack-dev] [Rally] PTL candidacy

2016-03-15 Thread Boris Pavlovic
Hi stackers,

I'm announcing my candidacy for PTL for rally for the Newton release cycle.

My goals are the same:

- Work on road map to capture everybody's use cases and align them
- Continue working on improving our review process and CI
- Concentrate efforts on addressing tech debt
  (fixing bugs, cleaning up architecture and making it more testable)
- Make contribution to Rally even more open (make road map more open)
- Keep doing regular releases every 2 weeks

Plans for the next releases:

- Finish work that addresses Rally scalability issues
  (only one task left: storing chunks of results to the DB)
- Finish distributed runner
  (we did all the changes in the framework except storing results in chunks;
   we will just need to implement a new runner plugin)
- Finish generalization of Rally
  (make Rally suitable for testing of everything, not only OpenStack)
- Split Rally Core & Rally OpenStack plugins into 2 repos
- Finish work on workload framework
  (create tests for network and disk testing)
- Improve Rally Certification Tasks
- Finish work on rally task v2 format
- Implement multi-scenario load
- Rally as a Service
- Rally task Trends and Compare reports
- Rally export results functionality (CLI and plugins for many systems)
- Disaster cleanup
  (be able to clean up an env no matter what happened)
- many other interesting tasks


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common RPC Message Trace Mechanism

2016-03-06 Thread Boris Pavlovic
Xuanzhou,

I am not sure what you mean by "trace". But if you need something that
allows you to do cross-service/project tracing, then you should take a look at
osprofiler:
https://github.com/openstack/osprofiler

Best regards,
Boris Pavlovic

On Sun, Mar 6, 2016 at 8:15 PM, Xuanzhou Perry Dong 
wrote:

> Hi,
>
> I am looking for a common RPC message trace mechanism in oslo_messaging.
> This message trace mechanism needs to be common to all drivers. Currently
> some documentation mentions that oslo_messaging_amqp.trace can activate the
> message trace (say,
> http://docs.openstack.org/liberty/config-reference/content/networking-configuring-rpc.html).
> But it seems that this oslo_messaging_amqp.trace is only available to the
> Proton driver.
>
> Do I miss any existing common RPC message trace mechanism in oslo? If
> there is no such mechanism, I would propose to create such a mechanism for
> oslo.
>
> Any response is appreciated.
> Thanks.
> Best Regards,
> Xuanzhou Perry Dong
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [cinder][all] Integration python-*client tests on gates

2016-03-02 Thread Boris Pavlovic
Hi,

It's still not clear to me why we can't just add Rally jobs with
scenarios related to the specific project.
They would run quite fast and cover the CLI (instantly) with good
integration/functional testing.


Best regards,
Boris Pavlovic

On Wed, Mar 2, 2016 at 4:52 AM, Sean Dague  wrote:

> On 03/02/2016 07:34 AM, Ivan Kolodyazhny wrote:
> > Sean,
> >
> > I've mentioned above, that current tempest job runs ~1429 tests and only
> > about 10 of them uses cinderclient. It tooks a lot of time without any
> > benefits for cinder, e.g.: tests like tempest.api.network.* verifies
> > Neutron, not python-cinderclient.
>
> We can say that about a lot of things in that stack. For better or
> worse, that's where our testing is. It's a full stack same set of tests
> against all these components which get used. The tempest.api.network
> tests are quite quick. The biggest time hitters in the runs are scenario
> tests, many of which are volumes driven.
>
> 2016-02-12 19:07:46.277 |
>
> tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedServerOps.test_server_connectivity_reboot[compute,id-7b6860c2-afa3-4846-9522-adeb38dfbe08,network]
>  193.523
> 2016-02-12 19:07:46.277 |
>
> tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern[compute,id-557cd2c2-4eb8-4dce-98be-f86765ff311b,image,smoke,volume]
> 150.766
> 2016-02-12 19:07:46.277 |
>
> tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern[compute,id-557cd2c2-4eb8-4dce-98be-f86765ff311b,image,smoke,volume]
>   136.834
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_cross_tenant_traffic[compute,id-e79f879e-debb-440c-a7e4-efeda05b6848,network]
>107.045
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_slaac[compute,id-9178ad42-10e4-47e9-8987-e02b170cc5cd,network]
> 101.252
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless[compute,id-cf1c4425-766b-45b8-be35-e2959728eb00,network]
>   99.041
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops[compute,id-f323b3ba-82f8-4db7-8ea6-6a895869ec49,network,smoke]
> 96.954
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_volume_backed_instance[compute,id-c1b6318c-b9da-490b-9c67-9339b627271f,image,network,volume]
> 95.120
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario[compute,id-bdbb5441-9204-419d-a225-b4fdbfb1a1a8,image,network,volume]
>86.165
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern[compute,id-608e604b-1d63-4a82-8e3e-91bc665c90b4,image,network]
>   85.422
>
>
> If you would like to pitch in on an optimization strategy for all the
> components, that would be great. But this needs to be thought about in
> those terms. It would be great to stop testing 2 versions of cinder API
> in every run, for instance. That would be super helpful to everyone as
> those Boot from volume tests take over 2 minutes each.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-02 Thread Boris Pavlovic
Hi,

I will try to be short.

- A voting unit test coverage job is ready, and you can just use it as is
from the Rally source code:
   you need this file
https://github.com/openstack/rally/blob/master/tests/ci/cover.sh
   and this change in tox:
https://github.com/openstack/rally/blob/master/tox.ini#L51-L52

- Rally is in the gates, and it's easy to add jobs in any project. If you have
any problems with this,
  just ping me or someone from the Rally team (or just write a comment in
the openstack-rally IRC channel)

- Rally was a performance tool, however that changed and now it is more
like a common testing
  framework that allows doing various kinds of testing (perf, volume,
stress, functional, ...)

- In Rally we were testing all plugins with relatively small concurrency
(already for more than 1.5 years),
  and I can say that we faced a lot of concurrency issues (and we are
still facing them).
  However I can't guarantee that we are catching 100% of cases
  (though catching most of the issues is better than nothing)



Best regards,
Boris Pavlovic

On Wed, Mar 2, 2016 at 7:30 AM, Michał Dulko  wrote:

> On 03/02/2016 04:11 PM, Gorka Eguileor wrote:
> > On 02/03, Ivan Kolodyazhny wrote:
> >> Eric,
> >>
> >> There are Gorka's patches [10] to remove API Races
> >>
> >>
> >> [10]
> >>
> https://review.openstack.org/#/q/project:openstack/cinder+branch:master+topic:fix/api-races-simplified
> >>
> > I looked at Rally a long time ago so apologies if I'm totally off base
> > here, but it looked like it was a performance evaluation tool, which
> > means that it probably won't help to check for API Races (at least I
> > didn't see how when I looked).
> >
> > Many of the API races only happen if you simultaneously try the same
> > operation multiple times against the same resource or if there are
> > different operations that are trying to operate on the same resource.
> >
> > In the first case, if Rally allowed it, we could test it because we know
> > only 1 of the operations should succeed; but in the second case, when we
> > are talking about preventing races from different operations, there is no
> > way to know what the result should be, since the order in which those
> > operations are executed on each test run will determine which one will
> > fail and which one will succeed.
> >
> > I'm not trying to go against the general idea of adding rally tests, I
> > just think that they won't help in the case of the API races.
>
> You're probably right - Rally would need to cache API responses to
> parallel runs, predict the result of accepted requests (these which
> haven't received VolumeIsBusy) and then verify it. In case of API race
> conditions things explode inside the stack, and not on the API response
> level. The issue is that two requests, that shouldn't ever be accepted
> together, get positive API response.
>
> I cannot say it's impossible to implement a situation like that as Rally
> resource, but definitely it seems non-trivial to verify if result is
> correct.
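The "only 1 of the operations should succeed" property discussed above can at least be demonstrated in miniature. The sketch below is a toy model (no Rally, no Cinder; all names invented) showing the kind of assertion a concurrent test run would need to make against an API that guards a state transition atomically:

```python
import threading


class FakeVolumeAPI:
    """Toy API: N concurrent requests race to start an operation on one
    volume. With an atomic check-and-set, exactly one request wins,
    which is the property a concurrent test run would assert."""

    def __init__(self):
        self._lock = threading.Lock()
        self.status = "available"

    def start_operation(self):
        # The check and the state transition happen under one lock, so
        # two concurrent callers can never both pass the check.
        with self._lock:
            if self.status != "available":
                return False  # the real API would answer VolumeIsBusy
            self.status = "extending"
            return True


api = FakeVolumeAPI()
results = []
threads = [threading.Thread(target=lambda: results.append(api.start_operation()))
           for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results.count(True))  # 1 -> no race; >1 would mean the API raced
```

As Gorka notes, this only covers the "same operation repeated" case; races between *different* operations have order-dependent outcomes and are much harder to assert on.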
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [mitaka][hackathon] Mitaka Bug Smash Hackathon in Bay Area (March 7-9)

2016-02-17 Thread Boris Pavlovic
Just making sure that everybody saw this topic.

Best regards,
Boris Pavlovic

On Thu, Feb 11, 2016 at 11:21 AM, Boris Pavlovic  wrote:

> Hi stackers,
>
> If you are in the Bay Area and you would like to work together with your
> friends from the community on fixing non-trivial bugs, you have a great
> chance.
>
> There is going to be a special event, the "Mitaka bug smash".
> Here is the full information:
> https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka
>
> *If you would like to take part and you are in the Bay Area, please
> register here:*
>
> https://www.eventbrite.com/e/global-openstack-bug-smash-bay-area-tickets-21241532997?utm_source=eb_email&utm_medium=email&utm_campaign=new_event_email&utm_term=viewmyevent_button
>
> Also, please note here which project you are interested in:
> https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka-BayArea
>
>
>
> Best regards,
> Boris Pavlovic
>
>
>
>


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-14 Thread Boris Pavlovic
Yingxin,


Basically, what we implemented was next:

- Scheduler consumes RPC updates from Computes
- Scheduler keeps world state in memory (and each message from a compute is
treated as an incremental update)
- Incremental updates are shared across multiple instances of schedulers
  (so each message from the computes is consumed only once)
- Schema-less host state (to be able to use a single scheduler service for
all resources)

^ All this was done in a backward-compatible way and it was really easy to
migrate.
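A minimal sketch of the update model described above; all names are illustrative and this is not the actual blueprint code:

```python
class SchedulerCache:
    """Sketch: schema-less host state kept in memory and patched by
    incremental RPC updates from computes (names are illustrative)."""

    def __init__(self):
        self._hosts = {}   # host name -> free-form state dict
        self._seq = {}     # host name -> last applied update sequence

    def apply_update(self, host, seq, delta):
        """Treat each compute message as an incremental update; drop
        stale or duplicate messages so the same update stream can be
        shared safely by several scheduler instances."""
        if seq <= self._seq.get(host, -1):
            return False
        self._hosts.setdefault(host, {}).update(delta)
        self._seq[host] = seq
        return True

    def state(self, host):
        return dict(self._hosts.get(host, {}))


cache = SchedulerCache()
cache.apply_update("node-1", 1, {"free_ram_mb": 2048, "vcpus_free": 8})
cache.apply_update("node-1", 2, {"free_ram_mb": 1024})
cache.apply_update("node-1", 1, {"free_ram_mb": 2048})  # stale duplicate, ignored
print(cache.state("node-1"))  # {'free_ram_mb': 1024, 'vcpus_free': 8}
```

The schema-less dict is what lets one scheduler service handle arbitrary resource types without per-resource models.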


If this had been accepted, we were planning to work on making the scheduler
not depend on Nova (which is actually quite a simple task after those
changes) and moving that code outside of Nova.

So the solutions are quite similar overall.
I hope you'll have more luck getting them in upstream.


Best regards,
Boris Pavlovic

On Sun, Feb 14, 2016 at 11:08 PM, Cheng, Yingxin 
wrote:

> Thanks Boris, the idea is quite similar in “Do not have db accesses during
> scheduler decision making” because db blocks are introduced at the same
> time, this is very bad for the lock-free design of nova scheduler.
>
>
>
> Another important idea is that “Only compute node knows its own final
> compute-node resource view” or “The accurate resource view only exists at
> the place where it is actually consumed.” I.e., The incremental updates can
> only come from the actual “consumption” action, no matter where it is(e.g.
> compute node, storage service, network service, etc.). Borrow the terms
> from resource-provider, compute nodes can maintain its accurate version of
> “compute-node-inventory” cache, and can send incremental updates because it
> actually consumes compute resources, furthermore, storage service can also
> maintain an accurate version of “storage-inventory” cache and send
> incremental updates if it also consumes storage resources. If there are
> central services in charge of consuming all the resources, the accurate
> cache and updates must come from them.
>
>
>
> The third idea is “compatibility”. This prototype focuses on a very small
> scope by only introducing a new host_manager driver “shared_host_manager”
> with minor other changes. The driver can be changed back to “host_manager”
> very easily. It can also run with filter schedulers and caching schedulers.
> Most importantly, the filtering and weighing algorithms are kept unchanged.
> So more changes can be introduced for the complete version of “shared state
> scheduler” because it is evolving in a gradual way.
>
>
>
>
>
> Regards,
>
> -Yingxin
>
>
>
> *From:* Boris Pavlovic [mailto:bo...@pavlovic.me]
> *Sent:* Monday, February 15, 2016 1:59 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [nova] A prototype implementation towards
> the "shared state scheduler"
>
>
>
> Yingxin,
>
>
>
> This looks quite similar to the work of this bp:
>
> https://blueprints.launchpad.net/nova/+spec/no-db-scheduler
>
>
>
> It's really nice that somebody is still trying to push scheduler
> refactoring in this way.
>
> Thanks.
>
>
>
> Best regards,
>
> Boris Pavlovic
>
>
>
> On Sun, Feb 14, 2016 at 9:21 PM, Cheng, Yingxin 
> wrote:
>
> Hi,
>
>
>
> I’ve uploaded a prototype https://review.openstack.org/#/c/280047/ to
> testify its design goals in accuracy, performance, reliability and
> compatibility improvements. It will also be an Austin Summit Session if
> elected:
> https://www.openstack.org/summit/austin-2016/vote-for-speakers/Presentation/7316
>
>
>
> I want to gather opinions about this idea:
>
> 1. Is this feature possible to be accepted in the Newton release?
>
> 2. Suggestions to improve its design and compatibility.
>
> 3. Possibilities to integrate with resource-provider bp series: I know
> resource-provider is the major direction of Nova scheduler, and there will
> be fundamental changes in the future, especially according to the bp
> https://review.openstack.org/#/c/271823/1/specs/mitaka/approved/resource-providers-scheduler.rst.
> However, this prototype proposes a much faster and compatible way to make
> schedule decisions based on scheduler caches. The in-memory decisions are
> made at the same speed with the caching scheduler, but the caches are kept
> consistent with compute nodes as quickly as possible without db refreshing.
>
>
>
> Here is the detailed design of the mentioned prototype:
>
>
>
> >>
>
> Background:
>
> The host state cache maintained by host manager is the scheduler resource
> view during schedule decision making. It is updated whenever a request is
> received[1], 

Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-14 Thread Boris Pavlovic
Yingxin,

This looks quite similar to the work of this bp:
https://blueprints.launchpad.net/nova/+spec/no-db-scheduler

It's really nice that somebody is still trying to push scheduler
refactoring in this way.
Thanks.

Best regards,
Boris Pavlovic

On Sun, Feb 14, 2016 at 9:21 PM, Cheng, Yingxin 
wrote:

> Hi,
>
>
>
> I’ve uploaded a prototype https://review.openstack.org/#/c/280047/ to
> testify its design goals in accuracy, performance, reliability and
> compatibility improvements. It will also be an Austin Summit Session if
> elected:
> https://www.openstack.org/summit/austin-2016/vote-for-speakers/Presentation/7316
>
>
>
> I want to gather opinions about this idea:
>
> 1. Is this feature possible to be accepted in the Newton release?
>
> 2. Suggestions to improve its design and compatibility.
>
> 3. Possibilities to integrate with resource-provider bp series: I know
> resource-provider is the major direction of Nova scheduler, and there will
> be fundamental changes in the future, especially according to the bp
> https://review.openstack.org/#/c/271823/1/specs/mitaka/approved/resource-providers-scheduler.rst.
> However, this prototype proposes a much faster and compatible way to make
> schedule decisions based on scheduler caches. The in-memory decisions are
> made at the same speed with the caching scheduler, but the caches are kept
> consistent with compute nodes as quickly as possible without db refreshing.
>
>
>
> Here is the detailed design of the mentioned prototype:
>
>
>
> >>
>
> Background:
>
> The host state cache maintained by host manager is the scheduler resource
> view during schedule decision making. It is updated whenever a request is
> received[1], and all the compute node records are retrieved from db every
> time. There are several problems in this update model, proven in
> experiments[3]:
>
> 1. Performance: The scheduler performance is largely affected by db access
> in retrieving compute node records. The db block time of a single request
> is 355ms in average in the deployment of 3 compute nodes, compared with
> only 3ms in in-memory decision-making. Imagine there could be at most 1k
> nodes, even 10k nodes in the future.
>
> 2. Race conditions: This is not only a parallel-scheduler problem, but
> also a problem using only one scheduler. The detailed analysis of
> one-scheduler-problem is located in bug analysis[2]. In short, there is a
> gap between the scheduler makes a decision in host state cache and the
>
> compute node updates its in-db resource record according to that decision
> in resource tracker. A recent scheduler resource consumption in cache can
> be lost and overwritten by compute node data because of it, result in cache
> inconsistency and unexpected retries. In a one-scheduler experiment using
> 3-node deployment, there are 7 retries out of 31 concurrent schedule
> requests recorded, results in 22.6% extra performance overhead.
>
> 3. Parallel scheduler support: The design of filter scheduler leads to an
> "even worse" performance result using parallel schedulers. In the same
> experiment with 4 schedulers on separate machines, the average db block
> time is increased to 697ms per request and there are 16 retries out of 31
> schedule requests, namely 51.6% extra overhead.
>
>
>
> Improvements:
>
> This prototype solved the mentioned issues above by implementing a new
> update model to scheduler host state cache. Instead of refreshing caches
> from db, every compute node maintains its accurate version of host state
> cache updated by the resource tracker, and sends incremental updates
> directly to schedulers. So the scheduler cache are synchronized to the
> correct state as soon as possible with the lowest overhead. Also, scheduler
> will send resource claim with its decision to the target compute node. The
> compute node can decide whether the resource claim is successful
> immediately by its local host state cache and send responds back ASAP. With
> all the claims are tracked from schedulers to compute nodes, no false
> overwrites will happen, and thus the gaps between scheduler cache and real
> compute node states are minimized. The benefits are obvious with recorded
> experiments[3] compared with caching scheduler and filter scheduler:
>
> 1. There is no db block time during scheduler decision making, the average
> decision time per request is about 3ms in both single and multiple
> scheduler scenarios, which is equal to the in-memory decision time of
> filter scheduler and caching scheduler.
>
> 2. Since the scheduler claims are tracked and the "false overwrite" is
> eliminated, there should be 0 retries in one-scheduler d
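The claim handshake described in the "Improvements" section above could be sketched roughly as follows. This is a toy model with invented names, not the prototype's code: the compute node is the authority on its own inventory and confirms or rejects a scheduler's claim purely from its local cache, with no DB read on the decision path:

```python
class ComputeNode:
    """Toy compute node: it owns the accurate view of its resources and
    answers scheduler claims from its local cache (names illustrative)."""

    def __init__(self, free_ram_mb):
        self.free_ram_mb = free_ram_mb

    def handle_claim(self, ram_mb):
        # Decided locally with no DB access; avoiding that DB round-trip
        # is what removes the per-request db block time discussed above.
        if ram_mb > self.free_ram_mb:
            return False          # scheduler must retry elsewhere
        self.free_ram_mb -= ram_mb
        return True


node = ComputeNode(free_ram_mb=4096)
print(node.handle_claim(3072))  # True, resources consumed
print(node.handle_claim(3072))  # False, would overcommit
```

Because every claim is tracked end to end, a scheduler's cached consumption can no longer be silently overwritten by a later compute-node DB refresh, which is the "false overwrite" the prototype eliminates.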

[openstack-dev] [mitaka][hackathon] Mitaka Bug Smash Hackathon in Bay Area (March 7-9)

2016-02-11 Thread Boris Pavlovic
Hi stackers,

If you are in the Bay Area and you would like to work together with your
friends from the community on fixing non-trivial bugs, you have a great
chance.

There is going to be a special event, the "Mitaka bug smash".
Here is the full information:
https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka

*If you would like to take part and you are in the Bay Area, please register
here:*
https://www.eventbrite.com/e/global-openstack-bug-smash-bay-area-tickets-21241532997?utm_source=eb_email&utm_medium=email&utm_campaign=new_event_email&utm_term=viewmyevent_button

Also, please note here which project you are interested in:
https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka-BayArea



Best regards,
Boris Pavlovic


Re: [openstack-dev] [keystone][neutron][requirements] - keystonemiddleware-4.1.0 performance regression

2016-01-21 Thread Boris Pavlovic
Hi,


By the way, an OSProfiler trace shows how this regression impacts the number
of DB queries done by Keystone (during the boot of a VM):
http://boris-42.github.io/b2.html
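For readers unfamiliar with how such a trace hangs together: osprofiler's model is built around a shared base (trace) id plus per-hop span and parent ids that are propagated across HTTP/RPC/DB calls. The toy sketch below illustrates only that id-propagation idea; it is not osprofiler's actual API:

```python
import uuid


def make_trace_context(parent=None):
    """Toy trace context: every hop gets its own span id but shares one
    base (trace) id, so calls across services can be stitched into a tree."""
    return {
        "base_id": parent["base_id"] if parent else str(uuid.uuid4()),
        "parent_id": parent["span_id"] if parent else None,
        "span_id": str(uuid.uuid4()),
    }


api_ctx = make_trace_context()                 # e.g. an API service gets a request
rpc_ctx = make_trace_context(parent=api_ctx)   # context passed over RPC
db_ctx = make_trace_context(parent=rpc_ctx)    # DB call made by the RPC hop

assert api_ctx["base_id"] == rpc_ctx["base_id"] == db_ctx["base_id"]
assert db_ctx["parent_id"] == rpc_ctx["span_id"]
print("one trace, three spans")
```

Grouping spans by base_id and nesting them by parent_id is what produces the drill-down view in the linked trace.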


Best regards,
Boris Pavlovic

On Wed, Jan 20, 2016 at 3:30 PM, Morgan Fainberg 
wrote:

> As promised here are the fixes:
>
>
> https://review.openstack.org/#/q/Ifc17c27744dac5ad55e84752ca6f68169c2f5a86,n,z
>
> Proposed to both master and liberty.
>
> On Wed, Jan 20, 2016 at 12:15 PM, Sean Dague  wrote:
>
>> On 01/20/2016 02:59 PM, Morgan Fainberg wrote:
>> > So this was due to a change in keystonemiddleware. We stopped doing
>> > in-memory caching of tokens per process, per worker by default [1].
>> > There are a couple of reasons:
>> >
>> > 1) in-memory caching produced unreliable validation because some
>> > processed may have a cache, some may not
>> > 2) in-memory caching was unbounded memory wise per worker.
>> >
>> > I'll spin up a devstack change to enable memcache and use the memcache
>> > caching for keystonemiddleware today. This will benefit things in a
>> > couple ways
>> >
>> > * All services and all service's workers will share the offload of the
>> > validation, likely producing a real speedup even over the old in-memory
>> > caching
>> > * There will no longer be inconsistent validation offload/responses
>> > based upon which worker you happen to hit for a given service.
>> >
>> > I'll post to the ML here with the proposed change later today.
>> >
>> > [1]
>> >
>> https://github.com/openstack/keystonemiddleware/commit/f27d7f776e8556d976f75d07c99373455106de52
>>
>> This seems like a pretty substantial performance impact. Was there a
>> reno associated with this?
>>
>> I think that we should still probably:
>>
>> * != the keystone middleware version, it's impacting the ability to land
>> fixes in the gate
>> * add devstack memcache code
>> * find some way to WARN if we are running without memcache config, so
>> people realize they are in a regressed state
>> * add back keystone middleware at that version
>>
>> -Sean
>>
>> >
>> > Cheers,
>> > --Morgan
>> >
>> > On Tue, Jan 19, 2016 at 10:57 PM, Armando M. > > <mailto:arma...@gmail.com>> wrote:
>> >
>> >
>> >
>> > On 19 January 2016 at 22:46, Kevin Benton > > <mailto:blak...@gmail.com>> wrote:
>> >
>> > Hi all,
>> >
>> > We noticed a major jump in the neutron tempest and API test run
>> > times recently in Neutron. After digging through logstash I
>> > found out that it first occurred on the requirements bump here:
>> > https://review.openstack.org/#/c/265697/
>> >
>> > After locally testing each requirements change individually, I
>> > found that the keystonemiddleware change seems to be the
>> > culprit. It almost doubles the time it takes to fulfill simple
>> > port-list requests in Neutron.
>> >
>> > Armando pushed up a patch here to
>> > confirm: https://review.openstack.org/#/c/270024/
>> > Once that's verified, we should probably put a cap on the
>> > middleware because it's causing the tests to run up close to
>> > their time limits.
>> >
>> >
>> > Kevin,
>> >
>> > As usual your analytical skills are to be praised.
>> >
>> > I wonder if anyone else is aware of the issue/s, because during the
>> > usual hunting I could see other projects being affected and showing
>> > abnormally high run times of the dsvm jobs.
>> >
>> > I am not sure that [1] is the right approach, but it should give us
>> > some data points if executed successfully.
>> >
>> > Cheers,
>> > Armando
>> >
>> > [1]  https://review.openstack.org/#/c/270024/
>> >
>> >
>> > --
>> > Kevin Benton
>> >
>> >
>>  __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > <
>> http://openstack-dev-requ...@lists.openstack.org?s

Re: [openstack-dev] [Oslo][all] os-profiler under Oslo umbrella

2016-01-20 Thread Boris Pavlovic
Dims,

Great news! =)

Once we address some small issues, I'll make a demo.

Best regards,
Boris Pavlovic

On Wed, Jan 13, 2016 at 5:06 AM, Davanum Srinivas  wrote:

> Team,
>
> Oslo folks have voted[1] to be the home for the osprofiler project[2].
> Several projects are already using osprofiler. One example of work in
> flight is for Nova[3].
>
> Please take a look at the README to see the features/description, in a
> nutshell it will allow operators / end users to drill down into
> HTTP/DB/RPC calls:
> https://git.openstack.org/cgit/openstack/osprofiler/tree/README.rst
>
> Thanks,
> Dims
>
> [1] https://review.openstack.org/#/c/103825/
> [2] https://git.openstack.org/cgit/openstack/osprofiler/
> [3] https://review.openstack.org/#/c/254703/
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [oslo][osprofiler] OSprofiler spec is ready for review

2015-12-17 Thread Boris Pavlovic
Hi stackers,

OSprofiler spec is ready for review.

Please review it if you are interested in making native profiling/tracing of
OpenStack happen:
https://review.openstack.org/#/c/103825/

Thanks!


Best regards,
Boris Pavlovic


Re: [openstack-dev] [Neutron][QA] New testing guidelines

2015-12-16 Thread Boris Pavlovic
Assaf,

We can as well add Rally testing for scale/performance/regression testing.

Best regards,
Boris Pavlovic

On Wed, Dec 16, 2015 at 7:00 AM, Fawad Khaliq  wrote:

> Very useful information. Thanks, Assaf.
>
> Fawad Khaliq
>
>
> On Thu, Dec 10, 2015 at 6:26 AM, Assaf Muller  wrote:
>
>> Today we merged [1] which adds content to the Neutron testing guidelines:
>>
>> http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron
>>
>> The document details Neutron's different testing infrastructures:
>> * Unit
>> * Functional
>> * Fullstack (Integration testing with services deployed by the testing
>> infra itself)
>> * In-tree Tempest
>>
>> The new documentation provides:
>> * Examples
>> * Do's and don'ts
>> * Good and bad usage of mock
>> * The anatomy of a good unit test
>>
>> And primarily the advantages and use cases for each testing framework.
>>
>> It's short - I encourage developers to go through it. Reviewers may
>> use it as reference / link when testing anti-pattern pop up.
>>
>> Please send feedback on this thread or better yet in the form of a
>> devref patch. Thank you!
>>
>>
>> [1] https://review.openstack.org/#/c/245984/
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Testing, Rally and Wiki

2015-12-10 Thread Boris Pavlovic
Hi Gal,


> We are also working on combining Rally testing with Kuryr and for that we
> are going to
> introduce Docker context plugin and client and other parts that are
> probably needed by other projects (like Magnum)
> I think it would be great if we can combine forces on this.


What is this context going to do?


Best regards,
Boris Pavlovic

On Thu, Dec 10, 2015 at 6:11 AM, Gal Sagie  wrote:

> Hello everyone,
>
> As some of you have already noticed one of the top priorities for Kuryr
> this cycle is to get
> our CI and gate testing done.
>
> I have been working on creating the base for adding integration tests that
> will run
> in the gate in addition to our unit tests and functional testing.
>
> If you would like to join and help this effort, please stop by
> #openstack-kuryr or email
> me back.
>
> We are also working on combining Rally testing with Kuryr and for that we
> are going to
> introduce Docker context plugin and client and other parts that are
> probably needed by other projects (like Magnum)
> I think it would be great if we can combine forces on this.
>
> I have also created Kuryr Wiki:
> https://wiki.openstack.org/wiki/Kuryr
>
> Feel free to edit and add needed information.
>
>
> Thanks all
> Gal.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [QA] [Tests] MOS integration tests in SWARM test suite

2015-12-07 Thread Boris Pavlovic
Timur,

I hope you are going to use the Rally verify command for [1] and [2]?


Best regards,
Boris Pavlovic

On Mon, Dec 7, 2015 at 6:09 AM, Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Hi Fuel team,
>
> we have a lot of automated integration tests for OpenStack verification
> and we want to add execution of these tests to Fuel SWARM test suite (to
> run these tests on daily basis and on per commit basis).
>
> We used our own bash scripts to deploy OpenStack environments with Fuel
> before, but now we have no resources to maintain these scripts. Fuel QA and
> MOS QA teams invest a lot of efforts to improve existing QA framework with
> BVT/SWARM tests. This is why we want to add our integration automated tests
> to SWARM test suite, where we already have good framework to manage fuel
> environments.
>
> We started to move our integration tests to SWARM test suite:
> 1. Sahara integration tests with deployment of all available Sahara
> cluster / plugins types:
> https://review.openstack.org/#/c/248602/ (merged)
> 2. Murano integration tests with deployment of all available Murano
> applications:
> https://review.openstack.org/#/c/249850/(on review)
>
> We are going to add execution of full Tempest tests suite [1] and
> execution of all CLI-based functional tests [2] from upstream projects.
> These tests will be executed on separate hardware server, where we will
> have enough resources (for example, integration Murano and Sahara tests
> require 32 Gb of RAM on compute nodes minimum). We already provided this
> server to fuel CI team.
>
> We want to merge these tests before MOS 8.0 to do all acceptance testing
> with automated tests (and without manual testing).
>
> In parallel, we are working on new approach of integration of third-party
> functional / integration tests with SWARM test suite. It is under the
> discussion now and it will be not available in the nearest future.
>
> Please, let me know if you have objections or questions.
>
> [1] https://bugs.launchpad.net/fuel/+bug/1523515
> [2] https://bugs.launchpad.net/fuel/+bug/1523436
>
> --
>
> Timur,
> Senior QA Engineer
> OpenStack Projects
> Mirantis Inc
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Performance][Proposal] Moving IRC meeting from 15:00 UTC to 16:00 UTC

2015-12-04 Thread Boris Pavlovic
+1 from me

On Fri, Dec 4, 2015 at 8:16 AM, Joshua Harlow  wrote:

> +1 from me :)
>
> Dina Belova wrote:
>
>> Dear performance folks,
>>
>> There is a suggestion to move our meeting time from 15:00 UTC (Tuesdays
>> ) to
>> 16:00 UTC (also Tuesdays
>> ) to
>> make them more comfortable for US guys.
>>
>> Please leave your +1 / -1 here in the email thread.
>>
>> Btw +1 from me :)
>>
>> Cheers,
>> Dina
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Rally as A Service?

2015-11-24 Thread Boris Pavlovic
Obed,

We are refactoring Rally to make it possible to run it as a daemon.
There is plenty of work to be done, including this spec:
https://review.openstack.org/#/c/182245/

Best regards,
Boris Pavlovic

On Tue, Nov 24, 2015 at 2:20 PM, Munoz, Obed N 
wrote:

>
> --
> Obed N Munoz
> Cloud Engineer @ ClearLinux Project
> Open Source Technology Center
>
>
>
>
>
>
>
>
> On 11/24/15, 4:11 PM, "JJ Asghar"  wrote:
>
> >-BEGIN PGP SIGNED MESSAGE-
> >Hash: SHA512
> >
> >On 11/24/15 1:35 PM, Munoz, Obed N wrote:
> >> Is there any plan or work-in-progress for RaaS (Rally as a
> >> Service)? I saw it mentioned on
> >> https://wiki.openstack.org/wiki/Rally
> >
> >
> >I think we've talked about something like this at the Operators
> >Meetups. But this[1] is more or less what everyone just defaults to.
> >
> >[1]: https://github.com/svashu/docker-rally
>
> Yeah, actually, I’d prefer to use the official Rally Project’s  Dockerfile.
>
> https://github.com/openstack/rally/blob/master/Dockerfile
>
>
> We’re creating a new Dockerfile that takes the above as base and then adds
> some automation for
> Running the db create and html file generation.
>
> >
> >- --
> >Best Regards,
> >JJ Asghar
> >c: 512.619.0722 t: @jjasghar irc: j^2
> >-BEGIN PGP SIGNATURE-
> >Version: GnuPG/MacGPG2 v2
> >Comment: GPGTools - https://gpgtools.org
> >
> >iQIcBAEBCgAGBQJWVOCjAAoJEDZbxzMH0+jTo9gP/18pkAs9FMiL9qIWADZ5Q2BH
> >bZlnIud0Yk5Qj9uVx3o+/Tk9OpFcDQy49FLZ1ytQD2hJeP+51Bk/JRSRCYW+GVo1
> >D4qlzu5FiQBDFIEn4YB4n2x1v7DrrxR9ADb5CjADdFf1RitkHJXSMXdh0XO+yO5n
> >BWCDpIq/dVced2jMT4FhNDyArwgKrO/KMMbDaYG1TZueMXdU6JCMbshUOKkWiF1j
> >8hK8ergjzo/FDwv98NnD5cYbizPee1IgTBjsfaLO+PYmrKjU6qZrEqndabrnkPnQ
> >DKSu+xxq6q8SzHnB/tQgJDfOMJkwtr8qk7wMHquzbVNfiNYcR6Yroke1C+QdPtwQ
> >rVCcU6pLy2hNGQXvZpSWtXLe5PMohwQsCxHrWUtyB/DbUD5Eu1BetfCwSkBJCu2c
> >6m7KG+fMOwlEXmhUUQYDjrKBg8NkIFWwGXpNS3ITWekb4jkVyR3NG9Pr++yzTp0f
> >nMR0vSaYn7xFgMJbJuO2jCsv2PMZ2SGt87CPnrbI5Hry9gyqw1D/u9jEamxtNCMU
> >MOG0nDP+fWfiTuDXlVPOr4YeHyTYOWPWtGedj+f4jXIjiQ2e/Y6VXqWynfGQZXUX
> >12zgk11poOM8O9rgAQ+PHJpJqnZV5jhj4jyG+av9D+pm3kQxIgIXSLLnb4e5Kh+d
> >2yfcHXS5+Cgly28F+NMZ
> >=V7wA
> >-END PGP SIGNATURE-
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all][osprofiler] OSprofiler is dead, long live OSprofiler

2015-11-18 Thread Boris Pavlovic
Hi stackers,

I updated the OSprofiler spec (https://review.openstack.org/#/c/103825/);
reviews are needed.


Best regards,
Boris Pavlovic



On Mon, Nov 9, 2015 at 2:57 AM, Boris Pavlovic  wrote:

> Hi stackers,
>
> Intro
> ---
>
> It's not a big secret that OpenStack is huge and complicated ecosystem of
> different
> services that are working together to implement OpenStack API.
>
> For example booting VM is going through many projects and services:
> nova-api, nova-scheduler, nova-compute, glance-api, glance-registry,
> keystone, cinder-api, neutron-api... and many others.
>
> The question is how to understand what part of the request takes the most
> of the time and should be improved. It's especially interested to get such
> data under the load.
>
> To make it simple, I wrote OSProfiler which is tiny library that should be
> added to all OpenStack
> projects to create cross project/service tracer/profiler.
>
> Demo (trace of CLI command: nova boot) can be found here:
> http://boris-42.github.io/ngk.html
>
> This library is very simple. For those who wants to know how it works and
> how it's integrated with OpenStack take a look here:
> https://github.com/openstack/osprofiler/blob/master/README.rst
>
> What is the current status?
> ---
>
> Good news:
> - OSprofiler is mostly done
> - OSprofiler is integrated with Cinder, Glance, Trove & Ceilometer
>
> Bad news:
> - OSprofiler is not integrated in a lot of important projects: Keystone,
> Nova, Neutron
> - OSprofiler can use only Ceilometer + oslo.messaging as a backend
> - OSprofiler stores part of arguments in api-paste.ini part in
> project.conf which is terrible thing
> - There is no DSVM job that check that changes in OSprofiler don't break
> the projects that are using it
> - It's hard to enable OSprofiler in DevStack
>
> Good news:
> I spend some time and made 4 specs that should address most of issues:
> https://github.com/openstack/osprofiler/tree/master/doc/specs
>
> Let's make it happen in Mitaka!
>
> Thoughts?
> By the way somebody would like to join this effort?)
>
> Best regards,
> Boris Pavlovic
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][all][osprofiler] OSprofiler is dead, long live OSprofiler

2015-11-09 Thread Boris Pavlovic
Hi stackers,

Intro
---

It's not a big secret that OpenStack is a huge and complicated ecosystem of
different services that work together to implement the OpenStack API.

For example, booting a VM goes through many projects and services:
nova-api, nova-scheduler, nova-compute, glance-api, glance-registry,
keystone, cinder-api, neutron-api... and many others.

The question is how to understand which part of a request takes most of the
time and should be improved. It's especially interesting to get such data
under load.

To make this simple, I wrote OSProfiler, which is a tiny library that should
be added to all OpenStack projects to create a cross-project/service
tracer/profiler.

Demo (trace of CLI command: nova boot) can be found here:
http://boris-42.github.io/ngk.html

This library is very simple. For those who want to know how it works and
how it's integrated with OpenStack, take a look here:
https://github.com/openstack/osprofiler/blob/master/README.rst
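
As a rough illustration of the idea (a toy sketch with invented names, not
OSprofiler's real API): this kind of tracing boils down to recording named
start/stop points with durations around calls:

```python
import functools
import time

TRACE = []  # collected trace points; in a real profiler these go to a backend


def trace(name):
    """Toy decorator that records the duration of each traced call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.time()
            try:
                return fn(*args, **kwargs)
            finally:
                TRACE.append((name, time.time() - start))
        return inner
    return wrap


@trace("nova.boot_vm")
def boot_vm():
    time.sleep(0.01)  # stands in for the real work


boot_vm()
```

Chaining such points across services (by passing a trace id through HTTP
headers and oslo.messaging) is what turns this into a cross-project trace
like the demo above.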

What is the current status?
---

Good news:
- OSprofiler is mostly done
- OSprofiler is integrated with Cinder, Glance, Trove & Ceilometer

Bad news:
- OSprofiler is not integrated into a lot of important projects: Keystone,
Nova, Neutron
- OSprofiler can use only Ceilometer + oslo.messaging as a backend
- OSprofiler stores part of its arguments in api-paste.ini and part in
project.conf, which is a terrible thing
- There is no DSVM job that checks that changes in OSprofiler don't break
the projects that are using it
- It's hard to enable OSprofiler in DevStack

Good news:
I spent some time and wrote 4 specs that should address most of the issues:
https://github.com/openstack/osprofiler/tree/master/doc/specs

Let's make it happen in Mitaka!

Thoughts?
By the way, would somebody like to join this effort? :)

Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Questions regarding OpenStack CI tools

2015-11-08 Thread Boris Pavlovic
Hi Maty,

The following tools are usually used in Python projects:
- cover - for unit test coverage
- pep8 + custom hacking rules - to check code styles
- pylint - to check even more code styles


Best regards,
Boris Pavlovic


On Sun, Nov 8, 2015 at 4:41 AM, GROSZ, Maty (Maty) <
maty.gr...@alcatel-lucent.com> wrote:

> Hey,
>
> A question regarding OpenStack CI tools….
> Does OpenStack CI process use any monitor code cleanliness tool, code
> coverage tool or any monitor memory consumption/leaks tools?
> Thanks,
>
> Maty
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Boris Pavlovic
Robert,

I don't have the exact numbers, but during testing of real deployments I
saw the impact of polling resources. This is one of the reasons why we had
to add a fairly long sleep() during polling in Rally, to reduce the number
of GET requests and avoid DDoSing OpenStack.

In any case, it doesn't seem like a hard task to collect the numbers.
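
As a back-of-the-envelope sketch (illustrative numbers, not measurements
from those deployments): a fixed short poll interval issues one GET per
interval, while an exponential backoff keeps the request count much lower
for long-running operations:

```python
def fixed_poll_count(duration, interval):
    """GET requests issued by a fixed-interval poll over `duration` seconds."""
    return int(duration / interval)


def backoff_poll_count(duration, start=1.0, factor=1.5, cap=30.0):
    """GET requests issued with exponential backoff capped at `cap` seconds."""
    elapsed, interval, count = 0.0, start, 0
    while elapsed < duration:
        count += 1
        elapsed += interval
        interval = min(interval * factor, cap)
    return count


# Waiting 300s for a VM to become ACTIVE:
# 300 GETs at a fixed 1s interval vs. ~17 GETs with backoff.
```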

Best regards,
Boris Pavlovic

On Thu, Nov 5, 2015 at 3:56 AM, Robert Collins 
wrote:

> On 5 November 2015 at 04:42, Sean Dague  wrote:
> > On 11/04/2015 10:13 AM, John Garbutt wrote:
>
> > I think longer term we probably need a dedicated event service in
> > OpenStack. A few of us actually had an informal conversation about this
> > during the Nova notifications session to figure out if there was a way
> > to optimize the Searchlight path. Nearly everyone wants websockets,
> > which is good. The problem is, that means you've got to anticipate
> > 10,000+ open websockets as soon as we expose this. Which means the stack
> > to deliver that sanely isn't just a bit of python code, it's also the
> > highly optimized server underneath.
>
> So any decent epoll implementation should let us hit that without a
> super optimised server - eventlet being in that category. I totally
> get that we're going to expect thundering herds, but websockets isn't
> new and the stacks we have - apache, eventlet - have been around long
> enough to adjust to the rather different scaling pattern.
>
> So - lets not panic, get a proof of concept up somewhere and then run
> an actual baseline test. If thats shockingly bad *then* lets panic.
>
> -Rob
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Boris Pavlovic
Sean,

> This seems like a fundamental abuse of HTTP honestly. If you find
> yourself creating a ton of new headers, you are probably doing it wrong.


I totally agree on this. We shouldn't add a lot of HTTP headers. IMHO, why
not just return a string with the status in the body (in my case)?


> I think longer term we probably need a dedicated event service in
> OpenStack.


Unfortunately, this will work slower than the current solution with JOINs,
require more resources, and be very hard to use... (you'll need to add one
more service to OpenStack, and use one more client...)


Best regards,
Boris Pavlovic


On Thu, Nov 5, 2015 at 12:42 AM, Sean Dague  wrote:

> On 11/04/2015 10:13 AM, John Garbutt wrote:
> > On 4 November 2015 at 14:49, Jay Pipes  wrote:
> >> On 11/04/2015 09:32 AM, Sean Dague wrote:
> >>>
> >>> On 11/04/2015 09:00 AM, Jay Pipes wrote:
> >>>>
> >>>> On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
> >>>>>
> >>>>> Hi stackers,
> >>>>>
> >>>>> Usually such projects like Heat, Tempest, Rally, Scalar, and other
> tool
> >>>>> that works with OpenStack are working with resources (e.g. VM,
> Volumes,
> >>>>> Images, ..) in the next way:
> >>>>>
> >>>>>   >>> resource = api.resouce_do_some_stuff()
> >>>>>   >>> while api.resource_get(resource["uuid"]) != expected_status
> >>>>>   >>>sleep(a_bit)
> >>>>>
> >>>>> For each async operation they are polling and call many times
> >>>>> resource_get() which creates significant load on API and DB layers
> due
> >>>>> the nature of this request. (Usually getting full information about
> >>>>> resources produces SQL requests that contains multiple JOINs, e,g for
> >>>>> nova vm it's 6 joins).
> >>>>>
> >>>>> What if we add new API method that will just resturn resource status
> by
> >>>>> UUID? Or even just extend get request with the new argument that
> returns
> >>>>> only status?
> >>>>
> >>>>
> >>>> +1
> >>>>
> >>>> All APIs should have an HTTP HEAD call on important resources for
> >>>> retrieving quick status information for the resource.
> >>>>
> >>>> In fact, I proposed exactly this in my Compute "vNext" API proposal:
> >>>>
> >>>> http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head
> >>>>
> >>>> Swift's API supports HEAD for accounts:
> >>>>
> >>>>
> >>>>
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta
> >>>>
> >>>>
> >>>> containers:
> >>>>
> >>>>
> >>>>
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta
> >>>>
> >>>>
> >>>> and objects:
> >>>>
> >>>>
> >>>>
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta
> >>>>
> >>>> So, yeah, I agree.
> >>>> -jay
> >>>
> >>>
> >>> How would you expect this to work on "servers"? HEAD specifically
> >>> forbids returning a body, and, unlike swift, we don't return very much
> >>> information in our headers.
> >>
> >>
> >> I didn't propose doing it on a collection resource like "servers". Only
> on
> >> an entity resource like a single "server".
> >>
> >> HEAD /v2/{tenant}/servers/{uuid}
> >> HTTP/1.1 200 OK
> >> Content-Length: 1022
> >> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> >> Content-Type: application/json
> >> Date: Thu, 16 Jan 2014 21:13:19 GMT
> >> OpenStack-Compute-API-Server-VM-State: ACTIVE
> >> OpenStack-Compute-API-Server-Power-State: RUNNING
> >> OpenStack-Compute-API-Server-Task-State: NONE
> >
> > For polling, that sounds quite efficient and handy.
> >
> > For "servers" we could do this (I think there was a spec up that wanted
> this):
> >
> > HEAD /v2/{tenant}/servers
> > HTTP/1.1 200 OK
> > Content-Length: 1022
> > Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> > Content-Type: application/json
> > Date: Thu, 16 Jan 

Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Boris Pavlovic
John,

> Our resources are not. We've also had specific requests to prevent
> > header bloat because it impacts the HTTP caching systems. Also, it's
> > pretty clear that headers are really not where you want to put volatile
> > information, which this is.
> Hmm, you do make a good point about caching.



Caching is useful only in cases where you would like to return the same
data many times. In our case we are interested in the latest state of a
resource; such things can't be cached.


> I think we should step back here and figure out what the actual problem
> > is, and what ways we might go about solving it. This has jumped directly
> > to a point in time optimized fast poll loop. It will shave a few cycles
> > off right now on our current implementation, but will still be orders of
> > magnitude more costly that consuming the Nova notifications if the only
> > thing that is cared about is task state transitions. And it's an API
> > change we have to live with largely *forever* so short term optimization
> > is not what we want to go for.
> I do agree with that.


The thing here is that we have to have an async API, because we have
long-running operations. And basically there are 3 approaches for learning
that an operation is done:
1) pub/sub
2) polling the resource status
3) long polling requests

All approaches have pros and cons; however, the "actual" problem stays the
same and you can't fix that.
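
As a toy illustration of option 1 (all names invented): with pub/sub the
service publishes the terminal state once and the client just blocks on it,
instead of issuing N status GETs:

```python
import queue

events = queue.Queue()  # stands in for a real message bus / websocket


def publish_completion(uuid, status):
    """Server side: push one event when the long-running operation finishes."""
    events.put((uuid, status))


def wait_for_completion(timeout=5.0):
    """Client side: block until the event arrives; zero polling requests."""
    return events.get(timeout=timeout)


publish_completion("vm-1", "ACTIVE")
uuid, status = wait_for_completion()
```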


Best regards,
Boris Pavlovic

On Thu, Nov 5, 2015 at 12:18 AM, John Garbutt  wrote:

> On 4 November 2015 at 15:00, Sean Dague  wrote:
> > On 11/04/2015 09:49 AM, Jay Pipes wrote:
> >> On 11/04/2015 09:32 AM, Sean Dague wrote:
> >>> On 11/04/2015 09:00 AM, Jay Pipes wrote:
> >>>> On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
> >>>>> Hi stackers,
> >>>>>
> >>>>> Usually such projects like Heat, Tempest, Rally, Scalar, and other
> tool
> >>>>> that works with OpenStack are working with resources (e.g. VM,
> Volumes,
> >>>>> Images, ..) in the next way:
> >>>>>
> >>>>>   >>> resource = api.resouce_do_some_stuff()
> >>>>>   >>> while api.resource_get(resource["uuid"]) != expected_status
> >>>>>   >>>sleep(a_bit)
> >>>>>
> >>>>> For each async operation they are polling and call many times
> >>>>> resource_get() which creates significant load on API and DB layers
> due
> >>>>> the nature of this request. (Usually getting full information about
> >>>>> resources produces SQL requests that contains multiple JOINs, e,g for
> >>>>> nova vm it's 6 joins).
> >>>>>
> >>>>> What if we add new API method that will just resturn resource status
> by
> >>>>> UUID? Or even just extend get request with the new argument that
> >>>>> returns
> >>>>> only status?
> >>>>
> >>>> +1
> >>>>
> >>>> All APIs should have an HTTP HEAD call on important resources for
> >>>> retrieving quick status information for the resource.
> >>>>
> >>>> In fact, I proposed exactly this in my Compute "vNext" API proposal:
> >>>>
> >>>> http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head
> >>>>
> >>>> Swift's API supports HEAD for accounts:
> >>>>
> >>>>
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta
> >>>>
> >>>>
> >>>>
> >>>> containers:
> >>>>
> >>>>
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta
> >>>>
> >>>>
> >>>>
> >>>> and objects:
> >>>>
> >>>>
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta
> >>>>
> >>>>
> >>>> So, yeah, I agree.
> >>>> -jay
> >>>
> >>> How would you expect this to work on "servers"? HEAD specifically
> >>> forbids returning a body, and, unlike swift, we don't return very much
> >>> information in our headers.
> >>
> >> I didn't propose doing it on a collection resource like "servers". Only
> >> on an entity resource like a single "server".
> >>
> >> HEAD /v2/{tenant}/serve

Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-03 Thread Boris Pavlovic
John,


The main point here is to reduce the amount of data that we request from
the DB, that is processed by the API services, and that is sent over the
network, and to make the SQL requests simpler (remove JOINs from SELECT).

So if you fetch 10 bytes instead of 1000 bytes you will process 100 times
less data, and it will scale 100 times better and work overall 100 times
faster.

On the other hand, polling may easily cause 100 API requests/second and
create significant load on the cloud.

Clint,

Please do not forget that we are removing JOINs from the SQL requests.

Here is how the SQL request that gets VM info looks:
http://paste.openstack.org/show/477934/ (it has 6 joins)

This is how it looks for a Glance image:
http://paste.openstack.org/show/477933/ (it has 2 joins)

So the performance/scale impact will be higher.
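
To make the contrast concrete, here is a toy sqlite3 sketch (not the real
Nova/Glance schema) of the kind of lookup a status-only call could be served
by: with an index on (uuid, status), the query is answered from the index
alone, with no joins and no wide-row fetch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (uuid TEXT, status TEXT, payload TEXT)")
conn.execute("CREATE INDEX ix_instances_uuid_status ON instances (uuid, status)")
conn.execute("INSERT INTO instances VALUES (?, ?, ?)",
             ("vm-1", "ACTIVE", "x" * 1000))  # wide row stands in for JOINed data

# Status-only lookup: a short range scan on the covering index.
(status,) = conn.execute(
    "SELECT status FROM instances WHERE uuid = ?", ("vm-1",)).fetchone()

# The query plan confirms no table access is needed at all.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT status FROM instances WHERE uuid = ?", ("vm-1",)
).fetchall()
```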

Best regards,
Boris Pavlovic


On Wed, Nov 4, 2015 at 4:18 PM, Clint Byrum  wrote:

> Excerpts from Boris Pavlovic's message of 2015-11-03 17:32:43 -0800:
> > Clint, Morgan,
> >
> > I totally agree that the pub/sub model is better approach.
> >
> > However, there are 2 great things about polling:
> > 1) it's simpler to use than pub/sub (especially in shell)
>
> I envision something like this:
>
>
> while changes=$(openstack compute server-events --run react-to-status
> --fields status id1 id2 id3 id4) ; do
>   for id_and_status in $changes ; do
> id=${id_and_status##:}
> status=${id_and_status%%:}
>   done
> done
>
> Not exactly "hard"
>
> > 2) it has really simple implementation & we can get this in OpenStack in
> > few days/weeks
> >
>
> It doesn't actually solve a ton of things though. Even if we optimize
> it down to the fewest operations, it is still ultimately a DB query and
> extra churn in the API service.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-03 Thread Boris Pavlovic
Clint, Morgan,

I totally agree that the pub/sub model is the better approach.

However, there are 2 great things about polling:
1) it's simpler to use than pub/sub (especially in shell)
2) it has a really simple implementation & we can get this into OpenStack
in a few days/weeks

What about just supporting both approaches?


Best regards,
Boris Pavlovic

On Wed, Nov 4, 2015 at 9:33 AM, Morgan Fainberg 
wrote:

>
> On Nov 3, 2015 4:29 PM, "Clint Byrum"  wrote:
> >
> > Excerpts from Boris Pavlovic's message of 2015-11-03 14:20:10 -0800:
> > > Hi stackers,
> > >
> > > Usually such projects like Heat, Tempest, Rally, Scalar, and other tool
> > > that works with OpenStack are working with resources (e.g. VM, Volumes,
> > > Images, ..) in the next way:
> > >
> > > >>> resource = api.resouce_do_some_stuff()
> > > >>> while api.resource_get(resource["uuid"]) != expected_status
> > > >>>sleep(a_bit)
> > >
> > > For each async operation they are polling and call many times
> > > resource_get() which creates significant load on API and DB layers due
> the
> > > nature of this request. (Usually getting full information about
> resources
> > > produces SQL requests that contains multiple JOINs, e,g for nova vm
> it's 6
> > > joins).
> > >
> > > What if we add new API method that will just resturn resource status by
> > > UUID? Or even just extend get request with the new argument that
> returns
> > > only status?
> >
> > I like the idea of being able pass in the set of fields you want to
> > see with each get. In SQL, often times only passing in indexed fields
> > will allow a query to be entirely serviced by a brief range scan in
> > the B-tree. For instance, if you have an index on '(UUID, status)',
> > then this lookup will be a single read from an index in MySQL/MariaDB:
> >
> > SELECT status FROM instances WHERE UUID='foo';
> >
> > The explain on this will say 'Using index' and basically you'll just do
> > a range scan on the UUID portion, and only find one entry, which will
> > be lightning fast, and return only status since it already has it there
> > in the index. Maintaining the index is not free, but probably worth it
> > if your users really do poll this way a lot.
> >
> > That said, this is optimizing for polling, and I'm not a huge fan. I'd
> > much rather see a pub/sub model added to the API, so that users can
> > simply subscribe to changes in resources, and poll only when a very long
> > timeout has passed. This will reduce load on API services, databases,
>
> ++ this is a much better long term solution if we are investing
> engineering resources along these lines.
>
> > caches, etc. There was a thread some time ago about using Nova's built
> > in notifications to produce an Atom feed per-project. That seems like
> > a much more scalable model, as even polling just that super fast query
> > will still incur quite a bit more cost than a GET with If-Modified-Since
> > on a single xml file.
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-03 Thread Boris Pavlovic
Hi stackers,

Usually projects like Heat, Tempest, Rally, Scalar, and other tools that
work with OpenStack interact with resources (e.g. VMs, Volumes, Images,
...) in the following way:

>>> resource = api.resouce_do_some_stuff()
>>> while api.resource_get(resource["uuid"]) != expected_status
>>>sleep(a_bit)

For each async operation they poll, calling resource_get() many times,
which creates significant load on the API and DB layers due to the nature
of this request. (Usually getting full information about a resource
produces SQL requests that contain multiple JOINs; e.g. for a Nova VM it's
6 joins.)

What if we add a new API method that just returns the resource status by
UUID? Or even just extend the get request with a new argument that returns
only the status?
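
A minimal sketch of what the wait loop above could look like against such a
status-only call (the `get_status` callable stands in for the hypothetical
new API method; all names are invented):

```python
import time


def wait_for_status(get_status, uuid, expected, timeout=300, interval=2.0):
    """Poll a (hypothetical) status-only endpoint until the resource
    reaches `expected`, raising on ERROR or timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status(uuid)  # returns just a short string, no JOINs
        if status == expected:
            return status
        if status == "ERROR":
            raise RuntimeError("resource %s went to ERROR" % uuid)
        time.sleep(interval)
    raise TimeoutError("resource %s not %s after %ss" % (uuid, expected, timeout))
```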

Thoughts?


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-11 Thread Boris Pavlovic
Clint,

There are many pros and cons to both approaches.

Reinventing the wheel (in this case it's quite a simple task) gives more
flexibility and doesn't require the use of ZK/Consul (which will simplify
integrating it with the current system).

Using ZK/Consul for the POC may save a lot of time, and we are also
delegating part of the work to other communities (which may lead to better
supported/working code).

By the way, some of the parts (like the sync of schedulers) are stuck in
review in the Nova project.

Basically, for a POC we can use anything, and using ZK/Consul may reduce
the resources needed for development, which is good.
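
A minimal sketch of the in-memory approach described in the quoted proposal
below (a dict of host stats plus one sorted list per rule, so each rule
lookup is a binary search; all names are invented):

```python
import bisect


class InMemoryScheduler:
    """Toy single-process scheduler sketch; nothing is persisted."""

    def __init__(self, rules):
        self.rules = rules        # e.g. ["free_ram_mb", "free_disk_gb"]
        self.hosts = {}           # host name -> stats dict
        self.index = {}           # rule -> sorted list of (value, host)

    def update_host(self, host, stats):
        self.hosts[host] = stats
        # Rebuild per-rule sorted lists; fine for a toy, incremental in real life.
        for rule in self.rules:
            self.index[rule] = sorted((s[rule], h) for h, s in self.hosts.items())

    def find_hosts(self, minimums):
        """Hosts whose value >= minimums[rule] for every rule
        (one binary search per rule)."""
        candidates = None
        for rule, need in minimums.items():
            ordered = self.index[rule]
            i = bisect.bisect_left(ordered, (need, ""))
            matched = {h for _, h in ordered[i:]}
            candidates = matched if candidates is None else candidates & matched
        return candidates or set()
```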

Best regards,
Boris Pavlovic

On Sun, Oct 11, 2015 at 12:23 AM, Clint Byrum  wrote:

> Excerpts from Boris Pavlovic's message of 2015-10-11 00:02:39 -0700:
> > 2Everybody,
> >
> > Just curios why we need such complexity.
> >
> >
> > Let's take a look from other side:
> > 1) Information about all hosts (even in case of 100k hosts) will be less
> > then 1 GB
> > 2) Usually servers that runs scheduler service have at least 64GB RAM and
> > more on the board
> > 3) math.log(10) < 12  (binary search per rule)
> > 4) We have less then 20 rules for scheduling
> > 5) Information about hosts is updated every 60 seconds (no updates host
> is
> > dead)
> >
> >
> > According to this information:
> > 1) We can store everything in RAM of single server
> > 2) We can use Python
> > 3) Information about hosts is temporary data and shouldn't be stored in
> > persistence storage
> >
> >
> > Simplest architecture to cover this:
> > 1) Single RPC service that has two methods: find_host(rules),
> > update_host(host, data)
> > 2) Store information about hosts  like a dict (host_name->data)
> > 3) Create for each rule binary tree and update it on each host update
> > 4) Make a algorithm that will use binary trees to find host based on
> rules
> > 5) Each service like compute node, volume node, or neutron will send
> > updates about host
> >that they managed (cross service scheduling)
> > 6) Make a algorithm that will sync host stats in memory between different
> > schedulers
>
> I'm in, except I think this gets simpler with an intermediary service
> like ZK/Consul to keep track of this 1GB of data and replace the need
> for 6, and changes the implementation of 5 to "updates its record and
> signals its presence".
>
> What you've described is where I'd like to experiment, but I don't want
> to reinvent ZK or Consul or etcd when they already exist and do such a
> splendid job keeping observers informed of small changes in small data
> sets. You still end up with the same in-memory performance, and this is
> in line with some published white papers from Google around their use
> of Chubby, which is their ZK/Consul.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-11 Thread Boris Pavlovic
2Everybody,

Just curious why we need such complexity.


Let's take a look from the other side:
1) Information about all hosts (even in the case of 100k hosts) will be less
than 1 GB
2) Usually the servers that run the scheduler service have at least 64GB RAM
on board
3) math.log(10**5) < 12  (binary search per rule)
4) We have fewer than 20 rules for scheduling
5) Information about hosts is updated every 60 seconds (no updates = the host
is dead)


According to this information:
1) We can store everything in RAM of single server
2) We can use Python
3) Information about hosts is temporary data and shouldn't be stored in
persistent storage


Simplest architecture to cover this:
1) A single RPC service that has two methods: find_host(rules),
update_host(host, data)
2) Store information about hosts as a dict (host_name->data)
3) Create a binary tree for each rule and update it on each host update
4) Make an algorithm that uses the binary trees to find a host based on rules
5) Each service, like a compute node, volume node, or neutron, will send
updates about the hosts
   that it manages (cross-service scheduling)
6) Make an algorithm that will sync host stats in memory between different
schedulers
7) ...
8) PROFIT!
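Steps 1-4 above can be sketched in a few dozen lines. This is only a hedged illustration, not Nova's actual scheduler code: attribute names like free_ram_mb are made up, and sorted per-rule indexes with binary search stand in for the "binary tree per rule":

```python
import bisect

class RuleIndex:
    """Sorted (value, host) index for one numeric attribute, e.g. free RAM."""
    def __init__(self):
        self.entries = []  # kept sorted: list of (value, host_name)

    def update(self, host, value):
        # Drop the stale entry for this host, then insert the fresh value.
        self.entries = [e for e in self.entries if e[1] != host]
        bisect.insort(self.entries, (value, host))

    def at_least(self, minimum):
        # Binary search: every host whose value >= minimum.
        i = bisect.bisect_left(self.entries, (minimum, ""))
        return {host for _, host in self.entries[i:]}

class Scheduler:
    def __init__(self, attrs):
        self.hosts = {}  # host_name -> data dict (step 2)
        self.indexes = {a: RuleIndex() for a in attrs}  # step 3

    def update_host(self, host, data):
        self.hosts[host] = data
        for attr, idx in self.indexes.items():
            idx.update(host, data[attr])

    def find_host(self, rules):
        # Step 4: intersect per-rule candidate sets; rules = {attr: minimum}.
        candidates = None
        for attr, minimum in rules.items():
            matched = self.indexes[attr].at_least(minimum)
            candidates = matched if candidates is None else candidates & matched
        return min(candidates) if candidates else None

s = Scheduler(["free_ram_mb", "free_disk_gb"])
s.update_host("node1", {"free_ram_mb": 4096, "free_disk_gb": 100})
s.update_host("node2", {"free_ram_mb": 16384, "free_disk_gb": 20})
print(s.find_host({"free_ram_mb": 8192, "free_disk_gb": 10}))  # node2
```

With ~20 rules and 100k hosts each lookup is roughly 20 binary searches plus set intersections, all in the RAM of a single server, which is the point of the estimate above.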

It's:
1) Simple to manage
2) Simple to understand
3) Simple to calc scalability limits
4) Simple to integrate in current OpenStack architecture


As a future bonus, we can implement scheduler-per-AZ functionality, so each
scheduler will store information
only about its AZ, and separate AZs can have their own rabbit servers, for
example, which will allow us to get
horizontal scalability in terms of AZs.


So do we really need Cassandra, Mongo, ... and other web-scale solutions for
such a simple task?


Best regards,
Boris Pavlovic

On Sat, Oct 10, 2015 at 11:19 PM, Clint Byrum  wrote:

> Excerpts from Chris Friesen's message of 2015-10-09 23:16:43 -0700:
> > On 10/09/2015 07:29 PM, Clint Byrum wrote:
> >
> > > Even if you figured out how to make the in-memory scheduler crazy fast,
> > > There's still value in concurrency for other reasons. No matter how
> > > fast you make the scheduler, you'll be slave to the response time of
> > > a single scheduling request. If you take 1ms to schedule each node
> > > (including just reading the request and pushing out your scheduling
> > > result!) you will never achieve greater than 1000/s. 1ms is way lower
> > > than it's going to take just to shove a tiny message into RabbitMQ or
> > > even 0mq. So I'm pretty sure this is o-k for small clouds, but would be
> > > a disaster for a large, busy cloud.
> > >
> > > If, however, you can have 20 schedulers that all take 10ms on average,
> > > and have the occasional lock contention for a resource counter
> resulting
> > > in 100ms, now you're at 2000/s minus the lock contention rate. This
> > > strategy would scale better with the number of compute nodes, since
> > > more nodes means more distinct locks, so you can scale out the number
> > > of running servers separate from the number of scheduling requests.
> >
> > As far as I can see, moving to an in-memory scheduler is essentially
> orthogonal
> > to allowing multiple schedulers to run concurrently.  We can do both.
> >
>
> Agreed, and I want to make sure we continue to be able to run concurrent
> schedulers.
>
> Going in memory won't reduce contention for the same resources. So it
> will definitely schedule faster, but it may also serialize with concurrent
> schedulers sooner, and thus turn into a situation where scaling out more
> nodes means the same, or even less throughput.
>
> Keep in mind, I actually think we give our users _WAY_ too much power
> over our clouds, and I actually think we should simply have flavor based
> scheduling and let compute nodes grab node reservation requests directly
> out of flavor based queues based on their own current observation of
> their ability to service it.
>
> But I understand that there are quite a few clouds now that have been
> given shiny dynamic scheduling tools and now we have to engineer for
> those.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OPNFV] [Functest] Tempest & Rally

2015-09-25 Thread Boris Pavlovic
Jose,


Rally community provides official docker images here:
https://hub.docker.com/r/rallyforge/rally/
So I would suggest to use them.


Best regards,
Boris Pavlovic



On Fri, Sep 25, 2015 at 5:07 AM, Jose Lausuch 
wrote:

> Hi,
>
>
>
> Thanks for the hint Boris.
>
>
>
> Regarding what we do at functest with Rally, yes, we clone the latest from
> the Rally repo. We thought about that before and the possible errors it can
> convey, compatibility and so on.
>
>
>
> As I am working on a Docker image where all the Functest environment will
> be pre-installed, we might get rid of such potential problems. But, that
> image will need constant updates if there are major patches/bugfixes in the
> rally repo.
>
>
>
> What is your opinion on this? What do you think it makes more sense?
>
>
>
> /Jose
>
>
>
>
>
>
>
> *From:* bo...@pavlovic.ru [mailto:bo...@pavlovic.ru] *On Behalf Of *Boris
> Pavlovic
> *Sent:* Friday, September 25, 2015 7:56 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* EXT morgan.richo...@orange.com; Kosonen, Juha (Nokia - FI/Espoo);
> Jose Lausuch
> *Subject:* Re: [openstack-dev] [OPNFV] [Functest] Tempest & Rally
>
>
>
> Morgan,
>
>
>
>
>
> You should add at least:
>
>
>
> sla:
>   failure_rate:
>     max: 0
>
>
>
> Otherwise rally will pass 100% no matter what is happening.
>
>
>
>
>
> Best regards,
>
> Boris Pavlovic
>
>
>
> On Thu, Sep 24, 2015 at 10:29 PM, Tikkanen, Viktor (Nokia - FI/Espoo) <
> viktor.tikka...@nokia.com> wrote:
>
> Hi Morgan
>
> and thank you for the overview.
>
> I'm now waiting for the POD#2 VPN profile (will be ready soon). We will
> try then to figure out what OpenStack/tempest/rally configuration changes
> are needed in order to get rid of those test failures.
>
> I suppose that most of the problems (like "Multiple possible networks
> found" etc.) are relatively easy to solve.
>
> BTW, since tempest is being currently developed in "branchless" mode
> (without release specific stable versions), do we have some common
> understanding/requirements how "dynamically" Functest should use its code?
>
> For example, config_functest.py seems to contain routines for
> cloning/installing rally (and indirectly tempest) code, does it mean that
> the code will be cloned/installed at the time when the test set is executed
> for the first time? (I'm just wondering if it is necessary or not to
> "freeze" somehow used code for each OPNFV release to make sure that it will
> remain compatible and that test results will be comparable between
> different OPNFV setups).
>
> -Viktor
>
> > -Original Message-
> > From: EXT morgan.richo...@orange.com [mailto:morgan.richo...@orange.com]
> > Sent: Thursday, September 24, 2015 4:56 PM
> > To: Kosonen, Juha (Nokia - FI/Espoo); Tikkanen, Viktor (Nokia - FI/Espoo)
> > Cc: Jose Lausuch
> > Subject: [OPNFV] [Functest] Tempest & Rally
> >
> > Hi,
> >
> > I was wondering whether you could have a look at Rally/Tempest tests we
> > automatically launch in Functest.
> > We have still some errors and I assume most of them are due to
> > misconfiguration and/or quota ...
> > With Jose, we planned to have a look after SR0 but we do not have much
> > time and we are not fully skilled (even if we progressed a little bit:))
> >
> > If you could have a look and give your feedback, it would be very
> > helpful, we could discuss it during an IRC weekly meeting
> > In Arno we did not use the SLA criteria, that is also something we could
> > do for the B Release
> >
> > for instance if you look at
> > https://build.opnfv.org/ci/view/functest/job/functest-foreman-
> > master/19/consoleText
> >
> > you will see rally and Tempest log
> >
> > Rally scenario are a compilation of default Rally scenario played one
> > after the other and can be found in
> >
> https://git.opnfv.org/cgit/functest/tree/testcases/VIM/OpenStack/CI/suites
> >
> > the Rally artifacts are also pushed into the artifact server
> > http://artifacts.opnfv.org/
> > e.g.
> > http://artifacts.opnfv.org/functest/lf_pod2/2015-09-23_17-36-
> > 07/results/rally/opnfv-authenticate.html
> > look for 09-23 to get Rally json/html files and tempest.conf
> >
> > thanks
> >
> > Morgan
> >
> >
> >

Re: [openstack-dev] [OPNFV] [Functest] Tempest & Rally

2015-09-24 Thread Boris Pavlovic
Morgan,


You should add at least:

sla:
  failure_rate:
    max: 0

Otherwise Rally will report a 100% pass rate no matter what is happening.
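For context, here is roughly what that SLA section looks like embedded in a minimal Rally task file — the scenario name, flavor, image, and runner values are illustrative; only the sla block is the point:

```yaml
---
  NovaServers.boot_and_delete_server:
    -
      args:
        flavor:
          name: "m1.tiny"
        image:
          name: "cirros-0.3.4-x86_64-uec"
      runner:
        type: "constant"
        times: 10
        concurrency: 2
      sla:
        failure_rate:
          max: 0
```

With failure_rate.max set to 0, any single failed iteration marks the whole task as failed instead of silently passing.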


Best regards,
Boris Pavlovic

On Thu, Sep 24, 2015 at 10:29 PM, Tikkanen, Viktor (Nokia - FI/Espoo) <
viktor.tikka...@nokia.com> wrote:

> Hi Morgan
>
> and thank you for the overview.
>
> I'm now waiting for the POD#2 VPN profile (will be ready soon). We will
> try then to figure out what OpenStack/tempest/rally configuration changes
> are needed in order to get rid of those test failures.
>
> I suppose that most of the problems (like "Multiple possible networks
> found" etc.) are relatively easy to solve.
>
> BTW, since tempest is being currently developed in "branchless" mode
> (without release specific stable versions), do we have some common
> understanding/requirements how "dynamically" Functest should use its code?
>
> For example, config_functest.py seems to contain routines for
> cloning/installing rally (and indirectly tempest) code, does it mean that
> the code will be cloned/installed at the time when the test set is executed
> for the first time? (I'm just wondering if it is necessary or not to
> "freeze" somehow used code for each OPNFV release to make sure that it will
> remain compatible and that test results will be comparable between
> different OPNFV setups).
>
> -Viktor
>
> > -Original Message-
> > From: EXT morgan.richo...@orange.com [mailto:morgan.richo...@orange.com]
> > Sent: Thursday, September 24, 2015 4:56 PM
> > To: Kosonen, Juha (Nokia - FI/Espoo); Tikkanen, Viktor (Nokia - FI/Espoo)
> > Cc: Jose Lausuch
> > Subject: [OPNFV] [Functest] Tempest & Rally
> >
> > Hi,
> >
> > I was wondering whether you could have a look at Rally/Tempest tests we
> > automatically launch in Functest.
> > We have still some errors and I assume most of them are due to
> > misconfiguration and/or quota ...
> > With Jose, we planned to have a look after SR0 but we do not have much
> > time and we are not fully skilled (even if we progressed a little bit:))
> >
> > If you could have a look and give your feedback, it would be very
> > helpful, we could discuss it during an IRC weekly meeting
> > In Arno we did not use the SLA criteria, that is also something we could
> > do for the B Release
> >
> > for instance if you look at
> > https://build.opnfv.org/ci/view/functest/job/functest-foreman-
> > master/19/consoleText
> >
> > you will see rally and Tempest log
> >
> > Rally scenario are a compilation of default Rally scenario played one
> > after the other and can be found in
> >
> https://git.opnfv.org/cgit/functest/tree/testcases/VIM/OpenStack/CI/suites
> >
> > the Rally artifacts are also pushed into the artifact server
> > http://artifacts.opnfv.org/
> > e.g.
> > http://artifacts.opnfv.org/functest/lf_pod2/2015-09-23_17-36-
> > 07/results/rally/opnfv-authenticate.html
> > look for 09-23 to get Rally json/html files and tempest.conf
> >
> > thanks
> >
> > Morgan
> >
> >
> >
> __
> > ___
> >
> > Ce message et ses pieces jointes peuvent contenir des informations
> > confidentielles ou privilegiees et ne doivent donc
> > pas etre diffuses, exploites ou copies sans autorisation. Si vous avez
> > recu ce message par erreur, veuillez le signaler
> > a l'expediteur et le detruire ainsi que les pieces jointes. Les messages
> > electroniques etant susceptibles d'alteration,
> > Orange decline toute responsabilite si ce message a ete altere, deforme
> ou
> > falsifie. Merci.
> >
> > This message and its attachments may contain confidential or privileged
> > information that may be protected by law;
> > they should not be distributed, used or copied without authorisation.
> > If you have received this email in error, please notify the sender and
> > delete this message and its attachments.
> > As emails may be altered, Orange is not liable for messages that have
> been
> > modified, changed or falsified.
> > Thank you.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-operators][Rally] Rally plugins reference is available

2015-09-24 Thread Boris Pavlovic
Hi stackers,

As you may know, Rally test cases are created as a mix of plugins.

At this point we have more than 200 plugins, covering almost all
OpenStack projects.
Before, you had to analyze the plugins' code or use the "rally plugin find/list"
commands to find the plugins you need, which was a pain in the neck.

So finally we have auto generated plugin reference:
https://rally.readthedocs.org/en/latest/plugin/plugin_reference.html


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][tc][tags] Rally tags

2015-09-21 Thread Boris Pavlovic
Thierry,

Okay great I will propose patches.

Best regards,
Boris Pavlovic

On Mon, Sep 21, 2015 at 1:14 AM, Thierry Carrez 
wrote:

> Boris Pavlovic wrote:
> > I have few ideas about the rally tags:
> >
> > - covered-by-rally
> >It means that there are official (inside the rally repo) plugins for
> > testing of particular project
> >
> > - has-rally-gates
> >It means that Rally is run against every patch proposed to the project
> >
> > - certified-by-rally [wip]
> >As well we are starting working on certification
> > task: https://review.openstack.org/#/c/225176/5
> >which will be the standard way to check whatever cloud is ready for
> > production based on volume, performance & scale testing.
> >
> > Thoughts?
>
> Hi Boris,
>
> The "next-tags" workgroup at the Technical Committee came up with a
> number of families where I think your proposed tags could fit:
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070651.html
>
> The "integration" family of tags defines cross-project support. We want
> to have tags that say that a specific service has a horizon dashboard
> plugin, or a devstack integration, or heat templates... So I would say
> that the "covered-by-rally" tag could be part of that family
> ('integration:rally' maybe ?). We haven't defined our first tag in that
> family yet: sdague was working on the devstack ones[1] as a template for
> the family but that effort stalled a bit:
>
> https://review.openstack.org/#/c/203785/
>
> As far as the 'has-rally-gates' tag goes, that would be part of the 'QA'
> family ("qa:has-rally-gates" for example).
>
> So I think those totally make sense as upstream-maintained tags and are
> perfectly aligned with the families we already had in mind but haven't
> had time to push yet. Feel free to propose those tags to the governance
> repository. An example of such submission lives at:
>
> https://review.openstack.org/#/c/207467/
>
> The 'certified-by-rally' tag is a bit farther away I think (less
> objective and needs your certification program to be set up first). You
> should start with the other two.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally][releases] New Rally release model

2015-09-20 Thread Boris Pavlovic
Hi stackers,

As you probably know, Rally uses an independent release model.

We do this to ship releases as fast as possible.
Our goal is to release at least once every two weeks.

The major reason why we need a separate release model is that we
should ship plugins as soon as possible, and plugins live in the same repo
as the tool, framework, and docs.

The previous model was quite simple: we were planning changes for the next
release, reviewing and merging those changes, and cutting new versions.

This model worked well until we started doing things that are not fully
backward compatible and require migrations.

For example, the 0.1.0 release will take us more than 100 days, which is
terribly long for people who are waiting for new plugins.

I would like to propose a new release model that will allow us to do 2
things:
* Do regular, fast releases with new plugins
* Have months for developing new features and changes that are not fully
backward compatible

The main idea is this:
*) The master branch will be used for development of new major Rally versions,
e.g. the 0.x.y -> 0.x+1.0 switch,
   which can include non-backward-compatible changes.
*) Latest version - we will port plugins, bug fixes, and some features to it
*) Stable version - we will port only high & critical bug fixes, if possible

Here is the diagram that explains the release cycle:

[image: Inline image 1]

Thoughts?

Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-operators][tc][tags] Rally tags

2015-09-20 Thread Boris Pavlovic
Hi stackers,

The Rally project is used more and more by operators to check that
live OpenStack clouds perform well and are ready for production.

Results of the PAO ops meeting showed that there is interest in Rally-related
tags for projects:
http://www.gossamer-threads.com/lists/openstack/operators/49466

3) "works in rally" - new tag suggestion
> There was general interest in asking the Rally team to consider making a
> "works in rally" tag, since the rally tests were deemed 'good'.


I have few ideas about the rally tags:

- covered-by-rally
   It means that there are official plugins (inside the rally repo) for
testing a particular project

- has-rally-gates
   It means that Rally is run against every patch proposed to the project

- certified-by-rally [wip]
   We are also starting work on the certification task:
https://review.openstack.org/#/c/225176/5
   which will be the standard way to check whether a cloud is ready for
production based on volume, performance & scale testing.


Thoughts?


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Rally] PTL candidacy

2015-09-16 Thread Boris Pavlovic
Hi stackers,

My name is Boris.

A few years ago I started Rally to help the OpenStack community simplify
performance/load/scale/volume testing of OpenStack and make it
simple to answer the question: "How does OpenStack perform (in one's very
specific case)?"

The Rally team did a terrific job turning the small initial 100-line
script into the project you see now.

It covers most of the use cases, has plugins for most of the projects and
high-quality code & docs; it is also simple to install/use/integrate
and works quite stably.

However, we are in the middle of our path and there are plenty of areas
that should be improved:

* New input task format - that address all current issues

https://github.com/openstack/rally/blob/master/doc/specs/in-progress/new_rally_input_task_format.rst

* Multi-scenario load generation
  This will allow us to do monitoring alongside testing, HA testing under
load, and
  load from many "different" types of workloads.

* Scaling up the Rally DB
  This will allow users to run non-stop workloads for days or generate a
really huge
  distributed load for quite a long amount of time

* Distributed load generation
  Generation of really huge load, like 100k rps

* Workloads framework
  Benchmarking that measures performance of servers, VMs, networks, and
volumes

* ...infinity list from here:
https://docs.google.com/spreadsheets/u/1/d/16DXpfbqvlzMFaqaXAcJsBzzpowb_XpymaK2aFY2gA2g/edit#gid=0


In other words, I would like to continue working as PTL of Rally until we
get all this done.


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] dependencies problem on different release

2015-08-26 Thread Boris Pavlovic
Gareth,


A real example is to enable Rally for OpenStack Juno. Rally doesn't support
> old release officially but I could checkout its codes to the Juno release date
> which make both codes match. However even if I use the old requirements.txt
> to install dependencies, there must be many packages are installed as
> upstream versions and some of them breaks. An ugly way is to copy pip list
> from old Juno environment and install those properly. I hope there are
> better ways to do this work. Anyone has smart ideas?


Install everything in a virtualenv (or at least Rally)

Best regards,
Boris Pavlovic

On Wed, Aug 26, 2015 at 7:00 AM, Gareth  wrote:

> Hey guys,
>
> I have a question about dependencies. There is an example:
>
> On 2014.1, project A is released with its dependency in requirements.txt
> which contains:
>
> foo>=1.5.0
> bar>=2.0.0,<2.2.0
>
> and half a year later, the requirements.txt changes to:
>
> foo>=1.7.0
> bar>=2.1.0,<2.2.0
>
> It looks fine, but potential change would be upstream version of package
> foo and bar become 2.0.0 and 3.0.0 (major version upgrade means there are
> incompatible changes).
>
> For bar, there will be no problems, because "<2.2.0" limit the version
> from major version changes. But with 2.0.0 foo, it will break the
> installation of 2014.1 A, because current development can't predict every
> incompatible changes in the future.
>
> A real example is to enable Rally for OpenStack Juno. Rally doesn't
> support old release officially but I could checkout its codes to the Juno
> release date which make both codes match. However even if I use the old
> requirements.txt to install dependencies, there must be many packages are
> installed as upstream versions and some of them breaks. An ugly way is to
> copy pip list from old Juno environment and install those properly. I hope
> there are better ways to do this work. Anyone has smart ideas?
>
> --
> Gareth
>
> *Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
> *OpenStack contributor, kun_huang@freenode*
> *My promise: if you find any spelling or grammar mistakes in my email from
> Mar 1 2013, notify me *
> *and I'll donate $1 or ¥1 to an open organization you specify.*
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Great updates to tests and CI jobs

2015-08-19 Thread Boris Pavlovic
Roman,

well done! ;)

Best regards,
Boris Pavlovic

On Wed, Aug 19, 2015 at 8:38 AM, Roman Prykhodchenko  wrote:

> Hi folks!
>
> Today I’m proud to announce that since this moment python-fuelclient has
> it’s own python-jobs in OpenStack CI. Thanks to all of you who helped me
> making Fuel Client compatible with the upstream CI.
> Besides sharing great news I think it’s necessary to share changes we had
> to do, in order to accomplish this result.
>
> First of all tests were reorganized: now functional and unit tests have
> their own separate folders inside the fuelclient/tests directory. That
> allowed us to distinguish them from both the CI and a developer’s point of
> view, so there will be no mess we used to have.
>
> The other change we’ve made is deleting run_tests.sh*. It is possible to
> run and manage all the tests via tox which is a de-facto standard in
> OpenStack ecosystem. That also means anyone who is familiar with any of
> OpenStack projects will be able to orchestrate tests without a need to
> learn anything. Tox is preconfigured to run py26, py27, pep8, cover,
> functional, and cleanup environments. py26 and py27 only run unit tests and
> cover also involves calculating coverage. functional fires up Nailgun and
> launches functional tests. cleanup stops Nailgun, deletes its DB and any
> files left after functional tests and what you will definitely like —
> cleans up all *.pyc files. By default tox executes environments in the
> following order: py26->py27->pep8->functional->cleanup.
>
> Minimal tox was updated to 2.1 which guarantees no external environment
> variable is passed to tests.
>
> The jobs on OpenStack CI are set to be non-voting for a few days to give
> it a better try. On the next week we will switch them to voting. At the
> same time we will remove unit tests from FuelCI to not waste extra time.
>
>
> * Technically it is kept in place to keep compatibility with FuelCI but it
> only invokes tox from inside. It will be removed later, when it’s time to
> switch off unit tests on FuelCI.
>
>
> - romcheg
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] [Ceilometer] profiler sample resource id

2015-08-13 Thread Boris Pavlovic
Pradeep,


Actually this topic is more about osprofiler & ceilometer.

Overall it doesn't require this prefix; however, it is used in the osprofiler
lib:
https://github.com/stackforge/osprofiler/blob/master/osprofiler/parsers/ceilometer.py#L129
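The coupling is simple to see in a sketch — this is an illustrative simplification, not the actual ceilometer/osprofiler code, and the function names are made up:

```python
# Producer side (ceilometer profiler notification handler, simplified):
def make_resource_id(payload):
    # The "profiler-" prefix marks the sample as an osprofiler trace.
    return "profiler-%s" % payload["base_id"]

# Consumer side (osprofiler's ceilometer parser, simplified):
def extract_base_id(resource_id):
    prefix = "profiler-"
    if not resource_id.startswith(prefix):
        raise ValueError("not an osprofiler sample: %s" % resource_id)
    return resource_id[len(prefix):]

rid = make_resource_id({"base_id": "42d1-abc"})
print(extract_base_id(rid))  # 42d1-abc
```

Dropping the prefix on the producer side without updating the parser would break this round trip, which is why the two projects have to change together.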


Best regards,
Boris Pavlovic

On Thu, Aug 13, 2015 at 7:16 AM, Pradeep Kilambi 
wrote:

>
>
> On Thu, Aug 13, 2015 at 8:50 AM, Roman Vasilets 
> wrote:
>
>> Hi,
>>Could you provide the link to this code?
>>
>
>
> Here it is:
>
>
> https://github.com/openstack/ceilometer/blob/master/ceilometer/profiler/notifications.py#L76
>
>
>
>
>>
>> On Wed, Aug 12, 2015 at 9:22 PM, Pradeep Kilambi 
>> wrote:
>>
>>> We're in the process of converting existing meters to use a more
>>> declarative approach where we add the meter definition as part of a yaml.
>>> As part of this transition there are few notification handlers where the id
>>> is not consistent. For example, in profiler notification Handler the
>>> resource_id is set to "profiler-%s" % message["payload"]["base_id"] . Is
>>> there a reason we have the prefix? Can we ignore this and directly set
>>> to message["payload"]["base_id"] ? Seems like there is no real need for the
>>> prefix here unless i'm missing something. Can we go ahead and drop this?
>>>
>>> If we don't hear anything i'll assume there is no objection to dropping
>>> this prefix.
>>>
>>>
>>> Thanks,
>>>
>>> --
>>> --
>>> Pradeep Kilambi; irc: prad
>>> OpenStack Engineering
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> --
> Pradeep Kilambi; irc: prad
> OpenStack Engineering
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Plan to implement the OpenStack Testing Interface for Fuel

2015-07-18 Thread Boris Pavlovic
Dmitry,


Am I missing any major risks or additional requirements here?


Syncing requirements with the global OpenStack requirements can produce
issues that require changes in code.

I would strongly recommend syncing requirements by hand and testing
everything before starting to split repos and add openstack-ci jobs.

-1 risk.
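For reference, the by-hand reconciliation can start from a quick diff of a
project's pins against openstack/requirements' global-requirements.txt. The
sketch below is illustrative only — the inlined file contents and the very
naive specifier parsing are assumptions, not how the real requirements
tooling works:

```python
# Illustrative sketch: flag requirement lines that drift from the global
# OpenStack requirements list, so they can be reconciled by hand before
# openstack-ci jobs start enforcing them. The parser is deliberately naive
# (real requirement lines can carry markers, comments, and extras).

def parse(lines):
    """Map package name -> full requirement line (simplified)."""
    reqs = {}
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#"):
            name = line.split(">")[0].split("<")[0].split("=")[0].split("!")[0]
            reqs[name] = line
    return reqs

# Hypothetical file contents, inlined for the example:
global_reqs = parse(["pbr>=1.3", "six>=1.9.0"])
project_reqs = parse(["pbr>=1.0", "six>=1.9.0"])

drift = {name: (line, global_reqs[name])
         for name, line in project_reqs.items()
         if name in global_reqs and line != global_reqs[name]}

print(drift)  # {'pbr': ('pbr>=1.0', 'pbr>=1.3')}
```

Every entry in `drift` is a place where the split-out repo could
immediately fail a requirements check — exactly the risk described above.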


Best regards,
Boris Pavlovic

On Sat, Jul 18, 2015 at 12:16 AM, Dmitry Borodaenko <
dborodae...@mirantis.com> wrote:

> One of the requirements for all OpenStack projects is to use the same
> Testing
> Interface [0]. In response to the Fuel application [1], the Technical
> Committee
> has clarified that this includes running gate jobs on the OpenStack
> Infrastructure [2][3].
>
> [0]
> http://governance.openstack.org/reference/project-testing-interface.html
> [1] https://review.openstack.org/199232
> [2]
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-07-14-20.02.log.html#l-150
> [3] https://review.openstack.org/201766
>
> Although the proposed formal requirement could use some clarification,
> according to the meeting log linked above, TC has acknowledged that
> OpenStack
> Infrastructure can't currently host deployment tests for projects like
> Fuel and
> TripleO. This narrows the requirement down to codestyle checks, unit tests,
> coverage report, source tarball generation, and docs generation for all
> Python
> components of Fuel.
>
> As I mentioned in my previous email [4], we're days away from Feature
> Freeze
> for Fuel 7.0, so we need to plan a gradual transition instead of making the
> testing interface a hard requirement for all repositories.
>
> [4]
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/069906.html
>
> I propose the following stages for transition of Fuel CI to OpenStack
> Infrastructure:
>
> Stage 1: Enable non-voting jobs compliant with the testing interface for a
> single Python component of Fuel. This has no impact on Fuel schedule and
> should
> be done immediately. Boris Pavlovic has kindly agreed to be our code fairy
> and
> magicked together a request that enables such jobs for nailgun in fuel-web
> [5].
>
> [5] https://review.openstack.org/202892
>
> As it turns out, OpenStack CI imposes strict limits on a project's
> directory
> structure, and fuel-web doesn't fit those since it contains a long list of
> components besides nailgun, some of them not even in Python. Making the
> above
> tests pass would involve a major restructuring of fuel-web repository,
> which
> once again is for now blocked by the 7.0 FF. We have a blueprint to split
> fuel-web [6], but so far we've only managed to extract fuel-agent, the rest
> will probably have to wait until 8.0.
>
> [6] https://blueprints.launchpad.net/fuel/+spec/split-fuel-web-repo
>
> Because of that, I think fuel-agent is a better candidate for the first
> Fuel
> component to get CI jobs on OpenStack Infrastructure.
>
> Stage 2: Get the non-voting jobs on the first component to pass, and make
> them
> voting and gating the commits to that component. Assuming that we pick a
> component that doesn't need major restructuring to pass OpenStack CI, we
> should
> be able to complete this stage before 7.0 soft code freeze on August 13
> [7].
>
> [7] https://wiki.openstack.org/wiki/Fuel/7.0_Release_Schedule
>
> Stage 3: Enable non-voting jobs for all other Python components of Fuel
> outside
> of fuel-web. We will have until 7.0 GA release on September 24, and we
> won't be
> able to proceed to following stages until 7.0 is released.
>
> Stage 4: Everything else that is too disruptive for 7.0 but doesn't require
> changes on the side of OpenStack Infrastructure can all start in parallel
> after
> Fuel 7.0 is released:
>
> a) Finish splitting fuel-web.
> b) Get all Python components of Fuel to pass OpenStack CI.
> c) Set up unit test gates for non-Python components of Fuel (e.g.
> fuel-astute).
> d) Finish the transition of upstream modules in fuel-library to librarian.
> e) Set up rspec based gates for non-upstream modules in fuel-library.
>
> I think completing all these can be done by 8.0 SCF in December, and if
> not,
> must become a blocker requirement for 9.0 (Q3 2016).
>
> Stage 5: Bonus objectives that are not required to meet TC requirements for
> joining OpenStack, but still achievable based on current state of OpenStack
> Infrastructure:
>
> a) functional tests for Fuel UI
> b) beaker tests for non-upstream parts of fuel-library
>
> Stage 6: Stretch goal for the distant future is to actually make it
> possible to
> run multi-node deploy tests on OpenStack Infrastructure. I guess we can at
> least start that discussion in 

Re: [openstack-dev] [Sahara] [QA] [tests coverage] Can we add CI job to control the unit tests coverage?

2015-07-02 Thread Boris Pavlovic
Anastasia,

because new patch may not be just a new code, committer may delete
> something or fix typos in docstrings, etc.


This job compares the number of non-covered lines before and after a patch.
If you just remove code, there will be fewer lines to cover, so the number
of non-covered lines will be lower or the same (if everything was covered
before).

Fixing typos in docstrings won't introduce new lines.

Btw, the job allows you to introduce a few (N) new lines that are not
covered by unit tests, for the cases where covering them is impractical.
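The check described above can be sketched in a few lines. This is an
illustrative simplification — Rally's actual job drives coverage.py via
tests/ci/cover.sh, and the tolerance value here is an assumption:

```python
# Sketch of the gate logic: compare the count of uncovered lines in the
# base branch and in the patched tree, allowing a small tolerance.

ALLOWED_EXTRA = 4  # the "few (N) new lines" tolerance; value is an assumption

def coverage_regressed(uncovered_before, uncovered_after,
                       allowed_extra=ALLOWED_EXTRA):
    """Return True when a patch adds more uncovered lines than allowed."""
    return uncovered_after - uncovered_before > allowed_extra

# Removing code can only keep the uncovered count the same or lower it:
print(coverage_regressed(120, 118))  # False -> job passes
# A patch that adds ten uncovered lines fails the check:
print(coverage_regressed(120, 130))  # True -> job votes -1
```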


Best regards,
Boris Pavlovic

On Thu, Jul 2, 2015 at 10:46 AM, Anastasia Kuznetsova <
akuznets...@mirantis.com> wrote:

> Hi Timur,
>
> Generally I think that it is a good idea to have a gate that will check
> whether new code is covered by unit tests or not. But I am not sure that
> this gate should be voting (if I understand you correct),
> because new patch may not be just a new code, committer may delete
> something or fix typos in docstrings, etc.
>
> On Thu, Jul 2, 2015 at 8:15 PM, Timur Nurlygayanov <
> tnurlygaya...@mirantis.com> wrote:
>
>> Hi all,
>>
>> I suggest to add CI job which will check the unit tests coverage for
>> Sahara repository and will set -1 for commits with new code and without
>> unit tests (if we have some degradation of tests coverage).
>> This job successfully works for Rally project and it helps to organize
>> the right code development process when developers write new unit tests for
>> new functionality.
>>
>> we can just copy this job from Rally and start to use it for Sahara:
>> Coverage control script:
>> https://github.com/openstack/rally/blob/master/tests/ci/cover.sh
>> Configuration file for coverage plugin (to exclude code which shouldn't
>> be affected): https://github.com/openstack/rally/blob/master/.coveragerc
>> Example of job in infra repository:
>> https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L4088
>>
>> I expect that it will help to increase the tests coverage by unit tests.
>>
>> Do we have any objections?
>>
>> --
>>
>> Timur,
>> Senior QA Engineer
>> OpenStack Projects
>> Mirantis Inc
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best regards,
> Anastasia Kuznetsova
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stackforge projects are not second class citizens

2015-06-15 Thread Boris Pavlovic
Joe,

When looking at stackalytics [2] for each project, we don't see any
> noticeably change in number of reviews, contributors, or number of commits
> from before and after each project joined OpenStack.


I can't agree on this.

*) Rally is currently facing a core-reviewer bottleneck.
We have about 130 patches on review (40 at the beginning of kilo).
*) In IRC, +15 members online on average
*) We merged about 2x as much if we compare kilo-1 vs liberty-1
*) I see a lot of interest from various companies in using Rally (because it
is *official* now)


Best regards,
Boris Pavlovic



On Mon, Jun 15, 2015 at 2:12 PM, Jay Pipes  wrote:

> On 06/15/2015 06:20 AM, Joe Gordon wrote:
>
>> One of the stated problems the 'big tent' is supposed to solve is:
>>
>> 'The binary nature of the integrated release results in projects outside
>> the integrated release failing to get the recognition they deserve.
>> "Non-official" projects are second- or third-class citizens which can't
>> get development resources. Alternative solutions can't emerge in the
>> shadow of the blessed approach. Becoming part of the integrated release,
>> which was originally designed to be a technical decision, quickly became
>> a life-or-death question for new projects, and a political/community
>> minefield.' [0]
>>
>> Meaning projects should see an uptick in development once they drop
>> their second-class citizenship and join OpenStack. Now that we have been
>> living in the world of the big tent for several months now, we can see
>> if this claim is true.
>>
>> Below is a list of the first few few projects to join OpenStack after
>> the big tent, All of which have now been part of OpenStack for at least
>> two months.[1]
>>
>> * Mangum -  Tue Mar 24 20:17:36 2015
>> * Murano - Tue Mar 24 20:48:25 2015
>> * Congress - Tue Mar 31 20:24:04 2015
>> * Rally - Tue Apr 7 21:25:53 2015
>>
>> When looking at stackalytics [2] for each project, we don't see any
>> noticeably change in number of reviews, contributors, or number of
>> commits from before and after each project joined OpenStack.
>>
>> So what does this mean? At least in the short term moving from
>> Stackforge to OpenStack does not result in an increase in development
>> resources (too early to know about the long term).  One of the three
>> reasons for the big tent appears to be unfounded, but the other two
>> reasons hold.
>>
>
> You have not given enough time to see the effects of the Big Tent, IMHO.
> Lots of folks in the corporate world just found out about it at the design
> summit, frankly.
>
> > The only thing I think this information changes is what
>
>> peoples expectations should be when applying to join OpenStack.
>>
>
> What is your assumption of what people's expectations are when applying to
> join OpenStack?
>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue with pymysql

2015-06-12 Thread Boris Pavlovic
Sean,

Thanks for quick fix/revert https://review.openstack.org/#/c/191010/
This unblocked Rally gates...

Best regards,
Boris Pavlovic

On Fri, Jun 12, 2015 at 8:56 PM, Clint Byrum  wrote:

> Excerpts from Mike Bayer's message of 2015-06-12 09:42:42 -0700:
> >
> > On 6/12/15 11:37 AM, Mike Bayer wrote:
> > >
> > >
> > > On 6/11/15 9:32 PM, Eugene Nikanorov wrote:
> > >> Hi neutrons,
> > >>
> > >> I'd like to draw your attention to an issue discovered by rally gate
> job:
> > >>
> http://logs.openstack.org/96/190796/4/check/gate-rally-dsvm-neutron-rally/7a18e43/logs/screen-q-svc.txt.gz?level=TRACE
> > >>
> > >> I don't have bandwidth to take a deep look at it, but first
> > >> impression is that it is some issue with nested transaction support
> > >> either on sqlalchemy or pymysql side.
> > >> Also, besides errors with nested transactions, there are a lot of
> > >> Lock wait timeouts.
> > >>
> > >> I think it makes sense to start with reverting the patch that moves
> > >> to pymysql.
> > > My immediate reaction is that this is perhaps a concurrency-related
> > > issue; because PyMySQL is pure python and allows for full blown
> > > eventlet monkeypatching, I wonder if somehow the same PyMySQL
> > > connection is being used in multiple contexts. E.g. one greenlet
> > > starts up a savepoint, using identifier "_3" which is based on a
> > > counter that is local to the SQLAlchemy Connection, but then another
> > > greenlet shares that PyMySQL connection somehow with another
> > > SQLAlchemy Connection that uses the same identifier.
> >
> > reading more of the log, it seems the main issue is just that there's a
> > deadlock on inserting into the securitygroups table.  The deadlock on
> > insert can be because of an index being locked.
> >
> >
> > I'd be curious to know how many greenlets are running concurrently here,
> > and what the overall transaction looks like within the operation that is
> > failing here (e.g. does each transaction insert multiple rows into
> > securitygroups?  that would make a deadlock seem more likely).
>
> This begs two questions:
>
> 1) Are we handling deadlocks with retries? It's important that we do
> that to be defensive.
>
> 2) Are we being careful to sort the table order in any multi-table
> transactions so that we minimize the chance of deadlocks happening
> because of any cross table deadlocks?
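On question 1), the defensive pattern is usually a retry wrapper around the
transactional function (oslo.db provides a similar `wrap_db_retry` helper).
The sketch below is generic and self-contained — the exception class and
delay values are illustrative assumptions, not Neutron's actual code:

```python
import random
import time

class DBDeadlock(Exception):
    """Stand-in for the driver's deadlock error (e.g. MySQL error 1213)."""

def retry_on_deadlock(max_retries=5, base_delay=0.01):
    """Decorator sketch: re-run a transactional function when it loses
    a deadlock, instead of surfacing the error to the caller."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries - 1:
                        raise
                    # Jittered backoff so competing greenlets do not
                    # immediately collide on the same index lock again.
                    time.sleep(base_delay * (2 ** attempt) * random.random())
        return wrapper
    return decorator

attempts = []

@retry_on_deadlock()
def insert_security_group():
    attempts.append(1)
    if len(attempts) < 3:   # lose the deadlock twice, then succeed
        raise DBDeadlock()
    return "inserted"

print(insert_security_group())  # inserted
print(len(attempts))            # 3
```

Question 2) is complementary: touching tables in a consistent sorted order
inside multi-table transactions reduces how often this retry path is hit
at all.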
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] [Ironic] [Inspector] Where should integration tests for non-core projects live now? (Was: Toward 2.0.0 release)

2015-06-10 Thread Boris Pavlovic
Dmitry,

We introduced recently dsvm rally ironic job:
https://review.openstack.org/#/c/187997/

Now we are working on Rally tests for Ironic:
https://review.openstack.org/#/c/186064/

Don't hesitate to join us=)

Best regards,
Boris Pavlovic

On Wed, Jun 10, 2015 at 1:23 PM, Dmitry Tantsur  wrote:

> On 06/10/2015 11:57 AM, Boris Pavlovic wrote:
>
>> Dmitry,
>>
>> If you choose to use the Rally framework for testing, there are 3 options:
>>
>>   - Keep Rally plugins (tests) in separated tree
>>   - Keep Rally plugins (tests) in your project tree
>>   - Keep Rally plugins (tests) in Rally repo
>>
>> Rally plugins can be used for all kinds of testing: (perf, scalability,
>> load...)
>> so you are killing two birds with one stone.
>>
>> P.S. I would imho prefer to keep all high-quality plugins inside the Rally
>> repo to simplify operators' lives..
>>
>
> Hi, that sounds interesting, I'll have a look.
>
> Note, however, that Inspector integration testing highly depends on Ironic
> one, so unless Ironic adapts/agrees to adapt Rally, it will be hard to
> Inspector to do it.
>
>
>>
>> Best regards,
>> Boris Pavlovic
>>
>> On Wed, Jun 10, 2015 at 11:57 AM, Ken'ichi Ohmichi
>> <ken1ohmi...@gmail.com> wrote:
>>
>> 2015-06-10 16:48 GMT+09:00 Dmitry Tantsur <dtant...@redhat.com>:
>>
>> > On 06/10/2015 09:40 AM, Ken'ichi Ohmichi wrote:
>> >> To solve it, we have decided the scope of Tempest as the etherpad
>> >> mentioned.
>> >>
>> >>> Are there any hints now on where we can start with our
>> integration tests?
>> >>
>> >>
>> >> For the other projects, we are migrating the test framework of
>> Tempest
>> >> to tempest-lib which is a library.
>> >> So each project can implement their own tests in each repository by
>> >> using the test framework of tempest-lib.
>> >
>> >
>> > So in my case we can start with putting test code to
>> ironic-inspector tree
>> > using tempest-lib, right?
>>
>> Yeah, right.
>> Neutron is already doing that.
>> maybe neutron/tests/api/ of Neutron repository will be a hint for it.
>>
>> > Will it be possible to run tests on Ironic as well using plugin from
>> > ironic-inspector?
>>
>> Yeah, it will be possible.
>> but I'm guessing ironic-inspector is optional and Ironic should not
>> depend on the gate test result of ironic-inspector.
>> So maybe you just need to run Ironic tests on ironic-inspector gate
>> tests, right?
>>
>> >>> After a quick look at devstack-gate I got an impression that it's
>> >>> expecting
>> >>> tests as part of tempest:
>> >>>
>> >>>
>> https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L600
>> >>>
>> >>> Our final goal is to have devstack gate test for Ironic and
>> Inspector
>> >>> projects working together.
>> >>
>> >>
>> >> We have discussed external interfaces of Tempest on the summit, so
>> >> that Tempest gathers tests from each project repository and runs
>> them
>> >> at the same time.
>> There is a qa-spec for https://review.openstack.org/#/c/184992/
>>
>> >
>> >
>> > Cool, thanks! Does it mean that devstack-gate will also be updated
>> to allow
>> > something like DEVSTACK_GATE_TEMPEST_PLUGINS="https://github.com/..
>> ."?
>>
>> Yeah, will be.
>> The idea of this external interface is based on DevStack's one.
>> I think we will be able to use it on the gate like that.
>>
>> Thanks
>> Ken'ichi Ohmichi
>>
>> ---
>>
>>  >>> On 06/10/2015 08:07 AM, Yuiko Takada wrote:
>>  >>>>
>>  >>>>
>>  >>>> Hi, Dmitry,
>>  >>>>
>>  >>>>  I guess the whole idea of new release models is NOT to
>> tie projects
>>  >>>>  to each other any more except for The Big Release twice a
>> year :)
>>  >>>> So
>>  >>>>  I think no, we don't need to. We still can do it

Re: [openstack-dev] [QA] [Ironic] [Inspector] Where should integration tests for non-core projects live now? (Was: Toward 2.0.0 release)

2015-06-10 Thread Boris Pavlovic
Dmitry,

If you choose to use the Rally framework for testing, there are 3 options:

 - Keep Rally plugins (tests) in separated tree
 - Keep Rally plugins (tests) in your project tree
 - Keep Rally plugins (tests) in Rally repo

Rally plugins can be used for all kinds of testing: (perf, scalability,
load...)
so you are killing two birds with one stone.

P.S. I would imho prefer to keep all high-quality plugins inside the Rally
repo to simplify operators' lives.


Best regards,
Boris Pavlovic

On Wed, Jun 10, 2015 at 11:57 AM, Ken'ichi Ohmichi 
wrote:

> 2015-06-10 16:48 GMT+09:00 Dmitry Tantsur :
> > On 06/10/2015 09:40 AM, Ken'ichi Ohmichi wrote:
> >> To solve it, we have decided the scope of Tempest as the etherpad
> >> mentioned.
> >>
> >>> Are there any hints now on where we can start with our integration
> tests?
> >>
> >>
> >> For the other projects, we are migrating the test framework of Tempest
> >> to tempest-lib which is a library.
> >> So each project can implement their own tests in each repository by
> >> using the test framework of tempest-lib.
> >
> >
> > So in my case we can start with putting test code to ironic-inspector
> tree
> > using tempest-lib, right?
>
> Yeah, right.
> Neutron is already doing that.
> maybe neutron/tests/api/ of Neutron repository will be a hint for it.
>
> > Will it be possible to run tests on Ironic as well using plugin from
> > ironic-inspector?
>
> Yeah, it will be possible.
> but I'm guessing ironic-inspector is optional and Ironic should not
> depend on the gate test result of ironic-inspector.
> So maybe you just need to run Ironic tests on ironic-inspector gate
> tests, right?
>
> >>> After a quick look at devstack-gate I got an impression that it's
> >>> expecting
> >>> tests as part of tempest:
> >>>
> >>>
> https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L600
> >>>
> >>> Our final goal is to have devstack gate test for Ironic and Inspector
> >>> projects working together.
> >>
> >>
> >> We have discussed external interfaces of Tempest on the summit, so
> >> that Tempest gathers tests from each project repository and runs them
> >> at the same time.
> >> There is a qa-spec for https://review.openstack.org/#/c/184992/
> >
> >
> > Cool, thanks! Does it mean that devstack-gate will also be updated to
> allow
> > something like DEVSTACK_GATE_TEMPEST_PLUGINS="https://github.com/...";?
>
> Yeah, will be.
> The idea of this external interface is based on DevStack's one.
> I think we will be able to use it on the gate like that.
>
> Thanks
> Ken'ichi Ohmichi
>
> ---
>
> >>> On 06/10/2015 08:07 AM, Yuiko Takada wrote:
> >>>>
> >>>>
> >>>> Hi, Dmitry,
> >>>>
> >>>>  I guess the whole idea of new release models is NOT to tie
> projects
> >>>>  to each other any more except for The Big Release twice a year :)
> >>>> So
> >>>>  I think no, we don't need to. We still can do it, if we have
> >>>>  something to release by the time Ironic releases, but I suggest
> >>>>  deciding it on case-by-case basis.
> >>>>
> >>>> OK, I see.
> >>>>
> >>>> One more concern, about Tempest integration test which I will
> implement
> >>>> in V2.1.0,
> >>>> it seems like that we cannot add Ironic-inspector's tests into Tempest
> >>>> even if integration tests.
> >>>> Please see:
> >>>> https://etherpad.openstack.org/p/YVR-QA-in-the-big-tent
> >>>
> >>>
> >>>
> >>> Good catch. I guess the answer depends on where Ironic integration
> tests
> >>> are
> >>> going to live - we're going to live with them. Let me retarget this
> >>> thread
> >>> to a wider audience.
> >>>
> >>>>
> >>>> But I heard from you that Devananda thinks we need this in tempest
> >>>> itself. [3]
> >>>> Do you know something like current situation?
> >>>>
> >>>>
> >>>> Best Regards,
> >>>> Yuiko Takada
> >>>>
> >>>> 2015-06-09 15:59 GMT+09:00 Dmitry Tantsur <dtant...@redhat.com>:
> >>>>
> >>>

Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-05 Thread Boris Pavlovic
Hi,

Maybe we should just give it a try:

1) I will prepare all modifications outside of infra and show a demo
2) Get it into infra as an experimental feature
3) Try it in Rally
4) Share the experience and decide whether to keep it or get rid of it.

Best regards,
Boris Pavlovic

On Fri, Jun 5, 2015 at 9:21 PM, Valeriy Ponomaryov  wrote:

> If such a possibility appears, then there will definitely be people who
> will try it without weighing in on this discussion (like me).
>
> And the only social problem is that such "maintainers" of project
> sub-parts should be responsible enough. It is very likely that some
> maintainers of vendor-specific things cannot be trusted with general
> "approval rights", for objective reasons (low-quality code, etc...).
> Hence, we should not automate the granting of rights, but we should
> automate the review process.
>
> So, I would like to have such a possibility/feature in the projects I
> participate in, as soon as there is a big community around them.
>
> It is worth a try, IMHO.
>
> --
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com
> vponomar...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][scaling up development] Rally core team re-organization

2015-06-05 Thread Boris Pavlovic
All,

Sorry for picking very bad words. I am not a native speaker. =(

In Russian this word also has another meaning, like being "too aggressive
toward somebody". I used it in that meaning. I didn't think about the
"sexually violated" sense at all.
Sorry, sorry, and one more time sorry for using improper words.


Best regards,
Boris Pavlovic

On Fri, Jun 5, 2015 at 9:05 PM, Nikola Đipanov  wrote:

> On 06/05/2015 06:31 PM, Doug Hellmann wrote:
> > Excerpts from Boris Pavlovic's message of 2015-06-05 20:03:44 +0300:
> >> Hi stackers,
> >>
> >> Seems likes after stackforge/rally -> openstack/rally Rally project
> started
> >> being more attractive.
> >> According recent stats we are on top 3 position (based on Patch sets
> stats)
> >>
> http://stackalytics.com/?release=liberty&metric=patches&project_type=All
> >> And if we compare half year ago we have 40 open reviews and now we have
> >> about 140...
> >> In other words we need to scale core reviewing process with keeping
> >> quality.
> >>
> >> 
> >>
> >> I suggested in mailing thread:
> >> [openstack-dev][all][infra][tc][ptl] Scaling up code review process
> (subdir
> >> cores)
> >> To create special rules & ACL groups to have fully automated system.
> >>
> >> Instead of support I got raped by community.
> >
> > I understand that you feel that the negative response to your
> > proposal was strong, but this is *COMPLETELY* inappropriate wording
> > for this mailing list.
> >
>
> +1000 - words have meaning and getting ones ideas criticized on a
> mailing list by peers is not even in the same universe as being sexually
> violated!!!
>
> IMHO this kind of behaviour needs to be sanctioned now, this kind of
> language must not take root on this list ever!
>
> I have no idea what the process for this is but I am sure people who
> know will respond soon.
>
> Not cool Boris! Not even a little bit.
>
> N.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][scaling up development] Rally core team re-organization

2015-06-05 Thread Boris Pavlovic
Sylvain,

Are you sure your tone is appropriate once you read again your email ?


I don't see anything wrong with the tone or the email at all.
I just summarized the results of that thread for the Rally team, so they
won't need to read it,
and explained why we won't have sub-cores and need a trust model instead.
That's all.



> How can we help you understand that opinions help us to think about us and
> how we can be better ?


Some members of the community can avoid doing the things from the list that I wrote.


>
> Do you think you have to apologize for such an email?


Not yet. Do I have any reason for that?


Best regards,
Boris Pavlovic

On Fri, Jun 5, 2015 at 8:43 PM, Sylvain Bauza  wrote:

>
>
> Le 05/06/2015 19:03, Boris Pavlovic a écrit :
>
> Hi stackers,
>
>  Seems likes after stackforge/rally -> openstack/rally Rally project
> started being more attractive.
> According recent stats we are on top 3 position (based on Patch sets
> stats)
>  http://stackalytics.com/?release=liberty&metric=patches&project_type=All
> And if we compare half year ago we have 40 open reviews and now we have
> about 140...
> In other words we need to scale core reviewing process with keeping
> quality.
>
>  
>
>  I suggested in mailing thread:
> [openstack-dev][all][infra][tc][ptl] Scaling up code review process
> (subdir cores)
> To create special rules & ACL groups to have fully automated system.
>
>  Instead of support I got raped by community.
> Community was very polite & technical oriented in that thread and they
> said:
> 1) I am bad PTL,
> 2) I don't know how to do open source
> 3) Rally project sux
> 4) Rally project community sux
> 5) Rally project has troubles
> 6) A lot of more constructive critics
>
>  So Instead of having NICE fully automated system for subcores we will
> use ugly, not automated but very popular in community "trust" model based
> on excel.
>
>  
>
>
>  Solution:
> We will have single core team that can merge anything.
> But there will be two types of core (based on trust ;()
>
>  I created page in docs, that explains who is who:
> https://review.openstack.org/#/c/188843/1
>
>  Core reviewer
> --
> That are core for whole project
>
>  Plugin Core reviewer
> 
> That will just review/merge their component plugins and nothing else
>
>
>  I hope by end of this cycle each component will have own subteam which
> will resolve
> most of reviewing process scale issues..
>
>
>  Best regards,
> Boris Pavlovic
>
>
> Are you sure your tone is appropriate once you read again your email ?
>
> How can we help you understand that opinions help us to think about us and
> how we can be better ?
>
> Do you think you have to apologize for such an email?
>
> -Sylvain
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][scaling up development] Rally core team re-organization

2015-06-05 Thread Boris Pavlovic
Doug,


I understand that you feel that the negative response to your
> proposal was strong, but this is *COMPLETELY* inappropriate wording
> for this mailing list.


Okay, next time I will copy-paste parts of emails from others
(with their even more offensive tone toward my side)
instead of making such a list.


Best regards,
Boris Pavlovic

On Fri, Jun 5, 2015 at 8:31 PM, Doug Hellmann  wrote:

> Excerpts from Boris Pavlovic's message of 2015-06-05 20:03:44 +0300:
> > Hi stackers,
> >
> > Seems likes after stackforge/rally -> openstack/rally Rally project
> started
> > being more attractive.
> > According recent stats we are on top 3 position (based on Patch sets
> stats)
> > http://stackalytics.com/?release=liberty&metric=patches&project_type=All
> > And if we compare half year ago we have 40 open reviews and now we have
> > about 140...
> > In other words we need to scale core reviewing process with keeping
> > quality.
> >
> > 
> >
> > I suggested in mailing thread:
> > [openstack-dev][all][infra][tc][ptl] Scaling up code review process
> (subdir
> > cores)
> > To create special rules & ACL groups to have fully automated system.
> >
> > Instead of support I got raped by community.
>
> I understand that you feel that the negative response to your
> proposal was strong, but this is *COMPLETELY* inappropriate wording
> for this mailing list.
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally][scaling up development] Rally core team re-organization

2015-06-05 Thread Boris Pavlovic
Hi stackers,

Seems like after the stackforge/rally -> openstack/rally move, the Rally
project started being more attractive.
According to recent stats, we are in the top 3 (based on patch set stats):
http://stackalytics.com/?release=liberty&metric=patches&project_type=All
And if we compare with half a year ago: we had 40 open reviews, and now we
have about 140...
In other words, we need to scale the core reviewing process while keeping
quality.



I suggested in the mailing thread
"[openstack-dev][all][infra][tc][ptl] Scaling up code review process
(subdir cores)"
to create special rules & ACL groups to have a fully automated system.

Instead of support I got raped by community.
The community was very polite & technically oriented in that thread, and they
said:
1) I am a bad PTL,
2) I don't know how to do open source
3) Rally project sux
4) Rally project community sux
5) Rally project has troubles
6) A lot more constructive criticism

So instead of having a NICE fully automated system for subcores, we will use
the ugly, non-automated but very popular in the community "trust" model based
on an Excel sheet.




Solution:
We will have a single core team that can merge anything.
But there will be two types of cores (based on trust ;()

I created a page in the docs that explains who is who:
https://review.openstack.org/#/c/188843/1

Core reviewer
--
They are cores for the whole project

Plugin Core reviewer

They will just review/merge their component's plugins and nothing else


I hope that by the end of this cycle each component will have its own subteam,
which will resolve
most of the review-process scaling issues.


Best regards,
Boris Pavlovic


Re: [openstack-dev] [puppet] Change abandonment policy

2015-06-05 Thread Boris Pavlovic
Hi,

+1 for #1, and if a patch is not touched for N weeks, just finish it using the
currently active team.

Best regards,
Boris Pavlovic

On Fri, Jun 5, 2015 at 7:27 PM, Richard Raseley  wrote:

> Colleen Murphy wrote:
>
>> 3) Manually abandon after N months/weeks changes that have a -1 that was
>> never responded to
>>
>> ```
>> If a change is submitted and given a -1, and subsequently the author
>> becomes unresponsive for a few weeks, reviewers should leave reminder
>> comments on the review or attempt to contact the original author via IRC
>> or email. If the change is easy to fix, anyone should feel welcome to
>> check out the change and resubmit it using the same change ID to
>> preserve original authorship. If the author is unresponsive for at least
>> 3 months and no one else takes over the patch, core reviewers can
>> abandon the patch, leaving a detailed note about how the change can be
>> restored.
>>
>> If a change is submitted and given a -2, or it otherwise becomes clear
>> that the change can not make it in (for example, if an alternate change
>> was chosen to solve the problem), and the author has been unresponsive
>> for at least 3 months, a core reviewer should abandon the change.
>> ```
>>
>
> +1 for #3
>


Re: [openstack-dev] [all]Big Tent Mode within respective projects

2015-06-04 Thread Boris Pavlovic
Jay,


> At this time, Neutron is the only project that has done any splitting out
> of driver and advanced services repos. Other projects have discussed doing
> this, but, at least in Nova, that discussion was put on hold for the time
> being. Last I remember, we agreed that we would clean up, stabilize and
> document the virt driver API in Nova before any splitting of driver repos
> would be feasible.


IMHO, Neutron is not the only one. ;)
Rally supports out-of-tree plugins as well, and I have already seen some
third-party repos:
https://github.com/stackforge/haos

Best regards,
Boris Pavlovic

On Thu, Jun 4, 2015 at 2:08 PM, John Garbutt  wrote:

> On 3 June 2015 at 13:39, Jay Pipes  wrote:
> > On 06/03/2015 08:25 AM, Zhipeng Huang wrote:
> >>
> >> Hi All,
> >>
> >> As I understand, Neutron by far has the clearest big tent mode via its
> >> in-tree/out-of-tree decomposition, thanks to Kyle and other Neutron team
> >> members effort.
> >>
> >> So my question is, is it the same for the other projects? For example,
> >> does Nova also have the project-level Big Tent Mode Neutron has?
> >
> >
> > Hi Zhipeng,
> >
> > At this time, Neutron is the only project that has done any splitting
> out of
> > driver and advanced services repos. Other projects have discussed doing
> > this, but, at least in Nova, that discussion was put on hold for the time
> > being. Last I remember, we agreed that we would clean up, stabilize and
> > document the virt driver API in Nova before any splitting of driver repos
> > would be feasible.
>
> +1 to jay's comment.
>
> I see Nova's mission as providing a solid interoperable API experience
> to on-demand compute resources. Right now, thats happening best by
> keeping things in tree, but we are doing work to make other options
> possible.
>
> I actually see the existence of projects such as Cinder, Heat and
> Magnum as success stories born out of Nova saying no to expanding our
> scope (and in the case of Cinder, actively trying to reduce our
> scope). I hope more of both of those things will happen in the future.
>
> If we had accepted these efforts into Nova, they would not have had
> the freedom they get by living inside OpenStack, but outside of
> Compute. Something the big tent makes much easier to deal with. I
> don't think they would have gained much by being inside the compute
> project, mostly because we are all crazy busy looking after Nova.
>
> Thanks,
> John
>


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Boris Pavlovic
Josh,

Yep, let's just run this experiment in Rally, and I will share the experience
with the rest of the community.

Best regards,
Boris Pavlovic

On Wed, Jun 3, 2015 at 9:45 PM, Joshua Harlow  wrote:

> S, just some thoughts,
>
> If boris thinks this might help rally, why not just let him try it?
>
> If boris (and friends) will make the needed changes to jenkins or other to
> have whatever ACL format (avoid a turing complete language please) that
> says who can work in what directories in the rally repo then meh, why is
> this such a big deal? If it ends up not working out, oh well, if it ends up
> being a trust issue in the end, oh well, live and learn right?
>
> IMHO let boris try it, if it works out as a model for rally, more power to
> him, if it doesn't, well that's how people learn, and it can then be
> something that didn't work for rally. Everyone will move on, people will
> have learned what didn't work, and life will go on...
>
> It starts to feel that we have each a different model that we know and may
> not want to just let another model (that may or may not work well for
> rally) in. If we lived like that we'd probably all still be on horses and
> still think the world is flat and that the universe revolves around the
> earth.
>
> -Josh
>
> Boris Pavlovic wrote:
>
>> James B.
>>
>> One more time.
>> Everybody makes mistakes and it's perfectly OK.
>> I don't want to punish anybody and my goal is to make system
>> that catch most of them (human mistakes) no matter how it is complicated.
>>
>> Best regards,
>> Boris Pavlovic
>>
>>
>> On Wed, Jun 3, 2015 at 5:33 PM, James Bottomley <
>> james.bottom...@hansenpartnership.com> wrote:
>>
>> On Wed, 2015-06-03 at 09:29 +0300, Boris Pavlovic wrote:
>>  > *- Why not just trust people*
>> >
>> >  People get tired and make mistakes (very often).
>> >  That's why we have blocking CI system that checks patches,
>> >  That's why we have rule 2 cores / review (sometimes even
>> 3,4,5...)...
>> >
>> >  In ideal work Lieutenants model will work out of the box. In real
>> life all
>> >  checks like:
>> >  person X today has permission to do Y operation should be checked
>> >  automatically.
>> >
>> >  This is exactly what I am proposing.
>>
>> This is completely antithetical to the open source model.  You have to
>> trust people, that's why the project has hierarchies filled with more
>> trusted people.  Do we trust people never to make mistakes?  Of course
>> not; everyone's human, that's why there are cross checks.  It's simply
>> not possible to design a system where all the possible human mistakes
>> are eliminated by rules (well, it's not possible to imagine: brave new
>> world and 1984 try looking at something like this, but it's impossible
>> to build currently in practise).
>>
>> So, before we build complex checking systems, the correct question to
>> ask is: what's the worst that could happen if we didn't?  In this
>> case,
>> two or more of your lieutenants accidentally approve a patch not in
>> their area and no-one spots it before it gets into the build.
>> Presumably, even though it's not supposed to be their areas, they
>> reviewed the patch and found it OK.  Assuming the build isn't broken,
>> everything proceeds as normal.  Even if there was some subtle bug in
>> the
>> code that perhaps some more experienced person would spot, eventually
>> it
>> gets found and fixed.
>>
>> You see the point?  This is roughly equivalent to what would happen
>> today if a core made a mistake in a review ... it's a normal
>> consequence
>> we expect to handle.  If it happened deliberately then the bad
>> Lieutenant eventually gets found and ejected (in the same way a bad
>> core
>> would).  The bottom line is there's no point building a complex
>> permission system when it wouldn't really improve anything and it
>> would
>> get in the way of flexibility.
>>
>> James
>>
>>
>>


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Boris Pavlovic
Robert,

Some of the the consequences of splitting up repos:
>  - atomic changes become non-atomic
>  - cross-cutting changes become more complex
>  - code analysis has to deal with more complex setups (can't lint
> across boundaries as readily, for instance)
>  - distribution and installation via source become harder
>  - project mgmt overheads increase
>  - project identity becomes more amorphous
> These aren't necessarily bad things, but they are things, and since
> the purported goal is to reduce the likelyhood of defects entering
> rally's codebase, I'd be wary of those consequences.


+2

And don't forget about the common parts shared by all of these commands:
CLI, API, DB, common tools

So we would need to split the Rally code into 100500 repos, release a lot of
crappy libs
that are used only in Rally, and have a lot of pain with all the processes
(docs, releases, management, code review, ...).

Splitting into repos really impacts the architecture a lot.

Best regards,
Boris Pavlovic



Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Boris Pavlovic
James B.

One more time:
everybody makes mistakes, and that's perfectly OK.
I don't want to punish anybody; my goal is to build a system
that catches most of these (human) mistakes, no matter how complicated it is.

Best regards,
Boris Pavlovic


On Wed, Jun 3, 2015 at 5:33 PM, James Bottomley <
james.bottom...@hansenpartnership.com> wrote:

> On Wed, 2015-06-03 at 09:29 +0300, Boris Pavlovic wrote:
> > *- Why not just trust people*
> >
> > People get tired and make mistakes (very often).
> > That's why we have blocking CI system that checks patches,
> > That's why we have rule 2 cores / review (sometimes even 3,4,5...)...
> >
> > In ideal work Lieutenants model will work out of the box. In real life
> all
> > checks like:
> > person X today has permission to do Y operation should be checked
> > automatically.
> >
> > This is exactly what I am proposing.
>
> This is completely antithetical to the open source model.  You have to
> trust people, that's why the project has hierarchies filled with more
> trusted people.  Do we trust people never to make mistakes?  Of course
> not; everyone's human, that's why there are cross checks.  It's simply
> not possible to design a system where all the possible human mistakes
> are eliminated by rules (well, it's not possible to imagine: brave new
> world and 1984 try looking at something like this, but it's impossible
> to build currently in practise).
>
> So, before we build complex checking systems, the correct question to
> ask is: what's the worst that could happen if we didn't?  In this case,
> two or more of your lieutenants accidentally approve a patch not in
> their area and no-one spots it before it gets into the build.
> Presumably, even though it's not supposed to be their areas, they
> reviewed the patch and found it OK.  Assuming the build isn't broken,
> everything proceeds as normal.  Even if there was some subtle bug in the
> code that perhaps some more experienced person would spot, eventually it
> gets found and fixed.
>
> You see the point?  This is roughly equivalent to what would happen
> today if a core made a mistake in a review ... it's a normal consequence
> we expect to handle.  If it happened deliberately then the bad
> Lieutenant eventually gets found and ejected (in the same way a bad core
> would).  The bottom line is there's no point building a complex
> permission system when it wouldn't really improve anything and it would
> get in the way of flexibility.
>
> James
>
>
>


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Boris Pavlovic
Jeremy,


Except that reorganizing files in a repo so that you can have sane
> pattern matches across them for different review subteams is
> _exactly_ this. The question is really one of "do you have a
> separate .git in each of the directory trees for your subteams or
> only one .git in the parent directory?"


I can't speak for other projects, so let's talk about Rally specifically.

We have a single .git in the root for the whole project.

We have 4 subdirs that can have their own maintainers:
- rally/deploy
- rally/verify
- rally/benchmark
- rally/plugins

The first 3 subdirs are quite different and usually have isolated communities.
The plugins are not so hard to review and are the most actively developed part.

If I were able to have cores for specific areas, that would scale up the
code reviewing process a lot
without any trust, process, social, architectural, or other changes in the project.
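
As a minimal sketch of what such an automated check could look like (all
reviewer names here are hypothetical, and this is not an actual Gerrit or
Rally implementation): given the files a patch touches and a mapping of
subtrees to subteams, decide whose +2 actually counts:

```python
# Hypothetical sketch: map Rally subtrees to their core subteams and
# decide whether a reviewer's +2 applies to a given patch.
RALLY_SUBTREES = {
    "rally/deploy/": {"alice", "bob"},
    "rally/verify/": {"carol"},
    "rally/benchmark/": {"dave", "erin"},
    "rally/plugins/": {"frank"},
}

PROJECT_CORES = {"boris"}  # full cores may approve anything


def owning_team(path):
    """Return the subteam responsible for a file, or None if outside all subtrees."""
    for prefix, team in RALLY_SUBTREES.items():
        if path.startswith(prefix):
            return team
    return None


def plus_two_counts(reviewer, changed_files):
    """A +2 counts only if the reviewer may review every file in the patch."""
    if reviewer in PROJECT_CORES:
        return True
    for path in changed_files:
        team = owning_team(path)
        if team is None or reviewer not in team:
            return False
    return True


# A plugins-only patch can be approved by the plugins subteam, while a
# patch that also touches rally/deploy/ cannot.
print(plus_two_counts("frank", ["rally/plugins/foo.py"]))    # True
print(plus_two_counts("frank", ["rally/plugins/foo.py",
                                "rally/deploy/engine.py"]))  # False
```

In a real system the subtree mapping would live in the project ACL config
rather than in code, but the check itself stays this simple.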


Best regards,
Boris Pavlovic


On Wed, Jun 3, 2015 at 5:00 PM, Julien Danjou  wrote:

> On Wed, Jun 03 2015, Boris Pavlovic wrote:
>
> > And I don't understand "what" so serious problem we have.
> > We were not able to do reverts so  we build CI that doesn't allow us to
> > break master
> >  so we don't need to do reverts. I really don't see here any big
> problems.
>
> Doing revert does not mean breaking nor unbreaking master. It's just
> about canceling changes. You're not able to break master if you have a
> good test coverage – and I'm sure Rally has.
>
> > I was talking about reverting patches. And I believe the process is
> broken
> > if you need to revert patches. It means that core team is not enough team
> > or CI is not enough good.
>
> Sure, reverting a patch means that a mistake has been made somewhere,
> *but* the point is that having a few mistakes done and reverted is far
> less a problem than freezing an entire project because everyone fears a
> mistake might be made. Just learn to make mistake, fix/revert them, and
> change fast. Not freeze everyone in terror of something being done. :)
>
> --
> Julien Danjou
> /* Free Software hacker
>http://julien.danjou.info */
>


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Boris Pavlovic
Julien,

If I were in your shoes, I would pick my words more carefully.

When you are saying:

> Reverting patches is unacceptable for Rally project.
> Then you have a more serious problem than the rest of OpenStack.


"you" means Rally community which is quite large.
http://stackalytics.com/?release=liberty&metric=commits&project_type=openstack&module=rally


And I don't understand what serious problem we have.
We were not able to do reverts, so we built a CI that doesn't allow us to
break master,
so we don't need to do reverts. I really don't see any big problems here.

> This means that we merged bug and this is epic fail of PTL of project.
> Your code is already full of bugs and misfeatures, like the rest of the
> software. That's life.


I was talking about reverting patches. And I believe the process is broken
if you need to revert patches: it means that the core team is not strong
enough or the CI is not good enough.


If you're having trust issues, good luck maintaining any large-scale
> successful (open source) project. This is terrible management and leads
> to micro-managing tasks and people, which has never build something
> awesome.


I don't even trust myself, because I am human and I make mistakes.
My goal in the PTL position is to build a process that stops "human"
mistakes before they land in master. In other words, everything should be
automated and checked pre-merge, not post-merge.

Best regards,
Boris Pavlovic

On Wed, Jun 3, 2015 at 4:00 PM, Thierry Carrez 
wrote:

> So yeah, that's precisely what we discussed at the cross-project
> workshop about In-team scaling in Vancouver (led by Kyle and myself).
> For those not present, I invite you to read the notes:
>
> https://etherpad.openstack.org/p/liberty-cross-project-in-team-scaling
>
> The conclusion was to explore splitting review areas and building trust
> relationships. Those could happen:
>
> - along architectural lines (repo splits)
> - along areas of expertise with implicit trust to not review anything else
>
> ... which is precisely what you seem to oppose.
>
> Boris Pavlovic wrote:
> > *- Why not splitting repo/plugins?*
> >
> >   I don't want to make "architectural" decisions based on "social" or
> >   "not enough good tool for review" issues.
> >
> >   If we take a look at OpenStack that was splited many times: Glance,
> > Cinder, ...
> >   we will see that there is a log of code duplication that can't be
> > removed even after
> >   two or even more years of oslo effort. As well it produce such issues
> > like syncing
> >   requirements, requirements in such large bash script like devstack,
> >   there is not std installator, it's quite hard to manage and test it
> > and so on..
> >
> >   That's why I don't think that splitting repo is good "architecture"
> > decision - it makes
> >simple things complicated...
>
> I know we disagree on that one, but I don't think monolithic means
> "simpler". Having smaller parts that have a simpler role and explicit
> contracts to communicate with other pieces is IMHO better and easier to
> maintain.
>
> We shouldn't split repositories when it only results in code
> duplication. But whenever we can isolate something that could have a
> more dedicated maintenance team, I think that's worth exploring as a
> solution to the review scaling issue.
>
> > *- Why not just trust people*
> >
> > People get tired and make mistakes (very often).
> > That's why we have blocking CI system that checks patches,
> > That's why we have rule 2 cores / review (sometimes even 3,4,5...)...
>
> It's not because "we don't trust people" that we have the 2-core rule.
> Core reviewers check the desirability and quality of implementation. By
> default we consider that if 2 of those agree that a change is sane, it
> probably is. The CI system checks something else, and that is that you
> don't break everyone or introduce a regression. So you shouldn't be able
> to "introduce a bug" that would be so serious that a simple revert would
> still be embarrassing. If you can, then you should work on your tests.
>
> I think it's totally fine to give people the ability to +2/approve
> generally, together with limits on where they are supposed to use that
> power. They will be more careful as to what they approve this way. For
> corner cases you can revert.
>
> As an example, Ubuntu development has worked on that trust model for
> ages. Once you are a developer, you may commit changes to any package in
> the distro

Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Boris Pavlovic
Ihar,

Reverting patches is unacceptable for the Rally project.
It means that we merged a bug, and that is an epic fail by the project's PTL.


Let's look at it from the other side: Ihar, would you share
your email password with me?
You can trust me, I won't do anything wrong with it.

And "yes" I don't want to trust anybody this is huge amount of work to PTL.

The PTL in such a case is a bottleneck, because he needs to check that all
100500+ subcores are reviewing pieces that they are allowed to review and
giving +2 only on patches that they can actually merge.


Let's just automate this stuff,
like we have automated CI for testing.

Best regards,
Boris Pavlovic

On Wed, Jun 3, 2015 at 2:28 PM, Ihar Hrachyshka  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> On 06/03/2015 08:29 AM, Boris Pavlovic wrote:
> > Guys,
> >
> > I will try to summarize all questions and reply on them:
> >
> > *- Why not splitting repo/plugins?*
> >
> > I don't want to make "architectural" decisions based on "social" or
> >  "not enough good tool for review" issues.
> >
> > If we take a look at OpenStack that was splited many times:
> > Glance, Cinder, ... we will see that there is a log of code
> > duplication that can't be removed even after two or even more years
> > of oslo effort. As well it produce such issues like syncing
> > requirements, requirements in such large bash script like devstack,
> >  there is not std installator, it's quite hard to manage and test
> > it and so on..
> >
> > That's why I don't think that splitting repo is good
> > "architecture" decision - it makes simple things complicated...
> >
> >
> > *- Why not just trust people*
> >
> > People get tired and make mistakes (very often).
>
> I wouldn't say they make mistakes *too* often. And if there is a
> mistake, we always have an option to git-revert and talk to the guy
> about it. I believe no one in the neutron team merges random crap, and
> I would expect the same from other openstack teams.
>
> It's also quite natural that people who do more reviews extend their
> field of expertise. Do we really want to chase PTLs to introduce a
> change into turing-complete-acl-description each time we feel someone
> is now ready to start reviewing code from yet another submodule?
>
> Or consider a case when a patch touches most, if not all submodules,
> but applies some very trivial changes, like a new graduated oslo
> library being consumed, or python3 adoption changes. Do you want to
> wait for a super-core with enough ACL permissions for all those
> submodules touched to approve it? I would go the opposite direction,
> allowing a single core to merge such a trivial patch, without waiting
> for the second one to waste his time reviewing it.
>
> Core reviewers are not those who are able to put +2 on any patch, but
> those who are able to understand where *not* to put it. I would better
> allow people themselves to decide where they are capable and where
> their expertise ends, and free PTLs from micro-managing the cats.
>
> So in essence: mistakes are cheap; reputation works; people are
> responsible enough; and more ACL fences are evil.
>
> > That's why we have blocking CI system that checks patches,
>
> Those checks are easy to automate. Trust is not easily formalized though
> .
>
> Ihar
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iQEcBAEBCAAGBQJVbuS9AAoJEC5aWaUY1u57v2wH/iDLvCrebTtTpocZ8a0BFJ7T
> ssgjM+1F2JiEuieNg7qRqkdW8fZuMuODc7EnWihjDjfP4OMQkelO2711KSPTCSmT
> 76RLMQrSHhyB2FO29qu+4bE5uwUV4uutaDyK8IRZpra+nrSoU8dtL6NuTa/csEeU
> QbmJBB2UMSXdrQmA6HfzoQV9Dmqk5ePbjzg1HXTFy/AtxCb2DLf2IUmeHqwtqg1o
> WoC5ISqoUkRzWx5h1IbV26hhJuGrW6pWjrX50UEFmR/VZwz9T13s7BVE4ReE7mnA
> 2cIGdFnhaJY/VzD4WEzXRfNXV0qetTJG6w30wktKq6y1mG6q8nm+N6KQ4Onq0FQ=
> =DZSF
> -END PGP SIGNATURE-
>


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Boris Pavlovic
Guys,

One more time: it's NOT about reputation and it's NOT about trusting
somebody.

It's about human nature: we all make mistakes.

A system that checks whether a core reviewer may merge a patch is just an
extra check to avoid unintentional mistakes by core reviewers and to make
things self-organized.


Best regards,
Boris Pavlovic

On Wed, Jun 3, 2015 at 12:55 PM, Alexis Lee  wrote:

> Robert Collins said on Wed, Jun 03, 2015 at 11:12:35AM +1200:
> > So I'd like us to really get our heads around the idea that folk are
> > able to make promises ('I will only commit changes relevant to the DB
> > abstraction/transaction management') and honour them. And if they
> > don't - well, remove their access. *even with* CD in the picture,
> > thats a wholly acceptable risk IMO.
>
> +1, optimism about promises is the solution. The reputational cost of
> violating such a promise is high, given what a small world open source
> can turn out to be.
>
>
> Alexis
> --
> Nova Engineer, HP Cloud.  AKA lealexis, lxsli.
>


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Boris Pavlovic
Guys,

I will try to summarize all the questions and reply to them:

*- Why not splitting repo/plugins?*

  I don't want to make "architectural" decisions based on "social" or
  "not good enough review tooling" issues.

  If we take a look at OpenStack, which was split many times (Glance, Cinder,
  ...), we will see that there is a lot of code duplication that couldn't be
  removed even after two or more years of the oslo effort. It also produces
  issues like syncing requirements, requirements living in a large bash
  script (devstack), no standard installer, and the whole thing being quite
  hard to manage and test, and so on.

  That's why I don't think that splitting the repo is a good "architecture"
  decision - it makes simple things complicated...


*- Why not just trust people*

People get tired and make mistakes (very often).
That's why we have a blocking CI system that checks patches;
that's why we have the rule of 2 cores per review (sometimes even 3, 4, 5...).

In an ideal world the Lieutenants model would work out of the box. In real
life, all checks like
"person X today has permission to do operation Y" should be performed
automatically.

This is exactly what I am proposing.
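
Sketched concretely in Python (all names and scopes here are hypothetical,
not an actual Gerrit submit rule), such an automated gate would combine a
per-file permission check with the usual two-+2 rule before allowing a merge:

```python
# Hypothetical sketch: a patch is submittable only when at least two +2
# votes come from reviewers whose review scope covers every changed file.
SCOPES = {
    "alice": ["db/"],   # subdir core: may only approve db/ changes
    "bob": ["api/"],    # subdir core: may only approve api/ changes
    "ptl": [""],        # the empty prefix matches everything (full core)
}


def vote_in_scope(reviewer, changed_files):
    """True if every changed file falls under one of the reviewer's prefixes."""
    prefixes = SCOPES.get(reviewer, [])
    return all(any(f.startswith(p) for p in prefixes) for f in changed_files)


def submittable(votes, changed_files, required=2):
    """votes: {reviewer: score}; count only in-scope +2 votes."""
    valid = [r for r, score in votes.items()
             if score == 2 and vote_in_scope(r, changed_files)]
    return len(valid) >= required


print(submittable({"alice": 2, "ptl": 2}, ["db/models.py"]))  # True
print(submittable({"alice": 2, "bob": 2}, ["db/models.py"]))  # False: bob out of scope
```

The point is that out-of-scope +2s are simply ignored by the gate, so an
unintentional approval can never merge a patch on its own.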

Best regards,
Boris Pavlovic

On Wed, Jun 3, 2015 at 8:42 AM, Salvatore Orlando 
wrote:

>
>
> On 3 June 2015 at 07:12, John Griffith  wrote:
>
>>
>>
>> On Tue, Jun 2, 2015 at 7:19 PM, Ian Wienand  wrote:
>>
>>> On 06/03/2015 07:24 AM, Boris Pavlovic wrote:
>>>
>>>> Really it's hard to find cores that understand whole project, but
>>>> it's quite simple to find people that can maintain subsystems of
>>>> project.
>>>>
>>>
>>>   We are made wise not by the recollection of our past, but by the
>>>   responsibility for our future.
>>>- George Bernard Shaw
>>>
>>> Less authorities, mini-kingdoms and
>>> turing-complete-rule-based-gerrit-subtree-git-commit-enforcement; more
>>> empowerment of responsible developers and building trust.
>>>
>>> -i
>>>
>>>
>>>
>>
>> ​All of the debate about the technical feasibility, additional repos
>> aside, the one question I always raise when topics like this come up is
>> "how does that really solve the problem".  In other words, there's still a
>> finite number of folks that dedicate the time to be "subject matter
>> experts" and do the reviews.
>>
>> Maybe this will help, I don't know.  But I have the same argument as I
>> made in my spec to remove drivers from Cinder altogether, creating "another
>> repo" and moving things around just creates more overhead and does little
>> to address the lack of review resources.
>>
>
> In the neutron project we do not have yet enough data points to assess
> impact of driver/plugin split on review turnaround. On the one hand it
> seems that there is no statistically significant improvement in review
> times for the "core" part, but on the other hand average review times for
> plugin/driver code have improved a lot. So I reckon that there's been a
> clear advantage on this front. There is always a flip of the coin, of
> course: plugin maintainers have to do extra work to chase changes in
> openstack/neutron.
>
> However, this is a bit out of scope for this thread. I'd say that
> splitting out a project in several repositories is an option, but not
> always the right one. In the case of neutron plugins and drivers, it made
> sense because there is a stable-ish interface between the core system and
> the plugin, and because there's usually little overlap of responsibilities.
>
>
>> I understand you're not proposing new repos Boris, although it was
>> mentioned in this thread.
>>
>> I do think that we could probably try and do something like growing the
>> Lieutenant model that the Neutron team is hammering out.  Not sure... but
>> seems like a good start; again assuming there are enough
>> qualified/interested Lieutenants.  I'm not sure, but that's kind of how I
>> interpreted your proposal but one additional step of ACL's; is that
>> accurate?
>>
>
> While I cannot answer for Boris, my opinion is that the lieutenant system
> actually tries to provide a "social" 

Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Boris Pavlovic
Jeremy,


>  the Infrastructure Project is now past 120 repos with more
> than 70 core reviewers among those.


I dislike the idea of having 120 repos for a single tool. It makes things
complicated for everybody:
documentation, installation, maintenance, work that touches multiple
repos, and so on.

So I would prefer to have a single repo with many subcores.


Robert,

We *really* don't need a technical solution to a social problem.


We really do need one. It's like non-voting jobs in our CI: everybody just
ignores them.
Besides, it will be hard for a large core team to know each other,
especially if we are speaking about various groups of cores that
maintain only parts of the system. Keeping all of this in people's
heads will be a hard task (it should be automated).


Best regards,
Boris Pavlovic



On Wed, Jun 3, 2015 at 2:12 AM, Robert Collins 
wrote:

> On 3 June 2015 at 10:34, Jeremy Stanley  wrote:
> > On 2015-06-02 21:59:34 + (+), Ian Cordasco wrote:
> >> I like this very much. I recall there was a session at the summit
> >> about this that Thierry and Kyle led. If I recall correctly, the
> >> discussion mentioned that it wasn't (at this point in time)
> >> possible to use gerrit the way you describe it, but perhaps people
> >> were mistaken?
> > [...]
> >
> > It wasn't an option at the time. What's being conjectured now is
> > that with custom Prolog rules it might be possible to base Gerrit
> > label permissions on strict file subsets within repos. It's
> > nontrivial, as of yet I've seen no working demonstration, and we'd
> > still need the Infrastructure Team to feel comfortable supporting it
> > even if it does turn out to be technically possible. But even before
> > going down the path of automating/enforcing it anywhere in our
> > toolchain, projects interested in this workflow need to try to
> > mentally follow the proposed model and see if it makes social sense
> > for them.
> >
> > It's also still not immediately apparent to me that this additional
> > complication brings any substantial convenience over having distinct
> > Git repositories under the control of separate but allied teams. For
> > example, the Infrastructure Project is now past 120 repos with more
> > than 70 core reviewers among those. In a hypothetical reality where
> > those were separate directory trees within a single repository, I'm
> > not coming up with any significant ways it would improve our current
> > workflow. That said, I understand other projects may have different
> > needs and challenges with their codebase we just don't face.
>
> We *really* don't need a technical solution to a social problem.
>
> If someone isn't trusted enough to know the difference between
> project/subsystemA and project/subsystemB, nor trusted enough not to
> commit changes to subsystemB, then pushing stuff out to a new repo or
> in-repo ACLs are not the solution. The solution is to work with them
> until you learn to trust them.
>
> Further, there are plenty of cases where the 'subsystem' is
> cross-cutting, not vertical - and in those cases it's much, much
> harder to just describe file boundaries where the thing is.
>
> So I'd like us to really get our heads around the idea that folk are
> able to make promises ('I will only commit changes relevant to the DB
> abstraction/transaction management') and honour them. And if they
> don't - well, remove their access. *Even with* CD in the picture,
> that's a wholly acceptable risk IMO.
>
> -Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Boris Pavlovic
Hi stackers,

*Issue*
*---*

Projects are becoming bigger and bigger over time.
More and more people would like to contribute code, and the core reviewer
team usually can't scale to match. It's very hard to find people who
understand the full project and have enough time to do code reviews. As a
result, a very small team works under heavy load and many maintainers just
get burned out.

We have to solve this issue to move forward.


*Idea*
*--*

Let's introduce subsystem cores.

It's really hard to find cores who understand the whole project, but it's
quite simple to find people who can maintain subsystems of the project.


*How To*
*---*

Gerrit is not as simple as it looks, and it has some really neat features ;)

For example, we can write our own rules about who can put a +2 on a patch
and merge it, based on the files it changes.

We can create a special "subdirectory core" ACL group.
People in such an ACL group will be able to merge changes that touch only
files under specific subdirectories.

As a result, with proper organization of the directories in a project, we
can scale up the review process without losing quality.
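The idea is easier to see with a concrete sketch. Gerrit's actual submit
rules would be written as Prolog against Gerrit's own data model; the
Python below (with purely illustrative reviewer names and paths) only shows
the decision such a rule would encode:

```python
# Hypothetical mapping of "subdirectory core" reviewers to the
# directory prefixes they are trusted to approve.
SUBDIR_CORES = {
    "alice": ("plugins/db/",),
    "bob": ("plugins/messaging/", "doc/"),
}

def can_approve(reviewer, changed_files):
    """A +2 from `reviewer` counts only if every file touched by the
    change lives under one of their trusted subdirectories."""
    dirs = SUBDIR_CORES.get(reviewer, ())
    if not changed_files or not dirs:
        return False
    # str.startswith accepts a tuple of prefixes, so this checks each
    # changed path against all of the reviewer's trusted directories.
    return all(path.startswith(dirs) for path in changed_files)

# A change confined to plugins/db/ can be approved by alice...
print(can_approve("alice", ["plugins/db/models.py"]))            # True
# ...but one that also touches core code cannot.
print(can_approve("alice", ["plugins/db/models.py", "api.py"]))  # False
```

A nice property of this shape: a full core is just a reviewer whose trusted
prefix tuple is `("",)`, which matches every path.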


*Thoughts?*


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally][summit] Rally summit updates

2015-05-24 Thread Boris Pavlovic
Hi stackers,


For those who don't want to miss anything related to Rally at the summit, I
made a small blog post that covers the most interesting things:

http://boris-42.me/rally-on-openstack-summit-in-vancouver/


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] A way for Operators/Users to submit feature requests

2015-05-14 Thread Boris Pavlovic
Robert,

> So I think we should explicitly leave room for experimentation and
> divergence, but also encourage a single common path - don't be
> different to be different, be different because it is important in
> this specific case.


First of all, feature requests follow the same process as specs do in other
projects; the difference is in what we expect to get in a spec versus a
feature request (and in the audience).

By the way, feature requests in Rally were introduced *far, far before* the
backlogs in Keystone and Nova.
It seems strange to me that those projects are reinventing a working
mechanism from another project =( instead of just reusing it.


Best regards,
Boris Pavlovic

On Thu, May 14, 2015 at 11:45 PM, Robert Collins 
wrote:

> On 15 May 2015 at 08:34, Jay Pipes  wrote:
> >
> > Hi Maish,
> >
> > I would support this kind of thing for projects that wish to do it, but
> at
> > the same time, I wouldn't want the TC to mandate all projects use this
> > method of collecting feedback. Projects, IMHO, should be free to
> > self-organize as they wish, including developing processes that make the
> > most sense for the project team.
>
> I think there is a balance to be struck. Where we tell users and
> operators to learn something different for every project, that has
> real impact. It makes it harder to engage with us, and it makes it
> harder to move between projects for contributors.
>
> Imagine if we had a spread of gerrit, github PR's, launchpad reviews,
> gitlab PRs and bitbucket PR's - say nova, swift, barbican, keystone
> and glance. That sounds silly because we all recognise the costs of
> switching there: I think we need to recognise the costs for other
> people even in things that as developers we don't interact with all
> that much.
>
> So I think we should explicitly leave room for experimentation and
> divergence, but also encourage a single common path - don't be
> different to be different, be different because it is important in
> this specific case.
>
> -Rob
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][Rally][announce] What's new in Rally v0.0.4

2015-05-14 Thread Boris Pavlovic
Mike,

Thank you for release notes! Nice work!

Best regards,
Boris Pavlovic

On Thu, May 14, 2015 at 5:48 PM, Mikhail Dubov  wrote:

> Hi everyone,
>
> Rally team is happy to announce that we have just cut the new release
> 0.0.4!
>
> *Release stats:*
>
>- Commits: *87*
>- Bug fixes: *21*
>- New scenarios: *14*
>- New contexts: *2*
>- New SLA: *1*
>- Dev cycle: *30 days*
>- Release date: *14/May/2015*
>
> *New features:*
>
>- *Rally now can generate load with users that already exist. *This
>makes it possible to use Rally for benchmarking OpenStack clouds that are
>using LDAP, AD or any other read-only keystone backend where it is not
>possible to create any users dynamically.
>- *New decorator **@osclients.Clients.register. *This decorator adds
>new OpenStack clients at runtime. The added client will be available from
>*osclients.Clients* at the module level and cached.
>- *Improved installation script.* The installation script for Rally
>can now be run by an unprivileged user, supports different database
>types, allows specifying a custom Python binary, automatically installs
>needed software if run as root, etc.
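Stepping outside the quoted notes for a moment: the runtime-registration
pattern behind *@osclients.Clients.register* can be sketched as below. This
is not Rally's actual implementation; everything beyond the decorator idea
itself (class layout, method names other than `register`) is illustrative.

```python
class Clients:
    """Sketch of a client registry extensible at runtime: plugins add
    client factories via a decorator, and built clients are cached."""

    _factories = {}
    _cache = {}

    @classmethod
    def register(cls, name):
        """Decorator factory: @Clients.register("nova") registers a
        new client factory under the given name."""
        def decorator(factory):
            cls._factories[name] = factory
            return factory
        return decorator

    @classmethod
    def get(cls, name):
        # Build the client on first use, then serve the cached instance.
        if name not in cls._cache:
            cls._cache[name] = cls._factories[name]()
        return cls._cache[name]


# A plugin module can now add a client without touching the registry:
@Clients.register("fake")
def make_fake_client():
    return {"service": "fake"}

print(Clients.get("fake"))  # {'service': 'fake'}
```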
>
> For more details, take a look at the *Release notes for 0.0.4*
> <https://rally.readthedocs.org/en/latest/release_notes/latest.html>.
>
> Best regards,
> Mikhail Dubov
>
> Engineering OPS
> Mirantis, Inc.
> E-Mail: mdu...@mirantis.com
> Skype: msdubov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Boris Pavlovic
Mike,

Thank you for saying all that you said above.

Best regards,
Boris Pavlovic

On Tue, May 12, 2015 at 2:35 AM, Clint Byrum  wrote:

> Excerpts from Mike Bayer's message of 2015-05-11 15:44:30 -0700:
> >
> > On 5/11/15 5:25 PM, Robert Collins wrote:
> > >
> > > Details: Skip over this bit if you know it all already.
> > >
> > > The GIL plays a big factor here: if you want to scale the amount of
> > > CPU available to a Python service, you have two routes:
> > > A) move work to a different process through some RPC - be that DB's
> > > using SQL, other services using oslo.messaging or HTTP - whatever.
> > > B) use C extensions to perform work in threads - e.g. openssl context
> > > processing.
> > >
> > > To increase concurrency you can use threads, eventlet, asyncio,
> > > twisted etc - because within a single process *all* Python bytecode
> > > execution happens inside the GIL lock, so you get at most one CPU for
> > > a CPU bound workload. For an IO bound workload, you can fit more work
> > > in by context switching within that one CPU capacity. And - the GIL is
> > > a poor scheduler, so at the limit - an IO bound workload where the IO
> > > backend has more capacity than we have CPU to consume it within our
> > > process, you will run into priority inversion and other problems.
> > > [This varies by Python release too].
> > >
> > > request_duration = time_in_cpu + time_blocked
> > > request_cpu_utilisation = time_in_cpu/request_duration
> > > cpu_utilisation = concurrency * request_cpu_utilisation
> > >
> > > Assuming that we don't want any one process to spend a lot of time at
> > > 100% - to avoid such at-the-limit issues, lets pick say 80%
> > > utilisation, or a safety factor of 0.2. If a single request consumes
> > > 50% of its duration waiting on IO, and 50% of its duration executing
> > > bytecode, we can only run one such request concurrently without
> > > hitting 100% utilisations. (2*0.5 CPU == 1). For a request that spends
> > > 75% of its duration waiting on IO and 25% on CPU, we can run 3 such
> > > requests concurrently without exceeding our target of 80% utilisation:
> > > (3*0.25=0.75).
> > >
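The back-of-the-envelope numbers in the quoted paragraph can be reproduced
with a short sketch (`max_concurrency` is an illustrative helper, not
anything from the thread):

```python
def max_concurrency(cpu_fraction, target_utilisation=0.8):
    """Largest whole number of concurrent requests whose combined CPU
    demand (concurrency * request_cpu_utilisation) stays at or below
    the utilisation target."""
    return int(target_utilisation // cpu_fraction)

# 50% CPU / 50% IO: only 1 request fits under an 80% target,
# since 2 * 0.5 would already be 100%.
print(max_concurrency(0.5))   # 1
# 25% CPU / 75% IO: 3 requests fit (3 * 0.25 = 0.75 <= 0.8).
print(max_concurrency(0.25))  # 3
```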
> > > What we have today in our standard architecture for OpenStack is
> > > optimised for IO bound workloads: waiting on the
> > > network/subprocesses/disk/libvirt etc. Running high numbers of
> > > eventlet handlers in a single process only works when the majority of
> > > the work being done by a handler is IO.
> >
> > Everything stated here is great, however in our situation there is one
> > unfortunate fact which renders it completely incorrect at the moment.
> > I'm still puzzled why we are getting into deep think sessions about the
> > vagaries of the GIL and async when there is essentially a full-on
> > red-alert performance blocker rendering all of this discussion useless,
> > so I must again remind us: what we have *today* in Openstack is *as
> > completely un-optimized as you can possibly be*.
> >
> > The most GIL-heavy nightmare CPU bound task you can imagine running on
> > 25 threads on a ten year old Pentium will run better than the Openstack
> > we have today, because we are running a C-based, non-eventlet patched DB
> > library within a single OS thread that happens to use eventlet, but the
> > use of eventlet is totally pointless because right now it blocks
> > completely on all database IO.   All production Openstack applications
> > today are fully serialized to only be able to emit a single query to the
> > database at a time; for each message sent, the entire application blocks
> > an order of magnitude more than it would under the GIL waiting for the
> > database library to send a message to MySQL, waiting for MySQL to send a
> > response including the full results, waiting for the database to unwrap
> > the response into Python structures, and finally back to the Python
> > space, where we can send another database message and block the entire
> > application and all greenlets while this single message proceeds.
> >
> > To share a link I've already shared about a dozen times here, here's
> > some tests under similar conditions which illustrate what that
> > concurrency looks like:
> >
> http://www.diamondtin.com/2014/sqlalchemy-gevent-mysql-python-drivers-comparison/
> .
> > MySQLdb takes *20 times longer* to handle the work of 100 sessions than
> > PyMySQL when it's inappropriately run unde
