Re: [openstack-dev] [Openstack-operators] Performance Team summit session results

2015-11-09 Thread Dina Belova
Mark,

Yes, sorry for not mentioning it here - it's 3 PM - 4 PM (15:00 - 16:00) UTC.

Cheers,
Dina

On Tue, Nov 10, 2015 at 2:51 AM, Mark Wagner  wrote:

>
> For clarification, this is 3-4 PM (15:00 - 16:00) UTC, correct?
>
> -mark

Re: [openstack-dev] [Openstack-operators] Performance Team summit session results

2015-11-09 Thread Mark Wagner

For clarification, this is 3-4 PM (15:00 - 16:00) UTC, correct?

-mark


Re: [openstack-dev] [Openstack-operators] Performance Team summit session results

2015-11-09 Thread Dina Belova
Folks,

according to the doodle, 3:00 - 4:00 PM UTC on Tuesdays (starting tomorrow)
works for everyone who voted. For US folks in the PST time zone it'll be
very early, though, due to the time zone change that happened in the US on
November 1st. I still hope to see you there on the *#openstack-performance*
channel :)

I've created initial wiki pages for the team and its meetings - please
feel free to add more items to the agenda.

See you tomorrow :)

Cheers,
Dina



Re: [openstack-dev] [Openstack-operators] Performance Team summit session results

2015-11-09 Thread Dina Belova
Matt,

thank you so much for covering points [1], [2] and [3] - I'll ping the
folks who wrote those lines directly and try to find out the answers.

Cheers,
Dina

On Fri, Oct 30, 2015 at 1:42 AM, Matt Riedemann 
wrote:

>
>
> On 10/29/2015 10:55 AM, Matt Riedemann wrote:
>
>>
>>
>> On 10/29/2015 9:30 AM, Dina Belova wrote:
>>
>>> Hey folks!
>>>
>>> On Tuesday we had a great summit session about the performance team
>>> kick-off, and yesterday's LDT session was great as well; I'm really glad
>>> to see how important the OpenStack performance topic is for all of us. A
>>> 40-minute session surely was not enough to analyse everyone's feedback
>>> and the bottlenecks people usually see, so I'll try to summarise what was
>>> discussed and the next steps in this email.
>>>
>>> Performance team kick-off session
>>> (
>>> https://etherpad.openstack.org/p/mitaka-cross-project-performance-team-kick-off
>>> )
>>>
>>> can be briefly summarised with the following points:
>>>
>>>   * IBM, Intel, HP, Mirantis, Rackspace, Red Hat, Yahoo! and others were
>>> taking part in the session
>>>   * Various tools are currently used for OpenStack benchmarking and
>>> profiling (a minimal Rally task sketch follows after this list):
>>>   o Rally (IBM, HP, Mirantis, Yahoo!)
>>>   o Shaker (Mirantis, merging its functionality to Rally right now)
>>>   o Gatling (Rackspace)
>>>   o Zipkin (Yahoo!)
>>>   o JMeter (Yandex)
>>>   o and others…
>>>   * Various issues have been seen while operating OpenStack clouds (the
>>> full list can be found here -
>>> https://etherpad.openstack.org/p/openstack-performance-issues). The most
>>> mentioned issues were the following:
>>>   o performance of DB-related layers (the DB itself and oslo.db) -
>>> there are about 7 DB abstraction layers in Nova; the performance of
>>> nova-conductor was mentioned several times
>>>   o performance of MQ-related layers (the MQ itself and oslo.messaging)
>>>   * Different companies are using different standards for performance
>>> benchmarking (both control plane and data plane testing)
>>>   * Based on the comments, the most desired outputs from the team are:
>>>   o agreeing on a “performance testing standard”, including answers
>>> to the following questions:
>>>   + what tools need to be used for OpenStack performance
>>> benchmarking?
>>>   + what benchmarking metrics need to be covered? what would we
>>> like to compare?
>>>   + what scenarios need to be covered?
>>>   + how can we compare performance of different cloud
>>> deployments?
>>>   + what performance deployment patterns can be used for various
>>> workloads?
>>>   o sharing test plans and performing benchmarking tests
>>>   o creating methodologies and documentation about the best OpenStack
>>> deployment and performance testing practices
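
For concreteness, a minimal sketch of the kind of Rally control-plane
benchmark task mentioned in the list above. The scenario name is a stock
Rally scenario; the flavor and image names are assumptions for a
devstack-style cloud, not details from the session:

    # Build a Rally task file for a standard boot-and-delete scenario.
    import json

    task = {
        "NovaServers.boot_and_delete_server": [{
            "args": {
                "flavor": {"name": "m1.tiny"},                 # assumed flavor
                "image": {"name": "cirros-0.3.4-x86_64-uec"},  # assumed image
            },
            "runner": {"type": "constant", "times": 10, "concurrency": 2},
            "context": {"users": {"tenants": 1, "users_per_tenant": 1}},
        }]
    }

    with open("boot_and_delete.json", "w") as f:
        json.dump(task, f, indent=4)

    # Then, against a deployment already registered with Rally:
    #   rally task start boot_and_delete.json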
>>>
>>>
>>> We’re going to cover all these topics further. First of all, an IRC
>>> channel for the discussions has been created: *#openstack-performance*.
>>> We’re going to have a weekly meeting on that channel to track current
>>> progress; the doodle for voting on a time slot can be found here:
>>> http://doodle.com/poll/wv6qt8eqtc3mdkuz#table
>>> (I was brave enough not to include time slots that overlap with some of
>>> my really hard-to-move activities :))
>>>
>>> Let’s use next week for voting and hold the first IRC meeting in our
>>> channel the week after next. We can start our further discussions by
>>> defining the terms “performance” and “performance testing” and analysing
>>> benchmarking tools.
>>>
>>> Cheers,
>>> Dina
>>>
>>>
>>>
>>>
>>>
>> Thanks for writing this up, it's great to see people getting together
>> and sharing info on performance issues and trying to pinpoint the big
>> ones.
>>
>> I poked through the performance issues etherpad and was wondering how
>> many people with DB issues, particularly for nova-conductor, are using a
>> level of oslo.db that's new enough to be using pymysql rather than
>> mysql-python, because from what I remember there were eventlet issues
>> without pymysql. pymysql support was added in oslo.db 1.12.0 [1].
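
As a hedged illustration of the driver switch Matt describes (hostnames
and credentials below are invented, not from the thread): with SQLAlchemy
the driver is selected by the connection URL's scheme, so moving to
pymysql is a configuration change:

    # Illustrative only - 'db-host' and the credentials are made up.
    from sqlalchemy import create_engine

    # mysql:// selects mysql-python (MySQLdb), a C extension; eventlet
    # cannot monkey-patch C code, so every query blocks the whole hub.
    engine_blocking = create_engine("mysql://nova:secret@db-host/nova")

    # mysql+pymysql:// selects PyMySQL, which is pure Python, so eventlet
    # can yield around its socket I/O. With oslo.db >= 1.12.0 the same
    # switch in nova.conf would be:
    #   [database]
    #   connection = mysql+pymysql://nova:secret@db-host/nova
    engine_green = create_engine("mysql+pymysql://nova:secret@db-host/nova")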
>>
>> The nova-conductor workers' CPU usage is also a known issue in the
>> large ops gate job [2], but I'm not aware of anyone spending the time
>> drilling into what exactly is causing a lot of that overhead and whether
>> any of it is abnormal.
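
One way to start drilling in is to measure the bare MQ round-trip cost in
isolation and compare it against observed conductor call latencies. A
minimal sketch, assuming a local RabbitMQ and 2015-era oslo.messaging
APIs - a standalone probe, not Nova's actual conductor code:

    # Times bare oslo.messaging RPC round-trips to get a baseline MQ cost.
    import threading
    import time

    from oslo_config import cfg
    import oslo_messaging as messaging

    class Echo(object):
        # Trivial RPC endpoint that just returns its argument.
        def echo(self, ctxt, msg):
            return msg

    url = "rabbit://guest:guest@localhost:5672/"  # assumed local broker
    transport = messaging.get_transport(cfg.CONF, url)
    target = messaging.Target(topic="perf_probe", server="probe1")

    # Run the RPC server in a background thread so the client below can
    # drive it from the same script.
    server = messaging.get_rpc_server(transport, target, [Echo()],
                                      executor="blocking")
    t = threading.Thread(target=server.start)
    t.daemon = True
    t.start()
    time.sleep(1)  # crude wait for the server to subscribe

    client = messaging.RPCClient(transport, target)
    n = 100
    begin = time.time()
    for _ in range(n):
        client.call({}, "echo", msg="ping")
    print("avg RPC round-trip: %.1f ms" % ((time.time() - begin) / n * 1000))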
>>
>> Finally, wrt the DB, I'd also be interested to know if Rackspace, or
>> anyone else, is still running with the direct-to-SQL stuff that comstud
>> wrote for nova [3] and whether that still shows significant performance
>> improvements over using the SQLAlchemy ORM. Not to open that can of