Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-28 Thread Denis Makogon
On Tue, Jan 27, 2015 at 10:26 PM, Gordon Sim g...@redhat.com wrote:

 On 01/27/2015 06:31 PM, Doug Hellmann wrote:

 On Tue, Jan 27, 2015, at 12:28 PM, Denis Makogon wrote:

 I'd like to build a tool that would be able to profile messaging over
 various deployments. This tool would give me the ability to compare
 results of performance testing produced by native tools and an
 oslo.messaging-based tool; eventually it would lead us into digging into
 the code and trying to figure out where bad things are happening (that's
 the actual place where we would need to profile messaging code). Correct
 me if I'm wrong.


 It would be interesting to have recommendations for deployment of rabbit
 or qpid based on performance testing with oslo.messaging. It would also
 be interesting to have recommendations for changes to the implementation
 of oslo.messaging based on performance testing. I'm not sure you want to
 do full-stack testing for the latter, though.

 Either way, I think you would be able to start the testing without any
 changes in oslo.messaging.


 I agree. I think the first step is to define what to measure and then
 construct an application using oslo.messaging that allows the data of
 interest to be captured using different drivers and indeed different
 configurations of a given driver.

 I wrote a very simple test application to test one aspect that I felt was
 important, namely the scalability of the RPC mechanism as you increase the
 number of clients and servers involved. The code I used is
 https://github.com/grs/ombt; it's probably stale at the moment, I only
 link to it as an example of the approach.

 Using that test code I was then able to compare performance in this one
 aspect across drivers (the 'rabbit', 'qpid' and new amqp 1.0 based drivers
 - I wanted to try zmq, but couldn't figure out how to get it working at
 the time), and for different deployment options using a given driver
 (amqp 1.0 using qpidd or the qpid dispatch router, either standalone or
 with multiple connected routers).

 There are of course several other aspects that I think would be important
 to explore: notifications, more specific variations in the RPC 'topology'
 (i.e. number of clients on a given server, number of servers in a single
 group, etc.), and a better tool (or set of tools) would allow all of these
 to be explored.

 From my experimentation, I believe the biggest differences in scalability
 are going to come not from optimising the code in oslo.messaging so much
 as from choosing different patterns for communication. Those choices may
 of course be constrained by other aspects as well, notably the approach to
 reliability.



After a couple of internal discussions and hours of investigation, I think
I've found the most applicable solution: one that will support the
performance testing approach and will eventually yield recommendations for
messaging driver configuration and AMQP service deployment.

The solution I've been talking about is already pretty well known across
OpenStack components - Rally and its scenarios.
Why would it be the best option? Rally scenarios would not touch the
messaging core, and scenarios are gate-able.
Even if we're talking about internal testing, scenarios are very useful in
this case, since they can be tuned/configured to take environment needs
into account.

Doug, Gordon, what do you think about bringing scenarios into messaging?
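
[Editorial note: as a rough illustration of what "bringing scenarios into
messaging" could look like, here is a hypothetical sketch. The Rally plugin
interface shown (module path, base class, decorator) is an assumption based
on Rally scenario plugins of that era, not a confirmed API; the 'echo'
server and topic name are likewise illustrative.]

    # Hypothetical Rally-style scenario timing oslo.messaging RPC round
    # trips. The rally import path and decorator below are assumptions.
    import oslo_messaging as messaging
    from oslo_config import cfg
    from rally.benchmark.scenarios import base  # assumed module path

    class MessagingScenario(base.Scenario):

        @base.scenario()  # assumed decorator
        def rpc_call_round_trip(self, transport_url, n_calls=100):
            transport = messaging.get_transport(cfg.CONF, url=transport_url)
            client = messaging.RPCClient(
                transport, messaging.Target(topic='rally_perf'))
            # Assumes a test RPC server exposing an 'echo' method is
            # already listening on the 'rally_perf' topic.
            for _ in range(n_calls):
                client.call({}, 'echo', payload='x' * 1024)

Rally would then time each scenario iteration itself, so the scenario body
only needs to drive the messaging traffic.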




Kind regards,
Denis M.


Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-28 Thread Flavio Percoco

On 28/01/15 10:23 +0200, Denis Makogon wrote:



On Tue, Jan 27, 2015 at 10:26 PM, Gordon Sim g...@redhat.com wrote:

   [...]




After a couple of internal discussions and hours of investigation, I think
I've found the most applicable solution: one that will support the
performance testing approach and will eventually yield recommendations for
messaging driver configuration and AMQP service deployment.

The solution I've been talking about is already pretty well known across
OpenStack components - Rally and its scenarios.
Why would it be the best option? Rally scenarios would not touch the
messaging core, and scenarios are gate-able.
Even if we're talking about internal testing, scenarios are very useful in
this case, since they can be tuned/configured to take environment needs
into account.

Doug, Gordon, what do you think about bringing scenarios into messaging? 


I personally wouldn't mind having them but I'd like us to first
discuss what kind of scenarios we want to test.

I'm assuming these scenarios would be pure oslo.messaging scenarios
and they won't require any of the OpenStack services. Therefore, I
guess these scenarios would test things like performance with many
consumers, performance with several (a)synchronous calls, etc. What
performance means in this context will have to be discussed as well.

In addition to the above, it'd be really interesting if we could have
tests for things like reconnect delays, which I think is doable with
Rally. Am I right?

Cheers,
Flavio






--
@flaper87
Flavio Percoco



Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-28 Thread Denis Makogon
On Wed, Jan 28, 2015 at 11:39 AM, Flavio Percoco fla...@redhat.com wrote:

 On 28/01/15 10:23 +0200, Denis Makogon wrote:



 [...]


 I personally wouldn't mind having them but I'd like us to first
 discuss what kind of scenarios we want to test.

 I'm assuming these scenarios would be pure oslo.messaging scenarios
 and they won't require any of the OpenStack services. Therefore, I
 guess these scenarios would test things like performance with many
 consumers, performance with several (a)synchronous calls, etc. What
 performance means in this context will have to be discussed as well.


Correct, oslo.messaging scenarios would expect to have only an AMQP service
and nothing else.
Yes, that's what I've been thinking about. Also, I'd like to share a doc
that I've found, see [1].
As I see it, it would be more than useful to enable the following scenarios
(a rough sketch of the first one follows below):

   - Single multi-thread publisher (RPC client) against a single
     multi-thread consumer
      - using RPC cast/call methods, measure the time between request and
        response.
   - Multiple multi-thread publishers against a single multi-thread consumer
      - using RPC cast/call methods, measure the time between requests and
        responses for multiple publishers.
   - Multiple multi-thread 
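
[Editorial note: a minimal plain-Python sketch of the scenarios above,
written directly against oslo.messaging rather than as a Rally plugin. The
broker URL, topic name, and 'echo' method are assumptions; it presumes a
test RPC server exposing 'echo' is already listening on the topic.]

    # Multiple multi-threaded publishers timing call() round trips.
    # One RPCClient per thread to avoid sharing client state.
    import threading
    import time

    import oslo_messaging as messaging
    from oslo_config import cfg

    URL = 'rabbit://guest:guest@localhost:5672/'

    def publisher(n_calls, results):
        transport = messaging.get_transport(cfg.CONF, url=URL)
        client = messaging.RPCClient(
            transport, messaging.Target(topic='perf_test'))
        for _ in range(n_calls):
            start = time.time()
            client.call({}, 'echo', payload='x' * 1024)  # request/response
            results.append(time.time() - start)

    results = []
    threads = [threading.Thread(target=publisher, args=(100, results))
               for _ in range(8)]  # 8 publishers, 100 calls each
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print('avg round trip: %.3f ms' % (1000 * sum(results) / len(results)))

Dropping the thread count to one gives the single-publisher variant of the
same scenario.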

Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-28 Thread Doug Hellmann


On Wed, Jan 28, 2015, at 03:23 AM, Denis Makogon wrote:
 On Tue, Jan 27, 2015 at 10:26 PM, Gordon Sim g...@redhat.com wrote:
 
  [...]
 
 
 
 After a couple of internal discussions and hours of investigation, I think
 I've found the most applicable solution: one that will support the
 performance testing approach and will eventually yield recommendations for
 messaging driver configuration and AMQP service deployment.

 The solution I've been talking about is already pretty well known across
 OpenStack components - Rally and its scenarios.
 Why would it be the best option? Rally scenarios would not touch the
 messaging core, and scenarios are gate-able.
 Even if we're talking about internal testing, scenarios are very useful in
 this case, since they can be tuned/configured to take environment needs
 into account.

 Doug, Gordon, what do you think about bringing scenarios into messaging?

I think I need more detail about what you mean by that.

Doug

 


Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-27 Thread Doug Hellmann


On Tue, Jan 27, 2015, at 12:28 PM, Denis Makogon wrote:
 [...]

Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-27 Thread Gordon Sim

On 01/27/2015 06:31 PM, Doug Hellmann wrote:

On Tue, Jan 27, 2015, at 12:28 PM, Denis Makogon wrote:

I'd like to build a tool that would be able to profile messaging over
various deployments. This tool would give me the ability to compare
results of performance testing produced by native tools and an
oslo.messaging-based tool; eventually it would lead us into digging into
the code and trying to figure out where bad things are happening (that's
the actual place where we would need to profile messaging code). Correct
me if I'm wrong.


It would be interesting to have recommendations for deployment of rabbit
or qpid based on performance testing with oslo.messaging. It would also
be interesting to have recommendations for changes to the implementation
of oslo.messaging based on performance testing. I'm not sure you want to
do full-stack testing for the latter, though.

Either way, I think you would be able to start the testing without any
changes in oslo.messaging.


I agree. I think the first step is to define what to measure and then 
construct an application using oslo.messaging that allows the data of 
interest to be captured using different drivers and indeed different 
configurations of a given driver.


I wrote a very simple test application to test one aspect that I felt 
was important, namely the scalability of the RPC mechanism as you 
increase the number of clients and servers involved. The code I used is 
https://github.com/grs/ombt; it's probably stale at the moment, I only 
link to it as an example of the approach.
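
[Editorial note: a minimal sketch in the same spirit as ombt - several
servers joining one topic as a group, with a client timing call() round
trips. It assumes a local RabbitMQ broker; topic, payload size, and counts
are illustrative.]

    # N servers share one topic (a group), so 'call' requests are spread
    # across the group. Assumes rabbit://localhost is reachable.
    import threading
    import time

    import oslo_messaging as messaging
    from oslo_config import cfg

    transport = messaging.get_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')

    class Echo(object):
        def echo(self, ctxt, payload):
            return payload

    for i in range(3):
        target = messaging.Target(topic='perf_test', server='server-%d' % i)
        server = messaging.get_rpc_server(transport, target, [Echo()],
                                          executor='blocking')
        t = threading.Thread(target=server.start)  # serve in background
        t.daemon = True
        t.start()

    client = messaging.RPCClient(transport,
                                 messaging.Target(topic='perf_test'))
    samples = []
    for _ in range(1000):
        t0 = time.time()
        client.call({}, 'echo', payload='x' * 64)
        samples.append(time.time() - t0)
    samples.sort()
    print('median round trip: %.3f ms' % (1000 * samples[len(samples) // 2]))

Growing the number of servers, clients, or both is then just a loop bound,
which is essentially the scalability axis described here.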


Using that test code I was then able to compare performance in this one 
aspect across drivers (the 'rabbit', 'qpid' and new amqp 1.0 based 
drivers - I wanted to try zmq, but couldn't figure out how to get it 
working at the time), and for different deployment options using a given 
driver (amqp 1.0 using qpidd or the qpid dispatch router, either 
standalone or with multiple connected routers).


There are of course several other aspects that I think would be 
important to explore: notifications, more specific variations in the RPC 
'topology' (i.e. number of clients on a given server, number of servers 
in a single group, etc.), and a better tool (or set of tools) would allow 
all of these to be explored.


From my experimentation, I believe the biggest differences in 
scalability are going to come not from optimising the code in 
oslo.messaging so much as from choosing different patterns for 
communication. Those choices may of course be constrained by other 
aspects as well, notably the approach to reliability.




Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-27 Thread Denis Makogon
On Thu, Jan 15, 2015 at 8:56 PM, Doug Hellmann d...@doughellmann.com
wrote:


  On Jan 15, 2015, at 1:30 PM, Denis Makogon dmako...@mirantis.com
 wrote:
 
  Good day to All,
 
  The question that I’d like to raise here is not a simple one, so I’d
 like to involve as many readers as I can. I’d like to speak about
 oslo.messaging performance testing. As a community we’ve put lots of
 effort into making oslo.messaging’s widely used drivers as stable as
 possible. Stability is a good thing, but is it enough for saying “works
 well”? I’d say that it’s not.
  Since oslo.messaging uses a driver-based messaging workflow, it makes
 sense to dig into each driver and collect all required/possible
 performance metrics.
  First of all, it makes sense to figure out how to perform performance
 testing. The first thing that came to my mind is to simulate a high load
 on one of the corresponding drivers. Here comes the question of how it
 can be accomplished within the available oslo.messaging tools - a high
 load on any driver can be produced by an application that:
 • can populate multiple emitters (RPC clients) and consumers (RPC
  servers).
 • can force clients to send a pre-defined number of messages of any
  length.

 That makes sense.

  Another thing is why we need such a thing. Profiling and performance
 testing can improve the way in which our drivers are implemented. It can
 show us the actual “bottlenecks” in the messaging process, in general. In
 some cases it makes sense to figure out where the problem lies - whether
 AMQP causes the messaging problems or a certain driver that speaks to
 AMQP fails.
  The next thing I want to discuss is the architecture of
 profiling/performance testing. As I can see it, it seems a “good” way
 would be to add profiling code to each driver. If there are any
 objections or a better solution, please bring them to light.

 What sort of extra profiling code do you anticipate needing?


As I can foresee (taking into account [1]), a couple of decorators,
possibly one that handles the metering process. The biggest part of the
code will be the high-load tool that'll be part of messaging. Another open
question is adding certain dependencies to the project.
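
[Editorial note: a minimal sketch of the kind of metering decorator being
described - hypothetical, not an actual oslo.messaging hook; the metrics
sink is a placeholder.]

    # Hypothetical metering decorator: wraps a driver method and records
    # its wall-clock duration.
    import functools
    import time

    def metered(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed = time.time() - start
                # Replace with a real metrics sink (log, statsd, ...).
                print('%s took %.6f s' % (fn.__name__, elapsed))
        return wrapper

Applied to a driver entry point (say, a send method), this would yield the
per-call, time-based metrics discussed below.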


  Once we’d have a final design for profiling we would need to figure out
 tools for profiling. After searching over the web, I found a pretty
 interesting topic related to Python profiling [1]. After certain
 investigations, it makes sense to discuss the following profiling options
 (apply one or both):
 • Line-by-line timing and execution frequency with a profiler
  (there are possible pros and cons, but I would say the per-line
  statistics are more than welcome at the initial performance testing
  steps)
 • Memory/CPU consumption
  Metrics. The most useful metric for us is time - any time-based metric,
 since it is very useful to know at which step and/or by whom a
 delay/timeout was caused; that way, as said, we would be able to figure
 out whether AMQP or the driver fails to do what it was designed for.
  Before proposing a spec I’d like to figure out any other requirements,
 use cases and restrictions for messaging performance testing. Also, if
 there are any success stories of boosting Python performance - feel free
 to share them.

 The metrics to measure depend on the goal. Do we think the messaging code
 is using too much memory? Is it too slow? Or is there something else
 causing concern?

 It does make sense to have profiling for cases when trying to upscale a
cluster, and it'll be a good thing to have the ability to figure out
whether a scaled AMQP service has its best configuration (I guess here
comes the question about doing performance testing using well-known tools),
and the most interesting question is how the messaging driver decreases (or
leaves untouched) throughput between RPC client and server. These metering
results can be compared to those from tools that were designed for
performance testing. And that's why having profiling/performance testing
using a high-load technique would be a good step forward.


 
 
 
  [1] http://www.huyng.com/posts/python-performance-analysis/
 
  Kind regards,
  Denis Makogon
  IRC: denis_makogon
  dmako...@mirantis.com
 
 




Kind regards,
Denis M.

Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-27 Thread Denis Makogon
On Tue, Jan 27, 2015 at 7:15 PM, Doug Hellmann d...@doughellmann.com
wrote:



 On Tue, Jan 27, 2015, at 10:56 AM, Denis Makogon wrote:
  [...]

 That makes it sound like you want to build performance testing tools for
 the infrastructure oslo.messaging is using, and not for oslo.messaging
 itself. Is that right?

 I'd like to build a tool that would be able to profile messaging over
various deployments. This tool would give me the ability to compare
results of performance testing produced by native tools and an
oslo.messaging-based tool; eventually it would lead us into digging into
the code and trying to figure out where bad things are happening (that's
the actual place where we would need to profile messaging code). Correct
me if I'm wrong.


Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-27 Thread Doug Hellmann


On Tue, Jan 27, 2015, at 10:56 AM, Denis Makogon wrote:
 [...]
 
  The metrics to measure depend on the goal. Do we think the messaging code
  is using too much memory? Is it too slow? Or is there something else
  causing concern?
 
  It does make sense to have profiling for cases when trying to upscale a
 cluster, and it'll be a good thing to have the ability to figure out
 whether a scaled AMQP service has its best configuration (I guess here
 comes the question about doing performance testing using well-known
 tools), and the most interesting question is how the messaging driver
 decreases (or leaves untouched) throughput between RPC client and server.
 These metering results can be compared to those from tools that were
 designed for performance testing. And that's why having
 profiling/performance testing using a high-load technique would be a good
 step forward.

That makes it sound like you want to build performance testing tools for
the infrastructure oslo.messaging is using, and not for oslo.messaging
itself. Is that right?

Doug

 
 
  
  
  

Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-15 Thread Doug Hellmann

 On Jan 15, 2015, at 1:30 PM, Denis Makogon dmako...@mirantis.com wrote:
 
 Good day to All,
 
 The question that I’d like to raise here is not a simple one, so I’d like to 
 involve as many readers as I can. I’d like to speak about oslo.messaging 
 performance testing. As a community we’ve put lots of effort into making 
 oslo.messaging’s widely used drivers as stable as possible. Stability is a 
 good thing, but is it enough for saying “works well”? I’d say that it’s not.
 Since oslo.messaging uses a driver-based messaging workflow, it makes sense 
 to dig into each driver and collect all required/possible performance metrics.
 First of all, it makes sense to figure out how to perform performance 
 testing. The first thing that came to my mind is to simulate a high load on 
 one of the corresponding drivers. Here comes the question of how it can be 
 accomplished within the available oslo.messaging tools - a high load on any 
 driver can be produced by an application that:
   • can populate multiple emitters (RPC clients) and consumers (RPC 
 servers).
   • can force clients to send a pre-defined number of messages of any 
 length.

That makes sense.

 Another thing is why we need such a thing. Profiling and performance testing 
 can improve the way in which our drivers are implemented. It can show us 
 the actual “bottlenecks” in the messaging process, in general. In some cases 
 it makes sense to figure out where the problem lies - whether AMQP causes 
 the messaging problems or a certain driver that speaks to AMQP fails.
 The next thing I want to discuss is the architecture of profiling/performance 
 testing. As I can see it, it seems a “good” way would be to add profiling 
 code to each driver. If there are any objections or a better solution, 
 please bring them to light.

What sort of extra profiling code do you anticipate needing?

 Once we’d have a final design for profiling we would need to figure out tools 
 for profiling. After searching over the web, I found a pretty interesting 
 topic related to Python profiling [1]. After certain investigations, it makes 
 sense to discuss the following profiling options (apply one or both):
   • Line-by-line timing and execution frequency with a profiler (there 
 are possible pros and cons, but I would say the per-line statistics are more 
 than welcome at the initial performance testing steps)
   • Memory/CPU consumption
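
[Editorial note: for the first option, the article in [1] covers tools such
as line_profiler; as a minimal stdlib illustration, cProfile gives
per-function timing and call counts for a messaging workload. The workload
function below is a placeholder.]

    # Per-function timing and call counts via the stdlib profiler
    # (line_profiler would give the per-line statistics mentioned above).
    import cProfile
    import pstats

    profiler = cProfile.Profile()
    profiler.enable()
    run_messaging_workload()  # hypothetical workload under test
    profiler.disable()
    pstats.Stats(profiler).sort_stats('cumulative').print_stats(20)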
 Metrics. The most useful metric for us is time - any time-based metric, since 
 it is very useful to know at which step and/or by whom a delay/timeout was 
 caused; that way, as said, we would be able to figure out whether AMQP or 
 the driver fails to do what it was designed for.
 Before proposing a spec I’d like to figure out any other requirements, use 
 cases and restrictions for messaging performance testing. Also, if there are 
 any success stories of boosting Python performance - feel free to share them.

The metrics to measure depend on the goal. Do we think the messaging code is 
using too much memory? Is it too slow? Or is there something else causing 
concern?

 
 
 
 [1] http://www.huyng.com/posts/python-performance-analysis/
 
 Kind regards,
 Denis Makogon
 IRC: denis_makogon
 dmako...@mirantis.com
 

