Re: [openstack-dev] [Ceilometer] Performance tests of ceilometer-collector and ceilometer-api with different backends
Hi Ilya,

Interesting, thanks for sharing. So the quick conclusion from your numbers seems to indicate that MongoDB is more efficient for both reading and writing, except for two cases of retrieving data (meters and resources listing).

However, for the read operations it should be confirmed (or made more precise) where the time is really spent. It would be interesting to compute the distribution of time spent in each layer (backend, API, CLI), similarly to what you did for the collector, by custom logging (or by instrumentation).

To add more use cases (and to be more relevant), it would be good to use queries executed by billing systems or by the alarm evaluator, i.e. filtering a limited subset of samples (by resource and/or user and/or tenant), to see the numbers without retrieving tens of thousands of samples.

By the way, other indicators would help to give a good picture. I see for now: error rate, queue length (RabbitMQ), samples|meters|resources returned by API calls, missing samples (after the populating phase), and some system metrics as well.

What were the characteristics of the servers used for these load tests?

My two cents.

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
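The per-layer timing suggested above could be gathered with something as simple as a context manager around each layer's calls. A minimal sketch, assuming a standalone test harness (the `timed`/`report` helpers are hypothetical, not Ceilometer code):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Hypothetical helpers (not Ceilometer code): accumulate wall-clock time
# per layer so the backend / API / CLI split can be compared after a run.
timings = defaultdict(list)

@contextmanager
def timed(layer):
    """Record how long the wrapped block took, under the given layer name."""
    start = time.monotonic()
    try:
        yield
    finally:
        timings[layer].append(time.monotonic() - start)

def report():
    """Average time spent per layer, in seconds."""
    return {layer: sum(vals) / len(vals) for layer, vals in timings.items()}
```

Wrapping the backend query, the API handler, and the CLI round-trip in `timed(...)` blocks would then let the per-layer averages be compared directly.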
Re: [openstack-dev] [Ceilometer] Performance tests of ceilometer-collector and ceilometer-api with different backends
Hi, Swann! Thanks for your feedback :)

On Wed, Apr 23, 2014 at 2:33 PM, Swann Croiset swan...@gmail.com wrote:

> Interesting, thanks for sharing. So the quick conclusion from your numbers seems to indicate that MongoDB is more efficient for both reading and writing, except for two cases of retrieving data (meters and resources listing).

That is not such an indisputable fact. The performance drop may be caused by running the cluster on VMs. For future tests we have already added a standalone MySQL and a standalone HBase backend, and we will also deploy a MongoDB cluster on VMs in the near future.

> However, for the read operations it should be confirmed (or made more precise) where the time is really spent. It would be interesting to compute the distribution of time spent in each layer (backend, API, CLI), similarly to what you did for the collector, by custom logging (or by instrumentation). To add more use cases (and to be more relevant), it would be good to use queries executed by billing systems or by the alarm evaluator, i.e. filtering a limited subset of samples (by resource and/or user and/or tenant), to see the numbers without retrieving tens of thousands of samples.

Those are good ideas. I'll add them to the tests and show the results as soon as possible.

> By the way, other indicators would help to give a good picture. I see for now: error rate, queue length (RabbitMQ), samples|meters|resources returned by API calls, missing samples (after the populating phase), and some system metrics as well.

At present we are measuring the time that messages spend waiting in the RabbitMQ queue; this metric carries the same information as queue length. We also log backend errors, but not as many errors happen in the tests as we might have expected.

> What were the characteristics of the servers used for these load tests?

A controller with 16 GB RAM and 8 cores, plus 3 VMs with 8 GB RAM and 8 cores each (for HBase).

Best regards, Tyaptin Ilia, Intern Software Engineer.
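The queue-wait metric described above can be approximated by stamping each message at publish time and comparing against the consume time. A minimal illustrative sketch (these helpers are assumptions for the example, not Ceilometer code):

```python
import time

# Illustrative sketch (not Ceilometer code): stamp each message with its
# publish time so the consumer can compute how long it sat in the queue.
def make_message(payload):
    return {"payload": payload, "published_at": time.time()}

def queue_wait_seconds(message, consumed_at=None):
    """Seconds the message spent queued; defaults to 'consumed right now'."""
    if consumed_at is None:
        consumed_at = time.time()
    return consumed_at - message["published_at"]
```

Averaging this wait over a run rises and falls with queue length, which is why the two metrics carry the same information.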
Re: [openstack-dev] [Ceilometer] Performance tests of ceilometer-collector and ceilometer-api with different backends
Joe,

There are a number of problem reports on Ceilometer performance and some promising blueprints to address them. I'd suggest we re-run the performance tests once those are in place.

Having reference performance tests such as this is helpful for picking up cases where there are regression or scalability problems like the ones you raise (and production users see them too).

Tim

On 23 Apr 2014, at 18:51, Joe Gordon joe.gord...@gmail.com wrote:

> On Mon, Apr 21, 2014 at 2:10 PM, Ilya Tyaptin ityap...@mirantis.com wrote:
>
>> Hi team! In light of discussions about Ceilometer backends, we decided to test the performance of different storage backends with the collector and API services, because these services depend on backend availability. For the collector testing we do not use completely real data; we generate realistic-looking samples at a variable rate, send them to the collector, and measure the processing time of these messages. The test result is the time between receiving a message and recording it to the DB. For the API testing we only compare the time of requests to the API with different backends. We have prepared a document with a more detailed description of the test plan and the first results: https://docs.google.com/document/d/1ARpKiYW2WN94JloG0prNcLjMeom-ySVhe8fvjXG_uRU/edit?usp=sharing
>
> I am not sure if I read the 'Testing api' section correctly. Is that table in seconds? If so, a REST API that takes over two minutes (sample-list for HBase, meter-list in Mongo) doesn't sound very good.
Re: [openstack-dev] [Ceilometer] Performance tests of ceilometer-collector and ceilometer-api with different backends
On Wed, 2014-04-23 at 17:32 +0000, Tim Bell wrote:

> There are a number of problem reports on Ceilometer performance and some promising blueprints to address them. I'd suggest we re-run the performance tests once those are in place. Having reference performance tests such as this is helpful for picking up cases where there are regression or scalability problems like the ones you raise (and production users see them too).

++, and the SQL driver needs to be tested as well.

Best, jay
Re: [openstack-dev] [Ceilometer] Performance tests of ceilometer-collector and ceilometer-api with different backends
On Wed, Apr 23, 2014 at 10:32 AM, Tim Bell tim.b...@cern.ch wrote:

> Joe, there are a number of problem reports on Ceilometer performance and some promising blueprints to address them. I'd suggest we re-run the performance tests once those are in place. Having reference performance tests such as this is helpful for picking up cases where there are regression or scalability problems like the ones you raise (and production users see them too). Tim

Do you have any links to those blueprints? https://blueprints.launchpad.net/ceilometer/juno is pretty sparse.

> On 23 Apr 2014, at 18:51, Joe Gordon joe.gord...@gmail.com wrote:
>
>> On Mon, Apr 21, 2014 at 2:10 PM, Ilya Tyaptin ityap...@mirantis.com wrote:
>>
>>> Hi team! In light of discussions about Ceilometer backends, we decided to test the performance of different storage backends with the collector and API services, because these services depend on backend availability. For the collector testing we do not use completely real data; we generate realistic-looking samples at a variable rate, send them to the collector, and measure the processing time of these messages. The test result is the time between receiving a message and recording it to the DB. For the API testing we only compare the time of requests to the API with different backends. We have prepared a document with a more detailed description of the test plan and the first results: https://docs.google.com/document/d/1ARpKiYW2WN94JloG0prNcLjMeom-ySVhe8fvjXG_uRU/edit?usp=sharing
>>
>> I am not sure if I read the 'Testing api' section correctly. Is that table in seconds? If so, a REST API that takes over two minutes (sample-list for HBase, meter-list in Mongo) doesn't sound very good.
Re: [openstack-dev] [Ceilometer] Performance tests of ceilometer-collector and ceilometer-api with different backends
> Do you have any links to those blueprints? https://blueprints.launchpad.net/ceilometer/juno is pretty sparse.

We'll probably add targets closer to the summit (or post-summit). Blueprints of interest may be:

https://blueprints.launchpad.net/ceilometer/+spec/big-data-sql
https://blueprints.launchpad.net/ceilometer/+spec/tighten-model
https://blueprints.launchpad.net/ceilometer/+spec/bulk-message-handling

We're still prioritising design sessions, but it's safe to say this session will be there: http://summit.openstack.org/cfp/details/163

cheers, gordon chung openstack, ibm software standards
Re: [openstack-dev] [Ceilometer] Performance tests of ceilometer-collector and ceilometer-api with different backends
> I am not sure if I read the 'Testing api' section correctly. Is that table in seconds? If so, a REST API that takes over two minutes (sample-list for HBase, meter-list in Mongo) doesn't sound very good.

Tim, Joe, in the API tests the values are in seconds. These are known issues, and we will log and meter each step of the API's work (backend, ceilometer-api, CLI); the test results should help find the weak link.

> ++, and the SQL driver needs to be tested as well.

Jay, test results for MySQL will be available as soon as possible.

Best regards, Tyaptin Ilia, Intern Software Engineer.
[openstack-dev] [Ceilometer] Performance tests of ceilometer-collector and ceilometer-api with different backends
Hi team!

In light of discussions about Ceilometer backends, we decided to test the performance of different storage backends with the collector and API services, because these services depend on backend availability.

For the collector testing we do not use completely real data; we generate realistic-looking samples at a variable rate, send them to the collector, and measure the processing time of these messages. The test result is the time between receiving a message and recording it to the DB.

For the API testing we only compare the time of requests to the API with different backends.

We have prepared a document with a more detailed description of the test plan and the first results: https://docs.google.com/document/d/1ARpKiYW2WN94JloG0prNcLjMeom-ySVhe8fvjXG_uRU/edit?usp=sharing

Please add your cases and proposals for performance testing of the collector and the API to the document comments. You may also use the etherpad if that is more convenient: https://etherpad.openstack.org/p/performance_test_for_ceilometer_collector_and_api

---
Best regards, Tyaptin Ilia, Intern Software Engineer.
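For readers curious what "realistic-looking samples at a variable rate" could look like, here is a minimal sketch of such a generator. The field names loosely follow the shape of a Ceilometer sample, but the helpers themselves are hypothetical, not the actual test harness:

```python
import random
import time
import uuid

# Meters to draw from; a real run would use a wider, weighted set.
METERS = ["cpu_util", "disk.read.bytes", "network.incoming.bytes"]

def make_sample():
    """Build one realistic-looking sample with a creation timestamp."""
    return {
        "message_id": str(uuid.uuid4()),
        "counter_name": random.choice(METERS),
        "counter_volume": random.uniform(0, 100),
        "resource_id": str(uuid.uuid4()),
        "timestamp": time.time(),
    }

def generate_batches(rate_range=(10, 100)):
    """Yield batches whose size varies to simulate a variable sample rate."""
    while True:
        yield [make_sample() for _ in range(random.randint(*rate_range))]
```

Comparing each stored record's write time against the sample's `timestamp` would then give a rough receive-to-DB latency of the kind the tests report.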