Re: Ignite Benchmarking Rules

2017-09-22 Thread Konstantin Boudnik
Yes, we do. A team within the AWS org wanted to contribute to the project,
and they run a few machines for us. There's no money going back
and forth; we are "just" using some of the resources.

I guess the best way to get a few servers is to find people @Amazon
who'd be willing to make such a donation to the project.
--
  With regards,
Konstantin (Cos) Boudnik
2CAC 8312 4870 D885 8616  6115 220F 6980 1F27 E622

Disclaimer: Opinions expressed in this email are those of the author,
and do not necessarily represent the views of any company the author
might be affiliated with at the moment of writing.


Re: Ignite Benchmarking Rules

2017-09-15 Thread Dmitriy Setrakyan
Cos,

I think Apache BigTop is using servers provided by Amazon. Can you suggest
how the Ignite community can get a few servers from Amazon for benchmarking
as well?

D.


Re: Ignite Benchmarking Rules

2017-09-15 Thread Anton Vinogradov
Guys,

I fully agree that configured servers at Amazon are the best choice.

But when you need to check that your changes introduce no performance drop,
you can use your own PC or PCs for that.
All you need is to benchmark the already released version against the version
with your fix in the same environment.

So it seems we should have a couple of configuration recommendations:
- a reasonable one for a standalone PC
- a reasonable one for a cluster
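
For illustration, a naive local check could look like the sketch below. It is
only a sketch: it uses nothing but Ignite's public cache API, and the cache
name, payload size, and iteration counts are arbitrary choices, not a
prescribed benchmark (the project's real suite is the Yardstick-based one
under modules/yardstick).

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;

    /**
     * Naive single-node micro-benchmark: run it once against the released
     * Ignite JAR and once against the patched build, on the same machine,
     * and compare the printed timings.
     */
    public class LocalPutCheck {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                IgniteCache<Integer, byte[]> cache = ignite.getOrCreateCache("bench");

                byte[] payload = new byte[1024];

                // Warm up so JIT compilation settles before we measure.
                for (int i = 0; i < 100_000; i++)
                    cache.put(i % 10_000, payload);

                long start = System.nanoTime();

                for (int i = 0; i < 1_000_000; i++)
                    cache.put(i % 10_000, payload);

                System.out.println("1M puts: " +
                    (System.nanoTime() - start) / 1_000_000 + " ms");
            }
        }
    }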

On Fri, Sep 15, 2017 at 12:20 PM, Nikolay Izhikov 
wrote:

> Hello, Dmitriy.
>
> I think experienced members of community have specific number for
> benchmarking.
>
> Can we start from reference hardware configuration: Num of CPU, RAM and
> HDD(SDD?) configuration, network configs, etc.
>
> Can someone share that kind of knowledge - Which hardware is best for
> Ignite benchmarking?
>
> I found some numbers here - [1]. Is it well suited for Apache Ignite?
>
> [1] https://www.gridgain.com/resources/benchmarks/gridgain-vs-
> hazelcast-benchmarks
>
> 14.09.2017 23:27, Dmitriy Setrakyan пишет:
>
> Alexey, I completely agree. However, for the benchmarks to be useful, then
>> need to be run on the same hardware all the time. Apache Ignite does not
>> have servers sitting around, available to run the benchmarks.
>>
>> Would be nice to see how other projects address it. Can Amazon donate
>> servers for the Apache projects?
>>
>> D.
>>
>> On Thu, Sep 14, 2017 at 6:25 AM, Aleksei Zaitsev 
>> wrote:
>>
>> Hi, Igniters.
>>>
>>> Recently I’ve done some research in benchmarks for Ignite, and noticed
>>> that we don’t have any rules for running benchmarks and collecting result
>>> from them. Although sometimes we have tasks, which results need to be
>>> measured. I propose to formalize such things as:
>>>   * set of benchmarks,
>>>   * parameters of launching them,
>>>   * way of result collection and interpretation,
>>>   * Ignite cluster configuration.
>>>
>>> I don’t think that we need to run benchmarks before every merge into
>>> master, but in some cases it should be mandatory to compare new results
>>> with reference values to be sure that changes do not lead to the
>>> performance degradation.
>>>
>>> What do you think?
>>>
>>>
>>


Re: Ignite Benchmarking Rules

2017-09-15 Thread Nikolay Izhikov

Hello, Dmitriy.

I think experienced members of the community have specific numbers for
benchmarking.

Can we start with a reference hardware configuration: number of CPUs, RAM,
HDD (or SSD?) setup, network configuration, etc.?

Can someone share that kind of knowledge: which hardware is best for
Ignite benchmarking?

I found some numbers here [1]. Are they well suited for Apache Ignite?

[1] https://www.gridgain.com/resources/benchmarks/gridgain-vs-hazelcast-benchmarks
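
As a strawman, and only to make the question concrete, a reference
configuration entry could record fields like the following (the field list is
illustrative; the actual values would have to be agreed by the community):

    Reference benchmark environment (illustrative template):
      CPU:      vendor/model, sockets x cores, base frequency
      RAM:      total size and speed
      Storage:  HDD or SSD, model, filesystem
      Network:  NIC speed and topology between nodes
      OS/JVM:   distribution and kernel, JDK vendor/version, JVM flags
      Ignite:   version, node count, cache mode, backups, sync mode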


Re: Ignite Benchmarking Rules

2017-09-14 Thread Dmitriy Setrakyan
Alexey, I completely agree. However, for the benchmarks to be useful, they
need to be run on the same hardware every time. Apache Ignite does not
have servers sitting around, available to run the benchmarks.

It would be nice to see how other projects address this. Can Amazon donate
servers to Apache projects?

D.


Ignite Benchmarking Rules

2017-09-14 Thread Aleksei Zaitsev
Hi, Igniters.

Recently I’ve done some research into benchmarks for Ignite and noticed that we
don’t have any rules for running benchmarks and collecting results from them,
although we sometimes have tasks whose results need to be measured. I propose
to formalize such things as:
 * the set of benchmarks,
 * the parameters for launching them,
 * the way results are collected and interpreted,
 * the Ignite cluster configuration.

I don’t think we need to run benchmarks before every merge into master, but in
some cases it should be mandatory to compare new results with reference values,
to be sure that changes do not lead to performance degradation.
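
For instance, the launch parameters could be pinned down in a Yardstick-style
properties file, since Ignite already ships a Yardstick-based benchmark suite
under modules/yardstick. The fragment below is only an illustration; the flag
values are examples, not proposed defaults:

    # Illustrative benchmark.properties run line (Yardstick conventions):
    #   -nn  total nodes, -b backups, -w warmup seconds, -d duration seconds,
    #   -t   client threads, -sm cache write synchronization mode,
    #   -dn  benchmark driver class, -sn server node class, -ds result label.
    CONFIGS="\
    -cfg ${SCRIPT_DIR}/../config/ignite-localhost-config.xml \
    -nn 2 -b 1 -w 60 -d 300 -t 64 -sm PRIMARY_SYNC \
    -dn IgnitePutBenchmark -sn IgniteNode -ds atomic-put-1-backup"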

What do you think?