Hi, Igniters.

Recently I’ve done some research into benchmarks for Ignite and noticed that we 
don’t have any rules for running benchmarks or collecting results from them. 
At the same time, we sometimes have tasks whose results need to be measured. I 
propose that we formalize the following:
 * the set of benchmarks,
 * the parameters used to launch them,
 * the way results are collected and interpreted,
 * the Ignite cluster configuration (a rough sketch is given below).
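
To make the last point a bit more concrete, here is a rough sketch of what a 
pinned server-node configuration for benchmark runs could look like (plain Java 
here, but it could just as well be a Spring XML file). The class name, data 
region size and discovery addresses are placeholders, not a proposal for the 
actual baseline values:

import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

/**
 * Sketch of a fixed server-node configuration shared by all benchmark runs.
 * Every value below is a placeholder and would be part of the agreed baseline.
 */
public class BenchmarkNodeStartup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        cfg.setIgniteInstanceName("benchmark-node");

        // Fixed memory settings so that results are comparable between runs.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration()
            .setMaxSize(4L * 1024 * 1024 * 1024); // 4 GB default region (placeholder).
        cfg.setDataStorageConfiguration(storageCfg);

        // Static IP finder with a pinned list of benchmark hosts (placeholders).
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509"));

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);
        cfg.setDiscoverySpi(discoverySpi);

        Ignite ignite = Ignition.start(cfg);

        System.out.println("Benchmark node started: " + ignite.cluster().localNode().id());
    }
}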

I don’t think we need to run benchmarks before every merge into master, but for 
some changes it should be mandatory to compare the new results with the 
reference values, to make sure the changes do not lead to performance degradation.
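
The comparison itself can be as simple as the toy check below; the 5% tolerance 
and the throughput numbers are placeholders, and the actual thresholds would be 
part of the rules we agree on:

/**
 * Toy regression check: fail if a benchmark result is more than a given
 * tolerance below its stored reference value.
 */
public class RegressionCheck {
    static boolean isRegression(double referenceOpsPerSec, double currentOpsPerSec, double tolerance) {
        return currentOpsPerSec < referenceOpsPerSec * (1.0 - tolerance);
    }

    public static void main(String[] args) {
        double reference = 100_000; // stored reference throughput (placeholder).
        double current = 93_000;    // throughput from the new run (placeholder).

        if (isRegression(reference, current, 0.05))
            System.out.println("Performance degradation detected, review required.");
        else
            System.out.println("Result is within the accepted tolerance.");
    }
}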

What do you think?
