Fabien COELHO wrote:
> While testing it I had a funny pattern, something like:
>
> pgbench --random-seed=123 -M prepared -T 3 -P 1 -S
> 1.0: 600 tps
> 2.0: 600 tps
> 3.0: 600 tps
The output should include the random seed actually used, whether it was passed
with --random-seed, taken from an environment variable, or chosen by default,
so that the run can be repeated.
It means that you can't separate OS-caused from pgbench-ordering-caused
performance differences.
I'm not objecting to adding an option for this; but I think Fabien is
right that it shouldn't be the default.
Yep.
Andres, attached is a simple POC with an option & environment variable (…).
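The POC itself isn't quoted here, but a minimal self-contained sketch of the idea might look like the following. The environment variable name PGBENCH_RANDOM_SEED and the helper name set_random_seed are assumptions for illustration, not necessarily what the patch uses; the point is just "option, else environment, else current time, and always report the seed":

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /*
     * Sketch only: pick the seed from --random-seed if given, else from an
     * (assumed) environment variable, else from the clock as pgbench does
     * today, then report it so the run can be repeated.
     */
    static unsigned int
    set_random_seed(const char *opt_seed)   /* value of --random-seed, or NULL */
    {
        const char *env_seed = getenv("PGBENCH_RANDOM_SEED");  /* assumed name */
        unsigned int seed;
        const char *source;

        if (opt_seed != NULL)
        {
            seed = (unsigned int) strtoul(opt_seed, NULL, 10);
            source = "--random-seed";
        }
        else if (env_seed != NULL)
        {
            seed = (unsigned int) strtoul(env_seed, NULL, 10);
            source = "environment variable";
        }
        else
        {
            seed = (unsigned int) time(NULL);   /* not reproducible */
            source = "current time";
        }

        printf("random seed: %u (from %s)\n", seed, source);
        srandom(seed);
        return seed;
    }

    int
    main(int argc, char **argv)
    {
        set_random_seed(argc > 1 ? argv[1] : NULL);
        printf("first value: %ld\n", random());
        return 0;
    }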
On 2016-04-07 09:46:27 -0400, Tom Lane wrote:
> Andres Freund writes:
> > On 2016-04-07 12:25:58 +0200, Fabien COELHO wrote:
> >> So I have no mathematical doubt that changing the seed is the right default
> >> setting, thus I think that the current behavior is fine. However I'm okay if
> >> someone wants to control the randomness for some reason (…).
Andres Freund writes:
> On 2016-04-07 12:25:58 +0200, Fabien COELHO wrote:
>> So I have no mathematical doubt that changing the seed is the right default
>> setting, thus I think that the current behavior is fine. However I'm okay if
>> someone wants to control the randomness for some reason (maybe …).
On Thu, Apr 7, 2016 at 9:15 AM, Andres Freund wrote:
> It's not about "covering it up"; it's about actually being able to take
> action based on benchmark results, and about practically being able to
> run benchmarks. The argument above means essentially that we need to run
> a significant number of runs …
On 2016-04-07 08:58:16 -0400, Robert Haas wrote:
> On Thu, Apr 7, 2016 at 5:56 AM, Fabien COELHO wrote:
> > I think that it depends on what you want, which may vary:
> >
> > (1) "exactly" reproducible runs, but one run may hit a particular
> > steady state not representative of what happens in general.
On Thu, Apr 7, 2016 at 5:56 AM, Fabien COELHO wrote:
> I think that it depends on what you want, which may vary:
>
> (1) "exactly" reproducible runs, but one run may hit a particular
> steady state not representative of what happens in general.
>
> (2) runs which really vary from one to the next, so as to have an idea
> about how much it may vary, what is the performance stability.
Hello Andres,
If you run the test for longer... Or explicitly iterate over IVs. At the
very least we need to make pgbench output the IV used, to have some
chance of repeating tests.
Note that I'm not against providing a way to repeat tests "exactly", and I
have suggested two means: an environment variable and an option.
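For instance, assuming the --random-seed option from the POC above, explicitly iterating over seeds could be as simple as repeating the invocation from the top of the thread with different values (exact flags as in that earlier example):

    pgbench --random-seed=1 -M prepared -T 3 -P 1 -S
    pgbench --random-seed=2 -M prepared -T 3 -P 1 -S
    pgbench --random-seed=3 -M prepared -T 3 -P 1 -S

Each run is then individually repeatable, while the set of runs still covers several distinct access patterns.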
On 2016-04-07 12:25:58 +0200, Fabien COELHO wrote:
>
> >> (2) runs which really vary from one to the next, so as
> >> to have an idea about how much it may vary, what is the
> >> performance stability.
> >
> > I don't think this POV makes all that much sense. If you do something
> > non-comparable, then the results aren't, uh, comparable.
(2) runs which really vary from one to the next, so as
to have an idea about how much it may vary, what is the
performance stability.
I don't think this POV makes all that much sense. If you do something
non-comparable, then the results aren't, uh, comparable. Which also
means there …
On 2016-04-07 11:56:12 +0200, Fabien COELHO wrote:
> (2) runs which really vary from one to the next, so as
> to have an idea about how much it may vary, what is the
> performance stability.
I don't think this POV makes all that much sense. If you do something
non-comparable, then the results aren't, uh, comparable.
Hello Andres, et al,
I was wondering why it's a good idea for pgbench to do
    INSTR_TIME_SET_CURRENT(start_time);
    srandom((unsigned int) INSTR_TIME_GET_MICROSEC(start_time));
to initialize randomness and then
    for (i = 0; i < nthreads; i++)
        thread->random_state[...] = random();
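For what it's worth, the run-to-run effect is easy to demonstrate outside pgbench. A small illustrative program (none of this is pgbench code; NTHREADS is just a stand-in for nthreads):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define NTHREADS 4

    int
    main(int argc, char **argv)
    {
        /* fixed seed from argv[1] if given, otherwise the clock, as pgbench does */
        unsigned int seed = (argc > 1)
            ? (unsigned int) strtoul(argv[1], NULL, 10)
            : (unsigned int) time(NULL);

        srandom(seed);
        printf("seed: %u\n", seed);

        /* each "thread" draws its initial random state from the seeded generator */
        for (int i = 0; i < NTHREADS; i++)
            printf("thread %d random_state: %ld\n", i, random());

        return 0;
    }

Two runs without an argument print different per-thread states; two runs with the same argument print identical ones, which is the repeatability being discussed.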