On 28.01.2013 23:30, Gurjeet Singh wrote:
On Sat, Jan 26, 2013 at 11:24 PM, Satoshi Nagayasu wrote:
2012/12/21 Gurjeet Singh:
The patch is very much what you had posted, except for a couple of
differences due to bit-rot. (i) I didn't have to #define MAX_RANDOM_VALUE64
since its cousin
On Sat, Jan 26, 2013 at 11:24 PM, Satoshi Nagayasu wrote:
> Hi,
>
> I have reviewed this patch.
>
> https://commitfest.postgresql.org/action/patch_view?id=1068
>
> 2012/12/21 Gurjeet Singh :
> > The patch is very much what you had posted, except for a couple of
> > differences due to bit-rot.
Hi,
I have reviewed this patch.
https://commitfest.postgresql.org/action/patch_view?id=1068
2012/12/21 Gurjeet Singh :
> The patch is very much what you had posted, except for a couple of
> differences due to bit-rot. (i) I didn't have to #define MAX_RANDOM_VALUE64
> since its cousin MAX_RAN
On Wed, Feb 16, 2011 at 8:15 AM, Greg Smith wrote:
> Tom Lane wrote:
>
>> I think that might be a good idea --- it'd reduce the cross-platform
>> variability of the results quite a bit, I suspect. random() is not
>> to be trusted everywhere, but I think erand48 is pretty much the same
>> whereve
Greg Smith writes:
> Given that pgbench will run with threads in some multi-worker
> configurations, after some more portability research I think odds are
> good we'd get nailed by
> http://sourceware.org/bugzilla/show_bug.cgi?id=10320 : "erand48
> implementation not thread safe but POSIX says
Tom Lane wrote:
I think that might be a good idea --- it'd reduce the cross-platform
variability of the results quite a bit, I suspect. random() is not
to be trusted everywhere, but I think erand48 is pretty much the same
wherever it exists at all (and src/port/ provides it elsewhere).
Give
On Fri, Feb 11, 2011 at 8:35 AM, Stephen Frost wrote:
> Greg,
>
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> Greg Smith writes:
>> > Poking around a bit more, I just discovered another possible approach is
>> > to use erand48 instead of rand in pgbench, which is either provided by
>> > the OS or e
Greg,
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Greg Smith writes:
> > Poking around a bit more, I just discovered another possible approach is
> > to use erand48 instead of rand in pgbench, which is either provided by
> > the OS or emulated in src/port/erand48.c. That's way more resolution
> >
Greg Smith writes:
> Poking around a bit more, I just discovered another possible approach is
> to use erand48 instead of rand in pgbench, which is either provided by
> the OS or emulated in src/port/erand48.c. That's way more resolution
> than needed here, given that 2^48 pgbench accounts woul
Stephen Frost wrote:
Just wondering, did you consider just calling random() twice and
smashing the result together..?
I did. The problem is that even within the 32 bits that random()
returns, it's not uniformly distributed. Combining two of them isn't
really going to solve the distributi
Greg,
* Greg Smith (g...@2ndquadrant.com) wrote:
> I took that complexity out and just put a hard line
> in there instead: if scale>=2, you get bigints. That's not
> very different from the real limit, and it made documenting when the
> switch happens easy to write and to remember.
Agreed c
Attached is an updated 64-bit pgbench patch that works as expected for
all of the most common pgbench operations, including support for scales
above the previous boundary of just over 21,000. Here's the patched
version running against a 303GB database with a previously unavailable
scale factor
The update on the work to push towards a bigger pgbench is that I now
have the patch running and generating databases larger than any
previously possible scale:
$ time pgbench -i -s 25000 pgbench
...
2500000000 tuples done.
...
real    258m46.350s
user    14m41.970s
sys     0m21.310s
$ psql -d
Robert Haas wrote:
At least in my book, we need to get this committed in the next two
weeks, or wait for 9.2.
Yes, I was just suggesting that I was not going to get started in the
first week or two given the other pgbench related tests I had queued up
already. Those are closing up nicely,
On Tue, Jan 18, 2011 at 1:42 PM, Greg Smith wrote:
> Thanks for picking this up again and finishing the thing off. I'll add this
> into my queue of performance tests to run and we can see if this is worth
> applying. Probably take a little longer than the usual CF review time. But
> as this doe
Euler Taveira de Oliveira wrote:
(i) If we want to support a scale factor greater than 21474 we have
to convert some columns to bigint; it will change the test. From the
portability point it is a pity but as we have never supported it I'm
not too worried about it. Why? Because it will use big
Em 10-01-2011 05:25, Greg Smith escreveu:
Euler Taveira de Oliveira wrote:
Em 07-01-2011 22:59, Greg Smith escreveu:
setrandom: invalid maximum number -2147467296
It is failing at atoi() circa pgbench.c:1036. But it is just the first
one. There are some variables and constants that need to be co