On Monday 31 August 2015 17:43:08 Tomas Vondra wrote:
> Well, I could test the patch on an x86 machine with 4 sockets (64 cores),
> but I wonder whether it makes sense at this point, as the patch really
> is not correct (judging by what Andres says).
Can you test the patch from this thread:
On 08/31/2015 12:54 PM, YUriy Zhuravlev wrote:
Hello hackers
Recently we were given access to a test server from IBM: a 9119-MHE with 8 CPUs
* 8 cores * 8 threads. We decided to take advantage of this to look for
read-scalability bottlenecks (pgbench -S).
You can read all the details here:
On 2015-08-31 17:43:08 +0200, Tomas Vondra wrote:
> Well, I could test the patch on an x86 machine with 4 sockets (64 cores), but
> I wonder whether it makes sense at this point, as the patch really is not
> correct (judging by what Andres says).
Additionally it's, for default pgbench, really mostly a bottleneck after
GetSnapshotData() is fixed. You can make it a problem much earlier if you
have index nested loops over a lot of rows.
On 2015-08-31 17:54:17 +0200, Tomas Vondra wrote:
> [scratches head] So does this mean it's worth testing the patch on x86 or
> not, in its current state?
You could try if you're interested. But I don't think it's super
meaningful. The patch is just a POC and rather widely incorrect.
Don't get
We did not get any effect on 64 cores with SMT=8, and we do not have a
64-core x86 machine with HT disabled.
You can set scale > 1000, with shared_buffers much larger than the
pgbench_accounts_pkey index.
You can also increase the concurrency: access not only the top of the
b-tree index, but also a
On Monday 31 August 2015 17:54:17 Tomas Vondra wrote:
> So does this mean it's worth testing the patch on x86
> or not, in its current state?
It's really interesting, but you need 64 real cores without HT (32 cores
plus HT showed no effect).
--
YUriy Zhuravlev
Postgres Professional:
On Monday 31 August 2015 17:48:50 Andres Freund wrote:
> Additionally it's, for default pgbench, really mostly a bottleneck after
> GetSnapshotData() is fixed. You can make it a problem much earlier if
> you have index nested loops over a lot of rows.
Is 100 000 000 a lot? A simple select query from
On 2015-08-31 13:54:57 +0300, YUriy Zhuravlev wrote:
> We have noticed s_lock in PinBuffer and UnpinBuffer. For the test we
> rewrote PinBuffer and UnpinBuffer using atomic operations and we liked
> the result. The performance degradation almost completely disappeared,
> and scaling continued up to 400
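
For context, here is a minimal sketch of the idea being described above, not
the actual patch. It assumes a hypothetical, heavily simplified buffer
descriptor (ToyBufferDesc) holding only a pin count, whereas the real
PinBuffer/UnpinBuffer in bufmgr.c also maintain a usage count and flag bits
under the buffer-header spinlock. The pg_atomic_* calls are PostgreSQL's
atomics API from port/atomics.h:

#include "postgres.h"
#include "port/atomics.h"

/* Hypothetical, heavily simplified buffer descriptor (illustration only). */
typedef struct ToyBufferDesc
{
    pg_atomic_uint32 refcount;      /* pin count, updated lock-free */
} ToyBufferDesc;

static void
toy_buffer_init(ToyBufferDesc *buf)
{
    /* Atomics must be initialized through the API, never assigned directly. */
    pg_atomic_init_u32(&buf->refcount, 0);
}

/* Pin without taking the buffer-header spinlock: one atomic increment. */
static void
toy_pin(ToyBufferDesc *buf)
{
    pg_atomic_fetch_add_u32(&buf->refcount, 1);
}

/* Unpin: atomic decrement; the returned old value sanity-checks the pin. */
static void
toy_unpin(ToyBufferDesc *buf)
{
    uint32 old = pg_atomic_fetch_sub_u32(&buf->refcount, 1);

    (void) old;             /* silence unused-variable warning without asserts */
    Assert(old > 0);
}

The point of the rewrite is that the s_lock/s_unlock pair around the refcount
update disappears, which is exactly where the contention showed up in the
profiles.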
On Monday 31 August 2015 13:03:07 you wrote:
> That's definitely not correct, you should initialize the atomics using
> pg_atomic_init_u32() and write to them using pg_atomic_write_u32() - not
> access them directly. This breaks the fallback paths.
You're right. For now it's just there to silence the compiler.
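
To spell out the fallback-path issue: on platforms without native atomic
instructions, PostgreSQL emulates pg_atomic_uint32 with an embedded spinlock,
which pg_atomic_init_u32() is responsible for setting up; poking the struct's
fields directly skips that. A small hypothetical example (the counter
variable is invented for illustration):

#include "postgres.h"
#include "port/atomics.h"

static pg_atomic_uint32 counter;    /* hypothetical example variable */

void
counter_setup(void)
{
    /*
     * Wrong: "counter.value = 0;" appears to work with native atomics,
     * but the fallback implementation guards the value with a spinlock
     * that pg_atomic_init_u32() must initialize.  Always go through
     * the API:
     */
    pg_atomic_init_u32(&counter, 0);
}

void
counter_reset(void)
{
    /* Plain stores after initialization must also use the API. */
    pg_atomic_write_u32(&counter, 0);
}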