We have several independent tables on a multi-core machine serving Select
queries. These tables fit into memory, and each Select query goes over one
table's pages sequentially. In this experiment, there are no indexes or
table joins.
When we send concurrent Select queries to these tables, query performance
does not scale as expected.
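For illustration, a minimal driver along these lines (table names,
connection string, worker and query counts are placeholders here, not the
actual test setup) could look like:

/* One libpq connection per worker; each worker repeatedly sequential-scans
 * its own table.  Build: cc -o selbench selbench.c -lpq -lpthread */
#include <stdio.h>
#include <pthread.h>
#include <libpq-fe.h>

#define NWORKERS 8    /* one worker (and one table) per core */
#define NQUERIES 100  /* sequential scans per worker */

static void *worker(void *arg)
{
    long    id = (long) arg;
    char    sql[64];
    PGconn *conn = PQconnectdb("dbname=test");   /* placeholder conninfo */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return NULL;
    }

    /* Each worker scans its own table, so backends share no rows at all. */
    snprintf(sql, sizeof(sql), "SELECT sum(value) FROM bench_table_%ld", id);

    for (int i = 0; i < NQUERIES; i++)
    {
        PGresult *res = PQexec(conn, sql);

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return NULL;
}

int main(void)
{
    pthread_t threads[NWORKERS];

    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&threads[i], NULL, worker, (void *) i);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

Since every worker hits a different table, any slowdown under concurrency
has to come from shared resources (buffer mapping locks, kernel shared
memory handling, the hypervisor) rather than from contention on the data
itself.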
> I think all of this data cannot fit in shared_buffers, you might want
> to increase shared_buffers to a larger size (not 30 GB but close to your
> data size) to see how it behaves.
When I use shared_buffers larger than my data size, such as 10 GB, results
scale nearly as expected, at least for this ...
> Maybe you could help test this patch:
> http://www.postgresql.org/message-id/20131115194725.gg5...@awork2.anarazel.de
Which repository should I apply these patches to? I tried the main
repository, the 9.3 stable branch, and the source code of 9.3.1, and in my
trials at least one of the patches failed to apply. What patch co...
> Notice the huge %sy
> What kind of VM are you using? HVM or paravirtual?
This instance is paravirtual.
postgres [kernel.kallsyms] [k] shmem_getpage_gfp
On Wed, Dec 4, 2013 at 6:33 PM, Andres Freund wrote:
> On 2013-12-04 14:27:10 -0200, Claudio Freire wrote:
> > On Wed, Dec 4, 2013 at 9:19 AM, Metin Doslu wrote:
> > >
> > > Here are the results of "vmstat 1" ...
> You could try HVM. I've noticed it fares better under heavy CPU load,
> and it's not fully-HVM (it still uses paravirtualized network and
> I/O).
I already tried with HVM (cc2.8xlarge instance on Amazon EC2) and observed
the same problem.
> Didn't follow the thread from the start. So, this is EC2? Have you
> checked, with a recent enough version of top or whatever, how much time
> is reported as "stolen"?
Yes, this is EC2. "stolen" is occasionally reported as 1, but mostly as 0.
Here is some extra information:
- When we increased NUM_BUFFER_PARTITIONS to 1024, this problem disappeared
on 8-core machines but came back on 16-core machines on Amazon EC2. Could it
be related to PostgreSQL's locking mechanism? (See the sketch after this
list for where that constant lives.)
- I tried this test with 4-core machines, including my perso...
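For context, NUM_BUFFER_PARTITIONS controls how many partitions the shared
buffer mapping hash table (and the LWLocks guarding it) is split into.
Roughly, paraphrased from the 9.3 headers (check lwlock.h and
buf_internals.h in your own tree; this is not an exact copy):

/* src/include/storage/lwlock.h: number of partitions of the shared
 * buffer mapping hashtable; the experiment above bumps this to 1024. */
#define NUM_BUFFER_PARTITIONS  16

/* src/include/storage/buf_internals.h: every buffer lookup takes the
 * LWLock of the partition its buffer tag hashes to. */
#define BufTableHashPartition(hashcode) \
	((hashcode) % NUM_BUFFER_PARTITIONS)
#define BufMappingPartitionLock(hashcode) \
	((LWLockId) (FirstBufMappingLock + BufTableHashPartition(hashcode)))

With only 16 partitions, many backends faulting pages into shared_buffers
can pile up on the same partition locks, which would fit Andres's point
below that much of the contention comes from populating shared_buffers.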
> You could try my lwlock-scalability improvement patches - for some
> workloads here, the improvements have been rather noticeable. Which
> version are you testing?
I'm testing with PostgreSQL 9.3.1.
> - When we increased NUM_BUFFER_PARTITIONS to 1024, this problem
> disappeared on 8-core machines but came back on 16-core machines on
> Amazon EC2. Could it be related to PostgreSQL's locking mechanism?
If we build with -DLWLOCK_STATS to print locking stats from PostgreSQL, we
see tons of ...
> Is your workload bigger than RAM?
RAM is bigger than the workload (by more than a couple of times).
> I think a good bit of the contention
> you're seeing in that listing is populating shared_buffers - and might
> actually vanish once you're halfway cached.
> From what I've seen so far the bigger prob...
Stable.
http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=shortlog;h=refs/heads/REL9_2_STABLE-rwlock-contention
On Wed, Dec 4, 2013 at 8:26 PM, Andres Freund wrote:
> On 2013-12-04 20:19:55 +0200, Metin Doslu wrote:
> > - When we increased NUM_BUFFER_PARTITIONS to 1024, ...
> You tested the correct branch, right? Which commit does "git rev-parse
> HEAD" show?
I applied the last two patches manually on PostgreSQL 9.2 Stable.
> From what I've seen so far the bigger problem, rather than contention in
> the lwlocks themselves, is the spinlock protecting the lwlocks...
Postgres 9.3.1 also reports spindelay; it seems that there is no contention
on spinlocks.
PID 21121 lwlock 0: shacq 0 exacq 33 blk 1 spindelay 0
PID 21121 lwlock 33:
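Those lines come from the LWLOCK_STATS instrumentation: for each lwlock a
backend touched, it dumps the shared acquisitions (shacq), exclusive
acquisitions (exacq), times it had to block (blk), and spin delays
(spindelay). To pick the contended locks out of a large dump, a throwaway
filter along these lines can help (my own sketch, not part of PostgreSQL;
it only assumes the line format shown above):

/* lwstats_filter.c: print LWLOCK_STATS lines whose block count is at least
 * a threshold.  Build: cc -o lwstats_filter lwstats_filter.c
 * Usage: ./lwstats_filter 100 < backend-stderr.log */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    unsigned int threshold = (argc > 1) ? (unsigned int) atoi(argv[1]) : 1;
    char line[256];

    while (fgets(line, sizeof(line), stdin) != NULL)
    {
        int pid, lock;
        unsigned int shacq, exacq, blk, spindelay;

        /* Matches lines like:
         * PID 21121 lwlock 0: shacq 0 exacq 33 blk 1 spindelay 0 */
        if (sscanf(line, "PID %d lwlock %d: shacq %u exacq %u blk %u spindelay %u",
                   &pid, &lock, &shacq, &exacq, &blk, &spindelay) == 6 &&
            blk >= threshold)
            fputs(line, stdout);
    }
    return 0;
}

Sorting that output by blk (or spindelay) makes it easy to see whether the
blocking concentrates on a handful of buffer mapping partition locks or is
spread across many locks.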