Marc,
You should expect that for the kind of OLAP workload you describe in steps 2
and 3 you will have exactly one CPU working for you in Postgres.
If you want to accelerate this processing by a factor of 100 or
more on this machine, you should try Greenplum DB which is Postgres 8.2
-----Original Message-----
From: Dimitri [mailto:[EMAIL PROTECTED]
Sent: Monday, July 30, 2007 05:26 PM Eastern Standard Time
To: Luke Lonergan
Cc: Josh Berkus; pgsql-performance@postgresql.org; Marc Mamin
Subject: Re: [PERFORM] Postgres configuration for 64 CPUs, 128 GB RAM...
Luke,
ZFS tuning does not come from general suggestions, but from real
practice...
So,
- limiting the ARC is a MUST for the moment to keep your database running
comfortably (especially DWH!)
- the 8K blocksize is chosen to read exactly one page when PG asks to
read one page - don't mix it with
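In practice (a sketch for Solaris, assuming the pool is named `tank` and the PostgreSQL data directory lives on `tank/pgdata`; the 4 GB ARC cap is an illustrative figure, not from the thread), the two points above translate to:

```shell
# Match ZFS recordsize to PostgreSQL's 8 KB block size;
# must be set before data is written to the filesystem.
zfs set recordsize=8k tank/pgdata

# Cap the ARC so it does not compete with shared_buffers for RAM,
# e.g. 4 GB (0x100000000 bytes); add to /etc/system and reboot.
echo 'set zfs:zfs_arc_max = 0x100000000' >> /etc/system
```

The recordsize change only affects files written after it is set, which is why it belongs in the initial setup rather than post-load tuning.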
Hello,
Thank you for all your comments and recommendations.
I'm aware that the conditions for this benchmark are not ideal, mostly
due to the lack of time to prepare it. We will also need an additional
benchmark on a less powerful - more realistic - server to better
understand the scalability of
Josh,
On 7/20/07 4:26 PM, Josh Berkus [EMAIL PROTECTED] wrote:
There are some specific tuning parameters you need for ZFS or performance
is going to suck.
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
(scroll down to PostgreSQL)
Marc,
Server Specifications:
--
Sun SPARC Enterprise M8000 Server:
http://www.sun.com/servers/highend/m8000/specs.xml
File system:
http://en.wikipedia.org/wiki/ZFS
Having done something similar recently, I would recommend that you look at
adding connection pooling with pgBouncer in transaction-pooling mode between
your benchmark app and PostgreSQL. In our application we have about 2000
clients funneling down to 30 backends and are able to sustain large transaction per
Marc Mamin wrote:
Postgres configuration for 64 CPUs, 128 GB RAM...
there are probably not that many installations out there that large -
comments below
Hello,
We have the opportunity to benchmark our application on a large server. I
have to prepare the Postgres configuration and I'd
On Tue, Jul 17, 2007 at 04:10:30PM +0200, Marc Mamin wrote:
shared_buffers = 262143
You should at least try some runs with this set far, far larger. At
least 10% of memory, but it'd be nice to see what happens with this set
to 50% or higher as well (though don't set it larger than the database
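For scale, a quick back-of-envelope sketch (mine, not from the thread): 262143 buffers of 8 KB each is only about 2 GB of this machine's 128 GB, which is why much larger runs are being suggested:

```python
# Back-of-envelope check of shared_buffers sizing in 8 KB pages.
PAGE = 8192  # default PostgreSQL block size in bytes


def buffers_for_fraction(ram_gb: int, fraction: float) -> int:
    """Number of 8 KB pages covering the given fraction of RAM."""
    return int(ram_gb * 2**30 * fraction) // PAGE


current_gb = 262143 * PAGE / 2**30  # the setting quoted above, in GB
print(f"shared_buffers = 262143 is about {current_gb:.1f} GB")
print(f"10% of 128 GB -> shared_buffers = {buffers_for_fraction(128, 0.10)}")
print(f"50% of 128 GB -> shared_buffers = {buffers_for_fraction(128, 0.50)}")
```

So the quoted setting covers well under 2% of RAM, far below even the 10% floor suggested above.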
Marc Mamin [EMAIL PROTECTED] writes:
We have the opportunity to benchmark our application on a large server. I
have to prepare the Postgres configuration and I'd appreciate some
comments on it as I am not experienced with servers of such a scale.
Moreover the configuration should be
On Tue, 17 Jul 2007, Marc Mamin wrote:
Moreover the configuration should be fail-proof as I won't be able to
attend the tests.
This is unreasonable. The idea that you'll get a magic perfect
configuration in one shot suggests a fundamental misunderstanding of how
work like this is done. If
Postgres configuration for 64 CPUs, 128 GB RAM...
Hello,
We have the opportunity to benchmark our application on a large server. I
have to prepare the Postgres configuration and I'd appreciate some
comments on it as I am not experienced with servers of such a scale.
Moreover the configuration