Bruce,
On 11/22/05 4:13 PM, Bruce Momjian <pgman@candle.pha.pa.us> wrote:
Perfect summary. We have a background writer now. Ideally we would
have a background reader, that reads-ahead blocks into the buffer cache.
The problem is that while there is a relatively long time between a buffer
Subject: Re: [PERFORM] Hardware/OS recommendations for large databases (
Alan,
Why not contribute something - put up proof of your stated 8KB versus 32KB page size improvement.
- Luke
On Tue, 2005-11-22 at 18:17 -0500, Anjan Dave wrote:
It's mostly a 'read' application, I increased the vm.max-readahead to
2048 from the default 256, after which I've not seen the CS storm,
though it could be incidental.
Can you verify this, please?
Turn it back down again, try the test,
The offending SELECT query that invoked the CS storm was optimized by
folks here last night, so it's hard to say if the VM setting made a
difference. I'll give it a try anyway.
Thanks,
Anjan
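For reference, the setting being discussed is a Linux 2.4-era sysctl counted in pages. A sketch of making the raised value persistent (assuming the usual /etc/sysctl.conf on that box):

```
# /etc/sysctl.conf fragment -- raise block-device readahead (Linux 2.4).
# vm.max-readahead is in pages, so 2048 pages = 8MB with 4KB pages.
# It can also be applied without a reboot: sysctl -w vm.max-readahead=2048
vm.max-readahead = 2048
```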
-----Original Message-----
From: Simon Riggs [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 23,
Simon,
I tested it by running two of those simultaneous queries (the
'unoptimized' one), and it doesn't make any difference whether
vm.max-readahead is 256 or 2048...the modified query runs in a snap.
Thanks,
Anjan
-----Original Message-----
From: Anjan Dave
Sent: Wednesday, November 23, 2005
Luke Lonergan wrote:
Why not contribute something - put up proof of your stated 8KB versus
32KB page size improvement.
I did observe that 32KB block sizes were a significant win for our
usage patterns. It might be a win for any of the following reasons:
0) The preliminaries: ~300GB
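For anyone wanting to reproduce the 8KB-versus-32KB comparison: in the 8.1-era sources the block size is a compile-time constant (BLCKSZ in src/include/pg_config_manual.h), so each size needs its own build. A running 8.1 server reports what it was built with via a read-only preset (older releases may lack this):

```sql
-- Check the build-time block size of a running server:
SHOW block_size;
-- A stock build reports 8192; a 32KB build reports 32768.
```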
Hi,
PostgreSQL 8.1 fresh install on a freshly installed OpenBSD 3.8 box.
postgres=# CREATE DATABASE test;
CREATE DATABASE
postgres=# create table test (id serial, val integer);
NOTICE:  CREATE TABLE will create implicit sequence "test_id_seq" for serial column "test.id"
CREATE TABLE
postgres=#
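To turn that fresh table into something measurable, a minimal population step might look like this (a sketch; the row count is purely illustrative, and generate_series() is available from 8.0 on):

```sql
-- Fill the test table with a million rows of random values:
INSERT INTO test (val)
SELECT (random() * 1000)::integer
FROM generate_series(1, 1000000);

-- Gather statistics so the planner has something to work with:
ANALYZE test;
```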
Alan,
On 11/23/05 2:00 PM, Alan Stange [EMAIL PROTECTED] wrote:
Luke Lonergan wrote:
Why not contribute something - put up proof of your stated 8KB versus
32KB page size improvement.
I did observe that 32KB block sizes were a significant win for our
usage patterns. It might be a win for
Mark,
This is an excellent idea - unfortunately I'm in Maui right now (Mahalo!) and I'm not getting to testing with this. My first try was with 8.0.3 and it's an 8.1 function I presume.
Not to be lazy but any hint as to how
Pailloncy Jean-Gerard [EMAIL PROTECTED] writes:
Why the stupid indexscan plan on the whole table?
Pray tell, what are you using for the planner cost parameters?
The only way I can come close to duplicating your numbers is
by setting random_page_cost to somewhere around 0.01 ...
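Tom's point can be checked directly from psql; a sketch of inspecting and temporarily distorting the parameter he names (using the test table from earlier in the thread):

```sql
SHOW random_page_cost;           -- the stock value is 4
-- Dropping it far below 1 tells the planner a random page fetch is
-- cheaper than a sequential one, which is how a whole-table indexscan
-- can come out "cheapest":
SET random_page_cost = 0.01;
EXPLAIN SELECT * FROM test WHERE id > 0;
-- Put it back before trusting any plans:
RESET random_page_cost;
```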
Luke Lonergan wrote:
Mark,
This is an excellent idea – unfortunately I’m in Maui right now
(Mahalo!) and I’m not getting to testing with this. My first try was
with 8.0.3 and it’s an 8.1 function I presume.
Not to be lazy – but any hint as to how to do the same thing for 8.0?
Yeah, it's
Pailloncy Jean-Gerard [EMAIL PROTECTED] wrote ..
[snip]
THIS MAY SEEM SILLY but vacuum is misspelled below and presumably there was never any ANALYZE done.
postgres=# vaccum full verbose analyze;
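For the record, the server rejects the misspelling above with a syntax error, so the ANALYZE never ran; the correctly spelled command would be:

```sql
VACUUM FULL VERBOSE ANALYZE;   -- whole database; takes exclusive locks per table
-- Often just refreshing the planner statistics is enough, and far cheaper:
ANALYZE;
```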