Hi,
After a few tests, the difference is explained by the
effective_cache_size parameter.
With effective_cache_size=1000 (the default),
the planner chooses the following plan:
postgres=# explain select count(*) from (select distinct on (val) *
from test) as foo;
Q
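For anyone reproducing this, the comparison is one SET away. A sketch
of the kind of session involved (the 50000 below is an illustrative
value, not the one from my test):

postgres=# SET effective_cache_size = 50000;
SET
postgres=# explain select count(*) from (select distinct on (val) *
from test) as foo;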
What "same result"? You only ran it up to 2K rows, not 2M. In any
Sorry, I did this over and over up to xxx,000 rows, but I did not write
it in the mail.
I did it again: initdb, create table, insert, vacuum full analyze,
explain analyze at each stage.
And there was no problem.
So I made a copy.
I redid the test with a freshly installed data directory. Same result.
Note: this is the full log. I only removed typing mistakes such as
"sl" for "ls".
Jean-Gérard Pailloncy
Last login: Thu Nov 24 12:52:32 2005 from 192.168.0.1
OpenBSD 3.8 (WDT) #2: Tue Nov 8 00:52:38 CET 2005
Welcome to
Pailloncy Jean-Gerard <[EMAIL PROTECTED]> writes:
Why the stupid indexscan plan on the whole table?
Pray tell, what are you using for the planner cost parameters?
The only way I can come close to duplicating your numbers is
by setting random_page_cost to somewhere around 0.01 ...
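For reference, the current values can be checked from psql with SHOW
(these are the standard planner GUCs):

postgres=# SHOW random_page_cost;
postgres=# SHOW effective_cache_size;
postgres=# SHOW cpu_tuple_cost;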
I d
THIS MAY SEEM SILLY but vacuum is misspelled below and presumably
there was never any ANALYZE done.
postgres=# vaccum full verbose analyze;
I did run the "vacUUm full verbose analyze;",
but I copy/pasted the wrong line.
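For what it's worth, the typo would not have gone unnoticed in the
session itself: psql passes the line to the server, which rejects it,
so the exchange looks roughly like this (error text approximate):

postgres=# vaccum full verbose analyze;
ERROR:  syntax error at or near "vaccum"
postgres=# VACUUM FULL VERBOSE ANALYZE;
VACUUM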
Cordialement,
Jean-Gérard Pailloncy
Hi,
PostgreSQL 8.1 fresh install on a freshly installed OpenBSD 3.8 box.
postgres=# CREATE DATABASE test;
CREATE DATABASE
postgres=# create table test (id serial, val integer);
NOTICE: CREATE TABLE will create implicit sequence "test_id_seq" for
serial column "test.id"
CREATE TABLE
postgres=
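Presumably the session continued with the insert/analyze/explain cycle
described above. A sketch of what those steps look like (the row count
and value distribution are my assumptions, not the originals):

postgres=# insert into test (val) select (random()*1000)::int
postgres-# from generate_series(1, 100000);
INSERT 0 100000
postgres=# vacuum full analyze;
VACUUM
postgres=# explain analyze select count(*) from
postgres-# (select distinct on (val) * from test) as foo;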
Your main example seems to focus on a large table where a key column
has constrained values. This case is interesting in proportion to the
number of possible values. If I have billions of rows, each having one
of only two values, I can think of a trivial and very fast method of
returning th
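One such trivial method, assuming the two candidate values are known
in advance and val is indexed, is to probe the index once per value
instead of scanning the table (a sketch, not from the original mail):

select 0 as val where exists (select 1 from test where val = 0)
union all
select 1 where exists (select 1 from test where val = 1);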
I think I have a test case for 7.4.2.
I have a 3-million-row table "metadata" with a tsearch2 index.
I got a memory leak in "vacuum full verbose analyze".
I dropped the index, ran "vacuum full verbose analyze", recreated the
index, and re-ran "vacuum full verbose analyze".
Then I ran my script to i
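In SQL terms, the drop/recreate cycle was roughly this (the index
definition is my reconstruction; the column name is an assumption):

# DROP INDEX pkpoai.test_metadata_all;
# VACUUM FULL VERBOSE ANALYZE pkpoai.metadata;
# CREATE INDEX test_metadata_all ON pkpoai.metadata USING gist (idxfti);
# VACUUM FULL VERBOSE ANALYZE pkpoai.metadata;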
Hi,
A small test with 8.0rc2 and a BLCKSZ of 8k and of 32k.
There seems to be a 10% increase in the number of transactions per
second.
Does someone plan to carefully test the impact of BLCKSZ?
Cordialement,
Jean-Gérard Pailloncy
with 8k:
> /test/bin/pgbench -c 10 -t 300 test
starting vacuum...end.
trans
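For anyone who wants to reproduce the comparison: BLCKSZ is a
compile-time constant (defined in src/include/pg_config_manual.h in
sources of this vintage, if I remember right), so each block size needs
its own build. A sketch with illustrative paths:

(edit src/include/pg_config_manual.h and set BLCKSZ to 32768)
> ./configure --prefix=/test32k
> make && make install
> /test32k/bin/initdb -D /test32k/data
> /test32k/bin/pg_ctl -D /test32k/data start
> /test32k/bin/createdb test
> /test32k/bin/pgbench -i test
> /test32k/bin/pgbench -c 10 -t 300 test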
Update to my case:
I dropped and recreated the index, and there was no problem this time.
Strange...
# DROP INDEX pkpoai.test_metadata_all;
DROP INDEX
# VACUUM FULL VERBOSE ANALYZE pkpoai.metadata;
INFO: vacuuming "pkpoai.metadata"
INFO: "metadata": found 167381 removable, 3133397 nonremovable row
ve
The typical output from top (while vacuuming all the other indexes):
  PID  UID PRI NICE  SIZE  RES STATE WAIT    TIME    CPU COMMAND
20461  503  14    0   13M  75M sleep semwai  5:27  2.05% postgres
When the backend hits the tsearch2 index, SIZE/RES grows until it
reaches 1GB, where I got the error.
I have a table with a tsearch2 full text index on PG 7.4.2, and a
query against the index is really slow.
I tried to do a "VACUUM FULL VERBOSE ANALYZE pkpoai.metadata" and I got
an error.
I monitored memory usage with top: the backend uses more and more
memory and hits the limit of 1GB of RAM use.
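For context, the slow query is of the usual tsearch2 form, something
like this (the column name and search terms are assumptions on my
part):

# SELECT id FROM pkpoai.metadata WHERE idxfti @@ to_tsquery('default', 'some & words');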