The first thing to check: did you do a recent VACUUM ANALYZE? This
updates all the statistics. There are a number of places where the plan
says "rows=1000", which is usually the planner's "I have no idea, let me
guess 1000" default. There are also several places where the estimates
are pretty far off. For instan
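As a sketch of the check being described (the table and query here are
invented, not from the thread):

    -- Refresh statistics, then compare estimated vs. actual row counts:
    VACUUM ANALYZE orders;
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
    -- Each plan node shows both numbers, e.g.:
    --   Seq Scan on orders (cost=... rows=1000 ...) (actual ... rows=37 ...)
    -- A "rows=1000" estimate far from the actual count is the tell-tale
    -- sign of missing or stale statistics.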
Hi All,
I notice that most stats Postgres provides are per table or per process.
Is it possible to monitor CPU time per transaction and I/O per
transaction? If not, are there any commercial capacity planning tools
available?
Thanks!
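For reference, the closest built-in counters are the per-table statistics
views; a minimal sketch (this assumes the stats collector is enabled,
e.g. stats_block_level = true on 7.4):

    -- Cumulative block I/O per table, from the statistics collector:
    SELECT relname, heap_blks_read, heap_blks_hit,
           idx_blks_read, idx_blks_hit
      FROM pg_statio_user_tables
     ORDER BY heap_blks_read DESC;
    -- These are per table, not per transaction, which is exactly the
    -- limitation being asked about.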
Frank Wiles <[EMAIL PROTECTED]> writes:
> I've also seen a huge difference between select count(*) and
> select count(1) in older versions,
That must have been before my time, ie, pre-6.4 or so. There is
certainly zero difference now.
regards, tom lane
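This is easy to confirm for yourself (a sketch; "foo" stands for any
handy table, not one from the thread):

    EXPLAIN ANALYZE SELECT count(*) FROM foo;
    EXPLAIN ANALYZE SELECT count(1) FROM foo;
    -- Both yield the same Aggregate-over-Seq-Scan plan and essentially
    -- identical runtimes.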
On Fri, Dec 17, 2004 at 22:56:27 +0100,
"Steinar H. Gunderson" <[EMAIL PROTECTED]> wrote:
>
> I'm a bit unsure -- should counting ~3 million rows (no OIDs, PG 7.4,
> everything in cache, 32-byte rows) take ~3500ms on an Athlon 64 2800+?
It doesn't seem totally out of whack. You will be limited b
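Back-of-the-envelope (with assumed overheads, not figures from the
thread): 3 million rows at 32 bytes of data plus roughly 30 bytes of
per-tuple header is on the order of 200 MB of heap, so ~3500 ms works
out to roughly 55 MB/s. With everything cached, that points at per-tuple
CPU overhead rather than raw I/O bandwidth as the limit.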
A long time ago, in a galaxy far, far away, [EMAIL PROTECTED] ("Mike G.") wrote:
> Hi,
>
> I have data that I am taking from 2 tables, pulling out specific columns and
> inserting into one table.
>
> Is it more efficient to do:
> a) insert into x
> select z from y;
> insert into x
select z from a;
On Fri, Dec 17, 2004 at 05:02:29PM -0600, Frank Wiles wrote:
> It depends more on your disk IO than the processor. Counting isn't
> processor intensive, but reading through the entire table on disk
> is. I've also seen a huge difference between select count(*) and
> select count(1) in older versions,
On Fri, 17 Dec 2004 23:09:07 +0100
"Steinar H. Gunderson" <[EMAIL PROTECTED]> wrote:
> On Fri, Dec 17, 2004 at 10:56:27PM +0100, Steinar H. Gunderson wrote:
> > I'm a bit unsure -- should counting ~3 million rows (no OIDs, PG
> > 7.4, everything in cache, 32-byte rows) take ~3500ms on an Athlon 64 2800+?
Pailloncy Jean-Gerard <[EMAIL PROTECTED]> writes:
> Update to my case:
> I dropped and recreated the index, and there was no problem this time.
> Strange...
Well, that time there wasn't actually any work for VACUUM FULL to do.
I think the bloat is probably driven by having to move a lot of rows
in orde
On Fri, Dec 17, 2004 at 10:56:27PM +0100, Steinar H. Gunderson wrote:
> I'm a bit unsure -- should counting ~3 million rows (no OIDs, PG 7.4,
> everything in cache, 32-byte rows) take ~3500ms on an Athlon 64 2800+?
(I realize I was a bit unclear here. This is a completely separate case, not
relate
On Fri, Dec 17, 2004 at 10:47:57AM -0500, Greg Stark wrote:
>> Must admit this puzzles me. Are you saying you can't saturate your disk I/O?
>> Or are you saying other DBMS store records in 0.5 to 0.2 times less space
>> than PG?
> I don't know what he's talking about either. Perhaps he's think
Update to my case:
I dropped and recreated the index, and there was no problem this time.
Strange...
# DROP INDEX pkpoai.test_metadata_all;
DROP INDEX
# VACUUM FULL VERBOSE ANALYZE pkpoai.metadata;
INFO: vacuuming "pkpoai.metadata"
INFO: "metadata": found 167381 removable, 3133397 nonremovable row
ve
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> wrote:
>> This looks like it must be a memory leak in the gist indexing code
>> (either gist itself or tsearch2). I don't see any post-release fixes in
>> the 7.4 branch that look like they fixed any such thing :-(, so it's
On Fri, Dec 17, 2004 at 14:46:57 -0500,
Tom Lane <[EMAIL PROTECTED]> wrote:
>
> This looks like it must be a memory leak in the gist indexing code
> (either gist itself or tsearch2). I don't see any post-release fixes in
> the 7.4 branch that look like they fixed any such thing :-(, so it's
> p
Josh Berkus <[EMAIL PROTECTED]> writes:
>> Jean-Gerard, can you put together a self-contained test case? I suspect
>> it need only look like "put some data in a table, make a tsearch2 index,
>> delete half the rows in the table, VACUUM FULL". But I don't have time
>> to try to cons up a test case r
Tom,
> Jean-Gerard, can you put together a self-contained test case? I suspect
> it need only look like "put some data in a table, make a tsearch2 index,
> delete half the rows in the table, VACUUM FULL". But I don't have time
> to try to cons up a test case right now, and especially not to figu
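Spelled out, the suggested test case might look like this (a sketch
only; table, column, and source names are invented, and it assumes the
tsearch2 contrib module is installed):

    CREATE TABLE docs (id serial PRIMARY KEY, body text, vec tsvector);
    -- load some data; to_tsvector uses tsearch2's 'default' configuration
    INSERT INTO docs (body, vec)
        SELECT some_text, to_tsvector('default', some_text)
          FROM some_source_table;
    CREATE INDEX docs_vec_idx ON docs USING gist (vec);
    DELETE FROM docs WHERE id % 2 = 0;  -- delete half the rows
    VACUUM FULL VERBOSE ANALYZE docs;   -- watch the backend's memory here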
Josh Berkus <[EMAIL PROTECTED]> writes:
> Jean-Gerard,
>> When backend hits the tsearch2 index, SIZE/RES grows until it reaches
>> 1GB, where I got the error.
>>   PID UID PRI NICE  SIZE  RES STATE  WAIT   TIME    CPU COMMAND
>> 20461 503  -5    0  765M 824M sleep  biowai 4:26 33.20% postgres
Hi,
I have data that I am taking from 2 tables, pulling out specific columns and
inserting into one table.
Is it more efficient to do:
a) insert into x
     select z from y;
   insert into x
     select z from a;
b) insert into x
     select z from y
   union all
     select z from a;
I have r
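One way to compare the two forms directly (a sketch reusing the
question's table names; note that EXPLAIN ANALYZE really performs the
inserts, so try it on a scratch copy of x):

    EXPLAIN ANALYZE INSERT INTO x SELECT z FROM y;
    EXPLAIN ANALYZE INSERT INTO x SELECT z FROM a;
    -- versus the single-statement form:
    EXPLAIN ANALYZE INSERT INTO x
        SELECT z FROM y
        UNION ALL
        SELECT z FROM a;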
Jean-Gerard,
> The classic output from top (while vacuuming all the other indexes):
>   PID UID PRI NICE  SIZE  RES STATE  WAIT   TIME   CPU COMMAND
> 20461 503  14    0   13M  75M sleep  semwai 5:27 2.05% postgres
>
> When backend hits the tsearch2 index, SIZE/RES grows until it reaches
>
Greg Stark <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> I think the one effect that's not being modeled is amortization of index
>> fetches across successive queries.
> And across multiple fetches in a single query, such as with a nested loop.
Right, that's effectively
The classic output from top (while vacuuming all the other indexes):
  PID UID PRI NICE  SIZE  RES STATE  WAIT   TIME   CPU COMMAND
20461 503  14    0   13M  75M sleep  semwai 5:27 2.05% postgres
When backend hits the tsearch2 index, SIZE/RES grows until it reaches
1GB, where I got the error.
I have a table with a tsearch2 full text index on PG 7.4.2, and a
query against the index is really slow.
I tried to do a "VACUUM FULL VERBOSE ANALYZE pkpoai.metadata" and got
an error.
I monitored memory usage with top: the pg backend uses more and more
memory until it hits the limit of 1GB of RAM.
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > Postgres is also more pessimistic about the efficiency of index scans. It's
> > willing to use a sequential scan down to well below 5% selectivity when other
> > databases use the more traditional rule of thumb
Jean-Gerard,
> I have a table with a tsearch2 full text index on PG 7.4.2, and a
> query against the index is really slow.
> I tried to do a "VACUUM FULL VERBOSE ANALYZE pkpoai.metadata" and got
> an error.
> I monitored memory usage with top: the pg backend uses more and more
> memory until it hits the
Greg Stark <[EMAIL PROTECTED]> writes:
> Postgres is also more pessimistic about the efficiency of index scans. It's
> willing to use a sequential scan down to well below 5% selectivity when other
> databases use the more traditional rule of thumb of 10%.
However, other databases are probably basi
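To see where a given installation draws that line, one can force the
alternative plan and compare (a sketch; the table and predicate are
invented):

    EXPLAIN ANALYZE SELECT * FROM foo WHERE val < 500;  -- planner's pick
    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM foo WHERE val < 500;  -- forced index scan
    SET enable_seqscan = on;
    -- Comparing the two actual runtimes shows which side of the
    -- selectivity crossover the query really sits on.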
Richard Huxton <[EMAIL PROTECTED]> writes:
> Not going to do anything in this case. The planner is roughly right about how
> many rows will be returned, it's just not expecting everything to be in RAM.
That doesn't make sense, or else it would switch to the index at
random_page_cost = 1.0. If it w
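That point can be tested directly (a sketch; the values and query are
illustrative):

    SET random_page_cost = 4;  -- the default, models uncached disk
    EXPLAIN SELECT * FROM foo WHERE val < 500;
    SET random_page_cost = 1;  -- models the fully-cached case
    EXPLAIN SELECT * FROM foo WHERE val < 500;
    -- If the plan only flips to the index scan at 1.0, the planner is
    -- still charging disk prices for index page fetches.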
David Brown wrote:
You might want to reduce random_page_cost a little.
Keep in mind that your test case is small enough to fit in RAM and
is probably not reflective of what will happen with larger tables.
I am also running 8.0 rc1 for Windows. Despite many hours spent
tweaking various planner cos