Hi everyone,
I have a question about the performance of sort.
Setup: Dell Dimension 3000, Suse 10, 1GB ram, PostgreSQL 8.1 RC 1 with
PostGIS, 1 built-in 80 GB IDE drive, 1 SATA Seagate 400GB drive. The
IDE drive has the OS and the WAL files, the SATA drive the database.
From hdparm the max I
Charlie Savage wrote:
Hi everyone,
I have a question about the performance of sort.
Note it takes over 10 times longer to do the sort than the full
sequential scan.
Should I expect results like this? I realize that the computer is quite
low-end and is very IO bound for this query, but I'm
On Monday, 7 November 2005 at 18:22, PostgreSQL wrote:
> My most humble apologies to the pg development team (pg_lets?).
>
> I took Greg Stark's advice and set:
>
> shared_buffers = 1 # was 5
> work_mem = 1048576 # 1GB - was 16384
>
> Also, I noticed that the EXPLAIN ANALYZE consistently
Charlie,
> Should I expect results like this? I realize that the
> computer is quite low-end and is very IO bound for this
> query, but I'm still surprised that the sort operation takes so long.
It's the sort performance of Postgres that's your problem.
> Out of curiosity, I setup an Oracle
I have run into this type of query problem as well. I solved it in my
application with the following type of query:
SELECT tlid
FROM completechain AS o
WHERE not exists (
SELECT 1
FROM completechain
WHERE tlid=o.tlid and ogc_fid!=o.ogc_fid
);
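To see concretely what this NOT EXISTS form returns, here is a small stand-alone sketch (Python with SQLite standing in for PostgreSQL; the toy data is invented for illustration, only the table and column names come from the thread):

```python
import sqlite3

# Toy stand-in for the completechain table: (ogc_fid, tlid) pairs,
# where some tlid values repeat.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE completechain (ogc_fid INTEGER PRIMARY KEY, tlid INTEGER)")
conn.executemany(
    "INSERT INTO completechain (ogc_fid, tlid) VALUES (?, ?)",
    [(1, 100), (2, 100), (3, 200), (4, 300), (5, 300)],
)

# The NOT EXISTS form: returns tlids that appear exactly once.
no_dups = conn.execute("""
    SELECT tlid
    FROM completechain AS o
    WHERE NOT EXISTS (
        SELECT 1
        FROM completechain
        WHERE tlid = o.tlid AND ogc_fid != o.ogc_fid
    )
""").fetchall()
print(no_dups)  # [(200,)] - only tlid 200 has no duplicate row
```

Note that tlids 100 and 300, which occur twice, are filtered out entirely rather than reported once each.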
Assumes of course that you hav
On Tue, 8 Nov 2005, Luke Lonergan wrote:
> > SELECT tlid, min(ogc_fid)
> > FROM completechain
> > GROUP BY tlid
> > ORDER BY tlid;
> >
> > Even with this, it was more than an order of magnitude faster than Postgresql.
> > Which makes me think I have somehow misconfigured postgresql
> > (see the relevant par
Title: Re: [PERFORM] Sort performance on large tables
Stephan,
On 11/8/05 9:38 AM, "Stephan Szabo" <[EMAIL PROTECTED]> wrote:
>
> Just as we find with a similar comparison (with a "popular commercial,
> proprietary database" :-) Though some might suggest you increase
> work_mem or other tuning
On Tue, 8 Nov 2005, Luke Lonergan wrote:
> Stephan,
>
> On 11/8/05 9:38 AM, "Stephan Szabo" <[EMAIL PROTECTED]> wrote:
>
> > >
> > > Just as we find with a similar comparison (with a "popular commercial,
> > > proprietary database" :-) Though some might suggest you increase
> > > work_mem or
Thanks everyone for the feedback.
I tried increasing work_mem:
set work_mem to 30;
select tlid, min(ogc_fid)
from completechain
group by tlid;
The results are:
"GroupAggregate (cost=9041602.80..10003036.88 rows=48071704 width=8)
(actual time=4371749.523..5106162.256 rows=47599910 loops=1)
Charlie Savage <[EMAIL PROTECTED]> writes:
> Thus the time decreased from 8486 seconds to 5279 seconds - which is a
> nice improvement. However, that still leaves postgresql about 9 times
> slower.
BTW, what data type are you sorting, exactly? If it's a string type,
what is your LC_COLLATE set
It's an int4.
Charlie
Tom Lane wrote:
Charlie Savage <[EMAIL PROTECTED]> writes:
Thus the time decreased from 8486 seconds to 5279 seconds - which is a
nice improvement. However, that still leaves postgresql about 9 times
slower.
BTW, what data type are you sorting, exactly? If it's a stri
I'd set up a trigger to maintain summary tables perhaps...
Chris
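Chris's suggestion can be sketched as follows, again with SQLite via Python for a self-contained demo (PostgreSQL would use a PL/pgSQL trigger function plus CREATE TRIGGER instead; the summary table name is made up, the other names reuse the thread's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE completechain (ogc_fid INTEGER PRIMARY KEY, tlid INTEGER);

    -- Summary table: one row per tlid with its smallest ogc_fid.
    CREATE TABLE tlid_summary (tlid INTEGER PRIMARY KEY, min_ogc_fid INTEGER);

    -- Keep the summary current on every insert.
    CREATE TRIGGER completechain_summary
    AFTER INSERT ON completechain
    BEGIN
        INSERT OR IGNORE INTO tlid_summary (tlid, min_ogc_fid)
        VALUES (NEW.tlid, NEW.ogc_fid);
        UPDATE tlid_summary
        SET min_ogc_fid = NEW.ogc_fid
        WHERE tlid = NEW.tlid AND NEW.ogc_fid < min_ogc_fid;
    END;
""")

conn.executemany(
    "INSERT INTO completechain (ogc_fid, tlid) VALUES (?, ?)",
    [(5, 300), (1, 100), (2, 100), (3, 200), (4, 300)],
)
rows = conn.execute(
    "SELECT tlid, min_ogc_fid FROM tlid_summary ORDER BY tlid"
).fetchall()
print(rows)  # [(100, 1), (200, 3), (300, 4)]
```

The trade-off is paying a small cost on every insert instead of one huge sort/aggregate at query time: the big GROUP BY becomes a cheap scan of the summary table.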
Charlie Savage wrote:
Thanks everyone for the feedback.
I tried increasing work_mem:
set work_mem to 30;
select tlid, min(ogc_fid)
from completechain
group by tlid;
The results are:
"GroupAggregate (cost=9041602.80..1000
On Nov 7, 2005, at 3:30 PM, Ralph Mason wrote:
Hi,
I have a transaction that has multiple separate commands in it
(nothing unusual there).
However sometimes one of the sql statements will fail and so the
whole transaction fails.
In some cases I could fix the failing statement if only I
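The message is cut off here, but the standard remedy for exactly this situation (available since PostgreSQL 8.0) is a subtransaction via SAVEPOINT: a failure rolls back only the guarded statement, not the whole transaction. A minimal sketch, using SQLite's identical savepoint syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage transactions manually
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1, 'first')")

# Guard the statement that might fail with a savepoint, so a failure
# discards only this statement, not the work done so far.
try:
    conn.execute("SAVEPOINT step2")
    conn.execute("INSERT INTO t VALUES (1, 'duplicate id')")  # violates the PK
    conn.execute("RELEASE step2")
except sqlite3.IntegrityError:
    conn.execute("ROLLBACK TO step2")
    conn.execute("RELEASE step2")
    conn.execute("INSERT INTO t VALUES (2, 'fixed')")         # the repaired statement

conn.execute("COMMIT")
final = conn.execute("SELECT * FROM t ORDER BY id").fetchall()
print(final)  # [(1, 'first'), (2, 'fixed')]
```

The first insert survives the failure of the second, and the transaction commits with the corrected statement in place.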
Very interesting technique. It doesn't actually do quite what I want
since it returns all rows that do not have duplicates and not a complete
list of unique tlid values. But I could massage it to do what I want.
Anyway, the timing:
"Seq Scan on completechain t1 (cost=0.00..218139733.60 row
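One possible massage (an assumption, not spelled out in the thread): flip the inequality so that each tlid keeps only its minimum-ogc_fid row, which gives the same answer as the GROUP BY formulation. Sketched with SQLite and invented toy data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE completechain (ogc_fid INTEGER PRIMARY KEY, tlid INTEGER)")
conn.executemany(
    "INSERT INTO completechain (ogc_fid, tlid) VALUES (?, ?)",
    [(1, 100), (2, 100), (3, 200), (4, 300), (5, 300)],
)

# Keep the row with the smallest ogc_fid for every tlid: no other row
# with the same tlid may have a smaller ogc_fid.
massaged = conn.execute("""
    SELECT tlid, ogc_fid
    FROM completechain AS o
    WHERE NOT EXISTS (
        SELECT 1
        FROM completechain
        WHERE tlid = o.tlid AND ogc_fid < o.ogc_fid
    )
    ORDER BY tlid
""").fetchall()

# Same answer as the GROUP BY formulation from the thread.
grouped = conn.execute("""
    SELECT tlid, min(ogc_fid) FROM completechain GROUP BY tlid ORDER BY tlid
""").fetchall()
print(massaged == grouped)  # True
```

Whether this beats the sort-based GROUP BY plan depends on an index on (tlid, ogc_fid) being available for the correlated subquery.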