I'm trying to migrate an application from an Oracle
backend to PostgreSQL and have a performance question.
The hardware for the database is the same, a SunFire
v120, 2x73GB U2W SCSI disks, 1GB RAM, 650MHz US-IIe
CPU. Running Solaris 8.
The table in question has 541741 rows. Under Oracle,
the que
--- [EMAIL PROTECTED] wrote:
> You can roughly estimate time spent for just scanning
> the table using something like this:
>
> select sum(version) from ... where version is not
> null
>
> and just
>
> select sum(version) from ...
>
> The results would be interesting to com
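The suggested comparison can be run directly in psql; \timing is my addition (not from the original mail) and makes psql print each statement's elapsed time so the two scans can be compared:

```sql
-- Sketch of the suggested comparison.  \timing is a psql meta-command
-- that reports elapsed time per statement; the table and column names
-- (vers.version) are taken from the thread.
\timing
SELECT sum(version) FROM vers WHERE version IS NOT NULL;
SELECT sum(version) FROM vers;
```

Both statements force a full sequential scan of the table, so the pair brackets the raw scan cost without any of the sort/unique work that the DISTINCT query adds on top.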
> Try increasing sort_mem temporarily, and see if that
> makes a difference:
> SET sort_mem = 64000;
> EXPLAIN ANALYSE ...
I did this (actually 65536) and got the following:
pvcsdb=# explain analyze select distinct version from
vers where version is not null;
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> Gary Cowell <[EMAIL PROTECTED]> writes:
> > So it seems the idea that Oracle is dropping
> > duplicate rows prior to the sort when using
> > distinct may indeed be the case.
>
> Okay. We won't have any short-term solution for
> ma
--- William Carney <[EMAIL PROTECTED]> wrote:
> The test program is a C program with embedded SQL
> (ecpg). The only
> difference between the tests was the address used in
> the EXEC SQL CONNECT
> .. statement. The inserts are committed to the
> database by performing an
> EXEC SQL COMMIT after e
On Wed, 15 Dec 2004 06:38:22 -0800 (PST), sarlav kumar
<[EMAIL PROTECTED]> wrote:
> Hi All,
>
> I would like to write the output of the \d command on all tables in a
> database to an output file. There are more than 200 tables in the database.
I am aware of the \o command to write the output to a
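One way to do this (a sketch, since the original question is cut off here): combine \o with the pattern form of \d in a small psql script. The file names below are examples, not from the post:

```sql
-- describe_all.sql: send the description of every visible relation
-- to a single file.  "all_tables.out" is an example filename.
\o all_tables.out
\d *
\o
```

Run it with `psql -d yourdb -f describe_all.sql`. Note that `\d *` prints the full description of every visible table, view, and sequence, whereas a plain `\d` only lists their names.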