From: [EMAIL PROTECTED]
Sent: Tuesday, January 16, 2007 19:12
To: Jan van der Weijde; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance with very large tables
On Tue, Jan 16, 2007 at 12:06:38 -0600,
Bruno Wolff III [EMAIL PROTECTED] wrote:
On Mon, Jan 15, 2007 at 11:52:29 +0100,
Jan van der Weijde [EMAIL PROTECTED] wrote:
Does anyone have a suggestion for this problem? Is there, for instance,
an alternative to LIMIT/OFFSET so that a SELECT on large tables performs
well?
Depending on exactly what you want to happen, you may be able to continue
where you left off using a condition on the primary key, using the last
primary key value for a row that you have viewed, rather than an OFFSET.
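Bruno's suggestion is what is now usually called keyset pagination. A minimal
sketch of the idea, using Python's built-in sqlite3 module as a self-contained
stand-in (the table, column names, and page size are invented for
illustration; the SQL itself works the same against PostgreSQL):

```python
import sqlite3

# In-memory demo table; in practice this is the customer's large table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"row {i}") for i in range(1, 1001)])

def fetch_page(last_id, page_size=100):
    # Continue from the last primary key already seen, instead of OFFSET.
    # With an index on id this starts at the right row immediately, while
    # OFFSET must scan and discard every skipped row on each query.
    return conn.execute(
        "SELECT id, payload FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size)).fetchall()

last_id = 0
pages = 0
while True:
    rows = fetch_page(last_id)
    if not rows:
        break
    last_id = rows[-1][0]   # remember where we stopped
    pages += 1

print(pages, last_id)  # -> 10 1000
```

Because each query continues from the last key actually seen rather than
counting rows, concurrent inserts and deletes cannot shift the window, which
also sidesteps the concurrency problem of OFFSET-based paging.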
Hello all,
one of our customers is using PostgreSQL with tables containing millions
of records. A simple 'SELECT * FROM table' takes way too much time in
that case, so we have advised him to use the LIMIT and OFFSET clauses.
However, he now has a concurrency problem: records deleted or added
between fetches shift the OFFSET window, so rows can be skipped or
returned twice.
Richard Huxton wrote:
That won't reduce the time to fetch all the rows, though. Does he
actually need the entire table, or just the first rows quickly?
You can also opt for partitioning the tables; that way a SELECT will
only read the data from the required partition.
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
Sent: Monday, January 15, 2007 12:01
To: Jan van der Weijde
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance with very large tables
Shoaib Mir wrote:
You can also opt for partitioning the tables and this way select will only
get the data from the required partition.
Not in the case of SELECT * FROM table though. Unless you access the
specific partitioned table.
Jan van der Weijde wrote:
Thank you.
It is true he wants to have the first few records quickly and then
continue with the next records. However, without LIMIT it already takes
a very long time before the first record is returned. I reproduced this
with a table with 1.1 million records on an XP machine.
Oh yes, you need a condition on the key by which you have partitioned
the tables; only in that case will it work with partitions.
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
Cc: Alban Hertroys; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance with very large tables
If you go with Java, you can make it faster by using setFetchSize (JDBC
functionality) from the client; that will help with performance when
fetching large amounts of data.
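Outside Java, most client libraries offer the same idea: fetch the result in
bounded batches instead of materializing it all at once. A small sketch using
the DB-API cursor.fetchmany over Python's built-in sqlite3 module as a
stand-in (table and batch size invented for illustration). Note that with the
PostgreSQL JDBC driver specifically, setFetchSize only batches via a
server-side cursor when autocommit is disabled.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(10000)])

cur = conn.cursor()
cur.execute("SELECT id FROM big ORDER BY id")

batches = 0
total = 0
while True:
    # Pull a bounded batch instead of fetchall(); the client never holds
    # more than one batch of rows in memory at a time.
    batch = cur.fetchmany(500)
    if not batch:
        break
    batches += 1
    total += len(batch)

print(batches, total)  # -> 20 10000
```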
Jan van der Weijde wrote:
That is exactly the problem, I think. However, I do not deliberately
retrieve the entire table. I use the default settings of the PostgreSQL
installation and just execute a simple SELECT * FROM table.
I am using a separate client and server (both XP in the test setup).
You will want to increase the default settings and let PostgreSQL use as
much RAM as you have - especially when retrieving a large result set.
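For reference, the settings usually meant here live in postgresql.conf; the
values below are purely illustrative, not recommendations (the right numbers
depend on the machine's RAM, and older 8.x releases take shared_buffers in
pages rather than memory units):

```ini
# postgresql.conf -- illustrative values only; tune to the machine's RAM
shared_buffers = 128MB        # shared page cache; the stock default is very small
work_mem = 16MB               # memory available per sort/hash operation
effective_cache_size = 1GB    # planner hint: how much the OS will cache
```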