Re: [GENERAL] Performance with very large tables

2007-01-23 Thread Jan van der Weijde
On Tue, Jan 16, 2007 at 12:06:38 -0600, Bruno Wolff III [EMAIL PROTECTED] wrote: Depending on exactly what you want to happen …

Re: [GENERAL] Performance with very large tables

2007-01-16 Thread Bruno Wolff III
On Mon, Jan 15, 2007 at 11:52:29 +0100, Jan van der Weijde [EMAIL PROTECTED] wrote: Does anyone have a suggestion for this problem? Is there, for instance, an alternative to LIMIT/OFFSET so that a SELECT on large tables performs well? Depending on exactly what you want to happen, you …

Re: [GENERAL] Performance with very large tables

2007-01-16 Thread Bruno Wolff III
On Tue, Jan 16, 2007 at 12:06:38 -0600, Bruno Wolff III [EMAIL PROTECTED] wrote: Depending on exactly what you want to happen, you may be able to continue where you left off using a condition on the primary key, using the last primary key value for a row that you have viewed, rather than …
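Bruno's suggestion (often called keyset pagination) can be sketched in SQL roughly as follows; the table and column names are illustrative, and it assumes an index on the primary key:

```sql
-- First batch: order by the primary key and take a fixed number of rows.
SELECT * FROM big_table ORDER BY id LIMIT 100;

-- Subsequent batches: continue after the last key already seen,
-- instead of using OFFSET. With an index on id this starts returning
-- rows immediately, no matter how far into the table we are.
SELECT * FROM big_table
WHERE id > 12345   -- last id from the previous batch
ORDER BY id
LIMIT 100;
```

Unlike OFFSET, this approach is also stable under concurrent inserts and deletes: already-seen rows never shift a later page, because each page is anchored to a key value rather than to a row count.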

[GENERAL] Performance with very large tables

2007-01-15 Thread Jan van der Weijde
Hello all, one of our customers is using PostgreSQL with tables containing millions of records. A simple 'SELECT * FROM table' takes way too much time in that case, so we have advised him to use the LIMIT and OFFSET clauses. However now he has a concurrency problem: records deleted, added or …
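The advice under discussion amounts to paging through the table with queries like this (names illustrative):

```sql
-- Page 3, 100 rows per page:
SELECT * FROM big_table ORDER BY id LIMIT 100 OFFSET 200;
```

The concurrency problem follows from OFFSET counting rows at query time: records deleted or added between two page requests shift every later offset, so pages can skip or repeat rows. OFFSET also still has to walk past all of the skipped rows, so deep pages stay slow.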

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Richard Huxton
Jan van der Weijde wrote: Hello all, one of our customers is using PostgreSQL with tables containing millions of records. A simple 'SELECT * FROM table' takes way too much time in that case, so we have advised him to use the LIMIT and OFFSET clauses. That won't reduce the time to fetch …

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Shoaib Mir
You can also opt for partitioning the tables; this way a SELECT will only get the data from the required partition. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/15/07, Richard Huxton dev@archonet.com wrote: Jan van der Weijde wrote: Hello all, one of our customers is …
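At the time of this thread, partitioning meant table inheritance plus constraint_exclusion; in modern PostgreSQL (10 and later) the same idea is expressed declaratively. A minimal sketch, with illustrative table and column names:

```sql
-- Range-partitioned parent table (declarative syntax, PostgreSQL 10+).
CREATE TABLE measurements (
    id        bigint      NOT NULL,
    logged_at timestamptz NOT NULL
) PARTITION BY RANGE (logged_at);

CREATE TABLE measurements_2007_01 PARTITION OF measurements
    FOR VALUES FROM ('2007-01-01') TO ('2007-02-01');

-- A filter on the partition key lets the planner prune to one partition:
SELECT * FROM measurements
WHERE logged_at >= '2007-01-15' AND logged_at < '2007-01-16';
```

As the follow-ups in the thread point out, the pruning benefit only materializes when the query carries a condition on the partition key; a bare SELECT * FROM the parent still scans every partition.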

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Jan van der Weijde
Jan van der Weijde wrote: Hello all, one of our customers is using PostgreSQL with tables containing millions of records. A simple 'SELECT * FROM …

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Richard Huxton
Shoaib Mir wrote: You can also opt for partitioning the tables and this way a SELECT will only get the data from the required partition. Not in the case of SELECT * FROM table though, unless you access the specific partitioned table. On 1/15/07, Richard Huxton dev@archonet.com wrote: Jan …

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Richard Huxton
Jan van der Weijde wrote: Thank you. It is true he wants to have the first few records quickly and then continue with the next records. However without LIMIT it already takes a very long time before the first record is returned. I reproduced this with a table with 1.1 million records on an XP …
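One way to get the first rows back quickly without LIMIT is a server-side cursor, which streams the result in batches instead of materializing the whole result set on the client before the first row is delivered. A sketch, with an illustrative table and cursor name:

```sql
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH FORWARD 100 FROM big_cur;  -- first rows arrive almost immediately
FETCH FORWARD 100 FROM big_cur;  -- next batch, continuing the same scan
CLOSE big_cur;
COMMIT;
```

This is essentially what the setFetchSize advice later in the thread arranges behind the scenes: the JDBC driver fetches the result through a server-side portal a batch at a time.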

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Shoaib Mir
Oh yes, you need a condition on the column by which the tables are partitioned; only in that case will it work with partitions. --- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/15/07, Richard Huxton dev@archonet.com wrote: Shoaib Mir wrote: You can also opt for partitioning …

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Alban Hertroys
Jan van der Weijde wrote: Thank you. It is true he wants to have the first few records quickly and then continue with the next records. However without LIMIT it already takes a very long time before the first record is returned. I reproduced this with a table with 1.1 million records on an XP …

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Jan van der Weijde
Jan van der Weijde wrote: Thank you. It is true he wants to have the first few records quickly and then continue with the next records. However without LIMIT it already takes a very long …

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Shoaib Mir
Jan van der Weijde wrote: Thank you. It is true he wants to have the first few records quickly and then continue with the next records. However without LIMIT it already takes a very …

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Jan van der Weijde
If you go with Java, you can make it faster by using setFetchSize (JDBC functionality) from the client, and that will help with performance when fetching large amounts of data …

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Richard Huxton
Jan van der Weijde wrote: That is exactly the problem I think. However I do not deliberately retrieve the entire table. I use the default settings of the PostgreSQL installation and just execute a simple SELECT * FROM table. I am using a separate client and server (both XP in the test …

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Gregory S. Williamson
Jan van der Weijde wrote: Thank you. It is true he wants to have the first few records quickly and then continue with the next records. However without LIMIT …

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Shoaib Mir
If you go with Java, you can make it faster by using setFetchSize (JDBC functionality) from the client, and that will help with performance when fetching …

Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Shane Ambler
Jan van der Weijde wrote: That is exactly the problem I think. However I do not deliberately retrieve the entire table. I use the default settings of the PostgreSQL … You will want to increase the default settings and let PostgreSQL use as much RAM as you have, especially when retrieving a …
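Shane's advice about raising the defaults typically targets the memory settings in postgresql.conf. The values below are illustrative starting points only; good settings depend on available RAM and workload, and ALTER SYSTEM exists only since PostgreSQL 9.4 (at the time of this thread one would edit postgresql.conf directly):

```sql
ALTER SYSTEM SET shared_buffers = '2GB';        -- shared page cache; change needs a restart
ALTER SYSTEM SET work_mem = '64MB';             -- memory per sort/hash operation, not per query
ALTER SYSTEM SET effective_cache_size = '6GB';  -- planner hint about OS cache, not an allocation
SELECT pg_reload_conf();                        -- applies the reloadable settings
```

Note that none of these change how much data a bare SELECT * FROM table transfers to the client; they mainly speed up sorting, caching, and planning on the server side.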