Christopher Kings-Lynne wrote:

Probably by carefully partitioning their data.  I can't imagine anything
being fast on a single table in the 250,000,000-tuple range.  Nor can I
really imagine any database that efficiently splits a single table
across multiple machines (or even inefficiently, unless some internal
partitioning is being done).

Chris


Ah, what about partial indexes? Those might help, as a kind of 'semi-partition'.
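For example, something along these lines (the table and column names here
are made up, just to illustrate the idea):

  -- One partial index per range of rows; each index only covers the
  -- rows matching its WHERE clause, so each one stays small.
  CREATE INDEX orders_2004_idx ON orders (created)
      WHERE created >= '2004-01-01' AND created < '2005-01-01';

A query whose WHERE clause implies the index predicate can use the small
index instead of one huge index over all 250,000,000 rows.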

He could also use schemas to partition the information within the same database.
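Something like this, say (the schema and table names are hypothetical):

  -- One schema per partition, each holding its own copy of the table.
  CREATE SCHEMA us_east;
  CREATE SCHEMA us_west;
  CREATE TABLE us_east.customers (id integer PRIMARY KEY, name text);
  CREATE TABLE us_west.customers (id integer PRIMARY KEY, name text);

  -- Queries then go to whichever partition is relevant:
  SELECT name FROM us_east.customers WHERE id = 42;

Each table stays a manageable size, and setting search_path per connection
lets the partition choice stay out of the application's queries.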


J





--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
PostgreSQL support, programming, shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL

