On Thu, 21 Oct 2004, Joshua Marsh wrote:

> Recently, we have found customers who are wanting to use our service
> with data files between 100 million and 300 million records. At that
> size, each of the three major tables will hold between 150 million and
> 700 million records. At this size, I can't expect it to run queries
> in 10-15 seconds (what we can do with 10 million records), but would
> prefer to keep them all under a minute.
To provide any useful information, we'd need to see your table schemas and some sample queries. The values of sort_mem and shared_buffers would also be useful. Are you VACUUMing and ANALYZEing the tables? (Or is the data read-only?)

gavin
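P.S. For concreteness, something along these lines -- 7.4-era syntax, and
the table name and numbers below are placeholders, not tuning advice:

    -- check the current settings
    SHOW shared_buffers;
    SHOW sort_mem;

    -- refresh planner statistics and reclaim dead tuples
    -- (plain ANALYZE is enough if the data really is read-only)
    VACUUM ANALYZE big_table;

In postgresql.conf the equivalents look like:

    shared_buffers = 10000    # counted in 8 kB pages, so roughly 80 MB
    sort_mem = 8192           # kB available to each sort step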