scott.marlowe [EMAIL PROTECTED] writes:
3) Estimated number of transactions to be written into the PostgreSQL db is
around 15000 records per day.
The growth rate in terms of number of connections is around 10% per year,
and data is retained for at least 18 months on average.
Thanks to Greg Stark, Tom Lane and Stephan Szabo for their advice on
rewriting my query... the revised query plan claims it should only take
about half the time my original query did.
Now for a somewhat different question: how might I improve my DB
performance by adjusting the various settings in postgresql.conf? Currently I have:
shared_buffers = 128        # min max_connections*2 or 16, 8KB each
sort_mem = 65535            # min 64, size in KB
I'd pull this in. You only have 640MB of RAM, which means about eight large
sorts running at once would be enough to push the machine into swap.
How about 16000?
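If the big sorts mostly come from a single reporting session, another option
along the same lines is a modest global value plus a per-session override.
A rough sketch (the query, table, and column names below are invented, not
from this thread):

    -- keep a conservative default in postgresql.conf, for example:
    --   sort_mem = 16000              # KB per sort; several sorts can run at once

    -- then raise it only in the session that runs the big reporting query
    SET sort_mem = 65536;              -- KB, lasts for this session only
    SELECT referrer, count(*) AS hits
    FROM   clickstream_hits
    GROUP  BY referrer
    ORDER  BY hits DESC;
    RESET sort_mem;                    -- back to the postgresql.conf default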
fsync = false
I presume you understand the risks involved with turning fsync off.
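If losing the most recent transactions in a crash isn't acceptable, one
commonly suggested alternative to fsync = false is batching the load into
fewer transactions, since it is each COMMIT that forces the flush. A rough
sketch, with a made-up example table:

    CREATE TABLE hits (url text, ts timestamp);   -- hypothetical example table

    BEGIN;
    INSERT INTO hits (url, ts) VALUES ('/index.html', now());
    INSERT INTO hits (url, ts) VALUES ('/search', now());
    -- ... the rest of the day's batch ...
    COMMIT;    -- a single commit (and WAL flush) covers the whole batch, not one per row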
We are building a data warehouse composed of essentially click stream
data. The DB is growing fairly quickly, as is to be expected, and is currently at
90GB for one month's data. The idea is to keep six months of detailed data
online and then start aggregating older data into summary tables. We have
Second and more radical: has anyone run PostgreSQL on the new Apple
G5 with an XRaid system? This seems like a great value combination:
fast CPU, wide bus, Fibre Channel I/O, 2.5TB, all for ~$17k.
I keep seeing references to terabyte PostgreSQL installations, and I was
wondering if anyone on
Just wondering if anyone has done any testing on the amount of insert
overhead you might incur by adding a serial column to a table. I'm
thinking of adding a few to some tables that get an average of 30-40
inserts per second, sometimes bursting over 100 inserts per second, and
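For what it's worth, a serial column is just an integer with a
sequence-backed default, so the per-insert cost is essentially one nextval()
call plus four bytes of row width. A minimal sketch of the two equivalent
spellings (table names here are made up):

    -- shorthand form
    CREATE TABLE events (
        id      serial,     -- i.e. integer NOT NULL DEFAULT nextval('events_id_seq')
        payload text
    );

    -- roughly equivalent form spelled out by hand
    CREATE SEQUENCE events2_id_seq;
    CREATE TABLE events2 (
        id      integer NOT NULL DEFAULT nextval('events2_id_seq'),
        payload text
    );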
We are obviously running into issues with I/O saturation. Since this
thing is only going to get bigger, we are looking for some advice on
how to accommodate DBs of this size.
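As a concrete sketch of the "aggregate older data to summary tables" plan
mentioned above (all table and column names are invented here, since the
actual schema isn't shown in the thread), one month's detail might be rolled
up and then removed like this:

    -- hypothetical detail table standing in for the real click-stream schema
    CREATE TABLE page_hits (url text, ts timestamp);

    -- summary table holding one row per month/URL
    CREATE TABLE page_hits_monthly (month timestamp, url text, hits bigint);

    -- roll one month of detail into the summary table
    INSERT INTO page_hits_monthly (month, url, hits)
    SELECT date_trunc('month', ts), url, count(*)
    FROM   page_hits
    WHERE  ts >= '2003-06-01' AND ts < '2003-07-01'
    GROUP  BY date_trunc('month', ts), url;

    -- then drop the detail rows and reclaim the space
    DELETE FROM page_hits
    WHERE  ts >= '2003-06-01' AND ts < '2003-07-01';
    VACUUM ANALYZE page_hits;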
Tom Lane wrote:
Greg Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
Define "no longer works well".
Well it seems to completely bar the use of a straight merge join between two
Hmmm ... [squints] ... it's not supposed to do that ... [digs] ... yeah,
Sean Shanny wrote:
We are currently running on a Dell 2650 with two Xeon 2.8GHz processors in
hyper-threading mode, 4GB of RAM, and 5 SCSI drives in a RAID 0 configuration
on an Adaptec PERC3/Di controller. I believe they are 10k RPM drives. The file
system is ext3. We are running RH9, Linux kernel 2.4.20-20.9SMP, with
On Tue, 2003-12-02 at 15:37, Vivek Khera wrote:
Now I'm trying to implement pg_autovacuum. It seems to work OK, but
after about an hour or so it does nothing. The process is still
running, but nothing is sent to the log file.
I'm running the daemon as distributed with the PG 7.4 release as
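One thing worth confirming when the daemon goes quiet: pg_autovacuum decides
when to vacuum from the row-level counters the statistics collector gathers,
so those have to be enabled and advancing. A quick check from psql (a sketch,
assuming the stock 7.4 stats views and settings):

    -- both of these must be 'on' for pg_autovacuum to see any activity
    SHOW stats_start_collector;
    SHOW stats_row_level;

    -- the per-table counters it watches should keep advancing over time
    SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
    FROM   pg_stat_all_tables
    ORDER  BY n_tup_upd + n_tup_del DESC
    LIMIT  10;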