On Tue, 03 Feb 2004 11:42:59 -0500
"Kevin Carpenter" <[EMAIL PROTECTED]> wrote:

> For starters, I am moving roughly 50 separate databases, each of which
> represents one of our clients and is roughly 500 megs to 3 gigs in
> size.  Currently we are using MySQL replication, so I am looking at
> Mammoth's replicator for this one.  However, I have seen it only allows
> one DB to be replicated at a time.  With the size of each single db, I

I don't know much about Mammoth's replicator, but based on how the
others work, you should be able to run a separate replicator instance
for each db.  (Or hack up a shell script to run the replicator for each
db; either way, each db will be replicated independently of the others.)
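
A rough sketch of that shell-script hack (untested; "replicate" and the
database names are placeholders for whatever Mammoth's command and your
client db list actually are):

    #!/bin/sh
    # Launch one replicator process per client database, in the background.
    for db in client01 client02 client03; do
        replicate --database "$db" &
    done
    wait    # keep the script alive while the replicators run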

> don't know how I could put them all together under one roof, and if I
> were going to, what are the maximums that Postgres can handle for
> tables in one db?  We track over 2 million new points of data
> (records) a day, and are moving to 5 million in the next year.
> 

From the docs:

Maximum size for a database            unlimited (4 TB databases exist)
Maximum size for a table               16 TB on all operating systems
Maximum size for a row                 1.6 TB
Maximum size for a field               1 GB
Maximum number of rows in a table      unlimited
Maximum number of columns in a table   250 - 1600 depending on column types
Maximum number of indexes on a table   unlimited
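
If you want to see how close a given database is to those limits, a
quick sketch (this assumes the size functions shipped with recent
releases -- pg_database_size(), pg_relation_size(), pg_size_pretty();
older installs need contrib/dbsize instead):

    # Total size of the current database, and its ten largest tables.
    psql -d clientdb -c "SELECT pg_size_pretty(pg_database_size(current_database()));"
    psql -d clientdb -c "SELECT relname, pg_size_pretty(pg_relation_size(oid))
                           FROM pg_class WHERE relkind = 'r'
                          ORDER BY pg_relation_size(oid) DESC LIMIT 10;"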
 
...

My largest PG db is 50GB. 

My busiest PG db sustains about 50 updates/deletes/inserts per second
throughout the day, bursting up to 150 now and then, plus about 40
selects per second.  The machine it runs on is typically 95% idle (quad
2GHz Xeon).  For comparison, your 2 million new records a day averages
out to only about 23 inserts per second, and 5 million is still under
60 per second.
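
If you want to measure that kind of write rate on your own boxes, one
hedged approach: sample pg_stat_database twice and divide the delta by
the interval.  (The tup_inserted/tup_updated/tup_deleted columns are
only there in recent releases with the stats collector enabled.)

    q="SELECT tup_inserted + tup_updated + tup_deleted
         FROM pg_stat_database WHERE datname = current_database()"
    a=$(psql -At -d clientdb -c "$q"); sleep 60
    b=$(psql -At -d clientdb -c "$q")
    echo "writes/sec: $(( (b - a) / 60 ))"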

-- 
Jeff Trout <[EMAIL PROTECTED]>
http://www.jefftrout.com/
http://www.stuarthamm.net/

