On Tue, 3 Feb 2004, Kevin Carpenter wrote:

> Hello everyone,
> 
> I am doing a massive database conversion from MySQL to Postgresql for a
> company I am working for.  This has a few quirks that I haven't been able
> to nail down answers for from reading and searching through previous list
> info.
> 
> For starters, I am moving roughly 50 separate databases, each of which
> represents one of our clients and is roughly 500 megs to 3 gigs in size.
>  Currently we are using MySQL replication, so I am looking at Mammoth's
> replicator for this one.  However, I have seen that it only allows one
> DB to be replicated at a time.

Look into importing all those separate databases into separate schemas in
one postgresql database.
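
Something like this, just to show the idea (the schema and table names here
are made up, not anything from your setup):

  -- one schema per client, all inside a single database
  CREATE SCHEMA client_acme;
  CREATE SCHEMA client_foo;

  -- each client's tables live in that client's schema
  CREATE TABLE client_acme.data_points (
      id          serial PRIMARY KEY,
      recorded_at timestamp NOT NULL,
      value       numeric
  );

  -- point a session at one client's data
  SET search_path TO client_acme;

Your application then just sets search_path per connection instead of
picking a different database.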

> With the size of each individual db, I don't
> know how I could put them all together under one roof,

There's no functional difference to postgresql if you have 1 huge database 
or 50 smaller ones that add up to the same size.

> and if I were
> going to, what are the maximums that Postgres can handle for tables in
> one db?

There is no limit on the number of tables.  Also see:

http://www.postgresql.org/docs/faqs/FAQ.html#4.5

> We track over 2 million new points of data (records) a day, and
> are moving to 5 million in the next year.

That's quite a bit.  Postgresql can handle it.

> Second what about the physical database size, what are the limits there?

None; database size is effectively unlimited.

>  I have seen that it was 4 gig on Linux in a message from 2000, but what
> about now?  Have we found ways past that?

It has never been 4 gig.  It was once, a long time ago, 2 gig for a table 
I believe.  That was fixed years ago.

> Thanks in advance; I will give more detail as needed - just looking for
> some general directions and maybe some kicks to fuel my thinking in other
> areas.

Import in bulk, either using COPY or by wrapping a few thousand INSERTs
inside BEGIN;/END; pairs.
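
Something along these lines (the table and file path are just placeholders
for illustration, not anything real):

  -- fastest: COPY from a flat file (tab-delimited by default)
  COPY client_acme.data_points (recorded_at, value)
      FROM '/tmp/acme_datapoints.dat';

  -- or batch plain INSERTs inside one transaction
  BEGIN;
  INSERT INTO client_acme.data_points (recorded_at, value)
      VALUES ('2004-02-03 10:00:00', 1.5);
  INSERT INTO client_acme.data_points (recorded_at, value)
      VALUES ('2004-02-03 10:00:01', 2.7);
  -- ... a few thousand more ...
  END;

Batching like that means you pay for one commit per batch instead of one
per row, which is where most of the time goes otherwise.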

