I'm quickly getting there at 150GB. I don't trust MySQL to not corrupt itself when a table gets more than 1-3M rows in it. I have customers

I manage a production db (non-dbmail) of 30 million rows (largest table has 19 mil rows) spanning 21 gigs. Extensive use of compressed storage vs. raw (about a 50% ratio overall) keeps it relatively small. This is a high-transaction database with significant load 24/7, not a low-volume, huge research-type db that isn't designed to fly when interfaced with the world. Any db vendor nowadays can support TB sizes, but what it comes down to is a blend of size, speed, and management.

Only had one time where a MySQL table got corrupted due to table size. It was almost 2 years ago when I hit the maximum table size of a MyISAM table and MySQL barfed. MySQL has since fixed this bug and disallows the transaction if the limit is reached. You need to modify AVG_ROW_LENGTH (together with MAX_ROWS) to raise the default table size limit. "SHOW TABLE STATUS" should let you know when the boundary is getting close to being crossed.
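Roughly what that looks like (table name is a placeholder and the numbers are only illustrative, so size them to your own data):

    -- Max_data_length in the output shows how big the table is allowed to grow
    SHOW TABLE STATUS LIKE 'bigtable';

    -- raise the ceiling by letting MyISAM use bigger internal row pointers
    ALTER TABLE bigtable MAX_ROWS=1000000000 AVG_ROW_LENGTH=512;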

If you want to prevent table locks in MySQL, InnoDB is available. The problem that some do not point out is that a transaction-safe table requires 50%+ more disk space (the same applies if you move a MyISAM table to PgSQL), which means scan operations will be slower. For me, I move the high-transaction, low-storage tables to InnoDB and the large, low-transaction tables to the slim MyISAM format.
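Switching a table between engines is a one-liner either way; something like this (table names are just examples):

    -- hot, small table gets row-level locking
    ALTER TABLE sessions ENGINE=InnoDB;    -- TYPE=InnoDB on older MySQL 4.0 servers

    -- big, mostly-read table stays on the leaner MyISAM format
    ALTER TABLE message_archive ENGINE=MyISAM;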

Don't dog MySQL, 'cause if you google PgSQL it has just as many problems. =)


who use MySQL and the damn thing corrupts itself when tables grow past a few million rows. MySQL is just not a stable database when the number of records grows. It also has problems with lock contention. PostgreSQL is much better in both regards. I have PostgreSQL installations with easily over 1B rows using 500GB of space without breaking a sweat. The biggest Pg database I know of is several terabytes in size, a la the Human Genome Project. :)

I'd love to talk and share ideas for managing large installs,


Accounts are added, deleted, etc. via dbmail-user and created via triggers. This is going to disappear at some point in favor of updating the database natively, but I haven't done that yet.

optimal db configurations,


FreeBSD 5.X (post-5.3 is best, IMHO, i.e. RELENG_5), PostgreSQL 8.0, AMD64, Perdition
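For what it's worth, the stock postgresql.conf is far too conservative for a box like that. These are the sort of knobs worth looking at first on 8.0 (it takes raw numbers rather than MB values; the figures below are only placeholders, size them to your RAM):

    shared_buffers = 20000          # 8KB pages, roughly 160MB
    work_mem = 16384                # KB, per sort/hash
    maintenance_work_mem = 131072   # KB, speeds up VACUUM and index builds
    effective_cache_size = 100000   # 8KB pages, roughly the size of the OS cache
    checkpoint_segments = 16        # fewer, larger checkpoints under write load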

replication


slony

/backups,


pg_dump
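The custom-format dump is the handy one, since it compresses and lets you restore selectively with pg_restore. Something along these lines (database name and paths are just examples):

    # nightly compressed dump
    pg_dump -Fc dbmail > /backups/dbmail-$(date +%Y%m%d).dump

    # restore into an existing (empty) database
    pg_restore -d dbmail /backups/dbmail-20050101.dump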

database maintenance,


pg_autovacuum
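(On 8.0 that is still the contrib daemon rather than something built in. If you'd rather not run it, or want a belt-and-braces pass on top of it, a cron'd vacuumdb does the job; the schedule and options below are just an example:)

    # weekly vacuum + analyze of every database, Sunday 3am
    0 3 * * 0    vacuumdb --all --analyze --quiet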

etc.
Drop me a private email (mark a-t orcon.net.nz) or post to the list.


Eh, I figure someone else may find it useful.
