On Thu, 2006-04-06 at 12:07 -0600, Jason Jones wrote:
> I've personally never handled any OSS db with more than a couple hundred
> thousand rows TOTAL, (but have around 3 years exp. handling many various
> smaller dbs) and am kind of twitchy about what's going to happen with our db
> as it grows exponentially to hundreds of millions of rows.
FWIW, I have a MySQL 3.23 database with a few million rows. It keeps the
accounting logs for our RADIUS server. We prune it every 6 months or so, just
to keep disk usage to a minimum and because we don't need the data online
after that long. The thing runs like a champ. We're using MySQL replication
to mirror the whole database over to a second server, and that works like a
charm too.

> Hardware is not an issue. Disk space is not an issue. The only issue is
> whether MySQL (or PostgreSQL) can be properly configured to handle hundreds
> of millions of rows per table without hacking it into some slashdot-esque
> frankenstein configuration.

It seems to me that the real issue is proper schema and index design.
Extensive use of the EXPLAIN command will help there. A good DBA really
needs to know his databases inside and out.

Note also that there are a number of memory options in my.cnf that will
affect your overall performance. A database tuned to be small will run like
crap with tons of data in it, and vice versa.

Corey
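To make the EXPLAIN point concrete, here's a minimal sketch. The table name,
columns, and index are hypothetical, just something shaped like a RADIUS
accounting log; the idea is to compare EXPLAIN's output with and without a
useful index:

```sql
-- Hypothetical accounting table, purely for illustration.
CREATE TABLE radius_acct (
    id        INT AUTO_INCREMENT PRIMARY KEY,
    username  VARCHAR(64) NOT NULL,
    start_ts  DATETIME    NOT NULL,
    octets_in BIGINT,
    INDEX idx_user_ts (username, start_ts)
);

-- With the compound index in place, EXPLAIN should report the
-- idx_user_ts key and a ref/range access type. Drop the index and
-- the same query shows type=ALL, i.e. a full table scan -- which is
-- exactly what kills you at hundreds of millions of rows.
EXPLAIN SELECT *
FROM radius_acct
WHERE username = 'jdoe'
  AND start_ts >= '2006-01-01';
```

Run EXPLAIN on every query your app issues and make sure none of them fall
back to a full scan; that, plus sizing the my.cnf memory settings to your
actual data, gets you most of the way there.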
/* PLUG: http://plug.org, #utah on irc.freenode.net Unsubscribe: http://plug.org/mailman/options/plug Don't fear the penguin. */
