Good morning everybody!

We are working on a big project involving SugarCRM, and in the end it will contain around 1,000,000 records per table in a MySQL database. The whole system runs on a dedicated server with two Xeons @ 3 GHz, 2 GB RAM, and two 150 GB disks @ 10,000 rpm behind a RAID 1 controller. The OS is CentOS 4.2.

We've been testing the system with 1,000,000 test records to see how it performs when doing selects from the database, and we are pretty disappointed with the results. Some selects need around 10 seconds to finish; some take even longer. We haven't done any MySQL optimization, so my question is: how do we optimize MySQL for such a large amount of data? If it would be more convenient, we could even provide SSH access to our server.
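A first step before tuning server variables is usually to check whether the slow selects are doing full table scans. A minimal sketch with EXPLAIN (the table and column names here are hypothetical, just for illustration, not taken from your schema):

```sql
-- Inspect the execution plan of a slow select.
EXPLAIN SELECT * FROM accounts WHERE name = 'Acme';

-- If the "type" column shows ALL, MySQL is scanning the whole table.
-- An index on the filtered column may turn that into a ref/range lookup:
CREATE INDEX idx_accounts_name ON accounts (name);
```

On a million-row table, the difference between a full scan and an indexed lookup is often the difference between ten seconds and a few milliseconds, so it is worth running EXPLAIN on each slow query before touching my.cnf.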

I would like to send you the queries in question, but they are generated internally by SugarCRM. Is there any tool that logs every query that reaches MySQL, so I can examine them?
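In case it helps others on the list: MySQL itself can do this via the general query log and the slow query log. A minimal my.cnf sketch, using the option names from the MySQL 4.x era that ships with CentOS 4 (newer servers spell these general_log / slow_query_log); the file paths are assumptions, adjust them to your layout:

```ini
# /etc/my.cnf (assumed location)
[mysqld]
# General query log: records every statement the server receives.
# Heavy on disk I/O, so enable it only while capturing queries.
log = /var/log/mysql/general-query.log

# Slow query log: records only statements slower than long_query_time seconds.
log-slow-queries = /var/log/mysql/slow-query.log
long_query_time = 2
```

After restarting mysqld, the slow log can be summarized with the bundled mysqldumpslow script to find the worst offenders.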

Best regards,
Marko

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]