Hi mp333,

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Monday, 3 March 2003 14:38
> To: [EMAIL PROTECTED]
> Subject: SAPDB Batch Insert benchmark
>
> Hello to the SAPDB community,
>
> I'd love to build a 300 million row table, or, short of that, the
> biggest possible on a desktop machine.
>
> Right now I am testing the three available open source SQL engines:
> MySQL, Firebird, and SAPDB, in order to figure out which one handles
> large databases the fastest in a desktop configuration.
>
> I am logging the time it takes to insert a 48 byte row plus three
> indices, one of which includes a float column filled with semi-random
> data.
>
> I am using:
>
> - an 800 MHz machine with 300+ MB RAM and a Barracuda IDE drive
> - Win2K SP3
> - Kernel 7.4.3, Build 010-120-035-462
> - ODBC
> - a local connection
> - Database instance: SAP DB OLTP
> - single log mode set to off
> - a 1 GB data volume
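A minimal sketch of a benchmark like the one described above. The table layout is a guess at "a 48 byte row plus three indices, one of which includes a float column filled with semi-random data" (column names, types, and row counts are made up for illustration), and Python's built-in sqlite3 module stands in for the ODBC connection to SAP DB:

```python
import random
import sqlite3
import time

# Hypothetical reconstruction of the benchmark schema: a roughly
# 48-byte row with three indexed columns, one of them a float
# filled with semi-random data.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE bench (
    id  INTEGER,    -- 8 bytes
    tag CHAR(36),   -- fixed-width payload, ~36 bytes
    val FLOAT       -- semi-random float, ~4-8 bytes
)""")
cur.execute("CREATE INDEX idx_id  ON bench (id)")
cur.execute("CREATE INDEX idx_tag ON bench (tag)")
cur.execute("CREATE INDEX idx_val ON bench (val)")

N = 5_000  # far below 300 million, just enough to measure
start = time.perf_counter()
for i in range(N):
    cur.execute("INSERT INTO bench VALUES (?, ?, ?)",
                (i, f"{i:036d}", random.random()))
conn.commit()
elapsed = time.perf_counter() - start
print(f"{N / elapsed:.0f} rows/s")  # rough insert rate, varies by machine
```

Scaling the row count up and swapping the connection for a real ODBC one would reproduce the measurement on each of the three engines.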
Only a few thoughts:

- Is autocommit on, or do you commit yourself? If the latter, after how
  many rows?
- A local connection can use shared memory or the loopback device. If
  you specify "localhost" as the host, shared memory is used; if you
  specify <myhostname>, the loopback device is used. The latter will be
  slower, of course.
- Do you insert via parameter binding or with SQL statement literals?
- Do you use prepared statements? (MySQL might have problems here.)

> I am looking for raw INSERT speed. Data security and integrity are
> secondary factors.

In this case MySQL is probably the right stuff for you. However, you may
yet change your mind about security and integrity -- something databases
are built for ...

Regards
Thomas

----------------------------------------------
Dr. Thomas Kötter
SAP DB, SAP Labs Berlin
SAP DB is open source. Get it! www.sapdb.org

_______________________________________________
sapdb.general mailing list
[EMAIL PROTECTED]
http://listserv.sap.com/mailman/listinfo/sapdb.general
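The first, third, and fourth questions above point at the same pattern: reuse one statement with bound parameters instead of building SQL literals per row, and commit in batches rather than per row (or once at the very end). A sketch under those assumptions, again using Python's sqlite3 as a stand-in for the ODBC connection (the table name and batch size are invented for illustration):

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE bench (id INTEGER, payload TEXT, val REAL)")
cur.execute("CREATE INDEX idx_val ON bench (val)")

BATCH = 1_000  # commit interval; tune to trade log size vs. redo cost
for i in range(10_000):
    # Parameter binding: the statement text never changes, so the
    # engine can keep one prepared plan instead of re-parsing a new
    # literal statement for every row.
    cur.execute("INSERT INTO bench VALUES (?, ?, ?)",
                (i, "x" * 40, random.random()))
    if (i + 1) % BATCH == 0:
        conn.commit()  # commit every BATCH rows, not once per row
conn.commit()          # flush the final partial batch

count = cur.execute("SELECT COUNT(*) FROM bench").fetchone()[0]
print(count)  # -> 10000
```

With autocommit on, every INSERT pays a log flush; committing every BATCH rows amortizes that cost, which is usually the single biggest lever in a raw-insert benchmark like this one.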
