Another thing to try is to turn off auto-commit and commit the transaction at the end. With this many records, though, a single transaction may fill the transaction log, so the inserts may need to be committed in smaller chunks. Our application (with far fewer inserts) improved greatly once we turned off auto-commit and committed the inserts at the end. With ODBC you can also do "bulk" inserts, which helps even more.
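The commit-in-chunks pattern can be sketched in Python; here sqlite3 (stdlib) stands in for an ODBC connection to MaxDB, and the table name, columns, and chunk size are illustrative, not from the thread:

```python
import sqlite3

# Illustrative table and data; sqlite3 stands in for an ODBC connection.
# Python's sqlite3 module runs DML inside an implicit transaction, i.e.
# auto-commit is already off -- nothing is written until commit().
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tbBenchmarkData (district_id INTEGER, score REAL)")

rows = [(i % 10, i * 0.5) for i in range(10_000)]

CHUNK = 2_000  # commit every CHUNK rows to keep the transaction log small
for start in range(0, len(rows), CHUNK):
    cur.executemany(
        "INSERT INTO tbBenchmarkData VALUES (?, ?)",
        rows[start:start + CHUNK],
    )
    conn.commit()  # one commit per chunk instead of one per row

cur.execute("SELECT COUNT(*) FROM tbBenchmarkData")
print(cur.fetchone()[0])  # 10000
```

One commit per chunk amortizes the per-transaction overhead over thousands of rows while bounding how much the log can grow between commits.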

Mark

John L. Singleton wrote:

Hi Arsen,

For any sort of huge import operation like that, it would be a good idea to turn off your logs temporarily. You can do this with a series of commands like this:

dbmcli -d <db> -u <dbmuser,dbmpass>

> db_admin
> util_connect
> util_execute SET LOG WRITER OFF
> util_release

Obviously, you can then turn your logs back on with:

> db_admin
> util_connect
> util_execute SET LOG WRITER ON
> util_release

Try it with the logs off and see if you have any improvement.

Cheers,
JLS
Arsen Pereymer wrote:

Hello,


Currently, we have a large Microsoft SQL 2000 database containing over 100
tables and more than 100 million rows. The database houses a data warehouse,
and the ETL that populates it currently takes about 20 hours to run.


We have been doing research and found that MaxDB could greatly increase
performance, so we started a benchmark test. We moved 5 tables to MaxDB,
each containing approximately 800k rows, and rewrote the SQL script below
for MaxDB.
INSERT INTO "DBA"."tbBenchmarkData"
SELECT D."DistrictID",
       0,
       0,
       BM."gradeID",
       BM."subjectID",
       SD."StudentID",
       T."BMSEQNUM",
       SD."ObjectiveID",
       1 AS ObjectiveTypeID,
       AVG(SD."GotRight") * 100,
       COUNT(SD."QW_ID")
FROM "DBA"."tbTests " T,
     "DBA"."tbStudentData" SD,
     "DBA"."tbUserOrg" UO,
     "DBA"."tbDistricts" D,
     "DBA"."tbBMTestinfo" BM
WHERE SD."TaskID" = T."TaskID"
  AND UO."UserID" = SD."StudentID"
  AND D."DistrictID" = UO."OrgID"
  AND BM."SEQNUM" = T."BMSEQNUM"
  AND T."BMSEQNUM" > 0
GROUP BY D."DistrictID", SD."StudentID", BM."subjectID", BM."gradeID", T."BMSEQNUM", SD."ObjectiveID";



When running the above script on Microsoft SQL Server, it takes approximately
5 minutes to insert 1.9 million rows. Running the same script on MaxDB
takes over an hour to insert the same number of rows.


Why is the MaxDB performance so slow?

Thank you very much in advance for any help!

