Hi all,

I've now worked out all the issues with sql-bench - the ANSI SQL issues in particular.

For the connect test issue, I've reduced the connects from 100k to 20k, which will get us by until we sort out the problem Eric and I have been looking at. I'll be investigating what MySQL does differently to get around the limitation.
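For anyone unfamiliar with that test, the pattern is just a tight connect/disconnect loop that gets timed. A minimal sketch of that pattern is below; sqlite3 stands in for a real Drizzle/MySQL driver, so only the loop structure carries over (with a network driver, this is where ephemeral-port and backlog limits would bite):

```python
# Minimal sketch of the connect-test pattern sql-bench exercises:
# open and close N connections in a loop and time it.
# sqlite3 is a stand-in for a networked driver here.
import sqlite3
import time

def run_connect_test(n_connects, dsn=":memory:"):
    """Time n_connects sequential connect/disconnect cycles."""
    start = time.time()
    for _ in range(n_connects):
        conn = sqlite3.connect(dsn)  # a network driver would hit OS
        conn.close()                 # connection limits at high counts
    return time.time() - start

elapsed = run_connect_test(20000)  # the reduced count from above
print(f"20000 connects in {elapsed:.2f}s")
```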

As for the tests themselves, two are failing:

----------------

testing server 'Drizzle 2009.06.1058' at 2009-07-11 10:26:07

Testing the speed of inserting data into 1 table and do some selects on it.
The tests are done with a table that has 100000 rows.

Generating random keys
Creating tables
Inserting 100000 rows in order
Inserting 100000 rows in reverse order
Inserting 100000 rows in random order
Time for insert (300000): 201 wallclock secs ( 4.52 usr 3.85 sys + 0.00 cusr 0.00 csys = 8.37 CPU)

Testing insert of duplicates
Didn't get an error when inserting duplicate record 10429

------------------

This might be either an error value not being set in the driver, or something in the way the data is being inserted. I'm looking into it.
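For reference, what the duplicate-insert check expects is that the second insert of an existing key comes back as an error from the driver. A minimal stand-in using sqlite3 (not the Drizzle driver - just the contract the test relies on):

```python
# The duplicate-insert check expects the second insert of the same key
# to surface as a driver error. sqlite3 is a stand-in; the open question
# in this email is whether the Drizzle driver sets that error.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bench (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO bench VALUES (10429, 'first')")

try:
    conn.execute("INSERT INTO bench VALUES (10429, 'duplicate')")
    duplicate_detected = False  # the failure mode sql-bench reported
except sqlite3.IntegrityError:
    duplicate_detected = True   # the behavior the test expects

print("duplicate detected:", duplicate_detected)
```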

The next error is with joins in the wisconsin test:

------------------

Testing server 'Drizzle 2009.06.1058' at 2009-07-11 12:21:01

Wisconsin benchmark test

Time for create_table (3): 0 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)

Inserting data
Time to insert (31000): 21 wallclock secs ( 0.26 usr 0.52 sys + 0.00 cusr 0.00 csys = 0.78 CPU)
Time to delete_big (1): 0 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)

Running the actual benchmark

Error occured with execute(select t1.*,o.unique1 AS ounique1,o.unique2 AS ounique2,o.two AS otwo,o.four AS ofour,o.ten AS oten,o.twenty AS otwenty,o.hundred AS ohundred,o.thousand AS othousand,o.twothousand AS otwothousand,o.fivethous AS ofivethous,o.tenthous AS otenthous,o.odd AS oodd, o.even AS oeven,o.stringu1 AS ostringu1,o.stringu2 AS ostringu2,o.string4 AS ostring4 from onek o, tenk1 t1, tenk1 t2 where (o.unique2 = t1.unique2) and (t1.unique2 = t2.unique2) and (t1.unique2 < 1000) and (t2.unique2 < 1000)) -> The SELECT would examine more than MAX_JOIN_SIZE rows; check your WHERE and use SET SQL_BIG_SELECTS=1 or SET MAX_JOIN_SIZE=# if the SELECT is okay

------------------

I'm not sure what this is about - does Drizzle have SQL_BIG_SELECTS and MAX_JOIN_SIZE settings as MySQL does?
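If Drizzle does inherit them, the usual MySQL-side workaround for that error would be one of these session settings (whether these variables exist in Drizzle is exactly the open question):

```sql
-- MySQL-side workaround; Drizzle support for these variables is unconfirmed.
SET SESSION SQL_BIG_SELECTS = 1;   -- allow SELECTs estimated above MAX_JOIN_SIZE
-- or raise the row-estimate cap instead:
SET SESSION MAX_JOIN_SIZE = 18446744073709551615;
```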

The code:

bzr+ssh://[email protected]/%7Ecapttofu/sql-bench/trunk/

regards,

Patrick

_______________________________________________
Mailing list: https://launchpad.net/~drizzle-discuss
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~drizzle-discuss
More help   : https://help.launchpad.net/ListHelp
