Didn't have that bug, but you are correct - the fix makes it safer.
So, you are getting a ~15x performance boost - interesting...
BTW: For anyone not checking the code, it now creates 2 tables of 10
fields each (simple string fields), plus one extra field on the second
table linking it to the first (a foreign key).
It then inserts 1000 records into each table, and queries all records
with a JOIN.
Pretty straightforward.
I suppose if the fields were more varied in type, the gap would be even
bigger.
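In plain sqlite3 terms (outside the DAL), the setup being benchmarked amounts to something like the sketch below. This is just an illustration of the schema and the JOIN, not the benchmark code itself (the real script is further down); table and field names mirror the attached code:

```python
import sqlite3

FIELD_COUNT = 10
RECORD_COUNT = 1000

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

fields = ["f_%d" % i for i in range(FIELD_COUNT)]
cols = ", ".join("%s TEXT" % f for f in fields)

# Two tables of 10 simple string fields each; the second carries a
# foreign key back to the first, as described above.
cur.execute("CREATE TABLE test_table (id INTEGER PRIMARY KEY, %s)" % cols)
cur.execute("CREATE TABLE test_table2 (id INTEGER PRIMARY KEY, "
            "f_test_table INTEGER REFERENCES test_table(id), %s)" % cols)

placeholders = ", ".join("?" * FIELD_COUNT)
for _ in range(RECORD_COUNT):
    cur.execute("INSERT INTO test_table (%s) VALUES (%s)"
                % (", ".join(fields), placeholders),
                ["testValue"] * FIELD_COUNT)
    fk = cur.lastrowid  # link each child record to its parent
    cur.execute("INSERT INTO test_table2 (f_test_table, %s) VALUES (?, %s)"
                % (", ".join(fields), placeholders),
                [fk] + ["testValue"] * FIELD_COUNT)
conn.commit()

# The JOIN query: all records from both tables, matched on the FK.
rows = cur.execute(
    "SELECT t1.*, t2.* FROM test_table t1 "
    "JOIN test_table2 t2 ON t2.f_test_table = t1.id").fetchall()
print(len(rows), len(rows[0]))
```

Note that `t1.*, t2.*` here returns 23 columns per row (both ids plus the FK); the DAL version counts 21 values per record because its formula (`fieldCount*2+1`) counts only the data fields and the FK.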
I am doing some more tests with a more customizable version of the code.
Let's see what happens when we increase the record-count 10-fold (210k
values):
Times are for 100 iterations, of querying for 10000 results, each with 21 values
Preparing....
started bench
speed_executesql() total=11.081626 avg=0.110816
speed_normal_rows() total=204.889276 avg=2.048893
speed_rows_with_cacheable() total=190.929696 avg=1.909297
speed_executesql_with_colnames() total=13.822615 avg=0.138226
speed_rows_with_simple_processor() total=13.638187 avg=0.136382
Hmm... Ok, not much difference - about 15.9x faster.
Now, let's increase the column-count 10-fold instead (a similar total of 201k values):
Times are for 100 iterations, of querying for 1000 results, each with 201 values
Preparing....
started bench
speed_executesql() total=10.222960 avg=0.102230
speed_normal_rows() total=159.059873 avg=1.590599
speed_rows_with_cacheable() total=157.991245 avg=1.579912
speed_executesql_with_colnames() total=10.322220 avg=0.103222
speed_rows_with_simple_processor() total=10.510867 avg=0.105109
Same thing, about 15.9x faster...
Hmm... I'll try to customize the table-count...
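For reference, the colnames trick discussed in the quoted messages below boils down to skipping the construction of Rows/Row objects entirely: once the column names are known up front, each raw cursor row can be zipped into a plain dict. A minimal sketch of that idea (the `rows_as_dicts` helper is hypothetical, not the actual dal.py code):

```python
def rows_as_dicts(raw_rows, colnames):
    # Zip each raw cursor row with the known column names into a plain
    # dict - no Rows/Row object construction, which is where select()
    # spends most of its time in these benchmarks.
    return [dict(zip(colnames, row)) for row in raw_rows]

raw = [(1, 'testValue'), (2, 'testValue')]
print(rows_as_dicts(raw, ['id', 'f_0']))
# -> [{'id': 1, 'f_0': 'testValue'}, {'id': 2, 'f_0': 'testValue'}]
```

This is also exactly what the `myprocessor` function in the attached script does when passed as `processor=` to select().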
On Wednesday, November 20, 2013 3:52:42 PM UTC+2, Paolo Valleri wrote:
>
> Hi, below my results:
> Times are for 100 iterations
> Preparing....
> started bench
> speed_executesql() total=0.876057 avg=0.008761
> speed_normal_rows() total=15.076992 avg=0.150770
> speed_rows_with_cacheable() total=13.477355 avg=0.134774
> speed_executesql_with_colnames() total=0.989993 avg=0.009900
> speed_rows_with_simple_processor() total=1.084824 avg=0.010848
>
>
> Mind that your code doesn't work properly, I got the error:
> "sqlite3.IntegrityError: foreign key constraint failed". Find attached an
> updated version.
>
> Paolo
>
>
>
> On Wednesday, November 20, 2013 2:32:12 PM UTC+1, Arnon Marcus wrote:
>>
>> Here we go:
>>
>> I edited the original test-code that Niphlod posted, just enough to put
>> my hypothesis to the test.
>> Here are the results:
>>
>> C:\Python26\python.exe F:/Experimental/web2py_272/web2py/test_executesql.
>> py
>> Times are for 100 iterations
>> Preparing....
>> started bench
>> speed_executesql() total=0.920402 avg=0.009204
>> speed_normal_rows() total=20.190159 avg=0.201902
>> speed_rows_with_cacheable() total=18.954632 avg=0.189546
>> speed_executesql_with_colnames() total=1.070893 avg=0.010709
>> speed_rows_with_simple_processor() total=1.118015 avg=0.011180
>>
>>
>> Process finished with exit code 0
>>
>>
>> I left in the addition of 'colname or [...' to the 'if as_dict:' block of
>> executesql() in dal.py, so the colnames that were passed in were actually
>> used - otherwise, as in Niphlod's original test-code, they would be
>> disregarded (since 'as_dict=True' is passed in).
>> So 'speed_executesql_with_colnames()' is actually doing what it should in
>> this test-run.
>>
>> Let's see:
>> speed_executesql_with_colnames is about 20x faster than
>> speed_normal_rows in this run...
>>
>> You can test it out yourself - here is the code:
>>
>>
--
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
---
You received this message because you are subscribed to the Google Groups
"web2py-users" group.
import timeit
from gluon.dal import DAL, Field

fieldCount = 10
recordCount = 1000
times = 100

print("Times are for %i iterations, of querying for %i results, each with %i values" % (times, recordCount, (fieldCount * 2 + 1)))

# prepare the test tables
print("Preparing....")
db = DAL('sqlite:memory')
fieldNames = [('f_' + str(i)) for i in range(fieldCount)]

# two tables of simple string fields; the second links back to the first
db.define_table('test_table', *[Field(fieldName) for fieldName in fieldNames])
db.define_table('test_table2', Field('f_test_table', db.test_table),
                *[Field(fieldName) for fieldName in fieldNames])
db.commit()

# delete the child table first, to avoid the foreign key constraint error
db(db.test_table2.id > 0).delete()
db(db.test_table.id > 0).delete()
db.commit()

# load recordCount records into each table
for a in range(recordCount):
    values = dict([(fieldName, 'testValue') for fieldName in fieldNames])
    foreign_key = db.test_table.insert(**values)
    values['f_test_table'] = foreign_key
    db.test_table2.insert(**values)
db.commit()

query = (db.test_table2.f_test_table == db.test_table.id)
sql = db(query)._select(db.test_table.ALL, db.test_table2.ALL)
# extract the column names from the generated SELECT statement
sql_colnames = sql.split(' FROM')[0][7:].split(', ')

def bench(cmd, imprt):
    t = timeit.Timer(cmd, imprt)
    s = t.timeit(number=times)
    print("%s total=%02f avg=%02f" % (cmd, s, (s / times)))
    return s

def myprocessor(rows, fields, colnames, blob_decode=True, cacheable=False):
    # minimal processor: zip each raw row with the column names
    return [dict(zip(colnames, row)) for row in rows]

def speed_executesql():
    rtn = db.executesql(sql, as_dict=True)

def speed_executesql_with_colnames():
    rtn = db.executesql(sql, colnames=sql_colnames, as_dict=True)

def speed_normal_rows():
    rtn = db(query).select()

def speed_rows_with_cacheable():
    rtn = db(query).select(cacheable=True)

def speed_rows_with_simple_processor():
    rtn = db(query).select(processor=myprocessor)

def full_bench():
    print("started bench")
    b1 = bench('speed_executesql()', 'from __main__ import speed_executesql')
    b2 = bench('speed_normal_rows()', 'from __main__ import speed_normal_rows')
    b3 = bench('speed_rows_with_cacheable()', 'from __main__ import speed_rows_with_cacheable')
    b4 = bench('speed_executesql_with_colnames()', 'from __main__ import speed_executesql_with_colnames')
    b5 = bench('speed_rows_with_simple_processor()', 'from __main__ import speed_rows_with_simple_processor, myprocessor')

if __name__ == '__main__':
    full_bench()