Hi Johann, I think this is related: https://groups.google.com/forum/?fromgroups=#!topic/web2py-developers/2SjrZ3AHR1c
Using recursive queries which fetch the whole row (especially with very wide rows) is a huge DB hit. My conclusion is that, ideally, web2py should use joins where necessary. However, a join is only necessary where the foreign key table has a specified 'format', and only the fields included in the 'format' should be selected. All the information could then be fetched in one select. This would be a massive performance gain for SQL DBs. I guess NoSQL would still have to work with recursive queries, but their limitations shouldn't penalise the rest of us :)

Anthony's solution in the above-mentioned thread goes part way to improving the situation, but unless I misunderstood, it doesn't deal with unnecessary retrieval of the entire row. It almost goes without saying that hidden performance hits of that severity undermine the value of built-in framework features.

In my case, due to the above, I rolled my own code which avoided recursive queries. I found the toolbar brilliant for testing my queries!

D

On Thursday, 5 September 2013 08:08:39 UTC+1, Johann Spies wrote:
>
> On 4 September 2013 15:34, Massimo Di Pierro <[email protected]> wrote:
>
>> Please check with the response.toolbar to see if there are hidden
>> recursive queries.
>>
>> Thanks!
>
> I did not use the 'db stats' function before when debugging and it seems a useful tool!
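The join-versus-recursive-select point above can be illustrated outside web2py. Below is a minimal sqlite3 sketch — the table names are borrowed from this thread, but the columns and data are invented for illustration. It contrasts the "N+1" pattern (one `SELECT *` per referenced row) with a single join that fetches only the fields actually needed:

```python
# Sketch of the N+1 pattern vs. a single join, using stdlib sqlite3.
# rresearch / isi_alt_countrynames are stand-ins modelled on the thread;
# the column set and data here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE rresearch (id INTEGER PRIMARY KEY, nu TEXT, uuid TEXT);
    CREATE TABLE isi_alt_countrynames (
        id INTEGER PRIMARY KEY, code TEXT,
        rsc_id INTEGER REFERENCES rresearch(id));
    INSERT INTO rresearch VALUES (1, 'NL', 'u1'), (2, 'ZA', 'u2');
    INSERT INTO isi_alt_countrynames VALUES (10, 'NLD', 1), (20, 'ZAF', 2);
""")

# "Recursive" / N+1 style: one extra SELECT per referenced row.
rows = cur.execute(
    "SELECT id, code, rsc_id FROM isi_alt_countrynames ORDER BY id").fetchall()
n_plus_one = []
for _id, code, rsc_id in rows:
    # Fetches EVERY column of the parent row, even though only 'nu' is used.
    parent = cur.execute(
        "SELECT * FROM rresearch WHERE id = ?", (rsc_id,)).fetchone()
    n_plus_one.append((code, parent[1]))

# Join style: one round trip, only the columns actually needed.
joined = cur.execute("""
    SELECT isi_alt_countrynames.code, rresearch.nu
    FROM isi_alt_countrynames JOIN rresearch ON rresearch.id = rsc_id
    ORDER BY isi_alt_countrynames.id
""").fetchall()

print(n_plus_one)  # [('NLD', 'NL'), ('ZAF', 'ZA')]
print(joined)      # same pairs, fetched in a single query
```

With 2 parent rows the difference is trivial; with the 20-row pages and wide rows described below, the N+1 style issues 20 extra full-row selects per page, which is exactly the hidden cost the toolbar exposes.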
>
> Interestingly the difference is the following:
>
> When using the grid, there are two queries:
>
> SELECT rresearch.nu, rresearch.ny, rresearch.nc, rresearch.id
> FROM rresearch
> WHERE (rresearch.id IS NOT NULL)
> ORDER BY rresearch.nu, rresearch.ny, rresearch.nc, rresearch.id;
>
> 2920.90ms
>
> SELECT rresearch.nu, isi_alt_countrynames.code, isi_alt_countrynames.id, rresearch.id
> FROM rresearch, isi_alt_countrynames
> WHERE (isi_alt_countrynames.rsc_id = rresearch.id)
> ORDER BY rresearch.id, isi_alt_countrynames.id LIMIT 20 OFFSET 0;
>
> 325.16ms
>
> and when using SQLTABLE there is one big one:
>
> SELECT rresearch.nu, isi_alt_countrynames.code, isi_alt_countrynames.id, rresearch.id
> FROM rresearch, isi_alt_countrynames
> WHERE (isi_alt_countrynames.rsc_id = rresearch.id)
> ORDER BY rresearch.nu LIMIT 21 OFFSET 0;
>
> which took 185.42ms
>
> and then one query for each of the rows in the result of the previous query, like this (taking between 0.44ms and 0.72ms each):
>
> SELECT rresearch.id, rresearch.cn, rresearch.nf, rresearch.nc,
> rresearch.nd, rresearch.nn, rresearch.ny, rresearch.np, rresearch.nu,
> rresearch.nz, rresearch.uuid
> FROM rresearch WHERE (rresearch.id = 642117) LIMIT 1 OFFSET 0;
>
> 0.50ms
>
> This leaves me with more questions about the efficiency of the translation of the queries to the backend:
>
> * In the case of the grid, it will be totally impracticable to try to load the whole rresearch table when working with a table containing 80 million records. Why was this necessary?
> * In the case of SQLTABLE, why would each of those queries select all the fields instead of just those necessary in the original query?
>
> I am beginning to think of bypassing the DAL in these queries and going straight to the backend.
>
> Regards
> Johann

-- 
You received this message because you are subscribed to the Google Groups "web2py-users" group.
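As a footnote to Johann's "bypass the DAL" idea: web2py does expose `db.executesql()` for running raw SQL. Since web2py itself can't run here, the sketch below uses stdlib sqlite3 as a stand-in, with invented data, to show the shape of the query the grid should need — one paginated join selecting only the displayed columns, like the 325.16ms query above, with no per-row follow-ups:

```python
# Stand-in for bypassing the ORM with raw SQL (in web2py the query
# string could be passed to db.executesql() instead). Table names are
# from the thread; the columns and data here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE rresearch (id INTEGER PRIMARY KEY, nu TEXT);
    CREATE TABLE isi_alt_countrynames (
        id INTEGER PRIMARY KEY, code TEXT, rsc_id INTEGER);
""")
cur.executemany("INSERT INTO rresearch VALUES (?, ?)",
                [(i, "country-%d" % i) for i in range(1, 101)])
cur.executemany("INSERT INTO isi_alt_countrynames VALUES (?, ?, ?)",
                [(i, "code-%d" % i, i) for i in range(1, 101)])

# One paginated join: only the columns the page displays, 20 rows at a
# time, instead of SQLTABLE's per-row SELECT-all follow-up queries.
page, page_size = 0, 20
rows = cur.execute("""
    SELECT rresearch.nu, isi_alt_countrynames.code
    FROM rresearch, isi_alt_countrynames
    WHERE isi_alt_countrynames.rsc_id = rresearch.id
    ORDER BY rresearch.id LIMIT ? OFFSET ?
""", (page_size, page * page_size)).fetchall()

print(len(rows))  # 20
print(rows[0])    # ('country-1', 'code-1')
```

Note this only helps the 80-million-row case if the unpaginated count/sort query (the 2920.90ms one) is also avoided or backed by an index.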

