10x Niphlod, I'll check that tomorrow...
I can provide any code you like - this is solid and consistent.
I am running this on my desktop workstation, which runs Windows 7 x64.
It's an Intel Core i7 870 w/ 8GB RAM, running Python 2.6 (x64) via a simple
command-line gevent server...
Don't think I have a problem there...
The database itself sits as a stand-alone service on an old "boxx"
pizza-box-like server in our server room, running nothing but PostgreSQL
9.3 (x64) and PgBouncer (for connection pooling) on a minimalist CentOS
6.4 (x64) installation.
I optimized it as far as I could: the data weighs less than 100MB
total, I have the data folder sitting on a mounted "tmpfs" of 3GB (a la
RAMDISK), and the schema is heavily indexed, so the queries are as fast
as they can be.
As I said, it may simply be a case of having a very fast database setup...
"I'd guess your usecase is either 1k rows with 50"
That's exactly what I said - yes, it's a ~1K-row query: a JOIN of 2
tables, selecting about 20 columns.
That is our use-case - it can't be any different (at least not easily...)
We are not really using this version of web2py - I just did that for
testing and posting here.
But we initially encountered this case using a very old version of web2py
(1.8.95), in which the exact numbers were:
904 rows in the result-set:
- Regular select() : 2.241 seconds
- Of that, the portion executesql itself took: 0.023 seconds.
That is roughly a ~44x gap...
And as I said - trying to convert the result-set into a simple list of
simple class instances yielded similar results.
Which made it conclusive: the overhead was in the object instantiation and
other plumbing-related stuff like "__getitem__" and ".iteritems()" and the
like...
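For what it's worth, the shape of that comparison can be reproduced with a standalone sketch - synthetic data standing in for the driver's raw tuples, and a plain dict-per-row standing in for the DAL's per-row plumbing. All names here are made up, and the absolute timings will differ from the web2py numbers above:

```python
import time

# Synthetic stand-in for a DB driver's raw result-set:
# 904 rows of 20 columns, as in the 1.8.95 measurements above.
raw_rows = [tuple("value_%d_%d" % (r, c) for c in range(20))
            for r in range(904)]
colnames = ["t.col%d" % c for c in range(20)]

# "executesql-like" path: keep the raw tuples as-is.
t0 = time.time()
flat = list(raw_rows)
t_flat = time.time() - t0

# "select()-like" path: wrap every row in a dict keyed by colname,
# mimicking the per-row object instantiation a DAL performs.
t0 = time.time()
parsed = [dict(zip(colnames, row)) for row in raw_rows]
t_parsed = time.time() - t0

print("raw:    %.6fs" % t_flat)
print("parsed: %.6fs" % t_parsed)
```

Even this crude version shows the parsed path costing a multiple of the raw one, before any nested table objects enter the picture.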
I think the test code you gave here is interesting - but to make it closer
to our use-case, I would try a more complex query.
First, have 2 tables with a foreign-key linking them.
Second, they should each have about ~20+ fields of varying types.
Then the query should be a join of them, getting most/all columns.
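To make that suggestion concrete, here is a throwaway sketch that fabricates the shape of such a joined result-set - two hypothetical ~20-column tables linked by a foreign key (every table and column name here is invented for illustration):

```python
# Fabricate a joined result-set of two tables, each with ~20 columns,
# linked by a foreign key -- the shape of query our use-case runs.
# "parent"/"child" and all column names are made up for this sketch.
N_ROWS = 1000
N_COLS = 20

parent_cols = ["parent.col%d" % i for i in range(N_COLS)]
child_cols = ["child.col%d" % i for i in range(N_COLS)]
colnames = child_cols + parent_cols  # 40 colname-attached columns

# Each joined record carries 40 values (20 per table), keyed by
# 'table.column' -- exactly what the parser then has to segregate.
joined = [
    {name: "r%d:%s" % (r, name) for name in colnames}
    for r in range(N_ROWS)
]

print(len(joined), len(joined[0]))  # → 1000 40
```

Feeding something of this shape into the test harness should expose the parsing cost far better than a single flat table does.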
What I noticed was that a JOINed query may end up being MUCH heavier to
parse into Rows (or Rows-like) objects than a "flat" one (with no JOINs),
as there are nested loops involved: intermediary table-representing
objects are created, and the result-set then has to be filtered into
them according to the "table-name" portion of each colname.
For example, let's say we have 3 tables:
- Budgets:
    name (string)
    project (foreign key to Projects)
- Projects:
    name (string)
    client (foreign key to Clients)
- Clients:
    name (string)
Now, if we JOIN them in a query, each record in the result-set would have
to have its values segregated among the 3 table-representing objects
inside its Row object.
A colname-attached result-record may look like this:
{
    'Budgets.name': 'My budget',
    'Budgets.project': 742,
    'Projects.name': 'My project',
    'Projects.client': 445,
    'Clients.name': 'My client'
}
A Row object's general structure would have to end up looking like this:
{
    'Budgets': {
        'name': 'My budget',
        'project': 742
    },
    'Projects': {
        'name': 'My project',
        'client': 445
    },
    'Clients': {
        'name': 'My client'
    }
}
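The segregation step itself can be sketched in a few lines of plain Python (my own simplified illustration of the idea, not web2py's actual parsing code):

```python
def segregate(record):
    """Split a colname-keyed record ('Table.column': value) into
    nested per-table dicts, one per table in the join."""
    row = {}
    for colname, value in record.items():
        table, _, column = colname.partition('.')
        row.setdefault(table, {})[column] = value
    return row

# The colname-attached record from the example above.
record = {
    'Budgets.name': 'My budget',
    'Budgets.project': 742,
    'Projects.name': 'My project',
    'Projects.client': 445,
    'Clients.name': 'My client',
}
print(segregate(record))
```

Running it on the record above yields exactly the nested structure shown - and note that even this toy version allocates one container per table per record.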
Now, multiply the tables and/or columns, and you already have a very
different story of record filtering/segregation and object instantiation -
very different from the test code you posted here.
If you then multiply that by 1000, you end up with 4000 object
instantiations just for this simple example (for each of the 1000 Row
objects there are another 3 nested table objects - meaning another 3000
objects on top of the 1000 Rows...). And that is a join of just 3 tables -
we have many queries much, much more complex than that...
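The tally for that example is simple arithmetic (counting only the dict-like containers, ignoring any Field/Table machinery a real DAL also touches):

```python
# Back-of-the-envelope container count for the 3-table join above:
# one Row container plus one per-table container for each record.
n_rows = 1000
tables_per_row = 3  # Budgets, Projects, Clients
containers = n_rows * (1 + tables_per_row)
print(containers)  # → 4000
```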
The interesting (and somewhat disappointing) takeaway I got from your
results was that PyPy was just as "slow" in un-parsed mode... ;)
--
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
---
You received this message because you are subscribed to the Google Groups
"web2py-users" group.