> A complex filter on a small set of items might be faster in Python than 
> doing another database hit. And a simple filter might belong in the db if 
> it has to go over lots of records. As I said, these are orthogonal 
> considerations.
>

Perhaps, but again, we are talking about the context of a stateful system - 
we may already have some data on our object-graph - so it's more 
complicated than that. If we're talking about the first query of a 
transaction, we need to think about the context of that whole transaction - 
will we be using all of the fields in the subsequent attribute-accesses? 
What about the records themselves - do we need all of them for our 
access-pattern later on? How should we construct our query so it's optimal 
for re-use of the results in subsequent attribute-accesses of that same 
transaction? Such considerations do not even exist in a stateless system 
like web2py's DAL - it doesn't have the same kind of re-usability of 
returned data.

For example - if I am writing code for a transaction that will later need 
to do a simple filter on a large data-set, but I also know I'm going to 
need *some* of that data, plus some other related data, for another 
attribute-access, then I should construct the first query of the 
transaction to do the complex filtering - even on a large data-set - so 
the results are cached for me when I do the simple filtering of that data 
in Python afterwards. The extra data being reused might not pertain to all 
the records I fetched, but it might well pertain to more fields than my 
simple filter needed.

So in that case, I might do the simple filtering in Python, even if a 
large data-set is involved, because I am optimizing the number of queries 
for a wider context.
 

>  
>
>> Conversely, simple-filtering is way too verbose using the DAL - it's an 
>> overkill for that, and makes the code much less readable.
>>
>
> Don't know why you think that.
>

Because it is.
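To illustrate what I mean (the DAL line below is written from memory of web2py's query syntax, so treat it as an assumption): a simple filter over rows you already hold is a one-liner in plain Python, with no query-building machinery in the way.

```python
# DAL style - another query round-trip, illustrative only:
#
#     adults = db(db.person.age > 21).select()
#
# versus filtering rows we already have, in plain Python:

people = [
    {"name": "Ann", "age": 34},
    {"name": "Bob", "age": 19},
]
adults = [p for p in people if p["age"] > 21]
```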
 

>   
>
>> For simple filtering, well, I'd rather do it in python and get 
>> readability, because the performance-benefits are negligible.
>>
>
> But I thought you were a fan of achieving negligible performance benefits 
> at great cost (see below).
>

Now you're being cynical...
 

>
> Still don't know why you would want a NoSQL database or what it has to do 
> with this topic. 
>

You're barking up the wrong tree - it does not relate to this discussion, 
and I didn't say I need a NoSQL database - I don't.
I meant it as a hypothetical alternative to an imaginary scenario of me 
doing ALL the filtering in python - for THAT I said "well, I *might as well* 
use NoSQL", as I would then not reap the benefits of a relational database. 
It was a statement to emphasize why I wouldn't want to do complex filtering 
in Python *in general* - obviously there are edge-cases, as you alluded, and 
then there's the additional complexity of decision-making, as I alluded, due 
to the introduction of stateful caching/reuse of results.
 

>
> Well, you've sure made a lot of claims about what web2py needs without 
> knowing much about what it already has. Those are ids. If they were rows, 
> then you would just do france.id and spanish.id.
>

I was simply avoiding making assumptions in that example, as there was no 
context for these variables in it.
 

>
> Yes, I think you need to. If this is only going to save a half a second of 
> CPU time per day, I'm not going to build an ORM to get it. The question 
> isn't how much faster the identity check is (and I don't think it's that 
> much faster) -- the question is how much of your overall application CPU 
> time is spent doing this kind of thing?
>
>
Fine, don't make the "is not" usage a reason for an ORM - you may still 
benefit from an "Identity Map" in an ORM, in terms of memory-efficiency, 
even if you stick to your ugly "!="s and "=="s....
I wouldn't base my decision to have an Identity Map only on the usage of 
"is" and "is not" - in fact, it is rarely used even in SQLA - it was just 
an example of readability that can be harnessed *in addition to* the 
memory efficiency that an Identity Map provides.
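
For what I mean by an Identity Map, here is a minimal sketch (hypothetical - this is not SQLAlchemy's actual implementation, just the idea): one object per primary key per session, so repeated loads return the *same* object, which both saves memory and makes "is" comparisons meaningful.

```python
class Session:
    def __init__(self):
        self._identity_map = {}  # (table, pk) -> cached object

    def get(self, table, pk, loader):
        key = (table, pk)
        if key not in self._identity_map:
            # Hit the DB only once per (table, pk) in this session
            self._identity_map[key] = loader(pk)
        return self._identity_map[key]

def load_person(pk):
    # Stands in for a real DB fetch
    return {"id": pk, "name": "person-%d" % pk}

session = Session()
a = session.get("person", 1, load_person)
b = session.get("person", 1, load_person)
assert a is b  # same object: identity, not merely equality
```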

For benchmarks on THAT, you may look up SQLAlchemy vs Django if you 
like... I don't really care much about that - I just know it is obviously 
better...
