That seems odd to me; I will have to crack open my Cheetah when I get home. Honestly I 
have always used DBD directly for my stuff, though I have been hitting myself on the 
hand every time I do it ;-).

The SELECT COUNT(*) should be optimized at the database level, and I have used that 
method many times: get a total of the matching rows first, then select only a subset 
of those rows with a LIMIT to save on memory. So while it is brute force, it should at 
least be optimized; granted, the slow part is making the extra call to the database 
at all.
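
Something like this is what I have in mind (just a sketch, untested, with made-up 
table and column names):

  use DBI;
  my $dbh = DBI->connect("dbi:Pg:dbname=mydb", $user, $pass, { RaiseError => 1 });

  # Count the matching rows first...
  my ($total) = $dbh->selectrow_array(
      "SELECT COUNT(*) FROM widgets WHERE status = ?", undef, $status);

  # ...then pull back only the page we actually need.
  my $page = $dbh->selectall_arrayref(
      "SELECT id, name FROM widgets WHERE status = ? ORDER BY id LIMIT ? OFFSET ?",
      undef, $status, $per_page, $offset);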

Theoretically your scalar method should work, but then you are trading memory for 
processing, so it is really more of an environmental decision than anything.
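
If you do go the scalar route it would look roughly like this (again a sketch, same 
made-up names), with the obvious catch that every matching row ends up in memory just 
to be counted:

  my $rows = $dbh->selectall_arrayref(
      "SELECT id, name FROM widgets WHERE status = ?", undef, $status);
  my $count = scalar @$rows;   # all matching rows are now held in memory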

http://danconia.org

------------------------------------------------
On Mon, 18 Nov 2002 13:50:58 -0800 (PST), "Roderick A. Anderson" <[EMAIL PROTECTED]> 
wrote:

> On Mon, 18 Nov 2002 [EMAIL PROTECTED] wrote:
> 
> > As an unfortunate user recently found out, this depends on the
> > database. DBI offers the 'rows' method that you would call on your
> > statement handle to tell you the number of rows it returned (this is
> > NOT matched rows, for example if you use a LIMIT these are two very
> > different things).
> 
> Geez.  I forgot to say -  PostgreSQL 7.2.1.  Here I was thinking I was so 
> careful and would get all the important pieces in the message.
> 
> The Cheetah book says rows() should not be used and especially not with
> select statements.  Thanks for the suggestion though.  In the same area
> it's suggested to use a 'count(*)' with the same where clause but that
> seems really brute force.
> 
> I'll get the snippets of code together and test the scalar() route.
> 
> 
> Cheers,
> Rod
> -- 
>   "Open Source Software - Sometimes you get more than you paid for..."
> 
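
For what it's worth, the rows-returned-vs-rows-matched distinction above would show up 
in something like this (behavior is driver-dependent, which is exactly why the Cheetah 
book warns against trusting rows() for SELECTs; names made up again):

  my $sth = $dbh->prepare("SELECT id FROM widgets WHERE status = ? LIMIT 10");
  $sth->execute($status);
  print $sth->rows, "\n";   # at most 10 -- rows returned, not rows matched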
