> Here's a follow-up question. Imagine a situation where you
> pass a list of IDs to a query. You know that the query will
> only return, at most, the same number of rows as IDs. Taking the
> same kind of query, where the amount of bytes returned per
> row divided into the buffer's size would tell you…
…a powerful ally it is." - Yoda
> -Original Message-
> From: Dave Watts [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, November 23, 2000 7:14 PM
> To: CF-Talk
> Cc: '[EMAIL PROTECTED]'
> Subject: RE: Dave Watts please read - Re: BLOCKFACTOR and MAXROWS
>
Thanks!
My concern was that some of the discussions suggested to me that the buffer
size might be fixed at 32 KB rather than variable.
best, paul
At 07:14 PM 11/23/00 -0500, you wrote:
> > How large are buffers set to? I often use
> > BLOCKFACTOR=100 in a query where I:
> >
> > SELECT ID FROM foo
> CF will not generate an error if the database does not support
> block factoring, it's far worse than that.
Actually, I think that there was a problem if you tried to use it with a
Sybase or Informix native datasource - one of these (I forget which) would
cause CF to throw an error.
In any case…
> How large are buffers set to? I often use
> BLOCKFACTOR=100 in a query where I:
>
> SELECT ID FROM foo WHERE bar
>
> Is this setting a large buffer?
The size of the buffer will depend on the maximum size of a returned row.
Given that you're using a field called "ID", which is probably an int…
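As a rough sketch of that math (all numbers here are illustrative; a single 4-byte integer column is assumed):

```cfml
<!--- Illustrative only: if each row is a single 4-byte integer, a
      BLOCKFACTOR of 100 implies roughly 4 * 100 = 400 bytes per fetch,
      nowhere near a fixed 32 KB buffer. --->
<cfset maxRowBytes = 4>
<cfset blockFactor = 100>
<cfset bufferBytes = maxRowBytes * blockFactor>
<cfoutput>#bufferBytes#</cfoutput> <!--- 400 --->
```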
> According to the article by Mr. Van Horn, any Oracle or ODBC
> datasource supports blockfactor. Is that not true?
I don't know for sure.
> > Also, setting a block factor too high when it is not needed
> > will hurt performance because allocating and freeing those
> > larger buffers takes time…
On 11/21/00, Ben Forta penned:
>CF will not generate an error if the database does not support block
>factoring, it's far worse than that. CF has no way to poll the database to
>see what it supports, so if you specify a number too high it'll try that, if
>that fails it'll try a lesser number, and t…
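A hypothetical sketch of the fallback Ben describes (CF's real step size and retry logic aren't documented here; driverMax just stands in for whatever the driver will actually accept):

```cfml
<cfscript>
    // Hypothetical: try the requested block factor, stepping down
    // until the driver accepts it, bottoming out at the default of 1.
    requested = 100;
    driverMax = 25;   // stand-in for what the driver actually supports
    factor = requested;
    while (factor GT 1 AND factor GT driverMax) {
        factor = factor - 1;   // step size is a guess
    }
    WriteOutput(factor);  // 25
</cfscript>
```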
How large are buffers set to? I often use
BLOCKFACTOR=100 in a query where I:
SELECT ID FROM foo WHERE bar
Is this setting a large buffer?
best, paul
At 06:25 PM 11/21/00 -0500, you wrote:
>Also, setting a block factor too high when it is not needed will hurt
>performance because allocating…
…where it could be more efficiently used).
--- Ben
-Original Message-
From: Dave Watts [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 21, 2000 4:55 PM
To: CF-Talk
Cc: [EMAIL PROTECTED]
Subject: RE: Dave Watts please read - Re: BLOCKFACTOR and MAXROWS
> And for the most efficiency,…
…wide open
to where it's unspecified and CF just dies horribly.
Jeremy Allen
ElliptIQ Inc.
-Original Message-
From: Dave Watts [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 21, 2000 4:41 PM
To: CF-Talk
Cc: [EMAIL PROTECTED]
Subject: RE: Dave Watts please read - Re: BLOCKFACTOR and MAXROWS
> And for the most efficiency, why not just always use
> BLOCKFACTOR=100 ?
The maximum allowable value for BLOCKFACTOR is 100, according to Allaire.
> Why is the default the most inefficient choice?
The default will always work. The ability to specify larger record blocks
isn't universally…
> > I don't think you'd always want to simply set BLOCKFACTOR to
> > 100. If you set the BLOCKFACTOR too large, the database driver
> > will lower it - and I'm not sure exactly how it figures out
> > what to lower it to. It might simply lower it back to the
> > default value of 1, which won't serve…
k Administrator
> Vivid Media
> [EMAIL PROTECTED]
> www.vividmedia.com
> 608.270.9770
>
>-Original Message-
>From: Dave Watts [mailto:[EMAIL PROTECTED]]
>Sent: Tuesday, November 21, 2000 2:13 PM
>To: CF-Talk
>Cc: [EMAIL PROTECTED]
>Subject: RE: Dave Watts please read - Re: BLOCKFACTOR and MAXROWS
> Is BLOCKFACTOR=10 the same as a SQL Select top 10 *?
No, it's not. It has absolutely no effect on how many records are returned
from the database to CF. It only affects how they're returned.
Dave Watts, CTO, Fig Leaf Software
http://www.figleaf.com/
voice: (202) 797-5496
fax: (202) 797-5444
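To make the contrast concrete (a sketch only; TOP is SQL Server/Access syntax, and the datasource and table names are made up):

```cfml
<!--- BLOCKFACTOR=10: every matching row still comes back to CF;
      they are just fetched from the driver ten at a time. --->
<cfquery name="allRows" datasource="myDSN" blockfactor="10">
    SELECT ID FROM foo
</cfquery>

<!--- TOP 10: the database itself returns only ten rows. --->
<cfquery name="tenRows" datasource="myDSN">
    SELECT TOP 10 ID FROM foo
</cfquery>
```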
~
And for the most efficiency, why not just always use BLOCKFACTOR=100 ?
Why is the default the most inefficient choice?
Is there any advantage in using a lower number?
At 03:12 PM 11/21/00 -0500, Dave Watts wrote:
>> I started this thread, and its evolution has led me to
>> believe that in fact…
On 11/21/00, Dave Watts penned:
>I don't think you'd always want to simply set BLOCKFACTOR to 100. If you set
>the BLOCKFACTOR too large, the database driver will lower it - and I'm not
>sure exactly how it figures out what to lower it to. It might simply lower
>it back to the default value of 1,…
From: Dave Watts [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 21, 2000 2:13 PM
To: CF-Talk
Cc: [EMAIL PROTECTED]
Subject: RE: Dave Watts please read - Re: BLOCKFACTOR and MAXROWS
> I started this thread, and its evolution has led me to
> believe that in fact I may not understand the implementation…
> This is from the April 2000 edition of CFDJ. Article: In Defense of
> MS Access, by Bruce Van Horn:
>
> First, add the Blockfactor="100" attribute to all your CFQUERY tags.
> This alone will dramatically increase the speed of your queries.
> Without this attribute, when you run a query, ODBC ha…
> I started this thread, and its evolution has led me to
> believe that in fact I may not understand the implementation
> of the BLOCKFACTOR attribute. The following is from the 4.5
> Studio help:
>
> BLOCKFACTOR
> Optional. Specifies the maximum number of rows to fetch at a
> time from the server.
On 11/21/00, J.Milks penned:
>BLOCKFACTOR
>Optional. Specifies the maximum number of rows to fetch at a time from the
>server. The range is 1 (default) to 100. This parameter applies to ORACLE
>native database drivers and to ODBC drivers. Certain ODBC drivers may
>dynamically reduce the block factor…
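In tag form, the attribute from that help text looks like this (datasource and table names are made up; per the help text, the driver may silently lower the value at runtime):

```cfml
<cfquery name="getIDs" datasource="myOracleDSN" blockfactor="100">
    SELECT ID FROM foo
</cfquery>
```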