> 1. Our server never gives execution times below 15ms. I don't know why but
> that is as accurate as I can get.
This is an important thing to note. It means any numbers fairly close to
this are statistically "suspect", since they'd be indistinguishable from
white noise. If you are doing any tests that complete on time scales of the
same order of magnitude as this number (i.e. up to about 100 ms), then your
systemic error may be as high as 15%!! (15 ms / 100 ms). After some
analysis you might find that, once you work out the standard deviation and
the like, the total "margin of error" is so large compared to the value
that the value has little meaning. Solution: increase the complexity of the
test in some way so that typical runs land in the mid-hundreds of
milliseconds. Then throw out at least the first run from the calculation
(which you seem to have done already). Then do several "tests of tests", so
that you have several groups of runs and you average their cumulative
results.
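That methodology can be sketched in a few lines. This is a Python illustration of the statistics only, not CF code; `run_test`, the group count, and the runs-per-group are made-up stand-ins:

```python
import statistics
import time

def run_test():
    # stand-in for the page under test: the simple 1-to-200 loop
    return sum(range(1, 201))

def benchmark(groups=5, runs_per_group=50):
    """Run several groups of tests, throw out each group's first
    (cold) run, and average the groups' cumulative results."""
    group_means = []
    for _ in range(groups):
        samples = []
        for _ in range(runs_per_group + 1):
            start = time.perf_counter()
            run_test()
            samples.append((time.perf_counter() - start) * 1000.0)  # ms
        group_means.append(statistics.mean(samples[1:]))  # drop first run
    return statistics.mean(group_means), statistics.stdev(group_means)
```

If the standard deviation across groups comes out comparable to the mean itself, the margin of error swamps the value, which is exactly the problem described above.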
Also, was there *anything* else running on this machine at the time of
testing? Obviously that may be a factor, particularly if you've got some app
that uses up lots of memory for no good reason (example: Outlook).
>
> 2. The code used was a simple loop adding the integers from 1 to 200. On a
> page by itself during periods of inactivity on the development server the
> first run through took approximately 15 ms. with a subsequent run of 50
> iterations averaging out at just under 13.5 ms.
See above. I don't see how you can accurately say that 13.5 (which, as
you've written it, has 3 significant figures) has any meaning when it is
below the 15 ms threshold, which itself has only 2 significant figures.
Much better, in the example you give, would be to add the integers from 1
to a million and then average those results. Or design something more
complex that uses floating-point math--that should push calculation time up
significantly (of course, do not do any queries, but you knew that).
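To put numbers on the significant-figures point (a hypothetical helper; the 15 ms floor comes from point 1 above):

```python
def systemic_error(measured_ms, resolution_ms=15.0):
    # worst-case fractional error implied by the timer's resolution floor
    return resolution_ms / measured_ms

# a ~100 ms test carries up to 15% error; a mid-hundreds-ms test far less
assert systemic_error(100) == 0.15
assert systemic_error(750) == 0.02
```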
>
> 3. Moving the simple loop code to another file and then calling that file as
> a <cfinclude> or as a custom tag made little difference in the initial run
> (again, hard to tell at a time of 15 ms.), but added approximately 10-15% to
> the average processing time in the 50 iterations. There was very little
> difference between the custom tag and the include.
Notice how even the additional performance hit you're quoting here, 10-15%,
is itself fully accounted for within the systemic margin of error. That
doesn't mean a <cfinclude> isn't slower (it pretty much *has* to be, just
given the nature of CF being compiled at run-time). But if your typical
test run-time were closer to 1000 ms, then one could be much more confident
that the 10-15% cost of a <cfinclude> was real.
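One way to phrase that rule of thumb (again a hypothetical sketch; the 15 ms resolution is from point 1):

```python
def delta_is_visible(baseline_ms, delta_fraction, resolution_ms=15.0):
    # an added cost is only trustworthy if it exceeds the timer's floor
    return baseline_ms * delta_fraction > resolution_ms

# a 15% hit on ~13.5 ms runs (~2 ms) is lost inside the 15 ms noise...
assert not delta_is_visible(13.5, 0.15)
# ...but a 10% hit on ~1000 ms runs (~100 ms) stands well clear of it
assert delta_is_visible(1000, 0.10)
```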
>
> 4. Adding some complexity to the custom tag/include file (this is what I
> started off with - a complex custom tag with a lot of <cfif>s and <cfloop>s)
> raised the time of the initial run, both on the custom tag and the include,
> to 47-62 ms., a 3 to 4 fold increase, but the average time in the 50
> iterations went up very little, maybe 5% at most. The code that ran in the
> custom tag was still the simple loop, but it was in the <cfelse> part of a
> <cfif> that had a lot of <cfif>s and <cfloop>s in the unprocessed portion.
>
Just be careful not to mix apples and oranges. There's nothing wrong with a
more complex test... but from what you're describing, the increase in
timing may be fully explained by the increase in complexity. Did you also
run the test in parallel, so that you had a baseline measurement for the
increased complexity without the <cfinclude>? That would help.
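In outline, a parallel baseline looks like this (Python stand-ins; `with_include` merely simulates reaching the same work through an extra layer, since there's no real <cfinclude> here):

```python
import time

def time_ms(fn, runs=50):
    # average wall-clock ms over `runs` iterations, discarding the cold first run
    samples = []
    for _ in range(runs + 1):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return sum(samples[1:]) / runs

def complex_body():
    # stand-in for the complex <cfif>/<cfloop> page
    return sum(i * i for i in range(1, 10_001))

def with_include():
    # stand-in for calling the same body via <cfinclude>/custom tag
    return complex_body()

baseline = time_ms(complex_body)
overhead = time_ms(with_include) - baseline  # cost attributable to indirection
```

Subtracting the baseline is what isolates the include's cost from the cost of the complexity itself.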
> 5. Now, here is where it gets sort of wierd. If at any time I call another
> custom tag or include file, I lose the benefit of the repetitive calls. For
> instance, I made two copies of the included file with the same simple loop
> code. First run of file 1 was 62 ms., then I ran the 50 iterations over
> file 1 and file 2. You would expect 15-16 ms. per file x 2 = 30 - 32 ms per
> iteration. But instead it was 120 ms. per iteration. So each <cfinclude>
> was getting hit with the same overhead as the initial run.
>
Wait, I thought CF cached the P-code after it was generated (?)
(hmm, that is the type of question to ask Dave Watts :) ). You don't need
an answer in order to continue testing and come to a conclusion on the
results, but you do need one in order to explain such a conclusion :)
> I ran each test many many times and the numbers were consistent. It appears
> the balancing act between single complex long files and using many includes
> is harder than it appears.
>
I seem to remember Dinowitz doing some studies of this in late 1999 or so.
Maybe he still has his benchmark figures available.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Structure your ColdFusion code with Fusebox. Get the official book at
http://www.fusionauthority.com/bkinfo.cfm
Archives: http://www.mail-archive.com/[email protected]/
Unsubscribe: http://www.houseoffusion.com/index.cfm?sidebar=lists