At 11:54 AM 8/8/00 +0100, Graham Barr wrote:
>On Mon, Aug 07, 2000 at 02:23:08PM -0400, Chaim Frenkel wrote:
> > A different op would be a better performance win. Even those sections
> > that didn't want the check have to pay for it.
>
>That may not be completely true. You would in effect be increasing the
>size of code for perl itself. Whether or not it would be a win would
>depend on how many times the extra code caused a cache miss and a fetch
>from main memory.

The two main thrusts behind making it a separate op are op shrinkage and 
flexibility. If it's ultimately faster, that's just keen.
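
To make the trade-off concrete, here's a rough C sketch. It is not perl's
actual source -- the op struct layout, the pp_* names, and the "check"
itself (a negative-argument test) are simplified stand-ins -- but it shows
why folding a check into an existing op makes every program pay for the
branch, while a dedicated op only appears in optrees that asked for it:

/* Sketch only -- not perl's source.  Struct layout, pp_* names, and the
 * "check" are made up to show the shape of the trade-off. */
#include <stdio.h>

typedef struct op OP;
typedef OP *(*ppfunc)(OP *);

struct op {
    ppfunc  op_pp;      /* C function implementing this op             */
    OP     *op_next;    /* next op in execution order                  */
    int     op_flags;   /* hypothetical "caller wants the check" flag  */
    int     arg;        /* stand-in operand                            */
};

/* Folded-in version: every program using this op pays for the branch,
 * whether it wanted the check or not. */
OP *pp_add_checked(OP *o) {
    if (o->op_flags && o->arg < 0)
        fprintf(stderr, "check tripped\n");
    printf("add %d\n", o->arg);
    return o->op_next;
}

/* Separate-op version: the common op stays lean... */
OP *pp_add(OP *o) {
    printf("add %d\n", o->arg);
    return o->op_next;
}

/* ...and the check is its own op, emitted only where it was requested. */
OP *pp_check(OP *o) {
    if (o->arg < 0)
        fprintf(stderr, "check tripped\n");
    return o->op_next;
}

int main(void) {
    OP folded = { pp_add_checked, NULL, 1, 42 };   /* check folded in     */

    OP add = { pp_add,   NULL, 0, 42 };            /* check as its own op */
    OP chk = { pp_check, &add, 0, 42 };

    for (OP *o = &folded; o; )                     /* run variant 1 */
        o = o->op_pp(o);
    for (OP *o = &chk; o; )                        /* run variant 2 */
        o = o->op_pp(o);
    return 0;
}

The flexibility angle falls out of the same structure: the compiler decides
whether to emit the extra op node at all, instead of every op carrying a
flag it has to test at runtime.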

The problem perl will always run into is that our executable code counts as 
data to CPUs, and lives in the D cache, along with all the data we work on. 
Ripping through a few 100K strings'll kill any benefit of keeping the 
optree small, though how often that happens is also up in the air. (I 
really want a CPU with three caches, I, D, & perl optree...)
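
For anyone following along, here's a rough sketch of why the optree counts
as data to the CPU (again simplified, not real perl source): the runloop
has to load an op node -- an ordinary struct in memory -- on every step
before it can jump to that op's code, so the op nodes go through the D
cache right alongside the strings being chewed on; only the function bodies
themselves sit in the I cache.

/* Sketch only.  The "program" is a chain of data structures the runloop
 * reads each step, so it shares the D cache with the string operands. */
#include <stdio.h>
#include <string.h>

typedef struct op OP;
typedef OP *(*ppfunc)(OP *);

struct op {              /* op nodes: this is what the D cache sees      */
    ppfunc      op_pp;
    OP         *op_next;
    const char *op_sv;   /* stand-in for a string operand                */
};

static size_t total;

OP *pp_length(OP *o) {   /* function body: this is what the I cache sees */
    total += strlen(o->op_sv);
    return o->op_next;
}

int main(void) {
    static char big[100 * 1024];           /* one of those 100K strings  */
    memset(big, 'x', sizeof big - 1);

    OP second = { pp_length, NULL,    big };
    OP first  = { pp_length, &second, "hello" };

    /* The runloop: each iteration loads an op node (data) before calling
     * that op's code, so streaming through the big string evicts the very
     * cache lines the op nodes would like to stay in. */
    for (OP *o = &first; o; )
        o = o->op_pp(o);

    printf("total length: %zu\n", total);
    return 0;
}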

>As Chip says, human intuition is a very bad benchmark.

That's not entirely true. When I think "This'll run faster!" about some 
clever bit of hackery, I'm almost inevitably wrong. Terribly handy, that. :)

                                        Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                         have teddy bears and even
                                      teddy bears get drunk
