I liked the WebMacro approach to introspection. It did the
job once for every class, placed the per-class data in a global
map, and reused it for every template. I know Geir has a concern
about the single-point cache bottleneck (have you measured the
possible implications on a loaded system?).
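For illustration, a minimal sketch of what such a global per-class
cache could look like (names and structure are my own, not
WebMacro's): introspect a class once, stash the result in one shared
map, and serve every later lookup from it.

```java
import java.lang.reflect.Method;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a WebMacro-style global introspection cache.
public class GlobalMethodCache {
    // One shared map for the whole VM -- this is the potential
    // single-point bottleneck, since every lookup synchronizes on it.
    private static final Map<Class<?>, Map<String, Method>> CACHE =
            Collections.synchronizedMap(new HashMap<Class<?>, Map<String, Method>>());

    public static Method lookup(Class<?> clazz, String name) {
        Map<String, Method> perClass = CACHE.get(clazz);
        if (perClass == null) {
            // Introspect once per class, then reuse for every template.
            perClass = new HashMap<>();
            for (Method m : clazz.getMethods()) {
                perClass.put(m.getName(), m); // overloads: last one wins (sketch only)
            }
            CACHE.put(clazz, perClass);
        }
        return perClass.get(name);
    }
}
```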
As far as I have looked at it, Velocity does a similar job but
keeps the introspection cache on a per-request basis, thus only
speeding up foreach loops and similar repeated class accesses. I guess
this means a performance impact if many different classes are
put into the context and each is accessed only once.
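To make the trade-off concrete, here is a hypothetical per-request
cache (again my own names, not Velocity's actual classes): the map
lives only for one render, so only repeated accesses within that
request are cheap, and a class touched once still pays the full
introspection cost.

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Hypothetical per-request introspection cache: created per render,
// discarded afterwards. A #foreach over many objects of the same
// class benefits; a class accessed once does not.
public class RequestScopedCache {
    private final Map<String, Method> cache = new HashMap<>();
    public int misses = 0; // illustrative counter for the sketch

    public Method lookup(Class<?> clazz, String name) {
        String key = clazz.getName() + "." + name;
        Method found = cache.get(key);
        if (found == null) {
            misses++; // full introspection on every miss
            for (Method m : clazz.getMethods()) {
                if (m.getName().equals(name)) {
                    found = m;
                    break;
                }
            }
            cache.put(key, found);
        }
        return found;
    }
}
```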
Velocity has a cache enhancement: when parameters are sub-classes,
the additional access signature is also cached. Still, this means
dozens of Java statements until the reflected method is executed.
It seems that Geir has enhanced speed by doing a "try-the-obvious"
getMethod() before entering the introspection mechanism. This makes
the introspection mechanism/cache somewhat obsolete.
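The shortcut, as I understand it, could be sketched like this (a
guess at the idea, not Geir's actual code): try an exact
Class.getMethod() first and fall back to the slower assignability
search only when the exact signature is missing, e.g. because the
argument is a sub-class of the declared parameter type.

```java
import java.lang.reflect.Method;

// Sketch of a "try-the-obvious" method lookup.
public class ObviousFirst {
    public static Method find(Class<?> clazz, String name, Class<?>[] argTypes) {
        try {
            // Fast path: exact signature match, no cache needed.
            return clazz.getMethod(name, argTypes);
        } catch (NoSuchMethodException e) {
            // Slow path: the exact match failed (e.g. sub-class
            // argument), so search for an assignable signature.
            return searchAssignable(clazz, name, argTypes);
        }
    }

    private static Method searchAssignable(Class<?> clazz, String name, Class<?>[] argTypes) {
        for (Method m : clazz.getMethods()) {
            if (!m.getName().equals(name)) continue;
            Class<?>[] params = m.getParameterTypes();
            if (params.length != argTypes.length) continue;
            boolean ok = true;
            for (int i = 0; i < params.length; i++) {
                if (!params[i].isAssignableFrom(argTypes[i])) {
                    ok = false;
                    break;
                }
            }
            if (ok) return m;
        }
        return null;
    }
}
```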
I may be wrong in my understanding above, since I have not done a
deep analysis since Geir added the ica/icb stuff. Please correct
me if I've made wrong statements.
Compiling (in memory, please - to avoid being like JSP) means that
the template is a class. References become simple Context.get()
method calls. Identifiers are compiled into some type of method lookup.
Since the classes in the context are not necessarily known at compile
time (or may change thereafter), there must exist some type of
post-compiler and lookup tables for these accessor classes. This
ends up being something like a reflection cache - avoiding
Method.invoke().
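Just to illustrate the direction (everything here is hypothetical -
a stand-in Context and a hand-written render method, not generated
code): the compiler would turn a template like "Hello $name!" into a
class whose references are plain Context.get() calls.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a compiled template: the template is a
// class, and each $reference becomes a direct Context.get() call.
public class CompiledTemplateSketch {
    // Minimal stand-in for the template context.
    public static class Context {
        private final Map<String, Object> values = new HashMap<>();
        public void put(String key, Object value) { values.put(key, value); }
        public Object get(String key) { return values.get(key); }
    }

    // What the compiler might emit for the template "Hello $name!".
    public static String render(Context context) {
        StringBuilder out = new StringBuilder();
        out.append("Hello ");
        out.append(context.get("name")); // $name -> direct Context.get()
        out.append("!");
        return out.toString();
    }
}
```

Accessor references like $customer.Name would additionally need the
post-compiler/lookup-table step described above, since the class of
"customer" is only known at render time.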
An implementation like this would be on the bleeding edge of
technology and could possibly be a performance boost.
To really improve performance we will need to assess different tradeoffs:
* central cache bottleneck
* per-request cache overhead
* compiled methods (e.g. SDK1.3 reflection proxy classes/templates?)
And a mixture of these...
:) Christoph