On 27/06/12 18:05, Sean P. DeNigris wrote:
Germán Leiva wrote
OT: Late binding rocks!
Yes, and the latest binding would be to look it up at runtime. I have no idea
what the performance hit would be, or whether we would use all that power, but
the idea is exciting :)
With proper caching at VM level, the performance overhead would be
negligible.
Typically, when a class is accessed, something like a PUSH_GLOBAL
instruction is generated. One may change the semantics of this instruction
to perform a proper message send back into the image, effectively doing
something like (for instance)
self class resolveGlobalByName: globalName
This is very flexible and allows for very nice tricks.
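To make the idea concrete, here is a minimal sketch (in C, not actual Pharo/Cog VM code) of a bytecode handler where PUSH_GLOBAL calls back into image-side resolution code instead of fetching the global directly; the function names and toy object ids are invented for illustration:

```c
#include <string.h>

/* Stand-in for `self class resolveGlobalByName: globalName`.
   In a real VM this would be a message send into Smalltalk code;
   the names and ids below are purely illustrative. */
static int resolve_global_by_name(const char *name) {
    if (strcmp(name, "Transcript") == 0) return 1; /* toy object id */
    if (strcmp(name, "Object") == 0) return 2;
    return 0; /* nil */
}

enum Insn { PUSH_GLOBAL };

/* Executes one instruction; a real interpreter would push the
   result onto the operand stack instead of returning it. */
static int execute(enum Insn insn, const char *operand) {
    switch (insn) {
    case PUSH_GLOBAL:
        /* Late bound: the lookup policy lives in (simulated)
           image-side code, not hard-wired into the VM. */
        return resolve_global_by_name(operand);
    }
    return 0;
}
```

The point is that the VM itself no longer knows how globals are found; swapping the resolution policy means changing the Smalltalk method, not the VM.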
This can be further performance-optimized. One may use a sort of inline
cache keeping a pair: a pointer to a function that returns the desired
value, and some "cached data".
Initially, the function pointer would be initialized to a function that
calls back Smalltalk code like the one above. However, the Smalltalk
code __may__ decide that the resolution is "stable" (i.e., next time the
code runs, the same class should be returned) and fill the cache. The
standard Smalltalk resolve method then would look like
resolveGlobalByName: globalName cache: cache
    | v |
    v := Smalltalk at: globalName.
    cache bindTo: v.
    ^ v
where cache is a Smalltalk reflection object representing this
kind-of-inline cache.
When properly JITed, the cost of a global access would be the same as
before (except for the very first one) - ideally just one memory
fetch.
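The pair described above can be sketched in C as follows; this is an invented illustration of the mechanism, not the actual Cog/Pharo implementation, and all names are hypothetical:

```c
#include <string.h>

/* One global-access site: a function pointer plus cached data. */
typedef struct GlobalCache GlobalCache;
typedef void *(*ResolveFn)(GlobalCache *site, const char *name);

struct GlobalCache {
    ResolveFn resolve;  /* slow fallback, later swapped for fast path */
    void *cached_value; /* filled once resolution is deemed "stable" */
};

/* Fast path: just one memory fetch, as described above. */
static void *resolve_cached(GlobalCache *site, const char *name) {
    (void)name;
    return site->cached_value;
}

/* Stand-in for `Smalltalk at: globalName` -- a single toy global. */
static int the_answer = 42;
static void *smalltalk_at(const char *name) {
    return strcmp(name, "Answer") == 0 ? (void *)&the_answer : NULL;
}

/* Equivalent of `cache bindTo: v`: stores the value and switches
   this site over to the fast path. */
static void cache_bind_to(GlobalCache *site, void *value) {
    site->cached_value = value;
    site->resolve = resolve_cached;
}

/* Slow path: calls back into the (simulated) image-side resolver,
   which here always decides the binding is stable. */
static void *resolve_slow(GlobalCache *site, const char *name) {
    void *v = smalltalk_at(name);
    cache_bind_to(site, v);
    return v;
}
```

A site starts out as `{ resolve_slow, NULL }`; the first `site.resolve(&site, "Answer")` runs the callback and rebinds the pointer to `resolve_cached`, so every later access is one indirect call plus one memory fetch.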
Cheers, Jan
--
View this message in context:
http://forum.world.st/Pharo-and-Namespaces-tp4636635p4636978.html
Sent from the Pharo Smalltalk mailing list archive at Nabble.com.