Hi all!
[EMAIL PROTECTED] wrote:
>Hello [EMAIL PROTECTED],
>
>On 19-Jul-00, [EMAIL PROTECTED] wrote:
>
> > [EMAIL PROTECTED] wrote:
> >
> >> In REBOL you don't only have two scopes (contexts) - you have a tree
> >> of contexts, each one with a parent context, the top level context
> >> being what you could call global variables.
> >
> > This is probably not true; each context is independent from the
> > others.
> >
> >> As you can see, contexts are traversed bottom-up to find the value
> >> of a word.
> >
> > They aren't. BIND simply binds only words that are found in the
> > context --- the others are left untouched. The binding is static
> > and no lookup is needed at evaluation time.
>
>Well, I guess that's right for 'bind.
>BUT if no hierarchy is present, how does the following work?
Hey Gabriele, mind if I give this a go?
REBOL scope rules are applied as a side effect of the
context-binding process. The following works because
the scopes are nested like they are in most block-
structured languages. The binding of a block of code
is performed before the code is executed, so no runtime
structures are needed to support scoping. I'll tell you
about the changes to the runtime state as we go.
>REBOL [
> Title "Test contexts"
>]
When the script is loaded (do does this before executing
the code), all words in the script are bound to the global
context, including words in nested blocks and parens. Any
words that weren't in the global context already are added
to it (the only context that allows this). Any rebinding
to nested contexts happens afterwards, as those scopes
are established (more on that below).
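Here's a minimal sketch of that point (the words x and
blk are mine, not from the script above): every word a
loaded script contains, even inside nested blocks, comes
out bound to the global context.
    x: 10
    blk: [x [x]]
    print get first blk           ; == 10
    print get first second blk    ; == 10 - the nested 'x is global too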
>obja: make object! [
The make object! operation first creates a new context
containing the word self and any words that appear as
set-words directly in the spec block (not in nested
blocks or parens). The spec block is then bound to that
context by the bind function. Words that don't exist in
the new context are not rebound, and therefore stay
bound to whatever context they were already bound to, in
this case the global context.
Here's some REBOL pseudo-code of this (for block! spec):
make-object: func [spec [block!] /local c o] [
    c: copy [self]
    foreach x spec [
        all [
            set-word? x
            none? find c (x: to-word x)
            insert tail c x
        ]
    ]
    o: make-context c  ; Here's the "pseudo-" part
    o/self: o
    spec: bind/copy spec in o 'self
    do spec
    o/self  ; NOT o, for some reason
]
This selective rebinding is how the scoping rules are
implemented in REBOL. Scoping is a side effect of the
repeated rebinding to "nested" contexts.
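To see that selective rebinding in isolation, here's a
small sketch (the names and strings are mine): only the
set-words from the spec end up in the new context, and
everything else keeps the binding it already had.
    a: "global a"
    obj: make object! [
        b: "obj b"
        probe a      ; 'a is not a set-word here, so it stays global
    ]
    probe first obj  ; == [self b] - only 'self and 'b are in the context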
> a: "This is obja's value of 'a"
> b: "This is obja's value of 'b"
These references are to words directly bound to their
context, in this case obja's. No lookup is done here -
the words were bound to that context before this block
was passed to do. Since there is no runtime lookup of
words, no runtime infrastructure of context trees or
stacks is needed.
This direct binding makes the runtime execution of REBOL
code more like compiled code for a _very_ smart,
extensible virtual machine. It's not so much that REBOL
is interpreted as that it is half-compiled. This is why
REBOL 2.x is so much faster than REBOL 1.x.
> objb: make object! [
Note that the words 'make and 'object! were not rebound
above because they don't exist in the obja context. They
kept their earlier binding to the global context, so the
global values can still be referenced in this scope.
Remember that no variable lookup is required to find
these values at runtime - the binding was done earlier.
> b: "This is objb's value of 'b"
> c: "This is objb's value of 'c"
> objc: make object! [
> c: "This is objc's value of 'c"
> d: "This is objc's value of 'd"
See above.
> test-it: func [] [
Another context is defined here, but it doesn't matter
because it's empty - no words are rebound here.
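For example (a sketch with my own names), a function with
no arguments or locals rebinds nothing in its body:
    a: "outer"
    f: func [] [print a]   ; no spec words, so 'a keeps its outer binding
    f                      ; prints: outer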
> print ["words defined in this object! is:" mold first self]
The word 'self keeps its most recent binding, which is to
the objc context. The other words are still global.
> print ["the value of 'a is:" mold a]
The word 'a keeps its most recent binding, which is to
the obja context. The other words are still global.
> print ["the value of 'b is:" mold b]
The word 'b keeps its most recent binding, which is to
the objb context. The other words are still global.
> print ["the value of 'c is:" mold c]
The word 'c keeps its most recent binding, which is to
the objc context. The other words are still global.
> print ["the value of 'd is:" mold d]
The word 'd keeps its most recent binding, which is to
the objc context. The other words are still global.
> ]
> ]
> ]
>]
>
>obja/objb/objc/test-it
>
>## do %test.r
>words defined in this object! is: [self c d test-it]
>the value of 'a is: "This is obja's value of 'a"
>the value of 'b is: "This is objb's value of 'b"
>the value of 'c is: "This is objc's value of 'c"
>the value of 'd is: "This is objc's value of 'd"
>
>
>-- end of REBOL code --
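The same effect stripped down to a single shadowed word,
if that helps (these names are mine, not from the script):
    b: "global b"
    o: make object! [
        b: "object b"
        show: does [print b]   ; this 'b was rebound to o's context
    ]
    o/show    ; prints: object b
    print b   ; prints: global b - the outer binding is untouched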
And there you go.
>To accomplish this, I see two possibilities:
>a) contexts contain a reference to their parent
>b) contexts inherit from their parent
>
>Either way, it _looks_ like a tree :-)
>
>I might be completely wrong, please let me know :-)
You are thinking in terms of the way fully interpreted
languages are implemented. In those languages, variable
binding is done at runtime while the code is executing.
This continual lookup of variables is part of what makes
these interpreters slow. They also have the overhead of
structures like those you mention.
In fully compiled languages, the compiler performs all
variable lookup and binding during the course of the
compilation. By the time the code gets to the runtime,
there are usually no traces of variables at all, and no
runtime structures to support them. All that's left are
values and operations. (Languages with metaprogramming
leave a few traces behind to support that, but you don't
use these structures during normal execution). Languages
like Pascal with nested scopes also use stack frames to
refer to values in outer contexts (or continuations for
languages like ML), but the code that walks such frames
is precalculated by the compiler and often optimized away.
REBOL splits the process up into several steps. The load
function changes the text into a nested data structure
with word values in it (among others) and binds those
word values to the global context (system/words). The
resulting data structure is then treated like virtual
machine code by functions like do and if, but treated
like compiler intermediate code by functions like use,
make object!/function! and bind. Every word value has a
direct pointer to its associated context assigned by
bind, no matter where it was defined.
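As a hedged illustration of that last point (ctx and blk
are my names): bind gives each word in a block a direct
pointer to a context, and it does so destructively unless
you use bind/copy.
    x: "global"
    ctx: make object! [x: "inner"]
    blk: [x]                      ; bound to the global context at load
    print do blk                  ; prints: global
    print do bind blk in ctx 'x   ; prints: inner - 'x now points at ctx
    print do blk                  ; prints: inner - bind modified blk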
REBOL's half-compiled, half-interpreted code is faster
than a straight interpreter at runtime, but harder to
explain. It's more flexible than a compiler (and faster
to get through the compilation parts than badly designed
languages like C) but has more runtime overhead.
There are tradeoffs to make with any language design -
personally, I like the results with REBOL.
> > There are some problems in allowing a word to be removed
> > from a context. Adding should be feasible, I think.
>
>What kind of problems? (I haven't really thought this thru,
>I must admit)
Even the ability to add words to a context would cause a
lot of problems. Consider that nested scoping with direct
binding requires that binding to any (non-global) context
be performed before any code in the associated scope is
executed. If you could add words to a scope at runtime,
you would have to rebind its code every time you did so.
Since rebinding has side effects in REBOL, you would
essentially be creating new objects anyway and then
throwing away the original template (something you can
do in simple REBOL code now). This has even more
overhead than doing dynamic binding. Removing words has
the same drawbacks, and more.
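One way to do that "in simple REBOL code now" (a sketch;
obj1 and obj2 are made-up names) is to derive a new
object from the old one:
    obj1: make object! [a: 1]
    obj2: make obj1 [b: 2]   ; a new context with self, a, and b
    probe first obj1         ; == [self a] - the original is unchanged
    probe first obj2         ; == [self a b]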
The ability to add or remove words from REBOL contexts
would so radically change the semantics of the language
that we would be back to REBOL 1.x, envying Perl's speed
instead of matching or beating it. Much of our code
would need rewriting too, to match the new semantics.
The dialect would need to change accordingly.
Now would you want that? :)
Brian Hawley