Damian Conway wrote:
sub debug is immediate is exported (@message) {
return $debugging ?? { print $*STDERR: @message; } :: {;}
}
Won't @message need lazy evaluation? How will Perl know to
delay interpolation until the result of the macro is called
at run time?
- Ken
Damian Conway wrote:
For that reason, even if we can solve this puzzle, it might be far kinder
just to enforce parens.
I might be weird, but when I use parens to clarify code in Perl, I
like to use the Lisp convention:
(method $object args)
Hopefully that will still work even if Perl 6
Smylers wrote:
Ken Fox wrote:
How about formalizing global namespace pollution with something like
the Usenet news group formation process? Ship Perl 6 with a very
small number of global symbols and let it grow naturally.
If the initial release of Perl 6 doesn't have commonly-required
Damian Conway wrote:
sub part ($classifier, *@list) {
return @parts;
}
Given the original example
(@foo,@bar,@zap) := part [ /foo/, /bar/, /zap/ ] @source;
this binds the contents of @parts to (@foo,@bar,@zap)? The
array refs in @parts are not flattened though. Is it
Damian Conway wrote:
Ken Fox asked:
Is it correct
to think of flattening context as a lexical flattening? i.e.
only terms written with @ are flattened and the types of
the terms can be ignored?
I'm not sure I understand this question.
Sometimes array references behave as arrays, e.g.
push
Michael Lazzaro wrote:
(@foo,@bar,@zap) := classify { /foo/ ;; /bar/ ;; /zap/ } @source;
A shorthand for:
for @source {
given {
when /foo/ { push @foo, $_ }
when /bar/ { push @bar, $_ }
when /zap/ { push @zap, $_ }
}
}
How about just
Damian Conway wrote:
my $iter = fibses();
for <$iter> {...}
(Careful with those single angles, Eugene!)
An <...> operator isn't legal when the grammar is expecting an
expression, right? The < must begin the circumfix <...> operator.
Is the grammar being weakened so that yacc can handle it? The
rule
Damian Conway wrote:
Ken Fox wrote:
The < must begin the circumfix <...> operator.
Or the circumfix <<...>> operator. Which is the problem here.
This is like playing poker with God. Assuming you can get over
the little hurdles of Free Will and Omniscience, there's still
the problem of Him pulling
Damian Conway wrote:
It's [...] the ASCII synonym for the «...» operator, which
is a synonym for the qw/.../ operator.
Nope. Heredocs still start with <<.
Hey! Where'd *that* card come from? ;)
Seriously, that's a good trick. How does it work? What do these
examples do?
print <a b c>;
Andy Wardley wrote:
Can we overload + in Perl 6 to work as both numeric addition
and string concatenation ...
Isn't there some nifty Unicode operator perl6 could enlist? ;)
How about concatenating adjacent operands? ANSI C does this
with string constants and it works very well. It would
Michael G Schwern wrote:
Before this starts up again, I hereby sentence all potential repliers to
first read:
string concatenation operator - please stop
http://archive.develooper.com/perl6-language@perl.org/msg06710.html
The bike shed thing is like Godwin's Law. Only I don't know
which side
Me wrote:
YAK for marking something.
I've been assuming that a keyword will only have
meaning in contexts where the keyword is valid.
Given the shiny new top-down grammar system, there's
no requirement for keywords to be global. (Context
sensitive keywords fall out of Perl 6 grammars
naturally
Jonathan Scott Duff wrote:
Um ... could we have a zip functor as well? I think the common case
will be to pull N elements from each list rather than N from one, M
from another, etc. So, in the spirit of timtowtdi:
for zip(a,b,c) -> $x,$y,$z { ... }
sub zip (\:ref repeat{1,}) {
my $max =
Austin Hastings wrote:
At this point, Meestaire ISO-phobic Amairecain Programmaire, you have
achieved keyboard parity with the average Swiss six-year-old child.
The question is not about being ISO-phobic or pro-English. **
The question is whether we want a pictographic language. I like
the
Damian Conway wrote:
Larry Wall wrote:
That suggests to me that the circumlocution could be <<*>>.
A five character multiple symbol??? I guess that's the penalty for not
upgrading to something that can handle unicode.
Unless this is subtle humor, the Huffman encoding idea is getting
seriously
Austin Hastings wrote:
The << and >> are just as pictographic (or
not) as [ and ].
I'm not particularly fond of << or >> either. ;) Damian just
wrote that he prefers non-alphabetic operators to help
differentiate nouns and verbs. I find it helpful when people
explain their biases like that. What's
Luke Palmer wrote:
This requires infinite lookahead to parse. Nobody likes infinite
lookahead grammars.
Perl already needs infinite lookahead. Anyways, most people
don't care whether a grammar is ambiguous or not -- if we did,
natural human languages would look very different.
People want
Luke Palmer wrote:
On Thu, 12 Sep 2002, Ken Fox wrote:
Perl already needs infinite lookahead.
Really? Where?
Indirect objects need infinite lookahead and they are
in the core language. Hyper operators may need lookahead.
Place holders may need lookahead. User defined rules
will definitely
Damian Conway wrote:
Though leaving optimization in the hands of the programmer
is generally a Bad Idea.
That doesn't sound like a Perl slogan.
It's also a matter of syntactic consistency. It has to be := for
inlined bindings (i.e. rx/ $name:=ident /) because otherwise
we make = meta
Mr. Nobody wrote:
/^([+-]?)(?=\d|\.\d)\d*(\.\d*)?([Ee]([+-]?\d+))?$/
would actually become longer:
/^([+-]?)<before \d|\.\d>\d*(\.\d*)?([Ee]([+-]?\d+))?$/
Your first expression uses capturing parens, but the captures
don't bind anything useful, so you should probably compare
non-capturing
Brent Dax wrote:
Keep in mind how much it could inflate the bytecode if
we render a ton of generic, N-dimensional hyper-operator logic into
bytecode.
The main problem I see is that you could spend minutes executing
inside that hyper op. Doesn't that screw with the plan for putting
the event
Damian Conway wrote:
Because what you do with a hypothetical has to be reversible.
And binding is far more cheaply reversible than assignment.
Why not leave it in the language spec then? If it's too
hard to implement, then the first release of Perl 6 can
leave it out. Someday somebody might
Dan Sugalski wrote:
At 9:10 AM -0400 9/4/02, [EMAIL PROTECTED] wrote:
So, just to clarify, does that mean that multi-dispatch is (by
definition)
a run-time thing, and overloading is (by def) a compile time thing?
No. They can be both compile time things or runtime things, depending on
David Whipp wrote:
But can I use a non-constant date?
You didn't show us the iso_date rule.
Obviously we could put the onus on the module writer to write super-flexible
rules/grammars. But will there be an easy way to force interpolative context
onto this type of regex-valued subroutine
Peter Haworth wrote:
Also the different operators used (:= inside the rule, = inside the code)
seems a bit confusing to me; I can't see that they're really doing anything
different:
/ $x := (gr\w+) /  vs  / (gr\w+) { let $x = $1 } /
Shouldn't they both use C<:=>?
Depends on
Damian Conway wrote:
One possibility is that a modifier is
implemented via a special class:
my class Decomment is RULE::Modifier
is invoked(:decomment) {
method SETUP ($data, $rule) {
...
}
# etc.
}
The thing I'd like to do right now is turn on :w
for all rules. A Fortran grammar might want to turn
on :i for all rules.
Maybe add modifiers to the grammar declaration?
grammar Fortran :i { ... }
It would also be convenient to allow the :w
modifier to have lexically scoped behavior so a
Piers Cawley wrote:
Unless I'm very much mistaken, the order of execution will
look like:
$2:=$1; $1:=$2;
You're not binding $2:=$1. You're binding $2 to the first
capture. By default $1 is also bound to the first capture.
Assuming that numbered variables aren't special, the order
Simon Cozens wrote:
[EMAIL PROTECTED] (Damian Conway) writes:
%hash4 = (Something, mixing, pairs => and, scalars);
That's perfectly okay (except you forgot the quotes around the and
and you have an odd number of elements initializing the hash).
Urgh, no. Either a pair is an atomic entity
Damian Conway wrote:
No. It will be equivalent to:
[\x0a\x0d...]
I don't think \n can be a character class because it
is a two character sequence on some systems. Apoc 5
said \n will be the same everywhere, so won't it be
something like
rule \n { \x0d \x0a | \x0d | \x0a }
Hmm. Now
Damian Conway wrote:
rule expr1 {
<term> { m:cont/operators/ or fail } <term>
}
Backtracking would just step back over the rule as if it were atomic
(or followed by a colon).
Ok, thanks. (The "followed by a colon" is just to explain the behavior,
right? It's illegal to follow a
Larry Wall wrote:
On Fri, 30 Aug 2002, Ken Fox wrote:
: Ok, thanks. (The "followed by a colon" is just to explain the behavior,
: right? It's illegal to follow a code block with a colon, isn't it?)
I don't see why it should be illegal--it could be useful if the closure
has played
A question: Do rules matched in a { code } block set backtrack points for
the outer rule? For example, are these rules equivalent?
rule expr1 {
<term> { /operators/ or fail } <term>
}
rule expr2 {
<term> <operators> <term>
}
And a comment: It would be nice to have procedural control over
Aaron Sherman wrote:
rule { <term> { /operators/.commit(1) or fail } <term> }
The hypothetical commit() method being one that would take a number and
That would only be useful if the outer rule can backtrack into the
inner /operators/ rule. Can it?
I agree with you that a commit method
Nicholas Clark wrote:
I can write a more efficient implementation of abs_i ... things will
go slightly faster
The law of diminishing returns is broken for a VM. Eventually you
reach a point where adding more ops actually decreases total
performance. Instead of the change in performance tending
Nicholas Clark wrote:
It seems that foo & (foo - 1) is zero only for a power of 2 (or foo == 0)
but is there a fast way once you know that foo is a power of 2, to find out
log2 foo?
You're right about (foo & (foo - 1)).
gcc uses a repeated test and shift. That works very nicely if foo
is
Nicholas Clark wrote:
But there do seem already to be arguably duplicate 2 operand versions of
many ops. Hence I was surprised at the lack of consistency.
Right. I suspect that as people get more experience with the new
Perl 6 compiler the 2 operand ops will go away (or vice versa).
At the
Dave Storrs wrote:
why didn't you have to write:
rule ugly_c_comment {
/
\/ \* [ .*? <ugly_c_comment>? ]*? \* \/
{ let $0 := }
/
}
Think of the curly braces as the regex quotes. If { is the quote
then there's nothing
Uri Guttman wrote:
but remember that whitespace is ignored as the /x mode is on
all the time.
Whoops, yeah. For some reason I kept literal mode on when
reading the spaces between two literals.
The rules {foo bar} and {foobar} are the same, but some
very low level part of my brain is
Jason Gloudon wrote:
http://www.ddj.com/ftp/2001/2001_07/aa0701.txt
I believe the LOOKUP method was the fastest for me on SPARC, if I recall
correctly.
Did they really spend 64K to create a lookup table just to find
the most significant bit? Calculating log2 for a power of two is
simpler --
James Mastros wrote:
In byteswapping the bytecode ...
I propose that we make INTVAL and opcode_t the same size, and guaranteed
to be able to hold a void*.
It sounds like you want portable byte code. Is that a goal? It seems like
we can have either mmap'able byte code or portable byte code,
Can't find any articles or notes on what happened
at the conference. What happened? I'm really curious
about the Worse is Better panel and the talk that
Dan and Simon gave.
- Ken
Simon Cozens wrote:
Us: We're working on this, that and the other.
Them: Pshaw. We solved those problems thirty years ago.
Were Perl and Python both grouped into the same category
of re-inventing the wheel? Or is this just the academic
distaste for Perl syntax showing through? I had hoped that
Shlomi Fish wrote:
Proper Tail Recursion is harder to debug, but consumes less memory and is
faster to execute ...
It definitely consumes less memory, but performance is the same (until
the memory issue starts dominating...) I don't know what you mean by
debugging -- user code or parrot
I made a couple changes to the drawing. Stacks
and register structures are now a bit more
conceptual -- it is much easier to see how they
work.
See http://www.msen.com/~fox/parrotguts.png
for the latest. Keep in mind that the light blue
frame stuff at the bottom is experimental.
Anybody
Michael L Maraist wrote:
Are we allowing _any_ dynamic memory to be non-GC-managed?
Parrot must allow third party libraries to use the standard system
malloc/free. Playing linker games to hide malloc/free gets *really*
ugly.
Can we assume that a buffer object is ONLY accessible by a single
Dan Sugalski wrote:
Nope, not stone tablet at all. More a sketch than anything else,
since I'm not sure yet of all the things Larry's got in store.
Ok. I've made some more progress. There's a crude picture of
some of the internals at http://www.msen.com/~fox/parrotguts.png
The lexical stuff is
Robert Spier wrote:
On Sun, Nov 11, 2001 at 07:38:28PM -0500, Ken Fox wrote:
| ... Powerpoint would be a better choice since everybody
| has to deal with that format anyway.
Please, no! Powerpoint is one of the few formats which
cannot be easily read on a non Windows or Mac system. Any
A few days ago I suggested inlining some PMC methods
would be good for performance. It turns out that this
question has been heavily studied by the OO community
for at least 10 years. A common solution is to
dynamically replace a method call with the body of the
method wrapped in an if statement.
Simon Cozens wrote:
You save one level of indirection, at a large complexity
cost.
A lot less complexity than a JIT though. 100% portable
code too.
It's also something that can be bolted on later, so there's
no reason to reject it now. I'm just throwing it out to the
list because I know other
Michael L Maraist wrote:
No barriers for us?
Generational collectors require a write barrier because
old objects must never point to younger ones. ('Course Dan
said he's starting with a simple copying collector, so we
don't need a barrier. Hmm. I guess Dan's not *reject*ing
a barrier, just
Dan Sugalski wrote:
[native code regexps] There's a hugely good case for JITting.
Yes, for JITing the regexp engine. That looks like a much easier
problem to solve than JITing all of Parrot.
If you think about it, the interpreter loop is essentially:
while (code) {
code =
Dan Sugalski wrote:
Gack. Looks like a mis-placed optimization in perl 5. The list of a foreach
is *supposed* to flatten at loop start and be static. Apparently not. :)
Is anybody keeping a list of things that are *supposed* to be static? Is
the list changing much with Perl 6?
Care to file
Dan Sugalski wrote:
my $foo;
$foo = 12;
print $foo;
$foo /= 24;
print $foo;
may well have the vtable pointer attached to the PMC for $foo change with
every line of code. Probably will, honestly.
Well, there's only two assignments there, so I assume that print is
Dan Sugalski wrote:
At 04:29 PM 11/7/2001 -0500, James Mastros wrote:
On Wed, Nov 07, 2001 at 10:15:07AM -0500, Ken Fox wrote:
If Perl can keep the loop index in an integer register, then Parrot
could use fast loop ops. IMHO there's no point in using fast loop ops
if taking the length
Simon Cozens wrote:
... Mono's work on JIT compilation ... they've got some pretty
interesting x86 code generation stuff going on.
Mono is doing some very cool stuff, but it's kind of hard
to understand at this time. The x86 code generation macros are
easy to use, but the instruction selection
Dan Sugalski wrote:
I doubt there'll be GC pluggbility. (Unless you consider Ripping out the
guts of resources.c and gc.c and replacing them pluggability... :) If it
works out that way, great, but I don't know that it's really something I'm
shooting for.
That problem doesn't bother me too
Dan Sugalski wrote:
No it isn't. It can get the integer length of the array and stuff it in a
register at the beginning of the loop, or do an integer compare when it
needs to, depending on the semantics of the loop.
Wow. Did you just come up with a place in Perl where static
behavior is
Dan Sugalski wrote:
At 07:47 PM 11/6/2001 -0500, Ken Fox wrote:
If the guts of a vtable implementation are ripped out and given an
op, isn't that inlining a PMC method? There doesn't seem much point
in replacing a dynamic vtable offset with a constant vtable offset.
The method really needs
Simon Cozens wrote:
On Mon, Nov 05, 2001 at 11:35:53PM -0500, Ken Fox wrote:
IMHO Perl is getting
Interesting construction. :)
Yeah, that should have been a disclaimer. I've heard static typing
proposed, but nothing appears finalized about anything yet. Something
like static typing might
Dan Sugalski wrote:
We might want to have one fast and potentially big loop (switch or computed
goto) with all the alternate (tracing, Safe, and debugging) loops use the
indirect function dispatch so we're not wedging another 250K per loop or
something.
Absolutely. There's no gain from
Simon Cozens wrote:
On Mon, Nov 05, 2001 at 02:08:21PM -0500, Ken Fox wrote:
we'd be a lot better inlining some of the PMC methods as ops instead of
trig functions. ;)
Won't work. We can't predict what kind of PMCs will be coming our way, let
alone what vtables they'll use, let alone
Garrett Goebel wrote:
Just does compile-time typing for $foo? Not inlining the constant?
You can't assume that the value associated with the symbol is
the same each time through the code, so how can it be inlined?
I was thinking lowercase typed variables couldn't be rebound, because
they
After downloading a dozen PDF files I've given up.
All I need is the approximate cycle counts for
instructions and address modes.
The particular problem I've got now is deciding
which of these three is the fastest:
movl (%edi,%eax,4),%eax
movl (%edi,%eax),%eax
movl (%edi),%eax
Same
Michael L Maraist wrote:
[an incredible amount of detailed information that will
take me weeks to digest...]
This looks like a malloc/free style allocator. Since the whole
GC system for Parrot is on the table, you don't have to constrain
yourself to malloc/free. IMHO free is not needed at all
will waste time with redundant loads, excessive register spills, etc.
Ken Fox wrote:
What happens when you goto middle depends on where you started.
Personally I'm all for throwing a fatal error, but that's just me.
:)
If you're copying things around that means you have to do a bunch
Michael L Maraist wrote:
The only memory storage for scalars that I currently am conceiving of is
in name-space stashes (globals). Thus our most likely implementation of S2S
would be to have 'add g_x, g_y, g_z' which performs two stash
lookups, an add, then one stash write.
Kakapo currently
Kevin Huber wrote:
This is a comparison of mops running on Parrot (-O6 on an Athlon 700)
versus Java JDK 1.4.0 beta 2 jitted and interpreted. You can see that
Parrot performance is very comparable to Java in interpreted mode.
I have an Athlon 700 too. With these compiler flags:
PERL-CFLAGS
Anybody do a gcc-specific goto *pc dispatcher
for Parrot yet? On some architectures it really
cooks.
- Ken
A little while back I posted some code that
implemented a storage-to-storage architecture.
It was slow, but I tossed that off as an
implementation detail. Really. It was. :)
Well, I've tuned things up a bit. It's now
hitting 56 mops with the mops.pasm example. Parrot
turns in 24 mops on the same
].word.lo);
pc += 4;
goto *(pc->i_addr);
I haven't counted derefs, but Parrot and Kakapo should be close.
On architectures with very slow word instructions, some code bloat
to store hi/lo offsets in native ints might be worth faster
address calculations.
Ken Fox wrote:
One thing I learned
Uri Guttman wrote:
that is good. i wasn't disagreeing with your alternative architecture.
i was just making sure that the priority was execution over compilation
speed.
I use a snazzy quintuple-pass object-oriented assembler written
in equal parts spit and string (with a little RecDescent
Uri Guttman wrote:
and please don't bring in hardware comparisons again. a VM design
cannot be compared in any way to a hardware design.
I have absolutely no idea what you are talking about. I didn't
say a single thing about hardware. My entire post was simply about
an alternative VM
Uri Guttman wrote:
so my point is that the speed of the VM is a separate issue from the ease
of code generation. an S2S VM would be easier to code generate for but
may be slower to run. the speed difference is still an open point as dan
has said. but since his goal is execution speed, that
A while back I wondered if a higher-level VM might be
useful for implementing higher-level languages. I
proposed a lexically scoped machine without registers
or stacks. The response wasn't encouraging.
A quick tour through the library turned up a few
machine designs that sounded very similar to
Simon Cozens wrote:
On Mon, Sep 10, 2001 at 08:38:43PM -0400, Ken Fox wrote:
Have you guys seen Topaz?
I may have heard of it, yes.
That's it? You're rejecting all of that work without
learning anything from it? Building strings on buffers
looked like a really good idea.
In general I
Bryan C. Warnock wrote:
On Monday 10 September 2001 09:30 pm, Dan Sugalski wrote:
gotos into scopes might not be allowed.
That's how it currently is for most scopes, and it certainly saves a
whole lot of trouble and inconsistencies.
I'm not sure I understand what you mean. Perl 5 allows
Dan Sugalski wrote:
If you're speaking of multiple buffers for a string or something like that,
you're looking at too low a level. That's something that should go in the
variables, not in the string bits. (We do *not* want all string ops slow to
support flexibility of this sort. Only the bits
Simon Cozens wrote:
FWIW, it's just dawned on me that if we want all of these things to be
overloadable by PMCs, they need to have vtable entries. The PMC vtable
is going to be considerably bigger than we anticipated.
Surely you don't expect the PMC vtable to be the only mechanism
for
Simon Cozens wrote:
=head1 The Parrot String API
Have you guys seen Topaz? One of many things I think Chip
did right was to build strings from a low-level buffer
concept. This moves memory management (and possibly raw-io)
out of the string class and into the buffer class.
The other major
Bryan C. Warnock wrote:
On Monday 10 September 2001 06:23 pm, Dan Sugalski wrote:
When we run out, we repeat the innermost type.
Why are you doing right-to-left instead of left-to-right?
Because it would be harder to repeat the innermost type then? ;)
Most binary ops will take identical
Dan Sugalski wrote:
At 05:41 PM 9/10/2001 -0400, Ken Fox wrote:
You're expecting the current lexical scope to be carried implicitly
via the PC?
No, it'll be in the interpreter struct.
But how does the interpreter know where a lexical scope begins
and ends in the bytecode? For example
Jeffrey Coleman Carlyle wrote:
Am I missing something (well, clearly I am), but are test.pasm and
test2.pasm missing from the CVS repository?
Look in ./t
- Ken
Dan Sugalski wrote:
jump FOO
doesn't change scope.
newscope scope_template_in_fixup_section
does. And
exitscope
leaves one. :)
Ok. That clears it up a little. The current scope is part of
the VM internal state and compilers need to generate state
change instructions if
Dan Sugalski wrote:
... you have to take into account the possibility that a
variable outside your immediate scope (because it's been defined in an
outer level of scope) might get replaced by a variable in some intermediate
level, things get tricky.
Other things get tricky too. How about
Dave Mitchell wrote:
The Perl equivalent $a = $a + $a*$b requires a
temporary PMC to store the intermediate result ($a*$b). I'm asking
where this tmp PMC comes from.
The PMC will stashed in a register. The PMC's value will be
stored either on the heap or in a special memory pool reserved
for
Dave Mitchell wrote:
So how does that all work then? What does the parrot assembler for
foo($x+1, $x+2, ..., $x+65)
The arg list will be on the stack. Parrot just allocates new PMCs and
pushes the PMC on the stack.
I assume it will look something like
new_pmc pmc_register[0]
add
Simon Cozens wrote:
I want to get on with writing all the other documents like this one, but
I don't want the questions raised in this thread to go undocumented and
unanswered. I would *love* it if someone could volunteer to send me a patch
to the original document tightening it up in the
Paolo Molaro wrote:
If anyone has any
evidence that coding a stack-based virtual machine or a register one
provides for better instructions scheduling in the dispatch code,
please step forward.
I think we're going to have some evidence in a few weeks. I'm not
sure which side the evidence is
Dan Sugalski wrote:
Dan Sugalski [EMAIL PROTECTED] wrote:
Where do they come from? Leave a plate of milk and cookies on your back
porch and the Temp PMC Gnomes will bring them. :)
Bad Dan! No cookie for me.
You aren't fooling anybody anymore... You might just as well stop the
charade
Dan Sugalski wrote:
At 02:05 PM 9/6/2001 -0400, Ken Fox wrote:
You wrote on perl6-internals:
get_lex P1, $x # Find $x
get_type I0, P1 # Get $x's type
[ loop using P1 and I0 ]
That code isn't safe! If %MY is changed at run-time, the
type and location of $x
Dan Sugalski wrote:
I think you're also overestimating the freakout factor.
Probably. I'm not really worried about surprising programmers
when they debug their code. Most of the time they've requested
the surprise and will at least have a tiny clue about what
happened.
I'm worried a little
Dave Mitchell wrote:
Can anyone think of anything else?
You omitted the most important property of lexical variables:
[From perlsub.pod]
Unlike dynamic variables created by the C<local> operator, lexical
variables declared with C<my> are totally hidden from the outside
world, including
[EMAIL PROTECTED] wrote:
Clearly caller() isn't what we want here, but I'm not
quite sure what would be the correct incantation.
I've always assumed that a BEGIN block's caller() will
be the compiler. This makes it easy for the compiler to
lie about %MY:: and use the lexical scope being
Damian Conway wrote:
It would seem *very* odd to allow every symbol table *except*
%MY:: to be accessed at run-time.
Well, yeah, that's true. How about we make it really
simple and don't allow any modifications at run-time to
any symbol table?
Somehow I get the feeling that *very* odd can't
Damian wrote:
Dan wept:
I knew there was something bugging me about this.
Allowing lexically scoped subs to spring into existence (and
variables, for that matter) will probably slow down sub and
variable access, since we can't safely resolve at compile time what
Damian wrote:
In other words, everything that Exporter does, only with lexical
referents not package referents. This in turn gives us the ability to
easily write proper lexically-scoped modules.
Great! Then we don't need run-time lexical symbol table
frobbing. A BEGIN block can muck with its'
Brent Dax wrote:
Ken Fox:
# Lexicals are fundamentally different from Perl's package (dynamically
# scoped) variables.
*How* are they fundamentally different?
Perl's local variables are dynamically scoped. This means that
they are *globally visible* -- you never know where the actual
Brent Dax wrote:
What I'm suggesting is that, instead of the padlist's AV containing
arrays, it should contain stashes, otherwise indistinguishable from
the ones used for global variables.
Lexicals are fundamentally different from Perl's package (dynamically
scoped) variables. Even if you
Dan Sugalski wrote:
For those of you worrying that parrot will be *just* low-level ops,
don't. There will be medium and high level ops in the set as well.
I was going to cite http://citeseer.nj.nec.com/romer96structure.html,
but you guys have already read that, eh? ;)
One thing I was