Re: RFC for $ME class variable (was Re: RFC 124 (v1) Sort order for any hash)
Damian Conway [EMAIL PROTECTED] writes: Errr. I would imagine that $ME contains: * a reference to the object, within an object method * the name of the class, within a class method * a reference to the *subroutine* itself, within a non-method. Ooh, recursive anonymous subroutines 'r' us: sub factorial { sub {return $_[1] if $_[0] == 0; $_[1] *= $_[0]--; goto &$ME}->($_[0], 1); } I wonder if this counts as a Good Thing. -- Piers
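As it happens, Perl 5 later grew a close cousin of this $ME idea: the __SUB__ token (a reference to the currently executing sub), available under the 'current_sub' feature since 5.16. A minimal sketch of the factorial above using it, with plain recursion instead of goto:

```perl
use strict;
use warnings;
use feature 'current_sub';   # enables the __SUB__ token (Perl 5.16+)

sub factorial {
    # Accumulator style, as in the original sketch: the anonymous
    # sub refers to itself via __SUB__ rather than a $ME variable.
    return sub {
        my ($n, $acc) = @_;
        return $acc if $n == 0;
        return __SUB__->($n - 1, $acc * $n);   # recurse into ourselves
    }->($_[0], 1);
}

print factorial(5), "\n";   # prints 120
```

This sidesteps the usual `my $f; $f = sub {...}` circular-reference trick for recursive anonymous subs.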
Re: RFC 76 (v1) Builtin: reduce
Bart Lateur wrote: On Thu, 17 Aug 2000 07:44:03 +1000, Jeremy Howard wrote: $a and $b were done for speed: quicker to set up those global variables than to pass values through the stack. The solution is to pass args in as $_[0] and $_[1]. sort { $_[0] <=> $_[1] } @list is very ugly. I *like* the syntax of sort { $a <=> $b } @list My original post actually said that the reason for this is that you can then write: sort { ^0 <=> ^1 } @list; ...which is pretty Perlish.
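For comparison, the reduce being discussed already exists in Perl 5 as List::Util::reduce, where the package variables $a and $b play the role the placeholders ^0 and ^1 would play:

```perl
use strict;
use warnings;
use List::Util qw(reduce);   # core module since 5.8

my @list = (1 .. 5);

# $a carries the running result, $b the next element.
my $sum     = reduce { $a + $b } @list;   # 15
my $product = reduce { $a * $b } @list;   # 120

print "$sum $product\n";
```

Like sort, reduce gets a bare block rather than a sub reference, which is part of why the $a/$b convention persists: the block is compiled once and no argument list is built per element.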
Re: RFC 76 (v1) Builtin: reduce
[EMAIL PROTECTED] wrote: I think all discussion of RFC 76 (reduce) should be on the new -data sublist. Jeremy, am I on track here? You sure are. Any stuff related to data crunching features belongs over there, please.
Re: RFC 133 (v1) Alternate Syntax for variable names
In [EMAIL PROTECTED], you wrote: count = array; # scalar context because of assignment to scalar. alt_array[] = array; # list context and if array is a subroutine? count = array(); count = array; # warning - special meaning in p5. Either would be just as messy - and I like being able to say: my $thingy = $object->subobject->value; I'm not the linguist that Mr. Wall is, but it strikes me that context should be derived automatically as much as possible. A slightly different alternative would be that arrays and hashes are always referred to with their trailing indicator ([] or {}). So, from the example above, you'd have count = array[]; alt_array[] = array[]; Yuck. Ugly as thingywhatsit, though it does have the advantage of making syntax like array[2..5] for splice... Um, don't know about hash{[a-c].*} though (apply regular expression and only keep keys that match) -- Bron ( but I don't think the ugliness is worth it in the end.. )
Re: Things to remove
Here in my pre-caffeine morning trance it occurs to me that a few of the "fringe" features of perl should be removed from the language. Here's a few things that I would venture to say that none of the "perl5 is my first perl" people have probably ever actually used. reset # How often do you clear variables wholesale? dump study # never been a win for me. ?pattern? # one-time match split ?pat? # implicit split to @_ What's everyone's feeling on removing these from perl6? How often are they used? One could make dump "work" by having it dump out not a core or a.out, but rather the byte codes representing the current state of the perl machine. This seems anywhere from somewhat to seriously useful, and follows in the spirit of what dump was always meant to do. --tom
Re: Things to remove
I've very rarely found cases where ?? was useful and // didn't work, and never in regular code. From the Camel: The C<??> operator is most useful when an ordinary pattern match would find the last rather than the first occurrence: open DICT, "/usr/dict/words" or die "Can't open words: $!\n"; while (<DICT>) { $first = $1 if ?(^neur.*)?; $last = $1 if /(^neur.*)/; } print $first,"\n"; # prints "neurad" print $last,"\n"; # prints "neurypnology" Nothing a SMOP can't address, but for one liners at the least, the S part would seem to preclude the P part. (Same for the -l line-mode flag.) --tom
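For what it's worth, the first/last distinction in that Camel example can be had without the ?? operator by guarding the first capture explicitly. A sketch (the word list here is made up so the example is self-contained rather than reading /usr/dict/words):

```perl
use strict;
use warnings;

my @words = qw(neurad neuralgia neurypnology other);

my ($first, $last);
for (@words) {
    if (/^(neur.*)/) {
        $first //= $1;   # defined-or assign: keep only the first hit (5.10+)
        $last    = $1;   # keep overwriting: ends up as the last hit
    }
}

print "$first\n";   # neurad
print "$last\n";    # neurypnology
```

The one-time-match ?? does win on terseness in one-liners, which is exactly Tom's point about the S in SMOP.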
Re: implied pascal-like with or express
Ken Fox wrote: Dave Storrs wrote: On Thu, 17 Aug 2000, Jonathan Scott Duff wrote: BTW, if we define C<with> to map keys of a hash to named place holders in a curried expression, this might be a good thing: with %person { print "Howdy, ", ^firstname, " ", ^lastname; } # becomes sub { print "Howdy, ", $person{$_[0]}, " ", $person{$_[1]}; }->('firstname', 'lastname'); You're breaking the halting rules for figuring out the bounds of a curried expression. Your original code should have become: with %person { print "Howdy, ", sub { $_[0] }, " ", sub { $_[0] }; } I don't believe so. The rule at issue here is probably: <quote> =item Sub called in void context Currying halts in the argument list of a subroutine (or method) that is called in a void context. The tree traversal example given above shows a method in a void context (any return value from $root->traverse is being ignored). Therefore just its argument is curried, rather than the whole call expression. </quote> I say 'probably' because it depends how 'with' is defined. Assuming that there are no explicit curry prototypes or sub prototypes floating around in the declaration of 'with', the commas do not limit the currying context. It gets worse with longer examples because each line is a separate statement that defines a boundary for the curry. IMHO, curries have nothing to do with this. All "with" really does is create a dynamic scope from the contents of the hash and evaluate its block in that scope. my %person = ( name => 'John Doe', age => 47 ); with %person { print "$name is $age years old\n"; } becomes { my $env = $CORE::CURRENT_SCOPE; while (my($k, $v) = each(%person)) { $env->bind_scalar($k, $v); } print "$name is $age years old\n"; } The thing I don't like about either of these suggestions is that the local scope is hidden.
In <cough> VB, you can say: dim height as double dim ws as new Excel.worksheet // 'worksheet' has a 'height' property with ws print .height // Accesses ws.height print height // Accesses me.height end with In Pascal, this is not possible. As a result, I find myself rarely using 'with' in Pascal, since it's rare that you do not need to access any of the local variables within a block.
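A sketch of how close Perl 5 can already get to `with %person`: slice the hash into explicitly named lexicals inside a bare block. The scope is visible rather than hidden, which sidesteps the objection above that `with` conceals where names come from. (The %person contents and field names are illustrative.)

```perl
use strict;
use warnings;

my %person = ( name => 'John Doe', age => 47 );

my $line;
{
    # Pull just the fields we want into named lexicals for this block.
    # Unlike a real 'with', the name list is explicit -- which is
    # arguably a feature: no silent shadowing of outer variables.
    my ($name, $age) = @person{qw(name age)};
    $line = "$name is $age years old";
}

print "$line\n";   # John Doe is 47 years old
```

The cost relative to a real `with` is having to repeat the key names once; the benefit is that nothing outside the listed names can be captured by accident.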
RFC 132 (v2) Subroutines should be able to return an lvalue
This and other RFCs are available on the web at http://dev.perl.org/rfc/ =head1 TITLE Subroutines should be able to return an lvalue =head1 VERSION Maintainer: Johan Vromans [EMAIL PROTECTED] Date: Aug 18, 2000 Last Modified: Aug 21, 2000 Version: 2 Mailing List: [EMAIL PROTECTED] Number: 132 =head1 ABSTRACT RFC 107 proposes that lvalue subs should receive their rvalues as subroutine arguments. RFC 118 counter-proposes that lvalue subs should receive their rvalues as lexical variables named in a prototype associated with the :lvalue modifier. This RFC proposes a terrifyingly simple solution for the growing complexity of the lvalue subroutines problem. It proposes the keyword C<lreturn> and discards the :lvalue property. =head1 DESCRIPTION If a sub wants to return an lvalue, this lvalue must be a real lvalue in all respects. In particular, all kinds of implicit and explicit value changes must be supported. For example, lvsub() = ... lvsub()++; lvsub() += ... lvsub() =~ s/// for ( lvsub() ) { $_ = ... } sysread($fh,lvsub(),...) and so on. One often heard argument is that subroutines like these must be callable in either mode: lvsub(expr) lvsub() = expr This argument is false, since the two uses are totally distinct. In the first case, the sub has control over what it does with the value while in the lvalue case the sub doesn't -- and doesn't care. If control is desired, use C<tie>. The clue is "If a sub wants to return an lvalue, what's returned must really be an lvalue". Therefore I propose a new keyword C<lreturn> that behaves just like C<return>, but returns the lvalue instead of the rvalue. After returning, everything is exactly as if the argument to lreturn were specified instead of the subroutine call. The :lvalue property is no longer needed and should be removed since it only causes confusion. A subroutine B<is not> an lvalue thing, it B<returns> an lvalue if it wants to. For example: sub lvsub { ...
lreturn $hash{somekey}; } lvsub() =~ s///   # identical to $hash{somekey} =~ s/// $ref = \lvsub()   # now $ref is \$hash{somekey} As a thought guide: think of C<lreturn> returning a reference to its argument, and the call to lvsub() performing a dereference. With the enhanced C<want> operator, subroutines can dynamically decide what to return. Interesting note: you can always use C<lreturn> instead of C<return>; for rvalue cases it does not matter. =head1 FUNDAMENTAL RESTRICTION There ain't no such thing as a free lunch, so there's a catch. Good programming practice requires that, in assignments, the right hand side gets evaluated before the left hand side. This is to make statements like $a[$i++] = $b[$i++] have defined semantics. This principle enforces a restriction on subroutines that want to return an lvalue. Consider lvsub() = some_sub() The problem is to determine the context in which some_sub() must be called. While with normal assignments this context is always clear, in this example the context is determined by what lvsub is going to lreturn, I<which is not going to happen before some_sub() has completed>. The only solution seems to be to restrict lvalue subroutines to return only scalar lvalues. Subroutine prototypes and/or attributes are often suggested as a means to overcome this restriction. However, due to the powers of Perl this is impossible. For example, consider sub C<foo> that returns an array lvalue, and sub C<bar> that returns a scalar lvalue. Even when prototyped or attributed, the following construction will cause problems: my $ref = $some_condition ? \&foo : \&bar; $ref->() = some_sub(); Another attempt to overcome this restriction is to change the evaluation order of assignments, probably only in case an lvalued sub is involved. I prefer not to consider this an option, but maybe others feel differently. Therefore the restriction of C<lreturn> to scalar lvalues must be considered a fundamental one. Fortunately, this restriction is not a problematic one.
Scalars are already "more than common" in Perl. For example, arrays and hashes can only contain scalars, and nobody has ever considered that a problem. References rulez! =head1 COMPATIBILITY The proposed solution is upward compatible with the current Perl5 implementation of lvalued subroutines: sub foo : lvalue { ...; $var } can be interpreted as a shorthand, or syntactic sugar, for sub foo { ...; lreturn $var } =head1 IMPLEMENTATION Every subroutine (and method) call can potentially return an lvalue, so the compiler cannot do smart things. All must be handled at run-time. =head1 REFERENCES RFC 107: lvalue subs should receive the rvalue as an argument RFC 118: lvalue subs: parameters, explicit assignment, and wantarray() changes NRETURN, "The SNOBOL4 Programming Language", Griswold & Polonsky. RFC 21: Replace C<wantarray> with a generic C<want> function
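For reference, Perl 5's existing (long experimental) :lvalue attribute already behaves much like the shorthand described in the COMPATIBILITY section: the sub's final expression, not a C<return>, is what becomes the lvalue. A minimal sketch of the lvsub from the RFC:

```perl
use strict;
use warnings;

my %hash;

# The final expression of an :lvalue sub is returned as an lvalue;
# using return() here would yield a plain rvalue in older perls.
sub lvsub : lvalue {
    $hash{somekey};
}

lvsub() = 'set through the sub';     # assignment through the call
lvsub() =~ s/set/changed/;           # in-place modification, as the RFC demands

print "$hash{somekey}\n";   # changed through the sub
```

This covers the "real lvalue in all respects" checklist for the scalar case; what Perl 5 never resolved is exactly the context-transfer problem the FUNDAMENTAL RESTRICTION section describes.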
Re: Draft 2 of RFC 88 version 2.
Those rules are hard to read. I've tried reading them quite a few times and I have trouble understanding them. I can't tell if the rules are complex or it simply needs to be reworked. If it is complex then I don't think this is the right approach. The rules should be simple. As for legacy. I strongly urge that Modules _never_ die. It is extremely rude. The fact that something went wrong doesn't mean that my 100 hour complex calculation should be terminated. The fact that I couldn't send an email message may or may not be of importance in the scheme of things. And if throw becomes the standard, then you are forcing _all_ programs to accept exception handling. I suggest that throw should be convertible into an effective return (with appropriate setting of $!) upon the (pragmatic) request of the _caller_. (I realize that this may not be possible, but I'd like to have it kept as a possibility. The call stack between the thrower and the catcher (where the catcher may have pragmatically asked for return style and the intermediaries may or may not have even thought about the problem.)) chaim "TO" == Tony Olekshy [EMAIL PROTECTED] writes: TO Perl's behaviour after a C<die> starts call-stack unwinding, as TO envisioned by this RFC, is as described by the following rules. TO 1. Whenever an exception is raised Perl looks for an enclosing TO try/catch/finally block. TO If such a block is found Perl traps the exception and proceeds TO as per rule 2, otherwise program shutdown is initiated. <snip> TO Legacy TO It is not the intent of this RFC to interfere with traditional TO Perl scripts; the intent is only to facilitate the availability TO of a more controllable, pragmatic, and yet robust mechanism when TO such is found to be appropriate. TO Nothing in this RFC impacts the tradition of simple Perl scripts. TO C<die "Foo";> continues to work as before.
TO There is no need to use try, catch, or finally, and most of the TO cases where you would want to use them take less source code TO with exceptions than with return code checking, as per the TO CONVERSION section above. -- Chaim Frenkel  Nonlinear Knowledge, Inc. [EMAIL PROTECTED] +1-718-236-0183
Re: Draft 2 of RFC 88 version 2.
At 11:03 AM 8/21/00 -0400, Chaim Frenkel wrote: Those rules are hard to read. I've tried reading them quite a few times and I have trouble understanding them. I can't tell if the rules are complex or it simply needs to be reworked. If it is complex then I don't think this is the right approach. The rules should be simple. We can try to simplify them. Unfortunately my part will have to wait until tomorrow, I must hie imminently. As for legacy. I strongly urge that Modules _never_ die. It is extremely rude. The fact that something went wrong doesn't mean that my 100 hour complex calculation should be terminated. The fact that I couldn't send an email message may or may not be of importance in the scheme of things. This is between you and the module author. Even if we could dictate this to them, it would be outside the scope of this RFC. In Java, the interface to a method specifies what exceptions it can throw, and it is an error for it to try to throw anything else. So it's very easy for the user to know what they should trap. A Perl module, though, will have to document what exceptions it can throw. And if throw becomes the standard, then you are forcing _all_ programs to accept exception handling. I suggest that throw should be convertible into an effective return (with appropriate setting of $!) upon the (pragmatic) request of the _caller_. Well, you could certainly have a pragma that makes throw set $! to the message and does a return undef. But that will only return from the current subroutine; there could be a bunch of module subroutines between that one and the module user. Asking module programmers to keep straight two possible flows of control in error situations, no less, is asking for trouble. If you think it can be made easier, can you show an example? I think all we can do is encourage module authors to provide both styles with a switch to select them by (API decided by module author).
The reality will be that some module authors will do that, some will do exception handling everywhere, some will do error return everywhere, and some will do a mixture of both in the same module. Which, btw, is what we have right now. Grep the core pm's for 'croak' and 'die'. Take a look at the (F)s in perldiag. Notice that division doesn't return undef when the divisor is 0 :-) (I realize that this may not be possible, but I'd like to have it kept as a possibility. The call stack between the thrower and the catcher (where the catcher may have pragmatically asked for return style and the intermediaries may or may not have even thought about the problem.)) Exactly :-( -- Peter Scott Pacific Systems Design Technologies
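The "convert a throw into a return" idea can at least be prototyped today at the call site with an eval wrapper, rather than a pragma; it only helps callers who opt in, and it does not address Chaim's harder case of intermediaries in the call stack. All names here (risky, soften) are illustrative:

```perl
use strict;
use warnings;

# A module sub that dies on error, in the style under discussion.
sub risky {
    my ($n) = @_;
    die "can't handle zero\n" if $n == 0;
    return 100 / $n;
}

# Turn any die from a coderef into a (value, error) pair at the
# call site: the caller-requested "effective return" style.
sub soften {
    my ($code, @args) = @_;
    my $result = eval { $code->(@args) };
    return $@ ? (undef, $@) : ($result, undef);
}

my ($val, $err) = soften(\&risky, 4);   # (25, undef)
my ($v2,  $e2)  = soften(\&risky, 0);   # (undef, "can't handle zero\n")

print defined $err ? "error: $err" : "ok: $val\n";
```

The limitation is exactly the one Peter raises: this unwinds only the frames inside the eval, so a module that dies three calls deep still takes its intermediaries down with it unless they opted in too.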
RE: C# (.NET) has no interpreters
http://www.cminusminus.org/ has pointers to three implementations. None are 'industrial strength' yet. You can't really implement C-- on top of C efficiently, because of (a) tail calls and (b) the runtime interface for garbage collection, exception handling etc. But you can do it inefficiently, as the Trampoline C-- compiler does (see the above URL for pointer to it). Simon | -----Original Message----- | From: Joshua N Pritikin [mailto:[EMAIL PROTECTED]] | Sent: 03 August 2000 15:01 | To: Simon Peyton-Jones | Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; | [EMAIL PROTECTED] | Subject: Re: C# (.NET) has no interpreters | | | On Thu, Aug 03, 2000 at 09:32:10AM -0400, | [EMAIL PROTECTED] wrote: | Joshua N Pritikin [EMAIL PROTECTED] wrote: | On Wed, Aug 02, 2000 at 07:30:23PM -0400, [EMAIL PROTECTED] wrote: |I'd prefer us to tackle native code generation using C as the |intermediate language instead of a JIT. | | Oh, yah. C is the obvious choice, but it doesn't have to | be the only | backend. In theory we could also generate C#'s IL. Or C--. | | Help. I'm only halfway through the C-- paper, and I'm wondering: | What is the status of implementations? Why not implement it as | extensions to existing C compilers? | | Simon, can you comment? | | -- | Never ascribe to malice that which can be explained by stupidity. |(via, but not speaking for Deutsche Bank) |
Re: Do threads support SMP?
Steven W McDougall wrote: Does Perl6 support Symmetric MultiProcessing (SMP)? This is a *huge* issue. It affects everything else that we do with threads. No it isn't. SMP is completely somebody else's problem. We need a language that works right on a single processor. If the hooks we use happen to make sense implemented into native threads on some hardware that supports native SMP threads, that's awesome, but we here are concerned with defining a LCD reference implementation. SMP is best, in hardware, when it allows you to reduce the number of context switches your CPU-intensives have to endure, by having the IO handled by one CPU and the thinking handled by the others, for instance. As soon as you have enough work that the number of active jobs > the number of CPUs, multi-cpu threading w/in a single job hurts your throughput because there is too much bookkeeping. Think of (Saint) Ambrose Bierce's observations about efficiency: If one man with one post hole digger can dig one post hole in one minute, sixty men with sixty post hole diggers can dig one post hole in one second. MP works best with large grains. "fully supports SMP to the level allowed by your operating system and hardware" is a sensible marketing claim, but nothing here needs to be done to back it up. To quote the AI professor, "IT'S AN IMPLEMENTATION DETAIL." -- David Nicol 816.235.1187 [EMAIL PROTECTED] Does despair.com sell a discordian calendar?
Re: TAI time
Dave Storrs [EMAIL PROTECTED] writes: If we are going to use this, I'd like to see us standardize on the highest-precision (i.e. attosecond) version. While it's not necessary in any application that I can currently think of and will probably never be necessary in 90% of Perl applications, when you need it, you need it, and if the core language doesn't support it, it can be a pain to get it. Well, actually...I suppose if there are huge penalties for using the attosecond version, maybe it wouldn't be worth it, but it doesn't sound from this post like that is the case. There shouldn't be. The only potential problem with the attosecond version (which really isn't much of a problem) is that it will have varying levels of precision on different platforms, but that's generally true of system time anyway. libtai will probably need some portability munging if we go that route (which I like as a basic idea; djb writes pretty solid code and his library interface seems fundamentally sound); right now, I believe it fails to compile on systems without a precision timer interface, mostly for the TAI64NA support. There may be systems where we can't get anything more precise than seconds out of the system clock. -- Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/
Re: RFC 132 (v1) subroutines should be able to return an lvalue
From rfc 98: =head2 acceptable coercions When resolving which method C<foo()> to call in a context CTXT, and there is no method C<foo()> defined for the context CTXT, Perl will examine the types listed in C<@CTXT::ISA{OVERLOAD_CONTEXTS}> for a list of other contexts to see if C<foo()> can produce, before throwing an error. This search is NOT recursive, unless defined so by the tying of the array to a dynamic iterator. =head2 ambiguity resolution In situations where multiple interpretations are possible, such as the f(g()) situation, the first possible method that will work is called. The search order is based on the preferences of the outer function, then the preferences of the inner function. Functions maintain their preference order in an array @PACKNAME::methodname{OVERLOAD_PREFERENCES} and the first context specifier found in that array, which can be satisfied with a call to the named method, is used. If no such array exists, @PACKNAME::OVERLOAD_PREFERENCES is consulted. Perl6 maintains a global @CORE::OVERLOAD_PREFERENCES which begins with C<qw(ARRAY SCALAR)> and has all types declared in the program appended to it as they appear, which is used when neither a method nor its class provides its own OVERLOAD_PREFERENCES array. Following the ambiguity resolution rules in rfc98, we could look at @lvsub{OVERLOAD_PREFERENCES} and see what it is promising us. Which may be more or less work than adding a specifier around the subroutine call; it is acceptable if you are developing a module but may be unacceptable if you are a beginner trying to use a prebuilt module. "Randal L. Schwartz" wrote: How do you indicate to the compiler at the time of compiling: lvsub() = FOO that FOO should be evaluated in list context? Or scalar context? Because FOO needs to know that while it is executing, before invoking your subroutine.
Or do you imagine invoking your subroutine before evaluating the right side of the code (so it can return a scalar/list flag of some kind), thereby breaking the normal model of assignment where the right side gets run first? This is the sticky point that keeps hanging up lvalue subs. Perl's context transfer on the assignment operator keeps getting overlooked. um, @{lvsub()} = ... or ${lvsub()} = ... but those kind of take the fun out of having an "l-value subroutine" don't they
Re: RFC 137 (v1) Overview: Perl OO should Inot be fundamentally changed.
"PRL" == Perl6 RFC Librarian [EMAIL PROTECTED] writes: PRL A C<private> keyword that lexically scopes hash keys to the current PRL package, and allows hashes to contain two or more identically named (but PRL differently scoped) entries. This would solve the problem of PRL encapsulation in OO Perl for the vast majority of (predominantly PRL hash-based) class structures. does that apply to a set of keys? or a particular hash ref used as the object? PRL =item * PRL A new special subroutine name -- C<INIT> -- to separate construction PRL from initialization. C<INIT> methods would be automatically -- and PRL hierarchically -- called whenever an object is created. INIT currently has a useful meaning in perl5. it is like BEGIN but it is called after the compilation phase is done and before runtime starts. i have found uses for it where i put an INIT block which sets up some data structures (could be complex file i/o) in front of the code that uses it. if that was a BEGIN block, the setup would execute even if there was a compile time error later in the file. so pick another name. CREATE (spelled correctly!), SETUP are some ideas.. would a parent INIT get called even if there is no INIT in this class? how would you travel up the @ISA tree and call INIT? PRL Changes to the semantics of C<bless> so that, after associating an PRL object with a class, the class's C<INIT> methods are automatically PRL called on the object. An additional trailing C<@> parameter for PRL C<bless>, to allow arguments to be passed to C<INIT> methods. i like that. PRL Pre- and post-condition specifiers, which associate code blocks with PRL particular subroutine/method names. These blocks would be automatically PRL called before and after the subroutine/method of the same name, and PRL trigger an exception on failure. For methods, pre- and post-conditions PRL would be inherited and called hierarchically (with disjunctive PRL short-circuiting, in the case of post-conditions).
can't this just be done with calls from the method sub? maybe the post condition one is useful as you can return from multiple places. does it have access to the return values? same for the pre-, does it see @_? also how would they affect 'want'? PRL =item * PRL Class invariant specifiers, which associate code blocks with a particular PRL package/class. These blocks would be called automatically after the PRL execution of subroutine/method of the same name, and trigger an PRL exception on failure. For methods, invariants would be inherited and PRL called hierarchically. how is this different from the post-condition above? PRL A C<NEXT> pseudo-class, enabling resumption of the dispatch search PRL from within an invoked method, as well as the "rejection" of invocation PRL (e.g. by an C<AUTOLOAD>). great. this solves several problems. PRL An optional constraint (C<use strict 'objvars'>?), making it a fatal PRL error to store a object reference in a non-typed lexical. would that be defaulted in use strict; like the others are? i think it should be. PRL =item * PRL A new pragma -- C<delegation> -- that would modify the dispatch PRL mechanism to automatically delegate specific method calls to specified PRL attributes of an object. no more need for AUTOLOAD of accessors? what about complex ones (like i showed you the other day off list)? PRL That in Perl 6, only hashes (and perhaps pseudohashes) may be blessed. you just blew chapters 4 & 5 of your book out of the water! :) PRL This would result in no loss of functionality, since any other data type PRL that was previously blessed as an object could instead be made a PRL single attribute of a blessed hash. However, combined with the proposed PRL C<private> keyword and C<use delegation> pragma, this proposal would PRL ensure that it was always possible to inherit from an existing class PRL without detailed knowledge of its implementation. why not make that a pragma? use object 'hash' ; PRL =head1 MIGRATION ISSUES PRL Virtually none.
That's the point. :-) good point. i like it overall. perl could be the uber OO language, capable of emulating ANY object style. uri -- Uri Guttman - [EMAIL PROTECTED] -- http://www.sysarch.com SYStems ARCHitecture, Software Engineering, Perl, Internet, UNIX Consulting The Perl Books Page --- http://www.sysarch.com/cgi-bin/perl_books The Best Search Engine on the Net -- http://www.northernlight.com
Re: RFC 76 (v1) Builtin: reduce
Jeremy Howard writes: : How much hand-waving can we do with implementation efficiency of anonymous : subs and higher order functions? How much can we expect Perl to optimise : away at compile time? For instance, if: : : $sum = reduce ^_+^_, @list; : : has any substantial overhead on each iteration it would be useless for any : decent sized number crunching. Other areas where this is a huge issue are : lazily generated lists (RFCs 81, 90, and 91), and implicit array loops (RFC : 82). I've been kind of assuming that functions act on whole lists without : mutating them (RFC 82 operators, map, grep, reduce, ...) would be called in : a 'special way' that avoided the overhead of "real" sub calls. As I've : mentioned before, I've also put in various RFCs that this kind of stuff : should be evaluated lazily... : : So anyway, if any of this is just so out of the question that we shouldn't : even consider the possibility, now is a great time to let us know! I think, even if we relegate currying to some kind of high-powered macro system, as long as the operator in question has a good enough prototype, we can be pretty efficient in how we rewrite things. For instance, it seems to me that if we somehow know that the first argument to a reduce should be a sub that wants two arguments, we could count the placeholders and rewrite the curried expression accordingly without the nested ?: of the naive rewrite. Larry
Re: functions that deal with hash should be more liberal
Today around 3:34pm, Tom Christiansen hammered out this masterpiece: : Today around 11:48am, Tom Christiansen hammered out this masterpiece: : : : So basically, it would be nice if each, keys, values, etc. could all deal : : with being handed a hash from a code block or subroutine... : : : : In the current Perl World, a function can only return as output to : : its caller a LIST, not a HASH nor an ARRAY. Likewise, it can only : : receive a LIST, not those other two. : : So, this is really a bug? : : #!/usr/local/bin/perl -w : use strict; : $|++; : : sub func { : return qw/KeyOne Value1 KeyTwo Value2/; : } : : print "$_\n" foreach keys func(); : : No. keys() expects something that starts with a %, not : something that starts with a &. Wow. Now that, that, is lame. You're saying that keys() expects its first argument to begin with a %? Why should it care what its argument begins with? All functions receive their arguments in a LIST via @_. Since func, in the above example, returns a LIST, that LIST should just be passed on. I have to say, caring about what the argument looks like is bad news. I know it's core and it can do what it wants, but it should behave intuitively, don't you think? keys( LIST ); Can be: keys( %hash ); keys( @array ); keys( func ); Run time error if there is an odd number of elements. Otherwise, work something like this: sub keys { my %hash = @_; return keys %hash; } What is so hard about that? Besides, it's intuitive. If I were to write my own keys function, it would behave like above no matter what. I would expect a list, and return a list. -- print(join(' ', qw(Casey R. Tweten)));my $sig={mail=>'[EMAIL PROTECTED]',site=> 'http://home.kiski.net/~crt'};print "\n",'.'x(length($sig->{site})+6),"\n"; print map{$_.': '.$sig->{$_}."\n"}sort{$sig->{$a}cmp$sig->{$b}}keys%{$sig}; my $VERSION = '0.01'; #'patched' by Jerrad Pierce belg4mit at MIT dot EDU
Re: functions that deal with hash should be more liberal
: No. keys() expects something that starts with a %, not : something that starts with a &. Wow. Now that, that, is lame. You're saying that keys() expects its first argument to begin with a %? Why should it care what its argument begins with? You're just now figuring this out? Really? All functions receive their arguments in a LIST via @_. Since func, in the above example, returns a LIST, that LIST should just be passed on. I have to say, caring about what the argument looks like is bad news. So, you're saying to dispense with prototypes, even in the core, and leave us with the slowest language known to man? I don't see that flying. --tom
Re: ... as a term
The interesting thing about ... is that you have to be able to deal with it as a statement with an implied semicolon: print "foo"; ... print "bar"; We already have plenty of statements with implied semicolons: print "foo"; for @list {} print "bar"; Either that, or it's a funny unary operator that can take 0 or 1 argument. That might let you parse these too: print (1, 2, 3, ...) or die; I think this is fraught with peril. I'd have expected: print (1, 2, 3, ...) or die; to print 12345678910111213141516171819202122232425262728etc rather than: 123Executed stubbed code at demo.pl, line 123 BTW, I propose that this new operator be pronounced "yadda yadda yadda". :-) Damian
Re: RFC: extend study to produce a fast grep through many regexes
David L. Nicol writes: : What if there were a faster way to do this, a way to C<study> a : group of regexes and have the computer determine which ones had : parts in common, so that if $situation =~ m/^foo/ is true, the : fifty rules that start ^bar don't waste any time. At all. Perl 4 did this sort of thing automatically without a study, at least with respect to the first character that could match. Of course, it didn't do it with regular expressions in an array, but rather in a "switch" structure. And you had to bunch your tests right. If your regular expressions were in an array, you had to use eval. So certainly there's room for an interface that can take multiple regex objects and turn them into a single super regex. I don't think the code to do it necessarily belongs in the core, but it would certainly have to be somewhat incestuous with regex innards. Larry
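A user-level sketch of that "single super regex": join the compiled patterns with alternation. Modern perls apply a trie optimisation to alternations of literal prefixes, which recovers some of the common-prefix sharing being asked for without any new interface. (The rule set below is invented for the example.)

```perl
use strict;
use warnings;

my @rules = ( qr/^foo/, qr/^food\d+/, qr/^bar\d+/ );

# Interpolating each qr// preserves its own flags, because a
# compiled regex stringifies to a (?^:...) wrapped form.
my $alternation = join '|', @rules;
my $super       = qr/$alternation/;

for my $situation ('football', 'bar42', 'quux') {
    print "$situation matches\n" if $situation =~ $super;
}
```

This only answers the matching half of the problem; knowing *which* rule fired (so its action can run) still needs per-rule captures or a dispatch table, which is where Larry's point about being incestuous with the regex innards bites.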
Re: functions that deal with hash should be more liberal
Casey R. Tweten writes: Wow. Now that, that, is lame. You're saying that keys() expects its first argument to begin with a %? Why should it care what its argument begins with? The keys function changes its argument's data structure. keys resets the each iterator (see the documentation for these functions). All functions receive their arguments in a LIST via @_. The hash functions are prototyped as \%, meaning they are passed a reference to the hash named as an argument. The reference-taking: * permits them to change the data structures * is faster (one value, not all the key/value pairs) This isn't strictly needed for keys (if you don't mind it getting slower), but is needed for each() which maintains an iterator in the hash. There's also the fact that a list isn't a hash, the same way a list isn't an array. You are on a slippery slope that ends in: push( split(/,/), "foo" ); Because "push() just takes a list, right?" (hint: wrong). Nat
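Nat's point about the \% prototype can be seen directly from core behaviour — keys() needs the hash itself (by reference) so it can reset the iterator that each() keeps inside that hash. A small sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Because keys() receives a reference to the named hash (via the \%
# prototype), it can reset the iterator that each() maintains inside
# that same hash -- something it could never do with a flat copy.
my %h = (a => 1, b => 2, c => 3);

my ($first_k) = each %h;    # advance the internal iterator one step
keys %h;                    # side effect: the iterator is reset
my ($again_k) = each %h;    # back at the first pair again

print "iterator was reset\n" if $again_k eq $first_k;
```

If keys() merely took a LIST, the reset-the-iterator behaviour (and the one-value-instead-of-all-pairs speed win Nat mentions) would be impossible.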
Re: functions that deal with hash should be more liberal
On Mon, Aug 21, 2000 at 09:00:26PM -0400, Casey R. Tweten wrote: Today around 3:34pm, Tom Christiansen hammered out this masterpiece: : No. keys() expects something that starts with a %, not : something that starts with a &. Wow. Now that, that, is lame. You're saying that keys() expects its first argument to begin with a %? Why should it care what its argument begins with? It cares because it is only defined to operate on hashes. A list is not a hash. All functions receive their arguments in a LIST via @_. Since func, in the above example, returns a LIST, that LIST should just be passed on. Exactly. This is what happens. keys() doesn't operate on lists. keys( @array ); So would this "convert" @array to a hash and take the keys of that? Or does it (as some have proposed) return the keyable indices of a sparse array? Or would it work something like this: sub keys { my %hash = @_; return keys %hash; } Ah, convert its argument to a hash then grab the keys of that hash. -Scott -- Jonathan Scott Duff [EMAIL PROTECTED]
Re: Things to remove
: In a void context, C<dump> dumps the program's current opcode : representation to its filehandle argument (or STDOUT, by : default). It's not clear to me that reusing a lame keyword for this is the highest design goal. Let's come up with a real interface, and then if we want to reuse the (presumably missing) dump keyword for some method name or other, that's fine. But we're currently designing it from the wrong end. You're not a Sapir-Whorfist then? ;-) Actually, I wasn't proposing we design it at all. Sarathy has already done rather a good job of that, I think. Tom's opcode dumping functionality could, in principle, be added to Data::Dumper as it stands. My proposal was merely that C<Data::Dumper::Dumper> body-snatch C<dump>. Damian
Re: ... as a term
[EMAIL PROTECTED] writes: : We already have plenty of statements with implied semicolons: : : print "foo"; : for @list {} : print "bar"; Yes, we do, and I'm trying to figure out how to write a prototype for one of those. :-) / 2 : I'd have expected: : : print (1, 2, 3, ...) or die; : : to print : : 12345678910111213141516171819202122232425262728etc If you're into dwimmery, you could make all of these work, too: print (1, 2, 4, ...) print (1, 4, 9, 16, 25, ...) print (1, 1, 2, 3, 5, ...) print ('a', 'b', 'c', ...) print (3, 1, 4, 1, 5, 9, 6, 2, 5, ...) : BTW, I propose that this new operator be pronounced "yadda yadda yadda". :-) If you want to save the world, come up with a better way to say "www". (And make it stick...) Larry
Re: ... as a term
[EMAIL PROTECTED] writes: : We already have plenty of statements with implied semicolons: : : print "foo"; : for @list {} : print "bar"; Yes, we do, and I'm trying to figure out how to write a prototype for one of those. :-) / 2 Under RFC 128 and the forthcoming multimethods RFC: sub for (\$iterator, @list, block) : multi; sub for (@list, block) : multi; I.e. collectively. If you're into dwimmery, you could make all of these work, too: print (1, 2, 4, ...) print (1, 4, 9, 16, 25, ...) print (1, 1, 2, 3, 5, ...) print ('a', 'b', 'c', ...) print (3, 1, 4, 1, 5, 9, 6, 2, 5, ...) You're an evil, evil man, Larry Wall. You realize someone's probably revising the lazy lists RFC even as we type! : BTW, I propose that this new operator be pronounced "yadda yadda yadda". If you want to save the world, come up with a better way to say "www". (And make it stick...) I thought your US political satirists had solved this one. Isn't it now pronounced "Dubya, Dubya, Dubya"? Damian
Re: ... as a term
On Mon, Aug 21, 2000 at 01:01:20PM -0600, Nathan Torkington wrote: Larry Wall writes: I'd entertain a proposal that ... be made a valid term that happens to do nothing, so that you can run your examples through perl -c for syntax checks. Or better, make it an official "stub" for rapid prototyping, with some way of getting a warning whenever you execute such a stub. This is the coolest suggestion made so far for perl6. I love it. Runtime behaviour of '...' is to warn "unimplemented behaviour". With use strict 'development', it dies "unimplemented behaviour" at compile-time. Hear hear! Great idea. Who'll RFC it? Or shall I? K. -- Kirrily Robert -- [EMAIL PROTECTED] -- http://netizen.com.au/ Open Source development, consulting and solutions Level 10, 500 Collins St, Melbourne VIC 3000 Phone: +61 3 9614 0949 Fax: +61 3 9614 0948 Mobile: +61 410 664 994
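Until such a stub term exists, the warn-at-runtime behaviour Kirrily describes can be mimicked in plain Perl 5. A sketch — `yadda` and `not_written_yet` are made-up names for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A Perl 5 approximation of the proposed '...' stub: a sub that warns
# whenever the unwritten code is actually reached, but lets the
# program continue (use die here to get the strict behaviour).
sub yadda {
    my (undef, $file, $line) = caller;
    warn "Executed stubbed code at $file line $line.\n";
}

sub not_written_yet { yadda() }

not_written_yet();   # warns, but the program keeps running
print "still alive\n";
```

The compile-time check that `perl -c` gives for free with a real `...` term is the part this sketch cannot reproduce.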
Re: Pre-RFC: Require a warning on spaces after here-document terminator
[EMAIL PROTECTED] writes: : : I would like to see a compiler warning for this: : : "Spaces detected after apparent here document terminator", but : : preferably phrased better. : : : : Are there any objections? : : I object, vaguely. I think it should just Do The Right Thing. : (I suspect it should ignore spaces on the left too.) : : Hear, hear. : : And whilst you're in a mood to ignore whitespace, how about C<$/ = ""> : terminating on C</\n(\s*\n)+/>? I'm more inclined to ignore $/ these days. :-) Larry
Re: Things to remove
In a void context, C<dump> dumps the program's current opcode representation to its filehandle argument (or STDOUT, by default). In a scalar or list context, C<dump> dumps nothing, but rather returns the I<source> code of its arguments (or of the current state of the entire program, by default). Instant program migration: host-a:foo.pl: print SOCKET dump; host-b:bar.pl: { local $/; eval <SOCKET> }; -- $jhi++; # http://www.iki.fi/jhi/ # There is this special biologist word we use for 'stable'. # It is 'dead'. -- Jack Cohen
Re: Things to remove
Instant program migration: host-a:foo.pl: print SOCKET dump; host-b:bar.pl: { local $/; eval <SOCKET> }; If someone is putting this RFC together, please remember to propose that C<eval> and C<do> should handle opcodes as well as source: host-a:foo.pl: dump SOCKET; host-b:bar.pl: { local $/; eval <SOCKET> }; Or: sub suspend { open $fh, "> $_[0]" or die; dump $fh } sub resume { do $_[0] } Damian
Re: Things to remove
Damian Conway writes: If someone is putting this RFC together, please remember to propose that C<eval> and C<do> should handle opcodes as well as source: host-a:foo.pl: dump SOCKET; host-b:bar.pl: { local $/; eval <SOCKET> }; Or: sub suspend { open $fh, "> $_[0]" or die; dump $fh } sub resume { do $_[0] } This is trickier than it first appears, as the existing bytecode shows. A Perl program is opcode + variables. Are you dumping symbol tables? When recreated, will the variables have the same values they currently do? Just a pointer for the eventual RFCer to address. Nat
Re: Pre-RFC: Require a warning on spaces after here-document terminator
Ariel Scolnicov writes: : I was asked to debug a weird Perl5 problem yesterday. The code in : question looked roughly like this (indented 4 spaces, but otherwise : unchanged): : : #!perl -w : use strict; : : print <<END; : The next line contains a space at the end. : END : This is still a here document : END : : This can be very hard to discover. I find it hard to see myself doing : this on purpose. I would like to see a compiler warning for this: : "Spaces detected after apparent here document terminator", but : preferably phrased better. : : Are there any objections? I object, vaguely. I think it should just Do The Right Thing. (I suspect it should ignore spaces on the left too.) Larry
RE: ... as a term
-Original Message- From: Ed Mills [mailto:[EMAIL PROTECTED]] Excellent idea- anything to get to production faster! But don't {} or {1} sort of do the same thing? I think the point here is readability, not unique functionality. There's more than one way to do it :) -Corwin
RE: Things to remove
From: Damian Conway [mailto:[EMAIL PROTECTED]] One could make dump "work" by having it dump out not a core or a.out, but rather the byte codes representing the current state of the perl machine. This seems anywhere from somewhat to seriously useful, and follows in the spirit of what dump was always meant to do. I was contemplating suggesting that Data::Dumper be rolled into the core and take over C<dump>. But I think I like this idea even more. It would be nice if a human-readable dump were possible. So please don't completely dump the idea of Data::Dumper functionality in the core. Garrett
Re: RFCs (Re: Ideas that need RFCs?)
"Bryan C. Warnock" wrote: On Fri, 18 Aug 2000, David L. Nicol wrote: There Will Be No Perl7 Of course not. Odd numbers are the development releases. The next Perl after 6 will be 8. So maybe the reference implementation should be written in perl 4. Did perl 4 have references? Doing all coderefs in terms of eval'ing strings would be a PITA. Seriously, while a worthwhile goal, this is rather short-sighted. The industry and the world will continue to change in spite (or because!) of our efforts here. We can make it easier for the users to adapt, but Perl will need to continue to evolve, as well. So we write in full access to internals, and capable macro and redefining languages, making perl6 a framework you can build anything into even more easily than perl5 is a framework you can build anything into. Perl5 was the experimentation period for threading and OO syntaxes; now that we've played with them for a while, we can write in programmatic access to on-the-fly parser modification, and then the language becomes customary. Further releases of the standard would be modifications to the parser rules or the clarification engine, rather than rewrites. Language = Framework + Parser-rules + Clarification-engine Parser-rules can be rewritten with certain reserved macros, allowing use intercal2000; to cause the remainder of the program to be interpreted as intercal, for instance. And if the Clarification-engine is aware of everything that is not explicitly hidden from it, or if it can be pulled out and completely replaced, that not only makes translating into different languages a real breeze, it allows subroutines to be genuinely on the same footing as builtins, instead of just seeming that way. -- David Nicol 816.235.1187 [EMAIL PROTECTED] Does despair.com sell a discordian calendar?
... as a term
Randal L. Schwartz writes: : if ($a == $b) { ... } # should this be string or number comparison? Actually, it's a syntax error, because of the ... there. :-) But that reminds me of something I wanted a few months ago. I'd entertain a proposal that ... be made a valid term that happens to do nothing, so that you can run your examples through perl -c for syntax checks. Or better, make it an official "stub" for rapid prototyping, with some way of getting a warning whenever you execute such a stub. Larry
Symbolic references, was Re: RFC 109 (v1) Less line noise - let's get rid of @%
(thread intentionally broken) Nathan Torkington wrote: Steve Fink writes: True. Would anyone mourn @$scalar_containing_variable_name if it died? I've never used it, and I'm rather glad I haven't. Perl5's -w doesn't notice $x="var"; print @$x either -- it'll complain if you mention @var once. These are symbolic references. You can forbid them with the strict pragma. Yes, I'd miss them. So would the Exporter. Damn, learn something new every day... perl really is incestuous with its symbol table, isn't it? Yes. That's what makes it useful. Ouch. I need to know more. I'm looking at what a type inference engine for perl would look like, and these symbolic references would incur some massive pollution. Clearly, the inferencer will at least depend on people using strict 'refs' to prevent every $$x="bleck" from trashing the type of every scalar variable in the program. So can someone explain to me what the actual uses are, so I can dream up some form of manual annotation that will limit the scope of their effects? (If the functionality of Exporter becomes more 'core', its usage should no longer matter.) My code for doing what I thought Exporter did is: sub import { my $p = caller(1); *{"${p}::E"} = \%{"${p}::E"}; } but that doesn't run afoul of use strict 'refs'. Can you point me to the passage in Exporter.pm that uses this?
Re: Things to remove
Tom Christiansen writes: : I've very rarely found cases where ?? was useful and // didn't work, and : never in regular code. : : From the Camel: : : The C<??> operator is most useful when an ordinary pattern match : would find the last rather than the first occurrence: : : open DICT, "/usr/dict/words" or die "Can't open words: $!\n"; : while (<DICT>) { : $first = $1 if ?(^neur.*)?; : $last = $1 if /(^neur.*)/; : } : print $first,"\n"; # prints "neurad" : print $last,"\n"; # prints "neurypnology" : : Nothing a SMOP can't address, but for one liners at the least, the : S part would seem to preclude the P part. I don't think the S and the P are that preclusive. For the example above you can just say: /(^neur.*)/ .. "" If you want to be able to reset, then say /(^neur.*)/ .. !$x++ # reset with $x = 0; Larry
Re: ... as a term
Excellent idea- anything to get to production faster! But don't {} or {1} sort of do the same thing? From: Larry Wall [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: ... as a term Date: Mon, 21 Aug 2000 09:09:01 -0700 (PDT) Randal L. Schwartz writes: : if ($a == $b) { ... } # should this be string or number comparison? Actually, it's a syntax error, because of the ... there. :-) But that reminds me of something I wanted a few months ago. I'd entertain a proposal that ... be made a valid term that happens to do nothing, so that you can run your examples through perl -c for syntax checks. Or better, make it an official "stub" for rapid prototyping, with some way of getting a warning whenever you execute such a stub. Larry
Re: functions that deal with hash should be more liberal
Today around 11:48am, Tom Christiansen hammered out this masterpiece: : So basically, it would be nice if each, keys, values, etc. could all deal : with being handed a hash from a code block or subroutine... : : In the current Perl World, a function can only return as output to : its caller a LIST, not a HASH nor an ARRAY. Likewise, it can only : receive a LIST, not those other two. So, this is really a bug? #!/usr/local/bin/perl -w use strict; $|++; sub func { return qw/KeyOne Value1 KeyTwo Value2/; } print "$_\n" foreach keys func(); No. keys() expects something that starts with a %, not something that starts with a &. --tom
Re: Things to remove
It would be nice if a human-readable dump were possible. So please don't completely dump the idea of Data::Dumper functionality in the core. These are different things. And the bytecodes can always be B::Deparse'd, or whatever we come up with for uncompilation. Not that proper marshalling isn't seriously desirable as part of the standard distribution. It's the basis for several important technologies, including data persistence and interprocess communication of the same. --tom
Re: Things to remove
dump FILE; # dump program state as opcodes You don't like that that should be a checkpoint resurrection at the point in the program labelled with "FILE:", per the current (semi-dis-)functionality? Hmm, what about CHECK blocks? --tom
RFC: extend study to produce a fast grep through many regexes
title: study a list of regexes David Nicol. Aug 21 version 1 [EMAIL PROTECTED] Sometimes I have a group of regexen, and I would like to know which ones will match. Current practice is to "study" $situation and then grep them: example a: study $situation; @matches = @rules{ grep {$situation =~ m/$_/} keys %rules}; What if there were a faster way to do this, a way to C<study> a group of regexes and have the computer determine which ones had parts in common, so that if $situation =~ m/^foo/ is true, the fifty rules that start ^bar don't waste any time. At all. example b: $matchcode = study @regexen; will generate an anonymous subroutine (it's called $matchcode in the example line) with a tree based on required parts of the regexes, to minimize the number of match attempts needed to determine which regexes will match. The subroutine will take an array argument and will return the subset of the rules (as stated in the original array, either string or compiled qr// references) that match on all the arguments. The code in example a could then be replaced with: $matchcode = study keys %rules; @matches = @rules{ $matchcode->($situation) }; This ability could speed "dirty matching", which currently cannot take advantage of constant-time hash lookups without standardizing the dirty parts. -- David Nicol 816.235.1187 [EMAIL PROTECTED] Does despair.com sell a discordian calendar?
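The kind of win the RFC is after can be illustrated by hand: bucket the patterns by a required literal prefix, so that one cheap substring check rules out a whole group of rules without ever entering the regex engine. A rough sketch only (the proposed C<study> would do this analysis inside the engine, and far more generally):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical illustration of prefix bucketing: patterns anchored to a
# literal prefix are grouped, and a failed prefix check skips the group.
my @patterns = ('^foo.*x', '^food', '^bar!', '^barn');

my %bucket;
for my $pat (@patterns) {
    my ($prefix) = $pat =~ /^\^(\w+)/;    # required literal start, if any
    push @{ $bucket{ defined $prefix ? $prefix : '' } }, [ $pat, qr/$pat/ ];
}

sub matching_patterns {
    my ($situation) = @_;
    my @hits;
    for my $prefix (keys %bucket) {
        # cheap test first: if the literal prefix can't match, skip
        # every pattern in this bucket without touching the regex engine
        next if length $prefix
            and substr($situation, 0, length $prefix) ne $prefix;
        push @hits, map  { $_->[0] }
                    grep { $situation =~ $_->[1] } @{ $bucket{$prefix} };
    }
    return @hits;
}

print "$_\n" for matching_patterns('food');
```

This only exploits one kind of commonality (anchored literal prefixes); the RFC's tree of "required parts" generalizes the same idea.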
Re: Pre-RFC: Require a warning on spaces after here-document terminator
: I would like to see a compiler warning for this: : "Spaces detected after apparent here document terminator", but : preferably phrased better. : : Are there any objections? I object, vaguely. I think it should just Do The Right Thing. (I suspect it should ignore spaces on the left too.) Hear, hear. And whilst you're in a mood to ignore whitespace, how about C<$/ = ""> terminating on C</\n(\s*\n)+/>? Damian
Re: Symbolic references, was Re: RFC 109 (v1) Less line noise - let's get rid of @%
Steve Fink writes: My code for doing what I thought Exporter did is: sub import { my $p = caller(1); *{"${p}::E"} = \%{"${p}::E"}; } but that doesn't run afoul of use strict 'refs'. Can you point me to the passage in Exporter.pm that uses this? It does run afoul of use strict 'refs'. That's the kind of thing that the Exporter does. Symbolic references are used for dynamic function generation: foreach my $func (qw(red green blue)) { *$func = sub { "<FONT COLOR=$func>@_</FONT>" } } Also lazy function generation (similar thing but from within an AUTOLOAD). Class::Struct does it to mechanically create the subroutines for a class. Any object code that gets a class name (package) as an argument and uses it to inspect variables in the package does it. (off the top of my head) Nat
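Nat's dynamic-function-generation case is the same trick the Exporter itself relies on: build a symbol-table name as a string, then assign through the glob. Boiled down to a runnable sketch (the `Shouter` package and `shout` sub are invented for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A stripped-down Exporter-style import: compose the caller's symbol
# table entry name as a string (a symbolic reference), then install a
# sub through the glob. This is exactly what strict 'refs' forbids,
# so we disable it in the smallest possible scope.
package Shouter;
sub import {
    my $caller = caller;
    no strict 'refs';
    *{"${caller}::shout"} = sub { uc join ' ', @_ };
}

package main;
Shouter->import;               # normally spelled "use Shouter;"
print shout("hello"), "\n";    # prints HELLO
```

This is why a type inferencer can't treat `*{"..."} = ...` as inert: the string is computed at runtime, so which package gets a new sub is invisible statically.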
RE: Things to remove
One could make dump "work" by having it dump out not a core or a.out, but rather the byte codes representing the current state of the perl machine. This seems anywhere from somewhat to seriously useful, and follows in the spirit of what dump was always meant to do. I was contemplating suggesting that Data::Dumper be rolled into the core and take over C<dump>. But I think I like this idea even more. It would be nice if a human-readable dump were possible. So please don't completely dump the idea of Data::Dumper functionality in the core. How about this then: In a void context, C<dump> dumps the program's current opcode representation to its filehandle argument (or STDOUT, by default). In a scalar or list context, C<dump> dumps nothing, but rather returns the I<source> code of its arguments (or of the current state of the entire program, by default). Thus: dump FILE; # dump program state as opcodes print FILE dump; # dump program state as source code print dump \%data; # dump contents of hash as source code print dump \@data; # dump contents of array as source code print dump \&data; # dump subroutine as source code # etc. If people like this, would someone please RFC it. Damian
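For plain data (as opposed to whole-program state), the scalar-context half of this proposal is roughly what Data::Dumper already provides today: dump a structure to Perl source text, restore it with eval. A round-trip sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

# Dump a structure as Perl source, then eval it back into existence.
my %data = (alpha => 1, beta => [2, 3]);

my $src  = Dumper(\%data);   # Perl source text ($VAR1 = { ... };)
my $copy = eval $src;        # eval returns the reconstructed hashref

print "round trip ok\n" if $copy->{beta}[1] == 3;
```

What Data::Dumper cannot do is the void-context case — serializing opcodes and live program state — which is the part that would need new machinery.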
Re: Things to remove
dump FILE; # dump program state as opcodes You don't like that that should be a checkpoint resurrection at the point in the program labelled with "FILE:", per the current (semi-dis-)functionality? Not much :-) Maybe: dump "FILE:" but not just a bareword :-( Hmm, what about CHECK blocks? Sorry, I'm a little slow today. You lost me. What about them? Damian
Re: ... as a term
On Mon, Aug 21, 2000 at 05:49:39PM -0700, Larry Wall wrote: [EMAIL PROTECTED] writes: : I take it the existing C<...> operator would be unaffected? Essentially. The lexer is (and will continue to be) quite aware of the difference between terms and operators. Oops, just read this. Ignore my previous email. -Scott -- Jonathan Scott Duff [EMAIL PROTECTED]
Re: Things to remove
[EMAIL PROTECTED] writes: : How about this then: : : In a void context, C<dump> dumps the program's current opcode representation : to its filehandle argument (or STDOUT, by default). It's not clear to me that reusing a lame keyword for this is the highest design goal. Let's come up with a real interface, and then if we want to reuse the (presumably missing) dump keyword for some method name or other, that's fine. But we're currently designing it from the wrong end. Larry
Re: RFC 88 v2 draft 5 is available via http.
"TO" == Tony Olekshy [EMAIL PROTECTED] writes: TO 2. Multiple conditional catch clauses now work like a switch, TO instead of like a bunch of sequential ifs. TO This always bugged me too, but I couldn't nail it down TO until the debate about using else/switch instead of catch. Which switch? C's with fallthrough? Damian wants perl's switch to have no fallthrough. chaim -- Chaim Frenkel  Nonlinear Knowledge, Inc. [EMAIL PROTECTED] +1-718-236-0183
Re: RFC 88: Possible problem with shared lexical scope.
On 22 Aug 2000, Chaim Frenkel wrote: Could you tell me why you would want two finallys? Why not put them into one? TO my ($p, $q); TO try { $p = P->new; $q = Q->new; ... } TO finally { $p and $p->Done; } TO finally { $q and $q->Done; } Presumably because all finally blocks are executed before exceptions thrown in finally blocks are propagated upwards. That's my guess at least. -dave /*== www.urth.org We await the New Sun ==*/
Re: On the case for exception-based error handling.
"TO" == Tony Olekshy [EMAIL PROTECTED] writes: As for legacy. I strongly urge that Modules _never_ die. It is extremely rude. TO The contract between a module and its client is beyond the scope TO of RFC 88. However, I take it from your strong stance that you TO wrap every ++$i in an eval and handle $@ in case the CPU throws TO an integer overflow, oh, and you handle errors while handling TO errors too, ad infinitum. There, you see, is the core problem: TO modules or any other code cannot *guarantee* to "_never_" die. You are being extreme here. I use perl _because_ it is so forgiving. I can easily do unlink("foo.err") and not check return codes because I don't care if it was there before. There are many other situations where I really don't need to be careful, since the failure mode is quite fine. It really depends on whether I'm doing brain surgery or simply a script that will barf at the right point and someone will see it. The fact that something went wrong doesn't mean that my 100 hour complex calculation should be terminated. The fact that I couldn't send an email message may or may not be of importance in the scheme of things. TO What would happen if the email module gets an overflow on a ++$i TO (or anything else dies, no matter what it calls or does)? Because TO you are ignoring possible problems with email (instead of properly TO handling them by ignoring them [sic]), your 100 hour calculation is TO toast even though the email didn't matter to you. You just got lucky. TO You should have written (contents of catch block optional): TO try { that_email_thing(); } TO catch { print "Email not sent.\n$@\nContinuing anyway...\n"; } Still a straw man. The email is a nice feature. The log file might be the actual official result. Especially given the way the email is like at my client. TO Right. 
Once you have to be prepared to cope with unwinding anyway TO (that is, if you are actually interested in writing robust code), TO you might as well use the coping mechanism to pick off *only* those TO failure modes you are willing to cope with (by actually attempting TO to handle or deliberately ignore the failure). Here are the actual TO code differences: The point of a module is to be useful _not_ to get in my way. If I have to start coding in an TO Where's $rc? Why, it's in $@, of course. Insert epiphany here. TO So, what do you, the programmer, have to learn? TO A major corollary benefit of all this is that once people make the TO transition to exception-based error handling, the reliability of all TO software goes up, because you end up with whole systems that are TO layered to cope with failure, instead of whole systems that depend TO on no-one botching an else return $rc or a ++$i, not even one. You are preparing to force all programmers to your way of thinking? This is not what perl is about. I DON'T WANT TO BE FORCED TO USE YOUR STYLE SIMPLY TO USE THE MODULES IN CPAN OR THAT SHIP WITH PERL. chaim -- Chaim Frenkel  Nonlinear Knowledge, Inc. [EMAIL PROTECTED] +1-718-236-0183
On the case for exception-based error handling.
Executive Summary: We should go to a pure return-based mechanism for error signalling, or a pure exception-based one. We can't do the former. Therefore we should do the latter. Author's Note: I'm a pragmatist. I'll keep using return-based error signalling for some purposes, just like everyone else. I'm considering this as a topic for a possible "philosophy" RFC; I'm publishing it for review by *-errors (should anyone care to) to help me decide what to do about it. Chaim Frenkel wrote: As for legacy. I strongly urge that Modules _never_ die. It is extremely rude. The contract between a module and its client is beyond the scope of RFC 88. However, I take it from your strong stance that you wrap every ++$i in an eval and handle $@ in case the CPU throws an integer overflow, oh, and you handle errors while handling errors too, ad infinitum. There, you see, is the core problem: modules or any other code cannot *guarantee* to "_never_" die. The fact that something went wrong doesn't mean that my 100 hour complex calculation should be terminated. The fact that I couldn't send an email message may or may not be of importance in the scheme of things. What would happen if the email module gets an overflow on a ++$i (or anything else dies, no matter what it calls or does)? Because you are ignoring possible problems with email (instead of properly handling them by ignoring them [sic]), your 100 hour calculation is toast even though the email didn't matter to you. You just got lucky. You should have written (contents of catch block optional): try { that_email_thing(); } catch { print "Email not sent.\n$@\nContinuing anyway...\n"; } Unless a module can guarantee to *never* throw (and it can't, on pragmatic CPUs) the client should always be prepared to cope with unwinding, for some value of cope. What ticks off the script kiddies is that this makes real programming harder, and there's nothing they can do about it. It ticked me off too, when I was learning to cope with it. 
And fundamentally you can never completely cope with this situation (because of what's known as the "final arbitrator" problem, that is, what happens if the final arbitrator fails), you can only try--pardon the pun--to ameliorate it. That's why servers try to page sysadmins, the ultimate final arbitrator ;-) Right. Once you have to be prepared to cope with unwinding anyway (that is, if you are actually interested in writing robust code), you might as well use the coping mechanism to pick off *only* those failure modes you are willing to cope with (by actually attempting to handle or deliberately ignore the failure). Here are the actual code differences:

Fragile way (return-code based error signalling):

    sub foo { return ERROR_IO }

    my $rc = foo();
    if ($rc == ERROR_IO) { ... }
    else { return $rc; }

Robust way (exception based error signalling):

    sub foo { throw Error::IO }

    try { foo(); }
    catch Error::IO { ... }

Where's $rc? Why, it's in $@, of course. Insert epiphany here. So, what do you, the programmer, have to learn?

    |        Old Way        |      New Way      |
    |-----------------------+-------------------|
    | return ERROR_IO       | throw Error::IO   |
    | my $rc = foo();       | try { foo(); }    |
    | if ($rc == ERROR_IO)  | catch Error::IO   |
    | else { return $rc; }  |                   |
    |_______________________|___________________|

Without exception-based error signalling one bad else return $rc or any other botched error handling can easily bring the system to its knees, because it is continuing with bad data even though it isn't prepared to. With exception-based error signalling it actually takes fewer bytes of code to do it right, it's more easily read, and if you don't explicitly handle an error (for example, because of a botched return) then the error (even if the error is botching the return) goes through, instead of your code continuing with bad data. 
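In Perl 5 as it stands, the "robust way" reduces to C<eval>/C<$@>, which makes the selective-handling point concrete. A sketch — the `Error::IO` class and `foo` are invented for the example:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Minimal made-up exception class: an object you can die with and
# later identify by isa(), standing in for the proposed Error::IO.
package Error::IO;
sub new   { my $class = shift; bless { msg => "@_" }, $class }
sub throw { my $class = shift; die $class->new(@_) }

package main;
sub foo { Error::IO->throw("disk on fire") }

my $handled;
eval { foo() };
if (ref $@ && $@->isa('Error::IO')) {
    $handled = $@->{msg};   # pick off only the failure we cope with
} elsif ($@) {
    die $@;                 # anything else keeps unwinding
}
print "caught: $handled\n";
```

The `elsif ($@) { die $@ }` branch is the crux of the argument: unanticipated failures propagate by default instead of being silently mistaken for success.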
A major corollary benefit of all this is that once people make the transition to exception-based error handling, the reliability of all software goes up, because you end up with whole systems that are layered to cope with failure, instead of whole systems that depend on no-one botching an else return $rc or a ++$i, not even one. And if throw becomes the standard, then you are forcing _all_ programs to accept exception handling. And if you don't, you force all programs to handle return codes, even if they don't want to, and oh yeah, even if they don't want to but they do want to be robust, they still have to say defined $rc or return undef; after every function they
Re: Draft 3 of RFC 88 version 2.
"TO" == Tony Olekshy [EMAIL PROTECTED] writes: TO Perl's behaviour after a C<die> starts call-stack unwinding, as TO envisioned by this RFC, is as described by the following rules. TO 1. Whenever an exception is raised Perl looks for an enclosing TO try/catch/finally clause. TO If such a clause is found Perl traps the exception and proceeds TO as per rule 2, otherwise program shutdown is initiated. If an enclosing block is not found, program shutdown is initiated. Each of the catch blocks associated with the current try block is evaluated in order until one catch expr returns true. If none are found, the finally block is entered, and after it completes the exception is re-raised and the search for an enclosing block continues. If a catch block is entered and completed successfully, no succeeding catch block will be tried and processing will continue with the finally block (if any) associated with the current try. A catch block is not considered part of its own try block, so for any exception encountered during its processing, control will continue as if there were no try block at this call level and the search for an enclosing block continues. Does this cover all the cases? (Does an exception in a catch go to the finally?) (Does an exception in a catch change the exception?) TO 2. The try block's "next" associated trap/catch or finally clause TO is processed according to rules 3 and 4. When there are no TO more clauses rule 5 is used. TO 3. If a catch expr returns true (without itself raising an TO exception), its associated catch block is entered. TO If the catch block is entered and it completes without itself TO raising an exception, the current exception and stack are TO cleared. But if a catch expr or a block clause raises an TO exception, it becomes the current exception, but it does TO not propagate out of the clause (at this point). 
TO If a catch expr raises an exception or returns true, then TO whether or not the catch block raises an exception, any TO succeeding try/catch clauses up to the next finally clause are TO skipped (for the purpose of the "next" iterator in rule 2). TO Processing then continues with rule 2. TO 4. When a finally clause is encountered its block is entered. TO If the finally block raises an exception it becomes the current TO exception, but it does not propagate out of the clause (at this TO point). TO Processing continues with rule 2. TO 5. After the catch and finally blocks are processed, if there TO is a current exception then it is re-raised and propagated TO as per Rule 1 (beginning above the current try statement in TO the call stack). TO Otherwise $@ is undef, the try statement completes normally, TO and Perl continues with the statement after the try statement. TO =head2 Built-In Exception Error Classes TO In addition to the built-in Exception class described below (which TO inherits from UNIVERSAL), a built-in Error class is also defined, TO which inherits from Exception. TO Exceptions raised by the guts of Perl are envisioned by this RFC to TO all be instances of classes that inherit from Error. Instances of TO the actual Error class itself are reserved for simple exceptions, TO for those cases in which one more or less just wants to say TO C<throw Error "My message.">, without a lot of extra tokens, and without TO getting into higher levels of the taxonomy of exceptions. TO On the other hand, new exception classes that inherit directly from TO Exception, as opposed to from Error, are assumed to be asking for TO more light-weight functionality. 
The intent of this RFC is to TO provide a place (Exception) in which methods can be stubbed in for TO the functionality required by Errors, so that when they are TO overridden by Error they work as expected, but when inherited by TO other derivatives of Exception, this error-functionality is avoided TO and does not otherwise interfere with the requirements of TO lightweight exception handling protocols. The stack-traceback-at-throw-time TO instance variable, for example, probably doesn't make much TO sense when one is throwing success, not failure. TO This Exception/Error class factoring is taken advantage of in TO this RFC to delegate the details of the Error class to proposals TO such as RFC 80 et al, independent of the fact that this RFC expects TO such an Error class to inherit from Exception. TO =head3 Instance Variables TO The built-in Exception and Error classes reserve all instance TO variable and method names matching C</^[_a-z]/>. The following TO instance variables are defined. TO message TO This is a description of the exception in language intended TO for the "end user". Potentially
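For concreteness, the unwinding rules above can be approximated in present-day Perl 5 with C<eval>. The helper name and the array-of-pairs catch representation below are inventions for illustration, not part of the RFC, and catch exprs that themselves die are not modelled:

```perl
# Hypothetical helper approximating rules 1-5: eval{} traps the die,
# catch exprs run in order, finally always runs, and a still-pending
# exception is re-raised at the end.
sub try_catch_finally {
    my ($try, $catches, $finally) = @_;  # $catches: [ [expr, block], ... ]
    eval { $try->() };
    my $err = $@;
    if ($err) {
        for my $c (@$catches) {
            my ($expr, $block) = @$c;
            next unless $expr->($err);   # rule 3: first true expr wins
            eval { $block->($err) };
            $err = $@;                   # cleared, or a new exception
            last;                        # succeeding catches are skipped
        }
    }
    if ($finally) {                      # rule 4: finally block is entered
        eval { $finally->() };
        $err = $@ if $@;                 # its exception becomes current
    }
    die $err if $err;                    # rule 5: re-raise and propagate
}

my @log;
try_catch_finally(
    sub { die "boom\n" },
    [ [ sub { $_[0] =~ /boom/ }, sub { push @log, 'caught' } ] ],
    sub { push @log, 'finally' },
);
```

This is only a sketch of the control flow; the real proposal also clears the stack, scopes $@, and answers the catch-in-finally questions raised above.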
Re: RFC 88: Possible problem with shared lexical scope.
Could you tell me why you would want two finallys? Why not put them into one? chaim "TO" == Tony Olekshy [EMAIL PROTECTED] writes: TO Non-shared: TO my ($p, $q); TO try { $p = P->new; $q = Q->new; ... } TO finally { $p and $p->Done; } TO finally { $q and $q->Done; } TO Shared: TO try { my $p = P->new; my $q = Q->new; ... } TO finally { $p and $p->Done; } TO finally { $q and $q->Done; } TO If P->new throws, then the second finally is going to test TO $q, but it's not "in scope" yet (its my hasn't been seen). TO Or is it? If it isn't, I'll take shared lexical scoping out TO and put a note about this in ISSUES instead of the current: TO If it is not possible to have try, catch, and finally blocks TO share lexical scope (due, perhaps, to the vagaries of stack TO unwinding), this feature can simply be deleted, and the outer TO scope can be shared. TO Yours, c, Tony Olekshy -- Chaim Frenkel, Nonlinear Knowledge, Inc. [EMAIL PROTECTED] +1-718-236-0183
Re: RFC 88: Possible problem with shared lexical scope.
"PS" == Peter Scott [EMAIL PROTECTED] writes: PS However, my memory as to what the current perl behavior is was faulty; PS continue blocks do *not* share the lexical scope of their attached loop PS blocks. I was misremembering the caveat at the end of this part of perlsyn PS (which says the opposite, and is easily confirmed): I vaguely recall that Gurusamy fixed this one. chaim -- Chaim Frenkel, Nonlinear Knowledge, Inc. [EMAIL PROTECTED] +1-718-236-0183
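The caveat is easy to confirm in current Perl 5: a lexical declared in the loop body is not visible in its continue block. A minimal demonstration (uncommenting the marked use of $inside is a compile-time error under strict vars):

```perl
use strict;

my $i = 0;
while ($i < 3) {
    my $inside = $i;   # lexical scoped to the loop body only
} continue {
    $i++;              # referring to $inside here would not compile
}
```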
Re: PROTOPROPOSAL FOR NEW BACKSLASH was Re: implied pascal-likewith or express
--On 18.08.2000 14:36 Uhr -0700 David L. Nicol wrote: How about backslash, after the type-qualifier? use %record{ $\interest_earned += $\balance * $\rate_daily; }; I don't really like having backslashes in front of ordinary characters anywhere except when I mean them :-) (\n, \t etc.) In most cases where you'd want a with-type construct you know exactly which keys there'll be, so maybe simply adding the keys as lexical variables to the block would work: my $interest_earned=0; use %record { $interest_earned+=$balance*$rate_daily; }; Yes, this _might_ interfere with already existing variables but I so far did not encounter lots of cases where I'd want to use a with block with hashes where I'm not sure which keys it might contain... (sorry if this was brought up already - I'm new to the lists and tried scanning perl-language but gave up after some time... ) -- Markus Peter - SPiN GmbH [EMAIL PROTECTED]
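Until such a with-style construct exists, the effect can be approximated in Perl 5 with a hash slice into lexicals. The record fields below are made up for illustration:

```perl
my %record = (balance => 1000, rate_daily => 0.0001, interest_earned => 0);

{
    # pull the keys we read into lexicals, then write the result back;
    # this is the manual version of "adding the keys as lexical variables"
    my ($balance, $rate_daily) = @record{qw(balance rate_daily)};
    $record{interest_earned} += $balance * $rate_daily;
}
```

The slice makes the known-keys assumption explicit: you have to list the keys you bind, which is exactly the situation the poster describes.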
Re: Things to remove
One could make dump "work" by having it dump out not a core or a.out, but rather the byte codes representing the current state of the perl machine. This seems anywhere from somewhat to seriously useful, and follows in the spirit of what dump was always meant to do. I was contemplating suggesting that Data::Dumper be rolled into the core and take over C<dump>. But I think I like this idea even more. Damian
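Data::Dumper in the core would cover at least the data half of that: it serializes a structure to Perl source that can be re-evaluated later, though not the full byte-code state of the "perl machine":

```perl
use Data::Dumper;

my %state = (pos => 42, stack => [1, 2, 3]);
my $frozen = Dumper(\%state);   # Perl source text: $VAR1 = { ... };
my $thawed = eval $frozen;      # round-trip back to a data structure
```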
hype for upcoming top level parser draft
Larry Wall wrote: Ed Mills writes: : But don't {} or {1} sort of do the same thing? Well, { warn "Encountered stub"; (); } would be more like it. But the biggest problem with {} or {1} is that they don't resemble an ellipsis. Larry dot operator selection: The token clarifier sees dot, it's concat operator, unless next token is also dot, in which it's one of a variety of .. operators depending on a variety of things, possibly including the RHS also being dot, in which case we've got the ... function. In the world in which the program text becomes the token list via a tokenize() function, which returns @tokens (it can't be a regex mostly due to the interactions of various sorts of quotings) I drafted it over the weekend I will be testing it soon, mostly it's split /(/\w*|\s*)/ followed by special treatment for hereis, followed by push onto @tokens. then the tokens get run, which means executing a $_->run() if such is defined, or calling clarify($_) if not. And all this w/in multiple threads. -- David Nicol 816.235.1187 [EMAIL PROTECTED] Does despair.com sell a discordian calendar?
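The exact split pattern in the post appears garbled in the archive; a simplified stand-in shows the shape of the split-then-push approach (the real draft also handles heredocs and quoting, which this ignores entirely):

```perl
# Toy tokenizer in the spirit of the post: split with a capturing
# group keeps word and whitespace runs as fields, then we discard
# whitespace and empty fields, leaving punctuation as single chars.
sub tokenize {
    my ($src) = @_;
    return grep { length && !/^\s+$/ } split /(\w+|\s+)/, $src;
}

my @tokens = tokenize('my $x = 1;');
# ('my', '$', 'x', '=', '1', ';')
```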
Re: functions that deal with hash should be more liberal
Today around 11:48am, Tom Christiansen hammered out this masterpiece: : So basically, it would be nice if each, keys, values, etc. could all deal : with being handed a hash from a code block or subroutine... : : In the current Perl World, a function can only return as output to : its caller a LIST, not a HASH nor an ARRAY. Likewise, it can only : receive a LIST, not those other two. So, this is really a bug? #!/usr/local/bin/perl -w use strict; $|++; sub func { return qw/KeyOne Value1 KeyTwo Value2/; } print "$_\n" foreach keys func(); Shell ./lab.pl Type of arg 1 to keys must be hash (not subroutine entry) at ./lab.pl line 10, near ");" Execution of ./lab.pl aborted due to compilation errors. Shell I think that keys should take a LIST as its arguments. It would appear that right now, in the current Perl World, it should. The above should yield something like: KeyOne KeyTwo Rather than that error. -- print(join(' ', qw(Casey R. Tweten)));my $sig={mail=>'[EMAIL PROTECTED]',site=> 'http://home.kiski.net/~crt'};print "\n",'.'x(length($sig->{site})+6),"\n"; print map{$_.': '.$sig->{$_}."\n"}sort{$sig->{$a}cmp$sig->{$b}}keys%{$sig}; my $VERSION = '0.01'; #'patched' by Jerrad Pierce belg4mit at MIT dot EDU
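The current-Perl workaround is mechanical, which is the substance of the complaint: copy the returned LIST into a real hash variable before handing it to keys():

```perl
# keys() requires an actual hash, so the returned LIST must be
# assigned to one first.
sub func { return (KeyOne => 'Value1', KeyTwo => 'Value2') }

my %h = func();
print "$_\n" for sort keys %h;
```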
Re: ... as a term
Larry Wall writes: I'd entertain a proposal that ... be made a valid term that happens to do nothing, so that you can run your examples through perl -c for syntax checks. Or better, make it an official "stub" for rapid prototyping, with some way of getting a warning whenever you execute such a stub. This is the coolest suggestion made so far for perl6. I love it. Runtime behaviour of '...' is to warn "unimplemented behaviour". With use strict 'development', it dies "unimplemented behaviour" at compile-time. Nat
new list: perl6-language-regex
subscribe by sending mail to [EMAIL PROTECTED] more details at http://dev.perl.org/lists LIST: [EMAIL PROTECTED] CHAIR: Mark-Jason Dominus [EMAIL PROTECTED] MISSION: Draft and discuss RFCs related to regexp language issues in Perl 6. Report weekly to Language WG. DEADLINE: September 30th (semi-permanent sublist) - ask -- ask bjoern hansen - http://www.netcetera.dk/~ask/ more than 70M impressions per day, http://valueclick.com
Re: ... as a term
Larry Wall writes: I'd entertain a proposal that ... be made a valid term that happens to do nothing, so that you can run your examples through perl -c for syntax checks. Or better, make it an official "stub" for rapid prototyping, with some way of getting a warning whenever you execute such a stub. This is the coolest suggestion made so far for perl6. I love it. And it's backwards compatible with a huge volume of "handwaving" code ;-) Runtime behaviour of '...' is to warn "unimplemented behaviour". With use strict 'development', it dies "unimplemented behaviour" at compile-time. I take it the existing C<...> operator would be unaffected? Damian
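Pending real syntax, the proposed behaviour can be mocked up today with an ordinary sub. The helper name and the flag standing in for use strict 'development' are placeholders, not anything proposed:

```perl
our $STRICT_DEV = 0;   # stand-in for the proposed strictness flag

# Hypothetical stub helper: warn at runtime, or die when the
# "development-strict" flag is on (the RFC wants this at compile time).
sub unimplemented {
    my $msg = "unimplemented behaviour";
    die  "$msg\n" if $STRICT_DEV;
    warn "$msg\n";
    return;
}

sub todo_feature { unimplemented() }
```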
Re: Symbolic references, was Re: RFC 109 (v1) Less line noise - let's get rid of @%
Thanks! Ok, from a type inferencing perspective... Nathan Torkington wrote: Symbolic references are used for dynamic function generation: foreach my $func (qw(red green blue)) { *$func = sub { "<FONT COLOR=$func>@_</FONT>" } } Probably have to punt on checking user code in a main routine that does this. But if it's in a module, the inferencer should just get fired up after all use's and other BEGIN's have been processed so the names and code for all of those are known. If you want user code to still get some inference, then you can have no strict 'refs' everywhere but right here and have a pragma assume_sub_in_source_is_never_overridden, so that $x = unknown_sub() pollutes all package vars and all lexical vars captured anywhere by anything. Though that's tough to verify. But subs would be okay. And eval"", as usual, would kill pretty much everything. Also lazy function generation (similar thing but from within an AUTOLOAD). Same, except known subs will never be overridden. Class::Struct does it to mechanically create the subroutines for a class. Same. Any object code that gets a class name (package) as an argument and uses it to inspect variables in the package does it. "Inspect", as in, read-only access? Then it's not bad. Does accessing $p=;$x=${"${p}::foo"} activate $foo's tied FETCH? I suppose so. Then it's bad. (off the top of my head) That's a nasty set of things that I was planning on looking at later to see what can be salvaged (probably not much), but I was wondering more about uses of $$x as a symbolic ref access as opposed to scalar dereferencing. General symbol table manipulation is probably just going to make the inferencer assume that any function call can rewrite all other subroutines and piss on all visible variables, file descriptors, etc., but scalar and array symbol table manipulation may be more survivable. 
I guess it doesn't matter that much, you just don't get any type inferencing if you don't use strict 'refs' or you do use runtime eval"" or require. But maybe my $Bob : might_be_accessed_symbolically would be convenient enough to enable the type inferencer for more programs.
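For reference, the glob-assignment idiom from the top of the thread, written out runnably: three subs materialize at runtime from a string, which is exactly the pattern that defeats naive inference:

```perl
no strict 'refs';   # assigning through a glob named by a runtime string

for my $func (qw(red green blue)) {
    *{$func} = sub { qq{<FONT COLOR=$func>@_</FONT>} };
}

print red('hello'), "\n";   # <FONT COLOR=red>hello</FONT>
```

Each closure captures its own $func because the loop variable is a fresh lexical per iteration.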
Re: New syntactic sugar builtin: in
I'd like to see a new builtin named "in" which does the same as "in" in SQL. Basically, print "OK!" if $val in ("foo","bar","bla"); Wait for the superpositions RFC: print "OK!" if $val eq any("foo","bar","bla"); print "OK!" if $val =~ any(qr/fo+/,qr/bl?ar?/); print "OK!" if any(\&foo,\&bar,\&bla)->($val); print "OK!" if all(@vars) < any(@threshold); Damian
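In the meantime, the SQL-style membership test is one grep away. This is only a stand-in for the eq case, nothing like the general superposition semantics:

```perl
# Hypothetical helper: true if $val is string-equal to any element.
sub any_eq {
    my $val = shift;
    return grep { $_ eq $val } @_;
}

print "OK!\n" if any_eq('bar', qw(foo bar bla));
```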
Re: Things to remove
On Mon, 21 Aug 2000 06:11:02 -0600, Tom Christiansen wrote: $first = $1 if ?(^neur.*)?; $first ||= $1 if /(^neur.*)/; Now if only we had a shortcut operator which would continue only if the LHS was not defined... -- Bart.
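Spelled out longhand, what the poster is asking for is a guard on definedness; a dedicated assignment operator would compress the !defined test away. The input strings here are invented to exercise the poster's pattern:

```perl
my $first;
for ('xeno', 'neuron', 'neural') {
    # keep only the first successful capture, like the ?PATTERN? idiom
    $first = $1 if !defined($first) && /(^neur.*)/;
}
# $first is now "neuron"
```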