Gripes about Pod6 (S26)
Howdy,

I've been doing a bunch of NQP and PIR coding, where Pmichaud++ has been trying to support some kind of POD syntax. With the release of the S26 draft, he has tightened the parsing to follow more of the rules laid out in the spec, and after a few months I've noticed that the trend (for "not-quite-pod") is definitely getting worse. POD 6 isn't very nice. It certainly isn't an improvement on POD 5.

To be clear, by "not an improvement on POD5" what I mean is: I have abandoned POD6 in favor of block comments. I don't have a rewritten S26 to offer -- sorry -- but I do have some thoughts on why.

There are a couple of things about POD5 that I didn't like. The biggest one, by far, was extra newlines. That got better over time, either because I changed my writing style or because the parser got smarter -- I never tried to figure out which. The other big thing was no tables, which is definitely fixed (-ish) in POD6.

That said, POD5 was a lightweight, easily understood format. It was easy to figure out that there was text mixed with the code, and the delimiters were cute little '=' signs with some semantics attached to them, rather than just /* ... */. But /* ... */ would work -- as it does for Javadoc and Doxygen.

With S26, there are definitely some things I am happy to see. Tables. Tables! Paragraph blocks. Attributes and out-of-band stuff. But there's an awful lot of bad, too.

First, POD is not HTML. I absolutely hate =end tags. I can understand the need for a general-purpose multi-line comment, but as things currently stand =begin/=end happens too much. Worse, for the kind of short-but-multiple-paragraph comments that get attached to functions and methods, the extra-newlines tax gets pretty high. In general, I think I'd like some way for POD to be in "sticky" mode -- that is, like POD5, it should either stay in POD, or stay in a particular block type, until told otherwise.
This might be a block attribute -- that is, something I can configure separately:

    =config default :like<para>
    =config para :sticky

Second, POD is not XML, and it definitely isn't DOCBOOK. Why do I need magic reserved words like TOC and APPENDIX? I'm not writing a book, I'm writing code. And if I was writing a book, I wouldn't be dumb enough to write it in POD. If @Larry wants to prove something by writing books in POD, let them type =begin or =for or =whatever in front of their chapter markers -- leave my namespace alone.

Third, I think that S26 is trying to approach a couple of pretty important new(-ish) concepts in software -- inline documentation, and annotations -- from a POD standpoint. I'm willing to listen, but I'm not entirely convinced that it's possible, or that it's a good idea.

There was a flurry of discussion right after Damian posted his S26 draft about short syntax for documenting syntax elements. The focus seemed to be on making it something that POD could grab. I'm not sure I accept that focus, especially since P6-language seems to have some kind of bias against admitting anybody ever did any syntax right after 1979. Docblocks have been around for a long time, but nobody even considered literal-string-after-{ as being a valid way to indicate them. Far better to write #={ despite how ridiculous that is to type, than to consider using ''' (which isn't even shifted).

Overall, my impression of S26 is that it's not Perly enough. The idea of paragraph blocks (=for ...) is pretty clearly a compromise between single-line and multi-paragraph, and it's a step in the right direction. But I think there needs to be a better consideration of the needs of the people writing the POD, expressed maybe as explicitly reserved bits with known behaviors. (For example, some set of contexts where POD5-style "parse until I tell you to stop" allows omission of =end markers, and lets =foo stand for =begin foo.) Also, =for just doesn't scan in a lot of places.
Maybe a better way would be to depend on the text, or to add an =:

    =for para
    blah blah blah

    =config para :mode<paragraph>
    =para
    blah blah blah

    =para          -- no text means para mode?
    blah blah blah

    ==para         -- == means para mode?
    blah blah blah

One thing I have noticed in NQP is that usually I write a function signature, and it's right. That is, I can write the signature and it corresponds with how the caller will use it. In that case, I want my inline docs to be quick, too:

    method unsort() --- Unsorts (randomizes) the array.
    {...}

    method sort(:order?)
    '''
    Sorts the array.

    =param order?  A function that takes two arguments (items in the
    array) and returns a number less than, equal to, or greater than
    zero, depending if the first argument should be considered less
    than, equal to, or greater than the second argument in the desired
    ordering.
    '''
    { ... }

The nice thing about this arrangement is that I get to leave the declaration in place at
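For comparison, the literal-string-after-declaration idea above isn't hypothetical: Python docstrings work exactly this way. A minimal sketch (the function and its doc text are invented for illustration):

```python
def sort(order=None):
    """Sorts the array.

    order: a two-argument comparison function returning a number
    less than, equal to, or greater than zero.
    """
    pass

# The doc text rides along on the function object itself,
# available to tools via introspection:
print(sort.__doc__.splitlines()[0])   # Sorts the array.
```

The declaration stays in place, and the documentation sits immediately inside it, which is the ergonomic point being argued for.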
Re: p6 Q: How do I metaprogram this?
This is a p6 question, not an NQP question -- I'm citing the NQP only because it's my current example. So mentioning p6 features not currently in NQP is totally appropriate.

What I mean by converting code into data is simply that a run-time version of metaprogramming will generally translate the code:

    $some_object.foo_method();

into data:

    method foo_method() { self.do_something('foo'); }

In this example, the call to foo_method turned into a different call, with 'foo' as a string (data) instead of a function. A C version, using macros, might do something like:

    #define function_template(name) \
        int name () { return 1; }

    function_template(foo)

This, to me, is turning (compile-time) data into code. And because it is a compile-time thing, and there's no extra overhead, it's as close to the metal as I'm likely to get. (Of course, using a template means that some possible tweak, based on "this particular method never gets used to ... blah blah ...", might not get made. But that's okay.)

I'm remembering a .assuming modifier from a while back. Maybe that's a way to eliminate the extra step:

    foo_method := template_method.assuming($arg1 := 'foo');

Or is there a better way?

=Austin

Geoffrey Broadwell wrote:
I'm not entirely sure what you mean here by "translate code into data (method name into string)". The method name is already a string, which is why I offered the call-by-name syntax above. But of course if you have a code object for the method itself, you could do this in Perl 6:

    $obj.$method(...args...);

Sadly this does not currently work in NQP-rx, though IIUC there's no reason it couldn't (and in fact I've already requested this feature because it would be useful for some function table stuff I do). Full Perl 6 offers a number of features that would be useful for calling a whole pile of dynamically-chosen methods on an object, but few have been implemented in NQP-rx. (I would assume because there hasn't been a lot of demand for it yet.)
I'll let the Perl 6 gurus follow up with actual syntax examples for some of these nifty features. ;-) -'f
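The .assuming step discussed in this thread -- pre-binding an argument so a generic template becomes a specific method -- has a close analogue in Python's functools.partialmethod. A sketch, with made-up names (Widget, template_method):

```python
from functools import partialmethod

class Widget:
    def template_method(self, name):
        # the generic "template" that every specialized method shares
        return "did something with " + name

# Specialize the template by pre-binding the name as data,
# roughly what template_method.assuming($arg1 := 'foo') would do:
Widget.foo_method = partialmethod(Widget.template_method, "foo")

w = Widget()
print(w.foo_method())   # did something with foo
```

The specialization is pure data (the string "foo" captured in the partial), so no second hand-written wrapper function is needed.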
p6 Q: How do I metaprogram this?
I'm writing some NQP, which isn't quite perl6, and I've got this method:

    method afterall_methods() {
        my @methods := self._afterall_methods;
        unless @methods {
            @methods := self.fetch_afterall_methods;
            self._afterall_methods(@methods);
        }
        return @methods;
    }

I've also got methods named 'after_methods', 'before_methods', 'beforeall_methods'. It's pretty much a search/replace kind of thing.

I know that I could 'metaprogram' this stuff by using string manipulation on the various method names, and then calling a (self-built) call_method($obj, $method_name, ...args...) function. But I'm curious if there's some P6 feature I've forgotten about (and I've forgotten most of them, excepting the rev number) that would let me do this without having to go too far away from the metal.

Thanks,

=Austin
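Since NQP can't be run here, a Python sketch of the "search/replace" metaprogramming being asked about -- generating all four nearly identical cache-then-fetch methods in one loop (the class and the `_handler` payloads are invented):

```python
class Dispatcher:
    def __init__(self):
        self._cache = {}

    def _fetch_methods(self, category):
        # stand-in for the real fetch_*_methods lookup
        return [category + "_handler"]

def _make_accessor(category):
    def accessor(self):
        # same cache-then-fetch shape as afterall_methods()
        if category not in self._cache:
            self._cache[category] = self._fetch_methods(category)
        return self._cache[category]
    accessor.__name__ = category + "_methods"
    return accessor

# One loop replaces four hand-written, search/replace-identical methods:
for _cat in ("after", "afterall", "before", "beforeall"):
    setattr(Dispatcher, _cat + "_methods", _make_accessor(_cat))

d = Dispatcher()
print(d.afterall_methods())   # ['afterall_handler']
```

The closure over `category` is the "method name as data" move; `setattr` plays the role of installing the generated method into the class.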
Re: p6 Q: How do I metaprogram this?
Geoffrey Broadwell wrote:
On Tue, 2009-12-08 at 18:58 -0500, Austin Hastings wrote:

    I know that I could 'metaprogram' this stuff by using string
    manipulation on the various method names, and then calling a
    (self-built) call_method($obj, $method_name, ...args...) function.

You don't need to write this by hand. NQP-rx supports the "method call by name" Perl 6 syntax:

    $obj.$method_name(...args...);

which makes this kind of thing much easier. I use it in Plumage in a number of places.

    But I'm curious if there's some P6 feature I've forgotten about
    (and I've forgotten most of them, excepting the rev number) that
    would let me do this without having to go too far away from the metal.

The above syntax is actually pretty close to the metal, because it translates directly to standard PIR ops.

The problem I have with the above is that it seems to require a second layer of call. Something like:

    sub beforeall_methods() {
        return fetch_methods_by_category('beforeall');
    }

    sub fetch_methods_by_category($cat) {...}

Essentially, it's one level of function call to translate code into data (method name into string), and then the template function is the second layer of call.

I'm not (believe it or not) actually trying to solve a problem here, so much as I am trying to learn what kind of features p6 offers for this, using a concrete example.

Coming at this from a different angle, C# offers syntactic sugar for getter/setter methods. This example of mine might be a candidate for a macro, depending on the language. But is this a p6 macro? Or is there some in-between that I just don't know about?

=Austin
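On the C# getter/setter sugar mentioned above: other languages have similar affordances. Python, for instance, spells it with the property descriptor -- a sketch, with an invented Point class:

```python
class Point:
    def __init__(self, x):
        self._x = x

    @property
    def x(self):
        # getter: read access looks like a plain attribute
        return self._x

    @x.setter
    def x(self, value):
        # setter: plain assignment runs this validation code
        if value < 0:
            raise ValueError("x must be non-negative")
        self._x = value

p = Point(3)
p.x = 10          # calls the setter
print(p.x)        # 10, via the getter
```

The caller writes ordinary attribute syntax; the method-call machinery is hidden behind the descriptor, which is the same "sugar over a second layer of call" trade-off being discussed.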
Re: Filename literals
This whole thread seems oriented around two points:

1. Strings should not carry the burden of umpty-ump filesystem checking methods.

2. It should be possible to specify a filesystem entity using something nearly indistinguishable from standard string syntax.

I agree with the first, but the relentless pursuit of the second seems to have gone beyond the point of useful speculation. What's wrong with File('C:\Windows') or Path() or Dir() or SpecialDevice()?

Not to get all Cozens-y or anything, but chasing after ways to jam some cute string-like overloading into the syntax, so that we can pull out the other overloading (which at least had the virtue of simplicity), seems pointless.

The File::* functionality is probably going to be one of the very early p6 modules, and it is probably going to be in core. If that's true, why not allocate some really short names, ideally with 0 colons in them, and use them to spell out what's being done? Neither q:io:qq:{.} nor qq:io{.} really stands out as an excellent way to say "this is a path, or directory, or file, or whatever". If it's plug-in-able, I'd take qq:file{.} or qq:dir{.} or qq:path{.}, but I'd rather see File q{.}.

=Austin

Timothy S. Nelson wrote:
On Sat, 15 Aug 2009, Timothy S. Nelson wrote:

    Considering, though, that we're talking about a magic perl quoting
    syntax, we could offer people the option of the following two:

        q:io{C:\Windows}        # Does what you want
        q:io:qq:{C:\\Windows}   # Does the same thing

    Wouldn't that cover the bases pretty well?

My bad -- try these:

    $file = "foo";
    Q:io{C:\Windows\$file}      # Results in C:\Windows\$file
    q:io{C:\\Windows\\$file}    # Results in the same thing
    qq:io{C:\\Windows\\$file}   # Results in C:\Windows\foo

HTH,

| Name: Tim Nelson              | Because the Creator is, |
| E-mail: wayl...@wayland.id.au | I am                    |
Re: Rukudo-Star = Rakudo-lite?
How about "Rake"?

=Austin

Richard Hainsworth wrote:
Referring to Patrick's blog about an official 'useable' version of Rakudo, a suggestion:

Since Rakudo* (not sure how it is to be written) is intended to be a cut-down version of perl6.0.0 that is useable, how about Rakudo-lite? It's just that */star/whatever doesn't convey [to me] the fact that it's a subset of Perl6. "-lite" seems [to me] to be used to define a functional, but stripped-down, version of a larger spec.

Richard (finanalyst)
S05 (regex) Q: after
S05 mentions the magic <?after> pattern in two locations, but I cannot find a specification of the interaction between <?after> and the ratcheting {rule/token} status.

Specifically, is

    token { ... <?after x> }

going to match the same pattern as

    rule { ... <?after x> }

?

I ask because (I just did it, and) with rules encouraging the liberal use of whitespace, and implicitly generating <.ws> matches, something like:

    rule { X <?after X> }

will insert a <.ws> before the <?after>, which the after-block should then be aware of.

So, I suppose the question is: does <?after> always behave a certain way, ratchet-wise, and if so, what is it? Or does it take its mode from the surrounding context, or something else?

=Austin
Re: Huffman's Log: svndate r27485
David Green wrote:
It occurs to me that log is a pretty short name for a function I rarely use. (In fact, I'm not sure I've ever used it in perl.) On the other hand, I -- and a thousand or so CPAN modules -- are always logging stuff in that other popular computer sense. (All right, that number isn't exactly the result of a rigorous study... I did find 57 modules that mentioned logarithms.) The inertia of tradition weighs heavily here, but perhaps we could call it ln(). (If anyone asks, I'm prepared to say with a straight face that it stands for "log (numeric)".) And/or log(), but with the :base arg mandatory -- then as long as your status logging doesn't have a :base, you can have both.

Umm. At the risk of pointing out the obvious: P6 has redefined the syntax of regular expressions, converted bitwise negation into a stringification unary and a binary catenation operator, and torqued a bunch of other keywords and line noise^W^Woperator characters out of shape. Do we really give a rat's posterior about the historical legacy of a mathematical function that (statistically) never gets called?

Like everything else mathematical, jam it into a Math:: class and clean up the default namespace.

(FWIW: My perl scripts don't do logs, in EITHER sense of the word. I don't want to replace one bit of namespace clutter with another one. All you web guys can use the Apache::log method, or whatever.)

=Austin
Re: Huffman's Log: svndate r27485
Mark J. Reed wrote:
I'm all for not having any variety of log() in the default namespace. Regardless, mathematical functions should follow mathematical norms. Changing Perl tradition is one thing, but we have centuries, sometimes millennia, of tradition to deal with in the mathematical realm. It should not be violated lightly.

That's okay. Preserve their thousands of years of historical legacy. Just preserve it in a separate container.

    #! /usr/bin/perl6
    say(log(1000, 10));    # Error - no such function 'log'

    use Math::Simple;
    say(log(1000, 10));    # 2.999...
Grammar Q: does result polymorphism make sense?
Howdy,

One of the problems in recursive-descent parsing is constructs that look a lot alike at the front, only to differ at the end -- a sort of "end-weight" pathology. The example I'm thinking of is the similarity between variable and function declarations in 'C':

    extern int foo = 0;
    extern int bar(int foo, int mumble);
    static int zip(int foo, char * beezle) { return 0; }

I'm dealing with a similar problem in Close (my pet language), and I've tweaked my grammar to favor NOT having to rescan. This means I've got a single declaration rule, but it also means a bunch of <?DECL_MODE_*> predicates, and recognizing ';' at the end of the declaration (except when in parameter mode), etc.

It occurs to me that a win, of sorts, would be for the declaration rule to be able to return multiple alternative values -- a sort of result polymorphism. Thus:

    rule extern_decl {
        | <variable_decl> ';'
        | <function_decl> ';'
        | <function_defn>
    }

could magically avoid calling three different rules that scanned mostly the same tokens by some sleight of hand:

    rule variable_decl is parsed(super_duper_decl);
    rule function_decl is parsed(super_duper_decl);
    rule function_defn is parsed(super_duper_decl);

But how to indicate which result you are returning?

Alternatively, it may be the case that some kind of intermediate-level results memoization would do the trick. If the storage_class rule memoized its result for offset X, it could generally do the right thing, except that it would have to know when not to memoize (as when some idiot puts in a <?DECL_MODE...> state flag). So maybe

    rule storage_class is memoized {...}

is the right thing?

Anyway, I'm not proposing anything so much as wondering out loud. Surely there's a bunch of smarter bears than me who have given this some thought. Any wisdom?

=Austin
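The "is memoized" idea above is essentially packrat parsing: cache each rule's result at each input offset so shared prefixes are scanned once. A toy Python sketch (the rule is simplified, and a real implementation would key the cache on the input text as well as the offset):

```python
def memoized_rule(rule):
    # cache each (result, next_pos) by input offset, packrat-style
    cache = {}
    def wrapped(text, pos):
        if pos not in cache:
            cache[pos] = rule(text, pos)
        return cache[pos]
    return wrapped

calls = []

@memoized_rule
def storage_class(text, pos):
    calls.append(pos)              # count how often we really scan
    for kw in ("extern", "static"):
        if text.startswith(kw, pos):
            return kw, pos + len(kw)
    return None, pos

src = "extern int foo = 0;"
first = storage_class(src, 0)
second = storage_class(src, 0)     # served from the cache
print(first, len(calls))           # ('extern', 6) 1
```

This is also exactly where the caveat in the post bites: a rule whose answer depends on hidden state (a DECL_MODE flag) cannot be safely memoized on offset alone, since the cache key no longer determines the result.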
Re: Amazing Perl 6
Jon Lang wrote:
Agreed. Given the frequency with which « and » come up in Perl 6, I'd love to be able to have a simple keyboard shortcut that produces these two characters. Unfortunately, I am often stuck using a Windows system when coding; and the easiest method that I have available to me there (that I know of) is to pull up Character Map.

Set your keyboard to US-International. There's a bunch of goodies in there. The problem is the delayed keystrokes on composed accented characters, formed by quote+letter (like 'a). So write your own keyboard module. I did one for P6 several years ago.

In the meantime, it's alt+shift, altgr+[.

=Austin
Feature request: Grammar debugging support
I'm using the PGE/PCT tools for working with grammars on Parrot, and I have to say that while there's a lot of power, there's very little debugging support. What's more, the debugging that is possible seems to be parrot debugging -- i.e., single-stepping through routines, etc. -- instead of grammar debugging.

Given that P6 grammars are at least partially supported in P5, it seems like it would pay dividends to insert some sort of debugging mechanism at the grammar-spec level. Right now, I'm talking about things like understanding the paths and alternations considered by the parser. Why is that rule being executed twice? Shouldn't this be a short-name instead of an identifier?

Maybe this is purely output-oriented, at least for early days, but the ability to set some kind of $grammar.trace(1) flag would be a real win. It may be that there are other useful things to get from the engine than rule sequencing. But that's the level I'm operating at right now, and I can't think of the other stuff (possibly because all these trees are in the way...).

=Austin
[Fwd: Re: New CPAN]
Sorry, didn't do a reply-all on this.

---Begin Message---

How about "Parrot"?

I think the original point, along with one of the original claims for Parrot, was that Parrot would not just be the Perl internals engine, but would be general enough to run other languages. (Specifically, there are Tcl, Python, Pascal, and Lua implementations in various stages of dress.) So if someone wrote the next killer thing (Rails, anyone?) in some language that compiles to Parrot, it should be possible to install the parrot binary and have it work, whether you're a Perl6/Parrot user or a Ruby/Parrot or C#/Parrot or Pascal/Parrot user.

The risk is that there are now more Perl6es than just Rakudo. This tends to force language-level instead of binary-level distribution. It's sort of like the difference between CPAN and Maven. Maybe they're two different things.

There's a use-case question there, which should probably be addressed by someone who has read the CPAN6 requirements doc (I have not).

=Austin

Daniel Carrera wrote:
Btw, if the majority wants to start uploading Ruby, Python and Lua modules to CPAN, we can rename CPAN so that the P stands for something else that doesn't mean anything. Comprehensive Peacock Archive Network? Comprehensive Platypus Archive Network?

    my (@C, @P, @A, @N);
    @C = <Comprehensively Conspicuously Continuously Completely Certainly>;
    @P = <Pathological Perplexing Powerful Pervasive Pedestrian Pure Posh>;
    @A = <Archive Array Anthology>;
    @N = <Network Nest>;

    say (@C.pick, @P.pick, @A.pick, @N.pick).join(' ');

Cheers,
Daniel.

---End Message---
Re: Meditations on a Loop
You write:

    I'm not sure what the heart of Perl 6 would be, but I think we've
    identified the spleen with the |Capture|. In the human body, most
    people have no idea what the spleen does. It sits there out of the
    way doing its thing, and we can't live without it.

I, along with a host of others, am living without a spleen. I have been for 20+ years. I'm reading perl6-language instead of drinking beer and chasing strippers, so you might argue that I have no life. But I don't think the "can't live without it" claim holds. :-)

Also, you've got a little green "Update this paragraph" that probably isn't needed any more.

=Austin

John M. Dlugosz wrote:
If you would be so kind, please take a look at http://www.dlugosz.com/Perl6/web/med-loop.html. I spent a couple days on this, and besides needing it checked for correctness, found a few issues as well as more food for thought.

--John

P.S. contains some humor.
Re: Unicode in 'NFG' formation ?
If you haven't read the PDD, it's a good start. To summarize, probably oversimplifying badly:

1. A "grapheme" is a character *as seen on the page*. That is, if composing a + dot above + dot below produces an 'a' with dots above and below it, then THAT is the grapheme.

2. Unicode has a lot of characters that are single code points representing a complex grapheme. For example, the A + ring above composition shows up as the Angstrom symbol.

3. But on the other hand, some combinations of basic characters plus combining marks DO NOT have a single code point that represents them. For example, while your girlfriend might compose dotless lowercase i with combining heart above to produce an i with a heart instead of a dot, there isn't a single codepoint in Unicode for that. (Unless girly-grrls got their own code page. Maybe in Unicode 6...)

4. Since that's a considerable PITA to deal with, we now have NFG format, which really should have been called NFW format, IMO. (W = widechars, natch.) Every combination of basic plus combining marks *that gets used* will have a single grapheme allocated. Many of them, like the Angstrom symbol, or O + combining röckdöts, will already have a real unicode grapheme. The rest of them will get negative numbers assigned, one at a time. The negative numbers will only be meaningful to the string they're in, or maybe only to the particular execution context. (There are issues with comparing, etc. Which is why I think maybe one table per execution.)

5. The result is that every grapheme (letter-on-the-page) will have a single number behind it, will have a length of 1, etc. So we can do a meaningful substr($str, 2, 7) and get what we expect, even when the fifth grapheme requires a base character plus 4 combining marks.

All hail @Larry!

=Austin

Mark J. Reed wrote:
Do we really need to be able to map arbitrary graphemes to integers, or is it enough to have an opaque value returned by ord() that, when fed to chr(), returns the same grapheme?
If the latter, a list of code points (in one of the official Normalization Forms) would seem to be sufficient.

On 5/18/09, Helmut Wollmersdorfer hel...@wollmersdorfer.at wrote:
Darren Duncan wrote:

    Since you seem eager, I recommend you start with porting the Parrot
    PDD 28 to a new Perl 6 Synopsis 15, and continue from there.

IMHO we need some people for a broad discussion on the details first.

Helmut Wollmersdorfer
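The code-point vs. grapheme mismatch that NFG addresses is easy to demonstrate in any language whose strings expose code points directly. A Python sketch, using the Angstrom example from the summary above:

```python
import unicodedata

composed = "\u00C5"      # one code point: LATIN CAPITAL LETTER A WITH RING ABOVE
decomposed = "A\u030A"   # two code points: A + COMBINING RING ABOVE

# Same letter on the page, but different lengths and unequal as strings:
print(len(composed), len(decomposed), composed == decomposed)   # 1 2 False

# NFC folds the pair back into the single precomposed code point --
# but only because Unicode happens to define one for this combination.
print(unicodedata.normalize("NFC", decomposed) == composed)     # True
```

For combinations with no precomposed code point (point 3 in the summary), no normalization form can deliver a single code point, which is exactly the gap NFG's synthetic negative numbers fill.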
Re: Unicode in 'NFG' formation ?
Mark J. Reed wrote:
On Mon, May 18, 2009 at 9:11 AM, Austin Hastings austin_hasti...@yahoo.com wrote:

    If you haven't read the PDD, it's a good start.
    <snip useful summary>

I get all that, really. I still question the necessity of mapping each grapheme to a single integer. A single *value*, sure. length($weird_grapheme) should always be 1, absolutely. But why does ord($weird_grapheme) have to be a *numeric* value? If you convert to, say, Normalization Form C and return a list of the scalar values so obtained, that can be used in any context to reproduce the same grapheme, with no worries about different processes coming up with different assignments of arbitrary negative numbers to graphemes. If you're doing arithmetic with the code points or scalar values of characters, then the specific numbers would seem to matter. I'm looking for the use case where the fact that it's an integer matters but the specific value doesn't.

There's a couple of cases. First of all, it doesn't have to be an integer. It needs to be a fixed size, and it needs to be orderable, so that we can store a bunch of them in an intelligent fashion, thus making it easy to sort them. With that said, integers meet the need exactly. Plus, there's the benefit that Unicode already has an escape hatch built into it for user-defined stuff. And that escape hatch is an integer.

The benefits are documented in the pod: they're fixed size, so we can scan over them forward and backward at low cost. They're easily distinguished (high bit set), so string code can special-case them quickly. They're orderable, comparable, etc. And best of all, they contain no trans fat!

=Austin
Re: Unicode in 'NFG' formation ?
Brandon S. Allbery KF8NH wrote:
On May 18, 2009, at 14:16, Larry Wall wrote:
On Mon, May 18, 2009 at 11:11:32AM +0200, Helmut Wollmersdorfer wrote:

    3) Details of 'life-time', round-trip.

Which is a very interesting topic, with connections to type theory, scope/domain management, and security issues (such as the possibility of a DoS attack on the translation tables).

I find myself wondering if they might need to be standardized anyway; specifically I'm contemplating Erlang-style services.

Why wouldn't a marshalling of an NFG string automatically include the grapheme table? That way you can realize it and immediately use it in "fast mode". Alternatively, if you were providing a persistent string service, a post-marshalling step could re-normalize it in local NFG. The response in NFG could either use the same table you sent (if the response is a subset of the original string) or could attach its own table for translation at your end.

=Austin
Re: Unicode in 'NFG' formation ?
Larry Wall wrote:
Which is a very interesting topic, with connections to type theory, scope/domain management, and security issues (such as the possibility of a DoS attack on the translation tables).

I think that a DoS attack on Unicode would be called "IBM/Windows Code Pages". The rest of the world has been suffering this attack for the last 40 years. I'm not sure anyone would notice, at this point. :-)
Re: [PATCH] Add .trim method
Aristotle Pagaltzis wrote:
Actually that makes me wonder now whether it's actually a good idea at all to make the function parametrisable at all. Even `.ltrim.rtrim` is shorter and easier than `.trim(:start,:end)`!

How about .trim(:l, :r), with both as the default? And if the RTL crowd makes a furor, we can add :a/:o or :ת/:א or something.

    So I question the usefulness of parametrisation here.

Useful for doing infrequent things. IMO, left and right trimming are infrequent compared to the frequency of basic input editing.

=Austin
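For what it's worth, Python settled this exact question the same way the post suggests: a both-ends trim as the default, plus separate left/right variants for the infrequent cases:

```python
s = "  hello  "

print(repr(s.strip()))    # 'hello'    -- both ends, the common case
print(repr(s.lstrip()))   # 'hello  '  -- left (leading) only
print(repr(s.rstrip()))   # '  hello'  -- right (trailing) only
```

Three short names rather than one parametrised call, with the common case getting the shortest spelling.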
Re: Allowing '-' in identifiers: what's the motivation?
Actually, I proposed some years ago allowing "separable verbs" -- function/method/operator names with spaces in them, that could in fact bracket or intersperse themselves with other parameters. This would be a way of writing if ... elsif ... else ..., for example.

I wonder if whitespace in identifiers should be significant? Should "$foo bar" be the same as "$foo  bar" (2 spaces)?

=Austin

Mark J. Reed wrote:
On Tue, Aug 12, 2008 at 4:03 AM, Michael Mangelsdorf [EMAIL PROTECTED] wrote:

    relaxed identifiers could become what programmers actually expect.

Relaxing the rules is fine, but I would like to state for the record that I'd rather not ever see whitespace allowed in identifiers. That's an Applescript feature that I could do without in Perl. (Not that anyone was proposing such a thing, just getting my objection out there preemptively. :))
Re: Allowing '-' in identifiers: what's the motivation?
That sounds cool. Did you do it at the editor level, or at the keyboard level?

=Austin

Bob Rogers wrote:
From: Mark J. Reed [EMAIL PROTECTED]
Date: Mon, 11 Aug 2008 09:07:33 -0400

    I'm still somewhat ambivalent about this, myself. My previous
    experience with hyphens in identifiers is chiefly in languages that
    don't generally have algebraic expressions, e.g. LISP, XML, so it
    will take some getting used to in Perl . . .

Amen. I've long since reprogrammed '-' to give '_' if pressed once and '-' if pressed twice, when editing languages with C-like identifiers. So from my perspective, the added visual complexity is not worth it.

-- Bob Rogers http://rgrjr.dyndns.org/
Re: Allowing '-' in identifiers: what's the motivation?
At a minimum, there are more multi-word identifiers than there are statements involving subtraction. Further, '-' is basic, while all of [_A-Z] are not. Ergo, a multi-word-identifier is easier to type than a multi_word_identifier or a multiWordIdentifier.

The older I get, the more I like Cobol, and now *ML, for getting this stuff right.

=Austin

John M. Dlugosz wrote:
E.g. see http://www.perlmonks.org/?node_id=703265 :

    sub bar { return 100; }
    sub foo { 50; }
    sub foo-bar { return rand(50); }

    if (foo - bar != foo-bar) {
        print "Haha!\n";
    }
Re: Minimal Distance (Re: Where is Manhattan Dispatch discussion?)
TSa wrote:

    BTW, what is a flack?

See http://en.wikipedia.org/wiki/Flak_%28disambiguation%29

Originally, (Fl)ug(a)bwehr(k)anone -- German 88mm anti-aircraft cannon of WWII. Subsequently, any anti-air gun or cannon, particularly when fired at a position rather than aimed at a particular target. Presently: [[ oproer, samenscholing, Tumult, schiamazzo ]] criticism, ado, alarm, excitement, noise, grief, hullabaloo, angst, wailing and gnashing of teeth, din, clamor, hubbub, rumpus, tumult, uproar, disturbance, objection, consternation, meshugas, narrichkeit, shemozzle, furor.

=Austin
Re: All classes imply the existence of a role of the same name.
John M. Dlugosz wrote:
chromatic chromatic-at-wgz.org |Perl 6| wrote:

    All classes imply the existence of a role of the same name.

Please justify that.

A class is a defined, referenceable entity with a signature composed of the bits visible to a particular caller. It is possible, by downloading the source or reversing the binaries, to produce a role which will have the same signature.

Since perl is, historically, more about DWIW than about objektheorimastursecuricontraktifibation, the WTDI seem to come down to "develop an overblown introspection system that will let me almost but not quite do what everybody knows I'm trying to do", or telling perl, "yeah, give me something just like that".

If it makes you feel introspective, you might argue for calling it .asRole or something, but, you know, "Let that boy boogie-woogie -- it's in him and it's got to come out!"

=Austin
Re: pluralization idea that keeps bugging me
Jonathan makes an excellent point about \s and \S. In fact, there's probably a "little language" out there for this. I don't think it needs to be in the core, though. But you could put in some kind of hook mechanism, so that detecting the presence of \s or whatever caused the string to be treated specially. Perhaps it gets a different, possibly more sophisticated, type? A type that is only in-core in a limited (English-only?) implementation, but which admins can install at whim.

=Austin

Jonathan Lang wrote:
Larry Wall wrote:

    Any other cute ideas?

If you have '\s', you'll also want '\S':

    "$n cat\s fight\S"    # 1 cat fights; 2 cats fight

I'm not fond of the 'ox\soxen' idea; but I could get behind something like '\s<ox oxen>' or 'ox\s<en>'.

    '\s<a b>' would mean "a is singular; b is plural"
    '\s<a>' would be short for '\s< a>'
    '\s' would be short for '\s< s>'

'\S<a b>' would reverse this.

Sometimes, you won't want the pluralization variable in the string itself, or you won't know which one to use. You could use an adverb for this:

    :s<$n> "the cat\s \s<is are> fighting."

and/or find a way to tag a variable in the string:

    "$owner's \s=$count cat\s"

'\s=$count' means "set plurality based on $count, and display $count normally".
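The hook being proposed could bottom out in something very small. A Python sketch of the \s< singular plural> idea (the helper name and its English-only "add an s" default are invented for illustration):

```python
def pick_form(n, singular, plural=None):
    # default plural: English-only "add an s", like a bare \s
    if plural is None:
        plural = singular + "s"
    return singular if n == 1 else plural

def cats_fighting(n):
    # "$n cat\s fight\S" -- noun pluralizes, verb "de-pluralizes"
    return "%d %s %s" % (n, pick_form(n, "cat"),
                         pick_form(n, "fights", "fight"))

print(cats_fighting(1))   # 1 cat fights
print(cats_fighting(2))   # 2 cats fight
```

The point of putting this behind a hook rather than in the core is visible even here: the `+ "s"` default is hopelessly English-specific, so a more sophisticated string type could swap in locale-aware rules.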
Re: protecting internals from mutable arguments
Darren Duncan wrote: Larry had some ideas for dealing with the problem, but this is a matter that should be more widely discussed, particularly among implementers and such. A general thought is that a parameter could be marked so that any argument passed through it is effectively snapshotted (which is a no-op if the type is already immutable, or is likely lazy/COW if it is mutable) so that further changes to the external version do not affect our internal copy. Is the class underlying the type immutable? Could I change the mutability of the type after the fact? (Really?) Something like this could solve the problem in the general case. (However, I should mention in postscript that there may be a complicating factor which concerns immutable objects which are also lazy to an extent, e.g. they may internally cache derived values, such as their .WHICH, when the derived value is first asked for rather than at construction time, though this doesn't affect their actual value, which stays immutable. We wouldn't want to lose that ability.) Um, yes, so thank you all who assist in solving this problem. Some sugar like is frozen on parameters? Alternatively, $new = snapshot $old is interesting since it could be explicitly optimized for performance. But your earlier question is a good one. How much can you depend on the (im)mutability info the compiler has? What about runtime? On the other hand, how much of this is really needed? In other words, to what extent are people passing objects that they WANT to be volatile, versus the extent to which they are passing objects where volatility would be fatal? Should is frozen be the default behavior, with auto-cow part of the entry code unless overridden? Or is volatility a more useful norm, so that requiring a statement inside the block is the right awareness for something so weird? =Austin
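For what it's worth, the snapshot-on-entry semantics can be mocked up directly. A Python sketch, with a decorator standing in for an is frozen trait and an eager deep copy standing in for the lazy/COW snapshot (the name frozen_args is invented):

```python
import copy
import functools

def frozen_args(fn):
    """Sketch of an 'is frozen' parameter trait: snapshot mutable
    arguments at call time, so later external mutation can't leak in."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*(copy.deepcopy(a) for a in args),
                  **{k: copy.deepcopy(v) for k, v in kwargs.items()})
    return wrapper

@frozen_args
def keep(box, item):
    box.append(item)   # mutates only the internal snapshot
    return box

outside = [1, 2]
inside = keep(outside, 3)
outside.append(99)     # mutate the caller's copy afterwards
print(inside)          # [1, 2, 3] -- the snapshot is unaffected
print(outside)         # [1, 2, 99] -- and vice versa
```

A real implementation would of course skip the copy when the compiler can prove immutability, which is exactly the (im)mutability-info question raised above.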
Re: Is Perl 6 too late?
Thomas Wittek wrote: chromatic wrote: theproblemlinguisticallyspeakingisthatsometimes [snipped] I can't remember saying that you shouldn't separate your expressions (by punctuation/whitespace), $.but! (*adding$ %*characters _+that^# $might) @#not_ !#be() !necessary_ *#$doesn't! *(make) [EMAIL PROTECTED] =_easier to read and to type (in addition it was a torture to type that). Forgive chromatic. Part of joining @Larry is undergoing a painful initiation process, which tends to inspire zealotry. The point, though, is that there are three ways of handling the whole part of speech issue. One is with a dictionary (reserved words): in this method, every word is assigned a part of speech, usually with a default. Any use of the word FOR must be a loop, any use of INT must be a typedef, etc. Another is with context (and predeclaration). In this method, the surrounding context can be used to infer the part of speech of a word, with some sort of confirmation for 'new' words (user-defined variables, functions, etc.). Most present-day compiled languages use this one, although they frequently rely on the reserved words approach, too, for some words. Finally, the approach Larry has chosen is to explicitly mark the part of speech. Perl up to version 5 used an approach that attempted to correlate the marker with the part of speech associated with the surrounding context: foo(@array) vs. foo($array[0]) This approach was criticized for providing relatively little value over the context+lookup approach. If the sigil has to correspond to the context, then only in rare cases (ambiguous context) is the sigil adding much value. The new approach (@array[0]) ties the sigil to the declaration, serving to distinguish name collisions and of course to autovivify variables correctly. Ultimately, it comes down to value added, and culture/custom. Perl has always used sigils, so perl should continue to use sigils. That's a legitimate stand, in the absence of compelling arguments to the contrary. 
It lets perl be perl. As far as value goes, let's call the C/C++ approach the nul approach, since by default there is no sigil in front of words. (And I'm considering * and & to be sigils, rather than operators.) The nul approach reduces typing. It relies on context to identify the part of speech, occasionally forces some look-ahead (a name followed by '(' is an invocation instead of a reference) and can't handle multiply typed (@foo vs. &foo vs. $foo vs. %foo) names. The perl approach increases typing, by something less than 1 character per identifier. (This is a real cost that Larry continues to elect to bear.) The p5 version imposed some disambiguation burden on the parser, since $foo[0] involved @foo, not $foo. Perl *can* handle *some* multiply typed names. There is a difference between $foo and @foo, but not between my Cat $foo and my Dog $foo. In addition, however, there is the whole *foo thing. Adding the sigil has encouraged people to think in weird ways, 'tied' variables and typeglobs not least among them. I don't know if a 'perl' that used the nul approach would ever have had those features. (Sapir-Whorf lives!) The perl approach, then, opts to pay a significant penalty (0.9+ characters per variable) to allow access to the cool extra features that few other languages use, and none so compactly. A similar trade-off exists with the statement terminating semicolon. In this case, it involves the number of statements per line: A language that terminates statements can ignore whitespace, allowing multiple statements per line and statements that span multiple lines. A language that associates line termination with statement termination must pay a separate cost (continuation marker) for a statement to span multiple lines. It will not, in general, support multiple statements per line. (Though it could make the terminator optional and then inject terminators between colinear statements.) The vast majority of languages have opted to terminate statements. 
Perl is among them. Probably the best argument is that encountering a semicolon (or full stop, in COBOL) is a positive indicator rather than a negative one. I see a semicolon. I know the statement is over. as opposed to I don't see a continuation marker, so it's likely that the statement is over, although it could be tabbed way off to the right or something. Also, there's the increasing size of words to consider. While $a = $b + $c is a great example of why line termination is not needed, the trend is for variable and function names, not to mention object and method dereferences, to grow longer. From http://www.oreillynet.com/pub/a/javascript/2003/03/18/movabletype.html I get: MT::Template::Context->add_tag(HelloWorld => sub { return 'Hello World.'; } ); The MT::...add_tag method name alone is 30 characters. Jam a few long identifiers together and you're writing a lot of multi-line statements.
Re: Scans
Gaal Yahas wrote: On Mon, May 08, 2006 at 04:02:35PM -0700, Larry Wall wrote: : I'm probably not thinking hard enough, so if anyone can come up with an : implementation please give it :) Otherwise, how about we add this to : the language? Maybe that's just what reduce operators do in list context. I love this idea and have implemented it in r10246. One question though, what should a scan for chained ops do? list [==] 0, 0, 1, 2, 2; # bool::false? # (bool::true, bool::true, bool::false, bool::false, bool::false) Keeping in mind that the scan will contain the boolean results of the comparisons, you'd be comparing 2 with true in the later stages of the scan. Is that what you intended, or would ~~ be more appropriate? (And I'm with Smylers on this one: show me a useful example, please.) =Austin
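To make the question concrete in a language that exists today, here is the scan via Python's itertools.accumulate, plus the chained variant that corresponds to the second answer in the thread. This is an illustration of the semantics under discussion, not a proposed implementation:

```python
from itertools import accumulate
import operator

xs = [0, 0, 1, 2, 2]

# An ordinary scan keeps every intermediate result of a left fold:
print(list(accumulate(xs, operator.add)))   # [0, 0, 1, 3, 5]

# Scanning '==' the same way mixes types: after the first step the
# accumulator is a boolean, so later stages compare 2 with True/False,
# which is exactly the objection raised above:
print(list(accumulate(xs, operator.eq)))    # [0, True, True, False, False]

# The 'second answer' reading is a chained scan: element i is true
# iff all elements up to and including i are equal.
chained, ok = [True], True
for prev, cur in zip(xs, xs[1:]):
    ok = ok and prev == cur
    chained.append(ok)
print(chained)  # [True, True, False, False, False]
```

Note that the naive fold gives a third result, different from both candidate answers in the thread, which is a good argument for specifying chained-op scans explicitly.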
Re: Scans
Mark A. Biggar wrote: Austin Hastings wrote: Gaal Yahas wrote: On Mon, May 08, 2006 at 04:02:35PM -0700, Larry Wall wrote: : I'm probably not thinking hard enough, so if anyone can come up with an : implementation please give it :) Otherwise, how about we add this to : the language? Maybe that's just what reduce operators do in list context. I love this idea and have implemented it in r10246. One question though, what should a scan for chained ops do? list [==] 0, 0, 1, 2, 2; # bool::false? # (bool::true, bool::true, bool::false, bool::false, bool::false) Keeping in mind that the scan will contain the boolean results of the comparisons, you'd be comparing 2 with true in the later stages of the scan. Is that what you intended, or would ~~ be more appropriate? (And I'm with Smylers on this one: show me a useful example, please.) Well the above example does tell you where the leading prefix of equal values stops, assuming the second answer. That's a long way to go... Combined with reduce it gives some interesting results: [+] list [?&] @bits == index of first zero in bit vector Likely to win the obfuscated Perl contest, but ...? There are other APLish operators that could be very useful in combination with reduce and scan: the bit vector form of grep (maybe called filter); filter (1 0 0 1 0 1 1) (1 2 3 4 5 6 7 8) == (1 4 6 7) This is really useful if you're selecting out of multiple parallel arrays. Okay, this begins to approach the land of useful. If there's a faster/better/stronger way to do array or hash slices, I'm interested. But the approach above doesn't seem to be it. Use hyper compare ops to select what you want followed by using filter to prune out the unwanted. filter gives you with scan: filter (list [<] @array) @array == first monotonically increasing run in @array This seems false. @array = (1 2 2 1 2 3), if I understand you correctly, yields (1 2 2 3). 
filter (list [<=] @array) @array == first monotonically non-decreasing run in @array So @array = (1 0 -1 -2 -1 -3) == (1, -1) is monotonically non-decreasing? That was 5 minutes of thinking. I'm thinking that APL is dead for a reason. And that every language designer in the world has had a chance to pick over its desiccated bones: all the good stuff has been stolen already. So while scans may fall out as a potential side-effect of reduce, the real question should be are 'scans' useful enough to justify introducing context sensitivity to the reduce operation? =Austin
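Python's itertools.compress happens to be exactly the bit-vector filter being proposed, which makes the counterexample easy to check. Reading the scan as naive pairwise comparisons (the reading the counterexample assumes), the recipe really does fail to produce a monotonic run:

```python
from itertools import compress

# filter (1 0 0 1 0 1 1) (1 2 3 4 5 6 7 8) -- bit-vector selection:
print(list(compress([1, 2, 3, 4, 5, 6, 7, 8], [1, 0, 0, 1, 0, 1, 1])))
# [1, 4, 6, 7]

# The 'first monotonically increasing run' recipe, with the scan read
# as pairwise '<' comparisons (first element trivially selected):
xs = [1, 2, 2, 1, 2, 3]
mask = [True] + [a < b for a, b in zip(xs, xs[1:])]
picked = list(compress(xs, mask))
print(picked)  # [1, 2, 2, 3] -- not a monotonic run, as claimed
```

Under the chained-scan reading the mask goes false at the first failure and stays false, which does give a prefix run; the disagreement in the thread is really about which scan semantics applies.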
Re: Scans
Smylers wrote: Mark A. Biggar writes: Austin Hastings wrote: Gaal Yahas wrote: list [==] 0, 0, 1, 2, 2; # bool::false? # (bool::true, bool::true, bool::false, bool::false, bool::false) (And I'm with Smylers on this one: show me a useful example, please.) Well the above example does tell you where the leading prefix of equal values stops, assuming the second answer. But you still have to iterate through the list of bools to get that index -- so you may as well have just iterated through the input list and examined the values till you found one that differed. I think the one thing that is redeeming scans in this case is the (my?) assumption that they are automatically lazy. The downside is that they aren't random-access, at least not in 6.0. I expect that @scan ::= list [==] @array; say @scan[12]; will have to perform all the compares, since it probably won't be smart enough to know that == doesn't accumulate. So yes, you iterate over the scan until you find whatever you're looking for. Then you stop searching. If you can't stop (because you're using some other listop) that could hurt. At the most useful, it's a clever syntax for doing a map() that can compare predecessor with present value. I think that's a far better angle than any APL clonage. But because it's a side-effect of reduce, it would have to be coded using $b but true to support the next operation: sub find_insert($a, $b, $new) { my $insert_here = (defined($b) ? ($a <= $new < $b) : ($new < $a)); return $b but $insert_here; } Then : sub list_insert($x) { &ins := find_insert.assuming($new => $x); @.list.splice(first([&ins] @array).k, 0, $x); } It's a safe bet I've blown the syntax. :( I think I'm more enthusiastic for a pairwise traversal (map2 anyone?) than for scan. But I *know* map2 belongs in a module. :) the bit vector form of grep (maybe called filter); filter (1 0 0 1 0 1 1) (1 2 3 4 5 6 7 8) == (1 4 6 7) Please don't! The name 'filter' is far too useful to impose a meaning as specific as this on it. 
Hear, hear! Ixnay on the ilterfay. =Austin
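The assumption doing the work above -- that scans are lazy and you can stop consuming them early -- is easy to model with a generator. A Python sketch (lazy_scan is a made-up name), scanning an infinite list but doing only as much work as is consumed:

```python
from itertools import count, takewhile

def lazy_scan(op, it):
    """Generator scan: yields running results, computes nothing
    until a consumer asks for the next element."""
    it = iter(it)
    acc = next(it)
    yield acc
    for x in it:
        acc = op(acc, x)
        yield acc

calls = 0
def add(a, b):
    global calls
    calls += 1
    return a + b

partials = lazy_scan(add, count(1))   # scan over an infinite list: 1, 2, 3, ...
firsts = list(takewhile(lambda s: s <= 10, partials))
print(firsts, calls)  # [1, 3, 6, 10] 4 -- only four additions performed
```

If the consumer can't stop early (some other listop drains the whole scan), you pay for everything, which is the "that could hurt" case above.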
Re: A shorter long dot
Audrey Tang wrote: Damian Conway wrote: Juerd wrote: and propose .: as a solution $xyzzy.:foo(); $fooz. :foo(); $foo. :foo(); This would make the enormous semantic difference between: foo. :bar() and: foo :bar() depend on a visual difference of about four pixels. :-( Good (and floating) point. How about this: $antler.bar; $xyzzy.:bar; $blah. .bar; $foo. .bar; That is, introduce only the non-space-filled .: variant, and retain the space-filled long dot. How about if we replace dot with - and then you can specify any number of dashes for alignment: $antler-bar; $xyzzy--bar; $blah---bar; $foo----bar; Or, to put it another way: what hard problem is it that you guys are actively avoiding, that you've spent a week talking about making substantial changes to the language in order to facilitate lining up method names? =Austin
Re: Another dotty idea
Damian Conway wrote: Larry wrote: I really prefer the form where .#() looks like a no-op method call, and can provide the visual dot for a postfix extender. It also is somewhat less likely to happen by accident than #., I think. And I think the front-end shape of .# is more recognizable as different from #, while #. requires a small amount of visual lookahead, and is the same square shape on the front, and could easily be confused with a normal line-ending comment. I'm not enamoured of the .# I must confess. Nor of the #. either. I wonder whether we need the dot at all. Or, indeed, the full power of arbitrary delimiters after the octothorpe. How committed are we to spaces? If we impose an adjacent-space requirement on the range operators, we could just repeat dots endlessly: given ($x) { when (0 .. 9) {...}} $obj.........method(); # These line up in proportional font. Sorry. $newobj......method(); $longobjname.method(); Frankly, I don't line up my method calls like this, so it's not much of a concern. But I also use spaces around operators, so I'm okay with my coding style becoming syntax. :) =Austin
Re: Another dotty idea
Damian Conway wrote: I'm not enamoured of the .# I must confess. Nor of the #. either. I wonder whether we need the dot at all. Or, indeed, the full power of arbitrary delimiters after the octothorpe. What if we restricted the delimiters to the five types of balanced brackets? And then simply said that, when any comment specifier (i.e. any octothorpe) is followed immediately by an opening bracket, the comment extends to the corresponding closing bracket? Then you could have: #[ This is a comment ] #( This is a comment ) #{ This is a comment } #< This is a comment > #« This is a comment » This is a much better idea, disconnected from the question of putting spaces in method calls. It's particularly nice if you say the magic words multi-line comment. (Please, spare me the hand-wringing about pod.) I'll even pay you a closing hash (]#, }#, )#, etc.) if you put it in. For extra credit, make them nest. =Austin
Re: Perl 6 OO and bless
Rob Kinyon wrote: OOP is all about black-box abstraction. To that end, three items have been identified as being mostly necessary to achieve that: 1) Polymorphism - aka Liskov substitutability 2) Inheritance - aka specialization 3) Encapsulation P5 excels at #1, does #2 ok, and fails completely at #3. Now, one can argue whether the programmer should make the decision as to whether strong encapsulation is desirable, but the point is that you cannot create encapsulation in Perl that someone else cannot violate. Hence, you cannot write OO code in Perl. [...] OO is a spectrum, not a point. What I was trying to say was that when comparing the OO you can do in P5 with the OO you will be able to do in P6, it seems silly (to me) to cripple P6 out of a misguided effort to maintain backwards compatibility with P5. [sarcasm] Indeed. In fact, in any language where coders can discover the implementation of your code, they might depend upon it or circumvent your wishes. Ergo, no open source code can be considered OO. In fact, if p6 isn't going to compile programs down to a binary representation -- indeed, if it doesn't encrypt the resulting binary -- there's no reason for anyone to ever use object and perl6 in the same sentence. Woe are we. [/sarcasm] Dude, this isn't slashdot. Taking some recondite position and then defending it against all comers doesn't earn karma, or get you clickthroughs, or earn you mod points. If you think that trying to be backward compatible will have negative impacts in the long run, say that. As it is, I've just spent an hour reading Can't do it Can too! Can not! Can too! Well, it might be awkward, is all I'm sayin' There's $150 you owe me. That's 180 bottles of Yuengling. As for the encapsulation thing, here's my take on it: I'm not a perl programmer. I'm a coder. I write stuff in whatever language makes sense: PHP, sh, perl, C, Batch. 
That means that I don't keep track of the status of all the CPAN modules, nor do I particularly care about the personal lives and traumas affecting their authors. BUT, CPAN makes Perl rock IMO because it can really increase my productivity (and I'm a consultant, so I always need to be more productive than everyone else). So if there's a problem I can easily state, I go look on CPAN to see if someone has already solved it. Sadly, it's only very rarely that the module is really drop-in. The rest of the time it goes like this: 1. Given this module, can I drop it in and have it work? 2. No. Okay, can I change some of my code to interface to it? 3. No. Can I write a hook that will get the behavior I want? 4. No. If I subclass/extend it, can I trivially get what I want? 5. No. Is there another module I should consider? 6. No. Is the code itself readable and comprehensible so I can extend it? 7. No. I guess I write it myself. (Any yes answer returns, or at least yields.) Encapsulation isn't something you have to strive for. Encapsulation is something you have to work really hard to avoid. Because the only time I'm looking at (gasp!) violating encapsulation is when I have determined that your module is the best of the bunch (or at least closest to what I want) and IT DOESN'T WORK. So I'm opening the hood to answer questions 4 (unless your docs are good) or 6. That means that apparently the stars are in alignment, because I'm agreeing with Chromatic (for the second time, I think). Backward compatibility means a bunch of modules that will work, and that can be extended. I don't have to insert question 3a: Is this written in P6 so I can do stuff with it? I'm sure a lot of the obvious modules will be rewritten in P6 really soon. DBI, for instance. A lot of the bottom-feeders are only really useful when you're in a rare circumstance. They're REALLY useful, though, when you're in that circumstance. Statistically, some authors are dead. 
Some have seen the sacred light and switched over to programming Ruby (or the LOTW). Some are zoned out in a crack house in some dark metropolis. But if the compatibility is good I can still use their code. Saved! (By someone who really wanted to code a Maori calendar module...) =Austin
Re: Array Holes
Larry Wall wrote: Whatever the answer, it probably has to apply to my @a; @a[0] = '1'; @a[2] = '3'; print exists $a[1]; as well as the explicit delete case. Are we going to pitch an exception for writing beyond the end of an array? That seems a bit anti-Perlish. What's the semantic difference between exists but is undefined and does not exist? IOW: my @a = (1,2,3); @a[1] = undef; print exists @a[1]; print @a[1]; versus: my @a = (1,2,3); delete @a[1]; print exists @a[1]; print @a[1]; If I try to store @a[1], it auto(re)vivifies, right? If I fetch @a[1], do I get an exception or just an undef? To force undef, do I use err or //? my $a1 = @a[1] err undef; my $a1 = @a[1] // undef; Does @a.keys return different results? (0..2) vs (0,2) : * how do you store nonexistence for native packed arrays We've beaten this horse, or one that looks vanishingly like it, a bunch of times. 1. Implementation issue. Throw it to p6i. 2. For infrequent exceptions, create an exception-list. 3. For frequent exceptions, create a bitmap. 4. For exceptions in unbalanced data, create an escape value. : Anyway, I want this behavior to die. Maybe we should get rid of : .delete() on arrays, or maybe we should equate it with : .splice($idx,1). Or maybe we support sparse arrays as one container type. Sure, but what's the default behavior? If I consume the endpoints of an array, using shift or pop, how is that different from consuming the middle? my @a = (1,2,3); shift @a; shift @a; print @a.indexof(3); # 0 or 2? Based on that, I'll argue that the default array mode is contains a list, which means that array contents should act like lists. Delete something in the middle, and you have a shorter list, not a list with a hole in it. Harrays and Arrashes can be different types, or maybe can just be properties of the basic sigillated types. my @a = (1,2,3) is arrash; # sparse my %h = (1 => 'a', 2 => 'b') is harray; I actually used a crowbar last week. It wasn't pretty, but it got the job done. 
Well, okay, it got the job started...I still have a hole in my living room wall, but my chief interest at the time was to make sure my house wasn't going to burn down. That's about as much abstraction as you could wish for in real life. Dude, you're scaring me. Should there be a license required for home ownership? =Austin
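The exists-but-undefined versus does-not-exist distinction, and the sparse 'arrash' idea, can be pinned down with a dict-backed array. A hypothetical Python sketch, mirroring the @a[1] examples above (class and method names are invented):

```python
class SparseArray:
    """Sketch of the 'arrash': array semantics with holes, so that
    'exists but undefined' and 'does not exist' are distinct states."""
    def __init__(self, items=()):
        self.cells = dict(enumerate(items))
    def exists(self, i):
        return i in self.cells
    def get(self, i):
        return self.cells.get(i)   # missing cell reads as None ('undef')
    def set(self, i, v):
        self.cells[i] = v          # auto(re)vivifies the cell
    def delete(self, i):
        self.cells.pop(i, None)    # leaves a hole, no shifting

a = SparseArray([1, 2, 3])
a.set(1, None)                 # @a[1] = undef
print(a.exists(1), a.get(1))   # True None
a.delete(1)                    # delete @a[1]
print(a.exists(1), a.get(1))   # False None
```

The default contains-a-list mode argued for above would instead splice on delete, so exists() could never be false inside the live range.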
Re: Pattern matching and for loops
Dave Whipp wrote: Today I wrote some perl5 code for the umpteenth time. Basically: for( my $i=0; $i < $#ARGV; $i++ ) { next unless $ARGV[$i] eq '-f'; $i++; $ARGV[$i] = absolute_filename $ARGV[$i]; } chdir 'foo'; exec 'bar', @ARGV; I'm trying to work out if there's a clever perl6 way to write this using pattern matching: for @*ARGV -> '-f', $filename { $filename .= absolute_filename; } Would this actually work, or would it stop at the first elem that doesn't match ('-f', ::Item)? Is there some way to associate alternate codeblocks for different patterns (i.e. local anonymous MMD)? That's given/when. I seem to recall that given and for do not topicalize the same way, by design, but my recollection may be dated. If I'm wrong, then: for ... { when -f { ... }} If I'm right, then there is probably an argument for a standalone when that uses the default topic, or for some sort of shorthand to coerce a unified topicalization. =Austin
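For reference, the perl5 loop being golfed translates directly. A Python rendering (the function name is invented) of what the perl6 pattern-match version has to preserve -- find each '-f', then rewrite the argument that follows it:

```python
import os

def absolutize_f_args(argv):
    """Rewrite the argument after each '-f' flag to an absolute path,
    mirroring the perl5 loop in the post."""
    out = list(argv)
    i = 0
    while i < len(out) - 1:
        if out[i] == "-f":
            i += 1
            out[i] = os.path.abspath(out[i])
        i += 1
    return out

print(absolutize_f_args(["-f", "x.txt", "-v"]))
```

The subtlety the post asks about -- whether a destructuring for stops at the first non-matching element -- is exactly what the explicit index walk sidesteps.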
Re: Error Laziness?
Luke Palmer wrote: There are two reasons I've posted to perl6-language this time. First of all, is this acceptable behavior? Is it okay to die before the arguments to an undefined sub are evaluated? Something like: widgetMethod new Widget; The best argument I've got for forcing the args to define is that AUTOfoo might define them for you. I'm not sure that a similar argument involving a possible AUTObar might not invalidate it, though -- is there a type-based or virtual AUTOfoo behavior? Second, consider this is lazy code: sub foo ($bar is lazy) { my $bref = \$bar; do_something($bref); } foo(42); This will evaluate $bar, even if it is not used in do_something. Okay, why? I'd expect the reference-taking to come nowhere close to evaluating $bar, and passing $bref either passes the reference (no evaluation) or looks at the reference, realizes that it's basically lazy, too, and punts. In fact, this will evaluate $bar even if the do_something call is omitted altogether. This doesn't give you much control over the time of evaluation, and presumably if you're saying is lazy, control is precisely what you want. I don't think control so much as aversion. is lazy means don't evaluate this if you can avoid evaluating this. I think we need more control. I think is lazy parameters should pass a thunk that needs to be call()ed: sub foo ($bar is lazy) { say $bar; # says something like Thunk(...) say $bar(); # evaluates parameter and prints it say $bar; # still says something like Thunk(...) say $bar(); # doesn't evaluate again, just fetches } Whaddaya think? Erm, no. I think is lazy describes $bar, and that means I use $bar as '$bar' and perl dances madly to avoid evaluating it. Suppose for a moment that there were levels of laziness -- there aren't, but just suppose: $x is lazy('Network IO involved. Create temporary variables and/or closures to avoid losing this value once evaluated.'); $y is lazy('Horribly costly. 
Perform algebraic substitution before evaluating this.'); $z is lazy('Possibly infinitely expensive. Only evaluate if halting problem solved.'); The syntax of the language shouldn't change here. If I say print $x then I'm telling you that I want to pay the price. It's a debug statement or whatever, but I want those MIPS to get burned right now. There's not much other way around it. (Of course, if the lazy entity is a sequence it might be legitimate to print [sequence starting 1, 2, 3 ...] instead of all possibly-infinitely-many values.) Adding is lazy shouldn't change the code below it. It should just speed it up where possible. I like the idea of $x.value being a thunk, so that math or what-not could actually be carried as an expression. So that: sub foo($bar is rw is lazy) { $bar++; } Becomes ...{ $bar.value = \{ my &v = $bar.value; postfix:<++>(v()); }; } But $x.toString(), unless otherwise overridden, needs to keep dereferencing ...value.value.value until some kind of basic type falls out. Perhaps you could enumerate the various aspects of value? Obviously there's value() and toString() (or whatever it's called.) How many such are there? =Austin
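The call-once thunk Luke proposed, and that Austin wants hidden behind plain variable access, is easy to state precisely. A Python sketch (the class name is assumed) showing defer-then-cache, which is the behavior both sides seem to want under the hood:

```python
class Thunk:
    """Sketch of an 'is lazy' parameter: defer evaluation until
    forced, then cache, so forcing twice evaluates once."""
    _unset = object()
    def __init__(self, compute):
        self.compute = compute
        self.cached = Thunk._unset
    def __call__(self):
        if self.cached is Thunk._unset:
            self.cached = self.compute()
        return self.cached

evals = 0
def expensive():
    global evals
    evals += 1
    return 42

t = Thunk(expensive)
print(evals)        # 0 -- nothing evaluated yet
print(t(), t())     # 42 42
print(evals)        # 1 -- forced once, cached thereafter
```

The disagreement in the thread is only about whether the forcing call is visible ($bar()) or implicit in every use of $bar.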
Continuing in the face of exceptions, and what's the negation of // ?
I retract my opposition to err. After coding this: try { try { path = f.getCanonicalPath(); } catch (Exception e) { path = f.getAbsolutePath(); } } catch (Exception e) { path = f.toString(); } I am now a convert. To the extent that we are going to throw exceptions, there needs to be a quick way of huffmanizing the scaffolding. IIRC, use fatal decides between an exception being thrown and an undef but ... value being returned. IMO, it's important to coerce code into the same behavior: If a sub that I call tries to throw an exception, that needs to be converted into undef-but-whatever, too. One problem, of course, is drawing the line: «die creates a normal exception» [A4] so how, if at all, can a library coder force an abort? I suspect exit() is the only way -- there's always some guy wanting to override everything. So the err operator, or whatever, has to suppress fatal, call, check, return: sub infix:err (Code &lhs, Code &rhs) { no fatal; lhs(); CATCH { rhs(); } } Producing: my $path = $f.getCanonicalPath() err $f.getAbsolutePath() err $f.toString(); But using no fatal at the top of your code reduces this to plain old //: no fatal; my $path = $f.getCanonicalPath() // $f.getAbsolutePath() // $f.toString(); I wonder what '\\' means? Is it the negation of //? So that: my $x = $hash<$key> \\ die Error: hash<$key> already defined! Or can we continue the breaktyping theme and grab it for fatal-suppression? Alternatively, of course, there's always /// or //! -- probably one of these is best, extending the theme of the if undef to include if undef (and exception is very undef!). At any rate, count me in for suppressing errors inline -- also for tarring and feathering the guy who put that kind of crap in the Java standard library. =Austin
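The err chain -- try each alternative left to right, treat an exception as undef, fall through to the next -- looks like this in Python (first_ok and the file-path stand-ins are invented for illustration):

```python
def first_ok(*thunks):
    """Evaluate alternatives left to right, treating an exception
    as 'undef' and falling through -- the inline 'err' chain."""
    err = None
    for t in thunks:
        try:
            return t()
        except Exception as e:
            err = e
    raise err   # every alternative failed: re-raise the last error

def canonical():  raise OSError("no canonical path")
def absolute():   raise OSError("no absolute path")
def plain():      return "f.txt"

# my $path = $f.getCanonicalPath() err $f.getAbsolutePath() err $f.toString();
path = first_ok(canonical, absolute, plain)
print(path)  # f.txt
```

The nested try/catch scaffolding from the Java snippet collapses into one flat chain, which is precisely the huffmanization being asked for.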
Re: Sane (less insane) pair semantics
Ingo Blechschmidt wrote: Juerd wrote: Ingo Blechschmidt skribis 2005-10-10 20:08 (+0200): Named arguments can -- under the proposal -- only ever exist in calls. Which leaves us with no basic datastructure that can hold both positional and named arguments. This is a problem because in a call, they can be combined. Very true. This is why we need Luke's Tuple proposal [1]. Luke's Tuple proposal, aka Luke's Grand Unified Object Model, is way not what we need for this. As far as I can see, LGUOM is an expression of Haskell envy of brobdingnagian proportion. A rule that says splatting a list coerces all pairs into named args works just fine. The corresponding rule, accessing the parameters to your sub as a list (not using *%args) coerces all named args to pairs. Presto! Reversible, etc. An alternative might be PHP-style arrays. But nobody wants that. Basically: my $tuple = (a => 1, (b => 2)):{ ...block... }; # $tuple.isa(Tuple) # Tuples are ordinary objects -- they can be stored # in scalars, arrays, etc. # But splatting tuples unfolds their magic: foo(*$tuple); # same as foo(a => 1, (b => 2)):{ ...block...}; # named arg a, positional pair (b => 2), # adverbial block { ...block... } # (Yep, under the current proposal, tuple construction conflicts # with list/array construction. FWIW, I'd be fine with # using Tuple.new(...) as the tuple constructor.) --Ingo [1] http://svn.openfoundry.org/pugs/docs/notes/theory.pod Tuple construction conflicts with a lot of things. Given the amount of new (to me, anyways) syntax proposed in the rest of the document, I'm surprised that Luke didn't unify lists and tuples (or push list construction elsewhere). =Austin
Re: Sane (less insane) pair semantics
Stuart Cook wrote: On 10/10/05, Austin Hastings [EMAIL PROTECTED] wrote: The overrides have nothing to do with it. That a => 1 will *always* be a positional, because by the time it reaches the argument list, it's a value (not a syntactic form). The only way to use a pair-value as a named argument is to splat it directly, or splat a hash or arg-list-object containing it. Splatting an array *never* introduces named arguments, only positionals. That seems like a huge error. Arrays are the only thing (that I know of) that can store the positional part of an arglist as well as storing the pairs. If there's not some mechanism for getting the entire arglist, and if there's not some simple inversion of that simple mechanism for passing the args along to some other victim, then ... well ... I don't know. But it's bad! So since you keep saying arg-list-object as though it was something not an array, I'll bite: What's an arg-list-object, and how is it different from an array? =Austin
Re: Sane (less insane) pair semantics
Miroslav Silovic wrote: [EMAIL PROTECTED] wrote: * expands its RHS and evaluates it as if it was written literally. I'd like @_ or @?ARGS or something like that to be a *-able array that will be guaranteed to be compatible with the current sub's signature. This sounds nice, though. Maybe it suggests that the 'named splat' should be something other than *? How about perl should DWIM? In this case, I'm with Juerd: splat should pretend that my array is a series of args. So if I say: foo *@array; or if I say: foo(*@array); I still mean the same thing: shuck the array and get those args out here, even the pairs. It's worth pointing out that perl does know the list of declared named args, though that may not be enough. If the pair.key matches an expected arg, then splat should collapse it for sure. If it doesn't match...I dunno. Is there a list() operator for converting hashes into lists of pairs? That might make parsing foo(*@_, *%_) more palatable, but I'd still prefer to get pairs in @_ if I don't explicitly ask for *%_... =Austin
Re: Sane (less insane) pair semantics
Ingo Blechschmidt wrote: Hi, while fixing bugs for the imminent Pugs 6.2.10 release, we ran into several issues with magical pairs (pairs which unexpectedly participate in named binding) again. Based on Luke's Demagicalizing pairs thread [1], #perl6 refined the exact semantics [2]. The proposed changes are: * (key => $value) (with the parens) is always a positionally passed Pair object. key => $value (without the parens) is a named parameter: sub foo ($a) {...} foo(a => 42); # named parameter a, $a will be 42 foo(:a(42)); # same foo((a => 42)); # positional parameter (a pair), # $a will be the Pair (a => 42) foo((:a(42))); # same What about whitespace? foo (a => 42); # Note space Is that the first case (subcall with named arg) or the second case (sub with positional pair)? * A variable containing a Pair is always passed positionally: my $pair = (a => 42); # or :a(42) foo($pair); # positional parameter, $a will be the Pair (a => 42) * Unary * makes a normal pair variable participate in named binding: foo(*$pair); # named parameter a, $a will be 42 * Same for hashes: my %hash = (a => 1, b => 2, c => 3); foo(%hash); # positional parameter, $a will be \%hash foo(*%hash); # three named parameters Opinions? --Ingo [1] http://article.gmane.org/gmane.comp.lang.perl.perl6.language/4778/ [2] http://colabti.de/irclogger/irclogger_log/perl6?date=2005-10-09,Sunsel=528#l830 What's the most complete way to get the sub's arguments? That is, for a sub that takes positional, optional, named, and variadic (*) arguments, what's the best mechanism for grabbing the entire call? Your reply to Uri: Uri Guttman wrote: but what about lists and arrays? my @z = ( 'a', 1 ) ; foo( @z ) # $a = [ 'a', 1 ] ?? Yep. Suggests that I cannot pass along parameters in the usual way: sub foo(...) { bar(@_); } Does that work if I code it as bar @_; (which is huffmanly, but hardly intuitive). =Austin
Re: Sane (less insane) pair semantics
Stuart Cook wrote: On 10/10/05, Austin Hastings [EMAIL PROTECTED] wrote: What about whitespace? foo (a => 42); # Note space Is that the first case (sub call with named arg) or the second case (sub with positional pair)? Sub with positional pair, since the parens aren't call-parens (because of the space), so they protect the pair. It would probably be prudent to emit a warning in this case, for obvious reasons. (Actually, this is one of the major problems with using parens to protect pair args.) So to pass a hash that has one element requires using the C<hash> keyword? Specifically, if I say: @args = (a => 1, get_overrides()); then can I say foo([EMAIL PROTECTED]); or will I, in the case of no overrides, get a positional pair instead of named a => 1? What's the most complete way to get the sub's arguments? That is, for a sub that takes positional, optional, named, and variadic (*) arguments, what's the best mechanism for grabbing the entire call? As far as I know there currently *isn't* a concise way to capture/forward all (or some) of a sub's arguments; the closest thing is: sub foo([EMAIL PROTECTED], *%named) { bar([EMAIL PROTECTED], *%named) } which is ugly and unwieldy. I believe Luke was considering some kind of 'unified arg-list object' which you could use to slurp and splat entire argument lists, like so: sub foo(*$args) { bar(*$args) } But I don't think it's been posted to the list yet. It seems like positionals, if specified, should appear as pairs in [EMAIL PROTECTED] unless a hash is also present. That is, @_ or its replacement as the collection-of-all-arguments-given should be a list, for positionalness, and should include pairs when necessary. =Austin
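For what it's worth, the slurp-then-splat forwarding Stuart calls "ugly and unwieldy" is how other languages already spell whole-call forwarding. A Python sketch of the same capture-everything, pass-everything shape (`foo` and `bar` are hypothetical names):

```python
def bar(x, y=0, *rest, **named):
    return (x, y, rest, named)

def foo(*args, **kwargs):
    # Capture the entire call -- positionals and nameds -- and forward it
    # untouched, the same shape as the slurpy-array-plus-slurpy-hash
    # forwarding quoted above.
    return bar(*args, **kwargs)

assert foo(1, 2, 3, k=4) == (1, 2, (3,), {"k": 4})
```

Luke's 'unified arg-list object' would collapse the two slurpy parameters into one capture, much as some languages bundle `(args, kwargs)` into a single object.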
Re: Exceptuations
Yuval Kogman wrote: On Thu, Oct 06, 2005 at 14:27:30 -0600, Luke Palmer wrote: On 10/6/05, Yuval Kogman [EMAIL PROTECTED] wrote: when i can't open a file and $! tells me why i couldn't open, i can resume with an alternative handle that is supposed to be the same when I can't reach a host I ask a user if they want to wait any longer when disk is full I ask the user if they want to write somewhere else when a file is unreadable i give the user the option to skip I'm not bashing your idea, because I think it has uses. But I'll point out that all of these can be easily accomplished by writing a wrapper for open(). That would be the usual way to abstract this kind of thing. Writing a wrapper may be the implementation mechanics: sub safe_open() { call; CATCH { when E::AccessDenied { return show_user_setuid_dialog(); }} } open.wrap(safe_open); But this is just one way to do it, and it fails to provide for helping other people's code: Yuval's GUI environment would offer to fix the problem for ALL file-related calls (open, dup, popen, ad nauseam), and would not have to worry about the order in which calls are wrapped. But see below. Stylistically I would tend to disagree, actually. I think it's cleaner to use exception handling for this. Also, this implies that you know that the errors are generated by open. This is OK for open(), but if the errors are generated from a number of variants (MyApp::Timeout could come from anywhere in a moderately sized MyApp), then wrapping is not really an option. I think that what your proposal *really* requires is uniformity. There are other ways to get the same behavior, including an abstract factory interface for exception construction (which would provide a virtual constructor for exceptions, permitting userspace to insert a 'retry' behavior), but it has the same vulnerability: the p6core must cooperate in uniformly using the same mechanism to report errors: throw, fail, die, error, abend, whatever it's eventually called. sub *::throw(...) 
{ return recover_from_error([EMAIL PROTECTED]) or P6CORE::throw([EMAIL PROTECTED]); } =Austin
Re: Exceptuations
Yuval Kogman wrote: On Fri, Oct 07, 2005 at 02:31:12 -0400, Austin Hastings wrote: Yuval Kogman wrote: Stylistically I would tend to disagree, actually. I think it's cleaner to use exception handling for this. Also, this implies that you know that the errors are generated by open. This is OK for open(), but if the errors are generated from a number of variants (MyApp::Timeout could come from anywhere in a moderately sized MyApp), then wrapping is not really an option. I think that what your proposal *really* requires is uniformity. There are other ways to get the same behavior, including an abstract factory interface for exception construction (which would provide a virtual constructor for exceptions, permitting userspace to insert a 'retry' behavior), but it has the same vulnerability: the p6core must cooperate in uniformly using the same mechanism to report errors: throw, fail, die, error, abend, whatever it's eventually called. We have: die: throw immediately fail: return an unthrown exception, which will be thrown depending on whether our caller, and their caller - every scope into which this value propagates - is using fatal. This is enough for normal exception handling. Yet here we are discussing abnormal exception handling. As for recovery - the way it's done can be specialized on top of continuations, but a continuation for the code that would run had the exception not been raised is the bare metal support we need to do this. No, it isn't. It's *one way* to do this. Any mechanism which transfers control to your error-recovery code in such a way that it can cause an uplevel return of a substituted value is the bare metal support we need. You're conflating requirements and design. I suggested an alternative *design* in my prior message, to no avail. Try this instead: You overload the global 'die' with a sub that tries to decode the error based on its arguments. If it cannot comprehend the error, it invokes P6CORE::die(). 
If it comprehends the error, it tries to resolve it (querying the user, rebooting the machine to free up space in /tmp, whatever) and if successful it returns the fixed value. But wait! This requires that everyone do return die ... instead of die ..., and we can't have that. So you add a source filter, or macro, or tweak the AST, or perform a hot poultry injection at the bytecode level, or whatever is required to convert die into return die wherever it occurs. Et voila! No exceptions are caught, no continuations are released into the wild. And yet it flies. Much like the hummingbird it looks a little awkward, and I'm way not sure that munging bytecodes is necessarily a better idea. But the point is that resuming from an exception (or appearing to) is not bound to being implemented with continuations. =Austin Et vidi quod aperuisset Autrijus unum de septem sigillis, et audivi unum de quatuor animalibus, dicens tamquam vocem tonitrui : Veni, et vide. Et vidi : et ecce camelus dromedarius, et qui scriptat super illum, habebat archivum sextum, et data est ei corona, et exivit haccum ut vinceret. Apocalypsis 6:1 (Vulgata O'Reilly)
Re: Look-ahead arguments in for loops
Miroslav Silovic wrote: [EMAIL PROTECTED] wrote: And that was never quite resolved. The biggest itch was with operators that have no identity, and operators whose codomain is not the same as the domain (like <, which takes numbers but returns bools). Anyway, that syntax was $sum = [+] @items; And the more general form was: $sum = reduce { $^a + $^b } @items; Yes, it is called reduce, because foldl is a miserable name. To pick some nits, reduce and fold are different concepts. By definition, reduce doesn't take the initial value, while fold does. So reduce using fold is something like @items || die horribly; foldl fn, @items[0], @items[1..] ... while fold using reduce is: reduce fn, ($initial, @items) I think both are useful, depending on the circumstances. Miro Something like: sub *reduce(func, +$initial = undef, [EMAIL PROTECTED]) { $value = $initial; map { $value = func($value, $_); } <== @list; return $value; } ? Or would this be an array method? I can see wanting to reduce list literals, in some weird mostly-tutorial cases, but I can also see wanting this to be a data structure method to prevent reducing a parse tree using an array operation... =Austin
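Miroslav's nit is easy to pin down with a short sketch (Python for concreteness; `fold` and `reduce_` are hypothetical names, not anything from a spec):

```python
def fold(fn, initial, items):
    # fold takes an explicit initial value for the accumulator.
    acc = initial
    for x in items:
        acc = fn(acc, x)
    return acc

def reduce_(fn, items):
    # reduce takes no initial value: it seeds the accumulator from the
    # first element, so an empty list is an error -- "@items || die horribly".
    items = list(items)
    if not items:
        raise ValueError("reduce of empty sequence")
    return fold(fn, items[0], items[1:])

assert fold(lambda a, b: a + b, 0, [1, 2, 3]) == 6
assert reduce_(lambda a, b: a + b, [1, 2, 3]) == 6
```

The empty-list case is exactly where the two diverge: fold returns the initial value, reduce has nothing to return.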
Re: Exceptuations, fatality, resumption, locality, and the with keyword; was Re: use fatal err fail
Yuval Kogman wrote: On Thu, Sep 29, 2005 at 13:52:54 -0400, Austin Hastings wrote: [Bunches of stuff elided.] A million years ago, $Larry pointed out that when we were able to use 'is just a' classifications on P6 concepts, it indicated that we were making good forward progress. In that vein, let me propose that: * Exception handling, and the whole try/catch thing, IS JUST An awkward implementation of (late! binding) run-time return-type MMD. Exception handling is just continuation passing style with sugar. Have a look at haskell's Either monad. It has two familiar keywords - return and fail. Every statement in a monadic action in haskell is sequenced by using the monadic bind operator. The implementation of >>=, the monadic bind operator, on the Either type is one that first checks to see if the left statement has failed. If it has, it returns it. If it hasn't, it returns the evaluation of the right-hand statement. Essentially this is the same thing, just formalized into a type. Internally, it may be the same. But with exceptions, it's implemented by someone other than the victim, and leveraged by all. That's the kind of abstraction I'm looking for. My problem with the whole notion of Either errorMessage resultingValue in Haskell is that we _could_ implement it in perl as Exception|Somevalue in millions of p6 function signatures. But I don't _want_ to. I want to say MyClass and have the IO subsystem throw the exception right over my head to the top-level caller. I guess that to me, exceptions are like aspects in that they should be handled orthogonally. Haskell's Either doesn't do that -- it encodes a union return type, and forces the call chain to morph whenever alternatives are added. The logical conclusion to that is that all subs return Either Exception or Value, so all types should be implicitly Either Exception or {your text here}. If that's so, then it's a language feature and we're right back at the top of this thread. 
Specifically, if I promise you: sub foo() will return Dog; and later on I actually wind up giving you: sub foo() will return Exception::Math::DivisionByZero; In haskell: foo :: Either Dog Exception::Math::DivisionByZero e.g., it can return either the expected type, or the parameter. Haskell is elegant in that it compromises nothing for soundness, to respect referential integrity and purity, but it still makes things convenient for the programmer using things such as monads. For appropriate definitions of both 'elegant' and 'convenient'. Java calls this 'checked exceptions', and promises to remind you when you forget to type throws Exception::Math::DivisionByZero in one of a hundred places. I call it using a word to mean its own opposite: having been exposed to roles and aspects, having to code for the same things in many different places no longer strikes me as elegant or convenient. The try/catch paradigm essentially says: I wanted to call C<sub Dog foo()> but there may be times when I discover, after making the call, that I really needed to call an anonymous C<sub { $inner ::= sub Exception foo(); $e = $inner(); given $e {...} }>. Yes and no. The try/catch mechanism is not like the haskell way, since it is purposefully ad-hoc. It serves to fix a case-by-case basis of out-of-bounds values. Haskell forbids out-of-bounds values, but in most programming languages we have them to make things simpler for the maintenance programmer. Right. At some level, you're going to have to do that. This to me is where the err suggestion fits the most comfortably: err (or doh! :) is a keyword aimed at ad-hoc fixes to problems. It smooths away the horrid boilerplate needed for using exceptions on a specific basis. do_something() err fix_problem(); is much easier to read than the current { do_something(); CATCH { fix_problem(); }} by a lot. But only on two conditions: first that all exceptions are identical, and second that the correct response is to suppress the exception. 
To me that fails because it's like Candy Corn: you only buy it at Halloween, and then only to give to other people's kids. As syntactic sugar goes, it's not powerful enough yet. We're conditionally editing the return stack. This fits right in with the earlier thread about conditionally removing code from the inside of loops, IMO. Once you open this can, you might as well eat more than one worm. Another conceptually similar notion is that of AUTOLOAD. As a perl coder, I don't EVER want to write say "Hello, world" or die "Write to stdout failed."; -- it's correct. It's safe coding. And it's stupid for a whole bunch of reasons, mostly involving the word yucky. It's incorrect because it's distracting and tedious. http://c2.com/cgi/wiki?IntentionNotAlgorithm Code which does it is, IMHO, bad code, because obviously the author does not know where to draw the line and say this is good enough, anything more would only make it worse. For instance, some
Re: Exceptuations, fatality, resumption, locality, and the with keyword; was Re: use fatal err fail
TSa wrote: The view I believe Yuval is harboring is the one exemplified in movies like The Matrix or The 13th Floor and that underlies the holodeck of the Enterprise: you can leave the intrinsic causality of the running program and inspect it. Usually that is called debugging. But this implies the programmer catches a breakpoint exception or some such ;) Exception handling is the programmatic automation of this process. As such it works the better the closer it is in time and context to the cause, and the more information is preserved. But we all know that a useful program is lossy in that respect. It re-uses finite resources during its execution. In an extreme setting one could run a program *backwards* if all relevant events were recorded! The current state of the art dictates that exceptions are to be avoided when it is possible to handle the error in-line. That exceptions should only be used for exceptional cases, and anything you encounter in the manual pages is not exceptional. I don't agree with this, because it is IMO effectively saying We had this powerful notion, but it turned out to be difficult to integrate post-hoc into our stack-based languages, so we're going to avoid it. Rather than admitting defeat, though, we're going to categorize it as some kind of marginal entity. I don't see exceptions as necessarily being outside the intrinsic causality of the running program. They are non-traditional forms of flow control: event-based programming, if you will, in an otherwise sequential program. We do much the same thing when we talk about coroutines: violate the traditional stack model. We do the same thing again when we talk about aspects: de-localize processing of certain (ahem) aspects of the problem domain. The telling part of aspects, though, was that the first popular implementation (AspectJ) required a preprocessor and a special markup language to implement. Why? Because nobody uses extensibility and Java in the same sentence. 
I guess aspects are traditional in that regard, though: remember CFront. Perl, OTGH, doesn't have the poor body-image or whatever it is that keeps people afraid to change the syntax. It can't be a method because it never returns to its caller - it's It being the CATCH block? Ahh, no. It in this case is the .resume call. My question was: is C<resume> a multi, an object method, or what? This is because the types of exceptions I would want to resume are ones that have a distinct cause that can be mined from the exception object, and which my code can unambiguously fix without breaking the encapsulation of the code that raised the exception. Agreed. I tried to express the same above with my words. The only thing that is a bit underspecced right now is what exactly is lost in the process and what is not. My guiding theme again is the type system, where you leave information about the things you need to be preserved to handle unusual circumstances gracefully---note: *not* successfully, which would contradict the concept of exceptions! This is the classical view of exceptions, and so it is subject to the classical constraints: you can't break encapsulation, so you can't really know what's going on when the exception occurs. The reason I like the with approach is that it lets us delocalize the processing, but does _not_ keep the exceptions are violent, incomprehensible events which wrench us from our placid idyll mentality. In that regard, exceptuations are resumable gotos. =Austin
Re: Look-ahead arguments in for loops
Damian Conway wrote: Rather than adding Yet Another Feature, what's wrong with just using: for @list ¥ @list[1...] -> $curr, $next { ... } ??? 1. Requirement to repeat the possibly complex expression for the list. 2. Possible high cost of generating the list. 3. Possible unique nature of the list. All of these have the same solution: @list = ... for [undef, @list[0...]] ¥ @list ¥ [EMAIL PROTECTED], undef] -> $last, $curr, $next { ... } Which is all but illegible. =Austin
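The zip-with-shifted-copies trick can be spelled out in Python to show what the ¥ (zip) expression computes, with None standing in for undef (`with_neighbours` is a hypothetical name):

```python
def with_neighbours(items):
    # Zip the list against copies shifted one right and one left, padding
    # the ends with None -- the same windowing as zipping [undef, @list]
    # with @list and with the tail of @list.
    prevs = [None] + items[:-1]
    nexts = items[1:] + [None]
    return list(zip(prevs, items, nexts))

assert with_neighbours([1, 2, 3]) == [(None, 1, 2), (1, 2, 3), (2, 3, None)]
```

The objections above still apply: the source list is spelled three times, and each shifted copy costs another traversal.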
Re: Look-ahead arguments in for loops
Damian Conway wrote: Austin Hastings wrote: All of these have the same solution: @list = ... for [undef, @list[0...]] ¥ @list ¥ [EMAIL PROTECTED], undef] -> $last, $curr, $next { ... } Which is all but illegible. Oh, no! You mean I might have to write a...subroutine!?? Austin Hastings wrote: 1. Requirement to repeat the possibly complex expression for the list. 2. Possible high cost of generating the list. 3. Possible unique nature of the list. The subroutine addresses #1, but not #2 or #3. Also, there's a #4: modified state, which is hinted at but not really covered by #3. =Austin
Exceptuations, fatality, resumption, locality, and the with keyword; was Re: use fatal err fail
TSa wrote: HaloO, Yuval Kogman wrote: On Wed, Sep 28, 2005 at 11:46:37 -0500, Adam D. Lopresto wrote: The recent thread on Expectuations brought back to mind something I've been thinking for a while. In short, I propose that use fatal be on by default, and that err be turned into syntactic sugar for a very small try/CATCH block. You already know that err is the low-precedence version of //, right? What replaces that? I like default or defaults myself, but I'm never really sure what the precedence actually IS. After all, and/or were lower than assignment, so you could code: $a = foo or die; and get ($a or die). How does this work for the err/defaults keyword? Does the low-precedence version move up, or is there an idiom I don't understand? On the primary hand, I don't like the idea of using err as a try/catch because that's putting exception handling in line with the primary code. See FMTYEWT below. I like it a lot. It gives the advantages of both the flexible, more robust try/catch, and the (locally) concise, clear error return. I don't like it at all. I fear that we mix two orthogonal concepts just because it is convenient. To me the statement return 42; # 1 has two orthogonal meanings: 1) the current scope has reached its (happy) end 2) a specific result was determined We can vary on both of these dimensions *independently*! Which gives the remaining three cases: return undef; # 0 unspecific result fail undef; # -1 no return with unspecific reason fail 42; # -2 no return but determined reason In other words an exception means return !caller; or in yet another way to describe my attitude: the least thing that *defines* an exception is that the dynamic scope in question has reached the conclusion that it is *not* going to give control back to its creator! But it *does* give control, albeit briefly, back to its caller. 
A million years ago, $Larry pointed out that when we were able to use 'is just a' classifications on P6 concepts, it indicated that we were making good forward progress. In that vein, let me propose that: * Exception handling, and the whole try/catch thing, IS JUST An awkward implementation of (late! binding) run-time return-type MMD. Specifically, if I promise you: sub foo() will return Dog; and later on I actually wind up giving you: sub foo() will return Exception::Math::DivisionByZero; the try/catch paradigm essentially says: I wanted to call C<sub Dog foo()> but there may be times when I discover, after making the call, that I really needed to call an anonymous C<sub { $inner ::= sub Exception foo(); $e = $inner(); given $e {...} }>. We're conditionally editing the return stack. This fits right in with the earlier thread about conditionally removing code from the inside of loops, IMO. Once you open this can, you might as well eat more than one worm. Another conceptually similar notion is that of AUTOLOAD. As a perl coder, I don't EVER want to write say "Hello, world" or die "Write to stdout failed."; -- it's correct. It's safe coding. And it's stupid for a whole bunch of reasons, mostly involving the word yucky. But I acknowledge that through the miracle of broken pipes, it can legitimately happen that stdout will fail while stderr is a viable diagnostic mechanism. Instead, I want PERL to fill that in for me: I believe that the default error mechanism should debug my program, the shell script that calls my program, and the actions (including blood alcohol content) of the user of my program over the last 24 hours: let's leave C<use autodebug;> turned on by default. The 'Exceptuation' proposal seems to me to include two things: 1. A 'RESUME' feature. 2. An implicit acknowledgement that the default implementations are parallel: {... CATCH -> $e {throw $e;} # Going up? RESUME -> $r {resume $r;} # Going down? } The rest is optimization. 
If caller() includes an array of continuations, then C<throw> looks like a loop up the array: sub throw(Exception $e) { reverse caller() ==> { .continuation($! = $e) if does(CATCH); } } But the default behavior (modulo threads) is going to unlink all the stack frame pages when the continuation is invoked. So there has to be yet another copy of the links to the stack, because the exception handling will want to call functions and build who-knows-what elaborate skycastles. And it must be reentrant because of the possibility of exceptions during the exception handling. Which means that the call stack needs to be stored in the Exception. [The list of things in the exception gets pretty long. I'm sure it's all a ref to the last page of the call stack, so it doesn't gobble up much space, but there's a lot of and you'll wants coming up.] So C<resume> is a multi, no? (Or it could just be a method: $!.resume, but that doesn't read as well in a block that *really* should be as readable as possible.) Also, any layer of exception handling may do some
Re: Look-ahead arguments in for loops
Dave Whipp wrote: Imagine you're writing an implementation of the unix uniq function: my $prev; for grep {defined} @in -> $x { print $x unless defined $prev && $x eq $prev; $prev = $x; } This feels clumsy. $prev seems to get in the way of what I'm trying to say. Could we imbue optional binding with the semantics of not being consumed? for grep {defined} @in -> $item, ?$next { print $item unless defined $next && $item eq $next; } The same behavior, but without the variable outside the loop scope. It would also be good not to overload the meaning of $?next to also tell us if we're at the end of the loop. In addition to FIRST{} and LAST{} blocks, could we have some implicit lexicals: for @in -> $item, ?$next { print $item if $?LAST || $item ne $next } I like the idea. There's no reason the view window and the consumption have to be the same. =Austin
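Dave's lookahead formulation can be modelled directly; a Python sketch of the $next version, which keeps the last element of each run and needs no loop-external state (`uniq` is a hypothetical name, and the sketch assumes no None elements):

```python
def uniq(items):
    # Emit an element only when its successor differs or doesn't exist,
    # so no $prev-style variable leaks out of the loop scope.
    out = []
    for i, item in enumerate(items):
        has_next = i + 1 < len(items)
        if not has_next or item != items[i + 1]:
            out.append(item)
    return out

assert uniq(["a", "a", "b", "c", "c"]) == ["a", "b", "c"]
```

The interesting part of the proposal is that `?$next` gives this window *without* the integer indexing used here: the binding peeks one element ahead while the loop still consumes one element per iteration.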
Re: Look-ahead arguments in for loops
Luke Palmer wrote: On 9/29/05, Dave Whipp [EMAIL PROTECTED] wrote: for grep {defined} @in -> $item, ?$next { print $item unless defined $next && $item eq $next; } This is an interesting idea. Perhaps for (and map) shift the minimum arity of the block from the given list and bind the maximum arity. Of course, the minimum arity has to be >= 1 lest an infinite loop occur. Or not. We've already seen idioms like for (;;) ... If you specify your minimum arity as 0, then you're obviously planning to deal with it. This presumes that iterators can handle behind-the-scenes updating, of course. But then perhaps you have another way to avoid integer indices: for @list -> $this, [EMAIL PROTECTED] { ... } As long as you don't look backwards. Looking backwards makes problems for GC in lazy contexts, so this might just be perfect. Plus it's hard to talk about backwards. If you say for @l -> ?$prev, $curr, ?$next {...} what happens when you have two items in the list? I think we're best off using signature rules: optional stuff comes last. =Austin
Re: Look-ahead arguments in for loops
Matt Fowles wrote: Austin~ On 9/29/05, Austin Hastings [EMAIL PROTECTED] wrote: Plus it's hard to talk about backwards. If you say for @l -> ?$prev, $curr, ?$next {...} what happens when you have two items in the list? I think we're best off using signature rules: optional stuff comes last. I disagree, I think that is an easy call: for (1, 2) -> ?$prev, $cur, ?$next { say "$prev - $cur" if $prev; say $cur; say "$cur - $next" if $next; say "next"; } should print 1 1 - 2 next 1 - 2 2 next Did you mean: next 1 - 2 # two spaces there? I assume so because it's the only execution path that seems to work. But that would be assuming there was always at least one non-optional binding. Given that Luke's against all-optional signatures, too, I'll withdraw that part of the suggestion. And with at least one required binding, then there's no reason that we can't have the window extend on both sides of the current value. Luke? =Austin Matt -- Computer Science is merely the post-Turing Decline of Formal Systems Theory. -Stan Kelly-Bootle, The Devil's DP Dictionary
Re: Sort of do it once feature request...
Michele Dondi wrote: On Wed, 21 Sep 2005, Joshua Gatcomb wrote: Cheers, Joshua Gatcomb a.k.a. Limbic~Region Oops... I hadn't noticed that you ARE L~R... In the tradition of i18n, etc., I had assumed that L~R was shorthand for Luke Palmer. You may want to keep up the old tradition of defining your acronyms once. :) =Austin
Re: multisub.arity?
On a related note: Suppose I have a function with a non-obvious arity. I might, in a desperate attempt to find billable hours, describe the arity as a trait: sub sandwich($bread, $meat, $cheese, $condiment1, $qty1, ...) does arity ({ 3 + 2 * any(1..Inf); }); That's *cough* easy enough for trivial cases like this, but is there a way to use C<assuming> for the more difficult cases? Specifically, and obviously, printf-and-friends: my &example := printf.assuming(format => "%s %s %n\n"); say &example.arity(); The obvious output is any(1..Inf), since who's going to code the arity function? But: 1. Is it possible to code an arity trait as a run-time block? (I assume yes) 2. Could this, or any, trait take advantage of assumed parameters? If so, how? (Of course, after the arity function is written, it seems obvious to die unless the current function's arity is le the number of arguments...) =Austin
Re: Idea for making @, %, $ optional
--- James Mastros [EMAIL PROTECTED] wrote: Millsa Erlas wrote: I have thought of an interesting idea that may allow Perl 6 to make the $, @, and % optional on many uses of variables. This involves simply extending the function namespace to include all kinds of structures, and thus the function namespace does not require symbols, they are optional. [...] In that case, you should be looking into how to make it a pragmata, rather than pushing the idea on perl6-language. It shouldn't be too hard -- a matter of using the equivalent of perl5's UNIVERSAL::AUTOLOAD, and the OUTER:: scope. -=- James Mastros, theorbtwo The fact that this keeps recurring on P6L is pretty indicative, I think, that a lot of people don't value the ability to have an array, hash, and scalar of the same name quite so much as they regret the need to respecify the types of all variables every time they're used. If it's really that simple to do, then I'm willing to bet it'll be used early and often. Making it a part of 'core' and in fact planning to migrate perl in the direction of less useless line noise DOES seem to me to be a valid task for -language. =Austin
Re: (1,(2,3),4)[2]
--- Rod Adams [EMAIL PROTECTED] wrote: TSa (Thomas Sandlaß) wrote: You mean @a = [[1,2,3]]? Which is quite what you need for multi dimensional arrays anyway @m = [[1,2],[3,4]] and here you use of course @m[0][1] to pull out the 2. I'm not sure if this automatically makes the array multi-dimensional to the type system though. That is if @m[0,1] returns 2 as well or if it returns the list (1,2) or whatever. Is @m[0..3] valid and what does it return? And what's the type of that return value(s)? I can imagine many things ranging from a two element array of refs to two element arrays up to a flattened list of 4 values. @m[0,1] is an array slice of two elements, in this case two arrayrefs [1,2], and [3,4]. @m[0;1] is a multidim deref, referencing the 4. Referencing the 2, I hope? -- Rod Adams =Austin
Re: identity tests and comparing two references
Larry Wall wrote: On Wed, Apr 06, 2005 at 08:24:23PM +0200, Juerd wrote: : Larry Wall skribis 2005-04-06 11:10 (-0700): : $$ref follow the ref list to the actual object. : : my $foo; : my $bar = \$foo; : my $quux = \$bar; : my $xyzzy = \$quux; : : How then, with only $xyzzy, do you get $bar? $$xyzzy would follow until : $foo. I don't like this at all. You can't get at $bar anyway. You can only get at its thingy. Otherwise you're talking symbolic refs. : $ref.foo() is one of those contexts that forces a deref. The only way : to call methods on the Ref itself is through var($ref), or whatever : it's called today. : : This is weird. Chains of scalar refs are weird. At least, they're weird to anyone but a C programmer or a Perl 5 programmer. We're trying to re-Huffmanize the weirdness of Perl 6. How do you deref 'n' levels in such a chain? My current project is an n-way merge of some very large {i.e., O(10**8) records} XML datasets. One way I'm getting performance is by using scalar reference chains to avoid copies. I am also recasting the type of some of the objects used, so I need to be able to reach in 'one' level, as well as 'all' the levels. Currently, I have to know the length of the chain, but it's a constant at any layer so that's not a problem. So if $$ref gives the 'all the way down' behavior, how do I get just one layer down dereferencing? =Austin
Re: nothing
Juerd wrote: Rod Adams skribis 2005-03-21 14:25 (-0600): if $expr { nothing; } is harder to get confused over, IMO Except writing something when you mean nothing is kind of weird. It makes sense in rules because it doesn't usually make sense to match nothingness, but for blocks, I'd hate to see { } be invalid or meaning anything other than the proposed nothing. Juerd I'd like to see nothing as just an alias for {}. if $expr { do nothing; } Possibly the most clear piece of P6 code ever. =Austin
Re: Perl 6 How Do I? (Was: Perl 6 Summary for 2005-01-11 through 2005-01-18)
Luke Palmer wrote: Austin Hastings writes: How do I concisely code a loop that reads in lines of a file, then calls mysub() on each letter in each line? Or each xml tag on the line? And I guess the answer is the same as in Perl 5. I don't understand what the problem is with Perl 5's approach: for { mysub($_) for .split: /null/; } Or: use Rule::XML; for { mysub($_) for m:g/(Rule::XML::tag)/; } My problem with this is one of abstraction. P6 is, in a lot of places, a giant step forward in terms of abstracting away 'well-understood' operations. Grammar/Regex is one example, properties another. I'd like to see a similar, simple notation for expressing composite operations. Perhaps this is a macro thing, but macros are still a little fuzzy to me (and I have these horrid memories from Lisp... :( In general, though, this ties back to my long-ago wish for separable verb syntax support: I'd like to see a relatively concise, expressive notation for doing something like a double loop or arbitrary traversal. The outer product was a delightful example of this kind of thinking -- it's obviously code, but it's totally data-driven. Iterators may provide some of this, of course. But they provide it in scenarios where the data structure has been comprehended beforehand. Dynamic comprehension, if such a phrase can exist, is the obvious next step in DWIMmery. Given a Tree, or a Trie, or an AvlTree, or an RBTree, it's easy to figure out what $CLASS->getIterator() is going to do. But what's the right way to traverse a C source file? In preprocessor mode it's one thing, in lexer mode it's another. Is it possible to talk about an iteration template? Say I've got a list of numbers, but I want to iterate only the primes. Something like: class PrimeNumberIterator is Iterator { method _is_prime() {...} method next() { my $cand; $cand = SUPER::next() while !_is_prime($cand); return $cand; } } How do I impose that iteration scheme? for @list -> $x is PrimeNumberIterator { say $x; } Does that work? 
Anyway, the point is that we as humans can say things like "look at all the prime numbers in the list ...", so I would like to see a similar level of expressivity in P6. I think P6l was heading that way some time ago, but we got sidetracked by an Apocalypse. :) Maybe Larry's concept of 'type' has a place here? (Remember that 'type' was described as a restriction of 'class', such that 'odd number' is a type that restricts 'number'.) Would for @list -> $x is Prime { say $x; } work? What about for [EMAIL PROTECTED] ~~ Prime ] -> $x { say $x; } (That last one is really hard to read -- it would wind up demanding a layer of sugar...) But if type is the only way to do filtering, then we'll wind up with a metatype mechanism for defining types on the fly, so ... =Austin -- No virus found in this outgoing message. Checked by AVG Anti-Virus. Version: 7.0.300 / Virus Database: 265.6.12 - Release Date: 1/14/2005
Re: Making control variables local in a loop statement
David Storrs wrote: On Thu, Jan 13, 2005 at 07:35:19PM -0500, Joe Gottman wrote: In Perl5, given code like for (my $n = 0; $n < 10; ++$n) {.} the control variable $n will be local to the for loop. In the equivalent Perl6 code loop my $n = 0; $n < 10; ++$n {.} $n will not be local to the loop but will instead persist until the end of the enclosing block. Actually, I consider this a good thing. There are lots of times when I would LIKE my loop variable to persist and, in order to get that, I need to do the following: my $n; for ($n=0; $n<10; ++$n) {...} ...do stuff with $n... It's a minor ugliness, but it itches at me. Under the new Perl6 rules, I can easily have it either way. {for (my $n=0; $n<10; ++$n) {...}} # Local to loop for (my $n=0; $n<10; ++$n) {...} # Persistent --Dks But there's no clean way to make some of them temporary and some persistent. This seems like a legitimate place for saying what you intend, viz: for (my $n is longlasting = 0, $m = 1; ...) {...} Albeit that's a lame example of how to do it. =Austin
Re: Making control variables local in a loop statement
Matthew Walton wrote: Austin Hastings wrote: But there's no clean way to make some of them temporary and some persistent. This seems like a legitimate place for saying what you intend, viz: for (my $n is longlasting = 0, $m = 1; ...) {...} Albeit that's a lame example of how to do it. What's not clean about { loop my $n = 0; $n < 10; $n++ { ... } } ? Works fine for me, shows the scope boundaries very clearly indeed, just the kind of thing a lot of languages are missing, IMO. Of course, this example's really bad because it's much better written for 0..9 { ... } In which case I assume that it only clobbers the topic inside the block, not outside it, as it's somewhat like for 0..9 -> $_ { ... } To write it explicitly. Or am I barking up the wrong tree completely? Not sure. In my example, there were two variables, 'n' and 'm', one of which was supposed to outlast the scope, the other not. =Austin
Re: Possible syntax for code as comment
Luke Palmer wrote: Well, it'll still get that bad rap because it's as syntactically flexible as ever (moreso even), so people have all the freedom they want to write code ugly as sin. Anyway, if you want to see more Perl 6 syntax, why don't you post some "how do I"s to the list, and I'll reply with code. How do I concisely code a loop that reads in lines of a file, then calls mysub() on each letter in each line? Or each xml tag on the line? (This came up as I was praising the name of Palmer for the outer operation...) =Austin
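The "how do I" itself -- read lines from a file, then call mysub() on each letter of each line -- is just a double loop in any language. A minimal Python sketch, using an in-memory file stand-in and a hypothetical `letters_of` helper:

```python
import io

def letters_of(fh):
    """Yield each letter of each line: the double loop from the question."""
    for line in fh:
        for ch in line.rstrip("\n"):
            yield ch

# io.StringIO stands in for a real file handle here.
fh = io.StringIO("ab\ncd\n")
print(list(letters_of(fh)))  # → ['a', 'b', 'c', 'd']
```

Splitting each line on a tag pattern instead of iterating letters gives the "each xml tag" variant of the same loop.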
Re: Perl 6 Summary for 2004-12-20 through 2005-01-03
Matt Fowles wrote: Perl 6 Summary for 2004-12-20 through 2005-01-03 s/conses/consensus/g ?
Re: S05 question
Larry Wall wrote: Another problem we've run into is naming if there are multiple assertions of the same name. If the capture name is just the alpha part of the assertion, then we could allow an optional number, and still recognize it as a ws: <ws1> <ws2> <ws3> Except I can well imagine people wanting numbered rules. Drat. Could force people to say <ws_1> if they want that, I suppose. Or we could use some standard delim for that: <ws-1> <ws-2> <ws-3> which is vaguely reminiscent of our version syntax. Indeed, if we had quantifications, you might well want to have wildcards <ws-*> and let the name be filled in rather than autogenerating a list. But maybe we just stick with lists in that case. For captures of non-alpha assertions, we could say that ? is the same as true (just as with regular operators), and so true-3 +alpha-[aeiou] would capture to $true-3. (And one could always do an explicit binding for a different name.) Actually, I think people would find $match-3 more meaningful than C<true-3>. PHP's use of $array[] as push might work for this: true[] +alpha-[aeiou] or @true +alpha-[aeiou] or true=1.. +alpha-[aeiou] or true@ +alpha-[aeiou] I like the idea of being able to continue versus chunk patterns. How do you say "This is a continuation of the other thing" versus "This is a separate thing"? =Austin
Re: Arglist I/O [Was: Angle quotes and pointy brackets]
Larry Wall wrote: But here's the kicker. The null filename can again represent the standard filter input, so we end up with Perl 5's while (<>) {...} turning into for =<> {...} Two more issues: idiom, and topification = Topification: There are cases in P5 when I *don't* want while (<>) {...} but prefer while ($input = <>) {...} so that I can have something else be the topic. Every example to date has used C<for>: for .lines {...} but that sets the topic. I'm a little fuzzy on this, but doesn't C<for> play topic games even in this? for .lines -> $input { ... $input ... } That is, even though $_ remains unaffected, doesn't this affect smartmatch etc.? = Idiom: The other concern is idiom. Using C<for> suggests start at the beginning, continue to the end. OTOH, using C<while> is a little weaker -- keep doing this until it's time to stop. Obviously they'll usually be used in the same way: for =<> {...} vs. while (<>) {...} This seems a subtle concern, and maybe it's just my latent fear of change making me uncomfortable, but I actually *think* in english -- not that it does much good -- and this isn't how I think. Can we ditch C<for> in the examples in favor of C<while>, for a while? :) =Austin
Re: pull put (Was: Angle quotes and pointy brackets)
Smylers wrote: Larry Wall writes: But then are we willing to rename shift/unshift to pull/put? Yes. C<unshift> is a terrible name; when teaching Perl I feel embarrassed on introducing it. No! But I'd be willing to rename them to get/put. 'Pull' is the opposite of 'push', but 'pop' already works. Given the nature of many of the other changes in Perl 6, completely changing regexps for example, renaming a couple of functions seems minor. Agreed. Smylers =Austin
Re: Arglist I/O [Was: Angle quotes and pointy brackets]
David Wheeler wrote: On Dec 6, 2004, at 7:38 AM, Austin Hastings wrote: for =<> {...} I dub thee... the fish operator! :-) Back before there was a WWW, I used an editor called tgif. It was written in France, and part of the idiom was to have two GUI buttons showing respectively the head ( * ) and tail ( ( ) parts of a fish. These were graphical images, please forgive my poor ascii drawing. It took me a while to figure it out, but it was a cute bit of bilingualism. (Or perhaps it was a bit of bilingual cute-ism...) =Austin
Re: pull put (Was: Angle quotes and pointy brackets)
Larry Wall wrote: On Mon, Dec 06, 2004 at 11:52:22AM -0700, Dan Brian wrote: : If I went with get, the opposite would be unget for both historical : and huffmaniacal reasons. Why? (I get the huffman, not the history.) Is it just a nod to unshift? Given the existence of a unary = for abbreviated use, I'd probably stick with shift/unshift. (Presumably changing the semantics of shift from p5 to be list/scalar/n-ary context sensitive, so you'd have to write scalar shift to get Perl 5's shift semantics in list context.) What about add/remove? sub unshift(@a, [EMAIL PROTECTED]) { @a.add(@items); } We could add :head and :tail, with :head the default, and let push|pop be equivalent to (add|remove).assuming(tail => 1) As a side note, other than historical consistency, is there a good reason for push/pop to use the end of the array? I'd argue that for a stack, you only want to know one address: @stack[0] -- the 'top' of the stack -- and if you ever iterate a stack you're inclined to see the items in distance-from-top order, making 0..Inf the right array sequence. If we're going to reorg the function space, let's huffmanize the stack stuff (push/pop/0) and let the other stuff go hang. =Austin
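The "top is @stack[0]" argument -- push and pop work at the front, and iteration yields items in distance-from-top order -- is easy to sketch with Python's `collections.deque`. The `Stack` class here is an invented illustration, not anything from the thread:

```python
from collections import deque

class Stack:
    """Stack whose top is always index 0, as the post argues:
    iterating yields items in distance-from-top order."""
    def __init__(self):
        self._d = deque()
    def push(self, x):
        self._d.appendleft(x)   # new item becomes index 0
    def pop(self):
        return self._d.popleft()
    def __iter__(self):
        return iter(self._d)

s = Stack()
for n in (1, 2, 3):
    s.push(n)
print(list(s))   # → [3, 2, 1]  (top first)
print(s.pop())   # → 3
```

`deque` makes both front operations O(1), so nothing is lost by putting the top at index 0 rather than at the end.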
Re: iterators and functions (and lists)
Luke Palmer wrote: Larry Wall writes: Any foo() can return a list. That list can be a Lazy list. So the ordinary return can say: return 0...; to return an infinite list, or even return 0..., 0...; Is it just me, or did you just return ω*2? http://en.wikipedia.org/wiki/Ordinal#Arithmetic_of_ordinals That would be totally cool. But um, how do we get at the structure of that list from within Perl? It looks like no matter what you do it would be impossible to see the second 0. Luke my ($foo1, $foo2) = foo(); ? =Austin
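Luke's point -- that the second infinite list is unreachable by plain indexing -- can be demonstrated with Python generators, where `return 0..., 0...` becomes two chained infinite counters. `foo` here is an invented stand-in for the hypothetical Perl 6 sub:

```python
from itertools import count, chain, islice

def foo():
    """Lazy equivalent of 'return 0..., 0...': two infinite streams
    chained, so the second 0 sits at ordinal position omega."""
    return chain(count(0), count(0))

xs = foo()
print(list(islice(xs, 5)))  # → [0, 1, 2, 3, 4]
# No finite index ever reaches the second count(0) -- Luke's point exactly.
```

To "see the second 0" you would need the structure (the two generators) rather than the flattened stream, which is what `my ($foo1, $foo2) = foo();` is asking for.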
Re: Topification [Was: Arglist I/O [Was: Angle quotes and pointy brackets]]
Luke Palmer wrote: class MyStream { has $.stream; method :send_one ($item) { $.stream.send($item); } method send ([EMAIL PROTECTED]) { .:send_one(BEGIN); for @data { .:send_one($_); } .:send_one(END); } } I'll guess that you're pointing at .:send_one($_); Which supposedly uses topic to resolve .:send_one into $this.send_one. If that works, then I'm happy -- I like being able to control topic and $_ differently. But if C<for> changes topic, then what? OUTER::.:send_one($_); Yuck. =Austin
Re: qq:i
John Macdonald wrote: The problem with "interpolate if you can or leave it alone for later" is that when later comes around you're in a quandary. Is the string $var that is in the final result there because it was $var in the original and couldn't be interpolated, or was it a $foo that had its value of $var injected into its place? The "maybe do it now, finish up later what wasn't done the first round" approach runs the risk of double interpolation. (Or single interpolation, or non-interpolation, whichever it happened to roll on the dice.) If you're Randal Schwartz discovering a s/Old Macdonald/$had a $farm/eieio accidental feature, that can be useful; but for mere mortals, it is just a bug waiting to surface. Maybe this should be the default behavior. my $nameRight = "name"; my $nameWrong = "other name"; print "$nameRight != $name_wrong\n"; name != $name_wrong I wonder where my mistake is? This looks like a decent win for PEBKAC errors. Maybe this should go on by default if the "I need help" flag is turned on. OTOH, I don't have much use for the original proposal because I also want to be able to defer interpolation of variables that DO exist, like $!. I like the idea of defining a secondary interpolation character, or perhaps a secondary escape character. my $name = "defined now, but it will change later"; my $output_text = <<EOF:esc('`'); $template_header `$name $template_footer EOF =Austin
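The double-interpolation quandary John describes can be reproduced directly with Python's `string.Template`, whose `safe_substitute` is exactly an "interpolate if you can, leave it alone for later" pass:

```python
from string import Template

t = Template("Hello $name, your job id is $jobid")

# First pass: $name is known now, $jobid only later.
partial = t.safe_substitute(name="Austin")
print(partial)  # → Hello Austin, your job id is $jobid

# Second pass fills in the rest -- fine in the happy case:
print(Template(partial).safe_substitute(jobid="42"))
# → Hello Austin, your job id is 42

# The bug: a substituted VALUE that happens to contain '$jobid'
# is indistinguishable from a deferred placeholder.
booby = t.safe_substitute(name="$jobid")
print(Template(booby).safe_substitute(jobid="42"))
# → Hello 42, your job id is 42   -- double interpolation, as warned
```

This is why a distinct secondary interpolation or escape character, as suggested at the end of the post, avoids the ambiguity: deferred placeholders get their own marker instead of reusing `$`.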
Re: Angle quotes and pointy brackets
Austin Hastings wrote: Larry Wall wrote: And now, Piers is cackling madly at Matt: welcome to perl6-hightraffic! :-) =Austin
Re: assorted questions
Rich Morin wrote: On a vaguely-related topic, I am reminded of another friend's desire to be able to redefine floating point values as quartets of values. Each operation would then be done using all possible rounding options (in the IEEE standard) and the results checked for significant variations. If anyone knows of a cute way to do this in Perl 6, I'd be happy to hear about it... Implement an opaque object whose value, when fetched, is a junction. =Austin
Re: Perl 6 Summary for 2004-10-01 through 2004-10-17
Michele Dondi wrote: On Sun, 17 Oct 2004, Matt Fowles wrote: Google groups has nothing for Perl6.language between October 2 and 14. Is this really the case? (I had not signed up until shortly before Yes: no traffic at all for quite a while... Does this mean that we're done? :)
Re: So long, and thanks for all the fish!
The Perl 6 Summarizer wrote: I tried, I really did, but I'm afraid that I must raise the white flag to my teacher training for the next while and give up writing the Perl 6 Summary until at least after Christmas. Bad food, lousy dental care, and now the children's education is entrusted to a chap who can't figure out how to smack one of the little buggers with a ruler. Five hundred years of empire come to this... Thanks to the language folks, Larry, Allison, Damian, and all the many and various denizens of perl6-language. Following the list has been an education. Every time I find myself thinking a proposal is simply poisonous, along comes Larry in fugu-chef mode to extract the good stuff that sets your mind a tingling and chuck away the stuff that would leave you paralyzed and dying on the floor. Yeah, we blowfish appreciate him, too. Thanks to everyone who ever sent me feedback; I've mentioned Warnock's dilemma many times in these summaries, it's always good to be gently lifted from its horns by a word or two of praise or damnation. Speaking of fishing ... I'm not about to stop writing. I'm slowly working through chromatic's 'Write Your Life' project. It's far easier than summarizing; all the material I need is already in my head, and I can bash out words even when I don't have net access. Just because Speedo sells an 'XXL' bathing suit doesn't make it a good idea to buy one ... (http://www.budlight.com - lifestyle - radio ads - Mr. Tiny Thong Bikini Wearer.mp3) A regular summary helps the interested but busy people get a grasp of how the Perl 6 project is getting on, and that can only be a good thing. Sorry things have rather fizzled out; I just didn't realise until I started quite how demanding this course would be. And I don't just mean because I've got to wear a suit. Oh, man! The speedo comment was supposed to be a joke, not foreshadowing... You, sir, have done a fine job of summarizing two lists. 
Frankly, I try to read just one in real time and it stumps me: I'm impressed at the amount of work (and free time) you've given us. I appreciate it, and I thank you for it. That'll do, Piers. That'll do. :) =Austin
Re: S5 updated: 3 but remainder()?
Jeff Clites wrote: On Sep 23, 2004, at 5:27 PM, Edward Peschko wrote: On Thu, Sep 23, 2004 at 08:15:08AM -0700, Jeff Clites wrote: just like the transformation of a string into a number, and from a number to a string. Two algorithmically different things as well, but they'd damn-well better be exact inverses of the other. But they're not: " 3 foo" -- 3 -- "3" I'd say that that's a caveat of implementation, sort of a side effect of handling an error condition. Nope, I'd call it fundamental semantics--it allows common idioms such as "0 but true" in Perl5, for example. It's just an explicit part of the rule for how Perl (and C's strtol/atoi functions) assign numerical values to strings. Actually, that raises a good point: Should "3 foo" convert to number 3, or should it convert to C<3 but remainder(" foo")>? I can see wanting something like this for parsing, but I'm not sure if this is the right way to get it. But you might like this example better, which I assume will work in Perl6: "３" -- 3 -- "3" (In case your email viewer doesn't render that, the first string contains the "fullwidth digit three", a distinct, wider version of a 3, used in some Asian languages.) Is that true? That is, does fwd3 actually map to a 3, or is it a funny character? (Doesn't Russian have such a widget? I know that IPA uses something that looks like a 3 but calls it a backwards E.) Perhaps it returns C<3 but encoding("Unicode-FF00")>? Likewise "\U0e53" -- 3 -- "3", but perhaps it should be annotated to retranslate correctly -- Thai digits are usually shown only when there is a separate price for foreigners. We wouldn't want to reveal any secrets... :) My point is that if inputting strings into grammars is low level enough to be an op, why isn't generating strings *from* grammars? 
Maybe, because it's a less common thing to want to do? Well, there are two responses to the "that's not a common thing to want to do": 1) it's not a common thing to want to do because it's not a useful thing to do. 2) it's not a common thing to want to do because it's too damn difficult to do. I'd say that #2 is what holds. *Everybody* has difficulties with regular expressions - about a quarter of my job is simply looking at other people's regex used in data transformations and deciding what small bug is causing them to fail given a certain input. Yeah, but when a regex isn't acting how I expected it to, I know that because I've already got in-hand an example of a string it matches which I thought it wouldn't, or one it fails to match which I thought it should. What I want to know is *why*--what part of the regex do I need to change. Generating strings which would have matched wouldn't seem to help much. And you might be underestimating how many strings can be generated from even a simple regex, and how uninformative they could be. For example, the Perl5 regex /[a-z]{10}/ will match 141167095653376 different strings, and it would likely be a very long time before I'd find out if this would match any strings starting with "x". I'd probably be left with the impression that it would only match strings starting with "a". That's what lazy iterators/junctions are for. If you ask perl to generate a regex, it gives you a lazy iterator. Possibly one that is sitting beneath a junction. On the one hand, /[a-z]{10}/ is equal to any("aa", ...). On the other hand, it's equal to all("aa", ...). I'm not sure which flavor is better, or if it's the act of ~~ or // -ing that converts all() to any(). One place where this seems to be actually (instead of theoretically) helpful is using grammars to generate. 
Perhaps the individual nodes in a grammar could be overloaded with "but generate /<variablename><digit>+/" to eliminate noise, while still generating every grammatical permutation? Running a regular expression in reverse has IMO the best potential for making regexes transparent - you graphically see how they work and what they match. How graphically? Run the generator, print the result. "Graphically" as in C<isgraph()>, not "Graphically" as in "Internet pr0n". :) Why shouldn't that be reflected in the language itself? Maybe because if it's likely to be used mostly for debugging, and can be implemented in a library, then it doesn't need to be implemented as an operator, and contribute to the general learning curve of the language's syntax. On the other hand, we have C<x> (or is it C<xx> now?), a rudimentary form of the operator you're discussing that only works for
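The "lazy iterator over everything a pattern matches" idea is straightforward for simple patterns. A Python sketch for the character-class-with-repetition case (the `/[a-z]{10}/` example); `gen_class_repeat` is a name invented here, and real regexes would of course need a full pattern walker:

```python
from itertools import product, islice

def gen_class_repeat(chars, n):
    """Lazily generate every string matching something like /[chars]{n}/.
    A lazy iterator, as the post suggests, so the full (huge) match set
    is never materialized."""
    for tup in product(chars, repeat=n):
        yield "".join(tup)

gen = gen_class_repeat("abc", 2)
print(list(islice(gen, 5)))  # → ['aa', 'ab', 'ac', 'ba', 'bb']
```

Because the iterator is lazy, even a pattern with 26^10 matches costs nothing until you pull from it -- which is exactly the response to "you'd be left thinking it only matches strings starting with 'a'": you can also seed or filter the generator rather than enumerate in order.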
Re: S5 updated: 3 but remainder()?
Juerd wrote: Austin Hastings skribis 2004-09-24 12:05 (-0400): Actually, that raises a good point: Should "3 foo" convert to number 3, or should it convert to C<3 but remainder(" foo")>? Would the remainder then be dropped when the numeric value changes? I assume that replacing the value replaces the value's properties. What happens when you do: $x = 1 but false; $x++; print $x, $x ?? " (true)" :: " (but false)", "\n"; Does it emit true or false? In theory, if you're using the remainder for parsing you'd like it to stay: $x = "3 foo"; $x++; print $x; # "4 foo" but I think it would be survivable if that didn't happen. =Austin
Re: Still about subroutines...
Larry Wall wrote: On Fri, Sep 17, 2004 at 07:35:46PM +0100, Richard Proctor wrote: : Therefore should: : : $?os Be which operating system it is being compiled on : $*os Be which operating system it is being executed on : : Some of the other special variables may have a similar dual personality. Presumably. Which presents an interesting problem, because we currently have things defined like $*PID, not $*pid. Either we have to lowercase the $* variables, or uppercase the $? variables, or decide that it's okay for them to be different. It's probably important to keep $*PID uppercase because of the way they can leak into any other namespace as $PID. The same does not hold true for $?line. On the other hand, people are used to __LINE__ already, so maybe $?LINE isn't so bad, and lights up better as a weird unit with a rectangular shape, something you might see as a funny symbol in a macro assembler. Which is more or less what it is. I originally made them lowercase because they were $=line variables and I didn't want them to conflict with POD names that are typically uppercase, and use of an C<=> secondary sigil for POD is a no-brainer. But that no longer applies when they have their own ssigil, or sigil2, or 2igil. I guess that would be pronounced twidgle. For that matter, what's wrong with $__ as a sigil, as in $__LINE__, et al. It combines the "you can use it as a variable" with the "leading underscores are magic" memes, and doesn't impose any weird learning curve. =Austin Larry
But is it intuitive?
I was thinking about removing files this morning, and realized that I wish rm supported inclusion/exclusion. In particular, I wanted to remove * but not Makefile (since my Makefile uses lwp-download to re-fetch the source code, etc.) It occurred to me to wonder: can P6's C<but> do the same thing? That is, can I say: $my_rex = qr/fo*/ but not 'foo'; while (<>) { unlink if /$my_rex/; } and have it DWIM? What about globbing? In general, what needs to be done to support this 'but, used as part of a boolean'? In this case, but really means 'and': $my_rex = { my $re = qr/fo*/; $re.eval := sub { $_ ne 'foo' && call; }; return $re; }; This has interesting implications for specification of generators, too. Comment? =Austin
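The "match the pattern, but not this one exception" semantics is easy to sketch in Python; `match_but_not` is a name invented for this illustration, and `re.fullmatch` plays the role of the glob/regex test:

```python
import re

def match_but_not(pattern, exclude, s):
    """C<but>-as-boolean sketch: s matches the pattern AND is not the
    excluded string -- 'but' really means 'and' here."""
    return bool(re.fullmatch(pattern, s)) and s != exclude

# rm * but not Makefile, in miniature:
files = ["f", "fo", "foo", "fooo", "bar"]
keep = [f for f in files if not match_but_not("fo*", "foo", f)]
print(keep)  # → ['foo', 'bar']
```

Everything matching `fo*` is "removed" except the excluded `foo`; `bar` survives because it never matched at all.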
Re: But is it intuitive?
Luke Palmer wrote: Judging from this, maybe we ought to have :not. Anyway, it's still possible: $my_rex = rx/fo*/ & none(rx/^foo$/); For sure. On a side note, there should be a negating match operator for use inside: rx/\d+/ & none(rx/1984/) could get awfully long if you had to handle several exceptions. I like rx:not, though -- a little easier to read IMO. =Austin
Re: Synopsis 2 draft 1 -- each and every
Larry Wall wrote: On Fri, Aug 20, 2004 at 12:52:56PM -0700, Larry Wall wrote: : Unfortunately, the only obvious one, 's', is taken. I remind myself that 'S' is equally obvious, and not taken. Like _, it suffers from spacing issues, but could be the ASCII backup for the § character. (As Y is likely to be the ASCII backup for ¥. Maybe Y is the nylon zipper operator.) Hmm. Gotta decide if S$foo.bar() is too ugly to live, though... It is. I still kinda like underscore. How about scalar? The fact that one person, one time, came up with a need to invoke it doesn't mean we have to race it up the huffman tree. P6 is winning the DWIM race most of the time contextually. Maybe [#] as a macro, if you like. =Austin
Re: Synopsis 2 draft 1 -- each and every
chromatic wrote: On Fri, 2004-08-20 at 14:26, Austin Hastings wrote: Dan Hursh wrote:

        general      impose scalar   impose list
   D    $foo.eat     $foo.bite       $foo.gobble
   N    $foo.look    $foo.peek       $foo.peruse

hmm, I don't like eat in this case

   D    $foo.take    $foo.grab       $foo.horde

s/horde/hoard/ If I'd written that, I'd claim that as deliberate. Though it does leave a problem with grab as a singular noun, Why? Horde is a noun, hoard is a verb like all the other entries in the table. (Some have dual use, like hoard does, as nouns, but...) =Austin
Is there a tuple? -- WAS: RE: :)
--- Adam D. Lopresto [EMAIL PROTECTED] wrote: The modifier to turn off warnings on a line would be ;), winking at us to let us know it's up to something. I wondered about paren-after-semi, and thought about C<for(;;)>. Which led me to C<@array[a;b;c]>, then to (a;b;c;), which led me to this: Given that @array[1;2;3] is a multi-dimensional reference, is there a tuple data type/element/constructor? Can I say, forex, my $address = tuple(1;3;5); and then my $data = @array[$address]; and GWIW? Also, can I C<assume> certain dimensions? my @ary is dim(3;3;3) is default('.'); my $vct ::= @ary.assuming( .[0;;0] ); $vct[0..2] = 0..2; @ary.print; # DWIM! [ . 0 . ] [ . 1 . ] [ . 2 . ] Ignoring the DWIM, how much of that works? =Austin
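The "tuple as a reusable multi-dimensional address" idea maps directly onto Python, where subscripting with commas builds a tuple key. A minimal sketch using a dict in place of the hypothetical dimensioned array:

```python
# A dict keyed by tuples gives the 'tuple as address' behaviour:
# @array[1;2;3] becomes ary[(1, 2, 3)].
ary = {}

address = (1, 2, 3)        # the my $address = tuple(...) idea
ary[address] = "data"

print(ary[1, 2, 3])        # → data  (Python auto-tuples the subscript)
print(ary[address])        # → data  (the stored address works too)
```

So "is there a tuple constructor?" gets an unqualified yes here: the address is a first-class value you can store, pass around, and reuse as a subscript.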
Re: Why do users need FileHandles?
--- chromatic [EMAIL PROTECTED] wrote: On Mon, 2004-07-19 at 14:04, David Storrs wrote: Second, I would suggest that it NOT go in a library...this is reasonably serious under-the-hood magic and should be integrated into the core for efficiency. You must have amazingly fast hard drives. I mount /tmp on swap. My hard drive is bitchin fast. =Austin
Re: String interpolation
--- Larry Wall [EMAIL PROTECTED] wrote: If {...} supplies list context by default, most interpolations are either the same length or shorter: $($foo) {$foo} @(@foo) [EMAIL PROTECTED] $(@foo) [EMAIL PROTECTED] [EMAIL PROTECTED] [EMAIL PROTECTED] Tres PHP, sir. Plus, as I mentioned, it cleans up the $file.ext mess, the [EMAIL PROTECTED] mess, and the %08x mess. That's three FAQs that don't have to exist. Ironically, I was grousing a couple of weeks back on the Sitepoint PHP forum that the $foo vs. {$foo} interpolator wasn't smart enough -- it had a limited submode for interpolated symbols instead of going into 'get me a var-exp'. The flip side of that is that it's based on supporting two ways of interpolating: plain $old text and special {$interpolated} text. I wonder if you're talking about having just one {$interpolation} mode, or if simple interpolations stay undelimited? : The prospect of backslashing every opening brace in every : interpolated string is not one I relish. I'm just looking for what will be least confusing to the most people here. We can certainly have other possible behaviors, but something simple needs to be the default, and $() doesn't feel right to me anymore. Suppose there was a default that was "you must quote curlies", and alternates like: q{ You must still quote curlies, else they interpolate. } q( You must quote parens, else they interpolate. ) q[ You must quote brackets, else they interpolate. ] (And, while I'm at it, how about the cool: qv( sym ) which expands to the @(file, line) || $line on which SYM was declared/first encountered. :-) =Austin
Re: This week's summary
--- The Perl 6 Summarizer [EMAIL PROTECTED] wrote: Okay, so the interview was on Tuesday 13th of July. It went well; I'm going to be a maths teacher. As usual, we begin with maths-geometry: In Mathematics last week, one Pythagoras suggested there might be a relationship between the sides of a triangle and its hypotenuse. Zeno continued to close on his destination, but once again only got halfway there. http://xrl.us/chjj (Squares) http://xrl.us/chjk (Zeno in mid-flight) As we all know, time flies like an arrow, but fruit flies like a banana. If you found this mathematical summary helpful, please consider paying your tuition you ungrateful little bastards. Congratulations, Piers! The fate of a generation rests on your shoulders. =Austin
Re: Why do users need FileHandles?
--- Rod Adams [EMAIL PROTECTED] wrote: Dave Whipp wrote: Your case 2 is easy: my Str $passwds is File(/etc/passwd) is const. With that, we might even catch your error at compile time. s/file/open/ and we're back where we started. Except that we've lost a layer of abstraction: the programmer manipulates a file's contents, not its accessor. Text files would be just an implementation of strings. No need to learn/use a different set of operators. Want to read bytes: use $str.bytes. Graphemes: $str.graphs. Also, we use the existing access control mechanisms (is rw, is const, instead of inventing new ones to pass to the C<open> function as named-args). I think part of the mental jam (at least with me), is that the read/write, exclusive, etc, are very critical to the act of opening the file, not only an after the fact restriction on what I can do later. But why? I'd argue that this ties in to the verbose/exception discussion of a few weeks back: if the operation fails, let it pass an exception up the chain that can be caught and resolved (once) at a high level. Given that file processing is so common in Perl, it deserves a high huffman scoring. The best way to do that is to abstract the operations away and replace them with a single declaration of intent. That declaration, of course, becomes a front-end for C or heavily optimized parrot. In a heavily OO paradigm, there would be a swarm of subclasses of type stream -- istream, ostream, iostream, exclusive_iostream, whatever. The suggestion is that we can derive an equally expressive vocabulary using barewords and the occasional adverbial modifier. If I cannot open a file for writing (permissions, out of space, write locked, etc), I want to know the instant I attempt to open it as such, _not_ when I later attempt to write to it. Having all these features available to open as arguments seems a much better idea to me. It's "Open a file with these specifications", not "Open a file, and then apply these specifications to it". 
But why? Do you really open files and then perform an hour of work before attempting to use them? I'll argue that's not the normal case; rather, the normal case is something like open or die ... other_stuff() while (...) { print ... } close and the intervening delay (other_stuff) is negligible in wall-clock terms: when a failure occurs, the user hears about it immediately. I do admit there is merit to your abstraction system, but IMO, it belongs in a library. I think rather that the abstraction should be the default, and the individual I don't trust Perl functions should be available as separate entry points if the user explicitly requires them. =Austin
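For comparison, Python's `open()` takes the fail-at-open-time position Rod argues for: mode and permission problems raise immediately, not on the first write. A small sketch (the path is hypothetical, chosen so the open fails):

```python
# Python's open() fails at open time: permission or path problems raise
# an exception the instant you ask for the handle, not when you write.
try:
    fh = open("/no/such/dir/out.txt", "w")   # hypothetical bad path
except OSError as e:
    print("open failed up front:", e.errno is not None)
```

The exception still propagates up the chain to be caught once at a high level, so both positions in the thread are compatible: fail-fast at open, handled wherever the caller chooses.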
RE: :)
-Original Message- From: Juerd [mailto:[EMAIL PROTECTED] Sent: Saturday, 17 July, 2004 01:53 PM To: [EMAIL PROTECTED] Subject: :) Do we have a :) operator yet? It's an adverbial modifier on the core expression type. Does nothing, but it acts as a line terminator when nothing but whitespace separates it from EOL. =Austin
Re: enhanced open-funktion
--- Smylers [EMAIL PROTECTED] wrote: Using C<:w> and C<:r> would at least match what :w and :r do in 'Vi' ... That seems intuitive: my $fh = open 'foo.txt', :w; $fh.say "Hello, world!"; $fh = open 'foo.txt', :e;# Ha, ha, just kidding! $fh.say <<-EOF If wifey shuns Your fond embrace Don't kill the mailman: Feel your face! Burma Shave EOF $fh.close :q!;# Tricked again. $fh = open :n;# Opens next file from argv? =Austin
Re: scalar subscripting
--- Jonadab the Unsightly One [EMAIL PROTECTED] wrote: Of course, this leaves open the question of whether there are any fairly common filename extensions that happen to be spelled the same as a method on Perl6's string class, that might ought to have a warning generated... Are there any three-letter methods on the string class? (Maybe there shouldn't be, in the core language...) Well, there was talk about the shortest-possible-name for, among others, the codepoints and language-dependent-entities of strings. We pretty well know that .c is a risk, and if language-dependent gets spelled per language, .pl might also be one. :-( =Austin
Re: Cartesian products? [Especially wrt iterations]
--- Michele Dondi [EMAIL PROTECTED] wrote: On Tue, 13 Jul 2004, Austin Hastings wrote: Using google(+perl6 +cartesian product) would have led you to the conclusion that this is already included. I hope this is horribly wrong, since the syntax is a little bewildering. [...] See Luke Palmer's "Outer product considered useful" post: http://www.mail-archive.com/[EMAIL PROTECTED]/msg15513.html That's exactly the point! I wish too there were a more intuitive syntax, possibly even employing a predefined array variable if none is explicitly specified... Boggle! While C<outer> may not be totally intuitive, it's not far off. Likewise, the latin-1 version is pretty good: for @x × @y × @z -> $x, $y, $z { ... } Is there some even more intuitive way than this? On Tue, 13 Jul 2004, Jonathan Scott Duff wrote: Are you sure? for zip(1..10, 5..20, <foo bar baz>) -> $x, $y, $text { do_something_with $x,$y,$text; } Not sure at all: admittedly I may well be one of the less informed ones about Perl6 here. Though as far as I can understand zip() is for iterating *in parallel*, and both other replies here and discussion previously held here seem to indicate that it is so. No, ¥ (C<zip>) is wrong for this. (It's the inner product, so it really ought to be '·' (C<inner>) except for the weird origin.) =Austin
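The outer-versus-zip distinction at issue here is exactly `itertools.product` versus `zip` in Python, which makes a convenient neutral illustration:

```python
from itertools import product

xs, ys = [1, 2], ["a", "b"]

# Outer (Cartesian) product -- the ×/C<outer> version, every pairing:
print(list(product(xs, ys)))  # → [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]

# zip iterates *in parallel* -- pairs off corresponding elements only:
print(list(zip(xs, ys)))      # → [(1, 'a'), (2, 'b')]
```

Two lists of lengths m and n give m·n tuples under the outer product but only min(m, n) under zip, which is why zip is the wrong tool for a Cartesian loop.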
Re: enhanced open-funktion
--- Larry Wall [EMAIL PROTECTED] wrote:
> While that probably works, I think better style would be to use a
> comma:
>
>     my $fh = open $filename, :excl;
>
> That explicitly passes :excl to open as a term in a list rather
> than relying on the magical properties of :foo to find the
> preceding operator adverbially when used where an operator is
> expected.

Hmm. If I must use the comma, I think I'd prefer a constant in this
scenario:

  my $fh = open $filename, open::exclusive;

Is it reasonable to use package:: as a prefix to a block to change
the default namespace? That is,

  { package open; exclusive + rw; }

becomes

  open::{ exclusive + rw }

which then lets me say:

  my $fh = open $filename, open::{ exclusive + rw };

=Austin
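[Editor's sketch] The open::exclusive idea, namespaced flag constants combined and handed to open, closely resembles the POSIX-style flags Python exposes through the os module. A hedged illustration:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# Flags live in the `os` namespace and combine much like the
# proposed open::{ exclusive + rw }:
fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_RDWR)
os.write(fd, b"hello")
os.close(fd)

# O_EXCL makes a second exclusive create fail loudly:
try:
    os.open(path, os.O_CREAT | os.O_EXCL | os.O_RDWR)
except FileExistsError:
    print("exclusive create refused: file exists")
```

The namespacing does the same job as Austin's constant: the bare word exclusive means nothing on its own; it only has meaning inside the open(-ish) namespace.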
RE: The .bytes/.codepoints/.graphemes methods
-----Original Message-----
From: Jonadab the Unsightly One [mailto:[EMAIL PROTECTED]

> Austin Hastings [EMAIL PROTECTED] writes:
> > I think this is something that we'll want as a mode, a la
> > case-insensitivity. Think of it as mark insensitivity.
>
> Makes sense to me, but...

Maybe it can just roll into :i?

> It will probably get used in _conjunction_ with case-insensitivity
> quite a lot, but I suspect people will want to be able to use one
> without the other. Since mark-insensitivity is probably mostly a
> non-issue in the ASCII world, it would probably be a better
> candidate than average for being turned on using a unicode
> character, if we're running low on letters for designating these
> rules.

How about :ï ?  :) :) :)

=Austin
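[Editor's sketch] "Mark insensitivity" has a standard Unicode implementation: decompose to NFD and discard combining marks before comparing. A hedged Python sketch of the mode being proposed:

```python
import unicodedata

def strip_marks(s: str) -> str:
    """Compare-key for mark-insensitive matching: decompose to NFD,
    then drop combining marks (general category Mn)."""
    decomposed = unicodedata.normalize("NFD", s)
    return "".join(ch for ch in decomposed
                   if unicodedata.category(ch) != "Mn")

# "naïve" and "naive" differ only by a combining diaeresis:
assert strip_marks("naïve") == "naive"

# Stacking casefold on top approximates using the mode together
# with :i -- the "conjunction" the post predicts, while each
# transform remains usable on its own:
assert strip_marks("NAÏVE").casefold() == strip_marks("naïve").casefold()
print("mark-insensitive match ok")
```

Note the two transforms compose but stay independent, which is exactly Jonadab's argument for keeping mark insensitivity separate from :i.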
Re: scalar subscripting
--- Juerd [EMAIL PROTECTED] wrote:
> Piers Cawley skribis 2004-07-12 12:20 (+0100):
> > method postcircumfix:[] is rw { ... }
>
> Compared to Ruby, this is very verbose.
>
>     def [] (key) ... end
>     # Okay, not entirely fair, as the Ruby version would also
>     # need []= defined for the rw part.
>
> Could methods like [] and {} *default* to postcircumfix:?

Do we want Larry to spend even a picosecond thinking about how to
reduce the number of characters required to declare something like
this?

> Array-/Hash-like access is very natural for many objects and I
> think deserves simplified syntax.
>
>     method [] ($index) { .item $index }
>     method {} ($key)   { .arg $key }

I thought operators were generally considered global multisubs,
simply because it makes life interesting when you're dereferencing
$a[1, 2] otherwise. (Not that it matters, other than increasing the
number of places that MMD has to search for candidates.)

Regardless, I don't agree that the huffman coding of
postcircumfix:[] needs to be reduced to that of []. You could write a
pcf:[] macro, if you wanted to. (And I'm assuming will be a method of
Object that uses self.postcircumfix:{}.)

It seems intuitive that redefining one access would redefine the
other(s), but I want the override mechanism to be the same. I think
requiring a method definition for one notation and a multisub for the
other notation would be a mistake. So long as they are both
overridden as methods or both as subs, it's cool.

=Austin
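[Editor's sketch] Python splits subscript overloading the same way the Ruby aside does: read and write access are separate methods rather than a single `is rw` routine, and both are overridden as methods, which is the consistency Austin asks for. A hedged illustration:

```python
class Record:
    """Array/hash-like access via the subscript protocol."""

    def __init__(self):
        self._data = {}

    def __getitem__(self, key):
        # Handles `obj[key]` reads -- the postcircumfix:[] analogue.
        return self._data[key]

    def __setitem__(self, key, value):
        # Handles `obj[key] = value` writes -- Ruby's []= analogue,
        # standing in for the `is rw` part.
        self._data[key] = value

r = Record()
r["x"] = 42
print(r["x"])  # 42
```

Defining __getitem__ alone gives read-only subscripting; the write side must be supplied separately, so "redefining one access" does not silently redefine the other.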
Re: push with lazy lists
--- Dave Whipp [EMAIL PROTECTED] wrote:
> rand(@x) == @x.rand == @x[ rand int @x ] == @x[ rand(1) * @x ]
>
> guaranteeing a uniform distribution unless adverbial modifiers are
> used.

Meaning I can do:

  $avg_joe = rand @students :bell_curve;

?

=Austin
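[Editor's sketch] The uniform-default-plus-adverb idea maps onto Python's random module: random.choice is the guaranteed-uniform pick, and a weighted pick stands in for the hypothetical :bell_curve adverb. The triangular weights below are an illustrative stand-in, not a real normal distribution:

```python
import random

students = ["alice", "bob", "carol", "dave", "erin"]

# Uniform pick -- the guaranteed default Dave describes:
joe = random.choice(students)

# A non-uniform ("bell curve"-ish) pick: weight middle elements
# more heavily. Triangular weights, purely for illustration.
n = len(students)
weights = [min(i + 1, n - i) for i in range(n)]  # [1, 2, 3, 2, 1]
avg_joe = random.choices(students, weights=weights, k=1)[0]

print(joe, avg_joe)
```

The point of the adverb is exactly this: same selection call, different distribution, opted into explicitly.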
Re: if not C, then what?
--- Larry Wall [EMAIL PROTECTED] wrote:
> On Fri, Jul 09, 2004 at 10:39:56AM +0200, Michele Dondi wrote:
> : On Thu, 1 Jul 2004, Alexey Trofimenko wrote:
> : > if we're really about to lose the C-style comma, would we have
> : > something new instead?
> :
> : A late thought, but since I am one of those who's keen on the
> :
> :   print, next if /stgh/;
> :
> : kinda syntax too, and I, for one, will regret not having it
> : anymore, I wonder whether something vaguely like the following
> : example could (be made to) work:
> :
> :   print.then{next} if /stgh/;
>
> That's unnecessary--the comma still works perfectly fine for this,
> since comma still evaluates its arguments left-to-right. The *only*
> difference about comma is what it returns in scalar context. Most
> uses of the so-called C-style comma (including this one) are
> actually in void context, and in that case whether the return value
> is a list or the final value Doesn't Really Matter.

Will there be a statement modifier version of C<when>?

  print, next when /stgh/;

Can there reasonably be block-postfix modifiers?

  { print; next; } if|when /stgh/;

=Austin
Re: if not C, then what?
--- Larry Wall [EMAIL PROTECTED] wrote:
> On Fri, Jul 09, 2004 at 11:23:09AM -0700, Austin Hastings wrote:
> : Can there reasonably be block-postfix modifiers?
> :
> :   { print; next; } if|when /stgh/;
>
> If there reasonably can be block modifiers, I will unreasonably
> declare that there can't be.

Be as unreasonable as you want -- the grammar's open. :)

You can always say:

  do { print; next; } if|when /stgh/;

(It's still the case that do-while is specifically disallowed,
however.)

What about C<loop>?

  do { print; next } loop (; true ;);

=Austin
Re: if not C, then what?
--- Larry Wall [EMAIL PROTECTED] wrote:
> On Fri, Jul 09, 2004 at 11:51:52AM -0700, Austin Hastings wrote:
> : --- Larry Wall [EMAIL PROTECTED] wrote:
> : > If there reasonably can be block modifiers, I will unreasonably
> : > declare that there can't be.
> :
> : Be as unreasonable as you want -- the grammar's open. :)
>
> Darn it, when did that misfeature sneak in?  :-)

I can't recall the day, but I'm pretty sure it ended with 'y'.

> : You can always say:
> :
> :   do { print; next; } if|when /stgh/;
> :
> : (It's still the case that do-while is specifically disallowed,
> : however.)
> :
> : What about C<loop>?
> :
> :   do { print; next } loop (; true ;);
>
> I don't see much utility in that, and plenty of room for confusion.
> Does the next apply to the statement modifier? How often do you want
> to explain why
>
>     do { print $i } loop (my $i = 0; $i < 10; $i++);
>
> doesn't work?

I want it to work. (I'm about to ask for a -> binding operator, to
boot. :) But I also want do/while to work, solely because
repeat/until sucks. What's the big deal there?

> All leaving out the fact that it doesn't read like English, which is
> a requirement for statement modifiers.

Yeah. What idiot picked 'loop' for a keyword? :)

OTOH, there's a whole slew of prepositions out there. What's the
mechanism for adding them as statement modifiers?

  ++$_ throughout @a;

Of course, the grammar's open...

> But let me put this on the record: I specifically disrecommend use
> of grammar tweaks that will incite lynch mobs. You have been warned.

One man's syntactic sugar is another man's "get a rope". I'm sure
someone will implement C++ style I/O using some number of < and >
characters (it won't be me).

(And there's the separable keyword issue, natch. "...up with which I
shall not put" in perl? C<print if even else next;>)

=Austin
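[Editor's sketch] The do/while semantics the two sides are arguing about (body always runs at least once, condition checked afterwards) is routinely emulated in languages without the construct. A hedged Python sketch of what Austin wants to "just work":

```python
# Emulating `do { ... } while COND`: the body runs before the
# condition is ever tested, then repeats while COND holds.
i = 0
results = []
while True:
    results.append(i)   # loop body -- always executes at least once
    i += 1
    if not (i < 3):     # post-condition, as in do/while
        break

print(results)  # [0, 1, 2]

# Even with an always-false condition the body still runs once,
# which is the whole point of the construct:
ran = []
while True:
    ran.append("once")
    if not False:
        break
```

Larry's scoping objection also shows up here: a variable first bound inside the body (like `my $i` in his example) cannot simultaneously serve as the loop's initializer, which is why `do { print $i } loop (my $i = 0; ...)` confuses readers.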