Re: The invocation operators .* and .+
* yary not@gmail.com [2015-06-17 17:10]: Perl6's TEARDOWN Sorry for the confusion. It’s not in Perl 6. I invented .teardown for this example because I didn’t want to call it .destroy – that’s all. -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Language design
* Michael Zedeler mich...@zedeler.dk [2015-06-16 18:55]: For instance, why have Complex and Rat numbers in the core? If you're not working in a very specialized field (which probably *isn't* numerical computation), those datatypes are just esoteric constructs that you'll never use. https://www.youtube.com/watch?v=S0OGsFmPW2M
Re: The invocation operators .* and .+
* Michael Zedeler mich...@zedeler.dk [2015-06-16 13:10]: On 06/16/15 12:24, Aristotle Pagaltzis wrote: * Michael Zedeler mich...@zedeler.dk [2015-06-16 11:35]: This is working exactly as specified in the synopsis, but does Perl 6 NEED anything like this? Just because something is possible doesn't make it an automatic requirement! Well someone thought they needed it in Perl 5 so they wrote NEXT which provides EVERY:: which does exactly the same thing. C3 dispatch surely has something similar too, natively, I’m just not aware of it. Which is a reasonably good argument for letting others write a *module* for Perl 6 that provides this feature. I don't see why it should be in the core. Because it’s terrible in the out-of-core form on CPAN? I haven't seen just one reasonable use case for it. Anyplace you would have to say “if you override this method then make sure to call the overridden method also” (like calling ->new up the inheritance tree). Instead of relying on every subclass writer to not screw this up (and leave the object instance in an incoherent state), you use something like these operators to make *sure* a certain method is called all up the inheritance tree as necessary for your de-/init needs. Sorry. Doesn't make sense.

    class A {
        sub destroy { ...important cleanup }
    }
    class B is A {
        sub destroy {
            ...important cleanup...
            nextall;
        }
    }

followed by $b.destroy. What is it that makes this *less* preferable over

    class A {
        sub destroy { ...important cleanup }
    }
    class B is A {
        sub destroy { ...important cleanup... }
    }

    $b.+destroy

The latter breaks encapsulation. The subclass B has the *full* responsibility to handle the method call. Not the caller. I’m not sure why you thought that when I say it’s broken to give the subclass this responsibility, it follows that I mean that the *caller* should be responsible, which is so much more broken that… well, that it was obvious even to you.
No, the *superclass* ought to take responsibility here:

    class A {
        method teardown { $self.+TEARDOWN }
        method TEARDOWN { ... }
    }
    class B is A {
        method TEARDOWN { ... }
    }
    class C is B { ... }

    $c.teardown

Not only does that not break encapsulation at the caller, it also keeps the responsibility encapsulated from subclasses. They just need to take care of their own cleanup; they need not make sure everyone else’s gets called too. That’s what it’s useful for: to provide a contract of this type in one place in a superclass (or maybe a role or some other single authority). (It doesn’t even have to be provided at the root of the hierarchy.) Every modern Perl 5 OO system invents stuff like BUILD and DEMOLISH for this purpose. And one of the points of Perl 6 is not to have to handroll a reasonable OO system as your first step in writing nontrivial systems. So putting these operators right in the language, properly designed, is specifically called for. Show me the precedent for constructs like this in other languages, please. I haven't seen any and I believe it is because they're not necessary. Mostly because they make `new` an operator or otherwise have special cases for e.g. instance construction and destruction, such that you don’t need to do this in the most common case (same as you won’t in Perl 6)… but nor *can* you in most languages, if you happen to have a less common use case. Or maybe you are aware of the motivation for these designs and disagree with that desire in the first place? In that case I don’t know what to say; obviously there are plenty of people who do see it as a necessity. I am really not so sure, because I've tried to bring up the subject a couple of times, and every time I get answers from people like you: people who don't need the feature themselves, but refer to it as something someone else probably needs. Is that the kind of person I am? I thought I had needed and used EVERY::LAST:: in Object::Properties.
Please show me an example that makes Perl 6 vastly more useful by the addition of this feature. No. I have already mentioned BUILD/DEMOLISH and explained the motivation in text. You did not appear to even realise that those were examples of the use case, nor appreciate their usefulness (and not just to people other than me). I am willing to help you appreciate the use of this feature but I have no interest in convincing you that it is a good idea. If you do not have the curiosity to take my answer seriously, there’s nothing left to do. Perl 6 could be great because things like this operator could be deferred to non-core modules, but right now they're in the core and nobody can really explain why. *COUGH* featurecreep *COUGH* Pardon me for not quite believing that you think Perl 6 would be AMAZING except alas, it has this .* / .+ operator that ruins
Re: The invocation operators .* and .+
* Michael Zedeler mich...@zedeler.dk [2015-06-16 11:35]: This is working exactly as specified in the synopsis, but does Perl 6 NEED anything like this? Just because something is possible doesn't make it an automatic requirement! Well someone thought they needed it in Perl 5 so they wrote NEXT which provides EVERY:: which does exactly the same thing. C3 dispatch surely has something similar too, natively, I’m just not aware of it. I haven't seen just one reasonable use case for it. Anyplace you would have to say “if you override this method then make sure to call the overridden method also” (like calling -new up the inheritance tree). Instead of relying on every subclass writer to not screw this up (and leave the object instance in an incoherent state), you use something like these operators to make *sure* a certain method is called all up the inheritance tree as necessary for your de-/init needs. Every modern Perl 5 OO system invents stuff like BUILD and DEMOLISH for this purpose. And one of the points of Perl 6 is not to have to handroll a reasonable OO system as your first step in writing nontrivial systems. So putting these operators right in the language, properly designed, is specifically called for. Just because you can’t think of the use of a feature doesn’t mean there isn’t one. Or maybe you are aware of the motivation for these designs and disagree with that desire in the first place? In that case I don’t know what to say; obviously there are plenty of people who do see it as a necessity. If you don’t like the fact that they exist then the situation cannot be reconciled and you might indeed be happier in a language with reflection facilities that are sufficiently limited to prevent implementing such constructs (because if so many people exist who think they need this in Perl, similar people will inevitably exist in any other language where this can be done). Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
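The contract described above — a superclass guaranteeing that every class’s cleanup hook runs, so that subclasses only mind their own cleanup — can be sketched in Python, whose explicit method resolution order makes the walk visible. This is only an analogy to `$obj.+TEARDOWN` / EVERY::, not Perl 6 semantics, and all class and method names here are invented for illustration:

```python
# Python analogy to the superclass-driven teardown contract: the
# superclass walks the MRO and calls each class's *own* TEARDOWN hook,
# most-derived class first. Subclasses never call up the chain themselves.

class Base:
    def teardown(self):
        # Moral equivalent of self.+TEARDOWN: invoke every TEARDOWN
        # defined anywhere along the inheritance chain.
        for klass in type(self).__mro__:
            if 'TEARDOWN' in vars(klass):   # only classes that define one
                klass.TEARDOWN(self)

    def TEARDOWN(self):
        self.log.append('Base')

class Middle(Base):
    def TEARDOWN(self):
        self.log.append('Middle')

class Leaf(Middle):
    pass  # no cleanup of its own; inherits the contract for free

obj = Leaf()
obj.log = []
obj.teardown()
print(obj.log)  # → ['Middle', 'Base']
```

The point mirrors the email: the caller says only `obj.teardown()`, and no subclass can forget to propagate the call.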
Good error messages: going the extra mile
Hi Larry (mostly) et al, this sounds like something STD could try to steal: * http://blog.llvm.org/2010/04/amazing-feats-of-clang-error-recovery.html Okay, this may be going a bit far, but how else are you going to fall completely in love with a compiler?

    $ cat t.c
    void f0() {
    <<<<<<< HEAD
      int x;
    =======
      int y;
    >>>>>>> whatever
    }
    $ clang t.c
    t.c:2:1: error: version control conflict marker in file
    <<<<<<< HEAD
    ^
    $ gcc t.c
    t.c: In function ‘f0’:
    t.c:2: error: expected expression before ‘<<’ token
    t.c:4: error: expected expression before ‘==’ token
    t.c:6: error: expected expression before ‘>>’ token

Yep, clang actually detects the merge conflict and parses one side of the conflict. You don't want to get tons of nonsense from your compiler on such a simple error, do you? As I understood it from a YAPC keynote a year or two ago, STD already has the speculative parse machinery in place. It seems like this should be implementable with reasonable effort? Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
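For what it’s worth, the detection step itself is small; this is a hypothetical sketch of recognising conflict markers in source text, not clang’s or STD’s actual recovery machinery:

```python
import re

# Hypothetical sketch: before giving up on a parse error, check whether
# the offending line is a version-control conflict marker.
CONFLICT = re.compile(r'^(<{7}|={7}|>{7})( .*)?$')

def find_conflict_markers(source):
    """Return (line_number, marker) pairs for all conflict markers."""
    hits = []
    for n, line in enumerate(source.splitlines(), start=1):
        m = CONFLICT.match(line)
        if m:
            hits.append((n, m.group(1)))
    return hits

src = """void f0() {
<<<<<<< HEAD
  int x;
=======
  int y;
>>>>>>> whatever
}"""
print(find_conflict_markers(src))
# → [(2, '<<<<<<<'), (4, '======='), (6, '>>>>>>>')]
```

A recovering parser could then pick one side of each marked region and reparse, which is the part that needs the speculative machinery.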
Re: r30205 - docs/Perl6/Spec
* Leon Timmermans faw...@gmail.com [2010-03-27 09:40]: On Sat, Mar 27, 2010 at 2:01 AM, Geoffrey Broadwell ge...@broadwell.org wrote: On Fri, 2010-03-26 at 08:38 +0100, pugs-comm...@feather.perl6.nl wrote: .doit: { $^a = $^b } # okay .doit(): { $^a = $^b } # okay .doit(1,2,3): { $^a = $^b } # okay + .doit(1,2,3): { $^a = $^b } # okay + .doit:{ $^a = $^b } # okay + .doit():{ $^a = $^b } # okay + .doit(1,2,3):{ $^a = $^b } # okay + .doit(1,2,3):{ $^a = $^b } # okay My eyes must be playing tricks on me -- I can't see the difference between the last two lines in each of the above blocks. What am I missing? A space between the colon and the opening brace ;-) He is saying he can’t see how these differ from each other: .doit(1,2,3): { $^a = $^b } # okay + .doit(1,2,3): { $^a = $^b } # okay Or how these two differ from each other: + .doit(1,2,3):{ $^a = $^b } # okay + .doit(1,2,3):{ $^a = $^b } # okay (Neither can I.) Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Re-thinking file test operations
* Moritz Lenz mor...@faui2k3.org [2009-07-10 00:25]: stat($str, :e)# let multi dispatch handle it for us This gets my vote. -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: .trim and 'gilding the lilly'
* Nicholas Clark n...@ccl4.org [2009-01-24 15:00]: But personally I feel that the added conceptual complexity of having over-ridable regexps, and in particular .ltrim and .rtrim methods with over-ridable regexps is not worth it. Yeah. I have come around to this view as well. In programming, everything we do is a special case of something more general – and often we know it too quickly. —Alan J. Perlis I think this is a case of us overgeneralising `.trim` when perfectly appropriate truly general ways of achieving the same effect already exist – i.e. `.subst`. The reason to have `.trim` at all is that the use of `.subst` is awkward for the very common case of wanting to trim both ends. But wanting to trim only a single side is much rarer while simultaneously not being appreciably awkward to do with `.subst`. (It’s more typing, but not enough to matter for such a relatively rare thing.) Sticking to a single common use-case eliminates the need for a configuration API, improving usability as a whole. Keep it simple. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
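The division of labour argued for above — a named helper for the common both-ends case, general substitution for the rarer one-sided cases — looks like this transposed into Python, with `re.sub` standing in for `.subst`:

```python
import re

def trim(s):
    # Both-ends trim: the common case that earns a dedicated helper.
    return re.sub(r'^\s+|\s+$', '', s)

s = "  hello  "
print(repr(trim(s)))                  # → 'hello'      the common case
print(repr(re.sub(r'^\s+', '', s)))   # → 'hello  '    left-only, via plain substitution
print(repr(re.sub(r'\s+$', '', s)))   # → '  hello'    right-only, likewise
```

The one-sided forms cost a few more characters, which is exactly the trade-off the email accepts for keeping the core API minimal.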
Re: Converting a Perl 5 pseudo-continuation to Perl 6
* Aristotle Pagaltzis pagalt...@gmx.de [2009-01-02 23:00]: That way, you get this combination: sub pid_file_handler ( $filename ) { # ... top half ... yield; # ... bottom half ... } sub init_server { # ... my $write_pid = pid_file_handler( $optionspid_file ); become_daemon(); $write_pid(); # ... } It turns out that is exactly how generators work in Javascript 1.7: https://developer.mozilla.org/en/New_in_JavaScript_1.7 Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Trimming arrays
* Ovid publiustemp-perl6langua...@yahoo.com [2009-01-13 00:35]: * Larry Wall la...@wall.org [2009-01-13 00:25]: It should probably say No such method. We have hyperops now to apply scalar operators to composite values explicitly: @array».=trim Won't that fail with 'No such method' on an array of hashes? Or are hyperops applied recursively? I would *NOT* want a simple `».` to recurse down into a data structure. But I wonder if it’s reasonable to expect that hypermethodcalls will collect their return values in an array. Then trimming the values of the hashes in an array would be simply @array».values».=trim; Imagine writing this in another language. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
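Spelled out in another language, the hypermethod chain amounts to one mapping level per `».` — over the array, then over each hash’s values — which is why recursion into the structure is neither needed nor wanted. A Python sketch of what `@array».values».=trim` would have to do (illustrative only, not a statement about how hyperops are implemented):

```python
# One mapping level per ».: outer comprehension maps over the array,
# inner comprehension maps over each hash's values.
array = [{'a': ' x ', 'b': ' y '}, {'c': ' z '}]

trimmed = [{k: v.strip() for k, v in h.items()} for h in array]
print(trimmed)  # → [{'a': 'x', 'b': 'y'}, {'c': 'z'}]
```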
Re: [PATCH] Add .trim method
* Ovid publiustemp-perl6langua...@yahoo.com [2009-01-12 16:05]: Or all could be allowed and $string.trim(:leading(0)) could call $string.rtrim internally. ++ Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: [PATCH] Add .trim method
* Ovid publiustemp-perl6langua...@yahoo.com [2009-01-12 18:40]: 1. No params, trim all 2. :start or :end, only trim that bit (not a negated option :) 3. If both, goto 1 Also `:!start` to imply `:end` unless `:!end` (which in turn implies `:start` unless `:!end`)? I’d like not to have to type `.trim(:start)` when I could just do `.ltrim` though. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: [PATCH] Add .trim method
* Aristotle Pagaltzis pagalt...@gmx.de [2009-01-12 20:55]: Also `:!start` to imply `:end` unless `:!end` (which in turn implies `:start` unless `:!end`)? Ugh, forget this, I was having a blank moment. Actually that makes me wonder now whether it’s a good idea at all to make the function parametrisable. Even `.ltrim.rtrim` is shorter and easier than `.trim(:start,:end)`! Plus if there are separate `.ltrim` and `.rtrim` functions it would be better to implement `.trim` by calling them rather than vice versa, so it wouldn’t even be less efficient to make two calls rather than a parametrised one. And if anyone really needs to be able to decide the trimming based on flags, they can do that themselves with `.ltrim`/`.rtrim` with rather little code anyway. So I question the usefulness of parametrisation here. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: [PATCH] Add .trim method
* Ovid publiustemp-perl6langua...@yahoo.com [2009-01-12 21:20]: Since that's RTL (Right To Left) text, should ltrim remove the leading or trailing whitespace? I like Jonathan's trim_start and trim_end. Let me ask you first: does a string that runs Right-to-Left start at the left and end at the right or start at the right and end at the left? Now to answer your question, *I* know where the *left* side is in a string that runs from right to left: it’s at the *left*, same as if the string ran from the left to the right, because left is at the *left*. :-) I mean, if the meaning of “left” was inverted by “right-to-left”, in which it is contained, then what does the latter even mean? (OK, we’re on a Perl 6 list so I guess the answer is it’s a junction… :-) ) Clearly one of us has an inverted sense of which pair of terms is ambiguous, and I don’t think it’s me… ;-) Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: [PATCH] Add .trim method
* Larry Wall la...@wall.org [2009-01-12 21:55]: * Aristotle Pagaltzis pagalt...@gmx.de [2009-01-12 21:20]: Plus if there are separate `.ltrim` and `.rtrim` functions it would be better to implement `.trim` by calling them rather than vice versa, so it wouldn’t even be less efficient to make two calls rather than a parametrised one. Depends on your string implementation if they're non-destructive, since they potentially have to copy the middle of the string twice if your implementation can't support one string pointing into the middle of another. And again, I think .trim should be non-destructive, and .=trim should be the destructive version. Sure, but that doesn’t affect my point: if `.trim` is implemented as calling `.ltrim` + `.rtrim`, as I assumed, then all ways of trimming a string at both ends will be equally efficient or inefficient depending on whether or not the implementation supports offsetted strings. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: [PATCH] Add .trim method
* Ovid publiustemp-perl6langua...@yahoo.com [2009-01-12 22:05]: I see your point And now I see yours. I was visualising the memory layout of a string, wherein a right-to-left string gets displayed from the right end of its in-memory representation so “left” and “right” are absolutes in that picture. But of course RTL reverses the relation of left/right in memory and left/right on screen. I think a week’s worth of wolf sleep is catching up to me, sorry. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: [PATCH] Add .trim method
* Austin Hastings austin_hasti...@yahoo.com [2009-01-12 22:00]: How about .trim(:l, :r) with both as the default? Liveable. And if the rtl crowd makes a furor, we can add :a/:o or :ת/:א or something. *grin* Maybe :h and :t (head/tail). Useful for doing infrequent things. IMO, left and right trimming are infrequent compared to the frequency of basic input editing. Good point, rings true. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: r24846 - docs/Perl6/Spec
* jerry gay jerry@gmail.com [2009-01-09 22:45]: it's eager for the match to close Impatient, hasty? Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Converting a Perl 5 pseudo-continuation to Perl 6
* Geoffrey Broadwell ge...@broadwell.org [2009-01-01 21:40]: In the below Perl 5 code, I refactored to pull the two halves of the PID file handling out of init_server(), but to do so, I had to return a sub from pid_file_handler() that acted as a continuation. The syntax is a bit ugly, though. Is there a cleaner way to do this in Perl 6?

    ##
    sub init_server {
        my %options = @_;
        # ...
        # Do top (pre-daemonize) portion of PID file handling.
        my $handler = pid_file_handler($options{pid_file});
        # Detach from parent session and get to clean state.
        become_daemon();
        # Do bottom (post-daemonize) portion of PID file handling.
        $handler->();
        # ...
    }

    sub pid_file_handler {
        # Do top half (pre-daemonize) PID file handling ...
        my $filename = shift;
        my $basename = lc $BRAND;
        my $PID_FILE = $filename || "$PID_FILE_DIR/$basename.pid";
        my $pid_file = open_pid_file($PID_FILE);
        # ... and return a continuation on the bottom half (post-daemonize).
        return sub {
            $MASTER_PID = $$;
            print $pid_file $$;
            close $pid_file;
        };
    }
    ##

When I asked this question on #perl6, pmurias suggested using gather/take syntax, but that didn't feel right to me either -- it's contrived in a similar way to using a one-off closure. Contrived how? I always found implicit continuations distasteful in the same way that `each` and the boolean flip-flop are bad in Perl 5: because they tie program state to a location in the code. When there is state, it should be passed around explicitly. So I think the return-a-closure solution is actually ideal. F.ex. it keeps you entirely clear of the troublesome question of when a subsequent call should restart the sub from the beginning or resume it – should that happen when identical arguments are passed? Or when no arguments are passed? Are there any rules about the proximity of the calls in the code? Or does the coroutine state effectively become global state (like with `each` and `pos` in Perl 5)?
When you have an explicit entity representing the continuation, all of these questions resolve themselves at once: all calls to the original routine create a new continuation, and all calls via the state object are resumptions. There is no ambiguity or subtlety to think about. So from the perspective of the caller, I consider the “one-off” closure ideal: the first call yields an object that can be used to resume the call. However, I agree that having to use an extra block inside the routine and return it explicitly is suboptimal. It would be nice if there was a `yield` keyword that not only threw a resumable exception, but also closed over the exception object in a function that, when called, resumes the original function. That way, you get this combination:

    sub pid_file_handler ( $filename ) {
        # ... top half ...
        yield;
        # ... bottom half ...
    }

    sub init_server {
        # ...
        my $write_pid = pid_file_handler( $options<pid_file> );
        become_daemon();
        $write_pid();
        # ...
    }

Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
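The proposed `yield` can be approximated with Python generators (much like the JavaScript 1.7 generators mentioned elsewhere in the thread): the first call runs the top half and hands back an explicit resumer object. All names and the log list are invented for illustration:

```python
# Sketch: the first call runs the top half and returns a resumer;
# calling the resumer runs the bottom half. The continuation is an
# explicit value, so restart-vs-resume is never ambiguous.
def pid_file_handler(filename, log):
    def body():
        log.append(f'top half: open {filename}')   # pre-daemonize work
        yield                                      # suspend here
        log.append('bottom half: write pid')       # post-daemonize work
    gen = body()
    next(gen)                   # run the top half immediately
    def resume():
        # Drive the generator to completion, i.e. run the bottom half.
        for _ in gen:
            pass
    return resume               # the explicit continuation object

log = []
write_pid = pid_file_handler('/var/run/demo.pid', log)
log.append('become_daemon()')
write_pid()
print(log)
```

Calling `pid_file_handler` again would create a fresh continuation, while calling `write_pid` resumes the existing one: exactly the restart/resume distinction the email argues should be explicit.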
Re: Support for ensuring invariants from one loop iteration to the next?
* David Green david.gr...@telus.net [2008-12-18 19:45]: Well, I prefer a built-in counter like that, but I thought the point was that you wanted some kind of block or something that could be syntactically distinct? No, that would only be a means to the end. That end is simply to not repeat myself in writing honest code. And I just realised how to best do that in Perl 5:

    goto INVARIANT;
    while ( @stuff ) {
        $_->do_something( ++$i ) for @stuff;
        INVARIANT: @stuff = grep { $_->valid } @stuff;
    }

I am not sure why this works, to be honest. That is, I don’t know whether it’s an intentional or accidental feature that execution doesn’t merely fall off the end of the loop body after jumping into the middle of it, but loops back to the top, despite not having executed the `while` statement first. But it does work. And it says exactly what it’s supposed to say in the absolutely most straightforward manner possible. The order of execution is crystal clear, the intent behind the loop completely explicit. -- *AUTOLOAD=*_;sub _{s/(.*)::(.*)/print$2,(,$\/, )[defined wantarray]/e;$1} Just-another-Perl-hack; #Aristotle Pagaltzis // http://plasmasturm.org/
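For comparison, the same invariant-first shape can be had without `goto`: establish the invariant once before the loop and re-establish it at the end of each iteration, so the condition always sees it held. A Python sketch with invented stand-in data (the decrement stands in for whatever real work changes validity):

```python
# goto-INVARIANT, restructured for a language without goto: the filter
# runs before the first condition test and again after each body pass.
stuff = [2, 0, 3, 1]
def valid(x):
    return x > 0

processed = []
i = 0

stuff = [x for x in stuff if valid(x)]        # INVARIANT, before first test
while stuff:
    i += 1                                    # one pass of real work
    processed.append(list(stuff))             # stand-in for ->do_something
    stuff = [x - 1 for x in stuff]            # the work changes validity
    stuff = [x for x in stuff if valid(x)]    # INVARIANT re-established
print(i, processed)  # → 3 [[2, 3, 1], [1, 2], [1]]
```

The cost relative to the `goto` version is that the filter line is written twice, which is precisely the repetition the email is trying to avoid.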
Re: Support for ensuring invariants from one loop iteration to the next?
* David Green david.gr...@telus.net [2008-12-16 18:30]: So what if we had LOOP $n {} that executed on the nth iteration? Ugh. Please no. Now you suddenly have this odd new corner of the language with all its very own semantics and you have to figure out how to make it orthogonal enough and when it’s evaluated and a million other details in order to make it actually useful, and in the end you have a bigger language with yet another mechanism bolted on. The way Template Toolkit solves this is far better: the loop body gets access to an iterator object which can be queried for the count of iterations so far and whether this is the first or last iteration. That would result in the following:

    repeat {
        @stuff = grep { .valid }, @stuff;
        ENTER {
            next if $.first;
            .do_something( ++$i ) for @stuff;
        }
    } while @stuff;

And now suddenly you don’t need to spec out a complicated single-purpose mechanism, and you can still realise far more powerful control flows than the single-purpose mechanism could ever hope to provide. Not only that, but you get access to the information as data in the program so you can do many more things with it compared to if it were locked up in the invocation semantics of a closure trait. `FIRST` and `LAST` are justifiable regardless of what other mechanisms are available simply because they’re things you want so frequently. `LOOP n` is just overgeneralisation. Thumbs down. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
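A minimal Python rendering of the skip-the-first-pass idea, with a plain flag standing in for the iterator’s `$.first` (stand-in data and an invented decay step, since the real work that changes validity is elided in the thread):

```python
# NOTFIRST behaviour via an iteration flag: the work step runs on every
# pass except the first, then the invariant is re-established before
# the loop condition is tested.
stuff = [3, 1]
def valid(x):
    return x > 0

log = []
first = True
while True:
    if not first:                             # NOTFIRST: skip exactly once
        for x in stuff:
            log.append(x)                     # stand-in for .do_something
    first = False
    stuff = [x - 1 for x in stuff]            # work that changes validity
    stuff = [x for x in stuff if valid(x)]    # re-establish the invariant
    if not stuff:                             # repeat { ... } while @stuff
        break
print(log)  # → [2, 1]
```

As in the Template Toolkit design, the iteration state is ordinary data in scope, not behaviour locked into a closure trait.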
Re: Files, Directories, Resources, Operating Systems
* Charles Bailey [EMAIL PROTECTED] [2008-12-10 03:15]: It may well be that a fine-grained interface isn't practical, but perhaps there are some basics that we could implement, such as

- set owner of this thing
- (maybe) set group of this thing
- give owner|everyone|?some-group the ability to read from|write to|remove|run this thing
- tell me whether any of these is possible
- make the metadata for this thing the same as the metadata for that thing
- tell me when this thing was created|last updated

There are many problematic suggestions here. Some examples:

• Unix does not track file creation datetime at all.
• The concept of making a file runnable doesn’t even exist on Windows: that property is derived from the filename extension.
• Delete permission on a file is a concept that doesn’t exist on Unix. To be able to delete a file, you instead need write permission on the directory it resides in.

Furthermore, in Win32, files and directories can inherit permissions, so the fact that a file has certain effective permissions does not mean that these permissions are set on the file itself. But if you set them on the file itself, you dissociate it from the inheritance chain. So reading permissions and then setting them the same, without changing anything, can still have unwanted side effects. Or if you try to make the API smart, and so make it set permissions only when they constitute a change from the effective permissions, then conversely the user no longer has a way to dissociate the file from inheritance if that *is* what they wanted. So the concept of inheritance must be exposed explicitly. This is the primary issue I was thinking of when I said that some differences between Win32 and Unix have such pervasive effects that it seems impossible to provide even a rudimentary abstract interface. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Files, Directories, Resources, Operating Systems
* Mark Overmeer [EMAIL PROTECTED] [2008-12-08 21:20]: A pity that we do not focus on the general concept of OS abstraction (knowing that some problems are only partially solvable (at the moment)). Well go on. Explain how you would, f.ex., provide an abstract API over file ownership and access permissions between Win32 and Unix? I don’t see such a thing being possible at all: there are too many differences with pervasive consequences. The most you can reasonably do (AFAICT) is map Win32-style owner/access info to a Unix-style API for reading only. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Files, Directories, Resources, Operating Systems
* Aristotle Pagaltzis [EMAIL PROTECTED] [2008-12-10 01:10]: Well go on. Btw, I just realised that it can be read as sarcastic, which I didn’t intend. I am honestly curious, even if skeptical. I am biased, but I am open to be convinced. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Files, Directories, Resources, Operating Systems
* Mark Overmeer [EMAIL PROTECTED] [2008-12-07 14:20]: So why are you all so hesitant about making each other's life easier? There is no 100% solution, but 0% is even worse! It looks like Python 3000 just tried that. People are not happy about it: http://utcc.utoronto.ca/~cks/space/blog/python/OsListdirProblem Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Files, Directories, Resources, Operating Systems
* Mark Overmeer [EMAIL PROTECTED] [2008-12-07 14:20]: - you have XML-files with meta-data on files which are being distributed. (I have a lot of those) Use URI encoding unless you like a world of pain. You are looking at it from the wrong point of view: Perl is used as a glue language: other people determine what kind of data we have to process. So, also in my case, the content of these XML structures is totally out of my hands: no influence on the definitions at all. I think that is the more common situation. If you start with a broken data format, no amount of papering over it will unbreak it. Sorry, Perl 6 won’t have magic ponies to fix that. Ambiguous data cannot be disambiguated by smart code. If you want to try anyway, talk to someone who didn’t get their name on an IETF RFC out of disgust with the state of an unfixably messy legacy data format. NTFS seems to say it’s all Unicode and comes back as either CP1252 or UTF-16 depending on which API you use, so I guess you could auto-decode those. But FAT is codepage-dependent, and I don’t know if Windows has a good way of distinguishing when you are getting what. So Windows seems marginally more consistent than Unix, but possibly only apparently. (What happens if you zip a file with random binary garbage for a name on Unix and then unzip it on Windows?) I have no idea what other systems do. Well, the nice thing about File::Spec/Class::Path is that someone did know how those systems work and everyone can benefit from it. These modules are completely and utterly oblivious to encoding issues, so I have no idea how they are relevant in the first place. So why are you all so hesitant about making each other's life easier? There is no 100% solution, but 0% is even worse! Because I have seen Java, and it taught me that the 90% solution is worse than the 20% solution. Provide 20% in the language and someone will use that and write Path::Class.
And if we abstain from putting today’s best solutions in the core library, then we have a chance that tomorrow’s best solutions might gain traction. (Otherwise we get 10 years of CGI.pm again.) Once upon a time, Perl people were eager for good DWIMming and powerful programming. And yet it’s the CPAN that turned out to be Perl’s greatest strength. If you suggested the initial concept of the CPAN today, people would laugh at you – it would seem like an April fool’s joke. It didn’t even have a standard package format! Nowadays, I see so much fear in our community to attempt simpler/better/other ways of programming. Simpler in what way? All abstractions leak. Take this into account or make users suffer. We get a brand new language, with a horribly outdated documentation system and very traditional OS approach. As if everyone prefers to stick to Perl's 22-year-old and Unix's 39-year-old choices, where the world around us saw huge development and change in needs. If you can show me a ubiquitous kernel that runs perl and was designed less than 15 years ago, I’ll show you a modern OS API approach. If you want to see an attempt at an abstract interface layered over crusty OS designs, I’ll show you Java. Abstaining from the attractive nuisance of abstracting small-seeming differences away seems to have worked out well enough for DBI, anyway. Would you argue that DBI is not a good or relevant example? (And if so, why?) Or are you suggesting that approach was a failure or horrible in some way? Are we just getting old, grumpy and tired? Where is the new blood to stir us up? Busy designing their own second system. You want to invite a bunch of PHP kids? I’m game. :-) Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
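The URI-encoding advice given above for filenames embedded in XML can be sketched with Python’s standard library: percent-encoding lets arbitrary filename bytes, including non-UTF-8 garbage, survive a round trip through a text format losslessly (the filename here is invented):

```python
from urllib.parse import quote_from_bytes, unquote_to_bytes

# Arbitrary filename bytes: valid UTF-8 plus one byte (\xff) that is
# not valid in any UTF-8 sequence.
raw = b'r\xc3\xa9sum\xc3\xa9 \xff.txt'

encoded = quote_from_bytes(raw)   # safe ASCII, embeddable in XML
print(encoded)                    # → r%C3%A9sum%C3%A9%20%FF.txt

# Lossless round trip: the original bytes come back exactly.
assert unquote_to_bytes(encoded) == raw
```

The point is that the encoded form is plain ASCII, so the XML layer never has to answer the unanswerable question of what encoding the filename “really” is in.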
Re: Support for ensuring invariants from one loop iteration to the next?
* David Green [EMAIL PROTECTED] [2008-12-05 15:45]: I tried to break down the reasons for wanting to write such loops different ways: 1) Simple code […] 2) Clear code […] 3) Self-documenting code […] Yes, exactly right. What we need is something a bit like the continue block, except that it gets executed before the loop-condition is checked the first time. So what if we split the loop-body into two parts?

    repeat {
        something();
    } while ( condition(); ) {
        something_else();
    }

Now the condition is in the middle and is syntactically separate. (It's still not up front, but if the first block is really long, you can always... add a comment!) I actually don’t think putting the condition up front is the most desirable constellation. I just want the condition syntactically highlighted and separated from both the loop body and the invariant enforcement. In fact, it seems more desirable to have the invariant enforcement up top because the order of the code then corresponds to the order of evaluation. That is the reason I wasn’t quite happy with its being rendered as a closure trait. Funnily enough, I think you’re onto something here that you didn’t even notice: the following has the right semantics, apart from the fact that it doesn’t perform any work:

    repeat {
        @stuff = grep { .valid }, @stuff;
    } while @stuff;

Now if we had a NOTFIRST (which would run before ENTER just as FIRST does, but on *every* iteration *except* the first), then we could trivially attain the correct semantics and achieve all desired results:

    repeat {
        @stuff = grep { .valid }, @stuff;
        NOTFIRST {
            .do_something( ++$i ) for @stuff;
        }
    } while @stuff;

The really nice thing about this is that the blocks are nested, so that any variable in scope for the invariant enforcement will also be in scope in the NOTFIRST block without the user ever having to arrange another enclosing scope.
* David Green [EMAIL PROTECTED] [2008-12-05 16:50]: Well, you don't need a comment -- why not allow the condition to come first? repeat while ( condition(); ) { something(); }, { something_else(); } You need the comma there because the final semicolon is optional, and we don't want Perl to think it's an ordinary loop followed by an independent block. Probably better is to name the introductory block, and then programmers as well as compilers know that something unusual is going on: repeat while (condition) preamble { something } { something_else } What I don’t like about these solutions is: how do you indent them? If you try multiple statements on multiple lines within the blocks, then suddenly there is no good and natural indentation style for at least one of the blocks. Also, you are supposed to be able to leave the parentheses off the `while` condition in Perl 6, and then it breaks down visually, particularly if you throw an arrow in there. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: how to write literals of some Perl 6 types?
* TSa [EMAIL PROTECTED] [2008-12-03 09:30]: And I want to pose the question if we really need two types Bool and Bit. I think so. Binary OR and logical OR are different beasts. As Duncan said, the real question is what’s the point of having Bit when we also have both Int and Blob. I think none. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Files, Directories, Resources, Operating Systems
* Tom Christiansen [EMAIL PROTECTED] [2008-11-27 11:30]: In-Reply-To: Message from Darren Duncan [EMAIL PROTECTED] of Wed, 26 Nov 2008 19:34:09 PST. I believe that the most important issues here, those having to do with identity, can be discussed and solved without unduly worrying about matters of collation; It's funny you should say that, as I could nearly swear that I just showed that identity cannot be determined in the examples above without knowing about locales. To wit, while all of those sort somewhat differently, even case-insensitively, no matter whether you're thinking of a French or a Spanish ordering (and what is English's, anyway?), you have a more fundamental = vs != scenario which is entirely locale-dependent. If I can make a RESUME file, ought I be able to make a distinct r\x{E9}sum\x{E9} or re\x{301}sume\x{301} file in a case-ignorant filesystem? That’s for the file system to know, not Perl 6. Trying to unify this in any way on the side of Perl is, in my regard, a fool’s errand. If the file system is case insensitive, then it will make the call in whatever way it deems correct, and it’s not for us to worry about all the possible ways in which all possible current and future file systems might answer such questions. Furthermore, from the point of view of the OS, even treating file names as opaque binary blobs is actually fine! Programs don’t care after all. In fact, no problem shows up until the point where you try to show filenames to a user; that is when the headaches start, not any sooner. To that, the right solution is simply not to roundtrip filenames through the user interface; instead, keep both the original octet sequence as well as the decoded version, and use the decoded version in UI but refer back to the pristine original when the user elects, via UI, to operate on that file. 
As far as I am concerned, if Perl 6 has a distinction between octet strings and character strings, then all that’s required is to have filenames returned from OS APIs come back as octet strings, keeping the programmer from forgetting to deal with decoding issues. The higher-level problems like sorting names in a locale-aware fashion will be solved by the CPAN collective much better than any boil-the-ocean abstract interface design that the Perl 6 cabal would produce – if indeed these are real problems at all in practice. All that’s necessary is to design the interface such that it won’t obstruct subsequent “userland” solution approaches. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
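[Editor's note: Python 3 later implemented essentially this design, which makes it a convenient illustration: OS APIs return octet strings when asked to (`os.listdir(b'.')` really does return `bytes` names), and the `surrogateescape` error handler yields a display form that maps losslessly back to the pristine octets. A sketch; the file name is just an example:]

```python
# A file name as raw octets: Latin-1 "résumé", which is not valid UTF-8.
raw = b'r\xe9sum\xe9'

# Decoding for display never fails: undecodable bytes become lone
# surrogates (U+DC80..U+DCFF) rather than raising an error.
shown = raw.decode('utf-8', 'surrogateescape')
assert '\udce9' in shown   # the two 0xE9 octets survived as surrogates

# The pristine octet sequence round-trips back for actual file operations:
assert shown.encode('utf-8', 'surrogateescape') == raw
```

The same principle applies at the API boundary: hand the programmer octets, keep decoding an explicit, UI-facing concern, and never lose the original bytes.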
Re: Support for ensuring invariants from one loop iteration to the next?
* [EMAIL PROTECTED] [EMAIL PROTECTED] [2008-12-03 21:45]: loop { doSomething(); next if someCondition(); doSomethingElse(); } I specifically said that I was aware of this solution and that I am dissatisfied with it. Did you read my mail? * Jon Lang [EMAIL PROTECTED] [2008-12-03 20:10]: Aristotle Pagaltzis wrote: * Bruce Gray [EMAIL PROTECTED] [2008-12-03 18:20]: In Perl 5 or Perl 6, why not move the grep() into the while()? Because it's only a figurative example and you're supposed to consider the general problem, not nitpick the specific example… But how is that not a general solution? You wanted something where you only have to set the test conditions in one place; what's wrong with that one place being inside the while()? Because readability suffers immensely when ensuring the invariant takes more than a single short expression. You have to break the loop condition out over several indented lines. Not pretty. * Eirik Berg Hanssen [EMAIL PROTECTED] [2008-12-03 22:30]: I think Perl 5 will always allow: while ( doSomething(), someCondition() ) { doSomethingElse(); } I also think Perl 6 will always allow: while ( doSomething(); someCondition() ) { doSomethingElse(); } ... but don't quote me on that. Unless I'm right. ;-) The Perl 6 version of the two is more bearable, because neither do you need a `do{}` bracket nor do comma precedence issues force you to strew parens over the expressions. But as I wrote above, this breaks down as soon as you need to do a non-trivial amount of work to ensure the invariant. * Jon Lang [EMAIL PROTECTED] [2008-12-03 22:05]: I suspect that the difficulty with the while(1) version was the kludgey syntax; the loop syntax that you describe does the same thing (i.e., putting the test in the middle of the loop block instead of at the start or end of it), but in a much more elegant manner. The only thing that it doesn't do that a more traditional loop construct manages is to make the loop condition stand out visually. 
There’s no real difference between `while(1)` and `loop` to me. I don’t like C’s `for(;;)`, but both the Perl 5 and 6 idioms are equally fine with me. The problem I have is that the number of iterations is not indeterminate; there is a set amount of work to be completed, whereupon the loop will terminate. Contrast to the event loop in a GUI, f.ex., where the termination of the loop is an exceptional event, and the loop runs for as long as the app is running. This is much like being able to have statement modifier forms of conditionals and loops: I want to put emphasis on what matters. When I see `while ( @stuff )` that means to me that `@stuff` is expected to run out as a consequence of the loop body operating on it. When I say `while (1)` I generally intend to say that I don’t expect the loop to terminate any time soon, although of course some uncommon condition might require termination. * David Green [EMAIL PROTECTED] [2008-12-03 22:00]: On 2008-Dec-3, at 12:38 pm, Mark J. Reed wrote: I think the cleanest solution is the “coy” one. Me too. I don't think having the condition in the middle of the block is necessarily a bad thing -- that's how the logic is actually working, after all. Fake conditions like while(1) are kind of ugly, but P6 has loop, and you can always make it stand out more: loop { doSomething(); #CHECK OUR LOOP CONDITION! last unless someCondition; doSomethingElse(); } See above. When each iteration of the loop reduces some finite quantity, I want to use a check for that quantity as the loop condition, to point out that this is the purpose of the loop: to finish a particular pile of work and terminate. This is in contrast to a loop which reacts to an infinite stream of input of whatever sort. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Support for ensuring invariants from one loop iteration to the next?
* Mark J. Reed [EMAIL PROTECTED] [2008-12-03 20:30]: OK, so let's look at the general problem. The structure is this: doSomething(); while (someCondition()) { doSomethingElse(); doSomething(); } ...and you want to factor out the doSomething() call so that it only has to be specified once. Is that correct, Aristotle? Yes. The gotcha is that the first doSomething() is unconditional, while the first doSomethingElse() should only happen if the loop condition is met (which means just moving the test to the end of the block doesn't solve the problem). Exactly. Overall, the goal is to ensure that by the end of the loop the program is in the state of having just called doSomething(), whether the loop runs or not - while also ensuring that the program is in that state at the top of each loop iteration. It’s not a goal in itself. It’s just a necessity: you cannot test the loop condition without ensuring the invariant, so whether or not the loop runs is irrelevant; you have to run it once before you can know whether the loop will run at all. It does seem like a closure trait sort of thing, but I don't think it's currently provided by the p6 spec. I don’t see anything suitable there either. And while it does seem like a closure trait, that seems somewhat problematic in that the order of evaluation is weird when compared to other closure traits, which I suppose is what led you to declare the “coy” solution as the most natural. I am trying to think of a good block structure to capture these semantics spatially and not currently coming up with anything very good. * Mark J. Reed [EMAIL PROTECTED] [2008-12-03 20:40]: We can guarantee it's set at the top of each loop iteration with ENTER, but that doesn't get run if the loop never runs. We can guarantee it's set at the end of the loop with LAST, but that also doesn't get run if the loop never runs, and doesn't take care of the first iteration. 
For the purposes of this particular problem, there isn’t even much difference between those two. * Patrick R. Michaud [EMAIL PROTECTED] [2008-12-03 21:10]: Perhaps PRE ... ? while (someCondition()) { PRE { doSomething(); } doSomethingElse(); } The problem is, doSomething() has to be run *prior* to *any* loop condition check – including the very first. PRE (or ENTER) can’t do that. Or, if you wanted to be sure that doSomething() is always called at least once: repeat { PRE { doSomething(); } doSomethingElse(); } while someCondition(); That will run doSomething() once unconditionally. That’s not what I’m after. -- *AUTOLOAD=*_;sub _{s/(.*)::(.*)/print$2,(,$\/, )[defined wantarray]/e;$1} Just-another-Perl-hack; #Aristotle Pagaltzis // http://plasmasturm.org/
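[Editor's note: in the absence of a dedicated phaser, the structure Mark describes — an unconditional doSomething() up front, repeated after every doSomethingElse() — factors cleanly into a higher-order helper. A sketch in Python with hypothetical names, using the thread's @stuff example:]

```python
def loop_with_invariant(ensure, cond, body):
    """Run `ensure` once up front and again after every `body`,
    so `cond` is always evaluated with the invariant holding."""
    ensure()
    while cond():
        body()
        ensure()

# Model: each element is a count of remaining work units; the
# invariant is "only unfinished elements are in the list".
stuff = [0, 2, 1]

def ensure():
    stuff[:] = [x for x in stuff if x > 0]   # drop finished elements

def body():
    stuff[:] = [x - 1 for x in stuff]        # do one unit of work each

loop_with_invariant(ensure, lambda: bool(stuff), body)
assert stuff == []                           # the pile of work ran out
```

The invariant enforcement is written exactly once, and the loop condition is a plain, honest `while` test — the two properties the thread is trying to reconcile.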
Re: Support for ensuring invariants from one loop iteration to the next?
* David Green [EMAIL PROTECTED] [2008-12-03 22:00]: FIRST{} can do something on only the first iteration through the loop, but there's no NOT-FIRST block to do something on the second and subsequent iterations. Is there an elegant way to do something on all but the first loop? Not with a closure trait and without a flag, which I guess does not count as elegant. In Template Toolkit this is nice insofar as that the loop iterator is available as an object in a variable, so you can say IF loop.first ; ... ; END ; but equally IF NOT loop.first ; ... ; END ; and similarly you can say IF NOT loop.last ; ... ; END ; to do something on all iterations but the ultimate. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
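[Editor's note: the flag-free "all but the first iteration" pattern is trivial in any language with an explicit loop counter; the classic use is emitting separators. A small Python sketch of the Template Toolkit idiom quoted above:]

```python
items = ['a', 'b', 'c']
parts = []
for i, item in enumerate(items):
    if i > 0:                  # Template Toolkit's "IF NOT loop.first"
        parts.append(', ')
    parts.append(item)
assert ''.join(parts) == 'a, b, c'

# "IF NOT loop.last" is the mirror image: act on every element but the
# final one, e.g. by iterating over items[:-1].
```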
Re: Files, Directories, Resources, Operating Systems
* Mark Overmeer [EMAIL PROTECTED] [2008-12-04 16:50]: * Aristotle Pagaltzis ([EMAIL PROTECTED]) [081204 14:38]: Furthermore, from the point of view of the OS, even treating file names as opaque binary blobs is actually fine! Programs don’t care after all. In fact, no problem shows up until the point where you try to show filenames to a user; that is when the headaches start, not any sooner. So, they start when - you have users pick filenames (with Tk) for a graphical application. You have to know the right codeset to be able to display them correctly. Yes, but you can afford imperfection because presumably you know which displayed filename corresponds to which stored octet sequence, so even if the name displays incorrectly, you still operate on the right file if the user picks it. - you have XML-files with meta-data on files which are being distributed. (I have a lot of those) Use URI encoding unless you like a world of pain. - when you start doing path manipulation on (UTF-16) blobs, and so forth. I have been fighting these problems for a long time, and they worry me more and more because we see Unicode being introduced on the OS-level. The mess is growing by the day. And all we can do is to avoid making it even bigger. Because the only ones in control here are the OS vendors, and they aren’t solving it, only making it bigger. The only thing *we* can do is not to erect obstacles that users will have to work around when our abstractions invariably leak. I am unconvinced that this problem actually yields to abstraction. All the really hard problems in computing are the ones that intersect with human culture – text in any form, and dates and times. When computers deal with mathematical entities, few problems are even hard, let alone insurmountable; you only need to work at them long enough. Human concepts are not like that, they are messy and inconsistent. 
To that, the right solution is simply not to roundtrip filenames through the user interface; instead, keep both the original octet sequence as well as the decoded version, and use the decoded version in UI but refer back to the pristine original when the user elects, via UI, to operate on that file. But now you simply say decode it. But to be able to decode it, you must know which charset it is in in the first place. So: where do we start guessing? An educated guess at OS level, or on each user program again? I am not advocating educated guesses. The mechanism would be whatever interfaces the system provides. Unix does not have any, so you can indeed only ever guess, but if the system can give you something better, that should be used. NTFS seems to say it’s all Unicode and comes back as either CP1252 or UTF-16 depending on which API you use, so I guess you could auto-decode those. But FAT is codepage-dependent, and I don’t know if Windows has a good way of distinguishing when you are getting what. So Windows seems marginally more consistent than Unix, but possibly only apparently. (What happens if you zip a file with random binary garbage for a name on Unix and then unzip it on Windows?) I have no idea what other systems do. But there is no common denominator, so pretending there is one is not going to help. The higher-level problems like sorting names in a locale-aware fashion will be solved by the CPAN collective much better than any boil-the-ocean abstract interface design that the Perl 6 cabal would produce – if indeed these are real problems at all in practice. Why? Are CPAN programmers smarter than Perl6 Cabal people? Of course! There are many more CPAN programmers than cabalists; some of them are bound to have much greater expertise in some relevant area of this problem than anyone in the cabal. Even those who aren’t that smart will have direct access to and specific knowledge of the system they are dealing with, that the cabal may never even hear about. 
What I would like to be designed is an object model for OS, processes, directories, and files. We will not be able to solve all problems for each OS. Maybe people need to install additional CPAN modules to get smarter behavior. But I would really welcome it if platform independent coding is the default behavior, without need for File::Spec, Path::Class and such. Ugh. I understand the desire, but it is very easy to get into architecture astronautics. I think we should follow the DBI approach and not try to provide a unified interface to system-specific things like permissions and ownership: unify the most general notions of filesystems but leave all the specifics to be dealt with by user code in the concrete. That is the only place where the amount of acceptable abstraction can be decided. Cf. writing apps that run on all of PostgreSQL, MySQL and Oracle vs those that take advantage of specific DBMS features: this is a decision that the programmer has to make, it is not one we can make on his behalf. Regards
Support for ensuring invariants from one loop iteration to the next?
Hi all, I occasionally find myself annoyed at having to do something like this (I use Perl 5 vernacular, but it actually crops up in every single language I have ever used): my $i; @stuff = grep !$_->valid, @stuff; while ( @stuff ) { $_->do_something( ++$i ) for @stuff; @stuff = grep !$_->valid, @stuff; } Here, both the `while` condition and the `for` iteration assume that @stuff will contain only valid elements. Since I don’t know whether this is initially the case, I have to repeat the statement both before the loop and at its bottom. There is no good way to rearrange this in the general case. The only way to improve it at all is some variant on this: my $i; while (1) { @stuff = grep !$_->valid, @stuff; last if not @stuff; $_->do_something( ++$i ) for @stuff; } Here I am forced to give up the formal loop conditional and bury the termination condition somewhere in the middle of the loop body. The code doesn’t exactly lie now, but it’s more coy about its intent than necessary. Does Perl 6 have some mechanism so I could write it along the following obvious lines? my $i; while ( @stuff ) { $_->do_something( ++$i ) for @stuff; } # plus some way of attaching this fix-up just once { @stuff = grep !$_->valid, @stuff } Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Support for ensuring invariants from one loop iteration to the next?
* Bruce Gray [EMAIL PROTECTED] [2008-12-03 18:20]: In Perl 5 or Perl 6, why not move the grep() into the while()? Because it’s only a figurative example and you’re supposed to consider the general problem, not nitpick the specific example… Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: S16: chown, chmod
* Brandon S. Allbery KF8NH [EMAIL PROTECTED] [2008-11-25 07:25]: OTOH Perl has historically not said much about doing that kind of thing. And I’m not in favour of it starting now. All I am saying is that APIs should be designed to encourage correct designs; arguably this is the spirit of Perl 6, which says TMTOWTDI yet tries to provide one good default way of doing any particular thing. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: S16: chown, chmod
* dpuu [EMAIL PROTECTED] [2008-11-21 19:00]: The definition of C<chown> includes the statement that it's not available on most systems unless you're superuser; and this can be checked using a POSIX incantation. I was wondering if it would be reasonable to provide this as a method on the chown function, so that a user could say: if chown.is_restricted { ... } else { chown $user, $group <== @files } As has been mentioned by others this is a bad idea. All atomic operations on filesystems boil down to attempting an operation and dealing with the fallout if it fails. Attempting to check whether an operation will succeed prior to attempting it almost invariably leads to broken code. (Worse: subtly broken code, in most cases.) The API you propose does not seem to me to shorten code at all and is likely to lead to problematic code, so it seems like a bad idea. Interfaces should be designed to encourage people to do things correctly and to make it hard to even think about the nearly certainly wrong way. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
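[Editor's note: the attempt-and-handle pattern argued for here looks like this in practice. A sketch in Python on a Unix-like system; `try_chown` is a hypothetical helper invented for illustration, not an API from the thread:]

```python
import errno
import os

def try_chown(path, uid, gid):
    """Attempt the operation and deal with the fallout, rather than
    asking a racy 'would this succeed?' question beforehand."""
    try:
        os.chown(path, uid, gid)   # Unix-only call
        return True
    except OSError as e:
        if e.errno in (errno.EPERM, errno.ENOENT):
            return False           # expected failures: not permitted, or gone
        raise                      # anything else is a genuine surprise

# A check like `if chown.is_restricted` proves nothing: between the
# check and the call, permissions can change, the file can vanish, the
# filesystem can be remounted read-only, and so on.
assert try_chown('/no/such/file/exists/here', 0, 0) is False
```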
Re: S16: chown, chmod
* Larry Wall [EMAIL PROTECTED] [2008-11-21 23:55]: And you could even do it in parallel: my @status = hyper map { .io.chmod($mode) }, @files though it's possible your sysadmin will complain about what you're doing with the disk drive heads. :) Actually I/O subsystems are smart enough these days that this might be a sensible thing to do. It would distribute to multiple disks without much affecting the single-disk case. Over on the git list, Linus Torvalds just implemented naïvely threaded `stat`ing for certain cases in `git log` and `git diff`, and people with git repositories on NFS filesystems reported 3–6-fold speedups of these commands, without the local case being adversely affected. /offtopic-diversion Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: S16: chown, chmod
* dpuu [EMAIL PROTECTED] [2008-11-24 00:40]: I agree that the specific example of chown.is_restricted is a bad idea, but only because the POSIX API I was wrapping is itself flawed. It is not flawed in the least, as far as the aspect we are talking about is concerned. (It is generally sane in far more ways than it is flawed.) In general I would continue to pursue the approach of adding precondition-checks as methods on functions, a concept that is orthogonal to the specific example you are arguing against In general? Sure. But for filesystem operations? Bad idea. I disagree that the code I showed is not simpler than the original S16 approach. I don’t see any examples in S16 concerning error handling anyway, but even so I don’t see how relying on exceptions could possibly be more complex than guard clauses. You *still* have to handle errors after the guard clause passes and the operation is attempted. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Store captures and non-captures in source-string order
* Larry Wall [EMAIL PROTECTED] [2008-10-13 19:00]: Maybe we're looking at a generalized tree query language That’s an intriguing observation. Another case for having some XPath-ish facility in the language? Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Should $.foo attributes without is rw be writable from within the class
* Damian Conway [EMAIL PROTECTED] [2008-09-18 03:30]: When thinking about this, it's also important to remember that, in Perl 6, not everything with a sigil is automatically writeable. That’s not even new to Perl 6. $ perl -e's/foo/bar/ for "foo"' Modification of a read-only value attempted at -e line 1. -- *AUTOLOAD=*_;sub _{s/(.*)::(.*)/print$2,(,$\/, )[defined wantarray]/e;$1} Just-another-Perl-hack; #Aristotle Pagaltzis // http://plasmasturm.org/
Re: Should $.foo attributes without is rw be writable from within the class
* Carl Mäsak [EMAIL PROTECTED] [2008-09-18 12:20]: 2. Start using $!foo consistently in methods, for both read and write accesses. Unless, of course, you want the class-internal use of the attribute to go through its accessor! Which you are likely to want for public attributes, and much less likely for class- private ones. So Perl 6 defaults the right thing here, it would seem. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: What should +:21a produce?
* Patrick R. Michaud [EMAIL PROTECTED] [2008-09-15 02:25]: So, I'm wondering what happens in the string-to-number case if there happen to be characters within the angles that are not valid digits for the given radix. A similar question holds for calling radix converters as functions Since the radix specifier/function is followed by some variety of circumfix, it feels like an assertion that everything within brackets is a single entity – unlike something undelimited like `0b010ax`, say. So I’d expect it to be like however it is that `say +'oops'` behaves, which is probably to say `0`. Although it would be useful if this were an interesting kind of 0 that knows it came from a parse error (and maybe even which radix was asserted). Larry? Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Iterator semantics
* Larry Wall [EMAIL PROTECTED] [2008-09-11 21:20]: As a first shot at that definition, I'll submit: 1 .. $n # easy 1 .. * # hard On the other hand, I can argue that if the first expression is easy, then the first $n elements of 1..* should also be considered easy, and it's not hard till you try to get to the *. :) It could also be that I'm confusing things here, of course, and that 1..* is something easy and immutable that nevertheless cannot be calculated eagerly. More to think about... In some sense, from the reactive programming perspective `1..*` is actually easier than `1..$n`. For the latter’s iterator the answer to “do you have another element” implies a conditional somewhere, whereas for the former’s it’s trivially “yes.” Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
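[Editor's note: the point that `1..*` is in one sense *easier* can be made concrete with iterators: the infinite range needs no termination test anywhere, while the finite range must check for exhaustion on every step. A sketch in Python:]

```python
import itertools

def finite(n):
    """1 .. $n — each step must check whether the range is exhausted."""
    i = 1
    while i <= n:
        yield i
        i += 1

# 1 .. * — "do you have another element?" is unconditionally yes.
infinite = itertools.count(1)

assert list(finite(3)) == [1, 2, 3]
# ...and taking the first $n elements of 1..* is just as easy as 1..$n:
assert list(itertools.islice(infinite, 3)) == [1, 2, 3]
```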
The False Cognate problem and what Roles are still missing
Hi $Larry et al, I brought this up as a question at YAPC::EU and found to my surprise that no one seems to have thought of it yet. This is the mail I said I’d write. (And apologies, Larry. :-) ) Consider the classic example of roles named Dog and Tree which both have a `bark` method. Then there is a class that, for some inexplicable reason, assumes both roles. Maybe it is called Mutant. This is standard fare so far: the class resolves the conflict by renaming Dog’s `bark` to `yap` and all is well. But now consider that Dog has a method `on_sound_heard` that calls `bark`. You clearly don’t want that to suddenly call Tree’s `bark`. Unless, of course, you actually do. It therefore seems necessary to me to specify dispatch such that method calls in the Dog role invoke the original Dog role methods where such methods exist. There also needs to be a way for a class that assumes a role to explicitly declare that it wants to override that decision. Thus, by default, when you say that Mutant does both Dog and Tree, Dog’s methods do not silently mutate their semantics. You can cause them to do so, but you should have to ask for that. I am, as I mentioned initially, surprised that no one seems to have considered this issue, because I always thought this is what avoiding the False Cognate problem of mixins, as chromatic likes to call it, ultimately implies at the deepest level: that roles provide scope for their innards that preserves their identity and integrity (unless, of course, you explicitly stick your hands in), kind of like the safety that hygienic macros provide. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
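[Editor's note: the dispatch problem is easy to demonstrate in any language where composed methods resolve late. A sketch in Python, using the class names from the example; the class-composition mechanics here are only an analogy for role flattening, not Perl 6 semantics:]

```python
class Dog:
    def bark(self):
        return 'Woof!'
    def on_sound_heard(self):
        # Late-bound call: resolves against whatever `bark`
        # the composed class ends up with.
        return self.bark()

class Tree:
    def bark(self):
        return 'rough and brown'

class Mutant(Dog, Tree):
    yap = Dog.bark     # the conflict "resolved": Dog's bark renamed...
    bark = Tree.bark   # ...and bark now means Tree's bark

# The False Cognate problem: Dog's own helper silently calls Tree's bark.
assert Mutant().on_sound_heard() == 'rough and brown'

class HygienicDog(Dog):
    def on_sound_heard(self):
        # What role-preserving dispatch would give by default:
        # role-internal calls bind to the role's own method.
        return Dog.bark(self)

class FixedMutant(HygienicDog, Tree):
    yap = Dog.bark
    bark = Tree.bark

assert FixedMutant().on_sound_heard() == 'Woof!'       # Dog stays a Dog
assert FixedMutant().bark() == 'rough and brown'       # callers get Tree
```

In the proposal above, the `HygienicDog` behavior would be the default, and the `Mutant` behavior something a class must explicitly ask for.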
Re: Allowing '-' in identifiers: what's the motivation?
* Peter Scott [EMAIL PROTECTED] [2008-08-13 19:20]: If we allow operator symbols in identifiers then the world will divide into those people who look at Perl 6 programs only through syntax-highlighting editors and don't know what all the fuss is about naming a variable $e*trade since it is all purple, and those people who give up on reading the other people's programs. False dilemma. See Bob Rogers’ mail in this thread; some languages already allow all these symbols and the net effect is zero, because they take more work to type and people are lazy. That said, I really *really* like the idea of embedded dashes in identifiers (not least because underscores offend my amateur typophile self), but the idea of being able to embed other operator-ish symbols in identifiers leaves me utterly cold. I strongly doubt that if they are put in, it’ll cause the end of Perl 6, as you argue, but I also don’t care at all about whether they are allowed. I’m not going to use them anyway. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Allowing '-' in identifiers: what's the motivation?
* John M. Dlugosz [EMAIL PROTECTED] [2008-08-11 06:25]: I do agree that it may be better for multi-word identifiers than camel case or underscores, as seen in many other languages that the great unwashed masses have never heard of. XML and the stack of related technologies also do this (in particular, variables and functions in XPath), which is where I first encountered identifiers with dashes. I have been wishing I could have them in mainstream languages ever since. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Allowing '-' in identifiers: what's the motivation?
* Michael Mangelsdorf [EMAIL PROTECTED] [2008-08-11 20:25]: Unicode guillemets for hyper ops? Unicode? I don’t know about your ISO-8859-1, but mine has guillemets. :-) Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: [svn:perl6-synopsis] r14574 - doc/trunk/design/syn
* Larry Wall [EMAIL PROTECTED] [2008-08-08 19:45]: q'foo is now a valid identifier. Qa tlho', Larry. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Rakudo test miscellanea
* Mark J. Reed [EMAIL PROTECTED] [2008-06-26 20:20]: On Thu, Jun 26, 2008 at 1:31 PM, [EMAIL PROTECTED] wrote: Most financial institutions don't use float, rational or fixed point, they just keep integer pennies. I'm not so sure about that. There are lots of financial transactions that deal in sub-$0.01 fractions: taxes, currency conversion, brokerage stuff... They use decimal fixed point math where necessary. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Rakudo test miscellanea
* Aristotle Pagaltzis [EMAIL PROTECTED] [2008-06-29 02:05]: [repeat of statements made days ago] Sorry, I was only just catching up and didn’t notice this orphan subthread had siblings, where the point was already covered. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: non blocking pipe
* Spocchio [EMAIL PROTECTED] [2008-03-23 19:40]: i'm writing a gui tool, I need to open a non blocking pipe in read mode, to avoid the block of the gui when the stream become slow.. 1. Wrong list. http://mail.gnome.org/mailman/listinfo/gtk-perl-list 2. Your question is a Gtk2-Perl FAQ. http://tinyurl.com/2uk5m5#head-20b1c1d3a92f0c61515cb88d15e06b686eba6cbc Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: local $@ has an unwanted side effect
* Rafael Garcia-Suarez [EMAIL PROTECTED] [2008-03-23 00:15]: I agree that the behaviour of $@ is very hard to modify right now in Perl 5. It has many complications and many people have worked around features or misfeatures in many ways. Introducing a parallel system might work. What does Perl 6 do in that respect? Maybe semantics could be borrowed from there? Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Musings on operator overloading
* TSa [EMAIL PROTECTED] [2008-03-19 16:00]: Aristotle Pagaltzis wrote: Something like path { $app_base_dir / $conf_dir / $foo_cfg . $cfg_ext } where the operators in that scope are overloaded irrespective of the types of the variables (be they plain scalar strings, instances of a certain class, or whatever). Assuming there are Path, Extension, Directory and File types, the better approach would be in my eyes to overload string concatenation. That presumes you will never want non-path concatenation semantics on filesystem path values. Then how do I write `die "Couldn't open $abspath: $!\n"`? When you leave the broader domain of mathematical and “para-”mathematical abstractions behind and start to define things like division on arbitrary object types that model aspects of domains which have nothing even resembling such concepts, you’re rapidly moving into the territory of obfuscation. Indeed mathematics is all about distilling abstract properties from problem domains and then proving abstract theorems. That means when applying mathematics you only have to meet the preconditions and get all consequences for free. But overloading a symbol that means product with something that does not adhere to the algebraic properties of products is a bad choice. Which was my point. Note that even with mathematical abstractions, there are cases where scope-bound overloading is a win over type-bound overloading. Consider a hypothetical Math::Symbolic that lets you do something like this: my $x = Math::Symbolic->new(); print +( $x**2 + 4 * $x + 3 )->derivative( $x ); I hope it’s obvious how such a thing would be implemented. No, I find that far from obvious. Time to read up on operator overloading then. :-) When I say “hypothetical” I mean merely in the sense that “it doesn’t exist on the CPAN in this form” – I did not imply that this would require anything outside the currently available semantics. 
Kragen Sitaker wrote that precise library in Python 2.x; it could easily be recreated in Perl 5. Your derivative method should have type :(Code --> Code) No. There are no code blocks or parse trees anywhere in sight, only an expression representation built up by the Math::Symbolic object internally as its methods for the various overloaded operations were called. Now, if you used type-bound overloading, then the following two expressions cannot yield the same result: ( 2 / 3 ) * $x 2 * $x / 3 But if overloading was scope-bound, they would! First of all I would allow the optimizer to convert the latter into the former unconditionally. […] Hmm, thinking twice, the above optimization is admissible only if multiplication is commutative irrespective of the type of $x. Exactly. I think your statement only makes sense when the polymorphism on the type of $x is dropped in scope-bound overloading. That’s what I was talking about all along: a way to overload operators monomorphically for the duration of a scope. Operators in Perl are monomorphic, as a rule of thumb. String concatenation requires that you use the string concatenation operator, not the addition operator overloaded by the string type to mean concatenation. So I’d like directory-scope path qualification to have its own operator; what the `path {}` syntax does is override the operator bound to the slash and dot symbols for the duration of the block, regardless of the type of operands they operate on. In other words $x is then always converted into the suitable form. Yes. That’s exactly how Perl already works. You say `$x + $y`, it converts $x and $y into numbers somehow. You say `$foo eq $bar`, it stringifies $foo and $bar in whichever way it can. You say `if ( $quux )` and it boolifies $quux in whatever way it knows to do so. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
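[Editor's note: since the mail points at a Python precedent anyway, here is a minimal sketch of such a library: overloaded operators build an expression tree (no code blocks or parse trees in sight), and differentiation walks that tree. All class and method names are invented for illustration:]

```python
class Expr:
    """Expression node; operators build a tree instead of computing."""
    def __add__(self, o): return Add(self, lift(o))
    __radd__ = __add__            # addition is commutative here
    def __mul__(self, o): return Mul(self, lift(o))
    __rmul__ = __mul__            # so is multiplication
    def __pow__(self, n): return Pow(self, n)

def lift(v):
    return v if isinstance(v, Expr) else Const(v)

class Const(Expr):
    def __init__(self, v): self.v = v
    def eval(self, x): return self.v
    def deriv(self): return Const(0)

class Var(Expr):
    def eval(self, x): return x
    def deriv(self): return Const(1)

class Add(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def eval(self, x): return self.a.eval(x) + self.b.eval(x)
    def deriv(self): return Add(self.a.deriv(), self.b.deriv())

class Mul(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def eval(self, x): return self.a.eval(x) * self.b.eval(x)
    def deriv(self):              # product rule
        return Add(Mul(self.a.deriv(), self.b), Mul(self.a, self.b.deriv()))

class Pow(Expr):
    def __init__(self, a, n): self.a, self.n = a, n
    def eval(self, x): return self.a.eval(x) ** self.n
    def deriv(self):              # d/dx a**n = n * a**(n-1) * a'
        return Mul(Mul(Const(self.n), Pow(self.a, self.n - 1)),
                   self.a.deriv())

x = Var()
f = x**2 + 4 * x + 3              # ordinary operators, no special syntax
df = f.deriv()                    # symbolically: 2x + 4
assert [df.eval(v) for v in (0, 1, 5)] == [4, 6, 14]
```

This makes the mail's claim concrete: `derivative` takes no code, only the tree the overloaded operators recorded, so its type is nothing like `:(Code --> Code)`.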
Re: Musings on operator overloading
* Mark J. Reed [EMAIL PROTECTED] [2008-03-19 20:45]: Maybe it's just 'cause I cut my teeth on BASIC, but + for string concatenation has always felt pretty natural. Obviously it won't work in Perl where you are using the operator to determine how to treat the operands. At first blush I find it more readily readable than `&` or `||`, or even `.`. Personally, after getting thoroughly used to Perl, I always have a vague feeling of unease in languages where I need to use + to concatenate. It makes the meaning of the statement dependent on the types of any variables, which is information that a reader won’t necessarily find in close vicinity of the statement. And that’s exactly what led me to the idea I proposed in the initial mail in this thread – changing the meaning of an operator from one monomorphic operation to another within a lexical scope. Hopefully, the new meaning is somewhat related to the original - a sort of operator metonymy - but if the context is sufficiently different, that's not a requirement. Again, nobody's going to think you're dividing pathnames. Strongly disagree. If context was sufficient to help the reader, how did operator overloading get such a bad rep in C++? That’s exactly what my proposal was all about: if you’re completely changing the meaning of an operator, the reader should have nearby indication of what is really going on. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
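The unease described here is easy to make concrete in a language with a polymorphic `+`, such as Python. In this small sketch (the function name combine is made up for illustration), the same expression performs either addition or concatenation depending on what flows into it at runtime, and nothing at the use site tells the reader which:

```python
# The meaning of '+' depends on the operand types, which are not
# visible where the expression is written.
def combine(a, b):
    return a + b

print(combine(2, 3))      # 5    (numeric addition)
print(combine('2', '3'))  # 23   (string concatenation)
```

In Perl 5 the two meanings have distinct monomorphic operators, `+` and `.`, so the reader never has to track types to know what an expression does.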
Re: Musings on operator overloading
* Mark J. Reed [EMAIL PROTECTED] [2008-03-21 21:35]: On Fri, Mar 21, 2008 at 4:25 PM, Aristotle Pagaltzis [EMAIL PROTECTED] wrote: It makes the meaning of the statement dependent on the types of any variables, which is information that a reader won't necessarily find in close vicinity of the statement. [...] if you're completely changing the meaning of an operator, the reader should have nearby indication of what is really going on. Ah, so you want the types of typed vars to be apparent where those vars are used. Well, there's an easy solution there: Hungarian notation! (ducks under barrage of rotten fruit) The other easy solution is monomorphism, wherein the types of the variables are irrelevant. It so happens that this is what Perl does and what my proposal was about. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Musings on operator overloading (was: File-Fu overloading)
Hi Jonathan, * Jonathan Lang [EMAIL PROTECTED] [2008-02-24 22:30]: So if I'm understanding you correctly, the following would be an example of what you're talking about: { use text; if $a > 49 { say $a } } ...with the result being the same as Perl5's 'if $a gt 49 { say $a }' (so if $a equals '5', it says '5'). Am I following you? If so, I'm not seeing what's so exciting about the concept; all it is is a package that redefines a set of operators for whatever scopes use it. If I'm not following you, I'm totally lost. you’re indeed following me. And it’s indeed not very exciting. And that’s exactly the point. I find that regular, type-based overloading is *very* exciting… but not in a good way. An approach that makes operator overloading an unexciting business therefore seems very useful to me. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Musings on operator overloading (was: File-Fu overloading)
[Cc to perl6-language as I think this is of interest] [Oh, and please read the entire thing before responding to any one particular point. There are a number of arguments flowing from one another here. (I am guilty of being too quick with the Reply button myself, hence this friendly reminder.)] * Eric Wilhelm [EMAIL PROTECTED] [2008-02-24 02:05]: # from Aristotle Pagaltzis # on Saturday 23 February 2008 14:48: I find the basic File::Fu interface interesting… but operator overloading always makes me just ever so slightly queasy, and this example is no exception. Is that because of the syntax, the concepts, or the fact that perl5 doesn't quite get it right? It’s a matter of readability. It’s the old argument about, if not to say against, operator overloading: you’re giving `*` a completely arbitrary meaning that has nothing in common in any way with what `*` means in contexts that the reader of the code had previously encountered. Does it help to know that error messages will be plentiful and informative? (Not to mention the aforementioned disambiguation between mutations and stringifications.) It has nothing to do with any of these factors. I get the desire for syntactic sugar, I really do… but looking at this, I think the sane way to accommodate that desire is to attach overloaded semantics to a specially denoted scope rather than hang them off the type of an object. I can't picture that without an example. Something like path { $app_base_dir / $conf_dir / $foo_cfg . $cfg_ext } where the operators in that scope are overloaded irrespective of the types of the variables (be they plain scalar strings, instances of a certain class, or whatever). Note that I’m not proposing this as something for File::Fu to implement. It would be rather difficult, if at all possible, to provide such an interface in Perl 5. You need macros or access to the grammar or something like that in order to implement this at all. 
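For contrast, the type-bound version of exactly this path-building sugar does exist in Python's pathlib, where `/` is overloaded on the Path classes. The variable names below echo the `path {}` example and are otherwise arbitrary. Note how the overload only engages because the leftmost operand is a PurePosixPath object; with plain strings throughout, `/` would be division, which is precisely the type-dependence the scope-bound proposal avoids:

```python
from pathlib import PurePosixPath

app_base_dir, conf_dir = 'app', 'conf'
foo_cfg, cfg_ext = 'foo', '.cfg'

# Type-bound overloading: '/' means "path join" here only because
# PurePosixPath starts the chain, not because of the surrounding scope.
p = PurePosixPath(app_base_dir) / conf_dir / (foo_cfg + cfg_ext)
print(p)  # app/conf/foo.cfg
```

Under the scope-bound proposal, the `path { ... }` block would give `/` and `.` these meanings for every operand in the block, strings included, so no wrapper object would be needed.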
Although I think that even if you have those, you wouldn’t want to use them directly, but rather as a substrate to implement scope-attached operator overloading as an abstraction over them. But I think it’s desirable to use this abstraction instead of using grammar modifications or macros directly, since it has vastly more limited power than the former and still much less power than the latter. It should therefore be both easier to use for the programmer who designs the overloading scope and easier to read for the maintenance programmer who encounters code that uses overload scopes. It would particularly help the latter, of course, because the code’s behaviour does not vary based on the types that happen to pass through; the source code is explicit and direct about its meaning. I suspect though that having the object carry the semantics around with it is still going to be preferred. There are cases where it would be. When the object is a mathematical abstraction in some broad sense, e.g. it’s a complex number class, or it implements some kind of container such as a set, then being able to overload operators based on the type of that object would be useful. But note that in all of these examples, it is very much self-evident what the meaning of an overloaded `+` would be: that meaning comes from the problem domain – a problem domain that has the rare property of having concepts such as operators and operands. When you leave the broader domain of mathematical and “para-” mathematical abstractions behind and start to define things like division on arbitrary object types that model aspects of domains which have nothing even resembling such concepts, you’re rapidly moving into the territory of obfuscation. A lot of C++ programmers could sing a song about that. However, I think the way that Java reacted to this (“only the language designer gets to overload operators!!”) is completely wrong. 
I agree fully with the underlying desire you express: The essential motivation is that if I can't make this interface work, I'm just going to slap strings together and be done with it. The converse is that if I can make this interface work then cross-platform pathname compatibility becomes far less tedious. Absolutely it is very, very useful to be able to define syntactic sugar that makes it as easy and pleasant to do the right thing (manipulate pathnames as pathnames) as it is to do the wrong thing (use string operations to deal with pathnames). That is precisely why I said that I do get why you’d want to overload operators. And this contradiction – that being able to declare sugar is good, but the way that languages have permitted that so far leads to insanity – is what sent me thinking along the lines that there has to be some way to make overloading sane. And we all know that all is fair if you predeclare. And that led me to the flash of inspiration: why not make overloading a property of the source (lexical, early-bound) rather than of the values (temporal, late- bound)? And what