Re: How to push a hash on an array without flattening it to Pairs?
* Elizabeth Mattijsen <l...@dijkmat.nl> [2015-09-26 13:20]:
> The flattening will not be done if more than one argument is specified:
>
> $ 6 'my %h = a => 42, b => 666; my @a = %h,%h; dd @a'
> Array @a = [{:a(42), :b(666)}, {:a(42), :b(666)}]
>
> This is the same behaviour as with for:
>
> $ 6 'my %h = a => 42, b => 666; dd $_ for %h'
> :a(42)
> :b(666)
>
> $ 6 'my %h = a => 42, b => 666; dd $_ for %h,%h'
> Hash %h = {:a(42), :b(666)}
> Hash %h = {:a(42), :b(666)}
>
> It’s the same rule throughout :-)

Yes, but adding a trailing comma to convert a lone item to an element in a one-element list, causing the flattening to apply to the list instead of the item, thus avoiding the flattening of that item as a side effect of sorts… is just way too meta for my taste, at least for everyday parts of the language.

> There is: you just need to itemize the hash, e.g. by prefixing it with $
>
> $ 6 'my %h = a => 42, b => 666; my @a = $%h; dd @a'
> Array @a = [{:a(42), :b(666)},]
>
> This is the one argument rule at work.

Aha! Much better. Explicit. “Don’t subject %h to flattening.” No need to combine two other unrelated rules to bend around invoking the undesired rule; just directly saying not to invoke it.

Now of course I must ask – is there an opposite also? I.e. when writing a list, is there a way I can say “do flatten this item”? Or, put in other words, what goes in place of XXX in the following to make it real?

$ 6 'my %h = a => 42, b => 666; dd $_ for %h,XXX'
Hash %h = {:a(42), :b(666)}
:a(42)
:b(666)

Regards,
-- 
Aristotle Pagaltzis // <http://plasmasturm.org/>
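[Editor’s note: for readers more at home in Python, the distinction the itemization trick expresses – “add this hash as one element” versus “spread its pairs into the list” – corresponds roughly to `append` versus `extend`. This is an illustrative analogy only, not Perl 6 semantics:]

```python
h = {"a": 42, "b": 666}

a = []
a.append(h)            # keep the hash as a single element, like $%h
print(a)               # [{'a': 42, 'b': 666}]

b = []
b.extend(h.items())    # spread the pairs out, like the flattening of %h
print(b)               # [('a', 42), ('b', 666)]
```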
Re: How to push a hash on an array without flattening it to Pairs?
* Moritz Lenz <mor...@faui2k3.org> [2015-09-26 09:40]:
> A trailing comma helps:
>
> my %h = a => 1, b => 2;
> my @a = %h, ;
> say @a.perl; # [{:a(1), :b(2)},]

I think I understand why, but wow, that’s not reasonable. Is there really no better way to avoid the flattening? Even Perl 5 is nicer in that situation…

-- 
Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: Definitions: compiler vs interpreter [was: Rationale for a VM + compiler approach instead of an interpreter?]
* Parrot Raiser <1parr...@gmail.com> [2014-12-07 22:40]:
> The practical distinction, surely, is that the output of a compiler is
> usually kept around, to be run one or more times, whereas an
> interpreter always works with the original human-readable source.

Yes, surely that’s it. We all consider Python a compiler, after all. :-)

Go on, tweak your definition to pin it down. :-)

* Gerard ONeill <oobl...@usa.net> [2014-12-08 15:10]:
> How about: an interpreter interprets input directly into action (even
> if there is some optimization going on), while a compiler converts
> instructions from one set to another set to be interpreted later.

That’s just an unnecessarily concrete rephrasing of the definitions I mentioned.

> Which would make perl both at the perl source level,

Perl never interprets raw perl code without first parsing it into an optree.

> and an interpreter at the bytecode level.

Well yeah, bytecode always implies an interpreter.

> Thinking of execution as interpretation, this allows for the transmeta
> concept, where the CPU was just an interpreter / just-in-time compiler
> that interpreted x86 instructions. Although modern CISC CPUs have a
> step where the input to the chip was still converted to microcode
> which was actually what was run. So a compilation step, and an
> interpreting step.

Execution takes a program as input and produces the program’s output as output. So it’s interpretation. By definition. Sometimes there are dedicated hard-wired circuits that do it, and sometimes there are other layers of abstraction around the hard-wired circuitry. The layers can be in hardware, and even then at different degrees of abstraction (FPGA vs microcode, say), or in software – and really, what is software and what is hardware depends merely on your perspective. There are plenty of coprocessors that internally run code which is opaque from the outside; is that software or hardware?

That’s what I meant by fuzzy ideas. You don’t get anywhere trying to nail this pudding to the wall.
You only get somewhere if you accept that which is which is relative to your point of view, and that the difference is defined in terms of the output: compilers transform programs into other programs, and interpreters transform a program into its output. That’s it.

E.g. if you have something like perl running a Perl program, then you have the CPU interpreting a program (perl) that itself interprets another program (the optree), which in turn was compiled from the user’s Perl program earlier on.

Once you stop trying to artificially force everything into a single absolute distinction, the entire debate about which is which vanishes.

Regards,
-- 
Aristotle Pagaltzis // http://plasmasturm.org/
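[Editor’s note: the program-to-program vs program-to-output distinction can be demonstrated with Python’s own built-ins; a minimal sketch, with a made-up source string:]

```python
src = "print(6 * 7)"

# Compiler: program in, equivalent program out -- here, CPython bytecode.
code = compile(src, "<demo>", "exec")

# Interpreter: program in, that program's output out.
exec(code)   # prints 42
```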
Definitions: compiler vs interpreter [was: Rationale for a VM + compiler approach instead of an interpreter?]
* Moritz Lenz <mor...@faui2k3.org> [2014-12-06 20:05]:
> First of all, the lines between interpreters and compilers are a bit
> blurry. People think of Perl 5 as an interpreter, but actually it
> compiles to bytecode, which is then run by a runloop. So it has a
> compiler and an interpreter stage.

This is sort of a tangent, but it was a clarifying insight that resolved a point of vagueness for me, so I’d like to just talk about that for a moment if you’ll indulge me.

Namely, that line is actually very clear in a theoretical sense, if you judge these types of program by their outputs:

Interpreter: a program that receives a program as input and produces the output of that program as output.

Compiler: a program that receives a program as input and produces another, equivalent (in some sense) program as output.

Now some compilers emit programs that can be run directly by the CPU of the same computer that is running them, without an extra interpreter. This is what people with fuzzy ideas of the terms usually refer to when they speak of a compiler. But the output doesn’t have to be a program of this kind.

The blurriness in practice comes from the fact that essentially all programming languages in use by humans are very impractical to use for direct interpretation. And so almost every interpreter ever written is actually coupled to a compiler that first transforms the user’s source program into some other form which is more convenient to interpret. Even the BASICs on those famous old home computers of the past are combined compiler-interpreters in this sense: just parsing an input program up front as a whole essentially meets the definition of a compiler – even if a rather weak version of it.

I think that means shells are typically true interpreters, and that they are more or less the only real examples of such.

Regards,
-- 
Aristotle Pagaltzis // http://plasmasturm.org/
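[Editor’s note: a toy illustration of such a combined compiler-interpreter – all names here are invented for the example. The front end that parses arithmetic into a tree already meets the definition of a compiler; the tree walker that follows is the actual interpreter:]

```python
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def compile_to_tree(src):
    """'Compiler': turn the source string into an equivalent program (an AST)."""
    return ast.parse(src, mode="eval").body

def interpret(node):
    """'Interpreter': turn the program (the AST) into its output."""
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](interpret(node.left), interpret(node.right))
    raise ValueError(f"unsupported node: {node!r}")

tree = compile_to_tree("2 + 3 * 4")   # compile step
print(interpret(tree))                # interpret step: prints 14
```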
Re: Converting a Perl 5 pseudo-continuation to Perl 6
* Aristotle Pagaltzis <pagalt...@gmx.de> [2009-01-02 23:00]:
> That way, you get this combination:
>
> sub pid_file_handler ( $filename ) {
>     # ... top half ...
>     yield;
>     # ... bottom half ...
> }
>
> sub init_server {
>     # ...
>     my $write_pid = pid_file_handler( %options<pid_file> );
>     become_daemon();
>     $write_pid();
>     # ...
> }

It turns out that is exactly how generators work in Javascript 1.7:

https://developer.mozilla.org/en/New_in_JavaScript_1.7

Regards,
-- 
Aristotle Pagaltzis // http://plasmasturm.org/
Re: Converting a Perl 5 pseudo-continuation to Perl 6
* Geoffrey Broadwell <ge...@broadwell.org> [2009-01-01 21:40]:
> In the below Perl 5 code, I refactored to pull the two halves of the
> PID file handling out of init_server(), but to do so, I had to return
> a sub from pid_file_handler() that acted as a continuation. The syntax
> is a bit ugly, though. Is there a cleaner way to do this in Perl 6?
>
> ##
>
> sub init_server {
>     my %options = @_;
>     # ...
>
>     # Do top (pre-daemonize) portion of PID file handling.
>     my $handler = pid_file_handler($options{pid_file});
>
>     # Detach from parent session and get to clean state.
>     become_daemon();
>
>     # Do bottom (post-daemonize) portion of PID file handling.
>     $handler->();
>
>     # ...
> }
>
> sub pid_file_handler {
>     # Do top half (pre-daemonize) PID file handling ...
>     my $filename = shift;
>     my $basename = lc $BRAND;
>     my $PID_FILE = $filename || "$PID_FILE_DIR/$basename.pid";
>     my $pid_file = open_pid_file($PID_FILE);
>
>     # ... and return a continuation on the bottom half (post-daemonize).
>     return sub {
>         $MASTER_PID = $$;
>         print $pid_file $$;
>         close $pid_file;
>     };
> }
>
> ##
>
> When I asked this question on #perl6, pmurias suggested using
> gather/take syntax, but that didn't feel right to me either -- it's
> contrived in a similar way to using a one-off closure.

Contrived how? I always found implicit continuations distasteful in the same way that `each` and the boolean flip-flop are bad in Perl 5: because they tie program state to a location in the code. When there is state, it should be passed around explicitly.

So I think the return-a-closure solution is actually ideal. F.ex. it keeps you entirely clear of the troublesome question of when a subsequent call should restart the sub from the beginning and when it should resume it – should that happen when identical arguments are passed? Or when no arguments are passed? Are there any rules about the proximity of the calls in the code? Or does the coroutine state effectively become global state (like with `each` and `pos` in Perl 5)?
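[Editor’s note: the return-a-closure shape under discussion translates directly to other languages; a sketch in Python, with the file handling simplified and all names made up:]

```python
import os

def pid_file_handler(filename):
    # Top half (pre-daemonize): open the PID file.
    pid_file = open(filename, "w")

    # Return a continuation on the bottom half (post-daemonize).
    def write_pid():
        pid_file.write(str(os.getpid()))
        pid_file.close()
    return write_pid

# handler = pid_file_handler("/tmp/demo.pid")  # top half runs now
# become_daemon()                              # hypothetical: detach here
# handler()                                    # bottom half runs on resume
```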
When you have an explicit entity representing the continuation, all of these questions resolve themselves at once: all calls to the original routine create a new continuation, and all calls via the state object are resumptions. There is no ambiguity or subtlety to think about. So from the perspective of the caller, I consider the “one-off” closure ideal: the first call yields an object that can be used to resume the call.

However, I agree that having to use an extra block inside the routine and return it explicitly is suboptimal. It would be nice if there was a `yield` keyword that not only threw a resumable exception, but also closed over the exception object in a function that, when called, resumes the original function. That way, you get this combination:

sub pid_file_handler ( $filename ) {
    # ... top half ...
    yield;
    # ... bottom half ...
}

sub init_server {
    # ...
    my $write_pid = pid_file_handler( %options<pid_file> );
    become_daemon();
    $write_pid();
    # ...
}

Regards,
-- 
Aristotle Pagaltzis // http://plasmasturm.org/
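[Editor’s note: Python generators give almost exactly the proposed shape, with one difference: calling the function only creates the generator object, so an extra resumption is needed to run the top half. A sketch using the same hypothetical names as the surrounding code:]

```python
import os

def pid_file_handler(filename):
    pid_file = open(filename, "w")       # ... top half ...
    yield                                # suspend across become_daemon()
    pid_file.write(str(os.getpid()))     # ... bottom half ...
    pid_file.close()

def init_server(options):
    handler = pid_file_handler(options["pid_file"])
    next(handler)          # run the top half up to the yield
    # become_daemon()      # hypothetical: detach from the parent session
    next(handler, None)    # resume: run the bottom half to completion
```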
Guido’s library porting considerations
Hi all, this sounds sensible:

* http://www.artima.com/weblogs/viewpost.jsp?thread=227041:

> I implore you (especially if you’re maintaining a library that’s used
> by others) not to make incompatible changes to your API. If you *have*
> to make API changes, do them *before* you port to 3.0 – release a
> version with the new API for Python 2.5, or 2.6 if you must. (Or do it
> later, *after* you’ve released a port to 3.0 without adding new
> features.)
>
> Why? Think of your users. Suppose Ima Lumberjack has implemented a web
> 2.0 app for managing his sawmill. Ima is a happy user of your most
> excellent web 2.0 framework. Now Ima wants to upgrade his app to Py3k.
> He waits until you have ported your framework to Py3k. He does
> everything by the book, runs his source code through the 2to3 tool,
> and starts testing. Imagine his despair when the tests fail: how is he
> going to tell whether the breakage is due to your API changes or due
> to his own code not being Py3k-ready?
>
> On the other hand, if you port your web 2.0 framework to Py3k
> *without* making API changes, Ima’s task is much more focused: the
> bugs he is left with after running 2to3 are definitely in his own
> code, which (presumably :-) he knows how to debug and fix.

And the Don’t Break CPAN line:

> The same recommendation applies even more strongly if your library is
> a dependency for other libraries – due to the fan-out, the pain caused
> to others multiplies.

Sounds quite well reasoned to me. Is this something that makes sense to encourage for 5-to-6 migrations of Perl code as well?

Regards,
-- 
Aristotle Pagaltzis // http://plasmasturm.org/