RFC 186 is another interesting -io RFC, even though I'm not on the -io list. I couldn't find any discussion in the mail archive, so here's some to start it. Please copy me on the discussion. Sorry for cross-posting, but this is an attempt to unify RFCs from different lists; I've bcc'd two of the lists, directing followup discussion to -io (it seems most appropriate for now). Could the RFC authors respond by adding the other RFCs to their cross-reference lists and republishing their RFCs, or by explaining why I'm all wet in seeing these relationships? And as always, other comments are welcome.

Perl6 RFC Librarian wrote:

> =head1 TITLE
>
> Standard support for opening i/o handles on scalars and
> arrays-of-scalars
>
> It's extremely useful to be able to open an i/o handle on
> common in-core data structures, such as scalars or arrays-of-lines.
> The CPAN modules IO::Scalar, IO::ScalarArray, and IO::Lines
> currently provide some of this functionality, but their pure-Perl
> implementation (chosen for portability) is not as fast or
> memory-efficient as a native implementation could be. Additionally,
> since they are not part of the standard Perl distribution, many
> developers are either unaware of their existence or unwilling to
> obtain and install them.
>
> This RFC proposes that support for "in-core i/o" be folded
> into the Perl distribution as a standard extension module, making
> use of native C code for speed.

I have a number of scripts that use this sort of facility, using push/shift to populate/read the array "file". These could be made simpler and more general by wrapping the array as a file. Perhaps the open "handler" stuff could be used to implement this? Efficiently? Perhaps a technique like this could be used to implement RFC 79? Perhaps these RFCs should each reference the other, to preserve this notion?
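For anyone who hasn't used the CPAN modules the RFC names, here is a small sketch of the interface they provide today (method names per the IO::Scalar/IO::ScalarArray documentation; requires those modules from CPAN). The RFC proposes a faster, native, bundled equivalent of exactly this:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Scalar;         # CPAN module named in the RFC
use IO::ScalarArray;    # CPAN module named in the RFC

# An in-core "file" backed by a scalar:
my $data = '';
my $sh = IO::Scalar->new(\$data);
$sh->print("line one\n");
$sh->print("line two\n");
$sh->seek(0, 0);                        # rewind, just like a real handle
while (defined(my $line = $sh->getline)) {
    print "got: $line";
}

# An in-core "file" backed by an array of lines -- the same idea as
# populating an array with push and reading it back with shift, but
# behind an ordinary handle interface:
my @lines = ("alpha\n", "beta\n");
my $ah = IO::ScalarArray->new(\@lines);
print "first: ", $ah->getline;
```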
Perl's first pass through the file would read it and interpret all lines via the POD rules, plopping each line into the appropriate memory "file" (array) for each type of active handler. So compiling normal perl would create (minimally) a "perl source array" and a "perl data array". After the file is completely read, the perl compiler would be turned loose on the file populated by the perl source array, and the perl data array would eventually populate whatever the DATA handle becomes in perl6.

A pod processor would declare its type and get the set of lines appropriate to that type of pod processor. Or maybe (if it is cheap, or as a pod-processor helper command line option, we read all the lines anyway) a file handle/memory array would be created for each type of pod processor mentioned in the source code. Then (1) programs could access the pod data via those handles, and (2) a pod processor written in perl could just use the handle for the type of processor it is, ignoring the others.

This RFC also seems to be related to RFC 183, using POD for testing. The model of use apparently envisioned for RFC 183 is to have the tests inside the POD, and then use a preprocessor to hack them out and put them in separate files. Wouldn't it be better to skip that step? Just use the "pod helper command line option" mentioned in the above paragraph, or a variation, to cause perl's first pass to:

  (1) obtain the source for the program or module,
  (2) also obtain the source for the test module,
  (3) obtain one or more data handles for test input data and validation data,
  (4) compile 1 & 2 as perl source code, and
  (5) launch the tests, which can then use the appropriate data handles.

But when compiled normally (without the test switches), all the test files simply don't get included.

--
Glenn
=====
There are two kinds of people, those who finish what they start, and so on...
        -- Robert Byrne
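P.S. A toy of the first-pass line routing described above, written in current perl. Everything here is hypothetical: the stream names, treating =for like =begin, and the idea that __DATA__ feeds a 'data' stream are my assumptions to illustrate the routing, not a proposed perl6 design.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: route each input line into a per-handler
# in-memory "file" (an array of lines).  Stream names and the
# simplified pod rules are illustrative assumptions only.
sub split_streams {
    my @input = @_;
    my %stream = (perl => [], data => []);
    my ($mode, $pod_type) = ('perl', '');
    for my $line (@input) {
        if ($mode ne 'data' && $line =~ /^__DATA__\s*$/) {
            $mode = 'data';                        # rest of file is data
            next;
        }
        if ($line =~ /^=(?:for|begin)\s+(\S+)/) {
            ($mode, $pod_type) = ('pod', $1);      # typed pod region
        }
        elsif ($mode ne 'pod' && $line =~ /^=\w+/) {
            ($mode, $pod_type) = ('pod', 'pod');   # generic pod
        }
        if ($mode eq 'pod') {
            push @{ $stream{$pod_type} }, $line;   # autovivifies stream
            ($mode, $pod_type) = ('perl', '')
                if $line =~ /^=(?:cut|end)\b/;
        }
        else {
            push @{ $stream{$mode} }, $line;
        }
    }
    return \%stream;
}
```

A pod processor for, say, an (assumed) "testing" region type would then just read @{ $streams->{testing} } and ignore the other streams, while the compiler reads only the 'perl' stream.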