Thank you very much Brad, this I didn't know.
Regards
Marcel
CatHandle is the mechanism behind $*ARGFILES.
If you want to read several files as if they were one, you can use
IO::CatHandle.
my $combined-file = IO::CatHandle.new( 'example_000.txt', *.succ ... 'example_010.txt' );
Basically it works similarly to the `cat` command-line utility. (Hence its name.)
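As a hedged sketch (the file names here are hypothetical), the combined handle can then be read as if it were a single one:

```raku
# Two hypothetical files read back-to-back as one stream of lines.
my $cat = IO::CatHandle.new('a.txt', 'b.txt');
for $cat.lines -> $line {
    say $line;
}
```

Methods like .lines, .words and .comb work on the combined handle much as they do on a plain IO::Handle.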
On 10/22/19 3:03 PM, Parrot Raiser wrote:
CatHandle? Is that an alias for "tail"? :-)*
Hehe, that's a nice word change... Well, I've seen it here:
https://docs.perl6.org/routine/readchars
But there's also IO::Handle. What I've understood is that the
CatHandle does the same as a
CatHandle? Is that an alias for "tail"? :-)*
On 10/22/19, Marcel Timmerman wrote:
> On 10/22/19 1:05 PM, Marcel Timmerman wrote:
>> On 10/20/19 11:38 PM, Joseph Brenner wrote:
>>> I was just thinking about the case of processing a large file in
>>> chunks of an arbitrary size (where "lines" or
On 10/20/19 11:38 PM, Joseph Brenner wrote:
I was just thinking about the case of processing a large file in
chunks of an arbitrary size (where "lines" or "words" don't really
work). I can think of a few approaches that would seem kind-of
rakuish, but don't seem to be built-in anywhere...
Hi Joe,
Just a quick note to say that "Learning Perl 6" by brian d foy has a
section on reading binary files (pp.155-157). Check out the "Buf"
object type, the ":bin" adverb, and the ".read" method. In particular,
".read" takes an argument specifying how many octets you want to read
in.
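For what it's worth, a minimal sketch of that approach (the file name is hypothetical):

```raku
# Read a file in fixed-size binary chunks; :bin makes .read return a Buf.
my $fh = 'data.bin'.IO.open(:bin);
while my $buf = $fh.read(512) {      # up to 512 octets per call
    say "got {$buf.elems} octets";   # .read returns an empty Buf at EOF
}
$fh.close;
```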
HTH,
In that bioinformatics data, is there another logical record separator? If so,
and $lrs contains the logical record separator, you could do this:
for "filename".IO.lines(:nl-in($lrs)) {
    .say
}
The :nl-in named argument specifies the line separator to use when reading the file.
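For instance, if the records were separated by some marker, say a hypothetical "%%", it might look like:

```raku
# Hypothetical: records in records.txt are separated by "%%".
my $lrs = '%%';
for 'records.txt'.IO.lines(:nl-in($lrs)) -> $record {
    say "record: $record";
}
```

With :chomp left at its default of True, the "%%" separator itself is stripped from the end of each record.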
> On 21 Oct
I can confirm what Yary is seeing with respect to the "lines(:!chomp)"
call. Below I can print things out on a single line (using "print"),
but it is the choice of "print" vs. "put" that appears to be controlling,
not the "chomp" option of "lines()".
> mbook:~ homedir$ cat abc_test.txt
line
It seems that *ARGFILES is opened with :chomp=True, so adding :!chomp to
the lines call is too late.
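If so, one workaround (a sketch, assuming IO::CatHandle.new accepts a :chomp named argument as documented) is to build your own CatHandle over the argument list with chomping disabled from the start:

```raku
# Roughly what $*ARGFILES does, but created with :!chomp up front,
# so each line keeps its trailing separator.
my $args = IO::CatHandle.new(:!chomp, @*ARGS);
print $_ for $args.lines;   # print, not say: the newline is already there
```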
$ perl6 -e "say 11; say 22; say 33;" | perl6 -e '.say for lines(:chomp)'
11
22
33
$ perl6 -e "say 11; say 22; say 33;" | perl6 -e '.say for lines(:!chomp)'
11
22
33
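(For contrast, a sketch on a plain file handle, where :!chomp does take effect; the scratch file name is made up:)

```raku
# Write three lines, then read them back unchomped.
spurt 'nums.txt', "11\n22\n33\n";
for 'nums.txt'.IO.lines(:!chomp) -> $line {
    print $line;   # no added newline needed; each $line ends in "\n"
}
```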
-y
Thanks, that looks good. At the moment I was thinking about cases
where there's no neat division by lines or words (like, say,
hypothetically, bioinformatics data: very long strings, no line breaks).
On 10/20/19, Elizabeth Mattijsen wrote:
>> On 20 Oct 2019, at 23:38, Joseph Brenner wrote:
>> I
Yes, you can call .comb on a file handle (which I hadn't realized) and
if you give it an integer as first argument, that treats it as a chunk
size... So stuff like this seems to work fine:
my $fh = $file.IO.open;
my $chunk_size = 1000;
for $fh.comb( $chunk_size ) -> $chunk {
    # process $chunk here
}
> On 20 Oct 2019, at 23:38, Joseph Brenner wrote:
> I was just thinking about the case of processing a large file in
> chunks of an arbitrary size (where "lines" or "words" don't really
> work). I can think of a few approaches that would seem kind-of
> rakuish, but don't seem to be built-in
Thanks, I'll take a look at that.
Brad Gilbert wrote:
> Assuming it is a text file, it would be `.comb(512)`
>
> On Sun, Oct 20, 2019 at 4:39 PM Joseph Brenner wrote:
>
>> I was just thinking about the case of processing a large file in
>> chunks of an arbitrary size (where "lines" or "words"
Assuming it is a text file, it would be `.comb(512)`
On Sun, Oct 20, 2019 at 4:39 PM Joseph Brenner wrote:
> I was just thinking about the case of processing a large file in
> chunks of an arbitrary size (where "lines" or "words" don't really
> work). I can think of a few approaches that
I was just thinking about the case of processing a large file in
chunks of an arbitrary size (where "lines" or "words" don't really
work). I can think of a few approaches that would seem kind-of
rakuish, but don't seem to be built-in anywhere... something like a
variant of "slurp" with an