Re: processing a file in chunks

2019-10-23 Thread Marcel Timmerman
Thank you very much Brad, this I didn't know. Regards, Marcel. CatHandle is the mechanism behind $*ARGFILES. If you want to read several files as if they were one, you can use IO::CatHandle. my $combined-file = IO::CatHandle.new( 'example_000.txt', *.succ ... 'example_010.txt' );

Re: processing a file in chunks

2019-10-22 Thread Brad Gilbert
CatHandle is the mechanism behind $*ARGFILES. If you want to read several files as if they were one, you can use IO::CatHandle. my $combined-file = IO::CatHandle.new( 'example_000.txt', *.succ ... 'example_010.txt' ); Basically it works similar to the `cat` command-line utility. (Hence its
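Filled out into a runnable sketch (the example_*.txt filenames are hypothetical; Raku's string increment steps the number in the name, so the sequence yields example_000.txt through example_010.txt):

```raku
# Treat eleven numbered files as one input stream, as Brad suggests.
# The example_*.txt filenames are hypothetical.
my $combined-file = IO::CatHandle.new( 'example_000.txt', *.succ ... 'example_010.txt' );

# The combined handle supports the usual reading methods:
for $combined-file.lines -> $line {
    say $line;
}
```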

Re: processing a file in chunks

2019-10-22 Thread Marcel Timmerman
On 10/22/19 3:03 PM, Parrot Raiser wrote: CatHandle? Is that an alias for "tail"? :-)* hehe, that's a nice word change... Well, I've seen it here: https://docs.perl6.org/routine/readchars But there's also IO::Handle. What I've understood is that the CatHandle does the same as a

Re: processing a file in chunks

2019-10-22 Thread Parrot Raiser
CatHandle? Is that an alias for "tail"? :-)* On 10/22/19, Marcel Timmerman wrote: > On 10/22/19 1:05 PM, Marcel Timmerman wrote: >> On 10/20/19 11:38 PM, Joseph Brenner wrote: >>> I was just thinking about the case of processing a large file in >>> chunks of an arbitrary size (where "lines" or

Re: processing a file in chunks

2019-10-22 Thread Marcel Timmerman
On 10/22/19 1:05 PM, Marcel Timmerman wrote: On 10/20/19 11:38 PM, Joseph Brenner wrote: I was just thinking about the case of processing a large file in chunks of an arbitrary size (where "lines" or "words" don't really work).   I can think of a few approaches that would seem kind-of rakuish,

Re: processing a file in chunks

2019-10-22 Thread Marcel Timmerman
On 10/20/19 11:38 PM, Joseph Brenner wrote: I was just thinking about the case of processing a large file in chunks of an arbitrary size (where "lines" or "words" don't really work). I can think of a few approaches that would seem kind-of rakuish, but don't seem to be built-in anywhere...

Re: processing a file in chunks

2019-10-22 Thread William Michels via perl6-users
Hi Joe, Just a quick note to say that "Learning Perl 6" by brian d foy has a section on reading binary files (pp.155-157). Check out the "Buf" object type, the ":bin" adverb, and the ".read" method. In particular, ".read" takes an argument specifying how many octets you want to read in. HTH,
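A minimal sketch of the approach William describes, assuming a hypothetical file data.bin and a 512-octet chunk size:

```raku
# Open in binary mode; .read then returns Buf objects (octets), not Str.
my $fh = 'data.bin'.IO.open(:bin);

# .read(512) returns up to 512 octets; an empty (falsy) Buf signals EOF.
while my $chunk = $fh.read(512) {
    say "read { $chunk.elems } octets";
}
$fh.close;
```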

Re: processing a file in chunks

2019-10-21 Thread Elizabeth Mattijsen
In that bioinformatics data, is there another logical record separator? If so, and $lrs contains the logical record separator, you could do this: for "filename".IO.lines(:nl-in($lrs)) { .say } the :nl-in indicates the line separator to be used when reading a file. > On 21 Oct
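For example, if records in such a file were separated by some marker string (the separator and filename below are hypothetical), Elizabeth's suggestion would look like:

```raku
# Use a custom logical record separator instead of "\n".
my $lrs = '//';    # hypothetical record separator
for 'records.dat'.IO.lines(:nl-in($lrs)) -> $record {
    say "record of { $record.chars } characters";
}
```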

Re: must chomp input files (was Re: processing a file in chunks)

2019-10-20 Thread William Michels via perl6-users
I can confirm what Yary is seeing with respect to the "lines(:!chomp)" call. Below I can print things out on a single line (using "print"), but it appears the choice of "print" vs. "put" is what controls the trailing newline, not the ":chomp" option of "lines()". > mbook:~ homedir$ cat abc_test.txt line

must chomp input files (was Re: processing a file in chunks)

2019-10-20 Thread yary
It seems that $*ARGFILES is opened with :chomp=True, so adding :!chomp to the lines call is too late. $ perl6 -e "say 11; say 22; say 33;" | perl6 -e '.say for lines(:chomp)' 11 22 33 $ perl6 -e "say 11; say 22; say 33;" | perl6 -e '.say for lines(:!chomp)' 11 22 33 -y
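Since $*ARGFILES is already opened with chomping enabled by the time lines() sees it, one workaround is to open the file yourself and pass :!chomp to open, so the handle keeps trailing newlines (a sketch with a hypothetical filename):

```raku
# Opening the handle ourselves lets us control :chomp up front.
my $fh = 'abc_test.txt'.IO.open(:!chomp);
for $fh.lines -> $line {
    print $line;    # newlines are preserved, so print rather than say
}
$fh.close;
```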

Re: processing a file in chunks

2019-10-20 Thread Joseph Brenner
Thanks, that looks good. At the moment I was thinking about cases where there's no neat division by lines or words (like, say, hypothetical bioinformatics data: very long strings with no line breaks). On 10/20/19, Elizabeth Mattijsen wrote: >> On 20 Oct 2019, at 23:38, Joseph Brenner wrote: >> I

Re: processing a file in chunks

2019-10-20 Thread Joseph Brenner
Yes, you can call .comb on a file handle (which I hadn't realized), and if you give it an integer as the first argument, it treats that as the chunk size. So stuff like this seems to work fine: my $fh = $file.IO.open; my $chunk_size = 1000; for $fh.comb( $chunk_size ) -> $chunk {
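The fragment above, completed into a self-contained sketch (the filename is hypothetical):

```raku
my $file       = 'large_file.txt';
my $chunk-size = 1000;

my $fh = $file.IO.open;
# .comb with an integer argument yields Str chunks of up to that many characters.
for $fh.comb($chunk-size) -> $chunk {
    say "chunk of { $chunk.chars } characters";
}
$fh.close;
```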

Re: processing a file in chunks

2019-10-20 Thread Elizabeth Mattijsen
> On 20 Oct 2019, at 23:38, Joseph Brenner wrote: > I was just thinking about the case of processing a large file in > chunks of an arbitrary size (where "lines" or "words" don't really > work). I can think of a few approaches that would seem kind-of > rakuish, but don't seem to be built-in

Re: processing a file in chunks

2019-10-20 Thread Joseph Brenner
Thanks, I'll take a look at that. Brad Gilbert wrote: > Assuming it is a text file, it would be `.comb(512)` > > On Sun, Oct 20, 2019 at 4:39 PM Joseph Brenner wrote: > >> I was just thinking about the case of processing a large file in >> chunks of an arbitrary size (where "lines" or "words"

Re: processing a file in chunks

2019-10-20 Thread Brad Gilbert
Assuming it is a text file, it would be `.comb(512)` On Sun, Oct 20, 2019 at 4:39 PM Joseph Brenner wrote: > I was just thinking about the case of processing a large file in > chunks of an arbitrary size (where "lines" or "words" don't really > work). I can think of a few approaches that

processing a file in chunks

2019-10-20 Thread Joseph Brenner
I was just thinking about the case of processing a large file in chunks of an arbitrary size (where "lines" or "words" don't really work). I can think of a few approaches that would seem kind-of rakuish, but don't seem to be built-in anywhere... something like a variant of "slurp" with an
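One way to sketch the "variant of slurp" being asked about is a small helper that returns a lazy sequence of fixed-size binary chunks. The helper, its name, and the filename are illustrative assumptions, not a built-in API:

```raku
# chunks() is a hypothetical helper, not a built-in routine.
sub chunks(IO() $path, Int $size = 512) {
    my $fh = $path.open(:bin);
    gather {
        while my $buf = $fh.read($size) {
            take $buf;          # lazily yield one Buf per chunk
        }
        $fh.close;
    }
}

for chunks('big.dat', 1024) -> $chunk {
    say $chunk.elems;
}
```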