Hi,
I'm writing a tool that will make a tarball, and then the tarball is
passed to parallel, which splits it into 5GB blocks, and each block is
sent to a separate pipe.
The call looks like:
tar cf - /some/directory | parallel -j 5 --pipe --block 5G --recend '' \
  ./handle-single-part.sh "{#}"
Where handle-single-part.sh compresses, encrypts, and uploads each part to S3.
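For reference, a minimal sketch of what such a handler could look like,
assuming gzip, gpg and the AWS CLI; the gpg recipient and S3 bucket are
placeholders:

  #!/usr/bin/env bash
  # handle-single-part.sh -- hypothetical sketch of a per-part handler.
  # parallel passes the part number as $1 ({#}); the block arrives on stdin.
  set -euo pipefail
  part="$1"
  # Compress, encrypt, and stream the part to S3 (all names are placeholders).
  gzip -c \
    | gpg --encrypt --recipient backup@example.com \
    | aws s3 cp - "s3://example-backup-bucket/backup.tar.part-${part}.gz.gpg"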
On Thu, Jan 25, 2018 at 11:26:24AM -0500, Joe Sapp wrote:
> Hi depesz,
>
> On Thu, Jan 25, 2018 at 10:33 AM, hubert depesz lubaczewski
> wrote:
> [snip]
> > But it looks like parallel itself is consuming a HUGE amount of memory
> > - comparable with the size of /some/directory.
On Fri, Jan 26, 2018 at 02:58:32PM +0100, Hubert Kowalski wrote:
> Is it in any way similar to https://savannah.gnu.org/bugs/index.php?51261 ?
> https://lists.gnu.org/archive/html/parallel/2017-06/msg9.html ?
I don't use --results.
My parallel call is exactly:
parallel -j 5 --pipe --block 5G --recend '' ./handle-single-part.sh "{#}"
On Sun, Jan 28, 2018 at 02:45:42AM +0100, Ole Tange wrote:
> --pipe keeps one block per process in memory, so the above should use
> around 25 GB of RAM.
>
> You can see the reason for this design by imagining jobs that read
> very slowly: You will want all 5 of these to be running, but you would
> not want tar to be blocked while waiting for the slowest of them.
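In other words, resident memory for --pipe is roughly jobs × block size,
so the knobs to turn are -j and --block. For example, keeping the same
layout but with smaller parts:

  # ~5 GB of buffering instead of ~25 GB, at the cost of more, smaller parts:
  tar cf - /some/directory | parallel -j 5 --pipe --block 1G --recend '' \
    ./handle-single-part.sh "{#}"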
On Sun, Jan 28, 2018 at 02:45:42AM +0100, Ole Tange wrote:
> On Thu, Jan 25, 2018 at 4:33 PM, hubert depesz lubaczewski
> wrote:
> You can also use --cat:
>
> tar cf - /some/directory | parallel -j 5 --pipe --block 5G --cat
> --recend '' 'cat {} | ./handle-single-part.sh {#}'
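--cat trades RAM for disk: each block is spooled to a temporary file, {}
is replaced by that file's name, and the file is removed when the job
exits, so memory stays low but the tmp filesystem needs roughly jobs ×
block size of free space. The extra cat should also be avoidable by
redirecting stdin from the temporary file:

  tar cf - /some/directory | parallel -j 5 --pipe --block 5G --cat \
    --recend '' './handle-single-part.sh {#} < {}'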
On Tue, Jan 30, 2018 at 05:52:47PM +0100, Ole Tange wrote:
> Typically the tmp-filesystem will be at least as fast as any other
> file system, but on many systems /tmp is faster than other filesystems
> on the server.
> If you really will not use tar to generate the input, then at least
> make sure the temporary files end up on a fast filesystem.
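If /tmp is small or slow, parallel's temporary files can be pointed at
another disk with --tmpdir (or $TMPDIR); /fast-scratch below is a
placeholder:

  tar cf - /some/directory | parallel -j 5 --pipe --block 5G --cat \
    --tmpdir /fast-scratch --recend '' 'cat {} | ./handle-single-part.sh {#}'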
Hi,
I'm trying to work with parallel to split a tarball, and then
compress/encrypt/upload-to-s3, but it's failing me.
I'm using parallel 20180122 on 64-bit Linux.
My mem looks like:
             total       used       free     shared    buffers     cached
Mem:       4046856    2483552    1563304
On Thu, Feb 08, 2018 at 07:50:22PM +0100, Ole Tange wrote:
> In practice this means you need 2.3 GB of free RAM and 3.8 GB of free
> virtual memory (RAM+swap) to run this.
>
> Given your output from 'free' I think that if you add 2 GB of swap,
> then that will solve your issue. Just because Perl has allocated the
> memory does not mean it is all actively used, so it can live in swap.
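For the record, one common way to add 2 GB of swap on Linux (the
swapfile path is arbitrary):

  sudo fallocate -l 2G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=2048
  sudo chmod 600 /swapfile         # swap files must not be world-readable
  sudo mkswap /swapfile            # format it as swap space
  sudo swapon /swapfile            # enable it immediately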