So how would I submit the contents of many files to parallel, without
concatenating them?
The function needs to process each file line by line.
I am sure there must be a better way. Why concatenate them at all? There is
no relationship between one line and the next.
Maybe a new feature?
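(For what it's worth, GNU parallel can already read its input lines from files
directly via `::::` (or `-a file`), which avoids the `cat` entirely. A minimal
sketch, assuming a hypothetical per-line function `process_line` standing in for
the real one:

```shell
#!/usr/bin/env bash
# process_line is a hypothetical stand-in for the actual per-line function.
process_line() {
    printf 'first=%s second=%s\n' "$1" "$2"
}
export -f process_line

# :::: reads input lines from the named files, so no `cat` is needed;
# --colsep splits each line on commas into {1}, {2}, ...
parallel --colsep ',' process_line {1} {2} :::: *.csv
```

Each line of each CSV file becomes one job, exactly as with the `cat` pipeline.)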


On Sun, Mar 6, 2022 at 4:19 PM Ole Tange <o...@tange.dk> wrote:

> On Sat, Mar 5, 2022 at 2:46 AM Saint Michael <vene...@gmail.com> wrote:
> >
> > I have a bunch of *.csv files.
> > I need to process each line of them separately, so I do
> > function() { any process }
> > export -f function
> > cat *.csv | parallel --colsep ',' function  "{1} {2} {3} {4} {5} {6} {7}"
> >
> > The question is: is this the best possible way to do this?
>
> If the function can only read a single input per run, then yes.
>
> If the function can read more lines, look at --pipe.
>
> > I don't like to use "cat"
>
> cat is built for concatenating files (this is what the name comes from).
>
> /Ole
>
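(A rough sketch of the `--pipe` approach Ole mentions, for a function that can
read many lines per run; `summarize` here is a hypothetical placeholder, not the
actual function from the thread:

```shell
#!/usr/bin/env bash
# summarize is a hypothetical function that reads a whole block of
# CSV lines on stdin, rather than taking one line as arguments.
summarize() {
    awk -F, '{ n++ } END { print n " lines in this block" }'
}
export -f summarize

# --pipe splits stdin into blocks (about --block bytes each, cut on
# newline boundaries) and feeds each block to one job's stdin.
cat *.csv | parallel --pipe --block 1M summarize
```

This amortizes the per-job startup cost over many lines instead of paying it
once per line.)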
