Hello community,

here is the log from the commit of package gnu_parallel for openSUSE:Factory checked in at 2018-11-26 10:31:20
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/gnu_parallel (Old)
 and      /work/SRC/openSUSE:Factory/.gnu_parallel.new.19453 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "gnu_parallel" Mon Nov 26 10:31:20 2018 rev:50 rq:651434 version:20181122 Changes: -------- --- /work/SRC/openSUSE:Factory/gnu_parallel/gnu_parallel.changes 2018-11-15 12:41:37.938169240 +0100 +++ /work/SRC/openSUSE:Factory/.gnu_parallel.new.19453/gnu_parallel.changes 2018-11-26 10:31:58.988910059 +0100 @@ -1,0 +2,6 @@ +Fri Nov 23 17:26:52 UTC 2018 - Jan Engelhardt <[email protected]> + +- Update to new upstream release 20181122 + * Experimental simpler job flow control. + +------------------------------------------------------------------- Old: ---- parallel-20181022.tar.bz2 parallel-20181022.tar.bz2.sig New: ---- parallel-20181122.tar.bz2 parallel-20181122.tar.bz2.sig ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Other differences: ------------------ ++++++ gnu_parallel.spec ++++++ --- /var/tmp/diff_new_pack.ot0hmR/_old 2018-11-26 10:32:05.084902916 +0100 +++ /var/tmp/diff_new_pack.ot0hmR/_new 2018-11-26 10:32:05.088902912 +0100 @@ -17,7 +17,7 @@ Name: gnu_parallel -Version: 20181022 +Version: 20181122 Release: 0 Summary: Shell tool for executing jobs in parallel License: GPL-3.0-or-later ++++++ parallel-20181022.tar.bz2 -> parallel-20181122.tar.bz2 ++++++ diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/parallel-20181022/NEWS new/parallel-20181122/NEWS --- old/parallel-20181022/NEWS 2018-10-23 00:56:52.000000000 +0200 +++ new/parallel-20181122/NEWS 2018-11-23 00:35:17.000000000 +0100 @@ -1,3 +1,13 @@ +20181122 + +* Experimental simpler job flow control. + +* 時間がかかるコマンドを GNU parallel で 並列実行する + https://qiita.com//grohiro/items/4db3fa951a4778c5c479 + +* Bug fixes and man page updates. + + 20181022 * env_parallel.fish: --session support (alpha quality) diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/parallel-20181022/README new/parallel-20181122/README --- old/parallel-20181022/README 2018-10-23 00:48:01.000000000 +0200 +++ new/parallel-20181122/README 2018-11-23 00:32:16.000000000 +0100 @@ -44,9 +44,9 @@ Full installation of GNU Parallel is as simple as: - wget https://ftpmirror.gnu.org/parallel/parallel-20181022.tar.bz2 - bzip2 -dc parallel-20181022.tar.bz2 | tar xvf - - cd parallel-20181022 + wget https://ftpmirror.gnu.org/parallel/parallel-20181122.tar.bz2 + bzip2 -dc parallel-20181122.tar.bz2 | tar xvf - + cd parallel-20181122 ./configure && make && sudo make install @@ -55,9 +55,9 @@ If you are not root you can add ~/bin to your path and install in ~/bin and ~/share: - wget https://ftpmirror.gnu.org/parallel/parallel-20181022.tar.bz2 - bzip2 -dc parallel-20181022.tar.bz2 | tar xvf - - cd parallel-20181022 + wget https://ftpmirror.gnu.org/parallel/parallel-20181122.tar.bz2 + bzip2 -dc parallel-20181122.tar.bz2 | tar xvf - + cd parallel-20181122 ./configure --prefix=$HOME && make && make install Or if your system lacks 'make' you can simply copy src/parallel diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/parallel-20181022/configure new/parallel-20181122/configure --- old/parallel-20181022/configure 2018-10-23 00:48:17.000000000 +0200 +++ new/parallel-20181122/configure 2018-11-23 00:32:28.000000000 +0100 @@ -1,6 +1,6 @@ #! /bin/sh # Guess values for system-dependent variables and create Makefiles. -# Generated by GNU Autoconf 2.69 for parallel 20181022. +# Generated by GNU Autoconf 2.69 for parallel 20181122. # # Report bugs to <[email protected]>. # @@ -579,8 +579,8 @@ # Identity of this package. 
PACKAGE_NAME='parallel' PACKAGE_TARNAME='parallel' -PACKAGE_VERSION='20181022' -PACKAGE_STRING='parallel 20181022' +PACKAGE_VERSION='20181122' +PACKAGE_STRING='parallel 20181122' PACKAGE_BUGREPORT='[email protected]' PACKAGE_URL='' @@ -1214,7 +1214,7 @@ # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. cat <<_ACEOF -\`configure' configures parallel 20181022 to adapt to many kinds of systems. +\`configure' configures parallel 20181122 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... @@ -1281,7 +1281,7 @@ if test -n "$ac_init_help"; then case $ac_init_help in - short | recursive ) echo "Configuration of parallel 20181022:";; + short | recursive ) echo "Configuration of parallel 20181122:";; esac cat <<\_ACEOF @@ -1357,7 +1357,7 @@ test -n "$ac_init_help" && exit $ac_status if $ac_init_version; then cat <<\_ACEOF -parallel configure 20181022 +parallel configure 20181122 generated by GNU Autoconf 2.69 Copyright (C) 2012 Free Software Foundation, Inc. @@ -1374,7 +1374,7 @@ This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. -It was created by parallel $as_me 20181022, which was +It was created by parallel $as_me 20181122, which was generated by GNU Autoconf 2.69. Invocation command line was $ $0 $@ @@ -2237,7 +2237,7 @@ # Define the identity of the package. PACKAGE='parallel' - VERSION='20181022' + VERSION='20181122' cat >>confdefs.h <<_ACEOF @@ -2880,7 +2880,7 @@ # report actual input values of CONFIG_FILES etc. instead of their # values after options handling. ac_log=" -This file was extended by parallel $as_me 20181022, which was +This file was extended by parallel $as_me 20181122, which was generated by GNU Autoconf 2.69. 
Invocation command line was CONFIG_FILES = $CONFIG_FILES @@ -2942,7 +2942,7 @@ cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`" ac_cs_version="\\ -parallel config.status 20181022 +parallel config.status 20181122 configured by $0, generated by GNU Autoconf 2.69, with options \\"\$ac_cs_config\\" diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/parallel-20181022/configure.ac new/parallel-20181122/configure.ac --- old/parallel-20181022/configure.ac 2018-10-23 00:48:00.000000000 +0200 +++ new/parallel-20181122/configure.ac 2018-11-23 00:32:16.000000000 +0100 @@ -1,4 +1,4 @@ -AC_INIT([parallel], [20181022], [[email protected]]) +AC_INIT([parallel], [20181122], [[email protected]]) AM_INIT_AUTOMAKE([-Wall -Werror foreign]) AC_CONFIG_HEADERS([config.h]) AC_CONFIG_FILES([ diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/parallel-20181022/src/niceload new/parallel-20181122/src/niceload --- old/parallel-20181022/src/niceload 2018-10-23 00:48:01.000000000 +0200 +++ new/parallel-20181122/src/niceload 2018-11-23 00:32:16.000000000 +0100 @@ -24,7 +24,7 @@ use strict; use Getopt::Long; $Global::progname="niceload"; -$Global::version = 20181022; +$Global::version = 20181122; Getopt::Long::Configure("bundling","require_order"); get_options_from_array(\@ARGV) || die_usage(); if($opt::version) { @@ -1147,7 +1147,7 @@ # throw away all execpt the last Device:-section my @iostat; for(reverse @iostat_out) { - /Device:/ and last; + /Device/ and last; push @iostat, (split(/\s+/,$_))[13]; } my $io = ::max(@iostat); diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/parallel-20181022/src/parallel new/parallel-20181122/src/parallel --- old/parallel-20181022/src/parallel 2018-10-23 00:48:01.000000000 +0200 +++ new/parallel-20181122/src/parallel 2018-11-23 00:32:16.000000000 +0100 @@ -120,7 +120,7 @@ $sem = acquire_semaphore(); } $SIG{TERM} = \&start_no_new_jobs; -start_more_jobs(); +while(start_more_jobs()) {} if($opt::tee) { # All jobs must be running in parallel for --tee $Global::start_no_new_jobs = 1; @@ -620,6 +620,7 @@ my $sleep =1; while($Global::total_running > 0) { $sleep = ::reap_usleep($sleep); + start_more_jobs(); } } $Global::start_no_new_jobs ||= 1; @@ -1554,7 +1555,7 @@ sub init_globals { # Defaults: - $Global::version = 20181022; + $Global::version = 20181122; $Global::progname = 'parallel'; $Global::infinity = 2**31; $Global::debug = 0; @@ -2668,7 +2669,6 @@ # Returns: # $jobs_started = number of jobs started my $jobs_started = 0; - my $jobs_started_this_round = 0; if($Global::start_no_new_jobs) { return $jobs_started; } @@ -2678,65 +2678,61 @@ changed_procs_file(); changed_sshloginfile(); } - do { - $jobs_started_this_round = 0; - # This will start 1 job on each --sshlogin (if possible) - # thus distribute the jobs on the --sshlogins round robin - for my $sshlogin (values %Global::host) { - if($Global::JobQueue->empty() and not $opt::pipe) { - # No more jobs in the queue - last; - } - debug("run", "Running jobs before on ", $sshlogin->string(), ": ", - $sshlogin->jobs_running(), "\n"); - if ($sshlogin->jobs_running() < $sshlogin->max_jobs_running()) { - if($opt::delay - and - $opt::delay > ::now() - $Global::newest_starttime) { - # It has been too short since last start - next; - } - if($opt::load and $sshlogin->loadavg_too_high()) { - # The load is too high or unknown - next; - } - 
if($opt::noswap and $sshlogin->swapping()) { - # The server is swapping - next; - } - if($opt::limit and $sshlogin->limit()) { - # Over limit - next; - } - if($opt::memfree and $sshlogin->memfree() < $opt::memfree) { - # The server has not enough mem free - ::debug("mem", "Not starting job: not enough mem\n"); - next; - } - if($sshlogin->too_fast_remote_login()) { - # It has been too short since - next; - } - debug("run", $sshlogin->string(), - " has ", $sshlogin->jobs_running(), - " out of ", $sshlogin->max_jobs_running(), - " jobs running. Start another.\n"); - if(start_another_job($sshlogin) == 0) { - # No more jobs to start on this $sshlogin - debug("run","No jobs started on ", - $sshlogin->string(), "\n"); - next; - } - $sshlogin->inc_jobs_running(); - $sshlogin->set_last_login_at(::now()); - $jobs_started++; - $jobs_started_this_round++; - } - debug("run","Running jobs after on ", $sshlogin->string(), ": ", - $sshlogin->jobs_running(), " of ", - $sshlogin->max_jobs_running(), "\n"); + # This will start 1 job on each --sshlogin (if possible) + # thus distribute the jobs on the --sshlogins round robin + for my $sshlogin (values %Global::host) { + if($Global::JobQueue->empty() and not $opt::pipe) { + # No more jobs in the queue + last; } - } while($jobs_started_this_round); + debug("run", "Running jobs before on ", $sshlogin->string(), ": ", + $sshlogin->jobs_running(), "\n"); + if ($sshlogin->jobs_running() < $sshlogin->max_jobs_running()) { + if($opt::delay + and + $opt::delay > ::now() - $Global::newest_starttime) { + # It has been too short since last start + next; + } + if($opt::load and $sshlogin->loadavg_too_high()) { + # The load is too high or unknown + next; + } + if($opt::noswap and $sshlogin->swapping()) { + # The server is swapping + next; + } + if($opt::limit and $sshlogin->limit()) { + # Over limit + next; + } + if($opt::memfree and $sshlogin->memfree() < $opt::memfree) { + # The server has not enough mem free + ::debug("mem", "Not starting job: not enough mem\n"); + next; + } + if($sshlogin->too_fast_remote_login()) { + # It has been too short since + next; + } + debug("run", $sshlogin->string(), + " has ", $sshlogin->jobs_running(), + " out of ", $sshlogin->max_jobs_running(), + " jobs running. Start another.\n"); + if(start_another_job($sshlogin) == 0) { + # No more jobs to start on this $sshlogin + debug("run","No jobs started on ", + $sshlogin->string(), "\n"); + next; + } + $sshlogin->inc_jobs_running(); + $sshlogin->set_last_login_at(::now()); + $jobs_started++; + } + debug("run","Running jobs after on ", $sshlogin->string(), ": ", + $sshlogin->jobs_running(), " of ", + $sshlogin->max_jobs_running(), "\n"); + } return $jobs_started; } @@ -2912,8 +2908,8 @@ } # * because of loadavg # * because of too little time between each ssh login. - start_more_jobs(); $sleep = ::reap_usleep($sleep); + start_more_jobs(); if($Global::max_jobs_running == 0) { ::warning("There are no job slots available. 
Increase --jobs."); } @@ -2921,6 +2917,7 @@ while($opt::sqlmaster and not $Global::sql->finished()) { # SQL master $sleep = ::reap_usleep($sleep); + start_more_jobs(); if($Global::start_sqlworker) { # Start an SQL worker as we are now sure there is work to do $Global::start_sqlworker = 0; @@ -4054,27 +4051,24 @@ # @pids_reaped = PIDs of children finished my $stiff; my @pids_reaped; - my $children_reaped = 0; + my $total_reaped; debug("run", "Reaper "); - # For efficiency surround with BEGIN/COMMIT when using $opt::sqlmaster - $opt::sqlmaster and $Global::sql->run("BEGIN;"); - while (($stiff = waitpid(-1, &WNOHANG)) > 0) { + if (($stiff = waitpid(-1, &WNOHANG)) > 0) { # $stiff = pid of dead process if(wantarray) { push(@pids_reaped,$stiff); - } else { - $children_reaped++; } - if($Global::sshmaster{$stiff}) { - # This is one of the ssh -M: ignore - next; - } - my $job = $Global::running{$stiff}; + $total_reaped++; + if($Global::sshmaster{$stiff}) { + # This is one of the ssh -M: ignore + next; + } + my $job = $Global::running{$stiff}; # '-a <(seq 10)' will give us a pid not in %Global::running - $job or next; - delete $Global::running{$stiff}; - $Global::total_running--; + $job or return 0; + delete $Global::running{$stiff}; + $Global::total_running--; if($job->{'commandline'}{'skip'}) { # $job->skip() was called $job->set_exitstatus(-2); @@ -4084,11 +4078,12 @@ $job->set_exitsignal($? & 127); } - debug("run", "seq ",$job->seq()," died (", $job->exitstatus(), ")"); - $job->set_endtime(::now()); - my $sshlogin = $job->sshlogin(); - $sshlogin->dec_jobs_running(); - if($job->should_be_retried()) { + debug("run", "seq ",$job->seq()," died (", $job->exitstatus(), ")"); + $job->set_endtime(::now()); + my $sshlogin = $job->sshlogin(); + $sshlogin->dec_jobs_running(); + if($job->should_be_retried()) { + # Free up file handles $job->free_ressources(); } else { # The job is done @@ -4109,18 +4104,17 @@ ::kill_sleep_seq($job->pid()); ::killall(); ::wait_and_exit($Global::halt_exitstatus); - } - } + } + } $job->cleanup(); - start_more_jobs(); + if($opt::progress) { my %progress = progress(); ::status_no_nl("\r",$progress{'status'}); } } - $opt::sqlmaster and $Global::sql->run("COMMIT;"); debug("run", "done "); - return wantarray ? @pids_reaped : $children_reaped; + return wantarray ? @pids_reaped : $total_reaped; } @@ -5102,6 +5096,7 @@ # $ms*1.1 if no children reaped my $ms = shift; if(reaper()) { + while(reaper()) {} if(not $Global::total_completed % 100) { if($opt::timeout) { # Force cleaning the timeout queue for every 1000 jobs @@ -7947,7 +7942,7 @@ $command = 'cat > $PARALLEL_TMP;'. $command.";". postpone_exit_and_cleanup(). - '$PARALLEL_TMP'; + '$PARALLEL_TMP'; } elsif($opt::fifo) { # Prepend fifo-wrapper. 
In essence: # mkfifo {} @@ -8280,7 +8275,7 @@ my $self = shift; my $command = shift; # TODO test that *sh -c 'parallel --env' use *sh - if(not defined $self->{'sshlogin_wrap'}) { + if(not defined $self->{'sshlogin_wrap'}{$command}) { my $sshlogin = $self->sshlogin(); my $serverlogin = $sshlogin->serverlogin(); my $quoted_remote_command; @@ -8310,12 +8305,12 @@ $command =~ /\n/) { # csh does not deal well with > 1000 chars in one word # csh does not deal well with $ENV with \n - $self->{'sshlogin_wrap'} = base64_wrap($perl_code); + $self->{'sshlogin_wrap'}{$command} = base64_wrap($perl_code); } else { - $self->{'sshlogin_wrap'} = "perl -e ".::Q($perl_code); + $self->{'sshlogin_wrap'}{$command} = "perl -e ".::Q($perl_code); } } else { - $self->{'sshlogin_wrap'} = $command; + $self->{'sshlogin_wrap'}{$command} = $command; } } else { my $pwd = ""; @@ -8357,7 +8352,7 @@ # We need to save the exit status of the job $post = '_EXIT_status=$?; ' . $post . ' exit $_EXIT_status;'; } - $self->{'sshlogin_wrap'} = + $self->{'sshlogin_wrap'}{$command} = ($pre . "$sshcmd $serverlogin -- exec " . $quoted_remote_command @@ -8365,7 +8360,7 @@ . $post); } } - return $self->{'sshlogin_wrap'}; + return $self->{'sshlogin_wrap'}{$command}; } sub transfer { diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/parallel-20181022/src/parallel_alternatives.7 new/parallel-20181122/src/parallel_alternatives.7 --- old/parallel-20181022/src/parallel_alternatives.7 2018-10-22 20:46:44.000000000 +0200 +++ new/parallel-20181122/src/parallel_alternatives.7 2018-11-10 15:01:55.000000000 +0100 @@ -129,7 +129,7 @@ .\" ======================================================================== .\" .IX Title "PARALLEL_ALTERNATIVES 7" -.TH PARALLEL_ALTERNATIVES 7 "2018-10-22" "20180922" "parallel" +.TH PARALLEL_ALTERNATIVES 7 "2018-10-23" "20181022" "parallel" .\" For nroff, turn off justification. Always turn off hyphenation; it makes .\" way too many mistakes in technical documents. .if n .ad l @@ -1936,12 +1936,17 @@ dependency graph described in a file, so this is similar to \fBmake\fR. .PP https://github.com/cetra3/lorikeet (Last checked: 2018\-10) +.SS "\s-1DIFFERENCES BETWEEN\s0 spp \s-1AND GNU\s0 Parallel" +.IX Subsection "DIFFERENCES BETWEEN spp AND GNU Parallel" +\&\fBspp\fR can run jobs in parallel. \fBspp\fR does not use a command +template to generate the jobs, but requires jobs to be in a +file. Output from the jobs mix. +.PP +https://github.com/john01dav/spp .SS "Todo" .IX Subsection "Todo" Url for spread .PP -https://github.com/john01dav/spp -.PP https://github.com/amritb/with\-this.git .PP https://github.com/fd0/machma Requires Go >= 1.7. 
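The "experimental simpler job flow control" from the changelog is what the src/parallel hunks above implement: start_more_jobs() drops its internal do/while over $jobs_started_this_round and now makes a single round-robin pass over the --sshlogins, returning how many jobs it started, while its callers drive the repetition (the startup call becomes while(start_more_jobs()) {}, and the wait loops call start_more_jobs() again after each ::reap_usleep()). In the same spirit, reaper() now reaps at most one child per call and ::reap_usleep() drains it with while(reaper()) {}. As a rough, self-contained sketch only — with made-up hosts, slots and job names, not the real GNU parallel code — the caller-driven loop has this shape:

    #!/usr/bin/perl
    # Rough sketch (not the real GNU parallel code) of the caller-driven
    # flow control: one call to start_more_jobs() makes a single
    # round-robin pass, starting at most one job per host, and returns
    # how many jobs it started.  Hosts, slots and job names are made up.
    use strict;
    use warnings;

    my @queue = map { "job$_" } 1 .. 7;                 # pending jobs
    my %free_slots = (localhost => 2, 'server2' => 2);  # free slots per host

    sub start_more_jobs {
        my $started = 0;
        for my $host (sort keys %free_slots) {
            last unless @queue;                 # queue empty: stop the pass
            next unless $free_slots{$host} > 0; # host full: try the next one
            my $job = shift @queue;
            $free_slots{$host}--;
            print "started $job on $host\n";
            $started++;
        }
        return $started;
    }

    # 20181122 pattern: the repetition lives in the caller, e.g.
    # src/parallel now runs "while(start_more_jobs()) {}" at startup and
    # calls start_more_jobs() again after every ::reap_usleep().
    while (start_more_jobs()) { }

The point of the restructuring is that a single call does a bounded amount of work, so the same pass can be re-run from the wait loops without the nested-loop bookkeeping the old round counter required.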
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/parallel-20181022/src/parallel_alternatives.html new/parallel-20181122/src/parallel_alternatives.html --- old/parallel-20181022/src/parallel_alternatives.html 2018-10-22 20:46:54.000000000 +0200 +++ new/parallel-20181122/src/parallel_alternatives.html 2018-11-10 15:01:55.000000000 +0100 @@ -68,6 +68,7 @@ <li><a href="#DIFFERENCES-BETWEEN-map-soveran-AND-GNU-Parallel">DIFFERENCES BETWEEN map(soveran) AND GNU Parallel</a></li> <li><a href="#DIFFERENCES-BETWEEN-loop-AND-GNU-Parallel">DIFFERENCES BETWEEN loop AND GNU Parallel</a></li> <li><a href="#DIFFERENCES-BETWEEN-lorikeet-AND-GNU-Parallel">DIFFERENCES BETWEEN lorikeet AND GNU Parallel</a></li> + <li><a href="#DIFFERENCES-BETWEEN-spp-AND-GNU-Parallel">DIFFERENCES BETWEEN spp AND GNU Parallel</a></li> <li><a href="#Todo">Todo</a></li> </ul> </li> @@ -1527,12 +1528,16 @@ <p>https://github.com/cetra3/lorikeet (Last checked: 2018-10)</p> -<h2 id="Todo">Todo</h2> +<h2 id="DIFFERENCES-BETWEEN-spp-AND-GNU-Parallel">DIFFERENCES BETWEEN spp AND GNU Parallel</h2> -<p>Url for spread</p> +<p><b>spp</b> can run jobs in parallel. <b>spp</b> does not use a command template to generate the jobs, but requires jobs to be in a file. Output from the jobs mix.</p> <p>https://github.com/john01dav/spp</p> +<h2 id="Todo">Todo</h2> + +<p>Url for spread</p> + <p>https://github.com/amritb/with-this.git</p> <p>https://github.com/fd0/machma Requires Go >= 1.7.</p> Binary files old/parallel-20181022/src/parallel_alternatives.pdf and new/parallel-20181122/src/parallel_alternatives.pdf differ diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/parallel-20181022/src/parallel_alternatives.pod new/parallel-20181122/src/parallel_alternatives.pod --- old/parallel-20181022/src/parallel_alternatives.pod 2018-10-22 20:17:25.000000000 +0200 +++ new/parallel-20181122/src/parallel_alternatives.pod 2018-10-24 01:07:54.000000000 +0200 @@ -1716,12 +1716,19 @@ https://github.com/cetra3/lorikeet (Last checked: 2018-10) +=head2 DIFFERENCES BETWEEN spp AND GNU Parallel + +B<spp> can run jobs in parallel. B<spp> does not use a command +template to generate the jobs, but requires jobs to be in a +file. Output from the jobs mix. + +https://github.com/john01dav/spp =head2 Todo Url for spread -https://github.com/john01dav/spp + https://github.com/amritb/with-this.git diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/parallel-20181022/src/parallel_alternatives.texi new/parallel-20181122/src/parallel_alternatives.texi --- old/parallel-20181022/src/parallel_alternatives.texi 2018-10-22 20:47:26.000000000 +0200 +++ new/parallel-20181122/src/parallel_alternatives.texi 2018-11-10 15:01:55.000000000 +0100 @@ -68,6 +68,7 @@ * DIFFERENCES BETWEEN map(soveran) AND GNU Parallel:: * DIFFERENCES BETWEEN loop AND GNU Parallel:: * DIFFERENCES BETWEEN lorikeet AND GNU Parallel:: +* DIFFERENCES BETWEEN spp AND GNU Parallel:: * Todo:: @end menu @@ -1962,13 +1963,20 @@ https://github.com/cetra3/lorikeet (Last checked: 2018-10) +@node DIFFERENCES BETWEEN spp AND GNU Parallel +@section DIFFERENCES BETWEEN spp AND GNU Parallel + +@strong{spp} can run jobs in parallel. @strong{spp} does not use a command +template to generate the jobs, but requires jobs to be in a +file. Output from the jobs mix. 
+ +https://github.com/john01dav/spp + @node Todo @section Todo Url for spread -https://github.com/john01dav/spp - https://github.com/amritb/with-this.git https://github.com/fd0/machma Requires Go >= 1.7. diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/parallel-20181022/src/sem new/parallel-20181122/src/sem --- old/parallel-20181022/src/sem 2018-10-23 00:48:01.000000000 +0200 +++ new/parallel-20181122/src/sem 2018-11-23 00:32:16.000000000 +0100 @@ -120,7 +120,7 @@ $sem = acquire_semaphore(); } $SIG{TERM} = \&start_no_new_jobs; -start_more_jobs(); +while(start_more_jobs()) {} if($opt::tee) { # All jobs must be running in parallel for --tee $Global::start_no_new_jobs = 1; @@ -620,6 +620,7 @@ my $sleep =1; while($Global::total_running > 0) { $sleep = ::reap_usleep($sleep); + start_more_jobs(); } } $Global::start_no_new_jobs ||= 1; @@ -1554,7 +1555,7 @@ sub init_globals { # Defaults: - $Global::version = 20181022; + $Global::version = 20181122; $Global::progname = 'parallel'; $Global::infinity = 2**31; $Global::debug = 0; @@ -2668,7 +2669,6 @@ # Returns: # $jobs_started = number of jobs started my $jobs_started = 0; - my $jobs_started_this_round = 0; if($Global::start_no_new_jobs) { return $jobs_started; } @@ -2678,65 +2678,61 @@ changed_procs_file(); changed_sshloginfile(); } - do { - $jobs_started_this_round = 0; - # This will start 1 job on each --sshlogin (if possible) - # thus distribute the jobs on the --sshlogins round robin - for my $sshlogin (values %Global::host) { - if($Global::JobQueue->empty() and not $opt::pipe) { - # No more jobs in the queue - last; - } - debug("run", "Running jobs before on ", $sshlogin->string(), ": ", - $sshlogin->jobs_running(), "\n"); - if ($sshlogin->jobs_running() < $sshlogin->max_jobs_running()) { - if($opt::delay - and - $opt::delay > ::now() - $Global::newest_starttime) { - # It has been too short since last start - next; - } - if($opt::load and $sshlogin->loadavg_too_high()) { - # The load is too high or unknown - next; - } - if($opt::noswap and $sshlogin->swapping()) { - # The server is swapping - next; - } - if($opt::limit and $sshlogin->limit()) { - # Over limit - next; - } - if($opt::memfree and $sshlogin->memfree() < $opt::memfree) { - # The server has not enough mem free - ::debug("mem", "Not starting job: not enough mem\n"); - next; - } - if($sshlogin->too_fast_remote_login()) { - # It has been too short since - next; - } - debug("run", $sshlogin->string(), - " has ", $sshlogin->jobs_running(), - " out of ", $sshlogin->max_jobs_running(), - " jobs running. 
Start another.\n"); - if(start_another_job($sshlogin) == 0) { - # No more jobs to start on this $sshlogin - debug("run","No jobs started on ", - $sshlogin->string(), "\n"); - next; - } - $sshlogin->inc_jobs_running(); - $sshlogin->set_last_login_at(::now()); - $jobs_started++; - $jobs_started_this_round++; - } - debug("run","Running jobs after on ", $sshlogin->string(), ": ", - $sshlogin->jobs_running(), " of ", - $sshlogin->max_jobs_running(), "\n"); + # This will start 1 job on each --sshlogin (if possible) + # thus distribute the jobs on the --sshlogins round robin + for my $sshlogin (values %Global::host) { + if($Global::JobQueue->empty() and not $opt::pipe) { + # No more jobs in the queue + last; } - } while($jobs_started_this_round); + debug("run", "Running jobs before on ", $sshlogin->string(), ": ", + $sshlogin->jobs_running(), "\n"); + if ($sshlogin->jobs_running() < $sshlogin->max_jobs_running()) { + if($opt::delay + and + $opt::delay > ::now() - $Global::newest_starttime) { + # It has been too short since last start + next; + } + if($opt::load and $sshlogin->loadavg_too_high()) { + # The load is too high or unknown + next; + } + if($opt::noswap and $sshlogin->swapping()) { + # The server is swapping + next; + } + if($opt::limit and $sshlogin->limit()) { + # Over limit + next; + } + if($opt::memfree and $sshlogin->memfree() < $opt::memfree) { + # The server has not enough mem free + ::debug("mem", "Not starting job: not enough mem\n"); + next; + } + if($sshlogin->too_fast_remote_login()) { + # It has been too short since + next; + } + debug("run", $sshlogin->string(), + " has ", $sshlogin->jobs_running(), + " out of ", $sshlogin->max_jobs_running(), + " jobs running. Start another.\n"); + if(start_another_job($sshlogin) == 0) { + # No more jobs to start on this $sshlogin + debug("run","No jobs started on ", + $sshlogin->string(), "\n"); + next; + } + $sshlogin->inc_jobs_running(); + $sshlogin->set_last_login_at(::now()); + $jobs_started++; + } + debug("run","Running jobs after on ", $sshlogin->string(), ": ", + $sshlogin->jobs_running(), " of ", + $sshlogin->max_jobs_running(), "\n"); + } return $jobs_started; } @@ -2912,8 +2908,8 @@ } # * because of loadavg # * because of too little time between each ssh login. - start_more_jobs(); $sleep = ::reap_usleep($sleep); + start_more_jobs(); if($Global::max_jobs_running == 0) { ::warning("There are no job slots available. 
Increase --jobs."); } @@ -2921,6 +2917,7 @@ while($opt::sqlmaster and not $Global::sql->finished()) { # SQL master $sleep = ::reap_usleep($sleep); + start_more_jobs(); if($Global::start_sqlworker) { # Start an SQL worker as we are now sure there is work to do $Global::start_sqlworker = 0; @@ -4054,27 +4051,24 @@ # @pids_reaped = PIDs of children finished my $stiff; my @pids_reaped; - my $children_reaped = 0; + my $total_reaped; debug("run", "Reaper "); - # For efficiency surround with BEGIN/COMMIT when using $opt::sqlmaster - $opt::sqlmaster and $Global::sql->run("BEGIN;"); - while (($stiff = waitpid(-1, &WNOHANG)) > 0) { + if (($stiff = waitpid(-1, &WNOHANG)) > 0) { # $stiff = pid of dead process if(wantarray) { push(@pids_reaped,$stiff); - } else { - $children_reaped++; } - if($Global::sshmaster{$stiff}) { - # This is one of the ssh -M: ignore - next; - } - my $job = $Global::running{$stiff}; + $total_reaped++; + if($Global::sshmaster{$stiff}) { + # This is one of the ssh -M: ignore + next; + } + my $job = $Global::running{$stiff}; # '-a <(seq 10)' will give us a pid not in %Global::running - $job or next; - delete $Global::running{$stiff}; - $Global::total_running--; + $job or return 0; + delete $Global::running{$stiff}; + $Global::total_running--; if($job->{'commandline'}{'skip'}) { # $job->skip() was called $job->set_exitstatus(-2); @@ -4084,11 +4078,12 @@ $job->set_exitsignal($? & 127); } - debug("run", "seq ",$job->seq()," died (", $job->exitstatus(), ")"); - $job->set_endtime(::now()); - my $sshlogin = $job->sshlogin(); - $sshlogin->dec_jobs_running(); - if($job->should_be_retried()) { + debug("run", "seq ",$job->seq()," died (", $job->exitstatus(), ")"); + $job->set_endtime(::now()); + my $sshlogin = $job->sshlogin(); + $sshlogin->dec_jobs_running(); + if($job->should_be_retried()) { + # Free up file handles $job->free_ressources(); } else { # The job is done @@ -4109,18 +4104,17 @@ ::kill_sleep_seq($job->pid()); ::killall(); ::wait_and_exit($Global::halt_exitstatus); - } - } + } + } $job->cleanup(); - start_more_jobs(); + if($opt::progress) { my %progress = progress(); ::status_no_nl("\r",$progress{'status'}); } } - $opt::sqlmaster and $Global::sql->run("COMMIT;"); debug("run", "done "); - return wantarray ? @pids_reaped : $children_reaped; + return wantarray ? @pids_reaped : $total_reaped; } @@ -5102,6 +5096,7 @@ # $ms*1.1 if no children reaped my $ms = shift; if(reaper()) { + while(reaper()) {} if(not $Global::total_completed % 100) { if($opt::timeout) { # Force cleaning the timeout queue for every 1000 jobs @@ -7947,7 +7942,7 @@ $command = 'cat > $PARALLEL_TMP;'. $command.";". postpone_exit_and_cleanup(). - '$PARALLEL_TMP'; + '$PARALLEL_TMP'; } elsif($opt::fifo) { # Prepend fifo-wrapper. 
In essence: # mkfifo {} @@ -8280,7 +8275,7 @@ my $self = shift; my $command = shift; # TODO test that *sh -c 'parallel --env' use *sh - if(not defined $self->{'sshlogin_wrap'}) { + if(not defined $self->{'sshlogin_wrap'}{$command}) { my $sshlogin = $self->sshlogin(); my $serverlogin = $sshlogin->serverlogin(); my $quoted_remote_command; @@ -8310,12 +8305,12 @@ $command =~ /\n/) { # csh does not deal well with > 1000 chars in one word # csh does not deal well with $ENV with \n - $self->{'sshlogin_wrap'} = base64_wrap($perl_code); + $self->{'sshlogin_wrap'}{$command} = base64_wrap($perl_code); } else { - $self->{'sshlogin_wrap'} = "perl -e ".::Q($perl_code); + $self->{'sshlogin_wrap'}{$command} = "perl -e ".::Q($perl_code); } } else { - $self->{'sshlogin_wrap'} = $command; + $self->{'sshlogin_wrap'}{$command} = $command; } } else { my $pwd = ""; @@ -8357,7 +8352,7 @@ # We need to save the exit status of the job $post = '_EXIT_status=$?; ' . $post . ' exit $_EXIT_status;'; } - $self->{'sshlogin_wrap'} = + $self->{'sshlogin_wrap'}{$command} = ($pre . "$sshcmd $serverlogin -- exec " . $quoted_remote_command @@ -8365,7 +8360,7 @@ . $post); } } - return $self->{'sshlogin_wrap'}; + return $self->{'sshlogin_wrap'}{$command}; } sub transfer { diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/parallel-20181022/src/sql new/parallel-20181122/src/sql --- old/parallel-20181022/src/sql 2018-10-23 00:48:01.000000000 +0200 +++ new/parallel-20181122/src/sql 2018-11-23 00:32:16.000000000 +0100 @@ -576,7 +576,7 @@ exit ($err); sub parse_options { - $Global::version = 20181022; + $Global::version = 20181122; $Global::progname = 'sql'; # This must be done first as this may exec myself diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/parallel-20181022/src/sql.1 new/parallel-20181122/src/sql.1 --- old/parallel-20181022/src/sql.1 2018-10-23 00:48:24.000000000 +0200 +++ new/parallel-20181122/src/sql.1 2018-11-23 00:32:31.000000000 +0100 @@ -129,7 +129,7 @@ .\" ======================================================================== .\" .IX Title "SQL 1" -.TH SQL 1 "2018-10-22" "20181022" "parallel" +.TH SQL 1 "2018-11-22" "20181122" "parallel" .\" For nroff, turn off justification. Always turn off hyphenation; it makes .\" way too many mistakes in technical documents. 
.if n .ad l ++++++ parallel-20181022.tar.bz2.sig -> parallel-20181122.tar.bz2.sig ++++++ --- /work/SRC/openSUSE:Factory/gnu_parallel/parallel-20181022.tar.bz2.sig 2018-11-15 12:41:41.230165493 +0100 +++ /work/SRC/openSUSE:Factory/.gnu_parallel.new.19453/parallel-20181122.tar.bz2.sig 2018-11-26 10:32:03.960904233 +0100 @@ -2,7 +2,7 @@ # To check the signature run: # echo | gpg -# gpg --auto-key-locate keyserver --keyserver-options auto-key-retrieve parallel-20181022.tar.bz2.sig +# gpg --auto-key-locate keyserver --keyserver-options auto-key-retrieve parallel-20181122.tar.bz2.sig echo | gpg 2>/dev/null gpg --auto-key-locate keyserver --keyserver-options auto-key-retrieve $0 @@ -10,32 +10,32 @@ -----BEGIN PGP SIGNATURE----- -iQTwBAABCgAGBQJbzlrhAAoJENGrRRaIiIiIjJkmoINuu2EzJxlOlIPwri2NZQ2R -F2URNBuzaW3iP+dxiAN3Lz2ABnfPv71+MuRTwPkhu6DhoPaO3FS6wz9WEPknxcE8 -/xejg8xs5fLFNL4TTVemOsC5SsGOIjqpzLw6L5HV2iCzeYUdFDMcP9QRUca2lu0B -fVTXfuuOsUICb6F6VjCWfa+2cBXCpuMjg7SuWGTcFF0DyuussCQZcq3BGZNtSrhO -ipoOENKHqcme2dfRSZangWkd7wigxVOyNmUTg+qIq1/b1qhK1XBib0cp7fA6nFOd -SK0mZNav0jMxQHudBfEpRXRVT3C55uKi7vjqMuZA/XvPQaiYefd2vpFjLLQw0OSV -Oyw7P4CrgN7oUBHCiJfTqd+hk+u5kPfU3kmi5Yy87FNAvR5XDbw7+aYfy/mxpiwz -3PccSeikGe+HGzuWwLWEdEnnYeY6wcpBtREBHxeLycsVD+sKXLS2Gwx5b5mYeIrl -svfNxdw0bNRzFNt69mtq9AZMTPnMdP1K3IFyV72j3rzFyMhwUfHfE6sIoumex3tk -71x5FZXJn3ELqGKhVRHGKvnSZLu4ZPcQYG/Pd563Fz/8+bjeVPyY56AKF3VdViYP -CmEGH3aQ+QvT/CDWBdA9ymBx1zkZP/v2uBwZzmeWKvAdCOZ20WqLDYzyW3LgLigT -0gzLys+i35Q8+VCYILybEbJIdSLgniyahZXlPk+GMz0qUFCnRYuNdfFjBq/fDqTI -wf+6b2UFhSzZc492QO91z+Ano9sRJLMGOwGoebQhY8R6LhSfFLEnjCNbCYaIQSRI -yIgidGZenjHRfWaUYxJEEeYl/nPupFoiOAoCBuUwO+T+heeluoXW5QKiFYPKB5yl -46FTaGTcTn60uTfRmLgPwrfGKSUJOmZ1Xwgsqq9cfayaYJhHcUsiJE8qusjlPlVG -/DDaCJ0zfwkj5jklVA1H/swI3sIpG+dvcWkSlfqCK05uAgZX8sNZVSuSzZjJG0vr -/JYQGvIPnA02DL8H5AC66hZtbqAHXnfwdoPndWssTmDb62z3OKY3os1gLITAfMz4 -seSIivAxZYKmrjDNF09s/lF7ihv4Pi1X9iV1HIJaeyHKE3YqvPL5xpovFlBggnRy -ZsG3CmoBmATLE1jO3z/Pucm5LUG97sD3WR/sgm7NJJPAqAlX9jh+VdYC3AEPNEoR -xfKt6LUsdNv9VwpPJF8WrJwc5k6FRT4Td1XUrjZfXmJXPswIqBjlFsI1IameT+WA -AvMmmWVOkecK672XizBqObNAtIIJH0ER7a4DsPa/Pre8cs/Gy2a2GPajHu22DSsp -gV5SGefmc2PwUxun6L4ssuFCp+Cfzm3kDH3aoaCi8C8ljDGmlRABFL559I7d37qE -8d/jYTRjM4zXK59o41Dmew2Pu9154ht8mblelqwPdU3zF30BYsFubx7frSH7LSen -hzeTuBqpuYHfm23f48Mf4DpXdXJ3DzyldVjZ1pUufryTOMQPayYRTFvLwjUbW2jB -a1Xs+7r0GSsHxrWodZRZ3Ljl4crOpDsL77phhYkebfzfqhaboCcoPj9pF2fDJR/G -axIif1rSO8XBPtr3kPJllrV1uoqRQ4493oqmkqJcyqL4JPYQEgm2E0bEbIn9qwg9 -8l81eS0sLwIoSDTxpffM40Pw5w== -=m7d/ +iQTwBAABCgAGBQJb9z2JAAoJENGrRRaIiIiI6S0mn1HMIHl+3Wxog/PHiqA3N7Of +KRVZGzSTClo9dBFchs86bm20t8D7zFmTLdiSfTqh/CnNUlU530Fd2pSpcvAQECmQ +XmDrlWTkhuRYTWO/FS/Ngh0+IfVBt4ycJiO0p46OODxdp64ewXHJ8KstUy607uHv +cRnw1rKEqu8suUMx9rdZii6b+STAPNNG0/KyJnvuiiMtwVCZEFSTFS509lIKWsQH +ffkzqUNoNxy7hwkRXVD7zA9X35Iyh55TMsmhBDvn+6cDNv0AHyNHTXnQWqmok83A +0SHkK6jwndmFJWYZCNcmjykg8bW94IYO/ThoRzcthD54FgcTpKgutQtjlpSKqFBy +LjP/shHbNpJSDsnxwxcppQL2XvWz79LnTt5dKWQRQcj5Ijh0y1NiZnBP8xreVLrb +Rx+SN6gaiUtd1ig5hi/okfIRJnd2nCxAVyvrhS6cRy1XdYJf55lc3+a5p36YBT3Q +o0jmFzVsSeSnG0v17zrJiyP0SDJrmhFHLI+qN+jxLu9/lqvMqhCSzlDOjwhv0mGq +v1x3D4sYCX1cg61CvCl532hQILU0tf7duaMlAZNnsWz4cN2o9twFTi/TTQMbku8T +C05kMgbT4xQzs62LMZJ0iaD7itTGog1Rxov3Nu6i+3Tm4kv9iYKP8BdjDJ3WckGd +CBhDgBNU2OH0KGhw6rNqSsb+G+E0xZx5dO+7KByA/PCKz9n070Mhh3LUdS9lFvi3 +OO/CuaT6PGGCsTevfResqAFcKrpP+fxUWnyFF5pxVgNHpKcxX2by9RcGYDXQYpu4 +ntDnxq5idarHhj+kVzc8ehX6KVxuXVm3Arkx5QeWbVnf33lWak6rGqW+lUZlM3Dd +LZ2JKnantHjjIDzQBfMHMDt4/hPVroOwEu6uCSyWid2T80ty8DeNBH+XJY+kS8TT +zoPslLgp9KQedlApIr+2tAoiTI4n5wQArMAUx9MU0a7qWRHYIyXK3iyHM+63dJuw 
+vYkEi6sfn1/27EZWwiKDkLeIOR6jeZNdy7q43jjyGs05m1uhY/PfAQ6ofshlu+XR +na6gu0g7A24iOG89brptxDa2pfm0KSVjrC1t4WrjDSJ1X/JCCBAIaOAlYEskEOni +fWFYkcNEW/73b3uKFHTXl/Pjhcr0aK1xmybP1PlX/WDQpNwuNZulJpvZuY1bxAsa +4jzCjddL+VwO17d3auU8jKh/IUMUV62brOnTUeJ3lAC+aF6SF7SXVm61JTNOa1i2 +9wIt+5nJ3skQq7gs4S6IkJXSb3yCSvn2MfntULzcOc/5IaebEZ0Aya1RZRr/mqUt +nk/yhe0V/Jwnp0hVlsWIC4IftGk6mVUOio5mPfcJZ4fvEllLcQKvAAIvjGU0RMAr +LLMsuWyKydIY6LtjQil20/IAlkSz/948XyDfQjqsoho1UJoPp9yM8ilkAUijzkaA +2zTyOApT5Ts6w96WYSqMIrFwqii2ACP/xIv3g3XpupxPHPzwWFa2LrsyIhbR263X +hARwQLvugG96ef/aSEJJSZaGM6J02ntU5KjOGPBPcTx/tq1vh9YTnxhWX3G8YT7Z +Ivw4oS+wfw2fTerNqLwH+Ey38eMtsWPMojfJUo3D+dj+qnfTdcxjyVZEQWntceaB +xfgRRHFG5f9tZ6Gpu6pZhhPP6w== +=GJ5s -----END PGP SIGNATURE-----
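One more change that recurs in both src/parallel and src/sem above: sshlogin_wrap() now memoizes the wrapped command per command string, in $self->{'sshlogin_wrap'}{$command}, instead of in a single per-object scalar, so a Job that wraps different commands no longer returns a stale wrapper. Below is a minimal sketch of that memoization pattern only; the Job package, wrap_for() helper and counter are invented for the example and merely stand in for the real ssh/perl wrapping:

    #!/usr/bin/perl
    # Minimal sketch of the per-command cache used by sshlogin_wrap()
    # after this update.  Job, wrap_for() and the counter are invented
    # for the example; the real code builds an ssh/perl command line.
    use strict;
    use warnings;

    {
        package Job;

        my $computed = 0;   # how many times a wrapper was actually built

        sub new { return bless { sshlogin_wrap => {} }, shift }

        sub wrap_for {
            my ($self, $command) = @_;
            $computed++;
            return "ssh server -- exec $command";
        }

        sub sshlogin_wrap {
            my ($self, $command) = @_;
            # Cache keyed by $command instead of one scalar per Job object.
            if (not defined $self->{sshlogin_wrap}{$command}) {
                $self->{sshlogin_wrap}{$command} = $self->wrap_for($command);
            }
            return $self->{sshlogin_wrap}{$command};
        }

        sub computed { return $computed }
    }

    my $job = Job->new;
    $job->sshlogin_wrap("echo a") for 1 .. 3;   # built once, then cached
    $job->sshlogin_wrap("echo b");              # different key, built again
    print "wrappers built: ", Job::computed(), "\n";   # prints 2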