Hey, everyone.

I noticed that parallel seems to have no shell completions, which is not
convenient: parallel now has about 190 command-line options, so they are hard
to remember. Could parallel ship shell completions?

Here is the zsh completion file I use.

```zsh
#compdef parallel

setopt localoptions extended_glob

local -a _comp_priv_prefix

_arguments \
  {--null,-0}'[Use NUL as delimiter]' \
  {--arg-file,-a}'[Use input-file as input source]:input-file:_files' \
  --arg-file-sep'[Use sep-str instead of :::: as separator string between 
command and argument files]:sep-str' \
  --arg-sep'[Use sep-str instead of ::: as separator string]:sep-str' \
  --bar'[Show progress as a progress bar]' \
  {--basefile,--bf}'[file will be transferred to each sshlogin before first job 
is started]:file:_files' \
  {--basenamereplace,--bnr}'[Use the replacement string replace-str instead of 
{/} for basename of input line]:replace-str' \
  {--basenameextensionreplace,--bner}'[Use the replacement string replace-str 
instead of {/.} for basename of input line without extension]:replace-str' \
  --bin'[Use binexpr as binning key and bin input to the jobs]:binexpr' \
  --bg'[Run command in background]' \
  {--bibtex,--citation}'[Print the citation notice and BibTeX entry for GNU 
parallel, silence citation notice for all future runs, and exit. It will not 
run any commands]' \
  {--block,--block-size}'[Size of block in bytes to read at a time]:size' \
  {--block-timeout,--bt}'[Timeout for reading block when using 
--pipe]:duration' \
  --cat'[Create a temporary file with content]' \
  --cleanup'[Remove transferred files]' \
  {--colsep,-C}'[Column separator]:regexp' \
  --compress'[Compress temporary files]' \
  --compress-program'[Use prg for compressing temporary files]:prg:_commands' \
  --decompress-program'[Use prg for decompressing temporary 
files]:prg:_commands' \
  --csv'[Treat input as CSV-format]' \
  --ctag'[Color tag]:str' \
  --ctagstring'[Color tagstring]:str' \
  --delay'[Delay starting next job by duration]:duration' \
  {--delimiter,-d}'[Input items are terminated by delim]:delim' \
  {--dirnamereplace,--dnr}'[Use the replacement string replace-str instead of 
{//} for dirname of input line]:replace-str' \
  --dry-run'[Print the job to run on stdout (standard output), but do not run 
the job]' \
  {--eof,-e,-E}'[Set the end of file string to eof-str]:eof-str' \
  --embed'[Embed GNU parallel in a shell script]' \
  --env'[Copy environment variable var]:var:_vars' \
  --eta'[Show the estimated number of seconds before finishing]' \
  --fg'[Run command in foreground]' \
  --fifo'[Create a temporary fifo with content]' \
  --filter'[Only run jobs where filter is true]:filter' \
  --filter-hosts'[Remove down hosts]' \
  --gnu'[Behave like GNU parallel]' \
  --group'[Group output]' \
  --group-by'[Group input by value]:val' \
  '(- *)'{--help,-h}'[Print a summary of the options to GNU parallel and exit]' \
  {--halt-on-error,--halt}'[When should GNU parallel terminate]:val' \
  --header'[Use regexp as header]:regexp' \
  {--hostgroups,--hgrp}'[Enable hostgroups on arguments]' \
  -I'[Use the replacement string replace-str instead of {}]:replace-str' \
  {--replace,-i}'[This option is deprecated; use -I instead]:replace-str' \
  --joblog'[Logfile for executed jobs]:logfile:_files' \
  {--jobs,-j,--max-procs,-P}'[Add N to/Subtract N from/Multiply N% with/ the 
number of CPU threads or read parameter from file]:+N/-N/N%/N/procfile:_files' \
  {--keep-order,-k}'[Keep sequence of output same as the order of input]' \
  -L'[When used with --pipe: Read records of recsize]:recsize' \
  {--max-lines,-l}'[When used with --pipe: Read records of recsize lines]:recsize' \
  --limit'[Dynamic job limit]:"command args":((0\:"Below limit. Start another 
job" 1\:"Over limit. Start no jobs." 2\:"Way over limit. Kill the youngest 
job."))' \
  {--line-buffer,--lb}'[Buffer output on line basis]' \
  {--link,--xapply}'[Link input sources]' \
  --load'[Only start jobs if load is less than max-load]:max-load' \
  {--controlmaster,-M}'[Use the ControlMaster feature of ssh to make ssh 
connections faster]' \
  -m'[Multiple arguments]' \
  --memfree'[Minimum memory free when starting another job]:size' \
  --memsuspend'[Suspend jobs when there is less memory available]:size' \
  '(- *)'--minversion'[Print the version of GNU parallel and exit]:version:'"($(parallel --minversion 0))" \
  {--max-args,-n}'[Use at most max-args arguments per command line]:max-args' \
  {--max-replace-args,-N}'[Use at most max-args arguments per command 
line]:max-args' \
  --nonall'[--onall with no arguments]' \
  --onall'[Run all the jobs on all computers given with --sshlogin]' \
  {--output-as-files,--outputasfiles,--files}'[Save output to files]' \
  {--pipe,--spreadstdin}'[Spread input to jobs on stdin (standard input)]' \
  --pipe-part'[Pipe parts of a physical file]' \
  --plain'[Ignore --profile, $PARALLEL, and ~/.parallel/config]' \
  --plus'[Add more replacement strings]' \
  --progress'[Show progress of computations]' \
  --max-line-length-allowed'[Print maximal command line length]' \
  --number-of-cpus'[Print the number of physical CPU cores and exit 
(obsolete)]' \
  --number-of-cores'[Print the number of physical CPU cores and exit (used by 
GNU parallel itself to determine the number of physical CPU cores on remote 
computers)]' \
  --number-of-sockets'[Print the number of filled CPU sockets and exit (used by 
GNU parallel itself to determine the number of filled CPU sockets on remote 
computers)]' \
  --number-of-threads'[Print the number of hyperthreaded CPU cores and exit 
(used by GNU parallel itself to determine the number of hyperthreaded CPU cores 
on remote computers)]' \
  --no-keep-order'[Overrides an earlier --keep-order (e.g. if set in 
~/.parallel/config)]' \
  --nice'[Run the command at this niceness]:niceness:'"($(seq -20 19))" \
  {--interactive,-p}'[Ask user before running a job]' \
  --parens'[Use parensstring instead of {==}]:parensstring' \
  {--profile,-J}'[Use profile profilename for options]:profilename:_files' \
  {--quote,-q}'[Quote command]' \
  {--no-run-if-empty,-r}'[Do not run empty input]' \
  --noswap'[Do not start job if computer is swapping]' \
  --record-env'[Record environment]' \
  {--recstart,--recend}'[Split record between endstring and 
startstring]:endstring' \
  --regexp'[Use --regexp to interpret --recstart and --recend as regular 
expressions. This is slow, however]' \
  {--remove-rec-sep,--removerecsep,--rrs}'[Remove record separator]' \
  {--results,--res}'[Save the output into files]:name:_files' \
  --resume'[Resumes from the last unfinished job]' \
  --resume-failed'[Retry all failed and resume from the last unfinished job]' \
  --retry-failed'[Retry all failed jobs in joblog]' \
  --retries'[Try failing jobs n times]:n' \
  --return'[Transfer files from remote computers]:filename:_files' \
  {--round-robin,--round}'[Distribute chunks of standard input in a round robin 
fashion]' \
  --rpl'[Define replacement string]:"tag perl expression"' \
  --rsync-opts'[Options to pass on to rsync]:options' \
  {--max-chars,-s}'[Limit length of command]:max-chars' \
  --show-limits'[Display limits given by the operating system]' \
  --semaphore'[Work as a counting semaphore]' \
  {--semaphore-name,--id}'[Use name as the name of the semaphore]:name' \
  {--semaphore-timeout,--st}'[If secs > 0: If the semaphore is not released 
within secs seconds, take it anyway]:secs' \
  --seqreplace'[Use the replacement string replace-str instead of {#} for job 
sequence number]:replace-str' \
  --session'[Record names in current environment in $PARALLEL_IGNORED_NAMES and 
exit. Only used with env_parallel. Aliases, functions, and variables with names 
i]' \
  --shard'[Use shardexpr as shard key and shard input to the jobs]:shardexpr' \
  {--shebang,--hashbang}'[GNU parallel can be called as a shebang (#!) command 
as the first line of a script. The content of the file will be treated as 
inputsource]' \
  --shebang-wrap'[GNU parallel can parallelize scripts by wrapping the shebang 
line]' \
  --shell-quote'[Does not run the command but quotes it. Useful for making 
quoted composed commands for GNU parallel]' \
  --shuf'[Shuffle jobs]' \
  --skip-first-line'[Do not use the first line of input (used by GNU parallel 
itself when called with --shebang)]' \
  --sql'[Use --sql-master instead (obsolete)]:DBURL' \
  --sql-master'[Submit jobs via SQL server. DBURL must point to a table, which 
will contain the same information as --joblog, the values from the input 
sources (stored i]:DBURL' \
  --sql-and-worker'[--sql-master DBURL --sql-worker DBURL]:DBURL' \
  --sql-worker'[Execute jobs via SQL server. Read the input sources variables 
from the table pointed to by DBURL. The command on the command line should be 
the same a]:DBURL' \
  --ssh'[GNU parallel defaults to using ssh for remote access. This can be 
overridden with --ssh. It can also be set on a per server basis with 
--sshlogin]:sshcommand' \
  --ssh-delay'[Delay starting next ssh by duration]:duration' \
  {--sshlogin,-S}'[Distribute jobs to remote 
computers]:[@hostgroups/][ncpus/]sshlogin[,[@hostgroups/][ncpus/]sshlogin[,...]]
 or @hostgroup:_users' \
  {--sshloginfile,--slf}'[File with sshlogins. The file consists of sshlogins 
on separate lines. Empty lines and lines starting with # are ignored. 
Example]:filename:_files' \
  --slotreplace'[Use the replacement string replace-str instead of {%} for job 
slot number]:replace-str' \
  --silent'[Silent]' \
  {--template,--tmpl}'[Replace replacement strings in file and save it in 
repl]:file=repl:_files' \
  --tty'[Open terminal tty]' \
  --tag'[Tag lines with arguments]' \
  --tagstring'[Tag lines with a string]:str' \
  --tee'[Pipe all data to all jobs]' \
  --term-seq'[Termination sequence]:sequence' \
  --tmpdir'[Directory for temporary files]:dirname:_cd' \
  --tmux'[Use tmux for output. Start a tmux session and run each job in a 
window in that session. No other output will be produced]' \
  --tmuxpane'[Use tmux for output but put output into panes in the first 
window.  Useful if you want to monitor the progress of less than 100 concurrent 
jobs]' \
  --timeout'[Time out for command. If the command runs for longer than duration 
seconds it will get killed as per --term-seq]:duration' \
  {--verbose,-t}'[Print the job to be run on stderr (standard error)]' \
  --transfer'[Transfer files to remote computers]' \
  {--transferfile,--tf}'[Transfer filename to remote 
computers]:filename:_files' \
  --trc'[--transfer --return filename --cleanup]:filename:_files' \
  --trim'[Trim white space in input]:trim_method:(n l r lr rl)' \
  {--ungroup,-u}'[Output is printed as soon as possible and bypasses GNU 
parallel internal processing]' \
  {--extensionreplace,--er}'[Use the replacement string replace-str instead of {.} for input line without extension]:replace-str' \
  {--use-sockets-instead-of-threads,--use-cores-instead-of-threads,--use-cpus-instead-of-cores}'[Determine how GNU parallel counts the number of CPUs (obsolete)]' \
  -v'[Verbose]' \
  '(- *)'{--version,-V}'[Print the version of GNU parallel and exit]' \
  {--workdir,--wd}'[Jobs will be run in the dir mydir. (default: the current 
dir for the local machine, the login dir for remote computers)]:mydir:_cd' \
  --wait'[Wait for all commands to complete]' \
  -X'[Insert as many arguments as the command line length permits]' \
  {--exit,-x}'[Exit if the size (see the -s option) is exceeded]' \
  --xargs'[Insert as many arguments as the command line length permits]' \
  '(-)1:command: _command_names -e' \
  '*::arguments:{ _comp_priv_prefix=( '$words[1]' -n 
${(kv)opt_args[(I)(-[ugHEP]|--(user|group|set-home|preserve-env|preserve-groups))]}
 ) ; _normal }'
```
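
For readers not familiar with the zsh completion system: each line in the file
above is an `_arguments` spec of the form exclusion-list + option-names +
`[description]` + `:message:action`, and brace expansion turns one spec into
one spec per option alias. A minimal sketch with purely illustrative option
names (`mytool`, `--jobs`, `--logfile` are hypothetical):

```zsh
#compdef mytool

_arguments \
  '(- *)'{--help,-h}'[show help and exit]' \
  {--jobs,-j}'[run n jobs in parallel]:n' \
  --logfile'[write log to file]:logfile:_files' \
  '*:input file:_files'
```

The `'(- *)'` exclusion list marks an option as incompatible with everything
else, and `_files` is the built-in action that completes filenames.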

Put it at `/usr/share/zsh/site-functions/_parallel`, then run `compinit` to 
regenerate `~/.zcompdump`. After that, completion works:

```shell
❯ parallel -<Tab>
option
-0                                 Use NUL as delimiter
--arg-file-sep                     Use sep-str instead of :::: as separator string between command and argument files
--arg-file                         Use input-file as input source
--arg-sep                          Use sep-str instead of ::: as separator string
-a                                 Use input-file as input source
--bar                              Show progress as a progress bar
--basefile                         file will be transferred to each sshlogin before first job is started
...
❯ parallel <Tab>
external command
\[             import                  ptex2pdf
\$             import_interpolation    ptftopl
2to3           indxbib                 ptipython
2to3-3.10      inetcat                 ptipython3
...
❯ parallel ls -<Tab>
files
option
-1                                          single column output
-A                                          list all except . and ..
...
❯ parallel --limit <Tab>
"command args"
0   Below limit. Start another job
1   Over limit. Start no jobs.
2   Way over limit. Kill the youngest job.
```
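
If you cannot write to `/usr/share/zsh/site-functions`, any directory on
`$fpath` should also work; a sketch assuming a user-local `~/.zsh/completions`
directory (the path is arbitrary):

```zsh
mkdir -p ~/.zsh/completions
cp _parallel ~/.zsh/completions/_parallel

# In ~/.zshrc, before compinit runs:
fpath=(~/.zsh/completions $fpath)
autoload -Uz compinit && compinit
```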

Thanks!
