Some time ago I noticed that an awk command piped into a long-running nmap pipeline caused the overall script to save its output file relatively frequently, providing some assurance that the script was making progress and leaving a record of its accomplishments. That occasionally allowed me to restart the script after an interruption (as in the case of a loss of mains power) and pick up where it had stopped, by truncating the source file and thereby avoiding having to rerun the entire source file from the beginning.

Here's such a script:
awk '{print $1}' 'B-May2020-L.768-secondary.txt' \
  | sudo nmap -6 -Pn -sn -T4 --max-retries 16 -iL '-' -oG - \
  | grep "Host:" '-' \
  | awk '{print $2,$3}' '-' \
  | sed 's/()/(No_DNS)/g' \
  | tr -d '()' \
  | uniq -u > May2020-L.768-secondary.oGnMap.txt

Strictly speaking, there is no other need for
awk '{print $1}' 'B-May2020-L.768-secondary.txt'
because it isn't actually changing the data, but it has the extremely helpful and protective side effect of causing the overall pipeline to save its output to the HDD frequently.
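If that protective effect really comes from output buffering, one general-purpose way to get the same behaviour without a redundant awk pass is GNU coreutils' stdbuf, which can force line buffering on individual pipeline stages. A minimal sketch, with placeholder data and a placeholder output file, assuming stdbuf is installed:

```shell
# Force each stage to line-buffer its stdout, so results reach
# the output file one line at a time instead of accumulating in
# a multi-kilobyte block buffer first.
printf 'alpha\nbeta\ngamma\n' \
  | stdbuf -oL tr 'a-z' 'A-Z' \
  | stdbuf -oL grep 'A' > /tmp/stdbuf-demo.txt
cat /tmp/stdbuf-demo.txt
```

In a real pipeline the same prefix would go in front of the filtering stages (the sed, tr, and uniq steps, for example) rather than the demonstration commands shown here.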

Now I am using a paste command immediately before the nmap stage, but this similar use of a pipe is not causing any periodic saves to take place. With fifteen such scripts running at once,
a lot of data can disappear if the mains power is lost in a windstorm.

The source file in each example is simply a list of IPv6 or IPv4 addresses, made up partly of real and partly of randomly generated blocks, with nothing that requires rearrangement or sorting.

A crude way of accomplishing this would be to insert a similar redundant awk command between the paste and nmap stages of the main script, but is there a more geek-like way of forcing the script to make periodic saves? That method need not be especially efficient, since my network's sending/receiving speeds appear to be the rate-limiting factor, presently about 30 kB/second.
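One slightly less crude stand-in for the redundant awk pass is an explicit pass-through filter that flushes after every record; awk's fflush() (supported by gawk and mawk) does exactly that. A sketch with placeholder input and a placeholder output file:

```shell
# A do-nothing filter: copies stdin to stdout unchanged, but
# flushes after every line, so the downstream redirection is
# written to disk continuously rather than when a buffer fills.
printf 'a\nb\nc\n' \
  | awk '{ print; fflush() }' > /tmp/fflush-demo.txt
cat /tmp/fflush-demo.txt
```

In the pipelines above, this filter would sit where the redundant awk currently does, for example between the paste stage and nmap.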

George Langford
