> It is extremely hard to tell the difference between a power user
> maxing out a server and a novice doing something that overloads the
> server. I have several times used GNU Parallel causing a cpu load of >
> 1000, because it was faster to complete my task that way.

I'm curious about that: What kind of job works better on an
overloaded system? To my knowledge, reaching 100% load (not to
be confused with running n jobs in parallel on an n-core system)
is the best one can do.

> GNU Parallel is made for power users and that is its primary goal. If
> novices can use it as well, then that is fine, but GNU Parallel will
> not shield beginners against mistakes if that makes it harder to use
> for power users.

I understand (and even advocate) that. I really hate the idea of
an "rm='rm -i'" shell alias, which can be seen on so many
installations today. (In fact we have it here, and I just don't
dare to remove it, as people may have got used to it and might
start crying if 'rm' did what it was written to do.)
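
For illustration, here is the kind of alias I mean, together with
the usual ways around it (the profile.d path is only a common
convention, not necessarily where it lives on a given system):

  # e.g. in /etc/profile.d/aliases.sh
  alias rm='rm -i'     # prompt before every removal

  # power users bypass it routinely, so it protects nobody for long:
  \rm file.txt         # a leading backslash skips alias expansion
  command rm file.txt  # 'command' ignores aliases and shell functions
  rm -f file.txt       # a later -f overrides the -i from the alias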

I just hoped that there was a way to make GNU Parallel a bit
more fail-safe without making it harder for power users.

>> In this case: Shouldn't GNU parallel detect a situation like
>> this ("transfer to NFS homes") and exit with an error?
> 
> Definitely no. I use multiple systems, some have nfs-homes and I
> want to be able to --transfer to those.

I see, so transfer to NFS is even a desired use case for you. OK.
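
(Just to make sure I understand that use case, a minimal sketch
of it as I read it; the hostname and filenames are made up:

  # copy each input file to the remote host, run the job there,
  # fetch the result back and remove the transferred copies:
  parallel --sshlogin server.example.com \
           --transfer --return {.}.out --cleanup \
           'gzip -c -9 {} > {.}.out' ::: big-*.log

--transfer --return {.}.out --cleanup can also be abbreviated
to --trc {.}.out.)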

For the record: I decided to preset PARALLEL on our hosts to
prevent accidental damage in the future:

  PARALLEL='--load 100% --nice 10 --noswap --workdir /scratch'

("--load 100%" and "--noswap" might be kind of redundant here)

Thanks
Thomas
