On Wed, Jul 11, 2018 at 3:45 AM, Daniel LaFlamme <misterlafla...@yahoo.com> wrote:

> Hit send prematurely.
>
> Later in the job run, I did start seeing messages like the following:
>
> "mux_client_request_session: session request failed: Session open refused by peer"
>
> All machines had the same low load during parallel's run. I have spot
> checked sshd_config on a couple of the servers and they are the same. They
> are also managed by puppet so they should all have the same configuration.
>
> My question is: what can I do to make what is going on under the hood more
> transparent so I can debug the problem further?
-t will show what GNU Parallel actually runs, at the moment it is run. You will get lines like:

  ssh -S /tmp/control_path_dir-dTad/ssh-%r@%h:%p server -- exec perl -e @GNU_Parallel\\\=split/_/,\\\"use_IPC::Open3\\\;_use_MIME::Base64\\\"\\\;eval\\\"@GNU_Parallel\\\"\\\;\\\$chld\\\=\\\$SIG\\\{CHLD\\\}\\\;\\\$SIG\\\{CHLD\\\}\\\=\\\"IGNORE\\\"\\\;my\\\$zip\\\=\\\(grep\\\{-x\\\$_\\\}\\\"/usr/local/bin/bzip2\\\"\\\)\\\[0\\\]\\\|\\\|\\\"bzip2\\\"\\\;open3\\\(\\\$in,\\\$out,\\\"\\\>\\\&STDERR\\\",\\\$zip,\\\"-dc\\\"\\\)\\\;if\\\(my\\\$perlpid\\\=fork\\\)\\\{close\\\$in\\\;\\\$eval\\\=join\\\"\\\",\\\<\\\$out\\\>\\\;close\\\$out\\\;\\\}else\\\{close\\\$out\\\;print\\\$in\\\(decode_base64\\\(join\\\"\\\",@ARGV\\\)\\\)\\\;close\\\$in\\\;exit\\\;\\\}wait\\\;\\\$SIG\\\{CHLD\\\}\\\=\\\$chld\\\;eval\\\$eval\\\; QlpoOTFBWSZTWVeMEdsAABWfgHV////u538ev//f/jABa1BsRJonknlDIaaNqNNA9RoAABoDQANCKnkwmmmkaABo0ABoNAAADIAJRETyIZTEwJk0aGmhkDQGIyGhobREc4dRYSEyZfhTy06+rZhU0MqRGZKf8Ax8DK579pAO0n455KoEgINRBiGuhRpZlhO3wPCiBlr6zK6GXeeLInJeWvFYUY4ISvgw6W5WsaU2Wr1q6QFB4rRqOcWIJy707Ay5C7FiIrLuy/IkSVHTPaON8Rm8dc4hA4KINl8gOkchptIKBW6VBgMJRomCqAtu2esksMlwL6BeDvOXJlyqESHrA8VmTNG0e8PAMyDM4xDRKSGnQQvEbBzUmk+Q9kI72hZQbNMcWvPqJCb2z32LrFGMRU4XjkZfhHdHa9V2DoDnTDMUYBsy77issbk6YjbTiCxK8gCTG/H+2oSpqb5rSr1kWpSlGtHmiVIJrZCqSpbGJT200f8MKHBMpRjrrcaSVVnmx24XapGhXnjCIHmAekOqOTFwIJY7QOBdssV1wGMYm2NoNJUbuLCMzUBJ34szrbLEUC+yxP4u5IpwoSCvGCO2

Given that the problem looks like it is related to ssh, you can try running some commands by hand:

  ssh -S /tmp/control_path_dir-dTad/ssh-%r@%h:%p server sleep 10

Since there _are_ jobs on all your hosts, I reckon it is a problem in your local setup.

If you find a solution, please post it. It may be that others have the same issue.

/Ole
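A note on that particular error: "mux_client_request_session: session request failed: Session open refused by peer" is commonly what you see when a shared ControlMaster connection hits sshd's MaxSessions limit on the server (the OpenSSH default is 10 sessions per network connection). Since GNU Parallel multiplexes many jobs over one ssh connection per host, a high jobs-per-host count can exceed that limit. The sketch below is illustrative only: it writes a sample config fragment to a temporary file instead of reading the real /etc/ssh/sshd_config, and the values shown are the stock OpenSSH defaults, not anything confirmed for this thread.

```shell
# Illustrative sshd_config fragment (stock OpenSSH defaults, written to a
# temp file so this sketch is self-contained; on a real server you would
# grep /etc/ssh/sshd_config instead).
cat > /tmp/sshd_config.sample <<'EOF'
MaxSessions 10
MaxStartups 10:30:100
EOF

# Check the session limits that govern multiplexed ssh connections.
grep -E '^Max(Sessions|Startups)' /tmp/sshd_config.sample
```

If this turns out to be the cause, raising MaxSessions on the servers or lowering the number of simultaneous jobs per host are the usual remedies. Adding -v to the manual ssh command above also makes the mux negotiation visible, which can confirm whether the refusal comes from the server side.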