Sounds like some great possible solutions. 

Either 

1) Reading the SSH config to pick up the correct known_hosts locations (and 
perhaps setting 'host_key_checking' to false automatically when the location 
is '/dev/null', since that's a common pattern - Vagrant does this by default; 
see https://docs.vagrantup.com/v2/cli/ssh_config.html )

or

2) A simple warning message when serialization is triggered by known_hosts 
handling, to save folks from some really tough debugging
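A rough sketch of what the detection in option 1 might look like (hypothetical helper names, and a deliberately simplified ssh_config parser - this is not actual Ansible code):

```python
import os
import re

def find_custom_known_hosts(config_path="~/.ssh/config"):
    """Scan an ssh_config file for UserKnownHostsFile overrides.

    Returns a list of (host_pattern, path) pairs. Hypothetical helper;
    real ssh_config parsing (Match blocks, includes, tokens) is richer.
    """
    results = []
    current_hosts = ["*"]  # options before any Host block apply globally
    path = os.path.expanduser(config_path)
    if not os.path.exists(path):
        return results
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # ssh_config allows "Keyword value" or "Keyword=value"
            parts = re.split(r"[=\s]+", line, maxsplit=1)
            if len(parts) != 2:
                continue
            key, value = parts[0].lower(), parts[1]
            if key == "host":
                current_hosts = value.split()
            elif key == "userknownhostsfile":
                for h in current_hosts:
                    results.append((h, value))
    return results

def should_warn(entries):
    # Warn (or disable host key checking) when any host pattern routes
    # known_hosts away from the default locations.
    defaults = {"~/.ssh/known_hosts", "~/.ssh/known_hosts2"}
    return any(v not in defaults for _, v in entries)
```

With a config like the Vagrant-style one above (`UserKnownHostsFile /dev/null` scoped to a `Host` pattern), `should_warn` would fire and a warning could be printed before any locking kicks in.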

Just lost a few hours debugging this issue. For several environments, I 
have a client's known_hosts locations set to custom paths in their SSH 
config, so everything was running serially (a 3-minute process * 20 servers 
= 60 minutes!). Persistence and sweat finally led me to try 
"host_key_checking = False", and it finally ran in parallel. That was a 
relief to see, since I'd tried just about everything else I could imagine 
(forks, serial, ssh options, restructuring inventory, removing inventory 
groups, etc.).
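
For anyone else who hits this wall, the setting that restored parallel 
execution for me goes in ansible.cfg (the environment-variable form that 
Vincent mentions further down the thread is ANSIBLE_HOST_KEY_CHECKING=False):

```ini
# ansible.cfg
[defaults]
host_key_checking = False
```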

Thanks,
Matt

On Monday, September 29, 2014 6:57:35 PM UTC+2, Michael DeHaan wrote:
>
> I'm wondering if we can detect configuration of alternative known_hosts 
> locations in the ~/.ssh/config and issue a warning, which should be able to 
> key people in to turn off the checking feature.
>
> This should close this out, I'd think.
>
>
>
> On Mon, Sep 29, 2014 at 12:54 PM, Michael DeHaan <[email protected]> wrote:
>
>> Ansible does not find your known_hosts location from ~/.ssh/config on a 
>> per-host basis; it reads only your default ~/.ssh/known_hosts.
>>
>> It does this because it needs to know, in advance of SSH asking, whether 
>> it needs to lock.
>>
>> Assume it's running at 50 or 200 forks and SSH needs to ask a question 
>> interactively - that's why Ansible needs to know in advance.
>>
>> So if you are keeping known_hosts in a different file, that may be 
>> EXACTLY the problem.   With host key checking on, and the data going 
>> elsewhere, the entry can't be found, and Ansible is locking pre-emptively.
>>
>>
>> On Mon, Sep 29, 2014 at 12:45 PM, Michael Blakeley <[email protected]> wrote:
>>
>>> I took it that Vincent was referring to my message of 2013-09-12 
>>> <https://groups.google.com/d/msg/ansible-project/8p3XWlo83ho/Q1SflaZ9dyAJ>. 
>>> In that post I mentioned using /dev/null for the ssh UserKnownHostsFile 
>>> configuration key, scoped to Host *.amazonaws.com.
>>>
>>> This configuration triggers single-threaded behavior from ansible 
>>> because ssh never stores any record of connecting to the EC2 hosts: not the 
>>> first time, not ever. Because known_hosts is /dev/null.
>>>
>>> -- Mike
>>>
>>> On Monday, September 29, 2014 9:30:32 AM UTC-7, Michael DeHaan wrote:
>>>>
>>>> So I'm confused - are you saying you are using known_hosts that are 
>>>> empty?
>>>>
>>>> This seems to be a completely unrelated question.
>>>>
>>>> The mention of /dev/null above seemed to be based on confusion that we 
>>>> didn't read it, not that it was actually symlinked to /dev/null.
>>>>
>>>> Can each of you clarify?
>>>>
>>>> On Mon, Sep 29, 2014 at 12:29 PM, Michael Blakeley <
>>>> [email protected]> wrote:
>>>>
>>>>> Vincent, I now use a slightly different workaround. Instead of routing 
>>>>> known_hosts to /dev/null I route it to a temp file. This keeps the EC2 
>>>>> noise out of my default known_hosts file, and seems to play well with 
>>>>> ansible.
>>>>>
>>>>> From my ~/.ssh/config file:
>>>>> Host *.amazonaws.com
>>>>>      PasswordAuthentication no
>>>>>      StrictHostKeyChecking no
>>>>>      UserKnownHostsFile /tmp/ec2_known_hosts
>>>>>      User ec2-user 
>>>>>
>>>>>
>>>>> Hope that helps you.
>>>>>
>>>>> -- Mike
>>>>>
>>>>> On Monday, September 29, 2014 8:37:43 AM UTC-7, Vincent Janelle wrote:
>>>>>>
>>>>>> Exactly like what was described at the start of this thread. :( 
>>>>>>  Setting the environment variable produces the desired parallel 
>>>>>> execution.
>>>>>>
>>>>>  -- 
>>>>> You received this message because you are subscribed to the Google 
>>>>> Groups "Ansible Project" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>>> an email to [email protected].
>>>>> To post to this group, send email to [email protected].
>>>>> To view this discussion on the web visit https://groups.google.com/d/
>>>>> msgid/ansible-project/550bdafe-2892-477b-9452-
>>>>> bbed389bfbce%40googlegroups.com 
>>>>> <https://groups.google.com/d/msgid/ansible-project/550bdafe-2892-477b-9452-bbed389bfbce%40googlegroups.com?utm_medium=email&utm_source=footer>
>>>>> .
>>>>>
>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>
>>>>
>>
>>
>

