Makes sense. As a workaround, and since this is a very specific case, I have decided to pass the key's path explicitly to the only command that depends on it (an rsync job). But this info will come in handy if we ever need to start working with more private keys.
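For the record, the workaround looks roughly like the task below. This is only a sketch: the key path, source directory, and remote host are placeholders, not the actual values from our playbook.

    # Hypothetical task: pass the key explicitly via rsync's -e option
    # instead of relying on ssh-agent. Paths and host are placeholders.
    - name: Sync files using an explicit private key
      shell: >
        rsync -az
        -e "ssh -i /home/deploy/.ssh/rsync_key"
        /srv/app/ deploy@backup.example.com:/srv/app/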
Thanks for the useful info!

On Thursday, July 16, 2015 at 6:56:10 PM UTC-5, Brian Coca wrote:
>
> ssh-add relies on environment variables that are not available to
> ansible, so when you run the ssh-agent + ssh-add you are spawning a
> 2nd agent, while when you login you seem to be getting the env vars
> for the 1st agent, which did not get the keys added.
>
> You'll have to do something like shell: . ssh_agent_env.sh && ssh-add ....
>
> Or you can guess and set the environment vars yourself, the socket
> file is normally in /tmp and owned by your user (i.e.
> /tmp/ssh-7Mk71cc78Qwb/agent.4567) where the last number is the agent's
> pid before forking; normally you can add +1 to get the actual agent pid
> and set SSH_AGENT_PID=4568
> and SSH_AUTH_SOCK=/tmp/ssh-7Mk71cc78Qwb/agent.4567. You can also use
> pgrep to confirm.
>
> --
> Brian Coca
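For completeness, Brian's two suggestions map to tasks along these lines. This is only a sketch: ~/ssh_agent_env.sh is assumed to be wherever your ssh-agent wrote its environment (e.g. ssh-agent > ~/ssh_agent_env.sh), and the socket path and PID are just the example values from his message.

    # Option 1: source the saved agent environment, then add the key.
    - name: Add key to the existing ssh-agent
      shell: . ~/ssh_agent_env.sh && ssh-add ~/.ssh/id_rsa

    # Option 2: point ssh-add at the agent socket directly (values are
    # illustrative; find the real ones under /tmp and with pgrep ssh-agent).
    - name: Add key using an explicit agent socket
      command: ssh-add ~/.ssh/id_rsa
      environment:
        SSH_AUTH_SOCK: /tmp/ssh-7Mk71cc78Qwb/agent.4567
        SSH_AGENT_PID: "4568"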
