[ansible-project] Re: Unable to reach to EC2 instance using dynamic inventory with a single region
I could not find the real reason why SSH to the EC2 instance running CentOS would not connect. The workaround I applied was to write a module that tries to connect to the host over SSH a given number of times and returns success once it gets through, so that further plays won't fail with an unreachable error.

*wait_for_SSH.py*

#!/usr/bin/python
'''
module: wait_for_SSH
short_description: Waits for a host to be SSH-connectable.
description:
    - Tries an Ansible ping (not an ICMP ping) against the host, as per the passed parameters.
options:
    host:
        description:
            - A resolvable hostname or IP address to Ansible-ping.
        required: true
    retries:
        description:
            - Maximum number of times to retry.
        required: false
        default: 10
    delay:
        description:
            - Number of seconds to wait between two consecutive pings.
        required: false
        default: 5
'''

import time
from subprocess import call

from ansible.module_utils.basic import AnsibleModule


def validate_params(module, retries, delay):
    if retries < 0:
        module.fail_json(msg="retries must not be negative")
    if delay < 0:
        module.fail_json(msg="delay must not be negative")


def main():
    fields = {
        "host": {"required": True, "type": "str"},
        "retries": {"required": False, "type": "int", "default": 10},
        "delay": {"required": False, "type": "int", "default": 5},
    }
    module = AnsibleModule(argument_spec=fields)
    host = module.params['host']
    retries = module.params['retries']
    delay = module.params['delay']
    validate_params(module, retries, delay)

    count = 0
    output = 1  # non-zero until a ping succeeds
    while (count < retries) and (output != 0):
        if delay:
            time.sleep(delay)
        # "," + host makes ansible treat the host as an ad-hoc inventory list
        output = call(["ansible", "all", "-i", "," + host, "-m", "ping"])
        count += 1

    response = {"output": output}
    module.exit_json(changed=False, output=response)


if __name__ == '__main__':
    main()

And I executed it after the wait_for on port 22:
# Wait only for running instances, because 'ec2_server' might contain
# terminated instances to fulfil the exact_count condition
- name: Wait for SSH server to be running
  wait_for: host={{ item.public_dns_name }} port=22 search_regex=OpenSSH
  with_items: "{{ ec2_server.instances | default([]) }}"
  when: item.state == 'running'

# wait_for_SSH is our custom module, which keeps trying an Ansible ping on
# the created instances until it succeeds, up to the specified retries (bug AD-3)
- name: Ensure SSH is running
  wait_for_SSH:
    host: "{{ item.private_ip }}"
  register: moduleoutput
  with_items: "{{ ec2_server.instances | default([]) }}"
  when: item.state == 'running'

Thanks,
Nirav

-- 
You received this message because you are subscribed to the Google Groups "Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ansible-project+unsubscr...@googlegroups.com.
To post to this group, send email to ansible-project@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ansible-project/ad02fcad-284b-4418-8390-4a77e9aaab37%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
[ansible-project] Re: Unable to reach to EC2 instance using dynamic inventory with a single region
Hi,

While browsing the integration tests of the Ansible modules, I came across this: https://github.com/ansible/ansible/blob/7f8e8ddca9cf103c05bf39d68ebb2e2ded4067f2/test/utils/ansible-playbook_integration_runner/ec2.yml

It has a task named "Wait a little longer for centos". Does anyone know why it is there? My target hosts also run CentOS, so this might be the cause of the behaviour mentioned in my question.

Thanks,
Nirav
[ansible-project] Re: Unable to reach to EC2 instance using dynamic inventory with a single region
Hi Alex / everyone,

Another finding:

* This also works fine:
    wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started

I want to understand why

    wait_for: host={{ item.public_dns_name }} port=22

does not work when I query for a particular region. Does it return before port 22 is actually open, or before the machine is actually SSH-connectable?

Just FYI, I am using a custom AMI built on top of the official CentOS 7 AMI (in case this is an OS-specific issue).

Thanks,
Nirav
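[Editorial note: one plausible explanation, an assumption rather than something confirmed in the thread, is that a bare `port=22` check only needs the TCP handshake to complete, which can happen before sshd is ready, while `search_regex=OpenSSH` (or a long `delay`) additionally waits for the SSH banner. The difference can be shown with a small sketch; both helper names are hypothetical.]

```python
import socket


def port_open(host, port, timeout=3):
    """Bare TCP connect test: roughly what wait_for checks with only port=22."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def banner_ready(host, port, timeout=3):
    """Also require an SSH banner: roughly wait_for with search_regex=OpenSSH."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return sock.recv(256).startswith(b"SSH-")
    except OSError:  # includes socket.timeout
        return False
```

Against a socket that is listening but where nothing answers yet (similar to a booting instance), `port_open` can report True while `banner_ready` still reports False.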
[ansible-project] Re: Unable to reach to EC2 instance using dynamic inventory with a single region
Hi Alex,

Sorry for the late reply. I am using Ansible 2.1.1.0. It happens for any available region on AWS.

One more finding:

* If I remove the port in wait_for, it works OK.

On Friday, 23 September 2016 05:55:28 UTC+5:30, Alexander H. Laughlin wrote:
>
> Hi Nirav,
>
> This is a tough nut to crack. Which version of Ansible are you using?
>
> Which region is it specifically that is failing? Or is it just any region
> that is specified over 'any' region?
>
> Alex
[ansible-project] Re: Unable to reach to EC2 instance using dynamic inventory with a single region
Hi Nirav,

This is a tough nut to crack. Which version of Ansible are you using?

Which region is it specifically that is failing? Or is it just any region that is specified over 'any' region?

Alex

On Thursday, September 22, 2016 at 3:43:38 AM UTC-7, Nirav Radia wrote:
>
> Hi Alex,
>
> Replacing availability zone in ec2.ini also gave me the same error:
> [...]
> Unsetting the environment variable and setting region in ec2.ini does not
> help and gives the same Unreachable error. :(
>
> Thanks,
> Nirav
[ansible-project] Re: Unable to reach to EC2 instance using dynamic inventory with a single region
Hi Alex,

Replacing the availability zone in ec2.ini also gave me the same error:

ERROR! The file inventory/ec2.py is marked as executable, but failed to execute correctly. If this is not supposed to be an executable script, correct this with `chmod -x inventory/ec2.py`.
Inventory script (inventory/ec2.py) had an execution error: region name: us-west-2a likely not supported, or AWS is down. connection to region failed.
inventory/ec2.py:3: Error parsing host definition ': No closing quotation

Unsetting the environment variable and setting the region in ec2.ini does not help and gives the same Unreachable error. :(

Thanks,
Nirav

On Thursday, 22 September 2016 15:51:27 UTC+5:30, Alexander H. Laughlin wrote:
>
> Hi Nirav,
>
> Sorry about the lack of clarity in my suggestion. I was referring to your
> ec2.ini [...]
[ansible-project] Re: Unable to reach to EC2 instance using dynamic inventory with a single region
Hi Nirav,

Sorry about the lack of clarity in my suggestion. I was referring to your ec2.ini, specifically:

# AWS regions to make calls to. Set this to 'all' to make request to all regions
# in AWS and merge the results together. Alternatively, set this to a comma
# separated list of regions. E.g. 'us-east-1,us-west-1,us-west-2'
regions = all
regions_exclude = us-gov-west-1,cn-north-1

would become:

# AWS regions to make calls to. Set this to 'all' to make request to all regions
# in AWS and merge the results together. Alternatively, set this to a comma
# separated list of regions. E.g. 'us-east-1,us-west-1,us-west-2'
regions = us-west-2a
regions_exclude = us-gov-west-1,cn-north-1

if you were using that particular availability zone. I'd be surprised if that worked, however; what happens when you unset the environment variable and set the region in the ec2.ini file?

Alex

On Thursday, September 22, 2016 at 12:51:19 AM UTC-7, Nirav Radia wrote:
>
> Hi Alex,
>
> 1) I am not sure I understood "placing the availability zone in the region
> field instead of the region" correctly. [...]
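[Editorial note: the "region name: us-west-2a likely not supported" error seen in this thread is consistent with boto being handed an availability zone ("us-west-2a") where it expects a region ("us-west-2"). A zone name is just the region plus a one-letter suffix, so it can be normalised with a tiny helper. This is a hypothetical sketch; the regex covers common region names, not every AWS partition.]

```python
import re

# A zone like "us-west-2a" is its region "us-west-2" plus a one-letter suffix.
_AZ_RE = re.compile(r"([a-z]{2}(?:-[a-z]+)+-\d+)([a-z]?)")


def az_to_region(name):
    """Normalise an availability-zone name to its region; pass regions through."""
    match = _AZ_RE.fullmatch(name)
    if not match:
        raise ValueError("not an AWS region or availability zone: %r" % name)
    return match.group(1)
```

So before putting a value into `regions = ...` in ec2.ini (or `AWS_DEFAULT_REGION`), something like `az_to_region("us-west-2a")` would yield the `us-west-2` that boto accepts.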
[ansible-project] Re: Unable to reach to EC2 instance using dynamic inventory with a single region
Hi Alex,

1) I am not sure I understood "placing the availability zone in the region field instead of the region" correctly, because when I exported "us-west-2a" as AWS_DEFAULT_REGION (which my ec2.py uses to filter the region), it gave me an error like this:

ERROR! The file inventory/ec2.py is marked as executable, but failed to execute correctly. If this is not supposed to be an executable script, correct this with `chmod -x inventory/ec2.py`.
Inventory script (inventory/ec2.py) had an execution error: region name: us-west-2a likely not supported, or AWS is down. connection to region failed.
inventory/ec2.py:3: Error parsing host definition ': No closing quotation

Let me know where I can place the AZ instead of the region.

2) Yes, I tried executing

./inventory/ec2.py --refresh-cache

between the instance-creation playbook and the ping command, but with no luck! And IMHO refreshing the cache shouldn't matter, as I have *cache_max_age = 0* in ec2.ini. (Correct me if I am wrong.)

For the first run, when the ping *doesn't succeed*, the verbose console output is:

[Private_IP1 and Private_IP2 are the instances already running and matching the filter in ec2.py; *Private_IP3* is the IP of the instance just launched and successfully waited for on port 22]

Using /etc/ansible/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
ESTABLISH SSH CONNECTION FOR USER: centos
ESTABLISH SSH CONNECTION FOR USER: centos
SSH: ansible.cfg set ssh_args: (-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no)
SSH: ansible.cfg set ssh_args: (-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no)
SSH: ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set: (-o)(IdentityFile="/home/ubuntu/.ssh/MyKey.pem")
SSH: ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set: (-o)(IdentityFile="/home/ubuntu/.ssh/MyKey.pem")
SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=centos)
SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=centos)
SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
SSH: PlayContext set ssh_common_args: ()
SSH: PlayContext set ssh_extra_args: ()
SSH: PlayContext set ssh_common_args: ()
SSH: PlayContext set ssh_extra_args: ()
SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/ubuntu/.ansible/cp/ansible-ssh-%h-%p-%r)
SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/ubuntu/.ansible/cp/ansible-ssh-%h-%p-%r)
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ubuntu/.ssh/MyKey.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=centos -o ConnectTimeout=10 -o ControlPath=/home/ubuntu/.ansible/cp/ansible-ssh-%h-%p-%r Private_IP3 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1474528176.32-147953530428586 `" && echo ansible-tmp-1474528176.32-147953530428586="` echo $HOME/.ansible/tmp/ansible-tmp-1474528176.32-147953530428586 `" ) && sleep 0'"'"''
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ubuntu/.ssh/MyKey.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=centos -o ConnectTimeout=10 -o ControlPath=/home/ubuntu/.ansible/cp/ansible-ssh-%h-%p-%r Private_IP1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1474528176.32-242806970585719 `" && echo ansible-tmp-1474528176.32-242806970585719="` echo $HOME/.ansible/tmp/ansible-tmp-1474528176.32-242806970585719 `" ) && sleep 0'"'"''
ESTABLISH SSH CONNECTION FOR USER: centos
SSH: ansible.cfg set ssh_args: (-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no)
SSH: ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set: (-o)(IdentityFile="/home/ubuntu/.ssh/MyKey.pem")
SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=n
[ansible-project] Re: Unable to reach to EC2 instance using dynamic inventory with a single region
Hi Nirav,

I have a vague recollection of having had a similar issue with dynamic inventory at some point, which was solved by placing the availability zone in the region field instead of the region. My memory is far from a reliable source of information, so if you decide to give that a shot, don't be surprised if it doesn't work.

With that said, I just re-read your original post, and your statement that this is only a problem on the first run stuck out, because the script uses caching to avoid hitting the AWS API all the time. The related docs are here:

http://docs.ansible.com/ansible/intro_dynamic_inventory.html#example-aws-ec2-external-inventory-script

Though I'm sure you've read through those several times already, the bit I believe affects your specific situation is right at the bottom of that section:

    Note that the AWS inventory script will cache results to avoid repeated API
    calls, and this cache setting is configurable in ec2.ini. To explicitly
    clear the cache, you can run the ec2.py script with the --refresh-cache
    parameter:

    # ./ec2.py --refresh-cache

Have you tried adding the `--refresh-cache` option to your first run of the ec2.py script? If that works, then bully! But if it doesn't, would you mind posting the related results from the first run of your playbook (the one with the error), and then also the related portions of the second run where it works?

Thanks,
Alex

On Wednesday, September 21, 2016 at 1:05:52 AM UTC-7, Nirav Radia wrote:
> [quoted ec2.ini snipped; see Nirav's post below]
[ansible-project] Re: Unable to reach to EC2 instance using dynamic inventory with a single region
Thanks @Alexander for your reply. Here is *ec2.ini*:

# Ansible EC2 external inventory script settings
#

[ec2]

# to talk to a private eucalyptus instance uncomment these lines
# and edit eucalyptus_host to be the host name of your cloud controller
#eucalyptus = True
#eucalyptus_host = clc.cloud.domain.org

# AWS regions to make calls to. Set this to 'all' to make request to all regions
# in AWS and merge the results together. Alternatively, set this to a comma
# separated list of regions. E.g. 'us-east-1,us-west-1,us-west-2'
regions = all
regions_exclude = us-gov-west-1,cn-north-1

# When generating inventory, Ansible needs to know how to address a server.
# Each EC2 instance has a lot of variables associated with it. Here is the list:
# http://docs.pythonboto.org/en/latest/ref/ec2.html#module-boto.ec2.instance
# Below are 2 variables that are used as the address of a server:
#   - destination_variable
#   - vpc_destination_variable

# This is the normal destination variable to use. If you are running Ansible
# from outside EC2, then 'public_dns_name' makes the most sense. If you are
# running Ansible from within EC2, then perhaps you want to use the internal
# address, and should set this to 'private_dns_name'. The key of an EC2 tag
# may optionally be used; however the boto instance variables hold precedence
# in the event of a collision.
destination_variable = public_dns_name

# For server inside a VPC, using DNS names may not make sense. When an instance
# has 'subnet_id' set, this variable is used. If the subnet is public, setting
# this to 'ip_address' will return the public IP address. For instances in a
# private subnet, this should be set to 'private_ip_address', and Ansible must
# be run from with EC2. The key of an EC2 tag may optionally be used; however
# the boto instance variables hold precedence in the event of a collision.
vpc_destination_variable = private_ip_address

# To tag instances on EC2 with the resource records that point to them from
# Route53, uncomment and set 'route53' to True.
route53 = False

# To exclude RDS instances from the inventory, uncomment and set to False.
rds = False

# Additionally, you can specify the list of zones to exclude looking up in
# 'route53_excluded_zones' as a comma-separated list.
# route53_excluded_zones = samplezone1.com, samplezone2.com

# By default, only EC2 instances in the 'running' state are returned. Set
# 'all_instances' to True to return all instances regardless of state.
all_instances = False

# By default, only RDS instances in the 'available' state are returned. Set
# 'all_rds_instances' to True return all RDS instances regardless of state.
all_rds_instances = False

# API calls to EC2 are slow. For this reason, we cache the results of an API
# call. Set this to the path you want cache files to be written to. Two files
# will be written to this directory:
#   - ansible-ec2.cache
#   - ansible-ec2.index
cache_path = ~/.ansible/tmp

# The number of seconds a cache file is considered valid. After this many
# seconds, a new API call will be made, and the cache file will be updated.
# To disable the cache, set this value to 0
cache_max_age = 0

# Organize groups into a nested/hierarchy instead of a flat namespace.
nested_groups = False

# The EC2 inventory output can become very large. To manage its size,
# configure which groups should be created.
group_by_instance_id = False
group_by_region = True
group_by_availability_zone = False
group_by_ami_id = False
group_by_instance_type = False
group_by_key_pair = False
group_by_vpc_id = False
group_by_security_group = False
group_by_tag_keys = True
group_by_tag_none = False
group_by_route53_names = False
group_by_rds_engine = False
group_by_rds_parameter_group = False

# If you only want to include hosts that match a certain regular expression
# pattern_include = stage-*

# If you want to exclude any hosts that match a certain regular expression
pattern_exclude = datafactory*

# Instance filters can be used to control which instances are retrieved for
# inventory. For the full list of possible filters, please read the EC2 API
# docs: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstances.html#query-DescribeInstances-filters
# Filters are key/value pairs separated by '=', to list multiple filters use
# a list separated by commas. See examples below.

# Retrieve only instances with (key=value) env=stage tag
#instance_filters = tag:cmx_env=dev

# Retrieve only instances with role=webservers OR role=dbservers tag
# instance_filters = tag:role=webservers,tag:role=dbservers

# Retrieve only t1.micro instances OR instances with tag env=stage
# instance_filters = instance-type=t1.micro,tag:env=stage

# You can use wildcards in filter values also. Below will list instances which
# tag Name value matches webservers1*
# (ex. webservers15, webservers1a, webservers123 etc)
# instance_filters = tag:Name=webservers1*

elasticache = False
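Since ec2.ini is a standard INI file, the `regions` / `regions_exclude` handling can be sketched with Python's stdlib configparser. This is just an illustration of the string handling described in the config comments above (the function name and exact resolution logic are assumptions, not ec2.py's real code, and the boto API calls are omitted):

```python
import configparser

def effective_regions(ini_text, all_known_regions):
    """Resolve the 'regions' setting: 'all' means every known region minus the excludes."""
    cfg = configparser.ConfigParser()
    cfg.read_string(ini_text)
    regions = cfg.get('ec2', 'regions')
    excludes = {r.strip() for r in cfg.get('ec2', 'regions_exclude', fallback='').split(',')}
    if regions == 'all':
        return sorted(set(all_known_regions) - excludes)
    # Otherwise a comma-separated list of explicit regions
    return [r.strip() for r in regions.split(',')]
```

With `regions = all` and the excludes above, a region like us-gov-west-1 is dropped; with a single explicit region, only that region is queried, which is the setting change that triggers Nirav's first-run failure.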
[ansible-project] Re: Unable to reach to EC2 instance using dynamic inventory with a single region
Would you mind posting your ec2.ini with the credentials taken out? Also, what is the output of ec2.py when you run it on its own?

On Friday, September 16, 2016 at 12:15:39 AM UTC-7, Nirav Radia wrote:
>
> Hi all,
>
> I am pretty new to Ansible. I am using ec2.py to connect to my EC2
> instances and run my Ansible playbooks on them. Previously, it was working
> fine when I was using "all" regions. But now, when I change the setting in
> ec2.ini to one specific region (any of the available ones), it gives me the
> error below on the first run; from the second run onward, it works fine.
>
> UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via
> ssh.", "unreachable": true}
>
> What I do is use the ec2 module to create an instance and then wait_for
> port 22 on each host to be up. It waits successfully and moves forward, but
> in the immediately following step, when I try to connect to the instance, it
> gives the above error the first time. Here is my wait_for task (ec2_server
> is the variable registered from the ec2 module):
>
> - name: wait for ssh server to be running
>   wait_for: host={{ item.public_dns_name }} port=22
>   with_items: "{{ec2_server.instances | default([])}}"
>   when: item.state == 'running'
>
> In between, I ping the instance by private IP, and the ping succeeds.
> I suspect it is some timing issue or a DNS name resolution issue with AWS.
> Has anyone faced such a problem before?
>
> Any help would be appreciated. Thanks!