Thanks @Alexander for your reply. Here is my *ec2.ini*:

# Ansible EC2 external inventory script settings


# to talk to a private eucalyptus instance uncomment these lines
# and edit eucalyptus_host to be the host name of your cloud controller
#eucalyptus = True
#eucalyptus_host =

# AWS regions to make calls to. Set this to 'all' to make request to all regions
# in AWS and merge the results together. Alternatively, set this to a comma
# separated list of regions. E.g. 'us-east-1,us-west-1,us-west-2'
regions = all
regions_exclude = us-gov-west-1,cn-north-1

# When generating inventory, Ansible needs to know how to address a server.
# Each EC2 instance has a lot of variables associated with it. Below are the
# 2 variables that are used as the address of a server:
#   - destination_variable
#   - vpc_destination_variable

# This is the normal destination variable to use. If you are running Ansible
# from outside EC2, then 'public_dns_name' makes the most sense. If you are
# running Ansible from within EC2, then perhaps you want to use the internal
# address, and should set this to 'private_dns_name'. The key of an EC2 tag
# may optionally be used; however the boto instance variables hold precedence
# in the event of a collision.
destination_variable = public_dns_name

# For server inside a VPC, using DNS names may not make sense. When an instance
# has 'subnet_id' set, this variable is used. If the subnet is public, setting
# this to 'ip_address' will return the public IP address. For instances in a
# private subnet, this should be set to 'private_ip_address', and Ansible must
# be run from within EC2. The key of an EC2 tag may optionally be used; however
# the boto instance variables hold precedence in the event of a collision.
vpc_destination_variable = private_ip_address

# To tag instances on EC2 with the resource records that point to them from
# Route53, uncomment and set 'route53' to True.
route53 = False

# To exclude RDS instances from the inventory, uncomment and set to False.
rds = False

# Additionally, you can specify the list of zones to exclude looking up in
# 'route53_excluded_zones' as a comma-separated list.
# route53_excluded_zones =,

# By default, only EC2 instances in the 'running' state are returned. Set
# 'all_instances' to True to return all instances regardless of state.
all_instances = False

# By default, only RDS instances in the 'available' state are returned. Set
# 'all_rds_instances' to True to return all RDS instances regardless of state.
all_rds_instances = False

# API calls to EC2 are slow. For this reason, we cache the results of an API
# call. Set this to the path you want cache files to be written to. Two files
# will be written to this directory:
#   - ansible-ec2.cache
#   - ansible-ec2.index
cache_path = ~/.ansible/tmp

# The number of seconds a cache file is considered valid. After this many
# seconds, a new API call will be made, and the cache file will be updated.
# To disable the cache, set this value to 0
cache_max_age = 0

# Organize groups into a nested/hierarchy instead of a flat namespace.
nested_groups = False

# The EC2 inventory output can become very large. To manage its size,
# configure which groups should be created.
group_by_instance_id = False
group_by_region = True
group_by_availability_zone = False
group_by_ami_id = False
group_by_instance_type = False
group_by_key_pair = False
group_by_vpc_id = False
group_by_security_group = False
group_by_tag_keys = True
group_by_tag_none = False
group_by_route53_names = False
group_by_rds_engine = False
group_by_rds_parameter_group = False

# If you only want to include hosts that match a certain regular expression
# pattern_include = stage-*

# If you want to exclude any hosts that match a certain regular expression
pattern_exclude = datafactory*

# Instance filters can be used to control which instances are retrieved for
# inventory. For the full list of possible filters, please read the EC2 API
# docs. Filters are key/value pairs separated by '=', to list multiple filters
# use a list separated by commas. See examples below.

# Retrieve only instances with the (key=value) tag cmx_env=dev
#instance_filters = tag:cmx_env=dev

# Retrieve only instances with role=webservers OR role=dbservers tag
# instance_filters = tag:role=webservers,tag:role=dbservers

# Retrieve only t1.micro instances OR instances with tag env=stage
# instance_filters = instance-type=t1.micro,tag:env=stage

# You can use wildcards in filter values also. Below will list instances whose
# tag Name value matches webservers1*
# (ex. webservers15, webservers1a, webservers123 etc)
# instance_filters = tag:Name=webservers1*

elasticache = False
expand_csv_tags = True
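A side note on the cache settings above: with cache_max_age = 0 the cache is effectively disabled and every inventory run makes fresh API calls. A minimal sketch of the kind of validity check the inventory script performs (the function name and logic here are illustrative, not ec2.py's actual code):

```python
import os
import time

def cache_is_valid(cache_path, max_age_seconds):
    """Return True if the cache file exists and is newer than max_age_seconds.

    With cache_max_age = 0 this always returns False, so every inventory
    run makes fresh EC2 API calls -- slower, but never stale.
    """
    if max_age_seconds <= 0:
        return False
    if not os.path.isfile(cache_path):
        return False
    age = time.time() - os.path.getmtime(cache_path)
    return age < max_age_seconds
```

If stale results were ever a suspect here, setting cache_max_age to 0 (as above) rules them out at the cost of slower runs.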

In my ec2.py, I have modified the Regions section a little bit:

# Regions
        self.regions = []
        configRegions = os.getenv('AWS_DEFAULT_REGION', config.get('ec2', 'regions'))
        configRegions_exclude = config.get('ec2', 'regions_exclude')
        if (configRegions == 'all'):
            if self.eucalyptus_host:
                self.regions.append(boto.connect_euca(host=self.eucalyptus_host).region.name)
            else:
                for regionInfo in ec2.regions():
                    if regionInfo.name not in configRegions_exclude:
                        self.regions.append(regionInfo.name)
        else:
            self.regions = configRegions.split(",")

I export the environment variable AWS_DEFAULT_REGION just before the playbook 
call. When I run the inventory script, it returns the host on which I expect 
to run the playbook.

PS: On a different note, I have added "search_regex=OpenSSH" to the wait_for 
task before the playbook call, but that didn't help either.
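For reference, the difference between "port 22 is open" and "sshd is ready" is exactly what search_regex=OpenSSH checks: wait_for then reads the SSH version banner instead of just completing the TCP handshake. A standalone sketch of that check (wait_for_ssh_banner is an illustrative helper, not an Ansible API):

```python
import re
import socket
import time

def wait_for_ssh_banner(host, port=22, pattern="OpenSSH", timeout=300, delay=5):
    """Poll host:port until the SSH version banner matches `pattern`.

    Mirrors what wait_for does with search_regex=OpenSSH: an open TCP
    port is not enough -- sshd must actually be sending its banner
    before SSH connections will succeed.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=10) as sock:
                banner = sock.recv(256).decode("ascii", errors="replace")
                if re.search(pattern, banner):
                    return banner
        except OSError:
            pass  # port not open yet, or connection reset mid-handshake
        time.sleep(delay)
    raise TimeoutError("no matching SSH banner from %s:%d" % (host, port))
```

If the banner check passes and the very next task still fails, that points more toward DNS propagation of public_dns_name than toward sshd startup.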

On Monday, 19 September 2016 22:48:10 UTC+5:30, Alexander H. Laughlin wrote:
> Would you mind posting your ec2.ini with the credentials taken out? Also, 
> what is the output when you run the inventory script alone?
> On Friday, September 16, 2016 at 12:15:39 AM UTC-7, Nirav Radia wrote:
>> Hi all,
>> I am pretty new to Ansible. I am using the EC2 dynamic inventory script to 
>> connect to my EC2 instances and run my ansible scripts on them. Previously, 
>> it was working fine when I was using "all" regions. But now when I change 
>> the region to a specific one (any of the available ones) in ec2.ini, it 
>> gives me the error below the first time; from the second run onward, it works fine.
>> UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via 
>> ssh.", "unreachable": true}
>> What I do is use the ec2 module to create an instance and then wait_for port 
>> 22 on each host to be up. It waits successfully and moves forward, but in the 
>> immediate next step, when I try to connect to the instance, it gives the above 
>> error the first time. Here is my wait_for task (ec2_server is the variable 
>> registered from the ec2 module):
>> - name: wait for ssh server to be running
>>   wait_for: host={{ item.public_dns_name }} port=22
>>   with_items: "{{ ec2_server.instances | default([]) }}"
>>   when: item.state == 'running'
>> In between, I ping the instance using its private IP, and the ping succeeds. 
>> I suspect it is some timing issue or a DNS name resolution issue with AWS. 
>> Has anyone faced such a problem before?
>> Any help would be appreciated. Thanks!

You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.