OK, that is helpful.  I will add that as an early task in the provision.yml 
playbook.
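
Something roughly like this, I expect (the package name here is my guess, and 
a pip task would do just as well):

- name: Install boto so the AWS modules can run on the instance
  sudo: yes
  apt:
    name: python-boto
    state: present
    update_cache: yes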

My confusion arises because I have never installed Boto on the server 
instance, yet the task runs just fine with Ansible 1.8.4 if I also supply 
the AWS credentials.  It is a plain Ubuntu AMI, but perhaps it has the AWS 
tools installed in some way; I note that awscli depends on botocore.

thanks again

Andrew

On Friday, 19 June 2015 17:47:54 UTC+10, benno joy wrote:
>
> Hi Andrew,
>
>
> If the s3 task is running on the target server, then the provisioned 
> instance needs to have boto installed. But if boto is not installed, you 
> should have got a message like "boto is not installed on this machine", 
> so perhaps there is an incomplete boto installation on the target server?
>
>
> - Benno
>
>
> On Fri, Jun 19, 2015 at 1:11 PM, Andrew Burrow <
> [email protected] <javascript:>> wrote:
>
>> No problem, I think you have the picture right, but you might have missed my 
>> earlier question: do I need to install Boto on the target server?
>>
>> So, yes:
>>
>>    - All playbooks are run on my laptop
>>    - A playbook aws-start.yml first creates the EC2 instance.  It 
>>    operates on the localhost
>>    - A playbook provision.yml then attempts to connect to the S3 
>>    bucket.  It operates on the EC2 instance (outlined below) 
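>>
>> In outline, the two playbooks target different hosts roughly like this 
>> (the group name and the aws_* variables other than the bucket ones are 
>> placeholders rather than my real names, and passing the new host between 
>> plays with add_host only works within a single ansible-playbook run):
>>
>> # aws-start.yml -- operates on the localhost
>> - hosts: localhost
>>   connection: local
>>   gather_facts: no
>>   tasks:
>>     - name: Create the EC2 instance with the IAM role attached
>>       ec2:
>>         region: "{{ aws_region }}"
>>         image: "{{ aws_ami_id }}"
>>         instance_type: "{{ aws_instance_type }}"
>>         instance_profile_name: "{{ aws_iam_role }}"
>>         key_name: "{{ aws_key_name }}"
>>         wait: yes
>>       register: created
>>     - name: Add the new instance to the inventory for the next play
>>       add_host:
>>         name: "{{ created.instances[0].public_ip }}"
>>         groups: provisioned
>>
>> # provision.yml -- operates on the EC2 instance
>> - hosts: provisioned
>>   sudo: yes
>>   tasks:
>>     - name: Get the part archive from S3, relying on the instance's IAM role
>>       s3:
>>         region: "{{ aws_packages_region }}"
>>         bucket: "{{ aws_packages_bucket }}"
>>         object: "/JI79IML/my_part_X86_64_c7.15.tar.gz"
>>         dest: "/data/parts/JI79IML/my_part_X86_64_c7.15.tar.gz"
>>         mode: get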
>>
>> Andrew
>>
>> On Friday, 19 June 2015 17:30:23 UTC+10, benno joy wrote:
>>>
>>> Hi Andrew,
>>>
>>> Sorry if I am understanding this wrong. I assume you already have an EC2 
>>> instance with an IAM role attached that gives it access to download 
>>> buckets/files from S3, and in your playbook you have an s3 task that runs 
>>> on this target server, which has boto and Python installed. So I am a bit 
>>> confused as to why you would need to reinstall Ansible, boto, etc. on your 
>>> local MacBook.
>>> Probably if you can attach your playbook it will make things clearer.
>>>
>>>
>>> - Benno
>>>  
>>>
>>> On Fri, Jun 19, 2015 at 12:53 PM, Andrew Burrow <
>>> [email protected]> wrote:
>>>
>>>> Just a follow-up.  I tried two more scenarios, the second being the 
>>>> boil-the-ocean approach :-)
>>>>
>>>> 1. I deactivated the virtual environment, and reinstalled Ansible and 
>>>> Boto to /usr/local using Homebrew and Pip as follows:
>>>>
>>>> brew install ansible
>>>> pip install boto==2.38.0
>>>>
>>>> I then reran the playbook, and got the same error message, but again I 
>>>> was able to execute the s3 and cloudformation tasks locally.
>>>>
>>>> 2. I set my PATH to a minimum, uninstalled the Homebrew Python, and 
>>>> reinstalled Ansible and Boto into the system using pip, as follows:
>>>>
>>>> PATH="/usr/local/bin:/usr/bin:/bin"
>>>> pip uninstall boto
>>>> brew uninstall ansible
>>>> brew uninstall python
>>>> curl -O https://bootstrap.pypa.io/get-pip.py
>>>> sudo python2.7 get-pip.py
>>>> sudo pip install six
>>>> sudo pip install boto
>>>> sudo pip install ansible
>>>>
>>>> I then reran the playbook, and got the same error message, but again I 
>>>> was able to execute the s3 and cloudformation tasks locally.
>>>>
>>>>
>>>> Thanks
>>>>
>>>> Andrew
>>>>
>>>>
>>>> On Friday, 19 June 2015 16:52:53 UTC+10, Andrew Burrow wrote:
>>>>>
>>>>> Thanks Benno,
>>>>>
>>>>> I install Ansible and Boto in a virtualenv using pip, and then add 
>>>>> the following to group_vars/localhosts.yml, which is enough to ensure 
>>>>> that the cloudformation, s3, and ec2 modules run on the localhost.  
>>>>> Do I need to also install Boto on the remote?
>>>>>
>>>>> # Do not use the system installed Python when running locally
>>>>> ansible_python_interpreter: python
>>>>>
>>>>> The exact set of packages is:
>>>>>
>>>>> Jinja2==2.7.3
>>>>> MarkupSafe==0.23
>>>>> PyYAML==3.11
>>>>> ansible==1.9.1
>>>>> boto==2.38.0
>>>>> ecdsa==0.13
>>>>> paramiko==1.15.2
>>>>> pycrypto==2.6.1
>>>>> six==1.9.0
>>>>> wsgiref==0.1.2
>>>>>
>>>>> regards
>>>>>
>>>>> Andrew
>>>>>
>>>>> On Friday, 19 June 2015 15:44:43 UTC+10, benno joy wrote:
>>>>>>
>>>>>> Hi Andrew,
>>>>>>
>>>>>> Instance profiles do work without any issues. Judging from the error 
>>>>>> message:
>>>>>>
>>>>>> Failed to connect to S3: 'module' object has no attribute 
>>>>>> 'connect_to_region'
>>>>>>
>>>>>> it seems that boto is not installed properly. How did you install boto? 
>>>>>> Can you please try reinstalling boto and check?
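>>>>>>
>>>>>> For example, something like this, run with the same interpreter that 
>>>>>> Ansible uses, should show which boto is actually being picked up:
>>>>>>
>>>>>> python -c 'import boto; print boto.__version__; print boto.__file__'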
>>>>>>
>>>>>> - Benno
>>>>>>
>>>>>>
>>>>>> On Fri, Jun 19, 2015 at 9:51 AM, Andrew Burrow <
>>>>>> [email protected]> wrote:
>>>>>>
>>>>>>> I am unable to make use of IAM roles in my Ansible playbooks.  
>>>>>>> Specifically, I have authorised an EC2 instance to get from an S3 
>>>>>>> bucket, 
>>>>>>> but I cannot work out how to make use of this authorisation from within 
>>>>>>> Ansible.
>>>>>>>
>>>>>>>
>>>>>>> *The question*
>>>>>>>
>>>>>>> How do I write Ansible task(s) that satisfy all of the following:
>>>>>>>
>>>>>>>    1. Runs on an EC2 instance
>>>>>>>    2. Uses the IAM role defined on the EC2 instance to obtain 
>>>>>>>    authorisation to access an S3 bucket
>>>>>>>    3. Gets a file from the S3 bucket
>>>>>>>
>>>>>>>
>>>>>>> *A workaround*
>>>>>>>
>>>>>>> I can get the EC2 instance to download from S3 only by passing in 
>>>>>>> my credentials, as follows:
>>>>>>>
>>>>>>> - name: Download the part archive from S3
>>>>>>>   s3:
>>>>>>>    aws_access_key: "{{ lookup('env','aws_key') }}"
>>>>>>>    aws_secret_key: "{{ lookup('env','aws_secret') }}"
>>>>>>>    region: "{{ aws_packages_region }}"
>>>>>>>    bucket: "{{ aws_packages_bucket }}"
>>>>>>>    object: "/JI79IML/my_part_X86_64_c7.15.tar.gz"
>>>>>>>    dest: "/data/parts/JI79IML/my_part_X86_64_c7.15.tar.gz"
>>>>>>>    mode: get
>>>>>>>    overwrite: no
>>>>>>>
>>>>>>> However,  I would rather not send my AWS credentials to the 
>>>>>>> instance.  Instead I have defined a role with the appropriate 
>>>>>>> permissions 
>>>>>>> to get files from the S3 bucket.
>>>>>>>
>>>>>>>
>>>>>>> *What I've tried*
>>>>>>>
>>>>>>> The top answer in the Stack Overflow question linked below suggests 
>>>>>>> that it is a simple matter of leaving the secret access key parameters 
>>>>>>> out and letting the Boto library take care of assuming the role, as 
>>>>>>> sketched below.
>>>>>>>
>>>>>>>    - 
>>>>>>>    http://stackoverflow.com/questions/28997757/ansible-and-s3-module
>>>>>>>    
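>>>>>>> In other words, something like the task above with the two key 
>>>>>>> parameters simply dropped:
>>>>>>>
>>>>>>> - name: Download the part archive from S3
>>>>>>>   s3:
>>>>>>>    region: "{{ aws_packages_region }}"
>>>>>>>    bucket: "{{ aws_packages_bucket }}"
>>>>>>>    object: "/JI79IML/my_part_X86_64_c7.15.tar.gz"
>>>>>>>    dest: "/data/parts/JI79IML/my_part_X86_64_c7.15.tar.gz"
>>>>>>>    mode: get
>>>>>>>    overwrite: no
>>>>>>>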
>>>>>>> However, when I try this with Ansible 1.8.4 and Boto 2.36.0 I get
>>>>>>>
>>>>>>> msg: No handler was ready to authenticate. 1 handlers were checked. 
>>>>>>> ['HmacAuthV1Handler'] Check your credentials
>>>>>>>
>>>>>>> and with Ansible 1.9.1 and Boto 2.38.0 I get:
>>>>>>>
>>>>>>> msg: Failed to connect to S3: 'module' object has no attribute 
>>>>>>> 'connect_to_region'
>>>>>>>
>>>>>>>
>>>>>>> *How I've confirmed the IAM role*
>>>>>>>
>>>>>>> To confirm that the IAM role is *sufficient*, I installed awscli on 
>>>>>>> the EC2 instance and performed the download directly.  First, I assumed 
>>>>>>> the 
>>>>>>> role
>>>>>>>
>>>>>>> aws sts assume-role --role-arn "${ROLE_ARN}" \
>>>>>>>     --role-session-name "GettingMyPart"
>>>>>>>
>>>>>>> which returns an absolutely baffling error message that the user with 
>>>>>>> the assumed role cannot assume the role?!? But it seems to do the 
>>>>>>> trick, because I can then download the part
>>>>>>>
>>>>>>> aws s3api get-object --bucket "${BUCKET_NAME}" \
>>>>>>>     --key JI79IML/my_part_X86_64_c7.15.tar.gz my_part_X86_64_c7.15.tar.gz
>>>>>>>
>>>>>>> To confirm that the IAM role is *required*, I created a second 
>>>>>>> instance without a role attached, installed awscli on it, and followed 
>>>>>>> the same steps. In each case, I got the message "Unable to locate 
>>>>>>> credentials", as expected.
>>>>>>>
>>>
>
>
