Sorry, I'm not really following you. The pseudocode you posted is somewhat 
similar to what I'm doing, but the problems are:

   1. My current delegate host (your "build_systems") cannot resolve my 
   target machine's FQDN (your "publication_system") to an IP address.
   2. I don't really have the ability, the permissions, or the desire to 
   create arbitrary users with arbitrary permissions on my delegate hosts. 
   They are simply file repos that can change at any time; I have variables 
   that can be overridden, like artifacts_host, artifacts_user, 
   artifacts_passwd, artifacts_path, etc. (sketched in the defaults file 
   below).

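(For reference, those overridable variables currently live in something 
like role defaults; every value below is a placeholder, not my real hosts 
or paths:)

    # roles/artifacts_setup/defaults/main.yml  (role name assumed from the task below)
    artifacts_host: repo01.example.com          # placeholder; overridden per environment
    artifacts_user: deploy                      # placeholder account on the file repo
    artifacts_passwd: "changeme"                # placeholder; the real value is vaulted
    artifacts_path: /srv/artifacts/releases     # placeholder path to the WAR files
    artifact_filename: myapp-1.2.3              # placeholder artifact name
    artifact_installation_dir: /opt/tomcat/webapps   # placeholder install dir
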
The play I'm trying to make should ultimately look like:

    - name:  artifacts_setup | rsync WAR artifact from artifacts host
      synchronize: >
        src={{ artifacts_path }}/{{ artifact_filename }}.war
        dest={{ artifact_installation_dir }}
      delegate_to: "{{ artifacts_host }}"

I came very close to getting this to work: I used ansible-vault to encrypt 
a "secrets.yml" variable file containing the artifacts_host's public key, 
and then installed that key into the target machine's authorized_keys file 
like so:

    - name: install artifacts_host's public key to auth file
      authorized_key: >
        user={{ ansible_ssh_user }}
        key='{{ artifacts_host_public_key }}'
      sudo: yes

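(For completeness, the whole play looks roughly like this; the group name 
is made up, but secrets.yml is the ansible-vault encrypted file that 
defines artifacts_host_public_key:)

    - hosts: target_machines          # made-up group name for my targets
      vars_files:
        - secrets.yml                 # vault-encrypted; holds artifacts_host_public_key
      tasks:
        - name: install artifacts_host's public key to auth file
          authorized_key: >
            user={{ ansible_ssh_user }}
            key='{{ artifacts_host_public_key }}'
          sudo: yes
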
But the problem still remains that my artifacts_host cannot resolve the 
FQDN that Ansible passes to it into an IP address. If I were able to 
"inform" the artifacts_host of the IP to use (what the FQDN _should_ 
resolve to), I would be fine; the only idea I have for that is sketched 
below. I would also be fine with having the task fire on the target machine 
and pull from the artifacts_host, but I can't find an idempotent way of 
accomplishing that, nor can I figure out how to feed the target machine a 
login/password or SSH key to use. Am I just going to have to template out a 
script and push it to my targets?
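
The closest thing I can think of for "informing" it of the IP is pinning 
the address in inventory. From what I can tell by reading the synchronize 
module, it builds the remote end of the rsync command from ansible_ssh_host, 
so something like the following *might* work, though I haven't proven it 
(the group name and address here are made up):

    # inventory -- made-up group and address, just to illustrate the idea
    [target_machines]
    app01.internal.example.com  ansible_ssh_host=10.0.3.15

With that in place, the synchronize task above, delegated to 
{{ artifacts_host }}, should hand the artifacts host 10.0.3.15 rather than 
an FQDN it cannot resolve.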

Thanks again,
- Dan


On Thursday, June 18, 2015 at 3:53:22 AM UTC-4, Philippe Eveque wrote:
>
> Hi Dan
>
> I think you are on the right track with synchronize and delegate_to
>
> In my case I do this the following way (pseudo code)
>
> PLAY1:
>   - hosts: build_systems   # build_systems is a group in the inventory. 
> Can have 1 or more systems
>     remote_user: tester
>     gather_facts: True
>
>     tasks:
>
>       - basically build from the sources, generating some binaries
>         in some well known location
>         (note that each build system produces its own set of binaries)
>
> PLAY2:
>   - hosts: publication_system    # so the system serving the binaries, or 
> directly the target systems
>     remote_user: tester
>
>     tasks:
>       - populate some rsync-filter files
>         delegate_to: groups['build_systems']
>
>       - synchronize 
>           src: the well known directory location on the build_systems
>           dest: /tmp/bindirectory/
>         delegate_to: groups['build_systems']
>         
>       - cleanup the rsync filters
>         delegate_to: groups['build_systems']    
>
>
>
> At that stage you have what you built on the build_systems transferred 
> directly from the build_systems to the publication_system (available in 
> /tmp/bindirectory).
>
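> (To make the synchronize step concrete, in real YAML mine looks roughly 
> like the task below; the src path is only an example of the "well known 
> location", and I loop the delegation because I have two build systems:)
>
>     - name: transfer built binaries to the publication system
>       synchronize: >
>         src=/home/tester/build/output/
>         dest=/tmp/bindirectory/
>       delegate_to: "{{ item }}"
>       with_items: groups['build_systems']
>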
> Notes: 
>
>    - In my case I have 2 build systems, but the built binaries differ on 
>    each. This results in a different set of files being transferred from 
>    each build system to the publication_system.
>    - I have one account (tester) valid on all the target hosts of each 
>    PLAY (that simplifies file access rights).
>    
>       
>
> hope this gives you an idea on how to do it
>
> Phil
>
> 2015-06-17 21:06 GMT+02:00 Dan Swartz <[email protected]>:
>
>> I have a server that contains some fairly large release artifacts, which 
>> I would like to distribute to my target machines. The only method I've 
>> tried that worked was using the shell module to pull the artifacts down 
>> locally with scp before pushing them out (roughly what is sketched 
>> below), but I just don't believe that is the best way to do this.
>>
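>> (Concretely, that working-but-clunky approach was roughly the following; 
>> the hostnames and paths are placeholders:)
>>
>>     - name: pull the WAR down to the control machine first
>>       local_action: shell scp deploy@artifacts.example.com:/srv/artifacts/myapp.war /tmp/myapp.war
>>
>>     - name: push it out to each target machine
>>       copy: src=/tmp/myapp.war dest=/opt/tomcat/webapps/myapp.war
>>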
>> It seems like the _proper_ way to do this would be using the synchronize 
>> module and delegating to the host containing the artifacts, but there are 
>> several obstacles I've been facing. I was able to install the artifacts 
>> host's public key in the target machine's authorized_keys file, but then 
>> I found out that the artifacts host cannot resolve the target machine(s)' 
>> FQDNs to IP addresses. I also have a private RSA key whose public half is 
>> in the artifacts host's authorized_keys file, but there seems to be no 
>> way to use/integrate it with the synchronize module. On top of that, 
>> there is confusion about which user Ansible uses to connect to which 
>> machine, and I really don't want to have to modify my own control 
>> machine's ssh config, as this would have to be done manually on any 
>> potential control machine we would ever use.
>>
>> I'm looking for best methods here, or at least some ideas. How can I just 
>> send files from another host to my target machine(s)?
>>
>> Thanks in advance,
>> - Dan
>>
