Hey All,

Sorry about the radio silence on this thread, I kind of dropped the ball on 
this one  :-(

I'm picking up where I left off and will work out the options for the module 
to move this forward. My last ideas revolved around two options for the 
module: one with a (semi-)fixed folder structure, and one where we're only 
interested in the symlink and cleanup.

If anyone has had thoughts about it since August, feel free to add them - 
I'll be working on this next week.

Kind regards,


Ramon


On Thursday, 14 August 2014 14:34:10 UTC+2, Michael DeHaan wrote:
>
> "
> Basically, I'm trying to use their deployment role as a whole, to perform 
> certain operations only on certain hosts, while leaving all the 
> hosts/groups, listed in the the main 'hosts' directive."
>
> This is there too :)
>
> ansible-playbook foo.yml --limit groupname
> ansible-playbook foo.yml --limit hostname
>
> "To nail it down, I am deploying on 10 nodes and each is having a custom 
> connection string to its own Redis node. That would be handled via the 
> /private folder, but I can't see a built-in way for their deployment role 
> to do this."
>
> This doesn't seem related.
>
> I'd just use host_vars/<hostname> for this. That's a different concept, 
> but it does not interfere with --limit.
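>
> As a minimal sketch (the hostname, file, and variable name here are made 
> up for illustration), a file host_vars/web01 could contain:
>
> redis_connection_string: "redis://redis-web01.internal:6379/0"
>
> A template in the role then just references {{ redis_connection_string }}, 
> so each of the 10 nodes resolves its own value, independently of --limit.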
>
> On Thu, Aug 14, 2014 at 8:27 AM, Dan Vaida <[email protected]> wrote:
>
>> That's right. But I was referring to another thing.
>>
>> Basically, I'm trying to use their deployment role as a whole, to perform 
>> certain operations only on certain hosts, while leaving all the 
>> hosts/groups listed in the main 'hosts' directive.
>>
>> In my other roles, I've been using host_vars, or even defined the host 
>> vars in the 'hosts' file, and used those vars as flags to decide whether 
>> or not to run particular tasks.
>>
>> To nail it down: I am deploying on 10 nodes, and each has a custom 
>> connection string to its own Redis node. That would be handled via the 
>> /private folder, but I can't see a built-in way for their deployment role 
>> to do this.
>>
>> Perhaps I'm not looking at it from the right angle.
>>
>>
>> On Thursday, 14 August 2014 14:00:30 UTC+2, Michael DeHaan wrote:
>>
>>> "With capistrano, it is possible to run some of the tasks only on 
>>> specific hosts. Any plans for such a feature?"
>>>
>>> This has been a feature in Ansible since day 1.
>>>
>>> - hosts: hostnames
>>> - hosts: otherhostnames
>>>
>>> Etc
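>>>
>>> For example (the group names and task bodies are invented here), each 
>>> play below runs only on the hosts it names:
>>>
>>> - hosts: webservers
>>>   tasks:
>>>     - name: Run the deploy steps on the web servers only
>>>       command: /usr/local/bin/deploy.sh
>>>
>>> - hosts: dbservers
>>>   tasks:
>>>     - name: Restart Redis on the db servers only
>>>       service:
>>>         name: redis
>>>         state: restarted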
>>>
>>> On Thu, Aug 14, 2014 at 7:42 AM, Dan Vaida <[email protected]> wrote:
>>>
>>>> Hello,
>>>>
>>>> Again, a very BIG thank you for your efforts on the deploy module.
>>>>
>>>> I would like to share my suggestion, perhaps as an idea to generate a 
>>>> future pull request:
>>>> With Capistrano, it is possible to run some of the tasks only on 
>>>> specific hosts. Any plans for such a feature?
>>>> The problem is that in the /shared folder, I have stuff that's shared 
>>>> between releases, but ALSO some mounted NFS shares. I think I will create 
>>>> another directory in the deployment root, called /mounts, for the NFS 
>>>> mounts, to avoid confusion and workarounds (see the sketch below).
>>>> Regardless, I think the host-filter feature would come in handy.
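>>>>
>>>> On the /mounts idea, a rough sketch (the paths and NFS export are made 
>>>> up for illustration):
>>>>
>>>> - name: Create a separate root for NFS mounts, outside /shared
>>>>   file:
>>>>     path: /opt/base/mounts
>>>>     state: directory
>>>>
>>>> - name: Mount the share there, so release cleanup never touches it
>>>>   mount:
>>>>     name: /opt/base/mounts/uploads
>>>>     src: nfs-server:/exports/uploads
>>>>     fstype: nfs
>>>>     state: mounted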
>>>>
>>>> P.S. For us, it's important to have simple rollback functionality, so 
>>>> you might see a fork/PR soon.
>>>>
>>>> On Wednesday, 6 August 2014 15:05:14 UTC+2, Jasper N. Brouwer wrote:
>>>>
>>>>> Hi all! 
>>>>>
>>>>> A little introduction for context: I'm a colleague of Ramon de 
>>>>> la Fuente, and we both maintain the f500.* roles in Galaxy. So when I refer 
>>>>> to "our module", that's the same module as the one Ramon refers to. 
>>>>>
>>>>> I'd like to sum up my thoughts on the discussion so far: 
>>>>>
>>>>>
>>>>> - We chose to use the same directory layout as Capistrano does: 
>>>>>
>>>>> /opt/base/current -> /opt/base/releases/{timestamp} 
>>>>> /opt/base/releases/ 
>>>>> /opt/base/shared/ 
>>>>>
>>>>> "shared" is used for stuff that needs to survive a deploy (uploads, 
>>>>> etc). 
>>>>>
>>>>> The main reason we chose this is because it will be familiar to people 
>>>>> who have used Capistrano. Plus we didn't see anything wrong with this 
>>>>> layout, it suits our needs perfectly. 
>>>>>
>>>>> We could make the exact file/directory names configurable though. 
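>>>>>
>>>>> For illustration, a minimal sketch of creating that layout with plain 
>>>>> tasks (the paths and the new_release variable are assumptions, not the 
>>>>> module's actual interface):
>>>>>
>>>>> - name: Ensure the releases and shared directories exist
>>>>>   file:
>>>>>     path: "{{ item }}"
>>>>>     state: directory
>>>>>   with_items:
>>>>>     - /opt/base/releases
>>>>>     - /opt/base/shared
>>>>>
>>>>> - name: Point the current symlink at the new release
>>>>>   file:
>>>>>     src: "/opt/base/releases/{{ new_release }}"
>>>>>     dest: /opt/base/current
>>>>>     state: link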
>>>>>
>>>>>
>>>>> - I agree we need something to create a consistent timestamp (or 
>>>>> whatever) to be used on all hosts. 
>>>>>
>>>>> And this probably doesn't have to be a timestamp. The reason we chose 
>>>>> a timestamp is that it helps determine which releases should be cleaned 
>>>>> up: we can simply order them and keep the latest X. 
>>>>>
>>>>> I suspect it should be possible to stat those directories for a 
>>>>> creation date and use that for the cleanup. The directory name itself can 
>>>>> then be whatever you like (unix timestamp, yyyymmddhhmmss-style timestamp, 
>>>>> commit hash, uuid, etc). 
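>>>>>
>>>>> One way to get a single id shared by all hosts (a sketch; the variable 
>>>>> and group names are made up) is to compute it once on the control machine 
>>>>> and read it back everywhere:
>>>>>
>>>>> - hosts: localhost
>>>>>   gather_facts: no
>>>>>   tasks:
>>>>>     - name: Compute one release id for the whole run
>>>>>       set_fact:
>>>>>         new_release: "{{ lookup('pipe', 'date -u +%Y%m%d%H%M%S') }}"
>>>>>
>>>>> - hosts: appservers
>>>>>   tasks:
>>>>>     - name: Reuse that same id on every host
>>>>>       set_fact:
>>>>>         new_release: "{{ hostvars['localhost']['new_release'] }}"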
>>>>>
>>>>>
>>>>> - Our current role also sets some facts, which are really convenient 
>>>>> to have around: 
>>>>>
>>>>> base_path:            <must be provided through a required option> 
>>>>> current_symlink:      <base_path>/current 
>>>>> releases_path:        <base_path>/releases 
>>>>> shared_path:          <base_path>/shared 
>>>>> current_release:      <the release-timestamp/whatever that 
>>>>> current_symlink points to> 
>>>>> current_release_path: <base_path>/releases/<current_release> 
>>>>> new_release:          <the given/generated release-timestamp/whatever> 
>>>>> new_release_path:     <base_path>/releases/<new_release> 
>>>>> unfinished_file:      BUILD_UNFINISHED 
>>>>>
>>>>> I'd like the core module to have these as well. Any thoughts on 
>>>>> additions or changes are more than welcome! 
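>>>>>
>>>>> As a rough sketch of how a role could set these today (assuming 
>>>>> base_path and new_release are already defined; the names mirror the list 
>>>>> above):
>>>>>
>>>>> - name: Derive the deploy paths from base_path
>>>>>   set_fact:
>>>>>     current_symlink: "{{ base_path }}/current"
>>>>>     releases_path: "{{ base_path }}/releases"
>>>>>     shared_path: "{{ base_path }}/shared"
>>>>>     new_release_path: "{{ base_path }}/releases/{{ new_release }}"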
>>>>>
>>>>>
>>>>> - The cleanup process we use is two-fold: first we remove any releases 
>>>>> that still contain the BUILD_UNFINISHED file; next we remove any releases 
>>>>> that exceed a configurable amount (keep 5 releases, for example). 
>>>>>
>>>>> This two-fold process is very important to us: we don't want to 
>>>>> accidentally fail 5 releases in a row, have the cleanup process remove 
>>>>> the older releases, and therefore be left with only broken releases. The 
>>>>> releases that are kept must be successful ones. 
>>>>>
>>>>> And, we don't have this yet, but I think the cleanup should never 
>>>>> remove the active release (the one the symlink points to), even if it's 
>>>>> considered old. So it has to safeguard against that. 
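>>>>>
>>>>> A sketch of that first cleanup step under some assumptions (GNU find 
>>>>> on the targets, plus the releases_path and current_release_path facts 
>>>>> from above); the "keep the latest X" step would then sort what remains 
>>>>> and delete all but X:
>>>>>
>>>>> - name: List releases that still contain BUILD_UNFINISHED
>>>>>   shell: find {{ releases_path }} -maxdepth 2 -name BUILD_UNFINISHED -printf '%h\n'
>>>>>   register: unfinished
>>>>>   changed_when: false
>>>>>
>>>>> - name: Remove them, but never the release "current" points to
>>>>>   file:
>>>>>     path: "{{ item }}"
>>>>>     state: absent
>>>>>   with_items: "{{ unfinished.stdout_lines }}"
>>>>>   when: item != current_release_path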
>>>>>
>>>>>
>>>>> --   
>>>>> Jasper N. Brouwer 
>>>>> (@jaspernbrouwer) 
>>>>>
>>>>>
>>>>> On 5 August 2014 at 18:29:00, Michael DeHaan ([email protected]) 
>>>>> wrote: 
>>>>> > Yeah, good point, and may not be an issue anywhere really because of 
>>>>> > the symlink, if there are good enough cleanup options. 
>>>>> > 
>>>>> > Not requiring that seems like it would be a nice shortcut, provided 
>>>>> > that the module could be called to register what the "latest" was if 
>>>>> > you didn't pass too many arguments. 
>>>>> > 
>>>>> > ... 
>>>>> > 
>>>>> > I think it would only assume a timestamp dir in base, but it could 
>>>>> > default to making a subdir called "releases", sure. 
>>>>> > 
>>>>> > I think as long as we document what it does we could make up a 
>>>>> > convention, because it's going to change the way you deploy your app a 
>>>>> > little bit, and you would not have to use it unless you wanted to... 
>>>>> > 
>>>>> > ... 
>>>>> > 
>>>>> > This sounds pretty cool to me. 
>>>>> > 
>>>>> > ... 
>>>>> > 
>>>>> > I think maybe you might need to pass a parameter to remove the other 
>>>>> > ones, and it could be optional. 
>>>>> > 
>>>>> > ... 
>>>>> > 
>>>>> > Yeah, something like what you have, if not exactly, as a module seems 
>>>>> > really, really cool to me. 
>>>>>
>>>>>
>>>
>
>
