I think we can go with multiple gateway manager nodes, with the following
architecture. This diagram explains it for Mesos environments.

1. Each GW manager node runs a script that connects to Mesos-DNS and gets the
list of available GW workers.
2. New artifacts received by that particular manager node are pushed to all
workers.
3. The script runs continuously, syncing at a configurable interval.
4. Each manager node and its script are deployed in the same container as
separate processes managed by supervisord.
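
Steps 1-3 could look roughly like the sketch below. The Mesos-DNS name
(gw-worker.marathon.mesos), the rsync user "sync", and the /artifacts path are
assumptions for illustration, not the actual implementation:

```shell
#!/usr/bin/env bash
# Hypothetical artifact-sync script for a GW manager node (a sketch only).

MESOS_DNS_NAME="${MESOS_DNS_NAME:-gw-worker.marathon.mesos}"  # assumed record
SYNC_INTERVAL="${SYNC_INTERVAL:-30}"    # seconds between sync rounds
ARTIFACT_DIR="${ARTIFACT_DIR:-/artifacts/}"

# Step 1: resolve the currently available GW workers from Mesos-DNS.
list_workers() {
    dig +short "$MESOS_DNS_NAME"
}

# Step 2: push the manager's artifact directory to one worker over rsync.
push_artifacts() {
    local worker="$1"
    rsync -az --delete "$ARTIFACT_DIR" "sync@${worker}:${ARTIFACT_DIR}"
}

# Step 3: run unterminated, syncing every SYNC_INTERVAL seconds.
sync_loop() {
    while true; do
        for worker in $(list_workers); do
            push_artifacts "$worker" || echo "sync to ${worker} failed" >&2
        done
        sleep "$SYNC_INTERVAL"
    done
}

# supervisord would invoke this script and it would call sync_loop here.
```

Because the worker list is re-resolved on every round, workers that join or
leave the cluster are picked up automatically without restarting anything.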

I think this will remove the single-manager-node limitation and the issue of
having to restart the rsync daemon when the network goes down.
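
For step 4, the supervisord configuration might look something like the
fragment below (the program names and command paths are assumptions):

```ini
[supervisord]
nodaemon=true

; the gateway manager server process
[program:gw-manager]
command=/opt/wso2/bin/wso2server.sh
autorestart=true

; the artifact-sync script, restarted automatically if it dies
[program:artifact-sync]
command=/opt/sync/sync-artifacts.sh
autorestart=true
```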


Thanks





On Wed, Jun 8, 2016 at 6:49 AM, Chamila De Alwis <[email protected]> wrote:

> Hi Imesh,
>
>
> On Wed, Jun 8, 2016 at 5:32 AM, Imesh Gunaratne <[email protected]> wrote:
>
>> It would be better if we can implement this feature without tightly
>> coupling with the K8S API. Therefore I prefer the pull based model than
>> this.
>>
>
> I agree. This would require different ways of contacting any platform
> specific name resolution service to get the list of target containers.
>
>
>>
>>
>>> The pull method works the other way, i.e. initiated by the GW worker
>>> nodes and has to be run continuously on a loop.
>>>
>>
>> This approach can be applied to API-M on any container cluster manager
>> (and also on VMs) with very few changes. AFAIU it's matter of changing how
>> SSH server and rsync command processes are run on each GW node. K8S can use
>> separate containers for these using pods and Mesos can use supervisord [4].
>>
>
> I think supervisord would be applicable to any Docker based platform, be
> it Kubernetes or Mesos. Is there an additional advantage in going for a
> separate container to run sshd with a shared volume?
>
> One complication we'll come across (regardless of whether rsync pull or
> push) that I couldn't further elaborate was the artifact synchronization
> *between* GW Manager nodes when manager is in HA. This is when there are
> multiple GW Manager nodes, fronted with a load balancer. There will be only
> one active pod at a given time, and the request will be directed to the
> next available pod when the former goes down.
>
> For this, if we use a hostPath approach to share data, we'll also need to
> specify a node affinity to limit the node that the pod is spawned on. This
> way we can make sure the same location is mounted to every pod. The other
> option is to use something like Flocker with a Block storage service,
> however IMO it is a too complex method to approach a simple problem.
>
>
>
>
> Regards,
> Chamila de Alwis
> Committer and PMC Member - Apache Stratos
> Software Engineer | WSO2 | +94772207163
> Blog: code.chamiladealwis.com
>
>
>


-- 
Manoj Gunawardena
Tech Lead
WSO2, Inc.: http://wso2.com
lean.enterprise.middleware
Mobile : +94 77 2291643
_______________________________________________
Architecture mailing list
[email protected]
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
