On Sun, Apr 17, 2016 at 9:14 PM, Imesh Gunaratne <[email protected]> wrote:

>
>
> On Thu, Apr 14, 2016 at 10:54 PM, Manuranga Perera <[email protected]> wrote:
>
>>> K8S will only know about the container image that was used for the
>>> deployment
>>
>> OK, but from the image, don't we know what the artifacts are (since
>> these are immutable servers)?
>>
>
> We know that, but I don't think we can assume that all the artifacts found
> in the image will get deployed properly. We may need to expose the actual
> status of the artifacts via an API.
>
> On Fri, Apr 15, 2016 at 1:44 AM, Susankha Nirmala <[email protected]>
>  wrote:
>
>> Why can't we copy new artifacts (or updated artifacts) to the deployment
>> directory of the Carbon servers running in the containers?
>>
> That's exactly what we do.
>

Without recreating the Docker image with new or updated artifacts
(just copying the artifacts to the deployment directory of the running server)?


>
>
>> On Thu, Apr 14, 2016 at 1:18 PM, Frank Leymann <[email protected]> wrote:
>>
>>> Sorry for jumping in so late in the thread: is technology like HEAT/HOT
>>> (OpenStack) or TOSCA (OASIS) too encompassing? I am happy to provide an
>>> overview of their features...
>>>
>>> I am not suggesting using the corresponding implementations (they have
>>> their pros/cons), but we may learn from the concepts behind them.
>>>
>>>
>>> Best regards,
>>> Frank
>>>
>>> 2016-04-14 12:06 GMT+02:00 Imesh Gunaratne <[email protected]>:
>>>
>>>>
>>>>
>>>> On Thu, Apr 14, 2016 at 1:35 AM, Manuranga Perera <[email protected]>
>>>> wrote:
>>>>
>>>>>> If an existing artifact needs to be updated or new artifacts need to
>>>>>> be added, a new container image needs to be created.
>>>>>
>>>>> In this case, why can't we ask Kubernetes how many pods with the new
>>>>> artifact have been spun up? Why does this have to be updated at the
>>>>> Carbon kernel level via JMS?
>>>>>
>>>>
>>>> Carbon may not handle the rollout, but it will need to inform an
>>>> external entity of the status of the deployed artifacts. K8S will only know
>>>> about the container image that was used for the deployment; it will have no
>>>> information on the artifacts deployed in the Carbon server.
>>>>
>>>>>
>>>>>
>>>>> On Thu, Apr 7, 2016 at 2:38 PM, Imesh Gunaratne <[email protected]>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Apr 7, 2016 at 11:53 PM, Imesh Gunaratne <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>> Hi Ruwan,
>>>>>>>
>>>>>>> On Thu, Mar 31, 2016 at 3:07 PM, Ruwan Abeykoon <[email protected]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi All,
>>>>>>>> Do we really want artifact deployment coordination in C5?
>>>>>>>> What is preventing us to build the new image with the new version
>>>>>>>> of artifacts and let the k8s take care of deployment?
>>>>>>>>
>>>>>>>
>>>>>>> You are absolutely correct! We may not do artifact synchronization
>>>>>>> in C5; rather, artifacts will get packaged into the containers.
>>>>>>>
>>>>>>
>>>>>> Sorry, C5 will also support non-containerized deployments (VMs,
>>>>>> physical machines); still, artifact synchronization will not be handled by
>>>>>> Carbon.
>>>>>>
>>>>>> On Wed, Apr 6, 2016 at 8:03 PM, Akila Ravihansa Perera <
>>>>>> [email protected]> wrote:
>>>>>>>
>>>>>>>
>>>>>>> I have a few concerns regarding artifact deployment coordination:
>>>>>>>  - Artifact versioning support. This is important to ensure
>>>>>>> consistency across a cluster.
>>>>>>>
>>>>>>
>>>>>> Indeed, but I guess it may not relate to this feature.
>>>>>>
>>>>>>
>>>>>>>  - REST API to query the status. I'd rather go ahead with a REST API
>>>>>>> before a JMS-based implementation. IMO it's much simpler and easier to
>>>>>>> use.
>>>>>>>
>>>>>>
>>>>>> A REST API might be needed in a different context, maybe in a
>>>>>> central monitoring server. In this context the design is to let servers
>>>>>> publish their status to a central server. Otherwise it might not be
>>>>>> feasible for a client to talk to each and every server and prepare the
>>>>>> aggregated view.
>>>>>>
>>>>>>
>>>>>>>  - Why don't we provide a REST API to deploy artifacts rather than
>>>>>>> copying files (whenever applicable)? We can immediately notify the 
>>>>>>> client
>>>>>>> (via HTTP response status) whether artifact deployment was successful.
>>>>>>>
>>>>>>
>>>>>> Might not be needed for container-based deployments.
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>>> This feature is for monitoring the deployment status of the
>>>>>>> artifacts. If an existing artifact needs to be updated or new artifacts
>>>>>>> need to be added, a new container image needs to be created. Then a
>>>>>>> rollout should be triggered (depending on the container cluster management
>>>>>>> system used).
>>>>>>>
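For a Kubernetes-managed cluster, the rollout described above would be driven by updating the Deployment's image reference. A hedged sketch under the Kubernetes API of that period (all names and image tags are hypothetical placeholders); changing the `image` field to the newly built image makes the Deployment controller replace the pods:

```yaml
# Hypothetical Deployment fragment: bumping the image tag (e.g. via
# `kubectl set image`) triggers a rolling update of all replicas.
apiVersion: extensions/v1beta1   # Deployment API group as of Kubernetes 1.2 (2016)
kind: Deployment
metadata:
  name: carbon-server
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: carbon-server
    spec:
      containers:
      - name: carbon
        image: registry.example.com/carbon-server:v2  # new image with updated artifacts
```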
>>>>>>> Thanks
>>>>>>>
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>> Ruwan
>>>>>>>>
>>>>>>>> On Wed, Mar 30, 2016 at 2:54 PM, Isuru Haththotuwa <[email protected]
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Hi Kasun,
>>>>>>>>>
>>>>>>>>> On Wed, Mar 23, 2016 at 10:45 AM, KasunG Gajasinghe <
>>>>>>>>> [email protected]> wrote:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> Given several issues we discovered with automatic artifact
>>>>>>>>>> synchronization with DepSync in C4, we have discussed how to 
>>>>>>>>>> approach this
>>>>>>>>>> problem in C5.
>>>>>>>>>>
>>>>>>>>>> We are thinking of not doing automated artifact
>>>>>>>>>> synchronization in C5. Rather, users should use their own mechanisms to
>>>>>>>>>> synchronize the artifacts across a cluster; common approaches are rsync as
>>>>>>>>>> a cron job and shell scripts.
>>>>>>>>>>
>>>>>>>>>> But, it is vital to know the artifact deployment status of the
>>>>>>>>>> nodes in the entire cluster from a central place. For that, we are
>>>>>>>>>> providing this deployment coordination feature. There will be two 
>>>>>>>>>> ways to
>>>>>>>>>> use this.
>>>>>>>>>>
>>>>>>>>>> 1. JMS-based publishing - the deployment status will be published
>>>>>>>>>> by each node to a JMS topic/queue.
>>>>>>>>>>
>>>>>>>>>> 2. Log-based publishing - publish the logs to a central location
>>>>>>>>>> using a syslog appender [1] or our own custom appender.
>>>>>>>>>>
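The JMS option above implies that each node serializes its artifact deployment status into a message before publishing it to the topic/queue. A minimal sketch of such a payload builder; the field names and JSON shape are illustrative assumptions, not the actual C5 format, and the JMS send itself (broker URL, topic name) is omitted:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: build the status payload a node might publish
// to a JMS topic. Field names are illustrative, not a defined C5 format.
public class DeploymentStatusPayload {

    // Serialize node id plus per-artifact status into a JSON string.
    public static String build(String nodeId, Map<String, String> artifactStatuses) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"nodeId\":\"").append(nodeId).append("\",\"artifacts\":[");
        boolean first = true;
        for (Map.Entry<String, String> e : artifactStatuses.entrySet()) {
            if (!first) sb.append(',');
            first = false;
            sb.append("{\"name\":\"").append(e.getKey())
              .append("\",\"status\":\"").append(e.getValue()).append("\"}");
        }
        sb.append("]}");
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> statuses = new LinkedHashMap<>();
        statuses.put("foo.war", "DEPLOYED");
        statuses.put("bar.car", "FAILED");
        System.out.println(build("node-1", statuses));
    }
}
```

A central monitoring server subscribed to the same topic could aggregate these messages into the cluster-wide view discussed earlier in the thread.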
>>>>>>>>> Both are push mechanisms. IMHO we would need an API to check the
>>>>>>>>> status of deployed artifacts on demand, WDYT?
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> The log publishing may not be limited to just deployment
>>>>>>>>>> coordination. In a containerized deployment, the Carbon products will run
>>>>>>>>>> in disposable containers, but sometimes the logs need to be backed up for
>>>>>>>>>> later reference. This will help with that.
>>>>>>>>>>
>>>>>>>>>> Any thoughts on this matter?
>>>>>>>>>>
>>>>>>>>>> [1]
>>>>>>>>>> https://logging.apache.org/log4j/2.x/manual/appenders.html#SyslogAppender
>>>>>>>>>>
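For reference, the syslog appender in [1] would be configured along these lines in log4j2.xml. This is only a sketch: the host, port, and facility values are placeholders, and the full attribute list is in [1]:

```xml
<!-- Sketch: route all logs to a central syslog host (values are placeholders) -->
<Configuration>
  <Appenders>
    <Syslog name="CentralSyslog" host="logs.example.com" port="514"
            protocol="UDP" facility="LOCAL0" format="RFC5424" appName="carbon"/>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="CentralSyslog"/>
    </Root>
  </Loggers>
</Configuration>
```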
>>>>>>>>>> Thanks,
>>>>>>>>>> KasunG
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> ~~--~~
>>>>>>>>>> Sending this mail via my phone. Do excuse any typo
>>>>>>>>>> or short replies
>>>>>>>>>>
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Architecture mailing list
>>>>>>>>>> [email protected]
>>>>>>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Thanks and Regards,
>>>>>>>>>
>>>>>>>>> Isuru H.
>>>>>>>>> +94 716 358 048
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> Ruwan Abeykoon
>>>>>>>> Architect,
>>>>>>>> WSO2, Inc. http://wso2.com
>>>>>>>> lean.enterprise.middleware.
>>>>>>>>
>>>>>>>> email: [email protected]
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Imesh Gunaratne
>>>>>>> Senior Technical Lead
>>>>>>> WSO2 Inc: http://wso2.com
>>>>>>> T: +94 11 214 5345 M: +94 77 374 2057
>>>>>>> W: http://imesh.io
>>>>>>> Lean . Enterprise . Middleware
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> With regards,
>>>>> Manuranga Perera.
>>>>>
>>>>> phone : 071 7 70 20 50
>>>>> mail : [email protected]
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>
>>
>>
>
>
>
>


-- 
Susankha Nirmala
Software Engineer
WSO2, Inc.: http://wso2.com
lean.enterprise.middleware
Mobile : +94 77 593 2146