On Mon, Jul 9, 2012 at 12:20 PM, Isuru Suriarachchi <[email protected]> wrote:

>
>
> On Mon, Jul 9, 2012 at 10:55 AM, Muhammed Shariq <[email protected]> wrote:
>
>>
>> On Fri, Jul 6, 2012 at 6:05 PM, Isuru Suriarachchi <[email protected]>wrote:
>>
>>> Hi all,
>>>
>>> I'm trying to fix [1]. Here's the root cause of this issue:
>>>
>>> Imagine a Carbon cluster with 2 nodes where the svn based deployment
>>> synchronizer (DS) is configured. When a C-App is deployed to node1, it is
>>> extracted and individual artifacts are copied into respective hot
>>> directories. When the DS runs for the first time, it copies the C-App into
>>> node2 and it will be deployed there. When the DS runs again in node1, it
>>> will try to copy the individual artifacts to node2. But node2 already has
>>> those artifacts because the C-App is already deployed there. Therefore an
>>> svn conflict occurs.
>>>
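[For illustration, the conflict described above can be sketched as a toy Python model. The names, and the "local"/"synced" states, are illustrative only; they are not Carbon or svn APIs.]

```python
def deploy_capp(node, capp, artifacts):
    """Deploying a C-App registers it and extracts its artifacts
    into the node's synced repository as locally created files."""
    node["capps"].add(capp)
    for a in artifacts:
        node["repo"][a] = "local"

def sync(src, dst):
    """The DS copies repo entries from src to dst. An entry that
    already exists on dst as a locally created file is a conflict."""
    conflicts = []
    for path in src["repo"]:
        if dst["repo"].get(path) == "local":
            conflicts.append(path)
        else:
            dst["repo"][path] = "synced"
    return conflicts

node1 = {"capps": set(), "repo": {}}
node2 = {"capps": set(), "repo": {}}

# C-App dropped into node1: the .car lands in the synced repo and its
# artifacts are extracted into the hot directories (also in the repo)
deploy_capp(node1, "app.car", ["proxy.xml", "endpoint.xml"])
node1["repo"]["app.car"] = "local"

# First DS run: everything reaches node2, which then deploys the C-App
sync(node1, node2)
deploy_capp(node2, "app.car", ["proxy.xml", "endpoint.xml"])

# Second DS run: node2 already created those artifacts locally
print(sync(node1, node2))  # → ['proxy.xml', 'endpoint.xml']
```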
>>> To resolve this issue, there are two possible options..
>>>
>>> 1. Keeping all artifacts coming from C-Apps out of the repository
>>> (repository/deployment/server)
>>> 2. Keeping the original C-App out of the repository
>>>
>>> Initially I tried option 1 above and programmatically called the relevant
>>> deployers for individual artifacts. But this creates a lot of problems with
>>> some artifacts (e.g. ESB stuff). Therefore, I'm trying to solve the initial
>>> problem using option 2 above.
>>>
>>> I've moved the carbonapps directory out of the repository/deployment/server
>>> directory and kept it as repository/carbonapps (we can change this if
>>> needed). The carbonapps directory still has hot deployment capabilities,
>>> but it won't be synchronized by the DS. So when a C-App is deployed into
>>> node 1, it will be extracted and only the individual artifacts will be
>>> copied into the repository. When the DS runs, all needed artifacts will be
>>> synced to node 2. Therefore, functionality-wise, there won't be any issues
>>> on node 2.
>>>
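[Continuing the same kind of toy model, this sketches why keeping the .car outside the DS-synchronized tree avoids the conflict: node 2 never extracts the C-App itself, so the synced artifacts never collide with locally created copies. Names and states are illustrative, not Carbon or svn APIs.]

```python
def sync(src_repo, dst_repo):
    """The DS copies src entries to dst; an entry that already exists
    on dst as a locally created file would be an svn conflict."""
    conflicts = [p for p in src_repo if dst_repo.get(p) == "local"]
    for p in src_repo:
        dst_repo.setdefault(p, "synced")
    return conflicts

# node1: the .car sits in repository/carbonapps, OUTSIDE the synced
# tree; only the extracted artifacts enter the synced repository
node1_repo = {"proxy.xml": "local", "endpoint.xml": "local"}
node2_repo = {}  # node2 never extracts the C-App itself

print(sync(node1_repo, node2_repo))  # first DS run → []
print(sync(node1_repo, node2_repo))  # second DS run → [] (no conflict)
```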
>>> But if someone logs into the management console of node 2 and goes to the
>>> C-App list, nothing will be listed. Is this something we have to fix?
>>> Anyway, in a RW/RO cluster the user can't use the management console of
>>> the slave node.
>>>
>>
>> Also, we will lose the relationship between the C-App and its artifacts,
>> right? For example, now if we delete the C-App, all its dependent
>> artifacts get undeployed automatically. But as per the 2nd solution, on
>> node 2 the dependent artifacts will be independent resources, so to
>> undeploy the C-App we would have to manually remove the dependent
>> artifacts from the respective lists.
>>
>
> No, that won't be the case. The relationship between the C-App and its
> artifacts will be there on node 1. So when the C-App is deleted on node 1,
> all respective artifacts will be deleted. When the DS runs, it will make
> sure all of those are deleted on node 2 as well. You can't use the
> management console of node 2 in any case.
>

Yup, if we don't use the UI on node 2, no issues would arise ...

>
> Thanks,
> ~Isuru
>
>
>> Of course, functionality-wise there shouldn't be any issues ...
>>
>>>
>>> WDYT??
>>>
>>> Thanks,
>>> ~Isuru
>>>
>>> [1] https://wso2.org/jira/browse/CARBON-13598
>>>
>>> --
>>> Isuru Suriarachchi
>>> Senior Technical Lead
>>> WSO2 Inc. http://wso2.com
>>> email : [email protected]
>>> blog : http://isurues.wordpress.com/
>>>
>>> lean . enterprise . middleware
>>>
>>>
>>> _______________________________________________
>>> Dev mailing list
>>> [email protected]
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> Thanks,
>> Shariq.
>> Phone: +94 777 202 225
>>
>>
>
>
>


-- 
Thanks,
Shariq.
Phone: +94 777 202 225