Hi,

I debugged the code and found the problem. The K8s membership scheme was
always creating an HTTPS connection, even to non-secure endpoints. I
changed it to create a secure or non-secure connection depending on the
URL scheme, and that resolved the issue. The fix has been committed.
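In essence, the fix selects the connection type from the endpoint's URL
scheme. A minimal sketch of the idea (the class and method names below are
illustrative, not the actual Carbon sources):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class KubernetesApiClient {

    // Pick the connection type from the URL scheme instead of always
    // assuming HTTPS, so a plain-HTTP API server endpoint also works.
    static HttpURLConnection createConnection(String endpoint) throws IOException {
        URL url = new URL(endpoint);
        if ("https".equalsIgnoreCase(url.getProtocol())) {
            // Secure endpoint: an HttpsURLConnection allows TLS-specific
            // setup (custom SSLSocketFactory, hostname verifier) if needed.
            return (HttpsURLConnection) url.openConnection();
        }
        // Non-secure endpoint, e.g. http://172.17.8.101:8080
        return (HttpURLConnection) url.openConnection();
    }
}
```

Note that openConnection() does not touch the network; the socket is only
opened on connect(), so the scheme check happens before any TLS handshake
is attempted against a plain-HTTP endpoint.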

On Wed, Feb 3, 2016 at 1:20 AM, Vishanth Balasubramaniam <[email protected]
> wrote:

> Hi Pubudu,
>
> Even with the proxy port set in the manager and worker profiles, the same
> problem persists. Sharing the axis2.xml and catalina-server.xml [1] [2] of
> the manager node.
>
> [1] -
> https://drive.google.com/a/wso2.com/file/d/0B1Vp6McRCyeJVVJiZ0pMaUcwVEE/view?usp=sharing
> [2] -
> https://drive.google.com/a/wso2.com/file/d/0B1Vp6McRCyeJTFFPOEpNTFB5Y00/view?usp=sharing
>
> Regards,
> Vishanth
>
> On Tue, Feb 2, 2016 at 10:25 PM, Pubudu Gunatilaka <[email protected]>
> wrote:
>
>> Hi Vishanth,
>>
>> Looks like you haven't configured the proxy port in catalina-server.xml.
>>
>> In K8s we create a service for each port mapping according to [1]. This
>> port is accessible via the NodePort in K8s, so you need to set the NodePort
>> value in catalina-server.xml using hiera data. If it is not set correctly,
>> you cannot access the management console.
>>
>> [1] -
>> https://github.com/wso2/kubernetes-artifacts/blob/master/wso2as/kubernetes/wso2as-manager-service.yaml#L13
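>>
>> For example, if the service maps the servlet HTTPS port to NodePort 32001
>> (a placeholder value, not taken from your setup), the Connector in
>> catalina-server.xml would carry it as the proxyPort:
>>
>> <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
>>            port="9443"
>>            proxyPort="32001"
>>            scheme="https"
>>            secure="true"/>
>>
>> With that in place, https://<node-ip>:32001/carbon/ reaches the console
>> through the NodePort.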
>>
>> Thank you!
>>
>> On Tue, Feb 2, 2016 at 9:49 PM, Vishanth Balasubramaniam <
>> [email protected]> wrote:
>>
>>> Hi,
>>>
>>> I have been working on a WSO2 AS worker/manager separated cluster in
>>> Kubernetes. I set up the Kubernetes cluster using Vagrant and CoreOS [1].
>>> I built the Docker images for the manager and worker with the following
>>> Kubernetes membership scheme configurations in the profiles.
>>>
>>> wso2::clustering :
>>>>   enabled : true
>>>>   local_member_host : local.as.wso2.com
>>>>   local_member_port : 4000
>>>>   membership_scheme : kubernetes
>>>>   k8 :
>>>>     k8_master : http://172.17.8.101:8080
>>>>     k8_namespace : default
>>>>     k8_services : wso2as-manager,wso2as-worker
>>>>   subDomain : mgt
>>>
>>>
>>> wso2::clustering :
>>>>   enabled : true
>>>>   local_member_host : worker.as.wso2.com
>>>>   local_member_port : 4000
>>>>   membership_scheme : kubernetes
>>>>   k8 :
>>>>     k8_master : http://172.17.8.101:8080
>>>>     k8_namespace : default
>>>>     k8_services : wso2as-manager,wso2as-worker
>>>>   subDomain : worker
>>>
>>>
>>> I SCP-ed the zip file of the built images and loaded them on the
>>> Kubernetes minion. Then I deployed the worker/manager services and
>>> controllers. I SSH-ed into the container where the manager node is running,
>>> and there are no error logs, as you can see below.
>>>
>>> TID: [-1234] [] [2016-02-02 12:43:53,892]  INFO
>>>> {org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent} -
>>>> Using kubernetes based membership management scheme
>>>> {org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent}
>>>> TID: [-1234] [] [2016-02-02 12:43:53,905]  INFO
>>>> {org.wso2.carbon.membership.scheme.kubernetes.KubernetesMembershipScheme}
>>>> -  Initializing kubernetes membership scheme...
>>>> {org.wso2.carbon.membership.scheme.kubernetes.KubernetesMembershipScheme}
>>>> TID: [-1234] [] [2016-02-02 12:43:53,909]  INFO
>>>> {org.wso2.carbon.membership.scheme.kubernetes.KubernetesMembershipScheme}
>>>> -  Kubernetes clustering configuration: [master]
>>>> http://172.17.8.101:8080 [namespace] default [services] wso2as-manager
>>>> {org.wso2.carbon.membership.scheme.kubernetes.KubernetesMembershipScheme}
>>>> TID: [-1234] [] [2016-02-02 12:43:54,499]  INFO
>>>> {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} -  Mgt Console URL
>>>> : https://10.244.78.4:9443/carbon/
>>>> {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
>>>
>>>
>>> But it does not become the cluster coordinator node, and there are no
>>> further logs after this. I am also unable to access the Carbon management
>>> console from my local machine.
>>>
>>> However, when I deploy only the default profile (not clustered), it runs
>>> fine and I can access the Carbon management console from my local machine
>>> via the provided NodePort.
>>>
>>> Have I misconfigured anything in worker/manager cluster setup?
>>>
>>> [1] - https://github.com/pires/kubernetes-vagrant-coreos-cluster
>>>
>>> Regards,
>>> Vishanth
>>>
>>> --
>>> *Vishanth Balasubramaniam*
>>> Committer & PMC Member, Apache Stratos,
>>> Software Engineer, WSO2 Inc.; http://wso2.com
>>>
>>> mobile: +94 77 17 377 18
>>> about me: http://about.me/vishanth
>>>
>>> _______________________________________________
>>> Dev mailing list
>>> [email protected]
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> *Pubudu Gunatilaka*
>> Committer and PMC Member - Apache Stratos
>> Software Engineer
>> WSO2, Inc.: http://wso2.com
>> mobile : +94774079049
>>
>>
>
>
> --
> *Vishanth Balasubramaniam*
> Committer & PMC Member, Apache Stratos,
> Software Engineer, WSO2 Inc.; http://wso2.com
>
> mobile: +94 77 17 377 18
> about me: http://about.me/vishanth
>
>
>


-- 
Thanks and Regards,

Isuru H.
+94 716 358 048
