Hi all,
Please find the blog post below, a guide to deploying a HAProxy pod in a
Kubernetes cluster. [1]

Regards
Nishadi

[1].
http://nishadikirielle.blogspot.com/2016/04/configuring-haproxy-load-balancer-for.html

On Thu, Mar 17, 2016 at 12:25 PM, Nishadi Kirielle <[email protected]> wrote:

> Hi,
> The IP addresses have been used to generate certs only for development
> purposes. We'll replace these with proper certs when going to
> production.
> Thanks
>
> On Thu, Mar 17, 2016 at 12:03 PM, Udara Liyanage <[email protected]> wrote:
>
>>
>>
>> On Wed, Mar 16, 2016 at 10:49 AM, Nishadi Kirielle <[email protected]>
>> wrote:
>>
>>> Hi all,
>>> I have configured load balancing in AppCloud staging with HAProxy.
>>> In the configuration, I have made a few changes to the Kubernetes
>>> service loadbalancer. [1]
>>>
>> Do we create certificates for IP addresses? Though it is not invalid, the
>> general approach is to create certificates for host names. If we create
>> certificates for IP addresses, we need to create them again whenever an
>> IP address changes or a new node is added.
>>
>>> In order to run the HAProxy load balancer in Kubernetes and provide
>>> support for HTTPS traffic, we need to add the SSL certificate specific to
>>> the node's public IP to each node. In this deployment I have manually
>>> added the certificate file inside the container; it should be placed at
>>> the location specified in the replication controller definition. [2] If
>>> we are adding a CA certificate as well, we can define it in the
>>> replication controller definition as an arg as follows:
>>>          - --ssl-ca-cert=/ssl/ca.crt
>>>
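For context, that arg would sit alongside the other container args in the replication controller spec. A minimal sketch, where the controller name, image, and mount paths are assumptions (only the `--ssl-ca-cert` arg comes from the thread; see the linked rc.yaml [2] for the real spec):

```yaml
# Illustrative replication controller fragment; names, image and paths
# are hypothetical, only --ssl-ca-cert is taken from the discussion.
apiVersion: v1
kind: ReplicationController
metadata:
  name: service-loadbalancer
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: haproxy-lb
        image: haproxy              # placeholder image
        args:
        - --ssl-ca-cert=/ssl/ca.crt
        volumeMounts:
        - name: ssl-certs
          mountPath: /ssl           # certs manually placed here, as described
          readOnly: true
      volumes:
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/haproxy    # hypothetical path on the node
```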
>>> In the current deployment, we have tested a service with a single port
>>> exposed. This is because the load balancer identifies whether a service
>>> is exposed over HTTP or HTTPS through a service annotation, which is
>>> common to all of the service's exposed ports. With that approach, in
>>> order to support both HTTP and HTTPS traffic, we would need several
>>> services. Thus, I'm currently attempting to deploy a service with
>>> several exposed ports.
>>>
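The per-service annotation constraint described above could be sketched as two services backed by the same pods; everything here (names, selector, ports) is hypothetical:

```yaml
# The annotation applies to the whole service, so plain HTTP and
# SSL-terminated HTTPS must be split into two services.
apiVersion: v1
kind: Service
metadata:
  name: myapp-http                  # hypothetical name
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-https                 # hypothetical name
  annotations:
    serviceloadbalancer/lb.sslTerm: "true"
spec:
  selector:
    app: myapp
  ports:
  - port: 443
    targetPort: 8080
```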
>>> In addition, another concern is how the HAProxy load balancer itself is
>>> exposed to external traffic. Currently this is done through host ports.
>>> If we used node ports instead, the particular port would be exposed on
>>> all the nodes, whereas a host port exposes it only on the node running
>>> the pod.
>>>
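The host-port vs. node-port distinction above, as illustrative spec fragments (names, image, and port numbers are assumptions):

```yaml
# hostPort: the port is opened only on the node running this pod.
# (pod spec fragment, illustrative)
containers:
- name: haproxy-lb
  image: haproxy                    # placeholder image
  ports:
  - containerPort: 443
    hostPort: 443                   # exposed only on the pod's node
---
# nodePort: the same port is opened on every node in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: haproxy-external
spec:
  type: NodePort
  selector:
    app: haproxy-lb
  ports:
  - port: 443
    nodePort: 30443                 # exposed on all nodes
```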
>>> Appreciate your feedback on the approach taken.
>>>
>>> Thanks
>>>
>>> [1].
>>> https://github.com/nishadi/contrib/commit/f169044546dc8a84a359d889bb186aef83d9c422
>>> [2].
>>> https://github.com/nishadi/contrib/blob/master/service-loadbalancer/rc.yaml#L52
>>>
>>> On Mon, Mar 14, 2016 at 10:39 AM, Nishadi Kirielle <[email protected]>
>>> wrote:
>>>
>>>> Hi all,
>>>> +1 for going with the SSL pass-through approach. Once the testing with
>>>> staging is done, I will focus on this approach.
>>>>
>>>> Thanks
>>>>
>>>> On Mon, Mar 14, 2016 at 10:29 AM, Manjula Rathnayake <[email protected]
>>>> > wrote:
>>>>
>>>>> Hi Imesh,
>>>>>
>>>>> On Mon, Mar 14, 2016 at 10:20 AM, Imesh Gunaratne <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Hi Manjula,
>>>>>>
>>>>>> On Mon, Mar 14, 2016 at 10:06 AM, Manjula Rathnayake <
>>>>>> [email protected]> wrote:
>>>>>>
>>>>>>> Hi Imesh,
>>>>>>>
>>>>>>> On Mon, Mar 14, 2016 at 9:56 AM, Imesh Gunaratne <[email protected]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 13, 2016 at 11:36 PM, Nishadi Kirielle <
>>>>>>>> [email protected]> wrote:
>>>>>>>>
>>>>>>>>> Hi all,
>>>>>>>>> Currently I'm working on configuring HAProxy load balancing
>>>>>>>>> support for App Cloud.
>>>>>>>>> While checking the session affinity functionality in Kubernetes, I
>>>>>>>>> have verified the load balancing of HTTP traffic with HAProxy. It
>>>>>>>>> could be done using the Kubernetes contrib repo's 'service
>>>>>>>>> loadbalancer' [1].
>>>>>>>>>
>>>>>>>>> In order to check load balancing with HTTPS traffic, the approach
>>>>>>>>> taken is SSL termination. In the App Cloud scenario, the Kubernetes
>>>>>>>>> cluster is not directly exposed and the load balancer exists within
>>>>>>>>> the cluster; thus the communication between the application servers
>>>>>>>>> and the load balancer happens internally. Although SSL termination
>>>>>>>>> ends the secure connection at the load balancer, for the above
>>>>>>>>> reasons it seems to be the better solution. The reason for choosing
>>>>>>>>> SSL termination over SSL pass-through is the complexity of handling
>>>>>>>>> a separate SSL certificate for each server behind the load balancer
>>>>>>>>> in the pass-through case.
>>>>>>>>>
>>>>>>>> -1 for this approach. IMO this has a major security risk.
>>>>>>>>
>>>>>>>> Let me explain the problem. If we offload SSL at the service load
>>>>>>>> balancer, all traffic beyond the load balancer will use HTTP and the
>>>>>>>> message content will be visible to anyone on the network inside K8s,
>>>>>>>> which means someone can simply start a container in K8s and trace
>>>>>>>> all HTTP traffic going through.
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>> Below is from the HAProxy documentation [1]. AFAIU, HAProxy-to-backend
>>>>>>> communication can happen with HTTPS enabled but without validating
>>>>>>> the server certificate.
>>>>>>>
>>>>>>
>>>>>>
>>>>>>> verify
>>>>>>> <http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-verify>
>>>>>>> [none|required]
>>>>>>>
>>>>>>> This setting is only available when support for OpenSSL was built in. 
>>>>>>> If set
>>>>>>> to 'none', server certificate is not verified. In the other case, The
>>>>>>> certificate provided by the server is verified using CAs from 'ca-file'
>>>>>>> and optional CRLs from 'crl-file'. If 'ssl_server_verify' is not 
>>>>>>> specified
>>>>>>> in global  section, this is the default. On verify failure the handshake
>>>>>>> is aborted. It is critically important to verify server certificates 
>>>>>>> when
>>>>>>> using SSL to connect to servers, otherwise the communication is prone to
>>>>>>> trivial man-in-the-middle attacks rendering SSL totally useless.
>>>>>>>
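The `verify` setting quoted above appears on the backend `server` line of the HAProxy configuration; a minimal sketch (backend and server names, address, and CA path are hypothetical):

```
backend app_backend
    # verify none: traffic is encrypted but the server cert is not checked,
    # leaving this hop open to man-in-the-middle attacks
    # server app1 10.0.0.11:8443 ssl verify none

    # verify required: the backend cert is checked against the given CA
    server app1 10.0.0.11:8443 ssl verify required ca-file /etc/ssl/ca.crt
```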
>>>>>> IMO there is still a major problem if we are not verifying the SSL
>>>>>> certificate. See the highlighted text.
>>>>>>
>>>>> +1. We will attend to this once the initial end-to-end scenario is
>>>>> working in App Cloud. I am +1 for using a self-signed cert in the pods
>>>>> and adding it to the truststore of HAProxy to fix the above issue.
>>>>>
>>>>> thank you.
>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>>> [1].
>>>>>>> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#ssl%20%28Server%20and%20default-server%20options%29
>>>>>>>
>>>>>>> thank you.
>>>>>>>
>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>>> In configuring load balancing with SSL termination, I had to
>>>>>>>>> customize the Kubernetes haproxy.conf file template of the service
>>>>>>>>> loadbalancer repo to support SSL termination.
>>>>>>>>>
>>>>>>>>> In order to provide SSL termination, the kubernetes services have
>>>>>>>>> to be annotated with
>>>>>>>>>       serviceloadbalancer/lb.sslTerm: "true"
>>>>>>>>>
>>>>>>>>> The default approach to load balancing with the service
>>>>>>>>> loadbalancer repo is the simple fanout approach, which uses the
>>>>>>>>> context path to load balance the traffic. As we need to load
>>>>>>>>> balance based on the host name, we need to go with the name-based
>>>>>>>>> virtual hosting approach. It can be achieved via the following
>>>>>>>>> annotation:
>>>>>>>>>      serviceloadbalancer/lb.Host: "<host-name>"
>>>>>>>>>
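Putting the two annotations together, a service definition might look like this sketch (service name, host, selector, and ports are hypothetical):

```yaml
# Hypothetical service carrying both annotations discussed in the thread.
apiVersion: v1
kind: Service
metadata:
  name: myapp                       # hypothetical name
  annotations:
    serviceloadbalancer/lb.sslTerm: "true"
    serviceloadbalancer/lb.Host: "myapp.example.com"
spec:
  selector:
    app: myapp
  ports:
  - port: 443
    targetPort: 8080
```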
>>>>>>>>> Any suggestions on the approach taken are highly appreciated.
>>>>>>>>>
>>>>>>>>> Thank you
>>>>>>>>>
>>>>>>>>> [1].
>>>>>>>>> https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> *Imesh Gunaratne*
>>>>>>>> Senior Technical Lead
>>>>>>>> WSO2 Inc: http://wso2.com
>>>>>>>> T: +94 11 214 5345 M: +94 77 374 2057
>>>>>>>> W: http://imesh.io
>>>>>>>> Lean . Enterprise . Middleware
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Manjula Rathnayaka
>>>>>>> Associate Technical Lead
>>>>>>> WSO2, Inc.
>>>>>>> Mobile:+94 77 743 1987
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Imesh Gunaratne*
>>>>>> Senior Technical Lead
>>>>>> WSO2 Inc: http://wso2.com
>>>>>> T: +94 11 214 5345 M: +94 77 374 2057
>>>>>> W: http://imesh.io
>>>>>> Lean . Enterprise . Middleware
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Manjula Rathnayaka
>>>>> Associate Technical Lead
>>>>> WSO2, Inc.
>>>>> Mobile:+94 77 743 1987
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Nishadi Kirielle*
>>>> *Software Engineering Intern*
>>>> Mobile : +94 (0) 714722148
>>>> Blog : http://nishadikirielle.blogspot.com/
>>>> [email protected]
>>>>
>>>
>>>
>>>
>>> --
>>> *Nishadi Kirielle*
>>> *Software Engineering Intern*
>>> Mobile : +94 (0) 714722148
>>> Blog : http://nishadikirielle.blogspot.com/
>>> [email protected]
>>>
>>> _______________________________________________
>>> Dev mailing list
>>> [email protected]
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>>
>> Udara Liyanage
>> Software Engineer
>> WSO2, Inc.: http://wso2.com
>> lean. enterprise. middleware
>>
>> web: http://udaraliyanage.wordpress.com
>> phone: +94 71 443 6897
>>
>
>
>
> --
> *Nishadi Kirielle*
> *Software Engineering Intern*
> Mobile : +94 (0) 714722148
> Blog : http://nishadikirielle.blogspot.com/
> [email protected]
>



-- 
*Nishadi Kirielle*
*Software Engineering Intern*
Mobile : +94 (0) 714722148
Blog : http://nishadikirielle.blogspot.com/
[email protected]
