Re: [Dev] Configuring load balancing in app cloud with HA Proxy

2016-04-11 Thread Nishadi Kirielle
Hi,
The limitation is in the service load balancer. If we go with that
approach, we need to patch the Kubernetes service load balancer code to
annotate the services with port definitions. The current approach is to
create two services, one for HTTPS and one for HTTP.
We can use node ports as well. The host ports are already available in the
AWS load balancer.
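The two-service approach mentioned above could look roughly as follows
(service names, selectors and ports are only illustrative; the sslTerm
annotation is the one discussed elsewhere in this thread):

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-http          # illustrative name
      annotations:
        serviceloadbalancer/lb.sslTerm: "false"
    spec:
      selector:
        app: myapp
      ports:
      - port: 80
        targetPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-https         # illustrative name
      annotations:
        serviceloadbalancer/lb.sslTerm: "true"
    spec:
      selector:
        app: myapp
      ports:
      - port: 443
        targetPort: 8443

Because the annotation applies to the whole service, each protocol needs
its own service under this approach.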

Regards
Nishadi

On Thu, Apr 7, 2016 at 10:14 PM, Imesh Gunaratne  wrote:

>
>
> On Wed, Mar 16, 2016 at 10:49 AM, Nishadi Kirielle 
> wrote:
>>
>>
>> In the current deployment, we have tested a service with a single port
>> exposed. This is because the service identifies whether a port is exposed
>> over HTTP or HTTPS through a service annotation that is common to all
>> exposed ports in the service. With that approach, supporting both HTTP and
>> HTTPS traffic requires several services. Thus, I'm currently attempting to
>> deploy a service with several exposed ports.
>>
>
> AFAIU this is not a restriction enforced by K8S services but rather a
> limitation in the service load balancer (the way it uses service
> annotations) [3]. K8S services allow defining any number of annotations
> with arbitrary key/value pairs. We can change the service load balancer to
> use an annotation per port to handle this.
>
>>
>> In addition, another concern is how the HAProxy load balancer itself is
>> exposed to external traffic. Currently this is done through host ports. If
>> we use node ports, the particular port will be exposed on all the nodes,
>> whereas a host port exposes the port only on the specified node.
>>
>
> Why do we use host ports instead of node ports? I believe traffic gets
> delegated to HAProxy via an AWS load balancer. If so, what would happen if
> the above host becomes unavailable?
>
> [3]
> https://github.com/nishadi/contrib/blob/master/service-loadbalancer/service_loadbalancer.go#L468
>
> Thanks
>
Re: [Dev] Configuring load balancing in app cloud with HA Proxy

2016-04-07 Thread Imesh Gunaratne
On Wed, Mar 16, 2016 at 10:49 AM, Nishadi Kirielle  wrote:
>
>
> In the current deployment, we have tested a service with a single port
> exposed. This is because the service identifies whether a port is exposed
> over HTTP or HTTPS through a service annotation that is common to all
> exposed ports in the service. With that approach, supporting both HTTP and
> HTTPS traffic requires several services. Thus, I'm currently attempting to
> deploy a service with several exposed ports.
>

AFAIU this is not a restriction enforced by K8S services but rather a
limitation in the service load balancer (the way it uses service
annotations) [3]. K8S services allow defining any number of annotations
with arbitrary key/value pairs. We can change the service load balancer to
use an annotation per port to handle this.
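As a sketch of the annotation-per-port idea (the per-port key format below
is purely hypothetical; the current service loadbalancer does not
understand it, and service_loadbalancer.go would have to be patched
accordingly):

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp               # illustrative name
      annotations:
        # hypothetical per-port keys, one annotation per exposed port
        serviceloadbalancer/lb.sslTerm.80: "false"
        serviceloadbalancer/lb.sslTerm.443: "true"
    spec:
      selector:
        app: myapp
      ports:
      - name: http
        port: 80
      - name: https
        port: 443

With such keys a single service could expose both HTTP and HTTPS ports.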

>
> In addition, another concern is how the HAProxy load balancer itself is
> exposed to external traffic. Currently this is done through host ports. If
> we use node ports, the particular port will be exposed on all the nodes,
> whereas a host port exposes the port only on the specified node.
>

Why do we use host ports instead of node ports? I believe traffic gets
delegated to HAProxy via an AWS load balancer. If so, what would happen if
the above host becomes unavailable?
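For clarity, the two exposure options differ in where they are declared
(the fragments below are illustrative): a host port is set in the pod
template and opens the port only on the node running that pod, while a
NodePort service opens the same port on every node.

    # Host port: declared in the HAProxy pod template (replication controller)
    containers:
    - name: haproxy-lb          # illustrative name
      ports:
      - containerPort: 443
        hostPort: 443           # open only on the node hosting this pod
    ---
    # Node port: declared as a service; kube-proxy opens it on all nodes
    apiVersion: v1
    kind: Service
    metadata:
      name: haproxy-lb
    spec:
      type: NodePort
      selector:
        app: haproxy-lb
      ports:
      - port: 443
        nodePort: 30443         # must fall within the cluster's node-port range

With node ports, an AWS load balancer could target any healthy node,
avoiding the dependency on a single host.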

[3]
https://github.com/nishadi/contrib/blob/master/service-loadbalancer/service_loadbalancer.go#L468

Thanks



Re: [Dev] Configuring load balancing in app cloud with HA Proxy

2016-04-07 Thread Nishadi Kirielle
Hi all,
Please find below a blog post that serves as a guide to deploying an
HAProxy pod in a Kubernetes cluster. [1]

Regards
Nishadi

[1].
http://nishadikirielle.blogspot.com/2016/04/configuring-haproxy-load-balancer-for.html


Re: [Dev] Configuring load balancing in app cloud with HA Proxy

2016-03-19 Thread Udara Liyanage
On Wed, Mar 16, 2016 at 10:49 AM, Nishadi Kirielle  wrote:

> Hi all,
> I have configured load balancing in AppCloud staging with HAProxy.
> In the configuration I have done few changes in the kubernetes service
> loadbalancer. [1]
>
Do we create certificates for IP addresses? Though it is not invalid, the
general approach is to create certificates for host names.
If we create certificates for IP addresses, we need to create them again
whenever an IP address changes or a new node is added.

> In order to run the HAProxy load balancer in Kubernetes and provide
> support for HTTPS traffic, we need to add the SSL certificate specific to
> the node's public IP to each node. In this deployment I have manually added
> the certificate file inside the container. It should be added to the
> location specified in the replication controller definition. [2] If we are
> adding a CA certificate as well, we can define it in the replication
> controller definition as an arg, as follows:
>  - --ssl-ca-cert=/ssl/ca.crt
>
> In the current deployment, we have tested a service with a single port
> exposed. This is because the service identifies whether a port is exposed
> over HTTP or HTTPS through a service annotation that is common to all
> exposed ports in the service. With that approach, supporting both HTTP and
> HTTPS traffic requires several services. Thus, I'm currently attempting to
> deploy a service with several exposed ports.
>
> In addition, another concern is how the HAProxy load balancer itself is
> exposed to external traffic. Currently this is done through host ports. If
> we use node ports, the particular port will be exposed on all the nodes,
> whereas a host port exposes the port only on the specified node.
>
> Appreciate your feedback on the approach taken.
>
> Thanks
>
> [1].
> https://github.com/nishadi/contrib/commit/f169044546dc8a84a359d889bb186aef83d9c422
> [2].
> https://github.com/nishadi/contrib/blob/master/service-loadbalancer/rc.yaml#L52

Re: [Dev] Configuring load balancing in app cloud with HA Proxy

2016-03-19 Thread Nishadi Kirielle
Hi,
The IP addresses have been used to generate certs only for development
purposes. We'll update these with proper certs when moving to
production.
Thanks

On Thu, Mar 17, 2016 at 12:03 PM, Udara Liyanage  wrote:

>
>
> On Wed, Mar 16, 2016 at 10:49 AM, Nishadi Kirielle 
> wrote:
>
>> Hi all,
>> I have configured load balancing in AppCloud staging with HAProxy.
>> In the configuration I have done few changes in the kubernetes service
>> loadbalancer. [1]
>>
>> Do we create certificates for IP addresses? Though it is not invalid, the
> general approach is to create certificates for host names.
> If we create certificates for IP addresses, we need to create them again
> whenever an IP address changes or a new node is added.

Re: [Dev] Configuring load balancing in app cloud with HA Proxy

2016-03-15 Thread Nishadi Kirielle
Hi all,
I have configured load balancing in AppCloud staging with HAProxy.
In the configuration I have made a few changes to the Kubernetes service
loadbalancer. [1]

In order to run the HAProxy load balancer in Kubernetes and provide support
for HTTPS traffic, we need to add the SSL certificate specific to the
node's public IP to each node. In this deployment I have manually added the
certificate file inside the container. It should be added to the location
specified in the replication controller definition. [2] If we are adding a
CA certificate as well, we can define it in the replication controller
definition as an arg, as follows:
 - --ssl-ca-cert=/ssl/ca.crt
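A sketch of how this could look in the replication controller definition,
assuming the certificates are delivered via a secret volume instead of
being copied into the container manually (the image name and secret name
are illustrative; only the --ssl-ca-cert flag is taken from the text
above):

    containers:
    - name: haproxy-lb                    # illustrative name
      image: service-loadbalancer:latest  # illustrative image
      args:
      - --ssl-ca-cert=/ssl/ca.crt         # CA cert, as noted above
      volumeMounts:
      - name: ssl-certs
        mountPath: /ssl
        readOnly: true
    volumes:
    - name: ssl-certs
      secret:
        secretName: haproxy-ssl           # hypothetical secret holding the certs

A secret volume would avoid baking node-specific certificates into the
image.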

In the current deployment, we have tested a service with a single port
exposed. This is because the service identifies whether a port is exposed
over HTTP or HTTPS through a service annotation that is common to all
exposed ports in the service. With that approach, supporting both HTTP and
HTTPS traffic requires several services. Thus, I'm currently attempting to
deploy a service with several exposed ports.

In addition, another concern is how the HAProxy load balancer itself is
exposed to external traffic. Currently this is done through host ports. If
we use node ports, the particular port will be exposed on all the nodes,
whereas a host port exposes the port only on the specified node.

Appreciate your feedback on the approach taken.

Thanks

[1].
https://github.com/nishadi/contrib/commit/f169044546dc8a84a359d889bb186aef83d9c422
[2].
https://github.com/nishadi/contrib/blob/master/service-loadbalancer/rc.yaml#L52


Re: [Dev] Configuring load balancing in app cloud with HA Proxy

2016-03-13 Thread Nishadi Kirielle
Hi all,
+1 for going with the SSL pass-through approach. Once the testing with
staging is done, I will focus on this approach.

Thanks

On Mon, Mar 14, 2016 at 10:29 AM, Manjula Rathnayake 
wrote:

> Hi Imesh,
>
> On Mon, Mar 14, 2016 at 10:20 AM, Imesh Gunaratne  wrote:
>
>> Hi Manjula,
>>
>> On Mon, Mar 14, 2016 at 10:06 AM, Manjula Rathnayake 
>> wrote:
>>
>>> Hi Imesh,
>>>
>>> On Mon, Mar 14, 2016 at 9:56 AM, Imesh Gunaratne  wrote:
>>>

 On Sun, Mar 13, 2016 at 11:36 PM, Nishadi Kirielle 
 wrote:

> Hi all,
> Currently I'm working on configuring HAProxy load balancing support
> for app cloud.
> In checking the session affinity functionality in Kubernetes, I have
> verified the load balancing of HTTP traffic with HAProxy. It could be done
> using the Kubernetes contribution repo, 'service loadbalancer' [1].
>
> In order to check load balancing with HTTPS traffic, the approach taken is
> SSL termination. In the App Cloud scenario, the Kubernetes cluster is not
> directly exposed and the load balancer exists within the cluster. Thus the
> communication between the application servers and the load balancer happens
> internally. Although SSL termination ends the secure connection at the load
> balancer, for the above-mentioned reasons SSL termination seems to be the
> better solution. SSL termination was chosen over SSL pass-through because
> of the complexity of handling a separate SSL certificate for each server
> behind the load balancer in the case of SSL pass-through.
>
> -1 for this approach, IMO this has a major security risk.

Let me explain the problem. If we offload SSL at the service load
balancer, all traffic beyond the load balancer will use HTTP and the
message content will be visible to anyone on the network inside K8S, which
means someone can simply start a container in K8S and trace all the HTTP
traffic going through.

>>>
>>
>>> Below is from the HAProxy documentation [1]. AFAIU, HAProxy-to-backend
>>> server communication happens with HTTPS enabled but without validating
>>> the server certificate.
>>>
>>
>>
>>> verify
>>> 
>>> [none|required]
>>>
>>> This setting is only available when support for OpenSSL was built in. If set
>>> to 'none', server certificate is not verified. In the other case, The
>>> certificate provided by the server is verified using CAs from 'ca-file'
>>> and optional CRLs from 'crl-file'. If 'ssl_server_verify' is not specified
>>> in global  section, this is the default. On verify failure the handshake
>>> is aborted. It is critically important to verify server certificates when
>>> using SSL to connect to servers, otherwise the communication is prone to
>>> trivial man-in-the-middle attacks rendering SSL totally useless.
>>>
>>> IMO there is still a major problem if we are not verifying the SSL
>> certificate. See the highlighted text.
>>
> +1. We will attend to this once the initial end-to-end scenario is working
> in App Cloud. I am +1 for using a self-signed cert in the pods and adding
> it to the truststore of HAProxy to fix the above issue.
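A backend along the lines Manjula suggests might look as follows in the
HAProxy configuration (server names and addresses are illustrative; 'ssl
verify required' with 'ca-file' enables the certificate check described in
the quoted documentation):

    backend myapp_https
        mode http
        balance roundrobin
        # re-encrypt traffic to the pods and verify their (self-signed)
        # certificates against the CA placed in HAProxy's truststore
        server pod1 10.244.1.5:8443 ssl verify required ca-file /ssl/ca.crt
        server pod2 10.244.2.7:8443 ssl verify required ca-file /ssl/ca.crt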
>
> thank you.
>
>>
>> Thanks
>>
>>
>>> [1].
>>> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#ssl%20%28Server%20and%20default-server%20options%29
>>>
>>> thank you.
>>>
>>>
 Thanks

In configuring load balancing with SSL termination, I had to customize the
> HAProxy config template of the Kubernetes service loadbalancer repo to
> support SSL termination.
>
> In order to provide SSL termination, the kubernetes services have to
> be annotated with
>   serviceloadbalancer/lb.sslTerm: "true"
>
> The default approach to load balancing in the service loadbalancer repo
> is the simple fan-out approach, which uses the context path to load
> balance the traffic. As we need to load balance based on the host name,
> we need to go with the name-based virtual hosting approach. It can be
> achieved via the following annotation:
>  serviceloadbalancer/lb.Host: ""
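The host-based routing annotation above, applied to a service, might look
like the following (the host value and names are illustrative; the original
note leaves the host value blank):

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp               # illustrative name
      annotations:
        serviceloadbalancer/lb.sslTerm: "true"
        serviceloadbalancer/lb.Host: "myapp.example.com"   # illustrative host
    spec:
      selector:
        app: myapp
      ports:
      - port: 443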
>
> Any suggestions on the approach taken are highly appreciated.
>
> Thank you
>
> [1].
> https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
>
>



 --
 *Imesh Gunaratne*
 Senior Technical Lead
 WSO2 Inc: http://wso2.com
 T: +94 11 214 5345 M: +94 77 374 2057
 W: http://imesh.io
 Lean . Enterprise . Middleware


>>>
>>>
>>> --
>>> Manjula Rathnayaka
>>> Associate Technical Lead
>>> WSO2, Inc.
>>> Mobile:+94 77 743 1987
>>>
>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.io
>> Lean . Enterprise . Middleware
>>

Re: [Dev] Configuring load balancing in app cloud with HA Proxy

2016-03-13 Thread Manjula Rathnayake
Hi Imesh,

On Mon, Mar 14, 2016 at 10:20 AM, Imesh Gunaratne  wrote:

> Hi Manjula,
>
> On Mon, Mar 14, 2016 at 10:06 AM, Manjula Rathnayake 
> wrote:
>
>> Hi Imesh,
>>
>> On Mon, Mar 14, 2016 at 9:56 AM, Imesh Gunaratne  wrote:
>>
>>>
>>> On Sun, Mar 13, 2016 at 11:36 PM, Nishadi Kirielle 
>>> wrote:
>>>
 Hi all,
 Currently I'm working on configuring HAProxy load balancing support for
 app cloud.
 In checking the session affinity functionality in kubernetes, I have
 verified the load balancing of http traffic with HAProxy. It could be done
 using kubernetes contribution repo, 'service loadbalancer' [1].

 In order to check the load balancing with https traffic the taken
 approach is SSL termination. In the scenario of app cloud, kubernetes
 cluster is not directly exposed and the load balancer exists within the
 cluster. Thus the communication between the application servers and the
 load balancer happens internally. Although SSL termination ends the secure
 connection at the load balancer, due to the above mentioned reasons, SSL
 termination seems to be a better solution. The reason for the use of SSL
 termination over SSL pass through is because of the complexity of handling
 a separate SSL certificate for each server behind the load balancer in the
 case of SSL pass through.

 -1 for this approach, IMO this has a major security risk.
>>>
>>> Let me explain the problem. If we offload SSL at the service load
>>> balancer, all traffic beyond the load balancer will use HTTP and the
>>> message content will be visible to anyone on the network inside K8S, which
>>> means someone can simply start a container in K8S and trace all HTTP
>>> traffic going through.
>>>
>>
>
>> Below is from HA Proxy documentation[1]. AFAIU, HA Proxy to backend
>> server communication happens with HTTPS enabled but not validating the
>> server certificate.
>>
>
>
>> verify
>> 
>> [none|required]
>>
>> This setting is only available when support for OpenSSL was built in. If set
>> to 'none', server certificate is not verified. In the other case, The
>> certificate provided by the server is verified using CAs from 'ca-file'
>> and optional CRLs from 'crl-file'. If 'ssl_server_verify' is not specified
>> in global  section, this is the default. On verify failure the handshake
>> is aborted. It is critically important to verify server certificates when
>> using SSL to connect to servers, otherwise the communication is prone to
>> trivial man-in-the-middle attacks rendering SSL totally useless.
>>
> IMO there is still a major problem if we are not verifying the SSL
> certificate. See the highlighted text.
>
+1. We will attend to this once the initial end-to-end scenario is working in
App Cloud. I am +1 for using a self-signed cert in the pods and adding it to
the truststore of HA Proxy to fix the above issue.

thank you.
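Under that proposal, HAProxy would re-encrypt traffic to the pods and verify
their self-signed certificate against a trust file that contains it. A minimal
sketch of the relevant backend lines (backend name, pod addresses and the
ca-file path are illustrative, not taken from the actual App Cloud config):

```haproxy
# Illustrative backend: HTTPS to the pods, with the pods' self-signed
# certificate added to a local trust file so HAProxy can verify it.
backend app_servers_https
    mode http
    balance roundrobin
    server pod1 10.244.1.5:9443 ssl verify required ca-file /etc/haproxy/pods-ca.pem
    server pod2 10.244.2.7:9443 ssl verify required ca-file /etc/haproxy/pods-ca.pem
```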

>
> Thanks
>
>
>> [1].
>> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#ssl%20%28Server%20and%20default-server%20options%29
>>
>> thank you.
>>
>>
>>> Thanks
>>>
>>> In configuring load balancing with SSL termination, I had to customize
 kubernetes haproxy.conf file template of service loadbalancer repo to
 support SSL termination.

 In order to provide SSL termination, the kubernetes services have to be
 annotated with
   serviceloadbalancer/lb.sslTerm: "true"

 The default approach in load balancing with service load balancer repo
 is based on simple fan out approach which uses context path to load balance
 the traffic. As we need to load balance based on the host name, we need to
 go with the name based virtual hosting approach. It can be achieved via the
 following annotation.
  serviceloadbalancer/lb.Host: ""

 Any suggestions on the approach taken are highly appreciated.

 Thank you

 [1].
 https://github.com/kubernetes/contrib/tree/master/service-loadbalancer


>>>
>>>
>>>
>>> --
>>> *Imesh Gunaratne*
>>> Senior Technical Lead
>>> WSO2 Inc: http://wso2.com
>>> T: +94 11 214 5345 M: +94 77 374 2057
>>> W: http://imesh.io
>>> Lean . Enterprise . Middleware
>>>
>>>
>>
>>
>> --
>> Manjula Rathnayaka
>> Associate Technical Lead
>> WSO2, Inc.
>> Mobile:+94 77 743 1987
>>
>
>
>
> --
> *Imesh Gunaratne*
> Senior Technical Lead
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: http://imesh.io
> Lean . Enterprise . Middleware
>
>


-- 
Manjula Rathnayaka
Associate Technical Lead
WSO2, Inc.
Mobile:+94 77 743 1987
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Configuring load balancing in app cloud with HA Proxy

2016-03-13 Thread Imesh Gunaratne
Hi Manjula,

On Mon, Mar 14, 2016 at 10:06 AM, Manjula Rathnayake 
wrote:

> Hi Imesh,
>
> On Mon, Mar 14, 2016 at 9:56 AM, Imesh Gunaratne  wrote:
>
>>
>> On Sun, Mar 13, 2016 at 11:36 PM, Nishadi Kirielle 
>> wrote:
>>
>>> Hi all,
>>> Currently I'm working on configuring HAProxy load balancing support for
>>> app cloud.
>>> In checking the session affinity functionality in kubernetes, I have
>>> verified the load balancing of http traffic with HAProxy. It could be done
>>> using kubernetes contribution repo, 'service loadbalancer' [1].
>>>
>>> In order to check the load balancing with https traffic the taken
>>> approach is SSL termination. In the scenario of app cloud, kubernetes
>>> cluster is not directly exposed and the load balancer exists within the
>>> cluster. Thus the communication between the application servers and the
>>> load balancer happens internally. Although SSL termination ends the secure
>>> connection at the load balancer, due to the above mentioned reasons, SSL
>>> termination seems to be a better solution. The reason for the use of SSL
>>> termination over SSL pass through is because of the complexity of handling
>>> a separate SSL certificate for each server behind the load balancer in the
>>> case of SSL pass through.
>>>
>>> -1 for this approach, IMO this has a major security risk.
>>
>> Let me explain the problem. If we offload SSL at the service load
>> balancer, all traffic beyond the load balancer will use HTTP and the
>> message content will be visible to anyone on the network inside K8S, which
>> means someone can simply start a container in K8S and trace all HTTP
>> traffic going through.
>>
>

> Below is from HA Proxy documentation[1]. AFAIU, HA Proxy to backend server
> communication happens with HTTPS enabled but not validating the server
> certificate.
>


> verify
> 
> [none|required]
>
> This setting is only available when support for OpenSSL was built in. If set
> to 'none', server certificate is not verified. In the other case, The
> certificate provided by the server is verified using CAs from 'ca-file'
> and optional CRLs from 'crl-file'. If 'ssl_server_verify' is not specified
> in global  section, this is the default. On verify failure the handshake
> is aborted. It is critically important to verify server certificates when
> using SSL to connect to servers, otherwise the communication is prone to
> trivial man-in-the-middle attacks rendering SSL totally useless.
>
IMO there is still a major problem if we are not verifying the SSL
certificate. See the highlighted text.

Thanks


> [1].
> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#ssl%20%28Server%20and%20default-server%20options%29
>
> thank you.
>
>
>> Thanks
>>
>> In configuring load balancing with SSL termination, I had to customize
>>> kubernetes haproxy.conf file template of service loadbalancer repo to
>>> support SSL termination.
>>>
>>> In order to provide SSL termination, the kubernetes services have to be
>>> annotated with
>>>   serviceloadbalancer/lb.sslTerm: "true"
>>>
>>> The default approach in load balancing with service load balancer repo
>>> is based on simple fan out approach which uses context path to load balance
>>> the traffic. As we need to load balance based on the host name, we need to
>>> go with the name based virtual hosting approach. It can be achieved via the
>>> following annotation.
>>>  serviceloadbalancer/lb.Host: ""
>>>
>>> Any suggestions on the approach taken are highly appreciated.
>>>
>>> Thank you
>>>
>>> [1].
>>> https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
>>>
>>>
>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.io
>> Lean . Enterprise . Middleware
>>
>>
>
>
> --
> Manjula Rathnayaka
> Associate Technical Lead
> WSO2, Inc.
> Mobile:+94 77 743 1987
>



-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware


Re: [Dev] Configuring load balancing in app cloud with HA Proxy

2016-03-13 Thread Manjula Rathnayake
Hi Imesh,

On Mon, Mar 14, 2016 at 9:56 AM, Imesh Gunaratne  wrote:

>
>
> On Sun, Mar 13, 2016 at 11:36 PM, Nishadi Kirielle 
> wrote:
>
>> Hi all,
>> Currently I'm working on configuring HAProxy load balancing support for
>> app cloud.
>> In checking the session affinity functionality in kubernetes, I have
>> verified the load balancing of http traffic with HAProxy. It could be done
>> using kubernetes contribution repo, 'service loadbalancer' [1].
>>
>> In order to check the load balancing with https traffic the taken
>> approach is SSL termination. In the scenario of app cloud, kubernetes
>> cluster is not directly exposed and the load balancer exists within the
>> cluster. Thus the communication between the application servers and the
>> load balancer happens internally. Although SSL termination ends the secure
>> connection at the load balancer, due to the above mentioned reasons, SSL
>> termination seems to be a better solution. The reason for the use of SSL
>> termination over SSL pass through is because of the complexity of handling
>> a separate SSL certificate for each server behind the load balancer in the
>> case of SSL pass through.
>>
>> -1 for this approach, IMO this has a major security risk.
>
> Let me explain the problem. If we offload SSL at the service load
> balancer, all traffic beyond the load balancer will use HTTP and the
> message content will be visible to anyone on the network inside K8S, which
> means someone can simply start a container in K8S and trace all HTTP
> traffic going through.
>
Below is from the HA Proxy documentation [1]. AFAIU, HA Proxy to backend
server communication happens with HTTPS enabled, but without validating the
server certificate.

verify [none|required]

This setting is only available when support for OpenSSL was built in. If set
to 'none', server certificate is not verified. In the other case, The
certificate provided by the server is verified using CAs from 'ca-file'
and optional CRLs from 'crl-file'. If 'ssl_server_verify' is not specified
in global  section, this is the default. On verify failure the handshake
is aborted. It is critically important to verify server certificates when
using SSL to connect to servers, otherwise the communication is prone to
trivial man-in-the-middle attacks rendering SSL totally useless.

[1].
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#ssl%20%28Server%20and%20default-server%20options%29

thank you.
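In HAProxy terms, the behaviour described above corresponds to server lines of
roughly this shape, where 'ssl' re-encrypts the connection to the pod but
'verify none' skips certificate validation (backend name and pod addresses are
illustrative):

```haproxy
# Illustrative backend: traffic to the pods is HTTPS, but the pods'
# certificates are not validated ('verify none').
backend app_servers_https
    mode http
    balance roundrobin
    server pod1 10.244.1.5:9443 ssl verify none
```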


> Thanks
>
> In configuring load balancing with SSL termination, I had to customize
>> kubernetes haproxy.conf file template of service loadbalancer repo to
>> support SSL termination.
>>
>> In order to provide SSL termination, the kubernetes services have to be
>> annotated with
>>   serviceloadbalancer/lb.sslTerm: "true"
>>
>> The default approach in load balancing with service load balancer repo is
>> based on simple fan out approach which uses context path to load balance
>> the traffic. As we need to load balance based on the host name, we need to
>> go with the name based virtual hosting approach. It can be achieved via the
>> following annotation.
>>  serviceloadbalancer/lb.Host: ""
>>
>> Any suggestions on the approach taken are highly appreciated.
>>
>> Thank you
>>
>> [1].
>> https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
>>
>>
>
>
>
> --
> *Imesh Gunaratne*
> Senior Technical Lead
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: http://imesh.io
> Lean . Enterprise . Middleware
>
>


-- 
Manjula Rathnayaka
Associate Technical Lead
WSO2, Inc.
Mobile:+94 77 743 1987


Re: [Dev] Configuring load balancing in app cloud with HA Proxy

2016-03-13 Thread Imesh Gunaratne
On Sun, Mar 13, 2016 at 11:36 PM, Nishadi Kirielle  wrote:

> Hi all,
> Currently I'm working on configuring HAProxy load balancing support for
> app cloud.
> In checking the session affinity functionality in kubernetes, I have
> verified the load balancing of http traffic with HAProxy. It could be done
> using kubernetes contribution repo, 'service loadbalancer' [1].
>
> In order to check the load balancing with https traffic the taken approach
> is SSL termination. In the scenario of app cloud, kubernetes cluster is not
> directly exposed and the load balancer exists within the cluster. Thus the
> communication between the application servers and the load balancer happens
> internally. Although SSL termination ends the secure connection at the load
> balancer, due to the above mentioned reasons, SSL termination seems to be a
> better solution. The reason for the use of SSL termination over SSL pass
> through is because of the complexity of handling a separate SSL certificate
> for each server behind the load balancer in the case of SSL pass through.
>
> -1 for this approach, IMO this has a major security risk.

Let me explain the problem. If we offload SSL at the service load balancer,
all traffic beyond the load balancer will use plain HTTP and the message
content will be visible to anyone on the network inside K8S, which means
someone can simply start a container in K8S and trace all HTTP traffic
passing through.

Thanks

In configuring load balancing with SSL termination, I had to customize
> kubernetes haproxy.conf file template of service loadbalancer repo to
> support SSL termination.
>
> In order to provide SSL termination, the kubernetes services have to be
> annotated with
>   serviceloadbalancer/lb.sslTerm: "true"
>
> The default approach in load balancing with service load balancer repo is
> based on simple fan out approach which uses context path to load balance
> the traffic. As we need to load balance based on the host name, we need to
> go with the name based virtual hosting approach. It can be achieved via the
> following annotation.
>  serviceloadbalancer/lb.Host: ""
>
> Any suggestions on the approach taken are highly appreciated.
>
> Thank you
>
> [1].
> https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
>
>



-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware


[Dev] Configuring load balancing in app cloud with HA Proxy

2016-03-13 Thread Nishadi Kirielle
Hi all,
Currently I'm working on configuring HAProxy load balancing support for app
cloud.
While checking the session affinity functionality in kubernetes, I have
verified load balancing of http traffic with HAProxy. This was done using the
kubernetes contribution repo, 'service loadbalancer' [1].

To handle https traffic, the approach taken is SSL termination. In the app
cloud scenario, the kubernetes cluster is not directly exposed and the load
balancer runs within the cluster, so the communication between the
application servers and the load balancer happens internally. Although SSL
termination ends the secure connection at the load balancer, for the reasons
mentioned above it seems to be the better solution here. SSL termination was
preferred over SSL pass-through because pass-through requires handling a
separate SSL certificate for each server behind the load balancer.

In configuring load balancing with SSL termination, I had to customize the
kubernetes haproxy.conf template of the service loadbalancer repo.

In order to provide SSL termination, the kubernetes services have to be
annotated with
  serviceloadbalancer/lb.sslTerm: "true"

The default approach to load balancing with the service loadbalancer repo is
a simple fan-out that uses the context path to balance traffic. As we need to
load balance based on the host name, we need to go with the name-based
virtual hosting approach instead. It can be achieved via the following
annotation.
 serviceloadbalancer/lb.Host: ""
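Putting the two annotations together, a kubernetes service manifest could look
like the following sketch. The service name, port and host value are
illustrative, not from the actual deployment; lb.Host would carry the
application's real virtual host name:

```yaml
# Hypothetical service definition; name, port and host are examples only.
apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    # Ask the service loadbalancer to terminate SSL for this service
    serviceloadbalancer/lb.sslTerm: "true"
    # Load balance by host name (name based virtual hosting)
    serviceloadbalancer/lb.Host: "myapp.example.com"
spec:
  selector:
    app: myapp
  ports:
    - port: 9763
      targetPort: 9763
```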

Any suggestions on the approach taken are highly appreciated.

Thank you

[1]. https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev