Design questions around Container logs, EFK & OCP

2018-03-07 Thread Mohamed A. Shahat
Hi All,

My first question here, so I am hoping for at least some acknowledgement!

*Background*

   - OCP v3.7
   - Several Worker Nodes
   - Few Workload types
   - One Workload, let's call it WorkloadA, is planned to have dedicated
   Worker Nodes.

*Objective*

   - For WorkloadA, I'd like to send/route the Container Logs to an
   external EFK/ELK stack other than the one that gets set up with OCP

*Motivation*

   - For WorkloadA, an ES cluster already exists; we would like to
   reuse it.
   - There is an impression that the ES cluster that comes with OCP might
   not necessarily scale if the team operating OCP does not size it well.

*Inquiries*

   1. Has this been done before? Yes/No? Any comments?
   2. Is there any way, with the fluentd pods or otherwise, to route a
   specific Workload's / Pods' Container logs to an external ES cluster?
   3. If not, I'm willing to deploy my own fluentd pods. What do I lose by
   excluding the WorkloadA Worker Nodes from the OCP fluentd pods? For
   example, I don't want to lose any Operations / OCP-related / Worker
   Node-related logs going to the embedded ES cluster; all I need is to send
   the Container Logs of WorkloadA to another ES cluster.


Looking forward to hearing from you,

Thanks,
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Design questions around Container logs, EFK & OCP

2018-03-08 Thread Mohamed A. Shahat
Thanks Aleks for the feedback.

This looks promising.

We're using Enterprise OCP. Does that make a difference at this level of the
discussion?

For the external Elasticsearch instance configs you referred to, is it
possible for both to co-exist? Some Worker Nodes sending logs to the internal
ES, and other Worker Nodes sending logs to the external one?


Opensource origin:
> https://docs.openshift.org/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
> Enterprise:
> https://docs.openshift.com/container-platform/3.7/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance



Many Thanks,
/Mo


On 7 March 2018 at 23:27, Aleksandar Lazic <openshift-...@me2digital.com>
wrote:

> Hi.
>
> On 07.03.2018 at 23:47, Mohamed A. Shahat wrote:
> > Hi All,
> >
> > My first question here, so I am hoping for at least some
> > acknowledgement!
> >
> > _Background_
> >
> >   * OCP v3.7
> >
> Do you use the enterprise version or the opensource one?
> >
> >   * Several Worker Nodes
> >   * Few Workload types
> >   * One Workload, let's call it WorkloadA, is planned to have dedicated
> > Worker Nodes.
> >
> > _Objective_
> >
> >   * For WorkloadA, I'd like to send/route the Container Logs to an
> > external EFK/ELK stack other than the one that gets set up
> > with OCP
> >
> > _Motivation_
> >
> >   * For WorkloadA, an ES cluster already exists; we would like to
> > reuse it.
> >   * There is an impression that the ES cluster that comes with OCP
> > might not necessarily scale if the team operating OCP does not
> > size it well.
> >
> > _Inquiries_
> >
> >  1. Has this been done before? Yes/No? Any comments?
> >
> Yes.
> As you may know, handling logs in a proper way is not an easy task.
> There are some serious questions to consider, such as the following:
>
> * How long should the logs be preserved?
> * How much log volume is written?
> * How fast are the logs written?
> * What's the limit of the network?
> * What's the limit of the remote ES?
> * and many, many more questions
>
> >  2. Is there any way, with the fluentd pods or otherwise, to route a
> > specific Workload's / Pods' Container logs to an external ES cluster?
> >  3. If not, I'm willing to deploy my own fluentd pods. What do I lose
> > by excluding the WorkloadA Worker Nodes from the OCP fluentd
> > pods? For example, I don't want to lose any Operations /
> > OCP-related / Worker Node-related logs going to the embedded ES
> > cluster; all I need is to send the Container Logs of WorkloadA to
> > another ES cluster.
> >
> Have you looked at the following doc part?
>
> Opensource origin:
> https://docs.openshift.org/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>
> Enterprise:
> https://docs.openshift.com/container-platform/3.7/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>
> As described in the doc, you can send the collected fluentd logs to an
> external ES cluster.
>
> You can find the source of the OpenShift logging solution in this repo:
> https://github.com/openshift/origin-aggregated-logging
>
> > Looking forward to hearing from you,
> >
> > Thanks,
> Hth
> Aleks
>
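
For illustration, a minimal sketch of what the linked doc describes: pointing
the stock logging-fluentd daemonset at an external Elasticsearch via its
connection environment variables. The hostname, port and certificate paths
below are placeholders, and the exact variable names and default secret mount
should be checked against the 3.7 doc for your install.

    # Sketch only: send the logs collected by fluentd to an external ES.
    # ES_HOST/ES_PORT (plus the mutual-TLS settings) are the knobs the
    # aggregate_logging doc describes; all values here are placeholders.
    oc project logging
    oc set env daemonset/logging-fluentd \
        ES_HOST=external-es.example.com \
        ES_PORT=9200 \
        ES_CA=/etc/fluent/keys/ca \
        ES_CLIENT_CERT=/etc/fluent/keys/cert \
        ES_CLIENT_KEY=/etc/fluent/keys/key

Note that this redirects the application logs from every node fluentd runs on,
which is why the per-node split discussed below still matters.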
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Design questions around Container logs, EFK & OCP

2018-03-12 Thread Mohamed A. Shahat
Thanks Luke, extremely enlightening.

Now, can you help list the logs that are actually forwarded by the fluentd
pods on Worker Nodes? E.g. you mentioned "all of the node's logs", container
logs, and service logs. Can you please clarify the differences?

Many thanks,


On 12 March 2018 at 23:18, Luke Meyer <lme...@redhat.com> wrote:

> Although you can set up the fluentd instances to send logs either to the
> integrated storage or an external ES, it will be tricky to do both with the
> same deployment. They are deployed with a daemonset. What you can do is
> copy the daemonset and configure both as you like (with different
> secrets/configmaps), using node selectors and node labels to have the right
> ones land on the right nodes. However that will direct *all* of the node's
> logs; I don't think there's an easy way to have the container logs go to
> one destination and the service logs to another, without more in-depth
> configuration of fluentd. You do have complete control over its config if
> you really want though by modifying the configmap.
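
For illustration, a rough sketch of the approach Luke describes, using the oc
CLI. The daemonset name, node name, label and external ES host below are
placeholders, and the copied spec would also need its own secrets/configmaps
as Luke notes.

    # 1. Export the stock fluentd daemonset as a starting point.
    oc -n logging get daemonset/logging-fluentd -o yaml > fluentd-workloada.yaml

    # 2. Edit the copy by hand: give it a new name (e.g. logging-fluentd-workloada),
    #    change its nodeSelector to a dedicated label (e.g. workloada-fluentd=true),
    #    and point ES_HOST/ES_PORT at the external cluster.

    # 3. Create the copy and label only the WorkloadA nodes for it.
    oc -n logging create -f fluentd-workloada.yaml
    oc label node workloada-node-1 workloada-fluentd=true

    # 4. Keep the stock daemonset off those nodes by removing its selector
    #    label (by default it selects logging-infra-fluentd=true).
    oc label node workloada-node-1 logging-infra-fluentd-

As Luke says, this sends all of those nodes' logs to the external ES; splitting
container logs from node/service logs on the same node would mean editing the
fluentd configuration itself, e.g. via the logging-fluentd configmap.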
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev