> for example i don't want to lose any Operations / OCP related / Worker
> Nodes related logs going to the embedded ES cluster


 I was going to say that the fluentd config doesn't have a mechanism to
send these to a different ES cluster. Actually, it does: these are all
designated as "ops" logs, and there's a mechanism for defining two
separate (on-cluster) ES clusters and having fluentd send the "ops" logs
to one cluster and the regular container logs to the other. You may be
able to leverage that to have the non-ops container logs go to an
external cluster.
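
To make that concrete, something along these lines is what the doc linked
further down describes (a sketch only -- the hostname is made up, the
"logging" project is assumed to be the default one, and the exact variable
names should be checked against the docs for your version). The
logging-fluentd daemonset reads its destinations from environment
variables, ES_* for regular container logs and OPS_* for ops logs:

    # point the regular container-log output at an external ES cluster,
    # while leaving the OPS_* variables at the in-cluster Elasticsearch
    oc -n logging set env daemonset/logging-fluentd \
        ES_HOST=external-es.example.com \
        ES_PORT=9200
    # the daemonset recreates the fluentd pods, which pick up the change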

> you mentioned (all node's logs), container logs and service logs. Can you
> please clarify the differences?


For our purposes, all node logs include log entries in the journal as well
as the container logs that docker writes separately when the json-file log
driver is configured (which I believe is the default again -- journal was
the default for a while).
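
If you want to see those on a node, the container logs written by the
json-file driver typically show up under /var/log/containers, which is
what fluentd tails (paths are from memory, so treat them as approximate):

    # on a node: container logs as written by the json-file log driver
    ls -l /var/log/containers/
    # each entry is a symlink (possibly via /var/log/pods) to something like
    #   /var/lib/docker/containers/<container-id>/<container-id>-json.log
    # journal entries, by contrast, are read from the systemd journal itself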

By "service logs" I meant journal logs from the systemd services, for
example the master and node units. These are all considered ops logs.
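
For example, on a node you can read those directly from the journal (the
unit names below are my assumption for 3.7 Enterprise; on origin they
would be origin-node / origin-master instead):

    # service ("ops") logs straight from the systemd journal on a node
    journalctl -u atomic-openshift-node --since "1 hour ago"
    journalctl -u atomic-openshift-master --since "1 hour ago"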

Container logs are just logs from containers, whether they're in json files
or in the journal, and whether they're workload containers or "Operations /
OCP related". Most OCP infrastructure components are deployed in projects
whose logs are considered ops logs (there's a list in the fluentd config).
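
If you want to see exactly which projects get that treatment, the list is
in the fluentd configmap; something like the following should let you
inspect it (this assumes the default "logging" project and the
logging-fluentd configmap name):

    # dump the fluentd config and look for how "ops" projects are matched
    oc -n logging get configmap/logging-fluentd -o yaml | less
    # the same config also shows how the ops vs. non-ops outputs are wired up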


On Mon, Mar 12, 2018 at 7:21 PM, Mohamed A. Shahat <mols...@gmail.com>
wrote:

> Thanks Luke, extremely enlightening.
>
> Now, can you help list the logs that are actually forwarded by the fluentd
> pods on worker nodes ? e.g. you mentioned (all node's logs ) , container
> logs and service logs. Can you please clarify the differences ?
>
> Many thanks,
>
>
> On 12 March 2018 at 23:18, Luke Meyer <lme...@redhat.com> wrote:
>
>> Although you can set up the fluentd instances to send logs either to the
>> integrated storage or an external ES, it will be tricky to do both with the
>> same deployment. They are deployed with a daemonset. What you can do is
>> copy the daemonset and configure both as you like (with different
>> secrets/configmaps), using node selectors and node labels to have the right
>> ones land on the right nodes. However, that will direct *all* of the
>> node's logs; I don't think there's an easy way to have the container logs
>> go to one destination and the service logs to another without more
>> in-depth configuration of fluentd. You do have complete control over its
>> config, though, if you really want it, by modifying the configmap.
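>>
>> A rough sketch of the node-selector part (the node name, label, and the
>> copied daemonset's file name are all made up, and the copy would also
>> need its own configmap/secret references):
>>
>>     # label the WorkloadA nodes so the second collector lands only there
>>     oc label node workloada-node-1 logging-collector=external
>>     # export the shipped daemonset as a starting point for the copy
>>     oc -n logging get daemonset/logging-fluentd -o yaml > fluentd-external.yaml
>>     # edit fluentd-external.yaml: new name, nodeSelector
>>     # logging-collector=external, and its own configmap/secret, then:
>>     oc -n logging create -f fluentd-external.yaml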
>>
>> On Thu, Mar 8, 2018 at 4:15 AM, Mohamed A. Shahat <mols...@gmail.com>
>> wrote:
>>
>>> Thanks Aleks for the feedback.
>>>
>>> This looks promising.
>>>
>>> We're using Enterprise OCP. Does that make a difference at that level of
>>> discussion ?
>>>
>>> For the external Elasticsearch instance configs you referred to, is it
>>> possible for both to co-exist? Some Worker nodes sending logs to the
>>> internal ES, and other Worker nodes sending logs to the external one?
>>>
>>>
>>>> Opensource origin:
>>>> https://docs.openshift.org/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>>>> Enterprise:
>>>> https://docs.openshift.com/container-platform/3.7/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>>>
>>>
>>>
>>> Many Thanks,
>>> /Mo
>>>
>>>
>>> On 7 March 2018 at 23:27, Aleksandar Lazic <openshift-...@me2digital.com> wrote:
>>>
>>>> Hi.
>>>>
>>>> On 07.03.2018 at 23:47, Mohamed A. Shahat wrote:
>>>> > Hi All,
>>>> >
>>>> > My first question here, so I am hoping at least for some
>>>> > acknowledgement!
>>>> >
>>>> > _Background_
>>>> >
>>>> >   * OCP v3.7
>>>> >
>>>> Do you use the enterprise version or the opensource one?
>>>> >
>>>> >   * Several Worker Nodes
>>>> >   * Few Workload types
>>>> >   * One Workload, let's call it WorkloadA, is planned to have
>>>> >     dedicated Worker Nodes.
>>>> >
>>>> > _Objective_
>>>> >
>>>> >   * For WorkloadA, I'd like to send/route the Container Logs to an
>>>> >     External EFK / ELK stack other than the one that gets set up
>>>> >     with OCP
>>>> >
>>>> > _Motivation_
>>>> >
>>>> >   * For Workload A, an ES cluster already exists and we would like to
>>>> >     reuse it.
>>>> >   * There is an impression that the ES cluster that comes with OCP
>>>> >     might not necessarily scale if the team operating OCP does not
>>>> >     size it well
>>>> >
>>>> > _Inquiries_
>>>> >
>>>> >  1. Has this been done before? Yes / No? Any comments?
>>>> >
>>>> Yes.
>>>> As you may know, handling logs in a proper way is not an easy task.
>>>> There are some serious questions to consider, like the following.
>>>>
>>>> * How long should the logs be preserved?
>>>> * How many logs are written?
>>>> * How fast are the logs written?
>>>> * What's the limit of the network?
>>>> * What's the limit of the remote ES?
>>>> * ...and many more questions
>>>>
>>>> >  2. Is there any way, with the fluentd pods or otherwise, to route
>>>> >     specific Workload / Pod Container logs to an external ES cluster?
>>>> >  3. If not, I'm willing to deploy my own fluentd pods. What do I lose
>>>> >     by excluding the WorkloadA Worker Nodes from the OCP fluentd pods?
>>>> >     For example, I don't want to lose any Operations / OCP related /
>>>> >     Worker Nodes related logs going to the embedded ES cluster; all I
>>>> >     need is to have the Container Logs of WorkloadA go to another ES
>>>> >     cluster.
>>>> >
>>>> Have you looked at the following doc part?
>>>>
>>>> Opensource origin:
>>>> https://docs.openshift.org/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>>>>
>>>> Enterprise:
>>>> https://docs.openshift.com/container-platform/3.7/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>>>>
>>>> As described in the doc, you can send the collected fluentd logs to an
>>>> external ES cluster.
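>>>>
>>>> For the secure variant the same doc section also lists TLS-related
>>>> variables; very roughly (variable names and paths should be checked
>>>> against the doc, and the paths assume the certs come from the mounted
>>>> fluentd secret):
>>>>
>>>>     oc -n logging set env daemonset/logging-fluentd \
>>>>         ES_HOST=external-es.example.com ES_PORT=9200 \
>>>>         ES_CA=/etc/fluent/keys/ca \
>>>>         ES_CLIENT_CERT=/etc/fluent/keys/cert \
>>>>         ES_CLIENT_KEY=/etc/fluent/keys/key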
>>>>
>>>> You can find the source of the openshift logging solution in this repo.
>>>> https://github.com/openshift/origin-aggregated-logging
>>>>
>>>> > Looking forward to hearing from you,
>>>> >
>>>> > Thanks,
>>>> Hth
>>>> Aleks
>>>>
>>>
>>>
>>> _______________________________________________
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>>
>>
>
_______________________________________________
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
