Hello Rich,

I was on PTO yesterday and did not get a chance to run the above commands. But before running them, when I logged into my system I saw that the fluentd pods were up and running. So does it take some time for the fluentd pods to come up once logging is installed?

Today I reinstalled logging and I again see that the fluentd pods are not coming up.

Thanks
kasturi
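
For reference, a quick way to check whether the daemonset has picked the labeled nodes up, assuming the logging components are in the default "logging" project (the pod selector component=fluentd comes from the daemonset description quoted below), is roughly:

oc project logging
oc get nodes -l logging-infra-fluentd=true    # nodes fluentd should be scheduled on
oc get daemonset logging-fluentd              # DESIRED should match the node count above
oc get pods -l component=fluentd -o wide      # the fluentd pods themselves

A daemonset normally schedules its pods within a minute or so of a node matching its selector, so a long delay usually points at the label or the selector rather than at timing.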
On Mon, Nov 19, 2018 at 9:21 PM Rich Megginson <[email protected]> wrote:

> Try unlabeling then relabeling the nodes:
>
> oc label node --all logging-infra-fluentd-
>
> wait a minute
>
> oc label node --all logging-infra-fluentd=true
>
> On 11/19/18 8:44 AM, Kasturi Narra wrote:
> > Hello,
> >
> > Please find replies inline....
> >
> > On Mon, Nov 19, 2018 at 9:12 PM Rich Megginson <[email protected]> wrote:
> >
> > > On 11/19/18 8:32 AM, Kasturi Narra wrote:
> > > Hello Jeff,
> > > Yes, I do have it. Here is the output I have got.
> > >
> > > dhcp46-68.lab.eng.blr.redhat.com   Ready   <none>   6d   v1.9.1+a0ce1bc657
> > > beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=dhcp46-68.lab.eng.blr.redhat.com,logging-infra-fluentd=true,region=infra,registry=enabled,role=node,router=enabled
> > >
> > > oc get daemonset
> >
> > [root@dhcp46-170 ~]# oc get daemonset
> > NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                AGE
> > logging-fluentd   0         0         0       0            0           logging-infra-fluentd=true   3m
> >
> > > oc describe daemonset logging-fluentd
> >
> > [root@dhcp46-170 ~]# oc describe daemonset logging-fluentd
> > Name:           logging-fluentd
> > Selector:       component=fluentd,provider=openshift
> > Node-Selector:  logging-infra-fluentd=true
> > Labels:         component=fluentd
> >                 logging-infra=fluentd
> >                 provider=openshift
> > Annotations:    <none>
> > Desired Number of Nodes Scheduled: 0
> > Current Number of Nodes Scheduled: 0
> > Number of Nodes Scheduled with Up-to-date Pods: 0
> > Number of Nodes Scheduled with Available Pods: 0
> > Number of Nodes Misscheduled: 0
> > Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
> > Pod Template:
> >   Labels:           component=fluentd
> >                     logging-infra=fluentd
> >                     provider=openshift
> >   Service Account:  aggregated-logging-fluentd
> >   Containers:
> >    fluentd-elasticsearch:
> >     Image:  registry.access.redhat.com/openshift3/logging-fluentd:v3.9.43
> >     Port:   <none>
> >     Limits:
> >       memory:  512Mi
> >     Requests:
> >       cpu:     100m
> >       memory:  512Mi
> >     Environment:
> >       K8S_HOST_URL:            https://kubernetes.default.svc.cluster.local
> >       ES_HOST:                 logging-es
> >       ES_PORT:                 9200
> >       ES_CLIENT_CERT:          /etc/fluent/keys/cert
> >       ES_CLIENT_KEY:           /etc/fluent/keys/key
> >       ES_CA:                   /etc/fluent/keys/ca
> >       OPS_HOST:                logging-es
> >       OPS_PORT:                9200
> >       OPS_CLIENT_CERT:         /etc/fluent/keys/ops-cert
> >       OPS_CLIENT_KEY:          /etc/fluent/keys/ops-key
> >       OPS_CA:                  /etc/fluent/keys/ops-ca
> >       JOURNAL_SOURCE:
> >       JOURNAL_READ_FROM_HEAD:
> >       BUFFER_QUEUE_LIMIT:      32
> >       BUFFER_SIZE_LIMIT:       8m
> >       FLUENTD_CPU_LIMIT:       node allocatable (limits.cpu)
> >       FLUENTD_MEMORY_LIMIT:    536870912 (limits.memory)
> >       FILE_BUFFER_LIMIT:       256Mi
> >     Mounts:
> >       /etc/docker from dockerdaemoncfg (ro)
> >       /etc/docker-hostname from dockerhostname (ro)
> >       /etc/fluent/configs.d/user from config (ro)
> >       /etc/fluent/keys from certs (ro)
> >       /etc/localtime from localtime (ro)
> >       /etc/origin/node from originnodecfg (ro)
> >       /etc/sysconfig/docker from dockercfg (ro)
> >       /run/log/journal from runlogjournal (rw)
> >       /var/lib/docker/containers from varlibdockercontainers (ro)
> >       /var/lib/fluentd from filebufferstorage (rw)
> >       /var/log from varlog (rw)
> >   Volumes:
> >    runlogjournal:
> >     Type:          HostPath (bare host directory volume)
> >     Path:          /run/log/journal
> >     HostPathType:
> >    varlog:
> >     Type:          HostPath (bare host directory volume)
> >     Path:          /var/log
> >     HostPathType:
> >    varlibdockercontainers:
> >     Type:          HostPath (bare host directory volume)
> >     Path:          /var/lib/docker/containers
> >     HostPathType:
> >    config:
> >     Type:      ConfigMap (a volume populated by a ConfigMap)
> >     Name:      logging-fluentd
> >     Optional:  false
> >    certs:
> >     Type:        Secret (a volume populated by a Secret)
> >     SecretName:  logging-fluentd
> >     Optional:    false
> >    dockerhostname:
> >     Type:          HostPath (bare host directory volume)
> >     Path:          /etc/hostname
> >     HostPathType:
> >    localtime:
> >     Type:          HostPath (bare host directory volume)
> >     Path:          /etc/localtime
> >     HostPathType:
> >    dockercfg:
> >     Type:          HostPath (bare host directory volume)
> >     Path:          /etc/sysconfig/docker
> >     HostPathType:
> >    originnodecfg:
> >     Type:          HostPath (bare host directory volume)
> >     Path:          /etc/origin/node
> >     HostPathType:
> >    dockerdaemoncfg:
> >     Type:          HostPath (bare host directory volume)
> >     Path:          /etc/docker
> >     HostPathType:
> >    filebufferstorage:
> >     Type:          HostPath (bare host directory volume)
> >     Path:          /var/lib/fluentd
> >     HostPathType:
> > Events:  <none>
> >
> > > Thanks
> > > kasturi
> > >
> > > On Mon, Nov 19, 2018 at 7:16 PM Jeff Cantrill <[email protected]> wrote:
> > >
> > > It doesn't appear you have any fluentd pods, which are responsible for
> > > collecting logs from the other pods. Are your nodes labeled with
> > > 'logging-infra-fluentd=true'?
> > >
> > > On Mon, Nov 19, 2018 at 7:28 AM Kasturi Narra <[email protected]> wrote:
> > >
> > > Hello Everyone,
> > >
> > > I have a setup where I am trying to install logging using OCP 3.9 +
> > > CNS 3.11. I see that the logging pods are up and running, but when I
> > > access the web console I get the error shown at [1]. I tried the
> > > solution provided at [2] but am having no luck. Can someone please
> > > help me resolve this issue?
> > >
> > > [root@dhcp46-170 ~]# oc version
> > > oc v3.9.43
> > > kubernetes v1.9.1+a0ce1bc657
> > > features: Basic-Auth GSSAPI Kerberos SPNEGO
> > >
> > > Server https://dhcp46-170.lab.eng.blr.redhat.com:8443
> > > openshift v3.9.43
> > > kubernetes v1.9.1+a0ce1bc657
> > >
> > > [root@dhcp46-170 ~]# oc get pods
> > > NAME                                      READY     STATUS    RESTARTS   AGE
> > > logging-curator-1-bgjbj                   1/1       Running   0          2h
> > > logging-es-data-master-5gjnm57x-2-5vjq6   2/2       Running   0          2h
> > > logging-kibana-1-872dn                    2/2       Running   0          2h
> > >
> > > [1] Discover: [exception] The index returned an empty result. You can
> > > use the Time Picker to change the time filter or select a higher time
> > > interval
> > > [2] https://access.redhat.com/solutions/3352681
> > >
> > > Thanks
> > > kasturi
> > >
> > > --
> > > Jeff Cantrill
> > > Senior Software Engineer, Red Hat Engineering
> > > OpenShift Logging
> > > Red Hat, Inc.
> > > Office: 703-748-4420 | 866-546-8970 ext. 8162420
> > > [email protected]
> > > http://www.redhat.com
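
If unlabeling and relabeling as suggested above still leaves DESIRED at 0, it is worth confirming that the label on the nodes exactly matches the daemonset's node selector; something along these lines, again assuming the default "logging" project:

oc get nodes --show-labels | grep logging-infra-fluentd
oc get daemonset logging-fluentd -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}'

Both should show logging-infra-fluentd set to true; any mismatch between the two keeps the desired pod count at 0.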
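
The Kibana error at [1] ("The index returned an empty result") is consistent with no fluentd pods running, since nothing is shipping logs into Elasticsearch yet. Once fluentd pods are up, one way to confirm that indices are being created is to query Elasticsearch from inside its pod. A sketch, using the pod name from the listing above (it will differ after a reinstall) and the admin cert paths typically mounted into the 3.x logging Elasticsearch pod:

oc exec logging-es-data-master-5gjnm57x-2-5vjq6 -c elasticsearch -- \
  curl -s --cacert /etc/elasticsearch/secret/admin-ca \
       --cert /etc/elasticsearch/secret/admin-cert \
       --key /etc/elasticsearch/secret/admin-key \
       'https://localhost:9200/_cat/indices?v'

Project (project.*) and operations (.operations.*) indices should start appearing shortly after fluentd begins shipping logs.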
_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
