Re: Audit logging

2020-07-09 Thread Amanti Lulo
Sorry, the link got formatted wrong. This is the correct one:
https://github.com/openshift-examples/web/blob/e6f30ff11f5395265753a8537b7430c7926c3e88/content/openshift-3/efk-auditlog.md#installation-with-openshift-on-rhel

On Thu, Jul 9, 2020, 5:27 PM Amanti Lulo  wrote:

> Hello everybody,
>
> I am trying to enable audit logging for OpenShift. I have already changed
> the master-config.yaml and restarted the atomic-openshift service. The
> problem is that I cannot view the logs in Kibana.
>
> I have followed these tutorials: openshift-examples/web but I have had no
> success. Any suggestions are appreciated.
>
> Thank you
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Audit logging

2020-07-09 Thread Amanti Lulo
Hello everybody,

I am trying to enable audit logging for OpenShift. I have already changed
the master-config.yaml and restarted the atomic-openshift service. The
problem is that I cannot view the logs in Kibana.

I have followed these tutorials: openshift-examples/web but I have had no
success. Any suggestions are appreciated.

Thank you
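
For reference, the kind of stanza involved in /etc/origin/master/master-config.yaml on
OpenShift 3.x looks roughly like this (a sketch only; the audit file path and the
retention values are placeholders, not taken from this thread):

    auditConfig:
      enabled: true
      auditFilePath: /var/lib/origin/audit-ocp.log
      maximumFileRetentionDays: 10
      maximumFileSizeMegabytes: 10
      maximumRetainedFiles: 10

After editing, the master service has to be restarted for the change to take effect,
as was already done here.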
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


logging for an openshift service

2020-03-12 Thread Just Marvin
Hi,

Is there a way I can enable logging or otherwise determine which pod
(of all the ones that its selector selects) a service is sending requests
to?

Thanks,
Marvin
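
A rough way to see which pods a service can currently route to (a sketch; "myservice"
is a placeholder name, and by default there is no per-request log of which endpoint
was picked):

    oc describe svc myservice          # the Endpoints line lists the pod IPs the selector matches
    oc get endpoints myservice -o yaml
    oc get pods -o wide                # map those IPs back to pod names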
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: The cluster-logging pods (Elasticsearch, Kibana, Fluentd) don't start - Openshift 4.1

2019-11-07 Thread Jeff Cantrill
On Wed, Nov 6, 2019 at 6:48 PM Full Name  wrote:

> Thank you Rich for your prompt reply.
>
> After viewing  the
> "manifests/4.2/cluster-logging.v4.2.0.clusterserviceversion.yaml " on the
> cluster-logging-operator pod, I confirm that the added (minKubeVersion:
> 1.16.0) line in GITHUB  is missing in the manifest file on the CLO pod on
> my Cluster.
>

The minKubeVersion was corrected for 4.2 in:
https://github.com/openshift/cluster-logging-operator/pull/267
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: The cluster-logging pods (Elasticsearch, Kibana, Fluentd) don't start - Openshift 4.1

2019-11-06 Thread Rich Megginson

On 11/6/19 4:51 PM, Full Name wrote:

Thank you Rich for your prompt reply.

After viewing the
"manifests/4.2/cluster-logging.v4.2.0.clusterserviceversion.yaml" on the
cluster-logging-operator pod, I confirm that the (minKubeVersion: 1.16.0)
line added in GitHub is missing from the manifest file on the CLO pod on my cluster.

I tried to edit the manifest file through "oc rsh" and vi, but the file is read-only
and I can't get root access to this pod.

What is the proper way to edit the manifest YAML file to add the missing
minKubeVersion?


You can't edit the file, as you have found.

I'm not really sure how to modify this in a running cluster.  You could 
do `oc -n openshift-logging edit csv clusterlogging.v4.1.0`


oc -n openshift-logging get csv

to find your clusterlogging csv, then

oc -n openshift-logging edit $thecsv

and change the minKubeVersion there, but if that doesn't trigger a 
redeployment, I'm not sure how to do that.
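
Putting that together (a sketch; the CSV name below is only a placeholder for whatever
the get command actually returns on your cluster):

    oc -n openshift-logging get csv
    oc -n openshift-logging edit csv clusterlogging.v4.2.0   # adjust spec.minKubeVersion, then save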




Thank you.


-Original Message-
From: "Rich Megginson" [rmegg...@redhat.com]
Date: 11/06/2019 01:21 PM
To: users@lists.openshift.redhat.com
Subject: Re: The cluster-logging pods (Elasticsearch, Kibana, Fluentd) don't
start - Openshift 4.1

are you running into https://bugzilla.redhat.com/show_bug.cgi?id=1766343 ?

On 11/6/19 9:19 AM, Full Name wrote:

Hi all,

I'm trying to deploy logging on Openshift cluster 4.1.21 using the procedure 
described in the following link 
https://docs.openshift.com/container-platform/4.1/logging/efk-logging.html.
Everything is going fine but the logging pods don't want to start and stay at 
pending state.  I have the following error (0/7 nodes are available: 7 node(s) 
didn't match node selector) for all the 5 logging pods (2 x elasticsearch,  2 x 
kibana,  1x curator).

The logging pods don't start  with or without nodeSelector in the 
Cluster-Logging instance.
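
The "didn't match node selector" condition usually means that no node carries the
node-role.kubernetes.io/infra: '' (and kubernetes.io/os: linux) labels requested by the
nodeSelector in the instance YAML below. A quick way to check (a sketch only; the node
name is a placeholder):

    oc get nodes -L node-role.kubernetes.io/infra -L kubernetes.io/os
    oc label node worker-1.example.com node-role.kubernetes.io/infra=''

Once at least one node matches the selector, the scheduler can place the Elasticsearch,
Kibana and curator pods.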

-------
the Cluster-Logging instance YAML file:
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  creationTimestamp: '2019-11-04T21:20:57Z'
  generation: 37
  name: instance
  namespace: openshift-logging
  resourceVersion: '569806'
  selfLink: >-
    /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance
  uid: fdc0e971-ff48-11e9-a3f8-0af5a0903ee4
spec:
  collection:
    logs:
      fluentd:
        nodeSelector:
          kubernetes.io/os: linux
          node-role.kubernetes.io/infra: ''
        resources: null
      rsyslog:
        resources: null
      type: fluentd
  curation:
    curator:
      nodeSelector:
        kubernetes.io/os: linux
        node-role.kubernetes.io/infra: ''
      resources: null
      schedule: 30 3 * * *
    type: curator
  logStore:
    elasticsearch:
      nodeCount: 2
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      redundancyPolicy: SingleRedundancy
      resources:
        requests:
          cpu: 500m
          memory: 4Gi
      storage:
        size: 20G
        storageClassName: gp2
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector:
        kubernetes.io/os: linux
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
status:
  collection:
    logs:
      fluentdStatus:
        daemonSet: fluentd
        nodes: {}
        pods:
          failed: []
          notReady: []
          ready: []
      rsyslogStatus:
        Nodes: null
        daemonSet: ''
        pods: null
  curation:
    curatorStatus:
    - clusterCondition:
        curator-1572924600-pwbf8:
        - lastTransitionTime: '2019-11-05T03:30:01Z'
          message: '0/7 nodes are available: 7 node(s) didn''t match node selector.'
          reason: Unschedulable
          status: 'True'
          type: Unschedulable
      cronJobs: curator
      schedules: 30 3 * * *
      suspended: false
  logStore:
    elasticsearchStatus:
    - ShardAllocationEnabled: shard allocation unknown
      cluster:
        numDataNodes: 0
        initializingShards: 0
        numNodes: 0
        activePrimaryShards: 0
        status: cluster health unknown
        pendingTasks: 0
        relocatingShards: 0
        activeShards: 0
        unassignedShards: 0
      clusterName: elasticsearch
      nodeConditions:
        elasticsearch-cdm-wgsf9ygw-1:
        - lastTransitionTime: '2019-11-04T22:33:32Z'
          message: '0/7 nodes are available: 7 node(s) didn''t match node selector.'
          reason: Unschedulable
          status: 'True'
          type: Unschedulable
        elasticsearch-cdm-wgsf9ygw-2:
        - last

Re: The cluster-logging pods (Elasticsearch, Kibana, Fluentd) don't start - Openshift 4.1

2019-11-06 Thread Full Name
Thank you Rich for your prompt reply.

After viewing the
"manifests/4.2/cluster-logging.v4.2.0.clusterserviceversion.yaml" on the
cluster-logging-operator pod, I confirm that the (minKubeVersion: 1.16.0)
line added in GitHub is missing from the manifest file on the CLO pod on my cluster.

I tried to edit the manifest file through "oc rsh" and vi, but the file is read-only
and I can't get root access to this pod.

What is the proper way to edit the manifest YAML file to add the missing
minKubeVersion?

Thank you.


-Original Message-
From: "Rich Megginson" [rmegg...@redhat.com]
Date: 11/06/2019 01:21 PM
To: users@lists.openshift.redhat.com
Subject: Re: The cluster-logging pods (Elasticsearch, Kibana, Fluentd) don't
start - Openshift 4.1

are you running into https://bugzilla.redhat.com/show_bug.cgi?id=1766343 ?

On 11/6/19 9:19 AM, Full Name wrote:
> Hi all,
> 
> I'm trying to deploy logging on Openshift cluster 4.1.21 using the procedure 
> described in the following link 
> https://docs.openshift.com/container-platform/4.1/logging/efk-logging.html.
> Everything is going fine but the logging pods don't want to start and stay at 
> pending state.  I have the following error (0/7 nodes are available: 7 
> node(s) didn't match node selector) for all the 5 logging pods (2 x 
> elasticsearch,  2 x kibana,  1x curator).
> 
> The logging pods don't start  with or without nodeSelector in the 
> Cluster-Logging instance.
> 
> -------
> the Cluster-Logging instance YAML file:
> ---
> apiVersion: logging.openshift.io/v1
> kind: ClusterLogging
> metadata:
>   creationTimestamp: '2019-11-04T21:20:57Z'
>   generation: 37
>   name: instance
>   namespace: openshift-logging
>   resourceVersion: '569806'
>   selfLink: >-
>     /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance
>   uid: fdc0e971-ff48-11e9-a3f8-0af5a0903ee4
> spec:
>   collection:
>     logs:
>       fluentd:
>         nodeSelector:
>           kubernetes.io/os: linux
>           node-role.kubernetes.io/infra: ''
>         resources: null
>       rsyslog:
>         resources: null
>       type: fluentd
>   curation:
>     curator:
>       nodeSelector:
>         kubernetes.io/os: linux
>         node-role.kubernetes.io/infra: ''
>       resources: null
>       schedule: 30 3 * * *
>     type: curator
>   logStore:
>     elasticsearch:
>       nodeCount: 2
>       nodeSelector:
>         node-role.kubernetes.io/infra: ''
>       redundancyPolicy: SingleRedundancy
>       resources:
>         requests:
>           cpu: 500m
>           memory: 4Gi
>       storage:
>         size: 20G
>         storageClassName: gp2
>     type: elasticsearch
>   managementState: Managed
>   visualization:
>     kibana:
>       nodeSelector:
>         kubernetes.io/os: linux
>         node-role.kubernetes.io/infra: ''
>       proxy:
>         resources: null
>       replicas: 1
>       resources: null
>     type: kibana
> status:
>   collection:
>     logs:
>       fluentdStatus:
>         daemonSet: fluentd
>         nodes: {}
>         pods:
>           failed: []
>           notReady: []
>           ready: []
>       rsyslogStatus:
>         Nodes: null
>         daemonSet: ''
>         pods: null
>   curation:
>     curatorStatus:
>     - clusterCondition:
>         curator-1572924600-pwbf8:
>         - lastTransitionTime: '2019-11-05T03:30:01Z'
>           message: '0/7 nodes are available: 7 node(s) didn''t match node selector.'
>           reason: Unschedulable
>           status: 'True'
>           type: Unschedulable
>       cronJobs: curator
>       schedules: 30 3 * * *
>       suspended: false
>   logStore:
>     elasticsearchStatus:
>     - ShardAllocationEnabled: shard allocation unknown
>       cluster:
>         numDataNodes: 0
>         initializingShards: 0
>         numNodes: 0
>         activePrimaryShards: 0
>         status: cluster health unknown
>         pendingTasks: 0
>         relocatingShards: 0
>         activeShards: 0
>         unassignedShards: 0
>       clusterName: elasticsearch
>       nodeConditions:
>         elasticsearch-cdm-wgsf9ygw-1:
>         - lastTransitionTime: '2019-11-04T22:33:32Z'
>           message: '0/7 nodes are available: 7 node(s) didn''t match node selector.'
>           reason: Unschedulable
>           status: 'True'
>           type: Unsch

Re: The cluster-logging pods (Elasticsearch, Kibana, Fluentd) don't start - Openshift 4.1

2019-11-06 Thread Rich Megginson

are you running into https://bugzilla.redhat.com/show_bug.cgi?id=1766343 ?

On 11/6/19 9:19 AM, Full Name wrote:

Hi all,

I'm trying to deploy logging on Openshift cluster 4.1.21 using the procedure 
described in the following link 
https://docs.openshift.com/container-platform/4.1/logging/efk-logging.html.
Everything is going fine but the logging pods don't want to start and stay at 
pending state.  I have the following error (0/7 nodes are available: 7 node(s) 
didn't match node selector) for all the 5 logging pods (2 x elasticsearch,  2 x 
kibana,  1x curator).

The logging pods don't start  with or without nodeSelector in the 
Cluster-Logging instance.

---
the Cluster-Logging instance YAML file:
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  creationTimestamp: '2019-11-04T21:20:57Z'
  generation: 37
  name: instance
  namespace: openshift-logging
  resourceVersion: '569806'
  selfLink: >-
    /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance
  uid: fdc0e971-ff48-11e9-a3f8-0af5a0903ee4
spec:
  collection:
    logs:
      fluentd:
        nodeSelector:
          kubernetes.io/os: linux
          node-role.kubernetes.io/infra: ''
        resources: null
      rsyslog:
        resources: null
      type: fluentd
  curation:
    curator:
      nodeSelector:
        kubernetes.io/os: linux
        node-role.kubernetes.io/infra: ''
      resources: null
      schedule: 30 3 * * *
    type: curator
  logStore:
    elasticsearch:
      nodeCount: 2
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      redundancyPolicy: SingleRedundancy
      resources:
        requests:
          cpu: 500m
          memory: 4Gi
      storage:
        size: 20G
        storageClassName: gp2
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector:
        kubernetes.io/os: linux
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
status:
  collection:
    logs:
      fluentdStatus:
        daemonSet: fluentd
        nodes: {}
        pods:
          failed: []
          notReady: []
          ready: []
      rsyslogStatus:
        Nodes: null
        daemonSet: ''
        pods: null
  curation:
    curatorStatus:
    - clusterCondition:
        curator-1572924600-pwbf8:
        - lastTransitionTime: '2019-11-05T03:30:01Z'
          message: '0/7 nodes are available: 7 node(s) didn''t match node selector.'
          reason: Unschedulable
          status: 'True'
          type: Unschedulable
      cronJobs: curator
      schedules: 30 3 * * *
      suspended: false
  logStore:
    elasticsearchStatus:
    - ShardAllocationEnabled: shard allocation unknown
      cluster:
        numDataNodes: 0
        initializingShards: 0
        numNodes: 0
        activePrimaryShards: 0
        status: cluster health unknown
        pendingTasks: 0
        relocatingShards: 0
        activeShards: 0
        unassignedShards: 0
      clusterName: elasticsearch
      nodeConditions:
        elasticsearch-cdm-wgsf9ygw-1:
        - lastTransitionTime: '2019-11-04T22:33:32Z'
          message: '0/7 nodes are available: 7 node(s) didn''t match node selector.'
          reason: Unschedulable
          status: 'True'
          type: Unschedulable
        elasticsearch-cdm-wgsf9ygw-2:
        - lastTransitionTime: '2019-11-04T22:33:33Z'
          message: '0/7 nodes are available: 7 node(s) didn''t match node selector.'
          reason: Unschedulable
          status: 'True'
          type: Unschedulable
      nodeCount: 2
      pods:
        client:
          failed: []
          notReady:
          - elasticsearch-cdm-wgsf9ygw-1-6f49f466dc-57dbk
          - elasticsearch-cdm-wgsf9ygw-2-577779-2z4ph
          ready: []
        data:
          failed: []
          notReady:
          - elasticsearch-cdm-wgsf9ygw-1-6f49f466dc-57dbk
          - elasticsearch-cdm-wgsf9ygw-2-577779-2z4ph
          ready: []
        master:
          failed: []
          notReady:
          - elasticsearch-cdm-wgsf9ygw-1-6f49f466dc-57dbk
          - elasticsearch-cdm-wgsf9ygw-2-577779-2z4ph
          ready: []
  visualization:
    kibanaStatus:
    - clusterCondition:
        kibana-99dc6bb95-5848h:
        - lastTransitionTime: '2019-11-04T22:00:49Z'
          message: '0/7 nodes are available: 7 node(s) didn''t match node selector.'
          reason: Unschedulable
          status: 'True'
          type: Unschedulable
        kibana-fb96dc875-wk4w5:
        - lastTransitionTime: '2

Re: OCP4 - Logging in gives me api list on Chrome and Firefox

2019-05-02 Thread Sam Padgett
This sounds like https://bugzilla.redhat.com/show_bug.cgi?id=1686476 which
was recently fixed. You should be able to wait a minute or two and refresh
the page to work around the problem.

On Thu, May 2, 2019 at 3:17 PM Marc Boorshtein 
wrote:

> This is really odd.  When I try to login to OCP4 from my mac via chrome or
> firefox once I get logged in with kubeadmin I just get a blank screen that
> shows the list of apis:
>
> {
>   "paths": [
> "/apis",
> "/healthz",
> "/healthz/log",
> "/healthz/ping",
>
> "/healthz/poststarthook/oauth.openshift.io-startoauthclientsbootstrapping",
> "/metrics",
> "/readyz",
> "/readyz/log",
> "/readyz/ping",
>
> "/readyz/poststarthook/oauth.openshift.io-startoauthclientsbootstrapping",
> "/readyz/terminating"
>   ]
> }
>
> here's the url in the bar -
> https://console-openshift-console.apps.ocp47.tremolo.dev/auth/callback?code=
> 
> ...
>
> The only browser that works is safari on mac.  Whats really odd is when I
> add an openid connect provider it redirects me to my identity provider, but
> shows me the above api list.  Whats really odd here is that the url bar is
> pointing to my idp (openunison) hosted in openshift:
>
>
> https://orchestra.apps.ocp47.tremolo.dev/auth/idp/OpenShiftIdP/auth?client_id=openshift_uri=https%3A%2F%2Fopenshift-authentication-openshift-authentication.apps.ocp47.tremolo.dev%2Foauth2callback%2Fopenunison_type=code=openid=Y3NyZj1keUtqRndhY2ItNFR6M1B5a1VXVHZiOXFJYjk3clRhbDI3WXVrYl9lWjZZJnRoZW49JTJGb2F1dGglMkZhdXRob3JpemUlM0ZjbGllbnRfaWQlM0Rjb25zb2xlJTI2aWRwJTNEb3BlbnVuaXNvbiUyNnJlZGlyZWN0X3VyaSUzRGh0dHBzJTI1M0ElMjUyRiUyNTJGY29uc29sZS1vcGVuc2hpZnQtY29uc29sZS5hcHBzLm9jcDQ3LnRyZW1vbG8uZGV2JTI1MkZhdXRoJTI1MkZjYWxsYmFjayUyNnJlc3BvbnNlX3R5cGUlM0Rjb2RlJTI2c2NvcGUlM0R1c2VyJTI1M0FmdWxsJTI2c3RhdGUlM0Q1NmU5ZmUyOQ%3D%3D
>
> This is REALLY odd.  Why is ocp4 generating a path list on my app?
>
> Thanks
> Marc
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


OCP4 - Logging in gives me api list on Chrome and Firefox

2019-05-02 Thread Marc Boorshtein
This is really odd.  When I try to login to OCP4 from my mac via chrome or
firefox once I get logged in with kubeadmin I just get a blank screen that
shows the list of apis:

{
  "paths": [
"/apis",
"/healthz",
"/healthz/log",
"/healthz/ping",

"/healthz/poststarthook/oauth.openshift.io-startoauthclientsbootstrapping",
"/metrics",
"/readyz",
"/readyz/log",
"/readyz/ping",

"/readyz/poststarthook/oauth.openshift.io-startoauthclientsbootstrapping",
"/readyz/terminating"
  ]
}

here's the url in the bar -
https://console-openshift-console.apps.ocp47.tremolo.dev/auth/callback?code=

...

The only browser that works is safari on mac.  Whats really odd is when I
add an openid connect provider it redirects me to my identity provider, but
shows me the above api list.  Whats really odd here is that the url bar is
pointing to my idp (openunison) hosted in openshift:

https://orchestra.apps.ocp47.tremolo.dev/auth/idp/OpenShiftIdP/auth?client_id=openshift_uri=https%3A%2F%2Fopenshift-authentication-openshift-authentication.apps.ocp47.tremolo.dev%2Foauth2callback%2Fopenunison_type=code=openid=Y3NyZj1keUtqRndhY2ItNFR6M1B5a1VXVHZiOXFJYjk3clRhbDI3WXVrYl9lWjZZJnRoZW49JTJGb2F1dGglMkZhdXRob3JpemUlM0ZjbGllbnRfaWQlM0Rjb25zb2xlJTI2aWRwJTNEb3BlbnVuaXNvbiUyNnJlZGlyZWN0X3VyaSUzRGh0dHBzJTI1M0ElMjUyRiUyNTJGY29uc29sZS1vcGVuc2hpZnQtY29uc29sZS5hcHBzLm9jcDQ3LnRyZW1vbG8uZGV2JTI1MkZhdXRoJTI1MkZjYWxsYmFjayUyNnJlc3BvbnNlX3R5cGUlM0Rjb2RlJTI2c2NvcGUlM0R1c2VyJTI1M0FmdWxsJTI2c3RhdGUlM0Q1NmU5ZmUyOQ%3D%3D

This is REALLY odd.  Why is ocp4 generating a path list on my app?

Thanks
Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Redirect logs of a namespace to ops logging

2019-04-29 Thread Rich Megginson

You might try setting the env var OCP_OPERATIONS_PROJECTS in the fluentd 
daemonset:

oc set env daemonset/logging-fluentd OCP_OPERATIONS_PROJECTS="default openshift openshift- kube-"

https://github.com/openshift/origin-aggregated-logging/blob/release-3.10/fluentd/run.sh
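
To confirm the variable landed on the daemonset afterwards (a sketch; adjust the
namespace to wherever logging is installed):

    oc -n logging set env daemonset/logging-fluentd --list | grep OCP_OPERATIONS_PROJECTS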

On 4/29/19 7:48 AM, bahhooo wrote:

Hi all,

In a setup (latest OCP 3.10) where logging-ops is enabled, the logs of the
kube-system namespace are not pushed into elasticsearch-ops. The namespace
is annotated with

Annotations: openshift.io/logging.data.prefix=.operations
             openshift.io/logging.ui.hostname=kibana-ops.$URL

I know that logging.ui.hostname is for the "View archive" link. I thought
that logging.data.prefix selected the ops cluster, but as it turns out it only
sets the prefix of the index.
I can see the kube-system indices in the application-logging Elasticsearch
instance. However, I cannot see the logs in Kibana with
kubernetes.namespace_name:"kube-system", having selected either .all or
.operations.*


Is there a way of moving these logs (or any other namespace's logs) properly to 
ops logging?


Best,
Baho

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Redirect logs of a namespace to ops logging

2019-04-29 Thread bahhooo
Hi all,

In a setup (latest OCP 3.10) where logging-ops is enabled, the logs of the
kube-system namespace are not pushed into elasticsearch-ops. The
namespace is annotated with

Annotations: openshift.io/logging.data.prefix=.operations
             openshift.io/logging.ui.hostname=kibana-ops.$URL

I know that logging.ui.hostname is for the "View archive" link. I
thought that logging.data.prefix selected the ops cluster, but as it turns out
it only sets the prefix of the index.
I can see the kube-system indices in the application-logging Elasticsearch
instance. However, I cannot see the logs in Kibana with
kubernetes.namespace_name:"kube-system", having selected either .all or
.operations.*

Is there a way of moving these logs (or any other namespace's logs)
properly to ops logging?


Best,
Baho
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Logging of network policy events

2018-11-21 Thread Lars Milland
Hi,

Is it possible to get OpenShift 3.10 to produce log events of its allow and
deny decisions on network traffic to and from pods internally in OpenShift,
and when allowing or denying egress traffic? The log would have to show the
originating source IP and pod and then the target IP and target pod for
internal traffic, and similarly for external traffic. I am looking at complying
with log policies at my company to keep an audit log of network traffic
decisions. So what is sought is the result of the resolving logic of the
NetworkPolicy and EgressNetworkPolicy objects, logged to Elasticsearch or
similar log targets. If this can be solved by logging IPTables or flow rule
activity, that might also be useful. Anybody know how such a log can be
produced?

Best Regards
Lars Milland
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Regarding Logging

2018-11-21 Thread Rich Megginson

On 11/21/18 12:28 AM, Kasturi Narra wrote:

Hello Rich,

   I was on PTO yesterday and did not get chance to run the above commands. But before running these when i logged into my system i see that fluentd pods are up and running. So does it take 
some time for the fluentd pods to come up once logging is installed ?



yes




   Today i did re installation of my logging and i again see fluentd pods not 
being up again.



I guess it may take a while for fluentd to come up, but not sure why it would 
take more than a minute or two.

Look for /var/log/*.pos and /var/lib/fluentd/* for evidence that fluentd is up 
and doing something.




Thanks
kasturi

On Mon, Nov 19, 2018 at 9:21 PM Rich Megginson mailto:rmegg...@redhat.com>> wrote:

Try unlabeling then relabeling the nodes:

oc label node --all logging-infra-fluentd-

wait a minute

oc label node --all logging-infra-fluentd=true

On 11/19/18 8:44 AM, Kasturi Narra wrote:
> Hello,
>
>   Please find replies line
>
> On Mon, Nov 19, 2018 at 9:12 PM Rich Megginson mailto:rmegg...@redhat.com> <mailto:rmegg...@redhat.com 
<mailto:rmegg...@redhat.com>>> wrote:
>
>     On 11/19/18 8:32 AM, Kasturi Narra wrote:
>     > Hello Jeff,
>     >    yes , i do have it. Here is the output i have got.
>     >
>     > dhcp46-68.lab.eng.blr.redhat.com    Ready    6d    v1.9.1+a0ce1bc657
>     > beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=dhcp46-68.lab.eng.blr.redhat.com,logging-infra-fluentd=true,region=infra,registry=enabled,role=node,router=enabled
>     >
>
>     oc get daemonset
>
>
    > [root@dhcp46-170 ~]# oc get daemonset
> NAME              DESIRED   CURRENT   READY UP-TO-DATE AVAILABLE   NODE 
SELECTOR  AGE
> logging-fluentd   0         0         0         0  0           
logging-infra-fluentd=true   3m
    >
    >
>     oc describe daemonset logging-fluentd
>
>
> [root@dhcp46-170 ~]# oc describe daemonset logging-fluentd
> Name:           logging-fluentd
> Selector:       component=fluentd,provider=openshift
> Node-Selector:  logging-infra-fluentd=true
> Labels:         component=fluentd
>                 logging-infra=fluentd
>                 provider=openshift
> Annotations:    
> Desired Number of Nodes Scheduled: 0
> Current Number of Nodes Scheduled: 0
> Number of Nodes Scheduled with Up-to-date Pods: 0
> Number of Nodes Scheduled with Available Pods: 0
> Number of Nodes Misscheduled: 0
> Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
> Pod Template:
>   Labels:           component=fluentd
>                     logging-infra=fluentd
>                     provider=openshift
>   Service Account:  aggregated-logging-fluentd
>   Containers:
    >    fluentd-elasticsearch:
>     Image: registry.access.redhat.com/openshift3/logging-fluentd:v3.9.43 
<http://registry.access.redhat.com/openshift3/logging-fluentd:v3.9.43>
<http://registry.access.redhat.com/openshift3/logging-fluentd:v3.9.43>
>     Port:   
>     Limits:
>       memory:  512Mi
>     Requests:
>       cpu:     100m
>       memory:  512Mi
>     Environment:
>       K8S_HOST_URL: https://kubernetes.default.svc.cluster.local
>       ES_HOST:                 logging-es
>       ES_PORT:                 9200
>       ES_CLIENT_CERT:          /etc/fluent/keys/cert
>       ES_CLIENT_KEY:           /etc/fluent/keys/key
>       ES_CA:                   /etc/fluent/keys/ca
>       OPS_HOST:                logging-es
>       OPS_PORT:                9200
>       OPS_CLIENT_CERT: /etc/fluent/keys/ops-cert
>       OPS_CLIENT_KEY:  /etc/fluent/keys/ops-key
>       OPS_CA:  /etc/fluent/keys/ops-ca

Re: Regarding Logging

2018-11-20 Thread Kasturi Narra
Hello Rich,

   I was on PTO yesterday and did not get chance to run the above commands.
But before running these when i logged into my system i see that fluentd
pods are up and running. So does it take some time for the fluentd pods to
come up once logging is installed ?

   Today i did re installation of my logging and i again see fluentd pods
not being up again.

Thanks
kasturi

On Mon, Nov 19, 2018 at 9:21 PM Rich Megginson  wrote:

> Try unlabeling then relabeling the nodes:
>
> oc label node --all logging-infra-fluentd-
>
> wait a minute
>
> oc label node --all logging-infra-fluentd=true
>
> On 11/19/18 8:44 AM, Kasturi Narra wrote:
> > Hello,
> >
> >   Please find replies line
> >
> > On Mon, Nov 19, 2018 at 9:12 PM Rich Megginson  <mailto:rmegg...@redhat.com>> wrote:
> >
> > On 11/19/18 8:32 AM, Kasturi Narra wrote:
> > > Hello Jeff,
> > >yes , i do have it. Here is the output i have got.
> > >
> > > dhcp46-68.lab.eng.blr.redhat.com    Ready    6d    v1.9.1+a0ce1bc657
> > > beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=dhcp46-68.lab.eng.blr.redhat.com,logging-infra-fluentd=true,region=infra,registry=enabled,role=node,router=enabled
> > >
> >
> > oc get daemonset
> >
> >
> > [root@dhcp46-170 ~]# oc get daemonset
> > NAME  DESIRED   CURRENT   READY UP-TO-DATE   AVAILABLE
> NODE SELECTOR  AGE
> > logging-fluentd   0 0 0 00
> logging-infra-fluentd=true   3m
> >
> >
> > oc describe daemonset logging-fluentd
> >
> >
> > [root@dhcp46-170 ~]# oc describe daemonset logging-fluentd
> > Name:   logging-fluentd
> > Selector:   component=fluentd,provider=openshift
> > Node-Selector:  logging-infra-fluentd=true
> > Labels: component=fluentd
> > logging-infra=fluentd
> > provider=openshift
> > Annotations:
> > Desired Number of Nodes Scheduled: 0
> > Current Number of Nodes Scheduled: 0
> > Number of Nodes Scheduled with Up-to-date Pods: 0
> > Number of Nodes Scheduled with Available Pods: 0
> > Number of Nodes Misscheduled: 0
> > Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
> > Pod Template:
> >   Labels:   component=fluentd
> > logging-infra=fluentd
> > provider=openshift
> >   Service Account:  aggregated-logging-fluentd
> >   Containers:
> >fluentd-elasticsearch:
> > Image: registry.access.redhat.com/openshift3/logging-fluentd:v3.9.43
> <http://registry.access.redhat.com/openshift3/logging-fluentd:v3.9.43>
> > Port:   
> > Limits:
> >   memory:  512Mi
> > Requests:
> >   cpu: 100m
> >   memory:  512Mi
> > Environment:
> >   K8S_HOST_URL: https://kubernetes.default.svc.cluster.local
> >   ES_HOST: logging-es
> >   ES_PORT: 9200
> >   ES_CLIENT_CERT:  /etc/fluent/keys/cert
> >   ES_CLIENT_KEY:   /etc/fluent/keys/key
> >   ES_CA:   /etc/fluent/keys/ca
> >   OPS_HOST:logging-es
> >   OPS_PORT:9200
> >   OPS_CLIENT_CERT: /etc/fluent/keys/ops-cert
> >   OPS_CLIENT_KEY:  /etc/fluent/keys/ops-key
> >   OPS_CA:  /etc/fluent/keys/ops-ca
> >   JOURNAL_SOURCE:
> >   JOURNAL_READ_FROM_HEAD:
> >   BUFFER_QUEUE_LIMIT:  32
> >   BUFFER_SIZE_LIMIT:   8m
> >   FLUENTD_CPU_LIMIT:   node allocatable (limits.cpu)
> >   FLUENTD_MEMORY_LIMIT:536870912 (limits.memory)
> >   FILE_BUFFER_LIMIT:   256Mi
> > Mounts:
> >   /etc/docker from dockerdaemoncfg (ro)
> >   /etc/docker-hostname from dockerhostname (ro)
> >   /etc/fluent/configs.d/user from config (ro)
> >   /etc/fluent/keys from certs (ro)
> >  

Re: Regarding Logging

2018-11-19 Thread Rich Megginson

Try unlabeling then relabeling the nodes:

oc label node --all logging-infra-fluentd-

wait a minute

oc label node --all logging-infra-fluentd=true
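
Afterwards you can verify that the label is back and that the daemonset has scheduled
a collector on each labeled node (a sketch; run it from the logging project):

    oc get nodes -L logging-infra-fluentd
    oc get daemonset logging-fluentd
    oc get pods -l component=fluentd -o wide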

On 11/19/18 8:44 AM, Kasturi Narra wrote:

Hello,

  Please find replies line

On Mon, Nov 19, 2018 at 9:12 PM Rich Megginson mailto:rmegg...@redhat.com>> wrote:

On 11/19/18 8:32 AM, Kasturi Narra wrote:
> Hello Jeff,
>    yes , i do have it. Here is the output i have got.
>
> dhcp46-68.lab.eng.blr.redhat.com    Ready    6d    v1.9.1+a0ce1bc657
> beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=dhcp46-68.lab.eng.blr.redhat.com,logging-infra-fluentd=true,region=infra,registry=enabled,role=node,router=enabled
>

oc get daemonset


[root@dhcp46-170 ~]# oc get daemonset
NAME              DESIRED   CURRENT   READY UP-TO-DATE   AVAILABLE   NODE 
SELECTOR  AGE
logging-fluentd   0         0         0         0        0           
logging-infra-fluentd=true   3m


oc describe daemonset logging-fluentd


[root@dhcp46-170 ~]# oc describe daemonset logging-fluentd
Name:           logging-fluentd
Selector:       component=fluentd,provider=openshift
Node-Selector:  logging-infra-fluentd=true
Labels:         component=fluentd
                logging-infra=fluentd
                provider=openshift
Annotations:    
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           component=fluentd
                    logging-infra=fluentd
                    provider=openshift
  Service Account:  aggregated-logging-fluentd
  Containers:
   fluentd-elasticsearch:
    Image: registry.access.redhat.com/openshift3/logging-fluentd:v3.9.43 
<http://registry.access.redhat.com/openshift3/logging-fluentd:v3.9.43>
    Port:   
    Limits:
      memory:  512Mi
    Requests:
      cpu:     100m
      memory:  512Mi
    Environment:
      K8S_HOST_URL: https://kubernetes.default.svc.cluster.local
      ES_HOST:                 logging-es
      ES_PORT:                 9200
      ES_CLIENT_CERT:          /etc/fluent/keys/cert
      ES_CLIENT_KEY:           /etc/fluent/keys/key
      ES_CA:                   /etc/fluent/keys/ca
      OPS_HOST:                logging-es
      OPS_PORT:                9200
      OPS_CLIENT_CERT: /etc/fluent/keys/ops-cert
      OPS_CLIENT_KEY:  /etc/fluent/keys/ops-key
      OPS_CA:  /etc/fluent/keys/ops-ca
      JOURNAL_SOURCE:
      JOURNAL_READ_FROM_HEAD:
      BUFFER_QUEUE_LIMIT:      32
      BUFFER_SIZE_LIMIT:       8m
      FLUENTD_CPU_LIMIT:       node allocatable (limits.cpu)
      FLUENTD_MEMORY_LIMIT:    536870912 (limits.memory)
      FILE_BUFFER_LIMIT:       256Mi
    Mounts:
      /etc/docker from dockerdaemoncfg (ro)
      /etc/docker-hostname from dockerhostname (ro)
      /etc/fluent/configs.d/user from config (ro)
      /etc/fluent/keys from certs (ro)
      /etc/localtime from localtime (ro)
      /etc/origin/node from originnodecfg (ro)
      /etc/sysconfig/docker from dockercfg (ro)
      /run/log/journal from runlogjournal (rw)
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/lib/fluentd from filebufferstorage (rw)
      /var/log from varlog (rw)
  Volumes:
   runlogjournal:
    Type:          HostPath (bare host directory volume)
    Path:          /run/log/journal
    HostPathType:
   varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:
   varlibdockercontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers
    HostPathType:
   config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      logging-fluentd
    Optional:  false
   certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  logging-fluentd
    Optional:    false
   dockerhostname:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/hostname
    HostPathType:
   localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:
   dockercfg:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/sysc

Re: Regarding Logging

2018-11-19 Thread Kasturi Narra
Hello,

  Please find replies inline

On Mon, Nov 19, 2018 at 9:12 PM Rich Megginson  wrote:

> On 11/19/18 8:32 AM, Kasturi Narra wrote:
> > Hello Jeff,
> >yes , i do have it. Here is the output i have got.
> >
> > > dhcp46-68.lab.eng.blr.redhat.com    Ready    6d    v1.9.1+a0ce1bc657
> > > beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=dhcp46-68.lab.eng.blr.redhat.com,logging-infra-fluentd=true,region=infra,registry=enabled,role=node,router=enabled
> > >
>
> oc get daemonset
>

[root@dhcp46-170 ~]# oc get daemonset
NAME  DESIRED   CURRENT   READY UP-TO-DATE   AVAILABLE
NODE SELECTOR        AGE
logging-fluentd   0 0 0     00
logging-infra-fluentd=true   3m


>
> oc describe daemonset logging-fluentd
>

[root@dhcp46-170 ~]# oc describe daemonset logging-fluentd
Name:   logging-fluentd
Selector:   component=fluentd,provider=openshift
Node-Selector:  logging-infra-fluentd=true
Labels: component=fluentd
logging-infra=fluentd
provider=openshift
Annotations:
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:   component=fluentd
logging-infra=fluentd
provider=openshift
  Service Account:  aggregated-logging-fluentd
  Containers:
   fluentd-elasticsearch:
Image:  registry.access.redhat.com/openshift3/logging-fluentd:v3.9.43
Port:   
Limits:
  memory:  512Mi
Requests:
  cpu: 100m
  memory:  512Mi
Environment:
  K8S_HOST_URL:    https://kubernetes.default.svc.cluster.local
  ES_HOST: logging-es
  ES_PORT: 9200
  ES_CLIENT_CERT:  /etc/fluent/keys/cert
  ES_CLIENT_KEY:   /etc/fluent/keys/key
  ES_CA:   /etc/fluent/keys/ca
  OPS_HOST:logging-es
  OPS_PORT:9200
  OPS_CLIENT_CERT: /etc/fluent/keys/ops-cert
  OPS_CLIENT_KEY:  /etc/fluent/keys/ops-key
  OPS_CA:  /etc/fluent/keys/ops-ca
  JOURNAL_SOURCE:
  JOURNAL_READ_FROM_HEAD:
  BUFFER_QUEUE_LIMIT:  32
  BUFFER_SIZE_LIMIT:   8m
  FLUENTD_CPU_LIMIT:   node allocatable (limits.cpu)
  FLUENTD_MEMORY_LIMIT:536870912 (limits.memory)
  FILE_BUFFER_LIMIT:   256Mi
Mounts:
  /etc/docker from dockerdaemoncfg (ro)
  /etc/docker-hostname from dockerhostname (ro)
  /etc/fluent/configs.d/user from config (ro)
  /etc/fluent/keys from certs (ro)
  /etc/localtime from localtime (ro)
  /etc/origin/node from originnodecfg (ro)
  /etc/sysconfig/docker from dockercfg (ro)
  /run/log/journal from runlogjournal (rw)
  /var/lib/docker/containers from varlibdockercontainers (ro)
  /var/lib/fluentd from filebufferstorage (rw)
  /var/log from varlog (rw)
  Volumes:
   runlogjournal:
Type:  HostPath (bare host directory volume)
Path:  /run/log/journal
HostPathType:
   varlog:
Type:  HostPath (bare host directory volume)
Path:  /var/log
HostPathType:
   varlibdockercontainers:
Type:  HostPath (bare host directory volume)
Path:  /var/lib/docker/containers
HostPathType:
   config:
Type:  ConfigMap (a volume populated by a ConfigMap)
Name:  logging-fluentd
Optional:  false
   certs:
Type:Secret (a volume populated by a Secret)
SecretName:  logging-fluentd
Optional:false
   dockerhostname:
Type:  HostPath (bare host directory volume)
Path:  /etc/hostname
HostPathType:
   localtime:
Type:  HostPath (bare host directory volume)
Path:  /etc/localtime
HostPathType:
   dockercfg:
Type:  HostPath (bare host directory volume)
Path:  /etc/sysconfig/docker
HostPathType:
   originnodecfg:
Type:  HostPath (bare host directory volume)
Path:  /etc/origin/node
HostPathType:
   dockerdaemoncfg:
Type:  HostPath (bare host directory volume)
Path:  /etc/docker
HostPathType:
   filebufferstorage:
Type:  HostPath (bare host directory volume)
Path:  /var/lib/fluentd
HostPathType:
Events:


>
>
> > Thanks

Re: Regarding Logging

2018-11-19 Thread Rich Megginson

On 11/19/18 8:32 AM, Kasturi Narra wrote:

Hello Jeff,
   yes , i do have it. Here is the output i have got.

dhcp46-68.lab.eng.blr.redhat.com    Ready    6d    v1.9.1+a0ce1bc657
beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=dhcp46-68.lab.eng.blr.redhat.com,logging-infra-fluentd=true,region=infra,registry=enabled,role=node,router=enabled




oc get daemonset

oc describe daemonset logging-fluentd



Thanks
kasturi

On Mon, Nov 19, 2018 at 7:16 PM Jeff Cantrill mailto:jcant...@redhat.com>> wrote:

It doesn't appear you have any fluentd pods which are responsible for 
collecting logs from the other pods.  Are your nodes labeled with 
'logging-infra-fluend=true'

On Mon, Nov 19, 2018 at 7:28 AM Kasturi Narra mailto:kna...@redhat.com>> wrote:

Hello Everyone,

   I have a setup where i am trying to install logging using ocp3.9+ 
cns3.11 . I see that logging pods are up and running but when i access the 
webconsole i get an error  present at
[1]  and i tried the solution provided at [2] but having no luck. Can 
some one of you please help me on resolving this issue ?

[root@dhcp46-170 ~]# oc version
oc v3.9.43
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://dhcp46-170.lab.eng.blr.redhat.com:8443
openshift v3.9.43
kubernetes v1.9.1+a0ce1bc657

[root@dhcp46-170 ~]# oc get pods
NAME  READY     STATUS    RESTARTS   AGE
logging-curator-1-bgjbj 1/1       Running   0          2h
logging-es-data-master-5gjnm57x-2-5vjq6 2/2       Running   0          
2h
logging-kibana-1-872dn  2/2       Running   0          2h

[1] Discover: [exception] The index returned an empty result. You can 
use the Time Picker to change the time filter or select a higher time interval
[2] https://access.redhat.com/solutions/3352681

Thanks
kasturi
___
users mailing list
users@lists.openshift.redhat.com 
<mailto:users@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



-- 
--

Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Logging
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com <mailto:jcant...@redhat.com>
http://www.redhat.com


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Regarding Logging

2018-11-19 Thread Kasturi Narra
Hello Jeff,

   yes , i do have it. Here is the output i have got.

dhcp46-68.lab.eng.blr.redhat.com    Ready    6d    v1.9.1+a0ce1bc657
beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=dhcp46-68.lab.eng.blr.redhat.com,logging-infra-fluentd=true,region=infra,registry=enabled,role=node,router=enabled

Thanks
kasturi

On Mon, Nov 19, 2018 at 7:16 PM Jeff Cantrill  wrote:

> It doesn't appear you have any fluentd pods which are responsible for
> collecting logs from the other pods.  Are your nodes labeled with
> 'logging-infra-fluend=true'
>
> On Mon, Nov 19, 2018 at 7:28 AM Kasturi Narra  wrote:
>
>> Hello Everyone,
>>
>>I have a setup where i am trying to install logging using ocp3.9+
>> cns3.11 . I see that logging pods are up and running but when i access the
>> webconsole i get an error  present at [1]  and i tried the solution
>> provided at [2] but having no luck. Can some one of you please help me on
>> resolving this issue ?
>>
>> [root@dhcp46-170 ~]# oc version
>> oc v3.9.43
>> kubernetes v1.9.1+a0ce1bc657
>> features: Basic-Auth GSSAPI Kerberos SPNEGO
>>
>> Server https://dhcp46-170.lab.eng.blr.redhat.com:8443
>> openshift v3.9.43
>> kubernetes v1.9.1+a0ce1bc657
>>
>> [root@dhcp46-170 ~]# oc get pods
>> NAME      READY STATUSRESTARTS
>> AGE
>> logging-curator-1-bgjbj   1/1   Running   0
>>  2h
>> logging-es-data-master-5gjnm57x-2-5vjq6   2/2   Running   0
>>  2h
>> logging-kibana-1-872dn2/2   Running   0
>>  2h
>>
>> [1] Discover: [exception] The index returned an empty result. You can
>> use the Time Picker to change the time filter or select a higher time
>> interval
>> [2] https://access.redhat.com/solutions/3352681
>>
>> Thanks
>> kasturi
>> _______
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
>
> --
> --
> Jeff Cantrill
> Senior Software Engineer, Red Hat Engineering
> OpenShift Logging
> Red Hat, Inc.
> *Office*: 703-748-4420 | 866-546-8970 ext. 8162420
> jcant...@redhat.com
> http://www.redhat.com
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Regarding Logging

2018-11-19 Thread Jeff Cantrill
It doesn't appear you have any fluentd pods, which are responsible for
collecting logs from the other pods. Are your nodes labeled with
'logging-infra-fluentd=true'?
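
A quick way to check which nodes carry that label (a sketch):

    oc get nodes -L logging-infra-fluentd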

On Mon, Nov 19, 2018 at 7:28 AM Kasturi Narra  wrote:

> Hello Everyone,
>
>I have a setup where i am trying to install logging using ocp3.9+
> cns3.11 . I see that logging pods are up and running but when i access the
> webconsole i get an error  present at [1]  and i tried the solution
> provided at [2] but having no luck. Can some one of you please help me on
> resolving this issue ?
>
> [root@dhcp46-170 ~]# oc version
> oc v3.9.43
> kubernetes v1.9.1+a0ce1bc657
> features: Basic-Auth GSSAPI Kerberos SPNEGO
>
> Server https://dhcp46-170.lab.eng.blr.redhat.com:8443
> openshift v3.9.43
> kubernetes v1.9.1+a0ce1bc657
>
> [root@dhcp46-170 ~]# oc get pods
> NAME      READY STATUSRESTARTS
> AGE
> logging-curator-1-bgjbj   1/1   Running   0  2h
> logging-es-data-master-5gjnm57x-2-5vjq6   2/2   Running   0  2h
> logging-kibana-1-872dn2/2   Running   0  2h
>
> [1] Discover: [exception] The index returned an empty result. You can use
> the Time Picker to change the time filter or select a higher time interval
> [2] https://access.redhat.com/solutions/3352681
>
> Thanks
> kasturi
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Logging
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Regarding Logging

2018-11-19 Thread Kasturi Narra
Hello Everyone,

   I have a setup where I am trying to install logging using OCP 3.9 +
CNS 3.11. I see that the logging pods are up and running, but when I access the
web console I get the error shown at [1]. I tried the solution provided at [2]
but had no luck. Can someone please help me resolve this issue?

[root@dhcp46-170 ~]# oc version
oc v3.9.43
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://dhcp46-170.lab.eng.blr.redhat.com:8443
openshift v3.9.43
kubernetes v1.9.1+a0ce1bc657

[root@dhcp46-170 ~]# oc get pods
NAME                                      READY     STATUS    RESTARTS   AGE
logging-curator-1-bgjbj                   1/1       Running   0          2h
logging-es-data-master-5gjnm57x-2-5vjq6   2/2       Running   0          2h
logging-kibana-1-872dn                    2/2       Running   0          2h

[1] Discover: [exception] The index returned an empty result. You can use
the Time Picker to change the time filter or select a higher time interval
[2] https://access.redhat.com/solutions/3352681

Thanks
kasturi
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Openshift centralized logging - add custom container logfiles

2018-08-16 Thread Aleksandar Lazic
Hi.

On 16.08.2018 at 16:27, Rich Megginson wrote:
> On 08/16/2018 05:42 AM, Aleksandar Lazic wrote:
>> On 16.08.2018 at 12:48, Aleksandar Kostadinov wrote:
>>> Might be real nice to allow pod to request sockets created where different 
>>> log
>>> streams can be sent to central logging without extra containers in the pod.
>> You can run socklog/fluentbit/... in the background to handle the logging and
>> your app logs to this socket.
>
> So you would need to configure your app to log to a socket instead of a log 
> file?
> Where does socklog write the logs?  Who reads from that destination?

Socklog writes to stdout by default.
In my setup haproxy is configured to write to the unix socket, but socklog can
also listen on a udp socket.
In either case the output is written to stdout.

http://smarden.org/socklog/

I have describe the setup in two blog posts
https://www.me2digital.com/blog/2017/05/syslog-in-a-container-world/
https://www.me2digital.com/blog/2017/09/syslog-receiver/

Another possible tool is https://fluentbit.io/ as it can use more input sources.
https://fluentbit.io/documentation/0.13/input/

For example you can use tail if it's not possible to change easily the logging
setup of the app.
https://fluentbit.io/documentation/0.13/input/tail.html
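
A minimal sketch of such a fluent-bit configuration, assuming the application writes to
a file under /var/log/myapp (path and tag are placeholders):

    [INPUT]
        Name   tail
        Path   /var/log/myapp/*.log
        Tag    myapp.*

    [OUTPUT]
        Name   stdout
        Match  *

With the output set to stdout, the collected lines end up in the container log and are
picked up by the cluster's fluentd like any other stdout output.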

In the past rsyslog was hard to set up for OpenShift with normal privileges
from the RHEL image; that was the reason for me to build this solution, imho.
The https://www.rsyslog.com/doc/v8-stable/configuration/modules/omstdout.html module is
documented as not to be used in real deployments.

Best Regards
Aleks

>> Something similar as I have done it in my haproxy image.
>>
>> https://gitlab.com/aleks001/haproxy18-centos/blob/master/containerfiles/container-entrypoint.sh#L92-93
>>
>>
>> ###
>> ...
>> echo "starting socklog"
>> /usr/local/bin/socklog unix /tmp/haproxy_syslog &
>> ...
>> ###
>>
>> Regards
>> Aleks
>>> Jeff Cantrill wrote on 08/15/18 16:50:
>>>> The recommended options with the current log stack are either to 
>>>> reconfigure
>>>> your log to send to stdout or add a sidecar container that is capable of
>>>> tailing the log in question which would write it to stdout and ultimately
>>>> read by fluentd.
>>>>
>>>> On Wed, Aug 15, 2018 at 2:47 AM, Leo David >>> <mailto:leoa...@gmail.com>> wrote:
>>>>
>>>>  Hi Everyone,
>>>>  I have logging with fluentd / elasticsearch at cluster level running
>>>>  fine,  everything works as expected.
>>>>  I have an issue though...
>>>>  What would it be the procedure to add some custom log files from
>>>>  different containers ( logs that are not shown in stdout ) to be
>>>>  delivered to elasticseach as well ?
>>>>  I two different clusters ( 3.7 and 3.9 ) up and running,  and i know
>>>>  that in 3.7 docker logging driver is configured with journald whilst
>>>>  in 3.9 is json-file.
>>>>  Any thoughts on this ?
>>>>  Thanks a lot !
>>>>
>>>>  --     Best regards, Leo David
>>>>
>>>> -- 
>>>> -- 
>>>> Jeff Cantrill
>>>> Senior Software Engineer, Red Hat Engineering
>>>> OpenShift Logging
>>>> Red Hat, Inc.
>>>> *Office*: 703-748-4420 | 866-546-8970 ext. 8162420
>>>> jcant...@redhat.com <mailto:jcant...@redhat.com>
>>>> http://www.redhat.com


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging / Kibana export Logs

2018-08-16 Thread Rich Megginson
You could expose Elasticsearch externally 
https://docs.okd.io/latest/install_config/aggregate_logging.html



openshift_logging_es_allow_external

Set to true to expose Elasticsearch as a reencrypt route. Set to
false by default.




Except that username/password and token auth is currently broken due to 
the oauth proxy.
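
For reference, a minimal Ansible inventory sketch for enabling that route (the allow_external
variable name comes from the docs above; openshift_logging_es_hostname and the hostname value
are assumptions to adapt to your environment):

    [OSEv3:vars]
    openshift_logging_install_logging=true
    openshift_logging_es_allow_external=true
    openshift_logging_es_hostname=es.apps.example.com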


On 08/16/2018 07:16 AM, Tobias Brunner wrote:

Hi,

Does anyone have an idea how logs could be exported from the OpenShift
integrated logging for further analysis? Constraints: We can't give the
users access to the logging namespace and therefore also not to the
Elasticsearch Pod as this would allow the user to bypass access control
(Searchguard).

Thanks,
Tobias

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Openshift centralized logging - add custom container logfiles

2018-08-16 Thread Rich Megginson

On 08/16/2018 05:42 AM, Aleksandar Lazic wrote:

On 16.08.2018 at 12:48, Aleksandar Kostadinov wrote:

Might be real nice to allow pod to request sockets created where different log
streams can be sent to central logging without extra containers in the pod.

You can run socklog/fluentbit/... in the background to handle the logging and
your app logs to this socket.


So you would need to configure your app to log to a socket instead of a 
log file?

Where does socklog write the logs?  Who reads from that destination?


Something similar as I have done it in my haproxy image.

https://gitlab.com/aleks001/haproxy18-centos/blob/master/containerfiles/container-entrypoint.sh#L92-93

###
...
echo "starting socklog"
/usr/local/bin/socklog unix /tmp/haproxy_syslog &
...
###

Regards
Aleks

Jeff Cantrill wrote on 08/15/18 16:50:

The recommended options with the current log stack are either to reconfigure
your log to send to stdout or add a sidecar container that is capable of
tailing the log in question which would write it to stdout and ultimately
read by fluentd.

On Wed, Aug 15, 2018 at 2:47 AM, Leo David mailto:leoa...@gmail.com>> wrote:

     Hi Everyone,
     I have logging with fluentd / elasticsearch at cluster level running
     fine,  everything works as expected.
     I have an issue though...
     What would it be the procedure to add some custom log files from
     different containers ( logs that are not shown in stdout ) to be
     delivered to elasticseach as well ?
     I two different clusters ( 3.7 and 3.9 ) up and running,  and i know
     that in 3.7 docker logging driver is configured with journald whilst
     in 3.9 is json-file.
     Any thoughts on this ?
     Thanks a lot !

     --     Best regards, Leo David

     ___
     users mailing list
     users@lists.openshift.redhat.com
     <mailto:users@lists.openshift.redhat.com>
     http://lists.openshift.redhat.com/openshiftmm/listinfo/users
     <http://lists.openshift.redhat.com/openshiftmm/listinfo/users>




--
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Logging
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com <mailto:jcant...@redhat.com>
http://www.redhat.com


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Openshift centralized logging - add custom container logfiles

2018-08-16 Thread Aleksandar Lazic
On 16.08.2018 at 12:48, Aleksandar Kostadinov wrote:
> Might be real nice to allow pod to request sockets created where different log
> streams can be sent to central logging without extra containers in the pod.

You can run socklog/fluentbit/... in the background to handle the logging and
your app logs to this socket.
Something similar as I have done it in my haproxy image.

https://gitlab.com/aleks001/haproxy18-centos/blob/master/containerfiles/container-entrypoint.sh#L92-93

###
...
echo "starting socklog"
/usr/local/bin/socklog unix /tmp/haproxy_syslog &
...
###

Regards
Aleks
> Jeff Cantrill wrote on 08/15/18 16:50:
>> The recommended options with the current log stack are either to reconfigure
>> your log to send to stdout or add a sidecar container that is capable of
>> tailing the log in question which would write it to stdout and ultimately
>> read by fluentd.
>>
>> On Wed, Aug 15, 2018 at 2:47 AM, Leo David > <mailto:leoa...@gmail.com>> wrote:
>>
>>     Hi Everyone,
>>     I have logging with fluentd / elasticsearch at cluster level running
>>     fine,  everything works as expected.
>>     I have an issue though...
>>     What would it be the procedure to add some custom log files from
>>     different containers ( logs that are not shown in stdout ) to be
>>     delivered to elasticseach as well ?
>>     I two different clusters ( 3.7 and 3.9 ) up and running,  and i know
>>     that in 3.7 docker logging driver is configured with journald whilst
>>     in 3.9 is json-file.
>>     Any thoughts on this ?
>>     Thanks a lot !
>>
>>     --     Best regards, Leo David
>>
>>     ___
>>     users mailing list
>>     users@lists.openshift.redhat.com
>>     <mailto:users@lists.openshift.redhat.com>
>>     http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>     <http://lists.openshift.redhat.com/openshiftmm/listinfo/users>
>>
>>
>>
>>
>> -- 
>> -- 
>> Jeff Cantrill
>> Senior Software Engineer, Red Hat Engineering
>> OpenShift Logging
>> Red Hat, Inc.
>> *Office*: 703-748-4420 | 866-546-8970 ext. 8162420
>> jcant...@redhat.com <mailto:jcant...@redhat.com>
>> http://www.redhat.com
>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Openshift centralized logging - add custom container logfiles

2018-08-16 Thread Aleksandar Kostadinov
Might be real nice to allow pod to request sockets created where 
different log streams can be sent to central logging without extra 
containers in the pod.


Jeff Cantrill wrote on 08/15/18 16:50:
The recommended options with the current log stack are either to 
reconfigure your log to send to stdout or add a sidecar container that 
is capable of tailing the log in question which would write it to stdout 
and ultimately read by fluentd.


On Wed, Aug 15, 2018 at 2:47 AM, Leo David <mailto:leoa...@gmail.com>> wrote:


Hi Everyone,
    I have logging with fluentd / elasticsearch at cluster level running
fine,  everything works as expected.
I have an issue though...
What would it be the procedure to add some custom log files from
different containers ( logs that are not shown in stdout ) to be
delivered to elasticsearch as well ?
I have two different clusters ( 3.7 and 3.9 ) up and running, and I know
that in 3.7 the docker logging driver is configured with journald whilst
in 3.9 it is json-file.
Any thoughts on this ?
Thanks a lot !

-- 
Best regards, Leo David


___
users mailing list
users@lists.openshift.redhat.com
<mailto:users@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
<http://lists.openshift.redhat.com/openshiftmm/listinfo/users>




--
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Logging
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com <mailto:jcant...@redhat.com>
http://www.redhat.com


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Openshift centralized logging - add custom container logfiles

2018-08-15 Thread Leo David
Hi Everyone,
I have logging with fluentd / elasticsearch at cluster level running
fine,  everything works as expected.
I have an issue though...
What would it be the procedure to add some custom log files from different
containers ( logs that are not shown in stdout ) to be delivered to
elasticsearch as well ?
I have two different clusters ( 3.7 and 3.9 ) up and running, and I know that
in 3.7 the docker logging driver is configured with journald whilst in 3.9 it is
json-file.
Any thoughts on this ?
Thanks a lot !

-- 
Best regards, Leo David
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [logging]

2018-06-01 Thread Rich Megginson
Not sure how logstash chooses which cert/key to use from the 
truststore.  You might ask on a logstash forum.


Or, just use the fluentd cert/key with plain old client cert and key 
files in pem format, if logstash supports that.  You can dump the 
fluentd ca, cert, and key using


oc extract -n logging secret/logging-fluentd --keys=cert --to=- > fluentd-cert.pem


etc.
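
For example, a sketch that dumps all three to PEM files (the key names "ca",
"cert" and "key" are assumed here; verify them with oc describe secret
logging-fluentd first):

oc extract -n logging secret/logging-fluentd --keys=ca --to=- > fluentd-ca.pem
oc extract -n logging secret/logging-fluentd --keys=cert --to=- > fluentd-cert.pem
oc extract -n logging secret/logging-fluentd --keys=key --to=- > fluentd-key.pem

Those files can then be pointed at whatever client-certificate options the
logstash elasticsearch output supports.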


On 05/31/2018 06:02 AM, Himmat Singh wrote:

Hi,
Anybody worked on sending logs from logstash server (pod running on 
openshift) with existing elasticsearch of openshift efk solution which 
is secured with searchguard..


Please share configuration details how to get connectivity between them.

I am getting the same kind of error as below, again and again.



On Wed, May 30, 2018, 3:16 PM Himmat Singh wrote:


Hi Team,

I have deployed rabbitmq, logstash server on openshift to make
another ELK pipeline for logging which supports some set of
application and want to forward logs from those application logs
through ELK pipeline but Elasticsearch will be the common For both
EFK/ELK pipeline.

I have below secrets on openshift logging-elasticsearch :

|logging-elasticsearch created 3 months ago Opaque Reveal Secret
admin-ca * admin-cert * admin-key * admin.jks *
key * searchguard.key * searchguard.truststore *
truststore * |



I have grabbed truststore key using below command and used
truststore_password => tspass from elasticsaerch.yml :

|sudo oc get secret logging-elasticsearch --template='{{index .data
"truststore"}}' | base64 -d > truststore.jks |

Please help me with procedure i need to follow if i want to
connect using truststore keys,username,password for truststore.

Below is logstash.conf file : :

|input { rabbitmq { host => "rabbitmq-logstash" queue => "logstash"
durable => true port => 5672 user => "admin" password => "admin" }
} output { elasticsearch { hosts => ["logging-es:9200"] #cacert =>
'/etc/logstash/conf.d/keys/es-ca.crt' #user => 'fluentd' #password
=> 'changeme' ssl => true ssl_certificate_verification => false
truststore => "/etc/logstash/conf.d/keys/truststore.jks"
truststore_password => tspass index => "logstash-%{+.MM.dd}"
manage_template => false document_type => "%{[@metadata][type]}" }
stdout { codec => rubydebug } } |

I am facing below error:

10:51:56.154 [Ruby-0-Thread-5:

/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:228]
WARN logstash.outputs.elasticsearch - Attempted to resurrect
connection to dead ES instance, but got an error.
{:url=>"https://logging-es:9200/;,

:error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError,
:error=>"Got response code '401' contacting Elasticsearch at URL
'https://logging-es:9200/'"} <https://logging-es:9200/%27%22%7D>
10:52:01.155 [Ruby-0-Thread-5:

/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:228]
INFO logstash.outputs.elasticsearch - Running health check to see
if an Elasticsearch connection is working
{:healthcheck_url=>https://logging-es:9200/, :path=>"/"}
  | 10:52:01.158 [Ruby-0-Thread-5:

/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:228]
WARN logstash.outputs.elasticsearch - Attempted to resurrect
connection to dead ES instance, but got an error.
{:url=>"https://logging-es:9200/;,

:error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError,
:error=>"Got response code '401' contacting Elasticsearch at URL
'https://logging-es:9200/'"} <https://logging-es:9200/%27%22%7D>

Please help me with correct configuration how do i get all
parameter username, password and truststore_password, truststore,
ca certificate.


*Thanks and Regards, *
*Himmat Singh.*
*Virtusa|Polaris Pvt Ltd*
*8465009408*
*
*
*
*



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [logging]

2018-05-31 Thread Himmat Singh
Hi,
Anybody worked on sending logs from logstash server (pod running on
openshift) with existing elasticsearch of openshift efk solution which is
secured with searchguard..

Please share configuration details how to get connectivity between them.

I am getting the same kind of error as below, again and again.



On Wed, May 30, 2018, 3:16 PM Himmat Singh 
wrote:

> Hi Team,
>
> I have deployed rabbitmq, logstash server on openshift to make another ELK
> pipeline for logging which supports some set of application and want to
> forward logs from those application logs through ELK pipeline but
> Elasticsearch will be the common For both EFK/ELK pipeline.
>
> I have below secrets on openshift logging-elasticsearch :
>
> logging-elasticsearch created 3 months ago
> Opaque Reveal Secret
> admin-ca
> *
> admin-cert
> *
> admin-key
> *
> admin.jks
> *
> key
> *
> searchguard.key
> *
> searchguard.truststore
> *
> truststore
> *
>
> --
>
> I have grabbed truststore key using below command and used
> truststore_password => tspass from elasticsaerch.yml :
>
> sudo oc get secret logging-elasticsearch --template='{{index .data 
> "truststore"}}' | base64 -d > truststore.jks
>
> Please help me with procedure i need to follow if i want to connect using
> truststore keys,username,password for truststore.
>
> Below is logstash.conf file : :
>
> input {
>   rabbitmq {
> host => "rabbitmq-logstash"
> queue => "logstash"
> durable => true
> port => 5672
> user => "admin"
> password => "admin"
> }
> }
> output {
>   elasticsearch {
> hosts => ["logging-es:9200"]
> #cacert => '/etc/logstash/conf.d/keys/es-ca.crt'
> #user => 'fluentd'
> #password => 'changeme'
> ssl => true
> ssl_certificate_verification => false
> truststore => "/etc/logstash/conf.d/keys/truststore.jks"
> truststore_password => tspass
> index => "logstash-%{+.MM.dd}"
> manage_template => false
> document_type => "%{[@metadata][type]}"
>}
>   stdout { codec => rubydebug }
> }
>
> I am facing below error:
>
> 10:51:56.154 [Ruby-0-Thread-5:
> /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:228]
> WARN logstash.outputs.elasticsearch - Attempted to resurrect connection to
> dead ES instance, but got an error. {:url=>"https://logging-es:9200/;,
> :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError,
> :error=>"Got response code '401' contacting Elasticsearch at URL '
> https://logging-es:9200/'"}
> 10:52:01.155 [Ruby-0-Thread-5:
> /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:228]
> INFO logstash.outputs.elasticsearch - Running health check to see if an
> Elasticsearch connection is working {:healthcheck_url=>
> https://logging-es:9200/, :path=>"/"}
>   | 10:52:01.158 [Ruby-0-Thread-5:
> /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:228]
> WARN logstash.outputs.elasticsearch - Attempted to resurrect connection to
> dead ES instance, but got an error. {:url=>"https://logging-es:9200/;,
> :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError,
> :error=>"Got response code '401' contacting Elasticsearch at URL '
> https://logging-es:9200/'"}
>
> Please help me with correct configuration how do i get all parameter
> username, password and truststore_password, truststore, ca certificate.
>
>
>
> *Thanks and Regards,  *
> *Himmat Singh.*
> *Virtusa|Polaris Pvt Ltd*
> *8465009408*
>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


[logging]

2018-05-30 Thread Himmat Singh
Hi Team,

I have deployed rabbitmq, logstash server on openshift to make another ELK
pipeline for logging which supports some set of application and want to
forward logs from those application logs through ELK pipeline but
Elasticsearch will be the common For both EFK/ELK pipeline.

I have below secrets on openshift logging-elasticsearch :

logging-elasticsearch created 3 months ago
Opaque Reveal Secret
admin-ca
*
admin-cert
*
admin-key
*
admin.jks
*
key
*
searchguard.key
*
searchguard.truststore
*
truststore
*

--

I have grabbed truststore key using below command and used
truststore_password => tspass from elasticsaerch.yml :

sudo oc get secret logging-elasticsearch --template='{{index .data
"truststore"}}' | base64 -d > truststore.jks

Please help me with procedure i need to follow if i want to connect using
truststore keys,username,password for truststore.

Below is logstash.conf file : :

input {
  rabbitmq {
host => "rabbitmq-logstash"
queue => "logstash"
durable => true
port => 5672
user => "admin"
password => "admin"
}
}
output {
  elasticsearch {
hosts => ["logging-es:9200"]
#cacert => '/etc/logstash/conf.d/keys/es-ca.crt'
#user => 'fluentd'
#password => 'changeme'
ssl => true
ssl_certificate_verification => false
truststore => "/etc/logstash/conf.d/keys/truststore.jks"
truststore_password => tspass
index => "logstash-%{+.MM.dd}"
manage_template => false
document_type => "%{[@metadata][type]}"
   }
  stdout { codec => rubydebug }
}

I am facing below error:

10:51:56.154 [Ruby-0-Thread-5:
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:228]
WARN logstash.outputs.elasticsearch - Attempted to resurrect connection to
dead ES instance, but got an error. {:url=>"https://logging-es:9200/",
:error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError,
:error=>"Got response code '401' contacting Elasticsearch at URL '
https://logging-es:9200/'"}
10:52:01.155 [Ruby-0-Thread-5:
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:228]
INFO logstash.outputs.elasticsearch - Running health check to see if an
Elasticsearch connection is working {:healthcheck_url=>
https://logging-es:9200/, :path=>"/"}
  | 10:52:01.158 [Ruby-0-Thread-5:
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:228]
WARN logstash.outputs.elasticsearch - Attempted to resurrect connection to
dead ES instance, but got an error. {:url=>"https://logging-es:9200/",
:error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError,
:error=>"Got response code '401' contacting Elasticsearch at URL '
https://logging-es:9200/'"}

Please help me with correct configuration how do i get all parameter
username, password and truststore_password, truststore, ca certificate.



*Thanks and Regards,  *
*Himmat Singh.*
*Virtusa|Polaris Pvt Ltd*
*8465009408*
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [logging]

2018-05-21 Thread Rich Megginson

On 05/20/2018 12:31 PM, Himmat Singh wrote:


Hi Team,

I am using the openshift logging image with the below version, which provides us 
centralized logging capability for our openshift cluster and external 
environment logs.


|registry.access.redhat.com/openshift3/logging-fluentd:v3.9 
<http://registry.access.redhat.com/openshift3/logging-fluentd:v3.9>|


I am trying to add additional functionality on top of above images as 
per our additional requirement.


As per requirement, i have created below configuration files to get 
node security logs and  ingest them to elasticsearch via mux .


Below is source file .. input-pre-secure.conf


|@type tail|
|@label @INGRESS|
|@id secure-input|
|path /var/log/secure*|
|read_from_head true|
|pos_file /var/log/secure.log.pos|
|tag audit.log|
|format none|
||


and filter-pre-secure.conf


|@type parser|
|key_name message|
|format grok|
||
|    pattern (?%{WORD} %{DATA} %{TIME}) 
%{HOSTNAME:host_target} sshd\[%{BASE10NUM}\]: (?%{WORD} 
%{WORD}) (?%{WORD}) from %{IP:src_ip} port %{BASE10NUM:port}|

||
||
|    pattern (?%{WORD} %{DATA} %{TIME}) 
%{HOSTNAME:host_target} sshd\[%{BASE10NUM}\]: %{DATA:EventType} for 
%{USERNAME:username} from %{IP:src_ip} port %{BASE10NUM:port} ssh2|

||
||


Modified Dockerfile:

|FROM registry.access.redhat.com/openshift3/logging-fluentd:v3.9 
<http://registry.access.redhat.com/openshift3/logging-fluentd:v3.9>|||

||
|COPY fluent-plugin-grok-parser-1.0.1.gem .|
|RUN gem install fluent-plugin-grok-parser-1.0.1.gem|
|COPY input-pre-secure.conf /etc/fluent/configs.d/openshift/|
|COPY filter-pre-secure.conf /etc/fluent/configs.d/openshift/|


*I have deployed the updated logging images to the mux and fluentd 
daemonsets. After making these configuration changes I am not able to 
get any logs into elasticsearch.*


I want all the security logs from /var/log/secure to be filtered 
according to our specific requirements and written to the .operations 
index. What configuration do I need so that these logs end up in the 
operations index?



Please help me with the solution or any suggestion and with correct 
configuration files.


So in another issue I commented about this: 
https://github.com/openshift/origin-aggregated-logging/issues/1141#issuecomment-389301880


You are going to want to create your own index name and bypass all other 
processing.




*
*
*Thanks and Regards, *
*Himmat Singh.*

*
*
*
*


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging fails when using cinder volume for elasticsearch

2018-05-21 Thread Tim Dudgeon

On 21/05/18 13:30, Jeff Cantrill wrote:
Consider logging an issue so that it is properly addressed by the 
development team.



https://github.com/openshift/openshift-ansible/issues/8456
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging fails when using cinder volume for elasticsearch

2018-05-21 Thread Jeff Cantrill
Consider logging an issue so that it is properly addressed by the
development team.
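
Not from this thread, but a workaround that is sometimes applied to block
volumes such as cinder in the meantime is to set an fsGroup on the
elasticsearch deploymentconfig so the kubelet makes the mounted volume
group-writable, e.g.:

spec:
  template:
    spec:
      securityContext:
        fsGroup: 65534    # illustrative gid; must be allowed by the pod's SCC

With an fsGroup set the volume is chgrp'd on mount and the gid is added to the
elasticsearch process as a supplemental group, which avoids the
AccessDeniedException quoted below. Treat this as a sketch, not an official fix.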

On Mon, May 21, 2018 at 7:05 AM, Tim Dudgeon <tdudgeon...@gmail.com> wrote:

> I'm seeing a  strange problem with trying to use a Cinder volume for the
> elasticsearch PVC when installing logging with Origin 3.7. If I use NFS or
> GlusterFS volumes it all works fine. If I try a Cinder volume elastic
> search fails to start because of permissions problems:
>
>
> [2018-05-21 11:03:48,483][INFO ][container.run] Begin
> Elasticsearch startup script
> [2018-05-21 11:03:48,500][INFO ][container.run] Comparing the
> specified RAM to the maximum recommended for Elasticsearch...
> [2018-05-21 11:03:48,503][INFO ][container.run] Inspecting the
> maximum RAM available...
> [2018-05-21 11:03:48,513][INFO ][container.run] ES_HEAP_SIZE:
> '4096m'
> [2018-05-21 11:03:48,527][INFO ][container.run] Setting heap
> dump location /elasticsearch/persistent/heapdump.hprof
> [2018-05-21 11:03:48,531][INFO ][container.run] Checking if
> Elasticsearch is ready on https://localhost:9200
> Exception in thread "main" java.lang.IllegalStateException: Failed to
> created node environment
> Likely root cause: java.nio.file.AccessDeniedException:
> /elasticsearch/persistent/logging-es
> at sun.nio.fs.UnixException.translateToIOException(UnixExceptio
> n.java:84)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.
> java:102)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.
> java:107)
> at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSy
> stemProvider.java:384)
> at java.nio.file.Files.createDirectory(Files.java:674)
> at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
> at java.nio.file.Files.createDirectories(Files.java:767)
> at org.elasticsearch.env.NodeEnvironment.(NodeEnvironment
> .java:169)
> at org.elasticsearch.node.Node.(Node.java:165)
> at org.elasticsearch.node.Node.(Node.java:140)
> at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:143)
> at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:194)
> at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:286)
> at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch
> .java:45)
> Refer to the log for complete error details.
>
> The directory ownerships do look very strange. Using Gluster (where it
> works) you see this (/elasticsearch/persistent is where the volume is
> mounted):
>
> sh-4.2$ cd /elasticsearch/persistent
> sh-4.2$ ls -al
> total 8
> drwxrwsr-x. 4 root 2009 4096 May 21 07:17 .
> drwxrwxrwx. 4 root root   42 May 21 07:17 ..
> drwxr-sr-x. 3 1000 2009 4096 May 21 07:17 logging-es
>
> User 1000 and group 2009 do not exist in /etc/passwd or /etc/groups
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>



-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Logging fails when using cinder volume for elasticsearch

2018-05-21 Thread Tim Dudgeon
I'm seeing a  strange problem with trying to use a Cinder volume for the 
elasticsearch PVC when installing logging with Origin 3.7. If I use NFS 
or GlusterFS volumes it all works fine. If I try a Cinder volume elastic 
search fails to start because of permissions problems:



[2018-05-21 11:03:48,483][INFO ][container.run    ] Begin 
Elasticsearch startup script
[2018-05-21 11:03:48,500][INFO ][container.run    ] Comparing 
the specified RAM to the maximum recommended for Elasticsearch...
[2018-05-21 11:03:48,503][INFO ][container.run    ] Inspecting 
the maximum RAM available...
[2018-05-21 11:03:48,513][INFO ][container.run    ] 
ES_HEAP_SIZE: '4096m'
[2018-05-21 11:03:48,527][INFO ][container.run    ] Setting heap 
dump location /elasticsearch/persistent/heapdump.hprof
[2018-05-21 11:03:48,531][INFO ][container.run    ] Checking if 
Elasticsearch is ready on https://localhost:9200
Exception in thread "main" java.lang.IllegalStateException: Failed to 
created node environment
Likely root cause: java.nio.file.AccessDeniedException: 
/elasticsearch/persistent/logging-es
    at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
    at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    at 
sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)

    at java.nio.file.Files.createDirectory(Files.java:674)
    at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
    at java.nio.file.Files.createDirectories(Files.java:767)
    at 
org.elasticsearch.env.NodeEnvironment.(NodeEnvironment.java:169)

    at org.elasticsearch.node.Node.(Node.java:165)
    at org.elasticsearch.node.Node.(Node.java:140)
    at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:143)
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:194)
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:286)
    at 
org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:45)

Refer to the log for complete error details.

The directory ownerships do look very strange. Using Gluster (where it 
works) you see this (/elasticsearch/persistent is where the volume is 
mounted):


sh-4.2$ cd /elasticsearch/persistent
sh-4.2$ ls -al
total 8
drwxrwsr-x. 4 root 2009 4096 May 21 07:17 .
drwxrwxrwx. 4 root root   42 May 21 07:17 ..
drwxr-sr-x. 3 1000 2009 4096 May 21 07:17 logging-es

User 1000 and group 2009 do not exist in /etc/passwd or /etc/groups



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


[logging]

2018-05-20 Thread Himmat Singh
Hi Team,



I am using the openshift logging image with the below version, which provides us
centralized logging capability for our openshift cluster and external
environment logs.

registry.access.redhat.com/openshift3/logging-fluentd:v3.9

I am trying to add additional functionality on top of above images as per
our additional requirement.

As per requirement, i have created below configuration files to get node
security logs and  ingest them to elasticsearch via mux .

Below is source file .. input-pre-secure.conf



<source>
  @type tail
  @label @INGRESS
  @id secure-input
  path /var/log/secure*
  read_from_head true
  pos_file /var/log/secure.log.pos
  tag audit.log
  format none
</source>




and filter-pre-secure.conf



@type parser

key_name message

format grok



pattern (?%{WORD} %{DATA} %{TIME})
%{HOSTNAME:host_target} sshd\[%{BASE10NUM}\]: (?%{WORD}
%{WORD}) (?%{WORD}) from %{IP:src_ip} port %{BASE10NUM:port}





pattern (?%{WORD} %{DATA} %{TIME})
%{HOSTNAME:host_target} sshd\[%{BASE10NUM}\]: %{DATA:EventType} for
%{USERNAME:username} from %{IP:src_ip} port %{BASE10NUM:port} ssh2






Modified Dockerfile:

FROM registry.access.redhat.com/openshift3/logging-fluentd:v3.9



COPY fluent-plugin-grok-parser-1.0.1.gem .

RUN gem install fluent-plugin-grok-parser-1.0.1.gem

COPY input-pre-secure.conf /etc/fluent/configs.d/openshift/

COPY filter-pre-secure.conf /etc/fluent/configs.d/openshift/


*I have deployed the updated logging images to the mux and fluentd daemonsets.
After making these configuration changes I am not able to get any logs into
elasticsearch.*

I want all the security logs from /var/log/secure to be filtered according
to our specific requirements and written to the .operations index. What
configuration do I need so that these logs end up in the operations index?


Please help me with the solution or any suggestion and with correct
configuration files.



*Thanks and Regards,  *
*Himmat Singh.*
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: specifying storage class for metrics and logging

2018-04-17 Thread Jeff Cantrill
openshift_logging_elasticsearch_pvc_dynamic is a deprecated variable that
defined the alpha feature of PV->PVC associations prior to the introduction
of storage classes

On Tue, Apr 17, 2018 at 6:26 AM, Per Carlson <pe...@hemmop.com> wrote:

> Hi.
>
> On 17 April 2018 at 12:17, Tim Dudgeon <tdudgeon...@gmail.com> wrote:
>
>> So if you are using dynamic provisioning the only option for logging is
>> for the default StorageClass to be set to what is needed?
>>
>> On 17/04/18 11:12, Per Carlson wrote:
>>
>> This holds at least for 3.7:
>>
>> For metrics you can use "openshift_metrics_cassanda_pvc_storage_class_name"
>> (https://github.com/openshift/openshift-ansible/blob/release
>> -3.7/roles/openshift_metrics/tasks/generate_cassandra_pvcs.yaml#L44).
>>
>> Using a StorageClass for logging (ElasticSearch) is more confusing. The
>> variable is "openshift_logging_elasticsearch_pvc_storage_class_name" (
>> https://github.com/openshift/openshift-ansible/blob/release
>> -3.7/roles/openshift_logging_elasticsearch/defaults/main.yml#L34). But,
>> it is only used for non-dynamic PVCs (https://github.com/openshift/
>> openshift-ansible/blob/release-3.7/roles/openshift_logging_
>> elasticsearch/tasks/main.yaml#L368-L370).
>>
>>
>> --
>> Pelle
>>
>> Research is what I'm doing when I don't know what I'm doing.
>> - Wernher von Braun
>>
>>
>>
>
> No, I think you can use a StorageClass by keeping
> "openshift_logging_elasticsearch_pvc_dynamic"
> set to false. Not sure if that has any side effects though.
>
> --
> Pelle
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: specifying storage class for metrics and logging

2018-04-17 Thread Per Carlson
Hi.

On 17 April 2018 at 12:17, Tim Dudgeon <tdudgeon...@gmail.com> wrote:

> So if you are using dynamic provisioning the only option for logging is
> for the default StorageClass to be set to what is needed?
>
> On 17/04/18 11:12, Per Carlson wrote:
>
> This holds at least for 3.7:
>
> For metrics you can use "openshift_metrics_cassanda_pvc_storage_class_name"
> (https://github.com/openshift/openshift-ansible/blob/
> release-3.7/roles/openshift_metrics/tasks/generate_cassandra_pvcs.yaml#L44
> ).
>
> Using a StorageClass for logging (ElasticSearch) is more confusing. The
> variable is "openshift_logging_elasticsearch_pvc_storage_class_name" (
> https://github.com/openshift/openshift-ansible/blob/
> release-3.7/roles/openshift_logging_elasticsearch/defaults/main.yml#L34).
> But, it is only used for non-dynamic PVCs (https://github.com/openshift/
> openshift-ansible/blob/release-3.7/roles/openshift_
> logging_elasticsearch/tasks/main.yaml#L368-L370).
>
>
> --
> Pelle
>
> Research is what I'm doing when I don't know what I'm doing.
> - Wernher von Braun
>
>
>

No, I think you can use a StorageClass by keeping
"openshift_logging_elasticsearch_pvc_dynamic" set to false. Not sure if that
has any side effects though.
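
A minimal inventory sketch of that combination (the logging variables as
discussed above, plus the metrics one referenced earlier in the thread; verify
the exact variable spellings against your openshift-ansible release, and the
storage class name is only an example):

openshift_logging_elasticsearch_pvc_dynamic=false
openshift_logging_elasticsearch_pvc_storage_class_name=my-storage-class
openshift_metrics_cassanda_pvc_storage_class_name=my-storage-class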

-- 
Pelle
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: specifying storage class for metrics and logging

2018-04-17 Thread Tim Dudgeon
So if you are using dynamic provisioning the only option for logging is 
for the default StorageClass to be set to what is needed?



On 17/04/18 11:12, Per Carlson wrote:

This holds at least for 3.7:

For metrics you can use 
"openshift_metrics_cassanda_pvc_storage_class_name" 
(https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_metrics/tasks/generate_cassandra_pvcs.yaml#L44).


Using a StorageClass for logging (ElasticSearch) is more confusing. 
The variable is 
"openshift_logging_elasticsearch_pvc_storage_class_name" 
(https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_logging_elasticsearch/defaults/main.yml#L34). 
But, it is only used for non-dynamic PVCs 
(https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_logging_elasticsearch/tasks/main.yaml#L368-L370).



--
Pelle

Research is what I'm doing when I don't know what I'm doing.
- Wernher von Braun


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


specifying storage class for metrics and logging

2018-04-17 Thread Tim Dudgeon
If using dynamic provisioning for metrics and logging e.g. your 
inventory file contains:


openshift_metrics_cassandra_storage_type=dynamic

How does one go about specifying the StorageClass to use?
Without this the default storage class would be used, which might not be what 
you want.


Tim



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Issues with logging and metrics on Origin 3.7

2018-01-09 Thread Eric Wolinetz
On Mon, Jan 8, 2018 at 12:04 PM, Tim Dudgeon <tdudgeon...@gmail.com> wrote:

> Ah, so that makes more sense.
>
> So can I define the persistence properties (e.g. using nfs) in the
> inventory file, but specify 'openshift_metrics_install_metrics=false' and
> then run the byo/config.yml playbook? Will that create the PVs, but not
> deploy metrics? Then I can later run
> byo/openshift-cluster/openshift-metrics.yml
> to actually deploy the metrics.
>

Correct!
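
Roughly, the two-stage flow is then (inventory path and checkout location are
illustrative):

# 1) with the storage vars set and openshift_metrics_install_metrics=false,
#    the cluster install playbook creates the NFS exports and PVs
ansible-playbook -i <inventory> openshift-ansible/playbooks/byo/config.yml

# 2) later, set openshift_metrics_install_metrics=true and deploy only metrics
ansible-playbook -i <inventory> openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml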


> The reason I'm doing this in 2 stages is that I sometimes hit 'Unable to
> allocate memory' problems when trying to deploy everything with
> byo/config.yml (possibly due to the 'forks' setting in ansible.cfg).
>
>
>
> On 08/01/18 17:49, Eric Wolinetz wrote:
>
> I think the issue you're seeing stems from the fact that the logging and
> metrics playbooks do not create their own PVs. That is handled by the
> cluster install playbook.
> The logging and metrics playbooks only create the PVCs that their objects
> may require (unless ephemeral storage is configured).
>
> I admit, the naming of the variables makes that confusing however it is
> described in our docs umbrella'd under the advanced install section which
> uses the cluster playbook...
> https://docs.openshift.com/container-platform/3.7/install_config/install/
> advanced_install.html#advanced-install-cluster-metrics
>
> On Mon, Jan 8, 2018 at 11:22 AM, Tim Dudgeon <tdudgeon...@gmail.com>
> wrote:
>
>> On 08/01/18 16:51, Luke Meyer wrote:
>>
>>
>>
>> On Thu, Jan 4, 2018 at 10:39 AM, Tim Dudgeon <tdudgeon...@gmail.com>
>> wrote:
>>
>>> I'm hitting a number of issues with installing logging and metrics on
>>> Origin 3.7.
>>> This is using Centos7 hosts, the release-3.7 branch of openshift-ansible
>>> and NFS for persistent storage.
>>>
>>> I first do a minimal deploy with logging and metrics turned off.
>>> This goes fine. On the NFS server I see various volumes exported under
>>> /exports for logging, metrics, prometheus, even though these are not
>>> deployed, but that's fine, they are there if they become needed.
>>> As expected there are no PVs related to metrics and logging.
>>>
>>> So I try to install metrics. I add this to the inventory file:
>>>
>>> openshift_metrics_install_metrics=true
>>> openshift_metrics_storage_kind=nfs
>>> openshift_metrics_storage_access_modes=['ReadWriteOnce']
>>> openshift_metrics_storage_nfs_directory=/exports
>>> openshift_metrics_storage_nfs_options='*(rw,root_squash)'
>>> openshift_metrics_storage_volume_name=metrics
>>> openshift_metrics_storage_volume_size=10Gi
>>> openshift_metrics_storage_labels={'storage': 'metrics'}
>>>
>>> and run:
>>>
>>> ansible-playbook openshift-ansible/playbooks/by
>>> o/openshift-cluster/openshift-metrics.yml
>>>
>>> All seems to install OK, but metrics can't start, and it turns out that
>>> no PV is created so the PVC needed by Cassandra can't be satisfied.
>>> So I manually create the PV using this definition:
>>>
>>> apiVersion: v1
>>> kind: PersistentVolume
>>> metadata:
>>>   name: metrics-pv
>>>   labels:
>>> storage: metrics
>>> spec:
>>>   capacity:
>>> storage: 10Gi
>>>   accessModes:
>>> - ReadWriteOnce
>>>   persistentVolumeReclaimPolicy: Recycle
>>>   nfs:
>>> path: /exports/metrics
>>> server: nfsserver
>>>
>>> Now the PVC is satisfied and metrics can be started (though pods may
>>> need to be bounced because they have timed out).
>>>
>>> ISSUE 1: why does the metrics PV not get created?
>>>
>>>
>>> So now on to trying to install logging. The approach is similar. Add
>>> this to the inventory file:
>>>
>>> openshift_logging_install_logging=true
>>> openshift_logging_storage_kind=nfs
>>> openshift_logging_storage_access_modes=['ReadWriteOnce']
>>> openshift_logging_storage_nfs_directory=/exports
>>> openshift_logging_storage_nfs_options='*(rw,root_squash)'
>>> openshift_logging_storage_volume_name=logging
>>> openshift_logging_storage_volume_size=10Gi
>>> openshift_logging_storage_labels={'storage': 'logging'}
>>>
>>> and run:
>>> ansible-playbook openshift-ansible/playbooks/by
>>> o/openshift-cluster/openshift-logging.yml
>>>
>>> Logging installs fine, and is running fine. Kibana shows logs.
>>> But look at wha

Re: Issues with logging and metrics on Origin 3.7

2018-01-08 Thread Tim Dudgeon

Ah, so that makes more sense.

So can I define the persistence properties (e.g. using nfs) in the 
inventory file, but specify 'openshift_metrics_install_metrics=false' 
and then run the byo/config.yml playbook? Will that create the PVs, but 
not deploy metrics? Then I can later run 
byo/openshift-cluster/openshift-metrics.yml to actually deploy the metrics.


The reason I'm doing this in 2 stages is that I sometimes hit 'Unable to 
allocate memory' problems when trying to deploy everything with 
byo/config.yml (possibly due to the 'forks' setting in ansible.cfg).



On 08/01/18 17:49, Eric Wolinetz wrote:
I think the issue you're seeing stems from the fact that the logging 
and metrics playbooks do not create their own PVs. That is handled by 
the cluster install playbook.
The logging and metrics playbooks only create the PVCs that their 
objects may require (unless ephemeral storage is configured).


I admit, the naming of the variables makes that confusing; however, it 
is described in our docs umbrella'd under the advanced install section 
which uses the cluster playbook...

https://docs.openshift.com/container-platform/3.7/install_config/install/advanced_install.html#advanced-install-cluster-metrics

On Mon, Jan 8, 2018 at 11:22 AM, Tim Dudgeon <tdudgeon...@gmail.com 
<mailto:tdudgeon...@gmail.com>> wrote:


On 08/01/18 16:51, Luke Meyer wrote:



On Thu, Jan 4, 2018 at 10:39 AM, Tim Dudgeon
<tdudgeon...@gmail.com <mailto:tdudgeon...@gmail.com>> wrote:

I'm hitting a number of issues with installing logging and
metrics on Origin 3.7.
This is using Centos7 hosts, the release-3.7 branch of
openshift-ansible and NFS for persistent storage.

I first do a minimal deploy with logging and metrics turned off.
This goes fine. On the NFS server I see various volumes
exported under /exports for logging, metrics, prometheus,
even though these are not deployed, but that's fine, they
are there if they become needed.
As expected there are no PVs related to metrics and logging.

So I try to install metrics. I add this to the inventory file:

openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
openshift_metrics_storage_labels={'storage': 'metrics'}

and run:

ansible-playbook
openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml

All seems to install OK, but metrics can't start, and it
turns out that no PV is created so the PVC needed by Cassandra
can't be satisfied.
So I manually create the PV using this definition:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: metrics-pv
  labels:
    storage: metrics
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /exports/metrics
    server: nfsserver

Now the PVC is satisfied and metrics can be started (though
pods may need to be bounced because they have timed out).

ISSUE 1: why does the metrics PV not get created?


So now on to trying to install logging. The approach is
similar. Add this to the inventory file:

openshift_logging_install_logging=true
openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_nfs_options='*(rw,root_squash)'
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=10Gi
openshift_logging_storage_labels={'storage': 'logging'}

and run:
ansible-playbook
openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml

Logging installs fine, and is running fine. Kibana shows logs.
But look at what has been installed and there are no PVs or
PVs for logging. It seems it has  ignored the instructions to
use NFS and and deployed using ephemeral storage.

ISSUE 2: why does the persistence definitions get ignored?


I'm not entirely sure that under kind=nfs it's *supposed* to
create a PVC. Might just directly mount the volume.

One thing to check: did you set up a host in the [nfs] group in
your inventory?

Yes, there is an nfs server, and it's working fine (e.g. for the
docker registry)



And finally, looking at the metrics and logging images on
Docker Hub there are 

Re: Issues with logging and metrics on Origin 3.7

2018-01-08 Thread Tim Dudgeon

On 08/01/18 16:51, Luke Meyer wrote:



On Thu, Jan 4, 2018 at 10:39 AM, Tim Dudgeon <tdudgeon...@gmail.com 
<mailto:tdudgeon...@gmail.com>> wrote:


I'm hitting a number of issues with installing logging and metrics
on Origin 3.7.
This is using Centos7 hosts, the release-3.7 branch of
openshift-ansible and NFS for persistent storage.

I first do a minimal deploy with logging and metrics turned off.
This goes fine. On the NFS server I see various volumes exported
under /exports for logging, metrics, prometheus, even though
these are not deployed, but that's fine, they are there if they
become needed.
As expected there are no PVs related to metrics and logging.

So I try to install metrics. I add this to the inventory file:

openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
openshift_metrics_storage_labels={'storage': 'metrics'}

and run:

ansible-playbook
openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml

All seems to install OK, but metrics can't start, and it turns out
that no PV is created so the PVC needed by Cassandra can't be
satisfied.
So I manually create the PV using this definition:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: metrics-pv
  labels:
    storage: metrics
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /exports/metrics
    server: nfsserver

Now the PVC is satisfied and metrics can be started (though pods
may need to be bounced because they have timed out).

ISSUE 1: why does the metrics PV not get created?


So now on to trying to install logging. The approach is similar.
Add this to the inventory file:

openshift_logging_install_logging=true
openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_nfs_options='*(rw,root_squash)'
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=10Gi
openshift_logging_storage_labels={'storage': 'logging'}

and run:
ansible-playbook
openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml

Logging installs fine, and is running fine. Kibana shows logs.
But look at what has been installed and there are no PVs or PVs
    for logging. It seems it has  ignored the instructions to use NFS
and and deployed using ephemeral storage.

ISSUE 2: why does the persistence definitions get ignored?


I'm not entirely sure that under kind=nfs it's *supposed* to create a 
PVC. Might just directly mount the volume.


One thing to check: did you set up a host in the [nfs] group in your 
inventory?
Yes, there is an nfs server, and it's working fine (e.g. for the docker 
registry)



And finally, looking at the metrics and logging images on Docker
Hub there are none with
v3.7.0 or v3.7 tags. The only tag related to 3.7 is v3.7.0-rc.0.
For example look here:
https://hub.docker.com/r/openshift/origin-metrics-hawkular-metrics/tags/
<https://hub.docker.com/r/openshift/origin-metrics-hawkular-metrics/tags/>
But for other openshift components there is a v3.7.0 tag present.
Without specifying any particular tag to use for metrics or
logging it seems you get 'latest' installed.

ISSUE 3: is 3.7 officially released yet (there's no docs for this
here either: https://docs.openshift.org/index.html
<https://docs.openshift.org/index.html>)?



3.7 is released. Seems like those dockerhub images (tags) got lost in 
the shuffle though.

OK. They will presumably appear sometime soon?
What about docs for 3.7? https://docs.openshift.org/index.html


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Issues with logging and metrics on Origin 3.7

2018-01-04 Thread Tim Dudgeon
I'm hitting a number of issues with installing logging and metrics on 
Origin 3.7.
This is using Centos7 hosts, the release-3.7 branch of openshift-ansible 
and NFS for persistent storage.


I first do a minimal deploy with logging and metrics turned off.
This goes fine. On the NFS server I see various volumes exported under 
/exports for logging, metrics, prometheus, even though these are not 
deployed, but that's fine, they are there if they become needed.

As expected there are no PVs related to metrics and logging.

So I try to install metrics. I add this to the inventory file:

openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
openshift_metrics_storage_labels={'storage': 'metrics'}

and run:

ansible-playbook 
openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml


All seems to install OK, but metrics can't start, and it turns out that 
no PV is created so the PVC needed by Cassandra can't be satisfied.

So I manually create the PV using this definition:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: metrics-pv
  labels:
    storage: metrics
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /exports/metrics
    server: nfsserver

Now the PVC is satisfied and metrics can be started (though pods may 
need to be bounced because they have timed out).


ISSUE 1: why does the metrics PV not get created?


So now on to trying to install logging. The approach is similar. Add 
this to the inventory file:


openshift_logging_install_logging=true
openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_nfs_options='*(rw,root_squash)'
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=10Gi
openshift_logging_storage_labels={'storage': 'logging'}

and run:
ansible-playbook 
openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml


Logging installs fine, and is running fine. Kibana shows logs.
But look at what has been installed and there are no PVs or PVCs for 
logging. It seems it has ignored the instructions to use NFS and has 
deployed using ephemeral storage.


ISSUE 2: why do the persistence definitions get ignored?


And finally, looking at the metrics and logging images on Docker Hub 
there are none with
v3.7.0 or v3.7 tags. The only tag related to 3.7 is v3.7.0-rc.0. For 
example look here:

https://hub.docker.com/r/openshift/origin-metrics-hawkular-metrics/tags/
But for other openshift components there is a v3.7.0 tag present.
Without specifying any particular tag to use for metrics or logging it 
seems you get 'latest' installed.


ISSUE 3: is 3.7 officially released yet (there's no docs for this here 
either: https://docs.openshift.org/index.html)?


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Regex in logging curator settings

2017-11-27 Thread Jeff Cantrill
Create an RFE for your request at either
https://trello.com/b/oJbshSIs/logging-and-metrics or
https://bugzilla.redhat.com/
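
Until something like that exists, a rough sketch of the kind of one-liner
Bahho describes below, regenerating the per-project settings from the live
project list (project pattern, retention and configmap name/key are
illustrative; verify them against your install):

oc get projects -o name | sed 's|.*/||' | grep '^myapp-' | while read p; do
  printf '%s:\n  delete:\n    days: 30\n' "$p"
done > config.yaml

oc -n logging create configmap logging-curator --from-file=config.yaml \
  --dry-run -o yaml | oc replace -f -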

On Mon, Nov 27, 2017 at 9:32 AM, bahhooo <bah...@gmail.com> wrote:

> Hello all,
>
> Is there a reason why regexes are not allowed in the curator settings?
>
> I would like to delete some indices according to the regular expressions I
> provide, so that I am not forced to enter individual project names into the
> configs.
>
> Right now I create a setting yaml with a one-liner and add it to the
> configmap. But whenever new projects are added to the cluster I will have
> to maintain the list manually.
>
> Anybody having a similar issue?
>
>
> Best,
> Bahho
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging seems to be working, but no logs are collected

2017-11-03 Thread Louis Santillan
Tim,

This KCS may also be of use to you [0].

[0] https://access.redhat.com/solutions/3220401

___

LOUIS P. SANTILLAN

SENIOR CONSULTANT, OPENSHIFT, MIDDLEWARE & DEVOPS

Red Hat Consulting, NA US WEST <https://www.redhat.com/>

lpsan...@gmail.com   M: 3236334854
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>

On Thu, Nov 2, 2017 at 9:00 AM, Rich Megginson <rmegg...@redhat.com> wrote:

> On 11/02/2017 02:01 AM, Tim Dudgeon wrote:
>
>>
>> Noriko, That fixed it.
>> There was no filter-post-z-* file and the <label @OUTPUT> and </label>
>> tags were present.
>> After removing those tags and restarting the fluentd pods logs are
>> getting pushed to ES.
>>
>> So the question is how to avoid this problem in the first place?
>>
>>
> Upstream logging is a bit of a mess right now.
>
> Some time ago we decoupled the configuration of logging from the
> implementation.  That is, we moved all of the configuration into
> openshift-ansible.  That meant we needed to either release
> openshift-ansible packages and logging images in absolute lock-step (which
> didn't happen - in fact we never released upstream logging images for 3.6.x
> - this is now being addressed - https://github.com/openshift/o
> rigin-aggregated-logging/pull/758), or we need to ensure that
> openshift-ansible logging changes did not depend on the image version, and
> vice versa (this also didn't happen - we released changes to the logging
> images that assumed they would only ever be deployed with a specific
> version of openshift-ansible, instead of adopting a more "defensive
> programming" style).
>
>
> This was a simple ansible install using this in the inventory file:
>>
>> openshift_logging_image_version=v3.6.1
>> openshift_hosted_logging_deploy=true
>> openshift_logging_fluentd_journal_read_from_head=false
>>
>> (note, the image tag for the ES deployment currently needs to be changed
>> to :latest for ES to start, but that's a separate issue).
>>
>>
>> On 01/11/2017 21:00, Noriko Hosoi wrote:
>>
>>> On 11/01/2017 12:56 PM, Rich Megginson wrote:
>>>
>>>> On 11/01/2017 01:18 PM, Tim Dudgeon wrote:
>>>>
>>>>> More data on this.
>>>>> Just to confirm that the journal on the node is receiving events:
>>>>>
>>>>> sudo journalctl -n 25
>>>>> -- Logs begin at Wed 2017-11-01 14:24:08 UTC, end at Wed 2017-11-01
>>>>> 19:15:15 UTC. --
>>>>> Nov 01 19:14:23 master-1.openstacklocal origin-master[15148]: I1101
>>>>> 19:14:23.286735   15148 rest.go:324] Starting watch for 
>>>>> /api/v1/configmaps,
>>>>> rv=1940 labels= fields
>>>>> Nov 01 19:14:24 master-1.openstacklocal origin-master[15148]: I1101
>>>>> 19:14:24.288497   15148 rest.go:324] Starting watch for /api/v1/nodes,
>>>>> rv=6595 labels= fields= tim
>>>>> Nov 01 19:14:29 master-1.openstacklocal origin-master[15148]: I1101
>>>>> 19:14:29.283528   15148 rest.go:324] Starting watch for
>>>>> /apis/extensions/v1beta1/ingresses, rv=4 l
>>>>> Nov 01 19:14:36 master-1.openstacklocal origin-master[15148]: I1101
>>>>> 19:14:36.566696   15148 rest.go:324] Starting watch for /api/v1/pods,
>>>>> rv=6028 labels= fields= time
>>>>> Nov 01 19:14:40 master-1.openstacklocal origin-master[15148]: I1101
>>>>> 19:14:40.284191   15148 rest.go:324] Starting watch for
>>>>> /api/v1/persistentvolumeclaims, rv=1606 la
>>>>> Nov 01 19:14:43 master-1.openstacklocal origin-master[15148]: I1101
>>>>> 19:14:43.291205   15148 rest.go:324] Starting watch for /apis/
>>>>> authorization.openshift.io/v1/policy
>>>>> Nov 01 19:14:43 master-1.openstacklocal origin-master[15148]: I1101
>>>>> 19:14:43.34   15148 rest.go:324] Starting watch for
>>>>> /oapi/v1/hostsubnets, rv=1054 labels= fiel
>>>>> Nov 01 19:14:47 master-1.openstacklocal origin-node[20672]: I1101
>>>>> 19:14:47.255576   20672 operation_generator.go:609] MountVolume.SetUp
>>>>> succeeded for volume "kubernet
>>>>> Nov 01 19:14:47 master-1.openstacklocal origin-node[20672]: I1101
>>>>> 19:14:47.256440   20672 operation_generator.go:609] MountVolume.SetUp
>>>>> succeeded for volume "kubernet
>>>>> Nov 01 19:14:47 master-1.openstacklocal origin-node[20672]: I1101
>>>>> 19:14:47.258455   20672 operation_generator.g

Re: Logging seems to be working, but no logs are collected

2017-11-02 Thread Rich Megginson

On 11/02/2017 02:01 AM, Tim Dudgeon wrote:


Noriko, That fixed it.
There was no filter-post-z-* file and the <label @OUTPUT> and </label> 
tags were present.
After removing those tags and restarting the fluentd pods logs are 
getting pushed to ES.


So the question is how to avoid this problem in the first place?



Upstream logging is a bit of a mess right now.

Some time ago we decoupled the configuration of logging from the 
implementation.  That is, we moved all of the configuration into 
openshift-ansible.  That meant we needed to either release 
openshift-ansible packages and logging images in absolute lock-step 
(which didn't happen - in fact we never released upstream logging images 
for 3.6.x - this is now being addressed - 
https://github.com/openshift/origin-aggregated-logging/pull/758), or we 
need to ensure that openshift-ansible logging changes did not depend on 
the image version, and vice versa (this also didn't happen - we released 
changes to the logging images that assumed they would only ever be 
deployed with a specific version of openshift-ansible, instead of 
adopting a more "defensive programming" style).



This was a simple ansible install using this in the inventory file:

openshift_logging_image_version=v3.6.1
openshift_hosted_logging_deploy=true
openshift_logging_fluentd_journal_read_from_head=false

(note, the image tag for the ES deployment currently needs to be 
changed to :latest for ES to start, but that's a separate issue).



On 01/11/2017 21:00, Noriko Hosoi wrote:

On 11/01/2017 12:56 PM, Rich Megginson wrote:

On 11/01/2017 01:18 PM, Tim Dudgeon wrote:

More data on this.
Just to confirm that the journal on the node is receiving events:

sudo journalctl -n 25
-- Logs begin at Wed 2017-11-01 14:24:08 UTC, end at Wed 2017-11-01 
19:15:15 UTC. --
Nov 01 19:14:23 master-1.openstacklocal origin-master[15148]: I1101 
19:14:23.286735   15148 rest.go:324] Starting watch for 
/api/v1/configmaps, rv=1940 labels= fields
Nov 01 19:14:24 master-1.openstacklocal origin-master[15148]: I1101 
19:14:24.288497   15148 rest.go:324] Starting watch for 
/api/v1/nodes, rv=6595 labels= fields= tim
Nov 01 19:14:29 master-1.openstacklocal origin-master[15148]: I1101 
19:14:29.283528   15148 rest.go:324] Starting watch for 
/apis/extensions/v1beta1/ingresses, rv=4 l
Nov 01 19:14:36 master-1.openstacklocal origin-master[15148]: I1101 
19:14:36.566696   15148 rest.go:324] Starting watch for 
/api/v1/pods, rv=6028 labels= fields= time
Nov 01 19:14:40 master-1.openstacklocal origin-master[15148]: I1101 
19:14:40.284191   15148 rest.go:324] Starting watch for 
/api/v1/persistentvolumeclaims, rv=1606 la
Nov 01 19:14:43 master-1.openstacklocal origin-master[15148]: I1101 
19:14:43.291205   15148 rest.go:324] Starting watch for 
/apis/authorization.openshift.io/v1/policy
Nov 01 19:14:43 master-1.openstacklocal origin-master[15148]: I1101 
19:14:43.34   15148 rest.go:324] Starting watch for 
/oapi/v1/hostsubnets, rv=1054 labels= fiel
Nov 01 19:14:47 master-1.openstacklocal origin-node[20672]: I1101 
19:14:47.255576   20672 operation_generator.go:609] 
MountVolume.SetUp succeeded for volume "kubernet
Nov 01 19:14:47 master-1.openstacklocal origin-node[20672]: I1101 
19:14:47.256440   20672 operation_generator.go:609] 
MountVolume.SetUp succeeded for volume "kubernet
Nov 01 19:14:47 master-1.openstacklocal origin-node[20672]: I1101 
19:14:47.258455   20672 operation_generator.go:609] 
MountVolume.SetUp succeeded for volume "kubernet
Nov 01 19:14:48 master-1.openstacklocal origin-master[15148]: I1101 
19:14:48.291988   15148 rest.go:324] Starting watch for 
/apis/authorization.openshift.io/v1/cluste
Nov 01 19:14:51 master-1.openstacklocal sshd[46103]: Invalid user 
admin from 118.89.45.36 port 17929
Nov 01 19:14:51 master-1.openstacklocal sshd[46103]: 
input_userauth_request: invalid user admin [preauth]
Nov 01 19:14:52 master-1.openstacklocal sshd[46103]: Connection 
closed by 118.89.45.36 port 17929 [preauth]
Nov 01 19:14:56 master-1.openstacklocal origin-master[15148]: I1101 
19:14:56.206290   15148 rest.go:324] Starting watch for 
/api/v1/services, rv=2008 labels= fields=
Nov 01 19:14:57 master-1.openstacklocal origin-master[15148]: I1101 
19:14:57.559640   15148 rest.go:324] Starting watch for 
/api/v1/namespaces, rv=1845 labels= fields
Nov 01 19:14:59 master-1.openstacklocal origin-master[15148]: I1101 
19:14:59.275807   15148 rest.go:324] Starting watch for 
/api/v1/podtemplates, rv=4 labels= fields=
Nov 01 19:14:59 master-1.openstacklocal origin-master[15148]: I1101 
19:14:59.459554   15148 rest.go:324] Starting watch for 
/apis/storage.k8s.io/v1beta1/storageclasse
Nov 01 19:15:01 master-1.openstacklocal origin-master[15148]: I1101 
19:15:01.286182   15148 rest.go:324] Starting watch for 
/apis/extensions/v1beta1/replicasets, rv=4
Nov 01 19:15:06 master-1.openstacklocal origin-master[15148]: I1101 
19:15:06.270704   15148 rest.go:324] Starting watch for 
/apis/security.openshift.io/v1/se

Re: Logging seems to be working, but no logs are collected

2017-11-02 Thread Tim Dudgeon
ese events?


I think it has to do with this:

2017-11-01 16:59:47 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:49 + [warn]: no patterns matched 
tag="kubernetes.journal.container"


That's very bad.

Noriko, what is the fix for the missing @OUTPUT section?


I have 2 questions.

In the fluentd pod:

oc rsh $FLUENTDPOD

Do we have a filter-post-z-* config file in /etc/fluent/configs.d?
# ls /etc/fluent/configs.d/openshift/filter-post-z-*
/etc/fluent/configs.d/openshift/filter-post-z-retag-two.conf

Also, what does the fluentd configmap look like?
oc edit configmap $FLUENTDPOD
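
(If that name does not resolve to a configmap, note that the installer
normally names it logging-fluentd rather than after the pod -- assuming a
default ansible install into the logging project, this dumps it:

    oc -n logging get configmap logging-fluentd -o yaml
)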

Does the configmap have <label @OUTPUT> as follows?
8<-
    <label @INGRESS>
    ## filters
  @include configs.d/openshift/filter-pre-*.conf
  @include configs.d/openshift/filter-retag-journal.conf
  @include configs.d/openshift/filter-k8s-meta.conf
  @include configs.d/openshift/filter-kibana-transform.conf
  @include configs.d/openshift/filter-k8s-flatten-hash.conf
  @include configs.d/openshift/filter-k8s-record-transform.conf
  @include configs.d/openshift/filter-syslog-record-transform.conf
  @include configs.d/openshift/filter-viaq-data-model.conf
  @include configs.d/openshift/filter-post-*.conf
    ##
    </label>

    <label @OUTPUT>
    ## matches
  @include configs.d/openshift/output-pre-*.conf
  @include configs.d/openshift/output-operations.conf
  @include configs.d/openshift/output-applications.conf
  # no post - applications.conf matches everything left
    ##
    </label>
8<-

If there is no filter-post-z-* config file in 
/etc/fluent/configs.d/openshift, please remove </label> and <label @OUTPUT> as follows:

8<-
    <label @INGRESS>
    ## filters
  @include configs.d/openshift/filter-pre-*.conf
  @include configs.d/openshift/filter-retag-journal.conf
  @include configs.d/openshift/filter-k8s-meta.conf
  @include configs.d/openshift/filter-kibana-transform.conf
  @include configs.d/openshift/filter-k8s-flatten-hash.conf
  @include configs.d/openshift/filter-k8s-record-transform.conf
  @include configs.d/openshift/filter-syslog-record-transform.conf
  @include configs.d/openshift/filter-viaq-data-model.conf
  @include configs.d/openshift/filter-post-*.conf
    ##

    ## matches
  @include configs.d/openshift/output-pre-*.conf
  @include configs.d/openshift/output-operations.conf
  @include configs.d/openshift/output-applications.conf
  # no post - applications.conf matches everything left
    ##
    </label>
8<-

If you have the filter-post-z-* config file in 
/etc/fluent/configs.d/openshift and do not have </label> and <label @OUTPUT>, please add them.  (I don't think that's the case since the 
fluentd run.sh does not install filter-post-z-* unless <label @OUTPUT> 
is found in the configmap.)
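
(A quick way to see which of the two layouts a running collector actually
has -- assuming you can rsh into one of the fluentd pods -- is:

    oc -n logging rsh logging-fluentd-h6f3h grep -n '@OUTPUT' /etc/fluent/fluent.conf
    oc -n logging rsh logging-fluentd-h6f3h ls /etc/fluent/configs.d/openshift/

If the configmap declares <label @OUTPUT> but the filter-post-z-retag file
never got installed, records reach the end of @INGRESS with nothing to match
them, which would line up with the "no patterns matched" warnings above.)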


Thanks,
--noriko





On 01/11/2017 18:16, Tim Dudgeon wrote:

Correction/update on this.

The `journalctl -n 100` command runs OK on the host but not inside
the pod.


The file `/var/log/journal.pos` is present both on the host and in 
the pod.


Tim


On 01/11/2017 17:28, Tim Dudgeon wrote:
So I've tried this and a few other variants but not made any 
progress.

The issue seems to be that there are no journal logs?

# journalctl -n 100
No journal files were found.
-- No entries --

Even though:

# cat /var/log/journal.pos
s=8da3038f46274f8f80cadbf839d487a5;i=45bd;b=80a3902da560465e8799ccf3e6fb2ef7;m=27729aac6;t=55cef2678535f;x=42fa04d62b52d49fsh-4.2 



And in the logs of the pod I see this:

$  oc logs logging-fluentd-h6f3h
umounts of dead containers will fail. Ignoring...
umount: 
/var/lib/docker/containers/30effb9ff35fc74b9bf37ebeeb5d0d61b515a55e4f3ae52e9bb618ac55704d73/shm: 
not mounted
umount: 
/var/lib/docker/containers/39b5c1572e79dd2e698917a7116c6110d2c6eb0a6761142a6e718904f6c43022/shm: 
not mounted
umount: 
/var/lib/docker/containers/64c1c27537aa7441ded69a04c78f2f2ce60920fa6e4dc628637a19289b2ead6a/shm: 
not mounted
umount: 
/var/lib/docker/containers/7b8564902f011522917c6cffd8a39133cabb8588229f2836c9fbcee95960ac78/shm: 
not mounted
umount: 
/var/lib/docker/containers/b85e6d1123da047a7ffe679edfb71376267ef27e9525c7097f3fd6668acd110e/shm: 
not mounted
umount: 
/var/lib/docker/containers/c02f10b8dcf69979a95305a76c2f570aaf37fb9c2c0cad6893ed1822f7f24274/shm: 
not mounted
umount: 
/var/lib/docker/containers/c30c4f0f34b2470ef5280c85a6db3910b143707df997ad6ee6ed2c2208009a70/shm: 
not mounted
umount: 
/var/lib/docker/containers/c67c4e5e89b5f41c593ba2e538671821b6b43936962e8d49785b292644c4a031/shm: 
not mounted
2017-11-01 16:59:42 + [info]: reading config file 
path="/etc/fluent/fluent.conf"
2017-11-01 16:59:43 + [w

Re: Logging seems to be working, but no logs are collected

2017-11-01 Thread Noriko Hosoi
FLUENTDPOD

Does the configmap have <label @OUTPUT> as follows?
8<-
    <label @INGRESS>
    ## filters
  @include configs.d/openshift/filter-pre-*.conf
  @include configs.d/openshift/filter-retag-journal.conf
  @include configs.d/openshift/filter-k8s-meta.conf
  @include configs.d/openshift/filter-kibana-transform.conf
  @include configs.d/openshift/filter-k8s-flatten-hash.conf
  @include configs.d/openshift/filter-k8s-record-transform.conf
  @include configs.d/openshift/filter-syslog-record-transform.conf
  @include configs.d/openshift/filter-viaq-data-model.conf
  @include configs.d/openshift/filter-post-*.conf
    ##
    </label>

    <label @OUTPUT>
    ## matches
  @include configs.d/openshift/output-pre-*.conf
  @include configs.d/openshift/output-operations.conf
  @include configs.d/openshift/output-applications.conf
  # no post - applications.conf matches everything left
    ##
    </label>
8<-

If there is no filter-post-z-* config file in 
/etc/fluent/configs.d/openshift, please remove </label> and <label @OUTPUT> as follows:

8<-
    <label @INGRESS>
    ## filters
  @include configs.d/openshift/filter-pre-*.conf
  @include configs.d/openshift/filter-retag-journal.conf
  @include configs.d/openshift/filter-k8s-meta.conf
  @include configs.d/openshift/filter-kibana-transform.conf
  @include configs.d/openshift/filter-k8s-flatten-hash.conf
  @include configs.d/openshift/filter-k8s-record-transform.conf
  @include configs.d/openshift/filter-syslog-record-transform.conf
  @include configs.d/openshift/filter-viaq-data-model.conf
  @include configs.d/openshift/filter-post-*.conf
    ##

    ## matches
  @include configs.d/openshift/output-pre-*.conf
  @include configs.d/openshift/output-operations.conf
  @include configs.d/openshift/output-applications.conf
  # no post - applications.conf matches everything left
    ##
    </label>
8<-

If you have the filter-post-z-* config file in 
/etc/fluent/configs.d/openshift and do not have </label> and <label @OUTPUT>, please add them.  (I don't think that's the case since the 
fluentd run.sh does not install filter-post-z-* unless <label @OUTPUT> 
is found in the configmap.)


Thanks,
--noriko





On 01/11/2017 18:16, Tim Dudgeon wrote:

Correction/update on this.

The `journalctl -n 100` command runs OK on the host but not inside
the pod.


The file `/var/log/journal.pos` is present both on the host and in 
the pod.


Tim


On 01/11/2017 17:28, Tim Dudgeon wrote:

So I've tried this and a few other variants but not made any progress.
The issue seems to be that there are no journal logs?

# journalctl -n 100
No journal files were found.
-- No entries --

Even though:

# cat /var/log/journal.pos
s=8da3038f46274f8f80cadbf839d487a5;i=45bd;b=80a3902da560465e8799ccf3e6fb2ef7;m=27729aac6;t=55cef2678535f;x=42fa04d62b52d49fsh-4.2 



And in the logs of the pod I see this:

$  oc logs logging-fluentd-h6f3h
umounts of dead containers will fail. Ignoring...
umount: 
/var/lib/docker/containers/30effb9ff35fc74b9bf37ebeeb5d0d61b515a55e4f3ae52e9bb618ac55704d73/shm: 
not mounted
umount: 
/var/lib/docker/containers/39b5c1572e79dd2e698917a7116c6110d2c6eb0a6761142a6e718904f6c43022/shm: 
not mounted
umount: 
/var/lib/docker/containers/64c1c27537aa7441ded69a04c78f2f2ce60920fa6e4dc628637a19289b2ead6a/shm: 
not mounted
umount: 
/var/lib/docker/containers/7b8564902f011522917c6cffd8a39133cabb8588229f2836c9fbcee95960ac78/shm: 
not mounted
umount: 
/var/lib/docker/containers/b85e6d1123da047a7ffe679edfb71376267ef27e9525c7097f3fd6668acd110e/shm: 
not mounted
umount: 
/var/lib/docker/containers/c02f10b8dcf69979a95305a76c2f570aaf37fb9c2c0cad6893ed1822f7f24274/shm: 
not mounted
umount: 
/var/lib/docker/containers/c30c4f0f34b2470ef5280c85a6db3910b143707df997ad6ee6ed2c2208009a70/shm: 
not mounted
umount: 
/var/lib/docker/containers/c67c4e5e89b5f41c593ba2e538671821b6b43936962e8d49785b292644c4a031/shm: 
not mounted
2017-11-01 16:59:42 + [info]: reading config file 
path="/etc/fluent/fluent.conf"
2017-11-01 16:59:43 + [warn]: 'block' action stops input 
process until the buffer full is resolved. Check your pipeline this 
action is fit or not
2017-11-01 16:59:43 + [warn]: 'block' action stops input 
process until the buffer full is resolved. Check your pipeline this 
action is fit or not
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017

Re: Logging seems to be working, but no logs are collected

2017-11-01 Thread Rich Megginson
e fluentd's configmap look like?
oc edit configmap $FLUENTDPOD

Does the configmap have <label @OUTPUT> as follows?
8<-
    <label @INGRESS>
    ## filters
  @include configs.d/openshift/filter-pre-*.conf
  @include configs.d/openshift/filter-retag-journal.conf
  @include configs.d/openshift/filter-k8s-meta.conf
  @include configs.d/openshift/filter-kibana-transform.conf
  @include configs.d/openshift/filter-k8s-flatten-hash.conf
  @include configs.d/openshift/filter-k8s-record-transform.conf
  @include configs.d/openshift/filter-syslog-record-transform.conf
  @include configs.d/openshift/filter-viaq-data-model.conf
  @include configs.d/openshift/filter-post-*.conf
    ##
    </label>

    <label @OUTPUT>
    ## matches
  @include configs.d/openshift/output-pre-*.conf
  @include configs.d/openshift/output-operations.conf
  @include configs.d/openshift/output-applications.conf
  # no post - applications.conf matches everything left
    ##
    </label>
8<-

If there is no filter-post-z-* config file in 
/etc/fluent/configs.d/openshift, please remove </label> and <label @OUTPUT> as follows:

8<-
    <label @INGRESS>
    ## filters
  @include configs.d/openshift/filter-pre-*.conf
  @include configs.d/openshift/filter-retag-journal.conf
  @include configs.d/openshift/filter-k8s-meta.conf
  @include configs.d/openshift/filter-kibana-transform.conf
  @include configs.d/openshift/filter-k8s-flatten-hash.conf
  @include configs.d/openshift/filter-k8s-record-transform.conf
  @include configs.d/openshift/filter-syslog-record-transform.conf
  @include configs.d/openshift/filter-viaq-data-model.conf
  @include configs.d/openshift/filter-post-*.conf
    ##

    ## matches
  @include configs.d/openshift/output-pre-*.conf
  @include configs.d/openshift/output-operations.conf
  @include configs.d/openshift/output-applications.conf
  # no post - applications.conf matches everything left
    ##
    </label>
8<-

If you have the filter-post-z-* config file in 
/etc/fluent/configs.d/openshift and do not have </label> and <label @OUTPUT>, please add them.  (I don't think that's the case since the 
fluentd run.sh does not install filter-post-z-* unless <label @OUTPUT> 
is found in the configmap.)


Thanks,
--noriko






On 01/11/2017 18:16, Tim Dudgeon wrote:

Correction/update on this.

The `journalctl -n 100` command runs OK on the host but not inside
the pod.


The file `/var/log/journal.pos` is present both on the host and in 
the pod.


Tim


On 01/11/2017 17:28, Tim Dudgeon wrote:

So I've tried this and a few other variants but not made any progress.
The issue seems to be that there are no journal logs?

# journalctl -n 100
No journal files were found.
-- No entries --

Even though:

# cat /var/log/journal.pos
s=8da3038f46274f8f80cadbf839d487a5;i=45bd;b=80a3902da560465e8799ccf3e6fb2ef7;m=27729aac6;t=55cef2678535f;x=42fa04d62b52d49fsh-4.2 



And in the logs of the pod I see this:

$  oc logs logging-fluentd-h6f3h
umounts of dead containers will fail. Ignoring...
umount: 
/var/lib/docker/containers/30effb9ff35fc74b9bf37ebeeb5d0d61b515a55e4f3ae52e9bb618ac55704d73/shm: 
not mounted
umount: 
/var/lib/docker/containers/39b5c1572e79dd2e698917a7116c6110d2c6eb0a6761142a6e718904f6c43022/shm: 
not mounted
umount: 
/var/lib/docker/containers/64c1c27537aa7441ded69a04c78f2f2ce60920fa6e4dc628637a19289b2ead6a/shm: 
not mounted
umount: 
/var/lib/docker/containers/7b8564902f011522917c6cffd8a39133cabb8588229f2836c9fbcee95960ac78/shm: 
not mounted
umount: 
/var/lib/docker/containers/b85e6d1123da047a7ffe679edfb71376267ef27e9525c7097f3fd6668acd110e/shm: 
not mounted
umount: 
/var/lib/docker/containers/c02f10b8dcf69979a95305a76c2f570aaf37fb9c2c0cad6893ed1822f7f24274/shm: 
not mounted
umount: 
/var/lib/docker/containers/c30c4f0f34b2470ef5280c85a6db3910b143707df997ad6ee6ed2c2208009a70/shm: 
not mounted
umount: 
/var/lib/docker/containers/c67c4e5e89b5f41c593ba2e538671821b6b43936962e8d49785b292644c4a031/shm: 
not mounted
2017-11-01 16:59:42 + [info]: reading config file 
path="/etc/fluent/fluent.conf"
2017-11-01 16:59:43 + [warn]: 'block' action stops input 
process until the buffer full is resolved. Check your pipeline this 
action is fit or not
2017-11-01 16:59:43 + [warn]: 'block' action stops input 
process until the buffer full is resolved. Check your pipeline this 
action is fit or not
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patt

Re: confusion over storage for logging

2017-11-01 Thread Tim Dudgeon
Yes I can use 'hostPath/hostMount' and (I think) I understand what that 
means and how to do so.


My point was that at this stage this is not such a big concern for me as 
I'm trying to understand *how* to set up a HA environment as opposed to 
actually needing to do so :-)



On 01/11/2017 19:54, Rich Megginson wrote:

On 11/01/2017 12:34 PM, Tim Dudgeon wrote:

Issue: https://github.com/openshift/openshift-docs/issues/6080

I'll stick with default ephemeral storage for now until the situation 
becomes clearer.


You can't use hostPath/hostMount storage?  I take "ephemeral" to mean 
"private to the container that will go away when the pod/container 
goes away" i.e. not persistent.  You can use local disk storage to 
achieve persistence without using NFS.





On 01/11/2017 17:21, Rich Megginson wrote:

On 11/01/2017 10:50 AM, Tim Dudgeon wrote:

I am confused over persistent storage for logging (elasticsearch).

The latest advanced installer docs [1] specifically describes how 
to define using NFS for persistent storage, but the docs for 
"aggregating container logs" [2] says that NFS should not be used 
(except in one particular scenario) and seems to suggest that the 
only really suitable scenario is to use a volume (disk) directly 
mounted to each logging node.


Could someone clarify the situation?


Elasticsearch says do not use NFS. 
https://www.elastic.co/guide/en/elasticsearch/guide/2.x/indexing-performance.html#_storage


We should make that clear in the docs.

Please file a doc bug.



Tim


[1] 
https://docs.openshift.org/latest/install_config/install/advanced_install.html#advanced-install-cluster-logging


[2] 
https://docs.openshift.org/latest/install_config/aggregate_logging.html#aggregated-elasticsearch


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users





___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging seems to be working, but no logs are collected

2017-11-01 Thread Tim Dudgeon

On 01/11/2017 19:56, Rich Megginson wrote:


I think it has to do with this:

2017-11-01 16:59:47 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:49 + [warn]: no patterns matched 
tag="kubernetes.journal.container"


That's very bad. 

Yes, that's the point I've been making for some time :-)

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging seems to be working, but no logs are collected

2017-11-01 Thread Rich Megginson
o be that there are no journal logs?

# journalctl -n 100
No journal files were found.
-- No entries --

Even though:

# cat /var/log/journal.pos
s=8da3038f46274f8f80cadbf839d487a5;i=45bd;b=80a3902da560465e8799ccf3e6fb2ef7;m=27729aac6;t=55cef2678535f;x=42fa04d62b52d49fsh-4.2 



And in the logs of the pod I see this:

$  oc logs logging-fluentd-h6f3h
umounts of dead containers will fail. Ignoring...
umount: 
/var/lib/docker/containers/30effb9ff35fc74b9bf37ebeeb5d0d61b515a55e4f3ae52e9bb618ac55704d73/shm: 
not mounted
umount: 
/var/lib/docker/containers/39b5c1572e79dd2e698917a7116c6110d2c6eb0a6761142a6e718904f6c43022/shm: 
not mounted
umount: 
/var/lib/docker/containers/64c1c27537aa7441ded69a04c78f2f2ce60920fa6e4dc628637a19289b2ead6a/shm: 
not mounted
umount: 
/var/lib/docker/containers/7b8564902f011522917c6cffd8a39133cabb8588229f2836c9fbcee95960ac78/shm: 
not mounted
umount: 
/var/lib/docker/containers/b85e6d1123da047a7ffe679edfb71376267ef27e9525c7097f3fd6668acd110e/shm: 
not mounted
umount: 
/var/lib/docker/containers/c02f10b8dcf69979a95305a76c2f570aaf37fb9c2c0cad6893ed1822f7f24274/shm: 
not mounted
umount: 
/var/lib/docker/containers/c30c4f0f34b2470ef5280c85a6db3910b143707df997ad6ee6ed2c2208009a70/shm: 
not mounted
umount: 
/var/lib/docker/containers/c67c4e5e89b5f41c593ba2e538671821b6b43936962e8d49785b292644c4a031/shm: 
not mounted
2017-11-01 16:59:42 + [info]: reading config file 
path="/etc/fluent/fluent.conf"
2017-11-01 16:59:43 + [warn]: 'block' action stops input process 
until the buffer full is resolved. Check your pipeline this action 
is fit or not
2017-11-01 16:59:43 + [warn]: 'block' action stops input process 
until the buffer full is resolved. Check your pipeline this action 
is fit or not
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:44 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:44 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:46 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:47 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:49 + [warn]: no patterns matched 
tag="kubernetes.journal.container"
2017-11-01 16:59:51 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:52 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:01:02 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:02:30 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:06:04 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:10:01 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:14:01 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:18:06 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:22:09 + [warn]: no patterns matched 
tag="journal.system"



This is really a basic centos7 image with the only modifications 
done by installing the packages required by openshift and then 
running the ansible installer.


Tim


On 31/10/2017 18:15, Rich Megginson wrote:
Very strange.  It would appear that fluentd was not able to keep up 
with the log rate to the journal for such an extent that the 
fluentd current cursor position was rotated away . . .


You can "reset" fluentd by shutting it down, then removing that 
cursor file.  That will tell fluentd to start reading from the tail 
of the journal.  but NOTE - THAT WILL LOSE ALL RECORDS CURRENTLY IN 
THE JOURNAL.  If you want to try to recover everything in the 
journal, then oc set env ds/logging-fluentd 
JOURNAL_READ_FROM_HEAD=true - but note that this may take several 
hours until you have recent records in Elasticsearch, depending on 
what is the log rate to the journal and how fast fluentd can keep up.


If you go the JOURNAL_READ_FROM_HEAD=true route, setting the env 
should trigger a redeployment of fluentd, so you should not have to 
restart/relabel.


oc label node --all --overwrite logging-infra-fluentd-
... wait for oc pods to report no logging-fluentd pods ...
rm -f /var/log/journal.pos
oc label node --all --overwrite logging-infra-fluentd=true

Then, monitor fluentd like this:

https://github.com/op

Re: confusion over storage for logging

2017-11-01 Thread Rich Megginson

On 11/01/2017 12:34 PM, Tim Dudgeon wrote:

Issue: https://github.com/openshift/openshift-docs/issues/6080

I'll stick with default ephemeral storage for now until the situation 
becomes clearer.


You can't use hostPath/hostMount storage?  I take "ephemeral" to mean 
"private to the container that will go away when the pod/container goes 
away" i.e. not persistent.  You can use local disk storage to achieve 
persistence without using NFS.
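
For a concrete example, a minimal hostPath PV for the logging claim to bind
to could look like the sketch below (size, path and name are purely
illustrative, and the ES pod then needs a node selector so it always lands
on the node that owns the directory):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: logging-es-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: /srv/logging-es

Saved to a file and created with `oc create -f <file>`, the PVC the
installer created for elasticsearch should bind to it as long as the size
and access mode line up.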





On 01/11/2017 17:21, Rich Megginson wrote:

On 11/01/2017 10:50 AM, Tim Dudgeon wrote:

I am confused over persistent storage for logging (elasticsearch).

The latest advanced installer docs [1] specifically describes how to 
define using NFS for persistent storage, but the docs for 
"aggregating container logs" [2] says that NFS should not be used 
(except in one particular scenario) and seems to suggest that the 
only really suitable scenario is to use a volume (disk) directly 
mounted to each logging node.


Could someone clarify the situation?


Elasticsearch says do not use NFS. 
https://www.elastic.co/guide/en/elasticsearch/guide/2.x/indexing-performance.html#_storage


We should make that clear in the docs.

Please file a doc bug.



Tim


[1] 
https://docs.openshift.org/latest/install_config/install/advanced_install.html#advanced-install-cluster-logging


[2] 
https://docs.openshift.org/latest/install_config/aggregate_logging.html#aggregated-elasticsearch


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging seems to be working, but no logs are collected

2017-11-01 Thread Tim Dudgeon

More data on this.
Just to confirm that the journal on the node is receiving events:

sudo journalctl -n 25
-- Logs begin at Wed 2017-11-01 14:24:08 UTC, end at Wed 2017-11-01 
19:15:15 UTC. --
Nov 01 19:14:23 master-1.openstacklocal origin-master[15148]: I1101 
19:14:23.286735   15148 rest.go:324] Starting watch for 
/api/v1/configmaps, rv=1940 labels= fields
Nov 01 19:14:24 master-1.openstacklocal origin-master[15148]: I1101 
19:14:24.288497   15148 rest.go:324] Starting watch for /api/v1/nodes, 
rv=6595 labels= fields= tim
Nov 01 19:14:29 master-1.openstacklocal origin-master[15148]: I1101 
19:14:29.283528   15148 rest.go:324] Starting watch for 
/apis/extensions/v1beta1/ingresses, rv=4 l
Nov 01 19:14:36 master-1.openstacklocal origin-master[15148]: I1101 
19:14:36.566696   15148 rest.go:324] Starting watch for /api/v1/pods, 
rv=6028 labels= fields= time
Nov 01 19:14:40 master-1.openstacklocal origin-master[15148]: I1101 
19:14:40.284191   15148 rest.go:324] Starting watch for 
/api/v1/persistentvolumeclaims, rv=1606 la
Nov 01 19:14:43 master-1.openstacklocal origin-master[15148]: I1101 
19:14:43.291205   15148 rest.go:324] Starting watch for 
/apis/authorization.openshift.io/v1/policy
Nov 01 19:14:43 master-1.openstacklocal origin-master[15148]: I1101 
19:14:43.34   15148 rest.go:324] Starting watch for 
/oapi/v1/hostsubnets, rv=1054 labels= fiel
Nov 01 19:14:47 master-1.openstacklocal origin-node[20672]: I1101 
19:14:47.255576   20672 operation_generator.go:609] MountVolume.SetUp 
succeeded for volume "kubernet
Nov 01 19:14:47 master-1.openstacklocal origin-node[20672]: I1101 
19:14:47.256440   20672 operation_generator.go:609] MountVolume.SetUp 
succeeded for volume "kubernet
Nov 01 19:14:47 master-1.openstacklocal origin-node[20672]: I1101 
19:14:47.258455   20672 operation_generator.go:609] MountVolume.SetUp 
succeeded for volume "kubernet
Nov 01 19:14:48 master-1.openstacklocal origin-master[15148]: I1101 
19:14:48.291988   15148 rest.go:324] Starting watch for 
/apis/authorization.openshift.io/v1/cluste
Nov 01 19:14:51 master-1.openstacklocal sshd[46103]: Invalid user admin 
from 118.89.45.36 port 17929
Nov 01 19:14:51 master-1.openstacklocal sshd[46103]: 
input_userauth_request: invalid user admin [preauth]
Nov 01 19:14:52 master-1.openstacklocal sshd[46103]: Connection closed 
by 118.89.45.36 port 17929 [preauth]
Nov 01 19:14:56 master-1.openstacklocal origin-master[15148]: I1101 
19:14:56.206290   15148 rest.go:324] Starting watch for 
/api/v1/services, rv=2008 labels= fields=
Nov 01 19:14:57 master-1.openstacklocal origin-master[15148]: I1101 
19:14:57.559640   15148 rest.go:324] Starting watch for 
/api/v1/namespaces, rv=1845 labels= fields
Nov 01 19:14:59 master-1.openstacklocal origin-master[15148]: I1101 
19:14:59.275807   15148 rest.go:324] Starting watch for 
/api/v1/podtemplates, rv=4 labels= fields=
Nov 01 19:14:59 master-1.openstacklocal origin-master[15148]: I1101 
19:14:59.459554   15148 rest.go:324] Starting watch for 
/apis/storage.k8s.io/v1beta1/storageclasse
Nov 01 19:15:01 master-1.openstacklocal origin-master[15148]: I1101 
19:15:01.286182   15148 rest.go:324] Starting watch for 
/apis/extensions/v1beta1/replicasets, rv=4
Nov 01 19:15:06 master-1.openstacklocal origin-master[15148]: I1101 
19:15:06.270704   15148 rest.go:324] Starting watch for 
/apis/security.openshift.io/v1/securitycon
Nov 01 19:15:06 master-1.openstacklocal origin-master[15148]: I1101 
19:15:06.290752   15148 rest.go:324] Starting watch for 
/apis/batch/v2alpha1/cronjobs, rv=4 labels
Nov 01 19:15:08 master-1.openstacklocal origin-master[15148]: I1101 
19:15:08.330948   15148 rest.go:324] Starting watch for 
/api/v1/services, rv=2008 labels= fields=
Nov 01 19:15:08 master-1.openstacklocal origin-master[15148]: I1101 
19:15:08.460997   15148 rest.go:324] Starting watch for 
/api/v1/serviceaccounts, rv=1909 labels= f
Nov 01 19:15:14 master-1.openstacklocal origin-master[15148]: I1101 
19:15:14.286471   15148 rest.go:324] Starting watch for 
/apis/rbac.authorization.k8s.io/v1beta1/ro
Nov 01 19:15:15 master-1.openstacklocal sudo[46140]:   centos : 
TTY=pts/0 ; PWD=/home/centos ; USER=root ; COMMAND=/bin/journalctl -n 25


So why is the fluentd running on that node not picking up these events?


On 01/11/2017 18:16, Tim Dudgeon wrote:

Correction/update on this.

The `journalctl -n 100` command runs OK on the host but not inside the
pod.


The file `/var/log/journal.pos` is present both on the host and in the 
pod.


Tim


On 01/11/2017 17:28, Tim Dudgeon wrote:

So I've tried this and a few other variants but not made any progress.
The issue seems to be that there are no journal logs?

# journalctl -n 100
No journal files were found.
-- No entries --

Even though:

# cat /var/log/journal.pos
s=8da3038f46274f8f80cadbf839d487a5;i=45bd;b=80a3902da560465e8799ccf3e6fb2ef7;m=27729aac6;t=55cef2678535f;x=42fa04d62b52d49fsh-4.2 



And in the logs of the pod I see this:

$  oc logs logging-fluent

Re: confusion over storage for logging

2017-11-01 Thread Tim Dudgeon

Issue: https://github.com/openshift/openshift-docs/issues/6080

I'll stick with default ephemeral storage for now until the situation 
becomes clearer.



On 01/11/2017 17:21, Rich Megginson wrote:

On 11/01/2017 10:50 AM, Tim Dudgeon wrote:

I am confused over persistent storage for logging (elasticsearch).

The latest advanced installer docs [1] specifically describes how to 
define using NFS for persistent storage, but the docs for 
"aggregating container logs" [2] says that NFS should not be used 
(except in one particular scenario) and seems to suggest that the 
only really suitable scenario is to use a volume (disk) directly 
mounted to each logging node.


Could someone clarify the situation?


Elasticsearch says do not use NFS. 
https://www.elastic.co/guide/en/elasticsearch/guide/2.x/indexing-performance.html#_storage


We should make that clear in the docs.

Please file a doc bug.



Tim


[1] 
https://docs.openshift.org/latest/install_config/install/advanced_install.html#advanced-install-cluster-logging


[2] 
https://docs.openshift.org/latest/install_config/aggregate_logging.html#aggregated-elasticsearch


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging seems to be working, but no logs are collected

2017-11-01 Thread Tim Dudgeon

Correction/update on this.

The `journalctl -n 100` command runs OK on the host but not inside the pod.

The file `/var/log/journal.pos` is present both on the host and in the pod.
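
(One more thing that might be worth comparing -- just a guess at where the
difference could come from -- is which journal directories actually exist on
the host versus inside the collector pod, since journalctl only sees what is
mounted in:

    # on the node
    ls -d /run/log/journal /var/log/journal
    # inside the fluentd pod
    oc -n logging rsh logging-fluentd-h6f3h ls -d /run/log/journal /var/log/journal
)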

Tim


On 01/11/2017 17:28, Tim Dudgeon wrote:

So I've tried this and a few other variants but not made any progress.
The issue seems to be that there are no journal logs?

# journalctl -n 100
No journal files were found.
-- No entries --

Even though:

# cat /var/log/journal.pos
s=8da3038f46274f8f80cadbf839d487a5;i=45bd;b=80a3902da560465e8799ccf3e6fb2ef7;m=27729aac6;t=55cef2678535f;x=42fa04d62b52d49fsh-4.2 



And in the logs of the pod I see this:

$  oc logs logging-fluentd-h6f3h
umounts of dead containers will fail. Ignoring...
umount: 
/var/lib/docker/containers/30effb9ff35fc74b9bf37ebeeb5d0d61b515a55e4f3ae52e9bb618ac55704d73/shm: 
not mounted
umount: 
/var/lib/docker/containers/39b5c1572e79dd2e698917a7116c6110d2c6eb0a6761142a6e718904f6c43022/shm: 
not mounted
umount: 
/var/lib/docker/containers/64c1c27537aa7441ded69a04c78f2f2ce60920fa6e4dc628637a19289b2ead6a/shm: 
not mounted
umount: 
/var/lib/docker/containers/7b8564902f011522917c6cffd8a39133cabb8588229f2836c9fbcee95960ac78/shm: 
not mounted
umount: 
/var/lib/docker/containers/b85e6d1123da047a7ffe679edfb71376267ef27e9525c7097f3fd6668acd110e/shm: 
not mounted
umount: 
/var/lib/docker/containers/c02f10b8dcf69979a95305a76c2f570aaf37fb9c2c0cad6893ed1822f7f24274/shm: 
not mounted
umount: 
/var/lib/docker/containers/c30c4f0f34b2470ef5280c85a6db3910b143707df997ad6ee6ed2c2208009a70/shm: 
not mounted
umount: 
/var/lib/docker/containers/c67c4e5e89b5f41c593ba2e538671821b6b43936962e8d49785b292644c4a031/shm: 
not mounted
2017-11-01 16:59:42 + [info]: reading config file 
path="/etc/fluent/fluent.conf"
2017-11-01 16:59:43 + [warn]: 'block' action stops input process 
until the buffer full is resolved. Check your pipeline this action is 
fit or not
2017-11-01 16:59:43 + [warn]: 'block' action stops input process 
until the buffer full is resolved. Check your pipeline this action is 
fit or not
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:43 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:44 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:44 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:46 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:47 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:49 + [warn]: no patterns matched 
tag="kubernetes.journal.container"
2017-11-01 16:59:51 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 16:59:52 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:01:02 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:02:30 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:06:04 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:10:01 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:14:01 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:18:06 + [warn]: no patterns matched 
tag="journal.system"
2017-11-01 17:22:09 + [warn]: no patterns matched 
tag="journal.system"



This is really a basic centos7 image with the only modifications done 
by installing the packages required by openshift and then running the 
ansible installer.


Tim


On 31/10/2017 18:15, Rich Megginson wrote:
Very strange.  It would appear that fluentd was not able to keep up 
with the log rate to the journal for such an extent that the fluentd 
current cursor position was rotated away . . .


You can "reset" fluentd by shutting it down, then removing that 
cursor file.  That will tell fluentd to start reading from the tail 
of the journal.  but NOTE - THAT WILL LOSE ALL RECORDS CURRENTLY IN 
THE JOURNAL.  If you want to try to recover everything in the 
journal, then oc set env ds/logging-fluentd 
JOURNAL_READ_FROM_HEAD=true - but note that this may take several 
hours until you have recent records in Elasticsearch, depending on 
what is the log rate to the journal and how fast fluentd can keep up.


If you go the JOURNAL_READ_FROM_HEAD=true route, setting the env 
should trigger a red

Re: confusion over storage for logging

2017-11-01 Thread Louis Santillan
I have an active PR for that in the Scaling Performance Section [0][1][2].

Once it lands, I plan to add more references to that section from the
Registry, Metrics, & Logging install docs.

[0] https://github.com/openshift/openshift-docs/pull/6033
[1]
https://github.com/tmorriso-rh/openshift-docs/blob/89e0641169ea9cc35c5c4adb538639aeff62e8b4/scaling_performance/optimizing_storage.adoc#general-storage-guidelines
[2]
https://github.com/tmorriso-rh/openshift-docs/blob/89e0641169ea9cc35c5c4adb538639aeff62e8b4/scaling_performance/optimizing_storage.adoc#back-end-recommendations


___

LOUIS P. SANTILLAN

SENIOR CONSULTANT, OPENSHIFT, MIDDLEWARE & DEVOPS

Red Hat Consulting, NA US WEST <https://www.redhat.com/>

lpsan...@gmail.com   M: 3236334854
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>

On Wed, Nov 1, 2017 at 1:21 PM, Rich Megginson <rmegg...@redhat.com> wrote:

> On 11/01/2017 10:50 AM, Tim Dudgeon wrote:
>
>> I am confused over persistent storage for logging (elasticsearch).
>>
>> The latest advanced installer docs [1] specifically describes how to
>> define using NFS for persistent storage, but the docs for "aggregating
>> container logs" [2] says that NFS should not be used (except in one
>> particular scenario) and seems to suggest that the only really suitable
>> scenario is to use a volume (disk) directly mounted to each logging node.
>>
>> Could someone clarify the situation?
>>
>
> Elasticsearch says do not use NFS. https://www.elastic.co/guide/e
> n/elasticsearch/guide/2.x/indexing-performance.html#_storage
>
> We should make that clear in the docs.
>
> Please file a doc bug.
>
>
>
>> Tim
>>
>>
>> [1] https://docs.openshift.org/latest/install_config/install/adv
>> anced_install.html#advanced-install-cluster-logging
>>
>> [2] https://docs.openshift.org/latest/install_config/aggregate_
>> logging.html#aggregated-elasticsearch
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


confusion over storage for logging

2017-11-01 Thread Tim Dudgeon

I am confused over persistent storage for logging (elasticsearch).

The latest advanced installer docs [1] specifically describes how to 
define using NFS for persistent storage, but the docs for "aggregating 
container logs" [2] says that NFS should not be used (except in one 
particular scenario) and seems to suggest that the only really suitable 
scenario is to use a volume (disk) directly mounted to each logging node.


Could someone clarify the situation?

Tim


[1] 
https://docs.openshift.org/latest/install_config/install/advanced_install.html#advanced-install-cluster-logging


[2] 
https://docs.openshift.org/latest/install_config/aggregate_logging.html#aggregated-elasticsearch


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging seems to be working, but no logs are collected

2017-10-31 Thread Tim Dudgeon

On 31/10/2017 18:15, Rich Megginson wrote:
Very strange.  It would appear that fluentd was not able to keep up 
with the log rate to the journal for such an extent that the fluentd 
current cursor position was rotated away . . .
That would be strange - the nodes (4 of them) and fluentd have been 
running for about 3 days and have been 99.9% idle over that period.


You can "reset" fluentd by shutting it down, then removing that cursor 
file. 

Do you mean shut down the pods? Won't the daemon set immediately re-create them?

That will tell fluentd to start reading from the tail of the journal. 
but NOTE - THAT WILL LOSE ALL RECORDS CURRENTLY IN THE JOURNAL. If you 
want to try to recover everything in the journal, then oc set env 
ds/logging-fluentd JOURNAL_READ_FROM_HEAD=true - but note that this 
may take several hours until you have recent records in Elasticsearch, 
depending on what is the log rate to the journal and how fast fluentd 
can keep up.


If you go the JOURNAL_READ_FROM_HEAD=true route, setting the env 
should trigger a redeployment of fluentd, so you should not have to 
restart/relabel.


oc label node --all --overwrite logging-infra-fluentd-
... wait for oc pods to report no logging-fluentd pods ...
rm -f /var/log/journal.pos
oc label node --all --overwrite logging-infra-fluentd=true

Then, monitor fluentd like this:

https://github.com/openshift/origin-aggregated-logging/blob/master/hack/testing/entrypoint.sh#L56 



and monitor the journald log rate (number of logs/minute) like this:

https://github.com/openshift/origin-aggregated-logging/blob/master/hack/testing/entrypoint.sh#L70 



Will try that. This is just a test system so I'm not concerned about 
keeping the logfile data, but I might try both approaches to gain 
experience.


Thanks for your help.



On 10/31/2017 11:57 AM, Tim Dudgeon wrote:

$ sudo docker info | grep -i log
 WARNING: Usage of loopback devices is strongly discouraged for 
production use. Use `--storage-opt dm.thinpooldev` to specify a 
custom block storage device.

Logging Driver: journald
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

$ journalctl -r -n 1 --show-cursor
-- Logs begin at Sun 2017-10-29 03:04:42 UTC, end at Tue 2017-10-31 
17:54:37 UTC. --
Oct 31 17:54:37 worker-1.openstacklocal dockerd-current[6135]: 
{"type":"response","@timestamp":"2017-10-31T17:54:37Z","tags":[],"pid":8,"
-- cursor: 
s=f746c7090d724f5ab0ece0d13683fc53;i=a54f2;b=93b6daa912044dd9ae9f05521c603efc;m=55116ad995;t=55cdb72d7c92d;x=5a16032caedc4423



On 31/10/2017 17:31, Rich Megginson wrote:


# docker info | grep -i log

# journalctl -r -n 1 --show-cursor


On 10/31/2017 11:12 AM, Tim Dudgeon wrote:


Thanks. Those links are useful.

It looks to me like it's a problem at the fluentd level. This is
what I see on one of the fluentd pods:


sh-4.2# cat /var/log/es-containers.log.pos
cat: /var/log/es-containers.log.pos: No such file or directory
sh-4.2# cat /var/log/journal.pos
s=52fdd277f90749b0a442c78739b1efa7;i=50d69;b=2a3f1736a1a1486d83f95db719fdc281;m=5465b53fd1;t=55cdac4738846;x=85596f3f5f5a27e4sh-4.2# 


sh-4.2# journalctl -c `cat /var/log/journal.pos`
No journal files were found.
-- No entries --

Which might sort of explain why everything is running but no logs 
are being processed.


This is based on a centos7 image with only the necessary openshift 
packages installed and then openshift installed using ansible. The 
logging setup in the inventory file is this:


openshift_hosted_logging_deployer_version=v3.6.0
openshift_hosted_logging_deploy=true
openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_nfs_directory=/exports
openshift_hosted_logging_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_logging_storage_volume_name=logging
openshift_hosted_logging_storage_volume_size=10Gi
openshift_hosted_logging_storage_labels={'storage': 'logging'}


Tim


On 31/10/2017 16:37, Jeff Cantrill wrote:
Please provide additional information, logs, etc or post the 
output of [1] someplace for review. Additionally, consider 
reviewing [2].


[1] 
https://github.com/openshift/origin-aggregated-logging/blob/master/hack/logging-dump.sh 

[2] 
https://github.com/openshift/origin-aggregated-logging/blob/master/docs/checking-efk-health.md


On Tue, Oct 31, 2017 at 11:47 AM, Tim Dudgeon 
<tdudgeon...@gmail.com <mailto:tdudgeon...@gmail.com>> wrote:


    Hi All,

    I've deployed logging using the ansible installer (v3.6.0) for a
    fairly simple openshift setup and everything appears to running:

    NAME  READY STATUS RESTARTS   AGE
    logging-curator-1-gvh73  1/1 Running 24     3d
    logging-es-data-master-xz0e7a0c-1-deploy   0/1 Error 0      3d
    logging-es-data-master-xz0e7a0c-4-deploy   0/1 Error 0      3d
    logging-es-data-ma

Re: Logging seems to be working, but no logs are collected

2017-10-31 Thread Rich Megginson
Very strange.  It would appear that fluentd was not able to keep up with 
the log rate to the journal for such an extent that the fluentd current 
cursor position was rotated away . . .


You can "reset" fluentd by shutting it down, then removing that cursor 
file.  That will tell fluentd to start reading from the tail of the 
journal.  but NOTE - THAT WILL LOSE ALL RECORDS CURRENTLY IN THE 
JOURNAL.  If you want to try to recover everything in the journal, then 
oc set env ds/logging-fluentd JOURNAL_READ_FROM_HEAD=true - but note 
that this may take several hours until you have recent records in 
Elasticsearch, depending on what is the log rate to the journal and how 
fast fluentd can keep up.


If you go the JOURNAL_READ_FROM_HEAD=true route, setting the env should 
trigger a redeployment of fluentd, so you should not have to 
restart/relabel.
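
(A quick way to double-check the variable actually landed on the daemonset
before relabelling anything:

    oc -n logging set env ds/logging-fluentd --list | grep JOURNAL_READ_FROM_HEAD
)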


oc label node --all --overwrite logging-infra-fluentd-
... wait for oc pods to report no logging-fluentd pods ...
rm -f /var/log/journal.pos
oc label node --all --overwrite logging-infra-fluentd=true

Then, monitor fluentd like this:

https://github.com/openshift/origin-aggregated-logging/blob/master/hack/testing/entrypoint.sh#L56

and monitor the journald log rate (number of logs/minute) like this:

https://github.com/openshift/origin-aggregated-logging/blob/master/hack/testing/entrypoint.sh#L70
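
A couple of practical notes on the sequence above, in case they save someone
a round trip: the two oc label commands are cluster-level, but the
journal.pos file lives on each node, so the rm has to run on every node that
was running fluentd -- for example with an ad-hoc ansible run against the
same inventory used for the install (module and path here are just one way
to do it):

    ansible nodes -i <your-inventory> -m file -a 'path=/var/log/journal.pos state=absent'

And for a very rough journald ingest rate on a node, independent of the
script above, something like:

    journalctl --since '1 minute ago' | wc -l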

On 10/31/2017 11:57 AM, Tim Dudgeon wrote:

$ sudo docker info | grep -i log
 WARNING: Usage of loopback devices is strongly discouraged for 
production use. Use `--storage-opt dm.thinpooldev` to specify a custom 
block storage device.

Logging Driver: journald
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

$ journalctl -r -n 1 --show-cursor
-- Logs begin at Sun 2017-10-29 03:04:42 UTC, end at Tue 2017-10-31 
17:54:37 UTC. --
Oct 31 17:54:37 worker-1.openstacklocal dockerd-current[6135]: 
{"type":"response","@timestamp":"2017-10-31T17:54:37Z","tags":[],"pid":8,"
-- cursor: 
s=f746c7090d724f5ab0ece0d13683fc53;i=a54f2;b=93b6daa912044dd9ae9f05521c603efc;m=55116ad995;t=55cdb72d7c92d;x=5a16032caedc4423



On 31/10/2017 17:31, Rich Megginson wrote:


# docker info | grep -i log

# journalctl -r -n 1 --show-cursor


On 10/31/2017 11:12 AM, Tim Dudgeon wrote:


Thanks. Those links are useful.

It looks to me like it's a problem at the fluentd level. This is what
I see on one of the fluentd pods:


sh-4.2# cat /var/log/es-containers.log.pos
cat: /var/log/es-containers.log.pos: No such file or directory
sh-4.2# cat /var/log/journal.pos
s=52fdd277f90749b0a442c78739b1efa7;i=50d69;b=2a3f1736a1a1486d83f95db719fdc281;m=5465b53fd1;t=55cdac4738846;x=85596f3f5f5a27e4sh-4.2# 


sh-4.2# journalctl -c `cat /var/log/journal.pos`
No journal files were found.
-- No entries --

Which might sort of explain why everything is running but no logs 
are being processed.


This is based on a centos7 image with only the necessary openshift 
packages installed and then openshift installed using ansible. The 
logging setup in the inventory file is this:


openshift_hosted_logging_deployer_version=v3.6.0
openshift_hosted_logging_deploy=true
openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_nfs_directory=/exports
openshift_hosted_logging_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_logging_storage_volume_name=logging
openshift_hosted_logging_storage_volume_size=10Gi
openshift_hosted_logging_storage_labels={'storage': 'logging'}


Tim


On 31/10/2017 16:37, Jeff Cantrill wrote:
Please provide additional information, logs, etc or post the output 
of [1] someplace for review. Additionally, consider reviewing [2].


[1] 
https://github.com/openshift/origin-aggregated-logging/blob/master/hack/logging-dump.sh 

[2] 
https://github.com/openshift/origin-aggregated-logging/blob/master/docs/checking-efk-health.md


On Tue, Oct 31, 2017 at 11:47 AM, Tim Dudgeon 
<tdudgeon...@gmail.com <mailto:tdudgeon...@gmail.com>> wrote:


    Hi All,

    I've deployed logging using the ansible installer (v3.6.0) for a
    fairly simple openshift setup and everything appears to running:

    NAME  READY STATUS RESTARTS   AGE
    logging-curator-1-gvh73  1/1 Running 24     3d
    logging-es-data-master-xz0e7a0c-1-deploy   0/1 Error 0      3d
    logging-es-data-master-xz0e7a0c-4-deploy   0/1 Error 0      3d
    logging-es-data-master-xz0e7a0c-5-deploy   0/1 Error 0  3d
    logging-es-data-master-xz0e7a0c-7-t4xpf    1/1 Running 0  3d
    logging-fluentd-4rm2w          1/1 Running 0 3d
    logging-fluentd-8h944          1/1 Running 0 3d
    logging-fluentd-n00bn          1/1 Running 0 3d
    logging-fluentd-vt8hh  1/1 Running 0 3d
    logging-kibana-1-g7l4z  2/2 Running 0 3d

    (the failed pods were related to gett

Re: Logging seems to be working, but no logs are collected

2017-10-31 Thread Tim Dudgeon

$ sudo docker info | grep -i log
 WARNING: Usage of loopback devices is strongly discouraged for 
production use. Use `--storage-opt dm.thinpooldev` to specify a custom 
block storage device.

Logging Driver: journald
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

$ journalctl -r -n 1 --show-cursor
-- Logs begin at Sun 2017-10-29 03:04:42 UTC, end at Tue 2017-10-31 
17:54:37 UTC. --
Oct 31 17:54:37 worker-1.openstacklocal dockerd-current[6135]: 
{"type":"response","@timestamp":"2017-10-31T17:54:37Z","tags":[],"pid":8,"
-- cursor: 
s=f746c7090d724f5ab0ece0d13683fc53;i=a54f2;b=93b6daa912044dd9ae9f05521c603efc;m=55116ad995;t=55cdb72d7c92d;x=5a16032caedc4423



On 31/10/2017 17:31, Rich Megginson wrote:


# docker info | grep -i log

# journalctl -r -n 1 --show-cursor


On 10/31/2017 11:12 AM, Tim Dudgeon wrote:


Thanks. Those links are useful.

It looks to me like it's a problem at the fluentd level. This is what
I see on one of the fluentd pods:


sh-4.2# cat /var/log/es-containers.log.pos
cat: /var/log/es-containers.log.pos: No such file or directory
sh-4.2# cat /var/log/journal.pos
s=52fdd277f90749b0a442c78739b1efa7;i=50d69;b=2a3f1736a1a1486d83f95db719fdc281;m=5465b53fd1;t=55cdac4738846;x=85596f3f5f5a27e4sh-4.2# 


sh-4.2# journalctl -c `cat /var/log/journal.pos`
No journal files were found.
-- No entries --

Which might sort of explain why everything is running but no logs are 
being processed.


This is based on a centos7 image with only the necessary openshift 
packages installed and then openshift installed using ansible. The 
logging setup in the inventory file is this:


openshift_hosted_logging_deployer_version=v3.6.0
openshift_hosted_logging_deploy=true
openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_nfs_directory=/exports
openshift_hosted_logging_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_logging_storage_volume_name=logging
openshift_hosted_logging_storage_volume_size=10Gi
openshift_hosted_logging_storage_labels={'storage': 'logging'}


Tim


On 31/10/2017 16:37, Jeff Cantrill wrote:
Please provide additional information, logs, etc or post the output 
of [1] someplace for review. Additionally, consider reviewing [2].


[1] 
https://github.com/openshift/origin-aggregated-logging/blob/master/hack/logging-dump.sh 

[2] 
https://github.com/openshift/origin-aggregated-logging/blob/master/docs/checking-efk-health.md


On Tue, Oct 31, 2017 at 11:47 AM, Tim Dudgeon <tdudgeon...@gmail.com 
<mailto:tdudgeon...@gmail.com>> wrote:


    Hi All,

    I've deployed logging using the ansible installer (v3.6.0) for a
    fairly simple openshift setup and everything appears to running:

    NAME  READY STATUS RESTARTS   AGE
    logging-curator-1-gvh73  1/1 Running 24 3d
    logging-es-data-master-xz0e7a0c-1-deploy   0/1 Error 0  3d
    logging-es-data-master-xz0e7a0c-4-deploy   0/1 Error 0  3d
    logging-es-data-master-xz0e7a0c-5-deploy   0/1 Error 0  3d
    logging-es-data-master-xz0e7a0c-7-t4xpf    1/1 Running 0  3d

    logging-fluentd-4rm2w  1/1 Running 0 3d
    logging-fluentd-8h944  1/1 Running 0 3d
    logging-fluentd-n00bn  1/1 Running 0 3d
    logging-fluentd-vt8hh  1/1 Running 0 3d
    logging-kibana-1-g7l4z  2/2 Running 0 3d

    (the failed pods were related to getting elasticsearch running,
    but that was resolved).

    The problem is that I don't see any logs in Kibana. When I look
    in the fluentd pod logs I see lots of stuff like this:

    2017-10-31 13:53:15 + [warn]: no patterns matched
    tag="journal.system"
    2017-10-31 13:58:02 + [warn]: no patterns matched
    tag="kubernetes.journal.container"
    2017-10-31 14:02:18 + [warn]: no patterns matched
    tag="journal.system"
    2017-10-31 14:07:15 + [warn]: no patterns matched
    tag="journal.system"
    2017-10-31 14:11:20 + [warn]: no patterns matched
    tag="journal.system"
    2017-10-31 14:15:16 + [warn]: no patterns matched
    tag="journal.system"
    2017-10-31 14:19:58 + [warn]: no patterns matched
    tag="journal.system"

    Is this the cause, and if so what is wrong?
    If not how to debug this?

    Tim



    ___
    users mailing list
    users@lists.openshift.redhat.com
    <mailto:users@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
<http://lists.openshift.redhat.com/openshiftmm/listinfo/users>




--
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat

Re: Logging seems to be working, but no logs are collected

2017-10-31 Thread Rich Megginson


# docker info | grep -i log

# journalctl -r -n 1 --show-cursor
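
Another data point that sometimes helps here -- assuming you can run docker
directly on the node -- is whether a given container is really using the
journald driver rather than some per-container override:

# docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container-id>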


On 10/31/2017 11:12 AM, Tim Dudgeon wrote:


Thanks. Those links are useful.

It looks to me like it's a problem at the fluentd level. This is what I
see on one of the fluentd pods:


sh-4.2# cat /var/log/es-containers.log.pos
cat: /var/log/es-containers.log.pos: No such file or directory
sh-4.2# cat /var/log/journal.pos
s=52fdd277f90749b0a442c78739b1efa7;i=50d69;b=2a3f1736a1a1486d83f95db719fdc281;m=5465b53fd1;t=55cdac4738846;x=85596f3f5f5a27e4sh-4.2# 


sh-4.2# journalctl -c `cat /var/log/journal.pos`
No journal files were found.
-- No entries --

Which might sort of explain why everything is running but no logs are 
being processed.


This is based on a centos7 image with only the necessary openshift 
packages installed and then openshift installed using ansible. The 
logging setup in the inventory file is this:


openshift_hosted_logging_deployer_version=v3.6.0
openshift_hosted_logging_deploy=true
openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_nfs_directory=/exports
openshift_hosted_logging_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_logging_storage_volume_name=logging
openshift_hosted_logging_storage_volume_size=10Gi
openshift_hosted_logging_storage_labels={'storage': 'logging'}


Tim


On 31/10/2017 16:37, Jeff Cantrill wrote:
Please provide additional information, logs, etc or post the output 
of [1] someplace for review.  Additionally, consider reviewing [2].


[1] 
https://github.com/openshift/origin-aggregated-logging/blob/master/hack/logging-dump.sh 

[2] 
https://github.com/openshift/origin-aggregated-logging/blob/master/docs/checking-efk-health.md


On Tue, Oct 31, 2017 at 11:47 AM, Tim Dudgeon <tdudgeon...@gmail.com 
<mailto:tdudgeon...@gmail.com>> wrote:


Hi All,

I've deployed logging using the ansible installer (v3.6.0) for a
fairly simple openshift setup and everything appears to running:

NAME  READY STATUS RESTARTS   AGE
logging-curator-1-gvh73  1/1 Running 24 3d
logging-es-data-master-xz0e7a0c-1-deploy   0/1 Error 0  3d
logging-es-data-master-xz0e7a0c-4-deploy   0/1 Error 0  3d
logging-es-data-master-xz0e7a0c-5-deploy   0/1 Error 0  3d
logging-es-data-master-xz0e7a0c-7-t4xpf    1/1 Running 0  3d
logging-fluentd-4rm2w  1/1 Running 0  3d
logging-fluentd-8h944  1/1 Running 0  3d
logging-fluentd-n00bn  1/1 Running 0  3d
logging-fluentd-vt8hh  1/1 Running 0  3d
logging-kibana-1-g7l4z  2/2 Running 0  3d

(the failed pods were related to getting elasticsearch running,
but that was resolved).

The problem is that I don't see any logs in Kibana. When I look
in the fluentd pod logs I see lots of stuff like this:

2017-10-31 13:53:15 + [warn]: no patterns matched
tag="journal.system"
2017-10-31 13:58:02 + [warn]: no patterns matched
tag="kubernetes.journal.container"
2017-10-31 14:02:18 + [warn]: no patterns matched
tag="journal.system"
2017-10-31 14:07:15 + [warn]: no patterns matched
tag="journal.system"
2017-10-31 14:11:20 + [warn]: no patterns matched
tag="journal.system"
2017-10-31 14:15:16 + [warn]: no patterns matched
tag="journal.system"
2017-10-31 14:19:58 + [warn]: no patterns matched
tag="journal.system"

Is this the cause, and if so what is wrong?
If not how to debug this?

Tim



___
users mailing list
users@lists.openshift.redhat.com
<mailto:users@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
<http://lists.openshift.redhat.com/openshiftmm/listinfo/users>




--
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com <mailto:jcant...@redhat.com>
http://www.redhat.com




___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging seems to be working, but no logs are collected

2017-10-31 Thread Tim Dudgeon

Thanks. Those links are useful.

It looks to me like it's a problem at the fluentd level. This is what I 
see on one of the fluentd pods:


sh-4.2# cat /var/log/es-containers.log.pos
cat: /var/log/es-containers.log.pos: No such file or directory
sh-4.2# cat /var/log/journal.pos
s=52fdd277f90749b0a442c78739b1efa7;i=50d69;b=2a3f1736a1a1486d83f95db719fdc281;m=5465b53fd1;t=55cdac4738846;x=85596f3f5f5a27e4sh-4.2# 


sh-4.2# journalctl -c `cat /var/log/journal.pos`
No journal files were found.
-- No entries --

Which might sort of explain why everything is running but no logs are 
being processed.
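
A few quick checks usually narrow this down (a minimal sketch, assuming
Docker is meant to log via journald and that journald on the node may only
be keeping a volatile journal under /run/log/journal rather than
/var/log/journal, which is where fluentd is reading):

# which log driver is Docker actually using? the journal input expects "journald"
docker info | grep -i 'logging driver'

# does a persistent journal exist where fluentd looks for it?
ls -ld /var/log/journal /run/log/journal

# if /var/log/journal is missing, make journald persistent (assumes stock
# systemd defaults on the node) and restart it
mkdir -p /var/log/journal
systemctl restart systemd-journald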


This is based on a centos7 image with only the necessary openshift 
packages installed and then openshift installed using ansible. The 
logging setup in the inventory file is this:


openshift_hosted_logging_deployer_version=v3.6.0
openshift_hosted_logging_deploy=true
openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_nfs_directory=/exports
openshift_hosted_logging_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_logging_storage_volume_name=logging
openshift_hosted_logging_storage_volume_size=10Gi
openshift_hosted_logging_storage_labels={'storage': 'logging'}


Tim


On 31/10/2017 16:37, Jeff Cantrill wrote:
Please provide additional information, logs, etc or post the output of 
[1] someplace for review.  Additionally, consider reviewing [2].


[1] 
https://github.com/openshift/origin-aggregated-logging/blob/master/hack/logging-dump.sh 

[2] 
https://github.com/openshift/origin-aggregated-logging/blob/master/docs/checking-efk-health.md


On Tue, Oct 31, 2017 at 11:47 AM, Tim Dudgeon <tdudgeon...@gmail.com 
<mailto:tdudgeon...@gmail.com>> wrote:


Hi All,

I've deployed logging using the ansible installer (v3.6.0) for a
fairly simple openshift setup and everything appears to running:

NAME  READY STATUS RESTARTS   AGE
logging-curator-1-gvh73  1/1 Running 24 3d
logging-es-data-master-xz0e7a0c-1-deploy   0/1 Error 0  3d
logging-es-data-master-xz0e7a0c-4-deploy   0/1 Error 0  3d
logging-es-data-master-xz0e7a0c-5-deploy   0/1 Error 0  3d
logging-es-data-master-xz0e7a0c-7-t4xpf    1/1 Running 0  3d
logging-fluentd-4rm2w  1/1 Running 0  3d
logging-fluentd-8h944  1/1 Running 0  3d
logging-fluentd-n00bn  1/1 Running 0  3d
logging-fluentd-vt8hh  1/1 Running 0  3d
logging-kibana-1-g7l4z  2/2 Running 0  3d

(the failed pods were related to getting elasticsearch running,
but that was resolved).

The problem is that I don't see any logs in Kibana. When I look in
the fluentd pod logs I see lots of stuff like this:

2017-10-31 13:53:15 + [warn]: no patterns matched
tag="journal.system"
2017-10-31 13:58:02 + [warn]: no patterns matched
tag="kubernetes.journal.container"
2017-10-31 14:02:18 + [warn]: no patterns matched
tag="journal.system"
2017-10-31 14:07:15 + [warn]: no patterns matched
tag="journal.system"
2017-10-31 14:11:20 + [warn]: no patterns matched
tag="journal.system"
2017-10-31 14:15:16 + [warn]: no patterns matched
tag="journal.system"
2017-10-31 14:19:58 + [warn]: no patterns matched
tag="journal.system"

Is this the cause, and if so what is wrong?
If not how to debug this?

Tim



___
users mailing list
users@lists.openshift.redhat.com
<mailto:users@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
<http://lists.openshift.redhat.com/openshiftmm/listinfo/users>




--
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com <mailto:jcant...@redhat.com>
http://www.redhat.com


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging seems to be working, but no logs are collected

2017-10-31 Thread Jeff Cantrill
Please provide additional information, logs, etc or post the output of [1]
someplace for review.  Additionally, consider reviewing [2].

[1]
https://github.com/openshift/origin-aggregated-logging/blob/master/hack/logging-dump.sh

[2]
https://github.com/openshift/origin-aggregated-logging/blob/master/docs/checking-efk-health.md

On Tue, Oct 31, 2017 at 11:47 AM, Tim Dudgeon <tdudgeon...@gmail.com> wrote:

> Hi All,
>
> I've deployed logging using the ansible installer (v3.6.0) for a fairly
> simple openshift setup and everything appears to running:
>
> NAME   READY STATUS RESTARTS   AGE
> logging-curator-1-gvh731/1   Running 24 3d
> logging-es-data-master-xz0e7a0c-1-deploy   0/1   Error 0  3d
> logging-es-data-master-xz0e7a0c-4-deploy   0/1   Error 0  3d
> logging-es-data-master-xz0e7a0c-5-deploy   0/1   Error 0  3d
> logging-es-data-master-xz0e7a0c-7-t4xpf1/1   Running 0  3d
> logging-fluentd-4rm2w      1/1   Running 0  3d
> logging-fluentd-8h944      1/1   Running 0  3d
> logging-fluentd-n00bn      1/1   Running 0  3d
> logging-fluentd-vt8hh      1/1   Running 0  3d
> logging-kibana-1-g7l4z 2/2   Running 0  3d
>
> (the failed pods were related to getting elasticsearch running, but that
> was resolved).
>
> The problem is that I don't see any logs in Kibana. When I look in the
> fluentd pod logs I see lots of stuff like this:
>
> 2017-10-31 13:53:15 + [warn]: no patterns matched tag="journal.system"
> 2017-10-31 13:58:02 + [warn]: no patterns matched
> tag="kubernetes.journal.container"
> 2017-10-31 14:02:18 + [warn]: no patterns matched tag="journal.system"
> 2017-10-31 14:07:15 + [warn]: no patterns matched tag="journal.system"
> 2017-10-31 14:11:20 + [warn]: no patterns matched tag="journal.system"
> 2017-10-31 14:15:16 + [warn]: no patterns matched tag="journal.system"
> 2017-10-31 14:19:58 + [warn]: no patterns matched tag="journal.system"
>
> Is this the cause, and if so what is wrong?
> If not how to debug this?
>
> Tim
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>



-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Logging seems to be working, but no logs are collected

2017-10-31 Thread Tim Dudgeon

Hi All,

I've deployed logging using the ansible installer (v3.6.0) for a fairly 
simple openshift setup and everything appears to running:


NAME   READY STATUS RESTARTS   AGE
logging-curator-1-gvh73    1/1   Running 24 3d
logging-es-data-master-xz0e7a0c-1-deploy   0/1   Error 0  3d
logging-es-data-master-xz0e7a0c-4-deploy   0/1   Error 0  3d
logging-es-data-master-xz0e7a0c-5-deploy   0/1   Error 0  3d
logging-es-data-master-xz0e7a0c-7-t4xpf    1/1   Running 0  3d
logging-fluentd-4rm2w  1/1   Running 0  3d
logging-fluentd-8h944  1/1   Running 0  3d
logging-fluentd-n00bn  1/1   Running 0  3d
logging-fluentd-vt8hh  1/1   Running 0  3d
logging-kibana-1-g7l4z 2/2   Running 0  3d

(the failed pods were related to getting elasticsearch running, but that 
was resolved).


The problem is that I don't see any logs in Kibana. When I look in the 
fluentd pod logs I see lots of stuff like this:


2017-10-31 13:53:15 + [warn]: no patterns matched tag="journal.system"
2017-10-31 13:58:02 + [warn]: no patterns matched 
tag="kubernetes.journal.container"

2017-10-31 14:02:18 + [warn]: no patterns matched tag="journal.system"
2017-10-31 14:07:15 + [warn]: no patterns matched tag="journal.system"
2017-10-31 14:11:20 + [warn]: no patterns matched tag="journal.system"
2017-10-31 14:15:16 + [warn]: no patterns matched tag="journal.system"
2017-10-31 14:19:58 + [warn]: no patterns matched tag="journal.system"

Is this the cause, and if so what is wrong?
If not how to debug this?

Tim



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Possible to use AWS elasitcsearch for OpenShift logging?

2017-10-17 Thread Marc Boorshtein
That makes sense. Thanks!

On Mon, Oct 16, 2017, 9:31 AM Luke Meyer <lme...@redhat.com> wrote:

> You can configure fluentd to forward logs (see
> https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance).
> Note the caveat, "If you are not using the provided Kibana and
> Elasticsearch images, you will not have the same multi-tenant capabilities
> and your data will not be restricted by user access to a particular
> project."
>
> On Thu, Oct 12, 2017 at 10:35 AM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
>
>> I have built out a cluster on AWS using the ansible advanced install.  I
>> see that i can setup logging by creating infrastructure nodes that will
>> host elasticsearch.  AWS has an elasticsearch service.  Is there a way to
>> use that instead?
>>
>> Thanks
>> Marc
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Possible to use AWS elasitcsearch for OpenShift logging?

2017-10-16 Thread Luke Meyer
You can configure fluentd to forward logs (see
https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance).
Note the caveat, "If you are not using the provided Kibana and
Elasticsearch images, you will not have the same multi-tenant capabilities
and your data will not be restricted by user access to a particular
project."

On Thu, Oct 12, 2017 at 10:35 AM, Marc Boorshtein <mboorsht...@gmail.com>
wrote:

> I have built out a cluster on AWS using the ansible advanced install.  I
> see that i can setup logging by creating infrastructure nodes that will
> host elasticsearch.  AWS has an elasticsearch service.  Is there a way to
> use that instead?
>
> Thanks
> Marc
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Possible to use AWS elasitcsearch for OpenShift logging?

2017-10-12 Thread Marc Boorshtein
I have built out a cluster on AWS using the ansible advanced install.  I
see that I can set up logging by creating infrastructure nodes that will
host elasticsearch.  AWS has an elasticsearch service.  Is there a way to
use that instead?

Thanks
Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [aos-int-services] Problem about logging in openshift origin

2017-09-18 Thread Jeff Cantrill
The images you reference may not even be the latest 3.6.x version of the
image.  I recommend you rebuild them yourself.

Access to the OCP images requires a valid Red Hat subscription.

On Mon, Sep 18, 2017 at 2:24 AM, Yu Wei <yu20...@hotmail.com> wrote:

> Hi Jeff,
>
> The image used is docker.io/openshift/origin-logging-elasticsearch:v3.6.0.
>
> It's fetched from docker hub.
>
> How could I get images from OCP?
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
> --
> *From:* Jeff Cantrill <jcant...@redhat.com>
> *Sent:* Saturday, September 16, 2017 1:32:19 AM
> *To:* Peter Portante
> *Cc:* Yu Wei; d...@lists.openshift.redhat.com;
> users@lists.openshift.redhat.com; aos-int-services
> *Subject:* Re: [aos-int-services] Problem about logging in openshift
> origin
>
> Can you also post the image Tag you are using?  Is this from an OCP based
> image or upstream images you may find on dockerhub?
>
> On Fri, Sep 15, 2017 at 7:20 AM, Peter Portante <pport...@redhat.com>
> wrote:
>
>>
>>
>> On Fri, Sep 15, 2017 at 6:10 AM, Yu Wei <yu20...@hotmail.com> wrote:
>>
>>> Hi,
>>>
>>> I setup OpenShift origin 3.6 cluster successfully and enabled metrics
>>> and logging.
>>>
>>> Metrics worked well and logging didn't worked.
>>>
>>> Pod * logging-es-data-master-lf6al5rb-5-deploy* in logging frequently
>>> crashed with below logs,
>>>
>>> *--> Scaling logging-es-data-master-lf6al5rb-5 to 1 *
>>> *--> Waiting up to 10m0s for pods in rc
>>> logging-es-data-master-lf6al5rb-5 to become ready *
>>> *error: update acceptor rejected logging-es-data-master-lf6al5rb-5: pods
>>> for rc "logging-es-data-master-lf6al5rb-5" took longer than 600 seconds to
>>> become ready*
>>>
>>> I didn't find other information. How could I debug such problem?
>>>
>> ​Hi Yu,​
>>
>> Added aos-int-services ...
>>
>> ​How many indices do you have in the Elasticsearch instance?
>>
>> What is the storage configuration for the Elasticsearch pods?
>>
>> ​Regards, -peter
>>
>>
>>
>>>
>>> Thanks,
>>>
>>> Jared, (韦煜)
>>> Software developer
>>> Interested in open source software, big data, Linux
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>
>
> --
> --
> Jeff Cantrill
> Senior Software Engineer, Red Hat Engineering
> OpenShift Integration Services
> Red Hat, Inc.
> *Office*: 703-748-4420 <(703)%20748-4420> | 866-546-8970 ext. 8162420
> <(866)%20546-8970>
> jcant...@redhat.com
> http://www.redhat.com
>



-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Problem about logging in openshift origin

2017-09-18 Thread Peter Portante
On Mon, Sep 18, 2017 at 2:33 AM, Yu Wei <yu20...@hotmail.com> wrote:

> Hi Peter,
>
> The storage is EmptyDir for es pods.
>

​How much storage do you have available for each ES pod to use?  ES can
fill TBs of storage if the amount of logging is high enough.

How big are your ES indices?

-peter​
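
One way to answer both questions from inside an ES pod is the _cat API; a
minimal sketch, assuming the admin certs are mounted under
/etc/elasticsearch/secret as in the stock logging image:

# index names and sizes, in MB
curl -s --cacert /etc/elasticsearch/secret/admin-ca \
 --cert /etc/elasticsearch/secret/admin-cert \
 --key /etc/elasticsearch/secret/admin-key \
 'https://localhost:9200/_cat/indices?v&bytes=mb'

# disk used and available per data node
curl -s --cacert /etc/elasticsearch/secret/admin-ca \
 --cert /etc/elasticsearch/secret/admin-cert \
 --key /etc/elasticsearch/secret/admin-key \
 'https://localhost:9200/_cat/allocation?v'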

> What's the meaning of aos-int-services? I only enabled logging feature
> during ansible installation.
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
> --
> *From:* Peter Portante <pport...@redhat.com>
> *Sent:* Friday, September 15, 2017 7:20:18 PM
> *To:* Yu Wei
> *Cc:* users@lists.openshift.redhat.com; d...@lists.openshift.redhat.com;
> aos-int-services
> *Subject:* Re: Problem about logging in openshift origin
>
>
>
> On Fri, Sep 15, 2017 at 6:10 AM, Yu Wei <yu20...@hotmail.com> wrote:
>
>> Hi,
>>
>> I setup OpenShift origin 3.6 cluster successfully and enabled metrics and
>> logging.
>>
>> Metrics worked well and logging didn't worked.
>>
>> Pod * logging-es-data-master-lf6al5rb-5-deploy* in logging frequently
>> crashed with below logs,
>>
>> *--> Scaling logging-es-data-master-lf6al5rb-5 to 1 *
>> *--> Waiting up to 10m0s for pods in rc logging-es-data-master-lf6al5rb-5
>> to become ready *
>> *error: update acceptor rejected logging-es-data-master-lf6al5rb-5: pods
>> for rc "logging-es-data-master-lf6al5rb-5" took longer than 600 seconds to
>> become ready*
>>
>> I didn't find other information. How could I debug such problem?
>>
> ​Hi Yu,​
>
> Added aos-int-services ...
>
> ​How many indices do you have in the Elasticsearch instance?
>
> What is the storage configuration for the Elasticsearch pods?
>
> ​Regards, -peter
>
>
>
>>
>> Thanks,
>>
>> Jared, (韦煜)
>> Software developer
>> Interested in open source software, big data, Linux
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Problem about logging in openshift origin

2017-09-18 Thread Yu Wei
Hi Peter,

The storage is EmptyDir for es pods.

What's the meaning of aos-int-services? I only enabled logging feature during 
ansible installation.


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux


From: Peter Portante <pport...@redhat.com>
Sent: Friday, September 15, 2017 7:20:18 PM
To: Yu Wei
Cc: users@lists.openshift.redhat.com; d...@lists.openshift.redhat.com; 
aos-int-services
Subject: Re: Problem about logging in openshift origin



On Fri, Sep 15, 2017 at 6:10 AM, Yu Wei 
<yu20...@hotmail.com<mailto:yu20...@hotmail.com>> wrote:

Hi,

I setup OpenShift origin 3.6 cluster successfully and enabled metrics and 
logging.

Metrics worked well and logging didn't worked.

Pod logging-es-data-master-lf6al5rb-5-deploy in logging frequently crashed with 
below logs,

--> Scaling logging-es-data-master-lf6al5rb-5 to 1
--> Waiting up to 10m0s for pods in rc logging-es-data-master-lf6al5rb-5 to 
become ready
error: update acceptor rejected logging-es-data-master-lf6al5rb-5: pods for rc 
"logging-es-data-master-lf6al5rb-5" took longer than 600 seconds to become ready


I didn't find other information. How could I debug such problem?

​Hi Yu,​

Added aos-int-services ...

​How many indices do you have in the Elasticsearch instance?

What is the storage configuration for the Elasticsearch pods?

​Regards, -peter




Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux

___
users mailing list
users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Problem about logging in openshift origin

2017-09-17 Thread Yu Wei
@Mateus Caruccio

I ran the commands you mentioned and did not find any useful information.

It indicated that there were no pods named logging-es-data-master-lf6al5rb-5.

No event logs found either.


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux


From: Mateus Caruccio <mateus.caruc...@getupcloud.com>
Sent: Friday, September 15, 2017 6:19:36 PM
To: Yu Wei
Cc: d...@lists.openshift.redhat.com; users
Subject: Re: Problem about logging in openshift origin

You can look into two places for clues.  The pod's log itself (oc -n logging 
logs -f logging-es-data-master-lf6al5rb-5) and project events (oc -n logging 
get events)

Em 15 de set de 2017 07:10, "Yu Wei" 
<yu20...@hotmail.com<mailto:yu20...@hotmail.com>> escreveu:

Hi,

I setup OpenShift origin 3.6 cluster successfully and enabled metrics and 
logging.

Metrics worked well and logging didn't worked.

Pod logging-es-data-master-lf6al5rb-5-deploy in logging frequently crashed with 
below logs,

--> Scaling logging-es-data-master-lf6al5rb-5 to 1
--> Waiting up to 10m0s for pods in rc logging-es-data-master-lf6al5rb-5 to 
become ready
error: update acceptor rejected logging-es-data-master-lf6al5rb-5: pods for rc 
"logging-es-data-master-lf6al5rb-5" took longer than 600 seconds to become ready


I didn't find other information. How could I debug such problem?


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux

___
users mailing list
users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [aos-int-services] Problem about logging in openshift origin

2017-09-15 Thread Jeff Cantrill
Can you also post the image Tag you are using?  Is this from an OCP based
image or upstream images you may find on dockerhub?

On Fri, Sep 15, 2017 at 7:20 AM, Peter Portante <pport...@redhat.com> wrote:

>
>
> On Fri, Sep 15, 2017 at 6:10 AM, Yu Wei <yu20...@hotmail.com> wrote:
>
>> Hi,
>>
>> I setup OpenShift origin 3.6 cluster successfully and enabled metrics and
>> logging.
>>
>> Metrics worked well and logging didn't worked.
>>
>> Pod *logging-es-data-master-lf6al5rb-5-deploy* in logging frequently
>> crashed with below logs,
>>
>> *--> Scaling logging-es-data-master-lf6al5rb-5 to 1 *
>> *--> Waiting up to 10m0s for pods in rc logging-es-data-master-lf6al5rb-5
>> to become ready *
>> *error: update acceptor rejected logging-es-data-master-lf6al5rb-5: pods
>> for rc "logging-es-data-master-lf6al5rb-5" took longer than 600 seconds to
>> become ready*
>>
>> I didn't find other information. How could I debug such problem?
>>
> ​Hi Yu,​
>
> Added aos-int-services ...
>
> ​How many indices do you have in the Elasticsearch instance?
>
> What is the storage configuration for the Elasticsearch pods?
>
> ​Regards, -peter
>
>
>
>>
>> Thanks,
>>
>> Jared, (韦煜)
>> Software developer
>> Interested in open source software, big data, Linux
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>


-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Problem about logging in openshift origin

2017-09-15 Thread Peter Portante
On Fri, Sep 15, 2017 at 6:10 AM, Yu Wei <yu20...@hotmail.com> wrote:

> Hi,
>
> I setup OpenShift origin 3.6 cluster successfully and enabled metrics and
> logging.
>
> Metrics worked well and logging didn't worked.
>
> Pod *logging-es-data-master-lf6al5rb-5-deploy* in logging frequently
> crashed with below logs,
>
> *--> Scaling logging-es-data-master-lf6al5rb-5 to 1 *
> *--> Waiting up to 10m0s for pods in rc logging-es-data-master-lf6al5rb-5
> to become ready *
> *error: update acceptor rejected logging-es-data-master-lf6al5rb-5: pods
> for rc "logging-es-data-master-lf6al5rb-5" took longer than 600 seconds to
> become ready*
>
> I didn't find other information. How could I debug such problem?
>
​Hi Yu,​

Added aos-int-services ...

​How many indices do you have in the Elasticsearch instance?

What is the storage configuration for the Elasticsearch pods?

​Regards, -peter



>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Problem about logging in openshift origin

2017-09-15 Thread Mateus Caruccio
You can look into two places for clues.  The pod's log itself (oc -n
logging logs -f logging-es-data-master-lf6al5rb-5) and project events (oc
-n logging get events)

Em 15 de set de 2017 07:10, "Yu Wei" <yu20...@hotmail.com> escreveu:

> Hi,
>
> I setup OpenShift origin 3.6 cluster successfully and enabled metrics and
> logging.
>
> Metrics worked well and logging didn't worked.
>
> Pod *logging-es-data-master-lf6al5rb-5-deploy* in logging frequently
> crashed with below logs,
>
> *--> Scaling logging-es-data-master-lf6al5rb-5 to 1 *
> *--> Waiting up to 10m0s for pods in rc logging-es-data-master-lf6al5rb-5
> to become ready *
> *error: update acceptor rejected logging-es-data-master-lf6al5rb-5: pods
> for rc "logging-es-data-master-lf6al5rb-5" took longer than 600 seconds to
> become ready*
>
> I didn't find other information. How could I debug such problem?
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Problem about logging in openshift origin

2017-09-15 Thread Yu Wei
Hi,

I setup OpenShift origin 3.6 cluster successfully and enabled metrics and 
logging.

Metrics worked well and logging didn't work.

Pod logging-es-data-master-lf6al5rb-5-deploy in logging frequently crashed with 
below logs,

--> Scaling logging-es-data-master-lf6al5rb-5 to 1
--> Waiting up to 10m0s for pods in rc logging-es-data-master-lf6al5rb-5 to 
become ready
error: update acceptor rejected logging-es-data-master-lf6al5rb-5: pods for rc 
"logging-es-data-master-lf6al5rb-5" took longer than 600 seconds to become ready


I didn't find other information. How could I debug such problem?


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [Logging] searchguard configuration issue? ["warning", "elasticsearch"], "pid":1, "message":"Unable to revive connection: https://logging-es:9200/"}

2017-07-12 Thread Stéphane Klein
2017-07-12 15:41 GMT+02:00 Peter Portante <pport...@redhat.com>:

>
>
> On Wed, Jul 12, 2017 at 9:28 AM, Stéphane Klein <
> cont...@stephane-klein.info> wrote:
>
>>
>> 2017-07-12 15:20 GMT+02:00 Peter Portante <pport...@redhat.com>:
>>
>>> This looks a lot like this BZ: https://bugzilla.redhat.co
>>> m/show_bug.cgi?id=1449378, "Timeout after 30SECONDS while retrieving
>>> configuration"
>>>
>>> What version of Origin are you using?
>>>
>>>
>> Logging image : origin-logging-elasticsearch:v1.5.0
>>
>> $ oc version
>> oc v1.4.1+3f9807a
>> kubernetes v1.4.0+776c994
>> features: Basic-Auth
>>
>> Server https://console.tech-angels.net:443
>> openshift v1.5.0+031cbe4
>> kubernetes v1.5.2+43a9be4
>>
>> and with 1.4 nodes because of this crazy bug
>> https://github.com/openshift/origin/issues/14092)
>>
>>
>>> I found that I had to run the sgadmin script in each ES pod at the same
>>> time, and when one succeeds and one fails, just run it again and it worked.
>>>
>>>
>> Ok, I'll try that, how can I execute sgadmin script manually ?
>>
>
> ​You can see it in the run.sh script in each pod, look for the invocation
> of sgadmin there.
>
>
Ok I have executed:

/usr/share/elasticsearch/plugins/search-guard-2/tools/sgadmin.sh \
-cd ${HOME}/sgconfig \
-i .searchguard.${HOSTNAME} \
-ks /etc/elasticsearch/secret/searchguard.key \
-kst JKS \
-kspass kspass \
-ts /etc/elasticsearch/secret/searchguard.truststore \
-tst JKS \
-tspass tspass \
    -nhnv \
-icl

On ES nodes 1 and 2 at the same time, but I needed to run it a second
time on node 2.

Now I have this message:

Will connect to localhost:9300 ... done
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW
clusterstate ...
Clustername: logging-es
Clusterstate: GREEN
Number of nodes: 2
Number of data nodes: 2
.searchguard.logging-es-x39myqbs-1-s5g7c index already exists, so we do not
need to create one.
Populate config from /opt/app-root/src/sgconfig/
Will update 'config' with /opt/app-root/src/sgconfig/sg_config.yml
   SUCC: Configuration for 'config' created or updated
Will update 'roles' with /opt/app-root/src/sgconfig/sg_roles.yml
   SUCC: Configuration for 'roles' created or updated
Will update 'rolesmapping' with
/opt/app-root/src/sgconfig/sg_roles_mapping.yml
   SUCC: Configuration for 'rolesmapping' created or updated
Will update 'internalusers' with
/opt/app-root/src/sgconfig/sg_internal_users.yml
   SUCC: Configuration for 'internalusers' created or updated
Will update 'actiongroups' with
/opt/app-root/src/sgconfig/sg_action_groups.yml
   SUCC: Configuration for 'actiongroups' created or updated
Done with success

Fixed, thanks.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [Logging] searchguard configuration issue? ["warning", "elasticsearch"], "pid":1, "message":"Unable to revive connection: https://logging-es:9200/"}

2017-07-12 Thread Peter Portante
On Wed, Jul 12, 2017 at 9:28 AM, Stéphane Klein <cont...@stephane-klein.info
> wrote:

>
> 2017-07-12 15:20 GMT+02:00 Peter Portante <pport...@redhat.com>:
>
>> This looks a lot like this BZ: https://bugzilla.redhat.co
>> m/show_bug.cgi?id=1449378, "Timeout after 30SECONDS while retrieving
>> configuration"
>>
>> What version of Origin are you using?
>>
>>
> Logging image : origin-logging-elasticsearch:v1.5.0
>
> $ oc version
> oc v1.4.1+3f9807a
> kubernetes v1.4.0+776c994
> features: Basic-Auth
>
> Server https://console.tech-angels.net:443
> openshift v1.5.0+031cbe4
> kubernetes v1.5.2+43a9be4
>
> and with 1.4 nodes because of this crazy bug https://github.com/openshift/
> origin/issues/14092)
>
>
>> I found that I had to run the sgadmin script in each ES pod at the same
>> time, and when one succeeds and one fails, just run it again and it worked.
>>
>>
> Ok, I'll try that, how can I execute sgadmin script manually ?
>

​You can see it in the run.sh script in each pod, look for the invocation
of sgadmin there.

-peter​



>
> Best regards,
> Stéphane
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [Logging] searchguard configuration issue? ["warning", "elasticsearch"], "pid":1, "message":"Unable to revive connection: https://logging-es:9200/"}

2017-07-12 Thread Stéphane Klein
2017-07-12 15:20 GMT+02:00 Peter Portante <pport...@redhat.com>:

> This looks a lot like this BZ: https://bugzilla.redhat.
> com/show_bug.cgi?id=1449378, "Timeout after 30SECONDS while retrieving
> configuration"
>
> What version of Origin are you using?
>
>
Logging image : origin-logging-elasticsearch:v1.5.0

$ oc version
oc v1.4.1+3f9807a
kubernetes v1.4.0+776c994
features: Basic-Auth

Server https://console.tech-angels.net:443
openshift v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4

and with 1.4 nodes because of this crazy bug
https://github.com/openshift/origin/issues/14092)


> I found that I had to run the sgadmin script in each ES pod at the same
> time, and when one succeeds and one fails, just run it again and it worked.
>
>
Ok, I'll try that, how can I execute sgadmin script manually ?

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [Logging] searchguard configuration issue? ["warning", "elasticsearch"], "pid":1, "message":"Unable to revive connection: https://logging-es:9200/"}

2017-07-12 Thread Peter Portante
This looks a lot like this BZ:
https://bugzilla.redhat.com/show_bug.cgi?id=1449378, "Timeout after
30SECONDS while retrieving configuration"

What version of Origin are you using?

I found that I had to run the sgadmin script in each ES pod at the same
time, and when one succeeds and one fails, just run it again and it worked.

It seems to have to do with sgadmin script trying to be sure that all nodes
can see the searchguard index, but since we create one per node, if another
node does not have searchguard successfully setup, the current node's setup
will fail.  Retry at the same time until they work seems to be the fix. :(

-peter
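
In practice the simplest way to get that simultaneous retry is to bounce all
the ES pods together, so that each run.sh invokes sgadmin at roughly the same
time; a sketch, assuming the stock component=es label (the pod name below is
a placeholder):

# restart every Elasticsearch pod at once; each re-runs sgadmin on startup
oc -n logging delete pod -l component=es

# watch them come back and confirm the searchguard index seeding succeeded
oc -n logging get pods -w
oc -n logging logs <one-of-the-new-es-pods> | grep -i searchguard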

On Wed, Jul 12, 2017 at 9:03 AM, Stéphane Klein <cont...@stephane-klein.info
> wrote:

> Hi,
>
> Since one day, after ES cluster pods restart, I have this error message
> when I launch logging-es:
>
> $ oc logs -f logging-es-ne81bsny-5-jdcdk
> Comparing the specificed RAM to the maximum recommended for
> ElasticSearch...
> Inspecting the maximum RAM available...
> ES_JAVA_OPTS: '-Dmapper.allow_dots_in_name=true -Xms128M -Xmx4096m'
> Checking if Elasticsearch is ready on https://localhost:9200
> ..Will connect to localhost:9300 ...
> done
> Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW
> clusterstate ...
> Clustername: logging-es
> Clusterstate: YELLOW
> Number of nodes: 2
> Number of data nodes: 2
> .searchguard.logging-es-ne81bsny-5-jdcdk index does not exists, attempt
> to create it ... done (with 1 replicas, auto expand replicas is off)
> Populate config from /opt/app-root/src/sgconfig/
> Will update 'config' with /opt/app-root/src/sgconfig/sg_config.yml
>SUCC: Configuration for 'config' created or updated
> Will update 'roles' with /opt/app-root/src/sgconfig/sg_roles.yml
>SUCC: Configuration for 'roles' created or updated
> Will update 'rolesmapping' with /opt/app-root/src/sgconfig/sg_
> roles_mapping.yml
>SUCC: Configuration for 'rolesmapping' created or updated
> Will update 'internalusers' with /opt/app-root/src/sgconfig/sg_
> internal_users.yml
>SUCC: Configuration for 'internalusers' created or updated
> Will update 'actiongroups' with /opt/app-root/src/sgconfig/sg_
> action_groups.yml
>SUCC: Configuration for 'actiongroups' created or updated
> Timeout (java.util.concurrent.TimeoutException: Timeout after 30SECONDS
> while retrieving configuration for [config, roles, rolesmapping,
> internalusers, actiongroups](index=.searchguard.logging-es-
> x39myqbs-1-s5g7c))
> Done with failures
>
> after some time, my ES cluster (2 nodes) is green:
>
> stephane$ oc rsh logging-es-x39myqbs-1-s5g7c bash
$ curl ... --cert /etc/elasticsearch/secret/admin-cert ... 'https://localhost:9200/_cluster/health?pretty=true'
> {
>   "cluster_name" : "logging-es",
>   "status" : "green",
>   "timed_out" : false,
>   "number_of_nodes" : 2,
>   "number_of_data_nodes" : 2,
>   "active_primary_shards" : 1643,
>   "active_shards" : 3286,
>   "relocating_shards" : 0,
>   "initializing_shards" : 0,
>   "unassigned_shards" : 0,
>   "delayed_unassigned_shards" : 0,
>   "number_of_pending_tasks" : 0,
>   "number_of_in_flight_fetch" : 0,
>   "task_max_waiting_in_queue_millis" : 0,
>   "active_shards_percent_as_number" : 100.0
> }
>
> I have this error in kibana container:
>
> $ oc logs -f -c kibana logging-kibana-1-jblhl
> {"type":"log","@timestamp":"2017-07-12T12:54:54Z","tags":[
> "warning","elasticsearch"],"pid":1,"message":"No living connections"}
> {"type":"log","@timestamp":"2017-07-12T12:54:57Z","tags":[
> "warning","elasticsearch"],"pid":1,"message":"Unable to revive
> connection: https://logging-es:9200/"}
>
> But in Kibana container I can access to elasticsearch server:
>
> $ oc rsh -c kibana logging-kibana-1-jblhl bash
> $ curl https://logging-es:9200/ --cacert /etc/kibana/keys/ca --key
> /etc/kibana/keys/key --cert /etc/kibana/keys/cert
> {
>   "name" : "Adri Nital",
>   "cluster_name" : "logging-es",
>   "cluster_uuid" : "iRo3wOHWSq2bTZskrIs6Zg",
>   "version" : {
> "number" : "2.4.4",
> "build_hash" : "fcbb46dfd45562a9cf00c604b30849a6dec6b017",
> "build_timestamp" : "2017-01-03T11:33:16Z",
> "build_snapshot" : false,
> "lucene_version" : "5.5.2"
>   },
>   "tagline" : "You Know, for Search"
> }
>
> How can I fix this error?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


[Logging] searchguard configuration issue? ["warning", "elasticsearch"], "pid":1, "message":"Unable to revive connection: https://logging-es:9200/"}

2017-07-12 Thread Stéphane Klein
Hi,

Since one day, after ES cluster pods restart, I have this error message
when I launch logging-es:

$ oc logs -f logging-es-ne81bsny-5-jdcdk
Comparing the specificed RAM to the maximum recommended for ElasticSearch...
Inspecting the maximum RAM available...
ES_JAVA_OPTS: '-Dmapper.allow_dots_in_name=true -Xms128M -Xmx4096m'
Checking if Elasticsearch is ready on https://localhost:9200
..Will connect to localhost:9300 ...
done
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW
clusterstate ...
Clustername: logging-es
Clusterstate: YELLOW
Number of nodes: 2
Number of data nodes: 2
.searchguard.logging-es-ne81bsny-5-jdcdk index does not exists, attempt to
create it ... done (with 1 replicas, auto expand replicas is off)
Populate config from /opt/app-root/src/sgconfig/
Will update 'config' with /opt/app-root/src/sgconfig/sg_config.yml
   SUCC: Configuration for 'config' created or updated
Will update 'roles' with /opt/app-root/src/sgconfig/sg_roles.yml
   SUCC: Configuration for 'roles' created or updated
Will update 'rolesmapping' with
/opt/app-root/src/sgconfig/sg_roles_mapping.yml
   SUCC: Configuration for 'rolesmapping' created or updated
Will update 'internalusers' with
/opt/app-root/src/sgconfig/sg_internal_users.yml
   SUCC: Configuration for 'internalusers' created or updated
Will update 'actiongroups' with
/opt/app-root/src/sgconfig/sg_action_groups.yml
   SUCC: Configuration for 'actiongroups' created or updated
Timeout (java.util.concurrent.TimeoutException: Timeout after 30SECONDS
while retrieving configuration for [config, roles, rolesmapping,
internalusers,
actiongroups](index=.searchguard.logging-es-x39myqbs-1-s5g7c))
Done with failures

after some time, my ES cluster (2 nodes) is green:

stephane$ oc rsh logging-es-x39myqbs-1-s5g7c bash
$ curl ... --cert /etc/elasticsearch/secret/admin-cert ... 'https://localhost:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "logging-es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 1643,
  "active_shards" : 3286,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

I have this error in kibana container:

$ oc logs -f -c kibana logging-kibana-1-jblhl
{"type":"log","@timestamp":"2017-07-12T12:54:54Z","tags":["warning","elasticsearch"],"pid":1,"message":"No
living connections"}
{"type":"log","@timestamp":"2017-07-12T12:54:57Z","tags":["warning","elasticsearch"],"pid":1,"message":"Unable
to revive connection: https://logging-es:9200/"}

But in Kibana container I can access to elasticsearch server:

$ oc rsh -c kibana logging-kibana-1-jblhl bash
$ curl https://logging-es:9200/ --cacert /etc/kibana/keys/ca --key
/etc/kibana/keys/key --cert /etc/kibana/keys/cert
{
  "name" : "Adri Nital",
  "cluster_name" : "logging-es",
  "cluster_uuid" : "iRo3wOHWSq2bTZskrIs6Zg",
  "version" : {
"number" : "2.4.4",
"build_hash" : "fcbb46dfd45562a9cf00c604b30849a6dec6b017",
"build_timestamp" : "2017-01-03T11:33:16Z",
"build_snapshot" : false,
"lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}

How can I fix this error?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [Logging] What component forward log entries to fluentd input service?

2017-07-11 Thread Stéphane Klein
2017-07-11 15:00 GMT+02:00 Alex Wauck <alexwa...@exosite.com>:

> Last I checked (OpenShift Origin 1.2), fluentd was just slurping up the
> log files produced by Docker.  It can do that because the pods it runs in
> have access to the host filesystem.
>
> On Tue, Jul 11, 2017 at 6:12 AM, Stéphane Klein <
> cont...@stephane-klein.info> wrote:
>
>> Hi,
>>
>> I see here https://github.com/openshift/origin-aggregated-logging/
>> blob/master/fluentd/configs.d/input-post-forward-mux.conf#L2
>> that fluentd logging system use secure_forward input system.
>>
>> My question: what component forward log entries to fluentd input service ?
>>
>>
Ok it's here:

bash-4.2# cat configs.d/dynamic/input-syslog-default-syslog.conf
<source>
  @type systemd
  @label @INGRESS
  path "/var/log/journal"
  pos_file /var/log/journal.pos
  tag journal
</source>

Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [Logging] What component forward log entries to fluentd input service?

2017-07-11 Thread Richard Megginson
Please see 
https://github.com/openshift/origin-aggregated-logging/blob/master/docs/mux-logging-service.md

- Original Message -
> Hi,
> 
> I see here
> https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/configs.d/input-post-forward-mux.conf#L2
> 
> that fluentd logging system use secure_forward input system.
> 
> My question: what component forward log entries to fluentd input service ?
> 
> Best regards,
> Stéphane
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [Logging] What component forward log entries to fluentd input service?

2017-07-11 Thread Peter Portante
On Tue, Jul 11, 2017 at 9:00 AM, Alex Wauck <alexwa...@exosite.com> wrote:
> Last I checked (OpenShift Origin 1.2), fluentd was just slurping up the log
> files produced by Docker.  It can do that because the pods it runs in have
> access to the host filesystem.
>
> On Tue, Jul 11, 2017 at 6:12 AM, Stéphane Klein
> <cont...@stephane-klein.info> wrote:
>>
>> Hi,
>>
>> I see here
>> https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/configs.d/input-post-forward-mux.conf#L2
>> that fluentd logging system use secure_forward input system.
>>
>> My question: what component forward log entries to fluentd input service ?

The "mux" service is a concentrator of sorts.

Without the mux service, each fluentd pod runs on a host in an
OpenShift cluster collecting logs and sending them to Elasticsearch
directly.  The collectors also have the responsibility of enhancing
the logs collected with the metadata that describes which
pod/container they came from.  This requires connections to the API
server to get that information.

So in a large cluster, 200+ nodes, maybe less, maybe more, the API
servers are overwhelmed by requests from all the fluentd pods.

With the mux service, all the fluentd collections pods only talk to
the mux service and DO NOT talk to the API server; they simply send
the logs they collect to the mux fluentd instance.

The mux fluentd instance in turns talks to the API service to enrich
the logs with the pod/container metadata and then send along to
Elasticsearch.

This scales much better.

-peter
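
For anyone who wants to try it, mux is normally switched on at install or
upgrade time through the openshift-ansible logging role; a hedged sketch of
the relevant inventory variables (names and defaults have moved between
releases, so treat these as examples and check the role docs for your version):

# ansible inventory snippet (example values, not defaults)
openshift_logging_use_mux=true
# 'minimal': collectors skip the API-server metadata lookups and leave
# enrichment to the mux service
openshift_logging_mux_client_mode=minimal
# only needed if fluentd instances outside the cluster must reach mux
openshift_logging_mux_allow_external=false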


>>
>> Best regards,
>> Stéphane
>> --
>> Stéphane Klein <cont...@stephane-klein.info>
>> blog: http://stephane-klein.info
>> cv : http://cv.stephane-klein.info
>> Twitter: http://twitter.com/klein_stephane
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
>
>
> --
>
> Alex Wauck // Senior DevOps Engineer
>
> E X O S I T E
> www.exosite.com
>
> Making Machines More Human.
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [Logging] What component forward log entries to fluentd input service?

2017-07-11 Thread Alex Wauck
Last I checked (OpenShift Origin 1.2), fluentd was just slurping up the log
files produced by Docker.  It can do that because the pods it runs in have
access to the host filesystem.
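
Those host mounts are visible directly on the collector daemonset; a small
sketch (namespace and daemonset name as used by the stock installer):

# list the hostPath volumes (journal and/or /var/lib/docker/containers) that fluentd mounts
oc -n logging get daemonset logging-fluentd -o yaml | grep -B1 -A2 hostPath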

On Tue, Jul 11, 2017 at 6:12 AM, Stéphane Klein <cont...@stephane-klein.info
> wrote:

> Hi,
>
> I see here https://github.com/openshift/origin-aggregated-
> logging/blob/master/fluentd/configs.d/input-post-forward-mux.conf#L2
> that fluentd logging system use secure_forward input system.
>
> My question: what component forward log entries to fluentd input service ?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 

Alex Wauck // Senior DevOps Engineer

*E X O S I T E*
*www.exosite.com <http://www.exosite.com/>*

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


[Logging] What component forward log entries to fluentd input service?

2017-07-11 Thread Stéphane Klein
Hi,

I see here
https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/configs.d/input-post-forward-mux.conf#L2

that fluentd logging system use secure_forward input system.

My question: what component forward log entries to fluentd input service ?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Origin-Aggregated-Logging OPS generate 10Go ES data by day, 40000 hits by hours

2017-07-07 Thread Peter Portante
On Fri, Jul 7, 2017 at 9:52 AM, Stéphane Klein
 wrote:
>
> 2017-07-07 15:51 GMT+02:00 Stéphane Klein :
>>
>> 2017-07-07 14:26 GMT+02:00 Peter Portante :
>>>
>>> >
>>> > 4 hits by hours!
>>>
>>> How are you determining 40,000 hits per hour?
>>>
>>
>> I did a search in Kibana, last hour => 40,000 hits
>
>
> for one node.

Can you share the query you put into Kibana?  And share what version
of origin you are using?  Perhaps this is 1.4 or 1.5?

Finally, can you use the "Discovery" tab in Kibana to view the entire
JSON document for one of the log entries, so I can see the other
metadata?

Thanks, -peter

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Origin-Aggregated-Logging OPS generate 10Go ES data by day, 40000 hits by hours

2017-07-07 Thread Aleksandar Lazic

Hi Stéphane Klein.

on Friday, 07 July 2017 at 11:15 was written:





Hi,

Origin-Aggregated-Logging (https://github.com/openshift/origin-aggregated-logging) is installed on my cluster and I have enabled "OPS" option.

Then, I have two ElasticSearch clusters:

* ES
* ES-OPS

My issue: OPS logging generate 10Go ES data by day!

origin-node log level is set at 0 (errors and warnings only).

This is some logging record:

/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --selinux-enabled --insecure-registry=172.30.0.0/16 --log-driver=journald --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/cah-docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true

/usr/lib/systemd/systemd --switched-root --system --deserialize 19

/usr/bin/docker-current run --name origin-node --rm --privileged --net=host --pid=host --env-file=/etc/sysconfig/origin-node -v /:/rootfs:ro,rslave -e CONFIG_FILE=/etc/origin/node/node-config.yaml -e OPTIONS=--loglevel=0 -e HOST=/rootfs -e HOST_ETC=/host-etc -v /var/lib/origin:/var/lib/origin:rslave -v /etc/origin/node:/etc/origin/node -v /etc/localtime:/etc/localtime:ro -v /etc/machine-id:/etc/machine-id:ro -v /run:/run -v /sys:/sys:rw -v /sys/fs/cgroup:/sys/fs/cgroup:rw -v /usr/bin/docker:/usr/bin/docker:ro -v /var/lib/docker:/var/lib/docker -v /lib/modules:/lib/modules -v /etc/origin/openvswitch:/etc/openvswitch -v /etc/origin/sdn:/etc/openshift-sdn -v /var/lib/cni:/var/lib/cni -v /etc/systemd/system:/host-etc/systemd/system -v /var/log:/var/log -v /dev:/dev --volume=/usr/bin/docker-current:/usr/bin/docker-current:ro --volume=/etc/sysconfig/docker:/etc/sysconfig/docker:ro openshift/node:v1.4.1

...

40000 hits by hours!

I don't understand why I have all these log records, is it usual?



From my observations, yes, it is normal.
You should also see a lot of entries from something like atomic-openshift-node.





How can I fix it?



Only by redefining the log lines in Docker, imho.
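
If most of the volume really is journald/Docker chatter from the nodes, one
common mitigation is to rate-limit journald itself; a hedged sketch (options
from journald.conf(5), the values are examples only):

# /etc/systemd/journald.conf on each node
[Journal]
RateLimitInterval=1s
RateLimitBurst=1000

# then restart journald to apply it:
#   systemctl restart systemd-journald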





Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane





-- 
Best Regards
Aleks


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Origin-Aggregated-Logging OPS generate 10Go ES data by day, 40000 hits by hours

2017-07-07 Thread Stéphane Klein
2017-07-07 15:51 GMT+02:00 Stéphane Klein :

> 2017-07-07 14:26 GMT+02:00 Peter Portante :
>
>> >
>> > 40000 hits by hours!
>>
>> How are you determining 40,000 hits per hour?
>>
>>
> I did a search in Kibana, last hour => 40,000 hits
>

for one node.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Origin-Aggregated-Logging OPS generate 10Go ES data by day, 40000 hits by hours

2017-07-07 Thread Stéphane Klein
2017-07-07 14:26 GMT+02:00 Peter Portante :

> >
> > 40000 hits by hours!
>
> How are you determining 40,000 hits per hour?
>
>
I did a search in Kibana, last hour => 40,000 hits
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Origin-Aggregated-Logging OPS generate 10Go ES data by day, 40000 hits by hours

2017-07-07 Thread Peter Portante
On Fri, Jul 7, 2017 at 5:15 AM, Stéphane Klein
<cont...@stephane-klein.info> wrote:
> Hi,
>
> Origin-Aggregated-Logging
> (https://github.com/openshift/origin-aggregated-logging) is installed on my
> cluster and I have enabled "OPS" option.
>
> Then, I have two ElasticSearch clusters:
>
> * ES
> * ES-OPS
>
> My issue: OPS logging generate 10Go ES data by day!
>
> origin-node log level is set at 0 (errors and warnings only).
>
> This is some logging record:
>
> /usr/bin/dockerd-current --add-runtime
> docker-runc=/usr/libexec/docker/docker-runc-current
> --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd
> --userland-proxy-path=/usr/libexec/docker/docker-proxy-current
> --selinux-enabled --insecure-registry=172.30.0.0/16 --log-driver=journald
> --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt
> dm.thinpooldev=/dev/mapper/cah-docker--pool --storage-opt
> dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true
>
> /usr/lib/systemd/systemd --switched-root --system --deserialize 19
>
> /usr/bin/docker-current run --name origin-node --rm --privileged --net=host
> --pid=host --env-file=/etc/sysconfig/origin-node -v /:/rootfs:ro,rslave -e
> CONFIG_FILE=/etc/origin/node/node-config.yaml -e OPTIONS=--loglevel=0 -e
> HOST=/rootfs -e HOST_ETC=/host-etc -v /var/lib/origin:/var/lib/origin:rslave
> -v /etc/origin/node:/etc/origin/node -v /etc/localtime:/etc/localtime:ro -v
> /etc/machine-id:/etc/machine-id:ro -v /run:/run -v /sys:/sys:rw -v
> /sys/fs/cgroup:/sys/fs/cgroup:rw -v /usr/bin/docker:/usr/bin/docker:ro -v
> /var/lib/docker:/var/lib/docker -v /lib/modules:/lib/modules -v
> /etc/origin/openvswitch:/etc/openvswitch -v
> /etc/origin/sdn:/etc/openshift-sdn -v /var/lib/cni:/var/lib/cni -v
> /etc/systemd/system:/host-etc/systemd/system -v /var/log:/var/log -v
> /dev:/dev --volume=/usr/bin/docker-current:/usr/bin/docker-current:ro
> --volume=/etc/sysconfig/docker:/etc/sysconfig/docker:ro
> openshift/node:v1.4.1
>
> ...
>
> 40000 hits by hours!

How are you determining 40,000 hits per hour?

What query are you doing to determine the above log entries?

Thanks, -peter

>
> I don't understand why I have all this log record, it is usual?
>
> How can I fix it?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Can I exclude one project or one container to Origin-Aggregated-Logging system?

2017-06-05 Thread Eric Wolinetz
On Tue, May 30, 2017 at 2:55 PM, Office ME2Digtial e. U. <
al...@me2digital.eu> wrote:

> Hi Eric.
>
> Eric Wolinetz have written on Tue, 30 May 2017 11:47:32 -0500:
>
> > On Tue, May 30, 2017 at 10:46 AM, Aleksandar Lazic
> > <al...@me2digital.eu> wrote:
> >
> > > Hi.
> > >
> > > Afasik there is no option for this.
> > >
> > > Best regards
> > > Aleks
> > >
> > > "Stéphane Klein" <cont...@stephane-klein.info> schrieb am
> > > 30.05.2017:
> > >> HI,
> > >>
> > >> I just read origin-aggregated-logging
> > >> <https://github.com/openshift/origin-aggregated-logging>
> > >> documentation and I don't found if I can exclude one project or
> > >> one container to logging system.
> > >>
> > >
> > You can update your Fluentd configmap to drop the records so that they
> > aren't sent to ES.
> >
> > In the fluent.conf section you can add in the highlighted section:
> >
> > Please note the "**_" before and "_**" after the project names, this
> > is to correctly match the record pattern.
> >
> > ...
> > 
> >   
> > @type null
> >   
> > ## filters
> > ...
> >
> > You can also specify multiple projects on this match if you so desire
> > by separating the patterns with spaces:
> >   
>
> Ah you are referring to
> http://docs.fluentd.org/v0.12/articles/config-file#2-
> ldquomatchrdquo-tell-fluentd-what-to-do
>
>
Correct. Depending on your version of OpenShift logging that has been
deployed, you should be able to edit the fluent.conf file section within
the logging-fluentd configmap.
$ oc edit configmap/logging-fluentd
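
Put together, the edit ends up looking roughly like this (a sketch only: the
project names are placeholders, and the match block has to sit before the
"## filters" section of the fluent.conf key):

<match **_myproject1_** **_myproject2_**>
  @type null
</match>
## filters
...

After saving, delete the fluentd pods (oc -n logging delete pod -l
component=fluentd) so the daemonset recreates them with the updated configmap.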


> thanks
>
> > >> Is it possible with a container labels? or other system?
> > >>
> > >> Best regards,
> > >> Stéphane
> > >> --
> > >> Stéphane Klein <cont...@stephane-klein.info>
> > >> blog: http://stephane-klein.info
> > >> cv : http://cv.stephane-klein.info
> > >> Twitter: http://twitter.com/klein_stephane
>
> --
> Best Regards
> Aleksandar Lazic - ME2Digital e. U.
> https://me2digital.online/
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Can I exclude one project or one container to Origin-Aggregated-Logging system?

2017-05-30 Thread Office ME2Digtial e. U.
Hi Eric.

Eric Wolinetz have written on Tue, 30 May 2017 11:47:32 -0500:

> On Tue, May 30, 2017 at 10:46 AM, Aleksandar Lazic
> <al...@me2digital.eu> wrote:
> 
> > Hi.
> >
> > Afasik there is no option for this.
> >
> > Best regards
> > Aleks
> >
> > "Stéphane Klein" <cont...@stephane-klein.info> schrieb am
> > 30.05.2017: 
> >> HI,
> >>
> >> I just read origin-aggregated-logging
> >> <https://github.com/openshift/origin-aggregated-logging>
> >> documentation and I don't found if I can exclude one project or
> >> one container to logging system.
> >>  
> >  
> You can update your Fluentd configmap to drop the records so that they
> aren't sent to ES.
> 
> In the fluent.conf section you can add in the highlighted section:
> 
> Please note the "**_" before and "_**" after the project names, this
> is to correctly match the record pattern.
> 
> ...
> 
>   
> @type null
>   
> ## filters
> ...
> 
> You can also specify multiple projects on this match if you so desire
> by separating the patterns with spaces:
>   

Ah you are referring to
http://docs.fluentd.org/v0.12/articles/config-file#2-ldquomatchrdquo-tell-fluentd-what-to-do

thanks

> >> Is it possible with a container labels? or other system?
> >>
> >> Best regards,
> >> Stéphane
> >> --
> >> Stéphane Klein <cont...@stephane-klein.info>
> >> blog: http://stephane-klein.info
> >> cv : http://cv.stephane-klein.info
> >> Twitter: http://twitter.com/klein_stephane

-- 
Best Regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Can I exclude one project or one container to Origin-Aggregated-Logging system?

2017-05-30 Thread Eric Wolinetz
On Tue, May 30, 2017 at 10:46 AM, Aleksandar Lazic <al...@me2digital.eu>
wrote:

> Hi.
>
> Afaik there is no option for this.
>
> Best regards
> Aleks
>
> "Stéphane Klein" <cont...@stephane-klein.info> wrote on 30.05.2017:
>
>> HI,
>>
>> I just read the origin-aggregated-logging
>> <https://github.com/openshift/origin-aggregated-logging> documentation
>> and I could not find whether I can exclude one project or one container
>> from the logging system.
>>
>
You can update your Fluentd configmap to drop the records so that they
aren't sent to ES.

In the fluent.conf section you can add in the highlighted section:

Please note the "**_" before and "_**" after the project names, this is to
correctly match the record pattern.

...

  <match **_project-name_**>
    @type null
  </match>
## filters
...

You can also specify multiple projects on this match if you so desire by
separating the patterns with spaces:
  <match **_project1_** **_project2_**>


>> Is it possible with container labels, or some other mechanism?
>>
>> Best regards,
>> Stéphane
>> --
>> Stéphane Klein <cont...@stephane-klein.info>
>> blog: http://stephane-klein.info
>> cv : http://cv.stephane-klein.info
>> Twitter: http://twitter.com/klein_stephane
>>
>> --
>>
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Can I exclude one project or one container to Origin-Aggregated-Logging system?

2017-05-30 Thread Aleksandar Lazic
Hi.

Afaik there is no option for this.

Best regards
Aleks

"Stéphane Klein" <cont...@stephane-klein.info> wrote on 30.05.2017:
>HI,
>
>I just read the origin-aggregated-logging
><https://github.com/openshift/origin-aggregated-logging> documentation
>and I could not find whether I can exclude one project or one container
>from the logging system.
>
>Is it possible with container labels, or some other mechanism?
>
>Best regards,
>Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Can I exclude one project or one container to Origin-Aggregated-Logging system?

2017-05-30 Thread Stéphane Klein
Hi,

I just read the origin-aggregated-logging
<https://github.com/openshift/origin-aggregated-logging> documentation and
I could not find whether I can exclude one project or one container from
the logging system.

Is it possible with container labels, or some other mechanism?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


origin-aggregated-logging

2017-05-11 Thread 戴耀飞
Hi guys,

Origin 1.5 has already been released.

https://github.com/openshift/origin-aggregated-logging/tags

v1.5.0 is missing; only v1.5.0-rc.0 is available.

https://hub.docker.com/r/openshift/origin-logging-elasticsearch/tags/ has a
version v1.5.0; where does this come from?

What happens if we reference the OCP 3.5 aggregated-logging images on OpenShift
Origin 1.5?
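
If the goal is just to pin which images the deployment pulls, the
openshift-ansible logging variables let you set the registry prefix and the
tag. A rough inventory sketch (variable names have changed between
releases, so please check them against the openshift-ansible docs for your
version):

  [OSEv3:vars]
  openshift_logging_install_logging=true
  openshift_logging_image_prefix=docker.io/openshift/origin-
  openshift_logging_image_version=v1.5.0

As far as I know, mixing the OCP 3.5 images (openshift3/logging-*) with
Origin 1.5 is not something that gets tested; the logging image tag is
meant to track the cluster's minor version.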


Best regards,



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How to open port 9300 for aggregate logging without messing up iptables for Openshift

2017-04-28 Thread Dean Peterson
I had firewalld enabled. I turned that off. Port 9300 is no longer a
problem. Thanks!
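
For anyone who needs to keep firewalld running rather than turning it off,
a rough alternative (assuming the default zone on the nodes that run the
Elasticsearch pods) is to open the port instead:

  # firewall-cmd --permanent --add-port=9300/tcp
  # firewall-cmd --reload

That said, as Luke points out below, pod-to-pod traffic over the SDN
normally should not need any host firewall change at all.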

On Fri, Apr 28, 2017 at 3:55 PM, Luke Meyer <lme...@redhat.com> wrote:

> The Elastic Search pods contact port 9300 on other pods, that is, on the
> internal pod IP. There should be no need to do anything on the hosts to
> enable this. If ES is failing to contact other ES nodes then either there
> is a networking problem or the other nodes aren't listening (yet) on the
> port.
>
> On Thu, Apr 27, 2017 at 10:53 PM, Dean Peterson <peterson.d...@gmail.com>
> wrote:
>
>> I am trying to start aggregate logging. The elastic search cluster
>> requires port 9300 to be open. I am getting Connection refused errors and I
>> need to open that port. How do I open port 9300 without messing up the
>> existing rules for Openshift. Do I make changes in firewalld or iptables
>> directly? I notice iptables is masked. In previous versions it seems like
>> firewalld wasn't being used. Now it is. I am not sure what the right way to
>> make port 9300 available to aggregate logging is.
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How to open port 9300 for aggregate logging without messing up iptables for Openshift

2017-04-28 Thread Luke Meyer
The Elasticsearch pods contact port 9300 on other pods, that is, on the
internal pod IP. There should be no need to do anything on the hosts to
enable this. If ES is failing to contact other ES nodes then either there
is a networking problem or the other nodes aren't listening (yet) on the
port.
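
A rough way to check both possibilities (assuming the usual labels and
secret mount paths used by the aggregated logging deployment, which can
vary by release; <es-pod> stands in for one of your Elasticsearch pod
names):

  $ oc get pods -l component=es -o wide
  $ oc exec <es-pod> -- curl -s \
      --cacert /etc/elasticsearch/secret/admin-ca \
      --cert /etc/elasticsearch/secret/admin-cert \
      --key /etc/elasticsearch/secret/admin-key \
      https://localhost:9200/_cat/nodes

If _cat/nodes only ever shows a single member from each pod, the nodes are
not forming a cluster over 9300.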

On Thu, Apr 27, 2017 at 10:53 PM, Dean Peterson <peterson.d...@gmail.com>
wrote:

> I am trying to start aggregate logging. The elastic search cluster
> requires port 9300 to be open. I am getting Connection refused errors and I
> need to open that port. How do I open port 9300 without messing up the
> existing rules for Openshift. Do I make changes in firewalld or iptables
> directly? I notice iptables is masked. In previous versions it seems like
> firewalld wasn't being used. Now it is. I am not sure what the right way to
> make port 9300 available to aggregate logging is.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


How to open port 9300 for aggregate logging without messing up iptables for Openshift

2017-04-27 Thread Dean Peterson
I am trying to start aggregate logging. The Elasticsearch cluster requires
port 9300 to be open. I am getting connection refused errors and I need to
open that port. How do I open port 9300 without messing up the existing
rules for OpenShift? Do I make changes in firewalld or iptables directly? I
notice iptables is masked. In previous versions it seems like firewalld
wasn't being used. Now it is. I am not sure what the right way to make port
9300 available to aggregate logging is.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging (aggregated, FluentD)

2017-04-19 Thread Rich Megginson

On 04/19/2017 08:39 AM, Shepp wrote:

Hi Rich,

Thanks.  So first should I not be pointing Kibana at the same host as 
my OSE Web Interface?


You should be pointing Kibana at the same host, but not necessarily the 
same hostname.  For example, I usually have something like this in my 
/etc/hosts for testing:


10.x.y.z ocp.origin-14.rmeggins.test kibana.origin-14.rmeggins.test

So everything is on the same physical host/IP, but I use 
https://ocp.origin-14.rmeggins.test:8443 to access the OpenShift 
console, and use https://kibana.origin-14.rmeggins.test for access to Kibana
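
In that setup the Kibana hostname is whatever was given to the logging
deployment (for the 3.4 deployer that should be the KIBANA_HOSTNAME
parameter, if I remember right), and the console links to it via the master
config. A sketch of the relevant master-config.yaml stanza:

  assetConfig:
    ...
    loggingPublicURL: "https://kibana.origin-14.rmeggins.test"

Both names just need to resolve correctly: the console hostname to the
master, and the Kibana hostname to the router that exposes the kibana
route.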


If not, how would you suggest I install/reconfigure?  I'm in AWS and
I don't really follow what you mean by
openshift.deployment.subdomain.  Would that be another instance in AWS?


No, not necessarily.



Re:  OSE version - yes I'm running OSE 3.4/Kube 1.4. Here's the output 
of oc version:

===
[root@ip-172-31-45-158 ~]# oc version
oc v3.4.1.12
kubernetes v1.4.0+776c994
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://ip-172-31-45-158.us-east-2.compute.internal:8443
openshift v3.4.1.12
kubernetes v1.4.0+776c994
===

Here's the fluentD conf:

[root@ip-172-31-45-158 ~]# oc project logging
Now using project "logging" on server
"https://ip-172-31-45-158.us-east-2.compute.internal:8443".

[root@ip-172-31-45-158 ~]# oc get configmap logging-fluentd -o yaml
apiVersion: v1
data:
  fluent.conf: |
    # This file is the fluentd configuration entrypoint. Edit with care.

    @include configs.d/openshift/system.conf

    # In each section below, pre- and post- includes don't include anything initially;
    # they exist to enable future additions to openshift conf as needed.

    ## sources
    ## ordered so that syslog always runs last...
    @include configs.d/openshift/input-pre-*.conf
    @include configs.d/dynamic/input-docker-*.conf
    @include configs.d/dynamic/input-syslog-*.conf
    @include configs.d/openshift/input-post-*.conf
    ##

    ## filters
      @include configs.d/openshift/filter-pre-*.conf
      @include configs.d/openshift/filter-retag-journal.conf
      @include configs.d/openshift/filter-k8s-meta.conf
      @include configs.d/openshift/filter-kibana-transform.conf
      @include configs.d/openshift/filter-k8s-record-transform.conf
      @include configs.d/openshift/filter-syslog-record-transform.conf
      @include configs.d/openshift/filter-common-data-model.conf



You do have the common data model filter, so I don't think that is the 
problem.



      @include configs.d/openshift/filter-post-*.conf
    ##

    ## matches
      @include configs.d/openshift/output-pre-*.conf
      @include configs.d/openshift/output-operations.conf
      @include configs.d/openshift/output-applications.conf
      # no post - applications.conf matches everything left
    ##
  secure-forward.conf: |
    # @type secure_forward

    # self_hostname ${HOSTNAME}
    # shared_key <SECRET_STRING>

    # secure yes
    # enable_strict_verification yes

    # ca_cert_path /etc/fluent/keys/your_ca_cert
    # ca_private_key_path /etc/fluent/keys/your_private_key
      # for private CA secret key
    # ca_private_key_passphrase passphrase

    # <server>
      # or IP
    #   host server.fqdn.example.com
    #   port 24284
    # </server>
    # <server>
      # ip address to connect
    #   host 203.0.113.8
      # specify hostlabel for FQDN verification if ipaddress is used for host
    #   hostlabel server.fqdn.example.com
    # </server>
  throttle-config.yaml: |
    # Logging example fluentd throttling config file

    #example-project:
    # read_lines_limit: 10
    #
    #.operations:
    # read_lines_limit: 100
kind: ConfigMap
metadata:
  creationTimestamp: 2017-04-12T17:20:48Z
  labels:
    logging-infra: support
  name: logging-fluentd
  namespace: logging
  resourceVersion: "188321"
  selfLink: /api/v1/namespaces/logging/configmaps/logging-fluentd
  uid: 5ee29731-1fa4-11e7-b524-0a7a32c48dc3



I'm happy to give you access to my environment if that would help.



Sure.  At this point I have no idea what's wrong.



On Tue, Apr 18, 2017 at 4:47 PM, Rich Megginson <rmegg...@redhat.com> wrote:


On 04/18/2017 01:51 PM, Shepp wrote:

Hello,

I've posted over on the FluentD Google Groups but was directed
here.
https://groups.google.com/forum/#!topic/fluentd/Uo2E6kQzM5E

    I've got an OpenShift test lab in AWS, all the Aggregated
Logging PODs are deployed and running, and I believe I've a
