Re: Docker 1.13.1 breaking 3.7.0 installs

2018-03-13 Thread Luke Meyer
OpenShift < 3.9 is actually not intended to use docker-1.13. I don't have a
list of what breaks; I think it's mostly subtle issues aside from CNS. If
you have access to it, there is more detail in this kbase article:
https://access.redhat.com/solutions/3376031

On Mon, Mar 12, 2018 at 11:59 PM, Brigman, Larry 
wrote:

>
> Looks like CentOS has released an update to Docker.  The playbooks want to
> use it, but another check says they cannot use anything > 1.12.
>
> None of the variables allow overriding this setting when using the rpm
> packages.
> The only way I found to get this working is to modify
> roles/openshift_health_checker/openshift_checks/package_version.py,
> adding this line to the openshift_to_docker_version map:
> (3, 7): "1.13",
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Absence of master-config.yaml

2018-02-08 Thread Luke Meyer
On Thu, Feb 8, 2018 at 2:43 AM, Gaurav Ojha  wrote:

> Thank you for your reply. Just a couple more questions:
>
>
>1. Is there any way to create this file when I launch by openshift
>start?
>
>
openshift start --write-config= ...
(see --help and also note --master-config and --node-config flags)
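
For example, a minimal sketch (the target directory and generated paths here
are just illustrative; check --help for your version):

$ openshift start --write-config=/etc/origin
$ openshift start --master-config=/etc/origin/master/master-config.yaml \
    --node-config=/etc/origin/node-<hostname>/node-config.yaml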


>1. Pardon me, but when you say "it should be inside the container",
>you mean the host on which I am running openshift on, or the openshift
>container which starts as a result of this?
>
>
Inside the container named "origin" that "oc cluster up" runs on docker.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Service Catalog and Openshift Origin 3.7

2017-12-06 Thread Luke Meyer
As Aleksander said, more information would help.

The service broker waits on the service catalog API to come up. It may be
that the service catalog was deployed but the pods are not actually
starting for some reason (e.g. the image is not available at the requested
version). Check the pods in the namespace:

$ oc get pods,ds  -n kube-service-catalog

On Tue, Dec 5, 2017 at 6:26 PM, Aleksandar Lazic 
wrote:

> Hi.
>
> -- Original message --
> From: "Marcello Lorenzi" 
> To: "users" 
> Sent: 05.12.2017 16:55:22
> Subject: Service Catalog and Openshift Origin 3.7
>
> Hi All,
>> we tried to install the newer version of Openshift Origin 3.7 but during
>> the playbook execution we noticed this error:
>>
>> FAILED - RETRYING: wait for api server to be ready (120 retries left).
>>
>> The issue seems to be related to the service catalog but we don't know
>> where this is running.
>>
> Why do you assume this?
> Please can you share some more data, like:
>
> * inventory file
> * ansible version
> * playbook version
> * os
> * some logs
>
> Does someone notice this issue?
>>
>> Thanks,
>> Marcello
>>
>
> Regards
> Aleks
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: faulty diagnostics?

2017-11-13 Thread Luke Meyer
Thanks for bringing this up. This tool... needs some attention. Comments
below:

On Fri, Oct 27, 2017 at 7:48 AM, Tim Dudgeon  wrote:

> I've been looking at using the diagnostics (oc adm diagnostics) to test
> the status of a cluster installed with the ansible installer and
> consistently see things that seem to be false alarms. The cluster appears
> to be functioning (builds run, pushes to the registry work, routes work,
> etc.). This is with origin 3.6.0.
>
> 1. This is consistently seen, and a restart of the master does not fix
> it. The name docker-registry.default.svc resolves to the IP address
> 172.30.200.62
>
> ERROR: [DClu1019 from diagnostic ClusterRegistry@openshift/orig
>> in/pkg/diagnostics/cluster/registry.go:343]
>>Diagnostics created a test ImageStream and compared the registry IP
>>it received to the registry IP available via the docker-registry
>> service.
>>
>>docker-registry  : 172.30.200.62:5000
>>ImageStream registry : docker-registry.default.svc:5000
>>
>>They do not match, which probably means that an administrator
>> re-created
>>the docker-registry service but the master has cached the old
>> service
>>IP address. Builds or deployments that use ImageStreams with the
>> wrong
>>docker-registry IP will fail under this condition.
>>
>>To resolve this issue, restarting the master (to clear the cache)
>> should
>>be sufficient. Existing ImageStreams may need to be re-created.
>>
>
This is a bug -- the registry deployment changed without updating the
relevant diagnostic. It has been fixed by
https://github.com/openshift/origin/pull/16188, which I guess was not
backported to Origin 3.6, so expect it to be fixed in 3.7.



> 2. This warning is seen
>
> WARN:  [DClu0003 from diagnostic NodeDefinition@openshift/origi
>> n/pkg/diagnostics/cluster/node_definitions.go:113]
>>Node ip-10-0-247-194.eu-west-1.compute.internal is ready but is
>> marked Unschedulable.
>>This is usually set manually for administrative reasons.
>>An administrator can mark the node schedulable with:
>>oadm manage-node ip-10-0-247-194.eu-west-1.compute.internal
>> --schedulable=true
>>
>>While in this state, pods should not be scheduled to deploy on the
>> node.
>>Existing pods will continue to run until completed or evacuated
>> (see
>>other options for 'oadm manage-node').
>>
> This is for the master node which by default is non-schedulable.
>

It's a warning, not an error, because this could be a legitimate
configuration. The diagnostic generally has no way to know that a
node is a master or that it is supposed to be unschedulable (there
is nothing in the API to determine this).

That diagnostic is intended to alert you to the possibility that a node is
not getting pods scheduled because of this setting. It's not
saying there's anything wrong with the cluster. It's certainly a bit
confusing; do you feel it's better to get a useless warning from masters or
not to hear about unschedulable nodes at all in the diagnostics?
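
For reference, a quick sketch of how to see which nodes are in that state
(and flip it, if that really is what you want):

$ oc get nodes    # masters typically show "Ready,SchedulingDisabled"
$ oadm manage-node <node-name> --schedulable=true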



>
> 3. If metrics and logging are not deployed you see this warning:
>
> WARN:  [DH0005 from diagnostic MasterConfigCheck@openshift/or
>> igin/pkg/diagnostics/host/check_master_config.go:52]
>>Validation of master config file 
>> '/etc/origin/master/master-config.yaml'
>> warned:
>>assetConfig.loggingPublicURL: Invalid value: "": required to view
>> aggregated container logs in the console
>>assetConfig.metricsPublicURL: Invalid value: "": required to view
>> cluster metrics in the console
>>auditConfig.auditFilePath: Required value: audit can not be logged
>> to a separate file
>>
>
> Whilst 2 and 3 could be considered minor irritations, 1 might scare people
> into thinking that something is actually wrong.
>


Once again... it's a warning. And again, it's because there's no way to
determine from the API whether these are supposed to be deployed.



>
> Also, the 'oc adm diagnostics' command needs to be run as root or with sudo,
> otherwise you get some file permissions related errors. I don't think this
> is mentioned in the docs.
>


Could you be more specific about what errors you get? Errors accessing the
node/master config files perhaps?

Thanks for the feedback, and sorry for the delay in responding.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: service account for rest api

2017-10-19 Thread Luke Meyer
On Thu, Oct 19, 2017 at 10:58 AM, Julio Saura  wrote:

> yes ofc
>
> oc create serviceaccount icinga -n project1
>
> oadm policy add-cluster-role-to-user admin system:serviceaccounts:project1:icinga
>

There is no cluster role "admin" (... by default anyway, you could of
course create one).

You probably wanted `oc policy add-role-to-user admin ...` to make the user
an admin of the project.

Unless you actually wanted them to be an admin of the entire cluster, in
which case the role is cluster-admin not admin.
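
Something like this sketch, with the project name adjusted to yours:

$ oc policy add-role-to-user admin system:serviceaccount:project1:icinga -n project1
# or equivalently, using the service account shorthand:
$ oc policy add-role-to-user admin -z icinga -n project1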



>
> oadm policy reconcile-cluster-roles —confirm
>
> and then dump the token
>
> oc serviceaccounts get-token icing
>
>
> ty frederic!
>
> i do login with curl but i get
>
> {
>   "kind": "Status",
>   "apiVersion": "v1",
>   "metadata": {},
>   "status": "Failure",
>   "message": "User \"system:serviceaccount:project1:icinga\" cannot list
> replicationcontrollers in project \"project1\"",
>   "reason": "Forbidden",
>   "details": {
> "kind": "replicationcontrollers"
>   },
>   "code": 403
> }
>
>
>
>
>
> El 19 oct 2017, a las 16:55, Frederic Giloux 
> escribió:
>
> Hi Julio,
>
> Could you copy the commands you have used?
>
> Regards,
>
> Frédéric
>
> On 19 Oct 2017 11:43, "Julio Saura"  wrote:
>
>> Hello
>>
>> i am trying to create a sa for accessing rest api with token ..
>>
>> i have followed the doc steps
>>
>> creating the account, applying admin role to that account and getting the
>> token
>>
>> trying to access replicacioncontroller info with bearer in curl, i can
>> auth into but i get i have no permission to list rc on the project
>>
>> i also did a reconciliate role on cluster
>>
>> i also logged in with oc login passing token as parameter, i log in but
>> it says i have no projects ..
>>
>> what else i am missing?
>>
>> ty
>>
>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Possible to use AWS elasitcsearch for OpenShift logging?

2017-10-16 Thread Luke Meyer
You can configure fluentd to forward logs (see
https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance).
Note the caveat, "If you are not using the provided Kibana and
Elasticsearch images, you will not have the same multi-tenant capabilities
and your data will not be restricted by user access to a particular
project."

On Thu, Oct 12, 2017 at 10:35 AM, Marc Boorshtein 
wrote:

> I have built out a cluster on AWS using the ansible advanced install.  I
> see that i can setup logging by creating infrastructure nodes that will
> host elasticsearch.  AWS has an elasticsearch service.  Is there a way to
> use that instead?
>
> Thanks
> Marc
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How to open port 9300 for aggregate logging without messing up iptables for Openshift

2017-04-28 Thread Luke Meyer
The Elasticsearch pods contact port 9300 on other pods, that is, on the
internal pod IP. There should be no need to do anything on the hosts to
enable this. If ES is failing to contact other ES nodes then either there
is a networking problem or the other nodes aren't listening (yet) on the
port.
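
A couple of quick checks, as a sketch (namespace per your install):

$ oc get pods -o wide -n logging       # shows the pod IPs the ES nodes talk to on 9300
$ oc logs <logging-es-pod> -n logging  # look for cluster formation or bind errors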

On Thu, Apr 27, 2017 at 10:53 PM, Dean Peterson 
wrote:

> I am trying to start aggregate logging. The elastic search cluster
> requires port 9300 to be open. I am getting Connection refused errors and I
> need to open that port. How do I open port 9300 without messing up the
> existing rules for OpenShift? Do I make changes in firewalld or iptables
> directly? I notice iptables is masked. In previous versions it seems like
> firewalld wasn't being used. Now it is. I am not sure what the right way to
> make port 9300 available to aggregate logging is.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Unable to start openshift in VM, AWS or google cloud

2016-11-03 Thread Luke Meyer
On Wed, Nov 2, 2016 at 11:34 PM, Ravi  wrote:

>
> I am not able to start openshift, I tried three different ways.
>
> 1. Windows 7 + Virtual Box + Ubuntu
> oc cluster up works well. I went to the console and launched the nodejs-ex
> example. The console shows it is up; however, when I click on the route, it says
> "unable to connect". I tried going directly to the pod's IP address and that does
> work. In other words, the load balancer was somehow failing in the VirtualBox
> Ubuntu VM.
>

Have you installed a router? Does DNS or /etc/hosts for the route direct
your browser to the host's IP?
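
A rough sketch of what to check (names are illustrative):

$ oc get pods -n default | grep router   # is a router deployed and running?
$ oc get route nodejs-ex                 # what hostname does the route use?
# then make sure that hostname resolves (via DNS or /etc/hosts) to the VM's IP
# from the machine where the browser is running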


>
> 2. Then I moved on to AWS. I launched a RedHat image, installed docker,
> and started openshift. Here, oc starts on the private IP address, so I am not
> able to access it from the public internet. I even tried
> oc cluster up --public-hostname='my ip address', but since the public IP
> address is some magic, oc is not able to detect etcd etc. and fails.
>
> 3. Then I tried on google cloud. I faced exactly same issue as AWS.
>
> If any one of them works, I will be ok but no idea how to get past these
> issues.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How can I put logstash config files in ConfigMap ?

2016-10-27 Thread Luke Meyer
The underscores are the problem. Can you convert them to hyphens?
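
For example, just a sketch:

$ cd logstash-config
$ for f in *_*.conf; do mv "$f" "${f//_/-}"; done
$ oc create configmap logstash-config --from-file=.

The numeric prefixes still sort the same way after the rename, if logstash
relies on that ordering.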

On Tue, Oct 25, 2016 at 5:45 AM, Stéphane Klein  wrote:

> Hi,
>
> How can I put logstash config files in ConfigMap ?
>
>
> $ tree
> .
> ├── logstash-config
> │   ├── 1_tcp_input.conf
> │   ├── 2_news_filter.conf
> │   └── 3_elasticsearch_ouput.conf
>
> $ oc create configmap logstash-config --from-file=logstash-config/
> error: 1_tcp_input.conf is not a valid key name for a configMap
>
>
> For the moment I use PersistentVolume to store this configuration files
> but I think that it isn't the better choice.
>
> Best regards,
> Stéphane
> --
> Stéphane Klein 
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: clean up elastic search logging project

2016-10-11 Thread Luke Meyer
Yeah, I don't think we have quota on ephemeral volumes yet. Curator can
clear out your data more aggressively, for example keeping just a few days'
worth of logs.
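
As a sketch of what that looks like -- the configmap name and settings format
here are taken from the aggregated logging docs, so verify them against the
version you deployed:

$ oc edit configmap/logging-curator -n logging
# and set something along the lines of:
#   .defaults:
#     delete:
#       days: 7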

On Mon, Oct 10, 2016 at 3:26 AM, Den Cowboy  wrote:

> Hi,
>
>
> We have implemented our logging project:https://docs.
> openshift.org/latest/install_config/aggregate_logging.html
>
> We are using ephemeral storage for our ES containers.
> But now our question is how we can set a limit or something: clear the db
> after reaching 4GB of storage or so.
> Is this possible?
> I checked the environment variables but the limits seem to be on RAM.
>
>
> Thanks
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Low Disk Watermark

2016-08-31 Thread Luke Meyer
Looks like you're using your root partition for docker volume storage (and
thus Elasticsearch storage). That is the default configuration, but not a
recommended one - we recommend setting up dedicated storage for docker:
https://docs.openshift.org/latest/install_config/install/prerequisites.html#configuring-docker-storage

Also ES data will keep getting blown away if you don't give it a persistent
volume, but hopefully that was already evident to you.
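
For the persistent volume part, the docs describe attaching a PVC to each ES
deployment, roughly like this sketch (names are illustrative):

$ oc volume dc/logging-es-<suffix> --add --overwrite \
    --name=elasticsearch-storage --type=persistentVolumeClaim \
    --claim-name=logging-es-1 -n logging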

On Mon, Aug 29, 2016 at 9:55 PM, Frank Liauw  wrote:

> Hi All,
>
> My Origin cluster is pretty new, and I happen to spot the following log
> entry by elasticsearch in kibana (I'm using OpenShift's logging stack):
>
> [2016-08-30 01:44:25,997][INFO ][cluster.routing.allocation.decider]
> [Quicksilver] low disk watermark [15%] exceeded on 
> [t2l6Oz8uT-WS8Fa7S7jzfQ][Quicksilver]
> free: 1.5gb[11.4%], replicas will not be assigned to this node
>
> df on the node shows the following:
>
> /dev/mapper/centos_node3-root   14G   13G  1.6G  89% /
> ..
> tmpfs  7.8G  4.0K  7.8G   1%
> /var/lib/origin/openshift.local.volumes/pods/8a2a40e3-
> 5f83-11e6-8b2f-0231a929d7bf/volumes/kubernetes.io~secret/
> builder-dockercfg-3z4qk-push
> tmpfs  7.8G  4.0K  7.8G   1%
> /var/lib/origin/openshift.local.volumes/pods/8a2a40e3-
> 5f83-11e6-8b2f-0231a929d7bf/volumes/kubernetes.io~secret/sshsecret-source
> tmpfs  7.8G   12K  7.8G   1%
> /var/lib/origin/openshift.local.volumes/pods/8a2a40e3-
> 5f83-11e6-8b2f-0231a929d7bf/volumes/kubernetes.io~secret/
> builder-token-znk7k
> tmpfs  7.8G  4.0K  7.8G   1%
> ..
>
> This appears to be the case on one of my other nodes as well (with a
> slightly different tmpfs size of 5.8G).
>
> Is this normal?
>
> Frank
> Systems Engineer
>
> VSee: fr...@vsee.com  | Cell: +65 9338 0035
>
> Join me on VSee for Free 
>
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Kibana Logs Empty

2016-08-16 Thread Luke Meyer
On Mon, Aug 15, 2016 at 3:54 AM, Frank Liauw  wrote:

> Hi All,
>
> I followed through the instructions on https://docs.openshift.org/
> latest/install_config/aggregate_logging.html and have setup a 3 node ES
> cluster. Fluentd is also deployed on all my nodes.
>
> I am getting kibana logs on the logging project, but all my other projects
> do not have any logs; kibana shows "No results found", with occasional
> errors reading "Discover: An error occurred with your request. Reset your
> inputs and try again."
>

Just to make sure... the default time period in Kibana is to look only 15
minutes in the past - are you sure your projects had logs in the last 15
minutes?
That wouldn't have anything to do with the errors you're seeing though.


>
> Probing the requests made by kibana, some calls to
> /elasticsearch/_msearch?timeout=0_unavailable=true
> =1471245075265 are failing from time to time.
>

That certainly shouldn't be happening. Do you have any more details on how
they're failing? Do they fail to connect, or just get back an error
response code? Not sure if you can tell...


>
> Looking into the ES logs for all 3 cluster pods, I don't see many errors
> to be concerned about, with the last error of 2 nodes similar to the following
> which seems to be a known issue with Openshift's setup (
> https://lists.openshift.redhat.com/openshift-archives/users
> /2015-December/msg00078.html) and possibly explains the failed requests
> made by kibana on auto-refresh, but that's a problem for another day:
>
> [2016-08-15 06:53:49,130][INFO ][cluster.service  ] [Gremlin]
> added {[Quicksilver][t2l6Oz8uT-WS8Fa7S7jzfQ][logging-es-d7r1t3dm-
> 2-a0cf0][inet[/10.1.3.3:9300]],}, reason: zen-disco-receive(from master
> [[One Above All][CyFgyTTtS_S85yYRom2wVQ][logging-es-0w45va6n-2-8m85p][in
> et[/10.1.2.5:9300]]])
>

This is good, means your cluster is forming...


> [2016-08-15 
> 06:59:27,727][ERROR][com.floragunn.searchguard.filter.SearchGuardActionFilter]
> Error while apply() due to com.floragunn.searchguard.toke
> neval.MalformedConfigurationException: no bypass or execute filters at
> all for action indices:admin/mappings/fields/get
> com.floragunn.searchguard.tokeneval.MalformedConfigurationException: no
> bypass or execute filters at all
>

Unfortunate SearchGuard behavior while the cluster is starting, but nothing
to be concerned about as long as it doesn't continue.


>
> Looking into fluentd logs, one of my nodes is complaining of a
> "getaddrinfo" error:
>
> 2016-08-15 03:45:18 -0400 [error]: unexpected error error="getaddrinfo:
> Name or service not known"
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/net/http.rb:878:in
> `initialize'
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/net/http.rb:878:in
> `open'
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/net/http.rb:878:in
> `block in connect'
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/timeout.rb:52:in
> `timeout'
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/net/http.rb:877:in
> `connect'
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/net/http.rb:862:in
> `do_start'
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/net/http.rb:851:in
> `start'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/rest-cl
> ient-2.0.0/lib/restclient/request.rb:766:in `transmit'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/rest-cl
> ient-2.0.0/lib/restclient/request.rb:215:in `execute'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/rest-cl
> ient-2.0.0/lib/restclient/request.rb:52:in `execute'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/rest-cl
> ient-2.0.0/lib/restclient/resource.rb:51:in `get'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/kubecli
> ent-1.1.4/lib/kubeclient/common.rb:328:in `block in api'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/kubecli
> ent-1.1.4/lib/kubeclient/common.rb:58:in `handle_exception'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/kubecli
> ent-1.1.4/lib/kubeclient/common.rb:327:in `api'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/kubecli
> ent-1.1.4/lib/kubeclient/common.rb:322:in `api_valid?'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/fluent-
> plugin-kubernetes_metadata_filter-0.24.0/lib/fluent/plugin/
> filter_kubernetes_metadata.rb:167:in `configure'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/fluentd
> -0.12.23/lib/fluent/agent.rb:144:in `add_filter'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/fluentd
> -0.12.23/lib/fluent/agent.rb:61:in `block in configure'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/fluentd
> -0.12.23/lib/fluent/agent.rb:57:in `each'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/fluentd
> -0.12.23/lib/fluent/agent.rb:57:in `configure'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/fluentd
> 

Re: Kibana: This site can’t be reached: ERR_CONTENT_DECODING_FAILED

2016-07-22 Thread Luke Meyer
I wish I could be more helpful here but I've never seen this before and I'm
at a loss to think of what could be happening. The fact that you're getting
a redirect and going through the oauth flow and only then getting the error
indicates that at least the auth proxy in front of Kibana is running
correctly. Requests then get proxied back to Kibana itself in the same
container, which ought to then load a JS app into your browser which makes
further calls that proxy through Kibana to Elasticsearch. It's hard to tell
from limited information but it almost sounds like Kibana got into a weird
state at the start; you could try nuking the kibana pod and seeing if the
new one has any better luck. That's kind of a shot in the dark though. I
would be checking Kibana and Elasticsearch logs, and seeing if I could get
Firebug to give me any information about the response it didn't like... or
wireshark, if you know how to decrypt the traffic.
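
If you do want to try that, a sketch (assuming the usual logging labels and
namespace):

$ oc delete pod -l component=kibana -n logging   # the deployment recreates it
$ oc get pods -n logging                         # find the new kibana pod
$ oc logs -c kibana <kibana-pod> -n logging
$ oc logs -c kibana-proxy <kibana-pod> -n logging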

On Fri, Jul 22, 2016 at 5:53 AM, Den Cowboy  wrote:

> Hi,
>
> I'm using openshift origin 1.2.0
> I try to set up logging (I did it already a few times so I know the
> procedure).
> I performed the prereqs and started the template with:
>
> oc new-app logging-deployer-template \
> >  --param KIBANA_HOSTNAME=kibana.xx-dev.xx \
> >  --param ES_CLUSTER_SIZE=1 \
> >  --param PUBLIC_MASTER_URL=https://master.xx-xx:8443 \
> >  --param IMAGE_VERSION=v1.2.0
>
> Everything is pulled and starting fine.
> So after everything is running I try to access kibana which is redirecting
> me to the login page of kibana (equal to the login page of openshift)
>
> After logging in I'm redirected to my kibana URL but I don't see my logs. I
> got:
> This site can’t be reached
>
> The webpage at *https://kibana.xx.xx/ * might be
> temporarily down or it may have moved permanently to a new web address.
> ERR_CONTENT_DECODING_FAILED
>
> I don't see weird logs in my pods/containers.
>
> Can someone help me? I tried it multiple times and in multiple browsers.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: logging-es errors: shards failed

2016-07-15 Thread Luke Meyer
They surely do. Although it would probably be easiest here to just get them
from `oc logs` against the ES pod, especially if we can't trust ES storage.

On Fri, Jul 15, 2016 at 3:26 PM, Peter Portante <pport...@redhat.com> wrote:

> Eric, Luke,
>
> Do the logs from the ES instance itself flow into that ES instance?
>
> -peter
>
> On Fri, Jul 15, 2016 at 12:14 PM, Alex Wauck <alexwa...@exosite.com>
> wrote:
> > I'm not sure that I can.  I clicked the "Archive" link for the logging-es
> > pod and then changed the query in Kibana to "kubernetes_container_name:
> > logging-es-cycd8veb && kubernetes_namespace_name: logging".  I got no
> > results, instead getting this error:
> >
> > Index: unrelated-project.92c37428-11f6-11e6-9c83-020b5091df01.2016.07.12
> > Shard: 2 Reason: EsRejectedExecutionException[rejected execution (queue
> > capacity 1000) on
> > org.elasticsearch.search.action.SearchServiceTransportAction$23@6b1f2699
> ]
> > Index: unrelated-project.92c37428-11f6-11e6-9c83-020b5091df01.2016.07.14
> > Shard: 2 Reason: EsRejectedExecutionException[rejected execution (queue
> > capacity 1000) on
> > org.elasticsearch.search.action.SearchServiceTransportAction$23@66b9a5fb
> ]
> > Index: unrelated-project.92c37428-11f6-11e6-9c83-020b5091df01.2016.07.15
> > Shard: 2 Reason: EsRejectedExecutionException[rejected execution (queue
> > capacity 1000) on
> > org.elasticsearch.search.action.SearchServiceTransportAction$23@512820e]
> > Index: unrelated-project.f38ac6ff-3e42-11e6-ab71-020b5091df01.2016.06.29
> > Shard: 2 Reason: EsRejectedExecutionException[rejected execution (queue
> > capacity 1000) on
> > org.elasticsearch.search.action.SearchServiceTransportAction$23@3dce96b9
> ]
> > Index: unrelated-project.f38ac6ff-3e42-11e6-ab71-020b5091df01.2016.06.30
> > Shard: 2 Reason: EsRejectedExecutionException[rejected execution (queue
> > capacity 1000) on
> > org.elasticsearch.search.action.SearchServiceTransportAction$23@2f774477
> ]
> >
> > When I initially clicked the "Archive" link, I saw a lot of messages with
> > the kubernetes_container_name "logging-fluentd", which is not what I
> > expected to see.
> >
> >
> > On Fri, Jul 15, 2016 at 10:44 AM, Peter Portante <pport...@redhat.com>
> > wrote:
> >>
> >> Can you go back further in the logs to the point where the errors
> started?
> >>
> >> I am thinking about possible Java HEAP issues, or possibly ES
> >> restarting for some reason.
> >>
> >> -peter
> >>
> >> On Fri, Jul 15, 2016 at 11:37 AM, Lukáš Vlček <lvl...@redhat.com>
> wrote:
> >> > Also looking at this.
> >> > Alex, is it possible to investigate if you were having some kind of
> >> > network connection issues in the ES cluster (I mean between individual
> >> > cluster nodes)?
> >> >
> >> > Regards,
> >> > Lukáš
> >> >
> >> >
> >> >
> >> >
> >> >> On 15 Jul 2016, at 17:08, Peter Portante <pport...@redhat.com>
> wrote:
> >> >>
> >> >> Just catching up on the thread, will get back to you all in a few ...
> >> >>
> >> >> On Fri, Jul 15, 2016 at 10:08 AM, Eric Wolinetz <ewoli...@redhat.com
> >
> >> >> wrote:
> >> >>> Adding Lukas and Peter
> >> >>>
> >> >>> On Fri, Jul 15, 2016 at 8:07 AM, Luke Meyer <lme...@redhat.com>
> wrote:
> >> >>>>
> >> >>>> I believe the "queue capacity" there is the number of parallel
> >> >>>> searches
> >> >>>> that can be queued while the existing search workers operate. It
> >> >>>> sounds like
> >> >>>> it has plenty of capacity there and it has a different reason for
> >> >>>> rejecting
> >> >>>> the query. I would guess the data requested is missing given it
> >> >>>> couldn't
> >> >>>> fetch shards it expected to.
> >> >>>>
> >> >>>> The number of shards is a multiple (for redundancy) of the number
> of
> >> >>>> indices, and there is an index created per project per day. So even
> >> >>>> for a
> >> >>>> small cluster this doesn't sound out of line.
> >> >>>>
> >> >>>> Can you give a little more information about your logging

Re: logging-es errors: shards failed

2016-07-15 Thread Luke Meyer
I believe the "queue capacity" there is the number of parallel searches
that can be queued while the existing search workers operate. It sounds
like it has plenty of capacity there and it has a different reason for
rejecting the query. I would guess the data requested is missing given it
couldn't fetch shards it expected to.

The number of shards is a multiple (for redundancy) of the number of
indices, and there is an index created per project per day. So even for a
small cluster this doesn't sound out of line.

Can you give a little more information about your logging deployment? Have
you deployed multiple ES nodes for redundancy, and what are you using for
storage? Could you attach full ES logs? How many OpenShift nodes and
projects do you have? Any history of events that might have resulted in
lost data?
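
A sketch of commands that would capture most of that:

$ oc get pods -o wide -n logging
$ oc get pvc,pv -n logging
$ oc logs <logging-es-pod> -n logging > es.log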

On Thu, Jul 14, 2016 at 4:06 PM, Alex Wauck  wrote:

> When doing searches in Kibana, I get error messages similar to "Courier
> Fetch: 919 of 2020 shards failed".  Deeper inspection reveals errors like
> this: "EsRejectedExecutionException[rejected execution (queue capacity
> 1000) on
> org.elasticsearch.search.action.SearchServiceTransportAction$23@14522b8e
> ]".
>
> A bit of investigation lead me to conclude that our Elasticsearch server
> was not sufficiently powerful, but I spun up a new one with four times the
> CPU and RAM of the original one, but the queue capacity is still only
> 1000.  Also, 2020 seems like a really ridiculous number of shards.  Any
> idea what's going on here?
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com *
>
> Making Machines More Human.
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Error setting up EFK logging: Error from server: User "system:serviceaccount:logging:logging-deployer" cannot list configmaps in project "logging"

2016-07-12 Thread Luke Meyer
I wonder if you executed step 6:

$ oc policy add-role-to-user edit --serviceaccount logging-deployer


... at all, or perhaps in the wrong project?

The service account needs an edit role.
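
A quick way to check and fix, as a sketch (run against the logging project):

$ oc project logging
$ oc policy add-role-to-user edit --serviceaccount logging-deployer
$ oc policy who-can list configmaps -n logging   # the logging-deployer SA should now appear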

On Tue, Jul 12, 2016 at 4:50 AM, Michael Leimenmeier 
wrote:

> Hi,
>
> I've tried to set up logging with the EFK stack according to the
> documentation for OpenShift 3.2, but when I try to deploy the
> logging-deployer pod it ends up in Error status with the following error
> message in the container log:
>
> [...]
> + echo 'Attaching secrets to service accounts'
> + oc secrets add serviceaccount/aggregated-logging-kibana logging-kibana
> logging-kibana-proxy
> + oc secrets add serviceaccount/aggregated-logging-elasticsearch
> logging-elasticsearch
> + oc secrets add serviceaccount/aggregated-logging-fluentd logging-fluentd
> + oc secrets add serviceaccount/aggregated-logging-curator logging-curator
> Deleting configmaps
> + '[' -n '' ']'
> + generate_configmaps
> + echo 'Deleting configmaps'
> + oc delete configmap -l logging-infra=support
> Error from server: User "system:serviceaccount:logging:logging-deployer"
> cannot list configmaps in project "logging"
>
> [ full output at http://pastebin.com/sUZrNX1b ]
>
> When I take a look who is allowed to list configmaps the logging-deployer
> serviceaccount is not listed:
> 10:18:16 root@osmaster:~> oc policy who-can list configmap -n logging
> Namespace: logging
> Verb: list
> Resource: configmaps
>
> Users: system:serviceaccount:openshift-infra:namespace-controller
>
> Groups: system:cluster-admins
> system:masters
>
> But to be honest I don't have a clue how to add a verb/resource pair to a
> serviceaccount.
> I've tried to add the view/edit/admin roles to the serviceaccount but no
> luck.
>
> Any help would be greatly appreciated!
>
> Thanks and kind regards,
> Lemmy.
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logs from container app stored to local disk on nodes

2016-07-06 Thread Luke Meyer
You may need to modify the file permissions and/or selinux context for the
volume so that the container user can write to it. Under the default SCC
the container user/group are randomized. Under the privileged SCC it will
probably be whatever user the Dockerfile indicates (and you can choose an
selinux context in the pod security context if needed).
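
For example, on the node, something like this sketch -- the path is
illustrative and the SELinux type may differ by release:

$ sudo mkdir -p /var/log/<webapp>
$ sudo chmod 777 /var/log/<webapp>   # or chown to the UID/GID the container runs as
$ sudo chcon -Rt svirt_sandbox_file_t /var/log/<webapp>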

On Wed, Jul 6, 2016 at 3:49 AM, Ronan O Keeffe  wrote:

> Hi Clayton,
>
> Much appreciated. I have run the following:
>
> oadm policy add-scc-to-user privileged -n staging -z default (It's a test
> box and we're deploying our own images, I can edit the scc to hostaccess or
> hostmount-anyuid later).
>
> I have then run
> oc volume dc/ --add --name=logging --type=hostPath
> --mount-path=/var/log/
>
> The app deploys alright and is up and running successfully, but nothing is
> being logged to the node.
>
> In case it matters I created the log storage by adding a 10Gb disk to the
> VM the node lives on, created an xfs partition on it and mounted it in the
> folder that the webapps should log to.
>
> Any pointers would be appreciated.
>
> Regards,
> Ronan.
>
> On 5 Jul 2016, at 01:44, Clayton Coleman  wrote:
>
> In the future there is an ongoing design to have a specific "log volume"
> defined on a per pod basis that will be respected by the system.
>
> For now, the correct way is to use hostPath, but there's a catch -
> security.  The reason why it failed to deploy is because users have to be
> granted the permission to access the host (for security reasons).  You'll
> want to grant access to an SCC that allows host volumes to your service
> account (do "oc get scc" to see the full list, then "oadm policy
> add-scc-to-user NAME -z default" to grant access to that SCC to a named
> service account).
>
> On Mon, Jul 4, 2016 at 5:26 AM, Ronan O Keeffe 
> wrote:
>
>> Hi,
>>
>> Just wondering is it possible to have an app living in a container log
>> back to the box the container lives on.
>>
>> Our test set up is as follows:
>>
>> All web apps identical
>> webapp1 > node1
>> webapp2 > node2
>> webapp3 > node3
>> webapp4 > node4
>>
>> Ideally we'd like logs from the webapp inside a container on node1 to log
>> to a dedicated logging partition on the host OS of node1 and so on for the
>> other nodes.
>> Ultimately we'd like the logs to persist beyond the life of the container
>> I suppose.
>>
>> We've tried oc edit dc/webapp and specifying a volume to log to
>> oc volume dc/ --add --name=v1 --type=hostPath
>> --path=/var/log/
>>
>> And then specifying that the webapp log to the above partition.
>>
>> However the webapp fails to deploy. I'll need to dig in to why that is,
>> but in the meantime is this vaguely the correct way to go about logging?
>>
>> Cheers,
>> Ronan.
>>
>>
>> P.S. I went to thank Scott Dodson and for help with a previous matter
>> recently but for some reason the mail has not been received on the list.
>>
>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: configmap configuration

2016-06-29 Thread Luke Meyer
former, latter... they're just words right? yeah.

On Wed, Jun 29, 2016 at 11:08 AM, Jordan Liggitt <jligg...@redhat.com>
wrote:

> Other way around... mounting a config map doesn't require the service
> account to have special permissions. Reading a configmap via an API call
> from within a pod does.
>
> On Wed, Jun 29, 2016 at 10:58 AM, Luke Meyer <lme...@redhat.com> wrote:
>
>> Are you trying to mount the configmap or read from it? The latter does
>> not require any extra role for the pod service account.
>>
>> On Wed, Jun 29, 2016 at 8:46 AM, Lewis Shobbrook <
>> l.shobbrook+ori...@base2services.com> wrote:
>>
>>> Hi Guys,
>>> Having some trouble with configmaps with our pods.
>>>
>>> In the pods logs we see the following...
>>>
>>> 2016-06-28 02:45:55.055 [INFO]  [-main]
>>> [au.com.consealed.service.interfac.config.SpringConfig]
>>> ConfigMapConfigProperties: ppe
>>> 2016-06-28 02:46:46.046 [WARN]  [-main]
>>> [io.fabric8.spring.cloud.kubernetes.config.ConfigMapPropertySource]
>>> Can't read configMap with name: [ppe] in namespace:[dev]. Ignoring
>>> io.fabric8.kubernetes.client.KubernetesClientException: Failure
>>> executing: GET at:
>>> https://kubernetes.default.svc/api/v1/namespaces/dev/configmaps/ppe.
>>> Message: Forbidden!Configured service account doesn't have access. Service
>>> account may have been revoked.
>>>
>>> From oc rsh ...
>>>
>>> sh-4.2$ curl -k -H "Authorization: oAuth XXX"
>>> https://kubernetes.default.svc/api/v1/namespaces/dev/configmap
>>> {
>>> "kind": "Status",
>>> "apiVersion": "v1",
>>> "metadata": {},
>>> "status": "Failure",
>>> "message": "User \"system:anonymous\" cannot get configmaps in project
>>> \"dev\"",
>>> "reason": "Forbidden",
>>> "details": {
>>> "name": "ppe",
>>> "kind": "configmaps"
>>> },
>>> "code": 403
>>> }
>>>
>>> I'm pretty green with this, but what do I need to do to provide a pod
>>> within the the same namespace the correct access to the configmap?
>>> I can see secrets are mounted correctly within /run/secrets/
>>> kubernetes.io/serviceaccount/ within the pod
>>>
>>> oc version
>>> oc v1.2.0-rc1
>>> kubernetes v1.2.0-36-g4a3f9c5
>>>
>>> Cheers
>>>
>>> Lew
>>>
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Creating from a template: get parameters from a file

2016-06-23 Thread Luke Meyer
`oc process -v` and `oc new-app -p` work exactly the same, both being
implemented the same. You can specify multiple of either. I thought there
was supposed to be a way to escape commas but I can't find it now.

FWIW you can specify newlines - anything, really, except a comma - in
parameters.

However, have you considered using a Secret or ConfigMap to supply the
parameters? It's easy to put strings and files in those with oc create
secret|configmap. If they're only needed at runtime, not for the actual
template, that seems simplest.
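
For example, a sketch (all names made up):

$ oc create configmap app-config --from-file=config/ --from-literal=LOG_LEVEL=debug
$ oc create secret generic app-secrets --from-literal=DB_PASSWORD=changeme

and then mount them or expose them as environment variables in your
deployment config.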

On Fri, Jun 17, 2016 at 6:07 PM, Clayton Coleman 
wrote:

> The -v flag needs to be fixed for sure (splitting flag values is bad).
>
> New-app should support both -f FILE and -p (which you can specify multiple
> -p, one for each param).
>
> Do you have any templates that require new lines?
>
> On Jun 17, 2016, at 5:55 PM, Alex Wauck  wrote:
>
> I need to create services from a template that has a lot of parameters.
> In addition to having a lot of parameters, it has parameters with values
> containing commas, which does not play well with the -v flag for oc
> process.  Is there any way to make oc process get the parameter values from
> a file?  I'm currently tediously copy/pasting the values into the web UI,
> which is not a good solution.
>
> --
>
> Alex Wauck // DevOps Engineer
> *E X O S I T E*
> *www.exosite.com *
> Making Machines More Human.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Metrics deployment

2016-06-14 Thread Luke Meyer
The readiness probe status seems like an important indicator to me:

Readiness probe failed: cat: /etc/ld.so.conf.d/*.conf: No such file or
directory

What could cause that failure? Or is that a red herring...

On Tue, Jun 14, 2016 at 1:53 PM, Matt Wringe  wrote:

> - Original Message -
> > From: "Srinivas Naga Kotaru (skotaru)" 
> > To: "Matt Wringe" 
> > Cc: users@lists.openshift.redhat.com
> > Sent: Tuesday, June 14, 2016 1:37:01 PM
> > Subject: Re: Metrics deployment
> >
> > I removed readiness probes from both hawkular-cassandra-1 &
> hawkular-metrics
> > as both status shows probes failed.
>
> You should not have to remove the probes, this indicates that something is
> wrong with your installation.
>
> >
> > It looks good now. Both containers looks and running
> > (hawkular-cassandra-1-kr8ka , hawkular-metrics-vhe3u) however
> heapster-7yl34
> > logs still shows Could not connect to
> > https://hawkular-metrics:443/hawkular/metrics/status. Curl exit code: 6.
> > Status Code 000.
> >
> > Are we good or still had issues?
> >
> >
> > # oc get pods
> > NAME READY STATUSRESTARTS   AGE
> > hawkular-cassandra-1-kr8ka   1/1   Running   0  6m
> > hawkular-metrics-vhe3u   1/1   Running   2  5m
> > heapster-7yl34   0/1   Running   2  5m
> >
> >
> >
> >
> >
> > --
> > Srinivas Kotaru
> >
> > On 6/14/16, 10:07 AM, "Srinivas Naga Kotaru (skotaru)" <
> skot...@cisco.com>
> > wrote:
> >
> > >Matt
> > >
> > >Just want to share more info by running describe pod.
> > >
> > >It seems to be health probe failing. Do you think it is the issue?
> > >
> > >
> > >
> > ># oc describe pod hawkular-cassandra-1-it5uh
> > >Name:hawkular-cassandra-1-it5uh
> > >Namespace:   openshift-infra
> > >Node:l3inpn-id2-003.cisco.com/173.36.96.16
> > >Start Time:  Tue, 14 Jun 2016 16:36:21 +
> > >Labels:
> > >
>  
> metrics-infra=hawkular-cassandra,name=hawkular-cassandra-1,type=hawkular-cassandra
> > >Status:  Running
> > >IP:  10.1.9.2
> > >Controllers: ReplicationController/hawkular-cassandra-1
> > >Containers:
> > >  hawkular-cassandra-1:
> > >Container ID:
> > >
>  docker://17a9575eb655145859a9207f5c4bde7456f947e27188a056ff2bd08c4ce6ae5d
> > >Image:
> registry.access.redhat.com/openshift3/metrics-cassandra:latest
> > >Image ID:
> > >
>  docker://ee2117c9848298ca5a0cbbce354fd4adff370435225324ab9d60cd9cd9a95c53
> > >Ports:   9042/TCP, 9160/TCP, 7000/TCP, 7001/TCP
> > >Command:
> > >  /opt/apache-cassandra/bin/cassandra-docker.sh
> > >  --cluster_name=hawkular-metrics
> > >  --data_volume=/cassandra_data
> > >  --internode_encryption=all
> > >  --require_node_auth=true
> > >  --enable_client_encryption=true
> > >  --require_client_auth=true
> > >  --keystore_file=/secret/cassandra.keystore
> > >  --keystore_password_file=/secret/cassandra.keystore.password
> > >  --truststore_file=/secret/cassandra.truststore
> > >  --truststore_password_file=/secret/cassandra.truststore.password
> > >  --cassandra_pem_file=/secret/cassandra.pem
> > >QoS Tier:
> > >  cpu:   BestEffort
> > >  memory:BestEffort
> > >State:   Running
> > >  Started:   Tue, 14 Jun 2016 16:37:01 +
> > >Ready:   True
> > >Restart Count:   0
> > >Readiness:   exec
> [/opt/apache-cassandra/bin/cassandra-docker-ready.sh]
> > >delay=0s timeout=1s period=10s #success=1 #failure=3
> > >Environment Variables:
> > >  CASSANDRA_MASTER:  true
> > >  POD_NAMESPACE: openshift-infra (v1:metadata.namespace)
> > >Conditions:
> > >  Type   Status
> > >  Ready  True
> > >Volumes:
> > >  cassandra-data:
> > >Type:PersistentVolumeClaim (a reference to a
> PersistentVolumeClaim in
> > >the same namespace)
> > >ClaimName:   metrics-cassandra-1
> > >ReadOnly:false
> > >  hawkular-cassandra-secrets:
> > >Type:Secret (a volume populated by a Secret)
> > >SecretName:  hawkular-cassandra-secrets
> > >  cassandra-token-4urfd:
> > >Type:Secret (a volume populated by a Secret)
> > >SecretName:  cassandra-token-4urfd
> > >Events:
> > >  FirstSeen  LastSeen  Count  From                                SubobjectPath  Type    Reason     Message
> > >  ---------  --------  -----  ----                                -------------  ----    ------     -------
> > >  27m        27m       1      {default-scheduler }                               Normal  Scheduled  Successfully assigned hawkular-cassandra-1-it5uh to l3inpn-id2-003.cisco.com
> > >  27m        27m       1      {kubelet l3inpn-id2-003.cisco.com}

Re: 503 - Maintenance page

2016-06-07 Thread Luke Meyer
It sounds like what he wants is for the router to simply not interfere with
passing along something that's already returning a 503. It seems that
haproxy is replacing the page content with its own in that use case.

On Mon, Jun 6, 2016 at 11:53 PM, Ram Ranganathan 
wrote:

> Not clear if you want the router to automatically serve the 503 page or
> not. If you do, this line in the haproxy config template:
>
> https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L198
>
> automatically sends a 503 page if your service is down (example has 0 pods
> backing the service).
> The actual error page template is at:
>
> https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/error-page-503.http
>
>
> You could customize the template and/or error page (and the router image)
> to use a different page.
>
> Alternatively, if you desire some other behavior, you can disable it by
> removing that haproxy directive. Does still need a custom template + router
> image.
>
> HTH.
>
>
> On Mon, Jun 6, 2016 at 12:58 PM, Philippe Lafoucrière <
> philippe.lafoucri...@tech-angels.com> wrote:
>
>> @Clayton:
>> Sorry for the confusion. I'm not updating the routeR, I'm updating the
>> route directly. The route to our website is pointing to a "maintenance"
>> service during maintenance. This service serves 503 pages for most URLs,
>> except a few for testing purposes.
>>
>> The problem is: If I browse my website, I get the expected 503 code, but
>> a blank page, instead of the desired maintenance page served by the
>> "maintenance" pods. I don't understand this blank page, it's like haproxy
>> is not forwarding it because the pods responded with a 503.
>>
>> @v: Can I use a dedicated router per project?
>> ​
>> Thanks
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>
> --
> Ram//
> main(O,s){s=--O;10>4*s)*(O++?-1:1):10)&&\
> main(++O,s++);}
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: logging-fluentd-template: use own image

2016-06-03 Thread Luke Meyer
The error is that the "image" field is missing from the container
definition. I wonder if you edited the template at all? It's easy to
indent/outdent something and create a definition where the first validation
that fails looks like this. The spec and container definition should look
something like this:

  spec:
containers:
- env:
  - name: K8S_HOST_URL
value: https://kubernetes.default.svc.cluster.local
  [...]
  - name: OPS_COPY_PASSWORD
value: ""
  image: ${IMAGE_PREFIX}logging-fluentd:${IMAGE_VERSION}

I wonder if you put a "-" in front of that image field, or otherwise
changed the whitespace there somehow. I don't see what else it could be;
it seems like it should work as you described.
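
Assuming the template itself is intact, the invocation would look something
like this sketch -- note that each parameter gets its own -p:

$ oc new-app logging-fluentd-template \
    -p IMAGE_PREFIX=172.30.xx.xx:5000/logging/ \
    -p IMAGE_VERSION=latest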

On Thu, Jun 2, 2016 at 8:07 AM, Lorenz Vanthillo <
lorenz.vanthi...@outlook.com> wrote:

> I'm busy with setting up logging on OpenShift Origin 1.1.6.
> At the moment I have the base setup with the help of templates and the
> documentation about aggregating logging.
> This was all fine. But now have edited our  fluentd image.
> Now we try to use the existing template to deploy our image:
> The name of the template is: logging-fluentd-template and it asks voor 2
> parameters:
> IMAGE_PREFIX
> default: docker.io/openshift/origin-
> IMAGE_VERSION
> default: latest
>
> This is what's in the template:
> - image: ${IMAGE_PREFIX}logging-fluentd:${IMAGE_VERSION}
>
> parameters:
> - description: The image prefix for the Fluentd image to use
>   name: IMAGE_PREFIX
>   value: docker.io/openshift/origin-
> - description: The image version for the Fluentd image to use
>   name: IMAGE_VERSION
>   value: latest
>
> But I try to use my own image (which is also called logging-fluentd and is
> inside my openshift registry)
> So it looks something like this:
>
> oc new-app logging-fluentd-template -p 
> IMAGE_PREFIX=172.30.xx.xx:5000/logging/ IMAGE_VERSION=latest
>
>
> I also tried it in the web console, but that did not work either. This was the
> error:
>
> error: DaemonSet.extensions "logging-fluentd" is invalid: 
> spec.template.spec.containers[0].image: Required value
>
> What am I doing wrong?
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Only the previous 1000 log lines and new log messages will be displayed because of the large log size.

2016-04-24 Thread Luke Meyer
On Thu, Apr 21, 2016 at 5:00 AM, Den Cowboy  wrote:

> My webconsole is showing the following warning when I'm looking for the
> logs of a pod:
> Only the previous 1000 log lines and new log messages will be displayed
> because of the large log size.
>

I'm pretty sure this is just a web console restriction and you can get the
full logs from oc logs.


>
> I'm afraid the logsize will be huge?
> This is for a tomcat-container which isn't using persistent storage (it's
> hosting a web service which is showing logs after being triggered). Okay
> it's ephemeral so the logs will be gone when I delete the container (but
> normally this container never goes down).
>

Container storage is irrelevant. The only logs you can retrieve are the
ones that are sent to stdout / stderr, i.e. generally for tomcat the java
console logs. These go into Docker logs and can be retrieved via
Kubernetes.

If you're writing log files inside container storage then you can't
retrieve them from the console or oc logs; you have to oc rsync them out or
do something similar.
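
For example (pod name and path are illustrative):

$ oc logs <tomcat-pod>                                        # stdout/stderr logs
$ oc rsync <tomcat-pod>:/usr/local/tomcat/logs ./tomcat-logs  # files written inside the container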

Either way, yes, the logs will be gone when the pod is gone. You may also
find aggregated logging interesting (
https://docs.openshift.org/latest/install_config/aggregate_logging.html).



> So the amount of saved logs will be huge. Is the container saving
> (ephemeral) the logs for some max amount of time?
>
>
You can configure docker log rotation based on size -
https://docs.openshift.org/latest/install_config/install/prerequisites.html#managing-docker-container-logs

I think the default is still to not rotate and just let the log files grow.
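
Per those docs, a sketch of what that looks like in /etc/sysconfig/docker on
each node (these are json-file log driver options):

OPTIONS='--selinux-enabled --log-opt max-size=50m --log-opt max-file=3'

followed by a docker restart.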
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: almost there

2016-01-11 Thread Luke Meyer
On Mon, Jan 11, 2016 at 1:20 PM, Clayton Coleman 
wrote:

>
> > - I realized that last time I didn't execute the required
> pre-installation
> > steps (which include setting up docker for instance) but this didn't
> seem to
> > pose any problems. Should I scratch everything and start over? One
> worrying
> > aspect is this thing about docker storage, which I don't really get...
>
> Docker storage is a performance thing (the default is incredibly
> slow).  I don't know if you have to scratch it and start over - you
> can selectively apply those steps to each host as you go in most
> cases.
>
>
You don't have to reinstall anything. Just stop docker, follow the
instructions for setting up storage (which includes nuking the ephemeral
contents of /var/lib/docker), and start docker again.
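
Roughly, as a sketch (assuming a spare block device, /dev/vdb here, for
docker-storage-setup):

$ sudo systemctl stop docker
$ sudo rm -rf /var/lib/docker/*
$ sudo vi /etc/sysconfig/docker-storage-setup   # e.g. DEVS=/dev/vdb, VG=docker-vg
$ sudo docker-storage-setup
$ sudo systemctl start docker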
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users