node_exporter

2019-05-30 Thread Neale Ferguson
Hi,
For v3.11 there is an openshift/node_exporter git repo, forked from 
prometheus/node_exporter, that appears to produce an image 
openshift/node_exporter. The ansible playbook looks for an image 
openshift/prometheus-node-exporter. Is this derived from the Prometheus git 
repo, or is it the openshift repo retagged to the name used by ansible?

Neale
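
If the name the playbook looks for turns out to be nothing more than a retag of 
the openshift/node_exporter build, reproducing that locally is a simple retag; 
the image name and tag below are purely illustrative:

  docker pull docker.io/openshift/node_exporter:v3.11
  docker tag docker.io/openshift/node_exporter:v3.11 openshift/prometheus-node-exporter:v3.11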


Re: Elasticsearch - 5.x

2019-03-21 Thread Neale Ferguson
Are you asking how to erase everything and start over?
- No, just how to get information out of the elasticsearch container to tell me 
why it's failing. 

Can you share your inventory files with the logging parameters (be sure to 
redact any sensitive information)?

- The configuration was an all-in-one when it was first created. I added a 
compute node shortly after, and then added the logging.

[nfs]
okcd-master.sinenomine.net

# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd
nfs
new_nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={}
os_firewall_use_firewalld=True 
openshift_logging_install_logging=true
openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_nfs_options='*(rw,root_squash)'
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=200Mi
openshift_logging_storage_labels={'storage': 'logging'}
openshift_logging_kibana_hostname=logging.origin.z
openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra': 'true'} 
ansible_ssh_user=root
openshift_deployment_type=origin
oreg_url=docker.io/clefos/origin-${component}:${version}
openshift_examples_modify_imagestreams=true
openshift_web_console_prefix=docker.io/clefos/
openshift_disable_check=disk_availability,docker_storage,memory_availability
openshift_hosted_router_selector='node-role.kubernetes.io/infra=true'
openshift_hosted_manage_registry=true
openshift_enable_unsupported_configurations=True
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=5Gi
openshift_cockpit_deployer_image=docker.io/clefos/cockpit-kubernetes:latest
openshift_console_install=False

# host group for masters
[masters]
okcd-master.sinenomine.net

# host group for etcd
[etcd]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com openshift_node_group_name='node-config-all-in-one'
node.example.com openshift_node_group_name='node-config-compute'

# Adding new node to the cluster
[new_nodes]




Re: Elasticsearch - 5.x

2019-03-21 Thread Neale Ferguson
> - So this leads to the question of how to debug the elasticsearch 
container not coming up. The console log information in the 1st email came from 
me running it manually so I could add the -v option to the elasticsearch 
command.  Setting the DEBUG and LOGLEVEL environment variables wasn't 
illuminating. I guess I need to try and add the -v option to the run.sh script.

This simply should not be happening.  Did you use openshift-ansible to 
upgrade from 3.10 to 3.11, or did you deploy logging from scratch on 3.11?
- Initially, I created the 3.11 cluster from scratch but without logging. I 
then added the logging settings to the hosts file and ran the logging 
playbook.
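
For reference, that step amounts to re-running the logging playbook against the 
updated inventory from the openshift-ansible checkout; roughly (the inventory 
path here is assumed):

  ansible-playbook -i /etc/ansible/hosts playbooks/openshift-logging/config.yml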
 




Re: Elasticsearch - 5.x

2019-03-21 Thread Neale Ferguson
elasticsearch 5 is not tech preview in 3.11 - it is fully supported - and 
2.x is gone
- Understood. Did that change with the move from 3.10 to 3.11? I must've been 
looking at another host that was running 3.10 when I spotted those preview vars. 
- So this leads to the question of how to debug the elasticsearch container not 
coming up. The console log information in the 1st email came from me running it 
manually so I could add the -v option to the elasticsearch command.  Setting 
the DEBUG and LOGLEVEL environment variables wasn't illuminating. I guess I 
need to try and add the -v option to the run.sh script.
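
A rough sketch of how to pull that information out of the failing pod without 
editing run.sh (the namespace and label below are the 3.11 defaults; adjust as 
needed):

  oc -n openshift-logging get pods -l component=es
  oc -n openshift-logging describe pod <es-pod>            # events: image pulls, mounts, failed probes
  oc -n openshift-logging logs <es-pod> -c elasticsearch   # container stdout/stderr
  oc -n openshift-logging debug <es-pod>                   # shell in place of the entrypoint, then run run.sh by hand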






Re: Elasticsearch - 5.x

2019-03-21 Thread Neale Ferguson


On 3/21/19, 15:33, "Neale Ferguson"  wrote:

Right. The image is now 
https://hub.docker.com/r/openshift/origin-logging-elasticsearch5/tags - there 
are similar changes for origin-logging-curator5, origin-logging-kibana5, 
origin-logging-fluentd

What version of openshift-ansible did you use to deploy logging?
3.11. Strangely, I didn't set the preview option in the hosts file, but the 
only images available to me (the ones that I built) are the "5" series, and 
these are the ones that were started.
 






Re: Docker level for building 3.11

2019-02-26 Thread Neale Ferguson
Thanks. I have local images I wish to use for the build, so I assume I will 
need a local registry up and running.
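
In case it helps, a throwaway local registry is enough to bridge locally built 
images into the build; the image name and port below are placeholders, and 
pushing to localhost:5000 may also require listing it as an insecure registry:

  docker run -d --name registry -p 5000:5000 registry:2
  docker tag clefos/origin:v3.11 localhost:5000/clefos/origin:v3.11
  docker push localhost:5000/clefos/origin:v3.11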


you don't need to erase docker. buildah and docker can coexist.

Thanks. buildah-1.5.2 appears to be the latest. So I need to:


  1.  yum erase docker
  2.  yum install buildah

What provides the docker daemon?

Neale


This is a part of the multistage build syntax introduced in Docker 17.05 [1]. 
This is available through the centos-extras repo, and requires you to uninstall 
any other installations of docker.

I recommend using buildah instead [3].

[1] https://docs.docker.com/develop/develop-images/multistage-build
[2] https://docs.docker.com/install/linux/docker-ce/centos/
[3] https://github.com/containers/buildah
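
As a quick sanity check that a buildah install handles the multi-stage syntax, 
something like the following should build end to end; the builder image is the 
one from the openshift-console Dockerfile and the second stage is purely 
illustrative:

  yum install -y buildah
  {
    echo 'FROM quay.io/coreos/tectonic-console-builder:v16 AS build'
    echo 'RUN echo built > /tmp/artifact'
    echo 'FROM docker.io/centos:7'
    echo 'COPY --from=build /tmp/artifact /artifact'
  } > Dockerfile.test
  buildah bud -t console-test:latest -f Dockerfile.test .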


Docker level for building 3.11

2019-02-26 Thread Neale Ferguson
Hi,
 I am building openshift-console. What level of docker supports the "AS" 
directive on the FROM statement? Is it part of CentOS 7.6.1810 yet?

FROM quay.io/coreos/tectonic-console-builder:v16 AS build

Neale
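
A quick way to check whether the installed docker is new enough (the 
multi-stage FROM ... AS syntax needs 17.05 or later):

  docker version --format '{{.Server.Version}}'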





origin-console

2018-12-18 Thread Neale Ferguson
From what sources is the origin-console:v3.11.0 image built?

Neale


Re: Prometheus Operator

2018-12-07 Thread Neale Ferguson
Solved: new spec is go get -u -d k8s.io/kube-openapi/cmd/openapi-gen 
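
For anyone else hitting this: openapi-gen moved out of the code-generator repo 
into kube-openapi, so the old and new fetches look like:

  # old path, no longer present in k8s.io/code-generator:
  go get -u -v -d k8s.io/code-generator/cmd/openapi-gen
  # new location:
  go get -u -d k8s.io/kube-openapi/cmd/openapi-gen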




Re: Prometheus Operator

2018-12-07 Thread Neale Ferguson
Forgot to reply-all (below), plus a follow-up.

Follow-up question: what is the message telling me, given that the fetches 
return status code 200:

go get -u -v -d k8s.io/code-generator/cmd/openapi-gen
Fetching https://k8s.io/code-generator/cmd/openapi-gen?go-get=1
Parsing meta tags from https://k8s.io/code-generator/cmd/openapi-gen?go-get=1 
(status code 200)
get "k8s.io/code-generator/cmd/openapi-gen": found meta tag 
get.metaImport{Prefix:"k8s.io/code-generator", VCS:"git", 
RepoRoot:"https://github.com/kubernetes/code-generator"} at 
https://k8s.io/code-generator/cmd/openapi-gen?go-get=1
get "k8s.io/code-generator/cmd/openapi-gen": verifying non-authoritative meta 
tag
Fetching https://k8s.io/code-generator?go-get=1
Parsing meta tags from https://k8s.io/code-generator?go-get=1 (status code 200)
k8s.io/code-generator (download)
package k8s.io/code-generator/cmd/openapi-gen: cannot find package 
"k8s.io/code-generator/cmd/openapi-gen" in any of:
  /usr/lib/golang/src/k8s.io/code-generator/cmd/openapi-gen (from $GOROOT)
  /root/origin-3.11.0/go/src/k8s.io/code-generator/cmd/openapi-gen (from 
$GOPATH)

I've not had problems like this with go get before.

On 12/7/18, 10:55, "Neale Ferguson"  wrote:

Thanks, but same result. It wants to build openapi-gen because its source file 
is found and used as a prerequisite of the operator target:

  Considering target file `pkg/client/monitoring/v1/openapi_generated.go'.
Pruning file `pkg/client/monitoring/v1/types.go'.
Considering target file `/root/origin-3.11.0/go/bin/openapi-gen'.
 File `/root/origin-3.11.0/go/bin/openapi-gen' does not exist.
 Finished prerequisites of target file 
`/root/origin-3.11.0/go/bin/openapi-gen'.
Must remake target `/root/origin-3.11.0/go/bin/openapi-gen'.

On 12/7/18, 10:42, "Simon Pasquier"  wrote:

AFAIU you could run "make operator" if you just want the binary. "make
build" also generates some code but it shouldn't be needed to build
the operator.
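
For reference, building just the binary would look roughly like this (GOPATH 
layout assumed):

  cd $GOPATH/src/github.com/coreos/prometheus-operator
  make operator    # builds only the operator binary, skipping the code-generation targets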
 






Prometheus Operator

2018-12-07 Thread Neale Ferguson
I assume Origin is still using the coreos git repo while the openshift repo is 
brought up to date? When I attempt to build the v0.23.2 code I am getting:


go get -u -v -d k8s.io/code-generator/cmd/openapi-gen
Fetching https://k8s.io/code-generator/cmd/openapi-gen?go-get=1
Parsing meta tags from https://k8s.io/code-generator/cmd/openapi-gen?go-get=1 
(status code 200)
get "k8s.io/code-generator/cmd/openapi-gen": found meta tag 
get.metaImport{Prefix:"k8s.io/code-generator", VCS:"git", 
RepoRoot:"https://github.com/kubernetes/code-generator"} at 
https://k8s.io/code-generator/cmd/openapi-gen?go-get=1
get "k8s.io/code-generator/cmd/openapi-gen": verifying non-authoritative meta 
tag
Fetching https://k8s.io/code-generator?go-get=1
Parsing meta tags from https://k8s.io/code-generator?go-get=1 (status code 200)
k8s.io/code-generator (download)
package k8s.io/code-generator/cmd/openapi-gen: cannot find package 
"k8s.io/code-generator/cmd/openapi-gen" in any of:
  /usr/lib/golang/src/k8s.io/code-generator/cmd/openapi-gen (from $GOROOT)
  /root/origin-3.11.0/go/src/k8s.io/code-generator/cmd/openapi-gen (from $GOPATH)

I am using Go 1.10.2 and am unsure what is going wrong.

Neale







Re: Openshift Origin builds for CVE-2018-1002105

2018-12-06 Thread Neale Ferguson
Will the github repo be tagged with v3.11.1 or retagged with v3.11.0?



Building Origin for s390x

2018-11-30 Thread Neale Ferguson
I have been building Origin for use on s390x (aka Z Systems aka Mainframe) for 
a couple of years now. Currently, I have a script that pulls the appropriate 
release from github and builds the various components: origin, registry, 
web-console, service-broker, and aggregate-logging. There are a handful of 
modifications due to where the resulting images are pushed and to some logging 
package locations. I would like to replicate the build infrastructure used for 
the official releases rather than maintain some bash scripting to get things 
going. I have access to a relatively well-resourced Linux on Z virtual machine 
with external network connectivity. If this is feasible, I would like to 
identify what’s required.

Neale


Re: invalid reference format

2018-09-12 Thread Neale Ferguson
1. Please file a BZ or Github issue with inventory, verbose
ansible-playbook output and other requested information.
- Will do

2. I'm not sure what you're attempting to build.
- origin/registry/web-console/...

3. We don't control packaging for distros, if you're not using RHEL
you can install the required version of ansible however you see fit.
- Understood.

4. Yes, that seems broken.  We don't control the packaging of these
components, I think there is an outstanding github issue for this on
github.com/openshift/openshift-ansible
 - I'll take a look.
 




Re: invalid reference format

2018-09-12 Thread Neale Ferguson
Thanks for the explanation. I will add that to my hosts file and see how it 
goes.



Four other questions/issues:



1. I am finding that when the playbook gets to the logging component, it 
chokes when it runs the generate_certs playbook and constructs the oc adm 
command. The entry in question:



- include_tasks: procure_server_certs.yaml
  loop_control:
    loop_var: cert_info
  with_items:
    - procure_component: kibana
    - procure_component: kibana-ops
    - procure_component: kibana-internal
      hostnames: "kibana, kibana-ops, {{openshift_logging_kibana_hostname}}, {{openshift_logging_kibana_ops_hostname}}"



results in the error: error: x509: cannot parse dnsName

If I eliminate the spaces between the entries in the hostnames string then 
there are no such complaints. I wonder if this is a golang issue (see #2)? (A 
sketch of the underlying oc adm command appears after question 4 below.)



2. What is the recommended golang level? I am using 1.9.4 but have access to 
everything up to 1.11 (though 1.9.7 fails in the sync/atomic test when I build 
it).



3. The 3.10.35 and .43 levels of openshift-ansible require ansible 2.4.x or 
later, but that level does not exist in the CentOS 7 base/updates/extras repos; 
there are, however, higher levels in the openstack/virt/gluster41 repos that 
can be used.



4. Playbook playbooks/init/base_packages.yml specifies python-docker: this used 
to be provided by python-docker-py, but modern versions of that package now 
provide docker-python instead. python-docker does exist as python-docker-3.x, 
but that is incompatible with atomic-1.22 in the extras repo.
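
Regarding question 1: that task ultimately feeds the hostnames string to oc adm 
ca create-server-cert, so if the embedded spaces really are what the 
certificate code trips over, the difference can be reproduced outside ansible. 
A rough sketch with placeholder file names:

  # reported to fail with "x509: cannot parse dnsName":
  oc adm ca create-server-cert --hostnames='kibana, kibana-ops, logging.origin.z' \
      --cert=kibana.crt --key=kibana.key \
      --signer-cert=ca.crt --signer-key=ca.key --signer-serial=ca.serial.txt
  # works once the spaces are removed:
  oc adm ca create-server-cert --hostnames='kibana,kibana-ops,logging.origin.z' \
      --cert=kibana.crt --key=kibana.key \
      --signer-cert=ca.crt --signer-key=ca.key --signer-serial=ca.serial.txt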



If these items should be reported in the relevant github repo issues then I 
will raise them there.



Thanks again for your patience and responses… Neale



On 9/12/18, 13:36, "Michael Gugino"  wrote:



Yes, we use regex to replace that value.

It's not valid to set oreg_url in the way you are attempting to set it, but it 
may actually work for a large majority of the images. You can set the 
registry-console image directly as a workaround.

openshift_cockpit_deployer_image should contain the fully qualified image name 
and desired version, for example:

myregistry.com/testing/cockpit:latest


invalid reference format

2018-09-12 Thread Neale Ferguson
Hi,
I built 3.10 from source and performed an ansible-playbook installation. 
Everything went fine except for registry-console. What have I failed to 
configure, or what may be missing, such that when registry-console is started 
it fails with:

Warning  InspectFailed   5s (x8 over 27s)  kubelet, docker-test.sinenomine.net  
Failed to apply default image tag 
"docker.io/clefos/origin-${component}:latest": couldn't parse image reference 
"docker.io/clefos/origin-${component}:latest": invalid reference format

(clefos is the name of my repo where the images built are placed)

I assume ${component} is supposed to be substituted by something during the 
playbook processing.

Neale


tokencmd build error

2018-04-12 Thread Neale Ferguson
Hi,
 While building 3.9.0 I get the following:


# github.com/openshift/origin/pkg/oc/util/tokencmd
pkg/oc/util/tokencmd/negotiator_gssapi.go:25:7: undefined: gssapi.Lib
pkg/oc/util/tokencmd/negotiator_gssapi.go:32:8: undefined: gssapi.Name
pkg/oc/util/tokencmd/negotiator_gssapi.go:34:7: undefined: gssapi.CtxId
pkg/oc/util/tokencmd/negotiator_gssapi.go:43:8: undefined: gssapi.CredId

I note that module has an import for:

"github.com/apcera/gssapi"

and that those particular identifiers (e.g. CtxId) are defined in 
apcera/gssapi/gss_types.go.

I am not sure why the error is occurring.

Neale




Re: 3.7.1 - Console

2018-03-02 Thread Neale Ferguson
Bingo! Thanks.

Neale

From: Sam Padgett <spadg...@redhat.com>
Date: Thursday, March 1, 2018 at 6:13 PM
To: Neale Ferguson <ne...@sinenomine.net>
Cc: Openshift <dev@lists.openshift.redhat.com>
Subject: Re: 3.7.1 - Console

Did you include the default templates or image streams when you installed?

There was a bug in 3.7 where the spinner never stopped when there was no 
content, fixed in 3.9.

https://bugzilla.redhat.com/show_bug.cgi?id=1501785


cbs-paas7-openshift-multiarch-el7-build

2018-01-22 Thread Neale Ferguson
Hi,
 I have been building CE for the s390x platform and noticed that the recent 
release uses a new repo in the source image. At the moment most of the packages 
referenced there are in the s390x version of EPEL that I maintain, but I’d like 
to be consistent with all architectures. I would like to build the s390x 
version of this repo, and I’d like to automate the process rather than manually 
building each package. I assume the non-x86_64 arches get their build material 
from the x86_64 repo, so I wonder if there is some tool/procedure I could adapt 
to build for s390x?

Neale


Re: Problem with tito when building RPMs

2018-01-19 Thread Neale Ferguson
I downgraded tito from 0.6.11-1 to 0.6.10-1 and all is now working.


Have you pulled tags into your git repo?
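
In case someone lands on this thread later, the two things worth checking were 
making sure the release tags are present locally and, failing that, dropping 
back a tito release (the exact package NVR may differ):

  git fetch origin --tags      # tito builds from tags, so they need to be in the local clone
  git tag --list | tail
  yum downgrade tito-0.6.10-1  # the workaround that worked here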



Re: 3.6.1 test failures

2017-11-07 Thread Neale Ferguson
Thanks. Easily ignored then. The only thing I am really interested in
getting fixed is the configuration of the server with which the end-to-end
test is trying to talk. Is there any guidance available in the doc(s)?

On 11/6/17, 2:38 PM, "dev-boun...@lists.openshift.redhat.com on behalf of
Christian Heimes" <dev-boun...@lists.openshift.redhat.com on behalf of
chei...@redhat.com> wrote:

>On 2017-11-02 16:29, Neale Ferguson wrote:
>> coverage: 75.9% of statements
>> 
>> FAIL    github.com/openshift/origin/pkg/build/cmd   0.124s
>> 
>> 
>> - It appears there are more ciphers supported in this level of go (1.8.1
>> - defined in /usr/lib/golang/src/crypto/tls/cipher_suites.go) than
>> openshift is prepared to use. I assume
>> updating pkg/cmd/server/crypto/crypto.go with these additional ciphers
>> would be required (among other places):
>> 
>> --- FAIL: TestConstantMaps (0.02s)
>> 
>> crypto_test.go:35: discovered cipher
>> tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 not in ciphers map
>> 
>> crypto_test.go:35: discovered cipher
>> tls.TLS_RSA_WITH_AES_128_CBC_SHA256 not in ciphers map
>> 
>> crypto_test.go:35: discovered cipher
>> tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 not in ciphers map
>> 
>> crypto_test.go:35: discovered cipher
>> tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 not in ciphers map
>> 
>> crypto_test.go:35: discovered cipher
>> tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 not in ciphers map
>> 
>> Building CA...
>> 
>> Building intermediate 1...
>> 
>> Building intermediate 2...
>> 
>> Building server...
>> 
>> Building client...
>> 
>> FAIL
>> 
>> coverage: 12.7% of statements
>> 
>> FAIL    github.com/openshift/origin/pkg/cmd/server/crypto   1.857s
>
>Your assumption is correct. The five ciphers were added to the cipher
>map in
>https://github.com/tiran/origin/commit/6da689b0f307474ed5733a7c3c739bfb659
>9b6e0
>. The test assumes Golang 1.7.
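
Once the extra ciphers have been added to the map in 
pkg/cmd/server/crypto/crypto.go, the failing check can be re-run in isolation 
from the origin source tree:

  go test -v -run TestConstantMaps ./pkg/cmd/server/crypto/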

