node_exporter
Hi,

For v3.11 there is an openshift/node_exporter git repo, forked from prometheus/node_exporter, that appears to produce an image named openshift/node_exporter. The ansible playbook, however, looks for an image named openshift/prometheus-node-exporter. Is the latter derived from the Prometheus git repo, or is it the image from the openshift repo retagged to the name used by ansible?

Neale
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
Re: Elasticsearch - 5.x
Thanks anyway. I'll take a deeper look into what's happening inside the container when it's started.

I think I should add an infra node rather than tie up the master, as the logging stack looks a little heavy on resources. I've had a couple of instances where, when logging is brought up, the api controller etc. start timing out or liveness probes time out. I can't really identify what causes the cascade of pod failures. I think creating a small ceph cluster might be useful as well, rather than using the unsupported NFS.

Neale
Re: Elasticsearch - 5.x
Are you asking how to erase everything and start over?

- No, just how to get information out of the elasticsearch container to tell me why it's failing.

Can you share your inventory files with the logging parameters (be sure to redact any sensitive information)?

- The configuration was an all-in-one when it was first created. I added a compute node shortly after, then added the logging.

[nfs]
okcd-master.sinenomine.net

# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd
nfs
new_nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={}
os_firewall_use_firewalld=True
openshift_logging_install_logging=true
openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_nfs_options='*(rw,root_squash)'
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=200Mi
openshift_logging_storage_labels={'storage': 'logging'}
openshift_logging_kibana_hostname=logging.origin.z
openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra': 'true'}
ansible_ssh_user=root
openshift_deployment_type=origin
oreg_url=docker.io/clefos/origin-${component}:${version}
openshift_examples_modify_imagestreams=true
openshift_web_console_prefix=docker.io/clefos/
openshift_disable_check=disk_availability,docker_storage,memory_availability
openshift_hosted_router_selector='node-role.kubernetes.io/infra=true'
openshift_hosted_manage_registry=true
openshift_enable_unsupported_configurations=True
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=5Gi
openshift_cockpit_deployer_image=docker.io/clefos/cockpit-kubernetes:latest
openshift_console_install=False

# host group for masters
[masters]
okcd-master.sinenomine.net

# host group for etcd
[etcd]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com openshift_node_group_name='node-config-all-in-one'
node.example.com openshift_node_group_name='node-config-compute'

# Adding new node to the cluster
[new_nodes]
Re: Elasticsearch - 5.x
> - So this leads to the question of how to debug the elasticsearch container not coming up. The console log information in the 1st email came from me running it manually so I could add the -v option to the elasticsearch command. Setting the DEBUG and LOGLEVEL environment variables wasn't illuminating. I guess I need to try to add the -v option to the run.sh script.

This simply should not be happening. Did you use openshift-ansible to upgrade from 3.10 to 3.11, or did you deploy logging from scratch on 3.11?

- Initially, I created the 3.11 cluster from scratch but without logging. I then added the logging statements to the hosts file and ran the logging playbook.
Re: Elasticsearch - 5.x
elasticsearch 5 is not tech preview in 3.11 - it is fully supported - and 2.x is gone

- Understood. Did that change with the move from 3.10 to 3.11? I must have been scanning another host which was running 3.10 to spot those preview vars.

- So this leads to the question of how to debug the elasticsearch container not coming up. The console log information in the 1st email came from me running it manually so I could add the -v option to the elasticsearch command. Setting the DEBUG and LOGLEVEL environment variables wasn't illuminating. I guess I need to try to add the -v option to the run.sh script.
Re: Elasticsearch - 5.x
On 3/21/19, 15:33, "Neale Ferguson" wrote:

> Right. The image is now https://hub.docker.com/r/openshift/origin-logging-elasticsearch5/tags - there are similar changes for origin-logging-curator5, origin-logging-kibana5, and origin-logging-fluentd. What version of openshift-ansible did you use to deploy logging?

3.11. Strangely, I didn't set the preview option in the hosts file, but the only images available to me (the ones that I built) are the "5" series, and those are the ones that were started.
Elasticsearch - 5.x
If I want to use the tech-preview elasticsearch5 images, are there any other changes I need to make? I am seeing this on startup:

Found index level settings on node level configuration.
Since elasticsearch 5.x index level settings can NOT be set on the nodes configuration like the elasticsearch.yaml, in system properties or command line arguments. In order to upgrade all indices the settings must be updated via the /${index}/_settings API. Unless all settings are dynamic all indices must be closed in order to apply the upgrade. Indices created in the future should use index templates to set default values.

Please ensure all required values are updated on all indices by executing:
curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{
  "index.number_of_replicas" : "0",
  "index.number_of_shards" : "1",
  "index.translog.flush_threshold_period" : "5m",
  "index.translog.flush_threshold_size" : "256mb",
  "index.unassigned.node_left.delayed_timeout" : "2m"
}'

[2019-03-21T18:22:36,493][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [logging-es-data-master-1jbe2qib] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: node settings must not contain any index level settings

The Dockerfile is building 5.6.10, but the 3.11.0 image from docker.io/openshift appears to be based on 2.4.4 (which was changed to 5.x in June 2018).

Neale
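The rule that startup error enforces can be shown with a small sketch. In ES 5.x, any index.* key has to be moved out of the node-level configuration and applied through the /{index}/_settings API or an index template. The config dict and the splitting logic below are my illustration (the index.* keys are taken from the error message above, not from the image's actual run.sh):

```python
# Hypothetical node config carried forward from a 2.x-style image; only the
# index.* keys are what ES 5.x rejects at startup.
node_config = {
    "cluster.name": "logging-es",
    "index.number_of_replicas": "0",
    "index.number_of_shards": "1",
    "index.translog.flush_threshold_size": "256mb",
    "index.unassigned.node_left.delayed_timeout": "2m",
}

# What must move into the /_all/_settings PUT suggested by the error.
index_settings = {k: v for k, v in node_config.items() if k.startswith("index.")}

# What elasticsearch.yml is still allowed to contain.
node_settings = {k: v for k, v in node_config.items() if not k.startswith("index.")}

print(sorted(index_settings))
print(sorted(node_settings))
```

The same partition is what the suggested curl call performs on the server side: index_settings travels in the PUT body, node_settings stays in the node configuration.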
Re: Docker level for building 3.11
Thanks. I have local images I wish to use for the build, so I assume I will need a local registry up and running.

> you don't need to erase docker. buildah and docker can coexist.

>> Thanks. buildah-1.5.2 appears to be the latest. So I need to:
>> 1. yum erase docker
>> 2. yum install buildah
>> What provides the docker daemon?
>> Neale

>>> This is a part of the multistage build syntax introduced in Docker 17.05 [1]. This is available through the centos-extras repo [2], and requires you to uninstall any other installations of docker. I recommend using buildah instead [3].
>>> [1] https://docs.docker.com/develop/develop-images/multistage-build
>>> [2] https://docs.docker.com/install/linux/docker-ce/centos/
>>> [3] https://github.com/containers/buildah
Re: Docker level for building 3.11
Thanks. buildah-1.5.2 appears to be the latest. So I need to:

1. yum erase docker
2. yum install buildah

What provides the docker daemon?

Neale

> This is a part of the multistage build syntax introduced in Docker 17.05 [1]. This is available through the centos-extras repo [2], and requires you to uninstall any other installations of docker. I recommend using buildah instead [3].
> [1] https://docs.docker.com/develop/develop-images/multistage-build
> [2] https://docs.docker.com/install/linux/docker-ce/centos/
> [3] https://github.com/containers/buildah
Docker level for building 3.11
Hi,

I am building openshift-console. What level of docker supports the "AS" directive on the FROM statement? Is it part of CentOS 7.6.1810 yet?

FROM quay.io/coreos/tectonic-console-builder:v16 AS build

Neale
origin-console
From what sources is the origin-console:v3.11.0 image built?

Neale
Re: Prometheus Operator
Solved: the new import path is

go get -u -d k8s.io/kube-openapi/cmd/openapi-gen
Re: Prometheus Operator
Forgot to reply-all (below), and a follow-up.

Follow-up question: what is the message telling me with return code 200?

go get -u -v -d k8s.io/code-generator/cmd/openapi-gen
Fetching https://k8s.io/code-generator/cmd/openapi-gen?go-get=1
Parsing meta tags from https://k8s.io/code-generator/cmd/openapi-gen?go-get=1 (status code 200)
get "k8s.io/code-generator/cmd/openapi-gen": found meta tag get.metaImport{Prefix:"k8s.io/code-generator", VCS:"git", RepoRoot:"https://github.com/kubernetes/code-generator"} at https://k8s.io/code-generator/cmd/openapi-gen?go-get=1
get "k8s.io/code-generator/cmd/openapi-gen": verifying non-authoritative meta tag
Fetching https://k8s.io/code-generator?go-get=1
Parsing meta tags from https://k8s.io/code-generator?go-get=1 (status code 200)
k8s.io/code-generator (download)
package k8s.io/code-generator/cmd/openapi-gen: cannot find package "k8s.io/code-generator/cmd/openapi-gen" in any of:
        /usr/lib/golang/src/k8s.io/code-generator/cmd/openapi-gen (from $GOROOT)
        /root/origin-3.11.0/go/src/k8s.io/code-generator/cmd/openapi-gen (from $GOPATH)

I've not had problems with go get like this.

On 12/7/18, 10:55, "Neale Ferguson" wrote:

Thanks, but same result. It wants to build openapi-gen, as its source file is found and used as a prerequisite to operator:

Considering target file `pkg/client/monitoring/v1/openapi_generated.go'.
Pruning file `pkg/client/monitoring/v1/types.go'.
Considering target file `/root/origin-3.11.0/go/bin/openapi-gen'.
File `/root/origin-3.11.0/go/bin/openapi-gen' does not exist.
Finished prerequisites of target file `/root/origin-3.11.0/go/bin/openapi-gen'.
Must remake target `/root/origin-3.11.0/go/bin/openapi-gen'.

On 12/7/18, 10:42, "Simon Pasquier" wrote:

AFAIU you could run "make operator" if you just want the binary. "make build" also generates some code but it shouldn't be needed to build the operator.
Prometheus Operator
I assume Origin is still using the coreos git repo while the openshift repo is brought up to date? When I attempt to build the v0.23.2 code I am getting:

go get -u -v -d k8s.io/code-generator/cmd/openapi-gen
Fetching https://k8s.io/code-generator/cmd/openapi-gen?go-get=1
Parsing meta tags from https://k8s.io/code-generator/cmd/openapi-gen?go-get=1 (status code 200)
get "k8s.io/code-generator/cmd/openapi-gen": found meta tag get.metaImport{Prefix:"k8s.io/code-generator", VCS:"git", RepoRoot:"https://github.com/kubernetes/code-generator"} at https://k8s.io/code-generator/cmd/openapi-gen?go-get=1
get "k8s.io/code-generator/cmd/openapi-gen": verifying non-authoritative meta tag
Fetching https://k8s.io/code-generator?go-get=1
Parsing meta tags from https://k8s.io/code-generator?go-get=1 (status code 200)
k8s.io/code-generator (download)
package k8s.io/code-generator/cmd/openapi-gen: cannot find package "k8s.io/code-generator/cmd/openapi-gen" in any of:
        /usr/lib/golang/src/k8s.io/code-generator/cmd/openapi-gen (from $GOROOT)
        /root/origin-3.11.0/go/src/k8s.io/code-generator/cmd/openapi-gen (from $GOPATH)

I am using go 1.10.2. I am unsure as to what is going wrong.

Neale
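The lookup behind that error can be modelled in a few lines: pre-modules go tooling probes $GOROOT/src and then each $GOPATH/src entry for the import path, and since openapi-gen had by then moved out of k8s.io/code-generator (it now lives under k8s.io/kube-openapi, per the follow-up in this thread), no GOPATH setting can make the old path resolve. This is a toy sketch with made-up directories, not go's actual source:

```python
import os
import tempfile

def resolve(import_path, goroot, gopaths):
    """Very roughly mimic how pre-module go tooling locates a package."""
    candidates = [os.path.join(goroot, "src", import_path)]
    candidates += [os.path.join(gp, "src", import_path) for gp in gopaths]
    for c in candidates:
        if os.path.isdir(c):
            return c
    raise ImportError('cannot find package "%s" in any of: %s'
                      % (import_path, ", ".join(candidates)))

root = tempfile.mkdtemp()
# Simulate a GOPATH that contains only the *new* home of openapi-gen.
os.makedirs(os.path.join(root, "src", "k8s.io/kube-openapi/cmd/openapi-gen"))

print(resolve("k8s.io/kube-openapi/cmd/openapi-gen", "/usr/lib/golang", [root]))
try:
    resolve("k8s.io/code-generator/cmd/openapi-gen", "/usr/lib/golang", [root])
except ImportError as e:
    print(e)  # the same "cannot find package ... in any of:" shape as above
```

The HTTP 200s in the go get output are not errors: the meta-tag fetches succeed, the clone succeeds, and only the final directory probe fails because the command no longer exists at that path in the cloned repo.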
Re: Openshift Origin builds for CVE-2018-1002105
Will the github repo be tagged with v3.11.1 or retagged with v3.11.0?
Building Origin for s390x
I have been building Origin for use on s390x (aka Z Systems, aka the mainframe) for a couple of years now. Currently I have a script that pulls the appropriate release from github and builds the various components: origin, registry, web-console, service-broker, and aggregated-logging. There are a handful of mods due to where the resulting images will be pushed, plus some logging package locations. I would like to replicate the build infrastructure used for the official releases rather than maintain some bash to get things going. I have access to a relatively well-resourced Linux on Z virtual machine with external network capability. If this is feasible, I would like to identify what's required.

Neale
Re: invalid reference format
1. Please file a BZ or Github issue with inventory, verbose ansible-playbook output and other requested information.
- Will do.

2. I'm not sure what you're attempting to build.
- origin/registry/web-console/...

3. We don't control packaging for distros; if you're not using RHEL you can install the required version of ansible however you see fit.
- Understood.

4. Yes, that seems broken. We don't control the packaging of these components; I think there is an outstanding github issue for this on github.com/openshift/openshift-ansible.
- I'll take a look.
Re: invalid reference format
Thanks for the explanation. I will add that to my hosts file and see how it goes. Four other questions/issues:

1. I am finding that when the playbook gets to the logging component it chokes when it tries to run the generate_certs playbook and construct the oc adm command. The entry in particular:

- include_tasks: procure_server_certs.yaml
  loop_control:
    loop_var: cert_info
  with_items:
    - procure_component: kibana
    - procure_component: kibana-ops
    - procure_component: kibana-internal
      hostnames: "kibana, kibana-ops, {{openshift_logging_kibana_hostname}}, {{openshift_logging_kibana_ops_hostname}}"

results in the error:

error: x509: cannot parse dnsName

If I eliminate the spaces between the hostnames entries then there are no such complaints. I wonder if this is a golang issue (see #2)?

2. What is the recommended golang level? I am using 1.9.4 but have access to everything up to 1.11 (though 1.9.7 fails in the sync/atomic test when I build it).

3. The 3.10.35 and .43 levels of openshift-ansible require ansible-2.4.x or better, but this level does not exist in the base/updates/extras 7 repo; there are, however, higher levels in the openstack/virt/gluster41 repos that can be used.

4. Playbook playbooks/init/base_packages.yml specifies python-docker: this used to be in python-docker-py, but modern versions of that package now provide docker-python. python-docker does exist as python-docker-3.x, but that is incompatible with atomic-1.22 in the extras repo.

If these items should be reported in the relevant github repo issues then I will raise them there. Thanks again for your patience and responses…

Neale

On 9/12/18, 13:36, "Michael Gugino" wrote:

Yes, we use regex to replace that value. It's not valid to set oreg_url in the way you are attempting to set it, but it may actually work for a large majority of the images. You can set the registry-console image directly as a workaround. openshift_cockpit_deployer_image should contain a fully qualified image name and desired version, for example: myregistry.com/testing/cockpit:latest
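On the x509 dnsName point: the failure is consistent with the comma-separated hostnames value being split without trimming, so the embedded spaces survive into the certificate SAN entries, and " kibana-ops" is not a parseable dnsName. A small sketch of that effect; the splitting behaviour and the simplified DNS-label check are my assumptions about what the tooling does, not its actual code:

```python
import re

# Simplified DNS-name check: dot-separated labels of letters/digits/hyphens.
DNS_LABEL = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$")

def valid_dns_name(name):
    return all(DNS_LABEL.match(label) for label in name.split("."))

raw = "kibana, kibana-ops, logging.origin.z"

naive = raw.split(",")              # keeps the leading spaces
fixed = [h.strip() for h in naive]  # what removing the spaces achieves

print([(h, valid_dns_name(h)) for h in naive])   # " kibana-ops" etc. fail
print([(h, valid_dns_name(h)) for h in fixed])   # stripped names all pass
```

This would also explain why deleting the spaces in the playbook value makes the complaint disappear without any golang change.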
invalid reference format
Hi,

I built 3.10 from source and performed an ansible-playbook installation. Everything went fine except for registry-console. What have I failed to configure, or what may be missing, such that when registry-console is started it fails with:

Warning  InspectFailed  5s (x8 over 27s)  kubelet, docker-test.sinenomine.net  Failed to apply default image tag "docker.io/clefos/origin-${component}:latest": couldn't parse image reference "docker.io/clefos/origin-${component}:latest": invalid reference format

(clefos is the name of my repo where the built images are placed.) I assume ${component} is supposed to be substituted by something during the playbook processing.

Neale
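The "invalid reference format" comes from the image-reference grammar rejecting the literal `${component}` placeholder: `$`, `{` and `}` are simply not legal characters in a repository path component or tag. The sketch below uses a much-simplified stand-in for the docker/distribution grammar (the real one also handles ports, digests, and more), just to show that the unsubstituted oreg_url fails while a substituted name parses; the substituted component name is illustrative:

```python
import re

# Simplified pieces of the image-reference grammar (illustrative only).
PATH_COMPONENT = re.compile(r"^[a-z0-9]+(?:[._-][a-z0-9]+)*$")
TAG = re.compile(r"^[A-Za-z0-9_][A-Za-z0-9._-]{0,127}$")

def valid_reference(ref):
    name, sep, tag = ref.rpartition(":")
    if not sep:                      # no tag given; docker defaults to latest
        name, tag = ref, "latest"
    return (all(PATH_COMPONENT.match(c) for c in name.split("/"))
            and TAG.match(tag) is not None)

raw = "docker.io/clefos/origin-${component}:latest"
substituted = raw.replace("${component}", "registry-console") \
                 .replace(":latest", ":v3.10")

print(valid_reference(raw))          # placeholder characters are rejected
print(valid_reference(substituted))  # a substituted name parses fine
```

So the failure means the placeholder reached the kubelet unsubstituted; the installer is expected to replace ${component} (and ${version}) before the reference is ever parsed.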
epoll_ctl question
I was stracing openshift as I was interested in what it was doing when running a couple of pods. I noticed these entries:

1006 openat(AT_FDCWD, "/rootfs/sys/fs/cgroup/cpu,cpuacct/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35ba26be_41cd_11e8_89c7_02001011.slice/docker-21defa552e0fd961411e1ecd63d3726d19c874cbde02c40258e5f60183cd3b55.scope/cpu.shares", O_RDONLY|O_CLOEXEC) = 231
1006 epoll_ctl(4, EPOLL_CTL_ADD, 231, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=1023, u64=4396643372080}}) = -1 EPERM (Operation not permitted)

I was wondering if this is a kernel-level or kernel-configuration thing, or just something that can be ignored?

Neale
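That EPERM is expected kernel behaviour rather than a configuration problem: epoll refuses regular files (they are always "ready", so they cannot be polled for readiness), and epoll_ctl(2) documents EPERM for a target fd that does not support epoll. The fd in the trace is an ordinary cgroup file opened with openat, and the runtime quietly falls back to reading it directly. A minimal reproduction (on Linux):

```python
import errno
import select
import tempfile

ep = select.epoll()
with tempfile.TemporaryFile() as f:      # any regular file behaves the same
    try:
        ep.register(f.fileno(), select.EPOLLIN)
        outcome = "registered"           # would be a surprise on Linux
    except OSError as e:
        outcome = errno.errorcode[e.errno]
ep.close()
print(outcome)                           # EPERM, matching the strace entry
```

So the entries can be ignored; they are a side effect of the Go runtime attempting to register every new fd with its netpoller.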
Re: tokencmd build error
Problem was the cross build. It was building for amd64. Setting OS_ONLY_BUILD_PLATFORMS appears to have fixed things.

> Hi, While building 3.9.0 I get the following:
>
> # github.com/openshift/origin/pkg/oc/util/tokencmd
> pkg/oc/util/tokencmd/negotiator_gssapi.go:25:7: undefined: gssapi.Lib
> pkg/oc/util/tokencmd/negotiator_gssapi.go:32:8: undefined: gssapi.Name
> pkg/oc/util/tokencmd/negotiator_gssapi.go:34:7: undefined: gssapi.CtxId
> pkg/oc/util/tokencmd/negotiator_gssapi.go:43:8: undefined: gssapi.CredId
>
> I note in that module there is an import for "github.com/apcera/gssapi", and that in apcera/gssapi/gss_types.go those particular things (e.g. CtxId) are defined.
tokencmd build error
Hi,

While building 3.9.0 I get the following:

# github.com/openshift/origin/pkg/oc/util/tokencmd
pkg/oc/util/tokencmd/negotiator_gssapi.go:25:7: undefined: gssapi.Lib
pkg/oc/util/tokencmd/negotiator_gssapi.go:32:8: undefined: gssapi.Name
pkg/oc/util/tokencmd/negotiator_gssapi.go:34:7: undefined: gssapi.CtxId
pkg/oc/util/tokencmd/negotiator_gssapi.go:43:8: undefined: gssapi.CredId

I note in that module there is an import for "github.com/apcera/gssapi", and that in apcera/gssapi/gss_types.go those particular things (e.g. CtxId) are defined. I am not sure why the error is occurring.

Neale
Re: 3.7.1 - Console
Bingo! Thanks.

Neale

From: Sam Padgett
Date: Thursday, March 1, 2018 at 6:13 PM
To: Neale Ferguson
Cc: Openshift dev list
Subject: Re: 3.7.1 - Console

Did you include the default templates or image streams when you installed? There was a bug in 3.7 where the spinner never stopped when there was no content; it is fixed in 3.9. https://bugzilla.redhat.com/show_bug.cgi?id=1501785
cbs-paas7-openshift-multiarch-el7-build
Hi,

I have been building CE for the s390x platform and noticed that in the recent release there is a new repo used in the source image. At the moment most of the packages referenced there are in the s390x version of EPEL that I maintain, but I'd like to be consistent with all architectures, so I would like to build the s390x version of this repo. I'd like to automate the process rather than manually build each package. I assume the non-x86_64 arches get their build inputs from the x86_64 repo, so I wonder if there is some tool/procedure I could adapt to build for s390x?

Neale
Re: Problem with tito when building RPMs
I downgraded tito from 0.6.11-1 to 0.6.10-1 and all is now working.

> Have you pulled tags into your git repo?
Problem with tito when building RPMs
I saw a similar problem reported in October last year, but there was no report of a resolution. I have done a build since then and it worked, but today when I went to build 3.7.1 I got the following:

OS_ONLY_BUILD_PLATFORMS='linux/amd64' hack/build-rpm-release.sh
[INFO] Building Origin release RPMs with tito...
Version: 3.7.1
Release: 1.1
Creating output directory: /tmp/tito
OS_GIT_MINOR::7+
OS_GIT_MAJOR::3
OS_GIT_VERSION::v3.7.1+8e67101-1
OS_GIT_TREE_STATE::clean
OS_GIT_CATALOG_VERSION::v0.1.2
OS_GIT_COMMIT::8e67101
Tagging new version of origin: 0.0.1 -> 3.7.1-1.1
version_and_rel: 3.7.1-1.1
suffixed_version: 3.7.1
release: 1.1
Traceback (most recent call last):
  File "/usr/bin/tito", line 23, in <module>
    CLI().main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/tito/cli.py", line 203, in main
    return module.main(argv)
  File "/usr/lib/python2.7/site-packages/tito/cli.py", line 671, in main
    return tagger.run(self.options)
  File "/usr/lib/python2.7/site-packages/tito/tagger/main.py", line 114, in run
    self._tag_release()
  File "/root/origin-3.7.1/go/src/github.com/openshift/origin/.tito/lib/origin/tagger/__init__.py", line 40, in _tag_release
    super(OriginTagger, self)._tag_release()
  File "/usr/lib/python2.7/site-packages/tito/tagger/main.py", line 136, in _tag_release
    self._check_tag_does_not_exist(self._get_new_tag(new_version))
  File "/usr/lib/python2.7/site-packages/tito/tagger/main.py", line 558, in _get_new_tag
    return self._get_tag_for_version(suffixed_version, release)
TypeError: _get_tag_for_version() takes exactly 2 arguments (3 given)
[ERROR] PID 55920: hack/build-rpm-release.sh:32: `tito tag --use-version="${OS_RPM_VERSION}" --use-release="${OS_RPM_RELEASE}" --no-auto-changelog --offline` exited with status 1.
[INFO] Stack Trace:
[INFO]   1: hack/build-rpm-release.sh:32: `tito tag --use-version="${OS_RPM_VERSION}" --use-release="${OS_RPM_RELEASE}" --no-auto-changelog --offline`
[INFO] Exiting with code 1.
[ERROR] hack/build-rpm-release.sh exited with code 1 after 00h 00m 05s
make: *** [build-rpms] Error 1

Neale
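The traceback has the classic shape of caller/callee version skew, which is why downgrading tito resolves it: one side of the call now passes a release argument to `_get_tag_for_version()`, while the implementation actually reached still accepts only a version. A tiny illustration with made-up class names, not tito's actual code:

```python
class Tagger:
    # Old-style signature: version only (plus self), as in tito 0.6.10.
    def _get_tag_for_version(self, version):
        return "v%s" % version

def get_new_tag(tagger, version, release):
    # Newer call site also forwards the release, like tito/tagger/main.py:558.
    return tagger._get_tag_for_version(version, release)

try:
    tag = get_new_tag(Tagger(), "3.7.1", "1.1")
except TypeError as e:
    tag = None
    print(type(e).__name__, "-", e)   # same failure mode as the build log
```

(The "takes exactly 2 arguments (3 given)" wording is Python 2's way of counting self as an argument; Python 3 phrases the same mismatch differently.)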
Re: 3.6.1 test failures
Thanks. Easily ignored then. The only thing I am really interested in getting fixed is the configuration of the server with which the end-to-end test is trying to talk. Is there any guidance available in the doc(s)?

On 11/6/17, 2:38 PM, "dev-boun...@lists.openshift.redhat.com on behalf of Christian Heimes" wrote:

>On 2017-11-02 16:29, Neale Ferguson wrote:
>> coverage: 75.9% of statements
>> FAIL  github.com/openshift/origin/pkg/build/cmd  0.124s
>>
>> - It appears there are more ciphers supported in this level of go (1.8.1 - defined in /usr/lib/golang/src/crypto/tls/cipher_suites.go) than openshift is prepared to use. I assume updating pkg/cmd/server/crypto/crypto.go with these additional ciphers would be required (among other places):
>>
>> --- FAIL: TestConstantMaps (0.02s)
>> crypto_test.go:35: discovered cipher tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 not in ciphers map
>> crypto_test.go:35: discovered cipher tls.TLS_RSA_WITH_AES_128_CBC_SHA256 not in ciphers map
>> crypto_test.go:35: discovered cipher tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 not in ciphers map
>> crypto_test.go:35: discovered cipher tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 not in ciphers map
>> crypto_test.go:35: discovered cipher tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 not in ciphers map
>>
>> Building CA...
>> Building intermediate 1...
>> Building intermediate 2...
>> Building server...
>> Building client...
>> FAIL
>> coverage: 12.7% of statements
>> FAIL  github.com/openshift/origin/pkg/cmd/server/crypto  1.857s
>
>Your assumption is correct. The five ciphers were added to the cipher map in https://github.com/tiran/origin/commit/6da689b0f307474ed5733a7c3c739bfb6599b6e0 . The test assumes Golang 1.7.
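The test's logic amounts to a set difference between the cipher constants the Go runtime exports and the names openshift's crypto.go knows about; building with Go 1.8 instead of 1.7 surfaces exactly the five suites quoted above. A sketch of that check (the cipher names are the real Go constants from the failure output; the map contents here are illustrative, not crypto.go's actual table):

```python
# Suites exported by Go 1.8's crypto/tls, as flagged by the failing test,
# plus one example that the map already knew about.
discovered = {
    "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
    "TLS_RSA_WITH_AES_128_CBC_SHA256",
    "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
    "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
    "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
}

# Name -> suite-id map as it stood when crypto.go targeted Go 1.7
# (illustrative single entry).
ciphers_map = {"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256": 0xC02F}

missing = sorted(discovered - set(ciphers_map))
for name in missing:
    print("discovered cipher tls.%s not in ciphers map" % name)
```

Adding the five names to the map, as the referenced commit does, empties `missing` and the test passes again.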
3.6.1 test failures
I built 3.6.1 on s390x and am passing the vast majority of tests except for the following. A couple look trivial but the others look like I need to do some work. Advice most welcome.

- Not sure what's happening here:

--- FAIL: TestStop (0.00s)
reaper_test.go:217: long name builds: unexpected action: {{default delete { builds} } build-00a-3}, expected {{default delete { builds} } build-00a-2}
reaper_test.go:217: long name builds: unexpected action: {{default delete { builds} } build-00a-2}, expected {{default delete { builds} } build-00a-3}
FAIL
coverage: 75.9% of statements
FAIL  github.com/openshift/origin/pkg/build/cmd  0.124s

- It appears there are more ciphers supported in this level of go (1.8.1 - defined in /usr/lib/golang/src/crypto/tls/cipher_suites.go) than openshift is prepared to use. I assume updating pkg/cmd/server/crypto/crypto.go with these additional ciphers would be required (among other places):

--- FAIL: TestConstantMaps (0.02s)
crypto_test.go:35: discovered cipher tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 not in ciphers map
crypto_test.go:35: discovered cipher tls.TLS_RSA_WITH_AES_128_CBC_SHA256 not in ciphers map
crypto_test.go:35: discovered cipher tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 not in ciphers map
crypto_test.go:35: discovered cipher tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 not in ciphers map
crypto_test.go:35: discovered cipher tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 not in ciphers map
Building CA...
Building intermediate 1...
Building intermediate 2...
Building server...
Building client...
FAIL
coverage: 12.7% of statements
FAIL  github.com/openshift/origin/pkg/cmd/server/crypto  1.857s

- This test hardcodes the expected architecture. Just need to build the string it is expecting dynamically:

--- FAIL: TestKubeletDefaults (0.00s)
node_config_test.go:142: expected defaults, actual defaults:
object.KubeletConfiguration.PodInfraContainerImage:
  a: "gcr.io/google_containers/pause-amd64:3.0"
  b: "gcr.io/google_containers/pause-s390x:3.0"
node_config_test.go:143: Got different defaults than expected, adjust in BuildKubernetesNodeConfig and update expectedDefaults
FAIL
coverage: 3.1% of statements
FAIL  github.com/openshift/origin/pkg/cmd/server/kubernetes/node  0.134s

- Actual SEGV:

--- FAIL: TestParseRepository (0.14s)
git_test.go:101: ParseRepository returned err: parse g...@github.com:user/repo.git: first path segment in URL cannot contain colon
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x26ea90]
goroutine 21 [running]:
testing.tRunner.func1(0xc42006f1e0)
	/usr/lib/golang/src/testing/testing.go:622 +0x2e0
panic(0x2c6400, 0x479ce0)
	/usr/lib/golang/src/runtime/panic.go:489 +0x2d8
github.com/openshift/origin/pkg/generate/git.TestParseRepository(0xc42006f1e0)
	/root/origin-3.6.1/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/pkg/generate/git/git_test.go:108 +0xd00
testing.tRunner(0xc42006f1e0, 0x329670)
	/usr/lib/golang/src/testing/testing.go:657 +0xa6
created by testing.(*T).Run
	/usr/lib/golang/src/testing/testing.go:697 +0x2e4
FAIL  github.com/openshift/origin/pkg/generate/git  0.150s

- This test complains about invalid certificates, and nothing much happens except retrying forever…

Server [https://localhost:8443]:
Authentication required for https://localhost:8443 (openshift)
Username: system:admin
Password:
error: username system:admin is invalid for basic auth
[INFO] [CLEANUP] Dumping etcd contents to _output/scripts/test-integration/artifacts/etcd

Syslog output:
Oct 30 13:59:57 docker-test journal:
2017-