Re: PV based on NFS does not survive reboot

2018-11-05 Thread marc . schlegel
It seems my understanding of persistent volumes and the corresponding 
claims was wrong. I had expected that a PV could have multiple PVCs bound 
to it as long as there was enough storage.
But it turns out to be a 1-to-1 relationship, and my PV was not reclaimed after I 
deleted the first PVC. The reboot obviously had nothing to do with this.
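For reference, the quickest way to check this is the PV status and its reclaim policy; below is a rough sketch of recreating a Released NFS PV (the PV name nfs-pv, the size and the share path are made up for illustration, not taken from this setup):

oc get pv                                # STATUS column shows Bound / Released / Available
oc describe pv nfs-pv | grep -i reclaim  # manually created PVs usually default to Retain
# with Retain, a Released PV is never re-bound; delete and recreate it to make it Available again
oc delete pv nfs-pv
oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: nfs.example.com
    path: /exports/pv1
EOF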

I am going to test this later today.



From:   marc.schle...@sdv-it.de
To:     users@lists.openshift.redhat.com
Date:   05.11.2018 08:58
Subject:        PV based on NFS does not survive reboot
Sent by:        users-boun...@lists.openshift.redhat.com



I am running a test setup that includes a dedicated node providing an NFS 
share, which is not part of the OpenShift installation. 
After the installation I ran all the steps provided by the documentation 
[1] and I was able to add a persistent volume claim to my project, which 
was bound to the NFS PV. 

However, after rebooting my cluster I can no longer add PVCs. They fail 
with the message that no persistent volume is available. Running the oc 
command to add the NFS PV again fails with a message that it already 
exists. 
I checked my NFS node and the NFS service is running. Since I did not 
install any nfs-utils on the OpenShift nodes, I assume that the client 
service might not be enabled there, hence the PV is not available. I would 
assume that this is handled by the ansible installer. 

Any ideas what could cause this behavior? 

[1] 
https://docs.openshift.com/enterprise/3.0/admin_guide/persistent_storage_nfs.html
 


regards 
Marc 





PV based on NFS does not survive reboot

2018-11-04 Thread marc . schlegel
I am running a test setup that includes a dedicated node providing an NFS 
share, which is not part of the OpenShift installation.
After the installation I ran all the steps provided by the documentation 
[1] and I was able to add a persistent volume claim to my project, which 
was bound to the NFS PV.

However, after rebooting my cluster I can no longer add PVCs. They fail 
with the message that no persistent volume is available. Running the oc 
command to add the NFS PV again fails with a message that it already 
exists.
I checked my NFS node and the NFS service is running. Since I did not 
install any nfs-utils on the OpenShift nodes, I assume that the client 
service might not be enabled there, hence the PV is not available. I would 
assume that this is handled by the ansible installer.
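On the nfs-utils question, a quick way to check the client side on each node (standard CentOS/RHEL package names; just a sketch, the share in the mount test is hypothetical):

rpm -q nfs-utils || yum install -y nfs-utils      # provides mount.nfs, which the kubelet needs for NFS PVs
systemctl status rpcbind                          # required for NFSv3 mounts
# manual smoke test from a node:
mount -t nfs nfs.example.com:/exports/pv1 /mnt && umount /mnt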

Any ideas what could cause this behavior?

[1] 
https://docs.openshift.com/enterprise/3.0/admin_guide/persistent_storage_nfs.html

regards
Marc



Re: [CentOS PaaS SIG]: Origin v3.11 rpms available for testing

2018-11-01 Thread Marc Schlegel
I've successfully installed OKD 3.11 from the CI repo.

Though I reverted back to 3.10 because persistent volume claims from NFS did 
not work.
Furthermore, the following issue still persists during install:
https://github.com/openshift/openshift-ansible/issues/10375

When the openshift-ansible installer waits for the catalog service I have to 
run an additional Ansible task to work around service resolution. But this also 
happens on 3.10.

regards
Marc

Am Mittwoch, 31. Oktober 2018, 16:42:41 CET schrieb Ricardo Martinelli de 
Oliveira:
> I'd like to ask anyone who deployed OKD 3.11 successfully if you could reply
> to this thread with your ack or nack. We need this feedback in order to
> promote to -candidate and then to the official CentOS repos.
> 
> On Fri, Oct 19, 2018 at 5:42 PM Anton Hughes 
> wrote:
> 
> > Thanks Phil
> >
> > I was using
> >
> > openshift_release="v3.11"
> > openshift_image_tag="v3.11"
> > openshift_pkg_version="-3.11"
> >
> > But should have been using
> >
> > openshift_release="v3.11.0"
> > openshift_image_tag="v3.11.0"
> > openshift_pkg_version="-3.11.0"
> >
> >
> >
> > On Sat, 20 Oct 2018 at 09:23, Phil Cameron  wrote:
> >
> >> Go to
> >> http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
> >> in your web browser and you can see the names of all available rpms. It
> >> appears the 3.11 rpms are 3.11.0
> >>
> >> cd /etc/yum.repos.d
> >> create a file, centos-okd-ci.repo
> >> [centos-okd-ci]
> >> name=centos-okd-ci
> >> baseurl=http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
> >> gpgcheck=0
> >> enabled=1
> >>
> >> yum search origin-node
> >> will list the available rpms
> >>
> >> On 10/19/2018 04:09 PM, Anton Hughes wrote:
> >>
> >> Hi Daniel
> >>
> >> Unfortunately this is still not working for me. I'm trying the method of
> >> adding the repo using the inventory file, e.g.,
> >>
> >> openshift_additional_repos=[{'id': 'centos-okd-ci', 'name':
> >> 'centos-okd-ci', 'baseurl' :'
> >> http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/',
> >> 'gpgcheck' :'0', 'enabled' :'1'}]
> >>
> >> but I am getting the below error.
> >>
> >> TASK [openshift_node : Install node, clients, and conntrack packages]
> >> **
> >> Saturday 20 October 2018  09:04:53 +1300 (0:00:02.255)   0:03:34.602
> >> **
> >> FAILED - RETRYING: Install node, clients, and conntrack packages (3
> >> retries left).
> >> FAILED - RETRYING: Install node, clients, and conntrack packages (2
> >> retries left).
> >> FAILED - RETRYING: Install node, clients, and conntrack packages (1
> >> retries left).
> >> failed: [xxx.xxx.xxx.xxx] (item={u'name': u'origin-node-3.11'}) =>
> >> {"attempts": 3, "changed": false, "item": {"name": "origin-node-3.11"},
> >> "msg": "No package matching 'origin-node-3.11' found available, installed
> >> or updated", "rc": 126, "results": ["No package matching 'origin-node-3.11'
> >> found available, installed or updated"]}
> >> FAILED - RETRYING: Install node, clients, and conntrack packages (3
> >> retries left).
> >> FAILED - RETRYING: Install node, clients, and conntrack packages (2
> >> retries left).
> >> FAILED - RETRYING: Install node, clients, and conntrack packages (1
> >> retries left).
> >> failed: [xxx.xxx.xxx.xxx] (item={u'name': u'origin-clients-3.11'}) =>
> >> {"attempts": 3, "changed": false, "item": {"name": "origin-clients-3.11"},
> >> "msg": "No package matching 'origin-clients-3.11' found available,
> >> installed or updated", "rc": 126, "results": ["No package matching
> >> 'origin-clients-3.11' found available, installed or updated"]}
> >>
> >>
> >> On Sat, 20 Oct 2018 at 03:27, Daniel Comnea 
> >> wrote:
> >>
> >>> Hi all,
> >>>
> >>> First of all, sorry for the late reply as well as for any confusion I may
> >>> have caused with my previous email.
> >>> I was very pleased to see the vibe and excitement around testing OKD
> >>> v3.11, very much appreciated.
> >>>
> >>> Here is the latest info:
> >>>
> >>>- everyone who wants to help us with testing should use the [1] repo,
> >>>which can be consumed:
> >>>   - in the inventory as [2], or
> >>>   - by deploying your own repo file [3]
> >>>- nobody should use the repo I've mentioned in my previous email [4]
> >>>(the CentOS Infra team corrected me on the confusion I made, once again
> >>>apologies for that)
> >>>
> >>>
> >>> Regarding the ansible version here are the info following my sync up
> >>> with CentOS Infra team:
> >>>
> >>>- very likely on Monday, or Tuesday at the latest, a new rpm called
> >>>centos-release-ansible26 will appear in CentOS Extras
> >>>- the above rpm will become a dependency for the
> >>>*centos-release-openshift-origin311* rpm, which will be created and
> >>>land in the CentOS Extras repo at the same time OKD v3.11 is promoted 
> >>> to
> >>>mirror.centos.org
> >>>   - note this is the same flow as it 

Re: Re: ETCD no longer starting during install

2018-10-31 Thread marc . schlegel
Well, I can confirm as well that changing to 2.6 solved the issue.

Still, could anyone please add a check to the ansible installer that rejects 
Ansible 2.7, since it is not supported? This would save hours of work.
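Until such a check exists, a cheap guard before kicking off the playbooks (a sketch; the package names follow the CentOS notes elsewhere in this archive):

ansible --version | head -1     # should report 2.6.x
# one way to pin it on CentOS 7 once centos-release-ansible26 lands in Extras:
yum install -y centos-release-ansible26 && yum install -y ansible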

regards
Marc



From:   Yu Wei 
To:     Scott Dodson 
Cc:     users 
Date:   30.10.2018 16:52
Subject:        Re: ETCD no longer starting during install
Sent by:        users-boun...@lists.openshift.redhat.com



I changed to Ansible 2.6 and resolved the issue.
Thanks for your help.
On 2018/10/30 21:55, Scott Dodson wrote:
Please try using Ansible 2.6; we're aware of some problems in 2.7 that 
cause large portions of the playbooks to be skipped. Some users are 
reporting that those problems go away in Ansible 2.7.1, but others report 
that they persist.

On Tue, Oct 30, 2018 at 5:25 AM Yu Wei  wrote:
I met the same problem and found that etcd was skipped as below,
TASK [openshift_control_plane : Establish the default bootstrap kubeconfig 
for masters] **
changed: [host-10-1-241-74] => 
(item=/etc/origin/node/bootstrap.kubeconfig)
changed: [host-10-1-241-74] => (item=/etc/origin/node/node.kubeconfig)

TASK [openshift_control_plane : Check status of control plane image 
pre-pull] 
changed: [host-10-1-241-74]

TASK [openshift_control_plane : Check status of etcd image pre-pull] 
*
skipping: [host-10-1-241-74]

TASK [openshift_control_plane : Start and enable self-hosting node] 
**
changed: [host-10-1-241-74]


Is this playbooks issue?
Thanks,
Jared
Interested in big data, cloud computing
On 2018/10/30 15:47, marc.schle...@sdv-it.de wrote:
Hello everyone 

I am facing an issue with the installer for 3.10 (and 3.11 has the same 
problem). 

It started around 2-3 weeks ago; since then I have not been able to run the Ansible 
installer successfully, even when using a tag from 3.10 in the 
installer repo that worked before. 
The control plane is not starting, and what I could figure out is that 
etcd is not started anywhere. The last time it was working, running 
"docker ps" on the master (single-master, multi-node system) showed about 4 
running containers, one of them being etcd. 
Now there are only 2 of them and no etcd anywhere. 

https://github.com/lostiniceland/devops/tree/master/openshift 
This is my current Vagrant setup, which uses a simple script to check out 
the openshift-installer, prepare Vagrant and run the Ansible files. 

I thought that I might have broken my inventory or script, but I double-checked 
everything and I knew that this setup was working before. 
Now at work, the colleague who is maintaining our test cluster has the same 
problem when upgrading from 3.9 to 3.10: no etcd anywhere. It seems 
restarting the docker daemon fixes it for our test cluster. 

If anyone could look into this, it would be very much appreciated. 
What I find odd is that even a previously working tag like 
openshift-ansible-3.10.53-1 is now broken. The only reasons I can think of 
are that the Docker images used have been updated, or that the installed version of 
Docker is somehow broken. 


best regards 
Marc 





ETCD no longer starting during install

2018-10-30 Thread marc . schlegel
Hello everyone

I am facing an issue with the installer for 3.10 (and 3.11 has the same 
problem).

It started around 2-3 weeks ago; since then I have not been able to run the Ansible 
installer successfully, even when using a tag from 3.10 in the 
installer repo that worked before.
The control plane is not starting, and what I could figure out is that 
etcd is not started anywhere. The last time it was working, running 
"docker ps" on the master (single-master, multi-node system) showed about 4 
running containers, one of them being etcd.
Now there are only 2 of them and no etcd anywhere.

https://github.com/lostiniceland/devops/tree/master/openshift
This is my current Vagrant setup, which uses a simple script to check out 
the openshift-installer, prepare Vagrant and run the Ansible files.

I thought that I might have broken my inventory or script, but I double-checked 
everything and I knew that this setup was working before.
Now at work, the colleague who is maintaining our test cluster has the same 
problem when upgrading from 3.9 to 3.10: no etcd anywhere. It seems 
restarting the docker daemon fixes it for our test cluster.

If anyone could look into this, it would be very much appreciated.
What I find odd is that even a previously working tag like 
openshift-ansible-3.10.53-1 is now broken. The only reasons I can think of 
are that the Docker images used have been updated, or that the installed version of 
Docker is somehow broken.


best regards
Marc


Re: [CentOS PaaS SIG]: Origin v3.11 rpms available for testing

2018-10-17 Thread Marc Schlegel
I would like to participate in the testing.

How can I set the rpm repo URL for the ansible installer? I couldn't find any 
inventory parameter in the docs.
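For the archives, the knob that ended up working is the openshift_additional_repos inventory variable (value exactly as used elsewhere in this thread):

# in the [OSEv3:vars] section of the inventory
openshift_additional_repos=[{'id': 'centos-okd-ci', 'name': 'centos-okd-ci', 'baseurl': 'http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/', 'gpgcheck': '0', 'enabled': '1'}]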

Am Mittwoch, 17. Oktober 2018, 11:38:48 CEST schrieb Daniel Comnea:
> Hi,
> 
> We would like to announce that OKD v3.11 rpms are available for testing at
> [1].
> 
> As such we are calling for help from community to start testing and let us
> know if there are issues with the rpms and its dependencies.
> 
> And in the spirit of transparency see below the plan to promote the rpms to
> mirror.centos.org repo:
> 
> 
>1. in the next few days the packages should be promoted to the test repo
>[2] (currently it does not exist, we are waiting to be sync'ed in the
>background)
>2. in one/two weeks time if we haven't heard any issues/ blockers we are
>going to promote to [3] repo (currently it doesn't exist, it will once
>the rpm will be promoted and signed)
> 
> 
> Please note the Ansible version used (and supported) *must be* 2.6.x and not
> 2.7; if you opt to ignore the warning you will run into issues.
> 
> On a different note, the CentOS Infra team is working hard (thanks!) to
> package and release a centos-ansible rpm which we'll promote in our PaaS
> repos.
> 
> The rationale is to bring more control around the Ansible version used/
> required by the openshift-ansible installer and not rely on the latest Ansible
> version pushed to the EPEL repo, which caused friction recently (reflected in our
> CI as well as in users reporting issues)
> 
> 
> Thank you,
> PaaS SiG team
> 
> [1] https://cbs.centos.org/repos/paas7-openshift-origin311-testing/
> [2] https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
> [3] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
> 






Re: openshift-ansible release-3.10 - Install fails with control plane pods

2018-10-09 Thread Marc Schlegel
Hello everyone

I was finally able to resolve the issue with the control plane.

The problem was caused by the master pod not being able to connect to the etcd 
pod, because the hostname always resolved to 127.0.0.1 and not to the local 
cluster IP. This was due to the Vagrant box I used, and could be resolved by 
making sure that /etc/hosts only contained the localhost 127.0.0.1 entry.
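Concretely, the culprit is usually the line Vagrant adds mapping the VM's hostname to loopback; roughly (hostnames as used in this setup):

# before (added by Vagrant; makes master.vnet.de resolve to 127.0.0.1):
#   127.0.0.1   master.vnet.de   master   localhost localhost.localdomain ...
# after (name resolution for master.vnet.de is left to the dnsmasq node):
127.0.0.1   localhost localhost.localdomain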

Now the installer gets past the control-plane-check.

Unfortunately, the next issue arises when the installer waits for the "catalog 
api server". The command "curl -k 
https://apiserver.kube-service-catalog.svc/healthz" cannot connect, because the 
installer only adds "cluster.local" to resolv.conf.
Either the installer should make sure that any hostname ending in .svc gets resolved as 
well (my current workaround: adding server=/svc/172.30.0.1 to 
/etc/dnsmasq.d/origin-upstream-dns.conf), or all services should get hostnames 
ending in "cluster.local".
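For reference, the workaround spelled out (172.30.0.1 is the service IP used in this cluster; the ad-hoc Ansible line at the end is just a convenience of mine, not part of openshift-ansible):

echo 'server=/svc/172.30.0.1' >> /etc/dnsmasq.d/origin-upstream-dns.conf
systemctl restart dnsmasq
curl -k https://apiserver.kube-service-catalog.svc/healthz     # should resolve now

# optionally roll it out to all hosts in the inventory group "nodes":
ansible nodes -b -m lineinfile -a "path=/etc/dnsmasq.d/origin-upstream-dns.conf line='server=/svc/172.30.0.1'"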


Am Freitag, 31. August 2018, 21:15:12 CEST schrieben Sie:
> The dependency chain for control plane is node then etcd then api then
> controllers. From your previous post it looks like there's no apiserver
> running. I'd look into what's wrong there.
> 
> Check `master-logs api api` if that doesn't provide you any hints then
> check the logs for the node service but I can't think of anything that
> would fail there yet result in successfully starting the controller pods.
> The apiserver and controller pods use the same image. Each pod will have
> two containers, the k8s_POD containers are rarely interesting.
> 
> On Thu, Aug 30, 2018 at 2:37 PM Marc Schlegel  wrote:
> 
> > Thanks for the link. It looks like the api-pod is not getting up at all!
> >
> > Log from k8s_controllers_master-controllers-*
> >
> > [vagrant@master ~]$ sudo docker logs
> > k8s_controllers_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_1
> > E0830 18:28:05.787358   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594:
> > Failed to list *v1.Pod: Get
> > https://master.vnet.de:8443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.788589   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.ReplicationController: Get
> > https://master.vnet.de:8443/api/v1/replicationcontrollers?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.804239   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.Node: Get
> > https://master.vnet.de:8443/api/v1/nodes?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.806879   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1beta1.StatefulSet: Get
> > https://master.vnet.de:8443/apis/apps/v1beta1/statefulsets?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.808195   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1beta1.PodDisruptionBudget: Get
> > https://master.vnet.de:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.673507   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.PersistentVolume: Get
> > https://master.vnet.de:8443/api/v1/persistentvolumes?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.770141   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1beta1.ReplicaSet: Get
> > https://master.vnet.de:8443/apis/extensions/v1beta1/replicasets?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.773878   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.Service: Get
> > https://master.vnet.de:8443/api/v1/services?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.778204   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.StorageClass: Get

Re: openshift-ansible release-3.10 - Install fails with control plane pods

2018-09-02 Thread Marc Schlegel
Well I found two options for the inventory

openshift_ip

# host group for masters
[masters]
master openshift_ip=192.168.60.150
# host group for etcd
[etcd]
master openshift_ip=192.168.60.150
# host group for nodes, includes region info
[nodes]
master openshift_node_group_name='node-config-master' 
openshift_ip=192.168.60.150
infra openshift_node_group_name='node-config-infra' openshift_ip=192.168.60.151
app1 openshift_node_group_name='node-config-compute' openshift_ip=192.168.60.152
app2 openshift_node_group_name='node-config-compute' openshift_ip=192.168.60.153


and flannel

openshift_use_openshift_sdn=false 
openshift_use_flannel=true 
flannel_interface=eth1


The etcd logs are looking good now; still, the problem seems to be that there is no 
SSL port open.
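A quick way to confirm which ports are actually listening on the master (a sketch; ss is part of iproute on CentOS 7):

ss -tlnp | egrep '8443|2379|2380'        # API server (8443) and etcd client/peer ports
curl -kv https://master.vnet.de:8443/healthz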

Here are some lines I could pull from journalctl on the master:

Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7200376300 
certificate_manager.go:216] Certificate rotation is enabled.
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7204536300 
manager.go:154] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct"
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7382576300 
certificate_manager.go:287] Rotating certificates
Sep 02 19:17:38 master.vnet.de origin-node[6300]: E0902 19:17:38.7525316300 
certificate_manager.go:299] Failed while requesting a signed certificate from 
the master: cannot create certificate signing request: Post 
https://master.vnet.de:8443/apis/certificates.k8s.io/v
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7784906300 
fs.go:142] Filesystem UUIDs: map[570897ca-e759-4c81-90cf-389da6eee4cc:/dev/vda2 
b60e9498-0baa-4d9f-90aa-069048217fee:/dev/dm-0 
c39c5bed-f37c-4263-bee8-aeb6a6659d7b:/dev/dm-1]
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7785066300 
fs.go:143] Filesystem partitions: map[tmpfs:{mountpoint:/dev/shm major:0 
minor:19 fsType:tmpfs blockSize:0} 
/dev/mapper/VolGroup00-LogVol00:{mountpoint:/var/lib/docker/overlay2 major:253 
minor
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7801306300 
manager.go:227] Machine: {NumCores:1 CpuFrequency:2808000 
MemoryCapacity:3974230016 HugePages:[{PageSize:1048576 NumPages:0} 
{PageSize:2048 NumPages:0}] MachineID:6c1357b9e4a54b929e1d09cacf37e
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7836556300 
manager.go:233] Version: {KernelVersion:3.10.0-862.2.3.el7.x86_64 
ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.13.1 
DockerAPIVersion:1.26 CadvisorVersion: CadvisorRevision:}
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7842516300 
server.go:621] --cgroups-per-qos enabled, but --cgroup-root was not specified.  
defaulting to /
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7845246300 
container_manager_linux.go:242] container manager verified user specified 
cgroup-root exists: /
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7845336300 
container_manager_linux.go:247] Creating Container Manager object based on Node 
Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: 
ContainerRuntime:docker CgroupsPerQOS:true C
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7846096300 
container_manager_linux.go:266] Creating device plugin manager: true
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7846166300 
manager.go:102] Creating Device Plugin manager at 
/var/lib/kubelet/device-plugins/kubelet.sock
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7847146300 
state_mem.go:36] [cpumanager] initializing new in-memory state store
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7849446300 
state_file.go:82] [cpumanager] state file: created new state file 
"/var/lib/origin/openshift.local.volumes/cpu_manager_state"
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7849886300 
server.go:895] Using root directory: /var/lib/origin/openshift.local.volumes
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7850136300 
kubelet.go:273] Adding pod path: /etc/origin/node/pods
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7850466300 
file.go:52] Watching path "/etc/origin/node/pods"
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.7850546300 
kubelet.go:298] Watching apiserver
Sep 02 19:17:38 master.vnet.de origin-node[6300]: E0902 19:17:38.7966516300 
reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461:
 Failed to list *v1.Node: Get 
https://master.vnet.de:8443/api/v1/nodes?fieldSelector=metadata.
Sep 02 19:17:38 master.vnet.de origin-node[6300]: E0902 19:17:38.7966956300 
reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452:
 Failed to list *v1.Service: Get 

Re: openshift-ansible release-3.10 - Install fails with control plane pods

2018-09-02 Thread Marc Schlegel
I might have found something... it could be a Vagrant issue.

Vagrant uses two network interfaces: one for its own SSH access; the other one 
uses the IP configured in the Vagrantfile.
Here's a log from the etcd pod:

...
2018-09-02 17:15:43.896539 I | etcdserver: published {Name:master.vnet.de 
ClientURLs:[https://192.168.121.202:2379]} to cluster 6d42105e200fef69
2018-09-02 17:15:43.896651 I | embed: ready to serve client requests
2018-09-02 17:15:43.897149 I | embed: serving client requests on 
192.168.121.202:2379


The interesting part is that it is serving on 192.168.121.202, but the IP 
which should be used is 192.168.60.150.

[vagrant@master ~]$ ip ad 
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
   valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
group default qlen 1000
link/ether 52:54:00:87:13:01 brd ff:ff:ff:ff:ff:ff
inet 192.168.121.202/24 brd 192.168.121.255 scope global noprefixroute 
dynamic eth0
   valid_lft 3387sec preferred_lft 3387sec
inet6 fe80::5054:ff:fe87:1301/64 scope link 
   valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
group default qlen 1000
link/ether 5c:a1:ab:1e:00:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.60.150/24 brd 192.168.60.255 scope global noprefixroute eth1
   valid_lft forever preferred_lft forever
inet6 fe80::5ea1:abff:fe1e:2/64 scope link 
   valid_lft forever preferred_lft forever
4: docker0:  mtu 1500 qdisc noqueue state 
DOWN group default 
link/ether 02:42:8b:fa:b7:b0 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
   valid_lft forever preferred_lft forever


Is there any way I can configure my inventory to use a dedicated 
network-interface (eth1 in my Vagrant case)?



Am Freitag, 31. August 2018, 21:15:12 CEST schrieben Sie:
> The dependency chain for control plane is node then etcd then api then
> controllers. From your previous post it looks like there's no apiserver
> running. I'd look into what's wrong there.
> 
> Check `master-logs api api` if that doesn't provide you any hints then
> check the logs for the node service but I can't think of anything that
> would fail there yet result in successfully starting the controller pods.
> The apiserver and controller pods use the same image. Each pod will have
> two containers, the k8s_POD containers are rarely interesting.
> 
> On Thu, Aug 30, 2018 at 2:37 PM Marc Schlegel  wrote:
> 
> > Thanks for the link. It looks like the api-pod is not getting up at all!
> >
> > Log from k8s_controllers_master-controllers-*
> >
> > [vagrant@master ~]$ sudo docker logs
> > k8s_controllers_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_1
> > E0830 18:28:05.787358   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594:
> > Failed to list *v1.Pod: Get
> > https://master.vnet.de:8443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.788589   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.ReplicationController: Get
> > https://master.vnet.de:8443/api/v1/replicationcontrollers?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.804239   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.Node: Get
> > https://master.vnet.de:8443/api/v1/nodes?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.806879   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1beta1.StatefulSet: Get
> > https://master.vnet.de:8443/apis/apps/v1beta1/statefulsets?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.808195   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1beta1.PodDisruptionBudget: Get
> > https://master.vnet.de:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.673507   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.PersistentVolume: G

Re: openshift-ansible release-3.10 - Install fails with control plane pods

2018-08-31 Thread Marc Schlegel
Sure, see attached. 

Before each attempt I pull the latest release-3.10 branch for openshift-ansible.

@Scott Dodson: I am going to investigate again using your suggestions.

> Marc,
> 
> Is it possible to share your Ansible inventory file to review your
> OpenShift installation? I know there are some changes in the 3.10 installation
> that might be reflected in the inventory.
> 
> On Thu, Aug 30, 2018 at 3:37 PM Marc Schlegel  wrote:
> 
> > Thanks for the link. It looks like the api-pod is not getting up at all!
> >
> > Log from k8s_controllers_master-controllers-*
> >
> > [vagrant@master ~]$ sudo docker logs
> > k8s_controllers_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_1
> > E0830 18:28:05.787358   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594:
> > Failed to list *v1.Pod: Get
> > https://master.vnet.de:8443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.788589   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.ReplicationController: Get
> > https://master.vnet.de:8443/api/v1/replicationcontrollers?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.804239   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.Node: Get
> > https://master.vnet.de:8443/api/v1/nodes?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.806879   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1beta1.StatefulSet: Get
> > https://master.vnet.de:8443/apis/apps/v1beta1/statefulsets?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.808195   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1beta1.PodDisruptionBudget: Get
> > https://master.vnet.de:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.673507   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.PersistentVolume: Get
> > https://master.vnet.de:8443/api/v1/persistentvolumes?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.770141   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1beta1.ReplicaSet: Get
> > https://master.vnet.de:8443/apis/extensions/v1beta1/replicasets?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.773878   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.Service: Get
> > https://master.vnet.de:8443/api/v1/services?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.778204   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.StorageClass: Get
> > https://master.vnet.de:8443/apis/storage.k8s.io/v1/storageclasses?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.784874   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.PersistentVolumeClaim: Get
> > https://master.vnet.de:8443/api/v1/persistentvolumeclaims?limit=500=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> >
> > The log is full with those. Since it is all about api, I tried to get the
> > logs from k8s_POD_master-api-master.vnet.de_kube-system_* which is
> > completely empty :-/
> >
> > [vagrant@master ~]$ sudo docker logs
> > k8s_POD_master-api-master.vnet.de_kube-system_86017803919d833e39cb3d694c249997_1
> > [vagrant@master ~]$
> >
> > Is there any special prerequisite about the api-pod?
> >
> > regards
> > Marc
> >
> >
> > > Marc,
> > >
> > > could you please look over the issue [1] and pull the master pod logs and
> > > see if you bumped into same issue mentioned by the other folks?
>

Re: openshift-ansible release-3.10 - Install fails with control plane pods

2018-08-30 Thread Marc Schlegel
Thanks for the link. It looks like the api-pod is not getting up at all!

Log from k8s_controllers_master-controllers-*

[vagrant@master ~]$ sudo docker logs 
k8s_controllers_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_1
E0830 18:28:05.787358   1 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594:
 Failed to list *v1.Pod: Get 
https://master.vnet.de:8443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded=500=0:
 dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:05.788589   1 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: 
Failed to list *v1.ReplicationController: Get 
https://master.vnet.de:8443/api/v1/replicationcontrollers?limit=500=0:
 dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:05.804239   1 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: 
Failed to list *v1.Node: Get 
https://master.vnet.de:8443/api/v1/nodes?limit=500=0: dial tcp 
127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:05.806879   1 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: 
Failed to list *v1beta1.StatefulSet: Get 
https://master.vnet.de:8443/apis/apps/v1beta1/statefulsets?limit=500=0:
 dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:05.808195   1 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: 
Failed to list *v1beta1.PodDisruptionBudget: Get 
https://master.vnet.de:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500=0:
 dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:06.673507   1 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: 
Failed to list *v1.PersistentVolume: Get 
https://master.vnet.de:8443/api/v1/persistentvolumes?limit=500=0:
 dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:06.770141   1 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: 
Failed to list *v1beta1.ReplicaSet: Get 
https://master.vnet.de:8443/apis/extensions/v1beta1/replicasets?limit=500=0:
 dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:06.773878   1 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: 
Failed to list *v1.Service: Get 
https://master.vnet.de:8443/api/v1/services?limit=500=0: dial 
tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:06.778204   1 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: 
Failed to list *v1.StorageClass: Get 
https://master.vnet.de:8443/apis/storage.k8s.io/v1/storageclasses?limit=500=0:
 dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:06.784874   1 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: 
Failed to list *v1.PersistentVolumeClaim: Get 
https://master.vnet.de:8443/api/v1/persistentvolumeclaims?limit=500=0:
 dial tcp 127.0.0.1:8443: getsockopt: connection refused

The log is full of those. Since it is all about the API, I tried to get the logs 
from k8s_POD_master-api-master.vnet.de_kube-system_*, which are completely empty 
:-/

[vagrant@master ~]$ sudo docker logs 
k8s_POD_master-api-master.vnet.de_kube-system_86017803919d833e39cb3d694c249997_1
[vagrant@master ~]$ 

Is there any special prerequisite for the api pod?

regards
Marc


> Marc,
> 
> could you please look over the issue [1] and pull the master pod logs and
> see if you bumped into same issue mentioned by the other folks?
> Also make sure the openshift-ansible release is the latest one.
> 
> Dani
> 
> [1] https://github.com/openshift/openshift-ansible/issues/9575
> 
> On Wed, Aug 29, 2018 at 7:36 PM Marc Schlegel  wrote:
> 
> > Hello everyone
> >
> > I am having trouble getting a working Origin 3.10 installation using the
> > openshift-ansible installer. My install always fails because the control
> > plane pods are not available. I've checked out the release-3.10 branch from
> > openshift-ansible and configured the inventory accordingly
> >
> >
> > TASK [openshift_control_plane : Start and enable self-hosting node]
> > **
> > changed: [master]
> > TASK [openshift_control_plane : Get node logs]
> > ***
> > skipping: [master]
> > TASK [openshift_control_plane : debug]
> > **
> > skipping: [master]
> > TASK [openshift_control_plane : fail]
> > *
> > skipping: [master]
> > TASK [openshift_contr

openshift-ansible release-3.10 - Install fails with control plane pods

2018-08-29 Thread Marc Schlegel
Hello everyone

I am having trouble getting a working Origin 3.10 installation using the 
openshift-ansible installer. My install always fails because the control plane 
pods are not available. I've checked out the release-3.10 branch from 
openshift-ansible and configured the inventory accordingly.


TASK [openshift_control_plane : Start and enable self-hosting node] 
**
changed: [master]
TASK [openshift_control_plane : Get node logs] ***
skipping: [master]
TASK [openshift_control_plane : debug] 
**
skipping: [master]
TASK [openshift_control_plane : fail] 
*
skipping: [master]
TASK [openshift_control_plane : Wait for control plane pods to appear] 
***

failed: [master] (item=etcd) => {"attempts": 60, "changed": false, "item": 
"etcd", "msg": {"cmd": "/bin/oc get pod master-etcd-master.vnet.de -o json -n 
kube-system", "results": [{}], "returncode": 1, "stderr": "The connection to 
the server master.vnet.de:8443 was refused - did you specify the right host or 
port?\n", "stdout": ""}}  

TASK [openshift_control_plane : Report control plane errors] 
*
fatal: [master]: FAILED! => {"changed": false, "msg": "Control plane pods 
didn't come up"}


I am using Vagrant to set up a local domain (vnet.de), which also includes a 
dnsmasq node to have full control over DNS. The following VMs are running, 
and DNS and SSH work as expected:

Hostname         IP
domain.vnet.de   192.168.60.100
master.vnet.de   192.168.60.150 (DNS also works for openshift.vnet.de, which is
                 configured as openshift_master_cluster_public_hostname); also runs etcd
infra.vnet.de    192.168.60.151 (the openshift_master_default_subdomain
                 wildcard points to this node)
app1.vnet.de     192.168.60.152
app2.vnet.de     192.168.60.153


When connecting to the master node I can see that several Docker containers are 
up and running:

[vagrant@master ~]$ sudo docker ps
CONTAINER IDIMAGECOMMAND
  CREATED STATUS  PORTS   NAMES 


9a0844123909ff5dd2137a4f "/bin/sh -c 
'#!/bi..."   19 minutes ago  Up 19 minutes   
k8s_etcd_master-etcd-master.vnet.de_kube-system_a2c858fccd481c334a9af7413728e203_0

41d803023b72f216d84cdf54 "/bin/bash -c 
'#!/..."   19 minutes ago  Up 19 minutes   
k8s_controllers_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_0
  
044c9d12588cdocker.io/openshift/origin-pod:v3.10.0   "/usr/bin/pod" 
  19 minutes ago  Up 19 minutes   
k8s_POD_master-api-master.vnet.de_kube-system_86017803919d833e39cb3d694c249997_0
  
10a197e394b3docker.io/openshift/origin-pod:v3.10.0   "/usr/bin/pod" 
  19 minutes ago  Up 19 minutes   
k8s_POD_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_0
  
20f4f86bdd07docker.io/openshift/origin-pod:v3.10.0   "/usr/bin/pod" 
  19 minutes ago  Up 19 minutes   
k8s_POD_master-etcd-master.vnet.de_kube-system_a2c858fccd481c334a9af7413728e203_0
 

However, port 8443 is not open on the master node. No wonder the 
ansible installer complains. 

The machines are using a plain CentOS 7.5, and I've run 
openshift-ansible/playbooks/prerequisites.yml first and then 
openshift-ansible/playbooks/deploy_cluster.yml.
I've double-checked the installation documentation and my Vagrant config... all 
looks correct.

Any ideas/advice?
regards
Marc




Re: Origin 3.9 (oc cluster up) doesn't use registry-mirror for internal registry

2018-04-25 Thread marc . schlegel
With some help from the list I've come to the following solution, which 
should work according to the documentation (but doesn't).

First, I've configured my oc cluster up with persistent configuration and 
data

oc cluster up --host-data-dir=c:/Temp/openshift/data --host-config-dir=c:/Temp/openshift/config


After the initial config is written I can edit the master-config.yaml and 
subsequent runs will use the existing config

oc cluster up --host-data-dir=c:/Temp/openshift/data 
--host-config-dir=c:/Temp/openshift/config --use-existing-config=true 

In order to point the preconfigured image streams to our private insecure 
registry, I need to edit the imagestream YAML files in the web console. 
First I need to set an annotation on the imagestream allowing an insecure 
repository, as described here:
https://docs.openshift.org/latest/dev_guide/managing_images.html#insecure-registries
With that in place I want to point the Docker reference to our registry, 
which fails due to a whitelist error as described earlier.
Now comes the master-config.yaml. There you can configure 
allowed-registries-for-import as described here
https://docs.openshift.com/container-platform/3.9/admin_guide/image_policy.html

So I've changed the config like this

imagePolicyConfig:
  allowedRegistriesForImport:
  - domainName: *:*
insecure: true
  - domainName: *:*
insecure: false
  disableScheduledImport: false
  maxImagesBulkImportedPerRepository: 5
  maxScheduledImageImportsPerMinute: 60
  scheduledImageImportMinimumIntervalSeconds: 900

This allows all hosts and ports. After a restart I still get the same 
result: 
Reason: ImageStream "jenkins" is invalid: spec.tags[2].from.name: 
Forbidden: registry "docker.sdvrz.de:5000" not allowed by whitelist: 
"172.30.1.1:5000", "docker.io:443", "*.docker.io:443", "*.redhat.com:443", 
and 5 more ...
Of course I've also tried less dramatic options without wildcards.

I am running out of options. Where can I find this whitelist? :-)
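One more data point for anyone who lands here: a narrower variant of the same setting, with the registry from the error message listed explicitly and the values quoted, along the lines of the image-policy doc above (just a sketch, not a confirmed fix; whether the unquoted *:* wildcard even survives YAML parsing is something I have not verified):

imagePolicyConfig:
  allowedRegistriesForImport:
  - domainName: "docker.sdvrz.de:5000"
    insecure: true
  - domainName: "172.30.1.1:5000"
  - domainName: "docker.io"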

regards
Marc





From:   marc.schle...@sdv-it.de
To:     users@lists.openshift.redhat.com
Date:   23.04.2018 08:56
Subject:        Re: Re: Re: Re: Origin 3.9 (oc cluster up) doesn't use 
registry-mirror for internal registry
Sent by:        users-boun...@lists.openshift.redhat.com



Thanks for the link. 
I think this is a valid solution for development. In the long run we need 
to create custom image streams anyway. 
Still, I cannot save the YAML because our registry is not in the whitelist, 
even when setting the insecure annotation. I double-checked my 
docker-daemon... 

{ 
  "registry-mirrors": [ 
"https://docker.mydomain.com:5000; 
  ], 
  "insecure-registries": [ 
"docker.mydomain.com:5000", 
"172.30.0.0/16" 
  ], 
  "debug": true, 
  "experimental": true 
} 




From:   Ben Parees 
To:     marc.schle...@sdv-it.de 
Cc:     users 
Date:   20.04.2018 15:25 
Subject:        Re: Re: Re: Origin 3.9 (oc cluster up) doesn't use 
registry-mirror for internal registry 





On Fri, Apr 20, 2018 at 2:49 AM,  wrote: 
After setting up the proxy in oc cluster up as well as the daemon 
(including the necessary bypass) the problem remains. 

So I created an admin user to which I gave the cluster-admin role and this 
one can see all image-streams and I can update them in the webconsole. 

And here I can see the root cause which is actually caused by SSL 


Internal error occurred: Get https://registry-1.docker.io/v2/: x509: 
certificate signed by unknown authority. Timestamp: 2018-04-20T06:33:47Z 
Error count: 2 

Of course we have our own CA :-)
Is there a way to import our ca-bundle? I did not see anything in "oc 
cluster up --help" 

You're seeing this error in the imagestreams during image import? 

The easiest thing to do is mark the imagestreams insecure: 
https://docs.openshift.org/latest/dev_guide/managing_images.html#insecure-registries
 


(Since oc cluster up is intended for dev usage, I am going to make the 
assumption this is a reasonable thing for you to do). 

If you don't want to do that, you'd need to add the cert to the origin 
image which oc cluster up starts up to run the master. 

 




From:   Ben Parees 
To:     marc.schle...@sdv-it.de 
Cc:     users 
Date:   19.04.2018 16:10 
Subject:        Re: Re: Origin 3.9 (oc cluster up) doesn't use 
registry-mirror for internal registry 





On Thu, Apr 19, 2018 at 9:14 AM,  wrote: 
Thanks for the quick replies. 

The http-proxy is not enough to get out, since the daemon uses also other 
protocols than http. 

right but it will get the imagestream imported.  After that it's up to 
your daemon configuration as to whether the pull can occur, and it sounded 
like you had already configured your daemon. 

 


Changing the image-streams seems to be a valid approach, unfortunately I 
cannot export them in order to 

Re: Re: Re: Re: Origin 3.9 (oc cluster up) doesn't use registry-mirror for internal registry

2018-04-23 Thread marc . schlegel
Thanks for the link.
I think this is a valid solution for development. In the long run we need 
to create custom image streams anyway.
Still, I cannot save the YAML because our registry is not in the whitelist, 
even when setting the insecure annotation. I double-checked my 
docker-daemon...

{
  "registry-mirrors": [
"https://docker.mydomain.com:5000;
  ],
  "insecure-registries": [
"docker.mydomain.com:5000",
"172.30.0.0/16"
  ],
  "debug": true,
  "experimental": true
}
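One sanity check worth doing after editing that file: daemon settings only take effect after the Docker daemon is restarted, and docker info shows what it actually loaded (a sketch):

# after restarting the Docker daemon:
docker info | grep -i -A 5 -E 'registry mirrors|insecure registries'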




From:   Ben Parees 
To:     marc.schle...@sdv-it.de
Cc:     users 
Date:   20.04.2018 15:25
Subject:        Re: Re: Re: Origin 3.9 (oc cluster up) doesn't use 
registry-mirror for internal registry





On Fri, Apr 20, 2018 at 2:49 AM,  wrote:
After setting up the proxy in oc cluster up as well as the daemon 
(including the necessary bypass) the problem remains. 

So I created an admin user to which I gave the cluster-admin role and this 
one can see all image-streams and I can update them in the webconsole. 

And here I can see the root cause which is actually caused by SSL 


Internal error occurred: Get https://registry-1.docker.io/v2/: x509: 
certificate signed by unknown authority. Timestamp: 2018-04-20T06:33:47Z 
Error count: 2 

Of course we have our own CA :-)
Is there a way to import our ca-bundle? I did not see anything in "oc 
cluster up --help"

You're seeing this error in the imagestreams during image import?

The easiest thing to do is mark the imagestreams insecure: 
https://docs.openshift.org/latest/dev_guide/managing_images.html#insecure-registries

(Since oc cluster up is intended for dev usage, I am going to make the 
assumption this is a reasonable thing for you to do).

If you don't want to do that, you'd need to add the cert to the origin 
image which oc cluster up starts up to run the master.

 




From:   Ben Parees 
To:     marc.schle...@sdv-it.de 
Cc:     users 
Date:   19.04.2018 16:10 
Subject:        Re: Re: Origin 3.9 (oc cluster up) doesn't use 
registry-mirror for internal registry 





On Thu, Apr 19, 2018 at 9:14 AM,  wrote: 
Thanks for the quick replies. 

The http-proxy is not enough to get out, since the daemon uses also other 
protocols than http. 

right but it will get the imagestream imported.  After that it's up to 
your daemon configuration as to whether the pull can occur, and it sounded 
like you had already configured your daemon. 

  


Changing the image-streams seems to be a valid approach, unfortunately I 
cannot export them in order to edit them...because they are not there yet 
According to the documentation I need to export the image-stream by 
@ 
In order to get the id, I can use oc describe...but see 

$ oc describe is jenkins 
Error from server (NotFound): imagestreams.image.openshift.io "jenkins" 
not found 

So I cannot run 

$ oc export isimage jenkins@??? 

I am wondering why the containerized version isnt honoring the settings of 
the docker-daemon running on my machine. Well it does when it is pulling 
the openshift images 
 docker images 
REPOSITORY TAG IMAGE ID   
 CREATED SIZE 
openshift/origin-web-console   v3.9.0  60938911a1f9   
 2 weeks ago 485MB 
openshift/origin-docker-registry   v3.9.0  2663c9df9123   
 2 weeks ago 455MB 
openshift/origin-haproxy-routerv3.9.0  c70d45de5384   
 2 weeks ago 1.27GB 
openshift/origin-deployer  v3.9.0  378ccd170718   
 2 weeks ago 1.25GB 
openshift/origin   v3.9.0  b5f178918ae9   
 2 weeks ago 1.25GB 
openshift/origin-pod   v3.9.0  1b36bf755484   
 2 weeks ago 217MB

but the image streams are not pulled. 
Nonetheless, when I pull the image-stream manually (docker pull 
openshift/jenkins-2-centos7) it works. 
So why is the pull not working from inside Openshift? 

regards 
Marc 






You can update the image streams to change the registry. 

You can also set a proxy for the master, which is the process doing the 
imports and which presumably needs the proxy configured, by passing these 
args to oc cluster up: 

  --http-proxy='': HTTP proxy to use for master and builds
  --https-proxy='': HTTPS proxy to use for master and builds


I believe that should enable your existing imagestreams (not the ones 
pointing to the proxy url) to import. 





best regards 
Marc 





-- 
Ben Parees 

Re: Re: Re: Re: Origin 3.9 (oc cluster up) doesn't use registry-mirror for internal registry

2018-04-20 Thread marc . schlegel
One more thing: when I change the image stream to point to our mirror 
registry I cannot save the YAML.


Reason: ImageStream "jenkins" is invalid: [spec.tags[1].from.name: 
Forbidden: registry "docker.sdvrz.de:5000" not allowed by whitelist: 
"172.30.1.1:5000", "docker.io:443", "*.docker.io:443", "*.redhat.com:443", 
and 5 more ..., spec.tags[2].from.name: Forbidden: registry 
"docker.sdvrz.de:5000" not allowed by whitelist: "172.30.1.1:5000", 
"docker.io:443", "*.docker.io:443", "*.redhat.com:443", and 5 more ...] 

Why is the internal registry using different settings than my docker daemon on 
the host? Our mirror is added as an insecure registry.
There seems to be no option to change this. I've also looked at 
the internal registry's deployment in the "default" project using the 
cluster-admin.




From:   marc.schle...@sdv-it.de
To:     users@lists.openshift.redhat.com
Date:   20.04.2018 08:51
Subject:        Re: Re: Re: Origin 3.9 (oc cluster up) doesn't use 
registry-mirror for internal registry
Sent by:        users-boun...@lists.openshift.redhat.com



After setting up the proxy in oc cluster up as well as the daemon 
(including the necessary bypass) the problem remains. 

So I created an admin user to which I gave the cluster-admin role and this 
one can see all image-streams and I can update them in the webconsole. 

And here I can see the root cause which is actually caused by SSL 


Internal error occurred: Get https://registry-1.docker.io/v2/: x509: 
certificate signed by unknown authority. Timestamp: 2018-04-20T06:33:47Z 
Error count: 2 

Of course we have our own CA :-)
Is there a way to import our ca-bundle? I did not see anything in "oc 
cluster up --help" 



From:   Ben Parees 
To:     marc.schle...@sdv-it.de 
Cc:     users 
Date:   19.04.2018 16:10 
Subject:        Re: Re: Origin 3.9 (oc cluster up) doesn't use 
registry-mirror for internal registry 





On Thu, Apr 19, 2018 at 9:14 AM,  wrote: 
Thanks for the quick replies. 

The http-proxy is not enough to get out, since the daemon uses also other 
protocols than http. 

right but it will get the imagestream imported.  After that it's up to 
your daemon configuration as to whether the pull can occur, and it sounded 
like you had already configured your daemon. 

 


Changing the image-streams seems to be a valid approach, unfortunately I 
cannot export them in order to edit them...because they are not there yet 
According to the documentation I need to export the image-stream by 
@ 
In order to get the id, I can use oc describe...but see 

$ oc describe is jenkins 
Error from server (NotFound): imagestreams.image.openshift.io "jenkins" 
not found 

So I cannot run 

$ oc export isimage jenkins@??? 

I am wondering why the containerized version isnt honoring the settings of 
the docker-daemon running on my machine. Well it does when it is pulling 
the openshift images 
 docker images 
REPOSITORY TAG IMAGE ID CREATED
   SIZE 
openshift/origin-web-console   v3.9.0  60938911a1f9 2 
weeks ago 485MB 
openshift/origin-docker-registry   v3.9.0  2663c9df9123 2 
weeks ago 455MB 
openshift/origin-haproxy-routerv3.9.0  c70d45de5384 2 
weeks ago 1.27GB 
openshift/origin-deployer  v3.9.0  378ccd170718 2 
weeks ago 1.25GB 
openshift/origin   v3.9.0  b5f178918ae9 2 
weeks ago 1.25GB 
openshift/origin-pod   v3.9.0  1b36bf755484 2 
weeks ago 217MB

but the image streams are not pulled. 
Nonetheless, when I pull the image-stream manually (docker pull 
openshift/jenkins-2-centos7) it works. 
So why is the pull not working from inside Openshift? 

regards 
Marc 






You can update the image streams to change the registry. 

You can also set a proxy for the master, which is the process doing the 
imports and which presumably needs the proxy configured, by passing these 
args to oc cluster up: 

  --http-proxy='': HTTP proxy to use for master and builds
  --https-proxy='': HTTPS proxy to use for master and builds


I believe that should enable your existing imagestreams (not the ones 
pointing to the proxy url) to import. 





best regards 
Marc 





Re: Re: Re: Origin 3.9 (oc cluster up) doesn't use registry-mirror for internal registry

2018-04-20 Thread marc . schlegel
After setting up the proxy in oc cluster up as well as the daemon 
(including the necessary bypass) the problem remains.

So I created an admin user to which I gave the cluster-admin role and this 
one can see all image-streams and I can update them in the webconsole.

And here I can see the root cause, which is actually caused by SSL:


Internal error occurred: Get https://registry-1.docker.io/v2/: x509: 
certificate signed by unknown authority. Timestamp: 2018-04-20T06:33:47Z 
Error count: 2 

Of course we have our own CA :-)
Is there a way to import our ca-bundle? I did not see anything in "oc 
cluster up --help"



Von:Ben Parees 
An: marc.schle...@sdv-it.de
Kopie:  users 
Datum:  19.04.2018 16:10
Betreff:Re: Re: Origin 3.9 (oc cluster up) doesnt use 
registry-mirror for internal registry





On Thu, Apr 19, 2018 at 9:14 AM,  wrote:
Thanks for the quick replies. 

The http-proxy is not enough to get out, since the daemon uses also other 
protocols than http.

right but it will get the imagestream imported.  After that it's up to 
your daemon configuration as to whether the pull can occur, and it sounded 
like you had already configured your daemon.

 


Changing the image-streams seems to be a valid approach; unfortunately, I 
cannot export them in order to edit them, because they are not there yet. 
According to the documentation I need to export the image-stream by 
<name>@<id>. 
In order to get the id, I can use oc describe...but see: 

$ oc describe is jenkins 
Error from server (NotFound): imagestreams.image.openshift.io "jenkins" 
not found 

So I cannot run 

$ oc export isimage jenkins@??? 

I am wondering why the containerized version isn't honoring the settings of 
the Docker daemon running on my machine. Well, it does when it is pulling 
the OpenShift images: 
$ docker images
REPOSITORY                         TAG      IMAGE ID       CREATED       SIZE
openshift/origin-web-console       v3.9.0   60938911a1f9   2 weeks ago   485MB
openshift/origin-docker-registry   v3.9.0   2663c9df9123   2 weeks ago   455MB
openshift/origin-haproxy-router    v3.9.0   c70d45de5384   2 weeks ago   1.27GB
openshift/origin-deployer          v3.9.0   378ccd170718   2 weeks ago   1.25GB
openshift/origin                   v3.9.0   b5f178918ae9   2 weeks ago   1.25GB
openshift/origin-pod               v3.9.0   1b36bf755484   2 weeks ago   217MB

but the image-streams are not pulled. 
Nonetheless, when I pull the image manually (docker pull 
openshift/jenkins-2-centos7) it works. 
So why is the pull not working from inside OpenShift? 

regards 
Marc 






You can update the image streams to change the registry. 

You can also set a proxy for the master, which is the process doing the 
imports and which presumably needs the proxy configured, by passing these 
args to oc cluster up: 

  --http-proxy='': HTTP proxy to use for master and builds
  --https-proxy='': HTTPS proxy to use for master and builds


I believe that should enable your existing imagestreams (not the ones 
pointing to the proxy url) to import. 





best regards 
Marc 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users




-- 
Ben Parees | OpenShift



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users




-- 
Ben Parees | OpenShift


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Re: Origin 3.9 (oc cluster up) doesnt use registry-mirror for internal registry

2018-04-19 Thread marc . schlegel
Thanks for the quick replies. 

The http-proxy is not enough to get out, since the daemon uses also other 
protocols than http.

Changing the image-streams seems to be a valid approach; unfortunately, I 
cannot export them in order to edit them, because they are not there yet.
According to the documentation I need to export the image-stream by 
<name>@<id>.
In order to get the id, I can use oc describe...but see:

$ oc describe is jenkins
Error from server (NotFound): imagestreams.image.openshift.io "jenkins" 
not found

So I cannot run 

$ oc export isimage jenkins@???

I am wondering why the containerized version isn't honoring the settings of 
the Docker daemon running on my machine. Well, it does when it is pulling 
the OpenShift images:
$ docker images
REPOSITORY                         TAG      IMAGE ID       CREATED       SIZE
openshift/origin-web-console       v3.9.0   60938911a1f9   2 weeks ago   485MB
openshift/origin-docker-registry   v3.9.0   2663c9df9123   2 weeks ago   455MB
openshift/origin-haproxy-router    v3.9.0   c70d45de5384   2 weeks ago   1.27GB
openshift/origin-deployer          v3.9.0   378ccd170718   2 weeks ago   1.25GB
openshift/origin                   v3.9.0   b5f178918ae9   2 weeks ago   1.25GB
openshift/origin-pod               v3.9.0   1b36bf755484   2 weeks ago   217MB

but the image-streams are not pulled.
Nonetheless, when I pull the image manually (docker pull 
openshift/jenkins-2-centos7) it works. 
So why is the pull not working from inside OpenShift?

regards
Marc






You can update the image streams to change the registry.

You can also set a proxy for the master, which is the process doing the 
imports and which presumably needs the proxy configured, by passing these 
args to oc cluster up:

  --http-proxy='': HTTP proxy to use for master and builds
  --https-proxy='': HTTPS proxy to use for master and builds


I believe that should enable your existing imagestreams (not the ones 
pointing to the proxy url) to import.





best regards 
Marc 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users




-- 
Ben Parees | OpenShift


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Origin 3.9 (oc cluster up) doesnt use registry-mirror for internal registry

2018-04-19 Thread marc . schlegel
Hello everyone

I already asked this question on the OpenShift Google Group but was 
redirected to this list in the hope of finding someone who knows the details 
of the current "oc cluster up" command.


I am facing some trouble using the "oc cluster up" command within our 
corporate environment. The main pain point is that no external registry is 
reachable from inside our network. The only way to pull images is via a 
proxy registry (which mirrors Docker Hub and the Red Hat registry).

So I configured my local Docker daemon to use this registry by specifying 
"insecure-registries" and "registry-mirrors". The mirror in particular is 
important because it causes Docker to look at the specified registry 
first.
By configuring Docker this way, the command "oc cluster up" can pull the 
necessary images.
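
For reference, a daemon configuration along the lines described above might 
look like the following sketch; docker-proxy.de:5000 is taken from further 
down in this message, and 172.30.0.0/16 is the insecure-registry range that 
oc cluster up expects:

$ cat /etc/docker/daemon.json
{
  "insecure-registries": ["172.30.0.0/16", "docker-proxy.de:5000"],
  "registry-mirrors": ["https://docker-proxy.de:5000"]
}
$ sudo systemctl restart docker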

Unfortunately, when running OpenShift and adding a deployment based on a 
template/imagestream, no deployment happens. The message is: "A new deployment 
will start automatically when an image is pushed to openshift/jenkins:2." 

When checking the imagestreams I can see
 

$ oc get is -n openshift
NAME             DOCKER REPO                                TAGS                        UPDATED
dotnet           172.30.1.1:5000/openshift/dotnet           2.0
dotnet-runtime   172.30.1.1:5000/openshift/dotnet-runtime   2.0
httpd            172.30.1.1:5000/openshift/httpd            2.4
jenkins          172.30.1.1:5000/openshift/jenkins          1,2
mariadb          172.30.1.1:5000/openshift/mariadb          10.1,10.2
mongodb          172.30.1.1:5000/openshift/mongodb          2.4,2.6,3.2 + 1 more...
mysql            172.30.1.1:5000/openshift/mysql            5.7,5.5,5.6
nginx            172.30.1.1:5000/openshift/nginx            1.10,1.12,1.8
nodejs           172.30.1.1:5000/openshift/nodejs           0.10,4,6 + 1 more...
perl             172.30.1.1:5000/openshift/perl             5.16,5.20,5.24
php              172.30.1.1:5000/openshift/php              5.5,5.6,7.0 + 1 more...
postgresql       172.30.1.1:5000/openshift/postgresql       9.4,9.5,9.6 + 1 more...
python           172.30.1.1:5000/openshift/python           3.4,3.5,3.6 + 2 more...
redis            172.30.1.1:5000/openshift/redis            3.2
ruby             172.30.1.1:5000/openshift/ruby             2.0,2.2,2.3 + 1 more...
wildfly          172.30.1.1:5000/openshift/wildfly          10.0,10.1,8.1 + 1 more...


It seems the images are not available in the internal Docker registry 
(inside Kubernetes), and they are not pulled on the host either.



$ docker images
REPOSITORY                         TAG      IMAGE ID       CREATED       SIZE
openshift/origin-web-console       v3.9.0   60938911a1f9   11 days ago   485MB
openshift/origin-docker-registry   v3.9.0   2663c9df9123   11 days ago   455MB
openshift/origin-haproxy-router    v3.9.0   c70d45de5384   11 days ago   1.27GB
openshift/origin-deployer          v3.9.0   378ccd170718   11 days ago   1.25GB
openshift/origin                   v3.9.0   b5f178918ae9   11 days ago   1.25GB
openshift/origin-pod               v3.9.0   1b36bf755484   11 days ago   217MB

I would expect that the containerized Openshift variant uses the 
configuration provided by the Docker installation on the host-system.
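
To see why the tags stay empty, it may help to look at the import status that 
OpenShift records on the image stream and to retry the import by hand; a 
sketch, assuming the default "openshift" namespace:

# Show the tags and any import errors recorded on the image stream
$ oc describe is jenkins -n openshift

# Retry the import of all tags defined on that image stream
$ oc import-image jenkins --all -n openshift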


I've also tried to import an imagestream manually, but it failed because 
our proxy registry is not whitelisted:


$ oc import-image my-jenkins --from=docker-proxy.de:5000/openshift/jenkins
-2-centos7 --confirm
The ImageStream "my-jenkins" is invalid: spec.tags[latest].from.name: 
Forbidden: registry "docker-proxy.de:5000" not allowed by whitelist: "
172.30.1.1:5000", "docker.io:443", "*.docker.io:443", "*.redhat.com:443", 
and 5 more ..



Is there any way to redirect the pull of the imagestreams to our corporate 
proxy? Or can I modify the imagestreams somehow to hardcode the registry?
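
Two directions that might be worth a try, sketched here and not verified 
against this exact setup: extend the import whitelist in the master 
configuration (imagePolicyConfig.allowedRegistriesForImport in 
master-config.yaml, to the best of my knowledge) and then either re-run the 
import or re-point an existing image-stream tag at the mirror:

# (a) after whitelisting docker-proxy.de:5000 and restarting the master,
#     the import attempted above should go through:
$ oc import-image my-jenkins \
    --from=docker-proxy.de:5000/openshift/jenkins-2-centos7 --confirm --insecure

# (b) alternatively, re-point the existing image-stream tag at the mirror:
$ oc tag --source=docker docker-proxy.de:5000/openshift/jenkins-2-centos7:latest \
    openshift/jenkins:2 -n openshift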


best regards 
Marc

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Re: Re: Check if template service broker is running

2018-03-14 Thread marc . schlegel
I am trying another fresh install just now, and my Ansible run has been 
hanging for 15 minutes at
TASK [openshift_service_catalog : wait for api server to be ready]

It was the same the last few times I tried.

I made a minor adjustment to the Ansible inventory by adding the following 
options:
openshift_enable_service_catalog=true
openshift_template_service_broker_namespaces=['openshift']

Is it possible that my DNS is not working? Is the API service listening on 
a special DNS name which I need to add to my domain? 
These are some other settings:
openshift_master_cluster_public_hostname="openshift.vnet.de"
openshift_master_default_subdomain=apps.vnet.de

The DNS is set up so that everything ending in vnet.de points to node-1 
(the infra node), while openshift.vnet.de points to my master. The hostnames 
are also mapped explicitly.
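
If I read the installer right, that wait task needs the service catalog API 
server to be reachable through the cluster service DNS, so a few checks from 
the master may narrow it down; apiserver.kube-service-catalog.svc is the 
default service name, adjust if yours differs:

# Is the API server pod there, and does its service exist?
$ oc get pods,svc -n kube-service-catalog

# Does the service name resolve from the master, and does the health endpoint answer?
$ dig +short apiserver.kube-service-catalog.svc.cluster.local
$ curl -k https://apiserver.kube-service-catalog.svc/healthz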





Von:marc.schle...@sdv-it.de
An: users@lists.openshift.redhat.com
Datum:  14.03.2018 09:13
Betreff:Re: Re: Check if template service broker is running
Gesendet von:   users-boun...@lists.openshift.redhat.com



That's what I get from my console: 


Logged into "https://openshift.vnet.de:8443" as "system:admin" using existing
credentials.

You have access to the following projects and can switch between them with
'oc project <projectname>':

 * default
   kube-public
   kube-system
   logging
   management-infra
   openshift
   openshift-infra
   openshift-node

Using project "default".
[root@master ~]# oc describe daemonset -n openshift-template-service-broker
[root@master ~]# oc get events -n openshift-template-service-broker
No resources found.
[root@master ~]# 
My cluster setup looks like this: 
- master 
- node-1 (label "region": "infra" for infrastructure, used in the Ansible 
inventory for the docker-registry and the router) 
- node-2 (label "region": "primary" for deployments)

The only Ansible adjustments I made, apart from the necessary ones, were these: 
openshift_hostet_router_selector='region=infra' 
openshift_hostet_registry_selector='region=infra' 

regards 
Marc 



Von:Sam Padgett  
An:marc.schle...@sdv-it.de 
Kopie:users  
Datum:09.03.2018 18:24 
Betreff:Re: Re: Check if template service broker is running 



Do you see any obvious problems looking at...? 

$ oc describe daemonset -n openshift-template-service-broker 
$ oc get events -n openshift-template-service-broker 

(I'm assuming you haven't set `template_service_broker_install` to false 
in your inventory.) 

On Fri, Mar 9, 2018 at 2:30 AM,  wrote: 
The template-service-broker is indeed not running. 

[root@master ~]# oc get pods -n kube-service-catalog
NAME   READY STATUSRESTARTS   AGE
apiserver-w8p771/1   Running   2  6d
controller-manager-cmmcx   1/1   Running   5  6d

[root@master ~]# oc get pods -n openshift-template-service-broker
No resources found. 

The Ansible install finished with a warning, which is probably the reason. 
Unfortunately a reinstall always ends with the same result (no 
template-service-broker). 

Can I manually install one, and how? 


Von:Sam Padgett  
An:marc.schle...@sdv-it.de 
Kopie:users  
Datum:08.03.2018 15:11 
Betreff:Re: Check if template service broker is running 



Starting in 3.7, service catalog and template service broker is enabled by 
default when installing using openshift-ansible. You can check if things 
are running with: 

$ oc get pods -n kube-service-catalog 
NAME READY STATUSRESTARTS   AGE 
apiserver-858dcddcdf-f58mv   2/2   Running   0  15m 
controller-manager-645f5dbbd-jz8ll   1/1   Running   0  15m 

$ oc get pods -n openshift-template-service-broker 
NAME  READY STATUSRESTARTS   AGE 
apiserver-4cq6q   1/1   Running   0  15m 

If template service broker is installed, but not running, that would 
explain why items are missing. 

On Thu, Mar 8, 2018 at 3:05 AM,  wrote: 
Hello everyone 

I am having trouble with the templates defined in the default 
image-streams. 

Checking which imagestreams and templates are installed via oc get lists 
everything I am expecting. 
Unfortunately the web console is only showing 8 items (Jenkins, for example, 
is missing). 

I've got some help from the OpenShift Google Group, which says that the 
template service broker might not be running [1]. 

How can I check if this service is running? 



[root@master ~]# oc get is -n openshift
NAME             DOCKER REPO                                                  TAGS         UPDATED
dotnet           docker-registry.default.svc:5000/openshift/dotnet            latest,2.0   About an hour ago
dotnet-runtime   docker-registry.default.svc:5000/openshift/dotnet-runtime    latest,2.0   About an hour ago
httpd

Re: Re: Check if template service broker is running

2018-03-14 Thread marc . schlegel
That's what I get from my console: 


Logged into "https://openshift.vnet.de:8443" as "system:admin" using existing
credentials.

You have access to the following projects and can switch between them with
'oc project <projectname>':

  * default
kube-public
kube-system
logging
management-infra
openshift
openshift-infra
openshift-node

Using project "default".
[root@master ~]# oc describe daemonset -n openshift-template-service-broker
[root@master ~]# oc get events -n openshift-template-service-broker
No resources found.
[root@master ~]# 

My cluster setup looks like this:
- master
- node-1 (label "region": "infra" for infrastructure, used in the Ansible 
inventory for the docker-registry and the router)
- node-2 (label "region": "primary" for deployments)

The only Ansible adjustments I made, apart from the necessary ones, were these:
openshift_hostet_router_selector='region=infra'
openshift_hostet_registry_selector='region=infra'
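
For completeness, those selectors only take effect if the nodes carry matching 
labels in the inventory; a sketch of the corresponding [nodes] entries, with 
the FQDNs guessed from the hostnames in this thread:

$ cat /path/to/inventory    # path is a placeholder
...
[nodes]
master.vnet.de
node-1.vnet.de openshift_node_labels="{'region': 'infra'}"
node-2.vnet.de openshift_node_labels="{'region': 'primary'}"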

regards
Marc




Von:Sam Padgett 
An: marc.schle...@sdv-it.de
Kopie:  users 
Datum:  09.03.2018 18:24
Betreff:Re: Re: Check if template service broker is running



Do you see any obvious problems looking at...?

$ oc describe daemonset -n openshift-template-service-broker
$ oc get events -n openshift-template-service-broker

(I'm assuming you haven't set `template_service_broker_install` to false 
in your inventory.)

On Fri, Mar 9, 2018 at 2:30 AM,  wrote:
The template-service-broker is indeed not running. 

[root@master ~]# oc get pods -n kube-service-catalog
NAME   READY STATUSRESTARTS   AGE
apiserver-w8p771/1   Running   2  6d
controller-manager-cmmcx   1/1   Running   5  6d

[root@master ~]# oc get pods -n openshift-template-service-broker
No resources found. 

The Ansible install finished with a warning, which is probably the reason. 
Unfortunately a reinstall always ends with the same result (no 
template-service-broker). 

Can I manually install one, and how? 


Von:Sam Padgett  
An:marc.schle...@sdv-it.de 
Kopie:users  
Datum:08.03.2018 15:11 
Betreff:Re: Check if template service broker is running 



Starting in 3.7, service catalog and template service broker is enabled by 
default when installing using openshift-ansible. You can check if things 
are running with: 

$ oc get pods -n kube-service-catalog 
NAME READY STATUSRESTARTS   AGE 
apiserver-858dcddcdf-f58mv   2/2   Running   0  15m 
controller-manager-645f5dbbd-jz8ll   1/1   Running   0  15m 

$ oc get pods -n openshift-template-service-broker 
NAME  READY STATUSRESTARTS   AGE 
apiserver-4cq6q   1/1   Running   0  15m 

If template service broker is installed, but not running, that would 
explain why items are missing. 

On Thu, Mar 8, 2018 at 3:05 AM,  wrote: 
Hello everyone 

I am having trouble with the templates defined in the default 
image-streams. 

Checking which imagestreams and templates are installed via oc get lists 
everything I am expecting. 
Unfortunately the web console is only showing 8 items (Jenkins, for example, 
is missing). 

I've got some help from the OpenShift Google Group, which says that the 
template service broker might not be running [1]. 

How can I check if this service is running? 



[root@master ~]# oc get is -n openshift
NAME             DOCKER REPO                                                  TAGS                         UPDATED
dotnet           docker-registry.default.svc:5000/openshift/dotnet            latest,2.0                   About an hour ago
dotnet-runtime   docker-registry.default.svc:5000/openshift/dotnet-runtime    latest,2.0                   About an hour ago
httpd            docker-registry.default.svc:5000/openshift/httpd             latest,2.4                   About an hour ago
jenkins          docker-registry.default.svc:5000/openshift/jenkins           1,2,latest                   About an hour ago
mariadb          docker-registry.default.svc:5000/openshift/mariadb           10.1,latest                  About an hour ago
mongodb          docker-registry.default.svc:5000/openshift/mongodb           3.2,2.6,2.4 + 1 more...      About an hour ago
mysql            docker-registry.default.svc:5000/openshift/mysql             5.6,5.5,latest + 1 more...   About an hour ago
nodejs           docker-registry.default.svc:5000/openshift/nodejs            latest,0.10,4 + 1 more...    About an hour ago
perl             docker-registry.default.svc:5000/openshift/perl              5.24,5.20,5.16 + 1 more...   About an hour ago
php              docker-registry.default.svc:5000/openshift/php               latest,7.0,5.6 + 1 more...   About an hour ago
postgresql

Antwort: Re: Check if template service broker is running

2018-03-08 Thread marc . schlegel
The template-service-broker is indeed not running.

[root@master ~]# oc get pods -n kube-service-catalog
NAME   READY STATUSRESTARTS   AGE
apiserver-w8p771/1   Running   2  6d
controller-manager-cmmcx   1/1   Running   5  6d

[root@master ~]# oc get pods -n openshift-template-service-broker
No resources found.

The Ansible install finished with a warning, which is probably the reason. 
Unfortunately a reinstall always ends with the same result (no 
template-service-broker).

Can I manually install one, and how?
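
One way to try installing it by hand is to re-run just the service catalog 
part of openshift-ansible; a sketch, where the playbook path is an assumption 
that depends on the openshift-ansible release (check your checkout) and 
/path/to/inventory is a placeholder:

# RPM installs put openshift-ansible here; with a git clone, cd into the clone instead
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook -i /path/to/inventory playbooks/byo/openshift-cluster/service-catalog.yml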

Kind regards

Marc Schlegel
Software Developer
DS - AAA - AJET
_

SDV-IT, Sparda-Datenverarbeitung eG
Freiligrathstraße 32
90482 Nürnberg

Phone: (0911) 9291 - 2722
marc.schle...@sdv-it.de
www.sdv-it.de
_

Registered office of the cooperative: Nürnberg
Nürnberg Local Court (Amtsgericht Nürnberg), GnR 271
Management Board: Burkhard Kintscher (Chairman), Dr. Thomas Reimer
Chairman of the Supervisory Board: Manfred Stevermann


This e-mail may contain confidential and/or privileged information. If you 
are not the intended recipient or have received this e-mail in error, 
please notify the sender immediately and delete this e-mail. Any 
unauthorized copying, disclosure or distribution of the material in this 
e-mail is strictly forbidden.



Von:Sam Padgett <spadg...@redhat.com>
An: marc.schle...@sdv-it.de
Kopie:  users <users@lists.openshift.redhat.com>
Datum:  08.03.2018 15:11
Betreff:Re: Check if template service broker is running



Starting in 3.7, service catalog and template service broker is enabled by 
default when installing using openshift-ansible. You can check if things 
are running with:

$ oc get pods -n kube-service-catalog
NAME READY STATUSRESTARTS   AGE
apiserver-858dcddcdf-f58mv   2/2   Running   0  15m
controller-manager-645f5dbbd-jz8ll   1/1   Running   0  15m

$ oc get pods -n openshift-template-service-broker
NAME  READY STATUSRESTARTS   AGE
apiserver-4cq6q   1/1   Running   0  15m

If template service broker is installed, but not running, that would 
explain why items are missing.

On Thu, Mar 8, 2018 at 3:05 AM, <marc.schle...@sdv-it.de> wrote:
Hello everyone 

I am having trouble with the templates defined in the default 
image-streams. 

Checking which imagestreams and templates are installed via oc get lists 
everything I am expecting. 
Unfortunately the web console is only showing 8 items (Jenkins, for example, 
is missing). 

I've got some help from the OpenShift Google Group, which says that the 
template service broker might not be running [1]. 

How can I check if this service is running? 



[root@master ~]# oc get is -n openshift
NAME             DOCKER REPO                                                  TAGS                         UPDATED
dotnet           docker-registry.default.svc:5000/openshift/dotnet            latest,2.0                   About an hour ago
dotnet-runtime   docker-registry.default.svc:5000/openshift/dotnet-runtime    latest,2.0                   About an hour ago
httpd            docker-registry.default.svc:5000/openshift/httpd             latest,2.4                   About an hour ago
jenkins          docker-registry.default.svc:5000/openshift/jenkins           1,2,latest                   About an hour ago
mariadb          docker-registry.default.svc:5000/openshift/mariadb           10.1,latest                  About an hour ago
mongodb          docker-registry.default.svc:5000/openshift/mongodb           3.2,2.6,2.4 + 1 more...      About an hour ago
mysql            docker-registry.default.svc:5000/openshift/mysql             5.6,5.5,latest + 1 more...   About an hour ago
nodejs           docker-registry.default.svc:5000/openshift/nodejs            latest,0.10,4 + 1 more...    About an hour ago
perl             docker-registry.default.svc:5000/openshift/perl              5.24,5.20,5.16 + 1 more...   About an hour ago
php              docker-registry.default.svc:5000/openshift/php               latest,7.0,5.6 + 1 more...   About an hour ago
postgresql       docker-registry.default.svc:5000/openshift/postgresql        9.2,latest,9.5 + 1 more...   About an hour ago
python           docker-registry.default.svc:5000/openshift/python            latest,3.5,3.4 + 2 more...   About an hour ago
redis            docker-registry.default.svc:5000/openshift/redis             latest,3.2                   About an hour ago
ruby             docker-registry.default.svc:5

Check if template service broker is running

2018-03-08 Thread marc . schlegel
Hello everyone

I am having trouble with the templates defined in the default 
image-streams. 

Checking which imagestreams and templates are installed via oc get lists 
everything I am expecting.
Unfortunately the web console is only showing 8 items (Jenkins, for example, 
is missing).

I've got some help from the OpenShift Google Group, which says that the 
template service broker might not be running [1].

How can I check if this service is running?



[root@master ~]# oc get is -n openshift
NAME             DOCKER REPO                                                  TAGS                         UPDATED
dotnet           docker-registry.default.svc:5000/openshift/dotnet            latest,2.0                   About an hour ago
dotnet-runtime   docker-registry.default.svc:5000/openshift/dotnet-runtime    latest,2.0                   About an hour ago
httpd            docker-registry.default.svc:5000/openshift/httpd             latest,2.4                   About an hour ago
jenkins          docker-registry.default.svc:5000/openshift/jenkins           1,2,latest                   About an hour ago
mariadb          docker-registry.default.svc:5000/openshift/mariadb           10.1,latest                  About an hour ago
mongodb          docker-registry.default.svc:5000/openshift/mongodb           3.2,2.6,2.4 + 1 more...      About an hour ago
mysql            docker-registry.default.svc:5000/openshift/mysql             5.6,5.5,latest + 1 more...   About an hour ago
nodejs           docker-registry.default.svc:5000/openshift/nodejs            latest,0.10,4 + 1 more...    About an hour ago
perl             docker-registry.default.svc:5000/openshift/perl              5.24,5.20,5.16 + 1 more...   About an hour ago
php              docker-registry.default.svc:5000/openshift/php               latest,7.0,5.6 + 1 more...   About an hour ago
postgresql       docker-registry.default.svc:5000/openshift/postgresql        9.2,latest,9.5 + 1 more...   About an hour ago
python           docker-registry.default.svc:5000/openshift/python            latest,3.5,3.4 + 2 more...   About an hour ago
redis            docker-registry.default.svc:5000/openshift/redis             latest,3.2                   About an hour ago
ruby             docker-registry.default.svc:5000/openshift/ruby              latest,2.4,2.3 + 2 more...   About an hour ago
wildfly          docker-registry.default.svc:5000/openshift/wildfly           10.1,10.0,9.0 + 2 more...    About an hour ago



[root@master ~]# oc get templates -n openshift

NAME                          DESCRIPTION                                                                          PARAMETERS        OBJECTS
3scale-gateway                3scale API Gateway                                                                   15 (6 blank)      2
amp-apicast-wildcard-router                                                                                        3 (1 blank)       4
amp-pvc                                                                                                            0 (all set)       4
cakephp-mysql-example         An example CakePHP application with a MySQL database. For more information ab...    19 (4 blank)      8
cakephp-mysql-persistent      An example CakePHP application with a MySQL database. For more information ab...    20 (4 blank)      9
dancer-mysql-example          An example Dancer application with a MySQL database. For more information abo...    16 (5 blank)      8
dancer-mysql-persistent       An example Dancer application with a MySQL database. For more information abo...    17 (5 blank)      9
django-psql-example           An example Django application with a PostgreSQL database. For more informatio...    17 (5 blank)      8
django-psql-persistent        An example Django application with a PostgreSQL database. For more informatio...    18 (5 blank)      9
dotnet-example                An example .NET Core application.                                                    16 (5 blank)      5
dotnet-pgsql-persistent       An example .NET Core application with a PostgreSQL database. For more informa...    23 (6 blank)      9
dotnet-runtime-example        An example .NET Core Runtime example application.                                    17 (5 blank)      7
httpd-example                 An example Apache HTTP Server (httpd) application that serves static content        9 (3 blank)       5
jenkins-ephemeral             Jenkins service, without persistent storage                                          6 (all set)       6
jenkins-persistent            Jenkins service, with persistent storage                                             7 (all set)       7
mariadb-ephemeral             MariaDB database service, without persistent storage. For more information ab...    7 (3 generated)   3
mariadb-persistent            MariaDB database service, with persistent storage. For more information about...    8 (3 generated)   4
mongodb-ephemeral             MongoDB