Re: audit users of imagestream to enable deprecation

2017-04-17 Thread Andy Goldstein
I'm not aware of a tool, but the code that we use to identify images for
pruning could be a good starting point.

https://github.com/openshift/origin/blob/master/pkg/image/prune/prune.go

Andy
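Not shown in the thread, but the trawl Dale describes can be sketched over exported JSON. With a live cluster something like `oc get bc,dc --all-namespaces -o json` would produce the input; the file, the stream name ("ruby-ex"), and the flattened object shape below are all hypothetical stand-ins:

```shell
# Stand-in for exported build/deploy configs; real objects nest the
# "from" reference deeper (e.g. spec.strategy.sourceStrategy.from).
cat > /tmp/configs.json <<'EOF'
{"items": [
  {"kind": "BuildConfig", "metadata": {"name": "app-build"},
   "from": {"kind": "ImageStreamTag", "name": "ruby-ex:latest"}},
  {"kind": "DeploymentConfig", "metadata": {"name": "app-deploy"},
   "from": {"kind": "ImageStreamTag", "name": "other:latest"}}
]}
EOF
# Any object whose "from" references the stream is a consumer of it;
# -B1 also prints the preceding line, which names the owning object.
grep -B1 '"name": "ruby-ex:latest"' /tmp/configs.json
```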

On Mon, Apr 10, 2017 at 7:09 PM, Dale Bewley  wrote:

> I have created some imagestreams and shared images in my cluster which I
> would like to deprecate now that they are provided upstream.
>
> Does anyone have any tips or tools for uncovering consumers of a
> particular imagestream or image?
>
> I can write something to troll through buildconfigs and deploymentconfigs,
> but I wonder if there is a pre-existing tool I'm not aware of.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


Re: OpenShift 1.1.6 - need to clean-up old docker registry

2017-03-16 Thread Andy Goldstein
Prune should work in 1.1 afaik. What errors were you seeing, if any?
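As background for anyone else hitting this: `oadm prune images` can only remove blob data if the registry itself allows deletes. The environment variable David mentions below maps onto the registry's config file; a sketch of the equivalent config.yml fragment, assuming the stock docker/distribution 2.x configuration layout:

```yaml
# config.yml fragment (docker/distribution 2.x); equivalent to setting
# REGISTRY_STORAGE_DELETE_ENABLED=true in the registry pod's environment.
storage:
  delete:
    enabled: true
```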

On Thu, Mar 16, 2017 at 3:24 PM, Clayton Coleman 
wrote:

> I'm not aware of anything better today.  Supported is a good question - I
> think you should backup before you do that, and stay alert to signs of
> removal.  I don't expect major problems (since the APIs are deliberately
> stable), but we haven't tested it.
>
> Ultimately 1.1 is very old - it's really hard for us to say whether
> something across three versions *will* work, other than saying it *should*
> work.  So the best I can offer is your mileage may vary :)
>
> On Wed, Mar 15, 2017 at 8:30 AM, David Gabriel <
> d.gabr...@dest-unreachable.net> wrote:
>
>> Hi,
>>
>> I'm confronted with the task of cleaning up an old docker registry on an
>> OpenShift 1.1.6 cluster which we can't upgrade in the near future
>> because a production install of one of our clients runs on it and
>> the upgrade to a new OpenShift version is - while planned - still some
>> months away. Right now we are running origin-docker-registry:1.1.6, which
>> contains docker-registry 2.1.0. Although OpenShift sports the 'oadm
>> prune' command, this doesn't seem to work (correctly?) in this dated
>> OpenShift version. Even though we set the required environment variable to
>> enable deletes (REGISTRY_STORAGE_DELETE_ENABLED=true), 'oadm prune
>> image...' never seems to clean up any data.
>>
>> To be able to test other options we set up a test environment and tried
>> the following methods to see if we could achieve a registry cleanup without
>> having to update the whole cluster:
>>
>> 1) use the docker.io/registry:2 registry:
>>
>> We pulled down the original docker registry (version 2.4), mounted a copy
>> of the production registry data and tried to 'registry garbage-collect',
>> which resulted in all data being deleted. Although we are aware that this
>> probably isn't a supported method, I would still be curious why it deleted
>> all the data - is the structure of the OpenShift registry different?
>>
>>
>> 2) use a recent openshift registry:
>>
>> We updated only the origin-docker-registry pod to 1.4.1 (registry version
>> 2.4.1 iirc) and ran 'oadm prune images...' on our test dataset. Although
>> this actually deleted the unreferenced data, is this a supported way to
>> clean up a registry? As of right now we have no method of knowing if the
>> registry is consistent after the cleanup. We of course would like to only
>> switch to the new registry temporarily to clean up stale data, then start
>> the old version again to keep the setup as-is for right now.
>>
>> 3) other/better method I'm not aware of?
>>
>> br and thanks in advance for any insights on how to move forward,
>>
>> d.
>>


Re: Log message error on all nodes: encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/mapper/cah-docker--pool_tmeta: Error running command `thin_ls --no-

2017-02-27 Thread Andy Goldstein
Issue: https://github.com/openshift/origin/issues/10940
Fix: https://github.com/openshift/origin/pull/12753

We have since discovered that thin_ls is too expensive to invoke, leading
to...

Actual fix: https://github.com/openshift/origin/pull/12822
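One detail worth calling out from the log below: exit status 127 is the shell's "command not found" code, so the error means the thin_ls binary simply isn't present on the node (on CentOS it ships in the device-mapper-persistent-data package). A quick illustration using a deliberately nonexistent command name:

```shell
# A command the shell cannot find always yields exit status 127,
# the same status reported for the thin_ls invocation in the node log.
rc=0
sh -c 'definitely_not_thin_ls --no-headers' 2>/dev/null || rc=$?
echo "exit status: $rc"   # prints: exit status: 127
```

As Andy notes, the eventual fix was to stop invoking thin_ls altogether, so installing the package only silences the error on affected versions.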

On Mon, Feb 27, 2017 at 12:10 PM, Stéphane Klein <
cont...@stephane-klein.info> wrote:

> Hi,
>
> I have many many lines with this error message in nodes logs:
>
> Feb 27 17:51:13 atomic-test-node-1.priv.tech-angels.net
> origin-node[24165]: E0227 17:51:13.183150   24451 thin_pool_watcher.go:72]
> encountered error refreshing thin pool watcher: error performing thin_ls on
> metadata device /dev/mapper/cah-docker--pool_tmeta: Error running command
> `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES
> /dev/mapper/cah-docker--pool_tmeta`: exit status 127
>
> Do you know what this error is? How can I fix it?
>
> My nodes use Centos Atomic OS:
>
> # atomic host status
> State: idle
> Deployments:
> ● centos-atomic-host:centos-atomic-host/7/x86_64/standard
>Version: 7.20170209 (2017-02-10 00:54:47)
> Commit: d433342b09673c9c4d75ff6eef50a4
> 47e73a7541491e5197e1dde14147b164b8
> OSName: centos-atomic-host
>   GPGSignature: 1 signature
> Signature made Fri 10 Feb 2017 02:06:18 AM CET using RSA
> key ID F17E745691BA8335
> Good signature from "CentOS Atomic SIG <
> secur...@centos.org>"
>
> # docker version
> Client:
>  Version: 1.12.5
>  API version: 1.24
>  Package version: docker-common-1.12.5-14.el7.centos.x86_64
>  Go version:  go1.7.4
>  Git commit:  047e51b/1.12.5
>  Built:   Mon Jan 23 15:35:13 2017
>  OS/Arch: linux/amd64
>
> Server:
>  Version: 1.12.5
>  API version: 1.24
>  Package version: docker-common-1.12.5-14.el7.centos.x86_64
>  Go version:  go1.7.4
>  Git commit:  047e51b/1.12.5
>  Built:   Mon Jan 23 15:35:13 2017
>  OS/Arch: linux/amd64
>
>
> OpenShift Master:  v1.4.1+3f9807a
> Kubernetes Master: v1.4.0+776c994
>
> Best regards,
> Stéphane
>


Re: cannot see Logs or use Terminal in latest

2017-02-22 Thread Andy Goldstein
Ok, so the issue you're hitting is that the node portion of OpenShift is
calling itself localhost.localdomain, so when the master/api part of
OpenShift tries to talk to the node part, it uses the node's identifier,
which ends up being localhost. Try setting --hostname to 192.168.1.5 and
see if that works any better.

Andy
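For reference, the same identity can also be pinned in the node's config file instead of the --hostname flag; a fragment under the default layout John describes (field name per origin's node-config.yaml; the IP is his LAN address, and the directory name is a placeholder):

```yaml
# openshift.local.config/node-<name>/node-config.yaml (fragment):
# the master contacts the node at this name, so it must not resolve
# to localhost.localdomain.
nodeName: 192.168.1.15
```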

On Wed, Feb 22, 2017 at 4:15 PM, John Mazzitelli  wrote:

> - Original Message -
> > I normally do this:
> >
> > $ openshift start --write-config openshift.local.config
> >
> > $ sudo openshift start --master-config
> > openshift.local.config/master/master-config.yaml --node-config
> > openshift.local.config//node-config.yaml.
> >
> > You can edit the config files as needed (although I rarely do). If you
> are
> > just doing `openshift start --listen=`, it looks like the --hostname flag
> > defaults to localhost.localdomain for naming the node. You might try
> > changing that?
> >
> > But maybe before you do, what do you get for:
> >
> > sudo netstat -anp | grep 10250 | grep LISTEN
>
>
> I get this:
>
> $ sudo netstat -anp | grep 10250 | grep LISTEN
> tcp  0  0  192.168.1.15:10250  0.0.0.0:*  LISTEN  17506/openshift
>
> "192.168.1.15" is my host IP and what I pass in to the --listen option of
> "openshift"
>
>
> >
> > Andy
> >
> > On Wed, Feb 22, 2017 at 4:00 PM, John Mazzitelli 
> wrote:
> >
> > > Well, that's a good question. When I start OpenShift using the
> openshift
> > > executable, I pass in --listen so I ensure it listens to my LAN IP -
> so I
> > > do this:
> > >
> > > openshift start --listen=https://192.168.1.15:8443
> > >
> > > Is that bad? :)
> > >
> > > If you pass in a --listen option, is there something else I have to
> > > specify?
> > >
> > > - Original Message -
> > > > 127.0.0.1:10250 is the OpenShift node's port. Is your OpenShift
> process
> > > > listening on that port on the loopback interface?
> > > >
> > > > On Wed, Feb 22, 2017 at 2:17 PM, John Mazzitelli 
> > > wrote:
> > > >
> > > > > The only thing that seems related (when I click on a Logs or
> Terminal
> > > tab
> > > > > in the UI -this is the only error that is logged):
> > > > >
> > > > > E0222 14:15:38.835307   31426 errors.go:63] apiserver received an
> error
> > > > > that is not an unversioned.Status: Get
> https://localhost.localdomain:
> > > > > 10250/containerLogs/default/hawkular-openshift-agent-
> > > > > w7lsc/hawkular-openshift-agent?follow=true=
> > > > > 10485760=5000: dial tcp 127.0.0.1:10250: getsockopt:
> > > connection
> > > > > refused
> > > > >
> > > > >
> > > > > - Original Message -
> > > > > > With a 500 I would expect to see something in the logs for the
> > > master.
> > > > > > Anything there?
> > > > > >
> > > > > > On Wed, Feb 22, 2017 at 2:02 PM, John Mazzitelli <
> m...@redhat.com>
> > > > > wrote:
> > > > > >
> > > > > > > Same thing for both Logs and Terminal - getting an error code
> of 500
> > > > > when
> > > > > > > the websocket tries to connect.
> > > > > > >
> > > > > > > I have no idea why - this used to always work - until I started
> > > using
> > > > > the
> > > > > > > latest code.
> > > > > > >
> > > > > > > - Original Message -
> > > > > > > > Good call:
> > > > > > > >
> > > > > > > > (unknown) WebSocket connection to
> > > > > > > > 'wss://192.168.1.15:8443/api/v1/namespaces/openshift-infra/
> > > > > > > pods/heapster-dln…xterm+%2Fbin%2Fsh_token=CpQs_
> > > > > > > rRQetCoiXwAamUgface2Op20B7bEO7p4mov_rw'
> > > > > > > > failed: Error during WebSocket handshake: Unexpected response
> > > code:
> > > > > 500
> > > > > > > >
> > > > > > > > - Original Message -
> > > > > > > > > Can you open the network tab in the dev tools of your
> browser
> > > and
> > > > > look
> > > > > > > for
> > > > > > > > > the websocket connections. I recommend using Chrome since
> it
> > > does a
> > > > > > > better
> > > > > > > > > job of displaying websocket information.
> > > > > > > > >
> > > > > > > > > Any time both logs and terminal are not working, it's
> usually a
> > > > > symptom
> > > > > > > of
> > > > > > > > > websockets failing to open.
> > > > > > > > >
> > > > > > > > > On Wed, Feb 22, 2017 at 11:10 AM, John Mazzitelli <
> > > > > m...@redhat.com >
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > OK, I'm almost back to where I was when I had things
> working.
> > > > > > > > >
> > > > > > > > > I was using an older version and starting with "oc cluster
> > > up", I
> > > > > am
> > > > > > > now
> > > > > > > > > using the latest builds and starting with "openshift
> start".
> > > > > > > > >
> > > > > > > > > But if I go to any pod's Logs tab in the UI console, I see
> > > this:
> > > > > > > > >
> > > > > > > > > "Logs are not available. The logs are no longer available
> or
> > > could
> > > > > not
> > > > > > > be
> > > > > > > > > loaded."
> > > > > > > > >
> > > > > > > > > If I try to use the Terminal (in the UI terminal tab) I see
> 

Re: cannot see Logs or use Terminal in latest

2017-02-22 Thread Andy Goldstein
127.0.0.1:10250 is the OpenShift node's port. Is your OpenShift process
listening on that port on the loopback interface?

On Wed, Feb 22, 2017 at 2:17 PM, John Mazzitelli  wrote:

> The only thing that seems related (when I click on a Logs or Terminal tab
> in the UI -this is the only error that is logged):
>
> E0222 14:15:38.835307   31426 errors.go:63] apiserver received an error
> that is not an unversioned.Status: Get https://localhost.localdomain:
> 10250/containerLogs/default/hawkular-openshift-agent-
> w7lsc/hawkular-openshift-agent?follow=true=
> 10485760=5000: dial tcp 127.0.0.1:10250: getsockopt: connection
> refused
>
>
> - Original Message -
> > With a 500 I would expect to see something in the logs for the master.
> > Anything there?
> >
> > On Wed, Feb 22, 2017 at 2:02 PM, John Mazzitelli 
> wrote:
> >
> > > Same thing for both Logs and Terminal - getting an error code of 500
> when
> > > the websocket tries to connect.
> > >
> > > I have no idea why - this used to always work - until I started using
> the
> > > latest code.
> > >
> > > - Original Message -
> > > > Good call:
> > > >
> > > > (unknown) WebSocket connection to
> > > > 'wss://192.168.1.15:8443/api/v1/namespaces/openshift-infra/
> > > pods/heapster-dln…xterm+%2Fbin%2Fsh_token=CpQs_
> > > rRQetCoiXwAamUgface2Op20B7bEO7p4mov_rw'
> > > > failed: Error during WebSocket handshake: Unexpected response code:
> 500
> > > >
> > > > - Original Message -
> > > > > Can you open the network tab in the dev tools of your browser and
> look
> > > for
> > > > > the websocket connections. I recommend using Chrome since it does a
> > > better
> > > > > job of displaying websocket information.
> > > > >
> > > > > Any time both logs and terminal are not working, it's usually a
> symptom
> > > of
> > > > > websockets failing to open.
> > > > >
> > > > > On Wed, Feb 22, 2017 at 11:10 AM, John Mazzitelli <
> m...@redhat.com >
> > > > > wrote:
> > > > >
> > > > >
> > > > > OK, I'm almost back to where I was when I had things working.
> > > > >
> > > > > I was using an older version and starting with "oc cluster up", I
> am
> > > now
> > > > > using the latest builds and starting with "openshift start".
> > > > >
> > > > > But if I go to any pod's Logs tab in the UI console, I see this:
> > > > >
> > > > > "Logs are not available. The logs are no longer available or could
> not
> > > be
> > > > > loaded."
> > > > >
> > > > > If I try to use the Terminal (in the UI terminal tab) I see this:
> > > > >
> > > > > "Could not connect to the container. Do you have sufficient
> > > privileges?"
> > > > >
> > > > > I am logged into the UI console as a user that is given
> cluster-admin
> > > role
> > > > > -
> > > > > so I assume I should be able to see everything.
> > > > >
> > > > > I used to be able to see these in the older versions I was using
> (when
> > > I
> > > > > was
> > > > > using oc cluster up). But now I can't.
> > > > >
> > > > > Am I missing a role or some permission?
> > > > >
> > > > > Versions:
> > > > >
> > > > > oc v1.5.0-alpha.3+2d20080-23
> > > > > kubernetes v1.5.2+43a9be4
> > > > > openshift v1.5.0-alpha.3+2d20080-23
> > > > >


Re: Push to registry sometimes fails

2016-12-07 Thread Andy Goldstein
Docker assumes that the registry talks TLS. It will only use http if you
specify the registry is insecure (typically via '--insecure-registry
172.30.0.0/16' in /etc/sysconfig/docker).

Is your registry secured?
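Incidentally, the "malformed HTTP response" bytes in Cameron's error are decodable and back up this diagnosis: the first byte, 21, is the TLS alert record type, and the 3, 1 that follows is TLS 1.0, i.e. a TLS server answered a plain-http client. A quick check:

```shell
# Dump the reported response bytes as decimals: 21 (TLS alert record),
# 3 1 (TLS 1.0), 0 2 (record length), then the start of the alert body.
printf '\x15\x03\x01\x00\x02\x02' | od -An -tu1
```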

On Wed, Dec 7, 2016 at 8:11 PM, Cameron Braid  wrote:

> I am occasionally getting this error after a build when pushing to the
> internal registry :
>
> Pushed 10/12 layers, 83% complete
> Registry server Address:
> Registry server User Name: serviceaccount
> Registry server Email: serviceacco...@example.org
> Registry server Password: <>
> error: build error: Failed to push image: Get
> http://172.30.25.196:5000/v2/: malformed HTTP response
> "\x15\x03\x01\x00\x02\x02"
>
> It looks like the pusher is using http to talk to the https registry.
>
> What tells the pusher that the registry is TLS ?
>
> Cheers
>
> Cameron
>


Re: Ruby-ex deployment times out

2016-09-28 Thread Andy Goldstein
Can you run this command:

sudo cat /var/lib/docker/image/devicemapper/repositories.json | python
-mjson.tool

and find the repository entry for 172.30.165.95:5000/myproject/ruby-ex.
Then find the entry for
172.30.165.95:5000/myproject/ruby-ex@sha256:df71f696941a9daa5daaea808cfcaaf72071d7ad206833c1b95a5060dd95ca92
and see what sha256 it points to. Then I would 'docker inspect' that value
and see what it is.

We had a bug in the past where when you had images pulled using our docker
1.8 and then you upgraded to 1.9+, you would get this error. I'm not sure
why you'd see it on a clean 1.10.3 install, though :-(

Andy
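For anyone unfamiliar with the file, a heavily abbreviated, hypothetical sketch of the lookup (the real file lives at /var/lib/docker/image/devicemapper/repositories.json; both digests below are invented placeholders):

```shell
cat > /tmp/repositories.json <<'EOF'
{"Repositories": {"172.30.165.95:5000/myproject/ruby-ex": {
  "172.30.165.95:5000/myproject/ruby-ex:latest": "sha256:aaaa1111...",
  "172.30.165.95:5000/myproject/ruby-ex@sha256:df71f696...": "sha256:aaaa1111..."}}}
EOF
# The digest key maps to a local image ID; that ID is the value to
# hand to `docker inspect`.
grep -o 'sha256:aaaa1111[^"]*' /tmp/repositories.json | sort -u
```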

On Wed, Sep 28, 2016 at 10:15 AM, Gerard Braad <m...@gbraad.nl> wrote:

> On Wed, Sep 28, 2016 at 6:50 PM, Andy Goldstein <agold...@redhat.com>
> wrote:
> > Gerard, were you on docker 1.8 and did you then upgrade to 1.9 or 1.10? I
> > recall a bug we had that sounds similar to what you're seeing here.
>
> I had not done an upgrade at all.
> a clean cloud image install:
>
> $ dnf install -y curl docker
> $ docker -v
> Docker version 1.10.3, build 8b7fa4a/1.10.3
>


Re: OpenShift Nginx issue - setgid(107) failed (1: Operation not permitted)

2016-09-23 Thread Andy Goldstein
I haven't tried doing any volume mounting with nginx on OpenShift, but this
always works for me:

oadm policy add-scc-to-user anyuid -z default
oc run --image nginx nginx

Andy

On Fri, Sep 23, 2016 at 1:36 PM, Charles Moulliard 
wrote:

> How can I change that for the user admin ?
>
> Tried these cmds without success to add the user admin
>
> oadm policy add-scc-to-user restricted admin
> oc describe scc restricted
> Name:   restricted
> Priority:   
> Access:
>   Users:   admin
> ...
>
> but
>
> oc logs local-nginx
> 2016/09/23 17:20:13 [emerg] 5#5: setgid(107) failed (1: Operation not
> permitted)
>
>
> On Fri, Sep 23, 2016 at 7:32 PM, Clayton Coleman 
> wrote:
>
>> Regular users can't change groups by default.  Generally you have to be
>> in a higher tier of privilege - the "anyuid" SCC (setgid is equivalent to
>> root usually)
>>
>> On Sep 23, 2016, at 1:03 PM, Charles Moulliard 
>> wrote:
>>
>> Hi,
>>
>> Can somebody help me concerning this (setgid(107) failed (1: Operation
>> not permitted)) issue reported here - https://github.com/jimmidyso
>> n/minishift/issues/105#issuecomment-249245765 ?
>>
>> Many thanks in advance
>>
>> Charles
>>


Re: Deleting a pod sometimes deletes the docker container and sometimes it doesn't

2016-09-20 Thread Andy Goldstein
If the pod crashes immediately, they should be able to use 'oc debug' to
try to determine what is happening. Otherwise, I would recommend using
aggregated logging.

Andy

On Tue, Sep 20, 2016 at 3:07 AM, v <vekt...@gmx.net> wrote:

> Hello,
>
> our use case is that the developers sometimes want to access the logs of
> crashed pods in order to see why their app crashed. That's not possible via
> the OpenShift Console, therefore we backup the logs of exited containers
> and make them accessible to the devs.
>
> I think that this could be accomplished via aggregated logging too (could
> it?) but last time we tried that it turned out to be a very heavyweight
> solution that constantly required our attention and oversight.
>
> Can you give us any recommendation?
>
> Regards
> v
>
>
>
>
> Am 2016-09-19 um 15:57 schrieb Andy Goldstein:
>
> When you delete a pod, its containers should **always** be deleted. If
> they are not, this is a bug.
>
> Could you please elaborate what use case(s) you have for keeping the
> containers around?
>
> Thanks,
> Andy
>
> On Mon, Sep 19, 2016 at 5:09 AM, v <vekt...@gmx.net> wrote:
>
>> Hello,
>>
>> we have an issue with docker/openshift.
>> With Openshift 1.1.4 and Docker 1.8.2 we can delete a pod via "oc delete
>> po" and the docker container is only stopped but not deleted. That means
>> the container is still visible via "docker ps -a",
>> /var/lib/docker/containers/[hash] still persists, "docker logs" still
>> works etc.
>>
>> With Openshift 1.1.6 and Docker 1.9.1 the docker container is sometimes
>> DELETED when we delete a pod via "oc delete po" and sometimes it is just
>> stopped like with Openshift 1.1.4/Docker 1.8.2.
>>
>> Is there any way we can influence this? Ideally we would like "oc delete
>> po" to just stop the pod but not delete it.
>>
>> Regards
>> v
>>
>>


Re: Deleting a pod sometimes deletes the docker container and sometimes it doesn't

2016-09-19 Thread Andy Goldstein
When you delete a pod, its containers should **always** be deleted. If they
are not, this is a bug.

Could you please elaborate what use case(s) you have for keeping the
containers around?

Thanks,
Andy

On Mon, Sep 19, 2016 at 5:09 AM, v  wrote:

> Hello,
>
> we have an issue with docker/openshift.
> With Openshift 1.1.4 and Docker 1.8.2 we can delete a pod via "oc delete
> po" and the docker container is only stopped but not deleted. That means
> the container is still visible via "docker ps -a",
> /var/lib/docker/containers/[hash] still persists, "docker logs" still
> works etc.
>
> With Openshift 1.1.6 and Docker 1.9.1 the docker container is sometimes
> DELETED when we delete a pod via "oc delete po" and sometimes it is just
> stopped like with Openshift 1.1.4/Docker 1.8.2.
>
> Is there any way we can influence this? Ideally we would like "oc delete
> po" to just stop the pod but not delete it.
>
> Regards
> v
>
>


Re: Problem authenticating to private docker registry

2016-08-10 Thread Andy Goldstein
Tony, can you show the output when you try to manually 'docker pull'?

On Wed, Aug 10, 2016 at 2:04 PM, Cesar Wong  wrote:

> Hmm, I didn't know the issue existed between 1.10 and 1.12 as well.
>
> Andy, what would you recommend?
>
>
> On Aug 10, 2016, at 1:58 PM, Tony Saxon  wrote:
>
> Ok, maybe that is the issue. I cannot do the docker pull referencing the
> sha256 hash on the node.
>
> The docker version running on the node is docker 1.10.3, and the docker
> version on the machine that pushed the image is 1.12.0. Is there a
> potential workaround for this, or do I need to get the docker version
> updated on the nodes? For reference, I installed the openshift platform
> using the ansible advanced installation referenced in the documentation.
>
> On Wed, Aug 10, 2016 at 1:46 PM, Cesar Wong  wrote:
>
>> Tony,
>>
>> The only other time that I've seen the manifest not found error was when
>> there was a version mismatch between the Docker version that pushed the
>> image vs the version that was consuming the image (ie. images pushed with
>> Docker 1.9 and pulled with Docker 1.10). Are you able to pull the image
>> spec directly from your node using the Docker cli?
>>
>> $ docker pull docker-lab.example.com:5000/testwebapp@sha256:9799a25cd
>> 6fd7f7908bad740fc0c85823e38aa22afb22f687a5b8a3ed2bf9ec3
>>
>> On Aug 10, 2016, at 1:02 PM, Tony Saxon  wrote:
>>
>> I'm not sure if this has anything to do with it, but I looked at the
>> details of the imagestream that I imported and see that it has this as the
>> docker image reference:
>>
>> status:
>>   dockerImageRepository: 172.30.11.167:5000/testwebapp/testwebapp
>>   tags:
>>   - items:
>> - created: 2016-08-10T13:26:01Z
>>   dockerImageReference: docker-lab.example.com:5000/te
>> stwebapp@sha256:9799a25cd6fd7f7908bad740fc0c85823e38aa22afb2
>> 2f687a5b8a3ed2bf9ec3
>>   generation: 1
>>   image: sha256:9799a25cd6fd7f7908bad740fc0c85823e38aa22afb22f687a5b8
>> a3ed2bf9ec3
>> tag: latest
>>
>> I also see these errors show up on the docker registry when I try to
>> deploy the app:
>>
>> time="2016-08-10T16:58:26Z" level=warning msg="error authorizing context:
>> basic authentication challenge for realm \"Registry Realm\": invalid
>> authorization credential" go.version=go1.6.3 http.request.host="
>> docker-lab.evolveip.net:5000" 
>> http.request.id=ecce6c57-6273-42d6-b7a9-441877c0338f
>> http.request.method=GET http.request.remoteaddr="192.168.122.156:35858"
>> http.request.uri="/v2/" http.request.useragent="docker/1.10.3 go/go1.4.2
>> git-commit/9419b24-unsupported kernel/3.10.0-327.22.2.el7.x86_64
>> os/linux arch/amd64" instance.id=f0d70491-6e34-44eb-a51c-3b13eae8daa6
>> version=v2.5.0
>> 192.168.122.156 - - [10/Aug/2016:16:58:26 +] "GET /v2/ HTTP/1.1" 401
>> 87 "" "docker/1.10.3 go/go1.4.2 git-commit/9419b24-unsupported
>> kernel/3.10.0-327.22.2.el7.x86_64 os/linux arch/amd64"
>> time="2016-08-10T16:58:26Z" level=error msg="response completed with
>> error" auth.user.name=maven err.code="manifest unknown"
>> err.detail="unknown manifest name=testwebapp revision=sha256:9799a25cd6fd7f
>> 7908bad740fc0c85823e38aa22afb22f687a5b8a3ed2bf9ec3"
>> err.message="manifest unknown" go.version=go1.6.3 http.request.host="
>> docker-lab.evolveip.net:5000" 
>> http.request.id=b994a477-6beb-4908-8589-c051b9048e87
>> http.request.method=GET http.request.remoteaddr="192.168.122.156:35860"
>> http.request.uri="/v2/testwebapp/manifests/sha256:9799a25cd6
>> fd7f7908bad740fc0c85823e38aa22afb22f687a5b8a3ed2bf9ec3"
>> http.request.useragent="docker/1.10.3 go/go1.4.2
>> git-commit/9419b24-unsupported kernel/3.10.0-327.22.2.el7.x86_64
>> os/linux arch/amd64" http.response.contenttype="application/json;
>> charset=utf-8" http.response.duration=6.04215ms http.response.status=404
>> http.response.written=186 instance.id=f0d70491-6e34-44eb-a51c-3b13eae8daa6
>> vars.name=testwebapp vars.reference="sha256:9799a25
>> cd6fd7f7908bad740fc0c85823e38aa22afb22f687a5b8a3ed2bf9ec3" version=v2.5.0
>> 192.168.122.156 - - [10/Aug/2016:16:58:26 +] "GET
>> /v2/testwebapp/manifests/sha256:9799a25cd6fd7f7908bad740fc0c
>> 85823e38aa22afb22f687a5b8a3ed2bf9ec3 HTTP/1.1" 404 186 "" "docker/1.10.3
>> go/go1.4.2 git-commit/9419b24-unsupported kernel/3.10.0-327.22.2.el7.x86_64
>> os/linux arch/amd64"
>>
>> So it looks like the manifest isn't found, or am I misunderstanding that?
>>
>> The imagestream was imported by simply:
>>
>> [root@os-master ~]# oc import-image testwebapp --confirm --from=
>> docker-lab.example.com:5000/testwebapp:latest
>> The import completed successfully.
>>
>> Name:   testwebapp
>> Created:Less than a second ago
>> Labels: 
>> Annotations:openshift.io/image.dockerRepos
>> itoryCheck=2016-08-10T17:01:46Z
>> Docker Pull Spec:   172.30.11.167:5000/testwebapp/testwebapp
>>
>> Tag Spec  

Re: Watch routes filter deleted

2016-07-22 Thread Andy Goldstein
While the events have that info, kubectl isn't currently coded to look at
the event type and display it to the user to clearly indicate a deletion.

On Fri, Jul 22, 2016 at 9:09 AM, Jordan Liggitt  wrote:

> Watch events include the type of event, along with the object:
> ADDED, MODIFIED, DELETED, or ERROR
>
> Additionally, deleted objects should have a metadata.deletionTimestamp set
>
>
>
> > On Jul 22, 2016, at 9:05 AM, Tobias Florek  wrote:
> >
> > Hi.
> >
> > I am speaking about origin 1.2.1 behavior. If that is fixed in more
> > recent versions, please tell me!
> >
> > When watching routes, one gets newly created and status-changed routes
> > as expected. What I did not expect was getting deleted routes with no
> > indication that the route is going to be deleted.
> >
> > How can I filter/recognize the delete events without checking the route
> > explicitly via `oc get route `?
> >
> > Example:
> >
> > In one terminal:
> >
> >> oc expose  --... (to create a route to play with)
> >> oc get routes --watch-only -o yaml
> >
> > In another terminal:
> >
> >> oc delete route 
> >
> > This will show the complete route in the first terminal, with no
> > indication that the route is going to be deleted.
> >
> >
> > Cheers,
> > Tobias Florek


Re: Watch routes filter deleted

2016-07-22 Thread Andy Goldstein
It's not fixed yet. Here are the 2 related upstream issues:

https://github.com/kubernetes/kubernetes/issues/11338

https://github.com/kubernetes/kubernetes/issues/17612

Andy

On Fri, Jul 22, 2016 at 9:03 AM, Tobias Florek  wrote:

> Hi.
>
> I am speaking about origin 1.2.1 behavior. If that is fixed in more
> recent versions, please tell me!
>
> When watching routes, one gets newly created and status-changed routes
> as expected. What I did not expect was getting deleted routes with no
> indication that the route is going to be deleted.
>
> How can I filter/recognize the delete events without checking the route
> explicitly via `oc get route `?
>
> Example:
>
> In one terminal:
>
> > oc expose  --... (to create a route to play with)
> > oc get routes --watch-only -o yaml
>
> In another terminal:
>
> > oc delete route 
>
> This will show the complete route in the first terminal, with no
> indication that the route is going to be deleted.
>
>
> Cheers,
>  Tobias Florek
>


Re: Back-off pulling image "/origin-logging-curator@sha256:b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"

2016-07-07 Thread Andy Goldstein
Can you go into the filesystem where your registry lives and see if you
have anything
with 5ad3b9e964ec6e420ac047be6ae96bf04abe817d94a7d77592af1c119543b37b in
the file name?
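Background on why these digest pulls can 404: a registry stores a manifest under the sha256 of its exact bytes, so if a different Docker version re-serializes the manifest on push, a digest recorded earlier no longer names anything. A minimal illustration (the two JSON strings are stand-ins, not real manifests):

```shell
# One changed byte produces an unrelated digest, which is why pulling
# by a stale digest can fail with "manifest unknown".
a=$(printf '{"schemaVersion":1}' | sha256sum | cut -d' ' -f1)
b=$(printf '{"schemaVersion":2}' | sha256sum | cut -d' ' -f1)
[ "$a" != "$b" ] && echo "digests differ"
```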

On Thu, Jul 7, 2016 at 4:45 PM, Den Cowboy  wrote:

> sorry,
> My node:
> Version: 1.10.3
>  API version: 1.22
>  Package version: docker-common-1.10.3-44.el7.centos.x86_64
>  Go version:  go1.4.2
>  Git commit:  9419b24-unsupported
>  Built:   Fri Jun 24 12:09:49 2016
>  OS/Arch: linux/amd64
>
>
> Pushed with docker 1.11 (after a normal install on Ubuntu)
>
> --
> From: agold...@redhat.com
> Date: Thu, 7 Jul 2016 16:43:33 -0400
>
> Subject: Re: Back-off pulling image "/origin-logging-curator@sha256
> :b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"
> To: dencow...@hotmail.com
> CC: users@lists.openshift.redhat.com
>
> (asking again) Did you push the image using Docker 1.10 and your node is
> running Docker 1.9?
>
> On Thu, Jul 7, 2016 at 4:40 PM, Den Cowboy  wrote:
>
> Thanks. How can I handle this when I'm using my own images?
>
> maybe a clearer explanation:
> We are using our own docker registry which is secured with a self-signed
> certificate. So if we place the cert on our openshift node we're able to
> pull. We pulled the openshift-origin images v1.2.0 from dockerhub and
> pushed it inside our docker registry. We are using the registry instead of
> docker.io/openshift/origin-xxx
> This works fine for our router, registry, cluster metrics project etc..
> But when we are deploying the logging project:
> https://docs.openshift.org/latest/install_config/aggregate_logging.html
> it doesn't work.
> the pull of the registry.com/asco/origin-logging-deployment:v1.2.0 is
> fine and it deploys.
> But the problem raises later. The fluentd image is also pulled fine (from
> our registry).
> But the rest of the images aren't pulled in the right way.
>
>
>
> example of error events/logs: (origin-logging-"empty" is probably an
> issue?)
>
>
>
> pulling image "
> registry.com/asco/origin-logging-auth-proxy@sha256:179b84eb803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65
> 
> "
> Failed to pull image "
> registry.com/asco/origin-logging-auth-proxy@sha256:179b84xb803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65
> ":
> image pull failed for
> registry.com/asco/origin-logging-auth-proxy@sha256:179b84ex803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65,
> this may be because there are no credentials on this request. details:
> (manifest unknown: manifest unknown)
>
>
> --
> From: agold...@redhat.com
> Date: Thu, 7 Jul 2016 16:34:29 -0400
> Subject: Re: Back-off pulling image "/origin-logging-curator@sha256
> :b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"
> To: dencow...@hotmail.com
> CC: users@lists.openshift.redhat.com
>
>
>
>
> On Thu, Jul 7, 2016 at 3:48 PM, Den Cowboy  wrote:
>
> Hi,
>
> We are using our own registry which contains some necessary origin-images
> for us.
> We already deployed the router, registry and cluster metrics using our
> registry:
>
> The images are all in the form of:
> myregistry.com/company/origin-
>
> Now I try to deploy a logging project:
> After starting the logging deployer template (in which I described our
> registry + v1.2.0) it starts pulling the origin-logging-deployer image
> which is fine.
>
> Then everything seems to start but:
> Back-off pulling image "
> myregistry.com/company/origin-logging-elasticsearch@sha256:5ad3b9e964ec6e420ac047be6ae96bf04abe817d94a7d77592af1c119543b37b
> 
> "
> (manifest unknown: manifest unknown)
>
>
> Did you push the image using Docker 1.10 and your node is running Docker
> 1.9?
>
>
>
> In the deploymentconfig is also the image with the @sha
> This is happening for each image of our deployment (es, kibana, fluentd,
> ..)
>
> Why is it adding that @sha after our image?
>
>
> If you're using ImageChangeTriggers, we translate tags to
> content-addressable IDs for consistent image usage so that a moving tag
> such as "latest" doesn't yield different images and possibly different results when
> you deploy today, tomorrow, next week, etc.
>
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
>
>
___
users mailing list
users@lists.openshift.redhat.com

Re: Back-off pulling image "/origin-logging-curator@sha256:b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"

2016-07-07 Thread Andy Goldstein
On Thu, Jul 7, 2016 at 3:48 PM, Den Cowboy  wrote:

> Hi,
>
> We are using our own registry which contains some necessary origin-images
> for us.
> We already deployed the router, registry and cluster metrics using our
> registry:
>
> The images are all in the form of:
> myregistry.com/company/origin-
>
> Now I try to deploy a logging project:
> After starting the logging deployer template (in which I described our
> registry + v1.2.0) it starts pulling the origin-logging-deployer image
> which is fine.
>
> Then everything seems to start but:
> Back-off pulling image "
> myregistry.com/company/origin-logging-elasticsearch@sha256:5ad3b9e964ec6e420ac047be6ae96bf04abe817d94a7d77592af1c119543b37b
> "
> (manifest unknown: manifest unknown)
>

Did you push the image using Docker 1.10 and your node is running Docker
1.9?


>
> In the deploymentconfig is also the image with the @sha
> This is happening for each image of our deployment (es, kibana, fluentd,
> ..)
>
> Why is it adding that @sha after our image?
>

If you're using ImageChangeTriggers, we translate tags to
content-addressable IDs for consistent image usage so that a moving tag
such as "latest" doesn't yield different images and possibly different results when
you deploy today, tomorrow, next week, etc.
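This content-addressing property is easy to demonstrate outside OpenShift with nothing but a hash tool. A rough sketch (the JSON files below are stand-ins for image manifests, not real ones):

```shell
workdir=$(mktemp -d)

# Two manifests with identical bytes, one with different bytes.
printf '{"layers":["abc"]}' > "$workdir/manifest-a.json"
printf '{"layers":["abc"]}' > "$workdir/manifest-b.json"
printf '{"layers":["xyz"]}' > "$workdir/manifest-c.json"

digest_a="sha256:$(sha256sum "$workdir/manifest-a.json" | cut -d' ' -f1)"
digest_b="sha256:$(sha256sum "$workdir/manifest-b.json" | cut -d' ' -f1)"
digest_c="sha256:$(sha256sum "$workdir/manifest-c.json" | cut -d' ' -f1)"

# Identical content -> identical digest; any change -> a new digest.
# This is why deploying by digest is repeatable while a moving tag is not.
[ "$digest_a" = "$digest_b" ] && echo "same content, same digest"
[ "$digest_a" != "$digest_c" ] && echo "different content, different digest"

rm -rf "$workdir"
```

A real image digest is a SHA-256 over the registry's manifest for the image, which is what the `@sha256:...` reference in the deployment config pins.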


>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Port forwarding without keeping port-forward process running

2016-05-16 Thread Andy Goldstein
Is the port that you want to expose HTTP or some other protocol? If it's
some other protocol, does it support TLS+SNI? You can expose ports via the
router if they're HTTP/HTTPS or TLS+SNI.
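For the TLS+SNI case, a passthrough route is what avoids keeping a port-forward process alive. A hedged sketch (name, host, and service are placeholders for your own objects):

```yaml
apiVersion: v1
kind: Route
metadata:
  name: my-tls-service                      # placeholder name
spec:
  host: my-tls-service.apps.example.com     # placeholder hostname
  to:
    kind: Service
    name: my-tls-service                    # placeholder service name
  tls:
    termination: passthrough                # router forwards raw TLS to the pod
```

The router picks the backend from the SNI hostname, so the client must speak TLS with SNI; plain non-TLS protocols still need something like a NodePort or an external load balancer.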

On Thu, May 12, 2016 at 3:03 AM, Henryk Konsek  wrote:

> Many thanks, Aleksandar. I will take look at your post.
>
> Can somebody from OS team confirm what are the options here? I would
> highly appreciate that. :)
>
> Cheers!
>
> On Thu, 12.05.2016 at 08:15, Aleksandar Lazic <
> aleksandar.la...@cloudwerkstatt.com> wrote:
>
>> Hi Henryk.
>>
>> As far as I know there is no easy way.
>>
>> We solved this requirement like this.
>>
>>
>> https://alword.wordpress.com/2016/03/11/make-openshift-console-available-on-port-443-https/
>>
>> You only need to tell the "haproxy for master services" the destination
>> service.
>>
>> Hth & best regards
>>
>> Aleksandar Lazic
>> Sent from Outlook Mobile
>>
>>
>>
>> On Wed, May 11, 2016 at 11:04 PM -0700, "Henryk Konsek" <
>> hekon...@gmail.com> wrote:
>>
>> Hi,
>>
>> Is there a way to forward port outside OS without keeping "oc
>> port-forward" process running? I'd like to expose port of my pod to the
>> outside world, but I don't want to run "oc port-forward" as an external
>> process.
>>
>> Is there a way to tell OS to always forward given port for given pod?
>>
>> Cheers!
>> --
>> Henryk Konsek
>> https://linkedin.com/in/hekonsek
>>
> --
> Henryk Konsek
> https://linkedin.com/in/hekonsek
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Pushing to internal registry

2016-04-22 Thread Andy Goldstein
Hi Stephen,

This is the key line from your log:

OpenShift access denied: User \"system:serviceaccount:sbraswe1xzz:builder\"
cannot get imagestreams/layers in project \"sbraswe1xzz

What is the output of `oadm policy who-can get imagestreams/layers -n
sbraswe1xzz`?
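If that output shows the builder service account is missing, the usual repair is to restore the default role binding. A sketch under the assumption that `sbraswe1xzz` (the project from the log above) is the affected project:

```shell
# Restore the default grant that lets the builder service account
# read/push image layers in its own project:
oc policy add-role-to-user system:image-builder \
    system:serviceaccount:sbraswe1xzz:builder -n sbraswe1xzz

# Re-check:
oadm policy who-can get imagestreams/layers -n sbraswe1xzz
```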

Andy

On Fri, Apr 22, 2016 at 1:51 PM, Braswell, Stephen <step...@unc.edu> wrote:

> Hi Andy,
>
> I don’t know if my issue is exactly the same as Gary’s but I have some
> additional log data from when I was getting a similar error if it is
> helpful.
>
> https://gist.github.com/braswell/048b5a1d9c8b740673218188fe959545
>
>
> -Stephen
>
> > On Apr 19, 2016, at 3:27 PM, Andy Goldstein <agold...@redhat.com> wrote:
> >
> > Do you have more output from the registry's log? That "error" message is
> quite normal and doesn't actually indicate an error. It shows up when a
> Docker client first tries to talk to it without providing any credentials.
> The registry sends back an "unauthorized" response, so Docker then sends
> credentials.
> >
> > On Tue, Apr 19, 2016 at 12:05 PM, Gary Franczyk <
> gary.franc...@availity.com> wrote:
> > Hi there.
> >
> > I'm attempting to push a docker image to the internal registry and am
> getting this error in the registry pod logs:
> >
> >
> > time="2016-04-19T11:58:58.942610625-04:00" level=error msg="error
> authorizing context: authorization header with basic token required"
> go.version=go1.4
> > .2 http.request.host="172.30.142.84:5000" 
> > http.request.id=a4ecc601-9bc3-4f23-94b4-b2362555618b
> http.request.method=GET http.request.remoteaddr="10.1.1.
> > 1:46873" http.request.uri="/v2/" http.request.useragent="docker/1.9.1
> go/go1.4.2 kernel/3.10.0-327.10.1.el7.x86_64 os/linux arch/amd64"
> instance.id=74d
> > e4ff9-2af6-496e-b033-52e9711a4bd6
> >
> >
> >
> > I was able to successfully (as far as I can tell) login to the docker
> registry with "docker login".
> > WARNING: login credentials saved in /root/.docker/config.json
> > Login Succeeded
> > I am using LDAP for authentication.
> >
> > Can anyone shed some light on this?   Thanks!
> >
> > Gary Franczyk
> > Senior Unix Administrator, Infrastructure
> >
> > Availity | 10752 Deerwood Park Blvd S. Ste 110, Jacksonville FL 32256
> > W 904.470.4953 | M 561.313.2866
> > gary.franc...@availity.com
> > The information contained in this e-mail may be privileged and
> confidential under applicable law. It is intended solely for the use of the
> person or firm named above. If the reader of this e-mail is not the
> intended recipient, please notify us immediately by returning the e-mail to
> the originating e-mail address. Availity, LLC is not responsible for errors
> or omissions in this e-mail message. Any personal comments made in this
> e-mail do not reflect the views of Availity, LLC.
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Best way to delete images (local and in registry)

2016-04-21 Thread Andy Goldstein
On Thu, Apr 21, 2016 at 2:22 PM, v <vekt...@gmx.net> wrote:

> On 2016-04-21 at 13:49, Andy Goldstein wrote:
>
>
>
> On Thursday, April 21, 2016, v <vekt...@gmx.net> wrote:
>
>>
>>
>> On 2016-04-21 at 09:44, aleks wrote:
>>
>>> Hi Lorenz
>>>
>>> On 21-04-2016 09:01, Lorenz Vanthillo wrote:
>>>
>>>> Thanks Aleks,
>>>>
>>>> Is this deleting images on your nodes or also on your
>>>> openshift-registry?
>>>>
>>>
>>> As far as I have seen, only in the registry, not on the nodes.
>>> That's the reason why we afterwards execute a
>>>
>>> ansible -m shell -a 'docker rmi $(docker images -q)' all
>>>
>>> And for example:
>>>>
>>>> oadm prune images --keep-younger-than=60m
>>>>
>>>> Will this only delete images older than 60m which aren't used?
>>>> Or will this also delete images which are used (maybe only on the node
>>>> but not out of the registry?)
>>>>
>>> Unfortunately this will not delete any images at all, it will only
>> delete the references to those images. You will not get any disk space back
>> with this.
>
>
> This is incorrect. oadm prune images does free up disk space in the
> registry pod's storage.
>
> Hello,
>
> this is interesting. We've tried executing
> oadm prune builds --confirm and then
> oadm prune deployments --confirm and then
> oadm prune images --confirm
> and it never freed up a meaningful amount of disk space. We tried it with
> Origin 1.0.6 and just recently with Origin 1.1.4 (our registry is currently
> 50 GiB in size).
>
> Does that mean that we have encountered a bug?
>

We will need more information to determine if there's a bug or not. Could
you please provide:

   - The command(s) you ran, and output, showing the registry's size before
   pruning
   - the output from running 'oadm prune images' (including the exact
   command line you ran)
   - The command(s) and output showing the registry's size after pruning
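For reference, a sketch of the kind of before/after measurement that answers all three questions (the pod name is a placeholder — look yours up with `oc get pods -n default`):

```shell
# Size before pruning (default registry storage path inside the pod):
oc exec -n default docker-registry-1-abcde -- du -sh /registry

# Dry run first (no --confirm prints what would be deleted),
# then prune for real:
oadm prune images --keep-younger-than=60m
oadm prune images --keep-younger-than=60m --confirm

# Size after pruning:
oc exec -n default docker-registry-1-abcde -- du -sh /registry
```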



>
>
>
>
>
>
>>
>> As for your second question:
>> If a pod, RC or DC is using the image it will not get deleted, you can
>> read the docs for more details:
>>
>> https://docs.openshift.com/enterprise/3.1/admin_guide/pruning_resources.html#pruning-images
>>
>>
>>
>>> Well due to the fact that I'm not using such an option you can try it
>>> on your own as long as you don't add ' --confirm ' to the command
>>>
>>> Best Regards
>>> Aleks
>>>
>>> PS: Please keep the list on cc thanks
>>>
>>>> To: lorenz.vanthi...@outlook.com
>>>>> Subject: Re: Best way to delete images (local and in registry)
>>>>> Date: Thu, 21 Apr 2016 08:42:50 +0200
>>>>> From: al-openshiftus...@none.at
>>>>> CC: users@lists.openshift.redhat.com
>>>>>
>>>>> Hi Lorenz.
>>>>>
>>>>> On 20-04-2016 14:33, Lorenz Vanthillo wrote:
>>>>> > I'm searching for the best way to delete unused docker images in my
>>>>> > cluster.
>>>>> > Because we're rebuilding images + pushing them to the registry.
>>>>> >
>>>>> > When we perform
>>>>> > docker images -q | xargs docker rmi
>>>>> >
>>>>> > We get:
>>>>> > REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
>>>>> >   0fd6f6a7d8fb 6 days ago 660.1 MB

Re: Best way to delete images (local and in registry)

2016-04-21 Thread Andy Goldstein
On Thursday, April 21, 2016, v  wrote:

>
>
> On 2016-04-21 at 09:44, aleks wrote:
>
>> Hi Lorenz
>>
>> On 21-04-2016 09:01, Lorenz Vanthillo wrote:
>>
>>> Thanks Aleks,
>>>
>>> Is this deleting images on your nodes or also on your
>>> openshift-registry?
>>>
>>
>> As far as I have seen, only in the registry, not on the nodes.
>> That's the reason why we afterwards execute a
>>
>> ansible -m shell -a 'docker rmi $(docker images -q)' all
>>
>> And for example:
>>>
>>> oadm prune images --keep-younger-than=60m
>>>
>>> Will this only delete images older than 60m which aren't used?
>>> Or will this also delete images which are used (maybe only on the node
>>> but not out of the registry?)
>>>
>> Unfortunately this will not delete any images at all, it will only delete
> the references to those images. You will not get any disk space back with
> this.


This is incorrect. oadm prune images does free up disk space in the
registry pod's storage.


>
> As for your second question:
> If a pod, RC or DC is using the image it will not get deleted, you can
> read the docs for more details:
>
> https://docs.openshift.com/enterprise/3.1/admin_guide/pruning_resources.html#pruning-images
>
>
>
>> Well due to the fact that I'm not using such an option you can try it on
>> your own as long as you don't add ' --confirm ' to the command
>>
>> Best Regards
>> Aleks
>>
>> PS: Please keep the list on cc thanks
>>
>>> To: lorenz.vanthi...@outlook.com
 Subject: Re: Best way to delete images (local and in registry)
 Date: Thu, 21 Apr 2016 08:42:50 +0200
 From: al-openshiftus...@none.at
 CC: users@lists.openshift.redhat.com

 Hi Lorenz.

 On 20-04-2016 14:33, Lorenz Vanthillo wrote:
 > I'm searching for the best way to delete unused docker images in my
 > cluster.
 > Because we're rebuilding images + pushing them to the registry.
 >
 > When we perform
 > docker images -q | xargs docker rmi
 >
 > We get:
 > REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
 >   0fd6f6a7d8fb 6 days ago 660.1 MB
 >   cdcb32f9b621 2 weeks ago 743.2 MB
 >   9df362e36242 2 weeks ago 794 MB
 >   67de4dbed60e 2 weeks ago 704 MB
 >   999e0047a070 2 weeks ago 543.6 MB
 >
 > But oc get images gave us:

 [snipp]

 > Is this fine?
 >
 > And what's the best way to delete old images out of the registry?

 Do you have tried this way?



>>> https://docs.openshift.com/enterprise/3.1/admin_guide/pruning_resources.html#pruning-images
>>>

 After wards we have run

 docker rmi $(docker images -q)

 on every node.

 I'm not sure if the last step is still necessary in the current

>>> version.
>>>

 Best Regards
 Aleks

>>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Pushing to internal registry

2016-04-19 Thread Andy Goldstein
Do you have more output from the registry's log? That "error" message is
quite normal and doesn't actually indicate an error. It shows up when a
Docker client first tries to talk to it without providing any credentials.
The registry sends back an "unauthorized" response, so Docker then sends
credentials.
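A sketch of that handshake, using the registry address from the log above (the exact form of the second request depends on the auth scheme the challenge advertises):

```shell
# Step 1: anonymous probe -- the registry answers 401 with a
# WWW-Authenticate challenge; this is the line that shows up in
# the registry log as an "error".
curl -i http://172.30.142.84:5000/v2/

# Step 2: the client retries with credentials per the challenge,
# e.g. built from an OpenShift token:
curl -i -u "unused:$(oc whoami -t)" http://172.30.142.84:5000/v2/
```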

On Tue, Apr 19, 2016 at 12:05 PM, Gary Franczyk 
wrote:

> Hi there.
>
> I'm attempting to push a docker image to the internal registry and am
> getting this error in the registry pod logs:
>
>
> time="2016-04-19T11:58:58.942610625-04:00" level=error msg="error
> authorizing context: authorization header with basic token required"
> go.version=go1.4
>
> .2 http.request.host="172.30.142.84:5000" 
> http.request.id=a4ecc601-9bc3-4f23-94b4-b2362555618b
> http.request.method=GET http.request.remoteaddr="10.1.1.
>
> 1:46873" http.request.uri="/v2/" http.request.useragent="docker/1.9.1
> go/go1.4.2 kernel/3.10.0-327.10.1.el7.x86_64 os/linux arch/amd64"
> instance.id=74d
>
> e4ff9-2af6-496e-b033-52e9711a4bd6
>
>
>
> I was able to successfully (as far as I can tell) login to the docker
> registry with "docker login".
>
> WARNING: login credentials saved in /root/.docker/config.json
>
> Login Succeeded
> I am using LDAP for authentication.
>
> Can anyone shed some light on this?   Thanks!
>
> *Gary Franczyk*
> Senior Unix Administrator, Infrastructure
>
> Availity | 10752 Deerwood Park Blvd S. Ste 110, Jacksonville FL 32256
> W 904.470.4953 | M 561.313.2866
>
> gary.franc...@availity.com
> --
> The information contained in this e-mail may be privileged and
> confidential under applicable law. It is intended solely for the use of the
> person or firm named above. If the reader of this e-mail is not the
> intended recipient, please notify us immediately by returning the e-mail to
> the originating e-mail address. Availity, LLC is not responsible for errors
> or omissions in this e-mail message. Any personal comments made in this
> e-mail do not reflect the views of Availity, LLC.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: I wonder if there are some commands to list port forward and remove port forward?

2016-04-06 Thread Andy Goldstein
Hi Stéphane,

Port forwarding is a temporary operation - it stays alive as long as you
keep the `oc port-forward` command running. Does this help answer your
question?

Andy

On Wed, Apr 6, 2016 at 11:00 AM, Stéphane Klein  wrote:

> Hi,
>
> I see the command
>
> oc port-forward
>
> in https://docs.openshift.org/latest/dev_guide/port_forwarding.html
>
> I wonder if there are some commands to :
>
> * list port forward ?
> * remove port forward ?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein 
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Repository for docker image

2016-03-25 Thread Andy Goldstein
Try docker images --digests

On Friday, March 25, 2016, Maciej Szulik  wrote:

> Hey,
>
> On 03/25/2016 01:27 PM, Lorenz Vanthillo wrote:
>
>> We have 2 environments (OpenShift Origin 1.1.3 cluster). Development and
>> test.
>> In Development we have a Jenkins CI which creates our images and pushes
>> to our
>> OpenShift registry.
>>
>> We have one Jenkins Job which is pulling the image from that registry.
>> The job
>> tags the image on registry.dev.xxx.com:443/project-hello/image.
>> After that it pushes the image over https to the secure registry of our
>> Dev
>> environment.
>> When I enter project-hello I have an image stream:
>> 172.30.xx.xx:5000/project-hello/image. So that works fine. I'm albe to
>> deploy
>> the image.
>>
>> The whole flow is fine but when I perform:
>> $ docker images:
>> <none>  <none>  def2fed60xxx  3 days ago  367.3 MB
>> <none>  <none>  aeb73c022xxx  3 days ago  382.4 MB
>> <none>  <none>  e448569c1xxx  3 days ago  383.3 MB
>> ...
>>
>> $ oc get images gives me:
>>
>> sha256:xxx   172.30.xx.xx:5000/project-hello/image-name@sha256:xxx
>> sha256:xxx   172.30.xx.xx:5000/test-bluepond/other-image-name@sha256:xxx
>>
>> What is the 'oc get images' command actually showing? Are these all the
>> available image-streams in your environment?
>> And maybe more important: why are we getting <none> for repository and
>> <none> for tag?
>>
>>
>
> I'm a little confused by what you wrote. `oc get images` shows so-called
> ImageStreamImage objects which describe imported image metadata. If you
> try `oc get image sha256:xxx -o yaml` you should see all that metadata.
> Having said that, the question you're posting relates to `docker images`
> command, I think, and has nothing to do with openshift CLI.
> Can you restate your question, it looks like I'm missing here something?
>
> Maciej
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Docker Registry Authentication Issues

2016-03-11 Thread Andy Goldstein
Can you gist the tail end of the registry pod log from when the push failed?

On Thursday, March 10, 2016, Tim Moor  wrote:

> Hey thanks Clayton,
>
> I’ve built a fresh environment to ensure that this isn’t an isolated
> incident, and it shows up there as well.
>
> As I mentioned earlier I can see that data is being stored on the
> persistent volume, the 500 error seems to occur in the final commit phase
> of the push.
>
> I’ve also tried different users and projects and getting the same result.
>
> The logs are rather useless, any lead on where I can try next?
>
> Thanks
>
>
> On 10/03/2016, at 4:19 PM, Clayton Coleman  > wrote:
>
> error authorizing context: authorization header with basic token required"
>
>
> Is definitely not expected.  Did you do any sort of upgrade or change to
> the registry image?   What image is your registry pod using?
>
> On Mar 9, 2016, at 10:11 PM, Tim Moor  > wrote:
>
> Hey Guys, in the last few days we’ve been having issues with pushes to our
> OpenShift docker registry.
>
> On the client side we’ve pulled from docker, tagged for the destination
> including a name space that we are admins for and we’re getting the
> following error.
>
> oc login 
> docker login -u $(oc whoami) -p $(oc whoami -t) registry.oso.infra.corp
> docker pull springnz/post_a_job:latest
> docker tag springnz/post_a_job:latest
> registry.oso.infra.corp/springnz/post_a_job:latest
> docker push registry.oso.infra.corp/springnz/post_a_job:latest
>
>
> Received unexpected HTTP status: 500 Internal Server Error
>
>
> From the oc get logs side we’re seeing the following level=error messages.
>
>
> 54 "response completed with error" err.code
>
> 16 "error authorizing context: authorization header with basic token
> required" go.version
>
> 6 "Error creating ImageStreamMapping: imagestreams \"ansible-runtime\" not
> found" go.version
>
> 4 "Error creating ImageStreamMapping: imagestreams \"centos-base\" not
> found" go.version
>
> 3 "Error creating ImageStreamMapping: imagestreams \"centos\" not found"
> go.version
>
>
>
> Environment
>
> - origin-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
> - origin-clients-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
> - origin-master-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
> - origin-node-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
> - origin-sdn-ovs-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
>
>
>
>
> oc get pods docker-registry-??
>
> "metadata": {
>"name": "docker-registry-4-ye8hi",
>"generateName": "docker-registry-4-",
>"namespace": "default",
>"selfLink":
> "/api/v1/namespaces/default/pods/docker-registry-4-ye8hi",
>"uid": "55ad28f5-e65e-11e5-98e9-001a4a160178",
>"resourceVersion": "2393",
>"creationTimestamp": "2016-03-10T01:20:54Z",
>"labels": {
>"deployment": "docker-registry-4",
>"deploymentconfig": "docker-registry",
>"docker-registry": "default"
>},
>"annotations": {
>"kubernetes.io/created-by":
> "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"default\",\"name\":\"docker-registry-4\",\"uid\":\"52e32262-e65e-11e5-98e9-001a4a160178\",\"apiVersion\":\"v1\",\"resourceVersion\":\"1996\"}}\n",
>"openshift.io/deployment-config.latest-version": "4",
>"openshift.io/deployment-config.name": "docker-registry",
>"openshift.io/deployment.name": "docker-registry-4",
>"openshift.io/scc": "restricted"
>}
>},
>
>
> Thanks
>
> Tim
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> 
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Serious docker upgrade problem -> 1.8 -> 1.9 update breaks system

2016-03-09 Thread Andy Goldstein
What OS - Fedora/Centos/RHEL?

Jhon/Dan, PTAL as this might be related to forward-journald.

Andy

On Wed, Mar 9, 2016 at 7:44 AM, Skarbek, John <john.skar...@ca.com> wrote:

> Andy,
>
> David had already file an issue
> <https://github.com/openshift/openshift-ansible/issues/1573>
>
> Mar 09 12:40:07 js-router-001.ose.bld.f4tech.com systemd[1]: Starting Docker 
> Application Container Engine...
> Mar 09 12:40:07 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
> Forwarding stdin to journald using Priority Informational and tag docker
> Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
> time="2016-03-09T12:40:08.104822274Z" level=info msg="Firewalld running: 
> false"
> Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
> time="2016-03-09T12:40:08.136388972Z" level=info msg="Default bridge 
> (docker0) is assigned with an IP address 172.17.0.1/16. Daemon option --bip 
> can be used to set a preferred IP address"
> Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
> time="2016-03-09T12:40:08.183636904Z" level=info msg="Loading containers: 
> start."
> Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]:
> Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
> time="2016-03-09T12:40:08.183842066Z" level=info msg="Loading containers: 
> done."
> Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
> time="2016-03-09T12:40:08.184069850Z" level=info msg="Daemon has completed 
> initialization"
> Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
> time="2016-03-09T12:40:08.184116853Z" level=info msg="Docker daemon" 
> commit="185277d/1.9.1" execdriver=native-0.2 graphdriver=devicemapper 
> version=1.9.1
> Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
> time="2016-03-09T12:40:08.193532957Z" level=info msg="API listen on 
> /var/run/docker.sock"
> Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com systemd[1]: docker.service: 
> Got notification message from PID 18499, but reception only permitted for 
> main PID 18498
>
>
>
>
> --
> John Skarbek
>
> On March 9, 2016 at 07:41:03, Andy Goldstein (agold...@redhat.com) wrote:
>
> What is the output of 'sudo journalctl -u docker -e'?
>
> On Wed, Mar 9, 2016 at 3:38 AM, David Strejc <david.str...@gmail.com>
> wrote:
>
>> I don't know where I could find right person for this issue so I am
>> trying to post it here as many people are reading this.
>>
>> A clean installation of OpenShift v3 via Ansible is broken by a simple
>> yum update, as yum updates Docker from 1.8 to 1.9.1 and Docker no longer
>> starts.
>>
>> This is the message in logs:
>>
>> Mar 09 09:03:45 1.devcloud.cz systemd[1]: docker.service: Got
>> notification message from PID 7150, but reception only permitted for
>> main PID 7149
>>
>>
>> David Strejc
>> t: +420734270131
>> e: david.str...@gmail.com
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
> ___
> users mailing list
> users@lists.openshift.redhat.com
>
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: 503 service unavailable

2016-03-03 Thread Andy Goldstein
You're missing a "cat" before /var/run/secrets/
kubernetes.io/serviceaccount/token, i.e.

-H "Authorization: Bearer $(cat /var/run/secrets/
kubernetes.io/serviceaccount/token)"
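The failure mode is easy to reproduce locally — a sketch using a temporary file as a stand-in for the real token path (this is why the attempt below prints "Permission denied" and the API server sees system:anonymous):

```shell
# Stand-in for /var/run/secrets/kubernetes.io/serviceaccount/token
token_file=$(mktemp)
printf 'example-token-value' > "$token_file"

# Wrong: $(/path/to/token) tries to *execute* the token file, which
# fails ("Permission denied"), so the header carries no token and the
# request is treated as anonymous.
bad_header="Authorization: Bearer $("$token_file" 2>/dev/null || true)"

# Right: $(cat /path/to/token) substitutes the file's contents.
good_header="Authorization: Bearer $(cat "$token_file")"

echo "$bad_header"
echo "$good_header"
rm -f "$token_file"
```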

On Thu, Mar 3, 2016 at 1:26 PM, Dean Peterson 
wrote:

> I have followed some of Ram's steps last night after recreating the router
> a few times.
>
> 1. oadm router aberouter --replicas=1 \
> --credentials=/etc/origin/master/openshift-router.kubeconfig \
> --service-account=system:serviceaccount:default:router
>
> 2. docker ps | grep haproxy
>
> 3.  I grab the container id and type "cid=" replacing
> container id with what I get from step 2.
>
> 4.  sudo nsenter -m -u -n -i -p -t  $(sudo docker inspect --format "{{
> .State.Pid }}" "$cid")
>
> 5. curl -k -vvv https://openshift.abecorn.com:8443/api/v1/routes/ -H
> "Authorization: Bearer $(/var/run/secrets/
> kubernetes.io/serviceaccount/token)"
>
> Instead of seeing "system:router\" cannot list all routes in the cluster,
> I see "system:anonymous\" cannot list all routes in the cluster.
>
> *It looks like the haproxy container is being created with
> system:anonymous credentials?  How is that possible and how can I force it
> to use system:router when I already used
> --service-account=system:serviceaccount:default:router in step 1?*
>
>
> curl -k -vvv https://openshift.abecorn.com:8443/api/v1/routes/ -H
> "Authorization: Bearer $(/var/run/secrets/
> kubernetes.io/serviceaccount/token)"
> -bash: /var/run/secrets/kubernetes.io/serviceaccount/token: Permission
> denied
> * About to connect() to openshift.abecorn.com port 8443 (#0)
> *   Trying 23.25.149.227...
> * Connected to openshift.abecorn.com (23.25.149.227) port 8443 (#0)
> * Initializing NSS with certpath: sql:/etc/pki/nssdb
> * skipping SSL peer certificate verification
> * NSS: client certificate not found (nickname not specified)
> * SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
> * Server certificate:
> * subject: CN=172.30.0.1
> * start date: Feb 29 03:50:56 2016 GMT
> * expire date: Feb 28 03:50:57 2018 GMT
> * common name: 172.30.0.1
> * issuer: CN=openshift-signer@1456717855
> > GET /api/v1/routes/ HTTP/1.1
> > User-Agent: curl/7.29.0
> > Host: openshift.abecorn.com:8443
> > Accept: */*
> > Authorization: Bearer
> >
> < HTTP/1.1 403 Forbidden
> < Cache-Control: no-store
> < Content-Type: application/json
> < Date: Thu, 03 Mar 2016 18:20:09 GMT
> < Content-Length: 247
> <
> {
>   "kind": "Status",
>   "apiVersion": "v1",
>   "metadata": {},
>   "status": "Failure",
>   "message": "User \"system:anonymous\" cannot list all routes in the
> cluster",
>   "reason": "Forbidden",
>   "details": {
> "kind": "routes"
>   },
>   "code": 403
> }
>
>
> On Thu, Mar 3, 2016 at 10:39 AM, Dean Peterson 
> wrote:
>
>> Ram actually went through quite a lot with me last night.  Here is a gist
>> of the irc chat:
>>
>> https://gist.github.com/deanpeterson/568f07b032933e9d219b
>>
>> On Thu, Mar 3, 2016 at 10:36 AM, Dean Peterson 
>> wrote:
>>
>>> The logs only say:  "Router is including routes in all namespaces"
>>>
>>>
>>> On Thu, Mar 3, 2016 at 10:22 AM, Jordan Liggitt 
>>> wrote:
>>>
 What is in your router logs?

 On Thu, Mar 3, 2016 at 11:21 AM, Dean Peterson  wrote:

> *The service account does exist:*
>
>  oc describe serviceaccount router
> Name:   router
> Namespace:  default
> Labels: <none>
>
> Image pull secrets: router-dockercfg-2d4wd
>
> Mountable secrets:  router-token-9p8at
> router-dockercfg-2d4wd
>
> Tokens: router-token-1le9y
> router-token-9p8at
>
> *It is in scc privileged:*
>
> users:
> - system:serviceaccount:openshift-infra:build-controller
> - system:serviceaccount:management-infra:management-admin
> - system:serviceaccount:default:router
> - system:serviceaccount:default:registry
>
> *And it has the policy to view endpoints in all namespaces:*
>
> oadm policy who-can get endpoints --all-namespaces
> Namespace:  <all>
> Verb:  get
> Resource:  endpoints
>
> Users:  system:serviceaccount:default:router
> system:serviceaccount:management-infra:management-admin
>
> Groups: system:cluster-admins
> system:cluster-readers
> system:masters
> system:nodes
>
> Still getting 503 error on all services
>
>
>
>
> On Thu, Mar 3, 2016 at 10:14 AM, Dean Peterson <
> peterson.d...@gmail.com> wrote:
>
>> Now when it displays:
>>
>>  oadm policy who-can get endpoints --all-namespaces
>> Namespace:  <all>
>> Verb:  get
>> Resource:  endpoints
>>
>> Users:  system:serviceaccount:default:router
>> 

Re: use service from other project (namespace)

2016-03-03 Thread Andy Goldstein
I'm curious - what is your concern about 40 pods in the same project?

On Thu, Mar 3, 2016 at 8:24 AM, Den Cowboy  wrote:

> Is it possible to connect with a service which is in another project
> (namespace)?
>
> We have a project with 2 pods and 2 services: one pod (container)
> populates the database in the other pod (container).
> Now we want to start other pods which can use data from that database.
> It's possible when we're working in the same project. Just by defining the
> service-name (name of service above the DB). But it's bad for our
> structure. We have our DB in project A but we want some pods which are
> using the DB in project B and some others in project C because otherwise we
> will have 40 pods in the same project. Is it possible?
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
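For reference, a service can generally be reached from another project through its cluster DNS name, `<service>.<namespace>.svc.cluster.local`, rather than by splitting pods across projects. A sketch with hypothetical names (a "postgres" service in project "project-a"):

```shell
# Build the cluster-internal DNS name of a service in another project.
svc=postgres
ns=project-a
fqdn="${svc}.${ns}.svc.cluster.local"
echo "$fqdn"   # → postgres.project-a.svc.cluster.local
# A pod in project B would then connect with, e.g.:
#   psql -h "$fqdn" -U app mydb
# With the multitenant SDN plugin the projects must also be joined first,
# e.g. oadm pod-network join-projects --to=project-a project-b
```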


Re: horizontal autoscaler does not get cpu utilization

2016-02-25 Thread Andy Goldstein
That error message is your issue. Are you able to curl 10.1.0.6:8082 from
any of your nodes?

On Thu, Feb 25, 2016 at 8:35 AM, Den Cowboy  wrote:

> I keep getting a timout:
> Error: 'dial tcp 10.1.0.6:8082: i/o timeout'
>
> or without the -k
> curl performs SSL certificate verification by default, using a "bundle"
>  of Certificate Authority (CA) public keys (CA certs). If the default
>  bundle file isn't adequate, you can specify an alternate file ...
>
> command:
> $ curl -H "Authorization: Bearer token" \
> > -X GET
> https://ip-172-31-xx-xx.xx-xx-1.compute.internal:8443/api/v1/proxy/namespaces/openshift-infra/services/https:heapster:/validate
>
>
>
> + What was your thought about the errors in the second mail:
> *invalid character 'E' looking for beginning of value*
>
> --
> Date: Thu, 25 Feb 2016 08:12:19 -0500
> Subject: Re: horizontal autoscaler does not get cpu utilization
> From: agold...@redhat.com
> To: dencow...@hotmail.com
> CC: mwri...@redhat.com; users@lists.openshift.redhat.com
>
>
> 2 things:
>
> 1) you need to provide authentication when you make the curl request
>
> 2) you need a : in between https and heapster (so httpsheapster: becomes
> https:heapster:)
>
> On Thu, Feb 25, 2016 at 3:24 AM, Den Cowboy  wrote:
>
> I got this from the curl (as system:admin)
> curl -k
> https://ip-xx-xx-xx-xx.xx-xx-1.compute.internal:8443/api/v1/proxy/namespaces/openshift-infra/services/httpsheapster:/validate
> {
>   "kind": "Status",
>   "apiVersion": "v1",
>   "metadata": {},
>   "status": "Failure",
>   "message": "User \"system:anonymous\" cannot \"proxy\" \"services\" with
> name \"httpsheapster:\" in project \"openshift-infra\"",
>   "reason": "Forbidden",
>   "details": {
> "name": "httpsheapster:",
> "kind": "services"
>   },
>   "code": 403
>
>
> --
> From: dencow...@hotmail.com
> To: mwri...@redhat.com
> Subject: RE: horizontal autoscaler does not get cpu utilization
> Date: Thu, 25 Feb 2016 07:59:11 +
> CC: users@lists.openshift.redhat.com
>
>
> Some logs are showing:
>
> Failed to reconcile test-scaler: failed to compute desired number of
> replicas based on CPU utilization for DeploymentConfig/test/test: failed to
> get cpu utilization: failed to get CPU consumption and request: failed to
> unmarshall heapster response: *invalid character 'E' looking for
> beginning of value*
> Feb 25 07:48:57 ip-172-31-xx-xx origin-master: E0225 07:48:57.079028
> 2242 event.go:192] Server rejected event
> '{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""},
> ObjectMeta:api.ObjectMeta{Name:"test-scaler.14361ecd543d4608",
> GenerateName:"", Namespace:"test", SelfLink:"", UID:"", ResourceVersion:"",
> Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0,
> nsec:0, loc:(*time.Location)(nil)}},
> DeletionTimestamp:(*unversioned.Time)(nil),
> DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil),
> Annotations:map[string]string(nil)},
> InvolvedObject:api.ObjectReference{Kind:"HorizontalPodAutoscaler",
> Namespace:"test", Name:"test-scaler",
> UID:"f7bac384-db00-11e5-ac6e-06b94d3c6589", APIVersion:"extensions",
> ResourceVersion:"13501", FieldPath:""}, Reason:"FailedGetMetrics",
> Message:"failed to get CPU consumption and request: failed to unmarshall
> heapster response: invalid character 'E' looking for beginning of value",
>
> I used this configuration:
>
> $ oc secrets new metrics-deployer nothing=/dev/null
>
>
>
>
> > Date: Wed, 24 Feb 2016 13:29:21 -0500
> > From: mwri...@redhat.com
> > To: dencow...@hotmail.com
> > CC: users@lists.openshift.redhat.com
> > Subject: Re: horizontal autoscaler does not get cpu utilization
> >
> >
> >
> > - Original Message -
> > > From: "Den Cowboy" 
> > > To: users@lists.openshift.redhat.com
> > > Sent: Wednesday, February 24, 2016 9:35:34 AM
> > > Subject: RE: horizontal autoscaler does not get cpu utilization
> > >
> > > I don't know if this is maybe the issue?
> > > In my browser
> https://hawkular-metrics.xx.xx.com/hawkular/metrics/status
> > >
> {"MetricsService":"STARTED","Implementation-Version":"0.13.0-SNAPSHOT","Built-From-Git-SHA1":"7dee24acfcfb3beac356e2c4d83b7b1704ebf82x"}
> > > curl on my master or nodes:
> > > curl -X GET https://hawkular-metrics.xx.xx.com/hawkular/metrics/status
> > > curl: (6) Could not resolve host: hawkular-metrics.xx.xx.com; Name or
> service
> > > not known
> > >
> > > I'm just describing the IP of the node where my router is in my local
> > > /etc/hosts
> > > like this: xx.xx.xx.xx hawkular-metrics.xx.xx.com
> >
> > The router configuration is not used for the HPA and so not being able
> to resolve the hostname from within the node or container should not be an
> issue.
> >
> > What the HPA does use is the API proxy.
> >
> > You can check if Heapster is accessible via the API proxy through the
> following command:
> >
> > curl -H "Authorization: Bearer 
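The truncated proxy check above can be sketched in full. The master URL below is hypothetical, and the token must belong to a user allowed to proxy services; note the service reference is `scheme:name:port`, so the trailing colon (empty port = default) is required:

```shell
# Assemble the API-proxy URL used to validate heapster.
MASTER="https://master.example.com:8443"   # hypothetical master URL
SVC="https:heapster:"                      # scheme:name:port - colons required
URL="${MASTER}/api/v1/proxy/namespaces/openshift-infra/services/${SVC}/validate"
echo "$URL"
# Then: curl -k -H "Authorization: Bearer $(oc whoami -t)" "$URL"
```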

Re: Create app with image from own docker registry on OpenShift 3.1

2016-02-23 Thread Andy Goldstein
You need to call it like this: oc new-app --insecure-registry <image>

On Tue, Feb 23, 2016 at 6:20 AM, Den Cowboy  wrote:

> I've added it + restarted docker:
> INSECURE_REGISTRY='--insecure-registry
> ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com'
>
> I'm able to perform a docker login and pull the image manually but
>
> oc new-app ec2-xxx:5000/test/image:1 or /test/image
>
> error: can't look up Docker image "ec2-xxx:5000/dbm/ponds-ui-nodejs:83":
> Internal error occurred: Get https://ec2-xxx:5000/v2/: x509: certificate
> signed by unknown authority
> error: no match for "ec2-xxx:5000/test/image:1"
>
> --
> From: bpar...@redhat.com
> Date: Thu, 18 Feb 2016 09:48:32 -0500
> Subject: Re: Create app with image from own docker registry on OpenShift
> 3.1
> To: dencow...@hotmail.com; users@lists.openshift.redhat.com
>
>
> INSECURE_REGISTRY is needed because your registry is using a self-signed
> cert, whether it is secured or not.
>
>
> On Thu, Feb 18, 2016 at 4:59 AM, Den Cowboy  wrote:
>
> No didn't do that. I'm using a secure registry for OpenShift. So the tag
> was not on insecure.
>
> --
> From: bpar...@redhat.com
> Date: Wed, 17 Feb 2016 10:53:48 -0500
> Subject: Re: Create app with image from own docker registry on OpenShift
> 3.1
> To: dencow...@hotmail.com
> CC: users@lists.openshift.redhat.com
>
>
> is ec2-xxx listed as an insecure registry in your docker daemon's
> configuration?
>
> /etc/sysconfig/docker
> INSECURE_REGISTRY='--insecure-registry ec2-'
>
> I believe that is needed for docker to communicate with registries that
> use self-signed certs.
>
> (you'll need to restart the docker daemon after adding that setting)
>
>
>
> On Wed, Feb 17, 2016 at 8:15 AM, Den Cowboy  wrote:
>
> I have my own docker registry secured with a selfsigned certificate. On
> other servers, I'm able to login on the registry and pull/push images from
> it. So that seems to work fine.
> But when I want to create an app from the image using OpenShift it does
> not seem te work:
>
> oc new-app ec2-xxx:5000/test/image1
> error: can't look up Docker image "ec2-xx/test/image1": Internal error 
> occurred: Get https://ec2-xxx:5000/v2/: x509: certificate signed by unknown 
> authority
> error: no match for "ec2-xxx:5000/test/image1"
>
> What could be the issue? I'm able to login in the registry and pull the
> image manually.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
>
>
> --
> Ben Parees | OpenShift
>
>
>
>
> --
> Ben Parees | OpenShift
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
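As an alternative to `--insecure-registry`, the x509 error can be avoided by trusting the registry's CA: Docker reads per-registry CA files from `/etc/docker/certs.d/<host:port>/ca.crt`, no daemon flag needed. A sketch (registry hostname hypothetical; the demo writes under /tmp so it can run unprivileged):

```shell
REG="ec2-xx.example.com:5000"            # hypothetical registry host:port
DEST="${TMPDIR:-/tmp}/certs.d/${REG}"    # demo path; real path is /etc/docker/certs.d
mkdir -p "$DEST"
printf 'placeholder PEM\n' > "$DEST/ca.crt"   # in practice, copy the registry's ca.crt
ls "$DEST"
# After placing the real CA under /etc/docker/certs.d/<host:port>/ca.crt and
# restarting docker, pushes verify against that CA instead of failing with
# "certificate signed by unknown authority".
```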


Re: All ports seem to be blocked after ansible install

2016-02-23 Thread Andy Goldstein
Just 1 place is sufficient - thanks!

On Mon, Feb 22, 2016 at 11:13 PM, Dean Peterson 
wrote:

> Oh, I opened the bug on bugzilla but I can open it on github too:
> https://bugzilla.redhat.com/show_bug.cgi?id=1310968
>
>
> On Mon, Feb 22, 2016 at 9:37 PM, Clayton Coleman 
> wrote:
>
>> Can you open an issue on GitHub with the results of `ip addr list` and
>> `ip route`?  It sounds like the SDN configuration may be disabling
>> your network, or some other unexpected interaction with the host is
>> blocking traffic.
>>
>> On Mon, Feb 22, 2016 at 1:23 PM, Dean Peterson 
>> wrote:
>> > Any ideas how the ansible installer may have made my machine
>> inaccessible to
>> > the outside world even with iptables turned off?
>> >
>> > On Feb 21, 2016 10:57 PM, "Dean Peterson" 
>> wrote:
>> >>
>> >> I performed an ansible install of openshift origin.  If I am on the
>> local
>> >> machine, I can bring up openshift in the browser.  However, on any
>> external
>> >> machine, I am no longer able to access anything on the openshift
>> master.  I
>> >> am unable to ssh, or visit the openshift web console in a browser.  I
>> >> checked iptables and all of the necessary ports are open.  I even
>> stopped
>> >> it.  Firewalld is also not running.  I was able to ssh prior to the
>> ansible
>> >> install but something during that install blocked all external
>> traffic.  How
>> >> do I open things back up?
>> >
>> >
>> > ___
>> > users mailing list
>> > users@lists.openshift.redhat.com
>> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> >
>>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
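For anyone filing that issue, the two requested outputs can be captured in one file to attach:

```shell
# Gather the diagnostics Clayton asked for (ip addr list, ip route) in one file.
out=net-diag.txt
{ echo '== ip addr list =='; ip addr list 2>/dev/null || true
  echo '== ip route ==';    ip route    2>/dev/null || true; } > "$out"
wc -l "$out"
```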


Re: Securing registry failed: error bad certificate

2016-02-09 Thread Andy Goldstein
It's saying the cert doesn't have the IP address of the registry listed as
a subjectAltName. What command did you run to generate your cert?
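For reference, here is a sketch that generates a throwaway self-signed cert whose SAN list does include the registry IP from the thread, and then prints the SAN extension (requires OpenSSL 1.1.1+ for `-addext`). On OpenShift the usual route is to regenerate the registry cert with `oadm ca create-server-cert --hostnames=...` listing the registry service IP:

```shell
# A cert generated without any subjectAltName IP entries is exactly what
# produces "doesn't contain any IP SANs"; this one includes the IP.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/reg.key -out /tmp/reg.crt \
  -subj "/CN=172.30.221.132" \
  -addext "subjectAltName=IP:172.30.221.132" 2>/dev/null
openssl x509 -in /tmp/reg.crt -noout -text | grep "IP Address"
```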

On Tuesday, February 9, 2016, Den Cowboy  wrote:

> I try to secure my registry but it fails:
> This are the logs after a push:
> I've checked the certificate: the ca.crt has the same content as the
> second part of my generated secret. So I don't know why this certificate is
> bad?
>
> I0209 11:54:53.887517 1 sti.go:315] Successfully built
> 172.30.221.132:5000/test2/test2:latest
> I0209 11:54:53.917560 1 cleanup.go:23] Removing temporary directory
> /tmp/s2i-build586685329
> I0209 11:54:53.917581 1 fs.go:117] Removing directory
> '/tmp/s2i-build586685329'
> I0209 11:54:53.919251 1 sti.go:214] Using provided push secret for pushing
> 172.30.221.132:5000/test2/test2:latest image
> I0209 11:54:53.919274 1 sti.go:218] Pushing
> 172.30.221.132:5000/test2/test2:latest image ...
> E0209 11:54:53.929640 1 dockerutil.go:78] push for image
> 172.30.221.132:5000/test2/test2:latest failed, will retry in 5s seconds
> ...
> E0209 11:54:58.939648 1 dockerutil.go:78] push for image
> 172.30.221.132:5000/test2/test2:latest failed, will retry in 5s seconds
> ...
> E0209 11:55:03.960704 1 dockerutil.go:78] push for image
> 172.30.221.132:5000/test2/test2:latest failed, will retry in 5s seconds
> ...
> E0209 11:55:08.967635 1 dockerutil.go:78] push for image
> 172.30.221.132:5000/test2/test2:latest failed, will retry in 5s seconds
> ...
> E0209 11:55:13.976535 1 dockerutil.go:78] push for image
> 172.30.221.132:5000/test2/test2:latest failed, will retry in 5s seconds
> ...
> E0209 11:55:18.986800 1 dockerutil.go:78] push for image
> 172.30.221.132:5000/test2/test2:latest failed, will retry in 5s seconds
> ...
> E0209 11:55:23.999629 1 dockerutil.go:78] push for image
> 172.30.221.132:5000/test2/test2:latest failed, will retry in 5s seconds
> ...
> I0209 11:55:28.01 1 sti.go:223] Registry server Address:
> I0209 11:55:28.50 1 sti.go:224] Registry server User Name:
> serviceaccount
> I0209 11:55:28.70 1 sti.go:225] Registry server Email:
> serviceacco...@example.org
> 
> I0209 11:55:28.89 1 sti.go:230] Registry server Password:
> <>
> F0209 11:55:29.54 1 builder.go:185] Error: build error: Failed to push
> image. Response from registry is: unable to ping registry endpoint
> https://172.30.221.132:5000/v0/
> v2 ping attempt failed with error: Get https://172.30.221.132:5000/v2/:
> x509: cannot validate certificate for 172.30.221.132 because it doesn't
> contain any IP SANs
> v1 ping attempt failed with error: Get
> https://172.30.221.132:5000/v1/_ping: x509: cannot validate certificate
> for 172.30.221.132 because it doesn't contain any IP SANs
>
> This are the logs of the registry itself:
> time="2016-02-09T11:50:54.384124563Z" level=info msg="redis not
> configured" go.version=go1.4.2 
> instance.id=0af8425a-7aef-44e4-9939-1105ac8d92fa
>
> time="2016-02-09T11:50:54.38411731Z" level=info msg="Starting upload purge
> in 6m0s" go.version=go1.4.2 instance.id=0af8425a-7aef-44e4-9939-1105ac8d92fa
>
> time="2016-02-09T11:50:54.384179893Z" level=info msg="using inmemory blob
> descriptor cache" go.version=go1.4.2 
> instance.id=0af8425a-7aef-44e4-9939-1105ac8d92fa
>
> time="2016-02-09T11:50:54.384208064Z" level=info msg="Using Origin Auth
> handler"
> time="2016-02-09T11:50:54.38423117Z" level=debug msg="configured
> \"openshift\" access controller" go.version=go1.4.2 
> instance.id=0af8425a-7aef-44e4-9939-1105ac8d92fa
>
> time="2016-02-09T11:50:54.384447261Z" level=info msg="listening on :5000,
> tls" go.version=go1.4.2 instance.id=0af8425a-7aef-44e4-9939-1105ac8d92fa
> 10.1.0.1 - - [09/Feb/2016:11:51:02 +] "GET /healthz HTTP/1.1" 200 0 ""
> "Go 1.1 package http"
> 10.1.0.1 - - [09/Feb/2016:11:51:12 +] "GET /healthz HTTP/1.1" 200 0 ""
> "Go 1.1 package http"
> 10.1.0.1 - - [09/Feb/2016:11:51:22 +] "GET /healthz HTTP/1.1" 200 0 ""
> "Go 1.1 package http"
> 10.1.0.1 - - [09/Feb/2016:11:51:32 +] "GET /healthz HTTP/1.1" 200 0 ""
> "Go 1.1 package http"
> 10.1.0.1 - - [09/Feb/2016:11:51:42 +] "GET /healthz HTTP/1.1" 200 0 ""
> "Go 1.1 package http"
> 10.1.0.1 - - [09/Feb/2016:11:51:52 +] "GET /healthz HTTP/1.1" 200 0 ""
> "Go 1.1 package http"
> 10.1.0.1 - - [09/Feb/2016:11:52:02 +] "GET /healthz HTTP/1.1" 200 0 ""
> "Go 1.1 package http"
> 10.1.0.1 - - [09/Feb/2016:11:52:12 +] "GET /healthz HTTP/1.1" 200 0 ""
> "Go 1.1 package http"
> 10.1.0.1 - - [09/Feb/2016:11:52:22 +] "GET /healthz HTTP/1.1" 200 0 ""
> "Go 1.1 package http"
> 10.1.0.1 - - [09/Feb/2016:11:52:32 +] "GET /healthz HTTP/1.1" 200 0 ""
> "Go 1.1 package http"
> 10.1.0.1 - - [09/Feb/2016:11:52:42 +] "GET /healthz HTTP/1.1" 200 0 ""
> "Go 1.1 package http"
> 10.1.0.1 - - [09/Feb/2016:11:52:52 +] "GET /healthz HTTP/1.1" 200 0 ""
> "Go 1.1 package http"
> 10.1.0.1 - - 

Re: Nobody test OpenShift Origin with Vagrant + Ansible ? There are many issues with this workflow

2016-02-09 Thread Andy Goldstein
I do vagrant + ansible + parallels on my Mac for OSE. What sort of issues
are you seeing?

On Tuesday, February 9, 2016, Clayton Coleman  wrote:

> Yeah, I think most people are testing vagrant locally for dev, but for
> ansible are deploying to EC2 or GCE
>
> On Tue, Feb 9, 2016 at 4:19 PM, Jason DeTiberus wrote:
> >
> >
> > On Tue, Feb 9, 2016 at 12:12 PM, Stéphane Klein wrote:
> >>
> >> Hi,
> >>
> >> nobody test OpenShift Origin with Vagrant ?
> >>
> >> I test OpenShift Origin + Ansible + Vagrant since three week on OSX,
> there
> >> are many many issues.
> >>
> >> Am I alone to use this workflow ?
> >
> >
> > Stéphane,
> >
> > The core developers of openshift-ansible do not generally test using the
> > Vagrantfile.  I personally wouldn't have a chance to take a look at it
> for
> > another couple of weeks, and even then I do not have the ability to test
> on
> > OS X or using VirtualBox.
> >
> >
> > --
> > Jason DeTiberus
> >
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com 
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >
>
> ___
> users mailing list
> users@lists.openshift.redhat.com 
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Issues with the built-in registry

2016-01-29 Thread Andy Goldstein
ls -laZ /opt/ose-registry

Most likely you need to do: sudo chcon -t svirt_sandbox_file_t
/opt/ose-registry
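A note on persistence (these commands require root on an SELinux-enabled host, so treat this as a configuration sketch): `chcon` applies immediately but is lost on a filesystem relabel, while `semanage fcontext` records the rule so `restorecon` re-applies it:

```shell
# Path from the thread; chcon fixes it now, semanage+restorecon make it stick.
sudo chcon -t svirt_sandbox_file_t /opt/ose-registry
sudo semanage fcontext -a -t svirt_sandbox_file_t '/opt/ose-registry(/.*)?'
sudo restorecon -Rv /opt/ose-registry
ls -ldZ /opt/ose-registry   # verify the svirt_sandbox_file_t type is shown
```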

Andy

On Fri, Jan 29, 2016 at 9:01 AM, Jason DeTiberus 
wrote:

>
> On Jan 29, 2016 8:43 AM, "Florian Daniel Otel" 
> wrote:
> >
> >
> > No worries ;) -- partly since it's my turn to apologise, as I missed
> adding the "admin" role to the "openshift" project.
> >
> > Done that now, and now I get a HTTP 500:
> >
> > [root@osev31-node1 src]#  docker push
> 172.30.38.99:5000/openshift/busybox
> > The push refers to a repository [172.30.38.99:5000/openshift/busybox]
> (len: 1)
> > 964092b7f3e5: Preparing
> > Received unexpected HTTP status: 500 Internal Server Error
> > [root@osev31-node1 src]#
> >
> > Attached are the "oc logs" for the docker registry pods.
> >
> > The weird thing there (at least to me) is:
> >
> > level=error msg="response completed with error" err.code=UNKNOWN
> err.detail="filesystem: mkdir /registry/docker: permission denied"
> >
> > Can this have smth to do with the way I deployed the registry (with the
> "-mount-host=/opt/ose-registry" )  -- see below ? That directory exists,
> but is empty
>
> It sounds like a permissions issue on /opt/ose-registry. Unfortunately I
> do not know what the permissions and/or the SELinux context should be.
>
> >
> > Thanks,
> >
> > Florian
> >
> > On Fri, Jan 29, 2016 at 2:30 PM, Jason DeTiberus 
> wrote:
> >>
> >>
> >> On Jan 29, 2016 8:05 AM, "Florian Daniel Otel" 
> wrote:
> >> >
> >> > I should have mentioned that in my original email, but that's exactly
> the steps I followed.
> >>
> >> My apologies, missed the auth parts mentioned the first read through.
> >>
> >> Just to verify, did you grant reguser admin rights on the openshift
> project?
> >> oadm policy add-role-to-user admin <user> -n openshift
> >>
> >> As for not seeing any subdirectories under /registry, I believe that is
> to be expected until a Docker push has been done (either by a builder pod
> or by a manual push).
> >>
> >> >
> >> > IOW:  In addition to the stuff below (and prior to all that) I have
> done, as "system:admin" , for user "reguser"
> >> >
> >> > oadm policy add-role-to-user system:registry reguser
> >> > oadm policy add-role-to-user  system:image-builder reguser
> >> >
> >> > Again, following the instructions in the docs all works fine, until I
> try a "docker push"
> >> >
> >> > The only thing that doesn't seem quite right is that listing the
> content of the Docker registry only lists the top directory "/registry",
> but nothing underneath it:
> >> >
> >> > [root@osev31-node1 src]# docker ps
> >> > CONTAINER IDIMAGE
>COMMAND  CREATED STATUS
>  PORTS   NAMES
> >> > ea83db288da1
> registry.access.redhat.com/openshift3/ose-docker-registry:v3.1.1.6
> "/bin/sh -c 'DOCKER_R"   2 hours ago Up 2 hours
>
>  
> k8s_registry.f0018725_docker-registry-1-1sfvt_default_691370c8-c673-11e5-bc1c-4201ac10fe14_dd13c8d0
> >> > f383ae8db39fopenshift3/ose-pod:latest
>"/pod"   2 hours ago Up 2 hours
>
>  
> k8s_POD.f419fdd1_docker-registry-1-1sfvt_default_691370c8-c673-11e5-bc1c-4201ac10fe14_d21e1b8c
> >> >
> >> >
> >> >
> >> >
> >> > (!) Nothing listed under "/registry"??
> >> >
> >> >
> >> > [root@osev31-node1 src]# docker exec -it ea83db288da1 find /registry
> >> > /registry
> >> > [root@osev31-node1 src]#
> >> >
> >> >
> >> >
> >> > On Fri, Jan 29, 2016 at 1:03 PM, Jason DeTiberus 
> wrote:
> >> >>
> >> >>
> >> >> On Jan 29, 2016 6:07 AM, "Florian Daniel Otel" <
> florian.o...@gmail.com> wrote:
> >> >> >
> >> >> > Hello all,
> >> >> >
> >> >> > I'm pretty sure it's mostly related to my ignorance, but for some
> reason I'm not able to push to the built-in docker registry after deploying
> it.
> >> >> >
> >> >> >
> >> >> > Deplyoment:
> >> >> >
> >> >> > oadm registry --service-account=registry
> --config=/etc/origin/master/admin.kubeconfig
> --credentials=/etc/origin/master/openshift-registry.kubeconfig
> --images='
> registry.access.redhat.com/openshift3/ose-${component}:${version}
>