*Generally*, a proxy shouldn't sit between the router or master servers and
the pods themselves.  So in production I wouldn't expect you to hit that
issue, because you should be able to identify your masters by hostname or
long-term IP (and thus be able to put them in NO_PROXY).  Running in a
container does have some limitations like that, for sure.
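
For example (a sketch - the proxy URL and master hostname below are
placeholders, not values from your setup), the variables on the docker run
line that starts the origin container would look something like:

    docker run -d --name origin --privileged --net=host \
      -e HTTP_PROXY=http://proxy.example.com:3128 \
      -e HTTPS_PROXY=http://proxy.example.com:3128 \
      -e NO_PROXY=master.example.com,localhost,127.0.0.1 \
      openshift/origin start

(the volume mounts from the getting started guide are omitted for brevity).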

The uber-doc for this is here:
https://docs.openshift.org/latest/install_config/http_proxies.html - we can
add a clarification there if you think it's not clear enough.

On Thu, Sep 15, 2016 at 11:43 AM, Martin Goldstone <
m.j.goldst...@keele.ac.uk> wrote:

> Thanks for that info.
>
> Your suggestion for #2 worked perfectly, I'll open a bug as you suggest.
>
> As for #3, it turns out it's all down to being behind a web proxy. I was
> only setting the upper-case versions of the HTTP_PROXY, HTTPS_PROXY and
> NO_PROXY env vars, which curl was not picking up. The code that performs
> the health check, however, was picking them up, and as the IP address of
> that pod wasn't included in the NO_PROXY env var, it was consulting the
> proxy. As this was the first pod I started, it was reliably getting an IP
> address of 172.17.0.3. Putting this in the NO_PROXY list allows the
> container to start successfully, which is great. Unfortunately it's not
> possible to wildcard IP addresses in NO_PROXY, so I'm not quite sure how
> I'll manage this in a production environment: presumably if that pod goes
> away for whatever reason, a new pod will be started and will have a new IP
> address. Any suggestions? I'd prefer to keep it behind a proxy server, but
> I could just give it a NATed Internet connection instead.
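>
> A quick way to see the mismatch (a sketch; note that curl only honors the
> lower-case http_proxy for plain-HTTP URLs, while it accepts HTTPS_PROXY
> and NO_PROXY in either case):
>
>     # upper-case var: curl ignores it for http:// URLs, so this goes direct
>     HTTP_PROXY=http://proxy.example.com:3128 curl http://172.17.0.3:5000/healthz
>
>     # lower-case var: curl consults the proxy, as the health checker did
>     http_proxy=http://proxy.example.com:3128 curl http://172.17.0.3:5000/healthz
>
> (proxy.example.com is a placeholder; 172.17.0.3 is the registry pod's IP.)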
>
> Thanks.
>
> On 15 September 2016 at 16:06, Clayton Coleman <ccole...@redhat.com>
> wrote:
>
>> The first problem is probably because the image import request hadn't
>> completely finished.  We should report a better error / wait for the
>> import to happen.  Please open a bug describing what you did and we'll
>> try to give you a better experience.
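>>
>> In the meantime, you can check whether the import completed and re-trigger
>> it if needed - a sketch, assuming the image stream is named
>> deployment-example as in the guide:
>>
>>     oc describe imagestream deployment-example   # look at the tag/import status
>>     oc import-image deployment-example           # re-run the import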
>>
>> For #2 - clean up what you did before, and run just "oadm registry".
>> Everything should work correctly.  I think the docs are wrong - if my
>> simpler suggestion works, please open a bug on
>> github.com/openshift/openshift-docs and we'll fix that.
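>>
>> If the bare command still complains about credentials, naming the service
>> account explicitly should also work (a sketch, assuming a "registry"
>> service account exists):
>>
>>     oadm registry --service-account=registry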
>>
>> For #3 - I'll try to take a look and see whether I can recreate it.
>> We might have a bug in the example, since the tags work correctly when
>> you input them manually.
>>
>> On Thu, Sep 15, 2016 at 10:32 AM, Martin Goldstone <
>> m.j.goldstone+opensh...@keele.ac.uk> wrote:
>>
>>> Hi,
>>>
>>> I've just started looking at OpenShift Origin. I'm attempting to get it
>>> going on a CentOS Atomic Host box, and I've been following the Getting
>>> Started guide at
>>> https://docs.openshift.org/latest/getting_started/administrators.html#getting-started-administrators,
>>> using the method to run Origin as a docker container. I've launched
>>> Origin, and I can create the indicated test project and the deployment
>>> example app. I've also been able to log in to the web console.
>>>
>>> The first of my problems is when the guide suggests running "oc tag
>>> deployment-example:v2 deployment-example:latest". This command fails with
>>>  "error: "deployment-example:v2" is not currently pointing to an image,
>>> cannot use it as the source of a tag".
>>>
>>> Leaving this aside, I moved on to trying to deploy the integrated
>>> registry. Running the command listed, I receive a warning that I should
>>> be using the --service-account option, as --credentials is deprecated.
>>> "oc get all" shows that the deploy pod is running for the registry, but
>>> the registry pod never starts. A combination of oc get, oc logs and oc
>>> describe led me to the conclusion that a service account named registry
>>> needed to exist, and it didn't. I created this service account, and the
>>> pod now launches. Unfortunately, it's still not working properly: even
>>> though the pod launches, it only lives for about 30 seconds before being
>>> restarted. Seemingly the liveness and readiness checks are failing, as
>>> they get a 503 error when looking at /healthz on port 5000. This is a
>>> bit confusing, because if I launch a shell in the registry container and
>>> use curl, I get a 200 OK response, and similarly if I use curl from the
>>> openshift container to the IP address of the registry container, I get a
>>> 200 OK response. Does anyone have any idea why it's doing this? Have I
>>> gone wrong somewhere?
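>>>
>>> For reference, roughly what I did and checked (a sketch; 172.17.0.3 is
>>> the registry pod's IP in my environment):
>>>
>>>     oc create serviceaccount registry       # the missing account
>>>     curl -v http://172.17.0.3:5000/healthz  # 200 OK from either container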
>>>
>>> I should mention that this server has no direct connection to the
>>> Internet; all offsite http and https traffic must go via a web proxy.
>>> I've set the HTTP_PROXY, HTTPS_PROXY and NO_PROXY env vars on the docker
>>> command line used to launch the openshift container, and I've set them in
>>> /etc/systemd/system/docker.service.d/http-proxy.conf on the host. Even
>>> though I'm able to pull images successfully, could this be contributing
>>> to my problem? If so, any ideas on how I might get this working from
>>> behind a web proxy?
>>>
>>> Also, I managed to work around my first issue by editing the ImageStream
>>> in the web console and manually inputting both v1 and v2 tags pointing to
>>> the v1 and v2 tags of the docker image respectively. This has allowed me
>>> to switch between the versions by updating the latest tag as the command
>>> suggests. Was this the right thing to do, or have I made a mistake
>>> somewhere?
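>>>
>>> For what it's worth, I think the CLI equivalent of that web-console edit
>>> would be something like this (a sketch, assuming the example image is
>>> openshift/deployment-example on the Docker Hub):
>>>
>>>     oc tag --source=docker openshift/deployment-example:v1 deployment-example:v1
>>>     oc tag --source=docker openshift/deployment-example:v2 deployment-example:v2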
>>>
>>> Thanks very much
>>>
>>> Martin
>>>
>>
>
>
> --
> Martin Goldstone
> IT Systems Administrator
> IT Services, Innovation Centre 1 (IC1)
> Keele University, Keele, Staffordshire, United Kingdom, ST5 5NB
> Telephone: +44 1782 734457
> G+: http://google.com/+MartinGoldstoneKeele
>
_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
