Ben Parees wrote on 9.08.20 г. 2:38 ч.:
> On Sat, Aug 8, 2020 at 7:23 AM Aleksandar Kostadinov
> <akost...@redhat.com <mailto:akost...@redhat.com>> wrote:
>
> Thank you for the explanation. While running on 3.11, there is some
> subscription available, but it is not enough, so I need to use cus
recorded
into the layers and no modifications to Dockerfile are needed?
[3] https://github.com/openshift/openshift-docs/issues/24625
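For reference, a minimal sketch of creating an entitlement secret for builds (the secret name follows the mount directory mentioned in the docs; the certificate file names are placeholders for whatever is under `/etc/pki/entitlement` on a subscribed RHEL host):

```shell
# Sketch, assuming entitlement certs from a subscribed RHEL host;
# the numeric file names below are made-up examples.
oc create secret generic etc-pki-entitlement \
  --from-file=entitlement.pem=/etc/pki/entitlement/1234567890.pem \
  --from-file=entitlement-key.pem=/etc/pki/entitlement/1234567890-key.pem
```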
Ben Parees wrote on 8.08.20 г. 0:49 ч.:
>
>
> On Fri, Aug 7, 2020 at 3:29 PM Aleksandar Kostadinov
> <akost...@redhat.com <mailto:akost...@redhat.com>> wrote:
>
ild while in OpenShift we need these COPY commands.
Or is there something I'm missing here? Is there some way to use the same Dockerfile?
Aleksandar Kostadinov wrote on 7.08.20 г. 22:28 ч.:
> I'm reading the documentation [1] and adding the secret mounted under
> the `etc-pki-entitlement` directory, but subscriptio
I'm reading the documentation [1] and adding the secret mounted under
the `etc-pki-entitlement` directory, but subscription manager still
doesn't find the extra repos.
I don't see 3.11 specific information. Should it work in another way?
I also see a blog post from this year [2]. It suggests copying
You don't have to port-forward for the cli. The client source port is
allocated dynamically and usually differs from the service port, so just
start your cli without port-forwarding.
Now you can't access localhost from the container, so you need a few things:
1. figure out the IP address of your host
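For step 1, one common trick (a sketch, assuming the default docker bridge network, where the host is the container's default gateway) is to read the gateway from the routing table. Inside the container that would be `ip route | awk '/^default/ { print $3 }'`; the parsing is shown here on a captured sample line:

```shell
# Sample output of `ip route` inside a container on the default bridge:
route_line="default via 172.17.0.1 dev eth0"
# The third field of the default route is the gateway, i.e. the host/bridge IP.
host_ip=$(echo "$route_line" | awk '/^default/ { print $3 }')
echo "$host_ip"   # → 172.17.0.1
```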
Pavel Sapozhnikov wrote on 2/13/19 9:45 PM:
> Hi
>
> Is there any way to configure Build Webhook URL in OpenShift to be not
> HTTPS?
>
> If the answer is yes, then how?
Even if you could, it wouldn't make sense. Do you want somebody to sniff
your hook secret and DoS your server with numerous
Marc Ledent wrote on 1/10/19 11:01 AM:
> Hi all,
>
> We are currently doing an evaluation of the different storage
> technologies inside OpenShift containers.
>
> We are currently testing GlusterFS, which has the advantage of being
> available on all nodes, but I have mixed feelings about it. I
It might be really nice to allow a pod to request sockets where different
log streams can be sent to central logging without extra containers in
the pod.
Jeff Cantrill wrote on 08/15/18 16:50:
The recommended options with the current logging stack are either to
reconfigure your logging to send to
Hi,
I have a blog about it [1]. HTH
[1] http://rboci.blogspot.com/2015/07/openshift-v3-rest-api-usage.html
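For the REST side, one way on OpenShift 3 is the challenging-client OAuth flow: request `$MASTER/oauth/authorize?client_id=openshift-challenging-client&response_type=token` with basic auth and an `X-CSRF-Token: x` header, then read the token from the redirect's `Location` header. A sketch of the extraction step (the header value below is made up):

```shell
# A made-up example of the Location header returned by the OAuth server:
location='https://master.example.com:8443/oauth/token/display#access_token=abc123&expires_in=86400'
# The token rides in the URL fragment as access_token=...
token=$(echo "$location" | sed -n 's/.*[#&]access_token=\([^&]*\).*/\1/p')
echo "$token"   # → abc123
```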
Yu Wei wrote on 08/01/18 12:24:
Hi guys,
I can get the session token via the cli "oc whoami -t".
Could I get the same information via the REST API?
I tried with the API below; however, it returned
Maybe you can try to replace/add files inside
> /etc/pki/ca-trust/extracted/
You can prepare the files on a real machine and then copy them over to
containers as secrets.
P.S. As far as I can tell, SSL was invented exactly to prevent
man-in-the-middle attacks (which is what the appliance is presently doing).
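A sketch of the secret approach (the secret and deployment names are made-up; the bundle path is the standard extracted-PEM location):

```shell
# On a machine that already trusts the appliance CA:
oc create secret generic ca-trust \
  --from-file=tls-ca-bundle.pem=/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
# Mount the secret over the same location inside the container:
oc set volume dc/myapp --add --type=secret --secret-name=ca-trust \
  --mount-path=/etc/pki/ca-trust/extracted/pem
```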
FYI, there is an image `origin-ansible` [1] that has ansible readily
installed and working.
[1] https://github.com/openshift/openshift-ansible
Marc Boorshtein wrote on 06/21/18 07:48:
I created a simple container on centos7 designed to run an ansible
playbook. Runs great on local
Seems to be supported [1][2]. I wonder whether this feature depends on the
network plugin used in the cluster, i.e. does the cluster need to use the
network policy plugin instead of the default?
[1]
https://docs.openshift.org/latest/rest_api/apis-networking.k8s.io/v1.NetworkPolicy.html
[2]
Looks like you need edge termination [1].
I'm not sure about the path you set; I don't have experience with that.
But it looks wrong to set query parameters in the path. Why not first try
routing without a path? Shouldn't WebSphere redirect the user to the
correct path on first access?
[1]
Hi,
I was wondering whether it is possible to configure the api server, to
send additional custom header(s) as part of all (or selected) responses.
Any ideas?
Did you manage to figure out whether it is the ssh connection or the
`oc port-forward` connection that is dropped?
I think you can use wireshark to watch the `oc` remote connection and
see when it gets dropped and by which side.
There are a lot of things that can be tried. For example setting
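The capture suggestion can be sketched with tcpdump as well (the hostname is an assumption; 8443 is the usual 3.x API port):

```shell
# Capture the oc <-> API server traffic; when the port-forward dies,
# look for which side sent the FIN/RST.
tcpdump -i any -nn -w oc-capture.pcap 'host master.example.com and port 8443'
# Then open oc-capture.pcap in wireshark and follow the TCP stream.
```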
I think that most of what you need can be found in the docs [1],
especially "3.1.1.2. Datasource configuration environment variables".
To enable data sources you need to set the respective environment variables.
You can create a secret for your database credentials. Then edit your
app
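The secret-to-environment wiring might look like this (all names and values below are made-up examples; match the variable names to the datasource variables from the docs):

```shell
# Keep the credentials out of the deployment config:
oc create secret generic db-creds \
  --from-literal=DB_USERNAME=appuser \
  --from-literal=DB_PASSWORD=changeme
# Expose the secret's keys as environment variables on the deployment:
oc set env dc/myapp --from=secret/db-creds
```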
Hi, for non-critical workloads I find `oc-cluster-wrapper` very
convenient. You can wipe and recreate installations quickly when needed.
Just make sure you script all post-install configuration like
configuring users, etc.
I find it more problematic that if you recreate the cluster you would
As an alternative to relying on fixed user ids, you can make the files
that your app needs to write writable by the `root` *group*. Files that
your app needs to read can be world-readable or readable by the `root`
group again.
OpenShift will give your container a random UID but the
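The group-permission trick is usually baked in at image build time, e.g. `RUN chgrp -R 0 /app && chmod -R g=u /app` in a hypothetical Dockerfile. What `g=u` does, demonstrated locally:

```shell
# chmod g=u copies the owner's permission bits to the group, so any member
# of the root group (such as the random-UID container user) gets owner-level
# access without the file being world-writable.
f=$(mktemp)
chmod 640 "$f"       # owner rw-, group r--
chmod g=u "$f"       # group now matches owner: rw-
stat -c '%a' "$f"    # → 660
rm -f "$f"
```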
https://lists.mindrot.org/pipermail/openssh-unix-dev/2017-March/035906.html
https://lists.mindrot.org/pipermail/openssh-unix-dev/2017-August/036168.html
Looks like running as a non-privileged user will continue to be supported,
just not as a single process and not with the old option.
Fedora 26
You sure?
Where did you read that OpenSSH is dropping that mode?
Tobias Florek wrote on 07/11/17 13:07:
Hi!
I have a container (based on CentOS) that runs OpenSSH's sftp server as a
random uid for use in OpenShift (using nss-wrapper).
Unfortunately OpenSSH is going to drop running as non-root in
Hi,
you can use HTTP for file transfer as well and communicate the server
password through a secret in the project. It is indeed possible to run
an SSH server inside OpenShift, just a little tricky. I have a blog about
it [1].
[1]
so the code finds them as null
Best regards
On 9 Sep 2016, at 17:49, Julio Saura <jsa...@hiberus.com
<mailto:jsa...@hiberus.com>> wrote:
On 9 Sep 2016, at 17:47, Aleksandar Kostadinov
<akost...@redhat.com <mailto:akost...@redhat.com>> wrote:
Julio Saura wrote o
Josh Berkus wrote on 07/22/16 00:21:
On 07/21/2016 02:07 PM, Aleksandar Kostadinov wrote:
Then use plain IPs for nodes and masters. Then use xip.io for
automatically generated DNS names pointing at your NAT router. Make sure
the NAT router forwards ports 80 and 443 to the OpenShift cluster's
ports 80 and 443.
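xip.io works by echoing back the IP embedded in the hostname, so any wildcard route under `*.<router-IP>.xip.io` resolves to your NAT router without running a DNS server of your own. The naming convention, illustrated locally (no DNS lookup is performed; the IP is a made-up example):

```shell
# A xip.io-style name embeds the target IP right before the xip.io suffix:
name="app.myproject.10.0.0.1.xip.io"
echo "$name" | sed -n 's/^.*[^0-9.]\.\([0-9.]*\)\.xip\.io$/\1/p'   # → 10.0.0.1
```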
Josh Berkus wrote on 07/21/16 23:54:
On 07/21/2016 01:40 PM, Alex Wauck wrote:
On Thu, Jul 21, 2016 at 3:29 PM, Josh Berkus > wrote:
There is no external DNS server, here. I'm talking about a portable
microcluster, a stack of microboard
Alex Wauck wrote on 07/21/16 23:40:
On Thu, Jul 21, 2016 at 3:29 PM, Josh Berkus > wrote:
There is no external DNS server, here. I'm talking about a portable
microcluster, a stack of microboard computers, self-contained. The idea
Josh Berkus wrote on 07/21/16 22:59:
...
Just testing, for now, so the AWS DNS will work.
I'll have to give some thought as to how I'll handle DNS on the hardware
microcluster. Anyone have suggestions for a minimalist solution? I'd
love to just run BIND in a container, but there's a bit of a
Josh Berkus wrote on 07/21/16 22:17:
Folks:
https://docs.openshift.org/latest/install_config/install/prerequisites.html#install-config-install-prerequisites
This goes on a bit about DNS requirements, but what's *actually*
required is a bit unclear. Do I just need DNS support for the
This is the intended behavior for the default multi-tenant network
plugin. If your setup allows using the subnet network plug-in, that
should allow such visibility.
Using a route may also help (I'm not sure about that).
Maybe somebody else will chime in but I am not aware of any other
options.
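One more knob that may be worth checking: with the multi-tenant plugin a cluster admin can join projects' networks rather than switching plugins (a sketch; the project names are made up):

```shell
# Pods in project-b become reachable from project-a and vice versa:
oc adm pod-network join-projects --to=project-a project-b
# To undo, give the project its own network again:
oc adm pod-network isolate-projects project-b
```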
Candide Kemmler wrote on 04/13/2016 12:12 PM:
Hi Aleksandar,
I might not be able to help a lot with your specific issues, but could you
explain more about them and possibly include some relevant logs?
From your email it is not clear which issues exactly you're hitting.
With a more detailed
Candide Kemmler wrote on 04/13/2016 10:53 AM:
My application is made up of several modules and jar dependencies (you guessed
it, it's written in java). I have a complex setup involving a nexus repo and
jenkins. To deploy one of the services that make up my app, I first have to
build jar
How about allowing PVs to be usable only from a particular project? That
way the admin can assign some PVs to a project, which the project admin
can then use as desired.
Mark Turansky wrote on 03/17/2016 08:55 PM:
And to be specific, when the PV is provisioned *for that claim*, it will
be pre-bound to that
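The pre-binding mentioned above is done with `claimRef` in the PV spec; a minimal sketch (all names, the storage size, and the NFS details are made-up examples):

```shell
cat <<'EOF' | oc create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: nfs.example.com
    path: /exports/pv0001
  claimRef:
    namespace: myproject
    name: myclaim
EOF
```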
Srinivas Naga Kotaru (skotaru) wrote on 02/22/2016 08:26 PM:
> Thanks guys for having some discussion on this topic. Please confirm
> whether my understanding pertaining to multi-cluster authentication and
> token management is correct.
> 1. The OSE3 authentication subsystem can use an external OAuth
Jordan Liggitt wrote on 02/22/2016 09:43 AM:
...
Correct, that method relies on the API server directly
identifying the
user from the certificate. That works for the few built in bootstrap
users, and can work for end users if that particular certificate
Srinivas Naga Kotaru (skotaru) wrote on 02/19/2016 08:00 PM:
David
Thanks for info
It looks like a big problem from a management or client-experience
perspective. I have seen that most clients use a single cluster, but
what about a client that has multiple clusters with a common client
base?