> On Nov 21, 2020, at 10:54 AM, Russ Krichevskiy
> wrote:
>
> I am trying to replace a failed master node that did not reboot
> properly during upgrade (4.6.3 to 4.6.4).
> Referencing documentation here
>
Clayton,
>
> Thanks for the response. "no longer required" as in this was a change
> made recently? What version of OpenShift was this change made effective in?
>
> Thanks,
> Marvin
>
> On Thu, Sep 10, 2020 at 10:02 AM Clayton Coleman
> wrote:
>
>> Link is
Link is no longer required unless you want pods with that service account
to use a pull secret automatically. Import isn't related to a service
account, so it uses all pull secrets in the namespace.
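As a sketch of the distinction (the secret name, registry, and credentials below are illustrative assumptions, not from the thread):

```shell
# Create a pull secret in the namespace (all values illustrative):
oc create secret docker-registry my-pull-secret \
  --docker-server=registry.example.com \
  --docker-username=bot \
  --docker-password=s3cret

# Linking is only needed if pods running under this service account
# should use the secret automatically at pull time:
oc secrets link default my-pull-secret --for=pull

# Image import is not tied to a service account, so 'oc import-image'
# will consider every pull secret present in the namespace.
```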
On Thu, Sep 10, 2020 at 9:25 AM Just Marvin <
marvin.the.cynical.ro...@gmail.com> wrote:
> Hi,
>
You usually have to define a secret for your tls keys
On May 2, 2020, at 9:21 PM, Conrado Poole wrote:
Hi all,
Trying to figure out if the Ingress Operator is able to create Routes for
Kubernetes objects when they specify a TLS section on their spec.
From my testing, Routes are
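When the Kubernetes object is an Ingress, the `tls` section references a `kubernetes.io/tls` secret in the same namespace; a minimal sketch (host, names, and backend are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: example-tls   # secret holding tls.crt / tls.key
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 8080
```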
> On Nov 26, 2019, at 12:30 PM, Jon Stanley wrote:
>
>> On Tue, Nov 26, 2019 at 10:29 AM Clayton Coleman wrote:
One challenge is that you will need both segments, but only one of
them is a standard kube object (the icsp). The standard kube object
is what we would normally output. You should be able to drop that in
the manifests dir the installer creates to obviate the need for the
content sources.
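A minimal sketch of the ICSP mentioned above (the source and mirror registries are illustrative assumptions):

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: example-mirror
spec:
  repositoryDigestMirrors:
  - source: quay.io/openshift-release-dev/ocp-release
    mirrors:
    - registry.example.com/ocp/release
```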
Also,
Did you run must-gather while it couldn’t detach?
Without deeper debug info from the interval it’s hard to say. If you can
recreate it and run must gather we might be able to find it.
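For reference, capturing that debug data looks roughly like this (the destination directory is an assumption):

```shell
# Gather cluster state while the problem is reproducible:
oc adm must-gather

# Optionally write the output to a specific directory:
oc adm must-gather --dest-dir=./gather-output
```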
On Nov 24, 2019, at 10:25 PM, Joel Pearson
wrote:
Hi,
I updated some machine config to configure chrony for
On Nov 17, 2019, at 9:34 PM, Joel Pearson
wrote:
So, I'm running OpenShift 4.2 on Azure UPI following this blog article:
https://blog.openshift.com/openshift-4-1-upi-environment-deployment-on-microsoft-azure-cloud/
with
a few customisations on the terraform side.
One of the main differences it
Hrm, the nightly link seems to have disappeared.
The nightly installer binaries are located at:
https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/
On Nov 19, 2019, at 7:58 PM, Dale Bewley wrote:
I'm thwarted from installing OCP 4.2 on OSP 13 due to lack of support for a
Raise a bug to the installer component, yes
On Nov 17, 2019, at 6:03 PM, Joel Pearson
wrote:
On Mon, 18 Nov 2019 at 12:37, Ben Parees wrote:
>
>
> On Sun, Nov 17, 2019 at 7:24 PM Joel Pearson <
> japear...@agiledigital.com.au> wrote:
>
>>
>>
>> On Wed, 13 Nov 2019 at 02:43, Ben Parees
On Nov 12, 2019, at 3:44 AM, Joel Pearson
wrote:
On Tue, 12 Nov 2019 at 15:37, Ben Parees wrote:
>
>
> On Mon, Nov 11, 2019 at 11:26 PM Joel Pearson <
> japear...@agiledigital.com.au> wrote:
>
>> I've now discovered that the cluster-samples-operator doesn't seem to honour
>> the proxy settings,
There is a known bug in 4.2 where image stream content from the
release payload is not mirrored correctly. That is slated to be
fixed.
> On Oct 28, 2019, at 8:39 PM, W. Trevor King wrote:
>
>> On Mon, Oct 28, 2019 at 5:08 PM Joel Pearson wrote:
>> It looks like image streams don't honor the
On Oct 28, 2019, at 8:07 PM, Joel Pearson
wrote:
> Maybe must-gather could be included in the release manifest so that it's
> available in disconnected environments by default?
> It is:
> $ oc adm release info --image-for=must-gather
> quay.io/openshift-release-dev/ocp-release:4.2.0
>
>
Yes, that is a known 4.2 bug.
> On Oct 28, 2019, at 2:24 PM, W. Trevor King wrote:
>
>> On Mon, Oct 28, 2019 at 4:05 AM Joel Pearson wrote:
>> Maybe must-gather could be included in the release manifest so that it's
>> available in disconnected environments by default?
>
> It is:
>
> $ oc adm
We probably need to remove the example from the docs and highlight
that you must copy the value reported by image mirror
> On Oct 27, 2019, at 11:33 AM, W. Trevor King wrote:
>
>> On Sun, Oct 27, 2019 at 2:17 AM Joel Pearson wrote:
>> Ooh, does this mean 4.2.2 is out or the release is imminent?
Metrics are exposed via the controller process in the pod (pid1), not the
HAProxy process.
On Mon, Oct 14, 2019 at 1:27 PM Tim Dudgeon wrote:
> I'm trying to see the router stats as described here:
> https://docs.okd.io/3.11/admin_guide/router.html
>
> I can see this from within the container
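As a sketch of reaching the controller's stats endpoint (the host, port 1936, and credential variables are assumptions based on the default router configuration):

```shell
# The stats credentials live in the router deployment's environment:
oc -n default set env dc/router --list | grep STATS

# Query the metrics endpoint served by the controller process:
curl -u "$STATS_USER:$STATS_PASS" http://router.example.com:1936/metrics
```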
We should support a longer not-before window - there’s really no cost to us
because we require ntp after cluster startup and our rotation window
is days, not hours.
> On Oct 1, 2019, at 7:35 PM, W. Trevor King wrote:
>
>> On Tue, Oct 1, 2019 at 4:24 PM Jon Stanley wrote:
>> I fixed the hostname
Note: Triggers and image streams also work on deployments.
Are you looking to change these objects live, or in config files on disk?
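For a live object, an image trigger on a plain Deployment is expressed as an annotation; a sketch (names are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  annotations:
    # Update the named container's image when example:latest changes:
    image.openshift.io/triggers: >-
      [{"from":{"kind":"ImageStreamTag","name":"example:latest"},
        "fieldPath":"spec.template.spec.containers[?(@.name==\"app\")].image"}]
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: example:latest
```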
> On Aug 27, 2019, at 6:42 AM, Cameron Braid wrote:
>
> I have a bunch of DeploymentConfig resources I need to convert to Deployment
> resources.
>
> Does anyone
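The thread doesn't name a converter, but a hand-conversion sketch looks roughly like this (the field mappings are my assumptions about the usual differences, and all names are illustrative):

```yaml
# DeploymentConfig -> Deployment, roughly:
# - apiVersion apps.openshift.io/v1 -> apps/v1; kind -> Deployment
# - spec.selector (flat map) -> spec.selector.matchLabels
# - strategy type Rolling -> RollingUpdate (parameter names differ)
# - ImageChange triggers -> image.openshift.io/triggers annotation
# - drop DC-only fields such as spec.test and lifecycle hooks
apiVersion: apps/v1
kind: Deployment
metadata:
  name: converted-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: converted-app
  template:
    metadata:
      labels:
        app: converted-app
    spec:
      containers:
      - name: app
        image: registry.example.com/ns/app:latest
```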
[jber...@redhat.com]
> Sent: Thursday, July 25, 2019 11:23 AM
> To: Clayton Coleman; Aleksandar Lazic
> Cc: users; dev
> Subject: Re: Follow up on OKD 4
>
>> On 7/25/19 6:51 AM, Clayton Coleman wrote:
>> 1. Openshift 4 isn’t flexible in the ways people want (Ie you want to
ro do people want". If
there's a group who want to get more involved in the "build a distro" part
of tools that exist, that definitely seems like a different use case.
>
> The redhat container catalog is a good start too, but we need to be
> thinking all the way up to the k8s le
y, I'm just throwing some ideas out there, I wouldn't consider my
>> statements as advocating strongly in any direction. Surely FCoS is
>> the natural fit, but I think considering other distros merits
>> discussion.
>
> +1
>
> Regards
> Aleks
>
>
previous suggestion (the auto updating
kube distro) has the concrete goal of “don’t worry about security /
updates / nodes and still be able to run containers”, and fcos is a
detail, even if it’s an important one. How would you pitch the
alternative?
>
>> On Wed, Jul 24, 2019 a
On Wed, Jul 24, 2019 at 12:45 PM Fox, Kevin M wrote:
> Ah, this raises an interesting discussion I've been wanting to have for a
> while.
>
> There are potentially lots of things you could call a distro.
>
> Most linux distro's are made up of several layers:
> 1. boot loader - components to get
> From: dev-boun...@lists.openshift.redhat.com [
> dev-boun...@lists.openshift.redhat.com] on behalf of Michael Gugino [
> mgug...@redhat.com]
> Sent: Wednesday, July 24, 2019 7:40 AM
> To: Clayton Coleman
> Cc: users; dev
> Subject: Re: Follow up o
On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino wrote:
> I tried FCoS prior to the release by using the assembler on github.
> Too much secret sauce in how to actually construct an image. I
> thought atomic was much more polished, not really sure what the
> value-add of ignition is in this
On Sat, Jul 20, 2019 at 12:40 PM Justin Cook wrote:
> Once upon a time Freenode #openshift-dev was vibrant with loads of
> activity and publicly available logs. I jumped in asked questions and Red
> Hatters came from the woodwork and some amazing work was done.
>
> Perfect.
>
> Slack not so
> via email :)
>
> I'll send out more info here ASAP. Stay tuned!
>
> With kind regards
>
> CHRISTIAN GLOMBEK
> Associate Software Engineer
>
> Red Hat GmbH, registered seat: Grassbrunn
> Commercial register: Amtsgericht Muenchen, HRB 153243
> Managing directors: Charl
Thanks for everyone who provided feedback over the last few weeks. There's
been a lot of good feedback, including some things I'll try to capture here:
* More structured working groups would be good
* Better public roadmap
* Concrete schedule for OKD 4
* Concrete proposal for OKD 4
I've heard
tps://commons.openshift.org/events.html#event%7Cokd4-road-map-release-update-with-clayton-coleman-red-hat%7C960>
to further explore these topics with the wider community. I hope you’ll
join the conversation and look forward to hearing from the others across
the community. Meeting details here
v-boun...@lists.openshift.redhat.com <
dev-boun...@lists.openshift.redhat.com> *On Behalf Of *Clayton Coleman
*Sent:* jeudi, 6 juin 2019 12:15
*To:* Alix ander
*Cc:* OpenShift Users List ;
d...@lists.openshift.redhat.com
*Subject:* Re: OKD 4.x
We’re currently working on how Fedora CoreOS will integrate into OKD.
There’s a fair chunk of work that needs to be done and FCoS has a broader
mission than RHCoS does, so it's a bit further behind (since OpenShift 4 /
OKD 4 require an OS with ignition and ostree). Stay tuned, I was going to
write
skopeo is available via Homebrew on the Mac - if there’s a gap in function
for signing it’s very reasonable to file an issue to ensure it works
properly.
On May 3, 2019, at 7:32 PM, Clayton Coleman wrote:
On May 3, 2019, at 4:59 PM, Grace Thompson wrote:
We'd like to implement image
On May 3, 2019, at 4:59 PM, Grace Thompson wrote:
We'd like to implement image signing for our imagestreams. We are unable to
use `atomic cli` or skopeo to sign the images since we support other OS's
and not just rpm based distros.
If you would clarify - what part of “rpm based distros”
It really depends on your use case. The development team uses the
integrated registry to serve content for other clusters. We also use
quay.io and docker hub to host mirrors of that content.
On Thu, May 2, 2019 at 4:12 AM Harald Dunkel
wrote:
> HI folks,
>
> I understand that I can expose the
"oc explain object" but
> the tool wasn't too happy about that.
>
> Regards,
> Marvin
>
> On Wed, Mar 6, 2019 at 11:53 AM Clayton Coleman
> wrote:
>
>> Objects is opaque (literally "any object") so you can't get open api
>> metadata on it.
Objects is opaque (literally "any object") so you can't get open api
metadata on it.
On Wed, Mar 6, 2019 at 11:51 AM Kenneth Stephen <
marvin.the.cynical.ro...@gmail.com> wrote:
> Hi,
>
> The structure of a template is --> objects -->
> metadata . "oc explain template.objects" shows me good
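A minimal template illustrating why `objects` is opaque - it is a raw list of arbitrary API objects, so no single schema applies (names are illustrative):

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: example
parameters:
- name: NAME
  value: demo
objects:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: ${NAME}
  data:
    key: value
```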
Are there errors in the controller logs?
On Dec 2, 2018, at 2:42 AM, Cameron Braid wrote:
Sorry, a typo - it's a 3.7 cluster not 3.6
~> oc version
oc v3.7.2+282e43f
kubernetes v1.7.6+a08f5eeb62
features: Basic-Auth
Server
openshift v3.7.0+7ed6862
kubernetes v1.7.6+a08f5eeb62
On Sun, 2
Deployment Configs do not allow batch behavior, but they do allow “test”
behavior that allows you to trigger a scale up when a change happens, and
if everything succeeds you’ll get a “passing” deployment. When success or
failure happens it is scaled down to zero. This can be used to validate
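A sketch of such a “test” DeploymentConfig (the names and trigger are illustrative assumptions):

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: validate-image
spec:
  test: true        # stays at zero replicas except while a deployment runs
  replicas: 1
  selector:
    app: validate-image
  template:
    metadata:
      labels:
        app: validate-image
    spec:
      containers:
      - name: app
        image: registry.example.com/ns/app:latest
  triggers:
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - app
      from:
        kind: ImageStreamTag
        name: app:latest
```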
:
> On the 4.0 changes, is the plan to provide the ability to upgrade from
> 3.11 to 4.0 or would a totally fresh install be required?
>
> On Thu, Oct 11, 2018 at 4:55 PM Clayton Coleman
> wrote:
>
>> https://github.com/openshift/origin/releases/tag/v3.11.0 contains the
>
https://github.com/openshift/origin/releases/tag/v3.11.0 contains the
release notes and latest binaries.
The v3.11.0 tag on docker.io is up to date and will be a rolling tag (new
fixes will be delivered there).
Thanks to everyone on their hard work!
The all-in-one path David is referring to (openshift start) is not used by
minishift (which uses oc cluster up).
There will be a replacement path for the core functionality of running a
single master in a VM, we’re still working out the details. The end goal
would be for an equivalent easy to
OpenShift automatically prunes images off nodes and has done so since at
least 3.4. Please see
https://docs.okd.io/latest/admin_guide/garbage_collection.html
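The pruning thresholds are tuned per node via kubelet arguments; a sketch from a 3.x-era `node-config.yaml` (the percentages are illustrative):

```yaml
kubeletArguments:
  image-gc-high-threshold:
  - "85"   # disk usage (%) above which image GC starts pruning
  image-gc-low-threshold:
  - "80"   # disk usage (%) that GC prunes down to
```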
On Tue, Sep 25, 2018 at 12:32 PM Tim Dudgeon wrote:
> As time progresses more and more docker images will be present on the
> nodes in a
support for extending openshift with
other ecosystem projects like istio and knative
On Fri, Sep 7, 2018 at 9:18 AM Clayton Coleman wrote:
> Master right now will be labeled 4.0 when 3.11 branches (happening right
> now). It’s possible we might later cut a 3.12 but no plans at the current
not to add the
individual masters to that endpoint and use a load balancer instead? Say a
private ELB for example?
Or are there future features in kubernetes that will make master failover
more reliable internally?
Thanks,
Joel
On Thu, 28 Jun 2018 at 12:48 pm, Clayton Coleman
wrote
And coming in 3.11 is an experimental command that lets you add a tar file
as a new layer in an image (or as a new “scratch” image), so it’s even
easier to tar up your source code as an image, and have builds trigger off
the image change. The command is “oc image append --to=<image> [layer as
tar.gz]”
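A sketch of that workflow (the paths and image names are assumptions):

```shell
# Package the source tree as a layer:
tar -czf src.tar.gz ./src

# Append it onto an existing base image (omit --from for a scratch image):
oc image append --from=docker.io/library/busybox:latest \
  --to=registry.example.com/ns/app:latest src.tar.gz
```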
There are a lot of possible ways to automate or improve this. Part of
the reason we have hesitated to add more complexity was because of the
divergence of use cases.
It usually boils down to asking “who” or “why”. One option for
empowering specific trusted individuals is to write a quick loop to
?
On Thu, Sep 6, 2018 at 2:34 PM Clayton Coleman wrote:
> The successor to atomic host will be RH CoreOS and the community
> variants. That is slated for 4.0.
>
> > On Sep 6, 2018, at 9:25 AM, Marc Ledent wrote:
> >
> > Hi all,
> >
> > I have read in
The successor to atomic host will be RH CoreOS and the community
variants. That is slated for 4.0.
> On Sep 6, 2018, at 9:25 AM, Marc Ledent wrote:
>
> Hi all,
>
> I have read in the 3.10 release notes that Atomic Host is deprecated and will
> not be supported starting release 3.11.
>
> What
, create a role that you bind to the
namespace / account when you trust them. Or just add it to edit and they’d
have it by default.
The complicated scenarios are usually when your trust domains are
heterogeneous - it didn’t sound like that was your case.
On Thu, Aug 30, 2018 at 2:46 PM Clayton
more
effective to schedule the routers on nodes and keep that traffic separate
from a resiliency perspective.
The routers need the masters to be available (2/3 min) to receive their
route configuration when restarting, but require no interconnection to
serve traffic.
*From:* Clayton Coleman
*Sent
When you were experiencing the outage was ALB listing 2/3 healthy
backends? I’m not as familiar with ALB over ELB, but what you are
describing sounds like the frontend only was able to see one of the pods.
On Sep 2, 2018, at 9:21 AM, Stan Varlamov wrote:
AWS ALB
*From:* Clayton Coleman
Routers all watch all routes. What are you fronting your routers with for
HA? VRRP? An F5 or cloud load balancer? DNS?
On Sep 2, 2018, at 6:18 AM, Stan Varlamov wrote:
Went through a pretty scary experience of partial and uncontrollable outage
in a 3.9 cluster that happened to be caused by
Ultimately you need to ask what you are trying to prevent:
1. a user from accidentally blowing up the cluster
2. malicious users
3. an application breaking at runtime because it needs more resources than
it is allotted
The second one is more what we've been discussing here - being draconian up
Please see https://status.docker.com/ for times.
Remember, if you have autoscaling nodes that need to pull new apps, or have
pods that run with PullAlways, or push builds to the docker hub, while the
hub is down those operations will fail.
Mitigations could include:
1. Disable autoscaling for
at we have today?
On Tue, Aug 14, 2018 at 6:17 PM, Clayton Coleman
wrote:
> As part of the continuation of splitting OpenShift up to make it be able
> to run on top of kubernetes, we just merged https://github.com/
> openshift/origin/pull/20344 which removes "openshift start no
As part of the continuation of splitting OpenShift up to make it be able to
run on top of kubernetes, we just merged
https://github.com/openshift/origin/pull/20344 which removes "openshift
start node" and the "openshift start" commands. This means that the
openshift binary will no longer include
possible to also do that with masters post upgrade? Do you have any
> info you could point me at to create the new node groups post upgrade?
>
>
>
> On Tue, Jul 24, 2018 at 3:00 PM Clayton Coleman
> wrote:
>
>> Upgrading from regular nodes to autoscaling groups is not i
Upgrading from regular nodes to autoscaling groups is not implemented.
You’d have to add new node groups post upgrade and manage it that way.
> On Jul 24, 2018, at 7:22 AM, David Conde wrote:
>
> I'm in the process of upgrading an origin cluster running on AWS from 3.7 to
> 3.9 using openshift
To access things across all namespaces, you need a ClusterRoleBinding, not
a RoleBinding. RoleBindings only give you access to the role scoped to the
namespace the RoleBinding is in.
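A sketch of such a binding (the subject and role here are illustrative assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: route-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view          # grants read access in every namespace
subjects:
- kind: ServiceAccount
  name: route-manager
  namespace: tooling
```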
On Tue, Jul 17, 2018 at 10:21 AM Eric D Helms
wrote:
> Howdy,
>
> I am trying to manage routes via a
In OpenShift 3.9, when a master goes down the endpoints object should be
updated within 15s (the TTL on the record for the master). You can check
the value of "oc get endpoints -n default kubernetes" - if you still see
the master IP in that list after 15s then something else is wrong.
On Wed,
If you have api audit logging on (see docs for master-config) you would see
who edited the config map and what time.
On Jun 27, 2018, at 1:59 PM, leo David wrote:
Hello everyone,
I'm encountering this situation on OS Origin 3.9, in which someone whith
full acces in a particular namespace
Clients and binaries have been pushed to GitHub
https://github.com/openshift/origin/releases/tag/v3.10.0-rc.0 and images
are available on the DockerHub.
___
users mailing list
users@lists.openshift.redhat.com
Find the name of one of your crashing pods and run:
$ oc debug POD_NAME
That'll put you into a copy of that pod at a shell and you can debug
further from there.
On Mon, May 21, 2018 at 5:04 PM, Brian Keyes wrote:
> I have an very very simple hello python
>
>
> #start
oc serviceaccounts get-token
Is a little easier for scripting
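Side by side (the service account name is an assumption):

```shell
# One step:
oc sa get-token robot

# Versus fishing the token out of the secret by hand:
oc describe secret "$(oc get sa robot -o jsonpath='{.secrets[0].name}')"
```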
On May 11, 2018, at 10:49 PM, Mohamed Lrhazi wrote:
I got it ! Thanks!
» oc describe secret robot-token-6w99j
On Fri, May 11, 2018 at 10:38 PM, Mohamed Lrhazi wrote:
> One more quick question ;)
>
If you want to access the docker socket your pod / container must be
privileged, since the docker socket gives the pod full access to the host.
Set the privileged Boolean on the container’s security context
On May 10, 2018, at 9:43 AM, Mohmmed, Osman X <
osman.x.mohm...@healthpartners.com> wrote:
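A sketch of such a pod (the image name is an assumption; the pod's service account must also be granted the privileged SCC):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: docker-client
spec:
  containers:
  - name: client
    image: registry.example.com/tools/docker-cli:latest
    securityContext:
      privileged: true      # required to use the host's docker socket
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
```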
Resource limits are fixed because we need to make a good scheduling
decision for the initial burst you’re describing (the extremely high cpu at
the beginning). Some applications might also need similar cpu on restart.
Your workload needs to “burst”, so setting your cpu limit to your startup
peak
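A sketch of that shape (the numbers are illustrative assumptions):

```yaml
resources:
  requests:
    cpu: 200m        # steady-state usage, used for scheduling
    memory: 512Mi
  limits:
    cpu: "2"         # sized for the high-CPU startup burst
    memory: 512Mi
```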
https://github.com/openshift/origin/pull/19509 has been merged and does two
things:
First, and most important, it puts our images and binaries on a diet:
1. oc is now 110M instead of 220M
2. most origin images were 1.26GB uncompressed (300 or so on the wire) are
now half that size (150 on the
On Apr 19, 2018, at 4:44 AM, marc.schle...@sdv-it.de wrote:
Hello everyone
I was asking this question already on the Openshift Google Group but was
redirected to this list in the hope to find someone who knows the details
about the current "oc cluster up" command.
I am facing some trouble
ns-buildah-slave
https://hub.docker.com/r/alanbchristie/jenkins-slave-buildah-centos7/
On 17 Apr 2018, at 00:39, Clayton Coleman <ccole...@redhat.com> wrote:
Like any other user, to run privileged an administrator must grant access
to the Jenkins service account to launch privileged pods. That’s done by
granting the service account the slave pod runs as the privileged SCC:
oc adm policy add-scc-to-user -z SERVICE_ACCT privileged
On Apr 16, 2018,
You would add your CA to the master’s trust bundle (ca.crt or ca-bundle.crt
on each master, usually via Ansible), which is then distributed to all
containers as /var/run/secrets/kubernetes.io/serviceaccount/ca.crt and
available for many default actions like fetching source. However, if you
are
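Inside any pod that bundle sits at a fixed path, so a client can verify TLS against it; a sketch (the URL is an assumption):

```shell
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  "https://git.internal.example.com/project/repo.git/info/refs?service=git-upload-pack"
```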
Alan
On 13 Apr 2018, at 21:35, Clayton Coleman <ccole...@redhat.com> wrote:
Can not find allocated subnet usually means the master didn’t hand out a
chunk of SDN IPs to that node. Check the master’s origin-master-controller
logs and look for anything that relates to the node name mentioned in your
error. If you see a problem, try restarting the origin-master-controllers
You can try rerunning the install with -vv to get additional debug
information.
What OS and version on Ansible are you using?
On Apr 10, 2018, at 3:24 AM, Yu Wei wrote:
Hi,
I tried to install openshift origin 3.9 on a single machine and encountered
problems as below,
OpenShift users! Please take a moment to participate in the Kubernetes
application survey - your feedback will help make Kubernetes and OpenShift
a better platform.
Thanks!
Begin forwarded message:
*From:* Matt Farina
*Date:* April 5, 2018 at 11:40:11 AM EDT
*To:*
sers-boun...@lists.openshift.redhat.com>] *On Behalf Of *Clayton Coleman
*Sent:* Tuesday, March 27, 2018 3:44 PM
*To:* Troy Dawson <tdaw...@redhat.com>
*Cc:* users <us...@redhat.com>; The CentOS developers mailing list. <
centos-de...@centos.org>; dev <d...@lists.openshift.redhat.
Still waiting for a last couple of regressions to be fixed. Sorry
everyone, I know you're excited about this.
On Tue, Mar 27, 2018 at 6:04 PM, Troy Dawson wrote:
> I didn't see anything saying that 3.9 was released yet. Last I heard
> they were working on some regressions.
The router contains a management process that generates the haproxy
config and manages restarts, as well as gather stats.
> On Mar 27, 2018, at 9:32 AM, abdul nizam wrote:
>
> Hi all
>
> i can see the haproxy process id is changing whenever i create a route. So
> that means
On Mon, Mar 26, 2018 at 11:50 AM, Alfredo Palhares
wrote:
> Hello everyone,
>
>
> I would like to share some of the frustrations that I currently have with
> openshift, which is making me not consider this a base for our container
> infrastructure.
> - No visualization of the
We found a regression in the subPath behavior, currently waiting for fixes
to land. I'm probably going to remove the tag and cut another once that
lands.
On Tue, Mar 20, 2018 at 4:06 AM, Joel Pearson wrote:
> Is the OpenShift Origin 3.9.0 release imminent? I
setup).
Is there a way to have the daemon process pick up the new certificates
without a downtime in my scenario?
Clayton Coleman <ccole...@redhat.com> wrote on Tue., Mar 6, 2018 at
02:30:
> Even when you restart, you aren’t seeing the new certs loaded?
>
> On Mar 5, 2018, at 2:5
Even when you restart, you aren’t seeing the new certs loaded?
On Mar 5, 2018, at 2:58 AM, Alex Stockinger wrote:
Hi,
I am trying to secure my OpenShift installation's Console / API on port
8443 with let's encrypt certificates. I got this working nicely using the
We have no plan to deprecate routes. Since ingress are still beta and
there is no clear replacement proposal (and are less expressive) we
plan to continue to offer routes for a long time. There’s some work
in 3.10 to convert ingress to routes automatically to simplify
transition, which will
On Feb 13, 2018, at 10:49 PM, Joel Pearson
wrote:
I don't believe the web UI and registry are connected in any way.
If you don’t use the internal registry then you can’t trigger things to
deploy when an image changes, not sure if that matters to you...
Is /var/lib/etcd your etcd data directory? Ie is there anything in that
folder?
On Feb 13, 2018, at 4:50 PM, Feld, Michael (IMS) wrote:
Hi all,
I am trying to use the ansible playbook to migrate etcd from v2 to v3 for a
3.6.0 origin cluster and it keeps failing with the
When you run “openshift start” by itself that file won’t be created (we
create one in memory). If you launch with oc cluster up, it should be
inside the container at
/var/lib/origin/openshift.local.config/master/master-config.yaml
On Feb 7, 2018, at 8:54 PM, Gaurav Ojha
There’s a job that oc cluster up runs to create host PVs. You may want to
check that that job ran successfully. I don’t remember exactly what
namespace it was created in
On Feb 3, 2018, at 1:10 PM, Tien Hung Nguyen
wrote:
Hello,
I'm using OpenShift Origin v3.7.1
You can grant the role to the user to let them set it. However, that
lets that app escape any network isolation boundaries so the
multitenant network plugin won’t work.
You can also grant that permission to all users if you don’t need the
protection.
> On Jan 30, 2018, at 3:18 PM, Tomas Nozicka
r high scale workloads.
--
*Srinivas Kotaru*
*From: *Clayton Coleman <ccole...@redhat.com>
*Date: *Wednesday, January 24, 2018 at 10:32 AM
*To: *Srinivas Naga Kotaru <skot...@cisco.com>
*Cc: *users <users@lists.openshift.redhat.com>
*Subject: *Re: Heptio Contour
3.9 will have haproxy 1.8, but the current level of http2 doesn’t
really help applications that aren’t willing to do passthrough. That
said, passthrough http2 should work.
> On Jan 24, 2018, at 2:54 PM, Tobias Brunner <tob...@tobru.ch> wrote:
>
>> On 24.01.2018 19:31, Cla
At this point in time, contour is still pretty new, so expect some rough
edges. I did a prototype of routes with envoy (similar to contour, but
preserving the router features) a few months back, and identified a set of
challenges which made it not a great fit as a replacement for the OOTB
How are you providing configuration to the container? Note that when
openshift creates an app from an image it adds emptyDir volumes to the
created app. That means the directories end up empty instead of having the
default image content.
Check whether /opt/apache-tomcat-8.5.14/conf/ is a volume
There was an open bug on this previously - I’m having trouble finding it at
the moment. The node may be racing with the cloud controller and then not
updating the labels. One workaround is to simply add an “oc label
node/$(hostname) ...” command to the origin-node services as a prestart
command.
On Mon, Dec 18, 2017 at 5:17 AM, Yu Wei wrote:
> Hi,
>
> I have several questions about user and authorization management.
>
> 1, How could I remove user from project?
>
>
> *[root@host-10-1-236-92 gpu-test]# oc login -u test1 -p test1 Login
>
On Dec 13, 2017, at 8:36 PM, Nick Bartos (nibartos)
wrote:
I am unable to get a writable hostPath volume for a "privileged: false"
container, even when the container's runAsUser owns the directory on the
host.
The k8s docs say "You either need to run your process as root in
When you ran oc cluster up, did you explicitly set the master to run on
127.0.0.1, or did it select that address for you?
OAuth won’t work when the master is set to 127.0.0.1 (nor will a number of
other functions)
On Dec 11, 2017, at 6:38 AM, Simon Pasquier wrote:
Hi,
The deploy command is a sub command in the openshift binary (openshift
infra deploy —help) and makes api calls back to openshift to launch the
pod. The deployment service account is used by the pod and is granted the
permission to launch hook pods and also to scale the replica set for each
Sha1 may not even be in “old” (because I believe it’s now considered
broken). If you need it, you’ll have to edit the router template with that
cipher.
On Nov 17, 2017, at 7:49 AM, Mateus Caruccio
wrote:
What is the value of `ROUTER_CIPHERS`?
$ oc -n default env
The latter is still being actively developed - that will be the future
direction but is not yet ready for general use. Our work towards
leveraging node bootstrapping and using pre-baked AMIs is also not
completely finalized, so no expectation of stability on the latter.
> On Oct 26, 2017, at 9:18
No one with 2 nodes and 11 namespaces needs 20GB.
Up to a few hundred pods and ten nodes the openshift processes themselves
shouldn't be using more than a few hundred megs. If you plan on growing or
haven't upgraded to etcd3 you obviously want to leave some wiggle room, but
even very large