I've been beating my head against this problem - I'm finally coming to the
conclusion it's not me :) I suspect a bug in k8s. Please confirm.
I tried "oc cluster up" and "./openshift start" - I get this error in master
builds and in 1.5.alpha3 - most of my pods fail with this event:
E0221
e dev tools of your browser and look for
> > the websocket connections. I recommend using Chrome since it does a better
> > job of displaying websocket information.
> >
> > Any time both logs and terminal are not working, it's usually a symptom of
> > websockets failing
ift tries to talk to the node part, it uses the node's identifier,
> which ends up being localhost. Try setting --hostname to 192.168.1.5 and
> see if that works any better.
>
> Andy
>
> On Wed, Feb 22, 2017 at 4:15 PM, John Mazzitelli <m...@redhat.com> wrote:
>
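(For reference, that suggestion boils down to something like the following - a sketch only, assuming the all-in-one binary accepts --hostname directly and using the IP mentioned in this thread:

./openshift start --hostname=192.168.1.5

so the node registers with a reachable address instead of localhost.)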
>
7 at 2:02 PM, John Mazzitelli <m...@redhat.com> wrote:
>
> > Same thing for both Logs and Terminal - getting an error code of 500 when
> > the websocket tries to connect.
> >
> > I have no idea why - this used to always work - until I started using the
These OpenShift docs were correct when docker 1.10 and earlier were allowed. Now
that newer docker versions are required, I believe these docs are incorrect
and should be updated - please correct me if I'm wrong:
From https://github.com/openshift/origin/blob/master/CONTRIBUTING.adoc
OK, I'm almost back to where I was when I had things working.
I was using an older version and starting with "oc cluster up"; I am now using
the latest builds and starting with "openshift start".
But if I go to any pod's Logs tab in the UI console, I see this:
"Logs are not available. The
ed Hat Consulting
> > 101 N. Wacker, Suite 150
> > Chicago, IL 60606
> > andrew.bl...@redhat.com | m. (716) 870-2408
> >
> > On September 1, 2016 at 10:16:12 AM, Clayton Coleman (ccole...@redhat.com)
> > wrote:
> >
> > Copying Andrew who was th
Question: I can't seem to find the Jolokia app that this redhat dev article
mentions:
http://developers.redhat.com/blog/2016/03/30/jolokia-jvm-monitoring-in-openshift-2/
I installed VirtualBox, got OpenShift 3 running, and I can see the OpenShift UI. I
can run "oc" on my host (I installed that
I have a watcher setup:
func watch() {
	fieldSelector := fields.OneTermEqualSelector("spec.nodeName", nodeName)
	listOptions := api.ListOptions{Watch: true, FieldSelector: fieldSelector}
	watcher, err := d.Client.Pods(v1.NamespaceAll).Watch(listOptions)
	go func() {
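		// (Sketch only, not my actual code: one way the rest of this goroutine
		// could consume the watch, assuming the same old client API as above and
		// that "log" is imported. The err check really belongs before go func(),
		// but is shown here just to keep the excerpt self-contained.)
		if err != nil {
			log.Printf("failed to start pod watch: %v", err)
			return
		}
		defer watcher.Stop()
		for event := range watcher.ResultChan() {
			pod, ok := event.Object.(*v1.Pod)
			if !ok {
				continue
			}
			log.Printf("pod event: %s %s/%s", event.Type, pod.Namespace, pod.Name)
		}
	}()
}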
> I use oc from a utility server that’s not part of the cluster, which any
> developer can access. We keep oadm on a master openshift host, which is only
> accessible by openshift admins. I don’t believe oc needs access to the kube
> config, or at least haven’t hit any commands for it yet. Oadm
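For context, that split usually looks something like this (host names and paths here are just illustrative, not from the thread): developers on the utility box run plain oc, e.g.

  oc login https://openshift-master.example.com:8443
  oc get pods -n myproject

while admin-only operations stay on the master, where oadm is pointed at the admin kubeconfig:

  oadm policy add-cluster-role-to-user cluster-admin alice --config=/etc/origin/master/admin.kubeconfig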
different way
pods can share credentials to a cluster-wide agent like this?
- Original Message -
> Correct, the cluster-reader role is intentionally non-escalating, so it
> does not have access to read secrets.
>
> Global read access to secrets is not typically something yo
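If you do end up needing that access, a narrower alternative to widening cluster-reader is to grant it explicitly to a single dedicated service account - roughly like this (role and service account names are hypothetical):

  oc create clusterrole secret-reader --verb=get,list,watch --resource=secrets
  oc adm policy add-cluster-role-to-user secret-reader system:serviceaccount:agent-project:agent-sa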
Just wondering if this is not supposed to work or if it's a bug.
If I try to delete a clusterrole using --selector, it doesn't work:
=
$ oc get --show-kind --show-labels clusterrole my-role
NAME                   LABELS
clusterroles/my-role   my-label-name=my-label-value
$ oc delete
-
> Where are you seeing this error? What's the oc version you're using?
>
> Maciej
>
> On Sun, Feb 11, 2018 at 1:25 AM, John Mazzitelli <m...@redhat.com> wrote:
>
> > Did something change in the past 24 to 48 hours that might have caused
> > this? This wasn't
Did something change in the past 24 to 48 hours that might have caused this?
This wasn't happening a few days ago:
oc delete all,secrets,sa,templates,configmaps,daemonsets,clusterroles
--selector=app=myapp -n default
error: {batch cronjobs} matches multiple kinds [batch/v1beta1, Kind=CronJob
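A common way around that kind of ambiguity (an assumption on my part, not something confirmed in this thread) is to name the resource with its group and version so only one kind can match, e.g.:

  oc delete cronjobs.v1beta1.batch --selector=app=myapp -n default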
oc
> version` after the second cluster up?
>
> [1]
> https://github.com/openshift/origin/pull/17862/commits/7e44f156eb93532b4e8f9e8e15397e4e14f6ccd8
>
> On Fri, Feb 16, 2018 at 5:48 PM, John Mazzitelli < m...@redhat.com > wrote:
>
>
> Anyone see this behavior
est (I did NOT build those on my machine).
How can I tell it not to do that? :) I want it to always use the locally built
images, not pull the "true latest" down from a remote docker repo.
> On Fri, Feb 16, 2018 at 5:48 PM, John Mazzitelli < m...@redhat.com > wrote:
>
> Anyo
Anyone see this behavior or know what it means?
I have git cloned openshift-origin and I am building the release-3.8 branch via:
hack/env make clean build
hack/build-local-images.py
I then run oc cluster up to get it to start:
oc cluster up --version=latest
I use version "latest".
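One workaround I can think of (just an assumption, not something I've confirmed) is to retag the locally built images with a tag that does not exist in the remote registry and point cluster up at that tag, so it can only use what is on the machine:

  docker tag openshift/origin:latest openshift/origin:local
  # ...repeat for the other origin images the build produced...
  oc cluster up --version=local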
I have no idea if this is what you are looking for, but the Kiali Operator
(itself implemented via Ansible) uses Molecule for some tests - the tests are
here:
https://github.com/kiali/kiali/tree/master/operator/molecule
One caveat -- you (or the CI test framework) need to stand up an
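(For anyone trying those tests: the usual Molecule invocation is something like "molecule test -s <scenario-name>", run from the directory that contains the molecule/ folder - scenario names are whatever exists in that repo - but as the caveat above says, you first need a cluster for the tests to point at.)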
> Are these [1] instructions expected to work for CRC as well, or are there
> different instructions / is it not possible to get Istio working on CRC?
>
> Regards,
> Marvin
>
> [1]
> https://docs.openshift.com/container-platform/4.1/service_mesh/service_mesh_install/installing-ossm.html
oc cluster up is gone in 4.x - replaced with CRC:
https://github.com/code-ready/crc
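The rough CRC equivalent of the old workflow is something like this (a sketch - check the CRC docs for the current flags):

  crc setup           # one-time host setup
  crc start           # starts a single-node OpenShift 4 cluster
  eval $(crc oc-env)  # puts the bundled oc client on your PATH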
- Original Message -
> Hi
>
> Is it planned to release okd4 - https://www.okd.io/ - and will we still have a
> nice way to start a cluster -> `oc cluster up`?
>
> Best regards
>
> Charles