Re: how to supply ca trust bundle to bootstrap ignition config for openstack

2019-11-20 Thread Dale Bewley
I believe the ca bundle value should show up here:

$ cat terraform.openstack.auto.tfvars.json | \
    jq -r .openstack_bootstrap_shim_ignition | jq .ignition.security
{
  "tls": {}
}
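
For comparison, when a bundle is present I'd expect that object to be
populated per the Ignition spec with a certificateAuthorities list, roughly
like this (my own sketch, cert data abbreviated):

{
  "tls": {
    "certificateAuthorities": [
      {
        "source": "data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTi..."
      }
    ]
  }
}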

While attempting to influence that, I've had no luck placing the CA cert
in install-config.yaml at either of
.platform.openstack.{userCA,userCAIgnition} and then
running create cluster. What am I missing to influence the TLS payload
of the bootstrap shim ignition config?
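
For what it's worth, the userCA/userCAIgnition names above are my own
guesses. I half expect the installer to want something like the top-level
additionalTrustBundle field that install-config.yaml accepts for proxy and
disconnected setups, though I don't know whether this nightly wires that
through to the shim:

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <my OpenStack CA here>
  -----END CERTIFICATE-----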

$ ~/Downloads/openshift-install version
/Users/dlbewley/Downloads/openshift-install v4.3.0
built from commit 2355d9b2dd662c0043133d76273c5cf10e0ce00a
release image
quay.io/openshift-release-dev/ocp-release-nightly@sha256:3828e79b24b1891b9bec8b47fb7bf2fe093d7211dc9687cff317f475fa15f999

On Wed, Nov 20, 2019 at 9:38 AM Dale Bewley  wrote:

> After this merge I understand I can supply a CA bundle to enable ignition
> to trust my OpenStack Swift endpoint
>
> https://github.com/openshift/installer/pull/2587/files#diff-8f7812d0db7f9cf17958b3a70170f7a0
> I am trying with
> the openshift-install-mac-4.3.0-0.nightly-2019-11-19-053808.tar.gz build.
>
> Can you help me help myself with this?
>
> How do I translate `bootstrap_shim_ignition =
> var.openstack_bootstrap_shim_ignition` into an expected value in
> install-config.yaml?
>
> Thanks
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How to recover from failed update in OpenShift 4.2.x?

2019-11-20 Thread Clayton Coleman
On Nov 17, 2019, at 9:34 PM, Joel Pearson 
wrote:

So, I'm running OpenShift 4.2 on Azure UPI following this blog article:
https://blog.openshift.com/openshift-4-1-upi-environment-deployment-on-microsoft-azure-cloud/
with
a few customisations on the terraform side.

One of the main differences, it seems, is how the router/ingress is handled.
Normal (IPI) Azure uses load balancers, but UPI Azure uses a regular router (like
the one I'm used to seeing in 3.x), which is configured by setting "HostNetwork"
as the endpoint publishing strategy.
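
(For reference, that strategy lives on the default IngressController; a rough
sketch of what the UPI setup ends up with, not my exact manifests:)

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: HostNetwork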



This sounds like a bug in Azure UPI.  IPI is the reference architecture; it
shouldn't have a default that diverges from the ref arch.


It was all working fine in OpenShift 4.2.0 and 4.2.2, but when I upgraded
to OpenShift 4.2.4, the router stopped listening on ports 80 and 443. I
could see the pod running with "crictl ps", but "netstat -tpln" didn't
show anything listening.

I tried updating the version back from 4.2.4 to 4.2.2, but I
accidentally used the 4.1.22 image digest value, so I quickly reverted back to
4.2.4 once I saw the apiservers coming up as 4.1.22. I then noticed that
there was a 4.2.7 release on the candidate-4.2 channel, so I switched to
that, and ingress started working properly again.
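
(For the record, the channel/version juggling was done roughly like this,
exact flags from memory:)

$ oc patch clusterversion version --type merge \
    -p '{"spec":{"channel":"candidate-4.2"}}'
$ oc adm upgrade --to=4.2.7
$ oc get clusterversion   # watch the rollout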

So my question is: what is the strategy for recovering from a failed
update? Do I need to take etcd backups and then recover the cluster by
restoring etcd? I.e.
https://docs.openshift.com/container-platform/4.2/backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.html
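
(For context, my understanding is that the 4.2 docs linked above take the
backup with a script run on a master node; the exact script name and path
are from memory and may differ between releases:)

$ ssh core@<master-node>
$ sudo /usr/local/bin/etcd-snapshot-backup.sh <backup-dir>/snapshot.db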

The upgrade page specifically says "Reverting your cluster to a previous
version, or a rollback, is not supported. Only upgrading to a newer version is
supported." So is it an expectation for a production cluster that you would
restore from backup if the cluster isn't usable?


Backup, yes. If you could open a bug for the documentation, that would be
great.


Maybe the upgrade page should mention taking backups? Especially if there
is no rollback option.

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: cronjobs - how does this work?

2019-11-20 Thread Maciej Szulik
On Wed, Nov 20, 2019, 5:04 AM Just Marvin <
marvin.the.cynical.ro...@gmail.com> wrote:

> Hi,
>
>  I'm poring through the text at
> https://docs.openshift.com/container-platform/4.2/nodes/jobs/nodes-nodes-jobs.html#nodes-nodes-jobs-creating-cron_nodes-nodes-jobs
>  .
> Should I interpret spec.jobTemplate.spec.template.spec.containers.command
> as a command that exists within the container specified
> by cronjob.spec.jobTemplate.spec.template.spec.containers.image? I just
> tried spinning up an openjdk-11-rhel7 image (by itself, using new-app) and
> it refused to start up because it is coded to expect a main class or
> other entrypoint to be specified. If I wanted to run java code in a batch
> process, would this (or a similar) container be the right approach - i.e. do
> a docker / s2i build with it to add in my code and have it be executed? In
> that case, should I simply not specify a command in the CronJob yaml,
> because the container is already geared to run the command I need on launch?
>

Yes, you don't need to specify a command if one is already baked into the
image through any kind of build.
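
A minimal sketch (names and image are made up; the point is that no command
is set, so the entrypoint baked into the image runs):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-batch
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: batch
            image: image-registry.openshift-image-registry.svc:5000/myproject/my-batch-app:latest
            # no command/args: the ENTRYPOINT from your docker/s2i build runs as-is
          restartPolicy: OnFailure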
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


how to supply ca trust bundle to bootstrap ignition config for openstack

2019-11-20 Thread Dale Bewley
After this merge I understand I can supply a CA bundle to enable ignition
to trust my OpenStack Swift endpoint
https://github.com/openshift/installer/pull/2587/files#diff-8f7812d0db7f9cf17958b3a70170f7a0
I am trying with
the openshift-install-mac-4.3.0-0.nightly-2019-11-19-053808.tar.gz build.

Can you help me help myself with this?

How do I translate `bootstrap_shim_ignition =
var.openstack_bootstrap_shim_ignition` into an expected value in
install-config.yaml?
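
(In case it helps, this is how I've been sanity-checking the generated
assets; note it inspects the full bootstrap config rather than the OpenStack
shim itself:)

$ openshift-install create ignition-configs --dir=mycluster
$ jq .ignition.security mycluster/bootstrap.ign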

Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


cronjobs - how does this work?

2019-11-20 Thread Just Marvin
Hi,

 I'm poring through the text at
https://docs.openshift.com/container-platform/4.2/nodes/jobs/nodes-nodes-jobs.html#nodes-nodes-jobs-creating-cron_nodes-nodes-jobs
.
Should I interpret spec.jobTemplate.spec.template.spec.containers.command
as a command that exists within the container specified
by cronjob.spec.jobTemplate.spec.template.spec.containers.image? I just
tried spinning up an openjdk-11-rhel7 image (by itself, using new-app) and
it refused to start up because it is coded to expect a main class or
other entrypoint to be specified. If I wanted to run java code in a batch
process, would this (or a similar) container be the right approach - i.e. do
a docker / s2i build with it to add in my code and have it be executed? In
that case, should I simply not specify a command in the CronJob yaml,
because the container is already geared to run the command I need on launch?
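
For concreteness, my mental model (jar path and name made up) is that
command replaces the image ENTRYPOINT and must point at a binary that
exists inside the image, e.g.:

containers:
- name: batch
  image: openjdk-11-rhel7                        # the runtime image
  command: ["java"]                              # replaces ENTRYPOINT; must exist in the image
  args: ["-jar", "/deployments/my-batch.jar"]    # hypothetical jar added by my own build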

Regards,
Marvin
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: access to nightly OCP builds

2019-11-20 Thread Greg Sheremeta
My bad on the missing nightlies link. We'll get that added back asap.

Greg

On Tue, Nov 19, 2019 at 11:31 PM Clayton Coleman 
wrote:

> Hrm, the nightly link seems to have disappeared.
>
> The nightly installer binaries are located at:
>
> https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/
>
> On Nov 19, 2019, at 7:58 PM, Dale Bewley  wrote:
>
> I'm thwarted from installing OCP 4.2 on OSP 13 due to lack of support for
> a typical enterprise TLS config [1]. It is preventing the bootstrap node
> from reaching Swift due to a "self-signed" cert. I see that may be mostly
> fixed upstream [2] now.
>
> If this fix is in a nightly build [3] of OCP 4.3, how do I, as a customer,
> obtain access for testing? Following the link in the blog post to
> try.openshift.com does not seem to offer an answer. RH tech support does
> not seem to realize it's a thing, or I'm bad at asking questions. Nightlies
> of OCP are a thing, right? :)
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1735192
> [2] https://github.com/openshift/installer/pull/2544
> [3]
> https://blog.openshift.com/introducing-red-hat-openshift-4-2-in-developer-preview-releasing-nightly-builds/
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


-- 

Greg Sheremeta

Associate Manager, Software Engineering

OpenShift

Red Hat 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Changing Prometheus rules

2019-11-20 Thread Simon Pasquier
On Wed, Nov 20, 2019 at 10:23 AM Mateus Caruccio
 wrote:
>
> Isn't cluster version operator something from okd 4.x?

hmm, I might be confusing 3.x and 4.x indeed...

>
> Em Qua, 20 de nov de 2019 05:37, Simon Pasquier  
> escreveu:
>>
>> On Tue, Nov 19, 2019 at 6:33 PM Mateus Caruccio
>>  wrote:
>> >
>> > You must disable cluster-monitoring-operator since it will try to 
>> > reconcile the whole monitoring stack.
>> >
>> > $ oc scale --replicas=0 deploy/cluster-monitoring-operator
>>
>> You'd need to disable the cluster version operator too IIRC and this
>> has a bigger impact.
>>
>> >
>> > Muting alerts using inhibit rules may have an unexpected side-effect as 
>> > noted by [1]. The recommended approach is to send alerts for a "blackhole" 
>> > receiver (rationale and example in the link)
>> >
>> > [1] 
>> > https://medium.com/@wrossmann/suppressing-informational-alerts-with-prometheus-and-alertmanager-4237feab7ce9
>>
>> What I've described should work because source and target labels won't
>> match the same alerts. Agreed that blackholing the notification is
>> also a good solution.
>>
>> >
>> > --
>> > Mateus Caruccio / Master of Puppets
>> > GetupCloud.com
>> > We make the infrastructure invisible
>> > Gartner Cool Vendor 2017
>> >
>> >
>> > Em ter., 19 de nov. de 2019 às 13:27, Tim Dudgeon  
>> > escreveu:
>> >>
>> >> No joy with that approach. I tried editing the ConfigMap and the CRD but 
>> >> both got reset when the cluster-monitoring-operator was restarted.
>> >>
>> >> Looks like I'll have to live with silencing the alert.
>> >>
>> >> On 19/11/2019 07:56, Vladimir REMENAR wrote:
>> >>
>> >> Hi Tim,
>> >>
>> >> You need to stop cluster-monitoring-operator and then edit the 
>> >> configmap. If cluster-monitoring-operator is running while editing the 
>> >> configmap it will always revert it to default.
>> >>
>> >>
>> >> Uz pozdrav,
>> >> Vladimir Remenar
>> >>
>> >>
>> >>
>> >> From:Tim Dudgeon 
>> >> To:Simon Pasquier 
>> >> Cc:users 
>> >> Date:18.11.2019 17:46
>> >> Subject:Re: Changing Prometheus rules
>> >> Sent by:users-boun...@lists.openshift.redhat.com
>> >> 
>> >>
>> >>
>> >>
>> >> The KubeAPILatencyHigh alert fires several times a day for us (on 2
>> >> different OKD clusters).
>> >>
>> >> On 18/11/2019 15:17, Simon Pasquier wrote:
>> >> > The Prometheus instances deployed by the cluster monitoring operator
>> >> > are read-only and can't be customized.
>> >> > https://docs.openshift.com/container-platform/3.11/install_config/prometheus_cluster_monitoring.html#alerting-rules_prometheus-cluster-monitoring
>> >> >
>> >> > Can you provide more details about which alerts are noisy?
>> >> >
>> >> > On Mon, Nov 18, 2019 at 2:43 PM Tim Dudgeon  
>> >> > wrote:
>> >> >> What is the "right" way to edit Prometheus rules that are deployed by
>> >> >> default on OKD 3.11?
>> >> >> I have alerts that are annoyingly noisy, and want to silence them 
>> >> >> forever!
>> >> >>
>> >> >> I tried editing the definition of the PrometheusRule CRD and/or the
>> >> >> prometheus-k8s-rulefiles-0 ConfigMap in the openshift-monitoring 
>> >> >> project
>> >> >> but my changes keep getting reverted back to the original.
>> >> >>
>> >> >> ___
>> >> >> users mailing list
>> >> >> users@lists.openshift.redhat.com
>> >> >> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> >> >>
>> >>
>> >> ___
>> >> users mailing list
>> >> users@lists.openshift.redhat.com
>> >> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> >>
>> >>
>> >>
>> >> ___
>> >> users mailing list
>> >> users@lists.openshift.redhat.com
>> >> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> >
>> > ___
>> > users mailing list
>> > users@lists.openshift.redhat.com
>> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Changing Prometheus rules

2019-11-20 Thread Mateus Caruccio
Isn't cluster version operator something from okd 4.x?

Em Qua, 20 de nov de 2019 05:37, Simon Pasquier 
escreveu:

> On Tue, Nov 19, 2019 at 6:33 PM Mateus Caruccio
>  wrote:
> >
> > You must disable cluster-monitoring-operator since it will try to
> reconcile the whole monitoring stack.
> >
> > $ oc scale --replicas=0 deploy/cluster-monitoring-operator
>
> You'd need to disable the cluster version operator too IIRC and this
> has a bigger impact.
>
> >
> > Muting alerts using inhibit rules may have an unexpected side-effect as
> noted by [1]. The recommended approach is to send alerts for a "blackhole"
> receiver (rationale and example in the link)
> >
> > [1]
> https://medium.com/@wrossmann/suppressing-informational-alerts-with-prometheus-and-alertmanager-4237feab7ce9
>
> What I've described should work because source and target labels won't
> match the same alerts. Agreed that blackholing the notification is
> also a good solution.
>
> >
> > --
> > Mateus Caruccio / Master of Puppets
> > GetupCloud.com
> > We make the infrastructure invisible
> > Gartner Cool Vendor 2017
> >
> >
> > Em ter., 19 de nov. de 2019 às 13:27, Tim Dudgeon 
> escreveu:
> >>
> >> No joy with that approach. I tried editing the ConfigMap and the CRD
> but both got reset when the cluster-monitoring-operator was restarted.
> >>
> >> Looks like I'll have to live with silencing the alert.
> >>
> >> On 19/11/2019 07:56, Vladimir REMENAR wrote:
> >>
> >> Hi Tim,
> >>
> >> You need to stop cluster-monitoring-operator and then edit the
> configmap. If cluster-monitoring-operator is running while editing the
> configmap it will always revert it to default.
> >>
> >>
> >> Uz pozdrav,
> >> Vladimir Remenar
> >>
> >>
> >>
> >> From:Tim Dudgeon 
> >> To:Simon Pasquier 
> >> Cc:users 
> >> Date:18.11.2019 17:46
> >> Subject:Re: Changing Prometheus rules
> >> Sent by:users-boun...@lists.openshift.redhat.com
> >> 
> >>
> >>
> >>
> >> The KubeAPILatencyHigh alert fires several times a day for us (on 2
> >> different OKD clusters).
> >>
> >> On 18/11/2019 15:17, Simon Pasquier wrote:
> >> > The Prometheus instances deployed by the cluster monitoring operator
> >> > are read-only and can't be customized.
> >> >
> https://docs.openshift.com/container-platform/3.11/install_config/prometheus_cluster_monitoring.html#alerting-rules_prometheus-cluster-monitoring
> >> >
> >> > Can you provide more details about which alerts are noisy?
> >> >
> >> > On Mon, Nov 18, 2019 at 2:43 PM Tim Dudgeon 
> wrote:
> >> >> What is the "right" way to edit Prometheus rules that are deployed by
> >> >> default on OKD 3.11?
> >> >> I have alerts that are annoyingly noisy, and want to silence them
> forever!
> >> >>
> >> >> I tried editing the definition of the PrometheusRule CRD and/or the
> >> >> prometheus-k8s-rulefiles-0 ConfigMap in the openshift-monitoring
> project
> >> >> but my changes keep getting reverted back to the original.
> >> >>
> >> >> ___
> >> >> users mailing list
> >> >> users@lists.openshift.redhat.com
> >> >> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >> >>
> >>
> >> ___
> >> users mailing list
> >> users@lists.openshift.redhat.com
> >> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >>
> >>
> >>
> >> ___
> >> users mailing list
> >> users@lists.openshift.redhat.com
> >> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


[OKD 4.x]: first preview drop - please help out with testing and feedback

2019-11-20 Thread Daniel Comnea
Hi folks,

For those who are not following the OKD working group updates or not
hanging out on the openshift-dev/users K8s Slack channels, please be aware
of the announcement sent out [1] by Clayton.

We would very much appreciate it if folks helped out with testing and
provided feedback.

Note we haven't finalized the process for where folks should raise issues;
in the last OKD wg meeting a few suggestions were made but no conclusion
was reached yet. Hopefully a decision will be made soon and circulated
around.


Cheers

[1] https://mobile.twitter.com/smarterclayton/status/1196477646885965824
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Changing Prometheus rules

2019-11-20 Thread Simon Pasquier
On Tue, Nov 19, 2019 at 6:33 PM Mateus Caruccio
 wrote:
>
> You must disable cluster-monitoring-operator since it will try to reconcile 
> the whole monitoring stack.
>
> $ oc scale --replicas=0 deploy/cluster-monitoring-operator

You'd need to disable the cluster version operator too IIRC and this
has a bigger impact.
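
(On 4.x that would look roughly like the commands below; 3.11 has no CVO, so
there only the monitoring operator applies. Nothing gets reconciled while
the CVO is scaled down.)

$ oc -n openshift-cluster-version scale --replicas=0 deploy/cluster-version-operator
$ oc -n openshift-monitoring scale --replicas=0 deploy/cluster-monitoring-operator
# ...edit the PrometheusRule / ConfigMap objects...
$ oc -n openshift-cluster-version scale --replicas=1 deploy/cluster-version-operator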

>
> Muting alerts using inhibit rules may have an unexpected side-effect as noted 
> by [1]. The recommended approach is to send alerts for a "blackhole" receiver 
> (rationale and example in the link)
>
> [1] 
> https://medium.com/@wrossmann/suppressing-informational-alerts-with-prometheus-and-alertmanager-4237feab7ce9

What I've described should work because source and target labels won't
match the same alerts. Agreed that blackholing the notification is
also a good solution.
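
(A sketch of the blackhole approach, using the KubeAPILatencyHigh alert Tim
mentioned; in OCP/OKD the Alertmanager config lives in the alertmanager-main
secret in openshift-monitoring, IIRC:)

route:
  routes:
  - match:
      alertname: KubeAPILatencyHigh
    receiver: blackhole
receivers:
- name: blackhole   # no notification configs, so matched alerts go nowhere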

>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> Gartner Cool Vendor 2017
>
>
> Em ter., 19 de nov. de 2019 às 13:27, Tim Dudgeon  
> escreveu:
>>
>> No joy with that approach. I tried editing the ConfigMap and the CRD but 
>> both got reset when the cluster-monitoring-operator was restarted.
>>
>> Looks like I'll have to live with silencing the alert.
>>
>> On 19/11/2019 07:56, Vladimir REMENAR wrote:
>>
>> Hi Tim,
>>
>> You need to stop cluster-monitoring-operator and then edit the configmap. 
>> If cluster-monitoring-operator is running while editing the configmap it 
>> will always revert it to default.
>>
>>
>> Uz pozdrav,
>> Vladimir Remenar
>>
>>
>>
>> From:Tim Dudgeon 
>> To:Simon Pasquier 
>> Cc:users 
>> Date:18.11.2019 17:46
>> Subject:Re: Changing Prometheus rules
>> Sent by:users-boun...@lists.openshift.redhat.com
>> 
>>
>>
>>
>> The KubeAPILatencyHigh alert fires several times a day for us (on 2
>> different OKD clusters).
>>
>> On 18/11/2019 15:17, Simon Pasquier wrote:
>> > The Prometheus instances deployed by the cluster monitoring operator
>> > are read-only and can't be customized.
>> > https://docs.openshift.com/container-platform/3.11/install_config/prometheus_cluster_monitoring.html#alerting-rules_prometheus-cluster-monitoring
>> >
>> > Can you provide more details about which alerts are noisy?
>> >
>> > On Mon, Nov 18, 2019 at 2:43 PM Tim Dudgeon  wrote:
>> >> What is the "right" way to edit Prometheus rules that are deployed by
>> >> default on OKD 3.11?
>> >> I have alerts that are annoyingly noisy, and want to silence them forever!
>> >>
>> >> I tried editing the definition of the PrometheusRule CRD and/or the
>> >> prometheus-k8s-rulefiles-0 ConfigMap in the openshift-monitoring project
>> >> but my changes keep getting reverted back to the original.
>> >>
>> >> ___
>> >> users mailing list
>> >> users@lists.openshift.redhat.com
>> >> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> >>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users