Re: Changing Prometheus rules

2019-11-18 Thread Vladimir REMENAR
Hi Tim,

You need to stop the cluster-monitoring-operator first and then edit the 
ConfigMap. If the cluster-monitoring-operator is running while you edit the 
ConfigMap, it will always revert it to the defaults.


Best regards,
Vladimir Remenar



From:   Tim Dudgeon 
To: Simon Pasquier 
Cc: users 
Date:   18.11.2019 17:46
Subject: Re: Changing Prometheus rules
Sent by: users-boun...@lists.openshift.redhat.com



The KubeAPILatencyHigh alert fires several times a day for us (on 2 
different OKD clusters).

On 18/11/2019 15:17, Simon Pasquier wrote:
> The Prometheus instances deployed by the cluster monitoring operator
> are read-only and can't be customized.
> https://docs.openshift.com/container-platform/3.11/install_config/prometheus_cluster_monitoring.html#alerting-rules_prometheus-cluster-monitoring

>
> Can you provide more details about which alerts are noisy?
>
> On Mon, Nov 18, 2019 at 2:43 PM Tim Dudgeon wrote:
>> What is the "right" way to edit Prometheus rules that are deployed by
>> default on OKD 3.11?
>> I have alerts that are annoyingly noisy, and want to silence them
>> forever!
>>
>> I tried editing the definition of the PrometheusRule CRD and/or the
>> prometheus-k8s-rulefiles-0 ConfigMap in the openshift-monitoring
>> project
>> but my changes keep getting reverted back to the original.
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>






Re: sftp service on cluster - how to do it

2019-11-18 Thread Mateus Caruccio
I guess one could use either Service.type=LoadBalancer (one ELB per service
on port 22) or Service.type=NodePort with a single ELB mapping
ELB-PORT:NODE-PORT for each service.
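A minimal sketch of the NodePort variant (the name, namespace, label, and
node port here are all assumptions, not taken from any real setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sftp              # hypothetical service name
  namespace: sftp         # hypothetical namespace
spec:
  type: NodePort
  selector:
    app: sftp             # assumes the sftp pods carry this label
  ports:
  - name: ssh
    port: 22              # in-cluster service port
    targetPort: 22        # container port of the sftp server
    nodePort: 30022       # must fall in the NodePort range (default 30000-32767)
```

The ELB listener on port 22 would then forward to 30022 on each node.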

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017


On Sun, Nov 17, 2019 at 10:13 PM Just Marvin <
marvin.the.cynical.ro...@gmail.com> wrote:

> Tobias,
>
> I _will_ have access to load balancers if needed, but at the moment, I
> need to understand how it works. Assume that I do: what exactly does "proxy
> to the internal sftp service" mean? I assume "sftp service" would be the
> service that I set up, but which piece is the proxy? I don't see that load
> balancer and proxy functions as being the same, so it seems like you are
> talking about a third piece. What piece is that?
>
> Regards,
> Marvin
>
> On Sun, Nov 17, 2019 at 1:30 PM Tobias Florek 
> wrote:
>
>> Hi!
>>
>> I assume you don't have easy access to load balancers, because that
>> would be easiest.  Just proxy to the internal sftp service.
>>
>> If you don't, I have used a NodePort service in the past.  You will lose
>> the nice port 22, though.  If you control the node's ssh daemon, you can
>> also use ProxyJump.  Be sure to lock down ssh for the users, though.
>>
>> Cheers,
>>  Tobias Florek
>>


Re: Changing Prometheus rules

2019-11-18 Thread Tim Dudgeon
The KubeAPILatencyHigh alert fires several times a day for us (on 2 
different OKD clusters).


On 18/11/2019 15:17, Simon Pasquier wrote:

The Prometheus instances deployed by the cluster monitoring operator
are read-only and can't be customized.
https://docs.openshift.com/container-platform/3.11/install_config/prometheus_cluster_monitoring.html#alerting-rules_prometheus-cluster-monitoring

Can you provide more details about which alerts are noisy?

On Mon, Nov 18, 2019 at 2:43 PM Tim Dudgeon  wrote:

What is the "right" way to edit Prometheus rules that are deployed by
default on OKD 3.11?
I have alerts that are annoyingly noisy, and want to silence them forever!

I tried editing the definition of the PrometheusRule CRD and/or the
prometheus-k8s-rulefiles-0 ConfigMap in the openshift-monitoring project
but my changes keep getting reverted back to the original.




Re: OKD 3.11 - Volume and Claim Pre-binding - volumes for a namespace

2019-11-18 Thread Marc Boorshtein
Ended up doing the same thing with a validating webhook using OPA

On Mon, Nov 18, 2019, 4:13 AM Alan Christie <
achris...@informaticsmatters.com> wrote:

> Thanks,
>
> I was wondering whether I could create an arbitrary storage class so (if
> the application can be adjusted to name that class) this might well be a
> solution. I’ll poke around today, thanks.
>
>
> Alan Christie
> achris...@informaticsmatters.com
>
>
>
> On 18 Nov 2019, at 12:08 pm, Frederic Giloux  wrote:
>
> Hi Alan
>
> you can use a storage class for the purpose [1] and pair it with quotas
> for the defined storage class [2] as proposed by Samuel.
>
> [1]
> https://docs.okd.io/3.11/install_config/storage_examples/storage_classes_legacy.html#install-config-storage-examples-storage-classes-legacy
> [2]
> https://docs.okd.io/3.11/dev_guide/compute_resources.html#dev-managed-by-quota
>
> Regards,
>
> Frédéric
>
> On Mon, Nov 18, 2019 at 12:38 PM Samuel Martín Moro 
> wrote:
>
>> Not that I know of.
>> The claimRef is not meant to be changed manually. Once set, PV should
>> have been bound already, you won't be able to only set a namespace.
>>
>> Have you considered using ResourceQuotas?
>>
>> To deny users in a Project from requesting persistent storage, you could
>> use the following:
>>
>> apiVersion: v1
>> kind: ResourceQuota
>> metadata:
>>   name: no-pv
>>   namespace: project-with-no-persistent-volumes
>> spec:
>>   hard:
>>     persistentvolumeclaims: 0
>>
>>
>> On Mon, Nov 18, 2019 at 12:00 PM Alan Christie <
>> achris...@informaticsmatters.com> wrote:
>>
>>> On the topic of volume claim pre-binding …
>>>
>>> Is there a pattern for creating volumes that can only be bound to a PVC
>>> from a known namespace, specifically when the PVC name may not be known in
>>> advance?
>>>
>>> In my specific case I don’t have control over the application's PVC name
>>> but I do know its namespace. I need to prevent the pre-allocated volume
>>> from being bound to a claim from a namespace other than the one the
>>> application’s in.
>>>
>>> The `PersistentVolume` spec contains a `claimRef` section but I suspect
>>> that you can’t just fill-out the `namespace`, you need to provide both the
>>> `name` and `namespace` (because although the former doesn’t generate an
>>> error it doesn’t work).
>>>
>>> Any suggestions?
>>>
>>> Alan Christie
>>> achris...@informaticsmatters.com
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Samuel Martín Moro
>> {EPITECH.} 2011
>>
>> "Nobody wants to say how this works.
>>  Maybe nobody knows ..."
>>   Xorg.conf(5)
>
>
> --
> *Frédéric Giloux*
> Senior Technical Account Manager
> Red Hat Germany
>
> fgil...@redhat.com M: +49-174-172-4661
>
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
> 
> Red Hat GmbH, http://www.de.redhat.com/, Sitz: Grasbrunn,
> Handelsregister: Amtsgericht München, HRB 153243,
> Geschäftsführer: Charles Cachera, Michael O'Neill, Tom Savage, Eric Shander
>
>


Re: Changing Prometheus rules

2019-11-18 Thread Simon Pasquier
The Prometheus instances deployed by the cluster monitoring operator
are read-only and can't be customized.
https://docs.openshift.com/container-platform/3.11/install_config/prometheus_cluster_monitoring.html#alerting-rules_prometheus-cluster-monitoring

Can you provide more details about which alerts are noisy?

On Mon, Nov 18, 2019 at 2:43 PM Tim Dudgeon  wrote:
>
> What is the "right" way to edit Prometheus rules that are deployed by
> default on OKD 3.11?
> I have alerts that are annoyingly noisy, and want to silence them forever!
>
> I tried editing the definition of the PrometheusRule CRD and/or the
> prometheus-k8s-rulefiles-0 ConfigMap in the openshift-monitoring project
> but my changes keep getting reverted back to the original.
>



Changing Prometheus rules

2019-11-18 Thread Tim Dudgeon
What is the "right" way to edit Prometheus rules that are deployed by 
default on OKD 3.11?

I have alerts that are annoyingly noisy, and want to silence them forever!

I tried editing the definition of the PrometheusRule CRD and/or the 
prometheus-k8s-rulefiles-0 ConfigMap in the openshift-monitoring project 
but my changes keep getting reverted back to the original.
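One route the OKD 3.11 monitoring stack does support is to stop noisy alerts
from notifying, rather than editing the rules themselves: the Alertmanager
configuration (stored in the alertmanager-main secret in openshift-monitoring)
can send selected alerts to a receiver with no notification targets. A hedged
sketch of such a config fragment (the alert name and receiver names are
examples only):

```yaml
route:
  receiver: default
  routes:
  # Route the noisy alert to a receiver that notifies nobody.
  - match:
      alertname: KubeAPILatencyHigh
    receiver: "null"
receivers:
- name: default
  # ... your real notification targets go here ...
- name: "null"        # no notifier configured: alerts land here silently
```

Unlike edits to the PrometheusRule objects, the Alertmanager secret is not
reverted by the cluster monitoring operator.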





Re: OKD 3.11 - Volume and Claim Pre-binding - volumes for a namespace

2019-11-18 Thread Alan Christie
Thanks,

I was wondering whether I could create an arbitrary storage class so (if the 
application can be adjusted to name that class) this might well be a solution. 
I’ll poke around today, thanks.


Alan Christie
achris...@informaticsmatters.com



> On 18 Nov 2019, at 12:08 pm, Frederic Giloux  wrote:
> 
> Hi Alan
> 
> you can use a storage class for the purpose [1] and pair it with quotas for 
> the defined storage class [2] as proposed by Samuel.
> 
> [1] https://docs.okd.io/3.11/install_config/storage_examples/storage_classes_legacy.html#install-config-storage-examples-storage-classes-legacy
> [2] https://docs.okd.io/3.11/dev_guide/compute_resources.html#dev-managed-by-quota
> 
> Regards,
> 
> Frédéric
> 
> On Mon, Nov 18, 2019 at 12:38 PM Samuel Martín Moro wrote:
> Not that I know of.
> The claimRef is not meant to be changed manually. Once set, PV should have 
> been bound already, you won't be able to only set a namespace.
> 
> Have you considered using ResourceQuotas?
> 
> To deny users in a Project from requesting persistent storage, you could use 
> the following: 
> 
> apiVersion: v1
> kind: ResourceQuota
> metadata:
>   name: no-pv
>   namespace: project-with-no-persistent-volumes
> spec:
>   hard:
>     persistentvolumeclaims: 0
> 
> 
> On Mon, Nov 18, 2019 at 12:00 PM Alan Christie <
> achris...@informaticsmatters.com> wrote:
> On the topic of volume claim pre-binding …
> 
> Is there a pattern for creating volumes that can only be bound to a PVC from 
> a known namespace, specifically when the PVC name may not be known in advance?
> 
> In my specific case I don’t have control over the application's PVC name but 
> I do know its namespace. I need to prevent the pre-allocated volume from 
> being bound to a claim from a namespace other than the one the application’s 
> in.
> 
> The `PersistentVolume` spec contains a `claimRef` section but I suspect that 
> you can’t just fill-out the `namespace`, you need to provide both the `name` 
> and `namespace` (because although the former doesn’t generate an error it 
> doesn’t work).
> 
> Any suggestions?
> 
> Alan Christie
> achris...@informaticsmatters.com 
> 
> 
> 
> 
> 
> 
> -- 
> Samuel Martín Moro
> {EPITECH.} 2011
> 
> "Nobody wants to say how this works.
>  Maybe nobody knows ..."
>   Xorg.conf(5)
> 
> 
> -- 
> Frédéric Giloux
> Senior Technical Account Manager
> Red Hat Germany
> 
> fgil...@redhat.com M: +49-174-172-4661
> 
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
> 
> Red Hat GmbH, http://www.de.redhat.com/, Sitz: Grasbrunn,
> Handelsregister: Amtsgericht München, HRB 153243,
> Geschäftsführer: Charles Cachera, Michael O'Neill, Tom Savage, Eric Shander



Re: OKD 3.11 - Volume and Claim Pre-binding - volumes for a namespace

2019-11-18 Thread Frederic Giloux
Hi Alan

you can use a storage class for the purpose [1] and pair it with quotas for
the defined storage class [2] as proposed by Samuel.

[1]
https://docs.okd.io/3.11/install_config/storage_examples/storage_classes_legacy.html#install-config-storage-examples-storage-classes-legacy
[2]
https://docs.okd.io/3.11/dev_guide/compute_resources.html#dev-managed-by-quota

Regards,

Frédéric

On Mon, Nov 18, 2019 at 12:38 PM Samuel Martín Moro 
wrote:

> Not that I know of.
> The claimRef is not meant to be changed manually. Once set, PV should have
> been bound already, you won't be able to only set a namespace.
>
> Have you considered using ResourceQuotas?
>
> To deny users in a Project from requesting persistent storage, you could
> use the following:
>
> apiVersion: v1
> kind: ResourceQuota
> metadata:
>   name: no-pv
>   namespace: project-with-no-persistent-volumes
> spec:
>   hard:
>     persistentvolumeclaims: 0
>
>
> On Mon, Nov 18, 2019 at 12:00 PM Alan Christie <
> achris...@informaticsmatters.com> wrote:
>
>> On the topic of volume claim pre-binding …
>>
>> Is there a pattern for creating volumes that can only be bound to a PVC
>> from a known namespace, specifically when the PVC name may not be known in
>> advance?
>>
>> In my specific case I don’t have control over the application's PVC name
>> but I do know its namespace. I need to prevent the pre-allocated volume
>> from being bound to a claim from a namespace other than the one the
>> application’s in.
>>
>> The `PersistentVolume` spec contains a `claimRef` section but I suspect
>> that you can’t just fill-out the `namespace`, you need to provide both the
>> `name` and `namespace` (because although the former doesn’t generate an
>> error it doesn’t work).
>>
>> Any suggestions?
>>
>> Alan Christie
>> achris...@informaticsmatters.com
>>
>>
>>
>>
>
>
> --
> Samuel Martín Moro
> {EPITECH.} 2011
>
> "Nobody wants to say how this works.
>  Maybe nobody knows ..."
>   Xorg.conf(5)


-- 
*Frédéric Giloux*
Senior Technical Account Manager
Red Hat Germany

fgil...@redhat.com M: +49-174-172-4661

redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

Red Hat GmbH, http://www.de.redhat.com/, Sitz: Grasbrunn,
Handelsregister: Amtsgericht München, HRB 153243,
Geschäftsführer: Charles Cachera, Michael O'Neill, Tom Savage, Eric Shander


Re: OKD 3.11 - Volume and Claim Pre-binding - volumes for a namespace

2019-11-18 Thread Samuel Martín Moro
Not that I know of.
The claimRef is not meant to be changed manually. Once set, PV should have
been bound already, you won't be able to only set a namespace.

Have you considered using ResourceQuotas?

To deny users in a Project from requesting persistent storage, you could
use the following:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: no-pv
  namespace: project-with-no-persistent-volumes
spec:
  hard:
    persistentvolumeclaims: 0


On Mon, Nov 18, 2019 at 12:00 PM Alan Christie <
achris...@informaticsmatters.com> wrote:

> On the topic of volume claim pre-binding …
>
> Is there a pattern for creating volumes that can only be bound to a PVC
> from a known namespace, specifically when the PVC name may not be known in
> advance?
>
> In my specific case I don’t have control over the application's PVC name
> but I do know its namespace. I need to prevent the pre-allocated volume
> from being bound to a claim from a namespace other than the one the
> application’s in.
>
> The `PersistentVolume` spec contains a `claimRef` section but I suspect
> that you can’t just fill-out the `namespace`, you need to provide both the
> `name` and `namespace` (because although the former doesn’t generate an
> error it doesn’t work).
>
> Any suggestions?
>
> Alan Christie
> achris...@informaticsmatters.com
>
>
>
>


-- 
Samuel Martín Moro
{EPITECH.} 2011

"Nobody wants to say how this works.
 Maybe nobody knows ..."
  Xorg.conf(5)


OKD 3.11 - Volume and Claim Pre-binding - volumes for a namespace

2019-11-18 Thread Alan Christie
On the topic of volume claim pre-binding …

Is there a pattern for creating volumes that can only be bound to a PVC from a 
known namespace, specifically when the PVC name may not be known in advance?

In my specific case I don’t have control over the application's PVC name but I 
do know its namespace. I need to prevent the pre-allocated volume from being 
bound to a claim from a namespace other than the one the application’s in.

The `PersistentVolume` spec contains a `claimRef` section but I suspect that 
you can’t just fill-out the `namespace`, you need to provide both the `name` 
and `namespace` (because although the former doesn’t generate an error it 
doesn’t work).

Any suggestions?
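For reference, this is the shape of pre-binding being discussed: a
PersistentVolume whose claimRef names both fields. A sketch with hypothetical
names throughout (volume name, NFS source, namespace, and claim name are all
examples; as noted above, the claim name must be known for this to bind):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data               # hypothetical volume name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:                         # any volume source works; NFS is only an example
    server: nfs.example.com
    path: /exports/app-data
  claimRef:                    # both fields are needed for pre-binding
    namespace: app-namespace   # the namespace the application runs in
    name: app-data-claim       # the PVC name the application will create
```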

Alan Christie
achris...@informaticsmatters.com






Re: external ips - seems like handwaving in docs

2019-11-18 Thread Samuel Martín Moro
Hi,

External IPs rely on either:
- some cloud integration, which provisions some kind of LoadBalancer; the
allocated IP then shows up as the Service's external IP
- on bare metal: some pre-configured subnet, a pool of IPs that may be
allocated to those services (I haven't seen this documented much yet in the
4.x docs; you can find some details in the 3.x docs, e.g.
https://docs.openshift.com/container-platform/3.4/dev_guide/expose_service/expose_internal_ip_service.html#defining_ip_range )
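For the bare-metal case, the IP can be set directly on the Service; a hedged
sketch (the service name, label, and address are examples, and the cluster
must be configured to allow and route that range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name
spec:
  selector:
    app: my-app           # assumes pods labeled app: my-app
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - 192.0.2.10            # example address from an admin-allowed, routed pool
```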

On CRC, you might not be able to do this.

Regards.

On Mon, Nov 18, 2019 at 2:17 AM Just Marvin <
marvin.the.cynical.ro...@gmail.com> wrote:

> Hi,
>
>
> https://docs.openshift.com/container-platform/4.2/networking/configuring-ingress-cluster-traffic/configuring-ingress-cluster-traffic-service-external-ip.html#nw-creating-project-and-service_configuring-ingress-cluster-traffic-service-external-ip
>
> Step 4 seems like magic. When I do that on my local CRC install, I get
> this:
>
> [zaphod@oc6010654212 Downloads]$ oc get svc -n openshift-ingress
> NAME  TYPECLUSTER-IP   EXTERNAL-IP
> PORT(S)   AGE
> router-internal-default   ClusterIP   172.30.165.244   <none>   80/TCP,443/TCP,1936/TCP   18d
> [zaphod@oc6010654212 Downloads]$
>
> Which is what I would have expected to see. Where is that
> "router-default" entry coming from? I've added an external ip to the crc
> device, so I think I've met the prerequisites. What's the step that I'm
> missing?
>
> Regards,
> Marvin


-- 
Samuel Martín Moro
{EPITECH.} 2011

"Nobody wants to say how this works.
 Maybe nobody knows ..."
  Xorg.conf(5)