Router

2020-01-23 Thread Srinivas Naga Kotaru (skotaru)
Quick question

Is it possible to expose different routers for different routes in the same
project? One approach is to create different projects, but we have a use case
where we want to expose different routers for different routes within one
project. We know there is a namespace label on every project, and all the
routes created in that project by default use that namespace's router.
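
A hedged sketch of per-route sharding in OpenShift 3.x (router name and label
key below are illustrative, not from this thread): a router can be told to
admit only routes carrying a matching label via its ROUTE_LABELS selector, and
individual routes opt in by being labeled, independent of project:

```shell
# Point a second router at labeled routes only (names illustrative):
oc set env dc/router-internal ROUTE_LABELS="router=internal" -n default

# Opt an individual route into that shard:
oc label route my-app-route router=internal
```

Note the default router would typically need a complementary selector as well,
or it will keep admitting those routes too.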


--
Srinivas Kotaru

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


controller -> api talk

2019-10-08 Thread Srinivas Naga Kotaru (skotaru)
Does the controller talk to the API server using hostname:port or the LB VIP?
We have a 3-node master setup with the API servers load-balanced behind a VIP.
Trying to understand whether the controller uses a direct path to talk to the
API servers or goes via the VIP like other clients do.

--
Srinivas Kotaru



Time zone in deployment/POD

2019-03-04 Thread Srinivas Naga Kotaru (skotaru)
Hi

How do we set the correct timezone in containers/pods? Our hypervisors and VMs
use the GMT time zone. However, we observed that the time zone of the image
build node is what takes effect, rather than that of the node where the
container is running.

Is it true that the image build host's timezone is always embedded as the TZ
inside the container?
Why doesn't the container use the node's TZ?
We don't want to inject TZ=GMT as an environment variable for all the
deployments, as that would trigger massive re-deployments across clusters.

What is the best solution here?
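
If an explicit timezone per workload is acceptable, one hedged option
(deployment name and zone illustrative) is setting TZ on just the deployments
that need it, since glibc/tzdata in most images honors TZ; note this does
trigger a rollout, so it conflicts with the no-redeploy constraint above:

```shell
# Set the timezone for one deployment (triggers a redeploy of that DC only):
oc set env dc/myapp TZ=America/New_York

# Verify inside a running pod:
oc rsh dc/myapp date
```

The common no-TZ alternative is mounting the node's /etc/localtime read-only
via a hostPath volume, but that requires an SCC that allows hostPath.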

Srinivas Kotaru



Re: quota question

2018-04-05 Thread Srinivas Naga Kotaru (skotaru)
Thanks Clayton .


--
Srinivas Kotaru
From: Clayton Coleman <ccole...@redhat.com>
Date: Wednesday, April 4, 2018 at 5:29 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: quota question

It's definitely not the latter (requested memory for the node is 15GB total).
It would almost certainly place B; then, if A doesn't release memory, it would
be evicted because it is using the most over its request on the node and there
are no other priority rules in place (guaranteed -> burstable -> best-effort).
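
The eviction ordering Clayton describes follows the pod QoS classes; assuming
a reasonably recent cluster, the class a pod landed in can be read from its
status (pod name illustrative):

```shell
# Pod A (request set, no limit) should report Burstable;
# a pod with no requests or limits at all reports BestEffort.
oc get pod pod-a -o jsonpath='{.status.qosClass}'
```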

On Wed, Apr 4, 2018 at 8:14 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Want to validate a statement.

Just assume we have only one node with 60 GB of memory. Pod A is scheduled
with a request of 10 GB and no limit. After some time, it starts using 55 GB
(ignore system reserves for now). The system is now left with only 5 GB free.
If a new pod B is scheduled with a request of 10 GB, what is the system's
behavior?


  *   Will it place pod A and evict pod B softly, since it can't accommodate
both pods on the same node?
  *   Will scheduling fail for pod B, since the node doesn't have enough
requested memory?


--
Srinivas Kotaru



quota question

2018-04-04 Thread Srinivas Naga Kotaru (skotaru)
Want to validate a statement.

Just assume we have only one node with 60 GB of memory. Pod A is scheduled
with a request of 10 GB and no limit. After some time, it starts using 55 GB
(ignore system reserves for now). The system is now left with only 5 GB free.
If a new pod B is scheduled with a request of 10 GB, what is the system's
behavior?


  *   Will it place pod A and evict pod B softly, since it can't accommodate
both pods on the same node?
  *   Will scheduling fail for pod B, since the node doesn't have enough
requested memory?


--
Srinivas Kotaru


CAP_LINUX_IMMUTABLE

2018-03-28 Thread Srinivas Naga Kotaru (skotaru)
Is it possible to use the CAP_LINUX_IMMUTABLE security context with the
restricted SCC? One of our clients wants to run chattr +a /tmp/logs/*.log in a
pod. We don't want to relax the SCC or grant the privileged SCC to any
clients.

Wondering whether there is any way they can use this command inside the pod
directly, or via the pod definition's security context?
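
For reference, chattr +a sets the append-only file attribute, which does
require CAP_LINUX_IMMUTABLE. A hedged sketch of requesting it in the pod spec
(name and image illustrative); this will only be admitted if an SCC available
to the user lists LINUX_IMMUTABLE in allowedCapabilities, which restricted
does not by default:

```shell
oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: chattr-demo                          # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
    securityContext:
      capabilities:
        add: ["LINUX_IMMUTABLE"]             # needed by chattr +a / +i
EOF
```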


--
Srinivas Kotaru


Re: Configuring Private DNS Zones and Upstream Nameservers in Openshift

2018-02-22 Thread Srinivas Naga Kotaru (skotaru)
Thanks Clayton. This is a very useful feature for clients to manage their DNS,
and it helps a lot with service-discovery-type integrations.

Sent from my iPhone

On Feb 22, 2018, at 4:08 PM, Clayton Coleman 
<ccole...@redhat.com<mailto:ccole...@redhat.com>> wrote:

Probably have to wait until 3.9.  We also want to move to coredns, but that 
could take longer.

On Feb 15, 2018, at 6:26 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:


Is this possible, as described in Kubernetes?

http://blog.kubernetes.io/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes.html

We have a few clients who have configured their own Consul-based DNS server
and are not using the service discovery provided by OpenShift. We built a
custom solution by adding these external zones and IP addresses to
dnsmasq.conf. This approach works but sees occasional failures, since dnsmasq
first has to check the cluster zones, honoring the order specified in
/etc/resolv.conf, and then forward to these external DNS servers.

I saw a solution in Kubernetes 1.9, as an alpha feature, that lets clients
configure their own DNS settings in pod definitions.

https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods-dns-config

Do OpenShift clients need to wait until 3.9, or is there any way currently to
solve this problem?
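
For reference, the Kubernetes 1.9 alpha feature linked above looks roughly
like this in a pod spec (values illustrative; it requires the alpha
CustomPodDNS feature gate to be enabled):

```shell
oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: consul-dns-demo                      # illustrative name
spec:
  dnsPolicy: "None"                          # skip the cluster resolver entirely
  dnsConfig:
    nameservers:
    - 10.1.2.3                               # illustrative Consul DNS address
    searches:
    - service.consul
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
EOF
```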

--
Srinivas Kotaru


Re: Leader election in Kubernetes control plane

2018-02-20 Thread Srinivas Naga Kotaru (skotaru)
Thanks, that makes sense. We are using 3.6 currently.

-- 

Srinivas Kotaru
On 2/20/18, 9:46 PM, "Takayoshi Kimura" <tkim...@redhat.com> wrote:

In 3.7+ "oc get cm openshift-master-controllers -n kube-system -o yaml" you 
can see the annotation described in that article.

Regards,
Takayoshi
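
For reference, the annotation in question should be the standard upstream
leader-election one; a hedged way to read the current holder directly (3.7+,
per the reply above):

```shell
oc get cm openshift-master-controllers -n kube-system \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
# The value is a small JSON blob; its holderIdentity field names the active master.
```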

On Wed, 21 Feb 2018 14:37:32 +0900,
    "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com> wrote:
> 
> It has just client-ca-file. We have 3 masters in each cluster. Not sure how
to identify which controller manager is active. I usually find which one is
writing logs by using journalctl -u
atomic-openshift-master-controllers.service; the passive ones don't write or
generate any logs.
> 
> -- 
> 
> Srinivas Kotaru
> 
> From: Clayton Coleman <ccole...@redhat.com>
> Date: Tuesday, February 20, 2018 at 9:29 PM
> To: Srinivas Naga Kotaru <skot...@cisco.com>
> Cc: dev <dev@lists.openshift.redhat.com>
> Subject: Re: Leader election in Kubernetes control plane
> 
> We use config maps - check in kube-system for that.
> 
> On Feb 15, 2018, at 2:48 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com> wrote:
> 
> While I was reading the article below, I tried to do the same to find out
which one is the active control plane in OpenShift. I could see zero endpoints
in the kube-system namespace. Am I missing something, or is this not
implemented in OpenShift?
> 
> 
https://blog.heptio.com/leader-election-in-kubernetes-control-plane-heptioprotip-1ed9fb0f3e6d
> 
> $ oc project
> Using project "kube-system" on server
> $ oc get ep
> No resources found.
> $ oc get all
> No resources found.
> 
> -- 
>
> Srinivas Kotaru
>





Re: Leader election in Kubernetes control plane

2018-02-20 Thread Srinivas Naga Kotaru (skotaru)
It has just client-ca-file. We have 3 masters in each cluster. Not sure how to
identify which controller manager is active. I usually find which one is
writing logs by using journalctl -u
atomic-openshift-master-controllers.service; the passive ones don't write or
generate any logs.

--
Srinivas Kotaru
From: Clayton Coleman <ccole...@redhat.com>
Date: Tuesday, February 20, 2018 at 9:29 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: Leader election in Kubernetes control plane

We use config maps - check in kube-system for that.

On Feb 15, 2018, at 2:48 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
While I was reading the article below, I tried to do the same to find out
which one is the active control plane in OpenShift. I could see zero endpoints
in the kube-system namespace. Am I missing something, or is this not
implemented in OpenShift?

https://blog.heptio.com/leader-election-in-kubernetes-control-plane-heptioprotip-1ed9fb0f3e6d

$ oc project
Using project "kube-system" on server
$ oc get ep
No resources found.
$ oc get all
No resources found.

--
Srinivas Kotaru


Configuring Private DNS Zones and Upstream Nameservers in Openshift

2018-02-15 Thread Srinivas Naga Kotaru (skotaru)

Is this possible, as described in Kubernetes?

http://blog.kubernetes.io/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes.html

We have a few clients who have configured their own Consul-based DNS server
and are not using the service discovery provided by OpenShift. We built a
custom solution by adding these external zones and IP addresses to
dnsmasq.conf. This approach works but sees occasional failures, since dnsmasq
first has to check the cluster zones, honoring the order specified in
/etc/resolv.conf, and then forward to these external DNS servers.

I saw a solution in Kubernetes 1.9, as an alpha feature, that lets clients
configure their own DNS settings in pod definitions.

https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods-dns-config

Do OpenShift clients need to wait until 3.9, or is there any way currently to
solve this problem?

--
Srinivas Kotaru


Leader election in Kubernetes control plane

2018-02-15 Thread Srinivas Naga Kotaru (skotaru)
While I was reading the article below, I tried to do the same to find out
which one is the active control plane in OpenShift. I could see zero endpoints
in the kube-system namespace. Am I missing something, or is this not
implemented in OpenShift?

https://blog.heptio.com/leader-election-in-kubernetes-control-plane-heptioprotip-1ed9fb0f3e6d

$ oc project
Using project "kube-system" on server
$ oc get ep
No resources found.
$ oc get all
No resources found.

--
Srinivas Kotaru


PTR record for POD

2017-10-05 Thread Srinivas Naga Kotaru (skotaru)
Hi

Is it possible to get a pod's name, given its IP address, by querying the
master DNS server?


Service lookup working:

dig +short @master kubernetes.default.svc.cluster.local
172.24.0.1

PTR lookup not working:

$ dig -x @master 172.24.0.1 +short
172.24.0.1
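
One thing worth ruling out first: in the command as written, -x is not
immediately followed by the address, so dig never actually issues a reverse
query. The corrected invocation would be as below (hedged: even then, SkyDNS
in this era may only publish PTR records for service IPs, not pod IPs):

```shell
# -x must directly precede the address being reverse-resolved:
dig +short @master -x 172.24.0.1

# Equivalent explicit PTR query:
dig +short @master PTR 1.0.24.172.in-addr.arpa
```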

--
Srinivas Kotaru


jolokia REST API

2017-06-28 Thread Srinivas Naga Kotaru (skotaru)
Hi

Is there any way we can collect metrics from the Jolokia REST API endpoint? I
know OpenShift uses Jolokia for Java apps; I'm not sure whether this applies
only to the Red Hat supplied images or in general.

I am trying to collect metrics from the Jolokia REST API with the
InfluxDB/Telegraf agent.
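
A hedged sketch of polling Jolokia directly (pod IP illustrative): the Red Hat
Java images expose the agent on port 8778, and the read path and MBean name
below are standard Jolokia, so this should work for any image that bundles the
agent. Telegraf also ships a jolokia input plugin that can poll the same
endpoint.

```shell
# Read JVM heap usage from a pod running the Jolokia agent:
curl -k https://10.1.2.3:8778/jolokia/read/java.lang:type=Memory/HeapMemoryUsage
```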


--
Srinivas Kotaru


Prometheus Vs raw metrics

2017-06-26 Thread Srinivas Naga Kotaru (skotaru)
Hi

What is the difference between running a dedicated Prometheus server and using
the metrics exposed by oc get --raw /metrics? If both are the same in terms of
accuracy and available information, does it make sense to run yet another
Prometheus server to pull cluster metrics?

I am trying to set up some metrics for etcd health and its read/write latency
and throughput.
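
Broadly (hedged), the raw endpoint is a single point-in-time scrape in the
Prometheus text format, while a Prometheus server adds retention, querying,
and alerting on top of the same samples; e.g.:

```shell
# A one-shot look at the etcd sample families the API server exposes:
oc get --raw /metrics | grep '^etcd_' | head
```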


--
Srinivas Kotaru


Re: Usage is more then Limits

2017-05-23 Thread Srinivas Naga Kotaru (skotaru)
Can someone comment on this?


--
Srinivas Kotaru

From: Srinivas Naga Kotaru 
Date: Wednesday, May 10, 2017 at 12:25 PM
To: dev 
Subject: Usage is more then Limits

Hi

Is it possible for usage to be more than limits? We observed some nodes have
more usage than the allowed limits in our cluster. We have quotas implemented,
LimitRange enabled per project (default limits and requests), and a cluster
overcommit percentage specified (10% of CPU limits and 25% of memory limits
become the requests used for scheduling).

My understanding, based on the above, is that requests are always 1/10 of the
CPU limit and 1/4 of the memory limit, and limits can go as high as clients
specify, but usage should stay below limits since clients can't go beyond
their limits.


--
Srinivas Kotaru


Usage is more then Limits

2017-05-10 Thread Srinivas Naga Kotaru (skotaru)
Hi

Is it possible for usage to be more than limits? We observed some nodes have
more usage than the allowed limits in our cluster. We have quotas implemented,
LimitRange enabled per project (default limits and requests), and a cluster
overcommit percentage specified (10% of CPU limits and 25% of memory limits
become the requests used for scheduling).

My understanding, based on the above, is that requests are always 1/10 of the
CPU limit and 1/4 of the memory limit, and limits can go as high as clients
specify, but usage should stay below limits since clients can't go beyond
their limits.


--
Srinivas Kotaru


arp table increase

2017-04-18 Thread Srinivas Naga Kotaru (skotaru)
Hi

We had an issue where one client was joining Consul agents from different
projects to a central project where they keep all the servers. All agents use
a local service account, but use the endpoints approach to connect to the
remote Consul server. The remote Consul service has an ingress IP attached.

Flow:

Project1 --> local service account --> endpoint/ingress IP of remote server
--> Consul server (pet set)

Using the above approach, the Consul agents sometimes (not always) fail with
"unable to connect", and the behavior is inconsistent. If we remove the local
service account and directly use the external ingress IP (of another project
in the same cluster), the join is always successful.

We made the change below, and increasing the ARP table size fixed the issue.
Want to confirm whether this has any impact on the cluster network in the
future, or any side effects.

https://github.com/hashicorp/serf/issues/263

https://trello.com/c/DZb8ghlZ/228-5-scale-document-tuning-options-for-arp-cache

https://www.serveradminblog.com/2011/02/neighbour-table-overflow-sysctl-conf-tunning/
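
For reference, the tuning described in those links is the kernel
neighbor-table garbage-collection thresholds; the values below are
illustrative and should be sized to the cluster's subnets, and the main cost
of raising them is a modest amount of kernel memory:

```shell
# Raise the ARP/neighbor table GC thresholds (illustrative values):
sysctl -w net.ipv4.neigh.default.gc_thresh1=8192
sysctl -w net.ipv4.neigh.default.gc_thresh2=32768
sysctl -w net.ipv4.neigh.default.gc_thresh3=65536
# Persist the settings in /etc/sysctl.d/ to survive reboots.
```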



--
Srinivas Kotaru


Re: List routes from Routershards

2017-04-04 Thread Srinivas Naga Kotaru (skotaru)
You mean oc get routes --all-namespaces and filter? I don't see any way to
filter routes by shard.

Correct me if I am missing anything.

--
Srinivas Kotaru

From: "ccole...@redhat.com" <ccole...@redhat.com>
Date: Tuesday, April 4, 2017 at 12:38 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: List routes from Routershards

Right now you'd want to list routes and filter them clientside.  We could add a 
field path selector at some point in the future, but the syntax is deliberately 
limited and I don't think we can guarantee something stable anytime soon.
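
Since a shard admits routes via its label selector, one hedged client-side
approximation is filtering by whatever label key the shard's ROUTE_LABELS
selects (label key illustrative):

```shell
oc get routes --all-namespaces -l router=internal
```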

On Apr 4, 2017, at 9:04 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Is there any way we can list all the routes admitted by a specific router
shard? We have multiple router shards configured and want to check or list the
routes from a specific shard.



--
Srinivas Kotaru


List routes from Routershards

2017-04-04 Thread Srinivas Naga Kotaru (skotaru)
Is there any way we can list all the routes admitted by a specific router
shard? We have multiple router shards configured and want to check or list the
routes from a specific shard.



--
Srinivas Kotaru


Re: PV/PVC storage use tracking

2017-03-31 Thread Srinivas Naga Kotaru (skotaru)
We are using 3.4, which is the latest stable.

-- 
Srinivas Kotaru

On 3/31/17, 10:39 AM, "Matt Wringe" <mwri...@redhat.com> wrote:

- Original Message -----
> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
> To: "Patrick Tescher" <patr...@outtherelabs.com>
> Cc: "dev" <dev@lists.openshift.redhat.com>
> Sent: Friday, 31 March, 2017 11:36:36 AM
> Subject: Re: PV/PVC storage use tracking
>
> OpenShift is shipping a lower version of Hawkular:
>
> {"MetricsService": "STARTED", "Implementation-Version":
> "0.21.7.Final-redhat-1", "Built-From-Git-SHA1":
> "0ed89b2dbd78208177f39a7ac880f5cec3eda8f8"}
>
> Is there any way it can be bumped up to the latest, to take advantage of the
> tag language?

For Origin, we should be using a more recent version with the latest
releases.

For OCP we usually do not increase the version of components until the next
release. Newer OCP versions will have a newer Hawkular.

> [remainder of quoted thread trimmed; the full messages appear elsewhere in
> this archive]
Re: PV/PVC storage use tracking

2017-03-31 Thread Srinivas Naga Kotaru (skotaru)
OpenShift is shipping a lower version of Hawkular:


{"MetricsService": "STARTED","Implementation-Version":
"0.21.7.Final-redhat-1","Built-From-Git-SHA1":
"0ed89b2dbd78208177f39a7ac880f5cec3eda8f8"}


Is there any way it can be bumped up to the latest, to take advantage of the
tag language?


--
Srinivas Kotaru

From: <dev-boun...@lists.openshift.redhat.com> on behalf of Srinivas Naga 
Kotaru <skot...@cisco.com>
Date: Wednesday, March 29, 2017 at 10:45 PM
To: Patrick Tescher <patr...@outtherelabs.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: PV/PVC storage use tracking

Patrick,

Thanks for pointing out the Grafana plug-in for Hawkular. We have already
invested effort and time in a TICK setup where metrics are collected via
Telegraf, InfluxDB, and Grafana. This plug-in is a nice addition on top of our
Grafana setup. However, the Telegraf Docker plugin is much more advanced and
has more metrics than Hawkular Metrics.

Also, I noticed tags are not working, and searching by name is painfully slow.
Only gauges work; the other two metric types, counter and availability, do
not. Not sure why…

Even Hawkular is also not showing any information regarding PVCs or PVs.


--
Srinivas Kotaru

From: Patrick Tescher <patr...@outtherelabs.com>
Date: Wednesday, March 29, 2017 at 11:38 AM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: PV/PVC storage use tracking

The system that would track this is Heapster but it doesn't appear to. There 
are a few open issues:
https://github.com/kubernetes/heapster/issues/885
https://github.com/kubernetes/heapster/issues/1270

The other option would be Hawkular Openshift Agent: 
https://github.com/hawkular/hawkular-openshift-agent
This means that the pod mounting your PVC needs to have some sort of agent that 
reports filesystem usage. Sometimes this makes sense. For instance I am using 
https://github.com/wrouesnel/postgres_exporter to export other PostgresSQL 
stats and one of those is filesystem usage. In other cases it may not make as 
much sense.

Both of these are a little odd since they would track pod volume usage, not 
necessarily a specific PVC. It would be nice to somehow monitor usage of a PVC 
directly but that is hard since the PVC is not always mounted on any node.

Lastly if you want to be able to display the stats generated by Heapster or 
Hawkular Openshift Agent you can set up 
https://github.com/hawkular/hawkular-grafana-datasource.

--
Patrick Tescher

On Mar 29, 2017, at 10:05 AM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:

Does OpenShift have any mechanism to track PV/PVC usage? PVs/PVCs are getting
filled, but there is no mechanism for us or our clients to track current
utilization. The only way to check is for platform teams or clients to mount
the PV somewhere and inspect it with OS commands like du. Wondering if there
is a better way to track each PVC's usage and alert when it reaches a 90%
threshold, etc. Our monitoring systems can handle the alerts, but we are
looking for an API call or a simpler way to check PVC usage.

We don't want to use OS commands to check or track usage.

--
Srinivas Kotaru


Re: PV/PVC storage use tracking

2017-03-29 Thread Srinivas Naga Kotaru (skotaru)
Patrick,

Thanks for pointing out the Grafana plug-in for Hawkular. We have already
invested effort and time in a TICK setup where metrics are collected via
Telegraf, InfluxDB, and Grafana. This plug-in is a nice addition on top of our
Grafana setup. However, the Telegraf Docker plugin is much more advanced and
has more metrics than Hawkular Metrics.

Also, I noticed tags are not working, and searching by name is painfully slow.
Only gauges work; the other two metric types, counter and availability, do
not. Not sure why…

Even Hawkular is also not showing any information regarding PVCs or PVs.


--
Srinivas Kotaru

From: Patrick Tescher <patr...@outtherelabs.com>
Date: Wednesday, March 29, 2017 at 11:38 AM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: PV/PVC storage use tracking

The system that would track this is Heapster but it doesn't appear to. There 
are a few open issues:
https://github.com/kubernetes/heapster/issues/885
https://github.com/kubernetes/heapster/issues/1270

The other option would be Hawkular Openshift Agent: 
https://github.com/hawkular/hawkular-openshift-agent
This means that the pod mounting your PVC needs to have some sort of agent that 
reports filesystem usage. Sometimes this makes sense. For instance I am using 
https://github.com/wrouesnel/postgres_exporter to export other PostgresSQL 
stats and one of those is filesystem usage. In other cases it may not make as 
much sense.

Both of these are a little odd since they would track pod volume usage, not 
necessarily a specific PVC. It would be nice to somehow monitor usage of a PVC 
directly but that is hard since the PVC is not always mounted on any node.

Lastly if you want to be able to display the stats generated by Heapster or 
Hawkular Openshift Agent you can set up 
https://github.com/hawkular/hawkular-grafana-datasource.

--
Patrick Tescher

On Mar 29, 2017, at 10:05 AM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:

Does OpenShift have any mechanism to track PV/PVC usage? PVs/PVCs are getting
filled, but there is no mechanism for us or our clients to track current
utilization. The only way to check is for platform teams or clients to mount
the PV somewhere and inspect it with OS commands like du. Wondering if there
is a better way to track each PVC's usage and alert when it reaches a 90%
threshold, etc. Our monitoring systems can handle the alerts, but we are
looking for an API call or a simpler way to check PVC usage.

We don't want to use OS commands to check or track usage.

--
Srinivas Kotaru


Re: projects join

2017-03-29 Thread Srinivas Naga Kotaru (skotaru)
Clayton,

Dan Winship helped; the oc get netnamespaces command got me this info.

Thanks to both of you for getting back to me.

--
Srinivas Kotaru

From: "ccole...@redhat.com" <ccole...@redhat.com>
Date: Wednesday, March 29, 2017 at 3:15 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: projects join

Not at my laptop but should be an annotation on the project/namespace

On Mar 29, 2017, at 12:28 AM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Is there any way to find out whether 2 projects have been joined together? I
joined a few projects for inter-project communication but didn't find any way
to check the status.


# oadm pod-network join-projects --to= 

--
Srinivas Kotaru


PV/PVC storage use tracking

2017-03-29 Thread Srinivas Naga Kotaru (skotaru)
Does OpenShift have any mechanism to track PV/PVC usage? PVs/PVCs are getting
filled, but there is no mechanism for us or our clients to track current
utilization. The only way to check is for platform teams or clients to mount
the PV somewhere and inspect it with OS commands like du. Wondering if there
is a better way to track each PVC's usage and alert when it reaches a 90%
threshold, etc. Our monitoring systems can handle the alerts, but we are
looking for an API call or a simpler way to check PVC usage.

We don't want to use OS commands to check or track usage.

--
Srinivas Kotaru


projects join

2017-03-28 Thread Srinivas Naga Kotaru (skotaru)
Is there any way to find out whether 2 projects have been joined together? I
joined a few projects for inter-project communication but didn't find any way
to check the status.


# oadm pod-network join-projects --to= 
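
As the follow-up elsewhere in this thread notes, with the multitenant SDN
plugin joined projects end up sharing a NETID, which is visible via:

```shell
# Projects that have been joined show the same NETID value:
oc get netnamespaces
```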

--
Srinivas Kotaru


Re: metrics

2017-03-01 Thread Srinivas Naga Kotaru (skotaru)
The access issue is fixed. I can see a rich amount of information and metrics
for measuring cluster health.

However, I am having difficulty interpreting the data, or judging its
accuracy.

For example, I am getting the value below for etcd. I don't believe the etcd
db size is 0 in my prod cluster.

# TYPE etcd_storage_db_total_size_in_bytes gauge
etcd_storage_db_total_size_in_bytes 0

Similarly, a lot of the metrics I am interested in for measuring cluster
health don't make sense to me. Am I missing anything here, or am I just
interpreting them incorrectly?

--
Srinivas Kotaru

From: Srinivas Naga Kotaru <skot...@cisco.com>
Date: Thursday, February 23, 2017 at 3:47 PM
To: "ccole...@redhat.com" <ccole...@redhat.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: metrics

I am a cluster admin but am unable to see it in the browser. oc get works
fine.


{"kind": "Status","apiVersion": "v1","metadata": {},"status":
"Failure","message": "User \"system:anonymous\" cannot \"get\" on
\"/metrics\"","reason": "Forbidden","details": {},"code": 403}


Is that expected?

I have a metrics collector that uses a cluster-reader service account, which
is also getting the same 403 error.

--
Srinivas Kotaru

From: Srinivas Naga Kotaru <skot...@cisco.com>
Date: Thursday, February 23, 2017 at 3:01 PM
To: "ccole...@redhat.com" <ccole...@redhat.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: metrics

Awesome … are the same metrics exposed by Prometheus as well? I see Prometheus
also exposing them, but fewer than oc get --raw /metrics; the oc get endpoint
gives more info.

Also, how do we get the kubelet's metrics?

--
Srinivas Kotaru

From: "ccole...@redhat.com" <ccole...@redhat.com>
Date: Thursday, February 23, 2017 at 2:33 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: metrics

Resending to dev list.

On Thu, Feb 23, 2017 at 5:31 PM, Clayton Coleman 
<ccole...@redhat.com<mailto:ccole...@redhat.com>> wrote:
Yes, the apiserver, the controllers, and the nodes all expose metrics on their 
serving port.  The controllers listen on localhost only today.

You can view the api server metrics as a suitably privileged user with "oc get 
--raw /metrics", or use the appropriate credentials (treat it as an API call 
for the purpose of authentication).
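The token-based path described here can be sketched as follows (Python; the master URL and token are placeholders, and the helper only builds the request object so it can be checked offline):

```python
# Hedged sketch: build a bearer-token request for the API server's /metrics
# endpoint. The endpoint URL and token are placeholders; pair with
# urllib.request.urlopen(req, ...) and appropriate TLS settings for real use.
import urllib.request

def metrics_request(apiserver, token):
    req = urllib.request.Request(apiserver + "/metrics")
    req.add_header("Authorization", "Bearer " + token)
    return req

# Example (hypothetical master URL and token):
req = metrics_request("https://master.example.com:8443", "SA-TOKEN")
print(req.get_full_url())  # https://master.example.com:8443/metrics
```

A service account's secret token (which does not expire) is a common choice for the bearer token in monitoring agents.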

On Thu, Feb 23, 2017 at 4:53 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Does API server expose any metrics like https://apiserver/metrics or any other 
form?


--
Srinivas Kotaru

___
dev mailing list
dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


metrics

2017-02-23 Thread Srinivas Naga Kotaru (skotaru)
Does API server expose any metrics like https://apiserver/metrics or any other 
form?


--
Srinivas Kotaru
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Docker Errors

2017-02-14 Thread Srinivas Naga Kotaru (skotaru)
We are seeing the 3 symptoms below very frequently on our platform. Any ideas or thoughts on why they are occurring?

Issue 1:

Feb 14 03:38:55 cae-ga2-004 systemd[1]: Starting Docker Application Container 
Engine...
Feb 14 03:38:55 cae-ga2-004 docker-current[115776]: 
time="2017-02-14T03:38:55.028792370Z" level=fatal msg="can't create unix socket 
/var/run/docker.sock: is a directory"
Feb 14 03:38:55 cae-ga2-004 systemd[1]: docker.service: main process exited, 
code=exited, status=1/FAILURE
Feb 14 03:38:55 cae-ga2-004 systemd[1]: Failed to start Docker Application 
Container Engine.
Feb 14 03:38:55 cae-ga2-004 systemd[1]: Unit docker.service entered failed 
state.
Feb 14 03:38:55 cae-ga2-004 systemd[1]: docker.service failed.

Not sure why the Docker socket is being turned into a directory; this is not expected behavior. Since Docker sees a directory rather than a socket file, it fails to restart and throws the error above.

Fix: stop Docker, remove the directory (rm -rf /var/run/docker.sock), and start Docker again.
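The issue-1 fix can be expressed as a small guard (a Python sketch; the path is parameterized so the check can be exercised safely outside /var/run, and Docker must be stopped before running it for real):

```python
# Hypothetical helper: if the expected unix-socket path exists as a directory,
# remove it so the docker daemon can recreate the real socket on next start.
import os
import shutil

def fix_docker_sock(path="/var/run/docker.sock"):
    if os.path.isdir(path):
        shutil.rmtree(path)
        return True   # stray directory removed
    return False      # nothing to do (socket, or path missing)

# Typical sequence (as root):
#   systemctl stop docker; run fix_docker_sock(); systemctl start docker
```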

Issue 2:

Feb 13 23:43:08 cae-ga1-207 docker-current[50724]: 
time="2017-02-13T23:43:08.459633232Z" level=fatal msg="Error starting daemon: 
Error initializing network controller: Error creating default \"bridge\" 
network: Failed to Setup IP tables: Unable to enable SKIP DNAT rule:  (iptables 
failed...
Feb 13 23:43:08 cae-ga1-207 systemd[1]: docker.service: main process exited, 
code=exited, status=1/FAILURE
Feb 13 23:43:08 cae-ga1-207 systemd[1]: Failed to start Docker Application 
Container Engine.
Feb 13 23:43:08 cae-ga1-207 systemd[1]: Unit docker.service entered failed 
state.
Feb 13 23:43:08 cae-ga1-207 systemd[1]: docker.service failed.

This error appears less frequently than the one above.

Fix: restarting Docker doesn't work; we have to stop and then start Docker.

Issue 3:

The errors below appear in the node console output when you log in to the nodes:

kernel:unregister_netdevice: waiting for lo to become free. Usage count = 1
kernel:unregister_netdevice: waiting for lo to become free. Usage count = 1
kernel:unregister_netdevice: waiting for lo to become free. Usage count = 1




--
Srinivas Kotaru
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: API health or status page

2017-02-09 Thread Srinivas Naga Kotaru (skotaru)
It is working perfectly. I just followed the instructions and was able to hit /version and /healthz without authentication.

Thank you ..

--
Srinivas Kotaru

From: Jordan Liggitt <jligg...@redhat.com>
Date: Thursday, February 9, 2017 at 2:26 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: API health or status page

See the cluster-status role as an example:

oc export clusterroles cluster-status -o yaml > myrole.yaml

Change the name to a custom name, and include only the urls you would want 
anonymous users to access
Then create the custom role:
oc create -f myrole.yaml
And grant it to anonymous users:
oadm policy add-cluster-role-to-group my-role-name system:unauthenticated
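Putting those steps together, a stripped-down role might look like the following (a sketch based on the OpenShift 3.x cluster-status role; the role name and URL list are placeholders to adjust):

```yaml
# Hypothetical custom role granting access only to health endpoints.
apiVersion: v1
kind: ClusterRole
metadata:
  name: my-health-status    # placeholder name
rules:
- nonResourceURLs:
  - /healthz
  - /healthz/*
  - /version
  verbs:
  - get
```

After oc create -f myrole.yaml, bind it with: oadm policy add-cluster-role-to-group my-health-status system:unauthenticated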


On Thu, Feb 9, 2017 at 5:18 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
That is interesting, indeed what I want.

Can you share step by step or any document which explains?


--
Srinivas Kotaru

From: Jordan Liggitt <jligg...@redhat.com<mailto:jligg...@redhat.com>>
Date: Thursday, February 9, 2017 at 1:57 PM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: API health or status page

You can set up a role that allows access to the API endpoints you want, and 
bind that role to the `system:unauthenticated` group, and it will allow 
accessing that API without any authentication.


On Thu, Feb 9, 2017 at 4:55 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Can I use any API call without authentication? I need an API URL to put into my 
monitoring agent to periodically check health. All most all API calls need 
token or authentication. Although I can use a service account and use secret as 
a token since it doesn’t expire, am looking for a simple solution if possible



--
Srinivas Kotaru

___
dev mailing list
dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: API health or status page

2017-02-09 Thread Srinivas Naga Kotaru (skotaru)
No, I already checked; /version is also throwing a 403:

HTTP/1.1 403 Forbidden
Cache-Control: no-store
Content-Type: application/json
Date: Thu, 09 Feb 2017 22:39:12 GMT
Content-Length: 217



--
Srinivas Kotaru

From: Mateus Caruccio <mateus.caruc...@getupcloud.com>
Date: Thursday, February 9, 2017 at 2:34 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: Jordan Liggitt <jligg...@redhat.com>, dev <dev@lists.openshift.redhat.com>
Subject: Re: API health or status page

Isn't /version open by default?

--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade

On Thu, Feb 9, 2017 at 8:30 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Perfect. Thank you very much Jordan. Appreciated for quick help

--
Srinivas Kotaru

From: Jordan Liggitt <jligg...@redhat.com<mailto:jligg...@redhat.com>>
Date: Thursday, February 9, 2017 at 2:26 PM

To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: API health or status page

See the cluster-status role as an example:

oc export clusterroles cluster-status -o yaml > myrole.yaml
Change the name to a custom name, and include only the urls you would want 
anonymous users to access
Then create the custom role:
oc create -f myrole.yaml
And grant it to anonymous users:
oadm policy add-cluster-role-to-group my-role-name system:unauthenticated


On Thu, Feb 9, 2017 at 5:18 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
That is interesting, indeed what I want.

Can you share step by step or any document which explains?


--
Srinivas Kotaru

From: Jordan Liggitt <jligg...@redhat.com<mailto:jligg...@redhat.com>>
Date: Thursday, February 9, 2017 at 1:57 PM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: API health or status page

You can set up a role that allows access to the API endpoints you want, and 
bind that role to the `system:unauthenticated` group, and it will allow 
accessing that API without any authentication.

On Thu, Feb 9, 2017 at 4:55 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Can I use any API call without authentication? I need an API URL to put into my 
monitoring agent to periodically check health. All most all API calls need 
token or authentication. Although I can use a service account and use secret as 
a token since it doesn’t expire, am looking for a simple solution if possible



--
Srinivas Kotaru

___
dev mailing list
dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev



___
dev mailing list
dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


API health or status page

2017-02-09 Thread Srinivas Naga Kotaru (skotaru)
Can I use any API call without authentication? I need an API URL to put into my monitoring agent to periodically check health. Almost all API calls need a token or authentication. Although I could use a service account secret as a token, since it doesn't expire, I am looking for a simpler solution if possible.



--
Srinivas Kotaru
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Network manager

2017-02-03 Thread Srinivas Naga Kotaru (skotaru)
Sorry for pushing, but we want to make a decision based on this discussion. If possible, we would like to avoid NetworkManager and use the regular network service, let our hosting team manage the automated process for /etc/resolv.conf, and, as the platform team, update whatever is required under the /etc/dnsmasq/*.conf file.


-- 
Srinivas Kotaru

On 2/3/17, 9:02 AM, "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com> wrote:

That is exactly my next question. If we have an automated process to update the /etc/resolv.conf file, can't we also update the dnsmasq file and eliminate NetworkManager altogether?

I'm assuming the contents of the /etc/dnsmasq/*.conf file are pretty static and can be pushed using Ansible or something.

Please correct me if I am missing anything here.


-- 
Srinivas Kotaru

On 2/3/17, 8:39 AM, "Clayton Coleman" <ccole...@redhat.com> wrote:

But that's not required if someone has already configured dnsmasq.  We
don't require you use our dnsmasq configuration, right?

> On Feb 3, 2017, at 10:57 AM, Scott Dodson <sdod...@redhat.com> wrote:
>
> Re-sending with proper dev@lists.openshift.redhat.com address
>
> dnsmasq services are managed by a NetworkManager dispatcher script and
> that script is also responsible for updating /etc/resolv.conf to point
> at dnsmasq after dnsmasq is started. Therefore NetworkManager is
> required.
>
>> On Fri, Feb 3, 2017 at 10:50 AM, Clayton Coleman 
<ccole...@redhat.com> wrote:
>> Hrm - I don't know why it would actually be required, in the sense 
that
>> nothing dnsmasq is doing is truly dependent on NetworkManager.   
Copying
>> Scott
>>
>> On Feb 3, 2017, at 10:38 AM, Nakayama Kenjiro 
<nakayamakenj...@gmail.com>
>> wrote:
>>
>> Hi,
>>
>> You might have already checked, but according to the docs, it is 
mandatory.
>>
>> 
https://docs.openshift.org/latest/install_config/install/prerequisites.html#prereq-dns
>> "NetworkManager is required on the nodes in order to populate 
dnsmasq with
    >> the DNS IP addresses."
>>
>> Regards,
>> Kenjiro
>>
>>
>>
>> On Sat, Feb 4, 2017 at 12:14 AM, Srinivas Naga Kotaru (skotaru)
>> <skot...@cisco.com> wrote:
>>>
>>> NetworkManager is mandatory requirement for OCP install and
>>> functionality?? Can we use traditional network service rather 
network
>>> manager??
>>>
>>> Srinivas Kotaru
>>>
>>> Sent from my iPhone
>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>>
>>
>> --
>> Kenjiro NAKAYAMA <nakayamakenj...@gmail.com>
>> GPG Key fingerprint = ED8F 049D E67A 727D 9A44  8E25 F44B E208 C946 
5EB9
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev





___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Network manager

2017-02-03 Thread Srinivas Naga Kotaru (skotaru)
Is NetworkManager a mandatory requirement for OCP installation and functionality? Can we use the traditional network service rather than NetworkManager?

Srinivas Kotaru 

Sent from my iPhone

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: service discover - always confuse

2017-02-01 Thread Srinivas Naga Kotaru (skotaru)

If all containers forward name resolution to their nodes, and the nodes forward to the masters, what does the master do with external queries for which it is not authoritative? I am wondering how curl and ping work from containers when dig/nslookup do not, unless we specify @nameserver in the query.

Can someone explain how name resolution works in the scenarios below?

-  Pod to corporate resources

-  Pod to external resources

What does the master nameserver (dnsmasq) do in these cases?

--
Srinivas Kotaru

From: "ccole...@redhat.com" <ccole...@redhat.com>
Date: Tuesday, January 31, 2017 at 1:24 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: [SUSPICIOUS] Re: service discover - always confuse

Including the list correctly.

On Tue, Jan 31, 2017 at 4:06 PM, Clayton Coleman 
<ccole...@redhat.com<mailto:ccole...@redhat.com>> wrote:


On Jan 30, 2017, at 1:51 AM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Hi

Observed 2 different behaviors in my platform. not sure this is expected 
behavior or not. Can you clarify for below behaviors?


1.   Name resolution not working for external domains although ping and 
curl commands working as expected

Examples:

# oc rsh kong-app3-792309857-1i4xk

# ping -c1 google.com
PING google.com (216.58.204.110) 56(84) bytes of data.
64 bytes from par10s28-in-f14.1e100.net (216.58.204.110): icmp_seq=1 ttl=47 time=85.7 ms

#$ curl -IL google.com
HTTP/1.1 302 Found
Cache-Control: private
Location: http://www.google.com.co/?gfe_rd=cr=6d6OWOCgJPLU8ge42JWoCA
Date: Mon, 30 Jan 2017 06:36:25 GMT
Content-Length: 262
Content-Type: text/html; charset=UTF-8
Via: 1.1 rtp5-dmz-wsa-1-mgmt.cisco.com:80 (Cisco-WSA/8.8.0-085)
Connection: keep-alive

HTTP/1.1 200 OK
Date: Mon, 30 Jan 2017 06:36:26 GMT
Expires: -1
Cache-Control: private, max-age=0
P3P: CP="This is not a P3P policy! See
https://www.google.com/support/accounts/answer/151657?hl=en for more info."

service discover - always confuse

2017-01-29 Thread Srinivas Naga Kotaru (skotaru)
Hi

I observed 2 different behaviors on my platform and am not sure whether they are expected. Can you clarify the behaviors below?


1.   Name resolution is not working for external domains, although ping and curl commands work as expected.

Examples:

# oc rsh kong-app3-792309857-1i4xk

# ping -c1 google.com
PING google.com (216.58.204.110) 56(84) bytes of data.
64 bytes from par10s28-in-f14.1e100.net (216.58.204.110): icmp_seq=1 ttl=47 
time=85.7 ms

#$ curl -IL google.com
HTTP/1.1 302 Found
Cache-Control: private
Location: http://www.google.com.co/?gfe_rd=cr=6d6OWOCgJPLU8ge42JWoCA
Date: Mon, 30 Jan 2017 06:36:25 GMT
Content-Length: 262
Content-Type: text/html; charset=UTF-8
Via: 1.1 rtp5-dmz-wsa-1-mgmt.cisco.com:80 (Cisco-WSA/8.8.0-085)
Connection: keep-alive

HTTP/1.1 200 OK
Date: Mon, 30 Jan 2017 06:36:26 GMT
Expires: -1
Cache-Control: private, max-age=0
P3P: CP="This is not a P3P policy! See 
https://www.google.com/support/accounts/answer/151657?hl=en for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Set-Cookie: 
NID=95=pwFoBQb-ktya5LeVlXGLKPJX2N-fLE_oIw37Wq2F7FFOeuJp0rObVHRJw7jRg9luz3Jq4f3CQadMWr6RonTJot0oxSEcx-NGKS8cDaksuT5t3mqTtPZaKb20HRjn9ffF;
 expires=Tue, 01-Aug-2017 06:36:26 GMT; path=/; domain=.google.com.co; HttpOnly
Accept-Ranges: none
Vary: Accept-Encoding
Transfer-Encoding: chunked
Content-Type: text/html; charset=ISO-8859-1
Via: 1.1 rtp5-dmz-wsa-1-mgmt.cisco.com:80 (Cisco-WSA/8.8.0-085)
Connection: keep-alive

# cat /etc/resolv.conf
search oneid-rtp.svc.cluster.local svc.cluster.local cluster.local cisco.com
nameserver 64.101.6.21
nameserver 144.254.71.184
nameserver 173.38.200.100
nameserver 173.38.165.13
options timeout:1 attempts:1
options ndots:5


# dig google.com

; <<>> DiG 9.9.4-RedHat-9.9.4-38.el7_3.1 <<>> google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 21380
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;google.com.   INA

;; Query time: 1 msec
;; SERVER: 64.101.6.21#53(64.101.6.21)
;; WHEN: Mon Jan 30 06:38:07 UTC 2017
;; MSG SIZE  rcvd: 28

# nslookup google.com
Server:   64.101.6.21
Address:64.101.6.21#53

** server can't find google.com: REFUSED

#host gogole.com
Host gogole.com not found: 5(REFUSED)

Queries against the 1st server in the pod's /etc/resolv.conf (this is the IP of the node where the pod is running):

sh-4.2$ dig +short google.com  @64.101.6.21
sh-4.2$

sh-4.2$ nslookup google.com 64.101.6.21
Server:   64.101.6.21
Address:64.101.6.21#53

** server can't find google.com: REFUSED

However, DNS queries against the 2nd server in the pod's /etc/resolv.conf work fine (this IP is one of our corporate DNS servers, borrowed from the host's /etc/resolv.conf):

dig +short google.com @144.254.71.184
216.58.204.110

sh-4.2$ nslookup google.com 144.254.71.184
Server:   144.254.71.184
Address:144.254.71.184#53

Non-authoritative answer:
Name:   google.com
Address: 216.58.204.110


2.   Service resolution works as expected. However, different DNS clients behave differently with and without the service FQDN. dig does not work without the FQDN:


sh-4.2$ dig kong-database2
;; QUESTION SECTION:
;kong-database2.   INA

With the FQDN it works:

sh-4.2$ dig kong-database2.oneid-rtp.svc.cluster.local
;; ANSWER SECTION:
kong-database2.oneid-rtp.svc.cluster.local. 30 IN A 172.29.2.1


nslookup works with or without the FQDN:

sh-4.2$ nslookup kong-database2
Server:   64.101.6.21
Address:64.101.6.21#53

Non-authoritative answer:
Name:   kong-database2.oneid-rtp.svc.cluster.local
Address: 172.29.2.1

sh-4.2$ nslookup kong-database2.oneid-rtp.svc.cluster.local
Server:   64.101.6.21
Address:64.101.6.21#53

Non-authoritative answer:
Name:   kong-database2.oneid-rtp.svc.cluster.local
Address: 172.29.2.1
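One likely explanation for the dig vs ping/nslookup gap: dig does not apply the resolv.conf search list by default (try dig +search), while the glibc resolver used by ping/curl, and nslookup, do. With ndots:5, a name like google.com (one dot, fewer than five) is tried against each search domain first; the node-local server at 64.101.6.21 appears to answer only for cluster.local and REFUSEs the rest, and glibc then falls through to the next nameserver. A sketch of that candidate-name ordering (plain Python; standard glibc search/ndots behavior assumed, search list taken from the resolv.conf above):

```python
# Hypothetical model of how glibc expands a name using the resolv.conf
# "search" list when the name has fewer dots than the ndots option.
def query_order(name, search=("oneid-rtp.svc.cluster.local",
                              "svc.cluster.local",
                              "cluster.local",
                              "cisco.com"), ndots=5):
    if name.endswith("."):            # fully qualified: no search expansion
        return [name]
    candidates = []
    if name.count(".") < ndots:       # search domains tried first
        candidates += ["%s.%s" % (name, d) for d in search]
        candidates.append(name)       # bare name tried last
    else:
        candidates.append(name)       # bare name tried first
        candidates += ["%s.%s" % (name, d) for d in search]
    return candidates

print(query_order("google.com")[0])       # google.com.oneid-rtp.svc.cluster.local
print(query_order("kong-database2")[0])   # kong-database2.oneid-rtp.svc.cluster.local
```

This also explains why the bare service name resolves: the first search-domain candidate is exactly the service FQDN in the pod's namespace.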


--
Srinivas Kotaru
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


cluster health metrics or API end points

2017-01-27 Thread Srinivas Naga Kotaru (skotaru)
We want to measure the health of the OpenShift cluster in all possible ways and report status back to clients on a single, simple page. I have a few things in mind.

Health of:

· API servers

· etcd servers

· nodes (kubectl??)

· SDN

· PV’s

· Routers shards

· Ingress controllers

· Docker (every node)

· Docker storage volume (every node)

I am sure REST APIs are available to measure the health of most of the critical components above. Can you shed some light on:


· What are the API?

· Any other better way to measure health of critical components?
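Many of these components expose a health endpoint that returns the body "ok" (e.g. the API server's /healthz), so a single status page can aggregate them. A minimal sketch (plain Python; the endpoint URLs are hypothetical, and the fetch function is injected so the aggregation logic stands alone without a live cluster):

```python
# Hypothetical aggregator: roll per-component health checks into one report.
def summarize_health(endpoints, fetch):
    """endpoints: {component: url}; fetch(url) -> (status_code, body)."""
    report = {}
    for name, url in sorted(endpoints.items()):
        try:
            code, body = fetch(url)
            report[name] = "ok" if code == 200 and body.strip() == "ok" else "fail"
        except Exception:
            report[name] = "fail"
    return report

# Example with a stub fetcher standing in for real HTTPS calls:
endpoints = {
    "api": "https://master.example.com:8443/healthz",   # placeholder URLs
    "etcd": "https://etcd.example.com:2379/health",
}
stub = lambda url: (200, "ok") if "master" in url else (503, "")
print(summarize_health(endpoints, stub))  # {'api': 'ok', 'etcd': 'fail'}
```

Note that not every component answers with a literal "ok" body (etcd, for instance, returns JSON), so the per-endpoint check would need adjusting in practice.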

--
Srinivas Kotaru
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: storage labels

2017-01-13 Thread Srinivas Naga Kotaru (skotaru)
StorageClass might fix this issue, but I am not fully convinced: claiming a PV without a selector bypasses the protection offered by storage label selectors. Also, we would have to wait for 3.4 and upgrade from 3.3, which is more work.


--
Srinivas Kotaru

From: Brad Childs <bchi...@redhat.com>
Date: Friday, January 13, 2017 at 12:17 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: Nakayama Kenjiro <nakayamakenj...@gmail.com>, dev 
<dev@lists.openshift.redhat.com>
Subject: Re: storage labels



On Fri, Jan 13, 2017 at 2:05 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Can you explain, or point me to documentation on, how to use StorageClass? I don't see any documentation on StorageClass yet, at least for OSE 3.3. I heard that feature will be released in 3.4.

Sorry I should have noticed your OSE 3.3 version.  StorageClass is in 3.4

-bc



--
Srinivas Kotaru

From: Brad Childs <bchi...@redhat.com<mailto:bchi...@redhat.com>>
Date: Friday, January 13, 2017 at 11:59 AM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: Nakayama Kenjiro 
<nakayamakenj...@gmail.com<mailto:nakayamakenj...@gmail.com>>, dev 
<dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: storage labels



On Fri, Jan 13, 2017 at 12:59 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Perfect, that answers and clarifies it. Thank you, Nakayama.


I was able to bind a PV that has label selectors using a PVC that doesn't have 
any selectors. This behavior completely undermines our storage labeling 
strategy. We want to label a few volumes (special volumes by 
cost/performance/size) for specific clients, and we want only those clients to 
be able to use these PVs via label selectors. Clients who don't specify label 
selectors in their PVC should be bound to general volumes.

This concerns us a lot. How do we deal with this issue?

One way is to require every PVC to be created with a selector.  If you have 
performance={fast, slow}, then every PVC must be created with a selector for 
performance=something.

You could use StorageClass in a similar way.  Special PVs that you want only 
certain users to use would belong to a specific StorageClass and general 
purpose PVs to another.  You could then set the default StorageClass to the 
general purpose so users must specifically request PVs from the higher 
performance StorageClass.

-bc
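
The selector approach described above can be sketched roughly as follows (all names, sizes, and the NFS server are hypothetical):

```yaml
# PV labeled with its performance tier
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-fast-001
  labels:
    performance: fast
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: nfs.example.com
    path: /exports/fast-001
---
# PVC that can only bind to PVs carrying performance=fast
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-fast
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      performance: fast
```

As discussed in this thread, the selector restricts which PVs this claim can bind, but nothing forces other claims to carry a selector; that is the gap the policy above tries to close.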


--
Srinivas Kotaru

From: Nakayama Kenjiro 
<nakayamakenj...@gmail.com<mailto:nakayamakenj...@gmail.com>>
Date: Friday, January 13, 2017 at 4:10 AM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>, dev 
<dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: storage labels

I think that following sentence in the docs is wrong(?).

  
https://docs.openshift.com/container-platform/3.3/install_config/storage_examples/binding_pv_by_label.html
  "It is important to note that a claim must match all of the key-value pairs 
included in its selector stanza."

In my understanding, it should mean that:

OK
===
  PV:
labels:
  A: B
  X: Y
  PVC:
matchLabels:
  A: B
  X: Y

OK
===
  PV:
labels:
  A: B
  X: Y
  PVC:
    matchLabels:
  A: B

NG
===
  PV:
labels:
  A: B

  PVC:
matchLabels:
  A: B
  X: Y

Regards,
Kenjiro

On Fri, Jan 13, 2017 at 6:47 AM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Thanks, Clayton

Is it necessary for both selectors in the PVC to match in order to bind to a PV, 
or is any one matching selector enough? In my testing, a PVC was able to bind 
with even one matching label selector, although I have 2 selectors in my PV.
Documentation says otherwise …

--
Srinivas Kotaru

From: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Date: Thursday, January 12, 2017 at 1:25 PM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: storage labels

Yes

On Jan 12, 2017, at 4:23 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
How to represent TB storage in PV? Is it Ti , similar to Gi?

--
Srinivas Kotaru

From: 
<dev-boun...@lists.openshift.redhat.com<mailto:dev-boun...@lists.openshift.redhat.com>>
 on behalf of Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Date: Wednesday, January 11, 2017 at 11:33 AM
To: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: storage labels

Hi

We are 

Re: storage labels

2017-01-13 Thread Srinivas Naga Kotaru (skotaru)
Can you explain, or point me to documentation on, how to use StorageClass? I don't 
see any StorageClass documentation yet, at least for OSE 3.3. I heard that feature 
will be released in 3.4.


--
Srinivas Kotaru

From: Brad Childs <bchi...@redhat.com>
Date: Friday, January 13, 2017 at 11:59 AM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: Nakayama Kenjiro <nakayamakenj...@gmail.com>, dev 
<dev@lists.openshift.redhat.com>
Subject: Re: storage labels



On Fri, Jan 13, 2017 at 12:59 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Perfect, that answers and clarifies it. Thank you, Nakayama.


I was able to bind a PV that has label selectors using a PVC that doesn't have 
any selectors. This behavior completely undermines our storage labeling 
strategy. We want to label a few volumes (special volumes by 
cost/performance/size) for specific clients, and we want only those clients to 
be able to use these PVs via label selectors. Clients who don't specify label 
selectors in their PVC should be bound to general volumes.

This concerns us a lot. How do we deal with this issue?

One way is to require every PVC to be created with a selector.  If you have 
performance={fast, slow}, then every PVC must be created with a selector for 
performance=something.

You could use StorageClass in a similar way.  Special PVs that you want only 
certain users to use would belong to a specific StorageClass and general 
purpose PVs to another.  You could then set the default StorageClass to the 
general purpose so users must specifically request PVs from the higher 
performance StorageClass.

-bc


--
Srinivas Kotaru

From: Nakayama Kenjiro 
<nakayamakenj...@gmail.com<mailto:nakayamakenj...@gmail.com>>
Date: Friday, January 13, 2017 at 4:10 AM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>, dev 
<dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: storage labels

I think that following sentence in the docs is wrong(?).

  
https://docs.openshift.com/container-platform/3.3/install_config/storage_examples/binding_pv_by_label.html
  "It is important to note that a claim must match all of the key-value pairs 
included in its selector stanza."

In my understanding, it should mean that:

OK
===
  PV:
labels:
  A: B
  X: Y
  PVC:
matchLabels:
  A: B
  X: Y

OK
===
  PV:
labels:
  A: B
  X: Y
  PVC:
matchLabels:
  A: B

NG
===
  PV:
labels:
  A: B

  PVC:
    matchLabels:
  A: B
  X: Y

Regards,
Kenjiro

On Fri, Jan 13, 2017 at 6:47 AM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Thanks, Clayton

Is it necessary for both selectors in the PVC to match in order to bind to a PV, 
or is any one matching selector enough? In my testing, a PVC was able to bind 
with even one matching label selector, although I have 2 selectors in my PV.
Documentation says otherwise …

--
Srinivas Kotaru

From: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Date: Thursday, January 12, 2017 at 1:25 PM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: storage labels

Yes

On Jan 12, 2017, at 4:23 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
How to represent TB storage in PV? Is it Ti , similar to Gi?

--
Srinivas Kotaru

From: 
<dev-boun...@lists.openshift.redhat.com<mailto:dev-boun...@lists.openshift.redhat.com>>
 on behalf of Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Date: Wednesday, January 11, 2017 at 11:33 AM
To: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: storage labels

Hi

We are going to leverage the storage labels feature with OCP 3.3. In the storage 
label scenario, it seems the PVC ignores the PV capacity (spec.capacity.storage) 
attribute and matches based on the label selector on the PV.

Questions


1.  If yes, then why do we need to specify storage attributes in the PV and 
PVC?

2.  If we have multiple sizes in a single storage class, do we need to 
define multiple label selectors to match PVC claims? E.g. nfs-ssd-100gb to 
match 100gb volumes, nfs-ssd-50gb for 50gb volumes?

I have different sizes of volumes in NFS; is a single label enough, or do I need 
one label per volume size?


--
Srinivas Kotaru

Re: storage labels

2017-01-13 Thread Srinivas Naga Kotaru (skotaru)
Perfect, that answers and clarifies it. Thank you, Nakayama.


I was able to bind a PV that has label selectors using a PVC that doesn't have 
any selectors. This behavior completely undermines our storage labeling 
strategy. We want to label a few volumes (special volumes by 
cost/performance/size) for specific clients, and we want only those clients to 
be able to use these PVs via label selectors. Clients who don't specify label 
selectors in their PVC should be bound to general volumes.

This concerns us a lot. How do we deal with this issue?


--
Srinivas Kotaru

From: Nakayama Kenjiro <nakayamakenj...@gmail.com>
Date: Friday, January 13, 2017 at 4:10 AM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: "ccole...@redhat.com" <ccole...@redhat.com>, dev 
<dev@lists.openshift.redhat.com>
Subject: Re: storage labels

I think that following sentence in the docs is wrong(?).

  
https://docs.openshift.com/container-platform/3.3/install_config/storage_examples/binding_pv_by_label.html
  "It is important to note that a claim must match all of the key-value pairs 
included in its selector stanza."

In my understanding, it should mean that:

OK
===
  PV:
labels:
  A: B
  X: Y
  PVC:
matchLabels:
  A: B
  X: Y

OK
===
  PV:
labels:
  A: B
  X: Y
  PVC:
matchLabels:
  A: B

NG
===
  PV:
labels:
  A: B

  PVC:
matchLabels:
  A: B
  X: Y

Regards,
Kenjiro

On Fri, Jan 13, 2017 at 6:47 AM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Thanks, Clayton

Is it necessary for both selectors in the PVC to match in order to bind to a PV, 
or is any one matching selector enough? In my testing, a PVC was able to bind 
with even one matching label selector, although I have 2 selectors in my PV.
Documentation says otherwise …

--
Srinivas Kotaru

From: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Date: Thursday, January 12, 2017 at 1:25 PM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: storage labels

Yes

On Jan 12, 2017, at 4:23 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
How to represent TB storage in PV? Is it Ti , similar to Gi?

--
Srinivas Kotaru

From: 
<dev-boun...@lists.openshift.redhat.com<mailto:dev-boun...@lists.openshift.redhat.com>>
 on behalf of Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Date: Wednesday, January 11, 2017 at 11:33 AM
To: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: storage labels

Hi

We are going to leverage the storage labels feature with OCP 3.3. In the storage 
label scenario, it seems the PVC ignores the PV capacity (spec.capacity.storage) 
attribute and matches based on the label selector on the PV.

Questions


1.  If yes, then why do we need to specify storage attributes in the PV and 
PVC?

2.  If we have multiple sizes in a single storage class, do we need to 
define multiple label selectors to match PVC claims? E.g. nfs-ssd-100gb to 
match 100gb volumes, nfs-ssd-50gb for 50gb volumes?

I have different sizes of volumes in NFS; is a single label enough, or do I need 
one label per volume size?


--
Srinivas Kotaru



--
Kenjiro NAKAYAMA <nakayamakenj...@gmail.com<mailto:nakayamakenj...@gmail.com>>
GPG Key fingerprint = ED8F 049D E67A 727D 9A44  8E25 F44B E208 C946 5EB9


Re: storage labels

2017-01-12 Thread Srinivas Naga Kotaru (skotaru)
How to represent TB storage in PV? Is it Ti , similar to Gi?

--
Srinivas Kotaru

From:  on behalf of Srinivas Naga 
Kotaru 
Date: Wednesday, January 11, 2017 at 11:33 AM
To: dev 
Subject: storage labels

Hi

We are going to leverage the storage labels feature with OCP 3.3. In the storage 
label scenario, it seems the PVC ignores the PV capacity (spec.capacity.storage) 
attribute and matches based on the label selector on the PV.

Questions


1.  If yes, then why do we need to specify storage attributes in the PV and 
PVC?

2.  If we have multiple sizes in a single storage class, do we need to 
define multiple label selectors to match PVC claims? E.g. nfs-ssd-100gb to 
match 100gb volumes, nfs-ssd-50gb for 50gb volumes?

I have different sizes of volumes in NFS; is a single label enough, or do I need 
one label per volume size?


--
Srinivas Kotaru


Re: Binding Persistent Volumes by Labels

2017-01-04 Thread Srinivas Naga Kotaru (skotaru)
Thanks, Derek. That answers my question.

So basically we don't have the ability until OSE 3.6 to control storage by 
class/type using quotas. We are exploring ways to control which projects should 
have access to which storage type/class, e.g. SSD is expensive, so limit that 
type of storage to only a few projects.

The combination of dynamic storage provisioning (3.4), storage quotas (3.4), and 
storage-class quota in the future (3.6?) would give platform teams much more 
control over storage.

--
Srinivas Kotaru

From: Derek Carr <dec...@redhat.com>
Date: Wednesday, January 4, 2017 at 11:32 AM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: "ccole...@redhat.com" <ccole...@redhat.com>, dev 
<dev@lists.openshift.redhat.com>
Subject: Re: Binding Persistent Volumes by Labels

The ability to quota storage by storage class has merged and will make 
Kubernetes 1.6 and would appear in OpenShift when it re-bases to that level.  
It lets you at a project/namespace level control how much storage you can 
consume by storage class (the assumption here is storage type is aligned with 
your usage of storage class).

The corresponding PR with a detailed scenario is here:
https://github.com/kubernetes/kubernetes/pull/34554
I am working on a PR that I hope makes Kubernetes 1.6 to allow a resource to be 
marked as precious or limited by default.  This would allow you to say that a 
resource can not be consumed in a project unless its covered by a quota.

The corresponding PR with a sample configuration is here (its work-in-progress):
https://github.com/kubernetes/kubernetes/pull/36765
Thanks,
Derek
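
Based on the linked PR, a per-StorageClass quota would look roughly like this once it lands (the class name and limits are hypothetical):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    # overall storage requested across all classes in the project
    requests.storage: 100Gi
    # per-class limits: <class>.storageclass.storage.k8s.io/<resource>
    gold.storageclass.storage.k8s.io/requests.storage: 20Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "5"
```

Setting a class's limit to 0 in every other project would effectively reserve that class for the projects allowed to use it.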


On Wed, Jan 4, 2017 at 2:03 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Clayton

I saw the 3.4 release notes. Storage quota is good. I am not sure it satisfies 
my requirement of controlling storage type allocation (NFS, SSD, etc.) at the 
project level.

Can you clarify?

--
Srinivas Kotaru

From: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Date: Wednesday, January 4, 2017 at 10:56 AM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: Binding Persistent Volumes by Labels

In 1.4 quota of persistent volume claims per storage class will be available, 
but you have to define all of your classes up front in the quota.  A whitelist 
approach is coming later (where adding new storage classes would not require 
you to change everyone's quota for that new type to be zero)

On Wed, Jan 4, 2017 at 1:35 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Can we control storage at the project level, similar to the node selector for 
pod scheduling?

The use case I have is wanting to control different types of storage (NFS, SSD, 
etc.) at project creation time, e.g. project A can have only NFS storage, 
project B can have SSD only, and project C can have access to both types.

--
Srinivas Kotaru




Re: Binding Persistent Volumes by Labels

2017-01-04 Thread Srinivas Naga Kotaru (skotaru)
Clayton

I saw the 3.4 release notes. Storage quota is good. I am not sure it satisfies 
my requirement of controlling storage type allocation (NFS, SSD, etc.) at the 
project level.

Can you clarify?

--
Srinivas Kotaru

From: "ccole...@redhat.com" <ccole...@redhat.com>
Date: Wednesday, January 4, 2017 at 10:56 AM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: Binding Persistent Volumes by Labels

In 1.4 quota of persistent volume claims per storage class will be available, 
but you have to define all of your classes up front in the quota.  A whitelist 
approach is coming later (where adding new storage classes would not require 
you to change everyone's quota for that new type to be zero)

On Wed, Jan 4, 2017 at 1:35 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Can we control storage at the project level, similar to the node selector for 
pod scheduling?

The use case I have is wanting to control different types of storage (NFS, SSD, 
etc.) at project creation time, e.g. project A can have only NFS storage, 
project B can have SSD only, and project C can have access to both types.

--
Srinivas Kotaru



Binding Persistent Volumes by Labels

2017-01-04 Thread Srinivas Naga Kotaru (skotaru)
Can we control storage at the project level, similar to the node selector for 
pod scheduling?

The use case I have is wanting to control different types of storage (NFS, SSD, 
etc.) at project creation time, e.g. project A can have only NFS storage, 
project B can have SSD only, and project C can have access to both types.

--
Srinivas Kotaru


Re: ingress firewall

2016-12-14 Thread Srinivas Naga Kotaru (skotaru)
Thanks Dan. At this point we are not sure how to control ingress traffic. I 
know we can provide an ingress IP address so that client services get an 
externally reachable IP and TCP ports. 

If this is not possible in 3.4, can we expect it in 3.5? At least that gives us 
a window to talk to the client and convince him to use ingress now and expect 
ingress firewall support in 3.5. 

I think this is a very important feature if we want to extend the platform to 
all types of workloads rather than just web apps. No one is interested in just 
typical web workloads in a container platform. Clients expect the 
freedom/choices/possibilities of the IaaS layer in a container platform without 
any limitations. To achieve this, the network is foundational and critical. 

-- 
Srinivas Kotaru

On 12/14/16, 10:48 AM, "Dan Winship" <d...@redhat.com> wrote:

On 12/14/2016 01:03 PM, Srinivas Naga Kotaru (skotaru) wrote:
> Does ingress support firewall? We have a use case where tenant have
> multiple projects for services segmentation purpose and need ports other
> 80/433. We are planning to use ingress and egress features to allocated
> pool of IP address to use. Client has strict requirements of controlling
> inbound and outbound traffic, like who can allow or deny.
> 
> As per below documentation egress support firewall. Does ingress also
> support similar?

Upstream Kubernetes has a NetworkPolicy object that can be used to
control ingress traffic, but it's not supported by the default OpenShift
networking plugin in 3.4. (Some third-party plugins support it, and it
should be supported by OpenShift's networking plugin in 3.5.) However,
the current version of NetworkPolicy is focused more on pod-to-pod
traffic and doesn't have support for filtering ingress by IP, and it's
not clear when it will.

> Any ideas how to control ingress control? We are thinking to use
> iptables but that seems be dirty or not sure whether even possible.

iptables wouldn't be able to implement per-project rules, but if you
don't mind having the same restrictions for all pods, then it would work
fine.

-- Dan
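
For reference, a sketch of the upstream NetworkPolicy object Dan mentions, in its pre-1.7 extensions/v1beta1 form (all labels and the port are hypothetical); note that, as Dan says, it selects pods and namespaces rather than external IPs:

```yaml
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-db-from-web
spec:
  # policy applies to pods labeled role=db in this namespace
  podSelector:
    matchLabels:
      role: db
  ingress:
    - from:
        # only pods labeled role=web may connect
        - podSelector:
            matchLabels:
              role: web
      ports:
        - protocol: TCP
          port: 5432
```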






ingress firewall

2016-12-14 Thread Srinivas Naga Kotaru (skotaru)
Hi

Does ingress support a firewall? We have a use case where a tenant has multiple 
projects for service segmentation purposes and needs ports other than 80/443. We 
are planning to use the ingress and egress features to allocate a pool of IP 
addresses. The client has strict requirements for controlling inbound and 
outbound traffic, i.e. who can allow or deny.

As per the documentation below, egress supports a firewall. Does ingress support 
something similar?

https://docs.openshift.com/dedicated/admin_guide/limit_pod_access_egress.html

The client also has a requirement of cross-cluster communication using ingress 
and egress, and wants to control its access.

Any ideas on how to control ingress traffic? We are thinking of using iptables, 
but that seems dirty, and we are not sure it is even possible.

Any ideas here are greatly appreciated …

--
Srinivas Kotaru


Re: web socket support

2016-12-06 Thread Srinivas Naga Kotaru (skotaru)
OK, will take a look. But router timeouts shouldn't impact a web socket 
connection, as it should run in tunnel mode.

Will play around with it. It would be nice if we had documentation covering it …

--
Srinivas Kotaru

From: "ccole...@redhat.com" <ccole...@redhat.com>
Date: Tuesday, December 6, 2016 at 3:55 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: web socket support

You would just listen on whatever port is exposed by the route (the target 
port).  You can create multiple routes if necessary.  Router allows Connection: 
Upgrade headers seamlessly.  Connection timeouts on the router matter, of 
course.

The router documentation briefly describes it, mostly because it just works.
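
A minimal route of the kind Clayton describes might look like this (host, service, and port names are hypothetical); websocket clients connect to the same route host, and the router passes the Upgrade handshake through:

```yaml
apiVersion: v1
kind: Route
metadata:
  name: ws-app
spec:
  host: ws-app.apps.example.com
  to:
    kind: Service
    name: ws-app          # backing service for the websocket pods
  port:
    targetPort: 8080      # service port the route forwards to
```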

On Dec 6, 2016, at 6:51 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:

Clayton

Can you point me any documentation to see how it works or implemented?

--
Srinivas Kotaru

From: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Date: Tuesday, December 6, 2016 at 2:58 PM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: web socket support

It's fully supported and has been since 3.0

On Tue, Dec 6, 2016 at 5:55 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
What is OpenShift's strategy or plan for supporting web sockets at the router 
layer? Our clients have been asking for web socket support since the OpenShift 2 
days. I know OpenShift 2 had a limited Apache-based node proxy, but that is not 
full web socket support.

Would like to hear from your for OpenShift 3

--
Srinivas Kotaru



web socket support

2016-12-06 Thread Srinivas Naga Kotaru (skotaru)
What is OpenShift's strategy or plan for supporting web sockets at the router 
layer? Our clients have been asking for web socket support since the OpenShift 2 
days. I know OpenShift 2 had a limited Apache-based node proxy, but that is not 
full web socket support.

Would like to hear from your for OpenShift 3

--
Srinivas Kotaru


feedback

2016-12-06 Thread Srinivas Naga Kotaru (skotaru)
We are continuously hearing two complaints from our users:

there is not enough verbose info to troubleshoot/narrow down two common 
failures.

1.   A pod is unable to come up. Why did it fail? What caused it?

2.   A deployment failed. Why did it fail? What is the reason?

Most clients use the console, so it would be nice if we added as much info as 
possible to both the console and the oc client.

The current console info/events are not sufficient for them. One example I can 
quickly quote: we restricted quota and forgot to set scopes. Since the scopes 
were missing, the quota applied to deploy pods as well, and since the quota was 
insufficient, the deployment pods were unable to schedule. We are adding scopes 
for non-terminating pods so that the quota does not apply to short-lived deploy 
pods. The verbose info from the console/oc was unable to narrow the issue down 
to quota; the pods simply failed without throwing any meaningful info.

--
Srinivas Kotaru


Re: cluster wide service acount

2016-12-01 Thread Srinivas Naga Kotaru (skotaru)
Thanks, it is working. I am able to log in using the service account token.

# oc get sa
# oc get secrets
#  oc get  secret  cae-ops-token-5vrkf  --template='{{.data.token}}'

decode base64 token

# oc login --token=
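
The token stored in the secret is base64-encoded, so it needs decoding before being passed to oc login; a small sketch with a made-up token value:

```shell
# Hypothetical base64-encoded token, as it would appear in .data.token
TOKEN_B64="c2VjcmV0LXRva2Vu"

# Decode it before passing to `oc login --token=...`
TOKEN=$(echo "$TOKEN_B64" | base64 -d)
echo "$TOKEN"    # -> secret-token
```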

Question:

I can see 2 secrets for each service account and both are valid for login. Any 
idea why there are 2?

# oc get secrets

cae-ops-token-5vrkfkubernetes.io/service-account-token   3 35m
cae-ops-token-jdhezkubernetes.io/service-account-token   3 35m

--
Srinivas Kotaru

From: Jordan Liggitt <jligg...@redhat.com>
Date: Thursday, December 1, 2016 at 12:26 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: cluster wide service acount

If you have the service account's token, you can use it from the command line 
like this:

oc login --token=...

The web console does not provide a way to log in with a service account token.

On Thu, Dec 1, 2016 at 3:19 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Jordan

That helps. Thanks for quick help.

Can we use this service account to log in to the console and the oc client? If 
yes, how? I know the SA only has a non-expiring token and no password.


--
Srinivas Kotaru

From: Jordan Liggitt <jligg...@redhat.com<mailto:jligg...@redhat.com>>
Date: Thursday, December 1, 2016 at 12:04 PM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: cluster wide service acount

Service accounts exist within a namespace but can be granted permissions across 
the entire cluster, just like any other user. For example:
oadm policy add-cluster-role-to-user cluster-reader 
system:serviceaccount:openshift-infra:monitor-service-account

On Thu, Dec 1, 2016 at 3:02 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
I know we can create a service account per project and use it for password-less 
API work and automation activities. Can we create a service account at the 
cluster level to be used for platform operations (monitoring, automation, a 
shared account for operations teams)?

The intention is to have non-expiring tokens.

--
Srinivas Kotaru



Re: cluster wide service acount

2016-12-01 Thread Srinivas Naga Kotaru (skotaru)
Jordan

That helps. Thanks for quick help.

Can we use this service account to log in to the console and the oc client? If 
yes, how? I know the SA only has a non-expiring token and no password.


--
Srinivas Kotaru

From: Jordan Liggitt <jligg...@redhat.com>
Date: Thursday, December 1, 2016 at 12:04 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: cluster wide service acount

Service accounts exist within a namespace but can be granted permissions across 
the entire cluster, just like any other user. For example:
oadm policy add-cluster-role-to-user cluster-reader 
system:serviceaccount:openshift-infra:monitor-service-account


On Thu, Dec 1, 2016 at 3:02 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
I know we can create a service account per project and use it for password-less 
API work and automation activities. Can we create a service account at the 
cluster level to be used for platform operations (monitoring, automation, a 
shared account for operations teams)?

The intention is to have non-expiring tokens.

--
Srinivas Kotaru



cluster wide service acount

2016-12-01 Thread Srinivas Naga Kotaru (skotaru)
I know we can create a service account per project and use it for password-less 
API work and automation activities. Can we create a service account at the 
cluster level to be used for platform operations (monitoring, automation, a 
shared account for operations teams)?

The intention is to have non-expiring tokens.

--
Srinivas Kotaru


Re: master public http --> https redirection

2016-12-01 Thread Srinivas Naga Kotaru (skotaru)
Yes, we are using a load balancer across the 3 masters. Do you want us to have 
the LB handle the redirect?

--
Srinivas Kotaru

From: Jessica Forrester <jforr...@redhat.com>
Date: Thursday, December 1, 2016 at 10:17 AM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: master public http --> https redirection

There is an existing RFE for this to happen OOTB https://trello.com/c/qxRMizmK

Is the load balancer you are using in front of the masters able to do this 
redirect?
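
If the load balancer in front of the masters happens to be HAProxy, the redirect is a couple of lines; a sketch, assuming an HAProxy frontend (names hypothetical):

```
frontend masters-http
    bind *:80
    # send every plain-HTTP request to HTTPS with a 301
    redirect scheme https code 301 if !{ ssl_fc }
```

oc and other API clients always speak HTTPS directly, so a redirect on port 80 should not affect them.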

On Thu, Dec 1, 2016 at 1:08 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
How do we configure the master public URL to redirect from http --> https? We 
want to redirect to https when our clients hit http://public_url in the browser.

Also OC and other clients shouldn’t face any issues with this change.

Is it possible?

--
Srinivas Kotaru



master public http --> https redirection

2016-12-01 Thread Srinivas Naga Kotaru (skotaru)
How do we configure the master public URL to redirect from http --> https? We 
want to redirect to https when our clients hit http://public_url in the browser.

Also OC and other clients shouldn’t face any issues with this change.

Is it possible?

--
Srinivas Kotaru


Re: cockpit auth issue

2016-11-30 Thread Srinivas Naga Kotaru (skotaru)
Dec 01 05:25:04 l3imas-id3-01.cisco.com cockpit-ws[99368]: Using certificate: 
/etc/cockpit/ws-certs.d/0-self-signed.cert
Dec 01 05:25:04 l3imas-id3-01.cisco.com cockpit-session[99371]: pam_ssh_add: 
Identity added: /users/skotaru/.ssh/id_rsa (/users/skotaru/.ssh/id_rsa)
Dec 01 05:25:04 l3imas-id3-01.cisco.com cockpit-session[99371]: 
pam_limits(cockpit:session): Could not set limit for 'nofile': Operation not 
permitted
Dec 01 05:25:04 l3imas-id3-01.cisco.com cockpit-ws[99368]: cockpit-session: 
couldn't open session: skotaru: Permission denied

I am seeing this error. Not sure if this is a bug or something that needs to be done.

$ rpm -qa cockpit*

cockpit-shell-118-2.el7.noarch
cockpit-118-2.el7.x86_64
cockpit-bridge-118-2.el7.x86_64
cockpit-ws-118-2.el7.x86_64
cockpit-storaged-118-2.el7.noarch
cockpit-doc-118-2.el7.x86_64
cockpit-docker-118-2.el7.x86_64
cockpit-pcp-118-2.el7.x86_64
cockpit-kubernetes-118-2.el7.x86_64
--
Srinivas Kotaru

From: Srinivas Naga Kotaru <skot...@cisco.com>
Date: Wednesday, November 30, 2016 at 5:49 PM
To: Manjunath A Kumatagi <mkuma...@in.ibm.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: cockpit auth issue

We use the same user/password for host access and OpenShift login.

Sent from my iPhone

On Nov 30, 2016, at 4:54 PM, Manjunath A Kumatagi 
<mkuma...@in.ibm.com<mailto:mkuma...@in.ibm.com>> wrote:

You will have to use openshift credentials for authentication purpose as 
cockpit auth is linked with openshift auth mechanism.

"Srinivas Naga Kotaru (skotaru)" ---12/01/2016 12:29:43 AM---Am 
testing cockpit and cloudforms for OpenShift monitoring and see which one is 
better for our requi

From: "Srinivas Naga Kotaru (skotaru)" 
<skot...@cisco.com<mailto:skot...@cisco.com>>
To: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Date: 12/01/2016 12:29 AM
Subject: cockpit auth issue
Sent by: 
dev-boun...@lists.openshift.redhat.com<mailto:dev-boun...@lists.openshift.redhat.com>





Am testing cockpit and cloudforms for OpenShift monitoring and see which one is 
better for our requirements.

First started with cockpit …

Installed cockpit and cockpit-kubernetes and trying.

When I try to access a node's cockpit URL, e.g. https://host:9090, using my 
host username/password, I get an access denied error.

Do we need any special auth for cockpit, or can we integrate cockpit with 
OpenShift OAuth?

--
Srinivas Kotaru


Re: cockpit auth issue

2016-11-30 Thread Srinivas Naga Kotaru (skotaru)
We use the same user/password for host access and OpenShift login.

Sent from my iPhone

On Nov 30, 2016, at 4:54 PM, Manjunath A Kumatagi 
<mkuma...@in.ibm.com<mailto:mkuma...@in.ibm.com>> wrote:


You will have to use openshift credentials for authentication purpose as 
cockpit auth is linked with openshift auth mechanism.

"Srinivas Naga Kotaru (skotaru)" ---12/01/2016 12:29:43 AM---Am 
testing cockpit and cloudforms for OpenShift monitoring and see which one is 
better for our requi

From: "Srinivas Naga Kotaru (skotaru)" 
<skot...@cisco.com<mailto:skot...@cisco.com>>
To: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Date: 12/01/2016 12:29 AM
Subject: cockpit auth issue
Sent by: 
dev-boun...@lists.openshift.redhat.com<mailto:dev-boun...@lists.openshift.redhat.com>





Am testing cockpit and cloudforms for OpenShift monitoring and see which one is 
better for our requirements.

First started with cockpit ...

Installed cockpit and cockpit-kubernetes and trying.

When I try to access a node's cockpit URL, e.g. https://host:9090, using my 
host username/password, I get an access denied error.

Do we need any special auth for cockpit, or can we integrate cockpit with 
OpenShift OAuth?

--
Srinivas Kotaru


Re: namedCertificates not working

2016-11-15 Thread Srinivas Naga Kotaru (skotaru)
The issue got fixed. I am using a SAN cert and also included the individual 
master names in the cert. I had also included these individual master names in 
the configuration under names:

names:
  - "mastervip"
  - "master1"
  - "master2"
  - "master3"

After removing the individual master names, the issue got fixed. Now the 
configuration has just the public URL.

I am able to see projects in the browser and the CLI after authentication.

However, curl and oc clients are still throwing a warning and not trusting the certificate:

oc login https://masterpublicurl
 The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could 
be intercepted by others.
Use insecure connections? (y/n):

Any idea why? Although it is a prod-grade cert from a well-known CA.

--
Srinivas Kotaru

From: Srinivas Naga Kotaru <skot...@cisco.com>
Date: Tuesday, November 15, 2016 at 3:06 PM
To: Jordan Liggitt <jligg...@redhat.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: namedCertificates not working

Nov 15 23:03:53 atomic-openshift-master-api[121472]: E1115 23:03:53.196173  
121472 reflector.go:203] 
github.com/openshift/origin/pkg/project/auth/cache.go:188: Failed to list 
*api.Namespace: Get https:// /api/v1/namespaces?resourceVersion=0: 
x509: certificate signed by unknown authority
Nov 15 23:03:53 atomic-openshift-master-api[121472]: I1115 23:03:53.204024  
121472 server.go:2161] http: TLS handshake error from 64.101.6.3:42824: remote 
error: bad certificate

I am wondering why this error occurs, since the cert is fully valid. In fact, the 
master console clearly shows a green lock with the right cert information.

--
Srinivas Kotaru

From: Jordan Liggitt <jligg...@redhat.com>
Date: Tuesday, November 15, 2016 at 2:41 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: namedCertificates not working

Are you seeing this from a system where you previously logged in to that URL 
using oc with the non-prod CA bundle? When configured to use a non-system-roots 
ca bundle, oc remembers it in the local user's kubeconfig file ($KUBECONFIG or 
~/.kube/config).

Try moving (or removing) the kubeconfig file and see if that allows oc to use 
the system roots to recognize the new certificates




On Nov 15, 2016, at 5:30 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Trying to deploy a prod-grade cert to our prod installation. The browser shows 
a green lock, but CLI clients show cert errors. The oc client is unable to display 
any projects. Do we need to use a CA file in the config? I couldn't find the right 
syntax; I tried caFile but to no avail.

Although the browser shows a green lock and the correct cert info, I am unable to 
display any projects, including the default projects, after authentication.

We are using separate URLs for public and internal OpenShift communication. 
The public URL is load balanced across 3 masters. The LB was configured with SSL 
passthrough to the masters, and the masters do the actual SSL offload.

oc login https://<API VIP>
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could 
be intercepted by others.
Use insecure connections? (y/n):

oc project default
Error from server: Get https://<api vip>/api/v1/namespaces/default: x509: 
certificate signed by unknown authority

assetConfig:
  logoutURL: ""
  masterPublicURL: https://apivip
  publicURL: https://apivip/console/
  servingInfo:
    bindAddress: 0.0.0.0:443
    bindNetwork: tcp4
    certFile: master.server.crt
    clientCA: ""
    keyFile: master.server.key
    maxRequestsInFlight: 0
    requestTimeoutSeconds: 0
    namedCertificates:
    - certFile: /opt/cae/certs/master/cae.crt
      keyFile: /opt/cae/certs/master/cae.key
      names:
      - "mastervip"
      - "master1"
      - "master2"
      - "master3"

servingInfo:
  bindAddress: 0.0.0.0:443
  bindNetwork: tcp4
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 500
  requestTimeoutSeconds: 3600
  namedCertificates:
  - certFile: /opt/cae/certs/master/cae.crt
    keyFile: /opt/cae/certs/master/cae.key
    names:
    - "mastervip"
    - "master1"
    - "master2"
    - "master3"


--
Srinivas Kotaru


Re: Quota Policies

2016-10-27 Thread Srinivas Naga Kotaru (skotaru)
Perfect, got it.

Thank you very much for helping me understand this easy yet complicated topic. 
I will reach out if I need further info.

Really appreciate the cooperation and the great detail.

--
Srinivas Kotaru

From: Derek Carr <dec...@redhat.com>
Date: Thursday, October 27, 2016 at 2:45 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: "ccole...@redhat.com" <ccole...@redhat.com>, dev 
<dev@lists.openshift.redhat.com>
Subject: Re: Quota Policies

Your understanding is correct, but one caveat.
This config doesn't alter or increase the limit numbers set by the developers

This is true UNLESS you set limitCPUToMemoryPercent.  In that case, the only 
value a user sets is the memory limit.
In a nutshell, the idea behind the cluster resource override is that users should 
only think about the limits for cpu/memory and not think about the request at 
all (since the operator is taking that responsibility).

Thanks,
Derek


On Thu, Oct 27, 2016 at 5:13 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Derek

We have separate project for non-prod & prod.

I fully understood the example you quoted. It is very clear. It would be nice if 
someone pasted this explanation, with the example, into the overcommit documentation.

In summary:

This config is only applicable to pods which have an explicit request, limit, or 
both (via limitrange defaults)
This overcommit ratio applies to the entire cluster/all projects that satisfy the 
above requirement
This is the cluster administrator explicitly controlling the overcommit and 
overriding what development teams put in the request numbers
This config doesn't alter or increase the limit numbers set by the developers

Is my understanding above correct?

--
Srinivas Kotaru

From: Derek Carr <dec...@redhat.com<mailto:dec...@redhat.com>>
Date: Thursday, October 27, 2016 at 1:07 PM

To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>, dev 
<dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: Quota Policies

Do you plan to manage non-prod apps in the same project(s) as prod-apps?
I will describe the ClusterResourceOverride behavior via an example, but it is 
basically a giant hammer you can enable on the cluster that lets an 
administrator set a cluster-wide over-commit target, which projects may 
opt in to or out of via annotation.
If a project opts into the behavior, all incoming pods will be modified based 
on the configuration.
Sample Scenario:  A project opts into the ClusterResourceOverride and it has no 
LimitRange defined

$ kubectl run best-effort-pods --image=nginx
The resulting pod will still have no resource requirements made (the plug-in 
has no impact).
$ kubectl run pods-with-resources --image=nginx --limits=cpu=1,memory=1Gi

Traditionally, this pod would have Guaranteed quality of service and both the 
request and limit value would be cpu=1 and memory=1Gi.
But let's see what happens if you enable the overriding behavior on this 
project using the following config:
memoryRequestToLimitPercent: 25
cpuRequestToLimitPercent: 25
limitCPUToMemoryPercent: 200
The pod ends up with the following:

requests.cpu=500m
limits.cpu=2
requests.memory=256Mi
limits.memory=1Gi
As you can see, the only value that had meaning from the end-user was the 
memory limit, but all other values were tuned relative to that value.  The 
memory request was tuned down to 25% of the limit.  The cpu limit was tuned 
up to 2 cores because it was set to 200% of the memory limit, where 1Gi = 1 core 
in that conversion.  Finally, the cpu request was tuned down to 25% of the 
limit, to 500m.
If we remove the limitCPUToMemoryPercent setting, and use the following 
configuration:

memoryRequestToLimitPercent: 25
cpuRequestToLimitPercent: 25

The pod ends up with the following:

requests.cpu=250m
limits.cpu=1
requests.memory=256Mi
limits.memory=1Gi
In this case, you can see the limit was respected from the user, but the 
requests were tuned down to meet the desired overcommit.  In effect, it is only 
possible to run BestEffort/Burstable pods but not Guaranteed pods with this 
configuration on in a project.
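The arithmetic above can be sketched as a small calculation (a simplification: real resource strings like 500m/256Mi and the plug-in's exact rounding are ignored; the 1Gi == 1 core conversion is as described above):

```python
def apply_override(mem_limit_gi, cpu_limit_cores=None,
                   mem_req_pct=25, cpu_req_pct=25, cpu_to_mem_pct=None):
    """Sketch of the ClusterResourceOverride math (units: Gi and cores)."""
    # limitCPUToMemoryPercent derives the cpu limit from the memory limit,
    # overriding whatever the user set (1Gi of memory == 1 core).
    if cpu_to_mem_pct is not None:
        cpu_limit_cores = mem_limit_gi * cpu_to_mem_pct / 100.0
    return {
        "requests.cpu": cpu_limit_cores * cpu_req_pct / 100.0,   # tuned down
        "limits.cpu": cpu_limit_cores,
        "requests.memory_gi": mem_limit_gi * mem_req_pct / 100.0,
        "limits.memory_gi": float(mem_limit_gi),
    }

# Scenario 1: memory limit 1Gi with limitCPUToMemoryPercent=200
# -> cpu request 0.5 (500m), cpu limit 2 cores, memory request 0.25Gi (256Mi)
print(apply_override(1, cpu_limit_cores=1, cpu_to_mem_pct=200))

# Scenario 2: user-set cpu limit 1, memory limit 1Gi, no limitCPUToMemoryPercent
# -> cpu request 0.25 (250m), cpu limit 1 core, memory request 0.25Gi (256Mi)
print(apply_override(1, cpu_limit_cores=1))
```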

Thanks,
Derek









On Thu, Oct 27, 2016 at 2:32 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Derek

Thanks for helping so far. It is not clear how quota & QOS work. We are 
planning to use BestEffort for non-prod apps and non-BestEffort for prod 
applications. This has some side effects, and app teams might complain that their 
application experience is not the same, as non-prod behaves differently than prod 
when they are testing releases and monitoring performance. We need to think about 
how to mitigate these challenges.

I was reading below link and this is pretty good.

https://docs.openshift.com/container-platform/3.3/admin_guide/overcommit.html

Re: Quota Policies

2016-10-27 Thread Srinivas Naga Kotaru (skotaru)
Sorry, in my earlier email I intended to write:

Thanks for helping so far. It is clear how quota & QOS work. I made a horrible 
typo by using 'not clear'; it changed the whole context. Sorry for that. Let me 
read what you said in the latest email and contact you if further info is required.

--
Srinivas Kotaru

From: Derek Carr <dec...@redhat.com>
Date: Thursday, October 27, 2016 at 1:07 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: "ccole...@redhat.com" <ccole...@redhat.com>, dev 
<dev@lists.openshift.redhat.com>
Subject: Re: Quota Policies

Do you plan to manage non-prod apps in the same project(s) as prod-apps?
I will describe the ClusterResourceOverride behavior via an example, but it is 
basically a giant hammer you can enable on the cluster that lets an 
administrator set a cluster-wide over-commit target, which projects may 
opt in to or out of via annotation.
If a project opts into the behavior, all incoming pods will be modified based 
on the configuration.
Sample Scenario:  A project opts into the ClusterResourceOverride and it has no 
LimitRange defined

$ kubectl run best-effort-pods --image=nginx
The resulting pod will still have no resource requirements made (the plug-in 
has no impact).
$ kubectl run pods-with-resources --image=nginx --limits=cpu=1,memory=1Gi

Traditionally, this pod would have Guaranteed quality of service and both the 
request and limit value would be cpu=1 and memory=1Gi.
But let's see what happens if you enable the overriding behavior on this 
project using the following config:
memoryRequestToLimitPercent: 25
cpuRequestToLimitPercent: 25
limitCPUToMemoryPercent: 200
The pod ends up with the following:

requests.cpu=500m
limits.cpu=2
requests.memory=256Mi
limits.memory=1Gi
As you can see, the only value that had meaning from the end-user was the 
memory limit, but all other values were tuned relative to that value.  The 
memory request was tuned down to 25% of the limit.  The cpu limit was tuned 
up to 2 cores because it was set to 200% of the memory limit, where 1Gi = 1 core 
in that conversion.  Finally, the cpu request was tuned down to 25% of the 
limit, to 500m.
If we remove the limitCPUToMemoryPercent setting, and use the following 
configuration:

memoryRequestToLimitPercent: 25
cpuRequestToLimitPercent: 25

The pod ends up with the following:

requests.cpu=250m
limits.cpu=1
requests.memory=256Mi
limits.memory=1Gi
In this case, you can see the limit was respected from the user, but the 
requests were tuned down to meet the desired overcommit.  In effect, it is only 
possible to run BestEffort/Burstable pods but not Guaranteed pods with this 
configuration on in a project.

Thanks,
Derek










On Thu, Oct 27, 2016 at 2:32 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Derek

Thanks for helping so far. It is not clear how quota & QOS work. We are 
planning to use BestEffort for non-prod apps and non-BestEffort for prod 
applications. This has some side effects, and app teams might complain that their 
application experience is not the same, as non-prod behaves differently than prod 
when they are testing releases and monitoring performance. We need to think about 
how to mitigate these challenges.

I was reading below link and this is pretty good.

https://docs.openshift.com/container-platform/3.3/admin_guide/overcommit.html

I didn't understand "Configuring Masters for Overcommitment" and its example. Can 
you brief me on how this overcommitment works in the scenarios we talked about? 
BestEffort, Burstable, and Guaranteed.

memoryRequestToLimitPercent: 25
cpuRequestToLimitPercent: 25
limitCPUToMemoryPercent: 200

I would be glad if you explained with simple examples… I'm trying to understand 
how this overcommit helps platform admins tune better.

--
Srinivas Kotaru

From: Derek Carr <dec...@redhat.com<mailto:dec...@redhat.com>>
Date: Wednesday, October 26, 2016 at 1:23 PM

To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>, dev 
<dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: Quota Policies

A BestEffort pod is a pod whose pod.spec.containers[x].resources.requests and 
pod.spec.containers[x].resources.limits are empty so your understanding is 
correct.
If you want to have a project that supports both BestEffort and NotBestEffort 
pods together, you can do that and control usage via ResourceQuota using the 
examples I provided.
If you want to have a project that supports both BestEffort and NotBestEffort 
pods together, and use LimitRange to enforce min/max constraints and default 
resource requirements, you will encounter problems.

  1.  The LimitRange will assign default resources to each BestEffort pod you 
submit (making them no longer BestEffort) or

Re: Quota Policies

2016-10-27 Thread Srinivas Naga Kotaru (skotaru)
Derek

Thanks for helping so far. It is not clear how quota & QOS work. We are 
planning to use BestEffort for non-prod apps and non-BestEffort for prod 
applications. This has some side effects, and app teams might complain that their 
application experience is not the same, as non-prod behaves differently than prod 
when they are testing releases and monitoring performance. We need to think about 
how to mitigate these challenges.

I was reading below link and this is pretty good.

https://docs.openshift.com/container-platform/3.3/admin_guide/overcommit.html

I didn't understand "Configuring Masters for Overcommitment" and its example. Can 
you brief me on how this overcommitment works in the scenarios we talked about? 
BestEffort, Burstable, and Guaranteed.

memoryRequestToLimitPercent: 25
cpuRequestToLimitPercent: 25
limitCPUToMemoryPercent: 200

I would be glad if you explained with simple examples… I'm trying to understand 
how this overcommit helps platform admins tune better.

--
Srinivas Kotaru

From: Derek Carr <dec...@redhat.com>
Date: Wednesday, October 26, 2016 at 1:23 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: "ccole...@redhat.com" <ccole...@redhat.com>, dev 
<dev@lists.openshift.redhat.com>
Subject: Re: Quota Policies

A BestEffort pod is a pod whose pod.spec.containers[x].resources.requests and 
pod.spec.containers[x].resources.limits are empty so your understanding is 
correct.
If you want to have a project that supports both BestEffort and NotBestEffort 
pods together, you can do that and control usage via ResourceQuota using the 
examples I provided.
If you want to have a project that supports both BestEffort and NotBestEffort 
pods together, and use LimitRange to enforce min/max constraints and default 
resource requirements, you will encounter problems.

  1.  The LimitRange will assign default resources to each BestEffort pod you 
submit (making them no longer BestEffort) or
  2.  It will require that each pod have a cpu or memory value specified as 
part of its validation (if you configured it as such)
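To make the definition above concrete, a minimal BestEffort pod spec simply omits the resources stanza entirely (the pod name and image are arbitrary examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: best-effort-example
spec:
  containers:
  - name: app
    image: nginx
    # no resources.requests and no resources.limits anywhere => BestEffort;
    # adding a request or limit to any container makes the pod NotBestEffort
```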
Thanks,
Derek



On Wed, Oct 26, 2016 at 2:54 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Can you answer this question? I am trying to understand what we call BestEffort 
pods from a quota/limitrange/pod-definition perspective.

My understanding is that a pod is called a BestEffort pod if its quota definition 
has no compute resources (limit or request) and it doesn't have an explicit 
request or limit in the pod definition. Is my understanding correct?

--
Srinivas Kotaru

From: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Date: Tuesday, October 25, 2016 at 3:42 PM
To: Derek Carr <dec...@redhat.com<mailto:dec...@redhat.com>>

Cc: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>, dev 
<dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: Quota Policies

This is good. I'm getting enough details to craft my policies.

In the case of the 1st example (BestEffort), we don't have to create any 
limitrange with default requests and limits? Or a quota definition without any 
request.cpu, request.memory, limit.cpu, or limit.memory?

I am trying to understand what exactly BestEffort means from a quota, limitrange, 
and pod-definition perspective. Is it just an arbitrary word, or is a pod called 
BestEffort if it doesn't have requests or limits in its definition?

--
Srinivas Kotaru

From: Derek Carr <dec...@redhat.com<mailto:dec...@redhat.com>>
Date: Tuesday, October 25, 2016 at 2:26 PM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>, dev 
<dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: Quota Policies

Sorry, the command is the following (missed scopes on second):

$ kubectl create quota best-effort-not-terminating --hard=pods=5 
--scopes=NotTerminating,BestEffort
$ kubectl create quota not-best-effort-not-terminating 
--hard=requests.cpu=5,requests.memory=10Gi,limits.cpu=10,limits.memory=20Gi 
--scopes=NotTerminating,NotBestEffort

On Tue, Oct 25, 2016 at 5:25 PM, Derek Carr 
<dec...@redhat.com<mailto:dec...@redhat.com>> wrote:
If you only want to quota pods that have a more permanent footprint on the 
node, then create a quota that only matches on the NotTerminating scope.
If you want to allow usage of slack resources (i.e. run BestEffort pods), and 
define a quota that controls otherwise, create 2 quotas.
$ kubectl create quota best-effort-not-terminating --hard=pods=5 
--scopes=NotTerminating,BestEffort
$ kubectl create quota not-best-effort-not-terminating 
--hard=requests.cpu=5,requests.memory=10Gi,limits.cpu=10,limits.memory=20Gi

Re: Quota Policies

2016-10-26 Thread Srinivas Naga Kotaru (skotaru)
Can you answer this question? I am trying to understand what we call BestEffort 
pods from a quota/limitrange/pod-definition perspective.

My understanding is that a pod is called a BestEffort pod if its quota definition 
has no compute resources (limit or request) and it doesn't have an explicit 
request or limit in the pod definition. Is my understanding correct?

--
Srinivas Kotaru

From: Srinivas Naga Kotaru <skot...@cisco.com>
Date: Tuesday, October 25, 2016 at 3:42 PM
To: Derek Carr <dec...@redhat.com>
Cc: "ccole...@redhat.com" <ccole...@redhat.com>, dev 
<dev@lists.openshift.redhat.com>
Subject: Re: Quota Policies

This is good. I'm getting enough details to craft my policies.

In the case of the 1st example (BestEffort), we don't have to create any 
limitrange with default requests and limits? Or a quota definition without any 
request.cpu, request.memory, limit.cpu, or limit.memory?

I am trying to understand what exactly BestEffort means from a quota, limitrange, 
and pod-definition perspective. Is it just an arbitrary word, or is a pod called 
BestEffort if it doesn't have requests or limits in its definition?

--
Srinivas Kotaru

From: Derek Carr <dec...@redhat.com>
Date: Tuesday, October 25, 2016 at 2:26 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: "ccole...@redhat.com" <ccole...@redhat.com>, dev 
<dev@lists.openshift.redhat.com>
Subject: Re: Quota Policies

Sorry, the command is the following (missed scopes on second):

$ kubectl create quota best-effort-not-terminating --hard=pods=5 
--scopes=NotTerminating,BestEffort
$ kubectl create quota not-best-effort-not-terminating 
--hard=requests.cpu=5,requests.memory=10Gi,limits.cpu=10,limits.memory=20Gi 
--scopes=NotTerminating,NotBestEffort

On Tue, Oct 25, 2016 at 5:25 PM, Derek Carr 
<dec...@redhat.com<mailto:dec...@redhat.com>> wrote:
If you only want to quota pods that have a more permanent footprint on the 
node, then create a quota that only matches on the NotTerminating scope.
If you want to allow usage of slack resources (i.e. run BestEffort pods), and 
define a quota that controls otherwise, create 2 quotas.
$ kubectl create quota best-effort-not-terminating --hard=pods=5 
--scopes=NotTerminating,BestEffort
$ kubectl create quota not-best-effort-not-terminating 
--hard=requests.cpu=5,requests.memory=10Gi,limits.cpu=10,limits.memory=20Gi
So in this example:

1. the user is able to create 5 long-running pods that make no resource request 
(i.e. no cpu or memory specified)
2. the user can request up to 5 cpu cores and 10Gi memory for scheduling 
purposes, and the node will work to ensure that is available
3. pods are able to burst up to 10 cpu cores and 20Gi memory based on node-local 
conditions
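The same two quotas can be written as declarative manifests (a sketch mirroring the commands above; the names and values are the example figures from those commands):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: best-effort-not-terminating
spec:
  hard:
    pods: "5"
  scopes:
  - NotTerminating
  - BestEffort
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: not-best-effort-not-terminating
spec:
  hard:
    requests.cpu: "5"
    requests.memory: 10Gi
    limits.cpu: "10"
    limits.memory: 20Gi
  scopes:
  - NotTerminating
  - NotBestEffort
```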

Thanks,
Derek

On Tue, Oct 25, 2016 at 5:14 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Derek/Clayton

I saw this link yesterday. It was really good and helpful; I didn't understand 
the last advanced section. Let me spend some time on it again.

@Clayton: Do we need to create separate quota policies for both terminating and 
non-terminating pods, or would creating a single policy for non-terminating be 
enough? We want to keep it simple, but at the same time we don't want short-lived 
terminating pods to create any issues for the regular working pods.

--
Srinivas Kotaru

From: Derek Carr <dec...@redhat.com<mailto:dec...@redhat.com>>
Date: Tuesday, October 25, 2016 at 1:09 PM
To: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Cc: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>, dev 
<dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: Quota Policies

You may find this document useful:
http://kubernetes.io/docs/admin/resourcequota/walkthrough/

>BestEffort or NotBestEffort are used to explain the concept, or can the Pod 
>definition have these words?
This refers to the quality of service for a pod.  If a container in a pod makes 
no request/limit for compute resources, it is BestEffort.  If it makes a 
request for any resource, it's NotBestEffort.
You can apply a quota to control the number of BestEffort pods you can create 
separate from the number of NotBestEffort pods.
See step 5 in the above linked example for a walkthrough.
Thanks,
Derek




On Tue, Oct 25, 2016 at 4:02 PM, Clayton Coleman 
<ccole...@redhat.com<mailto:ccole...@redhat.com>> wrote:


On Tue, Oct 25, 2016 at 3:55 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Hi

I'm trying to frame a policy for the best usage of compute resources for our 
environment. I started reading the documentation on this topic. Although the 
documentation is pretty limited on working examples, I now have a better 
understanding of quota and limitrange objects.

Re: How would I know my project members & roles ?

2016-10-26 Thread Srinivas Naga Kotaru (skotaru)
We are only giving the edit role to project members, not admin, for a specific 
reason. With the edit role, won't they be able to view the members of their 
project? Does it need admin privileges? (This was possible in OSE 2.x.)

The below issue was reported by one of our clients.

--
Srinivas Kotaru



OSE 3.x has no option to display the roles & membership on the Web UI.

As a member of the project account I don't have permission to view the same.
(inline screenshot omitted)

Is there anything we are planning to have like OSE 2.x? (screenshot of the 
OSE 2.x view omitted)




Re: Quota Policies

2016-10-25 Thread Srinivas Naga Kotaru (skotaru)
This is good. I'm getting enough details to craft my policies.

In the case of the 1st example (BestEffort), we don't have to create any 
limitrange with default requests and limits? Or a quota definition without any 
request.cpu, request.memory, limit.cpu, or limit.memory?

I am trying to understand what exactly BestEffort means from a quota, limitrange, 
and pod-definition perspective. Is it just an arbitrary word, or is a pod called 
BestEffort if it doesn't have requests or limits in its definition?

--
Srinivas Kotaru

From: Derek Carr <dec...@redhat.com>
Date: Tuesday, October 25, 2016 at 2:26 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: "ccole...@redhat.com" <ccole...@redhat.com>, dev 
<dev@lists.openshift.redhat.com>
Subject: Re: Quota Policies

Sorry, the command is the following (missed scopes on second):

$ kubectl create quota best-effort-not-terminating --hard=pods=5 
--scopes=NotTerminating,BestEffort
$ kubectl create quota not-best-effort-not-terminating 
--hard=requests.cpu=5,requests.memory=10Gi,limits.cpu=10,limits.memory=20Gi 
--scopes=NotTerminating,NotBestEffort

On Tue, Oct 25, 2016 at 5:25 PM, Derek Carr 
<dec...@redhat.com<mailto:dec...@redhat.com>> wrote:
If you only want to quota pods that have a more permanent footprint on the 
node, then create a quota that only matches on the NotTerminating scope.
If you want to allow usage of slack resources (i.e. run BestEffort pods), and 
define a quota that controls otherwise, create 2 quotas.
$ kubectl create quota best-effort-not-terminating --hard=pods=5 
--scopes=NotTerminating,BestEffort
$ kubectl create quota not-best-effort-not-terminating 
--hard=requests.cpu=5,requests.memory=10Gi,limits.cpu=10,limits.memory=20Gi
So in this example:

1. the user is able to create 5 long-running pods that make no resource request 
(i.e. no cpu or memory specified)
2. the user can request up to 5 cpu cores and 10Gi memory for scheduling 
purposes, and the node will work to ensure that is available
3. pods are able to burst up to 10 cpu cores and 20Gi memory based on node-local 
conditions

Thanks,
Derek

On Tue, Oct 25, 2016 at 5:14 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Derek/Clayton

I saw this link yesterday. It was really good and helpful; I didn't understand 
the last advanced section. Let me spend some time on it again.

@Clayton: Do we need to create separate quota policies for both terminating and 
non-terminating pods, or would creating a single policy for non-terminating be 
enough? We want to keep it simple, but at the same time we don't want short-lived 
terminating pods to create any issues for the regular working pods.

--
Srinivas Kotaru

From: Derek Carr <dec...@redhat.com<mailto:dec...@redhat.com>>
Date: Tuesday, October 25, 2016 at 1:09 PM
To: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Cc: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>, dev 
<dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: Quota Policies

You may find this document useful:
http://kubernetes.io/docs/admin/resourcequota/walkthrough/

>Are BestEffort or NotBestEffort just used to explain the concept, or can a Pod
>definition contain these words?
This refers to the quality of service for a pod.  If a container in a pod makes 
no request/limit for compute resources, it is BestEffort.  If it makes a 
request for any resource, its NotBestEffort.
You can apply a quota to control the number of BestEffort pods you can create 
separate from the number of NotBestEffort pods.
See step 5 in the above linked example for a walkthrough.
Thanks,
Derek




On Tue, Oct 25, 2016 at 4:02 PM, Clayton Coleman 
<ccole...@redhat.com<mailto:ccole...@redhat.com>> wrote:


On Tue, Oct 25, 2016 at 3:55 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Hi

I’m trying to frame a policy for the best usage of compute resources in our 
environment. I started reading the documentation on this topic. Although the 
documentation is pretty limited on working examples, I now have a better 
understanding of quota and LimitRange objects.

We are planning to enforce quota and LimitRange on every project as part of 
project provisioning. Clients can increase these limits by going to the modify 
screen in our system and paying the cost accordingly. The goal is highly 
efficient use of cluster resources with minimal client disturbance.

Have few questions around implementation?

Can we exclude short-lived pods, such as build and deploy pods, from quota restrictions?

There are two quotas - one for terminating pods (pods that are guaranteed to 
finish in a certain time period) and one for non-terminating pods.

Are quotas enforced only on running pods, or also on pods in dead, pending, or succeeded states?

Once a pod 

Audit logging in Openshift Enterprise 3

2016-10-17 Thread Srinivas Naga Kotaru (skotaru)
https://access.redhat.com/solutions/1748893

I saw this KB article recently. What is the path to the log file? Can we 
specify a log path? Can we forward the logs to other logging systems (Splunk, ELK, etc.)?

Any good documentation link would be useful.
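
A sketch of what the audit stanza in master-config.yaml looks like in 3.x (field names are from the admin docs as I recall them; verify against your version before relying on this):

```yaml
auditConfig:
  enabled: true
  auditFilePath: /var/log/origin/audit.log   # illustrative path; omit to log to the master log
  maximumFileRetentionDays: 14
  maximumFileSizeMegabytes: 100
  maximumRetainedFiles: 5
```

Once the log is a file on disk, any standard shipper (a Splunk forwarder, or Fluentd for ELK) can tail it; there is no built-in forwarder in the audit config itself.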

--
Srinivas Kotaru
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Container UUID

2016-10-11 Thread Srinivas Naga Kotaru (skotaru)
Hmm, that might work, but we would need to modify templates, and I’m not sure 
all of our clients want this feature. Again, this UUID should be unique to each pod.

Also, some pods might be created without using templates.

Is there any other way??
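
One alternative that may be worth checking (an assumption on my part, not confirmed in this thread): the Kubernetes downward API can inject a pod's own metadata into environment variables, which yields a per-pod value without touching templates. A sketch, with illustrative container/image names:

```yaml
spec:
  containers:
  - name: app                 # illustrative
    image: myapp:latest       # illustrative
    env:
    - name: POD_NAME          # unique per pod wherever the downward API exists
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_UID           # a true UUID, if your version supports metadata.uid
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
```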

--
Srinivas Kotaru

From: Mateus Caruccio <mateus.caruc...@getupcloud.com>
Date: Tuesday, October 11, 2016 at 10:19 AM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: dev <dev@lists.openshift.redhat.com>
Subject: Re: Container UUID

Hi.

You could use template parameters to generate a random value and use it into 
your contiainer template.

In you template.parameters:

  - description: My unique UUID
name: UNIQUE_UUID
generate: expression
from: '[0-9A-F]{8}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{12}'

And then in your DC:

spec:
template:
spec:
  containers:
  - env:
- name: UNIQUE_UUID
  value: '${UNIQUE_UUID}'


Hope it helps.


--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade

On Tue, Oct 11, 2016 at 2:11 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Hi

Is there any way to set an environment variable which holds a unique UUID value 
per pod? If we set an environment variable at the DC or RC level, the same 
value propagates to all pods. That is expected behavior, since all pods are 
created from the same template definition.

If we add the environment variable at the pod level, its lifetime is limited.

Example:  Want to put an environment variable like below

UUID = FCAC382C-0CEB-40E4-9654-07715CDC9DD8

This UUID is unique to each pod.


--
Srinivas Kotaru

___
dev mailing list
dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev



Container UUID

2016-10-11 Thread Srinivas Naga Kotaru (skotaru)
Hi

Is there any way to set an environment variable which holds a unique UUID value 
per pod? If we set an environment variable at the DC or RC level, the same 
value propagates to all pods. That is expected behavior, since all pods are 
created from the same template definition.

If we add the environment variable at the pod level, its lifetime is limited.

Example:  Want to put an environment variable like below

UUID = FCAC382C-0CEB-40E4-9654-07715CDC9DD8

This UUID is unique to each pod.


--
Srinivas Kotaru


NetworkCIDR for big cluster

2016-10-06 Thread Srinivas Naga Kotaru (skotaru)
Hi

We’re building 3 big clusters, one per data center. Growth is expected to reach 
1,000 nodes per cluster over time.

Questions:


1.
# egrep 'clusterNetworkCIDR|serviceNetworkCIDR' 
/etc/origin/master/master-config.yaml

  clusterNetworkCIDR: 10.1.0.0/16
  serviceNetworkCIDR: 172.30.0.0/16

The above are the default subnet values. Will these defaults be sufficient for 
a 1k-node cluster?


2.   If the answer is ‘no’, can we change to new CIDR values depending on 
growth once the cluster is built? (I heard it is not possible after the cluster is built.)

3.   If the answer is ‘no’, what are the right CIDRs for a 1k-node cluster?
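
To size this yourself: the number of nodes a cluster network can hold is 2^(node prefix − cluster prefix), where the node prefix is 32 − hostSubnetLength (hostSubnetLength defaults to 8 in OSE 3.x, giving each node a /24 of pod IPs). A quick sketch assuming those defaults:

```python
import ipaddress


def max_nodes(cluster_cidr: str, host_subnet_length: int = 8) -> int:
    """How many node subnets a clusterNetworkCIDR can hold.

    Each node is assigned a subnet of 2**host_subnet_length pod IPs;
    hostSubnetLength defaults to 8 in OSE 3.x, i.e. a /24 per node.
    """
    net = ipaddress.ip_network(cluster_cidr)
    node_prefix = 32 - host_subnet_length  # 24 for the default
    return 2 ** (node_prefix - net.prefixlen)


print(max_nodes("10.1.0.0/16"))    # 256 -- the default /16 tops out well below 1k nodes
print(max_nodes("10.128.0.0/14"))  # 1024 -- a /14 leaves room for ~1k nodes
```

So with the defaults above, the /16 clusterNetworkCIDR caps the cluster at 256 nodes; a wider network (or a smaller hostSubnetLength) is needed for 1,000.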


--
Srinivas Kotaru


ovs-multitenant

2016-09-27 Thread Srinivas Naga Kotaru (skotaru)
Hi

We are switching our SDN plugin from ovs-subnet --> ovs-multitenant.

Few qq


1.   Is ovs-multitenant ready for production-grade workloads?

2.   Do we need to delete and re-create the router and registry components, or 
is that not required? (I know we need to restart the master and node services 
after switching.)

3.   Can we toggle back to ovs-subnet in the future if required, without any 
impact or downtime to apps?

4.   Any other useful info for ovs-multitenant plug-in usage?
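
As far as I understand it, the switch itself is a change to `networkPluginName` in master-config.yaml and in each node-config.yaml, followed by the service restarts mentioned above. A sketch of the master stanza (field names as I recall them from the 3.x docs; the CIDRs mirror the defaults discussed earlier in this list):

```yaml
networkConfig:
  networkPluginName: redhat/openshift-ovs-multitenant   # was redhat/openshift-ovs-subnet
  clusterNetworkCIDR: 10.1.0.0/16
  serviceNetworkCIDR: 172.30.0.0/16
```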


--
Srinivas Kotaru


Re: Clarification

2016-09-08 Thread Srinivas Naga Kotaru (skotaru)
The scenario where I saw this happening was running Consul with default 
settings. Because multiple different clusters were started using the default 
settings, the access details were the same. The way Consul finds other nodes is 
via Gossip which is done over UDP.
 
By changing the settings for Consul this was resolved. This is also how I 
detected that other instances were running from a previous deploy. Consul nodes 
were popping up that I had previous deleted by deleting either the Pod or RC 
and yet the container in the Pod for the Consul agent was still running.

-- 
Srinivas Kotaru


On 9/8/16, 12:44 PM, "Dan Winship" <d...@redhat.com> wrote:

On 09/08/2016 03:32 PM, Srinivas Naga Kotaru (skotaru) wrote:
> Containers that use UDP (Layer 4) and do not go through the Openshift
> networking layer can find other containers running in a Pod with a
> Service defined. *Potential impact* to mutli-tenant boundaries.

Can you explain what you mean? Especially the part about "and do not go
through the OpenShift networking layer"?

If by "can find other containers" you just mean "can find that certain
IP addresses are in use by pods in other namespaces", then yes, that's
true, but they can't actually communicate with them.

-- Dan







Clarification

2016-09-08 Thread Srinivas Naga Kotaru (skotaru)
Can you confirm below 2 statements?

Potential Bug: Openshift does not always clean up all containers within a Pod 
when the Pod is removed. There were a few instances where one of the containers 
from the Pod were left running even though the Pod was successfully removed and 
the other containers within the Pod had been successfully shutdown and removed.

Containers that use UDP (Layer 4) and do not go through the Openshift 
networking layer can find other containers running in a Pod with a Service 
defined. Potential impact to mutli-tenant boundaries.

--
Srinivas Kotaru


Re: Accessing Metrics Using Hawkular Metrics

2016-07-14 Thread Srinivas Naga Kotaru (skotaru)
Any comments?

--
Srinivas Kotaru

From: skotaru
Date: Wednesday, July 13, 2016 at 7:01 PM
To: dev
Subject: Accessing Metrics Using Hawkular Metrics

Can you fix the documentation below? The date examples throw the following error:

usage: date [-jnu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]] ...
[-f fmt date | [[[mm]dd]HH]MM[[cc]yy][.ss]] [+format]
No JSON object could be decoded


I need another clarification: the API is giving all the information except the 
metrics themselves. I was expecting numeric values showing CPU and memory 
usage. How do clients leverage these metrics to make runtime decisions (like 
scaling up more pods, or business decisions based on the metrics)?
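
For the "how do clients consume this" part: the raw numeric samples come from the `/data` endpoint of each metric, queried per tenant. A sketch of building such a request — the hostname and token here are illustrative; the `/gauges/{id}/data` path and the `Hawkular-Tenant` header follow the Hawkular Metrics REST API:

```python
from urllib.parse import quote


def metrics_request(host, tenant, token, metric_id):
    """Build the URL and headers for one gauge's raw data points.

    The Hawkular-Tenant header selects the project whose metrics you
    want; the bearer token is an OpenShift token with view access.
    host and the example values below are illustrative only.
    """
    # Metric ids contain '/', so they must be percent-encoded in the path.
    url = "https://%s/hawkular/metrics/gauges/%s/data" % (host, quote(metric_id, safe=""))
    headers = {
        "Hawkular-Tenant": tenant,
        "Authorization": "Bearer " + token,
        "Accept": "application/json",
    }
    return url, headers


url, hdrs = metrics_request(
    "hawkular-metrics.example.com",  # hypothetical route hostname
    "openshift-infra",
    "TOKEN",
    "hawkular-cassandra-1/7186d9dc-9d30-11e5-90d9-3c970e88a56f/cpu/limit",
)
print(url)
```

The response is a JSON array of `{timestamp, value}` points, which is what an autoscaling or reporting client would actually act on.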



[
   ...
   },
   {
"id": 
"hawkular-cassandra-1/7186d9dc-9d30-11e5-90d9-3c970e88a56f/cpu/limit",
"tags": {
"container_base_image": "openshift/origin-metrics-cassandra:latest",
"container_base_image_description": "User-defined image name that 
is run inside the container",
"container_name": "hawkular-cassandra-1",
"container_name_description": "User-provided name of the container 
or full container name for system containers",
"descriptor_name": "cpu/limit",
"group_id": "hawkular-cassandra-1/cpu/limit",
"host_id": "192.168.122.1",
"host_id_description": "Identifier specific to a host. Set by cloud 
provider or user",
"hostname": "192.168.122.1",
"hostname_description": "Hostname where the container ran",
"labels": "...",
"labels_description": "Comma-separated list of user-provided 
labels",
"namespace_id": "fd5c9c31-8750-11e5-8b09-3c970e88a56f",
"namespace_id_description": "The UID of namespace of the pod",
"pod_id": "7186d9dc-9d30-11e5-90d9-3c970e88a56f",
"pod_id_description": "The unique ID of the pod",
"pod_name": "hawkular-cassandra-1-yoe3h",
"pod_name_description": "The name of the pod",
"pod_namespace": "openshift-infra",
"pod_namespace_description": "The namespace of the pod",
"resource_id_description": "Identifier(s) specific to a metric"
},
"tenantId": "openshift-infra",
"type": "gauge"
},
{
  ...
]






--
Srinivas Kotaru


Re: OSE 3.2 - Registry - Unable to write

2016-06-07 Thread Srinivas Naga Kotaru (skotaru)
It was fixed by running chown at the host level:


sudo chown -R 1001 
/var/lib/origin/openshift.local.volumes/pods/2a7b5be6-2c32-11e6-b963-005056acedd5/volumes/kubernetes.io~nfs/registry-storage

I found the above Docker volume using the mount command and changed its ownership.

That fixed the issue.

thanks guys





-- 
Srinivas Kotaru

On 6/7/16, 10:58 AM, "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com> wrote:

>Am using NFS volume for registry 
>
>
>
>-- 
>Srinivas Kotaru
>
>On 6/7/16, 10:42 AM, "Seth Jennings" <sjenn...@redhat.com> wrote:
>
>>Yes, not really sure of your storage setup but if it is NFS storage
>>and selinux is blocking it you need to do:
>>
>>setsebool -P virt_use_nfs 1
>>
>>On Tue, Jun 7, 2016 at 12:29 PM, Srinivas Naga Kotaru (skotaru)
>><skot...@cisco.com> wrote:
>>> Can someone help here? Struck and unable to proceed next step
>>>
>>>
>>>
>>> --
>>>
>>> Srinivas Kotaru
>>>
>>>
>>>
>>> From: skotaru <skot...@cisco.com>
>>> Date: Monday, June 6, 2016 at 8:24 PM
>>> To: "dev@lists.openshift.redhat.com" <dev@lists.openshift.redhat.com>
>>> Subject: OSE 3.2 - Registry - Unable to write
>>>
>>>
>>>
>>> Hi
>>>
>>>
>>>
>>> Just finished installing OSE 3.2.  Registry throwing below error while doing
>>> a sample deployment.
>>>
>>>
>>>
>>> I0606 18:40:55.315293   1 sti.go:334] Successfully built
>>> alln-int-build-testing/cakephp-example-1:e6008a5f
>>>
>>> I0606 18:40:55.335600   1 cleanup.go:23] Removing temporary directory
>>> /tmp/s2i-build044311744
>>>
>>> I0606 18:40:55.335621   1 fs.go:156] Removing directory
>>> '/tmp/s2i-build044311744'
>>>
>>> I0606 18:40:55.370335   1 sti.go:268] Using provided push secret for
>>> pushing 172.30.84.20:5000/alln-int-build-testing/cakephp-example:latest
>>> image
>>>
>>> I0606 18:40:55.370389   1 sti.go:272] Pushing
>>> 172.30.84.20:5000/alln-int-build-testing/cakephp-example:latest image ...
>>>
>>> I0606 18:40:57.016159   1 sti.go:277] Registry server Address:
>>>
>>> I0606 18:40:57.016243   1 sti.go:278] Registry server User Name:
>>> serviceaccount
>>>
>>> I0606 18:40:57.016255   1 sti.go:279] Registry server Email:
>>> serviceacco...@example.org
>>>
>>> I0606 18:40:57.016262   1 sti.go:284] Registry server Password:
>>> <>
>>>
>>> F0606 18:40:57.016273   1 builder.go:204] Error: build error: Failed to
>>> push image. Response from registry is: Received unexpected HTTP status: 500
>>> Internal Server Error
>>>
>>>
>>>
>>> Diagnostics on primary master throws below error
>>>
>>>
>>>
>>> ERROR: [DClu1020 from diagnostic
>>> ClusterRegistry@openshift/origin/pkg/diagnostics/cluster/registry.go:271]
>>>
>>>The pod logs for the "docker-registry-6-cqs51" pod belonging to
>>>
>>>the "docker-registry" service indicated the registry is unable to
>>> write to disk.
>>>
>>>This may indicate an SELinux denial, or problems with volume
>>>
>>>ownership/permissions.
>>>
>>>
>>>
>>>For volume permission problems please consult the Persistent Storage
>>> section
>>>
>>>of the Administrator's Guide.
>>>
>>>
>>>
>>>In the case of SELinux this may be resolved on the node by running:
>>>
>>>
>>>
>>>sudo chcon -R -t svirt_sandbox_file_t
>>> [PATH_TO]/openshift.local.volumes
>>>
>>>
>>>
>>>time="2016-06-06T19:00:08.144988457-04:00" level=error msg="response
>>> completed with error" err.code=UNKNOWN err.detail="filesystem: mkdir
>>> /registry/docker: permission denied" err.message="unknown error"
>>> go.version=go1.4.2 http.request.host="172.30.84.20:5000"
>>> http.request.id=7cb19403-49f5-4909-b287-582e60685bec
>>> http.request.method=POST http.request.remoteaddr="10.1.0.1:38212"
>>> http.request.uri="/v2/alln-int-build-testing/busybox/blobs/uploads/"
>>> http.request.useragent="docker/1.9.1 go/go1.4.2
>>> kernel/3.10.0-327.13.1.el7

Re: OSE 3.2 - Registry - Unable to write

2016-06-07 Thread Srinivas Naga Kotaru (skotaru)
I am using an NFS volume for the registry.



-- 
Srinivas Kotaru

On 6/7/16, 10:42 AM, "Seth Jennings" <sjenn...@redhat.com> wrote:

>Yes, not really sure of your storage setup but if it is NFS storage
>and selinux is blocking it you need to do:
>
>setsebool -P virt_use_nfs 1
>
>On Tue, Jun 7, 2016 at 12:29 PM, Srinivas Naga Kotaru (skotaru)
><skot...@cisco.com> wrote:
>> Can someone help here? Struck and unable to proceed next step
>>
>>
>>
>> --
>>
>> Srinivas Kotaru
>>
>>
>>
>> From: skotaru <skot...@cisco.com>
>> Date: Monday, June 6, 2016 at 8:24 PM
>> To: "dev@lists.openshift.redhat.com" <dev@lists.openshift.redhat.com>
>> Subject: OSE 3.2 - Registry - Unable to write
>>
>>
>>
>> Hi
>>
>>
>>
>> Just finished installing OSE 3.2.  Registry throwing below error while doing
>> a sample deployment.
>>
>>
>>
>> I0606 18:40:55.315293   1 sti.go:334] Successfully built
>> alln-int-build-testing/cakephp-example-1:e6008a5f
>>
>> I0606 18:40:55.335600   1 cleanup.go:23] Removing temporary directory
>> /tmp/s2i-build044311744
>>
>> I0606 18:40:55.335621   1 fs.go:156] Removing directory
>> '/tmp/s2i-build044311744'
>>
>> I0606 18:40:55.370335   1 sti.go:268] Using provided push secret for
>> pushing 172.30.84.20:5000/alln-int-build-testing/cakephp-example:latest
>> image
>>
>> I0606 18:40:55.370389   1 sti.go:272] Pushing
>> 172.30.84.20:5000/alln-int-build-testing/cakephp-example:latest image ...
>>
>> I0606 18:40:57.016159   1 sti.go:277] Registry server Address:
>>
>> I0606 18:40:57.016243   1 sti.go:278] Registry server User Name:
>> serviceaccount
>>
>> I0606 18:40:57.016255   1 sti.go:279] Registry server Email:
>> serviceacco...@example.org
>>
>> I0606 18:40:57.016262   1 sti.go:284] Registry server Password:
>> <>
>>
>> F0606 18:40:57.016273   1 builder.go:204] Error: build error: Failed to
>> push image. Response from registry is: Received unexpected HTTP status: 500
>> Internal Server Error
>>
>>
>>
>> Diagnostics on primary master throws below error
>>
>>
>>
>> ERROR: [DClu1020 from diagnostic
>> ClusterRegistry@openshift/origin/pkg/diagnostics/cluster/registry.go:271]
>>
>>The pod logs for the "docker-registry-6-cqs51" pod belonging to
>>
>>the "docker-registry" service indicated the registry is unable to
>> write to disk.
>>
>>This may indicate an SELinux denial, or problems with volume
>>
>>ownership/permissions.
>>
>>
>>
>>For volume permission problems please consult the Persistent Storage
>> section
>>
>>of the Administrator's Guide.
>>
>>
>>
>>In the case of SELinux this may be resolved on the node by running:
>>
>>
>>
>>sudo chcon -R -t svirt_sandbox_file_t
>> [PATH_TO]/openshift.local.volumes
>>
>>
>>
>>time="2016-06-06T19:00:08.144988457-04:00" level=error msg="response
>> completed with error" err.code=UNKNOWN err.detail="filesystem: mkdir
>> /registry/docker: permission denied" err.message="unknown error"
>> go.version=go1.4.2 http.request.host="172.30.84.20:5000"
>> http.request.id=7cb19403-49f5-4909-b287-582e60685bec
>> http.request.method=POST http.request.remoteaddr="10.1.0.1:38212"
>> http.request.uri="/v2/alln-int-build-testing/busybox/blobs/uploads/"
>> http.request.useragent="docker/1.9.1 go/go1.4.2
>> kernel/3.10.0-327.13.1.el7.x86_64 os/linux arch/amd64"
>> http.response.contenttype="application/json; charset=utf-8"
>> http.response.duration=24.082081ms http.response.status=500
>> http.response.written=156 instance.id=45d786ad-d663-4dfc-8c8e-aa4455aab742
>> vars.name="alln-int-build-testing/busybox"
>>
>>
>>
>>
>>
>> While further analsys, it seems NFS volume mounted on registry container has
>> root:root permissions
>>
>>
>>
>> # sudo docker exec -it 01b162687557 bash
>>
>>
>>
>> bash-4.2$ ls -ld /registry/
>>
>> drwxr-xr-x. 3 root root 4096 Jun  6 17:12 /registry/
>>
>>
>>
>> I tried to change ownership , but no luck. What to do ? is it bug or
>> intended behaviour?
>>
>>
>>
>> bash-4.2$ whoami
>>
>> whoami: cannot find name for user ID 1001
>>
>>
>>
>> bash-4.2$ chown 1001 /registry/
>>
>> chown: changing ownership of '/registry/': Operation not permitted
>>
>>
>>
>> Srinivas Kotaru
>>
>>
>>
>>
>>
>> --
>>
>> Srinivas Kotaru
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>




OSE 3.2 - Registry - Unable to write

2016-06-06 Thread Srinivas Naga Kotaru (skotaru)
Hi

Just finished installing OSE 3.2. The registry throws the error below while 
doing a sample deployment.

I0606 18:40:55.315293   1 sti.go:334] Successfully built 
alln-int-build-testing/cakephp-example-1:e6008a5f
I0606 18:40:55.335600   1 cleanup.go:23] Removing temporary directory 
/tmp/s2i-build044311744
I0606 18:40:55.335621   1 fs.go:156] Removing directory 
'/tmp/s2i-build044311744'
I0606 18:40:55.370335   1 sti.go:268] Using provided push secret for 
pushing 172.30.84.20:5000/alln-int-build-testing/cakephp-example:latest image
I0606 18:40:55.370389   1 sti.go:272] Pushing 
172.30.84.20:5000/alln-int-build-testing/cakephp-example:latest image ...
I0606 18:40:57.016159   1 sti.go:277] Registry server Address:
I0606 18:40:57.016243   1 sti.go:278] Registry server User Name: 
serviceaccount
I0606 18:40:57.016255   1 sti.go:279] Registry server Email: 
serviceacco...@example.org
I0606 18:40:57.016262   1 sti.go:284] Registry server Password: 
<>
F0606 18:40:57.016273   1 builder.go:204] Error: build error: Failed to 
push image. Response from registry is: Received unexpected HTTP status: 500 
Internal Server Error

Diagnostics on primary master throws below error

ERROR: [DClu1020 from diagnostic 
ClusterRegistry@openshift/origin/pkg/diagnostics/cluster/registry.go:271]
   The pod logs for the "docker-registry-6-cqs51" pod belonging to
   the "docker-registry" service indicated the registry is unable to write 
to disk.
   This may indicate an SELinux denial, or problems with volume
   ownership/permissions.

   For volume permission problems please consult the Persistent Storage 
section
   of the Administrator's Guide.

   In the case of SELinux this may be resolved on the node by running:

   sudo chcon -R -t svirt_sandbox_file_t 
[PATH_TO]/openshift.local.volumes

   time="2016-06-06T19:00:08.144988457-04:00" level=error msg="response 
completed with error" err.code=UNKNOWN err.detail="filesystem: mkdir 
/registry/docker: permission denied" err.message="unknown error" 
go.version=go1.4.2 http.request.host="172.30.84.20:5000" 
http.request.id=7cb19403-49f5-4909-b287-582e60685bec http.request.method=POST 
http.request.remoteaddr="10.1.0.1:38212" 
http.request.uri="/v2/alln-int-build-testing/busybox/blobs/uploads/" 
http.request.useragent="docker/1.9.1 go/go1.4.2 
kernel/3.10.0-327.13.1.el7.x86_64 os/linux arch/amd64" 
http.response.contenttype="application/json; charset=utf-8" 
http.response.duration=24.082081ms http.response.status=500 
http.response.written=156 instance.id=45d786ad-d663-4dfc-8c8e-aa4455aab742 
vars.name="alln-int-build-testing/busybox"


On further analysis, it seems the NFS volume mounted in the registry container 
has root:root permissions:

# sudo docker exec -it 01b162687557 bash

bash-4.2$ ls -ld /registry/
drwxr-xr-x. 3 root root 4096 Jun  6 17:12 /registry/

I tried to change the ownership, but no luck. What should I do? Is this a bug 
or intended behaviour?

bash-4.2$ whoami
whoami: cannot find name for user ID 1001

bash-4.2$ chown 1001 /registry/
chown: changing ownership of '/registry/': Operation not permitted

Srinivas Kotaru


--
Srinivas Kotaru


Re: Clarification on container security in OpenShift

2016-01-19 Thread Srinivas Naga Kotaru (skotaru)
Clayton and Team

Is it possible to run all containers from a specific application under a 
dedicated OS user name (like the UUID in OSE 2.X)? I am not referring to the 
UID, which is typically a numeric value and controls local access.

We have a requirement from a database access-control perspective where every 
application (all instances of that app) should use a dedicated OS user name, 
and it should be predictable well in advance (unlike OSE 2.X auto-scaling, 
where UUID prediction is difficult).

--
Srinivas Kotaru

From: "ccole...@redhat.com"
Date: Tuesday, January 19, 2016 at 9:57 AM
To: Paul Weil
Cc: dev
Subject: Re: Clarification on container security in OpenShift

If you had specified uid 0 in your pod definition, you would receive an error 
(instead of being defaulted).  We do this defaulting by default to protect from 
the classic "it's usually a bad idea to run arbitrary software from the 
Internet as root on your machines" - the step Paul mentions is the equivalent 
of requiring you to answer "are you sure you want to allow this to run as root?"

On Jan 19, 2016, at 12:51 PM, Paul Weil wrote:

You are correct, the container will not run as root with pod spec that is shown.

The pod spec indicates that you validated under the restricted SCC and were 
given the UID 13.  When your container is launched it will be 
configured to run as 13 regardless of what is in the docker file.

If you would like the container to run as root you can grant access to the 
anyuid SCC for the service account that the pod is using.

https://docs.openshift.org/latest/admin_guide/manage_scc.html#add-an-scc-to-a-user-or-group.



On Tue, Jan 19, 2016 at 11:43 AM, Rishi Misra wrote:
Thanks for your response.  Perhaps interpreting this will help me understand 
SCC better - My app pod looks like:

/==/
oc get pod nodejs-sample-app-1-fpiha -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
kubernetes.io/created-by: |
  
{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"test","name":"nodejs-sample-app-1","uid":"6fd8f412-bb9a-11e5-9f87-022e","apiVersion":"v1","resourceVersion":"328"}}

openshift.io/deployment-config.latest-version:
 "1"

openshift.io/deployment-config.name:
 nodejs-sample-app
openshift.io/deployment.name: 
nodejs-sample-app-1
openshift.io/generated-by: OpenShiftNewApp
openshift.io/scc: restricted
  creationTimestamp: 2016-01-15T15:12:54Z
  generateName: nodejs-sample-app-1-
  labels:
app: nodejs-sample-app
deployment: nodejs-sample-app-1
deploymentconfig: nodejs-sample-app
  name: nodejs-sample-app-1-fpiha
  namespace: test
  resourceVersion: "1729"
  selfLink: /api/v1/namespaces/test/pods/nodejs-sample-app-1-fpiha
  uid: 737d0f9a-bb9a-11e5-9f87-022e
spec:
  containers:
  - image: openshift/nodejs-sample-app:forOpenShift
imagePullPolicy: IfNotPresent
name: nodejs-sample-app
ports:
- containerPort: 8080
  protocol: TCP
resources: {}
securityContext:
  privileged: false
  runAsUser: 13
  seLinuxOptions:
level: s0:c6,c0
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: 
/var/run/secrets/kubernetes.io/serviceaccount
  name: default-token-8dwhf
  readOnly: true
  dnsPolicy: ClusterFirst
  host: xxx..
  imagePullSecrets:
  - name: default-dockercfg-i1ke5
  nodeName: xxx..
  restartPolicy: Always
  securityContext:
seLinuxOptions:
  level: s0:c6,c0
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: default-token-8dwhf
secret:
  secretName: default-token-8dwhf
status:
  conditions:
  - lastProbeTime: null
lastTransitionTime: 2016-01-19T16:10:00Z
status: "True"
type: Ready
  containerStatuses:
  - containerID: 
docker://ca9a288d9ee1fe48517e18e5f6f6b1def28e0ba605962545063f42fbf1f38f38
image: openshift/nodejs-sample-app:forOpenShift
imageID: 
docker://a7782aa25f2463169c43423490297c3a5cf9237b34e7cc772ac2f3ab06b5d302
lastState: {}
name: nodejs-sample-app
ready: true
restartCount: 0
state:
  running:
startedAt: 2016-01-15T15:12:57Z
  hostIP: x.xx.xx.xxx
  phase: 

Re: Clarification on container security in OpenShift

2016-01-19 Thread Srinivas Naga Kotaru (skotaru)
Clayton

I am referring to the OS user name running a specific process, not the numeric 
UID. While inspecting pod definitions, I can see the flexibility of specifying 
a UID; however, I am not seeing a similar mechanism to run a container (or 
processes in a container) as a predefined OS user name or group.

Just for the sake of example, say I want to run the apache or tomcat process in 
a container as the www:www or tomcat:tomcat user and group combination.

Is it possible?


--
Srinivas Kotaru

From: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Date: Tuesday, January 19, 2016 at 10:44 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: Paul Weil <pw...@redhat.com<mailto:pw...@redhat.com>>, dev 
<dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: Clarification on container security in OpenShift

Not sure if this is exactly what you are asking, but Openshift allows you to 
partition the local UNIX user ID space across the entire cluster automatically. 
 Every project gets a 10k block by default.  Those are not shared, so that 
block uniquely identifies any process in that project on any node.  The default 
policy forces pods to run in uids in that block - again, that cannot be escaped 
by end users by default.

If you want to identify all pods via the API, that is what labels and 
annotations are for.  Enforcing a unique label on each pod in a namespace 
should be possible, although that's only visible via the API.
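
As an illustration of pinning the numeric UID (the user *name* cannot be set this way; a name would have to exist in the image's /etc/passwd), a pod can request a fixed UID, provided the SCC in effect permits it — e.g. anyuid or a custom SCC. A sketch with illustrative values:

```yaml
spec:
  containers:
  - name: tomcat               # illustrative
    image: tomcat:8            # illustrative
    securityContext:
      runAsUser: 1500          # fixed UID; the SCC must allow this value
```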

On Jan 19, 2016, at 1:31 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:

Clayton and Team

Is it possible to run all containers from a specific application to use a 
dedicated OS user name ( UUID in OSE 2.X). Am not referring UID which is 
typically a numeric number and control local access.

We have a requirement for database access control perceptive where every 
application ( all instances of that app) should use a dedicated OS user name ( 
UUID) and it should be predicable well in advance ( unlike OSE 2.X auto scaling 
where UUID prediction is difficult).

--
Srinivas Kotaru

From: 
<dev-boun...@lists.openshift.redhat.com<mailto:dev-boun...@lists.openshift.redhat.com>>
 on behalf of "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Date: Tuesday, January 19, 2016 at 9:57 AM
To: Paul Weil <pw...@redhat.com<mailto:pw...@redhat.com>>
Cc: dev <dev@lists.openshift.redhat.com<mailto:dev@lists.openshift.redhat.com>>
Subject: Re: Clarification on container security in OpenShift

If you had specified uid 0 in your pod definition, you would receive an error 
(instead of being defaulted).  We do this defaulting by default to protect from 
the classic "it's usually a bad idea to run arbitrary software from the 
Internet as root on your machines" - the step Paul mentions is the equivalent 
of requiring you to answer "are you sure you want to allow this to run as root?"

On Jan 19, 2016, at 12:51 PM, Paul Weil 
<pw...@redhat.com<mailto:pw...@redhat.com>> wrote:

You are correct, the container will not run as root with pod spec that is shown.

The pod spec indicates that you validated under the restricted SCC and were 
given the UID 13.  When your container is launched it will be 
configured to run as 13 regardless of what is in the docker file.

If you would like the container to run as root you can grant access to the 
anyuid SCC for the service account that the pod is using.

https://docs.openshift.org/latest/admin_guide/manage_scc.html#add-an-scc-to-a-user-or-group.



On Tue, Jan 19, 2016 at 11:43 AM, Rishi Misra 
<rishi.investig...@gmail.com<mailto:rishi.investig...@gmail.com>> wrote:
Thanks for your response.  Perhaps interpreting this will help me understand 
SCC better - My app pod looks like:

/==/
oc get pod nodejs-sample-app-1-fpiha -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
kubernetes.io/created-by<http://kubernetes.io/created-by>: |
  
{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"test","name":"nodejs-sample-app-1","uid":"6fd8f412-bb9a-11e5-9f87-022e","apiVersion":"v1","resourceVersion":"328"}}

openshift.io/deployment-config.latest-version<http://openshift.io/deployment-config.latest-version>:
 "1"

openshift.io/deployment-config.name<http://openshift.io/deployment-config.name>:
 nodejs-sample-app
openshift.io/deployment.name<http://openshift.io/deployment.name>: 
nodejs-sample-

routing/vhost alias

2016-01-19 Thread Srinivas Naga Kotaru (skotaru)
Hi

In OSE 2.X we have an alias concept for routes. A user or admin can create an 
alias (an Apache vhost definition) for an application and create a DNS record 
pointing to the upstream load balancer. This was very flexible when the user's 
FQDN differs from the OpenShift-created HTTP URL (example: http://-.domain).

In 3.X we have the router instead of the apache node proxy. Can a user or admin 
create similar alias entries?

The reason I am asking: the client facing virtual hosts are different than the 
openshift generated URLs for each app in our environment.  We need a mechanism to 
map/proxy a client facing url to the openshift generated URL. This requires the 
backend to accept the same Host header, e.g. via a VHOST ServerAlias or an 
HAProxy ACL definition.
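In 3.x this mapping can usually be done with an extra route rather than a vhost alias: the haproxy router matches on the Host header, so a second route carrying the client-facing FQDN can point at the same service. A sketch, where `myapp` and `www.example.com` are assumed names:

```shell
# Create an additional route for the client-facing FQDN; DNS for
# www.example.com then just needs to resolve to the router/LB VIP.
oc expose service myapp --name=myapp-alias --hostname=www.example.com
```

The router then serves the app for both the generated URL and the alias, with no backend Host-header rewriting required.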


--
Srinivas Kotaru
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Router Sharding

2016-01-15 Thread Srinivas Naga Kotaru (skotaru)

Brenton said you guys are working on router sharding

https://trello.com/c/DtPlixdb/49-8-router-sharding-traffic-ingress

I didn’t quite get a good description. What is this feature, how is it useful, 
what are the use cases, and when will it be released?

Can we create separate routers for internal or external apps, or get more control 
by grouping routes by labels or node selector (region or zone)?
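For reference, the direction this took: the haproxy router can be restricted to a subset of routes with label-selector environment variables. A sketch (the router name, node selector, and label values are all illustrative, and this assumes a version where the sharding support has landed):

```shell
# Deploy a second router pinned to internal-zone nodes...
oadm router internal-router --replicas=2 --selector='region=internal' \
    --service-account=router

# ...and restrict it to routes carrying a matching label
# (NAMESPACE_LABELS works the same way at the project level).
oc env dc/internal-router ROUTE_LABELS='type=internal'
```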

--
Srinivas Kotaru


Re: Router Sharding

2016-01-15 Thread Srinivas Naga Kotaru (skotaru)
Thanks Brenton for sharing the overview page to see the upcoming features and 
changes. Very handy.


-- 
Srinivas Kotaru






On 1/15/16, 1:49 PM, "Brenton Leanhardt" <blean...@redhat.com> wrote:

>On Fri, Jan 15, 2016 at 3:53 PM, Srinivas Naga Kotaru (skotaru)
><skot...@cisco.com> wrote:
>> Thanks Brenton. It is clear now. When will this feature be released? 3.2?
>
>That card has the 'committed-3.3' label in trello so that's really the
>best guidance I can give.
>
>In general https://ci.openshift.redhat.com/releases_overview.html is a
>great place to see how cards across all the various trello boards map
>to releases.
>
>>
>>
>> --
>> Srinivas Kotaru
>>
>>
>>
>>
>>
>>
>> On 1/15/16, 12:30 PM, "Brenton Leanhardt" <blean...@redhat.com> wrote:
>>
>>>On Fri, Jan 15, 2016 at 12:47 PM, Srinivas Naga Kotaru (skotaru)
>>><skot...@cisco.com> wrote:
>>>>
>>>> Brenton said you guys are working on router sharding
>>>>
>>>> https://trello.com/c/DtPlixdb/49-8-router-sharding-traffic-ingress
>>>>
>>>> I didn’t quite get a good description. What is this feature, how is it
>>>> useful, what are the use cases, and when will it be released?
>>>
>>>One use case: for large scale deployments it's not practical
>>>to have hundreds of thousands of routes loaded in a single haproxy
>>>instance.  Sharding allows the problem to be carved up into smaller
>>>pieces.
>>>
>>>Another use case would simply be to have routing images that are tuned
>>>for specific workloads.  If I create a custom router image that is
>>>only useful for a certain class of applications this feature would
>>>allow the router to only listen to the correct subset.
>>>
>>>>
>>>> Can we create separate routers for internal or external apps, or get more
>>>> control by grouping routes by labels or node selector (region or zone)?
>>>
>>>My understanding is that this would all be possible.
>>>
>>>>
>>>> --
>>>> Srinivas Kotaru
