Re: Accessing Remote Files via SSHFS

2018-03-28 Thread Jamie Jackson
Okay, thanks for the reference (and the heads-up about the k8s GitHub ticket
on supporting FUSE volumes).

On Wed, Mar 28, 2018 at 4:38 PM, Joel Pearson  wrote:

> A quick google found this:
>
> https://karlstoney.com/2017/03/01/fuse-mount-in-kubernetes/
>
> It looks like the approach would work for you too. But it’s worth
> mentioning that he’s doing the mount from within the container, so he needs
> the pod to start as a privileged pod. You can do that in OpenShift, but
> running privileged pods does have security implications, so it depends on
> whether you trust your legacy app enough to run it this way.
> On Thu, 29 Mar 2018 at 1:59 am, Jamie Jackson 
> wrote:
>
>> Hi Folks,
>>
>> I'm in the process of containerizing my stack. One of the pieces of the
>> legacy stack accesses a remote file system over SSHFS (autofs manages the
>> access). What would be the best way to handle this kind of requirement on
>> OpenShift?
>>
>> FYI, I'm currently using straight docker for the stack (docker-compose,
>> but no orchestration), but the end goal is probably to run on OpenShift, so
>> I'm trying to approach things in a way that will be most transferable to
>> OpenShift.
>>
>> (Note, this conversation started on Google Groups:
>> https://groups.google.com/d/msg/openshift/9hjDE2INe5o/vqPoQq-6AwAJ )
>>
>> Thanks,
>> Jamie


Re: Accessing Remote Files via SSHFS

2018-03-28 Thread Joel Pearson
A quick google found this:

https://karlstoney.com/2017/03/01/fuse-mount-in-kubernetes/

It looks like the approach would work for you too. But it’s worth
mentioning that he’s doing the mount from within the container, so he needs
the pod to start as a privileged pod. You can do that in OpenShift, but
running privileged pods does have security implications, so it depends on
whether you trust your legacy app enough to run it this way.
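For reference, a minimal sketch of what the OpenShift side of that can look
like (the service account, project, and remote host below are placeholders,
not details from the linked post):

$ oc create serviceaccount sshfs-sa -n myproject
$ oc adm policy add-scc-to-user privileged -z sshfs-sa -n myproject

# Inside the privileged container (which needs /dev/fuse available),
# the mount itself would be something like:
$ sshfs legacyuser@fileserver.example.com:/exports/data /mnt/data \
    -o reconnect,allow_other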
On Thu, 29 Mar 2018 at 1:59 am, Jamie Jackson  wrote:

> Hi Folks,
>
> I'm in the process of containerizing my stack. One of the pieces of the
> legacy stack accesses a remote file system over SSHFS (autofs manages the
> access). What would be the best way to handle this kind of requirement on
> OpenShift?
>
> FYI, I'm currently using straight docker for the stack (docker-compose,
> but no orchestration), but the end goal is probably to run on OpenShift, so
> I'm trying to approach things in a way that will be most transferable to
> OpenShift.
>
> (Note, this conversation started on Google Groups:
> https://groups.google.com/d/msg/openshift/9hjDE2INe5o/vqPoQq-6AwAJ )
>
> Thanks,
> Jamie


CentOS PaaS SIG meeting (2018-03-28)

2018-03-28 Thread Troy Dawson
Hello,
It's time for our weekly PaaS SIG sync-up meeting.
This week will be a bit different because we are going to devote a lot
of the time to training new committee members.

Time: 1700 UTC - Wednesdays (date -d "1700 UTC")
Date: Today, Wednesday, 28 March 2018
Where: IRC- Freenode - #centos-devel

Agenda:
- OpenShift Current Status
-- rpms
- Training for new committee members
- Open Floor

Minutes from last meeting:
https://www.centos.org/minutes/2018/March/centos-devel.2018-03-21-17.01.log.html



Accessing Remote Files via SSHFS

2018-03-28 Thread Jamie Jackson
Hi Folks,

I'm in the process of containerizing my stack. One of the pieces of the
legacy stack accesses a remote file system over SSHFS (autofs manages the
access). What would be the best way to handle this kind of requirement on
OpenShift?

FYI, I'm currently using straight docker for the stack (docker-compose, but
no orchestration), but the end goal is probably to run on OpenShift, so I'm
trying to approach things in a way that will be most transferable to
OpenShift.
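
For concreteness, the kind of thing this looks like under plain Docker (a
sketch; the image name, remote host, and paths are placeholders) is giving
the container the FUSE device and the SYS_ADMIN capability, then running
sshfs inside it:

$ docker run --rm -it \
    --cap-add SYS_ADMIN --device /dev/fuse \
    myorg/legacy-app \
    sshfs user@fileserver.example.com:/exports/data /mnt/data -o reconnect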

(Note, this conversation started on Google Groups:
https://groups.google.com/d/msg/openshift/9hjDE2INe5o/vqPoQq-6AwAJ )

Thanks,
Jamie


Re: glusterfs setup

2018-03-28 Thread Joel Pearson
You’d have to run your Gluster cluster separately from OpenShift if you want
a different volume type, I’m guessing.
On Thu, 29 Mar 2018 at 12:15 am, Tim Dudgeon  wrote:

> Ah, that's a shame.
>
> Tim
>
> On 28/03/18 14:11, Joel Pearson wrote:
>
> “Distributed-Three-way replication is the only supported volume type.”
>
>
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/container-native_storage_for_openshift_container_platform/ch03s02
>
>
> On Thu, 29 Mar 2018 at 12:00 am, Tim Dudgeon 
> wrote:
>
>> When using native glusterfs it's not clear to me how to configure the
>> types of storage.
>>
>> As described in the glusterfs docs [1] there are multiple types of
>> volume that can be created (Distributed, Replicated, Distributed
>> Replicated, Striped, Distributed Striped).
>>
>> In the example Ansible inventory file [2] it is suggested that you set up
>> the glusterfs_devices variable like this:
>>
>> [glusterfs]
>> node0  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
>> node1  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
>> node2  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
>>
>> But how are those block devices utilised to create a particular type of
>> volume?
>>
>> How would you specify that you wanted multiple types of volume
>> (presumably each with its own storage class)?
>>
>> Thanks
>> Tim
>>
>> [1]
>>
>> https://docs.gluster.org/en/latest/Quick-Start-Guide/Architecture/#types-of-volumes
>> [2]
>>
>> https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.glusterfs.native.example
>>


Re: glusterfs setup

2018-03-28 Thread Tim Dudgeon

Ah, that's a shame.

Tim


On 28/03/18 14:11, Joel Pearson wrote:

“Distributed-Three-way replication is the only supported volume type.”

https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/container-native_storage_for_openshift_container_platform/ch03s02


On Thu, 29 Mar 2018 at 12:00 am, Tim Dudgeon wrote:


When using native glusterfs it's not clear to me how to configure the
types of storage.

As described in the glusterfs docs [1] there are multiple types of
volume that can be created (Distributed, Replicated, Distributed
Replicated, Striped, Distributed Striped).

In the example Ansible inventory file [2] it is suggested that you set up
the glusterfs_devices variable like this:

[glusterfs]
node0  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
node1  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
node2  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'

But how are those block devices utilised to create a particular type of
volume?

How would you specify that you wanted multiple types of volume
(presumably each with its own storage class)?

Thanks
Tim

[1]

https://docs.gluster.org/en/latest/Quick-Start-Guide/Architecture/#types-of-volumes
[2]

https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.glusterfs.native.example



Re: glusterfs setup

2018-03-28 Thread Joel Pearson
“Distributed-Three-way replication is the only supported volume type.”

https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/container-native_storage_for_openshift_container_platform/ch03s02
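
In practice, the devices you list in the inventory get handed to heketi, and
each volume is then carved out as replica-3 bricks across the nodes at
provisioning time, so the volume type isn't something you pick per device. A
sketch of a manual request (the heketi server URL is a placeholder):

$ export HEKETI_CLI_SERVER=http://heketi-storage.example.com:8080
$ heketi-cli volume create --size=10 --replica=3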


On Thu, 29 Mar 2018 at 12:00 am, Tim Dudgeon  wrote:

> When using native glusterfs it's not clear to me how to configure the
> types of storage.
>
> As described in the glusterfs docs [1] there are multiple types of
> volume that can be created (Distributed, Replicated, Distributed
> Replicated, Striped, Distributed Striped).
>
> In the example Ansible inventory file [2] it is suggested that you set up
> the glusterfs_devices variable like this:
>
> [glusterfs]
> node0  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
> node1  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
> node2  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
>
> But how are those block devices utilised to create a particular type of
> volume?
>
> How would you specify that you wanted multiple types of volume
> (presumably each with its own storage class)?
>
> Thanks
> Tim
>
> [1]
>
> https://docs.gluster.org/en/latest/Quick-Start-Guide/Architecture/#types-of-volumes
> [2]
>
> https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.glusterfs.native.example
>


glusterfs setup

2018-03-28 Thread Tim Dudgeon
When using native glusterfs it's not clear to me how to configure the
types of storage.


As described in the glusterfs docs [1] there are multiple types of 
volume that can be created (Distributed, Replicated, Distributed 
Replicated, Striped, Distributed Striped).


In the example Ansible inventory file [2] it is suggested that you set up
the glusterfs_devices variable like this:


[glusterfs]
node0  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
node1  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
node2  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'

But how are those block devices utilised to create a particular type of
volume?


How would you specify that you wanted multiple types of volume 
(presumably each with its own storage class)?


Thanks
Tim

[1] 
https://docs.gluster.org/en/latest/Quick-Start-Guide/Architecture/#types-of-volumes
[2] 
https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.glusterfs.native.example




Re: How to debug the openid auth plugin ?

2018-03-28 Thread Fabio Martinelli
Thank you Larry

I'll keep your experience as a valuable reference; I assume you're using
OpenShift -> LDAP -> AD because you don't have OpenShift -> OpenID Connect
-> AD like me.

In my IT environment all the applications use OpenID Connect to
authenticate our users, and I should preferably authenticate the same way;
therefore I need to understand how to debug the OpenShift -> OpenID Connect
-> AD pipeline.

Is there some tool to simulate the OpenID Connect authentication? I just
found this [@].

I hope somebody from Red Hat can give me some insights; maybe it's just a
matter of raising some debug level.

Thanks,
Fabio

[@] https://github.com/curityio/example-python-openid-connect-client
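
As a first tool-free check, the discovery document that oc itself fetches can
be pulled directly, and the master's log verbosity can be raised to watch the
OAuth exchange. A sketch, assuming an RPM-based Origin 3.x master (the master
URL is a placeholder):

$ curl -k https://master.example.com:8443/.well-known/oauth-authorization-server
$ sudo sed -i 's/--loglevel=[0-9]*/--loglevel=5/' /etc/sysconfig/origin-master
$ sudo systemctl restart origin-master
$ sudo journalctl -u origin-master -f | grep -i -e oauth -e openid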


On 28 March 2018 at 02:02, Brigman, Larry  wrote:

> I configured one of our clusters to use LDAP against our AD.
> Here is my line from the inventory (obfuscated), handling both local and
> LDAP auth:
> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
> 'filename': '/etc/origin/master/htpasswd'},{'name': 'ldap', 'challenge':
> 'true', 'login': 'true', 'mappingMethod': 'claim', 'kind':
> 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email':
> ['mail'], 'name': ['cn'], 'preferredUsername': ['sAMAccountName']},
> 'bindDN': 'x...@ad.example.com.com', 'bindPassword': 'XXX',
> 'insecure': 'true',
> 'url': 'ldap://ldap.example.com:389/dc=sub,dc=example,dc=com?sAMAccountName'}]
>
> This gives a good reference on how to configure/test things:
> https://github.com/redhat-cop/openshift-playbooks/blob/master/playbooks/installation/ldap_integration.adoc
>
> -Original Message-
> From: users-boun...@lists.openshift.redhat.com [mailto:
> users-boun...@lists.openshift.redhat.com] On Behalf Of fabio martinelli
> Sent: Monday, March 26, 2018 2:26 PM
> To: users 
> Subject: How to debug the openid auth plugin ?
>
> Dear OpenShift Colleagues
>
> I can't get the OpenID auth plugin [$] working, not necessarily because
> it's broken on the Origin side, since the AD layer, where I'm not root [%],
> is also involved; furthermore I don't have very much experience with OpenID.
>
> I believe I've followed the manual [$] to the letter, and I've selected
> "lookup" as the mappingMethod since I don't want any automatic login
> from our AD at this stage.
>
> This is my failed login attempt with oc:
> 
> $ oc login --loglevel=10
> I0326 22:58:26.698146   38291 loader.go:357] Config loaded from file
> /Users/f_martinelli/.kube/config
> I0326 22:58:26.701628   38291 round_trippers.go:386] curl -k -v -XHEAD
> https://hosting.wfp.org:443/
> I0326 22:58:26.922676   38291 round_trippers.go:405] HEAD
> https://hosting.wfp.org:443/ 403 Forbidden in 220 milliseconds
> I0326 22:58:26.922709   38291 round_trippers.go:411] Response Headers:
> I0326 22:58:26.922720   38291 round_trippers.go:414] Vary:
> Accept-Encoding
> I0326 22:58:26.922729   38291 round_trippers.go:414]
> X-Content-Type-Options: nosniff
> I0326 22:58:26.922738   38291 round_trippers.go:414] Date: Mon, 26 Mar
> 2018 20:58:26 GMT
> I0326 22:58:26.922747   38291 round_trippers.go:414] Content-Type:
> text/plain
> I0326 22:58:26.922756   38291 round_trippers.go:414] Connection:
> keep-alive
> I0326 22:58:26.922765   38291 round_trippers.go:414] Server: nginx
> I0326 22:58:26.922774   38291 round_trippers.go:414] Content-Length: 90
> I0326 22:58:26.922782   38291 round_trippers.go:414] Cache-Control:
> no-store
> I0326 22:58:26.922889   38291 round_trippers.go:386] curl -k -v -XGET -H
> "X-Csrf-Token: 1"
> https://hosting.wfp.org:443/.well-known/oauth-authorization-server
> I0326 22:58:26.965442   38291 round_trippers.go:405] GET
> https://hosting.wfp.org:443/.well-known/oauth-authorization-server 200 OK in 42 milliseconds
> I0326 22:58:26.965686   38291 round_trippers.go:411] Response Headers:
> I0326 

Re: Not able to route to services

2018-03-28 Thread Tim Dudgeon

A little more on this.
I have two systems, installed in as identical a manner as possible.
One works fine; on the other I can't connect to services.

For instance, from the master node I try to connect to the docker-registry
service on the infrastructure node. If I try:


curl -I https://<service-ip>:5000/healthz

It works on the working environment, but gets a "No route to host" error 
on the failing one.


If I try:

sudo traceroute -T -p 5000 <service-ip>

it confirms the problem. On the working environment:

$ sudo traceroute -T -p 5000 172.30.145.23
traceroute to 172.30.145.23 (172.30.145.23), 30 hops max, 60 byte packets
 1  docker-registry.default.svc.cluster.local (172.30.145.23)  3.044 ms  2.723 ms  2.307 ms


On the failing one:

$ sudo traceroute -T -p 5000 172.30.76.145
traceroute to 172.30.76.145 (172.30.76.145), 30 hops max, 60 byte packets
 1  docker-registry.default.svc.cluster.local (172.30.76.145)  3004.572 ms !H  3004.517 ms !H  3004.502 ms !H


The !H means the host is unreachable.
If I run the same commands from the infrastructure node where the 
service is actually running then it works OK.


The security group for both servers leaves all TCP traffic open, e.g.:

ALLOW IPv4 1-65535/tcp to 0.0.0.0/0
ALLOW IPv4 1-65535/tcp from 0.0.0.0/0
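
One thing these rules don't cover is UDP: the OpenShift SDN tunnels pod and
service traffic between nodes over VXLAN, which runs on UDP port 4789, so a
TCP-only security group could produce exactly this kind of failure. A quick
check, as a sketch (the infra node IP is a placeholder):

# On the infra node, watch for incoming VXLAN packets:
$ sudo tcpdump -i any -c 5 udp port 4789

# On the master, re-run the curl above, then check whether the host
# firewall is rejecting the traffic:
$ sudo iptables -L -n -v | grep -e 4789 -e REJECT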

Any thoughts on what is blocking the traffic?

Tim



On 27/03/18 21:54, Tim Dudgeon wrote:


Sorry, I am using port 5000. I wrote that bit incorrectly.
I did do some more digging based on what's here 
(https://docs.openshift.org/latest/admin_guide/sdn_troubleshooting.html) 
and it looks like there's something wrong with the node-to-node
communications.

From the master I try to contact the infrastructure node:

$ ping 192.168.253.126
PING 192.168.253.126 (192.168.253.126) 56(84) bytes of data.
64 bytes from 192.168.253.126: icmp_seq=1 ttl=64 time=0.657 ms
64 bytes from 192.168.253.126: icmp_seq=2 ttl=64 time=0.588 ms
64 bytes from 192.168.253.126: icmp_seq=3 ttl=64 time=0.605 ms
^C
--- 192.168.253.126 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.588/0.616/0.657/0.041 ms

$ tracepath 192.168.253.126
 1?: [LOCALHOST] pmtu 1450
 1:  no reply
 2:  no reply
 3:  no reply
 4:  no reply
^C

I can ping the node but tracepath can't reach it. On a working
cluster tracepath has no problems.


I don't know the cause. Any ideas?


On 27/03/18 21:46, Louis Santillan wrote:
Isn't the default port for your Registry 5000? Try `curl -kv
https://docker-registry.default.svc:5000/healthz` [0][1].


[0] https://access.redhat.com/solutions/1616953#health
[1] 
https://docs.openshift.com/container-platform/3.7/install_config/registry/accessing_registry.html#accessing-registry-metrics


___

LOUIS P. SANTILLAN

Architect, OPENSHIFT, MIDDLEWARE & DEVOPS

Red Hat Consulting, Container and PaaS Practice

lsant...@redhat.com  M: 3236334854

TRIED. TESTED. TRUSTED.




On Tue, Mar 27, 2018 at 6:39 AM, Tim Dudgeon wrote:


Something strange has happened in my environment which has
resulted in not being able to route to any of the services.
Earlier this was all working fine. The install was done using the
Ansible installer and this is happening with 3.6.1 and 3.7.1.
The services are all there and running fine, and DNS is working,
but I can't reach them, e.g. from the master node:

$ host docker-registry.default.svc
docker-registry.default.svc.cluster.local has address 172.30.243.173
$ curl -k https://docker-registry.default.svc/healthz

curl: (7) Failed connect to docker-registry.default.svc:443; No route to host

Any ideas on how to work out what's gone wrong?

