Re: OpenShift master keeps consuming lots of memory and swapping

2017-10-20 Thread Louis Santillan
Firstly, leaving swap enabled is an anti-pattern in general [0], as
OpenShift is then unable to recognize OOM conditions until performance is
thoroughly degraded.  Secondly, we generally recommend to our customers
that they have at least 20GB of memory [1] for masters.  I've seen many
customers go well beyond that for extra headroom.


[0] https://docs.openshift.com/container-platform/3.6/admin_guide/overcommit.html#disabling-swap-memory
[1] https://docs.openshift.com/container-platform/3.6/install_config/install/prerequisites.html#production-level-hardware-requirements
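For reference, disabling swap on a node looks roughly like this (a sketch
following [0]; check the fstab edit against your own layout before running it):

$ swapoff -a
$ sed -ri.bak '/\sswap\s/ s/^#?/#/' /etc/fstab   # comment out swap entries so swap stays off after reboot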


___

LOUIS P. SANTILLAN

SENIOR CONSULTANT, OPENSHIFT, MIDDLEWARE & DEVOPS

Red Hat Consulting, NA US WEST 

lpsan...@gmail.com | M: 3236334854

TRIED. TESTED. TRUSTED. 

On Fri, Oct 20, 2017 at 4:54 PM, Joel Pearson  wrote:

> Hi,
>
> I've got a brand new OpenShift cluster running on OpenStack and I'm
> finding that the single master that I have is struggling big time; it seems
> to consume tons of virtual memory and then starts swapping and slows right
> down.
>
> It is running with 16GB of memory, 40GB disk and 2 CPUs.
>
> The cluster is fairly idle, so I don't know why the master gets this way.
> Restarting the master solves the problem for a while, for example, I
> restarted it at 10pm last night, and when I checked again this morning it
> was in the same situation.
>
> Would having multiple masters alleviate this problem?
>
> Here is a snapshot of top:
>
> [image: Inline images 1]
>
> Any advice?  I'd be happy to build the cluster with multiple masters if it
> will help.
>
>
> --
> Kind Regards,
>
> Joel Pearson
> Agile Digital | Senior Software Consultant
>
> Love Your Software™ | ABN 98 106 361 273
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift master keeps consuming lots of memory and swapping

2017-10-20 Thread Clayton Coleman
You can hit the master prometheus endpoint to see what is going on (or run
prometheus from the release-3.6 branch in examples/prometheus):

oc get --raw /metrics

Run as an admin, this will dump the apiserver prometheus metrics for that server.
You can look at (going from memory here) go_memstats_heap_inuse_bytes to
see exactly how much memory Go has allocated.  You can look at
apiserver_request_count to see if there are large numbers of requests being
made from any particular client (the readme in the prometheus directory has
more queries for visualizing this from the prometheus dashboard).

Re: unusual behavior, the logs at a reasonable log level should not be
generating a lot of output.

Generally for a cluster that small you should be using about 100-200M of
memory for masters and the same for nodes.  If the heap in use reported above
is bigger than that you might have a rogue client creating things.
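For example, from a terminal with cluster-admin (a sketch; the grep/sort
plumbing is illustrative, the metric names are the ones above):

$ oc get --raw /metrics | grep go_memstats_heap_inuse_bytes
$ oc get --raw /metrics | grep '^apiserver_request_count' | sort -t' ' -k2 -rn | head
# the second command lists the busiest client/verb/resource combinations first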

On Oct 21, 2017, at 5:53 AM, Joel Pearson 
wrote:

Hi Clayton,

We’re running 3.6.1 I believe. It was installed a few weeks ago using
OpenShift ansible on the release-3.6 branch.

We’re running 11 namespaces, 2 nodes, 7 pods, so it’s pretty minimal.

I’ve never run this prune.
https://docs.openshift.com/container-platform/3.6/admin_guide/pruning_resources.html

Is there some log that would help highlight exactly what the issue is?


Thanks,

Joel

On Sat, 21 Oct 2017 at 2:23 pm, Clayton Coleman  wrote:

> What version are you running?  How many nodes, pods, and namespaces?
> Excessive memory use can be caused by not running prune or having an
> automated process that creates lots of an object.  Excessive CPU use can be
> caused by an errant client or component stuck in a hot loop repeatedly
> taking the same action.
>
>
>
> On Oct 21, 2017, at 1:55 AM, Joel Pearson 
> wrote:
>
> Hi,
>
> I've got a brand new OpenShift cluster running on OpenStack and I'm
> finding that the single master that I have is struggling big time; it seems
> to consume tons of virtual memory and then starts swapping and slows right
> down.
>
> It is running with 16GB of memory, 40GB disk and 2 CPUs.
>
> The cluster is fairly idle, so I don't know why the master gets this way.
> Restarting the master solves the problem for a while, for example, I
> restarted it at 10pm last night, and when I checked again this morning it
> was in the same situation.
>
> Would having multiple masters alleviate this problem?
>
> Here is a snapshot of top:
>
> [image: Inline images 1]
>
>
> Any advice?  I'd be happy to build the cluster with multiple masters if it
> will help.
>
>
> --
> Kind Regards,
>
> Joel Pearson
> Agile Digital | Senior Software Consultant
>
> Love Your Software™ | ABN 98 106 361 273
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
> --
Kind Regards,

Joel Pearson
Agile Digital | Senior Software Consultant

Love Your Software™ | ABN 98 106 361 273
p: 1300 858 277 | m: 0405 417 843 | w: agiledigital.com.au
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift master keeps consuming lots of memory and swapping

2017-10-20 Thread Joel Pearson
Hi Clayton,

We’re running 3.6.1 I believe. It was installed a few weeks ago using
OpenShift ansible on the release-3.6 branch.

We’re running 11 namespaces, 2 nodes, 7 pods, so it’s pretty minimal.

I’ve never run this prune.
https://docs.openshift.com/container-platform/3.6/admin_guide/pruning_resources.html

Is there some log that would help highlight exactly what the issue is?


Thanks,

Joel

On Sat, 21 Oct 2017 at 2:23 pm, Clayton Coleman  wrote:

> What version are you running?  How many nodes, pods, and namespaces?
> Excessive memory use can be caused by not running prune or having an
> automated process that creates lots of an object.  Excessive CPU use can be
> caused by an errant client or component stuck in a hot loop repeatedly
> taking the same action.
>
>
>
> On Oct 21, 2017, at 1:55 AM, Joel Pearson 
> wrote:
>
> Hi,
>
> I've got a brand new OpenShift cluster running on OpenStack and I'm
> finding that the single master that I have is struggling big time; it seems
> to consume tons of virtual memory and then starts swapping and slows right
> down.
>
> It is running with 16GB of memory, 40GB disk and 2 CPUs.
>
> The cluster is fairly idle, so I don't know why the master gets this way.
> Restarting the master solves the problem for a while, for example, I
> restarted it at 10pm last night, and when I checked again this morning it
> was in the same situation.
>
> Would having multiple masters alleviate this problem?
>
> Here is a snapshot of top:
>
> [image: Inline images 1]
>
>
> Any advice?  I'd be happy to build the cluster with multiple masters if it
> will help.
>
>
> --
> Kind Regards,
>
> Joel Pearson
> Agile Digital | Senior Software Consultant
>
> Love Your Software™ | ABN 98 106 361 273
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
> --
Kind Regards,

Joel Pearson
Agile Digital | Senior Software Consultant

Love Your Software™ | ABN 98 106 361 273
p: 1300 858 277 | m: 0405 417 843 | w: agiledigital.com.au
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift master keeps consuming lots of memory and swapping

2017-10-20 Thread Clayton Coleman
What version are you running?  How many nodes, pods, and namespaces?
Excessive memory use can be caused by not running prune or having an
automated process that creates lots of an object.  Excessive CPU use can be
caused by an errant client or component stuck in a hot loop repeatedly
taking the same action.
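If prune has never been run, the admin prune commands are the usual first
step (a sketch; the retention flags are illustrative, see the 3.6 pruning
docs for the full set):

$ oadm prune builds --orphans --keep-younger-than=60m --confirm
$ oadm prune deployments --keep-complete=5 --keep-younger-than=60m --confirm
$ oadm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm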



On Oct 21, 2017, at 1:55 AM, Joel Pearson 
wrote:

Hi,

I've got a brand new OpenShift cluster running on OpenStack and I'm finding
that the single master that I have is struggling big time; it seems to
consume tons of virtual memory and then starts swapping and slows right down.

It is running with 16GB of memory, 40GB disk and 2 CPUs.

The cluster is fairly idle, so I don't know why the master gets this way.
Restarting the master solves the problem for a while, for example, I
restarted it at 10pm last night, and when I checked again this morning it
was in the same situation.

Would having multiple masters alleviate this problem?

Here is a snapshot of top:



Any advice?  I'd be happy to build the cluster with multiple masters if it
will help.


-- 
Kind Regards,

Joel Pearson
Agile Digital | Senior Software Consultant

Love Your Software™ | ABN 98 106 361 273

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


LDAP bindPassword in Ansible inventory

2017-10-20 Thread Lionel Orellana
Hi,

I see there's a way to encrypt an LDAP bind password for use in the master
configs.

But I'm not sure how this would work in the Ansible inventory configuration
for the identity provider.

If I use an Encrypted External File do I need to copy the file to all the
masters first? Or is the playbook going to copy it from the ansible host?

What should the openshift_master_identity_providers look like?

openshift_master_identity_providers=[{'name': 'my_ldap_provider', ...,
'kind': 'LDAPPasswordIdentityProvider', ..., 'bindPassword': {'file':
'bindPassword.encrypted', 'keyFile': 'bindPassword.key'}, ...}]
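For what it's worth, the encrypted file and key would be generated on a
master roughly like this (a sketch; the file names are just the ones used
above):

$ oadm ca encrypt --genkey=bindPassword.key --out=bindPassword.encrypted
# prompts for the bind password; both files must then exist at the paths
# referenced from the configuration on every master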

Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: which branch of ansible playbook should be used

2017-10-20 Thread Walters, Todd
We’ve had some issues with cloning the release branch, so we chose to clone
master, as the official documentation states and as you note in the 2nd link.

We do, however, specify the versions in our playbook:
# Specify the generic release version
openshift_release: v3.6.0
openshift_image_tag: v3.6.0
openshift_pkg_tag: 3.6.0

as well as metrics
openshift_hosted_metrics_deployer_version: v3.6.0

This has seemed to work consistently.

Todd Walters


--

Message: 1
Date: Fri, 20 Oct 2017 16:58:34 +
From: Yu Wei 
To: "d...@lists.openshift.redhat.com" ,
"users@lists.openshift.redhat.com" 
Subject: Which branch of ansible playbook should be used when
installing openshift origin 3.6?

Hi,

I'm a little confused about which branch should be used during "advanced 
installation".

From document in https://github.com/openshift/openshift-ansible, it seemed
branch 3.6 should be used.


From doc
https://docs.openshift.org/3.6/install_config/install/host_preparation.html#preparing-for-advanced-installations-origin,
there is a section as below:

Be sure to stay on the master branch of the openshift-ansible repository 
when running an advanced installation.


Which branch should I use during advanced installation?


Please help to clarify this.


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux


Re: Which branch of ansible playbook should be used when installing openshift origin 3.6?

2017-10-20 Thread Tim Bielawa
The online documentation seems incorrect to me. I would recommend
staying on the release-3.6 branch for a 3.6 installation.
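In other words, something like this for a 3.6 install (a sketch):

$ git clone https://github.com/openshift/openshift-ansible.git
$ cd openshift-ansible
$ git checkout release-3.6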


On Fri, Oct 20, 2017 at 12:58 PM, Yu Wei  wrote:

> Hi,
>
> I'm a little confused about which branch should be used during "advanced
> installation".
>
> From document in https://github.com/openshift/openshift-ansible,  it
> seemed branch 3.6 should be used.
>
>
> From doc
> https://docs.openshift.org/3.6/install_config/install/host_preparation.html#preparing-for-advanced-installations-origin,
> there is a section as below:
>
> Be sure to stay on the *master* branch of the *openshift-ansible*
> repository when running an advanced installation.
>
>
> Which branch should I use during advanced installation?
>
>
> Please help to clarify this.
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Tim Bielawa, Sr. Software Engineer [ED-C137]
1BA0 4FAB 4C13 FBA0 A036  4958 AD05 E75E 0333 AE37
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


No route to host when trying to connect to services

2017-10-20 Thread Yu Wei
Hi guys,

I set up an openshift origin 3.6 cluster and deployed 3 zookeeper instances as
a cluster.

I met the error "no route to host" when trying to connect to one zookeeper via
its service.

The detailed information is as below,

zookeeper-1   172.30.64.134   2181/TCP,2888/TCP,3888/TCP   10m
zookeeper-2   172.30.174.48   2181/TCP,2888/TCP,3888/TCP   10m
zookeeper-3   172.30.223.77   2181/TCP,2888/TCP,3888/TCP   10m
[root@host-10-1-236-92 ~]# curl -kv zookeeper-1:3888
* Could not resolve host: zookeeper-1; Name or service not known
* Closing connection 0
curl: (6) Could not resolve host: zookeeper-1; Name or service not known
[root@host-10-1-236-92 ~]# curl -kv zookeeper-1.aura.svc:3888
* About to connect() to zookeeper-1.aura.svc port 3888 (#0)
*   Trying 172.30.64.134...
* Connected to zookeeper-1.aura.svc (172.30.64.134) port 3888 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: zookeeper-1.aura.svc:3888
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
[root@host-10-1-236-92 ~]# curl -kv zookeeper-2.aura.svc:3888
* About to connect() to zookeeper-2.aura.svc port 3888 (#0)
*   Trying 172.30.174.48...
* No route to host
* Failed connect to zookeeper-2.aura.svc:3888; No route to host
* Closing connection 0
curl: (7) Failed connect to zookeeper-2.aura.svc:3888; No route to host
[root@host-10-1-236-92 ~]# curl -kv zookeeper-3.aura.svc:3888
* About to connect() to zookeeper-3.aura.svc port 3888 (#0)
*   Trying 172.30.223.77...
* Connected to zookeeper-3.aura.svc (172.30.223.77) port 3888 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: zookeeper-3.aura.svc:3888
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer

The pods are running well.
How could I fix such a problem?
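Some checks that narrow this down (a sketch; the "aura" namespace comes from
the FQDNs above):

$ oc get endpoints -n aura              # a service with no endpoints has no ready pods behind it
$ oc describe svc zookeeper-2 -n aura   # check the selector actually matches the pod labels
$ oc get pods -n aura -o wide           # confirm the pods are Running and on reachable nodes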


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Which branch of ansible playbook should be used when installing openshift origin 3.6?

2017-10-20 Thread Yu Wei
Hi,

I'm a little confused about which branch should be used during "advanced 
installation".

From document in https://github.com/openshift/openshift-ansible,  it seemed 
branch 3.6 should be used.


From doc
https://docs.openshift.org/3.6/install_config/install/host_preparation.html#preparing-for-advanced-installations-origin,
there is a section as below:

Be sure to stay on the master branch of the openshift-ansible repository when 
running an advanced installation.


Which branch should I use during advanced installation?


Please help to clarify this.


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: DNS resolving problem - in pod

2017-10-20 Thread Marko Lukša

1. is the service in the same namespace as the pod you're testing in?

2. connect through the FQDN of the service 
(kibanasg.fullnamespace.svc.cluster.local)
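Both can be checked from inside the pod (a sketch; the pod name is a
placeholder, and getent may be absent from very minimal images):

$ oc rsh <pod-name>
$ cat /etc/resolv.conf    # the search line should include <namespace>.svc.cluster.local
$ getent hosts kibanasg.<namespace>.svc.cluster.local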



On 20. 10. 2017 11:14, Łukasz Strzelec wrote:

Thx guys ;) Nope, this is not the case.
I've noticed that I can reach SVCs via IP addresses. But when I try to do
the same with the name of the svc, I'm receiving "name or service not
known". Where to start debugging?


Best regards

2017-10-19 15:27 GMT+02:00 Mateus Caruccio:


Alpine's musl libc only supports "search" starting from
version 1.1.13.
Check if this is your case.

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-10-19 10:58 GMT-02:00 Cameron Braid:

I had that happen quite a bit within containers based on
alpine linux

Cam

On Thu, 19 Oct 2017 at 23:49 Łukasz Strzelec wrote:

Dear all :)

I have following problem:

[image: Inline image 1]


Frequently I have to restart origin-node to solve this
issue, but I can't find the root cause of it.
Does anybody have any idea? Where to start looking?
In addition, this problem is affecting different cluster
nodes - randomly, different pods have got this issue.

Best regards
-- 
Ł.S.

___
users mailing list
users@lists.openshift.redhat.com

http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com

http://lists.openshift.redhat.com/openshiftmm/listinfo/users






--
Ł.S.


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: DNS resolving problem - in pod

2017-10-20 Thread Łukasz Strzelec
Thx guys ;) Nope, this is not the case.
I've noticed that I can reach SVCs via IP addresses. But when I try to do the
same with the name of the svc, I'm receiving "name or service not known". Where
to start debugging?
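One concrete check, given the alpine note quoted below (a sketch, run inside
an affected pod):

$ cat /etc/resolv.conf                      # short svc names rely on the search domains listed here
$ /lib/ld-musl-x86_64.so.1 2>&1 | head -1   # on alpine images this prints the musl version; "search" support needs >= 1.1.13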

Best regards

2017-10-19 15:27 GMT+02:00 Mateus Caruccio :

> Alpine's musl libc only supports "search" starting from version 1.1.13.
> Check if this is your case.
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> Gartner Cool Vendor 2017
>
> 2017-10-19 10:58 GMT-02:00 Cameron Braid :
>
>> I had that happen quite a bit within containers based on alpine linux
>>
>> Cam
>>
>> On Thu, 19 Oct 2017 at 23:49 Łukasz Strzelec 
>> wrote:
>>
>>> Dear all :)
>>>
>>> I have following problem:
>>>
>>> [image: Inline image 1]
>>>
>>>
>>> Frequently I have to restart origin-node to solve this issue, but I
>>> can't find the root cause of it.
>>> Does anybody have any idea? Where to start looking?
>>> In addition, this problem is affecting different cluster nodes -
>>> randomly, different pods have got this issue.
>>>
>>>
>>> Best regards
>>> --
>>> Ł.S.
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>


-- 
Ł.S.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: service account for rest api

2017-10-20 Thread Julio Saura
hello


> On 20 Oct 2017, at 9:57, Frederic Giloux wrote:
> 
> Hi Julio
> 
> a couple of points here:
> - oc policy add-role-to-user admin system:serviceaccounts:project1:inciga -n 
> project1 would have worked for the project.

did not work :( trust me .. checked a lot of times

same command with view role did the trick

> If you have used oadm policy add-cluster-role-to-user you should use a 
> cluster role, which view or cluster-admin are and admin is not.

also tried, no luck :(



> - we validated with oc get rc -n project1 
> --as=system:serviceaccounts:project1:inciga that the rights were sufficient 
> for queries specific to the project.

i know .. and i am still trying to understand why the view role did the trick 
for me using curl or python request and was not needed using oc get ..

> - when you say the token provided by oc login you probably mean the token of 
> a user account, which is shorter than the token of a service account. On the 
> other hand it will expire, which is not the case for a token of a service 
> account.

right! that is why i decided to move to service account
> 
> Happy that it works for you now.

me too :)

thanks all for the support.

> 
> Regards,
> 
> Frédéric
> 
> 
> On Fri, Oct 20, 2017 at 9:40 AM, Julio Saura wrote:
> python problem solved too
> 
> all working
> 
> view role was the key :/
> 
> 
> 
> 
>> On 20 Oct 2017, at 9:27, Julio Saura wrote:
>> 
>> problem solved
>> 
>> i do not know why but giving the user the view role instead of admin did
>> the trick ..
>> 
>> :/
>> 
>> now i am able to access using curl with the token, but not using python xD
>> i get a 401 with the long token, but if i use the short one that oc login
>> gives it works xD
>> 
>> 
>> 
>> 
>>> On 20 Oct 2017, at 8:59, Frederic Giloux wrote:
>>> 
>>> Julio,
>>> 
>>> have you tried the command with a higher log level as per my previous email?
>>> # oc get rc -n project1 --as=system:serviceaccounts:project1:inciga 
>>> --loglevel=8
>>> This gives you the successful rest call, which is made by the OC client to 
>>> the API server. You can then check whether it differs from your curl.
>>> 
>>> Regards,
>>> 
>>> Frédéric
>>> 
>>> On Fri, Oct 20, 2017 at 8:30 AM, Julio Saura wrote:
>>> headers look ok in curl request
>>> 
>>> * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
>>> * successfully set certificate verify locations:
>>> *   CAfile: /etc/ssl/certs/ca-certificates.crt
>>>   CApath: none
>>> * TLSv1.2 (OUT), TLS handshake, Client hello (1):
>>> * TLSv1.2 (IN), TLS handshake, Server hello (2):
>>> * NPN, negotiated HTTP1.1
>>> * TLSv1.2 (IN), TLS handshake, Certificate (11):
>>> * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
>>> * TLSv1.2 (IN), TLS handshake, Request CERT (13):
>>> * TLSv1.2 (IN), TLS handshake, Server finished (14):
>>> * TLSv1.2 (OUT), TLS handshake, Certificate (11):
>>> * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
>>> * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
>>> * TLSv1.2 (OUT), TLS handshake, Unknown (67):
>>> * TLSv1.2 (OUT), TLS handshake, Finished (20):
>>> * TLSv1.2 (IN), TLS change cipher, Client hello (1):
>>> * TLSv1.2 (IN), TLS handshake, Finished (20):
>>> * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
>>> * Server certificate:
>>> *  subject: CN=10.1.5.31
>>> *  start date: Sep 21 11:19:56 2017 GMT
>>> *  expire date: Sep 21 11:19:57 2019 GMT
>>> *  issuer: CN=openshift-signer@1505992768
>>> *  SSL certificate verify result: self signed certificate in certificate 
>>> chain (19), continuing anyway.
>>> > GET /api/v1/namespaces/project1/replicationcontrollers HTTP/1.1
>>> > Host: BALANCER:8443
>>> > User-Agent: curl/7.56.0
>>> > Accept: */*
>>> > Authorization: Bearer 
>>> > eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsZHAiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiaW5jaWdhLXRva2VuLTBkNDcyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImluY2lnYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjIyMjE0YTI4LWI0ZTMtMTFlNy1hZTBhLTAwNTA1NmE0M2M0MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpsZHA6aW5jaWdhIn0.VfJa8fLQQjSYySjWO3d_hp0kGqVFAnhvFQ2R6jTcLmtFwiA2NouO0QJCI2KZqvhXigAzPsksOKP7-BP_v2c-93UH3UyXW7RhkYKMOO7d1EMZVMGnT6NBKhVkw45wa20kH221ggh98wdv4MZRAoNEOvmN9qXHmsUWEnxfT8uNIjIkAt_aydocQ22hIbYXzd6w5x6zmOWIVWllgF3qGtY8ArTgRf4WxhuwhUJRy_Gm31WhtKioovk2Hpt6XnlPhnfvHhioqtizZsTepVOD0A-yjearxiDBE7yuIzRsMHo014Dq3O2T_qIZ2P2wvEWBzfpi7i1to4ep3jcb_qDM2vQ0IQ
>>> > Content-Type: application/json
>>> >
>>> < HTTP/1.1 403 Forbidden
>>> < Cache-Control: no-store
>>> < Content-Type: application/json
>>> < Date: Fri, 

Re: service account for rest api

2017-10-20 Thread Frederic Giloux
Hi Julio

a couple of points here:
- oc policy add-role-to-user admin system:serviceaccounts:project1:inciga
-n project1 would have worked for the project. If you have used oadm policy
add-cluster-role-to-user you should use a cluster role, which view or
cluster-admin are and admin is not.
- we validated with oc get rc -n project1
--as=system:serviceaccounts:project1:inciga
that the rights were sufficient for queries specific to the project.
- when you say the token provided by oc login you probably mean the token
of a user account, which is shorter than the token of a service account. On
the other hand it will expire, which is not the case for a token of a
service account.
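On that last point, the long-lived service account token can be fetched like
this (a sketch; assumes a 3.6-era client):

$ oc sa get-token inciga -n project1
# or, if your client lacks `oc sa get-token`, locate the token secret with:
$ oc describe sa inciga -n project1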

Happy that it works for you now.

Regards,

Frédéric


On Fri, Oct 20, 2017 at 9:40 AM, Julio Saura  wrote:

> python problem solved too
>
> all working
>
> view role was the key :/
>
>
>
>
> On 20 Oct 2017, at 9:27, Julio Saura wrote:
>
> problem solved
>
> i do not know why but giving the user the view role instead of admin did
> the trick ..
>
> :/
>
> now i am able to access using curl with the token, but not using python xD
> i get a 401 with the long token, but if i use the short one that oc login
> gives it works xD
>
>
>
>
> On 20 Oct 2017, at 8:59, Frederic Giloux wrote:
>
> Julio,
>
> have you tried the command with a higher log level as per my previous email?
> # oc get rc -n project1 --as=system:serviceaccounts:project1:inciga
> --loglevel=8
> This gives you the successful rest call, which is made by the OC client to
> the API server. You can then check whether it differs from your curl.
>
> Regards,
>
> Frédéric
>
> On Fri, Oct 20, 2017 at 8:30 AM, Julio Saura  wrote:
>
>> headers look ok in curl request
>>
>> * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
>> * successfully set certificate verify locations:
>> *   CAfile: /etc/ssl/certs/ca-certificates.crt
>>   CApath: none
>> * TLSv1.2 (OUT), TLS handshake, Client hello (1):
>> * TLSv1.2 (IN), TLS handshake, Server hello (2):
>> * NPN, negotiated HTTP1.1
>> * TLSv1.2 (IN), TLS handshake, Certificate (11):
>> * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
>> * TLSv1.2 (IN), TLS handshake, Request CERT (13):
>> * TLSv1.2 (IN), TLS handshake, Server finished (14):
>> * TLSv1.2 (OUT), TLS handshake, Certificate (11):
>> * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
>> * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
>> * TLSv1.2 (OUT), TLS handshake, Unknown (67):
>> * TLSv1.2 (OUT), TLS handshake, Finished (20):
>> * TLSv1.2 (IN), TLS change cipher, Client hello (1):
>> * TLSv1.2 (IN), TLS handshake, Finished (20):
>> * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
>> * Server certificate:
>> *  subject: CN=10.1.5.31
>> *  start date: Sep 21 11:19:56 2017 GMT
>> *  expire date: Sep 21 11:19:57 2019 GMT
>> *  issuer: CN=openshift-signer@1505992768
>> *  SSL certificate verify result: self signed certificate in certificate
>> chain (19), continuing anyway.
>> > GET /api/v1/namespaces/project1/replicationcontrollers HTTP/1.1
>> > Host: BALANCER:8443
>> > User-Agent: curl/7.56.0
>> > Accept: */*
>> > Authorization: Bearer
>> > eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsZHAiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiaW5jaWdhLXRva2VuLTBkNDcyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImluY2lnYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjIyMjE0YTI4LWI0ZTMtMTFlNy1hZTBhLTAwNTA1NmE0M2M0MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpsZHA6aW5jaWdhIn0.VfJa8fLQQjSYySjWO3d_hp0kGqVFAnhvFQ2R6jTcLmtFwiA2NouO0QJCI2KZqvhXigAzPsksOKP7-BP_v2c-93UH3UyXW7RhkYKMOO7d1EMZVMGnT6NBKhVkw45wa20kH221ggh98wdv4MZRAoNEOvmN9qXHmsUWEnxfT8uNIjIkAt_aydocQ22hIbYXzd6w5x6zmOWIVWllgF3qGtY8ArTgRf4WxhuwhUJRy_Gm31WhtKioovk2Hpt6XnlPhnfvHhioqtizZsTepVOD0A-yjearxiDBE7yuIzRsMHo014Dq3O2T_qIZ2P2wvEWBzfpi7i1to4ep3jcb_qDM2vQ0IQ
>> > Content-Type: application/json
>> >
>> < HTTP/1.1 403 Forbidden
>> < Cache-Control: no-store
>> < Content-Type: application/json
>> < Date: Fri, 20 Oct 2017 06:28:52 GMT
>> < Content-Length: 295
>> {
>>   "kind": "Status",
>>   "apiVersion": "v1",
>>   "metadata": {},
>>   "status": "Failure",
>>   "message": "User \"system:serviceaccount:ldp:inciga\" cannot list
>> replicationcontrollers in project \"ldp\"",
>>   "reason": "Forbidden",
>>   "details": {
>> "kind": "replicationcontrollers"
>>   },
>>   "code": 403
>> }
>>
>>
>>
>>
>> On 19 Oct 2017, at 18:17, Frederic Giloux wrote:
>>
>> Very good. The issue is with your curl. Next step run the same command
>> with --loglevel=8 and check the queries that are sent to the API server.
>>
>> Regards,
>>
>> Frédéric
>>
>> On 19 Oct 2017 18:11, "Julio Saura"  wrote:
>>
>>> umm that works 

Re: service account for rest api

2017-10-20 Thread Julio Saura
python problem solved too

all working

view role was the key :/




> On 20 Oct 2017, at 9:27, Julio Saura wrote:
> 
> problem solved
> 
> i do not know why but giving the user the view role instead of admin did the trick ..
> 
> :/
> 
> now i am able to access using curl with the token, but not using python xD i
> get a 401 with the long token, but if i use the short one that oc login gives
> it works xD
> 
> 
> 
> 
>> On 20 Oct 2017, at 8:59, Frederic Giloux wrote:
>> 
>> Julio,
>> 
>> have you tried the command with a higher log level as per my previous email?
>> # oc get rc -n project1 --as=system:serviceaccounts:project1:inciga 
>> --loglevel=8
>> This gives you the successful rest call, which is made by the OC client to 
>> the API server. You can then check whether it differs from your curl.
>> 
>> Regards,
>> 
>> Frédéric
>> 
>> On Fri, Oct 20, 2017 at 8:30 AM, Julio Saura wrote:
>> headers look ok in curl request
>> 
>> * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
>> * successfully set certificate verify locations:
>> *   CAfile: /etc/ssl/certs/ca-certificates.crt
>>   CApath: none
>> * TLSv1.2 (OUT), TLS handshake, Client hello (1):
>> * TLSv1.2 (IN), TLS handshake, Server hello (2):
>> * NPN, negotiated HTTP1.1
>> * TLSv1.2 (IN), TLS handshake, Certificate (11):
>> * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
>> * TLSv1.2 (IN), TLS handshake, Request CERT (13):
>> * TLSv1.2 (IN), TLS handshake, Server finished (14):
>> * TLSv1.2 (OUT), TLS handshake, Certificate (11):
>> * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
>> * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
>> * TLSv1.2 (OUT), TLS handshake, Unknown (67):
>> * TLSv1.2 (OUT), TLS handshake, Finished (20):
>> * TLSv1.2 (IN), TLS change cipher, Client hello (1):
>> * TLSv1.2 (IN), TLS handshake, Finished (20):
>> * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
>> * Server certificate:
>> *  subject: CN=10.1.5.31
>> *  start date: Sep 21 11:19:56 2017 GMT
>> *  expire date: Sep 21 11:19:57 2019 GMT
>> *  issuer: CN=openshift-signer@1505992768
>> *  SSL certificate verify result: self signed certificate in certificate 
>> chain (19), continuing anyway.
>> > GET /api/v1/namespaces/project1/replicationcontrollers HTTP/1.1
>> > Host: BALANCER:8443
>> > User-Agent: curl/7.56.0
>> > Accept: */*
>> > Authorization: Bearer 
>> > eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsZHAiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiaW5jaWdhLXRva2VuLTBkNDcyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImluY2lnYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjIyMjE0YTI4LWI0ZTMtMTFlNy1hZTBhLTAwNTA1NmE0M2M0MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpsZHA6aW5jaWdhIn0.VfJa8fLQQjSYySjWO3d_hp0kGqVFAnhvFQ2R6jTcLmtFwiA2NouO0QJCI2KZqvhXigAzPsksOKP7-BP_v2c-93UH3UyXW7RhkYKMOO7d1EMZVMGnT6NBKhVkw45wa20kH221ggh98wdv4MZRAoNEOvmN9qXHmsUWEnxfT8uNIjIkAt_aydocQ22hIbYXzd6w5x6zmOWIVWllgF3qGtY8ArTgRf4WxhuwhUJRy_Gm31WhtKioovk2Hpt6XnlPhnfvHhioqtizZsTepVOD0A-yjearxiDBE7yuIzRsMHo014Dq3O2T_qIZ2P2wvEWBzfpi7i1to4ep3jcb_qDM2vQ0IQ
>> > Content-Type: application/json
>> >
>> < HTTP/1.1 403 Forbidden
>> < Cache-Control: no-store
>> < Content-Type: application/json
>> < Date: Fri, 20 Oct 2017 06:28:52 GMT
>> < Content-Length: 295
>> {
>>   "kind": "Status",
>>   "apiVersion": "v1",
>>   "metadata": {},
>>   "status": "Failure",
>>   "message": "User \"system:serviceaccount:ldp:inciga\" cannot list 
>> replicationcontrollers in project \"ldp\"",
>>   "reason": "Forbidden",
>>   "details": {
>> "kind": "replicationcontrollers"
>>   },
>>   "code": 403
>> }
>> 
>> 
>> 
>> 
>>> On 19 Oct 2017, at 18:17, Frederic Giloux wrote:
>>> 
>>> Very good. The issue is with your curl. Next step run the same command with 
>>> --loglevel=8 and check the queries that are sent to the API server. 
>>> 
>>> Regards, 
>>> 
>>> Frédéric 
>>> 
>>> On 19 Oct 2017 18:11, "Julio Saura" wrote:
>>> umm that works …
>>> 
>>> weird
>>> 
>>> Julio Saura Alejandre
>>> Responsable Servicios Gestionados
>>> hiberus TRAVEL
>>> Tel.: + 34 902 87 73 92 Ext. 659 
>>> Parque Empresarial PLAZA
>>> Edificio EXPOINNOVACIÓN
>>> C/. Bari 25  
>>> Duplicado, Escalera 1, Planta 2ª. 50197 Zaragoza
>>> www.hiberus.com 
>>> Crecemos contigo
>>> 

Re: service account for rest api

2017-10-20 Thread Julio Saura
problem solved

i do not know why but giving the user the view role instead of admin did the trick ..

:/

now i am able to access using curl with the token, but not using python xD i
get a 401 with the long token, but if i use the short one that oc login gives
it works xD




> On 20 Oct 2017, at 8:59, Frederic Giloux wrote:
> 
> Julio,
> 
> have you tried the command with a higher log level as per my previous email?
> # oc get rc -n project1 --as=system:serviceaccounts:project1:inciga 
> --loglevel=8
> This gives you the successful rest call, which is made by the OC client to 
> the API server. You can then check whether it differs from your curl.
> 
> Regards,
> 
> Frédéric
> 
> On Fri, Oct 20, 2017 at 8:30 AM, Julio Saura wrote:
> headers look ok in curl request
> 
> * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
> * successfully set certificate verify locations:
> *   CAfile: /etc/ssl/certs/ca-certificates.crt
>   CApath: none
> * TLSv1.2 (OUT), TLS handshake, Client hello (1):
> * TLSv1.2 (IN), TLS handshake, Server hello (2):
> * NPN, negotiated HTTP1.1
> * TLSv1.2 (IN), TLS handshake, Certificate (11):
> * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
> * TLSv1.2 (IN), TLS handshake, Request CERT (13):
> * TLSv1.2 (IN), TLS handshake, Server finished (14):
> * TLSv1.2 (OUT), TLS handshake, Certificate (11):
> * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
> * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
> * TLSv1.2 (OUT), TLS handshake, Unknown (67):
> * TLSv1.2 (OUT), TLS handshake, Finished (20):
> * TLSv1.2 (IN), TLS change cipher, Client hello (1):
> * TLSv1.2 (IN), TLS handshake, Finished (20):
> * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
> * Server certificate:
> *  subject: CN=10.1.5.31
> *  start date: Sep 21 11:19:56 2017 GMT
> *  expire date: Sep 21 11:19:57 2019 GMT
> *  issuer: CN=openshift-signer@1505992768
> *  SSL certificate verify result: self signed certificate in certificate 
> chain (19), continuing anyway.
> > GET /api/v1/namespaces/project1/replicationcontrollers HTTP/1.1
> > Host: BALANCER:8443
> > User-Agent: curl/7.56.0
> > Accept: */*
> > Authorization: Bearer 
> > eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsZHAiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiaW5jaWdhLXRva2VuLTBkNDcyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImluY2lnYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjIyMjE0YTI4LWI0ZTMtMTFlNy1hZTBhLTAwNTA1NmE0M2M0MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpsZHA6aW5jaWdhIn0.VfJa8fLQQjSYySjWO3d_hp0kGqVFAnhvFQ2R6jTcLmtFwiA2NouO0QJCI2KZqvhXigAzPsksOKP7-BP_v2c-93UH3UyXW7RhkYKMOO7d1EMZVMGnT6NBKhVkw45wa20kH221ggh98wdv4MZRAoNEOvmN9qXHmsUWEnxfT8uNIjIkAt_aydocQ22hIbYXzd6w5x6zmOWIVWllgF3qGtY8ArTgRf4WxhuwhUJRy_Gm31WhtKioovk2Hpt6XnlPhnfvHhioqtizZsTepVOD0A-yjearxiDBE7yuIzRsMHo014Dq3O2T_qIZ2P2wvEWBzfpi7i1to4ep3jcb_qDM2vQ0IQ
> > Content-Type: application/json
> >
> < HTTP/1.1 403 Forbidden
> < Cache-Control: no-store
> < Content-Type: application/json
> < Date: Fri, 20 Oct 2017 06:28:52 GMT
> < Content-Length: 295
> {
>   "kind": "Status",
>   "apiVersion": "v1",
>   "metadata": {},
>   "status": "Failure",
>   "message": "User \"system:serviceaccount:ldp:inciga\" cannot list 
> replicationcontrollers in project \"ldp\"",
>   "reason": "Forbidden",
>   "details": {
> "kind": "replicationcontrollers"
>   },
>   "code": 403
> }
> 
> 
> 
> 
>> On 19 Oct 2017, at 18:17, Frederic Giloux wrote:
>> 
>> Very good. The issue is with your curl. Next step run the same command with 
>> --loglevel=8 and check the queries that are sent to the API server. 
>> 
>> Regards, 
>> 
>> Frédéric 
>> 
>> On 19 Oct 2017 18:11, "Julio Saura" wrote:
>> umm that works …
>> 
>> weird
>> 
>> Julio Saura Alejandre
>> Responsable Servicios Gestionados
>> hiberus TRAVEL
>> Tel.: + 34 902 87 73 92 Ext. 659 
>> Parque Empresarial PLAZA
>> Edificio EXPOINNOVACIÓN
>> C/. Bari 25  
>> Duplicado, Escalera 1, Planta 2ª. 50197 Zaragoza
>> www.hiberus.com 
>> Crecemos contigo
>> 

Re: service account for rest api

2017-10-20 Thread Frederic Giloux
Julio,

have you tried the command with a higher log level as per my previous email?
# oc get rc -n project1 --as=system:serviceaccounts:project1:inciga
--loglevel=8
This gives you the successful rest call, which is made by the OC client to
the API server. You can then check whether it differs from your curl.

Regards,

Frédéric

On Fri, Oct 20, 2017 at 8:30 AM, Julio Saura  wrote:

> headers look ok in curl request
>
> * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
> * successfully set certificate verify locations:
> *   CAfile: /etc/ssl/certs/ca-certificates.crt
>   CApath: none
> * TLSv1.2 (OUT), TLS handshake, Client hello (1):
> * TLSv1.2 (IN), TLS handshake, Server hello (2):
> * NPN, negotiated HTTP1.1
> * TLSv1.2 (IN), TLS handshake, Certificate (11):
> * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
> * TLSv1.2 (IN), TLS handshake, Request CERT (13):
> * TLSv1.2 (IN), TLS handshake, Server finished (14):
> * TLSv1.2 (OUT), TLS handshake, Certificate (11):
> * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
> * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
> * TLSv1.2 (OUT), TLS handshake, Unknown (67):
> * TLSv1.2 (OUT), TLS handshake, Finished (20):
> * TLSv1.2 (IN), TLS change cipher, Client hello (1):
> * TLSv1.2 (IN), TLS handshake, Finished (20):
> * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
> * Server certificate:
> *  subject: CN=10.1.5.31
> *  start date: Sep 21 11:19:56 2017 GMT
> *  expire date: Sep 21 11:19:57 2019 GMT
> *  issuer: CN=openshift-signer@1505992768
> *  SSL certificate verify result: self signed certificate in certificate
> chain (19), continuing anyway.
> > GET /api/v1/namespaces/project1/replicationcontrollers HTTP/1.1
> > Host: BALANCER:8443
> > User-Agent: curl/7.56.0
> > Accept: */*
> > Authorization: Bearer
> > eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsZHAiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiaW5jaWdhLXRva2VuLTBkNDcyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImluY2lnYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjIyMjE0YTI4LWI0ZTMtMTFlNy1hZTBhLTAwNTA1NmE0M2M0MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpsZHA6aW5jaWdhIn0.VfJa8fLQQjSYySjWO3d_hp0kGqVFAnhvFQ2R6jTcLmtFwiA2NouO0QJCI2KZqvhXigAzPsksOKP7-BP_v2c-93UH3UyXW7RhkYKMOO7d1EMZVMGnT6NBKhVkw45wa20kH221ggh98wdv4MZRAoNEOvmN9qXHmsUWEnxfT8uNIjIkAt_aydocQ22hIbYXzd6w5x6zmOWIVWllgF3qGtY8ArTgRf4WxhuwhUJRy_Gm31WhtKioovk2Hpt6XnlPhnfvHhioqtizZsTepVOD0A-yjearxiDBE7yuIzRsMHo014Dq3O2T_qIZ2P2wvEWBzfpi7i1to4ep3jcb_qDM2vQ0IQ
> > Content-Type: application/json
> >
> < HTTP/1.1 403 Forbidden
> < Cache-Control: no-store
> < Content-Type: application/json
> < Date: Fri, 20 Oct 2017 06:28:52 GMT
> < Content-Length: 295
> {
>   "kind": "Status",
>   "apiVersion": "v1",
>   "metadata": {},
>   "status": "Failure",
>   "message": "User \"system:serviceaccount:ldp:inciga\" cannot list
> replicationcontrollers in project \"ldp\"",
>   "reason": "Forbidden",
>   "details": {
> "kind": "replicationcontrollers"
>   },
>   "code": 403
> }
>
>
>
>
> On 19 Oct 2017, at 18:17, Frederic Giloux wrote:
>
> Very good. The issue is with your curl. Next step run the same command
> with --loglevel=8 and check the queries that are sent to the API server.
>
> Regards,
>
> Frédéric
>
> On 19 Oct 2017 18:11, "Julio Saura"  wrote:
>
>> umm that works …
>>
>> weird
>>
>> Julio Saura Alejandre
>> Responsable Servicios Gestionados
>> hiberus TRAVEL
>> Tel.: + 34 902 87 73 92 Ext. 659
>> Parque Empresarial PLAZA
>> Edificio EXPOINNOVACIÓN
>> C/. Bari 25 
>> Duplicado, Escalera 1, Planta 2ª. 50197 Zaragoza
>> www.hiberus.com
>>
>> Crecemos contigo
>>
>> On 19 Oct 2017, at 18:01, Frederic Giloux wrote:
>>
>> oc get rc -n project1 --as=system:serviceaccounts:project1:inciga
>>
>>
>>
>


-- 
Frédéric Giloux
Senior Middleware Consultant
Red Hat 

Re: service account for rest api

2017-10-20 Thread Julio Saura
headers look ok in curl request

* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* NPN, negotiated HTTP1.1
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Unknown (67):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* Server certificate:
*  subject: CN=10.1.5.31
*  start date: Sep 21 11:19:56 2017 GMT
*  expire date: Sep 21 11:19:57 2019 GMT
*  issuer: CN=openshift-signer@1505992768
*  SSL certificate verify result: self signed certificate in certificate chain 
(19), continuing anyway.
> GET /api/v1/namespaces/project1/replicationcontrollers HTTP/1.1
> Host: BALANCER:8443
> User-Agent: curl/7.56.0
> Accept: */*
> Authorization: Bearer 
> eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsZHAiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiaW5jaWdhLXRva2VuLTBkNDcyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImluY2lnYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjIyMjE0YTI4LWI0ZTMtMTFlNy1hZTBhLTAwNTA1NmE0M2M0MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpsZHA6aW5jaWdhIn0.VfJa8fLQQjSYySjWO3d_hp0kGqVFAnhvFQ2R6jTcLmtFwiA2NouO0QJCI2KZqvhXigAzPsksOKP7-BP_v2c-93UH3UyXW7RhkYKMOO7d1EMZVMGnT6NBKhVkw45wa20kH221ggh98wdv4MZRAoNEOvmN9qXHmsUWEnxfT8uNIjIkAt_aydocQ22hIbYXzd6w5x6zmOWIVWllgF3qGtY8ArTgRf4WxhuwhUJRy_Gm31WhtKioovk2Hpt6XnlPhnfvHhioqtizZsTepVOD0A-yjearxiDBE7yuIzRsMHo014Dq3O2T_qIZ2P2wvEWBzfpi7i1to4ep3jcb_qDM2vQ0IQ
> Content-Type: application/json
>
< HTTP/1.1 403 Forbidden
< Cache-Control: no-store
< Content-Type: application/json
< Date: Fri, 20 Oct 2017 06:28:52 GMT
< Content-Length: 295
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "User \"system:serviceaccount:ldp:inciga\" cannot list 
replicationcontrollers in project \"ldp\"",
  "reason": "Forbidden",
  "details": {
"kind": "replicationcontrollers"
  },
  "code": 403
}




> On 19 Oct 2017, at 18:17, Frederic Giloux wrote:
> 
> Very good. The issue is with your curl. Next step run the same command with 
> --loglevel=8 and check the queries that are sent to the API server. 
> 
> Regards, 
> 
> Frédéric 
> 
> On 19 Oct 2017 18:11, "Julio Saura" wrote:
> umm that works …
> 
> weird
> 
> Julio Saura Alejandre
> Responsable Servicios Gestionados
> hiberus TRAVEL
> Tel.: + 34 902 87 73 92 Ext. 659 
> Parque Empresarial PLAZA
> Edificio EXPOINNOVACIÓN
> C/. Bari 25  
> Duplicado, Escalera 1, Planta 2ª. 50197 Zaragoza
> www.hiberus.com 
> Crecemos contigo
> 
> 
>> On 19 Oct 2017, at 18:01, Frederic Giloux wrote:
>> 
>> oc get rc -n project1 --as=system:serviceaccounts:project1:inciga
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: service account for rest api

2017-10-20 Thread Julio Saura
compiled last stable curl version

same problem

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "User \"system:serviceaccount:project1:inciga\" cannot list 
replicationcontrollers in project \”project1\"",
  "reason": "Forbidden",
  "details": {
"kind": "replicationcontrollers"
  },
  "code": 403
}

curl-7.56.0

this is weird

> On 19 Oct 2017, at 19:23, Hiberus wrote:
> 
> Yikes !!
> 
> I will check tomorrow 
> 
> Ty!
> 
> On 19 Oct 2017, at 18:16, Cesar Wong wrote:
> 
>> 
>> Julio, 
>> 
>> Depending on your version of curl, you may be hitting this:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1260178 
>> 
>> 
>> On Thu, Oct 19, 2017 at 12:11 PM, Julio Saura wrote:
>> umm that works …
>> 
>> weird
>> 
>> Julio Saura Alejandre
>> Responsable Servicios Gestionados
>> hiberus TRAVEL
>> Tel.: + 34 902 87 73 92 Ext. 659
>> Parque Empresarial PLAZA
>> Edificio EXPOINNOVACIÓN
>> C/. Bari 25 Duplicado, Escalera 1, Planta 2ª. 50197 Zaragoza
>> www.hiberus.com 
>> Crecemos contigo
>> 
>> 
>>> On 19 Oct 2017, at 18:01, Frederic Giloux wrote:
>>> 
>>> oc get rc -n project1 --as=system:serviceaccounts:project1:inciga
>> 
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: service account for rest api

2017-10-20 Thread Julio Saura
tried

no luck :(


Julio Saura Alejandre
Responsable Servicios Gestionados
hiberus TRAVEL
Tel.: + 34 902 87 73 92 Ext. 659
Parque Empresarial PLAZA
Edificio EXPOINNOVACIÓN
C/. Bari 25 Duplicado, Escalera 1, Planta 2ª. 50197 Zaragoza
www.hiberus.com 
Crecemos contigo


> On 19 Oct 2017, at 21:40, Luke Meyer wrote:
> 
> oc policy add-role-to-user admin

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users