Re: oc port-forward command unexpectedly cuts the proxied connection on Origin 3.7.2

2018-04-23 Thread Fabio Martinelli
Hi Aleksandar

On 22 April 2018 at 17:07, Aleksandar Lazic  wrote:

>
> Does the port-forwarding go through a proxy?
>

No other software in the middle, it's just a "pure" oc port-forward


> Is there a set amount of time after which this happens (= timeout)?
>

~3 minutes, the times I've checked


> What's in the events when this happens?
>

the only messages I could find are :

Apr 20 14:30:56 wfpromshap21 origin-node: I0420 14:30:56.783451  102439
docker_streaming.go:186] executing port forwarding command:
/usr/bin/nsenter -t 12036 -n /usr/bin/socat - TCP4:localhost:


Apr 20 14:30:56 wfpromshap21 journal: I0420 14:30:56.783451  102439
docker_streaming.go:186] executing port forwarding command:
/usr/bin/nsenter -t 12036 -n /usr/bin/socat - TCP4:localhost:

(the port number elided above is the port of the unprivileged SSHd daemon)

I'm afraid that somehow Ansible manages to screw up the oc port-forward
tunnel but I can't really say how
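
As a client-side stopgap I'm considering wrapping the tunnel in a retry
loop; just a sketch, with placeholder pod name and ports:

#!/bin/bash
# restart the port-forward whenever it drops (pod/ports are placeholders)
while true; do
    oc port-forward centos7-sshd-pod 2222:2022
    echo "$(date) tunnel dropped, restarting" >&2
    sleep 1
done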


oc port-forward command unexpectedly cuts the proxied connection on Origin 3.7.2

2018-04-19 Thread Fabio Martinelli
Dear Colleagues

In a short time I have to migrate several corporate applications from a
RedHat6 LXC cluster to a RedHat7 OpenShift Origin 3.7.2 cluster.

Here the application Developers are used to writing an Ansible playbook
for each app, so they've explicitly asked me to prepare a base CentOS7
container running as non-root and featuring an unprivileged SSHd daemon,
in order to run their well-tested Ansible playbooks; furthermore, the
container's /home sits on a dedicated GlusterFS volume to make it
persistent over time. The last ring of this chain is the oc port-forward
command, which is in charge of connecting the Developers' workstations to
the unprivileged SSHd daemon just for the duration of the Ansible
playbook run.
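
For clarity, this is the kind of tunnel I mean (pod name and port numbers
below are only examples):

# forward local port 2222 to the pod's unprivileged SSHd (example values)
oc port-forward centos7-sshd-pod 2222:2022 &
# the Developers then point ssh/Ansible at the forwarded port
ssh -p 2222 developer@127.0.0.1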

This is actually working pretty well, but at a certain point the oc
port-forward command cuts the connection, the Ansible run gets obviously
affected, and the Developer experience turns disappointing; on the other
end, the SSHd process doesn't stop.

Kindly, which settings may I change, both in the Origin Masters' yaml
files and in the Origin Nodes' yaml files, to prevent this issue ?

I'm aware that the application Developers should rewrite their work in
terms of Dockerfiles, but for the time being they really have no time to
do that.
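
On the Node side, the only setting I've found that sounds related is the
kubelet streaming-connection idle timeout; this is an assumption on my
part, not a verified cause:

# node-config.yaml fragment I'd experiment with (assumption: the cut is an
# idle timeout on the kubelet streaming connection used by port-forward):
#
#   kubeletArguments:
#     streaming-connection-idle-timeout:
#     - "4h"
#
systemctl restart origin-node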


Many thanks,
Fabio Martinelli


apiserver 3.7.2 can't start reporting "open /etc/origin/master/master.etcd-client.crt: permission denied)"

2018-04-12 Thread Fabio Martinelli
Dear Colleagues

lately I've updated my Origin HA setup from 3.7.0 to 3.7.2; everything
seems OK except the apiserver in the kube-service-catalog namespace,
which strangely complains about a cert that actually hasn't changed in
the last months. Kindly, how can I fix this ? I might open up the file
permissions, but that doesn't seem the smartest thing to do

Many thanks in advance,
Fabio Martinelli

---- logs ----


$ oc logs -n kube-service-catalog po/apiserver-n2mh2
...
[]string{"https://wfpromshap21.global.wfp.org:2379;, "
https://wfpromshap22.global.wfp.org:2379;, "
https://wfpromshap23.global.wfp.org:2379"},
KeyFile:"/etc/origin/master/master.etcd-client.key",
CertFile:"/etc/origin/master/master.etcd-client.crt",
CAFile:"/etc/origin/master/master.etcd-ca.crt", Quorum:true, Paging:true,
DeserializationCacheSize:0, Codec:runtime.Codec(nil),
Transformer:value.Transformer(nil), CompactionInterval:3000}
F0412 08:37:55.702920   1 storage_decorator.go:57] Unable to create
storage backend: config (&{ /registry [
https://wfpromshap21.global.wfp.org:2379
https://wfpromshap22.global.wfp.org:2379
https://wfpromshap23.global.wfp.org:2379]
/etc/origin/master/master.etcd-client.key
/etc/origin/master/master.etcd-client.crt
/etc/origin/master/master.etcd-ca.crt true true 0 {0xc420423f00
0xc420423f80}  5m0s}), err (open
/etc/origin/master/master.etcd-client.crt: permission denied)

$ ssh shap21 sudo ls -l  /etc/origin/master/master.etcd-client.crt
-rw---. 1 root root 5930 Dec 27 15:37
/etc/origin/master/master.etcd-client.crt

$ ssh wfpromshap22 sudo ls -l  /etc/origin/master/master.etcd-client.crt
-rw---. 1 root root 5930 Dec 27 15:37
/etc/origin/master/master.etcd-client.crt

$ ssh wfpromshap23 sudo ls -l  /etc/origin/master/master.etcd-client.crt
-rw---. 1 root root 5930 Dec 27 15:37
/etc/origin/master/master.etcd-client.crt
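
For what it's worth, the file is root-owned with mode 0600 on all three
masters, so my first guess is a UID mismatch in the pod; two checks I'd
run (assuming the apiserver pod mounts /etc/origin/master from the host):

# which user does the service-catalog apiserver container run as?
oc get pod apiserver-n2mh2 -n kube-service-catalog -o yaml | grep -A4 securityContext
# compare with ownership, mode and SELinux label of the cert on the master
sudo ls -lZ /etc/origin/master/master.etcd-client.crt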


Re: How to debug the openid auth plugin ?

2018-03-28 Thread Fabio Martinelli
Thank you Larry

I'll keep your experience as a precious reference; I assume you're using
OpenShift -> LDAP -> AD because you don't have an OpenShift -> OpenID
Connect -> AD setup like mine

in my IT environment all the applications use OpenID Connect to
authenticate our users, and I should preferably authenticate the same
way; therefore I need to understand how to debug the OpenShift -> OpenID
Connect -> AD pipeline

is there some tool to simulate the OpenID Connect authentication ? I
just found this [@]

I hope somebody from Red Hat can give me some insights; maybe it's just
a matter of raising some debug level.

Thanks,
Fabio

[@] https://github.com/curityio/example-python-openid-connect-client
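
Besides that client, a bare-bones probe I can think of is plain curl
against the standard OIDC discovery endpoint (the issuer URL below is a
placeholder for our AD endpoint):

# fetch the discovery document defined by the OpenID Connect spec
curl -s https://idp.example.com/.well-known/openid-configuration | python -m json.tool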






On 28 March 2018 at 02:02, Brigman, Larry <larry.brig...@arris.com> wrote:

> I configured one of our clusters to use LDAP against our AD.
> Here is my line from the inventory (obfuscated), handling both local and
> LDAP:
> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
> 'filename': '/etc/origin/master/htpasswd'},{'name': 'ldap', 'challenge':
> 'true', 'login': 'true', 'mappingMethod': 'claim', 'kind':
> 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email':
> ['mail'], 'name': ['cn'], 'preferredUsername': ['sAMAccountName']},
> 'bindDN': 'x...@ad.example.com.com', 'bindPassword': 'XXX',
> 'insecure': 'true', 'url': 'ldap://ldap.example.com:389/
> dc=sub,dc=example,dc=com?sAMAccountName'}]
>
> This should give a good reference on how to configure/test things:
> https://github.com/redhat-cop/openshift-playbooks/blob/master/playbooks/installation/ldap_integration.adoc
>
> -Original Message-
> From: users-boun...@lists.openshift.redhat.com [mailto:
> users-boun...@lists.openshift.redhat.com] On Behalf Of fabio martinelli
> Sent: Monday, March 26, 2018 2:26 PM
> To: users <users@lists.openshift.redhat.com>
> Subject: How to debug the openid auth plugin ?
>
> Dear OpenShift Colleagues
>
> I can't get the OpenID Auth plugin [$] working, not necessarily because
> it's broken on the Origin side, since the AD layer, where I'm not root
> [%], is also involved; furthermore I don't have much experience with OpenID.
>
> I believe I've followed the manual [$] to the letter, and I've selected
> "lookup" as the mappingMethod since I don't want any automatic login
> from our AD at this stage.
>
> This is my failed login attempt by oc :
> 
> $ oc login --loglevel=10
> I0326 22:58:26.698146   38291 loader.go:357] Config loaded from file
> /Users/f_martinelli/.kube/config
> I0326 22:58:26.701628   38291 round_trippers.go:386] curl -k -v -XHEAD
> https://hosting.wfp.org:443/
> I0326 22:58:26.922676   38291 round_trippers.go:405] HEAD
> https://hosting.wfp.org:443/ 403 Forbidden in 220 milliseconds
> I0326 22:58:26.922709   38291 round_trippers.go:411] Response Headers:
> I0326 22:58:26.922720   38291 round_trippers.go:414] Vary:
> Accept-Encoding
> I0326 22:58:26.922729   38291 round_trippers.go:414]
> X-Content-Type-Options: nosniff
> I0326 22:58:26.922738   38291 round_trippers.go:414] Date: Mon, 26 Mar
> 2018 20:58:26 GMT
> I0326 22:58:26.922747   38291 round_trippers.go:414] Content-Type:
> text/plain
> I0326 22:58:26.922756   38291 round_trippers.go:414] Connection:
> keep-alive
> I0326 22:58:26.922765   38291 round_trippers.go:414] Server: nginx
> I0326 22:58:26.922774   38291 round_trippers.go:414] Content-Length: 90
> I0326 22:58:26.922782   38291 round_trippers.go:414] Cache-Control:
> no-store
> I0326 22:58:26.922889   38291 round_trippers.go:386] curl -k -v -XGET -H
> "X-Csrf-Token: 1"
> https://hosting.wfp.org:443/.well-known/oauth-authorization-server
> I0326 22:58:26.965442   38291 round_trippers.go:405] GET
> https://hosting.wfp.org:443/.well-known/oauth-authorization-server

How to debug the openid auth plugin ?

2018-03-26 Thread fabio martinelli
OTc4MjMzMjYxMzAxNzcyNDkwNTM1MTEyODU3MTA0Mjc4In0_uri=https%3A%2F%2Fhosting.wfp.org%2Fconsole%2Foauth: 
(2.865321ms) 302 [[Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) 
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 
Safari/537.36] 10.11.40.34:34290]
Mar 26 22:59:14 wfpromshap22 journal: I0326 20:59:14.634186   1 
handler.go:160] Got auth data
Mar 26 22:59:14 wfpromshap22 origin-master-api: I0326 
20:59:14.634186   1 handler.go:160] Got auth data
Mar 26 22:59:14 wfpromshap22 origin-master-api: I0326 
20:59:14.642600   1 openid.go:216] identity=&{my_openid_connect 
l8M167PMNqOtC+i49V4K5wAiVhlnNY7Tax//O0l0Bm8= map[]}



Please, can I somehow debug, step by step, what Origin is doing here ?

I've understood that I should get a JWT from AD during the
authentication; did I get it ? I read "Got auth data" in the logs.
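
The only step-by-step view I've managed so far is raising the master API
verbosity; a sketch, assuming the usual RPM/systemd layout written by
openshift-ansible:

# bump --loglevel on the master API and follow the oauth/openid messages
sudo sed -i 's/--loglevel=2/--loglevel=5/' /etc/sysconfig/origin-master-api
sudo systemctl restart origin-master-api
sudo journalctl -u origin-master-api -f | grep -i -E 'oauth|openid'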


I've no access to the AD logs, but I can talk face-to-face with our AD Admin.

many thanks in advance,
Fabio Martinelli




[$] https://docs.openshift.com/container-platform/3.7/install_config/configuring_authentication.html#OpenID
[%] https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-protocols-openid-connect-code




How to monitor a containerized GlusterFS installation ?

2018-03-07 Thread Fabio Martinelli
Dear colleagues

I'm successfully using [$], and I need to monitor the dynamic GlusterFS
volumes created by Heketi with some tool; the one I know best is Nagios,
but I might consider others. Nagios is actually also mentioned in the
manual [%].

At the very least I need to detect, for each dynamic volume, error
conditions like this:

*
oc exec glusterfs-storage-c6j54 gluster vol heal
vol_f0b915bc0e4e25e6037f3d6c78620f67 info
Brick 10.11.41.87:
/var/lib/heketi/mounts/vg_cd618f81490483d6011f9b4a22046067/brick_eb97dc9f46f5232629d89c803ec01aee/brick
Status: Transport endpoint is not connected  <--- !!!
Number of entries: -

Brick 10.11.41.86:
/var/lib/heketi/mounts/vg_62712704664dc3dd291e75b7fc435c0b/brick_ab42b7dedafa3480424ac1e32d9a2044/brick
Status: Connected
Number of entries: 0

Brick 10.11.41.85:
/var/lib/heketi/mounts/vg_d940dd89c8611cdb93bdcbe2dd363977/brick_abeb14f9e03b3b9b71be8de519719f33/brick
Status: Connected
Number of entries: 0
*
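
What I have in mind is something along these lines, as a Nagios-style
probe; just a sketch, assuming oc is already logged in, with the pod and
volume names taken from the example above:

#!/bin/bash
# exit 2 (CRITICAL) if any brick of the given volume reports a
# disconnected transport endpoint
POD=glusterfs-storage-c6j54
VOL=vol_f0b915bc0e4e25e6037f3d6c78620f67
out=$(oc exec "$POD" -- gluster vol heal "$VOL" info 2>&1)
if echo "$out" | grep -q "Transport endpoint is not connected"; then
    echo "CRITICAL: $VOL has a disconnected brick"
    exit 2
fi
echo "OK: all bricks of $VOL are connected"
exit 0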

My containerized Origin 3.7 is using the Docker image :
docker.io/gluster/gluster-centos@sha256:5e86156d721ddeaf75972299efd000794cf7bc3da8cbd5aae4ddfd73e36c4534


any advice is highly appreciated.

many thanks,
Fabio Martinelli


[$]
https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_glusterfs.html#gfs-containerized-storage-cluster

[%]
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/chap-monitoring_red_hat_storage


Re: nginx in front of haproxy ?

2018-01-02 Thread Fabio Martinelli
It was actually necessary to rewrite the master-config.yaml this other
way, basically removing all the :8443 strings from the 'public' fields,
i.e. making them implicitly point at :443

admissionConfig:
  pluginConfig:
BuildDefaults:
  configuration:
apiVersion: v1
env: []
kind: BuildDefaultsConfig
resources:
  limits: {}
  requests: {}
BuildOverrides:
  configuration:
apiVersion: v1
kind: BuildOverridesConfig
PodPreset:
  configuration:
apiVersion: v1
disable: false
kind: DefaultAdmissionConfig
openshift.io/ImagePolicy:
  configuration:
apiVersion: v1
executionRules:
- matchImageAnnotations:
  - key: images.openshift.io/deny-execution
value: 'true'
  name: execution-denied
  onResources:
  - resource: pods
  - resource: builds
  reject: true
  skipOnResolutionFailure: true
kind: ImagePolicyConfig
aggregatorConfig:
  proxyClientInfo:
certFile: aggregator-front-proxy.crt
keyFile: aggregator-front-proxy.key
apiLevels:
- v1
apiVersion: v1
assetConfig:
  extensionScripts:
  - /etc/origin/master/openshift-ansible-catalog-console.js
  logoutURL: ""
  masterPublicURL: https://hosting.wfp.org
  metricsPublicURL: https://metrics.hosting.wfp.org/hawkular/metrics
  publicURL: https://hosting.wfp.org/console/
  servingInfo:
bindAddress: 0.0.0.0:8443
bindNetwork: tcp4
certFile: master.server.crt
clientCA: ""
keyFile: master.server.key
maxRequestsInFlight: 0
requestTimeoutSeconds: 0
authConfig:
  requestHeader:
clientCA: front-proxy-ca.crt
clientCommonNames:
- aggregator-front-proxy
extraHeaderPrefixes:
- X-Remote-Extra-
groupHeaders:
- X-Remote-Group
usernameHeaders:
- X-Remote-User
controllerConfig:
  election:
lockName: openshift-master-controllers
  serviceServingCert:
signer:
  certFile: service-signer.crt
  keyFile: service-signer.key
controllers: '*'
corsAllowedOrigins:
- (?i)//127\.0\.0\.1(:|\z)
- (?i)//localhost(:|\z)
- (?i)//10\.11\.41\.85(:|\z)
- (?i)//kubernetes\.default(:|\z)
- (?i)//kubernetes\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes(:|\z)
- (?i)//openshift\.default(:|\z)
- (?i)//hosting\.wfp\.org(:|\z)
- (?i)//openshift\.default\.svc(:|\z)
- (?i)//172\.30\.0\.1(:|\z)
- (?i)//wfpromshap21\.global\.wfp\.org(:|\z)
- (?i)//openshift\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes\.default\.svc(:|\z)
- (?i)//openshift(:|\z)
dnsConfig:
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4
etcdClientInfo:
  ca: master.etcd-ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://wfpromshap21.global.wfp.org:2379
  - https://wfpromshap22.global.wfp.org:2379
  - https://wfpromshap23.global.wfp.org:2379
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: false
kind: MasterConfig
kubeletClientInfo:
  ca: ca-bundle.crt
  certFile: master.kubelet-client.crt
  keyFile: master.kubelet-client.key
  port: 10250
kubernetesMasterConfig:
  apiServerArguments:
runtime-config:
- apis/settings.k8s.io/v1alpha1=true
storage-backend:
- etcd3
storage-media-type:
- application/vnd.kubernetes.protobuf
  controllerArguments:
  masterCount: 3
  masterIP: 10.11.41.85
  podEvictionTimeout:
  proxyClientInfo:
certFile: master.proxy-client.crt
keyFile: master.proxy-client.key
  schedulerArguments:
  schedulerConfigFile: /etc/origin/master/scheduler.json
  servicesNodePortRange: ""
  servicesSubnet: 172.30.0.0/16
  staticNodeNames: []
masterClients:
  externalKubernetesClientConnectionOverrides:
acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
burst: 400
contentType: application/vnd.kubernetes.protobuf
qps: 200
  externalKubernetesKubeConfig: ""
  openshiftLoopbackClientConnectionOverrides:
acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
burst: 600
contentType: application/vnd.kubernetes.protobuf
qps: 300
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://hosting.wfp.org
networkConfig:
  clusterNetworkCIDR: 10.128.0.0/14
  clusterNetworks:
  - cidr: 10.128.0.0/14
hostSubnetLength: 9
  externalIPNetworkCIDRs:
  - 0.0.0.0/0
  hostSubnetLength: 9
  networkPluginName: redhat/openshift-ovs-multitenant
  serviceNetworkCIDR: 172.30.0.0/16
oauthConfig:
  assetPublicURL: https://hosting.wfp.org/console/
  grantConfig:
method: auto
  identityProviders:
  - challenge: true
login: true
mappingMethod: claim
name: htpasswd_auth
provider:
  apiVersion: v1
  file: /etc/origin/master/htpasswd
  kind: HTPasswdPasswordIdentityProvider

nginx in front of haproxy ?

2018-01-02 Thread Fabio Martinelli
login: true
mappingMethod: claim
name: htpasswd_auth
provider:
  apiVersion: v1
  file: /etc/origin/master/htpasswd
  kind: HTPasswdPasswordIdentityProvider
  masterCA: ca-bundle.crt
  masterPublicURL: https://hosting.wfp.org:8443
  masterURL: https://wfpromshap21.global.wfp.org:8443
  sessionConfig:
sessionMaxAgeSeconds: 3600
sessionName: ssn
sessionSecretsFile: /etc/origin/master/session-secrets.yaml
  tokenConfig:
accessTokenMaxAgeSeconds: 86400
authorizeTokenMaxAgeSeconds: 500
pauseControllers: false
policyConfig:
  bootstrapPolicyFile: /etc/origin/master/policy.json
  openshiftInfrastructureNamespace: openshift-infra
  openshiftSharedResourcesNamespace: openshift
projectConfig:
  defaultNodeSelector: ""
  projectRequestMessage: ""
  projectRequestTemplate: ""
  securityAllocator:
mcsAllocatorRange: s0:/2
mcsLabelsPerProject: 5
uidAllocatorRange: 10-19/1
routingConfig:
  subdomain: hosting.wfp.org
serviceAccountConfig:
  limitSecretReferences: false
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca-bundle.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
servingInfo:
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 500
  requestTimeoutSeconds: 3600
volumeConfig:
  dynamicProvisioningEnabled: true


this is the nginx 1.13.8 counterpart, provided by a colleague of mine; I
don't know nginx very well.

upstream openshift-cluster-webconsole {
ip_hash;
server wfpromshap21.global.wfp.org:8443;
server wfpromshap22.global.wfp.org:8443;
server wfpromshap23.global.wfp.org:8443;
}

server {
listen   10.11.40.99:80;
server_name hosting.wfp.org;
return 301 https://$server_name$request_uri;
}


server {
listen   10.11.40.99:443;
server_name hosting.wfp.org;

access_log /var/log/nginx/hosting-console-access.log;
#access_log off;
error_log  /var/log/nginx/hosting-console-error.log  crit;

include /data/nginx/includes.d/ssl-wfp.conf;

include /data/nginx/includes.d/error.conf;

include /data/nginx/includes.d/proxy.conf;

proxy_set_header Host $host;

location / {
proxy_pass https://openshift-cluster-webconsole;
}

}


the result is the auth error reported below when I try to reach
https://hosting.wfp.org:8443/

Please, what may I check ? Right now the Origin Web console is unusable.

Is there any well-tested procedure that I might follow to configure
nginx 1.13.8 in front of haproxy as installed by Origin 3.7 ?
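
In case it helps to reproduce, the quickest comparison I can run from a
workstation is this sketch:

# compare the console headers through nginx (:443) and directly (:8443)
curl -k -I -s https://hosting.wfp.org/console/ | head -5
curl -k -I -s https://hosting.wfp.org:8443/console/ | head -5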


Many thanks,
Fabio Martinelli

object(Zend_Controller_Exception)#217 (8) {
  ["_previous":"Zend_Exception":private] => NULL
  ["message":protected] => string(1028) "A value for the identity was
not provided prior to authentication with Zend_Auth_Adapter_DbTable.#0
/srv/www/ada-framework/prome/releases/20171115114915/lib/Zend/Auth/Adapter/DbTable.php(366):
Zend_Auth_Adapter_DbTable->_authenticateSetup()
#1 /srv/www/ada-framework/prome/releases/20171115114915/lib/Zend/Auth.php(117):
Zend_Auth_Adapter_DbTable->authenticate()
#2 
/srv/www/ada-framework/prome/releases/20171115114915/lib/ADA/Controller/Plugin/Securite/HTTPAuthorization.php(119):
Zend_Auth->authenticate(Object(Systeme_Auth_Adapter))
#3 
/srv/www/ada-framework/prome/releases/20171115114915/lib/Zend/Controller/Plugin/Broker.php(309):
ADA_Controller_Plugin_Securite_HTTPAuthorization->preDispatch(Object(Zend_Controller_Request_Http))
#4 
/srv/www/ada-framework/prome/releases/20171115114915/lib/Zend/Controller/Front.php(941):
Zend_Controller_Plugin_Broker->preDispatch(Object(Zend_Controller_Request_Http))
#5 /srv/www/ada-framework/prome/releases/20171115114915/api/index.php(108):
Zend_Controller_Front->dispatch()
#6 {main}"
  ["string":"Exception":private] => string(0) ""
  ["code":protected] => int(0)
  ["file":protected] => string(90)
"/srv/www/ada-framework/prome/releases/20171115114915/lib/Zend/Controller/Plugin/Broker.php"
  ["line":protected] => int(312)
  ["trace":"Exception":private] => array(2) {
[0] => array(6) {
  ["file"] => string(82)
"/srv/www/ada-framework/prome/releases/20171115114915/lib/Zend/Controller/Front.php"
  ["line"] => int(941)
  ["function"] => string(11) "preDispatch"
  ["class"] => string(29) "Zend_Controller_Plugin_Broker"
  ["type"] => string(2) "->"
  ["args"] => array(1) {
[0] => object(Zend_Controller_Request_Http)#210 (15) {
  ["_paramSources":protected] => array(2) {
[0] => string(4) "_GET"
[1] 

Can't install Origin 3.7 on three independent Azure VMs

2017-09-29 Thread fabio martinelli

Dear Colleagues

I'm trying to deploy Origin 3.7 on three independent Microsoft Azure VMs,
but I can't get the VMs' public IPs into the etcd settings, and my Ansible
installation is failing accordingly; precisely, I'm getting these etcd
settings:

[root@openshift01 ~]# docker inspect etcd_container

...

   "ETCD_NAME=openshift01.northeurope.cloudapp.azure.com",
"ETCD_LISTEN_PEER_URLS=https://10.0.0.4:2380;,
"ETCD_DATA_DIR=/var/lib/etcd/",
"ETCD_HEARTBEAT_INTERVAL=500",
"ETCD_ELECTION_TIMEOUT=2500",
"ETCD_LISTEN_CLIENT_URLS=https://10.0.0.4:2379;,
"ETCD_INITIAL_ADVERTISE_PEER_URLS=https://10.0.0.4:2380;,

"ETCD_INITIAL_CLUSTER=openshift01.northeurope.cloudapp.azure.com=https://10.0.0.4:2380,openshift02.uksouth.cloudapp.azure.com=https://10.0.1.4:2380,openshift03.westus.cloudapp.azure.com=https://10.0.3.4:2380;,
"ETCD_INITIAL_CLUSTER_STATE=new",
"ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1",
"ETCD_ADVERTISE_CLIENT_URLS=https://10.0.0.4:2379;,
...

while I need to get the VMs' public IPs in there, like this:

...
   "ETCD_NAME=openshift01.northeurope.cloudapp.azure.com",
"ETCD_LISTEN_PEER_URLS=https://52.169.122.138:2380;,
"ETCD_DATA_DIR=/var/lib/etcd/",
"ETCD_HEARTBEAT_INTERVAL=500",
"ETCD_ELECTION_TIMEOUT=2500",
"ETCD_LISTEN_CLIENT_URLS=https://52.169.122.138:2379;,
"ETCD_INITIAL_ADVERTISE_PEER_URLS=https://52.169.122.138:2380;,

"ETCD_INITIAL_CLUSTER=openshift01.northeurope.cloudapp.azure.com=https://52.169.122.138:2380,openshift02.uksouth.cloudapp.azure.com=https://51.140.76.172:2380,openshift03.westus.cloudapp.azure.com=https://13.64.104.13:2380;,
"ETCD_INITIAL_CLUSTER_STATE=new",
"ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1",
"ETCD_ADVERTISE_CLIENT_URLS=https://52.169.122.138:2379;,
...

below are the network details of the VM with the private IP 10.0.0.4; its
public IP is not defined on any interface:

[root@openshift01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:0d:3a:b7:49:df brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.4/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20d:3aff:feb7:49df/64 scope link
       valid_lft forever preferred_lft forever

3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:e9:80:cb:26 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever

I've tried to explicitly set the three host variables openshift_hostname,
openshift_ip and openshift_public_ip, as described in the manual, but I
couldn't make it work:
https://docs.openshift.org/latest/install_config/install/advanced_install.html#configuring-host-variables
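
Concretely, this is the shape of the inventory line I tried, with the
values taken from the outputs above:

# per-host variables in the Ansible inventory (sketch of my attempt)
openshift01.northeurope.cloudapp.azure.com openshift_hostname=openshift01.northeurope.cloudapp.azure.com openshift_ip=10.0.0.4 openshift_public_ip=52.169.122.138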

Please, how can I deploy Origin 3.7 on three independent Azure VMs ?

Or am I forced to set up some kind of VPN among my three Azure VMs, in
order to give them the illusion of being on the same subnet ?

Many thanks for all your suggestions,
cheers
Fabio




Ansible glusterfs & glusterfs_registry run gets stuck reporting "Wait for heketi pod" on CentOS Atomic

2017-09-26 Thread Fabio Martinelli
Hello Everybody


I'm trying to set up Origin 3.7 featuring glusterfs & glusterfs_registry
on 6 CentOS Atomic nodes, but every time I've tried, the Ansible run got
stuck reporting "Wait for heketi pod"



At the end of the Ansible run I can see these containers running :

CONTAINER ID / IMAGE / COMMAND / STATUS / NAME

54583d997e6b
  docker.io/gluster/gluster-centos@sha256:e3e2881af497bbd76e4d3de90a4359d8167aa8410db2c66196f0b99df6067cb2
  "/usr/sbin/init"  Up 34 minutes
  k8s_glusterfs_glusterfs-storage-zltlb_glusterfs_0a71a6cc-a1d1-11e7-8f00-005056a6cb52_0

425e9d1b0e0a
  openshift/origin-pod:v3.7.0-alpha.1
  "/usr/bin/pod"  Up 34 minutes
  k8s_POD_glusterfs-storage-zltlb_glusterfs_0a71a6cc-a1d1-11e7-8f00-005056a6cb52_0

9f41e1b56784
  openshift/node:latest
  "/usr/local/bin/origi"  Up About an hour
  origin-node

cdd713cfa1f6
  openshift/openvswitch:latest
  "/usr/local/bin/ovs-r"  Up About an hour
  openvswitch

c170ceae9902
  openshift/origin:latest
  "/usr/bin/openshift s"  Up About an hour
  origin-master-controllers

21a7050510f4
  openshift/origin:latest
  "/usr/bin/openshift s"  Up About an hour
  origin-master-api

016b94eef88f
  registry.access.redhat.com/rhel7/etcd
  "/usr/bin/etcd"  Up About an hour
  etcd_container



GlusterFS actually got created, as well as this Heketi file :

[root@wfpromshas05 ~]# docker exec -ti -u 0
k8s_glusterfs_glusterfs-storage-zltlb_glusterfs_0a71a6cc-a1d1-11e7-8f00-005056a6cb52_0
ls -l
/var/lib/heketi/mounts/vg_266e9e3ed8ced24c7d22d17162989cfd/brick_f8ccb630bcf42e953e0d0944263891e9/brick/heketi.db

-rw-r--r--. 2 root root 45056 Sep 25 09:21
/var/lib/heketi/mounts/vg_266e9e3ed8ced24c7d22d17162989cfd/brick_f8ccb630bcf42e953e0d0944263891e9/brick/heketi.db



This is my Ansible inventory file; the field
'openshift_master_htpasswd_users' was omitted on purpose :

##

[OSEv3:children]
masters
nodes
etcd
glusterfs
glusterfs_registry

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_storage_glusterfs_wipe=True
debug_level=4
openshift_release=v3.7
openshift_image_tag=latest
openshift_install_examples=True
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_disable_check=disk_availability,memory_availability
openshift_clock_enabled=true
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
osm_cluster_network_cidr=10.128.0.0/14
osm_host_subnet_length=9
openshift_portal_net=172.30.0.0/16
openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=dynamic

[masters]
wfpromshas05.global.wfp.org containerized=true openshift_schedulable=True
wfpromshas06.global.wfp.org containerized=true openshift_schedulable=True
wfpromshas07.global.wfp.org containerized=true openshift_schedulable=True

[etcd]
wfpromshas05.global.wfp.org containerized=true openshift_schedulable=True
wfpromshas06.global.wfp.org containerized=true openshift_schedulable=True
wfpromshas07.global.wfp.org containerized=true openshift_schedulable=True

[nodes]
wfpromshas02.global.wfp.org openshift_node_labels="{'region': 'infra'}" containerized=true openshift_schedulable=True
wfpromshas03.global.wfp.org openshift_node_labels="{'region': 'infra'}" containerized=true openshift_schedulable=True
wfpromshas04.global.wfp.org openshift_node_labels="{'region': 'infra'}" containerized=true openshift_schedulable=True
wfpromshas05.global.wfp.org storage=True containerized=true openshift_schedulable=True
wfpromshas06.global.wfp.org storage=True containerized=true openshift_schedulable=True
wfpromshas07.global.wfp.org storage=True containerized=true openshift_schedulable=True

[glusterfs_registry]
wfpromshas02.global.wfp.org glusterfs_devices='[ "/dev/sdb", "/dev/sdc", "/dev/sdd" ]' containerized=True
wfpromshas03.global.wfp.org glusterfs_devices='[ "/dev/sdb", "/dev/sdc", "/dev/sdd" ]' containerized=True
wfpromshas04.global.wfp.org glusterfs_devices='[ "/dev/sdb", "/dev/sdc", "/dev/sdd" ]' containerized=True

[glusterfs]
wfpromshas05.global.wfp.org glusterfs_devices='[ "/dev/sdb" ]' containerized=True
wfpromshas06.global.wfp.org glusterfs_devices='[ "/dev/sdb" ]' containerized=True
wfpromshas07.global.wfp.org glusterfs_devices='[ "/dev/sdb" ]' containerized=True

##



Kindly, what may I check ?
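
So far these are the only checks I've thought of myself; a sketch, where
'glusterfs' is the namespace used by this setup:

# state of the pods and recent events in the glusterfs namespace
oc get pods -n glusterfs -o wide
oc get events -n glusterfs
# logs of whatever heketi pod exists (its name varies during the deploy)
oc logs -n glusterfs $(oc get pods -n glusterfs -o name | grep -i heketi | head -1)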



Thank you very much,

Fabio Martinelli