Re: Upgrade to v3.6.0, "oc adm migrate storage" returns many errors like: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`

2017-08-22 Thread Stéphane Klein
When I try to edit a pod:

oc edit pod app-2-8bh3m

If I update a label, I get this error:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this
# file will be reopened with the relevant failures.
#
# pods "app-2-8bh3m" was not valid:
# * spec: Forbidden: pod updates may not change fields other than
#   `containers[*].image` or `spec.activeDeadlineSeconds`
#
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"groulot","name":"app-2","uid":"b8debf4a-8705-11e7-bd32-005056b1755a","apiVersion":"v1","resourceVersion":"26178167"}}
    openshift.io/deployment-config.latest-version: "2"
    openshift.io/deployment-config.name: app
    openshift.io/deployment.name: app-2
    openshift.io/scc: groulot
  creationTimestamp: 2017-08-22T06:47:17Z
  generateName: app-2-
  labels:
    deployment: app-2-test
    deploymentconfig: app
    name: app
  name: app-2-8bh3m
  namespace: groulot
  resourceVersion: "26178220"
  selfLink: /api/v1/namespaces/groulot/pods/app-2-8bh3m
  uid: bd079ad1-8705-11e7-bd32-005056b1755a
spec:
  containers:
  - image: 172.30.201.95:5000/openshift/ta-s2i-php-prod@sha256:f6da85d9b0aada51f45f776d8c04941f7ada1fe0219776b5eb0ccb1bab20a3e3
    imagePullPolicy: Always
    name: app
...
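
For context, the validation quoted above applies to pod.spec: on a live pod,
only the container image and activeDeadlineSeconds may change, while metadata
(labels, annotations) is normally mutable. A minimal sketch of updates the
validation permits (the deadline value is illustrative):

oc patch pod app-2-8bh3m -p '{"spec":{"activeDeadlineSeconds":3600}}'
oc label pod app-2-8bh3m deployment=app-2-test --overwrite

That a label-only edit is rejected with a spec error suggests the save
round-trip itself is altering defaulted spec fields.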

2017-08-22 11:35 GMT+02:00 Michal Fojtik <mfoj...@redhat.com>:

> Can you please post the YAML representation of the
> 'test-secret-6-qz4ar' pod? Or another
> pod that failed.
>
> Thanks!
>
>
> On 21 August 2017 at 23:49:56, Stéphane Klein
> (cont...@stephane-klein.info) wrote:
> > Hi,
> >
> > when I try to upgrade an OpenShift Origin v1.5.1 cluster to v3.6.0, I get
> > many errors when Ansible executes the "Upgrade all storage" task:
> > https://github.com/openshift/openshift-ansible/blob/release-3.6/playbooks/common/openshift-cluster/upgrades/upgrade_control_plane.yml#L11
> >
> > Many error lines like:
> >
> > error: pods/test-secret-6-qz4ar -n issue-29059: Pod
> > \"test-secret-6-qz4ar\" is invalid: spec: Forbidden: pod updates may not
> > change fields other than `containers[*].image` or
> > `spec.activeDeadlineSeconds`
> >
> > What is it? How can I fix it?
> >
> > Best regards,
> > Stéphane
> > --
> > Stéphane Klein
> > blog: http://stephane-klein.info
> > cv : http://cv.stephane-klein.info
> > Twitter: http://twitter.com/klein_stephane



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


How do you manage your git repository when you use public Ansible recipes like Ceph-Ansible or OpenShift-Ansible?

2017-08-01 Thread Stéphane Klein
Hi,

this is a message that I posted on Ansible Google Groups:
https://groups.google.com/forum/#!topic/ansible-project/wfm_vmywwTU

I use Ceph-Ansible (https://github.com/ceph/ceph-ansible) and
OpenShift-Ansible (https://github.com/openshift/openshift-ansible) to
install these stacks on our servers.

I would like to keep my configuration in a private Git repository.

For now, I use a private repository with this structure:
.
├── ceph
│   ├── prod
│   │   ├── README.md
│   │   ├── ansible.cfg
│   │   ├── ceph-ansible-upstream
│   │   ├── hosts
│   │   ├── playbooks
│   │   └── roles
│   └── test
│       ├── README.md
│       ├── ansible.cfg
│       ├── ceph-ansible-upstream
│       ├── hosts
│       ├── playbooks
│       └── roles
└── openshift
    ├── prod
    │   ├── README.md
    │   ├── ansible.cfg
    │   ├── hosts
    │   ├── openshift-ansible-upstream
    │   ├── playbooks
    │   └── roles
    └── test
        ├── README.md
        ├── ansible.cfg
        ├── hosts
        ├── openshift-ansible-upstream
        ├── playbooks
        └── roles


Git submodules handle all the *-upstream folders.
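
For illustration, pinning an upstream checkout as a submodule could look like
this (the release branch name is an assumption):

git submodule add https://github.com/openshift/openshift-ansible.git openshift/prod/openshift-ansible-upstream
cd openshift/prod/openshift-ansible-upstream
git checkout release-3.6
cd - && git add openshift/prod/openshift-ansible-upstream
git commit -m "Pin openshift-ansible to release-3.6"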

In each of these environments I can add private roles or playbooks.

My questions:

* how do you manage your Ansible installation when you use public Ansible
recipes?
* do you fork these Ansible recipes and update them directly?
* do you use Ansible Galaxy roles? If yes, I think it's difficult because the
OpenShift Ansible playbooks are very long and complex.

Best regards,
Stéphane


Re: timeout expired waiting for volumes to attach/mount for pod

2017-07-17 Thread Stéphane Klein
2017-07-17 17:20 GMT+02:00 Andrew Lau :

> I see this too. It only started happening after mixing 1.5 and 1.4 nodes.
>

OK, thanks. We also have a 1.5.1 master and 1.4 nodes :(


Re: timeout expired waiting for volumes to attach/mount for pod

2017-07-17 Thread Stéphane Klein
2017-07-17 17:03 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:

>
>
> 2017-07-17 17:01 GMT+02:00 Hemant Kumar <heku...@redhat.com>:
>
>> Did you use openshift-ansible?
>>
>>
> Yes
>


We use ovs-multitenant


Re: timeout expired waiting for volumes to attach/mount for pod

2017-07-17 Thread Stéphane Klein
2017-07-17 17:01 GMT+02:00 Hemant Kumar :

> Is there anything in apiserver/controller logs?
>

You mean "journalctl -u origin-node" on node ?
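
For reference, a sketch of where those logs live (unit names vary by install
method; these assume an RPM-based single-master Origin install):

journalctl -u origin-master   # API server and controllers, on the master
journalctl -u origin-node     # kubelet/node logs, on each node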


Re: timeout expired waiting for volumes to attach/mount for pod

2017-07-17 Thread Stéphane Klein
2017-07-17 17:01 GMT+02:00 Hemant Kumar :

> Did you use openshift-ansible?
>
>
Yes


Re: timeout expired waiting for volumes to attach/mount for pod

2017-07-17 Thread Stéphane Klein
2017-07-17 15:39 GMT+02:00 Hemant Kumar :

> Philippe - I have never seen a properly configured OpenShift server to
> timeout while mounting secrets.
>

We have these messages in the log (it is the same cluster as Philippe's):

Jul 17 10:34:15 prod-node-rbx-2.example.com origin-node[65154]: E0717
10:34:15.197266   65220 docker_manager.go:357] NetworkPlugin cni failed on
the status hook for pod 'test-secret-3-deploy' - Unexpected command output
Device "eth0" does not exist.
Jul 17 10:34:15 prod-node-rbx-2.example.com origin-node[65154]: with error:
exit status 1
Jul 17 10:34:17 prod-node-rbx-2.example.com origin-node[65154]: I0717
10:34:17.519925   65220 docker_manager.go:2177] Determined pod ip after
infra change: 
"test-secret-3-deploy_issue-29059(ebd9309d-6afb-11e7-9452-005056b1755a)":
"10.1.3.9"
Jul 17 10:34:18 prod-node-rbx-2.example.com origin-node[65154]: E0717
10:34:18.314570   65220 docker_manager.go:761] Logging security options:
{key:seccomp value:unconfined msg:}
Jul 17 10:34:18 prod-node-rbx-2.example.com origin-node[65154]: E0717
10:34:18.708998   65220 docker_manager.go:1711] Failed to create symbolic
link to the log file of pod "test-secret-3-deploy_issue-
29059(ebd9309d-6afb-11e7-9452-005056b1755a)" container "deployment":
symlink  /var/log/containers/test-secret-3-deploy_issue-29059_deployment-
ded8b25b6ad78a620d981292111a2f0a46da14b879f9e862f630228e07e8cd7c.log: no
such file or directory


Re: [Logging] searchguard configuration issue? ["warning", "elasticsearch"], "pid":1, "message":"Unable to revive connection: https://logging-es:9200/"}

2017-07-12 Thread Stéphane Klein
2017-07-12 15:41 GMT+02:00 Peter Portante <pport...@redhat.com>:

>
>
> On Wed, Jul 12, 2017 at 9:28 AM, Stéphane Klein <
> cont...@stephane-klein.info> wrote:
>
>>
>> 2017-07-12 15:20 GMT+02:00 Peter Portante <pport...@redhat.com>:
>>
>>> This looks a lot like this BZ:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1449378, "Timeout after
>>> 30SECONDS while retrieving configuration"
>>>
>>> What version of Origin are you using?
>>>
>>>
>> Logging image : origin-logging-elasticsearch:v1.5.0
>>
>> $ oc version
>> oc v1.4.1+3f9807a
>> kubernetes v1.4.0+776c994
>> features: Basic-Auth
>>
>> Server https://console.tech-angels.net:443
>> openshift v1.5.0+031cbe4
>> kubernetes v1.5.2+43a9be4
>>
>> (and with 1.4 nodes because of this crazy bug
>> https://github.com/openshift/origin/issues/14092)
>>
>>
>>> I found that I had to run the sgadmin script in each ES pod at the same
>>> time, and when one succeeds and one fails, just run it again and it worked.
>>>
>>>
>> OK, I'll try that. How can I execute the sgadmin script manually?
>>
>
> You can see it in the run.sh script in each pod, look for the invocation
> of sgadmin there.
>
>
OK, I executed:

/usr/share/elasticsearch/plugins/search-guard-2/tools/sgadmin.sh \
-cd ${HOME}/sgconfig \
-i .searchguard.${HOSTNAME} \
-ks /etc/elasticsearch/secret/searchguard.key \
-kst JKS \
-kspass kspass \
-ts /etc/elasticsearch/secret/searchguard.truststore \
-tst JKS \
-tspass tspass \
-nhnv \
-icl

On ES nodes 1 and 2 at the same time, though I had to run it a second time
on node 2.
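
For completeness, the same invocation can be launched in both pods from
outside with oc exec; a minimal sketch reusing the flags above (the pod names
are the ones from this thread):

for pod in logging-es-x39myqbs-1-s5g7c logging-es-ne81bsny-5-jdcdk; do
  oc exec "$pod" -- bash -c '/usr/share/elasticsearch/plugins/search-guard-2/tools/sgadmin.sh \
    -cd ${HOME}/sgconfig -i .searchguard.${HOSTNAME} \
    -ks /etc/elasticsearch/secret/searchguard.key -kst JKS -kspass kspass \
    -ts /etc/elasticsearch/secret/searchguard.truststore -tst JKS -tspass tspass \
    -nhnv -icl' &
done
wait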

Now I have this message:

Will connect to localhost:9300 ... done
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW
clusterstate ...
Clustername: logging-es
Clusterstate: GREEN
Number of nodes: 2
Number of data nodes: 2
.searchguard.logging-es-x39myqbs-1-s5g7c index already exists, so we do not
need to create one.
Populate config from /opt/app-root/src/sgconfig/
Will update 'config' with /opt/app-root/src/sgconfig/sg_config.yml
   SUCC: Configuration for 'config' created or updated
Will update 'roles' with /opt/app-root/src/sgconfig/sg_roles.yml
   SUCC: Configuration for 'roles' created or updated
Will update 'rolesmapping' with /opt/app-root/src/sgconfig/sg_roles_mapping.yml
   SUCC: Configuration for 'rolesmapping' created or updated
Will update 'internalusers' with /opt/app-root/src/sgconfig/sg_internal_users.yml
   SUCC: Configuration for 'internalusers' created or updated
Will update 'actiongroups' with /opt/app-root/src/sgconfig/sg_action_groups.yml
   SUCC: Configuration for 'actiongroups' created or updated
Done with success

Fixed, thanks.


Re: [Logging] searchguard configuration issue? ["warning", "elasticsearch"], "pid":1, "message":"Unable to revive connection: https://logging-es:9200/"}

2017-07-12 Thread Stéphane Klein
2017-07-12 15:20 GMT+02:00 Peter Portante :

> This looks a lot like this BZ:
> https://bugzilla.redhat.com/show_bug.cgi?id=1449378, "Timeout after
> 30SECONDS while retrieving configuration"
>
> What version of Origin are you using?
>
>
Logging image : origin-logging-elasticsearch:v1.5.0

$ oc version
oc v1.4.1+3f9807a
kubernetes v1.4.0+776c994
features: Basic-Auth

Server https://console.tech-angels.net:443
openshift v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4

(and with 1.4 nodes because of this crazy bug
https://github.com/openshift/origin/issues/14092)


> I found that I had to run the sgadmin script in each ES pod at the same
> time, and when one succeeds and one fails, just run it again and it worked.
>
>
OK, I'll try that. How can I execute the sgadmin script manually?

Best regards,
Stéphane


[Logging] searchguard configuration issue? ["warning", "elasticsearch"], "pid":1, "message":"Unable to revive connection: https://logging-es:9200/"}

2017-07-12 Thread Stéphane Klein
Hi,

For the past day, after an ES cluster pod restart, I have had this error
message when I launch logging-es:

$ oc logs -f logging-es-ne81bsny-5-jdcdk
Comparing the specificed RAM to the maximum recommended for ElasticSearch...
Inspecting the maximum RAM available...
ES_JAVA_OPTS: '-Dmapper.allow_dots_in_name=true -Xms128M -Xmx4096m'
Checking if Elasticsearch is ready on https://localhost:9200
..Will connect to localhost:9300 ...
done
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW
clusterstate ...
Clustername: logging-es
Clusterstate: YELLOW
Number of nodes: 2
Number of data nodes: 2
.searchguard.logging-es-ne81bsny-5-jdcdk index does not exists, attempt to
create it ... done (with 1 replicas, auto expand replicas is off)
Populate config from /opt/app-root/src/sgconfig/
Will update 'config' with /opt/app-root/src/sgconfig/sg_config.yml
   SUCC: Configuration for 'config' created or updated
Will update 'roles' with /opt/app-root/src/sgconfig/sg_roles.yml
   SUCC: Configuration for 'roles' created or updated
Will update 'rolesmapping' with /opt/app-root/src/sgconfig/sg_roles_mapping.yml
   SUCC: Configuration for 'rolesmapping' created or updated
Will update 'internalusers' with /opt/app-root/src/sgconfig/sg_internal_users.yml
   SUCC: Configuration for 'internalusers' created or updated
Will update 'actiongroups' with /opt/app-root/src/sgconfig/sg_action_groups.yml
   SUCC: Configuration for 'actiongroups' created or updated
Timeout (java.util.concurrent.TimeoutException: Timeout after 30SECONDS
while retrieving configuration for [config, roles, rolesmapping,
internalusers,
actiongroups](index=.searchguard.logging-es-x39myqbs-1-s5g7c))
Done with failures

after some time, my ES cluster (2 nodes) is green:

stephane$ oc rsh logging-es-x39myqbs-1-s5g7c bash
$ curl … --cert /etc/elasticsearch/secret/admin-cert https://localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "logging-es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 1643,
  "active_shards" : 3286,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

I have this error in the kibana container:

$ oc logs -f -c kibana logging-kibana-1-jblhl
{"type":"log","@timestamp":"2017-07-12T12:54:54Z","tags":["warning","elasticsearch"],"pid":1,"message":"No
living connections"}
{"type":"log","@timestamp":"2017-07-12T12:54:57Z","tags":["warning","elasticsearch"],"pid":1,"message":"Unable
to revive connection: https://logging-es:9200/"}

But from the Kibana container I can reach the elasticsearch server:

$ oc rsh -c kibana logging-kibana-1-jblhl bash
$ curl https://logging-es:9200/ --cacert /etc/kibana/keys/ca --key /etc/kibana/keys/key --cert /etc/kibana/keys/cert
{
  "name" : "Adri Nital",
  "cluster_name" : "logging-es",
  "cluster_uuid" : "iRo3wOHWSq2bTZskrIs6Zg",
  "version" : {
    "number" : "2.4.4",
    "build_hash" : "fcbb46dfd45562a9cf00c604b30849a6dec6b017",
    "build_timestamp" : "2017-01-03T11:33:16Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}

How can I fix this error?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Re: [Logging] What component forwards log entries to the fluentd input service?

2017-07-11 Thread Stéphane Klein
2017-07-11 15:00 GMT+02:00 Alex Wauck <alexwa...@exosite.com>:

> Last I checked (OpenShift Origin 1.2), fluentd was just slurping up the
> log files produced by Docker.  It can do that because the pods it runs in
> have access to the host filesystem.
>
> On Tue, Jul 11, 2017 at 6:12 AM, Stéphane Klein <
> cont...@stephane-klein.info> wrote:
>
>> Hi,
>>
>> I see here
>> https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/configs.d/input-post-forward-mux.conf#L2
>> that the fluentd logging system uses the secure_forward input system.
>>
>> My question: what component forwards log entries to the fluentd input service?
>>
>>
OK, it's here:

bash-4.2# cat configs.d/dynamic/input-syslog-default-syslog.conf
<source>
  @type systemd
  @label @INGRESS
  path "/var/log/journal"
  pos_file /var/log/journal.pos
  tag journal
</source>

Thanks


[Logging] What component forwards log entries to the fluentd input service?

2017-07-11 Thread Stéphane Klein
Hi,

I see here
https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/configs.d/input-post-forward-mux.conf#L2

that the fluentd logging system uses the secure_forward input system.

My question: what component forwards log entries to the fluentd input service?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Re: Origin-Aggregated-Logging OPS generates 10GB of ES data per day, 40000 hits per hour

2017-07-07 Thread Stéphane Klein
2017-07-07 15:51 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:

> 2017-07-07 14:26 GMT+02:00 Peter Portante <pport...@redhat.com>:
>
>> >
>> > 40000 hits by hours!
>>
>> How are you determining 40,000 hits per hour?
>>
>>
> I did a search in Kibana, last hour => 40,000 hits
>

for one node.


Re: Origin-Aggregated-Logging OPS generates 10GB of ES data per day, 40000 hits per hour

2017-07-07 Thread Stéphane Klein
2017-07-07 14:26 GMT+02:00 Peter Portante :

> >
> > 40000 hits by hours!
>
> How are you determining 40,000 hits per hour?
>
>
I did a search in Kibana, last hour => 40,000 hits


Re: Error creating: pods "mysql-79-" is forbidden: failed quota: resource-quota: must specify limits.cpu, limits.memory, requests.cpu, requests.memory

2017-06-29 Thread Stéphane Klein
2017-06-29 16:33 GMT+02:00 Jessica Forrester :

> It means the pod template in your DC doesn't set requests and limits for
> the pods.  If you are going to have a resourcequota restricting cpu and
> memory then you either have to explicitly set requests/limits on all of
> your pod templates OR
>

Yes, that's it, thanks.



> you need to create a limit range that will provide defaults
>

Defaults? How can I define a default LimitRange?
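
For reference, a LimitRange that supplies those defaults looks roughly like
this (the name and values are illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    default:            # becomes limits.cpu / limits.memory
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # becomes requests.cpu / requests.memory
      cpu: 100m
      memory: 256Mi

Created in the project with oc create -f, it fills in requests/limits for
pods that omit them, which satisfies the quota.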


Error creating: pods "mysql-79-" is forbidden: failed quota: resource-quota: must specify limits.cpu, limits.memory, requests.cpu, requests.memory

2017-06-29 Thread Stéphane Klein
Hi,

I have a project with this ResourceQuota:

$ oc describe ResourceQuota resource-quota
Name:            resource-quota
Namespace:       zf-novalac-staging
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     1
limits.memory    0     2Gi
requests.cpu     0     1
requests.memory  0     2Gi

I scale one DC up:

$ oc scale dc mysql --replicas=1

$ oc get events -w
LASTSEEN                         FIRSTSEEN                        COUNT  NAME      KIND                   TYPE     REASON                       SOURCE                           MESSAGE
2017-06-29 15:41:38 +0200 CEST   2017-06-29 15:31:03 +0200 CEST   5      mysql     DeploymentConfig       Normal   ReplicationControllerScaled  {deploymentconfig-controller }   Scaled replication controller "mysql-79" from 0 to 1
2017-06-29 15:54:44 +0200 CEST   2017-06-29 15:34:44 +0200 CEST   63     mysql-79  ReplicationController  Warning  FailedCreate                 {replication-controller }        Error creating: pods "mysql-79-" is forbidden: failed quota: resource-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory

I don't understand this error, "resource-quota: must specify
limits.cpu,limits.memory,requests.cpu,requests.memory", because I have
specified these parameters.

Where did I go wrong?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


oc rsh or oc get pod -w disconnection after a few minutes

2017-06-23 Thread Stéphane Klein
Hi,

When I use:

oc rsh mypod bash

or

oc get pod -w

I lose the connection after a few minutes. It's not always the same duration.

Why the disconnection? Where can I look to fix it?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Can I configure an SCC to allow a container to access a CephFS volume without SELinux attribute support?

2017-06-22 Thread Stéphane Klein
Hi,

I use a CephFS volume, but this volume doesn't support SELinux attributes:

-bash-4.2# ls -lZ /var/lib/origin/openshift.local.volumes/pods/6726536a-5735-11e7-aef3-005056b1755a/volumes/kubernetes.io~cephfs/pv-ceph-prod-rbx-fs1
drwxr-xr-x root root ?    foo

Is it possible to configure an SCC to allow the container to access this
volume?

This is my SCC, but I have this error:

$ oc rsh test-cephfs-4-mn53h bash
root@test-cephfs-4-mn53h:/# ls /cephfs/
ls: cannot open directory '/cephfs/': Permission denied

apiVersion: v1
kind: List
metadata: {}
items:
- apiVersion: v1
  kind: SecurityContextConstraints
  metadata:
    name: test-cephfs
  priority: 1
  requiredDropCapabilities: null
  readOnlyRootFilesystem: false
  runAsUser:
    type: RunAsAny
  seLinuxContext:
    type: RunAsAny
  seccompProfiles:
  - '*'
  supplementalGroups:
    type: RunAsAny
  fsGroup:
    type: RunAsAny
  users:
  - system:serviceaccount:test-cephfs:default
  volumes:
  - cephFS
  - configMap
  - emptyDir
  - nfs
  - persistentVolumeClaim
  - rbd
  - secret
  allowHostDirVolumePlugin: true
  allowHostIPC: true
  allowHostNetwork: true
  allowHostPID: true
  allowHostPorts: true
  allowPrivilegedContainer: true
  allowedCapabilities: null
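
For completeness, the SCC then has to be created and granted; a minimal
sketch (the file name is hypothetical; the users: field above already grants
it, so add-scc-to-user is just the equivalent imperative form):

oc create -f scc-test-cephfs.yaml
oc adm policy add-scc-to-user test-cephfs system:serviceaccount:test-cephfs:default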

Best regards,
Stéphane

-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Re: CephFS => ls: cannot open directory /cephfs/: Permission denied

2017-06-22 Thread Stéphane Klein
CephFS appears not to support SELinux labels
(http://tracker.ceph.com/issues/13231), so what can I do to allow the
container to access the volume?

2017-06-21 17:52 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:

>
>
> 2017-06-21 16:25 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:
>
>> I don't see where my permission error is.
>>
>
> Maybe it's this error: http://tracker.ceph.com/issues/13231 ?
>
> I have tried that:
>
> # setfattr -n security.selinux -v system_u:object_r:nfs_t:s0
> /var/lib/origin/openshift.local.volumes/pods/4cc61dfa-
> 5692-11e7-aef3-005056b1755a/volumes/kubernetes.io~cephfs/
> pv-ceph-prod-rbx-fs1
> setfattr: /var/lib/origin/openshift.local.volumes/pods/4cc61dfa-
> 5692-11e7-aef3-005056b1755a/volumes/kubernetes.io~cephfs/pv-ceph-prod-rbx-fs1:
> Operation not supported
>
> I don't know if it is the good syntax.
>
> Kernel version is: 3.10.0-514.16.1.el7.x86_64
>
> # ceph -v
> ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
>
> # atomic host status
> State: idle
> Deployments:
> ● centos-atomic-host:centos-atomic-host/7/x86_64/standard
>  Version: 7.20170428 (2017-05-09 16:53:51)
>   Commit: 67c8af37c5d05bb3b377ec1bd3c127
> f98664d6f7a78bf2089fcfb02784d12fbd
>   OSName: centos-atomic-host
> GPGSignature: 1 signature
>   Signature made Tue 09 May 2017 05:43:07 PM EDT using
> RSA key ID F17E745691BA8335
>       Good signature from "CentOS Atomic SIG <
> secur...@centos.org>"
>
> Best regards,
> Stéphane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Re: CephFS => ls: cannot open directory /cephfs/: Permission denied

2017-06-21 Thread Stéphane Klein
2017-06-21 16:25 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:

> I don't see where my permission error is.
>

Maybe it's this error: http://tracker.ceph.com/issues/13231 ?

I have tried that:

# setfattr -n security.selinux -v system_u:object_r:nfs_t:s0 /var/lib/origin/openshift.local.volumes/pods/4cc61dfa-5692-11e7-aef3-005056b1755a/volumes/kubernetes.io~cephfs/pv-ceph-prod-rbx-fs1
setfattr: /var/lib/origin/openshift.local.volumes/pods/4cc61dfa-5692-11e7-aef3-005056b1755a/volumes/kubernetes.io~cephfs/pv-ceph-prod-rbx-fs1: Operation not supported

I don't know if this is the right syntax.

Kernel version is: 3.10.0-514.16.1.el7.x86_64

# ceph -v
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)

# atomic host status
State: idle
Deployments:
● centos-atomic-host:centos-atomic-host/7/x86_64/standard
 Version: 7.20170428 (2017-05-09 16:53:51)
  Commit:
67c8af37c5d05bb3b377ec1bd3c127f98664d6f7a78bf2089fcfb02784d12fbd
  OSName: centos-atomic-host
GPGSignature: 1 signature
  Signature made Tue 09 May 2017 05:43:07 PM EDT using
RSA key ID F17E745691BA8335
  Good signature from "CentOS Atomic SIG <
secur...@centos.org>"

Best regards,
Stéphane


CephFS => ls: cannot open directory /cephfs/: Permission denied

2017-06-21 Thread Stéphane Klein
Hi,

I have one CephFS cluster.

This is my PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ceph-prod-rbx-fs1
  labels:
    storage-type: ceph-fs
    ceph-cluster: ceph-prod-rbx
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 100Mi
  cephfs:
    monitors:
    - 137.74.203.82:6789
    - 172.29.20.31:6789
    - 172.29.20.32:6789
    pool: rbd
    user: admin
    path: /data1/
    secretRef:
      name: ceph-secret
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
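
For context, a pod consumes this through a claim; a minimal matching PVC
sketch (the claim name is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ceph-prod-rbx-fs1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi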


After the container starts, the CephFS volume is mounted successfully on the
OpenShift node.

In OpenShift node host:

# mount | grep "ceph"
137.74.203.82:6789,172.29.20.31:6789,172.29.20.32:6789:/data1/ on
/var/lib/origin/openshift.local.volumes/pods/0f4bb6ef-568b-11e7-aef3-005056b1755a/volumes/
kubernetes.io~cephfs/pv-ceph-prod-rbx-fs1 type ceph
(rw,relatime,name=admin,secret=,acl)

# ls
/var/lib/origin/openshift.local.volumes/pods/0f4bb6ef-568b-11e7-aef3-005056b1755a/volumes/
kubernetes.io~cephfs/pv-ceph-prod-rbx-fs1 -lha
total 0
drwxrwxrwx  1 root root  1 Jun 21 09:58 .
drwxr-x---. 3 root root 33 Jun 21 10:08 ..
drwxr-xr-x  1 root root  0 Jun 21 09:58 foo

Here, I can write to the CephFS volume.

In the container, I have this error:

$ oc rsh test-cephfs-3-v5ggn bash
root@test-cephfs-3-v5ggn:/# ls /cephfs/ -lha
ls: cannot open directory /cephfs/: Permission denied

This is the Docker mount information:

"Mounts": [
{
"Source":
"/var/lib/origin/openshift.local.volumes/pods/0f4bb6ef-568b-11e7-aef3-005056b1755a/volumes/
kubernetes.io~cephfs/pv-ceph-prod-rbx-fs1",
"Destination": "/cephfs",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}

I have created this SCC:

apiVersion: v1
kind: List
metadata: {}
items:
- apiVersion: v1
  kind: SecurityContextConstraints
  metadata:
    name: test-cephfs
  priority: 1
  requiredDropCapabilities: null
  readOnlyRootFilesystem: false
  runAsUser:
    type: RunAsAny
  seLinuxContext:
    type: MustRunAs
  supplementalGroups:
    type: RunAsAny
  fsGroup:
    type: MustRunAs
  users:
  - system:serviceaccount:test-cephfs:default
  volumes:
  - cephFS
  - configMap
  - emptyDir
  - nfs
  - persistentVolumeClaim
  - rbd
  - secret
  allowHostDirVolumePlugin: false
  allowHostIPC: false
  allowHostNetwork: false
  allowHostPID: false
  allowHostPorts: false
  allowPrivilegedContainer: false
  allowedCapabilities: null


I don't see where my permission error is.

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Re: Is CephFS supported by OpenShift?

2017-06-16 Thread Stéphane Klein
2017-06-16 16:04 GMT+02:00 Clayton Coleman :

> If you configure it yourself it's in the code
>

In the code? The OpenShift Go source code or the Ansible role source code?


Is CephFS supported by OpenShift?

2017-06-16 Thread Stéphane Klein
Hi,

I see that Kubernetes supports CephFS:
https://kubernetes.io/docs/concepts/storage/volumes/#cephfs

Is CephFS supported by OpenShift? I don't see it here
https://docs.openshift.org/latest/install_config/persistent_storage/index.html

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Can I exclude one project or one container from the Origin-Aggregated-Logging system?

2017-05-30 Thread Stéphane Klein
Hi,

I just read the origin-aggregated-logging
(https://github.com/openshift/origin-aggregated-logging) documentation and I
didn't find whether I can exclude one project or one container from the
logging system.

Is it possible with container labels? Or some other mechanism?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Re: Pods have connectivity to other pods and services only when I run an additional pod

2017-05-23 Thread Stéphane Klein
2017-05-23 15:32 GMT+02:00 Andrew Lau :

> Philippe, I'm curious if you are running containerized?
>
>
Yes, containerized.


Why can't I use insecureEdgeTerminationPolicy: Redirect when I have termination: reencrypt?

2017-03-31 Thread Stéphane Klein
Hi,

Why can't I use:

insecureEdgeTerminationPolicy: Redirect

when I have:

termination: reencrypt
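
For reference, the combination in question as a full route spec sketch (the
route and service names are illustrative):

apiVersion: v1
kind: Route
metadata:
  name: my-route
spec:
  to:
    kind: Service
    name: my-service
  tls:
    termination: reencrypt
    insecureEdgeTerminationPolicy: Redirect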

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Is it possible to use Helm package system with OpenShift?

2017-03-23 Thread Stéphane Klein
Hi,

is it possible to use the Helm (https://github.com/kubernetes/helm) package
system with OpenShift?
Maybe not the default Kubernetes Helm charts, but some OpenShift charts?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Re: Error: MountVolume.SetUp failed for volume ... Error: MountVolume.SetUp failed for volume ... with: rbd: map failed exit status 1 ... -1 did not load config file, using default settings

2017-03-15 Thread Stéphane Klein
2017-03-15 18:22 GMT+01:00 Huamin Chen :

> Which ceph release you are using, "rbd -v" ?
>

docker exec -it origin-node bash
[root@atomic-test-node-1 origin]# rbd -v
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)


Re: Error: MountVolume.SetUp failed for volume ... Error: MountVolume.SetUp failed for volume ... with: rbd: map failed exit status 1 ... -1 did not load config file, using default settings

2017-03-15 Thread Stéphane Klein
2017-03-13 18:07 GMT+01:00 Huamin Chen :

> "rbd: sysfs write failed
> rbd: map failed: (1) Operation not permitted"
>
> These messages indicate a permission issue. Do your ceph user and keyring
> have permission to map the rbd image?
>
>
I use a containerized OpenShift instance.

In "origin-node" Docker container, I can mount Ceph RBD image (gist:
https://gist.github.com/harobed/0c24a772e9731c2772caca53f829d751):

-bash-4.2# docker exec -it origin-node bash
[root@atomic-test-node-1 origin]# rbd --id admin -m 172.29.20.10:6789
--key=SECRET list
2017-03-15 10:22:35.928695 7f68b68a07c0 -1 did not load config file,
using default settings.
image1
image10
image11
image12
image13
image14
image15
image16
image17
image18
image2
image3
image4
image5
image6
image7
image8
image9
[root@atomic-test-node-1 origin]# rbd --id admin -m 172.29.20.10:6789
--key=SECRET map rbd/image5
2017-03-15 10:24:06.484089 7f639b8ee7c0 -1 did not load config file,
using default settings.
/dev/rbd0
[root@atomic-test-node-1 origin]# mkdir /mnt/image5
[root@atomic-test-node-1 origin]# mount /dev/rbd0 /mnt/image5/
[root@atomic-test-node-1 origin]# ls /mnt/image5/
lost+found

I use the same rbd command with the same parameters as here:
https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/pkg/volume/rbd/rbd_util.go#L236

I can also lock and unlock the image.

rbd in OpenShift doesn't use the /etc/ceph/ config files; everything is set
by command parameters.

What other tests can I do?

Best regards,
Stéphane


Log message error on all nodes: encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/mapper/cah-docker--pool_tmeta: Error running command `thin_ls --no-head

2017-02-27 Thread Stéphane Klein
Hi,

I have many, many lines with this error message in the node logs:

Feb 27 17:51:13 atomic-test-node-1.priv.tech-angels.net origin-node[24165]:
E0227 17:51:13.183150   24451 thin_pool_watcher.go:72] encountered error
refreshing thin pool watcher: error performing thin_ls on metadata device
/dev/mapper/cah-docker--pool_tmeta: Error running command `thin_ls
--no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/mapper/cah-docker--pool_tmeta`:
exit status 127

Do you know what this error is? How can I fix it?

My nodes use CentOS Atomic OS:

# atomic host status
State: idle
Deployments:
● centos-atomic-host:centos-atomic-host/7/x86_64/standard
   Version: 7.20170209 (2017-02-10 00:54:47)
Commit:
d433342b09673c9c4d75ff6eef50a447e73a7541491e5197e1dde14147b164b8
OSName: centos-atomic-host
  GPGSignature: 1 signature
Signature made Fri 10 Feb 2017 02:06:18 AM CET using RSA
key ID F17E745691BA8335
Good signature from "CentOS Atomic SIG "

# docker version
Client:
 Version: 1.12.5
 API version: 1.24
 Package version: docker-common-1.12.5-14.el7.centos.x86_64
 Go version:  go1.7.4
 Git commit:  047e51b/1.12.5
 Built:   Mon Jan 23 15:35:13 2017
 OS/Arch: linux/amd64

Server:
 Version: 1.12.5
 API version: 1.24
 Package version: docker-common-1.12.5-14.el7.centos.x86_64
 Go version:  go1.7.4
 Git commit:  047e51b/1.12.5
 Built:   Mon Jan 23 15:35:13 2017
 OS/Arch: linux/amd64


OpenShift Master:  v1.4.1+3f9807a
Kubernetes Master: v1.4.0+776c994
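
For what it's worth, exit status 127 from a shell generally means the command
itself was not found; a quick check on a node (the package name assumes
CentOS/RHEL, where thin_ls ships with device-mapper-persistent-data):

which thin_ls || echo "thin_ls missing: install device-mapper-persistent-data"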

Best regards,
Stéphane


Can I create a new scc with chroot capability?

2017-01-30 Thread Stéphane Klein
Hi,

I use a Postfix Docker image. This image uses the chroot function.

I think that after the OpenShift 1.2 => 1.3 upgrade, this Postfix container
doesn't work anymore.

If I check "oc describe scc anyuid" I see:

Required Drop Capabilities:MKNOD,SYS_CHROOT
Why chroot capabilities is dropped now? Can I create a new scc with chroot
capability?
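
For reference, one possible approach is to clone anyuid and relax its drop
list; a sketch, assuming cluster-admin rights (the new SCC name and the
project are illustrative):

oc get scc anyuid -o yaml > anyuid-chroot.yaml
# edit anyuid-chroot.yaml: set metadata.name to anyuid-chroot and remove
# SYS_CHROOT from requiredDropCapabilities
oc create -f anyuid-chroot.yaml
oc adm policy add-scc-to-user anyuid-chroot system:serviceaccount:myproject:default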

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Re: How can I scale up the number of etcd hosts?

2017-01-13 Thread Stéphane Klein
2017-01-10 20:17 GMT+01:00 Scott Dodson :

> openshift-ansible doesn't currently provide this, there's an issue
> requesting it https://github.com/openshift/openshift-ansible/issues/1772
> which links to a blog post describing how to do it, though I've not
> validated that myself. The only hard part is the certificate
> management, otherwise scaling procedures should mirror those
> documented by etcd upstream.
>

This is my experiment:

* I added 2 new etcd hosts (2 etcd hosts are useless, see
https://coreos.com/etcd/docs/latest/v2/admin_guide.html#optimal-cluster-size)
* Next, I executed these two Ansible playbooks (invocations sketched below):

  * https://github.com/openshift/openshift-ansible/blob/master/playbooks/byo/openshift-cluster/redeploy-certificates.yml
  * https://github.com/openshift/openshift-ansible/blob/master/playbooks/byo/openshift-cluster/config.yml
    (with the --skip-tags=hosted option)

* Finally, I restarted origin-master
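
The invocations would be along these lines (the inventory path is
illustrative):

ansible-playbook -i inventory/hosts playbooks/byo/openshift-cluster/redeploy-certificates.yml
ansible-playbook -i inventory/hosts playbooks/byo/openshift-cluster/config.yml --skip-tags=hosted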

Troubles:

* if I shut down etcd1, I need to restart the master node to see data in the
OpenShift Console or to launch builds…

Is this behavior normal? Intentional? Or have I misconfigured something?

Best regards,
Stéphane


Re: How can I scale up the number of etcd hosts?

2017-01-13 Thread Stéphane Klein
2017-01-12 17:12 GMT+01:00 Alex Wauck :

> Are you using the built-in OpenShift etcd on that one node, or are you
> using real etcd?
>

I use the standard registry.access.redhat.com/rhel7/etc OpenShift Docker image.

Best regards,
Stéphane


How can I scale up the number of etcd hosts?

2017-01-10 Thread Stéphane Klein
Hi,

I use OpenShift Ansible; how can I scale up the number of etcd hosts?

I see these two "scaleup" playbooks:

* https://github.com/openshift/openshift-ansible/blob/844137e9e968fb0455b9cf5128342c3e449c8abb/playbooks/byo/openshift-master/scaleup.yml
* https://github.com/openshift/openshift-ansible/blob/c65c07f4238b23ee0e4c72746927d587517518ce/playbooks/byo/openshift-node/scaleup.yml

This is what I'm going to do:

* add the new etcd hosts to the [etcd] section of my inventory file
* launch these two scaleup playbooks

Is this the right method?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Re: In Ansible, what does oo_ mean? For instance in oo_masters, does it mean OpenShift Origin?

2016-12-28 Thread Stéphane Klein
Thanks for these explanations.

I have a question: how is /etc/ansible/facts.d/openshift.fact updated?

Best regards,
Stéphane

2016-12-26 18:34 GMT+01:00 Jon Stanley <jonstan...@gmail.com>:

> Yes, that is just a variable that has specific meaning with that
> playbook (and yes, it does stand for Openshift Origin). If you're
> just learning Ansible, you've picked one of the most complex examples
> of usage available to learn (which is good!). You'll notice that
> g_etcd_hosts, for example is referenced. That gets set in the
> individual cloud providers (aws, gce, byo, etc) cluster_hosts.yml.
>
> In the openshift-ansible project, you'll also notice something called
> oo_option being used - that's a lookup plugin[1] written in Python.
> IMO, similar functionality should be part of Ansible core, but that's
> a different topic :)
>
> [1] https://docs.ansible.com/ansible/dev_guide/developing_plugins.html#lookup-plugins
>
> On Mon, Dec 26, 2016 at 11:38 AM, Stéphane Klein
> <cont...@stephane-klein.info> wrote:
> > Hi,
> >
> > I have a simple question: what does the oo_ prefix mean, for example here:
> > https://github.com/openshift/openshift-ansible/blob/master/playbooks/common/openshift-cluster/evaluate_groups.yml#L44
> > ?
> >
> > Is it meaning Openshift Origin?
> >
> > Best regards,
> > Stéphane
> > --
> > Stéphane Klein <cont...@stephane-klein.info>
> > blog: http://stephane-klein.info
> > cv : http://cv.stephane-klein.info
> > Twitter: http://twitter.com/klein_stephane
> >



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


I'm connected on the Atomic Registry web UI but I have this message: "Server has closed the connection"

2016-12-27 Thread Stéphane Klein
Hi,

I have registry-console installed but:

* I have this in the container log:

INFO: cockpit-ws: logged in user: admin
MESSAGE: cockpit-protocol: couldn't read from connection: Error receiving
data: Connection reset by peer
(the message above is repeated 26 times in the log)
INFO: cockpit-ws: admin: timed out

* I'm connected on the Atomic Registry web UI but I have this message:
"Server has closed the connection"

Where could my error be? I didn't find a registry-console debug mode.

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


In Ansible, what does oo_ mean? For instance in oo_masters, does it mean OpenShift Origin?

2016-12-26 Thread Stéphane Klein
Hi,

I have a simple question: what does the oo_ prefix mean, for example here:
https://github.com/openshift/openshift-ansible/blob/master/playbooks/common/openshift-cluster/evaluate_groups.yml#L44
?

Does it mean OpenShift Origin?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Re: Error: error communicating with registry: Get https://registry.example.com/healthz: x509: certificate signed by unknown authority

2016-11-28 Thread Stéphane Klein
Fixed with:

# curl http://www.tbs-x509.com/AddTrustExternalCARoot.crt > /etc/ssl/certs/AddTrustExternalCARoot.crt
# /usr/local/bin/oc adm --token=`/usr/local/bin/oc -n default sa get-token pruner` prune images --confirm --registry-url=registry.example.com --certificate-authority=/etc/ssl/certs/AddTrustExternalCARoot.crt

2016-11-28 14:24 GMT+01:00 Skarbek, John <john.skar...@ca.com>:

>
> On November 28, 2016 at 08:19:21, Stéphane Klein (
> cont...@stephane-klein.info) wrote:
>
> Hi,
>
> I can execute with success this command on my desktop host:
>
> oc adm --token=`oc -n default sa get-token pruner` prune images --confirm
> --registry-url=registry.example.com
>
> On OpenShift master host, I have this error:
> /usr/local/bin/oc adm --token=`/usr/local/bin/oc -n default sa get-token
> pruner` prune images --confirm --registry-url=registry.example.com
> error: error communicating with registry: Get
> https://registry.example.com/healthz:
> x509: certificate signed by unknown authority
>
> I have tried with 
> --certificate-authority=/etc/origin/master/openshift-registry.crt
> parameter, but always the same error.
>
> The above is the path to the certificate used by the registry, not the
> authority.  You probably want `/etc/origin/master/ca.crt` here
>
>
>
> Where is my mistake?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane


-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Re: I try to connect to my custom route to registry-console and I have this error "The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than onc

2016-11-28 Thread Stéphane Klein
Fixed!

I needed to append my new URL https://registry-console.example.com to
redirectURIs in the oauthclient/cockpit-oauth-client OpenShift object:

$ oc export oauthclient/cockpit-oauth-client > foobar.yaml

Append https://registry-console.example.com like this in foobar.yaml:

apiVersion: v1
kind: OAuthClient
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: null
  labels:
    app: registry-console
    createdBy: registry-console-template
  name: cockpit-oauth-client
redirectURIs:
- https://registry-console-default.router.default.svc.cluster.local
- https://registry-console.example.com
secret: 

$ oc apply -f foobar.yaml
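
An equivalent one-liner, for reference (a JSON-patch append; sketch only):

$ oc patch oauthclient cockpit-oauth-client --type=json \
  -p '[{"op":"add","path":"/redirectURIs/-","value":"https://registry-console.example.com"}]'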

Best regards,
Stéphane

2016-11-28 9:15 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

> $ oc get oauthclient
> NAME   SECRET
> WWW-CHALLENGE   REDIRECT URIS
> kibana-proxy      FALSE
> https://kibana.example.com,https://kibana-ops.example.com
> openshift-browser-client   ...   FALSE
>   https://console.example.com:443/oauth/token/display,https:
> //cluster.example.com:443/oauth/token/display
> openshift-challenging-client   ...   TRUE
>https://console.example.com:443/oauth/token/implicit
> openshift-web-console  ...   FALSE
>   https://console.example.com:443/console/,http://localhost:
> 9000,https://localhost:9000,https://cluster.example.com:443/console/
>
> https://registry-console.example.com isn't in the list.
>
> Why isn't the registry console in this list?
>
> Best regards,
> Stéphane
>
>
> 2016-11-25 16:06 GMT+01:00 Luis Fernandez Alvarez <
> luis.fernandezalva...@epfl.ch>:
>
>> Hi,
>>
>> Take a look to the oauth clients, do they match the URIs you're setting?
>>
>> $ oc get oauthclient
>> NAME   SECRET
>>      WWW-CHALLENGE   REDIRECT URIS
>> ...
>>
>> Cheers,
>>
>> Luis
>>
>> On 11/25/2016 10:18 AM, Stéphane Klein wrote:
>>
>> Hi,
>>
>> In OpenShift 1.3.1, I have configured this route:
>>
>> * https://registry-console.example.com => registry-console
>>
>> When I try to connect to https://registry-console.example.com I have
>> this errors:
>>
>> * in browser:
>>
>> {"error":"invalid_request","error_description":"The request is missing a
>> required parameter, includes an invalid parameter value, includes a
>> parameter more than once, or is otherwise malformed."}
>>
>> * in origin-master log:
>>
>> osinserver.go:99] internal error: urls don't validate:
>> https://registry-console-default.router.default.svc.cluster.local /
>> https://registry-console.example.com/
>>
>> I have appended https://registry-console.example.com/
>> to corsAllowedOrigins field in /etc/origin/master/master-config.yaml
>> config file.
>>
>> Where is my error? I have forgotten something?
>>
>> Best regards,
>> Stéphane
>> --
>> Stéphane Klein <cont...@stephane-klein.info>
>> blog: http://stephane-klein.info
>> cv : http://cv.stephane-klein.info
>> Twitter: http://twitter.com/klein_stephane
>>
>>
>>
>
>
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


I try to connect to my custom route to registry-console and I have this error "The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, o

2016-11-25 Thread Stéphane Klein
Hi,

In OpenShift 1.3.1, I have configured this route:

* https://registry-console.example.com => registry-console

When I try to connect to https://registry-console.example.com I have these
errors:

* in browser:

{"error":"invalid_request","error_description":"The request is missing a
required parameter, includes an invalid parameter value, includes a
parameter more than once, or is otherwise malformed."}

* in origin-master log:

osinserver.go:99] internal error: urls don't validate:
https://registry-console-default.router.default.svc.cluster.local /
https://registry-console.example.com/

I have appended https://registry-console.example.com/ to the corsAllowedOrigins
field in the /etc/origin/master/master-config.yaml config file.

Where is my error? Have I forgotten something?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Re: OpenShift use Github issue and Trello, why not use a service like https://waffle.io/ to avoid using two systems and create confusion ?

2016-11-17 Thread Stéphane Klein
2016-11-16 15:28 GMT+01:00 John Lamb :

> What confusion?
>

Where are feature requests and bug reports? In a hidden Trello, or in GitHub
issues?


Re: s2i build on OSX => fatal error: unexpected signal during runtime execution

2016-11-17 Thread Stéphane Klein
Done: https://github.com/openshift/source-to-image/issues/639

2016-11-17 15:23 GMT+01:00 Ben Parees <bpar...@redhat.com>:

> please open an issue on github.
>
> On Thu, Nov 17, 2016 at 9:08 AM, Stéphane Klein <
> cont...@stephane-klein.info> wrote:
>
>> I have this error:
>>
>> https://gist.github.com/harobed/a3acf12956d073f1f8378379aea46764
>>
>> Information about my host:
>>
>> [s2i version, docker version, and docker info output trimmed; see the
>> original message below]
>> --
>> Stéphane Klein <cont...@stephane-klein.info>
>> blog: http://stephane-klein.info
>> cv : http://cv.stephane-klein.info
>> Twitter: http://twitter.com/klein_stephane
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>
> --
> Ben Parees | OpenShift
>
>


-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


s2i build on OSX => fatal error: unexpected signal during runtime execution

2016-11-17 Thread Stéphane Klein
I have this error:

https://gist.github.com/harobed/a3acf12956d073f1f8378379aea46764

Information about my host:

$ s2i version
s2i v1.1.3

$ docker version
Client:
 Version:  1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:Wed Oct 26 23:26:11 2016
 OS/Arch:  darwin/amd64

Server:
 Version:  1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:Wed Oct 26 23:26:11 2016
 OS/Arch:  linux/amd64

$ docker info
Containers: 15
 Running: 0
 Paused: 0
 Stopped: 15
Images: 71
Server Version: 1.12.3
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 132
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.27-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.951 GiB
Name: moby
ID: EINF:6OM6:4537:3WUL:3GJE:W42O:HJGQ:U22H:4VBP:PXMP:EQGO:43OL
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 16
 Goroutines: 29
 System Time: 2016-11-17T14:06:42.466914005Z
 EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
WARNING: No kernel memory limit support
Insecure Registries:
 127.0.0.0/8

-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Error from server: User "system:serviceaccount:default:pruner" cannot list all images in the cluster

2016-11-17 Thread Stéphane Klein
Yes, thanks, that's it:

$ oc adm policy add-cluster-role-to-user system:image-pruner
system:serviceaccount:default:pruner
# oadm --token=`oc sa get-token pruner` prune images --confirm

but now I have:

error: error communicating with registry: Get
http://172.30.154.75:5000/healthz: dial tcp 172.30.154.75:5000: i/o timeout
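A couple of checks that may narrow this down (a sketch; it assumes the registry lives in the "default" project, and that prune is run from a machine with access to the cluster SDN, which the service IP requires):

```
$ oc get svc docker-registry -n default    # does the ClusterIP match 172.30.154.75?
$ oc get pods -n default -l deploymentconfig=docker-registry -o wide
```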

2016-11-16 17:54 GMT+01:00 Jordan Liggitt <jligg...@redhat.com>:

> When granting the cluster role, the username for the service account is
> not "pruner", it is "system:serviceaccount:default:pruner"
>
> On Nov 16, 2016, at 11:29 AM, Stéphane Klein <cont...@stephane-klein.info>
> wrote:
>
> Hi,
>
> oc adm policy add-cluster-role-to-user system:image-pruner pruner
>
> oadm --token=`oc sa get-token pruner` prune images --confirm
> Error from server: User "system:serviceaccount:default:pruner" cannot
> list all images in the cluster
>
> What role I forget to grant to pruner ServiceAccount ?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How can I put logstash config files in ConfigMap ?

2016-11-16 Thread Stéphane Klein
2016-10-27 15:08 GMT+02:00 Luke Meyer :

> The underscores are the problem. Can you convert them to hyphens?
>
>
Yes! That's it: it works with OpenShift 1.3.1 with hyphens instead of
underscores.
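A sketch of the rename plus re-create, assuming a bash shell (older releases rejected underscores in ConfigMap key names):

```
# rename 1_tcp_input.conf -> 1-tcp-input.conf, etc.
for f in logstash-config/*_*; do mv "$f" "${f//_/-}"; done
oc create configmap logstash-config --from-file=logstash-config/
```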
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


OpenShift uses Github issues and Trello; why not use a service like https://waffle.io/ to avoid using two systems and creating confusion?

2016-11-16 Thread Stéphane Klein
Hi,

I see that you use Github issues and Trello to manage OpenShift issues,
for example: https://github.com/openshift/origin/issues/7018

Why not use a service like https://waffle.io/ to avoid using two systems
and creating confusion?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Why do Metrics and Logging use a Deployer container?

2016-11-15 Thread Stéphane Klein
2016-11-15 16:00 GMT+01:00 Matt Wringe :

>
> Is there any features we are missing that you are needing to make changes
> for?
>
>
https://github.com/openshift/origin-metrics/issues/262
https://github.com/openshift/origin-metrics/issues/263
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Why do Metrics and Logging use a Deployer container?

2016-11-15 Thread Stéphane Klein
Why do Metrics and Logging use a Deployer container?

* https://github.com/openshift/origin-metrics/tree/master/deployer
*
https://github.com/openshift/origin-aggregated-logging/tree/master/deployer

I think it's too difficult to hack on; why not use only Ansible for that?

I think there are too many layers to install these components (Metrics and
Logging): Ansible + Dockerfile + deployer template…

If I want to hack the Metrics template I need to:

* fork origin-metrics
* update OpenShift config file objects
* rebuild Docker image
* upload to registry
* change metrics prefix image
* execute playbook
* fix my errors and restart from step 2

Why this complexity?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Error: valueFrom fieldRef resource => ...env[5].valueFrom.fieldRef.fieldPath: Required value

2016-11-08 Thread Stéphane Klein
2016-11-08 16:43 GMT+01:00 Marko Lukša :

> Your version of oc is too old. Your file works for me, when I use 1.3.0+,
> but I get the same error as you when using 1.2.0.
>

Thanks !

I've created this issue: https://github.com/openshift/origin/issues/11836 «
Suggestion: append in `oc` a warning message if you use oc version <
OpenShift server version »
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Error: valueFrom fieldRef resource => ...env[5].valueFrom.fieldRef.fieldPath: Required value

2016-11-08 Thread Stéphane Klein
> This error occurs when you specify multiple field references
simultaneously like fieldRef and resourceFieldRef together.

I've only resourceFieldRef here:

  - name: MEMORY_LIMIT
valueFrom:
  resourceFieldRef:
resource: limits.memory
  - name: CPU_LIMIT
valueFrom:
  resourceFieldRef:
resource: limits.cpu
divisor: 1m

complete file:
https://gist.github.com/harobed/fc24a7766dbcf2d9e61f42dd8a968a6c


2016-11-08 16:14 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

> https://gist.github.com/harobed/fc24a7766dbcf2d9e61f42dd8a968a6c
>
> 2016-11-08 16:03 GMT+01:00 Avesh Agarwal <avaga...@redhat.com>:
>
>>
>>
>> On Tue, Nov 8, 2016 at 8:50 AM, Stéphane Klein <
>> cont...@stephane-klein.info> wrote:
>>
>>>
>>>
>>> 2016-11-08 13:43 GMT+01:00 Avesh Agarwal <avaga...@redhat.com>:
>>>
>>>>
>>>>
>>>> On Tue, Nov 8, 2016 at 5:44 AM, Stéphane Klein <
>>>> cont...@stephane-klein.info> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I've this ReplicationController:
>>>>>
>>>>> apiVersion: v1
>>>>> kind: List
>>>>> metadata: {}
>>>>> items:
>>>>> - apiVersion: v1
>>>>>   kind: ReplicationController
>>>>>   metadata:
>>>>> labels:
>>>>>   metrics-infra: hawkular-cassandra
>>>>>   name: hawkular-cassandra
>>>>>   type: hawkular-cassandra
>>>>> name: hawkular-cassandra-1
>>>>>   spec:
>>>>> replicas: 1
>>>>> selector:
>>>>>   name: hawkular-cassandra-1
>>>>> template:
>>>>>   metadata:
>>>>> labels:
>>>>>   metrics-infra: hawkular-cassandra
>>>>>   name: hawkular-cassandra-1
>>>>>   type: hawkular-cassandra
>>>>>   spec:
>>>>> nodeSelector:
>>>>>   name: atomic-test-node-1
>>>>> containers:
>>>>> - command:
>>>>>   - /opt/apache-cassandra/bin/cassandra-docker.sh
>>>>>   - --cluster_name=hawkular-metrics
>>>>>
>>>>>   ...
>>>>>
>>>>>   env:
>>>>>   - name: POD_NAMESPACE
>>>>> valueFrom:
>>>>>   fieldRef:
>>>>> fieldPath: metadata.namespace
>>>>>   - name: MEMORY_LIMIT
>>>>> valueFrom:
>>>>>   fieldRef:
>>>>>
>>>>
>>>> Should be resourceFieldRef .
>>>>
>>>> resource: limits.memory
>>>>>   - name: CPU_LIMIT
>>>>> valueFrom:
>>>>>   fieldRef:
>>>>>
>>>>
>>>> Should be resourceFieldRef
>>>>
>>>
>>> With resourceFieldRef like here https://github.com/openshift/o
>>> rigin-metrics/blob/master/deployer/templates/hawkular-cassan
>>> dra-node-emptydir.yaml#L83
>>>
>>> I have this error:
>>>
>>> Error from server: ReplicationController "hawkular-cassandra-1" is
>>> invalid: [spec.template.spec.containers[0].env[5].valueFrom: Invalid
>>> value: "": may not have more than one field specified at a time,
>>> spec.template.spec.containers[0].env[6].valueFrom: Invalid value: "":
>>> may not have more than one field specified at a time]
>>>
>>
>> This error occurs when you specify multiple field references
>> simultaneously like fieldRef and resourceFieldRef together. Could you share
>> a link to your spec, and what openshift/origin version you are using? I
>> tried  https://paste.fedoraproject.org/475673/47861727/ and it worked
>> for me.
>>
>>
>> Thanks
>> Avesh
>>
>>
>
>
>
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Error: valueFrom fieldRef resource => ...env[5].valueFrom.fieldRef.fieldPath: Required value

2016-11-08 Thread Stéphane Klein
https://gist.github.com/harobed/fc24a7766dbcf2d9e61f42dd8a968a6c

2016-11-08 16:03 GMT+01:00 Avesh Agarwal <avaga...@redhat.com>:

>
>
> On Tue, Nov 8, 2016 at 8:50 AM, Stéphane Klein <
> cont...@stephane-klein.info> wrote:
>
>>
>>
>> 2016-11-08 13:43 GMT+01:00 Avesh Agarwal <avaga...@redhat.com>:
>>
>>>
>>>
>>> On Tue, Nov 8, 2016 at 5:44 AM, Stéphane Klein <
>>> cont...@stephane-klein.info> wrote:
>>>
>>>> Hi,
>>>>
>>>> I've this ReplicationController:
>>>>
>>>> apiVersion: v1
>>>> kind: List
>>>> metadata: {}
>>>> items:
>>>> - apiVersion: v1
>>>>   kind: ReplicationController
>>>>   metadata:
>>>> labels:
>>>>   metrics-infra: hawkular-cassandra
>>>>   name: hawkular-cassandra
>>>>   type: hawkular-cassandra
>>>> name: hawkular-cassandra-1
>>>>   spec:
>>>> replicas: 1
>>>> selector:
>>>>   name: hawkular-cassandra-1
>>>> template:
>>>>   metadata:
>>>> labels:
>>>>   metrics-infra: hawkular-cassandra
>>>>   name: hawkular-cassandra-1
>>>>   type: hawkular-cassandra
>>>>   spec:
>>>> nodeSelector:
>>>>   name: atomic-test-node-1
>>>> containers:
>>>> - command:
>>>>   - /opt/apache-cassandra/bin/cassandra-docker.sh
>>>>   - --cluster_name=hawkular-metrics
>>>>
>>>>   ...
>>>>
>>>>   env:
>>>>   - name: POD_NAMESPACE
>>>> valueFrom:
>>>>   fieldRef:
>>>> fieldPath: metadata.namespace
>>>>   - name: MEMORY_LIMIT
>>>> valueFrom:
>>>>   fieldRef:
>>>>
>>>
>>> Should be resourceFieldRef .
>>>
>>> resource: limits.memory
>>>>   - name: CPU_LIMIT
>>>> valueFrom:
>>>>   fieldRef:
>>>>
>>>
>>> Should be resourceFieldRef
>>>
>>
>> With resourceFieldRef like here https://github.com/openshift/o
>> rigin-metrics/blob/master/deployer/templates/hawkular-cassan
>> dra-node-emptydir.yaml#L83
>>
>> I have this error:
>>
>> Error from server: ReplicationController "hawkular-cassandra-1" is
>> invalid: [spec.template.spec.containers[0].env[5].valueFrom: Invalid
>> value: "": may not have more than one field specified at a time,
>> spec.template.spec.containers[0].env[6].valueFrom: Invalid value: "":
>> may not have more than one field specified at a time]
>>
>
> This error occurs when you specify multiple field references
> simultaneously like fieldRef and resourceFieldRef together. Could you share
> a link to your spec, and what openshift/origin version you are using? I
> tried  https://paste.fedoraproject.org/475673/47861727/ and it worked for
> me.
>
>
> Thanks
> Avesh
>
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Error: valueFrom fieldRef resource => ...env[5].valueFrom.fieldRef.fieldPath: Required value

2016-11-08 Thread Stéphane Klein
2016-11-08 13:43 GMT+01:00 Avesh Agarwal <avaga...@redhat.com>:

>
>
> On Tue, Nov 8, 2016 at 5:44 AM, Stéphane Klein <
> cont...@stephane-klein.info> wrote:
>
>> Hi,
>>
>> I've this ReplicationController:
>>
>> apiVersion: v1
>> kind: List
>> metadata: {}
>> items:
>> - apiVersion: v1
>>   kind: ReplicationController
>>   metadata:
>> labels:
>>   metrics-infra: hawkular-cassandra
>>   name: hawkular-cassandra
>>   type: hawkular-cassandra
>> name: hawkular-cassandra-1
>>   spec:
>> replicas: 1
>> selector:
>>   name: hawkular-cassandra-1
>> template:
>>   metadata:
>> labels:
>>   metrics-infra: hawkular-cassandra
>>   name: hawkular-cassandra-1
>>   type: hawkular-cassandra
>>   spec:
>> nodeSelector:
>>   name: atomic-test-node-1
>> containers:
>> - command:
>>   - /opt/apache-cassandra/bin/cassandra-docker.sh
>>   - --cluster_name=hawkular-metrics
>>
>>   ...
>>
>>   env:
>>   - name: POD_NAMESPACE
>> valueFrom:
>>   fieldRef:
>> fieldPath: metadata.namespace
>>   - name: MEMORY_LIMIT
>> valueFrom:
>>   fieldRef:
>>
>
> Should be resourceFieldRef .
>
> resource: limits.memory
>>   - name: CPU_LIMIT
>> valueFrom:
>>   fieldRef:
>>
>
> Should be resourceFieldRef
>

With resourceFieldRef like here https://github.com/openshift/
origin-metrics/blob/master/deployer/templates/hawkular-
cassandra-node-emptydir.yaml#L83

I have this error:

Error from server: ReplicationController "hawkular-cassandra-1" is invalid:
[spec.template.spec.containers[0].env[5].valueFrom: Invalid value: "": may
not have more than one field specified at a time,
spec.template.spec.containers[0].env[6].valueFrom: Invalid value: "": may
not have more than one field specified at a time]
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Error: valueFrom fieldRef resource => ...env[5].valueFrom.fieldRef.fieldPath: Required value

2016-11-08 Thread Stéphane Klein
Hi,

I have this ReplicationController:

apiVersion: v1
kind: List
metadata: {}
items:
- apiVersion: v1
  kind: ReplicationController
  metadata:
labels:
  metrics-infra: hawkular-cassandra
  name: hawkular-cassandra
  type: hawkular-cassandra
name: hawkular-cassandra-1
  spec:
replicas: 1
selector:
  name: hawkular-cassandra-1
template:
  metadata:
labels:
  metrics-infra: hawkular-cassandra
  name: hawkular-cassandra-1
  type: hawkular-cassandra
  spec:
nodeSelector:
  name: atomic-test-node-1
containers:
- command:
  - /opt/apache-cassandra/bin/cassandra-docker.sh
  - --cluster_name=hawkular-metrics

  ...

  env:
  - name: POD_NAMESPACE
valueFrom:
  fieldRef:
fieldPath: metadata.namespace
  - name: MEMORY_LIMIT
valueFrom:
  fieldRef:
resource: limits.memory
  - name: CPU_LIMIT
valueFrom:
  fieldRef:
resource: limits.cpu
divisor: 1m
  resources:
limits:
  cpu: '1'
  memory: 500Mi
  ...

I have this error:

* spec.template.spec.containers[0].env[5].valueFrom.fieldRef.fieldPath:
Required value
* spec.template.spec.containers[0].env[6].valueFrom.fieldRef.fieldPath:
Required value

Where is my mistake ?

I found no example with valueFrom + resource in OpenShift documentation.
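As the replies in this thread point out, the working form for resource limits is resourceFieldRef rather than fieldRef (fieldRef only takes a fieldPath); for readers landing here, a corrected sketch of the two entries above:

```
  - name: MEMORY_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.memory
  - name: CPU_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.cpu
        divisor: 1m
```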

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: default node selectors

2016-11-07 Thread Stéphane Klein
To do something like: https://github.com/kubernetes/kubernetes/issues/7562 ?

2016-11-07 10:31 GMT+01:00 Andrew Lau <and...@andrewklau.com>:

> From the doc examples, node with label disktype: magnetic / ssd
>
> Is there a way to default the selector to be magnetic, while giving the
> user the option to select ssd.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How to use SCC and HostPath ?

2016-11-03 Thread Stéphane Klein
2016-11-03 15:03 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

>
>
> 2016-11-03 14:56 GMT+01:00 Clayton Coleman <ccole...@redhat.com>:
>
>> That RC is creating pods under service account cassandra.  So you need to
>> give "cassandra" access to privileged
>>
>>
> Yes ! it's here: https://gist.github.com/harobed/
> 76dc697e1658afd934c107aadc4f09a6#file-replicationcontrollers-yaml-L87
>


I removed these lines:

serviceAccount: cassandra
serviceAccountName: cassandra

Now it's working with:

$ oc adm policy add-scc-to-user privileged -z default -n openshift-infra

Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How to use SCC and HostPath ?

2016-11-03 Thread Stéphane Klein
2016-11-03 15:03 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

>
>
> 2016-11-03 14:56 GMT+01:00 Clayton Coleman <ccole...@redhat.com>:
>
>> That RC is creating pods under service account cassandra.  So you need to
>> give "cassandra" access to privileged
>>
>>
> Yes ! it's here: https://gist.github.com/harobed/
> 76dc697e1658afd934c107aadc4f09a6#file-replicationcontrollers-yaml-L87
>


How can I see this log info
https://github.com/openshift/origin/blob/85eb37b34f0657631592356d020cef5a58470f8e/pkg/security/admission/admission.go#L88
from the oc CLI?
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How to use SCC and HostPath ?

2016-11-03 Thread Stéphane Klein
2016-11-03 14:56 GMT+01:00 Clayton Coleman :

> That RC is creating pods under service account cassandra.  So you need to
> give "cassandra" access to privileged
>
>
Yes ! it's here:
https://gist.github.com/harobed/76dc697e1658afd934c107aadc4f09a6#file-replicationcontrollers-yaml-L87

Thanks!

I don't understand why I have this config.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


How to use SCC and HostPath ?

2016-11-03 Thread Stéphane Klein
Hi,

This my SCC:

$ oc get scc
NAME               PRIV    CAPS   SELINUX     RUNASUSER          FSGROUP     SUPGROUP    PRIORITY   READONLYROOTFS   VOLUMES
anyuid             false   []     MustRunAs   RunAsAny           RunAsAny    RunAsAny    10         false            [configMap downwardAPI emptyDir persistentVolumeClaim secret]
hostaccess         false   []     MustRunAs   MustRunAsRange     MustRunAs   RunAsAny               false            [configMap downwardAPI emptyDir hostPath persistentVolumeClaim secret]
hostmount-anyuid   false   []     MustRunAs   RunAsAny           RunAsAny    RunAsAny               false            [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim secret]
hostnetwork        false   []     MustRunAs   MustRunAsRange     MustRunAs   MustRunAs              false            [configMap downwardAPI emptyDir persistentVolumeClaim secret]
nonroot            false   []     MustRunAs   MustRunAsNonRoot   RunAsAny    RunAsAny               false            [configMap downwardAPI emptyDir persistentVolumeClaim secret]
privileged         true    []     RunAsAny    RunAsAny           RunAsAny    RunAsAny               false            [*]
restricted         false   []     MustRunAs   MustRunAsRange     MustRunAs   RunAsAny               false            [configMap downwardAPI emptyDir persistentVolumeClaim secret]

I see that hostaccess, hostmount-anyuid and privileged have access to
hostPath volume.

I've removed all SCC from admin user and default SA:

$ oc adm policy remove-scc-from-user anyuid -z default -n openshift-infra
$ oc adm policy remove-scc-from-user hostaccess -z default -n
openshift-infra
$ oc adm policy remove-scc-from-user hostmount-anyuid -z default -n
openshift-infra
$ oc adm policy remove-scc-from-user hostnetwork -z default -n
openshift-infra
$ oc adm policy remove-scc-from-user nonroot -z default -n openshift-infra
$ oc adm policy remove-scc-from-user privileged -z default -n
openshift-infra
$ oc adm policy remove-scc-from-user restricted -z default -n
openshift-infra
$ oc adm policy remove-scc-from-user anyuid admin -n openshift-infra
$ oc adm policy remove-scc-from-user hostaccess admin -n openshift-infra
$ oc adm policy remove-scc-from-user hostmount-anyuid admin -n
openshift-infra
$ oc adm policy remove-scc-from-user hostnetwork admin -n openshift-infra
$ oc adm policy remove-scc-from-user nonroot admin -n openshift-infra
$ oc adm policy remove-scc-from-user privileged admin -n openshift-infra
$ oc adm policy remove-scc-from-user restricted admin -n openshift-infra
$ oc adm policy add-scc-to-user privileged admin -n openshift-infra
$ oc adm policy add-scc-to-user privileged -z default -n openshift-infra

Now I add privileged SCC to admin user and default SA:

$ oc adm policy add-scc-to-user privileged admin -n openshift-infra
$ oc adm policy add-scc-to-user privileged -z default -n openshift-infra

My replication controller file:
https://gist.github.com/harobed/76dc697e1658afd934c107aadc4f09a6

Next, I create ReplicationController:

$ oc delete rc hawkular-cassandra-1
$ oc delete event --all
$ oc apply -n openshift-infra -f replicationcontrollers.yaml
$ oc get events
FIRSTSEEN   LASTSEEN   COUNT   NAME                   KIND                    SUBOBJECT   TYPE      REASON         SOURCE                      MESSAGE
3d          3d         4       hawkular-cassandra-1   ReplicationController               Warning   FailedCreate   {replication-controller }   Error creating: pods "hawkular-cassandra-1-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used]

Why? Did I set the policy on the wrong user?

Is it this bug? https://github.com/openshift/origin/issues/11153
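(The replies above resolve it: the RC creates its pods under the cassandra service account, so that account, not default, needs the SCC. A sketch reusing the same command shape:)

```
$ oc adm policy add-scc-to-user privileged -z cassandra -n openshift-infra
```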

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


How can I put logstash config files in ConfigMap ?

2016-10-25 Thread Stéphane Klein
Hi,

How can I put logstash config files in ConfigMap ?


$ tree
.
├── logstash-config
│   ├── 1_tcp_input.conf
│   ├── 2_news_filter.conf
│   └── 3_elasticsearch_ouput.conf

$ oc create configmap logstash-config --from-file=logstash-config/
error: 1_tcp_input.conf is not a valid key name for a configMap


For the moment I use a PersistentVolume to store these configuration files,
but I think it isn't the best choice.

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Why don't I have debug information in the DockerRegistry logs?

2016-10-23 Thread Stéphane Klein
I see some debug messages here
https://github.com/openshift/origin/blob/master/pkg/dockerregistry/server/token.go#L60

Why don't I see them in the container logs?
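One knob worth trying, on the assumption that the image honours docker/distribution's environment overrides (the variable name follows that convention and is untested here):

```
$ oc env dc/docker-registry REGISTRY_LOG_LEVEL=debug
```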

2016-10-23 11:41 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:

> Hi,
>
> I've some auth issue with my OpenShift DockerRegistry:
>
> I1023 08:54:24.043049   1 docker.go:118] Pushing image
> 172.30.201.95:5000/openshift/ta-s2i-base-prod:latest ...
> E1023 08:54:24.046357   1 dockerutil.go:86] push for image
> 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
> E1023 08:54:29.051732   1 dockerutil.go:86] push for image
> 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
> E1023 08:54:34.054921   1 dockerutil.go:86] push for image
> 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
> E1023 08:54:39.058377   1 dockerutil.go:86] push for image
> 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
> E1023 08:54:44.061671   1 dockerutil.go:86] push for image
> 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
> E1023 08:54:49.064716   1 dockerutil.go:86] push for image
> 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
> E1023 08:54:54.067985   1 dockerutil.go:86] push for image
> 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
> F1023 08:54:59.068275   1 builder.go:204] Error: build error: Failed
> to push image: unable to ping registry endpoint
> https://172.30.201.95:5000/v0/
> v2 ping attempt failed with error: Get https://172.30.201.95:5000/v2/:
> http: server gave HTTP response to HTTPS client
>  v1 ping attempt failed with error: Get https://172.30.201.95:5000/v1/
> _ping: http: server gave HTTP response to HTTPS client
>
> In DockerRegistry logs I have only this informations:
>
> 10.1.3.1 - - [23/Oct/2016:09:38:35 +0000] "GET /healthz HTTP/1.1" 200 0 ""
> "Go-http-client/1.1"
> 10.1.3.1 - - [23/Oct/2016:09:38:45 +0000] "GET /healthz HTTP/1.1" 200 0 ""
> "Go-http-client/1.1"
> 10.1.3.1 - - [23/Oct/2016:09:38:45 +0000] "GET /healthz HTTP/1.1" 200 0 ""
> "Go-http-client/1.1"
> 10.1.3.1 - - [23/Oct/2016:09:38:55 +0000] "GET /healthz HTTP/1.1" 200 0 ""
> "Go-http-client/1.1"
> 10.1.3.1 - - [23/Oct/2016:09:38:55 +0000] "GET /healthz HTTP/1.1" 200 0 ""
> "Go-http-client/1.1"
> 10.1.4.1 - - [23/Oct/2016:09:39:05 +0000] "GET / HTTP/1.1" 200 0 ""
> "check_http/v2.0 (monitoring-plugins 2.0)"
> 10.1.3.1 - - [23/Oct/2016:09:39:05 +0000] "GET /healthz HTTP/1.1" 200 0 ""
> "Go-http-client/1.1"
> 10.1.3.1 - - [23/Oct/2016:09:39:05 +0000] "GET /healthz HTTP/1.1" 200 0 ""
> "Go-http-client/1.1"
> 10.1.3.1 - - [23/Oct/2016:09:39:15 +0000] "GET /healthz HTTP/1.1" 200 0 ""
> "Go-http-client/1.1"
> 10.1.3.1 - - [23/Oct/2016:09:39:15 +0000] "GET /healthz HTTP/1.1" 200 0 ""
> "Go-http-client/1.1"
> 10.1.3.1 - - [23/Oct/2016:09:39:25 +0000] "GET /healthz HTTP/1.1" 200 0 ""
> "Go-http-client/1.1"
> 10.1.3.1 - - [23/Oct/2016:09:39:25 +0000] "GET /healthz HTTP/1.1" 200 0 ""
> "Go-http-client/1.1"
>
> But I've this in DockerRegistry config file:
>
> $ cat /config.yml
> version: 0.1
> log:
>   level: debug
> http:
>   addr: :5000
> storage:
>   cache:
> layerinfo: inmemory
>   filesystem:
> rootdirectory: /registry
>   delete:
> enabled: true
> auth:
>   openshift:
> realm: openshift
> middleware:
>   repository:
> - name: openshift
>   options:
> pullthrough: true
>
> Why I don't have debug information in log?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Managing OpenShift Configuration with Puppet/Ansible… what are your best practices?

2016-10-13 Thread Stéphane Klein
2016-10-12 17:41 GMT+02:00 Alex Wauck :

> we do the actual OpenShift installation using openshift-ansible (which
> Rich Megginson mentioned)
>

Thanks, but my question isn't about OpenShift cluster installation and
upgrade.

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Managing OpenShift Configuration with Puppet/Ansible… what are your best practices?

2016-10-13 Thread Stéphane Klein
2016-10-12 17:10 GMT+02:00 Rich Megginson <rmegg...@redhat.com>:

> On 10/12/2016 03:15 AM, Stéphane Klein wrote:
>>
>> * are there some Ansible or Puppet tools for OpenShift (I found nothing)?
>>
>
> https://github.com/openshift/openshift-ansible


I know it and I use it, but it only installs and upgrades the OpenShift cluster.

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: define openshift origin version (stable 1.2.0) for Ansible install

2016-06-22 Thread Stéphane Klein
Personally I use these options to pin the OpenShift version:

openshift_pkg_version=v1.2.0
openshift_image_tag=v1.2.0
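A sketch of where these lines live in the Ansible inventory (the exact value format has varied across openshift-ansible releases; RPM installs have historically wanted a leading dash such as -1.2.0, so check the example inventory shipped with your checkout):

```
[OSEv3:vars]
openshift_pkg_version=v1.2.0
openshift_image_tag=v1.2.0
```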


2016-06-22 13:24 GMT+02:00 Den Cowboy <dencow...@hotmail.com>:

> Is it possible to define an Origin version in your Ansible install?
> At the moment we have so many issues with our newest install (while we had
> 1.1.6 pretty stable for some time)
> We want to go to a stable 1.2.0
>
> Our issues:
> version = oc v1.2.0-rc1-13-g2e62fab
> So images are pulled with tag v1.2.0-rc1-13-g2e62fab, which doesn't
> exist in openshift. Okay, we have a workaround by editing the master and
> node configs and using '--image', but we don't like this approach
>
> logs on our nodes:
>  level=error msg="Error reading loginuid: open /proc/27182/loginuid: no
> such file or directory"
> level=error msg="Error reading loginuid: open /proc/27182/loginuid: no
> such file or directory"
>
> We started a mysql instance. We weren't able to use the service name to
> connect:
> mysql -u test -h mysql -p did NOT work
> mysql -u test -h 172.30.x.x (service ip) -p did work..
>
> So we have too many issues on this version of OpenShift. We've deployed it in
> a team several times and are pretty confident with the setup, and it was
> always working fine for us. But these last weird versions seem really
> bad for us.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: All my ipfailover pods are in "Entering MASTER STATE", is that normal?

2016-06-22 Thread Stéphane Klein
2016-06-22 2:17 GMT+02:00 Ram Ranganathan :

> Couldn't figure out if you have a problem or not  (or it was just a
> question) from the email thread.
>
>
It's an observation; I don't know whether it's normal, so I'm asking.
Sorry if my mail was unclear.



> What does "ip addr show" on all the nodes show?  This is the nodes where
> your ipfailover pods are running.
>

Now the failover IP is visible on all hosts.


> Are the VIPs allocated to both nodes (assuming you have from the logs),
> then it is likely some of the VRRP instances would be in master state.
>
>
Yes, it is.

Thanks

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: All my ipfailover pods are in "Entering MASTER STATE", is that normal?

2016-06-20 Thread Stéphane Klein
I meant to ask: "Is this a problem? Is it abnormal?"

2016-06-17 16:26 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:

> Hi,
>
> I've:
>
> * one cluster with 2 nodes
> * ipfailover replicas=2
>
> I execute:
>
> * oc logs ipfailover-rbx-1-bh3kn
> https://gist.github.com/harobed/2ab152ed98f95285d549cbc7af3a#file-oc-logs-ipfailover-rbx-1-bh3kn
> * oc logs ipfailover-rbx-1-mmp36
> https://gist.github.com/harobed/2ab152ed98f95285d549cbc7af3a#file-oc-logs-ipfailover-rbx-1-mmp36
>
> and I see that all ipfailover pods are in "Entering MASTER STATE".
>
> Is that normal?
>
> Best regards,
> Stéphane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


All my ipfailover pods are in "Entering MASTER STATE", is that normal?

2016-06-17 Thread Stéphane Klein
Hi,

I've:

* one cluster with 2 nodes
* ipfailover replicas=2

I execute:

* oc logs ipfailover-rbx-1-bh3kn
https://gist.github.com/harobed/2ab152ed98f95285d549cbc7af3a#file-oc-logs-ipfailover-rbx-1-bh3kn
* oc logs ipfailover-rbx-1-mmp36
https://gist.github.com/harobed/2ab152ed98f95285d549cbc7af3a#file-oc-logs-ipfailover-rbx-1-mmp36

and I see that all ipfailover pods are in "Entering MASTER STATE".

Is that normal?

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: I try to add a role to a user but it's not visible with oc policy who-can, why?

2016-06-15 Thread Stéphane Klein
Thanks, OK, it's working now.

So, how can I get the complete list of relations between « project - role -
user / group »?
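A couple of sketches that may list those relations (resource and command names shifted between Origin releases, so treat these as starting points):

```
$ oc get rolebindings -n myproject
$ oc describe policybinding ':default' -n myproject
```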

Best regards,
Stéphane

2016-06-15 18:03 GMT+02:00 Jordan Liggitt <jligg...@redhat.com>:

> "who-can" takes an API verb and resource, not a role name (something like
> `who-can get pods`). I think it's only listing the users who can do any
> verb (*) on any resource (*).
>
> On Wed, Jun 15, 2016 at 11:57 AM, Stéphane Klein <
> cont...@stephane-klein.info> wrote:
>
>> Hi,
>>
>> I try to append role to user:
>>
>> ```
>> $ oc policy add-role-to-user admin user1  -n myproject
>> $ oc policy who-can admin myproject
>> Namespace: myproject
>> Verb:  admin
>> Resource:  myproject
>>
>> Users:  admin
>> sklein
>>
>> Groups: system:cluster-admins
>> system:masters
>> ```
>>
>> I don't understand why user1 isn't in who-can user list ?
>>
>> Where is my mistake ?
>>
>> Best regards,
>> Stéphane
>> --
>> Stéphane Klein <cont...@stephane-klein.info>
>> blog: http://stephane-klein.info
>> cv : http://cv.stephane-klein.info
>> Twitter: http://twitter.com/klein_stephane
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>


-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


I try to add a role to a user but it's not visible with oc policy who-can, why?

2016-06-15 Thread Stéphane Klein
Hi,

I try to add a role to a user:

```
$ oc policy add-role-to-user admin user1  -n myproject
$ oc policy who-can admin myproject
Namespace: myproject
Verb:  admin
Resource:  myproject

Users:  admin
sklein

Groups: system:cluster-admins
system:masters
```

I don't understand why user1 isn't in the who-can user list.

Where is my mistake ?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Error: « kubernetesMasterConfig: Invalid value: null: either kubernetesMasterConfig or masterClients.externalKubernetesKubeConfig must have a value »

2016-05-31 Thread Stéphane Klein
Well, it was my mistake:

```
ubernetesMasterConfig:
```

=>

```
kubernetesMasterConfig:
```

in /etc/origin/master/master-config.yaml

:(

2016-05-31 10:29 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:

> Hi,
>
> my origin-master container work perfectly until yesterday, now when I
> start it, I've this error:
>
> May 31 10:21:36 prod-master-1.priv.tech-angels.net docker[27197]: Invalid
> MasterConfig /etc/origin/master/master-config.yaml
> May 31 10:21:36 prod-master-1.priv.tech-angels.net docker[27197]:
> kubernetesMasterConfig: Invalid value: null: either kubernetesMasterConfig
> or masterClients.externalKubernetesKubeConfig must have a value
>
> I've this values in /etc/origin/master/master-config.yaml:
>
> masterClients:
>   externalKubernetesKubeConfig: ""
>   openshiftLoopbackKubeConfig: openshift-master.kubeconfig
>
> I don't understand, I've the same value in 2 others OpenShift testing
> cluster and origin-master start with success.
>
> Do you have some idea about this issue?
>
> Best regards,
> Stéphane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Error: « kubernetesMasterConfig: Invalid value: null: either kubernetesMasterConfig or masterClients.externalKubernetesKubeConfig must have a value »

2016-05-31 Thread Stéphane Klein
Hi,

my origin-master container worked perfectly until yesterday; now when I start
it, I have this error:

May 31 10:21:36 prod-master-1.priv.tech-angels.net docker[27197]: Invalid
MasterConfig /etc/origin/master/master-config.yaml
May 31 10:21:36 prod-master-1.priv.tech-angels.net docker[27197]:
kubernetesMasterConfig: Invalid value: null: either kubernetesMasterConfig
or masterClients.externalKubernetesKubeConfig must have a value

I have these values in /etc/origin/master/master-config.yaml:

masterClients:
  externalKubernetesKubeConfig: ""
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig

I don't understand: I have the same values in 2 other OpenShift testing
clusters and origin-master starts successfully.

Do you have some idea about this issue?

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


In a cluster with 2 regions, do I need to deploy one "oadm router…" per region?

2016-05-30 Thread Stéphane Klein
Hi,

I have two regions in my cluster:

* region A with 2 nodes
* region B with 2 nodes

Do I need to deploy one "oadm router ..." per region?

For the moment I've created only one router deploymentconfig and I have:

* replicas = 4
* 2 router installed with success on region A
* 2 router in pending status

This is my error log:

  39m   2m   31   {default-scheduler }   Warning   FailedScheduling   pod (router-1-4bz0r) failed to fit in any node
fit failure on node (prod-node-a-2.example.com): PodFitsPorts
fit failure on node (prod-node-b-1.example.com): Region
fit failure on node (prod-node-b-2.example.com): Region
fit failure on node (prod-node-a-1.example.com): PodFitsPorts

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Error updating deployment [deploy] status to Pending

2016-05-20 Thread Stéphane Klein
I see this in the origin-node log:

```
mai 20 09:52:21 openshift-master-1.priv.tech-angels.net origin-node[1238]:
E0520 09:52:21.404657    1238 event.go:192] Server rejected event
'{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""},
ObjectMeta:api.ObjectMeta{Name:"openshift-master-1.priv.tech-angels.net.144f81b48964418e",
GenerateName:"", Namespace:"default", SelfLink:"", UID:"",
ResourceVersion:"7644318", Generation:0,
CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0,
loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil),
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil),
Annotations:map[string]string(nil)},
InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"
openshift-master-1.priv.tech-angels.net", UID:"
openshift-master-1.priv.tech-angels.net", APIVersion:"",
ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientDisk",
Message:"Node openshift-master-1.priv.tech-angels.net status is now:
NodeHasSufficientDisk", Source:api.EventSource{Component:"kubelet", Host:"
openshift-master-1.priv.tech-angels.net"},
FirstTimestamp:unversioned.Time{Time:time.Time{sec:63599127816, nsec:0,
loc:(*time.Location)(0x56a0960)}},
LastTimestamp:unversioned.Time{Time:time.Time{sec:63599327541,
nsec:367959378, loc:(*time.Location)(0x56a0960)}}, Count:2,
Type:"Normal"}': 'events
"openshift-master-1.priv.tech-angels.net.144f81b48964418e" not found' (will
not retry!)
mai 20 10:01:58 openshift-master-1.priv.tech-angels.net origin-node[1238]:
E0520 10:01:58.241064    1238 kubelet.go:2654] Error updating node status,
will retry: error #0: client: etcd member
https://openshift-etcd-1.priv.tech-angels.net:2379 has no leader
```
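Given the "etcd member ... has no leader" message, checking etcd health directly may help; a sketch with the v2 etcdctl of that era (the certificate paths are assumptions, adjust to your install):

```
etcdctl --ca-file /etc/etcd/ca.crt \
        --cert-file /etc/etcd/peer.crt \
        --key-file /etc/etcd/peer.key \
        --peers https://openshift-etcd-1.priv.tech-angels.net:2379 \
        cluster-health
```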


2016-05-20 11:23 GMT+02:00 Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com>:

> Hi,
>
> Yes the node is ready. I have tried to unschedule it, and evacuate pods,
> it's not working either. I  don't use any PV in the test I'm doing. The
> other node seems to have the same problem, so I guess it's somewhere else
> than the node. Maybe corrupted data in etcd?
> This cluster has been working for months now, I don't understand why it's
> suddenly failing. I have absolutely no clue.
> Thanks
>
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Is there a command to see on which node a pod is running?

2016-05-17 Thread Stéphane Klein
Yes :

```
$ oc describe pod docker-registry-3-4bizw
Name:docker-registry-3-4bizw
Namespace:default
Node:atomic-test-node-2.priv.tech-angels.net/172.29.20.211
```


2016-05-17 10:11 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:

> Hi,
>
> is there a command to see on which node a pod is running?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Is there a command to see on which node a pod is running?

2016-05-17 Thread Stéphane Klein
Hi,

is there a command to see on which node a pod is running?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Error authenticating "admin" with provider "my_htpasswd_auth": user "admin" cannot be claimed by identity "my_htpasswd_auth:admin" because it is already mapped to [allow_all:admin]

2016-05-02 Thread Stéphane Klein
Note:

If you have this message:

```
login.go:162] Error authenticating "admin" with provider
"my_htpasswd_auth": user "admin" cannot be claimed by identity
"my_htpasswd_auth:admin" because it is already mapped to [allow_all:admin]
```

you need to delete the user before authenticating in the console:

```
oc delete user admin
```
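If deleting the user alone doesn't clear the mapping, the stale identity object may also need removing; a sketch, taking the identity name from the error above:

```
oc get identity
oc delete identity allow_all:admin
```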

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


How can I fix « Error syncing pod, skipping: failed to "TeardownNetwork" for "docker-registry-1-deploy_default" with TeardownNetworkError: "Failed to teardown network for pod \"b138b1c5-07d8-11e6-a2ef-525400ffc199\" using network plugins \"redhat/openshift-ovs-multitenant\": exit status 1" » ?

2016-04-22 Thread Stéphane Klein
Hi,

I have this error in the events log:

Error syncing pod, skipping: failed to "TeardownNetwork" for
"docker-registry-1-deploy_default" with TeardownNetworkError: "Failed
to teardown network for pod \"b138b1c5-07d8-11e6-a2ef-525400ffc199\"
using network plugins \"redhat/openshift-ovs-multitenant\": exit
status 1"

I see this issue : https://bugzilla.redhat.com/show_bug.cgi?id=1320430
And this patch : https://github.com/openshift/origin/pull/8468

How can I fix this bug before the next OpenShift release?

Context:

* OS type: Atomic Project
* Installation type: container

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: I wonder if there are some commands to list and remove port forwards?

2016-04-07 Thread Stéphane Klein
This is one solution:

- apiVersion: v1
  kind: Service
  metadata:
name: mysql-service
  spec:
ports:
  - name: mysql
protocol: TCP
port: 3306
targetPort: 3306
selector:
  name: mysql-service
externalIPs:
  - PUBLIC_IP


2016-04-07 11:22 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:

> It's a way ? http://kubernetes.io/docs/user-guide/services/#external-ips
>
> 2016-04-06 17:19 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:
>
>> 2016-04-06 17:08 GMT+02:00 Andy Goldstein <agold...@redhat.com>:
>>
>>> Port forwarding is a temporary operation - it stays alive as long as you
>>> keep the `oc port-forward` command running. Does this help answer your
>>> question?
>>>
>>
>> Is there nothing to configure permanent port forwarding? I need to
>> configure external access to a MySQL server in one pod.
>>
>
>
>
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: I wonder if there are some commands to list and remove port forwards?

2016-04-06 Thread Stéphane Klein
2016-04-06 17:08 GMT+02:00 Andy Goldstein :

> Port forwarding is a temporary operation - it stays alive as long as you
> keep the `oc port-forward` command running. Does this help answer your
> question?
>

Is there nothing to configure permanent port forwarding? I need to
configure external access to a MySQL server in one pod.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


I wonder if there are some commands to list and remove port forwards?

2016-04-06 Thread Stéphane Klein
Hi,

I see the command

oc port-forward

in https://docs.openshift.org/latest/dev_guide/port_forwarding.html

I wonder if there are some commands to:

* list port forwards?
* remove port forwards?
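For context, a minimal usage sketch (the pod name is hypothetical; the forward lives only as long as the command runs, and older clients spelled it `oc port-forward -p mysql-1-abcde 13306:3306`):

```
# forward local port 13306 to port 3306 of the pod
oc port-forward mysql-1-abcde 13306:3306
```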

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: openshift start => doesn't generate master-config.yaml; openshift start master --write-config => generates master-config.yaml. Is it a bug or a feature?

2016-02-24 Thread Stéphane Klein
2016-02-24 13:38 GMT+01:00 Den Cowboy :

> Have you checked /etc/origin/master/
> That's where the config files are generated in Origin.
>

Nothing in /etc/origin/master/

Is it a bug?


>
> with the option --write-config you're going to write your own configfiles.
> I assume you were watching to the documentation of OpenShift 3.0
> But when you're working with origin I would recommend the Origin
> documentation: https://docs.openshift.org/latest/welcome/index.html
>

I already looked at this documentation :)
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


[no subject]

2016-02-24 Thread Stéphane Klein
Hi,

I've created a PersistentVolumeClaim before my PersistentVolume resource.

Now, I have this:

```
# oc get pv
NAME                   LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
my-persistent-volume             1Gi        RWO           Available                       18h

# oc get pvc
NAME     LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
foobar             Pending                                      4d
```

My PersistentVolume config:
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /my-persistent-volume/
```

My PersistentVolumeClaim config:

```
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: foobar
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 512Mi
```

How can I tell my persistent volume claim to retry binding?
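One detail visible above: the volume offers ReadWriteOnce while the claim requests ReadWriteMany, so they can never bind. A sketch of a claim that should match this volume, assuming ReadWriteOnce is acceptable:

```
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: foobar
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 512Mi
```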

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


openshift start => doesn't generate master-config.yaml; openshift start master --write-config => generates master-config.yaml. Is it a bug or a feature?

2016-02-24 Thread Stéphane Klein
Hi,

when I execute :

```
# openshift start
# ls openshift.local.config/master/master-config.yaml
ls: cannot access openshift.local.config/master/master-config.yaml: No such
file or directory
...
```

The "master-config.yaml" config file isn't generated.

Same result with :

```
# openshift start master
# ls openshift.local.config/master/master-config.yaml
ls: cannot access openshift.local.config/master/master-config.yaml: No such
file or directory
```

But if I execute :

```
# openshift start master --write-config=openshift.local.config/master/
# ls openshift.local.config/master/master-config.yaml
openshift.local.config/master/master-config.yaml
```

The config file is present.
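For completeness, a sketch of pointing the master at the generated file:

```
# openshift start master --config=openshift.local.config/master/master-config.yaml
```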

Is it a bug or a feature? If it's a feature, I don't understand why.

This is the version :

```
# openshift version
openshift v1.1.3
kubernetes v1.2.0-origin
etcd 2.2.2+git
```

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How to import private image from hub.docker to ImageStream ?

2016-02-23 Thread Stéphane Klein
I've tried to add:

```
# oc secrets add serviceaccount/default secrets/hub.docker.io --pull
# oc secrets add serviceaccount/default secrets/hub.docker.io --for=pull
# oc secrets add serviceaccount/default secrets/hub.docker.io
# oc secrets add serviceaccount/deployer secrets/hub.docker.io
```

I still get:

```
# oc import-image api
The import completed successfully.

Name:             api
Created:          3 hours ago
Labels:
Annotations:      openshift.io/image.dockerRepositoryCheck=2016-02-23T09:14:34Z
Docker Pull Spec: 172.30.27.206:5000/foobar/api

Tag      Spec   Created       PullSpec   Image
latest   api    3 hours ago              import failed: you may not have access to the Docker image "api"
```

Best regards,
Stéphane

2016-02-23 12:48 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

> 2016-02-23 11:05 GMT+01:00 Maciej Szulik <maszu...@redhat.com>:
>
>> Have you checked this doc:
>>
>>
>> https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#private-registries
>>
>>
>>
> Thanks for this url :)
>
> I've created my hub.docker.io secret with (I have replaced the placeholders
> with my credentials):
>
> ```
> oc secrets new-dockercfg SECRET --docker-server=DOCKER_REGISTRY_SERVER
> --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD
> --docker-email=DOCKER_EMAIL
> ```
>
> Now I have:
>
> ```
> # oc get secret hub.docker.io -o json
> {
> "kind": "Secret",
> "apiVersion": "v1",
> "metadata": {
> "name": "hub.docker.io",
> "namespace": "foobar-staging",
> "selfLink": "/api/v1/namespaces/foobar-staging/secrets/
> hub.docker.io",
> "uid": "3b1b2aa4-da15-11e5-b613-080027143490",
> "resourceVersion": "19813",
> "creationTimestamp": "2016-02-23T10:07:22Z"
> },
> "data": {
> ".dockercfg": ".."
> },
> "type": "kubernetes.io/dockercfg"
> }
> ```
>
> When I execute :
>
> ```
> # oc import-image api
> The import completed successfully.
>
> Name:api
> Created:2 hours ago
> Labels:
> Annotations:
> openshift.io/image.dockerRepositoryCheck=2016-02-23T09:14:34Z
> Docker Pull Spec:172.30.27.206:5000/foobar-staging/api
>
> TagSpecCreatedPullSpecImage
> latestapi2 hours agoimport failed: you may not have
> access to the Docker image "api"
> ```
>
> Where is my mistake ? how can I say to my ImageStream to use my
> hub.docker.io secret ?
>
> Best regards,
> Stéphane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How to import private image from hub.docker to ImageStream ?

2016-02-23 Thread Stéphane Klein
2016-02-23 11:05 GMT+01:00 Maciej Szulik :

> Have you checked this doc:
>
>
> https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#private-registries
>
>
>
Thanks for this url :)

I've created my hub.docker.io secret with (I have replaced the placeholders
with my credentials):

```
oc secrets new-dockercfg SECRET --docker-server=DOCKER_REGISTRY_SERVER
--docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD
--docker-email=DOCKER_EMAIL
```

Now I have:

```
# oc get secret hub.docker.io -o json
{
"kind": "Secret",
"apiVersion": "v1",
"metadata": {
"name": "hub.docker.io",
"namespace": "foobar-staging",
"selfLink": "/api/v1/namespaces/foobar-staging/secrets/hub.docker.io
",
"uid": "3b1b2aa4-da15-11e5-b613-080027143490",
"resourceVersion": "19813",
"creationTimestamp": "2016-02-23T10:07:22Z"
},
"data": {
".dockercfg": ".."
},
"type": "kubernetes.io/dockercfg"
}
```

When I execute :

```
# oc import-image api
The import completed successfully.

Name:             api
Created:          2 hours ago
Labels:
Annotations:      openshift.io/image.dockerRepositoryCheck=2016-02-23T09:14:34Z
Docker Pull Spec: 172.30.27.206:5000/foobar-staging/api

Tag      Spec   Created       PullSpec   Image
latest   api    2 hours ago              import failed: you may not have access to the Docker image "api"
```

Where is my mistake? How can I tell my ImageStream to use my
hub.docker.io secret?
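One assumption worth checking: for Docker Hub, the --docker-server value generally needs to be Hub's endpoint rather than an arbitrary name; a sketch of re-creating the secret (credential placeholders as above):

```
oc secrets new-dockercfg hub.docker.io \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL
```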

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift with Docker method installation on CentOS, error : deployer.go:65] couldn't get deployment default/docker-registry-1: Get https://10.0.2.15:8443/api/v1/namespaces/default/replicationcontrollers/docker-registry-1: dial tcp 10.0.2.15:8443: no route to host

2016-02-10 Thread Stéphane Klein
Yes, that was it, thanks.

my bug « deployer.go:65] couldn't get deployment default/docker-registry-1:
Get
https://10.0.2.15:8443/api/v1/namespaces/default/replicationcontrollers/docker-registry-1:
dial tcp 10.0.2.15:8443: no route to host » is gone.

Well, now I've another bug but I'll create another subject.

Best regards,
Stéphane
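
For the record, on a firewalld-managed host (the FWDI_public/IN_public
chains in the dumps below come from firewalld), a persistent way to open
the port would presumably be the firewall-cmd variant rather than a raw
iptables rule:

```
# Open the master API port persistently (assumes firewalld manages the
# rule set, as on a stock CentOS 7 install)
firewall-cmd --permanent --add-port=8443/tcp
firewall-cmd --reload
```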

2016-02-10 13:58 GMT+01:00 Skarbek, John <john.skar...@ca.com>:

> You don’t have any rules for port 8443. We would need to find out which
> chain the rule should go inside. But something *similar* to this should
> fix the problem:
>
> iptables -I INPUT -p tcp --dport 8443 -j ACCEPT
>
> Though I’d be more concerned as to why the rule wasn’t put in place from
> the get-go.
>
>
>
> --
> John Skarbek
>
> On February 10, 2016 at 05:59:16, Stéphane Klein (
> cont...@stephane-klein.info) wrote:
>
> Do you see my mistake ? It's the default iptable config on CentOS.
>
> 2016-02-10 11:48 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:
>
>>
>>
>> 2016-02-10 11:44 GMT+01:00 Clayton Coleman <ccole...@redhat.com>:
>>
>>> Firewall it is :)
>>>
>>>
>> ```
>> iptables -L
>> Chain INPUT (policy ACCEPT)
>> target prot opt source   destination
>> ACCEPT all  --  anywhere anywhere ctstate
>> RELATED,ESTABLISHED
>> ACCEPT all  --  anywhere anywhere
>> INPUT_direct  all  --  anywhere anywhere
>> INPUT_ZONES_SOURCE  all  --  anywhere anywhere
>> INPUT_ZONES  all  --  anywhere anywhere
>> ACCEPT icmp --  anywhere anywhere
>> REJECT all  --  anywhere anywhere reject-with
>> icmp-host-prohibited
>>
>> Chain FORWARD (policy ACCEPT)
>> target prot opt source   destination
>> DOCKER all  --  anywhere anywhere
>> ACCEPT all  --  anywhere anywhere ctstate
>> RELATED,ESTABLISHED
>> ACCEPT all  --  anywhere anywhere
>> ACCEPT all  --  anywhere anywhere
>> ACCEPT all  --  anywhere anywhere ctstate
>> RELATED,ESTABLISHED
>> ACCEPT all  --  anywhere anywhere
>> FORWARD_direct  all  --  anywhere anywhere
>> FORWARD_IN_ZONES_SOURCE  all  --  anywhere anywhere
>> FORWARD_IN_ZONES  all  --  anywhere anywhere
>> FORWARD_OUT_ZONES_SOURCE  all  --  anywhere anywhere
>> FORWARD_OUT_ZONES  all  --  anywhere anywhere
>> ACCEPT icmp --  anywhere anywhere
>> REJECT all  --  anywhere anywhere reject-with
>> icmp-host-prohibited
>>
>> Chain OUTPUT (policy ACCEPT)
>> target prot opt source   destination
>> OUTPUT_direct  all  --  anywhere anywhere
>>
>> Chain DOCKER (1 references)
>> target prot opt source   destination
>>
>> Chain FORWARD_IN_ZONES (1 references)
>> target prot opt source   destination
>> FWDI_public  all  --  anywhere anywhere[goto]
>> FWDI_public  all  --  anywhere anywhere[goto]
>>
>> Chain FORWARD_IN_ZONES_SOURCE (1 references)
>> target prot opt source   destination
>>
>> Chain FORWARD_OUT_ZONES (1 references)
>> target prot opt source   destination
>> FWDO_public  all  --  anywhere anywhere[goto]
>> FWDO_public  all  --  anywhere anywhere[goto]
>>
>> Chain FORWARD_OUT_ZONES_SOURCE (1 references)
>> target prot opt source   destination
>>
>> Chain FORWARD_direct (1 references)
>> target prot opt source   destination
>>
>> Chain FWDI_public (2 references)
>> target prot opt source   destination
>> FWDI_public_log  all  --  anywhere anywhere
>> FWDI_public_deny  all  --  anywhere anywhere
>> FWDI_public_allow  all  --  anywhere anywhere
>>
>> Chain FWDI_public_allow (1 references)
>> target prot opt source   destination
>>
>> Chain FWDI_public_deny (1 references)
>> target prot opt source   destination
>>
>> Chain FWDI_public_log (1 references)
>> target prot opt source   destination
>>
>> Chain FWDO_public (2 references)
>> target prot opt source   destination
>> FWDO_public_log  all  --  anywhere 

Re: OpenShift with Docker method installation on CentOS, error : deployer.go:65] couldn't get deployment default/docker-registry-1: Get https://10.0.2.15:8443/api/v1/namespaces/default/replicationcont

2016-02-10 Thread Stéphane Klein
Same error with the Docker Origin image version 1.1.2.

More info:

```
[root@localhost vagrant]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP qlen 1000
link/ether 08:00:27:aa:92:14 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
   valid_lft 83250sec preferred_lft 83250sec
inet6 fe80::a00:27ff:feaa:9214/64 scope link
   valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
state DOWN
link/ether 02:42:55:74:29:18 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
   valid_lft forever preferred_lft forever
inet6 fe80::42:55ff:fe74:2918/64 scope link
   valid_lft forever preferred_lft forever
[root@localhost vagrant]# brctl show
bridge name     bridge id           STP enabled     interfaces
docker0         8000.024255742918   no
```


2016-02-10 8:34 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

> Hi,
>
> I've installed OpenShift with Docker method (
> https://docs.openshift.org/latest/getting_started/administrators.html#running-in-a-docker-container
> ) on CentOS Linux release 7.2.1511 (Core)
>
> ```
> # docker run -d --name "origin" --privileged --pid=host --net=host -v
> /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v
> /var/lib/docker:/var/lib/docker:rw -v
> /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes
> openshift/origin:v1.1.1.1 start
> # docker exec -it origin bash
> # oc login -u system:admin
> # oc project default
> # oadm registry
> --credentials=./openshift.local.config/master/openshift-registry.kubeconfig
> # oc logs docker-registry-1-deploy
> F0209 22:00:19.098482   1 deployer.go:65] couldn't get deployment
> default/docker-registry-1: Get
> https://10.0.2.15:8443/api/v1/namespaces/default/replicationcontrollers/docker-registry-1:
> dial tcp 10.0.2.15:8443: no route to host
> ```
>
> But ```10.0.2.15:8443``` is accessible :
>
> ```
> # curl
> https://10.0.2.15:8443/api/v1/namespaces/default/replicationcontrollers/docker-registry-1
> --insecure
> {
>   "kind": "Status",
>   "apiVersion": "v1",
>   "metadata": {},
>   "status": "Failure",
>   "message": "User \"system:anonymous\" cannot get replicationcontrollers
> in project \"default\"",
>   "reason": "Forbidden",
>   "details": {
> "name": "docker-registry-1",
>     "kind": "replicationcontrollers"
>   },
>   "code": 403
> }
> ```
>
> Where is my mistake ?
>
> Best regards,
> Stéphane
>
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift with Docker method installation on CentOS, error : deployer.go:65] couldn't get deployment default/docker-registry-1: Get https://10.0.2.15:8443/api/v1/namespaces/default/replicationcont

2016-02-10 Thread Stéphane Klein
2016-02-10 11:44 GMT+01:00 Clayton Coleman :

> Firewall it is :)
>
>
```
iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source   destination
ACCEPT all  --  anywhere anywhere ctstate
RELATED,ESTABLISHED
ACCEPT all  --  anywhere anywhere
INPUT_direct  all  --  anywhere anywhere
INPUT_ZONES_SOURCE  all  --  anywhere anywhere
INPUT_ZONES  all  --  anywhere anywhere
ACCEPT icmp --  anywhere anywhere
REJECT all  --  anywhere anywhere reject-with
icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target prot opt source   destination
DOCKER all  --  anywhere anywhere
ACCEPT all  --  anywhere anywhere ctstate
RELATED,ESTABLISHED
ACCEPT all  --  anywhere anywhere
ACCEPT all  --  anywhere anywhere
ACCEPT all  --  anywhere anywhere ctstate
RELATED,ESTABLISHED
ACCEPT all  --  anywhere anywhere
FORWARD_direct  all  --  anywhere anywhere
FORWARD_IN_ZONES_SOURCE  all  --  anywhere anywhere
FORWARD_IN_ZONES  all  --  anywhere anywhere
FORWARD_OUT_ZONES_SOURCE  all  --  anywhere anywhere
FORWARD_OUT_ZONES  all  --  anywhere anywhere
ACCEPT icmp --  anywhere anywhere
REJECT all  --  anywhere anywhere reject-with
icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination
OUTPUT_direct  all  --  anywhere anywhere

Chain DOCKER (1 references)
target prot opt source   destination

Chain FORWARD_IN_ZONES (1 references)
target prot opt source   destination
FWDI_public  all  --  anywhere anywhere[goto]
FWDI_public  all  --  anywhere anywhere[goto]

Chain FORWARD_IN_ZONES_SOURCE (1 references)
target prot opt source   destination

Chain FORWARD_OUT_ZONES (1 references)
target prot opt source   destination
FWDO_public  all  --  anywhere anywhere[goto]
FWDO_public  all  --  anywhere anywhere[goto]

Chain FORWARD_OUT_ZONES_SOURCE (1 references)
target prot opt source   destination

Chain FORWARD_direct (1 references)
target prot opt source   destination

Chain FWDI_public (2 references)
target prot opt source   destination
FWDI_public_log  all  --  anywhere anywhere
FWDI_public_deny  all  --  anywhere anywhere
FWDI_public_allow  all  --  anywhere anywhere

Chain FWDI_public_allow (1 references)
target prot opt source   destination

Chain FWDI_public_deny (1 references)
target prot opt source   destination

Chain FWDI_public_log (1 references)
target prot opt source   destination

Chain FWDO_public (2 references)
target prot opt source   destination
FWDO_public_log  all  --  anywhere anywhere
FWDO_public_deny  all  --  anywhere anywhere
FWDO_public_allow  all  --  anywhere anywhere

Chain FWDO_public_allow (1 references)
target prot opt source   destination

Chain FWDO_public_deny (1 references)
target prot opt source   destination

Chain FWDO_public_log (1 references)
target prot opt source   destination

Chain INPUT_ZONES (1 references)
target prot opt source   destination
IN_public  all  --  anywhere anywhere[goto]
IN_public  all  --  anywhere anywhere[goto]

Chain INPUT_ZONES_SOURCE (1 references)
target prot opt source   destination

Chain INPUT_direct (1 references)
target prot opt source   destination

Chain IN_public (2 references)
target prot opt source   destination
IN_public_log  all  --  anywhere anywhere
IN_public_deny  all  --  anywhere anywhere
IN_public_allow  all  --  anywhere anywhere

Chain IN_public_allow (1 references)
target prot opt source   destination
ACCEPT tcp  --  anywhere anywhere tcp dpt:ssh
ctstate NEW

Chain IN_public_deny (1 references)
target prot opt source   destination

Chain IN_public_log (1 references)
target prot opt source   destination

Chain OUTPUT_direct (1 references)
target prot opt source   destination
[root@localhost vagrant]#
```
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift with Docker method installation on CentOS, error : deployer.go:65] couldn't get deployment default/docker-registry-1: Get https://10.0.2.15:8443/api/v1/namespaces/default/replicationcont

2016-02-10 Thread Stéphane Klein
With ```docker logs -f origin```, I get this:

```
I0210 00:59:33.5232368650 replication_controller.go:409] Replication
Controller has been deleted default/docker-registry-1
E0210 00:59:45.2384358650 event.go:188] Server rejected event
'{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""},
ObjectMeta:api.ObjectMeta{Name:"docker-registry.14316ecde4c4ec88",
GenerateName:"", Namespace:"default", SelfLink:"", UID:"",
ResourceVersion:"", Generation:0,
CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0,
loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil),
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil),
Annotations:map[string]string(nil)},
InvolvedObject:api.ObjectReference{Kind:"DeploymentConfig",
Namespace:"default", Name:"docker-registry",
UID:"932c478a-cf91-11e5-ae8f-080027aa9214", APIVersion:"v1",
ResourceVersion:"588", FieldPath:""}, Reason:"DeploymentCreated",
Message:"Created new deployment \"docker-registry-1\" for version 1",
Source:api.EventSource{Component:"deploymentconfig-controller", Host:""},
FirstTimestamp:unversioned.Time{Time:time.Time{sec:63590662785,
nsec:202842760, loc:(*time.Location)(0x508f920)}},
LastTimestamp:unversioned.Time{Time:time.Time{sec:63590662785,
nsec:202842760, loc:(*time.Location)(0x508f920)}}, Count:1,
Type:"Normal"}': 'Event "docker-registry.14316ecde4c4ec88" is invalid:
involvedObject.kind: invalid value 'DeploymentConfig', Details: couldn't
check whether namespace is allowed: no kind named {"" "DeploymentConfig"}
is registered in versions ["v1"]' (will not retry!)
```

Maybe this can help you?


2016-02-10 10:31 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

> Same error with Docker Origin image version 1.1.2
>
> More info :
>
> ```
> [root@localhost vagrant]# ip addr
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 08:00:27:aa:92:14 brd ff:ff:ff:ff:ff:ff
> inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
>valid_lft 83250sec preferred_lft 83250sec
> inet6 fe80::a00:27ff:feaa:9214/64 scope link
>valid_lft forever preferred_lft forever
> 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
> state DOWN
> link/ether 02:42:55:74:29:18 brd ff:ff:ff:ff:ff:ff
> inet 172.17.42.1/16 scope global docker0
>valid_lft forever preferred_lft forever
> inet6 fe80::42:55ff:fe74:2918/64 scope link
>valid_lft forever preferred_lft forever
> [root@localhost vagrant]# brctl show
> bridge name     bridge id           STP enabled     interfaces
> docker0         8000.024255742918   no
> ```
>
>
> 2016-02-10 8:34 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:
>
>> Hi,
>>
>> I've installed OpenShift with Docker method (
>> https://docs.openshift.org/latest/getting_started/administrators.html#running-in-a-docker-container
>> ) on CentOS Linux release 7.2.1511 (Core)
>>
>> ```
>> # docker run -d --name "origin" --privileged --pid=host --net=host -v
>> /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v
>> /var/lib/docker:/var/lib/docker:rw -v
>> /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes
>> openshift/origin:v1.1.1.1 start
>> # docker exec -it origin bash
>> # oc login -u system:admin
>> # oc project default
>> # oadm registry
>> --credentials=./openshift.local.config/master/openshift-registry.kubeconfig
>> # oc logs docker-registry-1-deploy
>> F0209 22:00:19.098482   1 deployer.go:65] couldn't get deployment
>> default/docker-registry-1: Get
>> https://10.0.2.15:8443/api/v1/namespaces/default/replicationcontrollers/docker-registry-1:
>> dial tcp 10.0.2.15:8443: no route to host
>> ```
>>
>> But ```10.0.2.15:8443``` is accessible :
>>
>> ```
>> # curl
>> https://10.0.2.15:8443/api/v1/namespaces/default/replicationcontrollers/docker-registry-1
>> --insecure
>> {
>>   "kind": "Status",
>>   "apiVersion": "v1",
>>   "metadata": {},
>>   "status": "Failure",
>>   "message"

Re: OpenShift with Docker method installation on CentOS, error : deployer.go:65] couldn't get deployment default/docker-registry-1: Get https://10.0.2.15:8443/api/v1/namespaces/default/replicationcont

2016-02-10 Thread Stéphane Klein
```
root@d2a1ccf29589:/# nmap 10.0.2.15

Starting Nmap 6.40 ( http://nmap.org ) at 2016-02-10 01:40 UTC
Nmap scan report for 10.0.2.15
Host is up (0.92s latency).
Not shown: 999 filtered ports
PORT   STATE SERVICE
22/tcp open  ssh

Nmap done: 1 IP address (1 host up) scanned in 19.78 seconds
```

Only the ssh port is open :(
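
A quick way to separate "nothing is listening on 8443" from "the firewall
rejects it" (a sketch; both are standard CentOS 7 tools). Note that
`telnet: No route to host` is consistent with the `reject-with
icmp-host-prohibited` rule in the INPUT chain rather than an actual
routing problem:

```
# Is anything listening on 8443 on the host?
ss -tlnp | grep 8443

# Which INPUT rules would a packet to 8443 hit?
iptables -L INPUT -n --line-numbers
```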


2016-02-10 11:35 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

> More info, I can ping, but port 8443 isn't open ?
>
> ```
> root@d2a1ccf29589:/# ping 10.0.2.15
> PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
> 64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.059 ms
> root@d2a1ccf29589:/# telnet 10.0.2.15 8443
> Trying 10.0.2.15...
> telnet: Unable to connect to remote host: No route to host
> ```
>
>
> 2016-02-10 11:10 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:
>
>>
>>
>> 2016-02-10 10:48 GMT+01:00 Clayton Coleman <ccole...@redhat.com>:
>>
>>> Might be a firewall rule - try connecting to 10.0.2.15 from a random
>>> docker container on the machine.
>>>
>>>
>>
>> Yes I can :
>>
>> ```
>> [root@localhost vagrant]# docker run -t -i ubuntu /bin/bash
>> root@b4e25a365497:/# ping 10.0.2.15
>> PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
>> 64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.084 ms
>> 64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.079 ms
>> ```
>>
>> Clayton, are you on irc ? If yes, can I contact you to help me to debug
>> that ? (I'm harobed3)
>>
>> Best regards,
>> Stéphane
>>
>
>
>
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift with Docker method installation on CentOS, error : deployer.go:65] couldn't get deployment default/docker-registry-1: Get https://10.0.2.15:8443/api/v1/namespaces/default/replicationcont

2016-02-10 Thread Stéphane Klein
2016-02-10 10:48 GMT+01:00 Clayton Coleman :

> Might be a firewall rule - try connecting to 10.0.2.15 from a random
> docker container on the machine.
>
>

Yes, I can:

```
[root@localhost vagrant]# docker run -t -i ubuntu /bin/bash
root@b4e25a365497:/# ping 10.0.2.15
PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.079 ms
```

Clayton, are you on IRC? If so, can I contact you to help me debug
this? (I'm harobed3)

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift with Docker method installation on CentOS, error : deployer.go:65] couldn't get deployment default/docker-registry-1: Get https://10.0.2.15:8443/api/v1/namespaces/default/replicationcont

2016-02-10 Thread Stéphane Klein
More info: I can ping, but port 8443 isn't open?

```
root@d2a1ccf29589:/# ping 10.0.2.15
PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.059 ms
root@d2a1ccf29589:/# telnet 10.0.2.15 8443
Trying 10.0.2.15...
telnet: Unable to connect to remote host: No route to host
```


2016-02-10 11:10 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

>
>
> 2016-02-10 10:48 GMT+01:00 Clayton Coleman <ccole...@redhat.com>:
>
>> Might be a firewall rule - try connecting to 10.0.2.15 from a random
>> docker container on the machine.
>>
>>
>
> Yes I can :
>
> ```
> [root@localhost vagrant]# docker run -t -i ubuntu /bin/bash
> root@b4e25a365497:/# ping 10.0.2.15
> PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
> 64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.084 ms
> 64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.079 ms
> ```
>
> Clayton, are you on irc ? If yes, can I contact you to help me to debug
> that ? (I'm harobed3)
>
> Best regards,
> Stéphane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift with Docker method installation on CentOS, error : deployer.go:65] couldn't get deployment default/docker-registry-1: Get https://10.0.2.15:8443/api/v1/namespaces/default/replicationcont

2016-02-10 Thread Stéphane Klein
Do you see my mistake? It's the default iptables config on CentOS.

2016-02-10 11:48 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

>
>
> 2016-02-10 11:44 GMT+01:00 Clayton Coleman <ccole...@redhat.com>:
>
>> Firewall it is :)
>>
>>
> ```
> iptables -L
> Chain INPUT (policy ACCEPT)
> target prot opt source   destination
> ACCEPT all  --  anywhere anywhere ctstate
> RELATED,ESTABLISHED
> ACCEPT all  --  anywhere anywhere
> INPUT_direct  all  --  anywhere anywhere
> INPUT_ZONES_SOURCE  all  --  anywhere anywhere
> INPUT_ZONES  all  --  anywhere anywhere
> ACCEPT icmp --  anywhere anywhere
> REJECT all  --  anywhere anywhere reject-with
> icmp-host-prohibited
>
> Chain FORWARD (policy ACCEPT)
> target prot opt source   destination
> DOCKER all  --  anywhere anywhere
> ACCEPT all  --  anywhere anywhere ctstate
> RELATED,ESTABLISHED
> ACCEPT all  --  anywhere anywhere
> ACCEPT all  --  anywhere anywhere
> ACCEPT all  --  anywhere anywhere ctstate
> RELATED,ESTABLISHED
> ACCEPT all  --  anywhere anywhere
> FORWARD_direct  all  --  anywhere anywhere
> FORWARD_IN_ZONES_SOURCE  all  --  anywhere anywhere
> FORWARD_IN_ZONES  all  --  anywhere anywhere
> FORWARD_OUT_ZONES_SOURCE  all  --  anywhere anywhere
> FORWARD_OUT_ZONES  all  --  anywhere anywhere
> ACCEPT icmp --  anywhere anywhere
> REJECT all  --  anywhere anywhere reject-with
> icmp-host-prohibited
>
> Chain OUTPUT (policy ACCEPT)
> target prot opt source   destination
> OUTPUT_direct  all  --  anywhere anywhere
>
> Chain DOCKER (1 references)
> target prot opt source   destination
>
> Chain FORWARD_IN_ZONES (1 references)
> target prot opt source   destination
> FWDI_public  all  --  anywhere anywhere[goto]
> FWDI_public  all  --  anywhere anywhere[goto]
>
> Chain FORWARD_IN_ZONES_SOURCE (1 references)
> target prot opt source   destination
>
> Chain FORWARD_OUT_ZONES (1 references)
> target prot opt source   destination
> FWDO_public  all  --  anywhere anywhere[goto]
> FWDO_public  all  --  anywhere anywhere[goto]
>
> Chain FORWARD_OUT_ZONES_SOURCE (1 references)
> target prot opt source   destination
>
> Chain FORWARD_direct (1 references)
> target prot opt source   destination
>
> Chain FWDI_public (2 references)
> target prot opt source   destination
> FWDI_public_log  all  --  anywhere anywhere
> FWDI_public_deny  all  --  anywhere anywhere
> FWDI_public_allow  all  --  anywhere anywhere
>
> Chain FWDI_public_allow (1 references)
> target prot opt source   destination
>
> Chain FWDI_public_deny (1 references)
> target prot opt source   destination
>
> Chain FWDI_public_log (1 references)
> target prot opt source   destination
>
> Chain FWDO_public (2 references)
> target prot opt source   destination
> FWDO_public_log  all  --  anywhere anywhere
> FWDO_public_deny  all  --  anywhere anywhere
> FWDO_public_allow  all  --  anywhere anywhere
>
> Chain FWDO_public_allow (1 references)
> target prot opt source   destination
>
> Chain FWDO_public_deny (1 references)
> target prot opt source   destination
>
> Chain FWDO_public_log (1 references)
> target prot opt source   destination
>
> Chain INPUT_ZONES (1 references)
> target prot opt source   destination
> IN_public  all  --  anywhere anywhere[goto]
> IN_public  all  --  anywhere anywhere[goto]
>
> Chain INPUT_ZONES_SOURCE (1 references)
> target prot opt source   destination
>
> Chain INPUT_direct (1 references)
> target prot opt source   destination
>
> Chain IN_public (2 references)
> target prot opt source   destination
> IN_public_log  all  --  anywhere anywhere
> IN_public_deny  all  --  anywhere anywhere
> IN_public_allow  all  --  anywhere anywhere
>
> Chain IN_public_allow (1 references)
> tar

Hairpin setup failed for pod "docker-registry-1-deploy_default": open /sys/devices/virtual/net/veth342eb48/brport/hairpin_mode: no such file or directory

2016-02-09 Thread Stéphane Klein
Hi,

I've installed OpenShift with Ansible + Vagrant

After the OpenShift Origin installation I do:

```
$ vagrant ssh master
$ sudo su
# oc login -u system:admin
 # oc project default
# oadm registry
--credentials=/etc/origin/master/openshift-registry.kubeconfig
# oc get pods
NAME   READY STATUSRESTARTS   AGE
docker-registry-1-deploy   0/1   Pending   0  3m
router-1-deploy0/1   Pending   0  40m
```
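
A first diagnostic step for pods stuck in Pending (a sketch, assuming the
usual describe/events subcommands of this oc version):

```
# Inspect scheduling events for the stuck deployer pod
oc describe pod docker-registry-1-deploy -n default

# Or list all recent events in the project
oc get events -n default
```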

On node2, I have:

```
$ vagrant ssh node2
$ sudo su
# journalctl -u origin-node -f


févr. 09 07:56:22 ose3-node2.example.com origin-node[16820]: I0209
07:56:22.467556   16820 proxier.go:294] Adding new service
"default/docker-registry:5000-tcp" at 172.30.182.154:5000/TCP
févr. 09 07:56:22 ose3-node2.example.com origin-node[16820]: I0209
07:56:22.712996   16820 kubelet.go:2169] SyncLoop (ADD, "api"):
"docker-registry-1-deploy_default"
févr. 09 07:56:22 ose3-node2.example.com origin-node[16820]: I0209
07:56:22.779938   16820 manager.go:1720] Need to restart pod infra
container for "docker-registry-1-deploy_default" because it is not found
févr. 09 07:56:22 ose3-node2.example.com origin-node[16820]: I0209
07:56:22.782087   16820 provider.go:91] Refreshing cache for provider:
*credentialprovider.defaultDockerConfigProvider
févr. 09 07:56:22 ose3-node2.example.com origin-node[16820]: I0209
07:56:22.782212   16820 docker.go:159] Pulling image
openshift/origin-pod:v1.1.1.1 without credentials
févr. 09 07:56:30 ose3-node2.example.com ovs-vsctl[20090]:
ovs|1|vsctl|INFO|Called as ovs-vsctl add-port br0 veth342eb48
févr. 09 07:56:30 ose3-node2.example.com origin-node[16820]: W0209
07:56:30.878571   16820 manager.go:1892] Hairpin setup failed for pod
"docker-registry-1-deploy_default": open
/sys/devices/virtual/net/veth342eb48/brport/hairpin_mode: no such file or
directory
févr. 09 07:56:30 ose3-node2.example.com origin-node[16820]: I0209
07:56:30.879326   16820 docker.go:159] Pulling image
openshift/origin-deployer:v1.1.1.1 without credentials
```


Why this error?

```
manager.go:1892] Hairpin setup failed for pod
"docker-registry-1-deploy_default": open
/sys/devices/virtual/net/veth342eb48/brport/hairpin_mode: no such file or
directory
```
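
One way to read this warning (an assumption, not verified here): the veth
is attached to the OVS bridge br0 by openshift-sdn, not to a Linux bridge,
so the Linux-bridge-only `brport/hairpin_mode` file never exists and the
kubelet's hairpin setup fails harmlessly. That can be checked directly
(the veth name comes from the error message above):

```
# The brport directory exists only for ports of a Linux bridge
ls /sys/devices/virtual/net/veth342eb48/brport/ 2>/dev/null \
    || echo "no brport: interface is not attached to a Linux bridge"

# Check whether the veth is an Open vSwitch port instead
ovs-vsctl list-ports br0
```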

Best regards,
Stéphane

-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Does nobody test OpenShift Origin with Vagrant + Ansible? There are many issues with this workflow

2016-02-09 Thread Stéphane Klein
Hi,

does nobody test OpenShift Origin with Vagrant?

I have been testing OpenShift Origin + Ansible + Vagrant for three weeks
on OS X, and there are many, many issues.

Am I the only one using this workflow?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Where are the OpenShift Origin v1.1.2 rpm packages?

2016-02-09 Thread Stéphane Klein
I use CentOS 7:

```
[root@ose3-master vagrant]# cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core)
```

Are these rpms built for this distribution?


2016-02-09 17:00 GMT+01:00 Scott Dodson <sdod...@redhat.com>:

> RPM builds aren't integrated into the origin release process currently
> so there's often delays. I've just built them you should be able to
> update now.
>
> On Tue, Feb 9, 2016 at 10:34 AM, Stéphane Klein
> <cont...@stephane-klein.info> wrote:
> > Hi,
> >
> > where is OpenShift Origin v1.1.2 rpm packages ?
> >
> > Best regards,
> > Stéphane
> >
> > --
> > Stéphane Klein <cont...@stephane-klein.info>
> > blog: http://stephane-klein.info
> > cv : http://cv.stephane-klein.info
> > Twitter: http://twitter.com/klein_stephane
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: container_manager_linux.go:272] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to move PID 20499 (in "/system.slice/docker.service") to "/docker-daemon"

2016-02-09 Thread Stéphane Klein
Is it the same bug?
http://lists.openshift.redhat.com/openshift-archives/users/2016-February/msg00114.html
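
A way to check whether /sys ended up read-only inside the container (a
sketch using standard commands; `origin` is the container name from the
run command quoted below):

```
# Show how /sys and the cgroup filesystems are mounted inside the container
docker exec origin sh -c "mount | grep -E '(/sys|cgroup)'"
```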

2016-02-09 19:11 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

> I've executed :
>
> ```
> docker run -d --name "origin" --privileged --pid=host --net=host -v
> /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v
> /var/lib/docker:/var/lib/docker:rw  -v
> /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes
> openshift/origin start
> ```
>
> -v /sys:/sys is present :(
>
>
> 2016-02-09 19:09 GMT+01:00 Clayton Coleman <ccole...@redhat.com>:
>
>> Those errors usually indicate -v /sys:/sys is missing
>>
>> On Tue, Feb 9, 2016 at 1:07 PM, Stéphane Klein
>> <cont...@stephane-klein.info> wrote:
>> > Hi,
>> >
>> > I've installed OpenShift with Docker method (
>> >
>> https://docs.openshift.org/latest/getting_started/administrators.html#running-in-a-docker-container
>> > ) on CentOS Linux release 7.2.1511 (Core)
>> >
>> > After installation, I've this :
>> >
>> > ```
>> >
>> > E0209 17:33:57.492089   20834 kubelet.go:863] Image garbage collection
>> > failed: unable to find data for container /
>> > I0209 17:33:57.492103   20834 server.go:103] Starting to listen on
>> > 0.0.0.0:10250
>> > W0209 17:33:57.673340   20834 kubelet.go:891] Failed to move Kubelet to
>> > container "/kubelet": mkdir /sys/fs/cgroup/cpuacct,cpu: read-only file
>> > system
>> > I0209 17:33:57.673400   20834 kubelet.go:893] Running in container
>> > "/kubelet"
>> > I0209 17:33:57.678639   20834 manager.go:124] Starting to sync pod
>> status
>> > with apiserver
>> > I0209 17:33:57.678677   20834 kubelet.go:2299] Starting kubelet main
>> sync
>> > loop.
>> > I0209 17:33:57.687318   20834 factory.go:203] System is using systemd
>> > I0209 17:33:57.785603   20834 factory.go:245] Registering Docker factory
>> > I0209 17:33:57.788723   20834 factory.go:94] Registering Raw factory
>> > I0209 17:33:57.829322   20834 kubelet.go:1087] Successfully registered
>> node
>> > localhost.localdomain
>> > I0209 17:33:57.914526   20834 manager.go:1005] Started watching for new
>> ooms
>> > in manager
>> > I0209 17:33:57.918669   20834 oomparser.go:182] oomparser using systemd
>> > I0209 17:33:57.919128   20834 manager.go:249] Starting recovery of all
>> > containers
>> > I0209 17:33:57.983381   20834 manager.go:254] Recovery completed
>> > W0209 17:33:58.058390   20834 container_manager_linux.go:272]
>> > [ContainerManager] Failed to ensure state of "/docker-daemon": [failed
>> to
>> > move PID 20848 (in "/") to "/docker-daemon", failed to move PID 20499
>> (in
>> > "/system.slice/docker.service") to "/docker-daemon"]
>> > W0209 17:33:59.462068   20834 nodecontroller.go:585] Missing timestamp
>> for
>> > Node localhost.localdomain. Assuming now as a timestamp.
>> > I0209 17:33:59.462108   20834 event.go:210]
>> > Event(api.ObjectReference{Kind:"Node", Namespace:"",
>> > Name:"localhost.localdomain", UID:"localhost.localdomain",
>> APIVersion:"",
>> > ResourceVersion:"", FieldPath:""}): type: 'Normal' reason:
>> 'RegisteredNode'
>> > Node localhost.localdomain event: Registered Node localhost.localdomain
>> in
>> > NodeController
>> > W0209 17:34:58.456869   20834 container_manager_linux.go:272]
>> > [ContainerManager] Failed to ensure state of "/docker-daemon": [failed
>> to
>> > move PID 20848 (in "/") to "/docker-daemon", failed to move PID 20499
>> (in
>> > "/system.slice/docker.service") to "/docker-daemon"]
>> > E0209 17:35:51.607402   20834 nsenter_mount.go:179] Failed to nsenter
>> mount,
>> > return file doesn't exist: exit status 1
>> > E0209 17:35:51.633646   20834 nsenter_mount.go:179] Failed to nsenter
>> mount,
>> > return file doesn't exist: exit status 1
>> > I0209 17:35:51.809000   20834 provider.go:91] Refreshing cache for
>> provider:
>> > *credentialprovider.defaultDockerConfigProvider
>> > W0209 17:35:58.806107   20834 container_manager_linux.go:272]
>> > [ContainerManager] Failed to ensure state of "/docker-daemon": [failed
>> to
>> > move PID 20848 (in "/") to "/docker

Re: Where are the OpenShift Origin v1.1.2 rpm packages?

2016-02-09 Thread Stéphane Klein
OK, it's good:

```
[root@ose3-master vagrant]# rpm -qa | grep "origin"
origin-1.1.2-0.git.0.b8d7bbd.el7.centos.x86_64
origin-clients-1.1.2-0.git.0.b8d7bbd.el7.centos.x86_64
origin-node-1.1.2-0.git.0.b8d7bbd.el7.centos.x86_64
origin-master-1.1.2-0.git.0.b8d7bbd.el7.centos.x86_64
tuned-profiles-origin-node-1.1.2-0.git.0.b8d7bbd.el7.centos.x86_64
origin-sdn-ovs-1.1.2-0.git.0.b8d7bbd.el7.centos.x86_64
```

Thanks :)
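
For reference, the step that makes freshly built Copr packages visible is
clearing the cached repo metadata first (per Jason's tip quoted below):

```
# Drop cached repo metadata so yum re-reads the Copr repo, then update
yum clean all
yum update 'origin*'
```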


2016-02-09 19:20 GMT+01:00 Jason DeTiberus <jdeti...@redhat.com>:

>
>
> On Tue, Feb 9, 2016 at 12:44 PM, Stéphane Klein <
> cont...@stephane-klein.info> wrote:
>
>> I've this :
>>
>> ```
>> [root@ose3-master vagrant]# cat
>> /etc/yum.repos.d/maxamillion-origin-next-epel-7.repo
>> [maxamillion-origin-next]
>> name=Copr repo for origin-next owned by maxamillion
>> baseurl=
>> https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/epel-7-$basearch/
>> skip_if_unavailable=True
>> gpgcheck=1
>> gpgkey=
>> https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/pubkey.gpg
>> enabled=1
>> ```
>>
>> but :
>>
>> ```
>> [root@ose3-master vagrant]# yum update
>> Loaded plugins: fastestmirror
>> Loading mirror speeds from cached hostfile
>>  * base: centos.quelquesmots.fr
>>  * extras: centos.quelquesmots.fr
>>  * updates: ftp.nluug.nl
>> No packages marked for update
>> ```
>>
>
> You may have to do a `yum clean all` to clear the yum cache and see the
> updated packages.
>
>
>>
>>
>> 2016-02-09 18:41 GMT+01:00 Scott Dodson <sdod...@redhat.com>:
>>
>>> Yes, sorry you asked where they were and I didn't point you to them.
>>> Currently they're in this COPR
>>> https://copr.fedorainfracloud.org/coprs/maxamillion/origin-next/ and
>>> openshift-ansible will set this repo up for you automatically if you
>>> do an origin install.
>>>
>>> On Tue, Feb 9, 2016 at 11:46 AM, Stéphane Klein
>>> <cont...@stephane-klein.info> wrote:
>>> > I use Centos 7 :
>>> >
>>> > ```
>>> > [root@ose3-master vagrant]# cat /etc/centos-release
>>> > CentOS Linux release 7.2.1511 (Core)
>>> > ```
>>> >
>>> > this rpm are build for this distribution ?
>>> >
>>> >
>>> > 2016-02-09 17:00 GMT+01:00 Scott Dodson <sdod...@redhat.com>:
>>> >>
>>> >> RPM builds aren't integrated into the origin release process currently
>>> >> so there's often delays. I've just built them you should be able to
>>> >> update now.
>>> >>
>>> >> On Tue, Feb 9, 2016 at 10:34 AM, Stéphane Klein
>>> >> <cont...@stephane-klein.info> wrote:
>>> >> > Hi,
>>> >> >
>>> >> > where is OpenShift Origin v1.1.2 rpm packages ?
>>> >> >
>>> >> > Best regards,
>>> >> > Stéphane
>>> >> >
>>> >> > --
>>> >> > Stéphane Klein <cont...@stephane-klein.info>
>>> >> > blog: http://stephane-klein.info
>>> >> > cv : http://cv.stephane-klein.info
>>> >> > Twitter: http://twitter.com/klein_stephane
>>> >> >
>>> >> > ___
>>> >> > users mailing list
>>> >> > users@lists.openshift.redhat.com
>>> >> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>> >> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Stéphane Klein <cont...@stephane-klein.info>
>>> > blog: http://stephane-klein.info
>>> > cv : http://cv.stephane-klein.info
>>> > Twitter: http://twitter.com/klein_stephane
>>>
>>
>>
>>
>> --
>> Stéphane Klein <cont...@stephane-klein.info>
>> blog: http://stephane-klein.info
>> cv : http://cv.stephane-klein.info
>> Twitter: http://twitter.com/klein_stephane
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>
> --
> Jason DeTiberus
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


I can't find the "osc" CLI in the OpenShift Origin documentation, why?

2016-02-09 Thread Stéphane Klein
Hi,

In OpenShift I have the "oc", "oadm" and "osc" CLI commands.

I found the "oc" and "oadm" CLI documentation here:
https://docs.openshift.org/latest/cli_reference/index.html but I can't find
"osc" in the documentation. Why?

Best regards,
Stéphane

-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Does nobody test OpenShift Origin with Vagrant + Ansible? There are many issues with this workflow

2016-02-09 Thread Stéphane Klein
2016-02-10 0:19 GMT+01:00 Andy Goldstein :

> I do vagrant + ansible + parallels on my Mac for OSE. What sort of issues
> are you seeing?
>
>
These, for example:

*
http://lists.openshift.redhat.com/openshift-archives/users/2016-February/msg00111.html
* https://github.com/openshift/openshift-ansible/issues/1350
*
http://lists.openshift.redhat.com/openshift-archives/users/2016-February/msg6.html

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Where are the OpenShift Origin v1.1.2 rpm packages?

2016-02-09 Thread Stéphane Klein
I have this:

```
[root@ose3-master vagrant]# cat
/etc/yum.repos.d/maxamillion-origin-next-epel-7.repo
[maxamillion-origin-next]
name=Copr repo for origin-next owned by maxamillion
baseurl=
https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/epel-7-$basearch/
skip_if_unavailable=True
gpgcheck=1
gpgkey=
https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/pubkey.gpg
enabled=1
```

but:

```
[root@ose3-master vagrant]# yum update
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.quelquesmots.fr
 * extras: centos.quelquesmots.fr
 * updates: ftp.nluug.nl
No packages marked for update
```


2016-02-09 18:41 GMT+01:00 Scott Dodson <sdod...@redhat.com>:

> Yes, sorry you asked where they were and I didn't point you to them.
> Currently they're in this COPR
> https://copr.fedorainfracloud.org/coprs/maxamillion/origin-next/ and
> openshift-ansible will set this repo up for you automatically if you
> do an origin install.
>
> On Tue, Feb 9, 2016 at 11:46 AM, Stéphane Klein
> <cont...@stephane-klein.info> wrote:
> > I use Centos 7 :
> >
> > ```
> > [root@ose3-master vagrant]# cat /etc/centos-release
> > CentOS Linux release 7.2.1511 (Core)
> > ```
> >
> > this rpm are build for this distribution ?
> >
> >
> > 2016-02-09 17:00 GMT+01:00 Scott Dodson <sdod...@redhat.com>:
> >>
> >> RPM builds aren't integrated into the origin release process currently
> >> so there's often delays. I've just built them you should be able to
> >> update now.
> >>
> >> On Tue, Feb 9, 2016 at 10:34 AM, Stéphane Klein
> >> <cont...@stephane-klein.info> wrote:
> >> > Hi,
> >> >
> >> > where is OpenShift Origin v1.1.2 rpm packages ?
> >> >
> >> > Best regards,
> >> > Stéphane
> >> >
> >> > --
> >> > Stéphane Klein <cont...@stephane-klein.info>
> >> > blog: http://stephane-klein.info
> >> > cv : http://cv.stephane-klein.info
> >> > Twitter: http://twitter.com/klein_stephane
> >> >
> >> > ___
> >> > users mailing list
> >> > users@lists.openshift.redhat.com
> >> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >> >
> >
> >
> >
> >
> > --
> > Stéphane Klein <cont...@stephane-klein.info>
> > blog: http://stephane-klein.info
> > cv : http://cv.stephane-klein.info
> > Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users

