Re: OpenShift Origin builds for CVE-2018-1002105

2018-12-06 Thread Mateus Caruccio
On top of that, is anyone here building publicly accessible RPMs/SRPMs?


On Thu, Dec 6, 2018, 07:36, Gowtham Sundara <gowtham.sund...@rapyuta-robotics.com> wrote:

> Hello,
> The RPMs for OpenShift Origin need to be updated because of the recent
> vulnerability. Is there a release schedule for this?
>
> --
> Gowtham Sundara
> Site Reliability Engineer
>
> Rapyuta Robotics “empowering lives with connected machines”
> rapyuta-robotics.com 
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>


Re: Audit logs can be written

2018-08-20 Thread Mateus Caruccio
Thanks, Aleks. The first solution works fine. The second one seems a little
bit odd IMHO.
Have a nice week.
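For the record, the first solution renders into the master config roughly like this — a sketch reconstructed from the inventory line quoted below, not a dump from a real master:

```yaml
# Excerpt of /etc/origin/master/master-config.yaml after applying
# openshift_master_audit_config (values taken from the thread):
auditConfig:
  enabled: true
  auditFilePath: /var/lib/origin/ocp-audit.log  # a path the API server can write
  maximumFileRetentionDays: 14
  maximumFileSizeMegabytes: 500
  maximumRetainedFiles: 5
```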

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2018-08-15 17:21 GMT-03:00 Aleksandar Lazic :

> Hi.
>
> I'm pretty sure that the directory `/var/lib/origin/openpaas-oscp-audit`
> does not exist in the API container.
> The line was copied from an RPM install of 3.7 into the 3.10 defaults.
>
> You can either adapt the path in the config or create the directory.
>
> Adapt the path:
> openshift_master_audit_config={"enabled": "true", "auditFilePath":
> "/var/lib/origin/ocp-audit.log", "maximumFileRetentionDays": "14",
> "maximumFileSizeMegabytes": "500", "maximumRetainedFiles": "5"}
>
>
> Create directory:
>
> [root@master001 ~]# oc -n openshift-apiserver get po
>
> oc -n openshift-apiserver rsh  ls -la /var/lib/origin/
>
> I think you will need to create it in the API container:
> oc -n openshift-apiserver rsh  mkdir /var/lib/origin/openpaas-oscp-audit/
>
> Hth
> Aleks
>
> Am 15.08.2018 um 12:02 schrieb Mateus Caruccio:
> > Hi everyone.
> >
> > After a fresh install of OKD 3.10, I'm unable to properly save audit
> > logs into a host dir. The default path from the hosts.example [1] tries
> > to write into an unwritable dir.
> >
> > What is the recommended solution for this?
> >
> > The /var/log/audit/audit.log file from the host:
> >
> > type=AVC msg=audit(1534326872.648:1703901): avc:  denied  { write } for
> > pid=22634 comm="openshift" name="openpaas-oscp-audit" dev="xvda1"
> ino=15097948
> > scontext=system_u:system_r:container_t:s0:c143,c334
> > tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=dir
> > type=SYSCALL msg=audit(1534326872.648:1703901): arch=c03e
> syscall=257
> > success=no exit=-13 a0=ff9c a1=c42ce61100 a2=80241 a3=1a4
> items=0
> > ppid=22624 pid=22634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0
> egid=0
> > sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="openshift"
> > exe="/usr/bin/openshift" subj=system_u:system_r:container_t:s0:c143,c334
> > key=(null)
> > type=PROCTITLE msg=audit(1534326872.648:1703901):
> > proctitle=6F70656E7368696674007374617274006D617374657200617069002D2D63
> 6F6E6669673D2F6574632F6F726967696E2F6D61737465722F6D61737465
> 722D636F6E6669672E79616D6C002D2D6C6F676C6576656C3D31
> >
> > And the logs of the API container
> >
> > E0815 09:52:21.826793   1 metrics.go:86] Error in audit plugin 'log'
> > affecting 1 audit events: can't open new logfile: open
> > /var/lib/origin/openpaas-oscp-audit/openpaas-oscp-audit.log: permission
> denied
> > Impacted events:
> > 2018-08-15T09:52:21.826616689Z AUDIT:
> > id="90c74b44-bbeb-495f-bb2b-543e2c1b23f1" stage="RequestReceived"
> > ip="10.0.108.99" method="get" user="system:openshift-master"
> > groups="\"system:masters\",\"system:openshift-master\",\"
> system:authenticated\""
> > as="" asgroups="" namespace="openshift-web-console"
> > uri="/api/v1/namespaces/openshift-web-console/
> configmaps/webconsole-config"
> > response=""
> > E0815 09:52:21.828096   1 metrics.go:86] Error in audit plugin 'log'
> > affecting 1 audit events: can't open new logfile: open
> > /var/lib/origin/openpaas-oscp-audit/openpaas-oscp-audit.log: permission
> denied
> > Impacted events:
> > 2018-08-15T09:52:21.826616689Z AUDIT:
> > id="90c74b44-bbeb-495f-bb2b-543e2c1b23f1" stage="ResponseComplete"
> > ip="10.0.108.99" method="get" user="system:openshift-master"
> > groups="\"system:masters\",\"system:openshift-master\",\"
> system:authenticated\""
> > as="" asgroups="" namespace="openshift-web-console"
> > uri="/api/v1/namespaces/openshift-web-console/
> configmaps/webconsole-config"
> > response="404"
> >
> >
> >
> > [1] https://github.com/openshift/openshift-ansible/blob/2e78bc99fdd240e8be653facb93118f1597e801f/inventory/hosts.example#L927
> >
> > --
> > Mateus Caruccio / Master of Puppets
> > GetupCloud.com
> > We make the infrastructure invisible
> > Gartner Cool Vendor 2017
> >
> >
> > ___
> > dev mailing list
> > dev@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
>


Update HostSubnets

2018-08-09 Thread Mateus Caruccio
Is it possible to increase or decrease the per-node subnet size simply by
changing the master config's networkConfig.clusterNetworks.hostSubnetLength
and restarting all nodes?
I'm trying to have more IPs per namespace.
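For context, the field in question lives in the master config like this — a sketch with illustrative values, not a dump from a running cluster. hostSubnetLength is the number of host bits carved out of the cluster CIDR for each node, so a value of 10 gives every node a /22 (about 2^10 pod IPs). Note the subnet is allocated per node, not per namespace, and as far as I know already-allocated HostSubnets are not renumbered just by restarting:

```yaml
# Sketch of /etc/origin/master/master-config.yaml (illustrative values):
networkConfig:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostSubnetLength: 10   # host bits per node: each node gets a /22 from the /14
  serviceNetworkCIDR: 172.30.0.0/16
```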


--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017


Re: Kubelet/node nice level

2018-07-01 Thread Mateus Caruccio
Got it. So now I just need to fix my scripts. Thanks for clarifying.
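For anyone else hitting the shadowed-unit problem: instead of editing either copy of origin-node.service, a systemd drop-in overrides just the one directive and survives installer and RPM updates. A sketch — the demo writes to a temp directory; on a real node the target would be /etc/systemd/system/origin-node.service.d/:

```shell
# Create a drop-in that sets only Nice=, leaving the main unit file untouched.
# Demo uses a temp dir; on a node, use /etc/systemd/system/origin-node.service.d/.
dropin_dir="$(mktemp -d)/origin-node.service.d"
mkdir -p "$dropin_dir"
printf '[Service]\nNice=-5\n' > "$dropin_dir/10-nice.conf"
cat "$dropin_dir/10-nice.conf"
# On the node, follow up with:
#   systemctl daemon-reload && systemctl restart origin-node
#   ps ax -o pid,nice,comm | grep openshift    # niceness should now be -5
```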

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2018-07-01 14:18 GMT-03:00 Clayton Coleman :

> That’s the one the installer lays down.  Ansible has never used the one in
> the RPMs (and the one in the RPMs is being removed in 3.10 to prevent
> confusion).
>
> On Jul 1, 2018, at 10:03 AM, Mateus Caruccio wrote:
>
> Yep, I copy/pasted from an old buffer. It's /bin/nice already. Same results.
>
> Anyway, I believe I have found the reason. Looks like
> /etc/systemd/system/multi-user.target.wants is linking to the wrong unit
> file (or am I editing the wrong one?).
>
> # ls -l /etc/systemd/system/multi-user.target.wants/origin-node.service
> lrwxrwxrwx.  1 root root   39 Jul  1 13:42 /etc/systemd/system/multi-user.target.wants/origin-node.service -> /etc/systemd/system/origin-node.service
>
> # cat /etc/systemd/system/origin-node.service
> [Unit]
> Description=OpenShift Node
> After=docker.service
> After=chronyd.service
> After=ntpd.service
> Wants=openvswitch.service
> After=ovsdb-server.service
> After=ovs-vswitchd.service
> Wants=docker.service
> Documentation=https://github.com/openshift/origin
> Wants=dnsmasq.service
> After=dnsmasq.service
>
> [Service]
> Type=notify
> EnvironmentFile=/etc/sysconfig/origin-node
> Environment=GOTRACEBACK=crash
> ExecStartPre=/usr/bin/cp /etc/origin/node/node-dnsmasq.conf
> /etc/dnsmasq.d/
> ExecStartPre=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq
> /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers
> array:string:/in-addr.arpa/127.0.0.1,/cluster.local/127.0.0.1
> ExecStopPost=/usr/bin/rm /etc/dnsmasq.d/node-dnsmasq.conf
> ExecStopPost=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq
> /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers
> array:string:
> ExecStart=/usr/bin/openshift start node  --config=${CONFIG_FILE} $OPTIONS
> LimitNOFILE=65536
> LimitCORE=infinity
> WorkingDirectory=/var/lib/origin/
> SyslogIdentifier=origin-node
> Restart=always
> RestartSec=5s
> TimeoutStartSec=300
> OOMScoreAdjust=-999
>
> [Install]
> WantedBy=multi-user.target
>
>
> After removing it, the link points to what I expect to be the right unit
> file:
>
>
> # rm -f /etc/systemd/system/origin-node.service
>
> # systemctl disable origin-node
> Removed symlink /etc/systemd/system/multi-user.target.wants/origin-node.
> service.
>
> # systemctl enable origin-node
> Created symlink from /etc/systemd/system/multi-
> user.target.wants/origin-node.service to /usr/lib/systemd/system/
> origin-node.service.
>
> # systemctl restart origin-node
>
> # ps ax -o pid,nice,comm|grep openshift
>   4994   0 openshift
>   5036  -5 openshift
>
>
> The question now is: where does /etc/systemd/system/origin-node.service come
> from?
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> Gartner Cool Vendor 2017
>
> 2018-07-01 8:06 GMT-03:00 Tobias Florek :
>
>> Hi!
>>
>> > ExecStart=nice -n -5 /usr/bin/openshift start node [...]
>>
>> That won't work. You need the full path to the executable in systemd
>> units.
>>
>> Cheers,
>>  Tobias Florek
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


Re: Kubelet/node nice level

2018-07-01 Thread Mateus Caruccio
Yep, I copy/pasted from an old buffer. It's /bin/nice already. Same results.

Anyway, I believe I have found the reason. Looks like
/etc/systemd/system/multi-user.target.wants is linking to the wrong unit
file (or am I editing the wrong one?).

# ls -l /etc/systemd/system/multi-user.target.wants/origin-node.service
lrwxrwxrwx.  1 root root   39 Jul  1 13:42 /etc/systemd/system/multi-user.target.wants/origin-node.service -> /etc/systemd/system/origin-node.service

# cat /etc/systemd/system/origin-node.service
[Unit]
Description=OpenShift Node
After=docker.service
After=chronyd.service
After=ntpd.service
Wants=openvswitch.service
After=ovsdb-server.service
After=ovs-vswitchd.service
Wants=docker.service
Documentation=https://github.com/openshift/origin
Wants=dnsmasq.service
After=dnsmasq.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/origin-node
Environment=GOTRACEBACK=crash
ExecStartPre=/usr/bin/cp /etc/origin/node/node-dnsmasq.conf /etc/dnsmasq.d/
ExecStartPre=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq
/uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers
array:string:/in-addr.arpa/127.0.0.1,/cluster.local/127.0.0.1
ExecStopPost=/usr/bin/rm /etc/dnsmasq.d/node-dnsmasq.conf
ExecStopPost=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq
/uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string:
ExecStart=/usr/bin/openshift start node  --config=${CONFIG_FILE} $OPTIONS
LimitNOFILE=65536
LimitCORE=infinity
WorkingDirectory=/var/lib/origin/
SyslogIdentifier=origin-node
Restart=always
RestartSec=5s
TimeoutStartSec=300
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target


After removing it, the link points to what I expect to be the right unit
file:


# rm -f /etc/systemd/system/origin-node.service

# systemctl disable origin-node
Removed symlink
/etc/systemd/system/multi-user.target.wants/origin-node.service.

# systemctl enable origin-node
Created symlink from
/etc/systemd/system/multi-user.target.wants/origin-node.service to
/usr/lib/systemd/system/origin-node.service.

# systemctl restart origin-node

# ps ax -o pid,nice,comm|grep openshift
  4994   0 openshift
  5036  -5 openshift


The question now is: where does /etc/systemd/system/origin-node.service come
from?

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2018-07-01 8:06 GMT-03:00 Tobias Florek :

> Hi!
>
> > ExecStart=nice -n -5 /usr/bin/openshift start node [...]
>
> That won't work. You need the full path to the executable in systemd
> units.
>
> Cheers,
>  Tobias Florek
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


Re: Kubelet/node nice level

2018-06-30 Thread Mateus Caruccio
Nice doesn't appear in origin-node.service. However, I managed to
successfully set it on both docker.service and crond.service.
I've also tried calling it via the nice command, with no success:
ExecStart=nice -n -5 /usr/bin/openshift start node
--config=${CONFIG_FILE} $OPTIONS

[root@ip-10-0-53-142 centos]# systemctl daemon-reload

[root@ip-10-0-53-142 centos]# systemctl stop origin-node.service
[root@ip-10-0-53-142 centos]# systemctl start origin-node.service
[root@ip-10-0-53-142 centos]# ps afx -o pid,nice,comm|grep openshift
  1744   0 openshift
  1758   0 openshift
 87650   0 openshift


--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2018-06-30 17:01 GMT-03:00 Clayton Coleman :

> Maybe double check that systemd sees the Nice parameter with systemctl cat
> origin-node
>
> On Jun 30, 2018, at 3:10 PM, Mateus Caruccio wrote:
>
> [centos@ip-10-0-53-142 ~]$ oc version
> oc v3.9.0+ba7faec-1
> kubernetes v1.9.1+a0ce1bc657
> features: Basic-Auth GSSAPI Kerberos SPNEGO
>
> Server https://api.engine.caruccio.com
> openshift v3.9.0+ba7faec-1
> kubernetes v1.9.1+a0ce1bc657
>
> --
>
> [centos@ip-10-0-53-142 ~]$ sudo grep -v '^#' /etc/sysconfig/origin-node
> /etc/sysconfig/origin-node:OPTIONS=--loglevel=1
> /etc/sysconfig/origin-node:CONFIG_FILE=/etc/origin/node/node-config.yaml
> /etc/sysconfig/origin-node:
> /etc/sysconfig/origin-node:IMAGE_VERSION=v3.9.0
> /etc/sysconfig/origin-node:AWS_ACCESS_KEY_ID=[REDACTED]
> /etc/sysconfig/origin-node:AWS_SECRET_ACCESS_KEY=[REDACTED]
>
> --
>
> [centos@ip-10-0-53-142 ~]$ sudo cat /etc/origin/node/node-config.yaml
> allowDisabledDocker: false
> apiVersion: v1
> dnsBindAddress: 127.0.0.1:53
> dnsRecursiveResolvConf: /etc/origin/node/resolv.conf
> dnsDomain: cluster.local
> dnsIP: 10.0.53.142
> dockerConfig:
>   execHandlerName: ""
> iptablesSyncPeriod: "30s"
> imageConfig:
>   format: openshift/origin-${component}:${version}
>   latest: False
> kind: NodeConfig
> kubeletArguments:
>   cloud-config:
>   - /etc/origin/cloudprovider/aws.conf
>   cloud-provider:
>   - aws
>   image-gc-high-threshold:
>   - '80'
>   image-gc-low-threshold:
>   - '50'
>   image-pull-progress-deadline:
>   - 20m
>   max-pods:
>   - '200'
>   maximum-dead-containers:
>   - '10'
>   maximum-dead-containers-per-container:
>   - '1'
>   minimum-container-ttl-duration:
>   - 30s
>   node-labels:
>   - region=primary
>   - role=master
>   - zone=default
>   - server_name=mateus-master-0
>   pods-per-core:
>   - '20'
> masterClientConnectionOverrides:
>   acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
>   contentType: application/vnd.kubernetes.protobuf
>   burst: 200
>   qps: 100
> masterKubeConfig: system:node:ip-10-0-53-142.us-west-2.compute.internal.kubeconfig
> networkPluginName: redhat/openshift-ovs-multitenant
> # networkConfig struct introduced in origin 1.0.6 and OSE 3.0.2 which
> # deprecates networkPluginName above. The two should match.
> networkConfig:
>mtu: 8951
>networkPluginName: redhat/openshift-ovs-multitenant
> nodeName: ip-10-0-53-142.us-west-2.compute.internal
> podManifestConfig:
> servingInfo:
>   bindAddress: 0.0.0.0:10250
>   certFile: server.crt
>   clientCA: ca.crt
>   keyFile: server.key
>   minTLSVersion: VersionTLS12
> volumeDirectory: /var/lib/origin/openshift.local.volumes
> proxyArguments:
>   proxy-mode:
>  - iptables
> volumeConfig:
>   localQuota:
> perFSGroup:
>
> --
>
> [centos@ip-10-0-53-142 ~]$ sudo cat /usr/lib/systemd/system/origin-node.service
> [Unit]
> Description=Origin Node
> After=docker.service
> Wants=docker.service
> Documentation=https://github.com/openshift/origin
>
> [Service]
> Type=notify
> EnvironmentFile=/etc/sysconfig/origin-node
> Environment=GOTRACEBACK=crash
> ExecStart=/usr/bin/openshift start node --config=${CONFIG_FILE} $OPTIONS
> LimitNOFILE=65536
> LimitCORE=infinity
> WorkingDirectory=/var/lib/origin/
> SyslogIdentifier=origin-node
> Restart=always
> RestartSec=5s
> OOMScoreAdjust=-999
> Nice = -5
>
> [Install]
> WantedBy=multi-user.target
>
> ------
>
> This is the ansible task I'm using:
>
> https://github.com/caruccio/getup-engine-installer/commit/49c28e4cc350856e11b8160f7a315e5fdda0dcce
>
> ---
> - name: Set origin-node niceness
>   ini_file:
> path: /usr/lib/systemd/system/origin-node.service
> section: Service
> option: Nice
> value: -5
> backup: 

Re: Kubelet/node nice level

2018-06-30 Thread Mateus Caruccio
[centos@ip-10-0-53-142 ~]$ oc version
oc v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://api.engine.caruccio.com
openshift v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657

--

[centos@ip-10-0-53-142 ~]$ sudo grep -v '^#' /etc/sysconfig/origin-node
/etc/sysconfig/origin-node:OPTIONS=--loglevel=1
/etc/sysconfig/origin-node:CONFIG_FILE=/etc/origin/node/node-config.yaml
/etc/sysconfig/origin-node:
/etc/sysconfig/origin-node:IMAGE_VERSION=v3.9.0
/etc/sysconfig/origin-node:AWS_ACCESS_KEY_ID=[REDACTED]
/etc/sysconfig/origin-node:AWS_SECRET_ACCESS_KEY=[REDACTED]

--

[centos@ip-10-0-53-142 ~]$ sudo cat /etc/origin/node/node-config.yaml
allowDisabledDocker: false
apiVersion: v1
dnsBindAddress: 127.0.0.1:53
dnsRecursiveResolvConf: /etc/origin/node/resolv.conf
dnsDomain: cluster.local
dnsIP: 10.0.53.142
dockerConfig:
  execHandlerName: ""
iptablesSyncPeriod: "30s"
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: False
kind: NodeConfig
kubeletArguments:
  cloud-config:
  - /etc/origin/cloudprovider/aws.conf
  cloud-provider:
  - aws
  image-gc-high-threshold:
  - '80'
  image-gc-low-threshold:
  - '50'
  image-pull-progress-deadline:
  - 20m
  max-pods:
  - '200'
  maximum-dead-containers:
  - '10'
  maximum-dead-containers-per-container:
  - '1'
  minimum-container-ttl-duration:
  - 30s
  node-labels:
  - region=primary
  - role=master
  - zone=default
  - server_name=mateus-master-0
  pods-per-core:
  - '20'
masterClientConnectionOverrides:
  acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
  contentType: application/vnd.kubernetes.protobuf
  burst: 200
  qps: 100
masterKubeConfig: system:node:ip-10-0-53-142.us-west-2.compute.internal.kubeconfig
networkPluginName: redhat/openshift-ovs-multitenant
# networkConfig struct introduced in origin 1.0.6 and OSE 3.0.2 which
# deprecates networkPluginName above. The two should match.
networkConfig:
   mtu: 8951
   networkPluginName: redhat/openshift-ovs-multitenant
nodeName: ip-10-0-53-142.us-west-2.compute.internal
podManifestConfig:
servingInfo:
  bindAddress: 0.0.0.0:10250
  certFile: server.crt
  clientCA: ca.crt
  keyFile: server.key
  minTLSVersion: VersionTLS12
volumeDirectory: /var/lib/origin/openshift.local.volumes
proxyArguments:
  proxy-mode:
 - iptables
volumeConfig:
  localQuota:
perFSGroup:

--

[centos@ip-10-0-53-142 ~]$ sudo cat
/usr/lib/systemd/system/origin-node.service
[Unit]
Description=Origin Node
After=docker.service
Wants=docker.service
Documentation=https://github.com/openshift/origin

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/origin-node
Environment=GOTRACEBACK=crash
ExecStart=/usr/bin/openshift start node --config=${CONFIG_FILE} $OPTIONS
LimitNOFILE=65536
LimitCORE=infinity
WorkingDirectory=/var/lib/origin/
SyslogIdentifier=origin-node
Restart=always
RestartSec=5s
OOMScoreAdjust=-999
Nice = -5

[Install]
WantedBy=multi-user.target

--

This is the ansible task I'm using:

https://github.com/caruccio/getup-engine-installer/commit/49c28e4cc350856e11b8160f7a315e5fdda0dcce

---
- name: Set origin-node niceness
  ini_file:
path: /usr/lib/systemd/system/origin-node.service
section: Service
option: Nice
value: -5
backup: yes
  tags:
  - post-install


--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2018-06-30 15:03 GMT-03:00 Clayton Coleman :

> Which version of openshift and what are your node start settings?
>
> On Jun 29, 2018, at 11:10 PM, Mateus Caruccio wrote:
>
> Hi. I'm trying to run the openshift kubelet with nice set to -5, but with no
> success.
>
> I've noticed that the kubelet is started using syscall.Exec[1], which calls
> execve. The man page of execve[2] states that the new process shall
> inherit the nice value from the caller.
>
> After adding `Nice=-5` to origin-node unit and reloading both daemon and
> unit, openshift node process still runs with nice=0.
>
> What am I missing?
>
> [1]: https://github.com/openshift/origin/blob/83ac5ae6a7d635ae67b1be438d85c339500fd65b/pkg/cmd/server/start/start_node.go#L433
> [2]: https://linux.die.net/man/3/execve
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> Gartner Cool Vendor 2017
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


Kubelet/node nice level

2018-06-29 Thread Mateus Caruccio
Hi. I'm trying to run the openshift kubelet with nice set to -5, but with no
success.

I've noticed that the kubelet is started using syscall.Exec[1], which calls
execve. The man page of execve[2] states that the new process shall
inherit the nice value from the caller.

After adding `Nice=-5` to origin-node unit and reloading both daemon and
unit, openshift node process still runs with nice=0.

What am I missing?

[1]:
https://github.com/openshift/origin/blob/83ac5ae6a7d635ae67b1be438d85c339500fd65b/pkg/cmd/server/start/start_node.go#L433
[2]: https://linux.die.net/man/3/execve
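The inheritance half of this can be verified in isolation: execve() really does preserve niceness across the re-exec, so the syscall.Exec in start_node.go is not what resets it. A quick demo with plain POSIX tools (nothing OpenShift-specific):

```shell
# Start a shell at niceness +5, have it exec another shell, and print the
# niceness the exec'd process sees -- the value survives execve().
inherited=$(nice -n 5 sh -c 'exec sh -c nice')
echo "$inherited"    # base niceness + 5 (i.e. 5 when starting from 0)
```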

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017


Re: Any alternative to "oc adm policy add-scc-to-user" ?

2018-05-24 Thread Mateus Caruccio
AFAIK there is nothing special about `oc adm`. It's just a regular REST client
for the API.
Also, SCCs exist only on OpenShift.
Am I missing something here?
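To make that concrete: since an SCC is an ordinary API object, `oc adm policy add-scc-to-user` essentially appends an entry to the SCC's `users` list, so the same end state can be kept declaratively in a file. A sketch with illustrative names (the SCC's other fields omitted):

```yaml
# Rough equivalent of `oc adm policy add-scc-to-user anyuid -z foo-sa -n foo`:
apiVersion: v1
kind: SecurityContextConstraints
metadata:
  name: anyuid
users:
- system:serviceaccount:foo:foo-sa
```

Note this declares the full list: applying the file replaces `users` wholesale, which is exactly Daniel's point about needing to know all the users up front.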

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2018-05-24 18:35 GMT-03:00 Daniel Comnea <comnea.d...@gmail.com>:

> Not to mention that with the spec file at least I should be able to use
> either the kubectl or oc CLI, while with "oc adm" you can do it only with the oc CLI.
>
> On Thu, May 24, 2018 at 10:32 PM, Daniel Comnea <comnea.d...@gmail.com>
> wrote:
>
>> Well, yeah, that is an option, but then that is more or less like "oc edit
>> scc", which is not what I want since I need to know all the users, and that
>> is tricky depending on when I run it (greenfield deployment,
>> after upgrade, etc.).
>>
>> On Thu, May 24, 2018 at 10:24 PM, Mateus Caruccio <
>> mateus.caruc...@getupcloud.com> wrote:
>>
>>> Hey, you could use oc's --loglevel=N to see the exact HTTP
>>> request/response flow with the api and adapt it to your need.
>>> I believe a level of 8 should be enough.
>>>
>>> --
>>> Mateus Caruccio / Master of Puppets
>>> GetupCloud.com
>>> We make the infrastructure invisible
>>> Gartner Cool Vendor 2017
>>>
>>> 2018-05-24 18:16 GMT-03:00 Daniel Comnea <comnea.d...@gmail.com>:
>>>
>>>> Hi,
>>>>
>>>> Is there any alternative to the "oc adm policy add-scc-to-user" command, in
>>>> the same way there is one for "oc create serviceaccount foo", which can
>>>> be achieved by:
>>>>
>>>> apiVersion: v1
>>>>
>>>> kind: ServiceAccount
>>>>
>>>> metadata:
>>>>
>>>>   name: foo-sa
>>>>
>>>>   namespace: foo
>>>>
>>>>
>>>> I'd like to be able to put all the info in a file rather than run oc
>>>> commands sequentially.
>>>>
>>>>
>>>> Thanks
>>>>
>>>>
>>>> ___
>>>> dev mailing list
>>>> dev@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>
>>>>
>>>
>>
>


Re: Any alternative to "oc adm policy add-scc-to-user" ?

2018-05-24 Thread Mateus Caruccio
Hey, you could use oc's --loglevel=N to see the exact HTTP request/response
flow with the API and adapt it to your needs.
I believe a level of 8 should be enough.

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2018-05-24 18:16 GMT-03:00 Daniel Comnea <comnea.d...@gmail.com>:

> Hi,
>
> Is there any alternative to the "oc adm policy add-scc-to-user" command, in the
> same way there is one for "oc create serviceaccount foo", which can be achieved
> by:
>
> apiVersion: v1
>
> kind: ServiceAccount
>
> metadata:
>
>   name: foo-sa
>
>   namespace: foo
>
>
> I'd like to be able to put all the info in a file rather than run oc commands
> sequentially.
>
>
> Thanks
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


Re: Hawkular metrics returns Forbidden

2018-01-16 Thread Mateus Caruccio
I was using :v3.7.0-rc.0, but switching to :latest solved the problem.
Is 3.6.1 fixed too?

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2018-01-16 11:59 GMT-02:00 Matthew Wringe <mwri...@redhat.com>:

> Are you using the latest 3.6 images?
>
> On Tue, Jan 16, 2018 at 7:48 AM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Hey guys, any news on this?
>> Tnx
>>
>> --
>> Mateus Caruccio / Master of Puppets
>> GetupCloud.com
>> We make the infrastructure invisible
>> Gartner Cool Vendor 2017
>>
>> 2017-10-05 18:35 GMT-03:00 Mateus Caruccio <mateus.caruc...@getupcloud.com>:
>>
>>> Hey Matt, any update on this?
>>>
>>> --
>>> Mateus Caruccio / Master of Puppets
>>> GetupCloud.com
>>> We make the infrastructure invisible
>>> Gartner Cool Vendor 2017
>>>
>>> 2017-09-28 10:19 GMT-03:00 Matthew Wringe <mwri...@redhat.com>:
>>>
>>>> Wait, there is another update that we need. That PR probably won't work
>>>> properly for you yet. I am investigating.
>>>>
>>>> On Thu, Sep 28, 2017 at 9:06 AM, Matthew Wringe <mwri...@redhat.com>
>>>> wrote:
>>>>
>>>>> The PR is this: https://github.com/openshift/origin-metrics/pull/382
>>>>>
>>>>> It was a problem in one of our releases of Hawkular Metrics, but I
>>>>> didn't think it made it into the 3.6 release (but it did).
>>>>>
>>>>> On Thu, Sep 28, 2017 at 8:41 AM, Mateus Caruccio <
>>>>> mateus.caruc...@getupcloud.com> wrote:
>>>>>
>>>>>> Sweet! Would you mind pointing the PR url?
>>>>>> Thanks.
>>>>>>
>>>>>> --
>>>>>> Mateus Caruccio / Master of Puppets
>>>>>> GetupCloud.com
>>>>>> We make the infrastructure invisible
>>>>>> Gartner Cool Vendor 2017
>>>>>>
>>>>>> 2017-09-28 9:34 GMT-03:00 Matthew Wringe <mwri...@redhat.com>:
>>>>>>
>>>>>>> Ah, sorry, this somehow got missed. We have had an issue that
>>>>>>> slipped into 3.6.0 that we are currently in progress to fix. The PR has
>>>>>>> been submitted and we are waiting for a new image to be built and pushed
>>>>>>> out.
>>>>>>>
>>>>>>> On Thu, Sep 28, 2017 at 6:53 AM, Mateus Caruccio <
>>>>>>> mateus.caruc...@getupcloud.com> wrote:
>>>>>>>
>>>>>>>> Nope, no time to debug yet :(
>>>>>>>>
>>>>>>>> --
>>>>>>>> Mateus Caruccio / Master of Puppets
>>>>>>>> GetupCloud.com
>>>>>>>> We make the infrastructure invisible
>>>>>>>> Gartner Cool Vendor 2017
>>>>>>>>
>>>>>>>> 2017-09-28 7:52 GMT-03:00 Andrew Lau <and...@andrewklau.com>:
>>>>>>>>
>>>>>>>>> Did you find any solution for this?
>>>>>>>>>
>>>>>>>>> On Fri, 15 Sep 2017 at 01:34 Mateus Caruccio <
>>>>>>>>> mateus.caruc...@getupcloud.com> wrote:
>>>>>>>>>
>>>>>>>>>> Yep, there it is:
>>>>>>>>>>
>>>>>>>>>> [OSEv3:children]
>>>>>>>>>> masters
>>>>>>>>>> etcd
>>>>>>>>>> nodes
>>>>>>>>>>
>>>>>>>>>> [OSEv3:vars]
>>>>>>>>>> deployment_type=origin
>>>>>>>>>> openshift_release=v3.6
>>>>>>>>>> debug_level=1
>>>>>>>>>> openshift_debug_level=1
>>>>>>>>>> openshift_node_debug_level=1
>>>>>>>>>> openshift_master_debug_level=1
>>>>>>>>>> openshift_master_access_token_max_seconds=2419200
>>>>>>>>>> osm_cluster_network_cidr=172.16.0.0/16
>>>>>>>>>> openshift_registry_selector="docker-registry=true"
>>>>>>>>>> openshift_hosted_registry_replicas=1
>>>>>>>>>>
>>>>>>>>

Re: Hawkular metrics returns Forbidden

2018-01-16 Thread Mateus Caruccio
Hey guys, any news on this?
Tnx

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-10-05 18:35 GMT-03:00 Mateus Caruccio <mateus.caruc...@getupcloud.com>:

> Hey Matt, any update on this?
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> Gartner Cool Vendor 2017
>
> 2017-09-28 10:19 GMT-03:00 Matthew Wringe <mwri...@redhat.com>:
>
>> Wait, there is another update that we need. That PR probably won't work
>> properly for you yet. I am investigating.
>>
>> On Thu, Sep 28, 2017 at 9:06 AM, Matthew Wringe <mwri...@redhat.com>
>> wrote:
>>
>>> The PR is this: https://github.com/openshift/origin-metrics/pull/382
>>>
>>> It was a problem in one of our releases of Hawkular Metrics, but I
>>> didn't think it made it into the 3.6 release (but it did).
>>>
>>> On Thu, Sep 28, 2017 at 8:41 AM, Mateus Caruccio <
>>> mateus.caruc...@getupcloud.com> wrote:
>>>
>>>> Sweet! Would you mind pointing the PR url?
>>>> Thanks.
>>>>
>>>> --
>>>> Mateus Caruccio / Master of Puppets
>>>> GetupCloud.com
>>>> We make the infrastructure invisible
>>>> Gartner Cool Vendor 2017
>>>>
>>>> 2017-09-28 9:34 GMT-03:00 Matthew Wringe <mwri...@redhat.com>:
>>>>
>>>>> Ah, sorry, this somehow got missed. We have had an issue that slipped
>>>>> into 3.6.0 that we are currently in progress to fix. The PR has been
>>>>> submitted and we are waiting for a new image to be built and pushed out.
>>>>>
>>>>> On Thu, Sep 28, 2017 at 6:53 AM, Mateus Caruccio <
>>>>> mateus.caruc...@getupcloud.com> wrote:
>>>>>
>>>>>> Nope, no time to debug yet :(
>>>>>>
>>>>>> --
>>>>>> Mateus Caruccio / Master of Puppets
>>>>>> GetupCloud.com
>>>>>> We make the infrastructure invisible
>>>>>> Gartner Cool Vendor 2017
>>>>>>
>>>>>> 2017-09-28 7:52 GMT-03:00 Andrew Lau <and...@andrewklau.com>:
>>>>>>
>>>>>>> Did you find any solution for this?
>>>>>>>
>>>>>>> On Fri, 15 Sep 2017 at 01:34 Mateus Caruccio <
>>>>>>> mateus.caruc...@getupcloud.com> wrote:
>>>>>>>
>>>>>>>> Yep, there it is:
>>>>>>>>
>>>>>>>> [OSEv3:children]
>>>>>>>> masters
>>>>>>>> etcd
>>>>>>>> nodes
>>>>>>>>
>>>>>>>> [OSEv3:vars]
>>>>>>>> deployment_type=origin
>>>>>>>> openshift_release=v3.6
>>>>>>>> debug_level=1
>>>>>>>> openshift_debug_level=1
>>>>>>>> openshift_node_debug_level=1
>>>>>>>> openshift_master_debug_level=1
>>>>>>>> openshift_master_access_token_max_seconds=2419200
>>>>>>>> osm_cluster_network_cidr=172.16.0.0/16
>>>>>>>> openshift_registry_selector="docker-registry=true"
>>>>>>>> openshift_hosted_registry_replicas=1
>>>>>>>>
>>>>>>>> openshift_master_cluster_hostname=api-cluster.example.com.br
>>>>>>>> openshift_master_cluster_public_hostname=api-cluster.example.com.br
>>>>>>>> osm_default_subdomain=example.com.br
>>>>>>>> openshift_master_default_subdomain=example.com.br
>>>>>>>> osm_default_node_selector="role=app"
>>>>>>>> os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
>>>>>>>> openshift_master_identity_providers=[{'name': 'htpasswd_auth',
>>>>>>>> 'login': 'true', 'challenge': 'true', 'kind': 
>>>>>>>> 'HTPasswdPasswordIdentityProvider',
>>>>>>>> 'filename': '/etc/origin/master/htpasswd'}]
>>>>>>>> osm_use_cockpit=false
>>>>>>>> containerized=False
>>>>>>>>
>>>>>>>> openshift_master_cluster_method=native
>>>>>>>> openshift_master_console_port=443
>>>>>>>> openshift_master_api_port=443
>>>>>>>&

Azure cloud provider - error adding nodes

2018-01-12 Thread Mateus Caruccio
After following the instructions from [1], origin-node fails to register
itself, logging this error message:


Jan 12 13:39:40 master-1 origin-node[85592]: W0112 13:39:40.254195   85592
sdn_controller.go:48] Could not find an allocated subnet for node:
master-1, Waiting...
Jan 12 13:39:40 master-1 origin-node[85592]: F0112 13:39:40.254221   85592
network.go:45] SDN node startup failed: failed to get subnet for this host:
master-1, error: timed out waiting for the condition


Is there any secret sauce I'm not aware of?

Thanks,
Mateus

[1]
https://docs.openshift.com/container-platform/3.7/install_config/configuring_azure.html
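For anyone hitting the same wall: the `Could not find an allocated subnet` message means the master controllers never created a HostSubnet record for the node (visible with `oc get hostsubnet`), and on Azure that is commonly a knock-on effect of a missing or malformed cloud-provider config, or of a node name that does not match the Azure VM name. As a rough sketch, not taken from this thread and with a field set that should be verified against the docs linked in [1], `/etc/azure/azure.conf` looks like:

```yaml
# Hedged sketch of /etc/azure/azure.conf; every value is a placeholder
# and the exact field names should be checked against the
# configuring_azure documentation referenced in [1].
tenantId: <azure-ad-tenant-uuid>
subscriptionId: <subscription-uuid>
aadClientId: <service-principal-app-id>
aadClientSecret: <service-principal-secret>
resourceGroup: <resource-group-containing-the-node-vms>
location: <azure-region, e.g. eastus>
```

The same file has to be referenced from both the master and node configs (`cloud-provider=azure`, `cloud-config=/etc/azure/azure.conf`) for subnet allocation to work.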

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: origin v3.7.0 images at docker.io

2017-12-19 Thread Mateus Caruccio
Sorry, I was not specific enough. I meant images for
origin-metrics-{hawkular-metrics,cassandra,heapster}.
AFAIR there was an issue with auth for hawkular on v3.6...

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-12-19 12:12 GMT-02:00 Clayton Coleman <ccole...@redhat.com>:

> Did I forget to send the email?  We cut a few weeks ago.
>
> On Dec 19, 2017, at 8:46 AM, Mateus Caruccio <mateus.caruccio@getupcloud.
> com> wrote:
>
> Hey, is there any estimate for when 3.7.0 will be released?
> tnx
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> Gartner Cool Vendor 2017
>
> 2017-11-21 16:54 GMT-02:00 Clayton Coleman <ccole...@redhat.com>:
>
>> We haven't cut 3.7.0 yet in origin.  Still waiting for final soak
>> determination - there are a few outstanding issues being chased.  I'd urge
>> everyone who wants 3.7.0 to verify that 3.7.0.rc0 works for them.
>>
>> On Tue, Nov 21, 2017 at 1:40 PM, Jason Brooks <jbro...@redhat.com> wrote:
>>
>>> Can we get v3.7.0 images for openshift/origin?
>>>
>>> See https://hub.docker.com/r/openshift/origin/tags/
>>>
>>> Regards, Jason
>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: SCC privileged not applying

2017-12-19 Thread Mateus Caruccio
Makes sense. Thanks for your clarification ;)

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-12-19 4:48 GMT-02:00 Weiwei Jiang <wji...@redhat.com>:

> Hi:
>
> I think there is some misunderstanding about how OpenShift works here.
>
> Actually you created a daemonset with a specific serviceaccount you created
> and granted the SCC privileged, right?
> But SCC admission verifies the creator account (you can see this with
> audit enabled), which should be the daemonset-controller or something like
> it, not the given serviceaccount.
> So you granted the new-relic account, but the creator is the
> daemonset-controller (roughly speaking; this may not be exactly the right
> serviceaccount that creates the target pod), hence this issue.
>
> And back to your scenario, I have no better suggestion if you insist on
> using a daemonset to create the pod.
>
> You could pick the pod template out of the daemonset and create the pod
> directly, granting the scc to your user (`oc whoami`), but you will lose
> the daemonset features.
>
>
> Regards!
>
> On Tue, Dec 19, 2017 at 3:01 AM Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> There is this daemonset which needs host access. I've created a
>> namespace, added `privileged` scc to a new serviceaccount and set pod to
>> run with that SA.
>>
>> The problem is openshift is not applying the privileged SCC to my
>> serviceAccount.
>>
>> *$ oc get ev*
>> LASTSEEN   FIRSTSEEN   COUNT   NAME             KIND        SUBOBJECT
>> TYPE      REASON         SOURCE       MESSAGE
>> 17s17s 25newrelic-agent   DaemonSet
>> Warning   FailedCreate   daemon-set   Error creating: pods
>> "newrelic-agent-" is forbidden: unable to validate against any security
>> context constraint: [provider restricted: .spec.securityContext.hostNetwork:
>> Invalid value: true: Host network is not allowed to be used provider
>> restricted: .spec.securityContext.hostPID: Invalid value: true: Host PID is
>> not allowed to be used provider restricted: .spec.securityContext.hostIPC:
>> Invalid value: true: Host IPC is not allowed to be used provider
>> restricted: .spec.containers[0].securityContext.privileged: Invalid
>> value: true: Privileged containers are not allowed provider restricted:
>> .spec.containers[0].securityContext.volumes[1]: Invalid value:
>> "hostPath": hostPath volumes are not allowed to be used provider
>> restricted: .spec.containers[0].securityContext.volumes[2]: Invalid
>> value: "hostPath": hostPath volumes are not allowed to be used provider
>> restricted: .spec.containers[0].securityContext.volumes[3]: Invalid
>> value: "hostPath": hostPath volumes are not allowed to be used provider
>> restricted: .spec.containers[0].securityContext.volumes[4]: Invalid
>> value: "hostPath": hostPath volumes are not allowed to be used provider
>> restricted: .spec.containers[0].securityContext.hostNetwork: Invalid
>> value: true: Host network is not allowed to be used provider restricted:
>> .spec.containers[0].securityContext.hostPID: Invalid value: true: Host
>> PID is not allowed to be used provider restricted: 
>> .spec.containers[0].securityContext.hostIPC:
>> Invalid value: true: Host IPC is not allowed to be used]
>>
>>
>> This is my config:
>>
>>
>> *$ oc version*
>> oc v3.6.0+c4dd4cf
>> kubernetes v1.6.1+5115d708d7
>> features: Basic-Auth GSSAPI Kerberos SPNEGO
>>
>> Server https://[REDACTED]
>> openshift v3.6.0+c4dd4cf
>> kubernetes v1.6.1+5115d708d7
>>
>>
>> *$ oc whoami*
>> system:admin
>>
>>
>> *$ oc get ds -o yaml -n new-relic*
>> apiVersion: v1
>> items:
>> - apiVersion: extensions/v1beta1
>>   kind: DaemonSet
>>   metadata:
>> creationTimestamp: 2017-12-18T18:20:42Z
>> generation: 1
>> labels:
>>   app: newrelic-agent
>>   tier: monitoring
>>   version: v1
>> name: newrelic-agent
>> namespace: new-relic
>> resourceVersion: "9280118"
>> selfLink: /apis/extensions/v1beta1/namespaces/new-relic/
>> daemonsets/newrelic-agent
>> uid: 286ed3c9-e420-11e7-aa46-000af7b3efa4
>>   spec:
>> selector:
>>   matchLabels:
>> name: newrelic
>> template:
>>   metadata:
>> creationTimestamp: null
>> labels:
>>   name: newrelic
>>   spec:
>> containers:
>>

Re: origin v3.7.0 images at docker.io

2017-12-19 Thread Mateus Caruccio
Hey, is there any estimate for when 3.7.0 will be released?
tnx

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-11-21 16:54 GMT-02:00 Clayton Coleman <ccole...@redhat.com>:

> We haven't cut 3.7.0 yet in origin.  Still waiting for final soak
> determination - there are a few outstanding issues being chased.  I'd urge
> everyone who wants 3.7.0 to verify that 3.7.0.rc0 works for them.
>
> On Tue, Nov 21, 2017 at 1:40 PM, Jason Brooks <jbro...@redhat.com> wrote:
>
>> Can we get v3.7.0 images for openshift/origin?
>>
>> See https://hub.docker.com/r/openshift/origin/tags/
>>
>> Regards, Jason
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


SCC privileged not applying

2017-12-18 Thread Mateus Caruccio
Container: true
  allowedCapabilities:
  - '*'
  apiVersion: v1
  defaultAddCapabilities: []
  fsGroup:
type: RunAsAny
  groups:
  - system:cluster-admins
  - system:nodes
  kind: SecurityContextConstraints
  metadata:
annotations:
  kubernetes.io/description: 'privileged allows access to all
privileged and host
features and the ability to run as any user, any group, any
fsGroup, and with
any SELinux context.  WARNING: this is the most relaxed SCC and
should be
used only for cluster administration. Grant with caution.'
creationTimestamp: 2017-10-05T19:28:00Z
name: privileged
namespace: ""
resourceVersion: "9278361"
selfLink: /api/v1/securitycontextconstraints/privileged
uid: 4cd4dab7-aa03-11e7-afc6-000af7b3f4a4
  priority: null
  readOnlyRootFilesystem: false
  requiredDropCapabilities: []
  runAsUser:
type: RunAsAny
  seLinuxContext:
type: RunAsAny
  seccompProfiles:
  - '*'
  supplementalGroups:
type: RunAsAny
  users:
  - system:serviceaccount:openshift-infra:build-controller
  - system:serviceaccount:management-infra:management-admin
  - system:serviceaccount:management-infra:inspector-admin
  - system:serviceaccount:default:registry
  - system:serviceaccount:aws-logging-fluentd:aws-logging-fluentd
  - system:serviceaccount:logging-test-deploy:aws-logging-fluentd
  - system:serviceaccount:default:logging-newrelic
  - system:serviceaccount:default:default

  - system:serviceaccount:new-relic:default
  - system:serviceaccount:new-relic:new-relic
  volumes:
  - '*'
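One thing worth double-checking, as an illustrative fragment rather than something from the thread: the DaemonSet pod template has to name the service account that carries the SCC grant, otherwise the pods are admitted as the namespace's `default` service account. A minimal sketch using the names above:

```yaml
# Illustrative fragment only: serviceAccountName must point at the SA
# granted the privileged SCC (new-relic here); without it the pod is
# validated as the namespace's "default" SA.
spec:
  template:
    spec:
      serviceAccountName: new-relic
      hostNetwork: true
      hostPID: true
      hostIPC: true
```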

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Webhook token auth

2017-12-01 Thread Mateus Caruccio
Exactly.

On Dec 1, 2017 at 19:09, "Clayton Coleman" <ccole...@redhat.com> wrote:

> At the current time authenticator web hooks aren't supported (3.6).  It's
> being discussed for 3.9, but more realistically 3.10.
>
> This is for IAM integration with AWS?
>
> On Fri, Dec 1, 2017 at 3:48 PM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Hi.
>> Is it possible to use external webhook auth on openshift?
>>
>> I've edited origin-master with this fragment:
>>
>> kubernetesMasterConfig:
>>   apiServerArguments:
>> authentication-token-webhook-config-file:
>> /etc/kubernetes/heptio-authenticator-aws/kubeconfig.yaml
>>
>> However it looks like apiserver is not even hitting the webhook service
>> at 127.0.0.1
>> No log messages even when loglevel=10
>>
>>
>> $ oc version
>> oc v3.6.1+008f2d5
>> kubernetes v1.6.1+5115d708d7
>> features: Basic-Auth GSSAPI Kerberos SPNEGO
>>
>> Server https://XXX.XXX.getupcloud.com:443
>> openshift v3.6.1+008f2d5
>> kubernetes v1.6.1+5115d708d7
>>
>>
>> Thanks
>>
>> --
>> Mateus Caruccio / Master of Puppets
>> GetupCloud.com
>> We make the infrastructure invisible
>> Gartner Cool Vendor 2017
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Webhook token auth

2017-12-01 Thread Mateus Caruccio
Hi.
Is it possible to use external webhook auth on openshift?

I've edited origin-master with this fragment:

kubernetesMasterConfig:
  apiServerArguments:
authentication-token-webhook-config-file: /etc/kubernetes/heptio-
authenticator-aws/kubeconfig.yaml

However it looks like apiserver is not even hitting the webhook service at
127.0.0.1
No log messages even when loglevel=10
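For reference, the file handed to `authentication-token-webhook-config-file` is a standard kubeconfig describing the webhook endpoint. A minimal sketch, with the server URL and certificate path assumed from heptio-authenticator-aws defaults rather than taken from this thread:

```yaml
# Hedged sketch of the webhook authenticator kubeconfig; adjust the
# server address and CA path to match your deployment.
apiVersion: v1
kind: Config
clusters:
- name: heptio-authenticator-aws
  cluster:
    server: https://127.0.0.1:21362/authenticate
    certificate-authority: /etc/kubernetes/heptio-authenticator-aws/cert.pem
users:
- name: apiserver
contexts:
- name: webhook
  context:
    cluster: heptio-authenticator-aws
    user: apiserver
current-context: webhook
```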


$ oc version
oc v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://XXX.XXX.getupcloud.com:443
openshift v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7


Thanks

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Hawkular metrics returns Forbidden

2017-10-05 Thread Mateus Caruccio
Hey Matt, any update on this?

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-09-28 10:19 GMT-03:00 Matthew Wringe <mwri...@redhat.com>:

> Wait, there is another update that we need. That PR probably wont work
> properly for you yet. I am investigating
>
> On Thu, Sep 28, 2017 at 9:06 AM, Matthew Wringe <mwri...@redhat.com>
> wrote:
>
>> The PR is this: https://github.com/openshift/origin-metrics/pull/382
>>
>> It was a problem in one of our releases of Hawkular Metrics, but I didn't
>> think it made it into the 3.6 release (but it did).
>>
>> On Thu, Sep 28, 2017 at 8:41 AM, Mateus Caruccio <
>> mateus.caruc...@getupcloud.com> wrote:
>>
>>> Sweet! Would you mind pointing the PR url?
>>> Thanks.
>>>
>>> --
>>> Mateus Caruccio / Master of Puppets
>>> GetupCloud.com
>>> We make the infrastructure invisible
>>> Gartner Cool Vendor 2017
>>>
>>> 2017-09-28 9:34 GMT-03:00 Matthew Wringe <mwri...@redhat.com>:
>>>
>>>> Ah, sorry, this somehow got missed. We have had an issue that slipped
>>>> into 3.6.0 that we are currently in progress to fix. The PR has been
>>>> submitted and we are waiting for a new image to be built and pushed out.
>>>>
>>>> On Thu, Sep 28, 2017 at 6:53 AM, Mateus Caruccio <
>>>> mateus.caruc...@getupcloud.com> wrote:
>>>>
>>>>> Nope, no time to debug yet :(
>>>>>
>>>>> --
>>>>> Mateus Caruccio / Master of Puppets
>>>>> GetupCloud.com
>>>>> We make the infrastructure invisible
>>>>> Gartner Cool Vendor 2017
>>>>>
>>>>> 2017-09-28 7:52 GMT-03:00 Andrew Lau <and...@andrewklau.com>:
>>>>>
>>>>>> Did you find any solution for this?
>>>>>>
>>>>>> On Fri, 15 Sep 2017 at 01:34 Mateus Caruccio <
>>>>>> mateus.caruc...@getupcloud.com> wrote:
>>>>>>
>>>>>>> Yep, there it is:
>>>>>>>
>>>>>>> [OSEv3:children]
>>>>>>> masters
>>>>>>> etcd
>>>>>>> nodes
>>>>>>>
>>>>>>> [OSEv3:vars]
>>>>>>> deployment_type=origin
>>>>>>> openshift_release=v3.6
>>>>>>> debug_level=1
>>>>>>> openshift_debug_level=1
>>>>>>> openshift_node_debug_level=1
>>>>>>> openshift_master_debug_level=1
>>>>>>> openshift_master_access_token_max_seconds=2419200
>>>>>>> osm_cluster_network_cidr=172.16.0.0/16
>>>>>>> openshift_registry_selector="docker-registry=true"
>>>>>>> openshift_hosted_registry_replicas=1
>>>>>>>
>>>>>>> openshift_master_cluster_hostname=api-cluster.example.com.br
>>>>>>> openshift_master_cluster_public_hostname=api-cluster.example.com.br
>>>>>>> osm_default_subdomain=example.com.br
>>>>>>> openshift_master_default_subdomain=example.com.br
>>>>>>> osm_default_node_selector="role=app"
>>>>>>> os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
>>>>>>> openshift_master_identity_providers=[{'name': 'htpasswd_auth',
>>>>>>> 'login': 'true', 'challenge': 'true', 'kind': 
>>>>>>> 'HTPasswdPasswordIdentityProvider',
>>>>>>> 'filename': '/etc/origin/master/htpasswd'}]
>>>>>>> osm_use_cockpit=false
>>>>>>> containerized=False
>>>>>>>
>>>>>>> openshift_master_cluster_method=native
>>>>>>> openshift_master_console_port=443
>>>>>>> openshift_master_api_port=443
>>>>>>>
>>>>>>> openshift_master_overwrite_named_certificates=true
>>>>>>> openshift_master_named_certificates=[{"certfile":"{{lookup('
>>>>>>> env','PWD')}}/certs/wildcard.example.com.br.crt","keyfile":"
>>>>>>> {{lookup('env','PWD')}}/certs/wildcard.example.com.br.key",
>>>>>>> "cafile":"{{lookup('env','PWD')}}/certs/wildcard.example.com
>>>>>>> .br.int.crt"}]
>>>>>>> openshift_master_session_auth_se

Re: Hawkular metrics returns Forbidden

2017-09-28 Thread Mateus Caruccio
Sweet! Would you mind pointing the PR url?
Thanks.

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-09-28 9:34 GMT-03:00 Matthew Wringe <mwri...@redhat.com>:

> Ah, sorry, this somehow got missed. We have had an issue that slipped into
> 3.6.0 that we are currently in progress to fix. The PR has been submitted
> and we are waiting for a new image to be built and pushed out.
>
> On Thu, Sep 28, 2017 at 6:53 AM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Nope, no time to debug yet :(
>>
>> --
>> Mateus Caruccio / Master of Puppets
>> GetupCloud.com
>> We make the infrastructure invisible
>> Gartner Cool Vendor 2017
>>
>> 2017-09-28 7:52 GMT-03:00 Andrew Lau <and...@andrewklau.com>:
>>
>>> Did you find any solution for this?
>>>
>>> On Fri, 15 Sep 2017 at 01:34 Mateus Caruccio <
>>> mateus.caruc...@getupcloud.com> wrote:
>>>
>>>> Yep, there it is:
>>>>
>>>> [OSEv3:children]
>>>> masters
>>>> etcd
>>>> nodes
>>>>
>>>> [OSEv3:vars]
>>>> deployment_type=origin
>>>> openshift_release=v3.6
>>>> debug_level=1
>>>> openshift_debug_level=1
>>>> openshift_node_debug_level=1
>>>> openshift_master_debug_level=1
>>>> openshift_master_access_token_max_seconds=2419200
>>>> osm_cluster_network_cidr=172.16.0.0/16
>>>> openshift_registry_selector="docker-registry=true"
>>>> openshift_hosted_registry_replicas=1
>>>>
>>>> openshift_master_cluster_hostname=api-cluster.example.com.br
>>>> openshift_master_cluster_public_hostname=api-cluster.example.com.br
>>>> osm_default_subdomain=example.com.br
>>>> openshift_master_default_subdomain=example.com.br
>>>> osm_default_node_selector="role=app"
>>>> os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
>>>> openshift_master_identity_providers=[{'name': 'htpasswd_auth',
>>>> 'login': 'true', 'challenge': 'true', 'kind': 
>>>> 'HTPasswdPasswordIdentityProvider',
>>>> 'filename': '/etc/origin/master/htpasswd'}]
>>>> osm_use_cockpit=false
>>>> containerized=False
>>>>
>>>> openshift_master_cluster_method=native
>>>> openshift_master_console_port=443
>>>> openshift_master_api_port=443
>>>>
>>>> openshift_master_overwrite_named_certificates=true
>>>> openshift_master_named_certificates=[{"certfile":"{{lookup('
>>>> env','PWD')}}/certs/wildcard.example.com.br.crt","keyfile":"
>>>> {{lookup('env','PWD')}}/certs/wildcard.example.com.br.key",
>>>> "cafile":"{{lookup('env','PWD')}}/certs/wildcard.example.com
>>>> .br.int.crt"}]
>>>> openshift_master_session_auth_secrets=['F71uoyI/Tkv/LiDH2PiF
>>>> KK1o76bLoH10+uE2a']
>>>> openshift_master_session_encryption_secrets=['bjDwQfiy4ksB/3
>>>> qph87BGulYb/GUho6K']
>>>> openshift_master_audit_config={"enabled": true, "auditFilePath":
>>>> "/var/log/openshift-audit/openshift-audit.log",
>>>> "maximumFileRetentionDays": 30, "maximumFileSizeMegabytes": 500,
>>>> "maximumRetainedFiles": 10}
>>>>
>>>> openshift_ca_cert_expire_days=1825
>>>> openshift_node_cert_expire_days=730
>>>> openshift_master_cert_expire_days=730
>>>> etcd_ca_default_days=1825
>>>>
>>>> openshift_hosted_router_create_certificate=false
>>>> openshift_hosted_manage_router=true
>>>> openshift_router_selector="role=infra"
>>>> openshift_hosted_router_replicas=2
>>>> openshift_hosted_router_certificate={"certfile":"{{lookup('e
>>>> nv','PWD')}}/certs/wildcard.example.com.br.crt","keyfile":"{
>>>> {lookup('env','PWD')}}/certs/wildcard.example.com.br.key",
>>>> "cafile":"{{lookup('env','PWD')}}/certs/wildcard.example.com
>>>> .br.int.crt"}
>>>>
>>>> openshift_hosted_metrics_deploy=true
>>>> openshift_hosted_metrics_public_url=https://hawkular-metrics
>>>> .example.com.br/hawkular/metrics
>>>>
>>>> openshift_hosted_logging_deploy=true
>>>> opens

Re: Hawkular metrics returns Forbidden

2017-09-28 Thread Mateus Caruccio
Nope, no time to debug yet :(

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-09-28 7:52 GMT-03:00 Andrew Lau <and...@andrewklau.com>:

> Did you find any solution for this?
>
> On Fri, 15 Sep 2017 at 01:34 Mateus Caruccio <mateus.caruccio@getupcloud.
> com> wrote:
>
>> Yep, there it is:
>>
>> [OSEv3:children]
>> masters
>> etcd
>> nodes
>>
>> [OSEv3:vars]
>> deployment_type=origin
>> openshift_release=v3.6
>> debug_level=1
>> openshift_debug_level=1
>> openshift_node_debug_level=1
>> openshift_master_debug_level=1
>> openshift_master_access_token_max_seconds=2419200
>> osm_cluster_network_cidr=172.16.0.0/16
>> openshift_registry_selector="docker-registry=true"
>> openshift_hosted_registry_replicas=1
>>
>> openshift_master_cluster_hostname=api-cluster.example.com.br
>> openshift_master_cluster_public_hostname=api-cluster.example.com.br
>> osm_default_subdomain=example.com.br
>> openshift_master_default_subdomain=example.com.br
>> osm_default_node_selector="role=app"
>> os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
>> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
>> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
>> 'filename': '/etc/origin/master/htpasswd'}]
>> osm_use_cockpit=false
>> containerized=False
>>
>> openshift_master_cluster_method=native
>> openshift_master_console_port=443
>> openshift_master_api_port=443
>>
>> openshift_master_overwrite_named_certificates=true
>> openshift_master_named_certificates=[{"certfile":"{{
>> lookup('env','PWD')}}/certs/wildcard.example.com.br.crt","
>> keyfile":"{{lookup('env','PWD')}}/certs/wildcard.example.com.br.key",
>> "cafile":"{{lookup('env','PWD')}}/certs/wildcard.example.
>> com.br.int.crt"}]
>> openshift_master_session_auth_secrets=['F71uoyI/Tkv/
>> LiDH2PiFKK1o76bLoH10+uE2a']
>> openshift_master_session_encryption_secrets=['bjDwQfiy4ksB/3qph87BGulYb/
>> GUho6K']
>> openshift_master_audit_config={"enabled": true, "auditFilePath":
>> "/var/log/openshift-audit/openshift-audit.log",
>> "maximumFileRetentionDays": 30, "maximumFileSizeMegabytes": 500,
>> "maximumRetainedFiles": 10}
>>
>> openshift_ca_cert_expire_days=1825
>> openshift_node_cert_expire_days=730
>> openshift_master_cert_expire_days=730
>> etcd_ca_default_days=1825
>>
>> openshift_hosted_router_create_certificate=false
>> openshift_hosted_manage_router=true
>> openshift_router_selector="role=infra"
>> openshift_hosted_router_replicas=2
>> openshift_hosted_router_certificate={"certfile":"{{
>> lookup('env','PWD')}}/certs/wildcard.example.com.br.crt","
>> keyfile":"{{lookup('env','PWD')}}/certs/wildcard.example.com.br.key",
>> "cafile":"{{lookup('env','PWD')}}/certs/wildcard.example.com.br.int.crt"}
>>
>> openshift_hosted_metrics_deploy=true
>> openshift_hosted_metrics_public_url=https://hawkular-
>> metrics.example.com.br/hawkular/metrics
>>
>> openshift_hosted_logging_deploy=true
>> openshift_hosted_logging_hostname=kibana.example.com.br
>>
>> openshift_install_examples=true
>>
>> openshift_node_kubelet_args={'pods-per-core': ['20'], 'max-pods':
>> ['100'], 'image-gc-high-threshold': ['80'], 'image-gc-low-threshold':
>> ['50'],'minimum-container-ttl-duration': ['60s'],
>> 'maximum-dead-containers-per-container': ['1'],
>> 'maximum-dead-containers': ['15']}
>>
>> logrotate_scripts=[{"name": "syslog", "path": "/var/log/cron\n/var/log/
>> maillog\n/var/log/messages\n/var/log/secure\n/var/log/spooler\n",
>> "options": ["daily", "rotate 7", "compress", "sharedscripts", "missingok"],
>> "scripts": {"postrotate": "/bin/kill -HUP `cat /var/run/syslogd.pid 2>
>> /dev/null` 2> /dev/null || true"}}]
>>
>> openshift_builddefaults_image_labels=[{'name':'builder','value':'true'}]
>> openshift_builddefaults_nodeselectors={'builder':'true'}
>> openshift_builddefaults_annotations={'builder':'true'}
>> openshift_builddefaults_resources_requests_cpu=10m
>> openshift_builddefaults_resources_requests_memory=128Mi
>> openshift_builddefaults_resources_limits_cpu=

Re: OpenShift Origin 3.6 + Ceph persistent storage problems with secret

2017-09-15 Thread Mateus Caruccio
Hey Piotr, I believe you'd have a better chance asking on
dev@lists.openshift.redhat.com (CCed)

Cheers,
Mateus


On Sep 15, 2017 at 05:08, "Piotr Baranowski" wrote:

*bump*

Anyone?

--

*From: *"Piotr Baranowski" 
*To: *"users" 
*Sent: *Thursday, August 31, 2017 20:22:19
*Subject: *OpenShift Origin 3.6 + Ceph persistent storage problems with secret

Hey group,

I'm struggling a little trying to integrate Origin 3.6 with Ceph.
First, several of the docs are out of sync with each other and give
contradictory instructions.

docs.openshift.org
access.redhat.com

They have slightly different examples of how to set up the integration.

I have an issue:
Creating the pvc automatically creates a PV.
I see that it was successful:

date master1.foo.bar origin-master-controllers[26578]: I0831
19:58:45.018017   26578 rbd.go:324] successfully created rbd image
"kubernetes-dynamic-pvc-07ef3830-8e76-11e7-80e4-5254000e374d"

I see that that rbd image was created:

[root@ceph1 ~]# rbd --pool=kube ls
kubernetes-dynamic-pvc-07ef3830-8e76-11e7-80e4-5254000e374d

[root@ceph1 ~]# rbd --pool=kube info kubernetes-dynamic-pvc-
07ef3830-8e76-11e7-80e4-5254000e374d
rbd image 'kubernetes-dynamic-pvc-07ef3830-8e76-11e7-80e4-5254000e374d':
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rb.0.3e0e6.2ae8944a
format: 1

I can create such storage from default project as well as from any other
project i want.

However, when I try to use it, the pod gets stuck at "Creating Container" in
state Pending.
date node1.foo.bar origin-node[36836]: E0831 20:16:22.995258   36836
rbd.go:459] failed to get secret from ["foo"/"ceph-secret-user"]
date node1.foo.bar origin-node[36836]: E0831 20:16:22.995296   36836
rbd.go:111] Couldn't get secret from foo/{
Name:ceph-secret-user,}
date node1.foo.bar origin-node[36836]: E0831 20:16:22.995338   36836
reconciler.go:308] operationExecutor.MountVolume failed for volume "
kubernetes.io/rbd/18573675-8e77-11e7-8a05-5254000e374d-
pvc-07e8d3a0-8e76-11e7-94f2-5254008efc4e" (spec.Name:
"pvc-07e8d3a0-8e76-11e7-94f2-5254008efc4e") pod
"18573675-8e77-11e7-8a05-5254000e374d"
(UID: "18573675-8e77-11e7-8a05-5254000e374d")
controllerAttachDetachEnabled: true with err: MountVolume.NewMounter failed
for volume "kubernetes.io/rbd/18573675-8e77-11e7-8a05-5254000e374d-
pvc-07e8d3a0-8e76-11e7-94f2-5254008efc4e" (spec.Name:
"pvc-07e8d3a0-8e76-11e7-94f2-5254008efc4e") pod
"18573675-8e77-11e7-8a05-5254000e374d"
(UID: "18573675-8e77-11e7-8a05-5254000e374d") with: failed to get secret
from ["foo"/"ceph-secret-user"]

(the message is from another attempt, so the PVC id does not match, but that
does not matter; the error message is pretty much the same for all attempts)

Any idea what's wrong?
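A note for readers who land here with the same error: the rbd volume plugin looks the user secret up in the pod's own namespace, and it must be of type `kubernetes.io/rbd`. A sketch with the names from the log above; the key value is a placeholder:

```yaml
# Illustrative only: ceph-secret-user must exist in the pod's
# namespace ("foo" in the log) with type kubernetes.io/rbd.
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-user
  namespace: foo
type: kubernetes.io/rbd
data:
  key: <base64-encoded key of the ceph client user>
```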

br

-- 
Piotr Baranowski

___
users mailing list
us...@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


-- 
Piotr Baranowski
CTO/VP/Chief Instructor@OSEC  mob://0048504242337
Why do IT people confuse Halloween with Christmas?
Because 31 OCT == 25 DEC

___
users mailing list
us...@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Hawkular metrics returns Forbidden

2017-09-14 Thread Mateus Caruccio
Yep, there it is:

[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
deployment_type=origin
openshift_release=v3.6
debug_level=1
openshift_debug_level=1
openshift_node_debug_level=1
openshift_master_debug_level=1
openshift_master_access_token_max_seconds=2419200
osm_cluster_network_cidr=172.16.0.0/16
openshift_registry_selector="docker-registry=true"
openshift_hosted_registry_replicas=1

openshift_master_cluster_hostname=api-cluster.example.com.br
openshift_master_cluster_public_hostname=api-cluster.example.com.br
osm_default_subdomain=example.com.br
openshift_master_default_subdomain=example.com.br
osm_default_node_selector="role=app"
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
'filename': '/etc/origin/master/htpasswd'}]
osm_use_cockpit=false
containerized=False

openshift_master_cluster_method=native
openshift_master_console_port=443
openshift_master_api_port=443

openshift_master_overwrite_named_certificates=true
openshift_master_named_certificates=[{"certfile":"{{lookup('env','PWD')}}/certs/wildcard.example.com.br.crt","keyfile":"{{lookup('env','PWD')}}/certs/wildcard.example.com.br.key",
"cafile":"{{lookup('env','PWD')}}/certs/wildcard.example.com.br.int.crt"}]
openshift_master_session_auth_secrets=['F71uoyI/Tkv/LiDH2PiFKK1o76bLoH10+uE2a']
openshift_master_session_encryption_secrets=['bjDwQfiy4ksB/3qph87BGulYb/GUho6K']
openshift_master_audit_config={"enabled": true, "auditFilePath":
"/var/log/openshift-audit/openshift-audit.log", "maximumFileRetentionDays":
30, "maximumFileSizeMegabytes": 500, "maximumRetainedFiles": 10}

openshift_ca_cert_expire_days=1825
openshift_node_cert_expire_days=730
openshift_master_cert_expire_days=730
etcd_ca_default_days=1825

openshift_hosted_router_create_certificate=false
openshift_hosted_manage_router=true
openshift_router_selector="role=infra"
openshift_hosted_router_replicas=2
openshift_hosted_router_certificate={"certfile":"{{lookup('env','PWD')}}/certs/wildcard.example.com.br.crt","keyfile":"{{lookup('env','PWD')}}/certs/wildcard.example.com.br.key",
"cafile":"{{lookup('env','PWD')}}/certs/wildcard.example.com.br.int.crt"}

openshift_hosted_metrics_deploy=true
openshift_hosted_metrics_public_url=
https://hawkular-metrics.example.com.br/hawkular/metrics

openshift_hosted_logging_deploy=true
openshift_hosted_logging_hostname=kibana.example.com.br

openshift_install_examples=true

openshift_node_kubelet_args={'pods-per-core': ['20'], 'max-pods': ['100'],
'image-gc-high-threshold': ['80'], 'image-gc-low-threshold':
['50'],'minimum-container-ttl-duration': ['60s'],
'maximum-dead-containers-per-container': ['1'], 'maximum-dead-containers':
['15']}

logrotate_scripts=[{"name": "syslog", "path":
"/var/log/cron\n/var/log/maillog\n/var/log/messages\n/var/log/secure\n/var/log/spooler\n",
"options": ["daily", "rotate 7", "compress", "sharedscripts", "missingok"],
"scripts": {"postrotate": "/bin/kill -HUP `cat /var/run/syslogd.pid 2>
/dev/null` 2> /dev/null || true"}}]

openshift_builddefaults_image_labels=[{'name':'builder','value':'true'}]
openshift_builddefaults_nodeselectors={'builder':'true'}
openshift_builddefaults_annotations={'builder':'true'}
openshift_builddefaults_resources_requests_cpu=10m
openshift_builddefaults_resources_requests_memory=128Mi
openshift_builddefaults_resources_limits_cpu=500m
openshift_builddefaults_resources_limits_memory=2Gi

openshift_upgrade_nodes_serial=1
openshift_upgrade_nodes_max_fail_percentage=0
openshift_upgrade_control_plane_nodes_serial=1
openshift_upgrade_control_plane_nodes_max_fail_percentage=0

openshift_disable_check=disk_availability,memory_availability

[masters]
e001vmov40p42
e001vmov40p51
e001vmov40p52

[etcd]
e001vmov40p42
e001vmov40p51
e001vmov40p52

[nodes]
e001vmov40p42 openshift_node_labels="{'role': 'master'}"
e001vmov40p51 openshift_node_labels="{'role': 'master'}"
e001vmov40p52 openshift_node_labels="{'role': 'master'}"

e001vmov40p45 openshift_node_labels="{'role': 'infra',
'docker-registry':'true', 'logging':'true'}"
e001vmov40p46 openshift_node_labels="{'role': 'infra', 'metrics': 'true'}"

e001vmov40p47 openshift_node_labels="{'role': 'app', 'builder': 'true'}"
e001vmov40p48 openshift_node_labels="{'role': 'app', 'builder': 'true'}"
e001vmov40p49 openshift_node_labels="{'role': 'app', 'builder': 'true'}"





--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-09-14

Re: Pods Not Terminating

2017-09-05 Thread Mateus Caruccio
Would you mind posting the issue link here so I can keep up on it? I'm
seeing some errors like those too.

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-09-05 18:28 GMT-03:00 Clayton Coleman <ccole...@redhat.com>:

> Please open a bug in openshift/origin and we'll triage it there.
>
> On Tue, Sep 5, 2017 at 5:14 PM, Patrick Tescher <patr...@outtherelabs.com>
> wrote:
>
>> The pods are still “terminating” and have been stuck in that state. New
>> pods have come and gone since then but the stuck ones are still stuck.
>>
>>
>> On Sep 5, 2017, at 2:13 PM, Clayton Coleman <ccole...@redhat.com> wrote:
>>
>> So the errors recur continuously for a given pod once they start
>> happening?
>>
>> On Tue, Sep 5, 2017 at 5:07 PM, Patrick Tescher <patr...@outtherelabs.com
>> > wrote:
>>
>>> No patches have been applied since we upgraded to 3.6.0 over a week ago.
>>> The errors just popped up for a few different pods in different namespaces.
>>> The only thing we did today was launch a stateful set in a new namespace.
>>> Those pods were not the ones throwing this error.
>>>
>>>
>>> On Sep 5, 2017, at 1:19 PM, Clayton Coleman <ccole...@redhat.com> wrote:
>>>
>>> Were any patches applied to the system?  Some of these are normal if
>>> they happen for a brief period of time.  Are you seeing these errors
>>> continuously for the same pod over and over?
>>>
>>> On Tue, Sep 5, 2017 at 3:23 PM, Patrick Tescher <
>>> patr...@outtherelabs.com> wrote:
>>>
>>>> This morning our cluster started experiencing an odd error on multiple
>>>> nodes. Pods are stuck in the terminating phase. In our node log I see the
>>>> following:
>>>>
>>>> Sep  5 19:17:22 ip-10-0-1-184 origin-node: E0905
>>>> 19:17:22.043257  112306 nestedpendingoperations.go:262] Operation for
>>>> "\"kubernetes.io/secret/182285ee-9267-11e7-b7be-06415eb17bbf
>>>> -default-token-f18hx\"
>>>> (\"182285ee-9267-11e7-b7be-06415eb17bbf\")" failed. No retries
>>>> permitted until 2017-09-05 19:17:22.543230782 +0000 UTC
>>>> (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for
>>>> volume "kubernetes.io/secret/182285ee-9267-11e7-b7be-06415eb17bbf-d
>>>> efault-token-f18hx" (volume.spec.Name:
>>>> "default-token-f18hx") pod "182285ee-9267-11e7-b7be-06415eb17bbf"
>>>> (UID: "182285ee-9267-11e7-b7be-06415eb17bbf") with: remove
>>>> /var/lib/origin/openshift.local.volumes/pods/182285ee-9267-1
>>>> 1e7-b7be-06415eb17bbf/volumes/kubernetes.io~secret/default-token-f18hx:
>>>> device or resource busy
>>>>
>>>> That path is not mounted (running mount does not list it) and running
>>>> fuser -v on that directory does not show anything. Trying to rmdir results
>>>> in a similar error:
>>>>
>>>> sudo rmdir var/lib/origin/openshift.local.volumes/pods/182285ee-9267-11
>>>> e7-b7be-06415eb17bbf/volumes/kubernetes.io~secret/default-token-f18hx
>>>> rmdir: failed to remove ‘var/lib/origin/openshift.loca
>>>> l.volumes/pods/182285ee-9267-11e7-b7be-06415eb17bbf/volumes/
>>>> kubernetes.io~secret/default-token-f18hx’: No such file or directory
>>>>
>>>> Is anyone else getting this error?
>>>>
>>>>
>>>> ___
>>>> dev mailing list
>>>> dev@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>
>>>>
>>>
>>>
>>
>>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
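
A practical way to track down the "device or resource busy" unmount failures
discussed above is to look for processes whose mount namespace still
references the volume path. An illustrative sketch, assuming Linux with a
readable /proc (not from the original thread):

```python
import glob


def mounts_referencing(mountinfo_lines, target):
    """Return mount points containing `target` from /proc/<pid>/mountinfo
    lines; the mount point is the 5th whitespace-separated field."""
    hits = []
    for line in mountinfo_lines:
        fields = line.split()
        if len(fields) >= 5 and target in fields[4]:
            hits.append(fields[4])
    return hits


def pids_holding(target):
    """Scan all processes for a mount namespace that still references
    `target` -- a lingering reference like this is what makes the
    kubelet's unmount fail with 'device or resource busy'."""
    pids = []
    for path in glob.glob("/proc/[0-9]*/mountinfo"):
        try:
            with open(path) as f:
                if mounts_referencing(f, target):
                    pids.append(int(path.split("/")[2]))
        except OSError:
            pass  # the process exited while we were scanning
    return pids
```

Running pids_holding("default-token-f18hx") on an affected node would list
candidate PIDs whose mount namespaces can then be inspected.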


Re: Openshift-only swagger api

2017-06-16 Thread Mateus Caruccio
Thanks, I will keep an eye on it.

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible

2017-06-16 17:30 GMT-03:00 Clayton Coleman <ccole...@redhat.com>:

> open api should have a set of extension fields that say what group the
> resources come from, but there's a bug open that they aren't set.  Not sure
> when that will get looked at  openshift-openapi-spec.json missing
> x-kubernetes-group-version-kind
> <https://github.com/openshift/origin/issues/14373>
>
> On Fri, Jun 16, 2017 at 4:28 PM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Just to clarify, I meaning swagger 2.0.
>>
>> --
>> Mateus Caruccio / Master of Puppets
>> GetupCloud.com
>> We make the infrastructure invisible
>>
>> 2017-06-16 17:15 GMT-03:00 Mateus Caruccio <mateus.caruc...@getupcloud.co
>> m>:
>>
>>> Hello!
>>>
>>> Is there any other way to get all swagger api definitions for openshift
>>> only objects other than https://github.com/opensh
>>> ift/origin/tree/master/api/swagger-spec?
>>>
>>> What I am trying to achieve is to generate python client libs like this
>>> https://github.com/kubernetes-client/python-base
>>>
>>> I already have a fully operational client in https://github.com/caruccio
>>> /client-python using openshift's swagger but that duplicates all
>>> kubernetes endpoints/models already present in the original client-python.
>>>
>>> --
>>> Mateus Caruccio / Master of Puppets
>>> GetupCloud.com
>>> We make the infrastructure invisible
>>>
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Openshift-only swagger api

2017-06-16 Thread Mateus Caruccio
Just to clarify, I mean Swagger 2.0.

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible

2017-06-16 17:15 GMT-03:00 Mateus Caruccio <mateus.caruc...@getupcloud.com>:

> Hello!
>
> Is there any other way to get all swagger api definitions for openshift
> only objects other than https://github.com/openshift/origin/tree/master/
> api/swagger-spec?
>
> What I am trying to achieve is to generate python client libs like this
> https://github.com/kubernetes-client/python-base
>
> I already have a fully operational client in https://github.com/
> caruccio/client-python using openshift's swagger but that duplicates all
> kubernetes endpoints/models already present in the original client-python.
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Openshift-only swagger api

2017-06-16 Thread Mateus Caruccio
Hello!

Is there any other way to get all swagger api definitions for openshift
only objects other than
https://github.com/openshift/origin/tree/master/api/swagger-spec?

What I am trying to achieve is to generate python client libs like this
https://github.com/kubernetes-client/python-base

I already have a fully operational client in
https://github.com/caruccio/client-python using openshift's swagger but
that duplicates all kubernetes endpoints/models already present in the
original client-python.
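
One way to avoid that duplication when generating a client is to pre-filter
the Swagger 2.0 spec down to OpenShift-specific paths before feeding it to
the generator. A rough sketch; treating the `/oapi/` prefix as the
OpenShift-only marker is an assumption about the 1.x API layout, not an
official rule:

```python
def openshift_only_paths(spec):
    """Return a shallow copy of a Swagger 2.0 spec keeping only the
    paths assumed to be OpenShift-specific (the /oapi/ prefix)."""
    filtered = dict(spec)
    filtered["paths"] = {
        path: ops for path, ops in spec.get("paths", {}).items()
        if path.startswith("/oapi/")  # assumed OpenShift-only prefix
    }
    return filtered
```

The filtered spec can then be handed to the same generator used for
client-python so Kubernetes endpoints/models come only from upstream.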

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Erased pvc disk

2017-04-07 Thread Mateus Caruccio
Is it possible for this line to run while a PVC is still mounted?

https://github.com/openshift/origin/blob/7558d75e1b677c019259136a73abbd625591f5ed/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go#L2123

I got an entire disk erased, with no FS/ceph corruption indications, and
tons of the following messages:

I0401 19:05:03.804564    1422 kubelet.go:2117] Failed to remove orphaned
pod "2b46c157-16e5-11e7-9f74-000d3ac02da0" dir; err: remove
/var/lib/docker/openshift.local.volumes/pods/2b46c157-16e5-11e7-9f74-000d3ac02da0/volumes/
kubernetes.io~rbd/ceph-6704: device or resource busy


Regards,
--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Flocker PVC

2017-04-05 Thread Mateus Caruccio
Thanks for your reply.
I'm already using Ceph on Azure, but it's expensive and complex to maintain.
Since my OpenShift is still on v1.3, I'm always looking for alternatives
until we can update to a version which supports the Azure Disk StorageClass
(even better if it's the new Azure Managed Data Disk).

Regards,

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible

2017-04-05 22:20 GMT-03:00 Jonathan Yu <jaw...@redhat.com>:

>
>
> On Wed, Apr 5, 2017 at 6:07 PM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Hi There.
>> Is it possible to create PVs/PVCs backed by Flocker datasets?
>>
>
> I'm guessing no, given: https://github.com/openshift/origin/issues/9398 -
> not sure if that issue is outdated, though. Can you provide some more
> context regarding why you're interested in Flocker over the supported
> alternatives? Have you considered Container Native Storage (Gluster on
> OpenShift) https://www.redhat.com/en/technologies/storage/use-
> cases/container-native-storage
>
> --
> Jonathan Yu / Software Engineer, OpenShift by Red Hat / @jawnsy
> <https://twitter.com/jawnsy>
>
> *“There are a million ways to get rich. But there’s only one way to stay
> rich: Humility, often to the point of paranoia. The irony is that few
> things squash humility like getting rich in the first place.”* — Morgan
> Housel, Getting Rich vs. Staying Rich
> <http://www.collaborativefund.com/blog/getting-rich-vs-staying-rich/>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Flocker PVC

2017-04-05 Thread Mateus Caruccio
Hi There.
Is it possible to create PVs/PVCs backed by Flocker datasets?
Are there any security implications?

Thanks

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Heapster failing for some pods

2017-03-24 Thread Mateus Caruccio
Turns out it was excessive disk reads across all nodes. There were too many
container start errors.

Thanks Derek for the tip and Solly for your time.
I guess the logs won't be necessary anymore.

Regards,

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible

2017-03-23 15:15 GMT+00:00 Solly Ross <sr...@redhat.com>:

> Also, would it be possible to see the full set of Heapster logs?
> Sometimes there can be useful signals in there as to what's going
> on that aren't immediately obvious.
>
> Best Regards,
> Solly Ross
>
> - Original Message -
> > From: "Derek Carr" <dec...@redhat.com>
> > To: "Mateus Caruccio" <mateus.caruc...@getupcloud.com>
> > Cc: "Solly Ross" <sr...@redhat.com>, dev@lists.openshift.redhat.com
> > Sent: Wednesday, March 22, 2017 11:27:11 PM
> > Subject: Re: Heapster failing for some pods
> >
> > Are you seeing high iops on impacted nodes?
> >
> > If so, it could be related to the following:
> > https://github.com/openshift/origin/pull/12822
> >
> > If so, you can try to remove thin_ls from your host so it will not be
> used
> > to do per container devicemapper usage stats in cAdvisor which has been
> > shown to cause issues similar to this.
> >
> > Thanks,
> >
> > On Wed, Mar 22, 2017 at 9:42 PM Mateus Caruccio <
> > mateus.caruc...@getupcloud.com> wrote:
> >
> > > At
> > > https://paste.fedoraproject.org/paste/FYFahXSMMQOVUWHkXcrer15M1UNdIG
> YhyRLivL9gydE=
> > > you can find a log grep from heapster with --sink=log set.
> > >
> > > Looking for pod "portal-107-rg2ia" one can see it's not being sinked
> every
> > > scraping period (only 3/9 during this snippet).
> > >
> > >
> > >
> > > --
> > > Mateus Caruccio / Master of Puppets
> > > GetupCloud.com
> > > We make the infrastructure invisible
> > >
> > > 2017-03-22 19:43 GMT-03:00 Derek Carr <dec...@redhat.com>:
> > >
> > > +Solly
> > >
> > > Anything you can assist with here?
> > >
> > > Thanks,
> > >
> > > On Wed, Mar 22, 2017 at 6:27 PM Mateus Caruccio <
> > > mateus.caruc...@getupcloud.com> wrote:
> > >
> > > Hi.
> > >
> > > Heapster is experiencing failures for some pods of the cluster, which
> in
> > > turn causes HPA to malfunction.
> > >
> > > From project events I can see:
> > >
> > > 2017-03-22T22:13:29Z   2017-03-22T21:32:59Z   32portal
> > >  HorizontalPodAutoscaler   Warning   FailedGetMetrics
> > > {horizontal-pod-autoscaler }   failed to get CPU consumption and
> request:
> > > metrics obtained for 2/4 of pods
> > > 2017-03-22T22:13:29Z   2017-03-22T21:32:59Z   32portal
> > >  HorizontalPodAutoscaler Warning   FailedComputeReplicas
> > > {horizontal-pod-autoscaler }   failed to get CPU utilization: failed
> to get
> > > CPU consumption and request: metrics obtained for 2/4 of pods
> > >
> > >
> > > Heapster logs says some pods have no metrics, while other pods from the
> > > same project does:
> > >
> > > I0322 22:10:29.104727   1 handlers.go:242] No metrics for container
> > > wordpress in pod kondzilla/portal-107-rg2ia
> > > I0322 22:10:29.104746   1 handlers.go:178] No metrics for pod
> > > kondzilla/portal-107-rg2ia
> > > ...
> > > I0322 22:12:21.780763   1 pod_based_enricher.go:141] Container
> > > namespace:kondzilla/pod:portal-107-rg2ia/container:wordpress not
> found,
> > > creating a stub
> > >
> > >
> > > Hitting kubelete's /stats/container/ does returns valid stats, as
> > > expected.
> > >
> > >
> > > I'm running:
> > >
> > > openshift v1.3.1
> > > kubernetes v1.3.0+52492b4
> > > etcd 2.3.0+git
> > >
> > > openshift/origin-metrics-cassandra:v1.3.1
> > > openshift/origin-metrics-hawkular-metrics:v1.3.1
> > > openshift/origin-metrics-heapster:v1.3.2 (v1.3.1 has the same effect)
> > >
> > >
> > > Thanks,
> > >
> > > --
> > > Mateus Caruccio / Master of Puppets
> > > GetupCloud.com
> > > We make the infrastructure invisible
> > > ___
> > > dev mailing list
> > > dev@lists.openshift.redhat.com
> > > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> > >
> > >
> > >
> >
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Heapster failing for some pods

2017-03-22 Thread Mateus Caruccio
At
https://paste.fedoraproject.org/paste/FYFahXSMMQOVUWHkXcrer15M1UNdIGYhyRLivL9gydE=
you can find a log grep from heapster with --sink=log set.

Looking for pod "portal-107-rg2ia", one can see it's not being written to
the sink every scraping period (only 3/9 scrapes during this snippet).



--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible

2017-03-22 19:43 GMT-03:00 Derek Carr <dec...@redhat.com>:

> +Solly
>
> Anything you can assist with here?
>
> Thanks,
>
> On Wed, Mar 22, 2017 at 6:27 PM Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Hi.
>>
>> Heapster is experiencing failures for some pods of the cluster, which in
>> turn causes HPA to malfunction.
>>
>> From project events I can see:
>>
>> 2017-03-22T22:13:29Z   2017-03-22T21:32:59Z   32portal
>>  HorizontalPodAutoscaler   Warning   FailedGetMetrics
>> {horizontal-pod-autoscaler }   failed to get CPU consumption and request:
>> metrics obtained for 2/4 of pods
>> 2017-03-22T22:13:29Z   2017-03-22T21:32:59Z   32portal
>>  HorizontalPodAutoscaler Warning   FailedComputeReplicas
>> {horizontal-pod-autoscaler }   failed to get CPU utilization: failed to get
>> CPU consumption and request: metrics obtained for 2/4 of pods
>>
>>
>> Heapster logs says some pods have no metrics, while other pods from the
>> same project does:
>>
>> I0322 22:10:29.104727   1 handlers.go:242] No metrics for container
>> wordpress in pod kondzilla/portal-107-rg2ia
>> I0322 22:10:29.104746   1 handlers.go:178] No metrics for pod
>> kondzilla/portal-107-rg2ia
>> ...
>> I0322 22:12:21.780763   1 pod_based_enricher.go:141] Container
>> namespace:kondzilla/pod:portal-107-rg2ia/container:wordpress not found,
>> creating a stub
>>
>>
>> Hitting kubelete's /stats/container/ does returns valid stats, as
>> expected.
>>
>>
>> I'm running:
>>
>> openshift v1.3.1
>> kubernetes v1.3.0+52492b4
>> etcd 2.3.0+git
>>
>> openshift/origin-metrics-cassandra:v1.3.1
>> openshift/origin-metrics-hawkular-metrics:v1.3.1
>> openshift/origin-metrics-heapster:v1.3.2 (v1.3.1 has the same effect)
>>
>>
>> Thanks,
>>
>> --
>> Mateus Caruccio / Master of Puppets
>> GetupCloud.com
>> We make the infrastructure invisible
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Heapster failing for some pods

2017-03-22 Thread Mateus Caruccio
Hi.

Heapster is experiencing failures for some pods of the cluster, which in
turn causes HPA to malfunction.

From project events I can see:

2017-03-22T22:13:29Z   2017-03-22T21:32:59Z   32   portal
 HorizontalPodAutoscaler   Warning   FailedGetMetrics
{horizontal-pod-autoscaler }   failed to get CPU consumption and request:
metrics obtained for 2/4 of pods
2017-03-22T22:13:29Z   2017-03-22T21:32:59Z   32   portal
 HorizontalPodAutoscaler Warning   FailedComputeReplicas
{horizontal-pod-autoscaler }   failed to get CPU utilization: failed to get
CPU consumption and request: metrics obtained for 2/4 of pods
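
The failure mode above can be shown in miniature: when some pods have no
sample, the autoscaler reports an error instead of extrapolating from the
pods it did see. An illustrative sketch of that behavior (not the actual
HPA code):

```python
def cpu_utilization(samples, requests):
    """Average CPU utilization across pods, mimicking the HPA error above.

    `samples` maps pod name -> measured millicores; pods absent from it
    produced no metrics (like portal-107-rg2ia in the Heapster logs).
    `requests` maps every pod to its requested millicores."""
    if len(samples) < len(requests):
        raise ValueError("metrics obtained for %d/%d of pods"
                         % (len(samples), len(requests)))
    ratios = [samples[pod] / requests[pod] for pod in samples]
    return sum(ratios) / len(ratios)
```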


Heapster logs says some pods have no metrics, while other pods from the
same project does:

I0322 22:10:29.104727   1 handlers.go:242] No metrics for container
wordpress in pod kondzilla/portal-107-rg2ia
I0322 22:10:29.104746   1 handlers.go:178] No metrics for pod
kondzilla/portal-107-rg2ia
...
I0322 22:12:21.780763   1 pod_based_enricher.go:141] Container
namespace:kondzilla/pod:portal-107-rg2ia/container:wordpress not found,
creating a stub


Hitting the kubelet's /stats/container/ endpoint does return valid stats, as expected.


I'm running:

openshift v1.3.1
kubernetes v1.3.0+52492b4
etcd 2.3.0+git

openshift/origin-metrics-cassandra:v1.3.1
openshift/origin-metrics-hawkular-metrics:v1.3.1
openshift/origin-metrics-heapster:v1.3.2 (v1.3.1 has the same effect)


Thanks,

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Unable to mount GCE Persistent Disks

2017-03-14 Thread Mateus Caruccio
Thanks Clayton, Andrew and Erik.

It was exactly the hostname. It was being defined by Ansible in
node-config.yaml instead of the (short) node name.
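
For reference, the constraint can be checked locally; the pattern below is
copied from the GCE 400 error quoted later in this thread:

```python
import re

# Regex quoted verbatim in the GCE "Invalid value" error message.
GCE_INSTANCE_NAME = re.compile(r"^[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?$")


def valid_gce_instance_name(name):
    """True when `name` is acceptable as a GCE instance name: lowercase,
    digits and hyphens only, max 63 chars -- which is why a fully
    qualified node name with dots fails."""
    return GCE_INSTANCE_NAME.fullmatch(name) is not None
```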


--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible

2017-03-14 12:36 GMT-03:00 Clayton Coleman <ccole...@redhat.com>:

> Resending to whole list, sorry Erik.
>
> On Mon, Mar 13, 2017 at 6:49 PM, Clayton Coleman <ccole...@redhat.com>
> wrote:
>
>> I think your node names in 1.4.1 need to match the GCE instance name, and
>> not be fully qualified domain names.  I hit this in origin-gce and we don't
>> have a fix for this until Kube 1.6
>>
>> On Mar 13, 2017, at 6:23 PM, Mateus Caruccio <
>> mateus.caruc...@getupcloud.com> wrote:
>>
>> Hi there. I'm trying to setup GCE persistent disk for my PVCs but keep
>> getting this error:
>>
>> Mar 13 22:18:22 n-0 origin-node[18945]: E0313 22:18:22.849255   18945
>> attacher.go:90] Error attaching PD "disk1" to node
>> "n-0.c.endless-duality-161113.internal": googleapi: Error 400: Invalid
>> value 'n-0.c.endless-duality-161113.internal'. Values must match the
>> following regular expression: '[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?',
>> invalidParameter
>> Mar 13 22:18:22 n-0 origin-node[18945]: E0313 22:18:22.849330   18945
>> nestedpendingoperations.go:253] Operation for "\"
>> kubernetes.io/gce-pd/disk1\""
>> failed. No retries permitted until 2017-03-13 22:20:22.849310037 +0000 UTC
>> (durationBeforeRetry 2m0s). Error: Failed to attach volume "pv-disk1" on
>> node "n-0.c.endless-duality-161113.internal" with: googleapi: Error 400:
>> Invalid value 'n-0.c.endless-duality-161113.internal'. Values must match
>> the following regular expression: '[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?',
>> invalidParameter
>>
>> # openshift version
>> openshift v1.4.1
>> kubernetes v1.4.0+776c994
>> etcd 3.1.0-rc.0
>>
>> # oc export pv/pv-disk1
>> apiVersion: v1
>> kind: PersistentVolume
>> metadata:
>>   annotations:
>> pv.kubernetes.io/bound-by-controller: "yes"
>>   creationTimestamp: null
>>   labels:
>> failure-domain.beta.kubernetes.io/region: us-east1
>> failure-domain.beta.kubernetes.io/zone: us-east1-b
>>   name: pv-disk1
>> spec:
>>   accessModes:
>>   - ReadWriteOnce
>>   capacity:
>> storage: 10Gi
>>   claimRef:
>> apiVersion: v1
>> kind: PersistentVolumeClaim
>> name: pvc-disk1
>> namespace: mateus
>> resourceVersion: "6166"
>> uid: a60391d7-083a-11e7-805a-42010a03
>>   gcePersistentDisk:
>> fsType: ext4
>> pdName: disk1
>>   persistentVolumeReclaimPolicy: Retain
>> status: {}
>>
>>
>> # oc get pvc -n mateus
>> NAME  STATUSVOLUME   CAPACITY   ACCESSMODES   AGE
>> pvc-01Bound pv-disk-01   10Gi   RWO   41m
>>
>>
>> Is that something wrong with GCE API client? How can I increase node log
>> level?
>>
>> Thanks!
>>
>> --
>> Mateus Caruccio / Master of Puppets
>> GetupCloud.com
>> We make the infrastructure invisible
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


NetworkManager missing field search on /etc/resolv.conf

2017-02-22 Thread Mateus Caruccio
Hello.

I'm trying openshift-ansible for OpenShift 1.4.1.
Is it OK for the search field to be missing from /etc/resolv.conf after
NetworkManager is (re)started?

I've tried some options to force it:

- setting the SEARCH and/or DOMAIN variables in /etc/sysconfig/network(-scripts)/
- adding [connection(-type(-ifname))] and [global-dns] sections to
NetworkManager.conf with the "searches" and "ipv4.dns-search" settings

After any of the above, when I `systemctl restart NetworkManager` the
search field is missing from /etc/resolv.conf. However, after a clean
reboot it is there.
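
A quick way to verify whether the directive survived a restart is to parse
the file directly. A small sketch (it only inspects resolv.conf; it does
not explain why NetworkManager drops the entry):

```python
def search_domains(resolv_conf_text):
    """Return the domains on the last 'search' line of resolv.conf.

    Per resolv.conf semantics, if 'search' (or 'domain') appears more
    than once, the last instance wins."""
    domains = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if parts and parts[0] == "search":
            domains = parts[1:]
    return domains
```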


Thanks,
--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: openshift-sti-build stuck

2017-02-09 Thread Mateus Caruccio
Thanks. I'm monitoring build pods until one gets stuck.
Will be back with a trace ASAP.

--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade

On Thu, Feb 9, 2017 at 12:50 PM, Clayton Coleman <ccole...@redhat.com>
wrote:

> In order to get an accurate dump you'll want to sigquit the process from a
> privileged user (cluster admin can rsh into the stick pod and kill -SIGQUIT
> 1, or find the process from the node and do similar on the right pid).
> That should dump the full stack to the main container logs, which will have
> more context.  Gdb doesn't do a great job of showing the full stack.
>
> On Feb 9, 2017, at 7:57 AM, Mateus Caruccio <mateus.caruccio@getupcloud.
> com> wrote:
>
> Sorry, forgot that:
>
> openshift/origin-sti-builder:v1.3.1
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com - Eliminamos a Gravidade
>
> On Thu, Feb 9, 2017 at 10:44 AM, Cesar Wong <cew...@redhat.com> wrote:
>
>> Hi Mateus,
>>
>> What is the version of the builder image?
>>
>> On Feb 9, 2017, at 6:32 AM, Mateus Caruccio <
>> mateus.caruc...@getupcloud.com> wrote:
>>
>> Hi. I'm seen from time to time (more than once in a week) builder images
>> stuck, doing nothing and lasting forever.
>>
>> Entering the container I can see /usr/bin/openshift-sti-build is up, but
>> not running.
>>
>>
>> Here are some clues:
>>
>> *$ oc logs pods/web-22-build -n **
>>
>> Pulling image "getupcloud/sti-php-extra:melissa" ...
>> Pulling image "getupcloud/sti-php-extra:melissa" ...
>> Cloning "https://github.com/transainc/grendene-melissa-site.git" ...
>> Commit: f97792dbf727b254e55edbedfd530d4fea674f30 (Merge branch 'dev')
>> Author: Renan <re...@transainc.com>
>> Date: Wed Feb 8 18:01:40 2017 -0200
>> Pulling image "172.30.34.145:5000/melissaprod/web:latest" ...
>> WARNING: Clean build will be performed because of error saving previous
>> build artifacts
>>
>>
>> *$ sudo docker exec -it
>> k8s_sti-build.22823935_web-22-build__a4904f03-ee39-11e6-8bca-000d3ac04b9b_d337fdb8
>> ps fax*
>>
>>PID TTY  STAT   TIME COMMAND
>>121 ?Rs+0:00 ps fax
>>  1 ?Ssl0:10 /usr/bin/openshift-sti-build --loglevel=0
>>
>>
>> *$ sudo strace -p 14154*
>>
>> Process 14154 attached
>> futex(0x855aac8, FUTEX_WAIT, 0, NULL^CProcess 14154 detached
>>  
>>
>> *$ sudo gdb -p 14154*
>>
>> GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-80.el7
>> Copyright (C) 2013 Free Software Foundation, Inc.
>> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.h
>> tml>
>> This is free software: you are free to change and redistribute it.
>> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
>> and "show warranty" for details.
>> This GDB was configured as "x86_64-redhat-linux-gnu".
>> For bug reporting instructions, please see:
>> <http://www.gnu.org/software/gdb/bugs/>.
>> Attaching to process 14154
>> Reading symbols from /usr/bin/openshift...done.
>> Reading symbols from /lib64/ld-linux-x86-64.so.2...Reading symbols from
>> /usr/lib/debug/usr/lib64/ld-2.17.so.debug...done.
>> done.
>> Loaded symbols for /lib64/ld-linux-x86-64.so.2
>> runtime.futex () at /usr/lib/golang/src/runtime/sys_linux_amd64.s:307
>> 307 /usr/lib/golang/src/runtime/sys_linux_amd64.s: Arquivo ou diretório
>> não encontrado.
>> warning: Missing auto-load scripts referenced in section
>> .debug_gdb_scripts
>> of file /usr/bin/openshift
>> Use `info auto-load python [REGEXP]' to list them.
>> (gdb) bt
>> #0  runtime.futex () at /usr/lib/golang/src/runtime/sys_linux_amd64.s:307
>> #1  0x0042d603 in runtime.futexsleep (addr=0x855aac8
>> <runtime.mheap_+2312>, val=0, ns=-1) at /usr/lib/golang/src/runtime/os
>> 1_linux.go:40
>> #2  0x00411de4 in runtime.notesleep (n=0x855aac8
>> <runtime.mheap_+2312>) at /usr/lib/golang/src/runtime/lock_futex.go:145
>> #3  0x00435bbb in runtime.stopm () at
>> /usr/lib/golang/src/runtime/proc.go:1538
>> #4  0x00436f49 in runtime.findrunnable (gp=0x41ccb6
>> <runtime.gcMarkWorkAvailable+118>, inheritTime=false) at
>> /usr/lib/golang/src/runtime/proc.go:1976
>> #5  0x004375cf in runtime.schedule () at
>> /usr/lib/golang/src/runtime/proc.go:2075
>> #6  0x0043786b in runtime.park_m (gp=0xc820001380) at
>> /usr/lib/golang/src

openshift-sti-build stuck

2017-02-09 Thread Mateus Caruccio
Hi. From time to time (more than once a week) I'm seeing builder images
stuck, doing nothing and lasting forever.

Entering the container I can see /usr/bin/openshift-sti-build is up, but
it is not doing any work.


Here are some clues:

*$ oc logs pods/web-22-build -n **

Pulling image "getupcloud/sti-php-extra:melissa" ...
Pulling image "getupcloud/sti-php-extra:melissa" ...
Cloning "https://github.com/transainc/grendene-melissa-site.git" ...
Commit: f97792dbf727b254e55edbedfd530d4fea674f30 (Merge branch 'dev')
Author: Renan <re...@transainc.com>
Date: Wed Feb 8 18:01:40 2017 -0200
Pulling image "172.30.34.145:5000/melissaprod/web:latest" ...
WARNING: Clean build will be performed because of error saving previous
build artifacts


*$ sudo docker exec -it
k8s_sti-build.22823935_web-22-build__a4904f03-ee39-11e6-8bca-000d3ac04b9b_d337fdb8
ps fax*

   PID TTY  STAT   TIME COMMAND
   121 ?    Rs+    0:00 ps fax
     1 ?    Ssl    0:10 /usr/bin/openshift-sti-build --loglevel=0


*$ sudo strace -p 14154*

Process 14154 attached
futex(0x855aac8, FUTEX_WAIT, 0, NULL^CProcess 14154 detached
 

*$ sudo gdb -p 14154*

GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-80.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html
>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Attaching to process 14154
Reading symbols from /usr/bin/openshift...done.
Reading symbols from /lib64/ld-linux-x86-64.so.2...Reading symbols from
/usr/lib/debug/usr/lib64/ld-2.17.so.debug...done.
done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
runtime.futex () at /usr/lib/golang/src/runtime/sys_linux_amd64.s:307
307 /usr/lib/golang/src/runtime/sys_linux_amd64.s: Arquivo ou diretório não
encontrado.
warning: Missing auto-load scripts referenced in section .debug_gdb_scripts
of file /usr/bin/openshift
Use `info auto-load python [REGEXP]' to list them.
(gdb) bt
#0  runtime.futex () at /usr/lib/golang/src/runtime/sys_linux_amd64.s:307
#1  0x0042d603 in runtime.futexsleep (addr=0x855aac8
<runtime.mheap_+2312>, val=0, ns=-1) at
/usr/lib/golang/src/runtime/os1_linux.go:40
#2  0x00411de4 in runtime.notesleep (n=0x855aac8
<runtime.mheap_+2312>) at /usr/lib/golang/src/runtime/lock_futex.go:145
#3  0x00435bbb in runtime.stopm () at
/usr/lib/golang/src/runtime/proc.go:1538
#4  0x00436f49 in runtime.findrunnable (gp=0x41ccb6
<runtime.gcMarkWorkAvailable+118>, inheritTime=false) at
/usr/lib/golang/src/runtime/proc.go:1976
#5  0x004375cf in runtime.schedule () at
/usr/lib/golang/src/runtime/proc.go:2075
#6  0x0043786b in runtime.park_m (gp=0xc820001380) at
/usr/lib/golang/src/runtime/proc.go:2140
#7  0x00460a3b in runtime.mcall () at
/usr/lib/golang/src/runtime/asm_amd64.s:233
#8  0x0855a000 in
github.com/openshift/origin/vendor/github.com/ugorji/go/codec.fastpathAV ()
#9  0x7fffbf823e00 in ?? ()
#10 0x0855a080 in
github.com/openshift/origin/vendor/github.com/ugorji/go/codec.fastpathAV ()
#11 0x00434aa2 in runtime.mstart () at
/usr/lib/golang/src/runtime/proc.go:1068
#12 0x004608d8 in runtime.rt0_go () at
/usr/lib/golang/src/runtime/asm_amd64.s:149
#13 0x0002 in ?? ()
#14 0x7fffbf823f18 in ?? ()
#15 0x00000002 in ?? ()
#16 0x7fffbf823f18 in ?? ()
#17 0x in ?? ()
(gdb)



--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Invalid/Ignored ImageStreamTag name

2017-02-02 Thread Mateus Caruccio
Hi, I'm having trouble importing the image stream below

{
  "kind": "ImageStreamList",
  "apiVersion": "v1",
  "metadata": {},
  "items": [
{
  "kind": "ImageStream",
  "apiVersion": "v1",
  "metadata": {
"name": "nodejs5",
"creationTimestamp": null
  },
  "spec": {
"tags": [
  {
"name": "latest",
"annotations": {
  "description": "Build and run NodeJS applications",
  "iconClass": "icon-nodejs",
  "tags": "builder,nodejs,nodejs-5.12.0",
  "sampleRepo": "https://github.com/getupcloud/pillar-base.git"
},
"from": {
  "kind": "ImageStreamTag",
  "name": "5.12.0"
}
  },
  {
"name": "5",
"annotations": {
  "description": "Build and run NodeJS applications",
  "iconClass": "icon-nodejs",
  "tags": "builder,nodejs,nodejs-5.12.0",
  "sampleRepo": "https://github.com/getupcloud/pillar-base.git"
},
"from": {
  "kind": "ImageStreamTag",
  "name": "5.12.0"
}
  },
  {
"name": "5.12",
"annotations": {
  "description": "Build and run NodeJS applications",
  "iconClass": "icon-nodejs",
  "tags": "builder,nodejs,nodejs-5.12.0",
  "sampleRepo": "https://github.com/getupcloud/pillar-base.git"
},
"from": {
  "kind": "DockerImage",
  "name": "getupcloud/centos7-s2i-nodejs:5.12.0"
}
  },
  {
"name": "5.12.0",
"annotations": {
  "description": "Build and run NodeJS applications",
  "iconClass": "icon-nodejs",
  "tags": "builder,nodejs,nodejs-5.12.0",
  "sampleRepo": "https://github.com/getupcloud/pillar-base.git"
},
"from": {
  "kind": "DockerImage",
  "name": "getupcloud/centos7-s2i-nodejs:5.12.0"
}
  }
]
  }
}
  ]
}




I can create it and verify that it starts importing, but it gets stuck with
the message "importing latest image ...".


$ oc create -f nodejs5.json
imagestream "nodejs5" created

$ oc describe is/nodejs5
Name: nodejs5
Namespace: mateus
Created: 13 seconds ago
Labels: <none>
Annotations: openshift.io/image.dockerRepositoryCheck=2017-02-02T14:23:08Z
Docker Pull Spec: 172.30.34.145:5000/mateus/nodejs5
Unique Images: 1
Tags: 4

5.12
  tagged from getupcloud/centos7-s2i-nodejs:5.12.0

  Build and run NodeJS applications
  Tags: builder, nodejs, nodejs-5.12.0
  Example Repo: https://github.com/getupcloud/pillar-base.git

  ~ importing latest image ...

5.12
  tagged from getupcloud/centos7-s2i-nodejs:5.12.0

  Build and run NodeJS applications
  Tags: builder, nodejs, nodejs-5.12.0
  Example Repo: https://github.com/getupcloud/pillar-base.git

  ~ importing latest image ...



If I change "name" from "5.12" to something else like "5_12" or "5.13" it
works as expected.


I'm running:
openshift v1.3.1
kubernetes v1.3.0+52492b4
etcd 2.3.0+git



--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Allow method PATCH in CORS

2017-01-26 Thread Mateus Caruccio
Thanks. I guess github's repo search doesn't look into PRs.

--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade

On Thu, Jan 26, 2017 at 3:00 PM, Andy Goldstein <agold...@redhat.com> wrote:

> (copying the list with the right address)
>
> On Thu, Jan 26, 2017 at 11:59 AM, Andy Goldstein <agold...@redhat.com>
> wrote:
>
>> Kube PR: https://github.com/kubernetes/kubernetes/pull/35978
>>
>> OpenShift backport: https://github.com/openshift/origin/pull/11700
>>
>> In Origin 1.4.0 and later.
>>
>> Andy
>>
>> On Thu, Jan 26, 2017 at 11:57 AM, Mateus Caruccio <
>> mateus.caruc...@getupcloud.com> wrote:
>>
>>> I know about --cors-allowed-origins and corsAllowedOrigins, but those
>>> accept only a list of domains.
>>>
>>> How can I add PATCH to be sent over Access-Control-Allow-Methods in a
>>> pre-flight OPTIONS request to openshift api?
>>>
>>> The only allowed methods seem to be POST, GET, OPTIONS, PUT and DELETE.
>>>
>>> Thanks,
>>> --
>>> Mateus Caruccio / Master of Puppets
>>> GetupCloud.com - Eliminamos a Gravidade
>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Registry Access Denied

2017-01-20 Thread Mateus Caruccio
CCing the list.


PS: Clayton, your email client returned me with 'CC: "
dev@lists.openshift.redhat.com" <d...@redhat.com>'


--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade

On Fri, Jan 20, 2017 at 4:54 PM, Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

> Sorry, forgot that
>
> $ openshift version
> openshift v1.3.1
> kubernetes v1.3.0+52492b4
> etcd 2.3.0+git
>
> registry image is openshift/origin-docker-registry:v1.3.1
>
>
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com - Eliminamos a Gravidade
>
> On Fri, Jan 20, 2017 at 4:49 PM, Clayton Coleman <ccole...@redhat.com>
> wrote:
>
>> I'm assuming this is 1.4?
>>
>> On Fri, Jan 20, 2017 at 1:28 PM, Mateus Caruccio <
>> mateus.caruc...@getupcloud.com> wrote:
>>
>>> I'm receiving a 401 after the first build of a new app.
>>> After the first build everything goes fine.
>>> The project has existed for a long time.
>>>
>>> Excerpt of logs can be found at http://paste.fedoraproject.
>>> org/531688/36751148/
>>>
>>> The main message is:
>>>
>>> "OpenShift client error: User \"system:anonymous\" cannot create
>>> localsubjectaccessreviews in project \"mateus\""
>>>
>>> Thanks,
>>> --
>>> Mateus Caruccio / Master of Puppets
>>> GetupCloud.com - Eliminamos a Gravidade
>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Registry Access Denied

2017-01-20 Thread Mateus Caruccio
I'm receiving a 401 after the first build of a new app.
After the first build everything goes fine.
The project has existed for a long time.

Excerpt of logs can be found at
http://paste.fedoraproject.org/531688/36751148/

The main message is:

"OpenShift client error: User \"system:anonymous\" cannot create
localsubjectaccessreviews in project \"mateus\""

Thanks,
--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Error TeardownNetwork

2016-12-30 Thread Mateus Caruccio
Hi everyone.
I'm seeing this event message a lot. Any clues?
Error syncing pod, skipping: failed to "TeardownNetwork" for
"nodeapp-2-deploy_proj01" with TeardownNetworkError: "Failed to teardown
network for pod \"8984da80-ced2-11e6-a95b-000d3ac02da0\" using network
plugins \"redhat/openshift-ovs-multitenant\": Error running network
teardown script: Could not find IP address for container
987cc40a64273f661082d0cc9bb6e017a869dc1627f1493cb11e8a37b1070020"
--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OC on Windows 2003

2016-12-12 Thread Mateus Caruccio
That makes sense. I guess the client's server is 32-bit.
Thanks

--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade

On Mon, Dec 12, 2016 at 12:51 PM, Fabiano Franz <ffr...@redhat.com> wrote:

> Hey Mateus!
>
> We provide Origin packages for Windows in the releases page
> <https://github.com/openshift/origin/releases>, which should work on
> Windows 2003 as long as it's the 64 bits version. We don't provide it
> packaged for Windows 32 bits, but it should be feasible to compile to that
> platform if that's what you need.
>
> Fabiano Franz
>
>
> On Mon, Dec 12, 2016 at 9:45 AM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Hi.
>> Does anybody have a win 2003 binary of oc?
>> Is it even feasible?
>>
>> --
>> Mateus Caruccio / Master of Puppets
>> GetupCloud.com - Eliminamos a Gravidade
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


OC on Windows 2003

2016-12-12 Thread Mateus Caruccio
Hi.
Does anybody have a win 2003 binary of oc?
Is it even feasible?

--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Tying resources to its namespace

2016-03-02 Thread Mateus Caruccio
I guess an annotation would be better suited for oc and other clients to
issue queries, wouldn't it?
According to the kubernetes docs [1], it was designed for cases like this.

[1]
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/annotations.md
Em 02/03/2016 21:59, "Derek Carr" <dec...@redhat.com> escreveu:

> Right... So you really need an immutable field in metadata or something
> similar, or an annotation field that is overridden on every create/update
> during admission.
>
> On Wednesday, March 2, 2016, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Hi Derek.
>> I'm building a billing backend based on container and pvc usage.
>> The system is going to track any interesting activity (create and delete)
>> by watching the corresponding endpoints and store it in a document database
>> (mongodb or similar).
>> One important point is that users should not be able to tamper this
>> identifier, i.e. "oc edit pod/somepod".
>>
>> --
>> Mateus Caruccio / Master of Puppets
>> GetupCloud.com - Eliminamos a Gravidade
>>
>> On Wed, Mar 2, 2016 at 9:27 PM, Derek Carr <dec...@redhat.com> wrote:
>>
>>> This is not a bad idea to do in admission control as part of the
>>> namespace existence check.
>>>
>>> Can you elaborate a little more what you are trying to build around the
>>> feature to see if there is anything else that would be required?  I am not
>>> sure it should be an annotation versus a field in metadata, i.e.
>>> metadata.namespaceUid or something similar.
>>>
>>> Thanks,
>>>
>>> On Wednesday, March 2, 2016, Mateus Caruccio <
>>> mateus.caruc...@getupcloud.com> wrote:
>>>
>>>> Is there any way to tie resources (pod, pvc, secrets, bc, etc) to it's
>>>> belonging namespace without looking for namespace's lifetime?
>>>>
>>>> Today I can do it by watching and recording the create and delete
>>>> events for a namespace, then associate any resources to that namespace, but
>>>> it doesn't seams to be the best approach. Namespaces can be destroyed and
>>>> recreated by a different user with same name.
>>>>
>>>> I'm looking for something like automatically adding an annotation
>>>> containing namespace's uid to all resources created inside it (some sort of
>>>> primary key), as soon as the resource is created.
>>>>
>>>>
>>>> --
>>>> Mateus Caruccio / Master of Puppets
>>>> GetupCloud.com - Eliminamos a Gravidade
>>>>
>>>
>>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Tying resources to its namespace

2016-03-02 Thread Mateus Caruccio
Hi Derek.
I'm building a billing backend based on container and pvc usage.
The system is going to track any interesting activity (create and delete)
by watching the corresponding endpoints and store it in a document database
(mongodb or similar).
One important point is that users should not be able to tamper with this
identifier, e.g. via "oc edit pod/somepod".

--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade

On Wed, Mar 2, 2016 at 9:27 PM, Derek Carr <dec...@redhat.com> wrote:

> This is not a bad idea to do in admission control as part of the namespace
> existence check.
>
> Can you elaborate a little more what you are trying to build around the
> feature to see if there is anything else that would be required?  I am not
> sure it should be an annotation versus a field in metadata, i.e.
> metadata.namespaceUid or something similar.
>
> Thanks,
>
> On Wednesday, March 2, 2016, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Is there any way to tie resources (pod, pvc, secrets, bc, etc) to it's
>> belonging namespace without looking for namespace's lifetime?
>>
>> Today I can do it by watching and recording the create and delete events
>> for a namespace, then associate any resources to that namespace, but it
>> doesn't seams to be the best approach. Namespaces can be destroyed and
>> recreated by a different user with same name.
>>
>> I'm looking for something like automatically adding an annotation
>> containing namespace's uid to all resources created inside it (some sort of
>> primary key), as soon as the resource is created.
>>
>>
>> --
>> Mateus Caruccio / Master of Puppets
>> GetupCloud.com - Eliminamos a Gravidade
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Tying resources to its namespace

2016-03-02 Thread Mateus Caruccio
Is there any way to tie resources (pod, pvc, secrets, bc, etc) to its
owning namespace without depending on the namespace's lifetime?

Today I can do it by watching and recording the create and delete events
for a namespace, then associating any resources with that namespace, but it
doesn't seem to be the best approach. Namespaces can be destroyed and
recreated by a different user with the same name.

I'm looking for something like automatically adding an annotation
containing the namespace's uid to all resources created inside it (some
sort of primary key), as soon as the resource is created.


--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
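
The namespace-UID idea discussed in this thread can be sketched as follows. This is a hypothetical illustration, not code from the thread: the UID value is an example, and in a live cluster it would come from `oc get namespace "$NS" -o jsonpath='{.metadata.uid}'`.

```shell
# Hypothetical sketch: key billing records by the namespace UID
# (server-generated and immutable) rather than its name, so a
# deleted-and-recreated namespace cannot inherit old records.
NS_UID=8984da80-ced2-11e6-a95b-000d3ac02da0   # example value only
billing_key() {
    # compose uid/kind/name as the record's primary key
    printf '%s/%s/%s\n' "$NS_UID" "$1" "$2"
}
billing_key pod somepod   # 8984da80-ced2-11e6-a95b-000d3ac02da0/pod/somepod
```

Since the UID survives only as long as the namespace object itself, two namespaces with the same name but different lifetimes can never collide under this key.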


Re: Default LC_ALL for sti based images

2016-02-17 Thread Mateus Caruccio
Sure, what is the best place for it? Dockerfile{rhel7}?

And what about adding TERM too? It's a little annoying to have to set it
every time I want to use vi/top/less/watch from within a container.
Is TERM=vt100 a reasonable value?

*Mateus Caruccio*
Master of Puppets
+55 (51) 8298.0026
gtalk: mateus.caruc...@getupcloud.com
twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
This message and any attachment are solely for the intended
recipient and may contain confidential or privileged information
and it can not be forwarded or shared without permission.
Thank you!

On Wed, Feb 17, 2016 at 7:50 PM, Ben Parees <bpar...@redhat.com> wrote:

> Seems like a reasonable idea to me.  Do you want to submit a pull to
> sti-base?
>
>
> On Tue, Feb 16, 2016 at 9:23 PM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Hello everyone.
>>
>> Given that LC_ALL is an important config var, shouldn't it be set with a
>> reasonable default value, at least for sti based images? Something like
>> en_US.UTF-8 or C.UTF-8 (really don't know if the latter is valid).
>>
>> I'm asking because I need to run ruby 1.9 (yeah, pray for me) and a lot
>> of rake tasks fail, complaining about "invalid char blablabla", just because
>> my developers are pt-br speakers and they insist on writing comments with
>> accentuation (the French will know the struggle).
>>
>> Obrigado,
>>
>> *Mateus Caruccio*
>> Master of Puppets
>> +55 (51) 8298.0026
>> gtalk:
>>
>>
>> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
>> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
>> This message and any attachment are solely for the intended
>> recipient and may contain confidential or privileged information
>> and it can not be forwarded or shared without permission.
>> Thank you!
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
>
> --
> Ben Parees | OpenShift
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Postgresql and ceph volumes

2016-02-06 Thread Mateus Caruccio
Thanks. That works.

*Mateus Caruccio*
Master of Puppets
+55 (51) 8298.0026
gtalk: mateus.caruc...@getupcloud.com
twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
This message and any attachment are solely for the intended
recipient and may contain confidential or privileged information
and it can not be forwarded or shared without permission.
Thank you!

On Sat, Feb 6, 2016 at 4:17 AM, Ben Parees <bpar...@redhat.com> wrote:

>
>
> On Fri, Feb 5, 2016 at 10:03 PM, Clayton Coleman <ccole...@redhat.com>
> wrote:
>
>> I think we work around this in the Postgres image by using a subdir of
>> the root dir and changing perms.
>>
>
> yes, precisely:
>
> https://github.com/openshift/postgresql/blob/master/9.4/root/usr/share/container-scripts/postgresql/common.sh#L166-L176
>
>
> ​
>
>
>>
>> On Feb 5, 2016, at 8:18 PM, Mateus Caruccio <
>> mateus.caruc...@getupcloud.com> wrote:
>>
>> Hi there.
>>
>> I'm facing an issue trying to use a pvc for postgresql-9.4 using ceph
>> storage.
>> Setting fsGroup on my template causes the pvc to be mounted with perms
>> g+rwx for that specific GID.
>>
>> The problem is postgresql refuses to start if PGDATA isn't 0700.
>>
>> Thoughts?
>>
>>
>> *Mateus Caruccio*
>> Master of Puppets
>> +55 (51) 8298.0026
>> gtalk:
>>
>>
>> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
>> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
>> This message and any attachment are solely for the intended
>> recipient and may contain confidential or privileged information
>> and it can not be forwarded or shared without permission.
>> Thank you!
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
>
> --
> Ben Parees | OpenShift
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
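
The workaround referenced in this thread (a subdirectory of the mount with tightened permissions) can be sketched like this. It is a minimal local illustration, assuming a temp directory stands in for the ceph PVC mount point; the linked common.sh is the authoritative version.

```shell
# Keep PGDATA one level below the fsGroup-owned volume mount, so the
# subdirectory's permissions can be tightened to the 0700 that
# postgresql insists on, independent of the g+rwx mount perms.
MOUNT=$(mktemp -d)          # stands in for the ceph PVC mount point
PGDATA="$MOUNT/userdata"
mkdir -p "$PGDATA"
chmod 700 "$PGDATA"
stat -c '%a' "$PGDATA"      # prints 700 (GNU stat)
```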


Re: Runtime values in sti-php ini templates

2016-02-03 Thread Mateus Caruccio
On Tue, Feb 2, 2016 at 9:51 PM, Ben Parees <bpar...@redhat.com> wrote:

>
>
> On Tue, Feb 2, 2016 at 12:21 PM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> This could lead to an issue since most .ini files depend on some module
>> to be available, newrelic.so in my case. Simply processing templates with
>> no way to install modules my not suffice.
>>
>>
> ​well i'd expect those dependencies to also be defined by the source repo
> that was including the template ini file.
> ​
>
>
Doesn't it break security for non-root pods? Is it even possible to
install RPMs at runtime if the restricted scc is enforced?
IMHO the best way to allow custom modules is through rpm/yum. It seems
to me that shipping blobs in a source repo is way too ugly.
I'm thinking of a simpler scenario, where users can provide
custom configs for already-available containers.



>
>
>> In my case i've create an sti-php-extra[1] docker image with a bunch of
>> new modules (in fact, only one for now).
>>
>> On the other hand it may be useful for users to provide custom template
>> files from their source repo. I believe it must be processed by s2i/bin/run
>> instead assemble because it may depende on environment variables available
>> only on runtime (think on an api key, which is much easier to set and use
>> than a secret).
>>
>>
> ​fair enough, processing at run/startup is ok with me, given that that's
> what the existing run script is doing anyway.
> ​
>
>
>
>> For example, there could be a dir structure from source repo reflected
>> inside de pod:
>>
>>  (repo) .sti/templates/etc/php.d/mymodule.ini.template -> envsubst ->
>> (pod) /etc/php.d/mymodule.ini
>>
>> BTW, does anyone known of more flexible template processing engine that
>> could fit for docker images? Something capable to understand conditionals.
>>
>>
> ​There are lots of options, mustache is a popular one for example, but i'm
> reluctant to make the PHP image dependent on a particular templating
> language if we can avoid it.  Those discussions tend to break down into
> religious wars over which templating framework should be anointed.
>
> ​
>

Agree. The number of template engines is too damn high!

Please note that PHP itself can be used as a template engine. In the end,
php IS a template engine.

BTW, each lang has some kind of standard template system. Maybe it could be
better if we stick with it, i.e. php for php, jinja2/django for
python/django, erb for ruby, "whatever" for nodejs. All of them are
able to substitute env vars and provide some level of control blocks.



>
>
>> Thoughts?
>>
>> [1] https://github.com/getupcloud/sti-php-extra (still outdated)
>>
>>
>>
>>
>>
>> *Mateus Caruccio*
>> Master of Puppets
>> +55 (51) 8298.0026
>> gtalk:
>>
>>
>> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
>> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
>> This message and any attachment are solely for the intended
>> recipient and may contain confidential or privileged information
>> and it can not be forwarded or shared without permission.
>> Thank you!
>>
>> On Tue, Feb 2, 2016 at 12:50 PM, Ben Parees <bpar...@redhat.com> wrote:
>>
>>>
>>>
>>> On Tue, Feb 2, 2016 at 2:39 AM, Honza Horak <hho...@redhat.com> wrote:
>>>
>>>> It seems fine to me as well, we need to make the images extensible.
>>>>
>>>> I'd just like to make sure I understand the use case -- that would mean
>>>> you'd add a file like /etc/opt/rh/rh-php56/php.d/newrelic.ini.template
>>>> (using bind-mount or in another layer)?
>>>>
>>>>
>>> ​I wouldn't expect it to be a bindmount or additional layer.  I'd expect
>>> "newrelic.ini.template" would be supplied via the source repository that
>>> was being built, and the assemble script should copy the source-supplied
>>> templates ​into an appropriate location and then process them.  (or process
>>> them into the appropriate location).
>>>
>>> so this file would not be a part of the php56 image.
>>>
>>>
>>>
>>>> Also adding Remi to comment on this.
>>>>
>>>> Honza
>>>>
>>>> On 02/02/2016 12:54 AM, Ben Parees wrote:
>>>>
>>>>> I think that sounds reasonable, i'd be inclined to accept it as a PR.
>>>>> Adding Honza since his team technically controls the PHP image now (5.6
>

Re: Runtime values in sti-php ini templates

2016-02-02 Thread Mateus Caruccio
Exactly. I'm going to send a PR today. Just need to test it here.
Em 02/02/2016 05:39, "Honza Horak" <hho...@redhat.com> escreveu:

> It seems fine to me as well, we need to make the images extensible.
>
> I'd just like to make sure I understand the use case -- that would mean
> you'd add a file like /etc/opt/rh/rh-php56/php.d/newrelic.ini.template
> (using bind-mount or in another layer)?
>
> Also adding Remi to comment on this.
>
> Honza
>
> On 02/02/2016 12:54 AM, Ben Parees wrote:
>
>> I think that sounds reasonable, i'd be inclined to accept it as a PR.
>> Adding Honza since his team technically controls the PHP image now (5.6
>> anyway).
>>
>>
>> On Mon, Feb 1, 2016 at 4:43 PM, Mateus Caruccio
>> <mateus.caruc...@getupcloud.com <mailto:mateus.caruc...@getupcloud.com>>
>> wrote:
>>
>> Hi.
>>
>> I need to run newrelic on a php container. Its license must be set from
>> php.ini or any .ini inside /etc/opt/rh/rh-php56/php.d/.
>>
>> The problem is it needs to be set at run time, not build time, because
>> the license key is stored in an env var.
>>
>> What is the best way to do that?
>> Wouldn't be good to have some kind of template processing like [1]?
>> Something like this:
>>
>> for tpl in $PHP_INI_SCAN_DIR/*.template; do
>> envsubst < $tpl > ${tpl%.template}
>> done
>>
>> There is any reason not to adopt this approach? Is it something
>> origin would accept as a PR?
>>
>> [1]
>>
>> https://github.com/openshift/sti-php/blob/04a0900b68264642def9aaea9465a71e1075e713/5.6/s2i/bin/run#L20-L21
>>
>>
>> *Mateus Caruccio*
>> Master of Puppets
>> +55 (51) 8298.0026
>> gtalk: mateus.caruc...@getupcloud.com
>> twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
>>
>> This message and any attachment are solely for the intended
>> recipient and may contain confidential or privileged information
>> and it can not be forwarded or shared without permission.
>> Thank you!
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com <mailto:dev@lists.openshift.redhat.com>
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>>
>>
>> --
>> Ben Parees | OpenShift
>>
>>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
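
The run-time template rendering proposed in this thread can be sketched as below. This is a simplified stand-in: the real run script uses envsubst (from gettext) over every `*.template` file, while here a sed substitution handles a single hypothetical variable, `NEW_RELIC_LICENSE_KEY`, and the file names are illustrative.

```shell
# Render ${VAR}-style .ini templates at container start, so values
# such as a license key can come from runtime env vars.
PHP_INI_SCAN_DIR=$(mktemp -d)   # stands in for /etc/opt/rh/rh-php56/php.d
printf 'newrelic.license = ${NEW_RELIC_LICENSE_KEY}\n' \
    > "$PHP_INI_SCAN_DIR/newrelic.ini.template"
NEW_RELIC_LICENSE_KEY=abc123
for tpl in "$PHP_INI_SCAN_DIR"/*.template; do
    # envsubst < "$tpl" > "${tpl%.template}" in the real script;
    # sed stands in for the one variable used here
    sed "s/\${NEW_RELIC_LICENSE_KEY}/$NEW_RELIC_LICENSE_KEY/" \
        "$tpl" > "${tpl%.template}"
done
cat "$PHP_INI_SCAN_DIR/newrelic.ini"   # newrelic.license = abc123
```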


Username resolution failing

2016-01-19 Thread Mateus Caruccio
Hi.

Regarding the openshift policy for safely running images, it's recommended
to disable the scc for unprivileged users. This may cause some issues while
reading from the password database, since the EUID of the running user is
generated by openshift and can't be found inside the container:

bash-4.2$ pip install memcache
Traceback (most recent call last):
  File "/opt/rh/rh-python34/root/usr/bin/pip", line 7, in <module>
    from pip import main
  File "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/__init__.py", line 9, in <module>
    from pip.util import get_installed_distributions, get_prog
  File "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/util.py", line 16, in <module>
    from pip.locations import site_packages, running_under_virtualenv, virtualenv_no_global
  File "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py", line 96, in <module>
    build_prefix = _get_build_prefix()
  File "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py", line 65, in _get_build_prefix
    __get_username())
  File "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py", line 60, in __get_username
    return pwd.getpwuid(os.geteuid()).pw_name
KeyError: 'getpwuid(): uid not found: 100018'

How can I circumvent this obstacle? Should I rebuild all sti scripts to
include this user in the image? Is there any trick to allow passwd
readers to read from a mock?


Thanks,


*Mateus Caruccio*
Master of Puppets
+55 (51) 8298.0026
gtalk: mateus.caruc...@getupcloud.com
twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
This message and any attachment are solely for the intended
recipient and may contain confidential or privileged information
and it can not be forwarded or shared without permission.
Thank you!
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Username resolution failing

2016-01-19 Thread Mateus Caruccio
Yes, we are using rhel images.

Thanks!

*Mateus Caruccio*
Master of Puppets
+55 (51) 8298.0026
gtalk: mateus.caruc...@getupcloud.com
twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
This message and any attachment are solely for the intended
recipient and may contain confidential or privileged information
and it can not be forwarded or shared without permission.
Thank you!

On Tue, Jan 19, 2016 at 1:15 PM, Ben Parees <bpar...@redhat.com> wrote:

> Yes there is a trick, documented here:
>
>
> https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-specific-guidelines
>
> see the section on "*Support Arbitrary User IDs" *which describes how to
> use nss wrapper to work around this.
>
> That said, the openshift python image already does the nss trick.  I think
> we had an issue with the rhel image not containing the right package, are
> you using the rhel image or the centos image?
>
> For the moment you might try the centos image if you haven't already,
> until we get the rhel image updated.
>
>
>
> On Tue, Jan 19, 2016 at 9:53 AM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Hi.
>>
>> Regarding openshift policy for safely running images, it's recommended to
>> disable scc for unprivileged user. This may causes some issues while
>> reading from password database since EUID of the running user is generated
>> by openshift and can't be found inside the container:
>>
>> bash-4.2$ pip install memcache
>> Traceback (most recent call last):
>>   File "/opt/rh/rh-python34/root/usr/bin/pip", line 7, in 
>> from pip import main
>>   File
>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/__init__.py",
>> line 9, in 
>> from pip.util import get_installed_distributions, get_prog
>>   File
>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/util.py",
>> line 16, in 
>> from pip.locations import site_packages, running_under_virtualenv,
>> virtualenv_no_global
>>   File
>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py",
>> line 96, in 
>> build_prefix = _get_build_prefix()
>>   File
>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py",
>> line 65, in _get_build_prefix
>> __get_username())
>>   File
>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py",
>> line 60, in __get_username
>> return pwd.getpwuid(os.geteuid()).pw_name
>> KeyError: 'getpwuid(): uid not found: 100018'
>>
>> How can I circumvent this obstacle? Should I rebuild all sti scripts to
>> include this user into the image? There is any trick to allow passwd
>> readers to read from a mock?
>>
>>
>> Thanks,
>>
>>
>> *Mateus Caruccio*
>> Master of Puppets
>> +55 (51) 8298.0026
>> gtalk:
>>
>>
>> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
>> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
>> This message and any attachment are solely for the intended
>> recipient and may contain confidential or privileged information
>> and it can not be forwarded or shared without permission.
>> Thank you!
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
>
> --
> Ben Parees | OpenShift
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
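
The nss_wrapper trick from the guidelines linked in this thread can be sketched as below. The paths and entry name are illustrative, and the LD_PRELOAD line is left commented since the wrapper library may not be installed locally; the S2I images wire this up in their own run scripts.

```shell
# Synthesize a passwd entry for the arbitrary UID at startup so that
# getpwuid()/username lookups inside the container succeed.
export NSS_WRAPPER_PASSWD=$(mktemp)
export NSS_WRAPPER_GROUP=/etc/group
echo "default:x:$(id -u):0:default user:${HOME:-/tmp}:/bin/bash" \
    > "$NSS_WRAPPER_PASSWD"
# export LD_PRELOAD=libnss_wrapper.so   # enable when the library is present
```

Because the file is regenerated at startup, it always matches whatever random UID OpenShift assigned to the pod.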


Re: Username resolution failing

2016-01-19 Thread Mateus Caruccio
Yep, just tried centos images and it is working fine.

It took me a while to understand the whole thing. I was simply "oc
exec-ing" into the pod, but those NSS vars are created by sti/run.
It would be good if those vars were available from any shell.
Thanks.


*Mateus Caruccio*
Master of Puppets
+55 (51) 8298.0026
gtalk: mateus.caruc...@getupcloud.com
twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
This message and any attachment are solely for the intended
recipient and may contain confidential or privileged information
and it can not be forwarded or shared without permission.
Thank you!

On Tue, Jan 19, 2016 at 1:52 PM, Ben Parees <bpar...@redhat.com> wrote:

> Ok, can you try the centos image (centos/python-34-centos7)?
>
>
> Honza:  do you know when the RHEL SCL python images(2.7 and 3.4) will be
> updated to fix the missing nss rpm issue?
>
>
> On Tue, Jan 19, 2016 at 10:27 AM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Yes, we are using rhel images.
>>
>> Thanks!
>>
>> *Mateus Caruccio*
>> Master of Puppets
>> +55 (51) 8298.0026
>> gtalk:
>>
>>
>> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
>> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
>> This message and any attachment are solely for the intended
>> recipient and may contain confidential or privileged information
>> and it can not be forwarded or shared without permission.
>> Thank you!
>>
>> On Tue, Jan 19, 2016 at 1:15 PM, Ben Parees <bpar...@redhat.com> wrote:
>>
>>> Yes there is a trick, documented here:
>>>
>>>
>>> https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-specific-guidelines
>>>
>>> see the section on "*Support Arbitrary User IDs" *which describes how
>>> to use nss wrapper to work around this.
>>>
>>> That said, the openshift python image already does the nss trick.  I
>>> think we had an issue with the rhel image not containing the right package,
>>> are you using the rhel image or the centos image?
>>>
>>> For the moment you might try the centos image if you haven't already,
>>> until we get the rhel image updated.
>>>
>>>
>>>
>>> On Tue, Jan 19, 2016 at 9:53 AM, Mateus Caruccio <
>>> mateus.caruc...@getupcloud.com> wrote:
>>>
>>>> Hi.
>>>>
>>>> Regarding openshift policy for safely running images, it's recommended
>>>> to disable scc for unprivileged user. This may causes some issues while
>>>> reading from password database since EUID of the running user is generated
>>>> by openshift and can't be found inside the container:
>>>>
>>>> bash-4.2$ pip install memcache
>>>> Traceback (most recent call last):
>>>>   File "/opt/rh/rh-python34/root/usr/bin/pip", line 7, in 
>>>> from pip import main
>>>>   File
>>>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/__init__.py",
>>>> line 9, in 
>>>> from pip.util import get_installed_distributions, get_prog
>>>>   File
>>>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/util.py",
>>>> line 16, in 
>>>> from pip.locations import site_packages, running_under_virtualenv,
>>>> virtualenv_no_global
>>>>   File
>>>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py",
>>>> line 96, in 
>>>> build_prefix = _get_build_prefix()
>>>>   File
>>>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py",
>>>> line 65, in _get_build_prefix
>>>> __get_username())
>>>>   File
>>>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py",
>>>> line 60, in __get_username
>>>> return pwd.getpwuid(os.geteuid()).pw_name
>>>> KeyError: 'getpwuid(): uid not found: 100018'
>>>>
>>>> How can I circumvent this obstacle? Should I rebuild all sti scripts to
>>>> include this user into the image? There is any trick to allow passwd
>>>> readers to read from a mock?
>>>>
>>>>
>>>> Thanks,
>>>>
>>>>
>>>> *Mateus Caruccio*
>>>> Master of Puppets
>>>> +55 (51) 8298.0026
>>>> gtalk:
>>>>
>>>>
>>>> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
>>>> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
>>>> This message and any attachment are solely for the intended
>>>> recipient and may contain confidential or privileged information
>>>> and it can not be forwarded or shared without permission.
>>>> Thank you!
>>>>
>>>> ___
>>>> dev mailing list
>>>> dev@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>
>>>>
>>>
>>>
>>> --
>>> Ben Parees | OpenShift
>>>
>>>
>>
>
>
> --
> Ben Parees | OpenShift
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev