Re: Inject Custom CA during builds

2018-07-17 Thread Subhendu Ghosh
This might be a use case for testing CRI-O and secrets for generically
updating the container CA cert chain.

On Tue, Jul 17, 2018, 10:38 Ahmed Ossama  wrote:

> So I inspected the container runtime, and it turns out that
> /etc/ssl/certs is a symlink to the /etc/pki/tls/certs directory.
>
> Modifying the destinationDir caused the certificate to be injected, but
> the build process is still failing because the certificate is not in the
> global trusted CAs in the container.
>
> Has anyone come across an issue like this, where the outbound internet
> connection goes through an appliance that inspects the traffic and
> injects its own certificate?
>
> On 07/17/2018 08:50 AM, Ben Parees wrote:
>
>
>
> On Tue, Jul 17, 2018 at 5:06 AM, Ahmed Ossama  wrote:
>
>> For option #1, I granted the sa/builder the anyuid scc, and added the
>> serviceAccount: builder in the buildconfig. I thought this might make the
>> build run with root (Yes, it's not a good idea to run builds using root, I
>> was just trying it), but it didn't work anyway.
>>
>> For option #2, I've created the secret with:
>>
>> $ oc create secret generic root-certificate
>> --from-file=RootCertificate-2048-SHA256.crt=RootCertificate-2048-SHA256.crt
>>
>> Then edited the bc to:
>>
>>   source:
>> git:
>>   ref: c967a614ca0429ef219e884ae1b2ff6e447449d8
>>   uri: http://gitlab.example.com/public-projects/java-blueprint.git
>> secrets:
>> - destinationDir: /etc/ssl/certs
>>   secret:
>> name: root-certificate
>> type: Git
>>
>> So this causes the build to fail with the error:
>>
>> error: Uploading to container failed: Error response from daemon:
>> {"message":"Error processing tar file(exit status 1): mkdir
>> /certs/..2018_07_17_00_07_32.144170643: no such file or directory"}
>> ERROR: The destination directory for "/var/run/secrets/
>> openshift.io/build/root-certificate" injection must exist in container
>> ("/etc/ssl/certs")
>>
>
> the docs make this behavior clear:
>
> "The destinationDir must exist or an error will occur. No directory paths
> are created during the copy process."
>
>
> https://docs.openshift.org/latest/dev_guide/builds/build_inputs.html#using-secrets-s2i-strategy
>
>
>
>> I tried changing the destinationDir to /etc/certs, and the build passed
>> the above error but still failed to connect to the repositories.
>>
>
> presumably this created a directory named "/etc/certs" containing a file
> for each key in your secret.  Your build logic would need to reference
> /etc/certs/ as the CA input file.
>
>
> Is there another way to inject the CA during the builds? Or this is the
>> only way?
>>
>> On 07/16/2018 09:49 PM, Graham Dumpleton wrote:
>>
>> The first will not work because you aren't root when a build occurs, so
>> you can't copy files to locations which require root access.
>>
>> For the second option, how has the build secret been set up in the build
>> config? Specifically, what does the spec.source.secrets part of the build
>> config look like, and what keys are defined in the secret?
>>
>> $ oc explain bc.spec.source.secrets
>> RESOURCE: secrets <[]Object>
>>
>> DESCRIPTION:
>>  secrets represents a list of secrets and their destinations that
>> will be
>>  used only for the build.
>>
>>  SecretBuildSource describes a secret and its destination directory
>> that
>>  will be used only at the build time. The content of the secret
>> referenced
>>  here will be copied into the destination directory instead of
>> mounting.
>>
>> FIELDS:
>>destinationDir 
>>  destinationDir is the directory where the files from the secret
>> should be
>>  available for the build time. For the Source build strategy, these
>> will be
>>  injected into a container where the assemble script runs. Later,
>> when the
>>  script finishes, all files injected will be truncated to zero
>> length. For
>>  the Docker build strategy, these will be copied into the build
>> directory,
>>  where the Dockerfile is located, so users can ADD or COPY them during
>>  docker build.
>>
>>secret  -required-
>>  secret is a reference to an existing secret that you want to use in
>> your
>>  build.
>>
>> $ oc explain bc.spec.source.secrets.secret
>> RESOURCE: secret 
>>
>> DESCRIPTION:
>>  secret is a reference to an existing secret that you want to use in
>> your
>>  build.
>>
>>  LocalObjectReference contains enough information to let you locate
>> the
>>  referenced object inside the same namespace.
>>
>> FIELDS:
>>name 
>>  Name of the referent. More info:
>>
>> https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
>>
>> Graham
>>
>> On 17 Jul 2018, at 9:16 am, Ahmed Ossama  wrote:
>>
>> Hi Everyone,
>>
>> I have an OpenShift installation which is sitting behind an appliance
>> which intercepts outbound SSL traffic. Regular machines have the SSL
>> certificate of the appliance installed on them and they are able to access
>> the 

Re: Inject Custom CA during builds

2018-07-17 Thread Aleksandar Kostadinov

Maybe you can try to replace/add files inside

> /etc/pki/ca-trust/extracted/

You can prepare the files on a real machine and then copy them over to 
containers as secrets.
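
A rough sketch of that approach on a RHEL/CentOS host (the secret and
BuildConfig names are placeholders, and whether the injected file may
overwrite the image's own bundle depends on the builder image, so treat
this as an experiment):

# On a machine with the same base OS as the builder image:
$ sudo cp RootCertificate-2048-SHA256.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract

# Package the regenerated, extracted bundle as a secret:
$ oc create secret generic ca-trust-extracted \
    --from-file=tls-ca-bundle.pem=/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem

# Then reference it under spec.source.secrets of the BuildConfig, with
# destinationDir pointing at the extracted directory (which already exists
# in RHEL-based images):
$ oc patch bc/my-build --type=json -p \
    '[{"op":"add","path":"/spec/source/secrets","value":[{"secret":{"name":"ca-trust-extracted"},"destinationDir":"/etc/pki/ca-trust/extracted/pem"}]}]'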


P.S. As far as I can tell, SSL was invented exactly to prevent man-in-the-middle 
interception (which is what the appliance is presently doing). While there might 
be legitimate use cases, they would mostly be limited to situations where the 
user and the owner of the intercepting appliance are the same person/company.


Ahmed Ossama wrote on 07/17/18 17:14:
So I inspected the container runtime, and it turns out that 
/etc/ssl/certs is a symlink to the /etc/pki/tls/certs directory.


Modifying the destinationDir caused the certificate to be injected, but 
the build process is still failing because the certificate is not in the 
global trusted CAs in the container.


Has anyone come across an issue like this, where the outbound internet 
connection goes through an appliance that inspects the traffic and 
injects its own certificate?



On 07/17/2018 08:50 AM, Ben Parees wrote:



On Tue, Jul 17, 2018 at 5:06 AM, Ahmed Ossama wrote:


For option #1, I granted the sa/builder the anyuid scc, and added
the serviceAccount: builder in the buildconfig. I thought this
might make the build run with root (Yes, it's not a good idea to
run builds using root, I was just trying it), but it didn't work
anyway.

For option #2, I've created the secret with:

$ oc create secret generic root-certificate
--from-file=RootCertificate-2048-SHA256.crt=RootCertificate-2048-SHA256.crt

Then edited the bc to:

  source:
    git:
  ref: c967a614ca0429ef219e884ae1b2ff6e447449d8
  uri:
http://gitlab.example.com/public-projects/java-blueprint.git

    secrets:
    - destinationDir: /etc/ssl/certs
  secret:
    name: root-certificate
    type: Git

So this causes the build to fail with the error:

error: Uploading to container failed: Error response from daemon:
{"message":"Error processing tar file(exit status 1): mkdir
/certs/..2018_07_17_00_07_32.144170643: no such file or directory"}
ERROR: The destination directory for
"/var/run/secrets/openshift.io/build/root-certificate
" injection must exist
in container ("/etc/ssl/certs")


the docs make this behavior clear:

"The |destinationDir| must exist or an error will occur. No directory 
paths are created during the copy process."


https://docs.openshift.org/latest/dev_guide/builds/build_inputs.html#using-secrets-s2i-strategy

I tried changing the destinationDir to /etc/certs, and the build
passed the above error but still failed to connect to the repositories.


presumably this created a directory named "/etc/certs" containing a 
file for each key in your secret.  Your build logic would need to 
reference /etc/certs/ as the CA input file.



Is there another way to inject the CA during the builds? Or this
is the only way?


On 07/16/2018 09:49 PM, Graham Dumpleton wrote:

The first will not work because you aren't root when a build
occurs, so you can't copy files to locations which require root access.

For the second option, how has the build secret been set up in
the build config? Specifically, what does the spec.source.secrets
part of the build config look like, and what keys are defined in
the secret?

$ oc explain bc.spec.source.secrets
RESOURCE: secrets <[]Object>

DESCRIPTION:
     secrets represents a list of secrets and their destinations
that will be
     used only for the build.

     SecretBuildSource describes a secret and its destination
directory that
     will be used only at the build time. The content of the
secret referenced
     here will be copied into the destination directory instead
of mounting.

FIELDS:
   destinationDir
     destinationDir is the directory where the files from the
secret should be
     available for the build time. For the Source build strategy,
these will be
     injected into a container where the assemble script runs.
Later, when the
     script finishes, all files injected will be truncated to
zero length. For
     the Docker build strategy, these will be copied into the
build directory,
     where the Dockerfile is located, so users can ADD or COPY
them during
     docker build.

   secret -required-
     secret is a reference to an existing secret that you want to
use in your
     build.

$ oc explain bc.spec.source.secrets.secret
RESOURCE: secret 

DESCRIPTION:
     secret is a reference to an existing secret that you want to
use in your
     build.

     LocalObjectReference contains enough information to let you
locate the
     

Re: Managing Routes with a Service Account

2018-07-17 Thread Eric D Helms
Thanks Clayton. I have made the modification to a ClusterRoleBinding but
still see the following output:

User \\\"system:serviceaccount:foreman:foreman-operator\\\" cannot get
routes in project
\\\"foreman\\\"\",\"reason\":\"Forbidden\",\"details\":{\"name\":\"foreman-http-pulp\",\"kind\":\"routes\"

Updated RBAC:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: foreman-operator
rules:
- apiGroups:
  - app.theforeman.org
  resources:
  - "*"
  verbs:
  - "*"
- apiGroups:
  - ""
  resources:
  - pods
  - services
  - endpoints
  - persistentvolumeclaims
  - events
  - configmaps
  - secrets
  - serviceaccounts
  verbs:
  - "*"
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - replicasets
  - statefulsets
  verbs:
  - "*"
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - "*"
- apiGroups:
  - route.openshift.io
  resources:
  - routes
  - routes/status
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  verbs:
  - "*"
- apiGroups:
  - project.openshift.io
  resources:
  - projects
  verbs:
  - get

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: foreman-account-app-operator
subjects:
- kind: ServiceAccount
  name: foreman-operator
  namespace: foreman
roleRef:
  kind: ClusterRole
  name: foreman-operator
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: foreman-operator
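
One way to narrow this down (assuming the objects above are applied as
shown) is to ask the API server directly what the service account may do:

$ oc policy who-can get routes -n foreman
$ oc auth can-i get routes -n foreman \
    --as=system:serviceaccount:foreman:foreman-operator    # newer oc clients

# and double-check that the binding names the SA with an explicit namespace:
$ oc describe clusterrolebinding foreman-account-app-operator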


On Tue, Jul 17, 2018 at 11:22 AM Clayton Coleman 
wrote:

> To access things across all namespaces, you need a ClusterRoleBinding, not
> a RoleBinding.  RoleBindings only give you access to the role scoped to the
> namespace the RoleBinding is in.
>
> On Tue, Jul 17, 2018 at 10:21 AM Eric D Helms 
> wrote:
>
>> Howdy,
>>
>> I am trying to manage routes via a serviceaccount with the following but
>> running into an issue with permission denied:
>>
>> "User \\\"system:serviceaccount:foreman:foreman-operator\\\" cannot get
>> routes in the namespace \\\"foreman\\\""
>>
>> Resource Definitions:
>>
>> apiVersion: rbac.authorization.k8s.io/v1beta1
>> kind: ClusterRole
>> metadata:
>>   name: foreman-operator
>> rules:
>> - apiGroups:
>>   - app.theforeman.org
>>   resources:
>>   - "*"
>>   verbs:
>>   - "*"
>> - apiGroups:
>>   - ""
>>   resources:
>>   - pods
>>   - services
>>   - endpoints
>>   - persistentvolumeclaims
>>   - events
>>   - configmaps
>>   - secrets
>>   - serviceaccounts
>>   verbs:
>>   - "*"
>> - apiGroups:
>>   - apps
>>   resources:
>>   - deployments
>>   - daemonsets
>>   - replicasets
>>   - statefulsets
>>   verbs:
>>   - "*"
>> - apiGroups:
>>   - batch
>>   resources:
>>   - jobs
>>   verbs:
>>   - "*"
>> - apiGroups:
>>   - route.openshift.io
>>   resources:
>>   - routes
>>   - routes/status
>>   verbs:
>>   - create
>>   - delete
>>   - deletecollection
>>   - get
>>   - list
>>   - patch
>>   - update
>>   - watch
>> - apiGroups:
>>   - rbac.authorization.k8s.io
>>   resources:
>>   - roles
>>   verbs:
>>   - "*"
>>
>> ---
>>
>> kind: RoleBinding
>> apiVersion: rbac.authorization.k8s.io/v1beta1
>> metadata:
>>   name: foreman-account-app-operator
>>   namespace: foreman
>> subjects:
>> - kind: ServiceAccount
>>   name: foreman-operator
>> roleRef:
>>   kind: ClusterRole
>>   name: foreman-operator
>>   apiGroup: rbac.authorization.k8s.io
>>
>> ---
>>
>> apiVersion: v1
>> kind: ServiceAccount
>> metadata:
>>   name: foreman-operator
>>
>>
>> --
>> Eric D. Helms
>> Red Hat Engineering
>> Ph.D. Student - North Carolina State University
>>
>

-- 
Eric D. Helms
Red Hat Engineering
Ph.D. Student - North Carolina State University


Re: Managing Routes with a Service Account

2018-07-17 Thread Clayton Coleman
To access things across all namespaces, you need a ClusterRoleBinding, not
a RoleBinding.  RoleBindings only give you access to the role scoped to the
namespace the RoleBinding is in.
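
For reference, the imperative equivalents (using the names from this
thread) would look roughly like:

$ oc adm policy add-cluster-role-to-user foreman-operator \
    system:serviceaccount:foreman:foreman-operator

# or create the binding object directly:
$ oc create clusterrolebinding foreman-account-app-operator \
    --clusterrole=foreman-operator \
    --serviceaccount=foreman:foreman-operator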

On Tue, Jul 17, 2018 at 10:21 AM Eric D Helms 
wrote:

> Howdy,
>
> I am trying to manage routes via a serviceaccount with the following but
> running into an issue with permission denied:
>
> "User \\\"system:serviceaccount:foreman:foreman-operator\\\" cannot get
> routes in the namespace \\\"foreman\\\""
>
> Resource Definitions:
>
> apiVersion: rbac.authorization.k8s.io/v1beta1
> kind: ClusterRole
> metadata:
>   name: foreman-operator
> rules:
> - apiGroups:
>   - app.theforeman.org
>   resources:
>   - "*"
>   verbs:
>   - "*"
> - apiGroups:
>   - ""
>   resources:
>   - pods
>   - services
>   - endpoints
>   - persistentvolumeclaims
>   - events
>   - configmaps
>   - secrets
>   - serviceaccounts
>   verbs:
>   - "*"
> - apiGroups:
>   - apps
>   resources:
>   - deployments
>   - daemonsets
>   - replicasets
>   - statefulsets
>   verbs:
>   - "*"
> - apiGroups:
>   - batch
>   resources:
>   - jobs
>   verbs:
>   - "*"
> - apiGroups:
>   - route.openshift.io
>   resources:
>   - routes
>   - routes/status
>   verbs:
>   - create
>   - delete
>   - deletecollection
>   - get
>   - list
>   - patch
>   - update
>   - watch
> - apiGroups:
>   - rbac.authorization.k8s.io
>   resources:
>   - roles
>   verbs:
>   - "*"
>
> ---
>
> kind: RoleBinding
> apiVersion: rbac.authorization.k8s.io/v1beta1
> metadata:
>   name: foreman-account-app-operator
>   namespace: foreman
> subjects:
> - kind: ServiceAccount
>   name: foreman-operator
> roleRef:
>   kind: ClusterRole
>   name: foreman-operator
>   apiGroup: rbac.authorization.k8s.io
>
> ---
>
> apiVersion: v1
> kind: ServiceAccount
> metadata:
>   name: foreman-operator
>
>
> --
> Eric D. Helms
> Red Hat Engineering
> Ph.D. Student - North Carolina State University
>


Re: Origin 3.9.0's Jenkins - forgetful agents!

2018-07-17 Thread Gabe Montero
On Tue, Jul 17, 2018 at 9:09 AM, Alan Christie <
achris...@informaticsmatters.com> wrote:

> Hi Gabe,
>
> I’m annotating the ImageStream, essentially doing this: `slave-label: 
> buildah-slave`.
> The Dockerfile and ImageStream YAML template for my agent (a buildah/skopeo
> agent) based on jenkins-slave-maven-centos can be found at our public
> repo (https://github.com/InformaticsMatters/openshift-jenkins-buildah-slave).
>
> I can understand the agent being replaced when the external image changes
> but I was curious about why it might change (for no apparent reason).
>

Just remembered, our background polling mechanism is most likely updating
it.  It gets back to not being able to merge our partial config
with any changes you've made on the Jenkins side.  That said, we could try
to avoid the update if our partial config matches.

Open an issue against https://github.com/openshift/jenkins-sync-plugin, and
I can look into that.  Also, we should update our docs to encourage
folks to use the ConfigMap approach if they are modifying the PodTemplate
config beyond the basics we employ.


> But ... I will take a look at the configMap approach because that sounds a
> lot more useful - especially for a CI/CD process and would allow me to set
> the agent up from the command-line without having to use the Jenkins
> management console.
>
> Where might I find a good reference example for the ConfigMap approach?
>

Check out
https://docs.openshift.org/latest/using_images/other_images/jenkins.html#configuring-the-jenkins-kubernetes-plug-in
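
A rough sketch of the shape described there (all names are placeholders,
and the XML payload is whatever PodTemplate definition the Kubernetes
plugin expects; check the doc above for the exact fields):

$ oc create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: buildah-agent
  labels:
    role: jenkins-slave
data:
  template1: |-
    <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
      <name>buildah-slave</name>
      <label>buildah-slave</label>
      <instanceCap>4</instanceCap>
      <idleMinutes>15</idleMinutes>
      <!-- containers, volumes, privileged flag, etc. go here, exactly as
           the Kubernetes plugin serializes them -->
    </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
EOF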


>
> Alan Christie
> achris...@informaticsmatters.com
>
>
>
> On 17 Jul 2018, at 13:18, Gabe Montero  wrote:
>
> Hi Alan,
>
> Are you leveraging our feature to inject agents by labelling ImageStreams
> with
> the label "role" set to a value of "jenkins-slave", or annotating an
> ImageStreamTag
> with the same k/v pair?
>
> If so, that is going to update the agent definition every time those items
> are updated in OpenShift.  And there is currently no merging of the partial
> PodTemplate config we construct from ImageStream / ImageStreamTags with
> whatever modifications were made to the PodTemplate from within Jenkins
> after the agent is initially created (there are no k8s APIs we can leverage
> to do that).
>
> If the default config we provide for IS/ISTs is not sufficient, I would
> suggest switching
> to our ConfigMap version of this injection.  With that form, you can
> specify the
> entire PodTemplate definition, including the settings you noted below,
> where the image
> for the PodTemplate is the docker ref for the IS/IST you are currently
> referencing.
>
> If you are injecting agents in another way, please elaborate and we'll go
> from there.
>
> thanks,
> gabe
>
> On Tue, Jul 17, 2018 at 4:45 AM, Alan Christie <
> achris...@informaticsmatters.com> wrote:
>
>> Hi,
>>
>> I’m using Jenkins on an OpenShift Origin 3.9.0 deployment and notice that
>> Jenkins periodically forgets the additional settings for my custom agent.
>>
>> I’m using the built-in Jenkins from the catalogue (Jenkins 2.89.4) with
>> all the plugins updated.
>>
>> Incidentally, I doubt it has anything to do with the origin release as I
>> recall seeing this on earlier (3.7/3.6) releases.
>>
>> It happens when I deploy a new agent to Docker hub so this I can partly
>> understand (i.e. a new ‘latest’ image is available so it’s pulled) -
>> although I do struggle to understand why it creates a *new* Kubernetes pod
>> template in the cloud configuration when one already exists for the same
>> agent (but that’ll probably be the subject of another question). So, each
>> time I push an image I have to fix the cloud configuration for my agent.
>>
>> This I can live with (for now) but it also happens periodically for no
>> apparent reason. I’m not sure about the frequency but I’ll notice every
>> week, or every few weeks, the Kubernetes Pod Template for my agent has
>> forgotten all the _extra_ setup. Things like: -
>>
>> - Run in privileged mode
>> - Additional volumes
>> - Max number of instances
>> - Time in minutes to retain slave when idle
>>
>> Basically anything adjusted beyond the defaults provided when you first
>> instantiate an agent is lost.
>>
>> Has anyone reported this behaviour before?
>> Is there a fix or can anyone suggest an area of investigation?
>>
>> Alan Christie
>> achris...@informaticsmatters.com
>>
>>
>>
>>
>>
>>
>
>


Re: Inject Custom CA during builds

2018-07-17 Thread Ahmed Ossama
So I inspected the container runtime, and it turns out that 
/etc/ssl/certs is a symlink to the /etc/pki/tls/certs directory.


Modifying the destinationDir caused the certificate to be injected, but 
the build process is still failing because the certificate is not in the 
global trusted CAs in the container.


Has anyone come across an issue like this, where the outbound internet 
connection goes through an appliance that inspects the traffic and 
injects its own certificate?
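
For reference, on RHEL/CentOS-based images the trust store is typically
laid out like this (worth confirming inside your own builder image, e.g.
with oc debug):

$ readlink -f /etc/ssl/certs
/etc/pki/tls/certs
$ readlink -f /etc/pki/tls/certs/ca-bundle.crt
/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem

So a plain certificate file dropped into /etc/ssl/certs is generally not
picked up; most clients read the consolidated bundle under
/etc/pki/ca-trust/extracted/, which is only regenerated by update-ca-trust.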



On 07/17/2018 08:50 AM, Ben Parees wrote:



On Tue, Jul 17, 2018 at 5:06 AM, Ahmed Ossama wrote:


For option #1, I granted the sa/builder the anyuid scc, and added
the serviceAccount: builder in the buildconfig. I thought this
might make the build run with root (Yes, it's not a good idea to
run builds using root, I was just trying it), but it didn't work
anyway.

For option #2, I've created the secret with:

$ oc create secret generic root-certificate
--from-file=RootCertificate-2048-SHA256.crt=RootCertificate-2048-SHA256.crt

Then edited the bc to:

  source:
    git:
  ref: c967a614ca0429ef219e884ae1b2ff6e447449d8
  uri:
http://gitlab.example.com/public-projects/java-blueprint.git

    secrets:
    - destinationDir: /etc/ssl/certs
  secret:
    name: root-certificate
    type: Git

So this causes the build to fail with the error:

error: Uploading to container failed: Error response from daemon:
{"message":"Error processing tar file(exit status 1): mkdir
/certs/..2018_07_17_00_07_32.144170643: no such file or directory"}
ERROR: The destination directory for
"/var/run/secrets/openshift.io/build/root-certificate
" injection must exist
in container ("/etc/ssl/certs")


the docs make this behavior clear:

"The |destinationDir| must exist or an error will occur. No directory 
paths are created during the copy process."


https://docs.openshift.org/latest/dev_guide/builds/build_inputs.html#using-secrets-s2i-strategy

I tried changing the destinationDir to /etc/certs, and the build
passed the above error but still failed to connect to the repositories.


presumably this created a directory named "/etc/certs" containing a 
file for each key in your secret.  Your build logic would need to 
reference /etc/certs/ as the CA input file.



Is there another way to inject the CA during the builds? Or this
is the only way?


On 07/16/2018 09:49 PM, Graham Dumpleton wrote:

The first will not work because you aren't root when a build
occurs, so you can't copy files to locations which require root access.

For the second option, how has the build secret been set up in
the build config? Specifically, what does the spec.source.secrets
part of the build config look like, and what keys are defined in
the secret?

$ oc explain bc.spec.source.secrets
RESOURCE: secrets <[]Object>

DESCRIPTION:
     secrets represents a list of secrets and their destinations
that will be
     used only for the build.

     SecretBuildSource describes a secret and its destination
directory that
     will be used only at the build time. The content of the
secret referenced
     here will be copied into the destination directory instead
of mounting.

FIELDS:
   destinationDir
     destinationDir is the directory where the files from the
secret should be
     available for the build time. For the Source build strategy,
these will be
     injected into a container where the assemble script runs.
Later, when the
     script finishes, all files injected will be truncated to
zero length. For
     the Docker build strategy, these will be copied into the
build directory,
     where the Dockerfile is located, so users can ADD or COPY
them during
     docker build.

   secret -required-
     secret is a reference to an existing secret that you want to
use in your
     build.

$ oc explain bc.spec.source.secrets.secret
RESOURCE: secret 

DESCRIPTION:
     secret is a reference to an existing secret that you want to
use in your
     build.

     LocalObjectReference contains enough information to let you
locate the
     referenced object inside the same namespace.

FIELDS:
   name
     Name of the referent. More info:

https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names



Graham


On 17 Jul 2018, at 9:16 am, Ahmed Ossama <ah...@aossama.com> wrote:

Hi Everyone,

I have an OpenShift installation which is sitting behind an
appliance which intercepts outbound SSL traffic. Regular

Managing Routes with a Service Account

2018-07-17 Thread Eric D Helms
Howdy,

I am trying to manage routes via a serviceaccount with the following but
running into an issue with permission denied:

"User \\\"system:serviceaccount:foreman:foreman-operator\\\" cannot get
routes in the namespace \\\"foreman\\\""

Resource Definitions:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: foreman-operator
rules:
- apiGroups:
  - app.theforeman.org
  resources:
  - "*"
  verbs:
  - "*"
- apiGroups:
  - ""
  resources:
  - pods
  - services
  - endpoints
  - persistentvolumeclaims
  - events
  - configmaps
  - secrets
  - serviceaccounts
  verbs:
  - "*"
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - replicasets
  - statefulsets
  verbs:
  - "*"
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - "*"
- apiGroups:
  - route.openshift.io
  resources:
  - routes
  - routes/status
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  verbs:
  - "*"

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: foreman-account-app-operator
  namespace: foreman
subjects:
- kind: ServiceAccount
  name: foreman-operator
roleRef:
  kind: ClusterRole
  name: foreman-operator
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: foreman-operator


-- 
Eric D. Helms
Red Hat Engineering
Ph.D. Student - North Carolina State University


Re: Origin 3.9.0's Jenkins - forgetful agents!

2018-07-17 Thread Alan Christie
Hi Gabe,

I’m annotating the ImageStream, essentially doing this: `slave-label: 
buildah-slave`. The Dockerfile and ImageStream YAML template for my agent (a 
buildah/skopeo agent) based on jenkins-slave-maven-centos can be found at our 
public repo 
(https://github.com/InformaticsMatters/openshift-jenkins-buildah-slave).

I can understand the agent being replaced when the external image changes but I 
was curious about why it might change (for no apparent reason).

But ... I will take a look at the configMap approach because that sounds a lot 
more useful - especially for a CI/CD process and would allow me to set the 
agent up from the command-line without having to use the Jenkins management 
console.

Where might I find a good reference example for the ConfigMap approach?

Alan Christie
achris...@informaticsmatters.com



> On 17 Jul 2018, at 13:18, Gabe Montero  wrote:
> 
> Hi Alan,
> 
> Are you leveraging our feature to inject agents by labelling ImageStreams with
> the label "role" set to a value of "jenkins-slave", or annotating an 
> ImageStreamTag
> with the same k/v pair?
> 
> If so, that is going to update the agent definition every time those items are 
> updated in OpenShift.  And there is currently no merging of the partial 
> PodTemplate config we construct from ImageStream / ImageStreamTags with 
> whatever modifications were made to the PodTemplate from within Jenkins 
> after the agent is initially created (there are no k8s APIs we can leverage 
> to do that).
> 
> If the default config we provide for IS/ISTs is not sufficient, I would 
> suggest switching
> to our ConfigMap version of this injection.  With that form, you can specify 
> the 
> entire PodTemplate definition, including the settings you noted below, where 
> the image 
> for the PodTemplate is the docker ref for the IS/IST you are currently 
> referencing.
> 
> If you are injecting agents in another way, please elaborate and we'll go from 
> there.
> 
> thanks,
> gabe
> 
> On Tue, Jul 17, 2018 at 4:45 AM, Alan Christie
> <achris...@informaticsmatters.com> wrote:
> Hi,
> 
> I’m using Jenkins on an OpenShift Origin 3.9.0 deployment and notice that 
> Jenkins periodically forgets the additional settings for my custom agent.
> 
> I’m using the built-in Jenkins from the catalogue (Jenkins 2.89.4) with all 
> the plugins updated.
> 
>   Incidentally, I doubt it has anything to do with the origin release as 
> I recall seeing this on earlier (3.7/3.6) releases.
> 
> It happens when I deploy a new agent to Docker hub so this I can partly 
> understand (i.e. a new ‘latest’ image is available so it’s pulled) - although 
> I do struggle to understand why it creates a *new* Kubernetes pod template in 
> the cloud configuration when one already exists for the same agent (but 
> that’ll probably be the subject of another question). So, each time I push an 
> image I have to fix the cloud configuration for my agent.
> 
> This I can live with (for now) but it also happens periodically for no 
> apparent reason. I’m not sure about the frequency but I’ll notice every week, 
> or every few weeks, the Kubernetes Pod Template for my agent has forgotten 
> all the _extra_ setup. Things like: -
> 
> - Run in privileged mode
> - Additional volumes
> - Max number of instances
> - Time in minutes to retain slave when idle
> 
> Basically anything adjusted beyond the defaults provided when you first 
> instantiate an agent is lost.
> 
> Has anyone reported this behaviour before?
> Is there a fix or can anyone suggest an area of investigation?
> 
> Alan Christie
> achris...@informaticsmatters.com 
> 
> 
> 
> 



Re: Inject Custom CA during builds

2018-07-17 Thread Ben Parees
On Tue, Jul 17, 2018 at 5:06 AM, Ahmed Ossama  wrote:

> For option #1, I granted the sa/builder the anyuid scc, and added the
> serviceAccount: builder in the buildconfig. I thought this might make the
> build run with root (Yes, it's not a good idea to run builds using root, I
> was just trying it), but it didn't work anyway.
>
> For option #2, I've created the secret with:
>
> $ oc create secret generic root-certificate --from-file=RootCertificate-
> 2048-SHA256.crt=RootCertificate-2048-SHA256.crt
>
> Then edited the bc to:
>
>   source:
> git:
>   ref: c967a614ca0429ef219e884ae1b2ff6e447449d8
>   uri: http://gitlab.example.com/public-projects/java-blueprint.git
> secrets:
> - destinationDir: /etc/ssl/certs
>   secret:
> name: root-certificate
> type: Git
>
> So this causes the build to fail with the error:
>
> error: Uploading to container failed: Error response from daemon:
> {"message":"Error processing tar file(exit status 1): mkdir
> /certs/..2018_07_17_00_07_32.144170643: no such file or directory"}
> ERROR: The destination directory for "/var/run/secrets/openshift.
> io/build/root-certificate" injection must exist in container
> ("/etc/ssl/certs")
>

the docs make this behavior clear:

"The destinationDir must exist or an error will occur. No directory paths
are created during the copy process."

https://docs.openshift.org/latest/dev_guide/builds/build_inputs.html#using-secrets-s2i-strategy



> I tried changing the destinationDir to /etc/certs, and the build passed
> the above error but still failed to connect to the repositories.
>

presumably this created a directory named "/etc/certs" containing a file
for each key in your secret.  Your build logic would need to reference
/etc/certs/ as the CA input file.
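
For example, a wrapper assemble script could point the common tools at the
injected file without needing root (a rough sketch, not the official
mechanism; the file name matches the key used when creating the secret, and
the final exec assumes the image keeps its stock assemble script at
/usr/libexec/s2i/assemble):

#!/bin/bash
# .s2i/bin/assemble
CA=/etc/certs/RootCertificate-2048-SHA256.crt

export GIT_SSL_CAINFO="$CA"         # git over https
export CURL_CA_BUNDLE="$CA"         # curl
export NODE_EXTRA_CA_CERTS="$CA"    # node/npm (node >= 7.3)
export PIP_CERT="$CA"               # pip
export REQUESTS_CA_BUNDLE="$CA"     # python requests

# Java/Maven wants a trust store; build one somewhere writable:
keytool -importcert -noprompt -alias proxy-ca -file "$CA" \
        -keystore /tmp/proxy-truststore.jks -storepass changeit
export MAVEN_OPTS="$MAVEN_OPTS -Djavax.net.ssl.trustStore=/tmp/proxy-truststore.jks \
  -Djavax.net.ssl.trustStorePassword=changeit"

# hand off to the image's original assemble logic
exec /usr/libexec/s2i/assemble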


Is there another way to inject the CA during the builds? Or this is the
> only way?
>
> On 07/16/2018 09:49 PM, Graham Dumpleton wrote:
>
> The first will not work because you aren't root when a build occurs, so
> you can't copy files to locations which require root access.
>
> For the second option, how has the build secret been set up in the build
> config? Specifically, what does the spec.source.secrets part of the build
> config look like, and what keys are defined in the secret?
>
> $ oc explain bc.spec.source.secrets
> RESOURCE: secrets <[]Object>
>
> DESCRIPTION:
>  secrets represents a list of secrets and their destinations that will
> be
>  used only for the build.
>
>  SecretBuildSource describes a secret and its destination directory
> that
>  will be used only at the build time. The content of the secret
> referenced
>  here will be copied into the destination directory instead of
> mounting.
>
> FIELDS:
>destinationDir 
>  destinationDir is the directory where the files from the secret
> should be
>  available for the build time. For the Source build strategy, these
> will be
>  injected into a container where the assemble script runs. Later, when
> the
>  script finishes, all files injected will be truncated to zero length.
> For
>  the Docker build strategy, these will be copied into the build
> directory,
>  where the Dockerfile is located, so users can ADD or COPY them during
>  docker build.
>
>secret  -required-
>  secret is a reference to an existing secret that you want to use in
> your
>  build.
>
> $ oc explain bc.spec.source.secrets.secret
> RESOURCE: secret 
>
> DESCRIPTION:
>  secret is a reference to an existing secret that you want to use in
> your
>  build.
>
>  LocalObjectReference contains enough information to let you locate the
>  referenced object inside the same namespace.
>
> FIELDS:
>name 
>  Name of the referent. More info:
>  https://kubernetes.io/docs/concepts/overview/working-
> with-objects/names/#names
>
> Graham
>
> On 17 Jul 2018, at 9:16 am, Ahmed Ossama  wrote:
>
> Hi Everyone,
>
> I have an OpenShift installation which is sitting behind an appliance
> which intercepts outbound SSL traffic. Regular machines have the SSL
> certificate of the appliance installed on them and they are able to access
> the internet without any issues.
>
> My issue is with the build; because OpenShift builds images in
> containers, the container which is building the code doesn't have the
> SSL certificate of the interceptor installed in it. So grabbing code
> dependencies from npm, maven or pypi during a build fails: the build
> tries to connect to the repo manager via HTTPS, but since the CA of the
> interceptor is not installed in the build container, it fails.
>
> My question is: How can I inject the CA certificate of the interceptor in
> the build container so that the traffic from the interceptor is trusted?
>
> So far I've tried two options but they failed:
>
> Option #1: a customized .s2i/bin/assemble script which downloads the
> certificate into /etc/pki/ca-trust/source/anchors/ and runs
> update-ca-trust. But this 

Re: Origin 3.9.0's Jenkins - forgetful agents!

2018-07-17 Thread Gabe Montero
Hi Alan,

Are you leveraging our feature to inject agents by labelling ImageStreams
with
the label "role" set to a value of "jenkins-slave", or annotating an
ImageStreamTag
with the same k/v pair?
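
For reference, wiring that up usually looks like the following (the
ImageStream name is a placeholder):

$ oc label imagestream buildah-slave role=jenkins-slave

The per-tag variant is the same key/value pair set as an annotation on the
tag, under spec.tags[].annotations in the ImageStream definition.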

If so, that is going to update the agent definition every time those items
are updated in OpenShift.  And there is currently no merging of the partial
PodTemplate config we construct from ImageStream / ImageStreamTags with
whatever modifications were made to the PodTemplate from within Jenkins
after the agent is initially created (there are no k8s APIs we can leverage
to do that).

If the default config we provide for IS/ISTs is not sufficient, I would
suggest switching
to our ConfigMap version of this injection.  With that form, you can
specify the
entire PodTemplate definition, including the settings you noted below,
where the image
for the PodTemplate is the docker ref for the IS/IST you are currently
referencing.

If you are injecting agents in another way, please elaborate and we'll go from
there.

thanks,
gabe

On Tue, Jul 17, 2018 at 4:45 AM, Alan Christie <
achris...@informaticsmatters.com> wrote:

> Hi,
>
> I’m using Jenkins on an OpenShift Origin 3.9.0 deployment and notice that
> Jenkins periodically forgets the additional settings for my custom agent.
>
> I’m using the built-in Jenkins from the catalogue (Jenkins 2.89.4) with
> all the plugins updated.
>
> Incidentally, I doubt it has anything to do with the origin release as I
> recall seeing this on earlier (3.7/3.6) releases.
>
> It happens when I deploy a new agent to Docker hub so this I can partly
> understand (i.e. a new ‘latest’ image is available so it’s pulled) -
> although I do struggle to understand why it creates a *new* Kubernetes pod
> template in the cloud configuration when one already exists for the same
> agent (but that’ll probably be the subject of another question). So, each
> time I push an image I have to fix the cloud configuration for my agent.
>
> This I can live with (for now) but it also happens periodically for no
> apparent reason. I’m not sure about the frequency but I’ll notice every
> week, or every few weeks, the Kubernetes Pod Template for my agent has
> forgotten all the _extra_ setup. Things like: -
>
> - Run in privileged mode
> - Additional volumes
> - Max number of instances
> - Time in minutes to retain slave when idle
>
> Basically anything adjusted beyond the defaults provided when you first
> instantiate an agent is lost.
>
> Has anyone reported this behaviour before?
> Is there a fix or can anyone suggest an area of investigation?
>
> Alan Christie
> achris...@informaticsmatters.com
>
>
>
>
>
>


Re: Inject Custom CA during builds

2018-07-17 Thread Ahmed Ossama
For option #1, I granted the sa/builder the anyuid scc, and added the 
serviceAccount: builder in the buildconfig. I thought this might make 
the build run with root (Yes, it's not a good idea to run builds using 
root, I was just trying it), but it didn't work anyway.


For option #2, I've created the secret with:

$ oc create secret generic root-certificate 
--from-file=RootCertificate-2048-SHA256.crt=RootCertificate-2048-SHA256.crt


Then edited the bc to:

  source:
    git:
  ref: c967a614ca0429ef219e884ae1b2ff6e447449d8
  uri: http://gitlab.example.com/public-projects/java-blueprint.git
    secrets:
    - destinationDir: /etc/ssl/certs
  secret:
    name: root-certificate
    type: Git

So this causes the build to fail with the error:

error: Uploading to container failed: Error response from daemon: 
{"message":"Error processing tar file(exit status 1): mkdir 
/certs/..2018_07_17_00_07_32.144170643: no such file or directory"}
ERROR: The destination directory for 
"/var/run/secrets/openshift.io/build/root-certificate" injection must 
exist in container ("/etc/ssl/certs")


I tried changing the destinationDir to /etc/certs, and the build passed 
the above error but still failed to connect to the repositories.


Is there another way to inject the CA during the builds? Or this is the 
only way?



On 07/16/2018 09:49 PM, Graham Dumpleton wrote:
The first will not work because you aren't root when a build occurs, so 
you can't copy files to locations which require root access.


For the second option, how has the build secret been set up in the 
build config? Specifically, what does the spec.source.secrets part of 
the build config look like, and what keys are defined in the secret?


$ oc explain bc.spec.source.secrets
RESOURCE: secrets <[]Object>

DESCRIPTION:
     secrets represents a list of secrets and their destinations that 
will be

     used only for the build.

     SecretBuildSource describes a secret and its destination 
directory that
     will be used only at the build time. The content of the secret 
referenced
     here will be copied into the destination directory instead of 
mounting.


FIELDS:
   destinationDir
     destinationDir is the directory where the files from the secret 
should be
     available for the build time. For the Source build strategy, 
these will be
     injected into a container where the assemble script runs. Later, 
when the
     script finishes, all files injected will be truncated to zero 
length. For
     the Docker build strategy, these will be copied into the build 
directory,

     where the Dockerfile is located, so users can ADD or COPY them during
     docker build.

   secret -required-
     secret is a reference to an existing secret that you want to use 
in your

     build.

$ oc explain bc.spec.source.secrets.secret
RESOURCE: secret 

DESCRIPTION:
     secret is a reference to an existing secret that you want to use 
in your

     build.

     LocalObjectReference contains enough information to let you 
locate the

     referenced object inside the same namespace.

FIELDS:
   name
     Name of the referent. More info:
https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

Graham

On 17 Jul 2018, at 9:16 am, Ahmed Ossama wrote:


Hi Everyone,

I have an OpenShift installation which is sitting behind an appliance 
which intercepts outbound SSL traffic. Regular machines have the SSL 
certificate of the appliance installed on them and they are able to 
access the internet without any issues.


My issue is with the build; because OpenShift builds images in 
containers, the container which is building the code doesn't 
have the SSL certificate of the interceptor installed in it. So 
grabbing code dependencies from npm, maven or pypi during a build 
fails: the build tries to connect to the repo manager via 
HTTPS, but since the CA of the interceptor is not installed in the 
build container, it fails.


My question is: How can I inject the CA certificate of the 
interceptor in the build container so that the traffic from the 
interceptor is trusted?


So far I've tried two options but they failed:

Option #1: a customized .s2i/bin/assemble script which downloads 
the certificate into /etc/pki/ca-trust/source/anchors/ and runs 
update-ca-trust. But this option fails with:


$ oc logs dsqc-4-build
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
Warning: Failed to create the file
Warning: /etc/pki/ca-trust/source/anchors/ZscalerRootCertificate-2048-SHA256.crt: Permission denied
 52  1732   52   901    0     0  14515      0 --:--:-- --:--:-- --:--:-- 14770
curl: (23) Failed writing body (0 != 901)
p11-kit: couldn't create file: 

Origin 3.9.0's Jenkins - forgetful agents!

2018-07-17 Thread Alan Christie
Hi,

I’m using Jenkins on an OpenShift Origin 3.9.0 deployment and notice that 
Jenkins periodically forgets the additional settings for my custom agent.

I’m using the built-in Jenkins from the catalogue (Jenkins 2.89.4) with all the 
plugins updated.

Incidentally, I doubt it has anything to do with the origin release as 
I recall seeing this on earlier (3.7/3.6) releases.

It happens when I deploy a new agent to Docker hub so this I can partly 
understand (i.e. a new ‘latest’ image is available so it’s pulled) - although I 
do struggle to understand why it creates a *new* Kubernetes pod template in the 
cloud configuration when one already exists for the same agent (but that’ll 
probably be the subject of another question). So, each time I push an image I 
have to fix the cloud configuration for my agent.

This I can live with (for now) but it also happens periodically for no apparent 
reason. I’m not sure about the frequency but I’ll notice every week, or every 
few weeks, the Kubernetes Pod Template for my agent has forgotten all the 
_extra_ setup. Things like: -

- Run in privileged mode
- Additional volumes
- Max number of instances
- Time in minutes to retain slave when idle

Basically anything adjusted beyond the defaults provided when you first 
instantiate an agent is lost.

Has anyone reported this behaviour before?
Is there a fix or can anyone suggest an area of investigation?

Alan Christie
achris...@informaticsmatters.com





Re: Failed to provision volume with StorageClass "glusterfs-storage": create volume error: error creating volume

2018-07-17 Thread Yu Wei
It seems that you didn't configure the correct heketi endpoint.

Could you access http://heketi-storage-glusterfs.cnsc.net manually?
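
A quick way to check (the "glusterfs" namespace and the /hello endpoint are
the usual defaults; adjust to your install):

$ dig +short heketi-storage-glusterfs.cnsc.net @192.168.52.60
$ oc get route,svc -n glusterfs | grep heketi
$ curl http://heketi-storage-glusterfs.cnsc.net/hello   # should answer "Hello from Heketi"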

Thx,

Jared

On 2018-06-26 00:33, Julián Tete wrote:
Hello friends

Greetings to the OpenShift Origin community from Colombia. I have installed 
OpenShift Origin 3.9 on oVirt 4.1, with one master server and 3 nodes, using 
the following /etc/ansible/hosts file:

https://pastebin.com/EQvUdA2Y

But when creating a storage volume, I get the error:

"Failed to provision volume with StorageClass "glusterfs-storage": create 
volume error: error creating volume Post 
http://heketi-storage-glusterfs.cnsc.net/volumes: dial tcp: lookup 
heketi-storage-glusterfs.cnsc.net on 
192.168.52.60:53: no such host"

What should I do? Does the /etc/ansible/hosts file have errors?

My idea is to create an OpenShift Origin system on oVirt, and use GlusterFS as 
storage.

Thank you very much in advance.




