I thought about it a bit, and I guess I can create a fixed-name secret that holds the access token for the SA and use it as a usual build-time/mounted secret. Now my process works.
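
Concretely, this is the kind of manifest I mean (a sketch; the secret name is one I chose, and it assumes the SA is called "manager" as in the BuildConfig below):

apiVersion: v1
kind: Secret
metadata:
  name: manager-token-fixed            # fixed, predictable name the build can reference
  annotations:
    kubernetes.io/service-account.name: manager   # the token controller fills in this SA's token
type: kubernetes.io/service-account-token

Once the controller populates it, the secret can be referenced by its fixed name as a build input or mounted secret.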

But to go back to my initial confusion: what is the use case of buildconfig.spec.serviceAccount? The API documentation states: "(string) serviceAccount is the name of the ServiceAccount to use to run the pod created by this build. The pod will be allowed to use secrets referenced by the ServiceAccount". If the build process is run directly by the Docker engine, then which Pod does the documentation refer to?

Thanks again for the info and help,

Dan Pungă


On 30.11.2018 02:49, Ben Parees wrote:


On Thu, Nov 29, 2018 at 6:53 PM Dan Pungă <dan.pu...@gmail.com> wrote:

    Thanks for the reply!

    My response is inline as well.

    On 30.11.2018 00:51, Ben Parees wrote:


    On Thu, Nov 29, 2018 at 5:34 PM Dan Pungă <dan.pu...@gmail.com> wrote:

        Hello all,

        The short version/question would be: How can I use a custom
        ServiceAccount with a BuildConfig?


    you can choose the SA used by the build via:
    buildconfig.spec.serviceAccount

    But I don't think this will help you.


        It appears the build Pod doesn't have the serviceAccount's
        token mounted at the location:

        cat: /var/run/secrets/kubernetes.io/serviceaccount/token: No such file or directory


    how are you running the cat command?

    In general users cannot get into/manipulate the build pod.  If
    you're executing that from within your build logic, then it's
    going to run inside your build container (i.e. where your
    application is constructed), which does not have the builder
    service account available; it's not the same as the build pod
    itself, which would have the service account token mounted.

    It sounds like you might want to use build secrets to make a
    credential available to your build logic:
    https://docs.okd.io/latest/dev_guide/builds/build_inputs.html#using-secrets-during-build
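
    For example, something along these lines in the BuildConfig (a
    sketch; the secret and directory names here are illustrative):

        spec:
          source:
            secrets:
              - secret:
                  name: my-build-secret        # any consistently named secret
                destinationDir: secret-files   # its keys show up as files under this dir in the build context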


    I'm running the command as a postCommit hook/script. So, if I
    understand it right, it should be a temporary pod that runs the
    image that was just built.


it's not run as a pod; that is the source of your confusion.  It is run directly by the container runtime engine and is not managed by kubernetes/openshift, thus it does not have any "pod" content injected.

    The actual BuildConfig holds:

    spec:
      ....
      postCommit:
        command:
          - /bin/bash
          - '-c'
          - $HOME/scripts/checkAndCreateConf.sh
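      # note: per Ben's reply above, this sets the SA of the build pod
      # itself; the postCommit hook container does not see its token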
      serviceAccount: manager

    I was expecting the same behaviour as with a container defined in
    a DeploymentConfig/Job/CronJob, where the serviceAccount's token is
    mounted at /var/run/secrets/kubernetes.io/serviceaccount/token

    So I don't use it during the actual build process, and I can't
    configure it as a build input because I can't reference the secret
    by name in a consistent way: OKD creates the secrets for SAs with
    five random characters appended, e.g. manager-token-xxxxx


ok, if you can't define a consistently named secret yourself that the build can reference, I'm afraid I don't have another option for you that just uses the buildconfig.

You might be better served by using a jenkins pipeline that executes the actions you want.
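
(For illustration, a minimal JenkinsPipeline-strategy BuildConfig might look roughly like this; a sketch only, with made-up names, and it assumes a template like the one sketched further down:)

kind: BuildConfig
apiVersion: v1
metadata:
  name: customer-resources-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node {
          stage('create customer resources') {
            // process the template for one customer and apply the result;
            // the OKD jenkins agent images ship with the oc client
            sh "oc process customer-resources -p CUSTOMER=acme | oc apply -f -"
          }
        }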




        Thank you!

        Longer version:

        I'm trying to create Openshift resources from within a Pod.
        The starting point is the app that needs to be deployed, which
        holds an "unknown" number of configurations/customers that
        need to run in their own containers. So for each of them I
        need a set of resources created inside an Openshift/OKD
        project; mainly a DeploymentConfig and a Service that exposes
        the runtime ports.

        I can build the application for all the customers, and the
        build is also triggered by a repository hook. So each time a
        build is done, it is certain that the image pushed to the
        stream holds app-builds for all those customers.

        What I've done so far is to make use of a custom
        ServiceAccount with a custom project role given to it, and a
        Template that defines the DeploymentConfig, Service, etc. in
        parameterized form. The idea being that I would run a pod,
        using the ServiceAccount, on an image that holds the built
        application, authenticate to the OKD API via token and, based
        on some logic, discover the customers that don't have the
        needed resources and create those from the template with
        specific parameter values.
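
        (To make that concrete, a minimal sketch of such a template;
        every name, label and parameter here is made up:)

        apiVersion: template.openshift.io/v1
        kind: Template
        metadata:
          name: customer-resources
        parameters:
          - name: CUSTOMER
            description: customer identifier
            required: true
          - name: IMAGE
            description: image holding the builds for all customers
            required: true
        objects:
          - apiVersion: apps.openshift.io/v1
            kind: DeploymentConfig
            metadata:
              name: app-${CUSTOMER}
            spec:
              replicas: 1
              selector:
                app: app-${CUSTOMER}
              template:
                metadata:
                  labels:
                    app: app-${CUSTOMER}
                spec:
                  containers:
                    - name: app
                      image: ${IMAGE}
                      ports:
                        - containerPort: 8080
          - apiVersion: v1
            kind: Service
            metadata:
              name: app-${CUSTOMER}
            spec:
              selector:
                app: app-${CUSTOMER}
              ports:
                - port: 8080
                  targetPort: 8080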

        I've tried using a Job, only to realize that it has "run once"
        behaviour. So I cannot use the triggering mechanism.

        I've also tried using a CronJob, and I'll probably use it if
        there's no other way to achieve the goal. I'd rather have this
        work by way of notification and not by "polling".

        I've tried using the postCommit hook to call my scripted
        logic after the build is done, but I get the error about the
        missing token. I also think I'll need to extend the custom
        role of the service account so it also has the rights of the
        builder SA.
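
        (If it comes to that, the binding might look roughly like
        this; a sketch, assuming the SA is called manager and reusing
        the cluster role the builder SA gets by default:)

        apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        metadata:
          name: manager-image-builder
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: system:image-builder
        subjects:
          - kind: ServiceAccount
            name: manager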




--
Ben Parees | OpenShift



--
Ben Parees | OpenShift

_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
