Re: Using ImageStreams with CronJobs?

2020-08-28 Thread Ben Parees
On Fri, Aug 28, 2020 at 5:26 PM Luiz Carvalho  wrote:

> Hello all,
>
> I'm digging through the docs and I don't see a way of creating a CronJob
> that uses an image from an ImageStream. Is this possible?
>

It is possible, but only if the imagestream resides in the same
project/namespace as your cronjob/pod:

https://docs.openshift.com/container-platform/3.11/dev_guide/managing_images.html#using-is-with-k8s

Those are 3.11 docs, but it works the same way in 4.x.  The difference in
4.x is that the admin can't enable/disable/customize the behavior.

But I don't think you need this; see below.



>
> I'd like to use an ImageStream so I can use its cached version instead of
> hitting the registry every time the pod starts. My understanding is that by
> doing so the image is stored in the cluster's internal docker registry.
> Thus, it shouldn't hit the registry even if it needs to be used by a pod on
> a different node. Is this assumption correct?
>

Are you saying you want to avoid hitting the external registry (e.g.
docker.io or quay.io)?  If so, the imagestream will have no bearing on that.

What you want is pullthrough.
https://docs.openshift.com/container-platform/3.11/install_config/registry/extended_registry_configuration.html#middleware-repository-pullthrough

You can achieve pullthrough by:
1) defining the imagestream that points to the external registry
2) defining your cronjob to use a regular image pullspec which points to
the imagestream, e.g.:
internalregistry.com/imagestreamnamespace/imagestreamname:imagestreamtag
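Put together, a minimal sketch of the two objects might look like this
(names, schedule, and repository are illustrative; the internal registry
hostname shown is the 4.x default, so adjust for your cluster):

```yaml
# Hypothetical imagestream tracking an external image. A Local reference
# policy makes pulls get served through the internal registry.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: myapp
  namespace: myproject
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: quay.io/example/myapp:latest
    referencePolicy:
      type: Local
---
# CronJob whose pod uses a regular pullspec pointing at the imagestream
# via the internal registry.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: myapp-cron
  namespace: myproject
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: myapp
            image: image-registry.openshift-image-registry.svc:5000/myproject/myapp:latest
```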




>
> Thanks,
> Luiz
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>


-- 
Ben Parees | OpenShift


Re: in OpenShift 4.2, /apis is not accessible to anonymous users. Workarounds?

2019-10-03 Thread Ben Parees
On Thu, Oct 3, 2019 at 11:30 AM David Eads  wrote:

> Yes, this is out of upstream, based on downstream choices we made five
> years ago.
>
> This behavior is not considered a bug.  When anonymous authentication is
> enabled, you will only get a 401 if presenting an invalid token or expired
> certificate.  When connecting anonymously, your connection successfully
> authenticates, so it is not a 401 condition.  Your successfully
> authenticated connection (authenticated as anonymous) attempted to access
> a resource it did not have access to; that is forbidden, so you get a 403.
>

Thanks, that explanation helps.


> On Thu, Oct 3, 2019 at 11:18 AM Ben Parees  wrote:
>
>>
>>
>> On Thu, Oct 3, 2019 at 10:52 AM David Eads  wrote:
>>
>>> There is no plan to switch to 401.
>>>
>>
>> Would plans be created if a BZ were opened?  Or is this an outright
>> rejection of ever changing it because it's not deemed incorrect (or
>> because "it's an api now and we can't change it")?
>>
>> (Also, I assume this is coming out of upstream?)
>>
>>
>>
>>>
>>> On Thu, Oct 3, 2019 at 10:44 AM Jean-Francois Maury 
>>> wrote:
>>>
>>>> According to the spec, it's wrong to return 403 in this case. Please
>>>> re-read my wording from the spec.
>>>> Should I understand that there is no plan at all to switch to 401 ?
>>>>
>>>> Jeff
>>>>
>>>> On Thu, Oct 3, 2019 at 3:46 PM David Eads  wrote:
>>>>
>>>>> The 403 is intentional.  The user has been authenticated as anonymous,
>>>>> so a 401 isn't returned.  Kubernetes and OpenShift both return 403 when a
>>>>> user (even anonymous) attempts to access a forbidden resource regardless 
>>>>> of
>>>>> whether it even exists.
>>>>>
>>>>> On Wed, Oct 2, 2019 at 4:06 PM Jean-Francois Maury 
>>>>> wrote:
>>>>>
>>>>>> We are trying to adapt our library but found the following problem:
>>>>>> when we issue a call to /apis or some of the discovery endpoint without
>>>>>> authentication info; OCP returns 403 instead of 401.
>>>>>> According to the HTTP spec, a 403 means the request should not be
>>>>>> repeated and authentication will not help (see
>>>>>> https://tools.ietf.org/html/rfc2616#section-10.4.4)
>>>>>>
>>>>>> So is it on purpose or is this going to be fixed ?
>>>>>>
>>>>>> Jeff
>>>>>>
>>>>>> On Tue, Oct 1, 2019 at 5:56 PM Andre Dietisheim 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Akram
>>>>>>>
>>>>>>> Thanks for the answer. Insightful.
>>>>>>> For now we can't easily switch libraries given the extent of usage
>>>>>>> and amount of work to migrate.
>>>>>>>
>>>>>>> Cheers
>>>>>>> André
>>>>>>> Am 01.10.19 um 16:34 schrieb Akram Ben Aissi:
>>>>>>>
>>>>>>> Hi André,
>>>>>>>
>>>>>>> indeed this is the new default. And, historically, because of a CVE
>>>>>>> raised about it, anonymous discovery of /api was removed, then
>>>>>>> temporarily restored in 4.1, and removed again in 4.2.
>>>>>>> See this https://bugzilla.redhat.com/show_bug.cgi?id=1711533
>>>>>>>
>>>>>>> On the Jenkins plugins we had to fix similar issues, because /oapi
>>>>>>> was deprecated in OCP 4.2. We depend on the kubernetes-client Java
>>>>>>> library, which fixed this:
>>>>>>> https://github.com/fabric8io/kubernetes-client/issues/1587 (and
>>>>>>> follow the related PRs). If you also depend on this library, a
>>>>>>> recent version may already have your fix.
>>>>>>>
>>>>>>> Otherwise, IIRC, the eclipse plugin required credentials (or a
>>>>>>> token) to connect to the OpenShift server, so in your case you may
>>>>>>> "just" need to use them before querying the endpoints.
>>>>>>>
>>>>>>> Akram
>>>>>>>
>>>>>>>
>>>>>>> Le mar. 1 oct. 2019 à 15:38, Andre Dietisheim 
>>>>

Re: in OpenShift 4.2, /apis is not accessible to anonymous users. Workarounds?

2019-10-03 Thread Ben Parees
On Thu, Oct 3, 2019 at 10:52 AM David Eads  wrote:

> There is no plan to switch to 401.
>

Would plans be created if a BZ were opened?  Or is this an outright
rejection of ever changing it because it's not deemed incorrect (or because
"it's an api now and we can't change it")?

(Also, I assume this is coming out of upstream?)



>
> On Thu, Oct 3, 2019 at 10:44 AM Jean-Francois Maury 
> wrote:
>
>> According to the spec, it's wrong to return 403 in this case. Please
>> re-read my wording from the spec.
>> Should I understand that there is no plan at all to switch to 401 ?
>>
>> Jeff
>>
>> On Thu, Oct 3, 2019 at 3:46 PM David Eads  wrote:
>>
>>> The 403 is intentional.  The user has been authenticated as anonymous,
>>> so a 401 isn't returned.  Kubernetes and OpenShift both return 403 when a
>>> user (even anonymous) attempts to access a forbidden resource regardless of
>>> whether it even exists.
>>>
>>> On Wed, Oct 2, 2019 at 4:06 PM Jean-Francois Maury 
>>> wrote:
>>>
>>>> We are trying to adapt our library but found the following problem:
>>>> when we issue a call to /apis or some of the discovery endpoint without
>>>> authentication info; OCP returns 403 instead of 401.
>>>> According to the HTTP spec, a 403 means the request should not be
>>>> repeated and authentication will not help (see
>>>> https://tools.ietf.org/html/rfc2616#section-10.4.4)
>>>>
>>>> So is it on purpose or is this going to be fixed ?
>>>>
>>>> Jeff
>>>>
>>>> On Tue, Oct 1, 2019 at 5:56 PM Andre Dietisheim 
>>>> wrote:
>>>>
>>>>> Hi Akram
>>>>>
>>>>> Thanks for the answer. Insightful.
>>>>> For now we can't easily switch libraries given the extent of usage and
>>>>> amount of work to migrate.
>>>>>
>>>>> Cheers
>>>>> André
>>>>> Am 01.10.19 um 16:34 schrieb Akram Ben Aissi:
>>>>>
>>>>> Hi André,
>>>>>
>>>>> indeed this is the new default. And, historically, because of a CVE
>>>>> raised about it, anonymous discovery of /api was removed, then
>>>>> temporarily restored in 4.1, and removed again in 4.2.
>>>>> See this https://bugzilla.redhat.com/show_bug.cgi?id=1711533
>>>>>
>>>>> On the Jenkins plugins we had to fix similar issues, because /oapi
>>>>> was deprecated in OCP 4.2. We depend on the kubernetes-client Java
>>>>> library, which fixed this:
>>>>> https://github.com/fabric8io/kubernetes-client/issues/1587 (and follow
>>>>> the related PRs). If you also depend on this library, a recent version
>>>>> may already have your fix.
>>>>>
>>>>> Otherwise, IIRC, the eclipse plugin required credentials (or a token)
>>>>> to connect to the OpenShift server, so in your case you may "just"
>>>>> need to use them before querying the endpoints.
>>>>>
>>>>> Akram
>>>>>
>>>>>
>>>>> Le mar. 1 oct. 2019 à 15:38, Andre Dietisheim  a
>>>>> écrit :
>>>>>
>>>>>> Hi
>>>>>>
>>>>>> In OpenShift 4.2, "/apis" became accessible only to authorized
>>>>>> users. This causes trouble for the Eclipse tooling and the Java
>>>>>> client library openshift-restclient-java
>>>>>> (https://github.com/openshift/openshift-restclient-java), which
>>>>>> tries to discover endpoints before authenticating.
>>>>>>
>>>>>> Thus my question(s):
>>>>>>
>>>>>> * Is this the new default?
>>>>>> * if this restriction is deliberate, what's the reasoning behind it?
>>>>>> * Is there a workaround?
>>>>>>
>>>>>> Thanks for your answers!
>>>>>> André
>>>>>>
>>>>>> ___
>>>>>> dev mailing list
>>>>>> dev@lists.openshift.redhat.com
>>>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>>>
>>>>> ___
>>>>> dev mailing list
>>>>> dev@lists.openshift.redhat.com
>>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Jeff Maury
>>>>
>>>> Manager, DevTools
>>>>
>>>> Red Hat EMEA <https://www.redhat.com>
>>>>
>>>> jma...@redhat.com
>>>> @RedHat <https://twitter.com/redhat>   Red Hat
>>>> <https://www.linkedin.com/company/red-hat>  Red Hat
>>>> <https://www.facebook.com/RedHatInc>
>>>> <https://www.redhat.com>
>>>> <https://redhat.com/summit>
>>>> ___
>>>> dev mailing list
>>>> dev@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>
>>>
>>
>> --
>>
>> Jeff Maury
>>
>> Manager, DevTools
>>
>> Red Hat EMEA <https://www.redhat.com>
>>
>> jma...@redhat.com
>> @RedHat <https://twitter.com/redhat>   Red Hat
>> <https://www.linkedin.com/company/red-hat>  Red Hat
>> <https://www.facebook.com/RedHatInc>
>> <https://www.redhat.com>
>> <https://redhat.com/summit>
>>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>


-- 
Ben Parees | OpenShift


Re: [4.x]: understand the role/ scope of image registry operator

2019-06-17 Thread Ben Parees
On Mon, Jun 17, 2019 at 6:44 AM Daniel Comnea  wrote:

> Hi,
>
> Initially, when I read the docs [1], I assumed that the image registry
> operator's role is similar to what we used to have in 3.x: a simple
> registry, should the user want to use it for images built with [2]
>

The registry in 3.x and the registry in 4.x serve the same purpose. The
registry itself is the same.  The difference is that in 3.x the registry
was deployed/managed by the ansible installer + the admin making direct
edits to the registry deploymentconfig and using the "oc adm registry"
command.

In 4.x, the registry is deployed/managed by the registry operator and the
admin asserts desired config by editing the registry operator's config
resource.

In your case the registry was not initially available because on vSphere
there is no valid storage available, so the operator cannot default the
storage configuration.  Thus it reports unavailable until the admin takes
action to configure the storage properly.



>
> While I was playing with 4.1 I followed the steps mentioned in [3],
> because without them the openshift-installer will not report the
> installation as complete. Also, the CVO will not be in a healthy state,
> ready to pick up new updates.
>
> As such, it seems that the image registry's scope is different than I
> thought (and not documented yet; happy to follow up on the docs repo once
> I figure it out with your help ;)), so my questions are:
>
>- are all the operator images bundled inside the release payload being
>stored on the image registry storage?
>
>
No.


>
>- if not then is it only CVO which needs to store its own release
>   image ?
>
>
The registry doesn't store any images needed by OpenShift itself.  The
reason the CVO is complaining is that one of the operators (in this case
the registry operator) is not reporting available.  You'd experience the
same thing if any other platform operator were reporting unavailable; it's
not specific to a dependency on the registry.



>
>- any particular reason why there is no option to customize the size
>and so it must be 100GB size (as per the docs and the code base) ?
>
>
The docs are a bit unclear, but what they are saying is that you must
define a 100GiB PV, because that is the size of volume that the PVC created
by the registry operator will require.  So if you don't have a 100GiB PV,
the PVC will not be able to find a matching volume.  (Adam/Oleg, we should
probably clarify and/or explain that prereq.)

That is simply a default that we chose for the PVC the registry operator
automatically creates.  If you want to use a different-sized volume, you
simply need to create your own PVC (and PV) and point the registry
operator to the PVC you want to use, instead of letting the registry
operator create its own PVC.
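For example (sizes and names are illustrative; this sketch assumes the
4.x registry operator's `configs.imageregistry.operator.openshift.io`
config resource):

```yaml
# Hypothetical custom-sized claim for the registry to use.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage
  namespace: openshift-image-registry
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
# Point the registry operator at the claim instead of letting it
# create its own 100Gi PVC.
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  storage:
    pvc:
      claim: registry-storage
```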



>
> Thank you,
> Dani
>
>
> [1]
> https://docs.openshift.com/container-platform/4.1/registry/architecture-component-imageregistry.html
>
> [2]
> https://docs.openshift.com/container-platform/4.1/builds/understanding-image-builds.html
> [3]
> https://docs.openshift.com/container-platform/4.1/installing/installing_vsphere/installing-vsphere.html#installation-registry-storage-config_installing-vsphere
> ___________
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>


-- 
Ben Parees | OpenShift


Re: How to get this from the librbary?

2019-05-07 Thread Ben Parees
If you run oc with "--loglevel=8", you can see all the API calls it makes
and the responses.


On Tue, May 7, 2019 at 7:06 PM Tony Herstell 
wrote:

> I don't know OpenShift too well, but I am trying to get the details that
> would be returned from this command:
>
> oc describe node -l app=sc-app
>
> ... (towards the bottom)...
>
> Allocated resources:
>   (Total limits may be over 100 percent, i.e., overcommitted.)
>
>   CPU Requests   CPU Limits     Memory Requests   Memory Limits
>   ------------   ----------     ---------------   -------------
>   804m (50%)     1600m (100%)   1557Mi (6%)       3160Mi (12%)
>
>
> I take all the percentages for all the nodes and then average them for
> the whole instance, thereby getting a "utilization"...
>
> Please advise...
>
>
>
>
> I then disregard any nodes that are not ready:
>
> oc get node -l app=sc-app --no-headers
>
>
> and this bit seems to be available as *client.getServerReadyStatus();*
>
>
>
>
>
>
> _______
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>


-- 
Ben Parees | OpenShift


Re: s2i spec, doc ?

2018-06-21 Thread Ben Parees
On Thu, Jun 21, 2018 at 4:00 PM, Charles Moulliard 
wrote:

>
>
> On Thu, Jun 21, 2018 at 4:21 PM, Ben Parees  wrote:
>
>>
>>
>> On Thu, Jun 21, 2018 at 6:36 AM, Charles Moulliard 
>> wrote:
>>
>>> Hi Charles,
>>>
>>> The info shared is really valuable as it describes a new BuildStrategy
>>> type (= Custom) that the BuildController will process.
>>>
>>> To be honest, I was looking about a general document describing the S2i
>>> Spec v1 Architecture, API, use cases supported and the entities (described
>>> as field, type, size, default value,...)
>>>
>>
>> from an openshift perspective, that would be the buildconfig api spec:
>>
>> https://docs.openshift.org/latest/rest_api/apis-build.openshift.io/v1.BuildConfig.html#object-schema
>>
>> you can drill down into the "sourcestrategy" field.
>>
>>
>>
>>> Such info should help us to discuss current situation and improvements
>>> to propose for S2I Spec v2 such as decouple compilation from docker build,
>>> ...
>>>
>>
>> we've recently implemented the ability for s2i to output a dockerfile
>> (which can then be built w/ non-docker technologies like Kaniko or Buildah)
>> in the upstream source-to-image project.  We'll be looking to bring it to
>> openshift in the near future.
>>
>
> Is it this PR which supports this option ->
> https://github.com/openshift/source-to-image/pull/878 ?
>

yes

> That means that the dockerfile created could then be processed by
> non-docker tools such as buildah, kaniko, ...?
>

yes


>> You can get a good overall sense of the "api" of s2i by looking at the
>> s2i config struct:
>>
>> https://github.com/openshift/source-to-image/blob/master/pkg/api/types.go#L50
>>
>>
>>
>> and adopt a new version of the BuildConfig
>>>
>>> e.g
>>>
>>>   spec:
>>> output:
>>>   to:
>>> kind: ImageStreamTag
>>> name: 'spring-boot-rest-http:1.5.13-1'
>>> baseImage: openjdk1.8:1.3.7
>>> source:
>>>   git:
>>>   type: Git
>>> Strategy:
>>>   compilation:
>>>  tool: maven
>>>  version: 3.5
>>>  command: mvn test && mvn package -Dxxx
>>>   sourceStrategy:
>>> from:
>>>   kind: ImageStreamTag
>>>   name: 's2i-imageORplaybook:latest'
>>>
>>> Regards
>>>
>>> Charles
>>>
>>>
>>>
>>> On Thu, Jun 21, 2018 at 11:57 AM, Charles Sabourdin <
>>> kanedafrompa...@gmail.com> wrote:
>>>
>>>> Hi Charles,
>>>>
>>>> is that :
>>>>  - https://docs.openshift.com/container-platform/3.9/creating_images/custom.html#creating-images-custom
>>>>  - https://github.com/YannMoisan/openshift-tagger-custom-builder
>>>>  - https://github.com/openshift/origin/blob/master/images/builder/docker/docker-builder/Dockerfile
>>>>
>>>> The kind of info you are looking for?
>>>>
>>>> because it seems to me that s2i is pretty much a "specific type of
>>>> custom build".
>>>>
>>>>
>>>> Le jeu. 21 juin 2018 à 11:45, Tako Schotanus  a
>>>> écrit :
>>>>
>>>>> Aren't the links at the end of the README basically what you're
>>>>> looking for?
>>>>>
>>>>> On Thu, Jun 21, 2018 at 11:37 AM Charles Moulliard <
>>>>> cmoul...@redhat.com> wrote:
>>>>>
>>>>>> This project "source-to-image" represents the top part of the
>>>>>> iceberg of building an s2i image, but not how it is processed at the
>>>>>> server side by the BuildConfigController.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Jun 21, 2018 at 11:24 AM, Tako Schotanus >>>>> > wrote:
>>>>>>
>>>>>>> This is probably a good place to start, Charles:
>>>>>>> https://github.com/openshift/source-to-image
>>>>>>>
>>>>>>> On Thu, Jun 21, 2018 at 10:24 AM Charles Moulliard <
>>>>>>> cmoul...@redhat.com> wrote:
>>>>>>>
>>>>>>>> Hi,

Re: s2i spec, doc ?

2018-06-21 Thread Ben Parees
On Thu, Jun 21, 2018 at 6:36 AM, Charles Moulliard 
wrote:

> Hi Charles,
>
> The info shared is really valuable as it describes a new BuildStrategy
> type (= Custom) that the BuildController will process.
>
> To be honest, I was looking for a general document describing the S2I
> Spec v1 architecture, API, supported use cases, and the entities
> (described as field, type, size, default value, ...)
>

From an OpenShift perspective, that would be the BuildConfig API spec:

https://docs.openshift.org/latest/rest_api/apis-build.openshift.io/v1.BuildConfig.html#object-schema

You can drill down into the "sourcestrategy" field.
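For orientation, a minimal sourceStrategy BuildConfig looks roughly like
this (the names, repo URL, and builder image are illustrative, not from
the docs above):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    git:
      uri: https://github.com/example/myapp.git   # hypothetical repo
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest          # the s2i builder image
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```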



> Such info should help us to discuss current situation and improvements to
> propose for S2I Spec v2 such as decouple compilation from docker build, ...
>

We've recently implemented the ability for s2i to output a dockerfile
(which can then be built with non-docker technologies like Kaniko or
Buildah) in the upstream source-to-image project.  We'll be looking to
bring it to OpenShift in the near future.

You can get a good overall sense of the "api" of s2i by looking at the s2i
config struct:

https://github.com/openshift/source-to-image/blob/master/pkg/api/types.go#L50



and adopt a new version of the BuildConfig
>
> e.g
>
>   spec:
> output:
>   to:
> kind: ImageStreamTag
> name: 'spring-boot-rest-http:1.5.13-1'
> baseImage: openjdk1.8:1.3.7
> source:
>   git:
>   type: Git
> Strategy:
>   compilation:
>  tool: maven
>  version: 3.5
>  command: mvn test && mvn package -Dxxx
>   sourceStrategy:
> from:
>   kind: ImageStreamTag
>   name: 's2i-imageORplaybook:latest'
>
> Regards
>
> Charles
>
>
>
> On Thu, Jun 21, 2018 at 11:57 AM, Charles Sabourdin <
> kanedafrompa...@gmail.com> wrote:
>
>> Hi Charles,
>>
>> is that :
>>  - https://docs.openshift.com/container-platform/3.9/creating_images/custom.html#creating-images-custom
>>  - https://github.com/YannMoisan/openshift-tagger-custom-builder
>>  - https://github.com/openshift/origin/blob/master/images/builder/docker/docker-builder/Dockerfile
>>
>> The kind of info you are looking for?
>>
>> because it seems to me that s2i is pretty much a "specific type of
>> custom build".
>>
>>
>> Le jeu. 21 juin 2018 à 11:45, Tako Schotanus  a
>> écrit :
>>
>>> Aren't the links at the end of the README basically what you're looking
>>> for?
>>>
>>> On Thu, Jun 21, 2018 at 11:37 AM Charles Moulliard 
>>> wrote:
>>>
>>>> This project "source-to-image" represents the top part of the iceberg
>>>> of building an s2i image, but not how it is processed at the server
>>>> side by the BuildConfigController.
>>>>
>>>>
>>>>
>>>> On Thu, Jun 21, 2018 at 11:24 AM, Tako Schotanus 
>>>> wrote:
>>>>
>>>>> This is probably a good place to start, Charles:
>>>>> https://github.com/openshift/source-to-image
>>>>>
>>>>> On Thu, Jun 21, 2018 at 10:24 AM Charles Moulliard <
>>>>> cmoul...@redhat.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Except for the API and Controller [1], I haven't been able to find
>>>>>> another source of information. So my question is: is there a
>>>>>> document presenting and explaining the S2I spec?
>>>>>>
>>>>>> Is it possible to discuss/collaborate with Origin team in order to
>>>>>> propose enhancements ? How ?
>>>>>>
>>>>>> [1] https://github.com/openshift/origin/blob/master/pkg/build/apis/build/v1/defaults.go
>>>>>> [2] Build Type: https://github.com/openshift/api/blob/master/build/v1/types.go
>>>>>> Controller: https://github.com/openshift/origin/blob/master/pkg/build/controller/buildconfig/buildconfig_controller.go
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>> Charles
>>>>>> ___
>>>>>> dev mailing list
>>>>>> dev@lists.openshift.redhat.com
>>>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> TAKO SCHOTANUS
>>>>>
>>>>> SENIOR SOFTWARE ENGINEER
>>>>>
>>>>> Red Hat
>>>>>
>>>>> <https://www.redhat.com/>
>>>>> <https://red.ht/sig>
>>>>>
>>>>>
>>>>
>>>
>>> --
>>>
>>> TAKO SCHOTANUS
>>>
>>> SENIOR SOFTWARE ENGINEER
>>>
>>> Red Hat
>>>
>>> <https://www.redhat.com/>
>>> <https://red.ht/sig>
>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


-- 
Ben Parees | OpenShift


Re: openshift/api and openshift/client-go are authoritative

2017-12-05 Thread Ben Parees
On Tue, Dec 5, 2017 at 4:13 PM, Simo Sorce <s...@redhat.com> wrote:

> On Tue, 2017-11-28 at 13:58 -0500, David Eads wrote:
> > As of https://github.com/openshift/origin/pull/17477, h
> > ttps://github.com/openshift/api and https://github.com/openshift/
> client-go are
> > the authoritative source of the OpenShift API types and the OpenShift
> > external clients.  The external types and go client are no longer present
> > in https://github.com/openshift/origin.  This makes it possible to
> interact
> > with an OpenShift cluster without trying to vendor openshift/origin and
> it
> > changes the way that API changes are merged.
> >
> > To make an API change in 3.8+:
> >
> >1. open a pull with the external types to openshift/api and get it
> >reviewed, approved, and merged
> >2. bump the vendored dependencies in openshift/client-go, regenerate
> >(make generate build), and get it merged.
> >3. bump the vendored dependencies in openshift/origin
> >(hack/update-deps.sh), update your internal types, and start serving
> your
> >new API.
> >
> > For forks, you will have to merge into the appropriate branches of the
> > various repositories.
>
> So I have to deal with this now.
>
> Looking at the api repository I see you opened commits with titles like
> this:
>
> UPSTREAM: openshift/origin: missed tags
>
> I do not understand: if the api repository is authoritative, then this
> repository is upstream, not openshift/origin.
>
> Is there a convention we need to follow for the vendoring of api into
> client-go and origin as far as naming commits ? Care to provide an
> example/template ?
>

"bump(*)"

I got to be the guinea pig on this, so you can see my PRs here:

Changed the API:
https://github.com/openshift/api/pull/10

Bumped the client-go deps:
https://github.com/openshift/client-go/pull/12

Trying to bump the origin deps so I can actually make the change I care
about:
https://github.com/openshift/origin/pull/17314/commits/fc47ea3baba0631ba84e82a80a98c10670dabbd8




>
> Simo.
>
> --
> Simo Sorce
> Sr. Principal Software Engineer
> Red Hat, Inc
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>



-- 
Ben Parees | OpenShift


Re: Docker builds in an OpenShift Jenkins slave?

2017-12-05 Thread Ben Parees
On Tue, Dec 5, 2017 at 11:54 AM, Alan Christie <
achris...@informaticsmatters.com> wrote:

> Thanks again, Ben.
>
> I run into two problems with your _alternative_ suggestion … it looked
> really promising because at least you have access to the pod configuration
> (in the Jenkins "Configure System->Cloud->Kubernetes”), which is cool, but
> I encounter the following in the Jenkins log as it attempts to launch my
> slave Pod...
>
> * Invalid value: true: Privileged containers are not allowed*
>
> That’s annoying, especially as there’s a checkbox for it. But also…
>
>
> * Invalid value: "hostPath": hostPath volumes are not allowed to be used*
>

You'll have to take that up with the Kubernetes plugin maintainers:
https://plugins.jenkins.io/kubernetes

Hence my recommendation that you use DOCKER_HOST instead.
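For example, a Kubernetes-plugin pod template along these lines would
point the Docker CLI in the agent at a remote daemon over TLS (the host
name and secret name are hypothetical; substitute your own):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent
spec:
  containers:
  - name: jnlp
    image: my-jenkins-agent:latest      # hypothetical agent image
    env:
    - name: DOCKER_HOST
      value: tcp://docker-host.example.com:2376
    - name: DOCKER_TLS_VERIFY
      value: "1"
    - name: DOCKER_CERT_PATH
      value: /certs
    volumeMounts:
    - name: docker-certs
      mountPath: /certs
      readOnly: true
  volumes:
  - name: docker-certs
    secret:
      secretName: docker-client-certs   # holds ca.pem, cert.pem, key.pem
```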



>
>
> On 5 Dec 2017, at 15:48, Ben Parees <bpar...@redhat.com> wrote:
>
>
>
> On Tue, Dec 5, 2017 at 10:36 AM, Alan Christie <achristie@
> informaticsmatters.com> wrote:
>
>> Thanks Ben. It does seem sensible to use build strategies but prior to a
>> wholesale migration to OpenShift, and for existing workflows that may
>> contain docker and docker-compose commands is there any reasonable option
>> other than a an external (cloud/proprietary/dedicated) docker-enabled
>> slave? I can, for example, just have a Docker slave available (outside the
>> OpenShift cluster) but that’s not ideal.
>>
>> Is there an _unsafe_ route I might be able to use now?
>>
>
> use DOCKER_HOST env variable and point to a host w/ a public docker.
>
> The alternative is to try to use a hostpath volume definition in your
> slave pod template but then you also need to run the slave pod as
> privileged.
>
>
>
>
>> I understand the issues around sharing a docker.sock but it seems to be
>> an acceptable strategy for many. And, for a controlled environment, just
>> mounting docker.sock is a rather neat (quick-n-dirty) solution.
>>
>> It may be that, was you say there’s no sensible route down the
>> OpenShift/CICD road other than build strategies. It’s just that for
>> existing/legacy projects not having docker.sock is quite a hill to climb.
>>
>> Thanks for your advice though, that has been gratefully received.
>>
>> Alan.
>>
>> On 5 Dec 2017, at 13:41, Ben Parees <bpar...@redhat.com> wrote:
>>
>>
>>
>>
>>
>> On Dec 5, 2017 07:57, "Alan Christie" <achris...@informaticsmatters.com>
>> wrote:
>>
>> I’m using Jenkins from the CI/CD catalogue and am able to spin up slaves
>> and use an `ImageStream` to identify my own slave image. That’s useful, but
>> what I want to be able to do is build and run Docker images, primarily for
>> unit/functional test purposes. The _sticking point_, it seems, is the
>> ability to mount the host's `docker.sock`, without this I’m unable to run
>> any Docker commands in my Docker containers.
>>
>> Q. Is there a way to mount the Jenkins/OpenShift host’s
>> /var/run/docker.sock in my slave so that I can run Docker commands?
>>
>>
>> Not safely. (mounting the host docker socket is giving out root access to
>> your host).
>>
>> You could use a remote docker host with a certificate for access I
>> believe. (that's still handing out root access on the docker host but at
>> least it's a little protected)
>>
>> If not, what is the recommended/best practice for
>> building/running/pushing Docker images from a slave agent?
>>
>>
>> Define docker build strategies in openshift and trigger them from your
>> jenkins job.
>>
>>
>> Alan
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>>
>>
>
>
> --
> Ben Parees | OpenShift
>
>
>


-- 
Ben Parees | OpenShift


Re: Docker builds in an OpenShift Jenkins slave?

2017-12-05 Thread Ben Parees
On Tue, Dec 5, 2017 at 10:36 AM, Alan Christie <
achris...@informaticsmatters.com> wrote:

> Thanks Ben. It does seem sensible to use build strategies but prior to a
> wholesale migration to OpenShift, and for existing workflows that may
> contain docker and docker-compose commands is there any reasonable option
> other than a an external (cloud/proprietary/dedicated) docker-enabled
> slave? I can, for example, just have a Docker slave available (outside the
> OpenShift cluster) but that’s not ideal.
>
> Is there an _unsafe_ route I might be able to use now?
>

Use the DOCKER_HOST env variable and point it at a host with a publicly
reachable Docker daemon.

The alternative is to try to use a hostPath volume definition in your
slave pod template, but then you also need to run the slave pod as
privileged.




> I understand the issues around sharing a docker.sock but it seems to be an
> acceptable strategy for many. And, for a controlled environment, just
> mounting docker.sock is a rather neat (quick-n-dirty) solution.
>
> It may be that, was you say there’s no sensible route down the
> OpenShift/CICD road other than build strategies. It’s just that for
> existing/legacy projects not having docker.sock is quite a hill to climb.
>
> Thanks for your advice though, that has been gratefully received.
>
> Alan.
>
> On 5 Dec 2017, at 13:41, Ben Parees <bpar...@redhat.com> wrote:
>
>
>
>
>
> On Dec 5, 2017 07:57, "Alan Christie" <achris...@informaticsmatters.com>
> wrote:
>
> I’m using Jenkins from the CI/CD catalogue and am able to spin up slaves
> and use an `ImageStream` to identify my own slave image. That’s useful, but
> what I want to be able to do is build and run Docker images, primarily for
> unit/functional test purposes. The _sticking point_, it seems, is the
> ability to mount the host's `docker.sock`, without this I’m unable to run
> any Docker commands in my Docker containers.
>
> Q. Is there a way to mount the Jenkins/OpenShift host’s
> /var/run/docker.sock in my slave so that I can run Docker commands?
>
>
> Not safely. (mounting the host docker socket is giving out root access to
> your host).
>
> You could use a remote docker host with a certificate for access I
> believe. (that's still handing out root access on the docker host but at
> least it's a little protected)
>
> If not, what is the recommended/best practice for building/running/pushing
> Docker images from a slave agent?
>
>
> Define docker build strategies in openshift and trigger them from your
> jenkins job.
>
>
> Alan
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
>
>


-- 
Ben Parees | OpenShift
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Docker builds in an OpenShift Jenkins slave?

2017-12-05 Thread Ben Parees
On Dec 5, 2017 07:57, "Alan Christie" 
wrote:

I’m using Jenkins from the CI/CD catalogue and am able to spin up slaves
and use an `ImageStream` to identify my own slave image. That’s useful, but
what I want to be able to do is build and run Docker images, primarily for
unit/functional test purposes. The _sticking point_, it seems, is the
ability to mount the host's `docker.sock`, without this I’m unable to run
any Docker commands in my Docker containers.

Q. Is there a way to mount the Jenkins/OpenShift host’s
/var/run/docker.sock in my slave so that I can run Docker commands?


Not safely. (mounting the host docker socket is giving out root access to
your host).

You could use a remote docker host with a certificate for access I believe.
(that's still handing out root access on the docker host but at least it's
a little protected)

If not, what is the recommended/best practice for building/running/pushing
Docker images from a slave agent?


Define docker build strategies in openshift and trigger them from your
jenkins job.
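As a sketch (all names are placeholders), such a docker-strategy
BuildConfig could look like the following; a Jenkins job can then kick it
off with `oc start-build myapp-docker --follow`:

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp-docker                     # placeholder name
spec:
  source:
    git:
      uri: https://example.com/myapp.git # placeholder repo
  strategy:
    type: Docker
    dockerStrategy: {}                   # builds from the repo's Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```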


Alan



Re: registry deletion

2017-11-28 Thread Ben Parees
On Tue, Nov 28, 2017 at 6:06 PM, Brian Keyes <bke...@vizuri.com> wrote:

> I want to delete this docker registry and start over, this is the command
> that I think was run to create it
>
> oadm registry --config=/etc/origin/master/admin.kubeconfig
> --service-account=registry
>
>
>
> oadm registry delete or something like that ???
>

Sorry, there's no command to delete it.  You should be able to just delete
the deploymentconfig (oc delete dc docker-registry -n default) and then run
oadm registry again to recreate it, however.  You'll get some errors
because of resources that already exist, but it'll get you a new registry
pod.
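As a sketch, the recreate sequence would be (3.x-era commands, run as a
cluster admin):

```shell
# Remove the registry's deploymentconfig...
oc delete dc docker-registry -n default

# ...then recreate it; "already exists" errors for leftover
# services/serviceaccounts can be ignored.
oadm registry --config=/etc/origin/master/admin.kubeconfig \
    --service-account=registry
```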

But I might also ask why you feel you need to delete the registry to get
back to a clean state.




>
> thanks 
> --
> Brian Keyes
> Systems Engineer, Vizuri
> 703-855-9074 <(703)%20855-9074>(Mobile)
> 703-464-7030 x8239 <(703)%20464-7030> (Office)
>
> FOR OFFICIAL USE ONLY: This email and any attachments may contain
> information that is privacy and business sensitive.  Inappropriate or
> unauthorized disclosure of business and privacy sensitive information may
> result in civil and/or criminal penalties as detailed in as amended Privacy
> Act of 1974 and DoD 5400.11-R.
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


-- 
Ben Parees | OpenShift


Re: TestImageStreamImport and TestImageStreamImportDockerHub testcase fails with "invalid memory address or nil pointer dereference"

2017-10-10 Thread Ben Parees
> github.com/openshift/origin/pkg/image/importer.(*
> ImageStreamImporter).Import(0xc421161380, 0x3ffafac6a00, 0xc420014fe8,
> 0xc421194580, 0xc420014fe8, 0x0)
> /root/go/src/github.com/openshift/origin/_output/local/go/src/github.com/
> openshift/origin/pkg/image/importer/importer.go:93 +0xd0
> github.com/openshift/origin/test/integration.
> TestImageStreamImportDockerHub.func1(0x84c0d1b8, 0x4c42015f300)
> /root/go/src/github.com/openshift/origin/_output/local/go/src/github.com/
> openshift/origin/test/integration/imageimporter_test.go:825 +0xba
> github.com/openshift/origin/test/integration.retryOnErrors(0xc4211a8000,
> 0xc421161340, 0x4, 0x4, 0xc420c75d98, 0x4, 0x4)
> /root/go/src/github.com/openshift/origin/_output/local/go/src/github.com/
> openshift/origin/test/integration/dockerregistryclient_test.go:57 +0x30
> github.com/openshift/origin/test/integration.retryWhenUnreachable(
> 0xc4211a8000, 0xc420045d98, 0x0, 0x0, 0x0, 0x0, 0x600)
> /root/go/src/github.com/openshift/origin/_output/local/go/src/github.com/
> openshift/origin/test/integration/dockerregistryclient_test.go:82 +0xfa
> github.com/openshift/origin/test/integration.
> TestImageStreamImportDockerHub(0xc4211a8000)
> /root/go/src/github.com/openshift/origin/_output/local/go/src/github.com/
> openshift/origin/test/integration/imageimporter_test.go:837 +0x372
> testing.tRunner(0xc4211a8000, 0x84bfecc0)
> /usr/lib/golang/src/testing/testing.go:657 +0xa6
> created by testing.(*T).Run
> /usr/lib/golang/src/testing/testing.go:697 +0x2e4
> Running TestImageStreamImportQuayIO...
> ok  TestImageStreamImportQuayIO
> //
>
> Thanks.
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


-- 
Ben Parees | OpenShift


Re: Performing "oc new-app" programmatically

2017-08-24 Thread Ben Parees
On Thu, Aug 24, 2017 at 6:48 AM, Tako Schotanus <tscho...@redhat.com> wrote:

> Hi,
>
> I wanted to know if there's any way we could easily perform an "oc new-app
> ..." from Java?
> I've of course looked at the output that passing "-o yaml" to the oc
> command generates but that's what you get after "oc new-app" has already
> done all of its "smart" work. (Like detection of the language and selecting
> a builder image etc)
> Is there a way to do that (besides executing "oc" itself of course)?
>

Not really; oc new-app is a semi-fat client written in Go.  All the logic
for deciding what to do with your repo/image/etc. is in Go code on the
client side, not in APIs that are being invoked.

However, we do have Java client logic in the Eclipse plugin, which may be a
useful starting point for at least some of what you want to do.  Jeff
Cantril can point you to it.



>
> Thanks!
>
> --
>
> TAKO SCHOTANUS
>
> SENIOR SOFTWARE ENGINEER
>
> Red Hat
>
> <https://www.redhat.com/>
> <https://red.ht/sig>
>
>
> _______
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


-- 
Ben Parees | OpenShift


Re: Changes in imagestream does not trigger build

2017-08-11 Thread Ben Parees
On Fri, Aug 11, 2017 at 12:07 PM, Bamacharan Kundu <bku...@redhat.com>
wrote:

>
>
> On 08/11/2017 09:22 PM, Ben Parees wrote:
> >
> >
> > On Fri, Aug 11, 2017 at 11:09 AM, Bamacharan Kundu <bku...@redhat.com
> > <mailto:bku...@redhat.com>> wrote:
> >
> > Hi Ben,
> >
> > On 08/11/2017 07:37 PM, Ben Parees wrote:
> > >
> > >
> > > On Thu, Aug 10, 2017 at 12:48 PM, Bamacharan Kundu <
> bku...@redhat.com <mailto:bku...@redhat.com>
> > > <mailto:bku...@redhat.com <mailto:bku...@redhat.com>>> wrote:
> > >
> > > Hi,
> > >   I am having one build chain with openshift/origin:v3.6.0.
> where
> > >
> > > 1. First build is triggered using custom build strategy
> manually. pushes
> > > newly created image to an imagestream.
> > > 2. Second build is triggered based on image change in image
> stream tag.
> > >
> > > Now, the first build is working normally and pushing the image
> to the
> > > specific image stream. But the second build is not getting
> triggered.
> > >
> > > This works fine with v1.2.1. Also I have enabled
> > > "system:build-strategy-custom", even tried adding
> cluster-admin cluster
> > > role to the user, but no luck.
> > >
> > > Any suggestion for what should I be looking at, or if I am
> missing
> > > something.
> > >
> > > pasting the build trigger template below for reference.
> > >
> > >
> > > can you paste the output of "oc get bc test -o yaml" too?  That
> will
> > > give us more insight about the state of your triggers.
> >
> > output of `oc get bc test -o yaml`
> > http://pastebin.centos.org/148271/ <http://pastebin.centos.org/
> 148271/>
> >
> > >
> > > also "oc get is ${JOBID} -o yaml" (where you replace $JOBID with
> the
> > > actual imagestream name)
> >
> > output of `oc get is python -o yaml` is
> > http://pastebin.centos.org/148246/ <http://pastebin.centos.org/
> 148246/>
> >
> > I am not able to find anything, any suggestion would be of great
> help.
> >
> >
> > I don't see anything obviously wrong either, which makes me suspect
> > permissions issues as Clayton did.
>
> Yeah, but I even tried setting user as custer-admin for triggering the
> builds. Setting a fresh cluster again and checking.
>
>
> > Further investigation requires level 5 logs from your master, during the
> > time period in which you created the imagestream and buildconfig.
> You mean the opanshift logs at the time of builds running, or build
> config creation time?
>

In your case, both, since you're using the creation of one image to trigger
the build of another.



>
> Thanks
> Bamacharan
>
> >
> >
> >
> >
> > Thanks
> > Bamacharan
> >
> > >
> > >
> > >
> > >
> > > {
> > >   "kind": "BuildConfig",
> > >   "apiVersion": "v1",
> > >   "metadata": {
> > >   "name": "test"
> > >   },
> > >   "spec": {
> > > "triggers": [
> > >   {
> > > "type": "Generic",
> > > "generic": {
> > >   "secret": "${BUILD_TRIGGER_SECRET}"
> > > }
> > >   },
> > >   {
> > > "type": "ImageChange",
> > > "imageChange": {
> > >   "from": {
> > > "kind": "ImageStreamTag",
> > > "name": "${JOBID}:test"
> > >   }
> > > }
> > >       }
> > > ],
> >     >     "strategy": {
> > >   "type": "Custom",
> > >   "customStrategy": {
> > >

Re: Changes in imagestream does not trigger build

2017-08-11 Thread Ben Parees
On Fri, Aug 11, 2017 at 11:09 AM, Bamacharan Kundu <bku...@redhat.com>
wrote:

> Hi Ben,
>
> On 08/11/2017 07:37 PM, Ben Parees wrote:
> >
> >
> > On Thu, Aug 10, 2017 at 12:48 PM, Bamacharan Kundu <bku...@redhat.com
> > <mailto:bku...@redhat.com>> wrote:
> >
> > Hi,
> >   I am having one build chain with openshift/origin:v3.6.0. where
> >
> > 1. First build is triggered using custom build strategy manually.
> pushes
> > newly created image to an imagestream.
> > 2. Second build is triggered based on image change in image stream
> tag.
> >
> > Now, the first build is working normally and pushing the image to the
> > specific image stream. But the second build is not getting triggered.
> >
> > This works fine with v1.2.1. Also I have enabled
> > "system:build-strategy-custom", even tried adding cluster-admin
> cluster
> > role to the user, but no luck.
> >
> > Any suggestion for what should I be looking at, or if I am missing
> > something.
> >
> > pasting the build trigger template below for reference.
> >
> >
> > can you paste the output of "oc get bc test -o yaml" too?  That will
> > give us more insight about the state of your triggers.
>
> output of `oc get bc test -o yaml` http://pastebin.centos.org/148271/
>
> >
> > also "oc get is ${JOBID} -o yaml" (where you replace $JOBID with the
> > actual imagestream name)
>
> output of `oc get is python -o yaml` is http://pastebin.centos.org/
> 148246/
>
> I am not able to find anything, any suggestion would be of great help.
>

I don't see anything obviously wrong either, which makes me suspect
permissions issues as Clayton did.

Further investigation requires level 5 logs from your master, during the
time period in which you created the imagestream and buildconfig.



>
> Thanks
> Bamacharan
>
> >
> >
> >
> >
> > {
> >   "kind": "BuildConfig",
> >   "apiVersion": "v1",
> >   "metadata": {
> >   "name": "test"
> >   },
> >   "spec": {
> > "triggers": [
> >   {
> > "type": "Generic",
> > "generic": {
> >   "secret": "${BUILD_TRIGGER_SECRET}"
> > }
> >   },
> >   {
> > "type": "ImageChange",
> > "imageChange": {
> >   "from": {
> > "kind": "ImageStreamTag",
> > "name": "${JOBID}:test"
> >   }
> > }
> >   }
> >         ],
> > "strategy": {
> >   "type": "Custom",
> >   "customStrategy": {
> > "exposeDockerSocket": true,
> > "from": {
> >   "kind": "DockerImage",
> >   "name": "cccp-test"
> > }
> >
> > Thanks & Regards
> > Bamacharan Kundu
> >
> > ___
> > dev mailing list
> > dev@lists.openshift.redhat.com <mailto:dev@lists.openshift.
> redhat.com>
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> > <http://lists.openshift.redhat.com/openshiftmm/listinfo/dev>
> >
> >
> >
> >
> > --
> > Ben Parees | OpenShift
> >
>



-- 
Ben Parees | OpenShift


Re: Changes in imagestream does not trigger build

2017-08-11 Thread Ben Parees
On Thu, Aug 10, 2017 at 12:48 PM, Bamacharan Kundu <bku...@redhat.com>
wrote:

> Hi,
>   I am having one build chain with openshift/origin:v3.6.0. where
>
> 1. First build is triggered using custom build strategy manually. pushes
> newly created image to an imagestream.
> 2. Second build is triggered based on image change in image stream tag.
>
> Now, the first build is working normally and pushing the image to the
> specific image stream. But the second build is not getting triggered.
>
> This works fine with v1.2.1. Also I have enabled
> "system:build-strategy-custom", even tried adding cluster-admin cluster
> role to the user, but no luck.
>
> Any suggestion for what should I be looking at, or if I am missing
> something.
>
> pasting the build trigger template below for reference.
>

can you paste the output of "oc get bc test -o yaml" too?  That will give
us more insight about the state of your triggers.

also "oc get is ${JOBID} -o yaml" (where you replace $JOBID with the actual
imagestream name)



>
> {
>   "kind": "BuildConfig",
>   "apiVersion": "v1",
>   "metadata": {
>   "name": "test"
>   },
>   "spec": {
> "triggers": [
>   {
> "type": "Generic",
> "generic": {
>   "secret": "${BUILD_TRIGGER_SECRET}"
> }
>   },
>   {
> "type": "ImageChange",
> "imageChange": {
>   "from": {
> "kind": "ImageStreamTag",
> "name": "${JOBID}:test"
>   }
> }
>   }
> ],
> "strategy": {
>   "type": "Custom",
>   "customStrategy": {
> "exposeDockerSocket": true,
> "from": {
>   "kind": "DockerImage",
>   "name": "cccp-test"
> }
>
> Thanks & Regards
> Bamacharan Kundu
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>



-- 
Ben Parees | OpenShift


Re: Seeking Advice On Exporting Image Streams

2017-08-09 Thread Ben Parees
On Wed, Aug 9, 2017 at 10:00 AM, Devan Goodwin <dgood...@redhat.com> wrote:

> On Wed, Aug 9, 2017 at 9:58 AM, Ben Parees <bpar...@redhat.com> wrote:
> >
> >
> > On Wed, Aug 9, 2017 at 8:49 AM, Devan Goodwin <dgood...@redhat.com>
> wrote:
> >>
> >> We are working on a more robust project export/import process (into a
> >> new namespace, possibly a new cluster, etc) and have a question on how
> >> to handle image streams.
> >>
> >> Our first test was with "oc new-app
> >> https://github.com/openshift/ruby-hello-world.git", this results in an
> >> image stream like the following:
> >>
> >> $ oc get is ruby-hello-world -o yaml
> >> apiVersion: v1
> >> kind: ImageStream
> >> metadata:
> >>   annotations:
> >> openshift.io/generated-by: OpenShiftNewApp
> >>   creationTimestamp: 2017-08-08T12:01:22Z
> >>   generation: 1
> >>   labels:
> >> app: ruby-hello-world
> >>   name: ruby-hello-world
> >>   namespace: project1
> >>   resourceVersion: "183991"
> >>   selfLink: /oapi/v1/namespaces/project1/imagestreams/ruby-hello-world
> >>   uid: 4bd229be-7c31-11e7-badf-989096de63cb
> >> spec:
> >>   lookupPolicy:
> >> local: false
> >> status:
> >>   dockerImageRepository: 172.30.1.1:5000/project1/ruby-hello-world
> >>   tags:
> >>   - items:
> >> - created: 2017-08-08T12:02:04Z
> >>   dockerImageReference:
> >>
> >> 172.30.1.1:5000/project1/ruby-hello-world@sha256:
> 8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
> >>   generation: 1
> >>   image:
> >> sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
> >> tag: latest
> >>
> >>
> >> If we link up with the kubernetes resource exporting by adding --export:
> >>
> >> $ oc get is ruby-hello-world -o yaml --export
> >> apiVersion: v1
> >> kind: ImageStream
> >> metadata:
> >>   annotations:
> >> openshift.io/generated-by: OpenShiftNewApp
> >>   creationTimestamp: null
> >>   generation: 1
> >>   labels:
> >> app: ruby-hello-world
> >>   name: ruby-hello-world
> >>   namespace: default
> >>   selfLink: /oapi/v1/namespaces/default/imagestreams/ruby-hello-world
> >> spec:
> >>   lookupPolicy:
> >> local: false
> >> status:
> >>   dockerImageRepository: 172.30.1.1:5000/default/ruby-hello-world
> >>
> >>
> >> This leads to an initial question, what stripped the status tags? I
> >> would have expected this code to live in the image stream strategy:
> >>
> >> https://github.com/openshift/origin/blob/master/pkg/image/
> registry/imagestream/strategy.go
> >> but this does not satisfy RESTExportStrategy, I wasn't able to
> >> determine where this is happening.
> >>
> >> The dockerImageRepository in status remains, but weirdly flips from
> >> "project1" to "default" when doing an export. Should this remain in an
> >> exported IS at all? And if so is there any reason why it would flip
> >> from project1 to default?
> >>
> >> Our real problem however picks up in the deployment config after
> >> import, in here we end up with the following (partial) DC:
> >>
> >> apiVersion: v1
> >> kind: DeploymentConfig
> >> metadata:
> >>   annotations:
> >> openshift.io/generated-by: OpenShiftNewApp
> >>   labels:
> >> app: ruby-hello-world
> >>   name: ruby-hello-world
> >>   namespace: project2
> >>   selfLink:
> >> /oapi/v1/namespaces/project2/deploymentconfigs/ruby-hello-world
> >> spec:
> >>   template:
> >> metadata:
> >>   annotations:
> >> openshift.io/generated-by: OpenShiftNewApp
> >>   labels:
> >> app: ruby-hello-world
> >> deploymentconfig: ruby-hello-world
> >> spec:
> >>   containers:
> >>   - image:
> >> 172.30.1.1:5000/project1/ruby-hello-world@sha256:
> 8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
> >> imagePullPolicy: Always
> >> name: ruby-hello-world
> >>
> >> So our deployment config still refers to a very specific image and
> >> points to the old project. Is there any logic we c

Re: Seeking Advice On Exporting Image Streams

2017-08-09 Thread Ben Parees
On Wed, Aug 9, 2017 at 8:49 AM, Devan Goodwin <dgood...@redhat.com> wrote:

> We are working on a more robust project export/import process (into a
> new namespace, possibly a new cluster, etc) and have a question on how
> to handle image streams.
>
> Our first test was with "oc new-app
> https://github.com/openshift/ruby-hello-world.git", this results in an
> image stream like the following:
>
> $ oc get is ruby-hello-world -o yaml
> apiVersion: v1
> kind: ImageStream
> metadata:
>   annotations:
> openshift.io/generated-by: OpenShiftNewApp
>   creationTimestamp: 2017-08-08T12:01:22Z
>   generation: 1
>   labels:
> app: ruby-hello-world
>   name: ruby-hello-world
>   namespace: project1
>   resourceVersion: "183991"
>   selfLink: /oapi/v1/namespaces/project1/imagestreams/ruby-hello-world
>   uid: 4bd229be-7c31-11e7-badf-989096de63cb
> spec:
>   lookupPolicy:
> local: false
> status:
>   dockerImageRepository: 172.30.1.1:5000/project1/ruby-hello-world
>   tags:
>   - items:
> - created: 2017-08-08T12:02:04Z
>   dockerImageReference:
> 172.30.1.1:5000/project1/ruby-hello-world@sha256:
> 8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
>   generation: 1
>   image: sha256:8d0f81a13ec1b8f8fa4372d26075f0
> dd87578fba2ec120776133db71ce2c2074
> tag: latest
>
>
> If we link up with the kubernetes resource exporting by adding --export:
>
> $ oc get is ruby-hello-world -o yaml --export
> apiVersion: v1
> kind: ImageStream
> metadata:
>   annotations:
> openshift.io/generated-by: OpenShiftNewApp
>   creationTimestamp: null
>   generation: 1
>   labels:
> app: ruby-hello-world
>   name: ruby-hello-world
>   namespace: default
>   selfLink: /oapi/v1/namespaces/default/imagestreams/ruby-hello-world
> spec:
>   lookupPolicy:
> local: false
> status:
>   dockerImageRepository: 172.30.1.1:5000/default/ruby-hello-world
>
>
> This leads to an initial question, what stripped the status tags? I
> would have expected this code to live in the image stream strategy:
> https://github.com/openshift/origin/blob/master/pkg/image/
> registry/imagestream/strategy.go
> but this does not satisfy RESTExportStrategy, I wasn't able to
> determine where this is happening.
>
> The dockerImageRepository in status remains, but weirdly flips from
> "project1" to "default" when doing an export. Should this remain in an
> exported IS at all? And if so is there any reason why it would flip
> from project1 to default?
>
> Our real problem however picks up in the deployment config after
> import, in here we end up with the following (partial) DC:
>
> apiVersion: v1
> kind: DeploymentConfig
> metadata:
>   annotations:
> openshift.io/generated-by: OpenShiftNewApp
>   labels:
> app: ruby-hello-world
>   name: ruby-hello-world
>   namespace: project2
>   selfLink: /oapi/v1/namespaces/project2/deploymentconfigs/ruby-hello-
> world
> spec:
>   template:
> metadata:
>   annotations:
> openshift.io/generated-by: OpenShiftNewApp
>   labels:
> app: ruby-hello-world
> deploymentconfig: ruby-hello-world
> spec:
>   containers:
>   - image: 172.30.1.1:5000/project1/ruby-hello-world@sha256:
> 8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
> imagePullPolicy: Always
> name: ruby-hello-world
>
> So our deployment config still refers to a very specific image and
> points to the old project. Is there any logic we could apply safely to
> address this?
>
> It feels like this should boil down to something like
> "ruby-hello-world@sha256:HASH", could we watch for
> $REGISTRY_IP:PORT/projectname/ during export and strip that leading
> portion out? What would be the risks in doing so?
>

Adding Cesar since he was recently looking at some of the export logic you
have questions about and he's also very interested in this subject since
he's working on a related piece of functionality.  That said:

If you've got an imagechangetrigger (ICT) in the DC you should be able to
strip the entire image field (it should be repopulated from the ICT
imagestream reference during deployment).  However:

1) you still need to straighten out the ICT reference, which is also going
to be pointing to an imagestreamtag in the old project/cluster/whatever
2) if you don't have an ICT reference, you do need to sort this out, and
stripping it the way you propose is definitely not a good idea... what's
going to repopulate it with the right prefix/project in the new cluster?
What if the image field was pointing to docker.io or some other external
registry?
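For reference, the trigger being discussed looks roughly like this in the
DC spec (a sketch based on the example above):

```yaml
# DeploymentConfig image change trigger: when the referenced
# imagestreamtag updates, the named container's image field is
# rewritten to point at the new image.
triggers:
- type: ImageChange
  imageChangeParams:
    automatic: true
    containerNames:
    - ruby-hello-world
    from:
      kind: ImageStreamTag
      name: ruby-hello-world:latest
```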

In short, you're atte

Re: How to preserve test reports with S2I

2017-04-26 Thread Ben Parees
On Wed, Apr 26, 2017 at 3:54 PM, <lieferh...@mailbox.org> wrote:

> Hi,
>
> We're building our Java app (Spring Boot) with Source2Image. Currently we
> are wondering how to preserve e.g. test result reports and so on, since
> they are lost after running the build. What do you suggest?
>
> In our current Jenkins builds we run multiple Maven build steps which
> operate on the actual build workspace...
>

How are you running the tests?  As a postcommit hook in the buildconfig?



>
> Best regards
> Rocco
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


-- 
Ben Parees | OpenShift


Re: Ruby native extension compilation fails

2017-04-26 Thread Ben Parees
On Wed, Apr 26, 2017 at 3:45 AM, Ralf Herzog <ralf.her...@netcom-kassel.de>
wrote:

> Hi,
>
> I got an Issue try to run a Ruby application inside OpenShift Origin
> Cluster with a native extension.
>
> = The Problem =
>
> Using native extensions in Ruby require them to be compiled. The sources
> for the compilation are not shipped with the ruby docker image. Let me give
> you an example:
>
> Create a Gemfile with the following content (FireBird driver example):
>
> source 'https://rubygems.org'
> gem 'fb'
>
> After that, run "bundler install" inside a ruby Docker Image Container.
> The following error will occur:
>
> Gem::Ext::BuildError: ERROR: Failed to build gem native extension.
> [...]
> fb.c:41:19: fatal error: ibase.h: No such file or directory
> [...]
>
> So there is missing a system library package which includes ibase.h.
>
> = Solving approach =
>
> The compile process will succeed if the package firebird-dev (since the
> ruby image is Debian based) was installed before.
>
> Now my question is:
> How can OpenShift Origin make changes to an Image before start to build
> the application to satisfy the dependencies?
>

Since installing a package is going to require root permissions (something
you can't do via an s2i assemble script), you'd have to create a
docker-strategy build which extends the builder image.  Then use that
extended image as the builder image for your s2i-strategy build.  (You can
even make the s2i build be triggered by the new builder image produced by
the docker build, so any time it changes, your app gets rebuilt.)
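A minimal sketch of such an extending Dockerfile, assuming a Debian-based
Ruby builder (the base image name and user id are illustrative):

```dockerfile
FROM ruby:2.4          # illustrative Debian-based builder image
USER root
# Install the system package that provides ibase.h so that
# `bundler install` can compile the fb native extension.
RUN apt-get update && \
    apt-get install -y firebird-dev && \
    rm -rf /var/lib/apt/lists/*
USER 1001              # drop back to the non-root build user
```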

​



>
>
> Sincerely,
>
> Ralf Herzog
>
>
>
> ___________
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


-- 
Ben Parees | OpenShift


Re: Template Schema?

2017-04-24 Thread Ben Parees
There is the swagger spec for the template object:
https://docs.openshift.org/latest/rest_api/openshift_v1.html#v1-template


On Mon, Apr 24, 2017 at 9:17 AM, Anton <kurren...@gmail.com> wrote:

> Hello
>
> Is there a schema for OpenShift Templates?
>
> Thanks
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


-- 
Ben Parees | OpenShift


Re: Enabling Incremental Builds

2016-07-24 Thread Ben Parees
On Sun, Jul 24, 2016 at 12:06 PM, Lalatendu Mohanty <lmoha...@redhat.com>
wrote:

> Hi,
>
> Can I enable "incremental builds" for the whole OpenShift setup? I have
> gone through the Origin documentation [1] and it seems I need to change the
> build configuration of individual application to enable incremental builds.
>
> [1]
> https://docs.openshift.org/latest/dev_guide/builds.html#incremental-builds


Unfortunately, there is no way to do that today.  We have the global build
defaulter+overrider which is intended to solve this sort of problem, but
today "incremental build" is not one of the fields it supports
defaulting/overriding:

https://docs.openshift.org/latest/install_config/build_defaults_overrides.html

I think we'd definitely consider adding it, though; do you want to open an
issue at https://github.com/openshift/origin/issues and tag me (bparees)?
​



>
>
> Thanks,
>
> Lala
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>



-- 
Ben Parees | OpenShift


Re: [Origin] Fail to build from remote Git repository

2016-05-19 Thread Ben Parees
On May 19, 2016 5:26 AM, "ABDALA Olga" <olga.abd...@solucom.fr> wrote:
>
> Hello,
>
>
>
> I have been trying to deploy my application that already exists on
GitHub, on OpenShift, using the “oc new-app” command, but I have been
receiving a build error and I don’t know where that might be coming from.
>
>
>
> Here is what I get after running the command:
>
>
>
>
>
> But when I go through the logs, here is what I get:
>
>
>
>
>
> Does anybody know what might be the cause of the fail?

Basically the git clone is failing. Can you deploy our jenkins image to a
pod, rsh into it, and attempt a git clone from there? That will hopefully
give us more information about the nature of the failure.
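A sketch of those debugging steps (pod and repository names are
placeholders):

```shell
oc new-app jenkins-ephemeral    # deploy the Jenkins template
oc get pods                     # note the jenkins pod name
oc rsh jenkins-1-abcde          # placeholder pod name
git clone https://github.com/example/app.git  # does the clone fail here too?
```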

>
>
>
> Ps: The git repo is public…
>
>
>
> Thank you!
>
>
>
> Olga A.
>
>
>
>
> ___________
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>

Ben Parees | OpenShift


Re: binary source in a Custom type build

2016-05-05 Thread Ben Parees
I believe the content is being streamed into your stdin, so your custom
image would need to read stdin as a tar stream.
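So a custom builder's entrypoint might start with something like this
sketch (paths are illustrative):

```shell
#!/bin/sh
# Binary build input (e.g. from `oc start-build --from-dir`) is
# streamed to the custom builder as a tar archive on stdin;
# unpack it before building.
set -e
mkdir -p /tmp/src
tar -x -C /tmp/src      # reads the tar stream from stdin
# ... build from /tmp/src ...
```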

On Thu, May 5, 2016 at 4:31 PM, Luke Meyer <lme...@redhat.com> wrote:

> How in a custom builder do you retrieve binary build content (from e.g.
> the --from-dir flag)?
> https://docs.openshift.org/latest/dev_guide/builds.html#binary-source
> does not seem to give any clues. SOURCE_URI comes in blank. Is there a
> secret handshake I'm missing?
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


-- 
Ben Parees | OpenShift


Re: Hosting MySQL images in OpenShift Origin

2016-03-03 Thread Ben Parees
On Thu, Mar 3, 2016 at 3:21 AM, David Balakirev <david.balaki...@adnovum.hu>
wrote:

> Hi,
>
> Thanks for the answers Ben, Aaron!
>
> Right so at this point the applications themselves are not hosted inside
> OpenShift. There is a plan for that but we are not  there yet.
>
> @Aaron your assumption was right in a sense that I intend to have MySQL
> available as a service to applications running on our corporate build
> infra.
> They would connect and run their database creation scripts and test.
>
> So while I have confidence I could access the service from other apps
> hosted inside the OpenShift infra, I actually need to somehow access them
> extenally.
>
> To re-iterate the question, I am puzzled how should I route connections to
> the TCP port of the MySQL service (or pod?) from the network.
>

External access requires a Route, and routes only support
http/https/websocket or SNI traffic.  Your generic TCP requirements can
only be met if you use a TLS connection and your mysql clients support SNI
(that is how OpenShift can know where to route the request).

https://docs.openshift.org/latest/dev_guide/routes.html
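In that model the route would use passthrough TLS termination, roughly like
this sketch (names and hostnames are placeholders):

```yaml
# Passthrough route: the router does not terminate TLS; it uses the
# client's SNI hostname to pick the backing service, so the client
# must speak TLS and send SNI.
apiVersion: v1
kind: Route
metadata:
  name: mysql                    # placeholder
spec:
  host: mysql.apps.example.com   # placeholder hostname
  to:
    kind: Service
    name: mysql
  tls:
    termination: passthrough
```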
​



> I assume it has something to do with routing / port forwarding.
>
> @Ben thanks I'll checkout what those templates offer.
>
>
> On 03/02/2016 05:45 PM, Ben Parees wrote:
>
>
>
> On Wed, Mar 2, 2016 at 11:41 AM, Aaron Weitekamp <aweit...@redhat.com>
> wrote:
>
>> On Wed, Mar 2, 2016 at 9:05 AM, Ben Parees <bpar...@redhat.com> wrote:
>>
>>> Take a look at this template which deploys mysql:
>>>
>>> https://github.com/openshift/origin/blob/master/examples/db-templates/mysql-ephemeral-template.json
>>> (or this one which uses persistent storage:
>>> https://github.com/openshift/origin/blob/master/examples/db-templates/mysql-persistent-template.json
>>> )
>>>
>>> And this application which deploys both a DB and an application that
>>> communicates with that DB:
>>>
>>> https://github.com/openshift/origin/blob/master/examples/quickstarts/cakephp-mysql.json
>>> (source for the application is here:
>>> <https://github.com/openshift/cakephp-ex>
>>> https://github.com/openshift/cakephp-ex)
>>>
>>> I would not necessarily expect you to deploy a single mysql instance and
>>> have each app create its own DB in that instance.  I'd expect each app to
>>> just deploy its own mysql instance for testing.  I think you will find that
>>> easier to setup.
>>>
>>>
>> While one db per app is straightforward and there are many examples of
>> this, aren't there benefits to hosting a single DB service that apps can
>> use? This is the enterprise model. It seems to me the question is how to
>> share a service across projects. Once that's in place it should "just
>> work" but I couldn't figure out how that might be done. oc policy ... ?
>>
>>
>
> By default, a service is accessible to all other projects; no policy is
> needed.  The challenge you have is creating the additional DBs in the mysql
> instance.  Our image creates one DB for you; if you want to create more,
> it's up to you to exec into the DB container and run mysql commands to
> create more databases.
>
>
>
>>
>>>
>>> On Wed, Mar 2, 2016 at 4:13 AM, David Balakirev <
>>> <david.balaki...@adnovum.hu>david.balaki...@adnovum.hu> wrote:
>>>
>>>> Hi,
>>>>
>>>> I am trying to host MySQL containers inside OpenShift. The goal would
>>>> be that projects could connect to a given container, setup a database for
>>>> themselves remotely and execute their integration tests.
>>>>
>>>> The first question could be: is this something OpenShift could be used
>>>> for or not?
>>>>
>>>> For my installation I created a project with a single MySQL app
>>>> (mysql:latest).
>>>>
>>>> On the server, I can connect to the database via TCP (--protocol=tcp):
>>>> * via the IP of the pod
>>>> * via the IP of the service (that was auto created for me)
>>>>
>>>> Of course the goal would be to access the database from our corporate
>>>> network.
>>>>
>>>> After digesting many threads on Stackoverflow, especially [1] and [2] I
>>>> think the conclusion is that only port 80/443/8000/

Re: Hosting MySQL images in OpenShift Origin

2016-03-02 Thread Ben Parees
On Wed, Mar 2, 2016 at 11:41 AM, Aaron Weitekamp <aweit...@redhat.com>
wrote:

> On Wed, Mar 2, 2016 at 9:05 AM, Ben Parees <bpar...@redhat.com> wrote:
>
>> Take a look at this template which deploys mysql:
>>
>> https://github.com/openshift/origin/blob/master/examples/db-templates/mysql-ephemeral-template.json
>> (or this one which uses persistent storage:
>> https://github.com/openshift/origin/blob/master/examples/db-templates/mysql-persistent-template.json
>> )
>>
>> And this application which deploys both a DB and an application that
>> communicates with that DB:
>>
>> https://github.com/openshift/origin/blob/master/examples/quickstarts/cakephp-mysql.json
>> (source for the application is here:
>> https://github.com/openshift/cakephp-ex)
>>
>> I would not necessarily expect you to deploy a single mysql instance and
>> have each app create its own DB in that instance.  I'd expect each app to
>> just deploy its own mysql instance for testing.  I think you will find that
>> easier to setup.
>>
>>
> While one db per app is straightforward and there are many examples of
> this, aren't there benefits to hosting a single DB service that apps can
> use? This is the enterprise model. It seems to me the question is how to
> share a service across projects. Once that's in place it should "just
> work" but I couldn't figure out how that might be done. oc policy ... ?
>
>

By default, a service is accessible to all other projects; no policy is
needed.  The challenge you have is creating the additional DBs in the mysql
instance.  Our image creates one DB for you; if you want to create more,
it's up to you to exec into the DB container and run mysql commands to
create more databases.
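A minimal sketch of that exec-based approach, with a placeholder label selector, database name, and user (adjust all three to your deployment):

```shell
# Hypothetical sketch: create an extra database inside an already-running
# MySQL pod. "name=mysql", "appdb2" and "appuser" are placeholders; this
# assumes root access to mysql inside the container and a running cluster.
POD=$(oc get pods -l name=mysql -o name | head -1 | sed 's|.*/||')
oc exec "$POD" -- mysql -u root -e 'CREATE DATABASE IF NOT EXISTS appdb2;'
oc exec "$POD" -- mysql -u root -e "GRANT ALL ON appdb2.* TO 'appuser'@'%';"
```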
​



>
>>
>> On Wed, Mar 2, 2016 at 4:13 AM, David Balakirev <
>> david.balaki...@adnovum.hu> wrote:
>>
>>> Hi,
>>>
>>> I am trying to host MySQL containers inside OpenShift. The goal would be
>>> that projects could connect to a given container, setup a database for
>>> themselves remotely and execute their integration tests.
>>>
>>> The first question could be: is this something OpenShift could be used
>>> for or not?
>>>
>>> For my installation I created a project with a single MySQL app
>>> (mysql:latest).
>>>
>>> On the server, I can connect to the database via TCP (--protocol=tcp):
>>> * via the IP of the pod
>>> * via the IP of the service (that was auto created for me)
>>>
>>> Of course the goal would be to access the database from our corporate
>>> network.
>>>
>>> After digesting many threads on Stackoverflow, especially [1] and [2] I
>>> think the conclusion is that only port 80/443/8000/8443 could be
>>> accessed externally.
>>>
>>> I know of services, routes and port-forwarding, but probably I did not
>>> yet understand when they should be used.
>>>
>>> I can use port-forwarding to map 3306 to a local port, then I access the
>>> database via "-h localhost".
>>>
>>> I0302 09:20:01.133388    9195 portforward.go:213] Forwarding from
>>> 127.0.0.1:49220 -> 3306
>>> I0302 09:20:01.133516    9195 portforward.go:213] Forwarding from
>>> [::1]:49220 -> 3306
>>>
>>> But I assume I cannot use this to expose the port because of what I have
>>> found in [1] and [2].
>>>
>>> Routes I learned could be used to match a path, but I think that is
>>> better used for HTTP services.
>>>
>>> Frankly I did not yet understand the role of a Router in this context.
>>>
>>> Could someone please let me know if it is possible to do what I want or
>>> not? RTFM is perfect for me, provided I can see a specific example for
>>> exposing a TCP port somehow. It is possible the solution is there but I did
>>> not realize.
>>>
>>> I am using Origin: 1.1.3.
>>>
>>> Thanks in advance,
>>> Dave
>>>
>>> [1]
>>> <http://stackoverflow.com/questions/33985138/how-to-host-and-access-murmur-mumble-server-on-openshift-without-port-forwardi?rq=1>
>>> http://stackoverflow.com/
>>> questions/33985138/how-to-host-and-access-murmur-mumble-server-on-openshift-without-port-forwardi?rq=1
>>> [2]
>>> <http://stackoverflow.com/questions/33838765/openshift-v3-confusion-on-services-and-routes>
>>> http://stackoverflow.com/
>>> questions/33838765/openshift-v3-confusion-on-services-and-routes
>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>>
>>
>>
>> --
>> Ben Parees | OpenShift
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>


-- 
Ben Parees | OpenShift


Re: Hosting MySQL images in OpenShift Origin

2016-03-02 Thread Ben Parees
Take a look at this template which deploys mysql:
https://github.com/openshift/origin/blob/master/examples/db-templates/mysql-ephemeral-template.json
(or this one which uses persistent storage:
https://github.com/openshift/origin/blob/master/examples/db-templates/mysql-persistent-template.json
)

And this application which deploys both a DB and an application that
communicates with that DB:
https://github.com/openshift/origin/blob/master/examples/quickstarts/cakephp-mysql.json
(source for the application is here: https://github.com/openshift/cakephp-ex
)

I would not necessarily expect you to deploy a single mysql instance and
have each app create its own DB in that instance.  I'd expect each app to
just deploy its own mysql instance for testing.  I think you will find that
easier to set up.
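As a hedged example, deploying a per-app throwaway instance from the ephemeral template linked above might look like this (the parameter names follow that template; the values are placeholders):

```shell
# Hypothetical sketch: each app instantiates its own disposable MySQL from
# the ephemeral template. Requires a running cluster and the template file
# from the linked repo; values are placeholders.
oc new-app -f mysql-ephemeral-template.json \
  -p MYSQL_USER=testuser \
  -p MYSQL_PASSWORD=testpass \
  -p MYSQL_DATABASE=testdb
```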



On Wed, Mar 2, 2016 at 4:13 AM, David Balakirev <david.balaki...@adnovum.hu>
wrote:

> Hi,
>
> I am trying to host MySQL containers inside OpenShift. The goal would be
> that projects could connect to a given container, set up a database for
> themselves remotely and execute their integration tests.
>
> The first question could be: is this something OpenShift could be used for
> or not?
>
> For my installation I created a project with a single MySQL app
> (mysql:latest).
>
> On the server, I can connect to the database via TCP (--protocol=tcp):
> * via the IP of the pod
> * via the IP of the service (that was auto created for me)
>
> Of course the goal would be to access the database from our corporate
> network.
>
> After digesting many threads on Stackoverflow, especially [1] and [2] I
> think the conclusion is that only port 80/443/8000/8443 could be accessed
> externally.
>
> I know of services, routes and port-forwarding, but probably I did not yet
> understand when they should be used.
>
> I can use port-forwarding to map 3306 to a local port, then I access the
> database via "-h localhost".
>
> I0302 09:20:01.133388    9195 portforward.go:213] Forwarding from
> 127.0.0.1:49220 -> 3306
> I0302 09:20:01.133516    9195 portforward.go:213] Forwarding from
> [::1]:49220 -> 3306
>
> But I assume I cannot use this to expose the port because of what I have
> found in [1] and [2].
>
> Routes I learned could be used to match a path, but I think that is better
> used for HTTP services.
>
> Frankly I did not yet understand the role of a Router in this context.
>
> Could someone please let me know if it is possible to do what I want or
> not? RTFM is perfect for me, provided I can see a specific example for
> exposing a TCP port somehow. It is possible the solution is there but I did
> not realize.
>
> I am using Origin: 1.1.3.
>
> Thanks in advance,
> Dave
>
> [1]
> <http://stackoverflow.com/questions/33985138/how-to-host-and-access-murmur-mumble-server-on-openshift-without-port-forwardi?rq=1>
> http://stackoverflow.com/
> questions/33985138/how-to-host-and-access-murmur-mumble-server-on-openshift-without-port-forwardi?rq=1
> [2]
> <http://stackoverflow.com/questions/33838765/openshift-v3-confusion-on-services-and-routes>
> http://stackoverflow.com/
> questions/33838765/openshift-v3-confusion-on-services-and-routes
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


-- 
Ben Parees | OpenShift


Re: Postgresql and ceph volumes

2016-02-05 Thread Ben Parees
On Fri, Feb 5, 2016 at 10:03 PM, Clayton Coleman <ccole...@redhat.com>
wrote:

> I think we work around this in the Postgres image by using a subdir of the
> root dir and changing perms.
>

Yes, precisely:
https://github.com/openshift/postgresql/blob/master/9.4/root/usr/share/container-scripts/postgresql/common.sh#L166-L176
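The linked script boils down to using a subdirectory of the volume mount as the data dir, so its permissions can be tightened independently of the fsGroup-owned mount; a minimal standalone sketch (the paths are illustrative, not the image's exact ones):

```shell
# Minimal sketch of the workaround: keep PGDATA one level below the volume
# mount so its mode can be forced to 0700 even when the mount itself is
# group-accessible via fsGroup. Paths are illustrative.
VOLUME_DIR=$(mktemp -d)        # stands in for the mounted ceph volume
PGDATA="$VOLUME_DIR/userdata"
mkdir -p "$PGDATA"
chmod 700 "$PGDATA"            # postgres refuses to start unless PGDATA is 0700
stat -c '%a' "$PGDATA"         # → 700
```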


​


>
> On Feb 5, 2016, at 8:18 PM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
> Hi there.
>
> I'm facing an issue trying to use a pvc for postgresql-9.4 using ceph
> storage.
> By setting fsGroup on my template causes pvc to be mounted with perms
> g+rwx for that specific GID.
>
> The problem is postgresql refuses to start if PGDATA isn't 0700.
>
> Thoughts?
>
>
> *Mateus Caruccio*
> Master of Puppets
> +55 (51) 8298.0026
> gtalk:
>
>
> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
> This message and any attachment are solely for the intended
> recipient and may contain confidential or privileged information
> and it can not be forwarded or shared without permission.
> Thank you!
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
> ___________
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


-- 
Ben Parees | OpenShift


Re: Runtime values in sti-php ini templates

2016-02-03 Thread Ben Parees
On Wed, Feb 3, 2016 at 9:40 AM, Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

> On Tue, Feb 2, 2016 at 9:51 PM, Ben Parees <bpar...@redhat.com> wrote:
>
>>
>>
>> On Tue, Feb 2, 2016 at 12:21 PM, Mateus Caruccio <
>> mateus.caruc...@getupcloud.com> wrote:
>>
>>> This could lead to an issue since most .ini files depend on some module
>>> to be available, newrelic.so in my case. Simply processing templates with
>>> no way to install modules may not suffice.
>>>
>>>
>> Well, I'd expect those dependencies to also be defined by the source repo
>> that was including the template ini file.
>> ​
>>
>>
> Doesn't it break security for non-root pods? Is it even possible to
> install RPMs at runtime if the restricted scc is enforced?
> IMHO the best way to allow for custom modules is through rpm/yum. It seems
> to me that blobs in a source repo are way too ugly.
> I'm thinking on a more simple scenario, where users are enabled to provide
> custom configs for already-available containers.
>
>
Sorry, I did not understand that the dependencies you were referring to were
coming from RPMs.  No, it is not possible to install RPMs as part of the
assemble process, due to the root requirement.  And yes, stashing the blobs
in the source repo would be undesirable as well.

I was assuming you were referring to additional dependencies that would be
pulled in by composer.
​



>
>
>>
>>
>>> In my case I've created an sti-php-extra[1] docker image with a bunch of
>>> new modules (in fact, only one for now).
>>>
>>> On the other hand it may be useful for users to provide custom template
>>> files from their source repo. I believe it must be processed by s2i/bin/run
>>> instead of assemble because it may depend on environment variables available
>>> only at runtime (think of an API key, which is much easier to set and use
>>> than a secret).
>>>
>>>
>> Fair enough, processing at run/startup is ok with me, given that that's
>> what the existing run script is doing anyway.
>>
>>
>>
>>> For example, there could be a dir structure from source repo reflected
>>> inside de pod:
>>>
>>>  (repo) .sti/templates/etc/php.d/mymodule.ini.template -> envsubst ->
>>> (pod) /etc/php.d/mymodule.ini
>>>
>>> BTW, does anyone know of a more flexible template processing engine that
>>> could fit docker images? Something capable of understanding conditionals.
>>>
>>>
>> There are lots of options, mustache is a popular one for example, but
>> I'm reluctant to make the PHP image dependent on a particular templating
>> language if we can avoid it.  Those discussions tend to break down into
>> religious wars over which templating framework should be anointed.
>>
>> ​
>>
>
> Agree. The number of template engines is too damn high!
>
> Please note that PHP itself can be used as a template engine. In the end,
> php IS a template engine.
>
> BTW, each lang has some kind of standard template system. Maybe it could
> be better if we stick with it, i.e. php for php, jinja2/django for
> python/django, erb for ruby, "whatever" for nodejs. All of them are able
> to substitute env vars and provide some level of control blocks.
>
>
>
>>
>>
>>> Thoughts?
>>>
>>> [1] https://github.com/getupcloud/sti-php-extra (still outdated)
>>>
>>>
>>>
>>>
>>>
>>> *Mateus Caruccio*
>>> Master of Puppets
>>> +55 (51) 8298.0026
>>> gtalk:
>>>
>>>
>>> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
>>> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
>>> This message and any attachment are solely for the intended
>>> recipient and may contain confidential or privileged information
>>> and it can not be forwarded or shared without permission.
>>> Thank you!
>>>
>>> On Tue, Feb 2, 2016 at 12:50 PM, Ben Parees <bpar...@redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Tue, Feb 2, 2016 at 2:39 AM, Honza Horak <hho...@redhat.com> wrote:
>>>>
>>>>> It seems fine to me as well, we need to make the images extensible.
>>>>>
>>>>> I'd just like to make sure I understand the use case -- that would
>>>>> mean you'd add a file like 
>>>>> /etc/opt/rh/rh-php56/php.d/newrelic.ini.template
>>>>> (using bind-mount or in anoth

Re: Runtime values in sti-php ini templates

2016-02-02 Thread Ben Parees
On Tue, Feb 2, 2016 at 12:21 PM, Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

> This could lead to an issue since most .ini files depend on some module to
> be available, newrelic.so in my case. Simply processing templates with no
> way to install modules may not suffice.
>
>
Well, I'd expect those dependencies to also be defined by the source repo
that was including the template ini file.
​



> In my case I've created an sti-php-extra[1] docker image with a bunch of
> new modules (in fact, only one for now).
>
> On the other hand it may be useful for users to provide custom template
> files from their source repo. I believe it must be processed by s2i/bin/run
> instead of assemble because it may depend on environment variables available
> only at runtime (think of an API key, which is much easier to set and use
> than a secret).
>
>
Fair enough, processing at run/startup is ok with me, given that that's
what the existing run script is doing anyway.



> For example, there could be a dir structure from source repo reflected
> inside de pod:
>
>  (repo) .sti/templates/etc/php.d/mymodule.ini.template -> envsubst ->
> (pod) /etc/php.d/mymodule.ini
>
> BTW, does anyone know of a more flexible template processing engine that
> could fit docker images? Something capable of understanding conditionals.
>
>
There are lots of options, mustache is a popular one for example, but I'm
reluctant to make the PHP image dependent on a particular templating
language if we can avoid it.  Those discussions tend to break down into
religious wars over which templating framework should be anointed.

​



> Thoughts?
>
> [1] https://github.com/getupcloud/sti-php-extra (still outdated)
>
>
>
>
>
> *Mateus Caruccio*
> Master of Puppets
> +55 (51) 8298.0026
> gtalk:
>
>
> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
> This message and any attachment are solely for the intended
> recipient and may contain confidential or privileged information
> and it can not be forwarded or shared without permission.
> Thank you!
>
> On Tue, Feb 2, 2016 at 12:50 PM, Ben Parees <bpar...@redhat.com> wrote:
>
>>
>>
>> On Tue, Feb 2, 2016 at 2:39 AM, Honza Horak <hho...@redhat.com> wrote:
>>
>>> It seems fine to me as well, we need to make the images extensible.
>>>
>>> I'd just like to make sure I understand the use case -- that would mean
>>> you'd add a file like /etc/opt/rh/rh-php56/php.d/newrelic.ini.template
>>> (using bind-mount or in another layer)?
>>>
>>>
>> ​I wouldn't expect it to be a bindmount or additional layer.  I'd expect
>> "newrelic.ini.template" would be supplied via the source repository that
>> was being built, and the assemble script should copy the source-supplied
>> templates ​into an appropriate location and then process them.  (or process
>> them into the appropriate location).
>>
>> so this file would not be a part of the php56 image.
>>
>>
>>
>>> Also adding Remi to comment on this.
>>>
>>> Honza
>>>
>>> On 02/02/2016 12:54 AM, Ben Parees wrote:
>>>
>>>> I think that sounds reasonable, i'd be inclined to accept it as a PR.
>>>> Adding Honza since his team technically controls the PHP image now (5.6
>>>> anyway).
>>>>
>>>>
>>>> On Mon, Feb 1, 2016 at 4:43 PM, Mateus Caruccio
>>>> <mateus.caruc...@getupcloud.com <mailto:mateus.caruc...@getupcloud.com
>>>> >>
>>>>
>>>> wrote:
>>>>
>>>> Hi.
>>>>
>>>> I need to run newrelic on a php container. Its license must be set
>>>> from php.ini or any .ini inside /etc/opt/rh/rh-php56/php.d/.
>>>>
>>>> The problem is it needs to be set at run time, not build time, because
>>>> the license key is stored in an env var.
>>>>
>>>> What is the best way to do that?
>>>> Wouldn't be good to have some kind of template processing like [1]?
>>>> Something like this:
>>>>
>>>> for tpl in $PHP_INI_SCAN_DIR/*.template; do
>>>> envsubst < $tpl > ${tpl%.template}
>>>> done
>>>>
>>>> The

Re: Runtime values in sti-php ini templates

2016-02-02 Thread Ben Parees
On Tue, Feb 2, 2016 at 2:39 AM, Honza Horak <hho...@redhat.com> wrote:

> It seems fine to me as well, we need to make the images extensible.
>
> I'd just like to make sure I understand the use case -- that would mean
> you'd add a file like /etc/opt/rh/rh-php56/php.d/newrelic.ini.template
> (using bind-mount or in another layer)?
>
>
I wouldn't expect it to be a bindmount or additional layer.  I'd expect
"newrelic.ini.template" would be supplied via the source repository that
was being built, and the assemble script should copy the source-supplied
templates ​into an appropriate location and then process them.  (or process
them into the appropriate location).

so this file would not be a part of the php56 image.



> Also adding Remi to comment on this.
>
> Honza
>
> On 02/02/2016 12:54 AM, Ben Parees wrote:
>
>> I think that sounds reasonable, i'd be inclined to accept it as a PR.
>> Adding Honza since his team technically controls the PHP image now (5.6
>> anyway).
>>
>>
>> On Mon, Feb 1, 2016 at 4:43 PM, Mateus Caruccio
>> <mateus.caruc...@getupcloud.com <mailto:mateus.caruc...@getupcloud.com>>
>>
>> wrote:
>>
>> Hi.
>>
>> I need to run newrelic on a php container. Its license must be set
>> from php.ini or any .ini inside /etc/opt/rh/rh-php56/php.d/.
>>
>> The problem is it needs to be set at run time, not build time, because
>> the license key is stored in an env var.
>>
>> What is the best way to do that?
>> Wouldn't be good to have some kind of template processing like [1]?
>> Something like this:
>>
>> for tpl in $PHP_INI_SCAN_DIR/*.template; do
>> envsubst < $tpl > ${tpl%.template}
>> done
>>
>> Is there any reason not to adopt this approach? Is it something
>> origin would accept as a PR?
>>
>> [1]
>>
>> https://github.com/openshift/sti-php/blob/04a0900b68264642def9aaea9465a71e1075e713/5.6/s2i/bin/run#L20-L21
>>
>>
>> *Mateus Caruccio*
>> Master of Puppets
>> +55 (51) 8298.0026
>> gtalk: _mateus.caruc...@getupcloud.com
>> <mailto:diogo.goe...@getupcloud.com>
>> twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
>>
>> _
>> This message and any attachment are solely for the intended
>>     recipient and may contain confidential or privileged information
>> and it can not be forwarded or shared without permission.
>> Thank you!
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com <mailto:dev@lists.openshift.redhat.com
>> >
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>>
>>
>> --
>> Ben Parees | OpenShift
>>
>>


-- 
Ben Parees | OpenShift


Re: Runtime values in sti-php ini templates

2016-02-01 Thread Ben Parees
I think that sounds reasonable; I'd be inclined to accept it as a PR.
Adding Honza since his team technically controls the PHP image now (5.6
anyway).


On Mon, Feb 1, 2016 at 4:43 PM, Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

> Hi.
>
> I need to run newrelic on a php container. Its license must be set from
> php.ini or any .ini inside /etc/opt/rh/rh-php56/php.d/.
>
> The problem is it needs to be set at run time, not build time, because
> the license key is stored in an env var.
>
> What is the best way to do that?
> Wouldn't be good to have some kind of template processing like [1]?
> Something like this:
>
> for tpl in $PHP_INI_SCAN_DIR/*.template; do
>envsubst < $tpl > ${tpl%.template}
> done
>
> Is there any reason not to adopt this approach? Is it something origin
> would accept as a PR?
>
> [1]
> https://github.com/openshift/sti-php/blob/04a0900b68264642def9aaea9465a71e1075e713/5.6/s2i/bin/run#L20-L21
>
>
> *Mateus Caruccio*
> Master of Puppets
> +55 (51) 8298.0026
> gtalk:
>
>
> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
> This message and any attachment are solely for the intended
> recipient and may contain confidential or privileged information
> and it can not be forwarded or shared without permission.
> Thank you!
>
> ___________
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


-- 
Ben Parees | OpenShift


Re: Username resolution failing

2016-01-19 Thread Ben Parees
Yes there is a trick, documented here:

https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-specific-guidelines

See the section on "Support Arbitrary User IDs", which describes how to
use nss_wrapper to work around this.

That said, the openshift python image already does the nss trick.  I think
we had an issue with the rhel image not containing the right package; are
you using the rhel image or the centos image?

For the moment you might try the centos image if you haven't already, until
we get the rhel image updated.
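A minimal sketch of the nss_wrapper setup described in those guidelines (the file locations are illustrative, and the preload step is shown as comments since the library path is distro-specific and the package may be absent):

```shell
# Sketch: write a passwd entry for whatever arbitrary UID the container was
# started with, then point nss_wrapper at it so getpwuid() succeeds.
APP_HOME=/tmp
NSS_WRAPPER_PASSWD=/tmp/passwd
echo "default:x:$(id -u):0:Default Application User:${APP_HOME}:/sbin/nologin" \
  > "$NSS_WRAPPER_PASSWD"
grep ":$(id -u):" "$NSS_WRAPPER_PASSWD"   # the runtime UID now has an entry

# In the image's run script one would additionally set (requires the
# nss_wrapper package to be installed in the image):
# export NSS_WRAPPER_PASSWD NSS_WRAPPER_GROUP=/etc/group
# export LD_PRELOAD=libnss_wrapper.so
```

With the preload in place, calls like Python's `pwd.getpwuid(os.geteuid())` resolve against the generated file instead of `/etc/passwd`.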



On Tue, Jan 19, 2016 at 9:53 AM, Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

> Hi.
>
> Regarding openshift policy for safely running images, it's recommended to
> disable scc for the unprivileged user. This may cause some issues while
> reading from the password database, since the EUID of the running user is
> generated by openshift and can't be found inside the container:
>
> bash-4.2$ pip install memcache
> Traceback (most recent call last):
>   File "/opt/rh/rh-python34/root/usr/bin/pip", line 7, in 
> from pip import main
>   File
> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/__init__.py",
> line 9, in 
> from pip.util import get_installed_distributions, get_prog
>   File
> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/util.py",
> line 16, in 
> from pip.locations import site_packages, running_under_virtualenv,
> virtualenv_no_global
>   File
> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py",
> line 96, in 
> build_prefix = _get_build_prefix()
>   File
> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py",
> line 65, in _get_build_prefix
> __get_username())
>   File
> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py",
> line 60, in __get_username
> return pwd.getpwuid(os.geteuid()).pw_name
> KeyError: 'getpwuid(): uid not found: 100018'
>
> How can I circumvent this obstacle? Should I rebuild all sti scripts to
> include this user in the image? Is there any trick to allow passwd
> readers to read from a mock?
>
>
> Thanks,
>
>
> *Mateus Caruccio*
> Master of Puppets
> +55 (51) 8298.0026
> gtalk:
>
>
> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
> This message and any attachment are solely for the intended
> recipient and may contain confidential or privileged information
> and it can not be forwarded or shared without permission.
> Thank you!
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


-- 
Ben Parees | OpenShift


Re: Username resolution failing

2016-01-19 Thread Ben Parees
That's a good point.  We do have the mechanism in place to do that.

Michal, any objection to adding the NSS env definitions to our scl_enable
script?



On Tue, Jan 19, 2016 at 11:02 AM, Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

> Yep, just tried centos images and it is working fine.
>
> It took me a while to understand the whole thing. I was simply "oc
> exec-ing" into the pod, but those NSS vars are create by sti/run.
> It may be good if those vars would be available from any shell.
>
> Thanks.
>
>
> *Mateus Caruccio*
> Master of Puppets
> +55 (51) 8298.0026
> gtalk:
>
>
> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
> This message and any attachment are solely for the intended
> recipient and may contain confidential or privileged information
> and it can not be forwarded or shared without permission.
> Thank you!
>
> On Tue, Jan 19, 2016 at 1:52 PM, Ben Parees <bpar...@redhat.com> wrote:
>
>> Ok, can you try the centos image (centos/python-34-centos7)?
>>
>>
>> Honza:  do you know when the RHEL SCL python images (2.7 and 3.4) will be
>> updated to fix the missing nss rpm issue?
>>
>>
>> On Tue, Jan 19, 2016 at 10:27 AM, Mateus Caruccio <
>> mateus.caruc...@getupcloud.com> wrote:
>>
>>> Yes, we are using rhel images.
>>>
>>> Thanks!
>>>
>>> *Mateus Caruccio*
>>> Master of Puppets
>>> +55 (51) 8298.0026
>>> gtalk:
>>>
>>>
>>> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
>>> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
>>> This message and any attachment are solely for the intended
>>> recipient and may contain confidential or privileged information
>>> and it can not be forwarded or shared without permission.
>>> Thank you!
>>>
>>> On Tue, Jan 19, 2016 at 1:15 PM, Ben Parees <bpar...@redhat.com> wrote:
>>>
>>>> Yes there is a trick, documented here:
>>>>
>>>>
>>>> https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-specific-guidelines
>>>>
>>>> see the section on "*Support Arbitrary User IDs" *which describes how
>>>> to use nss wrapper to work around this.
>>>>
>>>> That said, the openshift python image already does the nss trick.  I
>>>> think we had an issue with the rhel image not containing the right package,
>>>> are you using the rhel image or the centos image?
>>>>
>>>> For the moment you might try the centos image if you haven't already,
>>>> until we get the rhel image updated.
>>>>
>>>>
>>>>
>>>> On Tue, Jan 19, 2016 at 9:53 AM, Mateus Caruccio <
>>>> mateus.caruc...@getupcloud.com> wrote:
>>>>
>>>>> Hi.
>>>>>
>>>>> Regarding openshift policy for safely running images, it's recommended
>>>>> to disable scc for the unprivileged user. This may cause some issues while
>>>>> reading from the password database, since the EUID of the running user is generated
>>>>> by openshift and can't be found inside the container:
>>>>>
>>>>> bash-4.2$ pip install memcache
>>>>> Traceback (most recent call last):
>>>>>   File "/opt/rh/rh-python34/root/usr/bin/pip", line 7, in 
>>>>> from pip import main
>>>>>   File
>>>>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/__init__.py",
>>>>> line 9, in 
>>>>> from pip.util import get_installed_distributions, get_prog
>>>>>   File
>>>>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/util.py",
>>>>> line 16, in 
>>>>> from pip.locations import site_packages, running_under_virtualenv,
>>>>> virtualenv_no_global
>>>>>   File
>>>>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py",
>>>>> line 96, in 
>>>>> build_prefix = _get_build_prefix()
>>>>>   File
>>>>> "/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/pip/locations.py",
>>>>> line 65, in _get_build_prefix
>>>>> __get_username())
>>>>>   File
>>>>> "/opt/rh/rh-python34/r