Re: few basic questions about S2I and docker run

2016-09-13 Thread Ben Parees
On Tue, Sep 13, 2016 at 1:35 AM, Ravi Kapoor wrote:

> Hi Ben,
>
> I am finally able to run my nodejs code on openshift with both approaches
> (volume mount as well as S2I)
> I was also able to resolve most of the other issues I mentioned and was able
> to run the JEE application as well.
>
> Thanks a lot for helping me through all the silly questions.
> Good news is that now my company will be using openshift to manage our
> dockers/deployments.
>

cool!  glad we were able to get you going.



>
> regards
>
>
> On Sat, Sep 10, 2016 at 8:23 AM, Ben Parees  wrote:
>
>> you can define a command on the container within the pod:
>> http://kubernetes.io/docs/user-guide/configuring-containers/#launching-a-container-using-a-configuration-file
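For illustration only, a minimal sketch of the container entry such a pod (or deployment config) JSON could carry; the image name and jar path below are placeholders, not anything from this thread:

    "containers": [
      {
        "name": "myapp",
        "image": "myregistry/myapp:latest",
        "command": ["java", "-jar", "/usr/src/myapp/myapp.jar"],
        "workingDir": "/usr/src/myapp"
      }
    ]

The "command" field overrides the image's entrypoint, so a node app would use something like ["node", "server.js"] instead.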
>>
>>
>> On Fri, Sep 9, 2016 at 5:21 PM, Ravi  wrote:
>>
>>>
>>> Thank you for this help.
>>>
>>> I was trying nginx because after invoking the container, I do not have to
>>> run a command. For java or node, after the container is run I will need to
>>> run a command e.g.
>>>
>>> java -jar myapp.jar
>>> OR
>>> node server.js
>>>
>>> Can you guide me how to add this to the json file or point me to
>>> documentation so I can try this?
>>>
>>> thanks so much
>>>
>>>
>>> On 9/8/2016 6:56 PM, Ben Parees wrote:
>>>
 Downloads$ oc get pods
 NAME             READY     STATUS    RESTARTS   AGE
 nginx-1-deploy   1/1       Running   0          14s
 nginx-1-rmfl9    0/1       Error     0          11s

 Downloads$ oc logs nginx-1-rmfl9
 2016/09/09 01:54:21 [warn] 1#1: the "user" directive makes sense only if
 the master process runs with super-user privileges, ignored in
 /etc/nginx/nginx.conf:2
 nginx: [warn] the "user" directive makes sense only if the master
 process runs with super-user privileges, ignored in
 /etc/nginx/nginx.conf:2
 2016/09/09 01:54:21 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp"
 failed (13: Permission denied)
 nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13:
 Permission denied)


 the nginx image probably only works when run as root or as some other
 specific user.  when images are run in openshift, by default they are
 assigned a random uid for security purposes.  that can cause issues with
 images that expect to run as a specific user.  please see our
 documentation:

 https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-origin-specific-guidelines
 (section on supporting arbitrary uids)

 to relax the restriction, see:
 https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
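As a hedged example of the kind of command the linked page describes (requires cluster-admin; the project name below is a placeholder, and on older origin releases the binary may be oadm rather than oc adm):

    # let pods running under the "default" service account in "myproject" keep the UID from their Dockerfile
    oc adm policy add-scc-to-user anyuid -z default -n myproject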





 On Thu, Sep 8, 2016 at 9:50 PM, Ravi wrote:


 oh, forgot to add, I do not have any readiness probe.

 On 9/8/2016 6:47 PM, Ravi Kapoor wrote:

 I removed volumes, pod still failed. json and logs attached



 On Thu, Sep 8, 2016 at 6:35 PM, Ben Parees wrote:

 though i don't see it in your json it sounds like you have a
 readiness probe defined on your pod and it's not being met
 successfully.

 the other possibility is it has to do w/ your mounts.  can
 you
 temporarily remove the volume mounts and see if the pod
 comes up?


 On Thu, Sep 8, 2016 at 8:33 PM, Ravi Kapoor wrote:

 Pod deployment failed. error in console log is

 --> Scaling nginx-1 to 1
 --> Waiting up to 10m0s for pods in deployment nginx-1 to become ready
 error: update acceptor rejected nginx-1: pods for deployment "nginx-1" took longer than 600 seconds to become ready



 *$ oc describe pods*
 Name:   nginx-1-deploy
 Namespace:  test
 Security Policy:restricted
 Node:   172.27.104.71/172.27.104.71
 
 >
 Start Time: Thu, 08 Sep 2016 17:30:29 -0400
 Labels:
 openshift.io/deployer-pod-for.name=nginx-1
 

Re: few basic questions about S2I and docker run

2016-09-12 Thread Ravi Kapoor
Hi Ben,

I am finally able to run my nodejs code on openshift with both approaches
(volume mount as well as S2I)
I was also able to resolve most of the other issues I mentioned and was able to
run the JEE application as well.

Thanks a lot for helping me through all the silly questions.
Good news is that now my company will be using openshift to manage our
dockers/deployments.

regards


On Sat, Sep 10, 2016 at 8:23 AM, Ben Parees  wrote:

> you can define a command on the container within the pod:
> http://kubernetes.io/docs/user-guide/configuring-containers/#launching-a-container-using-a-configuration-file
>
>
> On Fri, Sep 9, 2016 at 5:21 PM, Ravi  wrote:
>
>>
>> Thank you for this help.
>>
>> I was trying nginx because after invoking the container, I do not have to run
>> a command. For java or node, after the container is run I will need to run
>> a command e.g.
>>
>> java -jar myapp.jar
>> OR
>> node server.js
>>
>> Can you guide me how to add this to the json file or point me to
>> documentation so I can try this?
>>
>> thanks so much
>>
>>
>> On 9/8/2016 6:56 PM, Ben Parees wrote:
>>
>>> Downloads$ oc get pods
>>> NAME             READY     STATUS    RESTARTS   AGE
>>> nginx-1-deploy   1/1       Running   0          14s
>>> nginx-1-rmfl9    0/1       Error     0          11s
>>>
>>> Downloads$ oc logs nginx-1-rmfl9
>>> 2016/09/09 01:54:21 [warn] 1#1: the "user" directive makes sense only if
>>> the master process runs with super-user privileges, ignored in
>>> /etc/nginx/nginx.conf:2
>>> nginx: [warn] the "user" directive makes sense only if the master
>>> process runs with super-user privileges, ignored in
>>> /etc/nginx/nginx.conf:2
>>> 2016/09/09 01:54:21 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp"
>>> failed (13: Permission denied)
>>> nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13:
>>> Permission denied)
>>>
>>>
>>> the nginx image probably only works when run as root or as some other
>>> specific user.  when images are run in openshift, by default they are
>>> assigned a random uid for security purposes.  that can cause issues with
>>> images that expect to run as a specific user.  please see our
>>> documentation:
>>>
>>> https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-origin-specific-guidelines
>>> (section on supporting arbitrary uids)
>>>
>>> to relax the restriction, see:
>>> https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Sep 8, 2016 at 9:50 PM, Ravi wrote:
>>>
>>>
>>> oh, forgot to add, I do not have any readiness probe.
>>>
>>> On 9/8/2016 6:47 PM, Ravi Kapoor wrote:
>>>
>>> I removed volumes, pod still failed. json and logs attached
>>>
>>>
>>>
>>> On Thu, Sep 8, 2016 at 6:35 PM, Ben Parees wrote:
>>>
>>> though i don't see it in your json it sounds like you have a
>>> readiness probe defined on your pod and it's not being met
>>> successfully.
>>>
>>> the other possibility is it has to do w/ your mounts.  can
>>> you
>>> temporarily remove the volume mounts and see if the pod
>>> comes up?
>>>
>>>
>>> On Thu, Sep 8, 2016 at 8:33 PM, Ravi Kapoor wrote:
>>>
>>> Pod deployment failed. error in console log is
>>>
>>> --> Scaling nginx-1 to 1
>>> --> Waiting up to 10m0s for pods in deployment nginx-1 to become ready
>>> error: update acceptor rejected nginx-1: pods for deployment "nginx-1" took longer than 600 seconds to become ready
>>>
>>>
>>>
>>> *$ oc describe pods*
>>> Name:   nginx-1-deploy
>>> Namespace:  test
>>> Security Policy:restricted
>>> Node:   172.27.104.71/172.27.104.71
>>> 
>>> >> >
>>> Start Time: Thu, 08 Sep 2016 17:30:29 -0400
>>> Labels:
>>> openshift.io/deployer-pod-for.name=nginx-1
>>> 
>>> >> >
>>> Status: Failed
>>> IP: 172.17.0.2

Re: few basic questions about S2I and docker run

2016-09-05 Thread Ben Parees
On Mon, Sep 5, 2016 at 11:42 PM, Ravi  wrote:

>
> Ben,
>
> You have been very helpful. I am sincerely thankful.
>
> > ​I still think you'll get more mileage by trying to use the system as it
> > was designed to be used(build an image with your compiled source built
> > in) instead of trying to force a different workflow onto it.
>
> I understand and agree. Accordingly I need to work on a 2-step solution:
> 1. First step is to get my dockers up and running in a day or two.
> Considering how long it is taking me to understand the system, I want to do
> it the short way first, i.e. be able to run the following command from within
> openshift:
> "docker run --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp
> openjdk:8-jre-alpine java myClass"
>
> This means
> - download jars/files from source control to a host folder foobar
> - mount the host folder foobar, that has my class files/jars into a java
> docker at /usr/src/myapp
> - run java docker (along with -w flag)
>

i'd start by doing a basic tutorial with "oc cluster up" and building one
of the existing applications/running some of the existing images.

Then you can advance to building your own image (either building it using
openshift, or building it yourself via "docker build" and pushing the image
to a registry so you can then deploy it on openshift using "oc new-app" or
"oc run").
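A rough sketch of that flow, with placeholder image and repository names (not anything agreed in this thread):

    # start a local all-in-one cluster (origin 1.3+)
    oc cluster up
    # build and push the image outside openshift, then deploy it
    docker build -t docker.io/myuser/myapp:v1 .
    docker push docker.io/myuser/myapp:v1
    oc new-app docker.io/myuser/myapp:v1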



>
> 2. Once I can get system up, I will continue to understand how to make S2I
> operational and switch to it once I have enough confidence. I am struggling
> with the fact that running php through S2I seems to be straightforward. No
> special config etc is needed. However to run Java or Node code, the repo
> should have particular images or package.json etc. and so far I am not able
> to understand what I need to add to the repo to make it S2I compatible.
> In other words, if I create one php file, put in a repo, mention the git
> url in S2I, it works. If I create a single node file or java file or even
> jboss example git (https://github.com/jboss-developer/jboss-eap-quickstarts)
> in S2I, the build fails.
>

i'm not sure what issues you are having with nodejs, but for java it is
true that the only s2i build images openshift itself provides assume you
are using maven to build your app, that your app produces an ear or war
file, and you are going to run that ear/war on eap, tomcat, or wildfly.  So
if you're trying to build/run a standalone java app, there is no out of the
box s2i builder image that can help you.  It would not be incredibly
difficult to write one, but given where you're at, i'd start by either
using a docker-type build, or building the image manually outside of
openshift and just using openshift to run the image.

I'd be happy to give you some pointers on creating a generic java s2i
builder image when you get to that point.



>
> I hope that makes sense.
> Regards
>
>
>
> On 9/5/2016 7:33 PM, Ben Parees wrote:
>
>>
>>
>> On Fri, Sep 2, 2016 at 4:27 PM, Ravi wrote:
>>
>> Ben, thanks for pointing me in right direction. However, after a
>> week, I am still struggling and need help.
>>
>> The questions you raised are genuine issues which, if managed by
>> openshift will be easy to handle, however if openshift does not
>> manage them, then manually managing them is certainly a difficult
>> task.
>>
>> Leaving that aside, I have been struggling with running my app on
>> openshift. Here is a list of everything I tried
>>
>> As suggested by you, I tried to create a volume and run java docker
>> with it. I am getting really lost in variety of issues, here are some:
>>
>>
>> ​I still think you'll get more mileage by trying to use the system as it
>> was designed to be used(build an image with your compiled source built
>> in) instead of trying to force a different workflow onto it.
>>
>>
>>
>> - unless I login with service:admin user (no password), I am not
>> authorized to mount a volume.
>>
>>
>> ​what type of volume?  what do you mean by "mount a volume"?  what
>> commands are you running?​  how is your pod or deployment config defined?
>>
>>
>>
>> - I can only login with service:admin on command line, the UI gives
>> me error. So basically I cannot visually see mounted volumes
>> - There is no way from UI to create a Volume Claim, I must define a
>> JSON
>>
>> - I was unable to find any documentation for this JSON and had to
>> copy from other places
>>
>>
>> ​​you can use "oc set volumes" to add volume claims to a deployment
>> config, once you have (as an administrator) defined persistent volumes
>> in your cluster.
>>
>> you can also "attach storage" to a deployment config from within the
>> openshift console, but that does not apply to your scenario since you
>> are trying to mount a "specific" volume into your pod instead of just
>> requesting persistent storage.
>>
>>
>>
>>
>> - 

Re: few basic questions about S2I and docker run

2016-09-05 Thread Ravi


Ben,

You have been very helpful. I am sincerely thankful.

> ​I still think you'll get more mileage by trying to use the system as it
> was designed to be used(build an image with your compiled source built
> in) instead of trying to force a different workflow onto it.

I understand and agree. Accordingly I need to work on a 2-step solution.
1. First step is to get my dockers up and running in a day or two.
Considering how long it is taking me to understand the system, I want to
do it the short way first, i.e. be able to run the following command from
within openshift:

"docker run --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp
openjdk:8-jre-alpine java myClass"

This means
- download jars/files from source control to a host folder foobar
- mount the host folder foobar, that has my class files/jars into a java 
docker at /usr/src/myapp

- run java docker (along with -w flag)

2. Once I can get the system up, I will continue to understand how to make
S2I operational and switch to it once I have enough confidence. I am 
struggling with the fact that running php through S2I seems to be 
straightforward. No special config etc is needed. However to run Java or 
Node code, the repo should have particular images or package.json etc. 
and so far I am not able to understand what I need to add to the repo to 
make it S2I compatible.
In other words, if I create one php file, put in a repo, mention the git 
url in S2I, it works. If I create a single node file or java file or 
even jboss example git 
(https://github.com/jboss-developer/jboss-eap-quickstarts) in S2I, the 
build fails.


I hope that makes sense.
Regards



On 9/5/2016 7:33 PM, Ben Parees wrote:



On Fri, Sep 2, 2016 at 4:27 PM, Ravi wrote:


Ben, thanks for pointing me in right direction. However, after a
week, I am still struggling and need help.

The questions you raised are genuine issues which, if managed by
openshift will be easy to handle, however if openshift does not
manage them, then manually managing them is certainly a difficult task.

Leaving that aside, I have been struggling with running my app on
openshift. Here is a list of everything I tried

As suggested by you, I tried to create a volume and run java docker
with it. I am getting really lost in variety of issues, here are some:


I still think you'll get more mileage by trying to use the system as it
was designed to be used (build an image with your compiled source built
in) instead of trying to force a different workflow onto it.



- unless I login with service:admin user (no password), I am not
authorized to mount a volume.


​what type of volume?  what do you mean by "mount a volume"?  what
commands are you running?​  how is your pod or deployment config defined?



- I can only login with service:admin on command line, the UI gives
me error. So basically I cannot visually see mounted volumes
- There is no way from UI to create a Volume Claim, I must define a
JSON

- I was unable to find any documentation for this JSON and had to
copy from other places


​​you can use "oc set volumes" to add volume claims to a deployment
config, once you have (as an administrator) defined persistent volumes
in your cluster.

you can also "attach storage" to a deployment config from within the
openshift console, but that does not apply to your scenario since you
are trying to mount a "specific" volume into your pod instead of just
requesting persistent storage.




- After all this, how do I know which volume is being attached to
which volume claim?


​you aren't supposed to care.  You ask for persistent storage, the
system finds persistent storage to meet those needs, and you use it.

If you're trying to set up a specific persistent volume definition with
existing content, and then ensure that particular PV gets assigned to
your Pod then you don't use a PVC, you just reference the volume
directly in the Pod definition as with the git repo volume example.



- I copied mongodb.json and switched image to java.json, this did
not work
- I decided, this was too complex, let's just do S2I. However, I
cannot find any documentation on how to do it. The example images work
but when I try my own node or JEE project, S2I fails. I am guessing
it needs some specific files in source to do this.
- While PHP project https://github.com/gshipley/simplephp works with S2I with only a
php file, when I create a nodejs file, it does not work. I could not
find documentation on how to get my node file to run.


https://github.com/openshift/nodejs-ex

https://docs.openshift.org/latest/using_images/s2i_images/nodejs.html



- I tried to do walkthroughs, but most of them are using 

Re: few basic questions about S2I and docker run

2016-09-05 Thread Cameron Braid
Sorry to hijack your thread, but where is the "git repo volume example" ?

In origin git I can see the gitserver (
https://github.com/openshift/origin/tree/master/examples/gitserver) but it
uses either ephemeral or pvc.

Cheers

Cameron

On Tue, 6 Sep 2016 at 12:34 Ben Parees  wrote:

> On Fri, Sep 2, 2016 at 4:27 PM, Ravi  wrote:
>
>>
>> Ben, thanks for pointing me in right direction. However, after a week, I
>> am still struggling and need help.
>>
>> The questions you raised are genuine issues which, if managed by
>> openshift will be easy to handle, however if openshift does not manage
>> them, then manually managing them is certainly a difficult task.
>>
>> Leaving that aside, I have been struggling with running my app on
>> openshift. Here is a list of everything I tried
>>
>> As suggested by you, I tried to create a volume and run java docker with
>> it. I am getting really lost in variety of issues, here are some:
>>
>
> ​I still think you'll get more mileage by trying to use the system as it
> was designed to be used(build an image with your compiled source built in)
> instead of trying to force a different workflow onto it.
>
>
>>
>> - unless I login with service:admin user (no password), I am not
>> authorized to mount a volume.
>>
>
> ​what type of volume?  what do you mean by "mount a volume"?  what
> commands are you running?​  how is your pod or deployment config defined?
>
>
>
>> - I can only login with service:admin on command line, the UI gives me
>> error. So basically I cannot visually see mounted volumes
>> - There is no way from UI to create a Volume Claim, I must define a JSON
>>
> - I was unable to find any documentation for this JSON and had to copy
>> from other places
>>
>
> ​​you can use "oc set volumes" to add volume claims to a deployment
> config, once you have (as an administrator) defined persistent volumes in
> your cluster.
>
> you can also "attach storage" to a deployment config from within the
> openshift console, but that does not apply to your scenario since you are
> trying to mount a "specific" volume into your pod instead of just
> requesting persistent storage.
>
>
>
>
>> - After all this, how do I know which volume is being attached to which
>> volume claim?
>>
>
> ​you aren't supposed to care.  You ask for persistent storage, the system
> finds persistent storage to meet those needs, and you use it.
>
> If you're trying to set up a specific persistent volume definition with
> existing content, and then ensure that particular PV gets assigned to your
> Pod then you don't use a PVC, you just reference the volume directly in the
> Pod definition as with the git repo volume example.
>
>
>
>> - I copied mongodb.json and switched image to java.json, this did not work
>> - I decided, this was too complex, let's just do S2I. However, I
>> cannot find any documentation on how to do it. The example images work but
>> when I try my own node or JEE project, S2I fails. I am guessing it needs
>> some specific files in source to do this.
>> - While PHP project https://github.com/gshipley/simplephp works with S2I
>> with only a php file, when I create a nodejs file, it does not work. I
>> could not find documentation on how to get my node file to run.
>>
>
> ​https://github.com/openshift/nodejs-ex
> https://docs.openshift.org/latest/using_images/s2i_images/nodejs.html
>
>
>> - I tried to do walkthroughs, but most of them are using openshift online
>> and a command "rhc" that is not available to me.
>>
>
> i'm not sure what walkthroughs you found, but "rhc" is a command-line
> tool for the previous version of openshift, v2.  So that is irrelevant to
> what you're trying to do.  The v3 online environment is here:
>
> https://console.preview.openshift.com/console/
>
> and you can find a tutorial here:
> https://github.com/openshift/origin/tree/master/examples/sample-app
> (if you already have an openshift cluster, you can start at step 7,
> "Create a new project in OpenShift.")
>
>
>>
>> And all I wanted to do was run one simple command:
>>
>> docker run --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp
>> openjdk:8-jre-alpine java myClass
>>
>> ARGGG!! HELP please.
>>
>>
>>
>> On 8/26/2016 3:24 PM, Ben Parees wrote:
>>
>>>
>>>
>>> On Fri, Aug 26, 2016 at 6:10 PM, Ravi Kapoor wrote:
>>>
>>> Ben,
>>>
>>> Thank you so much for taking the time to explain. This is very
>>> helpful.
>>> If I may, I have a few followup questions:
>>>
>>> > That is not a great approach to running code.  It's fine for
>>> development, but you really want to be producing immutable images that a
>>> developer can hand to QE; once QE has tested it, they can hand that exact same image
>>> to prod, and there's no risk that pieces have changed.
>>>
>>> Q1: It seems like Lyft uses the approach I was mentioning i.e.
>>> inject code into dockers rather than copy 

Re: few basic questions about S2I and docker run

2016-09-05 Thread Ben Parees
On Fri, Sep 2, 2016 at 4:27 PM, Ravi  wrote:

>
> Ben, thanks for pointing me in right direction. However, after a week, I
> am still struggling and need help.
>
> The questions you raised are genuine issues which, if managed by openshift
> will be easy to handle, however if openshift does not manage them, then
> manually managing them is certainly a difficult task.
>
> Leaving that aside, I have been struggling with running my app on
> openshift. Here is a list of everything I tried
>
> As suggested by you, I tried to create a volume and run java docker with
> it. I am getting really lost in variety of issues, here are some:
>

I still think you'll get more mileage by trying to use the system as it
was designed to be used (build an image with your compiled source built in)
instead of trying to force a different workflow onto it.


>
> - unless I login with service:admin user (no password), I am not
> authorized to mount a volume.
>

what type of volume?  what do you mean by "mount a volume"?  what commands
are you running?  how is your pod or deployment config defined?



> - I can only login with service:admin on command line, the UI gives me
> error. So basically I cannot visually see mounted volumes
> - There is no way from UI to create a Volume Claim, I must define a JSON
>
- I was unable to find any documentation for this JSON and had to copy from
> other places
>

you can use "oc set volumes" to add volume claims to a deployment config,
once you have (as an administrator) defined persistent volumes in your
cluster.

you can also "attach storage" to a deployment config from within the
openshift console, but that does not apply to your scenario since you are
trying to mount a "specific" volume into your pod instead of just
requesting persistent storage.
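For example, the command can look roughly like this; the deployment config name, claim name, and mount path are placeholders:

    oc set volume dc/myapp --add --name=app-storage \
      --type=persistentVolumeClaim --claim-name=myapp-data \
      --mount-path=/usr/src/myapp/data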




> - After all this, how do I know which volume is being attached to which
> volume claim?
>

you aren't supposed to care.  You ask for persistent storage, the system
finds persistent storage to meet those needs, and you use it.

If you're trying to set up a specific persistent volume definition with
existing content, and then ensure that particular PV gets assigned to your
Pod then you don't use a PVC, you just reference the volume directly in the
Pod definition as with the git repo volume example.
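For reference, the pod-level JSON for such a git repo volume looks roughly like the sketch below; the repository URL, revision, and mount path are placeholders:

    "volumes": [
      {
        "name": "app-source",
        "gitRepo": {
          "repository": "https://github.com/example/myapp.git",
          "revision": "master"
        }
      }
    ]

and the container then mounts it with something like:

    "volumeMounts": [
      { "name": "app-source", "mountPath": "/usr/src/myapp" }
    ]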



> - I copied mongodb.json and switched image to java.json, this did not work
> - I decided, this was too complex, let's just do S2I. However, I
> cannot find any documentation on how to do it. The example images work but
> when I try my own node or JEE project, S2I fails. I am guessing it needs
> some specific files in source to do this.
> - While PHP project https://github.com/gshipley/simplephp works with S2I
> with only a php file, when I create a nodejs file, it does not work. I
> could not find documentation on how to get my node file to run.
>

https://github.com/openshift/nodejs-ex
https://docs.openshift.org/latest/using_images/s2i_images/nodejs.html


> - I tried to do walkthroughs, but most of them are using openshift online
> and a command "rhc" that is not available to me.
>

i'm not sure what walkthroughs you found, but "rhc" is a command-line tool
for the previous version of openshift, v2.  So that is irrelevant to what
you're trying to do.  The v3 online environment is here:

https://console.preview.openshift.com/console/

and you can find a tutorial here:
https://github.com/openshift/origin/tree/master/examples/sample-app
(if you already have an openshift cluster, you can start at step 7, "Create
a new project in OpenShift.")


>
> And all I wanted to do was run one simple command:
>
> docker run --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp
> openjdk:8-jre-alpine java myClass
>
> ARGGG!! HELP please.
>
>
>
> On 8/26/2016 3:24 PM, Ben Parees wrote:
>
>>
>>
>> On Fri, Aug 26, 2016 at 6:10 PM, Ravi Kapoor wrote:
>>
>> Ben,
>>
>> Thank you so much for taking the time to explain. This is very
>> helpful.
>> If I may, I have a few followup questions:
>>
>> > That is not a great approach to running code.  It's fine for
>> development, but you really want to be producing immutable images that a
>> developer can hand to QE; once QE has tested it, they can hand that exact same image
>> to prod, and there's no risk that pieces have changed.
>>
>> Q1: It seems like Lyft uses the approach I was mentioning i.e.
>> inject code into dockers rather than copy code inside dockers
>> (ref: https://youtu.be/iC2T3gJsB0g?t=595). In this approach there are
>>
>> only two elements - the image (which will not change) and the code
>> build/tag which will also not change. So what else can change?
>>
>>
>>
>> Since you're mounting the code from the local filesystem into the
>> running container, how do you know the code is the same on every machine
>> that you're running the 

Re: few basic questions about S2I and docker run

2016-09-02 Thread Ravi


Ben, thanks for pointing me in the right direction. However, after a week, I
am still struggling and need help.


The questions you raised are genuine issues which, if managed by 
openshift will be easy to handle, however if openshift does not manage 
them, then manually managing them is certainly a difficult task.


Leaving that aside, I have been struggling with running my app on 
openshift. Here is a list of everything I tried


As suggested by you, I tried to create a volume and run the java docker with
it. I am getting really lost in a variety of issues, here are some:


- unless I login with service:admin user (no password), I am not 
authorized to mount a volume.
- I can only login with service:admin on command line, the UI gives me 
error. So basically I cannot visually see mounted volumes

- There is no way from UI to create a Volume Claim, I must define a JSON
- I was unable to find any documentation for this JSON and had to copy 
from other places
- After all this, how do I know which volume is being attached to which 
volume claim?

- I copied mongodb.json and switched image to java.json, this did not work
- I decided, this was too complex, let's just do S2I. However, I
cannot find any documentation on how to do it. The example images work but
when I try my own node or JEE project, S2I fails. I am guessing it needs
some specific files in source to do this.
- While PHP project https://github.com/gshipley/simplephp works with S2I 
with only a php file, when I create a nodejs file, it does not work. I 
could not find documentation on how to get my node file to run.
- I tried to do walkthroughs, but most of them are using openshift 
online and a command "rhc" that is not available to me.


And all I wanted to do was run one simple command:

docker run --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp
openjdk:8-jre-alpine java myClass

ARGGG!! HELP please.



On 8/26/2016 3:24 PM, Ben Parees wrote:



On Fri, Aug 26, 2016 at 6:10 PM, Ravi Kapoor wrote:


Ben,

Thank you so much for taking the time to explain. This is very helpful.
If I may, I have a few followup questions:

> That is not a great approach to running code.  It's fine for
development, but you really want to be producing immutable images that a developer
can hand to QE; once QE has tested it, they can hand that exact same image to prod, and
there's no risk that pieces have changed.

Q1: It seems like Lyft uses the approach I was mentioning i.e.
inject code into dockers rather than copy code inside dockers
(ref: https://youtu.be/iC2T3gJsB0g?t=595). In this approach there are
only two elements - the image (which will not change) and the code
build/tag which will also not change. So what else can change?



Since you're mounting the code from the local filesystem into the
running container, how do you know the code is the same on every machine
that you're running the container on?

If you have 15 nodes in your cluster, what happens when only 14 of them
get the latest code update and the 15th one is still mounting an old file?

Or your admin accidentally copies a dev version of the code to one of
the nodes?

When you look at a running container how do you know what version of the
application it's running, short of inspecting the mounted content?

When you bring a new node online in your cluster, how do you get all the
right code onto that node so all your images (thousands possibly!) are
able to mount what they need when they start up?

Do you put all the code for all your applications on all your nodes so
that you can run any application on any node?  Do you build your own
infrastructure to copy the right code to the right place before starting
an application?  Do you rely on a shared filesystem mounted to all your
nodes to make the code accessible?

These are questions you don't have to answer when the image *is* the
application.



> running things in that way means you need to get both the image and
your class files into paths on any machine where the image is going
to be run, and then specify that mount path correctly

Q2: I would think that openshift has a mechanism to pull files from
git to a temp folder and way to volume mount that temp folder into
any container it runs. Volume mounts are a very basic feature of
dockers and I am hoping they are somehow workable with openshift.
Are they not? Don't we need them for lets say database dockers? Lets
say a mongodb container is running, it is writing data to a volume
mounted disk. If container crashes, is openshift able to start a new
container with previous saved data?



Openshift does support git-based volumes if you want to go that approach:

https://docs.openshift.org/latest/dev_guide/volumes.html#adding-volumes

i'm not sure whether you can provide git credentials to that volume
definition to handle 

Re: few basic questions about S2I and docker run

2016-08-26 Thread Ben Parees
On Fri, Aug 26, 2016 at 6:10 PM, Ravi Kapoor wrote:

>
> Ben,
>
> Thank you so much for taking the time to explain. This is very helpful.
> If I may, I have a few followup questions:
>
> > That is not a great approach to running code.  It's fine for
> development, but you really want to be producing immutable images that a
> developer can hand to QE; once QE has tested it, they can hand that exact same image
> to prod, and there's no risk that pieces have changed.
>
> Q1: It seems like Lyft uses the approach I was mentioning i.e. inject
> code into dockers rather than copy code inside dockers (ref:
> https://youtu.be/iC2T3gJsB0g?t=595). In this approach there are only two
> elements - the image (which will not change) and the code build/tag which
> will also not change. So what else can change?
>
> > running things in that way means you need to get both the image and
> your class files into paths on any machine where the image is going to be
> run, and then specify that mount path correctly
>
> Q2: I would think that openshift has a mechanism to pull files from git to
> a temp folder and way to volume mount that temp folder into any container
> it runs. Volume mounts are a very basic feature of dockers and I am hoping
> they are somehow workable with openshift. Are they not? Don't we need them
> for lets say database dockers? Lets say a mongodb container is running, it
> is writing data to a volume mounted disk. If container crashes, is
> openshift able to start a new container with previous saved data?
>

and just to answer the last part of this explicitly:  yes, persistent
storage is possible with openshift through persistent volumes; the
persistent volume will be re-mounted to the container when it restarts and
the previous data will be present.  My previous email included a link to
docs on persistent volumes for more info.
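As an illustrative sketch only, a minimal persistent volume claim in JSON could look something like this (the claim name and size are placeholders):

    {
      "apiVersion": "v1",
      "kind": "PersistentVolumeClaim",
      "metadata": { "name": "mongodb-data" },
      "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": { "requests": { "storage": "1Gi" } }
      }
    }

It would be created with "oc create -f claim.json" and then referenced from the pod or deployment config as a persistentVolumeClaim volume with claimName "mongodb-data".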



>
>
> Q3: Even if you disagree, I would still like to know (if nothing else then
> for learning/education) about how to run external images with volume mounts
> and other parameters being passed into the image. I am having very hard
> time finding this.
>
> regards
> Ravi
>
>
> On Fri, Aug 26, 2016 at 10:29 AM, Ben Parees  wrote:
>
>>
>>
>> On Fri, Aug 26, 2016 at 1:07 PM, Ravi  wrote:
>>
>>>
>>> So I am trying to use openshift to manage our dockers.
>>>
>>> First problem I am facing is that most of documentation and image
>>> templates seem to be about S2I. We are
>>
>>
>> ​When it comes to building images, openshift supports basically 4
>> approaches, in descending order of recommendation and increasing order of
>> flexibility:
>>
>> 1) s2i (you supply source and pick a builder image, we build a new
>> application image and push it somewhere)
>> 2) docker-type builds (you supply the dockerfile and content, we run
>> docker build for you and push the image somewhere)
>> 3) custom (you supply an image, we'll run that image, it can do whatever
>> it wants to "build" something and push it somewhere, whether that something
>> is an image, jar file, etc)
>> 4) build your images externally on your own infrastructure and just use
>> openshift to run them.
>>
>> The first (3) of those are discussed here:
>> https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#builds
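As a rough illustration of approaches 1 and 2 with "oc new-app"; the second repository below is a placeholder and assumes the repo carries its own Dockerfile:

    # 1) s2i: builder image + source repository, openshift builds the application image
    oc new-app nodejs~https://github.com/openshift/nodejs-ex
    # 2) docker-type build: openshift runs "docker build" using the repo's Dockerfile
    oc new-app https://github.com/example/myapp --strategy=docker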
>>
>>
>>> considering a continuous builds for multiple projects and building an
>>> image every 1 hour for multiple projects would create total 20GB images
>>> every day.
>>>
>>
>> I'm not sure how this statement relates to s2i.  Do you have a specific
>> concern about s2i with respect to creating these images?  Openshift does
>> offer image pruning to help deal with the number of images you sound like
>> you'll be creating, if you're interested in that.
>>
>>
>>
>>>
>>> Q1: Is this right way of thinking? Since today most companies are doing
>>> CI, this should be a common problem. Why is S2I considered impressive
>>> feature?
>>>
>>
>> ​S2I really has little to do with CI/CD.  S2I is one way to produce
>> docker images, there are others as I listed above.  Your CI flow is going
>> to be something like:
>>
>> 1) change source
>> 2) build that source into an image (in whatever way you want, s2i is one
>> mechanism)
>> 3) test the new image
>> 4) push the new image into production
>>
>> ​The advantages to using s2i are not about how it specifically works well
>> with CI, but rather with the advantages it offers around building images in
>> a quick, secure, convenient way, as described here:
>>
>> https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#source-build
>>
>>
>>
>>
>>>
>>> So, I am trying to use off the shelf images and inject code/conf into
>>> them. I know how to do this from docker command line (example: docker run
>>> --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp
>>> openjdk:8-jre-alpine java myClass )
>>>
>>
>> ​That is not a great approach to 

Re: few basic questions about S2I and docker run

2016-08-26 Thread Ravi Kapoor
Ben,

Thank you so much for taking the time to explain. This is very helpful.
If I may, I have a few followup questions:

> That is not a great approach to running code.  It's fine for
development, but you really want to be producing immutable images that a
developer can hand to QE; once QE has tested it, they can hand that exact same image
to prod, and there's no risk that pieces have changed.

Q1: It seems like Lyft uses the approach I was mentioning i.e. inject code
into dockers rather than copy code inside dockers (ref:
https://youtu.be/iC2T3gJsB0g?t=595). In this approach there are only two
elements - the image (which will not change) and the code build/tag which
will also not change. So what else can change?

> running things in that way means you need to get both the image and your
class files into paths on any machine where the image is going to be run,
and then specify that mount path correctly

Q2: I would think that openshift has a mechanism to pull files from git to
a temp folder and way to volume mount that temp folder into any container
it runs. Volume mounts are a very basic feature of dockers and I am hoping
they are somehow workable with openshift. Are they not? Don't we need them
for lets say database dockers? Lets say a mongodb container is running, it
is writing data to a volume mounted disk. If container crashes, is
openshift able to start a new container with previous saved data?


Q3: Even if you disagree, I would still like to know (if nothing else then
for learning/education) about how to run external images with volume mounts
and other parameters being passed into the image. I am having a very hard
time finding this.
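Purely as an illustration of what such a pod definition could look like (the host path, pod name, and class name simply mirror the docker run command quoted in this thread; hostPath volumes normally need extra privileges and tie the pod to one node, which is part of why this approach is discouraged earlier):

    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": { "name": "myapp" },
      "spec": {
        "containers": [
          {
            "name": "myapp",
            "image": "openjdk:8-jre-alpine",
            "command": ["java", "myClass"],
            "workingDir": "/usr/src/myapp",
            "volumeMounts": [
              { "name": "app-code", "mountPath": "/usr/src/myapp" }
            ]
          }
        ],
        "volumes": [
          { "name": "app-code", "hostPath": { "path": "/my/host/folder" } }
        ]
      }
    }

Created with "oc create -f pod.json", this is the openshift equivalent of that docker run invocation.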

regards
Ravi


On Fri, Aug 26, 2016 at 10:29 AM, Ben Parees  wrote:

>
>
> On Fri, Aug 26, 2016 at 1:07 PM, Ravi  wrote:
>
>>
>> So I am trying to use openshift to manage our dockers.
>>
>> First problem I am facing is that most of documentation and image
>> templates seem to be about S2I. We are
>
>
> ​When it comes to building images, openshift supports basically 4
> approaches, in descending order of recommendation and increasing order of
> flexibility:
>
> 1) s2i (you supply source and pick a builder image, we build a new
> application image and push it somewhere)
> 2) docker-type builds (you supply the dockerfile and content, we run
> docker build for you and push the image somewhere)
> 3) custom (you supply an image, we'll run that image, it can do whatever
> it wants to "build" something and push it somewhere, whether that something
> is an image, jar file, etc)
> 4) build your images externally on your own infrastructure and just use
> openshift to run them.
>
> The first (3) of those are discussed here:
> https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#builds
>
>
>> considering a continuous builds for multiple projects and building an
>> image every 1 hour for multiple projects would create total 20GB images
>> every day.
>>
>
> I'm not sure how this statement relates to s2i.  Do you have a specific
> concern about s2i with respect to creating these images?  Openshift does
> offer image pruning to help deal with the number of images you sound like
> you'll be creating, if you're interested in that.
>
>
>
>>
>> Q1: Is this right way of thinking? Since today most companies are doing
>> CI, this should be a common problem. Why is S2I considered impressive
>> feature?
>>
>
> ​S2I really has little to do with CI/CD.  S2I is one way to produce docker
> images, there are others as I listed above.  Your CI flow is going to be
> something like:
>
> 1) change source
> 2) build that source into an image (in whatever way you want, s2i is one
> mechanism)
> 3) test the new image
> 4) push the new image into production
>
> ​The advantages to using s2i are not about how it specifically works well
> with CI, but rather with the advantages it offers around building images in
> a quick, secure, convenient way, as described here:
>
> https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#source-build
>
>
>
>
>>
>> So, I am trying to use off the shelf images and inject code/conf into
>> them. I know how to do this from docker command line (example: docker run
>> --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp
>> openjdk:8-jre-alpine java myClass )
>>
>
> That is not a great approach to running code.  It's fine for development,
> but you really want to be producing immutable images that a developer can
> hand to QE; once QE has tested it, they can hand that exact same image to prod, and
> there's no risk that pieces have changed.
>
> Also running things in that way means you need to get both the image and
> your class files into paths on any machine where the image is going to be
> run, and then specify that mount path correctly.  It's not a scalable
> model.  You want to build runnable images, not images that need the
> application side-loaded via a 

few basic questions about S2I and docker run

2016-08-26 Thread Ravi


So I am trying to use openshift to manage our dockers.

First problem I am facing is that most of the documentation and image
templates seem to be about S2I. We are considering continuous builds
for multiple projects, and building an image every hour for multiple
projects would create a total of 20GB of images every day.


Q1: Is this the right way of thinking? Since most companies today are doing
CI, this should be a common problem. Why is S2I considered an impressive
feature?



So, I am trying to use off-the-shelf images and inject code/conf into
them. I know how to do this from the docker command line (example: docker
run --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp 
openjdk:8-jre-alpine java myClass )


Q2: How do I configure the exact same command from openshift? I will need to
do the following steps:
1. Jenkins is pushing compiled jar files to a git repository. The first step
will be to pull the files down.
2. I may have to unzip some files (in case it is a bunch of configurations
etc.)
3. Openshift should use docker run to create containers.

thanks so much for help
Ravi

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users