Thank you all for your considerations and advice!
I just wanted to get some idea about hook usage and how/if I should work
with them at this point. I guess I first relied too much on the name of
the option, "deployment lifecycle hook", and its description, "allow
behavior to be injected into the deployment process".
Now, if you'd allow a newbie to offer some observations, this is a bit
misleading. What I initially took away from reading this is that hooks
run in an environment somewhat similar to the Kubernetes initContainer
that Tomas linked in the first reply.
In fact these are separate, (even more..) ephemeral pods that get
instantiated from what the DeploymentConfig states. They're not "hooks"
(which I interpreted as "an attachment to") for the deployment, but
rather volatile replicas used to do some "things" outside the scope of
the deployment itself, after which they're gone....blink pods :)
Now, for the standard examples that I see online with database
provisioning/replication etc., not one of them explicitly underlined
that, in order for this to work, you need to use persistent volumes,
because that external resource is where everything the pre/mid/post
hook does gets persisted. Or maybe that's just standard knowledge that
I didn't have..
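For anyone else landing here, this is roughly how I understand a mid
hook that shares a volume with the deployment would be declared; the
names here (app, app-data, /opt/app/init.sh) are placeholders, not my
actual setup:

oc patch dc/app --type=json --patch \
  '[{"op":"add",
     "path":"/spec/strategy/recreateParams/mid",
     "value":{"failurePolicy":"Abort",
              "execNewPod":{"containerName":"app",
                            "command":["/bin/sh","/opt/app/init.sh"],
                            "volumes":["app-data"]}}}]'

The hook pod mounts the app-data volume from the pod template, so
whatever the hook writes there is also visible to the real deployment
pods.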
(just as a side issue and coming from the recent exchange between Graham
and Fernando:
https://blog.openshift.com/using-post-hook-to-initialize-a-database/ at
the very start of the post: "
You can solve this in multiple ways, such as:
* You can create a custom database image and bake in the script into
that image
* You can do the DB initialization from the application container that
runs the script on the database
"
Now I wonder how your colleague would implement the first option. I'm
guessing more or less Graham's approach.)
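If it helps, this is how I'd picture that first option as a Dockerfile.
I'm going by the upstream postgres image convention here, which may or
may not be what your colleague had in mind:

FROM postgres:10
# the upstream entrypoint runs any scripts placed in this
# directory on first startup against an empty data directory
COPY init-db.sql /docker-entrypoint-initdb.d/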
Thank you Graham for your examples! I've actually tried changing the
start command for the pod in more or less the same way. Not through a
mounted ConfigMap, but through a script that applied my changes and
then started the pod (the script was available in the image because I
was not in your scenario with a standard image; I was/am using a custom
one). However this failed. I haven't really checked the actual reason.
It might be that the primary process was the script and at some point
it exited (it didn't end with the actual start command), or that the
timeout for the readiness probe was exceeded.
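In hindsight, I suspect the script needed to finish by exec-ing the
real start command, so that the server replaces the wrapper as the
primary process. Something along these lines, where the paths are just
placeholders for my setup:

#!/bin/sh
# apply the configuration changes first
/opt/app/customize.sh
# replace this script with the real start command, so the
# primary process is the server and not the wrapper
exec /opt/wildfly/bin/standalone.sh -b 0.0.0.0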
The trick with the wrapper is greatly appreciated, thank you!
In the end I got it solved with Fernando's approach of pushing the
configuration at build time. I wasn't actually barred from creating an
extra layer/custom image. In fact I was already on the "extra" layer,
composing the artifact image (built with S2I) with the runtime Wildfly
instance. My inline Dockerfile just ended up with a bit more content
than a FROM, a COPY and a CMD; something in the spirit of the sketch
below.
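The image names and paths here are placeholders, not my exact build:

FROM wildfly-runtime:latest
# artifact produced by the S2I stage of the chained build
COPY ROOT.war /opt/wildfly/standalone/deployments/
# configuration pushed at build time instead of in a hook
COPY standalone.xml /opt/wildfly/standalone/configuration/
# the directory structure the app expects
RUN mkdir -p /opt/app/data
CMD ["/opt/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]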
Another advantage here is that rolling out a new deployment is quicker,
with the old pods switched to the new ones right away. In a stateless
environment such as mine, this is nice.
Thanks again,
Dan Pungă
PS: I'm kind of interfering in an ongoing discussion. Please, don't let
my message stop you; this is first-hand knowledge! :)
On 22.02.2018 14:42, Fernando Lozano wrote:
Hi Graham,
If the image was designed to be configured using environment
variables or configuration files that can be provided as volumes, yes,
you don't need a custom image. But from Dan's message I expect more
extensive customizations, which would become cumbersome.
And the idea of forcing the image to run a different command than its
entrypoint, then get more files from a volume, to customize the image
or compensate for deficiencies in the original entrypoint command,
also seems cumbersome to me. You are making extensive changes each
time you start the container (to its ephemeral read/write layer). I
don't see the advantage compared to just creating a child image with
an extra layer that has the customizations.
[]s, Fernando Lozano
On Wed, Feb 21, 2018 at 7:40 PM, Graham Dumpleton
<[email protected]> wrote:
Another example of where this can be useful is where the primary
process in the container doesn't do what is required of process
ID 1. That is, reap zombie processes. If that becomes an issue
you can use a run script wrapper like:
#!/bin/sh
# forward TERM/INT to the child so it can shut down cleanly
trap 'kill -TERM $PID' TERM INT
# start the real process in the background and record its PID
/usr/libexec/s2i/run &
PID=$!
# wait for it; a trapped signal interrupts this wait
wait $PID
# clear the traps and wait again for the child to actually exit
trap - TERM INT
wait $PID
# propagate the child's exit status
STATUS=$?
exit $STATUS
This simple alternative to a mini init process manager such as tini
will work fine in many cases.
Replace /usr/libexec/s2i/run with the actual program to run.
Graham
On 22 Feb 2018, at 9:33 am, Graham Dumpleton
<[email protected]> wrote:
Badly worded perhaps.
In some cases you don't have the ability to modify an existing
image with the application in it, plus you may not want to
create a new custom image as a layer on top. In those cases, if
all you need to do is some minor tweaks to the config prior to the
application starting in the container, you can use the configmap
trick as described. It will work so long as the config files you
need to change can be modified as the user the container runs as.
So you can do:
oc create configmap blog-run-script --from-file=run
oc set volume dc/blog --add --type=configmap \
--configmap-name=blog-run-script \
--mount-path=/opt/app-root/scripts
oc patch dc/blog --type=json --patch \
'[{"op":"add",
"path":"/spec/template/spec/containers/0/command",
"value":["bash","/opt/app-root/scripts/run"]}]'
So the 'run' script makes the changes and then executes the original
command to start the application in the container.
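For example, a minimal 'run' script might look like the following; the
sed line just stands in for whatever changes you need, and the config
path is illustrative:

#!/bin/bash
# tweak config files writable by the user the container runs as
sed -i 's/max-threads="50"/max-threads="100"/' /opt/app-root/etc/server.conf
# hand over to the image's original start command
exec /usr/libexec/s2i/run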
Graham
On 22 Feb 2018, at 9:22 am, Fernando Lozano
<[email protected]> wrote:
Hi Graham,
This doesn't make sense to me:
> 3. If you don't want to create a new custom image.
If you want to run your application in a container you have to
create a custom image with the application. There's no way
around it, because container images are immutable. You can only
choose how you will build your custom image. This is the way
containers are supposed to work, with or without OpenShift.
[]s, Fernando Lozano
On Wed, Feb 21, 2018 at 6:15 PM, Graham Dumpleton
<[email protected]> wrote:
On 22 Feb 2018, at 3:21 am, Fernando Lozano
<[email protected]> wrote:
Hi Dan,
As you learned, lifecycle hooks were not made to change
anything inside a container image. Remember that container
images are, by design, immutable. It looks like you want to
build a custom container image that includes your
customizations to the wildfly configs plus your
application. There are two ways to accomplish that with
OpenShift:
1. Create a Dockerfile that uses the standard wildfly
container image as the parent, and adds your customization.
2. Use the OpenShift source-to-image (s2i) process to add
configurations and your application. See the OpenShift
docs about the wildfly s2i builder image for details; this
is easier than using a Dockerfile. The standard s2i
process builds the application from sources, but it also
supports feeding an application war/ear (see the example
commands after this list).
3. If you don't want to create a new custom image, but want to
add additional actions before the application starts in the
container, mount a shell script into the container from a
config map. Override the command for the pod to run your
script mounted from the config map. Do your work in the script,
with your script then doing an exec on the original command
for the application.
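For option 2 with a prebuilt war/ear, the binary workflow would
look something like this; 'myapp' and the war path are
placeholders:

oc new-build wildfly:latest --name=myapp --binary
oc start-build myapp --from-file=target/ROOT.war --follow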
Graham
[]s, Fernando Lozano
On Wed, Feb 21, 2018 at 9:43 AM, Dan Pungă
<[email protected]> wrote:
Hello all!
I'm trying to build an OpenShift configuration for running
a Java app with a Wildfly server.
I've set this up with chained builds, where the app's
artifacts are combined with a runtime image of Wildfly.
For this particular app, however, I need to do some
configuration on the Wildfly environment, so that the
app is properly deployed and works.
- update a server module (grabbing the contents from
the web and copying them in the right location inside
Wildfly)
- add system properties and some other configuration
to Wildfly's standalone.xml configuration file
- create some directory structure
I've tried to run all this with the Recreate
deployment strategy and as a mid-hook procedure (so
the previous deployment pod is scaled down), but none
of these changes are reflected in the actual (new)
deployment pod.
Taking a closer look at the docs, I've found this line
"Pod-based lifecycle hooks execute hook code in a new
pod derived from the template in a deployment
configuration."
So whatever I'm doing in my hook is actually done in
a different pod, the hook pod, and not in the actual
deployment pod. Did I understand this correctly?
If so, how does the injection work here? Does it have
to do with the fact that the deployment _has to have_
persistent volumes? So the hooks actually make changes
inside a volume that will be mounted in the
deployment pod too...
Thank you!
_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users