And fundamentally, why not rebuild on SVN changes? You can
automate that. Take into account that if you don't have different
images containing the code, you can't use Kubernetes to roll back
either. Otherwise you would have to track in some other way which pod
had which SVN revision at any moment in time, and also handle the case
where an svn up fails, or fails only in some pods. IOW, doing that can
add more problems than it solves; consider it carefully.
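For example, a rough sketch of how you could expose the revision on
each pod (the label name, image tag scheme, and values here are
placeholders, not something from your setup):

# Hypothetical sketch: the build script stamps the svn revision that
# went into the image as a pod label, so `kubectl get pods -L svn-revision`
# shows at a glance which revision each pod is running.
apiVersion: v1
kind: Pod
metadata:
  name: webpod-example
  labels:
    app: webpod
    svn-revision: "123"             # set by the build script (assumption)
spec:
  containers:
    - name: web-container
      image: autoxyweb_build:r123   # hypothetical revision-encoding tag
      ports:
        - containerPort: 80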
To be honest, I don't think that automating SVN updates is a reliable
solution.
Let me give an example:
- I commit some files --> revision 123, and I have to deploy those
changes to prod
- I create a docker image in which I update the code to revision 123
- Then I deploy the image with a rolling update to the Kubernetes cluster
- In the following days I work on the code, committing the files
to make them available to the team. Now the SVN revision is 200,
but a deploy to prod is not scheduled
- Because of a memory problem in the prod env, Kubernetes kills a pod
and restarts it automatically. If any automatic code-update mechanism
runs when the pod starts, this leads to a situation where one pod has
the code at revision 200 while all the others remain at revision 123
Exactly. But that will only happen if you manage the SVN files outside
of the docker build. As long as the files live inside the container
image, this can't happen. This is exactly why I was advising against
managing file updates outside of the container.
Or am I missing something?
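As a sketch (the tag scheme is a placeholder): if every build is
pushed under an immutable tag that encodes the revision, a restarted
pod can only ever pull exactly the code that was deployed. A fragment
of the pod template:

# Sketch only: one tag per svn revision, never reused, so a pod
# restart can never pick up newer code than what was deployed.
containers:
  - name: web-container
    image: autoxyweb_build:r123   # hypothetical revision-encoding tag
    imagePullPolicy: IfNotPresent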
In our development env we don't use the docker image but a standard
apache/php/db stack installed on a Linux machine.
Day to day we commit file changes to the SVN repo. Then, when a
deployment is required, we create a docker image on our local Linux
machine (without Kubernetes).
In the docker container we update the SVN code. Then we push the
docker image (with a new image tag) to gce.
Finally, we do a rolling update with the new image tag on the prod env
via Kubernetes.
So after a deploy, where a new image (containing a particular SVN
revision) has been installed, the code version in the image (in prod)
can drift with respect to the version in SVN.
Why not an emptyDir?
We need a persistent volume that can be mounted in our web container
(or better: in every pod that the daemonset creates). In this volume
we would have our SVN code.
With a single pod there is no problem applying this approach: we can
execute an svn checkout onto the persistent disk and then mount it in
the directory we want.
If the pod restarts due to some problem, then thanks to the persistent
disk it will use the same SVN revision.
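For reference, the single-pod shape that works for us looks roughly
like this (simplified; the disk is assumed to be pre-populated with
the svn checkout):

# Simplified sketch of the working single-pod case: one pod mounts the
# pre-populated persistent disk; on restart it sees the same checkout.
apiVersion: v1
kind: Pod
metadata:
  name: web-single
spec:
  containers:
    - name: web-container
      image: autoxyweb_build:v4.3.2
      volumeMounts:
        - name: persistent-volume
          mountPath: /opt/live/
          subPath: svn/project
  volumes:
    - name: persistent-volume
      gcePersistentDisk:
        pdName: "test-persistent-disk"
        fsType: "ext4"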
Problems arise when we try to use the persistent volume (in
ReadOnlyMany mode) in the daemonset configuration.
Below is the YAML used in our tests:
### test-persistent-readonly-disk.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-persistent-disk
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: "test-persistent-disk"
    fsType: "ext4"
### test-persistent-readonly-disk-claim.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-persistent-disk-claim
  labels:
    type: gcePersistentDisk
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 100Gi
### test-daemonset.yaml:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: test-daemonset
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      nodeSelector:
        app: frontend-node
      terminationGracePeriodSeconds: 30
      volumes:
        - name: persistent-volume
          persistentVolumeClaim:
            claimName: test-persistent-disk-claim
      containers:
        - name: web-container
          image: autoxyweb_build:v4.3.2
          ports:
            - containerPort: 80
          volumeMounts:
            - name: persistent-volume
              mountPath: /opt/live/
              subPath: svn/project
We get this error:
Back-off restarting failed container
Error syncing pod
AttachVolume.Attach failed for volume "pvc-a0683...." : googleapi: Error
400: The disk resource '.../disks/gke-test-cluster-8ef05-pvc-a0...' is
already being used by
'.../instances/gke-test-cluster-default-pool-32598eec-lfn7'
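Our current guess (untested): a GCE persistent disk can be attached to
multiple nodes only when every attachment is read-only, so perhaps we
need readOnly: true both on the PV source and on the claim reference
in the daemonset's pod template, roughly:

# Untested sketch: force read-only at both levels so GCE never sees a
# read-write attachment (a read-write attach on one node blocks
# attaching the disk to any other node).
spec:
  gcePersistentDisk:
    pdName: "test-persistent-disk"
    fsType: "ext4"
    readOnly: true             # on the PersistentVolume
---
# ...and in the daemonset pod template:
volumes:
  - name: persistent-volume
    persistentVolumeClaim:
      claimName: test-persistent-disk-claim
      readOnly: true           # mount the claim read-only in every pod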
Any suggestions?
Thanks ;)