> On 14 Jun 2019, at 01:25, Olivier JUDITH <gnu...@gmail.com> wrote:
> 
> Of course, 
> 
> The yaml files are strictly dedicated to Kubernetes object definition and 
> deployment (like a deployment.yaml for a service on Docker). They are not to 
> be included in the Docker container. 
> In fact the best approach is to create a folder "k8s" and move them into it.
> 
> In order to create the container/pod on Kubernetes you have to launch: 
> kubectl create -f 389-ds-container/k8s/ 
> with all the yaml files in the k8s folder.
> In imperative mode (kubectl run takes the name as a positional argument):
> kubectl run 389-ds-container 
> --image=registry.opensuse.org/home/firstyear/containers/389-ds-container:latest
>  --port=3389
> 
> 
> 389-ds-container/
> ├── Dockerfile
> ├── other files/folders required for Docker image creation
> └── k8s
>     ├── 389-ds-container.yaml
>     ├── secrets.yaml
>     ├── service.yaml
>     └── storage.yaml
> 
> 389-ds-container.yaml => (I made a mistake earlier; the name should be 
> 389-ds-container.yaml) is the Kubernetes pod declaration on k8s. It defines 
> the volumes, environment variables, resource limits and more required to 
> start the container on the kube cluster. This file will use the image created 
> and pushed to the registry of your choice. 
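> 
> To illustrate, a minimal sketch of such a pod declaration (the label, volume 
> name and mount path here are only illustrative, not taken from the real file):
> 
>     apiVersion: v1
>     kind: Pod
>     metadata:
>       name: 389-ds
>       labels:
>         app: 389-ds
>     spec:
>       containers:
>       - name: 389-ds
>         image: registry.opensuse.org/home/firstyear/containers/389-ds-container:latest
>         ports:
>         - containerPort: 3389
>         volumeMounts:
>         - name: ds-data
>           mountPath: /data
>       volumes:
>       - name: ds-data
>         persistentVolumeClaim:
>           claimName: ds-data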
> 
> service.yaml => creates a kube Service which will allow access to the pod 
> (container). On Kubernetes you cannot access your container/pod directly; 
> you have to implement a Service. In my case I created a ClusterIP, which 
> means that the service is only available inside the Kubernetes cluster (this 
> can be changed). Other modes are NodePort or LoadBalancer (the latter 
> available only when your k8s cluster is hosted on a cloud provider). A 
> Service provides load balancing natively if you have many pods up (ie with 
> MMR). 
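> 
> As a sketch, a ClusterIP service could look like this (the label selector 
> and port values are illustrative):
> 
>     apiVersion: v1
>     kind: Service
>     metadata:
>       name: 389-ds
>     spec:
>       type: ClusterIP
>       selector:
>         app: 389-ds
>       ports:
>       - port: 3389
>         targetPort: 3389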
> 
> secrets.yaml => holds all the stuff like passwords or certificates that can 
> be defined like variables but are mounted in the pod/container and can be 
> used as files or environment variables. It's the equivalent of a secret in 
> Docker. I defined there my own fake certificate in order to inject it with 
> certutil into the pods instead of the self-signed one.
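> 
> For example, a secret holding a PEM certificate could be declared roughly 
> like this (the names and keys are illustrative), and then exposed to the pod 
> as a file through a volume mount:
> 
>     apiVersion: v1
>     kind: Secret
>     metadata:
>       name: 389-ds-certs
>     type: Opaque
>     stringData:
>       server.pem: |
>         -----BEGIN CERTIFICATE-----
>         ...
>         -----END CERTIFICATE-----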
> 
> storage.yaml => is the definition of my volumes in the k8s world. I created 
> a PV (PersistentVolume) which uses a physical storage type, a filesystem in 
> my case, but it can be another kind of storage (NFS, Gluster, iSCSI, FC…); 
> the user has to provide a PV with the same name whatever the kind of storage.
> Then a PVC (PersistentVolumeClaim), which will bind to the volume defined, 
> by name and with the size required. 
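> 
> Roughly like this (the size, host path and names here are only an example):
> 
>     apiVersion: v1
>     kind: PersistentVolume
>     metadata:
>       name: ds-data
>     spec:
>       capacity:
>         storage: 1Gi
>       accessModes:
>       - ReadWriteOnce
>       hostPath:
>         path: /var/lib/dirsrv
>     ---
>     apiVersion: v1
>     kind: PersistentVolumeClaim
>     metadata:
>       name: ds-data
>     spec:
>       volumeName: ds-data
>       accessModes:
>       - ReadWriteOnce
>       resources:
>         requests:
>           storage: 1Gi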

Do you think there is value in upstreaming these? Can they be made generic? Or 
is this a per-site kind of configuration that needs customisation? 

> 
> 
> > A better idea may be to have dscontainer take a set of PEM files and then 
> > load them to your certificate store on startup instead rather than the 
> > current method of certificate handling.
> 
> Yes, I agree, but if you want to read them at startup you have to provide 
> them somewhere accessible from the container. The best option, I think, is a 
> secret.

Okay - how does the content of secrets.yaml get sent to the process running in 
the container? 

Remember there are two primary use cases here - both atomic/transactional 
server running 389 (ie vanilla docker), and of course, k8s. So I'd prefer to 
re-use or share mechanisms if possible.


Is there also a way in k8s that when an event occurs (ie a new container is 
launched in a pod) a program can be called in existing containers? (This way 
we can automate replica addition/removal.) 


Primarily my first goal has been to make 389 work for atomic/transactional 
server, because this is easy, and matches the way many people would run 389 
today. k8s was always a future goal due to the deeper level of automation 
needed, and the "opinionated" nature of k8s. So I want to support both though 
:) 


> 
> 
> Le jeu. 13 juin 2019 à 16:15, William Brown <wbr...@suse.de> a écrit :
> Most of those look pretty reasonable. Can you describe to me the workflow, 
> how those yaml files interact with k8s, and how they are associated with the 
> container? Do they need to be in the Dockerfile? Or something else? 
> 
> Thanks! 
> 
> 
> > On 13 Jun 2019, at 15:21, Olivier JUDITH <gnu...@gmail.com> wrote:
> > 
> 
> —
> Sincerely,
> 
> William Brown
> 
> Senior Software Engineer, 389 Directory Server
> SUSE Labs
> _______________________________________________
> 389-users mailing list -- 389-users@lists.fedoraproject.org
> To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
> Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives: 
> https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org

—
Sincerely,

William Brown

Senior Software Engineer, 389 Directory Server
SUSE Labs
