Hello all! I'm currently trying to get certs into my Pods when they start. In short, the design is the following:
1/ The Pod comes up and an init container requests a cert key pair from a CertService.
2/ The CertService checks the pod's info (secrets, IPs, ...) and reads which SANs are expected in the cert (we aim to feed only the appropriate SANs into each cert). It also annotates the pod as "has had cert". If the info checks out, it generates the cert and returns it to the pod, which can live happily ever after (until expiration and refresh).

I just realized the kubelet has full read/write on the pod object (based on https://kubernetes.io/docs/admin/authorization/node/#overview). I was planning to keep this information in the pod template, but it seems like an issue that the kubelet can modify it: if the kubelet is compromised, someone could alter what data goes into the cert and request new ones. So I'm not sure where to put the information for those Pods.

I've also heard about the pod identity effort and need to participate more there :) Hopefully I can integrate my infra with it.

I was considering restricting the kubelet's write access to only the Pod status via authorization, but I'm not sure whether the kubelet could still do its job that way, and the node authorization doc confused me. Another option would be to store the information somewhere the kubelet doesn't need access to (a parent object like the Deployment? an attached ConfigMap? but that doesn't sound that easy). A rough sketch of the ConfigMap idea is at the end of this mail.

Has anyone tried to do this? Would you have any recommendations?

Thanks for your help :) Let me know if I'm not really clear.

Co
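To make the ConfigMap idea concrete, here is roughly what I imagine on the CertService side, written in Go with client-go. This is only a sketch under my own assumptions: the ConfigMap name "cert-sans", keying the SAN list by service account name, the annotation key "certservice.example.com/has-had-cert", and the namespace/pod names in main() are all placeholders, not anything that exists today.

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// lookupExpectedSANs reads the SANs a pod may request from a ConfigMap instead
// of from the pod object itself, so a compromised kubelet (which has RW on the
// pods scheduled to its node) cannot tamper with them. The ConfigMap name
// "cert-sans" and the keying by service account are placeholder choices.
func lookupExpectedSANs(ctx context.Context, client kubernetes.Interface, namespace, podName string) ([]string, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return nil, fmt.Errorf("fetching pod: %w", err)
	}

	cm, err := client.CoreV1().ConfigMaps(namespace).Get(ctx, "cert-sans", metav1.GetOptions{})
	if err != nil {
		return nil, fmt.Errorf("fetching SAN mapping: %w", err)
	}

	sans, ok := cm.Data[pod.Spec.ServiceAccountName]
	if !ok {
		return nil, fmt.Errorf("no SANs registered for service account %q", pod.Spec.ServiceAccountName)
	}
	return strings.Split(sans, ","), nil
}

// markCertIssued annotates the pod as "has had cert" (step 2 of the design).
// The annotation key is a placeholder.
func markCertIssued(ctx context.Context, client kubernetes.Interface, namespace, podName string) error {
	patch := []byte(`{"metadata":{"annotations":{"certservice.example.com/has-had-cert":"true"}}}`)
	_, err := client.CoreV1().Pods(namespace).Patch(ctx, podName, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}

func main() {
	// Assumes the CertService runs in-cluster with RBAC allowing it to read
	// pods and configmaps and to patch pod annotations.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	sans, err := lookupExpectedSANs(ctx, client, "default", "my-app-pod") // placeholder names
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("would issue cert with SANs:", sans)

	if err := markCertIssued(ctx, client, "default", "my-app-pod"); err != nil {
		log.Fatal(err)
	}
}

The point of the sketch is just that the SAN mapping lives in an object the kubelet never needs to write, while the pod object is only read (and annotated) by the CertService. Happy to hear if that direction makes sense or if there is a better place for this data.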