Re: [kubernetes-users] Set service-node-port-range in Google Kubernetes Engine

2018-09-16 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Hmm, this is a bit of a stretch for NodePorts.  Why not use a HostPort
and update DNS dynamically if/when their VM updates?
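
A rough sketch of the HostPort approach suggested above (pod name, image, and port numbers are all illustrative, not from the thread):

```yaml
# With hostPort, ports are scoped per-node rather than drawn from the
# cluster-wide NodePort range, so the same numbers can be reused on every node.
apiVersion: v1
kind: Pod
metadata:
  name: user-vm-0                    # hypothetical name
spec:
  containers:
  - name: vm
    image: example/linux-vm:latest   # hypothetical image
    ports:
    - containerPort: 22
      hostPort: 10022   # reachable at <node-ip>:10022; DNS must track the node
    - containerPort: 5901
      hostPort: 10901
```

The trade-off Tim alludes to: capacity scales with the number of nodes, but DNS (or some other registry) must be updated whenever the pod lands on a different node.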
On Sun, Sep 16, 2018 at 6:30 PM Phạm Huy Hoàng  wrote:
>
> Hi Tim,
>
> Thanks for your reply. I'll explain our use case below.
>
> Our use case is that we provide a service as a Linux VM so that users can SSH 
> and VNC into that VM.
> Each VM runs as a StatefulSet in our GKE cluster.
>
> For each user, we need to expose 2 ports via a service (one for VNC and 
> one for SSH).
> We do not use a LoadBalancer service, because the price of one forwarding rule 
> is ~4-6 USD/month, which would increase our cost per user by 4-6 USD/month.
> Therefore, we use a NodePort service. Because the default port range is 
> 30000-32767, one cluster can only serve about ~1400 users (2 ports each). 
> Currently, our user base is ~500, so it might not be a problem, but it might 
> be in the future.
>
> My naive solution is to increase the port range so a cluster might be able to 
> serve more users. If the number of users becomes big enough, maybe we can 
> consider creating another cluster.
>
> Thanks.
>
> On Friday, 14 September 2018 23:46:24 UTC+8, Tim Hockin wrote:
>>
>> We do not expose that as a parameter today.  We can discuss the
>> options here, but there's no short answer.  Can you talk about what
>> you're doing to need so many node ports?
>> On Fri, Sep 14, 2018 at 8:27 AM Phạm Huy Hoàng  wrote:
>> >
>> > For our use-case, we need to access a lot of services via NodePort. By 
>> > default, the NodePort range is 30000-32767. With kubeadm, I can set the 
>> > port range via --service-node-port-range flag.
>> >
>> > We are using Google Kubernetes Engine (GKE) cluster. How can I set the 
>> > port range for a GKE cluster?
>> >
>> > --
>> > You received this message because you are subscribed to the Google Groups 
>> > "Kubernetes user discussion and Q" group.
>> > To unsubscribe from this group and stop receiving emails from it, send an 
>> > email to kubernetes-use...@googlegroups.com.
>> > To post to this group, send email to kubernet...@googlegroups.com.
>> > Visit this group at https://groups.google.com/group/kubernetes-users.
>> > For more options, visit https://groups.google.com/d/optout.
>
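
The per-user two-port NodePort Service described in this thread might look roughly like this (service name, selector, and port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: user-123-vm        # hypothetical, one Service per user
spec:
  type: NodePort
  selector:
    app: user-123-vm       # matches the user's StatefulSet pods
  ports:
  - name: ssh
    port: 22
    nodePort: 30022        # must fall inside the service-node-port-range
  - name: vnc
    port: 5901
    nodePort: 30901
```

Omitting `nodePort` lets the apiserver pick a free port from the range, which is usually preferable to managing allocations by hand.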



Re: [kubernetes-users] Set service-node-port-range in Google Kubernetes Engine

2018-09-14 Thread 'Tim Hockin' via Kubernetes user discussion and Q
We do not expose that as a parameter today.  We can discuss the
options here, but there's no short answer.  Can you talk about what
you're doing to need so many node ports?
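
For reference, on a self-managed cluster (not GKE) the range is a kube-apiserver flag; with kubeadm it can be passed through the ClusterConfiguration, roughly like this (the API group version depends on your kubeadm release, and the range value is illustrative):

```yaml
# Sketch only: widens the NodePort range on a kubeadm-managed cluster.
apiVersion: kubeadm.k8s.io/v1beta2   # may differ for your kubeadm version
kind: ClusterConfiguration
apiServer:
  extraArgs:
    service-node-port-range: "20000-32767"
```

On GKE the apiserver is managed by Google, so this flag is not user-settable, which is the point of Tim's reply.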
On Fri, Sep 14, 2018 at 8:27 AM Phạm Huy Hoàng  wrote:
>
> For our use-case, we need to access a lot of services via NodePort. By 
> default, the NodePort range is 30000-32767. With kubeadm, I can set the port 
> range via --service-node-port-range flag.
>
> We are using Google Kubernetes Engine (GKE) cluster. How can I set the port 
> range for a GKE cluster?
>



Re: [kubernetes-users] Re: Kube-proxy pod do not want to initialize

2018-09-12 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Did you check that bug?  Is the whole sysfs mounted read-only or just that
file?  Can you show me /proc/mounts from the node?

On Wed, Sep 12, 2018, 3:12 AM Grzegorz Panek  wrote:

> Yes, the kube-proxy pod is running in privileged mode, but the problem still
> occurs
>



Re: [kubernetes-users] Re: Autoscale volume and pods simultaneously

2018-09-06 Thread 'Tim Hockin' via Kubernetes user discussion and Q
On Thu, Sep 6, 2018, 3:33 PM David Rosenstrauch  wrote:

> FWIW, I recently ran into a similar issue, and the way I handled it was
> to have each of the pods mount an NFS shared file system as a PV (AWS
> EFS, in my case) and have each pod write its output into a directory on
> the NFS share.  The only issue then is just to make sure that each pod
> writes its output to a file that has a unique name (e.g., has the pod
> name or ID in the file name) so that the pods don't overwrite each
> other's data.
>

As pods come and go - don't you eventually waste the disk space?
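
David's pattern above (shared RWX volume, per-pod unique filenames) can be sketched with the downward API; the claim name and image are illustrative:

```yaml
# Every pod mounts the same shared (e.g. NFS/EFS) claim and writes to a file
# named after itself, so pods never overwrite each other's output.
apiVersion: v1
kind: Pod
metadata:
  name: writer-0            # in practice created by a Deployment
spec:
  containers:
  - name: app
    image: busybox          # illustrative
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name   # downward API: the pod's own name
    command: ["sh", "-c", "echo output > /data/out-$(POD_NAME).txt"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: nfs-shared # hypothetical ReadWriteMany claim
```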



> HTH,
>
> DR
>
> On 9/6/18 4:33 PM, 'Tim Hockin' via Kubernetes user discussion and Q
> wrote:
> > You have to understand what you are asking for.  You're saying "this
> > data is important and needs to be preserved beyond any one pod (a
> > persistent volume)" but you're also saying "the pods have no identity
> > because they can scale horizontally".  These are mutually incompatible
> > statements.
> >
> > You really want a shared storage API, not volumes...
> > On Thu, Sep 6, 2018 at 1:08 PM Naseem Ullah  wrote:
> >>
> >> I see I see.. what about autoscaling statefulsets with an HPA?
> >>
> >>> On Sep 6, 2018, at 4:06 PM, 'Tim Hockin' via Kubernetes user
> discussion and Q  wrote:
> >>>
> >>> Deployments and PersistentVolumes are generally not a good
> >>> combination.  This is what StatefulSets are for.
> >>>
> >>> There's work happening to allow creation of a volume from a snapshot,
> >>> but it's only Alpha in the next release.
> >>> On Thu, Sep 6, 2018 at 1:03 PM Naseem Ullah 
> wrote:
> >>>>
> >>>> Hello,
> >>>>
> >>>> I have a similar use case to Montassar.
> >>>>
> >>>> Although I could use emptyDirs, each newly spun pod takes 2-3 minutes
> to download required data (pod does something similar to git-sync). If
> volumes could be prepopulated when a new pod is spun it will simply sync
> the diff, which will drastically reduce startup readiness time.
> >>>>
> >>>> Any suggestions? Now I have a tradeoff between creating a static
> number of replicas and creating the same number of PVCs, or using HPA but
> emptyDir volume which increases startup time for the pod.
> >>>>
> >>>> Thanks,
> >>>> Naseem
> >>>>
> >>>> On Thursday, January 5, 2017 at 6:07:42 PM UTC-5, Montassar Dridi
> wrote:
> >>>>>
> >>>>> Hello!!
> >>>>>
> >>>>> I'm using Kubernetes deployment with persistent volume to run my
> application, but when I try to add more replicas or autoscale, all the new
> pods try to connect to the same volume.
> >>>>> How can I simultaneously auto create new volumes for each new pod.,
> like statefulsets(petsets) are able to do it.
> >>>>

Re: [kubernetes-users] Re: Autoscale volume and pods simultaneously

2018-09-06 Thread 'Tim Hockin' via Kubernetes user discussion and Q
On Thu, Sep 6, 2018, 3:20 PM Naseem Ullah  wrote:

> Thank you Jing and thank you Tim.
> If this feature will allow HPA enabled deployment managed pods to spawn
> with a prepopulated volume each, that would be
>

This feature enables start-from-snapshot volumes but does not fundamentally
alter the model.
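
A claim restored from a snapshot might look roughly like this (the snapshot feature is alpha in v1.12; all names and sizes are illustrative):

```yaml
# New PVC pre-populated from an existing VolumeSnapshot of the synced data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prewarmed-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: synced-data-snap   # hypothetical snapshot of a populated volume
```

Note this only seeds the volume at creation time; it does not make Deployments and per-pod volumes composable, which is the "does not fundamentally alter the model" caveat above.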

nice. If not, using emptyDir with a 2 minute startup delay as the data is
> synced for each new pod is what it is.
> PS would be nice if GKE had a RWX SC out of the box.
>

We have Cloud Filestore (https://cloud.google.com/filestore/) if that works
for you, but this is not a Google product mailing list, so I won't advertise
any more. :)

Cheers
>
> On Sep 6, 2018, at 6:10 PM, 'Jing Xu' via Kubernetes user discussion and
> Q  wrote:
>
> Naseem, for your volume data prepopulated request, like Tim mentioned, we
> now have volume snapshot which will be available in v1.12 as alpha
> feature.  This allows you to create volume snapshots from volume (PVC).
> With snapshot available, you can create a new volume (PVC) from snapshot as
> the data source. So the volume will have data prepopulated. We also plan to
> work on data clone and population features which allow you to clone data
> from one PVC to another one or prepopulate data from some data source.
> Please let us know if you have any questions about it or any requirements
> for the feature. Thanks!
>
> On Thursday, September 6, 2018 at 1:52:11 PM UTC-7, Naseem Ullah wrote:
>>
>> I do not think you have to understand what you are asking for, I've
>> learned a lot by asking questions I only half understood :) With that said
>> autoscaling sts was a question and not a feature request :)
>>
>> I do not see how "data is important, and needs to be preserved" and "pods
>> (compute) have no identity" are mutually incompatible statements but if you
>> say so. :)
>>
>> In any case the data is more or less important since it is fetchable; it's
>> just that if it's already there when a new pod is spun up, or when a
>> deployment is updated and a new pod created, it speeds up the startup time
>> drastically (virtually immediate vs 2-3 minutes to sync)
>>
>> Shared storage would be ideal (deployment with HPA, with a mounted NFS
>> vol). But I get OOM errors when using a RWX persistent volume and more than
>> one pod is syncing the same data to that volume at the same time; I do not
>> know why these OOM errors occur.  But maybe that has something to do with
>> the code running that syncs the data. RWX seems to be a recurring challenge.
>>
>>
>> On Thursday, September 6, 2018 at 4:33:18 PM UTC-4, Tim Hockin wrote:
>>>
>>> You have to understand what you are asking for.  You're saying "this
>>> data is important and needs to be preserved beyond any one pod (a
>>> persistent volume)" but you're also saying "the pods have no identity
>>> because they can scale horizontally".  These are mutually incompatible
>>> statements.
>>>
>>> You really want a shared storage API, not volumes...
>>> On Thu, Sep 6, 2018 at 1:08 PM Naseem Ullah  wrote:
>>> >
>>> > I see I see.. what about autoscaling statefulsets with an HPA?
>>> >
>>> > > On Sep 6, 2018, at 4:06 PM, 'Tim Hockin' via Kubernetes user
>>> discussion and Q  wrote:
>>> > >
>>> > > Deployments and PersistentVolumes are generally not a good
>>> > > combination.  This is what StatefulSets are for.
>>> > >
>>> > > There's work happening to allow creation of a volume from a
>>> snapshot,
>>> > > but it's only Alpha in the next release.
>>> > > On Thu, Sep 6, 2018 at 1:03 PM Naseem Ullah 
>>> wrote:
>>> > >>
>>> > >> Hello,
>>> > >>
>>> > >> I have a similar use case to Montassar.
>>> > >>
>>> > >> Although I could use emptyDirs, each newly spun pod takes 2-3
>>> minutes to download required data (pod does something similar to git-sync).
>>> If volumes could be prepopulated when a new pod is spun it will simply sync
>>> the diff, which will drastically reduce startup readiness time.
>>> > >>
>>> > >> Any suggestions? Now I have a tradeoff between creating a static
>>> number of replicas and creating the same number of PVCs, or using HPA but
>>> emptyDir volume which increases startup time for the pod.
>>> > >>
>>> > >> Thanks,
>>> > >> Naseem
>>> > >>
>>> > >> On Thursday, Jan

Re: [kubernetes-users] Re: Autoscale volume and pods simultaneously

2018-09-06 Thread 'Tim Hockin' via Kubernetes user discussion and Q
On Thu, Sep 6, 2018, 1:52 PM Naseem Ullah  wrote:

> I do not think you have to understand what you are asking for, I've
> learned a lot by asking questions I only half understood :) With that said
> autoscaling sts was a question and not a feature request :)
>

LOL, fair enough


> I do not see how "data is important, and needs to be preserved" and "pods
> (compute) have no identity" are mutually incompatible statements but if you
> say so. :)
>

Think of it this way - if the deployment scales up, clearly we should add
more volumes.  What do I do if it scales down?  Delete the volumes?  Hold
on to them for some later scale-up?  For how long?  How many volumes?

Fundamentally, the persistent volume abstraction is wrong for what you want
here.  We have talked about something like volume pools which would be the
storage equivalent of deployments, but we have found very few use cases
where that seems to be the best abstraction.

E.g., in this case, the data seems to be some sort of cache of recreatable
data.  Maybe you really want a cache?

>
In any case the data is more or less important since it is fetchable; it's
> just that if it's already there when a new pod is spun up, or when a
> deployment is updated and a new pod created, it speeds up the startup time
> drastically (virtually immediate vs 2-3 minutes to sync)
>

Do you need all the data right away or can it be copied in on-demand?

Shared storage would be ideal (deployment with HPA, with a mounted NFS
> vol). But I get OOM errors when using a RWX persistent volume and more than
> one pod is syncing the same data to that volume at the same time; I do not
> know why these OOM errors occur.  But maybe that has something to do with the
> code running that syncs the data. RWX seems to be a recurring challenge.
>

RWX is challenging because block devices generally don't support it at all,
and the only mainstream FS that does is NFS, and well... NFS...


>
> On Thursday, September 6, 2018 at 4:33:18 PM UTC-4, Tim Hockin wrote:
>>
>> You have to understand what you are asking for.  You're saying "this
>> data is important and needs to be preserved beyond any one pod (a
>> persistent volume)" but you're also saying "the pods have no identity
>> because they can scale horizontally".  These are mutually incompatible
>> statements.
>>
>> You really want a shared storage API, not volumes...
>> On Thu, Sep 6, 2018 at 1:08 PM Naseem Ullah  wrote:
>> >
>> > I see I see.. what about autoscaling statefulsets with an HPA?
>> >
>> > > On Sep 6, 2018, at 4:06 PM, 'Tim Hockin' via Kubernetes user
>> discussion and Q  wrote:
>> > >
>> > > Deployments and PersistentVolumes are generally not a good
>> > > combination.  This is what StatefulSets are for.
>> > >
>> > > There's work happening to allow creation of a volume from a snapshot,
>> > > but it's only Alpha in the next release.
>> > > On Thu, Sep 6, 2018 at 1:03 PM Naseem Ullah 
>> wrote:
>> > >>
>> > >> Hello,
>> > >>
>> > >> I have a similar use case to Montassar.
>> > >>
>> > >> Although I could use emptyDirs, each newly spun pod takes 2-3
>> minutes to download required data (pod does something similar to git-sync).
>> If volumes could be prepopulated when a new pod is spun it will simply sync
>> the diff, which will drastically reduce startup readiness time.
>> > >>
>> > >> Any suggestions? Now I have a tradeoff between creating a static
>> number of replicas and creating the same number of PVCs, or using HPA but
>> emptyDir volume which increases startup time for the pod.
>> > >>
>> > >> Thanks,
>> > >> Naseem
>> > >>
>> > >> On Thursday, January 5, 2017 at 6:07:42 PM UTC-5, Montassar Dridi
>> wrote:
>> > >>>
>> > >>> Hello!!
>> > >>>
>> > >>> I'm using Kubernetes deployment with persistent volume to run my
>> application, but when I try to add more replicas or autoscale, all the new
>> pods try to connect to the same volume.
>> > >>> How can I simultaneously auto create new volumes for each new pod.,
>> like statefulsets(petsets) are able to do it.
>> > >>

Re: [kubernetes-users] Re: Autoscale volume and pods simultaneously

2018-09-06 Thread 'Tim Hockin' via Kubernetes user discussion and Q
You have to understand what you are asking for.  You're saying "this
data is important and needs to be preserved beyond any one pod (a
persistent volume)" but you're also saying "the pods have no identity
because they can scale horizontally".  These are mutually incompatible
statements.

You really want a shared storage API, not volumes...
On Thu, Sep 6, 2018 at 1:08 PM Naseem Ullah  wrote:
>
> I see I see.. what about autoscaling statefulsets with an HPA?
>
> > On Sep 6, 2018, at 4:06 PM, 'Tim Hockin' via Kubernetes user discussion and 
> > Q  wrote:
> >
> > Deployments and PersistentVolumes are generally not a good
> > combination.  This is what StatefulSets are for.
> >
> > There's work happening to allow creation of a volume from a snapshot,
> > but it's only Alpha in the next release.
> > On Thu, Sep 6, 2018 at 1:03 PM Naseem Ullah  wrote:
> >>
> >> Hello,
> >>
> >> I have a similar use case to Montassar.
> >>
> >> Although I could use emptyDirs, each newly spun pod takes 2-3 minutes to 
> >> download required data (pod does something similar to git-sync). If volumes 
> >> could be prepopulated when a new pod is spun it will simply sync the diff, 
> >> which will drastically reduce startup readiness time.
> >>
> >> Any suggestions? Now I have a tradeoff between creating a static number of 
> >> replicas and creating the same number of PVCs, or using HPA but emptyDir 
> >> volume which increases startup time for the pod.
> >>
> >> Thanks,
> >> Naseem
> >>
> >> On Thursday, January 5, 2017 at 6:07:42 PM UTC-5, Montassar Dridi wrote:
> >>>
> >>> Hello!!
> >>>
> >>> I'm using Kubernetes deployment with persistent volume to run my 
> >>> application, but when I try to add more replicas or autoscale, all the 
> >>> new pods try to connect to the same volume.
> >>> How can I simultaneously auto create new volumes for each new pod., like 
> >>> statefulsets(petsets) are able to do it.
> >>



Re: [kubernetes-users] Re: Autoscale volume and pods simultaneously

2018-09-06 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Deployments and PersistentVolumes are generally not a good
combination.  This is what StatefulSets are for.

There's work happening to allow creation of a volume from a snapshot,
but it's only Alpha in the next release.
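
A minimal sketch of the StatefulSet alternative (app name, image, and size are illustrative): `volumeClaimTemplates` gives each replica its own PVC, which is the per-pod-volume behavior the question asks Deployments for.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: app
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: example/app:latest   # illustrative
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data                    # yields PVCs data-app-0, data-app-1, ...
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```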
On Thu, Sep 6, 2018 at 1:03 PM Naseem Ullah  wrote:
>
> Hello,
>
> I have a similar use case to Montassar.
>
> Although I could use emptyDirs, each newly spun pod takes 2-3 minutes to 
> download required data (pod does something similar to git-sync). If volumes 
> could be prepopulated when a new pod is spun it will simply sync the diff, 
> which will drastically reduce startup readiness time.
>
> Any suggestions? Now I have a tradeoff between creating a static number of 
> replicas and creating the same number of PVCs, or using HPA but emptyDir volume 
> which increases startup time for the pod.
>
> Thanks,
> Naseem
>
> On Thursday, January 5, 2017 at 6:07:42 PM UTC-5, Montassar Dridi wrote:
>>
>> Hello!!
>>
>> I'm using Kubernetes deployment with persistent volume to run my 
>> application, but when I try to add more replicas or autoscale, all the new 
>> pods try to connect to the same volume.
>> How can I simultaneously auto create new volumes for each new pod., like 
>> statefulsets(petsets) are able to do it.
>



Re: [kubernetes-users] Re: [google-containers] Can two persistent volume claims be bound to the same persistent volume?

2018-08-24 Thread 'Tim Hockin' via Kubernetes user discussion and Q
If you pointed them at the same NFS export (server + path) then it's
expected that they would see each other's changes.  You can either
create another export on the server or mount a sub-dir of that export
(e.g. export /home, but mount /home/you vs /home/me) or you can use
k8s' `subPath` field on the pods to mount different subdirs.
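
A sketch of the `subPath` option (server address and paths are illustrative): each pod mounts the same export but sees only its own subdirectory.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: user-a
spec:
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: share
      mountPath: /data
      subPath: user-a              # another pod would use subPath: user-b
  volumes:
  - name: share
    nfs:
      server: nfs.example.internal # hypothetical NFS server
      path: /export
```

Note `subPath` is a convention, not an enforcement mechanism: a pod that mounts the export without a subPath can still see everything.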
On Fri, Aug 24, 2018 at 9:39 AM Stephen Eaton  wrote:
>
> I tried creating two PVs of 10Gi, both pointing at the same NFS export of 1TB. 
> I also created 2 PVCs for the same StorageClass, also of 10Gi.
>
> The problem is that when I add a file to one of the PVs, the same file is 
> present on the other PV.
>
> What I would like to be able to do is to create two separate 'partitions' (or 
> logical separation) so that one PV does not share the data of the other PV 
> - is this possible?
>
> On Monday, August 1, 2016 at 9:52:41 AM UTC+2, Tim Hockin wrote:
>>
>> You can create multiple PVs with the same NFS export, as long as that
>> is acceptable to you :)
>>
>> On Mon, Aug 1, 2016 at 12:51 AM, Qian Zhang  wrote:
>> > Got it, thanks Tim!
>> >
>> > BTW, is there any best practice to use PV of NFS type? E.g., there is an 
>> > NFS
>> > server which has only one export, should admin only create 1 PV for it? Or
>> > it is also OK to create multiple PVs?
>> >
>> >
>> > Thanks,
>> > Qian Zhang
>> >
>> > On Mon, Aug 1, 2016 at 3:32 PM, 'Tim Hockin' via Containers at Google
>> >  wrote:
>> >>
>> >> If your NFS system supports that, that is one way it could be done, yes.
>> >>
>> >> On Mon, Aug 1, 2016 at 12:13 AM, Qian Zhang  wrote:
>> >> > I am curious how the storage system behind us does the enforcement,
>> >> > e.g.,
>> >> > will we let NFS server know the capacity of PV is 1GB, and NFS server
>> >> > can
>> >> > guarantee the pod using that PV can not write more than 1GB?
>> >> >
>> >> >
>> >> > Thanks,
>> >> > Qian Zhang
>> >> >
>> >> > On Mon, Aug 1, 2016 at 3:00 PM, 'Tim Hockin' via Containers at Google
>> >> >  wrote:
>> >> >>
>> >> >> Nobody enforces it yet (well, WE don't, but the storage system behind
>> >> >> us might).  It's a way to match user needs (PVC) to available
>> >> >> resources (PV).
>> >> >>
>> >> >> > why does user need to define the PVC's capacity when creating the
>> >> >> > PVC?
>> >> >>
>> >> >> If the user needs 100GB and I give them a PV with 2 GB, they will not
>> >> >> be happy.  They have to specify how much they need so we can bind (or
>> >> >> provision) a PV for them.
>> >> >>
>> >> >> On Sun, Jul 31, 2016 at 11:54 PM, Qian Zhang 
>> >> >> wrote:
>> >> >> > Yeah, actually I am also a bit confused about the capacity user
>> >> >> > defined
>> >> >> > in
>> >> >> > PV and PVC, who will be responsible for enforcing it? E.g., I have an
>> >> >> > NFS
>> >> >> > server which has 10GB free, and I create a PV (1GB) and PVC (1GB),
>> >> >> > and
>> >> >> > create a pod uses that PVC. So in the pod, can I only write 1GB into
>> >> >> > the
>> >> >> > mounted NFS directory? If so, who enforces it?
>> >> >> >
>> >> >> > And if a PV can only be used by a single PVC, why does user need to
>> >> >> > define
>> >> >> > the PVC's capacity when creating the PVC? I think we should not ask
>> >> >> > user
>> >> >> > to
>> >> >> > define it, i.e., all the capacity of the PV should be used.
>> >> >> >
>> >> >> >
>> >> >> >
>> >> >> > Thanks,
>> >> >> > Qian Zhang
>> >> >> >
>> >> >> > On Mon, Aug 1, 2016 at 2:45 PM, 'Tim Hockin' via Containers at Google
>> >> >> >  wrote:
>> >> >> >>
>> >> >> >> A PV uses a single backing medium, but multiple PVs might share that
>> >> >> >> medium.  Consider "thin" block devices which allocate actual space
>> >> >> >> on
>> >> >> >> demand.  You might over-commit your storage system.  Consider NFS
>> >> >> >> which can have multiple exports on the same filesystem.  You might
>> >> >> >> over-commit your NFS server.
>> >> >> >>
>> >> >> >> Not saying it's a great idea, just that it is possible.
>> >> >> >>
>> >> >> >> On Sun, Jul 31, 2016 at 11:40 PM, Qian Zhang 
>> >> >> >> wrote:
>> >> >> >> > Thanks Tim! So a PV can only be used by a single PVC no matter
>> >> >> >> > what
>> >> >> >> > its
>> >> >> >> > type
>> >> >> >> > is.
>> >> >> >> >
>> >> >> >> > And can you please clarify a bit about "You can make a PV that
>> >> >> >> > uses
>> >> >> >> > the
>> >> >> >> > same
>> >> >> >> > backing medium, if the driver allows it"? I do not quite
>> >> >> >> > understand
>> >> >> >> > about
>> >> >> >> > it, I think a PV should always use a single backing medium rather
>> >> >> >> > than
>> >> >> >> > multiple, right?
>> >> >> >> >
>> >> >> >> >
>> >> >> >> > Thanks,
>> >> >> >> > Qian Zhang
>> >> >> >> >
>> >> >> >> > On Mon, Aug 1, 2016 at 2:34 PM, 'Tim Hockin' via Containers at
>> >> >> >> > Google
>> >> >> >> >  wrote:
>> >> >> >> >>
>> >> >> >> >> A PersistentVolume (PV) is an atomic abstraction.  You can not
>> >> >> >> >> subdivide it across multiple claims.  You can make a PV that uses
>> >> >> >> >> the
>> >> >> >> >> same backing medium, 

Re: [kubernetes-users] is there a way to stop the pod scheduling if all the slaves nodes are down?

2018-08-13 Thread 'Tim Hockin' via Kubernetes user discussion and Q
You asked this question in another thread.  I and others answered
there - why open a new thread?
On Sun, Aug 12, 2018 at 11:05 PM Basanta Kumar Panda
 wrote:
>
> HI,
> I have 1 master and 2 slave nodes, and both slave nodes are down.
> Since the master is up, the job is scheduled and the pod is waiting to be 
> scheduled, so the job is also waiting.
>
> Is there a way to stop the pod scheduling if all the slave nodes are down?
> Or is there a way to rerun the same Jenkins jobs?
>
> Regards,
> Basanta
>



Re: [kubernetes-users] Is there a way to not start the pod creation if no slave nodes available to run the jobs .

2018-08-10 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Well, we're not "starting" the pods; we're queuing them up for when
nodes become available.  Would you rather they get rejected
immediately?  What if a node comes online 3 seconds after that
rejection?
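
Tim's point (pods queue rather than fail) shows up in the API as pods stuck in phase Pending. A small illustrative check, using made-up pod objects shaped like `kubectl get pods -o json` output:

```python
# Sketch: detecting the "queued" state Tim describes -- pods sit in
# phase Pending until a schedulable node appears.
# The pod objects below are illustrative sample data, not real output.

pods = [
    {"metadata": {"name": "kube-lv7dz"}, "status": {"phase": "Pending"}},
    {"metadata": {"name": "kube-7mztq"}, "status": {"phase": "Pending"}},
]

def pending(pods):
    """Names of pods still waiting for the scheduler."""
    return [p["metadata"]["name"] for p in pods
            if p["status"].get("phase") == "Pending"]

print(pending(pods))  # ['kube-lv7dz', 'kube-7mztq']
```

An alerting system could watch for pods that stay in this set longer than some threshold instead of rejecting them outright.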
On Fri, Aug 10, 2018 at 2:28 PM Basanta Kumar Panda
 wrote:
>
> Hi,
>
> Here is the scenario:
> 1. The K8s master is up with 2 slave nodes and is configured with a Jenkins master.
> 2. Both slave nodes are down.
> 3. A job is triggered from Jenkins and the job is waiting/hanging.
>
> bash-4.2$ kubectl get nodes
> NAME      STATUS                     ROLES    AGE   VERSION
> Server1   Ready,SchedulingDisabled   <none>   34d   v1.9.1+2.1.5.el7
> Server2   Ready,SchedulingDisabled   <none>   29d   v1.9.1+2.1.5.el7
> Server3   Ready,SchedulingDisabled   master   34d   v1.9.1+2.1.5.el7
>
> bash-4.2$ kubectl get pods -o wide -w
> NAME         READY   STATUS        RESTARTS   AGE   IP   NODE
> kube-lv7dz   0/1   Pending   0 0s
> kube-lv7dz   0/1   Pending   0 0s
> kube-lv7dz   0/1   Terminating   0 4m
> kube-lv7dz   0/1   Terminating   0 4m
> kube-7mztq   0/1   Pending   0 0s
> kube-7mztq   0/1   Pending   0 0s
>
> Here pods are waiting to be scheduled on the slave nodes and since no slave 
> nodes are available, jobs are waiting/hanging.
> Is there a way to not start pod creation when no nodes are available to run
> the jobs?
>
> Regards,
> Basanta
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] Does GKE CNI calico plugin shares "etcd" with control plane?

2018-08-08 Thread 'Tim Hockin' via Kubernetes user discussion and Q
We only use Calico in the mode that reads node.spec.podCIDR, so it doesn't
need etcd.
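
For reference, the mode Tim describes means Calico derives each node's pod CIDR from the Node API object itself rather than from a separate etcd. A small illustrative sketch (the node object below is made-up sample data):

```python
# Sketch: where Calico finds the pod CIDR when it runs in the no-etcd
# mode Tim mentions -- it reads spec.podCIDR from the Node object.
# The node manifest below is illustrative sample data, not real output.

node = {
    "metadata": {"name": "gke-cluster-default-pool-1234"},
    "spec": {"podCIDR": "10.56.3.0/24"},
}

def pod_cidr(node_obj):
    """Return the CIDR the CNI plugin would use for this node."""
    return node_obj["spec"]["podCIDR"]

print(pod_cidr(node))  # 10.56.3.0/24
```

On a live cluster the same field is visible via `kubectl get node <name> -o jsonpath='{.spec.podCIDR}'`.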

On Wed, Aug 8, 2018 at 3:36 PM parthi.geo  wrote:

> Wondering if Google Kubernetes Engine native CNI add-on (calico) shares
> "etcd" with master / control plane.
>
>
> Regards
> Parthiban,S
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] How to monitor/alert on container/pod death or restart

2018-08-08 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Most of what you're asking for is available via the k8s API, if you watch
it.

On Wed, Aug 8, 2018 at 12:58 PM David Rosenstrauch 
wrote:

> As we're getting ready to go to production with our k8s-based system,
> we're trying to pin down exactly how we're going to do all the needed
> monitoring/alerting for it.  We can easily collect many of the metrics
> we need (using kube-state-metrics to feed into prometheus, and/or
> Datadog) and alert off of those.
>
> However, there's other important k8s-related info about our system that
> we need to be able to access, monitor, and alert on, most notably things
> like:
>
> * If a container crashes and is restarted by k8s
>

Represented in the pod.status block


> * If k8s kills a container and restarts it (e.g., due to exceeding cpu
> or memory limits, or due to repeated failure of liveness check)
>

Also in pod.status

> * If k8s kills a container but cannot restart it
>

In pod.status and/or Events, depending on exactly what you want to know


> * If an entire pod crashes and is restarted by k8s
>

There's not really a concept of a pod "crashing", just containers being
restarted.
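
As a sketch of what "it's in pod.status" means in practice: restart counts and last-termination reasons live under status.containerStatuses. The pod object below is made-up sample data shaped like `kubectl get pod -o json` output:

```python
# Sketch: pulling restart/kill signals out of pod.status.
# A restarted container shows an incremented restartCount; a container
# killed for exceeding memory limits shows lastState.terminated.reason
# of "OOMKilled". Sample data only.

pod_status = {
    "containerStatuses": [
        {
            "name": "app",
            "restartCount": 3,
            "lastState": {
                "terminated": {"reason": "OOMKilled", "exitCode": 137}
            },
        }
    ]
}

def restart_events(status):
    """Yield (container, restartCount, lastTerminationReason) tuples."""
    for cs in status.get("containerStatuses", []):
        reason = cs.get("lastState", {}).get("terminated", {}).get("reason")
        yield cs["name"], cs.get("restartCount", 0), reason

for name, restarts, reason in restart_events(pod_status):
    print(name, restarts, reason)  # app 3 OOMKilled
```

A watcher (via the API's watch mechanism) can feed tuples like these into an alerting pipeline.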


>
> etc.
>
>
> How would one go about gaining access to those k8s-related events in
> an automated fashion, and setting up monitoring/alerting off of those?
>
> Thanks,
>
> DR
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] How to mount a service status file with pvc in kubernetes?

2018-08-08 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Can you explain more what you mean?

Who writes this file?

Who reads this file?

What is the lifetime of this file?

Is this a simple one-writer, one-reader case?

On Fri, Aug 3, 2018 at 10:34 AM 'zulv' via Kubernetes user discussion and
Q  wrote:

> The issue is that I would like to persist a status file (status
> generated by the service) so the status is not lost when the
> service restarts. How do I solve this?
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] Change the default name of Load Balancer

2018-05-24 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Yes, but any given GCP project can have multiple clusters, and the LB names
are flat across them.  We need to make sure LB names don't collide.  That's
why we use the UUID; the name is not exactly random, it is a UUID that
maps back to the Kubernetes Service.

On Wed, May 23, 2018 at 5:38 PM Jonathan Mejias <drumber...@gmail.com>
wrote:

> I don't know if I explained myself well. I want to use the name
> "application-loadbalancer" instead of "jsvaq1568njsuwha38." when I
> expose a deployment.yaml as the LoadBalancer type.
>
> Am I missing something?
>
> Regards!
>
> On Wed, May 23, 2018, 20:33 'Tim Hockin' via Kubernetes user discussion
> and Q <kubernetes-users@googlegroups.com> wrote:
>
>> The problem is that we only get 63 characters to make a unique name, and
>> both kubernetes namespace and service names can be that long themselves,
>> and even then they are not unique across clusters.  We could use the UUID
>> and up to 27 characters of the combination of those names, but then we have
>> a back-compat problem.
>>
>> Maybe not impossible, but not simple.
>>
>> On Wed, May 23, 2018 at 1:13 PM Jonathan Mejías <drumber...@gmail.com>
>> wrote:
>>
>>> Hi.
>>>
>>> By default, when Kubernetes exposes the Google load balancer it gives it a
>>> name like "3efre2udfi9w2du9qwefds200992di". Is there a way to change that
>>> name to one more human readable?
>>>
>>> Regards
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Kubernetes user discussion and Q" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to kubernetes-users+unsubscr...@googlegroups.com.
>>> To post to this group, send email to kubernetes-users@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/kubernetes-users.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Kubernetes user discussion and Q" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to kubernetes-users+unsubscr...@googlegroups.com.
>> To post to this group, send email to kubernetes-users@googlegroups.com.
>> Visit this group at https://groups.google.com/group/kubernetes-users.
>> For more options, visit https://groups.google.com/d/optout.
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] Ingress paths not working with dynamic endpoint

2018-05-23 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Can you try to repro with a manual LB and /*? That should match all
sub-paths.

On Wed, May 23, 2018 at 1:11 PM Jonathan Mejias <drumber...@gmail.com>
wrote:

> Ahmet, I did everything you mentioned.
>
> I tried /angular, /angular/, and /angular/*, and if the health check goes to
> "/" I get the response of the default backend (404). Instead, my app has a
> "healthCheck" path that responds with HTTP 200.
>
> My path didn't work. All the problems were solved when I changed to an
> Nginx controller using an L4 GLB.
>
> Use the Nginx controller and rewrite-target.
>
>
> 2018-05-21 20:00 GMT-04:00 'Ahmet Alp Balkan' via Kubernetes user
> discussion and Q <kubernetes-users@googlegroups.com>:
>
>> +1 to Tim. Your "rewrite-target" annotation won't work on GKE (it's only
>> for nginx-ingress).
>>
>> Also note that "Services exposed through an Ingress must serve a response
>> with HTTP 200 status to the GET requests on "/". This is used for health
>> checking. If your application does not serve HTTP 200 on "/", the backend
>> will be marked unhealthy and will not get traffic."
>> (Quoted from:
>> https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#remarks
>> )
>>
>> So I would say make sure:
>>
>> 1. Your "GET /" of the load balancer returns a 200 OK. (Looks like you're
>> not handling "/", only /angular and /angular2)
>> 2. All backend services listed (in your case "angular-svc") returns HTTP
>> 200 OK for "GET /" when called directly (not through the created load
>> balancer), you can test this with "kubectl port-forward"
>> 3. "path: /angular" will not work for "/angular/foo". As Tim said, use
>> "/angular/*".
>>
>> On Mon, May 21, 2018 at 4:43 PM 'Tim Hockin' via Kubernetes user
>> discussion and Q <kubernetes-users@googlegroups.com> wrote:
>>
>>> Did you try /* ?
>>>
>>> https://cloud.google.com/compute/docs/load-balancing/http/url-map
>>>
>>> On Mon, May 21, 2018 at 10:44 AM Jonathan Mejias <drumber...@gmail.com>
>>> wrote:
>>>
>>>> The only way I resolved the problem was changing to an nginx
>>>> controller instead of the GKE one: installing nginx-controller with Helm
>>>> and using the rewrite option. GKE is limited in configuration options; I
>>>> do not recommend it.
>>>>
>>>> PS: The nginx controller uses a network load balancer (TCP), not HTTP.
>>>>
>>>>
>>>>
>>>> On Mon, May 21, 2018, 13:09 <davidshakespe...@gmail.com> wrote:
>>>>
>>>>> I have the same problem. Did you manage to resolve it?
>>>>>
>>>>> --
>>>>> You received this message because you are subscribed to the Google
>>>>> Groups "Kubernetes user discussion and Q" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>>> an email to kubernetes-users+unsubscr...@googlegroups.com.
>>>>> To post to this group, send email to kubernetes-users@googlegroups.com
>>>>> .
>>>>> Visit this group at https://groups.google.com/group/kubernetes-users.
>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>
>>>> --
>>>> You received this message because you are subscribed to the Google
>>>> Groups "Kubernetes user discussion and Q" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>> an email to kubernetes-users+unsubscr...@googlegroups.com.
>>>> To post to this group, send email to kubernetes-users@googlegroups.com.
>>>> Visit this group at https://groups.google.com/group/kubernetes-users.
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Kubernetes user discussion and Q" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to kubernetes-users+unsubscr...@googlegroups.com.
>>> To post to this group, send email to kubernetes-users@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/kubernetes-users.
>>> For more options, visit https://groups.google.com/d/optout.
>>>

Re: [kubernetes-users] Change the default name of Load Balancer

2018-05-23 Thread 'Tim Hockin' via Kubernetes user discussion and Q
The problem is that we only get 63 characters to make a unique name, and
both kubernetes namespace and service names can be that long themselves,
and even then they are not unique across clusters.  We could use the UUID
and up to 27 characters of the combination of those names, but then we have
a back-compat problem.

Maybe not impossible, but not simple.
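
The arithmetic behind the 63-character problem can be made concrete. This sketch only illustrates the constraint; the "prefix plus UID hex" naming is an approximation of what GKE does, not its exact algorithm:

```python
# Sketch of the collision problem Tim describes: a GCP forwarding-rule
# name is capped at 63 characters, but a Kubernetes namespace and a
# service name can each be up to 63 characters themselves, so a
# human-readable "namespace-service" name cannot always fit.
# The UID-based name here is an approximation, not GKE's real scheme.

import uuid

MAX_GCP_NAME = 63

namespace = "a" * 63          # both individually legal Kubernetes names...
service = "b" * 63

human_name = f"{namespace}-{service}"
assert len(human_name) > MAX_GCP_NAME   # ...but the pair cannot fit

# Approximation of the actual approach: short prefix + UID hex digits,
# which is both short and unique across clusters in the project.
lb_name = "a" + uuid.uuid4().hex        # 33 chars
assert len(lb_name) <= MAX_GCP_NAME
```

This is the back-compat trade-off Tim mentions: any scheme mixing human-readable fragments with the UID has to fit in the leftover characters.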

On Wed, May 23, 2018 at 1:13 PM Jonathan Mejías 
wrote:

> Hi.
>
> By default, when Kubernetes exposes the Google load balancer it gives it a
> name like "3efre2udfi9w2du9qwefds200992di". Is there a way to change that
> name to one more human readable?
>
> Regards
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [kubernetes-users] Ingress paths not working with dynamic endpoint

2018-05-21 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Did you try /* ?

https://cloud.google.com/compute/docs/load-balancing/http/url-map

On Mon, May 21, 2018 at 10:44 AM Jonathan Mejias 
wrote:

> The only way I resolved the problem was changing to an nginx
> controller instead of the GKE one: installing nginx-controller with
> Kubernetes Helm and using the rewrite option. GKE is limited in
> configuration options; I do not recommend it.
>
> PS: The nginx controller uses a network load balancer (TCP), not HTTP.
>
>
>
> On Mon, May 21, 2018, 13:09  wrote:
>
>> I have the same problem. Did you manage to resolve it?
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Kubernetes user discussion and Q" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to kubernetes-users+unsubscr...@googlegroups.com.
>> To post to this group, send email to kubernetes-users@googlegroups.com.
>> Visit this group at https://groups.google.com/group/kubernetes-users.
>> For more options, visit https://groups.google.com/d/optout.
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [kubernetes-users] Re: Recommended way to get the pods of a Kubernetes deployment?

2018-05-20 Thread 'Tim Hockin' via Kubernetes user discussion and Q
The outdated pods are being terminated, so they should go away "soon".  In
the meantime it's not wrong to include them because they exist.  If you
want them to terminate faster, that is something you can control :)

On Sun, May 20, 2018, 9:06 AM Torsten Bronger <bron...@physik.rwth-aachen.de>
wrote:

> Hallöchen!
>
> 'Tim Hockin' via Kubernetes user discussion and Q writes:
>
> > Maybe I don't understand - the labels in the template are applied
> > to the pod.  Just label select against pods.
>
> If the deployment has been updated recently, I still have the
> outdated pods hanging around.  Do you recommend adding a version
> number to the deployment's labels?
>
> Tschö,
> Torsten.
>
> --
> Torsten Bronger
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2018-05-20 Thread 'Tim Hockin' via Kubernetes user discussion and Q
You can build that as a controller that runs in-cluster, picks one of the
nodes, and assigns the static IP.  It will still be racy, though, in that
it will never be instantaneous.

On Sun, May 20, 2018, 3:28 PM  wrote:

> An update: I was able to do this with the standard add-access-config
> mechanism here:
>
> https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address
>
> No guarantees around when GKE will rebuild those nodes and lose the node
> IPs, but it works for now.
>
> On Sunday, May 20, 2018 at 12:13:30 PM UTC-7, mi...@percy.io wrote:
> > Evan,
> >
> > Did you figure out a way to assign reserved static IP addresses to a few
> specific nodes in a GKE pool?
> >
> > We are also fine with doing this manually for a couple of specific nodes
> for the time being (rather than building a NAT gateway), but I cannot find
> reliable information about how to assign a reserved static IP to a GKE node.
> >
> > Cheers,
> > Mike
> >
> > On Wednesday, May 3, 2017 at 12:13:42 PM UTC-7, Evan Jones wrote:
> > > Correct, but at least at the moment we aren't using auto-resizing, and
> I've never seen nodes get removed without us manually taking some action
> (e.g. upgrading Kubernetes releases or similar). Are there automated events
> that can delete a VM and remove it, without us having done something?
> Certainly I've observed machines rebooting, but that also preserves
> dedicated IPs. I can live with having to take some manual configuration
> action periodically, if we are changing something with our cluster, but I
> would like to know if there is something I've overlooked. Thanks!
> > >
> > >
> > >
> > >
> > >
> > > On Wed, May 3, 2017 at 12:20 PM, Paul Tiplady  wrote:
> > >
> > > The public IP is not stable in GKE. You can manually assign a static
> IP to a GKE node, but then if the node goes away (e.g. your cluster was
> resized) the IP will be detached, and you'll have to manually reassign. I'd
> guess this is also true on an AWS managed equivalent like CoreOS's
> CloudFormation scripts.
> > >
> > >
> > > On Wed, May 3, 2017 at 8:52 AM, Evan Jones 
> wrote:
> > >
> > > As Rodrigo described, we are using Container Engine. I haven't fully
> tested this yet, but my plan is to assign "dedicated IPs" to a set of
> nodes, probably in their own Node Pool as part of the cluster. Those are
> the IPs used by outbound connections from pods running on those nodes, if I
> recall correctly from a previous experiment. Then I will use Rodrigo's
> taint suggestion to schedule Pods on those nodes.
> > >
> > > If for whatever reason we need to remove those nodes from that pool,
> or delete and recreate them, we can move the dedicated IP and taints to new
> nodes, and the jobs should end up in the right place again.
> > >
> > >
> > > In short: I'm pretty sure this is going to solve our problem.
> > >
> > >
> > > Thanks!
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [kubernetes-users] Recommended way to get the pods of a Kubernetes deployment?

2018-05-20 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Maybe I don't understand - the labels in the template are applied to the
pod.  Just label select against pods.
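
Tim's suggestion amounts to a plain label match against pods, i.e. what `kubectl get pods -l app=web` asks the API server to do. A sketch with made-up pod objects:

```python
# Sketch: label selection against pods, the server-side filter behind
# `kubectl get pods -l app=web`. Pod names and labels below are
# illustrative sample data.

pods = [
    {"metadata": {"name": "web-1", "labels": {"app": "web"}}},
    {"metadata": {"name": "web-2", "labels": {"app": "web"}}},
    {"metadata": {"name": "db-1",  "labels": {"app": "db"}}},
]

def select(pods, **labels):
    """Return pods whose labels contain every given key=value pair."""
    return [
        p for p in pods
        if all(p["metadata"]["labels"].get(k) == v
               for k, v in labels.items())
    ]

names = [p["metadata"]["name"] for p in select(pods, app="web")]
print(names)  # ['web-1', 'web-2']
```

Because the Deployment's pod template labels propagate to the pods it creates, selecting on those labels sidesteps the six-step ReplicaSet dance described below.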

On Sun, May 20, 2018, 8:12 AM Torsten Bronger 
wrote:

> Hallöchen!
>
> Since this question is apparently off-topic on SO
> (https://stackoverflow.com/q/50434349/188108), I ask here: What is
> the recommended way to get the pods of a Kubernetes deployment?
>
> Currently, I do:
>
> 1. Add unique labels to the deployment's template.
>
> 2. Get the revision number of the deployment.
>
> 3. Get all replica sets with the labels.
>
> 4. Filter them further to find the one with the correct revision
>number.
>
> 5. Extract the pod template hash from the replica set.
>
> 6. Get all pods with the labels plus the pod template hash.
>
> However, this is awkward and complex. Besides, I am not sure that
> (4) and (6) are guaranteed to yield only the wanted objects.  Is
> there a more reliable and possibly simpler way?
>
> Tschö,
> Torsten.
>
> --
> Torsten Bronger
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [kubernetes-users] Kubernetes Ingress HTTP Load Balancer with port range

2018-05-17 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Kubernetes' Ingress abstraction does what you want.

On Wed, May 16, 2018 at 6:38 PM Jonathan Mejias <drumber...@gmail.com>
wrote:

> I'm using Kubernetes to deploy apps; how can I create that virtual host
> in a container cluster?
>
> On Wed, May 16, 2018, 19:36 'Tim Hockin' via Kubernetes user discussion
> and Q <kubernetes-users@googlegroups.com> wrote:
>
>> HTTP gives you a much better solution - virtual hosts.
>>
>> The 'host' header tells your HTTP ingress which logical service to access.
>>
>> e.g. `curl -H 'Host: foo.com' http://210.210.210.22:80/` is different
>> than `curl -H 'Host: bar.com' http://210.210.210.22:80/`
>>
>> On Wed, May 16, 2018 at 1:19 PM Jonathan Mejías <drumber...@gmail.com>
>> wrote:
>>
>>> Hi
>>>
>>> How do I create an HTTP load balancer with a Kubernetes Ingress?
>>>
>>> Example:
>>>
>>> SVC-1  --  210.210.210.22:80 (internet)
>>> SVC-2  --  210.210.210.22:81 (internet)
>>> SVC-3  --  210.210.210.22:82 (internet)
>>>
>>> The services are created as type NodePort, but what are the definitions
>>> for the ingress.yaml file?
>>>
>>> I don't want to use paths, because my services have dynamic endpoints,
>>> and additional paths respond with 404. So I want to define by port range.
>>>
>>> how can i do that?
>>>
>>> Regards
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Kubernetes user discussion and Q" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to kubernetes-users+unsubscr...@googlegroups.com.
>>> To post to this group, send email to kubernetes-users@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/kubernetes-users.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Kubernetes user discussion and Q" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to kubernetes-users+unsubscr...@googlegroups.com.
>> To post to this group, send email to kubernetes-users@googlegroups.com.
>> Visit this group at https://groups.google.com/group/kubernetes-users.
>> For more options, visit https://groups.google.com/d/optout.
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [kubernetes-users] Kubernetes Ingress HTTP Load Balancer with port range

2018-05-16 Thread 'Tim Hockin' via Kubernetes user discussion and Q
HTTP gives you a much better solution - virtual hosts.

The 'host' header tells your HTTP ingress which logical service to access.

e.g. `curl -H 'Host: foo.com' http://210.210.210.22:80/` is different than
`curl -H 'Host: bar.com' http://210.210.210.22:80/`
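
The dispatch an Ingress controller performs on those Host headers is essentially a lookup table. A minimal sketch (hosts and service names are made-up examples, mirroring what an Ingress manifest's host rules would declare):

```python
# Sketch: the virtual-host dispatch behind Tim's curl examples.
# An Ingress controller maps the HTTP Host header to a backend
# Service; hosts and service names here are illustrative only.

rules = {
    "foo.com": "svc-1",
    "bar.com": "svc-2",
    "baz.com": "svc-3",
}

def route(host_header, default="default-backend"):
    """Pick a backend Service from the HTTP Host header."""
    return rules.get(host_header.lower(), default)

print(route("foo.com"))          # svc-1
print(route("unknown.example"))  # default-backend
```

This is why one IP and one port can front many services: the hostname, not the port number, selects the backend.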

On Wed, May 16, 2018 at 1:19 PM Jonathan Mejías 
wrote:

> Hi
>
> How do I create an HTTP load balancer with a Kubernetes Ingress?
>
> Example:
>
> SVC-1  --  210.210.210.22:80 (internet)
> SVC-2  --  210.210.210.22:81 (internet)
> SVC-3  --  210.210.210.22:82 (internet)
>
> The services are created as type NodePort, but what are the definitions
> for the ingress.yaml file?
>
> I don't want to use paths, because my services have dynamic endpoints, and
> additional paths respond with 404. So I want to define by port range.
>
> how can i do that?
>
> Regards
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [kubernetes-users] ingress host enforcement

2018-05-09 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
Admission controller webhooks are how you can add custom pre-admission
checks.
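
A registration for such a webhook might look like the sketch below. All names, the namespace, and the CA bundle are placeholders; the validation service itself (which checks uniqueness, namespace, and domain) is a separate, hypothetical component you would write:

```yaml
# Sketch: every name here is a placeholder; the validator service is hypothetical.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-host-policy
webhooks:
- name: ingress-host-policy.example.com
  rules:
  - apiGroups: ["extensions"]
    apiVersions: ["v1beta1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["ingresses"]
  clientConfig:
    service:
      namespace: kube-system
      name: ingress-host-validator   # hypothetical service that runs the host checks
      path: /validate
    caBundle: <base64-encoded-CA-cert>
  failurePolicy: Fail   # reject ingresses when the validator is unreachable
```

The API server calls the validator on every ingress create/update, so policies like "host must contain the namespace" are enforced before the object is stored.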
On Tue, May 8, 2018 at 11:45 PM Christopher Schmidt 
wrote:

> Hi,

> What I want is to enforce a specific host setting for users' ingresses.

> Let's say every ingress host setting has
> - to be unique and
> - has to contain the namespace it has been created in and
> - a specific domain (e.g. myapp.my-namespace.foo.bar.com)

> Does anyone know how to do this? By patching nginx-ingress?
> By ingress claims (which is still a proposal)?
> By writing a custom Admission Controller like this one:
> https://github.com/yahoo/k8s-ingress-claim ?

> Thanks for any tips...
> best Christopher




Re: [kubernetes-users] Best practice for running variants of k8s services?

2018-05-07 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
The community does not have (and won't for a while, if ever) a "preferred"
model.  It's very much organic exploration for now.  Opinions proliferate
and it's possible they will never converge.  That's not a bad thing, IMO.
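
For concreteness, a kustomize overlay for the dev/qa/prod case discussed below might look like this. The layout and file names are illustrative only, and field names have changed across kustomize versions (early versions used `bases` and `patches`):

```yaml
# overlays/prod/kustomization.yaml -- illustrative sketch.
# base/ would hold the shared serviceX manifests; each overlay
# (dev, qa, prod-premium, prod-standard) patches only the differences.
bases:
- ../../base
patches:
- serviceX-prod-patch.yaml   # e.g. prod database endpoint, nodeSelector for premium hosts
namePrefix: prod-
```

Running kustomize against an overlay directory emits the fully merged YAML, so the near-identical per-environment files collapse into one base plus small patches.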

On Mon, Apr 30, 2018 at 8:13 AM David Rosenstrauch <dar...@darose.net>
wrote:

> Thanks for the suggestion, Tim.  Looks like that might fit the bill.
> I'll kick the tires on it a bit.
>
> Is kustomize the k8s project's preferred (or suggested) way to handle
> this type of situation?
>
> Thanks,
>
> DR
>
> On 04/27/2018 07:06 PM, 'Tim Hockin' via Kubernetes user discussion and
> > Q&A wrote:
> > Does this head in the direction you want?
> >
> > https://github.com/kubernetes/kubectl/tree/master/cmd/kustomize
> >
> > On Fri, Apr 27, 2018 at 10:52 PM David Rosenstrauch <dar...@darose.net>
> > wrote:
> >
> >> We've been using Kubernetes to get a dev version of our environment up
> >> and running, and so far the experience has been great - nearly a dozen
> >> services up and running, and Kubernetes has made the whole process very
> >> straight-forward.
> >>
> >> However, we're now looking at moving this implementation towards a
> >> production release and things are starting to get a bit more
> >> complicated.  Specifically, we're envisioning that we're going to wind
> >> up with different "variants" of each service, that each differ only
> >> slightly from each other.  For example, for service X, we'll likely
> have:
> >>
> >> * a dev, qa, and prod version, each of which talks to different database
> >> backends, writes to different locations on a shared file system, etc.
> >>
> >> * in the prod version, we're foreseeing that there'll likely need to be
> >> a way to specify that a portion of the pods running service X need to be
> >> segregated to run on a specific set of hosts that are dedicated to
> >> "premium customers".
> >>
> >>
> >> Given how we're currently configuring k8s (just using raw yaml files)
> >> it's easy to see that the net result would wind up being a set of nearly
> >> identical yaml files for each service - i.e.:  serviceX-dev.yaml,
> >> serviceX-qa.yaml, serviceX-prod-premium.yaml,
> >> serviceX-prod-standard.yaml, etc.
> >>
> >> That's obviously not an efficient solution to this issue.  So I'm
> >> wondering:  what's the generally accepted kubernetes "best practice" for
> >> handling this type of situation?  (Or is there even one?)
> >>
> >> I did a quick search this afternoon, and came upon a number of community
> >> discussions about things like "templating" and "parameterization of yaml
> >> files", as well as some 3rd party tools that seem to implement this type
> >> of functionality (e.g., Helm).  But there's a lot out there, and I
> >> wasn't able to see any consensus.
> >>
> >> Is there any Kubernetes standard approach to handle this sort of
> >> situation?  (Or is there likely to be soon?)
> >>
> >> Thanks,
> >>
> >> DR
> >>
> >
>



Re: [kubernetes-users] Kubernetes ingress

2018-04-28 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
Ingress is sort of the lowest-common-API across many platforms.  I am not
sure that the majority of them can support it natively.  I think it's
logical, but may not be practical yet.
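
For the /test/* -> /* wildcard rewrite asked about below: versions of ingress-nginx released after this thread (0.22 and later) support capture groups in the rewrite-target annotation. A sketch, reusing the service name and port from the quoted example:

```yaml
# Sketch: requires ingress-nginx 0.22+, newer than was current in this thread.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    # $1 is whatever the (.*) in the path matched, so /test/data/runs -> /data/runs
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /test/(.*)
        backend:
          serviceName: test
          servicePort: 6006
```

This is nginx-specific behavior, not part of the portable Ingress API.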

On Sat, Apr 28, 2018, 7:41 AM Kanthi P  wrote:

> OK, Tim. Does it sound like a good thing to add?
>
> Let me share our use case. We are building a data science platform using
> Kubernetes.
>
> We have a data science app which uses TensorFlow internally; this runs as a
> service in the Kubernetes cluster.
> And we configured an ingress controller for this service.
>
> TensorFlow has a dashboard called TensorBoard that shows some metrics/data
> about the data science application's performance.
> While the tensorboard UI is hosted at /, the data it tries to fetch
> reside at /data
>
> After configuring ingress, we can see the tensorboard dashboard since
> / gets redirected to / as expected.
> But it fails to load the data as //data also gets redirected as
> /
>
> If we can add the support for such URL manipulation, it will help similar
> use cases. Thoughts?
>
> Thanks,
> Kanthi
>
>
>
>
> On Saturday, April 28, 2018 at 11:38:24 AM UTC+5:30, Tim Hockin wrote:
>>
>> Ingress does not do prefix stripping or URL munging by default, as not
>> all platforms support it.  I verified against the Google implementation, it
>> passes the URL path through directly.
>>
>> On Sat, Apr 28, 2018, 6:09 AM Kanthi P  wrote:
>>
>>> Thanks David for the example. I tried it, with this we can only redirect
>>> /test/data to /data, but we won't be able to redirect /test to /.
>>>
>>> We actually want /test to remain redirected to / itself and /test/data
>>> to redirect to /data and /test/data/runs to /data/runs and so on.
>>>
>>> So in short, we just want /test/* to be redirected to /*.
>>>
>>> Is there any provision for such wildcard match kind of thing?
>>>
>>> Thanks much,
>>> Kanthi
>>>
>>>
>>> On Saturday, April 28, 2018 at 2:08:14 AM UTC+5:30, David Rosenstrauch
>>> wrote:

 If you were using the nginx ingress, you would do it like this:

 apiVersion: extensions/v1beta1
 kind: Ingress
 metadata:
   name: test-ingress
   annotations:
     nginx.ingress.kubernetes.io/rewrite-target: /data
     nginx.ingress.kubernetes.io/ssl-redirect: "false"
 spec:
   rules:
   - http:
       paths:
       - path: /test/data
         backend:
           serviceName: test
           servicePort: 6006

 (See:

 https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/rewrite)


 But I'm not sure how you'd do it using traefik.  (And I don't think the
 standard k8s ingress controller supports rewrite.)

 HTH,

 DR

 On 04/27/2018 03:11 PM, Kanthi P wrote:
 > Hi, Need some help with ingress controller
 > we want to redirect a http request say //xyz to be mapped
 to a
 > service in the backend. And the service should receive the request as
 > /xyz
 > How do we annotate this in the ingress resource?
 >
 > Have configured the ingress resource as shown:
 >
 >
 > apiVersion: extensions/v1beta1
 > kind: Ingress
 > metadata:
 >   annotations:
 >     kubernetes.io/ingress.class: traefik
 >   name: test-ingress
 >   namespace: default
 >
 > spec:
 >   rules:
 >   - http:
 >       paths:
 >       - backend:
 >           serviceName: test
 >           servicePort: 6006
 >         path: /test
 > status:
 >   loadBalancer: {}
 >
 > But the problem is /test/data gets redirected as /, but
 we want
 > it to be redirected as /data
 > Any idea how to annotate this?
 >



Re: [kubernetes-users] Kubernetes ingress

2018-04-28 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
Ingress does not do prefix stripping or URL munging by default, as not all
platforms support it.  I verified against the Google implementation, it
passes the URL path through directly.

On Sat, Apr 28, 2018, 6:09 AM Kanthi P  wrote:

> Thanks David for the example. I tried it, with this we can only redirect
> /test/data to /data, but we won't be able to redirect /test to /.
>
> We actually want /test to remain redirected to / itself and /test/data to
> redirect to /data and /test/data/runs to /data/runs and so on.
>
> So in short, we just want /test/* to be redirected to /*.
>
> Is there any provision for such wildcard match kind of thing?
>
> Thanks much,
> Kanthi
>
>
> On Saturday, April 28, 2018 at 2:08:14 AM UTC+5:30, David Rosenstrauch
> wrote:
>>
>> If you were using the nginx ingress, you would do it like this:
>>
>> apiVersion: extensions/v1beta1
>> kind: Ingress
>> metadata:
>>   name: test-ingress
>>   annotations:
>>     nginx.ingress.kubernetes.io/rewrite-target: /data
>>     nginx.ingress.kubernetes.io/ssl-redirect: "false"
>> spec:
>>   rules:
>>   - http:
>>       paths:
>>       - path: /test/data
>>         backend:
>>           serviceName: test
>>           servicePort: 6006
>>
>> (See:
>>
>> https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/rewrite)
>>
>>
>> But I'm not sure how you'd do it using traefik.  (And I don't think the
>> standard k8s ingress controller supports rewrite.)
>>
>> HTH,
>>
>> DR
>>
>> On 04/27/2018 03:11 PM, Kanthi P wrote:
>> > Hi, Need some help with ingress controller
>> > we want to redirect a http request say //xyz to be mapped
>> to a
>> > service in the backend. And the service should receive the request as
>> > /xyz
>> > How do we annotate this in the ingress resource?
>> >
>> > Have configured the ingress resource as shown:
>> >
>> >
>> > apiVersion: extensions/v1beta1
>> > kind: Ingress
>> > metadata:
>> >   annotations:
>> >     kubernetes.io/ingress.class: traefik
>> >   name: test-ingress
>> >   namespace: default
>> >
>> > spec:
>> >   rules:
>> >   - http:
>> >       paths:
>> >       - backend:
>> >           serviceName: test
>> >           servicePort: 6006
>> >         path: /test
>> > status:
>> >   loadBalancer: {}
>> >
>> > But the problem is /test/data gets redirected as /, but we
>> want
>> > it to be redirected as /data
>> > Any idea how to annotate this?
>> >
>>



Re: [kubernetes-users] Best practice for running variants of k8s services?

2018-04-27 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
Does this head in the direction you want?

https://github.com/kubernetes/kubectl/tree/master/cmd/kustomize

On Fri, Apr 27, 2018 at 10:52 PM David Rosenstrauch 
wrote:

> We've been using Kubernetes to get a dev version of our environment up
> and running, and so far the experience has been great - nearly a dozen
> services up and running, and Kubernetes has made the whole process very
> straight-forward.
>
> However, we're now looking at moving this implementation towards a
> production release and things are starting to get a bit more
> complicated.  Specifically, we're envisioning that we're going to wind
> up with different "variants" of each service, that each differ only
> slightly from each other.  For example, for service X, we'll likely have:
>
> * a dev, qa, and prod version, each of which talks to different database
> backends, writes to different locations on a shared file system, etc.
>
> * in the prod version, we're foreseeing that there'll likely need to be
> a way to specify that a portion of the pods running service X need to be
> segregated to run on a specific set of hosts that are dedicated to
> "premium customers".
>
>
> Given how we're currently configuring k8s (just using raw yaml files)
> it's easy to see that the net result would wind up being a set of nearly
> identical yaml files for each service - i.e.:  serviceX-dev.yaml,
> serviceX-qa.yaml, serviceX-prod-premium.yaml,
> serviceX-prod-standard.yaml, etc.
>
> That's obviously not an efficient solution to this issue.  So I'm
> wondering:  what's the generally accepted kubernetes "best practice" for
> handling this type of situation?  (Or is there even one?)
>
> I did a quick search this afternoon, and came upon a number of community
> discussions about things like "templating" and "parameterization of yaml
> files", as well as some 3rd party tools that seem to implement this type
> of functionality (e.g., Helm).  But there's a lot out there, and I
> wasn't able to see any consensus.
>
> Is there any Kubernetes standard approach to handle this sort of
> situation?  (Or is there likely to be soon?)
>
> Thanks,
>
> DR
>



Re: [kubernetes-users] ClusterIP service not distributing requests evenlyamong pods in Google Kubernetes Engine

2018-04-13 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
What are you using for a client?  Is it by chance HTTP and written in Go?
Some client libraries, including Go's http package, aggressively reuse
connections.

If you try with something like exec netcat, I bet you see different results.

BTW, one might argue that if you depend on RR, you will eventually be
broken.  You would have to do that client side or in your own LB.
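
To test this the way Tim suggests, open a fresh TCP connection per request from inside the cluster. The service name and port are taken from the thread; the client pod name is hypothetical, and this obviously requires a running cluster:

```shell
# Each nc invocation opens a brand-new TCP connection, so iptables picks a
# backend independently each time -- unlike an HTTP client that keeps
# connections alive. "test-client" is a hypothetical pod with netcat installed.
for i in $(seq 1 20); do
  kubectl exec test-client -- sh -c 'echo | nc btm-calculator 3006'
done
```

If the responses now spread across pods, connection reuse in the client was the culprit.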

On Fri, Apr 13, 2018, 1:23 PM  wrote:

>
> I am running them against the service's cluster IP address (through its
> name, i.e. "btm-calculator" which translates to the cluster IP), and port
> 3006.
>
>
> On Friday, April 13, 2018 at 1:19:32 PM UTC-4, Rodrigo Campos wrote:
> > And how are you running the requests? Against which IP and which port?
> >
>



Re: [kubernetes-users] ClusterIP service not distributing requests evenlyamong pods in Google Kubernetes Engine

2018-04-13 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
Without a statistically significant load, this is random.  What you are
seeing satisfies that definition.

The real reason is that round-robin is a lie.  Each node in a cluster will
do its own RR from any number of clients.
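
The probabilities in the iptables dump quoted below are not as uneven as they look: the rules are tried in sequence, so per-rule probabilities of roughly 0.25, 1/3, 0.5, plus an unconditional final rule, multiply out to a uniform choice across four endpoints. A small sketch of that arithmetic:

```python
def effective_probabilities(rule_probs):
    """Probability that each endpoint is selected, given sequential
    iptables 'statistic mode random' rules; the last rule in the chain
    matches unconditionally."""
    remaining = 1.0  # probability that no earlier rule has matched yet
    out = []
    for p in rule_probs:
        out.append(remaining * p)
        remaining *= 1 - p
    out.append(remaining)  # the unconditional final rule catches the rest
    return out

probs = effective_probabilities([0.25, 1 / 3, 0.5])
print([round(p, 3) for p in probs])  # -> [0.25, 0.25, 0.25, 0.25]
```

So each of the four endpoints ends up with a 25% chance per connection; unevenness over a handful of requests is just sampling noise.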

On Fri, Apr 13, 2018, 10:51 AM  wrote:

> On Friday, April 13, 2018 at 10:39:38 AM UTC-4, Tim Hockin wrote:
> > The load is random, but the distribution should be approximately equal
> for non-trivial loads.  E.g. when we run tests for 1000 requests you can
> see it is close to equal.
> >
> >
> > How unequal is it?  Are you using session affinity?
> >
> >
> >
> > On Fri, Apr 13, 2018, 10:34 AM Cristian Cocheci 
> wrote:
> >
> >
> >
> > Thank you Sunil, but the LoadBalancer type is used for exposing the
> service externally, which I don't need. All I need is my service exposed
> inside the cluster.
> >
> >
> >
> >
> > On Fri, Apr 13, 2018 at 10:30 AM, Sunil Bhai  wrote:
> >
> >
> >
> > HI,
> >
> > Check this once :
> >
> >
> https://kubernetes.io/docs/tasks/access-application-cluster/load-balance-access-application-cluster/
> >
> >
> > https://kubernetes.io/docs/concepts/services-networking/service/
> >
> >
> >
> > Sent from Mail for Windows 10
> >
> >
> > From: cristian...@gmail.com
> > Sent: Friday, April 13, 2018 7:11 PM
> > To: Kubernetes user discussion and Q&A
> > Subject: [kubernetes-users] ClusterIP service not distributing requests
> evenlyamong pods in Google Kubernetes Engine
> >
> >
> > I have a ClusterIP service in my cluster with 4 pods behind it. I
> noticed that requests to the service are not evenly distributed among pods.
> After further reading I learned that the kube-proxy pod is responsible for
> setting up the iptables rules that forward requests to the pods. After
> logging into the kube-proxy pod and listing the nat table rules, this is
> what I got:
> >
> Chain KUBE-SVC-4F4JXO37LX4IKRUC (1 references)
> target                     prot opt source     destination
> KUBE-SEP-6X4IVU3LDAAZJUPD  all  --  0.0.0.0/0  0.0.0.0/0  /* default/btm-calculator: */ statistic mode random probability 0.250
> KUBE-SEP-TXRPWWIIUWW3MNFH  all  --  0.0.0.0/0  0.0.0.0/0  /* default/btm-calculator: */ statistic mode random probability 0.282
> KUBE-SEP-HW6SF2LJM4S7X5ZN  all  --  0.0.0.0/0  0.0.0.0/0  /* default/btm-calculator: */ statistic mode random probability 0.500
> KUBE-SEP-TTJKD52QZSH2OH4O  all  --  0.0.0.0/0  0.0.0.0/0  /* default/btm-calculator: */
> >
> > The comments seem to suggest that the load is balanced according to the
> statistic mode random probability with an uneven probability distribution. Is
> this how it's supposed to work? Every piece of documentation that I read
> about load balancing by a ClusterIP service indicates that it should be
> round robin. Obviously this is not the case here.
> > Is there a way to set a ClusterIP to perform round robin load balancing?
> >
> > Thank you,
> > Cristian
> >
>
>
> I am not using session affinity, and I am not sending a statistically
> significant number of requests. In my particular use case I only need to
> send a number of requests of 100 or less. I also have the problem that I
> mentioned above, if I send 20 requests in a loop, they ALL go to the same
> pod. If I wait a while and send another group of 20 requests, they MIGHT go
> to a different pod, but they all go to the same pod (even if different than
> the first one). This is a big issue for me, since my requests are actually
> heavy calculations, and I was hoping to use this mechanism as a way of
> parallelizing my computations.
>

Re: [kubernetes-users] ClusterIP service not distributing requests evenlyamong pods in Google Kubernetes Engine

2018-04-13 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
The load is random, but the distribution should be approximately equal for
non-trivial loads.  E.g. when we run tests for 1000 requests you can see it
is close to equal.

How unequal is it?  Are you using session affinity?

On Fri, Apr 13, 2018, 10:34 AM Cristian Cocheci 
wrote:

>
> Thank you Sunil, but the LoadBalancer type is used for exposing the
> service externally, which I don't need. All I need is my service exposed
> inside the cluster.
>
>
> On Fri, Apr 13, 2018 at 10:30 AM, Sunil Bhai 
> wrote:
>
>> HI,
>>
>>
>>
>> Check this once :
>>
>>
>>
>>
>> https://kubernetes.io/docs/tasks/access-application-cluster/load-balance-access-application-cluster/
>>
>>
>>
>>
>>
>> https://kubernetes.io/docs/concepts/services-networking/service/
>>
>>
>>
>>
>>
>>
>>
>> Sent from Mail for Windows 10
>>
>>
>>
>> *From: *cristian.coch...@gmail.com
>> *Sent: *Friday, April 13, 2018 7:11 PM
>> *To: *Kubernetes user discussion and Q&A
>> 
>> *Subject: *[kubernetes-users] ClusterIP service not distributing
>> requests evenlyamong pods in Google Kubernetes Engine
>>
>>
>>
>>
>>
>> I have a ClusterIP service in my cluster with 4 pods behind it. I noticed
>> that requests to the service are not evenly distributed among pods. After
>> further reading I learned that the kube-proxy pod is responsible for
>> setting up the iptables rules that forward requests to the pods. After
>> logging into the kube-proxy pod and listing the nat table rules, this is
>> what I got:
>>
>>
>>
>> Chain KUBE-SVC-4F4JXO37LX4IKRUC (1 references)
>> target                     prot opt source     destination
>> KUBE-SEP-6X4IVU3LDAAZJUPD  all  --  0.0.0.0/0  0.0.0.0/0  /* default/btm-calculator: */ statistic mode random probability 0.250
>> KUBE-SEP-TXRPWWIIUWW3MNFH  all  --  0.0.0.0/0  0.0.0.0/0  /* default/btm-calculator: */ statistic mode random probability 0.282
>> KUBE-SEP-HW6SF2LJM4S7X5ZN  all  --  0.0.0.0/0  0.0.0.0/0  /* default/btm-calculator: */ statistic mode random probability 0.500
>> KUBE-SEP-TTJKD52QZSH2OH4O  all  --  0.0.0.0/0  0.0.0.0/0  /* default/btm-calculator: */
>>
>>
>>
>> The comments seem to suggest that the load is balanced according to the
>> statistic mode random probability with an uneven probability distribution. Is
>> this how it's supposed to work? Every piece of documentation that I read
>> about load balancing by a ClusterIP service indicates that it should be
>> round robin. Obviously this is not the case here.
>>
>> Is there a way to set a ClusterIP to perform round robin load balancing?
>>
>>
>>
>> Thank you,
>>
>> Cristian
>>
>>
>>
>



Re: [kubernetes-users] A little nudge about DNS

2018-04-10 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
Upstream is upstream from kube-dns.  Pods won't see that.
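
To see this for yourself (the pod name is hypothetical, and the commands need a running cluster), compare a pod's resolv.conf with the kube-dns service IP:

```shell
# A pod's resolv.conf points at the kube-dns ClusterIP; the
# upstreamNameservers are consulted only by kube-dns itself.
kubectl exec my-pod -- cat /etc/resolv.conf          # "my-pod" is hypothetical
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'
```

The nameserver line in the pod should match the ClusterIP, with 1.1.1.1 and 8.8.8.8 nowhere in sight.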

On Tue, Apr 10, 2018 at 4:13 PM Marcio Garcia  wrote:

> Hi All,
>
>
> Maybe this is a dumb question, but I didn't find any answer for that.
>
>
> Recently I changed my kube-dns config with this:
>
> apiVersion: v1
> kind: ConfigMap
> metadata:
>   name: kube-dns
>   namespace: kube-system
> data:
>   upstreamNameservers: |
>     ["1.1.1.1", "8.8.8.8"]
>
>
> I applied all the configs OK, but when I log into any pod, I don't see
> this in the /etc/resolv.conf file.
> Is that how it's supposed to be? Or will I always see only the kube-dns IP
> in /etc/resolv.conf?
>
>
>
>
>
> Thanks in advance,
>
>
> Marcio
>



Re: [kubernetes-users] independent custom kubernetes - best solution to Publish services ?

2018-04-04 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
NodePorts are published on all nodes, so any one node going away is not a
problem, per se.  But NodePorts alone require you to use a specific node
IP, which is a problem.  NodePorts were designed to be hidden behind
load-balancers or proxies with stable VIPs, which is what it sounds like
you are doing.
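
A NodePort pinned to a fixed port is what you would point the Pacemaker/Keepalived VIP at. All names and port numbers here are illustrative:

```yaml
# Illustrative only: names and ports are invented.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080   # fixed, so the VIP's forwarding target never changes
```

Pinning nodePort (rather than letting Kubernetes allocate one) means the VIP configuration can stay static across service re-creations.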

On Wed, Apr 4, 2018 at 1:06 PM Gabriel Sousa 
wrote:

>
> https://kubernetes.io/docs/setup/independent/high-availability/
>
> "able to contact the NodePort service, from outside the cluster, by
> requesting :"
>
>
> If we publish a service using NodePort, we have access via a node/master
> IP. And if that node/master dies?
> We lose access to the service...
>
> With Pacemaker or Keepalived I will use the VIP (that is configured
> in Pacemaker/Keepalived).
>
> On Wednesday, 4 April 2018 17:47:33 UTC+1, Rodrigo Campos wrote:
>>
>> On Wed, Apr 04, 2018 at 09:33:28AM -0700, Gabriel Sousa wrote:
>> >
>> > Now I know what I have to do:
>> >
>> > Create a cluster with 3 masters, use Pacemaker with a virtual IP, and
>> > use NodePort to publish services.
>>
>> Really, can you please elaborate?
>>
>> >
>> > Can I have only 3 masters without workers?
>>
>> Yes
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] Can I launch Google Container Engine (GKE) in Private GCP network Subnet?

2018-03-30 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Private cluster is private by default.  You can not access the master from
the internet.  You can specifically change that with the master authorized
networks feature, or you can access it from within your VPC network.

On Thu, Mar 29, 2018 at 10:42 PM Vinita  wrote:

> Hi,
>
> I am trying to use private cluster. I am able to create private cluster
> but kubectl commands are not working. I am seeing connection time out error
> as below -
>
> kubectl run nginx --image=nginx --replicas=2
> error: failed to discover supported resources: Get
> https://104.154.200.217/api: dial tcp 104.154.200.217:443: i/o timeout
>
> Am I missing something? I am seeing this issue in my SDK as well as Cloud
> Shell. Thanks
>
>
> On Monday, March 26, 2018 at 1:31:46 PM UTC-7, manjo...@google.com wrote:
>>
>> On Thursday, March 8, 2018 at 4:56:09 AM UTC, Tim Hockin wrote:
>> > NB there are two issues here:
>> >
>> > 1) how to run a cluster where the VMs have no public IP, and the node
>> > <-> master comms are private IP.
>> >
>> > 2) how to run a cluster with long-term-stable egress IPs.
>> >
>> > They are not the same issue, despite being related :)
>> >
>> > Tim
>> >
>> >
>> > On Wed, Mar 7, 2018 at 2:27 AM,   wrote:
>> > > On Friday, October 13, 2017 at 9:05:14 PM UTC+5:30, Tim Hockin wrote:
>> > >> On Fri, Oct 13, 2017 at 3:17 AM,   wrote:
>> > >> > On Friday, July 28, 2017 at 11:52:27 AM UTC+5:30, Tim Hockin wrote:
>> > >> >> Private Google Access is not a private subnet.  That simply
>> allows your VMs to access google service without a public IP.  You still
>> have to make VMs without a public IP, which GKE does not support yet.
>> > >> >
>> > >> > Are there any near plan to have GKE working in Private network ? I
>> don't want to expose my containers to public IPs
>> > >>
>> > >> We are evaluating how best to support this.  In the mean time, it's
>> > >> important to note that none of your containers are exposed by
>> default,
>> > >> they do not have external IPs, and with the exception of the nodes'
>> > >> SSH port, all the default GCP firewalls default to "closed".  The
>> only
>> > >> "public" traffic required is GKE masters <-> nodes, and that is only
>> > >> "public" in name.  The traffic stays within Google's network.
>> > >>
>> > >> Tim
>> > >
>> > > I would like to give this thread a bump and love to know if there is
>> any update.
>> > > It is not uncommon to allow access to a service by whitelisting the
>> public ip. Each kubernetes node having its own public ip makes a mess.
>> Right now, only solution seems to be running a NAT instance[1]. GCP doesn't
>> provide NAT gateway as service either, so one would have to deal with
>> scaling and high availability themselves.
>> > >
>> > >
>> > > [1]
>> https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
>> > >
>>
>> Hi,
>>
>> GKE now supports private clusters :-)
>>
>> https://cloudplatform.googleblog.com/2018/03/kubernetes-engine-private-clusters-now.html
>>
>> Hope that helps!
>>


Re: [kubernetes-users] Understanding Google Pricing plan

2018-03-30 Thread 'Tim Hockin' via Kubernetes user discussion and Q
It all depends on your needs for availability and performance.  "a few
containers" can usually fit on a single node.  You can run a 1-node, 1-core
GKE "cluster" for the cost of the VM (< $30/month) + any additional
resources you use.

On Fri, Mar 9, 2018 at 10:00 AM  wrote:

> Sorry if this is a very basic question but my background is not in web
> development. I'd like to deploy a very basic web app (say nginx + letsencrypt
> + my backend) and I thought about using GCP instead of rolling my own
> CoreOS/Atomic instance. However I'm unable to make sense out of pricing
> plan and my estimates range from $7 (cheaper than alternatives) to $2000
> (!).
>
> How to estimate running a few docker containers in GCP which in total
> would take <500M and would be idle most of the time?
>


Re: [kubernetes-users] service with host network

2018-03-30 Thread 'Tim Hockin' via Kubernetes user discussion and Q
On Fri, Mar 30, 2018 at 7:46 AM  wrote:

> - the ability to have one IP per pod?
> - the ability to use same listening port on each container?

You also said "for performance cost I rather also to bind to physical
port".  If you are binding to the physical port, you can't use the same
listening port, anyway.  At that point you DO have an IP per pod, it's just
the node's IP.

Maybe I am misunderstanding...
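
For what it's worth, binding a single pod port onto the node without
hostNetwork can be done per-port with hostPort: the pod keeps its own IP
and the normal k8s networking, but that one port is also bound on the
node's IP.  A sketch -- the name, image, and ports are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: udp-worker              # illustrative name
spec:
  containers:
  - name: app
    image: example/app:latest   # placeholder image
    ports:
    - containerPort: 5353
      hostPort: 5353            # also bound on the node's IP
      protocol: UDP
```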



Re: [kubernetes-users] Load balancer drops backend while leaving frontend connected

2018-03-29 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Which environment and which Ingress controller?

On Thu, Mar 29, 2018 at 8:42 PM Tyler Johnson 
wrote:

> Is it possible that an HTTP load balancer (auto-configured as part of an
> Ingress) could occasionally drop backend connections while leaving the
> frontend connected?
>
> I'm running a websocket backend service (the backend-service timeout is
> high) and on very rare occasions I'll see the service pod log that the
> client dropped connection, while on the client side the HTTP connection is
> still ESTABLISHED. So I'm guessing it must be the LB.
>
> Is there a recommended way to troubleshoot the LB?
>
> Any other potential scenarios that could cause this problem?
>


Re: [kubernetes-users] How to allow firewall for containers.

2018-03-29 Thread 'Tim Hockin' via Kubernetes user discussion and Q
The normal answer is 10.0.0.0/8, and if you need more 192.168.0.0/16 and
172.16.0.0/12

On Thu, Mar 29, 2018 at 1:33 AM Immadi Ramalingeswararao <
immadi_ramalingeswara...@papajohns.com> wrote:

> Hi , I have my jenkins slaves running on gke dynamically on port 5. If
> I don't allow 0.0.0.0 to use port 5 jobs are getting suspended and I
> need to allow those containers to access my nexus server which is running
> on port 8080 on a different instance but same network. In firewall I have
> to allow those containers to access nexus-port 8080. But I don't want to
> keep 0.0.0.0 in source IP ranges. What is the IP range that I should allow
> to make these work. I tried Internal IPs, Cluster EndPoint in Source IP and
> targets I allowed all instances in the network. It is not working as
> expected. I need some help.
>


Re: [kubernetes-users] service with host network

2018-03-29 Thread 'Tim Hockin' via Kubernetes user discussion and Q
What networking features do you lose?

On Thu, Mar 29, 2018, 8:59 AM  wrote:

> Hi
>
> I'd like to setup my pods to have two network, the first is the default
> k8s network and the second one the host (node) network.
>
> The reason is that I need to bind to range of UDP ports, and also for
> performance cost I rather also to bind to physical port.
>
> I don't want to use the hostNetwork: true, since i'd lose the networking
> features of k8s, and won't be able to load balance the actual service.
>
> Is this possible to define the two networks, is there an example for that?
>
> Thank you
> Guy.
>


Re: [kubernetes-users] Network Policy to limit open connections per pod

2018-03-28 Thread 'Tim Hockin' via Kubernetes user discussion and Q
The simple answer is to change the limit.  The more robust answer would be
to make the limit more dynamic, but that can fail at runtime if, for example,
kernel memory is fragmented.  Also I am not sure that tunable can be
live-adjusted.

:(

We have ideas about how to be more frugal with conntrack records, but have
not had anyone follow up on that work.  So much to do.
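
For anyone looking for the knob: kube-proxy's configuration exposes
conntrack sizing, from which it derives nf_conntrack_max on each node at
startup.  A sketch of the relevant fragment -- the values here are
illustrative, not recommendations:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
conntrack:
  maxPerCore: 131072   # nf_conntrack_max is roughly maxPerCore * CPU cores
  min: 524288          # floor applied regardless of core count
```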

On Wed, Mar 28, 2018, 8:44 AM Rodrigo Campos  wrote:

> Just curious, but why not change the conntrack limit?
>
> On Wednesday, March 28, 2018,  wrote:
>
>> Is there anything similar to a network policy that limits x open
>> connections per pod?
>>
>> During a 100k TPS load test, a subset of pods had errors connecting to a
>> downstream service and we maxed out the nf_conntrack table (500k) which
>> affected the rest of the pods on each node that had this issue - which
>> happened to be 55% of the cluster.
>>
>> Besides handling this at the application level, I wanted to protect the
>> cluster as a whole so that not one deployment can affect the entire cluster
>> in this manner.
>>
>> Thanks for any help.
>>
>> -Jonathan
>>


Re: [kubernetes-users] Username of non-root UID

2018-03-15 Thread 'Tim Hockin' via Kubernetes user discussion and Q
You can't.  It has to be in the /etc/passwd in the image.  I think this is
an area where we could improve the UX, but I am not sure what the right answer
is.  This is no different than raw Docker, as far as I know.
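
For context, the numeric UID is what the pod spec controls; any username
shown inside the container comes only from the image's /etc/passwd.  A
sketch -- the name, UID, and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo
spec:
  securityContext:
    runAsUser: 1000             # numeric UID only; no username mapping here
  containers:
  - name: app
    image: example/app:latest   # placeholder; add the user to /etc/passwd
                                # in the image if tools need a name
```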


On Wed, Mar 14, 2018 at 10:00 PM  wrote:

> how to specify a name for the non-root UID in yaml?
>


Re: [kubernetes-users] How the communication happens in between pods

2018-03-10 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Point of clarity - not necessarily L2.  Flat L3 space is closer.

On Fri, Mar 9, 2018, 4:46 PM Igor Cicimov 
wrote:

> In Kubernetes ALL pods have access to each other by default; they reside in
> a flat L2 LAN space.
>


Re: [kubernetes-users] Unique DNS records for every pod in a autoscaled deployment

2018-03-01 Thread 'Tim Hockin' via Kubernetes user discussion and Q
That's such a broken assumption.

StatefulSet is the only primitive that satisfies this condition for now.

On Thu, Mar 1, 2018 at 1:48 PM,   wrote:
> 1. Can't change the apparent hostname of the worker to be either an IP/ 
> dash-separated IP worker DNS, as Airflow only supports a direct getfqdn call 
> in our version.
>
> no - hostname detected on the pod has to be resolvable on other pods.
>
>
> On Thursday, 1 March 2018 21:43:53 UTC, Tim Hockin  wrote:
>> Does it have to be DNS?  Are unique IPs sufficient?
>>
>> On Thu, Mar 1, 2018 at 10:15 AM,   wrote:
>> > I'm using Apache Airflow, which uses a scale out worker model.
>> >
>> > The workers run jobs, and the job logs are collected from the workers via 
>> > a http call from a central server. These pods definitely do have specific 
>> > identity, but they are not important individually in the way that 
>> > StatefulSet is designed for.
>> >
>> > I've ruled out the following methods.
>> >
>> > 1. Can't change the apparent hostname of the worker to be either an IP/ 
>> > dash-separated IP worker DNS, as Airflow only supports a direct getfqdn 
>> > call in our version.
>> > 2. Don't want to change the hostname on the pod OS on startup, as that 
>> > would entail the pod running as root, even with a privilege downgrade 
>> > later. These pods run code outside of our direct control.
>> > 3. The stateful set method is not ideal, because it makes autoscaling 
>> > awkward.
>> >
>> > Eventually, this whole problem may go away, with a native K8s scheduler in 
>> > airflow - but I'm not in control of Airflow's priorities and release 
>> > cycles.
>> >
>> > To be clear, I already have a mature helm-packaged airflow installation 
>> > that does what I need. I'm just looking to support a feature, and that 
>> > feature requires Pods contactable via DNS - with a minimum tradeoff in 
>> > other areas.
>> >
>> > I don't think there is an acceptable trade-off for Pod DNS in my case, but 
>> > I thought I'd ask.
>> >
>> > thanks
>> >
>> > James M
>> >
>> > On Thursday, 1 March 2018 16:19:54 UTC, Tim Hockin  wrote:
>> >> The short answer is that you are ascribing identity to pods that don't
>> >> really have any.  They are literally called "replicas".  If you need
>> >> identity, you really sort of want StatefulSet.  If that doesn't work,
>> >> it would be good to understand more concretely what you're trying to
>> >> achieve.
>> >>
>> >> On Thu, Mar 1, 2018 at 1:22 AM,   wrote:
>> >> > Hi list,
>> >> >
>> >> > I'm following this guide: 
>> >> > https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods-hostname-and-subdomain-fields
>> >> >
>> >> > I wish to have each pod in a deployment have a unique hostname, which 
>> >> > allows another pod to contact each of the autoscaled pods by hostname.
>> >> >
>> >> > However, although the guide makes sense for individual pods, it does 
>> >> > not work for deployments, as due to the lack of parameterisation each 
>> >> > pod hostname set would be the same.
>> >> >
>> >> > Can anyone think of a way around this?
>> >> >
>> >> > I've considered StatefulSet, but the lifecycle of the pods doesn't 
>> >> > really fit this model.
>> >> >
>> >> > thanks
>> >> >
>> >> > James M
>> >> >

Re: [kubernetes-users] Unique DNS records for every pod in a autoscaled deployment

2018-03-01 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Does it have to be DNS?  Are unique IPs sufficient?

On Thu, Mar 1, 2018 at 10:15 AM,   wrote:
> I'm using Apache Airflow, which uses a scale out worker model.
>
> The workers run jobs, and the job logs are collected from the workers via a 
> http call from a central server. These pods definitely do have specific 
> identity, but they are not important individually in the way that StatefulSet 
> is designed for.
>
> I've ruled out the following methods.
>
> 1. Can't change the apparent hostname of the worker to be either an IP/ 
> dash-separated IP worker DNS, as Airflow only supports a direct getfqdn call 
> in our version.
> 2. Don't want to change the hostname on the pod OS on startup, as that would 
> entail the pod running as root, even with a privilege downgrade later. These 
> pods run code outside of our direct control.
> 3. The stateful set method is not ideal, because it makes autoscaling awkward.
>
> Eventually, this whole problem may go away, with a native K8s scheduler in 
> airflow - but I'm not in control of Airflow's priorities and release cycles.
>
> To be clear, I already have a mature helm-packaged airflow installation that 
> does what I need. I'm just looking to support a feature, and that feature 
> requires Pods contactable via DNS - with a minimum tradeoff in other areas.
>
> I don't think there is an acceptable trade-off for Pod DNS in my case, but I 
> thought I'd ask.
>
> thanks
>
> James M
>
> On Thursday, 1 March 2018 16:19:54 UTC, Tim Hockin  wrote:
>> The short answer is that you are ascribing identity to pods that don't
>> really have any.  They are literally called "replicas".  If you need
>> identity, you really sort of want StatefulSet.  If that doesn't work,
>> it would be good to understand more concretely what you're trying to
>> achieve.
>>
>> On Thu, Mar 1, 2018 at 1:22 AM,   wrote:
>> > Hi list,
>> >
>> > I'm following this guide: 
>> > https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods-hostname-and-subdomain-fields
>> >
>> > I wish to have each pod in a deployment have a unique hostname, which 
>> > allows another pod to contact each of the autoscaled pods by hostname.
>> >
>> > However, although the guide makes sense for individual pods, it does not 
>> > work for deployments, as due to the lack of parameterisation each pod 
>> > hostname set would be the same.
>> >
>> > Can anyone think of a way around this?
>> >
>> > I've considered StatefulSet, but the lifecycle of the pods doesn't really 
>> > fit this model.
>> >
>> > thanks
>> >
>> > James M
>> >


Re: [kubernetes-users] Unique DNS records for every pod in a autoscaled deployment

2018-03-01 Thread 'Tim Hockin' via Kubernetes user discussion and Q
The short answer is that you are ascribing identity to pods that don't
really have any.  They are literally called "replicas".  If you need
identity, you really sort of want StatefulSet.  If that doesn't work,
it would be good to understand more concretely what you're trying to
achieve.

On Thu, Mar 1, 2018 at 1:22 AM,   wrote:
> Hi list,
>
> I'm following this guide: 
> https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods-hostname-and-subdomain-fields
>
> I wish to have each pod in a deployment have a unique hostname, which allows 
> another pod to contact each of the autoscaled pods by hostname.
>
> However, although the guide makes sense for individual pods, it does not work 
> for deployments, as due to the lack of parameterisation each pod hostname set 
> would be the same.
>
> Can anyone think of a way around this?
>
> I've considered StatefulSet, but the lifecycle of the pods doesn't really fit 
> this model.
>
> thanks
>
> James M
>


Re: [kubernetes-users] What is kubernetes containers, nodes, services, and apps?

2018-02-22 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Some of the older presentations I have done were really introductory.
They might be a bit stale, but those fundamentals have not changed
much.

https://speakerdeck.com/thockin?page=2

On Sun, Feb 18, 2018 at 8:56 AM,   wrote:
> I'm trying to get my head around Kubernetes. I've been watching a few YouTube 
> videos on what Kubernetes is, and they mention things like containers, 
> nodes, services, and apps, but I still need to understand what those are in 
> relation to Kubernetes. Any explanation, documentation, and/or videos showing 
> what these are would be much appreciated. I assume Kubernetes is geared 
> towards enterprise websites versus personal or mom-and-pop business websites, 
> correct? When would I need to use Kubernetes?
>


Re: [kubernetes-users] Re: Is it possible to pool resources across hosts/nodes like VMware does

2018-02-15 Thread 'Tim Hockin' via Kubernetes user discussion and Q
I don't know VMWare either, but that seems disastrous from a
predictability point of view.

On Wed, Feb 14, 2018 at 8:02 PM, Warren Strange
 wrote:
>
> AFAIK you can not split a pod between more than one node.
>
> I know nothing about VMware, but I am guessing they can split VM processes
> across nodes, which is pretty much equivalent to what Kubernetes does with
> pods (VM process == a pod, roughly speaking).
>
>
>
> On Wednesday, February 14, 2018 at 8:04:30 PM UTC-7, chez wrote:
>>
>> Folks,
>> Looks like VMware with vsphere (and vcenter?) is able to allocate
>> resources (vcpu for instance) across hosts for a single VM ? Is this
>> possible with kubernetes for containers ?
>> Can kubernetes pool vcpu between multiple hosts/nodes for one container ?
>>
>> https://pubs.vmware.com/vsphere-4-esx-vcenter/index.jsp?topic=/com.vmware.vsphere.intro.doc_41/c_hosts_clusters_and_resource_pools.html
>>
>> I am really intrigued by this statement -
>> "You can dynamically change resource allocation policies. For example, at
>> year end, the workload on Accounting increases, which requires an
>> increase in the Accounting resource pool reserve from 4GHz of power to 6GHz.
>> You can make the change to the resource pool dynamically without shutting
>> down the associated virtual machines."
>>
>> Each physical host is 4Ghz, but this doc says it can pull 2Ghz out of the
>> second host. Is it because of ESXi ?
>>
>> thanks
>>



Re: [kubernetes-users] NAT for outgoing traffic

2018-02-14 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Currently the only way to get a static egress IP is to install your
own proxy VM(s) - either L7 or L4 (NAT).
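A minimal sketch of the L4 (NAT) variant, assuming a dedicated GCE VM that pod egress traffic is routed through; the interface name, pod CIDR, and routing setup below are placeholder assumptions, not a verified recipe:

```shell
# On the NAT VM (root required). Assumptions: a GCE route sends pod
# egress traffic to this VM, the external interface is eth0, and the
# cluster's pod range falls inside 10.0.0.0/8.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -o eth0 -j MASQUERADE
```

Reserve a static external IP for this VM; all pod egress then appears to AWS as that single address, which can be allowed in the Security Group.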

On Tue, Feb 13, 2018 at 1:17 PM,   wrote:
> Hi,
>
> I've got an RDS Database running on AWS, and I want to access it from 
> Kubernetes, running on GKE.
>
> My cluster is set up for auto-scaling.
>
> Is there a way I can set up NAT or similar to get a static IP address for 
> outgoing traffic from my pods, so I can allow access through my AWS Security 
> Group?
>
> I see it currently uses the IP address of the node, but as it's an 
> auto-scaling cluster, it's possible that the IP addresses will change.
>
> Thanks,
> Gary
>



Re: [kubernetes-users] kubectl proxy vs kube-proxy is confusing to new users?

2018-02-11 Thread 'Tim Hockin' via Kubernetes user discussion and Q
kube-proxy should be renamed, in truth, but that isn't happening in
the near term.
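To make the distinction concrete, a short session sketch (the service name `my-svc` is a placeholder): `kubectl proxy` only fronts the API server, although the API server can in turn proxy onward to a Service via the `/proxy` subresource.

```shell
# Start a local gateway to the API server only.
kubectl proxy --port=8001 &

# This talks to the API server:
curl http://localhost:8001/api/v1/namespaces/default/services

# The API server proxies onward to a Service (still not kube-proxy's job):
curl http://localhost:8001/api/v1/namespaces/default/services/my-svc:80/proxy/
```

kube-proxy, by contrast, runs on every node and implements Service virtual IPs for in-cluster traffic; the two share a name but not a role.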

On Sun, Feb 11, 2018 at 1:54 PM, Scalefastr  wrote:
> kubectl proxy is just for the API server.
>
> But kube-proxy is for services defined in the cluster and available where
> the proxy is running.
>
> I think this is kind of confusing?  Is it just me?
>
> Maybe kubectl proxy could/should be renamed to kubectl api-server-proxy
>



Re: [kubernetes-users] How hostNetwork : true works with K8s Internal Services?

2018-02-05 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Kubernetes does not demand an overlay, and most of the overlays used
for kube employ some form of node gateway to allow packets to cross
between planes.
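For reference, a minimal manifest matching the setup described in the question (names are placeholders; the image is borrowed from another thread in this digest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: host-net-demo
spec:
  selector:
    matchLabels:
      app: host-net-demo
  template:
    metadata:
      labels:
        app: host-net-demo
    spec:
      hostNetwork: true                   # bind directly to the node's network
      dnsPolicy: ClusterFirstWithHostNet  # still resolve cluster Services by name
      containers:
      - name: demo
        image: gcr.io/google-samples/node-hello:1.0
```

With this, the pod can both bind host ports and reach ClusterIP Services, since the node itself can route to Service VIPs.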

On Sat, Feb 3, 2018 at 12:59 PM, Chase  wrote:
> Hello - I am trying to understand how "hostNetwork: true" works with
> internal pod communication.  For example, if I create a daemonSet with
> hostNetwork : true and ClusterFirstWithHostNet should this pod be able to :
>
> 1.  Bind to the host network
> 2.  Communicate to services in the K8s network.
>
> The reason I ask is that in Docker, if you bind to the host network, you
> cannot communicate over the overlay network to other services.  From my
> reading, it seems hostNetwork: true should work to allow both host and
> internal K8s communication.  Any explanation here would be great.
>
> Thanks
> Chase
>



Re: [kubernetes-users] Re: pod crashes when securityContext used.

2018-02-02 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Thanks for the followup!

On Fri, Feb 2, 2018 at 3:56 PM, R Melton  wrote:
>
> I later went back and created a new image file (on docker) and reran the
> runAsUser (and fsGroup) yaml file and it worked correctly.
>
> On Friday, February 2, 2018 at 11:52:07 AM UTC-6, R Melton wrote:
>>
>> using kubectl v1.9 on client and server.
>> ubuntu 16.04 server on GCP.
>>
>> I was trying to follow the demo listed on
>> https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
>> which assigns a security context to a pod when it is created.
>> Pod yaml file is:
>>
>> apiVersion: v1
>> kind: Pod
>> metadata:
>>   name: security-context-demo
>> spec:
>>   securityContext:
>> runAsUser: 1000
>> fsGroup: 2000
>>   volumes:
>>   - name: sec-ctx-vol
>> emptyDir: {}
>>   containers:
>>   - name: sec-ctx-demo
>> image: gcr.io/google-samples/node-hello:1.0
>> volumeMounts:
>> - name: sec-ctx-vol
>>   mountPath: /data/demo
>> securityContext:
>>   allowPrivilegeEscalation: false
>>
>> problem: pod always crashes and gets restarted many times:
>>
>> kubectl get pods
>> NAME   READY STATUS RESTARTS   AGE
>> busybox-855686df5d-2667x   1/1   Running1  1h
>> security-context-demo  0/1   CrashLoopBackOff   1  12s   << this is the problem.
>>
>> I tried removing each securityContext section. Crash remains when either
>> securityContext section is present in the yaml file.
>>
>> pod describe shows:
>>
>> Events:
>>   Type     Reason                 Age                From               Message
>>   ----     ------                 ---                ----               -------
>>   Normal   Scheduled  58sdefault-scheduler
>> Successfully assigned security-context-demo to worker-0
>>   Normal   SuccessfulMountVolume  58skubelet, worker-0
>> MountVolume.SetUp succeeded for volume "sec-ctx-vol"
>>   Normal   SuccessfulMountVolume  58skubelet, worker-0
>> MountVolume.SetUp succeeded for volume "default-token-ptfl5"
>>   Normal   Pulled 10s (x4 over 56s)  kubelet, worker-0
>> Container image "gcr.io/google-samples/node-hello:1.0" already present on
>> machine
>>   Normal   Created10s (x4 over 56s)  kubelet, worker-0
>> Created container
>>   Normal   Started10s (x4 over 56s)  kubelet, worker-0
>> Started container
>>   Warning  BackOff9s (x6 over 54s)   kubelet, worker-0
>> Back-off restarting failed container
>>
>>
>> Logs in pod say:
>>
>> return binding.open(pathModule._makeLong(path), stringToFlags(flags),
>> mode);
>>  ^
>>
>> Error: EACCES: permission denied, open '/server.js'
>> at Error (native)
>> at Object.fs.openSync (fs.js:549:18)
>> at Object.fs.readFileSync (fs.js:397:15)
>> at Object.Module._extensions..js (module.js:415:20)
>> at Module.load (module.js:343:32)
>> at Function.Module._load (module.js:300:12)
>> at Function.Module.runMain (module.js:441:10)
>> at startup (node.js:139:18)
>> at node.js:968:3
>>
>>
>> If I remove both securityContext sections, pod runs normally.
>>
>> So does the runAsUser function work or not?
>>
>> How to specify the securityContext and avoid the crash?
>>
>>
>>
>>
>>
>>
>>



Re: [kubernetes-users] pod crashes when securityContext used.

2018-02-02 Thread 'Tim Hockin' via Kubernetes user discussion and Q
It looks like that file is not readable by a non-root user.  You're
volunteering to lower your privileges, but you need to account for
that in the image.  If this is a custom image, chmod ugo+r that file?
If it is a pre-built image, yell at whoever built it.
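For a custom image, the fix can be a single layer. A sketch, assuming the image from this thread (the `/server.js` path comes from the error log quoted below, and the resulting image name is up to you):

```dockerfile
# Hypothetical child image that makes /server.js readable by the
# non-root runAsUser (1000) set in the pod's securityContext.
FROM gcr.io/google-samples/node-hello:1.0
RUN chmod ugo+r /server.js
```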

On Fri, Feb 2, 2018 at 9:52 AM, R Melton  wrote:
> using kubectl v1.9 on client and server.
> ubuntu 16.04 server on GCP.
>
> I was trying to follow the demo listed on
> https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
> which assigns a security context to a pod when it is created.
> Pod yaml file is:
>
> apiVersion: v1
> kind: Pod
> metadata:
>   name: security-context-demo
> spec:
>   securityContext:
> runAsUser: 1000
> fsGroup: 2000
>   volumes:
>   - name: sec-ctx-vol
> emptyDir: {}
>   containers:
>   - name: sec-ctx-demo
> image: gcr.io/google-samples/node-hello:1.0
> volumeMounts:
> - name: sec-ctx-vol
>   mountPath: /data/demo
> securityContext:
>   allowPrivilegeEscalation: false
>
> problem: pod always crashes and gets restarted many times:
>
> kubectl get pods
> NAME   READY STATUS RESTARTS   AGE
> busybox-855686df5d-2667x   1/1   Running1  1h
> security-context-demo  0/1   CrashLoopBackOff   1  12s   <<
> this is the problem.
>
> I tried removing each securityContext section. Crash remains when either
> securityContext section is present in the yaml file.
>
> pod describe shows:
>
> Events:
>   Type     Reason                 Age                From               Message
>   ----     ------                 ---                ----               -------
>   Normal   Scheduled  58sdefault-scheduler
> Successfully assigned security-context-demo to worker-0
>   Normal   SuccessfulMountVolume  58skubelet, worker-0
> MountVolume.SetUp succeeded for volume "sec-ctx-vol"
>   Normal   SuccessfulMountVolume  58skubelet, worker-0
> MountVolume.SetUp succeeded for volume "default-token-ptfl5"
>   Normal   Pulled 10s (x4 over 56s)  kubelet, worker-0
> Container image "gcr.io/google-samples/node-hello:1.0" already present on
> machine
>   Normal   Created10s (x4 over 56s)  kubelet, worker-0
> Created container
>   Normal   Started10s (x4 over 56s)  kubelet, worker-0
> Started container
>   Warning  BackOff9s (x6 over 54s)   kubelet, worker-0
> Back-off restarting failed container
>
>
> Logs in pod say:
>
> return binding.open(pathModule._makeLong(path), stringToFlags(flags), mode);
>  ^
>
> Error: EACCES: permission denied, open '/server.js'
> at Error (native)
> at Object.fs.openSync (fs.js:549:18)
> at Object.fs.readFileSync (fs.js:397:15)
> at Object.Module._extensions..js (module.js:415:20)
> at Module.load (module.js:343:32)
> at Function.Module._load (module.js:300:12)
> at Function.Module.runMain (module.js:441:10)
> at startup (node.js:139:18)
> at node.js:968:3
>
>
> If I remove both securityContext sections, pod runs normally.
>
> So does the runAsUser function work or not?
>
> How to specify the securityContext and avoid the crash?
>
>
>
>
>
>
>



Re: [kubernetes-users] Can you route external traffic to a pod without using a Google Cloud Loadbalancer or routing directly to the Node?

2018-01-31 Thread 'Tim Hockin' via Kubernetes user discussion and Q
On Jan 31, 2018 12:03 PM,  wrote:

Hi guys,

I was wondering if there is another way to route external traffic to a Pod.
So I know that you can use a Kubernetes Service of type "LoadBalancer"
which on GKE will automatically create a Google Cloud Loadbalancer for you
(as described here https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer). However having a Google Cloud Loadbalancer is
complete overkill for my small use case and also relatively expensive.

Furthermore I've seen solutions online where people would use externalIPs
on the service and then use the external IP of the Node itself to access
the Pod (see for example here https://serverfault.com/questions/801189/expose-port-80-and-443-on-google-container-engine-without-load-balancer).
However, since your container can be assigned to any Node, this solution is
not really suitable, as with each new deployment you have to look up the IP
of the current Node.


You have identified the major issues.  Add to that the fact that the set of
IPs assigned to all your VMs can change as nodes come and go.


Isn't there a way, to just reserve an external IP via Google Cloud and then
"attach" a Kubernetes Service to it?


That is what a Service type=LoadBalancer is doing.  Pretty much literally,
though the details are more involved.
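Concretely, you can reserve a regional static IP first and hand it to the Service via the `loadBalancerIP` field. A sketch with placeholder names and an example address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # reserved beforehand, e.g. with
                                 # `gcloud compute addresses create`
  ports:
  - port: 80
  selector:
    app: my-app
```

The forwarding rule GKE creates then uses your reserved address instead of an ephemeral one, though it does not remove the per-rule cost mentioned above.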





Re: [kubernetes-users] destroyed pod containers trigger

2018-01-31 Thread 'Tim Hockin' via Kubernetes user discussion and Q
kubectl logs ... --previous ?
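Spelled out (pod and container names are placeholders):

```shell
# Logs from the last terminated instance of a crashing container:
kubectl logs my-pod -c my-container --previous

# The recorded exit reason and code also survive in the pod status:
kubectl get pod my-pod \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```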

On Wed, Jan 31, 2018 at 6:38 AM, Colstuwjx  wrote:
>>
>>
>>>
>>> But what if we want to retrieve the detailed exit reason for exited
>>> containers? Is there any parameter to configure that?
>>
>>
>> Have you checked the terminationGracePeriod? I think it will do just that.
>
>
> I'm afraid not. I need to inspect the exited container, such as a container
> with a wrong configuration, and determine the root cause.
> After the `terminationGracePeriod`, the unhealthy container would be
> deleted, and we can't do things like `docker inspect `
> to investigate that case.
>



Re: [kubernetes-users] howto define isolated vlan definitions

2018-01-29 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Look into NetworkPolicy - it's not your traditional VLAN approach to
ACL, it's more dynamic and application-focused.
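A minimal sketch (labels are placeholders): allow only pods labeled app=frontend to reach app=backend on TCP 80, denying other ingress to the backend pods. Note this requires a network plugin that enforces NetworkPolicy, such as the Calico mentioned below.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend        # the policy applies to these pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # only these pods may connect
    ports:
    - protocol: TCP
      port: 80
```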

On Mon, Jan 29, 2018 at 10:27 PM, Oğuz Yarımtepe
 wrote:
> My current k8s structure is a 2-worker, one-master node deployment. I am
> testing it with NodePort services. Now we will install a bigger cluster: 3
> masters and more worker nodes. The problem is that using NodePort causes all
> the services to exit with the same worker node IPs. But we need VLAN
> definitions to isolate services or pods. Our switch has ACLs related to
> these VLANs, so some can access each other and some cannot. How can I define
> this structure in K8s?
>
> Any tip?
>
> I know Calico can be used, but this will be a software based approach. Any
> other method at network level?
>



Re: [kubernetes-users] How to keep full Kubernetes private?

2018-01-22 Thread 'Tim Hockin' via Kubernetes user discussion and Q
VPN is the normal answer - you are extending your private space into the cloud.

On Sun, Jan 21, 2018 at 8:39 AM, Lorenz Vanthillo
<lorenz.vanthi...@gmail.com> wrote:
> Thanks for your reply. Now I want to use GKE to create my Kubernetes
> cluster, so my master IP will be public. I read something here
> (https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks)
> about how we can secure this.
>
> For our cluster we disabled the GKE Ingress Controller, since that would
> create public HTTP(S) load balancers for us when creating Ingress resources.
> (like in the tutorial).
> We are now just creating deployments (pods, rs, ..), with services of the
> type ClusterIP. Those services will only be accessible from inside our
> cluster.
>
> Now we are searching for a good way to connect to this cluster. We were
> thinking about a VPN connection which will offer us an IP from inside this
> cluster. So we can access the services inside our browser etc. (it will look
> public for us, but it's private).
>
> Is there a way documentated on how we can set this up?
>
> On 20 January 2018 at 23:36, 'Tim Hockin' via Kubernetes user discussion and
> Q <kubernetes-users@googlegroups.com> wrote:
>>
>> Important - this is for kubernetes on GCE, not for GKE.  GKE masters use
>> public IP, even though the traffic never leaves Google.  We are looking at
>> how best to support true private GKE.
>>
>> On Jan 20, 2018 2:34 PM, "Tim Hockin" <thoc...@google.com> wrote:
>>>
>>> You should not need a public IP unless you access public things.  Stuff
>>> like GCR (inside Google) will be ok.  If you need to egress, you need a NAT
>>> (diy for now).
>>>
>>> On Jan 20, 2018 10:29 AM, "lvthillo" <lorenz.vanthi...@gmail.com> wrote:
>>>>
>>>> We want to start using Kubernetes on Google Cloud Platform. We want that
>>>> this Kubernetes (and all services, etc) are only accessible from inside our
>>>> network. It's for development purposes so we don't need public access. (But
>>>> we want internet access from inside our cluster, for example to download
>>>> dependencies in our Jenkins pod).
>>>>
>>>> We have some VPN service for users who are working remotely to connect
>>>> to our network.
>>>> Here I was reading about another solution to make the Kubernetes cluster
>>>> private:
>>>> https://engineering.bitnami.com/articles/creating-private-kubernetes-clusters-on-gke.html
>>>>
>>>> I'm searching for ideas/replies/opinions of people who have this
>>>> experience with it.
>>>>
>>
>
>



Re: [kubernetes-users] How to configure decentralized dns resolution like the way in docker `--dns`

2018-01-22 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Please be aware that there are semantic mismatches between how people
think about DNS resolution and how some DNS clients implement it.
Resolv.conf is somewhat under-specified in this regard.

Specifically, if you have two `nameserver` lines, which point to
servers with different information (e.g. one for kube and one for
corp), you are going to have a bad time eventually.  There's a
somewhat unstated assumption that all `nameserver` entries have
identical information, and any query can be satisfied by any server.
Some clients do not try them in order, like you might assume.

This is part of why we added stub domains at the DNS server - it
satisfies this constraint.
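Per the blog post linked below, a stub domain is configured on the kube-dns ConfigMap; the domain and nameserver IP here are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"corp.example": ["10.150.0.1"]}
```

Queries for *.corp.example are forwarded to the corp nameserver, while everything else follows the cluster default, so every `nameserver` entry a client sees stays internally consistent.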

On Sun, Jan 21, 2018 at 7:06 PM, 吴佳兴  wrote:
> Hi team,
>
> I have read the blog post:
> http://blog.kubernetes.io/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes.html,
> and learned to configure cluster dns policy with stubDomains and upstream
> nameservers.
>
> But it cannot meet my requirements, as I'd like to override each
> container's `resolv.conf` the way docker's `--dns` does, rather than
> point all containers' resolution at one centralized nameserver.
>
> Thanks.
>



Re: [kubernetes-users] How to keep full Kubernetes private?

2018-01-20 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Important - this is for kubernetes on GCE, not for GKE.  GKE masters use
public IP, even though the traffic never leaves Google.  We are looking at
how best to support true private GKE.

On Jan 20, 2018 2:34 PM, "Tim Hockin"  wrote:

> You should not need a public IP unless you access public things.  Stuff
> like GCR (inside Google) will be ok.  If you need to egress, you need a NAT
> (diy for now).
>
> On Jan 20, 2018 10:29 AM, "lvthillo"  wrote:
>
>> We want to start using Kubernetes on Google Cloud Platform. We want that
>> this Kubernetes (and all services, etc) are only accessible from inside our
>> network. It's for development purposes so we don't need public access. (But
>> we want internet access from inside our cluster, for example to download
>> dependencies in our Jenkins pod).
>>
>> We have some VPN service for users who are working remotely to connect to
>> our network.
>> Here I was reading about another solution to make the Kubernetes cluster
>> private: https://engineering.bitnami.com/articles/creating-private-kubernetes-clusters-on-gke.html
>>
>> I'm searching for ideas/replies/opinions of people who have this
>> experience with it.
>>
>



Re: [kubernetes-users] How to keep full Kubernetes private?

2018-01-20 Thread 'Tim Hockin' via Kubernetes user discussion and Q
You should not need a public IP unless you access public things.  Stuff
like GCR (inside Google) will be ok.  If you need to egress, you need a NAT
(diy for now).

On Jan 20, 2018 10:29 AM, "lvthillo"  wrote:

> We want to start using Kubernetes on Google Cloud Platform. We want that
> this Kubernetes (and all services, etc) are only accessible from inside our
> network. It's for development purposes so we don't need public access. (But
> we want internet access from inside our cluster, for example to download
> dependencies in our Jenkins pod).
>
> We have some VPN service for users who are working remotely to connect to
> our network.
> Here I was reading about another solution to make the Kubernetes cluster
> private: https://engineering.bitnami.com/articles/creating-private-kubernetes-clusters-on-gke.html
>
> I'm searching for ideas/replies/opinions of people who have this
> experience with it.
>
>



Re: [kubernetes-users] Can't access UDP port on load balancer in kubernetes on Google kubernetes engine

2018-01-09 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Make sure all firewalls are open?

I just tested it and it works:

```
$ kubectl run udp --image=ubuntu -- bash -c "while true; do sleep 10;
done"
deployment "udp" created

$ kubectl expose deployment udp --port=12345 --protocol=UDP
--type=LoadBalancer
service "udp" exposed
```

Then I got the IP from `get svc`.  I used `kubectl exec -ti` to exec into
my pod and run `nc -l -p 12345 -u` in one terminal and I sent bytes to it
via `netcat -u  12345`.
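
Note that the failing `telnet` check quoted below can't validate this: telnet speaks only TCP, so "Connection refused" against a UDP-only load balancer proves nothing. A UDP-side probe (OpenBSD netcat flag syntax; IP and port taken from the quoted post) would look like:

```shell
# Send one datagram and wait up to 1s. UDP has no connection handshake,
# so success must be judged by what the server-side listener receives.
echo "ping" | nc -u -w1 35.192.59.72 10001
```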

Tim

On Tue, Jan 9, 2018 at 12:03 PM, Tameem Iftikhar  wrote:

>
>
> I am trying to run a very simple UDP service in kubernetes on Google Cloud
> but am unable to access the port I am exposing to the internet. Here is the
> deployment and service file:
>
> Deployment.yaml
>
> apiVersion: extensions/v1beta1
> kind: Deployment
> metadata:
>   name: udp-server-deployment
> spec:
>   replicas: 2
>   template:
>     metadata:
>       labels:
>         name: udp-server
>     spec:
>       containers:
>       - name: udp-server
>         image: jpoon/udp-server
>         imagePullPolicy: Always
>         ports:
>         - containerPort: 10001
>           protocol: UDP
>
> Service.yaml:
>
> apiVersion: v1
> kind: Service
> metadata:
>   name: udp-server-service
>   labels:
>     app: udp-server
> spec:
>   type: LoadBalancer
>   ports:
>   - port: 10001
>     protocol: UDP
>   selector:
>     name: udp-server
>
> This creates the load balancer in Google Cloud with the correct port
> exposed, like so:
>
> [screenshot: load balancer with UDP port 10001 exposed]
>
> But when I try to access the port, it's inaccessible. I have tried a few
> variations in GCE to expose the UDP port, but none seem to work.
>
> ➜  udp-example telnet 35.192.59.72 10001
> Trying 35.192.59.72...
> telnet: connect to address 35.192.59.72: Connection refused
> telnet: Unable to connect to remote host
>


Re: [kubernetes-users] Is Kubernetes better as one cluster per subteam, or should the entire org run on a single cluster?

2018-01-06 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Kubernetes is designed for a smaller number of larger clusters.  What does
"stepping on toes" mean here?  Certainly container isolation is not perfect,
but with realistic resource requests it is pretty decent.

That said, many people do what is being suggested, and many are happy and
successful. The major downsides are that it reduces overall elasticity
(especially once priority and preemption are in place), limits overall
bin-packing opportunities, and perhaps most notably multiplies and spreads
the overhead (both literal compute resources and human admin effort).

I think everyone benefits from having a few deep experts available to help
with Kubernetes.  This is easier when you have fewer clusters.  E.g. Google
runs an incredible number of Borg machines with a very small SRE team.

On Jan 6, 2018 3:59 PM,  wrote:

> My manager is starting to look into moving us off Azure Web App into some
> kind of container management system, either k8s or service fabric (we're
> *mostly* a MS shop but not entirely).  I was talking with him yesterday and
> he mentioned his plan is that each of the teams (~5-10 devs each, generally
> one main web app and a few background jobs) in our billing group (~50 devs
> total) would run their own cluster.
>
> My naive understanding is that this somewhat defeats the primary purpose of
> k8s.  I was imagining that the entire billing group would have a single
> cluster, and the various teams would then not have to think about how to
> manage it; things would "just work".  My manager's perspective is that with
> a big shared cluster everyone would be stepping on each other's toes and it
> would become *more* difficult to manage rather than *less*.  Plus org
> structure is always fluid and teams get reorganized into other departments
> etc every so often, so that could be messy.  But neither of us really know.
>
> Anyone have experience or advice on things like this?
>


Re: [kubernetes-users] CRD for PODs with identity

2018-01-05 Thread 'Tim Hockin' via Kubernetes user discussion and Q
I think this is exactly the sort of thing that a custom deployment-like
operator is good for.  You have particular needs that are not easily
satisfied with existing constructs.  CRDs and controllers let you build
this, and figure out how you want it to work.

Later, maybe, you can solicit other users, and see if it satisfies them
too.  Or not.
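
For the "definite lifetime" part of the use case quoted below, a plain Pod already has a built-in mechanism: `activeDeadlineSeconds` terminates the Pod once the deadline passes, leaving the operator only the garbage collection. A minimal sketch (the pod name and image are illustrative, not from the original thread):

```shell
# activeDeadlineSeconds gives the Pod a hard lifetime; after it fires,
# the controller only needs to clean up the dead Pod and its CRD instance.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: scratch-space-demo      # illustrative name
spec:
  activeDeadlineSeconds: 3600   # hard one-hour lifetime
  containers:
  - name: sshd
    image: example/sshd-image   # placeholder image
EOF
```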

On Jan 5, 2018 11:09 AM, "Michele Bertasi" <
michele.bert...@brightcomputing.com> wrote:

> Hi everyone,
>
> I'm trying to implement an operator to manage a Custom Resource Definition
> for this proof-of-concept application:
> * a user creates an instance of my CRD
> * the operator creates a POD for that CRD
> * after a specified timeout, both the POD and the CRD disappear
>
> the use case is a scratch space for users, where a container with sshd is
> created, they can connect there and play (every POD gets a different secret
> mounted with different allowed SSH keys). Then the container is removed.
> The PODs will be exposed through a NodePort service or a L4 ingress (but
> that's not the point of my question).
>
> Of course I can create all the PODs myself through the operator, then
> handle any POD deletions, scaling up and down, etc.
> What I'm trying though is to not reinvent the wheel and try to reuse as
> much as possible existing constructs, so I was looking at
> ReplicationControllers for example. I could let a RC manage the number of
> replicas, and when I need a new POD, I just scale the number up. When I
> have to downscale it gets more tricky, because I have to delete a specific
> one (and not a random one). I also looked at StatefulSets but in that case
> downscaling deletes the POD with the highest ID. So I'm a bit stuck.
>
> What do you guys suggest? Is there anything I can reuse or I really need
> to manage the resources myself? Maybe there's also a similar CRD that I
> could reuse; the idea that a POD has a definite lifetime doesn't seem so
> crazy to me.
>
> Thanks
>


Re: [kubernetes-users] kube-proxy creating iptable rule for wrong interface

2018-01-02 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Hi Mike,

> service tokens can't come through to nodes because kubelet tries to talk to 
> the api server through the api's "advertised ip address", which defaults to 
> the default route, which is shunted.

This seems wrong.  Kubelet has a master address that is *NOT*
dependent on Services.  If that is not being used in all cases, it
seems to be a problem.

> Is there any reason why kube-proxy cannot have a mode (or default behavior) 
> to route default/kubernetes traffic to the same endpoint _it_ is connecting 
> to for api service

I'm not sure if/how this plays with things like HA masters, and it
makes the "specialness" of the kubernetes Service leak even further.
It seems strictly simpler to set the flag - that's what it was
intended for.
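
For reference, the flag in question lives on the API server (it is the `--advertise-address` flag mentioned in the quoted thread below); a sketch, with the address taken from that thread and all other flags elided:

```shell
# The DNAT target of the default/kubernetes Service comes from the
# apiserver's own Endpoints entry, which --advertise-address controls:
kube-apiserver --advertise-address=10.0.15.11 ...
```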


On Tue, Jan 2, 2018 at 2:00 PM,   wrote:
> This behavior puzzles me.  API service is definitely a unique service, but
> this issue can be tough to debug, and causes a problem if some traffic needs to
> be sent through a security device for monitoring.
>
> Let me explain:
>  - Assume there is no direct API connectivity between worker nodes and 
> control plane nodes.  All API access goes through a security device and/or 
> external load balancer (say ELB)
>  - One could tell kubelet to connect to api server through a load balancer / 
> security device - and kubelet will seem to function fine.
>  - kube-proxy can spin up and seem to be working fine
>  - service tokens can't come through to nodes because kubelet tries to talk 
> to the api server through the api's "advertised ip address", which defaults 
> to the default route, which is shunted.
>  - User can't figure out the issue; results in (╯°□°)╯︵ ┻━┻
>
> It happened to me, and the only fix for me was to set a valid advertise ip 
> address and allow traffic to flow to it, despite me setting --master on 
> kube-proxy
>
> Is there any reason why kube-proxy cannot have a mode (or default behavior) 
> to route default/kubernetes traffic to the same endpoint _it_ is connecting 
> to for api service?  Otherwise we leave people with no other option than to 
> have direct connectivity to api server node ports.
>
> Mike
>
>
> On Wednesday, May 31, 2017 at 12:38:53 PM UTC-4, reza@gmail.com wrote:
>> On Wednesday, May 31, 2017 at 11:04:38 AM UTC-5, Tim Hockin wrote:
>> > This being the kubernetes Service, the value is coming from Endpoints,
>> > which is being written by your apiserver.  By default, it chooses the
>> > interface with a default route.  If that is wrong, look at the
>> > `--advertise-address` flag.
>> >
>> > On Wed, May 31, 2017 at 8:33 AM,   wrote:
>> > >
>> > > Kubernetes version: 1.6.3
>> > >
>> > > I have following interfaces on my vagrant machine.
>> > >
>> > > enp0s3Link encap:Ethernet  HWaddr 08:00:27:ee:32:98
>> > >   inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
>> > >   ...
>> > >
>> > > enp0s8Link encap:Ethernet  HWaddr 08:00:27:88:a1:e8
>> > >   inet addr:10.0.15.11  Bcast:10.0.15.255  Mask:255.255.255.0
>> > >   ...
>> > >
>> > > I have deployed gcr.io/google_containers/kube-proxy-amd64:v1.6.3 on the 
>> > > cluster.
>> > >
>> > > vagrant@node1:/vagrant/kubeadm$ kubectl get po -n kube-system
>> > > NAME                            READY   STATUS    RESTARTS   AGE
>> > > etcd-node1                      1/1     Running   0          11m
>> > > kube-apiserver-node1            1/1     Running   0          10m
>> > > kube-controller-manager-node1   1/1     Running   0          11m
>> > > kube-proxy-95pq2                1/1     Running   0          1m
>> > > kube-scheduler-node1            1/1     Running   0          11m
>> > >
>> > >
>> > > vagrant@node1:/vagrant/kubeadm$ sudo iptables-save |grep 443
>> > >
>> > > -A KUBE-SEP-OGNOLD2JUSLFPOMZ -p tcp -m comment --comment 
>> > > "default/kubernetes:https" -m recent --set --name 
>> > > KUBE-SEP-OGNOLD2JUSLFPOMZ --mask 255.255.255.255 --rsource -m tcp -j 
>> > > DNAT --to-destination 10.0.2.15:6443
>> > >
>> > > -A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment 
>> > > "default/kubernetes:https cluster IP" -m tcp --dport 443 -j 
>> > > KUBE-SVC-NPX46M4PTMTKRN6Y
>> > >
>> > > -
>> > >
>> > > The problem is that the iptables rule says "DNAT --to-destination
>> > > 10.0.2.15:6443"; it should be "DNAT --to-destination
>> > > 10.0.15.11:6443" -- the enp0s8 interface.
>> > >
>> > > Is there any way I can force it to use 10.0.15.11 instead of 10.0.2.15?
>> > >
>> > > In Kubernetes 1.5.2, the iptables rule used 10.0.15.11.
>> > >
>> > > thanks in advance
>> > >
>> > > -Reza
>> > >

Re: [kubernetes-users] Kubernetes Volume storage questions

2018-01-02 Thread 'Tim Hockin' via Kubernetes user discussion and Q
The main difference is that EBS and things like that are fully
managed, and you should be able to assume some operational simplicity
(if their capabilities meet your needs).  If you need multi-writer,
for example, EBS will not suffice.  Clustered filesystems require YOU
to operate them (for now?), so they carry additional administrative cost.

On Tue, Jan 2, 2018 at 7:31 AM, DK  wrote:
> Thanks, what are the benefits (if any) to choosing GlusterFS/Ceph over
> VsphereVolume/AWSElasticBlockStore? For a database like Postgres.
>


Re: [kubernetes-users] Kubernetes service type for background app

2018-01-01 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Why do you need a Service at all?

On Jan 1, 2018 8:43 PM, "Mario Rodriguez"  wrote:

> Hi, I'm in the middle of creating a K8s app that doesn't expose any HTTP
> endpoints; it's just a background app that pulls messages from a message bus
> and takes some action based on the incoming message. No other apps will
> interact directly with this background app, only thru posting messages into
> the message bus.
>
> Scaling is a requirement, and we will most likely always need to run more than
> one replica.
>
>
> What is the recommended Service type in Kubernetes to handle this type of
> workload ?
>


Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-12-22 Thread 'Tim Hockin' via Kubernetes user discussion and Q
AFAIK we need Cloud NAT to become available, at which point we can use
it pretty much transparently.

On Wed, Dec 20, 2017 at 6:56 AM,   wrote:
> On Thursday, August 10, 2017 at 1:03:42 AM UTC-5, Tim Hockin wrote:
>> The GKE team has heard the desire for this and is looking at possible
>> ways to provide it.
>>
>> On Wed, Aug 9, 2017 at 3:56 PM,   wrote:
>> > On Friday, June 16, 2017 at 11:24:15 AM UTC-5, pa...@qwil.co wrote:
>> >> Yes, this is the right approach -- here's a detailed walk-through:
>> >>
>> >> https://github.com/johnlabarge/gke-nat-example
>> >>
>> >> On Friday, June 16, 2017 at 8:36:13 AM UTC-7, giorgio...@beinnova.it 
>> >> wrote:
>> >> > Hello, I've the same problem described there. I have a GKE cluster and 
>> >> > I need to connect to an external service. I find the NAT solution is 
>> >> > right for my needs, my cluster resizes automatically. @Paul Tiplady 
>> >> > have you config the external NAT? Can you share your experiences? I 
>> >> > tried following this guide 
>> >> > https://cloud.google.com/compute/docs/vpc/special-configurations#natgateway
>> >> >  but seems it doesn't work.
>> >> >
>> >> > Thanks,
>> >> > Giorgio
>> >> > Il giorno mercoledì 3 maggio 2017 22:08:50 UTC+2, Paul Tiplady ha 
>> >> > scritto:
>> >> > > Yes, my reply was more directed to Rodrigo. In my use-case I do 
>> >> > > resize clusters often (as part of the node upgrade process), so I 
>> >> > > want a solution that's going to handle that case automatically. The 
>> >> > > NAT Gateway approach appears to be the best (only?) option that 
>> >> > > handles all cases seamlessly at this point.
>> >> > >
>> >> > >
>> >> > > I don't know in which cases a VM could be destroyed, I'd also be 
>> >> > > interested in seeing an enumeration of those cases. I'm taking a 
>> >> > > conservative stance as the consequences of dropping traffic through 
>> >> > > changing source-IP is quite severe in my case, and because I want to 
>> >> > > keep the process for upgrading the cluster as simple as possible.  
>> >> > > From 
>> >> > > https://cloudplatform.googleblog.com/2015/03/Google-Compute-Engine-uses-Live-Migration-technology-to-service-infrastructure-without-application-downtime.html
>> >> > >  it sounds like VM termination should not be caused by planned 
>> >> > > maintenance, but I assume it could be caused by unexpected failures 
>> >> > > in the datacenter. It doesn't seem reckless to manually set the IPs 
>> >> > > as part of the upgrade process as you're suggesting.
>> >> > >
>> >> > >
>> >> > > On Wed, May 3, 2017 at 12:13 PM, Evan Jones  
>> >> > > wrote:
>> >> > >
>> >> > > Correct, but at least at the moment we aren't using auto-resizing, 
>> >> > > and I've never seen nodes get removed without us manually taking some 
>> >> > > action (e.g. upgrading Kubernetes releases or similar). Are there 
>> >> > > automated events that can delete a VM and remove it, without us 
>> >> > > having done something? Certainly I've observed machines rebooting, 
>> >> > > but that also preserves dedicated IPs. I can live with having to take 
>> >> > > some manual configuration action periodically, if we are changing 
>> >> > > something with our cluster, but I would like to know if there is 
>> >> > > something I've overlooked. Thanks!
>> >> > >
>> >> > >
>> >> > >
>> >> > >
>> >> > >
>> >> > >
>> >> > >
>> >> > > On Wed, May 3, 2017 at 12:20 PM, Paul Tiplady  wrote:
>> >> > >
>> >> > > The public IP is not stable in GKE. You can manually assign a static 
>> >> > > IP to a GKE node, but then if the node goes away (e.g. your cluster 
>> >> > > was resized) the IP will be detached, and you'll have to manually 
>> >> > > reassign. I'd guess this is also true on an AWS managed equivalent 
>> >> > > like CoreOS's CloudFormation scripts.
>> >> > >
>> >> > >
>> >> > > On Wed, May 3, 2017 at 8:52 AM, Evan Jones  
>> >> > > wrote:
>> >> > >
>> >> > > As Rodrigo described, we are using Container Engine. I haven't fully 
>> >> > > tested this yet, but my plan is to assign "dedicated IPs" to a set of 
>> >> > > nodes, probably in their own Node Pool as part of the cluster. Those 
>> >> > > are the IPs used by outbound connections from pods running those 
>> >> > > nodes, if I recalling correctly from a previous experiment. Then I 
>> >> > > will use Rodrigo's taint suggestion to schedule Pods on those nodes.
>> >> > >
>> >> > > If for whatever reason we need to remove those nodes from that pool, 
>> >> > > or delete and recreate them, we can move the dedicated IP and taints 
>> >> > > to new nodes, and the jobs should end up in the right place again.
>> >> > >
>> >> > >
>> >> > > In short: I'm pretty sure this is going to solve our problem.
>> >> > >
>> >> > >
>> >> > > Thanks!
>> >
>> > The approach of configuring a NAT works but it has 2 major drawbacks:
>> >
>> > 1. It creates a single point of failure (if the VM that runs the NAT 

Re: [kubernetes-users] why can i see the process that in a container in the host?

2017-12-19 Thread 'Tim Hockin' via Kubernetes user discussion and Q
That is what a container does.  PID namespaces, unlike most others, nest: the
host sits in the parent namespace, so it can see every containerized process
(under a different, host-side PID).
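
A quick way to see the nesting from both sides (the pod name is illustrative; this assumes a mysql pod is running on the node you are logged into):

```shell
# Host side: container processes appear in the host's process table,
# because the host is in the parent PID namespace.
pgrep -a mysqld                            # prints the host-side PID of mysqld

# Container side: the same process has a different, namespace-local PID
# (often 1 for the container's main process).
kubectl exec mysql-pod -- ps -o pid,comm   # hypothetical pod name
```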

On Dec 19, 2017 5:04 AM,  wrote:

> hi all,
>
> I'm confused: when I create a pod such as mysql, I can see the mysqld
> process on the host. Can anyone tell me why that happens?
> Thanks.
>


Re: [kubernetes-users] Should apps bind to 0.0.0.0 or 127.0.0.1 or pod_ip?

2017-12-15 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Well, binding to 127 addresses means nobody else can access you.
Binding to a specific IP is just not the "normal" thing to do in
network programming, in my experience.  Unless you know something
specific, 0 is the best option.  E.g. you might have more than one
network interface, and 0 is the only way to catch them all without
enumerating them.
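
A quick local illustration of the difference, assuming the OpenBSD variant of netcat (flag syntax varies between netcat implementations; the ports are arbitrary):

```shell
# Loopback-only listener: reachable via 127.0.0.1, invisible on the pod IP,
# so peers (and Services) cannot reach it -- but kubectl port-forward can.
nc -l 127.0.0.1 8080 &

# Wildcard listener: reachable on every interface the pod has, which is
# why it works both for port-forward and for pod-to-pod traffic.
nc -l 0.0.0.0 8081 &
```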

On Fri, Dec 15, 2017 at 4:56 PM,   wrote:
> Thanks for your reply.
>
> Not port-forwarding from inside. Just from host (using minikube). I happened 
> to write an app that binds to 127.0.0.1 and stumbled on this behavior 
> (inconsistency?) Is there somewhere you could point me to that talks about 0 
> being the normal way to go? Just trying to learn more about this...
>


Re: [kubernetes-users] Should apps bind to 0.0.0.0 or 127.0.0.1 or pod_ip?

2017-12-15 Thread 'Tim Hockin' via Kubernetes user discussion and Q
What are you doing with port-forward inside your pod?

Binding to 0 is the "normal" way to do things unless you have reason to do
otherwise.

On Fri, Dec 15, 2017 at 4:42 PM, Dietrich Schultz
 wrote:
> Just started exploring kubernetes, and ran into this. Haven't found any docs
> or clear statements of best practice. The only thing I found was this note
> in the container.v1 spec describing the port field:
>
>> Any port which is listening on the default "0.0.0.0" address inside a
>> container will be accessible from the network.
>
>
> While interesting, it doesn't quite answer my question. I've found that if
> my app binds to pod_ip then I can't use kubectl port-forward. If I bind to
> 127.0.0.1 then port-forward works, but I can't connect from other pods. Only
> binding to 0.0.0.0 seems to work for both cases. Is this intentional? Is
> binding to 0.0.0.0 considered a best practice or is kubectl deficient? Is
> this requirement/best practice documented somewhere?
>
>


Re: [kubernetes-users] Rolling restart of pods in deployment

2017-12-15 Thread 'Tim Hockin' via Kubernetes user discussion and Q
What I have seen several people do for this is to increment an env
var, or use a timestamp - something trivial that doesn't impact the
app, but forces a restart.  Updating an env var cannot be done
without a restart.
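
A sketch of the timestamp variant (the deployment name and annotation key are illustrative; any change to `.spec.template` triggers a rollout, which is exactly what a cron job needs here):

```shell
# Bumping a pod-template annotation is a trivial template change that
# forces the Deployment to roll all pods on its normal rolling-update path.
kubectl patch deployment my-deployment -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"restarted-at\":\"$(date +%s)\"}}}}}"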

On Fri, Dec 15, 2017 at 2:00 AM, Keshava Bharadwaj
 wrote:
> Hi,
>
> We have a simple deployment of 3 replicas.
> We have a requirement to have a kubernetes cron job, that would need to
> restart(rolling restart) the pods in the deployment.
>
> Use-case: our services in the deployment use certs, and we need the
> certificates to be auto-renewed before expiry, hence the cron would restart
> the pods before expiry. On startup, a container fetches fresh certificates.
>
> Is this possible with Deployments construct and API?
>
> From documentation,  -
>>
>>  A Deployment’s rollout is triggered if and only if the Deployment’s pod
>> template (that is, .spec.template) is changed, for example if the labels or
>> container images of the template are updated. Other updates, such as scaling
>> the Deployment, do not trigger a rollout.
>
>
> So I wanted to ask: is it possible to do a rolling restart of the pods
> periodically?
>


Re: [kubernetes-users] How to put in communication two clusters in Kubernetes

2017-12-13 Thread 'Tim Hockin' via Kubernetes user discussion and Q
On Wed, Dec 13, 2017 at 6:47 AM, Gmail  wrote:
> Sorry, I don't follow the price argument. You are only charged for the nodes
> you use on a Kubernetes cluster (no Masters, no matter the cluster size).
>
>
> I don't quite follow "no matter cluster size" -- no one suggested creating
> nodes that would go unused. In my example every node will be used, and of
> course I will be charged for it, so cluster size matters a lot for total
> spend.
>
>
> So, I really don't see why the number of clusters makes a difference
>
> what I mean is very simple:
> if I have to use a single cluster, every node must be big enough to bear the
> db's requirements.
> My db needs 60 GB of RAM.
> So every node in this cluster will have 60 GB.

Not true.  You can add many NodePools to a single cluster, each with a
different shape.  Resource scheduling will ensure that your DB lands
on a big machine, and smaller jobs fit in wherever they can.
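
A sketch of that mixed-shape layout on GKE, using the machine types from the thread (the cluster and pool names are illustrative):

```shell
# One cluster, two differently shaped node pools: a big machine for the DB
# and small ones for web pods. The scheduler places pods by resource request.
gcloud container node-pools create db-pool \
  --cluster my-cluster --machine-type n1-standard-16 --num-nodes 1
gcloud container node-pools create web-pool \
  --cluster my-cluster --machine-type n1-standard-2 --num-nodes 9
```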

> I can spend 1000$/month so I can afford two nodes.
> One node will be for db, the other will be used for many (8-10-6 I don't
> know) web pod
> So I'm asking
>
> in terms of performance, scalability and stability which is the better
> solution between:
>
>
> a single cluster with 2 nodes where 1 node is used for db and other for n
> web-pod
>
> or
>
> (considering that the requirements of the db machine are very different from
> those of the web machines) two clusters, one for db (n1-standard-16 single
> node) and another for web machines (with more n1-standard-2 nodes)

I think you are always better off with more nodes of smaller size,
though I wouldn't go artificially small.   2 cores or 4 cores give you
a lot of freedom, unless you need to run pods that just do not fit,
and they give you better availability properties.

>
> Can't you use an internal load balancer to communicate?
>
>
> I noticed that if I create a load balancer service or an ingress service,
> Kubernetes will create a public ip address.
> So when you say internal load balancer, what are you referring to?
> Because I tried to use a NodePort service to communicate between clusters
> and it didn't work
>
>
>
>
> Il 13/12/2017 13:56, Rodrigo Campos ha scritto:
>
> Sorry, I don't follow the price argument. You are only charged for the nodes
> you use in a Kubernetes cluster (no masters, no matter the cluster size).
>
> So I really don't see why the number of clusters makes a difference.
>
> On Wednesday, December 13, 2017,  wrote:
>>
>> I think that the situation is more complicated if we start looking at
>> machine prices.
>> Let me use some real data:
>> 1) I have to use a db machine like gcloud n1-standard-16 ---> kubernetes
>> cluster with 1 node for 500$/month
>> 2) I have to use 9 web server like n1-standard-2 ---> kubernetes cluster
>> with 9 nodes for 480$/month
>>
>> So with about 1000$/month I have the configuration that currently supports
>> the web traffic of my company.
>>
>> If I wanted to use a single cluster I would have to choose nodes like the
>> n1-standard-16.
>> To stay under the $1000 limit, I could create a cluster with 2 nodes.
>> So I'd have one node for the db and one node for 9 (web) pods.
>>
>> So the real question could be: in terms of performance, scalability and
>> stability which is the better solution between: (9 nodes with 1 pod) vs (1
>> node with 9 pods)
>>
>> If the two alternatives are comparable I could use a single cluster :)
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> Il giorno martedì 12 dicembre 2017 23:00:10 UTC+1, David Rosenstrauch ha
>> scritto:
>> > On 2017-12-12 4:38 pm, Marco De Rosa wrote:
>> > > The main reason is that the "web" cluster has different hardware
>> > > features from the "db" cluster, and I didn't find a way to have a
>> > > cluster where, for example, one node is better in CPU and/or RAM than
>> > > the others.
>> > > So: 2 clusters to put in communication, with the doubt that I have
>> > > described above.
>> > > The alternative could be to create a single cluster with n nodes sized
>> > > to support both the web traffic and the database work.
>> > > So a situation where I have for example 4 nodes: 6 web-pods across 3
>> > > nodes, plus the last node as a pure db machine.
>> > > But this solution is quite complicated in terms of how exactly to
>> > > size the web pods, the db, and the overall characteristics of the
>> > > cluster.
>> > > Hence the idea to create two different clusters.
>> >
>> >
>> > FYI, this could probably be easily accomplished on a single cluster,
>> > using node labels and node selectors.
>> >
>> > Let's say you had 2 types of nodes:  machines with big disks, and
>> > machines with lots of memory.  Then let's say that you have 2 different
>> > types of containers - one that runs a memory cache, and one that runs a
>> > log file processing system.  What you could do is label the nodes as,
>> > say, either "type=hidisk" or "type=himem", as appropriate.  And then you
>> > could set a node selector on the 
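A minimal sketch of the label/selector pattern David describes, using his hypothetical "himem" label value and an example image:

```yaml
# Label the nodes once, out of band:
#   kubectl label node <node-name> type=himem
apiVersion: v1
kind: Pod
metadata:
  name: memory-cache
spec:
  nodeSelector:
    type: himem          # schedules only onto nodes labeled type=himem
  containers:
  - name: cache
    image: memcached:1.5 # example image
```

The same mechanism works on the pod template of a Deployment or StatefulSet, so each workload type lands only on its matching node shape within one cluster.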

Re: [kubernetes-users] How to put in communication two clusters in Kubernetes

2017-12-12 Thread 'Tim Hockin' via Kubernetes user discussion and Q
On Tue, Dec 12, 2017 at 12:49 AM,   wrote:
> I have a situation like this:
>
> - a cluster of web machines
> - a cluster of db machines and other services

I think you have made the problem much more complicated than it needs
to be.  Why not one cluster?

> The question is how to put the 2 clusters in communication so that I can use 
> some hostnames in /etc/hosts on the web machines.
>
> In terms of protecting your data, is it safe to create an ingress service to 
> make the db visible externally? I tried with a NodePort service (so using 
> internal IP addresses) but I'm not able to connect the db and web tiers 
> between the different clusters.
>
> At the moment my temporary solution is:
>
> a) define a public static ip with the command:
> gcloud compute addresses create my-public-static-ip --global
>
>
> b) use an ingress configuration for my db service where I set the static ip 
> with the option:
>
> apiVersion: extensions/v1beta1
> kind: Ingress
> metadata:
>   name: my-ingress
>   annotations:
> kubernetes.io/ingress.global-static-ip-name: my-public-static-ip
>
> c) in my daemonset.yaml I define a hostAliases:
>
> apiVersion: extensions/v1beta1
> kind: DaemonSet
> metadata:
>   name: my-daemonset
>
> spec:
>   updateStrategy:
> type: RollingUpdate
>
>   template:
> spec:
>   nodeSelector:
> app: frontend-node
>
>   terminationGracePeriodSeconds: 30
>
>   hostAliases:
>   - ip: 
> hostnames:
> - "my-db-service"
>
>
> and it's working. But I'm not convinced that this solution is the best, or 
> even correct, for a live environment...
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] how to pass kubernetes pods arguments like Docker arguments from command line

2017-12-07 Thread 'Tim Hockin' via Kubernetes user discussion and Q
You want a template expander before you get to kubectl.  Otherwise, the
thing that is running isn't reflected by any versionable artifact.

Because templating is a high-opinion space, we do not (currently) have one
that is built-in.
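As an aside on the quoted spec itself: `env:` takes a list of name/value maps (the inline map-in-list form in the question is not valid YAML), and Kubernetes expands `$(VAR)` references inside `args`. A corrected, still static, sketch:

```yaml
spec:
  containers:
  - name: spring-boot-web
    image: docker.io/joethecoder2/spring-boot-web
    env:
    - name: CASSANDRA_IP        # each env entry is a map with name and value
      value: "127.0.0.1"
    - name: CASSANDRA_PORT
      value: "9042"
    command: ["java", "-jar", "spring-boot-web-0.0.1-SNAPSHOT.jar"]
    # $(VAR) is substituted by Kubernetes from the env list above
    args: ["-Dcassandra_ip=$(CASSANDRA_IP)", "-Dcassandra_port=$(CASSANDRA_PORT)"]
```

The values are still fixed per file, which is why varying them per pod needs a template expander (helm, envsubst, etc.) upstream of kubectl.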

On Dec 7, 2017 10:12 AM, "Henry Hottelet"  wrote:

>
> Is there not a way to pass arguments from the command line to the Pod
> specification?  There should be, because this is not the first time that a
> Docker argument is needed when calling a Pod instance, whether dynamically or
> statically defined.
>
> I could have Pod1.yaml, Pod2.yaml, and have an IP address and port number
> for each separate Pod that is defined.
>
>
>
> On Thursday, December 7, 2017 at 11:03:28 AM UTC-5, Tim Hockin wrote:
>>
>> Kubectl is not a templating system, which is what you are asking for.
>> Create/Apply are declarative plumbing, suitable to things you would check
>> in to source control.  There are porcelain commands, eg. kubectl run, which
>> are closer to docker run, but less suitable to source control.
>>
>> On Dec 7, 2017 9:56 AM, "Henry Hottelet"  wrote:
>>
>>>
>>> A problem:
>>>
>>> Docker arguments will pass from command line:
>>>
>>> docker run -it -p 8080:8080 joethecoder2/spring-boot-web 
>>> -Dcassandra_ip=127.0.0.1 -Dcassandra_port=9042
>>>
>>> However, when I do:
>>>
>>> kubectl create -f ./singlePod.yaml
>>>
>>> Kubernetes POD arguments will not pass from singlePod.yaml file:
>>>
>>> apiVersion: v1
>>> kind: Pod
>>> metadata:
>>>   name: spring-boot-web-demo
>>>   labels:
>>> purpose: demonstrate-spring-boot-web
>>> spec:
>>>   containers:
>>>   - name: spring-boot-web
>>> image: docker.io/joethecoder2/spring-boot-web
>>> env: ["name": "-Dcassandra_ip", "value": "127.0.0.1"]
>>> command: ["java","-jar", "spring-boot-web-0.0.1-SNAPSHOT.jar", 
>>> "-D","cassandra_ip=127.0.0.1", "-D","cassandra_port=9042"]
>>> args: ["-Dcassandra_ip=127.0.0.1", "-Dcassandra_port=9042"]
>>>   restartPolicy: OnFailure
>>>
>>> Question: How do I correctly specify arguments that will change at
>>> runtime?  I want to add two arguments that change at Kubernetes POD
>>> runtime, because these should be configurable for each POD that is defined.
>>>   Arguments for the POD are:  -Dcassandra_ip=127.0.0.1",
>>> "-Dcassandra_port=9042
>>>
>>> I want the arguments to be accepted just like the Docker command line.
>>>
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Kubernetes user discussion and Q" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to kubernetes-use...@googlegroups.com.
>>> To post to this group, send email to kubernet...@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/kubernetes-users.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] how to pass kubernetes pods arguments like Docker arguments from command line

2017-12-07 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Kubectl is not a templating system, which is what you are asking for.
Create/Apply are declarative plumbing, suited to things you would check
in to source control.  There are porcelain commands, e.g. kubectl run, which
are closer to docker run but less suitable for source control.

On Dec 7, 2017 9:56 AM, "Henry Hottelet"  wrote:

>
> A problem:
>
> Docker arguments will pass from command line:
>
> docker run -it -p 8080:8080 joethecoder2/spring-boot-web 
> -Dcassandra_ip=127.0.0.1 -Dcassandra_port=9042
>
> However, when I do:
>
> kubectl create -f ./singlePod.yaml
>
> Kubernetes POD arguments will not pass from singlePod.yaml file:
>
> apiVersion: v1
> kind: Pod
> metadata:
>   name: spring-boot-web-demo
>   labels:
> purpose: demonstrate-spring-boot-web
> spec:
>   containers:
>   - name: spring-boot-web
> image: docker.io/joethecoder2/spring-boot-web
> env: ["name": "-Dcassandra_ip", "value": "127.0.0.1"]
> command: ["java","-jar", "spring-boot-web-0.0.1-SNAPSHOT.jar", 
> "-D","cassandra_ip=127.0.0.1", "-D","cassandra_port=9042"]
> args: ["-Dcassandra_ip=127.0.0.1", "-Dcassandra_port=9042"]
>   restartPolicy: OnFailure
>
> Question: How do I correctly specify arguments that will change at
> runtime?  I want to add two arguments that change at Kubernetes POD
> runtime, because these should be configurable for each POD that is defined.
>   Arguments for the POD are:  -Dcassandra_ip=127.0.0.1",
> "-Dcassandra_port=9042
>
> I want the arguments to be accepted just like the Docker command line.
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] How to make a container(s) to able to reach to ClusterIP:port or Service's Publilc IP:NodePort?

2017-12-04 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Did you figure it out?

On Mon, Dec 4, 2017 at 10:15 AM, Kyunam Kim  wrote:
> Understood - thanks!
>
> On Thursday, November 30, 2017 at 8:55:31 PM UTC-8, Tim Hockin wrote:
>>
>> If it came in via the public IP, as you said:
>> `https://PublicIP:31245/app/rest/init` then the 192 address isn't part
>> of the request.
>>
>> On Thu, Nov 30, 2017 at 8:35 PM, Kyunam Kim  wrote:
>> > my guess is that the 3rd party web app copied it from the request.
>> >
>> > On Thursday, November 30, 2017 at 4:30:03 PM UTC-8, Tim Hockin wrote:
>> >>
>> >> Did you tell the app about the 192 address?  How did it know that IP
>> >> to redirect you?
>> >>
>> >> On Thu, Nov 30, 2017 at 4:07 PM, Kyunam Kim  wrote:
>> >> > service IP
>> >> >
>> >> > On Thursday, November 30, 2017 at 3:32:07 PM UTC-8, Tim Hockin wrote:
>> >> >>
>> >> >> It's not clear what 192 address represents - the pod IP, the service
>> >> >> IP, or an external LB IP?
>> >> >>
>> >> >> Also note that you have / characters where you need . characters - I
>> >> >> assume that is human error in reporting the issue?
>> >> >>
>> >> >>
>> >> >> On Thu, Nov 30, 2017 at 3:27 PM, Kyunam Kim 
>> >> >> wrote:
>> >> >> > I'm not that smart to prohibit anything in k8s yet ;-)
>> >> >> > Let me retry.
>> >> >> >
>> >> >> > My docker container runs a 3rd party web application over which I
>> >> >> > have
>> >> >> > no
>> >> >> > control and I have successfully deployed it in k8s.
>> >> >> > I can access it through https://192.168.99.100:31245.
>> >> >> > When I call https://PublicIP:31245/app/rest/init, the 3rd party
>> >> >> > web
>> >> >> > app
>> >> >> > starts an internal setup process and errors out by saying
>> >> >> > 'http://192.168/88/100:31245/app/admin' is not reachable from
>> >> >> > 'MY-APP-DEPLOYMENT-768D8FBC5D-CP92L'.
>> >> >> >
>> >> >> > I interpreted this as, Docker container with host name
>> >> >> > MY-APP-DEPLOYMENT-768D8FBC5D-CP92L cannot reach to
>> >> >> > http://192.168/88/100:31245/app/admin.
>> >> >> >
>> >> >> > What k8s' magic can I do to make this container to be able to
>> >> >> > reach
>> >> >> > 1)
>> >> >> > http://192.168/88/100:31245/app/admin or 2)
>> >> >> > http://172.17.0.7:6443/app/admin
>> >> >> > (Service's Endpoints) ?
>> >> >> >
>> >> >> > Hope this helps.
>> >> >> >
>> >> >> > On Thursday, November 30, 2017 at 2:19:35 PM UTC-8, Rodrigo Campos
>> >> >> > wrote:
>> >> >> >>
>> >> >> >> On Thursday, November 30, 2017, Kyunam Kim 
>> >> >> >> wrote:
>> >> >> >>>
>> >> >> >>> How do I make a container aware of the service's IP:NodePort or
>> >> >> >>> ClusterIP:port address?
>> >> >> >>> Let's say, I can access my application at
>> >> >> >>> http://public-ip:port/myapp
>> >> >> >>> from the external world.
>> >> >> >>> I want a container(s) to be able to reach to
>> >> >> >>> http://public-ip:port
>> >> >> >>> Or
>> >> >> >>> to reach to ClusterIP:port.
>> >> >> >>>
>> >> >> >>> What k8s' capability do I use to make this happen?
>> >> >> >>
>> >> >> >>
>> >> >> >> Sorry, not sure I follow. Does it work for you using the service
>> >> >> >> name?
>> >> >> >> (Or
>> >> >> >> service+namespace)?
>> >> >> >>
>> >> >> >> Unless you prohibited it in some way (like with network policy,
>> >> >> >> but
>> >> >> >> that
>> >> >> >> is probably not the case) that should work.
>> >> >> >>
>> >> >> >> So, I might be missing something, sorry in advance :)
>> >> >> >
>> >> >> > --
>> >> >> > You received this message because you are subscribed to the Google
>> >> >> > Groups
>> >> >> > "Kubernetes user discussion and Q" group.
>> >> >> > To unsubscribe from this group and stop receiving emails from it,
>> >> >> > send
>> >> >> > an
>> >> >> > email to kubernetes-use...@googlegroups.com.
>> >> >> > To post to this group, send email to kubernet...@googlegroups.com.
>> >> >> > Visit this group at
>> >> >> > https://groups.google.com/group/kubernetes-users.
>> >> >> > For more options, visit https://groups.google.com/d/optout.
>> >> >
>> >> > --
>> >> > You received this message because you are subscribed to the Google
>> >> > Groups
>> >> > "Kubernetes user discussion and Q" group.
>> >> > To unsubscribe from this group and stop receiving emails from it,
>> >> > send
>> >> > an
>> >> > email to kubernetes-use...@googlegroups.com.
>> >> > To post to this group, send email to kubernet...@googlegroups.com.
>> >> > Visit this group at https://groups.google.com/group/kubernetes-users.
>> >> > For more options, visit https://groups.google.com/d/optout.
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "Kubernetes user discussion and Q" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to kubernetes-use...@googlegroups.com.
>> > To post to this group, send email to kubernet...@googlegroups.com.
>> > Visit this group at 

Re: [kubernetes-users] How to force Kubernetes to update deployment with a pod in every node

2017-12-04 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Would you prefer a DaemonSet instead?
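A DaemonSet runs exactly one pod per (matching) node, so there is no replica count to fight against anti-affinity during an update. A minimal sketch, with the names and image as placeholders:

```yaml
apiVersion: apps/v1          # extensions/v1beta1 in older clusters
kind: DaemonSet
metadata:
  name: apache
spec:
  selector:
    matchLabels:
      app: apache
  updateStrategy:
    type: RollingUpdate      # replaces the pod in place, node by node
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache-container
        image: httpd:2.4     # example image
```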

On Dec 4, 2017 7:28 AM, "Itamar O"  wrote:

> I'm guessing you have as many replicas as you have nodes, and you used the
> "required" affinity policy over the "preferred" one.
> If this is the case, then when you try to update the deployment (with the
> default upgrade strategy), the controller tries to schedule a *4th pod*
> (with the new image) before taking down any of the running 3 pods, failing
> to do so, because the anti-affinity policy will be violated.
> Try using "preferred" instead of "required".
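For concreteness, the "preferred" form is a soft constraint the scheduler weighs rather than enforces; a hedged sketch of the relevant pod-template snippet (the pod label is hypothetical):

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100            # highest preference, but not a hard requirement
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: apache      # hypothetical label on the deployment's pods
        topologyKey: kubernetes.io/hostname  # spread across distinct nodes
```

With this, the rolling update can briefly co-locate two pods on one node instead of failing to schedule.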
>
> On Mon, Dec 4, 2017 at 3:57 PM  wrote:
>
>> Sorry but now I'm facing another problem :-(
>> The deployment with the options podAntiAffinity/podAffinity is working
>> but when I try to update the deployment with the command:
>>
>> kubectl set image deployment/apache-deployment
>> apache-container=xx:v2.1.2
>>
>> then I get this error:
>>
>> apache-deployment-7c774d67f5-l69lb  No nodes are available that match
>> all of the predicates: MatchInterPodAffinity (3).
>>
>>
>> I don't know how to fix. Maybe the podAntiAffinity option need another
>> kind of setting to update a deployment?
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Kubernetes user discussion and Q" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to kubernetes-users+unsubscr...@googlegroups.com.
>> To post to this group, send email to kubernetes-users@googlegroups.com.
>> Visit this group at https://groups.google.com/group/kubernetes-users.
>> For more options, visit https://groups.google.com/d/optout.
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] Re: How to permanently delete a deployment

2017-12-02 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Alternatively, you can scale it to 0 replicas.

On Fri, Dec 1, 2017 at 2:40 PM, Peter Idah  wrote:
> Hi,
>
> tiller is deployed in acs-engine as an addon. The kubernetes addon manager
> constantly ensures that defined addons are running, so everytime you delete
> the tiller deployment object via the kubernetes API, the addon manager
> automatically re-creates it again within a few seconds.
>
> In acs-engine you can disable the tiller addon by adding this to your
> config:
>
> "kubernetesConfig": {
> "addons": [
> {
> "name": "tiller",
> "enabled" : false
> }
> ]
> }
>
>
> More information on this is available here:
> https://github.com/Azure/acs-engine/blob/master/docs/clusterdefinition.md#kubernetesconfig
>
> Cheers,
> Peter
>
>
> On Fri, Dec 1, 2017 at 9:24 PM,  wrote:
>>
>> mfisher wrote:
>> Also, which cloud provider are you using? I know in certain instances
>> (like on ACS) tiller is deployed through their addon manager, and this
>> manifested in a bug. See https://github.com/Azure/ACS/issues/55 for more
>> background on that one.
>>
>> I created the cluster via acs-engine.  According to
>> https://github.com/Azure/ACS/issues/55, acs-engine deploys tiller as a
>> cluster service, so Kubernetes doesn't allow updates through the Kubernetes
>> API.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Kubernetes user discussion and Q" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to kubernetes-users+unsubscr...@googlegroups.com.
>> To post to this group, send email to kubernetes-users@googlegroups.com.
>> Visit this group at https://groups.google.com/group/kubernetes-users.
>> For more options, visit https://groups.google.com/d/optout.
>
>
>
>
> --
> Sign-up to receive updates on my upcoming book – DevOps 101 now! –
> http://www.DevOps101.com
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] How to make a container(s) to able to reach to ClusterIP:port or Service's Publilc IP:NodePort?

2017-11-30 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Did you tell the app about the 192 address?  How did it know that IP
to redirect you?

On Thu, Nov 30, 2017 at 4:07 PM, Kyunam Kim  wrote:
> service IP
>
> On Thursday, November 30, 2017 at 3:32:07 PM UTC-8, Tim Hockin wrote:
>>
>> It's not clear what 192 address represents - the pod IP, the service
>> IP, or an external LB IP?
>>
>> Also note that you have / characters where you need . characters - I
>> assume that is human error in reporting the issue?
>>
>>
>> On Thu, Nov 30, 2017 at 3:27 PM, Kyunam Kim  wrote:
>> > I'm not that smart to prohibit anything in k8s yet ;-)
>> > Let me retry.
>> >
>> > My docker container runs a 3rd party web application over which I have
>> > no
>> > control and I have successfully deployed it in k8s.
>> > I can access it through https://192.168.99.100:31245.
>> > When I call https://PublicIP:31245/app/rest/init, the 3rd party web app
>> > starts an internal setup process and errors out by saying
>> > 'http://192.168/88/100:31245/app/admin' is not reachable from
>> > 'MY-APP-DEPLOYMENT-768D8FBC5D-CP92L'.
>> >
>> > I interpreted this as, Docker container with host name
>> > MY-APP-DEPLOYMENT-768D8FBC5D-CP92L cannot reach to
>> > http://192.168/88/100:31245/app/admin.
>> >
>> > What k8s' magic can I do to make this container to be able to reach 1)
>> > http://192.168/88/100:31245/app/admin or 2)
>> > http://172.17.0.7:6443/app/admin
>> > (Service's Endpoints) ?
>> >
>> > Hope this helps.
>> >
>> > On Thursday, November 30, 2017 at 2:19:35 PM UTC-8, Rodrigo Campos
>> > wrote:
>> >>
>> >> On Thursday, November 30, 2017, Kyunam Kim  wrote:
>> >>>
>> >>> How do I make a container aware of the service's IP:NodePort or
>> >>> ClusterIP:port address?
>> >>> Let's say, I can access my application at http://public-ip:port/myapp
>> >>> from the external world.
>> >>> I want a container(s) to be able to reach to http://public-ip:port
>> >>> Or
>> >>> to reach to ClusterIP:port.
>> >>>
>> >>> What k8s' capability do I use to make this happen?
>> >>
>> >>
>> >> Sorry, not sure I follow. Does it work for you using the service name?
>> >> (Or
>> >> service+namespace)?
>> >>
>> >> Unless you prohibited it in some way (like with network policy, but
>> >> that
>> >> is probably not the case) that should work.
>> >>
>> >> So, I might be missing something, sorry in advance :)
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "Kubernetes user discussion and Q" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to kubernetes-use...@googlegroups.com.
>> > To post to this group, send email to kubernet...@googlegroups.com.
>> > Visit this group at https://groups.google.com/group/kubernetes-users.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] How to make a container(s) to able to reach to ClusterIP:port or Service's Publilc IP:NodePort?

2017-11-30 Thread 'Tim Hockin' via Kubernetes user discussion and Q
It's not clear what 192 address represents - the pod IP, the service
IP, or an external LB IP?

Also note that you have / characters where you need . characters - I
assume that is human error in reporting the issue?


On Thu, Nov 30, 2017 at 3:27 PM, Kyunam Kim  wrote:
> I'm not that smart to prohibit anything in k8s yet ;-)
> Let me retry.
>
> My docker container runs a 3rd party web application over which I have no
> control and I have successfully deployed it in k8s.
> I can access it through https://192.168.99.100:31245.
> When I call https://PublicIP:31245/app/rest/init, the 3rd party web app
> starts an internal setup process and errors out by saying
> 'http://192.168/88/100:31245/app/admin' is not reachable from
> 'MY-APP-DEPLOYMENT-768D8FBC5D-CP92L'.
>
> I interpreted this as, Docker container with host name
> MY-APP-DEPLOYMENT-768D8FBC5D-CP92L cannot reach to
> http://192.168/88/100:31245/app/admin.
>
> What k8s' magic can I do to make this container to be able to reach 1)
> http://192.168/88/100:31245/app/admin or 2) http://172.17.0.7:6443/app/admin
> (Service's Endpoints) ?
>
> Hope this helps.
>
> On Thursday, November 30, 2017 at 2:19:35 PM UTC-8, Rodrigo Campos wrote:
>>
>> On Thursday, November 30, 2017, Kyunam Kim  wrote:
>>>
>>> How do I make a container aware of the service's IP:NodePort or
>>> ClusterIP:port address?
>>> Let's say, I can access my application at http://public-ip:port/myapp
>>> from the external world.
>>> I want a container(s) to be able to reach to http://public-ip:port
>>> Or
>>> to reach to ClusterIP:port.
>>>
>>> What k8s' capability do I use to make this happen?
>>
>>
>> Sorry, not sure I follow. Does it work for you using the service name? (Or
>> service+namespace)?
>>
>> Unless you prohibited it in some way (like with network policy, but that
>> is probably not the case) that should work.
>>
>> So, I might be missing something, sorry in advance :)
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] Unexpected Behavior When Scaling Application

2017-11-22 Thread 'Tim Hockin' via Kubernetes user discussion and Q
If you curl it repeatedly, does it always give the right answer?  Can you
verify that from both VMs?

What is the error case?  You said "the home page" - is that your app's home
page or something else?

On Nov 22, 2017 5:23 PM,  wrote:

> Thanks for sticking with me Tim.
>
> So what I currently have is a pod that has two containers in it. One
> container has port 5000 and one has port 5432. When I run `kubectl
> get pods` I can see the IP for my pod and I can curl it from any
> machine. I exposed this deployment using `kubectl expose deployment
> frontend`, which created a Service for my pod. If I run `kubectl get
> services`, grab the clusterIP of said service, and then curl
> `:5000` (the port the application is listening on), that works fine
> on any machine also.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] Unexpected Behavior When Scaling Application

2017-11-22 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Sorry.  When you `kubectl get services` you get a listing which
includes the "cluster IP" of a service.  This is a VIP that is
reachable by your cluster nodes.  You can SSH into a VM and curl your
cluster IP and it will give us a clue where the process is breaking
down.  In particular, curl it 100 times - do you always get valid
responses?

On Wed, Nov 22, 2017 at 4:22 PM,   wrote:
> When you say curl the Service's clusterIP, what do you mean? Which service? I'm 
> relatively new to kubernetes/GCP so still finding my feet.
>


Re: [kubernetes-users] Unexpected Behavior When Scaling Application

2017-11-22 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Log in to each VM and try accessing the service's clusterIP, eg with curl.

Try connecting to the VM IP on the service node port.

On Wed, Nov 22, 2017 at 1:39 PM,   wrote:
> So the application is created using Flask, which is a python web-framework. 
> Logging into the application is simply done by querying the db and then 
> retrieving a User. It is handled by the Flask-Login module which stores 
> user-session data in cookies.
>
> There's no errors in kube-proxy logs. What else could possibly be causing 
> this?
>


Re: [kubernetes-users] Unexpected Behavior When Scaling Application

2017-11-22 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Things to try:

Log in to each VM and try accessing the service's clusterIP, eg with curl.

Make sure kube-proxy is running on both VMs.

Check kube-proxy logs for any obvious errors, like failure to sync with
master.

Try connecting to the VM IP on the service node port.
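The checklist above, as concrete commands. The service name and the angle-bracketed values are placeholders to fill in:

```shell
# Does the Service actually select any pods?
kubectl get endpoints <service-name>
# Is kube-proxy running on both VMs?
kubectl get pods -n kube-system -o wide | grep kube-proxy
# Any obvious errors, e.g. failure to sync with the master?
kubectl logs -n kube-system <kube-proxy-pod-name>
# Then, from a shell on each VM:
curl -v http://<cluster-ip>:<service-port>/
curl -v http://<node-ip>:<node-port>/
```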



On Nov 22, 2017 5:49 AM, "Rodrigo Campos"  wrote:

If I have to bet, I'd bet it's the application.

But okay, that is what you see. But else do you see in Kubernetes? Logs of
kube-proxy, etc.?

How do you keep the users logged in the application? Is there any container
restart when this happens?

Are you using GKE? And how do you connect, via a public load balancer?

Tell us more about the app and your setup


On Wednesday, November 22, 2017,  wrote:

> Hi there,
>
> I’ve deployed an application to a Google Cloud Kubernetes cluster. The
> application is built using the Python web-framework Flask and uses a
> CloudSQL Postgres database for persistence. The script I used to deploy the
> application is this: https://gist.github.com/tnolan
> 8/85e91394d9ec1327f930808c71081aba -> the gist is actually slightly
> outdated and instead of a ReplicationController I’m now using a Deployment.
>
> When I have a single VM in my instance group for my Cluster and a singular
> pod deployed with the lb service running everything works fine, the
> application works just as intended. However when I scale to having two VMs
> in my instance group for my Cluster and keep only the one pod, so it’s
> essentially still only running on the one machine, I get unexpected
> behavior. For example, when logging into the application instead of
> actually logging in it will redirect to the homepage but 1/3 times it will
> actually log in.
>
> I don’t think it’s something to do with the application itself. Everything
> works okay locally and on a singular VM instance. I’ve tried looking
> through logs using StackDriver but I’m not really even sure what I should
> be looking for, there’s some weird disconnect occurring and I really can’t
> figure out why.
>
> Has anyone ever seen something like this? Any thoughts on what I could try
> to debug it or thoughts on what might actually be causing it?
>
> Much Appreciated,
>
> Tom.
>


Re: [kubernetes-users] How to schedule pods of specified namespace to dedicated hosts

2017-11-21 Thread 'Tim Hockin' via Kubernetes user discussion and Q
You'll have to say more about why you can't modify them.  Did you get an error?
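For what it's worth, if the constraint is only that the manifest file cannot be edited, a nodeSelector can still be patched onto the live Deployment. A sketch with hypothetical node, deployment, and label names:

```shell
# Label the dedicated node, then steer the deployment onto it in place.
kubectl label node <node-name> dedicated=system
kubectl -n kube-system patch deployment <deployment-name> -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"dedicated":"system"}}}}}'
```

The patch changes the pod template, so the deployment rolls its pods onto the labeled node without the manifest file being touched.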

On Tue, Nov 21, 2017 at 12:43 AM, Yong Zhang  wrote:
> Hi, all
>
> I have some pods running in kube-system namespace, I want to schedule these
> pods to some dedicated hosts to avoid resource conflict. For some reason I
> can't modify the deployment, so I can't use nodeselector or taint, any
> ideas? Thanks a lot.
>


Re: [kubernetes-users] simple k8s GCP cluster requires 2 nodes after upgrade to 1.6.11

2017-11-17 Thread 'Tim Hockin' via Kubernetes user discussion and Q
And know that we're looking at ways to optimize the scale-down
resourcing to be more appropriate for 1-node, 1-core "clusters".
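To see what is actually consuming the node's capacity, the per-node request accounting can be inspected directly (node name is a placeholder):

```shell
# Prints capacity, allocatable, and the sum of resource requests on the node;
# system pods in kube-system typically account for the gap.
kubectl describe node <node-name> | sed -n '/Allocated resources/,$p'
```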

On Fri, Nov 17, 2017 at 9:42 PM, 'Robert Bailey' via Kubernetes user
discussion and Q  wrote:
> You can inspect the pods running in the kube-system namespace by running
>
> kubectl get pods --namespace=kube-system
>
>
> Some of those pods can be disabled via the GKE API (e.g. turn off dashboard,
> disable logging and/or monitoring if you don't need them).
>
> On Fri, Nov 17, 2017 at 2:40 AM, 'Vitalii Tamazian' via Kubernetes user
> discussion and Q  wrote:
>>
>> Hi!
>> I have small java/alpine linux microservice that previously was running
>> fine on n1-standard-1 (1 vCPU, 3.75 GB memory) on GCP.
>> But after the nodepool upgrade to 1.6.11 my service became "unschedulable",
>> and I was able to fix it only by adding a second node. So my cluster now
>> runs on 2 vCPUs, 7.50 GB, which imo is quite overkill for a service
>> that actually uses up to 300MB of memory. The average CPU usage is very
>> low.
>> There is still a single pod in the cluster.
>>
>> Is there any way to check what consumes the rest of the resources? Is
>> there a way to make it schedulable on 1 node again?
>>
>> Thanks,
>> Vitalii
>>


Re: [kubernetes-users] Container is unable to write to mounted volume

2017-11-09 Thread 'Tim Hockin' via Kubernetes user discussion and Q
We don't auto-apply ownership changes to hostPath volumes because that
would allow, for example, a user to take over /etc.  We've considered
heuristics like "apply ownership if we make the directory" or "apply
ownership if the path is under a flag-defined root", but none of them
have been totally satisfying, and nobody has stepped up to prototype
them.
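One common workaround, offered as a sketch rather than a supported pattern: run an initContainer that chowns the hostPath mount for the intended user before the main container starts. The image names, uid/gid (1000), and paths below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tf-gpu
spec:
  initContainers:
  - name: fix-perms
    image: busybox
    # Give the volume to uid 1000 and close it to other users.
    command: ["sh", "-c", "chown -R 1000:1000 /data && chmod -R o-rwx /data"]
    volumeMounts:
    - name: user-vol
      mountPath: /data
  containers:
  - name: tf
    image: tensorflow/tensorflow:latest-gpu
    securityContext:
      runAsUser: 1000      # run as the volume's owner, not root
    volumeMounts:
    - name: user-vol
      mountPath: /data
  volumes:
  - name: user-vol
    hostPath:
      path: /mnt/disks/user-vol
```

This keeps other users out without a blanket `chmod o+w`, at the cost of the init step itself running as root.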

On Thu, Nov 9, 2017 at 2:32 AM, lppier  wrote:
> I created a volume in linux as a certain user and mounted it using the
> hostPath method.
> My container is the tensorflow gpu default container, and I am able to see
> the linux command prompt when I do :
>
> kubectl exec -it tf-gpu /bin/bash
>
> It logs into the container as root.
> My issue now is that users would like to write to the mounted volume. I
> found that this was not possible, unless I explicitly
> chmod o+w -R volume
>
> which would beat the purpose of this volume being a user-specific volume
> (other users should not be able to write or delete the items inside).
>
> Could I get some suggestions on how to proceed?
>
> Many thanks.
>
>


Re: [kubernetes-users] Share the IPC namespace between pods

2017-11-09 Thread 'Tim Hockin' via Kubernetes user discussion and Q
On Thu, Nov 9, 2017 at 2:27 AM,   wrote:
> We measured that we lose at least one order of magnitude in terms of latency, 
> which is our key KPI in this setup.

If you are moving into Kubernetes from a model where your apps were
guaranteed to be colocated, you are going to have a rough trip.
Kubernetes starts by assuming that everything speaks over the network.

> Pods are not always on the same host, but we play with co-location a lot.
> 1:1 mainly.

Consider bundling them into the same Pod?  Pods share IPC and are
co-scheduled.  In some sense, a pod is a replacement for a VM.

Alternatively, you could consider the `hostIPC` field - it will put
your pods in the machine's IPC space, wherein they can find each
other.  We don't have a general mechanism yet for sharing namespaces
across pods.  We may get there some day, but the complexity has to be
justified pretty broadly.
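A minimal sketch of the `hostIPC` option (pod and image names are hypothetical); both pods would need the field set to find each other in the node's IPC namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shm-peer-a
spec:
  hostIPC: true            # join the node's IPC namespace instead of a pod-private one
  containers:
  - name: app
    image: example/app:latest
```

Getting both pods onto the same node is a separate problem, solvable with node labels plus nodeSelector or pod affinity.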

> Matthias, Tim, I will get back to you with a decent benchmark on our metals. 
> Thanks so far for your kind help!
>
> Zoltán
>
> On Wednesday, November 8, 2017 at 8:27:52 PM UTC+1, Tim Hockin wrote:
>> Are you concerned about perf because you measured it?  Or because you
>> suspect it might become a thing later?
>>
>> Are you really sure that your pods will ALWAYS be on the same host?
>>
>> Are your pods 1:1 or 1:N relationships?
>>
>> Could these highly-connected pods just be one bigger pod?
>>
>> To be sure, there's some overhead in networking containers today, but
>> you haven't really explained your problem.
>>
>> On Wed, Nov 8, 2017 at 10:08 AM,   wrote:
>> > Thanks Tim,
>> >
>> > Do you know of any technique(s) to speed up the network between pods 
>> > (probably co-located onto the same machine)? Shared memory communication 
>> > seems to be a good candidate within pods.
>> >
>> > On Wednesday, November 8, 2017 at 6:42:39 PM UTC+1, Tim Hockin wrote:
>> >> Pods should make very few assumptions about other pods.  Sharing IPC
>> >> implies a high level of affinity, at which point I would question why
>> >> they are two different pods in the first place.
>> >>
>> >> On Wed, Nov 8, 2017 at 9:22 AM,   wrote:
>> >> > Currently it is possible to share (shared memory) IPC namespace within 
>> >> > pods, but not possible to share between pods.
>> >> >
>> >> > Is this something that will be supported in the future? Or goes against 
>> >> > the very design of Kubernetes?
>> >> >
>> >> > What is the general opinion of the Community on this?
>> >> >
>> >> > Thanks,
>> >> > Z
>> >> >


Re: [kubernetes-users] Share the IPC namespace between pods

2017-11-08 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Are you concerned about perf because you measured it?  Or because you
suspect it might become a thing later?

Are you really sure that your pods will ALWAYS be on the same host?

Are your pods 1:1 or 1:N relationships?

Could these highly-connected pods just be one bigger pod?

To be sure, there's some overhead in networking containers today, but
you haven't really explained your problem.

On Wed, Nov 8, 2017 at 10:08 AM,   wrote:
> Thanks Tim,
>
> Do you know of any technique(s) to speed up the network between pods 
> (probably co-located onto the same machine)? Shared memory communication 
> seems to be a good candidate within pods.
>
> On Wednesday, November 8, 2017 at 6:42:39 PM UTC+1, Tim Hockin wrote:
>> Pods should make very few assumptions about other pods.  Sharing IPC
>> implies a high level of affinity, at which point I would question why
>> they are two different pods in the first place.
>>
>> On Wed, Nov 8, 2017 at 9:22 AM,   wrote:
>> > Currently it is possible to share (shared memory) IPC namespace within 
>> > pods, but not possible to share between pods.
>> >
>> > Is this something that will be supported in the future? Or goes against 
>> > the very design of Kubernetes?
>> >
>> > What is the general opinion of the Community on this?
>> >
>> > Thanks,
>> > Z
>> >


Re: [kubernetes-users] Cannot access Ingress from LoadBalancer

2017-11-08 Thread 'Tim Hockin' via Kubernetes user discussion and Q
I am assuming 10.59.246.49 is your cluster IP, and that port 80 is the
service port as defined in Service.spec.ports[]?

Are your pods actually listening on port 80?  Or are they on a different port?
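A quick way to cross-check (service name "api" is hypothetical here): compare the Service's targetPort with what the Endpoints object reports, since curling the clusterIP only works if kube-proxy can forward to a pod port that is actually listening:

```shell
# Service port vs. the container port it maps to
kubectl get service api -o jsonpath='{.spec.ports[*].port} -> {.spec.ports[*].targetPort}{"\n"}'
# The concrete IP:port pairs kube-proxy will forward to
kubectl get endpoints api
```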

On Wed, Nov 8, 2017 at 9:41 AM, bg303  wrote:
> Thanks, Tim. That guide was very informative. I was able to get through a few 
> of these steps:
>
> Does the Service have any Endpoints? Yes - verified that three pods were 
> returned
>
> Are the Pods working? Verified all three pods are working.
>
> Is the kube-proxy working? Yes
>
> Is kube-proxy writing iptables rules? This is where I hit an issue. When I 
> run this command in both dev (working) and my new cluster (not working), I 
> see records with "hostname" in them. The guide says "If you do not see these, 
> try restarting kube-proxy with the -V flag set to 4, and then look at the 
> logs again." But I don't know how to do that.
>
>
> Is kube-proxy proxying? Per the guide, I try to do this: `curl 
> 10.59.246.49:80` from a node in the cluster but get "Failed to connect to 
> 10.59.246.49 port 80: Connection refused".
>
> So I guess my issue is somewhere between the "is kube-proxy writing iptables 
> rules?" and "is kube-proxy proxying?"
>
> I took my YML files that I created my DEV environment (working) and ran them 
> against a brand new cluster in GKE and cannot access the service.
>
> I don't know what I did differently the first time I set the cluster up, but 
> I cannot seem to recreate it.
>
>
> Any thoughts on how I can further diagnose this?
>
>
>
>
>


Re: [kubernetes-users] Two Cluster in single node

2017-11-07 Thread 'Tim Hockin' via Kubernetes user discussion and Q
This is not a tested configuration - I am not sure that there are
enough knobs in, for example, kubelet to make that happen, and I am
pretty sure kube-proxy will not work.

On Mon, Nov 6, 2017 at 10:39 PM,   wrote:
> I am working to configure a two-cluster Kubernetes setup (including binaries 
> installation) on a single machine. So far, I have configured the Ethernet 
> adapter to provide 2 IPs. Is it possible to run two instances of all the 
> services, and have two working clusters on a single machine?
>


Re: [kubernetes-users] Cannot access Ingress from LoadBalancer

2017-11-07 Thread 'Tim Hockin' via Kubernetes user discussion and Q
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/

Does the Service have Endpoints?

On Tue, Nov 7, 2017 at 1:10 PM, bg303  wrote:
> I created a Deployment, Service, Ingress and then an nginx-ingress 
> deployment. If I point my DNS record at any of the nodes running the 
> nginx-ingress, I am able to access to the service.
>
> I then created a Service with spec.Type = LoadBalancer, pointed it to my 
> nginx-ingress, and pointed my DNS at the IP of the LoadBalancer. After doing 
> that, I'm not able to connect to my service (curl: (7) Failed to connect to 
> mydomain.com port 443: Connection refused).
>
>
> The weirdest thing is that I used these exact same YML files on my DEV 
> cluster and it works great. I compared the LoadBalancers and ExternalIPs 
> between my DEV envrionment (working) and my new environment (not working) and 
> they appear identical.
>
> What steps can I follow to debug the LoadBalancer and why it is not able to 
> pass traffic to the nginx-ingress?
>
> Thanks in advance.
>


Re: [kubernetes-users] HTTPS Load Balancer without an Ingress/Ingress Controller

2017-11-06 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Starting with the last question first:

> Any ideas as to what I am doing wrong?

Yes - You're trying to do it all yourself instead of relying on the
pieces that have already been built and tested :)

On Mon, Nov 6, 2017 at 9:49 AM, bg303  wrote:
> I recently tried to put SSL on a service by deploying an Ingress and a 
> Ingress controller, but ultimately I do not think that is what I want.
>
> I think I just want to have a Google Cloud HTTPS Load Balancer and just 
> declare a service like this:
>
> ---
> apiVersion: v1
> kind: Service
> metadata:
>   name: api
>   labels:
> app: api
> spec:
>   type: LoadBalancer
>   loadBalancerIP: 
>   ports:
> - port: 443
>   targetPort: 8090
>   protocol: TCP
>   name: https
>   selector:
> app: api

This is not going to work the way you want.  The `type: LoadBalancer`
plus `loadBalancerIP` field is going to try to allocate a Google
Network LB with that IP.  In general, Service == Network LB (L4) and
Ingress == HTTP LB (L7).

Network LB is VIP-like.  HTTP LB is Proxy-like.

> Here are the steps I went through to try to achieve this:
>
> 1. Upload my SSL cert
> gcloud compute ssl-certificates create star --certificate my.crt 
> --private-key my.key
>
> 2. Create a static IP address
>
> 3. Create a Load Balancer
> I created an HTTPS load balancer with a backend pointing to my cluster on 
> port 8090. I created a frontend using my static IP address, port 443, using 
> my cert.

If you really want to do this manually, set the Service to
`type=NodePort` and aim your HTTP LB at the NodePort.  But you get to
maintain the IGs that back it, and we can't make any guarantees about
that working over time - you're going to end up manipulating managed
GCP resources in ways we can't predict or understand.

This is, more or less, EXACTLY what the Google LB controller is doing
for you, when you make an Ingress, except that is code that we
maintain and test, so we know it works.
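The supported path described here, sketched as manifests (2017-era API versions; the names and the TLS secret are hypothetical, the secret created with e.g. `kubectl create secret tls`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: NodePort           # the GCE Ingress controller targets the NodePort
  selector:
    app: api
  ports:
  - port: 443
    targetPort: 8090
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
spec:
  tls:
  - secretName: star-cert  # kubectl create secret tls star-cert --cert=my.crt --key=my.key
  backend:
    serviceName: api
    servicePort: 443
```

With this, the HTTPS LB, forwarding rule, and instance groups are created and kept in sync by the controller instead of by hand.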

Tim


> 4. Assign my Service's loadBalancerIP to that of my static IP.
>
> When I run `kubectl get services` I'm shown:
>
> NAME   CLUSTER-IP  EXTERNAL-IP  PORT(S) AGE
> api 10.21.25.24   443:32606/TCP   43m
>
>
> When I load https://mysite.com (pointing to my static IP), I get this in the 
> browser:
>
> Error: Server Error
>
> The server encountered a temporary error and could not complete your request.
> Please try again in 30 seconds.
>
> when I run `gcloud compute forwarding-rules list` I get this:
>
> NAME   REGION  IP_ADDRESS IP_PROTOCOL  TARGET
> api-feTCP  api-lb-target-proxy
>
>
> Any ideas as to what I am doing wrong? I cannot tell if my error is my 
> Kubernetes architecture or in the way I provisioned by Google Cloud 
> LoadBalancer.



Re: [kubernetes-users] Imported docker image doesn't run on kubernetes

2017-10-31 Thread 'Tim Hockin' via Kubernetes user discussion and Q
You'll have to post more details then, including kubelet logs.

On Tue, Oct 31, 2017 at 5:06 PM, Pier Lim <madst...@gmail.com> wrote:
> Yes I did ... sorry I forgot to mention that. I don't think it's the image
> because my image without commits is loaded just fine.
> Thanks.
>
> ________
> From: 'Tim Hockin' via Kubernetes user discussion and Q
> <kubernetes-users@googlegroups.com>
> Sent: Wednesday, November 1, 2017 2:03:01 AM
> To: Kubernetes user discussion and Q
> Subject: Re: [kubernetes-users] Imported docker image doesn't run on
> kubernetes
>
> It could be that kubernetes is trying to re-pull the image.  Did you
> set `imagePullPolicy: Never` ?
>
> On Mon, Oct 30, 2017 at 11:15 PM, lppier <madst...@gmail.com> wrote:
>> Hi,
>>
>> I am setting up some servers in an offline environment, and am downloading
>> some tensorflow images for use on these offline servers.
>> The kubernetes cluster has been set up, and we have verified that the pods
>> can be allocated to the various worker nodes.
>>
>> The issue now here is this - typically I use on the internet computer
>>
>> docker save mycontainername >  myimage.tar
>>
>> and then on the offline server
>>
>> docker load < myimage.tar
>>
>> For the most part this has worked when we run the pod on kubernetes.
>> However, now we are trying to make commits to the container on the
>> internet
>> computer.
>> So now,
>>
>> docker commit CONTAINER_ID mycontainername2
>>
>> followed by:
>>
>> docker save mycontainername2 > myimage2.tar
>>
>> on the offline server:
>>
>> docker load < myimage2.tar
>>
>> In this case, kubernetes fails to load the pod that launches the docker
>> container. We keep seeing a crash loop as soon as the pod is launched.  The
>> difference is the container image in the second case had 1 commit.
>>
>> What could be the issue here? Could someone versed in docker/kubernetes
>> help
>> us out here?
>>
>> Thanks.
>>
>> Pier.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Kubernetes user discussion and Q" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to kubernetes-users+unsubscr...@googlegroups.com.
>> To post to this group, send email to kubernetes-users@googlegroups.com.
>> Visit this group at https://groups.google.com/group/kubernetes-users.
>> For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] Imported docker image doesn't run on kubernetes

2017-10-31 Thread 'Tim Hockin' via Kubernetes user discussion and Q
It could be that kubernetes is trying to re-pull the image.  Did you
set `imagePullPolicy: Never` ?
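For reference, `imagePullPolicy` is set per container in the pod spec. A minimal sketch, with placeholder pod and image names, for running a side-loaded image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: offline-test              # placeholder name
spec:
  containers:
  - name: app
    image: mycontainername2       # image loaded with `docker load`, not pulled
    imagePullPolicy: Never        # never contact a registry; fail if absent
```

Note that for images tagged `:latest` (or untagged), the default pull policy is `Always`, so a node with no registry access can fail to start the pod even though the image is already present locally.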

On Mon, Oct 30, 2017 at 11:15 PM, lppier  wrote:
> Hi,
>
> I am setting up some servers in an offline environment, and am downloading
> some tensorflow images for use on these offline servers.
> The kubernetes cluster has been set up, and we have verified that the pods
> can be allocated to the various worker nodes.
>
> The issue is this: on the internet-connected computer, I typically run
>
> docker save mycontainername >  myimage.tar
>
> and then on the offline server
>
> docker load < myimage.tar
>
> For the most part this has worked when we run the pod on kubernetes.
> However, now we are trying to make commits to the container on the internet
> computer.
> So now,
>
> docker commit CONTAINER_ID mycontainername2
>
> followed by:
>
> docker save mycontainername2 > myimage2.tar
>
> on the offline server:
>
> docker load < myimage2.tar
>
> In this case, Kubernetes fails to start the pod that launches the Docker
> container. We keep seeing a CrashLoopBackOff as soon as the pod is launched.
> The only difference is that the container image in the second case has one
> commit on top of the original.
>
> What could be the issue? Could someone versed in Docker/Kubernetes help us
> out?
>
> Thanks.
>
> Pier.


Re: [kubernetes-users] Expose individual pods externally?

2017-10-30 Thread 'Tim Hockin' via Kubernetes user discussion and Q
On Mon, Oct 30, 2017 at 7:56 PM, David Rosenstrauch  wrote:
> Hi.  I'm having some issues migrating an (admittedly somewhat
> unconventional) existing system to a containerized environment (k8s) and was
> hoping someone might have some pointers on how I might be able to work
> around them.
>
> A major portion of the system is implemented using what basically are
> micro-services conceptually.  I'm in the process of porting each
> micro-service to a pod (eventually to be replicated), and then exposing the
> micro-service externally to other processes outside of the Kubernetes
> overlay network.  Although it's been quite easy to create containers/pods
> out of each micro-service and get them to run successfully on Kubernetes,
> I'm running into issues with the networking configuration, specifically with
> respect to how to expose these services properly to the outside world.
>
>
> The problem is that the way the system is currently built (out of my control
> - depends on idiosyncrasies of a piece of 3rd party software) these
> micro-services have to operate more like "pets" than "cattle". That is, even
> if there's multiple instances of a particular micro-service running, client
> code needs to be able to access a specific instance (pod), rather than just
> any instance.  This is obviously different from the way most micro-service
> systems work, where each individual instance is pretty much identical to any
> other, so you can expose the service to the external network using a load
> balancer. Because of this issue, it's been proving to be a bit non-trivial
> to make this migration work correctly.

This is not THAT uncommon.  The typical answer is to create a Service
per Pod.  Obviously this doesn't scale very well if you have hundreds
of replicas, but if you have a small (and stable) number of pods that
each need a public IP, this works.

You can do it with either a NodePort or a LoadBalancer, but you need
to set up different labels on each pod, so it's not fully automated.
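A sketch of that Service-per-Pod pattern, with illustrative names: each pod carries a unique label alongside the shared app label, and each Service selects exactly one pod by that label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: service-a-pod1
  labels:
    app: service-a
    instance: pod1            # unique per pod; the per-pod Service keys on this
spec:
  containers:
  - name: app
    image: example/service-a  # placeholder image
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-a-pod1
spec:
  type: NodePort              # or LoadBalancer for a dedicated external IP
  selector:
    app: service-a
    instance: pod1            # matches exactly one pod
  ports:
  - port: 80
    targetPort: 8080
```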

> What I've been trying to do is find a way for *each* of these individual
> instances of a micro-service to get assigned a public ip/port, rather than
> just assigning one single public ip/port that points to a load balancer in
> front of them.  But I don't see any way to do this properly in Kubernetes.
>
> * I tried exposing the pods externally using NodePort.  However that doesn't
> accomplish what I'm looking for.  Although it does open a public port on
> each host, each of those public ports just points to a single load balancer
> in front of the service's pods, rather than to the individual pods.
>
> * I tried exposing the pods externally using HostPort.  That does work, and
> comes closest to accomplishing what I'm looking for.  But it has the major
> drawback of not being able to run more than one instance of the same pod on
> the same host machine (since each instance wants to listen on the same
> port).  As a result, if I want to run N instances of the same pod, I need to
> have N host machines.  This is not ideal from a scalability / hardware
> utilization (and cost) perspective.
>
> I guess ideally what I'd be looking for is some way for each pod that got
> launched to automatically get assigned a unique external hostname/port
> combo, with multiple instances of the same pod able to run on a single host,
> all while ensuring no port conflicts.  E.g.:
>
> service_A_pod1 exposed at 192.168.0.10:30001
> service_A_pod2 exposed at 192.168.0.10:30005
> service_A_pod3 exposed at 192.168.0.20:31007
> etc.

If you need this to be elastic, you might want a custom controller, or
it might be something StatefulSet could handle.

> I've read through the docs pretty thoroughly, though, and Kubernetes doesn't
> seem to provide anything like this.
>
>
> Has anyone run into a similar problem like this before and/or any ideas how
> to solve this?  Might there be any 3rd party add-ons to k8s that might help
> address a situation like this?  (On a related note, we're using k8s in
> conjunction with Rancher.  Might Rancher provide some capabilities here?  I
> didn't see anything in the Rancher docs, but it's possible I could have
> missed something.)  Or does Kubernetes have any hooks to allow you "roll
> your own" service deployment plugin, in order to customize the way external
> port exposure is done?

You can always do this.  All the built-in controllers are just clients
of the Kubernetes API.  It's pretty easy to write your own to do
whatever you need.

> Another possible way for me to work around this problem is that I could
> probably eliminate the "pets" constraint I'm bumping up against if I were
> able to run the pods behind a customized Service/load balancer that was a
> bit smarter about which specific pod instance it routed traffic to.  So same
> question about Kubernetes Services:  any hooks to "roll your own" service?
> From what I can glean from the documentation, k8s services only provide 2
> types of routing 

Re: [kubernetes-users] Multiple version of software on same namespace

2017-10-30 Thread 'Tim Hockin' via Kubernetes user discussion and Q
What are you trying to do?  Do you want 2 versions in perpetuity or do
you want to do some form of switchover?

On Mon, Oct 30, 2017 at 3:34 AM,   wrote:
> I'm trying to figure out what's the best approach to deploy multiple versions 
> of the same software in kubernetes without relying on namespaces. According 
> to the docs:
>
> "It is not necessary to use multiple namespaces just to separate slightly 
> different resources, such as different versions of the same software: use 
> labels to distinguish resources within the same namespace."
>
> The only way (that I know of) to separate multiple versions of the same
> software in the same namespace is to name services according to the software
> version, adjust the selector field, and label pods appropriately. This has
> maintenance overhead, and I have to reference services by a different name
> depending on the desired version. I don't think this is a solution.
>
> I don't see any other way besides using namespaces. What am I missing?
>
> Thanks for any help.
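For reference, the label-based approach the quoted docs describe looks roughly like this (names and label values are illustrative): both versions run in the same namespace, distinguished by a `version` label, and a Service pins its selector to one of them:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: "2.0"   # change this value to switch the Service between versions
  ports:
  - port: 80
    targetPort: 8080
# Pods from each version's Deployment carry the matching labels, e.g.
# {app: myapp, version: "1.0"} and {app: myapp, version: "2.0"}.
```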


Re: [kubernetes-users] Container termination force pod termination?

2017-10-27 Thread 'Tim Hockin' via Kubernetes user discussion and Q
What Rodrigo said - what problem are you trying to solve?

The pod lifecycle is defined as restart-in-place, today.  Nothing you
can do inside your pod, except deleting it from the apiserver, will do
what you are asking.  It doesn't seem too far-fetched that a pod could
exit and "ask for a different node", but we're not going there without
a solid solid solid use case.
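For context, the per-container liveness probe discussed below is what drives the in-place restart; a minimal sketch with placeholder image, path, and timings:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo             # placeholder name
spec:
  restartPolicy: Always        # the container restarts in place; the pod object stays
  containers:
  - name: app
    image: example/app         # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz         # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10        # window in which a hung container goes undetected
```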

On Fri, Oct 27, 2017 at 1:23 PM, Rodrigo Campos  wrote:
> I don't think it is configurable.
>
> But I don't really see what you are trying to solve, maybe there is another
> way to achieve it? If you are running a pod of a single container, what is
> the problem that the container is restarted when appropriate instead of
> the whole pod?
>
> I mean, you would need to handle the case where some container in the pod
> crashed or is stalled, right? The liveness probe will be done periodically,
> but until the next check runs, it can be hung or something. So even if
> the whole pod is restarted, that problem is still there. And restarting the
> whole pod won't solve that. So probably my guess is not correct about what
> you are trying to solve.
>
> So, sorry, but can I ask again what is the problem you want to address? :)
>
>
> On Friday, October 27, 2017, David Rosenstrauch  wrote:
>>
>> Was speaking to our admin here, and he offered that running a health check
>> container inside the same pod might work.  Anyone agree that that would be a
>> good (or even preferred) approach?
>>
>> Thanks,
>>
>> DR
>>
>> On 2017-10-27 11:41 am, David Rosenstrauch wrote:
>>>
>>> I have a pod which runs a single container.  The pod is being run
>>> under a ReplicaSet (which starts a new pod to replace a pod that's
>>> terminated).
>>>
>>>
>>> What I'm seeing is that when the container within that pod terminates,
>>> instead of the pod terminating too, the pod stays alive, and just
>>> restarts the container in it.  However I'm thinking that what would
>>> make more sense would be for the entire pod to terminate in this
>>> situation, and then another would automatically start to replace it.
>>>
>>> Does this seem sensible?  If so, how would one accomplish this with
>>> k8s?  Changing the restart policy setting doesn't seem to be an
>>> option.  The restart policy (e.g. Restart=Always) seems to apply only
>>> to whether to restart a pod; the decision about whether to restart a
>>> container in a pod doesn't seem to be configurable.  (At least not
>>> that I could see.)
>>>
>>> Would appreciate any guidance anyone could offer here.
>>>
>>> Thanks,
>>>
>>> DR


Re: [kubernetes-users] GCE and multiple masters

2017-10-24 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Single-zone masters are GA.  Regional masters (multi-zone) are alpha
now, beta before too long.

If we see your master is out, we do try to bring it back, but only
within the same zone.  So a true zonal outage could leave you without
a master (in theory).  As you said, existing Pods will run and restart
in-place.

On Tue, Oct 24, 2017 at 4:44 AM, andygore3 via Kubernetes user
discussion and Q  wrote:
> Apologies, I meant to say GKE rather than GCE.
>
> Thanks


Re: [kubernetes-users] Can namespaces be prioritized ?

2017-10-23 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Oh, I was very wrong :)

No, there's no sort order but name.

On Tue, Oct 17, 2017 at 11:34 AM,   wrote:
> On Monday, October 16, 2017 at 10:42:10 PM UTC-7, David Oppenheimer wrote:
>> Can you explain what you mean by "prioritized" ?
>>
>>
>>
>>
> I mean seeing the namespaces we want at the top of the list (example:
> prod) and the rest of the namespaces at the bottom (example: dev)


Re: [kubernetes-users] Kubernetes Customers

2017-10-20 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Please do not use this list to solicit business.  It is a technical
list for users of Kubernetes to discuss Kubernetes issues.

On Thu, Oct 19, 2017 at 11:00 PM, Jordans Evan
 wrote:
>
>
>
>
> Hi,
>
>
>
> Would you like to reach out to Kubernetes users and also similar technology
> users?
>
>
>
> Users like
>
> Docker
> OpenStack
> Mesos
> ECS
> Cloud Foundry and many more
>
>
>
> Let me know and I will send along more information for you to review how we
> compile and validate our data.
>
>
>
> Regards,
>
> Jordans Evan
>
> Demand Generation Head
>
> Email Data Channels
>
>
>
> Note: Please forward to the right person if you are not the decision maker
>
>
>
> Reply with an “opt-out” to unsubscribe
>
>

