[kubernetes-users] Re: Inquiry to write a guest blog for

2017-08-14 Thread Tony Branson
Hey,

I wanted to follow up and check if you have had the time to think about my 
request to write on your blog.

Please do let me know what you think and I can send some topic ideas across.

Waiting for your reply.

Thanks,

Tony Branson
Database Load Balancing Senior Analyst, ScaleArc
www.scalearc.com

On Tue, Aug 8, 2017 at 10:17 AM, Tony Branson  wrote:

Hello,

Are you looking for guest bloggers and contributors? If yes, I’ve got a great 
idea for a post that would do well on your blog.

I’ve previously written for Computer World, CSO, HomeBusiness Mag and a few 
other sites as well. Here are some links to the posts I’ve written in the past, 
which have been received well by their readers and have resulted in positive 
traction:

  *   
http://www.computerworld.com.au/article/610274/how-make-sure-black-friday-doesn-t-overwhelm-your-online-store/
  *   
http://www.cso.com.au/article/610441/load-balancing-key-business-continuity-cloud/
  *   
https://homebusinessmag.com/businesses/ecommerce/site-management/scaling-smoothly-find-your-business-ready-scale/

I would love to collaborate with you if you are looking for some original and 
engaging content. Let me know if you’re interested and I’ll send some topics 
your way!

Regards,

Tony Branson, ScaleArc
Database Load Balancing Senior Analyst
www.scalearc.com



This message is intended only for the designated recipient(s). It may contain 
confidential or proprietary information and may be subject to confidentiality 
protections. If you are not a designated recipient, you may not review, copy or 
distribute this message. If you receive this message in error, please notify 
the sender by reply e-mail and delete this message.

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


[kubernetes-users] Re: Proposal for a new SIG: SIG-GCP

2017-08-14 Thread 'Adam Worrall' via Kubernetes user discussion and Q&A
A few days have elapsed, without any opposition. I'll get started with the
setup process, and followup when it's open for business.

 - Adam

On Fri, Aug 11, 2017 at 2:14 PM Adam Worrall  wrote:

> Thanks, Jaice !
>
> I'll followup on Monday. Coincidentally, I'm off on vacation for ~2 weeks
> starting Tuesday, so there might be a brief hiatus in subsequent action.
> But hopefully I'll be able to leave knowing that SIG-GCP will indeed become
> a thing :)
>
>  - Adam
>
> On Fri, Aug 11, 2017 at 12:03 PM Jaice Singer DuMars 
> wrote:
>
>> Hi Michael,
>>
>> With no opposition, it seems that this has approval.  For those curious
>> on the process, these are the current guidelines:
>>
>>- Propose the new SIG publicly, including a brief mission statement,
>>by emailing kubernetes-...@googlegroups.com and
>>kubernetes-users@googlegroups.com, then wait a couple of days for
>>feedback
>>
>> So, technically a couple of days would *probably* be tomorrow.  I'd
>> recommend waiting until Monday to get started on all of the SIG creation
>> steps.  It's very, very time consuming.  I have a meeting scheduled with
>> Adam Worrall (the petitioner) on Monday to help guide him through the
>> implementation process so that gives a little time for people to speak up.
>>
>> All my best,
>> Jaice
>>
>>
>> On Friday, August 11, 2017 at 2:44:07 PM UTC-4, Michael Rubin wrote:
>>
>>> Looks like we have enough belief this is useful. Is the next step to
>>> just start forming the SIG?
>>>
>>> mrubin
>>>
>>> On Thu, Aug 10, 2017 at 1:13 PM, Joseph Jacks 
>>> wrote:
>>> > +1.
>>> >
>>> > On Wednesday, August 9, 2017 at 3:16:34 PM UTC-7, Adam Worrall wrote:
>>> >>
>>> >> I am proposing to create SIG-GCP. It would fill a similar role as
>>> SIG-AWS
>>> >> and SIG-Azure, but for GCP. Here are the details:
>>> >>
>>> >> Proposed mission statement:
>>> >>
>>> >> A Special Interest Group for building, deploying, maintaining,
>>> supporting,
>>> >> and using Kubernetes on the Google Cloud Platform.
>>> >>
>>> >> Secondary statement:
>>> >>
>>> >> The SIG will be responsible for designing, discussing, and
>>> maintaining the
>>> >> GCP cloud provider and its relevant tests. The SIG will also be
>>> responsible
>>> >> for any roadmap and release requirements for Kubernetes on GCP.
>>> >>
>>> >> Implementation:
>>> >>
>>> >> For implementation, I will be the initial point of contact and will
>>> at a
>>> >> minimum ensure scheduling, documentation, transparency, and
>>> facilitation are
>>> >> consistent with Kubernetes community SIG standards. We may add
>>> additional
>>> >> leaders from within or outside of Google later.
>>> >>
>>> >> The SIG will meet monthly and the meetings will be announced through
>>> the
>>> >> standard channels.
>>> >>
>>> >> Thanks,
>>> >>
>>> >>  - Adam
>>> >> (eng manager on the Kubernetes/GKE team at Google)
>>> >>
>



[kubernetes-users] Kubernetes client's Presentation

2017-08-14 Thread 'Mehdy Bohlool' via Kubernetes user discussion and Q&A
Hi Kubernetes users/developers,

I am going to present our (non-go) client libraries and current state of
`kubernetes-client` org in our next sig-api-machinery meeting (this
Wednesday).

If you are interested in talking to kubernetes clusters in your native
language (any java speakers here?) or if you are interested in contributing
to an existing or new client library (Haskell anyone?) this presentation is
for you. Please join us on Wednesday August 16th.

More information about the meeting can be found here: https://goo.gl/0lbiM9.

Zoom link: https://zoom.us/my/apimachinery 

Regards,
Mehdy Bohlool |  Software Engineer |  me...@google.com |  mbohlool@github




Re: [kubernetes-users] k8s networking / cluster size limits confusion

2017-08-14 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
On Mon, Aug 14, 2017 at 10:56 AM, David Rosenstrauch  wrote:
> On 2017-08-14 12:13 pm, 'Tim Hockin' via Kubernetes user discussion and Q&A wrote:
>>
>> On Mon, Aug 14, 2017 at 9:03 AM, David Rosenstrauch 
>> wrote:
>>>
>>> So, for example, I have a k8s setup with 4 machines:  a master, 2 worker
>>> nodes, and a "driver" machine.  All 4 machines are on the flannel
>>> network.
>>> I have a nginx service defined like so:
>>>
>>> $ kubectl get svc nginx; kubectl get ep nginx
>>> NAME  CLUSTER-IP  EXTERNAL-IP   PORT(S)AGE
>>> nginx 10.128.105.78  80:30207/TCP   2d
>>> NAME  ENDPOINTS   AGE
>>> nginx 10.240.14.5:80,10.240.27.2:80   2d
>>>
>>>
>>> Now "curl 10.128.105.78" only succeeds on the 2 worker node machines,
>>> while
>>> "curl 10.240.14.5" succeeds on all 4.
>>>
>>> I'm guessing this is expected / makes sense, since 10.240.0.0/12
>>> addresses
>>> are accessible to any machine on the flannel network, whereas
>>> 10.128.0.0/16
>>> addresses can only be reached via iptables rules - i.e., only accessible
>>> on
>>> machines running kube-proxy, aka the worker nodes.
>>
>>
>> Right.  To get to Services you need to either route the Service range
>> to your VMs (and use them as gateways) or expose them via some other
>> form of traffic director (e.g. a load-balancer).
>
>
> Can you clarify what you mean by "route the Service range to your VMs"?  I'm
> familiar with the load balancer approach you mentioned - i.e., to get
> outside machines to access your service you could set up a load balancer
> that points to the NodePort of each machine that's running the service.  How
> would it work to route the service range?

Unfortunately, I can not easily clarify.  It depends on your
infrastructure.  If you have an L2 domain you should be able to set up
static routes on each machine or use proxy ARP.  If you have L3
infrastructure, you can maybe use BGP or something else, or statically
manipulate the routing tables.

E.g. in GCP you can establish a Route resource pointing to a VM, for
the service range.  Set up multiple routes for ECMP-ish behavior and
high(er) availability.  But since it is static you need to manage it
manually.
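For the static-route option Tim describes, a rough sketch on a plain Linux client with direct (L2) reachability to a worker node might look like the following. The Service CIDR is the one from this thread; the node IP is a hypothetical placeholder, and this is illustrative, not a prescription:

```shell
# Hypothetical sketch: route the Service CIDR via a worker node that runs
# kube-proxy, so this machine can reach ClusterIPs. Requires root and
# direct (L2) reachability to the chosen node.
NODE_IP=10.240.14.1   # placeholder worker-node address; substitute your own
ip route add 10.128.0.0/16 via "$NODE_IP"

# Confirm the kernel would use that node as the gateway for a Service IP:
ip route get 10.128.105.78
```

With multiple such routes (one per node) you approximate the ECMP-style behavior Tim mentions, but the routes are static and must be managed by hand.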



[kubernetes-users] Re: Pod network latency problem

2017-08-14 Thread mixia via Kubernetes user discussion and Q&A
Would you mind filing an issue on GitHub?


On Thursday, July 20, 2017 at 6:33:07 AM UTC-7, theairkit wrote:
> Hi
> 
> 
> I've encountered a pod network latency problem.
> 
> 
> I ran a simple mysql query ('time mysql -e "..."') against an outer
> (not in the k8s cluster) host and got the following results:
> 
> 
> 1. From k8s-pod to outer host:
> avg time: 60ms
> worst time: 100ms
> (So applications running in pods see the same latency when talking to
> outer resources.)
> 
> 
> 2. From k8s-host (which contains pod above) to outer host:
> avg time: 10ms
> worst time: 20ms
> 
> 
> Also:
> - There is no dependency on DNS: avg and worst times do not change
>   whether mysql-cli connects to the IP address or the hostname of the outer host.
> - There are enough free resources: a typical k8s node has a 56-core CPU
>   with LA ~ 10 (CPU load only, no I/O load), more than half of memory free,
>   and no threads/processes (user or kernel) consuming up to 100% CPU time.
> 
> 
> As a possible solution I divided CPUs between kernel and user processes
> (via the kernel parameter isolcpus and cgroups), but latency did not change.
> 
> 
> For now I am performing more complex measurements, trying to investigate
> which subsystem (k8s, CNI, iptables, etc.) may lead to this behaviour.
> 
> 
> Have you encountered this or similar behavior?
> Could you provide any tips for my investigation?
> 
> 
> Would be very grateful for your help.
> 
> 
> 
> 
> My k8s cluster:
> 
> 
> kubelet version: 1.5.7
> network type: calico, image: calico/cni:v1.5.5, quay.io/calico/node:v1.0.0
> num of nodes: ~20
> pods per node: ~30
> 
> 
> OS: Debian GNU/Linux 8.6 (jessie)
> Kernel: 4.9.0-0.bpo.2-amd64
> 
> 
> 
> 
> Thanks!



Re: [kubernetes-users] k8s networking / cluster size limits confusion

2017-08-14 Thread David Rosenstrauch
On 2017-08-14 12:13 pm, 'Tim Hockin' via Kubernetes user discussion and Q&A wrote:
> On Mon, Aug 14, 2017 at 9:03 AM, David Rosenstrauch wrote:
>> So, for example, I have a k8s setup with 4 machines:  a master, 2 worker
>> nodes, and a "driver" machine.  All 4 machines are on the flannel network.
>>
>> I have a nginx service defined like so:
>>
>> $ kubectl get svc nginx; kubectl get ep nginx
>> NAME  CLUSTER-IP  EXTERNAL-IP   PORT(S)AGE
>> nginx 10.128.105.78  80:30207/TCP   2d
>> NAME  ENDPOINTS   AGE
>> nginx 10.240.14.5:80,10.240.27.2:80   2d
>>
>> Now "curl 10.128.105.78" only succeeds on the 2 worker node machines,
>> while "curl 10.240.14.5" succeeds on all 4.
>>
>> I'm guessing this is expected / makes sense, since 10.240.0.0/12
>> addresses are accessible to any machine on the flannel network, whereas
>> 10.128.0.0/16 addresses can only be reached via iptables rules - i.e.,
>> only accessible on machines running kube-proxy, aka the worker nodes.
>
> Right.  To get to Services you need to either route the Service range
> to your VMs (and use them as gateways) or expose them via some other
> form of traffic director (e.g. a load-balancer).


Can you clarify what you mean by "route the Service range to your VMs"?  
I'm familiar with the load balancer approach you mentioned - i.e., to 
get outside machines to access your service you could set up a load 
balancer that points to the NodePort of each machine that's running the 
service.  How would it work to route the service range?


Thanks,

DR



Re: [kubernetes-users] k8s networking / cluster size limits confusion

2017-08-14 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
On Mon, Aug 14, 2017 at 9:03 AM, David Rosenstrauch  wrote:
> Thanks for the feedback.  I see I didn't quite understand k8s networking
> properly (and had my cluster misconfigured as a result).
>
> I now have it configured as:
>
> --cluster-cidr=10.240.0.0/12

/12 gives you room for ~4000 nodes at /24 each. (24 - 12 = 12, 2^12 = 4096)

> --service-cluster-ip-range=10.128.0.0/16

room for 65536 Services (2^16)
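The capacity arithmetic above can be sanity-checked with plain shell arithmetic (a quick sketch using the CIDRs from this thread):

```shell
# A /12 cluster CIDR carved into /24 per-node pod ranges:
nodes=$(( 1 << (24 - 12) ))   # 2^12 node-sized /24 subnets
# A /16 service range, one address per ClusterIP:
services=$(( 1 << 16 ))       # 2^16 addresses
echo "$nodes nodes, $services services"   # prints: 4096 nodes, 65536 services
```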

> And I'm deducing that the /12 in the cluster-cidr is what would then allow
> this cluster to go beyond 256 nodes.
>
>
> One other point about the networking I'm a little confused about that I'd
> like to clarify:  it seems that IP's in the cluster-cidr range (i.e.,
> service endpoints) are reachable from any host that is on the flannel
> network, while IP's in the service-cluster-ip-range (i.e., services) are
> only reachable from the worker nodes in the cluster.

Generally correct.

> So, for example, I have a k8s setup with 4 machines:  a master, 2 worker
> nodes, and a "driver" machine.  All 4 machines are on the flannel network.
> I have a nginx service defined like so:
>
> $ kubectl get svc nginx; kubectl get ep nginx
> NAME  CLUSTER-IP  EXTERNAL-IP   PORT(S)AGE
> nginx 10.128.105.78  80:30207/TCP   2d
> NAME  ENDPOINTS   AGE
> nginx 10.240.14.5:80,10.240.27.2:80   2d
>
>
> Now "curl 10.128.105.78" only succeeds on the 2 worker node machines, while
> "curl 10.240.14.5" succeeds on all 4.
>
> I'm guessing this is expected / makes sense, since 10.240.0.0/12 addresses
> are accessible to any machine on the flannel network, whereas 10.128.0.0/16
> addresses can only be reached via iptables rules - i.e., only accessible on
> machines running kube-proxy, aka the worker nodes.

Right.  To get to Services you need to either route the Service range
to your VMs (and use them as gateways) or expose them via some other
form of traffic director (e.g. a load-balancer).

> Again I guess this makes sense in retrospect.  But I was a bit surprised
> when I first saw this, as I had thought that services' cluster IP's would be
> reachable from all machines.  (Or at least from the master too.)
>
> Perhaps you could confirm that I'm understanding this all correctly.  (And
> have my cluster configured correctly?)
>
> Thanks,
>
> DR
>
> On 2017-08-11 11:26 am, Matthias Rampke wrote:
>>
>> Oh hold on. the _service cluster IP range_ is not for pod IPs at all.
>> It's for the ClusterIP of services, so you can have up to 64k services
>> in a cluster at the default setting. The range for pods is the
>> --cluster-cidr flag on kube-controller-manager.
>>
>> On Fri, Aug 11, 2017 at 3:05 PM David Rosenstrauch 
>> wrote:
>>
>>> Actually, that begs another question.  The docs also specify that
>>> k8s
>>> can support up to 5000 nodes.  But I'm not clear on how the
>>> networking
>>> can support that.
>>>
>>> So let's go back to that service-cluster-ip-range with the /16 CIDR.
>>> That only supports a maximum of 256 nodes.
>>>
>>> Now the maximum size for the service-cluster-ip-range appears to be /12
>>> - e.g., --service-cluster-ip-range=10.240.0.0/12  (Beyond that you get a
>>> "Specified service-cluster-ip-range is too large" error.)  So that means
>>> 12 bits for the high part of the address, and with each node taking the
>>> lower 8 bits for the IP address of individual pods, that leaves 12
>>> remaining bits worth of unique IP address ranges.  12 bits = 4095
>>> possible IP addresses for nodes.  How then could anyone scale up to 5000
>>> nodes?
>>>
>>> DR
>>>
>>> On 2017-08-11 10:47 am, David Rosenstrauch wrote:
>>>> Ah.  That makes a bit more sense.
>>>>
>>>> Thanks!
>>>>
>>>> DR
>>>>
>>>> On 2017-08-11 10:41 am, Ben Kochie wrote:
>>>>> Kubernetes will be giving a /24 to each node, not each pod.  Each node
>>>>> will give one IP out of that /24 to a pod it controls.  This default
>>>>> means you can have 253 pods-per-node.  This of course can be adjusted
>>>>> depending on the size of your pods and nodes.
>>>>>
>>>>> This means that you can fully utilize the /16 for pods (minus per-node
>>>>> network, broadcast, gateway)
>>>>>
>>>>> On Fri, Aug 11, 2017 at 4:36 PM, David Rosenstrauch wrote:
>>>>>> According to the docs, k8s can support systems of up to 15
>>>>>> pods.  (See https://kubernetes.io/docs/admin/cluster-large/)  But
>>>>>> given k8s' networking model, I'm a bit puzzled on how that would
>>>>>> work.
>>>>>>
>>>>>> It seems like a typical setup is to assign a
>>>>>> service-cluster-ip-range with a /16 CIDR.  (Say 10.254.0.0/16)
>>>>>>
>>>>>> However, I notice that my cluster assigns a full /24 IP range to
>>>>>> each pod that it creates.  (E.g., pod1 gets 10.254.1.*, pod2 gets
>>>>>> 10.254.2.*, etc.)  Given this networking setup, it would seem
Re: [kubernetes-users] k8s networking / cluster size limits confusion

2017-08-14 Thread David Rosenstrauch
Thanks for the feedback.  I see I didn't quite understand k8s networking 
properly (and had my cluster misconfigured as a result).


I now have it configured as:

--cluster-cidr=10.240.0.0/12
--service-cluster-ip-range=10.128.0.0/16

And I'm deducing that the /12 in the cluster-cidr is what would then 
allow this cluster to go beyond 256 nodes.



One other point about the networking I'm a little confused about that 
I'd like to clarify:  it seems that IP's in the cluster-cidr range 
(i.e., service endpoints) are reachable from any host that is on the 
flannel network, while IP's in the service-cluster-ip-range (i.e., 
services) are only reachable from the worker nodes in the cluster.


So, for example, I have a k8s setup with 4 machines:  a master, 2 worker 
nodes, and a "driver" machine.  All 4 machines are on the flannel 
network.  I have a nginx service defined like so:


$ kubectl get svc nginx; kubectl get ep nginx
NAME  CLUSTER-IP  EXTERNAL-IP   PORT(S)AGE
nginx 10.128.105.78  80:30207/TCP   2d
NAME  ENDPOINTS   AGE
nginx 10.240.14.5:80,10.240.27.2:80   2d


Now "curl 10.128.105.78" only succeeds on the 2 worker node machines, 
while "curl 10.240.14.5" succeeds on all 4.


I'm guessing this is expected / makes sense, since 10.240.0.0/12 
addresses are accessible to any machine on the flannel network, whereas 
10.128.0.0/16 addresses can only be reached via iptables rules - i.e., 
only accessible on machines running kube-proxy, aka the worker nodes.


Again I guess this makes sense in retrospect.  But I was a bit 
surprised when I first saw this, as I had thought that services' cluster 
IP's would be reachable from all machines.  (Or at least from the master 
too.)


Perhaps you could confirm that I'm understanding this all correctly.  
(And have my cluster configured correctly?)


Thanks,

DR

On 2017-08-11 11:26 am, Matthias Rampke wrote:
> Oh hold on. the _service cluster IP range_ is not for pod IPs at all.
> It's for the ClusterIP of services, so you can have up to 64k services
> in a cluster at the default setting. The range for pods is the
> --cluster-cidr flag on kube-controller-manager.
>
> On Fri, Aug 11, 2017 at 3:05 PM David Rosenstrauch wrote:
>> Actually, that begs another question.  The docs also specify that k8s
>> can support up to 5000 nodes.  But I'm not clear on how the networking
>> can support that.
>>
>> So let's go back to that service-cluster-ip-range with the /16 CIDR.
>> That only supports a maximum of 256 nodes.
>>
>> Now the maximum size for the service-cluster-ip-range appears to be /12
>> - e.g., --service-cluster-ip-range=10.240.0.0/12  (Beyond that you get a
>> "Specified service-cluster-ip-range is too large" error.)  So that means
>> 12 bits for the high part of the address, and with each node taking the
>> lower 8 bits for the IP address of individual pods, that leaves 12
>> remaining bits worth of unique IP address ranges.  12 bits = 4095
>> possible IP addresses for nodes.  How then could anyone scale up to 5000
>> nodes?
>>
>> DR
>>
>> On 2017-08-11 10:47 am, David Rosenstrauch wrote:
>>> Ah.  That makes a bit more sense.
>>>
>>> Thanks!
>>>
>>> DR
>>>
>>> On 2017-08-11 10:41 am, Ben Kochie wrote:
>>>> Kubernetes will be giving a /24 to each node, not each pod.  Each node
>>>> will give one IP out of that /24 to a pod it controls.  This default
>>>> means you can have 253 pods-per-node.  This of course can be adjusted
>>>> depending on the size of your pods and nodes.
>>>>
>>>> This means that you can fully utilize the /16 for pods (minus per-node
>>>> network, broadcast, gateway)
>>>>
>>>> On Fri, Aug 11, 2017 at 4:36 PM, David Rosenstrauch wrote:
>>>>> According to the docs, k8s can support systems of up to 15
>>>>> pods.  (See https://kubernetes.io/docs/admin/cluster-large/)  But
>>>>> given k8s' networking model, I'm a bit puzzled on how that would
>>>>> work.
>>>>>
>>>>> It seems like a typical setup is to assign a
>>>>> service-cluster-ip-range with a /16 CIDR.  (Say 10.254.0.0/16)
>>>>>
>>>>> However, I notice that my cluster assigns a full /24 IP range to
>>>>> each pod that it creates.  (E.g., pod1 gets 10.254.1.*, pod2 gets
>>>>> 10.254.2.*, etc.)  Given this networking setup, it would seem that
>>>>> Kubernetes would only be capable of launching a maximum of 256 pods.
>>>>>
>>>>> Am I misunderstanding how k8s works in this regard?  Or is it that
>>>>> the networking would need to be configured differently to support
>>>>> more than 256 pods?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> DR


Re: [kubernetes-users] Dynamic volume provisioning not working for AWS EBS

2017-08-14 Thread jamelseagraves
Hi Rodrigo, thanks for the reply! I did confirm that the node is also in AZ 
us-west-2a. Such a weird issue; it seems like it should be pretty 
straightforward following the docs.



[kubernetes-users] Re: Grafana Data Lost after Minikube restart

2017-08-14 Thread Kamesh Sampath
Thanks, let me check that out.

On Friday, August 11, 2017 at 10:37:49 PM UTC+5:30, Zack Butcher wrote:
>
> In the default Istio deployment's configs, Grafana is not set up to write 
> data to any persistent volume; you can see the deployment here.
> 
> There are a few ways you could persist the data, one of the easiest may be 
> to map the Grafana directory to a directory on the minikube host machine. 
> You could also write it to a directory on the minikube VM, which AFAIK 
> minikube persists the state of when shutdown gracefully.
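As a sketch of the host-directory option Zack describes (the paths and volume names here are assumptions to verify against your minikube and Istio versions):

```shell
# Make a host directory visible inside the minikube VM:
minikube mount "$HOME/grafana-data:/data/grafana"

# Then, in the Grafana deployment spec, point the data volume at the
# mounted path with a hostPath volume, e.g.:
#   volumes:
#   - name: grafana-data
#     hostPath:
#       path: /data/grafana
# keeping the container mount at Grafana's data dir (/var/lib/grafana).
```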
>
> On Friday, August 11, 2017 at 9:52:27 AM UTC-7, Kamesh Sampath wrote:
>>
>> nothing I did, just a standard Istio on minikube installation. no 
>> customizations
>>
>> On Friday, August 11, 2017 at 10:20:17 PM UTC+5:30, Rodrigo Campos wrote:
>>>
>>> (Moving to kubernetes users) 
>>>
>>> On Fri, Aug 11, 2017 at 03:30:35AM -0700, Kamesh Sampath wrote: 
>>> > 
>>> > why frequently i see the grafana dashboard losing the data and 
>>> services 
>>> > info - I am seeing this happening after i get my computer to wake up 
>>> after 
>>> > in sleep or I restart my minikube. 
>>> > 
>>> > Whats that way to get back data - Do I need to setup any persistence 
>>> ??? 
>>>
>>> Are you using a HostPath volume or something to store the data? How is 
>>> the 
>>> volume configured? 
>>>
>>



[kubernetes-users] Re: SSH into pod

2017-08-14 Thread Warren Strange

Yes, it is possible, but it is not recommended.  Here is an older article 
that discusses the issues:

https://jpetazzo.github.io/2014/06/23/docker-ssh-considered-evil/ 


If you *really* need to do this, you must enable sshd in the container, and 
create a kubernetes service to reach it. You will want to read up on 
services:

https://kubernetes.io/docs/concepts/services-networking/service/ 
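As a rough sketch of the service approach (assuming the pod is named `mypod` and already runs sshd on port 22; both are placeholders, not details from the thread):

```shell
# Expose the pod's sshd through a NodePort Service, so a machine without
# kubectl can reach it via any node's IP address.
kubectl expose pod mypod --name=mypod-ssh --type=NodePort --port=22

# Look up the node port that was allocated:
kubectl get svc mypod-ssh -o jsonpath='{.spec.ports[0].nodePort}'

# Then, from the local machine (no kubectl needed):
#   ssh -p <nodePort> user@<any-node-ip>
```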



On Monday, August 14, 2017 at 12:48:32 AM UTC-6, eswar...@gmail.com wrote:
>
> Hi Warren Strange,
>
> Thanks for the reply.
>
> Yes, but we can use this command only where kubectl is installed.
> But I need to ssh from my local Linux machine, which doesn't have 
> kubectl.
>
> Is it possible?
>
>  
>
>



[kubernetes-users] Re: SSH into pod

2017-08-14 Thread eswari . hima
Hi Warren Strange,

Thanks for the reply.

Yes, but we can use this command only where kubectl is installed.
But I need to ssh from my local Linux machine, which doesn't have kubectl.

Is it possible?

 
