Hmm, this is a bit of a stretch for NodePorts. Why not use a HostPort
and update DNS dynamically if/when their VM updates?
On Sun, Sep 16, 2018 at 6:30 PM Phạm Huy Hoàng wrote:
>
> Hi Tim,
>
> Thanks for your reply. I'll explain our use case below.
>
> Our use case is that we provide a service
We do not expose that as a parameter today. We can discuss the
options here, but there's no short answer. Can you talk about what
you're doing that needs so many node ports?
On Fri, Sep 14, 2018 at 8:27 AM Phạm Huy Hoàng wrote:
>
> For our use-case, we need to access a lot of services via
Did you check that bug? Is the whole sysfs mounted read-only or just that
file? Can you show me /proc/mounts from the node?
On Wed, Sep 12, 2018, 3:12 AM Grzegorz Panek wrote:
> Yes, the kube-proxy pod is running in privileged mode, but the problems
> still occurred
>
space?
> HTH,
>
> DR
>
> On 9/6/18 4:33 PM, 'Tim Hockin' via Kubernetes user discussion and Q
> wrote:
> > You have to understand what you are asking for. You're saying "this
> > data is important and needs to be preserved beyond any one pod (a
> > persistent volume)" but you're also saying "the pods have no identity
> > because they can scale horizontally". These are mutually incompatible
> > statements.
> >
> > You really want a shared storage API, not volumes...
> > On Thu
These are mutually incompatible statements.
You really want a shared storage API, not volumes...
On Thu, Sep 6, 2018 at 1:08 PM Naseem Ullah wrote:
>
> I see I see.. what about autoscaling statefulsets with an HPA?
>
> > On Sep 6, 2018, at 4:06 PM, 'Tim Hockin' via Kubernetes user discussion and
Deployments and PersistentVolumes are generally not a good
combination. This is what StatefulSets are for.
There's work happening to allow creation of a volume from a snapshot,
but it's only Alpha in the next release.
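For reference, a minimal StatefulSet sketch (names and sizes are illustrative): each replica gets a stable identity and its own PersistentVolumeClaim via volumeClaimTemplates, which is what Deployments + PVs can't give you:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumeClaimTemplates:
  - metadata:
      name: data            # each replica (web-0, web-1, ...) gets its own PVC
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi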
On Thu, Sep 6, 2018 at 1:03 PM Naseem Ullah wrote:
>
> Hello,
>
> I have a
If you pointed them at the same NFS export (server + path) then it's
expected that they would see each other's changes. You can either
create another export on the server or mount a sub-dir of that export
(e.g. export /home, but mount /home/you vs /home/me) or you can use
k8s' `subPath` field on
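A sketch of the `subPath` approach (server name, paths, and image are illustrative) - both pods mount the same export, but each sees only its own sub-directory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-user
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: home
      mountPath: /data
      subPath: you          # mount only /home/you, not the whole export
  volumes:
  - name: home
    nfs:
      server: nfs.example.com
      path: /home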
You asked this question in another thread. I and others answered
there - why open a new thread?
On Sun, Aug 12, 2018 at 11:05 PM Basanta Kumar Panda
wrote:
>
> HI,
> I have 1 master and 2 slave nodes; both slave nodes are down, and
> since the master is up the job is scheduling and the pod is
Well, we're not "starting" the pods, we're queuing them up for when
nodes become available. Would you rather they get rejected
immediately? What if a node comes online 3 seconds after that
rejection?
On Fri, Aug 10, 2018 at 2:28 PM Basanta Kumar Panda
wrote:
>
> Hi ,
>
> Here is one of the
We only use Calico in the mode that reads node.spec.podCIDR, so it doesn't
need etcd.
On Wed, Aug 8, 2018 at 3:36 PM parthi.geo wrote:
> Wondering if Google Kubernetes Engine native CNI add-on (calico) shares
> "etcd" with master / control plane.
>
>
> Regards
> Parthiban,S
>
Most of what you're asking for is available via the k8s API, if you watch
it.
On Wed, Aug 8, 2018 at 12:58 PM David Rosenstrauch
wrote:
> As we're getting ready to go to production with our k8s-based system,
> we're trying to pin down exactly how we're going to do all the needed
>
Can you explain more what you mean?
Who writes this file?
Who reads this file?
What is the lifetime of this file?
Is this a simple one-writer, one-reader case?
On Fri, Aug 3, 2018 at 10:34 AM 'zulv' via Kubernetes user discussion and
Q wrote:
> The issue is that I would like to persistent
; Regards!
>
> On Wed, May 23, 2018, 20:33 'Tim Hockin' via Kubernetes user discussion
> and Q <kubernetes-users@googlegroups.com> wrote:
>
>> The problem is that we only get 63 characters to make a unique name, and
>> both kubernetes namespace and service names can be th
like you're
>> not handling "/", only /angular and /angular2)
>> 2. All backend services listed (in your case "angular-svc") returns HTTP
>> 200 OK for "GET /" when called directly (not through the created load
>> balancer), you can test this with &
The problem is that we only get 63 characters to make a unique name, and
both kubernetes namespace and service names can be that long themselves,
and even then they are not unique across clusters. We could use the UUID
and up to 27 characters of the combination of those names, but then we have
a
Did you try /* ?
https://cloud.google.com/compute/docs/load-balancing/http/url-map
On Mon, May 21, 2018 at 10:44 AM Jonathan Mejias
wrote:
> The only way I resolved the problem was by changing to an nginx
> controller instead of the GKE one. Installing nginx-controller with
wth-aachen.de>
wrote:
> Hi there!
>
> 'Tim Hockin' via Kubernetes user discussion and Q writes:
>
> > Maybe I don't understand - the labels in the template are applied
> > to the pod. Just label select against pods.
>
> If the deployment has been updated recently, I
You can build that as a controller that runs in-cluster, picks one of the
nodes, and assigns the static IP. It will still be racy, though, in that
it will never be instantaneous.
On Sun, May 20, 2018, 3:28 PM wrote:
> An update: I was able to do this with the standard
Maybe I don't understand - the labels in the template are applied to the
pod. Just label select against pods.
On Sun, May 20, 2018, 8:12 AM Torsten Bronger
wrote:
> Hi there!
>
> Since this question is apparently off-topic on SO
>
Kubernetes' Ingress abstraction does what you want.
On Wed, May 16, 2018 at 6:38 PM Jonathan Mejias <drumber...@gmail.com>
wrote:
> I'm using Kubernetes to deploy apps; how can I create that virtual host
> in a container cluster?
>
> On Wed, May 16, 2018, 19:36 'Tim Hockin' v
HTTP gives you a much better solution - virtual hosts.
The 'host' header tells your HTTP ingress which logical service to access.
e.g. `curl -H 'Host: foo.com' http://210.210.210.22:80/` is different
from `curl -H 'Host: bar.com' http://210.210.210.22:80/`
On Wed, May 16, 2018 at 1:19 PM
Admission controller webhooks are how you can add custom pre-admission
checks.
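As a rough sketch (service names, URL path, and API versions are illustrative; older clusters used `admissionregistration.k8s.io/v1beta1` and `extensions/v1beta1` Ingress), a ValidatingWebhookConfiguration that sends Ingress creations and updates to your own checker service:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-host-check
webhooks:
- name: ingress-host.example.com
  rules:
  - apiGroups: ["networking.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["ingresses"]
  clientConfig:
    service:
      namespace: default
      name: ingress-host-checker   # your webhook server, serving TLS
      path: /validate
    caBundle: <base64 CA cert>     # placeholder; CA that signed the server cert
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

The checker receives an AdmissionReview for each Ingress and can reject any whose host field doesn't match your policy.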
On Tue, May 8, 2018 at 11:45 PM Christopher Schmidt
wrote:
> Hi,
> what I want is to enforce a specific host setting for users' Ingresses.
> Let's say every Ingress host setting has
> - to be
..@darose.net>
wrote:
> Thanks for the suggestion, Tim. Looks like that might fit the bill.
> I'll kick the tires on it a bit.
>
> Is kustomize the k8s project's preferred (or suggested) way to handle
> this type of situation?
>
> Thanks,
>
> DR
>
> On 04/2
Ingress is sort of the lowest-common-API across many platforms. I am not
sure that the majority of them can support it natively. I think it's
logical, but may not be practical yet.
On Sat, Apr 28, 2018, 7:41 AM Kanthi P wrote:
> ohk Tim. Does it sound like a good
Ingress does not do prefix stripping or URL munging by default, as not all
platforms support it. I verified against the Google implementation, it
passes the URL path through directly.
On Sat, Apr 28, 2018, 6:09 AM Kanthi P wrote:
> Thanks David for the example. I
Does this head in the direction you want?
https://github.com/kubernetes/kubectl/tree/master/cmd/kustomize
On Fri, Apr 27, 2018 at 10:52 PM David Rosenstrauch
wrote:
> We've been using Kubernetes to get a dev version of our environment up
> and running, and so far the
What are you using for a client? Is it by chance http and written in go?
Some client libraries, including Go's http, aggressively reuse
connections.
If you try with something like exec netcat, I bet you see different results.
BTW, one might argue that if you depend on RR, you will eventually be
Without a statistically significant load, this is random. What you are
seeing satisfies that definition.
The real reason is that round-robin is a lie. Each node in a cluster will
do its own RR from any number of clients.
On Fri, Apr 13, 2018, 10:51 AM wrote:
> On
The load is random, but the distribution should be approximately equal for
non-trivial loads. E.g. when we run tests for 1000 requests you can see it
is close to equal.
How unequal is it? Are you using session affinity?
On Fri, Apr 13, 2018, 10:34 AM Cristian Cocheci
That upstream setting is upstream from kube-dns itself; pods won't see it.
On Tue, Apr 10, 2018 at 4:13 PM Marcio Garcia wrote:
> Hi All,
>
>
> Maybe this is a dumb question, but I didn't find any answer for that.
>
>
> Recently I changed my kube-dns config with this:
>
> apiVersion: v1
NodePorts are published on all nodes, so any one node going away is not a
problem, per se. But NodePorts alone require you to use a specific node
IP, which is a problem. NodePorts were designed to be hidden behind
load-balancers or proxies with stable VIPs, which is what it sounds like
you
A private cluster is private by default. You cannot access the master from
the internet. You can specifically change that with the master authorized
networks feature, or you can access it from within your VPC network.
On Thu, Mar 29, 2018 at 10:42 PM Vinita wrote:
> Hi,
>
>
It all depends on your needs for availability and performance. "a few
containers" can usually fit on a single node. You can run a 1-node, 1-core
GKE "cluster" for the cost of the VM (< $30/month) + any additional
resources you use.
On Fri, Mar 9, 2018 at 10:00 AM wrote:
On Fri, Mar 30, 2018 at 7:46 AM wrote:
> - the ability to have one IP per pod?
> - the ability to use same listening port on each container?
You also said "for performance cost I rather also to bind to physical
port". If you are binding to the physical port, you can't use
Which environment and which Ingress controller?
On Thu, Mar 29, 2018 at 8:42 PM Tyler Johnson
wrote:
> Is it possible that an HTTP load balancer (auto-configured as part of an
> Ingress) could occasionally drop backend connections while leaving the
> frontend
The normal answer is 10.0.0.0/8, and if you need more 192.168.0.0/16 and
172.16.0.0/12
On Thu, Mar 29, 2018 at 1:33 AM Immadi Ramalingeswararao <
immadi_ramalingeswara...@papajohns.com> wrote:
> Hi , I have my jenkins slaves running on gke dynamically on port 5. If
> I don't allow 0.0.0.0 to
What networking features do you lose?
On Thu, Mar 29, 2018, 8:59 AM wrote:
> Hi
>
> I'd like to setup my pods to have two network, the first is the default
> k8s network and the second one the host (node) network.
>
> The reason is that I need to bind to range of UDP ports,
The simple answer is to change the limit. The more robust answer would be
to make the limit more dynamic, but that can fail at runtime if, for example,
kernel memory is fragmented. Also I am not sure that tunable can be
live-adjusted.
:(
We have ideas about how to be more frugal with conntrack
You can't. It has to be in the /etc/passwd in the image. I think this is
an area we could improve the UX, but I am not sure what the right answer
is. This is no different than raw docker, as far as I know.
On Wed, Mar 14, 2018 at 10:00 PM wrote:
> how to specify a
Point of clarity - not necessarily L2. Flat L3 space is closer.
On Fri, Mar 9, 2018, 4:46 PM Igor Cicimov
wrote:
> In kubernetes ALL pods have access to each other by default they reside in
> a flat L2 lan space.
>
That's such a broken assumption.
StatefulSet is the only primitive that satisfies this condition for now.
On Thu, Mar 1, 2018 at 1:48 PM, wrote:
> 1. Can't change the apparent hostname of the worker to be either an IP/
> dash-separated IP worker DNS, as Airflow only
Does it have to be DNS? Are unique IPs sufficient?
On Thu, Mar 1, 2018 at 10:15 AM, wrote:
> I'm using Apache Airflow, which uses a scale out worker model.
>
> The workers run jobs, and the job logs are collected from the workers via a
> http call from a central
The short answer is that you are ascribing identity to pods that don't
really have any. They are literally called "replicas". If you need
identity, you really sort of want StatefulSet. If that doesn't work,
it would be good to understand more concretely what you're trying to
achieve.
On Thu,
Some of the older presentations I have done were really introductory.
They might be a bit stale, but those fundamentals have not changed
much.
https://speakerdeck.com/thockin?page=2
On Sun, Feb 18, 2018 at 8:56 AM, wrote:
> I'm trying to get my head around kubernetes. I've
I don't know VMWare either, but that seems disastrous from a
predictability point of view.
On Wed, Feb 14, 2018 at 8:02 PM, Warren Strange
wrote:
>
> AFAIK you can not split a pod between more than one node.
>
> I know nothing about VMware, but I am guessing they can
Currently the only way to get a static egress IP is to install your
own proxy VM(s) - either L7 or L4 (NAT).
On Tue, Feb 13, 2018 at 1:17 PM, wrote:
> Hi,
>
> I've got an RDS Database running on AWS, and I want to access it from
> Kubernetes, running on GKE.
>
> My cluster
kube-proxy should be renamed, in truth, but that isn't happening in
the near term.
On Sun, Feb 11, 2018 at 1:54 PM, Scalefastr wrote:
> kubectl proxy is just for the API server.
>
> But kube-proxy is for services defined in the cluster and available where
> the proxy is
Kubernetes does not demand an overlay, and most of the overlays used
for kube employ some form of node gateway to allow packets to cross
between planes.
On Sat, Feb 3, 2018 at 12:59 PM, Chase wrote:
> Hello - I am trying to understand how "hostNetwork: true" works with
Thanks for the followup!
On Fri, Feb 2, 2018 at 3:56 PM, R Melton wrote:
>
> I later went back and created a new image file (on docker) and reran the
> runAsUser (and fsGroup) yaml file and it worked correctly.
>
> On Friday, February 2, 2018 at 11:52:07 AM UTC-6, R Melton
It looks like that file is not readable by a non-root user. You're
volunteering to lower your privileges, but you need to account for
that in the image. If this is a custom image, chmod ugo+r that file?
If it is a pre-built image, yell at whoever built it.
On Fri, Feb 2, 2018 at 9:52 AM, R
On Jan 31, 2018 12:03 PM, wrote:
Hi guys,
I was wondering if there is another way to route external traffic to a Pod.
So I know that you can use a Kubernetes Service of type "LoadBalancer"
which on GKE will automatically create a Google Cloud Loadbalancer for you
(as
kubectl logs ... --previous ?
On Wed, Jan 31, 2018 at 6:38 AM, Colstuwjx wrote:
>>
>>
>>>
>>> But, what if we want to trigger the detail exited reason for the exited
>>> containers? Is there any parameters configure that?
>>
>>
>> Have you checked the terminationGracePeriod?
Look into NetworkPolicy - it's not your traditional VLAN approach to
ACL, it's more dynamic and application-focused.
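For example, a minimal NetworkPolicy sketch (labels and port are illustrative): only pods labeled role=frontend may reach the selected db pods, and only on the stated port:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:            # the pods this policy protects
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:        # who may connect in
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 5432
```

Note that enforcement requires a network plugin that implements NetworkPolicy; with a plugin that doesn't, the object is accepted but has no effect.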
On Mon, Jan 29, 2018 at 10:27 PM, Oğuz Yarımtepe
wrote:
> My current k8s structure is 2 worker and one master node deployment. I am
> testing it with
a VPN connection which will offer us an IP from inside this
> cluster. So we can access the services inside our browser etc. (it will look
> public for us, but it's private).
>
> Is there a way documentated on how we can set this up?
>
> On 20 January 2018 at 23:36, 'Ti
Please be aware that there are semantic mismatches between how people
think about DNS resolution and how some DNS clients implement it.
Resolv.conf is somewhat under-specified in this regard.
Specifically, if you have two `nameserver` lines, which point to
servers with different information (e.g.
Important - this is for Kubernetes on GCE, not for GKE. GKE masters use a
public IP, even though the traffic never leaves Google. We are looking at
how best to support true private GKE.
On Jan 20, 2018 2:34 PM, "Tim Hockin" wrote:
> You should not need a public IP unless you
You should not need a public IP unless you access public things. Stuff
like GCR (inside Google) will be OK. If you need to egress, you need a NAT
(DIY for now).
On Jan 20, 2018 10:29 AM, "lvthillo" wrote:
> We want to start using Kubernetes on Google Cloud
Make sure all firewalls are open?
I just tested it and it works:
```
$ kubectl run udp --image=ubuntu -- bash -c "while true; do sleep 10; done"
deployment "udp" created
$ kubectl expose deployment udp --port=12345 --protocol=UDP --type=LoadBalancer
service "udp" exposed
```
Then I got the
Kubernetes is designed for a smaller number of larger clusters. What does
"stepping on toes" mean? Certainly container isolation is not perfect, but
with realistic resource requests it is pretty decent.
That said, many people do what is being suggested, and many are happy and
successful. The
I think this is exactly the sort of thing that a custom deployment-like
operator is good for. You have particular needs that are not easily
satisfied with existing constructs. CRDs and controllers let you build
this, and figure out how you want it to work.
Later, maybe, you can solicit other
Hi Mike,
> service tokens can't come through to nodes because kubelet tries to talk to
> the api server through the api's "advertised ip address", which defaults to
> the default route, which is shunted.
This seems wrong. Kubelet has a master address that is *NOT*
dependent on Services. If
The main difference is that EBS and things like that are fully
managed, and you should be able to assume some operational simplicity
(if their capabilities meet your needs). If you need multi-writer,
for example, EBS will not suffice. Clustered filesystems require YOU
to operate them (for now?),
Why do you need a Service at all?
On Jan 1, 2018 8:43 PM, "Mario Rodriguez" wrote:
> Hi, I'm in the middle of creating an K8s app that doesn't expose any HTTP
> endpoints, is just a background app that pulls messages from a message bus
> and takes some action based on the
AFAIK we need CloudNAT to become available, at which point we can use
it pretty much transparently.
On Wed, Dec 20, 2017 at 6:56 AM, wrote:
> On Thursday, August 10, 2017 at 1:03:42 AM UTC-5, Tim Hockin wrote:
>> The GKE team has heard the desire for this and is looking at
That is what a container does. PID namespaces, unlike most others, nest.
On Dec 19, 2017 5:04 AM, wrote:
> hi all,
>
> i got confused that when i create a pod like mysql, i can see the mysqld
> process in the host, any one can tell me why that happens?
> thanks.
>
Well, binding to 127 addresses means nobody else can access you.
Binding to a specific IP is just not the "normal" thing to do in
network programming, in my experience. Unless you know something
specific, 0 is the best option. E.g. you might have more than one
network interface, and 0 is the
What are you doing with port-forward inside your pod?
Binding to 0 is the "normal" way to do things unless you have reason to do
otherwise.
On Fri, Dec 15, 2017 at 4:42 PM, Dietrich Schultz
wrote:
> Just started exploring kubernetes, and ran into this. Haven't found
What I have seen several people do for this is to increment an env
var, or use a timestamp - something trivial that doesn't impact the
app, but forces a restart. An env var can never be updated without a
restart.
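A sketch of that trick as a fragment of a Deployment's pod template (names and values are illustrative); bumping the value changes the pod template, which triggers a rolling restart:

```yaml
spec:
  template:
    spec:
      containers:
      - name: app
        image: my-app:1.0
        env:
        - name: RESTART_TRIGGER   # unused by the app; bump to force a rollout
          value: "2"
```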
On Fri, Dec 15, 2017 at 2:00 AM, Keshava Bharadwaj
On Wed, Dec 13, 2017 at 6:47 AM, Gmail wrote:
> Sorry, not follow the price argument. You are only charged for the nodes you
> use on a Kubernetes cluster (no Masters, no matter cluster size).
>
>
> I don't understand very well "no matter cluster size" whereas no one has
>
On Tue, Dec 12, 2017 at 12:49 AM, wrote:
> I have a situation like this:
>
> - a cluster of web machines
> - a cluster of db machines and other services
I think you have made the problem much more complicated than it needs
to be. Why not one cluster?
> The question is how
You want a template expander before you get to kubectl. Otherwise, the
thing that is running isn't reflected by any versionable artifact.
Because templating is a highly opinionated space, we do not (currently) have
one that is built-in.
On Dec 7, 2017 10:12 AM, "Henry Hottelet"
Kubectl is not a templating system, which is what you are asking for.
Create/Apply are declarative plumbing, suited to things you would check
in to source control. There are porcelain commands, e.g. `kubectl run`, which
are closer to `docker run` but less suitable for source control.
On Dec 7, 2017
Did you figure it out?
On Mon, Dec 4, 2017 at 10:15 AM, Kyunam Kim wrote:
> Understood - thanks!
>
> On Thursday, November 30, 2017 at 8:55:31 PM UTC-8, Tim Hockin wrote:
>>
>> If it came in via the public IP, as you said:
>> `https://PublicIP:31245/app/rest/init` then the
Would you prefer a DaemonSet instead?
On Dec 4, 2017 7:28 AM, "Itamar O" wrote:
> I'm guessing you have as many replicas as you have nodes, and you used the
> "required" affinity policy over the "preferred" one.
> If this is the case, then when you try to update the
Alternatively, you can scale it to 0 replicas.
On Fri, Dec 1, 2017 at 2:40 PM, Peter Idah wrote:
> Hi,
>
> tiller is deployed in acs-engine as an addon. The kubernetes addon manager
> constantly ensures that defined addons are running, so every time you delete
> the tiller
Did you tell the app about the 192 address? How did it know that IP
to redirect you?
On Thu, Nov 30, 2017 at 4:07 PM, Kyunam Kim wrote:
> service IP
>
> On Thursday, November 30, 2017 at 3:32:07 PM UTC-8, Tim Hockin wrote:
>>
>> It's not clear what 192 address represents -
It's not clear what 192 address represents - the pod IP, the service
IP, or an external LB IP?
Also note that you have / characters where you need . characters - I
assume that is human error in reporting the issue?
On Thu, Nov 30, 2017 at 3:27 PM, Kyunam Kim wrote:
> I'm
If you curl it repeatedly, does it always give the right answer? Can you
verify that from both VMs?
What is the error case? You said "the home page" - is that your apps home
page or something else?
On Nov 22, 2017 5:23 PM, wrote:
> Thanks for sticking with me Tim.
>
> So what
Sorry. When you `kubectl get services` you get a listing which
includes the "cluster IP" of a service. This is a VIP that is
reachable by your cluster nodes. You can SSH into a VM and curl your
cluster IP and it will give us a clue where the process is breaking
down. In particular, curl it 100
Log in to each VM and try accessing the service's clusterIP, e.g. with curl.
Try connecting to the VM IP on the service node port.
On Wed, Nov 22, 2017 at 1:39 PM, wrote:
> So the application is created using Flask, which is a python web-framework.
> Logging into the
Things to try:
Log in to each VM and try accessing the service's clusterIP, e.g. with curl.
Make sure kube-proxy is running on both VMs.
Check kube-proxy logs for any obvious errors, like failure to sync with
master.
Try connecting to the VM IP on the service node port.
On Nov 22, 2017 5:49
You'll have to say more about why you can't modify them. Did you get an error?
On Tue, Nov 21, 2017 at 12:43 AM, Yong Zhang wrote:
> Hi, all
>
> I have some pods running in kube-system namespace, I want to schedule these
> pods to some dedicated hosts to avoid resource
And know that we're looking at ways to optimize the scale-down
resourcing to be more appropriate for 1-node, 1-core "clusters"
On Fri, Nov 17, 2017 at 9:42 PM, 'Robert Bailey' via Kubernetes user
discussion and Q wrote:
> You can inspect the pods running in the
We don't auto-apply ownership changes to hostPath volumes because that
would allow, for example, a user to take over /etc. We've considered
heuristics like "apply ownership if we make the directory" or "apply
ownership if the path is under a flag-defined root", but none of them
have been totally
On Thu, Nov 9, 2017 at 2:27 AM, wrote:
> We measured that we lose at least one order of magnitude in terms of latency,
> which is our key KPI in this setup.
If you are moving from a model where your apps were guaranteed to be
colocated into Kubernetes, you are going to
Are you concerned about perf because you measured it? Or because you
suspect it might become a thing later?
Are you really sure that your pods will ALWAYS be on the same host?
Are your pods 1:1 or 1:N relationships?
Could these highly-connected pods just be one bigger pod?
To be sure, there's
I am assuming 10.59.246.49 is your cluster IP, and port 80 is the
service port as defined in the Service.spec.ports[]?
Are your pods actually listening on port 80? Or are they on a different port?
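For reference, the mapping in question (values are illustrative): `port` is what clients hit on the cluster IP, while `targetPort` must match what the pods actually listen on:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80          # the service (cluster IP) port clients connect to
    targetPort: 8080  # the port the selected pods actually listen on
```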
On Wed, Nov 8, 2017 at 9:41 AM, bg303 wrote:
> Thanks, Tim. That guide
This is not a tested configuration - I am not sure that there are
enough knobs in, for example, kubelet to make that happen, and I am
pretty sure kube-proxy will not work.
On Mon, Nov 6, 2017 at 10:39 PM, wrote:
> I am working to configure two kubernetes cluster
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
Does the Service have Endpoints?
On Tue, Nov 7, 2017 at 1:10 PM, bg303 wrote:
> I created a Deployment, Service, Ingress and then an nginx-ingress
> deployment. If I point my DNS record at any of
Starting with the last question first:
> Any ideas as to what I am doing wrong?
Yes - You're trying to do it all yourself instead of relying on the
pieces that have already been built and tested :)
On Mon, Nov 6, 2017 at 9:49 AM, bg303 wrote:
> I recently tried to put SSL
ne.
> Thanks.
>
> ________
> From: 'Tim Hockin' via Kubernetes user discussion and Q
> <kubernetes-users@googlegroups.com>
> Sent: Wednesday, November 1, 2017 2:03:01 AM
> To: Kubernetes user discussion and Q
> Subject: Re: [kubernetes-users] Imported docker imag
It could be that kubernetes is trying to re-pull the image. Did you
set `imagePullPolicy: Never` ?
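For an offline node, a pre-loaded image can be pinned like this (pod and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tf
spec:
  containers:
  - name: tf
    image: tensorflow/tensorflow:latest
    imagePullPolicy: Never   # use only the locally loaded image; never contact a registry
```

With `Never`, the kubelet fails the pod if the image is not already present on the node, rather than attempting a pull.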
On Mon, Oct 30, 2017 at 11:15 PM, lppier wrote:
> Hi,
>
> I am setting up some servers in an offline environment, and am downloading
> some tensorflow images for use on these
On Mon, Oct 30, 2017 at 7:56 PM, David Rosenstrauch wrote:
> Hi. I'm having some issues migrating an (admittedly somewhat
> unconventional) existing system to a containerized environment (k8s) and was
> hoping someone might have some pointers on how I might be able to work
>
What are you trying to do? Do you want 2 versions in perpetuity or do
you want to do some form of switchover?
On Mon, Oct 30, 2017 at 3:34 AM, wrote:
> I'm trying to figure out what's the best approach to deploy multiple versions
> of the same software in kubernetes
What Rodrigo said - what problem are you trying to solve?
The pod lifecycle is defined as restart-in-place, today. Nothing you
can do inside your pod, except deleting it from the apiserver, will do
what you asking. It doesn't seem too far fetched that a pod could
exit and "ask for a different
Single-zone masters are GA. Regional masters (multi-zone) are alpha
now, beta before too long.
If we see your master is out, we do try to bring it back, but only
within the same zone. So a true zonal outage could leave you without
a master (in theory). As you said, existing Pods will run and
Oh, I was very wrong :)
No, there's no sort order but name.
On Tue, Oct 17, 2017 at 11:34 AM, wrote:
> On Monday, October 16, 2017 at 10:42:10 PM UTC-7, David Oppenheimer wrote:
>> Can you explain what you mean by "prioritized" ?
>>
>>
>>
>>
> I mean see the list of
Please do not use this list to solicit business. It is a technical
list for users of Kubernetes to discuss Kubernetes issues.
On Thu, Oct 19, 2017 at 11:00 PM, Jordans Evan
wrote:
>
>
>
>
> Hi,
>
>
>
> Would you like to reach out to Kubernetes users and also