I have the same issue.
Possibly https://github.com/kubernetes/ingress/issues/277 ?
On Friday, April 14, 2017 at 9:04:23 AM UTC-6, Daniel Watrous wrote:
>
> I am using the nginx ingress controller on two k8s clusters. On one the
> HTTPS works as expected, but on the other HTTPS traffic
Helm works nicely for this.
You can template out things like the registry, image names, etc., and then
have a small values.yaml file that sets the values for the target
platform.
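As a sketch of that advice, a chart template can reference the registry and image name from values, and each platform supplies its own values file (chart path, registry, and image names below are illustrative, not from the original thread):

```yaml
# templates/deployment.yaml (fragment) -- the image line is templated:
#   image: "{{ .Values.image.registry }}/{{ .Values.image.name }}:{{ .Values.image.tag }}"

# values.yaml for one target platform (all names are examples):
image:
  registry: registry.example.com
  name: my-app
  tag: "1.2.3"
```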
On Wednesday, March 8, 2017 at 11:56:45 PM UTC-7, ag...@jantox.com wrote:
>
> I am currently running a
I think you need to provide more information for people to help you out.
Are nodes in the cluster running out of memory or CPU resources? Is disk
I/O slow?
If you want to increase the
size of the nodes (more CPU, memory), the procedure will depend on the
Kubernetes environment you are
If you are using VirtualBox with minikube, it will mount the /Users/xxx/
folder in the VM.
You can use a hostPath volume to mount a local folder on your mac on to a
pod volume.
hostPath:
  path: /Users/my-username/Downloads/example
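For context, a minimal pod manifest using such a volume might look like this (pod, container, and mount names are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: local-folder
          mountPath: /data        # the host folder appears here inside the container
  volumes:
    - name: local-folder
      hostPath:
        path: /Users/my-username/Downloads/example
```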
On Friday, March 3, 2017 at 7:22:32 PM UTC-7, Imran Akbar
Rather than setting up ssh, it may be easier to use kubectl exec to get a
shell inside your pod:
kubectl exec -it my-pod-xxx -- /bin/sh
On Friday, August 11, 2017 at 12:11:30 AM UTC-6, Eswari wrote:
>
> Hi,
>
> I have exposed my pod externally (public ip).
> Tried to ssh to my pod using *ssh
will want to read up on
services:
https://kubernetes.io/docs/concepts/services-networking/service/
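For reference, a minimal Service of type LoadBalancer that exposes pods on a public IP might look like this (name, labels, and ports are illustrative examples, not from the original thread):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer      # asks the cloud provider for an external IP
  selector:
    app: my-app           # must match the pod's labels
  ports:
    - port: 80
      targetPort: 8080    # the port the container actually listens on
```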
On Monday, August 14, 2017 at 12:48:32 AM UTC-6, eswar...@gmail.com wrote:
>
> Hi Warren Strange,
>
> Thanks for the reply.
>
> Yes, But we can use this command where we inst
Sometimes it takes a while for PVs to be provisioned - so this error often
goes away if you give it time. If the PVC eventually gets bound, this is
probably not the issue.
It looks like you are running out of memory or CPU. kubectl describe on the
pod should tell you which. You either need
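If kubectl describe does show resource pressure, one option is to adjust the pod's requests and limits; a sketch of the relevant container-spec fragment (the values are examples only, to be tuned for the actual workload):

```yaml
# fragment of a container spec
resources:
  requests:             # what the scheduler reserves for the pod
    cpu: "250m"
    memory: "256Mi"
  limits:               # hard caps; exceeding the memory limit gets the pod OOM-killed
    cpu: "1"
    memory: "512Mi"
```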
DNS is used for service name lookup, but there is no shared memory between
pods.
On Thursday, June 22, 2017 at 9:57:52 AM UTC-6, Tobias Rahloff wrote:
>
> Can sb point me towards sources that explain how information sharing in
> k8s works? Especially in an academic, distributed computing
To echo what Matthias has said, Kubernetes is doing a lot more housekeeping
work behind the scenes than just docker run.
You can see Kube is adding about one second of overhead. If your containers
typically run for just a second or two, that is probably not a good fit
for Kube (or even
ImagePullBackOff means that Kubernetes cannot pull the image.
You have:
image: agentc
You need:
image: library/agentc:latest
This also assumes you have done a "docker build" direct to the docker
daemon that your Kubernetes cluster is using.
If you are using minikube, you must make sure
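The truncated sentence above most likely concerns building against the cluster's Docker daemon; one common pattern with minikube is to run `eval $(minikube docker-env)` before `docker build`, and then tell Kubernetes not to pull the image from a registry (a hedged sketch, assuming the image was built locally):

```yaml
# Pod/Deployment container fragment -- use the locally built image without pulling:
containers:
  - name: agentc
    image: library/agentc:latest
    imagePullPolicy: Never   # or IfNotPresent; the image must already exist in the node's Docker daemon
```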
Debugging performance issues on Docker/Kube can be interesting
You could try exposing the service through a NodePort and running your
benchmark directly against the node IP. That would at least tell you whether
the GKE LB is a factor or not.
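A NodePort Service for that kind of direct-to-node benchmark might look like this (names and port numbers are arbitrary examples):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bench-service
spec:
  type: NodePort
  selector:
    app: bench-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080    # then benchmark against http://<node-ip>:30080
```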
Also - are your pods possibly CPU or memory limited
What you are describing is a really good use case for namespaces.
If you really want to deploy multiple instances to the same namespace, you
could have a look at how some of the Helm charts do this.
Some charts use dynamic labels (e.g. app: {{ .Release.Name }}) to
distinguish multiple
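As a sketch of that pattern, a chart template might stamp the release name into names, labels, and selectors like this (a generic illustration, not taken from any specific chart):

```yaml
# templates/deployment.yaml (fragment)
metadata:
  name: {{ .Release.Name }}-web
  labels:
    app: {{ .Release.Name }}   # each "helm install" gets distinct labels
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}
```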
This is likely a better bet than DIND:
https://github.com/GoogleContainerTools/kaniko
On Saturday, May 5, 2018 at 9:02:08 AM UTC-6, Sudha Subramanian wrote:
>
> Hi,
>
> I have a use case where my application container needs to pull a build
> image and run code inside of it. I'm considering
…when you SSH into the node.
>
> On Thu, Feb 1, 2018 at 8:20 PM, Warren Strange <warren@gmail.com
> > wrote:
>
>>
>>
>> Stackdriver will show me disk IOPS and throughput for PD disks.
>>
>> How do I measure disk latency? I have a suspicion that a ser
Stackdriver will show me disk IOPS and throughput for PD disks.
How do I measure disk latency? I have a suspicion that a service is slow
because of latency (my PD disks are operating well below their potential
IOPS).
iostat does not seem to be installed on the COS nodes.
Suggestions?
AFAIK you cannot split a pod across more than one node.
I know nothing about VMware, but I am guessing they can split VM processes
across nodes, which is pretty much equivalent to what Kubernetes does with
pods (VM process == a pod, roughly speaking).
On Wednesday, February 14, 2018 at