Do you have more details on the resource you were running out of on the
Tomcat containers? Were you CPU bound? Perhaps K8s is limiting the number
of CPUs available differently than Docker was.
Could you include the Tomcat iostat, vmstat, etc. output?
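Something along these lines from inside the pod would do it (a sketch; it
assumes vmstat/iostat are actually installed in the image, and <tomcat-pod>
is a placeholder):
```
# sample system stats from inside the Tomcat container
kubectl exec -it <tomcat-pod> -- vmstat 1 5
kubectl exec -it <tomcat-pod> -- iostat -x 1 5
```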
-EJ
On Tue, Sep 19, 2017
Sorry, not sure I parsed your reply.
If you test docker with client and server on the same node, you need
to test kubernetes the same way.
You can test from your client to the pod's IP directly (should be the same as
docker perf) and then test kube services.
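For example (pod/service names, port, and ab parameters are placeholders, not
from your setup):
```
# benchmark the pod IP directly, then the service VIP, and compare
POD_IP=$(kubectl get pod <tomcat-pod> -o jsonpath='{.status.podIP}')
ab -n 100000 -c 100 http://$POD_IP:8080/
SVC_IP=$(kubectl get svc <tomcat-svc> -o jsonpath='{.spec.clusterIP}')
ab -n 100000 -c 100 http://$SVC_IP:8080/
```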
On Tue, Sep 19, 2017 at 10:16 PM, Vinoth
I will try this and post the results.
On Wednesday, September 20, 2017 at 5:31:12 AM UTC+5:30, jay vyas wrote:
>
> Sounds like subtle differences in the tests make the performance
> metric more realistic for kube and more like a theoretical max test when
> you run the docker measurements.
>
Tim, do you mean the backend for the k8s node is the same as the backend used
in the native tomcat test as well as in the docker test?
The k8s node backend is different from the one in the tomcat and docker tests.
The Tomcat/docker tests were done on a GCP machine (ubuntu flavour,
8 CPU / 30 GiB).
The k8s test was done on 3
Thanks for the response, Tim.
I don't see any failures in the ab results; all the requests were successful in
all the tests.
The following are the results that I attached on GitHub:
https://github.com/kubernetes/kubernetes/issues/52652
k8s_service.txt
I am not limiting resources for the tomcat test on k8s.
On Tuesday, September 19, 2017 at 9:25:50 PM UTC+5:30, Warren Strange wrote:
>
>
> Debugging performance issues on Docker/Kube can be interesting
>
> You could try exposing the service through a NodePort, and run your
> benchmark
Sounds like subtle differences in the tests make the performance metric
more realistic for kube and more like a theoretical max test when you run
the docker measurements.
Two ways to even the playing field:
1) If you run the test inside the kube pod, does it have the same performance
as the
On 20 September 2017 at 01:05, Rodrigo Campos wrote:
> On Tue, Sep 19, 2017 at 09:08:22PM +0300, Lubomir I. Ivanov wrote:
>> On 19 September 2017 at 17:54, Rodrigo Campos wrote:
>> >
>> >
>> > On Tuesday, September 19, 2017, Lubomir I. Ivanov
On Tue, Sep 19, 2017 at 09:08:22PM +0300, Lubomir I. Ivanov wrote:
> On 19 September 2017 at 17:54, Rodrigo Campos wrote:
> >
> >
> > On Tuesday, September 19, 2017, Lubomir I. Ivanov
> > wrote:
> >>
> >
> > To make sure your setup is okay, check out an
On 19 September 2017 at 17:54, Rodrigo Campos wrote:
>
>
> On Tuesday, September 19, 2017, Lubomir I. Ivanov
> wrote:
>>
>> hello,
>>
>> I've tried setting up a local cluster, and after a lot of trial and error
>> it worked well for 1.7.5.
>> But my
NodePort vs VIP should have no difference - they traverse the same paths.
This is a much steeper difference than what I measured and more than I
would expect.
Is this 8k new connections per second? Could you be exhausting
conntrack records and getting some failures? It would be interesting
to
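One rough way to check on the node while the benchmark runs (sysctl names are
the standard netfilter ones):
```
# current entries vs. the table limit; failures start when count hits max
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
# the kernel logs drops when the table is full
dmesg | grep -i conntrack
```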
Your volume config is not valid. What you need depends on whether you
want your volume to literally map `/c/Users/abcd/config` (which you
manage out of band, kubernetes won't touch) into your container or
whether you want just "an empty directory".
The literal equivalent would be more like:
```
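# a sketch only: the names below mirror the docker run command quoted later
# in this thread; the hostPath mapping is the "literal" option above and is
# my assumption about what you want
spec:
  containers:
  - name: app-agent          # hypothetical container name
    image: app-agent         # image from the docker run command
    env:
    - name: VOLUMEDIR
      value: agentsvolume
    volumeMounts:
    - name: agentsvolume
      mountPath: /agentsvolume
  volumes:
  - name: agentsvolume
    hostPath:
      path: /c/Users/abcd/config
```
If you just want scratch space instead, swap the hostPath for an emptyDir.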
This is not production; it's just a local image. I'm trying to learn the
basics. This is my kubernetes config file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: agent-kuber
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: agentc
    spec:
      containers:
      -
Debugging performance issues on Docker/Kube can be interesting
You could try exposing the service through a NodePort, and run your
benchmark directly against the node IP. That would at least tell you if the
GKE LB is a factor or not.
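For example (deployment/service names and the port are placeholders):
```
# expose the deployment on a NodePort and benchmark the node directly
kubectl expose deployment tomcat --type=NodePort --port=8080 --name=tomcat-np
NODE_PORT=$(kubectl get svc tomcat-np -o jsonpath='{.spec.ports[0].nodePort}')
ab -n 100000 -c 100 http://<node-ip>:$NODE_PORT/
```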
Also - are your pods possibly CPU or memory limited
On Tue, Sep 19, 2017 at 07:57:55AM -0700, paperless wrote:
>
>
> I have developed a simple Docker image. This can be run using the command
>
> docker run -e VOLUMEDIR=agentsvolume -v /c/Users/abcd/config:/agentsvolume
> app-agent
>
> If I want to run the same thing using kubernetes, can someone guide
--
Filip
On Tue, Sep 19, 2017 at 3:54 PM, Rodrigo Campos wrote:
> I think this can probably be done using custom metrics:
> https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
>
> I have not used custom metrics, so it's
I have developed a simple Docker image. This can be run using the command
docker run -e VOLUMEDIR=agentsvolume -v /c/Users/abcd/config:/agentsvolume
app-agent
If I want to run the same thing using kubernetes, can someone guide me through
the steps to do it? Must I create Pods / a Controller or
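(For reference, a minimal sketch of the imperative equivalent, ignoring the
volume, which needs a manifest as discussed elsewhere in this thread:)
```
# roughly the kubectl counterpart of the docker run above, minus the volume
kubectl run app-agent --image=app-agent --env="VOLUMEDIR=agentsvolume"
```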
On Tuesday, September 19, 2017, Lubomir I. Ivanov
wrote:
> hello,
>
> I've tried setting up a local cluster, and after a lot of trial and error
> it worked well for 1.7.5.
> But my current employer is interested in kubernetes contributions, so
> I've tried building and
I think this can probably be done using custom metrics:
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
I have not used custom metrics, so it's not something I really know about
:-)
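If you do explore it, I believe a quick sanity check for whether a
custom-metrics adapter is registered at all is to query the API group named
in that doc:
```
# should list available custom metrics if an adapter is installed
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"
```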
On Tuesday, September 19, 2017, Parth Gandhi
hello,
I've tried setting up a local cluster, and after a lot of trial and error it
worked well for 1.7.5.
But my current employer is interested in kubernetes contributions, so
I've tried building and running the most recent GitHub master, so that I
can try to understand the project upstream.
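(For context, the upstream flow I'm following is roughly the standard one,
using the scripts shipped in the repo:)
```
# build master and bring up a single-node local cluster
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
make
hack/local-up-cluster.sh
```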
Hi,
we have a console application running in pods in a kubernetes cluster.
We have a DB table that keeps the count of incoming queue messages. We need
to autoscale the pods when the table reaches certain threshold values. Can
this be achieved using the k8s HPA? Or do we need to write a different
Environment:
Kubernetes version (use kubectl version):
kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3",
GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean",
BuildDate:"2017-08-03T07:00:21Z", GoVersion:"go1.8.3", Compiler:"gc",