Finally, the issue was with the hardware spec. The previous k8s test I did 
was on a 3-node cluster where each node had 1 CPU and 4 GiB RAM.

Today I matched the spec of the native Tomcat test machine to the K8s nodes.

I created a new 3-node K8s cluster (version 1.6.9) with 8 CPUs and 30 GiB 
RAM per node.

In this setup I see a consistent 8k req/sec, both for pod-to-pod 
communication and through the K8s load balancer endpoints.
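For context, the kind of ab run described in the quoted test below reports "Requests per second" as completed requests divided by total time taken. A quick sketch of that arithmetic (the request count, duration, and LB address here are hypothetical placeholders, not the actual run's values):

```shell
# Hypothetical ab invocation against the LB endpoint (placeholders):
#   ab -n 100000 -c 100 http://<LB_IP>/
# ab's "Requests per second" = completed requests / time taken.
requests=100000
time_taken=12   # seconds; a made-up duration that lands in the 8k req/sec range
echo "$(( requests / time_taken )) req/sec"
```

With these hypothetical numbers the sketch prints a figure in the same 8k range observed above.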


Looking at resource usage, neither cluster is hitting any limit; CPU peaks 
at less than 13% usage on both clusters.

Looking at the CPU information from lscpu:

This is the CPU spec for the K8s node where we got the 2k req/sec numbers:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                1
On-line CPU(s) list:   0
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 62
Model name:            Intel(R) Xeon(R) CPU @ 2.50GHz
Stepping:              4
CPU MHz:               2500.000
BogoMIPS:              5000.00
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              30720K
NUMA node0 CPU(s):     0
Flags



===================================================================

This is the CPU spec for the K8s node where we got the 8k req/sec numbers:


Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 62
Model name:            Intel(R) Xeon(R) CPU @ 2.50GHz
Stepping:              4
CPU MHz:               2500.000
BogoMIPS:              5000.00
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              30720K
NUMA node0 CPU(s):     0-7
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb 
rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni 
pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes xsave avx f16c rdrand 
hypervisor lahf_lm fsgsbase tsc_adjust smep erms xsaveopt





I guess the throughput depends on the following aspects of the CPU 
topology:



Thread(s) per core: 2, Core(s) per socket: 4
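The logical CPU count that lscpu reports is simply the product of these topology fields. A quick sanity check with the values from the 8k node's lscpu output above:

```shell
# Logical CPUs = Socket(s) x Core(s) per socket x Thread(s) per core
sockets=1
cores_per_socket=4
threads_per_core=2
echo "$(( sockets * cores_per_socket * threads_per_core )) logical CPUs"
```

This matches the "CPU(s): 8" line, versus 1 x 1 x 1 = 1 on the node that only reached 2k req/sec.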

Thanks for all your valuable suggestions.

On Tuesday, September 19, 2017 at 11:55:05 AM UTC+5:30, Vinoth Narasimhan 
wrote:
>
> Environment:
>
> Kubernetes version (use kubectl version):
>  kubectl version
> Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", 
> GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", 
> BuildDate:"2017-08-03T07:00:21Z", GoVersion:"go1.8.3", Compiler:"gc", 
> Platform:"linux/amd64"}
> Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.9", 
> GitCommit:"a3d1dfa6f433575ce50522888a0409397b294c7b", GitTreeState:"clean", 
> BuildDate:"2017-08-23T16:58:45Z", GoVersion:"go1.7.6", Compiler:"gc", 
> Platform:"linux/amd64"}
>
> Cloud provider or hardware configuration**:
>
> Google Container Engine.
>
> What happened:
>
> We are in the testing phase of a Spring Boot based microservice deployment 
> on GKE. During testing, QA filed a performance issue stating that the 
> throughput of the service in k8s is low compared to running the Java app via
>
> java -jar
> docker run
>
> For the test I skipped the Spring Boot stuff and took the native Tomcat 
> home page as the test bed for the "ab" testing.
>
> The test setup looks like:
>
> Create an 8-CPU/30 GiB RAM Ubuntu server in GCP, install native 
> Tomcat 8.5.20 (port 80), and test the home page.
>
> Stop the native Tomcat. Create the Docker Tomcat instance on the same 
> host and test the same home page.
> The Docker version is 17.06.2-ce.
>
> Create a 3-node K8s 1.6.9 cluster, run a Tomcat deployment with the same 
> 8.5.20, expose the service through an LB, and test the same home page.
>
> I installed the ab tool on another GCP instance and hit the above 3 
> different endpoints.
>
> What's the Result:
>
> In the first 2 tests, with native Tomcat and docker run, the throughput I 
> got was nearly 8k req/sec on average across different request/concurrency 
> levels.
>
> But against the K8s LB the throughput averaged around 2k req/sec across 
> different request/concurrency levels.
>
> Is there something I am missing in the test, or is this the rate at which 
> the GKE LB stores and forwards requests?
>

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q&A" group.
