Hi Matt

Thanks for the reply

The test is run from a different server on the same network.
The application uses "gopkg.in/redis.v3" as the Redis client, so the connection is over TCP.
Pool size = 100000

Below is a simple program storing "Hello World", which is small data compared to what
I store in my actual application, where Redis does not even sustain 4k reads/sec,
even after using Snappy compression and Protocol Buffers.

The redis-benchmark utility shows around 80k reads/sec.

//Output - "Hello World" http://server:8001/testkey

package main

import (
	"github.com/gin-gonic/gin"
	"gopkg.in/redis.v3"
)

var redisClient *redis.Client

func main() {
	redisClient = redis.NewClient(&redis.Options{
		Addr:     "localhost:6379",
		Password: "",
		DB:       0,
		PoolSize: 100000,
	})

	// Gin code
	gin.SetMode(gin.ReleaseMode)
	r := gin.Default()
	//r.Use(Proton.Gzip(Proton.BestSpeed))

	r.GET("/:key", func(c *gin.Context) {
		key := c.Param("key")
		value, err := redisClient.Get(key).Result()
		if err != nil {
			c.JSON(500, gin.H{"error": err.Error()})
			return
		}
		c.JSON(200, gin.H{"response": value})
	})
	r.Run(":8001")
}
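
One variation I may also try, to rule out JSON-encoding overhead in the handler above
(just my guess, not a confirmed bottleneck), is returning the value as plain text
instead of going through gin.H + JSON; the wrk numbers below are still from the JSON
version:

	// Sketch of a plain-text handler; assumes the same redisClient as above.
	r.GET("/:key", func(c *gin.Context) {
		value, err := redisClient.Get(c.Param("key")).Result()
		if err != nil {
			c.String(500, "error: %s", err.Error())
			return
		}
		c.String(200, "%s", value)
	})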

$ wrk -t1 -c500 http://server:8001/testkey
Running 10s test @ http://server:8001/testkey
  1 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    15.24ms   16.99ms 296.58ms   90.73%
    Req/Sec    28.84k     3.20k   35.90k    87.88%
  288461 requests in 10.08s, 41.81MB read
Requests/sec:  28606.13
Transfer/sec:      4.15MB

$ wrk -t4 -c500 http://server:8001/testkey
Running 10s test @ http://server:8001/testkey
  4 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    19.41ms   17.19ms 306.65ms   68.10%
    Req/Sec     7.08k     1.00k    8.91k    73.75%
  281946 requests in 10.03s, 40.87MB read
Requests/sec:  28098.00
Transfer/sec:      4.07MB

$ wrk -t10 -c500 http://server:8001/testkey
Running 10s test @ http://server:8001/testkey
  10 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    19.51ms   18.30ms 316.32ms   66.97%
    Req/Sec     2.86k   349.38     3.88k    78.39%
  286221 requests in 10.07s, 41.49MB read
Requests/sec:  28413.52
Transfer/sec:      4.12MB
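
To narrow down whether the gap is in Redis / the client or in the HTTP stack, I could
also run something like the sketch below, which drives the same redis.v3 client from a
pool of goroutines with no Gin or net/http in front and prints the raw GET rate. The
worker count, pool size and 10-second duration are just guesses on my part, not tuned
values:

// Rough sketch: measure raw GET throughput of the redis.v3 client,
// bypassing the HTTP layer entirely.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"

	"gopkg.in/redis.v3"
)

func main() {
	client := redis.NewClient(&redis.Options{
		Addr:     "localhost:6379",
		PoolSize: 100, // assumption: pool sized roughly to the worker count
	})

	const workers = 100
	var ops int64
	var wg sync.WaitGroup

	done := make(chan struct{})
	go func() {
		time.Sleep(10 * time.Second)
		close(done)
	}()

	start := time.Now()
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case <-done:
					return
				default:
				}
				if _, err := client.Get("testkey").Result(); err != nil {
					return
				}
				atomic.AddInt64(&ops, 1)
			}
		}()
	}
	wg.Wait()
	fmt.Printf("%.0f GETs/sec\n", float64(ops)/time.Since(start).Seconds())
}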

Thanks for your help

Rgds,

Abhi 


On Monday, July 11, 2016 at 4:08:36 AM UTC+5:30, Matt Silverlock wrote:
>
>
>    - Are your wrk threads starving your Go program? (why 10 threads? Why 
>    not 4 for wrk, 4 for Go?)
>    - How are you connecting to Redis? (TCP? Unix socket?)
>    - What size pool have you set for Redis?
>    - Show us your code.
>
> This is likely a classic example of where framework 'benchmarks' are 
> completely divorced from reality. 
>
>
> On Sunday, July 10, 2016 at 4:55:56 AM UTC-7, desaia...@gmail.com wrote:
>>
>> I am getting only ~29k req/sec. Can you please help? The Redis 
>> documentation says it supports 80k+ req/sec for reads, so I am not sure 
>> what I am doing wrong.
>>
>> Machine - 8 core with 57 gb ram + ssd
>> Go - 1.6.2 linux/amd64
>> Ubuntu - 15.10
>> DB - Redis 
>> Plugin - gopkg.in/redis.v3
>> Key - only "hello" value
>> Webservice - Gin framework
>>
>> Sysctl:
>>
>> file-max = 5752905
>> file-nr = 1888 0 5752905
>> net.core.rmem_max = 134217728
>> net.core.wmem_max = 134217728
>> net.ipv4.tcp_rmem = 4096 87380 67108864
>> net.ipv4.tcp_wmem = 4096 65536 67108864
>> net.core.netdev_max_backlog = 250000
>> net.ipv4.tcp_congestion_control = htcp
>> net.ipv4.tcp_mtu_probing = 1
>>
>> ulimits - 200000
>>
>> wrk -t10 -c500 http://xx/testkey
>> Running 10s test @ http://xx/testkey
>>   10 threads and 500 connections
>>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>>     Latency    18.84ms   16.90ms 309.98ms   58.24%
>>     Req/Sec     2.93k   382.13     4.46k    76.62%
>>   293282 requests in 10.08s, 42.51MB read
>> Requests/sec:  29090.62
>> Transfer/sec:      4.22MB
>>
>>
>> Thanks,
>>
>>
>> Abhi
>>
>>
>>
