If you're still stuck, I'll write more of a guide; just let me know.
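In the meantime, here's the general shape of it: mc-crusher never prints totals because memcached itself keeps the counters. You sample the server's `stats` output before and after the run and diff the counters; the utilities shipped in the repo automate that sampling. A minimal hand-rolled sketch (assumes netcat and a reachable server; the host/port are the ones from your example, not anything special):

```shell
# Parse one counter value out of memcached "stats" output read on stdin.
# Stats lines look like: STAT cmd_get 120000
counter() { awk -v k="$1" '$2 == k { printf "%d\n", $3 }'; }

# Against a live server (not run here; requires nc and a running memcached):
#   before=$(printf 'stats\r\nquit\r\n' | nc 192.168.1.43 12345)
#   ./mc-crusher --conf ./conf/asciiconf --ip 192.168.1.43 --port 12345 --timeout 10
#   after=$(printf 'stats\r\nquit\r\n' | nc 192.168.1.43 12345)
#   for c in cmd_get cmd_set get_hits get_misses; do
#     echo "$c: $(( $(echo "$after" | counter "$c") - $(echo "$before" | counter "$c") )) during the run"
#   done

# Offline demo with canned stats output, so the parsing step is verifiable:
sample='STAT cmd_get 120000
STAT cmd_set 30000'
printf '%s\n' "$sample" | counter cmd_get   # prints 120000
```

cmd_get/cmd_set are the raw command counters; get_hits/get_misses tell you whether the gets actually found data (a near-100% miss rate usually means the keyspace wasn't preloaded).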

On Sun, 22 Mar 2020, dormando wrote:

> Hey,
>
> I thought I wrote this in the rest of the e-mail + the README: it doesn't
> print stats at the end. You run the benchmark and then pull stats via
> other utilities. Take a close look at what I wrote and the files in the
> repo.
>
> On Sun, 22 Mar 2020, Martin Grigorov wrote:
>
> > Hi,
> >
> > On Thu, Mar 19, 2020 at 9:06 PM dormando <dorma...@rydia.net> wrote:
> >       memtier is trash. Check the README for mc-crusher; I updated it a bit
> >       a day or two ago. Those numbers are incredibly low; I'd have to dig
> >       out a laptop from the 90's to get something to perform that badly.
> >
> >       mc-crusher runs blindly, and you use the other utilities that come
> >       with it to find command rates and sample the latency while the
> >       benchmark runs. Almost all 3rd-party memcached benchmarks end up
> >       benchmarking the benchmark tool, not the server. I know mc-crusher
> >       doesn't make it very obvious how to use it, though; sorry.
> >
> >
> > What I haven't been able to find so far is how to get the statistics after a run.
> > For example, I run 
> > ./mc-crusher --conf ./conf/asciiconf --ip 192.168.1.43 --port 12345 
> > --timeout 10
> >  
> > and the output is:
> >
> > --------------------------------------------------------------
> > ip address default: 192.168.1.43
> > port default: 12345
> > id 0 for key send value ascii_get
> > id 1 for key recv value blind_read
> > id 5 for key conns value 50
> > id 8 for key key_prefix value foobar
> > id 26 for key key_prealloc value 0
> > id 24 for key pipelines value 8
> > id 0 for key send value ascii_set
> > id 1 for key recv value blind_read
> > id 5 for key conns value 10
> > id 8 for key key_prefix value foobar
> > id 26 for key key_prealloc value 0
> > id 24 for key pipelines value 4
> > id 19 for key stop_after value 200000
> > id 3 for key usleep value 1000
> > id 12 for key value_size value 10
> > setting a timeout
> > done initializing
> > timed run complete
> > --------------------------------------------------------------
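For reference, the echoed key/value pairs above correspond to a conf file roughly like this (mc-crusher conf files are one comma-separated key=value line per connection template; this is reconstructed from the printed values, not the actual file from the repo):

```
send=ascii_get,recv=blind_read,conns=50,key_prefix=foobar,key_prealloc=0,pipelines=8
send=ascii_set,recv=blind_read,conns=10,key_prefix=foobar,key_prealloc=0,pipelines=4,stop_after=200000,usleep=1000,value_size=10
```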
> >
> > And I see that the server is busy at that time.
> > How can I find out how many sets/gets/... were made?
> >
> > Martin
> >  
> >
> >       A really quick untuned test against my raspberry pi 3 nets 92,000
> >       gets/sec. (mc-crusher running on a different machine). On a xeon 
> > machine
> >       I can get tens of millions of ops/sec depending on the read/write 
> > ratio.
> >
> >       On Thu, 19 Mar 2020, Martin Grigorov wrote:
> >
> >       > Hi
> >       >
> >       > I've made some local performance testing
> >       >
> >       > First I tried https://github.com/memcached/mc-crusher, but it
> >       > seems it doesn't report any statistics after the load run.
> >       >
> >       > The results below are from 
> > https://github.com/RedisLabs/memtier_benchmark
> >       >
> >       > 1) Text
> >       > ./memtier_benchmark --server XYZ --port 12345 -P memcache_text
> >       >
> >       > ARM64 text
> >       > =========================================================================
> >       > Type         Ops/sec     Hits/sec   Misses/sec      Latency      KB/sec
> >       > -------------------------------------------------------------------------
> >       > Sets          985.28          ---          ---     20.02700       67.22
> >       > Gets         9842.00         0.00      9842.00     20.01900      248.83
> >       > Waits           0.00          ---          ---      0.00000         ---
> >       > Totals      10827.28         0.00      9842.00     20.02000      316.05
> >       >
> >       >
> >       > X86 text
> >       > =========================================================================
> >       > Type         Ops/sec     Hits/sec   Misses/sec      Latency      KB/sec
> >       > -------------------------------------------------------------------------
> >       > Sets          931.04          ---          ---     20.06800       63.52
> >       > Gets         9300.21         0.00      9300.21     20.32600      235.13
> >       > Waits           0.00          ---          ---      0.00000         ---
> >       > Totals      10231.26         0.00      9300.21     20.30200      298.66
> >       >
> >       >
> >       >
> >       > 2) Binary
> >       > ./memtier_benchmark --server XYZ --port 12345 -P memcache_binary
> >       >
> >       > ARM64 binary
> >       > =========================================================================
> >       > Type         Ops/sec     Hits/sec   Misses/sec      Latency      KB/sec
> >       > -------------------------------------------------------------------------
> >       > Sets          829.68          ---          ---     23.46500       63.90
> >       > Gets         8287.69         0.00      8287.69     23.56100      314.75
> >       > Waits           0.00          ---          ---      0.00000         ---
> >       > Totals       9117.37         0.00      8287.69     23.55200      378.65
> >       >
> >       > X86 binary
> >       > =========================================================================
> >       > Type         Ops/sec     Hits/sec   Misses/sec      Latency      KB/sec
> >       > -------------------------------------------------------------------------
> >       > Sets          829.32          ---          ---     23.63600       63.87
> >       > Gets         8284.10         0.00      8284.10     23.58600      314.61
> >       > Waits           0.00          ---          ---      0.00000         ---
> >       > Totals       9113.42         0.00      8284.10     23.59100      378.48
> >       >
> >       >
> >       >
> >       > The text protocol is faster on ARM64. Binary is similar on both.
> >       >
> >       > The benchmarking tool runs on a different machine than the ones
> >       > running Memcached:
> >       >
> >       > The ARM64 server has this spec:
> >       >
> >       > $ lscpu
> >       > Architecture:        aarch64
> >       > Byte Order:          Little Endian
> >       > CPU(s):              4
> >       > On-line CPU(s) list: 0-3
> >       > Thread(s) per core:  1
> >       > Core(s) per socket:  4
> >       > Socket(s):           1
> >       > NUMA node(s):        1
> >       > Vendor ID:           0x48
> >       > Model:               0
> >       > Stepping:            0x1
> >       > BogoMIPS:            200.00
> >       > L1d cache:           64K
> >       > L1i cache:           64K
> >       > L2 cache:            512K
> >       > L3 cache:            32768K
> >       > NUMA node0 CPU(s):   0-3
> >       > Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32
> >       > atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm
> >       >
> >       >
> >       > The x64 one:
> >       > Architecture:        x86_64
> >       > CPU op-mode(s):      32-bit, 64-bit
> >       > Byte Order:          Little Endian
> >       > CPU(s):              4
> >       > On-line CPU(s) list: 0-3
> >       > Thread(s) per core:  2
> >       > Core(s) per socket:  2
> >       > Socket(s):           1
> >       > NUMA node(s):        1
> >       > Vendor ID:           GenuineIntel
> >       > CPU family:          6
> >       > Model:               85
> >       > Model name:          Intel(R) Xeon(R) Gold 6266C CPU @ 3.00GHz
> >       > Stepping:            7
> >       > CPU MHz:             3000.000
> >       > BogoMIPS:            6000.00
> >       > Hypervisor vendor:   KVM
> >       > Virtualization type: full
> >       > L1d cache:           32K
> >       > L1i cache:           32K
> >       > L2 cache:            1024K
> >       > L3 cache:            30976K
> >       > NUMA node0 CPU(s):   0-3
> >       > Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep
> >       > mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall
> >       > nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology
> >       > nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid
> >       > sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx
> >       > f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single
> >       > ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle
> >       > avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx
> >       > smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec
> >       > xgetbv1 arat avx512_vnni md_clear flush_l1d arch_capabilities
> >       >
> >       > Both with 16GB RAM.
> >       >
> >       >
> >       > Regards,
> >       > Martin
> >       >
> >       > On Mon, Mar 9, 2020 at 11:23 AM Martin Grigorov 
> > <martin.grigo...@gmail.com> wrote:
> >       >       Hi Dormando,
> >       >
> >       > On Mon, Mar 9, 2020 at 9:19 AM Martin Grigorov 
> > <martin.grigo...@gmail.com> wrote:
> >       >       Hi Dormando,
> >       >
> >       > On Fri, Mar 6, 2020 at 10:15 PM dormando <dorma...@rydia.net> wrote:
> >       >       Yo,
> >       >
> >       >       Just to add in: yes we support ARM64. Though my build test 
> > platform is a
> >       >       raspberry pi 3 and I haven't done any serious performance 
> > work. packet.net
> >       >       had an arm test platform program but I wasn't able to get 
> > time to do any
> >       >       work.
> >       >
> >       >       From what I hear it does seem to perform fine on high end 
> > ARM64 platforms,
> >       >       I just can't do any specific perf work unless someone donates 
> > hardware.
> >       >
> >       >
> >       > I will talk with my managers!
> >       > I think it should not be a problem to give you SSH access to one
> >       > of our machines.
> >       > What specs do you prefer? CPU, disks, RAM, network, ...
> >       > VM or bare metal?
> >       > Preferred Linux flavor?
> >       >
> >       > It would be good to compare it against whatever AMD64 instance you 
> > have. Or I can also ask for two similar VMs - ARM64 and AMD64.
> >       >
> >       >
> >       > My manager confirmed that we can give you access to an ARM64
> >       > machine. A VM would be easier to set up, but bare metal is also
> >       > possible.
> >       > Please tell me the specs you prefer.
> >       > We can only give you access temporarily, though, i.e. we will have
> >       > to shut it down after you finish testing so it doesn't stay idle
> >       > and waste budget. Later, if you need it, we can allocate it again.
> >       > Would this work for you?
> >       >
> >       > Martin 
> >       >  
> >       >
> >       >
> >       > Martin
> >       >  
> >       >
> >       >       -Dormando
> >       >
> >       >       On Fri, 6 Mar 2020, Martin Grigorov wrote:
> >       >
> >       >       > Hi Emilio,
> >       >       >
> >       >       > On Fri, Mar 6, 2020 at 9:14 AM Emilio Fernandes 
> > <emilio.fernande...@gmail.com> wrote:
> >       >       >       Thank you for sharing your experience, Martin!
> >       >       > I've played for a few days with Memcached on our ARM64 test
> >       >       > servers, and so far I haven't faced any issues either.
> >       >       >
> >       >       > Do you know of any performance benchmarks of Memcached on
> >       >       > AMD64 and ARM64? Or at least a performance test suite that
> >       >       > I can run myself?
> >       >       >
> >       >       >
> >       >       > I am not aware of any public benchmark results for 
> > Memcached on AMD64 vs ARM64.
> >       >       > But a quick search on Google returned these promising results:
> >       >       > 1) https://github.com/memcached/mc-crusher
> >       >       > 2) 
> > https://github.com/scylladb/seastar/wiki/Memcached-Benchmark
> >       >       > 3) https://github.com/RedisLabs/memtier_benchmark
> >       >       > 4) http://www.lmdb.tech/bench/memcache/
> >       >       >  
> >       >       > I will try some of them next week and report back!
> >       >       >
> >       >       > Martin
> >       >       >
> >       >       >
> >       >       > Gracias!
> >       >       > Emilio
> >       >       >
> >       >       > On Wednesday, 4 March 2020 at 16:30:37 UTC+2, Martin
> > Grigorov wrote:
> >       >       >       Hello Emilio!
> >       >       > Welcome to this community!
> >       >       >
> >       >       > I am a regular user of Memcached and I can say that it 
> > works just fine for us on ARM64!
> >       >       > We are still at an early testing stage, but so far so good!
> >       >       >
> >       >       > I like the idea of having this mentioned on the website!
> >       >       > It will give more users confidence!
> >       >       >
> >       >       > Regards,
> >       >       > Martin
> >       >       >
> >       >       > On Wed, Mar 4, 2020 at 4:09 PM Emilio Fernandes 
> > <emilio.f...@gmail.com> wrote:
> >       >       >       Hello Memcached community!
> >       >       > I'd like to know whether the ARM64 architecture is
> >       >       > officially supported.
> >       >       > I've seen that Memcached is being tested on ARM64 at Travis,
> >       >       > but I don't see anything on the website or in the GitHub
> >       >       > Wiki explicitly saying whether it is officially supported
> >       >       > or not.
> >       >       >
> >       >       > Gracias!
> >       >       > Emilio
> >       >       >
> >       >       > --
> >       >       >
> >       >       > ---
> >       >       > You received this message because you are subscribed to the 
> > Google Groups "memcached" group.
> >       >       > To unsubscribe from this group and stop receiving emails 
> > from it, send an email to memc...@googlegroups.com.
> >       >       > To view this discussion on the web visit
> >       >       > 
> > https://groups.google.com/d/msgid/memcached/bb39d899-643b-4901-8188-a11138c37b82%40googlegroups.com.
> >       >       >
> >       >       >
> >       >       >
> >       >
> >       >
> >       >
> >       >
> >
> >
> >
> >
>
>
