Re: [smartos-discuss] Standardized Benchmarks and various overheads.
On Sep 30, 2016, at 7:43 PM, Matthew Parsons wrote:
> FWIW, the main production workload that I will care about is a not-well threaded java server app, so single-threaded performance, coupled with a large-ish MySQL DB with frequent, random I/O both read and write.
>
> I went down the rabbit-hole of attempting to use the Phoronix Test Suite since it "supports" Solaris and BSD, has some pre-defined Java and Database test setups, and can compare to publicly recorded reports. However chasing down the dependencies and the various "this test failed" errors was taking up too much time. Looks like I'll stick w/ Bonnie++ and IOZone via PkgSrc. (Which means I'll have to start over w/ CentOS)

My experience is that unless you're running something that's completely and utterly canned (no tweaking whatsoever to get it working), there is too much of a repeatability gap to compare with other people's results, so you're back to running all your test cases yourself.

I'd check to see if Bonnie++ and IOZone are complex enough to give you a serious workout (and of course you need to read and write stuff that's bigger than your disk cache, which in ZFS land is "approaching the size of your RAM"), so be skeptical of numbers that sound really good.

Be prepared for some surprises though. I hear people talk so much about needing an slog to get decent performance that I decided to do a bakeoff with a test case of running compiles. The result was a measurable-but-not-significant performance hit (no, I didn't run multiple times) for adding an slog, and a measurable-but-not-significant performance improvement for going all-SSD. On a database load, with synchronous writes, your mileage will of course vary.
More here: https://technotes.seastrom.com/2016/09/21/disk-isnt-the-long-pole.html

-r

---
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now
RSS Feed: https://www.listbox.com/member/archive/rss/184463/25769125-55cfbc00
Modify Your Subscription: https://www.listbox.com/member/?member_id=25769125_secret=25769125-7688e9fb
Powered by Listbox: http://www.listbox.com
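[The advice above — read and write more than the ZFS ARC can cache — can be sketched as follows. This is a hedged example, not part of the thread: the dataset path and user are illustrative, and `getconf _PHYS_PAGES` is the Linux-ish way to find RAM; bonnie++ itself wants the file size (`-s`, in MB) to be at least twice the RAM size (`-r`).]

```shell
# Sketch: size the bonnie++ working set at 2x RAM so reads and writes
# exceed the ARC, which can approach the size of RAM.
# /zones/bench and the nobody user are illustrative placeholders.
ram_mb=$(( $(getconf _PHYS_PAGES) * $(getconf PAGESIZE) / 1024 / 1024 ))
size_mb=$(( ram_mb * 2 ))
echo "bonnie++ -d /zones/bench -s ${size_mb} -r ${ram_mb} -u nobody"
```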
Re: [smartos-discuss] Standardized Benchmarks and various overheads.
On 10/ 1/16 10:18 PM, Paul Sture wrote:
> On 1 Oct 2016, at 2:00, Ian Collins wrote:
>> On 10/ 1/16 12:43 PM, Matthew Parsons wrote:
>>> (Sorry for the delay in replying.) Please note I didn't ask "what matches my workload" or "please architect my setup for me" :P Mainly I just wanted something for a couple basic sanity checks that hardware is performing in the general ballpark of what it should, that there weren't any pathological issues w/ the drivers under SmartOS. Secondarily would be something that can run under native, LX, and KVM to compare relative overheads, and to compare against the same hardware running, say, Ubuntu Server or CentOS. If there were a public database to compare against, that would be a bonus.
>>
>> I stand by my initial reply :) There really are too many variables to offer a generic solution. One obvious example is comparing an lx-brand zone with KVM on a pool without decent log devices. Disk benchmarks will be shite in the KVM, but if the same setup had a log, things would be much closer (zones would still win!). If I want a quick and dirty comparison, I use bonnie++ (or CrystalMark on a Windows KVM) and building gcc. The latter is a surprisingly good test; it will stress various aspects of your machine and has shown up numerous performance issues over the years. The single character write numbers from bonnie++ are pretty meaningless on ZFS.
>>
>>> FWIW, the main production workload that I will care about is a not-well threaded java server app, so single-threaded performance, coupled with a large-ish MySQL DB with frequent, random I/O both read and write.
>>
>> bonnie++ and building gcc (both can be single or multi-threaded) should give you some decent comparisons.
>
> A recent suggestion I was given was to build LLVM (and clang for platforms where it's available), and "possibly other core LLVM projects for good measure, such as LLVM's version of the C++ standard library".

The "where it's available" caveat is why I use gcc and libstdc++; they can be built pretty much anywhere!

-- Ian.
Re: [smartos-discuss] Standardized Benchmarks and various overheads.
On 1 Oct 2016, at 2:00, Ian Collins wrote:
> On 10/ 1/16 12:43 PM, Matthew Parsons wrote:
>> (Sorry for the delay in replying.) Please note I didn't ask "what matches my workload" or "please architect my setup for me" :P Mainly I just wanted something for a couple basic sanity checks that hardware is performing in the general ballpark of what it should, that there weren't any pathological issues w/ the drivers under SmartOS. Secondarily would be something that can run under native, LX, and KVM to compare relative overheads, and to compare against the same hardware running, say, Ubuntu Server or CentOS. If there were a public database to compare against, that would be a bonus.
>
> I stand by my initial reply :) There really are too many variables to offer a generic solution. One obvious example is comparing an lx-brand zone with KVM on a pool without decent log devices. Disk benchmarks will be shite in the KVM, but if the same setup had a log, things would be much closer (zones would still win!). If I want a quick and dirty comparison, I use bonnie++ (or CrystalMark on a Windows KVM) and building gcc. The latter is a surprisingly good test; it will stress various aspects of your machine and has shown up numerous performance issues over the years. The single character write numbers from bonnie++ are pretty meaningless on ZFS.
>
>> FWIW, the main production workload that I will care about is a not-well threaded java server app, so single-threaded performance, coupled with a large-ish MySQL DB with frequent, random I/O both read and write.
>
> bonnie++ and building gcc (both can be single or multi-threaded) should give you some decent comparisons.

A recent suggestion I was given was to build LLVM (and clang for platforms where it's available), and "possibly other core LLVM projects for good measure, such as LLVM's version of the C++ standard library".

There's another scenario lurking here.

Back in the day I had tools whose purpose was to test the hardware in a "fill the memory up, processor(s) too, and thrash the disks and network" kind of way. While the design aim of those tools was to test and burn in new hardware, they were a handy way of filling up a system to practice your monitoring skills and see what effects the available O/S tuning knobs and switches had.
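[A crude, bounded sketch of the "fill it up and thrash it" idea Paul describes — not any particular burn-in tool. Sizes and durations are illustrative placeholders; real burn-in tools run unbounded across memory, CPU, disk, and network.]

```shell
# Disk: write 64 MB sequentially with fsync, then read it back.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/fill" bs=1M count=64 conv=fsync 2>/dev/null
dd if="$dir/fill" of=/dev/null bs=1M 2>/dev/null

# CPU: spin one busy loop per online processor for two seconds.
for _ in $(seq "$(getconf _NPROCESSORS_ONLN)"); do
  timeout 2 sh -c 'while :; do :; done' &
done
wait || true

rm -rf "$dir"
echo "burn-in pass complete"
```

While this runs, you would practice exactly what Paul suggests: watch the system with your monitoring tools and see how the tuning knobs respond.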
Re: [smartos-discuss] Standardized Benchmarks and various overheads.
On 10/ 1/16 12:43 PM, Matthew Parsons wrote:
> (Sorry for the delay in replying.) Please note I didn't ask "what matches my workload" or "please architect my setup for me" :P Mainly I just wanted something for a couple basic sanity checks that hardware is performing in the general ballpark of what it should, that there weren't any pathological issues w/ the drivers under SmartOS. Secondarily would be something that can run under native, LX, and KVM to compare relative overheads, and to compare against the same hardware running, say, Ubuntu Server or CentOS. If there were a public database to compare against, that would be a bonus.

I stand by my initial reply :) There really are too many variables to offer a generic solution. One obvious example is comparing an lx-brand zone with KVM on a pool without decent log devices. Disk benchmarks will be shite in the KVM, but if the same setup had a log, things would be much closer (zones would still win!). If I want a quick and dirty comparison, I use bonnie++ (or CrystalMark on a Windows KVM) and building gcc. The latter is a surprisingly good test; it will stress various aspects of your machine and has shown up numerous performance issues over the years. The single character write numbers from bonnie++ are pretty meaningless on ZFS.

> FWIW, the main production workload that I will care about is a not-well threaded java server app, so single-threaded performance, coupled with a large-ish MySQL DB with frequent, random I/O both read and write.

bonnie++ and building gcc (both can be single or multi-threaded) should give you some decent comparisons.

-- Ian.
Re: [smartos-discuss] Standardized Benchmarks and various overheads.
(Sorry for the delay in replying.) Please note I didn't ask "what matches my workload" or "please architect my setup for me" :P

Mainly I just wanted something for a couple basic sanity checks that hardware is performing in the general ballpark of what it should, that there weren't any pathological issues w/ the drivers under SmartOS. Secondarily would be something that can run under native, LX, and KVM to compare relative overheads, and to compare against the same hardware running, say, Ubuntu Server or CentOS. If there were a public database to compare against, that would be a bonus.

FWIW, the main production workload that I will care about is a not-well threaded java server app, so single-threaded performance, coupled with a large-ish MySQL DB with frequent, random I/O both read and write.

I went down the rabbit-hole of attempting to use the Phoronix Test Suite since it "supports" Solaris and BSD, has some pre-defined Java and Database test setups, and can compare to publicly recorded reports. However chasing down the dependencies and the various "this test failed" errors was taking up too much time. Looks like I'll stick w/ Bonnie++ and IOZone via PkgSrc. (Which means I'll have to start over w/ CentOS)

On Mon, Sep 26, 2016 at 5:45 PM, Ian Collins wrote:
> On 27/09/16 12:57 pm, Matthew Parsons wrote:
>> Is there a suite/script/configs for benchmarks that have emerged as standardized in the smartos community? I'm most interested in disk I/O, and would like to compare native linux hardware RAID vs MD RAID, and native zone hardware RAID vs. ZFS (without and with SLOG).
>
> This is a classic "it depends" question. The only reliable benchmark is one close to your actual workload. You should be building your ZFS pool to match.
>
> In my case, most of my systems are in build farms, so I use database benchmarks along with building a representative project (or gcc).
>
>> Ideally (time permitting) would want to test comparing native, LX zone, and KVM. Anyone have ballpark figures of the performance overhead of those respective types, ideally of the various subsystems? (Disk, CPU, network, memory)
>
> Again, it depends what you are doing. LX is much kinder on the hardware and will have better I/O, but there are corners where KVM will still be faster.
>
> Ian.
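[Matthew's "Bonnie++ and IOZone via PkgSrc" plan might look like the following in a SmartOS zone. This is a hedged sketch, not from the thread: the package names are assumptions (check `pkgin search bonnie` on a real system), and the commands are only echoed here rather than run.]

```shell
# Hypothetical pkgin invocations for installing the two benchmarks
# from pkgsrc; package names are assumed, not verified.
for pkg in bonnie++ iozone; do
  echo "pkgin -y install ${pkg}"
done
```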
Re: [smartos-discuss] Standardized Benchmarks and various overheads.
On 27/09/16 12:57 pm, Matthew Parsons wrote:
> Is there a suite/script/configs for benchmarks that have emerged as standardized in the smartos community? I'm most interested in disk I/O, and would like to compare native linux hardware RAID vs MD RAID, and native zone hardware RAID vs. ZFS (without and with SLOG).

This is a classic "it depends" question. The only reliable benchmark is one close to your actual workload. You should be building your ZFS pool to match.

In my case, most of my systems are in build farms, so I use database benchmarks along with building a representative project (or gcc).

> Ideally (time permitting) would want to test comparing native, LX zone, and KVM. Anyone have ballpark figures of the performance overhead of those respective types, ideally of the various subsystems? (Disk, CPU, network, memory)

Again, it depends what you are doing. LX is much kinder on the hardware and will have better I/O, but there are corners where KVM will still be faster.

Ian.
Re: [smartos-discuss] Standardized Benchmarks and various overheads.
Hi Matthew,

I have previously run disk performance tests with ZFS. Certainly not a standard, but enough to characterise performance. At a high level, performance with ZFS is akin to a standalone native SSD; see http://mechanical-sympathy.blogspot.com.au/2011/12/java-sequential-io-performance.html

Now that there are lx zones, I would deprecate KVM usage. Networking performance with KVM is terrible. We are using native zones for latency- and performance-sensitive processing and this is more than acceptable for our use cases.

I am currently working with a colleague in the mechanical sympathy community to review performance of in-memory queues on V4 hardware. We have found performance within a given socket is consistent between native and LX zones. We are getting about 1.1B msgs/second at 9-11 GB/s. This drops to 4-5 GB/s if you process across sockets; there does seem to be a slight overhead with LX zones across sockets. We have quite some engineering effort to go if we are to come close to the 35 GB/s theoretical limit. That said, the convenience of "taskset"ing processes to a given core range in Linux is awesome. We are about a month away from being able to publish benchmark tests.

I am yet to do network performance testing with LX zones.

All the above said, I would worry more about the physical placement of your applications on the hardware and engineering your solution to take advantage of 45MB caches and the 100:1 performance difference between CPUs and RAM.

What is your specific use case?

HTH
Philip

On 2016-09-27 09:57, Matthew Parsons wrote:
> Is there a suite/script/configs for benchmarks that have emerged as standardized in the smartos community? I'm most interested in disk I/O, and would like to compare native linux hardware RAID vs MD RAID, and native zone hardware RAID vs. ZFS (without and with SLOG).
>
> Ideally (time permitting) would want to test comparing native, LX zone, and KVM. Anyone have ballpark figures of the performance overhead of those respective types, ideally of the various subsystems? (Disk, CPU, network, memory)
>
> In lieu of other options, I was planning to try a couple samples from pkgsrc, since they should be available/comparable on most platforms. Any suggestions/requests? (Bonnie++ vs iozone? lmbench vs ubench?) Would prefer to use something with published numbers to compare against.
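[The Linux core-pinning convenience Philip mentions can be sketched with util-linux's taskset. The command being pinned here is a trivial self-check; the real-world shape would be something like `taskset -c 0-7 java -jar app.jar` (that command line is illustrative, not from the thread).]

```shell
# Run a command restricted to CPU 0, then print its own affinity mask
# from inside to show the restriction took effect.
taskset -c 0 sh -c 'taskset -p $$'
```

On SmartOS/illumos the rough equivalents are processor sets (`psrset`) or `pbind`, though the thread only discusses the Linux side.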