Hello Oracle chaps, plus George, plus Juergen, plus everyone on xen-devel, :-)
As promised, I'll have a deep look at the tests and benchmark results that Elena dumped on us all ASAP. However, this is only fair if I also spam you with a huge load of numbers over which you can scratch (or bang?!?) your heads, isn't it? :-D So, here we are.

I'm starting a new thread because this is somewhat independent from the topology related side of things, which Elena is talking about (and which Juergen and I were also investigating and working on already).

In fact, Linux's scheduling domains can be configured in a variety of ways, by means of a set of flags (this is normally done during Linux's boot). In a way, everything there is really related to cpu topology (scheduling domains _are_ the Linux scheduler's interface to cpu topology!). But strictly speaking, there are 'pure topology' flags, and more abstract 'behavioral' flags. This is the list of these flags, BTW:

http://lxr.free-electrons.com/source/include/linux/sched.h#L981

/*
 * sched-domains (multiprocessor balancing) declarations:
 */
#define SD_LOAD_BALANCE         0x0001  /* Do load balancing on this domain */
#define SD_BALANCE_NEWIDLE      0x0002  /* Balance when about to become idle */
#define SD_BALANCE_EXEC         0x0004  /* Balance on exec */
#define SD_BALANCE_FORK         0x0008  /* Balance on fork, clone */
#define SD_BALANCE_WAKE         0x0010  /* Balance on wakeup */
#define SD_WAKE_AFFINE          0x0020  /* Wake task to waking CPU */
#define SD_SHARE_CPUCAPACITY    0x0080  /* Domain members share cpu power */
#define SD_SHARE_POWERDOMAIN    0x0100  /* Domain members share power domain */
#define SD_SHARE_PKG_RESOURCES  0x0200  /* Domain members share cpu pkg resources */
#define SD_SERIALIZE            0x0400  /* Only a single load balancing instance */
#define SD_ASYM_PACKING         0x0800  /* Place busy groups earlier in the domain */
#define SD_PREFER_SIBLING       0x1000  /* Prefer to place tasks in a sibling domain */
#define SD_OVERLAP              0x2000  /* sched_domains of this level overlap */
#define SD_NUMA                 0x4000  /* cross-node balancing */

To check how scheduling domains are configured (and to change that), look here:

/proc/sys/kernel/sched_domain/cpu*/domain*/flags

I noticed some oddities in the way Linux's and Xen's schedulers interacted in some cases, and I noticed that changing the 'behavioral' flags had an impact. I ran a preliminary set of experiments with Unixbench, with the following results (hint: look at the "Execl Throughput" and "Process Creation" rows, in the 1x case).
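To make the flag values in the column headers below easier to read, here's a tiny decoder (my own quick sketch, not from the kernel tree; the bit values are taken from the sched.h excerpt above):

```python
# Decode a sched_domain flags value (e.g. 4143, 4131, 4147 below)
# into flag names, using the bits from the sched.h excerpt above.
SD_FLAGS = {
    0x0001: "SD_LOAD_BALANCE",
    0x0002: "SD_BALANCE_NEWIDLE",
    0x0004: "SD_BALANCE_EXEC",
    0x0008: "SD_BALANCE_FORK",
    0x0010: "SD_BALANCE_WAKE",
    0x0020: "SD_WAKE_AFFINE",
    0x0080: "SD_SHARE_CPUCAPACITY",
    0x0100: "SD_SHARE_POWERDOMAIN",
    0x0200: "SD_SHARE_PKG_RESOURCES",
    0x0400: "SD_SERIALIZE",
    0x0800: "SD_ASYM_PACKING",
    0x1000: "SD_PREFER_SIBLING",
    0x2000: "SD_OVERLAP",
    0x4000: "SD_NUMA",
}

def decode(flags):
    """Names of the flags set in 'flags', lowest bit first."""
    return [name for bit, name in sorted(SD_FLAGS.items()) if flags & bit]

if __name__ == "__main__":
    for value in (4143, 4131, 4147):
        print("%d = %s" % (value, " | ".join(decode(value))))
```

For instance, 4143 decodes to SD_LOAD_BALANCE | SD_BALANCE_NEWIDLE | SD_BALANCE_EXEC | SD_BALANCE_FORK | SD_WAKE_AFFINE | SD_PREFER_SIBLING. Reading the current value is just a matter of cat'ing the /proc file mentioned above, and that's also where a new value can be written.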
# ./Run -c 1 (1 parallel copy of each benchmark inside a 4 vcpus HVM guest)

Flags                                       4143    4135    4131    4151    4147    4115    4099    4128
1 x Dhrystone 2 using register variables  2299.0  2298.4  2302.0  2311.4  2312.1  2312.1  2299.2  2301.6
1 x Double-Precision Whetstone             619.5   619.5   619.8   619.0   619.0   619.1   619.2   619.6
1 x Execl Throughput                       458.0   449.6  1017.0   449.4  1012.1  1017.4  1018.2  1022.6
1 x File Copy 1024 bufsize 2000 maxblocks 2188.8  2317.4  2403.1  2412.5  2420.8  2423.8  2422.7  2430.5
1 x File Copy 256 bufsize 500 maxblocks   1459.7  1576.1  1648.3  1647.7  1649.4  1663.5  1652.4  1649.0
1 x File Copy 4096 bufsize 8000 maxblocks 3467.8  3581.9  3621.1  3624.7  3635.9  3619.8  3606.1  3608.8
1 x Pipe Throughput                       1518.3  1505.3  1519.0  1514.7  1518.9  1516.5  1517.2  1518.0
1 x Pipe-based Context Switching           803.7   798.7   801.8   801.4   797.9   132.9    92.0   809.7
1 x Process Creation                       404.3   931.8   942.5   950.4   932.7   967.4   960.1   962.7
1 x Shell Scripts (1 concurrent)          1304.4  1256.4  1755.1  1259.5  1756.5  1741.3  1726.0  1819.6
1 x Shell Scripts (8 concurrent)          4564.2  4704.1  4714.0  4691.8  4710.2  4570.8  4571.0  1694.6*
1 x System Call Overhead                  2251.1  2249.6  2250.1  2248.9  2250.3  2249.9  2251.0  2249.0
System Benchmarks Index Score             1380.2  1495.1  1662.2  1511.4  1661.5  1431.8  1384.9  1536.5
+/-                                        0.00%  +8.32% +20.43%  +9.51% +20.38%  +3.74%  +0.34% +11.32%

# ./Run -c 4 (4 parallel copies of each benchmark inside a 4 vcpus HVM guest)

Flags                                       4143    4135    4131    4151    4147    4115    4099    4128
4 x Dhrystone 2 using register variables  8619.4  8551.3  8661.7  8694.1  8731.8  8578.0  8591.7  2293.4
4 x Double-Precision Whetstone            2351.8  2348.9  2352.2  2352.8  2351.7  2351.3  2352.4  2470.6
4 x Execl Throughput                      3264.3  3346.9  3743.0  3365.7  3726.6  3745.7  3759.2  1017.0
4 x File Copy 1024 bufsize 2000 maxblocks 2741.7  2789.7  2871.5  2842.5  2793.5  2935.9  2846.5  2376.7
4 x File Copy 256 bufsize 500 maxblocks   1736.6  1754.9  1841.4  1815.1  1763.5  1829.7  1836.6  1579.4
4 x File Copy 4096 bufsize 8000 maxblocks 4457.1  4461.9  4284.4  4566.8  4476.7  4619.0  4815.9  3648.2
4 x Pipe Throughput                       5724.0  5719.3  5732.4  5747.6  5747.2  5720.8  5740.1  1509.6
4 x Pipe-based Context Switching          2847.8  2841.9  2831.7  2826.2  2844.5  2433.2  2832.3   745.2
4 x Process Creation                      1863.1  3383.8  3358.6  3365.9  3339.1  3206.9  3338.2   924.7
4 x Shell Scripts (1 concurrent)          5126.8  4992.7  6739.1  4973.7  6773.8  6770.9  6806.4  1823.5
4 x Shell Scripts (8 concurrent)          5969.8  6021.7  6258.9  6018.9  6302.5  6284.0  6323.6  1683.6
4 x System Call Overhead                  6647.9  6661.2  6672.6  6669.9  6665.0  6641.8  6649.9  2244.1
System Benchmarks Index Score             3786.1  3987.8  4155.4  4018.1  4151.5  4116.1  4195.1  1695.8
+/-                                        0.00%  +5.33%  +9.75%  +6.13%  +9.65%  +8.72% +10.80% -55.21%

4131 and 4147 are the ones that looked more promising. Linux's default, for an HVM guest, is 4143:

  LOAD_BALANCE | BALANCE_NEWIDLE | BALANCE_EXEC | BALANCE_FORK | WAKE_AFFINE | PREFER_SIBLING

Using 4131 means:

  LOAD_BALANCE | BALANCE_NEWIDLE | WAKE_AFFINE | PREFER_SIBLING

4147 means:

  LOAD_BALANCE | BALANCE_NEWIDLE | BALANCE_WAKE | WAKE_AFFINE | PREFER_SIBLING

For now, I focused on 4131 (as the results of other ad-hoc benchmarks were hinting at that), but I want to investigate 4147 too (see below). Basically, with 4131 as the scheduling domain's flags (there's only one domain in one of our HVM guests right now), I'm telling the Linux scheduler that its load balancing logic should *not* trigger as a result of fork() or exec(), nor when a task wakes up (seems a bit aggressive, but still...).

So, I arranged for comparing the performance of the default set of flags with 4131, in an extensive way. Here's what I did (it's a bit of a long explanation, but it's for making sure you know what each benchmarking configuration did).
I selected the following benchmarks:
 - makexen: how long it takes to compile Xen (results: lower == better)
 - iperf: iperf from guest(s) toward the host (results: higher == better)
 - sysbench-cpu: pure number crunching (results: lower == better)
 - sysbench-oltp: concurrent database transactions (results: higher == better)
 - unixbench: runs a set of tests and computes a global perf index (results: higher == better [1])

The actual workload was always run in guests. A varying number of HVM guests was used. The number of vcpus and amount of memory of the guests also varied. All the benchmarks were "homogeneous", i.e., all the guests used in a particular instance were equal (in terms of number of vcpus and amount of RAM), and all ran the same workload. They also were "synchronous", i.e., all the guests started running the workload at the same time. Each benchmark was repeated 5 times. This first set of results shows the average of all the output samples of all the iterations from all the guests involved in each benchmark.[2]

The benchmarks were run on a 24 pCPUs host (arranged in 2 NUMA nodes) with 32GB of RAM. Xen version was always the same (what staging was a few weeks ago). Linux dom0 kernel was 4.3.0. Guests' kernels were 4.2.0. Results are collected for just the default case, and for the case where I reconfigured the guests' scheduling domain's flags to 4131 (yes, only the flags of the guests for now; I can rerun changing dom0's flags as well).

A particular benchmark is characterized as follows:
 * host load: basically, how many guest vcpus were being used. It can be sequential, small, medium, large, full, overload or overwhelmed.
 * guest size: how big (in terms of vcpus and memory) the guests were. It can be sequential, small, medium or large.
 * guest load: how busy the guests were kept. It can be sequential, moderate, full or overbooked.
In some more detail:
 - host load:
   * sequential load means there is only 1 VM
   * small load means the total number of guest vcpus was ~1/3 of the host pcpus (i.e., 24/3 = 8 vcpus)
   * medium load means the total number of guest vcpus was ~1/2 of the host pcpus (i.e., 12, but it was 16 some of the times)
   * large load means the total number of guest vcpus was ~2/3 of the host pcpus (i.e., 16, but it was actually 20)
   * full load means the total number of guest vcpus was == the host pcpus (i.e., 24)
   * overload means the total number of guest vcpus was ~3/2 of the host pcpus (i.e., 36, but it was 32 some of the times)
   * overwhelmed means the total number of guest vcpus was 2x the host pcpus (i.e., 48)
 - guest size:
   * sequential guests had 1 vcpu and 2048MB of RAM
   * small guests had 4 vcpus and 4096MB of RAM
   * medium guests had 8 vcpus and 5120MB of RAM
   * large guests had 12 vcpus and 8192MB of RAM
 - guest load:
   * sequential means the benchmark was run sequentially (e.g., make -j1, unixbench -c1, sysbench --num-threads=1, etc.)
   * moderate means the benchmark was keeping half of the guest's vcpus busy (e.g., make -j4 in an 8 vcpus guest)
   * full means the benchmark was keeping all the guest's vcpus busy (e.g., make -j8 in an 8 vcpus guest)
   * overbooked means the benchmark was running with a 2x degree of parallelism wrt the guest's vcpus (e.g., make -j16 in an 8 vcpus guest)

Combining these three 'parameters', several benchmark configurations were generated. Not all the possible combinations are meaningful, but I ended up with quite a few cases. Let me just put down a couple of examples, to make sure what happened during a particular benchmark can be fully understood.
So, for instance, the sysboltp-smallhload-smallvm-fullwkload benchmark was:
 + running sysbench --oltp
 + running it inside 2 HVM guests at the same time (small host load)
 + the guests had 4 vcpus (small vms)
 + running it with --num-threads=4 (full workload)

Another one, makexen-overldhload-medvm-modrtwkload, was:
 + running a Xen build
 + running it in 4 HVM guests at the same time (32 vcpus total, overloaded host)
 + the guests had 8 vcpus (medium vms)
 + running it with -j4 (moderate workload)

And so on and so forth. The first column in the attached results files identifies the specific benchmark configuration, according to this characterization. The other columns show, for each workload, the performance with default flags and with 4131, and the percent increase we got by using 4131. Of course, when lower is better (as in makexen and the sysbench-es) an actual increase is a bad thing. In fact, I highlighted with an '*' the instances where changing flags to 4131 caused a regression.

I'm too tired right now to do a proper analysis, but just very quickly:
 - for Xen build-alike workloads, using 4131 is just awesome; :-)
 - for Unixbench, likewise (at least in the runs that I have);
 - for iperf, there are issues;
 - for the two sysbench-es, there are a few issues; cpu is worse than OLTP, which is somewhat good, as OLTP is more representative of real workloads, while cpu is purely synthetic.

The iperf (and perhaps also the OLTP) results are what make me want to re-try everything with 4147. In fact, I expect (although it's nothing more than speculation) that allowing the load balancer to act upon a Linux task's wakeup has the potential of making things better for that kind of workload. We shall see whether that affects other ones that much...

So, now I need to go to bed (esp. considering that I've been up since 4:30 AM, for catching a flight). I'll continue looking at and working on this... in the meanwhile, feel free to voice your opinions.
:-D

Thanks and Regards,
Dario

[1] There must have been an issue with the Unixbench runs, and I don't have the numbers from all the configurations I wanted to test, so I'm attaching what I have, and re-running.

[2] We can aggregate data like this because of the homogeneous and synchronous nature of the benchmarks themselves. Another interesting analysis that could be attempted is about fairness, i.e., we can check by how much the performance of each of the guests involved in each instance varies among them (ideally, very little). I haven't done this kind of analysis yet.

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
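(A quick sketch of what the fairness check mentioned in [2] could look like: per benchmark instance, compute the coefficient of variation of the per-guest scores. The helper name and the sample numbers below are purely illustrative, not part of the actual harness.)

```python
# Hypothetical fairness metric: coefficient of variation (stddev / mean)
# of the per-guest scores of one benchmark instance. A value close to 0
# means the guests performed nearly identically, i.e., fair scheduling.
from statistics import mean, pstdev

def fairness_cov(scores):
    """Coefficient of variation of a list of per-guest scores."""
    m = mean(scores)
    return pstdev(scores) / m if m else float("inf")

# Four guests that ran the same workload (made-up numbers);
# a small result (~0.01 here) would indicate good fairness.
print(round(fairness_cov([1041.8, 1065.5, 1049.2, 1038.0]), 4))
```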
Columns, per benchmark: MAKEXEN (BASELINE 4131 %-INCR), IPERFHOST (BASELINE 4131 %-INCR), SYSBENCH-CPU (BASELINE 4131 %-INCR), SYSBENCH-OLTP (BASELINE 4131 %-INCR); '*' marks a regression.

BENCHMARK
fullhload-largevm-fullwkload: 20.737001 19.867060 -4.195117% 22.400000 20.940000 -6.517857% * 1.212670 1.216050 +0.278724% * 912.071000 939.683000 +3.027396%
fullhload-largevm-modrtwkload: 25.875727 24.616319 -4.867140% 21.270000 20.670000 -2.820874% * 1.741750 1.660660 -4.655662% 1041.851000 1065.509000 +2.270766%
fullhload-largevm-overbkwkload: 21.037268 20.232846 -3.823799% 20.790000 20.360000 -2.068302% * 1.213050 1.212550 -0.041218% 797.165000 797.136000 -0.003638% *
fullhload-largevm-seqwkload: 105.108283 101.308165 -3.615431% 20.440000 21.120000 +3.326810% 9.408030 9.407980 -0.000531% 622.248000 622.045000 -0.032624% *
fullhload-medvm-fullwkload: 28.596028 27.812717 -2.739229% 13.506667 13.180000 -2.418559% * 1.765140 1.765327 +0.010575% * 850.244667 900.003333 +5.852276%
fullhload-medvm-modrtwkload: 37.897117 35.376937 -6.650055% 13.346667 13.260000 -0.649351% * 2.903067 2.796020 -3.687365% 1035.314000 1057.451333 +2.138224%
fullhload-medvm-overbkwkload: 29.130728 28.405084 -2.490994% 13.620000 13.393333 -1.664219% * 1.736533 1.746547 +0.576628% * 762.240000 761.896000 -0.045130% *
fullhload-medvm-seqwkload: 106.427681 102.817818 -3.391846% 13.866667 12.873333 -7.163462% * 9.414440 9.412847 -0.016924% 619.598000 623.454667 +0.622447%
fullhload-smallvm-fullwkload: 52.387622 51.465100 -1.760954% 6.439333 6.405000 -0.533181% * 3.462010 3.512523 +1.459075% * 765.034333 824.488667 +7.771459%
fullhload-smallvm-modrtwkload: 66.756567 64.220347 -3.799206% 6.628333 6.395333 -3.515212% * 4.991587 4.919033 -1.453512% 942.162333 941.158000 -0.106599% *
fullhload-smallvm-overbkwkload: 53.814088 52.951749 -1.602440% 6.574000 6.370667 -3.092993% * 3.532890 3.532357 -0.015096% 692.337667 700.611667 +1.195082%
fullhload-smallvm-seqwkload: 113.748351 110.804230 -2.588276% 6.552667 6.371000 -2.772408% * 9.827337 9.827187 -0.001526% 594.734667 594.396667 -0.056832% *
largehload-smallvm-fullwkload: 43.667924 43.075842 -1.355876% 8.156800 7.710000 -5.477638% * 3.293180 3.301780 +0.261146% * 874.668000 910.944800 +4.147494%
largehload-smallvm-modrtwkload: 62.873234 60.744095 -3.386400% 7.762800 7.739600 -0.298861% * 4.942712 5.020776 +1.579376% * 953.987200 957.302800 +0.347552%
largehload-smallvm-overbkwkload: 43.199855 43.196234 -0.008383% 7.918000 7.773200 -1.828745% * 3.271968 3.267152 -0.147190% 776.054000 783.394000 +0.945810%
largehload-smallvm-seqwkload: 109.082792 106.128440 -2.708358% 7.718800 7.828400 +1.419910% 9.666264 9.969128 +3.133206% * 603.200400 606.349200 +0.522016%
medhload-medvm-fullwkload: 22.221618 21.553793 -3.005296% 20.100000 20.850000 +3.731343% 1.469640 1.470580 +0.063961% * 995.432000 1043.618000 +4.840712%
medhload-medvm-modrtwkload: 32.078953 31.101091 -3.048299% 20.550000 20.540000 -0.048662% * 2.617640 2.464650 -5.844578% 1149.444000 1096.365000 -4.617798% *
medhload-medvm-overbkwkload: 22.080351 21.481607 -2.711662% 20.120000 20.080000 -0.198807% * 1.467290 1.467380 +0.006134% * 915.001000 905.838000 -1.001420% *
medhload-medvm-seqwkload: 104.053386 101.087348 -2.850496% 19.830000 20.020000 +0.958144% 9.407920 9.407420 -0.005315% 625.652000 623.031000 -0.418923% *
medhload-seqvm-overbkwkload: 120.511271 119.230560 -1.062732% 3.092833 3.046833 -1.487309% * 9.881935 9.837693 -0.447702% 561.956167 561.795833 -0.028531% *
medhload-seqvm-seqwkload: 116.551999 117.376038 +0.707015% * 3.095333 3.058167 -1.200732% * 9.833542 9.847105 +0.137929% * 567.738167 565.861000 -0.330640% *
medhload-smallvm-fullwkload: 35.061704 34.770855 -0.829535% 13.333333 13.220000 -0.850000% * 2.800940 2.766960 -1.213164% 1082.764000 1060.697333 -2.037994% *
medhload-smallvm-modrtwkload: 56.635514 55.351022 -2.267997% 13.253333 13.673333 +3.169014% 4.842813 4.844260 +0.029872% * 987.764667 989.880000 +0.214154%
medhload-smallvm-overbkwkload: 34.277466 34.070879 -0.602690% 13.293333 13.420000 +0.952859% 2.746307 2.760173 +0.504921% * 963.434667 961.465333 -0.204408% *
medhload-smallvm-seqwkload: 104.504903 102.548326 -1.872235% 13.446667 13.053333 -2.925136% * 9.414700 9.409320 -0.057145% 619.398000 623.431333 +0.651170%
overldhload-largevm-fullwkload: 31.324010 29.604206 -5.490371% 13.200000 13.413333 +1.616162% 1.559440 1.649920 +5.802083% * 778.237333 758.058000 -2.592954% *
overldhload-largevm-modrtwkload: 31.722678 30.173347 -4.883985% 13.233333 13.326667 +0.705290% 2.189233 2.151127 -1.740640% 874.411333 967.226000 +10.614532%
overldhload-largevm-overbkwkload: 31.418548 30.271661 -3.650350% 13.313333 13.506667 +1.452178% 1.577487 1.678560 +6.407239% * 695.858667 703.436000 +1.088918%
overldhload-largevm-seqwkload: 107.302004 103.206817 -3.816506% 13.133333 13.460000 +2.487310% 9.413020 9.413247 +0.002408% * 617.931333 615.452667 -0.401123% *
overldhload-medvm-fullwkload: 38.045174 37.043454 -2.632975% 9.883000 9.792000 -0.920773% * 2.237525 2.270195 +1.460095% * 744.922000 765.907500 +2.817141%
overldhload-medvm-modrtwkload: 41.241772 39.408808 -4.444435% 9.847000 9.928500 +0.827663% 2.912050 2.893550 -0.635291% 975.093000 1049.098500 +7.589584%
overldhload-medvm-overbkwkload: 38.929887 37.685454 -3.196599% 9.798500 9.785000 -0.137776% * 2.194120 2.255085 +2.778563% * 676.294000 673.603000 -0.397904% *
overldhload-medvm-seqwkload: 108.945042 104.374926 -4.194881% 9.872000 9.757500 -1.159846% * 9.423840 9.418165 -0.060220% 618.272500 620.430500 +0.349037%
overldhload-smallvm-fullwkload: 71.563046 70.052749 -2.110443% 4.732750 4.676750 -1.183244% * 4.456670 4.574157 +2.636217% * 596.285250 639.959500 +7.324389%
overldhload-smallvm-modrtwkload: 78.021121 75.457790 -3.285432% 4.668250 4.715250 +1.006801% 5.769267 5.790990 +0.376521% * 765.554500 779.689500 +1.846374%
overldhload-smallvm-overbkwkload: 72.147802 71.122344 -1.421329% 4.690000 4.690750 +0.015991% 4.627425 4.574013 -1.154260% 542.668500 547.895250 +0.963157%
overldhload-smallvm-seqwkload: 119.092470 114.809739 -3.596140% 4.703750 4.715250 +0.244486% 9.826590 9.831255 +0.047473% * 578.225250 581.258250 +0.524536%
overwlmhload-largevm-fullwkload: 41.466578 38.621317 -6.861577% 9.820000 9.884500 +0.656823% 2.139350 2.137065 -0.106808% 668.033500 693.308500 +3.783493%
overwlmhload-largevm-modrtwkload: 38.432678 36.594824 -4.782009% 9.815000 9.957000 +1.446765% 2.307815 2.328780 +0.908435% * 778.919000 837.113000 +7.471123%
overwlmhload-largevm-overbkwkload: 42.158261 40.791225 -3.242631% 9.937000 9.800000 -1.378686% * 1.999400 2.170215 +8.543313% * 630.577500 -100.000000 +0.000000%
overwlmhload-largevm-seqwkload: 110.668856 105.772351 -4.424465% 9.840500 9.850000 +0.096540% 9.544950 9.528770 -0.169514% 610.552000 603.345500 -1.180325% *
overwlmhload-medvm-fullwkload: 57.843676 56.701519 -1.974558% 6.425333 6.393333 -0.498029% * 3.243273 3.339213 +2.958123% * 587.445000 656.301000 +11.721268%
overwlmhload-medvm-modrtwkload: 55.645663 53.631712 -3.619242% 6.379000 6.407667 +0.449391% 3.384013 3.533333 +4.412512% * 742.682000 797.172000 +7.336922%
overwlmhload-medvm-overbkwkload: 58.375959 57.208164 -2.000472% 6.421333 6.390667 -0.477575% * 3.053370 3.294753 +7.905473% * 555.294000 559.051000 +0.676579%
overwlmhload-medvm-seqwkload: 120.074511 110.720759 -7.789957% 6.379000 6.318000 -0.956263% * 10.073857 9.825313 -2.467211% 588.953000 593.809667 +0.824627%
seqhload-largevm-fullwkload: 20.178480 19.465247 -3.534621% 36.800000 32.320000 -12.173913% * 1.211500 1.213500 +0.165085% * 928.818000 990.046000 +6.592034%
seqhload-largevm-modrtwkload: 24.222176 23.400769 -3.391137% 33.400000 31.980000 -4.251497% * 1.812720 1.700120 -6.211660% 1054.564000 1085.430000 +2.926897%
seqhload-largevm-overbkwkload: 20.219314 19.688460 -2.625477% 33.420000 32.800000 -1.855177% * 1.209420 1.206860 -0.211672% 863.324000 900.384000 +4.292711%
seqhload-largevm-seqwkload: 103.694797 101.065404 -2.535704% 33.260000 35.500000 +6.734817% 9.404880 9.406140 +0.013397% * 626.846000 619.110000 -1.234115% *
seqhload-medvm-fullwkload: 21.872927 21.531929 -1.558997% 33.320000 33.700000 +1.140456% 1.470720 1.544880 +5.042428% * 1044.930000 1052.484000 +0.722919%
seqhload-medvm-modrtwkload: 31.439851 30.713816 -2.309285% 35.100000 32.840000 -6.438746% * 2.554980 2.464240 -3.551496% 1172.320000 1077.568000 -8.082435% *
seqhload-medvm-overbkwkload: 42.898053 21.590199 -49.670911% 36.140000 35.560000 -1.604870% * 1.469440 1.467060 -0.161966% 953.176000 940.458000 -1.334276% *
seqhload-medvm-seqwkload: 103.579303 101.011340 -2.479225% 36.320000 33.880000 -6.718062% * 9.406060 9.404540 -0.016160% 618.332000 627.450000 +1.474612%
seqhload-seqvm-overbkwkload: 102.612585 102.547247 -0.063674% 35.020000 35.280000 +0.742433% 9.405840 9.407380 +0.016373% * 625.978000 627.306000 +0.212148%
seqhload-seqvm-seqwkload: 100.911104 101.151855 +0.238578% * 34.640000 34.720000 +0.230947% 9.406020 9.405340 -0.007229% 631.038000 623.342000 -1.219578% *
seqhload-smallvm-fullwkload: 31.371477 30.958012 -1.317963% 35.160000 33.860000 -3.697383% * 2.515140 2.464040 -2.031696% 1161.380000 1089.280000 -6.208132% *
seqhload-smallvm-modrtwkload: 53.330663 52.767446 -1.056085% 34.800000 34.440000 -1.034483% * 4.708720 4.710140 +0.030157% * 1013.726000 1019.318000 +0.551628%
seqhload-smallvm-overbkwkload: 31.226369 30.899420 -1.047029% 34.200000 34.120000 -0.233918% * 2.461320 2.462720 +0.056880% * 1058.950000 1053.220000 -0.541102% *
seqhload-smallvm-seqwkload: 102.464011 100.261457 -2.149587% 34.040000 34.540000 +1.468860% 9.405880 9.404820 -0.011270% 627.118000 629.102000 +0.316368%
smallhload-seqvm-overbkwkload: 115.687086 114.982952 -0.608655% 4.772500 4.780250 +0.162389% 9.828832 9.833490 +0.047386% * 585.617750 588.638000 +0.515737%
smallhload-seqvm-seqwkload: 112.134748 112.416323 +0.251104% * 4.843000 4.770000 -1.507330% * 9.826988 9.831333 +0.044215% * 591.845750 590.571500 -0.215301% *
smallhload-smallvm-fullwkload: 31.566760 30.893237 -2.133647% 20.340000 20.180000 -0.786627% * 2.507620 2.603000 +3.803607% * 1164.950000 1114.514000 -4.329456% *
smallhload-smallvm-modrtwkload: 53.527179 53.003865 -0.977661% 21.010000 20.830000 -0.856735% * 4.709180 4.709890 +0.015077% * 1022.303000 1020.294000 -0.196517% *
smallhload-smallvm-overbkwkload: 31.262871 31.061621 -0.643736% 20.240000 20.470000 +1.136364% 2.461270 2.462150 +0.035754% * 1064.173000 1062.845000 -0.124792% *
smallhload-smallvm-seqwkload: 103.005610 100.879419 -2.064151% 20.090000 21.040000 +4.728721% 9.408340 9.408440 +0.001063% * 625.742000 622.902000 -0.453861% *
UNIXBENCH (higher == better; '*' marks a regression)

BENCHMARK BASELINE 4131 %-INCR
fullhload-largevm-fullwkload: 6011.490000 6317.580000 +5.091749%
fullhload-largevm-seqwkload: 1795.880000 1919.030000 +6.857362%
fullhload-medvm-fullwkload: 4235.386667 4428.100000 +4.550076%
fullhload-medvm-seqwkload: 1743.946667 1886.553333 +8.177238%
fullhload-smallvm-fullwkload: 2746.820000 2810.046667 +2.301813%
fullhload-smallvm-seqwkload: 1596.913333 1726.240000 +8.098540%
largehload-smallvm-fullwkload: 3221.852000 3336.528000 +3.559319%
largehload-smallvm-seqwkload: 1645.188000 1771.280000 +7.664291%
medhload-medvm-fullwkload: 5960.200000 6298.050000 +5.668434%
medhload-medvm-seqwkload: 1815.440000 1924.890000 +6.028841%
medhload-seqvm-fullwkload: 1576.891667 1575.535000 -0.086034% *
medhload-seqvm-seqwkload: 1572.166667 1583.676667 +0.732111%
medhload-smallvm-fullwkload: 4115.053333 4288.606667 +4.217523%
medhload-smallvm-seqwkload: 1773.926667 1899.214286 +7.062728%
overldhload-largevm-fullwkload: 3748.480000 3507.678571 -6.423975% *
overldhload-largevm-seqwkload: 1719.613333 1865.913333 +8.507727%
overldhload-medvm-fullwkload: 3322.565000 3267.475000 -1.658056% *
overldhload-medvm-seqwkload: 1688.425000 1846.520000 +9.363460%
overldhload-smallvm-fullwkload: 2061.030000 2114.092500 +2.574562%
overldhload-smallvm-seqwkload: 1473.672500 1638.752632 +11.201955%
overwlmhload-largevm-fullwkload: 3092.147368 2725.510526 -11.857030% *
overwlmhload-largevm-seqwkload: 1658.515000 1818.240000 +9.630603%
overwlmhload-medvm-fullwkload: 2567.303704 2431.944828 -5.272414% *
overwlmhload-medvm-seqwkload: 1568.867857 1713.936667 +9.246719%
seqhload-largevm-fullwkload: 6165.400000 6658.980000 +8.005644%
seqhload-largevm-seqwkload: 1825.420000 1937.040000 +6.114757%
seqhload-medvm-fullwkload: 6031.620000 6310.360000 +4.621312%
seqhload-medvm-seqwkload: 1836.340000 1943.140000 +5.815916%
seqhload-seqvm-fullwkload: 1882.600000 1880.780000 -0.096675% *
seqhload-seqvm-seqwkload: 1872.500000 1876.740000 +0.226435%
seqhload-smallvm-fullwkload: 4794.320000 4928.800000 +2.804986%
seqhload-smallvm-seqwkload: 1830.400000 1922.180000 +5.014205%
smallhload-seqvm-fullwkload: 1689.540000 1674.155000 -0.910603% *
smallhload-seqvm-seqwkload: 1676.525000 1664.855000 -0.696083% *
smallhload-smallvm-fullwkload: 4804.250000 4908.270000 +2.165166%
smallhload-smallvm-seqwkload: 1817.490000 1918.320000 +5.547761%
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel