The workaround has been applied; the fork_100 run now completes about 50 seconds faster:
-bash-3.2$ cat fork1.output
# bin/../bin-i86pc/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100
                          prc thr   usecs/call      samples   errors cnt/samp
fork_100                    1   1  23881.74167          86        0      100
#
# STATISTICS           usecs/call (raw)    usecs/call (outliers removed)
#                 min       22510.65833        22510.65833
#                 max       49583.97639        26053.60615
#                mean       24982.76488        24009.66137
#              median       24098.16940        23881.74167
#              stddev        3179.35577          686.60767
#      standard error         314.80313           74.03881
# 99% confidence level        732.23208          172.21427
#                skew           5.00473            0.60015
#            kurtosis          33.19530            0.17513
#    time correlation         -15.29000           -0.54277
#
#        elasped time         257.34507
#   number of samples                86
#  number of outliers                16
#    getnsecs overhead              238
#
# DISTRIBUTION
#       counts  usecs/call                                     means
#            1 22500.00000 |****                               22510.65833
#            0 22590.00000 |                                   -
#            0 22680.00000 |                                   -
#            0 22770.00000 |                                   -
#            3 22860.00000 |************                       22890.55975
#            0 22950.00000 |                                   -
#            3 23040.00000 |************                       23074.55564
#            1 23130.00000 |****                               23178.31415
#            2 23220.00000 |********                           23264.49284
#            4 23310.00000 |****************                   23376.40680
#            4 23400.00000 |****************                   23453.45574
#            4 23490.00000 |****************                   23523.64573
#            7 23580.00000 |****************************       23635.30316
#            8 23670.00000 |********************************   23710.76531
#            5 23760.00000 |********************               23788.50820
#            4 23850.00000 |****************                   23893.68383
#            3 23940.00000 |************                       23973.25051
#            4 24030.00000 |****************                   24078.65052
#            3 24120.00000 |************                       24141.85327
#            3 24210.00000 |************                       24246.67291
#            4 24300.00000 |****************                   24364.04465
#            5 24390.00000 |********************               24444.50472
#            4 24480.00000 |****************                   24539.64735
#            1 24570.00000 |****                               24621.17191
#            2 24660.00000 |********                           24739.20456
#            1 24750.00000 |****                               24760.38002
#            0 24840.00000 |                                   -
#            0 24930.00000 |                                   -
#            3 25020.00000 |************                       25076.92854
#            1 25110.00000 |****                               25182.04928
#            1 25200.00000 |****                               25276.34825
#
#       5 > 95%            |********************               25573.33034
#
#  mean of 95%               23913.13860
#  95th %ile                 25295.36178
--- On Mon, 5/5/08, Juergen Keil <[EMAIL PROTECTED]> wrote:

From: Juergen Keil <[EMAIL PROTECTED]>
Subject: Re: [xen-discuss] How to install Solaris 0508 HVM DomU at SNV87
To: [EMAIL PROTECTED]
Cc: [email protected]
Date: Monday, May 5, 2008, 12:43 PM
> We are done on SNV87 HVM DomU.
> Service ppd-cache-update is now online.
> I am going to wait for completion on the S10U5 HVM DomU.
Can you please try to run the fork_100 test from the libMicro-0.4.0 benchmark on that SNV87 HVM DomU?
The source of the benchmark is available for download here:
http://opensolaris.org/os/project/libmicro/
http://opensolaris.org/os/project/libmicro/files/libmicro-0.4.0.tar.gz
% wget http://opensolaris.org/os/project/libmicro/files/libmicro-0.4.0.tar.gz
% gunzip < libmicro-0.4.0.tar.gz | tar xf -
% cd libMicro-0.4.0
% make
% bin/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100 > /tmp/fork.output
Running:       fork_100 for 4.77470 seconds
% cat /tmp/fork.output
# bin/../bin-i86pc/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100
                          prc thr   usecs/call      samples   errors cnt/samp
fork_100                    1   1    280.76964         100        0      100
#
# STATISTICS           usecs/call (raw)    usecs/call (outliers removed)
#                 min         237.29929          237.29929
#                 max         949.35702          424.32036
#                mean         306.45514          298.44325
#              median         281.65932          280.76964
#              stddev          79.72480           44.62684
#      standard error           7.89393            4.46268
# 99% confidence level         18.36128           10.38020
#                skew           5.29799            1.00536
#            kurtosis          38.85951            0.12177
#    time correlation          -1.47891           -1.07615
#
#        elasped time           4.76910
#   number of samples               100
#  number of outliers                 2
#    getnsecs overhead              399
#
# DISTRIBUTION
#       counts  usecs/call                                     means
#            1   234.00000 |**                                 237.29929
#            7   240.00000 |**************                     242.43814
#            2   246.00000 |****                               249.26708
#            1   252.00000 |**                                 256.03853
#            3   258.00000 |******                             262.67013
#           16   264.00000 |********************************   267.68925
#           10   270.00000 |********************               273.27753
#           12   276.00000 |************************           279.59795
#            7   282.00000 |**************                     284.16094
#            5   288.00000 |**********                         290.65786
#            2   294.00000 |****                               297.36328
#            3   300.00000 |******                             302.75731
#            1   306.00000 |**                                 309.20215
#            2   312.00000 |****                               315.67402
#            1   318.00000 |**                                 318.00967
#            1   324.00000 |**                                 326.74916
#            3   330.00000 |******                             331.74546
#            3   336.00000 |******                             340.82672
#            3   342.00000 |******                             342.74536
#            5   348.00000 |**********                         350.05268
#            2   354.00000 |****                               357.77666
#            0   360.00000 |                                   -
#            1   366.00000 |**                                 369.75543
#            0   372.00000 |                                   -
#            1   378.00000 |**                                 383.21981
#            3   384.00000 |******                             386.35274
#
#       5 > 95%            |**********                         408.37425
#
#  mean of 95%               292.65741
#  95th %ile                 397.26077
On an "AMD Athlon(tm) 64 X2 Dual Core Processor 6400+" / metal / snv_89, the fork benchmark runs for 4.77 seconds (that's the output included above). Under xVM / Xen and on current Intel / AMD processors it should complete in 20-30 seconds.
I've seen cases where the fork_100 benchmark took more than 700 seconds when I ran it in a 32-bit PV domU.
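For reference, the fork_100 case forks and reaps 100 children per sample and reports the average cost per fork() call. A minimal standalone sketch of that measurement (my own illustration, not libMicro's harness; the BATCH constant is an assumption mirroring the fork_100 / -B 100 batch size) could look like this:

/*
 * Sketch: time a batch of fork()/waitpid() pairs, roughly what the
 * libMicro fork_100 case measures. Not part of libMicro itself.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

#define BATCH 100   /* forks per sample, like fork_100 */

int main(void)
{
    struct timeval start, end;
    int i;

    gettimeofday(&start, NULL);
    for (i = 0; i < BATCH; i++) {
        pid_t pid = fork();
        if (pid == -1) {
            perror("fork");
            exit(1);
        }
        if (pid == 0)
            _exit(0);                   /* child exits immediately */
        (void) waitpid(pid, NULL, 0);   /* reap before next fork */
    }
    gettimeofday(&end, NULL);

    double usecs = (end.tv_sec - start.tv_sec) * 1e6 +
                   (end.tv_usec - start.tv_usec);
    printf("%.5f usecs/fork\n", usecs / BATCH);
    return 0;
}

Unlike libMicro, this takes only a single sample and does no outlier removal, but it should be enough to see the metal-vs-domU gap described above.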
--------------------------------------------------------------------
I do have a workaround for this Xen / OpenSolaris performance problem[*]; it's a modified /usr/lib/libc/libc_hwcap3.so.1 shared C library (see the attachment). It was compiled from current OpenSolaris source (post snv_89), but seems to work fine under snv_81. Under xVM you should find a lofs mount like this in the "df -h" output:
# df -h
Filesystem                      size   used  avail capacity  Mounted on
...
/usr/lib/libc/libc_hwcap3.so.1  6.1G   4.1G   1.9G    69%    /lib/libc.so.1
...
Now try this in your domU:
# gunzip < libc_hwcap3.tar.gz | ( cd /tmp; tar xfv - )
x libc_hwcap3.so.1, 1646444 bytes, 3216 tape blocks
# mount -O -F lofs /tmp/libc_hwcap3.so.1 /lib/libc.so.1
Now run the fork_100 benchmark. It might run much faster.
And after "umount /lib/libc.so.1", fork_100 should become slow again.
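If you want to confirm programmatically which library is overmounted on /lib/libc.so.1 (for example, before and after the umount), a small Solaris-specific sketch scanning the mount table with getmntent(3C) might look like this (my own addition, not part of the workaround):

/*
 * Sketch: report any filesystem currently mounted on /lib/libc.so.1,
 * i.e. whether the lofs libc overmount is active. Solaris-specific.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mnttab.h>

int main(void)
{
    FILE *fp = fopen("/etc/mnttab", "r");
    struct mnttab mt;

    if (fp == NULL) {
        perror("/etc/mnttab");
        return 1;
    }
    while (getmntent(fp, &mt) == 0) {
        if (strcmp(mt.mnt_mountp, "/lib/libc.so.1") == 0)
            printf("mounted on /lib/libc.so.1: %s (%s)\n",
                mt.mnt_special, mt.mnt_fstype);
    }
    fclose(fp);
    return 0;
}

With the workaround active, this should print the /tmp/libc_hwcap3.so.1 lofs mount; on a stock xVM domU it should print the /usr/lib/libc/libc_hwcap3.so.1 mount shown in the "df -h" output above.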
===
[*] http://www.opensolaris.org/jive/thread.jspa?threadID=58717&tstart=0