01/14/10 21:56, Matthew Dillon wrote:
This would explain why the performance is not as bad as Linux but
is not as good as a properly pipelined case.
For what it may be worth, here are the stats for Solaris as well:
* Solaris 8, native, 32-bit binary (using -lcrypto instead
Andrew Snow wrote:
Hi Mikhail, I assume these tests were done on UFS. Have you tried ZFS?
I'm curious to see the results.
I suspect it would be noticeably worse :) AFAIK ZFS integration with mmap
does at least one extra in-memory data copy.
___
03/25/06 14:03, John-Mark Gurney wrote:
The other useful/interesting number would be to compare system time
between the mmap case and the read case to see how much work the
kernel is doing in each case...
After adding begin- and end-offset options to md5(1) -- implemented
using mmap (see
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to
01/14/10 17:15, Andrew Snow wrote:
Hi Mikhail, I assume these tests were done on UFS. Have you tried ZFS?
I'm curious to see the results.
I suspect it would be harder for me to set up ZFS than for you to apply
my patch to md5.c :-)
-mi
: mmap: 43.400u  9.439s 2:35.19 34.0% 16+184k     0+0io 106994pf+0w
: read: 41.358u 23.799s 2:12.04 49.3% 16+177k 67677+0io      0pf+0w
:
:Observe that even though read-ing is quite taxing on the kernel (high
:sys time), mmap-ing loses overall -- at least, on an otherwise idle
:system.