Currently I am a FreeBSD user on production machines.

I installed DragonFly BSD 3.6 from a pendrive image onto my laptop with 512MB RAM and an 80GB disk - mostly to test the HAMMER filesystem.

While I am not a fan of such feature-heavy filesystems (I use FreeBSD with UFS), I was positively surprised by the performance.

Even when I deliberately reduced kern.maxvnodes to 5000 from the default of about 50000, it takes only a moment to scan through the whole /usr filesystem (with system sources) with
find /usr >/dev/null.
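Concretely, the test was something like this (the timing wrapper is added here for illustration):

    # shrink the vnode cache, then time a full tree walk
    sysctl kern.maxvnodes=5000
    time find /usr >/dev/null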

To make sure DragonFly wasn't caching more than I thought, I repeated the test after a
reboot - same result.

It works as fast as or faster than UFS on most tests - like tarring the whole /usr to a single file.
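For anyone reproducing the tar test, it was roughly this (the archive path is arbitrary):

    # archive the whole /usr tree to a single file and time it
    time tar cf /tmp/usr.tar /usr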

It's even better under parallel load with multiple processes doing similar or different things.

Very importantly, response times are better under high I/O load, even compared to FreeBSD with UFS.

fsync performance is comparable to UFS, maybe even better.

I did some tests with swapcache(8), but I had only a pendrive, no real SSD, so it wasn't actually a speedup (sometimes even a slowdown), but I watched the I/O rates with systat to confirm its operation. It works as advertised.
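Roughly, the setup was along these lines, per swapcache(8) (the swap device itself was already configured):

    # turn on metadata and data caching to the swap device
    sysctl vm.swapcache.meta_enable=1
    sysctl vm.swapcache.data_enable=1
    # watch the resulting device I/O rates
    systat -vm 1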

I did some snapshot testing - as advertised, having lots of snapshots has no noticeable effect on performance.
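The snapshots were taken with the stock tool, something like this (paths are arbitrary):

    # each run creates a snapshot softlink pointing at a transaction id
    hammer snapshot /usr /usr/snapshots/snap1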

I tested reblock and rebalance too - they work fast and don't slow other activity down much. And they seem to be done intelligently - running reblock again takes only a moment, so it doesn't copy things that don't need it.
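For reference, the invocations were just the defaults:

    # repack data/B-tree blocks, then rebalance the B-tree
    hammer reblock /usr
    hammer rebalance /usr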

I expected it to perform better than ZFS (can anything be worse? I don't treat ZFS as a serious product, except for its serious marketing ;), but not better than UFS under FreeBSD or DragonFly.

But I need to ask a few questions:

1) Why is "wired" memory so high - over 120MB? How can I check what actually takes that much?

vmstat -m shows a few megabytes; I stripped the kernel binaries, so the kernel plus a few modules are less than 11MB. Even looking at vmstat -s doesn't sum up to that figure.

Actually, it is similar on FreeBSD, yet I still haven't gotten a definite answer on how to check what actually takes that memory. All the data I was advised to check simply doesn't sum up.
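For reference, these are the obvious places I already checked, and they don't add up to the wired figure (assuming the usual vm.stats sysctls):

    # per-malloc-type kernel allocations
    vmstat -m
    # wired page count (in pages; multiply by the page size)
    sysctl vm.stats.vm.v_wire_count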

2) I can mount the whole HAMMER filesystem with the "nohistory" option, just as with noatime, which I set on all systems no matter what filesystem I use.

But how can I enable nohistory mode on a selected PFS?
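To be clear, the whole-filesystem variant I mean is just an fstab entry like this (device name hypothetical):

    # mount the whole HAMMER filesystem without history retention
    /dev/ad0s1d   /usr   hammer   rw,nohistory,noatime   1 1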

3) Can I set a PFS up so that all history points are visible under some subdirectory?

I would use it for a Samba-exported PFS so users would be able to browse it and, e.g., recover older versions of files without asking me for help.
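As far as I understand, history is only reachable today through transaction ids, e.g. (the id below is made up):

    # list the transaction ids recorded for a file
    hammer history somefile
    # read one historical version via the @@ path syntax
    cat somefile@@0x00000001061a8ba0

What I'm asking for is the same thing exposed as a browsable subdirectory.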

4) Under load, once every few minutes, I'm getting messages like

"[diagnostics] had to recurse on directory /XXX"

What is it?

Except for these kernel messages, everything works OK.

5) With swapcache I can turn on metadata caching (great) and data caching.

For data caching I can limit the maximum file size AND can include/exclude caching for selected directories/files using chflags.
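For completeness, the mechanism I mean is the one from swapcache(8) (the paths are examples):

    # in this mode only chflags-marked files are data-cached
    sysctl vm.swapcache.use_chflags=1
    # mark a tree for caching, exclude one subtree
    chflags -R cache /var/db
    chflags -R noscache /var/db/scratch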

But it would be VERY useful to have the following features:

- For files larger than the maximum, cache only the first few kilobytes of the file instead of not caching it at all. Very useful for Maildirs - mail headers could be cached, while attachments in large mail files would not be.

- While limiting the maximum cached file size, the ability to declare, via chflags, that selected files should be cached in spite of being bigger.
I would use it for iSCSI-exported images.

OR

- Cache all files no matter how big, but exclude data that was read linearly.

The last one would be the most useful - a 1GB movie would not be cached, but a 1GB database file or disk image would.



BTW, the ability to keep the swapcache across reboots would be great. Otherwise the cache is empty exactly when it is most needed - after a reboot, when there is a high I/O rate.



Finally

6) Why is there a limit of 65536 PFSs? It's no problem for me now, but on a large server it would be useful to make one PFS per user, and it may be a problem then.
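For context, what I have in mind is simply one PFS per user, something like this (layout hypothetical):

    # create a per-user PFS inside the HAMMER mount, then null-mount it
    hammer pfs-master /home/pfs/joe
    mount_null /home/pfs/joe /home/joe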


