though I don't remember the exact reason I chose it originally.
The practical limitation for swap is 4096GB (4TB) due to the use
of 32-bit block numbers coupled with internal arithmetic overflows
in the swap algorithms, which eat another 2 bits.
This is definitely enough for me :)
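The 4TB figure follows directly from the stated constraints. A quick sketch of the arithmetic, assuming 4KB swap blocks (a common page size, not stated explicitly above):

```python
# Rough arithmetic behind the stated 4TB swap limit.
# Assumption: 4KB (2^12 byte) swap blocks -- not given in the text above.
BLOCK_SIZE = 4096

# 32-bit block numbers can address 2^32 blocks in theory...
raw_limit = (2 ** 32) * BLOCK_SIZE           # 16TB addressable on paper

# ...but internal arithmetic overflows eat another 2 bits,
# leaving 2^30 usable block numbers:
usable_limit = (2 ** (32 - 2)) * BLOCK_SIZE

print(usable_limit // 2 ** 40, "TB")  # -> 4 TB
```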
We do not want to increase the size of the radix tree element because
the larger structure size would double the per-swap-block physical memory
overhead, and physical memory overhead is already fairly significant...
around 1MB of physical memory is needed per 1GB of swap.
This is right. I don't like locking more memory just because more swap may
POSSIBLY be used.
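The "1MB per 1GB of swap" figure works out to only a few bytes of tracking structure per swap block, which is why doubling the radix tree element size would be so costly. A rough check, again assuming 4KB swap blocks:

```python
# Rough check of the "1MB of physical memory per 1GB of swap" figure.
# Assumption: 4KB swap blocks (not stated explicitly above).
BLOCK_SIZE = 4096
GB = 2 ** 30
MB = 2 ** 20

blocks_per_gb = GB // BLOCK_SIZE          # 262,144 blocks per GB of swap
overhead_per_block = MB / blocks_per_gb   # bytes of metadata per block

# ~4 bytes per swap block, i.e. roughly 0.1% overhead; a larger radix
# tree element would double this, as the text above says.
print(overhead_per_block)
```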
There is a maximum of 4 swap devices (with a 512GB total limit by default,
and a per-device limit of 1/4 of that). Devices are automatically
interleaved, too.
It is enough.
For now I am a FreeBSD user, but when I read what is being proposed by the
developers(!) for FreeBSD, I clearly see I will need something else.
And swapcache is nearly what I need for one use case where I have a mix of
I/O-heavy files and large data; dividing it manually is hard.
UFS ought to be cached by swapcache, but there's no point using it
on DragonFly. You should use HAMMER.
Have you written anything (even rough and preliminary) about how HAMMER's
data is laid out on disk? Or maybe better, HAMMER2, which you are working on now.
From my 15 years of experience with Unix (the first 5 of which were
unfortunately Linux, so I know what filesystem loss is), it would be hard
to impossible to convince me that a filesystem without an offline fsck is
a good idea. Even for as good a programmer as you. UFS is plain
indestructible, surviving many of my harsh tests that e.g. completely kill
ZFS with full data loss ;)
Are inodes laid out in predefined places, or scattered around and accessed
via a tree-like structure?
UFS does the first, so fsck ALWAYS knows where to find the inodes. A
trash-write in a random place would destroy a few files, but not the whole
thing.
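The "predefined places" point can be made concrete: in a UFS-like layout, an inode's disk address is a pure function of its number and a few superblock constants, so fsck can enumerate every inode without following any on-disk pointers. A minimal sketch with hypothetical layout constants (not real UFS values):

```python
# Sketch of fixed inode placement.  All constants are hypothetical
# illustrations, not actual UFS on-disk parameters.
CG_SIZE = 8 * 2 ** 20        # bytes per cylinder group (hypothetical)
INODES_PER_CG = 2048         # fixed at filesystem creation (hypothetical)
INODE_SIZE = 256             # bytes per on-disk inode
CG_INODE_OFFSET = 64 * 1024  # inode table offset within a group (hypothetical)

def inode_address(ino: int) -> int:
    """Disk byte offset of inode `ino`, computed with no disk reads."""
    cg, slot = divmod(ino, INODES_PER_CG)
    return cg * CG_SIZE + CG_INODE_OFFSET + slot * INODE_SIZE

# A tree-based filesystem would instead walk on-disk pointers to locate
# its inodes, so one trashed interior node can orphan everything below it.
print(inode_address(5000))
```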
:> 3) How about reboots? From my understanding a reboot, even a clean one,
:> means losing ALL cached data. Am I right?
All swapcache-cached data is lost on reboot.
This is quite a disadvantage. Of course a production system would not crash
every day, but imagine a crash happens for any reason (power spike, loss of
power, etc.); then I reboot, everything works, and now all the users want
to use the server, so we get the time of highest load and... swapcache is
empty. Warmup will take some time, and the system will be slower during
that time.
Still, the ability to manually decide which files are cached is plain great
and exactly what I need.
And finally, losing the swapcache device means no data loss. I could risk
using cheap flash media that has a warranty; if it fails, just replace it.
:> In spite of HAMMER being a far, far, far better implementation of a
:> filesystem than ZFS, I don't want to use either of them, for the same reasons.
:>
:> UFS is safe.
A large, full UFS filesystem can take hours to fsck, meaning that a
crash/reboot of the system could end up not coming back on line for
a long, long time.
35 minutes for the largest I use - 2TB. I ALWAYS(TM) do one filesystem per
drive or per 2-drive mirror; the rest are checked in parallel.
This is safe: a double disk failure would mean losing a 2TB volume, not
20TB.
On 32-bit systems the UFS fsck can even run the
system out of memory and not be able to complete. On 64-bit systems
this won't happen, but the system can still end up paging heavily
depending on how much RAM it has.
Wrong. I've never seen it use more than 500MB of RAM per drive, and I
always have more than 500MB per drive in the machine.
I would need tens of millions of files per disk for that. It doesn't
happen; I have never had more than 3 million.
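The 500MB/drive claim is plausible under a simple model of fsck memory use: a block-usage bitmap plus per-inode bookkeeping. The per-item costs below are hypothetical round figures, not measurements of any real fsck implementation:

```python
# Back-of-envelope model of fsck memory use.  The per-item costs are
# hypothetical assumptions, not measured values.
TB = 2 ** 40
FRAG_SIZE = 4096          # assumed fragment size
BYTES_PER_INODE = 50      # assumed per-inode bookkeeping

def fsck_memory(volume_bytes: int, inode_count: int) -> int:
    """Estimated fsck working-set size in bytes."""
    bitmap = volume_bytes // FRAG_SIZE // 8   # 1 bit per fragment
    inode_state = inode_count * BYTES_PER_INODE
    return bitmap + inode_state

# A 2TB volume with 3 million inodes:
mb = fsck_memory(2 * TB, 3_000_000) / 2 ** 20
print(round(mb))  # roughly 207MB -- under the 500MB/drive observed above
```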
In contrast, HAMMER is instant-up and has no significant physical
memory limitations (very large HAMMER filesystems can run on systems
with small amounts of memory).
This is true, and I have already tested it.
I could call HAMMER "ZFS done right", but it is still dangerous to me.
Until I UNDERSTAND that HAMMER is safe, I will not believe it.
HAMMER represents a great deal of your work, but whether it is a good idea
(as opposed to a good implementation) is something else.
:> thanks
With some work, people have had mixed results, but DragonFly is designed
to run on actual hardware and not under virtualization.
It seems you missed my question. I DO NOT WANT to virtualize DragonFly,
just as I don't want to virtualize FreeBSD now.
Today's "virtualize everything" trend is plain stupid, and people like
stupid ideas. I don't.
But I run a few Windows sessions using VirtualBox UNDER FreeBSD. Without
that option I would need a separate machine for them.
For everything else I use jails, and DragonFly has working jails.