People who use HAMMER also tend to back up their filesystems using
the streaming mirroring feature. You need a backup anyway, regardless.
Definitely. Backups are a different thing.
But I do not consider HAMMER's online mirroring a backup feature, rather
something more sophisticated.
HAMMER(ROOT) recovery check seqno=8ca97e62
HAMMER(ROOT) recovery range 36877528-36892fa0
HAMMER(ROOT) recovery nexto 36892fa0 endseqno=8ca98015
HAMMER(ROOT) recovery undo 36877528-36892fa0 (113272 bytes)(RW)
ad4: FAILURE - READ_DMA48
not great.
This is not a hammer problem but a problem with the underlying disk. It
couldn't read from the disk - that is pretty much a file-system
independent problem; UFS would fail equally miserably.
Not true. With UFS it is a very unlikely case that you will not be able
to mount. UFS uses a flat on-disk structure; inodes are at known places.
I don't know how HAMMER data is placed, but it seems everything is dynamic.
Any link to a description of HAMMER's on-disk layout?
Please read hammer(8) (the recover subcommand).
thank you very much.
While such recovery is painfully
Which I don't have at the moment.
Just dd from /dev/random and overwrite a few sectors?
Good, but... real failures are always worse than that.
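The dd idea above can be sketched safely against a scratch image file rather than a real device (the path /tmp/testfs.img is hypothetical, and /dev/urandom is used instead of /dev/random to avoid blocking):

```shell
# Create a 1 MiB scratch image standing in for a disk
# (hypothetical path; never aim this at a device you care about).
dd if=/dev/zero of=/tmp/testfs.img bs=512 count=2048 2>/dev/null

# Overwrite 4 sectors starting at sector 100 with random data,
# simulating localized media corruption; conv=notrunc keeps the
# rest of the image intact.
dd if=/dev/urandom of=/tmp/testfs.img bs=512 seek=100 count=4 conv=notrunc 2>/dev/null
```

A filesystem could then be created on the image and its recovery tools pointed at it, though, as noted, real hardware failures tend to be messier than a clean run of random sectors.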
In my tests ZFS, for example (which for me is a plain example of bad
design and bad implementation), failed within less than an hour to the
point it was
My main problem had been with ffs_fsck. At one point my machine was
randomly crashing due to a bad power supply. Every time I started up, did
an hour of work, then crash, then 30-40 minutes for fsck to run, and an
You may postpone fsck when using softupdates. It is clearly stated in
the softupdates documents you may find (McKusick was one of the authors).
That's what I do.
Then you suffer a performance hit when fsck'ing in the background.
Once again - read more carefully :)
I am NOT talking about background fsck.
OK, understood now, I think: you agree with temporarily losing a bit of
unreclaimed free space on disk until time permits cleaning things up
properly; AFAIU that is softupdates (+ journalling? not really clear).
That's it. And that's how the original softupdates document describes it.
You may run quite
Any tree-like structure carries a huge risk of losing much more data
than was corrupted in the first place.
Not so sure about that statement, but well, let's agree we might disagree :)
Disagreement is a source of all good ideas, but you should explain why.
My explanation is below.
You asked
Sorry, I also just love ZFS for the business case I rely on it for. It
has some clearly nice features.
Sorry, but if your reasoning about software is based on love, not logic,
then it's a good idea to end the topic.
Probably your business is more about deploying as much as possible and
that's all.
though I don't remember the exact reason I chose it originally.
The practical limitation for swap is 4096GB (4TB) due to the use
of 32-bit block numbers coupled with internal arithmetic overflows
in the swap algorithms, which eat another 2 bits.
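The arithmetic above works out as follows, assuming a 4 KiB page/block size (my assumption; the text does not state it explicitly):

```shell
# 32-bit block numbers with 4 KiB blocks would allow 2^32 * 4096 = 16 TiB,
# but overflow in the swap arithmetic costs ~2 bits, leaving 2^30 blocks:
echo $(( (1 << 30) * 4096 ))   # -> 4398046511104 bytes = 4 TiB
```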
This is definitely enough for me :)
For now I am a FreeBSD user, but when I read what is proposed by
developers(!) for FreeBSD, I clearly understand I will need something else.
Which FreeBSD plans do you find worrisome?
More and more user-friendly features are being proposed as well as
confirmed by developers. Read the FreeBSD-hackers mailing lists from the
last 2 months. Find the 'training wheels' and 'replacing rc(8)' threads.
Anything else?
This is off topic, so I recommend stopping that here. I would reply to
you privately, but
I have a few questions. I am currently using FreeBSD; DragonFly was
just tried.
1) Why is swapcache said to be limited to 512GB on the amd64 platform?
It may actually be a real limit on a larger setup with more than one SSD.
2) It is said that you are limited to caching about 40 inodes unless you