On 2015-05-17 03:24, Mark Campbell wrote:

So that said, I'm highly in favor of the OPTION of ZFS on RedSleeve.
As you've mentioned, there are some cons to it, especially to embedded
devices with limited resources. I think these types of devices should
have the ability to stay on ext.

Perhaps this is a matter of opinion but for non-zfs cases I am
struggling to figure out why an .img is advantageous over a generic
.tar.gz in the first place. We aren't really aiming this
distribution at people who don't already know how to mkfs.ext4 and
extract the tarball onto it.

I'm not sure I understand what this has to do with my text you quoted.
 But as for your question, I use the .img file when flashing my SD
cards in Windows using win32diskimager.  At least in the Windows
world, that's how I've had the best success in getting these SD cards
created.

I have to say it never occurred to me that there might be users
of RedSleeve that don't have another Linux system up and running
already. Expending any effort specifically for Windows-only users
seems like a poor use of resources considering the target
audience. RedSleeve most certainly isn't aimed at beginners.

The reason why what I said seemed (to me) related is because I
am suggesting that for the typical use-case on an ARM machine
with a binary-only kernel, there is going to be enough effort
required to get any image not specific to the hardware up and
running that an image wouldn't be particularly advantageous over
a generic tarball.

But for the few systems that we have hardware and suitable
kernels available for, IMO, if we're going to make dd-able
images, they might as well be for a configuration that involves
some additional process that may not be obvious to everyone,
such as making it run on a decent file system.

If reliability is a concern, I'm surprised you don't consider ext* a
liability, especially on ARM, considering that things like parts of
e2fsprogs (like fsck) do things that are unbelievably dangerous on
architectures without transparent automatic alignment fixup. What
happens all over that codebase is that they keep allocating a buffer
as an array of char[4096] (char being byte aligned), then cast it
into a struct (which will be at least word aligned, possibly to a
bigger boundary depending on what the first element is).

My statement about stability/reliability was more about the code
itself; so in other words, choosing the more mature, stable code set
of EL, vs something a bit more bleeding edge/untested, like Fedora.
But you make a fair point about ext in that regard.  I wasn't actually
aware of automatic alignment fixup.

Most people aren't, including developers who _really_ need to know
better.

The net result is that unless you have set your kernel to
automatically fix up alignment problems (and take the ~20x speed
penalty for it), there is a decent chance that fsck-ing your ext*
file system on an ARM machine will irreversibly trash the fs. The
only reason there isn't a massive outcry about this is because most
distros ship with kernels configured to automatically do alignment
fixup. The performance hit from it is generally ignored.
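For what it's worth, on ARM kernels built with the alignment trap you
can inspect and change that behaviour at runtime via /proc; a rough
sketch (the exact interface may vary by kernel version, and this only
exists on ARM):

```shell
# Show the current alignment-trap counters and mode (ARM only)
cat /proc/cpu/alignment

# 0 = ignore, 1 = warn, 2 = fixup, 3 = warn+fixup, 4 = signal (SIGBUS)
echo 2 > /proc/cpu/alignment   # silently fix up, at the speed penalty
echo 4 > /proc/cpu/alignment   # deliver SIGBUS instead, to flush bugs out
```

Setting it to signal is a good way to find out whether your userland is
actually clean.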

So I'd dispute the implication that ext* is somehow safer than
anything else out there.

Wasn't implying that ext* was safer than anything else out there, just
that in my experience, it's been pretty stable, and has bounced back
pretty well from anything I've thrown at it.  But then again, most of
that experience is on x86-based systems, where you say automatic
alignment fixup is enabled.

I have lost data on ext* more than once on x86. I have no plans
to put any important data that would be inconvenient to restore
from backups on ext* again in the future.

In fairness, the only two cases in which I find that I am running
out of CPU with zfs-fuse is when doing a scrub or zfs send/receive.
Neither of these is even possible with a different file system, so
it is debatable whether you actually have a real use-case where you
are even bottlenecked by the FS a significant fraction of the time.

And all that ignores the safety aspects of zfs' checksumming.

Also, most embedded devices run off SD cards and on those
compression and the fact that ZFS tries to combine writes into large
(128KB) blocks is likely to actually get you ahead overall.
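Both of those are per-dataset properties if you do go the ZFS route; a
sketch, with `tank/data` as a placeholder dataset (zfs-fuse's older
pool version may not offer every compression algorithm):

```shell
# Enable compression; lzjb is the cheap, widely supported choice
zfs set compression=lzjb tank/data

# recordsize defaults to 128K, which is what drives the large writes
zfs get compression,recordsize tank/data
```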

 My primary concern for performance is that I intend to do a level of
processing & calculations that will likely stretch the boundaries of
embedded devices somewhat, so my goal is to keep the system/OS from
contributing to the load any more than it has to, so that more CPU is
available for those calculations.  Maybe I'm overstating the
performance hit ZFS will actually have on an embedded system, but it
seemed like something that should be brought up.

I understand your concern, but do your applications do heavy disk
I/O while number crunching that hard? On a 2GHz Marvell Kirkwood I
find that zfs send/receive tops out at about 30MB/s, partially
because netcat seems to be eating upward of 20% of system CPU time
(even when it is piping to /dev/null, not a zfs-fuse related issue).
Scrubbing the FS goes at about 50MB/s, purely CPU bound. While that
is not exactly stellar, it is certainly far more than you would
achieve with a typical general-purpose random seek pattern on
spinning rust.

I would be quite interested to see empirical evidence for your
intended load. The zfs-fuse package is on the mirrors. Perhaps
add a scratch disk, zfs it up and see how it behaves under realistic
application load vs. other FS-es?
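As a rough sketch of such an experiment (device, pool and path names
are placeholders, and this destroys whatever is on the scratch disk):

```shell
# Build a throwaway pool on a spare disk (requires zfs-fuse running)
zpool create scratch /dev/sdX
zfs create scratch/bench

# Crude sequential baseline; the real test is your application workload
dd if=/dev/zero of=/scratch/bench/testfile bs=1M count=512 conv=fsync

# Repeat the same workload on an ext4 partition for comparison, then:
zpool destroy scratch
```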

Gordan
_______________________________________________
users mailing list
[email protected]
http://lists.redsleeve.org/mailman/listinfo/users
