On Mon, Aug 08, 2016 at 09:23:56PM +1000, [email protected] wrote:
> On Sunday, 7 August 2016 1:58:25 AM AEST Robin Humble via luv-main wrote:
> > has anyone else had issues with ZFS on recent kernels and distros?
>
> Debian/Jessie (the latest version of Debian) is working really well
> for me.  Several systems in a variety of configurations without any
> problems at all.

Me too: no problems on sid with 4.6.x kernels and zfs-dkms 0.6.5.7-1
on several different machines.

I recently upgraded my main system from 16GB to 32GB of RAM, but that
was because I started using chromium again and it uses (and leaks) a
LOT of memory. I took the opportunity to tune zfs_arc_min and
zfs_arc_max to 4GB and 8GB (they had been set to 1GB and 4GB), and I
have zswap configured to use up to 25% of RAM for compressed swap.
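For reference, the ARC limits are ZFS-on-Linux module options taking
values in bytes; a sketch of that tuning (the file path is the usual
convention, the values are the ones mentioned above, and the zswap
settings are kernel boot parameters):

```shell
# /etc/modprobe.d/zfs.conf -- ZFS on Linux module options, values in bytes
# 4GB minimum, 8GB maximum ARC size
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=8589934592

# zswap: compressed swap cache using up to 25% of RAM
# (kernel boot parameters, e.g. appended to GRUB_CMDLINE_LINUX)
#   zswap.enabled=1 zswap.max_pool_percent=25
```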

> > BTW this is on fedora 24 with root on ZFS, but it sounds like
> > ubuntu has similar issues. symptoms feel like a livelock in some
> > slab handling rather than an outright OOM. there's 100% system
> > time on all cores, zero fs activity, no way out except to reset.
> > unfortunately root on ZFS on a laptop means no way that I can think
> > of to get stack traces or logs :-/

syslog over the LAN?  serial console?

> For the laptops I run I use BTRFS.  It gives all the benefits of ZFS
> for a configuration that doesn't have anything better than RAID-1 and
> doesn't support SSD cache (IE laptop hardware) without the pain.

I'm probably going to do this when I replace my boot SSDs sometime in
the nearish future (currently mdadm raid-1 partitions for / and /boot,
with other partitions for swap, mirrored ZIL, and L2ARC).

I'd like to use ZFS for root (I'm happy enough to net- or USB-boot
a rescue image with the ZFS tools built in if/when I ever need to do
any maintenance without the rpool mounted), except for the fact that
ZFS is only just about to get good TRIM support for vdevs. If it's
ready and well-tested by the time I replace my SSDs, I may even go
ahead with that. Being able to use 'zfs send' instead of rsync to back
up the root filesystems on all machines on my LAN would be worth it.
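A sketch of what that send-based backup could look like (the pool and
dataset names and the "backuphost" target are hypothetical; the flags
are standard zfs send/receive):

```shell
# initial full replication of the root pool to a backup host
zfs snapshot -r rpool@base
zfs send -R rpool@base | ssh backuphost zfs receive -uF backup/rpool

# later: incremental send of only the blocks changed since @base
zfs snapshot -r rpool@daily1
zfs send -R -i rpool@base rpool@daily1 | ssh backuphost zfs receive -u backup/rpool
```

Unlike rsync, the incremental stream is generated from snapshot
metadata, so it doesn't need to walk and stat the whole filesystem.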


Speaking of which, has anyone ever heard of a tool that can interpret
a btrfs send stream and extract files from it, and maybe even merge in
future incremental streams? In other words, btrfs send to any
filesystem (including ZFS). Something like that would make btrfs for
the rootfs and ZFS for bulk storage/backup really viable.

I need a good excuse to start learning Go, so I think I'll start
playing with that idea on my ztest VM (initially created for ZFS
testing, but it now has a 5GB boot virtual disk plus 12 more 200MB
disks for mdadm, lvm, btrfs, and ZFS testing). BTW, there's a bug in
seabios which causes a VM to lock up on "warm" reboot if there are
more than 8 virtual disks and the BIOS boot menu is enabled... which
is an improvement over what it used to do, which was lock up even on
initial "cold" boot.

It may not even be possible: the idea is based on fuzzy memories from
years ago that a btrfs send stream contains a sequence of commands
(and data) which are interpreted and executed by btrfs receive. IIRC,
the btrfs devs' original plan was to make it tar-compatible, but tar
couldn't do what they needed, so they wrote their own format.


> ZFS is necessary if you need RAID-Z/RAID-5 type functionality (I
> wouldn't trust BTRFS RAID-5 at this stage), if you are running a
> server (BTRFS performance sucks and reliability isn't adequate for a
> remote DC), or if you need L2ARC/ZIL type functionality.

I made the mistake of using raidz when I first started using ZFS years
ago. It's not buggy (it's rock-solid reliable), it's just that mirrors
(raid1 or raid10) are much faster and easier to expand. It made sense
financially at the time to use 4x1TB drives in raid-z1, but I'm only
using around 1.8TB of that, so I'm planning to replace them with
either 2x2TB or 2x4TB drives; maybe even 4x2TB for better performance.

The performance difference is significant: my "backup" pool has two
mirrored pairs, while my main "export" pool has raid-z. Scrubs run at
200-250MB/s on "backup", and around 90-130MB/s on "export".


I also use raidz on my mythtv box. Performance isn't terribly
important there, but storage capacity is. Even so, mirrored pairs
would be easier to upgrade than raidz, and cheaper too, because I'd
only have to upgrade one pair at a time rather than all four raid-z
members. I have no intention of replacing any drives until they start
dying or 8+TB drives are cheap enough to consider buying a pair.

craig

-- 
craig sanders <[email protected]>
_______________________________________________
luv-main mailing list
[email protected]
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main
