On Tue, Oct 11, 2016 at 11:13:34PM +1100, russ...@coker.com.au wrote:
> On Tuesday, 11 October 2016 10:30:01 PM AEDT Craig Sanders via luv-main wrote:
> > I was rebooting anyway in order to replace a failed SSD on one
> > machine and convert both of them to root on ZFS.  It booted up OK on
> > both, so I made it the default. If it refrains from sucking badly
> > enough to really piss me off for a decent length of time, i'll leave
> > it as the default.
>
> That's a bold move.  

switching my main system to systemd? yes, i know. very bold. very risky.
i'll probably regret it at some point.

:)

> While ZFS has been totally reliable in preserving data I have had
> ongoing problems with filesystems not mounting when they should.  

i know other people have reported problems like that, but it's never
happened on any of my zfs machines...and i've got most of my pools
plugged into LSI cards (IBM M1015 reflashed to IT mode) using the
mpt2sas driver - which is supposed to exacerbate the problem due to the
staggered drive spin-up it does.

the only time i've ever seen something similar was my own stupid fault:
i rebooted and just pulled out the old SSD, forgetting that I had ZIL
and L2ARC for the pools on that SSD.  I had to plug the old SSD back in
before I could import the pool, so that i could remove them from the pool
(and add partitions from my shiny new SSDs to replace them).
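
roughly speaking, the fix was something like this (pool and device names
here are placeholders, typed from memory):

  # remove the log & cache devices that lived on the old SSD
  zpool remove tank /dev/disk/by-id/ata-OLDSSD-part4
  zpool remove tank /dev/disk/by-id/ata-OLDSSD-part5

  # add replacements from partitions on the new SSDs
  zpool add tank log mirror /dev/disk/by-id/ata-NEWSSD1-part4 \
                            /dev/disk/by-id/ata-NEWSSD2-part4
  zpool add tank cache /dev/disk/by-id/ata-NEWSSD1-part5 \
                       /dev/disk/by-id/ata-NEWSSD2-part5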

> I don't trust ZFS to be reliable as a root filesystem, I want my ZFS
> systems to allow me to login and run "zfs mount -a" if necessary.

not so bold these days - it works quite well and reliably. and i really
want to be able to snapshot my rootfs and back it up with zfs send
rather than rsync.
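
i.e. something along these lines (dataset names and hostnames are just
placeholders):

  # snapshot the root filesystem
  zfs snapshot -r rpool/ROOT@backup-20161011

  # first time: a full send. after that, incremental from the previous snapshot
  zfs send -R -i rpool/ROOT@backup-20161004 rpool/ROOT@backup-20161011 | \
      ssh backuphost zfs receive -F backuppool/myhost/ROOT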

anyway, i've left myself an ext4 mdadm raid-1 /boot partition (with
memdisk and a rescue ISO) in case of emergency.
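
the grub entry for that is roughly like this (filenames are whatever you
called them, quoting from memory):

  menuentry "rescue ISO via memdisk" {
      linux16  /memdisk iso
      initrd16 /rescue.iso
  }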

the zfs root on my main system is two mirrored pairs (raid-10) of
crucial mx300 275G SSDs(*). slightly more expensive than a pair of
500-ish GB drives, but much better performance: read speeds roughly
4 x a single SATA SSD (approaching pci-e SSD speeds), write speeds
about 2 x SATA SSD.
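
the layout is just two mirror vdevs in one pool, i.e. something like
this (device names are placeholders):

  zpool create -o ashift=12 rootpool \
      mirror /dev/disk/by-id/ata-Crucial_MX300_A /dev/disk/by-id/ata-Crucial_MX300_B \
      mirror /dev/disk/by-id/ata-Crucial_MX300_C /dev/disk/by-id/ata-Crucial_MX300_D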

i haven't run bonnie++ on it yet.  it's on my todo list.

http://blog.taz.net.au/2016/10/09/converting-to-a-zfs-rootfs/


the other machine got a pair of the same SSDs, so raid-1 rather than
raid-10. still quite fast (although i'm seeing weirdly slow scrub
performance on that machine. haven't figured out why yet. performance
during actual usage is good, noticeably better than the single aging SSD
I replaced).
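
nothing fancy about how i'm checking it, just the usual ("otherpool" is
a placeholder name):

  zpool scrub otherpool           # kick off a scrub
  zpool status -v otherpool       # shows scrub progress and rate
  zpool iostat -v otherpool 5     # per-device throughput while it runs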


(*) 275 marketing GB, i.e. SI (decimal) units - about 256 GiB in real terms.
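the arithmetic: 275 x 10^9 bytes / 2^30 bytes per GiB = ~256 GiB.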

they're good value for money anyway....i got mine for $108 each.  I've
since seen them for $97 (itspot.com.au).  MSY doesn't stock them for
some reason (maybe they want to clear their stock of MX200 models
first).

we're just on the leading edge of some massive drops in price/GB. a bit
earlier than I was predicting - i thought we'd start seeing it next year.
won't be long before 2 or 4TB SSDs are affordable for home users (you can
get 2TB SSDs for around $800 now). and then I can replace some of my HDD
pools.

> I agree that those things need to be improved.  There should be a way
> to get to a root login while the 90 second wait is happening.

so there really is no way to do that?  i was hoping it was just some
trivially-obvious-in-hindsight thing that i didn't know.

it's really annoying to have to wait and watch those damn stars when you
just want to get a shell and start investigating & fixing whatever's
gone wrong.

> There should be an easy and obvious way to display those binary logs
> from the system when it's not running systemd or from another system
> (IE logs copied from another system).

yep. can you even access journald logs if you're booted up with a rescue
disk? (genuine question, i don't know the answer but figure it's one of
the things i need to know)

craig

--
craig sanders <c...@taz.net.au>
_______________________________________________
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main
