> On Sat, 6 Nov 2021, Pouya Tafti wrote:
> > This obviously seems wrong, so I tried using /etc/pam.d/other or
> > /etc/pam.d/system instead, to have it authenticate using pam_unix.so
> > (or to use the latter directly). But now I can't get it to accept
> > *any*
[not a pam(8) expert]
pkgsrc's x11/i3lock is an X11 screen lock that uses pam(8) for
authentication and comes with a sample /etc/pam.d config that simply
includes /etc/pam.d/login.
On this particular system, out of the box:
$ uname -r -m
9.2_STABLE amd64
$ cat /etc/pam.d/login | grep auth
#
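For what it's worth, a minimal /etc/pam.d/i3lock along those lines might look
like the sketch below. The "include login" pair is roughly what the pkgsrc
sample does; the commented pam_unix.so lines are one alternative for
authenticating directly against the local password database (this is an
illustration, not the shipped file):

  # /etc/pam.d/i3lock -- illustrative sketch only
  auth     include    login
  account  include    login
  # or, bypassing the login chain entirely:
  #auth     required   pam_unix.so
  #account  required   pam_unix.so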
On Mon, 04 Oct 2021 at 18:08 +0200, Pouya Tafti wrote:
I followed a guide [1] inspired by the wiki [2] with small deviations
[3] to set up cgd-on-root on 9.2_STABLE. It seems to work well, with
the minor annoyance that a root filesystem check is triggered after
each (re)boot.
Looking at /var/log/messages I can guess why: the cgd device is
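For reference, the boot-time check is governed by the fsck-pass column (the
last field) of the root entry in /etc/fstab; with cgd-on-root that entry
points at the cgd device, e.g. (the layout below is only an illustration, not
necessarily what the guide produces):

  /dev/cgd0a  /     ffs   rw,log  1 1
  /dev/cgd0b  none  swap  sw,dp   0 0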
My 10-year-old ThinkPad X201 was running very hot (80+ degC). After
re-applying the CPU thermal paste and cleaning the fan it now runs much
cooler (47-53 degC in normal use), but still ca 10-12 degrees hotter
than under Debian, which I ran just to compare (btw envstat(8) doesn't
show the fan
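In case it helps anyone following along, readings like these come from
envstat(8); device and sensor names vary per machine, but the usual
invocations are along the lines of:

  $ envstat                                        # dump every registered sensor
  $ envstat -d acpitz0                             # just the ACPI thermal zone
  $ envstat -i 5 -s "coretemp0:cpu0 temperature"   # poll one sensor every 5 s

Whether the fan shows up at all depends on whether some driver exposes it
through the envsys(4) framework in the first place.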
On Sat, 21 Aug 2021 at 17:03 -, Michael van Elst wrote:
> pouya+lists.net...@nohup.io (Pouya Tafti) writes:
>
> >Aug 20 06:04:33 basil smartd[1106]: Device: /dev/rsd5d [SAT], SMART
> >Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 65 to 79
> >Aug 20 06
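(For anyone wanting to re-check by hand rather than wait for the next smartd
poll, something like the following reads the same attribute table directly;
adjust the device name to your own setup:)

  $ smartctl -A /dev/rsd5d          # vendor attribute table only
  $ smartctl -a -d sat /dev/rsd5d   # full identity, health and self-test log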
Duplicate, please ignore. Apologies for the noise.
On Fri, 20 Aug 2021 at 06:34 +0200, Pouya Tafti wrote:
> After a recent drive failure in my primary zfs pool, I set
> up a secondary pool on a cgd(4) device on a single new sata
> hdd (zfs on gpt on cgd on gpt on a 4TB Seagate Ironwolf
> hdd) to back up the primary.
On Fri, 20 Aug 2021 at 06:13 -, Michael van Elst wrote:
[snip]
> Yes. It could be the drive itself, but I'd suspect the
> backplane or cables. The PSU is also a possible candidate.
Thanks. Retrying the replication in another bay now before
opening up the box.
After a recent drive failure in my primary zfs pool, I set
up a secondary pool on a cgd(4) device on a single new sata
hdd (zfs on gpt on cgd on gpt on a 4TB Seagate Ironwolf
hdd) to back up the primary.
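For the archives, the layering described above (zfs on gpt on cgd on gpt)
boils down to roughly the following; every device, wedge and label name here
is made up, and the cipher and paramsfile choices are placeholders only:

  # outer GPT with a single cgd partition spanning the disk
  $ gpt create sd5
  $ gpt add -t cgd -l backup-crypt sd5        # yields a wedge, say dk4
  # (dkctl sd5 makewedges if the wedge doesn't appear by itself)

  # the cgd(4) layer
  $ cgdconfig -g -o /etc/cgd/dk4 aes-xts 256
  $ cgdconfig cgd1 /dev/dk4 /etc/cgd/dk4

  # inner GPT and a single-disk pool on the resulting wedge
  $ gpt create cgd1
  $ gpt add -t zfs -l backup-zfs cgd1         # yields, say, dk5
  $ zpool create backup dk5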
I initially scrubbed the entire disk without apparent
incident using a temporary cryptographic
On Fri, 13 Aug 2021 at 11:48 +0100, David Brownlee wrote:
> How does the rate of change in data compare to upload bandwidth? In my
> case I bootstrapped the remote backup boxes by having them connected
> to the same network for a few days until everything was up to date,
> then transported them to
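Presumably the replication in question is periodic incremental zfs
send/receive; as a rough sketch (snapshot, pool and host names invented):

  $ zfs snapshot -r tank@2021-08-13
  $ zfs send -R -i tank@2021-08-06 tank@2021-08-13 | \
        ssh backuphost zfs receive -d -F backup

Only the blocks changed since the previous snapshot cross the wire, which is
why the rate of change versus upload bandwidth is the figure that matters.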
On Sun, 15 Aug 2021 at 00:57 +0200, Pouya Tafti wrote:
> Thanks. What I ended up doing was detach the replacement
> device, offline the old one, and then re-issue replace.
> It's resilvering the replacement now. Will see how it goes.
This 'worked' in the sense that the resilvering
On Sat, 14 Aug 2021 at 18:52 -0400, Brad Spencer wrote:
[snip]
> As a general rule, although I will say not a required-hard-rule, it
> would be a good idea to take a not-fully-failed ZFS member offline before
> doing a replacement. If the member has failed completely, that is
> different, but if
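Spelled out as commands, the offline-then-replace sequence being discussed is
roughly the following (pool and device names are placeholders):

  $ zpool offline tank sd3         # take the ailing member out of service
  $ zpool replace tank sd3 sd6     # resilver onto the spare
  $ zpool status tank              # watch resilver progress
  # and if a half-finished replace has to be undone first:
  $ zpool detach tank sd6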
On Sat, 14 Aug 2021 at 23:29 +0200, Pouya Tafti wrote:
> I'm new to ZFS and this is the first time I'm dealing with
> disk errors. So I don't know if this is normal behaviour
> and I should just wait or if I was wrong to issue replace
> rather than take the drive offline and r
I started to see r/w errors from one of my SAS drives after
a routine zpool(8) scrub (dmesg is littered with ACK/NAK
timeout errors). Since I have a pair of spares I thought
I'd replace the drive before investigating further (the
HDD, controller, cables, and backplane are all old and
suspect).
I
My concern with offline disks is bitrot and site loss. With
over-the-internet backups the main problem is my lousy
bandwidth (no fibre). :(
[snip]
> When NetBSD gains native ZFS encryption I may look again at
> ZFS snapshots and possibly https://zfs.rent/ or
> https://www.rsync.net/products/zfsintro.html :)
Depending on volume those seem potentially interesting. :)
--
Pouya Tafti
I'm looking for a low cost offsite backup solution for my teeny local NAS
(couple of TiB of redundant ZFS RAIDZ2 on NetBSD/amd64 9.2_STABLE) for disaster
recovery. Seemingly affordable LTO-5 drives (~EUR 250; sans libraries) pop up
on eBay from time to time, and I thought I might start mailing tape
Thank you everyone for all the helpful and informative replies. I ended up
using zfs without encryption in a configuration similar to what David had
suggested. To summarise:
1. I was concerned hiding the SAS drives behind cgd could interfere with
low-level fault tolerance mechanisms of zfs
On Fri, 30 Jul 2021 at 19:10 +0100, David Brownlee wrote:
> [...]
> I started setting up using raw /dev/wdX or /dev/sdX devices, but
> switched across to ZFS gpt partitions. Both work fine, but for me:
> [...]
Your list of pros and cons was very helpful, thank you. I
decided to follow your
Hi NetBSD users,
Any advice on using zfs on raw disks vs gpt partitions? I'm
going to use entire disks for zfs and don't need
root-on-zfs. One advantage of using partitions seems to be
to protect against the risk of having to replace a disk by
a marginally smaller one. But I'm wondering if
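If it helps, the usual way to buy that protection is simply to make the zfs
partition a little smaller than the disk, e.g. (sizes and labels invented):

  $ gpt create wd1
  $ gpt add -t zfs -l tank0 -s 3900g wd1   # leave some headroom on a ~4 TB disk
  $ zpool create tank dk0                  # wedge number depends on the machine

A future replacement disk that comes up a few hundred MB short can then still
hold an identically sized partition.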
On Sun, 18 Jul 2021 at 09:29, Pouya Tafti wrote:
> > Thanks! This is an interesting suggestion. I'm
> > wondering though, wouldn't having a two-drive mirror create
> > an asymmetry in how many failed drives you could tolerate?
> > If you lost both mirrors the who
>
[pouya+lists.net...@nohup.io]
> >> I'm now thinking, would it make sense to do the layering the other
> >> way around, i.e. have cgd on top of a zvol? I wonder if there would
> >> be any resilience (and possibly performance) advantage to having zfs
> >> directly access the hard drives (which,
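For comparison, the other-way-around layering would look roughly like the
sketch below; the zvol path, cipher and sizes are what I would expect rather
than something verified on 9.2, and I have no performance numbers for it:

  $ zfs create -V 500g tank/cryptvol
  $ cgdconfig -g -o /etc/cgd/cryptvol aes-xts 256
  $ cgdconfig cgd2 /dev/zvol/rdsk/tank/cryptvol /etc/cgd/cryptvol
  $ disklabel -i cgd2          # or gpt, then newfs the resulting partition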
On Thu, Jul 15, 2021 at 11:43:55AM +0100, David Brownlee wrote:
> Depending on your upgrade plans you may want to consider one 6x1TB
> RAIDZ2 rather than 2 4x1TB RAIDZ2 - you end up with the same amount of
> usable space and you have two spare bays for when the time comes to
> upgrade. (This
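(The arithmetic: RAIDZ2 spends two disks per vdev on parity, so a single
6x1TB vdev gives 6 - 2 = 4 TB usable from six bays, while two 4x1TB vdevs
give 2 x (4 - 2) = 4 TB usable but tie up all eight bays.)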
nia writes:
> > If anecdotal evidence is helpful, I'm using ZFS with CGD
> > (ZFS on GPT on CGD on GPT...) without problems and I know at least
> > one other developer is doing the same.
On Tue, Jul 13, 2021 at 07:23:43PM -0400, Brad Spencer wrote:
> I didn't use quite as many layers, but I
(Apologies in case this is not the right mailing list.)
*tl;dr* Is it sensible to use zfs on top of cgd or are there drawbacks w.r.t.
zfs expecting raw I/O?
(Too many) details follow.
I plan to re-purpose a circa 2012 Supermicro server for a cheap home NAS. It
comes with an LSI MegaRAID