Re: zfs forgetting cache wedges?

2020-10-06 Thread Frank Kardel
Yep, moving devpubd earlier (before mountall, as that is what runs the
"zfs mount -a"!) works.


Looks like we could refine the rc sequence here, or pursue a variant of
devfs in our spare time :-).
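For reference, a minimal sketch of checking that devpubd now sorts before
mountall. The captured ordering below is hypothetical; on a live system it
would come from something like `rcorder /etc/rc.d/* | xargs -n1 basename`:

```shell
#!/bin/sh
# Hypothetical captured boot order; on a real system this would be
#   order=$(rcorder /etc/rc.d/* | xargs -n1 basename)
order="devpubd mountall zfs"

# pos NAME -> 1-based position of NAME in $order
pos() {
    i=0
    for s in $order; do
        i=$((i + 1))
        [ "$s" = "$1" ] && { echo "$i"; return; }
    done
}

dp=$(pos devpubd)
ma=$(pos mountall)
if [ "$dp" -lt "$ma" ]; then
    echo "ok: devpubd (#$dp) runs before mountall (#$ma)"
else
    echo "bad: devpubd runs after mountall"
fi
```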


Frank


On 09/28/20 19:41, Michael van Elst wrote:

kar...@kardel.name (Frank Kardel) writes:


Interesting - I am running 9.99.72 currently.
I was always wondering why the devices show no statistics. These are
simple gpt zfs wedges.
Any idea what is wrong there?


When you use devpubd to create symlinks in dev/wedges, the links may
be stale when zfs starts because devpubd runs too late.

Moving devpubd to an earlier position would help, but the wedgenames hook
doesn't work without /usr.
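The symlink scheme in question, sketched in a temp directory instead of /dev
(all names here are illustrative; on NetBSD the real links are maintained by
devpubd's wedgenames hook):

```shell
#!/bin/sh
# Stand-in for /dev: the hook maintains wedges/<label> -> wedge node.
root=$(mktemp -d)
mkdir -p "$root/wedges"
: > "$root/dk3"                              # fake wedge device node
ln -sf "$root/dk3" "$root/wedges/zfs10g0-0"  # label as in the pool output

# If dk numbering changes across boots and the hook has not yet run,
# the old link points at the wrong (or a missing) node -- the stale
# state zfs then trips over at import time.
target=$(readlink "$root/wedges/zfs10g0-0")
echo "$target"

rm -rf "$root"
```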





Re: zfs forgetting cache wedges?

2020-09-28 Thread Michael van Elst
kar...@kardel.name (Frank Kardel) writes:

>Interesting - I am running 9.99.72 currently.

>I was always wondering why the devices show no statistics. These are 
>simple gpt zfs wedges.

>Any idea what is wrong there?


When you use devpubd to create symlinks in dev/wedges, the links may
be stale when zfs starts because devpubd runs too late.

Moving devpubd to an earlier position would help, but the wedgenames hook
doesn't work without /usr.

-- 
Michael van Elst
Internet: mlel...@serpens.de
"A potential Snark may lurk in every tree."


Re: zfs forgetting cache wedges?

2020-09-28 Thread Frank Kardel

Interesting - I am running 9.99.72 currently.

I was always wondering why the devices show no statistics. These are 
simple gpt zfs wedges.


Any idea what is wrong there?

Frank


On 09/28/20 18:04, Michael van Elst wrote:

kar...@netbsd.org (Frank Kardel) writes:


After a boot it looks like this:
NAME                   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool-1                8.94T  2.76T  6.17T         -     5%    30%  1.11x  ONLINE  -
  raidz1              8.94T  2.76T  6.17T         -     5%    30%
    wedges/zfs10g0-0      -      -      -         -      -      -
    wedges/zfs10g1-0      -      -      -         -      -      -
    wedges/zfs10g2-0      -      -      -         -      -      -
cache                     -      -      -         -      -      -
  384839849488            -      -      -         -      -      -
The cache wedge does not look very usable in that state.


Didn't happen here (with recent -current). The device paths for
the pool devices are stored in /etc/zfs/zfs.cache and the device
path to the cache device is stored on the pool devices.

But your pool devices also do not return data, and that's probably
the reason for the strange cache device path that couldn't be read.

NAME             SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool            80M   104K  79.9M         -     4%     0%  1.00x  ONLINE  -
  wedges/image0    80M   104K  79.9M         -     4%     0%
cache               -      -      -         -      -      -
  wedges/image1  95.2M     1K  95.2M         -     0%     0%





Re: zfs forgetting cache wedges?

2020-09-28 Thread Michael van Elst
kar...@netbsd.org (Frank Kardel) writes:

>After a boot it looks like this:
>NAME                   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
>pool-1                8.94T  2.76T  6.17T         -     5%    30%  1.11x  ONLINE  -
>  raidz1              8.94T  2.76T  6.17T         -     5%    30%
>    wedges/zfs10g0-0      -      -      -         -      -      -
>    wedges/zfs10g1-0      -      -      -         -      -      -
>    wedges/zfs10g2-0      -      -      -         -      -      -
>cache                     -      -      -         -      -      -
>  384839849488            -      -      -         -      -      -

>The cache wedge does not look very usable in that state.


Didn't happen here (with recent -current). The device paths for
the pool devices are stored in /etc/zfs/zfs.cache and the device
path to the cache device is stored on the pool devices.

But your pool devices also do not return data, and that's probably
the reason for the strange cache device path that couldn't be read.

NAME             SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool            80M   104K  79.9M         -     4%     0%  1.00x  ONLINE  -
  wedges/image0    80M   104K  79.9M         -     4%     0%
cache               -      -      -         -      -      -
  wedges/image1  95.2M     1K  95.2M         -     0%     0%
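The symptom can also be spotted mechanically: in "zpool list -v" output a
healthy cache child shows a path, while a stale one degrades to a bare
numeric GUID. A small sketch against a captured sample of the broken pool's
cache section (running zpool directly is left out here):

```shell
#!/bin/sh
# Captured sample of the cache section from the broken pool above.
sample='cache              -  -  -  -  -  -
  384839849488     -  -  -  -  -  -'

# A child whose first field is all digits is a GUID, i.e. a device
# zfs could not resolve back to a path.
guid=$(echo "$sample" | awk '$1 ~ /^[0-9]+$/ { print $1 }')
if [ -n "$guid" ]; then
    echo "stale cache device, guid $guid"
fi
```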

-- 
Michael van Elst
Internet: mlel...@serpens.de
"A potential Snark may lurk in every tree."