Re: [MLUG] Advice needed about ZFS

2020-08-20 Thread Craig Sanders via luv-main
On Thu, Aug 20, 2020 at 01:40:03PM +0100, stripes theotoky wrote:
> When we started this discussion we had this
>
> stripinska@Stripinska:~$ sudo zfs list
> NAME        USED  AVAIL  REFER  MOUNTPOINT
> alexandria  332G  1.06G  96K    /alexandria

> Now having moved 47.3 GB of files to an external drive I have this.
>
> stripinska@Stripinska:~$ sudo zfs list
> NAME        USED  AVAIL  REFER  MOUNTPOINT
> alexandria  332G  782M   96K    /alexandria
>
> What is eating my space?

to truly delete files from ZFS (or from anything that supports snapshots), you
need to delete not only the file(s) from the current state of the filesystem,
but also any snapshots containing them.

The space will not be freed until there are no remaining snapshots containing
the file(s) you deleted.

Note that you can't delete individual files from a snapshot; you can only
delete entire snapshots.  This will have a significant impact on your backup
and recovery strategy.
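As a sketch (the dataset and snapshot names here are taken from the listings posted later in this thread; adapt them to your own pool), you can see which snapshots pin the most space and destroy the ones you no longer need:

```shell
# List snapshots with the biggest space consumers last (needs root and a real pool):
zfs list -t snapshot -o name,used -s used -r alexandria

# Dry-run the destroy first (-n), with -v to show what would be reclaimed:
zfs destroy -nv alexandria/home@zfs-auto-snap_monthly-2019-11-09-2312

# Then destroy it for real:
zfs destroy alexandria/home@zfs-auto-snap_monthly-2019-11-09-2312
```

Destroying a snapshot is irreversible, so check the dry-run output carefully.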

> It is not the cache for Firefox as this is only 320M.

Maybe, maybe not.  Browser cache directories tend to change a lot, so they
end up using far more space in the snapshots than you might expect: the
snapshots keep all the cached junk that your browser itself has deleted but
that zfs-auto-snap hasn't expired yet.

There really isn't much value in keeping snapshots of cache dirs like this,
so try creating a new dataset to hold these caches (and make sure that it, or
at least sub-directories of it, is writable by your uid).  Configure
zfs-auto-snap to ignore it (i.e. take no snapshots of it), and then configure
your browser to use it for its caches.

I don't use zfs-auto-snap myself, but according to the man page, to exclude
a dataset from zfs-auto-snap you need to set a property on it called
"com.sun:auto-snapshot" to false, e.g.

zfs create alexandria/nosnaps
zfs set com.sun:auto-snapshot=false alexandria/nosnaps

(btw, you can also set a quota on a dataset so that it can't use all available
space - better to have firefox die because it can't cache extra stuff than to
have other random programs fail or crash due to out of space errors)
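For example, capping such a dataset is a one-liner (the 5G figure is purely illustrative):

```shell
# Stop a runaway cache from filling the pool:
zfs set quota=5G alexandria/nosnaps
zfs get quota alexandria/nosnaps    # verify the setting
```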

If you have multiple programs that keep caches like this, you could create
one dataset each for them.  IMO, it would be better to create just one
dataset (call it something like "alexandria/nosnaps") and then create as many
sub-directories as you need under it.

make /alexandria/nosnaps/stripes/ readable and writable by user 'stripes', and
your programs can create directories and files underneath it as needed. e.g.
something like

mkdir -p /alexandria/nosnaps/stripes/firefox-cache
chown -R stripes /alexandria/nosnaps/stripes
chmod u=rwX /alexandria/nosnaps/stripes



I'm not entirely sure how to change the cache dir in Firefox, but I am
certain that it can be done, probably somewhere in "about:config" (the
browser.cache.disk.parent_directory preference, if your version supports
it).  At worst, you can either set the mountpoint of the "nosnaps" dataset to
be the cache dir (remember to quit Firefox and delete the existing cache
first), or symlink the cache dir to a subdir under "nosnaps".  The latter is
better because it allows multiple different cache dirs under the one nosnaps
dataset.
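A sketch of the symlink approach, demonstrated in a scratch directory so it is safe to run as-is; in practice substitute the real nosnaps path and your profile's actual cache directory (which varies by distro and Firefox version):

```shell
# Quit Firefox before doing this for real.
scratch=$(mktemp -d)                               # stands in for / here
mkdir -p "$scratch/alexandria/nosnaps/stripes/firefox-cache"
mkdir -p "$scratch/home/.cache/mozilla"

# Point the cache location at the no-snapshot dataset:
ln -s "$scratch/alexandria/nosnaps/stripes/firefox-cache" \
      "$scratch/home/.cache/mozilla/firefox"

readlink "$scratch/home/.cache/mozilla/firefox"    # now points into nosnaps
```

Anything the browser writes through the symlink lands on the no-snapshot dataset.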



BTW, if you download stuff with deluge or some other torrent client, you should
make an alexandria/torrents dataset with a recordsize of 16K instead of the
default 128K (bit torrent does a lot of random reads and writes in 16KB
blocks, so this is the optimum recordsize). for example:

  zfs create -o recordsize=16k -o mountpoint=/home/stripes/torrents/download alexandria/torrents
  chown stripes /home/stripes/torrents/download
  chmod u=rwx /home/stripes/torrents/download


zfs-auto-snap should be configured to ignore this dataset too, and your
torrent client should be configured to download torrents into this directory
and then move them elsewhere once the download has completed.  This avoids
wasting space on snapshots of partially downloaded stuff, AND minimises
fragmentation (the downloaded torrents are de-fragmented when they're moved
somewhere else).
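Excluding the torrents dataset from snapshots uses the same property mentioned above for the caches dataset:

```shell
zfs set com.sun:auto-snapshot=false alexandria/torrents
```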



there are probably other things that don't need to be snapshotted - like
/tmp, /var/tmp, maybe (parts of) /var/cache, and other directories containing
short-lived transient files.  I wouldn't bother doing anything about them
unless they waste a lot of disk space.
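If any of them do turn out to be worth it, the recipe is the same.  A sketch for /tmp (disruptive on a live system, so treat it as an illustration only):

```shell
# Create a no-snapshot dataset mounted over /tmp:
zfs create -o com.sun:auto-snapshot=false -o mountpoint=/tmp alexandria/tmp
chmod 1777 /tmp    # /tmp needs the sticky bit and world write permission
```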


> How do I get it back before the box freezes.

1. for such a small filesystem, I recommend setting a more aggressive
snapshot expiration policy for zfs-auto-snap.  From what you've posted, it
looks like zfs-auto-snap is configured to keep roughly the last 12 months of
snapshots, but you probably can't afford to keep more than the last three to
six months' worth.

zfs-auto-snap doesn't seem to have a config file; the snapshot retention
is handled by command-line options, which you can see if you look at how the
tool is invoked.
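On Ubuntu the zfs-auto-snapshot package drives those options from cron scripts; the paths and flag below are the package defaults as I understand them, so verify them on your own system:

```shell
# See the retention (--keep=N) each schedule uses:
grep -h -- --keep /etc/cron.hourly/zfs-auto-snapshot \
                  /etc/cron.daily/zfs-auto-snapshot \
                  /etc/cron.weekly/zfs-auto-snapshot \
                  /etc/cron.monthly/zfs-auto-snapshot

# Tighten retention by editing, e.g., --keep=12 down to --keep=3
# in the monthly script.
```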

Re: [MLUG] Advice needed about ZFS

2020-08-20 Thread Joel W Shea via luv-main
On Thu, 20 Aug 2020 at 22:40, stripes theotoky via luv-main <
luv-main@luv.asn.au> wrote:

> When we started this discussion we had this
> (...)
>
> The more disk space I free the less available space I have.

You're not freeing space by deleting files if they continue to exist in
snapshots. You can either:
- wait for the snapshots to get rotated out and destroyed;
- destroy unused snapshots yourself (perhaps using `zfs send` to back them
  up to external media first); or
- mount each snapshot and remove the offending files from each, which seems
  tedious and prone to mistakes.
  cf. `zfs diff` to find differences between snapshots
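For the last point, `zfs diff` takes two snapshots of the same dataset (the names here are from the listing posted earlier in the thread):

```shell
# Show files added (+), removed (-), modified (M) or renamed (R)
# between two snapshots of alexandria/home:
zfs diff alexandria/home@zfs-auto-snap_monthly-2019-07-13-0141 \
         alexandria/home@zfs-auto-snap_hourly-2019-07-18-0319
```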

Cheers, Joel
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: [MLUG] Advice needed about ZFS

2020-08-20 Thread Russell Coker via luv-main
On Thursday, 20 August 2020 10:40:03 PM AEST stripes theotoky via luv-main 
wrote:
> When we started this discussion we had this
> 
> stripinska@Stripinska:~$ sudo zfs list
> NAME        USED  AVAIL  REFER  MOUNTPOINT
> alexandria  332G  1.06G  96K    /alexandria
> 
> Now having moved 47.3 GB of files to an external drive I have this.
> 
> stripinska@Stripinska:~$ sudo zfs list
> NAME        USED  AVAIL  REFER  MOUNTPOINT
> alexandria  332G  782M   96K    /alexandria
> 
> What is eating my space?

You have a script setup to make regular snapshots so that if you accidentally 
delete something you can get it back from an older snapshot.  The snapshots 
keep the storage space for the files in use.  If you wait a few weeks the 
space will be freed as newer snapshots without those 47G of files replace the 
old ones with them.

Alternatively you can delete some of the snapshots manually.

-- 
My Main Blog      http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/





Re: Advice needed about ZFS

2020-08-20 Thread Russell Coker via luv-main
The first question that was asked was whether many snapshots slow the system
down.  The answer is probably not.  When developing my etbemon monitoring
system I made the BTRFS monitor alert by default on more than 500 subvolumes
(usually snapshots), as I had noticed performance problems with 1000+.  You
have fewer than that, and ZFS has been optimised for performance more than
BTRFS has, so it shouldn't be a problem.

http://cdn.msy.com.au/Parts/PARTS.pdf

The next thing is that 344G is not much space by today's standards.  It's
large enough that, if installed in 2016, it probably isn't an SSD, but small
enough to easily fit on modern SSDs.  MSY has 1TB SSDs starting at $135 and
2TB SSDs starting at $289.  Get a couple of those, transfer the filesystem
to them, and it will be much faster.

Also fragmentation doesn't matter on SSD.

On Thursday, 20 August 2020 8:23:25 PM AEST Colin Fee via luv-main wrote:
> > sudo zpool list
> > 
> > NAME  SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
> > alexandria 344G 332G 11.8G - 69% 96% 1.00x ONLINE -
> 
> Thanks for the reply.  Above line is the most telling data and so I take
> back what I said before, this is likely the cause given it's your home
> volume.  It's very full!
> 
> Your pool is at 96% capacity and 69% fragmented. The 96% is the biggest
> issue, there is (was?) a rule-of-thumb guide with copy-on-write filesystems,
> of which ZFS is an example, that you should keep the used capacity below
> 80%.

The amount of space you need to keep free varies a lot depending on the data
type and usage.  For example, I run a ZFS server that regularly goes to 98%
capacity with no problems; it is almost entirely used for 20MB files.  Lots
of files of the same size means fragmentation isn't a big problem, and large
files written contiguously are easy for the filesystem to lay out
contiguously on disk.  That is, however, an unusual corner case.

For a /home 96% used is probably really bad for performance.

You might get something useful from the following command:

du /home | sort -n | tail -50
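The same pipeline demonstrated on a scratch tree (safe to run anywhere): `du` sizes each directory, `sort -n` orders them numerically, and `tail` keeps the biggest.

```shell
scratch=$(mktemp -d)
mkdir -p "$scratch/big" "$scratch/small"
dd if=/dev/zero of="$scratch/big/file" bs=1024 count=200 2>/dev/null
dd if=/dev/zero of="$scratch/small/file" bs=1024 count=10 2>/dev/null
du "$scratch" | sort -n | tail -2    # the largest directories come last
```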

> > alexandria/home 1.02G 332G 90.4G 242G 0 0
> 
> In the pool, your one dataset 'alexandria/home', the snapshots are  90.4G
> of 332G or 27%, so not the full culprit.  The fact the disk is nearly full
> is.
> 
> Possible things to do
> 
> Make a backup first!!  or several.

https://www.officeworks.com.au/shop/officeworks/c/technology/hard-drives-data-storage/portable-hard-drives

Yes, always have lots of backups!  Officeworks has some reasonable options for 
USB attached backup devices.

-- 
My Main Blog      http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/





Re: Advice needed about ZFS

2020-08-20 Thread stripes theotoky via luv-main
Wiser and more learned folk will give you a better answer but it will help
if you can provide more data by sending the output of the following:

sudo zpool list

NAME  SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
alexandria 344G 332G 11.8G - 69% 96% 1.00x ONLINE -


> sudo zpool status
>

~$ sudo zpool status
pool: alexandria
state: ONLINE
scan: scrub repaired 0 in 69h10m with 0 errors on Thu Aug 20 04:51:13 2020
config:

NAME STATE READ WRITE CKSUM
alexandria ONLINE 0 0 0
ata-WDC_WD10JPVX-08JC3T6_WD-WXX1E752KKE7-part7 ONLINE 0 0 0

errors: No known data errors


> sudo zfs list
>

~$ sudo zfs list
NAME USED AVAIL REFER MOUNTPOINT
alexandria 332G 1.02G 96K /alexandria
alexandria@zfs-auto-snap_monthly-2019-07-13-0141 64K - 96K -
alexandria@zfs-auto-snap_monthly-2019-08-25-1034 0 - 96K -
alexandria@zfs-auto-snap_monthly-2019-10-09-0831 0 - 96K -
alexandria@zfs-auto-snap_monthly-2019-11-09-2312 0 - 96K -
alexandria@zfs-auto-snap_monthly-2019-12-09-1923 0 - 96K -
alexandria@zfs-auto-snap_monthly-2020-01-23-0728 0 - 96K -
alexandria@zfs-auto-snap_monthly-2020-02-27-0408 0 - 96K -
alexandria@zfs-auto-snap_monthly-2020-03-28-0921 0 - 96K -
alexandria@zfs-auto-snap_monthly-2020-04-27-1130 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-05-18-0221 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-05-19-0117 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-05-20-0358 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-05-21-0350 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-05-22-0105 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-05-27-0155 0 - 96K -
alexandria@zfs-auto-snap_monthly-2020-05-27-0156 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-05-28-0349 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-06-02-0055 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-06-04-0725 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-06-09-0753 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-06-10-1034 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-06-17-0905 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-06-23-0415 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-06-29-0722 0 - 96K -
alexandria@zfs-auto-snap_monthly-2020-06-29-0732 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-06-30-0701 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-07-02-0619 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-07-03-0401 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-07-09-0110 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-07-19-0706 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-08-03-0048 0 - 96K -
alexandria@zfs-auto-snap_monthly-2020-08-03-0055 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-08-05-0229 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-08-06-0052 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-08-07-0202 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-08-08-1109 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-08-09-1053 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-08-10-0157 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-08-11-0347 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-08-12-0849 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-08-16-0601 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-16-1817 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-16-1917 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-16-2017 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-16-2117 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-16-2217 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-16-2317 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-17-0017 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-17-0117 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-17-0217 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-17-0317 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-17-0417 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-17-0517 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-17-0617 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-17-0717 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-08-17-0738 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-17-0817 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-17-0917 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-17-1017 0 - 96K -
alexandria@zfs-auto-snap_daily-2020-08-20-0201 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-20-0217 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-20-0317 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-20-0417 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-20-0517 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-20-0617 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-20-0717 0 - 96K -
alexandria@zfs-auto-snap_frequent-2020-08-20-0745 0 - 96K -
alexandria@zfs-auto-snap_frequent-2020-08-20-0800 0 - 96K -
alexandria@zfs-auto-snap_frequent-2020-08-20-0815 0 - 96K -
alexandria@zfs-auto-snap_hourly-2020-08-20-0817 0 - 96K -
alexandria@zfs-auto-snap_frequent-2020-08-20-0830 0 - 96K -
alexandria/home 332G 1.02G 242G /home
alexandria/home@zfs-auto-snap_monthly-2019-07-13-0141 328M - 288G -
alexandria/home@zfs-auto-snap_hourly-2019-07-18-0319 46.6M - 288G -

Re: [MLUG] Advice needed about ZFS

2020-08-20 Thread stripes theotoky via luv-main
When we started this discussion we had this

stripinska@Stripinska:~$ sudo zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
alexandria  332G  1.06G  96K    /alexandria

Now having moved 47.3 GB of files to an external drive I have this.

stripinska@Stripinska:~$ sudo zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
alexandria  332G  782M   96K    /alexandria

What is eating my space?
It is not the cache for Firefox as this is only 320M.
How do I get it back before the box freezes.
The more disk space I free the less available space I have.

Stripes.


On Thu, 20 Aug 2020 at 11:05, Russell Shaw  wrote:

> On 20/8/20 7:56 pm, 'stripes theotoky' via mlug-au wrote:
> > On Thu, 20 Aug 2020 at 07:33, Russell Shaw  > > wrote:
> >
> > On 20/8/20 4:53 pm, 'stripes theotoky' via mlug-au wrote:
> >  > Dear Group,
> >  >
> >  > I am looking for advice on ZFS. My computer is running painfully
> slowly.
> >
> > Check your pc has 8G of ram or more.
> >
> > It has 16G
>
> NAMEAVAIL   USED
> alexandria  1.06G   332G
>
> The free space looks pretty low if much buffering is needed.
>
> Think more free space is needed.
>


-- 
Stripes Theotoky

-37.713869
145.050562


Re: Advice needed about ZFS

2020-08-20 Thread Colin Fee via luv-main
On Thu, 20 Aug 2020 at 19:55, stripes theotoky <
stripes.theot...@googlemail.com> wrote:

>
> Wiser and more learned folk will give you a better answer but it will help
> if you can provide more data by sending the output of the following:
>
> sudo zpool list
>
> NAME  SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
> alexandria 344G 332G 11.8G - 69% 96% 1.00x ONLINE -
>
>
Thanks for the reply.  The line above is the most telling data, so I take
back what I said before: this is likely the cause, given it's your home
volume.  It's very full!

Your pool is at 96% capacity and 69% fragmented.  The 96% is the biggest
issue; there is (was?) a rule-of-thumb guide with copy-on-write filesystems,
of which ZFS is an example, that you should keep the used capacity below
80%.

> alexandria/home 1.02G 332G 90.4G 242G 0 0

In the pool, your one dataset 'alexandria/home': the snapshots are 90.4G of
332G, or 27%, so not the full culprit.  The fact that the disk is nearly
full is.

Possible things to do:

Make a backup first!!  Or several.

Delete some stuff, clear browser caches, other caches in your user profile.

To clear the fragmentation, copy the data somewhere else, delete it, then
copy it back.
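One way to do the copy-out with ZFS's own tools, assuming an external pool named 'backup' is attached (the pool name is illustrative):

```shell
# Snapshot the dataset and replicate it to the external pool:
zfs snapshot alexandria/home@migrate
zfs send alexandria/home@migrate | zfs receive backup/home

# After verifying the copy, the local data can be removed and then
# sent back the same way with the roles reversed.
```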




-- 
Colin Fee
tfecc...@gmail.com


Re: Advice needed about ZFS

2020-08-20 Thread Colin Fee via luv-main
On Thu, 20 Aug 2020 at 16:59, stripes theotoky via luv-main <
luv-main@luv.asn.au> wrote:

> Dear Group,
>
> I am looking for advice on ZFS. My computer is running painfully slowly. I
> assume this is due to ZFS snapshots eating the entire disk space. My
> knowledge of ZFS is almost zero, this box was installed by a friend in the
> UK back in 2016 in the days when Ubuntu 16.04 was experimental and is still
> running 16.04. I have run the following commands
>
> sudo zpool set listsnapshots=on alexandria
> zfs list -o space -r alexandria
> Which seems to show the space is being eaten by snapshots of the home
> directory and oddly that there are no snapshots from before 2019. The
> computer is in effect unusable at the moment. All suggestions are very
> welcome.
> The box is a dual boot Linux / Windows but the windows partition could be
> drastically reduced in size if this is the real problem.
>
> NAME                                              AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> alexandria                                        1.06G  332G  320K      96K     0              332G
> alexandria@zfs-auto-snap_monthly-2019-07-13-0141  -      64K   -         -       -              -
> alexandria@zfs-auto-snap_monthly-2019-08-25-1034  -      0     -         -       -              -
> alexandria@zfs-auto-snap_monthly-2019-10-09-0831  -      0     -         -       -              -
> alexandria@zfs-auto-snap_monthly-2019-11-09-2312  -      0     -         -       -              -
> alexandria@zfs-auto-snap_monthly-2019-12-09-1923  -      0     -         -       -              -
> alexandria@zfs-auto-snap_monthly-2020-01-23-0728  -      0     -         -       -              -
> alexandria@zfs-auto-snap_monthly-2020-02-27-0408  -      0     -         -       -              -
>
>



> alexandria/home                                        1.06G  332G  90.3G  242G  0  0
> alexandria/home@zfs-auto-snap_monthly-2019-07-13-0141  -      328M   -  -  -  -
> alexandria/home@zfs-auto-snap_hourly-2019-07-18-0319   -      46.6M  -  -  -  -
> alexandria/home@zfs-auto-snap_hourly-2019-07-18-0818   -      37.5M  -  -  -  -
> alexandria/home@zfs-auto-snap_hourly-2019-07-19-0019   -      83.3M  -  -  -  -
> alexandria/home@zfs-auto-snap_monthly-2019-08-25-1034  -      282M   -  -  -  -
> alexandria/home@zfs-auto-snap_monthly-2019-10-09-0831  -      241M   -  -  -  -
> alexandria/home@zfs-auto-snap_monthly-2019-11-09-2312  -      787M   -  -  -  -
> alexandria/home@zfs-auto-snap_monthly-2019-12-09-1923  -      975M   -  -  -  -
> alexandria/home@zfs-auto-snap_monthly-2020-01-23-0728  -      513M   -  -  -  -
> alexandria/home@zfs-auto-snap_monthly-2020-02-27-0408  -      810M   -  -  -  -
> alexandria/home@zfs-auto-snap_monthly-2020-03-28-0921  -      754M   -  -  -  -
> alexandria/home@zfs-auto-snap_monthly-2020-04-27-1130  -      704M   -  -  -  -
>
>
Your pool 'alexandria' has:
  - 1.06G of available capacity (AVAIL)
  - 332G in use (USED)
  - of which snapshots are using 320K (USEDSNAP)
  - the dataset itself is using 96K (USEDDS)
  - and 332G of the USED capacity is in children of the dataset (USEDCHILD)

The child in this case is 'alexandria/home', presumably your home folder,
and by the evidence you've listed it is where all the action is:
  - 1.06G of available capacity (AVAIL)
  - 332G in use (USED)
  - of which snapshots are using 90.3G (USEDSNAP)
  - the dataset itself is using 242G (USEDDS)

Where is your root filesystem located?

Wiser and more learned folk will give you a better answer but it will help
if you can provide more data by sending the output of the following:

sudo zpool list
sudo zpool status
sudo zfs list
sudo zfs list -o space


Here's what mine look like...

~$ sudo zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAGCAP  DEDUP  HEALTH  ALTROOT
data  5.44T  2.75T  2.69T - 3%50%  1.00x  ONLINE  -

Here you can see my 'data' pool is at 50% capacity

~$ sudo zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0 in 4h1m with 0 errors on Sun Aug 16 06:01:14 2020
config:

NAME        STATE     READ WRITE CKSUM
data        ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    sda     ONLINE       0     0     0
    sdb     ONLINE

Advice needed about ZFS

2020-08-20 Thread stripes theotoky via luv-main
Dear Group,

I am looking for advice on ZFS. My computer is running painfully slowly. I
assume this is due to ZFS snapshots eating the entire disk space. My
knowledge of ZFS is almost zero; this box was installed by a friend in the
UK back in 2016, in the days when Ubuntu 16.04 was experimental, and it is
still running 16.04. I have run the following commands

sudo zpool set listsnapshots=on alexandria
zfs list -o space -r alexandria

which seem to show that the space is being eaten by snapshots of the home
directory and, oddly, that there are no snapshots from before 2019. The
computer is in effect unusable at the moment. All suggestions are very
welcome.
The box is a dual-boot Linux / Windows machine, but the Windows partition
could be drastically reduced in size if this is the real problem.

NAME                                              AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
alexandria                                        1.06G  332G  320K      96K     0              332G
alexandria@zfs-auto-snap_monthly-2019-07-13-0141  -      64K   -  -  -  -
alexandria@zfs-auto-snap_monthly-2019-08-25-1034  -      0     -  -  -  -
alexandria@zfs-auto-snap_monthly-2019-10-09-0831  -      0     -  -  -  -
alexandria@zfs-auto-snap_monthly-2019-11-09-2312  -      0     -  -  -  -
alexandria@zfs-auto-snap_monthly-2019-12-09-1923  -      0     -  -  -  -
alexandria@zfs-auto-snap_monthly-2020-01-23-0728  -      0     -  -  -  -
alexandria@zfs-auto-snap_monthly-2020-02-27-0408  -      0     -  -  -  -
alexandria@zfs-auto-snap_monthly-2020-03-28-0921  -      0     -  -  -  -
alexandria@zfs-auto-snap_monthly-2020-04-27-1130  -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-05-18-0221    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-05-19-0117    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-05-20-0358    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-05-21-0350    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-05-22-0105    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-05-27-0155    -      0     -  -  -  -
alexandria@zfs-auto-snap_monthly-2020-05-27-0156  -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-05-28-0349    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-06-02-0055    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-06-04-0725    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-06-09-0753    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-06-10-1034    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-06-17-0905    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-06-23-0415    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-06-29-0722    -      0     -  -  -  -
alexandria@zfs-auto-snap_monthly-2020-06-29-0732  -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-06-30-0701    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-07-02-0619    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-07-03-0401    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-07-09-0110    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-07-19-0706    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-08-03-0048    -      0     -  -  -  -
alexandria@zfs-auto-snap_monthly-2020-08-03-0055  -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-08-05-0229    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-08-06-0052    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-08-07-0202    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-08-08-1109    -      0     -  -  -  -
alexandria@zfs-auto-snap_daily-2020-08-09-1053    -      0     -  -  -  -