Re: [MLUG] Advice needed about ZFS

2020-08-29 Thread Craig Sanders via luv-main
On Wed, Aug 26, 2020 at 10:43:30AM +, stripes theotoky wrote:
> > I would suggest looking at something like a Network Attached Storage
> > device, with multiple drives in a suitable RAID array.
>
> This is the ultimate plan to build a NAS from an HP Microserver. I am
> leaning towards Nas4Free on an SSD or internal USB and 3, 6TB mirrors. This
> is a project that has to wait because right now due to Covid19 and
> Brexit we are not sure where we are.  I am here and can't leave but
> expecting to be out of work (which won't stop my research), my husband is
> British/Australian, resident in Austria to avoid Brexit but is stranded by
> Covid in Greece. When it all settles down and we have a home again building
> this NAS is going to be pretty high on the list of things to do.

In the meantime, you can use a largish (>= 4 or 6 TB) external USB drive set
up to be a ZFS pool for backups.

Then 'zfs send' your snapshots to the USB drive, and keep a multi-year
snapshot history on them.  Aggressively expire the snapshots in your laptop to
minimise the amount of space they're taking.

You can have multiple USB backup drives like this - each one has to be
initialised with a full backup, but can then be incrementally updated with
newer snapshots.  Each backup pool should have a different name - like
backup1, backup2, etc.

You can automate much of this with some good scripting, but your scripts
will need to query the backup destination pool (with 'zfs list') to find out
what the latest backup snapshot on it is.  Incremental 'zfs send' updates
send the difference between two snapshots, so you need to know what the
latest snapshot on the backup pool is AND that snapshot has to still exist
on the source pool.
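
A minimal sketch of such a script, assuming the pool names "source" and
"backup" and the @backup-YYYYMMDD naming scheme from the examples in this
mail (adjust to your own layout; this is an illustration, not a tested tool):

```shell
#!/bin/sh
# Sketch only: pool names and the @backup-YYYYMMDD scheme are assumptions
# carried over from the examples in this mail.
command -v zfs >/dev/null || { echo "zfs not installed; sketch only" >&2; exit 0; }

SRC=source
DST=backup

# Newest @backup-* snapshot already present on the destination pool.
last=$(zfs list -H -t snapshot -o name -s creation -r "$DST" \
       | sed -n 's/.*@\(backup-[0-9]*\)$/\1/p' | tail -n 1)

# The incremental base must still exist on the source pool too.
if [ -n "$last" ] && zfs list -t snapshot "$SRC@$last" >/dev/null 2>&1; then
    today=backup-$(date +%Y%m%d)
    zfs snapshot "$SRC@$today"
    zfs send -R -i "$SRC@$last" "$SRC@$today" | zfs receive -v -u -d "$DST"
else
    echo "no common @backup-* snapshot: do a full send first" >&2
fi
```

The sed/tail pipeline is the important bit: it strips each snapshot name down
to its @backup-YYYYMMDD suffix and keeps the newest one.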

You should use a different snapshot naming scheme for the backup snapshots.
If your main snapshots are "@zfs-autosnap-YYYYMMDD" or whatever, then use
"@backup-YYYYMMDD".  Create that snapshot, and use it for a full zfs send,
then create a new "@backup-YYYYMMDD" snapshot just before each incremental
send.

e.g. the initial full backup on a pool called "source" to a pool called
"backup", if you had done it yesterday:

zfs snapshot source@backup-20200829
zfs send -v -R source@backup-20200829 | /sbin/zfs receive -v -d -F backup

and to do an incremental backup of *everything* (including all snapshots
created manually or by zfs-autosnap) from @backup-20200829 to today between
the same pools:

# source@backup-20200829 already exists from the last backup, no need to create it.
zfs snapshot source@backup-20200830
zfs send -R -i source@backup-20200829 source@backup-20200830 | zfs receive -v -u -d backup

** NOTE: @backup-20200829 has to exist on both the source & backup pools **

Unless you need to make multiple backups to different pools, you can delete
the source@backup-20200829 snapshot at this point because the next backup will
be from source@backup-20200830 to some future @backup-MMDD snapshot.


BTW, you don't have to back up to the top level of the backup pool. e.g. to
back up to a dataset called "mylaptop" on pool backup:

zfs create backup/mylaptop
zfs snapshot source@backup-20200829
zfs send -v -R source@backup-20200829 | zfs receive -v -u -d backup/mylaptop

(you'd do this if you wanted to back up multiple machines to the same backup
drive, or if you wanted to use it for backups AND for storage of other stuff
like images or videos or audio files.)



and, oh yeah, get used to using the '-n' aka '--dry-run' and '-v'/'--verbose'
options with both 'zfs send' and 'zfs receive' until you understand how they
work and are sure they're going to do what you want.


NOTE: as a single-drive vdev, there will be no redundancy in the USB backup
drive.  But I'm guessing that since you're using a laptop, it's probably also
a single drive and that you're only using ZFS for the auto-compression and
snapshot capabilities.  If you want redundancy, you can always plug in two USB
drives at a time and set them up as a zfs mirrored pool, but then you have to
label them so that you know which pairs of drives belong together.

This is not as good as a NAS but it's cheap and easy and a lot better than
nothing.


I recommend using USB drive adaptors that allow you to use any drives in them
(i.e. USB to SATA adaptors), not pre-made self-contained external drives (just
a box with a drive in it and a USB socket or cable).

Sometimes you see them with names like "disk docking station", with a power
adaptor, a USB socket, and SATA slots for 1, 2, or 4 drives.  Other forms
include plain cables with a USB plug on one end and a SATA socket on the
other.

craig

ps: If your backup pool was on some other machine somewhere on the internet,
you can pipe the zfs send over ssh. e.g.

zfs send -R -i source@backup-20200829 source@backup-20200830 | ssh remote-host zfs receive -u -d poolname/dataset

The pool on your laptop is probably small enough that you could do the
initial full backup over ssh too.

Re: [MLUG] Advice needed about ZFS

2020-08-26 Thread Rohan McLeod via luv-main

Russell Coker via luv-main wrote:


Having things on external disks is safe against some problems.  But if you
have a disk just sitting around for long periods of time problems can occur.
You need to verify the storage before it rots.



A much underestimated problem, I believe, with the half-lives of data on
'shelved' media being uncomfortably short;

my impression (no link!):
-SSD: noticeable decay after three years
-HDD: noticeable decay after five years
-DVD, home burnt: noticeable decay after five years
-DVD, factory pressed: noticeable decay after 10 years
-M-Disc: noticeable decay after 25 years

regards Rohan McLeod
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: [MLUG] Advice needed about ZFS

2020-08-26 Thread Russell Coker via luv-main
On Wednesday, 26 August 2020 8:43:30 PM AEST stripes theotoky via luv-main 
wrote:
> Question, do you really need to recover files from the snapshots on a
> 
> > routine basis?
> 
> No I don't.  I have never recovered anything from ZFS and hope to never
> have to. ZFS is a last ditch line of defense against some of my co-authors
> who have a terrible habit of working on old files, saving them out with the
> same name as the current working file. More than once a changed
> introduction has destroyed years of work and it has only been my paranoia
> in having about 6 backups of everything that has saved the project.

Storage is getting bigger all the time.  2TB SSDs in 2.5" laptop form factor 
are affordable now.  Sometimes it's easier to just buy more storage than to 
use storage effectively.  As an aside I'm sure that somewhere under my home 
directory I have the megabytes of wasted space that were an annoyance when my 
laptop had a 3.8G disk, but now with a 160G SSD it's not a problem.

> > I would suggest that you consider the frequency of the
> > snapshots, and of backups. If you truly understand, and take care with
> > your actions, how much do you need to recover and undelete as the
> > snapshots enable? If you do not need quite so frequent, nor so many
> > snapshots, then you will have more usable space when you transfer to a
> > new drive.
> 
> The trouble is I am often working on 4 - 5 academic papers at the same
> time; some of them maybe neglected for a couple of months hence errors
> won't be noticed until I come back to work on it again. In the worst case I
> had to dig up files from 3 years ago to recover data.

Having 3 years of snapshots of the important stuff is possible.  You could 
have a separate ZFS mountpoint just for that important data.  How large are 
your really important files?  1G?  3 years of archives of that 1G won't take 
much space on a 2TB SSD.

> I realise that. The reasons for ZFS are outlined above. I know a better
> strategy would be to use GIT but as my co-authors are very smart in maths
> and economics but have almost zero computer knowledge, trying to tell them
> what GIT is let alone why we should be using it is impossible.

Some systems I run use etckeeper to make a git repository of /etc.  On some 
triggering operations (including Debian package updates) etckeeper will run 
and commit all changes to git.  I presume you could run something similar that 
uses git for tracking snapshots of changes for the projects in question.
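
One hedged sketch of that idea: a tiny script run from cron that commits any
changes in a paper's directory, in the same commit-on-trigger spirit as
etckeeper (the PAPER_DIR path and the hourly schedule are made-up examples,
not anything from this thread):

```shell
#!/bin/sh
# Auto-commit sketch: snapshot a project directory into git on each run.
# PAPER_DIR is a hypothetical path -- point it at the project to track,
# and run this from cron (e.g. hourly).
PAPER_DIR=${PAPER_DIR:-$HOME/papers/projectX}
mkdir -p "$PAPER_DIR"
cd "$PAPER_DIR" || exit 1

# Initialise the repository on first run.
[ -d .git ] || git init -q .

git add -A
# Commit only when something actually changed, so cron runs stay quiet.
git diff --cached --quiet || \
    git -c user.name=autosave -c user.email=autosave@localhost \
        commit -q -m "autosave $(date +%Y-%m-%dT%H:%M)"
```

The co-authors never need to know git is there; it just gives you a recoverable
history alongside the ZFS snapshots.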

> Much of the data is on external disks located in 3 different countries so
> hopefully it is safe.

Having things on external disks is safe against some problems.  But if you 
have a disk just sitting around for long periods of time problems can occur.  
You need to verify the storage before it rots.

-- 
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/





Re: [MLUG] Advice needed about ZFS

2020-08-26 Thread stripes theotoky via luv-main
Hello Mark,

Question, do you really need to recover files from the snapshots on a
> routine basis?


No I don't.  I have never recovered anything from ZFS and hope to never
have to. ZFS is a last ditch line of defense against some of my co-authors
who have a terrible habit of working on old files, saving them out with the
same name as the current working file. More than once a changed
introduction has destroyed years of work and it has only been my paranoia
in having about 6 backups of everything that has saved the project.


> I would suggest that you consider the frequency of the
> snapshots, and of backups. If you truly understand, and take care with
> your actions, how much do you need to recover and undelete as the
> snapshots enable? If you do not need quite so frequent, nor so many
> snapshots, then you will have more usable space when you transfer to a
> new drive.
>

The trouble is I am often working on 4 - 5 academic papers at the same
time; some of them may be neglected for a couple of months, hence errors
won't be noticed until I come back to work on them again. In the worst case
I had to dig up files from 3 years ago to recover data.

The other matter is that the snapshots are not a backup strategy, they
> too are lost along with the current copies if you have a hard drive
> crash.


I realise that. The reasons for ZFS are outlined above. I know a better
strategy would be to use GIT but as my co-authors are very smart in maths
and economics but have almost zero computer knowledge, trying to tell them
what GIT is let alone why we should be using it is impossible.


> I would suggest looking at something like a Network Attached
> Storage device, with multiple drives in a suitable RAID array.


This is the ultimate plan to build a NAS from an HP Microserver. I am
leaning towards Nas4Free on an SSD or internal USB and 3, 6TB mirrors. This
is a project that has to wait because right now due to Covid19 and Brexit
we are not sure where we are.  I  am here and can't leave but expecting to
be out of work (which won't stop my research), my husband is
British/Australian, resident in Austria to avoid Brexit but is stranded by
Covid in Greece. When it all settles down and we have a home again building
this NAS is going to be pretty high on the list of things to do.

There are others on the LUV lists who can better advise on which strategies
> will actually provide a reasonable measure of security of data.
> Remember that there are repositories from which you can restore the OS
> and applications, but that your data, including particular
> configuration, can only be recovered from a suitable backup on another
> device, and best often stored on a separate site.
>

Much of the data is on external disks located in 3 different countries so
hopefully it is safe.

These are pointers to think about as to what is important, and what
> can you afford to loose, possibly because you can regenerate, possibly
> because it is not really that critical. As to life, yes the data has
> meaning to each of us, but it is not food and water even if it can be
> used in exchange for such.
>

True.

> First things first ZFS is choking due to lack of disk space. However, I
> > have a lot of totally unused diskspace on a windows partition so how can
> I
> > reduce the windows partition and increase the ZFS to stop it choking. It
> > has 560GB of which I do not need more than 200max to leave for Windoze.
>
> From this I gather that you find a need to have the Microsoft malware
> available. I have problems now and then when I have to deal
> interactively with a Microsoft Word document in the newest format. I
> usually point out that Word may well be widespread, but that is not
> universal, and a PDF can be locked to prevent being tampered with.
>

Windows certainly is malware. I have it to run Scientific Workplace (SW) as
some of my co-authors are unable to deal with Latex. In an ideal world I
would only use Kile or Eclipse but sometimes I don't have the hours / days
necessary to edit the perverted abomination that Scientific Workplace
thinks is a tex file into an actual tex file, hence the need to have the
windoze to run SW.

> Gparted apparently doesn't like ZFS but does this matter? I can I assume
> > use it to shrink the Windows partition and free up about 360GB, then how
> do
> > I expand the ZFS to use that free space?
> > Second, I have checked /etc/cron and find auto snap shot commands for
> > hourly, daily, weekly & monthly
> > hourly = 24, daily = 31, weekly = 8, monthly = 6. It seems I could change
> > that to hourly = 24, daily = 7, weekly = 4, monthly = 6 and get pretty
> much
> > the same coverage with a lot less snapshots.
> >
> > Listing snapshots shows some from 2019, where are they coming from as
> with
> > monthly only storing 6 months there shouldn't be anything newer than
> > February 2020 or am I not understanding something here?
>
> I would think that if the computer was not in use at the critical
> time, the 

Re: [MLUG] Advice needed about ZFS

2020-08-24 Thread Mark Trickett via luv-main
Hello Stripes,

On 8/24/20, stripes theotoky via luv-main  wrote:
> Thank you all for the very helpful advice:-
>
> If I understand you all the problems are ZFS is choking because I don't
> have enough disk space. The reason I don't have enough disk space is
> because the files I deleted are still in the old snapshots, plus it was
> already pretty full.
>
> This seems like a good way forward but is it really and if so how do I do
> it?

Question, do you really need to recover files from the snapshots on a
routine basis? I would suggest that you consider the frequency of the
snapshots, and of backups. If you truly understand, and take care with
your actions, how much do you need to recover and undelete as the
snapshots enable? If you do not need quite so frequent, nor so many
snapshots, then you will have more usable space when you transfer to a
new drive.

The other matter is that the snapshots are not a backup strategy, they
too are lost along with the current copies if you have a hard drive
crash. I would suggest looking at something like a Network Attached
Storage device, with multiple drives in a suitable RAID array. There
are others on the LUV lists who can better advise on which strategies
will actually provide a reasonable measure of security of data.
Remember that there are repositories from which you can restore the OS
and applications, but that your data, including particular
configuration, can only be recovered from a suitable backup on another
device, and best often stored on a separate site.

These are pointers to think about as to what is important, and what
can you afford to loose, possibly because you can regenerate, possibly
because it is not really that critical. As to life, yes the data has
meaning to each of us, but it is not food and water even if it can be
used in exchange for such.

> First things first ZFS is choking due to lack of disk space. However, I
> have a lot of totally unused diskspace on a windows partition so how can I
> reduce the windows partition and increase the ZFS to stop it choking. It
> has 560GB of which I do not need more than 200max to leave for Windoze.

From this I gather that you find a need to have the Microsoft malware
available. I have problems now and then when I have to deal
interactively with a Microsoft Word document in the newest format. I
usually point out that Word may well be widespread, but that is not
universal, and a PDF can be locked to prevent being tampered with.

> Gparted apparently doesn't like ZFS but does this matter? I can I assume
> use it to shrink the Windows partition and free up about 360GB, then how do
> I expand the ZFS to use that free space?
> Second, I have checked /etc/cron and find auto snap shot commands for
> hourly, daily, weekly & monthly
> hourly = 24, daily = 31, weekly = 8, monthly = 6. It seems I could change
> that to hourly = 24, daily = 7, weekly = 4, monthly = 6 and get pretty much
> the same coverage with a lot less snapshots.
>
> Listing snapshots shows some from 2019, where are they coming from as with
> monthly only storing 6 months there shouldn't be anything newer than
> February 2020 or am I not understanding something here?

I would think that if the computer was not in use at the critical
time, the snapshots could remain. Cron cannot do things while the
computer is off. I know that there is an alternative, anacron, that is
coded to cope with doing the actions that fell while the computer was
off, when it is next booted. It would be worth checking which is
installed and used.


> The computer is a Lenovo W541 laptop, the longer term plan is to double the
> memory to 32GB and put a 2TB SSD in this box. Does that sound sensible?
>
> In the meantime, a thorough backup is running in at least two external
> drives (one incremental and one fresh).

Again, consider backups on a separate device. Again, look to your use
patterns and practices so that you have less need for the snapshots
and recovering deleted files, and as such can use less snapshots. Look
at what you are trying to achieve, and ask how to be reasonably
effective. Try to make the most of the resources you can afford,
rather than spending too much to compensate for suboptimal habits and
practices.

> Stripes.

Regards,

Mark Trickett


Re: [MLUG] Advice needed about ZFS

2020-08-24 Thread stripes theotoky via luv-main
Thank you all for the very helpful advice:-

If I understand you all, the problem is that ZFS is choking because I don't
have enough disk space. The reason I don't have enough disk space is that
the files I deleted are still in the old snapshots, plus it was already
pretty full.

This seems like a good way forward but is it really and if so how do I do
it?
First things first: ZFS is choking due to lack of disk space. However, I
have a lot of totally unused disk space on a Windows partition, so how can I
reduce the Windows partition and increase the ZFS to stop it choking? It
has 560GB, of which I do not need more than 200GB max to leave for Windoze.

Gparted apparently doesn't like ZFS, but does this matter? Can I, I assume,
use it to shrink the Windows partition and free up about 360GB, and then how
do I expand the ZFS to use that free space?
Second, I have checked /etc/cron and find auto snap shot commands for
hourly, daily, weekly & monthly
hourly = 24, daily = 31, weekly = 8, monthly = 6. It seems I could change
that to hourly = 24, daily = 7, weekly = 4, monthly = 6 and get pretty much
the same coverage with a lot less snapshots.

Listing snapshots shows some from 2019. Where are they coming from? With
monthly only storing 6 months there shouldn't be anything older than
February 2020, or am I not understanding something here?

The computer is a Lenovo W541 laptop, the longer term plan is to double the
memory to 32GB and put a 2TB SSD in this box. Does that sound sensible?

In the meantime, a thorough backup is running in at least two external
drives (one incremental and one fresh).

Stripes.

On Sun, 23 Aug 2020 at 06:55, Keith Bainbridge 
wrote:

>
> On 23/8/20 10:25 am, Darren Wurf wrote:
> > I noticed the snapshots follow a grandparent-parent-child pattern,
> > something is managing these snapshots for you - which would explain why
> > you have no older ones as well as why there are many recent snapshots
> > and few older ones.
>
> And if that is the case, that manager may be using hard links to reduce
> the space being used.  So deleting older snapshots may not have a big
> affect.
>
> I know timeshift works that way - the grandparent-parent-child pattern
> and hard links.
>
> --
> Keith Bainbridge
>
> keithrbaugro...@gmail.com
> or ke1thozgro...@gmx.com
>
> --
> You received this message because you are subscribed to the Google Groups
> "mlug-au" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to mlug-au+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/mlug-au/a8673e8a-117b-e3b3-3f61-6316b46fac13%40gmail.com
> .
>


-- 
Stripes Theotoky

-37.713869
145.050562


Re: [MLUG] Advice needed about ZFS

2020-08-20 Thread Craig Sanders via luv-main
On Thu, Aug 20, 2020 at 01:40:03PM +0100, stripes theotoky wrote:
> When we started this discussion we had this
>
> stripinska@Stripinska:~$ sudo zfs list
> NAME   USED  AVAIL  REFER
>  MOUNTPOINT
> alexandria 332G  1.06G96K
>  /alexandria

> Now having moved 47.3 GB of files to an external drive I have this.
>
> stripinska@Stripinska:~$ sudo zfs list
> NAME USED  AVAIL  REFER
>  MOUNTPOINT
> alexandria   332G   782M96K
>  /alexandria
>
> What is eating my space?

To truly delete files from ZFS (or from anything that supports snapshots), you
need to delete not only the file(s) from the current state of the filesystem,
but also any snapshots containing them.

The space will not be freed until there are no remaining snapshots containing
the file(s) you deleted.

Note that you can't delete individual files from a snapshot, you can only
delete entire snapshots.  This will have a significant impact on your backup
and recovery strategy.
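
To see which snapshots are pinning that space, something like the following
can help (the "alexandria" pool name is from this thread; the snapshot output
will obviously differ on your system):

```shell
# List all snapshots on the pool, sorted by the space they pin
# (largest last), so the worst offenders are easy to spot.
zfs list -t snapshot -o name,used,refer -s used -r alexandria
```

The USED column on a snapshot is space that will only be freed once that
snapshot (and any others referencing the same blocks) is destroyed.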

> It is not the cache for Firefox as this is only 320M.

Maybe, maybe not.  Browser cache directories tend to change a lot, so they end
up using a lot more space in the snapshots than you might think, because the
snapshots keep all the cached junk that your browser itself has deleted but
your zfs-autosnap hasn't expired yet.

There really isn't much value in keeping snapshots of cache dirs like this, so
try creating a new dataset to hold these caches (and make sure it, or at least
sub-directories on it, are RW by your uid).  Configure zfs-autosnap to ignore
it (i.e. no snapshots), and then configure your browser to use it for caches.

I don't use zfs-auto-snap myself, but according to the man page, to exclude
a dataset from zfs-auto-snap you need to create a property on it called
"com.sun:auto-snapshot" and set it to false. e.g.

zfs create alexandria/nosnaps
zfs set com.sun:auto-snapshot=false alexandria/nosnaps

(btw, you can also set a quota on a dataset so that it can't use all available
space - better to have firefox die because it can't cache extra stuff than to
have other random programs fail or crash due to out of space errors)

If you have multiple programs that keep caches like this, you could create
one dataset each for them.  IMO, it would be better to create just one
dataset (call it something like "alexandria/nosnaps") and then create as many
sub-directories as you need under it.

make /alexandria/nosnaps/stripes/ readable and writable by user 'stripes', and
your programs can create directories and files underneath it as needed. e.g.
something like

mkdir -p /alexandria/nosnaps/stripes/firefox-cache
chown -R stripes /alexandria/nosnaps/stripes
chmod u=rwX /alexandria/nosnaps/stripes



I'm not entirely sure how to change the cache dir in firefox, but I am certain
that it can be done, probably somewhere in "about:config".  At worst, you
can either set the mountpoint of the "nosnaps" dataset to be the cache dir
(remember to quit from firefox and delete the existing cache first), or
symlink into a subdir under "nosnaps".  The latter is better because it
enables multiple different cache dirs under the one nosnaps dataset.
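
A sketch of the symlink route.  Demo paths are used below so it's safe to try
anywhere; on the setup in this thread you'd substitute
/alexandria/nosnaps/stripes/firefox-cache and firefox's real cache directory
(typically ~/.cache/mozilla on Linux), with firefox closed first:

```shell
#!/bin/sh
# Demo stand-ins -- replace with the real nosnaps dataset path and the
# real browser cache directory on your machine.
NOSNAPS="$HOME/nosnaps-demo"           # stand-in for /alexandria/nosnaps/stripes
CACHE="$HOME/.cache/mozilla-demo"      # stand-in for ~/.cache/mozilla

# Create the cache subdir under the no-snapshot area, and the parent
# directory of the symlink we're about to make.
mkdir -p "$NOSNAPS/firefox-cache" "$(dirname "$CACHE")"

# Remove the old cache (quit the browser first on a real system!) and
# point the cache path into the nosnaps dataset.
rm -rf "$CACHE"
ln -s "$NOSNAPS/firefox-cache" "$CACHE"
```

After this, everything the browser writes to its cache path lands in the
dataset that zfs-auto-snap ignores.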



BTW, if you download stuff with deluge or some other torrent client, you should
make an alexandria/torrents dataset with a recordsize of 16K instead of the
default 128K (bit torrent does a lot of random reads and writes in 16KB
blocks, so this is the optimum recordsize). for example:

  zfs create -o recordsize=16k -o mountpoint=/home/stripes/torrents/download alexandria/torrents
  chown stripes /home/stripes/torrents/download
  chmod u=rwx /home/stripes/torrents/download


zfs-autosnap should be configured to ignore this dataset too, and your torrent
client should be configured to download torrents into this directory and then
move them somewhere else once the download has completed.  This avoids
wasting space on snapshots of partially downloaded stuff, AND minimises
fragmentation (as the downloaded torrents will be de-fragmented when they're
moved somewhere else).



there are probably other things that don't need to be snapshotted - like
/tmp, /var/tmp, maybe (parts of) /var/cache, and other directories containing
short-lived transient files.  I wouldn't bother doing anything about them
unless they waste a lot of disk space.


> How do I get it back before the box freezes.

1. for such a small filesystem, I recommend setting a more aggressive snapshot
expiration policy for zfs-autosnap.  From what you've posted, it looks like
zfs-autosnap is configured to keep the last 12 months or so of snapshots, but
you probably can't afford to keep more than the last three to six months of
snapshots.

zfs-auto-snap doesn't seem to have a config file, the snapshot retention
is handled by command-line options, which you can see if you look 

Re: [MLUG] Advice needed about ZFS

2020-08-20 Thread Joel W Shea via luv-main
On Thu, 20 Aug 2020 at 22:40, stripes theotoky via luv-main <
luv-main@luv.asn.au> wrote:

> When we started this discussion we had this
> (...)
> The more disk space I free the less available space I have.

You're not freeing space by deleting files if they continue to exist in
snapshots. You can either;
- wait for snapshots to get rotated out and destroyed
- destroy unused snapshots; maybe use `zfs send` to back them up to external
  media first?
- mount each snapshot, then remove the offending files from each; which
  seems tedious, and prone to mistakes.
  c.f. `zfs diff` to find differences between snapshots
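
For example (the auto-snap names here are hypothetical; substitute two real
snapshots from `zfs list -t snapshot` on your pool):

```shell
# Show what changed between two snapshots of the pool from this thread.
# Output legend: M = modified, + = created, - = removed, R = renamed.
zfs diff alexandria@zfs-auto-snap_daily-2020-08-19 \
         alexandria@zfs-auto-snap_daily-2020-08-20
```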

Cheers, Joel


Re: [MLUG] Advice needed about ZFS

2020-08-20 Thread Russell Coker via luv-main
On Thursday, 20 August 2020 10:40:03 PM AEST stripes theotoky via luv-main 
wrote:
> When we started this discussion we had this
> 
> stripinska@Stripinska:~$ sudo zfs list
> NAME   USED  AVAIL  REFER
>  MOUNTPOINT
> alexandria 332G  1.06G96K
>  /alexandria
> 
> Now having moved 47.3 GB of files to an external drive I have this.
> 
> stripinska@Stripinska:~$ sudo zfs list
> NAME USED  AVAIL  REFER
>  MOUNTPOINT
> alexandria   332G   782M96K
>  /alexandria
> 
> What is eating my space?

You have a script setup to make regular snapshots so that if you accidentally 
delete something you can get it back from an older snapshot.  The snapshots 
keep the storage space for the files in use.  If you wait a few weeks the 
space will be freed as newer snapshots without those 47G of files replace the 
old ones with them.

Alternatively you can delete some of the snapshots manually.
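
A sketch of the manual route, with hypothetical snapshot names; the -n
(dry-run) flag shows what would be freed without destroying anything, and the
% syntax selects a consecutive range of snapshots:

```shell
# Dry-run first: see what destroying a single old snapshot would free.
zfs destroy -nv alexandria@zfs-auto-snap_monthly-2019-10

# Dry-run destroying a whole consecutive range (oldest%newest).
zfs destroy -nv alexandria@zfs-auto-snap_monthly-2019-10%zfs-auto-snap_monthly-2020-01

# Re-run without -n once the reported reclaim looks right.
```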

-- 
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/





Re: [MLUG] Advice needed about ZFS

2020-08-20 Thread stripes theotoky via luv-main
When we started this discussion we had this

stripinska@Stripinska:~$ sudo zfs list
NAME   USED  AVAIL  REFER
 MOUNTPOINT
alexandria 332G  1.06G96K
 /alexandria

Now having moved 47.3 GB of files to an external drive I have this.

stripinska@Stripinska:~$ sudo zfs list
NAME USED  AVAIL  REFER
 MOUNTPOINT
alexandria   332G   782M96K
 /alexandria

What is eating my space?
It is not the cache for Firefox as this is only 320M.
How do I get it back before the box freezes?
The more disk space I free the less available space I have.

Stripes.


On Thu, 20 Aug 2020 at 11:05, Russell Shaw  wrote:

> On 20/8/20 7:56 pm, 'stripes theotoky' via mlug-au wrote:
> > On Thu, 20 Aug 2020 at 07:33, Russell Shaw wrote:
> >
> > On 20/8/20 4:53 pm, 'stripes theotoky' via mlug-au wrote:
> >  > Dear Group,
> >  >
> >  > I am looking for advice on ZFS. My computer is running painfully
> slowly.
> >
> > Check your pc has 8G of ram or more.
> >
> > It has 16G
>
> NAMEAVAIL   USED
> alexandria  1.06G   332G
>
> The free space looks pretty low if much buffering is needed.
>
> Think more free space is needed.
>


-- 
Stripes Theotoky

-37.713869
145.050562