On Apr 10, 2019, at 12:08 PM, Peng Yu <[email protected]> wrote:
>
> https://softwarerecs.stackexchange.com/questions/45010/transparent-file-compression-apps-for-macos
>
> I work on Mac. Would this be worthwhile to try?
The first link didn’t work here because it didn’t like the APFS drive I tried
it on. (Symptom: “Expecting f_type of 17, 23 or 24. f_type is 26.”)
I then tried the so-called “GitHub mirror”, which is no such thing: it’s
considerably advanced beyond the last version published at the first link, and
one of its improvements is APFS awareness.
Using that improved version, simple tests worked, but attempting to use it on a
SQLite DB file decompressed it and left it uncompressed. I believe this is
because this OS feature relies on the old resource fork feature, which means it
only works with apps using the Apple-proprietary programming interfaces, not
the POSIX interfaces that SQLite uses.
> Does the transparent
> compression work at the file system level or at the directory level?
Neither: it works at the file level.
You can point afsctool at a directory and it will compress the files in that
directory, but if you then drop another file into it, the new file won’t be
compressed automatically.
The tool will also skip over files it considers “already compressed” unless you
give it the -L flag, so giving it a directory name isn’t guaranteed to result
in all files in that directory being compressed.
> Would
> it have a slight chance to corrupt the existing files on the disk (e.g.,
> power outage during compression)?
Between the resource fork issue and the fact that we’re having to use a
third-party tool to enable it, I wouldn’t put much trust in this feature.
If you want to put your trust in a third-party OS add-on, O3X is worth much
more of your attention than afsctool:
https://openzfsonosx.org/
If you don’t have a spare disk to feed to it, you can create a pool using raw
disk images:
https://openzfsonosx.org/wiki/FAQ#Q.29_Can_I_set_up_a_test_pool_using_files_instead_of_disks.3F
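To make that concrete, here’s a rough sketch of setting up a file-backed test
pool with compression turned on. The pool name, image path, and size are
placeholders of my own, not anything O3X prescribes:

```shell
# Sketch only: create a 1 GiB sparse image file and build a ZFS pool on it.
# "testpool" and the image path are made-up names.
IMG=/var/tmp/zpool-backing.img
dd if=/dev/zero of="$IMG" bs=1048576 seek=1024 count=0 2>/dev/null  # sparse 1 GiB
sudo zpool create testpool "$IMG"        # file vdevs must be absolute paths
sudo zfs set compression=lz4 testpool    # transparent compression, at last
```

When you’re done experimenting, `zpool destroy testpool` and delete the image.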
ZFS’s abilities to a) add vdevs to a pool and b) replace smaller vdevs with
larger ones together mean you can safely set the initial pool size just barely
larger than the initial DB file plus any ancillary space needed (WAL, slack
pages between VACUUM calls, etc.). You can then grow the pool occasionally to
keep ahead of the DB’s growth without sacrificing too much to filesystem
overhead. It’d be easy to write an on-demand 20% pool size growth script, for
instance; it’d be maybe half a dozen lines of Bash.
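Such a script might look something like this sketch. The pool name, image
path, and 80% trigger threshold are all assumptions on my part:

```shell
#!/bin/bash
# Hypothetical sketch of the on-demand growth idea: once the pool passes
# 80% full, extend its file-backed vdev by 20% and let ZFS expand onto
# the new space. Run as root; names and threshold are placeholders.
POOL=sqlitepool
IMG=/var/tmp/sqlitepool.img

used=$(zpool list -H -o capacity "$POOL" | tr -d '%')
if [ "$used" -ge 80 ]; then
    size=$(wc -c < "$IMG")
    new=$(( size + size / 5 ))          # 20% bigger
    # Extending the sparse backing file; dd's default truncation
    # grows the file to the seek offset without touching existing data.
    dd if=/dev/zero of="$IMG" bs=1 seek="$new" count=0 2>/dev/null
    zpool online -e "$POOL" "$IMG"      # expand the vdev onto the new space
fi
```

Run it from cron or launchd; setting `zpool set autoexpand=on` on the pool is
an alternative to the explicit `zpool online -e`.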
Lest you go off on an unfortunate tangent from this idea, note that the
“compressed disk image” feature of Disk Utility won’t help you here. Those are
read-only.
_______________________________________________
sqlite-users mailing list
[email protected]
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users