brogeriofernan...@gmail.com wrote at about 15:21:10 -0300 on Wednesday, 
February 16, 2022:
 > Hi everyone
 > 
 > Currently, I'm using urbackup to back up a lot of image files and
 > am thinking of migrating to backuppc.
 > 
 > I'm wondering if it would be possible to run a command just after the
 > client transfers file data but before it's stored in the backuppc
 > pool. My idea is to do an image compression, like jpeg-xl lossless,
 > instead of the standard zlib one.

That capability does not (yet) exist.
Currently, BackupPC supports only zlib compression (or no
compression), and even the compression level cannot vary between
files within a given backup (and frankly, compression is typically
set uniformly across all hosts and backups for pooling consistency).

The ability to have a per-file option to specify the compression type
is an interesting feature request -- but it would require some
significant rearchitecting to:

1. Create logic, after the file transfer, to decide what compression
   to use, potentially based on a number of different criteria,
   e.g., file extension, file size, a regexp match on the file name,
   file type...
   Doing it right would add a fair bit of complexity and, depending
   on the test, could slow down backups (see the sketch after this
   list).

2. Decide how to associate the compression type with the file.

   If this is done at the pool-file level (as part of a header or
   magic number), then you may run into problems if, for the same
   file content, there are conflicting choices of compression (this
   could happen between hosts, or even on the same host if files
   have the same content but different names, for example).

   If this is done at the backup level, by adding an entry to the
   attrib file, then you have a similar problem of conflicting
   compression directives for the same pooled file content.


3. Have a way to pass code (interpreted Perl code? a Perl code stub?
   a command-line call?) to do the compression.
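
To make item 1 concrete, the per-file decision hook might look
something like the Perl sketch below. To be clear, nothing like this
exists in BackupPC today; the function name and the criteria are
purely hypothetical:

    # HYPOTHETICAL sketch -- no such hook exists in BackupPC today.
    # Decide how to compress a just-transferred file, based on its
    # name and size. Returns 'jxl', 'zlib', or 'none'.
    sub chooseCompression
    {
        my($fileName, $fileSize) = @_;

        # Already-compressed image formats: recompress losslessly
        # with JPEG XL instead of zlib.
        return 'jxl'  if ( $fileName =~ /\.jpe?g$/i );

        # Tiny files: not worth the compression overhead.
        return 'none' if ( $fileSize < 512 );

        # Everything else: the current zlib default.
        return 'zlib';
    }

Note that every test here would run once per transferred file (the
slowdown concern in item 1), and that per item 2, any rule keyed on
the file name can give conflicting answers for identical content,
which breaks pooling consistency.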


 > Reading the docs, I couldn't find any mention of specifying a command
 > to run before storing data, just commands to run before or after a
 > backup is done as a whole (e.g., $Conf{DumpPreUserCmd},
 > $Conf{DumpPostUserCmd})

True. (And note that those hooks run once per backup, not once per
transferred file.)
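
For reference, those existing hooks are whole-backup commands set in
the Perl config file, along these lines (the ssh invocation follows
the documented $sshPath/$host substitution pattern, but the script
path is made up):

    # Runs once on the server before the entire dump -- not per file.
    # BackupPC substitutes $sshPath and $host; pre-backup.sh is a
    # hypothetical client-side script.
    $Conf{DumpPreUserCmd} =
        '$sshPath -q -x -l root $host /usr/local/bin/pre-backup.sh';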


 > Maybe one option would be to convert and keep both versions of the
 > files, JPEG and JPEG-XL, on the client, but that isn't viable for me
 > because I currently have about 15TB of images and don't have enough
 > room to keep both of them.

That is the best way.
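
For what it's worth, the client-side conversion itself is simple; a
Perl sketch along these lines would do it (the root directory is
made up, and you should verify that your cjxl build losslessly
recompresses JPEG input the way you want):

    #!/usr/bin/perl
    # Hypothetical client-side pass: recompress every JPEG under
    # /data/images to JPEG XL, keeping both copies alongside each
    # other. Requires the cjxl binary from libjxl.
    use strict;
    use warnings;
    use File::Find;

    find(sub {
        return unless -f && /\.jpe?g$/i;
        (my $jxl = $File::Find::name) =~ s/\.jpe?g$/.jxl/i;
        return if -e $jxl;            # skip already-converted files
        system("cjxl", $_, $jxl) == 0
            or warn "cjxl failed on $File::Find::name\n";
    }, "/data/images");

It is the space cost, as you say, that makes this painful at 15TB.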

 > 
 > So, the steps I'm thinking of would be to transfer the files from the
 > client to the server as usual but, just before storing them in the
 > pool, to run a command (in this case, cjxl) to do the image
 > compression.
Doing this right is not so simple, for the reasons outlined above.
That said, the code is open source, and you are encouraged to submit
patches to do what you want here.

 > Certainly, this would be more bandwidth-friendly if it were possible
 > to do this compression before transferring to the server, but I can't
 > figure out how I could accomplish this.
Presumably harder, as it would require host-side code to do things
such as running a patched host version of rsync (or of whatever other
transfer-method executable is used).
 > 
 > Any thoughts on this?
 > 
 > Many thanks

_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/
