On 05.04.22 at 11:55, Stefan G. Weichinger wrote:
On 04.04.22 at 23:33, Chris Hassell wrote:
Google Cloud works, and works well enough; all of the providers do.
But the non-block technique is capped at 5TB by almost all providers,
and getting there is complicated: it can DOUBLE your storage cost for
one month if you cannot create temporary objects without being charged
for them (looking at you, Wasabi!).
However, for Google specifically it is one of A, B, C, or D:
A) the block size must be kept automatically small (it varies, but
roughly a 40MB buffer or smaller on a 4GB system) ... and each DLE
"tape" must be limited in size
B) the biggest block size can be used to store 5TB objects [max block
== 512MB], but the curl buffers will take ~2.5GB and are [currently]
hardcoded in the build. That is too much memory for many systems.
C) the biggest block size can be used, but then Google cannot FAIL
EVEN ONCE ... the cloud upload cannot be restarted, so the DLE
essentially fails. Uploads rarely succeed all the way to 5TB.
D) the biggest block size can be used, but Multi-Part must be turned
off, and appending a second or later DLE becomes very, very slow.
Option D has NO limit on backup size, but it is the option that needs
the O(log N) check for singly-stored blocks.
It currently does an O(N) check against earlier blocks, re-verifying
the cloud storage total after file #1 every time. Very slow at only
1000 keys per transaction.
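The cost difference can be sketched as follows. This is an illustrative toy, not Amanda's device code: the bucket, the "block.NNNNNNNN" object naming, and the client calls are all hypothetical stand-ins for real LIST (1000 keys per request) and HEAD operations. A paged listing touches every stored block, while a binary search over numbered block names needs only O(log N) existence probes.

```python
import bisect

class FakeBucket:
    """Hypothetical in-memory stand-in for a cloud bucket of numbered blocks."""
    def __init__(self, n_blocks):
        self.keys = [f"block.{i:08d}" for i in range(n_blocks)]
        self.requests = 0

    def list_page(self, start, page_size=1000):
        """One LIST request: up to page_size keys at or after `start`."""
        self.requests += 1
        i = bisect.bisect_left(self.keys, start)
        return self.keys[i:i + page_size]

    def exists(self, key):
        """One HEAD request: does this block exist?"""
        self.requests += 1
        i = bisect.bisect_left(self.keys, key)
        return i < len(self.keys) and self.keys[i] == key

def count_blocks_linear(bucket):
    """O(N) requests total: walk every page of 1000 keys."""
    total, start = 0, ""
    while True:
        page = bucket.list_page(start)
        if not page:
            return total
        total += len(page)
        start = page[-1] + "\x00"  # resume just past the last key seen

def count_blocks_logn(bucket, hi=1 << 32):
    """O(log N) requests: binary-search the first missing block number."""
    lo = 0  # invariant: every block below `lo` is known to exist
    while lo < hi:
        mid = (lo + hi) // 2
        if bucket.exists(f"block.{mid:08d}"):
            lo = mid + 1
        else:
            hi = mid
    return lo
```

With 100,000 stored blocks, the linear walk issues ~101 LIST requests, while the binary search needs only ~32 HEAD probes regardless of how the blocks got there.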
@Chris, thanks for this information.
Sounds a bit complicated; I will have to see how I can start there.
I won't have very large DLEs anyway, which might help.
I have ~700GB per tape right now, which is not very much, although
bandwidth (= the backup time window) will also be an issue here.
I have now, at last, received credentials for that GCS storage bucket,
so I can start to try ...
Does it make sense to somehow follow
https://wiki.zmanda.com/index.php/How_To:Backup_to_Amazon_S3 ?
I can't find anything in the wiki that mentions Google Cloud.
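For what it's worth, the S3 howto may largely carry over: Google Cloud Storage exposes an S3-interoperable XML API at storage.googleapis.com using HMAC credentials, so a configuration along these lines might be a starting point. This is an UNTESTED sketch under that assumption; every property name, the changer syntax, and the endpoint should be verified against the amanda-devices(7) man page for your Amanda version before use.

```
# amanda.conf sketch -- ASSUMPTION, verify against amanda-devices(7)
define changer gcs_changer {
    tpchanger "chg-multi:s3:my-bucket/backups/slot-{01,02,03}"
    device-property "S3_HOST" "storage.googleapis.com"  # GCS S3-interop endpoint
    device-property "S3_ACCESS_KEY" "GOOG..."           # GCS HMAC access key
    device-property "S3_SECRET_KEY" "..."               # GCS HMAC secret
    device-property "S3_SSL" "YES"
}
```

The bucket name, slot layout, and placeholder keys above are invented for illustration.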