Debian 11, vtapes not found correctly anymore

2022-04-05 Thread Stefan G. Weichinger



On two Amanda installations running Debian 11 I have been seeing this lately:

Both setups use an aggregate configuration of multiple external USB disks 
with vtapes on them.


That worked well for years.

Now I get errors because Amanda does not find valid tapes on one or more 
of the external disks.


"amtape config inventory" shows the vtapes, but "amcheck -t config" 
tells me "No acceptable volumes found".


If I relabel a tape with "amlabel -f vtape_usb vtape_usb-002-004 slot 1:4", 
it gets detected again:


$ amcheck -t vtape_usb
Amanda Tape Server Host Check
-
mount: /mnt/externaldisk01: can't find 
UUID=98cb03d6-e95e-462d-a823-6011b37c9f42.

slot 1:4: volume 'vtape_usb-002-004'
Will write to volume 'vtape_usb-002-004' in slot 1:4.
NOTE: skipping tape-writable test
Server check took 6.673 seconds
(brought to you by Amanda 3.5.1)


(externaldisk01 is absent: OK, externaldisk02 is connected)
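For what it's worth, a chg-disk style vtape slot is just a directory on the mounted disk, and the volume label can usually be read back from the files inside it. The layout below (slot directories containing a file named 00000.<label>) is an assumption about the on-disk format, and the paths are a mock built in a temp dir rather than the real mount point, so the commands are runnable as-is:

```shell
# Hypothetical vtape layout: one directory per slot; the first file in
# a labeled slot carries the label in its name (assumption, mocked here).
root=$(mktemp -d)
mkdir -p "$root/slot4"
touch "$root/slot4/00000.vtape_usb-002-004"

# Show which label each slot directory appears to hold:
for slot in "$root"/slot*; do
  label=$(ls "$slot" | sed -n 's/^00000\.//p')
  echo "$(basename "$slot"): ${label:-<no label>}"
done
```

Comparing what is actually on disk against what "amtape config inventory" reports might show whether the disks went missing or only the labels did.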

I assume I should grep through some debug logs. What should I look for, and where?
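A hedged sketch of the kind of grep I would start with. The directory /var/log/amanda/server/<config>/ and the amcheck.<timestamp>.debug naming are assumptions (the debug location depends on how Amanda was built), so a mock directory stands in for the real one here:

```shell
# Mock of a per-config debug directory (real path is an assumption;
# on Debian builds it is commonly under /var/log/amanda/).
logdir=$(mktemp -d)
printf '%s\n' "chg-aggregate: slot 1:4" "No acceptable volumes found" \
  > "$logdir/amcheck.20220405.debug"

# Pick the most recent amcheck debug file:
latest=$(ls -t "$logdir"/amcheck.*.debug | head -1)

# Lines worth reading first: changer/slot activity and errors.
grep -nE 'No acceptable|ERROR|slot' "$latest"
```

Each Amanda program (amcheck, amtape, the changer scripts) writes its own debug file per run, so the changer's file for the failing run is probably the most informative one.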

regards, Stefan


Re: using Google Cloud for virtual tapes

2022-04-05 Thread Stefan G. Weichinger

On 04.04.22 at 23:33, Chris Hassell wrote:

Google Cloud works, and works well enough indeed.   All of them work... but a 
non-block technique is capped at 5TB by almost all providers, and getting 
there is complicated and can DOUBLE your storage cost over one month [only] if 
you cannot make temporaries without being charged.  (Looking at you, Wasabi!!)

However, for Google specifically it's one of A, B, C, or D...
A) the desired block size must be kept automatically small (it varies, but a 
buffer of ~40MB or smaller for a 4GB system) ... and each DLE "tape" must be 
limited in size
B) the biggest block size can be used to store 5TB objects [max == 512MB], but 
the curl buffers will take ~2.5GB and must be hardcoded [currently] in the 
build.  It's too much for many systems.
C) the biggest block size can be used, but Google cannot FAIL EVEN ONCE ... or 
the cloud upload cannot be restarted and the DLE basically fails.   This 
doesn't succeed often on the way to 5TB.
D) the biggest block size can be used, but Multi-Part must be turned off, and a 
second DLE and later ones get to be very, very slow to add on

Option D has NO limits to backups, but it is what needs the O(log N) check for 
single-stored blocks.

This currently does an O(N) check against earlier blocks to verify the cloud 
storage total after file #1, every time.   Very slow at only 1000 objects per 
transaction.


@Chris, thanks for this info.

Sounds a bit complicated; I'll have to see where to start.

I won't have very large DLEs anyway, that might help.

I have ~700GB per tape right now, which isn't very much, although 
bandwidth (= backup time window) will also be an issue here.
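To get a feel for the O(N) re-scan Chris describes with option D, here is a small back-of-the-envelope sketch. The 1000-objects-per-listing-call figure comes from his message; the function name and the 5TB/512MB numbers are just illustrative assumptions:

```python
import math

def list_calls(num_objects: int, page_size: int = 1000) -> int:
    """API calls needed to enumerate all previously stored blocks when
    the listing is paginated at page_size objects per call (the O(N)
    re-scan described above). Hypothetical helper, not Amanda code."""
    return max(1, math.ceil(num_objects / page_size))

# With the biggest (512MB) block size, a 5TB volume holds 10240 blocks:
blocks = (5 * 1024**4) // (512 * 1024**2)
print(blocks, list_calls(blocks))  # 10240 blocks -> 11 listing calls
```

The per-rescan cost itself is modest; the pain Chris points at is that the whole enumeration is repeated for every file after the first, which is what makes later DLEs on the same volume progressively slower.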