Re: Multiple storage-definitions, generating archive tapes

2022-10-30 Thread Stefan G. Weichinger

On 30.10.22 at 16:32, Exuvo wrote:
Are you not supposed to put all the -letter options before the 
configuration name? Is that maybe why it does not work? I.e.:

amflush -o storage=storage1 configurationNameHere




No.

amflush config -o storage=storage1

works fine so far.

My question is: how can I check which storage I have to flush things to 
without starting amflush?


The policies for the 2 storages are different, so the holding disk files 
of some DLEs go to storage1, others to storage2, but I don't know in 
advance where amflush would put them.
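
The closest workaround I can think of is to sort the holding-disk 
listing by dump level myself, since a FULL-only dumpselection can only 
take the level-0 dumps. A rough, untested sketch ("amadmin <config> 
holding list -l" exists, but its exact column layout may differ between 
versions, so the awk field may need adjusting):

#!/bin/sh
# Guess which storage would take each held dump, by dump level.
# Assumption: the level is the last column of the long listing.
CONFIG=daily   # placeholder for the real configuration name

echo "== level-0 dumps (candidates for the FULL-only storage) =="
amadmin "$CONFIG" holding list -l | awk '$NF ~ /^0$/'

echo "== incremental dumps (these can only go to the ALL storage) =="
amadmin "$CONFIG" holding list -l | awk '$NF ~ /^[1-9][0-9]*$/'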




Re: Multiple storage-definitions, generating archive tapes

2022-10-30 Thread Exuvo

Are you not supposed to put all the -letter options before the configuration 
name? Is that maybe why it does not work?
I.e.: amflush -o storage=storage1 configurationNameHere

Anton "exuvo" Olsson
   ex...@exuvo.se

On 2022-10-24 19:08, Stefan G. Weichinger wrote:


So far my setups with multiple storages work.

[...]

I start

amflush config -o storage=storage1

and then see in amstatus that all DLEs would go to storage2.

[...]

Is there a way to check that without starting amflush?


Re: Multiple storage-definitions, generating archive tapes

2022-10-24 Thread Stefan G. Weichinger



So far my setups with multiple storages work.

One of the missing features (or maybe I just don't know how):

If dumps stay in the holding disk, I don't know which storage they would 
go to via amflush.


I start

amflush config -o storage=storage1

and then see in amstatus that all DLEs would go to storage2.

amflush then stops, OK, and I can start over, but sometimes I'd like to 
check in advance, because there are some small DLEs for one storage and 
many DLEs for the other, etc.


Is there a way to check that without starting amflush?


Re: Multiple storage-definitions, generating archive tapes

2022-08-24 Thread Stefan G. Weichinger

On 23.08.22 at 23:16, Exuvo wrote:
I use two separate backup configurations. One for weekly backups and one 
for yearly archival.
The weekly one I start from crontab and the archival one I start 
manually with "sudo -u amanda amdump archive".
Each configuration has its own storage definition, with tape names 
starting with R or A (e.g. ^A[0-9]{5}$ for the archival set).
They share a holding disk, but I always flush it, so it is only used for 
slow DLEs, to avoid the drive starting and stopping a lot.


Is that what you are trying to do, or did I read incorrectly?


This is what I *had* before.

What I'm trying now is to have both configs in one single amanda.conf.

So I use two storage definition blocks, two tape pools, etc ... all in 
one amanda.conf.


My goal is to avoid reading/compressing/encrypting all the data twice 
every week or so.


In this specific case I want to avoid a ~26-hour amdump run filling 4 
tapes every weekend: it sometimes has to dump ALL the DLEs again, even 
though that already happened during the week. And even if I decide to 
let it do fresh FULLs on the weekend, I would prefer to have it all in 
ONE single configuration.


(Some parts of) the archive tapes should be generated/prepared while 
doing the normal daily backups.


At 2 sites that looks good already: there I use physical and virtual 
tapes as the 2 storages. Amanda is able to write to both storages in 
parallel, so I get tapes in both storages filled in every run (and the 
vtapes collect only the FULLs).
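
At those 2 sites the relevant global amanda.conf line is simply both 
storages at once, something like this (a sketch; the storage names here 
are placeholders):

storage "daily" "archive"   # taper writes each run to both storages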


At one site it's more complicated, because the 2 storages basically just 
point to separate (groups of) slots in one physical tape changer. So I 
can't write to both storages in parallel: there is only one tape drive.


So I'm trying to come up with a schedule like:

* run the config with "-o storage=daily" every Mon-Fri: do incrementals 
and fulls mixed, to match the "daily" policy


* let that storage-config keep (some) fulls in the holding disks: they 
can be flushed by the storage-config "archive" on the weekend


* weekends: "-o storage=archive": let the config clean up the holding 
disk, plus let it do any missing fulls in the weekend runs (DLEs which 
are too big, etc); see the crontab sketch below
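
In crontab terms the plan would look something like this (a sketch; the 
config name and run times are placeholders):

# Mon-Fri evening: daily storage; Saturday evening: archive storage
0 21 * * 1-5  /usr/sbin/amdump config -o storage=daily
0 21 * * 6    /usr/sbin/amdump config -o storage=archive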


I am sure Amanda is capable of doing that, and I get closer to getting 
it right with every run now.


The docs don't tell us much about the possibilities of that newer config 
syntax; it seems to me that all this was added before the change of 
ownership and before development of the Community Edition stalled.


Back then Jean-Louis Martineau gave some tips on how to use it (in some 
ML threads), but I never found any real documentation or examples.


OK, the sections and parameters are documented in the man pages, but 
there is no HOWTO, AFAIK.


Re: Multiple storage-definitions, generating archive tapes

2022-08-23 Thread Exuvo

I use two separate backup configurations. One for weekly backups and one for 
yearly archival.
The weekly one I start from crontab and the archival one I start manually 
with "sudo -u amanda amdump archive".
Each configuration has its own storage definition, with tape names starting 
with R or A (e.g. ^A[0-9]{5}$ for the archival set).
They share a holding disk, but I always flush it, so it is only used for 
slow DLEs, to avoid the drive starting and stopping a lot.
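
The tape-name split is just the labelstr of each config, roughly like this 
(from memory, so treat it as a sketch; the pattern for the weekly R set is 
assumed by analogy):

labelstr "^R[0-9]{5}$"   # weekly config
labelstr "^A[0-9]{5}$"   # archival config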

Is that what you are trying to do, or did I read incorrectly?

Anton "exuvo" Olsson
   ex...@exuvo.se

On 2022-08-23 10:15, Stefan G. Weichinger wrote:

Does no one else use multiple storage definitions?

[...]

All this gets quite complicated quickly, at least for me ;-)

[...]

Would be great to discuss this with others, thanks.




Re: Multiple storage-definitions, generating archive tapes

2022-08-23 Thread Stefan G. Weichinger

On 10.08.22 at 08:52, Stefan G. Weichinger wrote:


What I'm trying:

storage1 should leave the level-0 backups in the holding disk after 
writing to tapes in pool1.

storage2 should be allowed to remove them once they are written to the 
tapes in pool2.


Does no one else use multiple storage definitions?

I have it at 3 sites now, with various configs.

The main one is the one with the "split tape changer": 4 tapes for 
daily, 4 tapes for archive.


The current plan:

* storage1 (daily) uses

runtapes 1
dumpselection ALL ALL

and runs Monday to Friday

-> do incrementals and fulls mixed, use only 1 tape/day

* storage2 (archive) uses

runtapes 4
dumpselection ALL FULL

and runs on Saturday (or Sunday)

-> only write FULLs to the tapes, use 4 tapes to get all DLEs onto one 
set of tapes (see the config sketch below)
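
Put together as storage blocks, the plan would look roughly like this (a 
sketch, not my actual config; the changer and pool names are placeholders):

define storage "daily" {
    tpchanger "changer-daily"      # placeholder: e.g. slots 1-4 of the library
    tapepool "daily"
    runtapes 1
    dumpselection ALL ALL          # incrementals and fulls mixed
}

define storage "archive" {
    tpchanger "changer-archive"    # placeholder: e.g. slots 5-8
    tapepool "archive"
    runtapes 4
    dumpselection ALL FULL         # fulls only
}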


-

I can't get all fulls into the holding disks, so I have to use 
"holdingdisk never" for some DLEs (there are Veeam .vbk files on one LVM 
volume, and the holding disk is another LVM volume in the same VG -> no 
sense in copying that anyway).
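
For completeness, that exception is just a dumptype, roughly like this (a 
sketch; the dumptype name is a placeholder, and "global" stands for 
whatever base dumptype the config inherits from):

define dumptype veeam-nohold {
    global              # assumed site-wide base dumptype
    holdingdisk never   # dump straight to tape, skip the holding disk
}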


What I'm trying to come up with:

how to trigger fulls on the weekend?

I plan to use "amadmin archive force *" before starting "amdump archive 
-o storage=archive" on weekends.


OK?
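
I.e., something like this (untested sketch; "config" is a placeholder for 
the real configuration name):

#!/bin/sh
# weekend run: force level 0 for every DLE, then dump to the archive storage
amadmin config force '*'
amdump config -o storage=archive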

Some fulls could/should be collected in the holding disk by the daily 
backup runs. This could be achieved by using the right *-threshold 
values, I assume.


All this gets quite complicated quickly, at least for me ;-)

Maybe I'm overlooking something, maybe I don't yet fully understand some 
parts here.


Would be great to discuss this with others, thanks.




Re: Multiple storage-definitions, generating archive tapes

2022-08-10 Thread Stefan G. Weichinger



(resent with the correct ML address)

On 09.08.22 at 16:50, Jose M Calhariz wrote:

On Mon, Aug 08, 2022 at 04:34:11PM +0200, Stefan G. Weichinger wrote:

On 04.08.22 at 10:08, Stefan G. Weichinger wrote:
(...)

combined with

flush-threshold-dumped  200 # (or more)
flush-threshold-scheduled   200 # (or more)
taperflush  200


My experience is that these options do not work as documented.  In my 
case there is an autoflush before the holding disk has enough files to 
fill a tape, on a setup that produces less data per amdump than the size 
of a physical tape.


I never fully understood, and therefore never trusted, these options ;-)

What I'm trying:

storage1 should leave the level-0 backups in the holding disk after 
writing to tapes in pool1.

storage2 should be allowed to remove them once they are written to the 
tapes in pool2.

Maybe it's too complicated to define 2 policies for the 2 storages; 
that's what I'm trying to find out currently.


I'm flushing stuff to storage2 right now and it seems to work as intended.

I can't confirm your observation, though; I would have to test that in 
detail.

thanks!


Re: Multiple storage-definitions, generating archive tapes

2022-08-09 Thread Jose M Calhariz
On Mon, Aug 08, 2022 at 04:34:11PM +0200, Stefan G. Weichinger wrote:
> On 04.08.22 at 10:08, Stefan G. Weichinger wrote:
> (...)
> 
> combined with
> 
> flush-threshold-dumped  200 # (or more)
> flush-threshold-scheduled   200 # (or more)
> taperflush  200

My experience is that these options do not work as documented.  In my
case there is an autoflush before the holding disk has enough files to
fill a tape, on a setup that produces less data per amdump than the size
of a physical tape.



> autoflush yes
> 
> (...)

Kind regards
Jose M Calhariz


-- 

Science and technology multiply around us. To a large extent they 
dictate the languages in which we speak and think. Either we use that 
language, or we remain mute.

--J.G. Ballard




Re: Multiple storage-definitions, generating archive tapes

2022-08-08 Thread Stefan G. Weichinger

On 04.08.22 at 10:08, Stefan G. Weichinger wrote:


1) I would have to "split" a physical tape changer into 2 logical changers:

12 tapes for "daily" runs (= storage 1)
12 tapes for "full-only" runs (= storage 2)

(How) would Amanda handle that? As there is only one tape drive, it 
would have to keep everything in the holding disk and write it twice, 
sequentially, to each tape (set).


I am working on 2 sites using multiple storages. One already runs OK, 
using 2 changers in parallel.


The second one only has one tape library.

I define one changer using slots 1-4 (storage1) and the 2nd changer with 
slots 5-8 (storage2).


Both changers use the same and only tape drive ... that's why I need 
sequential processing.
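
The two changer definitions look roughly like this (a sketch; the robot 
and drive device paths are placeholders for the real ones):

define changer "changer-daily" {
    tpchanger "chg-robot:/dev/sg3"              # placeholder robot device
    property "tape-device" "0=tape:/dev/nst0"   # the one shared drive
    property "use-slots" "1-4"
}

define changer "changer-archive" {
    tpchanger "chg-robot:/dev/sg3"
    property "tape-device" "0=tape:/dev/nst0"
    property "use-slots" "5-8"
}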


I am unsure how to use that correctly.

My approach so far (not yet fully tested):

# daily cronjob

/usr/sbin/amdump config -o storage=storage1

combined with

flush-threshold-dumped  200 # (or more)
flush-threshold-scheduled   200 # (or more)
taperflush  200
autoflush yes

this should keep the holding disk files.
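
To spell out my reading of the man page with numbers (so treat this as my 
interpretation; assume, say, a 400 GB tape): flush-threshold-dumped 200 
means the taper does not start writing until at least 800 GB (200% of one 
tape) of dumped data sits in the holding disk; flush-threshold-scheduled 
200 is the same test against dumped plus still-scheduled data; and 
taperflush 200 means that at the end of the run anything below 800 GB is 
left in the holding disk rather than flushed. So, as far as I understand 
it, dumps should pile up in the holding disk instead of being flushed.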

Now if I want to write these to storage2: do I run amdump or amflush?

(assuming I use "-o storage=storage2")

That second storage should be allowed to clear the written DLEs from the 
holding disk.
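
Something like this, maybe (untested sketch; if I read the man page 
right, -b makes amflush run in batch mode without the interactive prompt):

# weekend: flush the held dumps to storage2's tapes
/usr/sbin/amflush -b config -o storage=storage2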