Re: Amvault hangs while trying to vault backups from holding disk.

2021-10-11 Thread Winston Sorfleet
I do the same as you for the amvault command-line invocation, i.e.
--latest-fulls --dest-storage.  However, I am vaulting from the vtl
directories only, not the holding disk.  Without some details on your
amanda.conf I don't know if that's part of the problem, but you appear
never to get as far as loading a (vtl) slot to read /from/.

My output goes like this:


Sat Oct 09 13:00:43.952721726 2021: pid 4031574: thd-0x558e5a458a00:
amvault: Amanda::Taper::Scan::traditional stage 1: search for oldest
reusable volume
Sat Oct 09 13:00:43.957348285 2021: pid 4031574: thd-0x558e5a458a00:
amvault: Amanda::Taper::Scan::traditional oldest reusable volume is
'Vault-1'
Sat Oct 09 13:00:43.957795520 2021: pid 4031574: thd-0x558e5a458a00:
amvault: Amanda::Taper::Scan::traditional changer is not
fast-searchable; skipping to stage 2
Sat Oct 09 13:00:43.958013506 2021: pid 4031574: thd-0x558e5a458a00:
amvault: Amanda::Taper::Scan::traditional stage 2: scan for any reusable
volume
Sat Oct 09 13:00:43.958171915 2021: pid 4031574: thd-0x558e5a458a00:
amvault: warning: "/dev/nst0" uses deprecated device naming convention;
using "tape:/dev/nst0" instead.

Sat Oct 09 13:00:43.960571475 2021: pid 4031574: thd-0x558e5a458a00:
amvault: Device is in fixed block size of 32768
Sat Oct 09 13:00:43.965927459 2021: pid 4031574: thd-0x558e5a458a00:
amvault: Slot 1 with label Vault-7 is usable
Sat Oct 09 13:00:43.966108460 2021: pid 4031574: thd-0x558e5a458a00:
amvault: Amanda::Taper::Scan::traditional result: 'Vault-7' on /dev/nst0
slot 1, mode 2
Sat Oct 09 13:00:43.966999371 2021: pid 4031574: thd-0x558e5a458a00:
amvault: Amanda::Recovery::Clerk: loading volume 'vtl1'
Sat Oct 09 13:00:43.96703 2021: pid 4031574: thd-0x558e5a458a00:
amvault: find_volume labeled 'vtl1'
Sat Oct 09 13:00:44.591343047 2021: pid 4031574: thd-0x558e5a458a00:
amvault: parse_inventory: load slot 1 with label 'vtl1'
Sat Oct 09 13:00:44.591609582 2021: pid 4031574: thd-0x558e5a458a00:
amvault:
/usr/lib/x86_64-linux-gnu/amanda/perl/Amanda/Recovery/Scan.pm:307:info:120
slot 1
Sat Oct 09 13:00:44.597276267 2021: pid 4031574: thd-0x558e5a458a00:
amvault: dir_name: /amandatapes/slot1/
Sat Oct 09 13:00:44.656621198 2021: pid 4031574: thd-0x558e5a458a00:
amvault:
/usr/lib/x86_64-linux-gnu/amanda/perl/Amanda/Recovery/Scan.pm:459:info:121
vtl1
Sat Oct 09 13:00:44.664182154 2021: pid 4031574: thd-0x558e5a458a00:
amvault: Amanda::Recovery::Clerk: successfully located first part for
recovery
Sat Oct 09 13:00:44.664832388 2021: pid 4031574: thd-0x558e5a458a00:
amvault: Amanda::Taper::Scribe preparing to write, part size 0, using
LEOM detection (no caching) (splitter)  (LEOM supported)
Sat Oct 09 13:00:44.665859394 2021: pid 4031574: thd-0x558e5a458a00:
amvault: Starting  ->
)>



On 2021-10-11 6:51 a.m., Rami Lehti wrote:
> Hi all,
>
> I have a setup where I have a holding disk and vtapes. Then I try to use
> amvault to copy the latest full backups to tape.
> This works if I use amvault's --fulls-only with --src-storage parameter
> and use the vtapes as the source.
> But if I try to vault backups that are still on the holding disk by
> using --latest-fulls, amvault tries to copy the first full to tape but
> hangs indefinitely.
>
> The full redacted command line is
> sudo -u backup amvault --dest-storage tapelibrary --latest-fulls backupset
>
> Here's the relevant part of the redacted log file.
>
> ma loka 11 09:23:13.626028751 2021: pid 1126324: thd-0x562b5502a400:
> amvault: Amanda::Taper::Scan::traditional result: 'redacted-vault-1014'
> on tape:/dev/nst0 slot 8, mode 2
> ma loka 11 09:23:13.627741696 2021: pid 1126324: thd-0x562b5502a400:
> amvault:
> /usr/lib/x86_64-linux-gnu/amanda/perl/Amanda/Vault.pm:1196:info:2500017
> Reading '/backup/amanda/holding/20211004180502/._.0': FILE:
> date 20211004180502 host  disk / lev 0 comp .gz program
> /bin/tar crypt enc client_encrypt /usr/sbin/amcrypt-ossl
> client_decrypt_option -d
> ma loka 11 09:23:13.627958672 2021: pid 1126324: thd-0x562b5502a400:
> amvault: Amanda::Recovery::Clerk: successfully located holding file for
> recovery
> ma loka 11 09:23:13.628001640 2021: pid 1126324: thd-0x562b5502a400:
> amvault: start_recovery called
> ma loka 11 09:23:13.628634814 2021: pid 1126324: thd-0x562b5502a400:
> amvault: Amanda::Taper::Scribe preparing to write, part size 0, using
> LEOM detection (no caching) (splitter)  (LEOM supported)
> ma loka 11 09:23:13.639842536 2021: pid 1126324: thd-0x562b5502a400:
> amvault: Starting  ( ->
> )>
> ma loka 11 09:23:13.639865559 2021: pid 1126324: thd-0x562b5502a400:
> amvault: Final linkage:  -(MEM_RING)->
> 
> ma loka 11 09:23:13.639986931 2021: pid 1126324: thd-0x562b5502a400:
> amvault: Amanda::Recovery::Clerk: starting recovery
> ma loka 11 09:23:13.641060521 2021: pid 1126324: thd-0x562b5502a400:
> amvault: start_recovery called
>
> And then nothing.
>
> Is this a bug or am I doing something wrong?
>
> Kind regards,
> Rami Lehti
>



Amvault hangs while trying to vault backups from holding disk.

2021-10-11 Thread Rami Lehti
Hi all,

I have a setup where I have a holding disk and vtapes. Then I try to use
amvault to copy the latest full backups to tape.
This works if I use amvault's --fulls-only with --src-storage parameter
and use the vtapes as the source.
But if I try to vault backups that are still on the holding disk by
using --latest-fulls, amvault tries to copy the first full to tape but
hangs indefinitely.

The full redacted command line is
sudo -u backup amvault --dest-storage tapelibrary --latest-fulls backupset
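
For reference, a sketch of the working variant with the vtapes as the
source ("vtapes" here is a placeholder for the actual source storage name):

sudo -u backup amvault --fulls-only --src-storage vtapes --dest-storage tapelibrary backupset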

Here's the relevant part of the redacted log file.

ma loka 11 09:23:13.626028751 2021: pid 1126324: thd-0x562b5502a400:
amvault: Amanda::Taper::Scan::traditional result: 'redacted-vault-1014'
on tape:/dev/nst0 slot 8, mode 2
ma loka 11 09:23:13.627741696 2021: pid 1126324: thd-0x562b5502a400:
amvault:
/usr/lib/x86_64-linux-gnu/amanda/perl/Amanda/Vault.pm:1196:info:2500017
Reading '/backup/amanda/holding/20211004180502/._.0': FILE:
date 20211004180502 host  disk / lev 0 comp .gz program
/bin/tar crypt enc client_encrypt /usr/sbin/amcrypt-ossl
client_decrypt_option -d
ma loka 11 09:23:13.627958672 2021: pid 1126324: thd-0x562b5502a400:
amvault: Amanda::Recovery::Clerk: successfully located holding file for
recovery
ma loka 11 09:23:13.628001640 2021: pid 1126324: thd-0x562b5502a400:
amvault: start_recovery called
ma loka 11 09:23:13.628634814 2021: pid 1126324: thd-0x562b5502a400:
amvault: Amanda::Taper::Scribe preparing to write, part size 0, using
LEOM detection (no caching) (splitter)  (LEOM supported)
ma loka 11 09:23:13.639842536 2021: pid 1126324: thd-0x562b5502a400:
amvault: Starting  ->
)>
ma loka 11 09:23:13.639865559 2021: pid 1126324: thd-0x562b5502a400:
amvault: Final linkage:  -(MEM_RING)->

ma loka 11 09:23:13.639986931 2021: pid 1126324: thd-0x562b5502a400:
amvault: Amanda::Recovery::Clerk: starting recovery
ma loka 11 09:23:13.641060521 2021: pid 1126324: thd-0x562b5502a400:
amvault: start_recovery called

And then nothing.

Is this a bug or am I doing something wrong?

Kind regards,
Rami Lehti


Re: can't kill a non-numeric process ID -- try "use" option for holding disk

2020-01-07 Thread Nathan Stratton Treadway
On Mon, Jan 06, 2020 at 16:13:25 -0500, Chris Hoogendyk wrote:
> I had two jobs running, one started Saturday evening and one started
> Sunday evening. Both holding disks were 100% full. I ran amstatus
> and found that the "current" run was flushing a reasonably large DLE

I was curious whether you have a "use" option in place for your holding-disk
definitions.

I don't think it would resolve your underlying/original problem (which I
guess is that the taper process dies or something), but it seems like
some of your "cleanup" issues would be avoided if the holding-disk
filesystems didn't reach 100% usage, and you would thus avoid the
empty-"pid"-file situation.

I'm not sure how precisely Amanda is able to control the actual usage as
data streams in from the clients, but if you don't already have a "use"
option in your holding-disk definitions, you might try adding "use -1 gb"
or something like that to see if that at least prevents the filesystems
from getting 100% full when this situation hits.  (Presumably you
don't fill up the holding-disk filesystems during a normal run; if you
do come close, you might need "use -200 mb" instead, or whatever.)
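
As a sketch, a holdingdisk definition with that option would look
something like the following (the path and sizes here are just
placeholders for your own values):

holdingdisk hd1 {
    directory "/dumps"
    # a negative "use" value means: use everything except this much,
    # i.e. always leave 1 GB free instead of filling to 100%
    use -1 gb
}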

Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: holding disk too small? -- holding disk RAID configuration

2019-12-25 Thread Gene Heskett
On Wednesday 25 December 2019 19:33:04 Jon LaBadie wrote:

> On Mon, Dec 23, 2019 at 11:51:11PM -0500, Gene Heskett wrote:
> > On Monday 23 December 2019 21:16:26 Nathan Stratton Treadway wrote:
> > > On Sun, Dec 22, 2019 at 11:47:21 -0500, Gene Heskett wrote:
> > > > The first, /dev/sda contains the current operating system. This
> > > > includes /usr/dumps as a holding disk area.
>
> ...
>
> > Sounds good, so I'll try it.
>
> If the sda DLE(s) are small enough to go direct to "tape",
> define all holdings, but run sda DLEs with "holdingdisk no".
>

Some are rather gargantuan, with multiple ISOs etc., so moving the 
holding disk to an otherwise unused spindle makes the best situation by 
my reasoning.  And backup times the last 2 nights have been cut 
drastically. The question then is whether it will work that well for 2 
weeks, or a month.

> > Merry Christmas everybody.
>
> mega-dittos!

Same here, as I'm celebrating yet another instance of making the guy with 
the scythe blink. Twice now. But my non-OEM parts list is beginning to 
read like the Six Million Dollar Man. But in the middle of all that work 
in the cath-lab at Ruby in Morgantown, my driver's license expired, so I 
need to go get that fixed tomorrow. A week past a new aortic valve in my 
ticker, I feel like I ought to be good for another decade.  Great IOW.

Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>


Re: holding disk too small? -- holding disk RAID configuration

2019-12-25 Thread Jon LaBadie
On Mon, Dec 23, 2019 at 11:51:11PM -0500, Gene Heskett wrote:
> On Monday 23 December 2019 21:16:26 Nathan Stratton Treadway wrote:
> 
> > On Sun, Dec 22, 2019 at 11:47:21 -0500, Gene Heskett wrote:
> > > The first, /dev/sda contains the current operating system. This
> > > includes /usr/dumps as a holding disk area.
> > >
...
> 
> Sounds good, so I'll try it.
> 
If the sda DLE(s) are small enough to go direct to "tape",
define all holdings, but run sda DLEs with "holdingdisk no".
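
A minimal sketch of that arrangement (host, DLE, and dumptype names are
placeholders; "comp-user" stands in for whatever base dumptype is already
in use):

# amanda.conf
define dumptype comp-user-no-hold {
    comp-user            # inherit the existing dumptype
    holdingdisk no       # small sda DLEs go straight to "tape"
}

# disklist
myhost  /        comp-user-no-hold
myhost  /home    comp-user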


> Merry Christmas everybody.

mega-dittos!

-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)


Re: holding disk too small? -- bumpmult

2019-12-24 Thread Gene Heskett
On Tuesday 24 December 2019 10:40:39 Nathan Stratton Treadway wrote:

> On Mon, Dec 23, 2019 at 23:51:11 -0500, Gene Heskett wrote:
> > Sounds good, so I'll try it.  Also, where is the best explanation
> > for "bumpmult"? I don't seem to be getting the results I expect.
>
> I'm only aware of these parameters being explained in the amanda.conf
> man page
>
> However, did you find the "amadmin ... bumpsize" command?  You can use
> it to check the actual effect of the bump* parameters in the config
> file, which perhaps will help you get a sense of how they interrelate:
>
> =
> # su backup -c "amadmin TestBackup bumpsize"
> Current bump parameters:
>   bumppercent  20 %  - minimum savings (threshold) to bump level 1 -> 2
>   bumpdays      1    - minimum days at each level
>   bumpmult      4    - threshold = disk_size * bumppercent * bumpmult**(level-1)
>
>   Bump -> To  Threshold
>     1  ->  2    20.00 %
>     2  ->  3    80.00 %
>     3  ->  4   100.00 %
>     4  ->  5   100.00 %
>     5  ->  6   100.00 %
>     6  ->  7   100.00 %
>     7  ->  8   100.00 %
>     8  ->  9   100.00 %
> =
>
>
>   Nathan

No, I wasn't aware of this tool, and it showed me why I wasn't getting 
the promotions I expected. Now I think I should see better results. Last 
night's run took something less than an hour, whereas the night before 
had many more compressed DLEs and took 4:45 to complete. The mix of 
compressed vs. "why waste the time trying" straight copies is confusing 
the issue, but I don't recall a just-over-40-minute run in recent history 
either.  We'll let this "settle" for a couple weeks and see.  Thanks, and 
have a Merry Christmas, Nathan.
> -- 
> Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
> Ray Ontko & Co.  -  Software consulting services  -  http://www.ontko.com/
>  GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
>  Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239



Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 


Re: holding disk too small? -- bumpmult

2019-12-24 Thread Nathan Stratton Treadway
On Mon, Dec 23, 2019 at 23:51:11 -0500, Gene Heskett wrote:
> Sounds good, so I'll try it.  Also, where is the best explanation 
> for "bumpmult"? I don't seem to be getting the results I expect.

I'm only aware of these parameters being explained in the amanda.conf man
page.

However, did you find the "amadmin ... bumpsize" command?  You can use
it to check the actual effect of the bump* parameters in the config
file, which perhaps will help you get a sense of how they interrelate:

=
# su backup -c "amadmin TestBackup bumpsize" 
Current bump parameters:
  bumppercent  20 %  - minimum savings (threshold) to bump level 1 -> 2
  bumpdays      1    - minimum days at each level
  bumpmult      4    - threshold = disk_size * bumppercent * bumpmult**(level-1)

  Bump -> To  Threshold
    1  ->  2    20.00 %
    2  ->  3    80.00 %
    3  ->  4   100.00 %
    4  ->  5   100.00 %
    5  ->  6   100.00 %
    6  ->  7   100.00 %
    7  ->  8   100.00 %
    8  ->  9   100.00 %
=
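
For reference, the amanda.conf lines behind that output would be roughly
(a sketch showing only the bump* parameters):

bumppercent 20    # minimum savings to bump from level 1 to 2
bumpdays    1     # minimum days at each level before bumping again
bumpmult    4     # each further level multiplies the threshold by 4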


Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: holding disk too small? -- holding disk RAID configuration

2019-12-23 Thread Gene Heskett
On Monday 23 December 2019 23:51:11 Gene Heskett wrote:

> On Monday 23 December 2019 21:16:26 Nathan Stratton Treadway wrote:
> > On Sun, Dec 22, 2019 at 11:47:21 -0500, Gene Heskett wrote:
> > > The first, /dev/sda contains the current operating system. This
> > > includes /usr/dumps as a holding disk area.
> > >
> > > The next box of rust, /dev/sdb, is the previous os, kept in case I
> > > need to go get something I forgot to copy over when I first made
> > > the present install. It also contains this /user/dumps directory
> > > but currently unused as it normally isn't mounted.
> > >
> > > Wash, rinse and repeat for /dev/sdc. normally not mounted.
> >
> > [...]
> >
> > > What would be the effect of moving from a single holding area on
> > > /dev/sda as it is now operated, compared to mounting and using the
> > > holding directories that already exist on /dev/sdb and /dev/sdc?
> > > Seems to me this
> >
> > Right... mount the sdb and sdc holding-disk filesystems, then add
> > additional holdingdisk{} definitions pointing to those directories
> > to your amanda.conf.
> >
> > > should result in less pounding on the /dev/sda seek mechanism
> > > while backing up /dev/sda as it would move those writes to a
> > > different spindle, with less total time spent seeking overall.
> > >
> > > Am I on the right track?  How does amanda determine which holding
> > > disk area to use for a given DLE in that case?
> >
> > Yes, I think that's the right track.
> >
> > I have not investigated this in depth, but as far as I know Amanda
> > doesn't have a way to notice that a particular DLE is on physical
> > device local-sda and that a particular holding-disk directory is
> > also on that same physical device, and thus choose to use a
> > different holding disk for that particular DLE.  (It does attempt to
> > spread out temporary files across the various holding-disk
> > directories -- it just presumably can't take into account the
> > > physical device origin of a particular DLE when deciding where to
> > > send that DLE's temporary file.)
> >
> > So if you left your existing holding-disk definition as well as
> > adding the ones for sdb and sdc, about one third of the time
> > (theoretically) Amanda would end up using sda for the holding disk
> > for the
> > os-files-on-sda's DLE, and you'd end up with some thrashing.  As far
> > > as I know, the only way to completely avoid that is to remove the
> > holdingdisk section pointing to sda from the config and use only the
> > other two.
> >
> > However, as long as you are using more than two dumpers in your
> > config, I'm pretty sure that having more than two physical drives in
> > use for holding disks will still come out ahead, because there will
> > also be some thrashing between the holding-disk files for different
> > DLEs that are being backed up in parallel.  So unless the server's
> > sda DLE was a huge portion of the overall data being backed up
> > across your entire disklist, I'd guess that the occasional thrashing
> > on sda when backing up that DLE is a price worth paying to have the
> > holdingdisk activity spread across as many physical drives as
> > possible.
> >
> > (Of course it wouldn't be a bad idea to try it for a dumpcycle with
> > three holding-disk drives and then comment out the entry for the
> > holding disk on sda and try that for a few runs at least and see how
> > the performance compares in reality on your actual installation...)
> >
> >
> > Nathan
>
> Sounds good, so I'll try it.

Except when I mounted the sdc, it turned out to be the old 1T 
for /amandatapes, and it's too close to launch time to go through all the 
formatting. So we'll try with 1 holding disk, removing /dev/sda.
 
> Also, where is the best explanation 
> for "bumpmult"? I don't seem to be getting the results I expect.
>
> Merry Christmas everybody.
>
> > -- 
> > Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
> > Ray Ontko & Co.  -  Software consulting services  -  http://www.ontko.com/
> >  GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
> >  Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239
>
> Copyright 2019 by Maurice E. Heskett
> Cheers, Gene Heskett



Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>


Re: holding disk too small? -- holding disk RAID configuration

2019-12-23 Thread Gene Heskett
On Monday 23 December 2019 21:16:26 Nathan Stratton Treadway wrote:

> On Sun, Dec 22, 2019 at 11:47:21 -0500, Gene Heskett wrote:
> > The first, /dev/sda contains the current operating system. This
> > includes /usr/dumps as a holding disk area.
> >
> > The next box of rust, /dev/sdb, is the previous os, kept in case I
> > need to go get something I forgot to copy over when I first made the
> > present install. It also contains this /user/dumps directory but
> > currently unused as it normally isn't mounted.
> >
> > Wash, rinse and repeat for /dev/sdc. normally not mounted.
>
> [...]
>
> > What would be the effect of moving from a single holding area on
> > /dev/sda as it is now operated, compared to mounting and using the
> > holding directories that already exist on /dev/sdb and /dev/sdc?
> > Seems to me this
>
> Right... mount the sdb and sdc holding-disk filesystems, then add
> additional holdingdisk{} definitions pointing to those directories to
> your amanda.conf.
>
> > should result in less pounding on the /dev/sda seek mechanism while
> > backing up /dev/sda as it would move those writes to a different
> > spindle, with less total time spent seeking overall.
> >
> > Am I on the right track?  How does amanda determine which holding
> > disk area to use for a given DLE in that case?
>
> Yes, I think that's the right track.
>
> I have not investigated this in depth, but as far as I know Amanda
> doesn't have a way to notice that a particular DLE is on physical
> device local-sda and that a particular holding-disk directory is also
> on that same physical device, and thus choose to use a different
> holding disk for that particular DLE.  (It does attempt to spread out
> temporary files across the various holding-disk directories -- it just
> presumably can't take into account the physical device origin of a
> particular DLE when deciding where to send that DLE's temporary file.)
>
> So if you left your existing holding-disk definition as well as adding
> the ones for sdb and sdc, about one third of the time (theoretically)
> Amanda would end up using sda for the holding disk for the
> os-files-on-sda's DLE, and you'd end up with some thrashing.  As far
> as I know, the only way to completely avoid that is to to remove the
> holdingdisk section pointing to sda from the config and use only the
> other two.
>
> However, as long as you are using more than two dumpers in your
> config, I'm pretty sure that having more than two physical drives in
> use for holding disks will still come out ahead, because there will
> also be some thrashing between the holding-disk files for different
> DLEs that are being backed up in parallel.  So unless the server's sda
> DLE was a huge portion of the overall data being backed up across your
> entire disklist, I'd guess that the occasional thrashing on sda when
> backing up that DLE is a price worth paying to have the holdingdisk
> activity spread across as many physical drives as possible.
>
> (Of course it wouldn't be a bad idea to try it for a dumpcycle with
> three holding-disk drives and then comment out the entry for the
> holding disk on sda and try that for a few runs at least and see how
> the performance compares in reality on your actual installation...)
>
>
>   Nathan

Sounds good, so I'll try it.  Also, where is the best explanation 
for "bumpmult"? I don't seem to be getting the results I expect.

Merry Christmas everybody.
>
> -- 
> Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
> Ray Ontko & Co.  -  Software consulting services  -  http://www.ontko.com/
>  GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
>  Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239



Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>


Re: holding disk too small? -- holding disk RAID configuration

2019-12-23 Thread Nathan Stratton Treadway
On Sun, Dec 22, 2019 at 11:47:21 -0500, Gene Heskett wrote:
> The first, /dev/sda contains the current operating system. This 
> includes /usr/dumps as a holding disk area.
> 
> The next box of rust, /dev/sdb, is the previous os, kept in case I need 
> to go get something I forgot to copy over when I first made the present 
> install. It also contains this /user/dumps directory but currently 
> unused as it normally isn't mounted.
> 
> Wash, rinse and repeat for /dev/sdc. normally not mounted.

[...] 

> What would be the effect of moving from a single holding area on /dev/sda 
> as it is now operated, compared to mounting and using the holding 
> directories that already exist on /dev/sdb and /dev/sdc? Seems to me this 

Right... mount the sdb and sdc holding-disk filesystems, then add
additional holdingdisk{} definitions pointing to those directories to
your amanda.conf.
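
Something along these lines, as a sketch (the sdb/sdc mount points are
placeholders for wherever those filesystems end up mounted):

holdingdisk hd_sda {
    directory "/usr/dumps"        # existing holding area on sda
}
holdingdisk hd_sdb {
    directory "/mnt/sdb/dumps"    # placeholder mount point
}
holdingdisk hd_sdc {
    directory "/mnt/sdc/dumps"    # placeholder mount point
}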

> should result in less pounding on the /dev/sda seek mechanism while 
> backing up /dev/sda as it would move those writes to a different 
> spindle, with less total time spent seeking overall.
> 
> Am I on the right track?  How does amanda determine which holding disk 
> area to use for a given DLE in that case?

Yes, I think that's the right track.

I have not investigated this in depth, but as far as I know Amanda
doesn't have a way to notice that a particular DLE is on physical device
local-sda and that a particular holding-disk directory is also on that
same physical device, and thus choose to use a different holding disk
for that particular DLE.  (It does attempt to spread out temporary files
across the various holding-disk directories -- it just presumably can't
take into account the physical device origin of a particular DLE when
deciding where to send that DLE's temporary file.)

So if you left your existing holding-disk definition as well as adding
the ones for sdb and sdc, about one third of the time (theoretically)
Amanda would end up using sda for the holding disk for the
os-files-on-sda's DLE, and you'd end up with some thrashing.  As far as
I know, the only way to completely avoid that is to remove the
holdingdisk section pointing to sda from the config and use only the
other two.

However, as long as you are using more than two dumpers in your config,
I'm pretty sure that having more than two physical drives in use for
holding disks will still come out ahead, because there will also be some
thrashing between the holding-disk files for different DLEs that are
being backed up in parallel.  So unless the server's sda DLE was a huge
portion of the overall data being backed up across your entire disklist,
I'd guess that the occasional thrashing on sda when backing up that DLE
is a price worth paying to have the holdingdisk activity spread across
as many physical drives as possible.

(Of course it wouldn't be a bad idea to try it for a dumpcycle with
three holding-disk drives and then comment out the entry for the holding
disk on sda and try that for a few runs at least and see how the
performance compares in reality on your actual installation...)


Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: holding disk too small? -- holding disk RAID configuration

2019-12-22 Thread Gene Heskett
On Thursday 05 December 2019 16:16:58 Nathan Stratton Treadway wrote:

> On Tue, Dec 03, 2019 at 15:43:10 +0100, Stefan G. Weichinger wrote:
> > I consider recreating that holding disk array (currently RAID1 of 2
> > disks) as RAID0 ..
>
> Just focusing on this one aspect of your question: assuming the
> filesystem in question doesn't have anything other than the Amanda
> holding-disk area on it, I suspect you would be better off creating
> two separate filesystems, one on each underlying disk, rather than
> making them into a RAID0 array.
>
> Amanda can make use of two separate holding-disk directories in
> parallel, so you can still get twice the total holding disk size
> available in a run (compared to the current RAID1 setup), but Amanda's
> parallel accesses will probably cause less contention on the physical
> device since each filesystem is stored independently on one drive.
>
>
> (Also, if one of the drives fails the other holding disk filesystem
> will still be available, while if you are using RAID0 one drive
> failing will take out the whole array)
>
>   Nathan

I find this an interesting concept, Nathan, and would like to explore it 
further.

In my setup here, serving this machine and 4 others in my machine shop 
menagerie, I have 4 boxes of spinning rust.

The first, /dev/sda contains the current operating system. This 
includes /usr/dumps as a holding disk area.

The next box of rust, /dev/sdb, is the previous os, kept in case I need 
to go get something I forgot to copy over when I first made the present 
install. It also contains this /user/dumps directory but currently 
unused as it normally isn't mounted.

Wash, rinse and repeat for /dev/sdc. normally not mounted.

/dev/sdd is /amandatapes, mounted full time.

(I find keeping a disk spinning results in disks that last 100,000+ hours 
with no increase in error rates. I have a 1T that had 25 bad, 
reallocated sectors the first time I checked it at about 5k hours in 
2006; it still has the same 25 reallocated sectors today at about 100,000 
head-flying hours.)

What would be the effect of moving from a single holding area on /dev/sda 
as it is now operated, compared to mounting and using the holding 
directories that already exist on /dev/sdb and /dev/sdc? Seems to me this 
should result in less pounding on the /dev/sda seek mechanism while 
backing up /dev/sda as it would move those writes to a different 
spindle, with less total time spent seeking overall.

Am I on the right track?  How does amanda determine which holding disk 
area to use for a given DLE in that case?

Thanks.

> -- 
> Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
> Ray Ontko & Co.  -  Software consulting services  -  http://www.ontko.com/
>  GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
>  Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239



Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>


Re: holding disk too small?

2019-12-14 Thread Jon LaBadie
On Tue, Dec 10, 2019 at 03:27:13PM +0100, Stefan G. Weichinger wrote:
> Am 05.12.19 um 21:47 schrieb Stefan G. Weichinger:
> > Am 03.12.19 um 15:43 schrieb Stefan G. Weichinger:
> >>
> >> Another naive question:
> >>
> >> Does the holdingdisk have to be bigger than the size of one tape?
> > 
> > As there were multiple replies to my original posting and as I am way
> > too busy right now: a quick "thanks" to all the people who replied.
> > 
> > So far the setup works. Maybe not optimal, but it works.
> > 
> > ;-)
> > 
> > stay tuned ...
> 
> Now an additional obstacle:
> 
> one DLE (a Veeam Backup Dir, so I don't want to split it via "exclude"
> or so) is larger than (a) one tape and (b) the holding disk.
> 
> DLE = 2.9 TB
> holding disk = 2 TB
> one tape = 2.4 TB (LTO6)
> 
> It seems that the tape device doesn't support LEOM ...
> 
> Amdump dumps the DLE directly to tape, fills it and fails with
> 
> " lev 0  partial taper: No space left on device, splitting not enabled
> "
> 
> I am unsure how to set LEOM within:
> 
> define device lto6_drive {
> tapedev "tape:/dev/nst0"
> #device-property "BLOCK_SIZE" "2048K"
> device-property "LEOM" "false"
> }
> 
> define changer robot {
>   tpchanger "chg-robot:/dev/sg4"
>   #property "tape-device" "0=tape:/dev/nst0"
>   property "tape-device" "0=lto6_drive"
>   property "eject-before-unload" "yes"
>   property "use-slots" "1-8"
> }
> 
> 
> ... makes amcheck happy.
> 
> additional for your checking eyes:
> 
> define tapetype LTO6 {
> comment "Created by amtapetype; compression enabled"
> length 244352 kbytes
> filemark 868 kbytes
> speed 157758 kps
> blocksize 2048 kbytes
> 
>   part_size 100G
>   part_cache_type memory
>   part_cache_max_size 8G # use roughly the amount of free RAM on your 
> system
> }
> 
> 
> We have 32 GB RAM in there so this should work?

Perhaps my lack of using large devices is causing me to
miss something, but I don't see how.

You are writing 100GB "parts" directly to tape.  At some
point, the tape fills while writing one of these parts.
To repeat that part on a second tape, the 100GB of the
failed part must be saved somewhere.  Certainly not in
memory!  Can the holding disk be used to "cache" the
parts?
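
(If it can, the tapetype change would be roughly the following sketch.
I haven't tested this; the cache directory is a placeholder and only
needs room for one 100G part, which the 2TB holding disk has:

define tapetype LTO6 {
    length 244352 kbytes
    filemark 868 kbytes
    speed 157758 kps
    blocksize 2048 kbytes

    part_size 100G
    part_cache_type disk
    part_cache_dir "/path/on/holdingdisk/part-cache"   # placeholder path
}
)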

Are you sure you can't just plug in another 2TB USB drive
as a second holding disk?

BTW, you do have "runtapes" > 1, correct?

Jon
-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)


Re: holding disk too small?

2019-12-10 Thread Debra S Baddorf



> On Dec 10, 2019, at 8:27 AM, Stefan G. Weichinger  wrote:
> 
> Am 05.12.19 um 21:47 schrieb Stefan G. Weichinger:
>> Am 03.12.19 um 15:43 schrieb Stefan G. Weichinger:
>>> 
>>> Another naive question:
>>> 
>>> Does the holdingdisk have to be bigger than the size of one tape?
>> 
>> As there were multiple replies to my original posting and as I am way
>> too busy right now: a quick "thanks" to all the people who replied.
>> 
>> So far the setup works. Maybe not optimal, but it works.
>> 
>> ;-)
>> 
>> stay tuned ...
> 
> Now an additional obstacle:
> 
> one DLE (a Veeam Backup Dir, so I don't want to split it via "exclude"
> or so) is larger than (a) one tape and (b) the holding disk.
> 
> DLE = 2.9 TB
> holding disk = 2 TB
> one tape = 2.4 TB (LTO6)
> 
> It seems that the tape device doesn't support LEOM ...
> 
> Amdump dumps the DLE directly to tape, fills it and fails with
> 
> " lev 0  partial taper: No space left on device, splitting not enabled
> "
> 
> I am unsure how to set LEOM within:
> 
> define device lto6_drive {
>tapedev "tape:/dev/nst0"
>#device-property "BLOCK_SIZE" "2048K"
>device-property "LEOM" "false"
> }
> 
> define changer robot {
>   tpchanger "chg-robot:/dev/sg4"
>   #property "tape-device" "0=tape:/dev/nst0"
>   property "tape-device" "0=lto6_drive"
>   property "eject-before-unload" "yes"
>   property "use-slots" "1-8"
> }
> 
> 
> ... makes amcheck happy.
> 
> additional for your checking eyes:
> 
> define tapetype LTO6 {
>comment "Created by amtapetype; compression enabled"
>length 244352 kbytes
>filemark 868 kbytes
>speed 157758 kps
>blocksize 2048 kbytes
> 
>   part_size 100G
>   part_cache_type memory
>   part_cache_max_size 8G # use roughly the amount of free RAM on your 
> system
> }
> 
> 
> We have 32 GB RAM in there so this should work?


Except that clearly (as you say) it ISN’T working.  I was going to talk
about “splitsize” and “allow-split”, but the comments in the config file
say this all defaults to YES (allow) and not-used (splitsize).  But either
you or amanda needs to split this DLE, since it doesn’t fit onto a tape.
Sounds like amanda will split it by default.  So…..

I don’t have LTO6 (LTO5 here), BUT if it isn’t working as is,
I would set the tape length to 23…… (all the rest) and see if that makes
it work.  If yes, then gradually increase that param until it stops working.

However, I do seem to recall that amanda will keep trying to go further
and further on the tape until it actually reaches the end, and fails.  If
so, changing the tape length won’t help, if amanda is going to keep
testing the ice for itself.  If there is only ONE DLE on the tape, then
failing and trying again is senseless.  I wonder if there is some new
“force split” parameter(s) that might help?  Or look up “chunking”?

Maybe this will stir up ideas in somebody else?

Deb Baddorf
Fermilab







Re: holding disk too small?

2019-12-10 Thread Stefan G. Weichinger
Am 05.12.19 um 21:47 schrieb Stefan G. Weichinger:
> Am 03.12.19 um 15:43 schrieb Stefan G. Weichinger:
>>
>> Another naive question:
>>
>> Does the holdingdisk have to be bigger than the size of one tape?
> 
> As there were multiple replies to my original posting and as I am way
> too busy right now: a quick "thanks" to all the people who replied.
> 
> So far the setup works. Maybe not optimal, but it works.
> 
> ;-)
> 
> stay tuned ...

Now an additional obstacle:

one DLE (a Veeam Backup Dir, so I don't want to split it via "exclude"
or so) is larger than (a) one tape and (b) the holding disk.

DLE = 2.9 TB
holding disk = 2 TB
one tape = 2.4 TB (LTO6)

It seems that the tape device doesn't support LEOM ...

Amdump dumps the DLE directly to tape, fills it and fails with

" lev 0  partial taper: No space left on device, splitting not enabled
"

I am unsure how to set LEOM within:

define device lto6_drive {
tapedev "tape:/dev/nst0"
#device-property "BLOCK_SIZE" "2048K"
device-property "LEOM" "false"
}

define changer robot {
tpchanger "chg-robot:/dev/sg4"
#property "tape-device" "0=tape:/dev/nst0"
property "tape-device" "0=lto6_drive"
property "eject-before-unload" "yes"
property "use-slots" "1-8"
}


... makes amcheck happy.

additional for your checking eyes:

define tapetype LTO6 {
comment "Created by amtapetype; compression enabled"
length 244352 kbytes
filemark 868 kbytes
speed 157758 kps
blocksize 2048 kbytes

part_size 100G
part_cache_type memory
part_cache_max_size 8G # use roughly the amount of free RAM on your 
system
}


We have 32 GB RAM in there so this should work?


Re: holding disk too small?

2019-12-08 Thread Jon LaBadie
On Thu, Dec 05, 2019 at 09:00:12AM -0700, Charles Curley wrote:
> On Thu, 5 Dec 2019 04:43:15 -0500
> Gene Heskett  wrote:
> 
> > > # systemctl status amanda.socket  
> > pi@rpi4:/etc $ sudo systemctl status amanda.socket
> > Unit amanda.socket could not be found.
> 
> Same on Debian 10.2. Also, it appears that no Debian 10.2 package
> provides amanda.service:
> 
> charles@hawk:~$ apt-file search amanda.service
> charles@hawk:~$ 
> 
> So I expect amanda.service is a Fedora-ism.

No, there is no amanda.service.  There is amanda@.service.

If my primitive knowledge of systemd is correct, the "@"
indicates a separate instance is activated each time it is
needed rather than running constantly as a daemon process.
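
For illustration only, a sketch of what such a template unit generally
looks like (this is not the packaged Fedora file; the amandad path and
arguments are borrowed from the Debian xinetd snippet quoted elsewhere
in this thread):

# amanda@.service (illustrative sketch)
[Unit]
Description=Amanda backup services, one instance per connection

[Service]
User=backup
Group=disk
ExecStart=/usr/lib/amanda/amandad -auth=bsdtcp amdump amindexd amidxtaped
StandardInput=socket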

Jon
-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)


Re: keep one DLE in the holding disk

2019-12-08 Thread Jon LaBadie
On Fri, Nov 29, 2019 at 07:29:13PM +0100, Stefan G. Weichinger wrote:
> Am 27.11.19 um 21:22 schrieb Debra S Baddorf:
> > 
> > 
> >> On Nov 27, 2019, at 3:29 AM, Stefan G. Weichinger  wrote:
> >>
> >> There could also be a separate cronjob with "amdump --no-taper" when I
> >> think about it.
> >>
> >> I could run that during the day maybe. This would give me time to run
> >> another script find-ing the latest dump in the holdingdisk etc
> > 
> > Actually, I like this idea a lot.
> 
> Thank you ;-)
> 
> > Create a script like this
> > 
> > amdump —no-taper
> >trigger synch process (or just copy to remote area)
> > 
Speaking of "scripts".  Amanda has the ability to run a
user defined script at many points of a run.  One of those
points is "post-dle-backup".  However, nothing is said about
whether "post...backup" is pre- or post- taping.  If it is
post-taping, I suppose you could just access the vtape,
removing the header block from each tape chunk.
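
A sketch of how such a hook gets wired up, using the stock "script-email"
plugin only to show the mechanism (the script name, address, and dumptype
here are placeholders; a custom plugin would live in Amanda's application
directory instead):

define script post-dle-notify {
    plugin "script-email"
    execute-on post-dle-backup        # fire after each DLE is dumped
    execute-where server
    property "mailto" "backup-admin@example.com"
}

define dumptype comp-user-notify {
    comp-user                         # placeholder base dumptype
    script "post-dle-notify"
}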

Jon

Do people still read signatures?  Did they ever?
I have empirical evidence that no one reads mine.  People
I regularly communicate with via email still ask me for
my address or phone numbers.

Now you know where to send my Christmas card ;-))
-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)


Re: holding disk too small? -- holding disk RAID configuration

2019-12-05 Thread Nathan Stratton Treadway
On Tue, Dec 03, 2019 at 15:43:10 +0100, Stefan G. Weichinger wrote:
> I consider recreating that holding disk array (currently RAID1 of 2
> disks) as RAID0 ..

Just focusing on this one aspect of your question: assuming the
filesystem in question doesn't have anything other than the Amanda
holding-disk area on it, I suspect you would be better off creating two
separate filesystems, one on each underlying disk, rather than making
them into a RAID0 array.

Amanda can make use of two separate holding-disk directories in
parallel, so you can still get twice the total holding disk size
available in a run (compared to the current RAID1 setup), but Amanda's
parallel accesses will probably cause less contention on the physical
device since each filesystem is stored independently on one drive.


(Also, if one of the drives fails the other holding disk filesystem will
still be available, while if you are using RAID0 one drive failing will
take out the whole array)

Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: holding disk too small?

2019-12-05 Thread Stefan G. Weichinger
Am 03.12.19 um 15:43 schrieb Stefan G. Weichinger:
> 
> Another naive question:
> 
> Does the holdingdisk have to be bigger than the size of one tape?

As there were multiple replies to my original posting and as I am way
too busy right now: a quick "thanks" to all the people who replied.

So far the setup works. Maybe not optimal, but it works.

;-)

stay tuned ...


Re: holding disk too small?

2019-12-05 Thread Gene Heskett
On Thursday 05 December 2019 10:50:34 Charles Curley wrote:
And I replied back on the list where this belongs, even if some of it is 
me blowing my own horn.

> On Thu, 5 Dec 2019 00:00:24 -0500
>
> Gene Heskett  wrote:
> > Lesson #2, I just learned today that the raspbian AND debian buster
> > 10.2 versions have NO inetd or xinetd. Ditto for RH.
>
> I don't know where you get that idea, as far as Debian goes.
>
That list is where I got that info, and it mentioned that RH was doing it 
too, so I checked my only buster install, which did not yet belong to my 
amanda setup, and discovered both were missing on my rpi4/buster 10.2 
raspbian install.  But as you saw from my previous post this morning, 
apt now pulls in some BSD stuff which I assume 
installs /etc/xinetd.d/amanda, which itself has a new option I've not 
seen before. It was not there before I had apt install the client stuff.

Because I had played with Debian's buster arm64 installs on both the pi3 
and the pi4, I know for a fact that touching those clients from the 
server crashes the arm64 installs, leaving nothing in the logs.  I 
liked the idea of Debian's arm64 actually using grub to boot instead of 
the u-boot BS, but Debian's amanda versions of the client stuff are 
instant crashers.  Between that and the relatively poor latency 
performance of the arm64 with its bigger stack frame, I reasoned that 
armhf was the install of choice; raspbian was still on armhf, and 
it's running beautifully, dead stable, moving that bigger lathe faster 
and sweeter than the pi3 ever did.  And building its own food on itself.  
The rpi4 has arrived IOW.  The only thing I'd do differently is order the 
4GB model. A 2GB needs close to 3 gigs of swap to build LinuxCNC, but it 
does it just fine. Swap is not on the u-sd card, but on a 120Gig SSD 
plugged into a sata<->usb3 adapter, making it much faster than spinning 
rust...

Since I'm just barely doing email on a machine pulled out of the 
midden heap in the garage, and this boot drive is the boot drive I'll 
install in the new server when the rest of it arrives, I've not gone any 
further until the new system is up and running. With the realtime kernel 
pinned, uptime is now 13 days, and will probably run till the next power 
bump.

Anyway, that's the story and I'm sticking to it. You can download that 
bleeding-edge rpi4 stuff from my web page, but as that's on this drive, 
in this temp machine, it will be like watching paint dry, and it may die 
mid-download if the OOM killer gets it.

Time to go see what I'm fixing us for lunch.

 > root@jhegaala:~# cat /etc/debian_version
> 10.2
> root@jhegaala:~# apt-cache search inetd | grep inetd
> inetutils-inetd - internet super server
> libnl-idiag-3-200 - library for dealing with netlink sockets -
> inetdiag interface openbsd-inetd - OpenBSD Internet Superserver
> puppet-module-puppetlabs-xinetd - Puppet module for xinetd
> reconf-inetd - maintainer script for programmatic updates of
> inetd.conf rinetd - Internet TCP redirection server
> rlinetd - gruesomely over-featured inetd replacement
> update-inetd - inetd configuration file updater
> xinetd - replacement for inetd with many enhancements
> root@jhegaala:~#
>
> Indeed, amanda depends on openbsd-inetd:
>
> root@jhegaala:~# apt show amanda-common | grep inetd
>
> WARNING: apt does not have a stable CLI interface. Use with caution in
> scripts.
>
> Depends: adduser, bsd-mailx | mailx, debconf (>= 0.5) | debconf-2.0,
> openbsd-inetd | inet-superserver, update-inetd, perl (>= 5.28.0-3),
> perlapi-5.28.0, libc6 (>= 2.27), libcurl4 (>= 7.16.2), libglib2.0-0
> (>= 2.41.1), libssl1.1 (>= 1.1.0) root@jhegaala:~#
>
> I believe that we should remove the dependencies on openbsd-inetd |
> inet-superserver and update-inetd, and make those suggested, and
> encourage amanda over SSH, but that's another can of lawyers.



Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 


Re: holding disk too small?

2019-12-05 Thread Charles Curley
On Thu, 5 Dec 2019 04:43:15 -0500
Gene Heskett  wrote:

> > # systemctl status amanda.socket  
> pi@rpi4:/etc $ sudo systemctl status amanda.socket
> Unit amanda.socket could not be found.

Same on Debian 10.2. Also, it appears that no Debian 10.2 package
provides amanda.service:

charles@hawk:~$ apt-file search amanda.service
charles@hawk:~$ 

So I expect amanda.service is a Fedora-ism.


-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/


Re: holding disk too small?

2019-12-05 Thread Charles Curley
On Thu, 5 Dec 2019 00:00:24 -0500
Gene Heskett  wrote:

> Lesson #2, I just learned today that the raspbian AND debian buster
> 10.2 versions have NO inetd or xinetd. Ditto for RH.

I don't know where you get that idea, as far as Debian goes.

root@jhegaala:~# cat /etc/debian_version 
10.2
root@jhegaala:~# apt-cache search inetd | grep inetd
inetutils-inetd - internet super server
libnl-idiag-3-200 - library for dealing with netlink sockets - inetdiag 
interface
openbsd-inetd - OpenBSD Internet Superserver
puppet-module-puppetlabs-xinetd - Puppet module for xinetd
reconf-inetd - maintainer script for programmatic updates of inetd.conf
rinetd - Internet TCP redirection server
rlinetd - gruesomely over-featured inetd replacement
update-inetd - inetd configuration file updater
xinetd - replacement for inetd with many enhancements
root@jhegaala:~# 

Indeed, amanda depends on openbsd-inetd:

root@jhegaala:~# apt show amanda-common | grep inetd

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Depends: adduser, bsd-mailx | mailx, debconf (>= 0.5) | debconf-2.0, 
openbsd-inetd | inet-superserver, update-inetd, perl (>= 5.28.0-3), 
perlapi-5.28.0, libc6 (>= 2.27), libcurl4 (>= 7.16.2), libglib2.0-0 (>= 
2.41.1), libssl1.1 (>= 1.1.0)
root@jhegaala:~# 

I believe that we should remove the dependencies on openbsd-inetd |
inet-superserver and update-inetd, and make those suggested, and
encourage amanda over SSH, but that's another can of lawyers.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/


Re: holding disk too small?

2019-12-05 Thread Gene Heskett
On Thursday 05 December 2019 02:12:52 Uwe Menges wrote:

> On 2019-12-05 06:00, Gene Heskett wrote:
> > Lesson #2, I just learned today that the raspbian AND debian buster
> > 10.2 versions have NO inetd or xinetd. Ditto for RH.
>
> I think that's along with other stuff moving to systemd.
> On Fedora 30, I have
>
> # systemctl status amanda.socket
pi@rpi4:/etc $ sudo systemctl status amanda.socket
Unit amanda.socket could not be found.

> ● amanda.socket - Amanda Activation Socket
>Loaded: loaded (/usr/lib/systemd/system/amanda.socket; enabled;
> vendor preset: disabled)
>Active: active (listening) since Sat 2019-11-30 14:46:46 CET; 4
> days ago Listen: [::]:10080 (Stream)
>  Accepted: 0; Connected: 0;
> Tasks: 0 (limit: 4915)
>Memory: 0B
>CGroup: /system.slice/amanda.socket
>
> Nov 30 14:46:46 lima systemd[1]: Listening on Amanda Activation
> Socket.
>
> # systemctl cat amanda.socket
> # /usr/lib/systemd/system/amanda.socket
> [Unit]
> Description=Amanda Activation Socket
>
> [Socket]
> ListenStream=10080
> Accept=true
>
> [Install]
> WantedBy=sockets.target

So I had apt install the usual suspects since that did nothing to this 
machine.

pi@rpi4:/etc $ sudo apt install amanda-common amanda-client
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  openbsd-inetd tcpd
Suggested packages:
  dump gnuplot smbclient
The following NEW packages will be installed:
  amanda-client amanda-common openbsd-inetd tcpd
0 upgraded, 4 newly installed, 0 to remove and 6 not upgraded.
Need to get 2,363 kB of archives.
After this operation, 9,161 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirror.pit.teraswitch.com/raspbian/raspbian buster/main 
armhf tcpd armhf 7.6.q-28 [21.5 kB]
Get:2 http://mirror.pit.teraswitch.com/raspbian/raspbian buster/main 
armhf openbsd-inetd armhf 0.20160825-4 [34.3 kB]
Get:3 http://mirror.pit.teraswitch.com/raspbian/raspbian buster/main 
armhf amanda-common armhf 1:3.5.1-2+b3 [1,889 kB]
Get:4 http://mirror.pit.teraswitch.com/raspbian/raspbian buster/main 
armhf amanda-client armhf 1:3.5.1-2+b3 [418 kB]
Fetched 2,363 kB in 3s (825 kB/s)
Preconfiguring packages ...
Selecting previously unselected package tcpd.
(Reading database ... 263218 files and directories currently installed.)
Preparing to unpack .../tcpd_7.6.q-28_armhf.deb ...
Unpacking tcpd (7.6.q-28) ...
Selecting previously unselected package openbsd-inetd.
Preparing to unpack .../openbsd-inetd_0.20160825-4_armhf.deb ...
Unpacking openbsd-inetd (0.20160825-4) ...
Selecting previously unselected package amanda-common.
Preparing to unpack .../amanda-common_1%3a3.5.1-2+b3_armhf.deb ...
Unpacking amanda-common (1:3.5.1-2+b3) ...
Selecting previously unselected package amanda-client.
Preparing to unpack .../amanda-client_1%3a3.5.1-2+b3_armhf.deb ...
Unpacking amanda-client (1:3.5.1-2+b3) ...
Setting up tcpd (7.6.q-28) ...
Setting up openbsd-inetd (0.20160825-4) ...
Created symlink /etc/systemd/system/multi-user.target.wants/inetd.service 
→ /lib/systemd/system/inetd.service.
Setting up amanda-common (1:3.5.1-2+b3) ...
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
Adding user `backup' to group `disk' ...
Adding user backup to group disk
Done.
Adding user `backup' to group `tape' ...
Adding user backup to group tape
Done.
Setting up amanda-client (1:3.5.1-2+b3) ...
Processing triggers for man-db (2.8.5-2) ...
Processing triggers for systemd (241-7~deb10u2+rpi1) ...

Looks good, but:

pi@rpi4:/etc $ sudo systemctl status amanda.socket
Unit amanda.socket could not be found.

Clearly, the install is incomplete.  Or is it?  There is now 
an /etc/xinetd.d/ with an amanda file that was not there before. And it 
has arguments I've not seen before:

service amanda
{
disable = no
flags   = IPv4
socket_type = stream
protocol= tcp
wait= no
user= backup
group   = disk
groups  = yes
server  = /usr/lib/amanda/amandad
server_args = -auth=bsdtcp amdump amindexd amidxtaped 
senddiscover
}

That last I've not seen before. And /etc/amandahosts looks incomplete.
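
(For comparison, a typical client-side amandahosts is only a couple of
lines; the server hostname below is a placeholder:

# allow the Amanda server to run dumps here as user "backup"
server.example.com   backup   amdump
# allow amrecover sessions from the server, if used
server.example.com   root     amindexd amidxtaped
)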

The rest of the checking will have to wait till the new server is 
running. Tomorrow (Friday maybe). This one doesn't have the cojones to 
run amanda and kmail at the same time.


>
> Yours, Uwe

Looks like they've at least tried to fix it. But that will be quite a 
heavy load since it has two aux SSD drives attached that contain stuff 
no one else has done (yet).  There's a reason it's called bleeding 
edge...  And I'll sure sleep better if it's backed up.

Thanks Uwe.

Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis

Re: holding disk too small?

2019-12-04 Thread Uwe Menges
On 2019-12-05 06:00, Gene Heskett wrote:
> Lesson #2, I just learned today that the raspbian AND debian buster 10.2
> versions have NO inetd or xinetd. Ditto for RH.

I think that's along with other stuff moving to systemd.
On Fedora 30, I have

# systemctl status amanda.socket
● amanda.socket - Amanda Activation Socket
   Loaded: loaded (/usr/lib/systemd/system/amanda.socket; enabled;
vendor preset: disabled)
   Active: active (listening) since Sat 2019-11-30 14:46:46 CET; 4 days ago
   Listen: [::]:10080 (Stream)
 Accepted: 0; Connected: 0;
Tasks: 0 (limit: 4915)
   Memory: 0B
   CGroup: /system.slice/amanda.socket

Nov 30 14:46:46 lima systemd[1]: Listening on Amanda Activation Socket.

# systemctl cat amanda.socket
# /usr/lib/systemd/system/amanda.socket
[Unit]
Description=Amanda Activation Socket

[Socket]
ListenStream=10080
Accept=true

[Install]
WantedBy=sockets.target


Yours, Uwe



Re: holding disk too small?

2019-12-04 Thread Gene Heskett
On Tuesday 03 December 2019 20:23:04 Olivier wrote:

> "Stefan G. Weichinger"  writes:
> > So far it works but maybe not optimal. I consider recreating that
> > holding disk array (currently RAID1 of 2 disks) as RAID0 ..
>
> Unless your backups are super critical, you may not need RAID 1 for
> holding disk. Also consider that the holding disk puts a lot of mechanical
> stress on the disk: I have seen at least one case where the disk did
> start failing and developing bad blocks in the holding disk space
> while the rest of the disk was OK.
>
> Best regards,
>
> Olivier

Lesson #1: Never ever put the holding disk area on the backup disk 
holding the vtapes; it just pounds the seek mechanism of that drive, 
leading to a potential early failure. I've always put it on the main 
drive, but that then subjects the main drive to the same abuse when 
backing up the main drive, and I'm supposedly backing up 4 other 
machines too. TBT I will have a drive that is not in current use when 
the new machine is built, which may well be an even better solution. 
Set up that way, it should also be a measurable speed improvement.

Lesson #2, I just learned today that the raspbian AND debian buster 10.2 
versions have NO inetd or xinetd. Ditto for RH.

So that probably explains why I can install the clients and configure 
them to be backed up, except the clients crash about 3 seconds after the 
server first accesses them, leaving NO clues in the logs of the 
crashed machines. I've had it happen to an armbian jessie install on a 
rock64, an rpi3 with a 64 bit debian buster install, and I expect it 
will be the same should I try to backup the raspbian armhf install on 
the rpi4.

This machine, which I pulled out of the midden heap in the garage so I at 
least had email, is so crippled that I've turned amanda off until I can get 
a new machine built and this boot drive moved to it. Got the CPU today, might 
have the rest of it tomorrow.  The tower has smoke stains from the fire 
but that won't hurt it a bit.

Anyway, we've got to figure out what to do about the missing inetd 
stuff or it's all going to die as folks update.

Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>


Re: holding disk too small?

2019-12-03 Thread Olivier
"Stefan G. Weichinger"  writes:

> So far it works but maybe not optimal. I consider recreating that
> holding disk array (currently RAID1 of 2 disks) as RAID0 ..

Unless your backups are super critical, you may not need RAID 1 for
holding disk. Also consider that the holding disk puts a lot of mechanical
stress on the disk: I have seen at least one case where the disk did
start failing and developing bad blocks in the holding disk space while
the rest of the disk was OK.

Best regards,

Olivier



RE: holding disk too small?

2019-12-03 Thread Cuttler, Brian R (HEALTH)
Stefan,

In order for the holding disk to be used it has to be bigger than the largest 
DLE.
To get parallelism in dumping it has to be large enough to hold more than one 
DLE at a time, ideally as many DLEs as you dump in parallel, and then some 
more so that you can begin spooling to tape while a new dump is still being performed.

I think that a work area larger than a tape is probably overkill - but the tool 
I like to use to visualize where the bottleneck is, is amplot.

With a work area as large as yours I think you will probably see that the work 
area is never fully utilized and that the dumping constraints are somewhere else, 
or that you can increase parallelism in dumping to shorten the overall 
amdump run time.

I don't know what the config looks like (number of clients, number and size of 
partitions being managed), but at some point you will run out of CPU, disk 
performance, or something else you can't overcome with Amanda tuning.
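
If you want to give amplot a try, it is pointed at the amdump trace file(s)
from a run, for example (the log path here is only an assumption; use
whatever logdir your config sets):

amplot /var/log/amanda/daily/amdump.1
amplot -p /var/log/amanda/daily/amdump.1    (postscript output instead of an X display)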

Best,
Brian

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Stefan G. Weichinger
Sent: Tuesday, December 3, 2019 9:43 AM
To: amanda-users@amanda.org
Subject: holding disk too small?

Another naive question:

Does the holdingdisk have to be bigger than the size of one tape?

I know that it would be good, but what if not?

I right now have ~2TB holding disk and "runtapes 2" with LTO6 tapetype.

That is 2.4 TB per tape.

So far it works but maybe not optimal. I consider recreating that
holding disk array (currently RAID1 of 2 disks) as RAID0 ..

And sub-question:

how would you configure these parameters here:

autoflush   yes
flush-threshold-dumped  50
flush-threshold-scheduled 50
taperflush  50

I'd like to collect some files in the disk before writing to tape, but
can't collect a full tape's data ...

I assume here also "dumporder" plays a role:

dumporder "Ssss"

- thanks, Stefan



holding disk too small?

2019-12-03 Thread Stefan G. Weichinger


Another naive question:

Does the holdingdisk have to be bigger than the size of one tape?

I know that it would be good, but what if not?

I right now have ~2TB holding disk and "runtapes 2" with LTO6 tapetype.

That is 2.4 TB per tape.

So far it works but maybe not optimal. I consider recreating that
holding disk array (currently RAID1 of 2 disks) as RAID0 ..

And sub-question:

how would you configure these parameters here:

autoflush   yes
flush-threshold-dumped  50
flush-threshold-scheduled 50
taperflush  50

I'd like to collect some files in the disk before writing to tape, but
can't collect a full tape's data ...

I assume here also "dumporder" plays a role:

dumporder "Ssss"

- thanks, Stefan


Re: keep one DLE in the holding disk

2019-11-29 Thread Stefan G. Weichinger
Am 27.11.19 um 21:22 schrieb Debra S Baddorf:
> 
> 
>> On Nov 27, 2019, at 3:29 AM, Stefan G. Weichinger  wrote:
>>
>> There could also be a separate cronjob with "amdump --no-taper" when I
>> think about it.
>>
>> I could run that during the day maybe. This would give me time to run
>> another script find-ing the latest dump in the holdingdisk etc
> 
> Actually, I like this idea a lot.

Thank you ;-)

> Create a script like this
> 
>   amdump --no-taper
>trigger synch process (or just copy to remote area)
> 
> so that the sync  happens immediately after the dump is finished.
> Run script earlier in the day than normal backups, allowing enough time 
> for it to finish.
> 
> Then,  when the normal backup run is done,  amanda will auto-magically
> FLUSH that earlier run onto the vtape with the rest of them, and will then
> delete it from the holding disk.
> 
> Or,  run it immediately after LAST NIGHT’s backup run.  The dump
> would sit in holding all day, and become part of the NEXT night’s normal 
> backup.

Yes, exactly.

I was busy with other things this week so I haven't yet found the time
and/or the brain to work on this.

A detail to solve is:

the dump will look like

# tree /mnt/amhold/vtape/
/mnt/amhold/vtape/
└── 20191129170354
├── pre01svdeb02.ntcs-sql-latest.0
├── pre01svdeb02.ntcs-sql-latest.0.1
└── pre01svdeb02.ntcs-sql-latest.0.2

... the timestamp will change every day/run, so I need some find-command
to find the latest file (there could be more than one sometimes) and
then maybe set up a rotating symlink (I mean something like a
prefix "sqldump-latest" or so).
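
A rough sketch of what I have in mind (untested; paths and the DLE name are
taken from the tree above, adjust as needed):

#!/bin/sh
HOLD=/mnt/amhold/vtape
# newest datestamp directory left behind by the --no-taper run
latest=$(ls -1d "$HOLD"/[0-9]*/ | sort | tail -n 1)
# rotating symlink so the rsync job always has a stable path
ln -sfn "$latest" "$HOLD"/sqldump-latest
# push all chunks of that one DLE off-site
rsync -av "$HOLD"/sqldump-latest/pre01svdeb02.ntcs-sql-latest.0* remote:/offsite/sqldump/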

We talk about ~5GB here. Unfortunately this will be a completely new
file each day (think "binary and compressed here"), so I expect rsync to
transfer the whole file each time. This will take a few hours over that
small DSL-line ... so this will have to happen during the night.

But the rsync could be run at a completely different time.

As there are vtapes in place ... it's even possible to run a specific
"amflush -b" for that DLE after the sync.

I will try that in the next days or so.
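
(For that last step I mean something along the lines of
amflush -b myconfig pre01svdeb02 ntcs-sql-latest
where the config and DLE names are placeholders, of course.)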


Re: keep one DLE in the holding disk

2019-11-27 Thread Debra S Baddorf



> On Nov 27, 2019, at 10:23 AM, Charles Curley 
>  wrote:
> 
> On Wed, 27 Nov 2019 10:32:51 +0100
> "Stefan G. Weichinger"  wrote:
> 
>>> amvault might be worth looking at.  
>> 
>> I never understood that one ... :-(
> 
> Drat. I never did, either. I was hoping you'd figure it out and then I
> could use it. :-)
> 
> -- 
> Does anybody read signatures any more?
> 
> https://charlescurley.com
> https://charlescurley.com/blog/

No.  (re signature question)  Except when there’s a cute saying, which
I sometimes swipe.


I’ve done a little bit of vaulting,  to get a years worth of backups off
old-tape-format  and over to another machine to write to
new-tape-format.

So I vaulted twice.   Once,  to copy the backup off  old-tape-format onto a
physical disk. I network mounted that disk onto the second machine.
Then used amvault to copy the backup from disk to new-tape-format.

That’s been a few years ago,  but I might dredge up how I did it,  if
you have any particular questions.
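
From memory the two passes were roughly shaped like this (all config and
storage names are placeholders, and the exact options depend on your Amanda
version, so check amvault(8) before copying anything):

   amvault --fulls-only --dest-storage disk-vault OldConfig
   (then, on the second machine, with the disk changer network-mounted)
   amvault --fulls-only --dest-storage new-lto NewConfig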

Deb Baddorf

previously swiped sig, which I’ve used other places:
They say God never gives you more than you can handle.  Apparently God thinks 
I’m a badass.



Re: keep one DLE in the holding disk

2019-11-27 Thread Debra S Baddorf



> On Nov 27, 2019, at 3:29 AM, Stefan G. Weichinger  wrote:
> 
> There could also be a separate cronjob with "amdump --no-taper" when I
> think about it.
> 
> I could run that during the day maybe. This would give me time to run
> another script find-ing the latest dump in the holdingdisk etc

Actually, I like this idea a lot.

Create a script like this

amdump --no-taper
   trigger synch process (or just copy to remote area)

so that the sync  happens immediately after the dump is finished.
Run script earlier in the day than normal backups, allowing enough time 
for it to finish.

Then,  when the normal backup run is done,  amanda will auto-magically
FLUSH that earlier run onto the vtape with the rest of them, and will then
delete it from the holding disk.

Or,  run it immediately after LAST NIGHT’s backup run.  The dump
would sit in holding all day, and become part of the NEXT night’s normal backup.
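
As a concrete (untested) sketch of that script, with config, DLE, holding
path and destination all as placeholders:

#!/bin/sh
# dump just the one DLE to the holding disk, no taping
amdump --no-taper myconfig pre01svdeb02 ntcs-sql-latest
# push the fresh holding-disk files off-site as soon as the dump returns
rsync -av /mnt/amhold/vtape/ remote:/offsite/amhold/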

Deb Baddorf
Fermilab



Re: keep one DLE in the holding disk

2019-11-27 Thread Charles Curley
On Wed, 27 Nov 2019 10:32:51 +0100
"Stefan G. Weichinger"  wrote:

> > amvault might be worth looking at.  
> 
> I never understood that one ... :-(

Drat. I never did, either. I was hoping you'd figure it out and then I
could use it. :-)

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/


Re: keep one DLE in the holding disk

2019-11-27 Thread Stefan G. Weichinger
Am 26.11.19 um 19:58 schrieb Charles Curley:

> I wonder if this would capture complete backups? If you have all level
> 0 (total) backups, this should be fine. But if you have non-level-0
> backups, you need a way to capture and keep until the next level 0
> backup all the non-level-0 backups.
> 
> amvault might be worth looking at.

I never understood that one ... :-(



Re: keep one DLE in the holding disk

2019-11-27 Thread Stefan G. Weichinger
Am 26.11.19 um 19:42 schrieb Debra S Baddorf:

> I’ve no particular knowledge of this,  so this is just suggestion - until 
> somebody with better
> ideas comes along.

yeah, the dozens of amanda-users populating the discussions here ;-)

> I suspect amanda does not like to leave things in her holding disk.  She puts 
> them to
> tape (vtape)  and then is done with them …. or she doesn’t, and the taping is 
> pending.

Right. AFAIK it could only be influenced indirectly by using the various
thresholds (keep stuff in holding disk until X tapes could be filled etc)

> How about a cronjob to copy the tarballs from the vtape to another disk  (NOT 
> the
> holding disk,  tho it could be a directory on the same disk but not in the 
> holding area).
> Sync that area,  and delete when done (if desired).

Some automated restore(-test) then. ok.

There could also be a separate cronjob with "amdump --no-taper" when I
think about it.

I could run that during the day maybe. This would give me time to run
another script find-ing the latest dump in the holdingdisk etc

It's not very elegant or "integrated", though; it would be nicer to have
that *defined* in amanda somehow:

* keep this tarball in holdingdisk for 2 days
* set up some symlink like "myDLE_latest.tgz" for the rsync-script


Re: keep one DLE in the holding disk

2019-11-26 Thread Charles Curley
On Tue, 26 Nov 2019 13:13:00 +0100
"Stefan G. Weichinger"  wrote:

> One DLE uses amsamba-application to dump a Windows-share, containing a
> specific SQL-Server-export
> 
> That dump should go to (a) amanda's vtapes (done already) and (b)
> rsynced to some remote server off-site
> 
> Now how do I keep the tarballs in the holdingdisk for syncing them
> away ... and only for this DLE?
> 
> Some post-script? Does someone have a nice example or so?

I wonder if this would capture complete backups? If you have all level
0 (total) backups, this should be fine. But if you have non-level-0
backups, you need a way to capture and keep until the next level 0
backup all the non-level-0 backups.

amvault might be worth looking at.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/


Re: keep one DLE in the holding disk

2019-11-26 Thread Debra S Baddorf



> On Nov 26, 2019, at 6:13 AM, Stefan G. Weichinger  wrote:
> 
> 
> special need again:
> 
> One DLE uses amsamba-application to dump a Windows-share, containing a
> specific SQL-Server-export
> 
> That dump should go to (a) amanda's vtapes (done already) and (b)
> rsynced to some remote server off-site
> 
> Now how do I keep the tarballs in the holdingdisk for syncing them away
> ... and only for this DLE?
> 
> Some post-script? Does someone have a nice example or so?


I’ve no particular knowledge of this,  so this is just suggestion - until 
somebody with better
ideas comes along.

I suspect amanda does not like to leave things in her holding disk.  She puts 
them to
tape (vtape)  and then is done with them …. or she doesn’t, and the taping is 
pending.

How about a cronjob to copy the tarballs from the vtape to another disk  (NOT 
the
holding disk,  tho it could be a directory on the same disk but not in the 
holding area).
Sync that area,  and delete when done (if desired).

Deb Baddorf
Fermilab



keep one DLE in the holding disk

2019-11-26 Thread Stefan G. Weichinger


special need again:

One DLE uses amsamba-application to dump a Windows-share, containing a
specific SQL-Server-export

That dump should go to (a) amanda's vtapes (done already) and (b)
rsynced to some remote server off-site

Now how do I keep the tarballs in the holdingdisk for syncing them away
... and only for this DLE?

Some post-script? Does someone have a nice example or so?


Re: Flushing the Holding Disk

2018-11-16 Thread Gene Heskett
On Friday 16 November 2018 14:51:21 Debra S Baddorf wrote:

> > On Nov 16, 2018, at 1:37 PM, Gene Heskett 
> > wrote:
> >
> > On Friday 16 November 2018 13:59:59 Debra S Baddorf wrote:
> >>> On Nov 16, 2018, at 12:11 PM, Austin S. Hemmelgarn
> >>>  wrote:
> >>>
> >>> On 2018-11-16 12:27, Chris Miller wrote:
> >>>> Hi Folks,
> >>>> I'm unclear on the timing of the flush from holding disk to
> >>>> vtape. Suppose I run two backup jobs,and each uses the holding
> >>>> disk. When will the second job start? Obviously, after the client
> >>>> has sent everything... Before the holding disk flush starts, or
> >>>> after the holding disk flush has completed?
> >>>
> >>> If by 'jobs' you mean 'amanda configurations', the second one
> >>> starts when you start it.  Note that `amdump` does not return
> >>> until everything is finished dumping and optionally taping if
> >>> anything would be taped, so you can literally just run each one
> >>> sequentially in a shell script and they won't run in parallel.
> >>>
> >>> If by 'jobs' you mean DLE's, they run as concurrently as you tell
> >>> Amanda to run them.  If you've got things serialized (`inparallel`
> >>> is set to 1 in your config), then the next DLE will start dumping
> >>> once the previous one is finished dumping to the holding disk.
> >>> Otherwise, however many you've said can run in parallel run
> >>> (within per-host limits), and DLE's start when the previous one in
> >>> sequence for that dumper finishes. Taping can (by default) run in
> >>> parallel with dumping if you're using a holding disk, which is
> >>> generally a good thing, though you can also easily configure it to
> >>> wait for some amount of data to be buffered on the holding disk
> >>> before it starts taping.
> >>>
> >>>> Is there any way to defer the holding disk flush until all backup
> >>>> jobs for a given night have completed?
> >>>
> >>> Generically, set `autoflush no` in each configuration, and then
> >>> run `amflush` for each configuration once all the dumps are done.
> >>>
> >>> However, unless you've got an odd arrangement where every system
> >>> saturates the network link while actually dumping and you are
> >>> sharing a single link on the Amanda server for both dumping and
> >>> taping, this actually probably won't do anything for your
> >>> performance.  You can easily configure amanda to flush backups
> >>> from each DLE as soon as they are done, and it will wait to exit
> >>> until everything is actually flushed.
> >>>
> >>> Building from that, if you just want to ensure the `amdump`
> >>> instances don't run in parallel, just use a tool to fire them off
> >>> sequentially in the foreground.  Stuff like Ansible is great for
> >>> this (especially because you can easily conditionally back up your
> >>> index and tapelist when the dump finishes).  As long as the next
> >>> `amdump` command isn't started until the previous one returns, you
> >>> won't have to worry about them fighting each other for bandwidth.
> >>
> >> Chris:  you have some control over when DLEs go from the holding
> >> disk to the actual tape (or vtape). This paragraph is from the
> >> examples, and I keep it in my config files so I remember how to
> >> setup these params: #  New amanda includes these explanatory
> >> paragraphs:
> >>
> >> # flush-threshold-dumped, flush-threshold-scheduled, taperflush,
> >> and autoflush # are used to control tape utilization. See the
> >> amanda.conf (5) manpage for # details on how they work. Taping will
> >> not start until all criteria are # satisfied. Here are some
> >> examples: #
> >> # You want to fill tapes completely even in the case of failed
> >> dumps, and # don't care if some dumps are left on the holding disk
> >> after a run: # flush-threshold-dumped100 # (or more)
> >> # flush-threshold-scheduled 100 # (or more)
> >> # taperflush100
> >> # autoflush yes
> >> #
> >> # You want to improve tape performance by waiting for a complete
> >> tape of data # before writing anything. However, all

Re: Manually flush the holding disk

2018-11-16 Thread Nathan Stratton Treadway
On Fri, Nov 16, 2018 at 14:52:08 -0500, Nathan Stratton Treadway wrote:
> On Fri, Nov 16, 2018 at 09:42:18 -0800, Chris Miller wrote:
> > Hi Folks, 
> > 
> > I have 194 files on my holding disk that were written as a result of 
> > "amdump aequitas.tclc.org", but I can't manually flush them. 
> > 
> > 
> > 
> > bash-4.2 $ ls -lv /var/amanda/hold/20181115124329/ 
> > : 
> > -rw---. 1 amandabackup disk 1073741824 Nov 15 15:11 
> > aequitas.tclc.org.C__.0.1.tmp 
> > : 
> > -rw---. 1 amandabackup disk 544308613 Nov 16 01:05 
> > aequitas.tclc.org.C__.0.194.tmp 
> > 
> 
> I take it you have 194 files in that one directory, all 1 GiB in size
> (except the last) and all ending in ".tmp"?  This would indicate the
> dump was not finished, and so amflush would not consider them valid files
> for flushing.
> 
> Is the 20181115124329 run of amdump still running? 
> 
> If not, perhaps it crashed before that DLE finished dumping?
> 

Actually, after sending that I remembered that in a conversation a year
ago Jean-Louis mentioned that amflush _will_ try to flush .tmp files, so
the above answer is not complete.

(What version of Amanda is this?)

It's definitely important that the files in question are still ".tmp"
files -- that dump did not complete for some reason.

However, since amflush does flush .tmp files (at least in Amanda 3.5),
the next step is probably to look in the amflush-process log file to see
if it gives any indication of why it's skipping those files when searching
for something to flush.

Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: Flushing the Holding Disk

2018-11-16 Thread Debra S Baddorf



> On Nov 16, 2018, at 1:37 PM, Gene Heskett  wrote:
> 
> On Friday 16 November 2018 13:59:59 Debra S Baddorf wrote:
> 
>>> On Nov 16, 2018, at 12:11 PM, Austin S. Hemmelgarn
>>>  wrote:
>>> 
>>> On 2018-11-16 12:27, Chris Miller wrote:
>>>> Hi Folks,
>>>> I'm unclear on the timing of the flush from holding disk to vtape.
>>>> Suppose I run two backup jobs,and each uses the holding disk. When
>>>> will the second job start? Obviously, after the client has sent
>>>> everything... Before the holding disk flush starts, or after the
>>>> holding disk flush has completed?
>>> 
>>> If by 'jobs' you mean 'amanda configurations', the second one starts
>>> when you start it.  Note that `amdump` does not return until
>>> everything is finished dumping and optionally taping if anything
>>> would be taped, so you can literally just run each one sequentially
>>> in a shell script and they won't run in parallel.
>>> 
>>> If by 'jobs' you mean DLE's, they run as concurrently as you tell
>>> Amanda to run them.  If you've got things serialized (`inparallel`
>>> is set to 1 in your config), then the next DLE will start dumping
>>> once the previous one is finished dumping to the holding disk. 
>>> Otherwise, however many you've said can run in parallel run (within
>>> per-host limits), and DLE's start when the previous one in sequence
>>> for that dumper finishes. Taping can (by default) run in parallel
>>> with dumping if you're using a holding disk, which is generally a
>>> good thing, though you can also easily configure it to wait for some
>>> amount of data to be buffered on the holding disk before it starts
>>> taping.
>>> 
>>>> Is there any way to defer the holding disk flush until all backup
>>>> jobs for a given night have completed?
>>> 
>>> Generically, set `autoflush no` in each configuration, and then run
>>> `amflush` for each configuration once all the dumps are done.
>>> 
>>> However, unless you've got an odd arrangement where every system
>>> saturates the network link while actually dumping and you are
>>> sharing a single link on the Amanda server for both dumping and
>>> taping, this actually probably won't do anything for your
>>> performance.  You can easily configure amanda to flush backups from
>>> each DLE as soon as they are done, and it will wait to exit until
>>> everything is actually flushed.
>>> 
>>> Building from that, if you just want to ensure the `amdump`
>>> instances don't run in parallel, just use a tool to fire them off
>>> sequentially in the foreground.  Stuff like Ansible is great for
>>> this (especially because you can easily conditionally back up your
>>> index and tapelist when the dump finishes).  As long as the next
>>> `amdump` command isn't started until the previous one returns, you
>>> won't have to worry about them fighting each other for bandwidth.
>> 
>> Chris:  you have some control over when DLEs go from the holding disk
>> to the actual tape (or vtape). This paragraph is from the examples, 
>> and I keep it in my config files so I remember how to setup these
>> params: #  New amanda includes these explanatory paragraphs:
>> 
>> # flush-threshold-dumped, flush-threshold-scheduled, taperflush, and
>> autoflush # are used to control tape utilization. See the amanda.conf
>> (5) manpage for # details on how they work. Taping will not start
>> until all criteria are # satisfied. Here are some examples:
>> #
>> # You want to fill tapes completely even in the case of failed dumps,
>> and # don't care if some dumps are left on the holding disk after a
>> run: # flush-threshold-dumped100 # (or more)
>> # flush-threshold-scheduled 100 # (or more)
>> # taperflush100
>> # autoflush yes
>> #
>> # You want to improve tape performance by waiting for a complete tape
>> of data # before writing anything. However, all dumps will be flushed;
>> none will # be left on the holding disk.
>> # flush-threshold-dumped100 # (or more)
>> # flush-threshold-scheduled 100 # (or more)
>> # taperflush0
>> #
>> # You don't want to use a new tape for every run, but want to start
>> writing # to tape as soon as possible:
>> # flush-threshold-dumped0   # (or more)
>> # flus

Re: Manually flush the holding disk

2018-11-16 Thread Nathan Stratton Treadway
On Fri, Nov 16, 2018 at 09:42:18 -0800, Chris Miller wrote:
> Hi Folks, 
> 
> I have 194 files on my holding disk that were written as a result of "amdump 
> aequitas.tclc.org", but I can't manually flush them. 
> 
> 
> 
> bash-4.2 $ ls -lv /var/amanda/hold/20181115124329/ 
> : 
> -rw---. 1 amandabackup disk 1073741824 Nov 15 15:11 
> aequitas.tclc.org.C__.0.1.tmp 
> : 
> -rw---. 1 amandabackup disk 544308613 Nov 16 01:05 
> aequitas.tclc.org.C__.0.194.tmp 
> 

I take it you have 194 files in that one directory, all 1 GiB in size
(except the last) and all ending in ".tmp"?  This would indicate the
dump was not finished, and so amflush would not consider them valid files
for flushing.

Is the 20181115124329 run of amdump still running? 

If not, perhaps it crashed before that DLE finished dumping?

Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: Manually flush the holding disk

2018-11-16 Thread Debra S Baddorf



> On Nov 16, 2018, at 1:28 PM, Alan Hodgson  wrote:
> 
> On Fri, 2018-11-16 at 11:00 -0800, Chris Miller wrote:
>> Hi Alan,
>>> From: "Alan Hodgson" 
>>> To: "amanda-users" 
>>> Sent: Friday, November 16, 2018 9:59:29 AM
>>> Subject: Re: Manually flush the holding disk
>>> On Fri, 2018-11-16 at 09:42 -0800, Chris Miller wrote:
>>> # man amflush
>>>   amflush [-b] [-f] [--exact-match] [-s] [-D datestamp] [-o 
>>> configoption...] config [host [disk...]...]
>>> 
>>> .. the first argument to most amanda commands is the configuration name
>> I think that is what I am doing, but I would be happy to learn that I am 
>> mistaken. My backup command was "amdump aequitas.tclc.org", so I expect my 
>> flush command should be "amflush aequitas.tclc.org". Am I wrong?
>> 
>> 
> 
> If that's the config name then I apologize, it sure looks like a host name.
> 
> Maybe try amflush -b aequitas.tclc.org, I know I always use -b but reading 
> the man page again now I'm not sure why.
> 
> Otherwise, I guess make sure it's using the right config to match the holding 
> disk you're looking at.
> 
> That's the only ideas I'm having.
> 

I usually find about half a dozen abandoned items in my holding disk, over a 
year.
If they haven’t flushed themselves automatically after a month or two  
(autoflush all),  
I just manually delete them
 rm -fr  /spool1/amanda/daily/201808*
for instance

They seem to come from aborted jobs.   Amcleanup  cleans log files,  but 
perhaps it doesn’t touch the holding disk areas?

Deb Baddorf




Re: Flushing the Holding Disk

2018-11-16 Thread Gene Heskett
On Friday 16 November 2018 13:59:59 Debra S Baddorf wrote:

> > On Nov 16, 2018, at 12:11 PM, Austin S. Hemmelgarn
> >  wrote:
> >
> > On 2018-11-16 12:27, Chris Miller wrote:
> >> Hi Folks,
> >> I'm unclear on the timing of the flush from holding disk to vtape.
> >> Suppose I run two backup jobs,and each uses the holding disk. When
> >> will the second job start? Obviously, after the client has sent
> >> everything... Before the holding disk flush starts, or after the
> >> holding disk flush has completed?
> >
> > If by 'jobs' you mean 'amanda configurations', the second one starts
> > when you start it.  Note that `amdump` does not return until
> > everything is finished dumping and optionally taping if anything
> > would be taped, so you can literally just run each one sequentially
> > in a shell script and they won't run in parallel.
> >
> > If by 'jobs' you mean DLE's, they run as concurrently as you tell
> > Amanda to run them.  If you've got things serialized (`inparallel`
> > is set to 1 in your config), then the next DLE will start dumping
> > once the previous one is finished dumping to the holding disk. 
> > Otherwise, however many you've said can run in parallel run (within
> > per-host limits), and DLE's start when the previous one in sequence
> > for that dumper finishes. Taping can (by default) run in parallel
> > with dumping if you're using a holding disk, which is generally a
> > good thing, though you can also easily configure it to wait for some
> > amount of data to be buffered on the holding disk before it starts
> > taping.
> >
> >> Is there any way to defer the holding disk flush until all backup
> >> jobs for a given night have completed?
> >
> > Generically, set `autoflush no` in each configuration, and then run
> > `amflush` for each configuration once all the dumps are done.
> >
> > However, unless you've got an odd arrangement where every system
> > saturates the network link while actually dumping and you are
> > sharing a single link on the Amanda server for both dumping and
> > taping, this actually probably won't do anything for your
> > performance.  You can easily configure amanda to flush backups from
> > each DLE as soon as they are done, and it will wait to exit until
> > everything is actually flushed.
> >
> > Building from that, if you just want to ensure the `amdump`
> > instances don't run in parallel, just use a tool to fire them off
> > sequentially in the foreground.  Stuff like Ansible is great for
> > this (especially because you can easily conditionally back up your
> > index and tapelist when the dump finishes).  As long as the next
> > `amdump` command isn't started until the previous one returns, you
> > won't have to worry about them fighting each other for bandwidth.
>
> Chris:  you have some control over when DLEs go from the holding disk
> to the actual tape (or vtape). This paragraph is from the examples, 
> and I keep it in my config files so I remember how to setup these
> params: #  New amanda includes these explanatory paragraphs:
>
> # flush-threshold-dumped, flush-threshold-scheduled, taperflush, and
> autoflush # are used to control tape utilization. See the amanda.conf
> (5) manpage for # details on how they work. Taping will not start
> until all criteria are # satisfied. Here are some examples:
> #
> # You want to fill tapes completely even in the case of failed dumps,
> and # don't care if some dumps are left on the holding disk after a
> run: # flush-threshold-dumped100 # (or more)
> # flush-threshold-scheduled 100 # (or more)
> # taperflush100
> # autoflush yes
> #
> # You want to improve tape performance by waiting for a complete tape
> of data # before writing anything. However, all dumps will be flushed;
> none will # be left on the holding disk.
> # flush-threshold-dumped100 # (or more)
> # flush-threshold-scheduled 100 # (or more)
> # taperflush0
> #
> # You don't want to use a new tape for every run, but want to start
> writing # to tape as soon as possible:
> # flush-threshold-dumped0   # (or more)
> # flush-threshold-scheduled 100 # (or more)
> # taperflush100
> # autoflush yes
> # maxdumpsize   100k # amount of data to dump each run; see above.
> #
> # You want to keep the most recent dumps on holding disk, for faster
> recovery. # Older dumps will be rotated to tape during each run.
> # flush-threshold-dumped

Re: Manually flush the holding disk

2018-11-16 Thread Alan Hodgson
On Fri, 2018-11-16 at 11:00 -0800, Chris Miller wrote:
> Hi Alan,
> > From: "Alan Hodgson" 
> > To: "amanda-users" 
> > Sent: Friday, November 16, 2018 9:59:29 AM
> > Subject: Re: Manually flush the holding disk
> > On Fri, 2018-11-16 at 09:42 -0800, Chris Miller wrote:
> > # man amflush
> >   amflush [-b] [-f] [--exact-match] [-s] [-D datestamp] [-o
> > configoption...] config [host [disk...]...]
> > 
> > .. the first argument to most amanda commands is the configuration
> > name
> 
> I think that is what I am doing, but I would be happy to learn that I
> am mistaken. My backup command was "amdump aequitas.tclc.org", so I
> expect my flush command should be "amflush aequitas.tclc.org". Am I
> wrong?
> 
> 

If that's the config name then I apologize, it sure looks like a host
name.
Maybe try amflush -b aequitas.tclc.org, I know I always use -b but
reading the man page again now I'm not sure why.


Otherwise, I guess make sure it's using the right config to match the
holding disk you're looking at.


Those are the only ideas I have.



Re: Manually flush the holding disk

2018-11-16 Thread Chris Miller
Hi Alan, 

> From: "Alan Hodgson" 
> To: "amanda-users" 
> Sent: Friday, November 16, 2018 9:59:29 AM
> Subject: Re: Manually flush the holding disk

> On Fri, 2018-11-16 at 09:42 -0800, Chris Miller wrote:

> # man amflush
> amflush [-b] [-f] [--exact-match] [-s] [-D datestamp] [-o configoption...]
> config [host [disk...]...]

> .. the first argument to most amanda commands is the configuration name

I think that is what I am doing, but I would be happy to learn that I am 
mistaken. My backup command was "amdump aequitas.tclc.org", so I expect my 
flush command should be "amflush aequitas.tclc.org". Am I wrong? 

Thanks for the help, Alan. 
-- 
Chris. 

V:916.974.0424 
F:916.974.0428 


Re: Flushing the Holding Disk

2018-11-16 Thread Debra S Baddorf



> On Nov 16, 2018, at 12:11 PM, Austin S. Hemmelgarn  
> wrote:
> 
> On 2018-11-16 12:27, Chris Miller wrote:
>> Hi Folks,
>> I'm unclear on the timing of the flush from holding disk to vtape. Suppose I 
>> run two backup jobs,and each uses the holding disk. When will the second job 
>> start? Obviously, after the client has sent everything... Before the holding 
>> disk flush starts, or after the holding disk flush has completed?
> If by 'jobs' you mean 'amanda configurations', the second one starts when you 
> start it.  Note that `amdump` does not return until everything is finished 
> dumping and optionally taping if anything would be taped, so you can 
> literally just run each one sequentially in a shell script and they won't run 
> in parallel.
> 
> If by 'jobs' you mean DLE's, they run as concurrently as you tell Amanda to 
> run them.  If you've got things serialized (`inparallel` is set to 1 in your 
> config), then the next DLE will start dumping once the previous one is 
> finished dumping to the holding disk.  Otherwise, however many you've said 
> can run in parallel run (within per-host limits), and DLE's start when the 
> previous one in sequence for that dumper finishes. Taping can (by default) 
> run in parallel with dumping if you're using a holding disk, which is 
> generally a good thing, though you can also easily configure it to wait for 
> some amount of data to be buffered on the holding disk before it starts 
> taping.
>> Is there any way to defer the holding disk flush until all backup jobs for a 
>> given night have completed?
> Generically, set `autoflush no` in each configuration, and then run `amflush` 
> for each configuration once all the dumps are done.
> 
> However, unless you've got an odd arrangement where every system saturates 
> the network link while actually dumping and you are sharing a single link on 
> the Amanda server for both dumping and taping, this actually probably won't 
> do anything for your performance.  You can easily configure amanda to flush 
> backups from each DLE as soon as they are done, and it will wait to exit 
> until everything is actually flushed.
> 
> Building from that, if you just want to ensure the `amdump` instances don't 
> run in parallel, just use a tool to fire them off sequentially in the 
> foreground.  Stuff like Ansible is great for this (especially because you can 
> easily conditionally back up your index and tapelist when the dump finishes). 
>  As long as the next `amdump` command isn't started until the previous one 
> returns, you won't have to worry about them fighting each other for bandwidth.

Chris:  you have some control over when DLEs go from the holding disk to the 
actual tape (or vtape).
This paragraph is from the examples,  and I keep it in my config files so I 
remember how to setup these params:
#  New amanda includes these explanatory paragraphs:

# flush-threshold-dumped, flush-threshold-scheduled, taperflush, and autoflush
# are used to control tape utilization. See the amanda.conf (5) manpage for
# details on how they work. Taping will not start until all criteria are
# satisfied. Here are some examples:
#
# You want to fill tapes completely even in the case of failed dumps, and
# don't care if some dumps are left on the holding disk after a run:
# flush-threshold-dumped    100 # (or more)
# flush-threshold-scheduled 100 # (or more)
# taperflush                100
# autoflush yes
#
# You want to improve tape performance by waiting for a complete tape of data
# before writing anything. However, all dumps will be flushed; none will
# be left on the holding disk.
# flush-threshold-dumped    100 # (or more)
# flush-threshold-scheduled 100 # (or more)
# taperflush                0
#
# You don't want to use a new tape for every run, but want to start writing
# to tape as soon as possible:
# flush-threshold-dumped    0   # (or more)
# flush-threshold-scheduled 100 # (or more)
# taperflush                100
# autoflush yes
# maxdumpsize   100k # amount of data to dump each run; see above.
#
# You want to keep the most recent dumps on holding disk, for faster recovery.
# Older dumps will be rotated to tape during each run.
# flush-threshold-dumped    300 # (or more)
# flush-threshold-scheduled 300 # (or more)
# taperflush                300
# autoflush yes
#
# Defaults:
# (no restrictions; flush to tape immediately; don't flush old dumps.)
#flush-threshold-dumped 0
#flush-threshold-scheduled 0
#taperflush 0
#autoflush no
#
 —
Here is part of my setup, with further comments beside each param.  I may have 
written some of these comments,
so I hope they are completely correct.  I think they are.
———
## with LTO5 tapes,  

Re: Flushing the Holding Disk

2018-11-16 Thread Chris Miller
Hi Austin,

- Original Message -
> From: "Austin S. Hemmelgarn" 
> To: "Chris Miller" , "amanda-users" 
> Sent: Friday, November 16, 2018 10:11:22 AM
> Subject: Re: Flushing the Holding Disk

> On 2018-11-16 12:27, Chris Miller wrote:
>> Hi Folks,
>> 
>> I'm unclear on the timing of the flush from holding disk to vtape.
>> Suppose I run two backup jobs,and each uses the holding disk. When will
>> the second job start? Obviously, after the client has sent everything...
>> Before the holding disk flush starts, or after the holding disk flush
>> has completed?



> Taping can (by default) run in parallel with dumping if you're using a
> holding disk, which is generally a good thing, ...

I am using a holding disk, so this means that only the collection from the 
client is serialized. This is good news, because I want to do all my collecting 
before I spool anything from holding disk to vtape.



> Generically, set `autoflush no` in each configuration, and then run
> `amflush` for each configuration once all the dumps are done.

Perfect! That is exactly what I seek to accomplish!



> 
> However, unless you've got an odd arrangement where every system
> saturates the network link while actually dumping and you are sharing a
> single link on the Amanda server for both dumping and taping, this
> actually probably won't do anything for your performance.  You can
> easily configure amanda to flush backups from each DLE as soon as they
> are done, and it will wait to exit until everything is actually flushed.

I have everything on the same LAN segment, which is only 10/100. I must be sure 
that all my clients have reported before the next working day, but I can spool 
from holding disk during working hours, if necessary. I'm guessing that my 
network bandwidth is going to be my bottleneck, because I'm pretty sure that 
any old desktop machine can individually saturate a 10/100 link. If I back up 
more than one client at a time, I'm going to increase collisions and decrease 
throughput. So, I can cut backup time in half if I hold everything and spool 
as the final step.




Thanks for the help, Austin.
-- 
Chris. 

V:916.974.0424 
F:916.974.0428


Re: Flushing the Holding Disk

2018-11-16 Thread Austin S. Hemmelgarn

On 2018-11-16 12:27, Chris Miller wrote:

Hi Folks,

I'm unclear on the timing of the flush from holding disk to vtape. 
Suppose I run two backup jobs,and each uses the holding disk. When will 
the second job start? Obviously, after the client has sent everything... 
Before the holding disk flush starts, or after the holding disk flush 
has completed?
If by 'jobs' you mean 'amanda configurations', the second one starts 
when you start it.  Note that `amdump` does not return until everything 
is finished dumping and optionally taping if anything would be taped, so 
you can literally just run each one sequentially in a shell script and 
they won't run in parallel.
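
A minimal sketch, with two hypothetical configuration names, run as the
Amanda user from cron or a wrapper:

#!/bin/sh
# amdump returns only when dumping (and any taping) for that config is done,
# so these two runs cannot overlap
amdump ConfigA
amdump ConfigB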


If by 'jobs' you mean DLE's, they run as concurrently as you tell Amanda 
to run them.  If you've got things serialized (`inparallel` is set to 1 
in your config), then the next DLE will start dumping once the previous 
one is finished dumping to the holding disk.  Otherwise, however many 
you've said can run in parallel run (within per-host limits), and DLE's 
start when the previous one in sequence for that dumper finishes. 
Taping can (by default) run in parallel with dumping if you're using a 
holding disk, which is generally a good thing, though you can also 
easily configure it to wait for some amount of data to be buffered on 
the holding disk before it starts taping.


Is there any way to defer the holding disk flush until all backup jobs 
for a given night have completed?
Generically, set `autoflush no` in each configuration, and then run 
`amflush` for each configuration once all the dumps are done.
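
A minimal sketch with placeholder config names: put

autoflush no

in each configuration's amanda.conf, then once every amdump has returned run

amflush -b ConfigA
amflush -b ConfigB

(-b runs amflush in batch mode so it does not prompt for which datestamps to
flush.)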


However, unless you've got an odd arrangement where every system 
saturates the network link while actually dumping and you are sharing a 
single link on the Amanda server for both dumping and taping, this 
actually probably won't do anything for your performance.  You can 
easily configure amanda to flush backups from each DLE as soon as they 
are done, and it will wait to exit until everything is actually flushed.


Building from that, if you just want to ensure the `amdump` instances 
don't run in parallel, just use a tool to fire them off sequentially in 
the foreground.  Stuff like Ansible is great for this (especially 
because you can easily conditionally back up your index and tapelist 
when the dump finishes).  As long as the next `amdump` command isn't 
started until the previous one returns, you won't have to worry about 
them fighting each other for bandwidth.


Re: Manually flush the holding disk

2018-11-16 Thread Alan Hodgson
On Fri, 2018-11-16 at 09:42 -0800, Chris Miller wrote:
> > 
> > bash-4.2$ amflush aequitas.tclc.org
> > Could not find any Amanda directories to flush.
> 
> 
> 
> Does anybody have any advice?
> 


# man amflush

  amflush [-b] [-f] [--exact-match] [-s] [-D datestamp] [-o
configoption...] config [host [disk...]...]

.. the first argument to most amanda commands is the configuration name

Manually flush the holding disk

2018-11-16 Thread Chris Miller
Hi Folks, 

I have 194 files on my holding disk that were written as a result of "amdump 
aequitas.tclc.org", but I can't manually flush them. 



bash-4.2 $ ls -lv /var/amanda/hold/20181115124329/ 
: 
-rw---. 1 amandabackup disk 1073741824 Nov 15 15:11 
aequitas.tclc.org.C__.0.1.tmp 
: 
-rw---. 1 amandabackup disk 544308613 Nov 16 01:05 
aequitas.tclc.org.C__.0.194.tmp 






bash-4.2$ amflush aequitas.tclc.org 
Could not find any Amanda directories to flush. 





Does anybody have any advice? 

Thanks for the help, 
-- 
Chris. 

V:916.974.0424 
F:916.974.0428 


Flushing the Holding Disk

2018-11-16 Thread Chris Miller
Hi Folks, 

I'm unclear on the timing of the flush from holding disk to vtape. Suppose I 
run two backup jobs, and each uses the holding disk. When will the second job 
start? Obviously, after the client has sent everything... Before the holding 
disk flush starts, or after the holding disk flush has completed? 

Is there any way to defer the holding disk flush until all backup jobs for a 
given night have completed? 

Thanks for the help, 
-- 
Chris. 

V:916.974.0424 
F:916.974.0428 


keep specific DLEs in holding disk?

2017-03-03 Thread Stefan G. Weichinger

(how) can I define this behavior:

for DLEs kvm_host:/my/virt-backup/VM_*

amdump their content to amanda_server:/mnt/amhold

write it to tape but leave the files in the holdingdisk as well until
next amdump-run

I want that behavior for some DLEs only, not for the whole config.
Is there a trick?




Amanda stalls when tape and holding disk both filled

2016-11-04 Thread Chris Hoogendyk
I've run into this situation a couple of times. Amanda doesn't finish running. Top shows it not 
being active, though many processes exist if I look by user amandabackup. `df -k` shows the holding 
disks completely full with no free space. The tape drive is unloaded and idle. When I run `amstatus 
daily`, I see a DLE indicating PARTIAL (i.e. it ran off the end of the tape), I see a few "waiting 
for writing to tape", and I see a couple "waiting for holding disk space." So, in other words, 
nothing can be done. If I didn't look, it would sit there forever, apparently.


So, to clean it up, I do `amcleanup -k daily`. Then, since I have a significant amount of dumps on 
holding disk and can fill most of a tape, I go ahead and run `amflush daily`. I don't want to put 
that on automatic, because sometimes there can be tape difficulties, and I would end up going 
through a bunch of tapes.


This is Amanda 3.3.9 on Ubuntu 14.04 with LTO6 and 2TB of holding disk as two 
1TB enterprise SSDs.

I realize I can work around this with configuration and by watching what's going on with my servers 
and Amanda, but I also think this should be classified as a bug. Amanda should know that it is not 
going anywhere and isn't going to accomplish anything by waiting. The admin can't do anything to fix 
it either. It has to be terminated. So, Amanda should terminate and send a report.



--
---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology & Geosciences Departments
 (*) \(*) -- 315 Morrill Science Center
~~ - University of Massachusetts, Amherst



---

Erdös 4



Re: A question about cleaning up holding disk

2016-08-04 Thread Debra S Baddorf
I’ve not run into any problems.   I do wait till the partial dump is several 
days old, to make sure 
amanda refuses to flush it.   Any other readers care to comment?

Deb Baddorf


> On Aug 4, 2016, at 1:06 AM, Chaerin Kim  wrote:
> 
> Hello, Debra.
> Thank you for responding to my question.
> 
> I have a more question.
> If I remove partial dumps in holding disk by running "rm -rf treename",
> aren't there any problem to running Amanda later?
> I saw *.tmp files related the partial dumps in index directory when I tested 
> about this problem, 
> and I think it's possible to be created any other related files. 
> Don't I have to remove these tmp files?
> 
> Regards,
> Chaerin.
> 
> 
> 2016-08-04 1:33 GMT+09:00 Debra S Baddorf :
> 
> > On Aug 2, 2016, at 9:55 PM, Chaerin Kim  wrote:
> >
> > Hello.
> >
> > I have a question about cleaning up holding disk.
> > For cleaning up holding disk, do I have to run amflush command?
> >
> > I don't want to waste a tape by flushing incomplete backup image remaining 
> > in holding disk.
> > So, I searched on the internet and found this page in zmanda forum.
> > (https://forums.zmanda.com/showthread.php?1565-Reset-Amanda-and-remove-holding-files)
> 
> 
> > An Amanda hacker said, 'You can manually remove the data on the holding 
> > disk (rm -rf ) and run "amcleanup" after that.'.
> > So, I tried it.
> > But amcleanup returned following message and didn't do anything.
> > "amcleanup: no unprocessed logfile to clean up.”
> 
> If you have AUTOFLUSH  set to ALL or YES (the latter flushes only the DLEs 
> listed on the dump command;  so if you are
> re-doing only SOME of your DLEs,  then setting AUTOFLUSH  to YES  will only 
> put  *those*same*  DLEs onto the tape;
> I prefer to set it to ALL)
> then it should flush to tape those dumps which are still valid.
> 
> Dumps which were partial,  and will never be run onto the tape  (ie you find 
> them there several days later, and amanda
> has not flushed them)  —  Yeah,  I just run   “rm -fr  treename”
> Amcleanup doesn’t seem to be needed at this point, since amanda has already 
> rejected the dumps.
> 
> I’m only told to use amcleanup when I control-C  a backup,  or do something 
> else weird  (a machine crash in the middle
> would probably look the same).So,  if  “AMCLEANUP  ” **  gives no 
> replies,  then you are fine.
> 
> **(I’m capitalizing some of the above commands to avoid spellcheck.  
> Don’t capitalize them yourself!)
> 
> Deb Baddorf
> Fermilab
> 
> > I tried to executing amcleanup with every options, but the result was same.
> >
> > According to man page of amcleanup, amcleanup with '-r' option remove bad 
> > files in holding disk.
> > What is the bad files?
> > And what should I do to remove remaining data caused by failure backup in 
> > holding disk?
> > Are there any recommended way about cleaning up holding disk in Amanda?
> >
> > Please answer my question.
> >
> > Regards,
> > Chaerin.
> 
> 
> 




Re: A question about cleaning up holding disk

2016-08-03 Thread Chaerin Kim
Hello, Debra.
Thank you for responding to my question.

I have one more question.
If I remove partial dumps in the holding disk by running "rm -rf treename",
will there be any problems when running Amanda later?
I saw *.tmp files related to the partial dumps in the index directory when I
tested this,
and I think other related files may have been created as well.
Don't I have to remove these tmp files?

Regards,
Chaerin.


2016-08-04 1:33 GMT+09:00 Debra S Baddorf :

>
> > On Aug 2, 2016, at 9:55 PM, Chaerin Kim  wrote:
> >
> > Hello.
> >
> > I have a question about cleaning up holding disk.
> > For cleaning up holding disk, do I have to run amflush command?
> >
> > I don't want to waste a tape by flushing incomplete backup image
> remaining in holding disk.
> > So, I searched on the internet and found this page in zmanda forum.
> > (
> https://forums.zmanda.com/showthread.php?1565-Reset-Amanda-and-remove-holding-files
> )
>
>
> > An Amanda hacker said, 'You can manually remove the data on the holding
> disk (rm -rf ) and run "amcleanup" after that.'.
> > So, I tried it.
> > But amcleanup returned following message and didn't do anything.
> > "amcleanup: no unprocessed logfile to clean up.”
>
> If you have AUTOFLUSH  set to ALL or YES (the latter flushes only the DLEs
> listed on the dump command;  so if you are
> re-doing only SOME of your DLEs,  then setting AUTOFLUSH  to YES  will
> only put  *those*same*  DLEs onto the tape;
> I prefer to set it to ALL)
> then it should flush to tape those dumps which are still valid.
>
> Dumps which were partial,  and will never be run onto the tape  (ie you
> find them there several days later, and amanda
> has not flushed them)  —  Yeah,  I just run   “rm -fr  treename”
> Amcleanup doesn’t seem to be needed at this point, since amanda has
> already rejected the dumps.
>
> I’m only told to use amcleanup when I control-C  a backup,  or do
> something else weird  (a machine crash in the middle
> would probably look the same).So,  if  “AMCLEANUP  ” **  gives
> no replies,  then you are fine.
>
> **(I’m capitalizing some of the above commands to avoid spellcheck.
> Don’t capitalize them yourself!)
>
> Deb Baddorf
> Fermilab
>
> > I tried to executing amcleanup with every options, but the result was
> same.
> >
> > According to man page of amcleanup, amcleanup with '-r' option remove
> bad files in holding disk.
> > What is the bad files?
> > And what should I do to remove remaining data caused by failure backup
> in holding disk?
> > Are there any recommended way about cleaning up holding disk in Amanda?
> >
> > Please answer my question.
> >
> > Regards,
> > Chaerin.
>
>
>


Re: A question about cleaning up holding disk

2016-08-03 Thread Debra S Baddorf

> On Aug 2, 2016, at 9:55 PM, Chaerin Kim  wrote:
> 
> Hello.
> 
> I have a question about cleaning up holding disk.
> For cleaning up holding disk, do I have to run amflush command?
> 
> I don't want to waste a tape by flushing incomplete backup image remaining in 
> holding disk.
> So, I searched on the internet and found this page in zmanda forum.
> (https://forums.zmanda.com/showthread.php?1565-Reset-Amanda-and-remove-holding-files)


> An Amanda hacker said, 'You can manually remove the data on the holding disk 
> (rm -rf ) and run "amcleanup" after that.'.
> So, I tried it.
> But amcleanup returned following message and didn't do anything.
> "amcleanup: no unprocessed logfile to clean up.”

If you have AUTOFLUSH  set to ALL or YES (the latter flushes only the DLEs 
listed on the dump command;  so if you are
re-doing only SOME of your DLEs,  then setting AUTOFLUSH  to YES  will only put 
 *those*same*  DLEs onto the tape;
I prefer to set it to ALL)
then it should flush to tape those dumps which are still valid.

Dumps which were partial,  and will never be run onto the tape  (ie you find 
them there several days later, and amanda
has not flushed them)  —  Yeah,  I just run   “rm -fr  treename”
Amcleanup doesn’t seem to be needed at this point, since amanda has already 
rejected the dumps.

I’m only told to use amcleanup when I control-C  a backup,  or do something 
else weird  (a machine crash in the middle
would probably look the same).So,  if  “AMCLEANUP  ” **  gives no 
replies,  then you are fine.

**(I’m capitalizing some of the above commands to avoid spellcheck.  Don’t 
capitalize them yourself!)

Deb Baddorf
Fermilab

> I tried to executing amcleanup with every options, but the result was same.
> 
> According to man page of amcleanup, amcleanup with '-r' option remove bad 
> files in holding disk.
> What is the bad files?
> And what should I do to remove remaining data caused by failure backup in 
> holding disk?
> Are there any recommended way about cleaning up holding disk in Amanda?
> 
> Please answer my question.
> 
> Regards,
> Chaerin.





A question about cleaning up holding disk

2016-08-02 Thread Chaerin Kim
Hello.

I have a question about cleaning up the holding disk.
To clean up the holding disk, do I have to run the amflush command?

I don't want to waste a tape by flushing an incomplete backup image remaining
in the holding disk.
So, I searched on the internet and found this page in the zmanda forum:
(
https://forums.zmanda.com/showthread.php?1565-Reset-Amanda-and-remove-holding-files
)
An Amanda hacker said, 'You can manually remove the data on the holding
disk (rm -rf ) and run "amcleanup" after that.'.
So, I tried it.
But amcleanup returned the following message and didn't do anything:
"amcleanup: no unprocessed logfile to clean up."
I tried executing amcleanup with every option, but the result was the same.

According to the man page of amcleanup, amcleanup with the '-r' option removes bad
files in the holding disk.
What are the bad files?
And what should I do to remove leftover data from a failed backup in the
holding disk?
Is there any recommended way to clean up the holding disk in Amanda?

Please answer my question.

Regards,
Chaerin.


Re: Leaving dumps on the holding disk?

2015-01-26 Thread Debra S Baddorf
I wonder if one could somehow use the  AMVAULT  command to  do this?
It serves to make a COPY  of a dump tape  (as I understand it).

At the very least,  you could have a cron job  copy the  tape  back  *off*  of 
the
tape,  onto a spare corner of the holding disk that you had labeled as   
“virtual disk”
tape storage.
Or,  no wait …..do that on-disk copy first.   Have the holding disk data   
saved to another
portion of the large disk area  that is a “virtual disk tape storage”  as your 
nightly
backup tape.   But since you really want a physical tape  (I gather you do?  I 
do.)
then use  AMVAULT   to copy  the virtual disk  off to the real tape that you 
want.
Every night.   This would leave you with a copy of it on virtual disk.Which 
could be
set up to have a very short  recycle period,  if you like?  (Assuming you don’t 
have
*that*  much space to spare.)

Maybe play with this idea a bit?

Deb Baddorf
Fermilab

( sorry for capitalizing words like  AMVAULT — my mac keeps trying to change
the spelling of words that I write)


On Jan 26, 2015, at 12:09 PM, Jason L Tibbitts III  wrote:

>>>>>> "JM" == Jean-Louis Martineau  writes:
> 
> JM> The main problem is that if you leave the dump in the holding disk,
> JM> amanda will automatically re-flush (autoflush) them on the next run.
> JM> There is no way to store the information about dump that are already
> JM> flushed and dump that are not flushed.
> 
> Well, I figured as a hack, they'd just get renamed in some way that the
> code would ignore when it comes to flushing things, and a cron job would
> nuke the oldest ones every few minutes if the disk is full.  But of
> course that wouldn't integrate the held dumps with the regular restore
> process.
> 
> JM> The amanda development branch have that capabilities. Implementing
> JM> this feature will be a lot easier.
> 
> Cool.  It's not as if I'm in a huge hurry.  I just thought it might be
> nice to use the holding disk for more than just a very temporary staging
> area.
> 
> - J<




Re: Leaving dumps on the holding disk?

2015-01-26 Thread Jason L Tibbitts III
>>>>> "JM" == Jean-Louis Martineau  writes:

JM> The main problem is that if you leave the dump in the holding disk,
JM> amanda will automatically re-flush (autoflush) them on the next run.
JM> There is no way to store the information about dump that are already
JM> flushed and dump that are not flushed.

Well, I figured as a hack, they'd just get renamed in some way that the
code would ignore when it comes to flushing things, and a cron job would
nuke the oldest ones every few minutes if the disk is full.  But of
course that wouldn't integrate the held dumps with the regular restore
process.

JM> The amanda development branch have that capabilities. Implementing
JM> this feature will be a lot easier.

Cool.  It's not as if I'm in a huge hurry.  I just thought it might be
nice to use the holding disk for more than just a very temporary staging
area.

 - J<


Re: Leaving dumps on the holding disk?

2015-01-26 Thread Jean-Louis Martineau

On 01/25/2015 10:27 PM, Jason L Tibbitts III wrote:

My amanda server has a really large holding disk, because disk is cheap
and because lots of disk striping generally equals better write
performance.

The usual restore operation I have to do is pulling things off of last
night's backup, which involves waiting for the library to load things
and such, and of course hoping that there's no problem with the tapes.
But there's a good chance that the backup image I need was just on the
holding disk, and if it hadn't been deleted then there would be no
reason to touch the tapes at all.  In fact, even with LTO6 tapes, I
should still be able to fit several tapes worth of backups on the
holding disk.

Is there any way to force amanda to delay deleting dumps until it
actually needs space on the holding disk?  Or, is there any particular
place I might start looking in order to hack this in somehow?


Interesting idea!

The main problem is that if you leave the dumps in the holding disk,
amanda will automatically re-flush (autoflush) them on the next run.
There is no way to store the information about which dumps are already
flushed and which are not.


The amanda development branch has that capability. Implementing this
feature there will be a lot easier.


Jean-Louis


Re: Leaving dumps on the holding disk?

2015-01-25 Thread Toomas Aas

On Mon, 26 Jan 2015, Jason L Tibbitts III  wrote:


The usual restore operation I have to do is pulling things off of last
night's backup, which involves waiting for the library to load things
and such, and of course hoping that there's no problem with the tapes.
But there's a good chance that the backup image I need was just on the
holding disk, and if it hadn't been deleted then there would be no
reason to touch the tapes at all.  In fact, even with LTO6 tapes, I
should still be able to fit several tapes worth of backups on the
holding disk.

Is there any way to force amanda to delay deleting dumps until it
actually needs space on the holding disk?  Or, is there any particular
place I might start looking in order to hack this in somehow?


I don't know, but there is an alternative approach you could use. If  
your tapes are big enough to hold several days worth of dumps, you  
could delay writing dumps to tape until enough of them have gathered  
on the holding disk to fill a tape. That is what I am doing here.


Relevant parameters in my amanda.conf:

flush-threshold-dumped 100
flush-threshold-scheduled 100
taperflush 100
autoflush yes
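
As I understand these settings (the amanda.conf man page has the exact
wording, so treat this as a rough gloss):

flush-threshold-dumped 100     # don't start taping until dumps on the holding disk reach a full tape's worth
flush-threshold-scheduled 100  # likewise, counting dumps that are still scheduled
taperflush 100                 # at the end of the run, flush only if the leftovers would fill a full tape
autoflush yes                  # carry any leftover holding-disk dumps into the next run's taping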


--
Toomas Aas
Tartu linnakantselei arvutivõrgu peaspetsialist
tel 736 1274
mob 513 6493



Leaving dumps on the holding disk?

2015-01-25 Thread Jason L Tibbitts III
My amanda server has a really large holding disk, because disk is cheap
and because lots of disk striping generally equals better write
performance.

The usual restore operation I have to do is pulling things off of last
night's backup, which involves waiting for the library to load things
and such, and of course hoping that there's no problem with the tapes.
But there's a good chance that the backup image I need was just on the
holding disk, and if it hadn't been deleted then there would be no
reason to touch the tapes at all.  In fact, even with LTO6 tapes, I
should still be able to fit several tapes worth of backups on the
holding disk.

Is there any way to force amanda to delay deleting dumps until it
actually needs space on the holding disk?  Or, is there any particular
place I might start looking in order to hack this in somehow?

 - J<


Re: Amanda 2.5.1p3 does not recognize ZFS holding disk

2014-10-24 Thread Nathan Stratton Treadway
On Mon, Oct 20, 2014 at 14:24:23 -0400, Nathan Stratton Treadway wrote:
> If that's indeed the problem, it should be possible (as an alternative
> to rebuilding Amanda [or upgrading to a later version]) to simply shrink
> the size of the ZFS partition down enough to avoid triggering the
> overflow.
> 
> You could just do that by trial and error, or if you send the output of
>   df -v /var/spool/amanda/disk1 /var/spool/amanda/disk2
> and
>   df -g /var/spool/amanda/disk1 /var/spool/amanda/disk2
> , I'll see if I can tell from the 2.5 code what's overflowing and come
> up with the max size that should work...

Chris sent me the output of "df -g /var/spool/amanda/disk2":

/var/spool/amanda/disk2(jpool/amandaspace ):   131072 block size   512 frag size
419430400 total blocks  167374111 free blocks 167374111 available   167432983 total files
167374111 free files 67174417 filesys id
  zfs fstype   0x0004 flag 255 filename length

...and that turned out to have the key clue, specifically the section
saying "512 frag size".

It turns out that in the Amanda 2.5.1p3 code line the routine to check
holding disk free space does some calculations that assume the frag size
is a multiple of 1024, and the 512 frag size found here caused it to
round everything down to zero.  (That's why the amcheck message says
"only 0 KB free", rather than some number that results from an integer
overflow situation.)

(The frag size on the UFS filesystem containing his other holding disk
was 1024, so it is indeed a problem specific to ZFS in this case.)
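
Just to illustrate the rounding (back-of-the-envelope shell arithmetic, not
the actual amcheck.c code): once the frag size is converted to kilobytes with
integer division, a 512-byte frag size becomes zero, and anything multiplied
by it collapses to zero as well:

  $ echo $(( 512 / 1024 ))                   # frag size in KB, integer division
  0
  $ echo $(( 167374111 * (512 / 1024) ))     # free blocks times KB-per-frag: "0 KB free"
  0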


I did a little web searching and it seems that the "ashift" setting on
ZFS vdevs might possibily be related to the frag size value for the
filesystem, but I don't have access to a Solaris system where I can
create a ZFS filesystem using a greater-than-9 ashift and then confirm
that df -g reports a "frag size" bigger than 512.

I believe that if it did so, Amanda 2.5.1p3 would then be able to use
that holding disk -- but on the other hand it seems that any version of
Amanda 2.5.2 or later fixes this particular bug, so it might be easier
just to upgrade Amanda rather than attempting to fix this on the ZFS
side.

Nathan


p.s. A few related URLs, just to record them somewhere:

The code in question:
  
http://amanda.cvs.sourceforge.net/viewvc/amanda/amanda/server-src/amcheck.c?revision=1.149.2.10&view=markup&pathrev=amanda251p3#l87
  
http://amanda.cvs.sourceforge.net/viewvc/amanda/amanda/common-src/statfs.c?revision=1.16&view=markup&pathrev=amanda251p3#l133
  
A bug covering the same problem in filesystem free space calculations,
though in a different environment:
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=420100




Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: Amanda 2.5.1p3 does not recognize ZFS holding disk

2014-10-20 Thread Nathan Stratton Treadway
On Mon, Oct 20, 2014 at 13:16:17 -0400, Chris Hoogendyk wrote:
> According to Wietse Venema (with regard to compiling Postfix on Solaris with 
> ZFS):
> 
> There was a workaraound involving setting parameters on the ZFS
> that didn't overload the statvfs() call.
> 
> The fix was to build it using statvfs64().
> 
> 
> I don't know if that is the answer for Amanda, but it sounds like a 
> possibility.

Given Jean-Louis's note about Amanda 2.5 not using gnulib, there's
certainly more opportunity for problems with overflowing 32-bit
size limits.

If that's indeed the problem, it should be possible (as an alternative
to rebuilding Amanda [or upgrading to a later version]) to simply shrink
the size of the ZFS partition down enough to avoid triggering the
overflow.

You could just do that by trial and error, or if you send the output of
  df -v /var/spool/amanda/disk1 /var/spool/amanda/disk2
and
  df -g /var/spool/amanda/disk1 /var/spool/amanda/disk2
, I'll see if I can tell from the 2.5 code what's overflowing and come
up with the max size that should work...
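
Since the holding area here is a ZFS dataset with a quota, "shrinking" it
should just be a matter of lowering that quota, assuming the overflow theory
turns out to be right at all; something along these lines, with 100G picked
arbitrarily:

  zfs set quota=100G jpool/amandaspace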

Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: Amanda 2.5.1p3 does not recognize ZFS holding disk

2014-10-20 Thread Jean-Louis Martineau

On 10/17/2014 07:34 PM, Nathan Stratton Treadway wrote:

On Fri, Oct 17, 2014 at 01:08:42 -0400, Nathan Stratton Treadway wrote:

On Thu, Oct 16, 2014 at 15:58:58 -0400, Chris Hoogendyk wrote:

Is it possible that Amanda 2.5.1p3 is using some UFS specific system
level call that doesn't work for ZFS?

I had a copy of the amanda 2.6.1p1 source lying around (as downloaded
from Ubuntu), and skimming through that quickly it looks like on Sun,
amcheck (via the "get_fs_usage()" function in gnulib/fsusage.c) uses the
"statfs()" function to get the available disk space info for a
particular partition.


2.5.1p3 does not use get_fs_usage from gnulib.
We moved to get_fs_usage because the amanda code was not good on some
filesystems.


Jean-Louis



If you happen to have GNU coreutils installed on this system, the output
   stat -f /var/spool/amanda/disk2
might be interesting.  (I believe that also uses the statfs() call
internally.)

Looking more closely I think the "statfs()" call was for SunOS 4.x, and
more recent Solaris versions use the "statvfs()" call.

 From the log generated by various "truss df ..." commands it seems that
/usr/bin/df also uses a statvfs-family function internally (at least on
the "SunOS 5.9" system I am able to test on)... so off hand it's not
obvious why amcheck would get different results than "df".

Still, there's a small chance that seeing the output you get from
   df -v /var/spool/amanda/disk1 /var/spool/amanda/disk2
and
   df -g /var/spool/amanda/disk1 /var/spool/amanda/disk2
could shed some light on what's going on with that call.

Nathan



Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
  GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
  Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239




Re: Amanda 2.5.1p3 does not recognize ZFS holding disk

2014-10-20 Thread Chris Hoogendyk

Thanks, Nathan,

According to Wietse Venema (with regard to compiling Postfix on Solaris with 
ZFS):

There was a workaround involving setting parameters on the ZFS that didn't overload the 
statvfs() call.


The fix was to build it using statvfs64().


I don't know if that is the answer for Amanda, but it sounds like a possibility.



On 10/17/14 7:34 PM, Nathan Stratton Treadway wrote:

On Fri, Oct 17, 2014 at 01:08:42 -0400, Nathan Stratton Treadway wrote:

On Thu, Oct 16, 2014 at 15:58:58 -0400, Chris Hoogendyk wrote:

Is it possible that Amanda 2.5.1p3 is using some UFS specific system
level call that doesn't work for ZFS?

I had a copy of the amanda 2.6.1p1 source lying around (as downloaded
from Ubuntu), and skimming through that quickly it looks like on Sun,
amcheck (via the "get_fs_usage()" function in gnulib/fsusage.c) uses the
"statfs()" function to get the available disk space info for a
particular partition.

If you happen to have GNU coreutils installed on this system, the output
   stat -f /var/spool/amanda/disk2
might be interesting.  (I believe that also uses the statfs() call
internally.)

Looking more closely I think the "statfs()" call was for SunOS 4.x, and
more recent Solaris versions use the "statvfs()" call.

From the log generated by various "truss df ..." commands it seems that
/usr/bin/df also uses a statvfs-family function internally (at least on
the "SunOS 5.9" system I am able to test on)... so off hand it's not
obvious why amcheck would get different results than "df".

Still, there's a small chance that seeing the output you get from
   df -v /var/spool/amanda/disk1 /var/spool/amanda/disk2
and
   df -g /var/spool/amanda/disk1 /var/spool/amanda/disk2
could shed some light on what's going on with that call.

Nathan



Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
  GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
  Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239



--
---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology & Geology Departments
 (*) \(*) -- 347 Morrill Science Center
~~ - University of Massachusetts, Amherst



---

Erdös 4



Re: Amanda 2.5.1p3 does not recognize ZFS holding disk

2014-10-17 Thread Nathan Stratton Treadway
On Fri, Oct 17, 2014 at 01:08:42 -0400, Nathan Stratton Treadway wrote:
> On Thu, Oct 16, 2014 at 15:58:58 -0400, Chris Hoogendyk wrote:
> > Is it possible that Amanda 2.5.1p3 is using some UFS specific system
> > level call that doesn't work for ZFS?
> 
> I had a copy of the amanda 2.6.1p1 source lying around (as downloaded
> from Ubuntu), and skimming through that quickly it looks like on Sun,
> amcheck (via the "get_fs_usage()" function in gnulib/fsusage.c) uses the
> "statfs()" function to get the available disk space info for a
> particular partition.
> 
> If you happen to have GNU coreutils installed on this system, the output 
>   stat -f /var/spool/amanda/disk2
> might be interesting.  (I believe that also uses the statfs() call
> internally.)

Looking more closely I think the "statfs()" call was for SunOS 4.x, and
more recent Solaris versions use the "statvfs()" call.

From the log generated by various "truss df ..." commands it seems that
/usr/bin/df also uses a statvfs-family function internally (at least on
the "SunOS 5.9" system I am able to test on)... so off hand it's not
obvious why amcheck would get different results than "df".

Still, there's a small chance that seeing the output you get from 
  df -v /var/spool/amanda/disk1 /var/spool/amanda/disk2
and
  df -g /var/spool/amanda/disk1 /var/spool/amanda/disk2
could shed some light on what's going on with that call.

Nathan



Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: Amanda 2.5.1p3 does not recognize ZFS holding disk

2014-10-16 Thread Nathan Stratton Treadway
On Thu, Oct 16, 2014 at 15:58:58 -0400, Chris Hoogendyk wrote:
> Is it possible that Amanda 2.5.1p3 is using some UFS specific system
> level call that doesn't work for ZFS?

I had a copy of the amanda 2.6.1p1 source lying around (as downloaded
from Ubuntu), and skimming through that quickly it looks like on Sun,
amcheck (via the "get_fs_usage()" function in gnulib/fsusage.c) uses the
"statfs()" function to get the available disk space info for a
particular partition.

If you happen to have GNU coreutils installed on this system, the output 
  stat -f /var/spool/amanda/disk2
might be interesting.  (I believe that also uses the statfs() call
internally.)

Nathan



Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: Amanda 2.5.1p3 does not recognize ZFS holding disk

2014-10-16 Thread Chris Hoogendyk
On Solaris it is /etc/vfstab. However, if you are using ZFS, it is not in vfstab. It is all accessed 
via the zpool and zfs commands.


I gave all the output of the zfs properties as well as the permissions on the 
directories.

The amcheck command has to be run as amanda. You can't run it as root. I can cd into the directory 
and do touch or mkdir as amanda.



On 10/16/14 6:20 PM, Joi L. Ellis wrote:

What does /etc/fstab contain for the two partitions with the holding disks?

I've never used a zfs filesystem; does the amanda account have sufficient 
permissions to create files/directories on the new holding disk directory?  
Could the mount permissions be incorrect?  (IE it's mounted for owner=root and 
thus only root can read/write to it?)  If mount has a uid=root option, it 
wouldn't matter what the actual uid of the owner is inside the filesystem, as 
the kernel overrides it with the mount option.

It's not clear from your samples if you were logged in as your amanda account 
for your tests, or as root.


--
Joi Owen
System Administrator
Pavlov Media, Inc

-Original Message-
From: owner-amanda-us...@amanda.org [mailto:owner-amanda-us...@amanda.org] On 
Behalf Of Chris Hoogendyk
Sent: Thursday, October 16, 2014 3:48 PM
To: Cuttler, Brian (HEALTH); AMANDA users
Subject: Re: Amanda 2.5.1p3 does not recognize ZFS holding disk

If it were not mounted, it would not show up in df -k with the mount point.

Also, I copied a bunch of files from disk1 to disk2, which requires that it be 
mounted. I can cd into it and list its contents.

I find it puzzling that the OS has no issues with it, but Amanda does.


On 10/16/14 4:08 PM, Cuttler, Brian (HEALTH) wrote:

Is the file system actually mounted?

You may have a zfs filsystem with a mount point but not have mounted it.

-Original Message-
From: owner-amanda-us...@amanda.org
[mailto:owner-amanda-us...@amanda.org] On Behalf Of Chris Hoogendyk
Sent: Thursday, October 16, 2014 3:59 PM
To: AMANDA users
Subject: Amanda 2.5.1p3 does not recognize ZFS holding disk

I have an older Sun server (T5220, Solaris 10, J4200 SAS, LIB162-AIT5)  that is 
still running but close to being replaced.

I tried to add some holding disk space by allocating from a ZFS pool.

amcheck tells me that there is 0 KB free, but df -k tells me it has 179G free. 
amcheck debug makes no reference to this check.

Is it possible that Amanda 2.5.1p3 is using some UFS specific system level call 
that doesn't work for ZFS?

Following is just about everything I thought might be relevant.


-

wahoo:/usr/local/etc/amanda$ amcheck daily

Amanda Tape Server Host Check
-----------------------------
WARNING: holding disk /var/spool/amanda/disk2: only 0 KB free, using nothing
Holding disk /var/spool/amanda/disk1: 32070944 KB disk space available, using 30534944 KB

-

wahoo:/# df -k | grep amanda

/dev/dsk/c1t2d0s3    103275934  70172231  32070944   69%  /var/spool/amanda/disk1
/dev/dsk/c1t0d0s6     24784872     24585  24512439    1%  /usr/local/etc/amanda/snapshots
jpool/amandaspace    209715200  29841383 179873817   15%  /var/spool/amanda/disk2

--

wahoo:/# zfs get all jpool/amandaspace

NAME   PROPERTY VALUESOURCE
jpool/amandaspace  type filesystem   -
jpool/amandaspace  creation Fri Oct 10 17:20 2014-
jpool/amandaspace  used 28.5G-
jpool/amandaspace  available172G -
jpool/amandaspace  referenced   28.5G-
jpool/amandaspace  compressratio1.00x-
jpool/amandaspace  mounted  yes  -
jpool/amandaspace  quota200G local
jpool/amandaspace  reservation  none default
jpool/amandaspace  recordsize   128K default
jpool/amandaspace  mountpoint   /var/spool/amanda/disk2  local
jpool/amandaspace  sharenfs off  default
jpool/amandaspace  checksum on   default
jpool/amandaspace  compression  off  default
jpool/amandaspace  atimeon   default
jpool/amandaspace  devices  on   default
jpool/amandaspace  exec on   default
jpool/amandaspace  setuid   on   default
jpool/amandaspace  readonly off  default
jpool/amandaspace  zonedoff  default
jpool/amandaspace  snapdir  hidden   default
jpool/amandaspace  aclmode  groupmaskdefault
jpool/amandaspace  aclinherit   restricted   default
jpool/amandaspace  canmount on   default
jpool/amandaspace  shareiscsi   off  default
jpool/am

RE: Amanda 2.5.1p3 does not recognize ZFS holding disk

2014-10-16 Thread Joi L. Ellis
What does /etc/fstab contain for the two partitions with the holding disks?

I've never used a zfs filesystem; does the amanda account have sufficient 
permissions to create files/directories on the new holding disk directory?  
Could the mount permissions be incorrect?  (IE it's mounted for owner=root and 
thus only root can read/write to it?)  If mount has a uid=root option, it 
wouldn't matter what the actual uid of the owner is inside the filesystem, as 
the kernel overrides it with the mount option.

It's not clear from your samples if you were logged in as your amanda account 
for your tests, or as root.


--
Joi Owen
System Administrator
Pavlov Media, Inc

-Original Message-
From: owner-amanda-us...@amanda.org [mailto:owner-amanda-us...@amanda.org] On 
Behalf Of Chris Hoogendyk
Sent: Thursday, October 16, 2014 3:48 PM
To: Cuttler, Brian (HEALTH); AMANDA users
Subject: Re: Amanda 2.5.1p3 does not recognize ZFS holding disk

If it were not mounted, it would not show up in df -k with the mount point.

Also, I copied a bunch of files from disk1 to disk2, which requires that it be 
mounted. I can cd into it and list its contents.

I find it puzzling that the OS has no issues with it, but Amanda does.


On 10/16/14 4:08 PM, Cuttler, Brian (HEALTH) wrote:
> Is the file system actually mounted?
>
> You may have a zfs filsystem with a mount point but not have mounted it.
>
> -Original Message-
> From: owner-amanda-us...@amanda.org 
> [mailto:owner-amanda-us...@amanda.org] On Behalf Of Chris Hoogendyk
> Sent: Thursday, October 16, 2014 3:59 PM
> To: AMANDA users
> Subject: Amanda 2.5.1p3 does not recognize ZFS holding disk
>
> I have an older Sun server (T5220, Solaris 10, J4200 SAS, LIB162-AIT5)  that 
> is still running but close to being replaced.
>
> I tried to add some holding disk space by allocating from a ZFS pool.
>
> amcheck tells me that there is 0 KB free, but df -k tells me it has 179G 
> free. amcheck debug makes no reference to this check.
>
> Is it possible that Amanda 2.5.1p3 is using some UFS specific system level 
> call that doesn't work for ZFS?
>
> Following is just about everything I thought might be relevant.
>
>
> -
>
> wahoo:/usr/local/etc/amanda$ amcheck daily
>
> Amanda Tape Server Host Check
> -----------------------------
> WARNING: holding disk /var/spool/amanda/disk2: only 0 KB free, using nothing
> Holding disk /var/spool/amanda/disk1: 32070944 KB disk space available, using 30534944 KB
>
> -
>
> wahoo:/# df -k | grep amanda
>
> /dev/dsk/c1t2d0s3    103275934  70172231  32070944   69%  /var/spool/amanda/disk1
> /dev/dsk/c1t0d0s6     24784872     24585  24512439    1%  /usr/local/etc/amanda/snapshots
> jpool/amandaspace    209715200  29841383 179873817   15%  /var/spool/amanda/disk2
>
> --
>
> wahoo:/# zfs get all jpool/amandaspace
>
> NAME   PROPERTY VALUESOURCE
> jpool/amandaspace  type filesystem   -
> jpool/amandaspace  creation Fri Oct 10 17:20 2014-
> jpool/amandaspace  used 28.5G-
> jpool/amandaspace  available172G -
> jpool/amandaspace  referenced   28.5G-
> jpool/amandaspace  compressratio1.00x-
> jpool/amandaspace  mounted  yes  -
> jpool/amandaspace  quota200G local
> jpool/amandaspace  reservation  none default
> jpool/amandaspace  recordsize   128K default
> jpool/amandaspace  mountpoint   /var/spool/amanda/disk2  local
> jpool/amandaspace  sharenfs off  default
> jpool/amandaspace  checksum on   default
> jpool/amandaspace  compression  off  default
> jpool/amandaspace  atimeon   default
> jpool/amandaspace  devices  on   default
> jpool/amandaspace  exec on   default
> jpool/amandaspace  setuid   on   default
> jpool/amandaspace  readonly off  default
> jpool/amandaspace  zonedoff  default
> jpool/amandaspace  snapdir  hidden   default
> jpool/amandaspace  aclmode  groupmaskdefault
> jpool/amandaspace  aclinherit   restricted   default
> jpool/amandaspace  canmount on   default
> jpool/amandaspace  shareiscsi   off  default
> jpool/amandaspace  xattron  

Re: Amanda 2.5.1p3 does not recognize ZFS holding disk

2014-10-16 Thread Chris Hoogendyk

If it were not mounted, it would not show up in df -k with the mount point.

Also, I copied a bunch of files from disk1 to disk2, which requires that it be mounted. I can cd 
into it and list its contents.


I find it puzzling that the OS has no issues with it, but Amanda does.


On 10/16/14 4:08 PM, Cuttler, Brian (HEALTH) wrote:

Is the file system actually mounted?

You may have a zfs filsystem with a mount point but not have mounted it.

-Original Message-
From: owner-amanda-us...@amanda.org [mailto:owner-amanda-us...@amanda.org] On 
Behalf Of Chris Hoogendyk
Sent: Thursday, October 16, 2014 3:59 PM
To: AMANDA users
Subject: Amanda 2.5.1p3 does not recognize ZFS holding disk

I have an older Sun server (T5220, Solaris 10, J4200 SAS, LIB162-AIT5)  that is 
still running but close to being replaced.

I tried to add some holding disk space by allocating from a ZFS pool.

amcheck tells me that there is 0 KB free, but df -k tells me it has 179G free. 
amcheck debug makes no reference to this check.

Is it possible that Amanda 2.5.1p3 is using some UFS specific system level call 
that doesn't work for ZFS?

Following is just about everything I thought might be relevant.


-

wahoo:/usr/local/etc/amanda$ amcheck daily

Amanda Tape Server Host Check
-----------------------------
WARNING: holding disk /var/spool/amanda/disk2: only 0 KB free, using nothing
Holding disk /var/spool/amanda/disk1: 32070944 KB disk space available, using 30534944 KB

-

wahoo:/# df -k | grep amanda

/dev/dsk/c1t2d0s3    103275934  70172231  32070944   69%  /var/spool/amanda/disk1
/dev/dsk/c1t0d0s6     24784872     24585  24512439    1%  /usr/local/etc/amanda/snapshots
jpool/amandaspace    209715200  29841383 179873817   15%  /var/spool/amanda/disk2

--

wahoo:/# zfs get all jpool/amandaspace

NAME   PROPERTY VALUESOURCE
jpool/amandaspace  type filesystem   -
jpool/amandaspace  creation Fri Oct 10 17:20 2014-
jpool/amandaspace  used 28.5G-
jpool/amandaspace  available172G -
jpool/amandaspace  referenced   28.5G-
jpool/amandaspace  compressratio1.00x-
jpool/amandaspace  mounted  yes  -
jpool/amandaspace  quota200G local
jpool/amandaspace  reservation  none default
jpool/amandaspace  recordsize   128K default
jpool/amandaspace  mountpoint   /var/spool/amanda/disk2  local
jpool/amandaspace  sharenfs off  default
jpool/amandaspace  checksum on   default
jpool/amandaspace  compression  off  default
jpool/amandaspace  atimeon   default
jpool/amandaspace  devices  on   default
jpool/amandaspace  exec on   default
jpool/amandaspace  setuid   on   default
jpool/amandaspace  readonly off  default
jpool/amandaspace  zonedoff  default
jpool/amandaspace  snapdir  hidden   default
jpool/amandaspace  aclmode  groupmaskdefault
jpool/amandaspace  aclinherit   restricted   default
jpool/amandaspace  canmount on   default
jpool/amandaspace  shareiscsi   off  default
jpool/amandaspace  xattron   default
jpool/amandaspace  copies   1default
jpool/amandaspace  version  3-
jpool/amandaspace  utf8only off  -
jpool/amandaspace  normalizationnone -
jpool/amandaspace  casesensitivity  sensitive-
jpool/amandaspace  vscanoff  default
jpool/amandaspace  nbmand   off  default
jpool/amandaspace  sharesmb off  default
jpool/amandaspace  refquota none default
jpool/amandaspace  refreservation   none default

-

wahoo:/usr/local/etc/amanda$ more 
/tmp/amanda/server/daily/amcheck.20141016151955.debug

amcheck: debug 1 pid 28285 ruid 555 euid 0: start at Thu Oct 16 15:19:55 2014
amcheck: debug 1 pid 28285 ruid 555 euid 555: rename at Thu Oct 16 15:19:56 2014
security_getdriver(name=BSD) returns ff316518
security_handleinit(handle=32cf8, driver=ff316518 (BSD))
amcheck-clients: time 0.010: bind_portrange2: Try  port 731: Available   - 
Success
amcheck-clients: time 0.010: dgram_bind: socket bound to 0.0.0.0.731
amcheck-clients: dgram_send_addr(addr=ffbf7508, dgram=ff316908)
amcheck-clients: time 0.011: (sockaddr_in *)ffbf7508 = { 2, 10080, 127.0.0.1 }
am

Amanda 2.5.1p3 does not recognize ZFS holding disk

2014-10-16 Thread Chris Hoogendyk
I have an older Sun server (T5220, Solaris 10, J4200 SAS, LIB162-AIT5)  that is still running but 
close to being replaced.


I tried to add some holding disk space by allocating from a ZFS pool.

amcheck tells me that there is 0 KB free, but df -k tells me it has 179G free. amcheck debug makes 
no reference to this check.


Is it possible that Amanda 2.5.1p3 is using some UFS specific system level call that doesn't work 
for ZFS?


Following is just about everything I thought might be relevant.


-

wahoo:/usr/local/etc/amanda$ amcheck daily

Amanda Tape Server Host Check
-----------------------------
WARNING: holding disk /var/spool/amanda/disk2: only 0 KB free, using nothing
Holding disk /var/spool/amanda/disk1: 32070944 KB disk space available, using 30534944 KB

-

wahoo:/# df -k | grep amanda

/dev/dsk/c1t2d0s3    103275934  70172231  32070944   69%  /var/spool/amanda/disk1
/dev/dsk/c1t0d0s6     24784872     24585  24512439    1%  /usr/local/etc/amanda/snapshots
jpool/amandaspace    209715200  29841383 179873817   15%  /var/spool/amanda/disk2

--

wahoo:/# zfs get all jpool/amandaspace

NAME   PROPERTY VALUESOURCE
jpool/amandaspace  type filesystem   -
jpool/amandaspace  creation Fri Oct 10 17:20 2014-
jpool/amandaspace  used 28.5G-
jpool/amandaspace  available172G -
jpool/amandaspace  referenced   28.5G-
jpool/amandaspace  compressratio1.00x-
jpool/amandaspace  mounted  yes  -
jpool/amandaspace  quota200G local
jpool/amandaspace  reservation  none default
jpool/amandaspace  recordsize   128K default
jpool/amandaspace  mountpoint   /var/spool/amanda/disk2  local
jpool/amandaspace  sharenfs off  default
jpool/amandaspace  checksum on   default
jpool/amandaspace  compression  off  default
jpool/amandaspace  atimeon   default
jpool/amandaspace  devices  on   default
jpool/amandaspace  exec on   default
jpool/amandaspace  setuid   on   default
jpool/amandaspace  readonly off  default
jpool/amandaspace  zonedoff  default
jpool/amandaspace  snapdir  hidden   default
jpool/amandaspace  aclmode  groupmaskdefault
jpool/amandaspace  aclinherit   restricted   default
jpool/amandaspace  canmount on   default
jpool/amandaspace  shareiscsi   off  default
jpool/amandaspace  xattron   default
jpool/amandaspace  copies   1default
jpool/amandaspace  version  3-
jpool/amandaspace  utf8only off  -
jpool/amandaspace  normalizationnone -
jpool/amandaspace  casesensitivity  sensitive-
jpool/amandaspace  vscanoff  default
jpool/amandaspace  nbmand   off  default
jpool/amandaspace  sharesmb off  default
jpool/amandaspace  refquota none default
jpool/amandaspace  refreservation   none default

-

wahoo:/usr/local/etc/amanda$ more 
/tmp/amanda/server/daily/amcheck.20141016151955.debug

amcheck: debug 1 pid 28285 ruid 555 euid 0: start at Thu Oct 16 15:19:55 2014
amcheck: debug 1 pid 28285 ruid 555 euid 555: rename at Thu Oct 16 15:19:56 2014
security_getdriver(name=BSD) returns ff316518
security_handleinit(handle=32cf8, driver=ff316518 (BSD))
amcheck-clients: time 0.010: bind_portrange2: Try  port 731: Available   - 
Success
amcheck-clients: time 0.010: dgram_bind: socket bound to 0.0.0.0.731
amcheck-clients: dgram_send_addr(addr=ffbf7508, dgram=ff316908)
amcheck-clients: time 0.011: (sockaddr_in *)ffbf7508 = { 2, 10080, 127.0.0.1 }
amcheck-clients: dgram_send_addr: ff316908->socket = 5
security_getdriver(name=ssh) returns ff316638
security_handleinit(handle=34880, driver=ff316638 (SSH))
security_streaminit(stream=3bcb8, driver=ff316638 (SSH))
amcheck-clients: time 0.142: dgram_recv(dgram=ff316908, timeout=0, 
fromaddr=ff3268f4)
amcheck-clients: time 0.142: (sockaddr_in *)ff3268f4 = { 2, 10080, 127.0.0.1 }
amcheck-clients: time 0.185: dgram_recv(dgram=ff316908, timeout=0, 
fromaddr=ff3268f4)
amcheck-clients: time 0.185: (sockaddr_in *)ff3268f4 = { 2, 10080, 127.0.0.1 }
amcheck-clients: dgram_send_addr(addr=ffbf73f8, dgram=ff316908)
amcheck-clients: time 0.185: (sockaddr_in *)ffbf73f8 = { 2, 10080, 127.0.0.1 }
amcheck

Re: holding disk/staging area to drive an LTO6 drive

2014-03-07 Thread Sven Rudolph
Marcus Pless  writes:

> I'm looking at a new backup server and trying to spec
> something that will keep up with incoming amanda dump
> files and keep an LTO6 drive streaming. It looks like
> 16 7200 rpm drives in a RAID 10 should easily be able
> to feed an LTO6 drive but I'd like to hear what any
> other LTO6 users are doing for a holding disk.

LTO6 isn't much faster than LTO4, AFAIR 160MB/s vs. 120 MB/s.

I am running five linux md raid5, each consisting of five 2 TB SAS
drives (7200 rpm). The five RAIDs give me five independent
"spindles". This gives enough concurrency for both incoming and outgoing
data (i.e. both dumping and taping). This gives me full taping speed:

  driver: result time 180317.487 from taper: PARTDONE 00-00401 labelxxx 2 579595935
    "[sec 3694.959354 bytes 593506237887 kps 156861.247844 orig-kb 765306100]"
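
A quick sanity check of that line with bc: the byte count divided by the
elapsed seconds reproduces the reported kps figure, which is essentially LTO6
native speed (roughly 160 MB/s to the drive):

  $ echo "593506237887 / 3694.959354 / 1024" | bc
  156861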

In your case I'd suggest 4 raids with four drives each.

I have that many disks because I want the holding disks to be able to
keep a full amanda dump. So the backup works even when tapes, tape
drives or tape changer fail completely.

Sven


holding disk/staging area to drive an LTO6 drive

2014-03-06 Thread Marcus Pless

I'm looking at a new backup server and trying to spec
something that will keep up with incoming amanda dump
files and keep an LTO6 drive streaming. It looks like
16 7200 rpm drives in a RAID 10 should easily be able
to feed an LTO6 drive but I'd like to hear what any
other LTO6 users are doing for a holding disk. We're
currently using LTO4 drives so I can't do my own real
world benchmarking. Thanks in advance!

--Marcus


Re: holding disk not used?

2011-08-29 Thread Jean-Louis Martineau

On 08/17/2011 03:35 PM, Jean-Francois Malouin wrote:

I think I got it now.

The amanda.conf used the following holding disk definition:

define holdingdisk "holddisk" {
directory "/holddisk/charm"
use -50Gb
chunksize 0
}

So I changed it to:

holdingdisk "holddisk" {
directory "/holddisk/charm"
use -50Gb
chunksize 0
}
'define holdingdisk' only defines it, but it is not used; you could have added
a simple line:

holdingdisk "holddisk"

to tell amanda to use the defined holdingdisk, or use the older syntax
like you did.
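
In other words (a sketch, reusing the directory and size from this thread),
the 'define' form needs a second statement that actually selects it:

define holdingdisk "holddisk" {
    directory "/holddisk/charm"
    use -50Gb
    chunksize 0
}
holdingdisk "holddisk"    # without this line the definition above is never used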


Jean-Louis



Re: holding disk not used?

2011-08-17 Thread Jean-Francois Malouin
* Michael Müskens  [20110817 14:46]:
> 
> Am 17.08.2011 um 18:00 schrieb Jean-Francois Malouin:
> 
> > * u...@3.am  [20110817 11:54]:
> >> 
> >> It appears that you are telling amanda to use -50GB of space for your 
> >> holding disk... why would you want a negative number?
> > 
> > 
> > driver: pid 9802 ruid 111 euid 111 version 3.3.0: start at Wed Aug 17 
> > 11:36:39 2011
> > driver: pid 9802 ruid 111 euid 111 version 3.3.0: rename at Wed Aug 17 
> > 11:36:39 2011
> > driver: find_diskspace: want 1277107616 K 
> > driver: pid 9802 finish time Wed Aug 17 11:52:43 2011
> > 
> 
> 
> hello,
> 
> what does amcheck -l say? You usually get output like this:
> 
> backup@tobak012:~$ amcheck DailySet -l
> Amanda Tape Server Host Check
> -
> Holding disk /backup/tapes/raid001/holdingdisk_DailySet: 43416 MB disk space 
> available, using 17816 MB
> NOTE: skipping tape checks
> Server check took 0.001 seconds
> 
> (brought to you by Amanda 3.3.0)
> 
> If something is wrong with the holdingdisk, it should appear right there.
> 
> /Michael

Bingo! Thanks, that put me on the right path:

~# su amanda -c "/opt/amanda/sbin/amcheck -l charm"
Amanda Tape Server Host Check
-
NOTE: skipping tape checks
NOTE: host info dir /opt/amanda/usr/adm/amanda/charm/curinfo/gaspar does not 
exist
NOTE: it will be created on the next run.
NOTE: index dir /opt/amanda/usr/adm/amanda/charm/index/gaspar does not exist
NOTE: it will be created on the next run.
Server check took 0.001 seconds

(brought to you by Amanda 3.3.0)

I think I got it now. 

The amanda.conf used the following holding disk definition:

define holdingdisk "holddisk" {
   directory "/holddisk/charm"
   use -50Gb
   chunksize 0
}

So I changed it to:

holdingdisk "holddisk" {
   directory "/holddisk/charm"
   use -50Gb
   chunksize 0
}

and now the amcheck outputs:

~# su amanda -c "/opt/amanda/sbin/amcheck -l charm"
Amanda Tape Server Host Check
-
Holding disk /holddisk/charm: 4004264 MB disk space available, using 3953064 MB
NOTE: skipping tape checks
NOTE: host info dir /opt/amanda/usr/adm/amanda/charm/curinfo/gaspar does not 
exist
NOTE: it will be created on the next run.
NOTE: index dir /opt/amanda/usr/adm/amanda/charm/index/gaspar does not exist
NOTE: it will be created on the next run.
Server check took 0.001 seconds

(brought to you by Amanda 3.3.0)

amdump is now writing to the hold disk.

thanks again!
jf

> 
> -- 
> Michael Müskens
> 
> Rule #18: It's better to seek forgiveness than ask permission.
> 

--
Lay on, MacDuff, and curs'd be him who first cries, "Hold, enough!".
-- Shakespeare


Re: holding disk not used?

2011-08-17 Thread Michael Müskens

Am 17.08.2011 um 18:00 schrieb Jean-Francois Malouin:

> * u...@3.am  [20110817 11:54]:
>> 
>> It appears that you are telling amanda to use -50GB of space for your holding
>> disk... why would you want a negative number?
> 
> 
> driver: pid 9802 ruid 111 euid 111 version 3.3.0: start at Wed Aug 17 
> 11:36:39 2011
> driver: pid 9802 ruid 111 euid 111 version 3.3.0: rename at Wed Aug 17 
> 11:36:39 2011
> driver: find_diskspace: want 1277107616 K 
> driver: pid 9802 finish time Wed Aug 17 11:52:43 2011
> 


hello,

what does amcheck -l say? You usually get output like this:

backup@tobak012:~$ amcheck DailySet -l
Amanda Tape Server Host Check
-
Holding disk /backup/tapes/raid001/holdingdisk_DailySet: 43416 MB disk space 
available, using 17816 MB
NOTE: skipping tape checks
Server check took 0.001 seconds

(brought to you by Amanda 3.3.0)

If something is wrong with the holdingdisk, it should appear right there.

/Michael

-- 
Michael Müskens

Rule #18: It's better to seek forgiveness than ask permission.




Re: holding disk not used?

2011-08-17 Thread gene heskett
On Wednesday, August 17, 2011 12:58:46 PM u...@3.am did opine:

> It appears that you are telling amanda to use -50GB of space for your
> holding disk... why would you want a negative number?
 
That _used_ to be (it could have been changed in the last 2-3 years) how one
would specify the use of all available holdingdisk capacity until there is
only 50Gb left. Do you have that much available after the reserve is
subtracted?

It also might be that somehow the reserve is 100%, meaning save it all for
emergency "no tape(s) available" use.

> Mine us configured as:
> 
> use 17 Mb
> 
> > Anyone on this?
> > Right now this is a show stopper for me :(
> > 
> > jf
> > 
> > * Jean-Francois Malouin 
> > [20110815
> > 
> > 13:26]:
> >> Hi,
> >> 
> >> I've have this seemingly simple problem but I can't put my finger on
> >> it  :)
> >> 
> >> I just installed amanda-3.3.0 on a new server and amanda doesn't seem
> >> to use the holding disk: it port-dumps directly to tape and if I
> >> specify 'holdingdisk required' in the dumptype the run simply fails:
> >> 
> >> define holdingdisk "holddisk" {
> >> 
> >>directory "/holddisk/charm"
> >>use -50Gb
> >>chunksize 0
> >> 
> >> }
> >> 
> >> define dumptype "app-amgtar-span" {
> >> 
> >> "global"
> >> program "APPLICATION"
> >> application "app-amgtar"
> >> priority high
> >> allow-split
> >> holdingdisk required
> >> compress none
> >> 
> >> }
> >> 
> >> Permissions are ok for /holddisk/charm.
> >> I've attached the amdump log file.  I can provide more debug upon
> >> request.
> >> 
> >> amdump log has this:
> >> 
> >> driver: flush size 0
> >> find diskspace: not enough diskspace. Left with 1277107616 K
> >> driver: state time 1642.093 free kps: 1024000 space: 0 taper: idle
> >> idle-dumpers: 12 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0
> >> driver-idle: no-diskspace
> >> 
> >> The tapetype definition is:
> >> 
> >> define tapetype tape-lto5 {
> >> 
> >> comment "Created by amtapetype; compression disabled"
> >> length 1480900608 kbytes
> >> filemark 3413 kbytes
> >> speed 107063 kps
> >> blocksize 2048 kbytes
> >> part-size 100gb
> >> part-cache-max-size 100gb
> >> 
> >> }
> >> 
> >> and the DLE is
> >> 
> >> gaspar /raid/ipl {
> >> 
> >> "app-amgtar-span"
> >> record no
> >> 
> >> }
> >> 
> >> ??
> >> 
> >> thanks in advance,
> >> jf
> >> 
> >> amdump: start at Fri Aug 12 20:59:45 EDT 2011
> >> amdump: datestamp 20110812
> >> amdump: starttime 20110812205945
> >> amdump: starttime-locale-independent 2011-08-12 20:59:45 EDT
> >> driver: pid 5120 executable /opt/amanda-3.3.0/libexec/amanda/driver
> >> version 3.3.0
> >> planner: pid 5119 executable /opt/amanda-3.3.0/libexec/amanda/planner
> >> version 3.3.0
> >> planner: build: VERSION="Amanda-3.3.0"
> >> planner:BUILT_DATE="Wed Aug 10 13:24:08 EDT 2011"
> >> BUILT_MACH="" planner:BUILT_REV="4084" BUILT_BRANCH="3_3"
> >> CC="gcc" planner: paths: bindir="/opt/amanda-3.3.0/bin"
> >> planner:sbindir="/opt/amanda-3.3.0/sbin"
> >> planner:libexecdir="/opt/amanda-3.3.0/libexec"
> >> planner:amlibexecdir="/opt/amanda-3.3.0/libexec/amanda"
> >> planner:mandir="/opt/man" AMANDA_TMPDIR="/var/tmp/amanda"
> >> planner:AMANDA_DBGDIR="/var/tmp/amanda"
> >> planner:CONFIG_DIR="/opt/amanda-3.3.0/etc/amanda"
> >> planner:DEV_PREFIX="/dev/" RDEV_PREFIX="/dev/"
> >> DUMP="/sbin/dump" planner:RESTORE="/sbin/restore"
> >> VDUMP=UNDEF VRESTORE=UNDEF planner:XFSDUMP=UNDEF
> >> XFSRESTORE=UNDEF VXDUMP=UNDEF VXRESTORE=UNDEF planner:   
> >> SAMBA_CLIENT="/usr/bin/smbclient" GNUTAR="/bin/tar" planner:   
> >> COMPRESS_PATH="/bin/gzip" UNCOMPRESS_PATH

Re: holding disk not used?

2011-08-17 Thread up

It appears that you are telling amanda to use -50GB of space for your holding
disk... why would you want a negative number?

Mine is configured as:

use 17 Mb

> Anyone on this?
> Right now this is a show stopper for me :(
>
> jf
>
> * Jean-Francois Malouin  [20110815
> 13:26]:
>> Hi,
>>
>> I've have this seemingly simple problem but I can't put my finger on
>> it  :)
>>
>> I just installed amanda-3.3.0 on a new server and amanda doesn't seem
>> to use the holding disk: it port-dumps directly to tape and if I
>> specify 'holdingdisk required' in the dumptype the run simply fails:
>>
>> define holdingdisk "holddisk" {
>>directory "/holddisk/charm"
>>use -50Gb
>>chunksize 0
>> }
>>
>> define dumptype "app-amgtar-span" {
>> "global"
>> program "APPLICATION"
>> application "app-amgtar"
>> priority high
>> allow-split
>> holdingdisk required
>> compress none
>> }
>>
>> Permissions are ok for /holddisk/charm.
>> I've attached the amdump log file.  I can provide more debug upon
>> request.
>>
>> amdump log has this:
>>
>> driver: flush size 0
>> find diskspace: not enough diskspace. Left with 1277107616 K
>> driver: state time 1642.093 free kps: 1024000 space: 0 taper: idle
>> idle-dumpers: 12 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle:
>> no-diskspace
>>
>> The tapetype definition is:
>>
>> define tapetype tape-lto5 {
>> comment "Created by amtapetype; compression disabled"
>> length 1480900608 kbytes
>> filemark 3413 kbytes
>> speed 107063 kps
>> blocksize 2048 kbytes
>> part-size 100gb
>> part-cache-max-size 100gb
>> }
>>
>> and the DLE is
>>
>> gaspar /raid/ipl {
>> "app-amgtar-span"
>> record no
>> }
>>
>> ??
>>
>> thanks in advance,
>> jf
>
>> amdump: start at Fri Aug 12 20:59:45 EDT 2011
>> amdump: datestamp 20110812
>> amdump: starttime 20110812205945
>> amdump: starttime-locale-independent 2011-08-12 20:59:45 EDT
>> driver: pid 5120 executable /opt/amanda-3.3.0/libexec/amanda/driver version
>> 3.3.0
>> planner: pid 5119 executable /opt/amanda-3.3.0/libexec/amanda/planner version
>> 3.3.0
>> planner: build: VERSION="Amanda-3.3.0"
>> planner:BUILT_DATE="Wed Aug 10 13:24:08 EDT 2011" BUILT_MACH=""
>> planner:BUILT_REV="4084" BUILT_BRANCH="3_3" CC="gcc"
>> planner: paths: bindir="/opt/amanda-3.3.0/bin"
>> planner:sbindir="/opt/amanda-3.3.0/sbin"
>> planner:libexecdir="/opt/amanda-3.3.0/libexec"
>> planner:amlibexecdir="/opt/amanda-3.3.0/libexec/amanda"
>> planner:mandir="/opt/man" AMANDA_TMPDIR="/var/tmp/amanda"
>> planner:AMANDA_DBGDIR="/var/tmp/amanda"
>> planner:CONFIG_DIR="/opt/amanda-3.3.0/etc/amanda"
>> planner:DEV_PREFIX="/dev/" RDEV_PREFIX="/dev/" DUMP="/sbin/dump"
>> planner:RESTORE="/sbin/restore" VDUMP=UNDEF VRESTORE=UNDEF
>> planner:XFSDUMP=UNDEF XFSRESTORE=UNDEF VXDUMP=UNDEF VXRESTORE=UNDEF
>> planner:SAMBA_CLIENT="/usr/bin/smbclient" GNUTAR="/bin/tar"
>> planner:COMPRESS_PATH="/bin/gzip" UNCOMPRESS_PATH="/bin/gzip"
>> planner: LPRCMD=UNDEF  MAILER=UNDEF
>> planner:listed_incr_dir="/opt/amanda-3.3.0/var/amanda/gnutar-lists"
>> planner: defs:  DEFAULT_SERVER="edgar" DEFAULT_CONFIG="charm"
>> planner:DEFAULT_TAPE_SERVER="edgar"
>> planner:DEFAULT_TAPE_DEVICE="tape:/dev/nst0" NEED_STRSTR
>> planner:AMFLOCK_POSIX AMFLOCK_FLOCK AMFLOCK_LOCKF AMFLOCK_LNLOCK
>> planner:SETPGRP_VOID AMANDA_DEBUG_DAYS=4 BSD_SECURITY USE_AMANDAHOSTS
>> planner:CLIENT_LOGIN="amanda" CHECK_USERID HAVE_GZIP
>> planner:COMPRESS_SUFFIX=".gz" COMPRESS_FAST_OPT="--fast"
>> planner:COMPRESS_BEST_OPT="--best" UNCOMPRESS_OPT="-dc"
>> READING CONF INFO...
>> planner: timestamp 20110812205945
>> planner: tape_length is set from tape length (1480900608 KB) * runtapes (3) 
>> ==
>> 4442701824 KB
>> planner: time 0.000: startup took 0.0

Re: holding disk not used?

2011-08-17 Thread Jean-Francois Malouin
* u...@3.am  [20110817 11:54]:
> 
> It appears that you are telling amanda to use -50GB of space for your holding
> disk... why would you want a negative number?

from the man page:

   use int
       Default: 0 Gb. Amount of space that can be used in this holding disk area.
       If the value is zero, all available space on the file system is used.
       If the value is negative, Amanda will use all available space minus that
       value.
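
So with roughly 4 TB free on the filesystem, "use -50Gb" should leave Amanda
about 3.95 TB to work with:

  $ echo $(( 4004264 - 50*1024 ))   # MB free minus the 50 Gb reserve
  3953064

which matches the "using 3953064 MB" figure in the amcheck output elsewhere in
this thread.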

I tested using the default value with the same result:

driver: pid 9802 ruid 111 euid 111 version 3.3.0: start at Wed Aug 17 11:36:39 
2011
driver: pid 9802 ruid 111 euid 111 version 3.3.0: rename at Wed Aug 17 11:36:39 
2011
driver: find_diskspace: want 1277107616 K 
driver: pid 9802 finish time Wed Aug 17 11:52:43 2011

and it stops there. 

thanks
jf


> 
> Mine us configured as:
> 
> use 17 Mb
> 
> > Anyone on this?
> > Right now this is a show stopper for me :(
> >
> > jf
> >
> > * Jean-Francois Malouin  [20110815
> > 13:26]:
> >> Hi,
> >>
> >> I've have this seemingly simple problem but I can't put my finger on
> >> it  :)
> >>
> >> I just installed amanda-3.3.0 on a new server and amanda doesn't seem
> >> to use the holding disk: it port-dumps directly to tape and if I
> >> specify 'holdingdisk required' in the dumptype the run simply fails:
> >>
> >> define holdingdisk "holddisk" {
> >>directory "/holddisk/charm"
> >>use -50Gb
> >>chunksize 0
> >> }
> >>
> >> define dumptype "app-amgtar-span" {
> >> "global"
> >> program "APPLICATION"
> >> application "app-amgtar"
> >> priority high
> >> allow-split
> >> holdingdisk required
> >> compress none
> >> }
> >>
> >> Permissions are ok for /holddisk/charm.
> >> I've attached the amdump log file.  I can provide more debug upon
> >> request.
> >>
> >> amdump log has this:
> >>
> >> driver: flush size 0
> >> find diskspace: not enough diskspace. Left with 1277107616 K
> >> driver: state time 1642.093 free kps: 1024000 space: 0 taper: idle
> >> idle-dumpers: 12 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle:
> >> no-diskspace
> >>
> >> The tapetype definition is:
> >>
> >> define tapetype tape-lto5 {
> >> comment "Created by amtapetype; compression disabled"
> >> length 1480900608 kbytes
> >> filemark 3413 kbytes
> >> speed 107063 kps
> >> blocksize 2048 kbytes
> >> part-size 100gb
> >> part-cache-max-size 100gb
> >> }
> >>
> >> and the DLE is
> >>
> >> gaspar /raid/ipl {
> >> "app-amgtar-span"
> >> record no
> >> }
> >>
> >> ??
> >>
> >> thanks in advance,
> >> jf
> >
> >> amdump: start at Fri Aug 12 20:59:45 EDT 2011
> >> amdump: datestamp 20110812
> >> amdump: starttime 20110812205945
> >> amdump: starttime-locale-independent 2011-08-12 20:59:45 EDT
> >> driver: pid 5120 executable /opt/amanda-3.3.0/libexec/amanda/driver version
> >> 3.3.0
> >> planner: pid 5119 executable /opt/amanda-3.3.0/libexec/amanda/planner 
> >> version
> >> 3.3.0
> >> planner: build: VERSION="Amanda-3.3.0"
> >> planner:BUILT_DATE="Wed Aug 10 13:24:08 EDT 2011" BUILT_MACH=""
> >> planner:BUILT_REV="4084" BUILT_BRANCH="3_3" CC="gcc"
> >> planner: paths: bindir="/opt/amanda-3.3.0/bin"
> >> planner:sbindir="/opt/amanda-3.3.0/sbin"
> >> planner:libexecdir="/opt/amanda-3.3.0/libexec"
> >> planner:amlibexecdir="/opt/amanda-3.3.0/libexec/amanda"
> >> planner:mandir="/opt/man" AMANDA_TMPDIR="/var/tmp/amanda"
> >> planner:AMANDA_DBGDIR="/var/tmp/amanda"
> >> planner:CONFIG_DIR="/opt/amanda-3.3.0/etc/amanda"
> >> planner:DEV_PREFIX="/dev/" RDEV_PREFIX="/dev/" DUMP="/sbin/dump"
> >> planner:RESTORE="/sbin/restore" VDUMP=UNDEF VRESTORE=UNDEF
> >> planner:XFSDUMP=UNDEF XFSRESTORE=UNDEF VXDUMP=UNDEF VXRESTORE=UNDEF
> >> planner:SAMBA_CLIENT="/usr/bin/s

Re: holding disk not used?

2011-08-17 Thread Jean-Francois Malouin
Anyone on this?
Right now this is a show stopper for me :(

jf

* Jean-Francois Malouin  [20110815 
13:26]:
> Hi,
> 
> I've have this seemingly simple problem but I can't put my finger on
> it  :)
> 
> I just installed amanda-3.3.0 on a new server and amanda doesn't seem
> to use the holding disk: it port-dumps directly to tape and if I
> specify 'holdingdisk required' in the dumptype the run simply fails:
> 
> define holdingdisk "holddisk" {
>directory "/holddisk/charm"
>use -50Gb
>chunksize 0
> }
> 
> define dumptype "app-amgtar-span" {
> "global"
> program "APPLICATION"
> application "app-amgtar"
> priority high
> allow-split
> holdingdisk required
> compress none
> }
> 
> Permissions are ok for /holddisk/charm. 
> I've attached the amdump log file.  I can provide more debug upon
> request.
> 
> amdump log has this:
> 
> driver: flush size 0
> find diskspace: not enough diskspace. Left with 1277107616 K
> driver: state time 1642.093 free kps: 1024000 space: 0 taper: idle
> idle-dumpers: 12 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle:
> no-diskspace
> 
> The tapetype definition is:
> 
> define tapetype tape-lto5 {
> comment "Created by amtapetype; compression disabled"
> length 1480900608 kbytes
> filemark 3413 kbytes
> speed 107063 kps
> blocksize 2048 kbytes
> part-size 100gb
> part-cache-max-size 100gb
> }
> 
> and the DLE is 
> 
> gaspar /raid/ipl { 
> "app-amgtar-span"
> record no
> }
> 
> ??
> 
> thanks in advance,
> jf

> amdump: start at Fri Aug 12 20:59:45 EDT 2011
> amdump: datestamp 20110812
> amdump: starttime 20110812205945
> amdump: starttime-locale-independent 2011-08-12 20:59:45 EDT
> driver: pid 5120 executable /opt/amanda-3.3.0/libexec/amanda/driver version 
> 3.3.0
> planner: pid 5119 executable /opt/amanda-3.3.0/libexec/amanda/planner version 
> 3.3.0
> planner: build: VERSION="Amanda-3.3.0"
> planner:BUILT_DATE="Wed Aug 10 13:24:08 EDT 2011" BUILT_MACH=""
> planner:BUILT_REV="4084" BUILT_BRANCH="3_3" CC="gcc"
> planner: paths: bindir="/opt/amanda-3.3.0/bin"
> planner:sbindir="/opt/amanda-3.3.0/sbin"
> planner:libexecdir="/opt/amanda-3.3.0/libexec"
> planner:amlibexecdir="/opt/amanda-3.3.0/libexec/amanda"
> planner:mandir="/opt/man" AMANDA_TMPDIR="/var/tmp/amanda"
> planner:AMANDA_DBGDIR="/var/tmp/amanda"
> planner:CONFIG_DIR="/opt/amanda-3.3.0/etc/amanda"
> planner:DEV_PREFIX="/dev/" RDEV_PREFIX="/dev/" DUMP="/sbin/dump"
> planner:RESTORE="/sbin/restore" VDUMP=UNDEF VRESTORE=UNDEF
> planner:XFSDUMP=UNDEF XFSRESTORE=UNDEF VXDUMP=UNDEF VXRESTORE=UNDEF
> planner:SAMBA_CLIENT="/usr/bin/smbclient" GNUTAR="/bin/tar"
> planner:COMPRESS_PATH="/bin/gzip" UNCOMPRESS_PATH="/bin/gzip"
> planner: LPRCMD=UNDEF  MAILER=UNDEF
> planner:listed_incr_dir="/opt/amanda-3.3.0/var/amanda/gnutar-lists"
> planner: defs:  DEFAULT_SERVER="edgar" DEFAULT_CONFIG="charm"
> planner:DEFAULT_TAPE_SERVER="edgar"
> planner:DEFAULT_TAPE_DEVICE="tape:/dev/nst0" NEED_STRSTR
> planner:AMFLOCK_POSIX AMFLOCK_FLOCK AMFLOCK_LOCKF AMFLOCK_LNLOCK
> planner:SETPGRP_VOID AMANDA_DEBUG_DAYS=4 BSD_SECURITY USE_AMANDAHOSTS
> planner:CLIENT_LOGIN="amanda" CHECK_USERID HAVE_GZIP
> planner:COMPRESS_SUFFIX=".gz" COMPRESS_FAST_OPT="--fast"
> planner:COMPRESS_BEST_OPT="--best" UNCOMPRESS_OPT="-dc"
> READING CONF INFO...
> planner: timestamp 20110812205945
> planner: tape_length is set from tape length (1480900608 KB) * runtapes (3) 
> == 4442701824 KB
> planner: time 0.000: startup took 0.000 secs
> 
> SENDING FLUSHES...
> ENDFLUSH
> 
> SETTING UP FOR ESTIMATES...
> planner: time 0.000: setting up estimates for gaspar:/raid/ipl
> gaspar:/raid/ipl overdue 15199 days for level 0
> setup_estimate: gaspar:/raid/ipl: command 0, options: nonelast_level -1 
> next_level0 -15199 level_days 0getting estimates 0 (-3) -1 (-3) -1 (-3)
> driver: tape size 1480900608
> planner: time 0.000: setting up estimates took 0.000 secs
> 
> GETTING ESTIMATES...
> reserving 0 out of 0 for degraded-mode dumps
> driver: started dumper0 pid 51

holding disk not used?

2011-08-15 Thread Jean-Francois Malouin
Hi,

I have this seemingly simple problem but I can't put my finger on
it  :)

I just installed amanda-3.3.0 on a new server and amanda doesn't seem
to use the holding disk: it port-dumps directly to tape and if I
specify 'holdingdisk required' in the dumptype the run simply fails:

define holdingdisk "holddisk" {
   directory "/holddisk/charm"
   use -50Gb
   chunksize 0
}

define dumptype "app-amgtar-span" {
"global"
program "APPLICATION"
application "app-amgtar"
priority high
allow-split
holdingdisk required
compress none
}

Permissions are ok for /holddisk/charm. 
I've attached the amdump log file.  I can provide more debug upon
request.

amdump log has this:

driver: flush size 0
find diskspace: not enough diskspace. Left with 1277107616 K
driver: state time 1642.093 free kps: 1024000 space: 0 taper: idle
idle-dumpers: 12 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle:
no-diskspace

The tapetype definition is:

define tapetype tape-lto5 {
comment "Created by amtapetype; compression disabled"
length 1480900608 kbytes
filemark 3413 kbytes
speed 107063 kps
blocksize 2048 kbytes
part-size 100gb
part-cache-max-size 100gb
}

and the DLE is 

gaspar /raid/ipl { 
"app-amgtar-span"
record no
}

??

thanks in advance,
jf
amdump: start at Fri Aug 12 20:59:45 EDT 2011
amdump: datestamp 20110812
amdump: starttime 20110812205945
amdump: starttime-locale-independent 2011-08-12 20:59:45 EDT
driver: pid 5120 executable /opt/amanda-3.3.0/libexec/amanda/driver version 
3.3.0
planner: pid 5119 executable /opt/amanda-3.3.0/libexec/amanda/planner version 
3.3.0
planner: build: VERSION="Amanda-3.3.0"
planner:BUILT_DATE="Wed Aug 10 13:24:08 EDT 2011" BUILT_MACH=""
planner:BUILT_REV="4084" BUILT_BRANCH="3_3" CC="gcc"
planner: paths: bindir="/opt/amanda-3.3.0/bin"
planner:sbindir="/opt/amanda-3.3.0/sbin"
planner:libexecdir="/opt/amanda-3.3.0/libexec"
planner:amlibexecdir="/opt/amanda-3.3.0/libexec/amanda"
planner:mandir="/opt/man" AMANDA_TMPDIR="/var/tmp/amanda"
planner:AMANDA_DBGDIR="/var/tmp/amanda"
planner:CONFIG_DIR="/opt/amanda-3.3.0/etc/amanda"
planner:DEV_PREFIX="/dev/" RDEV_PREFIX="/dev/" DUMP="/sbin/dump"
planner:RESTORE="/sbin/restore" VDUMP=UNDEF VRESTORE=UNDEF
planner:XFSDUMP=UNDEF XFSRESTORE=UNDEF VXDUMP=UNDEF VXRESTORE=UNDEF
planner:SAMBA_CLIENT="/usr/bin/smbclient" GNUTAR="/bin/tar"
planner:COMPRESS_PATH="/bin/gzip" UNCOMPRESS_PATH="/bin/gzip"
planner: LPRCMD=UNDEF  MAILER=UNDEF
planner:listed_incr_dir="/opt/amanda-3.3.0/var/amanda/gnutar-lists"
planner: defs:  DEFAULT_SERVER="edgar" DEFAULT_CONFIG="charm"
planner:DEFAULT_TAPE_SERVER="edgar"
planner:DEFAULT_TAPE_DEVICE="tape:/dev/nst0" NEED_STRSTR
planner:AMFLOCK_POSIX AMFLOCK_FLOCK AMFLOCK_LOCKF AMFLOCK_LNLOCK
planner:SETPGRP_VOID AMANDA_DEBUG_DAYS=4 BSD_SECURITY USE_AMANDAHOSTS
planner:CLIENT_LOGIN="amanda" CHECK_USERID HAVE_GZIP
planner:COMPRESS_SUFFIX=".gz" COMPRESS_FAST_OPT="--fast"
planner:COMPRESS_BEST_OPT="--best" UNCOMPRESS_OPT="-dc"
READING CONF INFO...
planner: timestamp 20110812205945
planner: tape_length is set from tape length (1480900608 KB) * runtapes (3) == 
4442701824 KB
planner: time 0.000: startup took 0.000 secs

SENDING FLUSHES...
ENDFLUSH

SETTING UP FOR ESTIMATES...
planner: time 0.000: setting up estimates for gaspar:/raid/ipl
gaspar:/raid/ipl overdue 15199 days for level 0
setup_estimate: gaspar:/raid/ipl: command 0, options: nonelast_level -1 
next_level0 -15199 level_days 0getting estimates 0 (-3) -1 (-3) -1 (-3)
driver: tape size 1480900608
planner: time 0.000: setting up estimates took 0.000 secs

GETTING ESTIMATES...
reserving 0 out of 0 for degraded-mode dumps
driver: started dumper0 pid 5122
driver: send-cmd time 0.001 to dumper0: START 20110812205945
driver: started dumper1 pid 5123
driver: send-cmd time 0.001 to dumper1: START 20110812205945
driver: started dumper2 pid 5124
driver: send-cmd time 0.002 to dumper2: START 20110812205945
driver: started dumper3 pid 5125
driver: send-cmd time 0.002 to dumper3: START 20110812205945
driver: started dumper4 pid 5126
driver: send-cmd time 0.002 to dumper4: START 20110812205945
driver: started dumper5 pid 5127
driver: send-cmd time 0.002 to dumper5: START 20110812205945
driver: started dumper6 pid 5128
driver: send-cmd time 0.002 to dumper6: START 20110812205945
driver: started dumper7 pid 5129
driver: send-cmd 

Re: iSCSI holding disk: slow read.

2010-09-29 Thread Dustin J. Mitchell
On Wed, Sep 29, 2010 at 11:46 AM, Valeriu Mutu  wrote:
> What do you mean by "shoe-shining"?

shoe-shining is when a tape drive must stop the tape repeatedly while
it buffers more data.  It creates a lot of wear on the tape, and also
kills performance.

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com


Re: iSCSI holding disk: slow read.

2010-09-29 Thread Valeriu Mutu
Hi Dustin,

Quote: "Perhaps Amanda's not *writing* data very quickly at that point - maybe 
your tape drive is shoe-shining?"

What do you mean by "shoe-shining"?

Best,
Valeriu

On Wed, Sep 15, 2010 at 10:17:04PM -0500, Dustin J. Mitchell wrote:
> On Wed, Sep 15, 2010 at 8:08 PM, Valeriu Mutu  wrote:
> > /dev/mapper/cronos-amanda-holdingdisk
> >                886G  116G  726G  14% /data3
> > /dev/mapper/cronos-amanda-diskbuffer
> >                394G  199M  374G   1% /data4
> 
> 350G seems a bit large for a disk buffer - what part size are you using?
> 
> > When the problem re-occurs, I change the MTU to a different value: if it's 
> > 9000bytes, I change it to 1500bytes, and vice-versa.
> >
> > Does anyone know why this happens? Why does Amanda become slow at reading 
> > from the holding disk residing on an iSCSI volume?
> 
> I don't know much about iSCSI, but as you know Amanda just accesses
> the disk via the normal filesystem, so I suspect that you could see a
> similar phenomenon with any application that is reading from or
> writing to the partition.
> 
> Perhaps Amanda's not *writing* data very quickly at that point - maybe
> your tape drive is shoe-shining?  Interrupting the connection by
> resetting the MTU (and I don't see why this would interrupt the
> connection..) would allow several layers of buffers to flush,
> resulting in a surge of read activity when connectivity is restored,
> until the buffers are again full.
> 
> It sounds like there are some unresolved network issues with iSCSI, if
> the iptables is causing problems with multipathing.
> 
> I would recommend taking a hard look with tcpdump to see what's going
> on.  You'll always learn something!  I've found some people who were
> complaining vociferously that Amanda's throughput "sucked", when in
> fact their network had a high packet loss rate because their server
> was hooked into a hub with a massive number of collisions.  Hopefully
> your situation isn't quite that dire..
> 
> Dustin
> 
> -- 
> Open Source Storage Engineer
> http://www.zmanda.com

-- 
Valeriu Mutu


Re: iSCSI holding disk: slow read.

2010-09-15 Thread Dustin J. Mitchell
On Wed, Sep 15, 2010 at 8:08 PM, Valeriu Mutu  wrote:
> /dev/mapper/cronos-amanda-holdingdisk
>                886G  116G  726G  14% /data3
> /dev/mapper/cronos-amanda-diskbuffer
>                394G  199M  374G   1% /data4

350G seems a bit large for a disk buffer - what part size are you using?

> When the problem re-occurs, I change the MTU to a different value: if it's 
> 9000bytes, I change it to 1500bytes, and vice-versa.
>
> Does anyone know why this happens? Why does Amanda become slow at reading 
> from the holding disk residing on an iSCSI volume?

I don't know much about iSCSI, but as you know Amanda just accesses
the disk via the normal filesystem, so I suspect that you could see a
similar phenomenon with any application that is reading from or
writing to the partition.

Perhaps Amanda's not *writing* data very quickly at that point - maybe
your tape drive is shoe-shining?  Interrupting the connection by
resetting the MTU (and I don't see why this would interrupt the
connection..) would allow several layers of buffers to flush,
resulting in a surge of read activity when connectivity is restored,
until the buffers are again full.

It sounds like there are some unresolved network issues with iSCSI, if
the iptables is causing problems with multipathing.

I would recommend taking a hard look with tcpdump to see what's going
on.  You'll always learn something!  I've found some people who were
complaining vociferously that Amanda's throughput "sucked", when in
fact their network had a high packet loss rate because their server
was hooked into a hub with a massive number of collisions.  Hopefully
your situation isn't quite that dire..
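
A minimal capture along those lines (the interface name and the standard
iSCSI port are assumptions here, not details from this thread):

  # capture full packets on the iSCSI-facing interface while the read
  # rate is slow, then look for retransmissions and resets
  tcpdump -ni eth3 -s 0 -w iscsi-slow.pcap port 3260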

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com



iSCSI holding disk: slow read.

2010-09-15 Thread Valeriu Mutu
Hi,

I am using Amanda 2.6.1p2.

I'm currently using Equallogic iSCSI storage for Amanda's holding disk.

My current Amanda server has iptables disabled because this somehow affects 
iSCSI multipathing, i.e. if iptables is enabled, only one path works. I have 
yet to determine how to get iptables+iscsi multipathing to work. But this is 
not why I'm writing.

I have 2 iSCSI volumes setup for Amanda's usage:
[r...@cronos ~]# df -h /data3 /data4
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/cronos-amanda-holdingdisk
                      886G  116G  726G  14% /data3
/dev/mapper/cronos-amanda-diskbuffer
                      394G  199M  374G   1% /data4

One volume contains the holding disk, the other the diskbuffer.

I've noticed by running 'iostat' that Amanda gets terribly slow at reading the 
data from the holding disk, i.e. the reading speed would be a crawling 
0.5MB/sec. This happens after the backup has been running for a while, e.g. a 
day.

The manual solution I came up with is to change the MTU for the network 
interfaces:
ifconfig eth3 mtu 9000
ifconfig eth4 mtu 9000

It takes about 30 seconds for the iSCSI paths to fail and become active again. 
Once this happens, Amanda starts reading from the holding disk at normal speed 
of ~50MB/sec:
Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
dm-7            791.00        51.80         0.00         51          0

When the problem re-occurs, I change the MTU to a different value: if it's 
9000bytes, I change it to 1500bytes, and vice-versa.

Does anyone know why this happens? Why does Amanda become slow at reading from 
the holding disk residing on an iSCSI volume?

Best,
-- 
Valeriu Mutu


Re: Holding disk warning: only $SIZE KB available

2010-09-03 Thread Dustin J. Mitchell
On Fri, Sep 3, 2010 at 5:57 PM, Jon LaBadie  wrote:
> Isn't a decision made before each DLE is dumped whether there is enough
> holding disk for it?  In that case, does the reported amount (i.e. 2.3GB
> above) serve as an upper limit for the entire amdump run?  If so, maybe
> that part of the code needs to be tweaked to allow use of any space
> freed by flushed dumps.

Right - that assessment is made at the time, and any dumps that are
not yet flushed are not considered free space.  As far as I can tell,
the error is only in printing the warning, not in the actual driver
behavior.

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com



Re: Holding disk warning: only $SIZE KB available

2010-09-03 Thread Jon LaBadie
On Fri, Sep 03, 2010 at 01:19:50PM -0500, Dustin J. Mitchell wrote:
> On Fri, Sep 3, 2010 at 11:13 AM, Valeriu Mutu  wrote:
> > Next time 'amdump' runs, it will not be able to use the size of holding 
> > disk specified and the warning will be printed:
> > NOTES:
> >  driver: WARNING: /data3/amanda/holdingdisk/Daily1/: 880803840 KB 
> > requested, but only 2351104 KB available.
> >
> > Does this mean that Amanda will use a holding disk of 2351104Kb? Or will it 
> > flush the entire holding disk area to tape (880803840Kb) once the 2351104Kb 
> > are used? Also, after the flush, will it use 880803840Kb of holding disk 
> > space or just 2351104Kb?
> 
> Yes, Amanda will use whatever space it finds available.  If autoflush
> is enabled, it will begin by flushing any existing dumps to tape,
> after which it will use the full available space.
> 
> If autoflush is enabled, it would seem sensible to consider any
> existing holding files to be available space when deciding whether to
> print this warning, but this isn't currently done (that I can see,
> anyway).

Isn't a decision made before each DLE is dumped whether there is enough
holding disk for it?  In that case, does the reported amount (i.e. 2.3GB
above) serve as an upper limit for the entire amdump run?  If so, maybe
that part of the code needs to be tweaked to allow use of any space
freed by flushed dumps.

jl
-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


Re: Holding disk warning: only $SIZE KB available

2010-09-03 Thread Dustin J. Mitchell
On Fri, Sep 3, 2010 at 11:13 AM, Valeriu Mutu  wrote:
> Next time 'amdump' runs, it will not be able to use the size of holding disk 
> specified and the warning will be printed:
> NOTES:
>  driver: WARNING: /data3/amanda/holdingdisk/Daily1/: 880803840 KB requested, 
> but only 2351104 KB available.
>
> Does this mean that Amanda will use a holding disk of 2351104Kb? Or will it 
> flush the entire holding disk area to tape (880803840Kb) once the 2351104Kb 
> are used? Also, after the flush, will it use 880803840Kb of holding disk 
> space or just 2351104Kb?

Yes, Amanda will use whatever space it finds available.  If autoflush
is enabled, it will begin by flushing any existing dumps to tape,
after which it will use the full available space.

If autoflush is enabled, it would seem sensible to consider any
existing holding files to be available space when deciding whether to
print this warning, but this isn't currently done (that I can see,
anyway).
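
For reference, turning this on is a single directive in 2.6-era amanda.conf
syntax; a minimal sketch, with the path borrowed from the warning above and
the sizes invented:

  autoflush yes
  holdingdisk hd1 {
      directory "/data3/amanda/holdingdisk/Daily1"
      use 840GB
      chunksize 10GB
  }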

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com



Holding disk warning: only $SIZE KB available

2010-09-03 Thread Valeriu Mutu
Hi,

Sometimes, after I run a backup, amdump leaves dump images in the holding disk.

Next time 'amdump' runs, it will not be able to use the size of holding disk 
specified and the warning will be printed:
NOTES:
  driver: WARNING: /data3/amanda/holdingdisk/Daily1/: 880803840 KB requested, 
but only 2351104 KB available.

Does this mean that Amanda will use a holding disk of 2351104Kb? Or will it 
flush the entire holding disk area to tape (880803840Kb) once the 2351104Kb are 
used? Also, after the flush, will it use 880803840Kb of holding disk space or 
just 2351104Kb?

Best,
Valeriu
-- 
Valeriu Mutu


Re: Holding disk and dump splitting questions

2010-08-16 Thread Dustin J. Mitchell
On Mon, Aug 16, 2010 at 3:55 PM, Valeriu Mutu  wrote:
> According to the documentation, to speed up backups, one could set up 
> holding disks where the data will be buffered before it is written to tape. 
> This method is known as FILE-WRITE [1]. This sounds good and works well for 
> DLE's which can fit into the holding disk area.

Correct.

> Nevertheless, for the DLE's that don't fit into the holding disk, Amanda 
> would use the second method known as PORT-WRITE [1]. With this method, Amanda 
> splits the DLE into chunks of a given size S, writes each chunk to disk one 
> at a time, and then, once the chunk of size S is completely on disk, writes 
> the chunk to tape.

This is not correct.

First, "chunks" are used in the holding disk, and are completely
unrelated to splitting.  Amanda writes "parts" to tapes.

Second, and more importantly, Amanda writes the data directly to tape
as it arrives, but writes it to a part buffer in parallel, either on
disk or in memory.  I'm specifically objecting to "then, once the
chunk .. is completely on disk", as this implies the operations do not
occur in parallel.

The part buffer is only consulted if there is an error writing the
data to tape (rather than start the dump over).  Note that 3.2 will
reduce the need for these split buffers -- but I won't get into that
right now.

> Questions:
> - Does Amanda continuously keep the disk buffer full? In other words, as it 
> starts writing to tape the buffered chunk1, will it start buffering chunk2? 
> Probably not, because it would need the complete copy of chunk1, if chunk1 
> fails to be written successfully to tape. Right?

Right - it only starts filling the disk buffer with data for part 2
once part 1 is written to tape.  But in ordinary operation, that
occurs immediately after the last byte of part 1 is read from the
dumper.

> - Is there a way to see the speed at which 'taper' writes data to tape?

You can look at the report sent by amreport after the dump run - but
note that it includes the time to write filemarks and labels and
whatnot, so it is not the full "streaming" rate.
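
Concretely, that would be something like the following (the statistics label
is quoted from memory, so treat it as approximate):

  amreport CONFIG
  # in the STATISTICS section, "Avg Tp Write Rate (k/s)" is the overall
  # taper rate, filemarks and labels included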

> - Why is disk buffer so slow for me? When I use FILE-WRITE, I get great 
> speeds! This is not the case with PORT-WRITE. I am running Amanda 2.6.1p2 
> (I'll upgrade to 3.1.2 soon), with a holding disk of 840Gb and a disk buffer 
> of 370Gb. I have some DLE's which are greater than 1Tb and Amanda is using the 
> PORT-WRITE method to write them to tape. When this happens, I see that the 
> disk buffer becomes full, which is good, but yet the speed of writing to tape 
> seems slow. I don't have a way to see the writing speed of the 'taper', so 
> I'm relying on the amount of data Amanda reads from the disk device. With 
> 'iostat' I can see that Amanda is reading from the device at a peak speed of 
> 4MB/sec (the device /dev/dm-8 or /dev/mapper/cronos-amanda-diskbuffer below 
> is dedicated to Amanda's disk buffer):

I wouldn't necessarily trust iostat - what's going on is a bit
higher-level than iostat is intended to address.

I wonder why you have a 370Gb disk buffer.  Unless your tapes are 8TB,
that's too big a part size.  Part size (and thus disk buffer) should
be 5-10% of your tape size, at most.  You could probably take 160GB+
of disk buffer and use it as holding instead of disk buffer, allowing
this DLE to fit in holding disk.
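
In 2.6-era syntax the part size and the disk buffer are set per dumptype; a
rough sketch, where every number is an example for an ~800GB tape rather than
a recommendation for this particular setup:

  define dumptype "big-span" {
      "global"
      tape_splitsize 40 GB        # roughly 5% of the tape
      split_diskbuffer "/data4"   # only needs room for one part
      fallback_splitsize 512 MB   # in-memory fallback if the buffer is unusable
  }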

The bottleneck with PORT-WRITE is generally the filesystem.  If you
were dumping bytes from a raw disk onto tape, then the additional
write to the disk buffer might be a problem.  But in most cases, the
bytes are being dumped by tar, which is making all sorts of funky
filesystem calls, traversing directories, inodes, etc., and generally
putting a strain on the filesystem to keep up.  It then adds a lot of
overhead to encode that data into 512 byte tar records.  The
bottleneck can be hard to see because it's not all I/O, and it's not
all userspace CPU time.

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com



Holding disk and dump splitting questions

2010-08-16 Thread Valeriu Mutu
Hi Amanda developers,

I would like to get a better understanding of how Amanda's holding disk and 
dump splitting features work.

According to the documentation, to speed up backups, one could set up 
holding disks where the data will be buffered before it is written to tape. 
This method is known as FILE-WRITE [1]. This sounds good and works well for 
DLE's which can fit into the holding disk area.

Nevertheless, for the DLE's that don't fit into the holding disk, Amanda would 
use the second method known as PORT-WRITE [1]. With this method, Amanda splits 
the DLE into chunks of a given size S, writes each chunk to disk one at a time, 
and then, once the chunk of size S is completely on disk, writes the chunk to 
tape.

Questions:
- Does Amanda continuously keep the disk buffer full? In other words, as it 
starts writing to tape the buffered chunk1, will it start buffering chunk2? 
Probably not, because it would need the complete copy of chunk1, if chunk1 
fails to be written successfully to tape. Right?
- Is there a way to see the speed at which 'taper' writes data to tape?
- Why is disk buffer so slow for me? When I use FILE-WRITE, I get great speeds! 
This is not the case with PORT-WRITE. I am running Amanda 2.6.1p2 (I'll upgrade 
to 3.1.2 soon), with a holding disk of 840Gb and a disk buffer of 370Gb. I have 
some DLE's which are greater than 1Tb and Amanda is using the PORT-WRITE method 
to write them to tape. When this happens, I see that the disk buffer becomes 
full, which is good, but yet the speed of writing to tape seems slow. I don't 
have a way to see the writing speed of the 'taper', so I'm relying on the 
amount of data Amanda reads from the disk device. With 'iostat' I can see that 
Amanda is reading from the device at a peak speed of 4MB/sec (the device 
/dev/dm-8 or /dev/mapper/cronos-amanda-diskbuffer below is dedicated to 
Amanda's disk buffer):

# df -h /dev/mapper/cronos-amanda-diskbuffer
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/cronos-amanda-diskbuffer
                      394G  371G  3.2G 100% /data4

# dmsetup info /dev/dm-8 
Name:  cronos-amanda-diskbuffer
State: ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:  0
Major, minor:  253, 8
Number of targets: 1
UUID: mpath-36090a06850b8fdba16c6c4f6f41d

# iostat -m /dev/dm-8 1 5
Linux 2.6.18-194.8.1.el5 (cronos)     08/16/2010

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.66    0.00    1.62    2.49    0.00   95.24

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
dm-8            180.10         0.01         0.70       5250     431913

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
dm-8          18791.00         1.93        71.47          1         71

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
dm-8            169.00         0.65         0.01          0          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
dm-8            458.42         1.79         0.00          1          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
dm-8            978.00         3.82         0.00          3          0

Please advise.

[1] http://wiki.zmanda.com/man/amanda.conf.5.html

Best,
Valeriu


Re: Advice for holding disk specification for AMANDA system with LTO5 drive

2010-07-18 Thread Florian Lengyel
Right--it seems that getting accurate measurements of end-to-end throughput
data is nontrivial. How do you measure this? (I'm aware of one
research group in Louisiana that
has written software to measure end-to-end network performance, taking DMA, RAM,
etc into account, but I don't know if this is available.)

On Sun, Jul 18, 2010 at 7:14 PM, Dustin J. Mitchell  wrote:
> On Sun, Jul 18, 2010 at 6:34 PM, Florian Lengyel
>  wrote:
>> However, a SAS drive can deliver 300 MB/sec (theoretical); in practice
>> 270 MB/sec
>> is more likely. I have neither the drive nor the server with the
>> disks, so I am guessing.
>> I imagine a RAID5 SAS disk configuration with at least 3 disks would
>> be suitable
>> for a holding disk with 2 LTO5 drives. Either that or a fully-loaded MD1000 
>> with
>> SATA drives.
>
> Keep in mind that your SAS and LTO5 won't talk to one another
> directly, so all of that data will need to get into and out of RAM -
> hopefully via DMA, but still traversing PCI, SCSI, SAS, FC, RAID, etc.
>
> Dustin
>
> --
> Open Source Storage Engineer
> http://www.zmanda.com
>


Re: Advice for holding disk specification for AMANDA system with LTO5 drive

2010-07-18 Thread Dustin J. Mitchell
On Sun, Jul 18, 2010 at 6:34 PM, Florian Lengyel
 wrote:
> However, a SAS drive can deliver 300 MB/sec (theoretical); in practice
> 270 MB/sec
> is more likely. I have neither the drive nor the server with the
> disks, so I am guessing.
> I imagine a RAID5 SAS disk configuration with at least 3 disks would
> be suitable
> for a holding disk with 2 LTO5 drives. Either that or a fully-loaded MD1000 
> with
> SATA drives.

Keep in mind that your SAS and LTO5 won't talk to one another
directly, so all of that data will need to get into and out of RAM -
hopefully via DMA, but still traversing PCI, SCSI, SAS, FC, RAID, etc.

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com


Re: Advice for holding disk specification for AMANDA system with LTO5 drive

2010-07-18 Thread Florian Lengyel
Let's see if anyone disagrees with me:

Max throughput on a compressed LTO5 is 280 MB/sec.
A single SATA drive would be too slow for this at 60-66 MB/sec; a RAID configuration
with at least 5 or 6 disks would be necessary to get closer to 280 MB/sec. (On
a system such as a DELL MD1000, the first 10 disks contribute to throughput;
the remaining 5 do not.)

However, a SAS drive can deliver 300 MB/sec (theoretical); in practice
270 MB/sec
is more likely. I have neither the drive nor the server with the
disks, so I am guessing.
I imagine a RAID5 SAS disk configuration with at least 3 disks would
be suitable
for a holding disk with 2 LTO5 drives. Either that or a fully-loaded MD1000 with
SATA drives.
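
As a rough sanity check on those figures: 280 MB/s divided by ~65 MB/s per
SATA spindle is about 4.3, so five or six drives' worth of raw streaming
throughput before allowing for RAID-5 parity and filesystem overhead.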

On Sun, Jul 18, 2010 at 6:07 PM, Dustin J. Mitchell  wrote:
> On Thu, Jul 15, 2010 at 4:19 PM, Florian Lengyel
>  wrote:
>> We're attempting to get the specifications for the holding disk of an AMANDA
>> server with 6GB SAS controllers that will be connected to an LTO5 drive.
>>
>> We want to know whether 7200 RPM SATA drives would be sufficiently fast
>> for the holding disk (this will be a separate RAID unit), or do we need 
>> faster
>> SAS drives to prevent shoe-shining?
>
> You should probably look at the raw *observed* throughput of your SATA
> drives and of your tape drive, and then apply a healthy margin of
> error.  That margin accounts for any interference (e.g., interrupt
> queueing) between the two subsystems, as well as filesystem overhead
> on the SATA drives.  This, of course, assumes that you have enough CPU
> to move that much data.  No, it's not particularly predictable - such
> is life with a portable backup application!  If Amanda was a hardware
> appliance, the numbers would be much more predictable.
>
> Others may have some experience that they can share to help you out -
> although with no replies in 3 days, maybe not..
>
> Dustin
>
> --
> Open Source Storage Engineer
> http://www.zmanda.com
>



Re: Advice for holding disk specification for AMANDA system with LTO5 drive

2010-07-18 Thread Dustin J. Mitchell
On Thu, Jul 15, 2010 at 4:19 PM, Florian Lengyel
 wrote:
> We're attempting to get the specifications for the holding disk of an AMANDA
> server with 6GB SAS controllers that will be connected to an LTO5 drive.
>
> We want to know whether 7200 RPM SATA drives would be sufficiently fast
> for the holding disk (this will be a separate RAID unit), or do we need faster
> SAS drives to prevent shoe-shining?

You should probably look at the raw *observed* throughput of your SATA
drives and of your tape drive, and then apply a healthy margin of
error.  That margin accounts for any interference (e.g., interrupt
queueing) between the two subsystems, as well as filesystem overhead
on the SATA drives.  This, of course, assumes that you have enough CPU
to move that much data.  No, it's not particularly predictable - such
is life with a portable backup application!  If Amanda was a hardware
appliance, the numbers would be much more predictable.
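
A crude but serviceable way to get those observed numbers (the device paths
here are placeholders, and writing to the tape device should only be done
with a scratch tape loaded):

  # streaming read rate of the holding-disk array
  dd if=/dev/sdb of=/dev/null bs=1M count=20000
  # streaming write rate of the tape drive, bypassing the filesystem
  dd if=/dev/zero of=/dev/nst0 bs=256k count=40000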

Others may have some experience that they can share to help you out -
although with no replies in 3 days, maybe not..

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com


Advice for holding disk specification for AMANDA system with LTO5 drive

2010-07-15 Thread Florian Lengyel
Hi,
We're attempting to get the specifications for the holding disk of an AMANDA
server with 6GB SAS controllers that will be connected to an LTO5 drive.

We want to know whether 7200 RPM SATA drives would be sufficiently fast
for the holding disk (this will be a separate RAID unit), or do we need faster
SAS drives to prevent shoe-shining?

Do we need to know I/O throughput figures for the whole system to answer
this question? How have systems of users on the list avoided the shoe-shining
problem?

Thanks,
Florian


Re: holding disk not being used in amanda 3.1.1

2010-07-02 Thread Dustin J. Mitchell
On Fri, Jul 2, 2010 at 11:23 PM, Jon LaBadie  wrote:
> If the 'define holdingdisk "name"' syntax is used, are you saying
> the 'holdingdisk "name"' directive must be used to specify which
> defined hd to use?

No, and actually this is just an extension of the existing
terminology.  For forever, the following syntax has worked to specify
a holding disk:

holdingdisk "foo" {
  directory "/foo"
  # ...
}

But unlike tapetypes or other config subsections, there was no way to
define a holdingdisk that was not subsequently used -- and for some
unusually contorted Amanda configurations, this flexibility was useful
(I don't recall who requested this or what the use-case was).  So we
designed the *additional* ability to write

define holdingdisk "foo" {
  directory "/foo"
  # ...
}

and to put a holding disk "into play" with

holdingdisk "foo"

> If multiple 'define holdingdisk "name"' are specified, can a
> dumptype use any collection of them with multiple 'holdingdisk "name"'
> directives?  Or, can multiple "name(s)" go on the same line?

This is not a dumptype directive - particular DLEs can not be directed
to a particular holding disk.

> Is there a reason that holdingdisk {yes|required} does not imply
> use any and all defined holdingdisks?

It means use any and all "active" holding disks.

This probably isn't clear in the documentation.  Can you send along an
adjusted version that is clearer, and I'll take care of getting it
translated to docbook and committed?  (You're certainly welcome to
send me a docbook patch too, if you're so inclined!)

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com



Re: holding disk not being used in amanda 3.1.1

2010-07-02 Thread Jon LaBadie
On Fri, Jul 02, 2010 at 04:36:52PM -0500, Dustin J. Mitchell wrote:
> On Fri, Jul 2, 2010 at 4:24 PM, Jon LaBadie  wrote:
> > The status seems to say there is no holding disk space available.
> >
> > You've asked to set aside 700GB, does /zvol/amanda/holdingdisk/daily
> > have that much space available?
> >
> > Are the DLEs causing the problem larger than 700GB?
> 
> It seems my message did not come through this morning..
> 
> Robert is using "define holdingdisk", which just defines the
> holdingdisk.  The fix is either
> 
> define holdingdisk "foo" { ... }
> holdingdisk "foo"
> 
> or just
> 
> holdingdisk "foo" { ... }
> 

I suspect the documentation is lagging behind development.
Looking at the manpages I was unable to answer these questions.

If the 'define holdingdisk "name"' syntax is used, are you saying
the 'holdingdisk "name"' directive must be used to specify which
defined hd to use?

If multiple 'define holdingdisk "name"' are specified, can a
dumptype use any collection of them with multiple 'holdingdisk "name"'
directives?  Or, can multiple "name(s)" go on the same line?

Is there a reason that holdingdisk {yes|required} does not imply
use any and all defined holdingdisks?

-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


Re: holding disk not being used in amanda 3.1.1

2010-07-02 Thread Dustin J. Mitchell
On Fri, Jul 2, 2010 at 4:24 PM, Jon LaBadie  wrote:
> The status seems to say there is no holding disk space available.
>
> You've asked to set aside 700GB, does /zvol/amanda/holdingdisk/daily
> have that much space available?
>
> Are the DLEs causing the problem larger than 700GB?

It seems my message did not come through this morning..

Robert is using "define holdingdisk", which just defines the
holdingdisk.  The fix is either

define holdingdisk "foo" { ... }
holdingdisk "foo"

or just

holdingdisk "foo" { ... }

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com


Re: holding disk not being used in amanda 3.1.1

2010-07-02 Thread Jon LaBadie
On Fri, Jul 02, 2010 at 08:17:45AM -0400, McGraw, Robert P wrote:
> I am trying to get amanda-3.1.1 up and running on a Sun Solaris 10 x86 host.
> 
> The problem is that amanda is not using the holdingdisk configured in 
> amanda.conf. 
> 
> To check this I set the global dumptype "holdingdisk required" and then ran 
> an amdump.
> 
> This is the email output:
> 
>   Hostname: hertz.math.purdue.edu
>   Org : DAILY BACKUP - MATHNET
>   Config  : daily
>   Date: July 1, 2010
> 
> The next 4 tapes Amanda expects to use are: 4 new tapes.
> The next 4 new tapes already labelled are: D01003, D01004, D01005, D01006
>
> FAILURE DUMP SUMMARY:
>  hertz /gauss/export/users-q RESULTS MISSING
>  hertz /gauss/export/users-q lev 0  FAILED [can't dump required 
> holdingdisk]
> 
> I ran the same backup dumptype "holdingdisk yes" and it ran to completion but 
> it did not write to the holding disk but sent directly to tape.
> 
> Can you see any reason why the holding disk is not being used?
> 
> And amstatus shows 
> 
>   6 dumpers idle  : no-diskspace
  
>   taper status: Idle
>   taper qlen: 0
>   network free kps:   100
>   holding space   : 0m (  0.00%)
 
>0 dumpers busy :  0:00:37  ( 97.08%)not-idle:  0:00:37  
> (100.00%)
> 
> 
> This is a snippets from my amanda.conf file
> 
>   define holdingdisk hd1 {
>   comment "holding disk"
>   directory "/zvol/amanda/holdingdisk/daily"
>   use 700GB   
>   chunksize 10GB
>   }
> 

The status seems to say there is no holding disk space available.

You've asked to set aside 700GB, does /zvol/amanda/holdingdisk/daily
have that much space available?

Are the DLEs causing the problem larger than 700GB?

-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)

