Re: error compiling amanda on FreeBSD 6.3 x86_64

2008-03-24 Thread Patrick M. Hausen
Hi!

On Mon, Mar 24, 2008 at 04:00:58PM -0500, Oscar Ricardo Silva wrote:
 Eric Schnoebelen wrote:
 Oscar Ricardo Silva writes:
 - I'm attempting to compile the client version of amanda on a Dell
 - PowerEdge server running FreeBSD 6.3 but keep running into an
 - ssh related error.  I use the following configure statement:
 
 Use the port from /usr/ports.. It has patches to fix both of the
 errors.

Then have a look at the port's patches to see what you need
to change on FreeBSD.

Or cd to the port directory and run "make extract patch". Then make
your local adjustments and afterwards "make install".
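
For example, the whole round trip looks roughly like this (the patch
file names just follow the usual ports convention):

# cd /usr/ports/misc/amanda-client
# make extract patch      # fetch, unpack and apply the port's patches
# less files/patch-*      # see what the port changes and why
# make install clean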

Kind regards,

Patrick M. Hausen
Leiter Netzwerke und Sicherheit
-- 
punkt.de GmbH * Vorholzstr. 25 * 76137 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
[EMAIL PROTECTED]   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285


Re: error compiling amanda on FreeBSD 6.3 x86_64

2008-03-22 Thread Patrick M. Hausen
Oscar Ricardo Silva wrote at 18:33 -0500 on Mar 21, 2008:
 I'm attempting to compile the client version of amanda on a
 Dell PowerEdge server running FreeBSD 6.3 but keep running
 into an ssh related error.

Use the port, Luke.

# cd /usr/ports/misc/amanda-client
# make install clean

Kind regards,

Patrick M. Hausen
Leiter Netzwerke und Sicherheit
-- 
punkt.de GmbH * Vorholzstr. 25 * 76137 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
[EMAIL PROTECTED]   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285


Re: Backing up VMware-VMs

2008-02-27 Thread Patrick M. Hausen
Hi, all,

On Wed, Feb 27, 2008 at 01:27:11PM -0500, Curtis Preston wrote:
 Unless you're coordinating with the OS, then taking a VMware snapshot
 and copying it is equivalent to pulling the power plug on a server.
 Will it power back up without corruption?  99.9% of the time, yes.  Has
 anyone who has been in the biz for a while had a scenario where
 powercycling a box caused a corrupted OS disk?  I'd say so.
 
 The SAFE thing to do is to make sure that no app is writing to the
 filesystem while you're snapping it.  IMHO, Oracle/Exchange/SQL Server
 should not be running or in backup mode if you're going to make a
 snapshot at the virtual console level.  Powering the system down
 obviously meets that requirement.

100% agreed. Unfortunately, VMware doesn't allow taking snapshots
of powered-off virtual machines. I'd really like to see that
as a means to minimize downtime:

shutdown
take snapshot
boot

(downtime less than 5 minutes)

now take all the time you need to copy/back up the snapshot while
your virtual server is already back in service.


Does anyone know the reason why you cannot snapshot powered off
machines?

Kind regards,
Patrick
-- 
punkt.de GmbH * Vorholzstr. 25 * 76137 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
[EMAIL PROTECTED]   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285


Re: Backing up VMware-VMs

2008-02-27 Thread Patrick M. Hausen
Hello,

On Wed, Feb 27, 2008 at 07:40:26PM +, Rodrigo Ventura wrote:
 On Feb 27, 2008, at 7:00 PM, Patrick M. Hausen wrote:
 
 Does anyone know the reason why you cannot snapshot powered off
 machines?
 
 
 Hum, only the obvious answer pops to mind: because there is no state to 
 snapshot at all! A powered-off machine has no state, besides the contents 
 of the harddrive (vmware: the disk image) and of the NVRAM (vmware: stored 
 in some file, together with the vmware config). So, if you backed up the 
 virtual machine directory (containing the disk image, config and NVRAM), I 
 guess you are all set. Nothing else exists in a real physical computer 
 anyway...

Correct. But as I understand it, snapshots in VMware work similarly to
transaction logs in databases. The virtual disk file is locked
in its current state and further changes are temporarily written
to separate files until the snapshot is committed or rolled back.

So if you want to back up or copy an entire VM with a guarantee of
consistent hard disk state, you need to shut it down. Copying
a multi-gigabyte virtual disk file is bound to take quite some time.

So you need to power down your virtual machine for what can amount
to hours.

Compare:

power down
lock virtual hard disk
boot, writing all changes to a separate transaction log instead of the vdisk
VM runs and provides service while you back up the vdisk ...
commit snapshot after the backup is done

You can do something at least similar with Linux/Unix VMs:

boot into single user mode (fs mounted read-only)
take snapshot
boot multiuser
do backup while running multiuser
commit snapshot when backup is done
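
If memory serves, the snapshot part can even be scripted with the
vmware-cmd tool shipped with VMware Server/ESX - syntax from memory,
so treat it as a sketch only:

vmware-cmd /vms/guest/guest.vmx createsnapshot pre-backup "cold" 1 0
# ... back up the now-frozen base vdisk at leisure, then:
vmware-cmd /vms/guest/guest.vmx removesnapshots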

Regards,
Patrick
-- 
punkt.de GmbH * Vorholzstr. 25 * 76137 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
[EMAIL PROTECTED]   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285


Re: Backing up VMware-VMs

2008-02-14 Thread Patrick M. Hausen
Hi!

 For a client I have to dump VMware-VMs with amanda

We install Amanda inside the VM's OS, most of the time FreeBSD.
I don't see a fundamental difference from a backup point of view
between a virtual and a real server.

If you back up VMs from the outside, you are bound to copy
the virtual disk in its entirety to the backup medium on
every run, even with incrementals, because the virtual disk
file will have changed.

I can imagine implementing your scheme with rsync's capability
to create incrementals of single files, not only of DLEs.

With dump or tar, you are simply wasting a lot of backup media.
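
For illustration, a hypothetical rsync invocation (paths invented)
would move only the changed blocks of the vdisk over the wire:

rsync -av --inplace /vm/guest/guest.vmdk backuphost:/backup/guest/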

HTH,
Patrick M. Hausen
Leiter Netzwerke und Sicherheit
-- 
punkt.de GmbH * Vorholzstr. 25 * 76137 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
[EMAIL PROTECTED]   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285


Restore scenarios, architecture plans?

2007-10-16 Thread Patrick M. Hausen
Good morning,

during the last weekend we had a major security breach
that we were able to fix (fortunately), but that showed
additional flaws in our disaster recovery mechanisms.

We have been relying on Amanda for more than ten years.
This is not going to be a complaint about Amanda's features.
The software has proven extremely reliable over time.
I'm going to tell what could be improved for our particular
needs and I am asking for suggestions for a different setup,
additional software, whatever ... just ideas.

OK, we used Amanda with tape drives and now with ever
increasing number of hosts, we switched to disk based
(vtape) storage about a year ago.

Routine restore of directory "bla" on host "foo" on date
"bar" is perfectly OK. We are backing up 53 hosts with
161 DLEs. The nightly run takes about 10 hours.
Most connections are 100 Mbit/s.
The Amanda server is a single Intel storage system
with about 4 TB of RAID5 storage space divided into
20 vtapes of 200 GB each.

After the incident we had four servers that were in for
a complete restore. We switched off the compromised machines
and set up the applications on completely new systems.

That meant 100 GB+ simply to extract from the archives.
We ran amrecover on the Amanda server host in a temporary
directory, then transferred the data to the new systems.

Unfortunately, amrecover cannot run multiple instances in
parallel. That meant operator-supervised restores, using
amtape to change vtapes, ... for a couple of hours.

Currently I'm waiting for the nightly backup to finish so
I can restore another system that had lower priority,
since amrecover cannot run in parallel with amdump, either.

So how can I improve the situation? Would setting up
amrecover capability on the clients enable me to restore
multiple clients at the same time?

Wishlist: multiple amrecover sessions, automatic selection
of the appropriate vtape during restore. Are there any plans
in this direction? I know that with the increasing number
of people using disks for backup, some API changes to
the tape interface are on their way.

How do you plan for "I need to restore a couple of systems
at once on bare hardware with just a fresh OS installed"?

Any suggestions and lively discussion welcome.

Thanks,

Patrick M. Hausen
Leiter Netzwerke und Sicherheit
-- 
punkt.de GmbH * Vorholzstr. 25 * 76137 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
[EMAIL PROTECTED]   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285


Re: Question to: Friday tape question - Top 10

2007-08-01 Thread Patrick M. Hausen
Hi!

On Wed, Aug 01, 2007 at 11:45:00AM +0200, Ralf Auer wrote:

   thanks, but does this also mean, that in future times I have always to
 put the tapes in ...-02-06-03-... order, or does *she* switch over to
 02-06-03-04-05-01-02-03-04-05-06-01... automatically, so that the newest
 tape is in fact added at the end of the list after one full cycle?

Amanda may reorder the tapes to be used at any time anyway.
Set up a cron job that executes amcheck every morning and
sends you the output via email. It tells you which tape
Amanda expects next.
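
Something like this in the amanda user's crontab - config name, paths
and address are examples:

0 7 * * * /usr/local/sbin/amcheck Daily 2>&1 | mail -s amcheck operator@example.com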

HTH,

Patrick M. Hausen
Leiter Netzwerke und Sicherheit
-- 
punkt.de GmbH * Vorholzstr. 25 * 76137 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
[EMAIL PROTECTED]   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285


Re: wrapper for ufsdump with snapshots

2007-02-06 Thread Patrick M. Hausen
Hi, Chris!

 Well, those are both interesting, but don't get me anywhere right now.
 
 I know it can be done, I just don't know what it takes to do it.
 
 ...
 
 I'd even be willing to tangle a little with source code, as long as it 
 didn't get too messy and someone could point me to it with enough 
 detail. Or, I wonder if it could be as simple as going into Makefile and 
 changing DUMP=/usr/sbin/ufsdump to something else?

Well, let's just look at the FreeBSD patch, shall we? ;-)

http://www.freebsd.org/cgi/cvsweb.cgi/ports/misc/amanda-server/files/extra-patch-sendbackup-dump.c?rev=1.2&content-type=text/x-cvsweb-markup

Looks like just client-src/sendbackup-dump.c is patched,
and at least for dump on FreeBSD you simply need to add
the "L" option to the dumpkeys string. Line 378 and below.
Probably it's the same option for ufsdump on Solaris.
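
In effect the patch turns the dump invocation from the first line into
the second (an illustration of the idea, not the literal patch):

dump 0usf 1048576 - /dev/da0s1d
dump 0Lusf 1048576 - /dev/da0s1d   # L = snapshot the live filesystem first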

HTH,

Patrick M. Hausen
Leiter Netzwerke und Sicherheit
-- 
punkt.de GmbH * Vorholzstr. 25 * 76137 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
[EMAIL PROTECTED]   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285


Re: wrapper for ufsdump with snapshots

2007-02-06 Thread Patrick M. Hausen
Mornin'

On Tue, Feb 06, 2007 at 05:04:58PM -0500, Chris Hoogendyk wrote:

 I presume you are on FreeBSD.

Yes.

 If this patch is what does it (are you sure that's all that is 
 required?), then it implies that the snapshot on FreeBSD has been added 
 as an option to dump. Is that true?

Yes, of course. On FreeBSD, passing "L" as an option to dump is all
that is required: create snapshot - dump - release snapshot.

 On Solaris 9, it is not an option on ufsdump. There is a separate 
 command fssnap that is used to create a snapshot.

*argh* I'm very sorry. I assumed that Solaris ufsdump was just
BSD dump in disguise. On FreeBSD there is a similar command in
addition to dump's built-in capability: mksnap_ffs(8).
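
Roughly what dump's L option automates, done by hand - a sketch, with
paths and md unit number assumed:

# mksnap_ffs /usr /usr/.snap/amanda.snap   # freeze the filesystem state
# mdconfig -a -t vnode -o readonly -f /usr/.snap/amanda.snap -u 4
# dump -0af /dev/nsa0 /dev/md4             # dump the frozen image
# mdconfig -d -u 4 ; rm /usr/.snap/amanda.snap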

 are comparable. Just as a long shot, I did `man dump` on my Mac OS X 
 system (which is related to FreeBSD), but it has no L option.

Mac OS X doesn't have a current FFS/softdep implementation but favours
HFS+ with its new journaling extensions.

HTH,
Patrick M. Hausen
Leiter Netzwerke und Sicherheit
-- 
punkt.de GmbH * Vorholzstr. 25 * 76137 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
[EMAIL PROTECTED]   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285


Speeding up your backups - strategies?

2007-01-14 Thread Patrick M. Hausen
Hi, all!

We installed a disk-based backup system a couple of months
ago, because we had been hitting the capacity of our tape
changer on almost every nightly run - and we did not even
back up everything. So a new system was called for.

The backup server features 2 GB of main memory, a reasonably
fast CPU (2.8 GHz Xeon) and 4 TB of storage in two separate
RAID 5 sets.

In our primary Amanda instance we have 200 GB sized vtapes,
200 GB of holding disk space (holding disk and vtapes on different
RAID sets). The compressed nightly backup data are frequently less
than 100 GB and never more than 130 GB. We have 95 DLEs on 35 hosts.

When we started, we had only one RAID set, so I did not configure
a holding disk. With the current architecture of Amanda this
essentially means "inparallel 1". The backups took about 20 hours
after all 95 DLEs were added.

Since we needed more disk space and a second Amanda instance anyway,
I doubled the disk space by buying more disks and ended up with two
identical RAID 5 sets. With the machine we have there have to be
two sets, because there are two separate controllers connected
to 6 SATA disks each.

This bought us a holding disk and parallelism - or so I hoped. Now
the backups take 12 to 13 hours per nightly run, sometimes 8
if there is not much to back up.

The config changes I made to achieve this were:

inparallel 10
maxdumps 1
netusage 12500 kbps

holdingdisk hd1 {
directory /raid1/holdingdisk
use 204800 mbytes
}

But I'm not satisfied, yet ;-)

Example:

Original Size (meg)    218907.0
Avg Dump Rate (k/s)      1158.9
Tape Time (hrs:min)        1:08

This run took about 13 hours. About one hour of this is tape time,
so the rest is needed to transfer the data from the clients to the server.

Since the server doesn't do anything but backups, I'd like to max
out CPU and network usage on the server. Therefore I compress everything on
the server except for 4 hosts that are located in a remote datacenter where
we have to pay for data transferred.

All other backup clients have a dedicated 100 Mbit/s connection to the
backup server. My goal is to have the network almost saturated.
If you use a calculator, then with a 100 Mbit/s network the 200+
GB of the above run should be transferred in less than 5 hours -
if I didn't make a stupid mistake here.
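
Spelled out: 100 Mbit/s is 12.5 MB/s, i.e. roughly 45 GB per hour, so
the ~219 GB of that run should move in about 5 hours.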

The CPU compressing the data runs at about 60% idle when busy compressing
data, and almost completely idle the rest of the time.

So how do I go about identifying the bottlenecks? Just crank up
inparallel? It must be possible to get down to, say, half of the
13 hours, IMHO.
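
During the next run I can watch the server live with the usual FreeBSD
tools to see where the time goes:

# systat -ifstat 1   # is the NIC anywhere near 100 Mbit/s?
# systat -vmstat 1   # CPU versus disk activity at a glance
# iostat -x 1        # per-disk utilisation on the two RAID sets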


Thanks a lot for any hints,
Patrick
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Vorholzstr. 25Tel. 0721 9109 -0 Fax: -100
76137 Karlsruhe   http://punkt.de


inparallel 1 without holding disk?

2007-01-02 Thread Patrick M. Hausen
Hi, all!

We are running Amanda 2.5.1p2 with vtapes for disk based backup.
To speed up the nightly runs, I'd like to dump multiple DLEs
concurrently.

From what I read in the docs, any value for inparallel that
is bigger than 1 requires the presence of a holding disk.
Is this correct? Is there a reason besides historical ones
and/or architecture? I understand that you cannot send two
dumps to a single tape at the same time, but with vtapes
you should be able to have N dumpers running in parallel.

If I use a holding disk, it would have to go on the
same RAID volume that contains the vtapes, unfortunately.
Will Amanda just move the dump archives to their final
destination, or will it read from the holding disk and write
to the vtape, so the data gets unnecessarily copied a second
time? Again, with real tapes I know what happens and why,
so this question is specific to disk-based backup.

Thanks,
Patrick
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Vorholzstr. 25Tel. 0721 9109 -0 Fax: -100
76137 Karlsruhe   http://punkt.de


Re: Changing dump arguments/parameters in Amanda client

2007-01-02 Thread Patrick M. Hausen
Hi!

On Tue, Jan 02, 2007 at 09:27:57AM -0500, Mark Hennessy wrote:
 I'm using Amanda 2.5.1p2 on many FreeBSD machines, some 4.11, others 6.1.
 
 I need to change the dump arguments/parameters on my 6.1 machines to include
 '-L' so live filesystem dumps work properly.
 
 How do I do this for those particular clients?  I can't seem to find any way
 in the docs to do this.

If you install Amanda via the FreeBSD port, there's an option
to use snapshots with dump.
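
I.e. something along these lines - the exact option name depends on
the port version:

# cd /usr/ports/misc/amanda-client
# make config        # tick the snapshot/dump -L option
# make install clean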

HTH,

Patrick M. Hausen
Leiter Netzwerke und Sicherheit
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Vorholzstr. 25Tel. 0721 9109 -0 Fax: -100
76137 Karlsruhe   http://punkt.de


Excluding with Samba

2006-12-13 Thread Patrick M. Hausen
Hello!

I just tried to add our Windows server to our Amanda backup.
Basically everything is working, but the nightly dump fails
because the dumps are way too big.

I want to backup this disklist entry:

hugo10.ka.punkt.de  //hugo11/c$ samba-tar

The samba-tar definition looks like this:

define dumptype samba-tar {
    root-tar
    comment "user partitions dumped with tar"
    priority medium
    exclude "./Virtual Machines"
}

Obviously this does *not* correctly exclude the "Virtual Machines"
directory on drive C:, because otherwise the dumps would definitely
fit on the tape.

How do I specify exclusions when using Samba? The syntax
I tried above is correct for Unix hosts with gnutar, but I could not
find an example for Windows.

Thanks,

Patrick M. Hausen
Leiter Netzwerke und Sicherheit
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Vorholzstr. 25Tel. 0721 9109 -0 Fax: -100
76137 Karlsruhe   http://punkt.de


Re: upgrade to Amanda 2.5.1p1 issues

2006-11-06 Thread Patrick M. Hausen
Hello!

On Mon, Nov 06, 2006 at 01:08:11AM -0800, Steffan Vigano wrote:

 Having what I think are bsd auth issues after upgrading from 2.4.4b1.  
 The server is running FreeBSD 4.7.
 
 Regarding the new needed pieces for 2.5x, I've added some items to my
 inetd.conf file, but am unsure about what the syntax looks like for the
 "only_from" section for inetd, or if it's even needed?

man hosts.allow

inetd on FreeBSD has tcpwrappers built in.

Example:

amandad : 1.2.3.4 : allow
amandad : ALL : deny

HTH,

Patrick M. Hausen
Leiter Netzwerke und Sicherheit
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Vorholzstr. 25Tel. 0721 9109 -0 Fax: -100
76137 Karlsruhe   http://punkt.de


Re: upgrade to Amanda 2.5.1p1 issues

2006-11-06 Thread Patrick M. Hausen
Hi!

On Mon, Nov 06, 2006 at 07:50:11AM -0800, Steffan Vigano wrote:

Nov  6 07:18:40 phatb inetd[82282]: /usr/local/libexec/amandad[82539]: exit status 0x100
Nov  6 07:18:40 phatb inetd[82282]: /usr/local/libexec/amandad[82540]: exit status 0x100
Nov  6 07:18:40 phatb inetd[82282]: /usr/local/libexec/amandad[82541]: exit status 0x100
Nov  6 07:18:40 phatb inetd[82282]: /usr/local/libexec/amandad[82542]: exit status 0x100
Nov  6 07:18:40 phatb inetd[82282]: amanda/udp server failing (looping), service terminated

If /tmp/amanda doesn't contain any useful logs, I'd recommend
relying on my favourite two tools - crude, but effective ;-)

# ktrace -i -p <pid of inetd>
# tcpdump -s0 -n -X
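
Afterwards the trace can be read with kdump (it defaults to ktrace.out
in the current directory):

# kdump | less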

HTH,
Patrick
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Vorholzstr. 25Tel. 0721 9109 -0 Fax: -100
76137 Karlsruhe   http://punkt.de


Re: Performance degrading over time?

2002-11-04 Thread Patrick M. Hausen
Hi all!

 Since your dumper and taper times are always nearly identical you probably
 aren't using a holding disk and are dumping directly to tape.  And since
 the rates for one filesystem have remained constant while the other one has
 dropped I would look into possible recent changes on the larger (slower)
 filesystem. Try doing a dump to /dev/null and see how fast (or slow) that is.
 If data isn't fed to the tape fast enough the tape drive has to stop and
 reposition itself every time it runs out of data, which will slow it down
 considerably.

We finally found the culprit, yet have to decide what to do about it.

Seems like Oracle likes to create a lot of small .trc files over
time. The filesystem in question is littered with thousands of them.

Once we archived and deleted them, backup performance was back to normal.

Would a separate holding disk (we don't use one at all at the moment)
help in a configuration like this? Additionally, I'd suggest deleting
the trace (?) files in a nightly run - seems like nobody needs
them anyway.
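
E.g. a nightly cron job along these lines - path and retention period
invented for illustration:

0 3 * * * find /u01/app/oracle/admin -name '*.trc' -mtime +7 -exec rm {} \;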

Thanks for your help,
Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Re: Performance degrading over time?

2002-11-04 Thread Patrick M. Hausen
Hello!

Jon H. LaBadie wrote:

 On Mon, Nov 04, 2002 at 03:04:27PM +0100, Patrick M. Hausen wrote:
  
  So my question is: does a holding-disk speed up this process? I mean,
  Amanda will start a dumper on the filesystem that starts filling
   the holding-disk. At the same time (?) a taper will start writing
   the holding-disk's contents to tape. Now imagine the dumper getting
   too slow for the tape ... the holding-disk won't be filled quickly
   enough either. Then there are a lot of seeks on the holding-disk
  itself, if it's read and written at the same time.
  
  Or is the mental picture I have about Amanda's operation incorrect?
 
 yes, incorrect.
 
 nothing goes from holding disk to tape unless the dump to holding disk
  has finished.  Then it goes like cat'ting or dd'ing to tape.  Small
  likelihood of being too slow for tape.

Agreed - cat/dd to /dev/tape will surely be fast enough.
But I need a holding disk at least as large as my largest FS to dump?
So if I have one 170 GB RAID, I need one 170 GB holding disk?

The customer won't like that ;-)

Or is this what the chunksize parameter is for - will taper start when
the first chunk is completely written to the holding disk?

In this case I'm sure a holding disk will speed things up quite a
bit, even in my pathological case of only one big FS.
A little bit of tape stop-and-go between chunks won't hurt as much
as the current configuration does.
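
For reference, the stanza I have in mind would look something like this
(values invented; chunksize splits each dump image into fixed-size
files on the holding disk):

holdingdisk hd1 {
    directory "/holding"
    use 20000 mbytes
    chunksize 1024 mbytes
}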


Thanks,

Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Re: Performance degrading over time?

2002-11-04 Thread Patrick M. Hausen
Hi!

   nothing goes from holding disk to tape unless the dump to holding disk
   has finished.  Then it goes like cat'ting or dd'ing to tape.  Small
   likelihood of being too slow for tape.
  
  Agreed - cat/dd to /dev/tape will surely be fast enough.
  But I need a holding disk at least as large as my largest FS to dump?
  So if I have one 170 GB RAID I need one 170 GB holding disk?
 
 Repeating,
 
   nothing goes from holding disk to tape unless the dump to holding disk
   has finished.

OK. Understood - finally.

  The customer won't like that ;-)
 
 Doesn't have to be high performance drives.
 Cheap IDE drives are way fast enough.

How do you fit cheap IDE drives into a Sun Enterprise 3500?

Configuration of the machine:

1 internal 9 GB (system) drive
1 external Sun StorEdge A1000 RAID enclosure - 170 GB net storage
1 external LTO drive

Oooops :-)))


Well, you can't argue with the facts - I'll suggest either
getting rid of the small trace files on a nightly basis
or disabling that feature altogether - or adding 5 internal
36 GB drives in a RAID 0 configuration as a holding disk.

Hmmm ... possibly some cheap PC-style external SCSI-to-IDE RAID
will do the trick just as well. Sun won't officially support it,
but they don't support the HP LTO, either.


Thanks to all, end of thread (hopefully)

Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Re: Performance degrading over time?

2002-10-29 Thread Patrick M. Hausen
Hi all!

Jay Lessert wrote:

 So the first thing is to look at two AMANDA MAIL
 REPORT outputs:  one from a fast run in June, and one from a slow
 run yesterday.
 
 Look at Estimate Time, Dump Time, Tape Time, Dump Rate, and Taper Rate
 for your one file system.  Where is the increase happening?

 My personal bet is estimate time, and that your 6GB of growth is in the
 form of a large number of small files, but that is a wild guess.

The dump time is increasing way beyond the growth rate of data
to be dumped.

 If you give us actual data, we can do much better.  :-)

See below.

As you can see, from April up until August the dump size increased
about 1.5-fold while the dump _time_ increased by a factor of
about 2.5. The dump rate dropped from 7.5 MB/s to 4.6 MB/s.

From August until now the dump size increased by about 10%
while the dump time continues to grow dramatically.

I'll look into the "lots of small files added?" question.

Thanks,
Patrick
---
5. April 2002 15:29

STATISTICS:
  Total   Full  Daily
      
Dump Time (hrs:min)1:54   1:53   0:00   (0:01 start, 0:00 idle)
Output Size (meg)   48155.548155.50.0
Original Size (meg) 48155.548155.50.0
Avg Compressed Size (%) -- -- -- 
Tape Used (%)  24.1   24.10.0
Filesystems Dumped2  2  0
Avg Dump Rate (k/s)  7284.9 7284.9-- 
Avg Tp Write Rate (k/s)  7282.8 7282.8-- 

DUMP SUMMARY:
                                    DUMPER STATS                TAPER STATS
HOSTNAME  DISK   L  ORIG-KB   OUT-KB COMP%  MMM:SS   KB/s  MMM:SS KB/s
-- 
allianz01 c0t0d0s0   0  1199200  1199200   -- 6:27 3100.76:28 3089.9
allianz01 c1t0d0s6   0 48112064 48112064   --   106:22 7538.5  106:23 7537.7

1. Mai 2002 01:25

STATISTICS:
  Total   Full  Daily
      
Dump Time (hrs:min)1:51   1:49   0:00   (0:02 start, 0:00 idle)
Output Size (meg)   49187.349187.30.0
Original Size (meg) 49187.349187.30.0
Avg Compressed Size (%) -- -- -- 
Tape Used (%)  24.6   24.60.0
Filesystems Dumped2  2  0
Avg Dump Rate (k/s)  7700.0 7700.0-- 
Avg Tp Write Rate (k/s)  7698.5 7698.5-- 

DUMP SUMMARY:
                                    DUMPER STATS                TAPER STATS
HOSTNAME  DISK   L  ORIG-KB   OUT-KB COMP%  MMM:SS   KB/s  MMM:SS KB/s
-- 
allianz01 c0t0d0s0   0  1201184  1201184   -- 5:50 3427.75:52 3416.7
allianz01 c1t0d0s6   0 49166592 49166592   --   103:11 7941.9  103:11 7941.6

1. Juni 2002 01:26

STATISTICS:
  Total   Full  Daily
      
Dump Time (hrs:min)1:53   1:51   0:00   (0:02 start, 0:00 idle)
Output Size (meg)   50107.250107.20.0
Original Size (meg) 50107.250107.20.0
Avg Compressed Size (%) -- -- -- 
Tape Used (%)  25.1   25.10.0
Filesystems Dumped2  2  0
Avg Dump Rate (k/s)  7674.4 7674.4-- 
Avg Tp Write Rate (k/s)  7672.9 7672.9-- 

DUMP SUMMARY:
                                    DUMPER STATS                TAPER STATS
HOSTNAME  DISK   L  ORIG-KB   OUT-KB COMP%  MMM:SS   KB/s  MMM:SS KB/s
-- 
allianz01 c0t0d0s0   0  1202656  1202656   -- 5:45 3487.65:46 3476.1
allianz01 c1t0d0s6   0 50107072 50107072   --   105:41 7902.1  105:41 7901.9

1. Juli 2002 01:30

STATISTICS:
  Total   Full  Daily
      
Dump Time (hrs:min)1:58   1:56   0:00   (0:02 start, 0:00 idle)
Output Size (meg)   51813.251813.20.0
Original Size (meg) 51813.251813.20.0
Avg Compressed Size (%) -- -- -- 
Tape Used (%)  25.9   25.90.0
Filesystems Dumped2  2  0
Avg Dump Rate (k/s)  7640.7 7640.7-- 
Avg Tp Write Rate (k/s)  7639.2 7639.2-- 

DUMP SUMMARY:
                                    DUMPER STATS                TAPER STATS
HOSTNAME  DISK   L  ORIG-KB   OUT-KB COMP%  MMM:SS   KB/s  MMM:SS KB/s
-- 
allianz01 c0t0d0s0   0  1634400  1634400   -- 6:50 3985.36:51 3974.2
allianz01 c1t0d0s6   0 51422368 

Performance degrading over time?

2002-10-28 Thread Patrick M. Hausen
Hi all!

We have a database server that is backed up daily using Amanda
to a dedicated tape drive.

Sun E3500
Sun E1000 RAID, ~170 GB of storage
LTO tape connected to LVD controller

One and only application on this system is an Oracle database
server (8i). The tables are kept in one filesystem (UFS) and the
database is shut down nightly to accommodate the Amanda run.
Despite the dumps being incremental, of course all the files
have changed during the day, so we will get an almost-full dump.

When I installed Amanda in June the nightly backup run took
about 4.5 hours. This has continuously increased to more than
7 hours today - without any changes to hardware or configuration.
The size of the table files has increased from 72 to 78 GB
which doesn't quite qualify as the only reason ;-)

The first thing I thought of was UFS degradation when the FS is almost
full - can't be, since the FS in question is around 170 GB and
thus only 45% full with the current tables.

Obviously with a single host connected to a single drive and
only so much storage we don't use a holding disk at the moment.


Any ideas where to start searching?

Thanks,
Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de






Re: Performance degrading over time?

2002-10-28 Thread Patrick M. Hausen
Hi all!

Andreas Baier wrote:

 my answer seems to be a bit off-topic, but have you ever given a
 Logical Volume Manager a try - you could stop your oracle instance,
 take a snapshot of the filesystem, start up your oracle and have
 almost all the time in the world for your backup while your users are
 able to use the database with only a small performance impact. I do
 this on a (smaller but 24/7) Oracle 8i database.

But why - of course!

Unfortunately the big expensive Sun/Oracle system was neither designed
nor sold by us. We just inherited the system when problems started.
The backup concept of the original seller/supporter was to use
the internal DDS drive |-)

So after we took over service, with most of the budget already spent,
Amanda was an obvious solution - just add a real tape drive; they
almost choked on the price of that already. We did offer volume
management, but for this component the price was prohibitive.
Seems like we have a big sales opportunity now ;-))

 Fragmentation *might* be a cause for this performance (3.5 MB/s), but
 you would have noticed it in your oracle performance as well.

Well, the database is _only_ accessed by some web application.
There are other limiting factors in this setup as far as responsiveness
is concerned.

I used to believe UFS wasn't prone to fragmentation as long
as you keep your 10% reserve. Seems like very few very big files
are sort of a pathological case.

 What about your network - you said your performance in June was about
 4.5 MB/s - could there be a network bottleneck?

It's a _local_ backup - no network communication involved.
Well, OK, it _is_ a network socket and there will be quite a
bit of context switching.
4.6 MB/s would be acceptable - it just keeps getting even slower.

 Perhaps you'll give the holding disk a try - you might not regret it.

This is the first shot we will take - though I'm not very confident,
since the holding space will have to be on the same RAID, in the same
(only existing) partition as the database tables. I'll let you
know the results.

Thanks,

Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Re: Mac OSX 10.1.5 and dump ...

2002-06-18 Thread Patrick M. Hausen

Hi all!

Martin wrote:

 Trying to get amanda working on Mac OSX 10.1.5 using dump as the backup 
 service. However when I try and run dump I get the following..
...
DUMP: bad sblock magic number
DUMP: The ENTIRE dump is aborted.
 
 Any ideas?? I'll try with gtar and see what happens, but I'd prefer to 
 use dump.

Well, I figure you are using HFS+ for your Mac OS X hard disk?
(It's the default.)

If so, then most probably dump on OS X supports UFS filesystems only.
The message "bad sblock magic number" leads me to that conclusion.

To be sure, you'd better either check the source from Darwin
or ask on some Darwin developers' mailing list.

HTH,
Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Re: BSD restore can restore Solaris ufsdumps?

2002-05-06 Thread Patrick M. Hausen

Hi all!

Dan wrote:

 Just to see if it would work, I tried to restore a Solaris (2.5.1)
 ufsdump using NetBSD's restore. To my surprise, it worked, modulo
 Solaris ACLs, of course. Amanda's amrecover won't restore them,
 however -- it complains that it can't find ufsdump/ufsrestore, and
 symlinking them to dump/restore doesn't appear to help.
 
 Is there any way around this? Tar takes quite a bit longer on my
 system, and bogs it down more, so I'd rather be using dump if it's
 possible.

Unfortunately I can't try it anymore or even give you a hint
about our setup, but 2 years ago, when we still had 2 Solaris
machines running, we did all backup on FreeBSD servers.

We routinely restored files backed up from the Solaris Amanda
clients on the FreeBSD Amanda server - no problem whatsoever.
(ufsdump/ufsrestore and dump/restore respectively.)

IIRC we did get an informational message about byte swapping
but besides that everything went as it is supposed to.

On FreeBSD I always install Amanda via the ports collection,
that would be pkgsrc on NetBSD. On Solaris I compiled it myself
but can't remember anything special.

HTH,

Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Re: filetype

2002-02-27 Thread Patrick M. Hausen

Hello!

 I have a Tandberg SLR100 tapedrive and didn't find the tapetype for this,
 so I ran the filetype program. First time with compression:
 
 [...]
 
 Why are all the numbers different? Is that right? Can I trust those
 numbers? Which ones should I use?

SLR drives do read-after-write verification and map out
defects automatically. Then the tape might go into
stop-and-go mode occasionally, which wastes tape length
and thus capacity. So you will always get _slightly_
different numbers.

As for what is correct: I've talked to Tandberg Data support
and they told me that all SLR models had a filemark size of
something between 2 and 8 kbytes. There is no one true
filemark size according to them. I'm using 8 kbytes.
Everything else (size/speed) seemed reasonable in your results.
I never used tapetype, though, but just looked up the max.
uncompressed sustained rate in the data sheets provided by
Tandberg Data. Same for the size.
An SLR 100 tape holds 50 GB, period - worked every
time so far ;-)

HTH,
Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Re: mailing of logreport

2002-02-09 Thread Patrick M. Hausen

Hi!

 that's compiling, installing and configuring a maildaemon just to
 transfer a report... that's a bit overkill, no?

I assumed that Sendmail came with the OS - that's definitely
the case for Solaris and FreeBSD, which is what we run here.

So why install, configure and run an additional piece of software
if sendmail -q30m does the trick? ;-)))

Regards,

Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Re: mailing of logreport

2002-02-06 Thread Patrick M. Hausen

Hi all!

Tom Van de Wiele wrote:

 Philip J. Hollenback wrote:
  
  Yes, use ssmtp, a send-only replacement for sendmail.

 that would be a good idea, if ssmtp wasn't discontinued...
 
 where can I find it?  I searched the entire OSDN network, freshmeat,
 linuxapps, sourceforge...

Why not run sendmail -q30m without the -bd option? It won't open
a listening socket, so it can't be compromised over the network.

HTH,

Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Re: Host Down Problem... again.

2002-01-10 Thread Patrick M. Hausen

Hi all!

 I don't understand why you think inetd isn't trying to run the right thing.
  When I type netstat -a | grep -i amanda, I get this output:
 
 alpha:/etc/rc.d/init.d netstat -a | grep -i amanda
 tcp0  0 *:amandaidx *:* LISTEN 
 udp0  0 *:amanda*:*   
 
 Which makes it appear that amanda is running correctly to me... I might be
 wrong... I'm far from an expert.  Can someone please tell me what is
 wrong???

The netstat output just shows that inetd opened the correct ports.
Strictly speaking, it doesn't even show that: it shows that inetd
opened the ports that are associated with the entries "amandaidx"
and "amanda" in /etc/services ;-) But I'm nitpicking.

This definitely doesn't say a single bit about which commands
inetd will run when something actually connects to the ports
in question. You could enter

amanda dgram udp wait amanda humpty dumpty

in your /etc/inetd.conf and would get the same netstat output.

Though, inetd would have a hard time actually running humpty, once
you connect.
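
For contrast, a working FreeBSD-style entry would look roughly like
this (install path assumed):

amanda dgram udp wait amanda /usr/local/libexec/amandad amandad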

HTH,
Patrick
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Re: SLR 100, read while write capability

2001-06-18 Thread Patrick M. Hausen

Hi!

Olivier wrote:

 While browsing to find a cheap supplier for Imation SLR 100 cartridge
 (any hint welcome :) I found out that SLR drives are capable of read
 while write.
 
 I wonder if that capability is implemented in Amanda, could be
 implemented in Amanda, if it relies on the tape device...
 
 It could be a very nice add-on if Amanda could do verify after write,
 on fly...

That capability would have to be added to dump and gtar in the
first place.

Regards,
Patrick
-- 
--- WEB ISS GmbH - Scheffelstr. 17a - 76135 Karlsruhe - 0721/9109-0 ---
-- Patrick M. Hausen - Technical Director - [EMAIL PROTECTED] ---
Two men say, they're Jesus - one of 'em must be wrong. (Dire Straits)



Files not backed up while they should?

2001-06-07 Thread Patrick M. Hausen

Hi all!

We have been using Amanda for backup for a couple of years now,
and it has never failed us. But today I encountered something
really weird:

We have two servers in our internal network, hugo10 and hugo20.
hugo20 is the tape server and backup host. Backups are done
daily.

Today I tried to restore one particular file one of our developers
accidentally lost. The disk in question is da0s1d on hugo20.
Of course, it was backed up last night. Level 1 dump, it seems:

hugo20# su -m operator -c "amadmin HAUSNETZ info hugo20 da0s1d"

Current info for hugo20 da0s1d:
  Stats: dump rates (kps), Full:  1778.0, 962.0, 1566.0
Incremental:1.0,   1.0,   1.0
  compressed size, Full: -100.0%,-100.0%,-100.0%
Incremental: -100.0%,-100.0%,-100.0%
  Dumps: lev datestmp  tape file   origK   compK secs
  0  20010530  HAUSNETZ-1412 1734032 1734048  975
  1  20010607  HAUSNETZ-4  8 153 160  130
  2  20010412  HAUSNETZ-12 5 153 160  129
  3  20010224  HAUSNETZ-10 3 362 384   48


But: the index file for that disk on hugo20 doesn't show any (!)
files having been backed up. And of course a lot of files have
changed since 20010530, which seemingly is the last level 0 dump.

In fact all index files from 20010531 on are empty!

The time on the host is OK, and the timestamps on all files in
question are OK, too.

I need some help on where to look further for information. At the
moment I'm really puzzled and completely stuck. All reports
said, backup went fine, but nothing was dumped for that disk.

Versions installed:

FreeBSD hugo20.ka.punkt.de 4.3-BETA FreeBSD 4.3-BETA #1: Thu Mar 22 10:25:35 CET 2001  
   [EMAIL PROTECTED]:/usr/obj/usr/src/sys/HUGO20  i386
hugo20# pkg_info | grep amanda
amanda-2.4.1The Advanced Maryland Automatic Network Disk Archiver


Thanks for any hints,
Patrick



Q: tape drives beyond DLT or SLR

2001-01-23 Thread Patrick M. Hausen

Hi fellow Amanda users!

We use Amanda for networked backup. Currently we rely on
Tandberg SLR tape drives exclusively. These drives
are available at a maximum capacity of 50 GB w/o compression.
And they're robust, reliable and _reasonably_ fast and cheap.
DLT drives max out at 40 GB w/o compression.

What options do I have for higher capacity drives?
Amanda still requires the largest dump to fit on a single tape.
With filesystems routinely approaching a couple hundred gigs
these days, there seems to be a huge gap ...

I found IBM Ultrium technology. Has someone used these successfully?
OTOH they are 100 GB uncompressed - not a _real_ order of magnitude.

What _do_ multi-terabyte datacenters use for backup, anyway?

I know, if I find something to backup 200 or 300 GB on a single tape,
that I will face another limit - the speed of my 100 MBit/s
Ethernet network. We already have a separate VLAN for backup only.
A rough calculation shows a _theoretical_ bandwidth of ~47 GB per hour
without counting packet overhead. And if I fix that by going to
Gigabit Ethernet for the biggest servers, I'll hit the SCSI-bus and
tape bandwidth barrier ... ;-)

Operating systems we use are FreeBSD and Solaris, so we are familiar
with i386 and Sparc architecture and technology.

Thanks in advance for any comments/hints,
Patrick
-- 
--- WEB ISS GmbH - Scheffelstr. 17a - 76135 Karlsruhe - 0721/9109-0 ---
-- Patrick M. Hausen - Technical Director - [EMAIL PROTECTED] ---
"Contrary to popular belief, penguins are not the salvation of modern
 technology.  Neither do they throw parties for the urban proletariat."



Re: Q: tape drives beyond DLT or SLR

2001-01-23 Thread Patrick M. Hausen

Hi all!

Gerhard den Hollander wrote:

 * Patrick M. Hausen [EMAIL PROTECTED] (Tue, Jan 23, 2001 at 09:11:36AM +0100)
 
  What options do I have for higher capacity drives?
  Amanda still requires the largest dump to fit on a single tape.
  With filesystems routinely approaching a couple of hundreds of gigs
  these days there seems to be a huge gap ...
 
  I found IBM Ultrium technology. Has someone used these successfully?
 Yup.
 
  OTOH they are 100 GB uncompressed - not a _real_ order of magnitude.
 
 No,
 amanda with compression should get you to 200G (depending on the type
 of data)

I just compared uncompressed capacity. Tandberg's SLR100 gives 50 GB
uncompressed, IBM's Ultrium 100 GB uncompressed.

Unfortunately our customer with the highest capacity needs stores
precompressed data of several GB per month and wants it all
available on disk. So we need to plan for full dumps of 100 GB
and maybe even more. That's why I'd prefer a tape solution that
gets 200 GB of data on a tape.

For the curious: webserver logfiles - they want at least one
year's worth of that available online for analysis ... marketing types ;-)

 Different backup software that allows dumps to span multiple tapes, in
 combination with stackers or taperobots.

That's what I figured ...

 Work on multitapebackup is (apparently) underway for Amanda NG

_This_ is good news indeed. So one possible solution would be
to go for a library instead of an autoloader, start with one drive
and tapes of 50 or 100 GB capacity now - and hope AmandaNG will be
available when we need it ;-))

We certainly would be willing to help with beta testing, but don't
have spare time for development.

2 more options I found on www.backupcentral.com:

"Quantum Super DLT 220N" with a quoted capacity of 220 GB (that's almost
certainly compressed, so it may have 110 GB uncompressed).
Unfortunately the link to Quantum's website doesn't reveal _any_
information on that product. Is it vaporware?

"Ampex DST 312" with a quoted capacity of 330 GB - anyone used
these? They're helical scan technology which makes me feel a
lot less comfortable than with Tandberg or IBM ...

Thanks,
Patrick
-- 
--- WEB ISS GmbH - Scheffelstr. 17a - 76135 Karlsruhe - 0721/9109-0 ---
-- Patrick M. Hausen - Technical Director - [EMAIL PROTECTED] ---
"Contrary to popular belief, penguins are not the salvation of modern
 technology.  Neither do they throw parties for the urban proletariat."



Re: Q: tape drives beyond DLT or SLR

2001-01-23 Thread Patrick M. Hausen

Hi!

Mitch wrote:

 On Tue, 23 Jan 2001, Patrick M. Hausen wrote:
 
  Unfortunately our customer with the highes capacity needs stores
  precompressed data of several GB per month and wants them all
  available on disk. So we need to plan for full dumps of 100 GB
  and maybe even more. That's why I'd prefer a tape solution that
  gets 200 GB of data on a tape.
  
  For the curious: webserver logfiles - they want at least one
  year's worth of that available online for analysis ... marketing types ;-)
  
 You _might_ want to consider modifying your strategy here.  What
 you're describing is really an archive of static data.  Why beat
 your backup hardware/software up over it when it's static logs?

Absolutely correct ... but ...

 How about something like this.  At end of each month, move current
 month's logs into your online archive and cut a tape, perhaps with
 duplicates if you prefer the extra security, of just the new bits.
 Add tape to your tape archive.  Don't bother making periodic backups
 of data in the archive, since you've already got it on both disk and
 tape, and it's not changing anyway.

This means a separate tape drive and/or manual intervention.
All servers are located in a remote data center -
that's why we want a "change cartridges once a week and forget
about the rest" solution ...

If the customer is willing to buy a separate autoloader instead
of using our standard "data center backup service", we can implement
your suggestion.

And ... the customer wants _yesterday's_ logs available for analysis
today, together with all accumulated data over the last year up to and
including yesterday. They're using WebTrends Enterprise Reporting -
this software just can't analyze separate months separately and
give out reports covering the entire period. Still they insist
on using it - it generates "prettier" reports than, say, NetTracker,
and has a "nicer" UI.

There are quite a few other quirks with this product.
If you analyze a year's worth of compressed logfiles, WT
insists on decompressing _everything_ to temporary storage,
then analyzing, then removing the temporary files.

It starts one thread per IP address for reverse lookups ...

Need I say more? Sun, IBM, Compaq sure like it a lot ;-)))

The customer asks - we suggest and offer - they buy - or don't. I don't
have a problem with that, I'm providing _services_.

But now I'm definitely getting off topic.

 Alternatively, if the above just won't cut it for whatever reason,
 and you want to/have to keep this archival data in your regular
 backup cycle, this seems like one time when the current amanda
 workaround for filesystems too large for one tape will work quite
 well.  This being the "use tar and make separate entries in disklist
 for each top-level directory" approach.

Right. Something along this line probably will do the trick.
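
E.g. a disklist sketch along those lines - host and paths invented:

loghost  /data/logs/2000     comp-user-tar
loghost  /data/logs/2001     comp-user-tar
loghost  /data/logs/current  comp-user-tar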

Getting the biggest tape drive available with current technology
won't hurt, either. ;-)

Thanks,
Patrick
-- 
--- WEB ISS GmbH - Scheffelstr. 17a - 76135 Karlsruhe - 0721/9109-0 ---
-- Patrick M. Hausen - Technical Director - [EMAIL PROTECTED] ---
"Contrary to popular belief, penguins are not the salvation of modern
 technology.  Neither do they throw parties for the urban proletariat."



Re: Q: tape drives beyond DLT or SLR

2001-01-23 Thread Patrick M. Hausen

Hi!

   Work on multitapebackup is (apparently) underway for Amanda NG
  
  _This_ is good news indeed. So one possible solution would be
  to go for a library instead of an autoloader, start with 1 tape
  of 50 or 100 GB capacity now - and hope AmandaNG will be available
  when we need it ;-))
 
 What is "Amanda NG"? Amanda Next Generation?

That's what I implied. Beyond that I don't know more.
I couldn't find anything on Amanda NG on www.amanda.org,
nor on egroups.

Can someone of the people who seem to know about it give us a clue?

Regards,
Patrick
-- 
--- WEB ISS GmbH - Scheffelstr. 17a - 76135 Karlsruhe - 0721/9109-0 ---
-- Patrick M. Hausen - Technical Director - [EMAIL PROTECTED] ---
"Contrary to popular belief, penguins are not the salvation of modern
 technology.  Neither do they throw parties for the urban proletariat."