Re: building amanda 3.3.3

2013-10-22 Thread Jens Berg
Hi Petr,

I've never taken a deeper look into amanda's makefiles so the following
is just a guess.
It looks like a dependency is missing or not recognized, and you are
using the make utility with parallel job execution enabled (option
-j), so make ends up starting gcc in parallel with the generation of
genversion.h. Depending on your system load, the generation of that
header file finishes before or after gcc needs it, which results in
seemingly unreproducible behavior. Maybe this is a result of passing
--disable-dependency-tracking to ./configure?
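
As an illustration, here is a minimal Makefile sketch of that kind of
race (invented for illustration, not amanda's actual rules):

# Both prerequisites of "all" may be built concurrently under "make -j".
all: genversion.o genversion.h

# Missing prerequisite: nothing tells make the .o needs the header first,
# so gcc may run before genversion.h exists.
genversion.o: genversion.c
	gcc -c genversion.c

genversion.h:
	echo '#define CC gcc' > genversion.h.new
	mv genversion.h.new genversion.h

# The fix is an explicit dependency, which serializes the two recipes:
# genversion.o: genversion.c genversion.h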

Best
Jens

On Tue Oct 22 2013 11:45:20 GMT+0200
phra...@redhat.com (Petr Hracek) wrote:

 Hi folks,
 
 I would like to build amanda in an s390x environment,
 but it is failing with:
 
 Making all in common-src
 make[2]: Entering directory `/builddir/build/BUILD/amanda-3.3.3/common-src'
 rm -f genversion.h genversion.h.new
 gcc -DHAVE_CONFIG_H -I. -I../config -I../gnulib  -fno-strict-aliasing
 -D_GNU_SOURCE -I/usr/include -pthread -I/usr/include/glib-2.0
 -I/usr/lib64/glib-2.0/include -Wall -Wextra -Wparentheses
 -Wdeclaration-after-statement -Wmissing-prototypes -Wstrict-prototypes
 -Wmissing-declarations -Wformat -Wformat-security -Wsign-compare
 -Wfloat-equal -Wold-style-definition -Wno-strict-aliasing
 -Wno-unknown-pragmas -Wno-deprecated-declarations -O2 -g -pipe -Wall
 -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
 --param=ssp-buffer-size=4 -grecord-gcc-switches
 -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -march=z10
 -mtune=zEC12 -fPIE -fno-strict-aliasing  -c genversion.c
 echo '#define CC gcc' > genversion.h.new
 echo '#define BUILT_DATE '`date`'' >> genversion.h.new
 genversion.c:38:24: fatal error: genversion.h: No such file or directory
  #include <genversion.h>
 ^
 compilation terminated.
 echo '#define BUILT_MACH ' >> genversion.h.new
 mv genversion.h.new genversion.h
 The bug is not reproducible, so it is likely a hardware or OS problem.
 make[2]: Leaving directory `/builddir/build/BUILD/amanda-3.3.3/common-src'
 
 
 Did you observe this behaviour?
 The configure script is called with these parameters:
 
  ./configure --program-prefix= --disable-dependency-tracking
 --prefix=/usr --exec-prefix=/usr
  --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc
 --datadir=/usr/share
  --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/lib64
 --localstatedir=/var --sharedstatedir=/var/lib
  --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared
 --disable-rpath --disable-static --disable-dependency-tracking
  --disable-installperms --with-amdatadir=/var/lib/amanda
 --with-amlibdir=/usr/lib64 --with-amperldir=/usr/lib64/perl5/vendor_perl
  --with-index-server=amandahost --with-tape-server=amandahost
 --with-config=DailySet1 --with-gnutar-listdir=/var/lib/amanda/gnutar-lists
  --with-smbclient=/usr/bin/smbclient
 --with-amandates=/var/lib/amanda/amandates --with-amandahosts
 --with-user=amandabackup --with-group=disk
  --with-tmpdir=/var/log/amanda --with-gnutar=/bin/tar
 --with-ssh-security --with-rsh-security --with-bsdtcp-security
 --with-bsdudp-security --with-krb5-security
 
 Thank you in advance.
 



Re: building amanda 3.3.3

2013-10-22 Thread Jean-Louis Martineau

Petr is right,

I added this dependency to the Makefile:
genversion.$(OBJEXT): $(genversion_SOURCES) genversion.h

Jean-Louis


On 10/22/2013 08:39 AM, Jens Berg wrote:

[quoted message snipped]


diff --git a/common-src/Makefile.am b/common-src/Makefile.am
index d7000a3..40dacb5 100644
--- a/common-src/Makefile.am
+++ b/common-src/Makefile.am
@@ -143,6 +143,8 @@ genversion_SOURCES = genversion.c svn-info.h
 genversion_LDADD = $(libamanda_la_LIBADD)	\
 	../gnulib/libgnu.la
 
+genversion.$(OBJEXT): $(genversion_SOURCES) genversion.h
+
 genversion.h: $(top_builddir)/config.status
 	-rm -f $@ $@.new
 	echo '#define CC $(CC)' > $@.new
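
As a quick sanity check of the fix, a repeated parallel rebuild should
now succeed every time; a shell sketch, with the build directory and -j
level chosen arbitrarily:

cd /builddir/build/BUILD/amanda-3.3.3
for i in 1 2 3 4 5; do
    make -C common-src clean >/dev/null
    make -j8 -C common-src || break
done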


bumping my backups

2013-10-22 Thread Brian Cuttler

Amanda users,

One of my amanda servers uses Vtapes, and has multiple zpools
(ZFS filesystem) assigned to it. Vtapes have been configured
at 1.8 Tbytes, a value that seems to be insufficient, at least
based on the dump estimates.

I seem to be sitting at level 1 dumps for several nights, and
think that perhaps my bump parameters could be better set.

I think these values are carry-overs from older amanda versions, as
they are nowhere near the current defaults (per the amanda.conf man
page on the web) and are not parameters that we usually mess with.

bumpsize 20 Mb  # minimum savings (threshold) to bump level 1 - 2
bumppercent 20  # minimum savings (threshold) to bump level 1 - 2
bumpdays 1  # minimum days at each level
bumpmult 4  # threshold = bumpsize * bumpmult^(level-1)
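
Working through the formula in that last comment with these values
(and noting that, as I read the man page, a nonzero bumppercent is
used instead of bumpsize):

# threshold = bumpsize * bumpmult^(level-1)
#   bump 1 -> 2: 20 MB * 4^0 =  20 MB of savings required
#   bump 2 -> 3: 20 MB * 4^1 =  80 MB
#   bump 3 -> 4: 20 MB * 4^2 = 320 MB
# With bumppercent 20 taking precedence, the same progression is
# 20%, 80%, 320% -- and a savings threshold above 100% can never be
# met, so dumps would stall at a low level.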

The file systems I'm looking at are in excess of 100 GB and may be
in excess of 500 GB, so not bumping is causing them to be skipped:
the total dumps are too large, and level 0 dumps are taking
precedence over these level 1 dumps.

I'm also going to check with the data owners, as I'm rather surprised
that these dumps aren't falling under the savings cap.

Then again, we had some that were failing for a long time because
the number of files per directory was excessive (ZFS has virtually
no limit, but putting several hundred thousand files in a directory
is still not recommended), so we had no level 0 dumps for a while.

Estimates are done by the server, and that may play into estimates
that are in fact excessive, given the lack of current data.

Do you have any recommendations on how to proceed (dump parameters
or other things to look at) from here?
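
For reference, the thresholds in effect and the per-DLE dump history
can be inspected with amadmin (the config, host, and disk names below
are placeholders):

amadmin <config> bumpsize                  # show the bump thresholds in effect
amadmin <config> info <host> <disk>        # dump history / current level for a DLE
amadmin <config> force-bump <host> <disk>  # force a bump on the next run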

thank you,

Brian
---
   Brian R Cuttler                 brian.cutt...@wadsworth.org
   Computer Systems Support        (v) 518 486-1697
   Wadsworth Center                (f) 518 473-6384
   NYS Department of Health        Help Desk 518 473-0773