Re: Q: 'all estimate timed out' error
On Thu, Aug 23, 2012 at 11:44 AM, Geert Uytterhoeven <ge...@linux-m68k.org> wrote:
> On Mon, Oct 10, 2011 at 9:53 AM, Albrecht Dreß <albrecht.dr...@arcor.de> wrote:
>> I use amanda 2.5.2p1 on a Ubuntu 8.04 server to back up several machines.
>> The backup of /one/ disk from /one/ machine, which worked flawlessly for
>> years, now regularly throws the message
>>
>>   FAILURE AND STRANGE DUMP SUMMARY:
>>     srv-erp3 /mnt1 lev 0 FAILED [disk /mnt1, all estimate timed out]
>>     planner: ERROR Request to srv-erp3 failed: timeout waiting for REP
>>
>> in the report, but the other disks are written properly.
>
> Did this ever get resolved? How?
>
> Since a few days, I'm getting the same error for one of my DLEs, which also
> worked flawlessly for years, and whose contents haven't changed recently:
>
>   FAILURE DUMP SUMMARY:
>     machine /path lev 0 FAILED Failed reading dump header.
>     machine /path lev 0 FAILED Failed reading dump header.
>     machine /path lev 0 FAILED [too many dumper retry: [request failed: timeout waiting for REP]]

I discovered I had ca. 200 hanging backup processes, like:

  backup    3004  0.0  0.0      0     0 ?  Z   Aug22  0:00 [amandad] <defunct>
  backup    3034  0.0  0.0      0     0 ?  Z   Aug22  0:00 [sendbackup] <defunct>

and one like this:

  backup   23748  0.0  0.0  40184   128 ?  Ss  Aug19  1:00 amandad -auth=bsd amdump amindexd amidxtaped

After killing them, the next backup round completed successfully...

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say programmer or something like that.
                                                        -- Linus Torvalds
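A quick way to spot such leftovers is to filter a process listing for defunct Amanda client processes. A sketch, using sample data modelled on the listing above (on a real client you would pipe `ps -eo pid,ppid,stat,comm` instead of the here-string):

```shell
# Print defunct (zombie) Amanda client processes from a (sample) ps listing.
# Note: zombies cannot be killed directly; kill the hung parent amandad.
ps_sample='3004 1 Z amandad
3034 1 Z sendbackup
23748 1 Ss amandad'
echo "$ps_sample" | awk '$3 ~ /^Z/ { print "stale:", $1, $4 }'
```

This only identifies the stale entries; whether killing the parent is safe depends on what else amandad is doing at the time.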
Re: Q: 'all estimate timed out' error
Hi Albrecht,

On Mon, Oct 10, 2011 at 9:53 AM, Albrecht Dreß <albrecht.dr...@arcor.de> wrote:
> I use amanda 2.5.2p1 on a Ubuntu 8.04 server to back up several machines.
> The backup of /one/ disk from /one/ machine, which worked flawlessly for
> years, now regularly throws the message
>
>   FAILURE AND STRANGE DUMP SUMMARY:
>     srv-erp3 /mnt1 lev 0 FAILED [disk /mnt1, all estimate timed out]
>     planner: ERROR Request to srv-erp3 failed: timeout waiting for REP
>
> in the report, but the other disks are written properly.

Did this ever get resolved? How?

Since a few days, I'm getting the same error for one of my DLEs, which also
worked flawlessly for years, and whose contents haven't changed recently:

  FAILURE DUMP SUMMARY:
    machine /path lev 0 FAILED Failed reading dump header.
    machine /path lev 0 FAILED Failed reading dump header.
    machine /path lev 0 FAILED [too many dumper retry: [request failed: timeout waiting for REP]]

The dumper log shows:

1345678162.592411: dumper: security_getdriver(name=BSD) returns 0x7f5180959900
1345678162.592421: dumper: security_handleinit(handle=0x1874000, driver=0x7f5180959900 (BSD))
1345678162.592676: dumper: dgram_send_addr(addr=0x1874040, dgram=0x7f5180964188)
1345678162.592686: dumper: (sockaddr_in *)0x1874040 = { 2, 10080, 127.0.1.1 }
1345678162.592692: dumper: dgram_send_addr: 0x7f5180964188->socket = 4
1345678162.593013: dumper: dgram_recv(dgram=0x7f5180964188, timeout=0, fromaddr=0x7f5180974180)
1345678162.593027: dumper: (sockaddr_in *)0x7f5180974180 = { 2, 10080, 127.0.1.1 }
1345678222.653129: dumper: dgram_send_addr(addr=0x1874040, dgram=0x7f5180964188)
1345678222.653163: dumper: (sockaddr_in *)0x1874040 = { 2, 10080, 127.0.1.1 }
1345678222.653170: dumper: dgram_send_addr: 0x7f5180964188->socket = 4
1345678222.653542: dumper: dgram_recv(dgram=0x7f5180964188, timeout=0, fromaddr=0x7f5180974180)
1345678222.653556: dumper: (sockaddr_in *)0x7f5180974180 = { 2, 10080, 127.0.1.1 }
1345678282.671245: dumper: dgram_send_addr(addr=0x1874040, dgram=0x7f5180964188)
1345678282.671280: dumper: (sockaddr_in *)0x1874040 = { 2, 10080, 127.0.1.1 }
1345678282.671287: dumper: dgram_send_addr: 0x7f5180964188->socket = 4
1345678282.671645: dumper: dgram_recv(dgram=0x7f5180964188, timeout=0, fromaddr=0x7f5180974180)
1345678282.671683: dumper: (sockaddr_in *)0x7f5180974180 = { 2, 10080, 127.0.1.1 }
1345678342.674842: dumper: security_seterror(handle=0x1874000, driver=0x7f5180959900 (BSD) error=timeout waiting for REP)
1345678342.674897: dumper: security_close(handle=0x1874000, driver=0x7f5180959900 (BSD))
1345678342.674938: dumper: putresult: 11 TRY-AGAIN

Amanda 2.6.1p1-2 on Ubuntu 10.04.4 LTS.

Thanks!

Gr{oetje,eeting}s,

Geert
Re: [Amanda-users] Changing Next Tape Expected
On Thu, Jan 7, 2010 at 00:31, brnn8r <amanda-fo...@backupcentral.com> wrote:
> This is my first post on this forum and I'm a bit of an Amanda newb. At my
> company we have a weekly backup regime: after each weekly backup the tape
> is sent offsite for storage and the next weekly tape is brought in. I've
> just come back from holiday and Amanda is expecting our next weekly backup
> tape to be called Weekly03. Unfortunately, while I was away this tape was
> given to the backup company and we now have Weekly04.
>
> My question is: can I change what Amanda expects to be the next weekly
> tape? I.e. can I set it to Weekly04 instead of Weekly03? Otherwise I'll
> have to request that the previous weekly tape is returned, and this will
> of course have some cost.

If Weekly04 was last written more than tapecycle runs ago, I think Amanda
will happily accept it.

Gr{oetje,eeting}s,

Geert
Re: A warning for Amanda users on Ubuntu 9.10
On Sat, Nov 28, 2009 at 23:03, Charles Curley <charlescur...@charlescurley.com> wrote:
> I just got a nasty surprise. With Ubuntu 9.10, you can have your entire
> home directory encrypted with ecryptfs. Cool, I said, I'll try that on my
> laptop. The encrypted file system is mounted at /home/${USER}. In addition,
> there is a directory, /home/.ecryptfs, where the lower (encrypted) file
> system is kept. I was backing up with a DLE for /home:
>
>   dragon.localdomain /home comp-server-root-tar
>
> Here's the problem: normally tar won't cross a mount point. So I was
> getting useless backups of /home on my laptop. (I was getting .ecryptfs
> backed up, but that did me no good with my password broken.) I discovered
> all this when my laptop went screwy and I couldn't log in as my normal
> user. I was able to recover my data, see
> http://dragon/~ccurley/crcweb/blog/archives/2009/11/24/recovering_from_login_failure_on_ubuntu_9_10/index.html

dragon? No FQDN? Google couldn't help me...

> for the gory details. I immediately changed the DLE to back up
> /home/${USER}. (But in order for that to work, the user has to be logged
> in, or else his partition otherwise mounted. Fortunately, that's normally
> the case when my laptop is home and running.)

Alternatively, you can use encrypted LVM so the whole system is encrypted,
incl. swap (and the hibernation area). I use that on my laptop.

Gr{oetje,eeting}s,

Geert
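To see which extra DLEs a setup like this needs, one can list the mount points below /home; GNU tar run with --one-file-system will not descend into any of them. A sketch — the sample `mount` output and the `ccurley` user are assumptions for illustration; on a real client you would pipe `mount` itself:

```shell
# Print mount points below /home from (sample) `mount` output; each one
# needs its own DLE, since tar won't cross into it from the /home dump.
mount_sample='/dev/sda2 on /home type ext4 (rw)
/home/.ecryptfs/ccurley/.Private on /home/ccurley type ecryptfs (rw)'
echo "$mount_sample" | awk '$3 ~ /^\/home\/./ { print $3 }'
```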
Re: user amanda on server, user backup on client, access denied to ama...@coyote.coyote.den
On Fri, Oct 2, 2009 at 03:46, Gene Heskett <gene.hesk...@verizon.net> wrote:
> On Thursday 01 October 2009, Charles Curley wrote:
>> On Thu, 01 Oct 2009 20:51:56 -0400 Gene Heskett <gene.hesk...@verizon.net> wrote:
>>> On Thursday 01 October 2009, Charles Curley wrote:
>>>> On Thu, 01 Oct 2009 12:58:48 -0400 Gene Heskett <gene.hesk...@verizon.net> wrote:
>>>>> On Thursday 01 October 2009, Dustin J. Mitchell wrote:
>>>>> See? It's sort of a "now it can be told" thing. But that brings up the
>>>>> question of who owns /home/amanda on such a (broken IMO) installation?
>>>> Actually, nobody owns it; it doesn't exist on a standard Ubuntu
>>>> installation. From my unchanged Ubuntu client installation:
>>>>
>>>>   r...@dzur:/var# grep backup /etc/passwd
>>>>   backup:x:34:34:backup:/var/backups:/bin/sh
>>>>   r...@dzur:/var#
>>> Oookaaay, what do you get from a grep amanda /etc/passwd?
>> About the value of a political promise on election night.
>>
>>   ccur...@dragon:~$ egrep amanda\|backup /etc/passwd
>>   backup:x:34:34:backup:/var/backups:/bin/sh
>>   ccur...@dragon:~$ grep amanda /etc/passwd
>>   ccur...@dragon:~$
> So you don't even have a user amanda, nor a /home/amanda directory...
> Interesting. Bears further investigation I believe. When I'm fresher.

Debian and Ubuntu use the `backup' user. Its homedir is `/var/backups', and
/var/backups/.amandahosts is a symlink to /etc/amandahosts.

BTW, I never had issues with it (Debian user since long before I started
using Amanda in 1997).

Gr{oetje,eeting}s,

Geert
Re: amcheck gets permission denied error
On Sun, Sep 20, 2009 at 11:13, Geert Uytterhoeven <ge...@linux-m68k.org> wrote:
> On Mon, 14 Apr 2008, Chris Hoogendyk wrote:
>> Well, I have to confess I'm puzzled by this one. I added a couple of
>> partitions to the disklist on my backup server, expecting it to be a
>> totally routine thing. However, I got "permission denied" from amcheck
>> when it tried to access these partitions (on another server). I have
>> scads of partitions on that server already getting backed up. What's
>> more, amdump was perfectly successful in backing these up. But amcheck
>> keeps complaining. This has been going on for about 2 weeks.
>>
>> I've checked permissions, and even unmounted the partitions and checked
>> the underlying permissions of the mount points. I can't see that there is
>> anything unique about them compared to other partitions. I have
>> permissions all over the map, with different faculty and labs having
>> ownership and varying requirements for access and security. There are at
>> least a couple of others where root is neither the owner nor a member of
>> the group owner and "other" permissions are 0. The underlying mount
>> points are typically root:other with 755.
>>
>> So, what, exactly, is it that amcheck is doing that makes it different
>> from amdump and might make it complain in some way? I've put the contents
>> of the email message from amcheck and the debug file from the client
>> server at the end of this message.
>>
>> The only clue I have is probably just a red herring. My boss had been
>> browsing through, tightening up some security stuff, and changed the root
>> umask to 077 a few weeks back. That may have been before he added this
>> drive. But if that had changed anything, I should be able to see it in
>> the permissions now. I don't. And amdump doesn't seem to either.
>
> I'm seeing a similar issue on a machine I just installed (Ubuntu
> 9.04/amd64, Amanda 1:2.5.2p1-4):
>
> | Amanda Backup Client Hosts Check
> | ERROR: hostname: [Can't open disk /home/username]
> | ERROR: hostname: [No include for /home/username/subdir1]
> | ERROR: hostname: [could not access /home/username/subdir2 (/home/username/subdir2/REST): Permission denied]
> | ERROR: hostname: [Can't open disk /home/username/subdir2]
> | ERROR: hostname: [No include for /home/username/subdir2/subdir3]
> | ERROR: hostname: [could not access /home/username/subdir2 (/home/username/subdir2/subdir3): Permission denied]
> | ...
>
> But unlike in Chris' case, amdump couldn't back it up either. Worse, it
> didn't report any failure, but created an empty tar archive instead:
>
> | HOSTNAME  DISK                    L  ORIG-kB  OUT-kB  COMP%  MMM:SS   KB/s  MMM:SS   KB/s
> | --------  ----------------------  -  -------  ------  -----  ------  -----  ------  -----
> | hostname  /home/username/subdir1  0       10      32     --    0:00  112.2    0:00  124.8
> [...]
>
> The protection mask of /home/username/ is 2770. As the backup user is not
> a member of the right group, it cannot access the directory. Adding the
> backup user to this group fixes at least the amcheck issue (will see what
> happens with the dump next night), but this doesn't sound like The Right
> Thing to do to me...

Amdump also succeeded. But we can't be expected to add the backup user to
_all_ groups, right?

Gr{oetje,eeting}s,

Geert
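Why mode 2770 locks the backup user out can be read straight off the permission bits (a sketch in shell arithmetic; assumes a shell that treats a leading 0 as octal, as bash does):

```shell
# 2770 (octal) = setgid bit + rwx for owner and group, and no bits at all
# for "other" -- which is what a backup user outside the group falls under.
mode=$(( 02770 ))
printf 'setgid=%o group=%o other=%o\n' \
    $(( mode & 02000 )) $(( mode & 0070 )) $(( mode & 0007 ))
```

With the "other" bits all zero, any process that is neither the owner nor in the group gets EACCES, no matter how Amanda is invoked.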
Re: amcheck gets permission denied error
> [...]
> -incremental /var/lib/amanda/gnutar-lists/hostname_home_username_subdir1_0.new --sparse --ignore-failed-read --totals --exclude-from /tmp/amanda/sendbackup._home_username_subdir1.20090920014723.exclude --files-from /tmp/amanda/sendbackup._home_username_subdir1.20090920014723.include
> | sendbackup: time 0.040: started index creator: "/bin/tar -tf - 2>/dev/null | sed -e 's/^\.//'"
> | sendbackup-gnutar: time 0.040: /usr/lib/amanda/runtar: pid 4974
> | sendbackup: time 0.040: started backup
> | sendbackup: time 0.076: 47: size(|): Total bytes written: 10240 (10KiB, 151MiB/s)

The protection mask of /home/username/ is 2770. As the backup user is not a
member of the right group, it cannot access the directory. Adding the backup
user to this group fixes at least the amcheck issue (will see what happens
with the dump next night), but this doesn't sound like The Right Thing to do
to me...

Gr{oetje,eeting}s,

Geert
Re: how large must the data volume be...
On Thu, 20 Nov 2008, Olivier Nicole wrote:
>> ...so that tape drives become more cost-effective than storing everything
>> on HDs?
> From my past experience, a 50GB SLR100 tape costs $100, while for that
> price I can have a 500GB disk...

Recently I asked our sysadmin at work how much an LTO tape (800 GB, i.e.
400 GB native) costs, and I was quite surprised: I had expected O(100 EUR),
but it was much less.

But now let's start talking about the price of an autoloader... IIRC that's
something like 3000 EUR.

If you're a SOHO user, you can probably get away with a few removable hard
drives (and bring some of them offsite). Pricewise, I guess you can get to
the point where a bunch of HDs is still cheaper than an autoloader with
tapes, but it becomes more of a hassle to manage your hard drives.

Gr{oetje,eeting}s,

Geert
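The per-gigabyte arithmetic behind Olivier's comparison, using only the prices quoted above ($100 per 50 GB SLR100 tape vs. $100 per 500 GB disk — media cost only, ignoring drive and autoloader):

```shell
# Cost per GB of storage medium, figures from the thread.
awk 'BEGIN { printf "tape: %.2f $/GB  disk: %.2f $/GB\n", 100/50, 100/500 }'
```

A factor of ten in favour of disk on media alone; the tape drive or autoloader price then has to be amortized on top of that.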
Re: Out of space problem
On Tue, 6 May 2008, Nigel Allen wrote:
> I'm experiencing an odd problem with a USB DAT drive that keeps running
> out of space. Apologies for the length of the post. The drive is supposed
> to be 36 / 72 GB.

GB or GiB?

>                              Total      Full    Incr.
> Estimate Time (hrs:min)       0:04
> Run Time (hrs:min)           14:44
> Dump Time (hrs:min)          11:22     11:22     0:00
> Output Size (meg)          33927.1   33927.1      0.0

These are in KiB.

> Original Size (meg)        50710.1   50710.0      0.0

Do you have hardware compression enabled on the drive?

Gr{oetje,eeting}s,

Geert
Re: Curiosity vs a cat named Gene
On Tue, 19 Feb 2008, Gene Heskett wrote:
> What would be the correct syntax to use in determining how many lines of
> code there are in the current 2.6.0b2 tarball?

sloccount

Gr{oetje,eeting}s,

Geert
Re: /usr/lib/amanda/chg-disk:: tape_rdlabel: tape open: line 146: [: -le: unary operator expected
On Wed, 5 Dec 2007, Dennis Ortsen wrote:
> I've just hit a 100% full root filesystem (as in /) (ouch). I managed to
> get it a bit larger (using LVM on RHEL5). After the root file system had
> space left again, I thought I could check the upcoming backup with AMANDA
> again. So I ran "amcheck jobname -a" and got the following result mailed
> to me:
>
>   Amanda Tape Server Host Check
>   Holding disk /amhold2: 95485 MB disk space available, using 92413 MB
>   slot /usr/lib/amanda/chg-disk:: tape_rdlabel: tape open: line 146: [: -le: unary operator expected: No such file or directory
>   (expecting tape DPF-03 or a new tape)
>   Server check took 0.738 seconds
>
>   Amanda Backup Client Hosts Check
>   Client check: 27 hosts checked in 0.106 seconds, 0 problems found
>
>   (brought to you by Amanda 2.5.0p2)
>
> What does that error on line 146 mean? I don't understand where this comes
> from. I opened /usr/lib/amanda/chg-disk and checked line 146. It all looks
> fine to me. The /bin/sh (referred to in the first line) exists and is
> executable (it's actually a symlink to /bin/bash, the Red Hat default).
> I've got the idea that when my root filesystem was 100% full, some kind of
> flag got messed up in amanda's virtual tape (chg-disk) settings. Is that
> possible? Only my /usr/lib/amanda (binaries) and /etc/amanda (config files
> and the changer* files) could have had a problem with that. Is this
> recoverable? The server hasn't been updated, no new things have happened;
> I don't know where this comes from.

Yes, somewhere on the root file system there is a file that records the
current changer status. Apparently it's modified in an unsafe way, causing
problems when the root file system becomes full.

I had a similar problem last week. IIRC, I fixed it by letting the changer
move to an explicit slot number:

    amtape DailySet1 slot 1

Gr{oetje,eeting}s,

Geert
Re: excluding directories for amanda bare metal restores
On Thu, 22 Nov 2007, Jon LaBadie wrote:
> On Thu, Nov 22, 2007 at 02:10:35AM -0800, Gil Vidals wrote:
>> I'm aiming to use Amanda for bare metal restores as described in "Backup
>> & Recovery" by Curtis Preston - p. 145. However, I'm not sure which
>> directories to exclude. In fact, I'm not sure that I should exclude
>> anything at all. Should I exclude these?
>>
>>   /proc
>>   /mnt
>>   /dev
>>   /tmp
>>   /sys
> With the possible exception of local usage of /mnt, all of these are
> temporary or dynamic or pseudo file systems or directory trees. Don't back
> them up.

It depends. /dev may be real.

Gr{oetje,eeting}s,

Geert
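For a tar-based DLE, such exclusions are usually expressed in the dumptype. A sketch in amanda.conf syntax — the dumptype name is made up, and on an older static-/dev system you would leave ./dev out, per the "/dev may be real" remark:

```
define dumptype root-tar-baremetal {
    global
    program "GNUTAR"
    # Volatile/pseudo trees that need not be on a bare-metal image:
    exclude "./proc"
    exclude append "./sys"
    exclude append "./tmp"
}
```

Paths are relative to the DLE's top directory, hence the leading "./".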
RE: Wasted Space, capacity under 50% over multiple tapes
On Wed, 21 Nov 2007, Wayne Thorpe wrote:
> Tried setting 'runtape 1' already; the tape did not get past 42.7%. This
> is what happened:
>
>   These dumps were to tape tapebac05.
>   *** A TAPE ERROR OCCURRED: [No more writable valid tape found].

I.e. unexpected end of tape.

>   Some dumps may have been left in the holding disk.
>   Run amflush to flush them to tape.
>   The next tape Amanda expects to use is: a new tape.
>   The next new tape already labelled is: tapebac06.
>   ...
>
>   USAGE BY TAPE:
>     Label      Time       Size     %  Nb  Nc
>     tapebac05  3:23  17479168K  42.7   7   0

Let me guess:
  - You told Amanda you can fit 40 GB on a tape, while in reality only
    20 GB will fit?
  - You have hardware compression enabled, so the precompressed data is
    expanded again by the hardware compressor, and you can no longer fit
    20 GB of (precompressed) data on the tape?

Gr{oetje,eeting}s,

Geert
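The first guess can be checked from the report's own numbers: 17479168 KiB written was 42.7% of the configured tape length, so the length Amanda was told is roughly

```shell
# Configured tape length implied by the report above.
awk 'BEGIN { printf "%.1f GiB\n", 17479168 / 0.427 / 1024 / 1024 }'
```

i.e. close to a "40 GB" (2:1-compression marketing) figure, while the amount that physically fit (~17 GiB) is consistent with a native capacity of about half that.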
Re: Probably very stupid question about chg-disk
On Fri, 16 Nov 2007, Francis Galiegue wrote:
> Is the holding disk still necessary? My guess is no, but then I may be
> mistaken. The documentation is not clear on this point. It only says not
> to put holding disks in the same place as vtapes... which doesn't really
> mean you WILL still need a holding disk, but "try and put it on another
> disk for performance reasons".

A holding disk allows running multiple dumps in parallel. Without a holding
disk, all dumps will run sequentially.

Gr{oetje,eeting}s,

Geert
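A minimal holding-disk definition in amanda.conf looks like this (a sketch; the directory and sizes are made-up values):

```
holdingdisk hd1 {
    comment "main holding disk"
    directory "/dumps/amanda"   # ideally a different spindle than the vtapes
    use 200 Gb                  # space Amanda may claim here
    chunksize 1 Gb              # split dump images into chunks of this size
}
```

With this in place the dumpers write to the holding disk concurrently and the taper drains it, instead of each dump streaming to the (v)tape one at a time.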
Re: RE AS400 Amanda?
On Wed, 3 Oct 2007, Francis Galiegue wrote:
> On Wednesday 03 October 2007, Cyrille Bollu wrote:
>> [...] We don't have any Linux partition on our AS400.
> What exactly do you call a "Linux partition"? An ext3 filesystem or a
> Linux VM host? Correct me if I'm wrong, but I think it's the latter,
> because AFAIK, AIX doesn't have a native ext2/ext3 fs driver...

On AS/400, `partition' doesn't mean `disk partition', but `logical
partition' (lpar).

Gr{oetje,eeting}s,

Geert
Re: HP DAT 160 tapetype
On Wed, 8 Aug 2007, Rory Beaton wrote:
> Someone was asking for this recently and the FAQ threw a tantrum when I
> tried to add it. Anyway... this is the output of amtapetype for a Hewlett
> Packard DAT 160 USB that we recently acquired. Is it common to see a zero
> length filemark value?
>
>   define tapetype HP_DAT160 {
>       comment "HP DAT 160 USB (hardware compression on)"
>       length 65535 mbytes

And that's why only 64 GiB fit on your fancy DAT160 tape...

>       filemark 0 kbytes
>       speed 5319 kps
>   }

Gr{oetje,eeting}s,

Geert
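The 64 GiB remark follows directly from the length line: 65535 MiB (suspiciously 2^16 - 1) is just under 64 GiB, far below the 160 GB the drive is marketed at:

```shell
# Capacity implied by "length 65535 mbytes" in the tapetype above.
awk 'BEGIN { printf "%.3f GiB\n", 65535 / 1024 }'
```

Rerunning amtapetype with hardware compression switched off should give a more honest length estimate, since amtapetype writes random (incompressible) data.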
Re: gtar and related issues
On Tue, 22 May 2007, Brian Cuttler wrote:
> On Tue, May 22, 2007 at 04:07:32PM -0400, Jon LaBadie wrote:
>> On Tue, May 22, 2007 at 03:10:02PM -0400, Brian Cuttler wrote:
>>> Jon,
>>>
>>> Think I found a good kit for 1.15.1, which seems to be on the good list
>>> at amanda.org. Unless you advise otherwise I will try to build using
>>> that version.
>> It looks like 1.15.1 compiled easily on my Solaris 9 x86 box. I've not
>> used it with amanda however, so I can't comment on that.
> 1.15.1 is the highest version that is listed as working on this page, so
> I'm keeping my fingers crossed:
>
>   http://www.amanda.org/docs/faq.html#id347035
>
> Looks like it built OK. Need to wait for the current amdump to complete
> before I install; would hate, after all this time, to fail an otherwise OK
> amdump run. Having the prefix in the file path is annoying, but in a pinch
> I can work with it if I have to.

I've been using 1.16 (from Debian) for a while. Seems to work fine (no
restore done so far, though ;-)

Gr{oetje,eeting}s,

Geert
Re: Hardware suggestion
On Fri, 11 May 2007, Olivier Nicole wrote:
> Normal hot-swap bays for hard disks are not designed for daily use; they
> are designed for maintenance only, and would break soon if I swapped the
> disk every day.

You might want to try some (external) eSATA devices. The connectors seem
more reliable than the internal ones - and probably are cheap to replace...

> I was going through the reports of various amanda runs; my actual tape is
> reported to give about 5 MB/s, so any USB2 interface would be plenty
> enough. I assume you're not using a holding disk? What average tape write
> rate do you see in your reports (in the statistics)? I get about 5000 k/s
> for my SLR100 tape drive.

I'm not using a holding disk either, and I keep vtapes on the (single) SATA
disk of my server (yes, I do copy my vtapes to a removable disk from time
to time ;-), and I get up to 15 MiB/s for level zeroes. So for a big server
with a holding disk and vtapes, you can use much more than a bandwidth of
5 MB/s.

Gr{oetje,eeting}s,

Geert
Re: strange behavior when tape drive needs cleaning
On Fri, 13 Apr 2007, Freels, James D. wrote:
> I discovered today that my tape drive was dirty and did not realize it.
> What happened was that the drive sent a SCSI error to the kernel/OS, and
> somehow the root (/), home (/home), and holding disk areas recognized by
> AMANDA were changed from rw access to ro access. I believe this change was
> made by AMANDA itself. This happened while AMANDA was backing up. I had to
> reboot the machine to set the ro filesystems back to rw as they should be.
>
> I then repeated the attempted backup and the failure occurred again
> exactly the same way (so it was repeatable). This is when I suspected the
> dirty tape drive. I cleaned the drive and the problem went away. The
> backups now work like they should (and have for years).
>
> This is the first time I have seen this. The drive got dirty because a
> different person changing the tapes did not realize they should also clean
> the drive occasionally. I also have new higher-capacity tapes, so the same
> number of tapes will get the drive dirty quicker.
>
> Does the switch from rw to ro by AMANDA make sense? Is this a feature?
> This is the first I have heard of this.

The Linux kernel (I assume you run Linux?) automatically remounts a file
system ro if it notices a medium error on the underlying disk. Since there
are no actual medium errors on the disk, but problems with the tape drive
(both are on the same SCSI host adapter?), it looks like the SCSI driver
has a bug and incorrectly told the upper layer about an error on the disk.

Is there some more info about the actual error(s) in the kernel logs?

Gr{oetje,eeting}s,

Geert
Re: dumps way too big
On Sat, 24 Mar 2007, [EMAIL PROTECTED] wrote:
> On Thursday 22 March 2007 at 20:13 -0400, Gene Heskett wrote:
>> On Thursday 22 March 2007, [EMAIL PROTECTED] wrote:
>>> Hello,
>>>
>>> One backup partly failed with:
>>>
>>>   FAILURE AND STRANGE DUMP SUMMARY:
>>>     k400 /mnt/d_mails lev 1 FAILED [dumps way too big, 1025270 KB, must skip incremental dumps]
>>>     k400 /home/jpp lev 1 FAILED [dumps way too big, 1116100 KB, must skip incremental dumps]
>>>     k400 /etc lev 0 STRANGE
>>>
>>> For some other directories and machines the backup is OK. What is the
>>> problem?
>>>
>>> Regards, Storm66
>> Your kernel version please?
> Kernel 2.6.16 on the master machine, 2.6.18 and 2.6.20 on the other
> machines. Frank Smith asks for the size of the tape I am using: it is a
> virtual tape on a separate disk with more than 100G available.

You write `partly' failed? Doesn't it just mean that some DLEs were dumped
(or estimated to dump) to tape, but Amanda noticed the remaining DLEs
couldn't fit anymore? I see it from time to time, too. No harm, it just
gets solved the next night :-)

Gr{oetje,eeting}s,

Geert
Re: Slow performance with dump + LVM
On Tue, 23 Jan 2007, Gene Heskett wrote:
> On Tuesday 23 January 2007 21:56, Ross Vandegrift wrote:
>> On Tue, Jan 23, 2007 at 01:01:38AM -0500, Gene Heskett wrote:
>>> Well, since dump works at the partition level, it may be that dump and
>>> LVM aren't compatible. Switch to tar, which is file oriented, and see
>>> what happens.
>> Looks like I'm hitting nearly the same speed situation. I backed up
>> 7932 MiB in 2460 seconds (I got impatient...), which comes out to about
>> 3-4 MiB/s. Since tar behaves the same way, I think I'm going to play
>> around with various record sizes when reading from LVM. Something in the
>> block layer is shooting my transfer rates.
> I don't think I'm having any troubles with tar in that regard; my 7-9GB
> backups are all done in sub-1-hour times.

anakin$ units
2438 units, 71 prefixes, 32 nonlinear units
You have: 8 GB/hour
You want: MB/s
        * 2.222
        / 0.45

Gr{oetje,eeting}s,

Geert
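The units(1) result above in plain shell arithmetic, for anyone without the tool installed — i.e. "8 GB in an hour" is roughly the same 2-3 MB/s that was being complained about:

```shell
# 8 GB/hour expressed in MB/s (decimal units, matching units(1) above).
awk 'BEGIN { printf "%.3f MB/s\n", 8 * 1000 / 3600 }'
```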
Re: lev 1 FAILED [no backup size line]
On Sun, 26 Nov 2006, Jean-Louis Martineau wrote:
> Geert Uytterhoeven wrote:
>> Since a few days one of my DLEs consistently fails with:
>>
>> | /-- anakin /home/src lev 1 FAILED [no backup size line]
>> | sendbackup: start [anakin:/home/src level 1]
>> | sendbackup: info BACKUP=/bin/tar
>> | sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -f - ...
>> | sendbackup: info COMPRESS_SUFFIX=.gz
>> | sendbackup: info end
>> | ? gtar: /var/lib/amanda/gnutar-lists/anakin_home_src_1.new: Missing record terminator
>> | ? gtar: Error is not recoverable: exiting now
>> | sendbackup: error [no backup size line]
>>
>> Anyone with a clue? Could this be /var running out of diskspace?
> Yes, or /var/lib/amanda/gnutar-lists/anakin_home_src_0 is corrupted; you
> should force a level 0.

That fixed the problem, thx! So finally I'm getting back to full speed,
after the tar 1.13.91 debacle and the upgrade to amanda 2.5.

I still see this new (since the last amanda upgrade) harmless warning in
the strange results section:

| taper: ERROR could not mount disk: mount: can't find file in /etc/fstab or /etc/mtab
| taper: ERROR Can't read media info, using defaults: dvd+rw-mediainfo returned 'file: unable to open: No such file or directory_', cdrecord -atip returned '/usr/bin/cdrecord: No such file or directory. _Cannot open SCSI driver!_For possible targets try 'wodim -scanbus'._For possible transport specifiers try 'wodim dev=help'._For IDE/ATAPI devices configuration, see the file README.ATAPI.setup from_the wodim documentation._'

I'm using cdrw-taper instead of the normal one (but I don't write to CDs,
just to vdisks). I guess it tries to pass my tapedev
(file:/scratch/amanda/DailySet1) to cdrecord or something like that?

Gr{oetje,eeting}s,

Geert
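Forcing the level 0 is done with amadmin. With the config and DLE names from this thread, the command would be along these lines (a sketch, not quoted from the original mail):

```
amadmin DailySet1 force anakin /home/src
```

Amanda then schedules a full dump of that DLE on the next run, which rebuilds the listed-incremental state file that had become corrupted.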
lev 1 FAILED [no backup size line]
Hi,

Since a few days one of my DLEs consistently fails with:

| /-- anakin /home/src lev 1 FAILED [no backup size line]
| sendbackup: start [anakin:/home/src level 1]
| sendbackup: info BACKUP=/bin/tar
| sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -f - ...
| sendbackup: info COMPRESS_SUFFIX=.gz
| sendbackup: info end
| ? gtar: /var/lib/amanda/gnutar-lists/anakin_home_src_1.new: Missing record terminator
| ? gtar: Error is not recoverable: exiting now
| sendbackup: error [no backup size line]

sendbackup.20061125005608.debug has:

| sendbackup: start: anakin:/home/src lev 1
| sendbackup: time 0.000: spawning /bin/gzip in pipeline
| sendbackup: argument list: /bin/gzip --fast
| sendbackup-gnutar: time 0.001: pid 8618: /bin/gzip --fast
| sendbackup-gnutar: time 7.362: doing level 1 dump as listed-incremental from '/var/lib/amanda/gnutar-lists/anakin_home_src_0' to '/var/lib/amanda/gnutar-lists/anakin_home_src_1.new'
| sendbackup-gnutar: time 7.407: doing level 1 dump from date: 2006-11-20 12:45:43 GMT
| sendbackup: time 7.419: started index creator: "/bin/tar -tf - 2>/dev/null | sed -e 's/^\.//'"
| sendbackup: time 7.420: spawning /usr/lib/amanda/runtar in pipeline
| sendbackup: argument list: runtar DailySet1 gtar --create --file - --directory /home --one-file-system --listed-incremental /var/lib/amanda/gnutar-lists/anakin_home_src_1.new --sparse --ignore-failed-read --totals --exclude-from /tmp/amanda/sendbackup._home_src.20061125005616.exclude --files-from /tmp/amanda/sendbackup._home_src.20061125005616.include
| sendbackup-gnutar: time 7.423: /usr/lib/amanda/runtar: pid 8623
| sendbackup: time 7.423: started backup
| sendbackup: time 13.134: 118: strange(?): gtar: /var/lib/amanda/gnutar-lists/anakin_home_src_1.new: Missing record terminator
| sendbackup: time 13.148: 118: strange(?): gtar: Error is not recoverable: exiting now
| sendbackup: time 13.162: index created successfully
| sendbackup: time 13.163: error [no backup size line]
| sendbackup: time 13.163: pid 8616 finish time Sat Nov 25 00:56:21 2006

Anyone with a clue? Could this be /var running out of diskspace?

I'm using Debian, with Amanda 2.5.1p1-2 and tar 1.16-1.

Gr{oetje,eeting}s,

Geert
Re: Amanda error Unexpected field value...
On Tue, 17 Oct 2006, Paul Yeatman wrote:

Just to follow up on this, I was forced to downgrade to tar version 1.15.1 before things worked correctly and without the "Unexpected field value" messages. Things were just not working as expected with 1.15.91.

Indeed. JFYI, Debian recently released an update of 1.15.91 (1.15.91-2.1), which fixes some bugs w.r.t. restores of incrementals. But the issue with creating incrementals and escaping to different file systems is still there :-(

-In response to your message-
--received from Paul Yeatman--

I have run into the exact same problem on a client running Debian (Etch) Linux with tar version 1.15.91 (and Amanda 2.5.0p2-1). I have now noticed, given your message, that things worked fine with the first Debian tar 1.15.91 package but, when a revised tar 1.15.91 package was released at the end of July, 2 of the 3 partitions on the client began failing during the size estimation stage with the same tar error message, "Unexpected field value".

Gr{oetje,eeting}s,

Geert
Re: Problems with gtar
On Fri, 13 Oct 2006, Paul Bijnens wrote:

On 2006-10-13 14:21, Nick Pierpoint wrote:

FAILED AND STRANGE DUMP DETAILS:

/-- rollins /home lev 1 FAILED [/bin/tar returned 2]
sendbackup: start [rollins:/home level 1]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
? gtar: /var/lib/amanda/gnutar-lists/rollins_home_1.new:1: Invalid time stamp
? gtar: /var/lib/amanda/gnutar-lists/rollins_home_1.new:2: Invalid inode number
| gtar: ./nick/.evolution/cache/tmp/spamd-socket-path-M7CcHJ: socket ignored
| Total bytes written: 7639367680 (7.2GiB, 7.1MiB/s)
? gtar: Error exit delayed from previous errors
sendbackup: error [/bin/tar returned 2]
\

I believe you have a problem with the file system that holds /var. Is it full? Or is it corrupt? ... GNU tar creates a file in the directory /var/lib/amanda/gnutar-lists for its listed-incremental feature, and accessing that file somehow triggers the errors "Invalid time stamp" and "Invalid inode number". Maybe because /home is larger the error occurs while backing up /home, whereas when backing up /etc the listed-incremental file is much smaller and does not trigger the error.

Have you recently upgraded tar to 1.15?

Gr{oetje,eeting}s,

Geert
Re: Problems with gtar
On Fri, 13 Oct 2006, Charles Curley wrote:

On Fri, Oct 13, 2006 at 01:21:14PM +0100, Nick Pierpoint wrote:

I've been using Amanda for a couple of months and it has all been working beautifully, but in the last few days I've been seeing some errors connected with gtar:

sendbackup: info end
? gtar: /var/lib/amanda/gnutar-lists/rollins_home_1.new:1: Invalid time stamp
? gtar: /var/lib/amanda/gnutar-lists/rollins_home_1.new:2: Invalid inode number
| gtar: ./nick/.evolution/cache/tmp/spamd-socket-path-M7CcHJ: socket
rollins /home 1 FAILED --- (brought to you by Amanda version 2.4.5p1)

I have been seeing the same thing for about two weeks. I'm running Fedora Core 5 and the same version of Amanda. Last night my computer crashed, so I ran fsck on everything. There were numerous errors on my / file system, where /var/lib/amanda resides. If those were real errors, I would think fsck would have caught them. I note in my yum.log:

Oct 07 08:58:06 Updated: tar.i386 2:1.15.1-15.FC5

I started seeing those error messages after October 7. Possibly this is a bug in the latest FC5 tar? I'll check bugzilla later today. Things I have not yet tried: rolling back to the previous version of tar; deleting the offending files.

You need to use a more recent Amanda, which can handle the new incremental format used by tar 1.15. Apart from this, there are other bugs in tar 1.15 (at least the Debian version ignores --one-file-system when doing incrementals); that's why I reverted to 1.14.

Gr{oetje,eeting}s,

Geert
Re: Leaving backups on disk vs. using file driver
On Thu, 28 Sep 2006, Oscar Ricardo Silva wrote:

We're moving away from tape-based backups and have purchased a large disk array that attaches directly to my Amanda server. I'm aware of the file driver for mimicking tape drives, but should I even go that route? Why not just leave the backups on the holding disk? Any suggestions/information would be appreciated.

Amanda only keeps track of what was written to `tape'. You cannot restore from the holding disk without resorting to a manual restore.

Gr{oetje,eeting}s,

Geert
Re: Release of amanda-2.5.1
On Wed, 20 Sep 2006, Paul Bijnens wrote:

On 2006-09-20 11:56, Geert Uytterhoeven wrote:

On Mon, 11 Sep 2006, Geert Uytterhoeven wrote:

On Sat, 9 Sep 2006, Josef Wolf wrote:

On Tue, Sep 05, 2006 at 03:34:42PM -0400, Jean-Louis Martineau wrote:

* Works with GNU tar 1.15.91 - work with new gtar state file format.

Can someone please explain what this exactly means?

The format used to store information about the incrementals was changed. Since Amanda made some assumptions about this format (while she shouldn't have cared, and should just have considered them opaque files), this broke Amanda. After the fix, Amanda just treats the files as opaque files. But be careful: at least the tar 1.15.91-2 from Debian is broken: it ignores the --one-file-system option when doing incrementals, causing exorbitant backup sizes for any level 0. I don't know about the upstream version, but since this bug was reported almost 2 months ago, I'm afraid that one is broken, too.

Apparently the problem is more subtle. Thanks to the Debian bug tracking system, I noticed this: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=384508 ("tar: -l option changed meaning, without any warning!")

OK. But AFAIK (grep *.c in the sources), Amanda does NOT use the -l option, only the --one-file-system option, and has done so for a very long time already. So I think this option name change has nothing to do with Amanda's use of GNU tar. (AFAIK the format of the incremental-state files has changed, and Amanda assumed they were in some line-oriented format instead of handling them as opaque objects.)

Indeed, thanks for reminding me! I just sent a clarification to the Debian BTS:
- 384508 is about -l no longer meaning --one-file-system
- 377124 is about --one-file-system breaking when combined with --listed-incremental

(Amanda does pass --one-file-system (not -l) to tar.)

Gr{oetje,eeting}s,

Geert
RE: Release of amanda-2.5.1
On Thu, 21 Sep 2006, McGraw, Robert P. wrote:

I am running GNU tar 1.15.1 on my Solaris hosts and it does not show a -l parameter. I did tar --help | grep \-l. From the --help I have the following:

--check-links    print a message if not all links are dumped

On an RH x86_64 system I have GNU tar 1.14 and it shows:

-l, --one-file-system    stay in local file system when creating archive

`-l' wasn't recycled before 1.15.91, according to the changelog. On a Debian testing box:

| tux$ tar --version
| tar (GNU tar) 1.15.91
| Copyright (C) 2006 Free Software Foundation, Inc.
| This is free software. You may redistribute copies of it under the terms of
| the GNU General Public License http://www.gnu.org/licenses/gpl.html.
| There is NO WARRANTY, to the extent permitted by law.
|
| Written by John Gilmore and Jay Fenlason.
| tux$ tar --help | grep -- -l
| -t, --list                 list the contents of an archive
|     --test-label           test the archive volume label and exit
| -g, --listed-incremental=FILE   handle new GNU-format incremental backup
|                            --diff, --extract or --list and when a list of
|     --force-local          archive file is local even if it has a colon
| -L, --tape-length=NUMBER   change tape after writing NUMBER x 1024 bytes
| -V, --label=TEXT           create archive with volume name TEXT; at
| -l, --check-links          print a message if not all links are dumped
| tux$

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Charles Stroom
Sent: Wednesday, September 20, 2006 12:25 PM
To: amanda-users@amanda.org
Subject: Re: Release of amanda-2.5.1

On Wed, 20 Sep 2006 10:33:15 EDT, Gene Heskett [EMAIL PROTECTED] wrote:

On Wednesday 20 September 2006 05:56, Geert Uytterhoeven wrote:

On Mon, 11 Sep 2006, Geert Uytterhoeven wrote:

But be careful: at least the tar 1.15.91-2 from Debian is broken: it ignores the --one-file-system option when doing incrementals, causing exorbitant backup sizes for any level 0. I don't know about the upstream version, but since this bug was reported almost 2 months ago, I'm afraid that one is broken, too.

Apparently the problem is more subtle. Thanks to the Debian bug tracking system, I noticed this: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=384508 ("tar: -l option changed meaning, without any warning!")

Good Grief, Charlie Brown! Tar is supposed to be a stable, mature utility, is it not? I mean, it's what, 30 years old, existing in the various *nixes long before GNU took over? Whyinhell can't the folks over at gnu.org find something else to screw with besides tar? It doesn't _need_ to be on their WPA or CCC lists as a make-work project when there's nothing else to do around the office.

On my Suse 10.0 system:

(2): cs tar --version
tar (GNU tar) 1.15.1
(0): cs tar -l dum dum
tar: Semantics of -l option will change in the future releases.
tar: Please use --one-file-system option instead.

So at least there were warnings (not really an excuse, I think).

Gr{oetje,eeting}s,

Geert
Re: Release of amanda-2.5.1
On Thu, 21 Sep 2006, Gene Heskett wrote:

On Thursday 21 September 2006 05:09, Geert Uytterhoeven wrote:

On Wed, 20 Sep 2006, Paul Bijnens wrote:

On 2006-09-20 11:56, Geert Uytterhoeven wrote:

On Mon, 11 Sep 2006, Geert Uytterhoeven wrote:

On Sat, 9 Sep 2006, Josef Wolf wrote:

On Tue, Sep 05, 2006 at 03:34:42PM -0400, Jean-Louis Martineau wrote:

* Works with GNU tar 1.15.91 - work with new gtar state file format.

Can someone please explain what this exactly means?

The format used to store information about the incrementals was changed. Since Amanda made some assumptions about this format (while she shouldn't have cared, and should just have considered them opaque files), this broke Amanda. After the fix, Amanda just treats the files as opaque files. But be careful: at least the tar 1.15.91-2 from Debian is broken: it ignores the --one-file-system option when doing incrementals, causing exorbitant backup sizes for any level 0. I don't know about the upstream version, but since this bug was reported almost 2 months ago, I'm afraid that one is broken, too.

Apparently the problem is more subtle. Thanks to the Debian bug tracking system, I noticed this: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=384508 ("tar: -l option changed meaning, without any warning!")

OK. But AFAIK (grep *.c in the sources), Amanda does NOT use the -l option, only the --one-file-system option, and has done so for a very long time already. So I think this option name change has nothing to do with Amanda's use of GNU tar. (AFAIK the format of the incremental-state files has changed, and Amanda assumed they were in some line-oriented format instead of handling them as opaque objects.)

Indeed, thanks for reminding me! I just sent a clarification to the Debian BTS:
- 384508 is about -l no longer meaning --one-file-system
- 377124 is about --one-file-system breaking when combined with --listed-incremental

(Amanda does pass --one-file-system (not -l) to tar.)

And how does this breakage manifest itself again? Is it by not following and counting out-of-filesystem links in the estimate phase, but including them during the backup? That would of course result in small estimates.

I noticed 2 things when doing non-level-zero backups:
1. Warnings about weird files in /proc, while tar shouldn't have entered /proc as it's a different file system.
2. Backups being way too large, as tar escaped from the file system it was backing up.

Gr{oetje,eeting}s,

Geert
Re: Release of amanda-2.5.1
On Mon, 11 Sep 2006, Geert Uytterhoeven wrote:

On Sat, 9 Sep 2006, Josef Wolf wrote:

On Tue, Sep 05, 2006 at 03:34:42PM -0400, Jean-Louis Martineau wrote:

* Works with GNU tar 1.15.91 - work with new gtar state file format.

Can someone please explain what this exactly means?

The format used to store information about the incrementals was changed. Since Amanda made some assumptions about this format (while she shouldn't have cared, and should just have considered them opaque files), this broke Amanda. After the fix, Amanda just treats the files as opaque files. But be careful: at least the tar 1.15.91-2 from Debian is broken: it ignores the --one-file-system option when doing incrementals, causing exorbitant backup sizes for any level 0. I don't know about the upstream version, but since this bug was reported almost 2 months ago, I'm afraid that one is broken, too.

Apparently the problem is more subtle. Thanks to the Debian bug tracking system, I noticed this: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=384508 ("tar: -l option changed meaning, without any warning!")

Gr{oetje,eeting}s,

Geert
Re: Release of amanda-2.5.1
On Wed, 20 Sep 2006, Gene Heskett wrote:

On Wednesday 20 September 2006 05:56, Geert Uytterhoeven wrote:

On Mon, 11 Sep 2006, Geert Uytterhoeven wrote:

On Sat, 9 Sep 2006, Josef Wolf wrote:

But be careful: at least the tar 1.15.91-2 from Debian is broken: it ignores the --one-file-system option when doing incrementals, causing exorbitant backup sizes for any level 0. I don't know about the upstream version, but since this bug was reported almost 2 months ago, I'm afraid that one is broken, too.

Apparently the problem is more subtle. Thanks to the Debian bug tracking system, I noticed this: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=384508 ("tar: -l option changed meaning, without any warning!")

Tar is supposed to be a stable, mature utility, is it not? I mean, it's what, 30 years old, existing in the various *nixes long before GNU took over? Whyinhell can't the folks over at gnu.org find something else to screw with besides tar? It doesn't _need_ to be on their WPA or CCC lists as a make-work project when there's nothing else to do around the office.

According to http://www.gnu.org/software/tar/manual/html_node/Option-Summary.html the --one-file-system option still exists, but must be spelled out as shown here. The -l option now checks hard links. So Amanda CAN be fixed, but is tar's option buffer big enough to do the job when we have to spell every option out in order to protect us from such future actions? I feel rather strongly about this, so [EMAIL PROTECTED] has been added to the Cc: list. They need to know how the users feel about such shenanigans. I wasn't able to find the docs for 1.15-1 on their site, so I have no idea if this might explain the rash of small estimates I'm getting that occasionally overrun my nominally 8 GB vtape size by as much as 1.5 GB!

Question for the GNU folks: can you please tell us when this -l option was actually changed to be the hard-link checking function from the formerly used shorthand for the --one-file-system option?
tar-1.15.91/NEWS states:

| version 1.15.91 - Sergey Poznyakoff, (CVS version)
|
| * Incompatible changes
|
| ** Short option -l is now an alias of --check-links option, which complies
| with UNIX98. This ends the transition period started with version 1.14.

Gr{oetje,eeting}s,

Geert
Re: amtapetype aborts on XServe running Yellow Dog Linux 4.1
On Thu, 14 Sep 2006, Nick Jones wrote:

Here is what I've gotten twice, with hardware compression on and off.

[EMAIL PROTECTED] amanda]# ./sbin/amtapetype -f /dev/tape -e 400g -o
Writing 2048 Mbyte compresseable data: 31 sec
Writing 2048 Mbyte uncompresseable data: 31 sec
Estimated time to write 2 * 409600 Mbyte: 12400 sec = 3 h 26 min
wrote 12320768 32Kb blocks in 94 files in 5655 seconds (short write)
wrote 12386304 32Kb blocks in 189 files in 6025 seconds (short write)
define tapetype unknown-tapetype {
    comment just produced by tapetype prog (hardware compression off)
    length 386048 mbytes
    filemark 0 kbytes
    speed 67752 kps
}
*** glibc detected *** free(): invalid pointer: 0xf7d9a000 ***
Aborted

Anybody got any ideas? Will this cause a problem, or is it a problem?

Is it reproducible? If yes, and you're running on Linux/ia32, could you try running it under valgrind?

Gr{oetje,eeting}s,

Geert
Re: amtapetype aborts on XServe running Yellow Dog Linux 4.1
On Fri, 15 Sep 2006, Geert Uytterhoeven wrote:

On Thu, 14 Sep 2006, Nick Jones wrote:

Here is what I've gotten twice, with hardware compression on and off.

[EMAIL PROTECTED] amanda]# ./sbin/amtapetype -f /dev/tape -e 400g -o
Writing 2048 Mbyte compresseable data: 31 sec
Writing 2048 Mbyte uncompresseable data: 31 sec
Estimated time to write 2 * 409600 Mbyte: 12400 sec = 3 h 26 min
wrote 12320768 32Kb blocks in 94 files in 5655 seconds (short write)
wrote 12386304 32Kb blocks in 189 files in 6025 seconds (short write)
define tapetype unknown-tapetype {
    comment just produced by tapetype prog (hardware compression off)
    length 386048 mbytes
    filemark 0 kbytes
    speed 67752 kps
}
*** glibc detected *** free(): invalid pointer: 0xf7d9a000 ***
Aborted

Anybody got any ideas? Will this cause a problem, or is it a problem?

Is it reproducible? If yes, and you're running on Linux/ia32, could you try running it under valgrind?

Bummer, I didn't read the subject: the Xserve is PPC...

Gr{oetje,eeting}s,

Geert
Re: Release of amanda-2.5.1
On Sat, 9 Sep 2006, Josef Wolf wrote:

On Tue, Sep 05, 2006 at 03:34:42PM -0400, Jean-Louis Martineau wrote:

* Works with GNU tar 1.15.91 - work with new gtar state file format.

Can someone please explain what this exactly means?

The format used to store information about the incrementals was changed. Since Amanda made some assumptions about this format (while she shouldn't have cared, and should just have considered them opaque files), this broke Amanda. After the fix, Amanda just treats the files as opaque files.

But be careful: at least the tar 1.15.91-2 from Debian is broken: it ignores the --one-file-system option when doing incrementals, causing exorbitant backup sizes for any level 0. I don't know about the upstream version, but since this bug was reported almost 2 months ago, I'm afraid that one is broken, too.

Gr{oetje,eeting}s,

Geert
Re: Debian packages for Amanda
On Fri, 8 Sep 2006, Phil Howard wrote:

Any Debian users/gurus around? I found that Debian has Amanda broken into 3 packages: amanda-common, amanda-client, amanda-server. What I expected was that I could install amanda-client on client machines and amanda-server on a server machine (both on a machine that is the tape server _and_ has data to be backed up). I expected amanda-common to be needed on either client or server.

Yes.

However, when installing amanda-common, it also installs amanda-client. Anyone know why Debian has things arranged this way?

None of the amanda-common packages (I checked stable, testing, and unstable) depends on amanda-client. I had no problem (on Debian testing) removing amanda-client (and keeping amanda-common), or installing amanda-common only. But amanda-common does suggest amanda-client | amanda-server. Perhaps you have some `auto install suggested packages' option enabled?

Gr{oetje,eeting}s,

Geert
Re: using disk instead of tape
On Fri, 8 Sep 2006, Ronan KERYELL wrote:

Third, what about bad blocks on disk? How do you skip them in a raw partition if you do not have state-of-the-art disks that do block remapping for you in your back yard (such as SCSI)? Often file systems do these tricks for you, on IDE disks for example.

These days IDE does that too. But if there are too many of them, you lose (same for SCSI).

Gr{oetje,eeting}s,

Geert
Amanda vs. rsync vs. ... (was: Re: using disk instead of tape)
On Tue, 5 Sep 2006, Phil Howard wrote:

If you want all those benefits of restore, and don't mind having a disk with a filesystem already on it, then why not use something like rsync to make backups? As long as you aren't working with over about a million individual files, it works great. It makes a replica of a filesystem or multi-filesystem tree, and gives you direct access to every individual file for restore purposes. Use multiple disks to make multiple backups. When backing up to a disk previously used, rsync avoids the writing work for files not changed (according to matching metadata, though this can be turned off). And rsync works well over a network via ssh. So I can't really understand your argument. What you seem to specifically want that dismisses raw disk might well be better served with rsync instead of Amanda. I might want Amanda, though, for huge volume and speed.

Now it starts to become interesting :-) This is actually what I've had in mind to post for a long time...

First, let's say I use Amanda and vtapes to back up my home systems. I like Amanda because it's simple to set up, robust, and easy to recover with. However, storing backups offsite over the Internet (say, on a remote disk at a friend's place) is not an option, due to the monthly upload quota enforced by all ISPs here (in Belgium).

I like rsync, since it only transfers what needs to be transferred. But it doesn't keep multiple days of backups, and hard links can be tricky.

I tried rdiff-backup, which keeps reverse-incrementals, but it can take lots of memory on the client side (i.e. not suitable for backing up old machines) and doesn't work well with hard links.

I also use duplicity, which keeps reverse-incrementals and supports encryption and authentication (nice for offsite backups of my digital pictures on a big scratch disk at work :-), but it can take lots of space in $TMPDIR on the client side, and it doesn't support hard links.
So my ideal backup solution would be Amanda, with support for incrementally storing backups at a remote location :-)

In theory, it should be possible to write a tool that takes the tar archives created by Amanda, calculates differentials, and reassembles the tar archives at the other end of the network pipe, right? Or are there better solutions?

Gr{oetje,eeting}s,

Geert
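[Editor's note: the "calculate differentials and reassemble at the other end" idea above is essentially what rdiff(1) from librsync provides. The sketch below is a throwaway illustration, not anything Amanda does itself; the `old.tar`/`new.tar` names are stand-ins for two successive archives of the same DLE, and it assumes rdiff is installed.]

```shell
# Sketch: ship only the delta between two successive archives of a DLE,
# using rdiff(1) from librsync. All file names here are hypothetical.
command -v rdiff >/dev/null 2>&1 || { echo "rdiff (librsync) not installed"; exit 0; }
set -e
cd "$(mktemp -d)"

# Stand-ins for yesterday's and today's archives of the same DLE.
printf 'unchanged data\n' > old.tar
printf 'unchanged data\nplus one new file\n' > new.tar

rdiff signature old.tar old.sig         # remote side: summarize what it already has
rdiff delta old.sig new.tar new.delta   # local side: compute the difference
rdiff patch old.tar new.delta rebuilt.tar  # remote side: reconstruct today's archive

cmp new.tar rebuilt.tar && echo "round-trip OK"
```

Only `old.sig` and `new.delta` would need to cross the network pipe, which is the property the upload-quota argument above asks for.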
Re: Amanda vs. rsync vs. ... (was: Re: using disk instead of tape)
On Wed, 6 Sep 2006, Ian Turner wrote:

On Wednesday 06 September 2006 04:23, Geert Uytterhoeven wrote:

So my ideal backup solution would be Amanda, with support for incrementally storing backups at a remote location :-)

Well, Amanda does that, via incremental backups. What it doesn't do (because of tool support) is incremental backups of individual files -- mostly because we don't have (I'm not aware of) any tool that does that.

Except that from time to time you need a level 0, which is big. Switching to pure-incremental doesn't help, since then you (a) need to keep the initial level 0 forever and (b) restore will be painful, since you have to go through all the incrementals.

Gr{oetje,eeting}s,

Geert
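[Editor's note: the restore pain described above can be seen with GNU tar's listed-incremental mode directly, which is what Amanda drives underneath (compare the `anakin_home_src_0` to `_1.new` snapshot copy in the sendbackup log earlier in this digest). The paths below are throwaway; this is an illustration, not Amanda's actual invocation.]

```shell
# Level 0 plus one incremental with GNU tar. Restoring the latest state
# requires extracting the level 0 and then every incremental, in order.
set -e
cd "$(mktemp -d)"
mkdir src
echo one > src/a
tar -cf level0.tar --listed-incremental=snap.snar src     # level 0

cp snap.snar snap1.snar      # preserve the level-0 snapshot, as Amanda does
echo two > src/b
tar -cf level1.tar --listed-incremental=snap1.snar src    # level 1: only src/b

mkdir restore && cd restore
tar -xf ../level0.tar --listed-incremental=/dev/null
tar -xf ../level1.tar --listed-incremental=/dev/null      # must not be skipped
ls src    # both a and b are back
```

Skipping any archive in the chain (or extracting them out of order) leaves the tree at the wrong point in time, which is exactly the "go through all the incrementals" cost of a pure-incremental scheme.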
Re: using disk instead of tape
On Tue, 5 Sep 2006, Phil Howard wrote:

On Sat, Sep 02, 2006 at 06:39:40PM -0400, Jon LaBadie wrote:

| It certainly would destroy one of amanda's features,
| the ability to easily recover backup data using
| standard unix utilities without amanda software.

How is that destroyed? Suppose you use tar format. You can have tar read from tape directly, which is what I presume you mean for being able to recover outside of Amanda. You can have tar read from disk partitions if the native partition scheme is used.

At first I had the same reaction as you: it would work fine if you cycled your tapedev through the partitions. However, then I realized a tape can store multiple `files' sequentially, while a disk partition can't (without hackery that would annihilate the easy recovery again). So as long as you dump only one DLE, it would work fine. If you dump more than one DLE, you need more logic.

Gr{oetje,eeting}s,

Geert
Re: using disk instead of tape
On Tue, 5 Sep 2006, Gene Heskett wrote:

On Tuesday 05 September 2006 05:24, Geert Uytterhoeven wrote:

On Tue, 5 Sep 2006, Phil Howard wrote:

On Sat, Sep 02, 2006 at 06:39:40PM -0400, Jon LaBadie wrote:

| It certainly would destroy one of amanda's features,
| the ability to easily recover backup data using
| standard unix utilities without amanda software.

How is that destroyed? Suppose you use tar format. You can have tar read from tape directly, which is what I presume you mean for being able to recover outside of Amanda. You can have tar read from disk partitions if the native partition scheme is used.

At first I had the same reaction as you: it would work fine if you cycled your tapedev through the partitions. However, then I realized a tape can store multiple `files' sequentially, while a disk partition can't (without hackery that would annihilate the easy recovery again). So as long as you dump only one DLE, it would work fine. If you dump more than one DLE, you need more logic.

I don't know how this conclusion was reached, but IMO it's wrong. One of the beauties of Amanda is that bare-metal recoveries can be done with nothing more than dd, tar (or dump, if that's what was used), and gzip. It's far more trouble to locate a file you want on a sequential tape than it is to locate it in a vtape. The vtape itself is nothing more than a subdir in a subdir in the filesystem of the hard drive. Switching the vtapes is as simple as replacing the link to the directory called data with a new link named data that points at the desired directory.

Yes, that's true. But this discussion was about using raw partitions on a disk instead of files on a filesystem on a disk.

Gr{oetje,eeting}s,

Geert
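[Editor's note: the `data' symlink trick Gene describes can be demonstrated in miniature. The slot names below follow the usual chg-disk-style vtape layout, but the whole directory tree here is a throwaway illustration, not a real Amanda configuration.]

```shell
# Miniature of a vtape directory: `data' is just a symlink into the slot
# directories, and "changing tapes" is re-pointing that symlink.
set -e
cd "$(mktemp -d)"
mkdir -p vtapes/slot1 vtapes/slot2
cd vtapes

ln -s slot1 data      # "load" slot 1
readlink data         # -> slot1

ln -nsf slot2 data    # "switch tapes": replace the link in one step
readlink data         # -> slot2
```

The -n flag keeps ln from following the existing link into slot1; without it, ln -sf would create vtapes/slot1/slot2 instead of replacing the link.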
Re: Amanda is creating files (locale problem)
On Thu, 24 Aug 2006, Cyrille Bollu wrote:

I'm running amanda-2.4.4p1-0.3E (RH ES 3.3) and have some problems with non-ASCII characters. For example, I have a file called aandr�.nsf. This file turns into aandré.nsf when I list it with ls. But, funnier, Amanda creates 2 files when I restore it (aandré.nsf and aandr?.nsf). LANG is en_US.UTF-8. Is it a problem in my config or a bug in Amanda?

It's a bug in the low-level backup tool Amanda uses (e.g. dump or tar).

Gr{oetje,eeting}s,

Geert
Re: need regex help
On Thu, 24 Aug 2006, Jeff Portwine wrote:

Sorry, I should've been more clear: user1, user2, etc. are just placeholders for actual directory names. I actually have about 8 or 9 directories with nothing in common in their names. I was just giving generic names to show what I was attempting to do.

In that case, try ./home/{user1,user2,user3,user4}*

----- Original Message -----
From: Ken D'Ambrosio [EMAIL PROTECTED]
To: Jeff Portwine [EMAIL PROTECTED]
Cc: amanda-users@amanda.org
Sent: Thursday, August 24, 2006 11:36 AM
Subject: Re: need regex help

[Sent a second time from an address known to the list; sorry if a dup.]

On Thu, August 24, 2006 10:40 am, Jeff Portwine wrote:

I want Amanda to back up multiple home directories, but I don't want to back up everything in /home. I tried using syntax like this:

myserver /dev/sdc1 {
    server-user-tar
    include ./home/(user1|user2|user3|user4)*
}

I can't swear this would work, but this is how I'd write the regex if I were doing something similar in Perl or grep: ./home/user[1234]*

-Ken

Amcheck didn't have any problems with it, but when the backup actually ran, I got the following messages in the dump report:

FAILURE AND STRANGE DUMP SUMMARY:
frank.veritime.com /dev/sdc1 lev 0 STRANGE

FAILED AND STRANGE DUMP DETAILS:

/-- myserver /dev/sdc1 lev 0 STRANGE
sendbackup: start [myserver:/dev/sdc1 level 0]
sendbackup: info BACKUP=/bin/gtar
sendbackup: info RECOVER_CMD=/bin/gtar -f... -
sendbackup: info end
? gtar: ./home/(user1|user2|user3|user4)*: Warning: Cannot stat: No such file or directory
| Total bytes written: 10240 (10kB, ?B/s)
sendbackup: size 10
sendbackup: end
\

Did I do the regex incorrectly, or is there a better way to accomplish this?

Gr{oetje,eeting}s,

Geert
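[Editor's note: the "Cannot stat" error above is because `(user1|user2)*` is regex alternation, which glob matching treats as a literal file name. The throwaway demo below shows the difference using bash brace expansion for the suggested `{...}` form; the directory names are invented, and this checks shell behavior only, not any particular Amanda version's include handling.]

```shell
# Glob/brace patterns vs. regex alternation, on a throwaway tree.
set -e
cd "$(mktemp -d)"
mkdir -p home/user1 home/user2 home/user3 home/user4 home/other

# The brace form expands (in bash) to one glob per directory and matches
# the four wanted directories, leaving home/other alone:
bash -c 'ls -d ./home/{user1,user2,user3,user4}*'

# The regex-style alternation is just a literal file name to glob
# matching -- it matches nothing:
ls -d './home/(user1|user2|user3|user4)*' 2>/dev/null || echo "no match"
```

Ken's `./home/user[1234]*` works too, because `[...]` character classes are valid in both regexes and globs; it just stops fitting once the real directory names have nothing in common.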
Re: Why tar with amanda?
On Tue, 22 Aug 2006, Natalia García Nebot wrote:
> Hi! I have one doubt. Amanda uses GNU tar to back up subdirectories, and
> Amanda can access all files and directories in the disklist file without
> being the owner. But GNU tar doesn't have the suid bit set, so how can
> Amanda access them? Which internal mechanism does Amanda use?

Through the setuid wrapper runtar:

| $ ls -l /usr/lib/amanda/runtar
| -rwsr-xr-- 1 root backup 5196 May 26 07:09 /usr/lib/amanda/runtar*
| $

Gr{oetje,eeting}s, Geert
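The `s` in the owner-execute slot of that `ls -l` output is the setuid bit: runtar is owned by root, so it runs tar with root's file access on behalf of the unprivileged amanda user. A small sketch of spotting that bit (the runtar paths are common install locations, not guaranteed; the mode demo uses a throwaway file):

```shell
# Common runtar locations vary by distro/build; these are guesses:
for runtar in /usr/lib/amanda/runtar /usr/local/libexec/runtar; do
    [ -x "$runtar" ] && ls -l "$runtar"
done

# The same setuid marker on an arbitrary file, for illustration:
demo=$(mktemp)
chmod 4750 "$demo"              # 4 = setuid; 750 = rwxr-x---
mode=$(ls -l "$demo" | cut -c1-10)
echo "$mode"                    # -rwsr-x---
```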
Re: Amanda Server on Cygwin amdump not working :-S
On Mon, 21 Aug 2006, Paul Bijnens wrote:
> On 2006-08-21 11:23, David Sánchez Martín wrote:
>> The programs on the cygwin shell seem correctly setuid to root (the
>> user I've created). Maybe it is a permissions problem, but I tried to
>> change the owner to SYSTEM (Windows' best equivalent to God... err...
>> root, I mean :-)
>
> I'm not a cygwin user, but root is not just an ordinary user in Unix.
> It is a user with special privileges. You need to have those privileges
> to be able to do certain things that Amanda relies on: like opening
> ports < 1024, etc. It MUST be user id number 0 in normal Unix
> environments too (and I'm sure lots of programs break when that is not
> the case). Is there no equivalent user in cygwin?

I'm far from a Windows expert (I try to stay away from it as far as
possible :-), but on Cygwin you can run e.g. sshd on port 22 as an
ordinary user. So I guess amandad won't be a problem either.

How the setuid tar wrapper should behave is a different question. I
missed `cu' on Cygwin so I compiled it myself, but it needed to be
setuid too, and according to our sysadmin that feature didn't exist on
Windows.

Gr{oetje,eeting}s, Geert
Re: [RFE] - Footer file on tape
On Sun, 20 Aug 2006, Jon LaBadie wrote:
> On Sun, Aug 20, 2006 at 01:28:29PM +0100, Chris Lee wrote:
>> I was wondering if anyone thought it would be a good idea for Amanda
>> to write a footer file to tape after completing a dump, providing a
>> record of the tapes needed to produce a full restore to the state the
>> system was in when this tape was written.
>>
>> I don't use amanda in a way that this would be very useful to me; it
>> is only for home backups and I have a dumpcycle of 2, so I just need
>> the last 3 tapes and I am sure to have all I need. However, I was
>> thinking that in situations where longer dumpcycles and lots of DLEs
>> are common, it could help with restores if you lose Amanda and her
>> data.
>>
>> The footer file would just be a list of the tapes from the last
>> level 0 to now for each DLE. For example, a file formatted like this,
>> so anyone can read it without Amanda's help:
>>
>>     /home/bob {          // Tapes needed for this DLE
>>         DailySet3 0 Thursday 17/08/2006
>>         DailySet4 1 Friday   18/08/2006
>>         DailySet5 2 Monday   21/08/2006
>>     }
>>     /home/anne {         // Tapes needed for this DLE
>>         DailySet5 0 Monday   21/08/2006
>>     }
>>     etc.
>>     Restore Set {        // List of all tapes needed for all DLEs
>>         DailySet3 Thursday 17/08/2006
>>         DailySet4 Friday   18/08/2006
>>         DailySet5 Monday   21/08/2006
>>     }
>
> Interesting idea. A 32KB trailer file is, I believe, already added to
> the last tape used in a dump. I'm not sure I like the idea of adding
> this data, either as a last file before the trailer, or as part of the
> trailer. In a large installation it could get quite large, and could
> result in an extra tape being used if the last DLE nearly filled the
> tape.

IIRC, Gene has a script to add this info.

Gr{oetje,eeting}s, Geert
Re: Is GNU tar 1.13.25 still good with 2.5.0p2?
On Fri, 18 Aug 2006, Gene Heskett wrote:
> On Friday 18 August 2006 11:35, Toomas Aas wrote:
>> I'm planning to upgrade my Amanda server (currently 2.4.5) to 2.5.0p2.
>> I'm wondering whether GNU tar 1.13.25 is still officially considered a
>> good version, or is it absolutely required to upgrade to 1.15.1?
>>
>> I noticed that when installing Amanda from FreeBSD ports, the
>> installation pulls in gtar from ports (archivers/gtar), which is
>> currently version 1.15.1. However, my FreeBSD 5.4 seems to include GNU
>> tar 1.13.25 installed with the base FreeBSD system as /usr/bin/gtar,
>> and I was thinking, maybe there is no need to have two gtars on my
>> system? BTW, I currently use dump for backups, so gtar is only used
>> for indexes.
>
> As near as I have been able to determine, they are interchangeable.
> I've been using 1.15-1 since it came out. 1.13 plain, and any 1.14,
> will eat your lunch however.

As Debian stable has 1.14-2.2, I guess there do exist good (patched)
versions...

Gr{oetje,eeting}s, Geert
Re: amanda client inetd problem
On Fri, 11 Aug 2006, rom wrote:
> Jeff Portwine wrote:
>>> What happens when you execute the command /usr/local/libexec/amandad
>>> as user backup manually?
>>
>> $ /usr/local/libexec/amandad
>> /usr/local/libexec/amandad: error in loading shared libraries:
>> libamclient-2.5.0p2.so: cannot open shared object file: No such file
>> or directory
>>
>> However, that library does exist...
>
> And is it readable by the backup user?
>
> The library may exist, but the system could not find it. Do you have
> the directory /usr/local/lib listed in /etc/ld.so.conf? This file is a
> kind of path for finding libraries. You probably don't have it. After
> adding it, you have to run ldconfig to update the cache used to find
> libraries.
>
> If amandad doesn't work when called by hand, it won't work when called
> by inetd... ;-)

What does `ldd /usr/local/libexec/amandad' say?

Gr{oetje,eeting}s, Geert
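rom's fix can be sketched end to end. The real steps need root (editing /etc/ld.so.conf and running ldconfig), so this operates on a scratch copy and leaves the privileged commands as comments:

```shell
# Sketch of the ld.so.conf fix, on a scratch copy (the real edit needs root):
scratch=$(mktemp -d)
cp /etc/ld.so.conf "$scratch/ld.so.conf" 2>/dev/null || touch "$scratch/ld.so.conf"

# 1. add the library directory if it is not already listed
grep -qx '/usr/local/lib' "$scratch/ld.so.conf" ||
    echo '/usr/local/lib' >> "$scratch/ld.so.conf"

# 2. for real, as root: rebuild /etc/ld.so.cache
#       ldconfig
# 3. then verify the loader resolves the library, as Geert suggests:
#       ldd /usr/local/libexec/amandad | grep libamclient

grep -x '/usr/local/lib' "$scratch/ld.so.conf"
```

In `ldd` output, a resolved library shows as `libamclient-2.5.0p2.so => /usr/local/lib/...`, while an unresolved one shows `not found`.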
Re: Next Tapes are Offsite
On Tue, 8 Aug 2006, Jon LaBadie wrote:
> One thing still up in the air (to me anyway) is final tape selection
> from within the tapelist and physical tape changer. Your description
> gets to which tapes are eligible to be selected, but not which tape
> (or runtape number of tapes) among that set is ultimately chosen.

Very simple: the tape in the currently loaded slot. And if that one is
not eligible, Amanda skips to the next slot, until she finds an
eligible tape.

Gr{oetje,eeting}s, Geert
Re: FATAL driver reading result from taper: Connection reset by peer
On Mon, 7 Aug 2006, mario wrote:
> the ubuntu package cdrw-taper caused this problem; after removing it
> everything seems to go fine (it came with a weird/broken perl script).
> I have some other sort of problem now, which I will post in a new mail.

If it behaves like plain Debian, make sure user backup belongs to the
cdrom group, so Amanda can run cdrecord.

Gr{oetje,eeting}s, Geert
Re: restoring from DVDs
On Sat, 5 Aug 2006, Ross Vandegrift wrote:
> On Sat, Aug 05, 2006 at 12:29:43AM +0100, Laurence Darby wrote:
>> BTW, it was actually one disk of nice and fast RAID 0 (so I'm
>> restoring to the one good disk). Does anybody know if data recovery
>> from it would be possible? I hope *not*, since I'm sending it back
>> under warranty, and I can't erase it cos it's dead, although it
>> sounded like the platter might be all scratched up...
>
> As always, it depends on what you want to pay. If you have the money
> to burn, just about anything besides physical platter
> destruction/degaussing can be recovered. I read an article not that
> long ago about recovering a hard disk that had been burned in a fire.
>
> I've had quite good luck doing poor man's data recovery. Boot the
> machine into Knoppix or like ilk and use dd_rescue to copy the disk to
> an image file or another disk. dd_rescue is smart about skipping areas
> of the disk it cannot read instead of giving up. It can take a long
> time, but I've recovered quite a bit of data with that sucker.

But this simple method won't work, as the disk used to be part of a
RAID0 setup, and thus contains only half of the data. Then it depends
on the stripe size: the larger it is, the more likely you can find
useful pieces of data (e.g. a complete password or credit card number).

That's the advantage (for the manufacturer) of drive warranty policies:
people who care a lot about the security of their data will never
return a drive under warranty, but just buy a new one instead...

Gr{oetje,eeting}s, Geert
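Two similarly named tools fit Ross's description: Kurt Garloff's dd_rescue and GNU ddrescue; plain dd with `conv=noerror,sync` is the portable fallback, padding unreadable blocks with zeros instead of aborting. A sketch of the rescue-copy idea, run on a scratch file rather than a failing disk (the device and mount paths in the comments are examples only):

```shell
# Rescue-copy sketch on a file standing in for a disk:
img=$(mktemp) out=$(mktemp)
dd if=/dev/urandom of="$img" bs=512 count=8 2>/dev/null   # stand-in "disk"
dd if="$img" of="$out" bs=512 conv=noerror,sync 2>/dev/null
cmp -s "$img" "$out" && echo "copy complete"

# On a real dying disk (device names are examples, not your devices):
#   ddrescue /dev/sdb /mnt/space/sdb.img /mnt/space/sdb.map   # GNU ddrescue
#   dd_rescue /dev/sdb /mnt/space/sdb.img                     # Garloff's tool
```

As Geert notes, for a member of a RAID0 set this only recovers alternating stripes, so it is useful for forensics, not for a full restore.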
Re: 2.5.0p2-20060424 just took me down.
On Sun, 6 Aug 2006, Gene Heskett wrote: And I'm beginning to wonder about my hardware, gkrellms displayed temps are fubar after the reboot, one is 32F, the other about 261.6F, and the heat sinks feel plumb normal. Displayed temps a few minutes prior to the crash were both normal and in the low 130F range, its nice warm in the coyotes den tonight. AC is off, and I just opened the window about a foot. Anyway, from /var/log/messages: Aug 6 01:23:08 coyote kernel: BUG: unable to handle kernel NULL pointer dereference at virtual address 002e Aug 6 01:23:08 coyote kernel: printing eip: Aug 6 01:23:08 coyote kernel: c01543b3 Aug 6 01:23:08 coyote kernel: *pde = Aug 6 01:23:08 coyote kernel: Oops: 0002 [#1] Aug 6 01:23:08 coyote kernel: PREEMPT Aug 6 01:23:08 coyote kernel: Modules linked in: snd_rtctimer cx88_dvb cx88_vp3054_i2c mt352 or51132 video_buf_dvb dvb_core nxt200x zl10353 cx24123 lgdt330x cx22702 dvb_pll cx8802 tda9887 cx8800 compat_ioctl32 v4l1_compat cx88xx ir_common i2c_algo_bit video_buf btcx_risc tuner v4l2_common tveeprom videodev radeon drm nvidia_agp agpgart w83627 hf hwmon_vid i2c_isa i2c_nforce2 i2c_core snd_seq_oss snd_pcm_oss snd_mixer_oss snd_bt87x pl2303 usbserial snd_seq_ midi snd_emu10k1_synth snd_emux_synth snd_seq_virmidi snd_seq_midi_event snd_seq_midi_emul snd_seq snd_intel8x0 snd _emu10k1 snd_rawmidi snd_ac97_codec snd_ac97_bus snd_pcm snd_seq_device snd_timer snd_page_alloc snd_util_mem snd_h wdep snd soundcore nfsd exportfs lockd nfs_acl smbfs sunrpc ohci1394 ieee1394 Aug 6 01:23:08 coyote kernel: CPU:0 Aug 6 01:23:08 coyote kernel: EIP:0060:[c01543b3]Not tainted VLI Aug 6 01:23:08 coyote kernel: EFLAGS: 00010202 (2.6.17.7 #1) Aug 6 01:23:08 coyote kernel: EIP is at __pollwait+0x25/0x4b Aug 6 01:23:08 coyote kernel: eax: f000 ebx: ee87d700 ecx: 002e edx: 0032 Aug 6 01:23:08 coyote kernel: esi: c08da398 edi: 0145 ebp: 0040 esp: c8910b44 Aug 6 01:23:08 coyote kernel: ds: 007b es: 007b ss: 0068 Aug 6 01:23:08 coyote kernel: Process dumper 
(pid: 18807, While it's Amanda's dumper process that was running when your kernel crashed ... threadinfo=c891 task=efef5090) Aug 6 01:23:08 coyote kernel: Stack: c0f965e0 c029fba0 ee87d700 c08da398 c8910bd4 ee87d700 ee87d700 Aug 6 01:23:08 coyote kernel:ee87d700 0145 0040 c027d05b ee87d700 c08da380 c8910bd4 Aug 6 01:23:08 coyote kernel:c0154642 ee87d700 c8910bd4 00e0 00e0 Aug 6 01:23:08 coyote kernel: Call Trace: Aug 6 01:23:08 coyote kernel: c029fba0 tcp_poll+0x24/0x146 c027d05b ... it's a kernel bug if your kernel crashes. Apparently it did a NULL pointer dereference in tcp_poll(). sock_poll+0x13/0x17 Aug 6 01:23:08 coyote kernel: c0154642 do_select+0x1a4/0x33c c015438e __pollwait+0x0/0x4b Aug 6 01:42:30 coyote syslogd 1.4.1: restart. The delay was the e2fsck of all disks. Kernel is 2.6.17.7, amanda was 2.5.0p2-20060424. kmail was running on another screen, and I was also playing patience (solitaire) when everything went black. I've performed an amcleanup, and restarted the amanda wrapper script that I use. The amstatus output looks fairly normal for this time of the night. Is there enough here to allow some finger pointing? Could be a kernel bug. Or a hardware bug. Or an environment-too-hot bug. Gr{oetje,eeting}s, Geert -- Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [EMAIL PROTECTED] In personal conversations with technical people, I call myself a hacker. But when I'm talking to journalists I just say programmer or something like that. -- Linus Torvalds
Re: notes
On Wed, 2 Aug 2006, Frank Smith wrote:
> Glenn English wrote:
>> The backup works and verifies, but the report says:
>>
>>     planner: disk zbox.slsware.lan:/usr/bin, estimate of level 2 failed.
>>     planner: disk zbox.slsware.lan:/var, estimate of level 1 failed.
>>     planner: disk zbox.slsware.lan:/home, estimate of level 1 failed.
>>     planner: disk zbox.slsware.lan:/boot, estimate of level 1 failed.
>>     planner: disk zbox.slsware.lan:/, estimate of level 1 failed.
>>
>> for every DLE on this host. It does level 0s, so things get backed up.
>> And it just started; I didn't change anything. Any explanations?
>
> Yes, you updated your packages and got tar 1.15.91, which changed
> something related to --listed-incremental. I believe there is a current
> snapshot of Amanda that addresses that issue, but you might want to
> just revert to a previous version of tar, as 1.15.91 also has an issue
> with using --one-file-system in conjunction with the
> --listed-incremental option that causes it to leak out of the base
> filesystem.

Indeed. The only way to get my backups working again was downgrading to
tar from sarge:

    apt-get install tar=1.14-2.2

>> Debian Linux, testing; VERSION=Amanda-2.5.0p2; installed by apt-get.
>> This is the Amanda host; these DLEs are local disks. The hosts on the
>> nets are fine.
>>
>> Does that first entry mean that the level 1 worked but the 2 didn't?

Gr{oetje,eeting}s, Geert
Re: notes
On Thu, 3 Aug 2006, Glenn English wrote:
> Geert Uytterhoeven wrote:
>> Indeed. The only way to get my backups working again was downgrading
>> to tar from sarge:
>>
>>     apt-get install tar=1.14-2.2
>
> That's exactly what I did 5 minutes after reading Frank Smith's reply.
> That fixed the incrementals. Now it says:
>
>     /-- zbox.slsware.lan /boot lev 1 STRANGE
>     sendbackup: start [zbox.slsware.lan:/boot level 1]
>     sendbackup: info BACKUP=/bin/tar
>     sendbackup: info RECOVER_CMD=/bin/tar -f... -
>     sendbackup: info end
>     ? gtar: /var/lib/amanda/gnutar-lists/zbox.slsware.lan_boot_1.new:1: Invalid time stamp
>     ? gtar: /var/lib/amanda/gnutar-lists/zbox.slsware.lan_boot_1.new:2: Invalid inode number
>     | Total bytes written: 13301760 (13MiB, 5.1MiB/s)
>     | gtar: Error exit delayed from previous errors
>     sendbackup: size 12990
>     sendbackup: end
>     \
>
> And I just now tried to recover. It didn't work. Something's
> significantly bent here. A project for the afternoon...

The format for incrementals was changed in 1.15.91. While the new tar
can probably read old incrementals, I guess the old tar can't read the
new format.

Probably I didn't suffer from the downgrade since all my level zero
dumps were long overdue, and Amanda no longer wanted to do
incrementals.

Gr{oetje,eeting}s, Geert
tar escaping into a different file system
Finally I'm getting back on track after the tar 1.15.91-2 incremental changes. Due to this issue and my yearly holidays, my backups are long overdue and Amanda likes to do lots of level 0s (as expected). I just noticed this in the mail log of last night's backup: | /-- anakin / lev 0 FAILED [data write: Connection reset by peer] | sendbackup: start [anakin:/ level 0] | sendbackup: info BACKUP=/bin/tar | sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -f... - | sendbackup: info COMPRESS_SUFFIX=.gz | sendbackup: info end | ? gtar: ./proc/19106/fd/5: Warning: Cannot stat: No such file or directory | ? gtar: ./proc/19106/task/19106/fd/5: Warning: Cannot stat: No such file or | directory | ? gtar: ./proc/6490/task/19182: Warning: Cannot stat: No such file or | directory | ? gtar: ./proc/6490/task/19182/attr: Warning: Cannot stat: No such file or | directory | ? gtar: ./proc/6490/task/19182/attr: Warning: Cannot stat: No such file or | directory | ? gtar: ./proc/6490/task/19182/fd: Warning: Cannot stat: No such file or | directory | ? gtar: ./proc/6490/task/19182/fd: Warning: Cannot stat: No such file or | directory | ? gtar: ./proc/asound/card0/pcm0p/sub1: file changed as we read it | ? gtar: ./proc/irq/14/ide0: file changed as we read it | ? gtar: ./proc/sys/fs/mqueue: file changed as we read it | ? gtar: ./proc/sys/net/ipv6/conf/default: file changed as we read it | | gtar: ./dev/gpmctl: socket ignored | | gtar: ./dev/log: socket ignored | \ Since /proc is on a separate file system, tar is not supposed to enter that directory. Anyone seen that before? I'm using tar 1.15.91-2 and amanda 1:2.5.0p2-1 (+ my backport of the fixes for tar 1.15.91, cfr. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=378558), on Debian testing/unstable. Is this another tar 1.15.91 breakage? Gr{oetje,eeting}s, Geert -- Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [EMAIL PROTECTED] In personal conversations with technical people, I call myself a hacker. 
But when I'm talking to journalists I just say programmer or something like that. -- Linus Torvalds
Re: tar escaping into a different file system
On Mon, 31 Jul 2006, Jon LaBadie wrote: On Mon, Jul 31, 2006 at 09:52:10AM +0200, Geert Uytterhoeven wrote: Finally I'm getting back on track after the tar 1.15.91-2 incremental changes. Due to this issue and my yearly holidays, my backups are long overdue and Amanda likes to do lots of level 0s (as expected). I just noticed this in the mail log of last night's backup: | /-- anakin / lev 0 FAILED [data write: Connection reset by peer] | sendbackup: start [anakin:/ level 0] | sendbackup: info BACKUP=/bin/tar | sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -f... - | sendbackup: info COMPRESS_SUFFIX=.gz | sendbackup: info end | ? gtar: ./proc/19106/fd/5: Warning: Cannot stat: No such file or directory | ? gtar: ./proc/19106/task/19106/fd/5: Warning: Cannot stat: No such file or | directory | ? gtar: ./proc/6490/task/19182: Warning: Cannot stat: No such file or | directory | ? gtar: ./proc/6490/task/19182/attr: Warning: Cannot stat: No such file or | directory | ? gtar: ./proc/6490/task/19182/attr: Warning: Cannot stat: No such file or | directory | ? gtar: ./proc/6490/task/19182/fd: Warning: Cannot stat: No such file or | directory | ? gtar: ./proc/6490/task/19182/fd: Warning: Cannot stat: No such file or | directory | ? gtar: ./proc/asound/card0/pcm0p/sub1: file changed as we read it | ? gtar: ./proc/irq/14/ide0: file changed as we read it | ? gtar: ./proc/sys/fs/mqueue: file changed as we read it | ? gtar: ./proc/sys/net/ipv6/conf/default: file changed as we read it | | gtar: ./dev/gpmctl: socket ignored | | gtar: ./dev/log: socket ignored | \ Since /proc is on a separate file system, tar is not supposed to enter that directory. Anyone seen that before? I'm using tar 1.15.91-2 and amanda 1:2.5.0p2-1 (+ my backport of the fixes for tar 1.15.91, cfr. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=378558), on Debian testing/unstable. Is this another tar 1.15.91 breakage? 
Things to check might include: Do the amanda debug files on the client show the tar argument list with the -l or --one-file-system argument. Do you have any include directives. Tar will happily go to other file systems if you specifically tell it to. I once ran a DLE that covered 3 or 4 tiny file systems in one DLE by using / as the starting directory and including /a, /b, /c, ... (the mount points). If you run tar with the create and verbose (cv) options and the output is to /dev/null (f /dev/null), then tar will output the filenames of what it would backup, but doesn't. It only visits the inodes, not the datablocks. So you could try 1.15.91 manually and see if the output filenames include the /proc files. Lots of output, so probably collect to a file and search with less or more. Also, don't forget the --one-file-system option. According to the logs, it used | running: /bin/tar: gtar --create --file - --directory / --one-file-system --listed-incremental /var/lib/amanda/gnutar-lists/anakin__0.new --sparse --ignore-failed-read --totals --exclude-from /tmp/amanda/sendbackup._.20060731014330.exclude . If I manually run | tar -v --create --file /dev/null --directory / --one-file-system --sparse --ignore-failed-read --totals --exclude-from /tmp/amanda/sendbackup._.20060731014330.exclude . it doesn't visit /proc and says | tar: ./proc/: file is on a different filesystem; not dumped when noticing its existence. 
However, if I add `--listed-incremental /tmp/xxx' it tells me: | tar: ./bin: Directory is new | tar: ./boot: Directory is new | tar: ./build: Directory is new | tar: ./dev: Directory is new | tar: ./etc: Directory is new | tar: ./home: Directory is new | tar: ./initrd: Directory is new | tar: ./lib: Directory is new | tar: ./lost+found: Directory is new | tar: ./media: Directory is new | tar: ./mnt: Directory is new | tar: ./none: Directory is new | tar: ./proc: Directory is new | tar: ./root: Directory is new | tar: ./sbin: Directory is new | tar: ./scratch: Directory is new | tar: ./sys: Directory is new | tar: ./tmp: Directory is new and so on. And it enters e.g. /home, while /home is on a separate file system (just like /boot, /usr, /var, /tmp). Looks like tar 1.15.92 is seriously broken... Bummer, it's even a known issue: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=376816 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=377124 Gr{oetje,eeting}s, Geert -- Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [EMAIL PROTECTED] In personal conversations with technical people, I call myself a hacker. But when I'm talking to journalists I just say programmer or something like that. -- Linus Torvalds
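Geert's comparison (with and without `--listed-incremental`) can be reproduced on a scratch tree. With a modern GNU tar the 1.15.91 leak across filesystems is long fixed, so this just shows the invocation shape and the snapshot file that level 0 writes; the directory layout is illustrative:

```shell
# Scratch tree standing in for "/" with a couple of subdirectories:
top=$(mktemp -d)
mkdir -p "$top/etc" "$top/home"
touch "$top/etc/fstab" "$top/home/file"

# The flag combination from the sendbackup log, writing to /dev/null so
# only the snapshot state is produced:
snap="$top/level0.snar"
tar --create --file /dev/null --directory "$top" \
    --one-file-system --listed-incremental "$snap" .

[ -s "$snap" ] && echo "snapshot written"
```

On a buggy 1.15.91, adding `--listed-incremental` made tar descend into mount points that `--one-file-system` alone correctly skipped; that is exactly the "Directory is new" output Geert quotes.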
Re: planner: disk xxx:/, estimate of level N failed
> fixing before I upgrade. The estimate error may be just a Debian
> package thing, either in the Amanda package or possibly in tar, but
> evidently the debug files have been missing longer than the estimate
> error.

I thought about a tar issue, too. In the meantime I upgraded tar to the
one in Debian unstable, but no improvement.

BTW, I'm using server side estimates. Which makes me wonder: why would
it want to run tar for the estimates in that case? Or is this just an
argument list for another Amanda program, which ignores the tar command
when using server side estimates?

Gr{oetje,eeting}s, Geert
Re: planner: disk xxx:/, estimate of level N failed
On Thu, 13 Jul 2006, Jean-Louis Martineau wrote:
>> | sendsize[11906]: argument list: /bin/tar --create --file /dev/null --directory /home/p72 --one-file-system --numeric-owner --listed-incremental /var/lib/amanda/gnutar-lists/anakin_home_p72_orig_3.new --sparse --ignore-failed-read --totals --exclude-from /tmp/amanda/sendsize._home_p72_orig.20060713004504000.exclude --files-from /tmp/amanda/sendsize._home_p72_orig.20060713004504000.include
>> | sendsize[11906]: time 1.574: /bin/tar: Unexpected field value in snapshot file
>> | sendsize[11906]: time 1.574: /bin/tar: Error is not recoverable: exiting now
>> | sendsize[11906]: time 1.575: .
>> | sendsize[11906]: estimate time for /home/p72/orig level 3: 0.005
>> | sendsize[11906]: no size line match in /bin/tar output for /home/p72/orig
>
> It looks like the
> /var/lib/amanda/gnutar-lists/anakin_home_p72_orig_3.new file is
> corrupted. This file is a copy of
> /var/lib/amanda/gnutar-lists/anakin_home_p72_orig_2.

IC.

> Do you have free space in /var/lib/amanda/gnutar-lists/ ?

Yes.

> Maybe it's a bug in tar? which tar version?

1.15.91-1 and 1.15.91-2.

So probably it's a bug in tar, as it was upgraded from 1.15.1dfsg-3 to
1.15.91-1 on 2006-07-06 (according to /var/log/dpkg.log). So the backup
run of 2006-07-07 created the first corrupt files, which were noticed
first during the next run on 2006-07-08.

Gr{oetje,eeting}s, Geert
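When a `--listed-incremental` snapshot file is corrupt, the usual recovery is to remove it so tar has no incremental state and the next run falls back to a level 0 (Amanda can also be told explicitly with something like `amadmin <config> force <host> <disk>`). A sketch on a scratch directory; the real files live under a path like /var/lib/amanda/gnutar-lists/, and the host/disk names below are taken from the log above purely as examples:

```shell
# Scratch stand-in for the gnutar-lists directory:
lists=$(mktemp -d)
touch "$lists/anakin_home_p72_orig_2" "$lists/anakin_home_p72_orig_3.new"

# Drop the suspect snapshot files; with no snapshot, the next tar run
# behaves as a full (level 0) and rewrites clean incremental state:
rm -f "$lists"/anakin_home_p72_orig_*
ls "$lists" | wc -l
```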
driver: FATAL reading result from taper: Connection reset by peer
Hi all,

After upgrading amanda from 2.5.0 to 2.5.0p2 (Debian testing), all
backups fail with:

| *** THE DUMPS DID NOT FINISH PROPERLY!
|
| The next tape Amanda expects to use is: DAILY18.
|
| FAILURE AND STRANGE DUMP SUMMARY:
|   anakin /     RESULTS MISSING
|   anakin /boot RESULTS MISSING
|   ...
|   driver: FATAL reading result from taper: Connection reset by peer

There's no indication of the failure in any log file in /tmp/amanda/.
/var/log/amanda/DailySet1/log.20060608.0 has:

| DISK planner anakin /
| DISK planner anakin /boot
| ...
| START planner date 20060608
| START driver date 20060608
| STATS driver startup time 0.003
| FATAL driver reading result from taper: Connection reset by peer
| INFO planner Incremental of ... bumped to level 2.
| INFO planner Full dump of ... promoted from 21 days ahead.
| ...
| FINISH planner date 20060608 time 30.445

Anyone with a clue? I'm using vtapes.

Gr{oetje,eeting}s, Geert
Re: driver: FATAL reading result from taper: Connection reset by peer
On Fri, 9 Jun 2006, Paul Bijnens wrote:
> On 2006-06-09 13:27, Geert Uytterhoeven wrote:
>> After upgrading amanda from 2.5.0 to 2.5.0p2 (Debian testing), all
>> backups fail with:
>>
>> | *** THE DUMPS DID NOT FINISH PROPERLY!
>> |
>> | The next tape Amanda expects to use is: DAILY18.
>> |
>> | FAILURE AND STRANGE DUMP SUMMARY:
>> |   anakin /     RESULTS MISSING
>> |   anakin /boot RESULTS MISSING
>> |   ...
>> |   driver: FATAL reading result from taper: Connection reset by peer
>
> Seems like taper died suddenly? Or the TCP connection between driver
> and taper was broken by a local firewall rule maybe?
>
>> There's no indication of the failure in any log file in /tmp/amanda/.
>
> Taper and driver have their stderr redirected into the amdump file,
> which gets renamed to amdump.1 (etc.) when finished. Any clue in there?

Hmm, how could I have missed that file?

| FATAL: Can't find system command 'cdrecord' on search path ('/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin')!
| Can't use string ("0") as a HASH ref while "strict refs" in use at /usr/lib/amanda/taper line 48.
| driver: reading result from taper: Connection reset by peer

(I'm using cdrw-taper, but I'm not actually backing up to CD.)

The strange thing is that cdrecord is /usr/bin/cdrecord... Ah, but it's
not accessible for user backup, because backup is not a member of group
cdrom:

| [EMAIL PROTECTED]:~$ ls -l /usr/bin/cdrecord
| -rwsr-xr-- 1 root cdrom 133 Jan 7 19:43 /usr/bin/cdrecord
| [EMAIL PROTECTED]:~$ groups
| backup disk tape
| [EMAIL PROTECTED]:~$

Time to file a bug with Debian's cdrw-taper...

Gr{oetje,eeting}s, Geert
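Geert's diagnosis (cdrecord is `-rwsr-xr--` root:cdrom, and backup is not in cdrom) can be checked and fixed with standard tools. The check runs as any user; the fix needs root, so it is left as comments. "backup" and "cdrom" are the Debian names from the thread:

```shell
# Is the given user a member of the given group?
user=backup group=cdrom
if id -nG "$user" 2>/dev/null | grep -qw "$group"; then
    echo "$user is in $group"
else
    echo "$user is NOT in $group"
    # Fix, as root (then re-login or restart the amanda services):
    #   adduser backup cdrom          # Debian-style
    #   usermod -aG cdrom backup      # generic
fi
```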
Re: vtape, end of tape waste
On Tue, 23 May 2006, Ian Turner wrote:
> On Tuesday 23 May 2006 16:28, Jon LaBadie wrote:
>> But running out of disk space caused me to look more closely at the
>> situation, and I realized that the failed taping is left on the disk.
>> This of course mimics what happens on physical tape. However, with
>> the file: driver, if this failed and useless tape file were deleted,
>> it would free up space for other data. Has anyone addressed this
>> situation?
>
> There is no good short-term solution to this problem. Sorry. :-( Tape
> spanning helps, but is not a panacea. This is one of the limitations
> of the vtape API that I was talking about -- it tries to reimplement
> tape semantics on a filesystem, even when that doesn't make sense.

[ Disclaimer: I haven't looked at the code yet ]

But I guess it can't be that difficult to call remove() on write error?
When close() is called later, it will be deleted.

Gr{oetje,eeting}s, Geert
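Geert's suggestion relies on POSIX unlink semantics: removing an open file deletes its directory entry immediately, but the data blocks are only reclaimed when the last open descriptor is closed. So the vtape file can be "removed" on write error while the writer still holds it open, and the space comes back at close(). A minimal shell demonstration:

```shell
dir=$(mktemp -d)
exec 3> "$dir/vtape.001"     # writer opens the vtape file
rm "$dir/vtape.001"          # "remove() on write error": entry gone at once
echo "late data" >&3         # write still succeeds: the inode lives on
exec 3>&-                    # close(): now the blocks are actually freed
ls "$dir" | wc -l            # the directory is empty
```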
Re: Tapecat: a cool tape utility
On Wed, 10 May 2006, Inaki Sanchez wrote:
> Great tool! Has someone succeeded in compiling it on RHEL 4? I got the
> following error:
>
> $ make
> gcc -Wall -ansi -c -o cmdline.o cmdline.c
> cmdline.c:26:31: asm-generic/errno.h: No such file or directory

That should be <errno.h>, according to my copy of C99.

> gcc -Wall -ansi tapecat.c cmdline.o debug.o -o tapecat
> /tmp/ccwCGgux.o(.text+0x6b): In function `get_ioctl_statistics':
> : undefined reference to `errno'

Missing #include <errno.h>.

Gr{oetje,eeting}s, Geert
Re: Newbie needs Help
On Thu, 4 May 2006, Luis Rodrigues wrote:
> The normal everyday amanda incremental backup saves all the touched
> files to the holding disk?

Yes.

> If the file is changed 5 times in a backup cycle, it will get backed up
> 5 times? If so, how do I access it using amrecover?

By specifying the date of the version you want to recover.

> On Thu, 4 May 2006 10:26:39 -0400 Jon LaBadie [EMAIL PROTECTED] wrote:
>> On Thu, May 04, 2006 at 03:52:07PM +0200, Luis Rodrigues wrote:
>>> I've just started a job at a new company. They have a backup system
>>> which every day copies the touched files to a disk on a backup
>>> server, and on the weekend it makes a level 0 to tape. They say the
>>> system is not really reliable because sometimes they don't find the
>>> files and want a new one. So I will install amanda. The problem is
>>> that they want to keep the old scheme: backup of touched files each
>>> day and level zero on the weekend. In
>>> http://www.amanda.org/docs/topten.html#id2552255 it says this is not
>>> possible. So I will use amanda with a weekly cycle so that on
>>> Saturday everything is backed up. But I still need an incremental
>>> backup of the touched files since the last incremental. Is it
>>> possible to force amanda (in its runs during the week) to first copy
>>> the touched files to the backup server, and then some others?
>>
>> You can nearly mimic what they have been doing. Not necessarily my
>> recommendation, but sometimes ya gotta do what da boss sais ya gotta
>> do.
>>
>> Think of their current backup disk as amanda's holding disk.
>> Set up your amanda config to 'autoflush' the holding disk whenever it
>> actually does tape.
>> Set up your config to not do level 0's except when forced with
>> amadmin. I forget if this is skip-full or incr-only.
>> Set up amanda to use a symbolic link to the tape device, e.g.
>> /dev/amandatape. This will be missing normally, so incremental
>> backups will collect on the holding disk, waiting to autoflush.
>> Set up your daily cron job to rm amandatape, then amdump.
>> Set up your weekend cron job to amadmin force level 0's, create the
>> symbolic tape device link, then do amdump, then rm amandatape. The
>> entire week will go to tape.
>
>>> If so what will happen if a file is changed two times in one week?
>>
>> You will have a copy of each version. During the week you can still
>> use amrecover to get at the incrementals collecting on the holding
>> disk.

Gr{oetje,eeting}s, Geert
Re: Newbie needs Help
On Thu, 4 May 2006, Luis Rodrigues wrote:
> Ahh, OK, I hadn't understood that. Does it work with both tar and dump,
> or just with dump?

It should work with both (I always used tar).

> On Thu, 4 May 2006 17:40:34 +0200 (CEST) Geert Uytterhoeven
> [EMAIL PROTECTED] wrote:
>> [...]

Gr{oetje,eeting}s,

						Geert
2.4.5p1 - 2.5.0
Last week I (well, Debian testing) upgraded from 2.4.5p1 to 2.5.0. And
suddenly my nightly backups to vtapes started failing with:

| The next tape Amanda expects to use is: DAILY10.
|
| FAILURE AND STRANGE DUMP SUMMARY:
|   anakin /var lev 3 FAILED [no more holding disk space]
...
| taper: FATAL could not write tapelist: No space left on device
| taper: FATAL syncpipe_get: r: unexpected EOF

But I don't use a holding disk, and there was plenty of free space on my
vtape partition. Then I noticed that / was 100% full (except for the
reserved blocks for root). After making sure there was free space on /
for ordinary users, Amanda continued making backups.

So I guess 2.4.5p1 used root privileges to write to /etc, while 2.5.0
falls back to user backup.

Gr{oetje,eeting}s,

						Geert
Re: Another setup question...
On Wed, 5 Apr 2006, Bruce Thompson wrote:
> Next step is to set up our two PowerBooks. Currently, both PowerBooks
> get their IP address via DHCP. While switching them to static addresses
> is not impossible, it's a bit inconvenient for me moving between home
> and work. From what I can tell, Amanda wants a hostname for the
> clients. Is there an easy way anyone knows of to set it up with dynamic
> client addresses?

Just give your backup clients a fixed address in dhcpd.conf, like:

    host myhost {
        hardware ethernet 00:11:22:33:44:55;
        fixed-address myhost.at.mydomain;
    }

and make sure myhost.at.mydomain is in your DNS config, too. Then your
clients will always receive the same IP address.

Gr{oetje,eeting}s,

						Geert
Re: Whats next after Amanda 2.5
On Mon, 27 Mar 2006, Matthias Andree wrote:
> Paddy Sreenivasan [EMAIL PROTECTED] writes:
>> We need to decide on the release version. 2.6? 2.5.1? Following is the
>> list of features that have been requested:
>> - Support for POSIX file names (allowing spaces in filenames)
>
> What has this got to do with POSIX file names? I think the name for
> this feature is misleading: the POSIX portable file name character set
> contains _only_ the letters (a-z, A-Z), digits (0-9), the period '.',
> underscore '_' and hyphen '-', that's it (as of IEEE Std 1003.1-2001,
> 2004 edition).

Maybe they really meant UTF-8 filenames ;-)

Gr{oetje,eeting}s,

						Geert
Re: Whats next after Amanda 2.5
On Mon, 27 Mar 2006, Jon LaBadie wrote:
> On Mon, Mar 27, 2006 at 08:12:00AM -0600, Graeme Humphries wrote:
>> stan wrote:
>>> Right, spaces in filenames is a disease. The vector for spreading
>>> this disease has largely been microsloth, and all of the uneducated
>>> users they brought to the party.
>>
>> Why does spaces in names = a lack of education? Perhaps the clearest
>> way to label a file includes a space? Not a reason _not_ to handle
>> them correctly, though, just an editorial comment. IMO there's nothing
>> intrinsically wrong with any character in a file name, as long as it
>> makes sense with regard to your naming conventions. The only
>> difficulty it causes is if your tools can't handle that character very
>> well. These days, *even* on UNIX, most tools should handle spaces in
>> filenames just fine.
>
> GUI tools perhaps. I think command line tools will always have the
> difficulty of what character or sequence is the separator between
> arguments.

That's what single and double quotes, and backslashes, are used for...

Gr{oetje,eeting}s,

						Geert
Re: question about external drives
On Thu, 2 Mar 2006, Ian Turner wrote:
> On Thursday 02 March 2006 12:31, you wrote:
>> Does UDF support a modern permissions system though? I thought it
>> didn't, because it was designed for optical media...
>
> Yes. It is designed to be a superset of all common filesystems,
> feature-wise. So it has support for NT ACLs, POSIX ACLs, UNIX
> permissions, Apple file types and resource forks, etc. I dunno if
> Windows is OK with UDF on a hard disk, though I have heard of seeing it
> on USB flash sticks.
>
> Also, UDF is optimized for devices with high seek times -- it tries
> very hard to avoid fragmentation, which might actually be suboptimal on
> hard disks.

That actually sounds good to me... A while ago I read somewhere that we
should start to treat hard drives more like tape drives:
  - disk capacity is increasing a lot,
  - disk transfer speeds are also increasing, but less compared to
    capacity,
  - disk latency (seek times) is improving very slowly.

Hence currently it takes much longer to copy an average disk than it took
a few years ago. And while raw transfer speeds are great, as soon as you
need a seek, performance degrades a lot.

Gr{oetje,eeting}s,

						Geert
Re: question about external drives
On Wed, 1 Mar 2006, Frank Smith wrote:
> Jon LaBadie wrote:
>> Some of you are undoubtedly using external hard drives that are USB or
>> FireWire connected. Perhaps as your holding disk or for virtual tapes.
>> These drives seem to come formatted with FAT-32 file systems. I wonder
>> how people handle them. Do you leave your external drives as FAT-32 or
>> do you reformat to something like ext2 or ext3?
>
> If I'm only going to be using it on Linux I reformat to ext3. If I'm
> expecting to sometimes use it on Windows boxes I leave it FAT-32.
>
>> What considerations went into your decision? Features of one type of
>> FS vs another type?
>
> No.

Isn't there a file size limit with FAT32, which may bite when using it as
a holding disk or for virtual tapes?

Gr{oetje,eeting}s,

						Geert
Re: question about external drives
On Thu, 2 Mar 2006, Ian Turner wrote:
> On Thursday 02 March 2006 04:43, Geert Uytterhoeven wrote:
>> Isn't there a file size limit with FAT32, which may bite when using it
>> as a holding disk or for virtual tapes?
>
> NT refuses to create a FAT32 volume above a certain size -- my memory
> says 32GB. But if you use other tools to create the filesystem, every
> Windows will read it, back to 95SP3. IIRC FAT32 itself goes up to a few
> terabytes in max size.

I meant individual file size, not file system size.

Gr{oetje,eeting}s,

						Geert
Re: out of tape with vtapes
On Thu, 9 Feb 2006, Paul Bijnens wrote:
> [EMAIL PROTECTED] wrote:
>> NOTES:
>>   planner: Adding new disk hercules:/samba/CONTROLLING.
>>   [...]
>>   planner: Adding new disk hercules:/K_EIN_REST.
>>   taper: tape slot3 kb 18874336 fm 24 writing file: No space left on device
>
> Here you see when Amanda really hit the end of tape: at 18874336 kb or
> 18431 Mbyte (compare this to length 18432 Mbyte in the tapetype).

If you take into account the tape header of 32 KiB, it's a perfect match:
18874336 + 32 = 18432 * 1024.

Gr{oetje,eeting}s,

						Geert
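The bookkeeping can be checked directly in the shell: the data the taper reported written, plus the 32 KiB Amanda tape header, equals the configured tape length exactly.

```shell
# 18874336 KiB of data reported by the taper, plus the 32 KiB Amanda
# tape header, is exactly the 18432 MiB tapetype length in KiB.
data_kb=18874336
header_kb=32
length_kb=$(( 18432 * 1024 ))
echo $(( data_kb + header_kb ))   # prints 18874368
echo "$length_kb"                 # prints 18874368
```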
Re: Question: does this DLT Drive work?
On Thu, 9 Feb 2006, Sebastian Koesters wrote:
> Quantum DLT V4 320 U2W (160GB uncompressed - 320GB compressed)
> Does this drive work well with Amanda?

If it works with your OS, it works with Amanda.

> How do I activate the compression of the drive to get the full 320GB?

You don't; let Amanda handle the compression.

Gr{oetje,eeting}s,

						Geert
Re: Question about vtape size
On Thu, 2 Feb 2006, Jon LaBadie wrote:
> In contrast, when I 'played' with vtapes a year or more earlier, I just
> specified a huge size, knowing it would never be exceeded. However a
> possibility was that I would run the file system out of space if many
> vtapes were pretty full.

Indeed. Either you waste space, or you run the risk of running out of
space.

I chose the second approach: my vtapes are on a local disk, and every 3
weeks I manually copy the most recent vtapes to a removable disk (I have
2 of them). Problems:
  - Both my local disk and the removable disks are too small to hold the
    full tapecycle.
  - I have to manually delete vtapes on the local disk on a regular
    basis, to make space if the disk runs out of space.
  - Amanda doesn't keep track of vtapes older than tapecycle that are
    still on the oldest removable disk.

To solve these, I started writing a script that would automatically
migrate tapes to and from external disks, and create new and destroy old
vtapes when needed, but due to limited spare time it's not yet
finished...

Gr{oetje,eeting}s,

						Geert
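The core of such a migration can be sketched in a few lines of shell. Everything here is hypothetical: the slotN directory layout, the demo paths, and the number of slots kept locally are assumptions about one particular setup, and a real script would also have to keep Amanda's tapelist in sync.

```shell
# Hypothetical vtape migration: copy all slots to a removable disk,
# then prune the oldest local slots to free space.  Demo paths only.
VTAPE_DIR=${VTAPE_DIR:-/tmp/vtapes-demo}
REMOVABLE=${REMOVABLE:-/tmp/removable-demo}
KEEP_LOCAL=3                     # newest slots to keep on the local disk

mkdir -p "$VTAPE_DIR" "$REMOVABLE"
for s in slot1 slot2 slot3 slot4; do   # fake slots for the demo
    mkdir -p "$VTAPE_DIR/$s"
done

# Copy every slot to the removable disk (cp -a keeps ownership/mtimes).
cp -a "$VTAPE_DIR"/slot* "$REMOVABLE"/

# Delete all but the newest $KEEP_LOCAL slots locally.  Lexical sort,
# so this assumes same-width slot numbers; head -n -N is GNU head.
ls -d "$VTAPE_DIR"/slot* | head -n -"$KEEP_LOCAL" | xargs -r rm -rf
```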
Re: numeric-owner problem
On Mon, 30 Jan 2006, Kosa Attila wrote:
> Environment:
>   - server: Debian Sarge, amanda-server 2.4.4p3-3;
>   - client: Debian Sarge, amanda-client 2.4.4p3-3;
>             Debian Woody, amanda-client 2.4.4p3-3 (a backport I made
>             myself).
>
> The clients' full backup is successful with the following config
> (typically reiserfs partitions are to be backed up):
>
>   define dumptype myconfig {
>       program "GNUTAR"
>       comment "partitions dumped with tar"
>       options no-compress, index, exclude-list "/etc/amanda/exclude.gtar"
>       priority high
>       dumpcycle 0
>       maxcycle 0
>   }
>
> I tried to restore the full backup from a Knoppix 3.7 CD, and I
> succeeded, except for one problem: uid/gid pairs in the Knoppix system
> are different from those in my Woody (or Sarge) system, therefore
> certain parts of the restored system do not work. I think it would be a
> solution if tar also used the --numeric-owner option for backup. I
> wonder whether the only way to do this is rewriting the source, or
> whether there is a simpler method I didn't notice.

I saw a similar thing when the disk of my backup server died last month.
The machine ran Debian testing, and I used an Ubuntu Live CD (the Knoppix
I had lying around didn't support SATA) to do the restore. After the
restore, some services didn't work because some configuration files were
owned by the wrong user.

I ended up comparing the uids and gids in /etc/passwd and /etc/group on
the restored image and on the Ubuntu Live CD, and for all differences,
manually verifying all uids and gids of all restored files and
directories. Fortunately this took much less time than expected :-)

I noticed one very strange thing though: uids and gids were not changed
in a consistent way. Some files had the incorrect uid or gid from Ubuntu,
while other related files that should have the same uid/gid had the one
from the original system. So sometimes uids/gids were remapped during
restore, but not always...

Gr{oetje,eeting}s,

						Geert
Re: numeric-owner problem
On Tue, 31 Jan 2006, Jon LaBadie wrote:
> On Tue, Jan 31, 2006 at 10:56:13AM +0100, Geert Uytterhoeven wrote:
>> I saw a similar thing when the disk of my backup server died last
>> month. [...] So sometimes uids/gids were remapped during restore, but
>> not always...
>
> My understanding, subject to correction, is that by default gnutar
> restores by trying to match text names (user and group) between the
> archive and the recovery system. If a match is found, then the restore
> is to the numeric uid/gid of the recovery system, thus matching the
> names, but not necessarily the numeric ids in the archive. If matching
> text names are not found, then the archive's numeric ids are used.

Exactly my understanding as well.

> So you could easily get a real hodge-podge of names and numeric ids by
> recovering to a different system.
>
>   Archived System   Recovery System        Result of Recovery
>   name   id #       name     id #          name    id #
>   AAA    111        AAA      111           AAA     111
>   BBB    222        BBB      234           BBB     234
>   CCC    333        (no CCC) (no 333)      (none)  333
>   DDD    444        (no DDD) (EEE is 444)  EEE     444
>
> Note, 3 of the 4 cases result in a recovery that doesn't match the
> originally archived system. May or may not be what was wanted.

But as soon as /etc/passwd and /etc/group have been restored from backup
as well and you boot from the restored medium, CCC and DDD become correct
again, right?

But this is not what I saw. Some files that should be owned by user BBB
had uid 222, while others had 234. It was not consistent.

> If the --numeric-owner option was used, only the second case would
> change, the recovered result using an id of 222 rather than 234, with a
> text name of either none or whatever name matches id 222.

Which is what you want (assuming you back up OS files, instead of doing a
clean reinstall of the OS)...

Gr{oetje,eeting}s,

						Geert
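The behaviour under discussion is easy to see with GNU tar itself: `--numeric-owner` suppresses the name-matching step, both when listing and when extracting, so the numeric ids stored in the archive are used as-is. A small self-contained demo (temporary paths only; assumes GNU tar):

```shell
# Demo of GNU tar's --numeric-owner: with the option, the listing shows
# numeric uid/gid (e.g. "1000/1000") instead of translated names, and on
# extraction the stored ids would be applied as-is, with no name lookup.
demo=$(mktemp -d)
echo hello > "$demo/file"

tar -C "$demo" -cf "$demo/a.tar" file

echo "== with name translation =="
tar -tvf "$demo/a.tar"

echo "== numeric ids only =="
tar --numeric-owner -tvf "$demo/a.tar"

rm -rf "$demo"
```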
Re: Dumb Q?
On Thu, 26 Jan 2006, Gene Heskett wrote:
> I've attempted to add a couple of disklist entries to my backup
> schedule, but I've run into a problem that doesn't seem to make a lot
> of sense to me. I've copied the .amandahosts file over to that box in
> /home/amanda, and added the fqdn and alias of that box and the user as
> amanda and as root on separate lines to that same file. All this after
> installing the amanda-common and amanda-client packages, 2.4.3-p3 I
> think, from the Debian repo on that box. The install did add the
> invocation line to /etc/inetd.conf OK, and I've made amanda a member of
> the disk: group in the group file on that box.

Doesn't Debian use user `backup' instead of `amanda'? At least it does on
my box.

Gr{oetje,eeting}s,

						Geert
Re: Amanda and DVD-RAM
On Wed, 25 Jan 2006, Gene Heskett wrote:
> On Tuesday 24 January 2006 13:27, Matthias Andree wrote:
>> The real point is if some voltage surge comes through your power
>> supply unit, it might fry your computer with all of its hardware at
>> the same time, and the separate drive that was in the cupboard might
>> survive.
>
> In that case, it will have to destroy a 1500J wall suppressor that ties
> everything together surgewise, followed by a 1500KVA Belkin ups before

Do not underestimate lightning ;-)

> If in the unlikely event I get something in here blown, then I'd expect
> the damages to the surrounding area, like the rest of this house, will
> be of far greater importance. One strike that I witnessed a year ago,
> hit the ground wire on top of the pole where my transformer is mounted,

Transformers on poles? Oh the horror!

> I highly recommend such a setup to anyone. I figure that $60 suppressor
> has paid for itself half a dozen times by now. And everybody needs a
> ups don't they?

UPS? Yes, we have one at work. But at home? Probably I'm too spoiled by
the reliability of the Belgian power grid...

Gr{oetje,eeting}s,

						Geert
Re: Amanda and DVD-RAM
On Tue, 24 Jan 2006, Gene Heskett wrote:
> On Tuesday 24 January 2006 08:44, Graeme Humphries wrote:
>> Ian Turner wrote:
>>> The good news is that we are working on native optical media support.
>>> No promises about a release date, but it will be available at some
>>> point.
>>
>> I am *so* happy to hear that. I need a real backup solution for my
>> home server, but I can't personally afford a nice tape changer. ;)
>> Graeme
>
> Neither can I, Graeme. So, since big hard drives are almost commodity
> items now, I'd get one big (200GB?) enough, install it on the 2nd cable

Make it at least two, and never connect both of them to your system at
the same time.

Gr{oetje,eeting}s,

						Geert
Re: handling unreasonably large, non-static directories
On Wed, 11 Jan 2006, Gene Heskett wrote:
>> The only solution that I can think of is a DLE per customer. But if
>> you have thousands, then I don't know if it's been tested at that
>> scale.
>
> One thing it would do is to help isolate the users from each other, and
> that can only be good from a security aspect.

Indeed. Especially if things like `please delete all info about this
customer, including your backups' might happen in the future...

Gr{oetje,eeting}s,

						Geert
Re: new feature: client-side, server-side encryption dumptype option
On Thu, 29 Dec 2005, Kevin Till wrote:
> Another point I want to add: while public-key encryption allows you to
> encrypt the data with just the public key and store away the private
> key, it does require more computational resources, and is thus much
> slower than symmetric encryption.

Computational resources don't matter that much: most systems generate a
symmetric session key, which is encrypted using the public key. Hence the
slow part is limited to the encryption of the session key, while the
actual data is encrypted using the fast symmetric algorithm.

Gr{oetje,eeting}s,

						Geert
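This hybrid pattern can be demonstrated with the openssl command-line tool (assuming openssl is installed; all file names below are illustrative): the bulk data goes through fast AES, and only the small session key through the slow RSA operation.

```shell
# Toy hybrid-encryption round trip with the openssl CLI.
# Assumes openssl is installed; every file name here is a demo stand-in.
work=$(mktemp -d)
echo "backup payload" > "$work/data"

# Key pair: the private key can be stored away offline.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
    -out "$work/priv.pem" 2>/dev/null
openssl pkey -in "$work/priv.pem" -pubout -out "$work/pub.pem"

# Fast symmetric pass over the bulk data with a random session key...
openssl rand -hex 32 > "$work/session.key"
openssl enc -aes-256-cbc -salt -in "$work/data" -out "$work/data.enc" \
    -pass "file:$work/session.key"

# ...and the slow public-key operation only on the tiny session key.
openssl pkeyutl -encrypt -pubin -inkey "$work/pub.pem" \
    -in "$work/session.key" -out "$work/session.key.enc"

# Restore: recover the session key with the private key, then the data.
openssl pkeyutl -decrypt -inkey "$work/priv.pem" \
    -in "$work/session.key.enc" -out "$work/session.key.dec"
openssl enc -d -aes-256-cbc -in "$work/data.enc" -out "$work/data.dec" \
    -pass "file:$work/session.key.dec"

cmp "$work/data" "$work/data.dec" && echo "round trip OK"
rm -rf "$work"
```

No matter how large the data file gets, the RSA step always operates on the same few dozen bytes, which is exactly why the asymmetric cost doesn't matter much in practice.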
Re: vtapes and runtapes > 1
On Thu, 29 Dec 2005, Gene Heskett wrote:
> On Thursday 29 December 2005 05:03, Geert Uytterhoeven wrote:
>> On Wed, 28 Dec 2005, Gene Heskett wrote:
>>> On Wednesday 28 December 2005 11:25, Geert Uytterhoeven wrote:
>>>> Is anyone else using vtapes with runtapes > 1?
>>>>
>>>> Recently I decreased tapelength and increased runtapes from 1 to 2.
>>>> On most days 1 vtape is sufficient. But every time Amanda hits
>>>> (artificial, as specified by tapetype) end of tape on a vtape and
>>>> retries on the second tape, she fails with:
>>>>
>>>> |*** THE DUMPS DID NOT FINISH PROPERLY!
>>>> |
>>>> |These dumps were to tapes DAILY14, DAILY15.
>>>> |The next 2 tapes Amanda expects to used are: DAILY16, DAILY17.
>>>> |
>>>> |FAILURE AND STRANGE DUMP SUMMARY:
>>>> |  anakin /usr/share lev 3 FAILED [data write: Connection reset by peer]
>>>
>>> ISTR I had a similar problem Geert, but that was because I'd somehow
>>> disabled my holding disk when I converted to vtapes. Fixing the
>>
>> I'm indeed not using a holding disk.
>
> That begs the question then Geert, are you going to have one tonight?

No ;-)

> You see, the reason it failed after it brought in the next 'tape' is
> that once it had done that, it then discovered that since there was no
> holding disk image to work from, it had no copy of the src tarball to
> retry a save of. Since amanda isn't wired to repeat the whole disklist
> entry in the event of a failure, it has no choice but to send you a
> fail message.

IC...

> Some may say that this is a shortcoming of amanda, but I'm inclined to
> agree with the amanda design in this respect, as it can go on and
> finish the rest of the DLE list on the second (or more) tape it's
> allowed to use by the runtapes setting. If the DLE was truly too big
> for the medium, then it would sit there and use new medium each time
> until it had run out of runtapes, or usable tapes. So if it's a fail,
> there really isn't any use of a retry if the holding disk image is
> missing.

Unfortunately she never continued with the remaining DLEs; they are just
marked as `RESULTS MISSING' in the report.

Hence runtapes = 2 behaves exactly the same as runtapes = 1, except that
if it hits end-of-tape on the first one, it wastes the second tape (by
just storing the end-of-tape marker there).

Gr{oetje,eeting}s,

						Geert
Re: vtapes and runtapes > 1
On Fri, 30 Dec 2005, Gene Heskett wrote:
> On Friday 30 December 2005 04:12, Geert Uytterhoeven wrote:
>> [...]
>> Unfortunately she never continued with the remaining DLEs; they are
>> just marked as `RESULTS MISSING' in the report.
>
> Is this on the same machine, or on another client machine? In the
> latter case, the connection may have timed out during the failure
> recovery.

Client and server are the same machine.

> What are the dtimeout and etimeouts set for in the amanda.conf? The
> defaults there tend to be somewhat shorter than is sometimes needed.
> This is a moderately fast x86 box, with two clients, itself and my
> firewall box. But it needs more than the default 300 seconds for some
> things:
>
>   # grep timeout /usr/local/etc/amanda/Daily/amanda.conf
>   etimeout 900    # number of seconds per filesystem for estimates.
>   dtimeout 1800   # number of idle seconds before a dump is aborted.
>   ctimeout 8      # maximum number of seconds that amcheck waits

| [16:11:43]~# grep timeout /etc/amanda/DailySet1/amanda.conf
| etimeout 2000   # number of seconds per filesystem for estimates. (default 300 s)
| ctimeout 30     # Maximum amount of time that amcheck will wait for each client host. (default 30 s)
| dtimeout 3600   # Amount of idle time per disk on a given client that a dumper running from within amdump will wait before it fails with a data timeout error. (default 1800 s)
| [16:11:51]~#

Should be OK...

Gr{oetje,eeting}s,

						Geert
Re: vtapes and runtapes > 1
On Fri, 30 Dec 2005, Gene Heskett wrote:
> On Friday 30 December 2005 10:13, Geert Uytterhoeven wrote:
>> [...]
>
> Looks like it should be, Geert. Back to the logs then.

As I said before, nothing indicative there, except for the failure when
backing up /usr/share:

| sendbackup-gnutar: time 0.217: /usr/lib/amanda/runtar: pid 19253
| sendbackup: time 132.719: 124: strange(?):
| sendbackup: time 132.719: 124: strange(?): gzip: stdout: Connection reset by peer
| sendbackup: time 132.720: index tee cannot write [Broken pipe]
| sendbackup: time 132.720: pid 19251 finish time Wed Dec 28 00:57:39 2005

Gr{oetje,eeting}s,

						Geert
Re: vtapes and runtapes > 1
On Wed, 28 Dec 2005, Gene Heskett wrote:
> On Wednesday 28 December 2005 11:25, Geert Uytterhoeven wrote:
>> Is anyone else using vtapes with runtapes > 1?
>> [...]
>
> ISTR I had a similar problem Geert, but that was because I'd somehow
> disabled my holding disk when I converted to vtapes. Fixing the

I'm indeed not using a holding disk.

Gr{oetje,eeting}s,

						Geert
vtapes and runtapes > 1
Hi all, Is anyone else using vtapes with runtapes > 1? Recently I decreased tapelength and increased runtapes from 1 to 2. On most days 1 vtape is sufficient. But every time Amanda hits (artificial, as specified by tapetype) end of tape on a vtape and retries on the second tape, she fails with:

| *** THE DUMPS DID NOT FINISH PROPERLY!
|
| These dumps were to tapes DAILY14, DAILY15.
| The next 2 tapes Amanda expects to use are: DAILY16, DAILY17.
|
| FAILURE AND STRANGE DUMP SUMMARY:
|   anakin /usr/share lev 3 FAILED [data write: Connection reset by peer]
|
| USAGE BY TAPE:
|   Label     Time      Size     %    Nb
|   DAILY14   0:10   3058470k  100.2   20
|   DAILY15   0:00         0k    0.0    0
|
| FAILED AND STRANGE DUMP DETAILS:
|
| /-- anakin /usr/share lev 3 FAILED [data write: Connection reset by peer]
| sendbackup: start [anakin:/usr/share level 3]
| sendbackup: info BACKUP=/bin/tar
| sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -f... -
| sendbackup: info COMPRESS_SUFFIX=.gz
| sendbackup: info end
|
| NOTES:
|   taper: tape DAILY14 kb 3071968 fm 21 writing file: No space left on device
|   taper: retrying anakin:/usr/share.3 on new tape: [writing file: No space left on device]
      ^^^ Sometimes I get `[writing file: short write]' here instead.
|   taper: tape DAILY15 kb 0 fm 0 [OK]
|
| DUMP SUMMARY:
|                          DUMPER STATS              TAPER STATS
| HOSTNAME  DISK        L  ORIG-kB  OUT-kB  COMP%  MMM:SS  KB/s  MMM:SS  KB/s
| ---------------------------------------------------------------------------
| anakin    /usr/share  3  FAILED

The first vtape contains all successfully backed up DLEs, and a part of the failed DLE. The second vtape always contains the tape header and the tape end only, but no evidence of the retried DLE:

| # ls -l
| total 72
| -rw------- 1 backup backup    10 Dec 28 00:57 0-DAILY15
| -rw------- 1 backup backup 32768 Dec 28 00:57 0.DAILY15
| -rw------- 1 backup backup    10 Dec 28 00:57 1-TAPEEND
| -rw------- 1 backup backup 32768 Dec 28 00:57 1.TAPEEND
| #

/tmp/amanda/sendbackup.20051228005526.debug doesn't contain any clues, just `strange(?): gzip: stdout: Connection reset by peer' instead of the usual `size(|): Total bytes written ...'.
I'm using Amanda 2.4.5-2 (from Debian testing) and tpchanger chg-disk with the `file:' driver. Anyone with a clue? Thanks! I guess for now I'll fall back to using `runtapes 1' and very large tape lengths, so all daily backups will fit on one tape (until my vtape partition fills up, of course)... Gr{oetje,eeting}s, Geert
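For anyone reproducing this, a sketch of the kind of setup described: chg-disk vtapes with runtapes raised to 2. The paths and slot layout below are assumptions (they vary between Amanda versions and packagings), not a copy of the poster's config:

```
# amanda.conf fragment -- chg-disk vtape setup, spanning allowed
tpchanger "chg-disk"
changerfile "/etc/amanda/DailySet1/changer"   # chg-disk state file
tapedev "file:/var/lib/amanda/vtapes"         # slot1, slot2, ... live below here
runtapes 2                                    # may use a second vtape per run
```

Without a holding disk, end-of-tape handling during a direct-to-tape dump is exactly the code path that fails above, which is consistent with Gene's holding-disk observation in the reply.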
Re: Verizon subscribers -- off topic
On Tue, 6 Dec 2005, Paul Bijnens wrote: [Off topic] This isn't the first time I'm hit with this nonsense: I can't send mail to a Verizon email address. And I'm surely not alone. http://www.theinquirer.net/?article=23703 Just to let people know (Gene!) that I do send mail to them, I'm not ignoring them. But their provider is ignoring their users. Yes, I did fill out the whitelist request, twice already. Just enough to get one mail to pass through, and then a few weeks later, it bounces again. If people with Verizon email addresses want to read my responses, it's time to switch providers. Pfeew, that relieves the anger a bit... I can confirm I cannot send email to Gene from work, but I can from home. Apparently it's unrelated to the sender address, but related to the outgoing SMTP server. Gr{oetje,eeting}s, Geert
Re: is excluding /usr/local/var/amanda a bad idea ?
On Thu, 10 Nov 2005, Gene Heskett wrote: On Thursday 10 November 2005 16:42, Jon LaBadie wrote: [...] Gene H, on this list, has described his approach. Realizing that there is generally space left on the last tape, and further, amanda does not rewind the tape when finished. Thus he adds his tar'red up amanda tree in an extra tape file after the last amanda file. Say you lose the amanda indexes and db, is there some sort of way to rebuild it from the tapes? Or a way to know what files (and last modified date) are on which tapes? That is a tool just waiting for someone to volunteer to write. Who needs to write it Jon? I simply untar/unzip the files from the end of the tape (vtape in my case), back to the directory they belong in. Because you didn't really lose them, since you stored them at the end of your tape? That doesn't really need a script does it? He meant a script to read the complete tapes and reconstruct the database from that information. Gr{oetje,eeting}s, Geert
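A very rough starting point for the tool Jon describes, for the vtape case only: with the `file:' driver, each dump file on a vtape starts with a 32 KiB plain-text header naming the host, disk, level and date, so just printing those headers already tells you what is on which vtape. This is a sketch under that assumption, not a full database rebuild:

```shell
# Sketch: print the text headers of every file in a chg-disk vtape slot
# directory, as raw material for reconstructing a lost index/db by hand.
# Assumes the file: driver's 32 KiB per-file text header; slot layout
# and path are up to your tpchanger setup.
list_dump_headers() {   # usage: list_dump_headers /path/to/slot
    for f in "$1"/*; do
        [ -f "$f" ] || continue
        # keep only the printable start of the 32 KiB header block
        head -c 32768 "$f" | tr -d '\0' | head -n 3
        echo
    done
}
```

Run it over each slot directory in turn; real tapes would need the analogous `mt fsf`/`dd` loop instead.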
Re: Amanda performance
On Fri, 11 Nov 2005, Montagni, Giovanni wrote: I have a problem with amanda speed. I have an LTO drive with 100gb tapes. To fill that tape amanda takes more time than necessary: about 8 hours. LTO speed is ~10 Mb/s, so I expect the tape to be filled up in about 3 hours. I also have another LTO drive, on a Windows machine, and it takes 3 hours to fill tapes. What parameters do I have to set up to speed up amanda? Does the presence of the holding disk influence performance? You should use a holding disk to keep the tape drive at full streaming speed. -- [...] Damned, the Dutch version of the email disclaimer is missing! ;-) Gr{oetje,eeting}s, Geert
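The holding-disk advice above maps to one amanda.conf block: dumps are staged to local disk first and then streamed to tape at full speed, instead of trickling in at client/network pace and forcing the drive to stop-start. A minimal sketch (path and size are illustrative):

```
# amanda.conf fragment -- stage dumps on fast local disk before taping
holdingdisk hd1 {
    directory "/dumps/amanda"   # fast disk on the tape server
    use -100 Mb                 # use all free space except the last 100 MB
    chunksize 1 Gb              # split staged dumps into 1 GB chunks
}
```

With LTO, keeping the drive streaming also avoids shoe-shining, which by itself can account for a several-fold slowdown.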
Re: Email delivery
On Fri, 28 Oct 2005, Jon LaBadie wrote: On Fri, Oct 28, 2005 at 11:54:10AM +0200, Lars Bakker wrote: I think the best way to handle my particular problem would be to recompile the debian package. Forgot to mention, that if the debian package can't send mail reports properly, a defect report to debian seems appropriate. Indeed. On my Amanda server, mail works fine (but I use sendmail ;-). Gr{oetje,eeting}s, Geert
Re: Using DRDB and disks for tapes.
On Mon, 26 Sep 2005, Graeme Humphries wrote: Owen Williams wrote: A colleague and I are thinking of getting two large RAID arrays, keeping one in his machine room and one in mine in different buildings. Then split them in two with DRDB running in pairs on the four partitions. Unless you have 100Bt or better between your machine rooms, you may find that this is quite slow. I'd do some replication benchmarks before doing a full implementation. ;) Indeed. And rsyncing vtapes doesn't help much, since the vtapes' contents differ a lot (it would be nice if at least successive level 0's of unchanged file systems were (almost) identical. Anyone played with gzip --rsyncable yet?). So you will end up copying (at least) the full amount of data once per dumpcycle. I guess e.g. rdiff-backup would be a better solution. If you just add data (e.g. your digital picture collection), plain rsync can already be quite handy. Having broadband at home (6 Mbps downstream, 512 kbps[*] upstream in my case :-) is sufficient for these purposes. Gr{oetje,eeting}s, Geert [*] Doh, we're getting spoiled. I can easily remember when the local university had a whopping 256 kbps Internet connection...
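On the `gzip --rsyncable' aside: that flag (a GNU extension, absent from some builds of the era) periodically resets the compressor, so unchanged stretches of input compress to byte-identical output and rsync can transfer only the deltas, at a small cost in compressed size. A sketch, assuming a gzip that supports the flag:

```shell
# Sketch: rsync-friendly compression of a dump image.
# --rsyncable is a GNU gzip extension and may be missing on older
# or non-GNU gzip builds; check `gzip --help` first.
rsyncable_gzip() {   # usage: rsyncable_gzip <infile> <outfile.gz>
    gzip --rsyncable -c "$1" > "$2"
}
```

This only helps if Amanda's own compression is replaced or wrapped this way; vtape files compressed without it will still differ wholesale between runs.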
Re: questions about tuning configuration
Hi Gene, On Thu, 15 Sep 2005, Gene Heskett wrote: On Thursday 15 September 2005 09:11, Geert Uytterhoeven wrote: On Thu, 15 Sep 2005, Jon LaBadie wrote: On Tue, Sep 13, 2005 at 07:42:36PM +0200, Geert Uytterhoeven wrote: On Tue, 13 Sep 2005, Matt Hyclak wrote: On Tue, Sep 13, 2005 at 03:41:34PM +0100, Rodrigo Ventura enlightened us: tapecycle is the total number of tapes; only these tapes are rotated, right? Not exactly. tapecycle is the minimum number of tapes that will be used before any single tape can be overwritten. Many people have a tapecycle less than the total number of tapes so that if a tape happens to go bad, it doesn't hold everything up waiting for a new one. Tapecycle is also the number of slots in the virtual tape changer if you use vtapes. Actually that isn't completely correct either, as I found out: amtape continues scanning after the last accessed tape, but it refuses to scan for more than tapecycle tapes in one invocation. I'd like to have tapecycle different from the number of slots in the virtual tape changer, so I can move vtapes offline, like with a real changer. Right now the workaround is to make the number of slots equal to tapecycle, but this makes some assumptions I'd prefer not to make. I presume this is a problem in the changer script, not amtape per se. What changer script do you use? Any script hackers want to tackle it? chg-disk

I've been running chg-disk for about a year, maybe longer.

Yep, me too.

As you can see with ls, each virtual tape is nothing but a directory that is linked to as appropriate. There is nothing I can think of that would preclude the copying of a directory (and its associated index files) to one or more data dvd's in order to 'take a snapshot' for archival reasons.

Indeed.

Or, if there is space available on the target drive, I assume it could be marked as no-reuse, and a new directory created and amlabeled in its place.
I'm working (when time permits) on a script to `intelligently' move tapes to an offline location, and create/remove tapes on an as-needed basis, so I don't have to do this manually anymore. Gr{oetje,eeting}s, Geert
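The "each virtual tape is nothing but a directory" point above means the mechanical part of such a script is trivial; only the bookkeeping (tapelist, labels, offline moves) needs care. A sketch of the slot-creation step, with paths and the labeling step left as assumptions since they depend on the local config:

```shell
# Sketch: create N slot directories for chg-disk. Each slot is a plain
# directory; chg-disk points a "data" symlink at the current slot.
# Labeling each slot afterwards is a separate step, e.g.
# `amlabel <config> <label> slot <n>`, which needs a real Amanda config.
make_vtape_slots() {   # usage: make_vtape_slots <basedir> <count>
    i=1
    while [ "$i" -le "$2" ]; do
        mkdir -p "$1/slot$i"
        i=$((i + 1))
    done
}
```

Moving a vtape "offline" is then just moving (or archiving) the slot directory plus its index files, as Gene describes with his DVD snapshots.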
Re: questions about tuning configuration
On Thu, 15 Sep 2005, Jon LaBadie wrote: On Tue, Sep 13, 2005 at 07:42:36PM +0200, Geert Uytterhoeven wrote: On Tue, 13 Sep 2005, Matt Hyclak wrote: On Tue, Sep 13, 2005 at 03:41:34PM +0100, Rodrigo Ventura enlightened us: tapecycle is the total number of tapes; only these tapes are rotated, right? Not exactly. tapecycle is the minimum number of tapes that will be used before any single tape can be overwritten. Many people have a tapecycle less than the total number of tapes so that if a tape happens to go bad, it doesn't hold everything up waiting for a new one. Tapecycle is also the number of slots in the virtual tape changer if you use vtapes. Actually that isn't completely correct either, as I found out: amtape continues scanning after the last accessed tape, but it refuses to scan for more than tapecycle tapes in one invocation. I'd like to have tapecycle different from the number of slots in the virtual tape changer, so I can move vtapes offline, like with a real changer. Right now the workaround is to make the number of slots equal to tapecycle, but this makes some assumptions I'd prefer not to make. I presume this is a problem in the changer script, not amtape per se. What changer script do you use? Any script hackers want to tackle it? chg-disk Gr{oetje,eeting}s, Geert
Re: questions about tuning configuration
On Tue, 13 Sep 2005, Matt Hyclak wrote: On Tue, Sep 13, 2005 at 03:41:34PM +0100, Rodrigo Ventura enlightened us: tapecycle is the total number of tapes; only these tapes are rotated, right? Not exactly. tapecycle is the minimum number of tapes that will be used before any single tape can be overwritten. Many people have a tapecycle less than the total number of tapes so that if a tape happens to go bad, it doesn't hold everything up waiting for a new one. Tapecycle is also the number of slots in the virtual tape changer if you use vtapes. Actually that isn't completely correct either, as I found out: amtape continues scanning after the last accessed tape, but it refuses to scan for more than tapecycle tapes in one invocation. I'd like to have tapecycle different from the number of slots in the virtual tape changer, so I can move vtapes offline, like with a real changer. Right now the workaround is to make the number of slots equal to tapecycle, but this makes some assumptions I'd prefer not to make. Gr{oetje,eeting}s, Geert
Re: Question about tape label
On Fri, 2 Sep 2005, Montagni, Giovanni wrote: I have a tape labelled bkdaily12. I need to change the label to bkmonthly02, because I've lost the tape named bkmonthly02. Is it possible? Yes. I suppose I have to run:

    amrmtape config bkdaily12
    amlabel -f configmonthly bkmonthly02

Is this correct? Indeed. In the tapelist of configmonthly I already have bkmonthly02, it's not a problem, is it? You have to amrmtape the old bkmonthly02 first. Gr{oetje,eeting}s, Geert
Re: dump larger than tape
On Wed, 31 Aug 2005, Matt Hyclak wrote: On Wed, Aug 31, 2005 at 10:04:01AM +0200, Geert Uytterhoeven enlightened us: I just got this in my daily report from Amanda (2.4.5-1, Debian etch/testing):

| FAILURE AND STRANGE DUMP SUMMARY:
|
| [...]
|
| host dle lev 5 FAILED [dump larger than tape, -1 KB, skipping incremental]
|
| [...]
|
| DUMP SUMMARY:
|
| [...]
|
| host dle 5 FAILED
|
| [...]

The funny thing is that this DLE is only about 5 GiB large (according to du), while the tapetype length is 2 mbytes. Anyone ever seen this before? Yes. https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=154882 Without more details we can't tell if that's it, though :-) It's not that one, since I don't have a 64-bit system. However, after sending the email I realized the probable cause of the problem: I started using server side estimates a few weeks ago, so the estimate can easily be larger than the actual data. And when it's larger than the tape size, Amanda will just refuse to back it up, right? Anyway, it happened again last night. I just disabled server side estimates, and will see what happens this night. Gr{oetje,eeting}s, Geert
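If server-side estimates were indeed the culprit, the knob involved is the dumptype `estimate' parameter. A sketch, in versions that support it (the dumptype name here is made up for illustration):

```
# amanda.conf fragment -- choose the estimate strategy per dumptype
define dumptype fast-estimate {
    global
    estimate server   # use dump history instead of asking the client:
                      # fast, but coarse -- can overshoot and trip
                      # "dump larger than tape" on otherwise-fitting DLEs
}
```

Switching back to `estimate client' (the default) trades estimate accuracy for the planner-phase time that prompted the change in the first place.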
Re: Why Oh Why only THIS DLE is giving me those timeout problems ?
On Wed, 31 Aug 2005, Steve Wray wrote: Geert Uytterhoeven wrote: On Tue, 30 Aug 2005, Graeme Humphries wrote: Guy Dallaire wrote: Yes, thanks. I know about hard links. But how would it impact the size or performance of my backups? Well, if a file is hard linked multiple times, it'll be backed up multiple times. Therefore, a filesystem with tons of hard links will take a really long time to back up. :) Fortunately tar is sufficiently smart to back it up only once. Usually the problem with lots of hard links is not the data timeout value, but the estimate timeout value, as I found out the hard way[*]. We've been having similar problems with estimates timing out. I just ran the 'find' command given in an earlier email and found a grand total of 607 hard links on the entire filesystem. What I'm wondering is, does 607 count as 'lots' WRT amanda estimate timeouts? Not really, given I have many files with more than 600 hard links. I seem to have 1582186 of them in my cluster of Linux kernel source trees. Gr{oetje,eeting}s, Geert
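The exact 'find' command from the earlier email isn't quoted here, but the usual way to get the number being compared above is to count files whose link count exceeds one, staying on a single filesystem, since that roughly matches what the estimate phase walks. A sketch:

```shell
# Count files with more than one hard link under a mount point,
# without crossing filesystem boundaries (-xdev) -- each link is
# counted once, so two names for the same file contribute 2.
count_hardlinked() {   # usage: count_hardlinked <dir>
    find "$1" -xdev -type f -links +1 | wc -l
}
```

By this measure 607 is indeed tiny next to the ~1.5 million links of a hardlinked kernel-tree farm.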
dump larger than tape
Hi, I just got this in my daily report from Amanda (2.4.5-1, Debian etch/testing):

| FAILURE AND STRANGE DUMP SUMMARY:
|
| [...]
|
| host dle lev 5 FAILED [dump larger than tape, -1 KB, skipping incremental]
|
| [...]
|
| DUMP SUMMARY:
|
| [...]
|
| host dle 5 FAILED
|
| [...]

The funny thing is that this DLE is only about 5 GiB large (according to du), while the tapetype length is 2 mbytes. Anyone ever seen this before? Thx! Gr{oetje,eeting}s, Geert
Re: Why Oh Why only THIS DLE is giving me those timeout problems ?
On Tue, 30 Aug 2005, Guy Dallaire wrote: Again, this morning: mesg read: Connection timed out When it's not this message, it's a data timeout error on the same DLE, on the same host. There are 3 other disks on this host, with not a significantly higher amount of data, and they always back up OK. This always occurs on level 1 dumps it seems. Level 0 dumps always work just fine. I don't know what to think. Perhaps the problem DLE has lots of hard links? Gr{oetje,eeting}s, Geert
Re: Why Oh Why only THIS DLE is giving me those timeout problems ?
On Tue, 30 Aug 2005, Graeme Humphries wrote: Guy Dallaire wrote: Yes, thanks. I know about hard links. But how would it impact the size or performance of my backups? Well, if a file is hard linked multiple times, it'll be backed up multiple times. Therefore, a filesystem with tons of hard links will take a really long time to back up. :) Fortunately tar is sufficiently smart to back it up only once. Usually the problem with lots of hard links is not the data timeout value, but the estimate timeout value, as I found out the hard way[*]. Gr{oetje,eeting}s, Geert [*] Like having `all' Linux kernel source trees on my disk, with identical files hardlinked together, as a poor man's blazing fast SCM system.