Re: Not found in archive

2010-07-14 Thread Mark Adams
On Tue, Jul 13, 2010 at 04:14:17PM -0500, Dustin J. Mitchell wrote:
 Hmm, 14 parts and 14 "Error writing to fd 5" messages.  From my memory
 and a brief look at the 2.6.1 sources (I couldn't find a version in
 the thread, but this looks like 2.6.1 to me), that wouldn't have come
 from Amanda itself, but from something Amanda ran - perhaps the
 decompression binary?  You said you disabled custom_compress - is this
 an uncompressed dump?

Yes, I'm on 2.6.1.

It's a gzip dump; this is my config:

define dumptype habackup_archive {
comment HABACKUP_ARCHIVE
program GNUTAR
property NO-UNQUOTE yes
tape_splitsize 40Gb
split_diskbuffer /tapehold/
fallback_splitsize 10Gb
index
priority high
auth bsd
}


 
 The problem is this: all of the parts are concatenated onto the same
 file descriptor - so if the descriptor (a pipe to amandad in this
 case) gets closed prematurely, and amidxtaped somehow failed to notice
 this, then it would make sense to see the remaining parts trigger the
 same failure.
 
 The logs you posted do not have five tapes, but your original problem
 statement said "when it gets to the 5th tape".. are these two
 different failures?

No, I snipped the log to make it more friendly for the mailing list. It
gets all the way to the 5th tape, then those errors come up. In my
numerous runs, I've never been able to retrieve anything with
amrecover past the 1st tape.
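Dustin's description of the prematurely closed descriptor can be reproduced with a minimal pipeline. This is only a generic shell analogy for the amidxtaped-to-amandad pipe, not Amanda code:

```shell
# 'head' exits after one line and closes the read end of the pipe;
# the writer ('yes') then gets SIGPIPE/EPIPE on its next write --
# analogous to amidxtaped writing part after part to a descriptor
# that the peer already closed.
yes | head -n 1
```

Each of the 14 parts hitting the same dead descriptor would produce one such error, which matches the 14 "Broken pipe" lines in the amidxtaped log.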

 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com



Re: Not found in archive

2010-07-13 Thread Mark Adams
Hi Dustin,

In amidxtaped I get the following:

1278951470.354229: amidxtaped: search_a_tape: desired_tape=0x7a0450 label=HASNAPSHOT-05
1278951470.354237: amidxtaped: tape:   numfiles = 14
1278951470.354242: amidxtaped: tape:   files[0] = 1
1278951470.354247: amidxtaped: tape:   files[1] = 2
1278951470.354252: amidxtaped: tape:   files[2] = 3
1278951470.354257: amidxtaped: tape:   files[3] = 4
1278951470.354261: amidxtaped: tape:   files[4] = 5
1278951470.354269: amidxtaped: tape:   files[5] = 6
1278951470.354272: amidxtaped: tape:   files[6] = 7
1278951470.354276: amidxtaped: tape:   files[7] = 8
1278951470.354279: amidxtaped: tape:   files[8] = 9
1278951470.354282: amidxtaped: tape:   files[9] = 10
1278951470.354286: amidxtaped: tape:   files[10] = 11
1278951470.354289: amidxtaped: tape:   files[11] = 12
1278951470.354293: amidxtaped: tape:   files[12] = 13
1278951470.354296: amidxtaped: tape:   files[13] = 14
1278951470.354299: amidxtaped: current tapefile_idx = 0
Error writing fd 5: Broken pipe
Error writing fd 5: Broken pipe
Error writing fd 5: Broken pipe
Error writing fd 5: Broken pipe
Error writing fd 5: Broken pipe
Error writing fd 5: Broken pipe
Error writing fd 5: Broken pipe
Error writing fd 5: Broken pipe
Error writing fd 5: Broken pipe
Error writing fd 5: Broken pipe
Error writing fd 5: Broken pipe
Error writing fd 5: Broken pipe
Error writing fd 5: Broken pipe
Error writing fd 5: Broken pipe


and in the amrecover log:

1278945471.045407: amrecover: security_stream_close(0x10356d0)
1278952987.830601: amrecover: security_stream_close(0x10245a0)

On Mon, Jul 12, 2010 at 08:15:32PM -0500, Dustin J. Mitchell wrote:
 On Mon, Jul 12, 2010 at 11:45 AM, Mark Adams m...@campbell-lange.net wrote:
  OK, So i've tried again without the custom-compress option, now when it
  gets to the 5th tape when trying to recover I just get
 
  tar: Unexpected EOF in archive
  tar: Error is not recoverable: exiting now
  Extractor child exited with status 2
 
 
  Whats the best way forward here? Should I be using a newer version of
  Amanda?
 
 I don't know - that's very strange.  Are there any messages in the
 amidxtaped or amrecover debug logfiles?
 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com


Re: Not found in archive

2010-07-12 Thread Mark Adams
OK, so I've tried again without the custom-compress option. Now when it
gets to the 5th tape when trying to recover, I just get:

tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
Extractor child exited with status 2


What's the best way forward here? Should I be using a newer version of
Amanda?

Regards,
Mark

On Wed, Jul 07, 2010 at 05:00:58PM +0100, Mark Adams wrote:
 Hi Dustin,
 
 I'd tried it not using pigz with the original errors. Thats when I added
 NO-UNQUOTE to see if it helped, I'll run again with pigz but I believe
 the problem is elsewhere.
 
 On Wed, Jul 07, 2010 at 10:46:32AM -0500, Dustin J. Mitchell wrote:
  On Wed, Jul 7, 2010 at 10:41 AM, Mark Adams m...@campbell-lange.net wrote:
      client_custom_compress /usr/bin/pigz
  
  I've heard about pigz in use several times now, and in no case was it
  successful.  I don't know exactly what the problem is, but history
  suggests that things will work fine for you if you just remove this
  line and run a new backup.
  
  Dustin
  
  -- 
  Open Source Storage Engineer
  http://www.zmanda.com
 
 -- 


Re: Not found in archive

2010-07-07 Thread Mark Adams
Hi Dustin,

I'd tried it not using pigz with the original errors. That's when I added
NO-UNQUOTE to see if it helped. I'll run again with pigz, but I believe
the problem is elsewhere.

On Wed, Jul 07, 2010 at 10:46:32AM -0500, Dustin J. Mitchell wrote:
 On Wed, Jul 7, 2010 at 10:41 AM, Mark Adams m...@campbell-lange.net wrote:
     client_custom_compress /usr/bin/pigz
 
 I've heard about pigz in use several times now, and in no case was it
 successful.  I don't know exactly what the problem is, but history
 suggests that things will work fine for you if you just remove this
 line and run a new backup.
 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com

-- 


Re: Not found in archive

2010-07-07 Thread Mark Adams
Hi Dustin,

I backed up using: sudo -u backup amdump namehere

I'm trying to recover using amrecover. The config for the dump is as follows:

-
define dumptype clientbackup_archive {
comment CLIENTBACKUP_ARCHIVE
program GNUTAR
property NO-UNQUOTE yes
compress client custom
client_custom_compress /usr/bin/pigz
tape_splitsize 40Gb
split_diskbuffer /tapehold/
fallback_splitsize 10Gb
index
priority high
auth bsd
}
-
On Tue, Jul 06, 2010 at 11:01:04AM -0500, Dustin J. Mitchell wrote:
 On Tue, Jul 6, 2010 at 8:37 AM, Mark Adams m...@campbell-lange.net wrote:
  Anything else I can try? This is really killing me!
 
 It doesn't look like that was a tar file.  How was it backed up, and
 how are you restoring?
 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com



Re: Not found in archive

2010-07-06 Thread Mark Adams
Hi Dustin,

Doesn't look like it's worked. Now I'm getting the following (before the
normal "Not found in archive" message comes up):

tar: Skipping to next header
tar: Archive contains `b\371^h\r%\247\0 \006\212\214' where numeric off_t value expected
tar: Archive contains `\211\023\034cQ\r\004\252\005\300\370' where numeric time_t value expected
tar: Archive contains `\227\305jq\0\033' where numeric uid_t value expected
tar: Archive contains `9\n\342)\236\2255\025' where numeric gid_t value expected
tar: Skipping to next header
tar: Archive contains `\231K\347)\002\346%\026\3207A\372' where numeric off_t value expected
tar: Archive contains `\344\203\0\023 \263(T' where numeric mode_t value expected
tar: Archive contains `p\244\267\a\030\034\006G\310\034\3220' where numeric time_t value expected
tar: Archive contains `\356\004C\3047tn' where numeric uid_t value expected
tar: Archive contains `\020\030\004\004\224\036R\357' where numeric gid_t value expected
tar: Skipping to next header
tar: ./datasnap0/design/P1050213.JPG: Not found in archive

Anything else I can try? This is really killing me!

Regards,
Mark
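Those "Archive contains ... where numeric value expected" errors mean tar is not positioned at a valid archive header. One quick sanity check on a recovered stream (a sketch with made-up filenames) is to look for the "ustar" magic string that every POSIX/GNU tar header carries at byte offset 257:

```shell
# Build a known-good tar archive, then read 5 bytes at offset 257 of
# its first 512-byte header block. A valid tar stream shows "ustar"
# there; the binary garbage in the errors above would not.
printf 'hello\n' > demo.txt
tar -cf demo.tar demo.txt
dd if=demo.tar bs=1 skip=257 count=5 2>/dev/null   # prints: ustar
```

Running the same check at the start of a dump retrieved with amfetchdump would show whether the data on tape is a tar stream at all, or already corrupt before amrecover gets involved.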

On Thu, Jul 01, 2010 at 10:02:54AM -0500, Dustin J. Mitchell wrote:
 On Thu, Jul 1, 2010 at 3:57 AM, Mark Adams m...@campbell-lange.net wrote:
  Great thanks. I'll try this - is this something most people set on as
  part of a normal config?
 
 It's relatively new, so no, not yet, but if it proves to have lots of
 upsides and no significant downsides, then I'm sure it will become
 common.
 
 If this does end up solving your problem, perhaps you could add a
 Troubleshooting page to the wiki to help others?
 
 In general, Amanda's interactions with various tar implementations'
 quoting and unquoting mechanisms is unspecified and not always
 correct.  It's something we'd like to address, but will probably not
 get to soon -- it's best solved with a completely new approach to
 indexing dumps.
 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com



Re: Not found in archive

2010-07-01 Thread Mark Adams
Great thanks. I'll try this - is this something most people set on as
part of a normal config?

On Wed, Jun 30, 2010 at 12:52:38PM -0500, Dustin J. Mitchell wrote:
 On Wed, Jun 30, 2010 at 12:52 PM, Dustin J. Mitchell dus...@zmanda.com 
 wrote:
   property NO-QUOTING yes
 
 Sorry, that's
   property NO-UNQUOTE yes
 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com


Re: Not found in archive

2010-06-30 Thread Mark Adams
Hi All,

I've split this down into smaller DLEs (although the biggest is still
3T..) and re-run, but I'm still having the same issue - only files on
the first tape can be recovered using amrecover! Please please please
does anyone know what could be causing this?!

Regards,
Mark

On Fri, Jun 18, 2010 at 02:27:44PM +0100, Mark Adams wrote:
 On Thu, Jun 17, 2010 at 04:47:05PM -0400, Jon LaBadie wrote:
  On Thu, Jun 17, 2010 at 10:50:31AM +0100, Mark Adams wrote:
   Hi All,
   
   I had written to the list a little while ago regarding issues I was
   having with pigz, and not being able to retrieve from a 2nd tape. I then
   went on to test using normal gzip and retrieved from a 2nd tape without
   issue.
   
   However, I've now run a set with 7 tapes, and am having trouble
   retrieving from them. I'm getting the "Not found in archive" message
   even though the files show in the index.
   
   This makes me think that maybe, after all, it was just my poor
   configuration and not pigz that was causing the issues! Can anyone shed
   any light on this or advise on how I can figure out why it thinks these
   files aren't in the archive even though they are in the index? as
   before, retrievals from the first tape work.
   
  Just a guess.  amrecover shows the index as of a particular date.
  Might the file(s) you are asking for have been present some days,
  but not on the day (singular) you told amrecover to work with.
  
 Hi There,
 
 I'm navigating through the index and selecting the file using add then
 using extract. I only have a single run on this (snapshot) so I don't
 think this is the issue.
 
 Does anyone else have an idea?
 
 Regards,
 Mark


Re: Not found in archive

2010-06-30 Thread Mark Adams
I've just seen this old thread

http://forums.zmanda.com/showthread.php?t=1691&page=2

Is this possibly my problem Dustin?

NO-UNQUOTE
If NO (the default), gnutar doesn't get the --no-unquote option
and the diskname can't have some characters, eg. '\'. If YES, then the
--no-unquote option is given to gnutar and the diskname can have any
characters. This option is available only if you are using tar-1.16 or
newer. 

How do I enable this option? (and shouldn't it be YES by default for a
normal install? What's the downside?)

Cheers,
Mark

On Wed, Jun 30, 2010 at 04:40:33PM +0100, Mark Adams wrote:
 Hi All,
 
 I've split this down in to smaller DLE's (although the biggest is still
 3T..) And re-run, but I'm still having the same issue - only files on
 the first tape can be recovered using amrecover! Please please please
 does anyone know what could be causing this?!
 
 Regards,
 Mark
 
 On Fri, Jun 18, 2010 at 02:27:44PM +0100, Mark Adams wrote:
  On Thu, Jun 17, 2010 at 04:47:05PM -0400, Jon LaBadie wrote:
   On Thu, Jun 17, 2010 at 10:50:31AM +0100, Mark Adams wrote:
Hi All,

I had written to the list a little while ago regarding issues I was
having with pigz, and not being able to retrieve from a 2nd tape. I then
went on to test using normal gzip and retrieved from a 2nd tape without
issue.

However, I've now run a set with 7 tapes, and am having trouble
retrieving from them. I'm getting the "Not found in archive" message
even though the files show in the index.

This makes me think that maybe, after all, it was just my poor
configuration and not pigz that was causing the issues! Can anyone shed
any light on this or advise on how I can figure out why it thinks these
files aren't in the archive even though they are in the index? as
before, retrievals from the first tape work.

   Just a guess.  amrecover shows the index as of a particular date.
   Might the file(s) you are asking for have been present some days,
   but not on the day (singular) you told amrecover to work with.
   
  Hi There,
  
  I'm navigating through the index and selecting the file using add then
  using extract. I only have a single run on this (snapshot) so I don't
  think this is the issue.
  
  Does anyone else have an idea?
  
  Regards,
  Mark


Re: Not found in archive

2010-06-30 Thread Mark Adams
How do I enable it?

Regards,
Mark

On 30 Jun 2010, at 17:39, Dustin J. Mitchell dus...@zmanda.com wrote:

 On Wed, Jun 30, 2010 at 10:51 AM, Mark Adams m...@campbell-lange.net wrote:
 Is this possibly my problem Dustin?
 
 NO-UNQUOTE
If NO (the default), gnutar doesn't get the --no-unquote option
 and the diskname can't have some characters, eg. '\'. If YES, then the
 --no-unquote option is given to gnutar and the diskname can have any
 characters. This option is available only if you are using tar-1.16 or
 newer.
 
 How do I enable this option? (and shouldn't it be YES by default for a
 normal install? whats the downside?)
 
 Possibly.  It's not the default because it will break tar < 1.16.  Tar's
 quoting behavior is fantastically complicated - see
  http://wiki.zmanda.com/index.php/GNU_Tar_Include_and_Exclude_Behavior
 
 But give it a shot!
 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com



Re: Not found in archive

2010-06-18 Thread Mark Adams
On Thu, Jun 17, 2010 at 04:47:05PM -0400, Jon LaBadie wrote:
 On Thu, Jun 17, 2010 at 10:50:31AM +0100, Mark Adams wrote:
  Hi All,
  
  I had written to the list a little while ago regarding issues I was
  having with pigz, and not being able to retrieve from a 2nd tape. I then
  went on to test using normal gzip and retrieved from a 2nd tape without
  issue.
  
  However, I've now run a set with 7 tapes, and am having trouble
  retrieving from them. I'm getting the "Not found in archive" message
  even though the files show in the index.
  
  This makes me think that maybe, after all, it was just my poor
  configuration and not pigz that was causing the issues! Can anyone shed
  any light on this or advise on how I can figure out why it thinks these
  files aren't in the archive even though they are in the index? as
  before, retrievals from the first tape work.
  
 Just a guess.  amrecover shows the index as of a particular date.
 Might the file(s) you are asking for have been present some days,
 but not on the day (singular) you told amrecover to work with.
 
Hi There,

I'm navigating through the index and selecting the file using "add", then
using "extract". I only have a single run on this (snapshot) so I don't
think this is the issue.

Does anyone else have an idea?

Regards,
Mark


Not found in archive

2010-06-17 Thread Mark Adams
Hi All,

I had written to the list a little while ago regarding issues I was
having with pigz, and not being able to retrieve from a 2nd tape. I then
went on to test using normal gzip and retrieved from a 2nd tape without
issue.

However, I've now run a set with 7 tapes, and am having trouble
retrieving from them. I'm getting the "Not found in archive" message
even though the files show in the index.

This makes me think that maybe, after all, it was just my poor
configuration and not pigz that was causing the issues! Can anyone shed
any light on this or advise on how I can figure out why it thinks these
files aren't in the archive even though they are in the index? as
before, retrievals from the first tape work.

My config is as follows;

--

org ORGBACKUP # Title of report
mailto m...@mail.net # recipients of report, space separated
dumpuser backup # the user to run dumps under
inparallel 4 # maximum dumpers that will run in parallel
netusage 100 # maximum net bandwidth for Amanda, in KB per sec

# a filesystem is due for a full backup once every day
dumpcycle 0 days # the number of days in the fullback dump cycle
runspercycle 1 # daily full backups
runtapes 7

bumpsize 20 MB # minimum savings (threshold) to bump level 1 - 2
bumpdays 1 # minimum days at each level
bumpmult 4 # threshold = bumpsize * (level-1)**bumpmult

tapedev /dev/nst0 # Linux @ tuck, important: norewinding
tpchanger chg-zd-mtx
changerfile /etc/amanda/ha/changer.conf
changerdev /dev/sg4

tapetype LTO4 # what kind of tape it is (see tapetypes below)
labelstr ^ORGSNAPSHOT-[0-9][0-9]*$ # label constraint regex: all tapes must match

holdingdisk hd1 {
comment main holding disk
directory /tapehold   # where the holding disk is
use -1000 Mb # how much space can we use on it
# a non-positive value means:
#use all space but that value
chunksize 1Gb   # size of chunk if you want big dump to be
# dumped on multiple files on holding disks
#  N Kb/Mb/Gb split images in chunks of size N
# The maximum value should be
# (MAX_FILE_SIZE - 1Mb)
#  0  same as INT_MAX bytes
}

infofile /etc/amanda/org/curinfo # database directory
logdir /etc/amanda/org/log # log filename
indexdir /etc/amanda/org/index

define tapetype LTO4 {
comment HP LTO4
length 804191104 kbytes
filemark 0 kbytes
speed 90511 kps
blocksize 32 kbytes
}

define dumptype orgbackup {
program GNUTAR
comment HABACKUP
tape_splitsize 40Gb
split_diskbuffer /tapehold/
fallback_splitsize 10Gb
index
priority high
auth bsd
}

-



Re: Identifying what file is at the start of a tape for audit purposes

2010-06-15 Thread Mark Adams
On Tue, Jun 15, 2010 at 10:09:10AM -0500, Dustin J. Mitchell wrote:
 On Tue, Jun 15, 2010 at 8:53 AM, Mark Adams m...@campbell-lange.net wrote:
  Is there any way to identify (via the index or other means) what file is
  at the start of what tape or what split part?
 
 You can use 'amadmin find' I suppose.. although it doesn't let you
 search by tape.
 
  I need to audit my tape sets to ensure data has been backed up
  correctly, I was anticipating trying to retrieve a single file from each
  tape to confirm this. Does anyone have any suggestions on how best to do
  this or is there any automated way to do this with Amanda?
 
 amcheckdump is probably what you want.

Hi Dustin, Thanks for this pointer. I assume amcheckdump takes just as
long as the backup takes in the first place?

Regards,
Mark

 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com



Re: Issue with amrecover from 2nd tape

2010-05-18 Thread Mark Adams
On Mon, May 17, 2010 at 05:17:19PM -0400, Dustin J. Mitchell wrote:
 On Wed, May 12, 2010 at 12:54 PM, Mark Adams m...@campbell-lange.net wrote:
  1273659791.836703: sendbackup: critical (fatal): index tee cannot write [Broken pipe]
 
 This means that the index tee (which splits off the 'tar' output to
 generate the index) cannot write to its output, which is the
 client-side compression.  Since the error is EPIPE, this means either
 that pbzip2 exited, or that it closed its standard input prematurely.
 
 The next step would be to figure out why pbzip2 would do that.  Does
 it automatically compress its stdin and pipe it to stdout?
 
 Dustin
 

You need to specify -c with pbzip2 to have it compress to stdout, I
believe. If you just run it by itself:

pbzip2
pbzip2: *ERROR: Won't write compressed data to terminal.  Aborting!

pbzip2 -h

Usage: pbzip2 [-1 .. -9] [-b#cdfhklm#p#qrS#tVz] filename filename2 filenameN
 -1 .. -9        set BWT block size to 100k .. 900k (default 900k)
 -b#             Block size in 100k steps (default 9 = 900k)
 -c,--stdout     Output to standard out (stdout)
 -d,--decompress Decompress file
 -f,--force      Overwrite existing output file
 -h,--help       Print this help message
 -k,--keep       Keep input file, don't delete
 -l,--loadavg    Load average determines max number processors to use
 -m#             Maximum memory usage in 1MB steps (default 100 = 100MB)
 -p#             Number of processors to use (default: autodetect [2])
 -q,--quiet      Quiet mode (default)
 -r,--read       Read entire input file into RAM and split between processors
 -t,--test       Test compressed file integrity
 -v,--verbose    Verbose mode
 -V,--version    Display version info for pbzip2 then exit
 -z,--compress   Compress file (default)
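Amanda invokes the custom-compress program as a plain stdin-to-stdout filter, so any replacement has to behave the way gzip does in the round-trip below. This is a sketch using gzip as a stand-in; for pbzip2 you would substitute "pbzip2 -c" (it refuses to write compressed data to a terminal without -c, as the error above shows):

```shell
# Round-trip a stream through compress/decompress the same way Amanda
# pipes dump data through its compression filter: data in on stdin,
# compressed data out on stdout, and back again.
printf 'round-trip test\n' | gzip -c | gzip -dc
```

A compressor that passes this filter test but still breaks restores (as pigz seemed to here) would point at a subtler incompatibility than simple stdin/stdout handling.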



Re: Issue with amrecover from 2nd tape

2010-05-12 Thread Mark Adams
Hi,

On Fri, May 07, 2010 at 10:10:27AM -0500, Dustin J. Mitchell wrote:
 On Fri, May 7, 2010 at 4:22 AM, Mark Adams m...@campbell-lange.net wrote:
  Does this help at all? as it read the 2nd tape does this mean the data
  is on the tapes and it's a problem with amrecover?
 
 Basically.  It could mean that the data on the tapes is corrupt,
 although the basic structure of the data is intact (that is, the
 filemarks are in the right place and the headers are right).
 
  Also for future ref, do I actually need around 1.5* the space of the DLE
  to do amfetchdump? (to include the merging..)
 
 I don't recall which version you're using.  This is no longer the case in 3.1.
 
  I'm going to try now using plain gzip instead of pigz, just incase this
  is causing my issues.
 
 It's quite possible..

This was the problem. Using gzip it retrieves from the 2nd tape just
fine. It takes twice as long to run the backup, though! Is anyone using
pigz and successfully retrieving data from the 2nd tape onwards? Or has
anyone used pbzip2?

Cheers,
Mark

 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com



Re: Issue with amrecover from 2nd tape

2010-05-12 Thread Mark Adams
On Wed, May 12, 2010 at 09:55:05AM -0500, Dustin J. Mitchell wrote:
 On Wed, May 12, 2010 at 4:39 AM, Mark Adams m...@campbell-lange.net wrote:
  This was the problem. Using gzip it retrieves from the 2nd tape just
  fine. It takes twice as long to run the backup though! Is anyone using
  pigz and is successfully retrieving data from the 2nd tape on? or has
  anyone used pbzip2?
 
 Interesting!  We're hoping to implement official support for this in
 the next (3.2) release, so I'm curious to know more about why pigz
 failed.  In theory, it's a drop-in gzip replacement, right?

As far as I know, yes. Might be something to do with the merging of the
tar parts? Maybe you should try it in your labs and see how you get
on! :)

In the meantime I can't get pbzip2 to work at all. It just crashes out
with the following:

--snip
  <program>GNUTAR</program>
  <disk>/upbackup</disk>
  <level>0</level>
  <auth>bsd</auth>
  <compress>SERVER-CUSTOM<custom-compress-program>/usr/bin/pbzip2</custom-compress-program></compress>
  <record>YES</record>
  <index>YES</index>
</dle>

1273659787.671710: dumper: is_partial   = 0
1273659787.671714: dumper: partnum  = 0
1273659787.671717: dumper: totalparts   = 0
1273659787.671720: dumper: blocksize= 32768
1273659791.836144: dumper: security_stream_close(0x1c14990)
1273659791.836191: dumper: security_stream_close(0x1c1c9f0)
1273659791.836215: dumper: security_stream_close(0x1c24a50)
1273659791.836277: dumper: putresult: 10 FAILED
1273659792.404559: dumper: getcmd: QUIT 
1273659792.404617: dumper: pid 741 finish time Wed May 12 11:23:12 2010

snip--

This is my dumptype:

define dumptype upbackup {
program GNUTAR
compress server custom
server_custom_compress /usr/bin/pbzip2
tape_splitsize 40Gb
split_diskbuffer /tapehold/
fallback_splitsize 10Gb
comment UPBACKUP
index
priority high
auth bsd
}

 
 Dustin


Re: Issue with amrecover from 2nd tape

2010-05-12 Thread Mark Adams
1273659787.668231: sendbackup: pid 1529 ruid 34 euid 34 version 2.6.1p1: start at Wed May 12 11:23:07 2010
1273659787.668261: sendbackup: Version 2.6.1p1
1273659787.668465: sendbackup: pid 1529 ruid 34 euid 34 version 2.6.1p1: rename at Wed May 12 11:23:07 2010
1273659787.668839: sendbackup:   Parsed request as: program `GNUTAR'
1273659787.668847: sendbackup:  disk `/upbackup'
1273659787.668851: sendbackup:  device `/upbackup'
1273659787.668854: sendbackup:  level 0
1273659787.668857: sendbackup:  since NODATE
1273659787.668860: sendbackup:  options `'
1273659787.668928: sendbackup: start: localhost:/upbackup lev 0
1273659787.669014: sendbackup: doing level 0 dump as listed-incremental to '/var/lib/amanda/gnutar-lists/localhost_upbackup_0.new'
1273659787.669561: sendbackup: pipespawnv: stdoutfd is 50
1273659787.669716: sendbackup: Spawning /usr/lib/amanda/runtar runtar up /bin/tar --create --file - --directory /upbackup --one-file-system --listed-incremental /var/
1273659787.669747: sendbackup: Started index creator: /bin/tar -tf - 2>/dev/null | sed -e 's/^\.//'
1273659787.669846: sendbackup: gnutar: /usr/lib/amanda/runtar: pid 1533
1273659787.669867: sendbackup: Started backup
1273659791.836703: sendbackup: critical (fatal): index tee cannot write [Broken pipe]
/usr/lib/amanda/libamanda-2.6.1p1.so[0x7f3d7e034a21]
/lib/libglib-2.0.so.0(g_logv+0x1a7)[0x7f3d7d2bb6f7]
/lib/libglib-2.0.so.0(g_log+0x83)[0x7f3d7d2bbad3]
/usr/lib/amanda/sendbackup(start_index+0x28e)[0x403bfe]
/usr/lib/amanda/sendbackup[0x407e22]
/usr/lib/amanda/sendbackup(main+0x11c1)[0x405dc1]
/lib/libc.so.6(__libc_start_main+0xfd)[0x7f3d7c8f6abd]
/usr/lib/amanda/sendbackup[0x403519]

On Wed, May 12, 2010 at 11:33:15AM -0500, Dustin J. Mitchell wrote:
 On Wed, May 12, 2010 at 11:17 AM, Mark Adams m...@campbell-lange.net wrote:
  In the mean time I can't get pbzip2 to work at all. It just crashes out
  with the following:
 
 Please attach the whole sendbackup log.
 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com


Re: Issue with amrecover from 2nd tape

2010-05-07 Thread Mark Adams
Hi There,

I'm not so worried about the data, I'm just testing backing up to
multiple tapes at present.

I've retrieved using the following:

sudo -u backup amfetchdump up localhost /upbackup

which has given me 2 files:

-rw-r- 1 backup backup 1350349324288 2010-05-07 03:11 localhost._upbackup.20100430113529.0.001
drwxr-xr-x 2 backup backup   108 2010-05-06 20:03 .
-rw-r- 1 backup backup  649939222528 2010-05-06 19:47 localhost._upbackup.20100430113529.0.020

Unfortunately I ran out of space on the device I was restoring the dumps
to at this stage:

Merging localhost._upbackup.20100430113529.0.020 with localhost._upbackup.20100430113529.0.001
Error writing fd 7: No space left on device
amfetchdump: Error copying data from file localhost._upbackup.20100430113529.0.020 to fd 7.

Does this help at all? Since it read the 2nd tape, does this mean the data
is on the tapes and it's a problem with amrecover?

Also for future reference, do I actually need around 1.5x the space of the
DLE to do amfetchdump? (to include the merging..)

I'm going to try now using plain gzip instead of pigz, just in case this
is causing my issues.

Thanks for your help so far.

Regards,
Mark
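The merge step amfetchdump performs is, conceptually, just ordered concatenation of the split parts back into one tar stream. The sketch below illustrates that with split/cat on an ordinary file (real Amanda parts also carry per-part headers that amfetchdump strips first, so plain cat of raw tape files would not be enough):

```shell
# Build a tar archive, cut it into fixed-size pieces, reassemble the
# pieces in order, and verify the rejoined archive still lists cleanly.
printf 'part demo\n' > f.txt
tar -cf whole.tar f.txt
split -b 512 whole.tar part.    # pieces: part.aa, part.ab, ...
cat part.* > rejoined.tar       # shell globbing keeps them in order
tar -tf rejoined.tar            # prints: f.txt
```

This is also why a single corrupt or truncated part makes every later part unreadable to tar: the stream is one archive, not one archive per tape.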

On Wed, May 05, 2010 at 10:16:50AM -0500, Dustin J. Mitchell wrote:
 On Wed, May 5, 2010 at 3:55 AM, Mark Adams m...@campbell-lange.net wrote:
  Why would the file not be in the archive if it is showing in the index?
  Is there anything else I can try before I try to retrieve the whole DLE?
 
 So this is a new dump?  No, I can't see any reason it would do that.
 Hopefully retrieving the entire DLE will offer some clues (and get you
 access to your data).
 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com



Re: Issue with amrecover from 2nd tape

2010-05-05 Thread Mark Adams
Hi All,

I've run this backup again with smaller chunks (40Gb, not that this
should make a difference?) and it is now saying it can't find anything I
try to recover in the archive.

tar: ./path/here/IMG_4254.jpg: Not found in archive
tar: Error exit delayed from previous errors

Why would the file not be in the archive if it is showing in the index?
Is there anything else I can try before I try to retrieve the whole DLE?

On Wed, Apr 28, 2010 at 02:57:32PM +0100, Mark Adams wrote:
 On Fri, Apr 23, 2010 at 10:26:52AM -0700, Dustin J. Mitchell wrote:
  On Fri, Apr 23, 2010 at 9:59 AM, Mark Adams m...@campbell-lange.net wrote:
   I've determined that when I try to retrieve files from the 2nd tape in
   my set it won't work. Tar crashes out with the helpful error due to 
   previous
   errors -- no errors show before this apart from tar skipping to the
   next tape. Retrieves from the first tape work fine.
  
  Hmm, that's odd, because as far as tar is concerned, there aren't
  separate tapes - just a single datastream.  Even when recovering a
  file only on the first tape, amrecover will still read the entire
  dumpfile (and thus require the second tape).  I assume that you've
  seen files recovered successfully before you swapped to the second
  tape.  In that case, did you kill amrecover after getting the desired
  files?  If not, did it finish reading the second tape without error?
 
 Yes once I've seen the files restored I've always killed amrecover.
  
  I would recommend using 'amfetchdump' for the same dumpfile, and
  examining the resulting tarfile.
 
 I assume this will require space the size of the DLE? (in my case 2T)
 
  
  Dustin
  
  -- 
  Open Source Storage Engineer
  http://www.zmanda.com

-- 
Mark Adams
Technical Manager
m...@campbell-lange.net
.
Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928


Re: Issue with amrecover from 2nd tape

2010-04-28 Thread Mark Adams
On Fri, Apr 23, 2010 at 10:26:52AM -0700, Dustin J. Mitchell wrote:
 On Fri, Apr 23, 2010 at 9:59 AM, Mark Adams m...@campbell-lange.net wrote:
  I've determined that when I try to retrieve files from the 2nd tape in
  my set it won't work. Tar crashes out with the helpful error due to 
  previous
  errors -- no errors show before this apart from tar skipping to the
  next tape. Retrieves from the first tape work fine.
 
 Hmm, that's odd, because as far as tar is concerned, there aren't
 separate tapes - just a single datastream.  Even when recovering a
 file only on the first tape, amrecover will still read the entire
 dumpfile (and thus require the second tape).  I assume that you've
 seen files recovered successfully before you swapped to the second
 tape.  In that case, did you kill amrecover after getting the desired
 files?  If not, did it finish reading the second tape without error?

Yes once I've seen the files restored I've always killed amrecover.
 
 I would recommend using 'amfetchdump' for the same dumpfile, and
 examining the resulting tarfile.

I assume this will require space the size of the DLE? (in my case 2T)

 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com


Re: Issue with amrecover from 2nd tape

2010-04-23 Thread Mark Adams
Hi Guys,

Sorry for the stupid questions, but I've no idea how to identify
*which* file needs to be selected in amrecover in order for it to
restore from the start of the 10th chunk.

Or should I be trying to recover this some other way? using tar or
amrestore?

Cheers
Mark

On Thu, Apr 22, 2010 at 12:28:32PM -0700, Paul Yeatman wrote:
 On Thu, 2010-04-22 at 12:00 -0700, Paul Yeatman wrote:
  On Thu, 2010-04-22 at 09:07 -0700, Dustin J. Mitchell wrote:
   On Thu, Apr 22, 2010 at 6:49 AM, Mark Adams m...@campbell-lange.net 
   wrote:
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-02 1 10/-1 OK
   
   This is part 10 of the dump, at filemark 1 on UPSNAPSHOT-02.  is that
   what you were looking for?
  
  Thus the first file on the second tape, file number=0.
  
  Are we understanding the question?
  
  Paul
 
 Mark, sorry.  It is actually file number=1 or filemark 1 as Dustin
 said.  0 is the tape label.
 
 Always trust Dustin first!
 
 Paul
 



Re: Issue with amrecover from 2nd tape

2010-04-23 Thread Mark Adams
I guess this is a reason not to snip out bits from the thread history!

I've determined that when I try to retrieve files from the 2nd tape in
my set it won't work. Tar crashes out with the helpful error "Error exit
delayed from previous errors" -- no errors show before this apart from tar skipping to the
next tape. Retrieves from the first tape work fine.

Paul suggested I try to retrieve a file from the *very* beginning of the
2nd tape, I'm trying to find out how to do that.

Cheers,
Mark

On Fri, Apr 23, 2010 at 08:31:57AM -0700, Dustin J. Mitchell wrote:
 On Fri, Apr 23, 2010 at 3:29 AM, Mark Adams m...@campbell-lange.net wrote:
  Sorry for the stupid questions, but I've no idea how to identify
  *which* file needs to be selected in amrecover in order for it to
  restore from the start of the 10th chunk.
 
 I had been wondering why you were so concerned with which part (chunks
 are different) was on which tape.
 
 What is the task you're trying to accomplish here?
 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com



Re: Issue with amrecover from 2nd tape

2010-04-22 Thread Mark Adams
Hi Paul,

  
  How can I tell what file will be on the start of the 2nd tape?
 
 
 You can run amadmin with the find option.  This will show which parts of
 the backup are on which tapes.  Reading the 32k block at the beginning
 of each tape file should agree with this information.
 
  

I have tried this and get the following:

2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-01   1   1/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-01   2   2/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-01   3   3/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-01   4   4/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-01   5   5/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-01   6   6/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-01   7   7/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-01   8   8/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-01   9   9/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-01  10  10/-1 PARTIAL
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-02   1  10/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-02   2  11/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-02   3  12/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-02   4  13/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-02   5  14/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-02   6  15/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-02   7  16/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-02   8  17/-1 OK
2010-04-09 16:55:11 localhost /upbackup  0 UPSNAPSHOT-02   9  18/-1 OK

How can I find out what File to retrieve at the start of the 2nd tape?

Regards,
Mark
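[For what it's worth, the question can also be answered mechanically from the amadmin find output. A rough sketch in Python -- it assumes each line ends with whitespace-separated label, tape file number, part/total, and status columns, which may mean splitting the label and file-number fields apart first:]

```python
import re

def first_part_per_tape(lines):
    """From 'amadmin <config> find' output, map each tape label to the
    (tape file number, part number) of the earliest file on that tape."""
    first = {}
    for line in lines:
        # expected tail of each line: <label> <file> <part>/<nparts> <status>
        m = re.search(r"(\S+)\s+(\d+)\s+(\d+)/(-?\d+)\s+\S+\s*$", line)
        if not m:
            continue
        label = m.group(1)
        tapefile, part = int(m.group(2)), int(m.group(3))
        if label not in first or tapefile < first[label][0]:
            first[label] = (tapefile, part)
    return first
```

[On the listing in this thread, the entry for UPSNAPSHOT-02 would come back as tape file 1 holding part 10 -- the same answer Dustin and Paul give above.]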



Re: Issue with amrecover from 2nd tape

2010-04-16 Thread Mark Adams
Hi, Thanks for your response. Please see my questions inline

On Thu, Apr 15, 2010 at 10:37:29AM -0700, Paul Yeatman wrote:
 Hi!
 
 On Thu, 2010-04-15 at 10:18 +0100, Mark Adams wrote:
  Hi All,
  
  Debian lenny, Amanda 2.6.1p1-2
  
  I'm backing up a single 1.9T xfs filesystem to an LTO4 drive with
  changer. The backup seems to complete correctly and recovering files
  that are on the first tape complete correctly.
  
  However, when trying to retrieve any files that are on the 2nd tape, the
  tape is loaded correctly by the changer (after first spooling through
  the first tape; as a side note, is there a way to skip this?), then after
  some time the following error is shown
 
 You would not be able to skip the first tape if this is where the backup
 image begins.

Ok. Doesn't it help that the dump is split into 80G chunks?
 
  tar: Error exit delayed from previous errors
  Extractor child exited with status 2
  
  How can I find out what the previous errors were?
 
 You need to look in the debug log for the backup application you are
 using on the client in /var/log/amanda/client/config/.

I don't have any amgtar logs, just the amrecover logs - which have the
security_stream_close messages I noted below. Is there some
additional logging I can add?

 
 For amgtar, it will be amgtar.datestamp.debug
 
 Cheers,
 Paul
 
  
  In the amrecover log I also see
  
  1271275442.543746: amrecover: security_stream_close(0xd89180)
  1271275520.583556: amrecover: security_stream_close(0xd99f30)
  
  My tape and dump config is as follows
  
  define tapetype LTO4 {
  comment "HP LTO4"
  length 804191104 kbytes
  filemark 0 kbytes
  speed 90511 kps
  blocksize 32 kbytes
  }
  
  define dumptype upbackup {
  program GNUTAR
  compress server custom
  server_custom_compress /usr/bin/pigz
  tape_splitsize 80Gb
  split_diskbuffer /tapehold/
  fallback_splitsize 10Gb
  comment "UPBACKUP"
  index
  priority high
  auth bsd
  }
  
  Help would be very much appreciated!
  
  Cheers,
  Mark
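
[As a cross-check, the split arithmetic in this config is consistent with the find listing earlier in the thread, where part 10 comes out PARTIAL on the first tape: nine full 80G parts fit on an LTO4 tape of the stated length and the tenth is cut short. A quick sketch, assuming Amanda's "Gb" multiplier means 1024*1024 kbytes:]

```python
# Values copied from the tapetype/dumptype definitions above.
tape_kb = 804191104          # LTO4 "length" in kbytes
part_kb = 80 * 1024 * 1024   # tape_splitsize 80Gb, in kbytes

full_parts = tape_kb // part_kb               # parts that fit entirely
leftover_kb = tape_kb - full_parts * part_kb  # space for the partial part

print(full_parts)    # 9 full parts per tape
print(leftover_kb)   # the 10th part starts here and spans onto the next tape
```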