>... Thus amrecover is good, but somehow
>things didn't get backed up.
Nicely diagnosed.
>But I think I've found the problem. The last file in the index is an 11GB
>file. The backup server, if you recall, is Linux without the large
>filesystem patches. My guess is that it all just died when it tried to get
>through 11GB. A quick browse through the samba mailinglist says that large
>filesystem support (lfs) isn't enabled by default in the latest 2.0.7
>release for ostype "linux", even if you have the 2.4 kernel or the 2.2
>patches. Anyone else run into this problem?
I don't think this would be the problem. LFS would only come into play
if the file went to disk. The only way it would go to disk is in the
holding disk, and you can set the chunksize in amanda.conf for that.
Also, Amanda would have whined mightily if you hit this limit.
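For reference, the chunksize setting lives in the holdingdisk block of
amanda.conf. A sketch (the disk name, directory and sizes here are made
up; adjust to your setup):

```
holdingdisk hd1 {
    directory "/dumps/amanda"   # where images are staged before tape
    use 8 Gb                    # space Amanda may use on this disk
    chunksize 1 Gb              # split images into 1 GB chunks so no
                                # single file exceeds the OS file limit
}
```

With chunksize set below 2 GB, even an 11 GB image never produces a
file the non-LFS filesystem can't hold.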
I think the problem is that Samba (or SMB, or whatever) cannot handle
large files itself, regardless of OS. I seem to recall other people
having trouble with this, but I may be mistaken.
See if you can run just the smbclient part by itself, piped straight
through so the image never hits the disk:
smbclient '\\pc\share' -U <user> -E -d0 -Tqc - <file> \
| dd bs=1k \
| tar tvf -
The dd will tell you how many blocks it processed, which, as long as
they are all full, is the image size in KBytes. You should also see a
size message from smbclient and from the "tar t". You might also try
this without the "<file>" arg so the whole share is dumped, which is
what Amanda asks for. And you might play with different debug levels
and the 'q' option to see if you can get it to complain at all.
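To turn dd's record count into a size in KBytes, you can scrape its
stderr. A sketch (here /dev/zero stands in for the smbclient pipeline
so the arithmetic can be checked on its own; substitute the real
command):

```shell
#!/bin/sh
# dd reports "N+M records out" on stderr; with bs=1k and all records
# full (M = 0), N is the image size in KBytes.
# 2>&1 >/dev/null routes dd's stderr into the pipe and discards stdout.
blocks=$(dd if=/dev/zero bs=1k count=5 2>&1 >/dev/null \
         | awk -F'+' '/records out/ {print $1}')
echo "image size: ${blocks} KBytes"
```

If dd reports a nonzero partial-record count (the "+M" part), the last
block was short and the total is not an exact KByte multiple.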
John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]