* James Lamanna [EMAIL PROTECTED] [20060522 21:52]:
Hi.
I have a couple of XFS + TAR partitions that I'm backing up onto tape.
There are no errors thrown when backing up, however whenever I try to
amrestore one of the XFS partition dump files I get the following:
# amrestore /dev/nst0
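(For reference, a full manual restore of such a dump typically positions the tape first and then pipes the image into tar. The device name is from the message above; the host and disk names below are hypothetical examples, not from the original thread. This requires a real tape drive, so it is a command sketch only.)

```
# Rewind, skip past the Amanda tape label, then restore the next image.
mt -f /dev/nst0 rewind
mt -f /dev/nst0 fsf 1
amrestore -p /dev/nst0 myhost /data | tar -xpf -
```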
Gordon J. Mills III wrote:
It did work in 2.4.5. I have 2 other servers that are running 2.4.5
successfully (as this one was before yesterday). I used this one as a test
since it is not in production yet, but will be soon.
I compared the versions of the script and they are identical. It is the
Hi all, I have a 1 TB RAID5 array which I inherited. The previous admin configured it to be a single file system as well. The disklist I have set up currently splits this file system up into multiple DLEs for backup purposes and dumps them using gtar.
In the past, on systems with multiple
Paul Lussier wrote:
Hi all,
I have a 1 TB RAID5 array which I inherited. The previous admin
configured it to be a single file system as well. The disklist I have
set up currently splits this file system up into multiple DLEs for
backup purposes and dumps them using gtar.
In the past, on
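The split-DLE approach described above is usually expressed with gtar include lists in the disklist; the following is a minimal sketch, where the hostname, paths, and dumptype name are assumed for illustration and not taken from the message:

```
# disklist -- one filesystem split into several DLEs via gtar include lists.
# "bigserver", "/raid", and "user-tar" are hypothetical names.
bigserver /raid-home /raid {
    user-tar
    include "./home"
}
bigserver /raid-proj /raid {
    user-tar
    include "./proj"
}
```

Each DLE then dumps and schedules independently, even though they share one underlying file system.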
On 5/23/06, Andreas Hallmann [EMAIL PROTECTED] wrote:
Since in the RAID, blocks are spread sequentially (w.r.t. the file) among most (RAID5) of the available platters, it will behave more like a single spindle with more layers. So, my question is this: Am I doing the right thing by dumping these DLEs
On 5/23/06, Jean-Francois Malouin
[EMAIL PROTECTED] wrote:
* James Lamanna [EMAIL PROTECTED] [20060522 21:52]:
Hi.
I have a couple of XFS + TAR partitions that I'm backing up onto tape.
There are no errors thrown when backing up, however whenever I try to
amrestore one of the XFS partition
On Tue, May 23, 2006 at 10:39:09AM -0400, Paul Lussier wrote:
On 5/23/06, Andreas Hallmann [EMAIL PROTECTED] wrote:
Since in the RAID, blocks are spread sequentially (w.r.t. the file) among
most (RAID5) of the available platters, it will behave more like a single
spindle with more layers.
So,
On Tue, May 23, 2006 at 11:04:44AM -0400, Jon LaBadie wrote:
Recently I had a look at amplot results for my new vtape setup.
One thing it showed was that for 2/3 of the time, only one of the
default four dumpers was active.
This is a good point. amplot is awesome for checking out what kind of
Hello list,
I installed a new holding disk in my system and copied all the files
from the old holding disks to the new one.
When I start an amflush now to flush them to tape, Amanda gives me
the following results:
snip ---
HOSTNAME DISKL
2006/5/23, Karsten Fuhrmann [EMAIL PROTECTED]:
Hello list, I installed a new holding disk in my system and copied all the files from the old holding disks to the new one. Did you mount the new holding disk at the same mount point as the previous one?
All these 'NO FILE TO FLUSH' lines are holding
On Tue, May 23, 2006 at 12:29:13PM -0400, Guy Dallaire wrote:
2006/5/23, Karsten Fuhrmann [EMAIL PROTECTED]:
Hello list,
I installed a new holding disk in my system and copied all the files
from the old holding disks to the new one.
Did you mount the new holding disk at the same mount
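The 'NO FILE TO FLUSH' symptom is consistent with Amanda looking for holding files under the directory named in amanda.conf rather than wherever they were copied. A holdingdisk definition looks roughly like the sketch below; the path is a hypothetical example:

```
# amanda.conf -- holding disk definition (sketch).
holdingdisk hd1 {
    comment "new holding disk"
    directory "/dumps/amanda"   # must match where the dump files now live
    use -100 Mb                 # use all free space except 100 MB
}
```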
Does anyone have the tape type for the LTO3 (Quantum)?
Are there any other parameters I should tweak to get better
performance/utilization?
Is this still a reasonable default?
tapebufs 20
I am running the StorEdge C2 jukebox with lto3 drive on a SunFire 280R
under Solaris 9 with 4 gig of
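For reference, LTO-3 is specified at 400 GB native capacity and roughly 80 MB/s native transfer, so a starting tapetype might look like the sketch below. These values are assumed from the LTO-3 spec, not measured; run amtapetype against the drive to get real numbers:

```
# Sketch only -- values taken from the LTO-3 spec, not measured.
define tapetype LTO3 {
    comment "Quantum LTO-3, unmeasured"
    length 400 gbytes
    filemark 0 kbytes
    speed 80000 kbytes   # ~80 MB/s native
}
```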
Brian Cuttler wrote:
Does anyone have the tape type for the LTO3 (Quantum)?
Are there any other parameters I should tweak to get better
performance/utilization?
Is this still a reasonable default?
tapebufs 20
I am running the StorEdge C2 jukebox with lto3 drive on a SunFire 280R
under
I'm running vtapes on a new server. The vtapes
are split across two external disk drives.
Realizing that some tapes would not fill completely,
I decided that rather than define the tape size to be
exactly disk/N, I would add a fudge factor to the size.
Things worked exactly as anticipated. The
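The "disk/N plus a fudge factor" sizing described above can be sketched with a little shell arithmetic; the disk size, vtape count, and fudge percentage are hypothetical example values:

```shell
#!/bin/sh
# Size each vtape at disk/N plus a fudge factor, on the assumption that
# some vtapes will not fill completely. All numbers are example values.
DISK_KB=$((500 * 1024 * 1024))   # 500 GB of vtape disk, in kbytes
NTAPES=25                        # number of vtapes
FUDGE_PCT=10                     # oversubscribe each vtape by 10%

BASE=$((DISK_KB / NTAPES))
LENGTH=$((BASE + BASE * FUDGE_PCT / 100))
echo "tapetype length: $LENGTH kbytes"
```

The trade-off, as the rest of the thread notes, is that if every vtape actually fills, the disk runs out before the tape list does.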
On Tue, 23 May 2006 at 3:55pm, Brian Cuttler wrote
Does anyone have the tape type for the LTO3 (Quantum)?
This is what I use:
define tapetype LTO3comp {
# All values guesswork :) jlb, 8/31/05
# except blocksize ;) jlb, 9/15/05
length 42 mbytes
blocksize
On Tue, May 23, 2006 at 01:22:02PM -0700, Pavel Pragin wrote:
Brian Cuttler wrote:
Does anyone have the tape type for the LTO3 (Quantum)?
Are there any other parameters I should tweak to get better
performance/utilization?
Is this still a reasonable default?
tapebufs 20
I am
Jon LaBadie wrote:
On Tue, May 23, 2006 at 01:22:02PM -0700, Pavel Pragin wrote:
Brian Cuttler wrote:
Does anyone have the tape type for the LTO3 (Quantum)?
Are there any other parameters I should tweak to get better
performance/utilization?
Is this still a reasonable default?
Jon,
There is no good short-term solution to this problem. Sorry. :-( Tape spanning
helps, but is not a panacea.
This is one of the limitations of the vtape API that I was talking about -- it
tries to reimplement tape semantics on a filesystem, even when that doesn't
make sense.
When the
On Tue, May 23, 2006 at 04:28:31PM -0400, Jon LaBadie wrote:
But running out of disk space caused me to look more
closely at the situation and I realized that the failed
taping is left on the disk. This of course mimics what
happens on physical tape. However with the file:driver
if this