when I had problems here, I also did not see hard errors w/ application code,
just slow performance (70MB/s or so). When I ran the manufacturer's drive
test code it did show internal issues (compression/crc/etc not within
manufacturer specs).
Not saying that this is your problem at all
Would also verify your drive. I've had issues with LTO drives that just flake
out over time and required replacement from the vendors in the past. Most
vendors supply a test utility.
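If you don't have the vendor utility handy (HP Library & Tape Tools and IBM ITDT
are the usual ones), generic tools give a rough first look; a minimal sketch,
assuming sg3_utils and mtx are installed (device paths are assumptions):

   # dump the drive's error/TapeAlert log pages
   sg_logs -a /dev/st0
   # report drive/media status via the SCSI generic interface
   tapeinfo -f /dev/sg3

The vendor tool remains authoritative for pass/fail against spec; these just
surface the counters the drive keeps internally.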
As for the block size limit increase, that's not correct. Even though it was
stated in the change logs tha
>From: Martin Simmons
>> On Fri, 13 Apr 2012 09:13:31 +0000, Steve Costaras said:
>> It's falling through down to 64512 when I have anything set larger than
>> 2097152 for maximum block size.
>
>Yes, but it looks like this fails to report the error on startu
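For reference, the block size is set per Device in bacula-sd.conf; a minimal
sketch, assuming an LTO-4 drive on /dev/nst0 and the 2097152 ceiling discussed
above:

   Device {
     Name = LTO4
     Media Type = LTO-4
     Archive Device = /dev/nst0
     # per this thread, values above 2097152 silently fell back to 64512
     Maximum Block Size = 2097152
   }

After changing it, use fresh volumes and re-run btape's test command, since
mixing block sizes on existing volumes is a known source of read errors.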
--
which is smaller than what I'm setting by 18 bytes? I don't get that at all.
Looks like there may also be a file system prefetch caching issue on top of
this as even with the above larger block size I'm still only getting 50MB/s
(this is now on a raid-0 of 16 2TB drives
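One thing worth checking for the prefetch side; a minimal sketch, assuming the
raid-0 is an md device at /dev/md0 (the path and values are assumptions):

   # read-ahead is in 512-byte sectors; 256 (128KB) is a common default
   blockdev --getra /dev/md0
   # bump it, e.g. to 32MB, and re-test sequential reads
   blockdev --setra 65536 /dev/md0

Whether this helps depends on whether the despool reads are actually
sequential at the block layer.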
In continuing testing here I am still unable to get full speed even when using
the 96-drive array (16x 6-drive raidz2 vdevs). Which is just insane. Also
tried a raid-0 of 16 drives as a spool array.
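To separate the spool array from the tape path, a minimal sketch, assuming the
spool lives at /spool (path and sizes are assumptions):

   # write test, bypassing the page cache so the array itself is measured
   dd if=/dev/zero of=/spool/test.dat bs=1M count=10240 oflag=direct
   # read test of the same file
   dd if=/spool/test.dat of=/dev/null bs=1M iflag=direct

If the raw array reads well above the despool rate, the bottleneck is on the
drive/blocking side rather than the disks.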
Now the last volume I just tried ended with this:
2012-04-12 19SD-loki JobId 2: End of Vol
>-Original Message-
>From: Steve Ellis [mailto:el...@brouhaha.com]
>Your hardware is _much_ better than mine (spool in non-RAID, no SSD,
>LTO3, 4GB RAM, etc), yet I get tape throughput not a lot lower. At the
>despool rates you are seeing, I'm guessing you may be shoeshining the
>tape on
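One way to check for shoeshining directly is btape's speed command (present in
the 5.x series); a minimal sketch, assuming the stock config path:

   btape -c /etc/bacula/bacula-sd.conf /dev/nst0
   *speed

That writes test data straight to the drive and reports MB/s without the
spool/despool path in the way; if btape gets full native speed, the problem
is upstream of the drive.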
I'm running Bacula 5.2.6 under Ubuntu 10.04 LTS. This is a pretty simple setup,
just backing up the same server that Bacula is on, as it's the main fileserver.
For some background: the main fileserver array is comprised of 96 2TB drives
in a raid-60 (16 raidz2 vdevs of 6 drives per group). I ha
seems to be a common misconception, or I'm /much/ luckier than I should be, as I
routinely run jobs that last over 15-20 days with zero problems (besides them
taking 15-20 days. ;) ). I've been doing this for a couple of years now, end on
end, with different deployments of bacula (mainly linux/ubuntu
I'm running 5.0.3 and don't see this 6-day limit for jobs and do not have max
run time set in the config files. Pretty much all of my full backup jobs run
into the 15-30 day range due to the sheer size of the backup and the constant
pause/flushing of the spool.
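For what it's worth, the relevant Job directive when you do want a cap; a
minimal sketch (the job name is an assumption, the directive is standard):

   Job {
     Name = "BigFull"
     # ... usual Client/FileSet/Storage/Pool lines ...
     # unset by default; accepts duration strings
     Max Run Time = 30 days
   }

Left unset, the director imposes no fixed run-time limit, which matches the
15-30 day jobs described above.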
I would think you're running int
On 2011-07-15 07:28, Paul Mather wrote:
On Jul 14, 2011, at 7:56 PM, Ken Mandelberg wrote:
Under Legato the license restriction artificially keep the "file-device"
small relative to the tape storage. However, these days disks are
cheaper than tapes and license free we could afford a lot of di
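A disk-based Device in bacula-sd.conf is the standard way to do that; a
minimal sketch following the usual documented pattern (paths are assumptions):

   Device {
     Name = FileStorage
     Media Type = File
     Archive Device = /backup/bacula
     LabelMedia = yes
     Random Access = yes
     AutomaticMount = yes
     RemovableMedia = no
     AlwaysOpen = no
   }

Each volume then becomes a file under /backup/bacula, sized and retained via
the Pool as usual.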
On 2011-07-12 05:38, Martin Simmons wrote:
> Yes, that looks mostly normal.
>
> I would report that log output as a bug at bugs.bacula.org.
>
> I'm a little surprised that it specifically asked for the volume named FA0016
> though:
>
>2011-07-10 03SD-loki JobId 6: Please mount Volume "FA0016"
On 2011-07-11 06:13, Martin Simmons wrote:
>>>>>> On Sun, 10 Jul 2011 12:17:55 +0000, Steve Costaras said:
>> Importance: Normal
>> Sensitivity: Normal
>>
>> I am trying a full backup/multi-job to a single client and all was going
>> well un
-Original Message-
>I suggest running smaller jobs. I don't mean to sound trite, but that really is
>the solution. Given that the alternative is non-trivial, the sensible choice
>is, I'm afraid, cancel the job.
I'm already kicking off 20+ jobs for a single system. This does not
> Just had a quick look... the "read-only" message is this in stored/block.c:
>
>   if (!dev->can_append()) {
>      dev->dev_errno = EIO;
>      Jmsg1(jcr, M_FATAL, 0, _("Attempt to write on read-only Volume. dev=%s\n"),
>            dev->print_name());
>      return false;
>   }
>
> And can_append() is:
>
> int can_append()
no idea, if we can find out what triggered the original message. Without doing
anything physical, I did an umount storage=LTO4 from bacula and then went and
did a full btape rawfill without a single problem on the volume:
*status
Bacula status: file=0 block=1
Device status: ONLINE IM_REP_EN f
-Original Message-
From: Dan Langille [mailto:d...@langille.org]
Sent: Sunday, July 10, 2011 12:58 PM
To: stev...@chaven.com
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Catastrophic error. Cannot write overflow block to
device "LTO4"
>>
>> 2) since everything is sp
I am trying a full backup/multi-job to a single client and all was going well
until this morning when I received the error below. All other jobs were also
canceled.
My question is twofold:
1) What the heck is this error? I can unmount the drive, issue a rawfill to
the tape w/ btape an
Initial thoughts on this would be one of two ways (or both):
All in the fileset resource:
As a fileset option something like:
StreamPerFS
which would kick off a stream for every FS in the fileset. More of an
'automated' method to improve performance for those who don't want to manually
tune
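To make the proposal concrete, a hypothetical sketch of what that might look
like (StreamPerFS is the proposed directive, not an existing Bacula option):

   FileSet {
     Name = "multi-fs"
     Include {
       Options {
         signature = MD5
         # proposed: spawn one backup stream per filesystem found below
         StreamPerFS = yes
       }
       File = /
     }
   }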
Found a solution to why multi-volume was not working correctly (don't know what
the problem was), but I had to re-create the database, and once I re-did
that/recreated the tape pool it now works with multi-jobs using the same
tape. Go figure.
As for your comment here with multi-streaming, YES
t_mldonkey"
Ignore FileSet Changes = yes
Include {
  Options {
    accurate=mcs5
    checkfilechanges=yes
    hardlinks=yes
    noatime=yes
    onefs=yes
    recurse=yes
    signature=MD5
    sparse=yes
    verify=pns5
  }
  File = /opt/mldonkey
}
}
--
I think you're going to have to do a
lot of different configurations and test which ones work best for your
design parameters (i.e. questions like "How long can I go w/o a full
backup" and "How long can I stand a complete disaster recovery restore
taking").
From: "Ste
On 2011-06-28 10:01, Josh Fisher wrote:
On 6/27/2011 8:43 PM, Steve Costaras wrote:
- How to stream a single job to multiple tape drives. Couldn't
figure this out, so only one tape drive is being used.
There are hardware RAIT controllers available from Ultera
(http://www.ul
How would the various parts communicate if you're running multiple
instances on different ports? I would think just creating multiple
jobs would create multiple socket streams and do the same thing.
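If one did run multiple SD instances, each would need its own config with a
distinct port; a minimal sketch, assuming a second config file (the port
number and paths are assumptions):

   # in bacula-sd-2.conf
   Storage {
     Name = loki-sd2
     SDPort = 9104                 # the default instance keeps 9103
     WorkingDirectory = /var/bacula/working2
     Pid Directory = /var/run
   }

   # started as:  bacula-sd -c /etc/bacula/bacula-sd-2.conf

But as noted, raising Maximum Concurrent Jobs on the one SD and running
multiple jobs gets you multiple streams without any of that.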
On 2011-06-28 02:09, Christian Manal wrote:
- File daemon is single threaded so
I have been using Bacula for over a year now and it has been providing
'passable' service, though I think since day one I have been stretching it to
its limits or need a paradigm shift in how I am configuring it.
Basically, I have a single server which has direct-attached disk (~128TB / 112
dri
I've done restores of ~50TB (~3,500,000 files) with v5.0.3 under ubuntu
10.04LTS against sqlite3 databases here with no problems (taking minutes to
create the tree and to do a mark *). I'm running on a dual cpu X5680 system w/
24GB ram if that helps with a data point.
-Original Message-
To: 'Steve Costaras', stev...@chaven.com, bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Long running restore canceled by tape error? Any
way to continue?
Hello,
2011/6/11 John Drescher
On Sat, Jun 11, 2011 at 12:45 PM, Steve Costaras wrote:
> Hmm. Unfortunately it's not just a few tapes,
c: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Long running restore canceled by tape error? Any
way to continue?
2011/6/11 Steve Costaras :
>
> I'm running bacula 5.0.3 under ubuntu 10.04 & lto4 tapes. Have a restore
> going that is ~8 days long (~50TB), on the second to las
I'm running bacula 5.0.3 under ubuntu 10.04 & lto4 tapes. Have a restore going
that is ~8 days long (~50TB); on the second to last day I got an I/O error on
one of the tapes/one of the files, and instead of continuing the restore/skipping
that file it canceled the entire restore process.
two iss
Not really; RAID6+0 only requires 8 drives minimum: you can create two
RAID6s of 4 drives each and stripe them together. This has a benefit,
as multi-layer parity RAID increases random write IOPS
performance. But the main issue is array integrity, mainly with
large capacity drives
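In mdadm terms, a minimal sketch of that 8-drive RAID6+0 (device names are
assumptions):

   # two 4-drive RAID6 sets (2 data + 2 parity each)
   mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[abcd]
   mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[efgh]
   # striped together into the RAID6+0
   mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

Writes then split across two independent parity groups, which is where the
random-write IOPS gain comes from.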
I have Quantum stand-alone LTO4 drives that I'm using w/ Bacula 3.0.3
under linux. I would like to enable the hardware encryption on the
drives (does not have to be fancy, even the same encryption key for
/all/ tapes would be much better than nothing).
I saw in the lists that this was disc
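On Linux the usual non-vendor route is stenc (the SCSI tape encryption
manager); a rough sketch from memory, so treat the exact flags as assumptions
and check stenc's man page (device and key paths are assumptions too):

   # show the drive's current encryption status
   stenc -f /dev/nst0 --detail
   # turn encryption on with a key read from a file
   stenc -f /dev/nst0 -e on -k /etc/bacula/tape.key

Bacula itself doesn't manage the drive's hardware keys, so this sits outside
the SD; the same key must be loaded before any restore of those tapes.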
A little misquoted there:
On 2010-08-30 02:59, Henrik Johansen wrote:
>> On Aug 28, 2010, at 7:12 AM, Steve Costaras wrote:
>>
>> Could be due to a transient error (transmission or wild/torn read at
>> time of calculation). I see this a lot with integrity checking of
This is still speculation,
though with a good probability of being what's happening, if the md5's are fine
on manual check.
Steve
On 2010-08-28 10:52, Paul Mather wrote:
On Aug 28, 2010, at 7:12 AM, Steve Costaras wrote:
Could be due to a transient error (transmission or wild/torn read at
time of c
Could be due to a transient error (transmission or wild/torn read at
time of calculation). I see this a lot with integrity checking of
files here (50TiB of storage).
The only way to get around this now is to do a known-good sha1/md5 hash of the
data (2-3 reads of the file to make sure that they all ma
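A minimal sketch of that multi-read check, assuming GNU coreutils and root for
the cache drop (the file path is an assumption):

   md5sum /data/big.img
   # drop the page cache so the second read actually hits the disks
   echo 3 > /proc/sys/vm/drop_caches
   md5sum /data/big.img

If the two sums differ, the mismatch was a transient/wild read rather than
corrupt data at rest.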
. I then can compare those md5's with the ones in the
catalogue and restore just the corrupted files.
So I take it that there is no way to do this except a manual process at
this point?
steve
On 7/21/2010 2:06 PM, Martin Simmons wrote:
On Wed, 21 Jul 2010 05:51:37 -0500, Steve Costaras s
I am running v5.0.2 against some clients with many files (5-10
million). I have, and want, the fileset option accurate=mcs5 (md5
checks) when I do a full backup of the clients. However I DO NOT want
to do an md5 check when I do differentials/incrementals, due to the fact
that it takes day
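For reference, each letter of that option enables one comparison (these are
the documented accurate= flags):

   Options {
     # m=mtime, c=ctime, s=size, 5=MD5 digest
     accurate = mcs5
   }

A hypothetical way to skip the digest on non-full runs would be a second
job/fileset using accurate=mcs, though changing a FileSet normally forces a
full unless Ignore FileSet Changes = yes is set (as in the config quoted
elsewhere in this thread).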
What is your blocking factor on the drive? Assuming the hardware is OK, it
sounds like you do not have your drive configured properly for variable
block sizes.
Issue (the mt binary ships in the mt-st package):
mt -f /dev/st0 setblk 0
mt -f /dev/st0 defblksize 0
mt -f /dev/st0 defcompression 0
mt -f /dev/st0 compression 0
(o
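To confirm the drive actually took the settings, a minimal sketch (the device
path is an assumption):

   # "Tape block size 0 bytes" in the output means variable block mode
   mt -f /dev/st0 status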
I'm not saying that every tape is bad. I have over a hundred that are
working just fine. It seems just the recent ones (ones purchased in the
past 1-2 months, about 20 tapes) are kind of flaky. When I hit the old
batch of tapes they work just fine. So far only 2 tapes have produced
media
Ok, I'm on 5.0.1, and am doing a differential backup w/ LTO4 drives
which takes a good 1-2 days. Backups proceed fine until I hit a tape
with an I/O error on it (bad media).
so like this:
--
61 | DD0011 | Full | 1 | 476322401280 | 100 | 3024000 | 1 | 10
I'm working on updating my v3.0.3 to 5.0.1 code base here. I'm running under
ubuntu 8.04 LTS server (64bit) and running sqlite3. bacula is manually compiled.
The new version of code compiles fine w/ shared libraries (ran into a problem
w/ static & the tools but will look at that later). Anyway
I've been noticing that when doing file restores, bacula
(v3.0.3)/sqlite will always ask for the first tape of a backup even if
the files that you want to restore are NOT on that tape. For example
below:
Bootstrap records written to
/opt/bacula/var/bacula/working/loki-dir.restor
On 01/28/2010 06:10, Dan Langille wrote:
> Steve Costaras wrote:
>>
>> On 01/27/2010 22:37, Phil Stracchino wrote:
>>> On 01/27/10 22:32, Steve Costaras wrote:
>>>> On 01/26/2010 22:34, Dan Langille wrote:
>>>>> Issue the run command. Use the
On 01/27/2010 22:37, Phil Stracchino wrote:
> On 01/27/10 22:32, Steve Costaras wrote:
>
>> On 01/26/2010 22:34, Dan Langille wrote:
>>
>>> Issue the run command. Use the mod option, then alter the job
>>> parameters to suit.
>>>
>>
On 01/26/2010 22:34, Dan Langille wrote:
> Steve Costaras wrote:
>>
>> On 01/26/2010 14:50, Arno Lehmann wrote:
>>
>> The different jobs were to make running ad-hoc backups of a client
>> outside of the schedule. All the backup jobs have the same cl
On 01/26/2010 14:50, Arno Lehmann wrote:
> Hi,
>
> 25.01.2010 22:51, Steve Costaras wrote:
>
>>
>> I am running into a problem here. I have had to purge a previous full
>> backup for a client machine. I then manually ran a full backup from
>> the co
I am running into a problem here. I have had to purge a previous full
backup for a client machine. I then manually ran a full backup from
the console. After that was completed I then tried to manually run an
incremental (as the full took several days to run). But if I try
either a d
Yes, unfortunately the directory names and the amount of data in them
vary dramatically (image file processing for each project), so there is
no real way to break it apart without a large chance of missing
something. Ideally I would like to have the SD multiplex the files
to different ta
Some history:
On 01/14/2010 16:24, Dan Langille wrote:
> Steve Costaras wrote:
>> On 01/14/2010 15:59, Dan Langille wrote:
>>> Steve Costaras wrote:
>>>> I see the mtimeonly flag in the fileset options but there are many
>>>> caveats about using it
On 01/14/2010 15:59, Dan Langille wrote:
> Steve Costaras wrote:
>
> > I see the mtimeonly flag in the fileset options but there are many
> > caveats about using it as you will miss other files that may have been
> > copied over that have retained mtimes from before th
mtimes from before the last backup.
Since bacula does an MD5/SHA1 hash of all files I assumed (wrongly it
seems) that it would be smart enough to not back up files that it
already had backed up and are on tape.
On 01/14/2010 15:11, Martin Simmons wrote:
On Wed, 13 Jan 2010 20:18:19 -0600, Steve
On 01/13/2010 20:36, Dan Langille wrote:
Steve Costaras wrote:
Ok, found out why when I do a restore of files bacula keeps thinking
that they are 'new' and will back them up again. Seems that bacula
changes ctime to the time of the restore of the file not the original
ctim
Ok, found out why when I do a restore of files bacula keeps thinking
that they are 'new' and will back them up again. Seems that bacula
changes ctime to the time of the restore of the file not the original
ctime. atime & mtime are properly set on the files at restore but not
ctime.
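Easy to confirm with stat; a minimal sketch (the file path is an assumption):

   stat -c 'atime=%x mtime=%y ctime=%z' /data/restored.file

atime and mtime can be set back via utimes(), but ctime is stamped by the
kernel on any inode change, so a restore necessarily gets a fresh ctime.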
I didn
-Original Message-
From: mehma sarja [mailto:mehmasa...@gmail.com]
Sent: Tuesday, January 5, 2010 11:23 PM
To: 'Steve Costaras'
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Backup/Delete/Restore & subsequent backups
Steve,
You have incremental and differen
suggesting that I would need that in order to avoid
this problem?
On 01/04/2010 03:05, Ralf Gross wrote:
Steve Costaras schrieb:
I've been diving into Bacula the past 2-3 weeks to come up with a backup
system here for some small server count but very large data store sizes
(30+TiB per server
I've been diving into Bacula the past 2-3 weeks to come up with a backup
system here for some small server count but very large data store sizes
(30+TiB per server).
In the course of my testing I have noticed something and want to know if
it's by design (in which case it would be very wastef