Hi,
It looks like I need to beef up "Maximum Block Size" to get good
performance out of our LTO-4 tape drives.
Before I do, I'd like to know what effect this has on restoring from backups
taken with the default block size. Will I
need to modify the Device configuration back to its def
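For reference, Maximum Block Size lives in the Device resource of bacula-sd.conf. A minimal sketch; the device name, path, and 256 KB value are illustrative assumptions, not recommendations (the compiled-in default block size is 64512 bytes):

```
Device {
  Name = LTO4-Drive               # hypothetical name
  Media Type = LTO-4
  Archive Device = /dev/nst0      # assumed tape device node
  # Larger blocks generally help LTO-4 stream; the default is 64512 bytes.
  Maximum Block Size = 262144     # 256 KB, illustrative value
}
```

The Storage Daemon must be restarted for a block-size change to take effect.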
On 19/07/10, Dan Langille (d...@langille.org) wrote:
> >My objective, foolishly omitted from my email, is to start from scratch
> >after the upgrade. I have now managed to do so by using the brute force
> >method to relabel the tapes as the catalogues from my tests failed to
> >follow from 2.4.4 t
On 7/19/2010 10:06 AM, Rory Campbell-Lange wrote:
> Hi Dan
>
> Thanks for your email.
>
> On 19/07/10, Dan Langille (d...@langille.org) wrote:
>> On 7/19/2010 4:21 AM, Rory Campbell-Lange wrote:
>> Oh, your tapes should still be usable after the upgrade. I think
>> you need to concentrate on th
These are new volumes to bacula. They were used previously for Veritas
NetBackup but Bacula has never seen these tapes until recently.
JJ
-Original Message-
From: John Drescher [mailto:dresche...@gmail.com]
Sent: Monday, July 19, 2010 12:45 PM
To: Jeremiah D. Jester
Cc: bacula-users
Sub
On 19/07/10 18:45, Dan Langille wrote:
> On 7/19/2010 9:51 AM, Mister IT Guru wrote:
>> On 19/07/10 12:35, Dan Langille wrote:
>>> On 7/19/2010 6:07 AM, Mister IT Guru wrote:
I am trying to find where the log entries are made on the windows
clients, so that I can find out why my backups s
On Mon, Jul 19, 2010 at 3:38 PM, Jeremiah D. Jester
wrote:
> Do you think that might be the same issue with these volumes reporting full?
> I tried setting them to 'append' and when they were loaded it simply ejected
> them and set them to 'full' again. I may have the actual messages from the
>
> On Mon, 19 Jul 2010 15:44:00 +0200, Andreas Koch said:
>
> I'll make another attempt at interpreting the data:
>
> The broken file is a hard-link (to
> /usr/share/postgresql-8.4/man/man7/with.7.bz2). The crypto_digest is only
> calculated for the regular file /usr/share/postgresql-8.4/man/m
Do you think that might be the same issue with these volumes reporting full? I
tried setting them to 'append' and when they were loaded it simply ejected them
and set them to 'full' again. I may have the actual messages from the operation
I can send you, if you like.
JJ
> *list media pool=Onsi
I have a directory in which I'm trying not to back up large files. If I
were using a find command, it would look something like "find . -size
+5M".
Being on a Linux SLES 10.3 server, what would be the proper entry in the
bacula-fd.conf file? I'm getting most regexes to work-just ha
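As far as I know, the FileSet Options in 2.x/5.x have no direct file-size test, so one common workaround is to generate the include list on the client with a pipe. A sketch, where the FileSet name and /data are placeholders:

```
FileSet {
  Name = "SmallFilesOnly"          # hypothetical name
  Include {
    Options {
      signature = MD5
    }
    # The escaped pipe (\|) runs the command on the client (FD) and
    # backs up the paths it prints: only regular files under 5 MB.
    File = "\\|find /data -type f -size -5M"
  }
}
```

Note the FileSet goes in bacula-dir.conf, not the FD's own config.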
> That worked! So I basically need to wipe it and then re-add it to Bacula
> by issuing 'label barcodes'?
Yes.
> And the problem was that it was corrupt?
>
The problem is that volume did not contain a valid volume label. It
appears that for some reason the labeling did not work when you firs
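The sequence being described would look roughly like this in bconsole; the volume name, slot, and pool are placeholders:

```
*delete volume=000014           # drop the bad catalog record (placeholder name)
*label barcodes slot=14 pool=Scratch
```

bconsole will prompt for confirmation before the delete actually runs.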
That worked! So I basically need to wipe it and then re-add it to Bacula by
issuing 'label barcodes'? And the problem was that it was corrupt?
JJ
-Original Message-
From: John Drescher [mailto:dresche...@gmail.com]
Sent: Friday, July 16, 2010 4:04 PM
To: Jeremiah D. Jester
Cc: bacula
> On Mon, 19 Jul 2010 16:06:27 +0100, Rory Campbell-Lange said:
>
> As part of our intended audit process of a backup set we'd like to
> extract a file from each tape from which a backup set is composed.
>
> Is there a way of learning which tape holds a particular file (or
> vice-versa)? I as
On 07/19/10 10:25, Mister IT Guru wrote:
> I think I'm beginning to understand a bit more now. I set up my Bacula
> installation as a test, and I expanded out from there. Once I create a new
> pool, and then migrate all the good jobs to that pool, all that good
> data is saved, but I guess if a volume h
> I think it is probably a bug.
It's not a bug; it's actually a bug fix. Numbering is no longer sequential,
in order to avoid a bug.
> I want the numbering to start at 1 and keep counting up
> Something like this:
>
> Full-0001 through Full-0009
> Inc-0001 through Inc-0007
> Diff-0001 through Diff-0010
>
> Is this a bug? Or ha
On 7/19/2010 9:51 AM, Mister IT Guru wrote:
> On 19/07/10 12:35, Dan Langille wrote:
>> On 7/19/2010 6:07 AM, Mister IT Guru wrote:
>>> I am trying to find where the log entries are made on the windows
>>> clients, so that I can find out why my backups seem to fail at around
>>> 75% - each failed b
Hi,
I'm using version 2.4.4 of Bacula.
This is the standard release used in Debian 5.0.4 Lenny.
I have tried the following example:
In CHAPTER 25. AUTOMATED DISK BACKUP
Director { # define myself
Name = bacula-dir
DIRport = 9101
QueryFile = "/home/bacula/bin/query.sql"
WorkingDirectory = "/home/bacula/
As part of our intended audit process of a backup set we'd like to
extract a file from each tape from which a backup set is composed.
Is there a way of learning which tape holds a particular file (or
vice-versa)? I assume this is how retrievals work, but it isn't that
obvious in the database schem
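One way to answer this straight from the catalog (table names as in the 2.x-5.x schema; the path and file name below are placeholders):

```
SELECT DISTINCT Media.VolumeName
  FROM File
  JOIN Path     ON File.PathId      = Path.PathId
  JOIN Filename ON File.FilenameId  = Filename.FilenameId
  JOIN JobMedia ON File.JobId       = JobMedia.JobId
  JOIN Media    ON JobMedia.MediaId = Media.MediaId
 WHERE Path.Path = '/etc/'           -- placeholder
   AND Filename.Name = 'passwd';     -- placeholder
```

To narrow it to the exact tape rather than every tape the job touched, also check that File.FileIndex falls between JobMedia.FirstIndex and JobMedia.LastIndex.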
I ran a full backup of a volume last week on Bacula v2.4.4 and I have
now upgraded to 5.0.2. I'm now getting half the throughput that I saw
last week.
Last week's test volume was a 2.1TB xfs volume and it read/wrote on
average at 70MB/s. I am now backing up a 4.4TB ext3 volume and
bconsole's status c
On 19/07/10 15:09, Phil Stracchino wrote:
> On 07/18/10 14:40, Joseph L. Casale wrote:
>
>>> I have a fair number of failed jobs in my database, and I would like to
>>> know whether it is possible to purge all the bad backups, so that all the
>>> diskspace that has been taken up can be released?
>>>
On 07/18/10 14:40, Joseph L. Casale wrote:
>> I have a fair number of failed jobs in my database, and I would like to
>> know whether it is possible to purge all the bad backups, so that all the
>> diskspace that has been taken up can be released?
>
> from bconsole, `del jobid=#`.
>
> If you have many,
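For many jobs at once, bconsole's delete accepts a comma-separated list of jobids, so the failed ones can be removed in a single command. The jobids below are placeholders:

```
*delete jobid=101,102,117
```

Deleting job records only removes catalog entries; the space on disk volumes is reclaimed when the volumes themselves are recycled or deleted.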
Hi Dan
Thanks for your email.
On 19/07/10, Dan Langille (d...@langille.org) wrote:
> On 7/19/2010 4:21 AM, Rory Campbell-Lange wrote:
> Oh, your tapes should still be usable after the upgrade. I think
> you need to concentrate on that problem first. Ignore all others.
>
> >Is the best thing
Monday morning, 3 days later and it is still restoring. It's finished about
600GB, which is less than half of what is on the two tapes. I'm pretty sure
that only one job was written to the tape at the time of backup. Also, I
thought 'bextract' would be faster than a normal restore, since I'm
On 19/07/10 12:35, Dan Langille wrote:
> On 7/19/2010 6:07 AM, Mister IT Guru wrote:
>> I am trying to find where the log entries are made on the windows
>> clients, so that I can find out why my backups seem to fail at around
>> 75% - each failed backup is taking up a significant amount of disk
>>
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
> Running the FD with debug level 400 might help to see if it generates the SHA1
> at all.
Here's the data for the broken file:
gundabad-fd: find_one.c:357-32 File :
/usr/share/postgresql-8.4/man/man7/table.7.bz2
gundabad-fd: backup.c:326-32 FT_L
Can you supply some more details on your setup/configuration?
- Is spooling implemented?
- Do you back up to tape or disk?
- Have you implemented DTDTT?
- Does the backup device have its own HBA?
- What is the load of the ESX server?
- Any errors/warnings etc. in the ESX logs?
etcetera
Olaf
PS
I would not
On 19/07/2010 12:33, Dan Langille wrote:
> You are probably missing an index. I'm pretty sure this has been discussed
> previously on this list. Sorry, I don't know which index is missing but I'm
> pretty sure it will be easy to find with a search.
I did look back over the list for a couple of mon
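The index most often suggested for slow restore-tree builds is a composite one on the File table; in MySQL syntax (the index name is arbitrary):

```
CREATE INDEX file_jpfid_idx ON File (JobId, PathId, FilenameId);
```

Creating it on a large catalog can take a while, so it is best done outside the backup window.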
On Mon, Jul 19, 2010 at 8:16 AM, John Drescher wrote:
> -- Forwarded message --
> From: Stan
> Date: Mon, Jul 19, 2010 at 6:43 AM
> Subject: RE: [Bacula-users] new user, first attempt
> To: John Drescher
>
>
> backupServer:/home/stan# ps auwx | grep bacula
> bacula 2770 0.0
On Mon, Jul 19, 2010 at 7:18 AM, Stan wrote:
> Does the password matter? This, and the other .conf files, have these kinds
> of (generated?) passwords. Even though I gave the installation a password
> when it asked.
>
> #
> # Bacula User Agent (or Console) Configuration File
> #
>
> Director {
>
-- Forwarded message --
From: Stan
Date: Mon, Jul 19, 2010 at 6:43 AM
Subject: RE: [Bacula-users] new user, first attempt
To: John Drescher
backupServer:/home/stan# ps auwx | grep bacula
bacula 2770 0.0 0.0 13392 1232 ? Ssl Jul18 0:00
/usr/sbin/bacula-sd -c /etc
Hi List!
Maybe someone has a good idea:
My Bacula 5.0.2 is damned slow on backing up some w2k3 servers.
I use compression = GZIP
and get only a throughput of about 1.3 MB/sec.
I tested the network speed and got an average of 21 MB/sec. Why is
Bacula that slow?
Without compression I get about 4-5
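If CPU on the client is the bottleneck, the gzip level can be lowered in the FileSet Options; bare GZIP means level 6, while GZIP1 is the fastest. A sketch with placeholder names and paths:

```
FileSet {
  Name = "W2k3Set"                 # placeholder
  Include {
    Options {
      signature = MD5
      compression = GZIP1          # fastest gzip level; bare GZIP = GZIP6
    }
    File = "C:/Data"               # placeholder
  }
}
```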
On 7/19/2010 4:21 AM, Rory Campbell-Lange wrote:
> On 18/07/10, John Drescher (dresche...@gmail.com) wrote:
>>> # btape -pv /dev/nst0
>>>
>>> Tape block granularity is 1024 bytes.
>>> btape: butil.c:284 Using device: "/dev/nst0" for writing.
>>> 18-Jul 22:27 btape JobId 0: 3301 Issuing aut
Hello Nick,
This issue was discussed a few months ago; you can get advice by searching
the mailing list or the wiki.
http://wiki.bacula.org/doku.php?id=faq#restore_takes_a_long_time_to_retrieve_sql_results_from_mysql_catalog
Bye
On Monday, 19 July 2010 at 13:04:23, Nick Hilliard wrote:
> O
On 7/19/2010 6:07 AM, Mister IT Guru wrote:
> I am trying to find where the log entries are made on the windows
> clients, so that I can find out why my backups seem to fail at around
> 75% - each failed backup is taking up a significant amount of disk
> space, and I need to be certain that the bac
On 7/19/2010 7:04 AM, Nick Hilliard wrote:
> On 16/07/2010 16:48, Nick Hilliard wrote:
>>> +-------+-------+----------+----------+-----------+------------+
>>> | JobId | Level | JobFiles | JobBytes | StartTime | VolumeName |
>>> +
On 16/07/2010 16:48, Nick Hilliard wrote:
>> +-------+-------+----------+----------+-----------+------------+
>> | JobId | Level | JobFiles | JobBytes | StartTime | VolumeName |
>> +
I am trying to find where the log entries are made on the windows
clients, so that I can find out why my backups seem to fail at around
75% - each failed backup is taking up a significant amount of disk
space, and I need to be certain that the backup actually completed, or
can even be recovered
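One way to get client-side detail without hunting for log files is to ask the FD for a trace file from bconsole; the client name below is a placeholder, and the trace is written to the FD's working directory (e.g. C:\Program Files\Bacula\working\bacula-fd.trace):

```
*setdebug level=100 trace=1 client=winbox-fd
```

Remember to set level=0 trace=0 afterwards, since trace files grow quickly.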
On 18/07/10, John Drescher (dresche...@gmail.com) wrote:
> > # btape -pv /dev/nst0
> >
> > Tape block granularity is 1024 bytes.
> > btape: butil.c:284 Using device: "/dev/nst0" for writing.
> > 18-Jul 22:27 btape JobId 0: 3301 Issuing autochanger "loaded? drive 0"
> > command.
> > 18-