Hi Davide,
thanks for your continued work on bacula-web. Are backup
job groups implemented in this version? I'd like to be able to create
a group of clients and/or jobs for space-accounting purposes.
I'm aware this can be done with an admin job and a custom SQL query;
however, it
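Until grouping is available in the GUI, here is a rough sketch of the custom query such an admin job could run. Table and column names follow the standard Bacula catalog schema; the client names, the 30-day window, and the MySQL date syntax are placeholder assumptions, so adapt them to your setup:

```sql
-- Space accounting for an ad-hoc "group" of clients:
-- bytes written per client by successful backup jobs in the last 30 days.
SELECT Client.Name,
       SUM(Job.JobBytes) AS TotalBytes
FROM Job
JOIN Client ON Client.ClientId = Job.ClientId
WHERE Client.Name IN ('client1-fd', 'client2-fd')  -- the "group" members
  AND Job.Type = 'B'                               -- backup jobs only
  AND Job.JobStatus = 'T'                          -- terminated successfully
  AND Job.StartTime > NOW() - INTERVAL 30 DAY      -- MySQL interval syntax
GROUP BY Client.Name;
```

A job-based group (rather than client-based) would filter on Job.Name instead of Client.Name.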
I am on bacula 5.2.12 and I backup to LTO5 tapes. I started running a big FULL
backup job. The job failed with the following error after backing up 2.2TB of
data. I had the same problem with a full backup of another client.
Start time: 23-Jan-2013 22:24:37
End time:
Quoting Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]
uthra.r@nasa.gov:
I am on bacula 5.2.12 and I backup to LTO5 tapes. I started running
a big FULL backup job. The job failed with the following error after
backing up 2.2TB of data. I had the same problem with a full backup
Thank you for your reply. Backups of these clients have been running fine
all along. Only when I started a FULL backup of the two clients this week
did they fail:
Client1:
JobId 4183: Fatal error: backup.c:892 Network send error to SD. ERR=Broken pipe
24-Jan 20:23
Client2:
JobId 4180:
Well, they are related in my head at least :-)
I wiped all the labels on my tapes and did label barcodes and
everything seems to have gone fine and they all got added to the
default pool.
Except for two tapes: 06L4 and 000100L4
The latter I can understand because it is a cleaning tape. So
Oh, in case this helps:
*list media
Automatically selected Catalog: MyCatalog
Using Catalog MyCatalog
Pool: Default
+---------+------------+
| MediaId | VolumeName |
OK, this is odd. I just tried the label command for that one slot
and got an error
*label sotrage=SL24 slot=8 pool=Default barcodes
The defined Storage resources are:
1: File
2: SL24
Select Storage resource (1-2): 2
Connecting to Storage daemon SL24 at 127.0.0.1:9103 ...
3306 Issuing
Davide:
I needed one extra step to make this work on my Linux Mint (Ubuntu/Debian)
system:
# apt-get install libapache2-mod-php5
Otherwise, Apache didn't know what to do with .php files and offered them
for download.
Thanks!
Regards,
Joseph Spenner
On Fri, Jan 25, 2013 at 3:40 AM, bacula-...@dflc.ch wrote:
Dear all,
I'm proud to announce that Bacula-Web 5.2.12 is available now.
This version is a major bug-fix release; I'd suggest that all users
running a previous version of Bacula-Web upgrade to the latest version.
More
Seems to make sense to me since the catalog backup is already
happening, but maybe this is a bad idea for some reason?
If it is OK to do that, are there any others I should add?
FileSet {
  Name = Catalog
  Include {
    Options {
      signature = MD5
    }
    File = /var/lib/bacula/bacula.sql
  }
}
On Fri, Jan 25, 2013 at 11:17 AM, Alan McKay
alan.mckay+bac...@gmail.com wrote:
File = /var/lib/bacula/bacula.sql
I went looking for this file out of curiosity, and found it was not there.
But upon further digging, it looks like the script dumps the DB, then
backs it up, then removes that
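That matches how the stock sample configuration wires the Catalog job together. Roughly like this, with script paths and resource names as shipped in the Bacula 5.x example configs, so treat it as a sketch of the usual setup rather than anyone's exact config:

```conf
Job {
  Name = "BackupCatalog"
  JobDefs = "DefaultJob"
  Level = Full
  FileSet = "Catalog"
  Schedule = "WeeklyCycleAfterBackup"
  # Dump the catalog to bacula.sql just before the backup starts ...
  RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
  # ... and delete the dump again once it is safely on the volume.
  RunAfterJob  = "/etc/bacula/scripts/delete_catalog_backup"
  Priority = 11   # run after the normal backups have finished
}
```

So the .sql file only exists on disk for the duration of the BackupCatalog job, which is why it looks missing the rest of the time.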
On 2013-01-25 10:14, Victor Hugo dos Santos wrote:
On Fri, Jan 25, 2013 at 3:40 AM, bacula-...@dflc.ch wrote:
Dear all,
I'm proud to announce that Bacula-Web 5.2.12 is available now.
This version is a major bug-fix release; I'd suggest that all users
running a previous version of
On Fri, Jan 25, 2013 at 1:44 PM, Alan McKay alan.mckay+bac...@gmail.com wrote:
On Fri, Jan 25, 2013 at 11:17 AM, Alan McKay
alan.mckay+bac...@gmail.com wrote:
File = /var/lib/bacula/bacula.sql
I went looking for this file out of curiosity, and found it was not there.
But upon further
Due to disk layout on my system, I have the DB dump stored elsewhere on my
server, and I changed the catalog backup to not delete the DB dump.
Assuming some kind of less catastrophic crash, my hope would be to restore
the DB from the on-disk copy. I maintain the DB dump, bootstrap files and
You could of course opt to use gzip or bzip2 to compress the file on disk
before backing it up; for these SQL files it will achieve a very significant
compression ratio.
I do have gzip enabled for that and it compresses down to somewhere
between 2 and 4GB.
John
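To put a rough number on that, here is a self-contained sketch showing the kind of ratio gzip gets on dump-like text. The synthetic INSERT rows and the /tmp paths below are stand-ins for real mysqldump/pg_dump output, not anything from this thread:

```shell
#!/bin/sh
# Generate a dump-like file of repetitive INSERT statements, then compare
# its size before and after gzip. Paths are throwaway /tmp placeholders.
dump=/tmp/catalog_demo.sql
seq 1 20000 | sed "s|.*|INSERT INTO File VALUES (&, '/some/backed/up/path');|" > "$dump"
gzip -c "$dump" > "$dump.gz"    # -c keeps the original for comparison
raw=$(wc -c < "$dump")
gz=$(wc -c < "$dump.gz")
echo "raw=$raw bytes, gzipped=$gz bytes"
```

Real catalog dumps are less uniform than this, but they are still mostly repetitive path and attribute text, which is why multi-GB dumps shrink so dramatically.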
Quoting Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]
uthra.r@nasa.gov:
Thank you for your reply. Backups of these clients have been running fine
all along. Only when I started a FULL backup of the two clients this week
did they fail:
Client1:
JobId 4183: Fatal error:
On Jan 25, 2013, at 11:17 AM, Alan McKay wrote:
Seems to make sense to me since the catalog backup is already
happening, but maybe this is a bad idea for some reason?
If it is OK to do that, are there any others I should add?
FileSet {
Name = Catalog
Include {
Options {
On 25/01/2013 20:09, John Drescher wrote:
And further, by the looks of it, the script removes any dump that may
already be there before creating the new one.
Just making sure I got that right. Is there a reason not to leave the
file there until the next run? Just a space issue?
Just a