I will skip the first question because there are two areas that do not
make sense to me and I do not want to add to your confusion...
Also, I'd presume that we'd want to have compression turned on - if anyone
has a comment on that, I'd be appreciative.
You want hardware compression (most
On Feb 12, 2008 3:12 PM, Jesper Krogh [EMAIL PROTECTED] wrote:
Arno Lehmann wrote:
It would be nice if the Spool-limit
was pr. job or something.
It's in the manual :-)
At
http://www.bacula.org/manuals/en/install/install/Storage_Daemon_Configuratio.html#DeviceResource
there is
Erm... unless I missed something the SD needs to be restarted for
configuration changes to take effect.
Thanks. I was not sure whether a reload on the director would also
reload the config on the SD. I have always just restarted the SD when
it was not in use, and only after releasing the tapes.
So if I could get these config-changes in, without interrupting current
running jobs.
Yes. You can modify the configs without affecting any jobs but they
will not take effect until you issue the reload command or possibly
restart the sd.
John
For what it's worth, we use the same model and have had no problem with the
expansion slots being recognized. Maybe you just need to power cycle the
unit?
One difference is that we disabled the import/export slot on the changer.
Not sure if that matters though.
I have the IE slot enabled on
On Feb 13, 2008 10:28 AM, Michael Da Cova [EMAIL PROTECTED] wrote:
Hi
What happens if you install the application as a service only,
something I am hoping to try later? I have the same issue.
Michael
It should work, but you will not see the tray icon.
John
Is there a way to rename volumes in Bacula to match the new barcodes?
Not without losing all the data that is on them.
If
there is not, does anyone on the list know if the Quantum Superloader 3 will
support custom barcode labels with my volume format?
It should.
John
-- Forwarded message --
From: John Drescher [EMAIL PROTECTED]
Date: Feb 13, 2008 2:48 PM
Subject: Re: [Bacula-users] restore ?
To: Robin Blanchard [EMAIL PROTECTED]
On Feb 13, 2008 2:42 PM, Robin Blanchard [EMAIL PROTECTED] wrote:
So I attempted a basic restore this morning
How am I supposed to know which autochanger slot to mount, when it
seems bacula just randomly selects tapes from the pool (see attached
screenshot)?
It should have told you in the console what tape it wanted to use.
However, I have found that in most cases when bacula is stuck like this, just
*mount
Automatically selected Catalog: ITOS
Using Catalog ITOS
The defined Storage resources are:
1: LTO3-1
2: FileStorage
Select Storage resource (1-2): 1
Enter autochanger slot:
I have used a 2-drive autochanger with bacula for 18 months now and I
have never been asked that.
On Thu, Feb 14, 2008 at 4:36 AM, Mark Maas [EMAIL PROTECTED] wrote:
Hello list,
Is it at all possible to switch to a different storage device when a job
runs into a problem with the current one (like no tape mounted)?
For instance:
Job 1 is running and tries to send data to SD
Device status:
Autochanger Magnum224-0 with devices:
LTO2-0 (/dev/nst0)
LTO2-1 (/dev/nst1)
Device LTO2-0 (/dev/nst0) is not open.
Drive 0 status unknown.
Device LTO2-1 (/dev/nst1) is mounted with:
Volume: A00048
Pool: UserBackup-LTO2
Media type: LTO-2
Slot 20
On Thu, Feb 14, 2008 at 1:50 PM, Robin Blanchard
[EMAIL PROTECTED] wrote:
Ok. So the next queued job is also now stuck, waiting on the SD. Any
ideas how to get the SD/tape back ?
I would just restart bacula-sd.
John
-
On Fri, Feb 15, 2008 at 8:18 AM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Does anyone have an idea whether this should work or not? Am I typing the
command incorrectly?
Is there another way to re-introduce the catalog data from a tape to a
remote database on a non-standard port?
Yes.
could you show me what command line you use to tell it to use another port?
Thanks!
Sorry, I do not use a nonstandard port. That is most likely the
problem. My first stab at this is that you could use an SSH tunnel to map
the default port (for mysql) on the local machine to the nonstandard
port on
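A minimal sketch of that tunnel idea (the host name, user, and the
nonstandard port are hypothetical, not from the thread):

```shell
# Build the SSH command that forwards the standard MySQL port (3306) on
# this machine to a hypothetical nonstandard port 13306 on the DB host.
DB_HOST="catalog-db.example.com"   # hypothetical remote catalog server
NONSTD_PORT=13306                  # hypothetical nonstandard MySQL port
TUNNEL="ssh -f -N -L 3306:localhost:${NONSTD_PORT} bacula@${DB_HOST}"
echo "$TUNNEL"
# Run that command, then point bscan/bacula-dir at localhost:3306; the
# traffic is forwarded to the nonstandard port on the remote host.
```

Once the tunnel is up, the catalog tools never need to know about the
nonstandard port at all.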
On Fri, Feb 15, 2008 at 11:51 AM, Justin Francesconi
[EMAIL PROTECTED] wrote:
Hi.
OS: CentOS 5.1
Bacula Version: 2.28
SQL: PostgreSQL
Issue: I believe I have finally got bacula mostly configured. Mostly,
because I still have one last issue. I am able to start all the daemons,
that's all
On Feb 18, 2008 1:01 PM, Tom Allison [EMAIL PROTECTED] wrote:
Way back when, I had some tape drives. At that time Linux could not
provide compression on tape drives, especially if you were trying to
write across multiple tapes.
Is this still true?
So if I'm looking at a tape drive that says
On Feb 20, 2008 12:14 AM, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
Thanks John,
I'll see how it goes.
Here is more info on this:
http://support.microsoft.com/kb/149984
John
-
This SF.net email is sponsored by:
I have tested the raw drive speed with vdump and bs=1024; the drive
speed is around 35MB/s during vdump.
The tape drive is SCSI LVD. I have tried replacing the cable twice, but
there is no improvement in speed through bacula.
Does a spool file on raid0 increase the despooling speed significantly?
So with LTO it seems to be no problem to use gzip and the drive's
hardware compression.
Any thoughts on this?
With software compression on an LTO device your backups will take much
longer unless your client can somehow compress at the rate your tape
drive needs.
John
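For comparison, client-side software compression is enabled per FileSet
in the director configuration; a minimal sketch (the name and path are
hypothetical):

```
FileSet {
  Name = "CompressedSet"        # hypothetical name
  Include {
    Options {
      signature = MD5
      compression = GZIP        # software compression on the client
    }
    File = /home                # hypothetical path
  }
}
```

Leaving the compression option out lets the LTO drive's hardware
compression do the work instead.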
On Thu, Feb 21, 2008 at 9:07 AM, [EMAIL PROTECTED] wrote:
Hi
In my bacula configuration all daemons seem to be running smoothly. But
when I check the status for storage, it shows the following message:
Device FileStorage (/tmp) is not open.
No error message is captured in the log file.
How
It's open when you are writing.
Cesare
Oops, I meant to say that as well.
Thanks,
John
On Thu, Feb 21, 2008 at 9:43 AM, Eduardo Júnior [EMAIL PROTECTED] wrote:
Hi,
When I have a volume marked as Recycle, what can I do to restore all the data
stored on it? I have already read the documentation about bscan, but I don't
feel confident using it to restore data from a volume, because the
As for Bacula not using the IE slot. I was able to get Bacula to recognize
the slot and use tapes from it by modifying mtx-changer. Let me know if
you'd like to see what I've done.
Can you post your mtx-changer script? At one point I worked on that
but my script failed in some way some of the
Maximum Writing speed with vdump and 1024 block size - 32
Megabytes/second.
I am not sure what vdump is, but if it dumps data to the tape similarly
to dd, you should be getting over 60MB/s for LTO3. With dd I get over
35MB/s with LTO2 and my root partition as the source data, so this
number still
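A rough way to check raw write throughput with dd, as suggested above
(writing to a file here; on the real system the output would be the tape
device, e.g. of=/dev/nst0):

```shell
# Write 64 MB of zeros; dd reports the achieved throughput when done.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=64
BYTES=$(stat -c %s /tmp/ddtest.bin)   # confirm how much was written
rm -f /tmp/ddtest.bin
echo "$BYTES bytes written"
```

With a tape device as the target, the reported rate is the raw drive
speed with no bacula overhead, which makes a useful baseline.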
On Thu, Feb 21, 2008 at 6:45 AM, Rainer Koenig
[EMAIL PROTECTED] wrote:
Hi Arno,
Am Donnerstag, 21. Februar 2008 12:03 schrieb Arno Lehmann:
What it comes down to is that you need to know why the access times
of the directories are modified. Virus scanners are known to do this
Can anyone confirm whether the file daemon (client) runs on Windows 64-bit,
and if so, what if any restrictions there are?
I can confirm that it works for XP 64 bit.
John
Why? Bacula doesn't use the atime when deciding about backups.
I stand corrected.
John
25-Feb 17:13 vm-backup-dir: Fatal Error at sql_get.c:572 because:
rwl_writelock failure. stat=22: ERR=Invalid argument
25-Feb 17:13 vm-backup-dir: Fatal Error because: Bacula interrupted by
signal 11: Segmentation violation
What did I do wrong, and how can I fix it?
I have never used
On Mon, Feb 25, 2008 at 12:05 PM, John Drescher [EMAIL PROTECTED] wrote:
25-Feb 17:13 vm-backup-dir: Fatal Error at sql_get.c:572 because:
rwl_writelock failure. stat=22: ERR=Invalid argument
BTW, This line looks interesting. What database/version are you using?
John
On Mon, Feb 25, 2008 at 12:44 PM, Justin Francesconi
[EMAIL PROTECTED] wrote:
Basically, this snippet from the console sums it all up:
---
[EMAIL PROTECTED] bin]# ./btape -c /usr/local/bacula/bin/bacula-sd.conf
/dev/nst0
Tape block granularity is
Not sure what you mean by 'list media'. Thanks for the help, though.
Type that in bconsole:
# bconsole
Connecting to Director jmd1:9101
1000 OK: jmd1-dir Version: 2.2.8 (26 January 2008)
Enter a period to cancel a command.
*list media
Automatically selected Catalog: MyCatalog
Using Catalog
My problem is as follows:
I set up bacula (sd, dir, bconsole, mysql) on a server with centos 5 and
a powervault 114t unit with LTO-2 tapes (200 G/400 G). The backups have
run normally so far, but some time back I added a server
with 560 G in use on its hard disk, and it filled
On Tue, Feb 26, 2008 at 12:17 PM, Francisco Munoz Perez
[EMAIL PROTECTED] wrote:
There is the error:
26-feb 05:39 bacula-sd: Please mount Volume ExternoFull2 on Storage
Device LTO2-0 (/dev/nst0) for Job Correo_Externo.2008-02-25_09.24.45
No, to load the first tape I do it manually (insert the tape in the
powervault, run bconsole, and execute mount to load it).
Do you mean you loaded the tape into the tape drive of the powervault, or
into a slot on the autochanger, with bacula then having the changer grab
the tape and put it in the drive using
I load the tape into the slot in the powervault (this does not have an
autochanger).
So I did not run autochanger tests, as it has no autochanger.
Do you understand me?
Yes, I do. Your English is fine. From the error message I saw, it looked
like you had an autochanger. Can you post your
On Tue, Feb 26, 2008 at 12:55 PM, Francisco Munoz Perez
[EMAIL PROTECTED] wrote:
Con fecha 26/2/2008, John Drescher [EMAIL PROTECTED] escribió:
Load the tape into the slot in the powervault (this does not have an
autochanger).
So I did not run autochanger tests, as it has
This should be a working or something?
Now you have confused me. I am not sure exactly what you are asking.
If you restarted the bacula-sd daemon, it should now work.
At that point it would be good to try a job that spans tapes to
see if it is working correctly.
John
On Tue, Feb 26, 2008 at 5:39 PM, Damien Hull [EMAIL PROTECTED] wrote:
Here's what I have on my network.
1. Workstation
2. Laptop
3. 3 - 4 servers
4. 1 data storage server - could be the Bacula server
5. All are running Ubuntu
I would consider this a small network for bacula but I think
Used will absolutely be considered in the recycle policy. Archive will
not, AFAIK, but my machine recycles Used tapes all of the time (this is
the status that gets set if a tape reaches a maximum, other than end of
tape).
Sorry about Used. My tapes are marked Full when they are full. I
On Tue, Feb 26, 2008 at 7:12 PM, Bob Hetzel [EMAIL PROTECTED] wrote:
I just noticed this oddity. Perhaps it's how I've got bacula
configured, but it otherwise seems to work properly. I've got an
autochanger with 72 slots and two tape drives. I just added some new
tapes in and did the
On Wed, Feb 27, 2008 at 9:43 AM, nda [EMAIL PROTECTED] wrote:
Hi,
We are currently testing Bacula at work, but it isn't an easy task...
Installation seems OK and I succeeded with the catalog backup.
I'm now trying to make the web interface work, but I get this error message
:
Error query: 4
1.) How much data are you backing-up (Full) regularly with Bacula?
Fulls are anywhere from 1GB to 3TB depending on the system I am backing up.
2.) How much data (unexpired backups) is Bacula managing for you?
Here is my bacula-web statistics.
Total clients: 35 Total bytes
On Wed, Feb 27, 2008 at 12:14 PM, Peter Buschman [EMAIL PROTECTED] wrote:
All:
I was asked today how much data it was possible to back-up with
Bacula. Since my personal experience is limited to installations
with no more than 3TB, I thought I'd pose the following questions to
-- Forwarded message --
From: John Drescher [EMAIL PROTECTED]
Date: Wed, Feb 27, 2008 at 3:21 PM
Subject: Re: [Samba] Samba/LDAP Question
To: Hector Blanco [EMAIL PROTECTED]
On Wed, Feb 27, 2008 at 3:13 PM, Hector Blanco [EMAIL PROTECTED] wrote:
Mmmm..If I understood properly
Sorry for the misdirected post.
John
23-Feb 06:21 khyber-sd JobId 1843: 3304 Issuing autochanger load slot
1, drive 0 command.
23-Feb 06:26 khyber-sd JobId 1843: Fatal error: 3992 Bad autochanger
load slot 1, drive 0: ERR=Child died from signal 15: Termination.
Results=Loading media from Storage Element 1 into drive
On Fri, Feb 29, 2008 at 3:38 AM, nda [EMAIL PROTECTED] wrote:
Hi Jason,
Thanks for the feedback, I'll try to learn how to use bconsole
efficiently.
When I have had errors like that with bacula-web, I usually go to the
source (search for the error message) and find out what line is
By changing the Volume Use Duration in the Pool config, will Bacula pick up
the change after a reload so that I don't need to update the volume in the
catalog?
Is there a better way to address this issue?
I may be wrong in this case, but I have found that any changes in the
pool resource do not
On Mon, Mar 3, 2008 at 5:12 PM, Mark Nienberg [EMAIL PROTECTED] wrote:
I'm seeing this happen more often:
3304 Issuing autochanger load slot 2, drive 0 command.
3305 Autochanger load slot 2, drive 0, status is OK.
3301 Issuing autochanger loaded? drive 0 command.
3302 Autochanger loaded?
On Mon, Mar 3, 2008 at 5:59 PM, Mark Nienberg [EMAIL PROTECTED] wrote:
John Drescher wrote:
Have you tried adding a delay in your mtx-changer script where it says
it might be needed?
In mtx-changer
# Increase the sleep time if you have a slow device
# or remove the sleep and add
On Mon, Oct 6, 2008 at 7:18 AM, Alan Brown [EMAIL PROTECTED] wrote:
On Fri, 26 Sep 2008, John Drescher wrote:
BTW, I would never use raid0 or LVM (without every PV being raided)
for backup data that I cared about.
Spooled data isn't exactly worth keeping. After a bacula restart the
contents
On Mon, Oct 6, 2008 at 11:24 AM, Chris Picton [EMAIL PROTECTED] wrote:
On Mon, 2008-10-06 at 09:09 -0400, John Drescher wrote:
On Mon, Oct 6, 2008 at 7:18 AM, Alan Brown [EMAIL PROTECTED] wrote:
On Fri, 26 Sep 2008, John Drescher wrote:
BTW, I would never use raid0 or LVM (without every PV
30-Sep 01:06 bacula-sd JobId 227: Error: block.c:275 Volume data error
at 0:1! Wanted ID: BB02, got . Buffer discarded.
I believe this means the tape was overwritten or corrupted for some
reason. Since the first block does not have a valid header on it
bacula is not able to go any further.
Terry L. Inzauro wrote:
Is there a document that outlines the steps necessary to purge 'older' data
on a disk device that is full? I understand that my retention levels are
not correct, but I wish to purge data at my discretion until I get it all
figured out.
You can purge entire
On Wed, Oct 8, 2008 at 11:49 AM, RYAN vAN GINNEKEN [EMAIL PROTECTED] wrote:
Hello all, I am still having difficulty with retention periods and the like;
could someone kindly help me with this?
Basically, for the local intranet I would like bacula to take a full backup
at the first of each month
Also consider not using localhost/127.0.0.1;
use FQDNs for the dir, sd, and fd.
The reason for this is that bacula will not be able to back up any
machine other than the director if you use 127.0.0.1 or localhost
anywhere in your configuration files.
John
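A sketch of what that looks like in bacula-dir.conf (the hostname and
password are hypothetical). The Address here is handed to every file
daemon, so it must be resolvable and reachable from all clients, never
localhost:

```
Storage {
  Name = LTO3-1
  Address = sd.example.com      # FQDN reachable from every client
  SDPort = 9103
  Password = "storage-secret"   # hypothetical
  Device = LTO3-1
  Media Type = LTO-3
}
```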
On Wed, Oct 8, 2008 at 1:54 PM, John Drescher [EMAIL PROTECTED] wrote:
On Wed, Oct 8, 2008 at 1:47 PM, RYAN vAN GINNEKEN [EMAIL PROTECTED] wrote:
What is stopping bacula from using volume 44???
[EMAIL PROTECTED] wrote:
Last Written time on that was 2008-10-02 02:42:07
so then it has 16 days
On Wed, Oct 8, 2008 at 1:29 PM, subbustrato [EMAIL PROTECTED] wrote:
Thank you very much for your advice;
this is the output of the debug command:
$ sudo /sbin/bacula-dir -d100 -c /etc/bacula/bacula-dir.conf
bacula-dir: dird.c:157-0 Debug level = 100
bacula-dir: bsys.c:566-0 Could not open
Does the lack of response mean there is no reason anyone can see for
this behavior? Any ideas what to do about it?
I generally try to answer as many user questions as I can add some
relevant info to, but I did not see a good reason for this behavior.
My backup is hanging
currently, and I
On Wed, Oct 8, 2008 at 11:15 PM, Thomas Arseneault
[EMAIL PROTECTED] wrote:
The storage daemon died on my storage box causing the dump to fatally error
out. I fixed the daemon problem but now I have 4 tapes that I would like to
reuse but they are far from their retention time so how do I
I tried to restore this job, same error. I cannot imagine why the
tape(s) should be corrupted; the other jobs are on the same tape(s) and
are restorable.
Is there any way to properly check what's wrong?
Are there any scsi errors in your dmesg output? Is this tape drive
part of an autochanger?
John
On Thu, Oct 9, 2008 at 7:24 AM, Kjetil Torgrim Homme [EMAIL PROTECTED] wrote:
John Drescher [EMAIL PROTECTED] writes:
Thomas Arseneault [EMAIL PROTECTED] wrote:
The storage daemon died on my storage box causing the dump to
fatally error out. I fixed the daemon problem but now I have 4
tapes
Yes, I am seeing that now. Thank you for the tip. I was actually
trying to set up volumes based on backup type (full, diff, inc) in
combination with clients, so client1 would have volumes client1_fulls,
client1_diffs, and client1_incs, but I can't seem to get it working
properly. I was
I have set up a new backup system and I don't know whether the
performance is good or not.
So I hope you can tell me a little bit more :-)
I am using an LTO4 autoloader with 2 drives. It is connected to a Debian
backup server through a PCI-X SCSI card (U320).
This server has another internal
On Thu, Oct 9, 2008 at 9:41 AM, John Drescher [EMAIL PROTECTED] wrote:
Could you point to some online docs on how to do it? Currently I have
one volume with a size of 567GB, and my space is running low. How
should the volumes be set up, 100GB each? And how do I set the rotation
so it knows
On Fri, Oct 10, 2008 at 11:39 AM, John Drescher [EMAIL PROTECTED] wrote:
On Fri, Oct 10, 2008 at 11:06 AM, Kjetil Torgrim Homme
[EMAIL PROTECTED] wrote:
Marc Richter [EMAIL PROTECTED] writes:
Further, I want to know, why bacula recycles Volumes (which destroys
data) instead of creating new
On Mon, Oct 13, 2008 at 7:33 AM, Radek Hladik [EMAIL PROTECTED] wrote:
Hi,
I would like to ask how I should recycle catalog backups.
I am using disk-based storage, and the catalog pool is by default set to
a half-year or so retention period. I would think that a week would be
more than
On Mon, Oct 13, 2008 at 7:26 AM, Radek Hladik [EMAIL PROTECTED] wrote:
Hi,
I'm backing up to disk-based storage; every volume has its maximum count
set to one job, so every pool has one file per job. Each pool is configured
with a recycle period and volume count so it can hold just enough
However, I sometimes need to delete some older backups by hand, because
of low disk space, or because we sometimes run a backup manually and
it breaks the schema (the last backup is not old enough to be recycled).
I would like to ask whether there is a possibility to manually recycle
I have no problem with backing up the catalog on another server with less
expensive storage space. But I do not see any use for old catalog backups at
all. I would do a bscan into a new catalog in all cases except the one where
I have a catalog backup newer than the last data backed up and I need to recover
On Tue, Oct 14, 2008 at 5:23 AM, le dahut [EMAIL PROTECTED] wrote:
I re-ask my question: is there any way to restore only the ACLs of files?
Maybe it is possible to export the ACLs to a file that can then be used
by 'setfacl'?
I do not think this is possible at the moment.
John
On Tue, Oct 14, 2008 at 12:57 PM, subbustrato [EMAIL PROTECTED] wrote:
I have created a label for my volume; I think something has improved, but
the device remains BLOCKED.
This is the output about storage status:
Device status:
Device FileStorage (/media/sdb1/bacula) is mounted with:
On Tue, Oct 14, 2008 at 3:11 PM, sublayer [EMAIL PROTECTED] wrote:
I don't know why, but now when I ask for the storage status with bconsole,
the reply says that the device is not open...
How can I open the device?
That is normal for a file device that is not mounted.
Device status:
Device
On Tue, Oct 14, 2008 at 11:25 AM, Steffen Knauf
[EMAIL PROTECTED] wrote:
Hello,
On every 1st Friday bacula starts a backup of a 4 TB partition.
This will take a while ;) so the other backup jobs should run
concurrently.
But nothing happens; the other jobs don't start until this huge job
Yes, she can.
I don't have firewalls active on the remote machine, and from bconsole,
for example, if I send the estimate command for the remote machine I get
a positive reply:
*estimate
The defined Job resources are:
1: BackupFisso
2: BackupPiccolino -- Remote Machine
3:
On Tue, Oct 14, 2008 at 5:16 PM, subbustrato [EMAIL PROTECTED] wrote:
Currently my system provides for backing up 2 clients, one in the same
place as the director and storage, and the other in a remote place (on
the same LAN).
When the process starts, the backup of the remote PC is OK, but
This is not the direction I mean. I assume that the director can
access the remote client. But can the remote client access the
storage? Is there a firewall that prevents the remote client fd from
initiating its own connection to the storage daemon @ port 9103?
John
ok, sorry.
Also on the
On Tue, Oct 14, 2008 at 6:36 PM, John Drescher [EMAIL PROTECTED] wrote:
This is not the direction I mean. I assume that the director can
access the remote client. But can the remote client access the
storage? Is there a firewall that prevents the remote client fd from
initiating its own
On Wed, Oct 15, 2008 at 9:22 AM, le dahut [EMAIL PROTECTED] wrote:
Sorry for the multipost; my mail client was buggy...
Is it possible to extract the ACL information in an output format that
can be read by a script using 'setfacl'?
You would have to restore the files somewhere
On Wed, Oct 15, 2008 at 4:32 AM, Steffen Knauf
[EMAIL PROTECTED] wrote:
I don't have different priorities. I thought the default of this option in
the client section is 2 and in the storage section 10, so
I didn't set this option explicitly. Now I have set this option in the
following configs/sections:
Also make sure that 127.0.0.1 or localhost are not used in your bacula
config files.
John
I have solved the problem; you hit the nail on the head!!! The problem
was the address of the storage (it was set to localhost).
Thank you very very very much!!!
sub
Not sure of your os but I have heard
I have a few backups over the network which are quite big (50GB and 250GB
for example).
I do not consider this a huge backup. I have done 2TB+ backups
successfully with bacula.
I'm expecting them to be quite long jobs, and I'm working on
getting SD in same cabinet for them, but that ideal solution
On Thursday 16 October 2008 14:36:47 Piotr Gbyliczek wrote:
I didn't call it huge. But it is quite big IMHO if you're doing it over
the cloud, not through a corporate network with stable 100Mb or even 1Tb
connections.
Nice thing I did there. I wish I had a 1Tb connection somewhere... 1Gb
should
Does the first job take more than 1 day?
Yes, that is true in this case. But does this make any difference? While
a full backup is running, there shouldn't be any next job upgraded to full
prior to the finish of the first one.
The reason I ask is that when the next backup is scheduled to begin, if the
previous
On Thu, Oct 16, 2008 at 4:21 PM, Craig White [EMAIL PROTECTED] wrote:
I'm looking at the documentation and it suggests that I need to put the
'Changer Command' in both the Storage definition and in the Autochanger
Resource. That seems redundant. I am gathering that I only need that
command in
On Fri, Oct 17, 2008 at 5:54 AM, Piotr Gbyliczek [EMAIL PROTECTED] wrote:
On Thursday 16 October 2008 17:40:16 T. Horsnell wrote:
I had this problem from the start. I ended up with a line in my
RunBefore script which disabled the backup job:
echo disable job=Job1 |
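The quoted line is cut off; a hypothetical completion (the job name and
bconsole path are assumptions) pipes the command into bconsole:

```shell
#!/bin/sh
# Hypothetical RunBefore script: disable the job so the scheduler will
# not queue another run while the current one is still in progress.
CMD="disable job=Job1"
echo "$CMD"
# In a live setup the command would be piped into bconsole, e.g.:
#   echo "$CMD" | bconsole -c /etc/bacula/bconsole.conf
```

A matching RunAfter script would issue "enable job=Job1" once the long
job finishes.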
-- Forwarded message --
From: John Drescher [EMAIL PROTECTED]
Date: Fri, Oct 17, 2008 at 11:49 AM
Subject: Re: [Bacula-users] Can't do tape backups - Cannot find any
appendable volumes
To: Arch Willingham [EMAIL PROTECTED]
On Fri, Oct 17, 2008 at 11:24 AM, Arch Willingham [EMAIL
On Fri, Oct 17, 2008 at 12:00 PM, Mikel Jimenez [EMAIL PROTECTED] wrote:
Mikel Jimenez escribió:
Hello
For about 4 days, I have been receiving this message on a client backup:
Fatal error: No Job status returned
The backup completes, but when it goes to insert the attributes into the
DB, it doesn't.
Here is
On Fri, Oct 17, 2008 at 3:07 PM, Arch Willingham [EMAIL PROTECTED] wrote:
#list media pool
Defined Pools:
1: Default
2: Scratch
3: SATA250POOL
4: SATA15TBPOOL
5: Daily File backup pools
6: Weekly File backup pools
7: Tape Backups Pool
8: Tape Backups Pool
On Fri, Oct 17, 2008 at 3:13 PM, John Drescher [EMAIL PROTECTED] wrote:
On Fri, Oct 17, 2008 at 3:07 PM, Arch Willingham [EMAIL PROTECTED] wrote:
#list media pool
Defined Pools:
1: Default
2: Scratch
3: SATA250POOL
4: SATA15TBPOOL
5: Daily File backup pools
6
Is there any firewall / network reason why the port on the client
could be dropped after some period of inactivity?
No... the backup finishes 99.9%. It copies all the data from the client,
but it fails at the last moment...
I saw that. The reason why I asked this is that while sending the
I am learning how to use bacula to back up to external hard disks.
Everything works beautifully, but I notice that after a volume is purged
or pruned, it gets status Recycled and the actual file on disk stays.
That looks like perfect behavior for a tape backup, but for a hard disk
backup, I
On Mon, Oct 20, 2008 at 10:24 AM, John Drescher [EMAIL PROTECTED] wrote:
On Mon, Oct 20, 2008 at 10:00 AM, Dams [EMAIL PROTECTED] wrote:
Hi all,
I have already configured Bacula on my LAN. It's working fine.
But now I would like to configure bacula to back up my web server
(proxy as well
On Tue, Oct 21, 2008 at 10:41 AM, Tonton Dede [EMAIL PROTECTED] wrote:
Hello,
we have 1 director and 2 storage daemons installed on 2 servers, each
one with 1 tape drive.
Both SDs are configured and known by the director. In the director, the
Maximum Concurrent Jobs has been set to 2
You need
You have to run:
echo 16 |dbcheck -b -f -c /path/to/bacula-dir.conf -C dbname workingdir
vacuumdb -q -d dbname -z -f
reindexdb -q dbname
After running these commands my database size was reduced from 10 GB to 8.0
GB.
Just did this on my postgresql database. It went from 22.5 GB to 27 GB
-- Forwarded message --
From: John Drescher [EMAIL PROTECTED]
Date: Wed, Oct 22, 2008 at 11:04 PM
Subject: Re: [Bacula-users] Who talks to whom in bacula?
To: Kevin Keane [EMAIL PROTECTED]
On Wed, Oct 22, 2008 at 10:34 PM, Kevin Keane [EMAIL PROTECTED] wrote:
I would like
I started getting these recently on one of my file volumes:
23-Oct 12:03 fileserver-dir JobId 12922: Start Backup JobId 12922,
Job=CVSSource.2008-10-23_12.03.31
23-Oct 12:03 fileserver-dir JobId 12922: Using Device FileStorage
23-Oct 12:03 fileserver-sd JobId 12922: Volume Other-0216 previously