If your problem is authentication or encryption then I would
suggest you check out msmtp (http://msmtp.sourceforge.net/).
This SMTP client will use SSL/TLS for encrypted transport and
GSSAPI, Digest-MD5 and many more for authentication.
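For reference, a minimal ~/.msmtprc along those lines might look like this; the host, user, and addresses are hypothetical placeholders, not anyone's real setup:

```
# ~/.msmtprc - minimal authenticated, encrypted SMTP relay (sketch)
account default
host smtp.example.com
port 587
tls on
auth on
user jdoe
passwordeval "gpg -d ~/.msmtp-pass.gpg"
from jdoe@example.com
```

Point Bacula's mail command at msmtp (e.g. in the Messages resource) and it will relay through the authenticated server.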
-John
On Tue, Oct 21, 2008 at 07:21:07AM +0200, Peter
As a side note, I'm pretty sure you could shorten this definition to:
Schedule {
  Name = Schedule-apache
  Run = Level=Full Storage=Disk3-apache on 1,16 at 19:05
  Run = Level=Differential Storage=Disk3-apache on 8,23 at 19:05
  Run = Level=Incremental Storage=Disk3-apache on
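For comparison, a complete Schedule of this shape might read as below. The Incremental line's date spec is a guess (leaving the date spec off makes the Run fire every day), so treat this as a sketch rather than the original config:

```
Schedule {
  Name = Schedule-apache
  Run = Level=Full Storage=Disk3-apache on 1,16 at 19:05
  Run = Level=Differential Storage=Disk3-apache on 8,23 at 19:05
  Run = Level=Incremental Storage=Disk3-apache at 19:05
}
```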
Hi All,
I have a mix of disk and tape backups. To disk I allow up to
20 jobs to run concurrently. On my tape library I have 3 tape
drives, so I only allow a max of 3 jobs to run concurrently.
I run Full backups once a month, Differentials once a week
and incrementals most days of the week. I would
:48AM -0400, John Lockard wrote:
Hi All,
I have a mix of disk and tape backups. To disk I allow up to
20 jobs to run concurrently. On my tape library I have 3 tape
drives, so I only allow a max of 3 jobs to run concurrently.
I run Full backups once a month, Differentials once a week
The minimum setting I have on Max Concurrent Jobs is on the
Tape Library and that's set to 3. It appears that priority
trumps all, unless the priority is the same or better.
So, if I have one job that has priority of, say, 10, then
any job running on any other tape drive or virtual library
will
On Fri, Mar 20, 2009 at 04:11:55PM -0400, Jason Dixon wrote:
On Fri, Mar 20, 2009 at 03:46:49PM -0400, Jason Dixon wrote:
Just to be certain, I kicked off a few OS jobs just prior to the
transaction log backup. I also changed the Storage directive to use
Maximum Concurrent Jobs = 1 for
On Sat, Mar 21, 2009 at 05:10:09AM -0700, Kevin Keane wrote:
John Lockard wrote:
The minimum setting I have on Max Concurrent Jobs is on the
Tape Library and that's set to 3. It appears that priority
trumps all, unless the priority is the same or better.
So, if I have one job that has
Attached, please find updates to bacula_mail_summary.sh which
was in the examples/reports directory in the source distribution.
I run this script once a week, after the log has been rotated
by my systems logrotate script.
I've tweaked the display formatting quite a bit. Rather than
displaying
who care.
#
# For it to work, you need to have all Bacula job report
# logging to a file, edit LOGFILE to match your setup.
# This should be run after all backup jobs have finished.
# Tested with bacula-2.4.4
# Some improvements by: John Lockard jlock...@umich.edu
# (University of Michigan
Hi All,
Looking through the manual in the Message Resource section
I don't see 'FileSet' as one of the options. (Version 2.4.4).
Is this available but undocumented or should I be putting in
a software change request?
Reason I ask, is that an email telling me that a job for
'Server1' finished
Nevermind... I'm a moron.
On Fri, Apr 03, 2009 at 02:54:01PM -0400, John Lockard wrote:
Hi All,
Looking through the manual in the Message Resource section
I don't see 'FileSet' as one of the options. (Version 2.4.4).
Is this available but undocumented or should I be putting in
a software
The time jumps at 2am, either forward or backward depending on
whether you're switching to or from DST. Most admins I know
just completely avoid the time period from 1:00am to 3:00am
entirely because of the Daylight Saving Time switches.
If you're going to go UTC, then you should go UTC all the
I can't see a way in 2.4.x, but maybe it's present in the
3.0.x code... I would like to compress my Incremental backups,
but not my Differential backups or Full backups.
I keep my incremental backups on disk. They never transition
to Tape. My Differentials run weekly and I keep a week and a
Client and server at 2.4.4. Both client and server are Linux 2.6
Logs from client:
15-Apr 16:53 tibor-dir JobId 2954: Start Backup JobId 2954,
Job=Belobog-Data-Users.2009-04-15_16.32.07.04
15-Apr 16:53 tibor-dir JobId 2954: Using Volume 100108L2 from 'Scratch' pool.
15-Apr
is on a different LAN
segment.
James
-John
-Original Message-
From: John Lockard [mailto:jlock...@umich.edu]
Sent: Friday, 17 April 2009 01:38
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Backup failing reliably repeatable
Client and server at 2.4.4. Both client
I've just switched over from 2.4.4 to 3.0.0 so my familiarity
with new features is close to null.
Is there a way I can (maybe just for a specific job) output
to a file *everything* which is happening with a backup job?
I'd like to run a job and get a file containing which files
were backed up,
Nope, disabling tso and tx changed nothing.
-John
On Fri, Apr 17, 2009 at 12:24:25PM -0400, John Lockard wrote:
On Fri, Apr 17, 2009 at 09:39:24AM +1000, James Harper wrote:
Does belobog have the same network adapter and kernel as your other
servers?
It has the same ethernet (Intel e1000
Has anyone seen anything like this before?
21-Apr 14:10 tibor-dir JobId 3240: Start Backup JobId 3240,
Job=Belobog-Data3-Users.2009-04-21_12.10.22_07
21-Apr 14:10 tibor-dir JobId 3240: Using Device NEO-LTO-1
21-Apr 14:11 tibor-sd JobId 3240: Spooling data ...
21-Apr 16:10 tibor-dir JobId 3240:
Hi all,
Saw this last night. What would cause these Fatal errors?
Server: Linux 2.6.18 x86_64
Bacula Version:
Server: 3.0.0
Client: 2.4.4 (SPARC Solaris 8)
Filesystem: just under 1TB
Thanks for any help,
John
13-May 22:25 tibor-sd JobId 3833: Labeled new Volume Monthly-SIN-0363 on
On Thu, May 14, 2009 at 03:54:55PM -0400, John Drescher wrote:
On Thu, May 14, 2009 at 2:26 PM, John Lockard jlock...@umich.edu wrote:
Hi all,
Saw this last night. What would cause these Fatal errors?
Possible database corruption.
Other backups after this one, and concurrent ones, ran fine. I
Appears that in acl.c, line 1145 there's a stray ; at the
end of the line (version 3.0.1). Removal allows compilation
on Solaris.
-John
--
What good is a ring Mr. Baggins if you don't have
any fingers. - Agent Elrond - Matrix of the Rings
On Fri, May 15, 2009 at 01:30:57PM +0200, Bruno Friedmann wrote:
John Lockard wrote:
Hi all,
Saw this last night. What would cause these Fatal errors?
Server: Linux 2.6.18 x86_64
Bacula Version:
Server: 3.0.0
Client: 2.4.4 (SPARC Solaris 8)
Filesystem: just under 1TB
On Fri, May 15, 2009 at 04:28:23PM -0400, John Lockard wrote:
On Fri, May 15, 2009 at 01:30:57PM +0200, Bruno Friedmann wrote:
John Lockard wrote:
Hi all,
Saw this last night. What would cause these Fatal errors?
Server: Linux 2.6.18 x86_64
Bacula Version:
Server
On Thu, May 21, 2009 at 01:34:31PM +0300, Alnis Morics wrote:
Yes, I can list all the files but that doesn't mean I can back them up. When
I
try to run the job, it terminates with an error, and there's also nothing I
can restore.
Here's the output of the last job:
3743 21-May 13:12
Hi All,
I know I don't have full information here, but don't want to send along
my full config as I'll guess that's overkill.
22-May 06:17 tibor-sd JobId 4223: Please mount Volume 100027L2 or label a new
one for:
Job: Belobog-Data2-Users.2009-05-22_05.15.00_35
Storage:
at 11:03:51AM -0400, John Drescher wrote:
On Fri, May 22, 2009 at 10:45 AM, John Lockard jlock...@umich.edu wrote:
Hi All,
I know I don't have full information here, but don't want to send along
my full config as I'll guess that's overkill.
22-May 06:17 tibor-sd JobId 4223: Please mount
When you run a job by hand the schedule isn't involved.
Either way, for your Schedule entry you need Level=
before the word Full.
Schedule {
  Name = test
  Run = Level=Full at 11:50
}
But, your problem is that your Job doesn't have a Default
Level defined. You'll need
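A Job resource with a default level set might look roughly like this; all the names here are hypothetical, not from the original poster's config:

```
Job {
  Name = "test-job"
  Type = Backup
  Level = Full            # default used when neither schedule nor operator gives one
  Client = client1-fd
  FileSet = "Full Set"
  Schedule = test
  Storage = File
  Pool = Default
  Messages = Standard
}
```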
Also, when you post your configs, it would be a really good idea
to remove password and account information.
On Fri, May 22, 2009 at 02:12:13PM -0400, John Lockard wrote:
When you run a job by hand the schedule isn't involved.
Either way, for your Schedule entry you need Level=
before
to unload to slot and
failed. It tried the unload twice (within the same minute) failed
both times, then sat there waiting for me to issue the mount command,
which succeeded.
Could it be a timing issue?
-John
On Fri, May 22, 2009 at 11:58:44AM -0400, John Lockard wrote:
And if I mounted a tape
:
On Fri, May 22, 2009 at 11:58 AM, John Lockard jlock...@umich.edu wrote:
And if I mounted a tape immediately after and also made sure
that automount was set to yes? Whenever I unmount a tape I
always make sure to mount another in its stead.
I'll dig through previous and current job
On Sat, May 23, 2009 at 12:23:32PM -0500, Zhengquan Zhang wrote:
On Fri, May 22, 2009 at 02:21:26PM -0400, John Lockard wrote:
Also, when you post your configs, it would be a really good idea
to remove password and account information.
Thanks John, Can anyone use the passwords to connect
On Sat, May 23, 2009 at 12:11:28PM -0500, Zhengquan Zhang wrote:
On Fri, May 22, 2009 at 02:12:13PM -0400, John Lockard wrote:
When you run a job by hand the schedule isn't involved.
Either way, for your Schedule entry you need Level=
before the word Full.
Schedule {
Name = test
For backing up a laptop locally, bacula seems to be HUGE
overkill. How long will you be keeping these backups? How many
backups will you be keeping? My guess is that you'd be better
served by a little scripting, rsync and cron.
Each day, establish a new directory by date, then hourly run rsync
If you do 'status client=[clientname]', on the bottom of the
output you'll see the status of the last several jobs which
ran for that client.
I would keep doing what you're doing: moving old-fd to new-fd,
deleting the old client's config, and so on.
Your old backup files I would think should be
When I'm running a backup job I can check the status of the
backup job (with statistics) through bconsole, using the command
'status client=clientname', which gives results like:
Connecting to Client clientname at clientname.si.umich.edu:9102
clientname-fd Version: 3.0.1 (30 April 2009)
Check your /tmp directory or MySQL 'tmpdir' location to see if
it's filling up with temporary DB data. This same problem
happened to me; in my case it was dying at around the 180GB mark.
Moving the MySQL tmpdir to a much larger location took care of
my problem.
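The tmpdir move described above might look like this in /etc/my.cnf (the path is just an example):

```
[mysqld]
# Put temporary sort/table files on a partition with room to spare
tmpdir = /var/lib/mysql-tmp
```

Restart mysqld after the change, and make sure the new directory exists and is owned by the mysql user.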
-John
On Wed, Jul 08, 2009 at
On Fri, Jul 24, 2009 at 06:48:24AM +0200, Marc Cousin wrote:
In theory, the latency from random IO should be much closer to zero on a
flash drive than on a thrashing hard drive, so I was hoping I might need
only 1 or two 64GB or 128GB flash drives to provide decent spool size,
perhaps not
I have modified my query.sql to include some queries that
I use frequently and I thought maybe someone else would
find them useful additions. Also, I was wondering if anyone
had queries which they find useful and would like to share.
In my setup, I need to rotate tapes on a weekly basis to
keep
While the job is running, keep an eye on the system which houses
your MySQL database and make sure that it isn't filling up a
partition with temp data. I was running into a similar problem
and needed to move my mysql_tmpdir (definable in /etc/my.cnf)
to another location.
-John
On Wed, Aug 12,
On Tue, Aug 11, 2009 at 02:39:39PM -0400, John Lockard wrote:
I have modified my query.sql to include some queries that
I use frequently and I thought maybe someone else would
find them useful additions. Also, I was wondering if anyone
had queries which they find useful and would like
Yes, quite possible.
Check the examples/reports directory in the source tarball.
I've taken the reports.pl script and tweaked it to do
some things specific to me. It's very straightforward
and you just kick it off with cron.
-John
On Fri, Oct 08, 2010 at 10:38:37AM +0200, hOZONE wrote:
Best solution for this one... Run your backup server on UTC time rather
than local.
-John
On Mon, Mar 26, 2012 at 3:59 AM, Frank Seidinger
frank.seidin...@novity.dewrote:
Dear Bacula Users,
I think that I've found a minor bug in bacula concerning the adjustment
of clocks on the start of
I know this is probably a stupid question, but I've seen stupid questions
solve things in the past...
Are both your tape drive and tape at least LTO3? If your drive is LTO-3
and your tape is LTO-2, then your results make perfect sense.
-John
On Thu, Apr 19, 2012 at 1:07 AM, Andre Rossouw
I run into this issue with several of my servers and dealt with it by
creating migrate jobs. First job goes to disk. Second job runs some
reasonable time later and migrates the D2D job to tape. I had a number of
key servers I did this for with the advantage that I could offsite the
tapes and
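A disk-then-migrate setup along those lines could be sketched as below, with every name hypothetical; the key pieces are Type = Migrate, a Selection Type, and Next Pool on the source pool:

```
Pool {
  Name = DiskPool
  Pool Type = Backup
  Storage = File
  Next Pool = TapePool        # where migrated data lands
}
Job {
  Name = "migrate-to-tape"
  Type = Migrate
  Client = client1-fd
  FileSet = "Full Set"
  Pool = DiskPool             # migrate volumes from this pool
  Selection Type = Volume
  Selection Pattern = ".*"
  Messages = Standard
}
```

Scheduling the Migrate job some hours after the disk backup gives the D2D job time to finish before its data moves to tape.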
Yes, but which IO?
Disk IO on the client?
Network IO from the client to the network?
Network IO from the network to the Bacula Director?
Network IO from the Bacula Director to the Bacula SD?
Disk IO on the Bacula SD?
Database IO on the Bacula Director?
Seems like you have more work to do than
There are a number of scripts available which can be run via cron to give
you stats on your backup jobs. Check your Bacula source
directory/examples/reports
In here I found report.pl from Jonas Björklund which I modified to give me
specific information I was looking for, but it's an excellent
modified in the script). I'm sure what I've done could probably be done
more cleanly/efficiently/correctly... you're free to make those changes
as you wish.
-John
On Mon, Nov 10, 2014 at 10:35 AM, John Lockard jlock...@umich.edu wrote:
There are a number of scripts available which can be run via
Are you starting all your jobs at the same time? Wondering if you're having
issues with all of your jobs competing for bandwidth and slowing down.
Thinking a staggered start might help you.
On Thu, Dec 4, 2014 at 9:27 AM, Rai Blue raib...@gmail.com wrote:
Hi everyone,
I'm using Bacula 7.0 to
Compression?
Backup level (files not backed up because they haven't been changed)?
Exclusions?
Have you gone through a full list of files to be backed up and a full list
of the files which were actually backed up?
On Thu, May 7, 2015 at 1:42 PM, Romer Ventura rvent...@h-st.com wrote:
Hello,
You could almost do things this way. Unfortunately, you'll have to
occasionally wipe the mirror systems.
If I restored a full to the mirror machine, followed by a differential,
followed by any number of incremental backups, there's almost a 100% chance
that files which were deleted since a
Sorry for jumping in late, but was on vacation.
I use the attached script, which I modified from a script posted by Jonas
Björklund. You'll need to modify it to add your database values and email
specifics (if you want to lock them into the perl script). If you see
something wrong, please tell
How often are you backing up? Fulls, Differentials, Incrementals? How
long do you want to keep each? How compressible is your data? How much
does the data change? How often does the data change?
Too many variables to answer your questions as given.
Only full backups, once a month, you'd
kup levels.
On Tue, Oct 27, 2015 at 4:51 PM, Thing <thing.th...@gmail.com> wrote:
> Hi,
>
> It is a standard bacula configuration, data will also not change much. So
> from your estimate with compression a 3TB drive would seem the minimum.
>
> On 28 October 2015 at 09:14, John
I would use a "Copy" job so that impact to the client is minimal.
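A Copy job along those lines might be sketched like this (names hypothetical); PoolUncopiedJobs selects every job in the pool that has not yet been copied, and the copy reads from the SD's disk volumes, so the client is never contacted again:

```
Job {
  Name = "copy-to-tape"
  Type = Copy
  Client = client1-fd
  FileSet = "Full Set"
  Pool = DiskPool             # source pool; its Next Pool points at tape
  Selection Type = PoolUncopiedJobs
  Messages = Standard
}
```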
On Wed, Sep 16, 2015 at 2:04 PM, Kepler Mihály wrote:
> Hi!
>
> What is the good saving strategy if I have tape and disk storage too?
>
> How can I create backup from "client1" to both storage (tape and disk
>
"I'm not sure I see the utility of copying and pasting any of the other
formatted numbers here."
I do. Running calculations within a script, a tally of number of files,
job bytes, averages, etc.
On Fri, Sep 22, 2017 at 9:56 AM, Phil Stracchino
wrote:
> On 09/22/17 09:04,
I actually think the removal of the animated dots would make the site more
readable. The motion on the screen is sort of nauseating while trying to
read and makes me want to not read any more.
On Mon, May 7, 2018 at 5:37 AM, Sven Hartge wrote:
> On 07.05.2018 07:07, Kern
Fileset {
  Name = "cadat"
  EnableVss = no
  EnableSnapshot = no
  Include {
    Options {
      OneFS = no
      RegexDir = "/mnt/cdat-.*"
    }
    Options {
      OneFS = no
      Exclude = yes
      RegexDir = ".*"
    }
    File = "/mnt"
  }
}
Justin, looking at this, Within /mnt, doesn't your exclude
They didn't change the name, it's a fork.
On Mon, Aug 29, 2022 at 8:16 AM Elias Pereira wrote:
> Hello Marcin,
>
> Sorry for the question, but why did you change the name from "baculum" to
> "bacularis"? :D
>
> On Fri, Aug 26, 2022 at 6:14 PM Marcin Haba wrote:
>
>> Hello Everybody,
>>
>> We