On Mon, Jan 13, 2014 at 08:44:45PM +0400, Vladimir Skubriev wrote:
I need to know: how can I list the running jobs on a client?
I want all running jobs.
I need this to determine whether there are already running jobs on the server.
Or maybe the status of my last restore job. But for example I don't
On 14.01.2014 13:42, Gary Stainburn wrote:
On Monday 13 January 2014 16:44:45 Vladimir Skubriev wrote:
I need to know: how can I list the running jobs on a client?
I want all running jobs.
I need this to determine whether there are already running jobs on the server.
Or maybe the status of my last restore
On Monday 13 January 2014 16:44:45 Vladimir Skubriev wrote:
I need to know: how can I list the running jobs on a client?
I want all running jobs.
I need this to determine whether there are already running jobs on the server.
Or maybe the status of my last restore job. But for example I don't remember
the JobId.
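As a starting point, these bconsole commands usually answer both questions; the client name is a placeholder, and exact option names can differ between Bacula versions:

```
*status dir
*status client=myclient-fd
*list jobs client=myclient-fd
```

`status dir` shows the director's currently running jobs, while `list jobs` queries the catalog for past jobs, which is one way to recover a forgotten JobId.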
Hi,
I just installed Bacula 5.2.6 on Debian 7 with a Quantum Superloader 3
autochanger. The setup seems to work fine; I can use bconsole to mount and
label tapes. However, when I use the GUI I noticed my autochanger is shown
in the storage tab, but the entry under 'changer' says 'no'.
Why is bat
On Tuesday 14 January 2014 10:05:36 Vladimir Skubriev wrote:
Thank you very much. )
But this is of course overkill for me.
I only want to say: why is this not upstream?
echo message | mail -t k...@sibbald.com
A much simpler method would be
echo status dir | bconsole | grep ^ | grep
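A sketch of that grep approach (the original pipeline above is truncated): isolate the "Running Jobs" section of `status dir` output. The section header and the "is running" wording match typical Bacula director status text, but are assumptions to check against your version's actual output.

```shell
#!/bin/sh
# Filter the "Running Jobs" section out of director status text and keep
# only lines describing jobs that are actually running.
running_jobs() {
  sed -n '/^Running Jobs:/,/^====/p' | grep 'is running'
}

# Typical use (requires a configured bconsole):
#   echo "status dir" | bconsole | running_jobs
```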
Solved it. I had to add 'autochanger = yes' to the storage definition on
the director as well.
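For anyone hitting the same symptom, the relevant director-side resource would look roughly like this (the name, address, password, and media type are placeholders, not Nico's actual config):

```
# bacula-dir.conf -- Storage resource pointing at the autochanger
Storage {
  Name = Superloader3
  Address = storage.example.com
  SDPort = 9103
  Password = "secret"
  Device = Superloader3       # must match the Autochanger resource on the SD
  Media Type = LTO
  Autochanger = yes           # without this, bat shows 'changer: no'
}
```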
Nico
On Tue, Jan 14, 2014 at 10:29 AM, Nico De Ranter
nico.deran...@esaturnus.com wrote:
Hi,
I just installed Bacula 5.2.6 on Debian 7 with a Quantum Superloader 3
autochanger. The setup seems to
I've decided to write my own SQL query to just copy jobs that haven't been
copied in the last 24 hours (the disk-to-tape backup follows on the heels of
the disk-to-disk backup). I wanted to just keep the same job for the
disk-to-tape copy but use a SQL query instead of PoolUncopiedJobs. But I need
On Tue, Jan 14, 2014 at 02:17:12PM +, Steven Hammond wrote:
I've decided to write my own SQL query to just copy jobs that haven't been
copied in the last 24 hours (the disk-to-tape backup follows on the heels of
the disk-to-disk backup). I wanted to just keep the same job for the disk-to-tape
We have separate pools for each level (incremental, differential, full). I
have one schedule and one job for the disk-to-tape copy and was just trying to
keep it that way. I know I could create separate jobs/schedules. I'm open to
suggestions. Here is the current configuration for the disk to
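For comparison, a sketch of such a query against the standard Bacula catalog (PostgreSQL syntax). The 24-hour window, the status filter, and using PriorJobId to detect already-copied jobs are assumptions to adapt, not the poster's actual query:

```sql
-- Backup jobs from the last 24 hours that have not yet been copied.
-- Copy jobs record their source job in PriorJobId, so any JobId that
-- appears there has already been copied.
SELECT Job.JobId, Job.Name, Job.StartTime
  FROM Job
 WHERE Job.Type = 'B'
   AND Job.JobStatus = 'T'
   AND Job.StartTime > NOW() - INTERVAL '24 hours'
   AND Job.JobId NOT IN (SELECT PriorJobId
                           FROM Job
                          WHERE Type = 'C'
                            AND PriorJobId IS NOT NULL
                            AND PriorJobId <> 0);
```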
Hello,
I am running one instance using Bacula 2 and the other one using Bacula 5.
Is it possible to have parts of the backup saved as .zip (or any other format
that is easy to handle)?
I would like to separate parts of the backup and save them somewhere else, in
case of emergency.
Thank you!
On Tue, Jan 14, 2014 at 02:43:47PM +, Steven Hammond wrote:
We have separate pools for each level (incremental, differential, full). I
have one schedule and one job for the disk-to-tape copy and was just trying to
keep it that way. I know I could create separate jobs/schedules. I'm open
My guess is that during the migration from MySQL to Postgres, the
sequences in Bacula did not get seeded right and probably are starting
with a seed value of 1.
the filesetid field in the fileset table is automatically populated by
the fileset_filesetid_seq sequence.
Run the following two
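The commands referred to above are truncated, but the standard PostgreSQL reseed would look like this. A sketch assuming the default Bacula sequence names, with COALESCE guarding the empty-table case:

```sql
-- Reseed a sequence from the highest id currently in its table.
SELECT setval('fileset_filesetid_seq',
              COALESCE((SELECT MAX(filesetid) FROM fileset), 1));
-- The same pattern applies to the other Bacula sequences, e.g.:
SELECT setval('job_jobid_seq',
              COALESCE((SELECT MAX(jobid) FROM job), 1));
```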
No. They are like this:
Pool: Daily-Incr, Next Pool: Daily-Tape
Pool: Weekly-Diff, Next Pool: Weekly-Tape
Pool: Monthly-Full, Next Pool: Monthly-Tape
On 1/14/14 8:53 AM, Uwe Schuerkamp uwe.schuerk...@nionex.net wrote:
On Tue, Jan 14, 2014 at 02:43:47PM +, Steven Hammond wrote:
We have
On Tue, 14 Jan 2014 06:43:46 -0800
hendrik bacula-fo...@backupcentral.com wrote:
I am running one instance using Bacula 2 and the other one using
Bacula 5. Is it possible to have parts of the backup saved as .zip
(or any other format that is easy to handle)?
I would like to separate parts
Hi all,
I've recently set up a new Bacula director/storage daemon in preparation for
moving our existing backups to newer hardware. During testing, I've run into
problems doing restores of backups taken to disk, failing with the messages:
Error: block.c:275 Volume data error at 24:4294944994!
Dear Thomas,
In message 52d555c5.9070...@mtl.mit.edu you wrote:
My guess is that during the migration from MySQL to Postgres, the
sequences in Bacula did not get seeded right and probably are starting
with a seed value of 1.
Do you have any idea why this would happen? Is this something I
Wolfgang,
Dear Thomas,
In message 52d555c5.9070...@mtl.mit.edu you wrote:
My guess is that during the migration from MySQL to Postgres, the
sequences in Bacula did not get seeded right and probably are starting
with a seed value of 1.
Do you have any idea why this would happen? Is this
On 01/14/2014 02:26 PM, Thomas Lohman wrote:
I can't say exactly why it happened to you, but my guess would be that
this problem could hit anyone porting from MySQL to Postgres.
At a guess, migration scripts don't translate MySQL's autoincrement (or
identity, or whatever they call it) to
Dear Thomas,
In message 52d59d74.6000...@mtl.mit.edu you wrote:
Do you have any idea why this would happen? Is this something I can
influence?
Are there any other variables that might hit by similar issues?
I can't say exactly why it happened to you but my guess would be that
this
On 01/14/2014 04:57 PM, Wolfgang Denk wrote:
I didn't use any precanned procedure (is there one? I mean a
recommended/working one?). Basically, what I did was dump the DB under MySQL
and then import the dump into PostgreSQL.
That's why the sequences didn't get reinitialized properly.