Hi list,
I've got a strange problem, and my searching hasn't turned up a
solution yet. I found people with similar problems, but none of the
suggested solutions match my situation here.
Randomly, some jobs don't seem to finish correctly. I'm running
bacula 2.4.3 on
It seems the heartbeat statement fixed my problem too :)
Thanks a lot!
Massimo
On Wed, 2008-12-31 at 10:00 -0500, Brian Willis wrote:
Massimo Schenone wrote:
Hi all,
I have the same question: how do you back up a remote Samba server with 60GB
of file system space?
I get an average throughput of
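For reference, the heartbeat setting mentioned above is presumably Bacula's
Heartbeat Interval directive, which keeps idle control connections alive
through firewalls/NAT during long jobs. A minimal sketch for bacula-fd.conf;
the name and the 60-second value are just examples:

FileDaemon {
  Name = quasar-fd             # example client name
  Heartbeat Interval = 60      # send a heartbeat every 60 seconds
}

The Storage daemon supports the same directive in its configuration.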
Updated to version 2.4.4, and the SQL error from the migrate job with no
files to migrate disappeared. The PoolID SQL error also seems to have gone
away. Thank you.
Greetings,
Pieter
This issue is solved in 2.5
Impressive response times!!
Regards, Angel
On Monday, 5 January 2009 12:42:09, Angel wrote:
On bacula restoring files with
Job {
  Name = RestoreFiles
  Type = Restore
  Client = quasar-fd
  FileSet = "Full Set"
  Pool = Default
  Messages =
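For anyone reading along in the archive: a complete restore Job of this shape
might look like the sketch below. The Storage and Where values are
assumptions, not Angel's actual settings.

Job {
  Name = RestoreFiles
  Type = Restore
  Client = quasar-fd
  FileSet = "Full Set"
  Pool = Default
  Storage = File                  # assumed storage resource name
  Messages = Standard
  Where = /tmp/bacula-restores    # assumed staging directory
}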
Jeff MacDonald wrote:
On 6-Jan-09, at 1:09 PM, Allan Black wrote:
setenv LDFLAGS -R/usr/postgres/8.3/lib
and then go back to the ./configure command. It might be an
Yup, I clued in afterwards and used crle:
crle -u -l /usr/postgres/8.3/lib
That is still not the recommended
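The LDFLAGS route Allan describes would look roughly like this (csh syntax;
the --with-postgresql path is an assumption):

setenv LDFLAGS "-R/usr/postgres/8.3/lib"
./configure --with-postgresql=/usr/postgres/8.3
make

This embeds the runpath in the binaries at link time, so the runtime linker
finds libpq without a machine-wide crle change.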
How does bacula determine the ifolder/ifnewer part of replacements?
Is it a matter of comparing the catalog with the restore area, or is it
doing it on the fly as it reads each file off the backup volumes?
On Tue, 6 Jan 2009, Jonathan Larsen wrote:
Is there a way to tell it to recycle regardless? That would help me better
determine which tapes I need to put into my autochanger.
You can do it manually but I really recommend NOT doing this.
I wrote a couple of query snippets to find tapes which
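One such snippet might look like the following sketch against the catalog's
Media table (MySQL syntax assumed; the 90-day cutoff is arbitrary):

SELECT VolumeName, VolStatus, LastWritten
  FROM Media
 WHERE VolStatus = 'Full'
   AND LastWritten < NOW() - INTERVAL 90 DAY;

That lists full volumes that haven't been written in three months, i.e.
likely candidates for manual recycling.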
Hi guys,
I have an Admin job configured to run on the 1st of January and on the 1st of
June to check my db for errors.
My problem is that this job has now run every day since the 1st of January 2009.
My Schedule looks like this:
Schedule {
  Name = dbcheck
  Run = on 1st jan at 23:59
  Run = on 1st june at
Thanks all, the update fixed the problem! Now all backups work just
as they should!
R2M
Tarja Harmokivi
Kista Entré, Box 1027, SE-164 21, KISTA
Switchboard: 08 - 633 13 00, Direct: 0733 - 709 512
E-mail: tarja.harmok...@r2m.se
Web: http://www.r2m.se
/R2M
The output test was:
dd if=/dev/zero of=/dev/nst0 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 7.80926 seconds, 134 MB/s
I have a server, say A, that has the Director and a Storage daemon with a
DAT72, and another server, say B, which has a Storage daemon and a
I think you should use /dev/random, not /dev/zero unless hardware
compression is disabled in order to have more realistic figures.
--
Ferdinando Pasqualetti
G.T.Dati srl
Tel. 0557310862 - 3356172731 - Fax 055720143
On Tue, Jan 6, 2009 at 6:44 PM, Mordechai T. Abzug mo...@frakir.org wrote:
On Tue, Jan 06, 2009 at 07:31:29AM -0500, John Drescher wrote:
[snip question about VSS and ntbackup backup]
You need both to back up the registry.
Thanks!
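On the Bacula side, VSS is enabled per FileSet; a minimal sketch with
placeholder names and paths:

FileSet {
  Name = "WindowsSet"            # placeholder
  Enable VSS = yes               # snapshot open files via Volume Shadow Copy
  Include {
    Options {
      signature = MD5
    }
    File = "C:/"                 # placeholder path
  }
}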
You would add multiple Run= commands (one for each level)
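In schedule terms that might look like the following sketch; pool names,
levels and times are placeholders:

Schedule {
  Name = "YearCycle"
  Run = Level=Full Pool=Monthly 1st sun at 23:05         # monthly set, archived offsite
  Run = Level=Incremental Pool=Daily mon-sat at 23:05    # rotating daily set
}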
Ferdinando Pasqualetti wrote:
I think you should use /dev/random, not /dev/zero unless hardware
compression is disabled in order to have more realistic figures.
This wouldn't be a good idea; /dev/random or /dev/urandom are just too
slow in generating random data. To test the native speed of
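A common workaround is to generate the random data on disk once and then
stream that file to the drive, e.g. (paths and sizes are just examples):

dd if=/dev/urandom of=/tmp/random.bin bs=1M count=1000    # slow, but done only once
dd if=/tmp/random.bin of=/dev/nst0 bs=1M                  # measures the drive, not the RNG

As long as the disk can read faster than the tape can write, the figure
reflects the drive's throughput with incompressible data.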
Ralf Gross wrote:
Ferdinando Pasqualetti wrote:
I think you should use /dev/random, not /dev/zero unless hardware
compression is disabled in order to have more realistic figures.
This wouldn't be a good idea; /dev/random or /dev/urandom are just too
slow in generating random data. To
T. Horsnell wrote:
Ralf Gross wrote:
Ferdinando Pasqualetti wrote:
I think you should use /dev/random, not /dev/zero unless hardware
compression is disabled in order to have more realistic figures.
This wouldn't be a good idea; /dev/random or /dev/urandom are just too
slow in
On Wed, 2009-01-07 at 17:57 +0100, Ralf Gross wrote:
T. Horsnell wrote:
Ralf Gross wrote:
Ferdinando Pasqualetti wrote:
I think you should use /dev/random, not /dev/zero unless hardware
compression is disabled in order to have more realistic figures.
This wouldn't be a good
Craig White wrote:
On Wed, 2009-01-07 at 17:57 +0100, Ralf Gross wrote:
T. Horsnell wrote:
Ralf Gross wrote:
Ferdinando Pasqualetti wrote:
I think you should use /dev/random, not /dev/zero unless hardware
compression is disabled in order to have more realistic figures.
On Jan 7, 2009, at 4:04 AM, Frank Altpeter wrote:
Hi list,
I've got a strange problem, and my searching hasn't turned up a
solution yet. I found people with similar problems, but none of the
suggested solutions match my situation here.
Randomly, some jobs don't
2009/1/7 Ralf Gross ralf-li...@ralfgross.de:
Sergio Belkin wrote:
2009/1/7 Ralf Gross ralf-li...@ralfgross.de:
T. Horsnell wrote:
Ralf Gross wrote:
Ferdinando Pasqualetti wrote:
I think you should use /dev/random, not /dev/zero unless hardware
compression is disabled in order
I can confirm the same trouble with versions 2.2.8 and 2.4.4b1.
What I did to bypass the problem was change my schedule to on 1st jan 2009
at ...
plus a reload.
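For what it's worth, in Bacula's schedule grammar "1st" is the week-of-month
keyword (first week), while a bare number selects a day of the month, so a
schedule meant for one day per run might be written like this sketch (my
reading of the grammar, not a confirmed fix for the bug mentioned above):

Schedule {
  Name = dbcheck
  Run = on 1 jan at 23:59     # day 1 of January, rather than the "1st" week
  Run = on 1 june at 23:59
}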
mail wrote:
Hi guys,
I have an Admin job configured to run on the 1st of January and on the 1st of
June to check my db
Sergio Belkin wrote:
The output test was:
dd if=/dev/zero of=/dev/nst0 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 7.80926 seconds, 134 MB/s
This looks ok.
Below is the output of tapeinfo -f /dev/sg0:
Product Type: Disk Drive
But sg0 is not
Running Jobs:
 JobId  Level    Name                                       Status
======================================================================
    59  Increme  client.hostname.tld.2009-01-07_00.30.46   has terminated
Director connected at: 07-Jan-09 09:57
Terminated Jobs:
 JobId  Level  Files
07-Jan 21:44 quasar-dir JobId 61: Start Backup JobId 61,
Job=Backup_Linux_Full.2009-01-07_21.44.06
07-Jan 21:44 quasar-dir JobId 61: Using Device FileStorage0
07-Jan 21:44 quasar-dir JobId 61: Fatal error: fd_cmds.c:256 Unimplemented
backup level 86 V
07-Jan 21:44 quasar-dir JobId 61: Error:
On Wed, 07 Jan 2009 12:38:50 +0100, mail said:
Hi guys,
I have an Admin job configured to run on the 1st of January and on the 1st of
June to check my db for errors.
My problem is that this job has now run every day since the 1st of January 2009.
My Schedule looks like this:
Schedule {
Name
So I'm a member of this site and used to be a moderator there for many
years. Every year LQ holds its member choice awards. I kept pushing for
a Backup award, and it's finally here. So go and vote for Bacula as the
best backup application and program for the year 2008. If you're not a
member, well, join
Hi!
2009/1/7 Andrea Conti a...@alyf.net:
Running Jobs:
 JobId  Level    Name                                       Status
======================================================================
    59  Increme  client.hostname.tld.2009-01-07_00.30.46   has terminated
Director connected at: 07-Jan-09 09:57
Craig White wrote:
On Wed, 2009-01-07 at 17:57 +0100, Ralf Gross wrote:
T. Horsnell wrote:
Ralf Gross wrote:
Ferdinando Pasqualetti wrote:
I think you should use /dev/random, not /dev/zero unless hardware
compression is disabled in order to have more realistic figures.
This wouldn't be
Sergio Belkin wrote:
Please always respond to the list...
OK, sorry, I am subscribed to many lists; I don't understand why
reply is not working...
SourceForge lists are a bit quirky in that when you reply, it puts the
original sender (in this email, Sergio) rather than the mailing
Martin Simmons wrote:
On Wed, 07 Jan 2009 12:38:50 +0100, mail said:
Hi guys,
I have an Admin job configured to run on the 1st of January and on the 1st of
June to check my db for errors.
My problem is that this job has now run every day since the 1st of January 2009.
My Schedule looks like this:
On Wed, Jan 07, 2009 at 10:45:28AM -0500, John Drescher wrote:
What I want is to have two sets of backup media -- a daily backup
media set that gets rotated all year long, and a monthly media set
that is a full backup that gets archived offsite. Can this be done
under bacula with one set
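In pool terms, that is usually modelled as two Pool resources with different
recycling and retention; a sketch with assumed values:

Pool {
  Name = Daily
  Pool Type = Backup
  Recycle = yes                 # rotate the daily set all year
  AutoPrune = yes
  Volume Retention = 1 month    # example value
}

Pool {
  Name = Monthly
  Pool Type = Backup
  Recycle = no                  # keep the offsite archive set
  Volume Retention = 10 years   # example value
}

The Schedule's Run lines then point each level at the right pool, as in the
earlier reply about multiple Run= commands.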
Craig White wrote:
On Wed, 2009-01-07 at 17:57 +0100, Ralf Gross wrote:
T. Horsnell wrote:
Ralf Gross wrote:
Ferdinando Pasqualetti wrote:
I think you should use /dev/random, not /dev/zero unless hardware
compression is disabled in order to have more
Yes, but most people use hardware compression with LTO drives. Sooner
or later he has to test the drive with compression.
Funny thing is that Amanda developers are adamant that you disable
hardware compression and use software compression instead.
Do they even say this for LTO4? I mean
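For anyone who wants to try the software route, compression in Bacula is set
per FileSet; a minimal sketch with placeholder names and paths:

FileSet {
  Name = "CompressedSet"        # placeholder
  Include {
    Options {
      signature = MD5
      compression = GZIP        # gzip in the FD, so disable compression on the drive
    }
    File = /data                # placeholder path
  }
}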