> It would probably help get a response if you give some details about
> "this seems to work no longer".
>
> Why doesn't it work? What does it do? What is the exact output?
>
> On 02/24/2016 04:04 AM, Dietz Pröpper wrote:
> > Hi,
> >
> > I have a
Hi,
I have a little problem:
I have an autochanger with two devices. In former times, after I issued a
"unmount storage=blah drive=0" the sd status for that device was displayed as
"BLOCKED by user", which should still be the case, according to the
documentation.
For some months now, this seems
On Thursday, 3 July 2014 16:58:22, antonio.mannatzu wrote:
-bash-4.1$ psql bacula
psql: FATAL: database bacula does not exist
DETAIL: The database subdirectory base/16386 is missing.
This looks like your database got corrupted. Not directly a Bacula problem.
regards
Dietz
--
Adrian Reyer:
Make sure you have cache RAM on your RAID controller and a battery
backup unit (BBU) installed. MySQL and Postgres like to write
synchronously. With BBU+cache the write is completed as soon as the
controller has the data; no need to wait for the disks. I doubt RAID0
would gain you much, if anything.
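The cost of synchronous writes is easy to see from user space: a database that calls fsync() after every commit pays a storage round trip per transaction, which is exactly what a battery-backed write cache absorbs. A minimal sketch (timings will vary wildly by storage):

```python
import os
import tempfile
import time

def timed_writes(n: int, do_fsync: bool) -> float:
    """Write n small records, optionally fsync'ing after each one
    (the way a database makes each commit durable)."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(n):
            os.write(fd, b"x" * 512)
            if do_fsync:
                os.fsync(fd)  # force the data down to stable storage
        return time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)

buffered = timed_writes(100, do_fsync=False)
durable = timed_writes(100, do_fsync=True)
print(f"buffered: {buffered:.4f}s  fsync-per-write: {durable:.4f}s")
```

On a controller with BBU+cache the fsync path returns as soon as the data hits controller RAM, so the two numbers converge; on bare disks the fsync loop is dominated by rotational latency.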
Dan Langille:
On Dec 29, 2011, at 7:55 AM, ninyo wrote:
No. I'm not working with barcodes
o_0'
Is this because there is no barcode reader or because you don't want to
use the one that is there?
He mailed the output of mtx status way up the thread, and it seems to me
that he does not
Mario Moder:
On 23.12.2011 12:07, Dietz Pröpper wrote:
Now the question, is it possible to have the incremental jobs run with
data spooling on and the full ones with spooling off? As far as I can
see from the manual, I can not accomplish that by means of a Schedule
resource
Hi,
the problem: I've got a backup job that does a full backup once a week
and incremental ones every other day. The full backup streams at the
nominal data rate of the tape device, which is fine. But for incremental
jobs, the rate is normally far below that.
Now the question, is
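For what it's worth, in the Bacula versions I've looked at, the Run directive inside a Schedule resource does accept a SpoolData override per run, which would give exactly this per-level behaviour. A sketch, to be checked against the manual for your release (schedule name and times are made up):

```
Schedule {
  Name = "WeeklyCycle"
  # Full backups stream straight to tape - no spooling
  Run = Level=Full SpoolData=no sun at 23:05
  # Incrementals get spooled first, then despooled at tape speed
  Run = Level=Incremental SpoolData=yes mon-sat at 23:05
}
```

The SpoolData override on a Run line takes precedence over the Job resource's setting for jobs started by that schedule entry.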
Johnston, James C. (GRC-RXP0):
I've got a large (~16 TB) file server that I want to back up onto a
~40TB RAID array connected to the fileserver via fiber channel (so
there is no backup traffic on our network).
OK, I've never done stuff at that size. Just as a warning, read on ;-).
I need
Uwe Bolick:
On Thu, Dec 08, 2011 at 07:58:09PM +0100, Dietz Pröpper wrote:
Uwe Bolick:
Hi,
I need to restore files from tape, but this time without success,
because the needed tape cannot be loaded. After loading
the tape, it gets unloaded immediately
Quoting myself:
Uwe Bolick:
It's really strange, because everything seems OK. If I do it by hand,
I get:
Sorry, but you did not do it by hand ;-). And from what I can see in
the logs you mailed, it's the tape in slot 3 that gets rejected.
You did it by hand ;-). Had to read it another time to
Uwe Bolick:
Hi,
I need to restore files from tape, but this time without success,
because the needed tape cannot be loaded. After loading
the tape, it gets unloaded immediately:
[...]
What happens if you try to load the tape by hand, i.e. by means of a mtx
load 3, and try to
Carsten Pache:
A few days ago I had to restore some files. After the files were
restored successfully, the VolStatus (shown by list volumes) of the
two tapes that were needed during restore changed from Append to
Used. Is this an expected behaviour?
Do you have a maximum use time configured?
Fahrer, Julian:
Hi,
Am I missing something or is the Volume Use Duration parameter in the
pool not working in 5.2.1?
[...]
MediaIds 5 and 8 should actually be in status Used. It is 2011-11-24
17:00 right now...
Same thing for other pools with Volume Use Duration = 12 days
Any hints?
Am I
Hi,
I use several pools for daily, weekly and monthly backups. To be able to
remove the tapes of the monthly backup without bacula trying to use the
leftover space on the last tape of one month again the next month, I have
defined a suitable Volume Use Duration:
Pool {
Name = Monthly
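The quoted pool definition is cut off above; a sketch of what such a resource typically looks like, with an illustrative duration value (taken from the 12-day figure mentioned elsewhere in this thread, not from the original config):

```
Pool {
  Name = Monthly
  Pool Type = Backup
  # After a volume's first write, accept appends only for this long;
  # once it expires the volume is marked Used and a fresh tape is requested.
  Volume Use Duration = 12 days
}
```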
Martin Simmons:
On Thu, 5 May 2011 09:18:45 +0200, Dietz Pröpper said:
And now the question - is there a way to accomplish the desired
behaviour (the tape gets set to Used immediately after the use
duration expires, not the next time the pool is touched) by some
other means?
Martin Simmons:
On Fri, 29 Apr 2011 14:29:33 +0200, Dietz Pröpper said:
To see whether the file system is indeed the bottleneck, you could
try to tar the fs to /dev/null and compare the transfer rate to
that of your bacula backup.
Good advice, but beware that GNU tar doesn't read any
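The same measurement can be made without tar at all (which also sidesteps tar's special handling of /dev/null): walk the tree, read every file, and discard the data. A minimal sketch:

```python
import os
import time

def read_throughput(root: str, bufsize: int = 1 << 20) -> float:
    """Read every regular file under root, discarding the data,
    and return the achieved rate in MB/s."""
    total = 0
    start = time.perf_counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    while chunk := f.read(bufsize):
                        total += len(chunk)
            except OSError:
                pass  # skip unreadable files, much as tar warns and continues
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6 if elapsed else 0.0
```

Call it as `read_throughput("/path/to/fs")` and compare the result with the rate Bacula reports for the same fileset; if the numbers match, the file system (not the network or the tape) is the limit.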
Jason Voorhees:
On Thu, Apr 28, 2011 at 12:01 PM, John Drescher dresche...@gmail.com
wrote:
So do you believe these speeds of my backups are normal? I thought my
tape library with LTO-5 tapes could write at approx. 140 MB/s. Isn't
it possible to achieve higher speeds?
You need to speed
Jason Voorhees:
Well, these are my results of a bonnie++ test:
[...]
Version 1.03e       --Sequential Output--    --Sequential Input-    --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
Hi,
is it possible to attach files (e.g. bsr files) to
backup status mails?
At the moment I use a post-backup job to mail these, but it
would be nice if I could attach them directly to the Backup
successful mails.
regards,
Dietz
Hi,
(if that has been already asked, sorry, feel free to provide a pointer...)
last weekend I found time to upgrade a 2.4.2 Bacula to 5.0.3.
Apart from an exploding MySQL on restore, everything worked fine.
But now I'm trying to look into accurate backups and feel confused ;-).
To put it
Laxansh K. Adesara:
I have installed bacula-dir on Ubuntu and bacula-client on a Windows
machine, and have also configured it so I am able to connect to bacula-dir
from the bacula client machine via the console. Now I want to do backups
using an LTO-4 tape drive which is installed on a Windows 2003 server, and I
Peter Zenge:
Remember also that if you are trying to minimize tape/disk/other backup
media space used, and using encryption, you will need to use software
compression. The FD compresses before encrypting; once encrypted, as
noted above, the data is no longer compressible...
Ack, that was
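The point is easy to demonstrate: well-encrypted output is statistically indistinguishable from random bytes, and random bytes do not shrink under DEFLATE, while plain text does. A small sketch using random data as a stand-in for ciphertext:

```python
import os
import zlib

text = b"the quick brown fox jumps over the lazy dog\n" * 2000
random_like = os.urandom(len(text))  # encrypted data looks like this

# Repetitive text compresses dramatically...
print(len(zlib.compress(text)), "bytes from", len(text))
# ...while random/"encrypted" input actually grows slightly
# (stored-block overhead), which is why you must compress first.
print(len(zlib.compress(random_like)), "bytes from", len(random_like))
```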
James Harper:
Hi all,
I've got a problem: a server in our server farm runs some perl
scripts that are badly affected by the Bacula backup.
Until our developers return from holiday (damn), I need to limit
bacula's speed.
Our servers are connected by a 100Mbps ethernet network,
Phil Stracchino:
On 08/14/10 09:34, Dietz Pröpper wrote:
Peter Zenge:
Remember also that if you are trying to minimize tape/disk/other
backup media space used, and using encryption, you will need to use
software compression. The FD compresses before encrypting; once
encrypted, as noted
You:
On Aug 13, 2010, at 2:23 PM, Dietz Pröpper wrote:
You:
On Aug 13, 2010, at 4:10 AM, Dietz Pröpper wrote:
IMHO there are two problems with hardware compression:
1. Data mix: The compression algorithms tend to work quite well on
compressible stuff, but can't cope very well
Rory Campbell-Lange:
On 12/08/10, Mike Hanby (mha...@uab.edu) wrote:
I'm curious whether others with the PV-124T with LTO4 are using
hardware or software compression.
I am testing a new Bacula deployment with one of these autoloaders /
drives and haven't found a good suggestion as to
John Drescher:
On Thu, Aug 12, 2010 at 6:59 PM, Mike Hanby mha...@uab.edu wrote:
Howdy,
I'm curious whether others with the PV-124T with LTO4 are using
hardware or software compression.
I am testing a new Bacula deployment with one of these autoloaders /
drives and haven't found a
John Drescher:
On Fri, Aug 13, 2010 at 4:12 AM, Dietz Pröpper di...@rotfl.franken.de
wrote:
John Drescher:
On Thu, Aug 12, 2010 at 6:59 PM, Mike Hanby mha...@uab.edu wrote:
Howdy,
I'm curious whether others with the PV-124T with LTO4 are using
hardware or software compression
You:
On Aug 13, 2010, at 4:10 AM, Dietz Pröpper wrote:
IMHO there are two problems with hardware compression:
1. Data mix: The compression algorithms tend to work quite well on
compressible stuff, but can't cope very well with precompressed stuff,
i.e. encrypted data or media files
Phil Stracchino:
Neither of these issues is applicable to LTO. The compression algorithm
(which is a pretty good one) is defined in the LTO specification, and
the drive compresses data block-by-block, doing a trial compression of
each data block and writing whichever is the smaller of the
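The per-block trial compression Phil describes can be sketched in a few lines: compress each block, keep whichever form is smaller, and tag it with a flag byte so the reader knows what it got. This is purely illustrative, not the actual LTO algorithm (which is LZ-based and defined in the spec):

```python
import zlib

BLOCK = 64 * 1024  # illustrative block size

def write_blocks(data: bytes) -> bytes:
    """Trial-compress each block; store the smaller of raw/compressed,
    prefixed with a 1-byte flag and a 4-byte big-endian length."""
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        packed = zlib.compress(block)
        if len(packed) < len(block):
            out += b"\x01" + len(packed).to_bytes(4, "big") + packed
        else:
            out += b"\x00" + len(block).to_bytes(4, "big") + block
    return bytes(out)

def read_blocks(stream: bytes) -> bytes:
    """Reverse write_blocks: honour the flag byte per block."""
    out = bytearray()
    pos = 0
    while pos < len(stream):
        flag = stream[pos]
        n = int.from_bytes(stream[pos + 1:pos + 5], "big")
        body = stream[pos + 5:pos + 5 + n]
        out += zlib.decompress(body) if flag else body
        pos += 5 + n
    return bytes(out)
```

Compressible blocks shrink; already-compressed or encrypted blocks are stored raw with only a few bytes of overhead each, which is why this scheme never blows a mixed data stream up the way unconditionally compressing everything would.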
Derek Harkness:
I'm getting the following error between one of my clients and the
storage server. All my other clients have been able to back up just
fine to this storage device, but this one client always returns
Connection reset by peer.
I've adjusted the heartbeat and the keepalive
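For reference, the directive usually involved here is Heartbeat Interval, which is available in the FD, SD, and Director resource configurations; it keeps idle data connections alive through stateful firewalls and NAT timeouts. A sketch (resource name and value illustrative):

```
# bacula-fd.conf - analogous directives exist in the SD and DIR
FileDaemon {
  Name = client-fd            # hypothetical name
  Heartbeat Interval = 60     # send a keepalive every 60 seconds
}
```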
Oddbjørn Sjøgren:
# mtx status
mtx: Request Sense: Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=70 (Current)
mtx: Request Sense: Sense Key=Unit Attention
mtx: Request Sense: FileMark=no
mtx: Request Sense: EOM=no
mtx: Request Sense: ILI=no
mtx: