Hi Folks,
I have a problem with my library. I always get an error when I make a backup or
restore, even though the backup or restore completes successfully.
[...]
Tape Error: dev.c:506 Unable to open device DELL-TL4000 (/dev/nst0): ERR=No medium found
Tape Ready to append to end of Volume 24L3
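Before Bacula opens the drive you can ask the kernel what it thinks is loaded. A quick check, assuming the changer's generic SCSI device is /dev/sg3 (yours may differ):

    mt -f /dev/nst0 status
    mtx -f /dev/sg3 status

ERR=No medium found is the kernel reporting that there was no tape in the drive at open time; with an autochanger that usually means the drive was opened before the load had finished.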
Hi,
sometimes the restore seems to work and ends with Status=OK, but it
always dies. If Status is OK, then files are restored, and hopefully
all are restored.
The message in the log file is always: bacula-fd: Bacula interrupted
by signal 11: Segmentation violation
Bacula informs me via mail:
I did the volume cleanup and had just about the number of volumes needed (or so I
thought). Then it started to work the way I expected, in that it recycled
volumes whose jobs had expired (7 days). But then one day someone put 32 GB of
data on, so it couldn't recycle enough volumes, so it auto
On Wed, Jun 13, 2012 at 12:13:19PM +0200, M. Müller wrote:
Hi,
sometimes the restore seems to work and ends with Status=OK, but it
always dies. If Status is OK, then files are restored, and hopefully
all are restored.
The message in the log file is always: bacula-fd: Bacula interrupted
On Wed, Jun 13, 2012 at 03:34:16AM -0700, sbooth wrote:
I did the volume cleanup and had just about the number of volumes needed (or so
I thought). Then it started to work the way I expected, in that it recycled
volumes whose jobs had expired (7 days). But then one day someone put 32 GB of
We have a 2 node Sun (now Oracle) Solaris HA Cluster that does NFS file
service. Only one node does service at a time, with an IP address that
moves to the other node on node failure.
Has anyone tried using Bacula to back up a Sun HA cluster?
I have done exactly that with an RHEL cluster, so long
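A minimal sketch of the usual approach (all names and the password below are made up): point the Director's Client resource at the floating service address rather than at either node, and install an identical bacula-fd.conf (same name and password) on both nodes, so whichever node currently holds the address answers as the same client:

    Client {
      Name = nfs-cluster-fd
      Address = nfs-service.example.com   # the floating HA address, not a node
      FDPort = 9102
      Catalog = MyCatalog
      Password = "changeme"
    }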
On Tue, Jun 12, 2012 at 03:31:12PM -0400, Clark, Patricia A. wrote:
I have several large file systems (1 TB) where I want to break them up to get
smaller backup streams in parallel to increase the throughput to tape. My
FileSet directive is below. I want everything in /home, but divided by
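One way to do that, sketched here with made-up subdirectory names: define one FileSet per slice of /home, run one job per FileSet, and raise Maximum Concurrent Jobs in the Director, Storage and Client resources so the jobs actually overlap:

    FileSet {
      Name = "home-part1"
      Include {
        Options { signature = MD5 }
        File = /home/projects
        File = /home/staff
      }
    }

    FileSet {
      Name = "home-part2"
      Include {
        Options { signature = MD5 }
        File = /home/students
      }
    }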
Hey Steve,
I had something similar happen to me here. Bacula will respect your retention
policies and your volume limit. So if you have 200 files and have
configured a limit of 205, when the next job starts it will recycle anything
it can, use any volume that is purged, then start
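In configuration terms, the knobs that drive this behaviour sit in the Pool resource; something like the following (the values are only an example):

    Pool {
      Name = File
      Pool Type = Backup
      AutoPrune = yes            # prune expired jobs when a job finishes
      Recycle = yes              # allow purged volumes to be reused
      Volume Retention = 7 days
      Maximum Volumes = 205
      Maximum Volume Bytes = 5G
    }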
Hi,
Is it possible to get the list of files from a specific volume?
For example:
I have an incremental backup job today at 15:00.
I would like to know what files were backed up since the last job.
Any reference would be great.
Thanks.
--
Eduardo Júnior
GNU/Linux user #423272
On 06/13/2012 01:25 PM, Eduardo Júnior wrote:
Hi,
Is it possible to get the list of files from a specific volume?
For example:
I have an incremental backup job today at 15:00.
I would like to know what files were backed up since the last job.
Try this SQL query (untested; it assumes the standard catalog schema, so
adjust the volume name and table names to your version):
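    SELECT DISTINCT Path.Path, Filename.Name
      FROM File
      JOIN Filename ON Filename.FilenameId = File.FilenameId
      JOIN Path     ON Path.PathId = File.PathId
      JOIN JobMedia ON JobMedia.JobId = File.JobId
      JOIN Media    ON Media.MediaId = JobMedia.MediaId
     WHERE Media.VolumeName = 'Volume0001';

Note that a job can span volumes, so this lists every file of each job that touched the volume. If you just want everything from one job, bconsole can do it without raw SQL ("list jobs" shows the JobIds; 1234 is only an example):

    list files jobid=1234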
Hello,
2012/6/13 Eduardo Júnior ihtrau...@gmail.com
Hi,
Is it possible to get the list of files from a specific volume?
For example:
I have an incremental backup job today at 15:00.
I would like to know what files were backed up since the last job.
Any reference would be great.
Sorry, I forgot the subject in my former message. Here it goes again.
Hi list,
Here's my setup:
DIR, SD and FD are on different virtual machines. They are Debian squeeze
vservers (all of them). Until recently we were running 2.4.4.1 (lenny) and
everything was fine. I had to update some servers in
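If only some of the machines were updated, it is worth checking that the Director and SD are not older than the FDs they talk to; a newer DIR/SD with older clients generally works, but the other way round does not. Each daemon prints its version at the top of its bconsole status output (the client and storage names here are placeholders):

    status dir
    status client=squeeze-fd
    status storage=File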
once you hit the maximum volume number in the pool, Bacula will purge
or recycle the oldest volume it can find in the pool, depending on your
pool configuration. Are you limiting the volume size or number of jobs
in any way?
All the best, Uwe
Yes, I am limiting the volume size to 5 GB and only
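bconsole will show which volumes are actually reusable; anything whose VolStatus is Purged or Recycle can be taken by the next job ("File" is only an example pool name):

    list volumes pool=File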