[Bacula-users] Possible bug in database update code

2011-08-09 Thread Frank Altpeter
Hi there,

It's possible that I've just hit a bug in the routines that update
the catalog database.

My environment is as follows:

Server OS: SUSE Linux Enterprise Server 11 (x86_64)
Bacula Version: 5.0.3
MySQL Version: 5.0.67
Catalog Database size: 49G

The configuration runs incremental backups daily, except on Sunday
night when the full backup runs. Full backups go from client to disk
via vchanger. After all full backups are done, a copy job runs on
Monday afternoon which moves the full backups from vchanger to an
LTO-4 tape changer.
Usually this setup works fine.

Today (Tuesday) one of my clients ran a full backup where it should
have run an incremental. The log says "Prior failed job found in
catalog. Upgrading to Full." That surprised me, because I know that
yesterday's incremental backup ran fine.

So I checked the database entries for this client. I've put the output of

select 
JobId,PriorJobId,Job,Type,Level,JobStatus,SchedTime,StartTime,EndTime,RealEndTime
from Job where Name like 'host.customer.com%' order by JobId;

into http://pastebin.com/4CVHzqaD for review.

The interesting lines are the ones with JobId 113187 and 112500. I
don't know why these two jobs have identical values for StartTime and
EndTime. I suspect that a catalog update routine went wrong here
because it was confused by the running copy job, and I see that
database entry as the reason why today's incremental job thinks the
last job failed.
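
For anyone who wants to check their own catalog for the same symptom,
a query along these lines (just a sketch; the client name pattern is
the same placeholder as above) lists backup jobs of one client that
share identical StartTime and EndTime values:

select a.JobId, b.JobId, a.StartTime, a.EndTime
from Job a join Job b
  on a.StartTime = b.StartTime
 and a.EndTime   = b.EndTime
 and a.JobId     < b.JobId
where a.Name like 'host.customer.com%'
  and b.Name like 'host.customer.com%';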

I'm not good at debugging the code myself, but if more data is needed
to find and eliminate this bug, let me know.


-- 
Le deagh dhùraghd,

        Frank Altpeter



[Bacula-users] fstype=bind being ignored

2011-01-19 Thread Frank Altpeter
Hi list,

After a short discussion on the channel, I was advised to create a
post here to see if there's help for that or if it's really a bug.

I've got a machine that uses bind mounts. Despite the fact that the
fileset definition uses onefs=no and fstype=ext3, the content of the
bind mounts is backed up as well, which results in the same content
being saved multiple times.

I know I could simply add the relevant mount points to the exclude
list in the fileset (see the sketch below), but I think it would make
sense if bind mounts were handled like other fstype configurations,
since I don't want to tweak the fileset manually on every change of
the bind mounts, and I'd like to keep the default fileset simple and
generic.
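
For illustration, the manual workaround would look roughly like this
(fileset name and paths are placeholders, not my real mount points):

FileSet {
  Name = "LinuxDefault"
  Include {
    Options {
      signature = MD5
      onefs     = no
      fstype    = ext3
    }
    File = /
  }
  Exclude {
    File = /srv/bindmount1     # each bind mount listed by hand
    File = /srv/bindmount2
  }
}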

I've put some additional information on http://racoon.pastebin.com/r44RTxyP

The bacula-fd is version 2.4.4 on openSUSE 11.1, and the server is
running 5.0.3 on SLES 11.1.


Any hints appreciated.



-- 
Le deagh dhùraghd,

        Frank Altpeter



Re: [Bacula-users] Problems with usage of Copy Jobs

2010-10-22 Thread Frank Altpeter
Hi,

I'm still curious about this because it is still present in the
current 5.0.3 implementation of bacula. Is there anything new about
the behaviour of copy jobs? Anyone else with ideas to discuss?
Generally the copy jobs seem to work, but with almost 140 clients the
database gets quite blown up with unnecessary job entries if every
copy job produces two job entries. On top of that, there is no real
way to stay informed about the status by mail when all the jobs have
the same (parent) name.
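
To get a feeling for how much of the catalog the copy machinery takes
up, a rough query like the following (just a sketch) counts the job
records per type: 'B' are the original backups, while 'c' and 'C' are
the copy control jobs and copied-job records visible in the table
quoted below.

select Type, count(*) as jobs from Job group by Type;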



2010/7/23 Frank Altpeter frank.altpe...@gmail.com:
 Moin,

 2010/7/23  c.kesch...@internet-mit-iq.de:
 That seems strange. I presume you also didn't get a mail for job 54425?

 Correct.

 I also don't know what to make of this. You could check the volumes to see if
 there was actually data written for the incremental job.

 I checked, and it seems that the job did not actually run, but only
 got entered (and displayed) twice, as the database tells me:

 mysql> select JobId,Job,Name,Type,Level,JobFiles,JobBytes,ReadBytes
 from Job where JobId in ( 54423, 54424, 54425 ) limit 10;
 +-------+--------------------------------------------+---------------------+------+-------+----------+-----------+-----------+
 | JobId | Job                                        | Name                | Type | Level | JobFiles | JobBytes  | ReadBytes |
 +-------+--------------------------------------------+---------------------+------+-------+----------+-----------+-----------+
 | 54423 | balance.company.com.2010-07-23_11.01.54_18 | balance.company.com | B    | F     |    52624 | 362371099 | 899243991 |
 | 54424 | DiskToTape.2010-07-23_11.11.21_19          | DiskToTape          | c    | F     |        0 |         0 |         0 |
 | 54425 | balance.company.com.2010-07-23_11.11.22_20 | balance.company.com | C    | F     |    52624 | 369022937 |         0 |
 +-------+--------------------------------------------+---------------------+------+-------+----------+-----------+-----------+

 Quite strange, since jobid 54424 does not have any data according to
 the mysql entry.



 Le deagh dhùraghd,

         Frank Altpeter




-- 
Le deagh dhùraghd,

        Frank Altpeter



Re: [Bacula-users] Bacula GDB traceback of bacula-sd

2010-10-14 Thread Frank Altpeter
Moin,

sorry for the late reply, but the last few days have been quite busy :-)

2010/10/6 Martin Simmons mar...@lispworks.com:
 On Wed, 6 Oct 2010 11:36:33 +0200, Frank Altpeter said:

 I just hit a segmentation fault with my bacula installation. I hope
 someone can help me how to get this fixed.

 The actual error was Resource temporarily unavailable but it is being
 reported with level M_ABORT so you get the segmentation fault.

 Maybe it is running out of memory or hitting some limit?  Can you monitor the
 size of the sd process periodically to see if it is growing over time?

It looks like you could be right with that. Currently the process is
quite large. top reports:

Mem:   3743792k total,  3641244k used,   102548k free,    20448k buffers
Swap: 10490404k total,   115120k used, 10375284k free,  1034424k cached

  PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+  COMMAND
14169 root   20   0 2112m 2.0g  916 S   97 55.1   4903:21  bacula-sd
 3637 mysql  20   0  535m 393m 2500 S    3 10.8  761:01.58 mysqld
 2795 root   20   0 2105m  11m 1264 S    2  0.3  164:31.55 bacula-dir

It looks like the sd stays stable at that size, but I will keep an eye on it.
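
For the record, something as simple as the following loop should be
enough to watch whether the daemon grows over time (log path and
interval are arbitrary choices, not anything Bacula-specific):

while true; do
    # one timestamped line with VSZ and RSS of the storage daemon
    echo "$(date) $(ps -o vsz=,rss= -C bacula-sd)" >> /var/tmp/bacula-sd-size.log
    sleep 3600
done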

Le deagh dhùraghd,

        Frank Altpeter



[Bacula-users] Bacula GDB traceback of bacula-sd

2010-10-06 Thread Frank Altpeter
Hi list,

I just hit a segmentation fault in my bacula installation. I hope
someone can help me get this fixed.

This has now happened twice within a week, so I think there might be a
bug in the sd code.

Environment information:

Version: 5.0.3 (04 August 2010)
OS: SUSE Linux Enterprise Server 11 (i586) Patchlevel 1

Configure flags (manual build, no binary package):

./configure \
--exec-prefix=/usr \
--sysconfdir=/etc/bacula \
--localstatedir=/var \
--mandir=/usr/share/man \
--datadir=/usr/share \
--with-openssl \
--with-mysql \
--with-dir-group=bacula \
--with-sd-group=bacula \
--enable-smartalloc \
--enable-batch-insert

storage daemon configuration:

Storage { # definition of myself
  Name = backup-sd
  SDPort = 9103  # Director's port
  WorkingDirectory = /var/bacula/working
  Pid Directory = /var/run
  MaximumConcurrentJobs = 270
}
Device {
Name= FileStorage
MediaType   = File
DeviceType  = File
ArchiveDevice   = /daten/backups/bacula/FileStorage
LabelMedia  = yes
RandomAccess= yes
AutomaticMount  = yes
RemovableMedia  = no
AlwaysOpen  = no
}
Device {
Name= FileStorageFull
MediaType   = FileFull
DeviceType  = File
ArchiveDevice   = /daten/backups/bacula/FileStorageFull
LabelMedia  = yes
RandomAccess= yes
AutomaticMount  = yes
RemovableMedia  = no
AlwaysOpen  = no
}
Autochanger {
Name= T24-Changer
Device  = LTO-4-1
ChangerDevice   = /dev/sg7
ChangerCommand  = /etc/bacula/mtx-changer %c %o %S %a %d
}
Device {
Name= LTO-4-1
DriveIndex  = 0
Autochanger = Yes
ArchiveDevice   = /dev/nst1
DeviceType  = Tape
MediaType   = LTO-4
AlwaysOpen  = Yes
RemovableMedia  = Yes
RandomAccess= No
RequiresMount   = No
AutomaticMount  = yes
AutoSelect  = yes
LabelMedia  = no
MaximumBlockSize= 262144
MaximumNetworkBufferSize= 65536
MaximumFileSize = 5G
SpoolDirectory  = /var/bacula/spool
MaximumChangerWait  = 600
}


I have attached the gdb traceback output because it's quite a large
file. I hope that's ok.


-- 
Le deagh dhùraghd,

        Frank Altpeter
Missing separate debuginfo for /lib/libz.so.1
Try: zypper install -C 
debuginfo(build-id)=9f5e4b386d9826b14a48677c23dcdf8a2cb45bff
Missing separate debuginfo for /lib/libpthread.so.0
Try: zypper install -C 
debuginfo(build-id)=3043b78c80daa60cdb3d347dcb33f00bd1551163
Missing separate debuginfo for /lib/libdl.so.2
Try: zypper install -C 
debuginfo(build-id)=cd785200787e37f6917cc1c87966fb4404e65297
[Thread debugging using libthread_db enabled]
[New Thread 0xbf45cb70 (LWP 18527)]
[New Thread 0x9f4f0b70 (LWP 18525)]
[New Thread 0x5041b70 (LWP 18524)]
[New Thread 0xa44fab70 (LWP 14328)]
[New Thread 0xa6ffbb70 (LWP 3933)]
[New Thread 0xb6a56b70 (LWP 5866)]
Missing separate debuginfo for /usr/lib/libssl.so.0.9.8
Try: zypper install -C 
debuginfo(build-id)=1df05ccab62c61feacf8b1b41930d73aa5b88489
Missing separate debuginfo for /usr/lib/libcrypto.so.0.9.8
Try: zypper install -C 
debuginfo(build-id)=5fa488ae806481e4f6f787b52f1bbc71f9efa3d0
Missing separate debuginfo for /usr/lib/libstdc++.so.6
Try: zypper install -C 
debuginfo(build-id)=a06c7686acca71405f69e506919c07c1db9ea518
Missing separate debuginfo for /lib/libm.so.6
Try: zypper install -C 
debuginfo(build-id)=293aa876def3cb4177e0389f96c11e76b839b7e4
Missing separate debuginfo for /lib/libgcc_s.so.1
Try: zypper install -C 
debuginfo(build-id)=815f111c78054fb492b3e12dd7bef2d7efc808b7
Missing separate debuginfo for /lib/libc.so.6
Try: zypper install -C 
debuginfo(build-id)=d47431471a179a11c4f04201dca9fb320d26e78d
Missing separate debuginfo for /lib/ld-linux.so.2
Try: zypper install -C 
debuginfo(build-id)=a18c87099e8bcecf243e2c48b2db7b11bc51816b
0xe430 in __kernel_vsyscall ()
$1 = '\000' repeats 29 times
$2 = 0x809d048 bacula-sd
$3 = 0x809d070 /usr/sbin/bacula-sd
$4 = 0x0
$5 = 0xb771971e 5.0.3 (04 August 2010)
$6 = 0xb771973d i686-pc-linux-gnu
$7 = 0xb771974f suse
$8 = 0xb771973a 11
$9 = backup, '\000' repeats 43 times
$10 = 0xb7719735 suse 11
$11 = 0
Environment variable TestName not defined.
#0  0xe430 in __kernel_vsyscall ()
#1  0xb76af4bb in waitpid () from /lib/libpthread.so.0
#2  0xb770605e in signal_handler (sig=11) at signal.c:229
#3  signal handler called
#4  Jmsg (jcr=0x0, type=1, mtime=0, fmt=0xb771ceeb %s) at message.c:1298
#5  0xb76fbde4 in j_msg (file

[Bacula-users] Display of device content in BAT

2010-08-03 Thread Frank Altpeter
Hi list,

A little thing I've noticed with the current bat (5.0.2)... when I
query the device status of my Tandberg 24-slot changer with one LTO-4
tape drive, I get this information:

Device LTO-4-1 (/dev/nst1) is mounted with:
Volume:  A00013
Pool:TapeCopyPool
Media type:  LTO-4
Slot 13 is loaded in drive 0.
Total Bytes=482,731,940,864 Blocks=1,841,493 Bytes/block=262,141
Positioned at File=109 Block=5,238


But when I double-click the changer device in bat, the status of the
tape drive is not displayed, so one cannot tell from bat what state
the tape is in. Here's a screenshot of what I mean:
http://tinyurl.com/35qxuse

Does someone have an idea about what I've possibly configured wrong?

The director configuration contains one Storage entry named
T24-Changer, which refers to the autochanger device in the storage
daemon. The storage daemon has one autochanger device and one tape
device named LTO-4-1; a sketch of the director side is below.
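
Roughly, the director-side Storage resource looks like this (address
and password are placeholders):

Storage {
  Name        = T24-Changer
  Address     = backup.example.net
  SDPort      = 9103
  Password    = "secret"
  Device      = T24-Changer    # the Autochanger resource in bacula-sd.conf
  MediaType   = LTO-4
  Autochanger = yes
}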




Le deagh dhùraghd,

        Frank Altpeter



Re: [Bacula-users] Problems with usage of Copy Jobs

2010-07-23 Thread Frank Altpeter
Moin,

2010/7/22  c.kesch...@internet-mit-iq.de:
 I don't see anything wront with it. Can you post
 /etc/bacula/INCLUDES/schedules.conf ?

Sure, but I don't think it would change anything, because the
described behaviour also happens if I comment out the schedule entry
and manually issue "run job=DiskToTape yes" on the console.


The schedule in question would be this one:

Schedule {
Name = Bandsicherung
#   Run = Full mon at 10:42
}

For testing purposes, I've just tried the behaviour again. The current
state is that all full backups are already on tape, which means that
"run job=DiskToTape yes" does not do anything at the moment.

*run job=DiskToTape yes
Using Catalog MyCatalog
Job queued. JobId=54422
23-Jul 11:01 backup-dir JobId 54422: No JobIds found to copy.

So, after that, I've done one explicit full backup of one client:

*run job=balance.company.com level=full yes
Job queued. JobId=54423
23-Jul 11:01 backup-dir JobId 54423: Start Backup JobId 54423,
Job=balance.company.com.2010-07-23_11.01.54_18
23-Jul 11:01 backup-dir JobId 54423: Using Device FileStorageFull
23-Jul 11:01 backup-sd JobId 54423: Volume FileStorage-Full-1779
previously written, moving to end of data.
23-Jul 11:01 backup-sd JobId 54423: Ready to append to end of Volume
FileStorage-Full-1779 size=10912854848
[...]
23-Jul 11:06 backup-sd JobId 54423: Job write elapsed time = 00:04:15,
Transfer rate = 1.447 M Bytes/second


So now I have exactly one full backup that should be considered by
the copy job. Next I started the copy procedure manually:

*run job=DiskToTape yes
Job queued. JobId=54424
23-Jul 11:11 backup-dir JobId 54424: The following 1 JobId was chosen
to be copied: 54423
23-Jul 11:11 backup-dir JobId 54424: Copying using JobId=54423
Job=balance.company.com.2010-07-23_11.01.54_18
23-Jul 11:11 backup-dir JobId 54424: Start Copying JobId 54424,
Job=DiskToTape.2010-07-23_11.11.21_19
[...]
23-Jul 11:11 backup-dir JobId 54424: Bacula backup-dir 5.0.2
(28Apr10): 23-Jul-2010 11:11:47
  Build OS:   i686-pc-linux-gnu suse 11
  Prev Backup JobId:  54423
  Prev Backup Job:balance.company.com.2010-07-23_11.01.54_18
  New Backup JobId:   54425
  Current JobId:  54424
  Current Job:DiskToTape.2010-07-23_11.11.21_19
  Backup Level:   Full


So, this is the current list of Terminated Jobs:

 54423  Full     52,624    362.3 M  OK   23-Jul-10 11:06 balance.company.com
 54425  Incr     52,624    369.0 M  OK   23-Jul-10 11:11 balance.company.com
 54424  Full     52,624    369.0 M  OK   23-Jul-10 11:11 DiskToTape



You see what I mean? Where does the jobid 54425 come from? :)


Le deagh dhùraghd,

        Frank Altpeter



Re: [Bacula-users] Problems with usage of Copy Jobs

2010-07-23 Thread Frank Altpeter
Moin,

2010/7/23  c.kesch...@internet-mit-iq.de:
 That seems strange. I presume you also didn't get a mail for job 54425?

Correct.

 I also don't know what to make of this. You could check the volumes to see if
 there was actually data written for the incremental job.

I checked, and it seems that the job did not actually run, but only
got entered (and displayed) twice, as the database tells me:

mysql> select JobId,Job,Name,Type,Level,JobFiles,JobBytes,ReadBytes
from Job where JobId in ( 54423, 54424, 54425 ) limit 10;
+-------+--------------------------------------------+---------------------+------+-------+----------+-----------+-----------+
| JobId | Job                                        | Name                | Type | Level | JobFiles | JobBytes  | ReadBytes |
+-------+--------------------------------------------+---------------------+------+-------+----------+-----------+-----------+
| 54423 | balance.company.com.2010-07-23_11.01.54_18 | balance.company.com | B    | F     |    52624 | 362371099 | 899243991 |
| 54424 | DiskToTape.2010-07-23_11.11.21_19          | DiskToTape          | c    | F     |        0 |         0 |         0 |
| 54425 | balance.company.com.2010-07-23_11.11.22_20 | balance.company.com | C    | F     |    52624 | 369022937 |         0 |
+-------+--------------------------------------------+---------------------+------+-------+----------+-----------+-----------+

Quite strange, since jobid 54424 does not have any data according to
the mysql entry.



Le deagh dhùraghd,

        Frank Altpeter



[Bacula-users] Problems with usage of Copy Jobs

2010-07-21 Thread Frank Altpeter
Hi list,

I'm using bacula-5.0.2 with disk storage and a Tandberg 24-slot tape
changer to back up about 130 server systems. The intention is to do a
disk-based full backup once a week and afterwards copy these full
backups to tape. Currently I'm using three storage devices
(FileStorage, FileStorageFull and LTO-4-1) and one autochanger,
T24-Changer. Each client has its own Client and Job definition, with
a Pool and a FullBackupPool defined. Additionally, there is one more
Job definition for the copy procedure, with SelectionType
PoolUncopiedJobs and the FileStorage-Full pool defined (roughly as
sketched below).
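
For context, the copy part of the setup boils down to something like
this (a sketch only, with names shortened and most directives omitted,
not a verbatim copy of my config):

Job {
  Name           = DiskToTape
  Type           = Copy
  Selection Type = PoolUncopiedJobs
  Pool           = FileStorage-Full
  Messages       = Standard
  ...                            # remaining required directives omitted
}

Pool {
  Name      = FileStorage-Full
  Pool Type = Backup
  Storage   = FileStorageFull
  Next Pool = TapeCopyPool       # the tape pool the copies end up in
}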

Well, my problem with this setting is (hopefully) mainly a configuration issue.

On Sunday morning, all 130 clients do their full backup. On Monday
morning I start the DiskToTape job, which then selects all the full
backups to be copied to tape. For some reason there is one full job
named DiskToTape.TIMESTAMP running for each client, and additionally
one incremental job named clientname.TIMESTAMP for each client. This
of course doubles the overall number of running jobs from 130 to 260,
which leads to problems if some incremental jobs (from the usual daily
run) are still running, because they then block the tape copy
procedure.
Additionally, after starting the DiskToTape job I expected to get one
email with the output of that job, but instead I get one email for
every forked job, and every job output email has the same subject
because they all carry the variable expansion of the DiskToTape job
itself rather than of the forked job. This makes it nearly impossible
to keep track of which backups have been written to tape. The double
job behaviour is also reflected in the bacula database, where every
copied job is entered as an additional incremental job and is not
distinguishable from the other jobs.

So, did I miss something important in my configuration, or is it
intended to work this way? Are there any hints on how I could make
this setup a little bit better?

It's quite hard this way, because since I've enabled the copy feature
my backup system is no longer running reliably (sometimes the copy
jobs block the normal incremental jobs, which is bad).


Open for any hints.



Le deagh dhùraghd,

        Frank Altpeter



Re: [Bacula-users] Problems with usage of Copy Jobs

2010-07-21 Thread Frank Altpeter
Hi,

2010/7/21  c.kesch...@internet-mit-iq.de:
 Can you post your configuration for the Jobs and Pools? That thing with the
 incremental jobs shouldn't happen.

Sure, I've anonymized the relevant config files (hopefully all that is
needed) and put them on pastebin here:

http://racoon.pastebin.com/E1LurCY7


 As for the forking: I have that same issue, every Mail has the same
 subject but I don't think you can change that (I might be wrong).

That would be quite bad, because it makes tracing the copy procedure
via the emails nearly impossible...




Le deagh dhùraghd,

        Frank Altpeter



Re: [Bacula-users] [semi-solved] LTO-4 tape: only 20mb/sec when used with bacula

2010-07-15 Thread Frank Altpeter
Hi,

Sorry to warm up this slightly old discussion, but since I'm suffering
from a similar problem, I just stumbled upon this thread while
searching for a solution.

My setting is a disk-to-disk-to-tape setup: I write the data from my
clients to a RAID-6 array (15 x 1.5 TB storage system), and via a copy
job the full backups are then copied to a Tandberg 24-slot LTO-4
changer (one drive).

According to the btape speed test, this drive can reach rates of up to
214 MB per second with default data, and about 104 MB per second with
the random data test.
When copying file volumes with dd, I get up to 90 MB per second.
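
(For reference, those numbers come from btape's built-in speed test,
run roughly like this; the config path and device name are assumptions
taken from my sd setup, adjust to taste:

btape -c /etc/bacula/bacula-sd.conf LTO-4-1
*speed
)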

The copy job itself writes at only 5 to 20 MB per second according to
the bacula job output, while reading at about 180 MB per second from
the storage disk.

I'm very concerned about this, because I have to manage 130 clients
with an overall volume of 5 terabytes, and at that low copy speed the
copy takes almost longer than the schedule interval for the full
backups allows, so I'm open to any hints on speeding this up.


My setup is quite simple. All incremental jobs go to the
FileStorage-Default pool, all full jobs go to the FileStorage-Full
pool with TapeCopyPool as its Next Pool, and the third pool is
TapeCopyPool, where the tapes reside. Then there is one copy job with
PoolUncopiedJobs set.

So, AFAIK quite simple, but not completely working as I expected.


2010/6/22 Lukas Kolbe l-li...@einfachkaffee.de:
 Am Montag, den 21.06.2010, 11:27 -0400 schrieb John Drescher:
 On Mon, Jun 21, 2010 at 7:35 AM, Lukas Kolbe l-li...@einfachkaffee.de 
 wrote:
  Am Montag, den 21.06.2010, 11:06 +0100 schrieb Alan Brown:
  On 21/06/10 10:56, Lukas Kolbe wrote:
  
   For comparison, I dd'ed a volume to /dev/null while the copy job was
   running:
   [r...@shepherd ~]# dd if=/var/bacula/dp/fs1/Vol0070 of=/dev/null bs=1M
   917504 bytes (9.2 GB) copied, 12.0225 seconds, 763 MB/s
  
   But dd'ing it to another file reveals a problem with the storage
   subsystem I believe:
  
   [r...@shepherd ~]# dd if=/var/bacula/dp/fs1/Vol0070 
   of=/var/bacula/dp/fs2/xxx bs=1M
   849346560 bytes (849 MB) copied, 32.665 seconds, 26.0 MB/s
  

 Do you have a hardware raid controller without a BBunit and thus the
 write cache is disabled to protect corruption that could occur if the
 machine crashed or lost power?

 We have a BBU and the write cache is enabled. Puzzling is that the now,
 and repeatedly, the same dd during the copy job works with
 350MiB/second. That dd slowness must have been a one-off.

 I skimmed the bacula-sd code (mac.c and block.c) and do see why it is
 not so simple to change the way copy jobs work, though.

 John

 --
 Lukas




-- 
Le deagh dhùraghd,

        Frank Altpeter



[Bacula-users] Request for documentation enhancement: Exclude Directory Containing

2010-02-03 Thread Frank Altpeter
Hi list,

I just stumbled upon a gap in the documentation of the new feature
Exclude Directory Containing. This directive is set in the FileSet
resource and allows directories to be excluded on the client side
simply by touching a file with a predefined name (for example
"Exclude Directory Containing = .nobackup"); a minimal example is
sketched below.
Since the documentation didn't mention anything to worry about, I
enabled this feature in my recently updated setup. I'm running a
bacula 3.0.3 director and sd with a mixed set of about 130 fd clients
between versions 2.4.2 and 3.0.3. I remember that the dir and sd must
be updated together, but the fd is compatible with newer dir versions
(as usual).
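
For illustration, a minimal FileSet along these lines enables the
feature (the directive is spelled "Exclude Dir Containing" in the 3.0
new-features notes; fileset name and paths here are placeholders):

FileSet {
  Name = "Full Set"
  Include {
    Options {
      signature = MD5
    }
    File = /
    Exclude Dir Containing = .nobackup
  }
}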

The documentation problem: it's nowhere mentioned that the above
feature only works when the fd is updated as well. My nightly backup
run finished almost all backup jobs with a fatal error, because this
feature makes non-3.x fd clients bail out:

03-Feb 00:33 backup-dir JobId 33948: Start Backup JobId 33948,
Job=clientname.2010-02-03_00.30.05_18
[...]
03-Feb 00:33 clientname-fd JobId 33948: Fatal error: Invalid FileSet
command: .nobackup
03-Feb 00:33 backup-dir JobId 33948: Fatal error: Socket error on
Include command: ERR=No data available

That's why I would like to request adding a suitable note to the docs,
so that other people are aware of this before enabling the feature in
a mixed environment :-)



-- 
Le deagh dhùraghd,

Frank Altpeter



Re: [Bacula-users] How to prevent two clients to run at the same time

2009-06-26 Thread Frank Altpeter
Hi again,

2009/6/22 Frank Altpeter frank.altpe...@gmail.com:
 2009/6/22  c.kesch...@internet-mit-iq.de:
 You could just give one server a lower priority. It would then run after all
 other Jobs. The default is 10 (11 for the catalog), so Priority = 12 in the
 Job Resource will make it run later.

 Hmm, this sounds indeed like a good idea. I just set up an additional
 Schedule definition for only these two clients, with priority set to
 12 and 13, and will see if it's doing as expected on the next
 scheduled run. Thanks for the suggestion :)

After testing different settings I'm quite confused by Schedule and
Priority behaviour. The current setup uses two schedules. The first
Schedule runs at 00:23 and contains the two systems that are to be
backed up one after the other, so they have their own schedule and
each of them has a different priority (as shown by the status command
on the console):

Scheduled Jobs:
Level          Type     Pri  Scheduled          Name               Volume
===
Incremental    Backup    11  27-Jun-09 00:23
hostname.domainname.tld hostname.domainname.tld-Default-0328
Incremental    Backup    12  27-Jun-09 00:23
secondhostname.domainname.tld
secondhostname.domainname.tld-Default-0331

The other 93 jobs are defined with a second schedule at 00:30 and priority 10.

Incremental    Backup    10  27-Jun-09 00:30    thirdhost.domain.tld
thirdhost.host.tld-Default-0806
Incremental    Backup    10  27-Jun-09 00:30    fourthhost.domain.tld
 fourthhost.domain.tld-Default-0081
[...]
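
(For reference, one way to express the priorities shown above is in
the Job resources, as suggested earlier in the thread; this is only a
sketch with placeholder names and most directives omitted:

Job {
  Name     = hostname.domainname.tld
  Priority = 11
  Schedule = EarlySchedule       # the 00:23 schedule
  ...
}

Job {
  Name     = secondhostname.domainname.tld
  Priority = 12
  Schedule = EarlySchedule
  ...
}
)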

I _thought_ that this should work, since the lower-priority jobs are
scheduled before the default-priority jobs. But the reality confuses
me. My priority 11 job finished at 00:27, and, as expected, the
priority 12 job started at 00:27. But that job finished at 01:02, and
all the other priority 10 jobs, which should have been started by
their schedule at 00:30, only started at 01:02 ... and I have no idea
why...



Le deagh dhùraghd,

       Frank Altpeter


There is no way to happiness. Happiness is the way.
   -- Buddha



[Bacula-users] How to prevent two clients to run at the same time

2009-06-22 Thread Frank Altpeter
Hi list,

I'm running a bacula server with almost 100 clients on it. For
historical and data-saving reasons, each client has its own block of
Client, Job, Storage and Pool definitions (based on the client's
FQDN). Data is saved to disk and then rotated to tape by external
scripts. With the current configuration, all clients are started via
one Schedule at the same time, and bacula manages the run based on a
MaximumConcurrentJobs setting of 12, which works fine so far.
But there are two clients that I need to configure in such a way that
it is impossible for them to run simultaneously. One client is a
VMware host system, and the other is a VMware guest running on that
host. Sometimes they both run in parallel, which hurts performance on
both systems.
Because of the current structure, I'm not able to just define a
dedicated Pool resource for these two clients and put
MaximumConcurrentJobs=1 into it (which AFAIK would be the easiest
solution for default environments). So I'm searching for alternative
solutions to my problem, other than "start client a at 16:00 and
client b 12 hours later" :)

Any hints for funny solutions welcome :)

Besides that, wouldn't it be a good idea to have the
MaximumConcurrentJobs option in the Schedule resource as well?


-- 
Le deagh dhùraghd,

Frank Altpeter


There is no way to happiness. Happiness is the way.
-- Buddha



[Bacula-users] Looking for client removal documentation

2009-05-18 Thread Frank Altpeter
Hi list...

There once was an email about how to safely and completely remove a
client from a bacula server. Since there were no responses, and since
it was almost the only thing I've found on this topic, I'm asking here
whether someone has a good pointer.

Well, the problem is that bacula is quite good at doing periodic
backups of many clients, but as far as I've seen there is not a single
pointer on how to remove a client completely. Just removing the client
definition from the bacula config files doesn't help: the catalog
database is full of references to this client, and the jobs, files,
etc. are not removed by auto-pruning once the client is deleted. I
once tried simply disabling a client for some weeks, hoping that
auto-pruning would do the job of cleaning the database, but that
didn't seem to help much.

Any pointer on how to really completely remove a client would be very
appreciated ...

-- 
Le deagh dhùraghd,

Frank Altpeter


There is no way to happiness. Happiness is the way.
-- Buddha



[Bacula-users] Modifying status director output format

2009-03-16 Thread Frank Altpeter
Hi list,

I've got a quite simple question regarding output formatting... since
I'm managing my backup clients by FQDN (for many different reasons),
the output of "status director" is more or less unreadable, because
the names are longer than the default column width, for example:

Scheduled Jobs:
Level  Type Pri  Scheduled  Name   Volume
===
IncrementalBackup10  17-Mar-09 00:20
server2.very-long-domainname.de
server2.very-long-domainname.de-Default-0230
IncrementalBackup10  17-Mar-09 00:20
server3.very-long-domainname.de
server3.very-long-domainname.de-Default-0092
IncrementalBackup10  17-Mar-09 00:30
acctdb.anotherdomain.net   acctdb.anotherdomain.net-Default-0714
IncrementalBackup10  17-Mar-09 00:30alpha.shortname.es
alpha.shortname.es-Default-0128
IncrementalBackup10  17-Mar-09 00:30ananas.abc.de
ananas.abc.de-Default-0713
IncrementalBackup10  17-Mar-09 00:30andromeda.domname.net
andromeda.domname.net-Default-0716
IncrementalBackup10  17-Mar-09 00:30
appserver01.another-long-domain.com
appserver01.another-long-domain.com-Default-0717
IncrementalBackup10  17-Mar-09 00:30
appserver02.another-long-domain.com
appserver02.another-long-domain.com-Default-0715
IncrementalBackup10  17-Mar-09 00:30
arthur.domainreseller.net arthur.domainreseller.net-Default-0427
IncrementalBackup10  17-Mar-09 00:30
backoffice.domainthing.com backoffice.domainthing.com-Default-0110

So... is it possible to tweak the output in a way that I could have
wider columns?



Le deagh dhùraghd,

Frank Altpeter


There is no way to happiness. Happiness is the way.
-- Buddha



Re: [Bacula-users] Mini stupid how-to restore mark all .bak file | nobody ?

2009-01-15 Thread Frank Altpeter
As far as I can see, recursive marking of a file pattern is not
possible in bacula directly. The help entry "mark -- mark dir/file to
be restored recursively, wildcards allowed" means that you can mark a
file (or file pattern) in the current directory, or you can mark a
directory, whose contents are then restored recursively.

As far as I can see, you can work around this restriction with a little help from your shell:

In the bacula console, you type:

$ find *.bak
/usr/local/etc/courier-imap/imapd.bak
/usr/local/etc/webmin.bak/
/usr/local/etc/webmin.bak/config.bak
[...]

which, as you stated, lists all files matching the *.bak pattern in
all subdirectories. You now copy and paste this output into a
temporary file (with "vi tempfile.txt", for example) and then turn the
list into mark commands with a little bash scripting:

for line in `cat tempfile.txt`; do echo mark $line; done

This output is then pasted back into your bacula console, again via
copy-and-paste, to mark all *.bak files.

And since bconsole is capable of reading commands from STDIN, you can
of course build a little shell script around this; a rough sketch
follows.
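
Something along these lines (an untested sketch; file names are
placeholders) generates the mark commands in one go, using a
while/read loop so that paths with unusual characters survive:

#!/bin/sh
# read the pasted 'find *.bak' output and emit one mark command per path
while read -r path; do
    echo "mark $path"
done < tempfile.txt > mark-commands.txt

The resulting mark-commands.txt can then be pasted into the restore
prompt, or fed to bconsole together with the surrounding restore
commands.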


Hope this helps,


2009/1/13 Bruno Friedmann br...@ioda-net.ch:
 Bruno Friedmann wrote:
 Sorry this seems stupid but when I enter in a restore
 I use the choice 3 ( list of jobids )

 get my / rebuild
 If I issue a find *.bak it find and list all what I want to mark and restore 
 (recurse also)
 but giving  a
 mark *.bak give  0 file marked

 help say mark dir/file to be restored recursively, wildcards allowed

 if I manually go to a directory containing 2 or more .bak files and issue the
 mark *.bak it give me what I'm waiting for
 2 files marked

 Any suggestion ?

 A try was given with a 2.2.6, 2.2.8, and 2.4.4 version




Le deagh dhùraghd,

Frank Altpeter


There is no way to happiness. Happiness is the way.
-- Buddha



[Bacula-users] Changes between 2.4.3 and 2.4.4 regarding 'status director' output

2009-01-12 Thread Frank Altpeter
Hi list,

can anyone reproduce this difference in the "status director" output:

With 2.4.3:


Terminated Jobs:
 JobId  Level     Files    Bytes  Status   Finished         Name

   224  Incr     19,146  871.3 M  OK       07-Jan-09 01:51  JobName


And after upgrading to 2.4.4:

Terminated Jobs:
 JobId  Level     Files    Bytes  Status   Finished         Name

   224  Incr     19,146  871.3 M  OK       07-Jan-09 01:51  JobName
   245  Incr        221  33.55 M  OK       12-Jan-09 11:53  JobName.2009-01-12_11


It's not a serious problem, but I'm wondering why the Name column now
contains part of the full Job identifier (with timestamp) instead of
just the job name as before. Is there a reason for this (and possibly
a new option to configure it)?


-- 
Le deagh dhùraghd,

Frank Altpeter


There is no way to happiness. Happiness is the way.
-- Buddha



Re: [Bacula-users] Another strangeness on 2.4.4 - upgrading to FULL after FULL backup

2009-01-08 Thread Frank Altpeter
My bad, I just spotted it myself... the FileSet had been modified, so
I assume the incremental backup was upgraded to Full because of the
FileSet change.

But IMHO there should be a better notification for that, something
like "FileSet has been modified, upgrading to FULL backup".


2009/1/8, Frank Altpeter frank.altpe...@gmail.com:
 Hi again,

  I've just hit another strange behaviour with my bacula system. A job
  which has been doing a FULL backup by normal schedule two days ago,
  did just advance to FULL on the incremental schedule today:




Le deagh dhùraghd,

Frank Altpeter


There is no way to happiness. Happiness is the way.
-- Buddha



[Bacula-users] Another strangeness on 2.4.4 - upgrading to FULL after FULL backup

2009-01-08 Thread Frank Altpeter
Hi again,

I've just hit another strange behaviour in my bacula system. A job
that did a FULL backup on its normal schedule two days ago was
upgraded to FULL again on today's incremental schedule:


08-Jan 00:30 backup-dir JobId 74: No prior Full backup Job record found.
08-Jan 00:30 backup-dir JobId 74: No prior or suitable Full backup
found in catalog. Doing FULL backup.

This is very strange, because the status of the client tells me something else:

Terminated Jobs:
 JobId  Level     Files    Bytes  Status   Finished         Name
==================================================================
    55  Full    105,762  1.073 G  OK       06-Jan-09 00:37  www.domainname.de
    61  Incr        186  4.637 M  OK       07-Jan-09 00:32  www.domainname.de
    74  Full    105,966  1.073 G  OK       08-Jan-09 00:36  www.domainname.de.2009-01-08_00

(Besides that, why does 2.4.4, unlike 2.4.3, show the job timestamp in
the Name field?)


The catalog database shows correct entries:

mysql> select JobId, Job, Name, Level from Job where Name like
'www.domainname.de';
+-------+-------------------------------------------+-------------------+-------+
| JobId | Job                                       | Name              | Level |
+-------+-------------------------------------------+-------------------+-------+
|    55 | www.domainname.de.2009-01-06_00.30.23     | www.domainname.de | F     |
|    61 | www.domainname.de.2009-01-07_00.30.48     | www.domainname.de | I     |
|    74 | www.domainname.de.2009-01-08_00.30.00.22  | www.domainname.de | F     |
+-------+-------------------------------------------+-------------------+-------+
3 rows in set (0.00 sec)

I've added the director configuration for this client at the end of
this email. The values for the WeeklyCycle schedule and the DefaultJob
jobdefs are the ones provided with the sample configs.
I currently don't have a clue where this problem could originate, so
any hints are appreciated.



Client {
Name= www.domainname.de
Address = www.domainname.de
FDPort  = 9102
Catalog = MyCatalog
Password= Gwp^6byJK$rW%7g1
FileRetention   = 6 weeks
JobRetention= 6 weeks
AutoPrune   = yes
MaximumConcurrentJobs = 12
}

Job {
Name= www.domainname.de
Client  = www.domainname.de
JobDefs = DefaultJob
Schedule= WeeklyCycle
Pool= www.domainname.de-Default
FileSet = Full Set
FullBackupPool  = www.domainname.de-Full
WriteBootstrap  = /var/bacula/working/www.domainname.de.bsr
Messages= www.domainname.de
Storage = www.domainname.de

}

Storage {
Name= www.domainname.de
Address = backup.mycompany.net
SDPort  = 9103
Password= something
Device  = www.domainname.de
MediaType   = File
MaximumConcurrentJobs = 12
}

Pool {
Name= www.domainname.de-Default
PoolType= Backup
Recycle = yes
AutoPrune   = yes
VolumeRetention = 6 days
MaximumVolumes  = 12
LabelFormat = www.domainname.de-Default-
MaximumVolumeJobs   = 0
VolumeUseDuration   = 24h
RecycleOldestVolume = yes
}

Pool {
Name= www.domainname.de-Full
PoolType= Backup
Recycle = yes
AutoPrune   = yes
VolumeRetention = 6 weeks
MaximumVolumes  = 12
LabelFormat = www.domainname.de-Full-
MaximumVolumeJobs   = 0
VolumeUseDuration   = 24h
RecycleOldestVolume = yes
}





-- 
Le deagh dhùraghd,

Frank Altpeter


There is no way to happiness. Happiness is the way.
-- Buddha



Re: [Bacula-users] Another strangeness on 2.4.4 - upgrading to FULL after FULL backup

2009-01-08 Thread Frank Altpeter
Hi

2009/1/8, Silver Salonen sil...@ultrasoft.ee:
 To my mind that wasn't a question at all - he just notices that it was because
  of that (and there was nothing wrong with it), and made a suggestion for
  improving notice.

  The suggestion is quite useful to my mind and it should be made into a 
 correct
  project to make it official :)

ACK :-)
Now that I know the changed FileSet content was the cause of the
upgrade to FULL, I consider this (at least for myself) a wanted
feature, since it ensures that my backups cover all the content to be
saved. I only wish that the notification would mention that, instead
of the rather generic "No prior or suitable Full backup found".



-- 
Le deagh dhùraghd,

Frank Altpeter


There is no way to happiness. Happiness is the way.
-- Buddha



[Bacula-users] job terminated but not finished

2009-01-07 Thread Frank Altpeter
Hi list,

I've got a strange problem, and my searching hasn't found a solution
yet; I was able to find people with similar problems, but the
solutions mentioned there don't match my case.

Randomly, some jobs don't seem to finish correctly. I'm running bacula
2.4.3 on both the server (SLES10-SP2) and the client (Debian Lenny).

The status of the director mentions this about the client's state:


Running Jobs:
 JobId Level   Name   Status
==
59 Increme  client.hostname.tld.2009-01-07_00.30.46 has terminated



While the status client tells me that everything is fine:

Connecting to Client client-fd at client.hostname.tld:9102

client-fd Version: 2.4.3 (10 October 2008)  i486-pc-linux-gnu debian lenny/sid
Daemon started 02-Jan-09 11:51, 6 Jobs run since started.
 Heap: heap=991,232 smbytes=82,664 max_bytes=530,982 bufs=75 max_bufs=1,055
 Sizeof: boffset_t=8 size_t=4 debug=0 trace=0

Running Jobs:
Director connected at: 07-Jan-09 09:57
No Jobs running.


Terminated Jobs:
 JobId  Level     Files    Bytes  Status   Finished         Name
==================================================================
    37  Full    675,908  37.54 G  OK       02-Jan-09 14:44  client-fd
    43  Incr      1,815  363.6 M  OK       03-Jan-09 00:41  client-fd
    46  Incr      2,315  897.6 M  OK       04-Jan-09 00:37  client-fd
    49  Incr      2,408  766.2 M  OK       05-Jan-09 00:35  client-fd
    53  Full    678,763  37.65 G  OK       06-Jan-09 03:24  client-fd
    59  Incr      3,322  894.4 M  OK       07-Jan-09 00:36  client-fd



When I scroll back in the console, I see that the job terminated
successfully, since I find the output that would have been mailed as
the status report:

[...]
  Termination:Backup OK

07-Jan 00:35 backup-dir JobId 59: Begin pruning Jobs.
07-Jan 00:35 backup-dir JobId 59: No Jobs found to prune.
07-Jan 00:35 backup-dir JobId 59: Begin pruning Files.
07-Jan 00:35 backup-dir JobId 59: No Files found to prune.
07-Jan 00:35 backup-dir JobId 59: End auto prune.

But I'm missing the final status report email, and the client is still
listed as "has terminated" in the console. The SQL data has been
updated as well, so it looks to me as if the job ran successfully, but
the email was not generated and the status within the director didn't
get updated.

Restarting the director makes everything fine again, but that
shouldn't be the way to solve this...


Does anyone here have an idea on this case?




-- 
Le deagh dhùraghd,

Frank Altpeter


There is no way to happiness. Happiness is the way.
-- Buddha



Re: [Bacula-users] job terminated but not finished

2009-01-07 Thread Frank Altpeter
Hi!

2009/1/7 Andrea Conti a...@alyf.net:
 Running Jobs:
 JobId Level   Name   Status
 ==
59 Increme  client.hostname.tld.2009-01-07_00.30.46 has terminated
 

 Director connected at: 07-Jan-09 09:57
 Terminated Jobs:
 JobId  LevelFiles  Bytes   Status   FinishedName
 ==
59  Incr  3,322894.4 M  OK   07-Jan-09 00:36 client-fd
 

 The Director was spooling file attributes to the database.

 I would tend to agree, but considering the job size 9+ hours seems a bit
 too much...

That's right.  A job that finished after 6 minutes shouldn't take 9
hours of spooling...

 Are you actually using data and/or attribute spooling? What kind of
 storage are you writing to?

Well, no spooling at all; the storage is completely file based.
The only thing I could think of is batch inserts, but I'm not sure
about that since I'm not completely familiar with them.



 You should also take a look at what the sd is doing...

Hmm, I'm not sure what you mean by that. According to the available
status commands, the sd was not doing anything (

-- 
Le deagh dhùraghd,

Frank Altpeter


There is no way to happiness. Happiness is the way.
-- Buddha



Re: [Bacula-users] Problems with VSS enabled windows backup

2007-03-26 Thread Frank Altpeter
Have you found any clues on that yet? :)


On 3/13/07, Frank Altpeter [EMAIL PROTECTED] wrote:
 On 3/8/07, Damian Lubosch [EMAIL PROTECTED] wrote:
  Frank Altpeter wrote:
Hi list,
 
  Hi Frank!
 
   
I was just hitting a little confusing problem in backing up a Windows
2003 Server with bacula (both client and server have version 2.0.2
running).
   
The Server is configured to backup C: and D:, the FileSet has Enable
VSS = yes defined. The client has been installed with winbacula.exe
like the other windows hosts im having. The VSS service is up and
running on the client.
   
This is what i'm getting as output from the backup job in my bconsole
  gui:
[snip] Does anyone has an idea what i'm missing here? It's quite
  confusing to
have a full backup with 0 bytes written...
   
   
 
  Please post your configuration data. Maybe your FileSet is wrong.
 Thanks for your help in advance - to not pollute the whole list with
 my configuration, i have put them online (reducted to the interesting
 parts):

 http://www.73f.de/temp/bacula/

 BTW: If i didn't mention it yet - the same FileSet used for this host
 is working on some other windows based clients with success. Only this
 one host makes that problem...

 --
 Le deagh dhùraghd,

 Frank Altpeter

 Two of the most famous products of Berkeley are LSD and Unix.
 I don't think that this is a coincidence.
 -- Anonymous



-- 
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous



Re: [Bacula-users] Problems with VSS enabled windows backup

2007-03-13 Thread Frank Altpeter
On 3/8/07, Damian Lubosch [EMAIL PROTECTED] wrote:
 Frank Altpeter wrote:
   Hi list,

 Hi Frank!

  
   I was just hitting a little confusing problem in backing up a Windows
   2003 Server with bacula (both client and server have version 2.0.2
   running).
  
   The Server is configured to backup C: and D:, the FileSet has Enable
   VSS = yes defined. The client has been installed with winbacula.exe
   like the other windows hosts im having. The VSS service is up and
   running on the client.
  
   This is what i'm getting as output from the backup job in my bconsole
 gui:
   [snip] Does anyone has an idea what i'm missing here? It's quite
 confusing to
   have a full backup with 0 bytes written...
  
  

 Please post your configuration data. Maybe your FileSet is wrong.
Thanks for your help in advance - in order not to pollute the whole
list with my configuration, I have put it online (reduced to the
interesting parts):

http://www.73f.de/temp/bacula/

BTW: in case I didn't mention it yet - the same FileSet used for this
host works successfully on some other Windows-based clients. Only this
one host has the problem...

-- 
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous



Re: [Bacula-users] Failed Windows backup

2007-03-13 Thread Frank Altpeter
First of all: it's not necessary to post the same message three times.
This won't get you faster answers - it just annoys people and might
lead to your request being ignored completely.

To your problem:


On 3/13/07, Administrator [EMAIL PROTECTED] wrote:
 01-Mar 18:30 Ubuntuccc-dir: message.c:462 Mail prog: bsmtp: bsmtp.c:88 Fatal
 malformed reply from localhost: 504 [EMAIL PROTECTED]: Sender address
 rejected: need fully-qualified address

 01-Mar 18:30 Ubuntuccc-dir: Client1.2007-03-01_18.30.00 Error: message.c:473
 Mail program terminated in error.
 CMD=/usr/lib/bacula/bsmtp -h localhost -f (Bacula) [EMAIL PROTECTED] -s
 Bacula: Backup Fatal Error of sbserverpdc-fd Full [EMAIL PROTECTED]

This doesn't seem to be a bacula problem, but rather a problem with
your SMTP configuration.

I think you should check the relevant log files to see why the mail
program terminated in error. A first step would be to use a valid
sender address (the one behind the -f switch), for example something
like this:

Messages {
  Name = Daemon
  mailcommand = "/usr/local/sbin/bsmtp -h localhost -f \"\(Bacula\) %r\" -s \"Bacula daemon message\" %r"
  mail = [EMAIL PROTECTED] = all, !skipped
  console = all, !skipped, !saved
  append = "/var/db/bacula/log" = all, !skipped
}



-- 
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous



[Bacula-users] Problems with VSS enabled windows backup

2007-03-08 Thread Frank Altpeter
Hi list,

I've just hit a somewhat confusing problem backing up a Windows 2003
server with bacula (both client and server run version 2.0.2).

The server is configured to back up C: and D:, and the FileSet has
"Enable VSS = yes" defined (roughly as sketched below). The client was
installed with winbacula.exe like the other Windows hosts I have. The
VSS service is up and running on the client.
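
For orientation, the FileSet looks roughly like the following (a
sketch only; name, options and drive paths reduced to the essentials):

FileSet {
  Name = "Windows Full Set"
  Enable VSS = yes
  Include {
    Options {
      signature = MD5
    }
    File = "C:/"
    File = "D:/"
  }
}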

This is what I'm getting as output from the backup job in my bconsole:

run job=norma level=full yes
Job queued. JobId=23885
08-Mar 14:55 draco-dir: Start Backup JobId 23885, Job=norma.2007-03-08_14.55.17
08-Mar 14:55 draco-sd: Volume norma-0001 previously written, moving
to end of data.
08-Mar 14:55 norma-dir: Generate VSS snapshots. Driver=VSS Win 2003,
Drive(s)=CD

08-Mar 14:55 norma-dir: VSS Writer (BackupComplete): System Writer,
State: 0x1 (VSS_WS_STABLE)
08-Mar 14:55 norma-dir: VSS Writer (BackupComplete): MSDEWriter,
State: 0x1 (VSS_WS_STABLE)
08-Mar 14:55 norma-dir: VSS Writer (BackupComplete): Event Log
Writer, State: 0x1 (VSS_WS_STABLE)
08-Mar 14:55 norma-dir: VSS Writer (BackupComplete): Registry
Writer, State: 0x1 (VSS_WS_STABLE)
08-Mar 14:55 norma-dir: VSS Writer (BackupComplete): COM+ REGDB
Writer, State: 0x1 (VSS_WS_STABLE)
08-Mar 14:55 norma-dir: VSS Writer (BackupComplete): BITS Writer,
State: 0x1 (VSS_WS_STABLE)
08-Mar 14:55 norma-dir: VSS Writer (BackupComplete): IIS Metabase
Writer, State: 0x1 (VSS_WS_STABLE)
08-Mar 14:55 norma-dir: VSS Writer (BackupComplete): WMI Writer,
State: 0x1 (VSS_WS_STABLE)
08-Mar 14:55 draco-sd: Job write elapsed time = 00:00:16, Transfer
rate = 0  bytes/second

[...]

Scheduled time: 08-Mar-2007 14:55:17
  Start time: 08-Mar-2007 14:55:24
  End time:   08-Mar-2007 14:55:42
  Elapsed time:   18 secs
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:yes
  Encryption: no
  Volume name(s): norma-0001
  Volume Session Id:  3
  Volume Session Time:1173354258
  Last Volume Bytes:  4,298 (4.298 KB)
  Non-fatal FD errors:11
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK


Does anyone have an idea what I'm missing here? It's quite confusing
to have a full backup with 0 bytes written...


-- 
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous



[Bacula-users] Fwd: Cleaning up database records

2007-02-20 Thread Frank Altpeter
-- Forwarded message --
From: Frank Altpeter [EMAIL PROTECTED]
Date: Feb 20, 2007 7:00 PM
Subject: Re: [Bacula-users] Cleaning up database records
To: Kern Sibbald [EMAIL PROTECTED]


I'm running dbcheck every Sunday with the following script:


#!/bin/sh

PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin

echo $(date) Creating temp indices for bacula database...

mysql -ubacula <<EOFA
use bacula
CREATE INDEX file_tmp_filenameid_idx ON File (FilenameId);
CREATE INDEX file_tmp_pathid_idx ON File (PathId);
EOFA

echo $(date) Running dbcheck...
dbcheck -c /usr/local/etc/bacula-dir.conf -f -b -v

echo $(date) Removing indices and optimizing bacula database...

mysql -ubacula <<EOFB
use bacula
DROP INDEX file_tmp_filenameid_idx ON File;
DROP INDEX file_tmp_pathid_idx ON File;
OPTIMIZE TABLE UnsavedFiles, Counters, CDImages, BaseFiles, Device,
Version, Status, MediaType, Storage, FileSet, Client, Pool, Media,
Job, JobMedia, File, Path, Filename;
EOFB

echo $(date) Done...


With the temporary indices, the script runs in about 90 minutes (it
takes practically forever without them) on a database of 24 million
File entries from 88 clients, mostly application servers.

It helps a lot, but it still does nothing about orphaned entries left
behind by previously removed clients.
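
What seems to be left for those is manual SQL along these lines (only a
sketch against the stock Bacula MySQL schema, with a made-up client name;
take a catalog dump first and adapt it before running anything):

mysql -ubacula <<EOF
use bacula
-- look up the obsolete client first (the name is only an example)
SELECT ClientId FROM Client WHERE Name = 'oldhost-fd';
-- remove its File and Job records, then the Client row itself
DELETE File FROM File JOIN Job ON File.JobId = Job.JobId
  JOIN Client ON Job.ClientId = Client.ClientId
  WHERE Client.Name = 'oldhost-fd';
DELETE Job FROM Job JOIN Client ON Job.ClientId = Client.ClientId
  WHERE Client.Name = 'oldhost-fd';
DELETE FROM Client WHERE Name = 'oldhost-fd';
EOF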



--
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous


-- 
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Cleaning up database records

2007-02-19 Thread Frank Altpeter
Hi there,

My current bacula system (FreeBSD, bacula-2.0.1, mysql-4.1) is having
some massive performance problems. I think one of the reasons is the
massive amount of old and obsolete records.
For example, I used to back up a client which disappeared some time
ago. When that machine was removed, the client was deleted from the
bacula configuration. But its records remain in the database, making
the catalog bigger and bigger and ever harder to work with.
So, I would like to prune such records to get rid of the unneeded data.

Is there any way of achieving this?

Because these clients are removed from the config, any pruning from
within bconsole is not available...

And, for the future, what's the best practice to avoid this?




-- 
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Cleaning up database records

2007-02-19 Thread Frank Altpeter
On 2/19/07, Bill Moran [EMAIL PROTECTED] wrote:
 In response to Frank Altpeter [EMAIL PROTECTED]:
 
  My current bacula system (FreeBSD, bacula-2.0.1, mysql-4.1) has
  currently some massive performance problems. One of the reasons i
  think is caused by the massive amount of old and obsolete records.
  For example, i had a client to backup once, which has dissappeared
  some time ago. When this machine has been removed, the client has been
  deleted from the bacula configuration. But this way the database
  records for these clients remain in the database, thus making the db
  records more and more unusable.
  So, i would like to prune such records to reduce unneeded data.
 
  Is there any way on archiving this?

 dbcheck should clean this stuff up.

  And, for the future, what's the best practice to avoid this?

 Purge the volumes prior to removing the clients.

 On the flip side, running dbcheck periodically is pretty much a requirement
 for keeping Bacula's database reasonably sized.  I have it run once a month
 in read-only mode via cron and email us the results.  When the extra stuff
 gets significant, I use it to go in and clean up.

dbcheck is run on a weekly basis (every Sunday before the backups
start), but it didn't catch all the old entries in the database.
I just manually removed old entries from the Job and File tables
relating to jobs back in 2005 ... about 5 million File entries (out of
roughly 25 million currently)...
So, dbcheck doesn't seem to clean up that much :)
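
The manual cleanup was essentially SQL along these lines (a rough sketch
reconstructed after the fact; the exact statements and the cutoff date
differed a bit, so don't copy it blindly):

mysql -ubacula <<EOF
use bacula
-- File records of jobs that ended before the cutoff
DELETE File FROM File JOIN Job ON File.JobId = Job.JobId
  WHERE Job.EndTime < '2006-01-01';
-- then the job records themselves
DELETE FROM Job WHERE EndTime < '2006-01-01';
EOF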


-- 
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] database error with 2.0.1

2007-02-19 Thread Frank Altpeter
Hi again,

I just hit another problem with my bacula installation. A job exited
with the following error:

19-Feb 14:41 draco-dir: grus.2007-02-19_14.21.34 Fatal error:
sql_create.c:845 sql_create.c:845 query SELECT FilenameId FROM
Filename WHERE Name='6969_o.jpeg' failed:
Out of memory (Needed 1145768 bytes)
19-Feb 14:41 draco-dir: sql_create.c:845 SELECT FilenameId FROM
Filename WHERE Name='6969_o.jpeg'
19-Feb 14:56 draco-dir: grus.2007-02-19_14.21.34 Warning:
sql_create.c:850 More than one Filename! 2 for file: 6969_o.jpeg

Manually executing the query gives me:

mysql> SELECT FilenameId FROM Filename WHERE Name='6969_o.jpeg';
+------------+
| FilenameId |
+------------+
|   16320758 |
|   18427555 |
+------------+
2 rows in set (0.00 sec)

What happened here? And what's this 'out of memory' stuff? The system
has plenty of its 4 GB RAM left to use...
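
In case it helps with debugging, duplicates like this one can be listed
with a simple aggregate (a sketch against the standard Bacula schema):

SELECT Name, COUNT(*) AS cnt
FROM Filename
GROUP BY Name
HAVING cnt > 1;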


-- 
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] bacula-dir coredump after upgrade to 2.0.1

2007-02-13 Thread Frank Altpeter
Hi there,

I just took the time to upgrade my bacula installation to the current
2.0.1 version, but unfortunately it doesn't work anymore:


[EMAIL PROTECTED]:~ # /usr/local/sbin/bacula-dir -u bacula -g bacula -v -c
/usr/local/etc/bacula-dir.conf
Bus error: 10 (core dumped)

I upgraded my 6.1-RELEASE-p3 server via the ports collection to

bacula-client-2.0.1 =   up-to-date with port
bacula-server-2.0.1 =   up-to-date with port

and I upgraded the database with
/usr/local/share/bacula/update_mysql_tables as recommended in
http://www.bacula.org/?page=news - additionally, I removed 'Accept Any
Volume' from the director configuration.

The gdb output from reading the core file looks a bit like this:

#0  0x2835f1d4 in pthread_sigmask () from /usr/local/lib/liblthread.so.3
[New LWP 100150]
(gdb)  backtrace
#0  0x2835f1d4 in pthread_sigmask () from /usr/local/lib/liblthread.so.3
#1  0x2816b0d0 in sigprocmask () from /usr/lib/libpthread.so.2
#2  0x2835f21a in pthread_sigmask () from /usr/local/lib/liblthread.so.3
#3  0x2816b0d0 in sigprocmask () from /usr/lib/libpthread.so.2
#4  0x2835f21a in pthread_sigmask () from /usr/local/lib/liblthread.so.3
#5  0x2816b0d0 in sigprocmask () from /usr/lib/libpthread.so.2
#6  0x2835f21a in pthread_sigmask () from /usr/local/lib/liblthread.so.3
#7  0x2816b0d0 in sigprocmask () from /usr/lib/libpthread.so.2
#8  0x2835f21a in pthread_sigmask () from /usr/local/lib/liblthread.so.3
#9  0x2816b0d0 in sigprocmask () from /usr/lib/libpthread.so.2
[...]
#43572 0x2835f21a in pthread_sigmask () from /usr/local/lib/liblthread.so.3
#43573 0x2816b0d0 in sigprocmask () from /usr/lib/libpthread.so.2
#43574 0x283606ca in __pthread_initialize_minimal () from
/usr/local/lib/liblthread.so.3
#43575 0x2836443d in __pthread_perform_cleanup () from
/usr/local/lib/liblthread.so.3
#43576 0x283543c5 in _init () from /usr/local/lib/liblthread.so.3
#43577 0xbfbfeaf8 in ?? ()
#43578 0x280e03d8 in ?? () from /libexec/ld-elf.so.1
#43579 0xbfbfeaa8 in ?? ()
#43580 0x280c6bad in _rtld_error () from /libexec/ld-elf.so.1
#43581 0x280c925b in _rtld () from /libexec/ld-elf.so.1
#43582 0x280c63e6 in .rtld_start () from /libexec/ld-elf.so.1



Any hints about that? What kind of problem did I inherit here? :)


--
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnkkid=120709bid=263057dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula-dir coredump after upgrade to 2.0.1

2007-02-13 Thread Frank Altpeter
Sorry for the confusion - problem solved.

It seems that bacula automagically detects the presence of
linuxthreads and compiles it in - after removing linuxthreads and
rebuilding, everything went fine...
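
Concretely, that boiled down to roughly this (a sketch; the linuxthreads
package version is from memory and will differ on other systems):

# remove the linuxthreads package so configure cannot pick it up
pkg_delete -f linuxthreads-2.2.3_22
# then rebuild and reinstall bacula from the port
cd /usr/ports/sysutils/bacula-server
make deinstall reinstall clean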

nevertheless, I think there should be some knob in the FreeBSD port's
configuration to have this enabled/disabled on purpose...



On 2/13/07, Frank Altpeter [EMAIL PROTECTED] wrote:
 Hi there,

 I just took the time to upgrade my bacula installation to the current
 2.0.1 version, but unfortunatly it doesn't work anymore now:


 [EMAIL PROTECTED]:~ # /usr/local/sbin/bacula-dir -u bacula -g bacula -v -c
 /usr/local/etc/bacula-dir.conf
 Bus error: 10 (core dumped)

 I upgraded my 6.1-RELEASE-p3 server via the ports collection to

 bacula-client-2.0.1 =   up-to-date with port
 bacula-server-2.0.1 =   up-to-date with port

 and i upgraded the database with
 /usr/local/share/bacula/update_mysql_tables as recommended in
 http://www.bacula.org/?page=news - additionally, i removed 'Accept Any
 Volume' from the director configuration.

 The gdb output from reading the core file looks a bit like this:

 #0  0x2835f1d4 in pthread_sigmask () from /usr/local/lib/liblthread.so.3
 [New LWP 100150]
 (gdb)  backtrace
 #0  0x2835f1d4 in pthread_sigmask () from /usr/local/lib/liblthread.so.3
 #1  0x2816b0d0 in sigprocmask () from /usr/lib/libpthread.so.2
 #2  0x2835f21a in pthread_sigmask () from /usr/local/lib/liblthread.so.3
 #3  0x2816b0d0 in sigprocmask () from /usr/lib/libpthread.so.2
 #4  0x2835f21a in pthread_sigmask () from /usr/local/lib/liblthread.so.3
 #5  0x2816b0d0 in sigprocmask () from /usr/lib/libpthread.so.2
 #6  0x2835f21a in pthread_sigmask () from /usr/local/lib/liblthread.so.3
 #7  0x2816b0d0 in sigprocmask () from /usr/lib/libpthread.so.2
 #8  0x2835f21a in pthread_sigmask () from /usr/local/lib/liblthread.so.3
 #9  0x2816b0d0 in sigprocmask () from /usr/lib/libpthread.so.2
 [...]
 #43572 0x2835f21a in pthread_sigmask () from /usr/local/lib/liblthread.so.3
 #43573 0x2816b0d0 in sigprocmask () from /usr/lib/libpthread.so.2
 #43574 0x283606ca in __pthread_initialize_minimal () from
 /usr/local/lib/liblthread.so.3
 #43575 0x2836443d in __pthread_perform_cleanup () from
 /usr/local/lib/liblthread.so.3
 #43576 0x283543c5 in _init () from /usr/local/lib/liblthread.so.3
 #43577 0xbfbfeaf8 in ?? ()
 #43578 0x280e03d8 in ?? () from /libexec/ld-elf.so.1
 #43579 0xbfbfeaa8 in ?? ()
 #43580 0x280c6bad in _rtld_error () from /libexec/ld-elf.so.1
 #43581 0x280c925b in _rtld () from /libexec/ld-elf.so.1
 #43582 0x280c63e6 in .rtld_start () from /libexec/ld-elf.so.1



 Any hints about that? What kind of problem did i inherit here? :)


 --
 Le deagh dhùraghd,

 Frank Altpeter

 Two of the most famous products of Berkeley are LSD and Unix.
 I don't think that this is a coincidence.
 -- Anonymous



-- 
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnkkid=120709bid=263057dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] I/O errors on gentoo between 1.36 client and 1.38 server // missing current ebuild

2006-02-13 Thread Frank Altpeter
Hi list,

First of all, a little question to whomever it concerns: who is
responsible for updating the Gentoo portage tree with a current
bacula ebuild?
Currently it looks like there is only bacula-1.36.3-r2.ebuild, while
1.38.5 should be available by now.

But the more important question: Is it possible that a client running
1.36.3-r2 can produce I/O errors while doing a full backup against a
1.38.4 server?

I'm having exactly this problem: a Gentoo server crashes at night
while doing its full backup, and it requires a cold start
afterwards.

A sample of the errors:

EXT3-fs error: (device sda3): ext3_find_entry: reading directory
#0241205 offset 0
This appears some thousand times on the console, and after that
nothing works anymore, not even a simple ls or reboot command.

I'm not sure, but at the moment I think this error has only occurred
since updating the server from 1.36 to 1.38, and if I remember
correctly there have been massive changes in the I/O handling...

So - can someone provide an ebuild for 1.38.x, please? :)


--
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous


---
This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
for problems?  Stop!  Download the new AJAX search engine that makes
searching your log files as easy as surfing the  web.  DOWNLOAD SPLUNK!
http://sel.as-us.falkag.net/sel?cmd=lnkkid3432bid#0486dat1642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Cleanly removing a client

2006-02-11 Thread Frank Altpeter
Hi list,

I'm about to remove two machines from my bacula configuration because
they have been shut down. The last time I had to remove a machine it
seemed to go a bit wrong, so I think I should ask for the best way to
do it.
So: what's the best (cleanest and most accurate) way to remove a
client from bacula?

If I just remove it from the configuration files, the backups that
were already made won't be removed, since there is no way to prune
them anymore. The database also never gets cleaned up, because the
client is never referenced again once it is out of the configuration.

The last time I tried to use dbcheck to clean obsolete entries out of
the database, it took about 5 days to run and had to be aborted, so I
don't think that is an option.

Any hints welcome...

--
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous


---
This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
for problems?  Stop!  Download the new AJAX search engine that makes
searching your log files as easy as surfing the  web.  DOWNLOAD SPLUNK!
http://sel.as-us.falkag.net/sel?cmd=lnkkid3432bid#0486dat1642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Changing tapes without console interaction.

2005-06-24 Thread Frank Altpeter
2005/6/24, Attila Fülöp [EMAIL PROTECTED]:
 The problem is, the FreeBSD Port installs bconsole with
 754 (rwxr-xr--) root:wheel and the director fails to execute
 it since it runs as user+group bacula. Same problem with
 bconsole.conf and (gnome|wx)-console.
 
 Is this a bacula feature or something the FreeBSD port
 maintainer did? I would think its reasonable to change both
 files to be owned by group bacula. I will contact the port
 maintainer in case it's his part.
 
 Alternatively, is there another way to achieve above
 functionality?

I don't know if the above makes sense, and it doesn't seem to come
from the port configuration - however, you usually don't have access
to the tape drive as non-root anyway...
So, I think the best way to achieve the results you expect is to use
sudo in your scripts, e.g. make it

echo "mount CertanceDrive" | sudo /usr/local/sbin/bconsole -c /usr/local/etc/bconsole.conf
echo "umount CertanceDrive" | sudo /usr/local/sbin/bconsole -c /usr/local/etc/bconsole.conf

and configure your sudoers file accordingly.
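
A matching sudoers entry could look roughly like this (only a sketch;
adjust the user name and paths to your setup, and edit the file with
visudo):

# allow the bacula user to run bconsole as root without a password
bacula ALL = (root) NOPASSWD: /usr/local/sbin/bconsole -c /usr/local/etc/bconsole.conf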


-- 
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous


---
SF.Net email is sponsored by: Discover Easy Linux Migration Strategies
from IBM. Find simple to follow Roadmaps, straightforward articles,
informative Webcasts and more! Get everything you need to get up to
speed, fast. http://ads.osdn.com/?ad_idt77alloc_id492op=click
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] multiple clients to back up

2005-06-20 Thread Frank Altpeter
2005/6/20, laczko attila [EMAIL PROTECTED]:
 Hi,
 I want to backup a some clients: each client in his
 own directory, having volumes separated by type of the
 job. For each client i wish to do weekly backup for 2
 weeks and daily for a week.

I solved this by giving each client its own Pool, Storage, Job and
Client configuration, and by building those snippets with a small perl
hack; the generated files are then included in the main configs.
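
Roughly like this, just as a sketch (mine is a perl hack and a bit more
involved; the names and paths here are invented): a small template gets
expanded once per client, and bacula-dir.conf then pulls the generated
files in with @-include lines like @/usr/local/etc/bacula/clients/www1.conf.

#!/bin/sh
# expand a per-client template for every client
for c in www1 www2 db1; do
    sed "s/@CLIENT@/$c/g" /usr/local/etc/bacula/client.template \
        > /usr/local/etc/bacula/clients/$c.conf
done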

-- 
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous


---
SF.Net email is sponsored by: Discover Easy Linux Migration Strategies
from IBM. Find simple to follow Roadmaps, straightforward articles,
informative Webcasts and more! Get everything you need to get up to
speed, fast. http://ads.osdn.com/?ad_idt77alloc_id492op=click
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Trying to use Bacula-web

2005-06-08 Thread Frank Altpeter
Just for the record... here (FreeBSD 4.11) bacula-web version 1.1
installs and runs perfectly - well, after installing the PEAR-DB
package (which is of course mentioned in the README file) - without
any installation or runtime errors... we like it.


-- 
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous


---
This SF.Net email is sponsored by: NEC IT Guy Games.  How far can you shotput
a projector? How fast can you ride your desk chair down the office luge track?
If you want to score the big prize, get to know the little guy.
Play to win an NEC 61 plasma display: http://www.necitguy.com/?r 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Newbie questions about setting up file-only storage

2005-04-06 Thread Frank Altpeter
On Apr 6, 2005 11:28 AM, Arno Lehmann [EMAIL PROTECTED] wrote:
  So, this server has a lot of disk space (2.4G), currently splitted to
  8 partitions.
 There IS a typo, right? :-)
Uhm, yes, how did you notice? :)

  I once build a backup system with Legato NetWorker, so i'm still
  trying to find out how to make things with bacula like it was with
  networker ...
 Without knowing Legato - I guess this approach is not the most efficient
 way. From my experience, it's best to throw out the things you know
 about other products for the same purpose when you start using something
 new. Of course I know that this is not easy...

Legato's way of grouping is not that great either, but I was able
to work with it. You know, if you've got a bunch of clients (about 50
hosts) where two or more are directly related and should be backed up
together, it's a bit easier to do a 'start group marketing-server'
than to start each of them manually.

  One of these things is how to create job groups to group single jobs
  together and run them all at one command (manually or by schedule). Is
  such possible?
 No, not by itself and currently.
 A future version of bacula might offer a similar function, when one job
 can start others. This is in the development version. Apart from that:
 Job grouping (and other, related things like putting multiple filesets
 andclients into one job) might be very useful, so I could imagine that
 trying to persuade someone to imlement them might be worth the effort.
I would second that try :)

 What I would do if I needed to manually start a given set of jobs more
 than a few times: I'd write a small shell script to send the appropriate
 commands to the console, like
Thought about that, and I think that will be the way for me to
achieve the results I'm trying to get.
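
Something along these lines, just as a sketch (the job names are
invented):

#!/bin/sh
# start a "group" of related jobs with a single call
for job in marketing-www marketing-db marketing-files; do
    echo "run job=$job yes"
done | /usr/local/sbin/bconsole -c /usr/local/etc/bconsole.conf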

 No, sorry. Using multiple storage devices for one job is not possible
 currently. It's one of the things Kern works on right now, kind of.
So would it be best practice to have just one file device (which
would be _very_ big), or to define multiple devices and try to
distribute the clients evenly over them?
I think the 'one file device' solution sounds better for me, as long
as I have multiple pools that create a new volume per
backup/day/client/whatever, so that cycling is possible at all.
Otherwise it would write into one volume until the disk is full, which
doesn't make sense if you want to keep some backups :-)
The 'one file device' solution does have its caveats, though - AFAIK
FreeBSD is unable to handle partitions larger than 512G correctly, so
using one partition with a size of 2.4T (hehe, no typo this time :) is
unlikely to work.
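
For the record, the single file device I have in mind is the usual kind
of thing (just a sketch of a bacula-sd.conf Device resource; the name
and path are placeholders):

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /backup/bacula
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}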

 Perhaps you might contribute some ideas as well.
I can try... but most of my backup knowledge comes from my work with
Legato (in combination with an autochanger robot, that is) - so it
might not be that helpful...

 Until bacula is ready for the setup you plan, you might get used to he
 way it works now, so you can help testing the next version :-)
I have to :-) Because I need a disk-based backup solution that can
save data from about 40 clients (and growing) within an acceptable
time, with a retention time of about 3 months.
Besides that, I need a client/server system that can back up both
Unix and Windows hosts (not my fault - customers' housing :).
I don't think there are open source projects other than bacula that
can handle this. And software like Legato NetWorker is way too
expensive for a two-person company...

Regards,

  Frank

-- 
Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous


---
SF email is sponsored by - The IT Product Guide
Read honest  candid reviews on hundreds of IT Products from real users.
Discover which products truly live up to the hype. Start reading now.
http://ads.osdn.com/?ad_id=6595alloc_id=14396op=click
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users