Hi list,
something weird is happening: a couple of Jobs start at 1 AM and then the SD
crashes. In the backtrace I see things like this:
threadid=0x7f9d62ffd700 JobId=80 JobStatus=C jcr=0x7f9d3c001078
name=client1-differential.2016-03-25_01.00.00_16
threadid=0x7f9d62ffd700 killable=0 JobId=80 JobStatus=C jcr=0x7f9d3c001078
name=client1-differential.2016-03-25_01.00.00_16
use_count=1
JobType=B JobLevel=F
sched_time=25-Mar-2016 02:52 start_time=01-Jan-1970 01:00
end_time=01-Jan-1970 01:00 wait_time=01-Jan-1970 01:00
db=(nil) db_batch=(nil) batch_started=0
01-Jan-1970? I checked hwclock and ntp; both are fine. Where is that date
coming from? I saw reports of this bug from years and years ago (in Bacula).
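For what it's worth, 01-Jan-1970 01:00 is just the Unix epoch (a time_t of 0)
rendered in a UTC+1 timezone: it usually means the SD never filled in
start_time/end_time for that JCR, not that the clock is wrong. A minimal
sketch, assuming the SD host runs CET (Europe/Berlin is an assumption here):

```python
import os
import time

# A zeroed time_t is the Unix epoch. Rendered in a UTC+1 zone it
# prints as 01-Jan-1970 01:00 -- matching the backtrace exactly.
os.environ["TZ"] = "Europe/Berlin"  # assumption: SD host runs CET
time.tzset()
print(time.strftime("%d-%b-%Y %H:%M", time.localtime(0)))
# -> 01-Jan-1970 01:00
```

So the dates in the dump are "never set", not "clock skew".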
I think the SD somehow gets confused by my unusual configuration: I want one
volume file per Job, hence the 20 Devices.
My sd.conf:
Storage {
  Allow Bandwidth Bursting = no
  Client Connect Wait = 3 minute
  Collect Device Statistics = yes
  Collect Job Statistics = yes
  Description = vampire3 Storage Daemon
  FD Connect Timeout = 3 minute
  File Device Concurrent Read = yes
  Heartbeat Interval = 1 minute
  Maximum Concurrent Jobs = 20
  Name = vampire3-sd
  Statistics Collect Interval = 5 minute
}
Director {
  Name = my-dir
  Password = "mypass"
}
Director {
  Monitor = yes
  Name = my-dir
  Password = "mypass"
}
Device {
  AlwaysOpen = no
  Archive Device = /mnt/raid/bareos/volumes
  Automatic Mount = yes
  Collect Statistics = yes
  Description = Device-1 RAID
  Device Type = File
  Label Media = yes
  Maximum Concurrent Jobs = 20
  Maximum Open Volumes = 20
  Maximum Open Wait = 5 minute
  Media Type = File
  Name = V3-RAID-1
  Random Access = yes
  Removable Media = no
}
...
(continues through Device V3-RAID-20)
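Before the next scheduled run it may be worth letting the daemons parse their
own configs: both bareos-sd and bareos-dir accept -t to test the configuration
and exit. The paths below are the common defaults and are an assumption;
adjust them to your installation:

```shell
# Test-parse the configs; the daemons exit non-zero on a parse error.
# /etc/bareos/*.conf are the usual default paths -- adjust as needed.
bareos-sd -t -c /etc/bareos/bareos-sd.conf
bareos-dir -t -c /etc/bareos/bareos-dir.conf
```

That at least rules out a plain config error before chasing the crash itself.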
In the dir.conf I have Storage definitions like this:
Storage {
  Address = 10.10.10.11
  Allow Compression = yes
  Collect Statistics = yes
  Description = RAID on 10 network.
  Device = V3-RAID-1
  Device = V3-RAID-2
  Device = V3-RAID-3
  Device = V3-RAID-4
  Device = V3-RAID-5
  Device = V3-RAID-6
  Device = V3-RAID-7
  Device = V3-RAID-8
  Device = V3-RAID-9
  Device = V3-RAID-10
  Device = V3-RAID-11
  Device = V3-RAID-12
  Device = V3-RAID-13
  Device = V3-RAID-14
  Device = V3-RAID-15
  Device = V3-RAID-16
  Device = V3-RAID-17
  Device = V3-RAID-18
  Device = V3-RAID-19
  Device = V3-RAID-20
  Enabled = yes
  Heartbeat Interval = 1 minute
  Maximum Concurrent Jobs = 20
  Maximum Concurrent Read Jobs = 30
  Media Type = File
  Name = V3-RAID-10
  Password = "my-pass"
  Protocol = Native
  Port = 9103
}
A Pool looks like this:
Pool {
  Action On Purge = Truncate
  Auto Prune = yes
  Catalog = MyCatalog
  Catalog Files = yes
  Description = Pool for Full backups
  File Retention = 365 days
  Job Retention = 365 days
  Label Format = "${Client}-${Year}-${Month}-${Day}-${JobId}"
  Maximum Volume Jobs = 1
  Name = V3-Full-RAID
  Pool Type = Backup
  Recycle = No
  Use Catalog = yes
  Volume Retention = 16 days
}
Nothing special about Schedules:
Schedule {
  Description = Daily backup schedule
  Enabled = yes
  Name = "Daily"
  Run = sun-fri at 01:00
}
After a crash I have to restart the daemons and start the Jobs by hand.
Any clues?
Thanks for any hints!
Regards,
Oliver
--
You received this message because you are subscribed to the Google Groups
"bareos-users" group.