I run a small Bacula setup for my own tiny office, just backing up my laptops etc. with several jobs, all writing to a btrfs
pool on a Debian 13 machine (small server, 8 GB RAM, rather slow disks).
I just wonder about the following:
I have two jobs, each backing up around 30-40 GB:
job-projects ... 39.4 GB ... 646381 files ... 1h 3m 44s
job-nextcloud ... 31.4 GB ... 14271 files ... 7m 21s
(both reading from the same client "ivy" ... over a 1 GB/s connection)
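
Just to put the difference in numbers, that works out roughly to this (my own arithmetic; 1h 3m 44s = 3824 s, 7m 21s = 441 s):

awk 'BEGIN {
  # rates derived from the two job reports above
  printf "job-projects:  %.1f MB/s, %d files/s\n", 39.4*1000/3824, 646381/3824
  printf "job-nextcloud: %.1f MB/s, %d files/s\n", 31.4*1000/441,  14271/441
}'

So roughly 10 MB/s versus 70 MB/s for similar data volumes.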
Here are the two job definitions:
Job {
  Name = "job-nextcloud"
  Type = "Backup"
  Storage = "File"
  Client = "ivy"
  Fileset = "nextcloud"
  JobDefs = "DefaultJob"
  SpoolData = no
}

Job {
  Name = "job-projects"
  Type = "Backup"
  Level = "Incremental"
  Messages = "Standard"
  Storage = "File"
  Pool = "File"
  Client = "ivy"
  Fileset = "projects"
  Schedule = "WeeklyCycle"
  JobDefs = "DefaultJob"
}
Both are based on DefaultJob, so there is no difference there, but I'll show it anyway:
JobDefs {
  Name = "DefaultJob"
  Type = "Backup"
  Level = "Incremental"
  Messages = "Standard"
  Storage = "File"
  Pool = "File"
  Client = "tx100-fd"
  Fileset = "Full Set"
  Schedule = "WeeklyCycle"
  WriteBootstrap = "/opt/bacula/working/%c.bsr"
  SpoolAttributes = yes
  SpoolData = no
  Runscript {
    RunsWhen = "After"
    RunsOnClient = no
    Console = ".bvfs_update jobid=%i"
  }
  Priority = 10
}
So the storage is the same and the FD is the same; the difference seems to
be the sheer number of files (646381 vs. 14271).
So I assume the number of files somehow kills the overall performance through
the database/catalog inserts for all those file attributes, right?
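
If it really is the attribute inserts, I guess it should show up on the catalog side. This is what I'd look at while the slow job runs (just a sketch; I'm assuming the catalog database and its user are both called "bacula", adjust to taste):

# what the catalog connections are doing right now (run a few times during the job)
psql -U bacula -d bacula -c "
  SELECT pid, state, wait_event_type, wait_event, left(query, 60) AS query
  FROM pg_stat_activity
  WHERE datname = 'bacula';"

# file counts and durations of the last few jobs, plus the size of the File table
psql -U bacula -d bacula -c "
  SELECT jobid, name, jobfiles, jobbytes, endtime - starttime AS duration
  FROM Job ORDER BY jobid DESC LIMIT 5;"
psql -U bacula -d bacula -c "SELECT pg_size_pretty(pg_total_relation_size('File'));"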
The catalog is PostgreSQL 17 ... as mentioned, on a rather slow box (Intel(R)
Xeon(R) CPU E3-1220 V2 @ 3.10GHz), but also not super weak, is it?
It's not *important* to speed it up, I just want to learn here!
Anything I can try to tune here?
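
The only things that come to mind are the usual generic PostgreSQL knobs, nothing Bacula-specific; the values below are just guesses for an 8 GB box, and synchronous_commit = off trades the last few transactions on a crash for much faster small commits, so that one is a matter of taste:

# settings land in postgresql.auto.conf; shared_buffers needs a restart,
# the other two take effect after pg_reload_conf()
sudo -u postgres psql -c "ALTER SYSTEM SET shared_buffers = '2GB';"
sudo -u postgres psql -c "ALTER SYSTEM SET synchronous_commit = off;"
sudo -u postgres psql -c "ALTER SYSTEM SET maintenance_work_mem = '512MB';"
sudo -u postgres psql -c "SELECT pg_reload_conf();"
sudo systemctl restart postgresql   # only needed for shared_buffers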
I might start the slower job now and watch "top" etc. on the server ...
I could have done that already ;-)
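
For completeness, this is roughly what I'd watch on the server while the job runs, to see whether it's the disks or the CPU that's busy (nothing Bacula-specific; iotop may need installing first):

# per-device utilisation and wait times, refreshed every 5 seconds
iostat -x 5

# CPU/memory per process; interesting ones are bacula-sd, bacula-dir and the postgres backends
top -c

# which processes are actually doing I/O (optional)
sudo iotop -o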