On Wednesday, 4 January 2017 at 04:19:12 CET, [email protected] wrote:
> Dear list,
> 
> I want to keep a full backup on local disk and copy the last full
> backup to tape to take it off-site. Currently I manage about 20
> servers.
> 
> To speed up the backups I have followed the documentation on "Using
> Multiple Storage Devices" (http://doc.bareos.org/master/html/bareos-manual-main-reference.html#x1-22400017.2.2).
> 
> However, the migration (actually a copy) seems to run one job at a time
> instead of in parallel. My copy job looks like:
> 
> Job {
>   Name = "migrate-disk-to-tape"
>   Type = Copy
>   Messages = Standard
>   Pool = Full
>   Selection Type = Volume
>   Selection Pattern = .
> }
> 
> My disk storage looks like:
> 
> Storage {
>   Name = File
>   Media Type = File
>   Address = bareos
>   Password = "…"
>   Device = FileStorage
>   Device = FileStorage2
>   Device = FileStorage3
>   Device = FileStorage4
>   Device = FileStorage5
>   Maximum Concurrent Jobs = 5
> }
> 
> But still my job list only runs jobs one by one:
> 
> Running Jobs:
> Console connected at 03-Jan-17 11:12
>  JobId Level   Name                       Status
> ======================================================================
>    868 Increme  migrate-full-to-tape.2017-01-02_22.25.13_19 has
> terminated
>    869 Increme  backup-server1-fd.2017-01-02_22.25.13_20 Dir inserting
> Attributes
>    870 Increme  migrate-full-to-tape.2017-01-02_22.25.13_21 is waiting
> on max Job jobs
>    871 Increme  backup-server2-fd.2017-01-02_22.25.13_22 is waiting
> execution
>    872 Increme  migrate-full-to-tape.2017-01-02_22.25.13_23 is waiting
> on max Job jobs
>    873 Increme  backup-server3-fd.2017-01-02_22.25.13_24 is waiting
> execution
>    874 Increme  migrate-full-to-tape.2017-01-02_22.25.13_25 is waiting
> on max Job jobs
>    875 Full    backup-server4-fd.2017-01-02_22.25.13_26 is waiting
> execution
>    876 Increme  migrate-full-to-tape.2017-01-02_22.25.13_27 is waiting
> on max Job jobs
>    877 Full    backup-server5-fd.2017-01-02_22.25.13_28 is waiting
> execution
>    878 Increme  migrate-full-to-tape.2017-01-02_22.25.13_29 is waiting
> on max Job jobs
> […]
> 
> My device status:
> 
> Device "FileStorage" (/srv/qnap) is mounted with:
>     Volume:      Full-0006
>     Pool:        Full
>     Media type:  File
>     Total Bytes Read=4,254,372,864 Blocks Read=65,947
> Bytes/block=64,512
>     Positioned at File=11 Block=1,378,210,138
> ==
> 
> Device "FileStorage2" (/srv/qnap) is not open.
> ==
> 
> Device "FileStorage3" (/srv/qnap) is not open.
> ==
> 
> Device "FileStorage4" (/srv/qnap) is not open.
> ==
> 
> Device "FileStorage5" (/srv/qnap) is not open.
> ==
> ====
> 
> Used Volume status:
> Full-0006 on device "FileStorage" (/srv/qnap)
>     Reader=1 writers=0 reserves=0 volinuse=1
> Volume: Full-0006 no device. volinuse= 0
> Volume: Full-0007 no device. volinuse= 0
> ====
> 
> Can anyone give me a clue how to improve the performance of the
> migration? Otherwise it will take days every time.
> 
> Maybe it's the "is waiting on max Job jobs" status, but I'm not aware of
> any low "Max …" setting.
> 
> Thanks for your time.
> 
> Kindly… Christoph

Well, in the documentation here
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#QQ2-1-203
you will see in the table that the default for the parameter
Maximum Concurrent Jobs (positive-integer) is 1.

That makes sense in most cases, but for the setup you want I would start by
checking whether raising this value on your Copy/Migrate job does it.
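For example, something like this (just a sketch, not tested here; the value 5
only mirrors the five file devices in your Storage resource, adjust to taste):

Job {
  Name = "migrate-disk-to-tape"
  Type = Copy
  Messages = Standard
  Pool = Full
  Selection Type = Volume
  Selection Pattern = .
  # Default is 1, which serializes the copy jobs
  Maximum Concurrent Jobs = 5
}

Keep in mind that the Director, Storage and Device resources each have their
own Maximum Concurrent Jobs directive, so the effective concurrency is the
smallest of all the limits along the path.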


-- 

Bruno Friedmann 
 Ioda-Net Sàrl www.ioda-net.ch
 Bareos Partner, openSUSE Member, fsfe fellowship
 GPG KEY : D5C9B751C4653227
 irc: tigerfoot

openSUSE Tumbleweed
Linux 4.9.0-2-default x86_64 GNU/Linux, nvidia: 375.26
Qt: 5.7.1, KDE Frameworks: 5.29.0, Plasma: 5.8.4, kmail2 5.4.0

-- 
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
