On Wednesday, June 29, 2016 at 1:40:58 PM UTC-7, marco.van.wieringen wrote:
> On 06/29/16 09:35 PM, Fox Whittington wrote:
> >
> > Thanks for the reply. I have tried most of that without much luck. We
> > don't have enough local space for one of our massive jobs. Is it possible
> > to get Bareos to do spooling to local disk before the data goes through
> > rados to Ceph? We have configured spooling, but it seems to be ignored.
> >
>
> It should be possible for every device type. I have it configured
> myself for disk (VTL) and tape storage.
>
> e.g. in the bareos-sd in the device section something like:
>
>
> Spool Directory = ...
> Maximum Job Spool Size = ...
>
> I haven't tested it with Ceph, but as it's generic I would expect it to
> just work.
>
> --
> Marco van Wieringen [email protected]
> Bareos GmbH & Co. KG Phone: +49-221-63069389
> http://www.bareos.com
>
> Sitz der Gesellschaft: Köln | Amtsgericht Köln: HRA 29646
> Komplementär: Bareos Verwaltungs-GmbH
> Geschäftsführer: Stephan Dühr, M. Außendorf, J. Steffens,
> P. Storz, M. v. Wieringen
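(For later readers: filled in, that Device snippet would look something like the
following; the directory and cap here are only illustrative, not from Marco's
setup:

Device {
  ...
  Spool Directory = /var/lib/bareos/spool
  Maximum Job Spool Size = 300000000000
}
)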
Thanks, for some reason it is working now, but when it despools, the transfer
rate is as poor as always (the AveBytes/sec below works out to roughly 3.5
MB/s). No help there.
Below are our test config and status output for this storage and this client:
Running Jobs:
Writing: Full Backup job clienthost.example.com JobId=1415 Volume="Vol-0657"
pool="CephPool" device="CephStorage" (Rados Device)
spooling=0 despooling=1 despool_wait=0
Files=13,621,842 Bytes=3,296,515,371,142 AveBytes/sec=3,628,711
LastBytes/sec=3,419,745
FDReadSeqNo=159,816,491 in_msg=122797073 out_msg=5 fd=16
====
Device status:
Device "CephStorage" (Rados Device) is mounted with:
Volume: Vol-0657
Pool: CephPool
Media type: RadosFile
Total Bytes=58,054,993,153 Blocks=899,910 Bytes/block=64,511
Positioned at File=13 Block=2,220,418,304
==
====
Used Volume status:
Vol-0657 on device "CephStorage" (Rados Device)
Reader=0 writers=1 reserves=0 volinuse=1
====
Data spooling: 1 active jobs, 299,999,945,874 bytes; 2 total jobs,
300,000,008,124 max bytes/job.
Attr spooling: 1 active jobs, 1,610,400,986 bytes; 2 total jobs, 1,610,400,986
max bytes.
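(That status block is what bconsole prints; assuming the director-side storage
name from the JobDefs below, it came from something like:

status storage=CephFile

At that average rate, the ~3.3 TB written so far represents more than ten days
of despooling.)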
SD:
Device {
Name = CephStorage
Archive Device = "Rados Device"
Device Options = "conffile=/etc/ceph/ceph.conf,poolname=bareos"
Spool Directory = /u0/BareOS/spool
Maximum Spool Size = 300000000000
Device Type = rados
Media Type = RadosFile
Label Media = yes
Minimum block size = 2097152
Maximum block size = 4194304
Random Access = yes
Automatic Mount = yes
}
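(Note: Marco's example used Maximum Job Spool Size, which caps spooling per
job, while the Maximum Spool Size above caps the spool area for the whole
device. A per-job cap would be one extra line in the same Device resource;
the value here just mirrors our device-wide limit:

# per-job spool cap, illustrative value
Maximum Job Spool Size = 300000000000

Both directives can be set at once.)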
DIR:
JobDefs:
JobDefs {
Name = "Ceph-Linux-weekly-all"
Type = Backup
Level = Incremental
Client = dirhost.example.com-fd
FileSet = "Linux All"
Schedule = "WeeklyCycle"
SpoolData = yes
Allow Duplicate Jobs = no
Storage = CephFile
Messages = Standard
Pool = CephPool
Priority = 10
Write Bootstrap = "/var/lib/bareos/%c.bsr"
}
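(Jobs pick these defaults up by referencing the JobDefs; a minimal sketch, with
a hypothetical job name and the client overridden per host:

Job {
  Name = "clienthost.example.com-weekly"
  JobDefs = "Ceph-Linux-weekly-all"
  Client = clienthost.example.com-fd
}
)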
Spool Dir:
-rw-r----- 1 bareos bareos 299999946807 Jun 21 03:05
clienthost.example.com-sd.data.1340.clienthost.example.com.2016-06-12_18.52.27_05.CephStorage.spool
-rw-r----- 1 bareos bareos 299999945874 Jul 5 21:09
clienthost.example.com-sd.data.1415.clienthost.example.com.2016-06-25_21.00.01_32.CephStorage.spool
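(To rule out the cluster itself as the bottleneck, it may be worth comparing
against a raw write benchmark on the same pool; rados bench ships with Ceph:

rados bench -p bareos 60 write -b 4194304 -t 16

The pool name and 4 MB block size match the Device Options and Maximum block
size above; the 60-second duration and 16 concurrent ops are arbitrary
choices.)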