Thanks Jörg.

I made these changes:



Storage {
        Name = vps52371
        Device = vps52371                           # bacula-sd.conf Device
        Media Type = vps52371
        Address = backups.hosted-power.com          # backup server fqdn > sent to client sd
        Password = "TO7KlwqV4+Y3+FcGA9EIZdyQcWP6rrsELiLFRMgyHTX7"  # password for Storage daemon
        Maximum Concurrent Jobs = 1                 # required for virtual full
    # @/etc/bacula/storage.conf
}

Storage {
        Name = virtual-vps52371
        Device = virtual-vps52371                   # bacula-sd.conf Device
        Media Type = vps52371
        Address = backups.hosted-power.com          # backup server fqdn > sent to client sd
        Password = "TO7KlwqV4+Y3+FcGA9EIZdyQcWP6rrsELiLFRMgyHTX7"  # password for Storage daemon
        Maximum Concurrent Jobs = 1                 # required for virtual full
    # @/etc/bacula/storage.conf
}

Pool {
        Pool Type = Backup
        Name = vps52371
        LabelFormat = "vps52371-"
        Maximum Volume Jobs = 1                     # a new file for each backup that is done
        Volume Retention = 35 days
        AutoPrune = yes
        Action On Purge = Truncate
        Next Pool = virtual-vps52371
}

Pool {
        Pool Type = Backup
        Name = virtual-vps52371
        Storage = virtual-vps52371
        LabelFormat = "virtual-vps52371-"
        Maximum Volume Jobs = 1                     # a new file for each backup that is done
        Volume Retention = 60 days
        AutoPrune = yes
        Action On Purge = Truncate
        Next Pool = vps52371
}


So I changed the Media Type to be the same for both devices. I also set a Next 
Pool in the virtual pool. Is this required/useful? I read somewhere it could be 
a solution, but it didn't work (I have never tried it with the same Media Type 
so far).


The Always Incremental feature looks very promising. I took a quick look and 
the documentation says it is not suitable for long-term storage. I would only 
need 35 days. Would a single Always Incremental job suffice to keep 35 days of 
backups forever? :)
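For reference, a rough sketch of what such a setup might look like (untested; 
the directive names are taken from the bareos-16.2 documentation, the schedule 
name is hypothetical, and the client/pool/fileset names are just reused from my 
existing config as placeholders):

Job {
        Name = "backup-vps52371-ai"
        Type = Backup
        Level = Incremental
        Client = vps52371
        Storage = vps52371
        Pool = vps52371
        FileSet = "windows-all"
        Messages = Standard
        Schedule = "daily-incremental"              # hypothetical: incrementals only, no full/diff runs
        Accurate = yes                              # required for Always Incremental
        Always Incremental = yes
        Always Incremental Job Retention = 35 days  # keep at least 35 days of restore points
        Always Incremental Keep Number = 7          # never consolidate the newest 7 jobs
}

# A separate Consolidate job merges the expired incrementals into a new
# (virtual) full; it picks the eligible Always Incremental jobs automatically.
Job {
        Name = "consolidate-vps52371"
        Type = Consolidate
        Accurate = yes
        Max Full Consolidations = 1                 # if supported by the installed version
        Client = vps52371
        FileSet = "windows-all"
        Storage = vps52371
        Pool = vps52371
        Messages = Standard
}

If I understand the docs correctly, the consolidation uses the same read/write 
device mechanics as a Virtual Full, so the same Media Type considerations as 
above would apply.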

What about my MySQL databases, will they also just keep working forever with 
this scheme? Of course the dumps will be fully captured each time, but that is 
ok.
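(For context, the dumps are regular files picked up by the backup; a generic 
way to wire such a dump into the job would be a pre-job script, roughly like 
the following sketch, where the job/client names, script path, and dump 
location are all hypothetical:)

Job {
        Type = Backup
        Name = "backup-db01"                        # hypothetical
        Client = db01
        Storage = vps52371
        Pool = vps52371
        FileSet = "mysql-dumps"
        Messages = Standard
        # dump all databases to a file before the backup starts (script is hypothetical)
        Client Run Before Job = "/usr/local/sbin/mysqldump-all.sh"
}

FileSet {
        Name = "mysql-dumps"
        Include {
                Options {
                        signature = MD5
                        compression = GZIP
                }
                File = /var/backups/mysql           # wherever the dump script writes
        }
}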


Regards
Jo




Op donderdag 13 oktober 2016 17:39:05 UTC+2 schreef Jörg Steffens:
> Hi,
> 
> for virtual fulls you need two devices: one for reading, one for writing.
> Your first virtual full ended up in virtual-vps52371 on storage
> virtual-vps52371. The second virtual full tries to use the first
> virtual full (since it is the newest full) and tries to combine it with
> other differentials/incrementals.
> 
> So Bareos chooses Device virtual-vps52371 as write device.
> However, to read the existing virtual full it would also need Device
> virtual-vps52371, which is already blocked.
> 
> You did choose the same path for the devices vps52371 and
> virtual-vps52371. However, you did also choose different Media Types
> (vps52371 and virtual-vps52371). So for Bareos the volumes in
> /home/vps52371/bareos are separated in two sets.
> 
> Making following change might solve this problem:
> Device {
>    Name = virtual-vps52371
>    # DON'T USE DIFFERENT MEDIA TYPE: Media Type = virtual-vps52371
>    Media Type = vps52371
>    Archive Device = /home/vps52371/bareos
>    LabelMedia = yes;
>    Random Access = Yes;
>    AutomaticMount = yes;
>    RemovableMedia = no;
>    AlwaysOpen = no;
>    Maximum Concurrent Jobs = 1
> }
> 
> At least it should point you in the right direction. If not, Bareos.com
> also offers support.
> 
> Instead of VirtualFulls, you might prefer to use the (well documented)
> Always Incremental feature of bareos-16.2:
> http://doc.bareos.org/master/html/bareos-manual-main-reference.html#AlwaysIncrementalBackupScheme
> 
> This is a more advanced use of VirtualFulls.
> 
> regards,
> Jörg
> 
> Am 15.08.2016 um 10:31 schrieb Hosted Power:
> > Hi,
> > 
> > 
> > We're using the virtual full feature. It worked at least once, but not the 
> > second time. I suspect either a disk-full issue or a problem that only 
> > appears on the second run, but I'm not sure.
> > 
> > Relevant config:
> > 
> > Schedule {
> >     Name = "monthly-cycle-2"
> >     Run = VirtualFull Accurate=yes 2nd sun at 00:15
> >     Run = Differential Accurate=yes 3rd-1st sun at 00:15
> >     Run = Incremental Accurate=yes mon-sat at 00:15
> >     
> > }
> > 
> > 
> > # Backup for vps52371
> > Device {
> >   Name = vps52371
> >   Media Type = vps52371
> >   Archive Device = /home/vps52371/bareos
> >   LabelMedia = yes;
> >   Random Access = Yes;
> >   AutomaticMount = yes;
> >   RemovableMedia = no;
> >   AlwaysOpen = no;
> >   Maximum Concurrent Jobs = 1
> > }
> > 
> > Device {
> >   Name = virtual-vps52371
> >   Media Type = virtual-vps52371
> >   Archive Device = /home/vps52371/bareos
> >   LabelMedia = yes;
> >   Random Access = Yes;
> >   AutomaticMount = yes;
> >   RemovableMedia = no;
> >   AlwaysOpen = no;
> >   Maximum Concurrent Jobs = 1
> > }
> > 
> > 
> > 
> > Storage {
> >     Name = vps52371
> >     Device = vps52371                       # Device
> >     Media Type = vps52371
> >     Address = www.hosted-power.com          # backup server fqdn > sent to client sd
> >     Password = "TO7KlwqV4+Y3+FcGA9EIZdyQcWP6rrsELiLFRMgyHTX7"  # password for Storage daemon
> >     Maximum Concurrent Jobs = 1             # required for virtual full
> > }
> > 
> > Storage {
> >     Name = virtual-vps52371
> >     Device = virtual-vps52371               # Device
> >     Media Type = virtual-vps52371
> >     Address = www.hosted-power.com          # backup server fqdn > sent to client sd
> >     Password = "TO7KlwqV4+Y3+FcGA9EIZdyQcWP6rrsELiLFRMgyHTX7"  # password for Storage daemon
> >     Maximum Concurrent Jobs = 1             # required for virtual full
> > }
> > 
> > Pool {
> >     Pool Type = Backup
> >     Name = vps52371
> >     LabelFormat = "vps52371-"
> >     Maximum Volume Jobs = 1                 # a new file for each backup that is done
> >     Maximum Volumes = 80
> >     Volume Retention = 60 days
> >     AutoPrune = yes
> >     Action On Purge = Truncate
> >     Next Pool = virtual-vps52371
> > }
> > 
> > Pool {
> >     Pool Type = Backup
> >     Name = virtual-vps52371
> >     Storage = virtual-vps52371
> >     LabelFormat = "virtual-vps52371-"
> >     Maximum Volume Jobs = 1                 # a new file for each backup that is done
> >     Maximum Volumes = 80
> >     Volume Retention = 60 days
> >     AutoPrune = yes
> >     Action On Purge = Truncate
> > }
> > 
> > 
> > Job {
> >     Type = Backup
> >     Name = "backup-vps52371"
> >     Client = vps52371
> >     Storage = vps52371
> >     Pool = vps52371
> >     FileSet = "windows-all"
> >     Messages = Standard
> >     Schedule = "monthly-cycle-2"
> >     #Defaults
> >     Accurate=yes
> >     Level=Full
> > }
> > 
> > 
> > 
> > (So use separate device for each client we backup)
> > 
> > 
> > 
> > Job log:
> > 
> > 
> > 2016-08-14 00:20:19 hostedpower-dir JobId 678: Created new Volume 
> > "virtual-vps52371-0689" in catalog.
> >  
> > 2016-08-14 00:20:19 hostedpower-dir JobId 678: Using Device 
> > "virtual-vps52371" to write.
> >  
> > 2016-08-14 00:20:19 backup02-sd JobId 678: acquire.c:114 Changing read 
> > device. Want Media Type="virtual-vps52371" have="vps52371"
> >  device="vps52371" (/home/vps52371/bareos)
> >  
> > 2016-08-14 00:20:19 backup02-sd JobId 678: Fatal error: acquire.c:169 No 
> > suitable device found to read Volume "virtual-vps52371-0189"
> >  
> > 2016-08-14 00:20:19 backup02-sd JobId 678: Elapsed time=408646:20:19, 
> > Transfer rate=0 Bytes/second
> >  
> > 2016-08-14 00:20:19 hostedpower-dir JobId 678: Error: Bareos 
> > hostedpower-dir 15.2.2 (16Nov15):
> >  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
> >  JobId: 678
> >  Job: backup-vps52371.2016-08-14_00.15.00_19
> >  Backup Level: Virtual Full
> >  Client: "vps52371" 15.2.2 (16Nov15) Microsoft Windows Server 2012 Standard 
> > Edition (build 9200), 64-bit,Cross-compile,Win64
> >  FileSet: "windows-all" 2016-06-30 09:09:54
> >  Pool: "virtual-vps52371" (From Job Pool's NextPool resource)
> >  Catalog: "MyCatalog" (From Client resource)
> >  Storage: "virtual-vps52371" (From Storage from Pool's NextPool resource)
> >  Scheduled time: 14-Aug-2016 00:15:00
> >  Start time: 13-Aug-2016 00:15:01
> >  End time: 13-Aug-2016 00:26:02
> >  Elapsed time: 11 mins 1 sec
> >  Priority: 10
> >  SD Files Written: 0
> >  SD Bytes Written: 0 (0 B)
> >  Rate: 0.0 KB/s
> >  Volume name(s): 
> >  Volume Session Id: 4
> >  Volume Session Time: 1471075390
> >  Last Volume Bytes: 0 (0 B)
> >  SD Errors: 1
> >  SD termination status: Fatal Error
> >  Termination: *** Backup Error ***
> > 
> >  
> > 2016-08-14 00:19:11 hostedpower-dir JobId 678: Using Device "vps52371" to 
> > read.
> >  
> > 2016-08-14 00:17:10 hostedpower-dir JobId 678: Bootstrap records written to 
> > /var/lib/bareos/hostedpower-dir.restore.2.bsr
> >  
> > 2016-08-14 00:15:00 hostedpower-dir JobId 678: Start Virtual Backup JobId 
> > 678, Job=backup-vps52371.2016-08-14_00.15.00_19
> >  
> > 
> > Showing 1 to 9 of 9 entries
> > 
> > 
> > Backup files currently present, /home/vps52371/bareos:
> > # ls -alh
> > total 104G
> > drwxrwxr-x 2 bareos   bareos   4.0K Aug 15 00:15 .
> > drwxr-xr-x 5 vps52371 vps52371 4.0K Apr 19 16:33 ..
> > -rw-r----- 1 bareos   bareos    34G Jul 10 00:21 virtual-vps52371-0189
> > -rw-r----- 1 bareos   bareos    34G Jun 30 10:24 vps52371-0045
> > -rw-r----- 1 bareos   bareos   457M Jul  1 00:29 vps52371-0053
> > -rw-r----- 1 bareos   bareos   432M Jul  2 00:27 vps52371-0066
> > -rw-r----- 1 bareos   bareos   530M Jul  3 00:26 vps52371-0076
> > -rw-r----- 1 bareos   bareos   167M Jul  4 00:26 vps52371-0088
> > -rw-r----- 1 bareos   bareos   527M Jul  5 00:30 vps52371-0099
> > -rw-r----- 1 bareos   bareos   455M Jul  6 00:31 vps52371-0113
> > -rw-r----- 1 bareos   bareos   483M Jul  7 00:33 vps52371-0124
> > -rw-r----- 1 bareos   bareos   473M Jul  8 00:24 vps52371-0156
> > -rw-r----- 1 bareos   bareos   653M Jul  9 00:24 vps52371-0171
> > -rw-r----- 1 bareos   bareos    16M Jul 11 00:22 vps52371-0200
> > -rw-r----- 1 bareos   bareos   570M Jul 12 00:24 vps52371-0216
> > -rw-r----- 1 bareos   bareos   496M Jul 13 00:24 vps52371-0230
> > -rw-r----- 1 bareos   bareos   1.1G Jul 14 00:25 vps52371-0246
> > -rw-r----- 1 bareos   bareos   704M Jul 15 00:25 vps52371-0261
> > -rw-r----- 1 bareos   bareos   566M Jul 16 00:24 vps52371-0276
> > -rw-r----- 1 bareos   bareos   1.6G Jul 17 00:28 vps52371-0291
> > -rw-r----- 1 bareos   bareos   1.3G Jul 18 00:31 vps52371-0304
> > -rw-r----- 1 bareos   bareos   482M Jul 19 00:27 vps52371-0321
> > -rw-r----- 1 bareos   bareos   654M Jul 20 00:26 vps52371-0335
> > -rw-r----- 1 bareos   bareos   1.5G Jul 21 00:32 vps52371-0351
> > -rw-r----- 1 bareos   bareos   410M Jul 22 00:29 vps52371-0365
> > -rw-r----- 1 bareos   bareos   561M Jul 23 00:32 vps52371-0381
> > -rw-r----- 1 bareos   bareos   3.7G Jul 24 00:53 vps52371-0394
> > -rw-r----- 1 bareos   bareos   251M Jul 25 00:23 vps52371-0409
> > -rw-r----- 1 bareos   bareos   398M Jul 26 00:24 vps52371-0424
> > -rw-r----- 1 bareos   bareos   346M Jul 27 00:24 vps52371-0440
> > -rw-r----- 1 bareos   bareos   504M Jul 28 00:24 vps52371-0454
> > -rw-r----- 1 bareos   bareos   475M Jul 29 00:23 vps52371-0469
> > -rw-r----- 1 bareos   bareos   366M Jul 30 00:23 vps52371-0484
> > -rw-r----- 1 bareos   bareos   3.9G Jul 31 00:35 vps52371-0499
> > -rw-r----- 1 bareos   bareos    25M Aug  1 00:22 vps52371-0512
> > -rw-r----- 1 bareos   bareos   570M Aug  2 00:26 vps52371-0528
> > -rw-r----- 1 bareos   bareos   2.0G Aug  3 00:34 vps52371-0543
> > -rw-r----- 1 bareos   bareos   490M Aug  4 00:31 vps52371-0559
> > -rw-r----- 1 bareos   bareos   557M Aug  5 00:29 vps52371-0574
> > -rw-r----- 1 bareos   bareos   528M Aug  6 00:29 vps52371-0588
> > -rw-r----- 1 bareos   bareos   4.4G Aug  7 00:54 vps52371-0604
> > -rw-r----- 1 bareos   bareos   7.5M Aug  8 00:30 vps52371-0617
> > -rw-r----- 1 bareos   bareos   636M Aug  9 00:37 vps52371-0632
> > -rw-r----- 1 bareos   bareos   601M Aug 10 00:40 vps52371-0646
> > -rw-r----- 1 bareos   bareos   964M Aug 11 00:29 vps52371-0663
> > -rw-r----- 1 bareos   bareos   1.3G Aug 12 00:28 vps52371-0677
> > -rw-r----- 1 bareos   bareos   481M Aug 13 00:26 vps52371-0682
> > -rw-r----- 1 bareos   bareos   1.1G Aug 15 00:28 vps52371-0695
> > 
> > 
> > 
> > 
> > It can hardly be a disk space issue on the device itself:
> > 
> > 
> > # df -h
> > Filesystem                            Size  Used Avail Use% Mounted on
> > /dev/mapper/backups-vps52371          296G  104G  192G  36% /home/vps52371
> > 
> > 
> > Unless it writes to a temp folder located somewhere else?
> > 
> > Any thoughts on why this fails?
> > 
> > 
> > 
> > Kind regards
> > Jo Goossens
> > 
> 
> 
> -- 
>  Jörg Steffens                   [email protected]
>  Bareos GmbH & Co. KG            Phone: +49 221 630693-91
>  http://www.bareos.com           Fax:   +49 221 630693-10
> 
>  Sitz der Gesellschaft: Köln | Amtsgericht Köln: HRA 29646
>  Komplementär: Bareos Verwaltungs-GmbH
>  Geschäftsführer:
>  S. Dühr, M. Außendorf, Jörg Steffens, P. Storz

-- 
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.