Hi!
One more update!
I changed MaximumConcurrentJobs to 1 in the Storage resource and the rotation
now works correctly.
Storage {
Name = backup-vmware
Address = 172.16.111.86
Password = PWD
Device = backup-vmware
MaximumConcurrentJobs = 1
Media Type = File
}
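
If keeping concurrency is desired, an alternative I have seen suggested for
File storage (untested here; device names and paths are illustrative) is to
define several devices in the storage daemon, each limited to one job, so
concurrent jobs do not compete for the same device while reserving volumes:

Device {
  Name = backup-vmware-1
  Media Type = File
  Archive Device = /mnt/backup-vmware
  Device Type = File
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Maximum Concurrent Jobs = 1
}
# repeated as backup-vmware-2 ... backup-vmware-8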
Maybe a bug?
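
For the empty Append volumes that accumulated, a bconsole session along these
lines can inspect and remove them (a sketch; the volume name is one example
from the listing below, and note that prune honors retention while purge and
delete do not, so use the latter two with care):

list volumes pool=vmware-diario
prune volume=vmware-diario-5622 yes
purge volume=vmware-diario-5622
delete volume=vmware-diario-5622 yes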
Regards,
Rodrigo
On Mon, May 10, 2021 at 08:32, Rodrigo Jorge <[email protected]> wrote:
> Hi There,
>
> Does anyone have an idea about this problem?
>
> Regards,
>
> Rodrigo
>
> On Fri, May 7, 2021 at 11:53, Rodrigo Jorge <[email protected]> wrote:
>
>> Hello there!
>>
>> I am backing up virtual machines with the VMware plugin, and that part
>> works fine.
>> However, I have a problem with volume retention: new volumes keep being
>> created even though volumes with status Recycle or Append are available.
>>
>> The job log shows the new volume creation, but I don't know the cause:
>>
>> 06-May 19:00 backup01-dir JobId 104618: Start Backup JobId 104618,
>> Job=VMWARE-SRV-FSPERFIL01.2021-05-06_19.00.00_12
>> 06-May 19:00 backup01-dir JobId 104618: Connected Storage daemon at
>> 172.16.111.86:9103, encryption: PSK-AES256-CBC-SHA
>> 06-May 19:00 backup01-dir JobId 104618: Volume "vmware-diario-5546" has
>> Volume Retention of 1123200 sec. and has 0 jobs that will be pruned
>> 06-May 19:00 backup01-dir JobId 104618: Volume "vmware-diario-5436" has
>> Volume Retention of 1123200 sec. and has 0 jobs that will be pruned
>> 06-May 19:00 backup01-dir JobId 104618: Volume "vmware-diario-5445" has
>> Volume Retention of 1123200 sec. and has 3 jobs that will be pruned
>> 06-May 19:00 backup01-dir JobId 104618: Purging the following JobIds:
>> 103094,103097,103092
>> 06-May 19:00 backup01-dir JobId 104618: Volume "vmware-diario-5446" has
>> Volume Retention of 1123200 sec. and has 0 jobs that will be pruned
>> 06-May 19:01 backup01-dir JobId 104618: Volume "vmware-diario-5437" has
>> Volume Retention of 1123200 sec. and has 0 jobs that will be pruned
>> 06-May 19:01 backup01-dir JobId 104618: Created new Volume
>> "vmware-diario-5617" in catalog.
>> 06-May 19:01 backup01-dir JobId 104618: Volume "vmware-diario-5546" has
>> Volume Retention of 1123200 sec. and has 0 jobs that will be pruned
>> 06-May 19:01 backup01-dir JobId 104618: Volume "vmware-diario-5437" has
>> Volume Retention of 1123200 sec. and has 0 jobs that will be pruned
>> 06-May 19:01 backup01-dir JobId 104618: Created new Volume
>> "vmware-diario-5618" in catalog.
>> 06-May 19:01 backup01-dir JobId 104618: Volume "vmware-diario-5546" has
>> Volume Retention of 1123200 sec. and has 0 jobs that will be pruned
>> ...
>> ...
>> 06-May 19:01 backup01-dir JobId 104618: Volume "vmware-diario-5594" has
>> Volume Retention of 1123200 sec. and has 0 jobs that will be pruned
>> 06-May 19:01 backup01-dir JobId 104618: Volume "vmware-diario-5437" has
>> Volume Retention of 1123200 sec. and has 0 jobs that will be pruned
>> 06-May 19:01 backup01-dir JobId 104618: Created new Volume
>> "vmware-diario-5634" in catalog.
>> 06-May 19:03 backup01-dir JobId 104618: Using Device "backup-vmware" to
>> write.
>> 06-May 19:03 backup01-dir JobId 104618: Connected Client: srv-backup04-fd
>> at 172.16.111.86:9102, encryption: PSK-AES256-CBC-SHA
>> 06-May 19:03 backup01-dir JobId 104618: Handshake: Immediate TLS
>> 06-May 19:03 backup01-dir JobId 104618: Encryption: PSK-AES256-CBC-SHA
>> 06-May 19:03 srv-backup04-fd JobId 104618: Connected Storage daemon at
>> 172.16.111.86:9103, encryption: PSK-AES256-CBC-SHA
>> 06-May 19:03 srv-backup04-fd JobId 104618: Extended attribute support is
>> enabled
>> 06-May 19:03 srv-backup04-fd JobId 104618: ACL support is enabled
>> 06-May 19:04 srv-backup04-fd JobId 104618: python-fd: Starting backup of
>> /VMS/500714a5-1233-5678-866c-af8a10bb18c0/[VOL_01]
>> SRV-FSPERFIL01/SRV-FSPERFIL01.vmdk_cbt.json
>> 06-May 19:04 srv-backup04-fd JobId 104618: python-fd: Starting backup of
>> /VMS/500714a5-1233-5678-866c-af8a10bb18c0/[VOL_01]
>> SRV-FSPERFIL01/SRV-FSPERFIL01.vmdk
>> 06-May 19:06 srv-backup04-sd JobId 104618: Releasing device
>> "backup-vmware" (/mnt/backup-vmware).
>> 06-May 19:06 srv-backup04-sd JobId 104618: Elapsed time=00:03:05,
>> Transfer rate=3.086 M Bytes/second
>> 06-May 19:06 backup01-dir JobId 104618: Insert of attributes batch table
>> with 1 entries start
>> 06-May 19:06 backup01-dir JobId 104618: Insert of attributes batch table
>> done
>> 06-May 19:06 backup01-dir JobId 104618: Bareos backup01-dir 19.2.7
>> (16Apr20):
>> JobId: 104618
>> 06-May 19:06 backup01-dir JobId 104618: Begin pruning Jobs older than 6
>> months.
>> 06-May 19:06 backup01-dir JobId 104618: No Jobs found to prune.
>> 06-May 19:06 backup01-dir JobId 104618: Begin pruning Files.
>> 06-May 19:06 backup01-dir JobId 104618: Pruned Files from 2 Jobs for
>> client srv-backup04-fd from catalog.
>> 06-May 19:06 backup01-dir JobId 104618: End auto prune.
>>
>> I have around 50 jobs, one per VM, all executed on the same client with 8
>> concurrent jobs.
>> Why are new volumes created while usable volumes are still available?
>> Could this be related to the number of concurrent jobs from the same
>> client? (The Volume Retention of 1,123,200 sec in the log matches the
>> configured 13 days, i.e. 13 x 86,400 sec, so retention itself looks
>> consistent.)
>> Every day many new volumes are created, and after ALL jobs have finished
>> these volumes remain in Append status.
>>
>>
>> +---------+--------------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------------------+---------------+
>> | mediaid | volumename         | volstatus | enabled | volbytes      | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten         | storage       |
>> +---------+--------------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------------------+---------------+
>> | 5,617   | vmware-diario-5617 | Used      | 1       | 2,653,728,918 | 0        | 1,123,200    | 1       | 0    | 0         | File      | 2021-05-06 19:07:13 | backup-vmware |
>> | 5,618   | vmware-diario-5618 | Used      | 1       | 3,259,671,087 | 0        | 1,123,200    | 1       | 0    | 0         | File      | 2021-05-06 19:11:07 | backup-vmware |
>> | 5,619   | vmware-diario-5619 | Used      | 1       | 1,644,532,427 | 0        | 1,123,200    | 1       | 0    | 0         | File      | 2021-05-06 19:16:32 | backup-vmware |
>> | 5,620   | vmware-diario-5620 | Append    | 1       | 2,043,318,590 | 0        | 1,123,200    | 1       | 0    | 0         | File      | 2021-05-06 19:20:01 | backup-vmware |
>> | 5,622   | vmware-diario-5622 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,623   | vmware-diario-5623 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,624   | vmware-diario-5624 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,625   | vmware-diario-5625 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,626   | vmware-diario-5626 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,627   | vmware-diario-5627 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,628   | vmware-diario-5628 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,629   | vmware-diario-5629 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,630   | vmware-diario-5630 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,631   | vmware-diario-5631 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,632   | vmware-diario-5632 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,633   | vmware-diario-5633 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,634   | vmware-diario-5634 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,635   | vmware-diario-5635 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,636   | vmware-diario-5636 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> | 5,637   | vmware-diario-5637 | Append    | 1       | 0             | 0        | 1,123,200    | 1       | 0    | 0         | File      | 0000-00-00 00:00:00 | backup-vmware |
>> +---------+--------------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------------------+---------------+
>>
>>
>> My configs:
>>
>> Job {
>> Name = "VMWARE-VMNAME"
>> Client=srv-backup04-fd
>> JobDefs = "DefJobsVMWARE"
>> FileSet = "VMWARE-DEIVE-REIS-W7"
>> Storage = backup-vmware
>> }
>>
>> FileSet {
>> Name = "VMWARE-VMNAME"
>>
>> Include {
>> Options {
>> signature = MD5
>> Compression = LZO
>> }
>> Plugin =
>> "python:module_path=/usr/lib64/bareos/plugins/vmware_plugin:module_name=bareos-fd-vmware:dc=DOMAIN:uuid=5019029f-01ba-bf3c-66c1-956683c4fc05:vcserver=vcenter01.MYDOMAIN.local:[email protected]
>> :vcpass=<MYPWD>"
>> }
>> }
>>
>> JobDefs {
>> Name = "DefJobsVMWARE"
>> Type = Backup
>> Level = Incremental
>> Storage = backup-vmware
>> Schedule = "vmware"
>> Messages = Standard
>> Accurate = no
>> Pool = vmware-diario
>> Priority = 10
>> Write Bootstrap = "/var/lib/bareos/%c.bsr"
>> Full Backup Pool = vmware-full
>> Incremental Backup Pool = vmware-diario
>> Maximum Concurrent Jobs = 5
>> }
>>
>> Schedule {
>> Name = "vmware"
>> Run = Full sat at 02:00
>> Run = Incremental mon-fri at 19:00
>> }
>>
>> Storage {
>> Name = backup-vmware
>> Address = ADDRESS
>> Password = PWD
>> Device = backup-vmware
>> MaximumConcurrentJobs = 8
>> Media Type = File
>> }
>>
>> Pool {
>> Name = vmware-full
>> Pool Type = Backup
>> Recycle = yes
>> AutoPrune = yes
>> Action On Purge = Truncate
>> Volume Retention = 13 days
>> File Retention = 13 days
>> Volume Use Duration = 4h
>> Label Format = "vmware-full-"
>> Recycle Oldest Volume = yes
>> Maximum Volume Bytes = 100GB
>> Maximum Volume Jobs = 10
>> }
>>
>> Pool {
>> Name = vmware-diario
>> Pool Type = Backup
>> Recycle = yes
>> AutoPrune = yes
>> Action On Purge = Truncate
>> Volume Retention = 13 days
>> File Retention = 13 days
>> Volume Use Duration = 4h
>> Label Format = "vmware-diario-"
>> Recycle Oldest Volume = yes
>> Maximum Volume Bytes = 100GB
>> Maximum Volume Jobs = 10
>> }
>>
>>
>> Regards,
>>
>>
>> Rodrigo L L Jorge
>>
>
--
You received this message because you are subscribed to the Google Groups
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/bareos-users/CAEQmXaOO-qnJw4Mj%3DRoi_6vBNWwXMYF5ELx7gvkA-d_GG29fwg%40mail.gmail.com.