Usually, to find out what is wrong, you can use the estimate command and add the 
listing keyword at the end (you may also prefer to redirect the output to an 
external file). That way you can see exactly what the job will really be saving.
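
For example, in bconsole (the job name "recorder-job" and the output path below 
are only placeholders, substitute your own):

    @output /tmp/estimate.txt
    estimate job=recorder-job level=Full listing
    @output

The first @output redirects the console output to /tmp/estimate.txt, the listing 
keyword makes estimate print every file the job would save without actually 
running the backup, and the final @output switches the output back to the 
terminal.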


On Friday, November 11, 2022 at 7:23:53 AM UTC+1 Parashar Pradhan wrote:

> Hello Team,
>
> I have a FileSet defined as below, with which I am taking a backup of the 
> entire / filesystem:
>
> FileSet {
>   Name = "recorder"
>   Description = "fileset just to backup some files for selftest"
>   Include {
>     Options {
>       Signature = MD5 # calculate md5 checksum per file
>       One FS = No
>       FS Type = btrfs
>       FS Type = ext2
>       FS Type = ext3
>       FS Type = ext4
>       FS Type = reiserfs
>       FS Type = jfs
>       FS Type = xfs
>       FS Type = zfs
>     }
>     File = /
>   }
>    Exclude {
>     File = /var/lib/bareos
>     File = /proc
>     File = /snap
>     File = /backup
>     File = /tmp
>     File = /var/tmp
>     File = /.journal
>     File = /.fsck
>     File = /var/log/lastlog
>   }
> }
>
> Here is the output showing storage usage on that system:
>
> Filesystem               Size  Used Avail Use% Mounted on
> udev                      16G     0   16G   0% /dev
> tmpfs                    3.2G  3.2M  3.2G   1% /run
> /dev/sda4                 35G  7.6G   25G  24% /
> tmpfs                     16G     0   16G   0% /dev/shm
> tmpfs                    5.0M  4.0K  5.0M   1% /run/lock
> tmpfs                     16G     0   16G   0% /sys/fs/cgroup
> tmpfs                     16G     0   16G   0% /run/shm
> /dev/sda2                2.0G  318M  1.5G  18% /boot
> /dev/mapper/vg0-lv--0    8.8G  981M  7.4G  12% /home
> /dev/mapper/varvg-lvvar   97G   17G   76G  19% /var
> /dev/loop2                56M   56M     0 100% /snap/core18/2566
> /dev/loop7               266M  266M     0 100% /snap/rocketchat-server/1523
> /dev/loop5               266M  266M     0 100% /snap/rocketchat-server/1522
> /dev/mapper/vg2-lv1     1007G   39G  918G   4% /data
> tmpfs                    3.2G   36K  3.2G   1% /run/user/124
> overlay                   97G   17G   76G  19% /var/lib/docker/overlay2/23795c65f8e1206062cd45c6223c2a771eea6e77f891b383fdb4f56dd045ee52/merged
> overlay                   97G   17G   76G  19% /var/lib/docker/overlay2/b4b4b87c63b294fcd2e5ae093283ee0f77cf8f99b7228018345ee94cb41f50d4/merged
> overlay                   97G   17G   76G  19% /var/lib/docker/overlay2/a63acffd87ecee9c937d32054f4c6d4f831bc03f2b829cd1e3528c3598cbdbc2/merged
> shm                       64M     0   64M   0% /var/lib/docker/containers/eadd0f4b065db37c8681d7b454891c402433c2058417f179a4edf033d28c5520/mounts/shm
> shm                       64M     0   64M   0% /var/lib/docker/containers/fdd83208b54f8f4c95bfc6187379b4ac634526df7dd599e1b1484ac88ede7513/mounts/shm
> overlay                   97G   17G   76G  19% /var/lib/docker/overlay2/725b8278b3e667ef32f9a520c8ad13e0b10caf5db0bc53b1b1d765a2c3e3477c/merged
> shm                       64M     0   64M   0% /var/lib/docker/containers/9a8ef810ceba6fa0e511d9042fd88703147fcbbf977c1a9fe9c7d8aac0f81ded/mounts/shm
> overlay                   97G   17G   76G  19% /var/lib/docker/overlay2/ab1106229904ccd250cf6318ae36506d748697b4dc5d97ad8638546ddfa8e997/merged
> overlay                   97G   17G   76G  19% /var/lib/docker/overlay2/898a7f1b68e79f597977d27e9e5ba590ed7be3a462587fd9895b84e947c89668/merged
> shm                       64M     0   64M   0% /var/lib/docker/containers/2db64fd702f802aa78b8ef9d2448eebbdd9b56dc49d8936fafc07a6849b72396/mounts/shm
> shm                       64M   16K   64M   1% /var/lib/docker/containers/bbaa9ea763410e437442878d8cbd5bcdfac24c62bd688af540f2c5573e843485/mounts/shm
> overlay                   97G   17G   76G  19% /var/lib/docker/overlay2/cc5a69e09c0564469ee110dcb6a37b0b1e5cdf877d3869bacd3a980b0686ab02/merged
> shm                       64M     0   64M   0% /var/lib/docker/containers/aae4b4854558f28aa3408859f79d569bfa31db9836fc91a8efed080a835fb916/mounts/shm
> /dev/loop8                48M   48M     0 100% /snap/snapd/17029
> /dev/loop4                48M   48M     0 100% /snap/snapd/17336
> /dev/loop6                64M   64M     0 100% /snap/core20/1634
> /dev/loop1                56M   56M     0 100% /snap/core18/2620
> /dev/loop3                64M   64M     0 100% /snap/core20/1695
> tmpfs                    3.2G  8.0K  3.2G   1% /run/user/1000
>
> When I did a full backup, it showed 404 GB, which does not match the used 
> filesystem sizes shown above.
> How is it calculating 404 GB? Any idea?
>
> Thanks & Regards,
> Parashar Pradhan
>

