What is the status of those volumes?

list volumes pool=AI-Consolidated

Check the VolStatus column. If the status is Purged, those volumes have been 
purged from the catalog but not yet recycled, so they still take up their full 
size on disk. If you want to free the space before Bareos re-uses them, you 
can truncate them:

truncate volstatus=Purged pool=AI-Consolidated yes
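
If you want this to run on a schedule instead of by hand, one pattern from the 
Bareos docs is an Admin job whose RunScript issues the truncate command in the 
console. A rough sketch; every resource name below (Client, FileSet, Storage, 
Schedule, Messages) is a placeholder for resources you already have defined:

Job {
  Name = "TruncatePurgedVolumes"
  Type = Admin
  Client = bareos-fd                # Admin jobs still need these set,
  FileSet = "NoFiles"               # but no data is read or written
  Storage = File
  Messages = Standard
  Pool = AI-Consolidated
  Schedule = "WeeklyCycleAfterBackup"
  Priority = 100
  RunScript {
    RunsWhen = Before
    RunsOnClient = No
    Console = "truncate volstatus=Purged pool=AI-Consolidated yes"
  }
}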

Either way, truncating sets the volumes back to size zero. Bareos tries to 
preserve data: even after the file and job records have been purged from the 
catalog, it leaves the volume contents on disk, so in a pinch you can still 
use bextract or bls to get data back from an old volume. This is mostly a 
holdover from tape, where you don't overwrite the data until you need the 
capacity.
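
For example, to list the job records still readable on a volume directly from 
the storage files (the device name "FileStorage" and the config path here are 
assumptions; use the device name from your bareos-sd configuration):

bls -j -V AI-Consolidated-0001 -c /etc/bareos/bareos-sd.d FileStorage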

If the volumes are not purged, look at what jobs are still on them, and check 
your pruning settings and the Always Incremental retention/keep directives:

list jobs volume=AI-Consolidated-0001
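
The retention behaviour itself is set in the job resource. A sketch of the 
directives worth reviewing (the values are placeholders, not recommendations; 
the ... stands for your existing job settings):

Job {
  Name = "ai-backup"                          # hypothetical job name
  ...
  Accurate = yes                              # required for Always Incremental
  Always Incremental = yes
  Always Incremental Job Retention = 7 days   # incrementals older than this are consolidated
  Always Incremental Keep Number = 7          # but always keep at least this many
  Always Incremental Max Full Age = 30 days   # include the full once it is older than this
}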


Brock Palen
1 (989) 277-6075
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting



> On Jun 24, 2020, at 9:11 AM, sim....@gmail.com <sim.su...@gmail.com> wrote:
> 
> 
> Hello
> 
> I'm trying to set up always incremental jobs.
> The storage target is ceph, so one big storage target.
> I have read many guides and believe I have the configuration right, just 
> some timing problems.
> When a consolidation job runs, it starts a new job with the level 
> "VirtualFull"; that's correct, isn't it?
> 
> The other question is: how do I efficiently reclaim volume storage that is 
> physically used on disk but no longer used by Bareos?
> A "du -skh" on the cephfs tells me there is 3.2TB of storage used, but in 
> the job list the full Consolidated job is 1.06 TB and the other incrementals 
> are about 700GB, so where do those extra 1.5TB come from?
> 
> [root@testnode1 ~]# ls -lah /cephfs/
> total 3.2T
> drwxrwxrwx   4 root       root         21 Jun 24 01:00 .
> dr-xr-xr-x. 18 root       root        258 Jun 17 10:04 ..
> -rw-r-----   1 bareos     bareos     200G Jun 11 16:04 AI-Consolidated-0001
> -rw-r-----   1 bareos     bareos     200G Jun 11 17:50 AI-Consolidated-0002
> -rw-r-----   1 bareos     bareos     200G Jun 11 19:20 AI-Consolidated-0003
> -rw-r-----   1 bareos     bareos     200G Jun 11 20:35 AI-Consolidated-0004
> -rw-r-----   1 bareos     bareos     176G Jun 11 21:44 AI-Consolidated-0005
> -rw-r-----   1 bareos     bareos     200G Jun 23 16:18 AI-Consolidated-0013
> -rw-r-----   1 bareos     bareos     200G Jun 23 18:00 AI-Consolidated-0014
> -rw-r-----   1 bareos     bareos     200G Jun 23 19:16 AI-Consolidated-0015
> -rw-r-----   1 bareos     bareos     200G Jun 23 20:34 AI-Consolidated-0016
> -rw-r-----   1 bareos     bareos     200G Jun 23 21:52 AI-Consolidated-0017
> -rw-r-----   1 bareos     bareos      49G Jun 23 22:12 AI-Consolidated-0018
> -rw-r-----   1 bareos     bareos     200G Jun 15 11:25 AI-Incremental-0007
> -rw-r-----   1 bareos     bareos     200G Jun 15 13:04 AI-Incremental-0008
> -rw-r-----   1 bareos     bareos     200G Jun 19 00:27 AI-Incremental-0009
> -rw-r-----   1 bareos     bareos     200G Jun 20 01:22 AI-Incremental-0010
> -rw-r-----   1 bareos     bareos     200G Jun 21 00:33 AI-Incremental-0011
> -rw-r-----   1 bareos     bareos     200G Jun 24 01:00 AI-Incremental-0012
> -rw-r-----   1 bareos     bareos      27G Jun 24 01:25 AI-Incremental-0019
> 
> Is there a way to view this efficiently, or do I have to work my way through 
> it job by job?
> 
> Thanks in advance
> Simon
> 

