Re: [bareos-users] Always incremental / Consolidation

2020-06-26 Thread Brock Palen
I can’t help with the hard links, basefiles (never used that feature), or 
keeping monthlies while also doing an AI scheme. 

The issue of unpurged volumes that have no jobs is why I have the script I 
shared.  I highly recommend you do not delete volumes, but instead use 

prune volume=<volume name> yes 

which forces Bareos to review the state of the volume.  You can then either 
delete it (you have to delete both the catalog record and the file on the 
filesystem) or truncate volumes with volstatus=Purged to free the space and 
make the volume ready for reuse when a new volume is needed.
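
Putting that together, the usual sequence in bconsole looks something like 
this (using AI-Consolidated-0001 from your listing as an example; adjust the 
volume and pool names to your setup):

prune volume=AI-Consolidated-0001 yes
list volume pool=AI-Consolidated
truncate volstatus=Purged pool=AI-Consolidated yes

The prune re-checks the volume against your retention settings, the list 
shows which volumes ended up with volstatus=Purged, and the truncate frees 
the space on disk so those volumes can be reused.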



Brock Palen
1 (989) 277-6075
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting



> On Jun 26, 2020, at 4:10 AM, sim@gmail.com  wrote:
> 
> I can't see the image, this would be the link to imgur:
> https://imgur.com/a/hxfl3n4
> 
> And another try to include it:
> 
> 
> sim@gmail.com wrote on Friday, June 26, 2020 at 10:04:49 UTC+2:
> Hello,
> 
> First of all, thanks for the commands, they helped me a lot, and I think I 
> should use bconsole more often instead of the GUI.
> I think I know what the problem is. I had a long-term job at the beginning, 
> and somehow that is where my storage got eaten up.
> There are consolidated jobs that have not been touched for 15 days, and 
> "list jobs volume=AI-Consolidated-0001" returns nothing.
> I will clean them up manually.
> 
> Our goal was to do always incremental backups every 7 days, but also keep 
> monthly backups for three months.
> Until now, we had a script that created hardlinks wherever it could, so 
> every backup looked like a full backup and even the 3-month-old one was 
> still part of the incremental chain.
> This saves huge amounts of space.
> Is this also possible with Bareos?
> 
> For illustration purposes:
> 
> 
> Thanks in Advance,
> 
> Simon
> bro...@mlds-networks.com wrote on Thursday, June 25, 2020 at 16:34:37 
> UTC+2:
> What is the status of those volumes? 
> 
> 
> list volume pool=AI-Consolidated 
> 
> If the status is Purged, those volumes have not been recycled. If you want to 
> free the space back up before Bareos re-uses them you can truncate them 
> 
> truncate volstatus=Purged pool=AI-Consolidated yes 
> 
> That will set the volumes back to size zero. Bareos tries to preserve data. 
> So even if the file and job records have been purged from the catalog, it 
> leaves the data, so in a pinch you can always use bextract / bls to get back 
> data from an old volume. This is more of a holdover from tape, where you 
> won’t overwrite the data until you need the capacity. 
> 
> If the volumes are not purged, look at what jobs are left on them, and check 
> your prune and incremental timeouts/counts in your settings 
> 
> list jobs volume=AI-Consolidated-0001 
> 
> 
> Brock Palen 
> 1 (989) 277-6075 
> bro...@mlds-networks.com 
> www.mlds-networks.com 
> Websites, Linux, Hosting, Joomla, Consulting 
> 
> 
> 
> > On Jun 24, 2020, at 9:11 AM, sim@gmail.com  wrote: 
> > 
> > 
> > Hello 
> > 
> > I'm trying to set up always incremental jobs. 
> > The storage target is ceph, so one big storage target. 
> > I have read many guides and believe I did the configuration right; there are 
> > just some timing problems. 
> > When a consolidation job runs, it starts a new one with the level 
> > "VirtualFull", that's correct, isn't it? 
> > 
> > The other question is: how do I efficiently free up volume storage that is 
> > physically used on disk but no longer used by Bareos? 
> > A "du -skh" in the cephfs tells me there are 3.2 TB of storage used, but in 
> > the job list the full Consolidated job is 1.06 TB and the other incrementals 
> > are around 700 GB, so where do those 1.5 TB come from? 
> > 
> > [root@testnode1 ~]# ls -lah /cephfs/ 
> > total 3.2T 
> > drwxrwxrwx 4 root root 21 Jun 24 01:00 . 
> > dr-xr-xr-x. 18 root root 258 Jun 17 10:04 .. 
> > -rw-r----- 1 bareos bareos 200G Jun 11 16:04 AI-Consolidated-0001 
> > -rw-r----- 1 bareos bareos 200G Jun 11 17:50 AI-Consolidated-0002 
> > -rw-r----- 1 bareos bareos 200G Jun 11 19:20 AI-Consolidated-0003 
> > -rw-r----- 1 bareos bareos 200G Jun 11 20:35 AI-Consolidated-0004 
> > -rw-r----- 1 bareos bareos 176G Jun 11 21:44 AI-Consolidated-0005 
> > -rw-r----- 1 bareos bareos 200G Jun 23 16:18 AI-Consolidated-0013 
> > -rw-r----- 1 bareos bareos 200G Jun 23 18:00 AI-Consolidated-0014 
> > -rw-r----- 1 bareos bareos 200G Jun 23 19:16 AI-Consolidated-0015 
> > -rw-r----- 1 bareos bareos 200G Jun 23 20:34 AI-Consolidated-0016 
> > -rw-r----- 1 bareos bareos 200G Jun 23 21:52 AI-Consolidated-0017 
> > -rw-r----- 1 bareos bareos 49G Jun 23 22:12 AI-Consolidated-0018 
> > -rw-r----- 1 bareos bareos 200G Jun 15 11:25 AI-Incremental-0007 
> > -rw-r----- 1 bareos bareos 200G Jun 15 13:04 AI-Incremental-0008 
> > -rw-r----- 1 bareos bareos 200G Jun 19 00:27 AI-Incremental-0009 
> > -rw-r----- 1 bareos bareos 200G Jun 20 01:22 AI-Incremental-0010 
> > -rw-r----- 1 bareos bareos 200G Jun 21 00:33 AI-Incremental-0011 
> > -rw-r----- 1 bareos bareos 200G Jun 24 01:00 AI-Incremental-0012 
> 

Re: [bareos-users] Always incremental / Consolidation

2020-06-26 Thread sim....@gmail.com
I can't see the image, this would be the link to imgur:
https://imgur.com/a/hxfl3n4

And another try to include it:
[image: Diagram_Backup_Strategy_old.png]

sim@gmail.com wrote on Friday, June 26, 2020 at 10:04:49 UTC+2:

> Hello,
>
> First of all, thanks for the commands, they helped me a lot, and I think I 
> should use bconsole more often instead of the GUI.
> I think I know what the problem is. I had a long-term job at the beginning, 
> and somehow that is where my storage got eaten up.
> There are consolidated jobs that have not been touched for 15 days, and 
> "list jobs volume=AI-Consolidated-0001" returns nothing.
> I will clean them up manually.
>
> Our goal was to do always incremental backups every 7 days, but also keep 
> monthly backups for three months.
> Until now, we had a script that created hardlinks wherever it could, so 
> every backup looked like a full backup and even the 3-month-old one was 
> still part of the incremental chain.
> This saves huge amounts of space.
> Is this also possible with Bareos?
>
> For illustration purposes:
>
> Thanks in Advance,
>
> Simon
> bro...@mlds-networks.com wrote on Thursday, June 25, 2020 at 16:34:37 
> UTC+2:
>
>> What is the status of those volumes? 
>>
>>
>> list volume pool=AI-Consolidated 
>>
>> If the status is Purged, those volumes have not been recycled. If you 
>> want to free the space back up before Bareos re-uses them you can truncate 
>> them 
>>
>> truncate volstatus=Purged pool=AI-Consolidated yes 
>>
>> That will set the volumes back to size zero. Bareos tries to preserve 
>> data. So even if the file and job records have been purged from the catalog, 
>> it leaves the data, so in a pinch you can always use bextract / bls to get 
>> back data from an old volume. This is more of a holdover from tape, where 
>> you won’t overwrite the data until you need the capacity. 
>>
>> If the volumes are not purged, look at what jobs are left on them, and 
>> check your prune and incremental timeouts/counts in your settings 
>>
>> list jobs volume=AI-Consolidated-0001 
>>
>>
>> Brock Palen 
>> 1 (989) 277-6075 
>> bro...@mlds-networks.com 
>> www.mlds-networks.com 
>> Websites, Linux, Hosting, Joomla, Consulting 
>>
>>
>>
>> > On Jun 24, 2020, at 9:11 AM, sim@gmail.com  
>> wrote: 
>> > 
>> > 
>> > Hello 
>> > 
>> > I'm trying to set up always incremental jobs. 
>> > The storage target is ceph, so one big storage target. 
>> > I have read many guides and believe I did the configuration right; there 
>> > are just some timing problems. 
>> > When a consolidation job runs, it starts a new one with the level 
>> > "VirtualFull", that's correct, isn't it? 
>> > 
>> > The other question is: how do I efficiently free up volume storage that is 
>> > physically used on disk but no longer used by Bareos? 
>> > A "du -skh" in the cephfs tells me there are 3.2 TB of storage used, but in 
>> > the job list the full Consolidated job is 1.06 TB and the other 
>> > incrementals are around 700 GB, so where do those 1.5 TB come from? 
>> > 
>> > [root@testnode1 ~]# ls -lah /cephfs/ 
>> > total 3.2T 
>> > drwxrwxrwx 4 root root 21 Jun 24 01:00 . 
>> > dr-xr-xr-x. 18 root root 258 Jun 17 10:04 .. 
>> > -rw-r----- 1 bareos bareos 200G Jun 11 16:04 AI-Consolidated-0001 
>> > -rw-r----- 1 bareos bareos 200G Jun 11 17:50 AI-Consolidated-0002 
>> > -rw-r----- 1 bareos bareos 200G Jun 11 19:20 AI-Consolidated-0003 
>> > -rw-r----- 1 bareos bareos 200G Jun 11 20:35 AI-Consolidated-0004 
>> > -rw-r----- 1 bareos bareos 176G Jun 11 21:44 AI-Consolidated-0005 
>> > -rw-r----- 1 bareos bareos 200G Jun 23 16:18 AI-Consolidated-0013 
>> > -rw-r----- 1 bareos bareos 200G Jun 23 18:00 AI-Consolidated-0014 
>> > -rw-r----- 1 bareos bareos 200G Jun 23 19:16 AI-Consolidated-0015 
>> > -rw-r----- 1 bareos bareos 200G Jun 23 20:34 AI-Consolidated-0016 
>> > -rw-r----- 1 bareos bareos 200G Jun 23 21:52 AI-Consolidated-0017 
>> > -rw-r----- 1 bareos bareos 49G Jun 23 22:12 AI-Consolidated-0018 
>> > -rw-r----- 1 bareos bareos 200G Jun 15 11:25 AI-Incremental-0007 
>> > -rw-r----- 1 bareos bareos 200G Jun 15 13:04 AI-Incremental-0008 
>> > -rw-r----- 1 bareos bareos 200G Jun 19 00:27 AI-Incremental-0009 
>> > -rw-r----- 1 bareos bareos 200G Jun 20 01:22 AI-Incremental-0010 
>> > -rw-r----- 1 bareos bareos 200G Jun 21 00:33 AI-Incremental-0011 
>> > -rw-r----- 1 bareos bareos 200G Jun 24 01:00 AI-Incremental-0012 
>> > -rw-r----- 1 bareos bareos 27G Jun 24 01:25 AI-Incremental-0019 
>> > 
>> > Is there a way to view this efficiently, or do I have to work my way 
>> > through it job by job? 
>> > 
>> > Thanks in advance 
>> > Simon 
>> > 


Re: [bareos-users] Always incremental / Consolidation

2020-06-26 Thread sim....@gmail.com
Hello,

First of all, thanks for the commands, they helped me a lot, and I think I 
should use bconsole more often instead of the GUI.
I think I know what the problem is. I had a long-term job at the beginning, 
and somehow that is where my storage got eaten up.
There are consolidated jobs that have not been touched for 15 days, and 
"list jobs volume=AI-Consolidated-0001" returns nothing.
I will clean them up manually.

Our goal was to do always incremental backups every 7 days, but also keep 
monthly backups for three months.
Until now, we had a script that created hardlinks wherever it could, so 
every backup looked like a full backup and even the 3-month-old one was 
still part of the incremental chain.
This saves huge amounts of space.
Is this also possible with Bareos?
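
In case it helps, the old script did roughly the following (a simplified 
sketch with placeholder paths, not the actual script we used):

#!/bin/sh
# Hardlink-based snapshots: every run looks like a complete copy, but
# unchanged files are hardlinks into the previous snapshot, so only
# changed files take up new space.  SRC and BASE are placeholders.
SRC=/data/
BASE=/backups
TODAY=$BASE/$(date +%F)
PREV=$(ls -1d "$BASE"/*/ 2>/dev/null | tail -n 1)

if [ -n "$PREV" ]; then
    rsync -a --link-dest="$PREV" "$SRC" "$TODAY"
else
    rsync -a "$SRC" "$TODAY"
fi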

For illustration purposes:

Thanks in Advance,

Simon
bro...@mlds-networks.com wrote on Thursday, June 25, 2020 at 16:34:37 
UTC+2:

> What is the status of those volumes?
>
>
> list volume pool=AI-Consolidated
>
> If the status is Purged, those volumes have not been recycled. If you want 
> to free the space back up before Bareos re-uses them you can truncate them
>
> truncate volstatus=Purged pool=AI-Consolidated yes
>
> That will set the volumes back to size zero. Bareos tries to preserve 
> data. So even if the file and job records have been purged from the catalog, 
> it leaves the data, so in a pinch you can always use bextract / bls to get 
> back data from an old volume. This is more of a holdover from tape, where 
> you won’t overwrite the data until you need the capacity.
>
> If the volumes are not purged, look at what jobs are left on them, and 
> check your prune and incremental timeouts/counts in your settings
>
> list jobs volume=AI-Consolidated-0001
>
>
> Brock Palen
> 1 (989) 277-6075
> bro...@mlds-networks.com
> www.mlds-networks.com
> Websites, Linux, Hosting, Joomla, Consulting
>
>
>
> > On Jun 24, 2020, at 9:11 AM, sim@gmail.com  
> wrote:
> > 
> > 
> > Hello
> > 
> > I'm trying to set up always incremental jobs.
> > The storage target is ceph, so one big storage target.
> > I have read many guides and believe I did the configuration right; there 
> > are just some timing problems.
> > When a consolidation job runs, it starts a new one with the level 
> > "VirtualFull", that's correct, isn't it?
> > 
> > The other question is: how do I efficiently free up volume storage that is 
> > physically used on disk but no longer used by Bareos?
> > A "du -skh" in the cephfs tells me there are 3.2 TB of storage used, but in 
> > the job list the full Consolidated job is 1.06 TB and the other 
> > incrementals are around 700 GB, so where do those 1.5 TB come from?
> > 
> > [root@testnode1 ~]# ls -lah /cephfs/
> > total 3.2T
> > drwxrwxrwx 4 root root 21 Jun 24 01:00 .
> > dr-xr-xr-x. 18 root root 258 Jun 17 10:04 ..
> > -rw-r----- 1 bareos bareos 200G Jun 11 16:04 AI-Consolidated-0001
> > -rw-r----- 1 bareos bareos 200G Jun 11 17:50 AI-Consolidated-0002
> > -rw-r----- 1 bareos bareos 200G Jun 11 19:20 AI-Consolidated-0003
> > -rw-r----- 1 bareos bareos 200G Jun 11 20:35 AI-Consolidated-0004
> > -rw-r----- 1 bareos bareos 176G Jun 11 21:44 AI-Consolidated-0005
> > -rw-r----- 1 bareos bareos 200G Jun 23 16:18 AI-Consolidated-0013
> > -rw-r----- 1 bareos bareos 200G Jun 23 18:00 AI-Consolidated-0014
> > -rw-r----- 1 bareos bareos 200G Jun 23 19:16 AI-Consolidated-0015
> > -rw-r----- 1 bareos bareos 200G Jun 23 20:34 AI-Consolidated-0016
> > -rw-r----- 1 bareos bareos 200G Jun 23 21:52 AI-Consolidated-0017
> > -rw-r----- 1 bareos bareos 49G Jun 23 22:12 AI-Consolidated-0018
> > -rw-r----- 1 bareos bareos 200G Jun 15 11:25 AI-Incremental-0007
> > -rw-r----- 1 bareos bareos 200G Jun 15 13:04 AI-Incremental-0008
> > -rw-r----- 1 bareos bareos 200G Jun 19 00:27 AI-Incremental-0009
> > -rw-r----- 1 bareos bareos 200G Jun 20 01:22 AI-Incremental-0010
> > -rw-r----- 1 bareos bareos 200G Jun 21 00:33 AI-Incremental-0011
> > -rw-r----- 1 bareos bareos 200G Jun 24 01:00 AI-Incremental-0012
> > -rw-r----- 1 bareos bareos 27G Jun 24 01:25 AI-Incremental-0019
> > 
> > Is there a way to view this efficiently, or do I have to work my way 
> > through it job by job?
> > 
> > Thanks in advance
> > Simon
> > 



Re: [bareos-users] Always incremental / Consolidation

2020-06-25 Thread Brock Palen
What is the status of those volumes?


list volume pool=AI-Consolidated

If the status is Purged,  those volumes have not been recycled.  If you want to 
free the space back up before Bareos re-uses them you can truncate them

truncate volstatus=Purged pool=AI-Consolidated yes

That will set the volumes back to size zero.  Bareos tries to preserve data.  
So even if the file and job records have been purged from the catalog, it leaves 
the data, so in a pinch you can always use bextract / bls to get back data from 
an old volume.  This is more of a holdover from tape, where you won’t overwrite 
the data until you need the capacity.

If the volumes are not purged, look at what jobs are left on them, and check 
your prune and incremental timeouts/counts in your settings

list jobs volume=AI-Consolidated-0001
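
For reference, most of those prune/recycle knobs live in the Pool resource of 
the director configuration.  A rough sketch of the relevant directives (the 
values are examples only, apart from the 200G volume size visible in your 
listing, and are not taken from your actual setup):

Pool {
  Name = AI-Consolidated
  Pool Type = Backup
  Maximum Volume Bytes = 200G   # matches the ~200G volumes in the listing below
  Volume Retention = 14 days    # example: how long before a volume may be pruned
  Auto Prune = yes              # prune expired jobs/volumes when a new volume is needed
  Recycle = yes                 # allow purged volumes to be written again
  Action On Purge = Truncate    # lets purged volumes be truncated to give disk space back
}

If those retention windows are longer than you expect, volumes will keep 
their data on disk until Bareos actually needs to reuse them.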


Brock Palen
1 (989) 277-6075
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting



> On Jun 24, 2020, at 9:11 AM, sim@gmail.com  wrote:
> 
> 
> Hello
> 
> I'm trying to set up always incremental jobs.
> The storage target is ceph, so one big storage target.
> I have read many guides and believe I did the configuration right; there are 
> just some timing problems.
> When a consolidation job runs, it starts a new one with the level 
> "VirtualFull", that's correct, isn't it?
> 
> The other question is: how do I efficiently free up volume storage that is 
> physically used on disk but no longer used by Bareos?
> A "du -skh" in the cephfs tells me there are 3.2 TB of storage used, but in the 
> job list the full Consolidated job is 1.06 TB and the other incrementals are 
> around 700 GB, so where do those 1.5 TB come from?
> 
> [root@testnode1 ~]# ls -lah /cephfs/
> total 3.2T
> drwxrwxrwx   4 root   root 21 Jun 24 01:00 .
> dr-xr-xr-x. 18 root   root  258 Jun 17 10:04 ..
> -rw-r-----   1 bareos bareos 200G Jun 11 16:04 AI-Consolidated-0001
> -rw-r-----   1 bareos bareos 200G Jun 11 17:50 AI-Consolidated-0002
> -rw-r-----   1 bareos bareos 200G Jun 11 19:20 AI-Consolidated-0003
> -rw-r-----   1 bareos bareos 200G Jun 11 20:35 AI-Consolidated-0004
> -rw-r-----   1 bareos bareos 176G Jun 11 21:44 AI-Consolidated-0005
> -rw-r-----   1 bareos bareos 200G Jun 23 16:18 AI-Consolidated-0013
> -rw-r-----   1 bareos bareos 200G Jun 23 18:00 AI-Consolidated-0014
> -rw-r-----   1 bareos bareos 200G Jun 23 19:16 AI-Consolidated-0015
> -rw-r-----   1 bareos bareos 200G Jun 23 20:34 AI-Consolidated-0016
> -rw-r-----   1 bareos bareos 200G Jun 23 21:52 AI-Consolidated-0017
> -rw-r-----   1 bareos bareos  49G Jun 23 22:12 AI-Consolidated-0018
> -rw-r-----   1 bareos bareos 200G Jun 15 11:25 AI-Incremental-0007
> -rw-r-----   1 bareos bareos 200G Jun 15 13:04 AI-Incremental-0008
> -rw-r-----   1 bareos bareos 200G Jun 19 00:27 AI-Incremental-0009
> -rw-r-----   1 bareos bareos 200G Jun 20 01:22 AI-Incremental-0010
> -rw-r-----   1 bareos bareos 200G Jun 21 00:33 AI-Incremental-0011
> -rw-r-----   1 bareos bareos 200G Jun 24 01:00 AI-Incremental-0012
> -rw-r-----   1 bareos bareos  27G Jun 24 01:25 AI-Incremental-0019
> 
> Is there a way to view this efficiently, or do I have to work my way through 
> it job by job?
> 
> Thanks in advance
> Simon
> 
