Hi,

This was the type of discussion I wanted to have. Thanks!

I had not thought about the metadata and the other shares, indeed. So,
prompted by your comments, I took a deeper dive into nocache:

It marks the pages the kernel used for reading the data as "do not need"
(POSIX_FADV_DONTNEED), so after reading a chunk the kernel frees those
pages immediately and re-uses them for the next chunk. The existing page
cache therefore does not get polluted by the backup.
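
Out of curiosity I sketched the mechanism in plain C. This is just an
illustration of the idea (nocache itself hooks the libc calls via
LD_PRELOAD), not its actual source:

/* fadvise_read.c - minimal sketch of the nocache idea: read a file,
 * then tell the kernel its cached pages are no longer needed. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[1 << 16];
    while (read(fd, buf, sizeof buf) > 0)
        ;   /* consume the data, e.g. hand it to rsync */

    /* Drop the pages this read pulled into the page cache, so the
     * kernel re-uses them instead of evicting other cached data. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

    close(fd);
    return 0;
}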

But metadata and directory entries will still be cached: as they are not
held in the page cache anyway, they are not affected. So I expect this to
work nicely for subsequent incremental backups.
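
To verify this, the kernel can be asked which pages of a file are
resident via mincore(2). A rough sketch of mine, error handling kept
minimal; run it on a large file before and after a backup:

/* resident.c - count how many pages of a file are currently resident
 * in the page cache, using mincore(2). */
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0) {
        perror(argv[1]);
        return 1;
    }

    /* Map the file so mincore() can report per-page residency;
     * mapping alone does not fault the pages in. */
    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    long pagesize = sysconf(_SC_PAGESIZE);
    size_t pages = (st.st_size + pagesize - 1) / pagesize;
    unsigned char *vec = malloc(pages);
    if (vec == NULL || mincore(map, st.st_size, vec) < 0) {
        perror("mincore");
        return 1;
    }

    size_t resident = 0;
    for (size_t i = 0; i < pages; i++)
        resident += vec[i] & 1;
    printf("%zu of %zu pages resident\n", resident, pages);
    return 0;
}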

Looks like my initial thoughts were fine: because metadata is still
cached, this really is a very good option to improve the setup.
For running databases, however, it is indeed not suitable, but neither
are file-level backups in general…

Regards
Christian Völker 

> On 09.08.2025 at 17:00, G.W. Haywood <bac...@jubileegroup.co.uk> wrote:
> 
> Hi there,
> 
>> On Sat, 9 Aug 2025, Christian Völker wrote:
>> 
>> I noticed ... high CPU usage 20% ... throwing away "old" filesystem cache
>> blocks to cache ... directory ... 400GB ... blocks are read only once ...
>> thrown away a couple of hours later ...
>> ...
>> ... "RsyncClientPath" ... to "/usr/bin/nocache /usr/bin/rsync".
>> 
>> Now the blocks read during backup are not going into the cache ...
>> 
>> So what do you think? Is this a good solution? Or are there any pitfalls?
> 
> Does using 20% CPU for a couple of hours actually matter?
> 
> Presumably by doing this you'll be defeating _all_ caching of anything
> you're backing up, not just caching of the 400GB directory.  That will
> include directory contents as well as file contents, and may possibly
> be particularly relevant to incremental backups.  You're obviously
> well in tune with your systems so I guess you'll probably notice any
> untoward effects.  I'd expect significantly increased disc activity
> because of directory reads which are no longer cached.  That might be
> more relevant for the life expectancy of spinning rust than for SSDs.
> It might mean that the backups take significantly longer.  I'm sure
> we'll all be interested in your reports, if you find such effects.
> 
> In a case like this as the data you're backing up is relatively static
> I think I'd be tempted to put the 400GB directory in a share all of
> its own, and back it up on a different schedule.  But the 'nocache'
> utility was created expressly for your use case so I don't think it's
> likely that there will be any serious downsides unless there's some
> terrible flaw in the implementation which I guess can't be ruled out.
> I'd be reluctant to do this to my backups for that reason.
> 
> FWIW where I have for example large database directories - of course
> not necessarily static data - I don't let BackupPC anywhere near them.
> Instead I use the DB facilities to create data dumps at intervals (via
> cron), and then back up the data dumps using rsync (again via cron).
> 
> --
> 
> 73,
> Ged.

_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/
