Hello,

> On 16 Dec 2025, at 13:45, Adolf Belka <[email protected]> wrote:
> 
> Hi Michael,
> 
> On 16/12/2025 11:30, Michael Tremer wrote:
>> Hello Adam,
>>> On 15 Dec 2025, at 19:29, Adam Gibbons <[email protected]> wrote:
>>> 
>>> Hi Michael, Adolf,
>>> 
>>> Yes, Adolf has patched the second issue. I’ve tested this and backups 
>>> dropped from 826 MB to around 10 MB. Thank you, Adolf, for that.
>> This sounds like a very reasonable size for a backup file.
>>> If the upstream Suricata fix lands soon, we may not need to do anything 
>>> further. Regarding the proposed `find` command, my only concern is partial 
>>> cache removal. When I removed the entire cache, Suricata regenerated it 
>>> cleanly on startup. I’m not sure how it behaves when only some of the cache 
>>> is missing, as I’ve only tested removing all of it (`rm -rf 
>>> /var/cache/suricata/sgh/*`). Perhaps it would be cleaner and potentially 
>>> safer to just purge the cache entirely?
>> I suppose that Suricata will just re-generate anything that is missing from 
>> the cache. It would just take a couple of seconds at startup.
>> You can simply test this by removing half of the files and restarting
>> Suricata. If it fails to come up, I would consider that a bug that we should 
>> report upstream. However, it would surprise me if it were implemented that way.
>> -Michael
>>> Thanks,
>>> Adam
>>> 
>>> 
>>> On 15 December 2025 16:54:51 GMT, Michael Tremer 
>>> <[email protected]> wrote:
>>> Hello Adam,
>>> 
>>> Thank you for raising this here.
>>> 
>>> We seem to have two different issues as far as I can see:
>>> 
>>> 1) The directory just keeps growing
>>> 
>>> 2) It is being backed up and completely blows up the size of the backup
>>> 
>>> No. 2 has been fixed by Adolf. It is however quite interesting that we 
>>> already have something from /var/cache in the backup. The intention was to 
>>> have a valid set of rules available as soon as a backup is restored, but I 
>>> think there is very little value in this. The rules are probably long 
>>> expired and will be re-downloaded again.
>>> 
>>> We also have a large number of other lists around (also large on disk) 
>>> that we are not backing up, so I would propose that we remove 
>>> /var/cache/suricata from the backup entirely. Until we have made a decision 
>>> on this, I have merged Adolf’s patch.
>>> 
>>> Regarding No. 1, it is indeed a problem that Suricata does not clean this 
>>> up itself. We could add a simple command that looks a bit like this:
>>> 
>>> find /var/cache/suricata/ -type f -atime +7 -delete
> 
> This doesn't remove anything.
> 
> find /var/cache/suricata/sgh/ -type f gives a list of all files in the 
> suricata directory and the sgh one.
> 
> However find /var/cache/suricata/sgh/ -type f -atime +7 gives an empty result.

That could be. The command tries to find any files that have not been read in 
the last 7 days. Since Suricata should be reloaded every once in a while, this 
should actually be sufficient. Maybe we want 14 or even 30 days.
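For illustration, here is a sketch of the proposed `-atime` cleanup with a 
30-day threshold (just one of the values mentioned above, not a decision). It 
is exercised against a throwaway directory that mimics the sgh/ layout, so it 
is safe to run anywhere; the file names are made up:

```shell
# Sketch only: the proposed -atime cleanup with a 30-day threshold,
# demonstrated in a temporary directory instead of /var/cache/suricata/.
CACHE=$(mktemp -d)
mkdir -p "$CACHE/sgh"
touch "$CACHE/sgh/fresh.bin"                       # read just now
touch -a -d "40 days ago" "$CACHE/sgh/stale.bin"   # not read in 40 days
# Delete cache files that have not been read in the last 30 days
find "$CACHE/sgh/" -type f -atime +30 -delete
REMAINING=$(ls "$CACHE/sgh/")
rm -rf "$CACHE"
```

After the `find`, only `fresh.bin` should remain. Note that `touch -a -d` and 
relative date strings like "40 days ago" are GNU extensions, which is fine on 
IPFire but not portable to every system.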

> Maybe that is because I have restarted Suricata after changing some selected 
> rule entries.
> 
> find /var/cache/suricata/sgh/ -type f -mtime +7 -delete took the sgh size 
> down from 660MB to 127MB and suricata still worked.

This would remove any files that were created (Suricata never actually 
modifies them, I believe) more than 7 days ago. If the signatures have not 
changed, we would remove some files that are still needed.

> I think we should do the trimming of files in the sgh directory only.

Absolutely. I have no idea why I removed the last part from my comment. That 
wasn’t intentional.

> If we do it for the whole suricata directory, then it will also remove the 
> tarballs for rulesets that are selected as providers but whose rules are not 
> enabled. When such a ruleset gets an update, the last-updated date is 
> replaced by N/A. If the provider is then enabled, Suricata will go through 
> its update and report that it has completed, but if you then go to customise 
> the rules, you will find no entries for that provider, because the tarball 
> has been removed.
> 
> So for selected but disabled providers you would have to go to the provider 
> page and force an update so that the tarball is re-downloaded; then you can 
> enable the provider and the rules will be available to select on the 
> customise page.

We should never remove any downloaded data, even if it is a bit older. It is 
better to have some signatures than no signatures if something goes wrong 
during the update process.
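A sketch of what that scoping could look like: the `find` is anchored at the 
sgh/ subdirectory only, so a ruleset tarball sitting one level up is never 
considered, however old it is. The file names and the 30-day threshold are 
made up for the demonstration, and it runs against a throwaway tree:

```shell
# Sketch: trim only sgh/, never the downloaded tarballs next to it.
ROOT=$(mktemp -d)        # stand-in for /var/cache/suricata
mkdir -p "$ROOT/sgh"
touch -d "60 days ago" "$ROOT/ruleset.tar.gz"       # hypothetical tarball name
touch -d "60 days ago" "$ROOT/sgh/cache-entry.bin"
find "$ROOT/sgh/" -type f -atime +30 -delete
TARBALL_KEPT=$([ -f "$ROOT/ruleset.tar.gz" ] && echo yes || echo no)
CACHE_KEPT=$([ -f "$ROOT/sgh/cache-entry.bin" ] && echo yes || echo no)
rm -rf "$ROOT"
```

The old tarball survives and only the stale cache entry is removed, which 
matches the behaviour we want for disabled-but-selected providers.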

-Michael

> Regards,
> 
> Adolf.
> 
>>> 
>>> This would delete all of the cached files that have not been accessed in 
>>> the last seven days.
>>> 
>>> On my system, the entire directory is 1.4 GiB in size and the command would 
>>> remove 500 MiB.
>>> 
>>> Happy to read your thoughts.
>>> 
>>> -Michael
>>> 
>>> On 12 Dec 2025, at 16:49, Adam Gibbons <[email protected]> wrote:
>>> 
>>> Hi all,
>>> 
>>> As discussed on the forum
>>> https://community.ipfire.org/t/re-large-backupfile/15346
>>> it appears that Suricata’s new cache optimisation feature is creating a 
>>> large number of files under
>>> `/var/cache/suricata/sgh/`, which in some cases causes backup files to grow 
>>> to 800+ MB.
>>> 
>>> @Adolf has confirmed that this directory probably should not be included in 
>>> backups, as it is automatically regenerated, and I believe he mentioned he 
>>> is working on a patch to exclude it from the backup.
>>> 
>>> However, in the meantime, this directory continues to grow over time. The 
>>> upstream Suricata patches to automatically clean or maintain the cache have 
>>> not yet been merged, although they may be soon:
>>> 
>>> https://github.com/OISF/suricata/pull/13850
>>> https://github.com/OISF/suricata/pull/14400
>>> 
>>> To me this represents a disk-space exhaustion risk on systems with limited 
>>> storage. Perhaps we should consider disabling Suricata’s new cache 
>>> optimisation feature until automatic cache cleanup/maintenance is available 
>>> upstream and included.
>>> 
>>> Thanks,
>>> Adam


