Hi all,
Just following up.  Inode usage continues to grow.  I took a look at
/var/ossec/queue/diff/server1/535 (I've substituted "server1" for the name
of one of our agents) and there are thousands of files named state.NUMBER,
where NUMBER looks like a random or incrementing number.  I checked the
contents of one of these files and they appear to be the output of the
"last -n 5" command that is configured in the ossec.conf file of each of
our Linux agents.  There aren't any directories in /var/ossec/queue/diff
named after any of our Windows clients.  As I reported earlier, I cleared
out /var/ossec/queue/diff and reset the syscheck database a few days ago
without any side effects.  Would there be any issues with continuing to
purge /var/ossec/queue/diff, and if so, should I also clear the syscheck
database when doing so?  I'm guessing this is a bug?  Please advise and
thanks.

Aaron


On Thu, May 2, 2013 at 3:39 PM, Aaron Bliss <[email protected]> wrote:

> Hi all,
> In our environment, on the management server (version 2.7, CentOS 6,
> 64-bit), OSSEC is installed on a dedicated mount point at /var/ossec (a
> fairly new install; it has been online since this past December).  We
> have a mixture of Windows and Linux agents (200 or so).  The /var/ossec
> mount point on the management server ran out of inodes, despite only
> about 3% disk utilization of the 20 GB volume.  I determined that the
> inodes (1.3 million of them) were being used up in /var/ossec/queue/diff.
> I was able to clean them up and clear the syscheck database of the
> agents, after which everything started working again.  However, I was
> wondering which piece of OSSEC writes to /var/ossec/queue/diff and which
> configuration option causes it to do so?  Please advise and thanks.
>
> Aaron
>
