Michael,
I have not yet crossed that bridge, but I'm thinking about it ...
 
Sharing rules between OSSEC servers could boil down to distribution/synchronization of a single file: /var/ossec/rules/local_rules.xml. But the server would still need a manual "restart" to read it.
 
How about using a wrapper script for editing local_rules.xml that also performs an scp and an ssh restart?
 
KenW
 

From: [email protected] [[email protected]] On Behalf Of Michael Altfield [[email protected]]
Sent: Wednesday, August 05, 2009 12:49 PM
To: [email protected]
Subject: [ossec-list] Re: Agent alert queues to prevent data loss

Hi Ken,

Thanks for your input. I currently have this setup pointing the OSSEC Manager to Splunk. It works great.

Still, with 2 OSSEC Managers, every time you update a rule you have to do so twice, correct? Did you find a good way around this?


-Michael

On Wed, Aug 5, 2009 at 11:12 AM, Ken Wachtler <[email protected]> wrote:
Michael,
We view our OSSEC alerts through a commercial Log Manager. The syslog_output channel points to the Log Manager:
 
  <syslog_output>
    <server>10.1.1.1</server>
  </syslog_output>

 

If multiple OSSEC servers were utilized, all pointing to the same Log Manager, you could view alerts from all of them there. Of course, this pushes the single point of failure to the LM.


KenW
 

From: [email protected] [[email protected]] On Behalf Of Michael Altfield [michael.sa@gmail.com]
Sent: Tuesday, August 04, 2009 5:28 PM
To: ossec-list
Subject: [ossec-list] Re: Agent alert queues to prevent data loss

bump

On Wed, Jul 29, 2009 at 4:02 PM, Michael Altfield <michael.sa@gmail.com> wrote:

Hi Ken,

Thanks for the response.

I thought about this solution, but I know from another ossec-list
thread ( http://groups.google.com/group/ossec-list/browse_thread/thread/a6f65d7ef0e2cd91
) that OSSEC doesn't handle load balancing or (more importantly)
centralized logging very well with multiple OSSEC managers.

My biggest issue with creating multiple, redundant OSSEC Managers is
that my alert logs are now potentially on 2 different servers--making
it a pain to troubleshoot an audit trail. For example, if I'm using
the OSSEC WUI, I'd now have to check (at least) 2 different WUIs.

I did some more googling, and I saw that the OSSEC team apparently
thought of this issue, so they created the concept of a *single*,
central OSSEC Manager to collect and analyze logs being sent from a
collection of other OSSEC Managers ( http://www.ossec.net/main/manual/manual-muti-server-architecture
). Correct me if I'm wrong, but I think that this solution re-creates
a single point of failure--destroying the reason to have a multi-
server architecture to begin with (redundancy)!

So, I guess another question is: Is there any way to have multiple,
*redundant* OSSEC Managers that have synced logs, rules, and
configurations?


Cheers,
Michael Altfield

On Jul 28, 7:45 pm, Ken Wachtler <[email protected]> wrote:
> Consider listing two OSSEC servers in the agent's ossec.conf.
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of Michael Altfield
> Sent: Tuesday, July 28, 2009 12:08 PM
> To: ossec-list
> Subject: [ossec-list] Agent alert queues to prevent data loss
>
> Hello all,
>
> I've been playing with OSSEC for several weeks now, and I absolutely
> love this product. IMHO, it's by far the best FOSS HIDS on the market
> with a wonderful user and developer community.
>
> I'd like to utilize OSSEC in our corporate production environment, but
> the biggest problem I've found with it is that the agents don't appear
> to queue their alerts in memory. The issue being: if the OSSEC server
> is down or there is a temporary network issue, the alerts produced by
> the agent will be lost. That behavior would be unacceptable under
> most compliance standards (namely the PCI DSS).
>
> Is there any way to ensure that all alerts sent from OSSEC hosts to
> the OSSEC server are successfully received by the OSSEC server--and to
> hold onto those alerts that were not received successfully for re-
> sending?
>
> Thank you,
> Michael Altfield

