On Mon, Oct 26, 2015 at 7:04 PM, Shyam <srang...@redhat.com> wrote:
> The older idea on this was to consume the logs and filter on the message
> IDs for those situations that can be remedied. The logs are hence the point
> where the event for consumption is generated.
>
> Also, when the higher level abstraction uses these logs, it can *watch*
> based on message ID filters that are of interest to it, rather than parse
> the log message entirely to gain insight into the issue.
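
A minimal sketch of such a watcher (assuming the standard Gluster log
format with "[MSGID: NNNNNN]" tags; the log path and the message IDs of
interest below are illustrative, not tied to any particular remediation):

    #!/usr/bin/env python
    # Sketch: follow a Gluster log and react to selected message IDs,
    # instead of parsing each log message in full.
    import re
    import time

    LOG_PATH = "/var/log/glusterfs/glusterd.log"   # assumed location
    WATCHED_MSGIDS = {"106062", "108006"}          # hypothetical IDs of interest
    MSGID_RE = re.compile(r"\[MSGID:\s*(\d+)\]")

    def follow(path):
        """Yield lines appended to the file, tail -f style."""
        with open(path) as f:
            f.seek(0, 2)  # start at the end of the file
            while True:
                line = f.readline()
                if not line:
                    time.sleep(0.5)
                    continue
                yield line

    for line in follow(LOG_PATH):
        m = MSGID_RE.search(line)
        if m and m.group(1) in WATCHED_MSGIDS:
            # Hand the event off to whatever remediation the higher layer implements.
            print("event:", m.group(1), line.strip())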

Are all situations usually atomic? Is there expected to be a specific
mapping between an event recorded in a log by one part of an installed
system and a possible symptom? Or does a collection of events lead up
to an observed failure (which, in turn, is recorded as a series of
events in the logs)?


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
