[ https://issues.apache.org/jira/browse/JCS-227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359060#comment-17359060 ]
Narasimha Raju commented on JCS-227:
------------------------------------
Yes, agreed, but if we add log filters when millions of records are coming in,
it will add load on the CPU, which we don't want. So it would be great to have
built-in support for this kind of high-volume logging. We want the
duplicate-counting calculations to happen only in case of an exception; a
filter would try to filter every message, and even a single added if condition
is a significant performance hit when millions of records per minute are
processed. Something like the sketch below is what I have in mind.
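Just a rough sketch of the idea (hypothetical code, not an existing JCS or
java.util.logging API; the class and method names are made up): the counter is
only touched on the exception path, so the hot path of successful gets and
puts pays no filtering cost at all.

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch: count failures instead of logging each one.
public class ErrorCountLog
{
    private static final Logger log =
            Logger.getLogger(ErrorCountLog.class.getName());

    // Incremented only when an exception is actually thrown.
    private final AtomicLong errorCount = new AtomicLong();

    public void onError(final Throwable t)
    {
        final long count = errorCount.incrementAndGet();

        // One SEVERE summary line per 10,000 failures, without the
        // stack trace, carrying the running count.
        if (count % 10_000 == 1)
        {
            log.severe("JDBC disk cache error seen " + count + " times: " + t);
        }

        // Every individual failure, with the full stack trace, stays
        // available at FINEST for debugging.
        log.log(Level.FINEST, "JDBC disk cache error", t);
    }
}
{code}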
> With JDBC disk cache, when the database is down, too many logs are printed,
> causing the disk to become full
> ------------------------------------------------------------------------------------------------------
>
> Key: JCS-227
> URL: https://issues.apache.org/jira/browse/JCS-227
> Project: Commons JCS
> Issue Type: Bug
> Components: JDBC Disk Cache
> Affects Versions: jcs-3.0, jcs-3.1
> Environment: Linux OS, JDK 8, java.util.logging
> Reporter: Narasimha Raju
> Assignee: Thomas Vandahl
> Priority: Critical
> Labels: newbie
> Fix For: jcs-3.1
>
> Original Estimate: 48h
> Remaining Estimate: 48h
>
> In case of any restart of the database, a lot of exceptions are logged, and
> since we have millions of gets and puts happening, the logs fill up quickly.
> The issue here is that put and get do not throw any exception which we could
> catch and handle in order to control the logging.
> If possible, we should log at the SEVERE level without printing the exception
> stack trace, and it would be better to print just how many times such an
> exception has occurred instead of printing all of them. Printing all
> exceptions can be done at FINEST.
> Because of the high CPU load, the log rollup is not happening.
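To illustrate the behaviour requested above (a SEVERE count without stack
traces, full traces only at FINEST), a time-based variant could look like the
following; this is a hypothetical sketch, all names are made up, and the
one-minute interval is an assumption:

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch: at most one SEVERE summary line per interval,
// reporting how many similar failures were suppressed in between.
public class IntervalErrorLog
{
    private static final Logger log =
            Logger.getLogger(IntervalErrorLog.class.getName());

    private static final long INTERVAL_MS = 60_000; // assumed: one line/minute

    private final AtomicLong suppressed = new AtomicLong();
    private final AtomicLong lastLogged = new AtomicLong();

    public void onError(final Throwable t)
    {
        final long now = System.currentTimeMillis();
        final long last = lastLogged.get();

        if (now - last >= INTERVAL_MS && lastLogged.compareAndSet(last, now))
        {
            // Summary without a stack trace; t.toString() is class + message.
            final long skipped = suppressed.getAndSet(0);
            log.severe("JDBC disk cache error (" + skipped
                    + " similar errors suppressed since last summary): " + t);
        }
        else
        {
            suppressed.incrementAndGet();
            // The full stack trace of each failure is only logged at FINEST.
            log.log(Level.FINEST, "JDBC disk cache error", t);
        }
    }
}
{code}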