Hi Sumedha,

+1. Yes, the "admin" account should be discontinued. We can add a new user (say, "restpublisher") to the default user store of "analytics-apim".

+1 on adopting the "dynamically provisioned" approach (I guess this is secure enough). However, AM (2.0.x) currently uses the configured REST credentials from api-manager.xml. Given release time constraints, I think we should release with the same mechanism for now.
However, as per my understanding, AM events, whether log events or anything else, should go to "Analytics for AM". Hence I think they are not "two systems".

About Java logging: yes, we should be able to adapt the current "LogEventAppender" logic to the Java logging framework as a Handler or Filter, or in fact to any other logging framework.

Cheers,
Ruwan

On Fri, May 13, 2016 at 12:07 PM, Miyuru Dayarathna <[email protected]> wrote:

> Hi Ruwan,
>
> IMO it is good to store the data temporarily in a file in case it is not
> possible to publish to DAS. As we discussed offline, it might not be
> practical to balance between keeping events in memory vs. storing them in
> a file, because in most cases logs may be produced in huge amounts within
> a few seconds. Hence it is better to go with the file-based approach.
>
> --
> Thanks,
> Miyuru Dayarathna
> Senior Technical Lead
> Mobile: +94713527783
> Blog: http://miyurublog.blogspot.com
>
>
> On Fri, May 13, 2016 at 11:59 AM, Sumedha Rubasinghe <[email protected]> wrote:
>
>> Ruwan,
>> Some related thoughts..
>>
>> 1. From the product (AM in this case) we are calling two systems (for
>> stat collection and log event collection). So IMO we should rather be
>> using two accounts to do so. I know we are pushing data to DAS in the
>> end, but that is irrelevant at this level IMO. And using the 'admin'
>> account for any of these scenarios should be discontinued.
>>
>> 2. For IoT Server, we had to make this system-to-system call via a
>> dynamically provisioned approach. We ended up writing OAuth2 token
>> provisioning on top of DAS.
>>
>> 3. Tomcat uses Java logging now
>> (https://tomcat.apache.org/tomcat-9.0-doc/logging.html). They support
>> other logging systems through 'juli'. Maybe we should switch to Java
>> logging from C5.
>>
>>
>> On Fri, May 13, 2016 at 11:26 AM, Ruwan Abeykoon <[email protected]> wrote:
>>
>>> Hi All,
>>> Log Analyzer for APIM uses a log4j appender to publish log events to
>>> DAS.
>>> Currently the DAS credentials are configured directly in
>>> log4j.properties, as follows:
>>>
>>> # DAS_AGENT is set to be a custom log appender.
>>> log4j.appender.DAS_AGENT=org.wso2.carbon.data.agents.log4j.appender.LogEventAppender
>>> # DAS_AGENT uses PatternLayout.
>>> log4j.appender.DAS_AGENT.layout=org.wso2.carbon.data.agents.log4j.util.TenantAwarePatternLayout
>>> log4j.appender.DAS_AGENT.columnList=%T,%S,%A,%d,%c,%p,%m,%H,%I,%Stacktrace
>>> log4j.appender.DAS_AGENT.userName=admin
>>> log4j.appender.DAS_AGENT.password=admin
>>> log4j.appender.DAS_AGENT.url=tcp://localhost:7612
>>> log4j.appender.DAS_AGENT.truststorePath=/repository/resources/security/client-truststore.jks
>>> log4j.appender.DAS_AGENT.maxTolerableConsecutiveFailure=5
>>> log4j.appender.DAS_AGENT.streamDef=loganalyzer:1.0.0
>>> log4j.logger.trace.messages=TRACE,CARBON_TRACE_LOGFILE
>>>
>>> There are a few problems with this:
>>> 1. The username, password, and JKS file locations have to be
>>> synchronized in two places (e.g. the same values are configured in
>>> api-manager.xml).
>>> 2. Server start takes a bit longer, as the logger has to read the JKS
>>> files and publish all the log events to DAS while the server is
>>> starting.
>>>
>>> The suggestion is to configure the log4j appender from an OSGi
>>> component in its activation method. The component activation can read
>>> properties in carbon.xml or any custom configuration (e.g.
>>> api-manager.xml) and set the DAS properties accordingly. The log
>>> appender can be started once these values are known.
>>>
>>> What happens to the logs produced before the appender starts?
>>> The appender will collect all the logs in a temporary file until DAS
>>> publishing starts. We can use a data structure similar to [1]. The data
>>> in this file is published once start is called, and then normal
>>> publishing can resume.
>>>
>>> Added bonus: we can use the same file storage in the event that DAS is
>>> not contactable for a short duration (due to a transient network error
>>> or DAS downtime).
>>> The log events can then be published once DAS is available again.
>>> Currently it seems the events are kept in memory, which could
>>> contribute to an OOM situation.
>>>
>>> There is one drawback I can think of:
>>> 1. No event is published until OSGi is fully initialized, even across
>>> repeated restarts. Those events will be collected in the temp file,
>>> though. I think this is OK, as usually OSGi/Carbon will start unless
>>> there is some low-level library issue.
>>>
>>> WDYT?
>>>
>>> [1] http://www.javaworld.com/article/2076333/java-web-development/use-a-randomaccessfile-to-build-a-low-level-database.html
>>>
>>> Cheers,
>>> Ruwan
>>> --
>>>
>>> *Ruwan Abeykoon*
>>> *Architect,*
>>> *WSO2, Inc. http://wso2.com <http://wso2.com/>*
>>> *lean.enterprise.middleware.*
>>>
>>> email: [email protected]
>>>
>>> _______________________________________________
>>> Architecture mailing list
>>> [email protected]
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>> --
>> /sumedha
>> m: +94 773017743
>> b : bit.ly/sumedha
>
> --
> Thanks,
> Miyuru Dayarathna
> Senior Technical Lead
> Mobile: +94713527783
> Blog: http://miyurublog.blogspot.com

--

*Ruwan Abeykoon*
*Architect,*
*WSO2, Inc. http://wso2.com <http://wso2.com/>*
*lean.enterprise.middleware.*

email: [email protected]
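P.S. The deferred-start behaviour proposed above (spool log events to a temporary file until the OSGi component supplies the DAS credentials, then replay the backlog and switch to direct publishing) could be sketched roughly as follows. This is a minimal illustration only; `BufferingDasAppender` and its methods are hypothetical stand-ins, not the real LogEventAppender or DAS data agent API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

// Sketch of an appender that buffers events on disk until an OSGi
// component's activate() method calls start() with the DAS credentials
// read from carbon.xml / api-manager.xml.
class BufferingDasAppender {
    private final Path bufferFile;
    // Stands in for the real DAS publisher; records what was "sent".
    private final List<String> published = new ArrayList<>();
    private volatile boolean started = false;

    BufferingDasAppender() throws IOException {
        this.bufferFile = Files.createTempFile("das-log-buffer", ".log");
    }

    // Called for every log event; spools to the temp file until start() runs.
    synchronized void append(String event) throws IOException {
        if (started) {
            published.add(event); // would be a call to the DAS data agent
        } else {
            Files.writeString(bufferFile, event + System.lineSeparator(),
                    StandardOpenOption.APPEND);
        }
    }

    // Invoked from the OSGi component activation once credentials are known.
    // A real implementation would create the data publisher here using the
    // username/password/url instead of just flipping a flag.
    synchronized void start(String username, String password, String url)
            throws IOException {
        for (String line : Files.readAllLines(bufferFile)) {
            published.add(line); // replay the backlog first, preserving order
        }
        Files.deleteIfExists(bufferFile);
        started = true;
    }

    List<String> publishedEvents() {
        return published;
    }
}
```

The same buffer-and-replay path could also cover the "DAS temporarily unreachable" case: on publish failure, flip `started` back off and spool to the file again until the connection recovers.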
_______________________________________________
Architecture mailing list
[email protected]
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
