On Fri, Jun 22, 2012 at 1:45 PM, Amani Soysa <[email protected]> wrote:
>
> On Fri, Jun 22, 2012 at 1:24 PM, Tharindu Mathew <[email protected]> wrote:
>
>> This will be useful for folks who want real-time data access, but BAM is
>> not designed to be real time. I don't want the Agent API to be specific to
>> Cassandra, either.
>>
>> There should be a clean way to do this. How did you decide to do it this
>> way? Was there a discussion?
>
> Yes, there was a discussion on this some time back on Architecture, "RFC:
> Architecture for Stratos Log Processing", where we decided to push logs to
> the BAM event receiver through the publisher and to view logs using the
> Hector API.
Initially we tried to use Flume as the Stratos log collector/manager, but we
stopped the Flume evaluation because BAM can cover the same use case. There
are several workarounds, such as creating the relevant keyspaces at
tenant-creation time, or creating an extended event receiver only for
logging.

Thanks,
Deependra.

>> On Fri, Jun 22, 2012 at 8:45 AM, Amani Soysa <[email protected]> wrote:
>>
>>> Hi,
>>>
>>> Currently we send LogEvent data through the BAM data publisher to the
>>> BAM event receiver using a custom log4j appender, and we retrieve logs
>>> using the Hector API for the Carbon log viewer. However, we need
>>> secondary indexes on several columns so that we can filter log
>>> information by a given column (such as date, applicationName, priority,
>>> logger, etc.) when creating the data publisher (keyspace). With the
>>> current BAM data publisher implementation we cannot define secondary
>>> indexes; all we can do is define the column name and the data type of
>>> each column, and the receiver creates the keyspaces for the given
>>> columns with their data types.
>>>
>>> streamId = dataPublisher.defineEventStream("{"
>>>         + "  'name':'org.wso2.carbon.logging.$tenantId.$serverName',"
>>>         + "  'version':'1.0.0',"
>>>         + "  'nickName': 'Logs',"
>>>         + "  'description': 'Logging Event',"
>>>         + "  'metaData':["
>>>         + "    {'name':'clientType','type':'STRING'}"
>>>         + "  ],"
>>>         + "  'payloadData':["
>>>         + "    {'name':'tenantID','type':'STRING'},"
>>>         + "    {'name':'serverName','type':'STRING'},"
>>>         + "    {'name':'appName','type':'STRING'},"
>>>         + "    {'name':'logTime','type':'LONG'},"
>>>         + "    {'name':'logger','type':'STRING'},"
>>>         + "    {'name':'priority','type':'STRING'},"
>>>         + "    {'name':'message','type':'STRING'},"
>>>         + "    {'name':'ip','type':'STRING'},"
>>>         + "    {'name':'stacktrace','type':'STRING'},"
>>>         + "    {'name':'instance','type':'STRING'}"
>>>         + "  ]"
>>>         + "}");
>>>
>>> Is it possible to have a Cassandra-specific event receiver (for logging
>>> purposes) so that we can create keyspaces with secondary indexes [1]?
>>> It would then create the keyspaces whenever logs are published. Or do we
>>> need to create the keyspaces at tenant-creation time? For a given tenant
>>> we need to create several keyspaces, depending on the server (and, if
>>> possible, per application as well, so that we get better performance
>>> when viewing logs), i.e.:
>>>
>>>   keyspace1 - org_wso2_logging_tenant1_application_server (stores AS-specific logs)
>>>   keyspace2 - org_wso2_logging_tenant1_data_services_server (stores DSS-specific logs)
>>>
>>> Please note that we cannot use BAM analytics to view logs, because we
>>> need a real-time log viewer.
>>>
>>> [1] - https://wso2.org/jira/browse/CARBON-13468
>>>
>>> Regards,
>>> Amani
>>
>> --
>> Regards,
>>
>> Tharindu
>>
>> blog: http://mackiemathew.com/
>> M: +94777759908

--
Deependra Ariyadewa
WSO2, Inc. http://wso2.com/ http://wso2.org
email [email protected]; cell +94 71 403 5996
Blog http://risenfall.wordpress.com/
PGP info: KeyID: 'DC627E6F'
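[Editor's note] To make the per-tenant/per-server keyspace naming discussed above concrete, here is a minimal Java sketch. It is an illustration only, not the actual BAM receiver implementation: the class name and the dot-to-underscore mapping are assumptions (Cassandra keyspace names cannot contain dots, so a stream name like 'org.wso2.carbon.logging.$tenantId.$serverName' cannot be used verbatim), and the prefix follows the stream definition above rather than the shorter org_wso2_logging form in the keyspace examples.

```java
// Hypothetical helper sketching the tenant/server -> keyspace naming
// scheme described in the thread. All names here are illustrative.
public class LogKeyspaceNamer {

    // Builds a stream name following the pattern used in
    // defineEventStream() above, e.g.
    // "org.wso2.carbon.logging.tenant1.application_server".
    public static String streamName(String tenantId, String serverName) {
        return "org.wso2.carbon.logging." + tenantId + "." + serverName;
    }

    // Cassandra keyspace names cannot contain dots, so map them to
    // underscores to obtain a legal keyspace name, e.g.
    // "org_wso2_carbon_logging_tenant1_application_server".
    public static String keyspaceName(String tenantId, String serverName) {
        return streamName(tenantId, serverName).replace('.', '_');
    }

    public static void main(String[] args) {
        // One keyspace per tenant/server pair, as proposed in the thread.
        System.out.println(keyspaceName("tenant1", "application_server"));
        System.out.println(keyspaceName("tenant1", "data_services_server"));
    }
}
```

With a naming rule like this, a receiver could derive the target keyspace deterministically from the published stream, whether keyspaces are created lazily on first publish or eagerly at tenant creation.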
_______________________________________________
Dev mailing list
[email protected]
http://wso2.org/cgi-bin/mailman/listinfo/dev
