Hello,

In its early stages, it is ideal if you maintain this extension yourselves.
This will allow you to iterate faster: you won't need to wait in the PMC
review queue, and you can release as often as you want. Once the project
gains enough traction, we can revisit the idea of shipping it as an official
Log4j component.

Regarding your approach of listening for file roll events to trigger the
data ingestion: this implies an extra and potentially redundant storage
cost, and the rolled file presumably needs to be parsed back too, under the
assumption of a certain structured format. Given that Azure Data Explorer
has a Java SDK, I would advise creating your own appender to address these
shortcomings. There you can simply feed `LogEvent`s directly to ADE in a
format of your choice and use either the batching or the streaming ingestion
provided by the ADE Java SDK.
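
A rough sketch of what I have in mind, assuming a recent log4j-core 2.x and
a recent azure-kusto-java ingest artifact (the plugin name, configuration
attributes, and flush threshold below are placeholders I made up, and the
SDK package and class names may differ between SDK releases):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.Serializable;
    import java.util.concurrent.TimeUnit;

    import org.apache.logging.log4j.core.Appender;
    import org.apache.logging.log4j.core.Core;
    import org.apache.logging.log4j.core.Layout;
    import org.apache.logging.log4j.core.LogEvent;
    import org.apache.logging.log4j.core.appender.AbstractAppender;
    import org.apache.logging.log4j.core.config.Property;
    import org.apache.logging.log4j.core.config.plugins.Plugin;
    import org.apache.logging.log4j.core.config.plugins.PluginAttribute;
    import org.apache.logging.log4j.core.config.plugins.PluginElement;
    import org.apache.logging.log4j.core.config.plugins.PluginFactory;

    import com.microsoft.azure.kusto.data.auth.ConnectionStringBuilder;
    import com.microsoft.azure.kusto.ingest.IngestClient;
    import com.microsoft.azure.kusto.ingest.IngestClientFactory;
    import com.microsoft.azure.kusto.ingest.IngestionProperties;
    import com.microsoft.azure.kusto.ingest.source.StreamSourceInfo;

    @Plugin(name = "AdxAppender", category = Core.CATEGORY_NAME,
            elementType = Appender.ELEMENT_TYPE)
    public final class AdxAppender extends AbstractAppender {

        // Hypothetical batch size; tune to your volume and latency needs.
        private static final int FLUSH_THRESHOLD_BYTES = 64 * 1024;

        private final IngestClient client;
        private final IngestionProperties ingestionProperties;
        private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

        private AdxAppender(String name, Layout<? extends Serializable> layout,
                            IngestClient client, IngestionProperties props) {
            super(name, null, layout, true, Property.EMPTY_ARRAY);
            this.client = client;
            this.ingestionProperties = props;
        }

        @PluginFactory
        public static AdxAppender createAppender(
                @PluginAttribute("name") String name,
                @PluginAttribute("clusterUri") String clusterUri,
                @PluginAttribute("appId") String appId,
                @PluginAttribute("appKey") String appKey,
                @PluginAttribute("tenantId") String tenantId,
                @PluginAttribute("database") String database,
                @PluginAttribute("table") String table,
                @PluginElement("Layout") Layout<? extends Serializable> layout)
                throws Exception {
            ConnectionStringBuilder csb = ConnectionStringBuilder
                    .createWithAadApplicationCredentials(
                            clusterUri, appId, appKey, tenantId);
            // Queued ingestion lets the service batch for you;
            // IngestClientFactory.createStreamingIngestClient(csb) is the
            // low-latency alternative.
            IngestClient client = IngestClientFactory.createClient(csb);
            IngestionProperties props = new IngestionProperties(database, table);
            // MULTIJSON matches a JsonLayout emitting one JSON object per event.
            props.setDataFormat(IngestionProperties.DataFormat.MULTIJSON);
            return new AdxAppender(name, layout, client, props);
        }

        @Override
        public synchronized void append(LogEvent event) {
            // Serialize the event with the configured layout and buffer it
            // in memory -- no intermediate rolled file on disk.
            byte[] bytes = getLayout().toByteArray(event);
            buffer.write(bytes, 0, bytes.length);
            if (buffer.size() >= FLUSH_THRESHOLD_BYTES) {
                flush();
            }
        }

        private synchronized void flush() {
            if (buffer.size() == 0) {
                return;
            }
            try {
                StreamSourceInfo source = new StreamSourceInfo(
                        new ByteArrayInputStream(buffer.toByteArray()));
                client.ingestFromStream(source, ingestionProperties);
                buffer.reset();
            } catch (Exception e) {
                error("Failed to ingest log batch into ADX", e);
            }
        }

        @Override
        public synchronized boolean stop(long timeout, TimeUnit timeUnit) {
            flush(); // push out whatever is still buffered before shutdown
            try {
                client.close();
            } catch (IOException ignored) {
            }
            return super.stop(timeout, timeUnit);
        }
    }

You would then wire it into the configuration like any other appender, e.g.
with a JsonLayout, so that the MULTIJSON data format above matches what the
layout emits. In a real implementation you would probably also want a
time-based flush next to the size threshold, so that a quiet application
does not sit on a stale batch.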

Kind regards.

On Tue, Aug 16, 2022 at 2:47 PM Ramachandran G <ram...@microsoft.com.invalid>
wrote:

> Hello Devs,
>
> G'day all.
>
> We have a small extension to Log4j that appends data into Azure Data Explorer
> (a time-series analytics platform on Azure), where we extend the RollingFile
> appender and then ingest the created log files into the database.
>
> (The APIs to ingest data use files that have been created through the
> RollingFile appender with a custom action, and this works well currently.)
>
> For example, we use the rolling file config, and when a new rolled file is
> created, we use the Action to get the RenameAction and use that to flush the
> file to the database:
>
>
> <RollingFile name="ADXRollingFile" fileName="C:/logs/logs.log"
>              filePattern="C:/logs/logs-%d{yyyy-MM-dd}-%i.log">
>     <KustoStrategy
>
> Wanted to check how this artifact can be made into a contrib, or whether it
> has to be maintained as an external contrib library in a different package
> space?
>
> The specific reason for the question is that other appenders have a full
> implementation of writing to a database by extending DBAppender or
> NoSqlAppender, etc.
>
>
> Kind Regards
> Ram
