While providing a custom logback.xml, do remember to add the Sling-specific
handlers [1]. Otherwise the OSGi integration of the logging config will not
work as expected.

Chetan Mehrotra
[1] https://sling.apache.org/documentation/development/logging.html#external-config-file

On Thu, Sep 14, 2017 at 6:45 AM, John Logan <john.lo...@texture.com> wrote:
> I got this working and thought I'd follow up with what I did in case anyone 
> else needs this sort of thing.
>
>
> I used SiftingAppender pretty much as shown in any of the examples one can 
> find online.  I put the following logback.xml in my sling.home, and pointed 
> the OSGi configuration property 
> org.apache.sling.commons.log.configurationFile to it:
>
>
> <configuration>
>   <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
>     <discriminator>
>       <key>jobId</key>
>       <defaultValue>/var/log/sling/error.log</defaultValue>
>     </discriminator>
>     <sift>
>       <appender name="FILE-${jobId}" class="ch.qos.logback.core.FileAppender">
>         <file>${logPath}</file>
>         <layout class="ch.qos.logback.classic.PatternLayout">
>           <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
>         </layout>
>       </appender>
>     </sift>
>   </appender>
>
>   <root level="info">
>     <appender-ref ref="SIFT" />
>   </root>
> </configuration>
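One caveat with the SiftingAppender setup above: each distinct jobId value gets its own nested FileAppender, which logback keeps open until it times out. For long-running or numerous jobs it can be worth bounding this explicitly. A sketch using logback's standard SiftingAppender settings (the values here are only examples):

```xml
<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
  <!-- Close a job's nested appender 30 minutes after its last event -->
  <timeout>30 minutes</timeout>
  <!-- Cap the number of concurrently open nested appenders -->
  <maxAppenderCount>50</maxAppenderCount>
  <discriminator>
    ...
  </discriminator>
  ...
</appender>
```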
>
>
> An abstract wrapper implementation of JobExecutor sets up the MDC to conform 
> to what's expected in the logback.xml, and provides a few other niceties.  
> Replace the SlingResourceProvider/StorageNode stuff with whatever you use to 
> access files and Sling nodes; it's just a simple abstraction layer that we 
> happen to be using.
>
>
> package com.xyz.content.sling.processor;
>
> import java.io.BufferedReader;
> import java.io.IOException;
> import java.nio.charset.StandardCharsets;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.Paths;
>
> import org.apache.sling.api.resource.LoginException;
> import org.apache.sling.api.resource.ResourceResolver;
> import org.apache.sling.api.resource.ResourceResolverFactory;
> import org.apache.sling.event.jobs.Job;
> import org.apache.sling.event.jobs.consumer.JobExecutionContext;
> import org.apache.sling.event.jobs.consumer.JobExecutionResult;
> import org.apache.sling.event.jobs.consumer.JobExecutor;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> import org.slf4j.MDC;
>
> import com.xyz.content.sling.storage.SlingResourceProvider;
> import com.xyz.storage.StorageNode;
>
> /**
>  * Sling job executor with build job logging.
>  *
>  * This class wraps build processing with job-specific log management.
>  * Job submitters need to configure the following job properties
>  * prior to submitting a job:
>  * * jobId - A symbolic identifier for the job.  This value must be
>  *           unique throughout the life of the job.  Subclasses
>  *           may use this value as a job name for submitting
>  *           and monitoring SLURM jobs.
>  * * logPath - The pathname to the build log file.
>  * * damLogPath - If specified, the DAM resource to which
>  *                the log output should be copied upon build completion.
>  * * resetLog - An optional boolean parameter which, when true,
>  *              removes the build log file if it exists prior
>  *              to commencing the build.
>  *
>  * @author john
>  *
>  */
> public abstract class BuildJobExecutor implements JobExecutor {
>     private static final Logger LOG = LoggerFactory.getLogger(BuildJobExecutor.class);
>
>     /**
>      * Retrieve a resource resolver factory for build processing.
>      *
>      * @return  the resource resolver factory
>      */
>     protected abstract ResourceResolverFactory getResolverFactory();
>
>     /**
>      * Subclass-specific build processing method.
>      *
>      * @param job
>      * @param context
>      * @param resolver
>      * @return  the result of build processing
>      */
>     protected abstract JobExecutionResult build(Job job, JobExecutionContext context, ResourceResolver resolver);
>
>     @Override
>     public JobExecutionResult process(Job job, JobExecutionContext context) {
>         //
>         //  Prepare for the log directory and file.
>         //
>         final String jobId = job.getProperty("jobId", String.class);
>         final Path logPath = Paths.get(job.getProperty("logPath", String.class));
>         final Path logParentPath = logPath.getParent();
>         if (logParentPath != null) {
>             try {
>                 Files.createDirectories(logParentPath);
>             }
>             catch (final IOException e) {
>                 return handleError(context, "Unable to create log directory " + logParentPath);
>             }
>         }
>
>         if (Boolean.TRUE.equals(job.getProperty("resetLog"))) {
>             try {
>                 Files.deleteIfExists(logPath);
>             }
>             catch (final IOException e) {
>                 return handleError(context, "Unable to clear log file " + logPath);
>             }
>         }
>
>         LOG.info("Starting build job with ID " + jobId);
>         ResourceResolver resolver;
>         try {
>             resolver = getResolverFactory().getServiceResourceResolver(null);
>         }
>         catch (final LoginException e) {
>             return handleError(context, "Unable to get build job resource resolver; check cbservice user configuration.");
>         }
>
>         //
>         //  Perform the build operation.  Logging to the job-specific log file starts here.
>         //
>         MDC.put("jobId", jobId);
>         MDC.put("logPath", logPath.toString());
>         try {
>             final JobExecutionResult result = build(job, context, resolver);
>             if (job.getProperty("damLogPath") != null) {
>                 final Path damLogPath = Paths.get(job.getProperty("damLogPath", String.class));
>                 saveLog(resolver, logPath, damLogPath);
>             }
>             return result;
>         }
>         catch (final Throwable e) {
>             LOG.error("Build job failed with an exception.", e);
>             return handleError(context, "Build job failed with an exception: " + e.getMessage());
>         }
>         finally {
>             MDC.remove("jobId");
>             MDC.remove("logPath");
>             resolver.close();
>         }
>     }
>
>     private JobExecutionResult handleError(JobExecutionContext context, String message) {
>         LOG.error(message);
>         return context.result().message(message).cancelled();
>     }
>
>     private void saveLog(ResourceResolver resolver, Path logPath, Path damLogPath) {
>         try (BufferedReader reader = Files.newBufferedReader(logPath, StandardCharsets.UTF_8)) {
>             final SlingResourceProvider storageProvider = new SlingResourceProvider();
>             storageProvider.setResolver(resolver);
>             final StorageNode damLogNode = storageProvider.get(damLogPath);
>             damLogNode.copyFromReader(reader, StandardCharsets.UTF_8);
>             resolver.commit();
>         }
>         catch (final IOException e) {
>             LOG.error("Unable to move build log " + logPath + " to DAM resource " + damLogPath, e);
>         }
>     }
> }
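As a usage sketch of the executor above: the submitter side just needs to populate the documented job properties before calling JobManager.addJob. The topic name and values below are made up; only the property keys come from BuildJobExecutor's contract:

```java
import java.util.Map;

public class BuildJobSubmitter {
    /** Builds the property map expected by BuildJobExecutor. */
    static Map<String, Object> buildJobProperties(String jobId, String logDir) {
        return Map.of(
                "jobId", jobId,                           // unique per job; also the MDC discriminator key
                "logPath", logDir + "/" + jobId + ".log", // per-job log file consumed as ${logPath}
                "resetLog", Boolean.TRUE);                // delete any stale log before the build starts
    }

    public static void main(String[] args) {
        Map<String, Object> props = buildJobProperties("build-001", "/var/log/sling/jobs");
        // With an injected org.apache.sling.event.jobs.JobManager this would be:
        // jobManager.addJob("com/xyz/jobs/build", props);
        System.out.println(props);
    }
}
```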
>
>
>
> ________________________________
> From: Robert Munteanu <romb...@apache.org>
> Sent: Friday, September 8, 2017 1:13:02 AM
> To: users@sling.apache.org
> Subject: Re: Directing Sling job logging output to separate files?
>
> Hi John,
>
> On Fri, 2017-09-08 at 05:28 +0000, John Logan wrote:
>> Hi,
>>
>>
>> I'm using the Sling job manager to handle some long running tasks,
>> and would like to direct the log output for each job to its own file
>> at a job-specific path.  Is there a straightforward way to achieve
>> this?
>
> If your jobs use separate loggers, you can achieve that either by:
>
> - manually creating loggers and appenders via
>   http://localhost:8080/system/console/slinglog/
> - adding specific loggers/appenders to the provisioning model
>
> There might be a way of adding those at runtime using the logback APIs,
> but I haven't tried it before.
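For the record, attaching an appender at runtime does work with plain logback APIs when logback-classic is the slf4j backend (as it is under Sling Commons Log). A minimal sketch; the "jobs.<id>" logger naming scheme here is hypothetical:

```java
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.FileAppender;
import org.slf4j.LoggerFactory;

public class RuntimeAppenderSetup {
    public static void attachJobAppender(String jobId, String logFile) {
        // This cast only succeeds when logback-classic backs slf4j.
        LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();

        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(ctx);
        encoder.setPattern("%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n");
        encoder.start();

        FileAppender<ILoggingEvent> appender = new FileAppender<>();
        appender.setContext(ctx);
        appender.setName("job-" + jobId);
        appender.setFile(logFile);
        appender.setEncoder(encoder);
        appender.start();

        // Route everything logged under "jobs.<id>" to the job file only,
        // not to the root appenders.
        ch.qos.logback.classic.Logger logger = ctx.getLogger("jobs." + jobId);
        logger.addAppender(appender);
        logger.setAdditive(false);
    }
}
```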
>
> Robert
