New Resource Provider SPI - limitations

2017-09-14 Thread Roy Teeuwen
Hey all,

We are currently upgrading our environment, and the new resource provider 
SPI is now available. It seems, though, that our current resource provider 
cannot be used as-is with the new SPI: with the old one, a provider could 
dynamically check whether it can return a resource for the request and, if 
not, simply return null, letting the next resource provider have a look.
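(For context, the old behaviour described here looks roughly like the sketch 
below, using the now-deprecated org.apache.sling.api.resource.ResourceProvider 
interface; the class name and the looksLikeOurStructure() check are made-up 
placeholders, not code from this thread:)

import java.util.Iterator;

import javax.servlet.http.HttpServletRequest;

import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceProvider;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.SyntheticResource;

// Old-style provider: returning null simply means "not mine", and the
// resolver falls through to the next provider registered for the path.
public class OldStyleProvider implements ResourceProvider {

    @Override
    public Resource getResource(ResourceResolver resolver, String path) {
        if (looksLikeOurStructure(path)) {
            // made-up resource type, purely for illustration
            return new SyntheticResource(resolver, path, "xyz/virtual");
        }
        return null; // let the next resource provider have a look
    }

    @Override
    public Resource getResource(ResourceResolver resolver,
            HttpServletRequest request, String path) {
        return getResource(resolver, path);
    }

    @Override
    public Iterator<Resource> listChildren(Resource parent) {
        return null; // children handling omitted in this sketch
    }

    private boolean looksLikeOurStructure(String path) {
        return path.contains("/our-structure/"); // placeholder check
    }
}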

Since the old resource provider API is deprecated, what would be the 
recommended approach to keep the same kind of logic:

- Register at the root level
- Check whether the request contains something we are looking for (in our 
case, a specific structure in the requested URL)
- If yes, return a resource; if not, let the default resource provider do 
its job

I could of course wait until the old resource provider API has actually been 
removed, but I would rather work proactively :)
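For illustration, a rough sketch of that approach against the new SPI 
(org.apache.sling.spi.resource.provider) might look like the following; the 
component, the matchesOurStructure() check and the delegation to the parent 
provider are assumptions about how this could work, not a confirmed 
recommendation:

import java.util.Iterator;

import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.SyntheticResource;
import org.apache.sling.spi.resource.provider.ResolveContext;
import org.apache.sling.spi.resource.provider.ResourceContext;
import org.apache.sling.spi.resource.provider.ResourceProvider;
import org.osgi.service.component.annotations.Component;

// Hypothetical provider mounted at the root: it answers only for paths that
// match "our" structure and otherwise hands the request back to the provider
// it is mounted on top of (e.g. the JCR provider) via the parent context.
@Component(service = ResourceProvider.class,
           property = { ResourceProvider.PROPERTY_ROOT + "=/" })
public class StructureAwareResourceProvider extends ResourceProvider<Object> {

    @Override
    @SuppressWarnings({ "rawtypes", "unchecked" })
    public Resource getResource(ResolveContext<Object> ctx, String path,
            ResourceContext resourceContext, Resource parent) {
        if (matchesOurStructure(path)) {
            // made-up resource type, purely for illustration
            return new SyntheticResource(ctx.getResourceResolver(), path, "xyz/virtual");
        }
        // Not ours: delegate to the underlying provider instead of returning null.
        final ResourceProvider parentProvider = ctx.getParentResourceProvider();
        if (parentProvider != null) {
            return parentProvider.getResource(ctx.getParentResolveContext(), path,
                    resourceContext, parent);
        }
        return null; // nothing found anywhere
    }

    @Override
    public Iterator<Resource> listChildren(ResolveContext<Object> ctx, Resource parent) {
        return null; // children handling omitted in this sketch
    }

    private boolean matchesOurStructure(String path) {
        return path.contains("/our-structure/"); // placeholder check
    }
}

Whether overlaying the root like this is the intended replacement for the old 
return-null fall-through is exactly the open question here.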

Greets,
Roy


Re: Directing Sling job logging output to separate files?

2017-09-14 Thread Robert Munteanu
On Thu, 2017-09-14 at 01:15 +0000, John Logan wrote:
> I got this working and thought I'd follow up with what I did in case
> anyone else needs this sort of thing.

Looks interesting, thanks for sharing.

Robert


Re: Directing Sling job logging output to separate files?

2017-09-14 Thread Chetan Mehrotra
When providing a custom logback.xml, remember to add the Sling-specific
handlers [1]; otherwise the OSGi integration of the logging configuration
will not work as expected.

Chetan Mehrotra
[1] 
https://sling.apache.org/documentation/development/logging.html#external-config-file
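For reference, the handlers referred to above are a couple of <newRule> entries 
plus an <osgi/> element at the top of the external file, roughly as sketched 
below (reproduced from memory of the linked page, so verify the exact patterns 
and action classes there):

<configuration>
  <!-- Sling-specific handlers so that OSGi-provided logger/appender
       configuration still takes effect; check [1] for the exact values -->
  <newRule pattern="*/configuration/osgi"
           actionClass="org.apache.sling.commons.log.logback.OsgiAction"/>
  <newRule pattern="*/configuration/appender-ref-osgi"
           actionClass="org.apache.sling.commons.log.logback.OsgiAppenderRefAction"/>
  <osgi/>

  <!-- ... the rest of the custom configuration (appenders, loggers, root) ... -->
</configuration>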

On Thu, Sep 14, 2017 at 6:45 AM, John Logan  wrote:
> I got this working and thought I'd follow up with what I did in case anyone 
> else needs this sort of thing.
>
>
> I used SiftingAppender pretty much as shown in any of the examples one can 
> find online.  I put the following logback.xml in my sling.home, and pointed 
> the OSGi configuration variable 
> org.apache.sling.commons.log.configurationFile to it:
>
>
> <configuration>
>   <!-- appender names and the root logger level below are reconstructed
>        placeholders; key, defaultValue, file and pattern are as posted -->
>   <appender name="JOB-SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
>     <discriminator>
>       <key>jobId</key>
>       <defaultValue>/var/log/sling/error.log</defaultValue>
>     </discriminator>
>     <sift>
>       <appender name="FILE-${jobId}" class="ch.qos.logback.core.FileAppender">
>         <file>${logPath}</file>
>         <encoder>
>           <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
>         </encoder>
>       </appender>
>     </sift>
>   </appender>
>
>   <root level="info">
>     <appender-ref ref="JOB-SIFT"/>
>   </root>
> </configuration>
>
>
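As an aside, the OSGi property mentioned above belongs to the Sling Commons Log 
configuration, whose PID is org.apache.sling.commons.log.LogManager; setting it 
could look like this (the logback.xml location shown is only an example):

# OSGi configuration for PID org.apache.sling.commons.log.LogManager
# (the path below is illustrative)
org.apache.sling.commons.log.configurationFile=/opt/sling/logback.xml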
> An abstract wrapper implementation of JobExecutor sets up the MDC to conform 
> to what's expected in the logback.xml, and provides a few other niceties.  
> Replace the SlingResourceProvider/StorageNode stuff with whatever you use to 
> access files and Sling nodes; it's just a simple abstraction layer that we 
> happen to be using.
>
>
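To give a sense of how this executor gets driven, a submitter would add a job 
carrying the properties documented in the class below. A minimal sketch using 
Sling's JobManager, where the topic name and the paths are invented for 
illustration:

import java.util.HashMap;
import java.util.Map;

import org.apache.sling.event.jobs.Job;
import org.apache.sling.event.jobs.JobManager;

public class BuildJobSubmitter {

    // Queues a build job with the properties BuildJobExecutor expects.
    // Topic and paths are illustrative, not part of the original post.
    public Job submit(JobManager jobManager, String jobId) {
        final Map<String, Object> props = new HashMap<>();
        props.put("jobId", jobId);                                        // must be unique per job
        props.put("logPath", "/var/log/sling/builds/" + jobId + ".log");  // per-job log file
        props.put("resetLog", Boolean.TRUE);                              // delete a stale log first
        return jobManager.addJob("com/xyz/build", props);
    }
}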
> package com.xyz.content.sling.processor;
>
> import java.io.BufferedReader;
> import java.io.IOException;
> import java.nio.charset.StandardCharsets;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.Paths;
>
> import org.apache.sling.api.resource.LoginException;
> import org.apache.sling.api.resource.ResourceResolver;
> import org.apache.sling.api.resource.ResourceResolverFactory;
> import org.apache.sling.event.jobs.Job;
> import org.apache.sling.event.jobs.consumer.JobExecutionContext;
> import org.apache.sling.event.jobs.consumer.JobExecutionResult;
> import org.apache.sling.event.jobs.consumer.JobExecutor;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> import org.slf4j.MDC;
>
> import com.xyz.content.sling.storage.SlingResourceProvider;
> import com.xyz.storage.StorageNode;
>
> /**
>  * Sling job executor with build job logging.
>  *
>  * This class wraps build processing with job-specific log management.
>  * Job submitters need to configure the following job properties
>  * prior to submitting a job:
>  * * jobId - A symbolic identifier for the job.  This value must be
>  *   unique throughout the life of the job.  Subclasses
>  *   may use this value as a job name for submitting
>  *   and monitoring SLURM jobs.
>  * * logPath - The pathname to the build log file.
>  * * damLogPath - If specified, the DAM resource to which
>  *the log output should be copied upon build completion.
>  * resetLog - An optional boolean parameter which, when true,
>  *  removes the build log file if it exists prior
>  *  to commencing the build.
>  *
>  * @author john
>  *
>  */
> public abstract class BuildJobExecutor implements JobExecutor {
> private static Logger LOG = 
> LoggerFactory.getLogger(BuildJobExecutor.class);
>
> /**
>  * Retrieve a resource resolver factory for build processing.
>  *
>  * @return  the resource resolver factory
>  */
> protected abstract ResourceResolverFactory getResolverFactory();
>
> /**
>  * Subclass-specific build processing method.
>  *
>  * @param job
>  * @param context
>  * @param resolver
>  * @return  the result of build processing
>  */
> protected abstract JobExecutionResult build(Job job, JobExecutionContext 
> context, ResourceResolver resolver);
>
> @Override
> public JobExecutionResult process(Job job, JobExecutionContext context) {
> //
> //  Prepare for the log directory and file.
> //
> final String jobId = job.getProperty("jobId", String.class);
> final Path logPath = Paths.get(job.getProperty("logPath", 
> String.class));
> final Path logParentPath = logPath.getParent();
> if (logParentPath != null) {
> try {
> Files.createDirectories(logParentPath);
> }
> catch (final IOException e) {
> return handleError(context, "Unable to create log directory " 
> + logParentPath);
> }
> }
>
> if (Boolean.TRUE.equals(job.getProperty("resetLog"))) {
> try {
> Files.deleteIfExists(logPath);
> }
> catch (final IOException e) {
> return handleError(context, "Unable to clear log file " + 
> logPath);
> }
> }
>
> LOG.info("Starting build job with ID " + jobId);
> ResourceResolver resolver;
> try {
> resolver = getResolverFactory().getServiceResourceResolver(null);
> }
> catch (final LoginException e) {
> return