[jira] [Commented] (SLING-7300) CPU spike in i18n job

2021-11-01 Joerg Hoh (Jira)


[ 
https://issues.apache.org/jira/browse/SLING-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17436921#comment-17436921
 ] 

Joerg Hoh commented on SLING-7300:
--

Can we close this ticket?

> CPU spike in i18n job
> -
>
> Key: SLING-7300
> URL: https://issues.apache.org/jira/browse/SLING-7300
> Project: Sling
>  Issue Type: Bug
>  Components: i18n, Karaf
>Affects Versions: i18n 2.5.10
> Environment: Linux, jdk1.8 latest version, Karaf 4.1.3
>Reporter: Ivo Leitão
>Priority: Critical
> Attachments: 1.tdump, 2.tdump, 3.tdump, cpuspike.png, new1.tdump, 
> new2.tdump, new3.tdump
>
>
> Hi,
> I'm experiencing severe CPU spikes caused by the reload job in the i18n 
> bundle. I've uploaded 3 thread dumps which I analysed with http://fastthread.io. 
> The analysis identified a CPU spike in the method scheduleReloadBundles of 
> the class 
> https://github.com/apache/sling-org-apache-sling-i18n/blob/master/src/main/java/org/apache/sling/i18n/impl/JcrResourceBundleProvider.java.
> This does not happen every time, only in some situations. I've experienced 
> this problem at least in the previous i18n bundle versions. Since there were 
> changes to the mentioned class, I updated it hoping to get rid of this 
> problem. Unfortunately I'm experiencing it again :-(
> Thread dump below:
> sling-default-1-JcrResourceBundleProvider: reload all resource bundles - 
> priority:5 - threadId:0x7f2ec851e800 - nativeId:0x1f72 - state:RUNNABLE
> stackTrace:
> java.lang.Thread.State: RUNNABLE
> at java.lang.ThreadLocal$ThreadLocalMap.getEntryAfterMiss(ThreadLocal.java:444)
> at java.lang.ThreadLocal$ThreadLocalMap.getEntry(ThreadLocal.java:419)
> at java.lang.ThreadLocal$ThreadLocalMap.access$000(ThreadLocal.java:298)
> at java.lang.ThreadLocal.get(ThreadLocal.java:163)
> at org.apache.logging.log4j.core.layout.StringBuilderEncoder.getByteBuffer(StringBuilderEncoder.java:115)
> at org.apache.logging.log4j.core.layout.StringBuilderEncoder.encode(StringBuilderEncoder.java:54)
> at org.apache.logging.log4j.core.layout.StringBuilderEncoder.encode(StringBuilderEncoder.java:32)
> at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:219)
> at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:57)
> at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:177)
> at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:170)
> at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:161)
> at org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender.append(RollingRandomAccessFileAppender.java:218)
> at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156)
> at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:129)
> at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:120)
> at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)
> at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:448)
> at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:433)
> at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:417)
> at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:403)
> at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)
> at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:146)
> at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.doLog0(PaxLoggerImpl.java:151)
> at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.doLog(PaxLoggerImpl.java:144)
> at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.inform(PaxLoggerImpl.java:176)
> at org.ops4j.pax.logging.internal.TrackingLogger.inform(TrackingLogger.java:86)
> at org.ops4j.pax.logging.slf4j.Slf4jLogger.info(Slf4jLogger.java:476)
> at org.apache.sling.i18n.impl.JcrResourceBundleProvider$1.run(JcrResourceBundleProvider.java:319)
> at org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:347)
> at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
> - <0xe6d47830> (a java.util.concurrent.ThreadPoolExecutor$Worker)
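
For context, the job at the top of the quoted stack trace is a plain Runnable scheduled through the Sling Commons Scheduler, whose run() method logs before doing its work (Slf4jLogger.info is the last application frame before the log4j2 path). A minimal Java sketch of a job with that shape; class and method names and the delay are illustrative, not Sling's actual JcrResourceBundleProvider code:

import java.util.Date;

import org.apache.sling.commons.scheduler.ScheduleOptions;
import org.apache.sling.commons.scheduler.Scheduler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ReloadJobSketch {

    private static final Logger LOG = LoggerFactory.getLogger(ReloadJobSketch.class);

    public void scheduleReload(Scheduler scheduler) {
        Runnable job = new Runnable() {
            @Override
            public void run() {
                // This log call is the path that shows up hot in the dump:
                // Slf4jLogger.info -> pax-logging -> log4j2 -> ThreadLocal.get
                LOG.info("Reloading all resource bundles");
                // ... actual reload work would go here ...
            }
        };
        // Illustrative: run the job once, 30 seconds from now.
        ScheduleOptions options = scheduler
                .AT(new Date(System.currentTimeMillis() + 30_000))
                .name("reload all resource bundles");
        scheduler.schedule(job, options);
    }
}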




[jira] [Commented] (SLING-7300) CPU spike in i18n job

2017-12-12 JIRA

[ 
https://issues.apache.org/jira/browse/SLING-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287728#comment-16287728
 ] 

Ivo Leitão commented on SLING-7300:
---

One more thing I have noticed that is common to both stack traces: in both 
cases the initiator is always a Quartz job, so all of this happens in another 
thread:

at org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:347)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)

The lines above are also common to both stack traces, which always end at:

at java.lang.ThreadLocal$ThreadLocalMap.getEntryAfterMiss(ThreadLocal.java:444)
at java.lang.ThreadLocal$ThreadLocalMap.getEntry(ThreadLocal.java:419)
at java.lang.ThreadLocal$ThreadLocalMap.access$000(ThreadLocal.java:298)
at java.lang.ThreadLocal.get(ThreadLocal.java:163)

So this must be related to code running on another thread.

Regards,
Ivo Leitão
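
For context on why ThreadLocal.get() can become a CPU hot spot at all: ThreadLocalMap is a per-thread, open-addressed hash table with linear probing, so a get() whose entry is not at its home slot falls into getEntryAfterMiss() and scans (and expunges stale) slots. A small, purely illustrative Java sketch of that mechanism, not a reproduction of this issue, might look like this:

import java.util.ArrayList;
import java.util.List;

public class ThreadLocalProbeSketch {

    public static void main(String[] args) {
        // Populate this thread's ThreadLocalMap with many entries.
        List<ThreadLocal<Object>> locals = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            ThreadLocal<Object> tl = new ThreadLocal<>();
            tl.set(new Object());
            locals.add(tl);
        }

        // One survivor whose entry now very likely sits away from its home slot.
        ThreadLocal<Object> survivor = new ThreadLocal<>();
        survivor.set(new Object());

        // Drop the other ThreadLocal references; after a GC their map entries
        // become stale and have to be skipped or expunged during later probes.
        locals.clear();
        System.gc();

        long start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            survivor.get();   // may go through getEntryAfterMiss()
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("1M get() calls took " + elapsedMs + " ms");
    }
}

Whether this shows a measurable slowdown depends on the JVM and heap state; it only illustrates where the probing cost in getEntryAfterMiss comes from.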


[jira] [Commented] (SLING-7300) CPU spike in i18n job

2017-12-12 JIRA

[ 
https://issues.apache.org/jira/browse/SLING-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287539#comment-16287539
 ] 

Ivo Leitão commented on SLING-7300:
---

I've added the new thread dumps (new1..3.tdump) containing the stack trace from 
my last comment.


[jira] [Commented] (SLING-7300) CPU spike in i18n job

2017-12-12 JIRA

[ 
https://issues.apache.org/jira/browse/SLING-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287536#comment-16287536
 ] 

Ivo Leitão commented on SLING-7300:
---

Hello,

I don't think this is related to Sling or to the log4j framework. From my 
investigation I believe I have stumbled upon a JVM bug, probably only on 
Linux-based machines. My colleagues have a mix of Mac, Windows and Linux 
machines, and only the Mac and Linux users are complaining about this problem. 
I solved the first occurrence by moving the logging infrastructure to 
asynchronous loggers via the LMAX Disruptor. Unfortunately, I later hit the 
same problem again, and the only common part is this section of the stack trace:

at java.lang.ThreadLocal$ThreadLocalMap.getEntryAfterMiss(ThreadLocal.java:444)
at java.lang.ThreadLocal$ThreadLocalMap.getEntry(ThreadLocal.java:419)
at java.lang.ThreadLocal$ThreadLocalMap.access$000(ThreadLocal.java:298)
at java.lang.ThreadLocal.get(ThreadLocal.java:163)

This new one happens in the Sling events code; below is the new stack trace 
with the 100% CPU signature (notice that it also stops in getEntryAfterMiss, 
like the other one).

I've found multiple reports of similar errors:
https://discuss.elastic.co/t/indexing-performance-degrading-over-time/40229
https://issues.apache.org/jira/browse/SOLR-9741
https://github.com/rundeck/rundeck/issues/715

The only clue was in the Elastic thread, where they recommended a different 
garbage collector, but I have not tried that.

I'm going to attach the new dumps to this issue as well.

sling-default-1-Registered Service.945 - priority:5 - 
threadId:0x7fdaf44ff800 - nativeId:0x6b8b - state:RUNNABLE
stackTrace:
java.lang.Thread.State: RUNNABLE
at java.lang.ThreadLocal$ThreadLocalMap.getEntryAfterMiss(ThreadLocal.java:444)
at java.lang.ThreadLocal$ThreadLocalMap.getEntry(ThreadLocal.java:419)
at java.lang.ThreadLocal$ThreadLocalMap.access$000(ThreadLocal.java:298)
at java.lang.ThreadLocal.get(ThreadLocal.java:163)
at org.apache.felix.framework.ServiceRegistrationImpl.currentThreadMarked(ServiceRegistrationImpl.java:763)
at org.apache.felix.framework.ServiceRegistry.getService(ServiceRegistry.java:305)
at org.apache.felix.framework.EventDispatcher.createWhitelistFromHooks(EventDispatcher.java:712)
at org.apache.felix.framework.EventDispatcher.filterListenersUsingHooks(EventDispatcher.java:559)
at org.apache.felix.framework.EventDispatcher.fireServiceEvent(EventDispatcher.java:542)
at org.apache.felix.framework.Felix.fireServiceEvent(Felix.java:4595)
at org.apache.felix.framework.Felix.access$000(Felix.java:106)
at org.apache.felix.framework.Felix$1.serviceChanged(Felix.java:420)
at org.apache.felix.framework.ServiceRegistry.unregisterService(ServiceRegistry.java:170)
at org.apache.felix.framework.ServiceRegistrationImpl.unregister(ServiceRegistrationImpl.java:144)
at org.apache.sling.event.impl.jobs.jmx.QueuesMBeanImpl.remove(QueuesMBeanImpl.java:171)
at org.apache.sling.event.impl.jobs.jmx.QueuesMBeanImpl.removeAndNotify(QueuesMBeanImpl.java:140)
at org.apache.sling.event.impl.jobs.jmx.QueuesMBeanImpl.unbindQueueMBean(QueuesMBeanImpl.java:116)
at org.apache.sling.event.impl.jobs.jmx.QueuesMBeanImpl.sendEvent(QueuesMBeanImpl.java:95)
at org.apache.sling.event.impl.jobs.queues.QueueManager.maintain(QueueManager.java:203)
- locked <0xe8c0bf78> (a java.lang.Object)
at org.apache.sling.event.impl.jobs.queues.QueueManager.run(QueueManager.java:259)
at org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:347)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- <0xe721ea50> (a java.util.concurrent.ThreadPoolExecutor$Worker)
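
For reference, the asynchronous-logger workaround mentioned above is, with plain Log4j 2, normally enabled by selecting the async context selector before any logger is initialised and putting the com.lmax:disruptor jar on the classpath; under Karaf/pax-logging the equivalent is usually a JVM system property rather than application code. A hedged Java sketch of the plain Log4j 2 variant:

public class AsyncLoggingBootstrap {

    public static void main(String[] args) {
        // Must run before the first LogManager.getLogger() call anywhere in the
        // JVM, otherwise the default synchronous context is already in place.
        System.setProperty("Log4jContextSelector",
                "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");

        org.apache.logging.log4j.Logger log =
                org.apache.logging.log4j.LogManager.getLogger(AsyncLoggingBootstrap.class);
        log.info("all loggers are now asynchronous");
    }
}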


   


[jira] [Commented] (SLING-7300) CPU spike in i18n job

2017-12-11 Alexander Klimetschek (JIRA)

[ 
https://issues.apache.org/jira/browse/SLING-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286651#comment-16286651
 ] 

Alexander Klimetschek commented on SLING-7300:
--

All three dumps are inside the logging code, so maybe the problem is there. Do 
you see a lot of log output from the i18n bundle? If so, it may be set to a too 
verbose log level and be producing more output than an overloaded logging 
system can handle (just guessing here); a workaround would be to limit logging 
for {{org.apache.sling.i18n}} to ERROR or the like. HTH
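
If someone wants to try that workaround programmatically, the Log4j 2 core API allows raising a logger's threshold at runtime; in a Karaf/pax-logging setup the same effect is normally achieved in the logging configuration instead. A minimal Java sketch, assuming the Log4j 2 core classes are accessible:

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;

public class LimitI18nLogging {

    public static void main(String[] args) {
        // Raise the threshold for the Sling i18n loggers to ERROR so the
        // periodic "reload all resource bundles" INFO messages are dropped.
        Configurator.setLevel("org.apache.sling.i18n", Level.ERROR);
    }
}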
