jackrabbit-oak build #6604: Fixed

2015-10-09 Thread Travis CI
Build Update for apache/jackrabbit-oak
-

Build: #6604
Status: Fixed

Duration: 2440 seconds
Commit: b6b6d1eaa3996fbbbacf8b37c7442984bbd620eb (1.2)
Author: Thomas Mueller
Message: OAK-3432 ConcurrentTest.testLoaderBlock fails because of waiting issue

git-svn-id: 
https://svn.apache.org/repos/asf/jackrabbit/oak/branches/1.2@1707667 
13f79535-47bb-0310-9956-ffa450edef68

View the changeset: 
https://github.com/apache/jackrabbit-oak/compare/8e810be60668...b6b6d1eaa399

View the full build log and details: 
https://travis-ci.org/apache/jackrabbit-oak/builds/84455763

--
sent by Jukka's Travis notification gateway


Re: jackrabbit-oak build #6598: Broken

2015-10-09 Thread Thomas Mueller
Hi,

> some fix missing in the 1.2 branch?

You are right, it looks like the cause is OAK-3432. It is fixed in
trunk but not in the branch, because I thought it didn't affect the
branch (the branch doesn't contain OAK-3234). But now I see it
also affects the branch.


Regards,
Thomas

On 08/10/15 09:27, "Marcel Reutegger"  wrote:

>the failure is:
>
>Failed tests:   
>testLoaderBlock(org.apache.jackrabbit.oak.cache.ConcurrentTest): Had to
>wait unexpectedly long for other threads: 1207
>
>
>looks unrelated to Chetan's change.
>
>is the threshold too low for the test or some fix missing in the 1.2
>branch?
>
>Regards
> Marcel
>
>On 08/10/15 09:00, "Travis CI" wrote:
>
>>Build Update for apache/jackrabbit-oak
>>-
>>
>>Build: #6598
>>Status: Broken
>>
>>Duration: 703 seconds
>>Commit: 8e810be6066862f415f0067248a29a945e0d8d13 (1.2)
>>Author: Chetan Mehrotra
>>Message: OAK-3476 - Memory leak caused by using marker names based on non
>>static session id
>>
>>Merging 1707435
>>
>>
>>git-svn-id: 
>>https://svn.apache.org/repos/asf/jackrabbit/oak/branches/1.2@1707437
>>13f79535-47bb-0310-9956-ffa450edef68
>>
>>View the changeset:
>>https://github.com/apache/jackrabbit-oak/compare/e4567a6c224a...8e810be60668
>>
>>View the full build log and details:
>>https://travis-ci.org/apache/jackrabbit-oak/builds/84246458
>>
>>--
>>sent by Jukka's Travis notification gateway
>



jackrabbit-oak build #6609: Fixed

2015-10-09 Thread Travis CI
Build Update for apache/jackrabbit-oak
-

Build: #6609
Status: Fixed

Duration: 1586 seconds
Commit: 30f9078317c87828a668141ba19a762ef20ee2a3 (trunk)
Author: Michael Dürig
Message: OAK-3502: Improve logging during cleanup
Debug level logging for cleanup of tar files

git-svn-id: https://svn.apache.org/repos/asf/jackrabbit/oak/trunk@1707753 
13f79535-47bb-0310-9956-ffa450edef68

View the changeset: 
https://github.com/apache/jackrabbit-oak/compare/2727f9228ef7...30f9078317c8

View the full build log and details: 
https://travis-ci.org/apache/jackrabbit-oak/builds/84531217

--
sent by Jukka's Travis notification gateway


[Oak origin/1.0] Apache Jackrabbit Oak matrix - Build # 475 - Failure

2015-10-09 Thread Apache Jenkins Server
The Apache Jenkins build system has built Apache Jackrabbit Oak matrix (build 
#475)

Status: Failure

Check console output at 
https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/475/ to view 
the results.

Changes:
[thomasm] OAK-3500 Review padding for blobs collection (backport to 1.0)

 

Test results:
41 tests failed.
FAILED:  
org.apache.jackrabbit.oak.plugins.index.solr.index.SolrIndexEditorTest.testIndexedProperties

Error Message:
No such core: oak

Stack Trace:
org.apache.solr.common.SolrException: No such core: oak
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:112)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:118)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102)
at 
org.apache.jackrabbit.oak.plugins.index.solr.index.SolrIndexEditor.leave(SolrIndexEditor.java:131)
at 
org.apache.jackrabbit.oak.plugins.index.solr.index.SolrIndexEditorTest.testIndexedProperties(SolrIndexEditorTest.java:63)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)


FAILED:  
org.apache.jackrabbit.oak.plugins.index.solr.index.SolrIndexEditorTest.testIgnoredPropertiesNotIndexed

Error Message:
No such core: oak

Stack Trace:
org.apache.solr.common.SolrException: No such core: oak
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:112)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:118)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102)
at 
org.apache.jackrabbit.oak.plugins.index.solr.index.SolrIndexEditor.leave(SolrIndexEditor.java:131)
at 
org.apache.jackrabbit.oak.plugins.index.solr.index.SolrIndexEditorTest.testIgnoredPropertiesNotIndexed(SolrIndexEditorTest.java:95)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)

jackrabbit-oak build #6606: Broken

2015-10-09 Thread Travis CI
Build Update for apache/jackrabbit-oak
-

Build: #6606
Status: Broken

Duration: 1158 seconds
Commit: a9c02e5d2dddc56e613b3511cb1f4b460eb92f94 (trunk)
Author: Michael Dürig
Message: OAK-3290: Revision gc blocks repository shutdown
Signal repository shutdown to gain estimator and cleanup

git-svn-id: https://svn.apache.org/repos/asf/jackrabbit/oak/trunk@1707681 
13f79535-47bb-0310-9956-ffa450edef68

View the changeset: 
https://github.com/apache/jackrabbit-oak/compare/31350735ce17...a9c02e5d2ddd

View the full build log and details: 
https://travis-ci.org/apache/jackrabbit-oak/builds/84471798

--
sent by Jukka's Travis notification gateway


[Oak] Lucene copyonread OOM

2015-10-09 Thread Geoffroy Schneck
Hello Oak Experts,

On Oak 1.2.4, OOMs are thrown quite regularly by the copyonread
feature; see below.

However, the system it runs on has 32 GB in total, and the JVM -Xmx setting
is 12 GB. The JVM memory settings are the following:

-Xms12288m -Xmx12288m -XX:MaxMetaspaceSize=512m -XX:MaxPermSize=512M 
-XX:ReservedCodeCacheSize=96m

We have to assume the repository size is huge (it is unknown to me at the
moment).


-Where does the Lucene copyonread feature take its memory from? 
Off-heap memory or the JVM's allocated heap?

-Are there additional memory settings to increase for this specific 
feature? Or does one of the above seem insufficient?

Thanks,

09.10.2015 09:52:42.439 *ERROR* [pool-5-thread-28] 
org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker Failed to open 
Lucene index at /oak:index/lucene
java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:907)
at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:283)
at 
org.apache.lucene.store.MMapDirectory$MMapIndexInput.(MMapDirectory.java:228)
at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:195)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier$CopyOnReadDirectory$FileReference.openLocalInput(IndexCopier.java:382)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier$CopyOnReadDirectory.openInput(IndexCopier.java:227)
at 
org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsReader.(Lucene40StoredFieldsReader.java:82)
at 
org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsFormat.fieldsReader(Lucene40StoredFieldsFormat.java:91)
at 
org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:129)
at org.apache.lucene.index.SegmentReader.(SegmentReader.java:96)
at 
org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:62)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:66)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.IndexNode.(IndexNode.java:94)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.IndexNode.open(IndexNode.java:62)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker$1.leave(IndexTracker.java:98)
at 
org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:153)
at 
org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:487)
at 
org.apache.jackrabbit.oak.plugins.segment.MapRecord.compareBranch(MapRecord.java:565)
at 
org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:470)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583)
at 
org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
at 
org.apache.jackrabbit.oak.plugins.segment.MapRecord$3.childNodeChanged(MapRecord.java:444)
at 
org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:487)
at 
org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:436)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583)
at org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:52)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker.update(IndexTracker.java:108)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexProvider.contentChanged(LuceneIndexProvider.java:69)
at 
org.apache.jackrabbit.oak.spi.commit.BackgroundObserver$1$1.call(BackgroundObserver.java:125)
at 
org.apache.jackrabbit.oak.spi.commit.BackgroundObserver$1$1.call(BackgroundObserver.java:119)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904)
... 35 common frames omitted
09.10.2015 09:52:42.439 *WARN* [pool-5-thread-70] 
org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier Error occurred while 
copying file [segments_nw] from OakDirectory@5aaa07ea 
lockFactory=org.apache.lucene.store.NoLockFactory@b1401e5 to 
MMapDirectory@/srv/jas/data/mapcms/cma0/CQ/prod0/repository/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/1
 
lockFactory=NativeFSLockFactory@/srv/jas/data/mapcms/cma0/CQ/prod0/repository/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/1
java.io.FileNotFoundException: segments_nw
at 
org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory.openInput(OakDirectory.java:115)
at org.apache.lucene.store.Directory.copy(Directory.java:185)

Re: [Oak] Lucene copyonread OOM

2015-10-09 Thread Stephan Becker
Hi Geoffroy,

what OS is used (I assume SLES)? What OOM is thrown exactly?

The JVM settings IMHO seem sufficient, but the OS limits may not be. Lucene
now uses virtual memory (memory-mapped files) for some operations.

Can you check the ulimits:

ulimit -n

If it is not set to unlimited, try setting it to unlimited in
/etc/security/limits.conf, where the "as" parameter is the one that should
resolve the issue. You will see virtual memory usage increase
significantly when checking with top, but this is acceptable.
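The checks above can be sketched as follows, run as the user that owns the
repository process (the limits.conf entries and the user name "oak" are
illustrative, not from this thread):

```shell
# Show the per-process limits relevant to memory-mapped index files:
ulimit -n            # max open file descriptors
ulimit -v            # max virtual memory in KB; "unlimited" is the goal here

# Raise the address-space limit for the current shell only
# (may be denied if the hard limit is lower, hence the guard):
ulimit -v unlimited 2>/dev/null || true

# To make it permanent, append to /etc/security/limits.conf
# ("as" = address space, i.e. virtual memory; user "oak" is illustrative):
#   oak  soft  as  unlimited
#   oak  hard  as  unlimited
```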

On Fri, Oct 9, 2015 at 4:01 PM, Geoffroy Schneck  wrote:

> Hello Oak Experts,
>
>
>
> On an Oak 1.2.4 version, OOM are thrown quite regularly by the copyonread
> feature , see below.
>
>
>
> However, the system where it runs, has 32GB total, and JVM –Xmx settings
> set to 12G . The JVM memory settings are the following :
>
>
>
> -Xms12288m -Xmx12288m -XX:MaxMetaspaceSize=512m -XX:MaxPermSize=512M
> -XX:ReservedCodeCacheSize=96m
>
>
>
> We have to assume, the repository size is huge (but unknown to me at that
> moment).
>
>
>
> -Where does the Lucene copyonread feature use the memory from ?
> off-heap memory of JVM allocated memory ?
>
> -Are there additional memory settings to increase for this
> specific feature ? Or one of the above seems unsufficient ?
>
>
>
> Thanks,
>
>
>
> *09.10.2015 09:52:42.439 *ERROR* [pool-5-thread-28]
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker Failed to open
> Lucene index at /oak:index/lucene*
>
> [stack trace snipped]

Re: [Oak] Lucene copyonread OOM

2015-10-09 Thread Thomas Mueller
Hi,

Is this a 32-bit or 64-bit JVM?

Could you try

ulimit -v unlimited

See 
http://stackoverflow.com/questions/8892143/error-when-opening-a-lucene-index-map-failed
and possibly 
http://stackoverflow.com/questions/11683850/how-much-memory-could-vm-use-in-linux
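A quick sketch of how to check both points (the grep assumes a HotSpot-style
version banner; vm.max_map_count is an additional Linux limit that can cause
"Map failed" and is mentioned here as an assumption, not a confirmed cause):

```shell
# A 64-bit HotSpot JVM prints "64-Bit" in its version banner:
java -version 2>&1 | grep -i '64-bit' || echo "no 64-bit marker; possibly a 32-bit JVM"

# Show the current virtual-memory ulimit (KB, or "unlimited"):
ulimit -v

# On Linux, mmap failures can also come from the per-process map limit
# (the default is often 65530):
cat /proc/sys/vm/max_map_count 2>/dev/null || true
```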

Regards,
Thomas


From: Geoffroy Schneck
Date: Friday 9 October 2015 16:01
To: "oak-dev@jackrabbit.apache.org", DL-tech
Subject: [Oak] Lucene copyonread OOM

Hello Oak Experts,

On an Oak 1.2.4 version, OOM are thrown quite regularly by the copyonread 
feature , see below.

However, the system where it runs, has 32GB total, and JVM –Xmx settings set to 
12G . The JVM memory settings are the following :

-Xms12288m -Xmx12288m -XX:MaxMetaspaceSize=512m -XX:MaxPermSize=512M 
-XX:ReservedCodeCacheSize=96m

We have to assume, the repository size is huge (but unknown to me at that 
moment).


-Where does the Lucene copyonread feature use the memory from ? 
off-heap memory of JVM allocated memory ?

-Are there additional memory settings to increase for this specific 
feature ? Or one of the above seems unsufficient ?

Thanks,

09.10.2015 09:52:42.439 *ERROR* [pool-5-thread-28] 
org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker Failed to open 
Lucene index at /oak:index/lucene
[stack trace snipped]

jackrabbit-oak build #6607: Fixed

2015-10-09 Thread Travis CI
Build Update for apache/jackrabbit-oak
-

Build: #6607
Status: Fixed

Duration: 1482 seconds
Commit: 2727f9228ef7514f667d30bc77e93c514a00ab82 (trunk)
Author: Thomas Mueller
Message: OAK-3486 Wrong evaluation of NOT NOT clause (see OAK-3371)

git-svn-id: https://svn.apache.org/repos/asf/jackrabbit/oak/trunk@1707715 
13f79535-47bb-0310-9956-ffa450edef68

View the changeset: 
https://github.com/apache/jackrabbit-oak/compare/a9c02e5d2ddd...2727f9228ef7

View the full build log and details: 
https://travis-ci.org/apache/jackrabbit-oak/builds/84507285

--
sent by Jukka's Travis notification gateway


Re: [Oak] Lucene copyonread OOM

2015-10-09 Thread Stephan Becker
Hi Thomas, Thierry,

my bad, "ulimit -v" is what I meant. Set it in limits.conf with the "as"
parameter.

https://plumbr.eu/outofmemoryerror/unable-to-create-new-native-thread
describes a different OOM, but it is probably related.

On Fri, Oct 9, 2015 at 4:25 PM, Thomas Mueller  wrote:

> Hi,
>
> Is this a 32-bit or 64-bit JVM?
>
> Could you try
>
> ulimit -v unlimited
>
> See
> http://stackoverflow.com/questions/8892143/error-when-opening-a-lucene-index-map-failed
> and possibly
> http://stackoverflow.com/questions/11683850/how-much-memory-could-vm-use-in-linux
>
> Regards,
> Thomas
>
>
> From: Geoffroy Schneck 
> Date: Friday 9 October 2015 16:01
> To: "oak-dev@jackrabbit.apache.org" ,
> DL-tech 
> Subject: [Oak] Lucene copyonread OOM
>
> Hello Oak Experts,
>
>
>
> On an Oak 1.2.4 version, OOM are thrown quite regularly by the copyonread
> feature , see below.
>
>
>
> However, the system where it runs, has 32GB total, and JVM –Xmx settings
> set to 12G . The JVM memory settings are the following :
>
>
>
> -Xms12288m -Xmx12288m -XX:MaxMetaspaceSize=512m -XX:MaxPermSize=512M
> -XX:ReservedCodeCacheSize=96m
>
>
>
> We have to assume, the repository size is huge (but unknown to me at that
> moment).
>
>
>
> -Where does the Lucene copyonread feature use the memory from ?
> off-heap memory of JVM allocated memory ?
>
> -Are there additional memory settings to increase for this
> specific feature ? Or one of the above seems unsufficient ?
>
>
>
> Thanks,
>
>
>
> *09.10.2015 09:52:42.439 *ERROR* [pool-5-thread-28]
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker Failed to open
> Lucene index at /oak:index/lucene*
>
> [stack trace snipped]

[RESULT][VOTE] Release Apache Jackrabbit Oak 1.2.7

2015-10-09 Thread Davide Giannella
Hello Team,

the vote passes as follows:

+1 Alex Parvulescu
+1 Davide Giannella
+1 Julian Reschke
+1 Marcel Reutegger


Thanks for voting. I'll push the release out.

-- Davide



[ANNOUNCE] Apache Jackrabbit Oak 1.2.7 released

2015-10-09 Thread Davide Giannella
The Apache Jackrabbit community is pleased to announce the release of
Apache Jackrabbit Oak 1.2.7 The release is available for download at:

http://jackrabbit.apache.org/downloads.html

See the full release notes below for details about this release:

Release Notes -- Apache Jackrabbit Oak -- Version 1.2.7

Introduction


Jackrabbit Oak is a scalable, high-performance hierarchical content
repository designed for use as the foundation of modern world-class
web sites and other demanding content applications.

Apache Jackrabbit Oak 1.2.7 is a patch release that contains fixes and
improvements over Oak 1.2. Jackrabbit Oak 1.2.x releases are considered
stable and targeted for production use.

The Oak effort is a part of the Apache Jackrabbit project.
Apache Jackrabbit is a project of the Apache Software Foundation.

Changes in Oak 1.2.7


Sub-task

[OAK-3133] - Make compaction map more efficient for offline
compaction
[OAK-3359] - Compactor progress log
[OAK-3443] - Track the start time of mark in GC

Technical task

[OAK-3394] - RDBDocumentStore startup: log more DDL information
(incl. index information)
[OAK-3413] - RDBDocumentStorePerformanceTest leaks
PreparedStatements
[OAK-3414] - RDBDocumentStore: improve DB2 diagnostics
[OAK-3422] - RDBDocumentStore: improve index diagnostics
[OAK-3438] - RDBDocumentStoreDB: leaking resultset
[OAK-3445] - RDBDocumentStore: when generating SQL for queries,
leave out unneeded constraints
[OAK-3446] - RDBDocumentStore: update PostgresQL and MySQL JDBC
drivers

Bug

[OAK-2929] - Parent of unseen children must not be removable
[OAK-3201] - Use static references in SecurityProviderImpl for
composite services
[OAK-3311] - Potential NPE in syncAllExternalUsers() aborts the
process
[OAK-3318] - IndexRule not respecting inheritence based on mixins
[OAK-3371] - Wrong evaluation of NOT clause
[OAK-3388] - Inconsistent read in cluster with clock differences
[OAK-3396] - NPE during syncAllExternalUsers in
LdapIdentityProvider.createUser
[OAK-3412] - Node name having non space whitspace chars should not
be allowed
[OAK-3417] - oak-run OakFixture might set incorrect clusterIDs
[OAK-3418] - ClusterNodeInfo uses irrelevant network interface IDs
on Windows
[OAK-3419] - ClusterNodeInfo.createInstance fails to clean up
random entries
[OAK-3420] - DocumentNodeStoreService fails to restart
DocumentNodeStore
[OAK-3423] - RandomAuthorizableNodeName should not be part of the
default configuration of SecurityProviderImpl
[OAK-3431] - SecurityProviderRegistration should not be part of an
exported package
[OAK-3433] - Background update may create journal entry with
incorrect id
[OAK-3434] - Revert backwards-incompatible changes to
SecurityProviderImpl
[OAK-3456] - MongoBlobGCTest.gcLongRunningBlobCollection() fails

Improvement

[OAK-2948] - Expose DefaultSyncHandler
[OAK-3425] - Improve DocumentNodeStore startup/shutdown
diagnostics
[OAK-3435] - LastRevRecoveryAgent/MissingLastRevSeeker
improvements
[OAK-3441] - SecurityProviderImpl should not be an OSGi component
[OAK-3454] - Improve the logging capabilities of offline
compaction
[OAK-3455] - Improve conflict exception message

In addition to the above-mentioned changes, this release contains
all changes included up to the Apache Jackrabbit Oak 1.2.4 release.

For more detailed information about all the changes in this and other
Oak releases, please see the Oak issue tracker at

  https://issues.apache.org/jira/browse/OAK

Release Contents


This release consists of a single source archive packaged as a zip file.
The archive can be unpacked with the jar tool from your JDK installation.
See the README.md file for instructions on how to build this release.

The source archive is accompanied by SHA1 and MD5 checksums and a PGP
signature that you can use to verify the authenticity of your download.
The public key used for the PGP signature can be found at
http://www.apache.org/dist/jackrabbit/KEYS.
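As a sketch, the verification steps might look like the following (the
archive file name is an assumption for illustration; the last lines
demonstrate the checksum check on a stand-in file):

```shell
# 1. Import the release keys and verify the detached PGP signature
#    (KEYS from http://www.apache.org/dist/jackrabbit/KEYS):
#      gpg --import KEYS
#      gpg --verify jackrabbit-oak-1.2.7-src.zip.asc jackrabbit-oak-1.2.7-src.zip

# 2. Verify the SHA1 checksum; the published .sha1 file contains the bare digest:
#      echo "$(cat jackrabbit-oak-1.2.7-src.zip.sha1)  jackrabbit-oak-1.2.7-src.zip" | sha1sum -c -

# Demonstrating step 2 on a stand-in file:
printf 'demo' > oak-demo.zip
sha1sum oak-demo.zip | awk '{print $1}' > oak-demo.zip.sha1
echo "$(cat oak-demo.zip.sha1)  oak-demo.zip" | sha1sum -c -
rm -f oak-demo.zip oak-demo.zip.sha1
```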

About Apache Jackrabbit Oak
---

Jackrabbit Oak is a scalable, high-performance hierarchical content
repository designed for use as the foundation of modern world-class
web sites and other demanding content applications.

The Oak effort is a part of the Apache Jackrabbit project. 
Apache Jackrabbit is a project of the Apache Software Foundation.

For more information, visit http://jackrabbit.apache.org/oak

About The Apache Software Foundation


Established in 1999, The Apache Software Foundation provides organizational,
legal, and financial support for more than 140 freely-available,
collaboratively-developed Open Source projects. The pragmatic Apache License
enables individual and commercial users to easily deploy Apache software;
the