[jira] [Commented] (OAK-1639) MarkSweepGarbageCollector improvements

2014-03-31 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954963#comment-13954963
 ] 

Amit Jain commented on OAK-1639:


Have added a patch for points 1, 3 and 4 above.
Will take up the documentation improvement patch separately, so that it's easier to 
review.

 MarkSweepGarbageCollector improvements
 --

 Key: OAK-1639
 URL: https://issues.apache.org/jira/browse/OAK-1639
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: core
Reporter: Michael Dürig
Assignee: Chetan Mehrotra
 Fix For: 1.0


 While reviewing the patch for OAK-1582 I stumbled over a few issues with 
 {{MarkSweepGarbageCollector}} that need improving. First and foremost, 
 {{MarkSweepGarbageCollector}} needs better documentation. The current javadoc 
 falls short, as for many methods and arguments the semantics and invariants are 
 unclear. Furthermore:
 * {{MarkSweepGarbageCollector#init()}}: why an init method instead of passing the 
 respective arguments directly to the constructor? Also, when are clients 
 allowed to call init? Can I call it while a GC cycle is currently taking 
 place? 
 * Is there (do we need) protection against multiple GCs being initiated in 
 parallel?
 * {{MarkSweepGarbageCollector.Sweeper#run}} and 
 {{MarkSweepGarbageCollector.BlobIdRetriever#retrieve}} catch {{Exception}} 
 and call {{e.printStackTrace()}}. This needs improving.
 * {{MarkSweepGarbageCollector#sweep}} catches {{InterruptedException}} 
 and calls {{e.printStackTrace()}}. This is wrong, as at least the thread's 
 interrupted status needs to be set (see the sketch after this list).
 * {{DocumentNodeStore#getReferencedBlobsIterator}} is passed into 
 {{MarkSweepGarbageCollector#init()}} in {{DocumentNodeStoreService}}. Won't 
 this iterator be consumed after the first GC run, such that any further run 
 won't do anything?
 [~amitj_76], could you have a look at these points and create sub tasks as 
 required?
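To illustrate the interrupted-status point above, a minimal sketch of how such a catch block could look; {{doSweep}} is a placeholder and slf4j is assumed for logging, this is not the actual Oak code:

{code}
import java.util.concurrent.ExecutionException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class SweepTaskSketch implements Runnable {
    private static final Logger log = LoggerFactory.getLogger(SweepTaskSketch.class);

    @Override
    public void run() {
        try {
            doSweep();
        } catch (InterruptedException e) {
            // restore the interrupt status instead of swallowing it
            Thread.currentThread().interrupt();
            log.warn("Sweep interrupted", e);
        } catch (ExecutionException e) {
            // log with context instead of e.printStackTrace()
            log.error("Sweep failed", e);
        }
    }

    private void doSweep() throws InterruptedException, ExecutionException {
        // placeholder for the actual mark-sweep work
    }
}
{code}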



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-319) Similar (rep:similar) support

2014-03-31 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954971#comment-13954971
 ] 

Thomas Mueller commented on OAK-319:


The Jackrabbit 2.x documentation about rep:similar: 
http://wiki.apache.org/jackrabbit/SimilaritySearch

 Similar (rep:similar) support
 -

 Key: OAK-319
 URL: https://issues.apache.org/jira/browse/OAK-319
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: jcr, query
Reporter: Alex Parvulescu
Assignee: Thomas Mueller
Priority: Critical
 Fix For: 0.20


 Test class is: SimilarQueryTest
 Trace:
 {noformat}
 Caused by: java.text.ParseException: Query:
 //*[rep:similar(.(*), '/testroot')]; expected: rep:similar is not supported
   at 
 org.apache.jackrabbit.oak.query.XPathToSQL2Converter.getSyntaxError(XPathToSQL2Converter.java:963)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1647) AsyncIndexUpdateTask creating too many checkpoint

2014-03-31 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-1647:


 Summary: AsyncIndexUpdateTask creating too many checkpoint
 Key: OAK-1647
 URL: https://issues.apache.org/jira/browse/OAK-1647
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 0.20


AsyncIndexUpdateTask currently creates a checkpoint [1] before proceeding to 
index. Even if the indexer is already running, the checkpoint still gets created, 
because that check is performed later. 

Would it be possible to create the checkpoint only after checking whether the 
async index update is already running? 

[1] 
https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/AsyncIndexUpdate.java#L117



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1647) AsyncIndexUpdateTask creating too many checkpoint

2014-03-31 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954972#comment-13954972
 ] 

Chetan Mehrotra commented on OAK-1647:
--

[~alexparvulescu] Can you have a look?

With the current frequency of 5 seconds, one checkpoint entry would be created for 
every run, i.e. 720 entries per cluster node per hour. And they can only be removed 
after 1 hour (the default expiry time).

If such a change is not possible then this would cause issues, depending on how 
checkpoints are stored. In the case of DocumentNodeStore, each checkpoint entry is 
currently saved as a sub property of a single Mongo document. A Mongo document 
can store 16 MB of data, so it should be able to hold a fair amount.

This was done with the assumption that checkpoints are not created in large 
numbers :(. That implementation can be changed to, say, store checkpoint data in an 
independent collection. 
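For illustration, the arithmetic behind the 720 figure, using the 5-second interval and 1-hour default expiry quoted above:

{code}
public class CheckpointMath {
    public static void main(String[] args) {
        int runIntervalSec = 5;              // async index update frequency
        int expirySec = 60 * 60;             // default checkpoint lifetime (1 hour)
        int createdPerHour = 3600 / runIntervalSec;       // 720 per cluster node per hour
        int liveAtAnyTime = expirySec / runIntervalSec;   // up to 720 not-yet-expired entries
        System.out.println(createdPerHour + " created/hour, up to "
                + liveAtAnyTime + " live per cluster node");
    }
}
{code}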


 AsyncIndexUpdateTask creating too many checkpoint
 -

 Key: OAK-1647
 URL: https://issues.apache.org/jira/browse/OAK-1647
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 0.20


 AsyncIndexUpdateTask currently creates a checkpoint [1] before proceeding to 
 index. Even if the indexer is already running, the checkpoint still gets 
 created, because that check is performed later. 
 Would it be possible to create the checkpoint only after checking whether the 
 async index update is already running? 
 [1] 
 https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/AsyncIndexUpdate.java#L117



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1647) AsyncIndexUpdateTask creating too many checkpoint

2014-03-31 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-1647:
-

Priority: Minor  (was: Major)

 AsyncIndexUpdateTask creating too many checkpoint
 -

 Key: OAK-1647
 URL: https://issues.apache.org/jira/browse/OAK-1647
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 0.20


 AsyncIndexUpdateTask currently creates a checkpoint [1] before proceeding to 
 index. Even if the indexer is already running, the checkpoint still gets 
 created, because that check is performed later. 
 Would it be possible to create the checkpoint only after checking whether the 
 async index update is already running? 
 [1] 
 https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/AsyncIndexUpdate.java#L117



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-319) Similar (rep:similar) support

2014-03-31 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954980#comment-13954980
 ] 

Thomas Mueller commented on OAK-319:


MoreLikeThis support for Oak is described in OAK-1286, where we wrote that this 
feature can be supported with native queries. OAK-1325 is about native queries.

So possibly, rep:similar could be implemented in Oak by converting the 
rep:similar condition to a native Lucene condition. I just wonder what the 
condition would need to look like.
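For reference, a sketch of how such a query is issued through the plain JCR API; the XPath form follows the Jackrabbit 2.x wiki page linked above, and the path argument is just an example:

{code}
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

public class SimilarQuerySketch {
    // 'session' is assumed to be a live JCR session
    public static QueryResult findSimilar(Session session) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query q = qm.createQuery(
                "//element(*, nt:resource)[rep:similar(., '/testroot/node.txt/jcr:content')]",
                Query.XPATH);
        return q.execute();
    }
}
{code}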

 Similar (rep:similar) support
 -

 Key: OAK-319
 URL: https://issues.apache.org/jira/browse/OAK-319
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: jcr, query
Reporter: Alex Parvulescu
Assignee: Thomas Mueller
Priority: Critical
 Fix For: 0.20


 Test class is: SimilarQueryTest
 Trace:
 {noformat}
 Caused by: java.text.ParseException: Query:
 //*[rep:similar(.(*), '/testroot')]; expected: rep:similar is not supported
   at 
 org.apache.jackrabbit.oak.query.XPathToSQL2Converter.getSyntaxError(XPathToSQL2Converter.java:963)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1639) MarkSweepGarbageCollector improvements

2014-03-31 Thread Michael Dürig (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954996#comment-13954996
 ] 

Michael Dürig commented on OAK-1639:


bq. {{MarkSweepGarbageCollector}} is not thread-safe 

I think this should go into the class Javadoc.
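A sketch of what such a class Javadoc note could look like (illustrative wording only):

{code}
/**
 * Mark and sweep garbage collector for blobs.
 * <p>
 * This class is <em>not</em> thread-safe: callers must make sure that at most
 * one GC cycle is running at any given time.
 */
public class MarkSweepGarbageCollector {
    // ...
}
{code}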

 MarkSweepGarbageCollector improvements
 --

 Key: OAK-1639
 URL: https://issues.apache.org/jira/browse/OAK-1639
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: core
Reporter: Michael Dürig
Assignee: Chetan Mehrotra
 Fix For: 1.0


 While reviewing the patch for OAK-1582 I stumbled over a few issues with 
 {{MarkSweepGarbageCollector}} that need improving. First and foremost, 
 {{MarkSweepGarbageCollector}} needs better documentation. The current javadoc 
 falls short, as for many methods and arguments the semantics and invariants are 
 unclear. Furthermore:
 * {{MarkSweepGarbageCollector#init()}}: why an init method instead of passing the 
 respective arguments directly to the constructor? Also, when are clients 
 allowed to call init? Can I call it while a GC cycle is currently taking 
 place? 
 * Is there (do we need) protection against multiple GCs being initiated in 
 parallel?
 * {{MarkSweepGarbageCollector.Sweeper#run}} and 
 {{MarkSweepGarbageCollector.BlobIdRetriever#retrieve}} catch {{Exception}} 
 and call {{e.printStackTrace()}}. This needs improving.
 * {{MarkSweepGarbageCollector#sweep}} catches {{InterruptedException}} 
 and calls {{e.printStackTrace()}}. This is wrong, as at least the thread's 
 interrupted status needs to be set.
 * {{DocumentNodeStore#getReferencedBlobsIterator}} is passed into 
 {{MarkSweepGarbageCollector#init()}} in {{DocumentNodeStoreService}}. Won't 
 this iterator be consumed after the first GC run, such that any further run 
 won't do anything?
 [~amitj_76], could you have a look at these points and create sub tasks as 
 required?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1647) AsyncIndexUpdateTask creating too many checkpoint

2014-03-31 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955024#comment-13955024
 ] 

Alex Parvulescu commented on OAK-1647:
--

bq. Would it be possible to create checkpoint only after checking if the async 
index update is already running or not?
Yes, we could move the verification before the checkpoint.

bq. This was done with the assumption that checkpoints are not created in large 
numbers
Interesting. With OAK-1456 we will double the number of checkpoints, which will 
make matters even worse. Should we maybe make the lifetime window smaller? 
It runs every 5 seconds, so cutting the window in half would not hurt, I guess.

 AsyncIndexUpdateTask creating too many checkpoint
 -

 Key: OAK-1647
 URL: https://issues.apache.org/jira/browse/OAK-1647
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 0.20


 AsyncIndexUpdateTask currently creates a checkpoint [1] before proceeding to 
 index. Even if the indexer is already running, the checkpoint still gets 
 created, because that check is performed later. 
 Would it be possible to create the checkpoint only after checking whether the 
 async index update is already running? 
 [1] 
 https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/AsyncIndexUpdate.java#L117



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1636) Lucene and Solr index: support jcr:excerpt and jcr:score

2014-03-31 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955027#comment-13955027
 ] 

Alex Parvulescu commented on OAK-1636:
--

Isn't this the same as OAK-318?

 Lucene and Solr index: support jcr:excerpt and jcr:score
 

 Key: OAK-1636
 URL: https://issues.apache.org/jira/browse/OAK-1636
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: query
Reporter: Thomas Mueller
Priority: Critical
 Fix For: 0.20


 OAK-262 implements support for pseudo-properties such as jcr:score in the 
 query engine. It also implements support for jcr:score in the Lucene index.
 What is still missing is support for jcr:score in the Solr index, and 
 support for rep:excerpt with both Lucene and Solr.
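For context, a sketch of the kind of query this is about: selecting the {{jcr:score}} pseudo-property (already handled by OAK-262 for Lucene) through the plain JCR API; a {{rep:excerpt(.)}} column would be the still-missing piece:

{code}
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryResult;

public class ScoreQuerySketch {
    // 'session' is assumed to be a live JCR session
    public static QueryResult fullTextWithScore(Session session) throws Exception {
        Query q = session.getWorkspace().getQueryManager().createQuery(
                "SELECT [jcr:path], [jcr:score] FROM [nt:base] "
                        + "WHERE CONTAINS(*, 'oak') ORDER BY [jcr:score] DESC",
                Query.JCR_SQL2);
        return q.execute();
    }
}
{code}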



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1453) MongoMK failover support for replica sets (esp. shards)

2014-03-31 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955028#comment-13955028
 ] 

Stefan Egli commented on OAK-1453:
--

OAK-1649 reports a problem with a save following a replica crash.

 MongoMK failover support for replica sets (esp. shards)
 ---

 Key: OAK-1453
 URL: https://issues.apache.org/jira/browse/OAK-1453
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Michael Marth
Assignee: Thomas Mueller
Priority: Critical
  Labels: production, resilience
 Fix For: 0.20


 With OAK-759 we have introduced replica support in MongoMK. I think we still 
 need to address the resilience of failover from primary to secondary:
 Consider a case where Oak writes to the primary. Replication to the secondary is 
 ongoing. During that period the primary goes down and the secondary becomes 
 primary. There could be some half-replicated MVCC revisions, which need to 
 be either discarded or ignored after the failover.
 This might not be an issue if there is only one shard, as the commit root is 
 written last (and replicated last).
 But with 2 shards the replication state of these 2 shards could be 
 inconsistent. Oak needs to handle such a situation without falling over.
 If we can detect a Mongo failover we could query Mongo for which revisions are 
 fully replicated to the new primary and discard the potentially 
 half-replicated revisions.
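As a rough illustration of the detection half of the last point, the standard {{replSetGetStatus}} admin command can tell which member is currently primary (sketch against the 2.x Mongo Java driver, assuming access to the admin database; the revision check itself is not covered here):

{code}
import com.mongodb.BasicDBList;
import com.mongodb.BasicDBObject;
import com.mongodb.CommandResult;
import com.mongodb.DB;
import com.mongodb.MongoClient;

public class ReplicaStatusSketch {
    public static String currentPrimary(MongoClient client) {
        DB admin = client.getDB("admin");
        CommandResult status = admin.command(new BasicDBObject("replSetGetStatus", 1));
        BasicDBList members = (BasicDBList) status.get("members");
        for (Object o : members) {
            BasicDBObject member = (BasicDBObject) o;
            if ("PRIMARY".equals(member.getString("stateStr"))) {
                return member.getString("name"); // host:port of the current primary
            }
        }
        return null; // no primary elected at the moment
    }
}
{code}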



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1649) NamespaceException: OakNamespace0005 on save, after replica crash

2014-03-31 Thread Stefan Egli (JIRA)
Stefan Egli created OAK-1649:


 Summary: NamespaceException: OakNamespace0005 on save, after 
replica crash
 Key: OAK-1649
 URL: https://issues.apache.org/jira/browse/OAK-1649
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 0.19
Reporter: Stefan Egli
 Fix For: 0.20


After running a test that produces a couple of thousand nodes, and overwrites 
the same properties a couple of thousand times, then crashing the replica primary, 
the exception below occurs.

The exception can be reproduced on the db and with the test case I'll attach in 
a minute.

{code}javax.jcr.NamespaceException: OakNamespace0005: Namespace modification 
not allowed: rep:nsdata
at 
org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:227)
at 
org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:212)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:679)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:553)
at 
org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:417)
at 
org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:1)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
at 
org.apache.jackrabbit.oak.jcr.session.SessionImpl.perform(SessionImpl.java:127)
at 
org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:414)
at 
org.apache.jackrabbit.oak.run.OverwritePropertyTest.testReplicaCrashResilience(OverwritePropertyTest.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at 
org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at 
org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: 
OakNamespace0005: Namespace modification not allowed: rep:nsdata
at 
org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.modificationNotAllowed(NamespaceEditor.java:122)
at 
org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.childNodeChanged(NamespaceEditor.java:140)
at 
org.apache.jackrabbit.oak.spi.commit.CompositeEditor.childNodeChanged(CompositeEditor.java:122)
at 
org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:143)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.dispatch(DocumentNodeState.java:405)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.compareAgainstBaseState(DocumentNodeState.java:245)
at 
org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.dispatch(DocumentNodeState.java:405)
at 

[jira] [Updated] (OAK-1649) NamespaceException: OakNamespace0005 on save, after replica crash

2014-03-31 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-1649:
-

Attachment: OverwritePropertyTest.java

The exception can be reproduced with the attached OverwritePropertyTest on the 
.tar.gzipped db downloadable at [0].

[0] https://www.dropbox.com/s/6boluui1b917h20/resilienceLargeTxTest.tar.gz

 NamespaceException: OakNamespace0005 on save, after replica crash
 -

 Key: OAK-1649
 URL: https://issues.apache.org/jira/browse/OAK-1649
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 0.19
Reporter: Stefan Egli
 Fix For: 0.20

 Attachments: OverwritePropertyTest.java


 After running a test that produces a couple of thousand nodes, and overwrites 
 the same properties a couple of thousand times, then crashing the replica primary, 
 the exception below occurs.
 The exception can be reproduced on the db and with the test case I'll attach 
 in a minute.
 {code}javax.jcr.NamespaceException: OakNamespace0005: Namespace modification 
 not allowed: rep:nsdata
   at 
 org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:227)
   at 
 org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:212)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:679)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:553)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:417)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:1)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl.perform(SessionImpl.java:127)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:414)
   at 
 org.apache.jackrabbit.oak.run.OverwritePropertyTest.testReplicaCrashResilience(OverwritePropertyTest.java:74)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: 
 OakNamespace0005: Namespace modification not allowed: rep:nsdata
   at 
 org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.modificationNotAllowed(NamespaceEditor.java:122)
   at 
 org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.childNodeChanged(NamespaceEditor.java:140)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeEditor.childNodeChanged(CompositeEditor.java:122)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:143)
   at 
 

[jira] [Updated] (OAK-1451) Expose index size

2014-03-31 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-1451:
-

Fix Version/s: (was: 0.20)
   1.1

 Expose index size
 -

 Key: OAK-1451
 URL: https://issues.apache.org/jira/browse/OAK-1451
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Michael Marth
Assignee: Alex Parvulescu
Priority: Minor
  Labels: production, resilience
 Fix For: 1.1


 At the moment some MKs' disk needs come largely from the indexes. Maybe we can 
 do something about this, but in the meantime it would be helpful if we could 
 expose the index sizes (number of indexed nodes) via JMX so that they can be 
 easily monitored.
 This would also be helpful to see at which point an index becomes useless (if 
 the majority of content nodes are indexed, one might as well not have an index).
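A minimal sketch of what exposing such a counter over JMX could look like; the MBean interface, attribute and object name below are illustrative, not Oak's actual API:

{code}
// IndexSizeMBean.java
public interface IndexSizeMBean {
    String getIndexName();
    long getIndexedNodeCount();
}

// IndexSize.java (standard MBean: implementation name = interface name minus "MBean")
import java.lang.management.ManagementFactory;
import javax.management.ObjectName;

public class IndexSize implements IndexSizeMBean {
    private final String name;
    private volatile long count;

    public IndexSize(String name) { this.name = name; }

    public String getIndexName() { return name; }
    public long getIndexedNodeCount() { return count; }
    public void setIndexedNodeCount(long count) { this.count = count; }

    public static void register(IndexSize bean) throws Exception {
        ManagementFactory.getPlatformMBeanServer().registerMBean(bean, new ObjectName(
                "org.apache.jackrabbit.oak:type=IndexSize,name=" + bean.getIndexName()));
    }
}
{code}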



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1649) NamespaceException: OakNamespace0005 on save, after replica crash

2014-03-31 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-1649:
-

Environment: 0.20-SNAPSHOT as of March 31

 NamespaceException: OakNamespace0005 on save, after replica crash
 -

 Key: OAK-1649
 URL: https://issues.apache.org/jira/browse/OAK-1649
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 0.19
 Environment: 0.20-SNAPSHOT as of March 31
Reporter: Stefan Egli
 Fix For: 0.20

 Attachments: OverwritePropertyTest.java


 After running a test that produces a couple of thousand nodes, and overwrites 
 the same properties a couple of thousand times, then crashing the replica primary, 
 the exception below occurs.
 The exception can be reproduced on the db and with the test case I'll attach 
 in a minute.
 {code}javax.jcr.NamespaceException: OakNamespace0005: Namespace modification 
 not allowed: rep:nsdata
   at 
 org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:227)
   at 
 org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:212)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:679)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:553)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:417)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:1)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl.perform(SessionImpl.java:127)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:414)
   at 
 org.apache.jackrabbit.oak.run.OverwritePropertyTest.testReplicaCrashResilience(OverwritePropertyTest.java:74)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: 
 OakNamespace0005: Namespace modification not allowed: rep:nsdata
   at 
 org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.modificationNotAllowed(NamespaceEditor.java:122)
   at 
 org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.childNodeChanged(NamespaceEditor.java:140)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeEditor.childNodeChanged(CompositeEditor.java:122)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:143)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.dispatch(DocumentNodeState.java:405)
   at 
 

[jira] [Resolved] (OAK-1647) AsyncIndexUpdateTask creating too many checkpoint

2014-03-31 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-1647.
--

Resolution: Fixed

Moved the checkpoint creation operation after the check for a concurrent run in 
http://svn.apache.org/r1583269

Resolving for now. If it poses a problem the issue can be looked into again.
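A minimal sketch of the reordering (check for a concurrent run first, create the checkpoint only if the run actually proceeds); the flag, the {{CheckpointStore}} interface and the method names are placeholders, not the actual {{AsyncIndexUpdate}} code:

{code}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

class AsyncUpdateSketch implements Runnable {
    private final AtomicBoolean running = new AtomicBoolean();
    private final CheckpointStore store;   // placeholder for the NodeStore checkpoint API

    AsyncUpdateSketch(CheckpointStore store) { this.store = store; }

    @Override
    public void run() {
        // concurrent-run check first ...
        if (!running.compareAndSet(false, true)) {
            return; // another async index update is still in progress
        }
        try {
            // ... checkpoint creation only afterwards, so skipped runs create no checkpoint
            String checkpoint = store.checkpoint(TimeUnit.HOURS.toMillis(1));
            index(checkpoint);
        } finally {
            running.set(false);
        }
    }

    private void index(String checkpoint) { /* placeholder for the actual indexing */ }

    interface CheckpointStore {
        String checkpoint(long lifetimeMillis);
    }
}
{code}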

 AsyncIndexUpdateTask creating too many checkpoint
 -

 Key: OAK-1647
 URL: https://issues.apache.org/jira/browse/OAK-1647
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 0.20


 AsyncIndexUpdateTask currently creates a checkpoint [1] before proceeding to 
 index. Even if the indexer is already running, the checkpoint still gets 
 created, because that check is performed later. 
 Would it be possible to create the checkpoint only after checking whether the 
 async index update is already running? 
 [1] 
 https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/AsyncIndexUpdate.java#L117



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (OAK-1646) MarkSweepGarbageCollector - Improvements in exception handling and initialization

2014-03-31 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra reassigned OAK-1646:


Assignee: Chetan Mehrotra

 MarkSweepGarbageCollector - Improvements in exception handling and 
 initialization
 -

 Key: OAK-1646
 URL: https://issues.apache.org/jira/browse/OAK-1646
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core
Reporter: Amit Jain
Assignee: Chetan Mehrotra
 Fix For: 0.20

 Attachments: OAK-1646.patch


 Replace the init() methods, which are not needed, with constructors.
 Improve exception handling.
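A rough sketch of the shape of that change (constructor injection instead of a separate {{init()}} call); the types here are illustrative, not the patch itself:

{code}
import java.util.Iterator;
import java.util.concurrent.Executor;

// Illustrative only: collaborators are passed through the constructor, no init() step
class GarbageCollectorSketch {
    private final Executor executor;
    private final Iterable<String> referencedBlobs; // Iterable, so each GC run gets a fresh Iterator

    GarbageCollectorSketch(Executor executor, Iterable<String> referencedBlobs) {
        this.executor = executor;
        this.referencedBlobs = referencedBlobs;
    }

    void collectGarbage() {
        Iterator<String> refs = referencedBlobs.iterator(); // fresh iterator per run
        // mark and sweep using 'refs' ...
    }
}
{code}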



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1056) Transient changes contributed by commit hooks are kept in memory

2014-03-31 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955052#comment-13955052
 ] 

Marcel Reutegger commented on OAK-1056:
---

Michael and I discussed this issue offline this morning.

In general the requirement is to create a new and independent builder 
whenever NodeState.builder() is called. Most of the time, however, the full 
potential of this requirement is not used by client code. E.g. CommitHook 
implementations do create a builder from the after state, but effectively they 
are all based on the state returned by the previous hook, thus each builder is 
simply based on the current branch head. In general this may not be the case, 
and in fact a commit hook may even create a builder from a before state and use 
it in the builder created from the after state.

For the latter case, the implementation could still return a MemoryNodeBuilder, 
but for the cases where a builder is obtained for the current branch head, the 
implementation could return a builder instance which persists to the branch. 
It gets a bit tricky when there are multiple such builders based on some 
branch state. Only a builder with a base state equal to the head of the 
persisted branch must be allowed to persist further changes, and it may even be 
desirable to immediately fail an attempt to work with a builder which is no longer 
based on the head of a branch.
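A conceptual sketch of the guard described in the last paragraph, with {{Branch}} and {{State}} as simplified placeholders rather than Oak's actual types:

{code}
// Conceptual sketch only; 'Branch' and 'State' are simplified placeholders
class PersistingBuilderSketch {
    private final Branch branch;
    private State base;      // the branch head this builder was created from

    PersistingBuilderSketch(Branch branch) {
        this.branch = branch;
        this.base = branch.head();
    }

    void persist(State newState) {
        if (!branch.head().equals(base)) {
            // another builder moved the branch head; fail fast instead of persisting stale changes
            throw new IllegalStateException("builder is no longer based on the branch head");
        }
        branch.setHead(newState);
        base = newState;
    }

    interface Branch {
        State head();
        void setHead(State state);
    }

    interface State {
    }
}
{code}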

 Transient changes contributed by commit hooks are kept in memory
 

 Key: OAK-1056
 URL: https://issues.apache.org/jira/browse/OAK-1056
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Michael Dürig
 Fix For: 1.0


 With the {{KernelNodeStore}}, transient changes contributed by commit hooks 
 are currently kept in memory instead of being written ahead to the private 
 branch. The reason for this is that we need to be able to undo such changes 
 if a commit hook later in the process fails the commit. Doing this 
 efficiently would need some support from the persistence layer: either the 
 ability to branch from a branch or the ability to roll back to a previous 
 state. 
 See the TODOs in {{KernelNodeState.builder()}}, which returns a 
 MemoryNodeBuilder (instead of a KernelNodeBuilder) when the current state is 
 on a branch. This is the workaround to avoid branching from a branch and has 
 the effect that commit hooks currently run against a MemoryNodeBuilder, which 
 limits the amount of changes commit hooks can add.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1650) NPE and MicroKernelException: The node .. does not exist, on replica primary crash during save

2014-03-31 Thread Stefan Egli (JIRA)
Stefan Egli created OAK-1650:


 Summary: NPE and MicroKernelException: The node .. does not exist, 
on replica primary crash during save
 Key: OAK-1650
 URL: https://issues.apache.org/jira/browse/OAK-1650
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 0.19
 Environment: 0.20-SNAPSHOT as of March 31
Reporter: Stefan Egli
 Fix For: 0.20


When crashing the replica-primary while saving a large transaction, the 
following two exceptions occur. Had this twice in a row, thus 'sort of' 
reproducible. I'll attach the test case in a minute.

{code}Mar 31, 2014 11:49:04 AM com.mongodb.DBTCPConnector setMasterAddress
WARNING: Primary switching from localhost/127.0.0.1:12321 to 
localhost/127.0.0.1:12322
Writer: Created level1 node: 
Node[NodeDelegate{tree=/replicaCrashLargeTxTest-1396259321921/2: { 
jcr:primaryType = nt:unstructured}}]
org.apache.jackrabbit.mk.api.MicroKernelException: 
java.lang.NullPointerException
at 
org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.findAndModify(MongoDocumentStore.java:483)
at 
org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.createOrUpdate(MongoDocumentStore.java:495)
at 
org.apache.jackrabbit.oak.plugins.document.Commit.createOrUpdateNode(Commit.java:449)
at 
org.apache.jackrabbit.oak.plugins.document.Commit.applyToDocumentStore(Commit.java:335)
at 
org.apache.jackrabbit.oak.plugins.document.Commit.prepare(Commit.java:212)
at 
org.apache.jackrabbit.oak.plugins.document.Commit.apply(Commit.java:181)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:172)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:85)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:1)
at 
org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$Persisted.persistTransientHead(AbstractNodeStoreBranch.java:598)
at 
org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$Persisted.setRoot(AbstractNodeStoreBranch.java:547)
at 
org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.setRoot(AbstractNodeStoreBranch.java:208)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.purge(DocumentRootBuilder.java:188)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.updated(DocumentRootBuilder.java:99)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.updated(MemoryNodeBuilder.java:205)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:489)
at 
org.apache.jackrabbit.oak.core.SecureNodeBuilder.setProperty(SecureNodeBuilder.java:260)
at 
org.apache.jackrabbit.oak.core.MutableTree.updateChildOrder(MutableTree.java:337)
at 
org.apache.jackrabbit.oak.core.MutableTree.setOrderableChildren(MutableTree.java:220)
at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:207)
at 
org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:286)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:1)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
at 
org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:113)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:253)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:238)
at 
org.apache.jackrabbit.oak.run.ReplicaCrashResilienceLargeTxTest$1.run(ReplicaCrashResilienceLargeTxTest.java:115)
at java.lang.Thread.run(Thread.java:695)
Caused by: java.lang.NullPointerException
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:192)
at 
org.apache.jackrabbit.oak.plugins.document.util.StringValue.init(StringValue.java:35)
at 
org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.addToCache(MongoDocumentStore.java:810)
at 
org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.applyToCache(MongoDocumentStore.java:765)
at 
org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.findAndModify(MongoDocumentStore.java:477)
... 28 more
{code}


and:


{code}Exception in thread Thread-5 
org.apache.jackrabbit.mk.api.MicroKernelException: The node 
1:/replicaCrashLargeTxTest-1396259321921 does not exist or is already deleted, 
before
r145178ad3bd-0-1; document:
{_id=1:/replicaCrashLargeTxTest-1396259321921,
_modified=1396259345, :childOrder={},

[jira] [Commented] (OAK-1453) MongoMK failover support for replica sets (esp. shards)

2014-03-31 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955063#comment-13955063
 ] 

Stefan Egli commented on OAK-1453:
--

OAK-1650 reports two kinds of exceptions occurring when crashing the 
replica-primary during a save of a large transaction.

 MongoMK failover support for replica sets (esp. shards)
 ---

 Key: OAK-1453
 URL: https://issues.apache.org/jira/browse/OAK-1453
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Michael Marth
Assignee: Thomas Mueller
Priority: Critical
  Labels: production, resilience
 Fix For: 0.20


 With OAK-759 we have introduced replica support in MongoMK. I think we still 
 need to address the resilience of failover from primary to secondary:
 Consider a case where Oak writes to the primary. Replication to the secondary is 
 ongoing. During that period the primary goes down and the secondary becomes 
 primary. There could be some half-replicated MVCC revisions, which need to 
 be either discarded or ignored after the failover.
 This might not be an issue if there is only one shard, as the commit root is 
 written last (and replicated last).
 But with 2 shards the replication state of these 2 shards could be 
 inconsistent. Oak needs to handle such a situation without falling over.
 If we can detect a Mongo failover we could query Mongo for which revisions are 
 fully replicated to the new primary and discard the potentially 
 half-replicated revisions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1650) NPE and MicroKernelException: The node .. does not exist, on replica primary crash during save

2014-03-31 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-1650:
-

Attachment: ReplicaCrashResilienceLargeTxTest.java

The reported exception can be reproduced by running 
ReplicaCrashResilienceLargeTxTest and crashing the replica-primary while 
'saving...' is going on.

The exceptions occur right after the Mongo client switches to the new primary.

 NPE and MicroKernelException: The node .. does not exist, on replica primary 
 crash during save
 --

 Key: OAK-1650
 URL: https://issues.apache.org/jira/browse/OAK-1650
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 0.19
 Environment: 0.20-SNAPSHOT as of March 31
Reporter: Stefan Egli
 Fix For: 0.20

 Attachments: ReplicaCrashResilienceLargeTxTest.java


 When crashing the replica-primary while saving a large transaction, the 
 following two exceptions occur. Had this twice in a row, thus 'sort of' 
 reproducible. I'll attach the test case in a minute.
 {code}Mar 31, 2014 11:49:04 AM com.mongodb.DBTCPConnector setMasterAddress
 WARNING: Primary switching from localhost/127.0.0.1:12321 to 
 localhost/127.0.0.1:12322
 Writer: Created level1 node: 
 Node[NodeDelegate{tree=/replicaCrashLargeTxTest-1396259321921/2: { 
 jcr:primaryType = nt:unstructured}}]
 org.apache.jackrabbit.mk.api.MicroKernelException: 
 java.lang.NullPointerException
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.findAndModify(MongoDocumentStore.java:483)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.createOrUpdate(MongoDocumentStore.java:495)
   at 
 org.apache.jackrabbit.oak.plugins.document.Commit.createOrUpdateNode(Commit.java:449)
   at 
 org.apache.jackrabbit.oak.plugins.document.Commit.applyToDocumentStore(Commit.java:335)
   at 
 org.apache.jackrabbit.oak.plugins.document.Commit.prepare(Commit.java:212)
   at 
 org.apache.jackrabbit.oak.plugins.document.Commit.apply(Commit.java:181)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:172)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:85)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:1)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$Persisted.persistTransientHead(AbstractNodeStoreBranch.java:598)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$Persisted.setRoot(AbstractNodeStoreBranch.java:547)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.setRoot(AbstractNodeStoreBranch.java:208)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.purge(DocumentRootBuilder.java:188)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.updated(DocumentRootBuilder.java:99)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.updated(MemoryNodeBuilder.java:205)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:489)
   at 
 org.apache.jackrabbit.oak.core.SecureNodeBuilder.setProperty(SecureNodeBuilder.java:260)
   at 
 org.apache.jackrabbit.oak.core.MutableTree.updateChildOrder(MutableTree.java:337)
   at 
 org.apache.jackrabbit.oak.core.MutableTree.setOrderableChildren(MutableTree.java:220)
   at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:207)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:286)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:1)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
   at 
 org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:113)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:253)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:238)
   at 
 org.apache.jackrabbit.oak.run.ReplicaCrashResilienceLargeTxTest$1.run(ReplicaCrashResilienceLargeTxTest.java:115)
   at java.lang.Thread.run(Thread.java:695)
 Caused by: java.lang.NullPointerException
   at 
 com.google.common.base.Preconditions.checkNotNull(Preconditions.java:192)
   at 
 org.apache.jackrabbit.oak.plugins.document.util.StringValue.init(StringValue.java:35)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.addToCache(MongoDocumentStore.java:810)
   at 
 

[jira] [Resolved] (OAK-1646) MarkSweepGarbageCollector - Improvements in exception handling and initialization

2014-03-31 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-1646.
--

Resolution: Fixed

Applied the patch in http://svn.apache.org/r1583282

 MarkSweepGarbageCollector - Improvements in exception handling and 
 initialization
 -

 Key: OAK-1646
 URL: https://issues.apache.org/jira/browse/OAK-1646
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core
Reporter: Amit Jain
Assignee: Chetan Mehrotra
 Fix For: 0.20

 Attachments: OAK-1646.patch


 Replace the init() methods, which are not needed, with constructors.
 Improve exception handling.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1489) ValueImpl should implement JackrabbitValue

2014-03-31 Thread Michael Dürig (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-1489.


   Resolution: Fixed
Fix Version/s: 0.20
 Assignee: Michael Dürig

Applied patch at http://svn.apache.org/r1583285. 

 ValueImpl should implement JackrabbitValue
 --

 Key: OAK-1489
 URL: https://issues.apache.org/jira/browse/OAK-1489
 Project: Jackrabbit Oak
  Issue Type: Wish
  Components: core
Reporter: Vikas Saurabh
Assignee: Michael Dürig
 Fix For: 0.20

 Attachments: OAK-1489.patch


 For some reason, I need to get to the datastore id. With JR2 (in a Sling env) I 
 could do:
 {code}
 String id = null;
 Value v = resource.adaptTo(Value.class);
 if (v instanceof JackrabbitValue) {
     id = ((JackrabbitValue) v).getContentIdentity();
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1649) NamespaceException: OakNamespace0005 on save, after replica crash

2014-03-31 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1649:
---

Assignee: Chetan Mehrotra

 NamespaceException: OakNamespace0005 on save, after replica crash
 -

 Key: OAK-1649
 URL: https://issues.apache.org/jira/browse/OAK-1649
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 0.19
 Environment: 0.20-SNAPSHOT as of March 31
Reporter: Stefan Egli
Assignee: Chetan Mehrotra
 Fix For: 0.20

 Attachments: OverwritePropertyTest.java


 After running a test that produces a couple of thousand nodes, and overwrites 
 the same properties a couple of thousand times, then crashing the replica primary, 
 the exception below occurs.
 The exception can be reproduced on the db and with the test case I'll attach 
 in a minute.
 {code}javax.jcr.NamespaceException: OakNamespace0005: Namespace modification 
 not allowed: rep:nsdata
   at 
 org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:227)
   at 
 org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:212)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:679)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:553)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:417)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:1)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl.perform(SessionImpl.java:127)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:414)
   at 
 org.apache.jackrabbit.oak.run.OverwritePropertyTest.testReplicaCrashResilience(OverwritePropertyTest.java:74)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: 
 OakNamespace0005: Namespace modification not allowed: rep:nsdata
   at 
 org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.modificationNotAllowed(NamespaceEditor.java:122)
   at 
 org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.childNodeChanged(NamespaceEditor.java:140)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeEditor.childNodeChanged(CompositeEditor.java:122)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:143)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.dispatch(DocumentNodeState.java:405)
   at 
 

[jira] [Updated] (OAK-1650) NPE and MicroKernelException: The node .. does not exist, on replica primary crash during save

2014-03-31 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1650:
---

Assignee: Chetan Mehrotra

 NPE and MicroKernelException: The node .. does not exist, on replica primary 
 crash during save
 --

 Key: OAK-1650
 URL: https://issues.apache.org/jira/browse/OAK-1650
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 0.19
 Environment: 0.20-SNAPSHOT as of March 31
Reporter: Stefan Egli
Assignee: Chetan Mehrotra
 Fix For: 0.20

 Attachments: ReplicaCrashResilienceLargeTxTest.java


 When crashing the replica-primary while saving a large transaction, the 
 following two exceptions occur. Had this twice in a row, thus 'sort of' 
 reproducible. I'll attach the test case in a minute.
 {code}Mar 31, 2014 11:49:04 AM com.mongodb.DBTCPConnector setMasterAddress
 WARNING: Primary switching from localhost/127.0.0.1:12321 to 
 localhost/127.0.0.1:12322
 Writer: Created level1 node: 
 Node[NodeDelegate{tree=/replicaCrashLargeTxTest-1396259321921/2: { 
 jcr:primaryType = nt:unstructured}}]
 org.apache.jackrabbit.mk.api.MicroKernelException: 
 java.lang.NullPointerException
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.findAndModify(MongoDocumentStore.java:483)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.createOrUpdate(MongoDocumentStore.java:495)
   at 
 org.apache.jackrabbit.oak.plugins.document.Commit.createOrUpdateNode(Commit.java:449)
   at 
 org.apache.jackrabbit.oak.plugins.document.Commit.applyToDocumentStore(Commit.java:335)
   at 
 org.apache.jackrabbit.oak.plugins.document.Commit.prepare(Commit.java:212)
   at 
 org.apache.jackrabbit.oak.plugins.document.Commit.apply(Commit.java:181)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:172)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:85)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:1)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$Persisted.persistTransientHead(AbstractNodeStoreBranch.java:598)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$Persisted.setRoot(AbstractNodeStoreBranch.java:547)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.setRoot(AbstractNodeStoreBranch.java:208)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.purge(DocumentRootBuilder.java:188)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.updated(DocumentRootBuilder.java:99)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.updated(MemoryNodeBuilder.java:205)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:489)
   at 
 org.apache.jackrabbit.oak.core.SecureNodeBuilder.setProperty(SecureNodeBuilder.java:260)
   at 
 org.apache.jackrabbit.oak.core.MutableTree.updateChildOrder(MutableTree.java:337)
   at 
 org.apache.jackrabbit.oak.core.MutableTree.setOrderableChildren(MutableTree.java:220)
   at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:207)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:286)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:1)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
   at 
 org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:113)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:253)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:238)
   at 
 org.apache.jackrabbit.oak.run.ReplicaCrashResilienceLargeTxTest$1.run(ReplicaCrashResilienceLargeTxTest.java:115)
   at java.lang.Thread.run(Thread.java:695)
 Caused by: java.lang.NullPointerException
   at 
 com.google.common.base.Preconditions.checkNotNull(Preconditions.java:192)
   at 
 org.apache.jackrabbit.oak.plugins.document.util.StringValue.init(StringValue.java:35)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.addToCache(MongoDocumentStore.java:810)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.applyToCache(MongoDocumentStore.java:765)
   at 
 

[jira] [Updated] (OAK-1557) Mark documents as deleted

2014-03-31 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-1557:
-

Fix Version/s: (was: 0.20)
   1.0

 Mark documents as deleted
 -

 Key: OAK-1557
 URL: https://issues.apache.org/jira/browse/OAK-1557
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Marcel Reutegger
Assignee: Chetan Mehrotra
Priority: Blocker
 Fix For: 1.0


 This is an improvement to make a certain use case more efficient. When there 
 is a parent node with frequently added and removed child nodes, the reading 
 of the current list of child nodes becomes inefficient because the decision 
 whether a node exists at a certain revision is done in the DocumentNodeStore 
 and no filtering is done on the MongoDB side.
 So far we figured this would be solved automatically by the MVCC garbage 
 collection, when documents for deleted nodes are removed. However for 
 locations in the repository where nodes are added and deleted again 
 frequently (think of a temp folder), the issue pops up before the GC had a 
 chance to clean up.
 The Document should have an additional field, which is set when the node is 
 deleted in the most recent revision. Based on this field the 
 DocumentNodeStore can limit the query to MongoDB to documents that are not 
 deleted.
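
 As a rough sketch of the proposed filtering (not the actual Oak implementation; 
 the field name "_deleted" and the query shape are assumptions for illustration), 
 the child-node lookup could push the filter down to MongoDB like this:
 {code}
 import com.mongodb.BasicDBObject;
 import com.mongodb.DBCollection;
 import com.mongodb.DBCursor;
 import com.mongodb.DBObject;

 public class DeletedFlagSketch {
     // Fetches the child documents of a parent by key range, skipping
     // documents whose hypothetical "_deleted" flag marks the node as
     // deleted in its most recent revision.
     static DBCursor findUndeletedChildren(DBCollection nodes, String fromKey, String toKey) {
         DBObject query = new BasicDBObject("_id",
                 new BasicDBObject("$gt", fromKey).append("$lt", toKey))
                 .append("_deleted", new BasicDBObject("$ne", Boolean.TRUE));
         return nodes.find(query);
     }
 }
 {code}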



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1557) Mark documents as deleted

2014-03-31 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955089#comment-13955089
 ] 

Chetan Mehrotra commented on OAK-1557:
--

After discussion with [~mreutegg], deferring the issue to 1.0. For now, with the 
Version GC (OAK-1341) implemented, such deleted documents would be garbage 
collected in the normal usage case and hence should not pose much of a problem. 
So the urgency of this feature is reduced and it can be addressed later, in the 
1.0 release.

 Mark documents as deleted
 -

 Key: OAK-1557
 URL: https://issues.apache.org/jira/browse/OAK-1557
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Marcel Reutegger
Assignee: Chetan Mehrotra
Priority: Blocker
 Fix For: 1.0


 This is an improvement to make a certain use case more efficient. When there 
 is a parent node with frequently added and removed child nodes, the reading 
 of the current list of child nodes becomes inefficient because the decision 
 whether a node exists at a certain revision is done in the DocumentNodeStore 
 and no filtering is done on the MongoDB side.
 So far we figured this would be solved automatically by the MVCC garbage 
 collection, when documents for deleted nodes are removed. However for 
 locations in the repository where nodes are added and deleted again 
 frequently (think of a temp folder), the issue pops up before the GC had a 
 chance to clean up.
 The Document should have an additional field, which is set when the node is 
 deleted in the most recent revision. Based on this field the 
 DocumentNodeStore can limit the query to MongoDB to documents that are not 
 deleted.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1341) DocumentNodeStore: Implement revision garbage collection

2014-03-31 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-1341.
--

Resolution: Fixed

 DocumentNodeStore: Implement revision garbage collection
 

 Key: OAK-1341
 URL: https://issues.apache.org/jira/browse/OAK-1341
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: mongomk
Reporter: Thomas Mueller
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 0.20


 For the MongoMK (as well as for other storage engines that are based on it), 
 garbage collection is most easily implemented by iterating over all documents 
 and removing unused entries (either whole documents, or data within the 
 document). 
 Iteration can be done in parallel (for example one process per shard), and it 
 can be done in any order. 
 The most efficient order is probably the id order; however, it might be 
 better to iterate only over documents that were not changed recently, by 
 using the index on the _modified property. That way we don't need to 
 iterate over the whole repository over and over again, but just over those 
 documents that were actually changed.
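
 A minimal sketch of that iteration strategy, assuming a MongoDB collection of 
 documents with an indexed "_modified" field (the per-document cleanup decision 
 is left out; apart from the "_modified" and "_id" field names taken from the 
 description, everything here is illustrative):
 {code}
 import com.mongodb.BasicDBObject;
 import com.mongodb.DBCollection;
 import com.mongodb.DBCursor;
 import com.mongodb.DBObject;

 public class RevisionGcSketch {
     // Visits only documents that were last modified before the given cutoff,
     // so unchanged documents are not rescanned on every GC run. Iterating in
     // id order keeps the scan cheap and shard-friendly.
     static void collectGarbage(DBCollection nodes, long cutoffSeconds) {
         DBObject query = new BasicDBObject("_modified",
                 new BasicDBObject("$lt", cutoffSeconds));
         DBCursor candidates = nodes.find(query).sort(new BasicDBObject("_id", 1));
         try {
             while (candidates.hasNext()) {
                 DBObject doc = candidates.next();
                 // decide per document whether to remove it entirely or only
                 // drop stale revision entries (application-specific)
             }
         } finally {
             candidates.close();
         }
     }
 }
 {code}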



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1341) DocumentNodeStore: Implement revision garbage collection

2014-03-31 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955090#comment-13955090
 ] 

Chetan Mehrotra commented on OAK-1341:
--

The required work has been done as per the flow described above. The RevisionGC 
is wired up with the JMX logic and can be invoked. By default the maxAge is set 
to 1 day. It can be changed via the {{versionGcMaxAgeInSecs}} config property of 
the DocumentNodeStore.
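
For reference, a JMX-exposed GC like this can be invoked programmatically with 
the standard {{javax.management}} API; the MBean object name and operation name 
below are placeholders, not the actual names Oak registers:
{code}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class TriggerRevisionGc {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical object/operation names; look up the actual ones
        // registered by the DocumentNodeStore in your JMX console.
        ObjectName gcBean = new ObjectName("org.apache.jackrabbit.oak:name=RevisionGC,type=RevisionGC");
        server.invoke(gcBean, "startRevisionGC", new Object[0], new String[0]);
    }
}
{code}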

 DocumentNodeStore: Implement revision garbage collection
 

 Key: OAK-1341
 URL: https://issues.apache.org/jira/browse/OAK-1341
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: mongomk
Reporter: Thomas Mueller
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 0.20


 For the MongoMK (as well as for other storage engines that are based on it), 
 garbage collection is most easily implemented by iterating over all documents 
 and removing unused entries (either whole documents, or data within the 
 document). 
 Iteration can be done in parallel (for example one process per shard), and it 
 can be done in any order. 
 The most efficient order is probably the id order; however, it might be 
 better to iterate only over documents that were not changed recently, by 
 using the index on the _modified property. That way we don't need to 
 iterate over the whole repository over and over again, but just over those 
 documents that were actually changed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1641) Mongo: Un-/CheckedExecutionException on replica-primary crash

2014-03-31 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-1641:
-

Summary: Mongo: Un-/CheckedExecutionException on replica-primary crash  
(was: Mongo: UncheckedExecutionException on replica-primary crash)

 Mongo: Un-/CheckedExecutionException on replica-primary crash
 -

 Key: OAK-1641
 URL: https://issues.apache.org/jira/browse/OAK-1641
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 0.19
 Environment: 0.20-SNAPSHOT as of March 28, 2014
Reporter: Stefan Egli
Assignee: Chetan Mehrotra
 Attachments: ReplicaCrashResilienceTest.java, 
 ReplicaCrashResilienceTest.java, mongoUrl_fixture_patch_oak1641.diff


 Testing with a mongo replicaSet setup: 1 primary, 1 secondary and 1 
 secondary-arbiter-only.
 Running a simple test which has 2 threads: a writer thread and a reader 
 thread.
 The following exception occurs when crashing mongo primary
 {code}
 com.google.common.util.concurrent.UncheckedExecutionException: 
 com.google.common.util.concurrent.UncheckedExecutionException: 
 com.mongodb.MongoException$Network: Read operation to server 
 localhost/127.0.0.1:12322 failed on database resilienceTest
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
   at 
 com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.getNode(DocumentNodeStore.java:593)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.hasChildNode(DocumentNodeState.java:164)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.hasChildNode(MemoryNodeBuilder.java:301)
   at 
 org.apache.jackrabbit.oak.core.SecureNodeBuilder.hasChildNode(SecureNodeBuilder.java:299)
   at 
 org.apache.jackrabbit.oak.plugins.tree.AbstractTree.hasChild(AbstractTree.java:267)
   at 
 org.apache.jackrabbit.oak.core.MutableTree.getChild(MutableTree.java:147)
   at org.apache.jackrabbit.oak.util.TreeUtil.getTree(TreeUtil.java:171)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.getTree(NodeDelegate.java:865)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.getChild(NodeDelegate.java:339)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:274)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:1)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
   at 
 org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:113)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:253)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:238)
   at 
 org.apache.jackrabbit.oak.run.ReplicaCrashResilienceTest$1.run(ReplicaCrashResilienceTest.java:103)
   at java.lang.Thread.run(Thread.java:695)
 Caused by: com.google.common.util.concurrent.UncheckedExecutionException: 
 com.mongodb.MongoException$Network: Read operation to server 
 localhost/127.0.0.1:12322 failed on database resilienceTest
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
   at 
 com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.find(MongoDocumentStore.java:267)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.find(MongoDocumentStore.java:234)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.readNode(DocumentNodeStore.java:802)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$3.call(DocumentNodeStore.java:596)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$3.call(DocumentNodeStore.java:1)
   at 
 com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
   at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
   at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
   at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
   ... 19 more
 Caused by: com.mongodb.MongoException$Network: Read operation to server 
 localhost/127.0.0.1:12322 failed on database resilienceTest
   at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:253)
   at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:264)

[jira] [Commented] (OAK-1641) Mongo: UncheckedExecutionException on replica-primary crash

2014-03-31 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955112#comment-13955112
 ] 

Stefan Egli commented on OAK-1641:
--

PS: Also had a *checked exception* being thrown in this case:

{code}
 e=org.apache.jackrabbit.mk.api.MicroKernelException: 
com.mongodb.MongoException$Network: Read operation to server 
localhost/127.0.0.1:12322 failed on database resilienceLargeTxTest2
org.apache.jackrabbit.mk.api.MicroKernelException: 
com.mongodb.MongoException$Network: Read operation to server 
localhost/127.0.0.1:12322 failed on database resilienceLargeTxTest2
at 
org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.findAndModify(MongoDocumentStore.java:483)
at 
org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.findAndUpdate(MongoDocumentStore.java:504)
at 
org.apache.jackrabbit.oak.plugins.document.Commit.rollback(Commit.java:429)
at 
org.apache.jackrabbit.oak.plugins.document.Commit.applyToDocumentStore(Commit.java:377)
at 
org.apache.jackrabbit.oak.plugins.document.Commit.prepare(Commit.java:212)
at 
org.apache.jackrabbit.oak.plugins.document.Commit.apply(Commit.java:181)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:172)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:85)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:1)
at 
org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$Persisted.persistTransientHead(AbstractNodeStoreBranch.java:598)
at 
org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$Persisted.setRoot(AbstractNodeStoreBranch.java:547)
at 
org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.setRoot(AbstractNodeStoreBranch.java:208)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.purge(DocumentRootBuilder.java:188)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.updated(DocumentRootBuilder.java:99)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.updated(MemoryNodeBuilder.java:205)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:489)
at 
org.apache.jackrabbit.oak.core.SecureNodeBuilder.setProperty(SecureNodeBuilder.java:260)
at 
org.apache.jackrabbit.oak.core.MutableTree.setProperty(MutableTree.java:261)
at 
org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.setProperty(NodeDelegate.java:537)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$35.perform(NodeImpl.java:1306)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$35.perform(NodeImpl.java:1)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
at 
org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:113)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.internalSetProperty(NodeImpl.java:1294)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.setProperty(NodeImpl.java:495)
at 
org.apache.jackrabbit.oak.run.ReplicaCrashResilienceLargeTxWithReaderTest$1.run(ReplicaCrashResilienceLargeTxWithReaderTest.java:119)
at java.lang.Thread.run(Thread.java:695)
Caused by: com.mongodb.MongoException$Network: Read operation to server 
localhost/127.0.0.1:12322 failed on database resilienceLargeTxTest2
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:253)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:216)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:288)
at com.mongodb.DB.command(DB.java:262)
at com.mongodb.DB.command(DB.java:244)
at com.mongodb.DB.command(DB.java:301)
at com.mongodb.DB.command(DB.java:199)
at com.mongodb.DBCollection.findAndModify(DBCollection.java:392)
at 
org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.findAndModify(MongoDocumentStore.java:456)
... 26 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:382)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:241)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:228)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:431)
at java.net.Socket.connect(Socket.java:527)
at com.mongodb.DBPort._open(DBPort.java:223)
at com.mongodb.DBPort.go(DBPort.java:125)
at com.mongodb.DBPort.call(DBPort.java:92)
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:244)
... 34 more
{code}

 Mongo: 

[jira] [Commented] (OAK-1456) Non-blocking reindexing

2014-03-31 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955146#comment-13955146
 ] 

Alex Parvulescu commented on OAK-1456:
--

The changes are in with rev 1583316. The only thing I've left out for now is 
the code that enables the background task.

http://svn.apache.org/r1583316

 Non-blocking reindexing
 ---

 Key: OAK-1456
 URL: https://issues.apache.org/jira/browse/OAK-1456
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Michael Marth
Assignee: Alex Parvulescu
Priority: Blocker
  Labels: production, resilience
 Fix For: 0.20

 Attachments: OAK-1456.patch


 For huge Oak repos it will be essential to re-index some or all indexes in 
 case they go out of sync in a non-blocking way (i.e. the repo is still 
 operation while the re-indexing takes place).
 For an asynchronous index this should not be much of a problem. One could 
 drop it and recreate (as an added benefit it might be nice if the user could 
 simply add a property reindex to the index definition node to trigger this).
 For synchronous indexes, I suggest the mechanism creates an asynchronous 
 index behind the scenes first and once it has caught up
 * blocks writes (?)
 * removes the existing synchronous index
 * moves asynchronous index in its place and makes it synchronous
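
 For the asynchronous case, the reindex trigger mentioned above could look 
 roughly like the following (a sketch; the index path /oak:index/lucene and the 
 exact semantics of the "reindex" flag depend on the index definition in use):
 {code}
 import javax.jcr.Node;
 import javax.jcr.Session;

 public class ReindexTrigger {
     // Asks the indexer to rebuild a single index by flagging its definition
     // node. Assumes an index definition exists at /oak:index/lucene.
     static void triggerReindex(Session session) throws Exception {
         Node indexDef = session.getNode("/oak:index/lucene");
         indexDef.setProperty("reindex", true);
         session.save();
     }
 }
 {code}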



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-319) Similar (rep:similar) support

2014-03-31 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955171#comment-13955171
 ] 

Thomas Mueller commented on OAK-319:


I think that rep:similar could be supported by converting it to a native 
Lucene query, as follows:

{noformat}
@Test
public void testRepSimilarQuery() throws Exception {
    String nativeQueryString = "select [jcr:path] from [nt:base] where "
            + "native('lucene', 'mlt?stream.body=/test/a&mlt.fl=my:path&mlt.mindf=0&mlt.mintf=0')";
    Tree test = root.getTree("/").addChild("test");
    test.addChild("a").setProperty("my:path", "/test/a");
    test.addChild("a").setProperty("text", "Hello World");
    test.addChild("b").setProperty("my:path", "/test/b");
    test.addChild("b").setProperty("text",
            "He said Hello World and then the world said Hello as well.");
    test.addChild("c").setProperty("my:path", "/test/c");
    test.addChild("c").setProperty("text",
            "He said Hi and then the world said Hi as well.");
    root.commit();
    Iterator<String> result = executeQuery(nativeQueryString, "JCR-SQL2").iterator();
    assertTrue(result.hasNext());
    assertEquals("/test/a", result.next());
    assertTrue(result.hasNext());
    assertEquals("/test/b", result.next());
    assertFalse(result.hasNext());
}
{noformat}

The problem is that the path of a node (the :path field in Lucene) is not 
indexed. [~alex.parvulescu], would it be OK to index this field, or do you 
think that would be problematic (for performance or other reasons)?

 Similar (rep:similar) support
 -

 Key: OAK-319
 URL: https://issues.apache.org/jira/browse/OAK-319
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: jcr, query
Reporter: Alex Parvulescu
Assignee: Thomas Mueller
Priority: Critical
 Fix For: 0.20


 Test class is: SimilarQueryTest
 Trace:
 {noformat}
 Caused by: java.text.ParseException: Query:
 //*[rep:similar(.(*), '/testroot')]; expected: rep:similar is not supported
   at 
 org.apache.jackrabbit.oak.query.XPathToSQL2Converter.getSyntaxError(XPathToSQL2Converter.java:963)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1634) After crash, segment persistence is broken with failures in java.nio classes (with v0.19)

2014-03-31 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated OAK-1634:
---

Fix Version/s: 0.20

 After crash, segment persistence is broken with failures in java.nio classes 
 (with v0.19)
 -

 Key: OAK-1634
 URL: https://issues.apache.org/jira/browse/OAK-1634
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Affects Versions: 0.19
Reporter: Alexander Klimetschek
Assignee: Jukka Zitting
 Fix For: 0.20


 Reopening of OAK-1409, as it still occurs with 0.19.0. Below is the latest 
 stacktrace I get after my instance crashed hard due to an OS crash.
 {code}
 27.03.2014 14:46:24.045 *ERROR* [qtp981488976-157] 
 org.apache.sling.jcr.webdav.impl.servlets.SlingSimpleWebDavServlet service: 
 Uncaught RuntimeException
 java.nio.BufferOverflowException: null
   at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:352)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.MappedAccess.write(MappedAccess.java:64)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.TarFile.writeEntryHeader(TarFile.java:201)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.TarFile.writeEntry(TarFile.java:134)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore.writeSegment(FileStore.java:387)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.internalWriteStream(SegmentWriter.java:744)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeStream(SegmentWriter.java:711)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.createBlob(SegmentNodeStore.java:174)
   at 
 org.apache.jackrabbit.oak.spi.state.ProxyNodeStore.createBlob(ProxyNodeStore.java:57)
   at 
 org.apache.jackrabbit.oak.core.MutableRoot.createBlob(MutableRoot.java:314)
   at 
 org.apache.jackrabbit.oak.plugins.value.ValueFactoryImpl.createBinaryValue(ValueFactoryImpl.java:286)
   at 
 org.apache.jackrabbit.oak.plugins.value.ValueFactoryImpl.createValue(ValueFactoryImpl.java:143)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.setProperty(NodeImpl.java:455)
   at 
 org.apache.jackrabbit.server.io.DefaultHandler.importData(DefaultHandler.java:237)
   at 
 org.apache.jackrabbit.server.io.DefaultHandler.importContent(DefaultHandler.java:188)
   at 
 org.apache.jackrabbit.server.io.DefaultHandler.importContent(DefaultHandler.java:215)
   at 
 org.apache.sling.jcr.webdav.impl.handler.DefaultHandlerService.importContent(DefaultHandlerService.java:116)
   at 
 org.apache.jackrabbit.server.io.IOManagerImpl.importContent(IOManagerImpl.java:129)
   at 
 org.apache.jackrabbit.webdav.simple.DavResourceImpl.addMember(DavResourceImpl.java:528)
   at 
 org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.doPut(AbstractWebdavServlet.java:629)
   at 
 org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.execute(AbstractWebdavServlet.java:357)
   at 
 org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.service(AbstractWebdavServlet.java:291)
   at 
 org.apache.sling.jcr.webdav.impl.servlets.SlingSimpleWebDavServlet.doService(SlingSimpleWebDavServlet.java:88)
   at 
 org.apache.sling.jcr.webdav.impl.servlets.SlingSimpleWebDavServlet.service(SlingSimpleWebDavServlet.java:67)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1636) Lucene and Solr index: support jcr:excerpt and jcr:score

2014-03-31 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955177#comment-13955177
 ] 

Thomas Mueller commented on OAK-1636:
-

Alex, you are right, this overlaps with OAK-318. I will narrow the scope of 
this issue to jcr:score for Solr.

 Lucene and Solr index: support jcr:excerpt and jcr:score
 

 Key: OAK-1636
 URL: https://issues.apache.org/jira/browse/OAK-1636
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: query
Reporter: Thomas Mueller
Priority: Critical
 Fix For: 0.20


 OAK-262 implements support for pseudo-properties such as jcr:score in the 
 query engine. It also implements support for jcr:score in the Lucene index.
 What is still missing is support for jcr:score in the Solr index, and 
 support for rep:excerpt with both Lucene and Solr.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1636) Solr index: support jcr:score

2014-03-31 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-1636:


Description: 
OAK-262 implements support for pseudo-properties such as jcr:score in the 
query engine. It also implements support for jcr:score in the Lucene index.

What is still missing is support for jcr:score in the Solr index. 

Let's track rep:excerpt separately, in OAK-318.

  was:
OAK-262 implements support for pseudo-properties such as jcr:score in the 
query engine. It also implements support for jcr:score in the Lucene index.

What is still missing is support for jcr:score in the Solr index, and support 
for rep:excerpt with both Lucene and Solr.

Summary: Solr index: support jcr:score  (was: Lucene and Solr index: 
support jcr:excerpt and jcr:score)

 Solr index: support jcr:score
 ---

 Key: OAK-1636
 URL: https://issues.apache.org/jira/browse/OAK-1636
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: query
Reporter: Thomas Mueller
Priority: Critical
 Fix For: 0.20


 OAK-262 implements support for pseudo-properties such as jcr:score in the 
 query engine. It also implements support for jcr:score in the Lucene index.
 What is still missing is support for jcr:score in the Solr index. 
 Let's track rep:excerpt separately, in OAK-318.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1651) Fix oak-solr-core pom dependencies

2014-03-31 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created OAK-1651:


 Summary: Fix oak-solr-core pom dependencies
 Key: OAK-1651
 URL: https://issues.apache.org/jira/browse/OAK-1651
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-solr
Affects Versions: 0.19
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 0.20


mvn dependency:analyze reports a number of undeclared and superfluous 
dependencies, so it'd be good to clean them up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-319) Similar (rep:similar) support

2014-03-31 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955182#comment-13955182
 ] 

Alex Parvulescu commented on OAK-319:
-

bq. The problem is that the path of a node (the :path field in Lucene) is not 
indexed.
Could you be more specific about what the problem is? The path is already in 
the Lucene index.
https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/LuceneIndex.java#L378

 Similar (rep:similar) support
 -

 Key: OAK-319
 URL: https://issues.apache.org/jira/browse/OAK-319
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: jcr, query
Reporter: Alex Parvulescu
Assignee: Thomas Mueller
Priority: Critical
 Fix For: 0.20


 Test class is: SimilarQueryTest
 Trace:
 {noformat}
 Caused by: java.text.ParseException: Query:
 //*[rep:similar(.(*), '/testroot')]; expected: rep:similar is not supported
   at 
 org.apache.jackrabbit.oak.query.XPathToSQL2Converter.getSyntaxError(XPathToSQL2Converter.java:963)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1636) Solr index: support jcr:score

2014-03-31 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1636:
---

Assignee: Tommaso Teofili

 Solr index: support jcr:score
 ---

 Key: OAK-1636
 URL: https://issues.apache.org/jira/browse/OAK-1636
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: query
Reporter: Thomas Mueller
Assignee: Tommaso Teofili
 Fix For: 0.20


 OAK-262 implements support for pseudo-properties such as jcr:score in the 
 query engine. It also implements support for jcr:score in the Lucene index.
 What is still missing is support for jcr:score in the Solr index. 
 Let's track rep:excerpt separately, in OAK-318.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1636) Solr index: support jcr:score

2014-03-31 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1636:
---

Priority: Major  (was: Critical)

 Solr index: support jcr:score
 ---

 Key: OAK-1636
 URL: https://issues.apache.org/jira/browse/OAK-1636
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: query
Reporter: Thomas Mueller
 Fix For: 0.20


 OAK-262 implements support for pseudo-properties such as jcr:score in the 
 query engine. It also implements support for jcr:score in the Lucene index.
 What is still missing is support for jcr:score in the Solr index. 
 Let's track rep:excerpt separately, in OAK-318.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1649) NamespaceException: OakNamespace0005 on save, after replica crash

2014-03-31 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1649:
---

Assignee: Thomas Mueller  (was: Chetan Mehrotra)

 NamespaceException: OakNamespace0005 on save, after replica crash
 -

 Key: OAK-1649
 URL: https://issues.apache.org/jira/browse/OAK-1649
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 0.19
 Environment: 0.20-SNAPSHOT as of March 31
Reporter: Stefan Egli
Assignee: Thomas Mueller
 Fix For: 0.20

 Attachments: OverwritePropertyTest.java


 After running a test that produces a couple thousand nodes and overwrites the 
 same properties a couple thousand times, then crashing the replica primary, 
 the exception below occurs.
 The exception can be reproduced on the db and with the test case I'll attach 
 in a minute.
 {code}javax.jcr.NamespaceException: OakNamespace0005: Namespace modification 
 not allowed: rep:nsdata
   at 
 org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:227)
   at 
 org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:212)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:679)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:553)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:417)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:1)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl.perform(SessionImpl.java:127)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:414)
   at 
 org.apache.jackrabbit.oak.run.OverwritePropertyTest.testReplicaCrashResilience(OverwritePropertyTest.java:74)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: 
 OakNamespace0005: Namespace modification not allowed: rep:nsdata
   at 
 org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.modificationNotAllowed(NamespaceEditor.java:122)
   at 
 org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.childNodeChanged(NamespaceEditor.java:140)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeEditor.childNodeChanged(CompositeEditor.java:122)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:143)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.dispatch(DocumentNodeState.java:405)
   at 
 

[jira] [Updated] (OAK-1650) NPE and MicroKernelException: The node .. does not exist, on replica primary crash during save

2014-03-31 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1650:
---

Assignee: Thomas Mueller  (was: Chetan Mehrotra)

 NPE and MicroKernelException: The node .. does not exist, on replica primary 
 crash during save
 --

 Key: OAK-1650
 URL: https://issues.apache.org/jira/browse/OAK-1650
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 0.19
 Environment: 0.20-SNAPSHOT as of March 31
Reporter: Stefan Egli
Assignee: Thomas Mueller
 Fix For: 0.20

 Attachments: ReplicaCrashResilienceLargeTxTest.java


 When crashing the replica-primary while saving a large transaction, the 
 following two exceptions occur. Had this twice in a row, so it is 'sort of' 
 reproducible. I'll attach the test case in a minute.
 {code}Mar 31, 2014 11:49:04 AM com.mongodb.DBTCPConnector setMasterAddress
 WARNING: Primary switching from localhost/127.0.0.1:12321 to 
 localhost/127.0.0.1:12322
 Writer: Created level1 node: 
 Node[NodeDelegate{tree=/replicaCrashLargeTxTest-1396259321921/2: { 
 jcr:primaryType = nt:unstructured}}]
 org.apache.jackrabbit.mk.api.MicroKernelException: 
 java.lang.NullPointerException
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.findAndModify(MongoDocumentStore.java:483)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.createOrUpdate(MongoDocumentStore.java:495)
   at 
 org.apache.jackrabbit.oak.plugins.document.Commit.createOrUpdateNode(Commit.java:449)
   at 
 org.apache.jackrabbit.oak.plugins.document.Commit.applyToDocumentStore(Commit.java:335)
   at 
 org.apache.jackrabbit.oak.plugins.document.Commit.prepare(Commit.java:212)
   at 
 org.apache.jackrabbit.oak.plugins.document.Commit.apply(Commit.java:181)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:172)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:85)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.persist(DocumentNodeStoreBranch.java:1)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$Persisted.persistTransientHead(AbstractNodeStoreBranch.java:598)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$Persisted.setRoot(AbstractNodeStoreBranch.java:547)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.setRoot(AbstractNodeStoreBranch.java:208)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.purge(DocumentRootBuilder.java:188)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.updated(DocumentRootBuilder.java:99)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.updated(MemoryNodeBuilder.java:205)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:489)
   at 
 org.apache.jackrabbit.oak.core.SecureNodeBuilder.setProperty(SecureNodeBuilder.java:260)
   at 
 org.apache.jackrabbit.oak.core.MutableTree.updateChildOrder(MutableTree.java:337)
   at 
 org.apache.jackrabbit.oak.core.MutableTree.setOrderableChildren(MutableTree.java:220)
   at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:207)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:286)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:1)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
   at 
 org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:113)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:253)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:238)
   at 
 org.apache.jackrabbit.oak.run.ReplicaCrashResilienceLargeTxTest$1.run(ReplicaCrashResilienceLargeTxTest.java:115)
   at java.lang.Thread.run(Thread.java:695)
 Caused by: java.lang.NullPointerException
   at 
 com.google.common.base.Preconditions.checkNotNull(Preconditions.java:192)
   at 
 org.apache.jackrabbit.oak.plugins.document.util.StringValue.init(StringValue.java:35)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.addToCache(MongoDocumentStore.java:810)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.applyToCache(MongoDocumentStore.java:765)
   at 
 

[jira] [Updated] (OAK-1641) Mongo: Un-/CheckedExecutionException on replica-primary crash

2014-03-31 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1641:
---

Assignee: Thomas Mueller  (was: Chetan Mehrotra)

 Mongo: Un-/CheckedExecutionException on replica-primary crash
 -

 Key: OAK-1641
 URL: https://issues.apache.org/jira/browse/OAK-1641
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 0.19
 Environment: 0.20-SNAPSHOT as of March 28, 2014
Reporter: Stefan Egli
Assignee: Thomas Mueller
 Attachments: ReplicaCrashResilienceTest.java, 
 ReplicaCrashResilienceTest.java, mongoUrl_fixture_patch_oak1641.diff


 Testing with a mongo replicaSet setup: 1 primary, 1 secondary and 1 
 secondary-arbiter-only.
 Running a simple test which has 2 threads: a writer thread and a reader 
 thread.
 The following exception occurs when crashing mongo primary
 {code}
 com.google.common.util.concurrent.UncheckedExecutionException: 
 com.google.common.util.concurrent.UncheckedExecutionException: 
 com.mongodb.MongoException$Network: Read operation to server 
 localhost/127.0.0.1:12322 failed on database resilienceTest
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
   at 
 com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.getNode(DocumentNodeStore.java:593)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.hasChildNode(DocumentNodeState.java:164)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.hasChildNode(MemoryNodeBuilder.java:301)
   at 
 org.apache.jackrabbit.oak.core.SecureNodeBuilder.hasChildNode(SecureNodeBuilder.java:299)
   at 
 org.apache.jackrabbit.oak.plugins.tree.AbstractTree.hasChild(AbstractTree.java:267)
   at 
 org.apache.jackrabbit.oak.core.MutableTree.getChild(MutableTree.java:147)
   at org.apache.jackrabbit.oak.util.TreeUtil.getTree(TreeUtil.java:171)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.getTree(NodeDelegate.java:865)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.getChild(NodeDelegate.java:339)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:274)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:1)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
   at 
 org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:113)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:253)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:238)
   at 
 org.apache.jackrabbit.oak.run.ReplicaCrashResilienceTest$1.run(ReplicaCrashResilienceTest.java:103)
   at java.lang.Thread.run(Thread.java:695)
 Caused by: com.google.common.util.concurrent.UncheckedExecutionException: 
 com.mongodb.MongoException$Network: Read operation to server 
 localhost/127.0.0.1:12322 failed on database resilienceTest
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
   at 
 com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.find(MongoDocumentStore.java:267)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.find(MongoDocumentStore.java:234)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.readNode(DocumentNodeStore.java:802)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$3.call(DocumentNodeStore.java:596)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$3.call(DocumentNodeStore.java:1)
   at 
 com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
   at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
   at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
   at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
   ... 19 more
 Caused by: com.mongodb.MongoException$Network: Read operation to server 
 localhost/127.0.0.1:12322 failed on database resilienceTest
   at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:253)
   at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:264)
   at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:264)
   at 

[jira] [Commented] (OAK-1456) Non-blocking reindexing

2014-03-31 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955198#comment-13955198
 ] 

Alex Parvulescu commented on OAK-1456:
--

Just to sum up, the remaining question is mostly whether this background 
thread should be enabled by default in Oak or not, and if so, how frequently 
it should run.

 Non-blocking reindexing
 ---

 Key: OAK-1456
 URL: https://issues.apache.org/jira/browse/OAK-1456
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Michael Marth
Assignee: Alex Parvulescu
Priority: Blocker
  Labels: production, resilience
 Fix For: 0.20

 Attachments: OAK-1456.patch


 For huge Oak repos it will be essential to re-index some or all indexes in 
 case they go out of sync in a non-blocking way (i.e. the repo is still 
 operation while the re-indexing takes place).
 For an asynchronous index this should not be much of a problem. One could 
 drop it and recreate (as an added benefit it might be nice if the user could 
 simply add a property reindex to the index definition node to trigger this).
 For synchronous indexes, I suggest the mechanism creates an asynchronous 
 index behind the scenes first and once it has caught up
 * blocks writes (?)
 * removes the existing synchronous index
 * moves asynchronous index in its place and makes it synchronous



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1589) MongoDocumentStore fails to report error for keys that are too long

2014-03-31 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955218#comment-13955218
 ] 

Julian Reschke commented on OAK-1589:
-

Note that if testMaxId() doesn't attempt to read back the document, the 
reported size is 32767 (the maximum).

 MongoDocumentStore fails to report error for keys that are too long
 ---

 Key: OAK-1589
 URL: https://issues.apache.org/jira/browse/OAK-1589
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Julian Reschke
Priority: Minor

 The MongoDocumentStore fails to report an error when the key length exceeds 
 the allowable width in MongoDB.
 This can be fixed by using a newer version of MongoDB (mongodb 2.5.5  -- see 
 https://jira.mongodb.org/browse/SERVER-5290)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-319) Similar (rep:similar) support

2014-03-31 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955300#comment-13955300
 ] 

Thomas Mueller commented on OAK-319:


You are right, the path is in the Lucene index. There was a problem searching 
for the document by path (the Lucene MoreLikeThis tool tries to search for the 
document using a PhraseQuery, which doesn't work for paths). To solve / work 
around this problem, I have a patch for the MoreLikeThisHelper (below). With 
that, the lookup of the document by path works as expected. There is still a 
problem: the MoreLikeThis tool expects the content to be stored in the 
document; however, as far as I can see we don't do that right now. The document 
is: [stored,indexed,tokenized,omitNorms,indexOptions=DOCS_ONLY<:path:/test/a>]. 
I think we need to store the contents of the document in the document itself 
for rep:similar to work.

Patch:
{noformat}
#P oak-lucene
Index: 
src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/util/MoreLikeThisHelper.java
===
--- 
src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/util/MoreLikeThisHelper.java
   (revision 1583237)
+++ 
src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/util/MoreLikeThisHelper.java
   (working copy)
@@ -17,10 +17,17 @@
 package org.apache.jackrabbit.oak.plugins.index.lucene.util;
 
 import java.io.StringReader;
+
+import org.apache.jackrabbit.oak.plugins.index.lucene.FieldNames;
 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.Term;
 import org.apache.lucene.queries.mlt.MoreLikeThis;
+import org.apache.lucene.search.IndexSearcher;
 import org.apache.lucene.search.Query;
+import org.apache.lucene.search.ScoreDoc;
+import org.apache.lucene.search.TermQuery;
+import org.apache.lucene.search.TopDocs;
 
 /**
  * Helper class for generating a {@link 
 org.apache.lucene.queries.mlt.MoreLikeThisQuery} from the native query 
 <code>String</code>
@@ -33,6 +40,7 @@
 mlt.setAnalyzer(analyzer);
 try {
 String text = null;
+String[] fields = {};
 for (String param : mltQueryString.split("&")) {
 String[] keyValuePair = param.split("=");
 if (keyValuePair.length != 2 || keyValuePair[0] == null || 
keyValuePair[1] == null) {
@@ -41,7 +49,7 @@
 if ("stream.body".equals(keyValuePair[0])) {
 text = keyValuePair[1];
 } else if ("mlt.fl".equals(keyValuePair[0])) {
-mlt.setFieldNames(keyValuePair[1].split(","));
+fields = keyValuePair[1].split(",");
 } else if ("mlt.mindf".equals(keyValuePair[0])) {
 mlt.setMinDocFreq(Integer.parseInt(keyValuePair[1]));
 } else if ("mlt.mintf".equals(keyValuePair[0])) {
@@ -66,7 +74,21 @@
 }
 }
 if (text != null) {
-moreLikeThisQuery = mlt.like(new StringReader(text), 
mlt.getFieldNames()[0]);
+if (FieldNames.PATH.equals(fields[0])) {
+IndexSearcher searcher = new IndexSearcher(reader);
+TermQuery q = new TermQuery(new Term(FieldNames.PATH, 
text));
+TopDocs top = searcher.search(q, 1);
+if (top.totalHits == 0) {
+mlt.setFieldNames(fields);
+moreLikeThisQuery = mlt.like(new StringReader(text), 
mlt.getFieldNames()[0]);
+} else{
+ScoreDoc d = top.scoreDocs[0];
+moreLikeThisQuery = mlt.like(d.doc);
+}
+} else {
+mlt.setFieldNames(fields);
+moreLikeThisQuery = mlt.like(new StringReader(text), 
mlt.getFieldNames()[0]);
+}
 }
 return moreLikeThisQuery;
 } catch (Exception e) {
{noformat}

 Similar (rep:similar) support
 -

 Key: OAK-319
 URL: https://issues.apache.org/jira/browse/OAK-319
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: jcr, query
Reporter: Alex Parvulescu
Assignee: Thomas Mueller
Priority: Critical
 Fix For: 0.20


 Test class is: SimilarQueryTest
 Trace:
 {noformat}
 Caused by: java.text.ParseException: Query:
 //*[rep:similar(.(*), '/testroot')]; expected: rep:similar is not supported
   at 
 org.apache.jackrabbit.oak.query.XPathToSQL2Converter.getSyntaxError(XPathToSQL2Converter.java:963)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1589) MongoDocumentStore fails to report error for keys that are too long

2014-03-31 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955306#comment-13955306
 ] 

Thomas Mueller commented on OAK-1589:
-

This test uses UpdateOp directly; it does not use 
org.apache.jackrabbit.oak.plugins.document.util.Utils.getIdFromPath like the 
DocumentStore does. There is a check in the getIdFromPath method that should 
prevent such long ids.
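
A minimal sketch of the kind of length check referred to here (the limit 
constant, id format, and exact behaviour are illustrative, not copied from 
Utils.getIdFromPath):
{code}
public final class IdFromPathSketch {
    // Hypothetical limit; ids derived from very long paths would exceed
    // MongoDB's key size limit, so they must be rejected up front.
    private static final int MAX_ID_LENGTH = 512;

    static String getIdFromPath(String path) {
        String id = pathDepth(path) + ":" + path;
        if (id.length() > MAX_ID_LENGTH) {
            throw new IllegalArgumentException("Id too long: " + id.length());
        }
        return id;
    }

    private static int pathDepth(String path) {
        if ("/".equals(path)) {
            return 0;
        }
        int depth = 0;
        for (int i = 0; i < path.length(); i++) {
            if (path.charAt(i) == '/') {
                depth++;
            }
        }
        return depth;
    }
}
{code}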

 MongoDocumentStore fails to report error for keys that are too long
 ---

 Key: OAK-1589
 URL: https://issues.apache.org/jira/browse/OAK-1589
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Julian Reschke
Priority: Minor

 The MongoDocumentStore fails to report an error when the key length exceeds 
 the allowable width in MongoDB.
 This can be fixed by using a newer version of MongoDB (mongodb 2.5.5  -- see 
 https://jira.mongodb.org/browse/SERVER-5290)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-319) Similar (rep:similar) support

2014-03-31 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955312#comment-13955312
 ] 

Thomas Mueller commented on OAK-319:


There is a security problem if we do it that way. The user might not have 
access rights to read the document, but he may be allowed to create nodes with 
(what he thinks is) similar content, and then use rep:similar to check whether 
the given node exists. So, we will need to add a check in the query engine to 
ensure the user has access to this node. That might still be a problem, if the 
user doesn't have access to all fields.

A second problem might be aggregation: the aggregated content is not stored in 
the document (in Lucene) with Oak.



 Similar (rep:similar) support
 -

 Key: OAK-319
 URL: https://issues.apache.org/jira/browse/OAK-319
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: jcr, query
Reporter: Alex Parvulescu
Assignee: Thomas Mueller
Priority: Critical
 Fix For: 0.20


 Test class is: SimilarQueryTest
 Trace:
 {noformat}
 Caused by: java.text.ParseException: Query:
 //*[rep:similar(.(*), '/testroot')]; expected: rep:similar is not supported
   at 
 org.apache.jackrabbit.oak.query.XPathToSQL2Converter.getSyntaxError(XPathToSQL2Converter.java:963)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1634) After crash, segment persistence is broken with failures in java.nio classes (with v0.19)

2014-03-31 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved OAK-1634.


Resolution: Fixed

Fixed in 1583352 by adding extra validation checks against partially written 
entries in the tar files.

 After crash, segment persistence is broken with failures in java.nio classes 
 (with v0.19)
 -

 Key: OAK-1634
 URL: https://issues.apache.org/jira/browse/OAK-1634
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Affects Versions: 0.19
Reporter: Alexander Klimetschek
Assignee: Jukka Zitting
 Fix For: 0.20


 Reopening of OAK-1409, as it still occurs with 0.19.0. Below is the latest 
 stacktrace I get after my instance crashed hard due to an OS crash.
 {code}
 27.03.2014 14:46:24.045 *ERROR* [qtp981488976-157] 
 org.apache.sling.jcr.webdav.impl.servlets.SlingSimpleWebDavServlet service: 
 Uncaught RuntimeException
 java.nio.BufferOverflowException: null
   at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:352)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.MappedAccess.write(MappedAccess.java:64)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.TarFile.writeEntryHeader(TarFile.java:201)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.TarFile.writeEntry(TarFile.java:134)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore.writeSegment(FileStore.java:387)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.internalWriteStream(SegmentWriter.java:744)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeStream(SegmentWriter.java:711)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.createBlob(SegmentNodeStore.java:174)
   at 
 org.apache.jackrabbit.oak.spi.state.ProxyNodeStore.createBlob(ProxyNodeStore.java:57)
   at 
 org.apache.jackrabbit.oak.core.MutableRoot.createBlob(MutableRoot.java:314)
   at 
 org.apache.jackrabbit.oak.plugins.value.ValueFactoryImpl.createBinaryValue(ValueFactoryImpl.java:286)
   at 
 org.apache.jackrabbit.oak.plugins.value.ValueFactoryImpl.createValue(ValueFactoryImpl.java:143)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.setProperty(NodeImpl.java:455)
   at 
 org.apache.jackrabbit.server.io.DefaultHandler.importData(DefaultHandler.java:237)
   at 
 org.apache.jackrabbit.server.io.DefaultHandler.importContent(DefaultHandler.java:188)
   at 
 org.apache.jackrabbit.server.io.DefaultHandler.importContent(DefaultHandler.java:215)
   at 
 org.apache.sling.jcr.webdav.impl.handler.DefaultHandlerService.importContent(DefaultHandlerService.java:116)
   at 
 org.apache.jackrabbit.server.io.IOManagerImpl.importContent(IOManagerImpl.java:129)
   at 
 org.apache.jackrabbit.webdav.simple.DavResourceImpl.addMember(DavResourceImpl.java:528)
   at 
 org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.doPut(AbstractWebdavServlet.java:629)
   at 
 org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.execute(AbstractWebdavServlet.java:357)
   at 
 org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.service(AbstractWebdavServlet.java:291)
   at 
 org.apache.sling.jcr.webdav.impl.servlets.SlingSimpleWebDavServlet.doService(SlingSimpleWebDavServlet.java:88)
   at 
 org.apache.sling.jcr.webdav.impl.servlets.SlingSimpleWebDavServlet.service(SlingSimpleWebDavServlet.java:67)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1651) Fix oak-solr-core pom dependencies

2014-03-31 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved OAK-1651.
--

Resolution: Fixed

fixed in r1583380

 Fix oak-solr-core pom dependencies
 --

 Key: OAK-1651
 URL: https://issues.apache.org/jira/browse/OAK-1651
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-solr
Affects Versions: 0.19
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 0.20


 mvn dependency:analyze reports a number of undeclared and superfluous 
 dependencies, so it'd be good to clean them up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)