[jira] [Created] (OAK-4476) Option to download all ids available in the datastore in oak-run

2016-06-15 Thread Amit Jain (JIRA)
Amit Jain created OAK-4476:
--

 Summary: Option to download all ids available in the datastore in 
oak-run
 Key: OAK-4476
 URL: https://issues.apache.org/jira/browse/OAK-4476
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: run
Reporter: Amit Jain
Assignee: Amit Jain


Add an option to dump all blob ids available in the datastore in oak-run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4473) MarkSweepGarbageCollector#saveBatchToFile should escape IDs

2016-06-15 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding resolved OAK-4473.
-
Resolution: Duplicate

> MarkSweepGarbageCollector#saveBatchToFile should escape IDs
> ---
>
> Key: OAK-4473
> URL: https://issues.apache.org/jira/browse/OAK-4473
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0.31, 1.4.3, 1.5.3, 1.2.16
>Reporter: Julian Sedding
>Assignee: Julian Sedding
>
> Datastore garbage collection (DS GC) can fail if it encounters IDs containing 
> backslashes. This can happen, e.g., when a file gets uploaded and by mistake 
> its absolute (Windows) path is stored as the file name. 
> This is because IDs are written to temporary files and then sorted. The 
> sorting algorithm assumes that the lines are escaped and throws an exception 
> otherwise.
> {noformat}
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector Blob garbage 
> collection error
> java.lang.IllegalArgumentException: Unexpected char [J] found at 78 of 
> [92c3bcd2270655a9c911bec9f7a4851860f05c79#553941,/content/dam/\\MAPPED_DRIVE\JOHN$\ABC.pdf].
>  Expected '\' or 'r' or 'n
> at 
> org.apache.jackrabbit.oak.commons.sort.EscapeUtils.unescape(EscapeUtils.java:126)
> at 
> org.apache.jackrabbit.oak.commons.sort.EscapeUtils.unescapeLineBreaks(EscapeUtils.java:51)
> at 
> org.apache.jackrabbit.oak.commons.sort.ExternalSort.readLine(ExternalSort.java:633)
> at 
> org.apache.jackrabbit.oak.commons.sort.ExternalSort.sortInBatch(ExternalSort.java:204)
> at 
> org.apache.jackrabbit.oak.commons.sort.ExternalSort.sortInBatch(ExternalSort.java:257)
> at 
> org.apache.jackrabbit.oak.commons.sort.ExternalSort.sortInBatch(ExternalSort.java:159)
> at 
> org.apache.jackrabbit.oak.plugins.blob.GarbageCollectorFileState.sort(GarbageCollectorFileState.java:147)
> at 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.iterateNodeTree(MarkSweepGarbageCollector.java:538)
> at 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.mark(MarkSweepGarbageCollector.java:278)
> at 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.markAndSweep(MarkSweepGarbageCollector.java:248)
> at 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.collectGarbage(MarkSweepGarbageCollector.java:163)
> at 
> org.apache.jackrabbit.oak.plugins.blob.BlobGC$1.call(BlobGC.java:87)
> at 
> org.apache.jackrabbit.oak.plugins.blob.BlobGC$1.call(BlobGC.java:83)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}





[jira] [Resolved] (OAK-4473) MarkSweepGarbageCollector#saveBatchToFile should escape IDs

2016-06-15 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding resolved OAK-4473.
-
Resolution: Fixed

Brilliant, thanks [~chetanm]. It is a duplicate; I didn't find the other ticket.

> MarkSweepGarbageCollector#saveBatchToFile should escape IDs
> ---
>
> Key: OAK-4473
> URL: https://issues.apache.org/jira/browse/OAK-4473
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0.31, 1.4.3, 1.5.3, 1.2.16
>Reporter: Julian Sedding
>Assignee: Julian Sedding
>
> Datastore garbage collection (DS GC) can fail if it encounters IDs containing 
> backslashes. This can happen, e.g., when a file gets uploaded and by mistake 
> its absolute (Windows) path is stored as the file name. 
> This is because IDs are written to temporary files and then sorted. The 
> sorting algorithm assumes that the lines are escaped and throws an exception 
> otherwise.
> {noformat}
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector Blob garbage 
> collection error
> java.lang.IllegalArgumentException: Unexpected char [J] found at 78 of 
> [92c3bcd2270655a9c911bec9f7a4851860f05c79#553941,/content/dam/\\MAPPED_DRIVE\JOHN$\ABC.pdf].
>  Expected '\' or 'r' or 'n
> at 
> org.apache.jackrabbit.oak.commons.sort.EscapeUtils.unescape(EscapeUtils.java:126)
> at 
> org.apache.jackrabbit.oak.commons.sort.EscapeUtils.unescapeLineBreaks(EscapeUtils.java:51)
> at 
> org.apache.jackrabbit.oak.commons.sort.ExternalSort.readLine(ExternalSort.java:633)
> at 
> org.apache.jackrabbit.oak.commons.sort.ExternalSort.sortInBatch(ExternalSort.java:204)
> at 
> org.apache.jackrabbit.oak.commons.sort.ExternalSort.sortInBatch(ExternalSort.java:257)
> at 
> org.apache.jackrabbit.oak.commons.sort.ExternalSort.sortInBatch(ExternalSort.java:159)
> at 
> org.apache.jackrabbit.oak.plugins.blob.GarbageCollectorFileState.sort(GarbageCollectorFileState.java:147)
> at 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.iterateNodeTree(MarkSweepGarbageCollector.java:538)
> at 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.mark(MarkSweepGarbageCollector.java:278)
> at 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.markAndSweep(MarkSweepGarbageCollector.java:248)
> at 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.collectGarbage(MarkSweepGarbageCollector.java:163)
> at 
> org.apache.jackrabbit.oak.plugins.blob.BlobGC$1.call(BlobGC.java:87)
> at 
> org.apache.jackrabbit.oak.plugins.blob.BlobGC$1.call(BlobGC.java:83)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}





[jira] [Commented] (OAK-4451) Implement a proper template cache

2016-06-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331925#comment-15331925
 ] 

Michael Dürig commented on OAK-4451:


I pushed an initial implementation of the template cache to my private GitHub repository: 
https://github.com/mduerig/jackrabbit-oak/commit/0037b886ff8ca5e5945ad37d56d3b8ab084c36f9

[~frm], [~alex.parvulescu], kindly review. 

There are still some loose ends to tie up: the calculations for entry weights and 
sizes in the [TemplateCache | 
https://github.com/mduerig/jackrabbit-oak/commit/0037b886ff8ca5e5945ad37d56d3b8ab084c36f9#diff-ae41fca59d0da3f5561baf8df3a72631R22]
 still need to be done. Maybe [~tmueller] could help out here, as he was the 
initial implementer of this cache.

The [TemplateCacheTest | 
https://github.com/mduerig/jackrabbit-oak/commit/0037b886ff8ca5e5945ad37d56d3b8ab084c36f9#diff-545902106810ed9f3d67b331485c8073R22]
 needs some more tests (currently 0 ;-) )
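A minimal sketch of the kind of weight-bounded cache described above (illustrative assumptions only: the class name, the LRU eviction policy and the caller-supplied weigher are not the actual implementation, which follows the {{StringCache}} approach from OAK-3007):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.ToLongFunction;

// Illustrative sketch only: a weight-bounded LRU cache. A real template
// cache would use an estimated template size (e.g. number of property
// templates plus a fixed overhead) as the weight of each entry.
public class TemplateCacheSketch<K, V> {
    private final long maxWeight;
    private final ToLongFunction<V> weigher;
    private long currentWeight;
    // access-order LinkedHashMap iterates from the least recently used entry
    private final LinkedHashMap<K, V> map = new LinkedHashMap<>(16, 0.75f, true);

    public TemplateCacheSketch(long maxWeight, ToLongFunction<V> weigher) {
        this.maxWeight = maxWeight;
        this.weigher = weigher;
    }

    public synchronized void put(K key, V value) {
        V old = map.put(key, value);
        if (old != null) {
            currentWeight -= weigher.applyAsLong(old);
        }
        currentWeight += weigher.applyAsLong(value);
        // evict least recently used entries until within budget, always
        // keeping at least the entry just added (it is last in access order)
        Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
        while (currentWeight > maxWeight && map.size() > 1) {
            Map.Entry<K, V> eldest = it.next();
            currentWeight -= weigher.applyAsLong(eldest.getValue());
            it.remove();
        }
    }

    public synchronized V get(K key) {
        return map.get(key);
    }
}
```

The point of the weigher is exactly the open question above: entries only bound memory usage if each template's weight approximates its retained size.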

> Implement a proper template cache
> -
>
> Key: OAK-4451
> URL: https://issues.apache.org/jira/browse/OAK-4451
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Michael Dürig
>  Labels: cache, monitoring, production
> Fix For: 1.6
>
>
> The template cache is currently just a map per segment. This is problematic 
> in various ways: 
> * A segment needs to be in memory and probably loaded first only to read 
> something from the cache. 
> * No monitoring or instrumentation of the cache
> * No control over memory consumption 
> We should therefore come up with a proper template cache implementation in 
> the same way we have done for strings ({{StringCache}}) in OAK-3007. 
> Analogously that cache should be owned by the {{CachingSegmentReader}}. 





[jira] [Resolved] (OAK-4391) Dynamic Membership for External Authentication

2016-06-15 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-4391.
-
   Resolution: Fixed
Fix Version/s: 1.5.4

> Dynamic Membership for External Authentication
> --
>
> Key: OAK-4391
> URL: https://issues.apache.org/jira/browse/OAK-4391
> Project: Jackrabbit Oak
>  Issue Type: Epic
>  Components: auth-external
>Reporter: angela
>Assignee: angela
> Fix For: 1.5.4
>
>






[jira] [Resolved] (OAK-4218) Base SyncMBeanImpl on Oak API

2016-06-15 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-4218.
-
   Resolution: Fixed
Fix Version/s: 1.5.4

Committed revision 1748603

> Base SyncMBeanImpl on Oak API
> -
>
> Key: OAK-4218
> URL: https://issues.apache.org/jira/browse/OAK-4218
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Reporter: angela
>Assignee: angela
> Fix For: 1.5.4
>
> Attachments: OAK-4218_initialdraft.patch
>
>
> While looking at the oak-auth-external code base I found that 
> {{SyncMBeanImpl}} is based on the JCR API, while the sync called during 
> authentication relies solely on the API defined by oak-core.
> This not only limits the implementation to operations that can be executed on 
> the JCR API but also introduces the risk of inconsistencies.
> As a matter of fact {{ExternalLoginModuleTestBase.createMBean}} also lists 
> this as limitation and TODO:
> {quote}
> // todo: how to retrieve JCR repository here? maybe we should base 
> the sync mbean on oak directly.
> {quote}





[jira] [Comment Edited] (OAK-4471) More compact storage format for Documents

2016-06-15 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331518#comment-15331518
 ] 

Chetan Mehrotra edited comment on OAK-4471 at 6/15/16 3:30 PM:
---

*Use dictionary for Property Names*

Under this we can use a dictionary for commonly occurring property names. Below 
are some stats from a repository with 

* 14.5 M documents in the nodes collection; ~8 M documents are for index data, 
while only ~2 M are actual nodes!
* 26 M property names
* 8475 unique property names
* 290M - Total size of property names (assuming 8 bits per char)
* 8755M - Total repo size

Top property name stats

{noformat}
+-+
|Count  |Name|% by count|% by size|
+-+
|3033972|jcr:lastModified|11.65 |15.94|
|2573208|jcr:data|9.88  |6.76 |
|2505308|uniqueKey   |9.62  |7.40 |
|2286350|blobSize|8.78  |6.00 |
|1706460|match   |6.55  |2.80 |
|1484283|jcr:primaryType |5.70  |7.31 |
|969596 |jcr:created |3.72  |3.50 |
|933960 |jcr:createdBy   |3.59  |3.99 |
|921199 |sling:resourceType  |3.54  |5.44 |
|702208 |:childOrder |2.70  |2.54 |
|601959 |entry   |2.31  |0.99 |
|600299 |jcr:uuid|2.31  |1.58 |
|481036 |jcr:lastModifiedBy  |1.85  |2.84 |
|477625 |jcr:frozenPrimaryType   |1.83  |3.29 |
|477625 |jcr:frozenUuid  |1.83  |2.20 |
|357201 |text|1.37  |0.47 |
|351712 |textIsRich  |1.35  |1.15 |
|228623 |event\djob\dqueued\dtime|0.88  |1.80 |
+-+
{noformat}

Based on the above, we can say that, for now, using a dictionary for property 
names would not provide much benefit.
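A quick sanity check of that conclusion, using the stats above (the 2-byte token size is an assumption for a best-case dictionary):

```java
import java.util.Locale;

// Back-of-envelope check: even a best-case dictionary that replaces every
// property name occurrence with a 2-byte token (an assumption) saves only a
// small fraction of the total repository size.
public class DictionarySavings {

    // Returns the maximum possible saving as a percentage of total repo size.
    public static double savingsPercent(long nameOccurrences, long totalNameBytes,
                                        long repoBytes, int tokenBytes) {
        long saved = totalNameBytes - nameOccurrences * (long) tokenBytes;
        return 100.0 * saved / repoBytes;
    }

    public static void main(String[] args) {
        // 26 M name occurrences, 290 MB of name bytes, 8755 MB repository
        double pct = savingsPercent(26_000_000L, 290_000_000L, 8_755_000_000L, 2);
        System.out.println(String.format(Locale.ROOT,
                "max savings = %.1f%% of repo size", pct));
    }
}
```

With the numbers above this comes out to roughly 2.7% of the repository size, which supports the conclusion that a name dictionary is not worth the added complexity.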


was (Author: chetanm):
*Use dictionary for Property Names*

Under this we can use a dictionary for commonly occurring property names. Below 
are some stats from a repository with 

* 14.5 M documents in nodes collection
* 26 M property names
* 8475 unique property names
* 290M - Total size of property names (assuming 8bits per char)
* 8755M - Total repo size

Top property name stats

{noformat}
+-+
|Count  |Name|% by count|% by size|
+-+
|3033972|jcr:lastModified|11.65 |15.94|
|2573208|jcr:data|9.88  |6.76 |
|2505308|uniqueKey   |9.62  |7.40 |
|2286350|blobSize|8.78  |6.00 |
|1706460|match   |6.55  |2.80 |
|1484283|jcr:primaryType |5.70  |7.31 |
|969596 |jcr:created |3.72  |3.50 |
|933960 |jcr:createdBy   |3.59  |3.99 |
|921199 |sling:resourceType  |3.54  |5.44 |
|702208 |:childOrder |2.70  |2.54 |
|601959 |entry   |2.31  |0.99 |
|600299 |jcr:uuid|2.31  |1.58 |
|481036 |jcr:lastModifiedBy  |1.85  |2.84 |
|477625 |jcr:frozenPrimaryType   |1.83  |3.29 |
|477625 |jcr:frozenUuid  |1.83  |2.20 |
|357201 |text|1.37  |0.47 |
|351712 |textIsRich  |1.35  |1.15 |
|228623 |event\djob\dqueued\dtime|0.88  |1.80 |
+-+
{noformat}

Based on above we can say for now using dictionary for property names would not 
provide much benefit!

> More compact storage format for Documents
> -
>
> Key: OAK-4471
> URL: https://issues.apache.org/jira/browse/OAK-4471
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: performance
> Fix For: 1.6
>
> Attachments: node-doc-size2.png
>
>
> The aim of this task is to evaluate the storage cost of the current approach 
> for various Documents in DocumentNodeStore, and then to evaluate possible 
> alternatives to see if we can get a significant reduction in storage size.
> Possible areas of improvement
> # NodeDocument
> ## Use binary encoding for property values - Currently property values are 
> stored in JSON encoding, i.e. arrays and single values are encoded in JSON 
> along with their type
> ## Use binary encoding for Revision values - In a given document, Revision 
> instances are a major part of the storage size. A binary encoding might 
> provide more compact storage
> # Journal - The journal entries can be stored in compressed form
> 

[jira] [Commented] (OAK-4473) MarkSweepGarbageCollector#saveBatchToFile should escape IDs

2016-06-15 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331904#comment-15331904
 ] 

Chetan Mehrotra commented on OAK-4473:
--

[~jsedding] This looks like a duplicate of OAK-4441, which has recently been 
fixed by [~amitjain].

> MarkSweepGarbageCollector#saveBatchToFile should escape IDs
> ---
>
> Key: OAK-4473
> URL: https://issues.apache.org/jira/browse/OAK-4473
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0.31, 1.4.3, 1.5.3, 1.2.16
>Reporter: Julian Sedding
>Assignee: Julian Sedding
>
> Datastore garbage collection (DS GC) can fail if it encounters IDs containing 
> backslashes. This can happen, e.g., when a file gets uploaded and by mistake 
> its absolute (Windows) path is stored as the file name. 
> This is because IDs are written to temporary files and then sorted. The 
> sorting algorithm assumes that the lines are escaped and throws an exception 
> otherwise.
> {noformat}
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector Blob garbage 
> collection error
> java.lang.IllegalArgumentException: Unexpected char [J] found at 78 of 
> [92c3bcd2270655a9c911bec9f7a4851860f05c79#553941,/content/dam/\\MAPPED_DRIVE\JOHN$\ABC.pdf].
>  Expected '\' or 'r' or 'n
> at 
> org.apache.jackrabbit.oak.commons.sort.EscapeUtils.unescape(EscapeUtils.java:126)
> at 
> org.apache.jackrabbit.oak.commons.sort.EscapeUtils.unescapeLineBreaks(EscapeUtils.java:51)
> at 
> org.apache.jackrabbit.oak.commons.sort.ExternalSort.readLine(ExternalSort.java:633)
> at 
> org.apache.jackrabbit.oak.commons.sort.ExternalSort.sortInBatch(ExternalSort.java:204)
> at 
> org.apache.jackrabbit.oak.commons.sort.ExternalSort.sortInBatch(ExternalSort.java:257)
> at 
> org.apache.jackrabbit.oak.commons.sort.ExternalSort.sortInBatch(ExternalSort.java:159)
> at 
> org.apache.jackrabbit.oak.plugins.blob.GarbageCollectorFileState.sort(GarbageCollectorFileState.java:147)
> at 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.iterateNodeTree(MarkSweepGarbageCollector.java:538)
> at 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.mark(MarkSweepGarbageCollector.java:278)
> at 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.markAndSweep(MarkSweepGarbageCollector.java:248)
> at 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.collectGarbage(MarkSweepGarbageCollector.java:163)
> at 
> org.apache.jackrabbit.oak.plugins.blob.BlobGC$1.call(BlobGC.java:87)
> at 
> org.apache.jackrabbit.oak.plugins.blob.BlobGC$1.call(BlobGC.java:83)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}





[jira] [Created] (OAK-4474) Finalise SegmentCache

2016-06-15 Thread JIRA
Michael Dürig created OAK-4474:
--

 Summary: Finalise SegmentCache
 Key: OAK-4474
 URL: https://issues.apache.org/jira/browse/OAK-4474
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: segment-tar
Reporter: Michael Dürig
 Fix For: 1.6


{{SegmentCache}} needs documentation, management instrumentation, monitoring, 
tests and logging. 





[jira] [Updated] (OAK-4445) Collect write statistics

2016-06-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4445:
---
Assignee: Francesco Mari

> Collect write statistics 
> -
>
> Key: OAK-4445
> URL: https://issues.apache.org/jira/browse/OAK-4445
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Francesco Mari
>  Labels: compaction, gc, monitoring
> Fix For: 1.6
>
>
> We should come up with a good set of write statistics to collect, such as the 
> number of records/nodes/properties/bytes. Additionally, those statistics 
> should be collected for normal operation vs. compaction-related operations. 
> This would allow us to more precisely analyse the effect of compaction on the 
> overall system. 
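One possible shape for such statistics, as a hedged sketch (the class, counter names and the normal-vs-compaction granularity are assumptions, not a design decision):

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch: per-mode write counters so that normal operation and
// compaction-related writes can be reported and compared separately.
public class WriteStatsSketch {

    public enum Mode { NORMAL, COMPACTION }

    private final LongAdder[] records = { new LongAdder(), new LongAdder() };
    private final LongAdder[] bytes   = { new LongAdder(), new LongAdder() };

    // Called once per record written, with the record size in bytes.
    public void recordWritten(Mode mode, long sizeBytes) {
        records[mode.ordinal()].increment();
        bytes[mode.ordinal()].add(sizeBytes);
    }

    public long recordCount(Mode mode) { return records[mode.ordinal()].sum(); }

    public long byteCount(Mode mode)   { return bytes[mode.ordinal()].sum(); }
}
```

LongAdder keeps the hot write path cheap under contention; node/property counters would follow the same pattern.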





[jira] [Created] (OAK-4473) MarkSweepGarbageCollector#saveBatchToFile should escape IDs

2016-06-15 Thread Julian Sedding (JIRA)
Julian Sedding created OAK-4473:
---

 Summary: MarkSweepGarbageCollector#saveBatchToFile should escape 
IDs
 Key: OAK-4473
 URL: https://issues.apache.org/jira/browse/OAK-4473
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.2.16, 1.5.3, 1.4.3, 1.0.31
Reporter: Julian Sedding
Assignee: Julian Sedding


Datastore garbage collection (DS GC) can fail if it encounters IDs containing 
backslashes. This can happen, e.g., when a file gets uploaded and by mistake its 
absolute (Windows) path is stored as the file name. 

This is because IDs are written to temporary files and then sorted. The sorting 
algorithm assumes that the lines are escaped and throws an exception otherwise.

{noformat}
org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector Blob garbage 
collection error
java.lang.IllegalArgumentException: Unexpected char [J] found at 78 of 
[92c3bcd2270655a9c911bec9f7a4851860f05c79#553941,/content/dam/\\MAPPED_DRIVE\JOHN$\ABC.pdf].
 Expected '\' or 'r' or 'n
at 
org.apache.jackrabbit.oak.commons.sort.EscapeUtils.unescape(EscapeUtils.java:126)
at 
org.apache.jackrabbit.oak.commons.sort.EscapeUtils.unescapeLineBreaks(EscapeUtils.java:51)
at 
org.apache.jackrabbit.oak.commons.sort.ExternalSort.readLine(ExternalSort.java:633)
at 
org.apache.jackrabbit.oak.commons.sort.ExternalSort.sortInBatch(ExternalSort.java:204)
at 
org.apache.jackrabbit.oak.commons.sort.ExternalSort.sortInBatch(ExternalSort.java:257)
at 
org.apache.jackrabbit.oak.commons.sort.ExternalSort.sortInBatch(ExternalSort.java:159)
at 
org.apache.jackrabbit.oak.plugins.blob.GarbageCollectorFileState.sort(GarbageCollectorFileState.java:147)
at 
org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.iterateNodeTree(MarkSweepGarbageCollector.java:538)
at 
org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.mark(MarkSweepGarbageCollector.java:278)
at 
org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.markAndSweep(MarkSweepGarbageCollector.java:248)
at 
org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector.collectGarbage(MarkSweepGarbageCollector.java:163)
at org.apache.jackrabbit.oak.plugins.blob.BlobGC$1.call(BlobGC.java:87)
at org.apache.jackrabbit.oak.plugins.blob.BlobGC$1.call(BlobGC.java:83)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}
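The fix implied by the title is to escape each ID before it is written to the batch file, so that the external sort's unescape step always sees valid input. A minimal self-contained sketch of such backslash/line-break escaping (illustrative only; Oak's real code is {{EscapeUtils}} in oak-commons, whose details may differ):

```java
// Minimal sketch of backslash/line-break escaping for ID lines written to
// the sort's temporary files. Illustrative only, not Oak's EscapeUtils.
public class IdEscaping {

    static String escape(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '\\': sb.append("\\\\"); break;
                case '\n': sb.append("\\n");  break;
                case '\r': sb.append("\\r");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    static String unescape(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '\\') {
                // like the stack trace above, unescaping rejects input that
                // was never escaped (e.g. a raw Windows path inside an ID)
                char next = s.charAt(++i);
                switch (next) {
                    case '\\': sb.append('\\'); break;
                    case 'n':  sb.append('\n'); break;
                    case 'r':  sb.append('\r'); break;
                    default: throw new IllegalArgumentException(
                            "Unexpected char [" + next + "] found at " + i + " of [" + s + "]");
                }
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

Escaping on write and unescaping after the sort round-trips any ID, including ones containing backslashes.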





[jira] [Comment Edited] (OAK-4409) RDB*Store: bump up recommended DB2 version to 10.5

2016-06-15 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331549#comment-15331549
 ] 

Julian Reschke edited comment on OAK-4409 at 6/15/16 12:33 PM:
---

trunk: http://svn.apache.org/r1748553
1.4: http://svn.apache.org/r1748565
1.2: http://svn.apache.org/r1748569
1.0: http://svn.apache.org/r1748571



was (Author: reschke):
trunk: http://svn.apache.org/r1748553
1.4: http://svn.apache.org/r1748565
1.2: http://svn.apache.org/r1748569


> RDB*Store: bump up recommended DB2 version to 10.5
> --
>
> Key: OAK-4409
> URL: https://issues.apache.org/jira/browse/OAK-4409
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.31, 1.4.3, 1.5.3, 1.2.16
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.5.4, 1.0.32, 1.4.4, 1.2.17
>
>
> AFAIU, nobody is testing nor running with DB2 versions older than 10.5 
> anymore; our diagnostics thus should check for 10.5 instead of 10.1.





[jira] [Updated] (OAK-4409) RDB*Store: bump up recommended DB2 version to 10.5

2016-06-15 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4409:

Fix Version/s: 1.0.32

> RDB*Store: bump up recommended DB2 version to 10.5
> --
>
> Key: OAK-4409
> URL: https://issues.apache.org/jira/browse/OAK-4409
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.31, 1.4.3, 1.5.3, 1.2.16
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.5.4, 1.0.32, 1.4.4, 1.2.17
>
>
> AFAIU, nobody is testing nor running with DB2 versions older than 10.5 
> anymore; our diagnostics thus should check for 10.5 instead of 10.1.





[jira] [Updated] (OAK-4409) RDB*Store: bump up recommended DB2 version to 10.5

2016-06-15 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4409:

Labels:   (was: candidate_oak_1_0)

> RDB*Store: bump up recommended DB2 version to 10.5
> --
>
> Key: OAK-4409
> URL: https://issues.apache.org/jira/browse/OAK-4409
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.31, 1.4.3, 1.5.3, 1.2.16
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.5.4, 1.0.32, 1.4.4, 1.2.17
>
>
> AFAIU, nobody is testing nor running with DB2 versions older than 10.5 
> anymore; our diagnostics thus should check for 10.5 instead of 10.1.





[jira] [Comment Edited] (OAK-4409) RDB*Store: bump up recommended DB2 version to 10.5

2016-06-15 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331549#comment-15331549
 ] 

Julian Reschke edited comment on OAK-4409 at 6/15/16 12:25 PM:
---

trunk: http://svn.apache.org/r1748553
1.4: http://svn.apache.org/r1748565
1.2: http://svn.apache.org/r1748569



was (Author: reschke):
trunk: http://svn.apache.org/r1748553
1.4: http://svn.apache.org/r1748565

> RDB*Store: bump up recommended DB2 version to 10.5
> --
>
> Key: OAK-4409
> URL: https://issues.apache.org/jira/browse/OAK-4409
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.31, 1.4.3, 1.5.3, 1.2.16
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>  Labels: candidate_oak_1_0
> Fix For: 1.5.4, 1.4.4, 1.2.17
>
>
> AFAIU, nobody is testing nor running with DB2 versions older than 10.5 
> anymore; our diagnostics thus should check for 10.5 instead of 10.1.





[jira] [Updated] (OAK-4409) RDB*Store: bump up recommended DB2 version to 10.5

2016-06-15 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4409:

Labels: candidate_oak_1_0  (was: candidate_oak_1_0 candidate_oak_1_2)

> RDB*Store: bump up recommended DB2 version to 10.5
> --
>
> Key: OAK-4409
> URL: https://issues.apache.org/jira/browse/OAK-4409
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.31, 1.4.3, 1.5.3, 1.2.16
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>  Labels: candidate_oak_1_0
> Fix For: 1.5.4, 1.4.4, 1.2.17
>
>
> AFAIU, nobody is testing nor running with DB2 versions older than 10.5 
> anymore; our diagnostics thus should check for 10.5 instead of 10.1.





[jira] [Updated] (OAK-4409) RDB*Store: bump up recommended DB2 version to 10.5

2016-06-15 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4409:

Fix Version/s: 1.2.17

> RDB*Store: bump up recommended DB2 version to 10.5
> --
>
> Key: OAK-4409
> URL: https://issues.apache.org/jira/browse/OAK-4409
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.31, 1.4.3, 1.5.3, 1.2.16
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>  Labels: candidate_oak_1_0
> Fix For: 1.5.4, 1.4.4, 1.2.17
>
>
> AFAIU, nobody is testing nor running with DB2 versions older than 10.5 
> anymore; our diagnostics thus should check for 10.5 instead of 10.1.





[jira] [Comment Edited] (OAK-4409) RDB*Store: bump up recommended DB2 version to 10.5

2016-06-15 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331549#comment-15331549
 ] 

Julian Reschke edited comment on OAK-4409 at 6/15/16 12:09 PM:
---

trunk: http://svn.apache.org/r1748553
1.4: http://svn.apache.org/r1748565


was (Author: reschke):
trunk: http://svn.apache.org/r1748553


> RDB*Store: bump up recommended DB2 version to 10.5
> --
>
> Key: OAK-4409
> URL: https://issues.apache.org/jira/browse/OAK-4409
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.31, 1.4.3, 1.5.3, 1.2.16
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.5.4, 1.4.4
>
>
> AFAIU, nobody is testing nor running with DB2 versions older than 10.5 
> anymore; our diagnostics thus should check for 10.5 instead of 10.1.





[jira] [Updated] (OAK-4409) RDB*Store: bump up recommended DB2 version to 10.5

2016-06-15 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4409:

Labels: candidate_oak_1_0 candidate_oak_1_2  (was: candidate_oak_1_0 
candidate_oak_1_2 candidate_oak_1_4)

> RDB*Store: bump up recommended DB2 version to 10.5
> --
>
> Key: OAK-4409
> URL: https://issues.apache.org/jira/browse/OAK-4409
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.31, 1.4.3, 1.5.3, 1.2.16
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.5.4, 1.4.4
>
>
> AFAIU, nobody is testing nor running with DB2 versions older than 10.5 
> anymore; our diagnostics thus should check for 10.5 instead of 10.1.





[jira] [Updated] (OAK-4409) RDB*Store: bump up recommended DB2 version to 10.5

2016-06-15 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4409:

Fix Version/s: 1.4.4

> RDB*Store: bump up recommended DB2 version to 10.5
> --
>
> Key: OAK-4409
> URL: https://issues.apache.org/jira/browse/OAK-4409
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.31, 1.4.3, 1.5.3, 1.2.16
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.5.4, 1.4.4
>
>
> AFAIU, nobody is testing nor running with DB2 versions older than 10.5 
> anymore; our diagnostics thus should check for 10.5 instead of 10.1.





[jira] [Updated] (OAK-3865) New strategy to optimize secondary reads

2016-06-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3865:
---
Attachment: (was: OAK-3865.patch)

> New strategy to optimize secondary reads
> 
>
> Key: OAK-3865
> URL: https://issues.apache.org/jira/browse/OAK-3865
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Tomek Rękawek
>  Labels: performance
> Fix For: 1.6
>
> Attachments: OAK-3865.patch, clustered-oak-setup-improvements.pdf, 
> diagram.png
>
>
> *Introduction*
> In the current trunk we'll only read document _D_ from the secondary instance 
> if:
> (1) we have the parent _P_ of document _D_ cached and
> (2) the parent hasn't been modified in 6 hours.
> OAK-2106 tried to optimise (2) by estimating the lag using MongoDB replica 
> stats. That was unreliable, so the second approach was to read the last 
> revisions directly from each Mongo instance. If the modification date of _P_ 
> is before the last revisions on all secondary Mongo instances, then the 
> secondary can be used.
> The main problem with this approach is that we still need _P_ to be in the 
> cache. I think we need another way to optimise secondary reads, as right now 
> only about 3% of requests connect to the secondary, which is bad especially 
> for the global-clustering case (Mongo and Oak instances across the globe). 
> The optimisation provided in OAK-2106 doesn't make things much better and may 
> introduce some consistency issues.
> *Proposal - tldr version*
> Oak will remember the last revision it has ever seen. In the same time, it'll 
> query each secondary Mongo instance, asking what's the available stored root 
> revision. If all secondary instances have a root revision >= last revision 
> seen by a given Oak instance, it's safe to use the secondary read preference.
> *Proposal*
> I had following constraints in mind preparing this:
> 1. Let's assume we have a sequence of commits with revisions _R1_, _R2_ and 
> _R3_ modifying nodes _N1_, _N2_ and _N3_. If we already read the _N1_ from 
> revision _R2_ then reading from a secondary shouldn't result in getting older 
> revision (eg. _R1_).
> 2. If an Oak instance modifies a document, then reading from a secondary 
> shouldn't result in getting the old version (before modification).
> So, let's have two maps:
> * _M1_ the most recent document revision read from the Mongo for each cluster 
> id,
> * _M2_ the oldest last rev value for root document for each cluster id read 
> from all the secondary instances.
> Maintaining _M1_:
> For every read from the Mongo we'll check if the lastRev for some cluster id 
> is newer than _M1_ entry. If so, we'll update _M1_. For all writes we'll add 
> the saved revision id with the current cluster id in _M1_.
> Maintaining _M2_:
> It should be periodically updated. Such mechanism is already prepared in the 
> OAK-2106 patch.
> The method deciding whether we can read from the secondary instance should 
> compare two maps. If all entries in _M2_ are newer than _M1_ it means that 
> the secondary instances contains at least as new repository state as we 
> already accessed and therefore it's safe to read from secondary.
> Regarding the documents modified by the local Oak instance, we should 
> remember all the locally-modified paths and their revisions and use primary 
> Mongo to access them as long as the changes are not replicated to all the 
> secondaries. When the secondaries are up to date with the modification, we 
> can remove it from the local-changes collections.
> Attached image diagram.png presents the idea.
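The M1/M2 comparison described in the proposal could be sketched as follows. This is an illustrative Java sketch only; the class, method names, and the map-of-timestamps representation are invented for this example and are not Oak's actual code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the M1/M2 check: M1 holds the newest revision
// timestamp seen per cluster id, M2 the oldest root lastRev per cluster id
// observed across all secondaries. Names are illustrative, not Oak API.
public class SecondaryReadCheck {

    // true when everything we have already seen (M1) is covered by the
    // state available on every secondary (M2)
    public static boolean canReadFromSecondary(Map<Integer, Long> m1,
                                               Map<Integer, Long> m2) {
        for (Map.Entry<Integer, Long> seen : m1.entrySet()) {
            Long onSecondaries = m2.get(seen.getKey());
            // unknown cluster id or older state on a secondary -> use primary
            if (onSecondaries == null || onSecondaries < seen.getValue()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<Integer, Long> m1 = new HashMap<>();
        Map<Integer, Long> m2 = new HashMap<>();
        m1.put(1, 100L);   // we saw revision 100 from cluster node 1
        m2.put(1, 120L);   // all secondaries already hold revision 120
        System.out.println(canReadFromSecondary(m1, m2));   // true

        m1.put(2, 200L);   // a write from node 2 not yet replicated
        System.out.println(canReadFromSecondary(m1, m2));   // false
    }
}
```

If any cluster id known to M1 is missing from M2, or is older there, the read has to fall back to the primary.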



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3865) New strategy to optimize secondary reads

2016-06-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331562#comment-15331562
 ] 

Tomek Rękawek commented on OAK-3865:


[~mreutegg], thanks for the quick review. I've updated the patch with the 
following changes:

* the {{LocalChanges.add()}} and other methods now use a custom check that all 
the revisions in one vector are greater than or equal to the revisions in 
another vector ({{Utils.isGreaterOrEquals}})
* the secondaryCredentials option has been removed; the authentication 
configured in the Mongo URI will be used to connect to the secondaries as well
* delayed instances should be hidden as well - such nodes won't be considered 
by the ReplicaSetInfo

Agreed on being extra careful with such substantial changes. Do you have any 
case in mind in which this approach may lead to an inconsistency?
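The element-wise comparison referred to above (the Utils.isGreaterOrEquals idea) could look roughly like this. The map-based signature is an assumption made for the sketch; it is not the actual Oak implementation.

```java
import java.util.Map;

// Illustrative sketch only: vector A is "greater or equal" to vector B when,
// for every cluster id present in B, A holds a revision at least as new.
// Note this defines a partial order, unlike a plain compareTo().
public class RevisionVectors {

    public static boolean isGreaterOrEquals(Map<Integer, Long> a,
                                            Map<Integer, Long> b) {
        for (Map.Entry<Integer, Long> entry : b.entrySet()) {
            Long ours = a.get(entry.getKey());
            if (ours == null || ours < entry.getValue()) {
                return false;   // at least one revision in B is newer
            }
        }
        return true;
    }
}
```

Because this is a partial order, two vectors can be incomparable (each newer for a different cluster id), which is exactly why compareTo() cannot be used to decide "happened before".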




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3865) New strategy to optimize secondary reads

2016-06-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3865:
---
Attachment: OAK-3865.patch




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4409) RDB*Store: bump up recommended DB2 version to 10.5

2016-06-15 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-4409.
-
Resolution: Fixed

trunk: http://svn.apache.org/r1748553


> RDB*Store: bump up recommended DB2 version to 10.5
> --
>
> Key: OAK-4409
> URL: https://issues.apache.org/jira/browse/OAK-4409
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.31, 1.4.3, 1.5.3, 1.2.16
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.5.4
>
>
> AFAIU, nobody is testing or running with DB2 versions older than 10.5 
> anymore; our diagnostics should thus check for 10.5 instead of 10.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4409) RDB*Store: bump up recommended DB2 version to 10.5

2016-06-15 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4409:

Fix Version/s: 1.5.4




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4471) More compact storage format for Documents

2016-06-15 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-4471:
-
Attachment: node-doc-size2.png

*Node Document Size*

!node-doc-size2.png!

The histogram above shows the size distribution of NodeDocuments, excluding 
documents under the property index, which are almost all of size ~525 bytes.

{noformat}
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
    231     788    1354    1223    1581    2855
{noformat}

> More compact storage format for Documents
> -
>
> Key: OAK-4471
> URL: https://issues.apache.org/jira/browse/OAK-4471
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: performance
> Fix For: 1.6
>
> Attachments: node-doc-size2.png
>
>
> The aim of this task is to evaluate the storage cost of the current approach 
> for various Documents in the DocumentNodeStore, and then to evaluate possible 
> alternatives to see if we can get a significant reduction in storage size.
> Possible areas of improvement
> # NodeDocument
> ## Use binary encoding for property values - currently property values are 
> stored in JSON encoding, i.e. arrays and single values are encoded in JSON 
> along with their type
> ## Use binary encoding for Revision values - in a given document, Revision 
> instances are a major part of the storage size. A binary encoding might 
> provide more compact storage
> # Journal - the journal entries can be stored in compressed form
> Any new approach should support working with existing setups, i.e. provide a 
> gradual change in storage format.
> *Possible Benefits*
> More compact storage would help in the following ways
> # Lower memory footprint of Documents in Mongo and RDB
> # Lower memory footprint of in-memory NodeDocument instances - e.g. property 
> values stored in binary format would consume less memory
> # Reduction in IO over the wire - this should reduce latency in, say, 
> distributed deployments where Oak has to talk to a remote primary
> Note that before doing any such change we must analyze the gains. Any change 
> in encoding would make interpreting stored data harder and also represents a 
> significant change in the stored data, where we need to be careful not to 
> introduce any bugs!
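The size argument for binary encoding can be illustrated with a toy comparison of the same multi-valued long property stored as a JSON-style string versus a length-prefixed binary form. Both encodings here are invented for the sketch and are not Oak's actual storage format.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Toy illustration: JSON-encoded string vs. a fixed-width binary encoding
// for a multi-valued long property. Not Oak's real format.
public class EncodingSize {

    public static byte[] jsonEncode(long[] values) {
        // e.g. "[1466000000000, 1466000001000]" as UTF-8 bytes
        return Arrays.toString(values).getBytes(StandardCharsets.UTF_8);
    }

    public static byte[] binaryEncode(long[] values) {
        ByteBuffer buf = ByteBuffer.allocate(4 + 8 * values.length);
        buf.putInt(values.length);   // count prefix
        for (long v : values) {
            buf.putLong(v);          // fixed 8 bytes per value
        }
        return buf.array();
    }

    public static void main(String[] args) {
        long[] timestamps = {1466000000000L, 1466000001000L, 1466000002000L};
        System.out.println("json:   " + jsonEncode(timestamps).length + " bytes");
        System.out.println("binary: " + binaryEncode(timestamps).length + " bytes");
    }
}
```

For these three epoch-millisecond values the JSON form needs 45 bytes versus 28 for the binary form; the gap grows with value count, which is the kind of gain the task proposes to measure before committing to a format change.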



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4409) RDB*Store: bump up recommended DB2 version to 10.5

2016-06-15 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4409:

Labels: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4  (was: )




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4471) More compact storage format for Documents

2016-06-15 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331518#comment-15331518
 ] 

Chetan Mehrotra commented on OAK-4471:
--

*Use dictionary for Property Names*

Under this approach we would use a dictionary for commonly occurring property 
names. Below are some stats from a repository with:

* 14.5 M documents in the nodes collection
* 26 M property names
* 8475 unique property names
* 290M - total size of property names (assuming 8 bits per char)
* 8755M - total repo size

Top property name stats

{noformat}
+-------+------------------------+----------+---------+
|Count  |Name                    |% by count|% by size|
+-------+------------------------+----------+---------+
|3033972|jcr:lastModified        |11.65     |15.94    |
|2573208|jcr:data                |9.88      |6.76     |
|2505308|uniqueKey               |9.62      |7.40     |
|2286350|blobSize                |8.78      |6.00     |
|1706460|match                   |6.55      |2.80     |
|1484283|jcr:primaryType         |5.70      |7.31     |
|969596 |jcr:created             |3.72      |3.50     |
|933960 |jcr:createdBy           |3.59      |3.99     |
|921199 |sling:resourceType      |3.54      |5.44     |
|702208 |:childOrder             |2.70      |2.54     |
|601959 |entry                   |2.31      |0.99     |
|600299 |jcr:uuid                |2.31      |1.58     |
|481036 |jcr:lastModifiedBy      |1.85      |2.84     |
|477625 |jcr:frozenPrimaryType   |1.83      |3.29     |
|477625 |jcr:frozenUuid          |1.83      |2.20     |
|357201 |text                    |1.37      |0.47     |
|351712 |textIsRich              |1.35      |1.15     |
|228623 |event\djob\dqueued\dtime|0.88      |1.80     |
+-------+------------------------+----------+---------+
{noformat}

Based on the above, we can say that for now a dictionary for property names 
would not provide much benefit!
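For reference, the "% by count" and "% by size" columns above can be derived from a property-name frequency map as sketched below. The class and method names are invented for this example; the 8-bits-per-char assumption from the stats is kept (name length equals byte size).

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of how the "% by count" / "% by size" figures can be computed
// from a map of property name -> occurrence count.
public class PropertyNameStats {

    public static double percentByCount(Map<String, Long> freq, String name) {
        long total = freq.values().stream().mapToLong(Long::longValue).sum();
        return 100.0 * freq.get(name) / total;
    }

    public static double percentBySize(Map<String, Long> freq, String name) {
        long total = 0, own = 0;
        for (Map.Entry<String, Long> e : freq.entrySet()) {
            // 8 bits per char: name length == byte size of one occurrence
            long bytes = e.getValue() * e.getKey().length();
            total += bytes;
            if (e.getKey().equals(name)) {
                own = bytes;
            }
        }
        return 100.0 * own / total;
    }

    public static void main(String[] args) {
        Map<String, Long> freq = new LinkedHashMap<>();
        freq.put("jcr:lastModified", 3033972L);
        freq.put("jcr:data", 2573208L);
        System.out.printf("%.2f%%%n", percentByCount(freq, "jcr:lastModified"));
    }
}
```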




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-4472) Decouple SegmentReader from Revisions

2016-06-15 Thread JIRA
Michael Dürig created OAK-4472:
--

 Summary: Decouple SegmentReader from Revisions
 Key: OAK-4472
 URL: https://issues.apache.org/jira/browse/OAK-4472
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segment-tar
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 1.6


The {{SegmentReader.readHeadState()}} method introduces a de-facto dependency 
on {{Revisions}}, as access to the latter is required for obtaining the record 
id of the head.

To decouple SegmentReader from Revisions, I propose to replace 
{{SegmentReader.readHeadState()}} with {{SegmentReader.readHeadState(Revisions 
revisions)}}. As this results in a lot of boilerplate for callers (i.e. 
{{fileStore.getReader().readHeadState(fileStore.getRevisions())}}), we should 
also introduce a convenience method {{FileStore.getHead()}} that clients can 
use instead.
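The shape of the proposed decoupling could look like the sketch below. The interfaces are reduced to the bare essentials for illustration and are not the actual Oak types; only the method names follow the proposal.

```java
// Sketch: readHeadState() takes the Revisions it needs explicitly, and
// FileStore.getHead() hides the resulting boilerplate from clients.
public class Decoupling {

    public interface NodeState {}

    public interface Revisions {
        String getHeadRecordId();
    }

    public interface SegmentReader {
        // explicit dependency instead of a hidden reference to Revisions
        NodeState readHeadState(Revisions revisions);
    }

    public static class FileStore {
        private final SegmentReader reader;
        private final Revisions revisions;

        public FileStore(SegmentReader reader, Revisions revisions) {
            this.reader = reader;
            this.revisions = revisions;
        }

        // convenience method so callers avoid
        // fileStore.getReader().readHeadState(fileStore.getRevisions())
        public NodeState getHead() {
            return reader.readHeadState(revisions);
        }
    }
}
```

The design choice is the usual "pass dependencies in, keep convenience at the composition root": SegmentReader becomes testable in isolation, while FileStore, which owns both collaborators, provides the one-liner.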




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3865) New strategy to optimize secondary reads

2016-06-15 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331436#comment-15331436
 ] 

Marcel Reutegger commented on OAK-3865:
---

I had a quick look at the patch for this issue. In general, we have to be very 
careful with these kinds of changes: reading an outdated document from a 
secondary when we actually need the document from the primary can lead to data 
inconsistencies.

- {{LocalChanges.add()}}: it looks like the method uses 
RevisionVector.compareTo() in an improper way. You cannot rely on this method 
to determine whether one RevisionVector happened before another one. See also 
the JavaDoc of that class.

- secondaryCredentials: this looks difficult to configure. Is there a way to 
reuse the existing credentials? Also, keep in mind that user/password 
credentials are not the only option for authentication with MongoDB.

- What happens when there is a secondary with a configured delay? Is there a 
way to exclude some secondaries?




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4412) Lucene-memory property index

2016-06-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331433#comment-15331433
 ] 

Tomek Rękawek commented on OAK-4412:


[~egli], thanks for the suggestion. I improved the patch. At query time we 
check whether there's a repository change waiting to be processed. If there 
is, we wait for it. New, incoming changes (committed after the user calls 
query()) are ignored and we won't wait for them.

The new logic is mainly placed in the MonitoringBackgroundObserver.
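The wait-for-pending-change behaviour described above can be sketched as a small barrier: a change committed before the query started is waited for, while changes arriving later are ignored. This is a simplified, hypothetical stand-in for the MonitoringBackgroundObserver, not the patch itself.

```java
// Sketch: queries block until everything committed before the query started
// has been applied to the index, with a timeout as a safety valve.
public class QueryBarrier {

    private long lastCommitted;   // newest change handed to the observer
    private long lastProcessed;   // newest change the index has applied

    public synchronized void committed(long rev) {
        lastCommitted = rev;
    }

    public synchronized void processed(long rev) {
        lastProcessed = rev;
        notifyAll();   // wake up queries waiting for this change
    }

    // Blocks until everything committed before this call is indexed, or the
    // timeout elapses. Snapshotting lastCommitted ignores later commits.
    public synchronized void awaitUpToDate(long timeoutMillis) {
        long target = lastCommitted;
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (lastProcessed < target) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                return;   // give up: query runs against the current index
            }
            try {
                wait(remaining);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
```

Snapshotting `lastCommitted` before waiting is what keeps the barrier bounded: a steady stream of new commits can never make a query wait forever.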

> Lucene-memory property index
> 
>
> Key: OAK-4412
> URL: https://issues.apache.org/jira/browse/OAK-4412
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6
>
> Attachments: OAK-4412.patch
>
>
> When running Oak in a cluster, each write operation is expensive. After 
> performing some stress tests with a geo-distributed Mongo cluster, we've 
> found that updating property indexes is a large part of the overall traffic.
> An asynchronous index would be an answer here (as the index update wouldn't 
> be made in the client request thread), but AEM requires the updates to be 
> visible immediately in order to work properly.
> The idea here is to enhance the existing asynchronous Lucene index with a 
> synchronous, locally stored counterpart that persists only the data written 
> since the last Lucene background reindexing job.
> The new index can be stored in memory or (if necessary) in MMAPed local 
> files. Once the "main" Lucene index is updated, the local index will be 
> purged.
> Queries will use a union of results from the {{lucene}} and 
> {{lucene-memory}} indexes.
> The {{lucene-memory}} index, as a locally stored entity, will be updated 
> using an observer, so it'll get both local and remote changes.
> The original idea was suggested by [~chetanm] in the discussion for 
> OAK-4233.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4412) Lucene-memory property index

2016-06-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-4412:
---
Attachment: OAK-4412.patch




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4412) Lucene-memory property index

2016-06-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-4412:
---
Attachment: (was: OAK-4412.patch)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4405) JCR TCK on RDBDocumentStore

2016-06-15 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-4405.
---
Resolution: Fixed

Added an OakDocumentRDBRepositoryStub with an empty default for the 
{{rdb.jdbc-url}} system property. This means the TCK tests will not run on 
RDB unless the {{rdb-derby}} profile is enabled. This is currently the case 
for the Oak Build Matrix on Jenkins.

Done in revision: http://svn.apache.org/r1748524

> JCR TCK on RDBDocumentStore
> ---
>
> Key: OAK-4405
> URL: https://issues.apache.org/jira/browse/OAK-4405
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: jcr
>Affects Versions: 1.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.0.32
>
>
> Introduce a RepositoryStub implementation for the RDBDocumentStore and run 
> the JCR TCK on it when enabled.
> This only applies to the 1.0 branch because trunk and the other branches 
> already have this kind of RepositoryStub implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4447) RepositorySidegrade: oak-segment to oak-segment-tar migrate without external datastore

2016-06-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-4447:
---
Fix Version/s: (was: 1.5.3)
   1.5.4

> RepositorySidegrade: oak-segment to oak-segment-tar migrate without external 
> datastore
> --
>
> Key: OAK-4447
> URL: https://issues.apache.org/jira/browse/OAK-4447
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar, upgrade
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Fix For: 1.5.4
>
> Attachments: OAK-4447-with-test.patch, upgrade-nods.patch
>
>
> I'd like to submit a patch to allow running the sidegrade from 
> oak-segment to oak-segment-tar without needing the external datastore to be 
> connected.





[jira] [Resolved] (OAK-4368) Excerpt extraction from the Lucene index should be more selective

2016-06-15 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved OAK-4368.
--
Resolution: Fixed

fixed in r1748505 and r1748506 (trunk)

> Excerpt extraction from the Lucene index should be more selective
> -
>
> Key: OAK-4368
> URL: https://issues.apache.org/jira/browse/OAK-4368
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.0.30, 1.2.14, 1.4.2, 1.5.2
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 1.5.4
>
> Attachments: OAK-4368.0.patch
>
>
> The Lucene index can be used to extract _rep:excerpt_ using the 
> {{Highlighter}}.
> The current implementation may suffer performance issues when the result set 
> of the original query contains many results, each possibly containing 
> lots of (stored) properties that get passed to the highlighter in order to 
> extract the excerpt; this process doesn't stop as soon as the first excerpt 
> is found, so the excerpt is composed from text of all stored properties in 
> all results (if there's a match on the query).
> While we can accept some cost of extracting the excerpt at query time 
> (whereas before OAK-3580 it was generated at excerpt retrieval time, e.g. via 
> _row.getValue("rep:excerpt")_), that cost should be bounded and mitigated as 
> much as possible.
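The bounded-extraction idea above can be sketched with a simple early-termination loop: scan a row's stored property values and stop at the first one that yields an excerpt for the query term, instead of composing matches from all properties of all results. This is an illustrative sketch, not Oak's actual highlighter code; the class and method names are hypothetical.

```java
// Illustrative sketch (not Oak's actual implementation) of bounding
// excerpt extraction: stop at the first matching stored property.
import java.util.List;

public class ExcerptSketch {

    // Returns a short excerpt around the first occurrence of the term,
    // or null when no stored property matches.
    static String firstExcerpt(List<String> storedValues, String term) {
        for (String value : storedValues) {
            int idx = value.toLowerCase().indexOf(term.toLowerCase());
            if (idx >= 0) {
                int start = Math.max(0, idx - 20);
                int end = Math.min(value.length(), idx + term.length() + 20);
                // Stop as soon as one excerpt is found: this bounds the
                // per-row cost regardless of how many properties are stored.
                return "..." + value.substring(start, end) + "...";
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> props = List.of(
                "unrelated metadata",
                "excerpt extraction should be more selective",
                "another long stored property");
        System.out.println(firstExcerpt(props, "excerpt"));
    }
}
```

The real fix would apply the same short-circuit inside the Lucene {{Highlighter}} loop, keeping query-time excerpt cost proportional to one match rather than to the total volume of stored text.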





[jira] [Commented] (OAK-4470) Remove read revision method from DocumentNodeState

2016-06-15 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15331226#comment-15331226
 ] 

Chetan Mehrotra commented on OAK-4470:
--

[~mreutegg] The change for this is broader than I anticipated. Would you be able 
to have a look?

> Remove read revision method from DocumentNodeState
> --
>
> Key: OAK-4470
> URL: https://issues.apache.org/jira/browse/OAK-4470
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: documentmk
>Reporter: Chetan Mehrotra
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6
>
>
> {{DocumentNodeState}} has a {{getRevision}} method which provides the read 
> revision that was used when getting a {{DocumentNodeState}} out of a 
> {{NodeDocument}}. Looking at the usage of this method indicates that it is 
> only used for the root node, and that usage can be replaced with a call to 
> {{getRootRevision}}.





[jira] [Updated] (OAK-4470) Remove read revision method from DocumentNodeState

2016-06-15 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-4470:
-
Assignee: Marcel Reutegger  (was: Chetan Mehrotra)

> Remove read revision method from DocumentNodeState
> --
>
> Key: OAK-4470
> URL: https://issues.apache.org/jira/browse/OAK-4470
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: documentmk
>Reporter: Chetan Mehrotra
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6
>
>
> {{DocumentNodeState}} has a {{getRevision}} method which provides the read 
> revision that was used when getting a {{DocumentNodeState}} out of a 
> {{NodeDocument}}. Looking at the usage of this method indicates that it is 
> only used for the root node, and that usage can be replaced with a call to 
> {{getRootRevision}}.


