[jira] [Created] (OAK-3982) DocumentStore: add method to remove with a condition on an indexed property

2016-02-04 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-3982:
---

 Summary: DocumentStore: add method to remove with a condition on 
an indexed property
 Key: OAK-3982
 URL: https://issues.apache.org/jira/browse/OAK-3982
 Project: Jackrabbit Oak
  Issue Type: Technical task
  Components: documentmk
Reporter: Julian Reschke
Assignee: Julian Reschke
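
The issue summary above is all this notification carries. As a rough illustration of what a remove with a condition on an indexed property could look like (names are hypothetical, not the actual Oak DocumentStore API):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Hypothetical sketch of a conditional remove: delete every document whose
// indexed property (e.g. "_modified") falls inside [startValue, endValue).
// The names here are illustrative, not the actual Oak DocumentStore API.
public class ConditionalRemoveSketch {

    // document id -> value of the indexed property
    private final Map<String, Long> indexedProperty = new HashMap<>();

    public void put(String id, long value) {
        indexedProperty.put(id, value);
    }

    // Returns the number of removed documents. A real backend would push the
    // condition into the store (e.g. a single DELETE ... WHERE prop >= ? AND
    // prop < ? statement) instead of fetching and checking each document.
    public int remove(long startValue, long endValue) {
        int removed = 0;
        Iterator<Map.Entry<String, Long>> it =
                indexedProperty.entrySet().iterator();
        while (it.hasNext()) {
            long value = it.next().getValue();
            if (value >= startValue && value < endValue) {
                it.remove();
                removed++;
            }
        }
        return removed;
    }

    public int size() {
        return indexedProperty.size();
    }
}
```

The point of such a method is that backends can evaluate the condition store-side, without loading each document first.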






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (OAK-3979) RepositoryUpgrade skip on error must skip non existing node bundle

2016-02-04 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding reopened OAK-3979:
-

[~chetanm] you are right, {{JackrabbitNodeState#getChildNodeEntries}} currently 
always skips IllegalStateException, no matter whether {{skipOnError}} is 
{{true}} or {{false}}.

I'll fix that.

> RepositoryUpgrade skip on error must skip non existing node bundle
> --
>
> Key: OAK-3979
> URL: https://issues.apache.org/jira/browse/OAK-3979
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Chetan Mehrotra
>Assignee: Julian Sedding
>Priority: Minor
> Fix For: 1.3.16
>
> Attachments: OAK-3979.patch
>
>
> With OAK-2893, support was added to continue the upgrade even if there are 
> issues with some of the nodes to copy. That change checks for 
> {{ItemStateException}}. However, if the bundle is not present, a 
> NullPointerException is thrown instead, which is not handled by that check
> {noformat}
> Caused by: java.lang.NullPointerException: Could not load NodePropBundle for 
> id [ae3d4171-6ece-4e95-b6e4-3f487edf794e]
> at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:236) 
> ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.BundleLoader.loadBundle(BundleLoader.java:62)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.JackrabbitNodeState.createChildNodeState(JackrabbitNodeState.java:349)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.JackrabbitNodeState.getChildNodeEntries(JackrabbitNodeState.java:320)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.AbstractDecoratedNodeState.getChildNodeEntries(AbstractDecoratedNodeState.java:130)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.NodeStateCopier.copyNodeState(NodeStateCopier.java:187)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.NodeStateCopier.copyNodeState(NodeStateCopier.java:150)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.NodeStateCopier.access$200(NodeStateCopier.java:72)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.NodeStateCopier$Builder.copy(NodeStateCopier.java:397)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copyWorkspace(RepositoryUpgrade.java:866)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:438)
>  ~[na:na]
> {noformat}
> As a fix, {{BundleLoader}} should throw an {{ItemStateException}} instead of a 
> {{NullPointerException}} when a NodePropBundle is missing for the given id
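
The proposed fix can be sketched as follows (hypothetical names, not the actual upgrade code): a missing bundle is surfaced as a checked exception that the skip-on-error handling already catches, instead of escaping as a NullPointerException.

```java
// Minimal sketch (hypothetical names, not the actual Oak upgrade code) of the
// fix proposed in the description: surface a missing bundle as a checked
// ItemStateException that the skip-on-error logic already handles, instead of
// letting a NullPointerException escape.
public class BundleLoaderSketch {

    static class ItemStateException extends Exception {
        ItemStateException(String message) {
            super(message);
        }
    }

    interface BundleSource {
        // returns null when no bundle exists for the given id
        Object getBundle(String id);
    }

    public static Object loadBundle(BundleSource source, String id)
            throws ItemStateException {
        Object bundle = source.getBundle(id);
        if (bundle == null) {
            // A not-null precondition here would produce an NPE; throwing
            // ItemStateException lets the skipOnError handling skip this node.
            throw new ItemStateException(
                    "Could not load NodePropBundle for id [" + id + "]");
        }
        return bundle;
    }
}
```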





[jira] [Commented] (OAK-3979) RepositoryUpgrade skip on error must skip non existing node bundle

2016-02-04 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132059#comment-15132059
 ] 

Chetan Mehrotra commented on OAK-3979:
--

Looks fine. Looking at the code, we should then also fix it in 
JackrabbitNodeState#getChildNodeEntries

> RepositoryUpgrade skip on error must skip non existing node bundle
> --
>
> Key: OAK-3979
> URL: https://issues.apache.org/jira/browse/OAK-3979
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Chetan Mehrotra
>Assignee: Julian Sedding
>Priority: Minor
> Fix For: 1.3.16
>
> Attachments: OAK-3979.patch
>
>





[jira] [Resolved] (OAK-3981) Change in aggregation flow in OAK-3831 causes some properties to be left out of aggregation

2016-02-04 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-3981.
--
Resolution: Fixed

Fixed with
* trunk - 1728443
* 1.0 - 1728445
* 1.2 - 1728447

> Change in aggregation flow in OAK-3831 causes some properties to be left out 
> of aggregation
> ---
>
> Key: OAK-3981
> URL: https://issues.apache.org/jira/browse/OAK-3981
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.3.13, 1.0.26, 1.2.10
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.2.11, 1.0.27, 1.3.16
>
>
> With OAK-3831 we changed the aggregation logic to avoid indexing those 
> relative properties for which a property definition is defined but 
> {{nodeScopeIndex}} is false.
> This causes a regression: so far such properties were included via the 
> aggregation rules, and now they would be left out, causing searches to miss 
> those terms.
> As a fix we should revert to the old logic and provide a new flag that 
> enables excluding a property from aggregation





[jira] [Resolved] (OAK-3268) Improve datastore resilience

2016-02-04 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-3268.

Resolution: Done

[~mmarth] yes, resolving this.

> Improve datastore resilience
> 
>
> Key: OAK-3268
> URL: https://issues.apache.org/jira/browse/OAK-3268
> Project: Jackrabbit Oak
>  Issue Type: Epic
>  Components: blob
>Reporter: Michael Marth
>Assignee: Amit Jain
>Priority: Critical
>  Labels: resilience
>
> As discussed bilaterally, this issue groups the improvements for datastore 
> resilience for easier tracking





[jira] [Updated] (OAK-2761) Persistent cache: add data in a different thread

2016-02-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-2761:
---
Attachment: OAK-2761-trunk.patch

> Persistent cache: add data in a different thread
> 
>
> Key: OAK-2761
> URL: https://issues.apache.org/jira/browse/OAK-2761
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache, core, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: resilience
> Fix For: 1.4
>
> Attachments: AsyncCacheTest.patch, OAK-2761-trunk.patch
>
>
> The persistent cache usually stores data in a background thread, but 
> sometimes (if a lot of data is added quickly) the foreground thread is 
> blocked.
> Even worse, switching the cache file can happen in a foreground thread, with 
> the following stack trace.
> {noformat}
> "127.0.0.1 [1428931262206] POST /bin/replicate.json HTTP/1.1" prio=5 
> tid=0x7fe5df819800 nid=0x9907 runnable [0x000113fc4000]
>java.lang.Thread.State: RUNNABLE
> ...
>   at org.h2.mvstore.MVStoreTool.compact(MVStoreTool.java:404)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.closeStore(PersistentCache.java:213)
>   - locked <0x000782483050> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.switchGenerationIfNeeded(PersistentCache.java:350)
>   - locked <0x000782455710> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.write(NodeCache.java:85)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.put(NodeCache.java:130)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.applyChanges(DocumentNodeStore.java:1060)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Commit.applyToCache(Commit.java:599)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.afterTrunkCommit(CommitQueue.java:127)
>   - locked <0x000781890788> (a 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.done(CommitQueue.java:83)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.done(DocumentNodeStore.java:637)
> {noformat}
> To avoid blocking the foreground thread, one solution is to store all data in 
> a separate thread. If too much data is added, then some of the data is not 
> stored. If possible, the data that is dropped should be data that was not 
> referenced a lot and/or old revisions of documents (if newer revisions are 
> available).





[jira] [Updated] (OAK-2761) Persistent cache: add data in a different thread

2016-02-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-2761:
---
Attachment: (was: OAK-2761-trunk.patch)

> Persistent cache: add data in a different thread
> 
>
> Key: OAK-2761
> URL: https://issues.apache.org/jira/browse/OAK-2761
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache, core, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: resilience
> Fix For: 1.4
>
> Attachments: AsyncCacheTest.patch, OAK-2761-trunk.patch
>
>





[jira] [Updated] (OAK-2761) Persistent cache: add data in a different thread

2016-02-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-2761:
---
Attachment: (was: OAK-2761-1.2.patch)

> Persistent cache: add data in a different thread
> 
>
> Key: OAK-2761
> URL: https://issues.apache.org/jira/browse/OAK-2761
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache, core, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: resilience
> Fix For: 1.4
>
> Attachments: AsyncCacheTest.patch, OAK-2761-trunk.patch
>
>





[jira] [Created] (OAK-3984) RDBDocumentStore: implement new conditional remove method

2016-02-04 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-3984:
---

 Summary: RDBDocumentStore: implement new conditional remove method
 Key: OAK-3984
 URL: https://issues.apache.org/jira/browse/OAK-3984
 Project: Jackrabbit Oak
  Issue Type: Technical task
  Components: rdbmk
Reporter: Julian Reschke
Assignee: Julian Reschke


As introduced in OAK-3982.





[jira] [Resolved] (OAK-3979) RepositoryUpgrade skip on error must skip non existing node bundle

2016-02-04 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding resolved OAK-3979.
-
Resolution: Fixed

Fixed in [r1728432|https://svn.apache.org/r1728432].

> RepositoryUpgrade skip on error must skip non existing node bundle
> --
>
> Key: OAK-3979
> URL: https://issues.apache.org/jira/browse/OAK-3979
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Chetan Mehrotra
>Assignee: Julian Sedding
>Priority: Minor
> Fix For: 1.3.16
>
> Attachments: OAK-3979.patch
>
>





[jira] [Updated] (OAK-3983) JournalGarbageCollector: use new DocumentStore remove() method

2016-02-04 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3983:

Summary: JournalGarbageCollector: use new DocumentStore remove() method  
(was: JournalGrabageCollector: use new DocumentStore remove() method)

> JournalGarbageCollector: use new DocumentStore remove() method
> --
>
> Key: OAK-3983
> URL: https://issues.apache.org/jira/browse/OAK-3983
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: documentmk
>Reporter: Julian Reschke
> Fix For: 1.6
>
>
> As introduced in OAK-3982.





[jira] [Created] (OAK-3983) JournalGrabageCollector: use new DocumentStore remove() method

2016-02-04 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-3983:
---

 Summary: JournalGrabageCollector: use new DocumentStore remove() 
method
 Key: OAK-3983
 URL: https://issues.apache.org/jira/browse/OAK-3983
 Project: Jackrabbit Oak
  Issue Type: Technical task
  Components: documentmk
Reporter: Julian Reschke


As introduced in OAK-3982.





[jira] [Commented] (OAK-2761) Persistent cache: add data in a different thread

2016-02-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132128#comment-15132128
 ] 

Tomek Rękawek commented on OAK-2761:


[~mreutegg], [~chetanm], I've updated the patch. Now it puts items into the 
queue only when they are evicted from memCache or invalidated.

However, we may still lose some invalidations/updates if the queue becomes full 
(e.g. there are a lot of updates in memCache and therefore a lot of evictions 
filling the buffer). I was thinking about the following solution: if we have to 
remove some of the items from the queue because it's full, we should replace 
them all with a single "invalidateAll" action. It would invalidate all the 
items removed from the queue. This way we can prevent having outdated items in 
the persistent cache. WDYT?
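
The overflow handling described in the comment above could be sketched like this (illustrative names, not the actual patch):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch (not the actual OAK-2761 patch) of the idea above: a
// bounded write queue for the persistent cache where, on overflow, the dropped
// entries are collapsed into a single "invalidateAll" marker, so the
// persistent cache never keeps an entry whose pending update was dropped.
public class CacheWriteQueueSketch {

    public static final String INVALIDATE_ALL = "invalidateAll";

    private final Deque<String> queue = new ArrayDeque<>();
    private final int capacity;

    public CacheWriteQueueSketch(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void offer(String action) {
        if (queue.size() >= capacity) {
            // Queue full: drop the pending actions, but remember that the
            // keys they touched may now be stale in the persistent cache.
            queue.clear();
            queue.add(INVALIDATE_ALL);
        }
        queue.add(action);
    }

    // Drained by the background thread that writes to the persistent cache.
    public synchronized String poll() {
        return queue.poll();
    }
}
```

Trading dropped updates for one coarse invalidation keeps the persistent cache consistent at the cost of extra cache misses after an overflow.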

> Persistent cache: add data in a different thread
> 
>
> Key: OAK-2761
> URL: https://issues.apache.org/jira/browse/OAK-2761
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache, core, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: resilience
> Fix For: 1.4
>
> Attachments: AsyncCacheTest.patch, OAK-2761-trunk.patch
>
>





[jira] [Created] (OAK-3989) Add S3 datastore support for Text Pre Extraction

2016-02-04 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-3989:


 Summary: Add S3 datastore support for Text Pre Extraction
 Key: OAK-3989
 URL: https://issues.apache.org/jira/browse/OAK-3989
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: run
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.3.16


The text pre-extraction feature introduced in OAK-2892 only supports 
FileDataStore. For files present in S3 we should add support for S3DataStore





[jira] [Commented] (OAK-3986) simple performance regression IT (that would fail in case commitRoots would not be purged)

2016-02-04 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132273#comment-15132273
 ] 

Stefan Egli commented on OAK-3986:
--

Added NonLocalObservationIT in rev 1728466 - this is a modified version of a 
test originally authored by [~tmueller]

> simple performance regression IT (that would fail in case commitRoots would 
> not be purged)
> --
>
> Key: OAK-3986
> URL: https://issues.apache.org/jira/browse/OAK-3986
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: jcr
>Affects Versions: 1.3.15
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Minor
> Fix For: 1.3.16
>
>
> There should be a performance regression IT that measures a couple of 
> addNode/removeNode/setProperties operations on a particular node to ensure 
> performance remains constant over time. Such a test pattern used to show a 
> degradation over time due to OAK-1794, which was fixed in OAK-2528 a long 
> time ago. This issue is just about having a proper IT.





[jira] [Commented] (OAK-3965) SegmentPropertyState external binary property reports unusual size

2016-02-04 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132389#comment-15132389
 ] 

Francesco Mari commented on OAK-3965:
-

[~alex.parvulescu], I might have a solution to the problem. I'm running some 
tests locally, and might be able to attach a patch soon.

> SegmentPropertyState external binary property reports unusual size
> --
>
> Key: OAK-3965
> URL: https://issues.apache.org/jira/browse/OAK-3965
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Minor
> Attachments: ExternalBlobIT.java.patch
>
>
> Calling getSize on an external binary reports a very unusual size:
> {code}
> world = {2318898817333174704 bytes}
> {code}
> the binary is actually around 17k in size.
> I think this happens because of how the size is computed, a sort of a read 
> overflow, and it also affects the toString method [0].
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment/src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentPropertyState.java#L202
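
As a loose illustration of the suspected read overflow (not Oak's actual code): decoding the size field from the wrong bytes, for example from the characters of an external blob id instead of a real length record, yields a huge, meaningless number of this magnitude.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative sketch (not Oak's actual SegmentPropertyState code) of the kind
// of bug described above: if the size is decoded from bytes that are not a
// length record, for example the characters of an external blob id, the
// resulting "size" is an enormous, meaningless value.
public class BlobSizeOverflowSketch {

    public static long misreadLength(String externalBlobId) {
        byte[] bytes = externalBlobId.getBytes(StandardCharsets.UTF_8);
        // Interpret the first 8 bytes of the id as a big-endian long.
        return ByteBuffer.wrap(bytes, 0, 8).getLong();
    }
}
```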





[jira] [Commented] (OAK-3977) RDBDocumentStore: upgrade to JDBC driver 9.4.1208

2016-02-04 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132264#comment-15132264
 ] 

Julian Reschke commented on OAK-3977:
-

Test case improved in trunk: http://svn.apache.org/r1728458


> RDBDocumentStore: upgrade to JDBC driver 9.4.1208
> -
>
> Key: OAK-3977
> URL: https://issues.apache.org/jira/browse/OAK-3977
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>
> Once https://github.com/pgjdbc/pgjdbc/issues/502 is fixed, we could get rid 
> of the workaround introduced in OAK-3937.
> We'd also have to change {{RDBDocumentStoreJDBCTest}} to accept the new 
> behavior of the driver (failing the complete request).





[jira] [Updated] (OAK-2761) Persistent cache: add data in a different thread

2016-02-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-2761:
---
Attachment: OAK-2761-trunk-invalidate-all.patch

> Persistent cache: add data in a different thread
> 
>
> Key: OAK-2761
> URL: https://issues.apache.org/jira/browse/OAK-2761
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache, core, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: resilience
> Fix For: 1.4
>
> Attachments: AsyncCacheTest.patch, 
> OAK-2761-trunk-invalidate-all.patch, OAK-2761-trunk.patch
>
>





[jira] [Commented] (OAK-3965) SegmentPropertyState external binary property reports unusual size

2016-02-04 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132419#comment-15132419
 ] 

Alex Parvulescu commented on OAK-3965:
--

hmm, the patch ranks kinda high on the copy/paste index :) why so much code 
duplication?

> SegmentPropertyState external binary property reports unusual size
> --
>
> Key: OAK-3965
> URL: https://issues.apache.org/jira/browse/OAK-3965
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Minor
> Attachments: ExternalBlobIT.java.patch, OAK-3965-01.patch
>
>
> Calling getSize on an external binary reports a very unusual size:
> {code}
> world = {2318898817333174704 bytes}
> {code}
> the binary is actually around 17k in size.
> I think this happens because of how the size is computed, a sort of a read 
> overflow, and it also affects the toString method [0].
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment/src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentPropertyState.java#L202
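The overflow described above can be illustrated with a small sketch. The blob-id format and the idea of a length header here are assumptions for illustration, not Oak's actual segment record layout: if code that computes a binary's size misreads the bytes of an external blob reference as an inlined length header, a ~17k binary can "report" a size in the quintillions.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class SizeOverflowDemo {

    // Bogus size computation: treats the leading bytes of an external blob
    // reference (a hypothetical "dsid:..." id) as if they were a length header.
    static long misreadAsLength(String blobId) {
        byte[] bytes = blobId.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.wrap(bytes).getLong(); // id bytes, not a real length
    }

    // What the size should be: the length of the actual content.
    static long actualSize(byte[] content) {
        return content.length;
    }
}
```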



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3236) integration test that simulates influence of clock drift

2016-02-04 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132346#comment-15132346
 ] 

Stefan Egli commented on OAK-3236:
--

There are different levels where clock drift has an influence, and which should 
thus be tested:
* within the DocumentNodeStore itself, i.e. in the revision/conflict-handling 
mechanisms
* within the discovery logic, since pseudo-network partitioning can occur under 
such a scenario
* consequently with TopologyEventListeners that could do leader-dependent 
activity *after* a lease end, thus *while* another instance might already have 
taken over leadership
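The first level can be exercised deterministically with an injectable clock rather than real drift. This is an illustrative sketch (not Oak's actual Clock API) showing how two instances with drifting clocks disagree about whether the same lease has expired:

```java
// A simulated clock that drifts by a fixed amount on every tick, so a test
// can model one cluster node whose clock runs fast relative to another.
public class DriftingClock {
    private long nowMs;                 // simulated wall-clock time in ms
    private final long driftPerTickMs;  // extra drift applied on each tick

    public DriftingClock(long startMs, long driftPerTickMs) {
        this.nowMs = startMs;
        this.driftPerTickMs = driftPerTickMs;
    }

    public long getTime() { return nowMs; }

    /** Advance "real" time by millis; this clock additionally drifts. */
    public void tick(long millis) {
        nowMs += millis + driftPerTickMs;
    }

    /** True if a lease acquired at leaseStart looks expired on this clock. */
    public boolean leaseExpired(long leaseStart, long leaseDurationMs) {
        return getTime() - leaseStart > leaseDurationMs;
    }
}
```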

> integration test that simulates influence of clock drift
> 
>
> Key: OAK-3236
> URL: https://issues.apache.org/jira/browse/OAK-3236
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: core
>Affects Versions: 1.3.4
>Reporter: Stefan Egli
>Assignee: Stefan Egli
> Fix For: 1.4
>
>
> Spin-off of OAK-2739 [of this 
> comment|https://issues.apache.org/jira/browse/OAK-2739?focusedCommentId=14693398&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14693398]
>  - i.e. there should be an integration test that showcases the issues with 
> clock drift and why it is a good idea to have a lease-check (one that refuses 
> to let the document store be used any further once the lease times out locally)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2761) Persistent cache: add data in a different thread

2016-02-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132402#comment-15132402
 ] 

Tomek Rękawek commented on OAK-2761:


Implemented a patch with the "invalidate all" action idea as above. This version 
of the patch shouldn't allow any outdated entries in the persistent cache. If 
the queue is cleaned up, the cleared keys will be invalidated from the cache as 
well.

I'd welcome any suggestions on the new class dependencies.
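A minimal sketch of the queue-plus-invalidation idea (class and method names here are hypothetical, not the attached patch): writes go through a bounded queue drained by a background writer; an entry that cannot be queued is dropped and its key invalidated, so the cache may miss but never serves an outdated entry.

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

public class AsyncCacheWriter {
    private final BlockingQueue<Map.Entry<String, String>> queue;
    private final Map<String, String> backingStore = new ConcurrentHashMap<>();

    public AsyncCacheWriter(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    /** Returns false if the entry was dropped (queue full) and invalidated. */
    public boolean put(String key, String value) {
        boolean accepted = queue.offer(Map.entry(key, value));
        if (!accepted) {
            // Invalidate rather than risk keeping a stale value around.
            backingStore.remove(key);
        }
        return accepted;
    }

    /** Drains queued writes; in the real cache this runs on its own thread. */
    public void drain() {
        Map.Entry<String, String> e;
        while ((e = queue.poll()) != null) {
            backingStore.put(e.getKey(), e.getValue());
        }
    }

    public String get(String key) {
        return backingStore.get(key);
    }
}
```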

> Persistent cache: add data in a different thread
> 
>
> Key: OAK-2761
> URL: https://issues.apache.org/jira/browse/OAK-2761
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache, core, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: resilience
> Fix For: 1.4
>
> Attachments: AsyncCacheTest.patch, 
> OAK-2761-trunk-invalidate-all.patch, OAK-2761-trunk.patch
>
>
> The persistent cache usually stores data in a background thread, but 
> sometimes (if a lot of data is added quickly) the foreground thread is 
> blocked.
> Even worse, switching the cache file can happen in a foreground thread, with 
> the following stack trace.
> {noformat}
> "127.0.0.1 [1428931262206] POST /bin/replicate.json HTTP/1.1" prio=5 
> tid=0x7fe5df819800 nid=0x9907 runnable [0x000113fc4000]
>java.lang.Thread.State: RUNNABLE
> ...
>   at org.h2.mvstore.MVStoreTool.compact(MVStoreTool.java:404)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.closeStore(PersistentCache.java:213)
>   - locked <0x000782483050> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.switchGenerationIfNeeded(PersistentCache.java:350)
>   - locked <0x000782455710> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.write(NodeCache.java:85)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.put(NodeCache.java:130)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.applyChanges(DocumentNodeStore.java:1060)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Commit.applyToCache(Commit.java:599)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.afterTrunkCommit(CommitQueue.java:127)
>   - locked <0x000781890788> (a 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.done(CommitQueue.java:83)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.done(DocumentNodeStore.java:637)
> {noformat}
> To avoid blocking the foreground thread, one solution is to store all data in 
> a separate thread. If too much data is added, some of it is not stored; 
> preferably the data that was not referenced much, and/or old revisions of 
> documents (when new revisions are available), is dropped first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3965) SegmentPropertyState external binary property reports unusual size

2016-02-04 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132357#comment-15132357
 ] 

Alex Parvulescu commented on OAK-3965:
--

[~frm] leaving the funky error message aside, do you think we could fix this by 
1.4?

> SegmentPropertyState external binary property reports unusual size
> --
>
> Key: OAK-3965
> URL: https://issues.apache.org/jira/browse/OAK-3965
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Minor
> Attachments: ExternalBlobIT.java.patch
>
>
> Calling getSize on an external binary reports a very unusual size:
> {code}
> world = {2318898817333174704 bytes}
> {code}
> the binary is actually around 17k in size.
> I think this happens because of how the size is computed, a sort of a read 
> overflow, and it also affects the toString method [0].
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment/src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentPropertyState.java#L202



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3362) Estimate compaction based on diff to previous compacted head state

2016-02-04 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132356#comment-15132356
 ] 

Alex Parvulescu commented on OAK-3362:
--

Ignoring the compaction part, the estimation bits are easy enough; I would add 
them behind a feature flag to try to collect some real-life usage stats.

> Estimate compaction based on diff to previous compacted head state
> --
>
> Key: OAK-3362
> URL: https://issues.apache.org/jira/browse/OAK-3362
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Minor
>  Labels: compaction, gc
> Fix For: 1.6
>
>
> Food for thought: try to base the compaction estimation on a diff between the 
> latest compacted state and the current state.
> Pros
> * estimation duration would be proportional to number of changes on the 
> current head state
> * using the size on disk as a reference, we could actually stop the 
> estimation early when we go over the gc threshold.
> * data collected during this diff could in theory be passed as input to the 
> compactor so it could focus on compacting a specific subtree
> Cons
> * need to keep a reference to a previous compacted state. post-startup and 
> pre-compaction this might prove difficult (except maybe if we only persist 
> the revision similar to what the async indexer is doing currently)
> * coming up with a threshold for running compaction might prove difficult
> * diff might be costly, but still cheaper than the current full diff



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3965) SegmentPropertyState external binary property reports unusual size

2016-02-04 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-3965:

Attachment: OAK-3965-01.patch

[~alex.parvulescu], the attached patch makes your test pass. It also includes 
your test. Can you have a look at it?

> SegmentPropertyState external binary property reports unusual size
> --
>
> Key: OAK-3965
> URL: https://issues.apache.org/jira/browse/OAK-3965
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Minor
> Attachments: ExternalBlobIT.java.patch, OAK-3965-01.patch
>
>
> Calling getSize on an external binary reports a very unusual size:
> {code}
> world = {2318898817333174704 bytes}
> {code}
> the binary is actually around 17k in size.
> I think this happens because of how the size is computed, a sort of a read 
> overflow, and it also affects the toString method [0].
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment/src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentPropertyState.java#L202



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3981) Change in aggregation flow in OAK-3831 causes some properties to be left out of aggregation

2016-02-04 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3981:
-
Labels: docs-impacting  (was: )

> Change in aggregation flow in OAK-3831 causes some properties to be left out 
> of aggregation
> ---
>
> Key: OAK-3981
> URL: https://issues.apache.org/jira/browse/OAK-3981
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.3.13, 1.0.26, 1.2.10
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: docs-impacting
> Fix For: 1.2.11, 1.0.27, 1.3.16
>
>
> With OAK-3831 we changed the aggregation logic to avoid indexing those 
> relative properties for which a property definition is defined but 
> {{nodeScopeIndex}} is false.
> This causes a regression: so far such properties were getting included via 
> the aggregation rules, and now they would be left out, causing search to miss 
> those terms.
> As a fix we should revert to the old logic and provide a new flag for 
> enabling exclusion of a property from being aggregated



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3981) Change in aggregation flow in OAK-3831 causes some properties to be left out of aggregation

2016-02-04 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132206#comment-15132206
 ] 

Chetan Mehrotra edited comment on OAK-3981 at 2/5/16 5:00 AM:
--

Added a new property definition config {{excludeFromAggregation}} which can be 
used to exclude a property from being used in aggregation. Otherwise all 
properties of nodes covered by aggregation are included if their type matches

Fixed with
* trunk - 1728443
* 1.0 - 1728445
* 1.2 - 1728447


was (Author: chetanm):
Fixed with
* trunk - 1728443
* 1.0 - 1728445
* 1.2 - 1728447

> Change in aggregation flow in OAK-3831 causes some properties to be left out 
> of aggregation
> ---
>
> Key: OAK-3981
> URL: https://issues.apache.org/jira/browse/OAK-3981
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.3.13, 1.0.26, 1.2.10
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: docs-impacting
> Fix For: 1.2.11, 1.0.27, 1.3.16
>
>
> With OAK-3831 we changed the aggregation logic to avoid indexing those 
> relative properties for which a property definition is defined but 
> {{nodeScopeIndex}} is false.
> This causes a regression: so far such properties were getting included via 
> the aggregation rules, and now they would be left out, causing search to miss 
> those terms.
> As a fix we should revert to the old logic and provide a new flag for 
> enabling exclusion of a property from being aggregated



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3979) RepositoryUpgrade skip on error must skip non existing node bundle

2016-02-04 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding resolved OAK-3979.
-
   Resolution: Fixed
Fix Version/s: (was: 1.4)
   1.3.16

Fixed in [r1728427|https://svn.apache.org/r1728427].

[~chetanm] could you please verify the fix? Thanks.

> RepositoryUpgrade skip on error must skip non existing node bundle
> --
>
> Key: OAK-3979
> URL: https://issues.apache.org/jira/browse/OAK-3979
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Chetan Mehrotra
>Assignee: Julian Sedding
>Priority: Minor
> Fix For: 1.3.16
>
> Attachments: OAK-3979.patch
>
>
> With OAK-2893 support was added to continue the upgrade even if issues exist 
> with some of the nodes to copy. That change checks for {{ItemStateException}}. 
> However, if the bundle is not present then a NullPointerException is thrown, 
> which gets ignored
> {noformat}
> Caused by: java.lang.NullPointerException: Could not load NodePropBundle for 
> id [ae3d4171-6ece-4e95-b6e4-3f487edf794e]
> at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:236) 
> ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.BundleLoader.loadBundle(BundleLoader.java:62)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.JackrabbitNodeState.createChildNodeState(JackrabbitNodeState.java:349)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.JackrabbitNodeState.getChildNodeEntries(JackrabbitNodeState.java:320)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.AbstractDecoratedNodeState.getChildNodeEntries(AbstractDecoratedNodeState.java:130)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.NodeStateCopier.copyNodeState(NodeStateCopier.java:187)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.NodeStateCopier.copyNodeState(NodeStateCopier.java:150)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.NodeStateCopier.access$200(NodeStateCopier.java:72)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.NodeStateCopier$Builder.copy(NodeStateCopier.java:397)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copyWorkspace(RepositoryUpgrade.java:866)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:438)
>  ~[na:na]
> {noformat}
> As a fix {{BundleLoader}} should throw {{ItemStateException}} instead of 
> {{NullPointerException}} when a NodePropBundle is missing for a given id



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3965) SegmentPropertyState external binary property reports unusual size

2016-02-04 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132589#comment-15132589
 ] 

Alex Parvulescu commented on OAK-3965:
--

very nice indeed! +1 for the patch :)

> SegmentPropertyState external binary property reports unusual size
> --
>
> Key: OAK-3965
> URL: https://issues.apache.org/jira/browse/OAK-3965
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Minor
> Attachments: ExternalBlobIT.java.patch, OAK-3965-01.patch, 
> OAK-3965-02.patch
>
>
> Calling getSize on an external binary reports a very unusual size:
> {code}
> world = {2318898817333174704 bytes}
> {code}
> the binary is actually around 17k in size.
> I think this happens because of how the size is computed, a sort of a read 
> overflow, and it also affects the toString method [0].
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment/src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentPropertyState.java#L202



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3986) simple performance regression IT (that would fail in case commitRoots would not be purged)

2016-02-04 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli resolved OAK-3986.
--
Resolution: Fixed

> simple performance regression IT (that would fail in case commitRoots would 
> not be purged)
> --
>
> Key: OAK-3986
> URL: https://issues.apache.org/jira/browse/OAK-3986
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: jcr
>Affects Versions: 1.3.15
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Minor
> Fix For: 1.3.16
>
>
> There should be a performance regression IT that measures a couple of 
> addNode/removeNode/setProperties operations on a particular node to ensure 
> performance remains constant over time. Such a test pattern used to show a 
> degradation over time due to OAK-1794, which was fixed in OAK-2528 a long 
> time ago. This is just to have a proper IT.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3988) Offline compaction should avoid loading external binaries

2016-02-04 Thread Alex Parvulescu (JIRA)
Alex Parvulescu created OAK-3988:


 Summary: Offline compaction should avoid loading external binaries
 Key: OAK-3988
 URL: https://issues.apache.org/jira/browse/OAK-3988
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu


OAK-3965 uncovered an issue with the {{size}} calls on PropertyNodeStates when 
dealing with external binaries, and the fix effectively breaks offline 
compaction on repos with external data stores.
I think the offline compactor should basically ignore the external binaries 
when checking whether a node meets the compaction map criteria.
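A rough sketch of that check (type and method names are hypothetical, not the actual compactor code): external binaries are skipped entirely when summing a node's property sizes against the compaction map threshold, so they are neither loaded nor counted.

```java
import java.util.List;

public class CompactionSizeCheck {

    // Simplified stand-in for a property: its reported size and whether the
    // value lives in an external data store.
    record Property(String name, long size, boolean external) {}

    /** True if the node's inlined data fits under the compaction map cap. */
    static boolean fitsCompactionMap(List<Property> props, long maxBytes) {
        long total = 0;
        for (Property p : props) {
            if (p.external()) {
                continue; // never load or count external binaries
            }
            total += p.size();
        }
        return total <= maxBytes;
    }
}
```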



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3965) SegmentPropertyState external binary property reports unusual size

2016-02-04 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132681#comment-15132681
 ] 

Alex Parvulescu commented on OAK-3965:
--

In an interesting turn of events, this issue uncovered OAK-3988.

> SegmentPropertyState external binary property reports unusual size
> --
>
> Key: OAK-3965
> URL: https://issues.apache.org/jira/browse/OAK-3965
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Minor
> Attachments: ExternalBlobIT.java.patch, OAK-3965-01.patch, 
> OAK-3965-02.patch
>
>
> Calling getSize on an external binary reports a very unusual size:
> {code}
> world = {2318898817333174704 bytes}
> {code}
> the binary is actually around 17k in size.
> I think this happens because of how the size is computed, a sort of a read 
> overflow, and it also affects the toString method [0].
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment/src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentPropertyState.java#L202



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-3987) Indexer dry run mode

2016-02-04 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu reassigned OAK-3987:


Assignee: Davide Giannella

Assigning to [~edivad], as asked on the list.

> Indexer dry run mode
> 
>
> Key: OAK-3987
> URL: https://issues.apache.org/jira/browse/OAK-3987
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, query
>Reporter: Alex Parvulescu
>Assignee: Davide Giannella
> Fix For: 1.6
>
>
> Based on a discussion on the dev list, it would be interesting to provide a 
> {{dry run}} mode for the indexer which would give an indication of an average 
> indexing time.
> Input could be:
> * path to index definition (mandatory)
> * path to the content (mandatory, default could be {{/}})
> * max number of nodes (optional, but I'd still cap this value so indexing 
> doesn't take over the entire aem instance)
> Also, we could do this on a separate (dedicated) thread so it doesn't 
> interfere with existing indexers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3987) Indexer dry run mode

2016-02-04 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-3987:
-
Fix Version/s: 1.6

> Indexer dry run mode
> 
>
> Key: OAK-3987
> URL: https://issues.apache.org/jira/browse/OAK-3987
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, query
>Reporter: Alex Parvulescu
> Fix For: 1.6
>
>
> Based on a discussion on the dev list, it would be interesting to provide a 
> {{dry run}} mode for the indexer which would give an indication of an average 
> indexing time.
> Input could be:
> * path to index definition (mandatory)
> * path to the content (mandatory, default could be {{/}})
> * max number of nodes (optional, but I'd still cap this value so indexing 
> doesn't take over the entire aem instance)
> Also, we could do this on a separate (dedicated) thread so it doesn't 
> interfere with existing indexers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3987) Indexer dry run mode

2016-02-04 Thread Alex Parvulescu (JIRA)
Alex Parvulescu created OAK-3987:


 Summary: Indexer dry run mode
 Key: OAK-3987
 URL: https://issues.apache.org/jira/browse/OAK-3987
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, query
Reporter: Alex Parvulescu


Based on a discussion on the dev list, it would be interesting to provide a 
{{dry run}} mode for the indexer which would give an indication of an average 
indexing time.
Input could be:
* path to index definition (mandatory)
* path to the content (mandatory, default could be {{/}})
* max number of nodes (optional, but I'd still cap this value so indexing 
doesn't take over the entire AEM instance)

Also, we could do this on a separate (dedicated) thread so it doesn't interfere 
with existing indexers.
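The input handling above could look roughly like this sketch (class name, parameter names and the cap value are assumptions based on the description, not an existing Oak API): the index definition path is mandatory, the content path defaults to {{/}}, and the node count is always capped.

```java
public class DryRunConfig {
    // Assumed hard cap; the issue only says the value should be capped.
    static final int HARD_CAP = 100_000;

    final String indexDefinitionPath;
    final String contentPath;
    final int maxNodes;

    DryRunConfig(String indexDefinitionPath, String contentPath, Integer maxNodes) {
        if (indexDefinitionPath == null || indexDefinitionPath.isEmpty()) {
            throw new IllegalArgumentException("index definition path is mandatory");
        }
        this.indexDefinitionPath = indexDefinitionPath;
        // Content path is mandatory but defaults to the root.
        this.contentPath = (contentPath == null) ? "/" : contentPath;
        // Optional, yet never allowed to exceed the hard cap.
        this.maxNodes = (maxNodes == null) ? HARD_CAP : Math.min(maxNodes, HARD_CAP);
    }
}
```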



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3965) SegmentPropertyState external binary property reports unusual size

2016-02-04 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132494#comment-15132494
 ] 

Francesco Mari commented on OAK-3965:
-

Because it was just an exploratory patch to expose my approach. If the approach 
(not the style) looks good, I can commit a cleaner version of it.

> SegmentPropertyState external binary property reports unusual size
> --
>
> Key: OAK-3965
> URL: https://issues.apache.org/jira/browse/OAK-3965
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Minor
> Attachments: ExternalBlobIT.java.patch, OAK-3965-01.patch
>
>
> Calling getSize on an external binary reports a very unusual size:
> {code}
> world = {2318898817333174704 bytes}
> {code}
> the binary is actually around 17k in size.
> I think this happens because of how the size is computed, a sort of a read 
> overflow, and it also affects the toString method [0].
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment/src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentPropertyState.java#L202



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3988) Offline compaction should avoid loading external binaries

2016-02-04 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-3988:
-
Description: 
OAK-3965 uncovered an issue with the {{size}} calls on PropertyNodeStates when 
dealing with external binaries, and the fix effectively breaks offline 
compaction on repos with external data stores.

The issue is twofold:
* OAK-3965 breaks offline compaction in certain setups
* the current code puts all the nodes with an external binary in the compaction 
map, skipping the size filter

I think the offline compactor should basically ignore the external binaries 
when checking whether a node meets the compaction map criteria.

  was:
OAK-3965 uncovered an issue with the {{size}} calls on PropertyNodeStates when 
dealing with external binaries, and the fix effectively breaks offline 
compaction on repos with external data stores.
I think the offline compactor should basically ignore the external binaries in 
checking if a node meets the compaction map criteria or not.


> Offline compaction should avoid loading external binaries
> -
>
> Key: OAK-3988
> URL: https://issues.apache.org/jira/browse/OAK-3988
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>
> OAK-3965 uncovered an issue with the {{size}} calls on PropertyNodeStates 
> when dealing with external binaries, and the fix effectively breaks offline 
> compaction on repos with external data stores.
> The issue is twofold:
> * OAK-3965 breaks offline compaction in certain setups
> * the current code puts all the nodes with an external binary in the 
> compaction map, skipping the size filter
> I think the offline compactor should basically ignore the external binaries 
> in checking if a node meets the compaction map criteria or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3704) Keep track of nested CUGs

2016-02-04 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-3704.
-
   Resolution: Fixed
Fix Version/s: 1.3.15

> Keep track of nested CUGs
> -
>
> Key: OAK-3704
> URL: https://issues.apache.org/jira/browse/OAK-3704
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: authorization-cug
>Reporter: angela
>Assignee: angela
> Fix For: 1.3.15
>
>
> In order to ease evaluation of CUG policies, the implementation should come 
> with a post-commit validator that keeps track of nested CUGs, e.g. in a 
> hidden property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3700) authorization setup for closed user groups (follow up)

2016-02-04 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-3700.
-
   Resolution: Fixed
Fix Version/s: 1.3.15

> authorization setup for closed user groups (follow up)
> --
>
> Key: OAK-3700
> URL: https://issues.apache.org/jira/browse/OAK-3700
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: authorization-cug
>Reporter: angela
>Assignee: angela
> Fix For: 1.3.15
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3988) Offline compaction should avoid loading external binaries

2016-02-04 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-3988.
--
   Resolution: Fixed
Fix Version/s: 1.3.16

http://svn.apache.org/viewvc?rev=1728525&view=rev

> Offline compaction should avoid loading external binaries
> -
>
> Key: OAK-3988
> URL: https://issues.apache.org/jira/browse/OAK-3988
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Fix For: 1.3.16
>
>
> OAK-3965 uncovered an issue with the {{size}} calls on PropertyNodeStates 
> when dealing with external binaries, and the fix effectively breaks offline 
> compaction on repos with external data stores.
> The issue is twofold:
> * OAK-3965 breaks offline compaction in certain setups
> * the current code puts all the nodes with an external binary in the 
> compaction map, skipping the size filter
> I think the offline compactor should basically ignore the external binaries 
> in checking if a node meets the compaction map criteria or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3981) Change in aggregation flow in OAK-3831 causes some properties to be left out of aggregation

2016-02-04 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-3981:


 Summary: Change in aggregation flow in OAK-3831 causes some 
properties to be left out of aggregation
 Key: OAK-3981
 URL: https://issues.apache.org/jira/browse/OAK-3981
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: lucene
Affects Versions: 1.2.10, 1.0.26, 1.3.13
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.2.11, 1.0.27, 1.3.16


With OAK-3831 we changed the aggregation logic to avoid indexing those relative 
properties for which a property definition is defined but {{nodeScopeIndex}} is 
false.

This causes a regression: so far such properties were getting included via the 
aggregation rules, and now they would be left out, causing search to miss those 
terms.

As a fix we should revert to the old logic and provide a new flag for enabling 
exclusion of a property from being aggregated
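The proposed flag could work roughly like this sketch (simplified types, not the oak-lucene implementation): every property matched by an aggregation rule is included unless its definition explicitly opts out via the new exclusion flag.

```java
import java.util.List;
import java.util.stream.Collectors;

public class AggregationFilter {

    // Simplified stand-in for an index property definition; the flag name
    // follows the excludeFromAggregation config described above.
    record PropertyDef(String name, boolean excludeFromAggregation) {}

    /** Restores the old behaviour: include everything unless opted out. */
    static List<String> aggregatedProperties(List<PropertyDef> defs) {
        return defs.stream()
                .filter(d -> !d.excludeFromAggregation())
                .map(PropertyDef::name)
                .collect(Collectors.toList());
    }
}
```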



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3268) Improve datastore resilience

2016-02-04 Thread Michael Marth (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131903#comment-15131903
 ] 

Michael Marth commented on OAK-3268:


[~amitjain], all issues in this epic are done: should we close the epic as done?

> Improve datastore resilience
> 
>
> Key: OAK-3268
> URL: https://issues.apache.org/jira/browse/OAK-3268
> Project: Jackrabbit Oak
>  Issue Type: Epic
>  Components: blob
>Reporter: Michael Marth
>Assignee: Amit Jain
>Priority: Critical
>  Labels: resilience
>
> As discussed bilaterally, grouping the improvements for datastore resilience 
> in this issue for easier tracking.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-3979) RepositoryUpgrade skip on error must skip non existing node bundle

2016-02-04 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding reassigned OAK-3979:
---

Assignee: Julian Sedding  (was: Chetan Mehrotra)

> RepositoryUpgrade skip on error must skip non existing node bundle
> --
>
> Key: OAK-3979
> URL: https://issues.apache.org/jira/browse/OAK-3979
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Chetan Mehrotra
>Assignee: Julian Sedding
>Priority: Minor
> Fix For: 1.4
>
> Attachments: OAK-3979.patch
>
>
> With OAK-2893 support was added to continue the upgrade even if issues exist 
> with some of the nodes to copy. That change checks for {{ItemStateException}}. 
> However, if the bundle is not present then a NullPointerException is thrown, 
> which gets ignored
> {noformat}
> Caused by: java.lang.NullPointerException: Could not load NodePropBundle for 
> id [ae3d4171-6ece-4e95-b6e4-3f487edf794e]
> at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:236) 
> ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.BundleLoader.loadBundle(BundleLoader.java:62)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.JackrabbitNodeState.createChildNodeState(JackrabbitNodeState.java:349)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.JackrabbitNodeState.getChildNodeEntries(JackrabbitNodeState.java:320)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.AbstractDecoratedNodeState.getChildNodeEntries(AbstractDecoratedNodeState.java:130)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.NodeStateCopier.copyNodeState(NodeStateCopier.java:187)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.NodeStateCopier.copyNodeState(NodeStateCopier.java:150)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.NodeStateCopier.access$200(NodeStateCopier.java:72)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.nodestate.NodeStateCopier$Builder.copy(NodeStateCopier.java:397)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copyWorkspace(RepositoryUpgrade.java:866)
>  ~[na:na]
> at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:438)
>  ~[na:na]
> {noformat}
> As a fix {{BundleLoader}} should throw {{ItemStateException}} instead of 
> {{NullPointerException}} when a NodePropBundle is missing for a given id
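A minimal sketch of that fix (simplified signatures, not the actual BundleLoader): the null check is converted into the checked exception that the skip-on-error path already knows how to handle.

```java
public class BundleLoaderSketch {

    // Stand-in for Jackrabbit's checked ItemStateException.
    static class ItemStateException extends Exception {
        ItemStateException(String msg) { super(msg); }
    }

    /**
     * Loads a bundle by id; a missing bundle is reported as a checked
     * exception instead of surfacing as a NullPointerException.
     */
    static Object loadBundle(java.util.Map<String, Object> store, String id)
            throws ItemStateException {
        Object bundle = store.get(id);
        if (bundle == null) {
            throw new ItemStateException("No NodePropBundle for id [" + id + "]");
        }
        return bundle;
    }
}
```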



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)