[jira] [Commented] (OAK-7952) JCR System users no longer consider group ACEs of groups they are members of

2018-12-11 Thread Konrad Windszus (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16717703#comment-16717703
 ] 

Konrad Windszus commented on OAK-7952:
--

[~anchela] Thanks a lot for the hint. I will make sure to update the Sling 
documentation accordingly (https://issues.apache.org/jira/browse/SLING-8171).

> JCR System users no longer consider group ACEs of groups they are members of
> --
>
> Key: OAK-7952
> URL: https://issues.apache.org/jira/browse/OAK-7952
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.8.3
>Reporter: Konrad Windszus
>Priority: Major
> Attachments: OAK-7952_test-servlet.java
>
>
> In Oak 1.8.3, JCR system users (JCR-3802) no longer consider the access 
> control entries bound to a group principal (i.e. a group they are members 
> of); only direct ACEs seem to be considered.
> I used the attached simple servlet to test read access for an existing 
> service user "workflow-service". Unfortunately it throws a 
> {{javax.jcr.PathNotFoundException}}, although the service user should 
> inherit read access to the accessed path via its group membership. It works 
> flawlessly if the system user has direct read access to that path.
> Some more information about {{SlingRepository.createServiceSession(...)}}: 
> internally, the service user implementation looks up the actual service 
> user name and then impersonates it from a new admin session 
> (https://github.com/apache/sling-org-apache-sling-jcr-base/blob/de884b669836aacb2666da1e7bae1a6735de3bdb/src/main/java/org/apache/sling/jcr/base/AbstractSlingRepository2.java#L197)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-7914) Cleanup updates the gc.log after a failed compaction

2018-12-11 Thread Francesco Mari (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-7914.
-
   Resolution: Not A Problem
Fix Version/s: (was: 1.6.16)

> Cleanup updates the gc.log after a failed compaction
> 
>
> Key: OAK-7914
> URL: https://issues.apache.org/jira/browse/OAK-7914
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: 1.6.15
>Reporter: Francesco Mari
>Assignee: Francesco Mari
>Priority: Critical
> Attachments: compaction.log
>
>
> The {{gc.log}} is always updated during the cleanup phase, regardless of the 
> result of the compaction phase. This might cause a scenario similar to the 
> following.
> - A repository of 100GB, of which 40GB is garbage, is compacted.
> - The estimation phase decides it's OK to compact.
> - Compaction produces a new head state, adding another 60GB.
> - Compaction fails, maybe because of too many concurrent commits.
> - Cleanup removes the 60GB generated during compaction.
> - Cleanup adds an entry to the {{gc.log}} recording the current size of the 
> repository, 100GB.
> Now, let's imagine that compaction is run shortly after that. The amount of 
> content added to the repository is negligible. For the sake of simplicity, 
> let's say that the size of the repository hasn't changed. The following 
> happens.
> - The repository is 100GB, of which 40GB is the same garbage that wasn't 
> removed above.
> - The estimation phase decides it's not OK to compact, because the {{gc.log}} 
> reports that the latest known size of the repository is 100GB, and there is 
> not enough content to remove.
> This is in fact a bug, because there are 40GB worth of garbage in the 
> repository, but estimation is not able to see that anymore. The solution 
> seems to be not to update the {{gc.log}} if compaction fails. In other words, 
> {{gc.log}} should contain the size of the *compacted* repository over time, 
> and no more.
> Thanks to [~rma61...@adobe.com] for reporting it.
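The guard proposed in the description — updating {{gc.log}} only after a successful compaction — could be sketched as follows. This is a hypothetical illustration; the class and method names are made up and are not Oak's actual API.

```java
// Hypothetical sketch: record a new size in gc.log only when compaction
// succeeded; otherwise keep the last *compacted* size so the next
// estimation phase still sees the old garbage.
public class GcJournalSketch {

    // previousSize: last size recorded in gc.log (GB)
    // sizeAfterCleanup: repository size after the cleanup phase (GB)
    static long nextJournalSize(long previousSize,
                                boolean compactionSucceeded,
                                long sizeAfterCleanup) {
        // On failure, do not touch the journal entry.
        return compactionSucceeded ? sizeAfterCleanup : previousSize;
    }

    public static void main(String[] args) {
        // A successful compaction records the new, compacted size (60GB).
        System.out.println(nextJournalSize(100L, true, 60L));
        // Scenario from the description: compaction fails and cleanup leaves
        // the repository at 100GB. The journal keeps the old 60GB entry, so
        // the 40GB of garbage stays visible to the next estimation run.
        System.out.println(nextJournalSize(60L, false, 100L));
    }
}
```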





[jira] [Commented] (OAK-7914) Cleanup updates the gc.log after a failed compaction

2018-12-11 Thread Francesco Mari (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16717618#comment-16717618
 ] 

Francesco Mari commented on OAK-7914:
-

[~rma61...@adobe.com], good to know. I'm going to resolve this issue. Thanks 
for the information.

> Cleanup updates the gc.log after a failed compaction
> 
>
> Key: OAK-7914
> URL: https://issues.apache.org/jira/browse/OAK-7914
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: 1.6.15
>Reporter: Francesco Mari
>Assignee: Francesco Mari
>Priority: Critical
> Fix For: 1.6.16
>
> Attachments: compaction.log
>
>





[jira] [Commented] (OAK-7914) Cleanup updates the gc.log after a failed compaction

2018-12-11 Thread Tom Blackford (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16717510#comment-16717510
 ] 

Tom Blackford commented on OAK-7914:


{quote}I'm not able to reproduce this issue. There is a guard in the code 
handling the {{gc.log}} that prevents it from being updated if the compaction 
phase fails and doesn't install a new head revision{quote}

Hey [~frm], thanks for checking. I think the issue was that the ownership of 
the file was throwing the logging off: we were indeed seeing the error [1] 
until I corrected the ownership, after which the original issue seemed to go 
away.

Apologies for the false alarm.

 

[1]
{code:java}
16.11.2018 02:45:38.645 *ERROR* [TarMK revision gc [/mnt/crx/author/crx-quickstart/repository/segmentstore]] org.apache.jackrabbit.oak.segment.file.GCJournal Error writing gc journal
java.nio.file.AccessDeniedException: /mnt/crx/author/crx-quickstart/repository/segmentstore/gc.log
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
{code}

> Cleanup updates the gc.log after a failed compaction
> 
>
> Key: OAK-7914
> URL: https://issues.apache.org/jira/browse/OAK-7914
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: 1.6.15
>Reporter: Francesco Mari
>Assignee: Francesco Mari
>Priority: Critical
> Fix For: 1.6.16
>
> Attachments: compaction.log
>
>





[jira] [Created] (OAK-7956) Conflict may leave behind _collisions entry

2018-12-11 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-7956:
-

 Summary: Conflict may leave behind _collisions entry
 Key: OAK-7956
 URL: https://issues.apache.org/jira/browse/OAK-7956
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: documentmk
Affects Versions: 1.8.0, 1.6.0, 1.4.0
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger


Under high concurrent conflicting workload, entries in the {{_collisions}} map 
may be left behind and accumulate over time.





[jira] [Created] (OAK-7955) "oak-run compact": possible risk of damaging the oak repository

2018-12-11 Thread LS (JIRA)
LS created OAK-7955:
---

 Summary: "oak-run compact": possible risk of damaging the oak repository
 Key: OAK-7955
 URL: https://issues.apache.org/jira/browse/OAK-7955
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: LS


Hi there,

 

I've sometimes had the issue that the repository was still online while 
offline compaction was started, which resulted in a defective repository.

This happened especially if a user cancelled the execution of the compaction.

This was with version 1.2.28 and the repository of Adobe AEM 6.1.

Would it be possible to check whether the repository is in use by another 
process and, if so, to cancel the compaction?

Alternatively, something like a configuration file with metadata such as the 
PID or the execution directory of the application that uses the repository; 
oak-run could then check whether that process is still running.

 

Thank you in advance.
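The PID-file check requested above could look roughly like the sketch below. This is purely illustrative: the PID-file location and the {{RepositoryLockCheck}} class are assumptions, not an existing oak-run feature.

```java
// Hypothetical sketch: before offline compaction touches the segment store,
// look for a PID file left by the application and refuse to run if that
// process is still alive.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RepositoryLockCheck {

    // Returns true if the PID file exists and names a live process.
    static boolean repositoryInUse(Path pidFile) {
        try {
            if (!Files.exists(pidFile)) {
                return false; // no PID file: assume the repository is offline
            }
            long pid = Long.parseLong(Files.readString(pidFile).trim());
            return ProcessHandle.of(pid).map(ProcessHandle::isAlive).orElse(false);
        } catch (IOException | NumberFormatException e) {
            // Unreadable or malformed PID file: be conservative, assume in use.
            return true;
        }
    }

    public static void main(String[] args) {
        // Assumed location of the application's PID file.
        Path pidFile = Path.of(args.length > 0 ? args[0] : "repository/app.pid");
        if (repositoryInUse(pidFile)) {
            System.err.println("Repository appears to be in use; aborting compaction.");
            System.exit(1);
        }
        System.out.println("No live process found; safe to compact.");
    }
}
```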





[jira] [Commented] (OAK-7951) Datastore GC stats not updated with failure when "Not all repositories have marked references available"

2018-12-11 Thread Wim Symons (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16717015#comment-16717015
 ] 

Wim Symons commented on OAK-7951:
-

PR for 1.8: https://github.com/apache/jackrabbit-oak/pull/119

> Datastore GC stats not updated with failure when "Not all repositories have 
> marked references available"
> 
>
> Key: OAK-7951
> URL: https://issues.apache.org/jira/browse/OAK-7951
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-plugins
>Affects Versions: 1.6.14
>Reporter: Wim Symons
>Priority: Major
>
> In case you have a shared S3 datastore and you haven't updated the 
> repository-* files in the META/ folder after adding or removing instances, 
> you'll notice an error like this in the logs:
>  
> {code:java}
> 10.12.2018 04:01:23.535 *ERROR* [sling-oak-observation-51349] 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector Not all 
> repositories have marked references available : 
> [61b97331-58a8-434b-bb49-a43726b569bf]
> {code}
> Unfortunately, this error isn't reported back, so it appears DSGC has 
> succeeded.
>  
> The logs state (for example):
>  
> {code:java}
> 10.12.2018 04:01:23.535 *INFO* [sling-oak-observation-51349] 
> org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector Blob garbage 
> collection completed in 20.25 s (20247 ms). Number of blobs deleted [0] with 
> max modification time of [2018-12-09 04:01:03.288]
> {code}
> And the BlobGarbageCollection JMX bean reports success as well.
> This is unwanted behaviour, as the DSGC run has actually failed.
> An IOException is thrown in 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/MarkSweepGarbageCollector.java#L815,]
>  but it is caught and not re-thrown at 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/MarkSweepGarbageCollector.java#L462]
> I think this exception should be re-thrown there causing it to be caught at 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/MarkSweepGarbageCollector.java#L362]
>  resulting in the correct behaviour.
>  
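The re-throw proposed in the description could be sketched as below: let the inner phase propagate its IOException so the outer collector loop records a failed run. The class, enum, and method names here are illustrative and do not mirror MarkSweepGarbageCollector's real structure.

```java
// Hypothetical sketch: propagate the mark-phase failure instead of
// swallowing it, so the status exposed to JMX reflects the failed run.
import java.io.IOException;

public class GcStatusSketch {

    enum Status { SUCCESS, FAILURE }

    // Inner phase: currently logs and swallows the error; proposed: re-throw.
    static void markPhase(boolean allReposMarked) throws IOException {
        if (!allReposMarked) {
            throw new IOException("Not all repositories have marked references available");
        }
    }

    // Outer loop: records the status that the JMX bean would expose.
    static Status collect(boolean allReposMarked) {
        try {
            markPhase(allReposMarked);
            return Status.SUCCESS;
        } catch (IOException e) {
            return Status.FAILURE; // now visible to automation via JMX
        }
    }

    public static void main(String[] args) {
        System.out.println(collect(false)); // prints FAILURE
        System.out.println(collect(true));  // prints SUCCESS
    }
}
```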





[jira] [Commented] (OAK-7951) Datastore GC stats not updated with failure when "Not all repositories have marked references available"

2018-12-11 Thread Wim Symons (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16716979#comment-16716979
 ] 

Wim Symons commented on OAK-7951:
-

PR for 1.6: https://github.com/apache/jackrabbit-oak/pull/118

> Datastore GC stats not updated with failure when "Not all repositories have 
> marked references available"
> 
>
> Key: OAK-7951
> URL: https://issues.apache.org/jira/browse/OAK-7951
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-plugins
>Affects Versions: 1.6.14
>Reporter: Wim Symons
>Priority: Major
>





[jira] [Commented] (OAK-7951) Datastore GC stats not updated with failure when "Not all repositories have marked references available"

2018-12-11 Thread Wim Symons (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16716945#comment-16716945
 ] 

Wim Symons commented on OAK-7951:
-

PR for trunk: [https://github.com/apache/jackrabbit-oak/pull/117]

PR for 1.6 coming up

> Datastore GC stats not updated with failure when "Not all repositories have 
> marked references available"
> 
>
> Key: OAK-7951
> URL: https://issues.apache.org/jira/browse/OAK-7951
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-plugins
>Affects Versions: 1.6.14
>Reporter: Wim Symons
>Priority: Major
>





[jira] [Commented] (OAK-7951) Datastore GC stats not updated with failure when "Not all repositories have marked references available"

2018-12-11 Thread Wim Symons (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16716895#comment-16716895
 ] 

Wim Symons commented on OAK-7951:
-

[~amitjain] logging just does not cut it when you are automating processes. I'd 
like to (remotely) check the JMX bean status after a run, to see if it failed 
or not.
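The kind of remote JMX check described above can be sketched in a few lines of Java. The bean queried below is the standard JVM Runtime bean, used only as a stand-in: the actual BlobGarbageCollection bean name and its status attributes would have to be taken from the running Oak instance's JMX console.

```java
// Minimal sketch of reading an MBean attribute, the same mechanism an
// automation script would use to check DSGC status after a run.
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxStatusCheck {

    // Reads the JVM uptime over JMX; a real check would read the Oak GC
    // bean's status attribute instead (name is deployment-specific).
    static long readUptime() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName runtime = new ObjectName("java.lang:type=Runtime");
        return (Long) server.getAttribute(runtime, "Uptime");
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Attribute read over JMX: " + readUptime() + " ms");
    }
}
```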

> Datastore GC stats not updated with failure when "Not all repositories have 
> marked references available"
> 
>
> Key: OAK-7951
> URL: https://issues.apache.org/jira/browse/OAK-7951
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-plugins
>Affects Versions: 1.6.14
>Reporter: Wim Symons
>Priority: Major
>





[jira] [Commented] (OAK-7941) Test failure: IndexCopierTest.directoryContentMismatch_COR

2018-12-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16716881#comment-16716881
 ] 

Hudson commented on OAK-7941:
-

Previously failing build now is OK.
 Passed run: [Jackrabbit Oak 
#1848|https://builds.apache.org/job/Jackrabbit%20Oak/1848/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1848/console]

> Test failure: IndexCopierTest.directoryContentMismatch_COR
> --
>
> Key: OAK-7941
> URL: https://issues.apache.org/jira/browse/OAK-7941
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration, lucene
>Reporter: Hudson
>Priority: Major
> Fix For: 1.10
>
>
> No description is provided
> The build Jackrabbit Oak #1830 has failed.
> First failed run: [Jackrabbit Oak 
> #1830|https://builds.apache.org/job/Jackrabbit%20Oak/1830/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1830/console]
> {noformat}
> [ERROR] Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 0.832 s <<< FAILURE! - in 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest
> [ERROR] 
> directoryContentMismatch_COR(org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest)
>   Time elapsed: 0.08 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.readAndAssert(IndexCopierTest.java:1119)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.directoryContentMismatch_COR(IndexCopierTest.java:1081)
> {noformat}





[jira] [Assigned] (OAK-7914) Cleanup updates the gc.log after a failed compaction

2018-12-11 Thread Francesco Mari (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari reassigned OAK-7914:
---

Assignee: Francesco Mari

> Cleanup updates the gc.log after a failed compaction
> 
>
> Key: OAK-7914
> URL: https://issues.apache.org/jira/browse/OAK-7914
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: 1.6.15
>Reporter: Francesco Mari
>Assignee: Francesco Mari
>Priority: Critical
> Fix For: 1.6.16
>
> Attachments: compaction.log
>
>





[jira] [Commented] (OAK-7951) Datastore GC stats not updated with failure when "Not all repositories have marked references available"

2018-12-11 Thread Amit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16716658#comment-16716658
 ] 

Amit Jain commented on OAK-7951:


[~wim.symons] From the POV of DSGC, not having all repositories mark their 
references is a known situation and is appropriately logged. I see the point 
about the JMX bean not being informed in such cases, but having 0 deleted 
blobs already points to an anomaly in most cases, save for a very recent 
deployment.

I'd like to avoid throwing errors where the DSGC code expects these 
situations and indicates them by way of logs. Could the situation be made 
clearer with documentation or more appropriate logging?

> Datastore GC stats not updated with failure when "Not all repositories have 
> marked references available"
> 
>
> Key: OAK-7951
> URL: https://issues.apache.org/jira/browse/OAK-7951
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-plugins
>Affects Versions: 1.6.14
>Reporter: Wim Symons
>Priority: Major
>





[jira] [Commented] (OAK-7952) JCR System users no longer consider group ACEs of groups they are members of

2018-12-11 Thread angela (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16716563#comment-16716563
 ] 

angela commented on OAK-7952:
-

[~kwin], i am pretty sure that this is not related to oak but is a 
consequence of the new service user mapping format, which allows for 
aggregation of multiple service principals and basically replaces the need 
for group membership. this is on the sling layer and definitely not an oak 
issue.
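For illustration, the difference between the older user-based mapping and the newer principal-based mapping on the Sling side might look like the config fragment below. The service and principal names are made up, and the exact syntax should be verified against the Sling service user mapping documentation.

```
# Hypothetical amended ServiceUserMapper OSGi config.

# Old style: the service is mapped to a single system user, whose
# permissions could come via group membership:
user.mapping=["com.example.bundle:workflow=workflow-service"]

# Newer principal-based style: the brackets map the service directly to one
# or more principals, aggregating what group membership used to provide:
user.mapping=["com.example.bundle:workflow=[workflow-service,workflow-readers]"]
```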

> JCR System users no longer consider group ACEs of groups they are members of
> --
>
> Key: OAK-7952
> URL: https://issues.apache.org/jira/browse/OAK-7952
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.8.3
>Reporter: Konrad Windszus
>Priority: Major
> Attachments: OAK-7952_test-servlet.java
>
>





[jira] [Resolved] (OAK-7952) JCR System users no longer consider group ACEs of groups they are members of

2018-12-11 Thread angela (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-7952.
-
Resolution: Invalid

> JCR System users no longer consider group ACEs of groups they are members of
> --
>
> Key: OAK-7952
> URL: https://issues.apache.org/jira/browse/OAK-7952
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.8.3
>Reporter: Konrad Windszus
>Priority: Major
> Attachments: OAK-7952_test-servlet.java
>
>





[jira] [Commented] (OAK-7947) Lazy loading of Lucene index files startup

2018-12-11 Thread Thomas Mueller (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16716524#comment-16716524
 ] 

Thomas Mueller commented on OAK-7947:
-

> The changes in ... getIndexDefinition ... not from stored index definition

Yes, I know, this is a bug in the patch. I will fix that.

> the patch you had attached seems quite risky to me

Yes. I didn't plan to apply the patch, it's just the starting point. There are 
bugs, todos, and some parts are probably not needed.

Next, I will try to find out which parts are not needed.

> let index open happen as it happens today but copy required files right away 
> (synchronously) and schedule rest of the files for later.

I'm afraid I would need some help for this. I tried disabling copy-on-read, but 
then the files are opened from the datastore, which has an additional 
problem: files are opened multiple times. So I came to the conclusion it's 
best not to open the files until they are really needed, either to run queries 
or to do detailed cost estimation (if the index might be used). So there 
are 3 stages (AFAIK):

* Stage 1: just the index definition is needed, to see if the properties are 
indexed.
* Stage 2: numDocs are needed to do cost estimation.
* Stage 3: index is used for a query.

Obviously, for stage 3, the index files are needed. For stage 1, right now the 
index files are opened. I think it's sufficient to delay opening the files 
there, and just use the index definition. For stage 2, I think (not sure yet) 
that this is actually rare enough and it's OK to open all index files. If it 
turns out this is _not_ that rare, then we can store the numDocs in the index 
definition from time to time (in theory we could do that for every index 
update), together with the time of the numDocs update. When the numDocs are 
needed, they are either read from the index definition (say, if they are 
younger than 1 hour or so), or else the index files are opened.



> Lazy loading of Lucene index files startup
> --
>
> Key: OAK-7947
> URL: https://issues.apache.org/jira/browse/OAK-7947
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene, query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Attachments: OAK-7947.patch
>
>
> Right now, all Lucene index binaries are loaded on startup (I think when the 
> first query is run, to do cost calculation). This is a performance problem if 
> the index files are large, and need to be downloaded from the data store.





[jira] [Updated] (OAK-7953) Test failure: JdbcToSegmentWithMetadataTest.validateMigration()

2018-12-11 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/OAK-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-7953:
---
Fix Version/s: 1.8.10

> Test failure: JdbcToSegmentWithMetadataTest.validateMigration()
> ---
>
> Key: OAK-7953
> URL: https://issues.apache.org/jira/browse/OAK-7953
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: upgrade
>Reporter: Marcel Reutegger
>Assignee: Tomek Rękawek
>Priority: Major
> Fix For: 1.10, 1.8.10, 1.9.14
>
>
> The test fails when executed independently, but runs fine when I run all 
> tests with maven on my machine. When it fails, the error is:
> {noformat}
> java.lang.RuntimeException: javax.jcr.RepositoryException: Failed to copy 
> content
>   at com.google.common.io.Closer.rethrow(Closer.java:149)
>   at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:81)
>   at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:67)
>   at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.main(OakUpgrade.java:48)
>   at 
> org.apache.jackrabbit.oak.upgrade.cli.AbstractOak2OakTest.prepare(AbstractOak2OakTest.java:108)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: javax.jcr.RepositoryException: Failed to copy content
>   at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.copy(RepositorySidegrade.java:286)
>   at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.sidegrade(OakUpgrade.java:92)
>   at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:78)
>   ... 27 more
> Caused by: java.lang.IllegalArgumentException
>   at 
> org.apache.jackrabbit.oak.upgrade.nodestate.MetadataExposingNodeState.wrap(MetadataExposingNodeState.java:81)
>   at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.wrapNodeState(RepositorySidegrade.java:524)
>   at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.copyDiffToTarget(RepositorySidegrade.java:394)
>   at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.migrateWithCheckpoints(RepositorySidegrade.java:344)
>   at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.copyState(RepositorySidegrade.java:309)
>   at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.copy(RepositorySidegrade.java:279)
>   ... 29 more
> {noformat}





[jira] [Commented] (OAK-7953) Test failure: JdbcToSegmentWithMetadataTest.validateMigration()

2018-12-11 Thread JIRA


[ 
https://issues.apache.org/jira/browse/OAK-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16716502#comment-16716502
 ] 

Tomek Rękawek commented on OAK-7953:


Backported to 1.8 in [r1848658|https://svn.apache.org/r1848658].






[jira] [Updated] (OAK-7953) Test failure: JdbcToSegmentWithMetadataTest.validateMigration()

2018-12-11 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/OAK-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-7953:
---
Fix Version/s: 1.9.14



