[jira] [Created] (OAK-8110) Observer Whiteboard is missing

2019-03-05 Thread Oliver Lietz (JIRA)
Oliver Lietz created OAK-8110:
-

 Summary: Observer Whiteboard is missing
 Key: OAK-8110
 URL: https://issues.apache.org/jira/browse/OAK-8110
 Project: Jackrabbit Oak
  Issue Type: Documentation
  Components: doc
Reporter: Oliver Lietz


Documentation is missing for the Whiteboard, which allows adding 
[Observers|https://jackrabbit.apache.org/oak/docs/apidocs/org/apache/jackrabbit/oak/spi/commit/Observer.html]
 to 
[Observables|https://jackrabbit.apache.org/oak/docs/apidocs/org/apache/jackrabbit/oak/spi/commit/Observable.html]
 in order to listen for content changes.
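
As a starting point for the missing page, here is a minimal sketch of how an 
Observer can be registered through the Whiteboard. The Whiteboard.register and 
Observer.contentChanged signatures are from the Oak SPI; the class name and the 
empty property map are illustrative.

{code:java}
import java.util.Collections;

import org.apache.jackrabbit.oak.spi.commit.CommitInfo;
import org.apache.jackrabbit.oak.spi.commit.Observer;
import org.apache.jackrabbit.oak.spi.state.NodeState;
import org.apache.jackrabbit.oak.spi.whiteboard.Registration;
import org.apache.jackrabbit.oak.spi.whiteboard.Whiteboard;

public class ObserverRegistrationSketch {

    // Register an Observer with the Whiteboard; the repository's Observable
    // picks it up and invokes contentChanged() after every commit.
    public static Registration register(Whiteboard whiteboard) {
        Observer observer = new Observer() {
            @Override
            public void contentChanged(NodeState root, CommitInfo info) {
                // react to the new root state here
            }
        };
        return whiteboard.register(Observer.class, observer, Collections.emptyMap());
    }
}
{code}

Keeping the returned Registration is important: calling unregister() on it 
detaches the Observer again.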



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (OAK-8084) LogCustomizer should allow instantiation with Java class (in addition to class name)

2019-03-05 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779299#comment-16779299
 ] 

Julian Reschke edited comment on OAK-8084 at 3/5/19 6:10 PM:
-

trunk: [r1854461|http://svn.apache.org/r1854461]
1.10: [r1854860|http://svn.apache.org/r1854860]
1.8: [r1854863|http://svn.apache.org/r1854863]



was (Author: reschke):
trunk: [r1854461|http://svn.apache.org/r1854461]
1.10: [r1854860|http://svn.apache.org/r1854860]


> LogCustomizer should allow instantiation with Java class (in addition to 
> class name)
> 
>
> Key: OAK-8084
> URL: https://issues.apache.org/jira/browse/OAK-8084
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: commons
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>  Labels: candidate_oak_1_6
> Fix For: 1.12, 1.11.0, 1.8.12, 1.10.2
>
>
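
For context, a short sketch of what the improvement enables. The builder chain 
(forLogger/enable/create, starting/getLogs/finished) is the existing 
LogCustomizer API in oak-commons; the forLogger(Class) overload is the addition 
this issue tracks, with its exact signature assumed from the summary.

{code:java}
import ch.qos.logback.classic.Level;
import org.apache.jackrabbit.oak.commons.junit.LogCustomizer;
import org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore;

public class LogCustomizerSketch {
    public static void main(String[] args) {
        // Before: the logger is addressed by name; a typo or a class rename
        // only shows up at runtime as an empty log capture.
        LogCustomizer byName = LogCustomizer
                .forLogger(DocumentNodeStore.class.getName())
                .enable(Level.DEBUG)
                .create();

        // After this issue: the class itself can be passed, so the reference
        // is checked by the compiler and survives refactoring.
        LogCustomizer byClass = LogCustomizer
                .forLogger(DocumentNodeStore.class)
                .enable(Level.DEBUG)
                .create();

        byClass.starting();                     // begin capturing log output
        // ... exercise the code under test ...
        System.out.println(byClass.getLogs());  // captured messages
        byClass.finished();                     // detach the appender again
    }
}
{code}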




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8084) LogCustomizer should allow instantiation with Java class (in addition to class name)

2019-03-05 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8084:

Fix Version/s: 1.8.12

> LogCustomizer should allow instantiation with Java class (in addition to 
> class name)
> 
>
> Key: OAK-8084
> URL: https://issues.apache.org/jira/browse/OAK-8084
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: commons
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>  Labels: candidate_oak_1_8
> Fix For: 1.12, 1.11.0, 1.8.12, 1.10.2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8084) LogCustomizer should allow instantiation with Java class (in addition to class name)

2019-03-05 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8084:

Labels: candidate_oak_1_6  (was: candidate_oak_1_8)

> LogCustomizer should allow instantiation with Java class (in addition to 
> class name)
> 
>
> Key: OAK-8084
> URL: https://issues.apache.org/jira/browse/OAK-8084
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: commons
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>  Labels: candidate_oak_1_6
> Fix For: 1.12, 1.11.0, 1.8.12, 1.10.2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-8109) Setting up Composite Store on AEM 6.4.3.0

2019-03-05 Thread Sajid Momin (JIRA)
Sajid Momin created OAK-8109:


 Summary: Setting up Composite Store on AEM 6.4.3.0
 Key: OAK-8109
 URL: https://issues.apache.org/jira/browse/OAK-8109
 Project: Jackrabbit Oak
  Issue Type: Documentation
  Components: composite
Affects Versions: 1.8.9
 Environment: This is on my local environment.
Reporter: Sajid Momin


Hello, I am looking to experiment with the JCR Composite Store in AEM 6.4 for 
educational purposes. There is not much documentation on the web on how to 
configure it, so I thought I would ask here. I have looked through the 
Jackrabbit Oak code and was able to configure the Composite Store with 
MongoMK/TarMK. However, I think my configuration is not complete, since 
clientlibs in the apps are not being indexed for queries, so the styling on 
the page is missing. I assume that I am missing an indexing configuration 
step, but I am not 100% sure. I am hoping someone here can help unblock me.

Also, I am hoping to set up the Composite Store using TarMK for both content 
and apps. I have not succeeded with this implementation as of right now. I 
assume Jackrabbit Oak 1.8.9 is not capable of this. Anyway, any help will be 
greatly appreciated. Thanks.
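
For anyone landing here with the same question, a rough sketch of how a 
composite store over two TarMK stores can be wired programmatically. It uses 
the oak-store-composite and oak-segment-tar APIs as of the 1.8/1.10 lines; the 
mount name, paths, and directories are illustrative, and AEM itself configures 
all of this through OSGi rather than code.

{code:java}
import java.io.File;

import org.apache.jackrabbit.oak.composite.CompositeNodeStore;
import org.apache.jackrabbit.oak.segment.SegmentNodeStoreBuilders;
import org.apache.jackrabbit.oak.segment.file.FileStore;
import org.apache.jackrabbit.oak.segment.file.FileStoreBuilder;
import org.apache.jackrabbit.oak.spi.mount.MountInfoProvider;
import org.apache.jackrabbit.oak.spi.mount.Mounts;
import org.apache.jackrabbit.oak.spi.state.NodeStore;

public class CompositeStoreSketch {

    public static NodeStore build(File globalDir, File libsDir) throws Exception {
        // Two independent TarMK stores: one global, one for the mounted part.
        FileStore globalFs = FileStoreBuilder.fileStoreBuilder(globalDir).build();
        FileStore libsFs = FileStoreBuilder.fileStoreBuilder(libsDir).build();
        NodeStore globalStore = SegmentNodeStoreBuilders.builder(globalFs).build();
        NodeStore libsStore = SegmentNodeStoreBuilders.builder(libsFs).build();

        // /libs and /apps are served read-only from the second store.
        MountInfoProvider mip = Mounts.newBuilder()
                .readOnlyMount("libs", "/libs", "/apps")
                .build();

        return new CompositeNodeStore.Builder(mip, globalStore)
                .addMount("libs", libsStore)
                .build();
    }
}
{code}

Note that indexes have to cover both mounts: content under a read-only mount is 
only queryable if the relevant index definitions account for it, which is one 
plausible reason for the missing clientlib query results described above.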



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (OAK-8051) PersistentCache: error during open can lead to incomplete initialization and subsequent NPEs

2019-03-05 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8051:

Comment: was deleted

(was: trunk: [r1854701|http://svn.apache.org/r1854701] 
[r1854455|http://svn.apache.org/r1854455]
)

> PersistentCache: error during open can lead to incomplete initialization and 
> subsequent NPEs
> 
>
> Key: OAK-8051
> URL: https://issues.apache.org/jira/browse/OAK-8051
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.6.6
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Major
>  Labels: candidate_oak_1_6, candidate_oak_1_8
> Fix For: 1.12, 1.11.0, 1.10.2
>
> Attachments: OAK-8051.diff, OAK-8051.diff
>
>
> Seen in the wild (in 1.6.6):
> {noformat}
> 22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the store _path_/cache-4.data
> java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
> [1.4.193/7]
>   at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
>   at org.h2.mvstore.FileStore.open(FileStore.java:168)
>   at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
>   at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
> {noformat}
> Later on:
> {noformat}
> 22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the map
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.<init>(CacheMap.java:48)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1214)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildPrevDocumentsCache(DocumentMK.java:1182)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildNodeDocumentCache(DocumentMK.java:1189)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.initialize(RDBDocumentStore.java:798)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:212)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:224)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.setRDBConnection(DocumentMK.java:757)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStore(DocumentNodeStoreService.java:508)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStoreIfPossible(DocumentNodeStoreService.java:430)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.activate(DocumentNodeStoreService.java:414)
> {noformat}
> and then
> {noformat}
> 22.01.2019 08:45:16.808 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap 
> Re-opening map PREV_DOCUMENT
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.get(CacheMap.java:87)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MultiGenerationMap.readValue(MultiGenerationMap.java:71)
>   at 
> 
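
The truncated trace above already shows the shape of the bug: the open failure 
is logged but leaves the factory with a null store, and every later openMap() 
call dereferences it. A reduced sketch of the pattern and the missing guard; 
class and method names here are illustrative, not the actual Oak code:

{code:java}
import org.h2.mvstore.MVMap;
import org.h2.mvstore.MVStore;

class MapFactorySketch {

    private MVStore store; // stays null when open() fails

    void openStore(String fileName) {
        try {
            store = new MVStore.Builder().fileName(fileName).open();
        } catch (IllegalStateException e) {
            // e.g. "The file is locked": only logged as WARN, so the
            // factory is left half-initialized -- the bug in this issue
        }
    }

    MVMap<String, String> openMap(String name) {
        if (store == null) {
            // The guard that prevents the subsequent NPEs: fail fast (or
            // fall back to an in-memory map) instead of dereferencing null.
            throw new IllegalStateException("persistent cache store not available");
        }
        return store.openMap(name);
    }
}
{code}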

[jira] [Commented] (OAK-8051) PersistentCache: error during open can lead to incomplete initialization and subsequent NPEs

2019-03-05 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784636#comment-16784636
 ] 

Julian Reschke commented on OAK-8051:
-

trunk: [r1854701|http://svn.apache.org/r1854701] 
[r1854455|http://svn.apache.org/r1854455]
1.10: [r1854862|http://svn.apache.org/r1854862]


> PersistentCache: error during open can lead to incomplete initialization and 
> subsequent NPEs
> 
>
> Key: OAK-8051
> URL: https://issues.apache.org/jira/browse/OAK-8051
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.6.6
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Major
>  Labels: candidate_oak_1_6, candidate_oak_1_8
> Fix For: 1.12, 1.11.0, 1.10.2
>
> Attachments: OAK-8051.diff, OAK-8051.diff
>
>
> Seen in the wild (in 1.6.6):
> {noformat}
> 22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the store _path_/cache-4.data
> java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
> [1.4.193/7]
>   at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
>   at org.h2.mvstore.FileStore.open(FileStore.java:168)
>   at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
>   at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
> {noformat}
> Later on:
> {noformat}
> 22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the map
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.<init>(CacheMap.java:48)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1214)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildPrevDocumentsCache(DocumentMK.java:1182)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildNodeDocumentCache(DocumentMK.java:1189)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.initialize(RDBDocumentStore.java:798)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:212)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:224)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.setRDBConnection(DocumentMK.java:757)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStore(DocumentNodeStoreService.java:508)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStoreIfPossible(DocumentNodeStoreService.java:430)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.activate(DocumentNodeStoreService.java:414)
> {noformat}
> and then
> {noformat}
> 22.01.2019 08:45:16.808 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap 
> Re-opening map PREV_DOCUMENT
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.get(CacheMap.java:87)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MultiGenerationMap.readValue(MultiGenerationMap.java:71)
>   at 
> 

[jira] [Updated] (OAK-8051) PersistentCache: error during open can lead to incomplete initialization and subsequent NPEs

2019-03-05 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8051:

Fix Version/s: 1.10.2

> PersistentCache: error during open can lead to incomplete initialization and 
> subsequent NPEs
> 
>
> Key: OAK-8051
> URL: https://issues.apache.org/jira/browse/OAK-8051
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.6.6
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_6, 
> candidate_oak_1_8, patch-available
> Fix For: 1.12, 1.11.0, 1.10.2
>
> Attachments: OAK-8051.diff, OAK-8051.diff
>
>
> Seen in the wild (in 1.6.6):
> {noformat}
> 22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the store _path_/cache-4.data
> java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
> [1.4.193/7]
>   at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
>   at org.h2.mvstore.FileStore.open(FileStore.java:168)
>   at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
>   at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
> {noformat}
> Later on:
> {noformat}
> 22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the map
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.<init>(CacheMap.java:48)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1214)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildPrevDocumentsCache(DocumentMK.java:1182)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildNodeDocumentCache(DocumentMK.java:1189)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.initialize(RDBDocumentStore.java:798)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:212)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:224)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.setRDBConnection(DocumentMK.java:757)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStore(DocumentNodeStoreService.java:508)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStoreIfPossible(DocumentNodeStoreService.java:430)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.activate(DocumentNodeStoreService.java:414)
> {noformat}
> and then
> {noformat}
> 22.01.2019 08:45:16.808 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap 
> Re-opening map PREV_DOCUMENT
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.get(CacheMap.java:87)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MultiGenerationMap.readValue(MultiGenerationMap.java:71)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.asyncReadIfPresent(NodeCache.java:147)
>   at 
> 

[jira] [Updated] (OAK-8051) PersistentCache: error during open can lead to incomplete initialization and subsequent NPEs

2019-03-05 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8051:

Labels: candidate_oak_1_6 candidate_oak_1_8  (was: candidate_oak_1_10 
candidate_oak_1_6 candidate_oak_1_8 patch-available)

> PersistentCache: error during open can lead to incomplete initialization and 
> subsequent NPEs
> 
>
> Key: OAK-8051
> URL: https://issues.apache.org/jira/browse/OAK-8051
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.6.6
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Major
>  Labels: candidate_oak_1_6, candidate_oak_1_8
> Fix For: 1.12, 1.11.0, 1.10.2
>
> Attachments: OAK-8051.diff, OAK-8051.diff
>
>
> Seen in the wild (in 1.6.6):
> {noformat}
> 22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the store _path_/cache-4.data
> java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
> [1.4.193/7]
>   at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
>   at org.h2.mvstore.FileStore.open(FileStore.java:168)
>   at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
>   at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
> {noformat}
> Later on:
> {noformat}
> 22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the map
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.<init>(CacheMap.java:48)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1214)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildPrevDocumentsCache(DocumentMK.java:1182)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildNodeDocumentCache(DocumentMK.java:1189)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.initialize(RDBDocumentStore.java:798)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:212)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:224)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.setRDBConnection(DocumentMK.java:757)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStore(DocumentNodeStoreService.java:508)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStoreIfPossible(DocumentNodeStoreService.java:430)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.activate(DocumentNodeStoreService.java:414)
> {noformat}
> and then
> {noformat}
> 22.01.2019 08:45:16.808 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap 
> Re-opening map PREV_DOCUMENT
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.get(CacheMap.java:87)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MultiGenerationMap.readValue(MultiGenerationMap.java:71)
>   at 
> 

[jira] [Commented] (OAK-8107) Build Jackrabbit Oak #1994 failed

2019-03-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784614#comment-16784614
 ] 

Hudson commented on OAK-8107:
-

Previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#1996|https://builds.apache.org/job/Jackrabbit%20Oak/1996/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1996/console]

> Build Jackrabbit Oak #1994 failed
> -
>
> Key: OAK-8107
> URL: https://issues.apache.org/jira/browse/OAK-8107
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1994 has failed.
> First failed run: [Jackrabbit Oak 
> #1994|https://builds.apache.org/job/Jackrabbit%20Oak/1994/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1994/console]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-8092) The cold standby server cannot handle blob requests for long blob IDs

2019-03-05 Thread Andrei Dulceanu (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Dulceanu resolved OAK-8092.
--
Resolution: Duplicate

Closed as a duplicate of OAK-6749, per offline agreement with [~frm].

> The cold standby server cannot handle blob requests for long blob IDs
> -
>
> Key: OAK-8092
> URL: https://issues.apache.org/jira/browse/OAK-8092
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, tarmk-standby
>Affects Versions: 1.10.1
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
>  Labels: cold-standby
> Fix For: 1.12, 1.11.0, 1.10.2
>
> Attachments: OAK-8092.patch
>
>
> If the standby client issues a request for a binary ID larger than 8192 
> bytes, it will fail on the server side due to the current frame limitation, 
> set to 8192 bytes:
> {noformat}
> 28.02.2019 00:01:36.034 *WARN* [primary-32] 
> org.apache.jackrabbit.oak.segment.standby.server.ExceptionHandler Exception 
> caught on the server
> io.netty.handler.codec.TooLongFrameException: frame length (35029) exceeds 
> the allowed maximum (8192)
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.fail(LineBasedFrameDecoder.java:146)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.fail(LineBasedFrameDecoder.java:142)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.decode(LineBasedFrameDecoder.java:131)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.decode(LineBasedFrameDecoder.java:75)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1342)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:934)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.1]
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) 
> 
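
The 8192-byte ceiling in the trace comes from the line-based frame decoder in 
the standby server's Netty pipeline: LineBasedFrameDecoder throws 
TooLongFrameException as soon as a request line exceeds its maxLength. A 
hedged sketch of where the limit sits; the initializer class and the rest of 
the pipeline are illustrative, only the decoder and its constructor are the 
real Netty API:

{code:java}
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LineBasedFrameDecoder;

class StandbyServerInitializerSketch extends ChannelInitializer<SocketChannel> {

    private final int maxFrameLength;

    StandbyServerInitializerSketch(int maxFrameLength) {
        this.maxFrameLength = maxFrameLength; // the default 8192 is what overflows here
    }

    @Override
    protected void initChannel(SocketChannel ch) {
        // A 35029-byte blob request line dies right here at maxLength=8192.
        ch.pipeline().addLast(new LineBasedFrameDecoder(maxFrameLength));
        // ... request decoders and handlers follow in the real server ...
    }
}
{code}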

[jira] [Updated] (OAK-6749) Segment-Tar standby sync fails with "in-memory" blobs present in the source repo

2019-03-05 Thread Francesco Mari (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-6749:

Fix Version/s: 1.10.2

> Segment-Tar standby sync fails with "in-memory" blobs present in the source 
> repo
> 
>
> Key: OAK-6749
> URL: https://issues.apache.org/jira/browse/OAK-6749
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob, tarmk-standby
>Affects Versions: 1.6.2
>Reporter: Csaba Varga
>Assignee: Francesco Mari
>Priority: Major
> Fix For: 1.12, 1.8.12, 1.10.2
>
> Attachments: OAK-6749-01.patch, OAK-6749-02.patch, 
> repack_binaries.groovy
>
>
> We have run into an issue when trying to transition from an active/active 
> Mongo NodeStore cluster to a single Segment-Tar server with cold standby. The 
> issue itself manifests when the standby server tries to pull changes from the 
> primary after the first round of online revision GC.
> Let me summarize the way we ended up with the current state, and my 
> hypothesis about what happened, based on my debugging so far:
> # We started with a Mongo NodeStore and an external FileDataStore as the blob 
> store. The FileDataStore was set up with minRecordLength=4096. The Mongo 
> store stores blobs below minRecordLength as special "in-memory" blobIDs where 
> the data itself is baked into the ID string in hex.
> # We have executed a sidegrade of the Mongo store into a Segment-Tar store. 
> Our datastore is over 1TB in size, so copying the binaries wasn't an option. 
> The new repository is simply reusing the existing datastore. The "in-memory" 
> blobIDs still look like external blobIDs to the sidegrade process, so they 
> were copied into the Segment-Tar repository as-is, instead of being converted 
> into the efficient in-line format.
> # The server started up without issues on the new Segment-Tar store. The 
> migrated "in-memory" blob IDs seem to work fine, if a bit sub-optimal.
> # At this point, we have created a cold standby instance by copying the files 
> of the stopped primary instance and making the necessary config changes on 
> both servers.
> # Everything worked fine until the primary server started its first round of 
> online revision GC. After that process completed, the standby node started 
> throwing exceptions about missing segments, and eventually stopped 
> altogether. In the meantime, the following warning showed up in the primary 
> log:
> {code:java}
> 29.09.2017 06:12:08.088 *WARN* [nioEventLoopGroup-3-10] 
> org.apache.jackrabbit.oak.segment.standby.server.ExceptionHandler Exception 
> caught on the server
> io.netty.handler.codec.TooLongFrameException: frame length (8208) exceeds the 
> allowed maximum (8192)
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.fail(LineBasedFrameDecoder.java:146)
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.fail(LineBasedFrameDecoder.java:142)
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.decode(LineBasedFrameDecoder.java:99)
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.decode(LineBasedFrameDecoder.java:75)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:352)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:345)
> at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:352)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:345)
> at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:352)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> at 
> 
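
The numbers in this report line up with the description: with 
minRecordLength=4096, an "in-memory" blob ID carries the binary itself 
hex-encoded, so a blob just under the threshold produces an ID of roughly 
2 x 4096 = 8192 characters, and with the request prefix the line exceeds the 
8192-byte frame, matching the 8208 in the log. A back-of-the-envelope sketch; 
the encoding details are illustrative, not Oak's exact ID syntax:

{code:java}
public class InMemoryBlobIdSketch {

    public static void main(String[] args) {
        int minRecordLength = 4096;            // FileDataStore setting above
        byte[] smallBlob = new byte[minRecordLength];

        // Every payload byte becomes two hex characters of the ID string.
        StringBuilder id = new StringBuilder();
        for (byte b : smallBlob) {
            id.append(String.format("%02x", b));
        }

        // ~8192 characters of ID alone; a few protocol bytes on top push the
        // request line over the 8192-byte frame limit.
        System.out.println("blob id length: " + id.length());
    }
}
{code}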

[jira] [Commented] (OAK-6749) Segment-Tar standby sync fails with "in-memory" blobs present in the source repo

2019-03-05 Thread Francesco Mari (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784502#comment-16784502
 ] 

Francesco Mari commented on OAK-6749:
-

Backported to 1.10 at r1854861.

> Segment-Tar standby sync fails with "in-memory" blobs present in the source 
> repo
> 
>
> Key: OAK-6749
> URL: https://issues.apache.org/jira/browse/OAK-6749
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob, tarmk-standby
>Affects Versions: 1.6.2
>Reporter: Csaba Varga
>Assignee: Francesco Mari
>Priority: Major
> Fix For: 1.12, 1.8.12
>
> Attachments: OAK-6749-01.patch, OAK-6749-02.patch, 
> repack_binaries.groovy
>
>
> We have run into an issue when trying to transition from an active/active 
> Mongo NodeStore cluster to a single Segment-Tar server with cold standby. The 
> issue itself manifests when the standby server tries to pull changes from the 
> primary after the first round of online revision GC.
> Let me summarize the way we ended up with the current state, and my 
> hypothesis about what happened, based on my debugging so far:
> # We started with a Mongo NodeStore and an external FileDataStore as the blob 
> store. The FileDataStore was set up with minRecordLength=4096. The Mongo 
> store stores blobs below minRecordLength as special "in-memory" blobIDs where 
> the data itself is baked into the ID string in hex.
> # We have executed a sidegrade of the Mongo store into a Segment-Tar store. 
> Our datastore is over 1TB in size, so copying the binaries wasn't an option. 
> The new repository is simply reusing the existing datastore. The "in-memory" 
> blobIDs still look like external blobIDs to the sidegrade process, so they 
> were copied into the Segment-Tar repository as-is, instead of being converted 
> into the efficient in-line format.
> # The server started up without issues on the new Segment-Tar store. The 
> migrated "in-memory" blob IDs seem to work fine, if a bit sub-optimal.
> # At this point, we have created a cold standby instance by copying the files 
> of the stopped primary instance and making the necessary config changes on 
> both servers.
> # Everything worked fine until the primary server started its first round of 
> online revision GC. After that process completed, the standby node started 
> throwing exceptions about missing segments, and eventually stopped 
> altogether. In the meantime, the following warning showed up in the primary 
> log:
> {code:java}
> 29.09.2017 06:12:08.088 *WARN* [nioEventLoopGroup-3-10] 
> org.apache.jackrabbit.oak.segment.standby.server.ExceptionHandler Exception 
> caught on the server
> io.netty.handler.codec.TooLongFrameException: frame length (8208) exceeds the 
> allowed maximum (8192)
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.fail(LineBasedFrameDecoder.java:146)
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.fail(LineBasedFrameDecoder.java:142)
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.decode(LineBasedFrameDecoder.java:99)
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.decode(LineBasedFrameDecoder.java:75)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:352)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:345)
> at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:352)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:345)
> at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:352)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>

[jira] [Updated] (OAK-8084) LogCustomizer should allow instantiation with Java class (in addition to class name)

2019-03-05 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8084:

Labels: candidate_oak_1_8  (was: candidate_oak_1_10)

> LogCustomizer should allow instantiation with Java class (in addition to 
> class name)
> 
>
> Key: OAK-8084
> URL: https://issues.apache.org/jira/browse/OAK-8084
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: commons
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>  Labels: candidate_oak_1_8
> Fix For: 1.12, 1.11.0, 1.10.2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8107) Build Jackrabbit Oak #1994 failed

2019-03-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784495#comment-16784495
 ] 

Hudson commented on OAK-8107:
-

Previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#1995|https://builds.apache.org/job/Jackrabbit%20Oak/1995/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1995/console]

> Build Jackrabbit Oak #1994 failed
> -
>
> Key: OAK-8107
> URL: https://issues.apache.org/jira/browse/OAK-8107
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1994 has failed.
> First failed run: [Jackrabbit Oak 
> #1994|https://builds.apache.org/job/Jackrabbit%20Oak/1994/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1994/console]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (OAK-8084) LogCustomizer should allow instantiation with Java class (in addition to class name)

2019-03-05 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779299#comment-16779299
 ] 

Julian Reschke edited comment on OAK-8084 at 3/5/19 2:21 PM:
-

trunk: [r1854461|http://svn.apache.org/r1854461]
1.10: [r1854860|http://svn.apache.org/r1854860]



was (Author: reschke):
trunk: [r1854461|http://svn.apache.org/r1854461]

> LogCustomizer should allow instantiation with Java class (in addition to 
> class name)
> 
>
> Key: OAK-8084
> URL: https://issues.apache.org/jira/browse/OAK-8084
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: commons
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>  Labels: candidate_oak_1_8
> Fix For: 1.12, 1.11.0, 1.10.2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8084) LogCustomizer should allow instantiation with Java class (in addition to class name)

2019-03-05 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8084:

Fix Version/s: 1.10.2

> LogCustomizer should allow instantiation with Java class (in addition to 
> class name)
> 
>
> Key: OAK-8084
> URL: https://issues.apache.org/jira/browse/OAK-8084
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: commons
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>  Labels: candidate_oak_1_10
> Fix For: 1.12, 1.11.0, 1.10.2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-8108) Branch reset does not remove all branch commit entries

2019-03-05 Thread Marcel Reutegger (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-8108.
---
   Resolution: Fixed
Fix Version/s: 1.11.0

Fixed in trunk: http://svn.apache.org/r1854859

> Branch reset does not remove all branch commit entries
> --
>
> Key: OAK-8108
> URL: https://issues.apache.org/jira/browse/OAK-8108
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.8.0, 1.10.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.12, 1.11.0
>
>
> Some branch commit entries are not removed on a branch reset. Those are the 
> entries that are put on the parent of an added node.
> Branch commit entries were added with OAK-5869.
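
A heavily simplified sketch of the gap, using the DocumentMK UpdateOp API; 
treating "_bc" as the branch-commit map and the reset flow itself are 
assumptions based on this description and OAK-5869:

{code:java}
import org.apache.jackrabbit.oak.plugins.document.Revision;
import org.apache.jackrabbit.oak.plugins.document.UpdateOp;

class BranchResetSketch {

    // On reset, the branch commit entry must be removed not only from the
    // document of the added node but also from its parent, where a matching
    // entry is written -- the entries this issue found left behind.
    static void resetEntries(UpdateOp addedNode, UpdateOp parent, Revision branchRev) {
        addedNode.removeMapEntry("_bc", branchRev);
        parent.removeMapEntry("_bc", branchRev);
    }
}
{code}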



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8108) Branch reset does not remove all branch commit entries

2019-03-05 Thread Marcel Reutegger (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-8108:
--
Labels: candidate_oak_1_10 candidate_oak_1_8  (was: )

> Branch reset does not remove all branch commit entries
> --
>
> Key: OAK-8108
> URL: https://issues.apache.org/jira/browse/OAK-8108
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.8.0, 1.10.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Fix For: 1.12, 1.11.0
>
>
> Some branch commit entries are not removed on a branch reset. Those are the 
> entries that are put on the parent of an added node.
> Branch commit entries were added with OAK-5869.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8097) Load Lucene index files before writing to the index

2019-03-05 Thread Thomas Mueller (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784487#comment-16784487
 ] 

Thomas Mueller commented on OAK-8097:
-

[~catholicon] feedback is welcome.



> Load Lucene index files before writing to the index
> ---
>
> Key: OAK-8097
> URL: https://issues.apache.org/jira/browse/OAK-8097
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.12
>
>
> Right now, Lucene index files are downloaded from the datastore when reading 
> from the index (when running a query). However, when updating the index, they 
> are not downloaded. So if lazy loading of index files is enabled (OAK-7947), 
> files are read from the datastore (streaming), leading to the following 
> warnings in the log file:
> {noformat}
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory 
> COWRemoteFileReference::local file (_2.cfs) doesn't exist
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8108) Branch reset does not remove all branch commit entries

2019-03-05 Thread Marcel Reutegger (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784488#comment-16784488
 ] 

Marcel Reutegger commented on OAK-8108:
---

Added ignored test to trunk: http://svn.apache.org/r1854848

> Branch reset does not remove all branch commit entries
> --
>
> Key: OAK-8108
> URL: https://issues.apache.org/jira/browse/OAK-8108
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.8.0, 1.10.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.12
>
>
> Some branch commit entries are not removed on a branch reset. Those are the 
> entries that are put on the parent of an added node.
> Branch commit entries were added with OAK-5869.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-8097) Load Lucene index files before writing to the index

2019-03-05 Thread Thomas Mueller (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-8097.
-
Resolution: Fixed

> Load Lucene index files before writing to the index
> ---
>
> Key: OAK-8097
> URL: https://issues.apache.org/jira/browse/OAK-8097
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.12
>
>
> Right now, Lucene index files are downloaded from the datastore when reading 
> from the index (when running a query). However, when updating the index, they 
> are not downloaded. So if lazy loading of index files is enabled (OAK-7947), 
> files are read from the datastore (streaming), leading to the following 
> warnings in the log file:
> {noformat}
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory 
> COWRemoteFileReference::local file (_2.cfs) doesn't exist
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8097) Load Lucene index files before writing to the index

2019-03-05 Thread Thomas Mueller (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784485#comment-16784485
 ] 

Thomas Mueller commented on OAK-8097:
-

I think I will keep the implementation for now. It seems to me that it doesn't 
matter where exactly we copy the files, as IndexCopier.wrapForWrite is only 
called in DefaultDirectoryFactory.newInstance. We can still change the 
implementation later; the test should still work.
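
For illustration, the prefetch idea reduced to the Lucene Directory API: 
before an index directory is handed out for writing, every file is copied down 
from the remote, datastore-backed directory into the local one. Only the 
Directory calls are real API; the surrounding class and where it hooks into 
wrapForWrite are assumptions.

{code:java}
import java.io.IOException;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.IndexOutput;

class PrefetchSketch {

    // Copy every remote index file into the local directory so that later
    // reads during indexing never stream from the datastore.
    static void prefetch(Directory remote, Directory local) throws IOException {
        for (String name : remote.listAll()) {
            try (IndexInput in = remote.openInput(name, IOContext.READONCE);
                 IndexOutput out = local.createOutput(name, IOContext.DEFAULT)) {
                byte[] buf = new byte[64 * 1024];
                long remaining = in.length();
                while (remaining > 0) {
                    int chunk = (int) Math.min(buf.length, remaining);
                    in.readBytes(buf, 0, chunk);
                    out.writeBytes(buf, 0, chunk);
                    remaining -= chunk;
                }
            }
        }
    }
}
{code}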

> Load Lucene index files before writing to the index
> ---
>
> Key: OAK-8097
> URL: https://issues.apache.org/jira/browse/OAK-8097
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Thomas Mueller
>Priority: Major
> Fix For: 1.12
>
>
> Right now, Lucene index files are downloaded from the datastore when reading 
> from the index (when running a query). However, when updating the index, they 
> are not downloaded. So if lazy loading of index files is enabled (OAK-7947), 
> files are read from the datastore (streaming), leading to the following 
> warnings in the log file:
> {noformat}
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory 
> COWRemoteFileReference::local file (_2.cfs) doesn't exist
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (OAK-8097) Load Lucene index files before writing to the index

2019-03-05 Thread Thomas Mueller (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller reassigned OAK-8097:
---

Assignee: Thomas Mueller

> Load Lucene index files before writing to the index
> ---
>
> Key: OAK-8097
> URL: https://issues.apache.org/jira/browse/OAK-8097
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.12
>
>
> Right now, Lucene index files are downloaded from the datastore when reading 
> from the index (when running a query). However, when updating the index, they 
> are not downloaded. So if lazy loading of index files is enabled (OAK-7947), 
> files are read from the datastore (streaming), leading to the following 
> warnings in the log file:
> {noformat}
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory 
> COWRemoteFileReference::local file (_2.cfs) doesn't exist
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-8079) Update Oak 1.0 to Jackrabbit 2.8.10

2019-03-05 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-8079.
-
Resolution: Fixed

> Update Oak 1.0 to Jackrabbit 2.8.10
> ---
>
> Key: OAK-8079
> URL: https://issues.apache.org/jira/browse/OAK-8079
> Project: Jackrabbit Oak
>  Issue Type: Task
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.0.43
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8079) Update Oak 1.0 to Jackrabbit 2.8.10

2019-03-05 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784473#comment-16784473
 ] 

Julian Reschke commented on OAK-8079:
-

1.0: [r1854858|http://svn.apache.org/r1854858]

> Update Oak 1.0 to Jackrabbit 2.8.10
> ---
>
> Key: OAK-8079
> URL: https://issues.apache.org/jira/browse/OAK-8079
> Project: Jackrabbit Oak
>  Issue Type: Task
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.0.43
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8097) Load Lucene index files before writing to the index

2019-03-05 Thread Thomas Mueller (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784461#comment-16784461
 ] 

Thomas Mueller commented on OAK-8097:
-

http://svn.apache.org/r1854855 (trunk)



> Load Lucene index files before writing to the index
> ---
>
> Key: OAK-8097
> URL: https://issues.apache.org/jira/browse/OAK-8097
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Thomas Mueller
>Priority: Major
> Fix For: 1.12
>
>
> Right now, Lucene index files are downloaded from the datastore when reading 
> from the index (when running a query). However, when updating the index, they 
> are not downloaded. So if lazy loading of index files is enabled (OAK-7947), 
> files are read from the datastore (streaming), leading to the following 
> warnings in the log file:
> {noformat}
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory 
> COWRemoteFileReference::local file (_2.cfs) doesn't exist
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8063) The cold standby client doesn't correctly handle backward references

2019-03-05 Thread Andrei Dulceanu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784424#comment-16784424
 ] 

Andrei Dulceanu commented on OAK-8063:
--

Backported to 1.8 at r1854850.

> The cold standby client doesn't correctly handle backward references
> 
>
> Key: OAK-8063
> URL: https://issues.apache.org/jira/browse/OAK-8063
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, tarmk-standby
>Affects Versions: 1.6.0
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
>  Labels: cold-standby
> Fix For: 1.12, 1.11.0, 1.8.12, 1.10.2
>
> Attachments: OAK-8063-02.patch, OAK-8063-03.patch, OAK-8063.patch
>
>
> The logic from {{StandbyClientSyncExecution#copySegmentHierarchyFromPrimary}} 
> has a flaw when it comes to "backward references". Suppose we have the 
> following data segment graph to be transferred from the primary: S1, which 
> references \{S2, S3}, and S3, which references S2. Then the correct transfer 
> order is S2, S3, S1.
> Going through the current logic employed by the method, here's what happens:
> {noformat}
> Step 0: batch={S1}
> Step 1: visited={S1}, data={S1}, batch={S2, S3}, queued={S2, S3}
> Step 2: visited={S1, S2}, data={S2, S1}, batch={S3}, queued={S2, S3}
> Step 3: visited={S1, S2, S3}, data={S3, S2, S1}, batch={}, queued={S2, 
> S3}.{noformat}
> Therefore, at the end of the loop, the order of the segments to be 
> transferred will be S3, S2, S1, which might trigger a 
> {{SegmentNotFoundException}} when S3 is further processed, because S2 is 
> missing on standby (see OAK-8006).
> /cc [~frm]
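
In code, the fix amounts to ordering the transfer by a post-order walk of the 
reference graph, so every referenced segment is persisted before the segment 
that points at it. A sketch under that assumption; the Map-based graph 
representation is illustrative, not the actual standby client code:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

class TransferOrderSketch {

    // For S1 -> {S2, S3} and S3 -> {S2} this returns S2, S3, S1 -- the
    // correct order -- instead of the S3, S2, S1 produced by the batch loop.
    static List<UUID> transferOrder(UUID head, Map<UUID, Set<UUID>> references) {
        Set<UUID> visited = new LinkedHashSet<>();
        List<UUID> order = new ArrayList<>();
        visit(head, references, visited, order);
        return order;
    }

    private static void visit(UUID segment, Map<UUID, Set<UUID>> refs,
                              Set<UUID> visited, List<UUID> order) {
        if (!visited.add(segment)) {
            return; // already scheduled
        }
        for (UUID referenced : refs.getOrDefault(segment, Collections.<UUID>emptySet())) {
            visit(referenced, refs, visited, order);
        }
        order.add(segment); // only after all its references are in the list
    }
}
{code}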



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8051) PersistentCache: error during open can lead to incomplete initialization and subsequent NPEs

2019-03-05 Thread Thomas Mueller (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784399#comment-16784399
 ] 

Thomas Mueller commented on OAK-8051:
-

Sorry... the changes look good to me!

> PersistentCache: error during open can lead to incomplete initialization and 
> subsequent NPEs
> 
>
> Key: OAK-8051
> URL: https://issues.apache.org/jira/browse/OAK-8051
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.6.6
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_6, 
> candidate_oak_1_8, patch-available
> Fix For: 1.12, 1.11.0
>
> Attachments: OAK-8051.diff, OAK-8051.diff
>
>
> Seen in the wild (in 1.6.6):
> {noformat}
> 22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the store _path_/cache-4.data
> java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
> [1.4.193/7]
>   at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
>   at org.h2.mvstore.FileStore.open(FileStore.java:168)
>   at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
>   at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
> {noformat}
> Later on:
> {noformat}
> 22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the map
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.(CacheMap.java:48)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1214)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildPrevDocumentsCache(DocumentMK.java:1182)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildNodeDocumentCache(DocumentMK.java:1189)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.initialize(RDBDocumentStore.java:798)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.(RDBDocumentStore.java:212)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.(RDBDocumentStore.java:224)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.setRDBConnection(DocumentMK.java:757)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStore(DocumentNodeStoreService.java:508)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStoreIfPossible(DocumentNodeStoreService.java:430)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.activate(DocumentNodeStoreService.java:414)
> {noformat}
> and then
> {noformat}
> 22.01.2019 08:45:16.808 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap 
> Re-opening map PREV_DOCUMENT
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.get(CacheMap.java:87)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MultiGenerationMap.readValue(MultiGenerationMap.java:71)
>   at 
> 
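
A minimal sketch of the defensive pattern the report implies (assumed 
structure, not the actual patch): when the MVStore cannot be opened, keep an 
explicit "unavailable" state and have map lookups fail fast instead of 
dereferencing a null store later.

{noformat}
import org.h2.mvstore.MVStore;
import java.util.Map;

// Sketch only: a map factory that degrades gracefully when the backing
// MVStore cannot be opened (e.g. "The file is locked"), so later calls
// return null instead of throwing NullPointerException.
class SafeMapFactory {

    private final MVStore store; // null when the open failed

    SafeMapFactory(String fileName) {
        MVStore s = null;
        try {
            s = new MVStore.Builder().fileName(fileName).open();
        } catch (IllegalStateException e) {
            System.err.println("Could not open the store " + fileName + ": " + e);
        }
        this.store = s;
    }

    // Returns the named map, or null when the store is unavailable;
    // callers must treat null as "cache disabled".
    Map<String, String> openMap(String name) {
        return store == null ? null : store.<String, String>openMap(name);
    }
}
{noformat}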

[jira] [Created] (OAK-8108) Branch reset does not remove all branch commit entries

2019-03-05 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-8108:
-

 Summary: Branch reset does not remove all branch commit entries
 Key: OAK-8108
 URL: https://issues.apache.org/jira/browse/OAK-8108
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: documentmk
Affects Versions: 1.10.0, 1.8.0
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.12


Some branch commit entries are not removed on a branch reset. Those are the 
entries that are put on the parent of an added node.

Branch commit entries were added with OAK-5869.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8006) SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby

2019-03-05 Thread Andrei Dulceanu (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Dulceanu updated OAK-8006:
-
Fix Version/s: 1.8.12

> SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby
> --
>
> Key: OAK-8006
> URL: https://issues.apache.org/jira/browse/OAK-8006
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, tarmk-standby
>Affects Versions: 1.6.0
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
>  Labels: cold-standby
> Fix For: 1.12, 1.11.0, 1.10.1, 1.8.12
>
> Attachments: OAK-8006-02.patch, OAK-8006-test.patch, OAK-8006.patch
>
>
> When persisting a segment transferred from master, the cold standby needs, 
> among other things, to read the binary references from the segment. While 
> this usually doesn't involve additional reads from other segments, there is 
> a special case concerning binary IDs larger than 4092 bytes. These can live 
> in other segments (which were transferred before the current segment and 
> are already on the standby), but the binary ID may also be stored in the 
> same segment. If this happens, the call to {{blobId.getSegment()}} [0] 
> triggers a new read of the current, un-persisted segment. Thus, a 
> {{SegmentNotFoundException}} is thrown:
> {noformat}
> 22.01.2019 09:35:59.345 *ERROR* [standby-run-1] 
> org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed 
> synchronizing state.
> org.apache.jackrabbit.oak.segment.SegmentNotFoundException: Segment 
> d40a9da6-06a2-4dc0-ab91-5554a33c02b0 not found
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readSegmentUncached(AbstractFileStore.java:284)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.lambda$readSegment$10(FileStore.java:498)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.lambda$getSegment$0(SegmentCache.java:163)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3932) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.getSegment(SegmentCache.java:160)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.readSegment(FileStore.java:498)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentId.getSegment(SegmentId.java:153) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.RecordId.getSegment(RecordId.java:98) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentBlob.readLongBlobId(SegmentBlob.java:206)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentBlob.readBlobId(SegmentBlob.java:163)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore$3.consume(AbstractFileStore.java:262)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.Segment.forEachRecord(Segment.java:601) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readBinaryReferences(AbstractFileStore.java:257)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.writeSegment(FileStore.java:533)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> 
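
A simplified, self-contained model of the guard (hypothetical types and 
names, not Oak's API): resolve a long blob-ID reference through the segment 
store only when it points outside the segment currently being persisted; for 
a same-segment reference, read from the in-memory buffer already at hand.

{noformat}
import java.util.Arrays;
import java.util.UUID;

// Sketch only: avoids asking the store for the segment that is being
// persisted right now, which is exactly what triggers the
// SegmentNotFoundException on the standby.
public class LongBlobIdGuard {

    record RecordRef(UUID segmentId, int offset, int length) {}

    interface SegmentStore {
        byte[] readSegment(UUID segmentId); // throws for unknown segments
    }

    static byte[] readLongBlobId(RecordRef ref, UUID currentSegmentId,
                                 byte[] currentSegmentData, SegmentStore store) {
        byte[] data = ref.segmentId().equals(currentSegmentId)
                // same segment: use the buffer we already hold in memory
                ? currentSegmentData
                // other segment: transferred earlier, safe to load
                : store.readSegment(ref.segmentId());
        return Arrays.copyOfRange(data, ref.offset(), ref.offset() + ref.length());
    }
}
{noformat}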

[jira] [Commented] (OAK-8006) SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby

2019-03-05 Thread Andrei Dulceanu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784351#comment-16784351
 ] 

Andrei Dulceanu commented on OAK-8006:
--

Backported to 1.8 at r1854844.

> SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby
> --
>
> Key: OAK-8006
> URL: https://issues.apache.org/jira/browse/OAK-8006
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, tarmk-standby
>Affects Versions: 1.6.0
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
>  Labels: cold-standby
> Fix For: 1.12, 1.11.0, 1.10.1
>
> Attachments: OAK-8006-02.patch, OAK-8006-test.patch, OAK-8006.patch
>
>
> When persisting a segment transferred from master, the cold standby needs, 
> among other things, to read the binary references from the segment. While 
> this usually doesn't involve additional reads from other segments, there is 
> a special case concerning binary IDs larger than 4092 bytes. These can live 
> in other segments (which were transferred before the current segment and 
> are already on the standby), but the binary ID may also be stored in the 
> same segment. If this happens, the call to {{blobId.getSegment()}} [0] 
> triggers a new read of the current, un-persisted segment. Thus, a 
> {{SegmentNotFoundException}} is thrown:
> {noformat}
> 22.01.2019 09:35:59.345 *ERROR* [standby-run-1] 
> org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed 
> synchronizing state.
> org.apache.jackrabbit.oak.segment.SegmentNotFoundException: Segment 
> d40a9da6-06a2-4dc0-ab91-5554a33c02b0 not found
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readSegmentUncached(AbstractFileStore.java:284)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.lambda$readSegment$10(FileStore.java:498)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.lambda$getSegment$0(SegmentCache.java:163)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3932) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.getSegment(SegmentCache.java:160)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.readSegment(FileStore.java:498)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentId.getSegment(SegmentId.java:153) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.RecordId.getSegment(RecordId.java:98) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentBlob.readLongBlobId(SegmentBlob.java:206)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentBlob.readBlobId(SegmentBlob.java:163)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore$3.consume(AbstractFileStore.java:262)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.Segment.forEachRecord(Segment.java:601) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readBinaryReferences(AbstractFileStore.java:257)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.writeSegment(FileStore.java:533)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> 

[jira] [Created] (OAK-8107) Build Jackrabbit Oak #1994 failed

2019-03-05 Thread Hudson (JIRA)
Hudson created OAK-8107:
---

 Summary: Build Jackrabbit Oak #1994 failed
 Key: OAK-8107
 URL: https://issues.apache.org/jira/browse/OAK-8107
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: continuous integration
Reporter: Hudson


No description is provided

The build Jackrabbit Oak #1994 has failed.
First failed run: [Jackrabbit Oak 
#1994|https://builds.apache.org/job/Jackrabbit%20Oak/1994/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1994/console]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8106) High memory usage when large branch is reset

2019-03-05 Thread Marcel Reutegger (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784288#comment-16784288
 ] 

Marcel Reutegger commented on OAK-8106:
---

Added an ignored test: http://svn.apache.org/r1854827

> High memory usage when large branch is reset
> 
>
> Key: OAK-8106
> URL: https://issues.apache.org/jira/browse/OAK-8106
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.6.0, 1.8.0, 1.10.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Major
> Fix For: 1.12
>
>
> Resetting a branch with many commits results in high memory usage. The node 
> state comparison performed by the reset uses an incorrect base state, which 
> leads to more operations recorded in memory than necessary.
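
A toy model of why the base state matters (illustration only, not Oak's data 
structures): diffing the branch head against the true branch base records 
only the branch's own changes, while diffing against an older state also 
replays changes that are already part of the base and inflates the in-memory 
change log.

{noformat}
import java.util.HashMap;
import java.util.Map;

// Toy model only: maps stand in for node states, map diffs for the
// operations recorded by the reset.
public class ResetBaseDemo {

    static Map<String, String> diff(Map<String, String> base, Map<String, String> head) {
        Map<String, String> changes = new HashMap<>(head);
        changes.entrySet().removeAll(base.entrySet()); // keep only new/changed entries
        return changes;
    }

    public static void main(String[] args) {
        Map<String, String> beforeBranch = Map.of("a", "1");
        Map<String, String> branchBase   = Map.of("a", "1", "b", "2");
        Map<String, String> branchHead   = Map.of("a", "1", "b", "2", "c", "3");

        System.out.println(diff(branchBase, branchHead));   // {c=3} - one op to undo
        System.out.println(diff(beforeBranch, branchHead)); // {b=2, c=3} - extra op
    }
}
{noformat}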



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-8106) High memory usage when large branch is reset

2019-03-05 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-8106:
-

 Summary: High memory usage when large branch is reset
 Key: OAK-8106
 URL: https://issues.apache.org/jira/browse/OAK-8106
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: documentmk
Affects Versions: 1.10.0, 1.8.0, 1.6.0
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.12


Resetting a branch with many commits results in high memory usage. The node 
state comparison performed by the reset uses an incorrect base state, which 
leads to more operations recorded in memory than necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)