[jira] [Commented] (OAK-3235) Deadlock when closing a concurrently used FileStore
[ https://issues.apache.org/jira/browse/OAK-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14714206#comment-14714206 ] Michael Dürig commented on OAK-3235: The {{SNFE}} is caused by the offending segment not yet being written while it has already been removed from the writer. This can happen with the changes from the patch, as {{SegmentWriter#flush}} now calls {{SegmentStore#writeSegment}} *after* the segment writer has already switched over to a fresh segment and *after* it has given up the lock on its instance monitor. This means a concurrent call to {{FileStore#readSegment}} finds the segment neither in any of the readers yet nor in the writer any more. Deadlock when closing a concurrently used FileStore --- Key: OAK-3235 URL: https://issues.apache.org/jira/browse/OAK-3235 Project: Jackrabbit Oak Issue Type: Bug Components: segmentmk Affects Versions: 1.3.3 Reporter: Francesco Mari Assignee: Michael Dürig Priority: Critical Labels: resilience Fix For: 1.3.6 Attachments: OAK-3235-01.patch A deadlock was detected while stopping the {{SegmentCompactionIT}} using the exposed MBean.
{noformat}
Found one Java-level deadlock:
=
pool-1-thread-23:
  waiting to lock monitor 0x7fa8cf1f0488 (object 0x0007a0081e48, a org.apache.jackrabbit.oak.plugins.segment.file.FileStore),
  which is held by main
main:
  waiting to lock monitor 0x7fa8cc015ff8 (object 0x0007a011f750, a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter),
  which is held by pool-1-thread-23

Java stack information for the threads listed above:
===
pool-1-thread-23:
	at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.writeSegment(FileStore.java:948)
	- waiting to lock 0x0007a0081e48 (a org.apache.jackrabbit.oak.plugins.segment.file.FileStore)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.flush(SegmentWriter.java:228)
	- locked 0x0007a011f750 (a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.prepare(SegmentWriter.java:329)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeListBucket(SegmentWriter.java:447)
	- locked 0x0007a011f750 (a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeList(SegmentWriter.java:698)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1190)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
	at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
	at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
	at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
	at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
	at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
	at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
	at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
	at
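The cycle above (flush holds the SegmentWriter monitor and waits for the FileStore monitor, while another thread holds the FileStore monitor and waits for the SegmentWriter monitor) can be broken by not holding the writer monitor across the call into the store. A minimal sketch of that shape, using plain Object monitors as hypothetical stand-ins for FileStore and SegmentWriter, not Oak's actual code:

```java
public class LockOrderSketch {
    private final Object storeMonitor = new Object();   // stand-in for FileStore's monitor
    private final Object writerMonitor = new Object();  // stand-in for SegmentWriter's monitor

    // Patched shape: switch segments under the writer lock, then call into the
    // store WITHOUT holding the writer lock, so no writer -> store edge exists.
    void flush() {
        byte[] segment;
        synchronized (writerMonitor) {
            segment = new byte[0];  // switch over to a fresh segment
        }
        writeSegment(segment);
    }

    void writeSegment(byte[] segment) {
        synchronized (storeMonitor) {
            // persist the segment
        }
    }

    void close() {
        synchronized (storeMonitor) {
            synchronized (writerMonitor) {
                // flush pending state and release resources
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockOrderSketch s = new LockOrderSketch();
        Thread flusher = new Thread(() -> { for (int i = 0; i < 10_000; i++) s.flush(); });
        Thread closer  = new Thread(() -> { for (int i = 0; i < 10_000; i++) s.close(); });
        flusher.start(); closer.start();
        flusher.join(10_000); closer.join(10_000);
        System.out.println(!flusher.isAlive() && !closer.isAlive() ? "no deadlock" : "deadlock");
    }
}
```

Note the trade-off discussed in the comment above: once flush releases the writer monitor before writeSegment completes, there is a window in which a concurrent readSegment finds the segment neither in the writer nor in the store, which is exactly how the {{SNFE}} arises.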
[jira] [Assigned] (OAK-3288) clarify DocumentStore contract with respect to number formats
[ https://issues.apache.org/jira/browse/OAK-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke reassigned OAK-3288: --- Assignee: Julian Reschke clarify DocumentStore contract with respect to number formats - Key: OAK-3288 URL: https://issues.apache.org/jira/browse/OAK-3288 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk, rdbmk Affects Versions: 1.2.3, 1.3.4, 1.0.19 Reporter: Julian Reschke Assignee: Julian Reschke Labels: resilience Fix For: 1.3.6 The DS API allows setting properties as java.lang.Integer, but implementations vary in whether they can roundtrip Integers; some do, some convert to Long. The former is observed for MongoMK (which uses BSON internally), the latter is seen in RDBMK (which uses JSON). We should
- clarify that integers can be set, but they will come back as longs, and
- modify existing implementations to always return longs, so bugs surface early
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
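The second bullet amounts to a small normalization step applied when values are handed back. The helper below is a hypothetical sketch, not Oak's DocumentStore API; it just illustrates widening Integer to Long so both backends behave alike:

```java
public class NumberNormalizer {
    // Hypothetical helper: widen Integer to Long on the way out of the store,
    // so MongoMK behaves like RDBMK and roundtrip bugs surface early.
    static Object normalize(Object value) {
        if (value instanceof Integer) {
            return Long.valueOf(((Integer) value).longValue());
        }
        return value;
    }

    public static void main(String[] args) {
        // An Integer goes in, a Long comes back out.
        Object v = normalize(Integer.valueOf(42));
        System.out.println(v.getClass().getSimpleName() + " " + v);
    }
}
```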
[jira] [Updated] (OAK-3263) Support including and excluding paths for PropertyIndex
[ https://issues.apache.org/jira/browse/OAK-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manfred Baedke updated OAK-3263: Attachment: OAK-3263-v2.patch Support including and excluding paths for PropertyIndex --- Key: OAK-3263 URL: https://issues.apache.org/jira/browse/OAK-3263 Project: Jackrabbit Oak Issue Type: Improvement Components: query Reporter: Chetan Mehrotra Assignee: Manfred Baedke Labels: performance Fix For: 1.3.6 Attachments: OAK-3263-prelimary.patch, OAK-3263-v2.patch, OAK-3263.patch As part of OAK-2599, support for excluding and including paths was added to the Lucene index. It would be good to have such support enabled for PropertyIndex as well -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3263) Support including and excluding paths for PropertyIndex
[ https://issues.apache.org/jira/browse/OAK-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manfred Baedke updated OAK-3263: Attachment: OAK-3263-v2.patch Attached OAK-3263-v2.patch. If an index has excludedPaths but no includedPaths, and there is either no path restriction or the path restriction is an ancestor of an excludedPath, PropertyIndexPlan will return an infinite cost. Support including and excluding paths for PropertyIndex --- Key: OAK-3263 URL: https://issues.apache.org/jira/browse/OAK-3263 Project: Jackrabbit Oak Issue Type: Improvement Components: query Reporter: Chetan Mehrotra Assignee: Manfred Baedke Labels: performance Fix For: 1.3.6 Attachments: OAK-3263-prelimary.patch, OAK-3263-v2.patch, OAK-3263.patch As part of OAK-2599, support for excluding and including paths was added to the Lucene index. It would be good to have such support enabled for PropertyIndex as well -- This message was sent by Atlassian JIRA (v6.3.4#6332)
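The rule described in the comment can be sketched as follows; the class and method names are illustrative stand-ins for PropertyIndexPlan's cost logic, not the actual patch:

```java
import java.util.List;

public class PathCostSketch {
    // True if `ancestor` is the same path as `path` or a proper ancestor of it.
    static boolean isAncestorOrSelf(String ancestor, String path) {
        return path.equals(ancestor) || ancestor.equals("/") || path.startsWith(ancestor + "/");
    }

    // Illustrative stand-in for the cost rule: with excludedPaths set and no
    // includedPaths, return an infinite cost when there is no path restriction
    // or the restriction is an ancestor of an excluded path.
    static double cost(String pathRestriction, List<String> excludedPaths, List<String> includedPaths) {
        if (!excludedPaths.isEmpty() && includedPaths.isEmpty()) {
            if (pathRestriction == null) {
                return Double.POSITIVE_INFINITY;
            }
            for (String excluded : excludedPaths) {
                if (isAncestorOrSelf(pathRestriction, excluded)) {
                    return Double.POSITIVE_INFINITY;
                }
            }
        }
        return 100.0;  // placeholder finite cost for usable plans
    }

    public static void main(String[] args) {
        // Restriction above the excluded subtree: index unusable.
        System.out.println(cost("/content", List.of("/content/old"), List.of()));
        // Restriction in an unexcluded subtree: normal cost.
        System.out.println(cost("/content/new", List.of("/content/old"), List.of()));
    }
}
```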
[jira] [Updated] (OAK-3263) Support including and excluding paths for PropertyIndex
[ https://issues.apache.org/jira/browse/OAK-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manfred Baedke updated OAK-3263: Attachment: (was: OAK-3263-v2.patch) Support including and excluding paths for PropertyIndex --- Key: OAK-3263 URL: https://issues.apache.org/jira/browse/OAK-3263 Project: Jackrabbit Oak Issue Type: Improvement Components: query Reporter: Chetan Mehrotra Assignee: Manfred Baedke Labels: performance Fix For: 1.3.6 Attachments: OAK-3263-prelimary.patch, OAK-3263.patch As part of OAK-2599, support for excluding and including paths was added to the Lucene index. It would be good to have such support enabled for PropertyIndex as well -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-3302) ExternalLoginModule:193 can never be reached
Thorsten Biegner created OAK-3302: - Summary: ExternalLoginModule:193 can never be reached Key: OAK-3302 URL: https://issues.apache.org/jira/browse/OAK-3302 Project: Jackrabbit Oak Issue Type: Bug Components: auth-external, auth-ldap Affects Versions: 1.2.2 Environment: AEM 6.1 Reporter: Thorsten Biegner Priority: Minor Starting at line 193 in version 1.2.2, which shipped with AEM 6.1, this code can never be reached. https://github.com/apache/jackrabbit-oak/blob/jackrabbit-oak-1.2.2/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/ExternalLoginModule.java#L189
sId = syncHandler.findIdentity(userMgr, userId);
// if there exists an authorizable with the given userid but is
// not an external one or if it belongs to another IDP, we just ignore it.
if (sId != null) {
    ExternalIdentityRef externalIdRef = sId.getExternalIdRef(); // line 193
    if (externalIdRef == null) {
This is because, when no external reference is present, sId will be null. See https://github.com/apache/jackrabbit-oak/blob/jackrabbit-oak-1.2.2/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/DefaultSyncHandler.java#L187 Instead of returning null, it should return a SyncedIdentity with the ExternalIdRef set to null. As far as I can see the same bug still exists in the current trunk, see https://github.com/apache/jackrabbit-oak/blob/trunk/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/ExternalLoginModule.java#L193 and https://github.com/apache/jackrabbit-oak/blob/trunk/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/basic/DefaultSyncContext.java#L120 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
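The suggested fix can be illustrated with simplified stand-in types. The real classes are DefaultSyncHandler, SyncedIdentity and ExternalIdentityRef; everything below is a hypothetical sketch, not Oak's actual code:

```java
public class FindIdentitySketch {
    static final class SyncedIdentity {
        final String id;
        final String externalIdRef;  // null for a purely local identity
        SyncedIdentity(String id, String externalIdRef) {
            this.id = id;
            this.externalIdRef = externalIdRef;
        }
    }

    // Stand-in for DefaultSyncHandler#findIdentity: instead of returning null
    // when the authorizable has no external id attribute, return a
    // SyncedIdentity whose external reference is null.
    static SyncedIdentity findIdentity(String userId, String externalIdAttribute) {
        if (externalIdAttribute == null) {
            return new SyncedIdentity(userId, null);  // the fix: identity, not null
        }
        return new SyncedIdentity(userId, externalIdAttribute);
    }

    public static void main(String[] args) {
        SyncedIdentity sId = findIdentity("alice", null);
        // This branch (ExternalLoginModule.java:193 in 1.2.2) was unreachable
        // while findIdentity returned null for local users.
        if (sId != null && sId.externalIdRef == null) {
            System.out.println("local identity, ignoring");
        }
    }
}
```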
[jira] [Comment Edited] (OAK-3263) Support including and excluding paths for PropertyIndex
[ https://issues.apache.org/jira/browse/OAK-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715211#comment-14715211 ] Manfred Baedke edited comment on OAK-3263 at 8/26/15 5:42 PM: -- Attached OAK-3263-v2.patch. If an index has excludedPaths but no includedPaths, and there is either no path restriction or the path restriction is an ancestor of an excludedPath, PropertyIndexPlan will return an infinite cost. I added this check only to PropertyIndexPlan. If there are other classes that need to be modified for this feature, please let me know. was (Author: baedke): Attached OAK-3263-v2.patch. If an index has excludedPaths but no includedPaths, and there is either no path restriction or the path restriction is an ancestor of an excludedPath, PropertyIndexPlan will return an infinite cost. Support including and excluding paths for PropertyIndex --- Key: OAK-3263 URL: https://issues.apache.org/jira/browse/OAK-3263 Project: Jackrabbit Oak Issue Type: Improvement Components: query Reporter: Chetan Mehrotra Assignee: Manfred Baedke Labels: performance Fix For: 1.3.6 Attachments: OAK-3263-prelimary.patch, OAK-3263-v2.patch, OAK-3263.patch As part of OAK-2599, support for excluding and including paths was added to the Lucene index. It would be good to have such support enabled for PropertyIndex as well -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3292) DocumentDiscoveryLiteServiceTest failures on travis and jenkins
[ https://issues.apache.org/jira/browse/OAK-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14714234#comment-14714234 ] Stefan Egli commented on OAK-3292: -- Fixed one issue: http://svn.apache.org/r1697958. Waiting for another few test runs on Jenkins/Travis to see if this helped. DocumentDiscoveryLiteServiceTest failures on travis and jenkins --- Key: OAK-3292 URL: https://issues.apache.org/jira/browse/OAK-3292 Project: Jackrabbit Oak Issue Type: Test Affects Versions: 1.3.4 Reporter: Stefan Egli Assignee: Stefan Egli Fix For: 1.3.5 Travis reported test failure of DocumentDiscoveryLiteServiceTest: * https://travis-ci.org/apache/jackrabbit-oak/builds/77114814
{code}
Failed tests:
testLargeStartStopFiesta(org.apache.jackrabbit.oak.plugins.document.DocumentDiscoveryLiteServiceTest): expectation not fulfilled within 6ms: checkFiestaState failed for SimplifiedInstance[cid=3], with instances: [SimplifiedInstance[cid=3], SimplifiedInstance[cid=4]], and inactiveIds: [1, 2], fulfillment result: inactiveIds dont match, expected: 1,2, got clusterView: {seq:6,final:true,id:4a8671de-f472-482c-b40b-29561d7b9836,me:3,active:[3,4],deactivating:[],inactive:[1]}
{code}
* same earlier: https://travis-ci.org/apache/jackrabbit-oak/builds/77004461 and also on jenkins:
* https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/352/jdk=jdk1.8.0_11,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.plugins.document/DocumentDiscoveryLiteServiceTest/testLargeStartStopFiesta/
* https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/352/jdk=jdk1.8.0_11,label=Ubuntu,nsfixtures=SEGMENT_MK,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.plugins.document/DocumentDiscoveryLiteServiceTest/testLargeStartStopFiesta/
{code}
expectation not fulfilled within 6ms: checkFiestaState failed for SimplifiedInstance[cid=1], with instances: [SimplifiedInstance[cid=1], SimplifiedInstance[cid=2]], and inactiveIds: [2], fulfillment result: inactiveIds dont match, expected: 2, got clusterView: {seq:10,final:true,id:c2ef1bb6-b5c6-4a1a-bfa4-f06be1554bfd,me:1,active:[1,2],deactivating:[],inactive:[]}
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3250) Restart DocumentNodeStore on lease timeout instead of System.exit
[ https://issues.apache.org/jira/browse/OAK-3250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712636#comment-14712636 ] Marcel Reutegger commented on OAK-3250: --- bq. it would not know that a restart has happened. Isn't this covered by the sequence number, which gets incremented whenever the state changes? Restart DocumentNodeStore on lease timeout instead of System.exit - Key: OAK-3250 URL: https://issues.apache.org/jira/browse/OAK-3250 Project: Jackrabbit Oak Issue Type: Improvement Components: core Affects Versions: 1.3.4 Reporter: Stefan Egli Assignee: Stefan Egli Fix For: 1.3.5 As discussed [on the list|http://markmail.org/thread/uo6n5oozlbhe7ifg] usage of System.exit is not ideal and one suggested alternative is to follow up on the idea to instead perform a 'restart of DocumentNodeStore' - while marking the local instance as 'deactivating' towards discovery-lite-descriptor so that upper layer discovery.oak can properly send a TOPOLOGY_CHANGING to all listeners, so that they immediately stop any topology-dependent activity. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3250) Restart DocumentNodeStore on lease timeout instead of System.exit
[ https://issues.apache.org/jira/browse/OAK-3250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712662#comment-14712662 ] Marcel Reutegger commented on OAK-3250: --- I'm confused. I thought we would shut down the DocumentNodeStore instance. At least in an OSGi environment this would be equivalent to a restart of all components, which depend on it. Doesn't this include the discovery mechanism? Restart DocumentNodeStore on lease timeout instead of System.exit - Key: OAK-3250 URL: https://issues.apache.org/jira/browse/OAK-3250 Project: Jackrabbit Oak Issue Type: Improvement Components: core Affects Versions: 1.3.4 Reporter: Stefan Egli Assignee: Stefan Egli Fix For: 1.3.5 As discussed [on the list|http://markmail.org/thread/uo6n5oozlbhe7ifg] usage of System.exit is not ideal and one suggested alternative is to follow up on the idea to instead perform a 'restart of DocumentNodeStore' - while marking the local instance as 'deactivating' towards discovery-lite-descriptor so that upper layer discovery.oak can properly send a TOPOLOGY_CHANGING to all listeners, so that they immediately stop any topology-dependent activity. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3230) Query engine should support virtual index rows
[ https://issues.apache.org/jira/browse/OAK-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712917#comment-14712917 ] Thomas Mueller commented on OAK-3230: - Ah, oak-lucene fails with an NPE, so we do have enough tests:
{noformat}
testNtFile(org.apache.jackrabbit.oak.jcr.query.TextExtractionQueryTest) Time elapsed: 0.207 sec ERROR!
java.lang.NullPointerException
	at org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.fetchNext(AggregationCursor.java:91)
	at org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.hasNext(AggregationCursor.java:76)
	at org.apache.jackrabbit.oak.query.ast.SelectorImpl.next(SelectorImpl.java:402)
	at org.apache.jackrabbit.oak.query.QueryImpl$RowIterator.fetchNext(QueryImpl.java:780)
	at org.apache.jackrabbit.oak.query.QueryImpl$RowIterator.hasNext(QueryImpl.java:805)
	at org.apache.jackrabbit.oak.jcr.query.QueryResultImpl$4.fetch(QueryResultImpl.java:172)
{noformat}
Yes, the virtual row check should have been fine against nextRow. I didn't want to update currentRow directly with cursor.next(), as it seemed the current row for the aggregation cursor shouldn't be virtual. Yes, aggregation in combination with virtual rows is unexpected. But I guess queries that produce virtual rows don't use aggregation, so we should be fine.
Query engine should support virtual index rows -- Key: OAK-3230 URL: https://issues.apache.org/jira/browse/OAK-3230 Project: Jackrabbit Oak Issue Type: Sub-task Components: query Reporter: Vikas Saurabh Assignee: Thomas Mueller Fix For: 1.3.5 Attachments: OAK-3230-query-engine-should-support-virtual-rows.patch As discussed in OAK-3156 [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14645712page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14645712] and [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14655273page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14655273], we need support for virtual rows returned by indices (rows where path property doesn't make sense). Talked to [~tmueller] and we should have a new method in {{IndexRow}} to clearly mark the intent that the row is virtual and hence should be treated accordingly by the query engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3250) Restart DocumentNodeStore on lease timeout instead of System.exit
[ https://issues.apache.org/jira/browse/OAK-3250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712650#comment-14712650 ] Stefan Egli commented on OAK-3250: -- The sequence number would change, yes, but that does not indicate whether the change is due to a view change or a local instance restart. One critical thing that must happen on restart is that the local instance puts itself at the end of the ordered list of instances in the cluster view (which ensures it is least likely to become leader). This is to ensure that if someone else became leader, that title is not taken away from that instance again (this is a requirement of the discovery API: leader and instance ordering must be stable, i.e. as long as an instance doesn't shut down or crash it keeps its position in the order, and it remains leader). Now surely, that's just a requirement of discovery, but discovery would at least need to know from discovery-lite when a restart has happened, and unfortunately that's not indicated by the sequence number alone. Restart DocumentNodeStore on lease timeout instead of System.exit - Key: OAK-3250 URL: https://issues.apache.org/jira/browse/OAK-3250 Project: Jackrabbit Oak Issue Type: Improvement Components: core Affects Versions: 1.3.4 Reporter: Stefan Egli Assignee: Stefan Egli Fix For: 1.3.5 As discussed [on the list|http://markmail.org/thread/uo6n5oozlbhe7ifg] usage of System.exit is not ideal and one suggested alternative is to follow up on the idea to instead perform a 'restart of DocumentNodeStore' - while marking the local instance as 'deactivating' towards discovery-lite-descriptor so that upper layer discovery.oak can properly send a TOPOLOGY_CHANGING to all listeners, so that they immediately stop any topology-dependent activity. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3250) Restart DocumentNodeStore on lease timeout instead of System.exit
[ https://issues.apache.org/jira/browse/OAK-3250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712667#comment-14712667 ] Stefan Egli commented on OAK-3250: -- bq. Doesn't this include the discovery mechanism? That's up for debate; perhaps/probably yes, I'm not sure. DocumentDiscoveryLiteService is a separate service, so at least it's not automatically going to be restarted. (discovery.oak in Sling, by the way, will probably not be restarted because of this.) But in both cases, the sequence number alone will not help out, in my view. Restart DocumentNodeStore on lease timeout instead of System.exit - Key: OAK-3250 URL: https://issues.apache.org/jira/browse/OAK-3250 Project: Jackrabbit Oak Issue Type: Improvement Components: core Affects Versions: 1.3.4 Reporter: Stefan Egli Assignee: Stefan Egli Fix For: 1.3.5 As discussed [on the list|http://markmail.org/thread/uo6n5oozlbhe7ifg] usage of System.exit is not ideal and one suggested alternative is to follow up on the idea to instead perform a 'restart of DocumentNodeStore' - while marking the local instance as 'deactivating' towards discovery-lite-descriptor so that upper layer discovery.oak can properly send a TOPOLOGY_CHANGING to all listeners, so that they immediately stop any topology-dependent activity. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3288) clarify DocumentStore contract with respect to number formats
[ https://issues.apache.org/jira/browse/OAK-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3288: Attachment: OAK-3288.diff Proposed patch (doc change still missing) clarify DocumentStore contract with respect to number formats - Key: OAK-3288 URL: https://issues.apache.org/jira/browse/OAK-3288 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk, rdbmk Affects Versions: 1.2.3, 1.3.4, 1.0.19 Reporter: Julian Reschke Assignee: Julian Reschke Labels: resilience Fix For: 1.3.6 Attachments: OAK-3288.diff The DS API allows setting properties as java.lang.Integer, but implementations vary in whether they can roundtrip Integers; some do, some convert to Long. The former is observed for MongoMK (which uses BSON internally), the latter is seen in RDBMK (which uses JSON). We should
- clarify that integers can be set, but they will come back as longs, and
- modify existing implementations to always return longs, so bugs surface early
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3250) Restart DocumentNodeStore on lease timeout instead of System.exit
[ https://issues.apache.org/jira/browse/OAK-3250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712654#comment-14712654 ] Stefan Egli commented on OAK-3250: -- Which, when reading this again, reminds me that not doing an actual shutdown of the instance violates this contract. But perhaps that's an acceptable exception. We should just make sure to document it: normally the leader status is never taken away from an instance at runtime, with the exception that if Oak fails to update the lease and must be auto-restarted, then the leader status is indeed taken away. Restart DocumentNodeStore on lease timeout instead of System.exit - Key: OAK-3250 URL: https://issues.apache.org/jira/browse/OAK-3250 Project: Jackrabbit Oak Issue Type: Improvement Components: core Affects Versions: 1.3.4 Reporter: Stefan Egli Assignee: Stefan Egli Fix For: 1.3.5 As discussed [on the list|http://markmail.org/thread/uo6n5oozlbhe7ifg] usage of System.exit is not ideal and one suggested alternative is to follow up on the idea to instead perform a 'restart of DocumentNodeStore' - while marking the local instance as 'deactivating' towards discovery-lite-descriptor so that upper layer discovery.oak can properly send a TOPOLOGY_CHANGING to all listeners, so that they immediately stop any topology-dependent activity. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3230) Query engine should support virtual index rows
[ https://issues.apache.org/jira/browse/OAK-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712871#comment-14712871 ] Thomas Mueller commented on OAK-3230: - The change in AggregationCursor looks wrong to me:
{noformat}
if (cursor.hasNext()) {
    IndexRow nextRow = cursor.next();
    if (!currentRow.isVirtualRow()) {
        currentRow = nextRow;
        String path = currentRow.getPath();
        aggregates = Iterators.filter(Iterators.concat(
                Iterators.singletonIterator(path),
                aggregator.getParents(rootState, path)),
                Predicates.not(Predicates.in(seenPaths)));
    }
    fetchNext();
    return;
}
{noformat}
For example currentRow could be null here, so that could result in a NPE. I think it should be:
{noformat}
if (cursor.hasNext()) {
    currentRow = cursor.next();
    if (!currentRow.isVirtualRow()) {
        String path = currentRow.getPath();
        aggregates = Iterators.filter(Iterators.concat(
                Iterators.singletonIterator(path),
                aggregator.getParents(rootState, path)),
                Predicates.not(Predicates.in(seenPaths)));
    }
    fetchNext();
    return;
}
{noformat}
Query engine should support virtual index rows -- Key: OAK-3230 URL: https://issues.apache.org/jira/browse/OAK-3230 Project: Jackrabbit Oak Issue Type: Sub-task Components: query Reporter: Vikas Saurabh Assignee: Thomas Mueller Fix For: 1.3.5 Attachments: OAK-3230-query-engine-should-support-virtual-rows.patch As discussed in OAK-3156 [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14645712page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14645712] and [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14655273page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14655273], we need support for virtual rows returned by indices (rows where path property doesn't make sense).
Talked to [~tmueller] and we should have a new method in {{IndexRow}} to clearly mark the intent that the row is virtual and hence should be treated accordingly by the query engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3230) Query engine should support virtual index rows
[ https://issues.apache.org/jira/browse/OAK-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712908#comment-14712908 ] Thomas Mueller commented on OAK-3230: - Even without the above change, oak-core tests pass. So I'm not sure, maybe we don't have enough unit tests. The API version needs to be increased:
{noformat}
[ERROR] org.apache.jackrabbit.oak.spi.query: Version increase required; detected 2.2.0, suggested 3.0.0
{noformat}
Query engine should support virtual index rows -- Key: OAK-3230 URL: https://issues.apache.org/jira/browse/OAK-3230 Project: Jackrabbit Oak Issue Type: Sub-task Components: query Reporter: Vikas Saurabh Assignee: Thomas Mueller Fix For: 1.3.5 Attachments: OAK-3230-query-engine-should-support-virtual-rows.patch As discussed in OAK-3156 [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14645712page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14645712] and [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14655273page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14655273], we need support for virtual rows returned by indices (rows where path property doesn't make sense). Talked to [~tmueller] and we should have a new method in {{IndexRow}} to clearly mark the intent that the row is virtual and hence should be treated accordingly by the query engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2679) Query engine: cache execution plans
[ https://issues.apache.org/jira/browse/OAK-2679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Richard updated OAK-2679: -- Attachment: 0001-OAK-2679-Reduce-execution-plan-overhead_0.2.patch I have attached an updated version of the patch (0001-OAK-2679-Reduce-execution-plan-overhead_0.2.patch) which makes the PropertyIndex plan cache more effective. Query engine: cache execution plans --- Key: OAK-2679 URL: https://issues.apache.org/jira/browse/OAK-2679 Project: Jackrabbit Oak Issue Type: Improvement Components: core, query Reporter: Thomas Mueller Assignee: Thomas Mueller Labels: performance Fix For: 1.3.5 Attachments: 0001-OAK-2679-Reduce-execution-plan-overhead_0.2.patch, OAK-2679.patch, executionplancache.patch If there are many indexes, preparing a query can take a long time, in relation to executing the query. The query execution plans can be cached. The cache should be invalidated if there are new indexes, or indexes are changed; a simple solution might be to use a timeout, and / or a manual cache clean via JMX or so. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
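The invalidation strategy suggested in the issue description (a timeout plus a manual clear, e.g. exposed via JMX) can be sketched like this; the names are illustrative stand-ins, not Oak's query engine API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class PlanCacheSketch {
    private static final class Entry {
        final Object plan;
        final long createdMillis;
        Entry(Object plan, long createdMillis) {
            this.plan = plan;
            this.createdMillis = createdMillis;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    PlanCacheSketch(long ttlMillis) { this.ttlMillis = ttlMillis; }

    Object getPlan(String statement, Supplier<Object> prepare) {
        long now = System.currentTimeMillis();
        Entry e = cache.get(statement);
        if (e == null || now - e.createdMillis > ttlMillis) {
            e = new Entry(prepare.get(), now);  // (re-)prepare on miss or after expiry
            cache.put(statement, e);
        }
        return e.plan;
    }

    void clear() { cache.clear(); }  // manual invalidation hook (e.g. via JMX)

    public static void main(String[] args) {
        PlanCacheSketch cache = new PlanCacheSketch(60_000);
        Object first = cache.getPlan("SELECT * FROM [nt:base]", Object::new);
        Object second = cache.getPlan("SELECT * FROM [nt:base]", Object::new);
        System.out.println(first == second);  // second lookup hits the cache
    }
}
```

The timeout keeps the cache eventually consistent with index definition changes without any change notification, at the cost of one redundant prepare per statement per TTL window.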
[jira] [Created] (OAK-3298) Allow to specify LogDumper's log buffer size
Stefan Egli created OAK-3298: Summary: Allow to specify LogDumper's log buffer size Key: OAK-3298 URL: https://issues.apache.org/jira/browse/OAK-3298 Project: Jackrabbit Oak Issue Type: Improvement Components: commons Affects Versions: 1.3.4 Reporter: Stefan Egli Assignee: Stefan Egli Priority: Trivial Fix For: 1.3.5 Currently the LogDumper has a hardcoded default of 1000 entries it stores. This should be made flexible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
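The improvement amounts to turning the hardcoded limit into a constructor parameter. A minimal sketch with a hypothetical stand-in class (not the actual LogDumper), dropping the oldest entry once the configured capacity is reached:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BoundedLogBuffer {
    static final int DEFAULT_SIZE = 1000;  // the previously hardcoded limit

    private final Deque<String> entries = new ArrayDeque<>();
    private final int maxSize;

    BoundedLogBuffer() { this(DEFAULT_SIZE); }

    BoundedLogBuffer(int maxSize) { this.maxSize = maxSize; }

    synchronized void add(String line) {
        if (entries.size() == maxSize) {
            entries.removeFirst();  // evict the oldest entry
        }
        entries.addLast(line);
    }

    synchronized int size() { return entries.size(); }

    public static void main(String[] args) {
        BoundedLogBuffer buf = new BoundedLogBuffer(3);
        for (int i = 0; i < 5; i++) {
            buf.add("entry " + i);
        }
        System.out.println(buf.size());  // capped at the configured size
    }
}
```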
[jira] [Resolved] (OAK-3298) Allow to specify LogDumper's log buffer size
[ https://issues.apache.org/jira/browse/OAK-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Egli resolved OAK-3298. -- Resolution: Fixed done. Allow to specify LogDumper's log buffer size Key: OAK-3298 URL: https://issues.apache.org/jira/browse/OAK-3298 Project: Jackrabbit Oak Issue Type: Improvement Components: commons Affects Versions: 1.3.4 Reporter: Stefan Egli Assignee: Stefan Egli Priority: Trivial Fix For: 1.3.5 Currently the LogDumper has a hardcoded default of 1000 entries it stores. This should be made flexible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3281) Test failures on trunk: SolrIndexQueryTestIT.sql2
[ https://issues.apache.org/jira/browse/OAK-3281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712792#comment-14712792 ] Thomas Mueller commented on OAK-3281: - We would need to have the target/...sql2.txt file. Unfortunately, I didn't find it at https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/ws/oak-solr-core/ Test failures on trunk: SolrIndexQueryTestIT.sql2 - Key: OAK-3281 URL: https://issues.apache.org/jira/browse/OAK-3281 Project: Jackrabbit Oak Issue Type: Bug Components: lucene Environment: https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/ Reporter: Michael Dürig Labels: ci, jenkins Fix For: 1.3.5 {{org.apache.jackrabbit.oak.plugins.index.solr.query.SolrIndexQueryTestIT.sql2}} fails regularly on Jenkins: https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/350/jdk=jdk1.8.0_11,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=integrationTesting/testReport/junit/org.apache.jackrabbit.oak.plugins.index.solr.query/SolrIndexQueryTestIT/sql2/
{noformat}
java.lang.Exception: Results in target/oajopi.solr.query.SolrIndexQueryTestIT_sql2.txt don't match expected results in file:/x1/jenkins/jenkins-slave/workspace/Apache%20Jackrabbit%20Oak%20matrix/jdk/jdk1.8.0_11/label/Ubuntu/nsfixtures/DOCUMENT_RDB/profile/integrationTesting/oak-core/target/oak-core-1.4-SNAPSHOT-tests.jar!/org/apache/jackrabbit/oak/query/sql2.txt; compare the files for details
	at org.apache.jackrabbit.oak.query.AbstractQueryTest.test(AbstractQueryTest.java:232)
	at org.apache.jackrabbit.oak.plugins.index.solr.query.SolrIndexQueryTestIT.sql2(SolrIndexQueryTestIT.java:91)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:47)
	at org.junit.rules.RunRules.evaluate(RunRules.java:18)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3230) Query engine should support virtual index rows
[ https://issues.apache.org/jira/browse/OAK-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712880#comment-14712880 ] Vikas Saurabh commented on OAK-3230: Good catch [~tmueller]. Yes, the virtual row check should have been fine against {{nextRow}}. I didn't want to update currentRow directly with cursor.next() as it seemed the current row for the aggregation cursor shouldn't be virtual. Query engine should support virtual index rows -- Key: OAK-3230 URL: https://issues.apache.org/jira/browse/OAK-3230 Project: Jackrabbit Oak Issue Type: Sub-task Components: query Reporter: Vikas Saurabh Assignee: Thomas Mueller Fix For: 1.3.5 Attachments: OAK-3230-query-engine-should-support-virtual-rows.patch As discussed in OAK-3156 [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14645712page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14645712] and [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14655273page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14655273], we need support for virtual rows returned by indices (rows where the path property doesn't make sense). Talked to [~tmueller] and we should have a new method in {{IndexRow}} to clearly mark the intent that the row is virtual and hence should be treated accordingly by the query engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
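The {{IndexRow}} change discussed above can be sketched roughly as follows. This is an illustrative mock, not the actual Oak SPI or the attached patch; the interface and class names here are re-declared for the example, and the check placement follows the comment's point that the virtual-row test belongs on {{nextRow}}:

```java
// Illustrative sketch of the proposal above: an index row gains a marker
// method so the query engine can treat virtual rows (rows with no
// meaningful path, e.g. suggestion/spellcheck results) specially.
interface Row {
    String getPath();
    boolean isVirtualRow(); // proposed marker: true if the path is meaningless
}

class IndexRowSketch implements Row {
    private final String path;
    private final boolean virtual;

    IndexRowSketch(String path, boolean virtual) {
        this.path = path;
        this.virtual = virtual;
    }

    @Override public String getPath() { return path; }
    @Override public boolean isVirtualRow() { return virtual; }
}

class AggregationCheck {
    // Per the comment, the check is applied to the *next* row before it
    // is merged, so the aggregation cursor's current row is never virtual.
    static boolean mayAggregate(Row nextRow) {
        return !nextRow.isVirtualRow();
    }
}
```

A cursor built on this sketch would skip rows for which {{mayAggregate}} returns false instead of trying to resolve their (meaningless) paths.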
[jira] [Commented] (OAK-3288) clarify DocumentStore contract with respect to number formats
[ https://issues.apache.org/jira/browse/OAK-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712915#comment-14712915 ] Julian Reschke commented on OAK-3288: - If we wanted to change the signature for set() away from Object, what types would we need? Obviously String, long, and boolean, but I also see a case where a Map is set... clarify DocumentStore contract with respect to number formats - Key: OAK-3288 URL: https://issues.apache.org/jira/browse/OAK-3288 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk, rdbmk Affects Versions: 1.2.3, 1.3.4, 1.0.19 Reporter: Julian Reschke Assignee: Julian Reschke Labels: resilience Fix For: 1.3.6 Attachments: OAK-3288.diff The DS API allows setting properties as java.lang.Integer, but implementations vary in whether they can roundtrip Integers; some do, some convert to Long. The former is observed for MongoMK (which uses BSON internally), the latter is seen in RDBMK (which uses JSON). We should - clarify that integers can be set, but they will come back as longs, and - modify existing implementations to always return longs, so bugs surface early -- This message was sent by Atlassian JIRA (v6.3.4#6332)
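A minimal sketch of the second bullet above (always returning longs): widening integral values at the store boundary so both backends roundtrip the same type. This is an assumption about how an implementation might do it, not the attached OAK-3288 patch; the class and method names are hypothetical:

```java
// Sketch (assumed approach, not the actual Oak code): widen Integer to
// Long when a property is set, so MongoMK (BSON) and RDBMK (JSON)
// return the same type and type-dependent bugs surface early.
final class NumberWidening {
    static Object widen(Object value) {
        if (value instanceof Integer) {
            // java.lang.Integer comes back as java.lang.Long
            return Long.valueOf(((Integer) value).longValue());
        }
        // String, Long, Boolean etc. pass through unchanged
        return value;
    }
}
```

Normalizing on write (rather than on read) means the stored representation is already uniform, so every reader sees the same type regardless of backend serialization.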
[jira] [Comment Edited] (OAK-3230) Query engine should support virtual index rows
[ https://issues.apache.org/jira/browse/OAK-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712926#comment-14712926 ] Vikas Saurabh edited comment on OAK-3230 at 8/26/15 10:44 AM: -- I'm not sure of the version change part. Would that mean that we should find some other (API-wise backward compatible) change? -About tests, the only place which emits a virtual row is lucene suggest/spell check. I think there isn't a legitimate query which would form an aggregate cursor over suggest/spell check. But, yes, we can probably skip query dynamics and unit test the cursor implementations.- Didn't see the last comment from Thomas was (Author: catholicon): I'm not sure of the version change part. Would that mean that we should find some other (API-wise backward compatible) change? About tests, the only place which emits a virtual row is lucene suggest/spell check. I think there isn't a legitimate query which would form an aggregate cursor over suggest/spell check. But, yes, we can probably skip query dynamics and unit test the cursor implementations. Query engine should support virtual index rows -- Key: OAK-3230 URL: https://issues.apache.org/jira/browse/OAK-3230 Project: Jackrabbit Oak Issue Type: Sub-task Components: query Reporter: Vikas Saurabh Assignee: Thomas Mueller Fix For: 1.3.5 Attachments: OAK-3230-query-engine-should-support-virtual-rows.patch As discussed in OAK-3156 [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14645712page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14645712] and [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14655273page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14655273], we need support for virtual rows returned by indices (rows where the path property doesn't make sense). 
Talked to [~tmueller] and we should have a new method in {{IndexRow}} to clearly mark the intent that the row is virtual and hence should be treated accordingly by the query engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (OAK-3153) Make it possible to disable recording of stack trace in SessionStats
[ https://issues.apache.org/jira/browse/OAK-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Mueller reassigned OAK-3153: --- Assignee: Thomas Mueller Make it possible to disable recording of stack trace in SessionStats Key: OAK-3153 URL: https://issues.apache.org/jira/browse/OAK-3153 Project: Jackrabbit Oak Issue Type: Improvement Components: core Affects Versions: 1.3.3 Reporter: Joel Richard Assignee: Thomas Mueller Labels: performance Fix For: 1.3.5 Attachments: 0001-OAK-3153-Make-it-possible-to-disable-recording-of-st.patch For the rendering of some pages we have to create a lot of sessions. Around 9% of the rendering time is spent inside of RepositoryImpl.login. Half of this time is spent creating the exception in SessionStats. Therefore, it would be useful if the recording of the exception could be disabled to improve the performance. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
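The performance problem described in OAK-3153 comes from {{Throwable}} construction capturing the full stack. A sketch of how the recording could be made optional is below; the toggle name and class are hypothetical, not the attached patch:

```java
// Sketch, assuming a system property toggle (the property name here is
// hypothetical): only pay the cost of filling in a stack trace when
// recording is enabled, since Exception construction is what makes
// RepositoryImpl.login expensive per the issue description.
final class SessionStatsSketch {
    private static final boolean RECORD_STACK_TRACE =
            Boolean.parseBoolean(
                    System.getProperty("oak.sessionStats.recordStackTrace", "false"));

    private final Exception initStackTrace;

    SessionStatsSketch() {
        // Creating the Exception captures the full stack, which is the
        // expensive part; skip it entirely when disabled.
        this.initStackTrace = RECORD_STACK_TRACE ? new Exception("init") : null;
    }

    Exception getInitStackTrace() {
        return initStackTrace;
    }
}
```

With the toggle off, session creation avoids the stack capture entirely, which matches the ~half of the login cost attributed to SessionStats in the report.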
[jira] [Updated] (OAK-3153) Make it possible to disable recording of stack trace in SessionStats
[ https://issues.apache.org/jira/browse/OAK-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Mueller updated OAK-3153: Fix Version/s: 1.3.5 Make it possible to disable recording of stack trace in SessionStats Key: OAK-3153 URL: https://issues.apache.org/jira/browse/OAK-3153 Project: Jackrabbit Oak Issue Type: Improvement Components: core Affects Versions: 1.3.3 Reporter: Joel Richard Assignee: Thomas Mueller Labels: performance Fix For: 1.3.5 Attachments: 0001-OAK-3153-Make-it-possible-to-disable-recording-of-st.patch For the rendering of some pages we have to create a lot of sessions. Around 9% of the rendering time is spent inside of RepositoryImpl.login. Half of this time is spent creating the exception in SessionStats. Therefore, it would be useful if the recording of the exception could be disabled to improve the performance. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (OAK-3238) fine tune clock-sync check vs lease-check settings
[ https://issues.apache.org/jira/browse/OAK-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713054#comment-14713054 ] Stefan Egli edited comment on OAK-3238 at 8/26/15 12:48 PM: Changed the lease behavior as follows ( http://svn.apache.org/r1697913 ): * update is now done already after 20 sec - this should not have any negative performance implications. the lease timeout is left unchanged at 60 sec * the lease-check is now done with a margin of 20 sec (1/3 of the leaseTime): so if the lease is valid for less than 20 sec it will now consider that as a failure. /fyi: [~mreutegg], [~chetanm], [~reschke] was (Author: egli): Changed the lease behavior as follows (http://svn.apache.org/r1697913): * update is now done already after 20 sec - this should not have any negative performance implications. the lease timeout is left unchanged at 60 sec * the lease-check is now done with a margin of 20 sec (1/3 of the leaseTime): so if the lease is valid for less than 20 sec it will now consider that as a failure. /fyi: [~mreutegg], [~chetanm], [~reschke] fine tune clock-sync check vs lease-check settings -- Key: OAK-3238 URL: https://issues.apache.org/jira/browse/OAK-3238 Project: Jackrabbit Oak Issue Type: Improvement Components: core Affects Versions: 1.3.4 Reporter: Stefan Egli Assignee: Stefan Egli Fix For: 1.3.5 There are now two components that try to assure 'discovery-lite' (OAK-2844) is reporting a coherent cluster view to the upper layers: * OAK-2682 : time difference detection: by default fails if clock is off by more than 2 seconds at startup. That results in a 4 sec max margin in a document-cluster * OAK-2739 : lease-checking: every instance checks if the local lease is valid upon any document access. This check is done against the actual 'leaseEndTime' - which is updated every (by default) 30 seconds to be valid for (by default) another 60 seconds. 
These two factors combined, in the worst case you could still end up having that 4 second time window where the local instance fails to update the lease (eg lease-thread dies) but it considers itself still owning a valid lease - while a remote instance might be those 4 seconds off and considers the lease as timed out. So overall: the 3 factors 'lease duration', 'lease update frequency' and 'maximum allowed clock difference' must be better tuned to end up in a stable mechanism. Suggestion: * increase the 'lease duration' to be 3 x 'lease update frequency', ie 90sec lease duration * reduce the lease check failure limit from 'lease duration' to 2x 'lease update frequency' - assuming that one 'lease update interval' is way larger than the 'maximum allowed clock difference' -- This message was sent by Atlassian JIRA (v6.3.4#6332)
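The lease arithmetic from the resolution comment above (60 sec lease, update after 20 sec, check with a 20 sec margin, i.e. leaseTime / 3) can be written out as a small sketch; the class and method names are illustrative, not the actual Oak lease implementation:

```java
// Sketch of the lease timing described above: the check fails early,
// while the margin still remains, rather than only once leaseEndTime
// has actually passed. This absorbs the allowed clock difference
// between cluster nodes.
final class LeaseMath {
    static final long LEASE_TIME_MS = 60_000;
    static final long UPDATE_INTERVAL_MS = LEASE_TIME_MS / 3; // 20 sec
    static final long CHECK_MARGIN_MS = LEASE_TIME_MS / 3;    // 20 sec

    // Valid only while more than the margin remains on the lease.
    static boolean isLeaseValid(long leaseEndTime, long now) {
        return leaseEndTime - now > CHECK_MARGIN_MS;
    }
}
```

With these numbers, a node that fails to renew stops trusting its lease a full 20 seconds before a remote node could consider it expired, which is well above the 2-4 second clock-difference window described above.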
[jira] [Commented] (OAK-3288) clarify DocumentStore contract with respect to number formats
[ https://issues.apache.org/jira/browse/OAK-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712969#comment-14712969 ] Marcel Reutegger commented on OAK-3288: --- bq. I also see a case where a Map is set This doesn't sound right. The intention of the UpdateOp is rather to provide setMapEntry() for this purpose. clarify DocumentStore contract with respect to number formats - Key: OAK-3288 URL: https://issues.apache.org/jira/browse/OAK-3288 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk, rdbmk Affects Versions: 1.2.3, 1.3.4, 1.0.19 Reporter: Julian Reschke Assignee: Julian Reschke Labels: resilience Fix For: 1.3.6 Attachments: OAK-3288.diff The DS API allows setting properties as java.lang.Integer, but implementations vary in whether they can roundtrip Integers; some do, some convert to Long. The former is observed for MongoMK (which uses BSON internally), the latter is seen in RDBMK (which uses JSON). We should - clarify that integers can be set, but they will come back as longs, and - modify existing implementations to always return longs, so bugs surface early -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-2849) Improve revision gc on SegmentMK
[ https://issues.apache.org/jira/browse/OAK-2849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713041#comment-14713041 ] Alex Parvulescu commented on OAK-2849: -- fyi I tweaked the nodestore init bits with http://svn.apache.org/r1697909. [~mduerig] please verify I didn't break anything ;) Improve revision gc on SegmentMK Key: OAK-2849 URL: https://issues.apache.org/jira/browse/OAK-2849 Project: Jackrabbit Oak Issue Type: Task Components: segmentmk Reporter: Michael Dürig Assignee: Michael Dürig Labels: compaction, gc Fix For: 1.3.6 This is a container issue for the ongoing effort to improve revision gc of the SegmentMK. I'm exploring * ways to make the reference graph as exact as possible and necessary: it should not contain segments that are not referenceable any more, but must contain all segments that are referenceable. * ways to segregate the reference graph, reducing dependencies between certain sets of segments as much as possible. * ways to reduce the number of in-memory references and their impact on gc as much as possible. Work in progress is in my private [Github fork|https://github.com/mduerig/jackrabbit-oak]. As soon as something is promising enough to make it into Oak, I'll spawn off an issue and make it a subtask of this task. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3230) Query engine should support virtual index rows
[ https://issues.apache.org/jira/browse/OAK-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712926#comment-14712926 ] Vikas Saurabh commented on OAK-3230: I'm not sure of the version change part. Would that mean that we should find some other (API-wise backward compatible) change? About tests, the only place which emits a virtual row is lucene suggest/spell check. I think there isn't a legitimate query which would form an aggregate cursor over suggest/spell check. But, yes, we can probably skip query dynamics and unit test the cursor implementations. Query engine should support virtual index rows -- Key: OAK-3230 URL: https://issues.apache.org/jira/browse/OAK-3230 Project: Jackrabbit Oak Issue Type: Sub-task Components: query Reporter: Vikas Saurabh Assignee: Thomas Mueller Fix For: 1.3.5 Attachments: OAK-3230-query-engine-should-support-virtual-rows.patch As discussed in OAK-3156 [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14645712page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14645712] and [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14655273page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14655273], we need support for virtual rows returned by indices (rows where the path property doesn't make sense). Talked to [~tmueller] and we should have a new method in {{IndexRow}} to clearly mark the intent that the row is virtual and hence should be treated accordingly by the query engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3288) clarify DocumentStore contract with respect to number formats
[ https://issues.apache.org/jira/browse/OAK-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3288: Attachment: (was: OAK-3288.diff) clarify DocumentStore contract with respect to number formats - Key: OAK-3288 URL: https://issues.apache.org/jira/browse/OAK-3288 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk, rdbmk Affects Versions: 1.2.3, 1.3.4, 1.0.19 Reporter: Julian Reschke Assignee: Julian Reschke Labels: resilience Fix For: 1.3.6 The DS API allows setting properties as java.lang.Integer, but implementations vary in whether they can roundtrip Integers; some do, some convert to Long. The former is observed for MongoMK (which uses BSON internally), the latter is seen in RDBMK (which uses JSON). We should - clarify that integers can be set, but they will come back as longs, and - modify existing implementations to always return longs, so bugs surface early -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3079) LastRevRecoveryAgent can update _lastRev of children but not the root
[ https://issues.apache.org/jira/browse/OAK-3079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcel Reutegger updated OAK-3079: -- Affects Version/s: 1.2 LastRevRecoveryAgent can update _lastRev of children but not the root - Key: OAK-3079 URL: https://issues.apache.org/jira/browse/OAK-3079 Project: Jackrabbit Oak Issue Type: Bug Components: core, mongomk Affects Versions: 1.2, 1.3.2 Reporter: Stefan Egli Assignee: Marcel Reutegger Labels: resilience Fix For: 1.3.6 Attachments: NonRootUpdatingLastRevRecoveryTest.java As mentioned in [OAK-2131|https://issues.apache.org/jira/browse/OAK-2131?focusedCommentId=14616391page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14616391] there can be a situation wherein the LastRevRecoveryAgent updates some nodes in the tree but not the root. This seems to happen due to OAK-2131's change in the Commit.applyToCache (where paths to update are collected via tracker.track): in that code, paths which are non-root and for which no content has changed (and mind you, a content change includes adding _deleted, which happens by default for nodes with children) are not 'tracked', ie for those the _lastRev is not updated by subsequent backgroundUpdate operations - leaving them 'old/out-of-date'. This seems correct as per description/intention of OAK-2131 where the last revision can be determined via the commitRoot of the parent. But it has the effect that the LastRevRecoveryAgent then finds those intermediate nodes to be updated whereas the root has already been updated (which is at first glance non-intuitive). I'll attach a test case to reproduce this. Perhaps this is a bug, perhaps it's ok. [~mreutegg] wdyt? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3294) Read-only live FileStore implementation
[ https://issues.apache.org/jira/browse/OAK-3294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713045#comment-14713045 ] Francesco Mari commented on OAK-3294: - [~alex.parvulescu], the patch looks good to me. I would add {{compact()}} and {{maybeCompact()}} to {{ReadOnlyStore}} as no-op operations, because these methods are conceptually associated with a change in the {{FileStore}} on the file system. I'm also not sure whether the methods overridden in {{ReadOnlyStore}} should be implemented as no-ops, or whether they should perform some more meaningful action, such as writing a log message or throwing an {{UnsupportedOperationException}} when called. I'm in favour of throwing an exception. Read-only live FileStore implementation --- Key: OAK-3294 URL: https://issues.apache.org/jira/browse/OAK-3294 Project: Jackrabbit Oak Issue Type: Improvement Components: segmentmk Reporter: Alex Parvulescu Assignee: Alex Parvulescu Priority: Minor Attachments: OAK-3294.patch Having a read-only FileStore able to work on a running (live) FileStore would open the door for some interesting data-collection and debugging tools that no longer need the repository to be shut down. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
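The design question raised in the comment above (no-op vs. throwing) can be sketched as follows. The class and method names are illustrative, not the actual FileStore/ReadOnlyStore API from the patch:

```java
// Sketch of the throwing alternative favoured in the comment: a
// read-only store variant that rejects mutations loudly instead of
// silently doing nothing, so accidental writes fail fast.
class StoreSketch {
    void writeSegment(byte[] data) { /* would persist to disk */ }
    void flush() { /* would fsync */ }
}

class ReadOnlyStoreSketch extends StoreSketch {
    @Override
    void writeSegment(byte[] data) {
        // Throwing surfaces misuse immediately; a silent no-op could
        // hide bugs where a caller believes data was written.
        throw new UnsupportedOperationException("store is read-only");
    }

    @Override
    void flush() {
        throw new UnsupportedOperationException("store is read-only");
    }
}
```

The trade-off is the one named in the comment: no-ops keep callers working unchanged, while exceptions make the read-only contract explicit at the call site.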
[jira] [Resolved] (OAK-3238) fine tune clock-sync check vs lease-check settings
[ https://issues.apache.org/jira/browse/OAK-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Egli resolved OAK-3238. -- Resolution: Fixed Changed the lease behavior as follows (http://svn.apache.org/r1697913): * update is now done already after 20 sec - this should not have any negative performance implications. the lease timeout is left unchanged at 60 sec * the lease-check is now done with a margin of 20 sec (1/3 of the leaseTime): so if the lease is valid for less than 20 sec it will now consider that as a failure. /fyi: [~mreutegg], [~chetanm], [~reschke] fine tune clock-sync check vs lease-check settings -- Key: OAK-3238 URL: https://issues.apache.org/jira/browse/OAK-3238 Project: Jackrabbit Oak Issue Type: Improvement Components: core Affects Versions: 1.3.4 Reporter: Stefan Egli Assignee: Stefan Egli Fix For: 1.3.5 There are now two components that try to assure 'discovery-lite' (OAK-2844) is reporting a coherent cluster view to the upper layers: * OAK-2682 : time difference detection: by default fails if clock is off by more than 2 seconds at startup. That results in a 4 sec max margin in a document-cluster * OAK-2739 : lease-checking: every instance checks if the local lease is valid upon any document access. This check is done against the actual 'leaseEndTime' - which is updated every (by default) 30 seconds to be valid for (by default) another 60 seconds. These two factors combined, in the worst case you could still end up having that 4 second time window where the local instance fails to update the lease (eg lease-thread dies) but it considers itself still owning a valid lease - while a remote instance might be those 4 seconds off and considers the lease as timed out. So overall: the 3 factors 'lease duration', 'lease update frequency' and 'maximum allowed clock difference' must be better tuned to end up in a stable mechanism. 
Suggestion: * increase the 'lease duration' to be 3 x 'lease update frequency', ie 90sec lease duration * reduce the lease check failure limit from 'lease duration' to 2x 'lease update frequency' - assuming that one 'lease update interval' is way larger than the 'maximum allowed clock difference' -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-3153) Make it possible to disable recording of stack trace in SessionStats
[ https://issues.apache.org/jira/browse/OAK-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Mueller resolved OAK-3153. - Resolution: Fixed Make it possible to disable recording of stack trace in SessionStats Key: OAK-3153 URL: https://issues.apache.org/jira/browse/OAK-3153 Project: Jackrabbit Oak Issue Type: Improvement Components: core Affects Versions: 1.3.3 Reporter: Joel Richard Assignee: Thomas Mueller Labels: performance Fix For: 1.3.5 Attachments: 0001-OAK-3153-Make-it-possible-to-disable-recording-of-st.patch For the rendering of some pages we have to create a lot of sessions. Around 9% of the rendering time is spent inside of RepositoryImpl.login. Half of this time is spent creating the exception in SessionStats. Therefore, it would be useful if the recording of the exception could be disabled to improve the performance. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-3230) Query engine should support virtual index rows
[ https://issues.apache.org/jira/browse/OAK-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Mueller resolved OAK-3230. - Resolution: Fixed Query engine should support virtual index rows -- Key: OAK-3230 URL: https://issues.apache.org/jira/browse/OAK-3230 Project: Jackrabbit Oak Issue Type: Sub-task Components: query Reporter: Vikas Saurabh Assignee: Thomas Mueller Fix For: 1.3.5 Attachments: OAK-3230-query-engine-should-support-virtual-rows.patch As discussed in OAK-3156 [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14645712page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14645712] and [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14655273page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14655273], we need support for virtual rows returned by indices (rows where path property doesn't make sense). Talked to [~tmueller] and we should have a new method in {{IndexRow}} to clearly mark the intent that the row is virtual and hence should be treated accordingly by the query engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3230) Query engine should support virtual index rows
[ https://issues.apache.org/jira/browse/OAK-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713009#comment-14713009 ] Thomas Mueller commented on OAK-3230: - http://svn.apache.org/r1697896 Query engine should support virtual index rows -- Key: OAK-3230 URL: https://issues.apache.org/jira/browse/OAK-3230 Project: Jackrabbit Oak Issue Type: Sub-task Components: query Reporter: Vikas Saurabh Assignee: Thomas Mueller Fix For: 1.3.5 Attachments: OAK-3230-query-engine-should-support-virtual-rows.patch As discussed in OAK-3156 [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14645712page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14645712] and [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14655273page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14655273], we need support for virtual rows returned by indices (rows where path property doesn't make sense). Talked to [~tmueller] and we should have a new method in {{IndexRow}} to clearly mark the intent that the row is virtual and hence should be treated accordingly by the query engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3153) Make it possible to disable recording of stack trace in SessionStats
[ https://issues.apache.org/jira/browse/OAK-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713063#comment-14713063 ] Thomas Mueller commented on OAK-3153: - http://svn.apache.org/r1697915 Thanks a lot for the patch! Make it possible to disable recording of stack trace in SessionStats Key: OAK-3153 URL: https://issues.apache.org/jira/browse/OAK-3153 Project: Jackrabbit Oak Issue Type: Improvement Components: core Affects Versions: 1.3.3 Reporter: Joel Richard Assignee: Thomas Mueller Labels: performance Fix For: 1.3.5 Attachments: 0001-OAK-3153-Make-it-possible-to-disable-recording-of-st.patch For the rendering of some pages we have to create a lot of sessions. Around 9% of the rendering time is spent inside of RepositoryImpl.login. Half of this time is spent creating the exception in SessionStats. Therefore, it would be useful if the recording of the exception could be disabled to improve the performance. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3288) clarify DocumentStore contract with respect to number formats
[ https://issues.apache.org/jira/browse/OAK-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3288: Attachment: OAK-3288.diff Proposed change with API unchanged, but javadoc pointing out restrictions. clarify DocumentStore contract with respect to number formats - Key: OAK-3288 URL: https://issues.apache.org/jira/browse/OAK-3288 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk, rdbmk Affects Versions: 1.2.3, 1.3.4, 1.0.19 Reporter: Julian Reschke Assignee: Julian Reschke Labels: resilience Fix For: 1.3.6 Attachments: OAK-3288.diff The DS API allows setting properties as java.lang.Integer, but implementations vary in whether they can roundtrip Integers; some do, some convert to Long. The former is observed for MongoMK (which uses BSON internally), the latter is seen in RDBMK (which uses JSON). We should - clarify that integers can be set, but they will come back as longs, and - modify existing implementations to always return longs, so bugs surface early -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2714) Test failures on Jenkins
[ https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-2714: --- Description: This issue is for tracking test failures seen at our Jenkins instance that might yet be transient. Once a failure happens too often we should remove it here and create a dedicated issue for it. || Test || Builds || Fixture || JVM || | org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 | | org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK , DOCUMENT_RDB | 1.6 | | org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ?| ? | | org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181 | DOCUMENT_NS | 1.7 | | org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ?| ? | | org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ?| ? | | org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? 
| | org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 | | org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 | | org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157 | DOCUMENT_RDB | 1.6, 1.8 | | org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB | 1.6 | | org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 | | org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | DOCUMENT_RDB | 1.7 | | org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 | | org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 142, 186, 191 | DOCUMENT_RDB, SEGMENT_MK | 1.6 | | org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 151 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.6, 1.7, 1.8 | | org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder | 155 | DOCUMENT_RDB | 1.8 | | org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163 | SEGMENT_MK | 1.6 | | org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 163, 164 | SEGMENT_MK, DOCUMENT_RDB | 1.7, 1.8 | | org.apache.jackrabbit.oak.jcr.query.SuggestTest | 171 | SEGMENT_MK | 1.8 | | org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.updateNodeType | 243 | DOCUMENT_RDB | 1.6 | | org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.nodeType | 272 | DOCUMENT_RDB | 1.8 | | org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testMove | 308 | DOCUMENT_RDB | 1.6 | | org.apache.jackrabbit.oak.jcr.version.VersionablePathNodeStoreTest.testVersionablePaths | 361 | DOCUMENT_RDB | 1.7 | | org.apache.jackrabbit.oak.plugins.document.DocumentDiscoveryLiteServiceTest | 361 | DOCUMENT_NS, SEGMENT_MK | 1.8 | was: This issue is for tracking test failures seen at our Jenkins instance that might 
yet be transient. Once a failure happens too often we should remove it here and create a dedicated issue for it. || Test || Builds || Fixture || JVM || | org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 | | org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK , DOCUMENT_RDB | 1.6 | | org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ?| ? | | org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181 | DOCUMENT_NS | 1.7 | | org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ?| ? | | org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ?| ? | | org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? | | org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 | | org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 | | org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157 |
[jira] [Updated] (OAK-2714) Test failures on Jenkins
[ https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-2714:
---
Description:
This issue is for tracking test failures seen on our Jenkins instance that might yet be transient. Once a failure happens too often we should remove it here and create a dedicated issue for it.
|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181 | DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 142, 186, 191 | DOCUMENT_RDB, SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 151 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder | 155 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163 | SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 163, 164 | SEGMENT_MK, DOCUMENT_RDB | 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.query.SuggestTest | 171 | SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.updateNodeType | 243 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.nodeType | 272 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testMove | 308 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.version.VersionablePathNodeStoreTest.testVersionablePaths | 361 | DOCUMENT_RDB | 1.7 |

was:
This issue is for tracking test failures seen on our Jenkins instance that might yet be transient. Once a failure happens too often we should remove it here and create a dedicated issue for it.
|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181 | DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB |
[jira] [Commented] (OAK-3230) Query engine should support virtual index rows
[ https://issues.apache.org/jira/browse/OAK-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712966#comment-14712966 ] Thomas Mueller commented on OAK-3230:
bq. I'm not sure of the version change part. Would that mean that we should find some other (API-wise backward-compatible) change?
No, I will upgrade the version. I don't think it's a problem.
Query engine should support virtual index rows -- Key: OAK-3230 URL: https://issues.apache.org/jira/browse/OAK-3230 Project: Jackrabbit Oak Issue Type: Sub-task Components: query Reporter: Vikas Saurabh Assignee: Thomas Mueller Fix For: 1.3.5 Attachments: OAK-3230-query-engine-should-support-virtual-rows.patch As discussed in OAK-3156 [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14645712&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14645712] and [here|https://issues.apache.org/jira/browse/OAK-3156?focusedCommentId=14655273&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14655273], we need support for virtual rows returned by indices (rows where the path property doesn't make sense). Talked to [~tmueller]; we should have a new method in {{IndexRow}} to clearly mark the intent that the row is virtual and hence should be treated accordingly by the query engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
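A minimal sketch of what such a marker could look like. The {{isVirtualRow()}} method name and the {{SuggestionRow}} class are assumptions for illustration only, not the final Oak API:

```java
// Sketch: a marker method on the index row so the query engine can treat
// virtual rows (rows with no real repository path) specially.
interface IndexRow {
    String getPath();

    boolean isVirtualRow(); // hypothetical marker in the spirit of OAK-3230
}

// A suggestion row has no backing node, so it reports itself as virtual and
// returns a placeholder path the engine should not try to resolve.
class SuggestionRow implements IndexRow {
    private final String suggestion;

    SuggestionRow(String suggestion) {
        this.suggestion = suggestion;
    }

    @Override
    public String getPath() {
        return "/"; // placeholder only; meaningless for a virtual row
    }

    @Override
    public boolean isVirtualRow() {
        return true;
    }

    String getSuggestion() {
        return suggestion;
    }
}
```

With such a flag the engine can skip the path-based node-type filtering that currently drops suggestion rows.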
[jira] [Created] (OAK-3299) SNFE in SegmentOverflowExceptionIT
Michael Dürig created OAK-3299: -- Summary: SNFE in SegmentOverflowExceptionIT Key: OAK-3299 URL: https://issues.apache.org/jira/browse/OAK-3299 Project: Jackrabbit Oak Issue Type: Improvement Components: segmentmk Reporter: Michael Dürig Assignee: Michael Dürig Fix For: 1.3.5 {{org.apache.jackrabbit.oak.plugins.segment.SegmentOverflowExceptionIT}} can fail with an {{SNFE}}. This is somewhat expected due to the low segment retention time used for this test. That time is apparently needed for this test to reproduce the original issue. So I'd rather not touch it. I propose to ignore that exception and retry a couple of times until failing the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
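The proposed retry could be sketched like this. The exception type is stubbed so the snippet is self-contained, and the attempt count is an arbitrary choice (this is not the actual test code):

```java
import java.util.function.Supplier;

// Stub standing in for Oak's SegmentNotFoundException.
class SegmentNotFoundException extends RuntimeException {
    SegmentNotFoundException(String message) {
        super(message);
    }
}

class RetryOnSnfe {
    // Runs the task, swallowing up to attempts - 1 SNFEs before giving up.
    // Assumes attempts >= 1.
    static <T> T run(int attempts, Supplier<T> task) {
        SegmentNotFoundException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return task.get();
            } catch (SegmentNotFoundException e) {
                last = e; // somewhat expected here: low segment retention time
            }
        }
        throw last; // retries exhausted: fail the test with the last SNFE
    }
}
```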
[jira] [Resolved] (OAK-3299) SNFE in SegmentOverflowExceptionIT
[ https://issues.apache.org/jira/browse/OAK-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig resolved OAK-3299. Resolution: Fixed Fixed at http://svn.apache.org/r1697891 SNFE in SegmentOverflowExceptionIT --- Key: OAK-3299 URL: https://issues.apache.org/jira/browse/OAK-3299 Project: Jackrabbit Oak Issue Type: Improvement Components: segmentmk Reporter: Michael Dürig Assignee: Michael Dürig Labels: test Fix For: 1.3.5 {{org.apache.jackrabbit.oak.plugins.segment.SegmentOverflowExceptionIT}} can fail with an {{SNFE}}. This is somewhat expected due to the low segment retention time used for this test. That time is apparently needed for this test to reproduce the original issue. So I'd rather not touch it. I propose to ignore that exception and retry a couple of times until failing the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3294) Read-only live FileStore implementation
[ https://issues.apache.org/jira/browse/OAK-3294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713049#comment-14713049 ] Michael Dürig commented on OAK-3294: Looks good AFAICS. I don't like all the {{if (readonly)}} checks in the {{FileStore}} constructor though, but I don't have a better solution ATM either. To get rid of them we would probably need a quite deep refactoring. Regarding those {{closeQuietly}} calls, I'd prefer something like {{closeAndLogOnFail}}. Read-only live FileStore implementation --- Key: OAK-3294 URL: https://issues.apache.org/jira/browse/OAK-3294 Project: Jackrabbit Oak Issue Type: Improvement Components: segmentmk Reporter: Alex Parvulescu Assignee: Alex Parvulescu Priority: Minor Attachments: OAK-3294.patch Having a read-only FileStore able to work on a running (live) FileStore would open the door for some interesting data collection and debugging tools that no longer need the repository to be shut down. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
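The {{closeAndLogOnFail}} idea could look like this. The name is taken from the comment above; this is an illustrative sketch, not an existing Oak utility:

```java
import java.io.Closeable;
import java.io.IOException;

class CloseUtils {
    // Unlike closeQuietly, a failure to close is reported rather than
    // silently swallowed.
    static void closeAndLogOnFail(Closeable closeable) {
        if (closeable == null) {
            return;
        }
        try {
            closeable.close();
        } catch (IOException e) {
            // A real implementation would use the project's logger;
            // System.err keeps the sketch self-contained.
            System.err.println("Failed to close " + closeable + ": " + e.getMessage());
        }
    }
}
```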
[jira] [Updated] (OAK-2679) Query engine: cache execution plans
[ https://issues.apache.org/jira/browse/OAK-2679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Richard updated OAK-2679: -- Attachment: (was: 0001-OAK-2679-Reduce-execution-plan-overhead.patch) Query engine: cache execution plans --- Key: OAK-2679 URL: https://issues.apache.org/jira/browse/OAK-2679 Project: Jackrabbit Oak Issue Type: Improvement Components: core, query Reporter: Thomas Mueller Assignee: Thomas Mueller Labels: performance Fix For: 1.3.5 Attachments: OAK-2679.patch, executionplancache.patch If there are many indexes, preparing a query can take a long time, in relation to executing the query. The query execution plans can be cached. The cache should be invalidated if there are new indexes, or indexes are changed; a simple solution might be to use a timeout, and / or a manual cache clean via JMX or so. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3300) Include parameter descriptions in test output when running parameterised tests
[ https://issues.apache.org/jira/browse/OAK-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Munteanu updated OAK-3300: - Attachment: OAK-3300.png Include parameter descriptions in test output when running parameterised tests -- Key: OAK-3300 URL: https://issues.apache.org/jira/browse/OAK-3300 Project: Jackrabbit Oak Issue Type: Improvement Reporter: Robert Munteanu Priority: Minor Fix For: 1.4 Attachments: 0001-OAK-3300-Include-parameter-descriptions-in-test-outp.patch, OAK-3300.png JUnit 4.11 or newer allows describing parameters which makes it easier to identify which fixture is running when not all tests pass. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3156) Lucene suggestions index definition can't be restricted to a specific type of node
[ https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713077#comment-14713077 ] Vikas Saurabh commented on OAK-3156: Now that OAK-3230 has been resolved, [~chetanm], [~teofili], can one of you please review the attached patch? Lucene suggestions index definition can't be restricted to a specific type of node -- Key: OAK-3156 URL: https://issues.apache.org/jira/browse/OAK-3156 Project: Jackrabbit Oak Issue Type: Bug Components: lucene Reporter: Vikas Saurabh Assignee: Tommaso Teofili Attachments: LuceneIndexSuggestionTest.java, OAK-3156.patch While performing a suggestor query like
{code}
SELECT [rep:suggest()] as suggestion FROM [nt:unstructured] WHERE suggest('foo')
{code}
the suggestor does not provide any results. In the current implementation, [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions] in Oak work only for index definitions on the {{nt:base}} nodetype. So, an index definition like:
{code:xml}
<lucene-suggest jcr:primaryType="oak:QueryIndexDefinition"
    async="async"
    compatVersion="{Long}2"
    type="lucene">
  <indexRules jcr:primaryType="nt:unstructured">
    <nt:base jcr:primaryType="nt:unstructured">
      <properties jcr:primaryType="nt:unstructured">
        <description jcr:primaryType="nt:unstructured"
            analyzed="{Boolean}true"
            name="description"
            propertyIndex="{Boolean}true"
            useInSuggest="{Boolean}true"/>
      </properties>
    </nt:base>
  </indexRules>
</lucene-suggest>
{code}
works, but if we change the nodetype to {{nt:unstructured}} like:
{code:xml}
<lucene-suggest jcr:primaryType="oak:QueryIndexDefinition"
    async="async"
    compatVersion="{Long}2"
    type="lucene">
  <indexRules jcr:primaryType="nt:unstructured">
    <nt:unstructured jcr:primaryType="nt:unstructured">
      <properties jcr:primaryType="nt:unstructured">
        <description jcr:primaryType="nt:unstructured"
            analyzed="{Boolean}true"
            name="description"
            propertyIndex="{Boolean}true"
            useInSuggest="{Boolean}true"/>
      </properties>
    </nt:unstructured>
  </indexRules>
</lucene-suggest>
{code}
it won't work.
The issue is that the suggestor implementation essentially passes a pseudo row with {{path="/"}}:
{code:title=LucenePropertyIndex.java}
private boolean loadDocs() {
    ...
    queue.add(new LuceneResultRow(suggestedWords));
    ...
{code}
and
{code:title=LucenePropertyIndex.java}
LuceneResultRow(Iterable<String> suggestWords) {
    this.path = "/";
    this.score = 1.0d;
    this.suggestWords = suggestWords;
}
{code}
Because the path is set to "/", {{SelectorImpl}} later filters out the result, as {{rep:root}} (the primary type of "/") isn't an {{nt:unstructured}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3288) clarify DocumentStore contract with respect to number formats
[ https://issues.apache.org/jira/browse/OAK-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713401#comment-14713401 ] Stefan Egli commented on OAK-3288: -- PS: until OAK-3288 is there I had to do an ugly 'instanceof' in ClusterViewDocument (http://svn.apache.org/r1697926) - I'll remove that once this issue is fixed here. clarify DocumentStore contract with respect to number formats - Key: OAK-3288 URL: https://issues.apache.org/jira/browse/OAK-3288 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk, rdbmk Affects Versions: 1.2.3, 1.3.4, 1.0.19 Reporter: Julian Reschke Assignee: Julian Reschke Labels: resilience Fix For: 1.3.6 Attachments: OAK-3288.diff The DS API allows setting properties as java.lang.Integer, but implementations vary in whether they can round-trip Integers; some do, some convert to Long. The former is observed for MongoMK (which uses BSON internally), the latter is seen in RDBMK (which uses JSON). We should
- clarify that integers can be set, but they will come back as longs, and
- modify existing implementations to always return longs, so bugs surface early
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
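The proposed contract boils down to widening narrow integer types on write so that every reader gets a Long back. An illustrative sketch (not Oak's actual implementation):

```java
// Sketch: numbers narrower than Long are widened before storage, so
// round-trip differences between backends surface early instead of
// depending on whether the store uses BSON or JSON internally.
class NumberNormalizer {
    static Object normalize(Object value) {
        if (value instanceof Integer || value instanceof Short || value instanceof Byte) {
            return Long.valueOf(((Number) value).longValue());
        }
        return value; // Longs, Strings, etc. pass through unchanged
    }
}
```

A store applying this on every write behaves identically whether its wire format preserves Integer or not.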
[jira] [Commented] (OAK-3300) Include parameter descriptions in test output when running parameterised tests
[ https://issues.apache.org/jira/browse/OAK-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713489#comment-14713489 ] Robert Munteanu commented on OAK-3300: -- Attached [trivial patch|^0001-OAK-3300-Include-parameter-descriptions-in-test-outp.patch] and also [a screenshot|^OAK-3300.png]. Include parameter descriptions in test output when running parameterised tests -- Key: OAK-3300 URL: https://issues.apache.org/jira/browse/OAK-3300 Project: Jackrabbit Oak Issue Type: Improvement Reporter: Robert Munteanu Priority: Minor Fix For: 1.4 Attachments: 0001-OAK-3300-Include-parameter-descriptions-in-test-outp.patch, OAK-3300.png JUnit 4.11 or newer allows describing parameters which makes it easier to identify which fixture is running when not all tests pass. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2881) ConsistencyChecker#checkConsistency can't cope with inconsistent journal
[ https://issues.apache.org/jira/browse/OAK-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-2881: --- Fix Version/s: (was: 1.3.5) 1.3.6 ConsistencyChecker#checkConsistency can't cope with inconsistent journal Key: OAK-2881 URL: https://issues.apache.org/jira/browse/OAK-2881 Project: Jackrabbit Oak Issue Type: Bug Components: run Reporter: Michael Dürig Assignee: Michael Dürig Labels: resilience, tooling Fix For: 1.3.6 When running the consistency checker against a repository with a corrupt journal, it fails with an {{IAE}} instead of trying to skip over invalid revision identifiers:
{noformat}
Exception in thread "main" java.lang.IllegalArgumentException: Bad record identifier: foobar
    at org.apache.jackrabbit.oak.plugins.segment.RecordId.fromString(RecordId.java:57)
    at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:227)
    at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:178)
    at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:156)
    at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:166)
    at org.apache.jackrabbit.oak.plugins.segment.file.FileStore$ReadOnlyStore.<init>(FileStore.java:805)
    at org.apache.jackrabbit.oak.plugins.segment.file.tooling.ConsistencyChecker.<init>(ConsistencyChecker.java:108)
    at org.apache.jackrabbit.oak.plugins.segment.file.tooling.ConsistencyChecker.checkConsistency(ConsistencyChecker.java:70)
    at org.apache.jackrabbit.oak.run.Main.check(Main.java:701)
    at org.apache.jackrabbit.oak.run.Main.main(Main.java:158)
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-3300) Include parameter descriptions in test output when running parameterised tests
Robert Munteanu created OAK-3300: Summary: Include parameter descriptions in test output when running parameterised tests Key: OAK-3300 URL: https://issues.apache.org/jira/browse/OAK-3300 Project: Jackrabbit Oak Issue Type: Improvement Reporter: Robert Munteanu Priority: Minor Fix For: 1.4 JUnit 4.11 or newer allows describing parameters which makes it easier to identify which fixture is running when not all tests pass. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
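The JUnit 4.11+ feature referenced above is the {{name}} attribute of {{@Parameterized.Parameters}}: placeholders like {{{0}}} and {{{index}}} embed the fixture description in each test's display name. A minimal sketch (class and fixture strings are invented for illustration):

```java
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// With name = "{0}", test names read e.g. "fixtureIsNamed[SEGMENT_MK]"
// instead of the opaque default "fixtureIsNamed[0]".
@RunWith(Parameterized.class)
public class FixtureNameTest {

    @Parameters(name = "{0}")
    public static Collection<Object[]> fixtures() {
        return Arrays.asList(new Object[][] { { "SEGMENT_MK" }, { "DOCUMENT_RDB" } });
    }

    private final String fixture;

    public FixtureNameTest(String fixture) {
        this.fixture = fixture;
    }

    @Test
    public void fixtureIsNamed() {
        // The fixture string now also appears in the test's display name.
        org.junit.Assert.assertNotNull(fixture);
    }
}
```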
[jira] [Updated] (OAK-3300) Include parameter descriptions in test output when running parameterised tests
[ https://issues.apache.org/jira/browse/OAK-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Munteanu updated OAK-3300: - Attachment: 0001-OAK-3300-Include-parameter-descriptions-in-test-outp.patch Include parameter descriptions in test output when running parameterised tests -- Key: OAK-3300 URL: https://issues.apache.org/jira/browse/OAK-3300 Project: Jackrabbit Oak Issue Type: Improvement Reporter: Robert Munteanu Priority: Minor Fix For: 1.4 Attachments: 0001-OAK-3300-Include-parameter-descriptions-in-test-outp.patch JUnit 4.11 or newer allows describing parameters which makes it easier to identify which fixture is running when not all tests pass. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3235) Deadlock when closing a concurrently used FileStore
[ https://issues.apache.org/jira/browse/OAK-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713628#comment-14713628 ] Michael Dürig commented on OAK-3235: Unfortunately the fix for the deadlock seems to introduce a race condition causing an {{SNFE}} under some circumstances. {{org.apache.jackrabbit.oak.plugins.segment.file.SegmentReferenceLimitTestIT}} works without the fix but fails with an {{SNFE}} every 2nd time or so with the fix:
{noformat}
java.util.concurrent.ExecutionException: org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment b114914f-1d93-4cfa-acff-197a9d0e2071 not found
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:188)
    at org.apache.jackrabbit.oak.plugins.segment.file.SegmentReferenceLimitTestIT.corruption(SegmentReferenceLimitTestIT.java:102)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:116)
    at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234)
    at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment b114914f-1d93-4cfa-acff-197a9d0e2071 not found
    at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:944)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.readSegment(SegmentTracker.java:211)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:149)
    at org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:82)
    at org.apache.jackrabbit.oak.plugins.segment.ListRecord.getEntry(ListRecord.java:62)
    at org.apache.jackrabbit.oak.plugins.segment.Segment.readPropsV11(Segment.java:534)
    at org.apache.jackrabbit.oak.plugins.segment.Segment.loadTemplate(Segment.java:507)
    at org.apache.jackrabbit.oak.plugins.segment.Segment.readTemplate(Segment.java:460)
    at org.apache.jackrabbit.oak.plugins.segment.Segment.readTemplate(Segment.java:454)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplate(SegmentNodeState.java:79)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getProperty(SegmentNodeState.java:123)
    at org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.getProperty(MemoryNodeBuilder.java:480)
    at org.apache.jackrabbit.oak.spi.state.AbstractRebaseDiff.propertyAdded(AbstractRebaseDiff.java:171)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareProperties(SegmentNodeState.java:592)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:491)
{noformat}
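Schematically, the race exists because the segment becomes unreachable in the writer before it is persisted in the store. A toy sketch, with heavily simplified stand-in types (not Oak's real classes), of one way to keep the segment readable throughout:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model: if a flush publishes a fresh write buffer and releases its
// monitor before the old segment reaches the store, a concurrent reader
// finds the segment in neither place. Staging the segment in a "pending"
// map until the store write completes closes that window.
class MiniSegmentStore {
    private final Map<String, byte[]> persisted = new ConcurrentHashMap<>();
    private final Map<String, byte[]> pending = new ConcurrentHashMap<>();

    // Writer side: the segment stays reachable (pending, then persisted)
    // for the whole duration of the flush.
    void flushSegment(String id, byte[] data) {
        pending.put(id, data);   // readers can already see it here
        persisted.put(id, data); // the (possibly slow) store write
        pending.remove(id);
    }

    // Reader side: check pending first. If the segment was moved out of
    // pending in the meantime, it is guaranteed to be in persisted already.
    byte[] readSegment(String id) {
        byte[] data = pending.get(id);
        if (data == null) {
            data = persisted.get(id);
        }
        if (data == null) {
            throw new IllegalStateException("Segment " + id + " not found");
        }
        return data;
    }
}
```

Checking pending before persisted matters: the writer persists before removing from pending, so a reader that misses pending can never also miss a segment whose flush had already started before the read.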
[jira] [Commented] (OAK-3288) clarify DocumentStore contract with respect to number formats
[ https://issues.apache.org/jira/browse/OAK-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713635#comment-14713635 ] Julian Reschke commented on OAK-3288: - [~egli]? clarify DocumentStore contract with respect to number formats - Key: OAK-3288 URL: https://issues.apache.org/jira/browse/OAK-3288 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk, rdbmk Affects Versions: 1.2.3, 1.3.4, 1.0.19 Reporter: Julian Reschke Assignee: Julian Reschke Labels: resilience Fix For: 1.3.6 Attachments: OAK-3288.diff The DS API allows setting properties as java.lang.Integer, but implementations vary in whether they can round-trip Integers; some do, some convert to Long. The former is observed for MongoMK (which uses BSON internally), the latter is seen in RDBMK (which uses JSON). We should
- clarify that integers can be set, but they will come back as longs, and
- modify existing implementations to always return longs, so bugs surface early
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3235) Deadlock when closing a concurrently used FileStore
[ https://issues.apache.org/jira/browse/OAK-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-3235: --- Labels: resilience (was: ) Deadlock when closing a concurrently used FileStore --- Key: OAK-3235 URL: https://issues.apache.org/jira/browse/OAK-3235 Project: Jackrabbit Oak Issue Type: Bug Components: segmentmk Affects Versions: 1.3.3 Reporter: Francesco Mari Assignee: Michael Dürig Priority: Critical Labels: resilience Fix For: 1.3.6 Attachments: OAK-3235-01.patch A deadlock was detected while stopping the {{SegmentCompactionIT}} using the exposed MBean.
{noformat}
Found one Java-level deadlock:
==============================
"pool-1-thread-23":
  waiting to lock monitor 0x7fa8cf1f0488 (object 0x0007a0081e48, a org.apache.jackrabbit.oak.plugins.segment.file.FileStore),
  which is held by "main"
"main":
  waiting to lock monitor 0x7fa8cc015ff8 (object 0x0007a011f750, a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter),
  which is held by "pool-1-thread-23"

Java stack information for the threads listed above:
===================================================
"pool-1-thread-23":
    at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.writeSegment(FileStore.java:948)
    - waiting to lock <0x0007a0081e48> (a org.apache.jackrabbit.oak.plugins.segment.file.FileStore)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.flush(SegmentWriter.java:228)
    - locked <0x0007a011f750> (a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.prepare(SegmentWriter.java:329)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeListBucket(SegmentWriter.java:447)
    - locked <0x0007a011f750> (a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeList(SegmentWriter.java:698)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1190)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
    at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
    at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
    at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
    at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
    at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
    at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
    at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
    at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1154)
    at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:100)
    at
{noformat}
[jira] [Created] (OAK-3301) AbstractRepositoryUpgrade leaks Repository instances
Robert Munteanu created OAK-3301: Summary: AbstractRepositoryUpgrade leaks Repository instances Key: OAK-3301 URL: https://issues.apache.org/jira/browse/OAK-3301 Project: Jackrabbit Oak Issue Type: Bug Components: upgrade Reporter: Robert Munteanu Fix For: 1.4 AbstractRepositoryUpgradeTest creates (JCR) repository instances but does not close them (it actually creates a repository each time a session is requested). This leads to out-of-memory errors when the process limit is hit on a machine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
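The cleanup this issue calls for can be sketched as follows, with a stand-in {{Disposable}} type instead of a real JCR {{Repository}}:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: track every repository instance the test fixture creates and
// dispose of all of them in tearDown, rather than leaking one per
// requested session.
class RepositoryTracker {
    interface Disposable {
        void shutdown();
    }

    private final List<Disposable> created = new ArrayList<>();

    // Wrap repository creation so nothing escapes untracked.
    <T extends Disposable> T track(T repository) {
        created.add(repository);
        return repository;
    }

    // Call from an @After/@AfterClass method so instances never pile up
    // until the process hits its resource limits.
    void shutdownAll() {
        for (Disposable repository : created) {
            repository.shutdown();
        }
        created.clear();
    }
}
```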
[jira] [Updated] (OAK-3300) Include parameter descriptions in test output when running parameterised tests
[ https://issues.apache.org/jira/browse/OAK-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Munteanu updated OAK-3300: - Flags: Patch Include parameter descriptions in test output when running parameterised tests -- Key: OAK-3300 URL: https://issues.apache.org/jira/browse/OAK-3300 Project: Jackrabbit Oak Issue Type: Improvement Reporter: Robert Munteanu Priority: Minor Fix For: 1.4 Attachments: 0001-OAK-3300-Include-parameter-descriptions-in-test-outp.patch, OAK-3300.png JUnit 4.11 or newer allows describing parameters which makes it easier to identify which fixture is running when not all tests pass. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3288) clarify DocumentStore contract with respect to number formats
[ https://issues.apache.org/jira/browse/OAK-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713657#comment-14713657 ] Stefan Egli commented on OAK-3288: -- Could do indeed, yes - let's see what the outcome of this ticket is. clarify DocumentStore contract with respect to number formats - Key: OAK-3288 URL: https://issues.apache.org/jira/browse/OAK-3288 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk, rdbmk Affects Versions: 1.2.3, 1.3.4, 1.0.19 Reporter: Julian Reschke Assignee: Julian Reschke Labels: resilience Fix For: 1.3.6 Attachments: OAK-3288.diff The DS API allows setting properties as java.lang.Integer, but implementations vary in whether they can round-trip Integers; some do, some convert to Long. The former is observed for MongoMK (which uses BSON internally), the latter is seen in RDBMK (which uses JSON). We should
- clarify that integers can be set, but they will come back as longs, and
- modify existing implementations to always return longs, so bugs surface early
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3292) DocumentDiscoveryLiteServiceTest failures on travis and jenkins
[ https://issues.apache.org/jira/browse/OAK-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713619#comment-14713619 ] Stefan Egli commented on OAK-3292: -- note that commit http://svn.apache.org/r1697952 belongs to OAK-3267, not OAK-3292 .. DocumentDiscoveryLiteServiceTest failures on travis and jenkins --- Key: OAK-3292 URL: https://issues.apache.org/jira/browse/OAK-3292 Project: Jackrabbit Oak Issue Type: Test Affects Versions: 1.3.4 Reporter: Stefan Egli Assignee: Stefan Egli Fix For: 1.3.5 Travis reported test failure of DocumentDiscoveryLiteServiceTest: * https://travis-ci.org/apache/jackrabbit-oak/builds/77114814
{code}
Failed tests: testLargeStartStopFiesta(org.apache.jackrabbit.oak.plugins.document.DocumentDiscoveryLiteServiceTest): expectation not fulfilled within 6ms: checkFiestaState failed for SimplifiedInstance[cid=3], with instances: [SimplifiedInstance[cid=3], SimplifiedInstance[cid=4]], and inactiveIds: [1, 2], fulfillment result: inactiveIds dont match, expected: 1,2, got clusterView: {seq:6,final:true,id:4a8671de-f472-482c-b40b-29561d7b9836,me:3,active:[3,4],deactivating:[],inactive:[1]}
{code}
* same earlier: https://travis-ci.org/apache/jackrabbit-oak/builds/77004461 and also on jenkins: * https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/352/jdk=jdk1.8.0_11,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.plugins.document/DocumentDiscoveryLiteServiceTest/testLargeStartStopFiesta/ * https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/352/jdk=jdk1.8.0_11,label=Ubuntu,nsfixtures=SEGMENT_MK,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.plugins.document/DocumentDiscoveryLiteServiceTest/testLargeStartStopFiesta/
{code}
expectation not fulfilled within 6ms: checkFiestaState failed for SimplifiedInstance[cid=1], with instances: [SimplifiedInstance[cid=1], SimplifiedInstance[cid=2]], and inactiveIds: [2], fulfillment result: inactiveIds dont match, expected: 2, got clusterView: {seq:10,final:true,id:c2ef1bb6-b5c6-4a1a-bfa4-f06be1554bfd,me:1,active:[1,2],deactivating:[],inactive:[]}
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-3267) Add discovery-lite descriptor for segmentNodeStore
[ https://issues.apache.org/jira/browse/OAK-3267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Egli resolved OAK-3267. -- Resolution: Fixed done * svn commit msg contains wrong ticket nr - commit is: http://svn.apache.org/r1697952 Add discovery-lite descriptor for segmentNodeStore -- Key: OAK-3267 URL: https://issues.apache.org/jira/browse/OAK-3267 Project: Jackrabbit Oak Issue Type: Task Affects Versions: 1.3.4 Reporter: Stefan Egli Assignee: Stefan Egli Fix For: 1.3.5 With OAK-2844 the DocumentNodeStore now exposes a repository descriptor 'oak.discoverylite.clusterview' - this should also be done for SegmentNodeStore - although that one will be a trivial static thingy - but upper layers should not have to worry about whether they are on document or segment. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3235) Deadlock when closing a concurrently used FileStore
[ https://issues.apache.org/jira/browse/OAK-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713634#comment-14713634 ] Michael Dürig commented on OAK-3235: Reverted http://svn.apache.org/r1697368 at http://svn.apache.org/r1697955 Deadlock when closing a concurrently used FileStore --- Key: OAK-3235 URL: https://issues.apache.org/jira/browse/OAK-3235 Project: Jackrabbit Oak Issue Type: Bug Components: segmentmk Affects Versions: 1.3.3 Reporter: Francesco Mari Assignee: Michael Dürig Priority: Critical Fix For: 1.3.5 Attachments: OAK-3235-01.patch A deadlock was detected while stopping the {{SegmentCompactionIT}} using the exposed MBean.
{noformat}
Found one Java-level deadlock:
=============================
pool-1-thread-23: waiting to lock monitor 0x7fa8cf1f0488 (object 0x0007a0081e48, a org.apache.jackrabbit.oak.plugins.segment.file.FileStore), which is held by main
main: waiting to lock monitor 0x7fa8cc015ff8 (object 0x0007a011f750, a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter), which is held by pool-1-thread-23

Java stack information for the threads listed above:
===================================================
pool-1-thread-23:
  at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.writeSegment(FileStore.java:948)
  - waiting to lock 0x0007a0081e48 (a org.apache.jackrabbit.oak.plugins.segment.file.FileStore)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.flush(SegmentWriter.java:228)
  - locked 0x0007a011f750 (a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.prepare(SegmentWriter.java:329)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeListBucket(SegmentWriter.java:447)
  - locked 0x0007a011f750 (a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeList(SegmentWriter.java:698)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1190)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1154)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:100)
  at
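The cycle in the dump above is the classic ABBA pattern: the flush path holds the `SegmentWriter` monitor and waits for the `FileStore` monitor, while the close path holds the store and waits for the writer. A minimal illustration of the conventional fix (hypothetical class, not Oak's code): pick one global order and acquire both monitors in that order on every path, so the cycle cannot form.

```java
public class LockOrdering {

    private final Object storeLock = new Object();
    private final Object writerLock = new Object();
    private long ops;

    // Both paths take storeLock first, writerLock second. With a single
    // global acquisition order, no thread can hold one monitor while
    // waiting for the other in the opposite order.
    public void flush() {
        synchronized (storeLock) {
            synchronized (writerLock) {
                ops++; // stand-in for "write the pending segment"
            }
        }
    }

    public void close() {
        synchronized (storeLock) {
            synchronized (writerLock) {
                ops++; // stand-in for "flush and release resources"
            }
        }
    }

    public long ops() {
        synchronized (storeLock) {
            return ops;
        }
    }
}
```

Running `flush()` and `close()` concurrently from two threads then completes without deadlock, which is exactly what the reported trace shows failing when the two paths disagree on lock order.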
[jira] [Commented] (OAK-3288) clarify DocumentStore contract with respect to number formats
[ https://issues.apache.org/jira/browse/OAK-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713633#comment-14713633 ] Julian Reschke commented on OAK-3288: - You could treat them both as java.lang.Number clarify DocumentStore contract with respect to number formats - Key: OAK-3288 URL: https://issues.apache.org/jira/browse/OAK-3288 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk, rdbmk Affects Versions: 1.2.3, 1.3.4, 1.0.19 Reporter: Julian Reschke Assignee: Julian Reschke Labels: resilience Fix For: 1.3.6 Attachments: OAK-3288.diff The DS API allows setting properties as java.lang.Integer, but implementations vary in whether they can roundtrip Integers; some do, some convert to Long. The former is observed for MongoMK (which uses BSON internally), the latter is seen in RDBMK (which uses JSON). We should - clarify that integers can be set, but they will come back as longs, and - modify existing implementations to always return longs, so bugs surface early -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3235) Deadlock when closing a concurrently used FileStore
[ https://issues.apache.org/jira/browse/OAK-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-3235: --- Fix Version/s: (was: 1.3.5) 1.3.6 Deadlock when closing a concurrently used FileStore --- Key: OAK-3235 URL: https://issues.apache.org/jira/browse/OAK-3235 Project: Jackrabbit Oak Issue Type: Bug Components: segmentmk Affects Versions: 1.3.3 Reporter: Francesco Mari Assignee: Michael Dürig Priority: Critical Fix For: 1.3.6 Attachments: OAK-3235-01.patch A deadlock was detected while stopping the {{SegmentCompactionIT}} using the exposed MBean.
{noformat}
Found one Java-level deadlock:
=============================
pool-1-thread-23: waiting to lock monitor 0x7fa8cf1f0488 (object 0x0007a0081e48, a org.apache.jackrabbit.oak.plugins.segment.file.FileStore), which is held by main
main: waiting to lock monitor 0x7fa8cc015ff8 (object 0x0007a011f750, a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter), which is held by pool-1-thread-23

Java stack information for the threads listed above:
===================================================
pool-1-thread-23:
  at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.writeSegment(FileStore.java:948)
  - waiting to lock 0x0007a0081e48 (a org.apache.jackrabbit.oak.plugins.segment.file.FileStore)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.flush(SegmentWriter.java:228)
  - locked 0x0007a011f750 (a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.prepare(SegmentWriter.java:329)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeListBucket(SegmentWriter.java:447)
  - locked 0x0007a011f750 (a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeList(SegmentWriter.java:698)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1190)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1135)
  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:400)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1126)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1154)
  at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:100)
  at
[jira] [Commented] (OAK-3288) clarify DocumentStore contract with respect to number formats
[ https://issues.apache.org/jira/browse/OAK-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713648#comment-14713648 ] Stefan Egli commented on OAK-3288: -- are you referring to https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/ClusterViewDocument.java#L213 ? so you're saying that part might be wrong? clarify DocumentStore contract with respect to number formats - Key: OAK-3288 URL: https://issues.apache.org/jira/browse/OAK-3288 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk, rdbmk Affects Versions: 1.2.3, 1.3.4, 1.0.19 Reporter: Julian Reschke Assignee: Julian Reschke Labels: resilience Fix For: 1.3.6 Attachments: OAK-3288.diff The DS API allows setting properties as java.lang.Integer, but implementations vary in whether they can roundtrip Integers; some do, some convert to Long. The former is observed for MongoMK (which uses BSON internally), the latter is seen in RDBMK (which uses JSON). We should - clarify that integers can be set, but they will come back as longs, and - modify existing implementations to always return longs, so bugs surface early -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3301) AbstractRepositoryUpgrade leaks Repository instances
[ https://issues.apache.org/jira/browse/OAK-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Munteanu updated OAK-3301: - Attachment: 0001-OAK-3301-AbstractRepositoryUpgrade-leaks-Repository-.patch Attached patch AbstractRepositoryUpgrade leaks Repository instances Key: OAK-3301 URL: https://issues.apache.org/jira/browse/OAK-3301 Project: Jackrabbit Oak Issue Type: Bug Components: upgrade Reporter: Robert Munteanu Fix For: 1.4 Attachments: 0001-OAK-3301-AbstractRepositoryUpgrade-leaks-Repository-.patch AbstractRepositoryUpgradeTest creates (JCR) repository instances but does not close them (actually, it creates a repository each time a session is requested). This leads to out-of-memory errors when the process limit is hit on a machine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
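A sketch of the fix direction (all names here are hypothetical stand-ins, not the actual AbstractRepositoryUpgradeTest code): cache the repository instead of building a fresh one on every request, and dispose every created instance exactly once in a tearDown step.

```java
import java.util.ArrayList;
import java.util.List;

public class UpgradeTestBase {

    // Track every created repository so tearDown can dispose them all.
    private final List<AutoCloseable> repositories = new ArrayList<>();
    private AutoCloseable cached;

    // Stand-in for the repository factory; the real code builds a JCR
    // Repository from the upgraded NodeStore.
    AutoCloseable createRepository() {
        AutoCloseable repo = () -> { /* shut down the underlying store */ };
        repositories.add(repo);
        return repo;
    }

    // Before the fix: a new repository per call leaks every previous
    // instance. After: create lazily once and reuse it.
    AutoCloseable getRepository() {
        if (cached == null) {
            cached = createRepository();
        }
        return cached;
    }

    void tearDown() {
        for (AutoCloseable repo : repositories) {
            try {
                repo.close();
            } catch (Exception ignored) {
                // a sketch: a real test base would report this failure
            }
        }
        repositories.clear();
        cached = null;
    }

    int instanceCount() {
        return repositories.size();
    }
}
```

With this shape, requesting a session many times still creates a single repository, and the process no longer accumulates unclosed instances until the memory or handle limit is hit.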
[jira] [Updated] (OAK-3301) AbstractRepositoryUpgrade leaks Repository instances
[ https://issues.apache.org/jira/browse/OAK-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Munteanu updated OAK-3301: - Flags: Patch AbstractRepositoryUpgrade leaks Repository instances Key: OAK-3301 URL: https://issues.apache.org/jira/browse/OAK-3301 Project: Jackrabbit Oak Issue Type: Bug Components: upgrade Reporter: Robert Munteanu Fix For: 1.4 Attachments: 0001-OAK-3301-AbstractRepositoryUpgrade-leaks-Repository-.patch AbstractRepositoryUpgradeTest creates (JCR) repository instances but does not close them (actually, it creates a repository each time a session is requested). This leads to out-of-memory errors when the process limit is hit on a machine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3288) clarify DocumentStore contract with respect to number formats
[ https://issues.apache.org/jira/browse/OAK-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14714179#comment-14714179 ] Julian Reschke commented on OAK-3288: - [~mreutegg] so if we want to restrict the types to boolean/long/String then yes [~egli] will have to change his code; can you confirm? clarify DocumentStore contract with respect to number formats - Key: OAK-3288 URL: https://issues.apache.org/jira/browse/OAK-3288 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk, rdbmk Affects Versions: 1.2.3, 1.3.4, 1.0.19 Reporter: Julian Reschke Assignee: Julian Reschke Labels: resilience Fix For: 1.3.6 Attachments: OAK-3288.diff The DS API allows setting properties as java.lang.Integer, but implementations vary in whether they can roundtrip Integers; some do, some convert to Long. The former is observed for MongoMK (which uses BSON internally), the latter is see in RDBMK (which uses JSON). We should - clarify that integers can be set, but they will come back as longs, and - modify existing implementations to always return longs, so bugs surface early -- This message was sent by Atlassian JIRA (v6.3.4#6332)