[jira] [Created] (OAK-8381) Build Jackrabbit Oak #2193 failed
Hudson created OAK-8381:
----------------------------

             Summary: Build Jackrabbit Oak #2193 failed
                 Key: OAK-8381
                 URL: https://issues.apache.org/jira/browse/OAK-8381
             Project: Jackrabbit Oak
          Issue Type: Bug
          Components: continuous integration
            Reporter: Hudson

No description is provided

The build Jackrabbit Oak #2193 has failed.
First failed run: [Jackrabbit Oak #2193|https://builds.apache.org/job/Jackrabbit%20Oak/2193/] [console log|https://builds.apache.org/job/Jackrabbit%20Oak/2193/console]

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (OAK-6760) Convert oak-blob-cloud to OSGi R6 annotations
[ https://issues.apache.org/jira/browse/OAK-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-6760:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Convert oak-blob-cloud to OSGi R6 annotations
> ---------------------------------------------
>
> Key: OAK-6760
> URL: https://issues.apache.org/jira/browse/OAK-6760
> Project: Jackrabbit Oak
> Issue Type: Technical task
> Components: blob-cloud
> Reporter: Robert Munteanu
> Priority: Major
> Fix For: 1.16.0
>
[jira] [Updated] (OAK-5463) Implement optimized MultiBinaryPropertyState.size(int)
[ https://issues.apache.org/jira/browse/OAK-5463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-5463:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Implement optimized MultiBinaryPropertyState.size(int)
> ------------------------------------------------------
>
> Key: OAK-5463
> URL: https://issues.apache.org/jira/browse/OAK-5463
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: core
> Reporter: Marcel Reutegger
> Priority: Minor
> Fix For: 1.16.0
>
> {{MultiBinaryPropertyState}} currently does not have a {{size(int)}}
> implementation, which means the base class will convert the {{Blob}} into a
> String to get the size. This is inefficient and should have an optimized
> implementation in {{MultiBinaryPropertyState}}.
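The optimization described above amounts to asking the blob for its byte count instead of stringifying it. A minimal sketch — the {{Blob}} interface below is a simplified stand-in for Oak's {{org.apache.jackrabbit.oak.api.Blob}}, and the class name is hypothetical, not Oak's actual code:

```java
import java.util.List;

public class MultiBinarySizeSketch {

    // Simplified stand-in for Oak's Blob, which exposes the byte count
    // without materializing the binary content.
    public interface Blob {
        long length();
    }

    private final List<Blob> blobs;

    public MultiBinarySizeSketch(List<Blob> blobs) {
        this.blobs = blobs;
    }

    // Optimized size(int): delegate to Blob.length() rather than converting
    // the binary to a String, which would stream the whole value into memory.
    public long size(int index) {
        return blobs.get(index).length();
    }
}
```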
[jira] [Updated] (OAK-6619) Async indexer thread may get stuck in CopyOnWriteDirectory close method
[ https://issues.apache.org/jira/browse/OAK-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-6619:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Async indexer thread may get stuck in CopyOnWriteDirectory close method
> -----------------------------------------------------------------------
>
> Key: OAK-6619
> URL: https://issues.apache.org/jira/browse/OAK-6619
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: lucene
> Reporter: Chetan Mehrotra
> Assignee: Chetan Mehrotra
> Priority: Critical
> Fix For: 1.16.0
>
> Attachments: status-threaddump-Sep-5.txt
>
> With copy-on-write mode enabled, the async index thread is at times seen to
> remain stuck in the CopyOnWriteDirectory#close method:
> {noformat}
> "async-index-update-async" prio=5 tid=0xb9e63 nid=0x timed_waiting
>    java.lang.Thread.State: TIMED_WAITING
>     at sun.misc.Unsafe.park(Native Method)
>     - waiting to lock <0x2504cd51> (a java.util.concurrent.CountDownLatch$Sync) owned by "null" tid=0x-1
>     at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>     at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
>     at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>     at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
>     at org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory.close(CopyOnWriteDirectory.java:221)
>     at org.apache.jackrabbit.oak.plugins.index.lucene.writer.DefaultIndexWriter.updateSuggester(DefaultIndexWriter.java:177)
>     at org.apache.jackrabbit.oak.plugins.index.lucene.writer.DefaultIndexWriter.close(DefaultIndexWriter.java:121)
>     at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.closeWriter(LuceneIndexEditorContext.java:136)
>     at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:154)
>     at org.apache.jackrabbit.oak.plugins.index.IndexUpdate.leave(IndexUpdate.java:357)
>     at org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:60)
>     at org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:56)
>     at org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.updateIndex(AsyncIndexUpdate.java:727)
>     at org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.runWhenPermitted(AsyncIndexUpdate.java:572)
>     at org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.run(AsyncIndexUpdate.java:431)
>     - locked <0x3d542de5> (a org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate)
>     at org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:245)
>     at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The thread is waiting on a latch and no other thread is going to release the
> latch.
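One generic way to keep such a close from hanging forever is a bounded wait on the latch, so the caller can log and clean up instead of blocking indefinitely. This is an illustration of the pattern only, not Oak's actual fix:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class TimedCloseSketch {

    // Wait for pending copy tasks with a timeout instead of blocking forever.
    // Returns true if the latch was released in time, false if we gave up;
    // the caller can then log a warning and proceed with cleanup.
    public static boolean awaitPendingCopies(CountDownLatch pending,
                                             long timeout, TimeUnit unit) {
        try {
            return pending.await(timeout, unit);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status
            return false;
        }
    }
}
```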
[jira] [Updated] (OAK-6062) Test failure: CopyBinariesTest.validateMigration
[ https://issues.apache.org/jira/browse/OAK-6062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-6062:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Test failure: CopyBinariesTest.validateMigration
> ------------------------------------------------
>
> Key: OAK-6062
> URL: https://issues.apache.org/jira/browse/OAK-6062
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: continuous integration, documentmk
> Reporter: Hudson
> Priority: Major
> Labels: CI, flaky-test, jenkins, test-failure
> Fix For: 1.16.0
>
> Jenkins CI failure: https://builds.apache.org/view/J/job/Jackrabbit%20Oak/
> The build Jackrabbit Oak #146 has failed.
> First failed run: [Jackrabbit Oak #146|https://builds.apache.org/job/Jackrabbit%20Oak/146/]
> [console log|https://builds.apache.org/job/Jackrabbit%20Oak/146/console]
> The test failure is:
> {noformat}
> org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest
> validateMigration[Copy references, no blobstores defined, document -> segment-tar](org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest)  Time elapsed: 2.534 sec  <<< ERROR!
> javax.jcr.RepositoryException: Failed to copy content
>     at org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.prepare(CopyBinariesTest.java:183)
> Caused by: java.lang.IllegalStateException: Branch with failed reset
>     at org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.prepare(CopyBinariesTest.java:183)
> Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakOak0100: Branch reset failed
>     at org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.prepare(CopyBinariesTest.java:183)
> Caused by: org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: Empty branch cannot be reset
>     at org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.prepare(CopyBinariesTest.java:183)
> {noformat}
[jira] [Updated] (OAK-6947) Add package export versions for oak-store-spi
[ https://issues.apache.org/jira/browse/OAK-6947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-6947:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Add package export versions for oak-store-spi
> ---------------------------------------------
>
> Key: OAK-6947
> URL: https://issues.apache.org/jira/browse/OAK-6947
> Project: Jackrabbit Oak
> Issue Type: Technical task
> Components: store-spi
> Reporter: angela
> Priority: Major
> Fix For: 1.16.0
>
> Attachments: OAK-6947.patch
>
> [~mduerig], [~mreutegg], [~frm], [~stillalex], do you have any strong
> preferences wrt the packages we placed in the _oak-store-spi_ module?
> Currently we explicitly export all packages, and I think it would make sense
> to enable the baseline plugin for these packages.
> Any objection from your side?
[jira] [Updated] (OAK-6758) Convert oak-authorization-cug to OSGi R6 annotations
[ https://issues.apache.org/jira/browse/OAK-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-6758:
----------------------------------
    Fix Version/s: 1.16.0

> Convert oak-authorization-cug to OSGi R6 annotations
> ----------------------------------------------------
>
> Key: OAK-6758
> URL: https://issues.apache.org/jira/browse/OAK-6758
> Project: Jackrabbit Oak
> Issue Type: Technical task
> Components: authorization-cug
> Reporter: Robert Munteanu
> Priority: Major
> Fix For: 1.14.0, 1.16.0
>
[jira] [Updated] (OAK-6515) Decouple indexing and upload to datastore
[ https://issues.apache.org/jira/browse/OAK-6515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-6515:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Decouple indexing and upload to datastore
> -----------------------------------------
>
> Key: OAK-6515
> URL: https://issues.apache.org/jira/browse/OAK-6515
> Project: Jackrabbit Oak
> Issue Type: New Feature
> Components: indexing, lucene, query
> Reporter: Thomas Mueller
> Assignee: Thomas Mueller
> Priority: Minor
> Fix For: 1.16.0
>
> Currently the default async index delay is 5 seconds. Using a larger delay
> (e.g. 15 seconds) reduces index-related growth; however, diffing is delayed
> 15 seconds, which can reduce indexing performance.
> One option (which might require bigger changes) is to index every 5 seconds,
> and store the index every 5 seconds in the local directory, but only write to
> the datastore / nodestore every 3rd time (that is, every 15 seconds).
> So other cluster nodes will only see the index update every 15 seconds.
> The diffing is done every 5 seconds, and the local index could be used every
> 5 or every 15 seconds.
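The every-3rd-cycle idea above can be sketched as a simple counter-based policy; class and method names are illustrative only, not part of Oak:

```java
// Sketch of "write locally every cycle, persist remotely every Nth cycle".
// Each async index cycle commits to the local directory; only every Nth
// cycle also writes the index to the shared datastore/nodestore.
public class FlushPolicySketch {

    private final int remoteEvery; // e.g. 3 -> every 3rd 5-second cycle = 15s
    private int cycle;

    public FlushPolicySketch(int remoteEvery) {
        this.remoteEvery = remoteEvery;
    }

    // Called once per async index cycle; true means this cycle should also
    // persist the index to the shared store, not just the local directory.
    public boolean onCycleShouldPersistRemotely() {
        cycle++;
        return cycle % remoteEvery == 0;
    }
}
```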
[jira] [Updated] (OAK-3141) Oak should warn when too many ordered child nodes
[ https://issues.apache.org/jira/browse/OAK-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-3141:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Oak should warn when too many ordered child nodes
> -------------------------------------------------
>
> Key: OAK-3141
> URL: https://issues.apache.org/jira/browse/OAK-3141
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: core
> Affects Versions: 1.0.16
> Reporter: Jörg Hoh
> Assignee: Thomas Mueller
> Priority: Major
> Fix For: 1.16.0
>
> When working with the RDBMK we ran into situations where large documents did
> not fit into the provided db columns; there was an overflow, which caused Oak
> not to persist the change. We fixed it by increasing the size of the column.
> But it would be nice if Oak could warn if a document exceeds a certain size
> (for example 2 megabytes), because this warning indicates that on the JCR
> level there might be a problematic situation, for example:
> * an ordered node with a large list of child nodes
> * or long-standing sessions with lots of changes, which accumulate into large
> documents.
> It's certainly nice to know if there's a node/document with such a problem
> before the exception actually happens and an operation breaks.
> This message should be a warning, and should contain the JCR path of the node
> plus the current size. To avoid this message being overlooked, it would be
> good if it is written every once in a while (every 10 minutes?) while this
> condition persists.
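The rate-limited warning suggested above could look roughly like the following hypothetical helper; the threshold, interval, and message format are assumptions, and the clock is injected so the 10-minute throttle is testable:

```java
import java.util.function.LongSupplier;

public class SizeWarningSketch {

    private final long thresholdBytes;   // e.g. 2 megabytes
    private final long intervalMillis;   // e.g. 10 minutes
    private final LongSupplier clock;    // injected for testability
    private long lastWarnedAt = Long.MIN_VALUE;

    public SizeWarningSketch(long thresholdBytes, long intervalMillis, LongSupplier clock) {
        this.thresholdBytes = thresholdBytes;
        this.intervalMillis = intervalMillis;
        this.clock = clock;
    }

    // Returns the warning text to log, or null when the document is small
    // enough or the same condition was already reported too recently.
    public String check(String jcrPath, long documentSizeBytes) {
        if (documentSizeBytes <= thresholdBytes) {
            return null;
        }
        long now = clock.getAsLong();
        if (lastWarnedAt != Long.MIN_VALUE && now - lastWarnedAt < intervalMillis) {
            return null; // rate-limited: condition persists but was warned recently
        }
        lastWarnedAt = now;
        return "Document " + jcrPath + " is " + documentSizeBytes + " bytes";
    }
}
```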
[jira] [Updated] (OAK-6597) rep:excerpt not working for content indexed by aggregation in lucene
[ https://issues.apache.org/jira/browse/OAK-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-6597:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> rep:excerpt not working for content indexed by aggregation in lucene
> --------------------------------------------------------------------
>
> Key: OAK-6597
> URL: https://issues.apache.org/jira/browse/OAK-6597
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: lucene
> Affects Versions: 1.6.1, 1.7.6, 1.8.0
> Reporter: Dirk Rudolph
> Assignee: Chetan Mehrotra
> Priority: Major
> Labels: excerpt
> Fix For: 1.16.0
>
> Attachments: excerpt-with-aggregation-test.patch
>
> I noticed that properties that got indexed due to an aggregation are not
> considered for excerpts (highlighting), as they are not indexed as stored
> fields.
> See the attached patch that implements a test for excerpts in
> {{LuceneIndexAggregationTest2}}.
> It creates the following structure:
> {code}
> /content/foo [test:Page]
>   + bar (String)
>   - jcr:content [test:PageContent]
>     + bar (String)
> {code}
> where both strings (the _bar_ property at _foo_ and the _bar_ property at
> _jcr:content_) contain different text.
> Afterwards it queries for 2 terms ("tinc*" and "aliq*") that exist either in
> _/content/foo/bar_ or _/content/foo/jcr:content/bar_ but not in both. For the
> former the excerpt is properly provided; for the latter it isn't.
[jira] [Updated] (OAK-7254) Indexes with excludedPaths, or includedPaths should not be picked for queries without path
[ https://issues.apache.org/jira/browse/OAK-7254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-7254:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Indexes with excludedPaths, or includedPaths should not be picked for queries
> without path
> -----------------------------------------------------------------------------
>
> Key: OAK-7254
> URL: https://issues.apache.org/jira/browse/OAK-7254
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: lucene, query
> Reporter: Thomas Mueller
> Assignee: Thomas Mueller
> Priority: Critical
> Fix For: 1.16.0
>
> Queries that don't have a clear path restriction should not use indexes that
> have excludedPaths or includedPaths set, except in some exceptional cases (to
> be defined).
> For example, if a query doesn't have a path restriction, say:
> {noformat}
> /jcr:root//element(*, nt:base)[@status='RUNNING']
> {noformat}
> Then an index that has excludedPaths set (for example to /etc) shouldn't be
> used, at least not if a different index is available. Currently it is
> actually used in _favor_ of another index if the property "status" is
> commonly used in /etc. Because of that, the index that doesn't have
> excludedPaths has a higher cost (as it indexes the property "status" in /etc,
> and so has more entries for "status" than the index that doesn't index /etc).
> The same goes for includedPaths, in case queryPaths isn't set to the same
> value(s).
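The proposed rule (only pick a path-restricted index when the query's path restriction falls inside it) can be sketched as the following hypothetical helper; this is a simplified model of the decision, not Oak's actual cost logic:

```java
import java.util.List;

public class IndexEligibilitySketch {

    // Sketch: an index that declares includedPaths or excludedPaths should
    // only be a candidate when the query carries a path restriction that
    // falls inside includedPaths and outside excludedPaths.
    public static boolean canUseIndex(String queryPath,
                                      List<String> includedPaths,
                                      List<String> excludedPaths) {
        boolean restricted = !includedPaths.isEmpty() || !excludedPaths.isEmpty();
        if (!restricted) {
            return true; // unrestricted index: always a candidate
        }
        if (queryPath == null) {
            return false; // query without a path restriction: skip restricted index
        }
        for (String excluded : excludedPaths) {
            if (isAncestorOrSelf(excluded, queryPath)) {
                return false; // query runs inside an excluded subtree
            }
        }
        if (includedPaths.isEmpty()) {
            return true;
        }
        for (String included : includedPaths) {
            if (isAncestorOrSelf(included, queryPath)) {
                return true; // query runs inside an included subtree
            }
        }
        return false;
    }

    private static boolean isAncestorOrSelf(String ancestor, String path) {
        return path.equals(ancestor) || path.startsWith(ancestor + "/");
    }
}
```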
[jira] [Updated] (OAK-3809) Test failure: FacetTest
[ https://issues.apache.org/jira/browse/OAK-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-3809:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Test failure: FacetTest
> -----------------------
>
> Key: OAK-3809
> URL: https://issues.apache.org/jira/browse/OAK-3809
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: solr
> Environment: https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/
> Reporter: Michael Dürig
> Assignee: Tommaso Teofili
> Priority: Major
> Labels: ci, jenkins, test, test-failure
> Fix For: 1.16.0
>
> {{org.apache.jackrabbit.oak.jcr.query.FacetTest}} keeps failing on Jenkins:
> {noformat}
> testFacetRetrievalMV(org.apache.jackrabbit.oak.jcr.query.FacetTest)  Time elapsed: 5.927 sec  <<< FAILURE!
> junit.framework.ComparisonFailure: expected: (2), aem (1), apache (1), cosmetics (1), furniture (1)],
> tags:[repository (2), software (2), aem (1), apache (1), cosmetics (1), furniture (1)],
> tags:[repository (2), software (2), aem (1), apache (1), cosmetics (1), furniture (1)],
> tags:[repository (2), software (2), aem (1), apache (1), cosmetics (1), furniture (1)]]> but was:
>     at junit.framework.Assert.assertEquals(Assert.java:100)
>     at junit.framework.Assert.assertEquals(Assert.java:107)
>     at junit.framework.TestCase.assertEquals(TestCase.java:269)
>     at org.apache.jackrabbit.oak.jcr.query.FacetTest.testFacetRetrievalMV(FacetTest.java:80)
> {noformat}
> Failure seen at builds: 628, 629, 630, 633, 634, 636, 642, 643, 644, 645,
> 648, 651, 656, 659, 660, 663, 666
> See e.g.
> https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/634/#showFailuresLink
[jira] [Updated] (OAK-5506) reject item names with unpaired surrogates early
[ https://issues.apache.org/jira/browse/OAK-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-5506:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> reject item names with unpaired surrogates early
> ------------------------------------------------
>
> Key: OAK-5506
> URL: https://issues.apache.org/jira/browse/OAK-5506
> Project: Jackrabbit Oak
> Issue Type: Wish
> Components: core, jcr, segment-tar
> Affects Versions: 1.5.18
> Reporter: Julian Reschke
> Priority: Minor
> Labels: resilience
> Fix For: 1.16.0
>
> Attachments: OAK-5506-01.patch, OAK-5506-02.patch, OAK-5506-4.diff,
> OAK-5506-bench.diff, OAK-5506-jcr-level.diff, OAK-5506-name-conversion.diff,
> OAK-5506-segment.diff, OAK-5506-segment2.diff, OAK-5506-segment3.diff,
> OAK-5506.diff, ValidNamesTest.java
>
> Apparently, the following node name is accepted:
>    {{"foo\ud800"}}
> but a subsequent {{getPath()}} call fails:
> {noformat}
> javax.jcr.InvalidItemStateException: This item [/test_node/foo?] does not exist anymore
>     at org.apache.jackrabbit.oak.jcr.delegate.ItemDelegate.checkAlive(ItemDelegate.java:86)
>     at org.apache.jackrabbit.oak.jcr.session.operation.ItemOperation.checkPreconditions(ItemOperation.java:34)
>     at org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.prePerform(SessionDelegate.java:615)
>     at org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:205)
>     at org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
>     at org.apache.jackrabbit.oak.jcr.session.ItemImpl.getPath(ItemImpl.java:140)
>     at org.apache.jackrabbit.oak.jcr.session.NodeImpl.getPath(NodeImpl.java:106)
>     at org.apache.jackrabbit.oak.jcr.ValidNamesTest.nameTest(ValidNamesTest.java:271)
>     at org.apache.jackrabbit.oak.jcr.ValidNamesTest.testUnpairedSurrogate(ValidNamesTest.java:259)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> {noformat}
> (test case follows)
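The early rejection the issue asks for boils down to scanning the candidate name for unpaired UTF-16 surrogates. A self-contained sketch of such a check (the class and method names are illustrative, not Oak's actual validator):

```java
public class SurrogateCheckSketch {

    // Returns true if the string contains an unpaired UTF-16 surrogate, i.e.
    // a high surrogate not followed by a low one, or a low surrogate not
    // preceded by a high one. Such names cannot be round-tripped reliably
    // and, per the issue, should be rejected when the item is created.
    public static boolean hasUnpairedSurrogate(String name) {
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            if (Character.isHighSurrogate(c)) {
                if (i + 1 >= name.length() || !Character.isLowSurrogate(name.charAt(i + 1))) {
                    return true; // high surrogate without a following low surrogate
                }
                i++; // skip the valid surrogate pair
            } else if (Character.isLowSurrogate(c)) {
                return true; // low surrogate without a preceding high surrogate
            }
        }
        return false;
    }
}
```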
[jira] [Updated] (OAK-5990) Add properties filtering support to OakEventFilter
[ https://issues.apache.org/jira/browse/OAK-5990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-5990:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Add properties filtering support to OakEventFilter
> --------------------------------------------------
>
> Key: OAK-5990
> URL: https://issues.apache.org/jira/browse/OAK-5990
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: jcr
> Affects Versions: 1.6.1
> Reporter: Stefan Egli
> Priority: Major
> Fix For: 1.16.0
>
> SLING-6164 introduced a _property name hint_ which, when set, allows limiting
> the observation events to only those that affect at least one of the listed
> properties. The advantage is to further reduce the events sent out. This
> feature has not yet been implemented on the Oak side. Thus we should add this
> to the OakEventFilter.
[jira] [Updated] (OAK-6769) Convert oak-search-mt to OSGi R6 annotations
[ https://issues.apache.org/jira/browse/OAK-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-6769:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Convert oak-search-mt to OSGi R6 annotations
> --------------------------------------------
>
> Key: OAK-6769
> URL: https://issues.apache.org/jira/browse/OAK-6769
> Project: Jackrabbit Oak
> Issue Type: Technical task
> Reporter: Robert Munteanu
> Priority: Major
> Fix For: 1.16.0
>
[jira] [Updated] (OAK-4498) Introduce lower limit for gc() maxRevisionAge
[ https://issues.apache.org/jira/browse/OAK-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-4498:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Introduce lower limit for gc() maxRevisionAge
> ---------------------------------------------
>
> Key: OAK-4498
> URL: https://issues.apache.org/jira/browse/OAK-4498
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: core, documentmk
> Reporter: Marcel Reutegger
> Assignee: Marcel Reutegger
> Priority: Minor
> Fix For: 1.16.0
>
> Introduce and enforce a lower limit for maxRevisionAge in
> VersionGarbageCollector.gc().
> OAK-4494 changes the way documents in a cache are considered up-to-date. In
> addition to the modCount value it also considers the modified timestamp. To
> work properly, a new document must have a modified timestamp that is
> different from a previous incarnation (i.e. before gc removed it). The
> version GC should therefore not remove documents with a maxRevisionAge less
> than the modified resolution (5 seconds).
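Enforcing the lower limit is essentially a one-line clamp. A sketch, with the constant name and helper being illustrative rather than Oak's actual code:

```java
import java.util.concurrent.TimeUnit;

public class GcLimitSketch {

    // Lower bound from the description: the resolution of the "modified"
    // timestamp is 5 seconds, so gc() must never use a smaller maxRevisionAge.
    static final long MIN_REVISION_AGE_MILLIS = TimeUnit.SECONDS.toMillis(5);

    // Enforce the lower limit: callers asking for a smaller age get the
    // minimum instead, so documents the cache still relies on are kept.
    public static long clampMaxRevisionAge(long requestedMillis) {
        return Math.max(requestedMillis, MIN_REVISION_AGE_MILLIS);
    }
}
```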
[jira] [Updated] (OAK-3373) Observers dont survive store restart (was: LuceneIndexProvider: java.lang.IllegalStateException: open)
[ https://issues.apache.org/jira/browse/OAK-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-3373:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Observers dont survive store restart (was: LuceneIndexProvider:
> java.lang.IllegalStateException: open)
> ---------------------------------------------------------------
>
> Key: OAK-3373
> URL: https://issues.apache.org/jira/browse/OAK-3373
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: core
> Affects Versions: 1.3.5
> Reporter: Stefan Egli
> Priority: Major
> Fix For: 1.16.0
>
> The following exception occurs when stopping, then immediately re-starting
> the oak-core bundle (which was done as part of testing for OAK-3250, but can
> be reproduced independently). It's not clear what the consequences are
> though.
> {code}
> 08.09.2015 14:20:26.960 *ERROR* [oak-lucene-0] org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexProvider Uncaught exception in org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexProvider@3a4a6c5c
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: Error occurred while fetching children for path /oak:index/authorizables
>     at org.apache.jackrabbit.oak.plugins.document.DocumentStoreException.convert(DocumentStoreException.java:48)
>     at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.getChildren(DocumentNodeStore.java:902)
>     at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.getChildNodes(DocumentNodeStore.java:1082)
>     at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.getChildNodeEntries(DocumentNodeState.java:508)
>     at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.access$100(DocumentNodeState.java:65)
>     at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState$ChildNodeEntryIterator.fetchMore(DocumentNodeState.java:716)
>     at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState$ChildNodeEntryIterator.<init>(DocumentNodeState.java:681)
>     at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState$1.iterator(DocumentNodeState.java:289)
>     at org.apache.jackrabbit.oak.spi.state.AbstractNodeState.compareAgainstBaseState(AbstractNodeState.java:129)
>     at org.apache.jackrabbit.oak.spi.state.AbstractNodeState.compareAgainstBaseState(AbstractNodeState.java:303)
>     at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.compareAgainstBaseState(DocumentNodeState.java:359)
>     at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
>     at org.apache.jackrabbit.oak.spi.state.AbstractNodeState.compareAgainstBaseState(AbstractNodeState.java:140)
>     at org.apache.jackrabbit.oak.spi.state.AbstractNodeState.compareAgainstBaseState(AbstractNodeState.java:303)
>     at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.compareAgainstBaseState(DocumentNodeState.java:359)
>     at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
>     at org.apache.jackrabbit.oak.spi.state.AbstractNodeState.compareAgainstBaseState(AbstractNodeState.java:140)
>     at org.apache.jackrabbit.oak.spi.state.AbstractNodeState.compareAgainstBaseState(AbstractNodeState.java:303)
>     at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.compareAgainstBaseState(DocumentNodeState.java:359)
>     at org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:52)
>     at org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker.update(IndexTracker.java:108)
>     at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexProvider.contentChanged(LuceneIndexProvider.java:73)
>     at org.apache.jackrabbit.oak.spi.commit.BackgroundObserver$1$1.call(BackgroundObserver.java:127)
>     at org.apache.jackrabbit.oak.spi.commit.BackgroundObserver$1$1.call(BackgroundObserver.java:121)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: open
>     at org.bson.util.Assertions.isTrue(Assertions.java:36)
>     at com.mongodb.DBTCPConnector.isMongosConnection(DBTCPConnector.java:367)
>     at com.mongodb.Mongo.isMongosConnection(Mongo.java:622)
>     at com.mongodb.DBCursor._check(DBCursor.java:494)
>     at com.mongodb.DBCursor._hasNext(DBCursor.java:621)
>     at
[jira] [Updated] (OAK-7382) Cloud datastore without local disk
[ https://issues.apache.org/jira/browse/OAK-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-7382:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Cloud datastore without local disk
> ----------------------------------
>
> Key: OAK-7382
> URL: https://issues.apache.org/jira/browse/OAK-7382
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: blob, blob-cloud
> Reporter: Thomas Mueller
> Assignee: Amit Jain
> Priority: Major
> Fix For: 1.16.0
>
> Currently, the S3 datastores need local disk to work (not sure about the
> Azure one). This should not be needed (not for upload, caching, ...).
> Also, temporary files for garbage collection should not be needed (instead,
> use temporary binaries, possibly written to S3 / Azure).
> Really, everything should fit in a few MB of memory.
> For S3, it might be needed to read a few MB of data into memory, and then
> possibly do a multipart upload:
> https://stackoverflow.com/questions/8653146/can-i-stream-a-file-upload-to-s3-without-a-content-length-header
[jira] [Updated] (OAK-5152) Improve overflow handling in ChangeSetFilterImpl
[ https://issues.apache.org/jira/browse/OAK-5152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-5152:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Improve overflow handling in ChangeSetFilterImpl
> ------------------------------------------------
>
> Key: OAK-5152
> URL: https://issues.apache.org/jira/browse/OAK-5152
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: core
> Affects Versions: 1.5.14
> Reporter: Stefan Egli
> Priority: Major
> Fix For: 1.16.0
>
> As described in OAK-5151, when a ChangeSet overflows, the ChangeSetFilterImpl
> treats the changes as included and doesn't look further into the remaining,
> perhaps not-overflown other sets. Besides more testing, it wouldn't be much
> effort to change this. Putting this outside of 1.6 scope for now.
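The suggested improvement, i.e. consulting the intact sets even when one set overflowed, might look like the following simplified model. {{ValueSet}} and {{mightMatch}} are hypothetical stand-ins, not the real ChangeSetFilterImpl:

```java
import java.util.Set;

public class OverflowFilterSketch {

    // ChangeSet-like summary of one dimension of a commit (paths, property
    // names, node types, ...): the collected values plus an overflow flag
    // meaning "too many entries were seen, the set is incomplete".
    public static final class ValueSet {
        public final Set<String> values;
        public final boolean overflown;

        public ValueSet(Set<String> values, boolean overflown) {
            this.values = values;
            this.overflown = overflown;
        }

        public boolean mightContain(String value) {
            // An overflown set is inconclusive, so it might contain anything.
            return overflown || values.contains(value);
        }
    }

    // Sketch of the improvement: instead of short-circuiting to "include" on
    // the first overflown set, consult the remaining intact sets, which may
    // still conclusively exclude the commit.
    public static boolean mightMatch(ValueSet paths, String watchedPath,
                                     ValueSet props, String watchedProp) {
        return paths.mightContain(watchedPath) && props.mightContain(watchedProp);
    }
}
```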
[jira] [Updated] (OAK-3355) Test failure: SpellcheckTest.testSpellcheckMultipleWords
[ https://issues.apache.org/jira/browse/OAK-3355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-3355:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Test failure: SpellcheckTest.testSpellcheckMultipleWords
> --------------------------------------------------------
>
> Key: OAK-3355
> URL: https://issues.apache.org/jira/browse/OAK-3355
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: solr
> Affects Versions: 1.0.24
> Environment: https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/
> Reporter: Michael Dürig
> Assignee: Tommaso Teofili
> Priority: Major
> Labels: ci, jenkins, test, test-failure
> Fix For: 1.16.0
>
> {{org.apache.jackrabbit.oak.jcr.query.SpellcheckTest.testSpellcheckMultipleWords}}
> fails on Jenkins.
> Failure seen at builds: 389, 392, 395, 396, 562
> https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/396/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/console
> {noformat}
> testSpellcheckMultipleWords(org.apache.jackrabbit.oak.jcr.query.SpellcheckTest)  Time elapsed: 0.907 sec  <<< FAILURE!
> junit.framework.ComparisonFailure: expected:<[voting[ in] ontario]> but was:<[voting[, voted,] ontario]>
>     at junit.framework.Assert.assertEquals(Assert.java:85)
>     at junit.framework.Assert.assertEquals(Assert.java:91)
>     at org.apache.jackrabbit.oak.jcr.query.SpellcheckTest.testSpellcheckMultipleWords(SpellcheckTest.java:86)
> {noformat}
[jira] [Updated] (OAK-6501) Support adding or updating index definitions via oak-run: JSON data format
[ https://issues.apache.org/jira/browse/OAK-6501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-6501:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Support adding or updating index definitions via oak-run: JSON data format
> --------------------------------------------------------------------------
>
> Key: OAK-6501
> URL: https://issues.apache.org/jira/browse/OAK-6501
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Reporter: Thomas Mueller
> Assignee: Thomas Mueller
> Priority: Major
> Fix For: 1.16.0
>
> In OAK-6471 we have support for index definitions via JSON.
> I'm not happy with the escaping (OAK-6476) ("If the string starts with
> namespace..."), I think it's a bit dangerous. Need to investigate whether
> this prevents importing index definitions exported via JSON
> (localhost:/oak:index/lucene.tidy.-1.json).
[jira] [Updated] (OAK-5924) Prevent long running query from delaying refresh of index
[ https://issues.apache.org/jira/browse/OAK-5924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-5924:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> Prevent long running query from delaying refresh of index
> ---------------------------------------------------------
>
> Key: OAK-5924
> URL: https://issues.apache.org/jira/browse/OAK-5924
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: lucene
> Reporter: Chetan Mehrotra
> Assignee: Chetan Mehrotra
> Priority: Major
> Fix For: 1.16.0
>
> Whenever the index gets updated, {{IndexTracker}} detects the changes, opens
> a new {{IndexNode}} and closes the old index nodes. This flow blocks until
> all old IndexNodes are closed.
> IndexNode close itself relies on a writer lock. It can happen that a long
> running query, i.e. a query which is about to read a large page of results,
> is currently executing on the old IndexNode instance. If this query is
> trying to load 100k docs and is very slow (due to loading of excerpts), then
> such a query would prevent the IndexNode from getting closed. This in turn
> would prevent the index from seeing the latest data, and the index becomes
> stale.
> To make query and indexing more resilient, we should check whether the
> current IndexNode being used for the query is closing. If it is closing,
> the query should open a fresh searcher.
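The "open a fresh searcher when the node is closing" idea can be sketched as a bounded retry loop. The {{IndexHandle}} interface and {{acquireFresh}} helper are hypothetical simplifications, not Oak's IndexNode API:

```java
import java.util.function.Supplier;

public class SearcherRetrySketch {

    // Minimal stand-in for an index node handle that may be in the process
    // of closing when the query acquires it.
    public interface IndexHandle {
        boolean isClosing();
    }

    // Sketch of the proposed resilience: if the handle acquired for the query
    // is already closing, acquire a fresh one, up to a bounded number of
    // attempts so the query cannot spin forever.
    public static IndexHandle acquireFresh(Supplier<IndexHandle> acquire, int maxAttempts) {
        IndexHandle handle = null;
        for (int i = 0; i < maxAttempts; i++) {
            handle = acquire.get();
            if (!handle.isClosing()) {
                return handle; // safe to run the query against this searcher
            }
        }
        return handle; // last attempt; the caller may still fail fast
    }
}
```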
[jira] [Updated] (OAK-1905) SegmentMK: Arch segment(s)
[ https://issues.apache.org/jira/browse/OAK-1905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella updated OAK-1905:
----------------------------------
    Fix Version/s:     (was: 1.14.0)

> SegmentMK: Arch segment(s)
> --------------------------
>
> Key: OAK-1905
> URL: https://issues.apache.org/jira/browse/OAK-1905
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: segment-tar
> Reporter: Jukka Zitting
> Priority: Minor
> Labels: perfomance, scalability
> Fix For: 1.16.0
>
> There are a lot of constants and other commonly occurring names, values and
> other data in a typical repository. To optimize storage space and access
> speed, it would be useful to place such data in one or more constant "arch
> segments" that are always cached in memory.
[jira] [Updated] (OAK-6919) SegmentCache might introduce unwanted memory references to SegmentId instances
[ https://issues.apache.org/jira/browse/OAK-6919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6919: -- Fix Version/s: (was: 1.14.0) > SegmentCache might introduce unwanted memory references to SegmentId instances > -- > > Key: OAK-6919 > URL: https://issues.apache.org/jira/browse/OAK-6919 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Francesco Mari >Assignee: Francesco Mari >Priority: Major > Fix For: 1.16.0 > > > {{SegmentCache}} contains, through the underlying Guava cache, hard > references to both {{SegmentId}} and {{Segment}} instances. Thus, > {{SegmentCache}} contributes to the computation of in-memory references that, > in turn, constitute the root references of the garbage collection algorithm. > Further investigations are needed to assess this statement but, if > {{SegmentCache}} is proved to be problematic, there are some possible > solutions. > For example, {{SegmentCache}} might be reworked to store references to > MSB/LSB pairs as keys, instead of to {{SegmentId}} instances. Moreover, > instead of referencing {{Segment}} instances as values, {{SegmentCache}} > might hold references to their underlying {{ByteBuffer}}. With these changes > in place, {{SegmentCache}} would not interfere with {{SegmentTracker}} and > the garbage collection algorithm. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
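The reworking suggested in OAK-6919 (keying the cache by MSB/LSB pairs and holding raw buffers as values) could look roughly like this. This is a minimal sketch, not the actual Oak implementation; the class names are invented, and a real version would use an evicting cache rather than a plain map.

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Key built from the segment UUID's most/least significant bits, so the
// cache holds no reference to SegmentId instances at all.
class SegmentKey {
    final long msb, lsb;
    SegmentKey(long msb, long lsb) { this.msb = msb; this.lsb = lsb; }
    @Override public boolean equals(Object o) {
        return o instanceof SegmentKey
            && ((SegmentKey) o).msb == msb && ((SegmentKey) o).lsb == lsb;
    }
    @Override public int hashCode() { return Objects.hash(msb, lsb); }
}

// Sketch of a cache that stores the underlying segment data (ByteBuffer)
// instead of Segment instances, so it contributes no GC roots for either
// SegmentId or Segment objects.
class IdFreeSegmentCache {
    private final Map<SegmentKey, ByteBuffer> cache = new HashMap<>();
    void put(long msb, long lsb, ByteBuffer data) { cache.put(new SegmentKey(msb, lsb), data); }
    ByteBuffer get(long msb, long lsb) { return cache.get(new SegmentKey(msb, lsb)); }
}
```

Because neither keys nor values reference `SegmentId`, such a cache would stay out of the in-memory reference computation described in the issue.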
[jira] [Updated] (OAK-6288) Test failure: upgrade tests failing: Failed to copy content
[ https://issues.apache.org/jira/browse/OAK-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6288: -- Fix Version/s: (was: 1.14.0) > Test failure: upgrade tests failing: Failed to copy content > --- > > Key: OAK-6288 > URL: https://issues.apache.org/jira/browse/OAK-6288 > Project: Jackrabbit Oak > Issue Type: Bug > Components: continuous integration, upgrade >Reporter: Hudson >Priority: Major > Labels: CI, jenkins, test-failure > Fix For: 1.16.0 > > > Jenkins CI failure: https://builds.apache.org/view/J/job/Jackrabbit%20Oak/ > The build Jackrabbit Oak #364 has failed. > First failed run: [Jackrabbit Oak > #364|https://builds.apache.org/job/Jackrabbit%20Oak/364/] [console > log|https://builds.apache.org/job/Jackrabbit%20Oak/364/console] > Failing tests: > {noformat} > > org.apache.jackrabbit.oak.upgrade.CopyCheckpointsTest.validateMigration[Suppress > the warning] > > org.apache.jackrabbit.oak.upgrade.CopyCheckpointsTest.validateMigration[Source > data store defined, checkpoints migrated] > > org.apache.jackrabbit.oak.upgrade.IgnoreMissingBinariesTest.validateMigration > org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest.upgradeFrom10 > > org.apache.jackrabbit.oak.upgrade.cli.SegmentTarToSegmentTest.validateMigration > org.apache.jackrabbit.oak.upgrade.cli.SegmentToJdbcTest.validateMigration > > org.apache.jackrabbit.oak.upgrade.cli.SegmentToSegmentTarTest.validateMigration > > org.apache.jackrabbit.oak.upgrade.cli.SegmentToSegmentTarWithMissingDestinationDirectoryTest.validateMigration > > org.apache.jackrabbit.oak.upgrade.cli.SegmentToSegmentTest.validateMigration > > org.apache.jackrabbit.oak.upgrade.cli.SegmentToSegmentWithMissingDestinationDirectoryTest.validateMigration > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > references, no blobstores defined, segment -> segment] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > 
references, no blobstores defined, segment-tar -> segment-tar] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > references, no blobstores defined, segment -> segment-tar] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > embedded to embedded, no blobstores defined] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > embedded to external, no blobstores defined] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > references, src blobstore defined] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > external to embedded, src blobstore defined] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > external to external, src blobstore defined] > org.apache.jackrabbit.oak.upgrade.cli.blob.FbsToFbsTest.validateMigration > org.apache.jackrabbit.oak.upgrade.cli.blob.FbsToFdsTest.validateMigration > org.apache.jackrabbit.oak.upgrade.cli.blob.FdsToFbsTest.validateMigration > {noformat} > All seem to fail with > {noformat} > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.737 > s <<< FAILURE! - in > org.apache.jackrabbit.oak.upgrade.cli.SegmentTarToSegmentTest > [ERROR] > validateMigration(org.apache.jackrabbit.oak.upgrade.cli.SegmentTarToSegmentTest) > Time elapsed: 3.73 s <<< ERROR! > java.lang.RuntimeException: javax.jcr.RepositoryException: Failed to copy > content > Caused by: javax.jcr.RepositoryException: Failed to copy content > Caused by: java.lang.IllegalArgumentException > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-5984) Property indexes can get out of sync
[ https://issues.apache.org/jira/browse/OAK-5984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5984: -- Fix Version/s: (was: 1.14.0) > Property indexes can get out of sync > > > Key: OAK-5984 > URL: https://issues.apache.org/jira/browse/OAK-5984 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: property-index >Reporter: Thomas Mueller >Assignee: Thomas Mueller >Priority: Major > Fix For: 1.16.0 > > > Property indexes can get out of sync for the following reasons: > * the index was disabled for some time > * the property index component was not started / configured > * the index definition was changed without reindexing -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7116) Use JMX mode to reindex on SegmentNodeStore without requiring manual steps
[ https://issues.apache.org/jira/browse/OAK-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7116: -- Fix Version/s: (was: 1.14.0) > Use JMX mode to reindex on SegmentNodeStore without requiring manual steps > -- > > Key: OAK-7116 > URL: https://issues.apache.org/jira/browse/OAK-7116 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: run >Reporter: Chetan Mehrotra >Assignee: Chetan Mehrotra >Priority: Major > Fix For: 1.16.0 > > > oak-run indexing for SegmentNodeStore currently requires the following steps when indexing a repository which is in use [1] > # Create a checkpoint via MBean and pass it as part of the cli args > # Perform the actual indexing with read only access to the repo > # Import the index via an MBean operation > As per the currently documented steps, #1 and #3 are manual. This can potentially be simplified by directly using JMX operations from within oak-run, as currently, for accessing a SegmentNodeStore, oak-run needs to run on the same host as the actual application > *JMX Access* > JMX access can be done in the following ways > # Application using Oak has JMX remote > ## Enabled and the same info provided as cli args > ## Not enabled - In such a case we can rely on > ### pid and [local connection|https://stackoverflow.com/questions/13252914/how-to-connect-to-a-local-jmx-server-by-knowing-the-process-id] or [attach|https://github.com/nickman/jmxlocal] > ### Or query the JMX of all running Java processes and check if the SegmentNodeStore repo path is the same as the one provided on the cli > # Application using Oak > *Proposed Approach* > # Establish the JMX connection > # Create a checkpoint using the JMX connection programmatically > # Perform indexing with read only access > # Import the index via JMX access > With this, indexing support for SegmentNodeStore would be on par with DocumentNodeStore in terms of ease of use > [1] https://jackrabbit.apache.org/oak/docs/query/oak-run-indexing.html -- This message was sent 
by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6761) Convert oak-blob-plugins to OSGi R6 annotations
[ https://issues.apache.org/jira/browse/OAK-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6761: -- Fix Version/s: (was: 1.14.0) > Convert oak-blob-plugins to OSGi R6 annotations > --- > > Key: OAK-6761 > URL: https://issues.apache.org/jira/browse/OAK-6761 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: blob-plugins >Reporter: Robert Munteanu >Priority: Major > Fix For: 1.16.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-3219) Lucene IndexPlanner should also account for number of property constraints evaluated while giving cost estimation
[ https://issues.apache.org/jira/browse/OAK-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-3219: -- Fix Version/s: (was: 1.14.0) > Lucene IndexPlanner should also account for number of property constraints > evaluated while giving cost estimation > - > > Key: OAK-3219 > URL: https://issues.apache.org/jira/browse/OAK-3219 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene >Reporter: Chetan Mehrotra >Assignee: Thomas Mueller >Priority: Minor > Labels: performance > Fix For: 1.16.0 > > > Currently the cost returned by the Lucene index is a function of the number of indexed documents present in the index. If the number of indexed entries is high, it might reduce the chances of this index getting selected when some property index also supports the property constraint. > {noformat} > /jcr:root/content/freestyle-cms/customers//element(*, cq:Page) > [(jcr:content/@title = 'm' or jcr:like(jcr:content/@title, 'm%')) > and jcr:content/@sling:resourceType = '/components/page/customer'] > {noformat} > Consider the above query with the following index definition > * A property index on resourceType > * A Lucene index for cq:Page with the properties {{jcr:content/title}} and {{jcr:content/sling:resourceType}} indexed, and also path restriction evaluation enabled > Here is what the two indexes can help with > # Property index > ## Path restriction > ## Property restriction on {{sling:resourceType}} > # Lucene index > ## NodeType restriction > ## Property restriction on {{sling:resourceType}} > ## Property restriction on {{title}} > ## Path restriction > Cost estimation currently works like this > * Property index - {{f(indexedValueEstimate, estimateOfNodesUnderGivenPath)}} > ** indexedValueEstimate - For 'sling:resourceType=foo' it's the approximate count of nodes having that value 'foo' > ** estimateOfNodesUnderGivenPath - It's derived from an approximate estimate of the nodes present under the given path > * Lucene Index - 
{{f(totalIndexedEntries)}} > As the Lucene cost function is too simplistic, it does not reflect reality. The following 2 changes can be made to improve it > * Given that the Lucene index can handle more constraints (4) than the property index (2), the cost estimate it returns should also reflect this. This can be done by setting costPerEntry to 1/(number of property restrictions evaluated) > * Get the count for the queried property value - This is similar to what PropertyIndex does and assumes that Lucene can provide that information at O(1) cost. In the case of multiple supported property restrictions this can be the minimum of all -- This message was sent by Atlassian JIRA (v7.6.3#76005)
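The first adjustment proposed in OAK-3219 can be expressed as a tiny helper. This is a sketch of the formula from the issue (costPerEntry = 1/number-of-property-restrictions), with invented names, not the actual IndexPlanner code.

```java
// Sketch of the proposed cost adjustment: an index that evaluates more
// property restrictions gets a lower per-entry cost, so it competes better
// against a property index that only covers one constraint.
class CostEstimator {
    static double costPerEntry(int propertyRestrictions) {
        if (propertyRestrictions < 1) {
            throw new IllegalArgumentException("need at least one restriction");
        }
        return 1.0 / propertyRestrictions;
    }

    static double cost(long estimatedEntryCount, int propertyRestrictions) {
        return estimatedEntryCount * costPerEntry(propertyRestrictions);
    }
}
```

For the example in the issue, an index covering 4 restrictions would report a quarter of the per-entry cost of a single-restriction plan over the same entry count.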
[jira] [Updated] (OAK-3437) Regression in org.apache.jackrabbit.core.query.JoinTest#testJoinWithOR5 when enabling OAK-1617
[ https://issues.apache.org/jira/browse/OAK-3437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-3437: -- Fix Version/s: (was: 1.14.0) > Regression in org.apache.jackrabbit.core.query.JoinTest#testJoinWithOR5 when > enabling OAK-1617 > -- > > Key: OAK-3437 > URL: https://issues.apache.org/jira/browse/OAK-3437 > Project: Jackrabbit Oak > Issue Type: Bug > Components: solr >Reporter: Davide Giannella >Assignee: Tommaso Teofili >Priority: Major > Fix For: 1.16.0 > > > When enabling OAK-1617 (still to be committed) there's a regression in the > {{oak-solr-core}} unit tests > - {{org.apache.jackrabbit.core.query.JoinTest#testJoinWithOR3}} > - {{org.apache.jackrabbit.core.query.JoinTest#testJoinWithOR4}} > - {{org.apache.jackrabbit.core.query.JoinTest#testJoinWithOR5}} > The WIP of the feature can be found in > https://github.com/davidegiannella/jackrabbit-oak/tree/OAK-1617 and a full > patch will be attached shortly for review in OAK-1617 itself. > The feature is currently disabled, in order to enable it for unit testing an > approach like this can be taken > https://github.com/davidegiannella/jackrabbit-oak/blob/177df1a8073b1237857267e23d12a433e3d890a4/oak-core/src/test/java/org/apache/jackrabbit/oak/query/SQL2OptimiseQueryTest.java#L142 > or setting the system property {{-Doak.query.sql2optimisation}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6421) Phase out JCR Locking support
[ https://issues.apache.org/jira/browse/OAK-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6421: -- Fix Version/s: (was: 1.14.0) > Phase out JCR Locking support > - > > Key: OAK-6421 > URL: https://issues.apache.org/jira/browse/OAK-6421 > Project: Jackrabbit Oak > Issue Type: Task > Components: jcr >Reporter: Marcel Reutegger >Priority: Major > Fix For: 1.16.0 > > > Oak currently has a lot of gaps in its JCR Locking implementation (see > OAK-1962), which basically makes it non-compliant with the JCR specification. > I propose we phase out the support for JCR Locking because a proper > implementation would be rather complex with a runtime behaviour that is very > different in a standalone deployment compared to a cluster. In the standalone > case a lock could be acquired very quickly, while in the distributed case, > the operations would be multiple orders of magnitude slower, depending on how > cluster nodes are geographically distributed. > Applications that rely on strict lock semantics should use other mechanisms, > built explicitly for this purpose. E.g. Apache Zookeeper. > To ease upgrade and migration to a different lock mechanism, the proposal is > to introduce a flag or configuration that controls the level of support for > JCR Locking: > - DISABLED: the implementation does not support JCR Locking at all. Methods > will throw UnsupportedRepositoryOperationException when defined by the JCR > specification. > - DEPRECATED: the implementation behaves as right now, but logs a warn or > error message that JCR Locking does not work as specified and will be removed > in a future version of Oak. > In a later release (e.g. 1.10) the current JCR Locking implementation would > be removed entirely and unconditionally throw an exception. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6741) Switch to official OSGi component and metatype annotations
[ https://issues.apache.org/jira/browse/OAK-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6741: -- Fix Version/s: (was: 1.14.0) > Switch to official OSGi component and metatype annotations > -- > > Key: OAK-6741 > URL: https://issues.apache.org/jira/browse/OAK-6741 > Project: Jackrabbit Oak > Issue Type: Improvement >Reporter: Robert Munteanu >Priority: Major > Fix For: 1.16.0 > > Attachments: OAK-6741-proposed-changes-chetans-feedback.patch, > osgi-metadata-1.7.8.json, osgi-metadata-trunk.json > > > We should remove the 'old' Felix SCR annotations and move to the 'new' OSGi > R6 annotations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6833) LuceneIndex*Test failures
[ https://issues.apache.org/jira/browse/OAK-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6833: -- Fix Version/s: (was: 1.14.0) > LuceneIndex*Test failures > - > > Key: OAK-6833 > URL: https://issues.apache.org/jira/browse/OAK-6833 > Project: Jackrabbit Oak > Issue Type: Bug > Components: lucene >Reporter: Julian Reschke >Assignee: Vikas Saurabh >Priority: Major > Fix For: 1.16.0 > > Attachments: > TEST-org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexAugmentTest.xml, > > TEST-org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexAugmentTest.xml, > > TEST-org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexAugmentTest.xml, > > TEST-org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexAugmentTest.xml, > > TEST-org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexAugmentTest.xml, > > TEST-org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.xml, > > TEST-org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.xml, > > TEST-org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.xml, > > TEST-org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.xml, > > TEST-org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.xml, > > TEST-org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.xml, > > TEST-org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.xml, > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexAugmentTest.txt, > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexAugmentTest.txt, > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexAugmentTest.txt, > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexAugmentTest.txt, > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexAugmentTest.txt, > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.txt, > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.txt, > 
org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.txt, > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.txt, > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.txt, > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.txt, > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.txt, > unit-tests.log, unit-tests.log, unit-tests.log > > > {noformat} > [ERROR] testLuceneWithRelativeProperty[1: useBlobStore > (false)](org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest) > Time elapsed: 0.063 s <<< FAILURE! > java.lang.AssertionError: expected: but was: > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorTest.testLuceneWithRelativeProperty(LuceneIndexEditorTest.java:341) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6774) Convert oak-upgrade to OSGi R6 annotations
[ https://issues.apache.org/jira/browse/OAK-6774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6774: -- Fix Version/s: (was: 1.14.0) > Convert oak-upgrade to OSGi R6 annotations > -- > > Key: OAK-6774 > URL: https://issues.apache.org/jira/browse/OAK-6774 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: upgrade >Reporter: Robert Munteanu >Priority: Major > Fix For: 1.16.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-5488) BackgroundObserver MBean report Listener class again
[ https://issues.apache.org/jira/browse/OAK-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5488: -- Fix Version/s: (was: 1.14.0) > BackgroundObserver MBean report Listener class again > > > Key: OAK-5488 > URL: https://issues.apache.org/jira/browse/OAK-5488 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core, jcr >Affects Versions: 1.5.18 >Reporter: Stefan Eissing >Priority: Minor > Fix For: 1.16.0 > > > The MBean stats for {{BackgroundObserverStats}} used to give the className of > the listening class. > With the introduction of {{FilteringDispatcher}} all MBeans only list that > class name, making it difficult to find out which observer really is shown. > Proposal: show the effective className as before again. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7193) DataStore: API to retrieve statistic (file headers, size estimation)
[ https://issues.apache.org/jira/browse/OAK-7193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7193: -- Fix Version/s: (was: 1.14.0) > DataStore: API to retrieve statistic (file headers, size estimation) > > > Key: OAK-7193 > URL: https://issues.apache.org/jira/browse/OAK-7193 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: blob >Reporter: Thomas Mueller >Priority: Major > Fix For: 1.16.0 > > > Extension of OAK-6254: in addition to retrieving the size, it would be good to retrieve the estimated number and total size per file type. A simple (and in my view sufficient) solution is to use the first few bytes ("magic numbers", 2 bytes should be enough) to get the file type. That would allow estimating, for example, the number and total size of PDF files, JPEG files, Lucene index files, and so on. A histogram would be nice as well, but I think it is not needed. > To speed up calculation, the blob ID could be extended with the first 2 bytes of the file content, that is: #@ where magic is the first two bytes, in hex. That would allow quickly getting the data from the blob IDs (no need to actually read the content). > Sampling should be enough. The longer it takes, the more accurate the data. > We could store the data while doing datastore GC, in which case the returned data would be somewhat stale; that's OK. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
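The magic-number part of the OAK-7193 idea is easy to sketch: take the first two bytes of the content and render them as four hex digits. This is an illustrative helper, not Oak code; the signature table below lists a few well-known magic numbers purely as examples.

```java
// Sketch of the "first 2 bytes, in hex" file-type key from the issue.
class MagicNumber {
    static String magicHex(byte[] content) {
        if (content.length < 2) return "????";
        // mask to unsigned before formatting so 0xFF prints as "ff", not a sign-extended value
        return String.format("%02x%02x", content[0] & 0xff, content[1] & 0xff);
    }

    // Example mapping from magic hex to a coarse file type, for statistics.
    static String guessType(byte[] content) {
        switch (magicHex(content)) {
            case "ffd8": return "jpeg";    // JPEG starts with FF D8
            case "2550": return "pdf";     // "%P" of "%PDF"
            case "8950": return "png";     // 0x89 followed by 'P'
            default:     return "unknown";
        }
    }
}
```

Aggregating counts and sizes keyed by this two-byte hex string would give the per-file-type estimates the issue asks for.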
[jira] [Updated] (OAK-6261) Log queries that sort by un-indexed properties
[ https://issues.apache.org/jira/browse/OAK-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6261: -- Fix Version/s: (was: 1.14.0) > Log queries that sort by un-indexed properties > -- > > Key: OAK-6261 > URL: https://issues.apache.org/jira/browse/OAK-6261 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: query >Reporter: Thomas Mueller >Assignee: Thomas Mueller >Priority: Minor > Fix For: 1.16.0 > > > Queries that can read many nodes, and sort by properties that are not > indexed, can be very slow. This includes for example fulltext queries. > As a start, it might make sense to log an "info" level message (but avoid > logging the same message each time a query is run). Per configuration, this > could be turned to "warning". -- This message was sent by Atlassian JIRA (v7.6.3#76005)
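The "avoid logging the same message each time a query is run" part of OAK-6261 amounts to a log-once guard. A minimal sketch, with a hypothetical helper name (not Oak code):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Remembers which statements were already reported, so a query that sorts
// by an un-indexed property is logged on its first execution only.
class UnindexedSortLogger {
    private final Set<String> reported = ConcurrentHashMap.newKeySet();

    /** Returns true only the first time a given statement is seen. */
    boolean shouldLog(String statement) {
        return reported.add(statement); // Set.add is atomic and false on repeats
    }
}
```

The caller would emit the info (or, per configuration, warning) message only when `shouldLog` returns true; a bounded or expiring set would be preferable in production to cap memory.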
[jira] [Updated] (OAK-5316) Rewrite JcrPathParser and JcrNameParser with good test coverage
[ https://issues.apache.org/jira/browse/OAK-5316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5316: -- Fix Version/s: (was: 1.14.0) > Rewrite JcrPathParser and JcrNameParser with good test coverage > --- > > Key: OAK-5316 > URL: https://issues.apache.org/jira/browse/OAK-5316 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Affects Versions: 1.5.15 >Reporter: Julian Sedding >Assignee: Thomas Mueller >Priority: Major > Fix For: 1.16.0 > > > As discussed in OAK-5260 the implementation of the {{JcrPathParser}} and > possibly also the {{JcrNameParser}} are not ideal, i.e. there are potentially > many bugs hiding in edge-case scenarios. The parsers' test coverage is also > lacking, which is problematic as these code paths get executed very > frequently. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7660) Refactor AzureCompact and Compact
[ https://issues.apache.org/jira/browse/OAK-7660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7660: -- Fix Version/s: (was: 1.14.0) > Refactor AzureCompact and Compact > - > > Key: OAK-7660 > URL: https://issues.apache.org/jira/browse/OAK-7660 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Andrei Dulceanu >Assignee: Andrei Dulceanu >Priority: Major > Labels: tech-debt, technical_debt, tooling > Fix For: 1.16.0 > > > {{AzureCompact}} in {{oak-segment-azure}} closely follows the structure and logic of {{Compact}} in {{oak-segment-tar}}. Since the only thing that differs is the underlying persistence used (remote in Azure vs. local TAR files), the common logic should be extracted into a super-class, extended by both. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-3236) integration test that simulates influence of clock drift
[ https://issues.apache.org/jira/browse/OAK-3236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-3236: -- Fix Version/s: (was: 1.14.0) > integration test that simulates influence of clock drift > > > Key: OAK-3236 > URL: https://issues.apache.org/jira/browse/OAK-3236 > Project: Jackrabbit Oak > Issue Type: Test > Components: core >Affects Versions: 1.3.4 >Reporter: Stefan Egli >Priority: Major > Fix For: 1.16.0 > > > Spin-off of OAK-2739 [of this comment|https://issues.apache.org/jira/browse/OAK-2739?focusedCommentId=14693398=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14693398] - i.e. there should be an integration test that showcases the issues with clock drift and why it is a good idea to have a lease-check (that refuses to let the document store be used any further once the lease times out locally) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7744) Persistent cache for the Segment Node Store
[ https://issues.apache.org/jira/browse/OAK-7744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7744: -- Fix Version/s: (was: 1.14.0) > Persistent cache for the Segment Node Store > --- > > Key: OAK-7744 > URL: https://issues.apache.org/jira/browse/OAK-7744 > Project: Jackrabbit Oak > Issue Type: Story > Components: segment-tar >Reporter: Tomek Rękawek >Assignee: Tomek Rękawek >Priority: Major > Fix For: 1.16.0 > > Attachments: OAK-7744.patch > > > With the introduction of custom, remote persistence mechanisms for the > SegmentMK (namely the Azure Segment Store), it makes sense to create another > level of cache, apart from the on-heap segment cache which is currently used. > Let's implement the persistent cache, using the existing {{TarFiles}} class > and storing the processed segments on disk. It may be created as a > pass-through {{SegmentNodeStorePersistence}} implementation, so it can be > composed with other existing persistence implementations, eg. [split > persistence|OAK-7735]. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7390) QueryResult.getSize() can be slow for many "or" or "union" conditions
[ https://issues.apache.org/jira/browse/OAK-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7390: -- Fix Version/s: (was: 1.14.0) > QueryResult.getSize() can be slow for many "or" or "union" conditions > - > > Key: OAK-7390 > URL: https://issues.apache.org/jira/browse/OAK-7390 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: query >Reporter: Thomas Mueller >Assignee: Thomas Mueller >Priority: Major > Fix For: 1.16.0 > > > For queries with many union conditions, the "fast" getSize method can actually be slower than iterating over the result. > The reason is that the number of index calls grows quadratically with the number of subqueries: (3x + x^2) / 2, where x is the number of subqueries. > For this to have a measurable effect, the number of subqueries needs to be large (more than 100), and the index needs to be slow. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
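The growth formula quoted in OAK-7390 is easy to check numerically. A one-method sketch (the formula is from the issue; the class name is invented):

```java
// (3x + x^2) / 2 index calls for x subqueries: quadratic, not linear, growth.
class UnionCost {
    static long indexCalls(long x) {
        return (3 * x + x * x) / 2;
    }
}
```

At x = 2 this is only 5 calls, but at x = 100 it is already 5150, which matches the issue's observation that the effect only becomes measurable with more than ~100 subqueries on a slow index.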
[jira] [Updated] (OAK-1150) NodeType index: don't index all primary and mixin types
[ https://issues.apache.org/jira/browse/OAK-1150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-1150: -- Fix Version/s: (was: 1.14.0) > NodeType index: don't index all primary and mixin types > --- > > Key: OAK-1150 > URL: https://issues.apache.org/jira/browse/OAK-1150 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: property-index >Reporter: Thomas Mueller >Assignee: Thomas Mueller >Priority: Major > Fix For: 1.16.0 > > > Currently, the nodetype index indexes all primary types and mixin types > (including nt:base I think). > This results in many nodes in this index, which unnecessarily increases the > repository size, but doesn't really help executing queries (running a query > to get all nt:base nodes doesn't benefit much from using the nodetype index). > It should also help reduce writes in updating the index, for example for > OAK-1099 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7372) Update FileDataStore recommendation
[ https://issues.apache.org/jira/browse/OAK-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7372: -- Fix Version/s: (was: 1.14.0) > Update FileDataStore recommendation > --- > > Key: OAK-7372 > URL: https://issues.apache.org/jira/browse/OAK-7372 > Project: Jackrabbit Oak > Issue Type: Documentation > Components: doc >Reporter: Marcel Reutegger >Priority: Minor > Fix For: 1.16.0 > > > The BlobStore documentation currently mentions use of a FileDataStore only > for deployments when the data store is shared between multiple repository > instances. The documentation should be updated to also recommend the > FileDataStore when a repository has many binaries. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6513) Journal based Async Indexer
[ https://issues.apache.org/jira/browse/OAK-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6513: -- Fix Version/s: (was: 1.14.0) > Journal based Async Indexer > --- > > Key: OAK-6513 > URL: https://issues.apache.org/jira/browse/OAK-6513 > Project: Jackrabbit Oak > Issue Type: New Feature > Components: indexing >Reporter: Chetan Mehrotra >Assignee: Chetan Mehrotra >Priority: Major > Fix For: 1.16.0 > > > The current async indexer design is based on NodeState diffs. This has served us fine so far; however, of late it has not been able to perform well when the rate of repository writes is high. When changes happen faster than index-update can process them, larger and larger diffs will occur. These make index-updates slower, which in turn leads to the next diff being ever larger than the one before (assuming a constant ingestion rate). > In the current diff based flow the indexer performs a complete diff for all changes happening between 2 cycles. It may happen that lots of writes occur but not much indexable content is written, so doing a diff there is wasted effort. > In the 1.6 release, for NRT Indexing, we implemented journal based indexing for external changes (OAK-4808, OAK-5430). That approach can be generalized and used for async indexing. > Before talking about the journal based approach, let's see how IndexEditors work currently > h4. IndexEditor > Currently any IndexEditor performs 2 tasks > # Identify which node is to be indexed based on some index definition. The Editor gets invoked as part of the content diff, where it determines which NodeState is to be indexed > # Update the index based on the node to be indexed > For example, in oak-lucene we have LuceneIndexEditor, which identifies the NodeStates to be indexed, and LuceneDocumentMaker, which constructs the Lucene Document from the NodeState to be indexed. 
For the journal based approach we can decouple these 2 parts and thus have > * IndexEditor - Identifies which paths need to be indexed for a given index definition > * IndexUpdater - Updates the index based on a given NodeState and its path > h4. High Level Flow > # Session Commit Flow > ## Each index type would provide an IndexEditor which would be invoked as part of the commit (like sync indexes). These IndexEditors would just determine which paths need to be indexed. > ## As part of the commit, the paths to be indexed would be written to the journal. > # AsyncIndexUpdate flow > ## AsyncIndexUpdate would query this journal to fetch all such indexed paths between the 2 checkpoints > ## Based on the indexed path data it would invoke the {{IndexUpdater}} to update the index for that path > ## Merge the index updates > h4. Benefits > Such a design would have the following impact > # More work done as part of the write > # Marking of indexable content is distributed, hence less work to be done at indexing time > # Indexing can progress in batches > # The indexers can be called in parallel > h4. Journal Implementation > DocumentNodeStore currently has a built-in journal which is being used for NRT Indexing. That feature can be exposed as an API. > For scaling, this design is mostly required for the cluster case. So we could have both indexing approaches implemented and use the journal based support for DocumentNodeStore setups. Or we could look into implementing such a journal for SegmentNodeStore setups as well > h4. Open Points > * Journal support in SegmentNodeStore > * Handling deletes. > Detailed proposal - https://wiki.apache.org/jackrabbit/Journal%20based%20Async%20Indexer -- This message was sent by Atlassian JIRA (v7.6.3#76005)
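The commit-time/async-time split proposed in OAK-6513 can be sketched with a trivial in-memory journal. This is illustrative only: the interface and class names are hypothetical, and the real design would journal paths durably alongside the commit rather than in a process-local queue.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical stand-in for the proposed IndexUpdater: updates the index
// for one path.
interface IndexUpdater {
    void update(String path);
}

// Sketch of the flow: the commit path only records which paths need
// indexing (cheap), and a later async pass drains the journal in a batch
// and hands each path to the updater.
class JournalAsyncIndexer {
    private final Queue<String> journal = new ArrayDeque<>();
    private final IndexUpdater updater;

    JournalAsyncIndexer(IndexUpdater updater) { this.updater = updater; }

    // Session commit flow: mark the path, do no indexing work here.
    void onCommit(String path) { journal.add(path); }

    // AsyncIndexUpdate flow: process everything journaled since the last run.
    int runAsyncUpdate() {
        int indexed = 0;
        for (String path; (path = journal.poll()) != null; indexed++) {
            updater.update(path);
        }
        return indexed;
    }
}
```

Because marking happens per commit, the async pass never has to diff the whole repository between checkpoints; it only visits the recorded paths.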
[jira] [Updated] (OAK-7079) Enable oak-run indexing to connect to secondary node in Mongo replica set
[ https://issues.apache.org/jira/browse/OAK-7079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7079: -- Fix Version/s: (was: 1.14.0)
> Enable oak-run indexing to connect to secondary node in Mongo replica set
> ---
>
> Key: OAK-7079
> URL: https://issues.apache.org/jira/browse/OAK-7079
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: mongomk, run
> Reporter: Chetan Mehrotra
> Priority: Major
> Fix For: 1.16.0
>
> With OAK-6353, support for document order traversal based indexing has been added. Currently it connects to the Mongo primary.
> We need to test and validate whether it can be made to connect only to a Mongo secondary for the two cases below:
> # Pre-created checkpoint - here the checkpoint is created already and then oak-run *only* connects to the Mongo secondary
> # Online indexing - here oak-run would also create the checkpoint. However, it would need to be ensured that when it performs the document order traversal query, that query is handled by the Mongo secondary, and the oak-run logic ensures that the secondary node has the created checkpoint
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
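For the read side, one conceivable setup is to point oak-run at the replica set with a read preference in the MongoDB connection string (hostnames and replica-set name below are illustrative):

```text
mongodb://replica1:27017,replica2:27017/oak?replicaSet=rs0&readPreference=secondaryPreferred
```

`readPreference` is a standard MongoDB connection-string option; whether oak-run honours it for the traversal query, and whether the secondary has caught up to the created checkpoint, is exactly what this issue asks to validate.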
[jira] [Updated] (OAK-7328) Update DocumentNodeStore based OakFixture
[ https://issues.apache.org/jira/browse/OAK-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7328: -- Fix Version/s: (was: 1.14.0) > Update DocumentNodeStore based OakFixture > - > > Key: OAK-7328 > URL: https://issues.apache.org/jira/browse/OAK-7328 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: run >Reporter: Marcel Reutegger >Assignee: Marcel Reutegger >Priority: Minor > Fix For: 1.16.0 > > > The current OakFixtures using a DocumentNodeStore use a configuration / setup > which is different from what a default DocumentNodeStoreService would use. It > would be better if benchmarks run with a configuration close to a default > setup. The main differences identified are: > - Does not have a proper executor, which means some tasks are executed with > the same thread. > - Does not use a separate persistent cache for the journal (diff cache). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-5782) Test failure: persistentCache.BroadcastTest.broadcastTCP
[ https://issues.apache.org/jira/browse/OAK-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5782: -- Fix Version/s: (was: 1.14.0) > Test failure: persistentCache.BroadcastTest.broadcastTCP > - > > Key: OAK-5782 > URL: https://issues.apache.org/jira/browse/OAK-5782 > Project: Jackrabbit Oak > Issue Type: Bug > Components: cache, continuous integration, core >Affects Versions: 1.6.0 >Reporter: Hudson >Assignee: Thomas Mueller >Priority: Major > Labels: test-failure, ubuntu > Fix For: 1.16.0 > > > Jenkins CI failure: > https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/ > The build Apache Jackrabbit Oak matrix/Ubuntu Slaves=ubuntu,jdk=JDK 1.8 > (latest),nsfixtures=SEGMENT_TAR,profile=unittesting #1447 has failed. > First failed run: [Apache Jackrabbit Oak matrix/Ubuntu Slaves=ubuntu,jdk=JDK > 1.8 (latest),nsfixtures=SEGMENT_TAR,profile=unittesting > #1447|https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/Ubuntu%20Slaves=ubuntu,jdk=JDK%201.8%20(latest),nsfixtures=SEGMENT_TAR,profile=unittesting/1447/] > [console > log|https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/Ubuntu%20Slaves=ubuntu,jdk=JDK%201.8%20(latest),nsfixtures=SEGMENT_TAR,profile=unittesting/1447/console] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7261) DocumentStore: inconsistent behaviour for invalid Strings as document ID
[ https://issues.apache.org/jira/browse/OAK-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7261: -- Fix Version/s: (was: 1.14.0) > DocumentStore: inconsistent behaviour for invalid Strings as document ID > > > Key: OAK-7261 > URL: https://issues.apache.org/jira/browse/OAK-7261 > Project: Jackrabbit Oak > Issue Type: Bug > Components: documentmk, mongomk, rdbmk >Reporter: Julian Reschke >Assignee: Julian Reschke >Priority: Major > Fix For: 1.16.0 > > > - H2DB and Derby roundtrip any string > - PostgreSQL rejects the invalid string early > - DB2 and Oracle fail the same way as segment store (they persist the > replacement character) (see OAK-5506) > - MySQL and SQLServer fail the same way as DB2 and Oracle, but here it's the > RDBDocumentStore's fault, because the ID column is binary, and we transform > to byte sequences ourselves > - Mongo claims it saved the document, but upon lookup, returns something > with a different ID > Note that due to how RDB reads work, the returned document has the ID that > was requested, not what the DB actually contains. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7074) Ensure that all Documents are read with document order traversal indexing
[ https://issues.apache.org/jira/browse/OAK-7074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7074: -- Fix Version/s: (was: 1.14.0)
> Ensure that all Documents are read with document order traversal indexing
> ---
>
> Key: OAK-7074
> URL: https://issues.apache.org/jira/browse/OAK-7074
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: mongomk, run
> Reporter: Chetan Mehrotra
> Assignee: Chetan Mehrotra
> Priority: Major
> Fix For: 1.16.0
>
> With OAK-6353 support was added for document order traversal indexing. In this mode we open a DB cursor and try to read all documents from it using document order traversal. Such a cursor may remain open for a long time (2-4 hrs), and it is possible that documents get reordered by the Mongo storage engine. This results in 2 aspects that need thought:
> # Duplicate documents - the same document may appear more than once in the result set
> # Possibly missed documents - a document may get moved and miss becoming part of the cursor
> Both these aspects would need to be handled
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
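The duplicate-documents aspect could be handled by remembering which ids have already been processed while draining the cursor. A minimal illustrative sketch (not Oak code; the ids are made up):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Skip repeats caused by documents moving in the storage engine while a
// long-lived cursor is open: track the ids already seen and only count
// a document the first time it appears.
public class DedupCursorSketch {
    public static void main(String[] args) {
        // Simulated cursor result where "1:/a" was moved and re-read.
        List<String> cursor = Arrays.asList("0:/", "1:/a", "1:/b", "1:/a");
        Set<String> seen = new HashSet<>();
        int indexed = 0;
        for (String id : cursor) {
            if (seen.add(id)) {   // add() returns false for a duplicate
                indexed++;
            }
        }
        System.out.println(indexed); // 3
    }
}
```

For very large repositories, keeping every id in a HashSet may be impractical; a probabilistic structure such as a Bloom filter would trade memory for a small false-positive rate. The missed-documents aspect needs a different remedy, e.g. a follow-up pass over documents modified during the traversal window.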
[jira] [Updated] (OAK-8288) fix javadoc:javadoc for jdk >= 13
[ https://issues.apache.org/jira/browse/OAK-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-8288: -- Fix Version/s: (was: 1.14.0) > fix javadoc:javadoc for jdk >= 13 > - > > Key: OAK-8288 > URL: https://issues.apache.org/jira/browse/OAK-8288 > Project: Jackrabbit Oak > Issue Type: Bug >Reporter: Julian Reschke >Assignee: Julian Reschke >Priority: Minor > Labels: candidate_oak_1_10 > Fix For: 1.16.0 > > Attachments: JavaDocHtmlHeaderTest.java > > > Javadoc in JDK 13 makes additional HTML validity checks: > * nesting of headlines ( after is an error) > * empty tags -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-3598) Export cache related classes for usage in other oak bundle
[ https://issues.apache.org/jira/browse/OAK-3598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-3598: -- Fix Version/s: (was: 1.14.0)
> Export cache related classes for usage in other oak bundle
> ---
>
> Key: OAK-3598
> URL: https://issues.apache.org/jira/browse/OAK-3598
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: cache
> Reporter: Chetan Mehrotra
> Priority: Major
> Labels: tech-debt
> Fix For: 1.16.0
>
> For OAK-3092 oak-lucene would need to access classes from the {{org.apache.jackrabbit.oak.cache}} package. For now it's limited to {{CacheStats}}, to expose the cache related statistics.
> This task is meant to determine the steps needed to export the package:
> * Update the pom.xml to export the package
> * Review the current set of classes to see if any need to be revised
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
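For the pom.xml step, a sketch of what the export could look like with the maven-bundle-plugin; the package version attribute here is illustrative, not the value the project would necessarily choose:

```xml
<!-- Add the cache package to the bundle's OSGi exports. -->
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <configuration>
    <instructions>
      <Export-Package>
        org.apache.jackrabbit.oak.cache;version="1.0.0"
      </Export-Package>
    </instructions>
  </configuration>
</plugin>
```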
[jira] [Updated] (OAK-6408) Review package exports for o.a.j.oak.plugins.index.*
[ https://issues.apache.org/jira/browse/OAK-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6408: -- Fix Version/s: (was: 1.14.0) > Review package exports for o.a.j.oak.plugins.index.* > > > Key: OAK-6408 > URL: https://issues.apache.org/jira/browse/OAK-6408 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core, indexing >Reporter: angela >Priority: Major > Fix For: 1.16.0 > > > while working on OAK-6304 and OAK-6355, i noticed that the > _o.a.j.oak.plugins.index.*_ contains both internal api/utilities and > implementation details which get equally exported (though without having any > package export version set). > in the light of the modularization effort, i would like to suggest that we > try to sort that out and separate the _public_ parts from implementation > details. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6766) Convert oak-lucene to OSGi R6 annotations
[ https://issues.apache.org/jira/browse/OAK-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6766: -- Fix Version/s: (was: 1.14.0) > Convert oak-lucene to OSGi R6 annotations > - > > Key: OAK-6766 > URL: https://issues.apache.org/jira/browse/OAK-6766 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: lucene >Reporter: Robert Munteanu >Priority: Major > Fix For: 1.16.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-2182) Specify collection to be used by Solr index
[ https://issues.apache.org/jira/browse/OAK-2182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-2182: -- Fix Version/s: (was: 1.14.0)
> Specify collection to be used by Solr index
> ---
>
> Key: OAK-2182
> URL: https://issues.apache.org/jira/browse/OAK-2182
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: solr
> Affects Versions: 1.1.0
> Reporter: Tommaso Teofili
> Assignee: Tommaso Teofili
> Priority: Major
> Fix For: 1.16.0
>
> Currently all the information needed to hit a Solr server is held by the singleton SolrServerProvider, while there are use cases where more than one query index definition for a Solr index may be created, targeting different content. It'd therefore be good to be able to specify which collection should be used by each of these indexes.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6387) Building an index (new index + reindex): temporarily store blob references
[ https://issues.apache.org/jira/browse/OAK-6387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6387: -- Fix Version/s: (was: 1.14.0) > Building an index (new index + reindex): temporarily store blob references > -- > > Key: OAK-6387 > URL: https://issues.apache.org/jira/browse/OAK-6387 > Project: Jackrabbit Oak > Issue Type: Bug > Components: lucene, query >Reporter: Thomas Mueller >Priority: Major > Fix For: 1.16.0 > > > If reindexing a Lucene index takes multiple days, and if datastore garbage > collection (DSGC) is run during that time, then DSGC may remove binaries of > that index because they are not referenced. > It would be good if all binaries that are needed, and that are older than > (for example) one hour, are referenced during reindexing (for example in a > temporary location). So that DSGC will not remove them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6773) Convert oak-store-composite to OSGi R6 annotations
[ https://issues.apache.org/jira/browse/OAK-6773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6773: -- Fix Version/s: (was: 1.14.0) > Convert oak-store-composite to OSGi R6 annotations > -- > > Key: OAK-6773 > URL: https://issues.apache.org/jira/browse/OAK-6773 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: composite >Reporter: Robert Munteanu >Priority: Major > Fix For: 1.16.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7457) "Covariant return type change detected" warnings with java10
[ https://issues.apache.org/jira/browse/OAK-7457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7457: -- Fix Version/s: (was: 1.14.0) > "Covariant return type change detected" warnings with java10 > > > Key: OAK-7457 > URL: https://issues.apache.org/jira/browse/OAK-7457 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: documentmk, segment-tar >Reporter: Julian Reschke >Assignee: Julian Reschke >Priority: Major > Fix For: 1.16.0 > > > We have quite a few warnings of type "Covariant return type change detected": > {noformat} > [INFO] > C:\projects\apache\oak\trunk\oak-store-document\src\main\java\org\apache\jackrabbit\oak\plugins\document\persistentCache\broadcast\TCPBroadcaster.java:327: > Covariant return type change detected: java.nio.Buffer > java.nio.ByteBuffer.flip() has been changed to java.nio.ByteBuffer > java.nio.ByteBuffer.flip() > [INFO] > C:\projects\apache\oak\trunk\oak-store-document\src\main\java\org\apache\jackrabbit\oak\plugins\document\persistentCache\broadcast\UDPBroadcaster.java:135: > Covariant return type change detected: java.nio.Buffer > java.nio.ByteBuffer.limit(int) has been changed to java.nio.ByteBuffer > java.nio.ByteBuffer.limit(int) > [INFO] > C:\projects\apache\oak\trunk\oak-store-document\src\main\java\org\apache\jackrabbit\oak\plugins\document\persistentCache\broadcast\UDPBroadcaster.java:138: > Covariant return type change detected: java.nio.Buffer > java.nio.ByteBuffer.position(int) has been changed to java.nio.ByteBuffer > java.nio.ByteBuffer.position(int) > [INFO] > C:\projects\apache\oak\trunk\oak-store-document\src\main\java\org\apache\jackrabbit\oak\plugins\document\persistentCache\broadcast\TCPBroadcaster.java:226: > Covariant return type change detected: java.nio.Buffer > java.nio.ByteBuffer.position(int) has been changed to java.nio.ByteBuffer > java.nio.ByteBuffer.position(int) > [INFO] > 
C:\projects\apache\oak\trunk\oak-store-document\src\main\java\org\apache\jackrabbit\oak\plugins\document\persistentCache\broadcast\InMemoryBroadcaster.java:35: > Covariant return type change detected: java.nio.Buffer > java.nio.ByteBuffer.position(int) has been changed to java.nio.ByteBuffer > java.nio.ByteBuffer.position(int) > [INFO] > C:\projects\apache\oak\trunk\oak-store-document\src\main\java\org\apache\jackrabbit\oak\plugins\document\persistentCache\PersistentCache.java:519: > Covariant return type change detected: java.nio.Buffer > java.nio.ByteBuffer.limit(int) has been changed to java.nio.ByteBuffer > java.nio.ByteBuffer.limit(int) > [INFO] > C:\projects\apache\oak\trunk\oak-store-document\src\main\java\org\apache\jackrabbit\oak\plugins\document\persistentCache\PersistentCache.java:522: > Covariant return type change detected: java.nio.Buffer > java.nio.ByteBuffer.position(int) has been changed to java.nio.ByteBuffer > java.nio.ByteBuffer.position(int) > [INFO] > C:\projects\apache\oak\trunk\oak-store-document\src\main\java\org\apache\jackrabbit\oak\plugins\document\persistentCache\PersistentCache.java:535: > Covariant return type change detected: java.nio.Buffer > java.nio.ByteBuffer.position(int) has been changed to java.nio.ByteBuffer > java.nio.ByteBuffer.position(int) > [INFO] > C:\projects\apache\oak\trunk\oak-segment-tar\src\main\java\org\apache\jackrabbit\oak\segment\data\SegmentDataV12.java:196: > Covariant return type change detected: java.nio.Buffer > java.nio.ByteBuffer.position(int) has been changed to java.nio.ByteBuffer > java.nio.ByteBuffer.position(int) > [INFO] > C:\projects\apache\oak\trunk\oak-segment-tar\src\main\java\org\apache\jackrabbit\oak\segment\data\SegmentDataV12.java:197: > Covariant return type change detected: java.nio.Buffer > java.nio.ByteBuffer.limit(int) has been changed to java.nio.ByteBuffer > java.nio.ByteBuffer.limit(int) > [INFO] > 
C:\projects\apache\oak\trunk\oak-segment-tar\src\main\java\org\apache\jackrabbit\oak\segment\data\SegmentDataUtils.java:57: > Covariant return type change detected: java.nio.Buffer > java.nio.ByteBuffer.position(int) has been changed to java.nio.ByteBuffer > java.nio.ByteBuffer.position(int) > [INFO] > C:\projects\apache\oak\trunk\oak-segment-tar\src\main\java\org\apache\jackrabbit\oak\segment\data\SegmentDataUtils.java:58: > Covariant return type change detected: java.nio.Buffer > java.nio.ByteBuffer.limit(int) has been changed to java.nio.ByteBuffer > java.nio.ByteBuffer.limit(int) > [INFO] > C:\projects\apache\oak\trunk\oak-segment-tar\src\main\java\org\apache\jackrabbit\oak\segment\file\tar\index\IndexWriter.java:110: > Covariant return type change detected: java.nio.Buffer > java.nio.ByteBuffer.position(int) has been changed to java.nio.ByteBuffer >
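The usual source-compatible fix for these warnings is to make the call through the `Buffer` supertype, so the compiled call site binds to the Java 8 signature. A minimal example:

```java
import java.nio.Buffer;
import java.nio.ByteBuffer;

public class BufferCompat {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8);
        buf.putInt(42);
        // Cast to Buffer so the call binds to Buffer.flip(). Compiling
        // buf.flip() on JDK 9+ records the covariant ByteBuffer.flip()
        // descriptor, which does not exist on Java 8 and would throw
        // NoSuchMethodError there; the cast avoids that.
        ((Buffer) buf).flip();
        System.out.println(buf.getInt()); // prints 42
    }
}
```

The same cast applies to `limit(int)` and `position(int)` at the locations the warnings list.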
[jira] [Updated] (OAK-6844) Consistency checker Directory value is always ":data"
[ https://issues.apache.org/jira/browse/OAK-6844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6844: -- Fix Version/s: (was: 1.14.0)
> Consistency checker Directory value is always ":data"
> ---
>
> Key: OAK-6844
> URL: https://issues.apache.org/jira/browse/OAK-6844
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: lucene
> Affects Versions: 1.7.9
> Reporter: Paul Chibulcuteanu
> Assignee: Thomas Mueller
> Priority: Minor
> Fix For: 1.16.0
>
> When running a _fullCheck_ consistency check from the Lucene Index statistics MBean, the _Directory_ result is always _:data_. See below:
> {code}
> /oak:index/lucene => VALID
> Size : 42.3 MB
> Directory : :data
> Size : 42.3 MB
> Num docs : 159132
> CheckIndex status : true
> Time taken : 3.544 s
> {code}
> I'm not really sure what information should be put here, but the _:data_ value is confusing.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-5958) Document Metrics related classes and interfaces
[ https://issues.apache.org/jira/browse/OAK-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5958: -- Fix Version/s: (was: 1.14.0)
> Document Metrics related classes and interfaces
> ---
>
> Key: OAK-5958
> URL: https://issues.apache.org/jira/browse/OAK-5958
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: core
> Reporter: Michael Dürig
> Assignee: Chetan Mehrotra
> Priority: Major
> Labels: documentation, technical_debt
> Fix For: 1.16.0
>
> The Metrics related classes and interfaces in {{org.apache.jackrabbit.oak.stats}} and {{org.apache.jackrabbit.oak.plugins.metric}} are largely undocumented. Specifically, it is not immediately clear how they should be used, how a new {{Stats}} instance should be added, what effect this would have, and how it would (or would not) be exposed (e.g. via JMX).
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7043) Collect SegmentStore stats as part of status zip
[ https://issues.apache.org/jira/browse/OAK-7043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7043: -- Fix Version/s: (was: 1.14.0)
> Collect SegmentStore stats as part of status zip
> ---
>
> Key: OAK-7043
> URL: https://issues.apache.org/jira/browse/OAK-7043
> Project: Jackrabbit Oak
> Issue Type: New Feature
> Components: segment-tar
> Reporter: Chetan Mehrotra
> Priority: Major
> Labels: monitoring, production
> Fix For: 1.16.0
>
> Many times while investigating an issue we ask the customer to provide the size of the segmentstore, and at times a listing of the segmentstore directory. It would be useful to have an InventoryPrinter for the SegmentStore which can include:
> * Size of the segment store
> * Listing of the segment store directory
> * Possibly the tail of journal.log
> * Possibly some stats/info from the index files stored in tar files
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6897) XPath query: option to _not_ convert "or" to "union"
[ https://issues.apache.org/jira/browse/OAK-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6897: -- Fix Version/s: (was: 1.14.0)
> XPath query: option to _not_ convert "or" to "union"
> ---
>
> Key: OAK-6897
> URL: https://issues.apache.org/jira/browse/OAK-6897
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: query
> Reporter: Thomas Mueller
> Assignee: Thomas Mueller
> Priority: Trivial
> Fix For: 1.16.0
>
> Right now, all XPath queries that contain "or" of the form "@a=1 or @b=2" are converted to SQL-2 "union". In some cases this is a problem, especially in combination with "order by @jcr:score desc".
> Now that SQL-2 "or" conditions can be converted to union (depending on whether union has a lower cost), it is no longer strictly needed to do the union conversion in the XPath conversion. Or we could at least emit the different SQL-2 queries and take the one with the lowest cost.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6628) More precise indexRules support via filtering criteria on property
[ https://issues.apache.org/jira/browse/OAK-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6628: -- Fix Version/s: (was: 1.14.0)
> More precise indexRules support via filtering criteria on property
> ---
>
> Key: OAK-6628
> URL: https://issues.apache.org/jira/browse/OAK-6628
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: lucene
> Reporter: Chetan Mehrotra
> Priority: Major
> Fix For: 1.16.0
>
> For the Lucene index we currently support indexRules based on nodetype. Here the recommendation is that users use the most precise nodeType/mixinType to target the indexing rule, so that only relevant nodes are indexed.
> For many Sling based applications, lots of content is nt:unstructured and uses the {{sling:resourceType}} property to distinguish the various nt:unstructured nodes. Currently it's not possible to target an index definition to index only those nt:unstructured nodes which have a specific {{sling:resourceType}}, which makes it harder to provide more precise index definitions.
> To help such cases we can generalize the indexRule support via a filtering criterion:
> {noformat}
> activityIndex
>   - type = "lucene"
>   + indexRules
>     + nt:unstructured
>       - filter-property = "sling:resourceType"
>       - filter-value = "app/activitystreams/components/activity"
>       + properties
>         - jcr:primaryType = "nt:unstructured"
>         + verb
>           - propertyIndex = true
>           - name = "verb"
> {noformat}
> So an indexRule would have 2 more config properties:
> * filter-property - name of the property to match
> * filter-value - the value to match
> *Indexing*
> At indexing time, LuceneIndexEditor currently calls {{indexDefinition.getApplicableIndexingRule}}, passing it the NodeState. This checks only jcr:primaryType and jcr:mixins to find the matching rule.
> This logic would need to be extended to also check whether a filter-property is defined in the definition.
> If yes, then check whether the NodeState has that value.
> *Querying*
> On the query side we need to change the IndexPlanner, where it currently uses the query nodetype to find the matching indexRule. In addition it would need to pass on the property restrictions, and the rule should only be matched if the property restriction matches the filter.
> *Open Items*
> # How to handle a change in the filter-property value. I think we have a similar problem currently if an indexed node's nodeType gets changed: in such a case we do not remove it from the index. So we need to solve that for both.
> # Ensure that all places where rules are matched account for this filter concept
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
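The proposed matching logic can be sketched like this. The class is an illustrative stand-in (not Oak's IndexingRule); the property names mirror the config proposal above:

```java
import java.util.Map;

// A rule applies when the node's primary type matches and, if a filter is
// configured, the node's filter-property equals the filter-value.
public class FilteredRuleSketch {
    static boolean matches(Map<String, String> node,
                           String primaryType,
                           String filterProperty, String filterValue) {
        if (!primaryType.equals(node.get("jcr:primaryType"))) {
            return false;
        }
        if (filterProperty == null) {
            return true; // no filter configured: behave as today
        }
        return filterValue.equals(node.get(filterProperty));
    }

    public static void main(String[] args) {
        Map<String, String> activity = Map.of(
                "jcr:primaryType", "nt:unstructured",
                "sling:resourceType", "app/activitystreams/components/activity");
        Map<String, String> other = Map.of("jcr:primaryType", "nt:unstructured");

        System.out.println(matches(activity, "nt:unstructured",
                "sling:resourceType", "app/activitystreams/components/activity")); // true
        System.out.println(matches(other, "nt:unstructured",
                "sling:resourceType", "app/activitystreams/components/activity")); // false
    }
}
```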
[jira] [Updated] (OAK-7370) order by jcr:score desc doesn't work across union query created by optimizing OR clauses
[ https://issues.apache.org/jira/browse/OAK-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7370: -- Fix Version/s: (was: 1.14.0) > order by jcr:score desc doesn't work across union query created by optimizing > OR clauses > > > Key: OAK-7370 > URL: https://issues.apache.org/jira/browse/OAK-7370 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core >Reporter: Vikas Saurabh >Assignee: Vikas Saurabh >Priority: Major > Fix For: 1.16.0 > > > Merging of sub-queries created due to optimizing OR clauses doesn't work for > sorting on {{jcr:score}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-2727) NodeStateSolrServersObserver should be filtering path selectively
[ https://issues.apache.org/jira/browse/OAK-2727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-2727: -- Fix Version/s: (was: 1.14.0) > NodeStateSolrServersObserver should be filtering path selectively > - > > Key: OAK-2727 > URL: https://issues.apache.org/jira/browse/OAK-2727 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: solr >Affects Versions: 1.1.8 >Reporter: Tommaso Teofili >Assignee: Tommaso Teofili >Priority: Major > Labels: performance > Fix For: 1.16.0 > > > As discussed in OAK-2718 it'd be good to be able to selectively find Solr > indexes by path, as done in Lucene index, see also OAK-2570. > This would avoid having to do full diffs. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6303) Cache in CachingBlobStore might grow beyond configured limit
[ https://issues.apache.org/jira/browse/OAK-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6303: -- Fix Version/s: (was: 1.14.0)
> Cache in CachingBlobStore might grow beyond configured limit
> ---
>
> Key: OAK-6303
> URL: https://issues.apache.org/jira/browse/OAK-6303
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: blob, core
> Reporter: Julian Reschke
> Assignee: Thomas Mueller
> Priority: Major
> Fix For: 1.16.0
>
> Attachments: OAK-6303-test.diff, OAK-6303.diff
>
> It appears that depending on actual cache entry sizes, the {{CacheLIRS}} might grow beyond the configured limit.
> For {{RDBBlobStore}}, the limit is currently configured to 16MB, yet storing random 2MB entries appears to fill the cache with 64MB of data (according to its own stats).
> The attached test case reproduces this.
> (It seems this is caused by the fact that each of the 16 segments of the cache can hold 2 entries, no matter how big they are...)
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
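A back-of-the-envelope check of the suspected cause, using the numbers from the report; the 2-entries-per-segment minimum is the observed behaviour described above, not a documented guarantee:

```java
public class CacheBoundSketch {
    public static void main(String[] args) {
        int segments = 16;            // CacheLIRS segment count
        int minEntriesPerSegment = 2; // entries each segment retains regardless of size
        long entryMb = 2;             // 2 MB test entries
        long configuredLimitMb = 16;  // RDBBlobStore cache limit
        long worstCaseMb = (long) segments * minEntriesPerSegment * entryMb;
        System.out.println(worstCaseMb + " MB vs. configured limit of " + configuredLimitMb + " MB");
    }
}
```

This matches the 64MB figure reported by the cache's own stats.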
[jira] [Updated] (OAK-7922) Improve the operations and the reporting of the check command
[ https://issues.apache.org/jira/browse/OAK-7922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7922: -- Fix Version/s: (was: 1.14.0) > Improve the operations and the reporting of the check command > - > > Key: OAK-7922 > URL: https://issues.apache.org/jira/browse/OAK-7922 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Francesco Mari >Assignee: Francesco Mari >Priority: Major > Fix For: 1.16.0 > > Attachments: OAK-7922-01.patch > > > The check command allows a user to check for both the head and the > checkpoints. At the end of the execution the command outputs the consistent > revisions for the head and the individual checkpoints, if any is found. > Moreover, it prints an overall good revision. The consistent revisions for > the head and the checkpoints could all be different. If both the head and all > the checkpoints are assigned to a consistent revision, the overall good > revision is the oldest of those revisions. > I wonder how useful all of this information is to a user of the command: > - I might have a revision where a checkpoint is consistent, but the head is > not. In this case, I don't want to revert to that revision because my system > will probably be unstable due to the inconsistent head. > - The overall good revision might still be partially inconsistent due to the > way the command short-circuits the consistency check on the head and the > checkpoints. If I revert to the overall good revision, the head might still > be inconsistent or one of the checkpoints might be missing. > I propose to remove the {{\--checkpoints}} and the {{\--head}} flags and > define the behaviour of the command as follows. > - The check command checks one super-root at a time in its entirety (both > head and referenced checkpoints). > - The command exits as soon as a super-root is found where both the head and > all the checkpoints are consistent. 
> - While searching, the command might find a super-root with a consistent head but one or more inconsistent checkpoints. In this case, the first such revision is printed, specifying which checkpoints are inconsistent.
> - The user might specify a {{--no-checkpoints}} flag to skip checking the checkpoints in the steps above.
> The optimisations currently implemented by the check command can be maintained. We don't need to fully traverse the head or the checkpoints if a well-known corrupted path is still corrupted in the current iteration. The approach proposed above enables additional optimisations:
> - Since checkpoints are immutable, the command doesn't need to traverse a checkpoint that was inspected before. This is true regardless of the consistency of the checkpoint.
> - If a super-root includes a checkpoint that was previously determined to be corrupt, the command can skip that super-root without further inspection.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
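The proposed search, including the skip optimisation for known-corrupt checkpoints, can be modelled roughly as follows; the types are illustrative stand-ins, not the segment-tar internals:

```java
import java.util.*;

// Walk super-roots from newest to oldest and stop at the first one where the
// head and every referenced checkpoint are consistent, skipping any revision
// that references a checkpoint already known to be corrupt.
public class CheckSearchSketch {
    record SuperRoot(String revision, boolean headOk, Map<String, Boolean> checkpoints) {}

    static Optional<String> findGoodRevision(List<SuperRoot> newestFirst) {
        Set<String> knownCorrupt = new HashSet<>();
        for (SuperRoot r : newestFirst) {
            // Skip without inspection if a referenced checkpoint is known corrupt.
            if (!Collections.disjoint(knownCorrupt, r.checkpoints().keySet())) {
                continue;
            }
            r.checkpoints().forEach((cp, ok) -> { if (!ok) knownCorrupt.add(cp); });
            boolean allCheckpointsOk = r.checkpoints().values().stream().allMatch(ok -> ok);
            if (r.headOk() && allCheckpointsOk) {
                return Optional.of(r.revision());
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        List<SuperRoot> revisions = List.of(
                new SuperRoot("r3", true, Map.of("cp1", false)),  // inconsistent checkpoint
                new SuperRoot("r2", false, Map.of("cp0", true)),  // inconsistent head
                new SuperRoot("r1", true, Map.of("cp0", true)));  // fully consistent
        System.out.println(findGoodRevision(revisions).orElse("none")); // r1
    }
}
```

Caching checkpoint verdicts also covers the immutability optimisation: a checkpoint inspected once never needs to be traversed again.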
[jira] [Updated] (OAK-6166) Support versioning in the composite node store
[ https://issues.apache.org/jira/browse/OAK-6166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6166: -- Fix Version/s: (was: 1.14.0) > Support versioning in the composite node store > -- > > Key: OAK-6166 > URL: https://issues.apache.org/jira/browse/OAK-6166 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: composite >Reporter: Tomek Rękawek >Priority: Minor > Fix For: 1.16.0 > > > The mount info provider should affect the versioning code as well, so version > histories for the mounted paths are stored separately. Similarly to what we > have in the indexing, let's store the mounted version histories under: > /jcr:system/jcr:versionStorage/:oak:mount-MOUNTNAME -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-5159) Killing the process may stop async index update for up to 30 minutes, for DocumentStore (MongoDB, RDB)
[ https://issues.apache.org/jira/browse/OAK-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5159: -- Fix Version/s: (was: 1.14.0)
> Killing the process may stop async index update for up to 30 minutes, for DocumentStore (MongoDB, RDB)
> ---
>
> Key: OAK-5159
> URL: https://issues.apache.org/jira/browse/OAK-5159
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: indexing
> Reporter: Thomas Mueller
> Priority: Major
> Labels: resilience
> Fix For: 1.16.0
>
> Same as OAK-2108, when using a DocumentStore based repository (MongoDB, RDBMK). This is also a problem in the single-cluster-node case, not just when using multiple cluster nodes.
> When killing a node that is running the async index update, that async index update will not run for up to 15 minutes, because the lease time is set to 15 minutes.
> We could probably use Oak / Sling Discovery to improve the situation.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-5927) Load excerpt lazily
[ https://issues.apache.org/jira/browse/OAK-5927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5927: -- Fix Version/s: (was: 1.14.0)
> Load excerpt lazily
> ---
>
> Key: OAK-5927
> URL: https://issues.apache.org/jira/browse/OAK-5927
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: lucene
> Reporter: Chetan Mehrotra
> Priority: Major
> Labels: performance
> Fix For: 1.16.0
>
> Currently LucenePropertyIndex loads the excerpt eagerly, in batch, as part of the loadDocs call. The loadDocs batch size doubles starting from 50 (max 100k) as more data is read.
> We should look into ways to load the excerpt lazily, as and when the caller asks for it.
> Note that currently the excerpts are only loaded when the query requests an excerpt, i.e. there is a not-null property restriction for {{rep:excerpt}}.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6309) Not always convert XPath "primaryType in a, b" to union
[ https://issues.apache.org/jira/browse/OAK-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6309: -- Fix Version/s: (was: 1.14.0) > Not always convert XPath "primaryType in a, b" to union > --- > > Key: OAK-6309 > URL: https://issues.apache.org/jira/browse/OAK-6309 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: query >Reporter: Thomas Mueller >Assignee: Thomas Mueller >Priority: Critical > Fix For: 1.16.0 > > > Currently, queries with multiple primary types are always converted to a > "union", but this is not always the best solution. The main problem is that > results are not sorted by score as expected. Example: > {noformat} > /jcr:root/content//element(*, nt:hierarchyNode)[jcr:contains(., 'abc') > and (@jcr:primaryType = 'acme:Page' or @jcr:primaryType = 'acme:Asset')] > {noformat} > This is currently converted to a union, even if the same index is used for > both subqueries (assuming there is an index on nt:hierarchyNode). > A workaround is to use: > {noformat} > /jcr:root/content//element(*, nt:hierarchyNode)[jcr:contains(., 'abc') > and (./@jcr:primaryType = 'acme:Page' or ./@jcr:primaryType = 'acme:Asset')] > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-5739) Misleading traversal warning for spellcheck queries without index
[ https://issues.apache.org/jira/browse/OAK-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5739: -- Fix Version/s: (was: 1.14.0) > Misleading traversal warning for spellcheck queries without index > - > > Key: OAK-5739 > URL: https://issues.apache.org/jira/browse/OAK-5739 > Project: Jackrabbit Oak > Issue Type: Bug > Components: query >Reporter: Thomas Mueller >Assignee: Thomas Mueller >Priority: Major > Fix For: 1.16.0 > > > In OAK-4313 we avoid traversal for native queries, but we see in some cases > traversal warnings as follows: > {noformat} > org.apache.jackrabbit.oak.query.QueryImpl query plan > [nt:base] as [a] /* traverse "" where (spellcheck([a], 'NothingToFind')) > and (issamenode([a], [/])) */ > org.apache.jackrabbit.oak.query.QueryImpl Traversal query (query without > index): > select [jcr:path], [jcr:score], [rep:spellcheck()] from [nt:base] as a where > spellcheck('NothingToFind') > and issamenode(a, '/') > /* xpath: /jcr:root > [rep:spellcheck('NothingToFind')]/(rep:spellcheck()) */; > consider creating an index > {noformat} > This warning is misleading. If no index is available, then either the query > should fail, or the warning should say that the query result is not correct > because traversal is used. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-3919) Properly manage APIs / SPIs intended for public consumption
[ https://issues.apache.org/jira/browse/OAK-3919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-3919: -- Fix Version/s: (was: 1.14.0) > Properly manage APIs / SPIs intended for public consumption > --- > > Key: OAK-3919 > URL: https://issues.apache.org/jira/browse/OAK-3919 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Michael Dürig >Priority: Major > Labels: modularization, technical_debt > Fix For: 1.16.0 > > > This is a follow up to OAK-3842, which removed package export declarations > for all packages that we either do not want to be used outside of Oak or that > are not stable enough yet. > This issue is to identify those APIs and SPIs of Oak that we actually *want* > to export and to refactor those such that we *can* export them. > Candidates that are currently used from upstream projects I know of are: > {code} > org.apache.jackrabbit.oak.plugins.observation > org.apache.jackrabbit.oak.spi.commit > org.apache.jackrabbit.oak.spi.state > org.apache.jackrabbit.oak.commons > org.apache.jackrabbit.oak.plugins.index.lucene > {code} > I suggest creating subtasks for those we want to go forward with. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7321) Test failure: DocumentNodeStoreIT.modifiedResetWithDiff
[ https://issues.apache.org/jira/browse/OAK-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7321: -- Fix Version/s: (was: 1.14.0) > Test failure: DocumentNodeStoreIT.modifiedResetWithDiff > --- > > Key: OAK-7321 > URL: https://issues.apache.org/jira/browse/OAK-7321 > Project: Jackrabbit Oak > Issue Type: Bug > Components: continuous integration, documentmk >Reporter: Hudson >Priority: Major > Fix For: 1.16.0 > > > No description is provided > The build Jackrabbit Oak #1295 has failed. > First failed run: [Jackrabbit Oak > #1295|https://builds.apache.org/job/Jackrabbit%20Oak/1295/] [console > log|https://builds.apache.org/job/Jackrabbit%20Oak/1295/console] > {noformat} > org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: Configured > cluster node id 1 already in use: machineId/instanceId do not match: > mac:0242454078e1//home/jenkins/jenkins-slave/workspace/Jackrabbit > Oak/oak-store-document != > mac:0242d5bcc8f8//home/jenkins/jenkins-slave/workspace/Jackrabbit > Oak/oak-store-document > at > org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreIT.modifiedResetWithDiff(DocumentNodeStoreIT.java:65) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-5858) Lucene index may return the wrong result if path is excluded
[ https://issues.apache.org/jira/browse/OAK-5858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5858: -- Fix Version/s: (was: 1.14.0) > Lucene index may return the wrong result if path is excluded > > > Key: OAK-5858 > URL: https://issues.apache.org/jira/browse/OAK-5858 > Project: Jackrabbit Oak > Issue Type: Bug > Components: lucene >Reporter: Thomas Mueller >Priority: Major > Fix For: 1.16.0 > > > If a query uses a Lucene index that has "excludedPaths", the query result may > be wrong (not contain all matching nodes). This is the case even if there is a > property index available for the queried property. Example: > {noformat} > Indexes: > /oak:index/resourceType/type = "property" > /oak:index/lucene/type = "lucene" > /oak:index/lucene/excludedPaths = ["/etc"] > /oak:index/lucene/indexRules/nt:base/properties/resourceType > Query: > /jcr:root/etc//*[jcr:like(@resourceType, "x%y")] > Index cost: > cost for /oak:index/resourceType is 1602.0 > cost for /oak:index/lucene is 1001.0 > Result: > (empty) > Expected result: > /etc/a > /etc/b > {noformat} > Here, the lucene index is picked, even though the query explicitly queries > for /etc, and the lucene index has this path excluded. > I think the lucene index should not be picked when the index does not > match the query path. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6098) Build timeout
[ https://issues.apache.org/jira/browse/OAK-6098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6098: -- Fix Version/s: (was: 1.14.0) > Build timeout > - > > Key: OAK-6098 > URL: https://issues.apache.org/jira/browse/OAK-6098 > Project: Jackrabbit Oak > Issue Type: Bug > Components: continuous integration >Reporter: Hudson >Priority: Major > Labels: CI, jenkins, test-failure > Fix For: 1.16.0 > > > Jenkins CI failure: https://builds.apache.org/view/J/job/Jackrabbit%20Oak/ > The build Jackrabbit Oak #175 has failed. > First failed run: [Jackrabbit Oak > #175|https://builds.apache.org/job/Jackrabbit%20Oak/175/] [console > log|https://builds.apache.org/job/Jackrabbit%20Oak/175/console] > This build timed out on node https://builds.apache.org/computer/H10. Usually > the build takes around 40mins. > {code} > Build timed out (after 60 minutes). Marking the build as failed. > {code} > Also timed out on https://builds.apache.org/computer/cassandra5. See > https://builds.apache.org/view/J/job/Jackrabbit%20Oak/208/ > Also timed out on https://builds.apache.org/computer/ubuntu-eu2. See > https://builds.apache.org/job/Jackrabbit%20Oak/246/ > Also timed out on https://builds.apache.org/computer/ubuntu-2. See > https://builds.apache.org/job/Jackrabbit%20Oak/267/ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6165) Create compound index on _sdType and _sdMaxRevTime (RDB)
[ https://issues.apache.org/jira/browse/OAK-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6165: -- Fix Version/s: (was: 1.14.0) > Create compound index on _sdType and _sdMaxRevTime (RDB) > > > Key: OAK-6165 > URL: https://issues.apache.org/jira/browse/OAK-6165 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: rdbmk >Reporter: Chetan Mehrotra >Assignee: Julian Reschke >Priority: Major > Fix For: 1.16.0 > > > Clone of OAK-6129 for RDB, i.e. create an index on _sdType and > _sdMaxRevTime. This is required to run queries issued by the Revision GC > efficiently. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
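As a rough sketch only, the compound index could look like the DDL below. The table and column names (NODES, SDTYPE, SDMAXREVTIME) are assumptions based on how RDBDocumentStore commonly maps _sdType and _sdMaxRevTime; in practice the store's own schema handling would issue the statement, and exact names may differ per database.

```sql
-- Hedged sketch: table/column names are assumptions, not confirmed DDL.
CREATE INDEX NODES_SDTYPE ON NODES (SDTYPE, SDMAXREVTIME);
```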
[jira] [Updated] (OAK-6772) Convert oak-solr-core to OSGi R6 annotations
[ https://issues.apache.org/jira/browse/OAK-6772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6772: -- Fix Version/s: (was: 1.14.0) > Convert oak-solr-core to OSGi R6 annotations > > > Key: OAK-6772 > URL: https://issues.apache.org/jira/browse/OAK-6772 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: solr >Reporter: Robert Munteanu >Priority: Major > Fix For: 1.16.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-5792) TarMK: Implement tooling to repair broken nodes
[ https://issues.apache.org/jira/browse/OAK-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5792: -- Fix Version/s: (was: 1.14.0) > TarMK: Implement tooling to repair broken nodes > --- > > Key: OAK-5792 > URL: https://issues.apache.org/jira/browse/OAK-5792 > Project: Jackrabbit Oak > Issue Type: New Feature > Components: run, segment-tar >Reporter: Michael Dürig >Assignee: Andrei Dulceanu >Priority: Major > Labels: production, technical_debt, tooling > Fix For: 1.16.0 > > > With {{oak-run check}} we can determine the last good revision of a > repository and use it to manually roll back a corrupted segment store. > Complementary to this we should implement a tool to roll forward a broken > revision to a fixed new revision. Such a tool needs to detect which items are > affected by a corruption and replace these items with markers. With this the > repository could be brought back online and the markers could be used to > identify the locations in the tree where further manual action might be > needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6762) Convert oak-blob to OSGi R6 annotations
[ https://issues.apache.org/jira/browse/OAK-6762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6762: -- Fix Version/s: (was: 1.14.0) > Convert oak-blob to OSGi R6 annotations > --- > > Key: OAK-6762 > URL: https://issues.apache.org/jira/browse/OAK-6762 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: blob >Reporter: Robert Munteanu >Priority: Major > Fix For: 1.16.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-5776) Build failure: Cannot create directory : Filename too long
[ https://issues.apache.org/jira/browse/OAK-5776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5776: -- Fix Version/s: (was: 1.14.0) > Build failure: Cannot create directory : Filename too long > -- > > Key: OAK-5776 > URL: https://issues.apache.org/jira/browse/OAK-5776 > Project: Jackrabbit Oak > Issue Type: Bug > Components: continuous integration >Reporter: Hudson >Priority: Major > Labels: CI, build-failure, test-failure, windows > Fix For: 1.16.0 > > > Jenkins Windows CI failure: https://builds.apache.org/job/Oak-Win/ > The build Oak-Win/Windows slaves=Windows,jdk=JDK 1.7 (unlimited security) > 64-bit Windows only,nsfixtures=DOCUMENT_NS,profile=integrationTesting #473 > has failed. > First failed run: [Oak-Win/Windows slaves=Windows,jdk=JDK 1.7 (unlimited > security) 64-bit Windows > only,nsfixtures=DOCUMENT_NS,profile=integrationTesting > #473|https://builds.apache.org/job/Oak-Win/Windows%20slaves=Windows,jdk=JDK%201.7%20(unlimited%20security)%2064-bit%20Windows%20only,nsfixtures=DOCUMENT_NS,profile=integrationTesting/473/] > [console > log|https://builds.apache.org/job/Oak-Win/Windows%20slaves=Windows,jdk=JDK%201.7%20(unlimited%20security)%2064-bit%20Windows%20only,nsfixtures=DOCUMENT_NS,profile=integrationTesting/473/console] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6069) Modularisation of Oak
[ https://issues.apache.org/jira/browse/OAK-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6069: -- Fix Version/s: (was: 1.14.0) > Modularisation of Oak > - > > Key: OAK-6069 > URL: https://issues.apache.org/jira/browse/OAK-6069 > Project: Jackrabbit Oak > Issue Type: Epic > Components: core >Reporter: angela >Priority: Major > Labels: modularization > Fix For: 1.16.0 > > > Epic to track individual steps towards improved modularisation of Oak > Until now Oak modules are all released together, which has some drawbacks. > Work on the modules must be somewhat kept in lockstep. Releasing a fix for a > module means all other modules must be in a state that can be released as > well. For a user it may be desirable to just update a single module to get a > fix and not a complete set of Oak bundles. > The general approach for this epic should be to modularize only as needed and > not split everything. Obvious candidates are stable interfaces like Oak and > NodeStore API and NodeStore implementations. > This requires fixing potential circular dependencies between logical modules > we want to split up. We need a better distinction between the interface part > of the SPI and its implementations. Utilities and commons code must be > reviewed and potentially moved. > The oak-it related dependencies should be reconsidered so that a development > version of a NodeStore implementation can run integration tests. With the > current dependency setup a release of the NodeStore implementation is > required first to run the integration tests with those changes. > Some modules will probably be moved to the top-level and have their own > branches and tags. > To avoid branches it is important to always have trunk stable. Feature work > must happen on feature branches, in a forked module or protected with a > feature flag until it is ready for prime time. No more unstable work in trunk.
> The module owner is primarily responsible for module releases. At some point > there will no longer be a dedicated person responsible for 'the Oak release'. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6759) Convert oak-blob-cloud-azure to OSGi R6 annotations
[ https://issues.apache.org/jira/browse/OAK-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6759: -- Fix Version/s: (was: 1.14.0) > Convert oak-blob-cloud-azure to OSGi R6 annotations > --- > > Key: OAK-6759 > URL: https://issues.apache.org/jira/browse/OAK-6759 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: blob-cloud >Reporter: Robert Munteanu >Priority: Major > Fix For: 1.16.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-3336) Abstract a full text index implementation to be extended by Lucene and Solr
[ https://issues.apache.org/jira/browse/OAK-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-3336: -- Fix Version/s: (was: 1.14.0) > Abstract a full text index implementation to be extended by Lucene and Solr > --- > > Key: OAK-3336 > URL: https://issues.apache.org/jira/browse/OAK-3336 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene, query, solr >Reporter: Tommaso Teofili >Assignee: Tommaso Teofili >Priority: Major > Fix For: 1.16.0 > > > Current Lucene and Solr indexes implement quite a number of features according > to their specific APIs, design and implementation. However in the long run, > while differences in APIs and implementations will / can of course stay, the > difference in design can make it hard to keep those features on par. > It would therefore be nice to make it possible to abstract as much of the design and > implementation bits as possible into an abstract full text implementation which > Lucene and Solr would extend according to their specifics. > An example advantage of this is that index time aggregation will be > implemented only once and therefore any bugfixes and improvements in that > area will be done in the abstract implementation rather than having to do > that in two places. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-4934) Query shapes for JCR Query
[ https://issues.apache.org/jira/browse/OAK-4934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-4934: -- Fix Version/s: (was: 1.14.0) > Query shapes for JCR Query > -- > > Key: OAK-4934 > URL: https://issues.apache.org/jira/browse/OAK-4934 > Project: Jackrabbit Oak > Issue Type: Wish > Components: query >Reporter: Chetan Mehrotra >Priority: Major > Fix For: 1.16.0 > > > For certain requirements it would be good to have a notion of, and support for, deducing a > query shape [1] > {quote} > A combination of query predicate, sort, and projection specifications. > For the query predicate, only the structure of the predicate, including the > field names, are significant; the values in the query predicate are > insignificant. As such, a query predicate \{ type: 'food' \} is equivalent to > the query predicate \{ type: 'utensil' \} for a query shape. > {quote} > Transforming that to Oak, the shape should represent a JCR-SQL2 query > string (an xpath query gets transformed to SQL2) which is a *canonical* > representation of the actual query, ignoring the property restriction values. > For example, we have 2 queries > * SELECT * FROM [app:Asset] AS a WHERE a.[jcr:content/metadata/status] = > 'published' > * SELECT * FROM [app:Asset] AS a WHERE a.[jcr:content/metadata/status] = > 'disabled' > The query shape would be > SELECT * FROM [app:Asset] AS a WHERE a.[jcr:content/metadata/status] = 'A'. > The plan for a query having a given shape would remain the same irrespective of the values > of the property restrictions. Path restrictions can cause some differences, though > The shape can then be used for > * Stats Collection - Currently stats collection overflows if the same query > gets invoked with different values > * Allow configuring hints - See support in Mongo [2] for an example.
One can > specify via config that for a query of such and such shape this index should > be used > * Less noisy diagnostics - If a query gets invoked with a bad plan the QE can > log the warning once instead of logging it for each query invocation > involving different values. > [1] https://docs.mongodb.com/manual/reference/glossary/#term-query-shape > [2] https://docs.mongodb.com/manual/reference/command/planCacheSetFilter/ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
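The canonicalisation described above can be sketched by replacing literal values with a placeholder. `QueryShape` is a hypothetical name, and a real SQL2 canonicaliser would need to handle more than this regex does (numbers, IN lists, bind variables):

```java
import java.util.regex.Pattern;

// Hypothetical sketch of deriving a "query shape": replace every quoted
// literal in a JCR-SQL2 string with the placeholder 'A', so that queries
// differing only in property-restriction values map to the same shape.
class QueryShape {
    // Matches single-quoted SQL string literals, including '' escapes.
    private static final Pattern LITERAL = Pattern.compile("'(?:[^']|'')*'");

    static String of(String sql2) {
        return LITERAL.matcher(sql2).replaceAll("'A'");
    }
}
```

With this, the two example queries above ('published' vs. 'disabled') collapse to the single shape ending in = 'A', which can then key stats buckets, hint configuration, and once-per-shape warnings.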
[jira] [Updated] (OAK-5272) Expose BlobStore API to provide information whether blob id is content hashed
[ https://issues.apache.org/jira/browse/OAK-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5272: -- Fix Version/s: (was: 1.14.0) > Expose BlobStore API to provide information whether blob id is content hashed > - > > Key: OAK-5272 > URL: https://issues.apache.org/jira/browse/OAK-5272 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: blob >Reporter: Amit Jain >Assignee: Thomas Mueller >Priority: Major > Fix For: 1.16.0 > > > As per discussion in OAK-5253 it's better to have some information from the > BlobStore(s) whether the blob id can be solely relied upon for comparison. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-3150) Update Lucene to 6.x series
[ https://issues.apache.org/jira/browse/OAK-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-3150: -- Fix Version/s: (was: 1.14.0) > Update Lucene to 6.x series > --- > > Key: OAK-3150 > URL: https://issues.apache.org/jira/browse/OAK-3150 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene >Reporter: Chetan Mehrotra >Assignee: Tommaso Teofili >Priority: Major > Labels: technical_debt > Fix For: 1.16.0 > > > We should look into updating the Lucene version to 6.x. Java 8 is the minimum > Java version required > Note this is to be done for trunk only -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7423) Document the proc tree
[ https://issues.apache.org/jira/browse/OAK-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7423: -- Fix Version/s: (was: 1.14.0) > Document the proc tree > -- > > Key: OAK-7423 > URL: https://issues.apache.org/jira/browse/OAK-7423 > Project: Jackrabbit Oak > Issue Type: Documentation > Components: segment-tar >Reporter: Francesco Mari >Assignee: Francesco Mari >Priority: Major > Labels: technical_debt > Fix For: 1.16.0 > > > The proc tree, contributed in OAK-7416, lacks Javadoc and high-level > documentation. In particular, the exposed content structure should be > described in greater detail. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7998) [DirectBinaryAccess] Verify that binary exists in cloud before creating signed download URI
[ https://issues.apache.org/jira/browse/OAK-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7998: -- Fix Version/s: (was: 1.14.0) > [DirectBinaryAccess] Verify that binary exists in cloud before creating > signed download URI > --- > > Key: OAK-7998 > URL: https://issues.apache.org/jira/browse/OAK-7998 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: blob-cloud, blob-cloud-azure >Affects Versions: 1.10.0 >Reporter: Matt Ryan >Assignee: Matt Ryan >Priority: Major > Fix For: 1.16.0 > > > IIUC, the direct binary access download logic doesn't actually verify that > the requested blob is available in the cloud before creating the signed > download URI. It is possible that a user could request a download URI for a > blob that is "in the repo" but hasn't actually been uploaded yet. > We should verify this by uploading a new blob, preventing it being uploaded > to the cloud (retain in cache), and then request the download URI. We should > get a null back or get some other error or exception; if we get a URI it > would return an HTTP 404 if the blob is not actually uploaded yet (maybe this > would also be ok). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-4177) Tests on Mongo should fail if mongo is not available
[ https://issues.apache.org/jira/browse/OAK-4177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-4177: -- Fix Version/s: (was: 1.14.0) > Tests on Mongo should fail if mongo is not available > > > Key: OAK-4177 > URL: https://issues.apache.org/jira/browse/OAK-4177 > Project: Jackrabbit Oak > Issue Type: Test >Reporter: Davide Giannella >Assignee: Davide Giannella >Priority: Major > Fix For: 1.16.0 > > > Most if not all of the IT/UT that run against mongodb have an > assumption at class level that if mongodb is not available the tests > are skipped. > The tests should fail instead if mongodb is not available and we > explicitly said that, via the {{nsfixtures}} flags, we want to run the > tests against mongodb. > We currently have 4 fixtures/flags: DOCUMENT_NS, SEGMENT_MK, > DOCUMENT_RDB, MEMORY_NS. > https://github.com/apache/jackrabbit-oak/blob/f957b6787eb7a70eba454ceb1cae90bd4d47f15c/oak-commons/src/test/java/org/apache/jackrabbit/oak/commons/FixturesHelper.java#L46 > We may need to introduce a new Fixture/Flag that indicates > that we want to run the tests against Document using the in-memory > implementation. For example: DOCUMENT_NS_IM. > This will be useful on the Apache Jenkins as we don't have mongo there > but we still want to run all the possible Document NS tests against > the in-memory implementation when this is possible. > /cc [~mreutegg] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
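The proposed behaviour can be sketched as a small decision rule, simplified to the DOCUMENT_NS fixture (class and method names are hypothetical): skip only when mongo tests were not requested, and fail rather than skip when they were requested but mongodb is unreachable.

```java
import java.util.Set;

// Hypothetical sketch of the proposed fixture policy: if a MongoDB-backed
// fixture was explicitly requested via the nsfixtures flag, an unreachable
// MongoDB should fail the run instead of silently skipping the tests.
class FixturePolicy {
    enum Outcome { RUN, SKIP, FAIL }

    static Outcome decide(Set<String> requestedFixtures, boolean mongoAvailable) {
        if (!requestedFixtures.contains("DOCUMENT_NS")) {
            return Outcome.SKIP;              // mongo tests were not asked for
        }
        return mongoAvailable ? Outcome.RUN   // requested and reachable
                              : Outcome.FAIL; // requested but unreachable: fail loudly
    }
}
```

The key change from the current behaviour is the FAIL branch: today that case falls through to SKIP via a class-level assumption.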
[jira] [Updated] (OAK-6264) Test failure: IllegalArgumentException during upgrade tests
[ https://issues.apache.org/jira/browse/OAK-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6264: -- Fix Version/s: (was: 1.14.0) > Test failure: IllegalArgumentException during upgrade tests > > > Key: OAK-6264 > URL: https://issues.apache.org/jira/browse/OAK-6264 > Project: Jackrabbit Oak > Issue Type: Bug > Components: continuous integration, upgrade >Reporter: Hudson >Priority: Major > Labels: CI, jenkins, test-failure > Fix For: 1.16.0 > > > Jenkins CI failure: https://builds.apache.org/view/J/job/Jackrabbit%20Oak/ > The build Jackrabbit Oak #338 has failed. > First failed run: [Jackrabbit Oak > #338|https://builds.apache.org/job/Jackrabbit%20Oak/338/] [console > log|https://builds.apache.org/job/Jackrabbit%20Oak/338/console] > {noformat} > javax.jcr.RepositoryException: Failed to copy content > Stacktrace > java.lang.RuntimeException: javax.jcr.RepositoryException: Failed to copy > content > at > org.apache.jackrabbit.oak.upgrade.CopyCheckpointsTest.prepare(CopyCheckpointsTest.java:141) > Caused by: javax.jcr.RepositoryException: Failed to copy content > at > org.apache.jackrabbit.oak.upgrade.CopyCheckpointsTest.prepare(CopyCheckpointsTest.java:141) > Caused by: java.lang.IllegalArgumentException > at > org.apache.jackrabbit.oak.upgrade.CopyCheckpointsTest.prepare(CopyCheckpointsTest.java:141) > {noformat} > This affects > {noformat} > > org.apache.jackrabbit.oak.upgrade.CopyCheckpointsTest.validateMigration[Suppress > the warning] > > org.apache.jackrabbit.oak.upgrade.CopyCheckpointsTest.validateMigration[Source > data store defined, checkpoints migrated] > > org.apache.jackrabbit.oak.upgrade.IgnoreMissingBinariesTest.validateMigration > org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest.upgradeFrom10 > > org.apache.jackrabbit.oak.upgrade.cli.SegmentTarToSegmentTest.validateMigration > org.apache.jackrabbit.oak.upgrade.cli.SegmentToJdbcTest.validateMigration > > 
org.apache.jackrabbit.oak.upgrade.cli.SegmentToSegmentTarTest.validateMigration > > org.apache.jackrabbit.oak.upgrade.cli.SegmentToSegmentTarWithMissingDestinationDirectoryTest.validateMigration > > org.apache.jackrabbit.oak.upgrade.cli.SegmentToSegmentTest.validateMigration > > org.apache.jackrabbit.oak.upgrade.cli.SegmentToSegmentWithMissingDestinationDirectoryTest.validateMigration > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > references, no blobstores defined, segment -> segment] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > references, no blobstores defined, segment-tar -> segment-tar] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > references, no blobstores defined, segment -> segment-tar] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > embedded to embedded, no blobstores defined] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > embedded to external, no blobstores defined] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > references, src blobstore defined] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > external to embedded, src blobstore defined] > > org.apache.jackrabbit.oak.upgrade.cli.blob.CopyBinariesTest.validateMigration[Copy > external to external, src blobstore defined] > org.apache.jackrabbit.oak.upgrade.cli.blob.FbsToFbsTest.validateMigration > org.apache.jackrabbit.oak.upgrade.cli.blob.FbsToFdsTest.validateMigration > org.apache.jackrabbit.oak.upgrade.cli.blob.FdsToFbsTest.validateMigration > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-7725) Allow to have the users and groups created in the immutable part of the composite setup
[ https://issues.apache.org/jira/browse/OAK-7725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-7725: -- Fix Version/s: (was: 1.14.0) > Allow to have the users and groups created in the immutable part of the > composite setup > --- > > Key: OAK-7725 > URL: https://issues.apache.org/jira/browse/OAK-7725 > Project: Jackrabbit Oak > Issue Type: Story > Components: composite, security >Reporter: Tomek Rękawek >Assignee: Tomek Rękawek >Priority: Major > Fix For: 1.16.0 > > Attachments: OAK-7725-tests.patch > > > When running the Oak with Composite Node Store, the /home subtree is always > stored in the mutable, global part. Therefore, even if we switch the > immutable part (eg. /libs), the users and groups are not affected. > This setup makes sense for the users and groups created interactively. > However, we also have the service users, which usually are not created > interactively, but are part of the application and therefore are related to > the /libs part. For such users, it'd make sense to include them dynamically, > together with the application, read-only mount. > The proposal is to allow some part of the /home (eg. /home/service) to be > mounted from the read-only partial node store. Let's consider the constraints > we need to put in place (eg. it shouldn't be possible to have inter-mounts > group memberships) and how we can implement this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-5553) Index async index in a new lane without blocking the main lane
[ https://issues.apache.org/jira/browse/OAK-5553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5553: -- Fix Version/s: (was: 1.14.0) > Index async index in a new lane without blocking the main lane > -- > > Key: OAK-5553 > URL: https://issues.apache.org/jira/browse/OAK-5553 > Project: Jackrabbit Oak > Issue Type: New Feature > Components: indexing >Reporter: Chetan Mehrotra >Priority: Major > Fix For: 1.16.0 > > > Currently if an async index has to be reindexed for any reason, say an update of > the index definition, then this process blocks the indexing of other indexes on > that lane. > For e.g. if on the "async" lane we have 2 indexes /oak:index/fooIndex and > /oak:index/barIndex and fooIndex needs to be reindexed. In such a case > currently AsyncIndexUpdate would work on reindexing and until that > completes the other indexes do not receive any updates. If the reindexing takes say 1 > day then the other indexes would start lagging behind by that time. Note that NRT > indexing would help somewhat here. > To improve this we can implement something similar to what was done for > property index in OAK-1456 i.e. provide a way where > # an admin can trigger reindex of some async indexes > # those indexes are moved to a different lane and then reindexed > # post reindexing logic should then move them back to their original lane > Further this task can then be performed on a non-leader node as the indexes > would not be part of any active lane. Also we may implement it as part of > oak-run -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-8343) Allow queries to be delayed until an index is available
[ https://issues.apache.org/jira/browse/OAK-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-8343: -- Fix Version/s: (was: 1.14.0) > Allow queries to be delayed until an index is available > --- > > Key: OAK-8343 > URL: https://issues.apache.org/jira/browse/OAK-8343 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene, query >Reporter: Thomas Mueller >Assignee: Thomas Mueller >Priority: Major > Fix For: 1.16.0 > > Attachments: OAK-8343-b.patch, OAK-8343.patch > > > Currently, indexes are built asynchronously. That is, if an index definition > is added, the index is eventually built, but it's quite hard to say when it > is ready for queries. This can especially be a problem right after the initial > repository initialization, or after an upgrade. > In theory, system startup could be delayed until all indexes are ready (e.g. > set the "reindex" flag for important indexes, and at startup, wait until the > "reindex" flag is set to "false"). However, doing that would block threads > that _don't_ need an index. It would be better to only block threads that > actually do run queries. That would make startup deterministic, without > delaying other threads unnecessarily. > To solve the problem, we can add a property "waitForIndex" to the index > definition (just Lucene indexes are fine for now, as those are the important > asynchronous ones). If set, queries that potentially use those indexes > are delayed until the indexes are known to be ready. Reindexing would need to > remove that property (the same way it removes e.g. refresh or sets reindex to > false). For added safety, queries are only blocked as long as "reindex" is > also set to true (this ensures that waitForIndex is removed eventually), and > waiting should time out after 2 minutes, to ensure the feature doesn't block > startup forever if indexing fails for some reason. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
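The blocking behavior proposed above (wait until the index is ready, but give up after a timeout) can be sketched as a simple polling loop. This is an illustrative sketch only, not Oak's actual query engine code; the `IndexWaiter` name, the `BooleanSupplier` readiness check, and the 100 ms poll interval are all assumptions:

```java
import java.util.function.BooleanSupplier;

public class IndexWaiter {
    // Poll the readiness check until it reports true, or give up once the
    // deadline passes. Returns true if the index became ready in time.
    static boolean waitForIndex(BooleanSupplier indexReady, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!indexReady.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // timed out: proceed anyway rather than block forever
            }
            Thread.sleep(Math.min(100, timeoutMillis));
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        // An index that is already ready returns immediately.
        System.out.println(waitForIndex(() -> true, 1000));
        // An index that never becomes ready hits the timeout.
        System.out.println(waitForIndex(() -> false, 200));
    }
}
```

The timeout is the key safety valve mentioned in the issue: if indexing fails, queries degrade to running without the index instead of blocking startup indefinitely.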
[jira] [Updated] (OAK-5367) Strange path parsing
[ https://issues.apache.org/jira/browse/OAK-5367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5367: -- Fix Version/s: (was: 1.14.0) > Strange path parsing > > > Key: OAK-5367 > URL: https://issues.apache.org/jira/browse/OAK-5367 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Thomas Mueller >Assignee: Thomas Mueller >Priority: Major > Fix For: 1.16.0 > > Attachments: JcrPathParserTest.java > > > Incorrect handling of paths with "\{" was fixed in OAK-5260, but the behavior > of the JcrPathParser is still strange. For example: > * the root node, "/", is mapped to "/", and the current node, ".", is mapped > to "". But "/." is mapped to the current node (it should be the root node). > * "/parent/./childA2" is mapped to "/parent/childA2" (which is fine), but > "/parent/.}/childA2" is also mapped to "/parent/childA2". > * "\}\{" and "}\[" and "}}[" are mapped to the current node. So are ".[" and > "/[" and ".}". And "}\{test" is mapped to "}\{test", which is > inconsistent. > * "x\[1\]}" is mapped to "x". > All this weirdness should be resolved. Some cases are merely odd, but some > look like they could become a problem at some point ("}\{"). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-2538) Support index time aggregation in Solr index
[ https://issues.apache.org/jira/browse/OAK-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-2538: -- Fix Version/s: (was: 1.14.0) > Support index time aggregation in Solr index > > > Key: OAK-2538 > URL: https://issues.apache.org/jira/browse/OAK-2538 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: solr >Reporter: Tommaso Teofili >Assignee: Tommaso Teofili >Priority: Major > Labels: performance > Fix For: 1.16.0 > > > The Solr index is only able to do query-time aggregation, which "would not > perform well for multi term searches as each term involves a separate call > and with intersection cursor being used the operation might result in reading > up all match terms even when user accesses only first page"; therefore it'd > be good to implement index-time aggregation like in the Lucene index. (/cc > [~chetanm]) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-6412) Consider upgrading to newer Lucene versions
[ https://issues.apache.org/jira/browse/OAK-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-6412: -- Fix Version/s: (was: 1.14.0) > Consider upgrading to newer Lucene versions > --- > > Key: OAK-6412 > URL: https://issues.apache.org/jira/browse/OAK-6412 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene >Reporter: Tommaso Teofili >Assignee: Tommaso Teofili >Priority: Major > Fix For: 1.16.0 > > > A year ago I started prototyping the upgrade to Lucene 5 [1]; in the > meantime version 6 has come out (and 7 soon will). > I think it'd be very nice to upgrade the Lucene version to the latest; this would > give us improvements in space consumption and runtime performance. > If we want to upgrade to 6.0 or later, we need to consider upgrade > scenarios, because Lucene codecs are backward compatible only with the previous > major release: Lucene 6 can read Lucene 5 but not Lucene 4.x (4.7 in our > case). Therefore we would need to detect that when reading an index and > trigger reindexing using the new format. > Related to that, there's also a patch to upgrade the Solr index to version 5 (see > OAK-4318). > [1] : https://github.com/tteofili/jackrabbit-oak/tree/lucene5 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (OAK-5950) XPath: stack overflow for large combination of "or" and "and"
[ https://issues.apache.org/jira/browse/OAK-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5950: -- Fix Version/s: (was: 1.14.0) > XPath: stack overflow for large combination of "or" and "and" > - > > Key: OAK-5950 > URL: https://issues.apache.org/jira/browse/OAK-5950 > Project: Jackrabbit Oak > Issue Type: Bug > Components: query >Reporter: Thomas Mueller >Priority: Critical > Fix For: 1.16.0 > > > The following query results in a stack overflow: > {noformat} > xpath2sql /jcr:root/home//element(*,rep:Authorizable)[(@a1=1 or @a2=1 or > @a3=1 or @a4=1 or @a5=1 or @a6=1 or @a7=1 or @a8=1) > and (@b1=1 or @b2=1 or @b3=1 or @b4=1 or @b5=1 or @b6=1 or @b7=1 or @b8=1) > and (@c1=1 or @c2=1 or @c3=1 or @c4=1 or @c5=1 or @c6=1 or @c7=1 or @c8=1) > and (@d1=1 or @d2=1 or @d3=1 or @d4=1 or @d5=1 or @d6=1 or @d7=1 or @d8=1)] > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
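A common remedy for this class of stack overflow is to combine long "or"/"and" chains into a balanced tree rather than a deeply left-nested one, so that recursive processing depth grows logarithmically with the number of terms instead of linearly. A minimal sketch of that idea, assuming string-level condition combination (illustrative only, not the actual XPath-to-SQL-2 converter code):

```java
import java.util.ArrayList;
import java.util.List;

public class BalancedOr {
    // Combine conditions[from..to) into a balanced tree of "or" nodes.
    // Precondition: to - from >= 1. Depth is O(log n) instead of O(n).
    static String combine(List<String> conditions, int from, int to) {
        if (to - from == 1) {
            return conditions.get(from);
        }
        int mid = (from + to) / 2;
        return "(" + combine(conditions, from, mid)
                + " or " + combine(conditions, mid, to) + ")";
    }

    public static void main(String[] args) {
        List<String> cs = new ArrayList<>();
        for (int i = 1; i <= 4; i++) {
            cs.add("@a" + i + "=1");
        }
        // → ((@a1=1 or @a2=1) or (@a3=1 or @a4=1))
        System.out.println(combine(cs, 0, cs.size()));
    }
}
```

With a left-nested chain, 32 terms per group (as in the failing query) multiplied across several "and" groups can exceed the parser's recursion budget; a balanced shape keeps the depth small regardless of term count.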
[jira] [Updated] (OAK-2787) Faster multi threaded indexing / text extraction for binary content
[ https://issues.apache.org/jira/browse/OAK-2787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-2787: -- Fix Version/s: (was: 1.14.0) > Faster multi threaded indexing / text extraction for binary content > --- > > Key: OAK-2787 > URL: https://issues.apache.org/jira/browse/OAK-2787 > Project: Jackrabbit Oak > Issue Type: Wish > Components: lucene >Reporter: Chetan Mehrotra >Priority: Major > Fix For: 1.16.0 > > > With Lucene-based indexing, the indexing process is single threaded. This > hampers the indexing of binary content, as on a multi-processor system only a > single thread can be used to perform the indexing. > [~ianeboston] suggested a possible approach [1] involving 2-phase indexing: > # In the first phase, detect the nodes to be indexed and start the full > text extraction of the binary content. After extraction, save the binary token > stream back to the node as hidden data. In this phase the node properties > can still be indexed, and a marker field would be added to indicate that the > fulltext index is still pending. > # Later, in the second phase, look for all such Lucene docs and update them with > the saved token stream. > This would allow the text extraction logic to be decoupled from the Lucene > indexing logic. > [1] http://markmail.org/thread/2w5o4bwqsosb6esu -- This message was sent by Atlassian JIRA (v7.6.3#76005)
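The two-phase split described above can be sketched with standard `java.util.concurrent` primitives: extraction is fanned out to a thread pool, and a single indexing pass then consumes the cached results. All names here are hypothetical; a real implementation would run Tika extraction and write Lucene documents rather than manipulate strings:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TwoPhaseIndexing {
    // Phase 1: extract text from each binary in parallel.
    // Phase 2: a single indexing thread consumes the cached results in order.
    static List<String> index(List<String> binaries) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            Map<String, Future<String>> extracted = new LinkedHashMap<>();
            for (String b : binaries) {
                // Stand-in for expensive full-text extraction of the binary.
                extracted.put(b, pool.submit(() -> "text-of-" + b));
            }
            List<String> indexed = new ArrayList<>();
            for (Map.Entry<String, Future<String>> e : extracted.entrySet()) {
                // Future.get() blocks until that binary's extraction is done.
                indexed.add(e.getKey() + " -> " + e.getValue().get());
            }
            return indexed;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(index(Arrays.asList("doc-a", "doc-b")));
    }
}
```

The point of the decoupling is that only phase 2 touches the (single-threaded) Lucene writer, so the CPU-heavy extraction work in phase 1 scales with the number of cores.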
[jira] [Updated] (OAK-5980) Bad Join Query Plan Used
[ https://issues.apache.org/jira/browse/OAK-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5980: -- Fix Version/s: (was: 1.14.0) > Bad Join Query Plan Used > > > Key: OAK-5980 > URL: https://issues.apache.org/jira/browse/OAK-5980 > Project: Jackrabbit Oak > Issue Type: Bug > Components: query >Reporter: Thomas Mueller >Assignee: Thomas Mueller >Priority: Major > Fix For: 1.16.0 > > > For a join query where selectors are joined via ischildnode but can also > use an index, > the selectors sometimes use the index instead of the much less > expensive parent join. Example: > {noformat} > select [a].* from [nt:unstructured] as [a] > inner join [nt:unstructured] as [b] on ischildnode([b], [a]) > inner join [nt:unstructured] as [c] on ischildnode([c], [b]) > inner join [nt:unstructured] as [d] on ischildnode([d], [c]) > inner join [nt:unstructured] as [e] on ischildnode([e], [d]) > where [a].[classname] = 'letter' > and isdescendantnode([a], '/content') > and [c].[classname] = 'chapter' > and localname([b]) = 'chapters' > and [e].[classname] = 'list' > and localname([d]) = 'lists' > and [e].[path] = cast('/content/abc' as path) > {noformat} > The order of selectors is sometimes wrong (not e, d, c, b, a), but > more importantly, selectors c and a use the index on classname. -- This message was sent by Atlassian JIRA (v7.6.3#76005)