[jira] [Updated] (OAK-8783) Merge index definitions

2019-11-29 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8783:

Fix Version/s: 1.22.0

> Merge index definitions
> ---
>
> Key: OAK-8783
> URL: https://issues.apache.org/jira/browse/OAK-8783
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: indexing
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.22.0
>
> Attachments: OAK-8783-json-1.patch, OAK-8783-v1.patch
>
>
> If there are multiple versions of an index, e.g. asset-2-custom-2 and 
> asset-3, then oak-run should be able to merge them to asset-3-custom-1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8783) Merge index definitions

2019-11-29 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984859#comment-16984859
 ] 

Thomas Mueller commented on OAK-8783:
-

Good point! I will change the newObjectNotRespectingOrder test so that it 
doesn't expect any specific order.

> Merge index definitions
> ---
>
> Key: OAK-8783
> URL: https://issues.apache.org/jira/browse/OAK-8783
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Attachments: OAK-8783-json-1.patch, OAK-8783-v1.patch
>
>
> If there are multiple versions of an index, e.g. asset-2-custom-2 and 
> asset-3, then oak-run should be able to merge them to asset-3-custom-1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8783) Merge index definitions

2019-11-29 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984815#comment-16984815
 ] 

Thomas Mueller commented on OAK-8783:
-

[~ngupta] [~tihom88] [~fabrizio.fort...@gmail.com] could you please review 
OAK-8783-json-1.patch?

> Merge index definitions
> ---
>
> Key: OAK-8783
> URL: https://issues.apache.org/jira/browse/OAK-8783
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Attachments: OAK-8783-json-1.patch, OAK-8783-v1.patch
>
>
> If there are multiple versions of an index, e.g. asset-2-custom-2 and 
> asset-3, then oak-run should be able to merge them to asset-3-custom-1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8783) Merge index definitions

2019-11-29 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8783:

Attachment: OAK-8783-json-1.patch

> Merge index definitions
> ---
>
> Key: OAK-8783
> URL: https://issues.apache.org/jira/browse/OAK-8783
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Attachments: OAK-8783-json-1.patch, OAK-8783-v1.patch
>
>
> If there are multiple versions of an index, e.g. asset-2-custom-2 and 
> asset-3, then oak-run should be able to merge them to asset-3-custom-1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8783) Merge index definitions

2019-11-29 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984812#comment-16984812
 ] 

Thomas Mueller commented on OAK-8783:
-

One problem is that the Gson library doesn't preserve the order of child nodes:
https://stackoverflow.com/questions/6365851/how-to-keep-fields-sequence-in-gson-serialization

This is a problem because indexes in Oak do need to respect the order of child 
nodes for some features:
http://jackrabbit.apache.org/oak/docs/query/lucene.html
"The rules are looked up in the order of there entry under indexRules node 
(indexRule node itself is of type nt:unstructured which has orderable child 
nodes)" - "Order of property definition node is important as some properties 
are based on regular expressions"

Instead of Gson, we need to use a different serialization library, e.g. the Oak 
JsonObject. I will add the needed features and tests there first.
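
For illustration only, a minimal sketch of the underlying idea (hypothetical class, 
not the actual patch): backing the node representation with a LinkedHashMap keeps 
entries in insertion order, so child nodes are serialized in the same order in 
which they were added.

{noformat}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper, only to illustrate order preservation:
// LinkedHashMap iterates in insertion order, so children are written
// in the same order in which they were added.
class OrderedJsonNode {

    // values are assumed to be already-encoded JSON (no escaping here)
    private final Map<String, String> properties = new LinkedHashMap<>();
    private final Map<String, OrderedJsonNode> children = new LinkedHashMap<>();

    void setProperty(String name, String jsonValue) {
        properties.put(name, jsonValue);
    }

    OrderedJsonNode addChild(String name) {
        OrderedJsonNode child = new OrderedJsonNode();
        children.put(name, child);
        return child;
    }

    String toJson() {
        StringBuilder buff = new StringBuilder("{");
        String separator = "";
        for (Map.Entry<String, String> e : properties.entrySet()) {
            buff.append(separator).append('"').append(e.getKey()).append("\":").append(e.getValue());
            separator = ",";
        }
        for (Map.Entry<String, OrderedJsonNode> e : children.entrySet()) {
            buff.append(separator).append('"').append(e.getKey()).append("\":").append(e.getValue().toJson());
            separator = ",";
        }
        return buff.append('}').toString();
    }
}
{noformat}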

> Merge index definitions
> ---
>
> Key: OAK-8783
> URL: https://issues.apache.org/jira/browse/OAK-8783
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Attachments: OAK-8783-v1.patch
>
>
> If there are multiple versions of an index, e.g. asset-2-custom-2 and 
> asset-3, then oak-run should be able to merge them to asset-3-custom-1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8794) oak-solr-osgi does not build for Java 8 if Jackson libraries upgraded to 2.10.0

2019-11-26 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16982268#comment-16982268
 ] 

Thomas Mueller commented on OAK-8794:
-

Un-assigning from me right now.

> Would it be possible to update oak-parent/pom.xml to Jackson version 2.10.0 
> and then specify 2.9.10 in oak-solr-osgi?

[~teofili], do you know if this might work?

> oak-solr-osgi does not build for Java 8 if Jackson libraries upgraded to 
> 2.10.0
> ---
>
> Key: OAK-8794
> URL: https://issues.apache.org/jira/browse/OAK-8794
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: solr
>Affects Versions: 1.20.0
>Reporter: Matt Ryan
>Priority: Major
>
> If the Jackson version in {{oak-parent/pom.xml}} is updated from 2.9.10 to 
> 2.10.0, we get a build failure in {{oak-solr-osgi}} if we try to build with 
> Java 8.
> This is blocking OAK-8105 which in turn is blocking OAK-8607 and OAK-8104.  
> OAK-8105 is about updating {{AzureDataStore}} to the Azure version 12 SDK 
> which requires Jackson 2.10.0.
> Would it be possible to update {{oak-parent/pom.xml}} to Jackson version 
> 2.10.0 and then specify 2.9.10 in {{oak-solr-osgi}}?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (OAK-8794) oak-solr-osgi does not build for Java 8 if Jackson libraries upgraded to 2.10.0

2019-11-26 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller reassigned OAK-8794:
---

Assignee: (was: Thomas Mueller)

> oak-solr-osgi does not build for Java 8 if Jackson libraries upgraded to 
> 2.10.0
> ---
>
> Key: OAK-8794
> URL: https://issues.apache.org/jira/browse/OAK-8794
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: solr
>Affects Versions: 1.20.0
>Reporter: Matt Ryan
>Priority: Major
>
> If the Jackson version in {{oak-parent/pom.xml}} is updated from 2.9.10 to 
> 2.10.0, we get a build failure in {{oak-solr-osgi}} if we try to build with 
> Java 8.
> This is blocking OAK-8105 which in turn is blocking OAK-8607 and OAK-8104.  
> OAK-8105 is about updating {{AzureDataStore}} to the Azure version 12 SDK 
> which requires Jackson 2.10.0.
> Would it be possible to update {{oak-parent/pom.xml}} to Jackson version 
> 2.10.0 and then specify 2.9.10 in {{oak-solr-osgi}}?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8783) Merge index definitions

2019-11-22 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980259#comment-16980259
 ] 

Thomas Mueller commented on OAK-8783:
-

Attached a first patch (work in progress).

> Merge index definitions
> ---
>
> Key: OAK-8783
> URL: https://issues.apache.org/jira/browse/OAK-8783
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Attachments: OAK-8783-v1.patch
>
>
> If there are multiple versions of an index, e.g. asset-2-custom-2 and 
> asset-3, then oak-run should be able to merge them to asset-3-custom-1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8783) Merge index definitions

2019-11-22 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8783:

Attachment: OAK-8783-v1.patch

> Merge index definitions
> ---
>
> Key: OAK-8783
> URL: https://issues.apache.org/jira/browse/OAK-8783
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Attachments: OAK-8783-v1.patch
>
>
> If there are multiple versions of an index, e.g. asset-2-custom-2 and 
> asset-3, then oak-run should be able to merge them to asset-3-custom-1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (OAK-8783) Merge index definitions

2019-11-22 Thread Thomas Mueller (Jira)
Thomas Mueller created OAK-8783:
---

 Summary: Merge index definitions
 Key: OAK-8783
 URL: https://issues.apache.org/jira/browse/OAK-8783
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Thomas Mueller
Assignee: Thomas Mueller


If there are multiple versions of an index, e.g. asset-2-custom-2 and asset-3, 
then oak-run should be able to merge them to asset-3-custom-1.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8779) QueryImpl: indexPlan used for logging always is null

2019-11-21 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979313#comment-16979313
 ] 

Thomas Mueller commented on OAK-8779:
-

You are right.

I saw this as well some time ago, but so far I didn't log an issue.

I will add that to the technical debt list.

> QueryImpl: indexPlan used for logging always is null
> 
>
> Key: OAK-8779
> URL: https://issues.apache.org/jira/browse/OAK-8779
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: Julian Reschke
>Priority: Minor
>
>  
> {noformat}
> if (indexPlan != null && indexPlan.getPlanName() != null) {
>  indexName += "[" + indexPlan.getPlanName() + "]";
>  } {noformat}
>  
> (indexPlan always is null, maybe caused by code being moved around)
>  
> cc: [~chetanm] [~thomasm]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-6261) Log queries that sort by un-indexed properties

2019-11-19 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-6261:

Fix Version/s: (was: 1.22.0)

> Log queries that sort by un-indexed properties
> --
>
> Key: OAK-6261
> URL: https://issues.apache.org/jira/browse/OAK-6261
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
>
> Queries that can read many nodes, and sort by properties that are not 
> indexed, can be very slow. This includes for example fulltext queries.
> As a start, it might make sense to log an "info" level message (but avoid 
> logging the same message each time a query is run). Per configuration, this 
> could be turned to "warning".
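
For illustration only (a sketch, not Oak code; names are made up), one way to log 
a message only the first time a given query statement is seen, with a bounded 
memory footprint:

{noformat}
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

class LogOnce {

    private static final int MAX_REMEMBERED = 1000;

    // bounded, thread-safe set of statements that were already logged;
    // the oldest entry is dropped once the limit is reached
    private final Set<String> logged = Collections.newSetFromMap(
            Collections.synchronizedMap(new LinkedHashMap<String, Boolean>() {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                    return size() > MAX_REMEMBERED;
                }
            }));

    boolean shouldLog(String statement) {
        // true only the first time a statement is seen
        return logged.add(statement);
    }
}
{noformat}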



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-7300) Lucene Index: per-column selectivity to improve cost estimation

2019-11-19 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-7300:

Fix Version/s: (was: 1.22.0)

> Lucene Index: per-column selectivity to improve cost estimation
> ---
>
> Key: OAK-7300
> URL: https://issues.apache.org/jira/browse/OAK-7300
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene, query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> In OAK-6735 we have improved cost estimation for Lucene indexes, however the 
> following case is still not working as expected: a very common property is 
> indexed (many nodes have that property), and each value of that property is 
> more or less unique. In this case, the cost estimate is currently the total 
> number of documents that contain that property. Assuming the condition 
> "property is not null", this is correct; however, for the common case "property 
> = x" the estimated cost is far too high.
> A known workaround is to set the "costPerEntry" for the given index to a low 
> value, for example 0.2. However, this isn't a good solution, as it affects all 
> properties and queries.
> It would be good to be able to set the selectivity per property, for example 
> by specifying the number of distinct values, or (better yet) the average 
> number of entries for a given key (1 for unique values, 2 meaning that for each 
> distinct value there are two documents on average).
> That value can be set manually (cost override), and it can be set 
> automatically, e.g. when building the index, or updated from time to time 
> during the index update, using a cardinality estimation algorithm. That 
> doesn't have to be accurate; we could use a rough approximation such as 
> HyperBitBit.
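
For illustration only (not the planner code, names are made up), a sketch of how 
such a per-property hint could refine the estimate for an equality condition:

{noformat}
final class SelectivityEstimate {

    private SelectivityEstimate() {
    }

    /**
     * Rough cost estimate for a condition "property = x".
     *
     * @param docsWithProperty number of documents that contain the property
     *        (the current estimate, and still correct for "property is not null")
     * @param avgEntriesPerKey average number of documents per distinct value
     *        (1 for unique values, 2 if each value occurs twice on average)
     */
    static long estimateEqualityCost(long docsWithProperty, double avgEntriesPerKey) {
        long estimate = Math.round(avgEntriesPerKey);
        // never below 1, never above the "is not null" estimate
        return Math.min(docsWithProperty, Math.max(1L, estimate));
    }
}
{noformat}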



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-3219) Lucene IndexPlanner should also account for number of property constraints evaluated while giving cost estimation

2019-11-19 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-3219:

Fix Version/s: (was: 1.22.0)

> Lucene IndexPlanner should also account for number of property constraints 
> evaluated while giving cost estimation
> -
>
> Key: OAK-3219
> URL: https://issues.apache.org/jira/browse/OAK-3219
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Thomas Mueller
>Priority: Minor
>  Labels: performance
>
> Currently the cost returned by the Lucene index is a function of the number of 
> indexed documents present in the index. If the number of indexed entries is 
> high, this might reduce the chances of this index getting selected if some 
> property index also supports the property constraint.
> {noformat}
> /jcr:root/content/freestyle-cms/customers//element(*, cq:Page)
> [(jcr:content/@title = 'm' or jcr:like(jcr:content/@title, 'm%')) 
> and jcr:content/@sling:resourceType = '/components/page/customer’]
> {noformat}
> Consider above query with following index definition
> * A property index on resourceType
> * A Lucene index for cq:Page with properties {{jcr:content/title}}, 
> {{jcr:content/sling:resourceType}} indexed and also path restriction 
> evaluation enabled
> Now, what the two indexes can help with:
> # Property index
> ## Path restriction
> ## Property restriction on  {{sling:resourceType}}
> # Lucene index
> ## NodeType restriction
> ## Property restriction on  {{sling:resourceType}}
> ## Property restriction on  {{title}}
> ## Path restriction
> Now cost estimate currently works like this
> * Property index - {{f(indexedValueEstimate, estimateOfNodesUnderGivenPath)}}
> ** indexedValueEstimate - For 'sling:resourceType=foo' its the approximate 
> count for nodes having that as 'foo'
> ** estimateOfNodesUnderGivenPath - Its derived from an approximate estimation 
> of nodes present under given path
> * Lucene Index - {{f(totalIndexedEntries)}}
> As the cost function of the Lucene index is too simple, it does not reflect 
> reality. The following 2 changes can be done to make it better:
> * Given that the Lucene index can handle more constraints (4) than the 
> property index (2), the cost estimate returned by it should also reflect this. 
> This can be done by setting costPerEntry to 1/(number of property 
> restrictions evaluated)
> * Get the count for the queried property value - This is similar to what 
> PropertyIndex does and assumes that Lucene can provide that information at 
> O(1) cost. In case of multiple supported property restrictions, this can be 
> the minimum of all of them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-7374) Investigate changing the UUID generation algorithm / format to reduce index size, improve speed

2019-11-19 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-7374:

Fix Version/s: (was: 1.22.0)

> Investigate changing the UUID generation algorithm / format to reduce index 
> size, improve speed
> ---
>
> Key: OAK-7374
> URL: https://issues.apache.org/jira/browse/OAK-7374
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> UUIDs are currently randomly generated, which is bad for indexing, especially 
> read and write access, due to low locality.
> If we could add a time component, I think the index churn (amount of writes) 
> would shrink, and lookup would be faster.
> It should be fairly easy to verify if that's really true (create a 
> proof-of-concept, and measure).
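
For illustration only (a quick sketch of the proof-of-concept idea, not Oak code): 
an identifier whose most significant bits start with the current time, so that ids 
created close together in time also sort close together in an index. Note that this 
sketch does not set the standard UUID version/variant bits.

{noformat}
import java.security.SecureRandom;
import java.util.UUID;

class TimePrefixedIds {

    private static final SecureRandom RANDOM = new SecureRandom();

    static UUID newTimePrefixedId() {
        // 48 bits of milliseconds since the epoch in the high bits,
        // the remaining 80 bits random
        long millis = System.currentTimeMillis() & 0xFFFFFFFFFFFFL;
        long msb = (millis << 16) | (RANDOM.nextLong() & 0xFFFFL);
        long lsb = RANDOM.nextLong();
        return new UUID(msb, lsb);
    }
}
{noformat}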



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-6844) Consistency checker Directory value is always ":data"

2019-11-19 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-6844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-6844:

Fix Version/s: (was: 1.22.0)

> Consistency checker Directory value is always ":data"
> -
>
> Key: OAK-6844
> URL: https://issues.apache.org/jira/browse/OAK-6844
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.7.9
>Reporter: Paul Chibulcuteanu
>Assignee: Thomas Mueller
>Priority: Minor
>
> When running a _fullCheck_ consistency check from the Lucene Index statistics 
> MBean, the _Directory_ result is always _:data_.
> See below:
> {code}
> /oak:index/lucene => VALID
>   Size : 42.3 MB
> Directory : :data
>   Size : 42.3 MB
>   Num docs : 159132
>   CheckIndex status : true
> Time taken : 3.544 s
> {code}
> I'm not really sure what information should be put here, but the _:data_ 
> value is confusing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-6897) XPath query: option to _not_ convert "or" to "union"

2019-11-19 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-6897:

Fix Version/s: (was: 1.22.0)

> XPath query: option to _not_ convert "or" to "union"
> 
>
> Key: OAK-6897
> URL: https://issues.apache.org/jira/browse/OAK-6897
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Trivial
>
> Right now, all XPath queries that contain "or" of the form "@a=1 or @b=2" are 
> converted to SQL-2 "union". In some cases, this is a problem, especially in 
> combination with "order by @jcr:score desc".
> Now that SQL-2 "or" conditions can be converted to union (depending on whether 
> union has a lower cost), it is no longer strictly necessary to do the union 
> conversion in the XPath conversion. Or at least emit different SQL-2 queries 
> and take the one with the lowest cost.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-5787) BlobStore should be AutoCloseable

2019-11-15 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975071#comment-16975071
 ] 

Thomas Mueller commented on OAK-5787:
-

For DefaultSplitBlobStore, if both close() calls throw an exception, the first one 
is lost. I think a solution would be to use addSuppressed (available since Java 7):

{noformat}
+
+@Override
+public void close() throws Exception {
+Exception thrown = null;
+try {
+oldBlobStore.close();
+} catch (Exception ex) {
+thrown = ex;
+}
+try {
+newBlobStore.close();
+} catch (Exception ex) {
+if (thrown != null) {
+thrown.addSuppressed(ex);
+} else {
+thrown = ex;
+}
+}
+if (thrown != null) {
+throw thrown;
+}
+}
{noformat}
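
As a side note (not part of the patch), if both stores implement AutoCloseable, the 
same suppression behavior can be had with a nested try-with-resources statement; a 
minimal sketch, assuming the same field names as in the patch:

{noformat}
@Override
public void close() throws Exception {
    // resources are closed in reverse declaration order (old first, then new);
    // if both close() calls throw, the second exception is attached to the
    // first one via addSuppressed automatically
    try (AutoCloseable newStore = newBlobStore;
         AutoCloseable oldStore = oldBlobStore) {
        // nothing to do; closing happens when the block exits
    }
}
{noformat}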

> BlobStore should be AutoCloseable
> -
>
> Key: OAK-5787
> URL: https://issues.apache.org/jira/browse/OAK-5787
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: blob
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.22.0
>
> Attachments: OAK-5787.diff
>
>
> {{DocumentNodeStore}} currently calls {{close()}} if the blob store instance 
> implements {{Closeable}}.
> This has led to problems where wrapper implementations did not implement it, 
> and thus the actual blob store instance wasn't properly shut down.
> Proposal: make {{BlobStore}} extend {{Closeable}} and get rid of all 
> {{instanceof}} checks.
> [~thomasm] [~amitjain] - feedback appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8673) Determine and possibly adjust size of eagerCacheSize

2019-11-13 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16973400#comment-16973400
 ] 

Thomas Mueller commented on OAK-8673:
-

[~angela] I'm sorry, I don't fully understand this... Is there some 
documentation where this is explained? It might help to have it, for cases where 
the cache sizes need to be adjusted (to avoid running out of memory). As far as I 
know (I may be wrong), there is:

* eager cache (per session? in number of entries and not memory usage. 
configurable as you configured it, but how?)
* lazy-evaluation cache (per session? how large? I assume in number of entries 
and not memory usage. configurable?)
* defaultpermissioncache (what is that exactly? is it lazy-evaluation cache or 
eager cache or something else?)

When opening a session, the eager cache is filled if the cache size is large 
enough(?). If it is too large, then not, and lazy evaluation is used instead. What 
I still don't get: if the benchmark results are with the eager cache disabled, why 
is it so slow? Is it just that, for this test case, the hit rate on the 
lazy-evaluation cache is so bad?

> Determine and possibly adjust size of eagerCacheSize
> 
>
> Key: OAK-8673
> URL: https://issues.apache.org/jira/browse/OAK-8673
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, security
>Reporter: Angela Schreiber
>Assignee: Angela Schreiber
>Priority: Major
>
> The initial results of the {{EagerCacheSizeTest}} seem to indicate that we 
> almost never benefit from the lazy permission evaluation (compared to reading 
> all permission entries right away). From my understanding of the results the 
> only exception are those cases where only very few items are being accessed 
> (e.g. reading 100 items).
> However, I am not totally sure if this is not an artifact of the random reads. 
> I therefore started extending the benchmark with an option to re-read a 
> randomly picked item more than once, which, according to some analysis done 
> quite some time ago, is a common scenario, especially when using Oak in 
> combination with Apache Sling.
> Benchmarks with 10-times re-reading the same random item:
> As I would have expected, it seems that the negative impact of lazy loading is 
> somewhat reduced, as the re-reading will hit the cache populated while 
> reading.
> Results are attached to OAK-8662 (possibly more to come).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8673) Determine and possibly adjust size of eagerCacheSize

2019-11-13 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16973185#comment-16973185
 ] 

Thomas Mueller commented on OAK-8673:
-

> beyond the task at hand to re-evaluate if the current value of 
> eager-cache-size is sufficient 

Well, you don't want to expand the cache size if there is a risk of running out 
of memory... But given the next statement, I'm not sure there really is such 
a risk...

> even for the lazy-evaluation a cache is populated (in fact there are even 2 
> maps in that case), so depending on the distribution of permission entries 
> and the access pattern (read/writing), the lazy cache might even consume more 
> memory than the eager-cache...

But why are the benchmark results so bad when the eager cache is disabled (size 
set to 0)?

> Determine and possibly adjust size of eagerCacheSize
> 
>
> Key: OAK-8673
> URL: https://issues.apache.org/jira/browse/OAK-8673
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, security
>Reporter: Angela Schreiber
>Assignee: Angela Schreiber
>Priority: Major
>
> The initial results of the {{EagerCacheSizeTest}} seem to indicate that we 
> almost never benefit from the lazy permission evaluation (compared to reading 
> all permission entries right away). From my understanding of the results the 
> only exception are those cases where only very few items are being accessed 
> (e.g. reading 100 items).
> However, I am not totally sure if this is not an artifact of the random reads. 
> I therefore started extending the benchmark with an option to re-read a 
> randomly picked item more than once, which, according to some analysis done 
> quite some time ago, is a common scenario, especially when using Oak in 
> combination with Apache Sling.
> Benchmarks with 10-times re-reading the same random item:
> As I would have expected, it seems that the negative impact of lazy loading is 
> somewhat reduced, as the re-reading will hit the cache populated while 
> reading.
> Results are attached to OAK-8662 (possibly more to come).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8673) Determine and possibly adjust size of eagerCacheSize

2019-11-13 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16973150#comment-16973150
 ] 

Thomas Mueller commented on OAK-8673:
-

So with cache size 0 (no cache), the system is very slow (basically unusable). 
So a cache is needed. I see two problems:

* A: Having one cache per session is problematic if there is no limit on the 
number of sessions: there is no way to guarantee the system will not run out of 
memory. Is there no way to use just one cache (for all sessions)?

* B: Having a cache size in number of entries is problematic if the memory usage 
of entries varies a lot: there is no way to guarantee the system will not 
run out of memory. To solve this, in various places in Oak we use "weighted" 
caches, and estimate the memory usage of entries (e.g. for strings, 24 + number of 
characters); see the sketch below. I can help with this.

I think both A and B need to be addressed.
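
To illustrate point B (a minimal sketch only, not the actual permission cache code; 
class and method names are made up): with Guava's CacheBuilder, the limit can be 
expressed as a maximum weight in estimated bytes instead of a maximum number of 
entries.

{noformat}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

public class WeightedCacheSketch {

    static Cache<String, String> newWeightedCache(long maxMemoryBytes) {
        return CacheBuilder.newBuilder()
                .maximumWeight(maxMemoryBytes)
                .weigher(new Weigher<String, String>() {
                    @Override
                    public int weigh(String key, String value) {
                        // rough per-entry memory estimate:
                        // fixed overhead plus the number of characters
                        return 24 + key.length() + 24 + value.length();
                    }
                })
                .build();
    }
}
{noformat}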




> Determine and possibly adjust size of eagerCacheSize
> 
>
> Key: OAK-8673
> URL: https://issues.apache.org/jira/browse/OAK-8673
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, security
>Reporter: Angela Schreiber
>Assignee: Angela Schreiber
>Priority: Major
>
> The initial results of the {{EagerCacheSizeTest}} seem to indicate that we 
> almost never benefit from the lazy permission evaluation (compared to reading 
> all permission entries right away). From my understanding of the results the 
> only exception are those cases where only very few items are being accessed 
> (e.g. reading 100 items).
> However, I am not totally sure if this is not an artifact of the random reads. 
> I therefore started extending the benchmark with an option to re-read a 
> randomly picked item more than once, which, according to some analysis done 
> quite some time ago, is a common scenario, especially when using Oak in 
> combination with Apache Sling.
> Benchmarks with 10-times re-reading the same random item:
> As I would have expected, it seems that the negative impact of lazy loading is 
> somewhat reduced, as the re-reading will hit the cache populated while 
> reading.
> Results are attached to OAK-8662 (possibly more to come).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (OAK-8729) Lucene Directory concurrency issue

2019-11-07 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-8729.
-
Resolution: Fixed

> Lucene Directory concurrency issue
> --
>
> Key: OAK-8729
> URL: https://issues.apache.org/jira/browse/OAK-8729
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.12.0, 1.14.0, 1.16.0, 1.18.0
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.20.0
>
> Attachments: OAK-8729.patch
>
>
> There is a concurrency issue in the DefaultDirectoryFactory. It is 
> reproducible sometimes using CopyOnWriteDirectoryTest.copyOnWrite(), if run 
> in a loop (1000 times). The problem is that the MemoryNodeBuilder is used 
> concurrently:
> * thread 1 is closing the directory (after writing to it)
> * thread 2 is trying to create a new file
> {noformat}
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:525)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.close(OakDirectory.java:264)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.close(BufferedOakDirectory.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnReadDirectory$2.run(CopyOnReadDirectory.java:305)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:362)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:356)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.child(MemoryNodeBuilder.java:342)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.createOutput(OakDirectory.java:214)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.createOutput(BufferedOakDirectory.java:178)
>   at org.apache.lucene.store.Directory.copy(Directory.java:184)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:322)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:1)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:105)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8729) Lucene Directory concurrency issue

2019-11-07 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969230#comment-16969230
 ] 

Thomas Mueller commented on OAK-8729:
-

http://svn.apache.org/r1869505 (trunk)

> Lucene Directory concurrency issue
> --
>
> Key: OAK-8729
> URL: https://issues.apache.org/jira/browse/OAK-8729
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.12.0, 1.14.0, 1.16.0, 1.18.0
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.20.0
>
> Attachments: OAK-8729.patch
>
>
> There is a concurrency issue in the DefaultDirectoryFactory. It is 
> reproducible sometimes using CopyOnWriteDirectoryTest.copyOnWrite(), if run 
> in a loop (1000 times). The problem is that the MemoryNodeBuilder is used 
> concurrently:
> * thread 1 is closing the directory (after writing to it)
> * thread 2 is trying to create a new file
> {noformat}
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:525)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.close(OakDirectory.java:264)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.close(BufferedOakDirectory.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnReadDirectory$2.run(CopyOnReadDirectory.java:305)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:362)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:356)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.child(MemoryNodeBuilder.java:342)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.createOutput(OakDirectory.java:214)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.createOutput(BufferedOakDirectory.java:178)
>   at org.apache.lucene.store.Directory.copy(Directory.java:184)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:322)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:1)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:105)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8729) Lucene Directory concurrency issue

2019-11-07 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969229#comment-16969229
 ] 

Thomas Mueller commented on OAK-8729:
-

I'm afraid I currently don't know how we could make this part more stable... 
Verifying that the directory is still open would be a good idea, but I don't see 
how to do that without changing a lot of code (basically, not using the Lucene 
interfaces).
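
For illustration only, a generic sketch of tracking the open state explicitly 
(hypothetical class, not tied to the Lucene Directory API), so that callers could 
check it before delegating:

{noformat}
import java.util.concurrent.atomic.AtomicBoolean;

class CloseTrackingResource implements AutoCloseable {

    private final AutoCloseable delegate;
    private final AtomicBoolean closed = new AtomicBoolean();

    CloseTrackingResource(AutoCloseable delegate) {
        this.delegate = delegate;
    }

    boolean isOpen() {
        return !closed.get();
    }

    @Override
    public void close() throws Exception {
        // close the delegate only once, even if called concurrently
        if (closed.compareAndSet(false, true)) {
            delegate.close();
        }
    }
}
{noformat}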

> Lucene Directory concurrency issue
> --
>
> Key: OAK-8729
> URL: https://issues.apache.org/jira/browse/OAK-8729
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.12.0, 1.14.0, 1.16.0, 1.18.0
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.20.0
>
> Attachments: OAK-8729.patch
>
>
> There is a concurrency issue in the DefaultDirectoryFactory. It is 
> reproducible sometimes using CopyOnWriteDirectoryTest.copyOnWrite(), if run 
> in a loop (1000 times). The problem is that the MemoryNodeBuilder is used 
> concurrently:
> * thread 1 is closing the directory (after writing to it)
> * thread 2 is trying to create a new file
> {noformat}
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:525)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.close(OakDirectory.java:264)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.close(BufferedOakDirectory.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnReadDirectory$2.run(CopyOnReadDirectory.java:305)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:362)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:356)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.child(MemoryNodeBuilder.java:342)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.createOutput(OakDirectory.java:214)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.createOutput(BufferedOakDirectory.java:178)
>   at org.apache.lucene.store.Directory.copy(Directory.java:184)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:322)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:1)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:105)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8729) Lucene Directory concurrency issue

2019-11-07 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969227#comment-16969227
 ] 

Thomas Mueller commented on OAK-8729:
-

> The close method for wrapForRead [1] calls remote.close and local.close [2], 
> and the same instance is being used by wrapForWrite [3].

Yes, that's true. I verified that the remote is closed, but the tests don't fail 
due to that.

Unfortunately, it is hard to verify that the directory is not closed: there is a 
verify method in the Directory interface, but it is not public (only protected).

> Can we perform operations even if close had been called on Directory instance?

It looks like none of the tests failed due to this. It seems like the 
operations we perform don't cause problems.

> Lucene Directory concurrency issue
> --
>
> Key: OAK-8729
> URL: https://issues.apache.org/jira/browse/OAK-8729
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.12.0, 1.14.0, 1.16.0, 1.18.0
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.20.0
>
> Attachments: OAK-8729.patch
>
>
> There is a concurrency issue in the DefaultDirectoryFactory. It is 
> reproducible sometimes using CopyOnWriteDirectoryTest.copyOnWrite(), if run 
> in a loop (1000 times). The problem is that the MemoryNodeBuilder is used 
> concurrently:
> * thread 1 is closing the directory (after writing to it)
> * thread 2 is trying to create a new file
> {noformat}
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:525)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.close(OakDirectory.java:264)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.close(BufferedOakDirectory.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnReadDirectory$2.run(CopyOnReadDirectory.java:305)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:362)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:356)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.child(MemoryNodeBuilder.java:342)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.createOutput(OakDirectory.java:214)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.createOutput(BufferedOakDirectory.java:178)
>   at org.apache.lucene.store.Directory.copy(Directory.java:184)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:322)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:1)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:105)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-5858) Lucene index may return the wrong result if path is excluded

2019-11-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-5858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-5858:

Fix Version/s: (was: 1.20.0)

> Lucene index may return the wrong result if path is excluded
> 
>
> Key: OAK-5858
> URL: https://issues.apache.org/jira/browse/OAK-5858
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Thomas Mueller
>Priority: Major
>
> If a query uses a Lucene index that has "excludedPaths", the query result may 
> be wrong (not contain all matching nodes). This is the case even if there is a 
> property index available for the queried property. Example:
> {noformat}
> Indexes:
> /oak:index/resourceType/type = "property"
> /oak:index/lucene/type = "lucene"
> /oak:index/lucene/excludedPaths = ["/etc"]
> /oak:index/lucene/indexRules/nt:base/properties/resourceType
> Query:
> /jcr:root/etc//*[jcr:like(@resourceType, "x%y")]
> Index cost:
> cost for /oak:index/resourceType is 1602.0
> cost for /oak:index/lucene is 1001.0
> Result:
> (empty)
> Expected result:
> /etc/a
> /etc/b
> {noformat}
> Here, the lucene index is picked, even though the query explicitly queries 
> for /etc, and the lucene index has this path excluded.
> I think the lucene index should not be picked if the index does not 
> match the query path.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-5980) Bad Join Query Plan Used

2019-11-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-5980:

Fix Version/s: (was: 1.20.0)

> Bad Join Query Plan Used
> 
>
> Key: OAK-5980
> URL: https://issues.apache.org/jira/browse/OAK-5980
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> For a join query, where selectors are joined over ischildnode but also can 
> use an index,
> the selectors sometimes use the index instead of the much less
> expensive parent join. Example:
> {noformat}
> select [a].* from [nt:unstructured] as [a]
> inner join [nt:unstructured] as [b] on ischildnode([b], [a]) 
> inner join [nt:unstructured] as [c] on ischildnode([c], [b]) 
> inner join [nt:unstructured] as [d] on ischildnode([d], [c]) 
> inner join [nt:unstructured] as [e] on ischildnode([e], [d]) 
> where [a].[classname] = 'letter' 
> and isdescendantnode([a], '/content') 
> and [c].[classname] = 'chapter' 
> and localname([b]) = 'chapters' 
> and [e].[classname] = 'list' 
> and localname([d]) = 'lists' 
> and [e].[path] = cast('/content/abc' as path)
> {noformat}
> The order of selectors is sometimes wrong (not e, d, c, b, a), but
> more importantly, selectors c and a use the index on className.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-5739) Misleading traversal warning for spellcheck queries without index

2019-11-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-5739:

Fix Version/s: (was: 1.20.0)

> Misleading traversal warning for spellcheck queries without index
> -
>
> Key: OAK-5739
> URL: https://issues.apache.org/jira/browse/OAK-5739
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> In OAK-4313 we avoid traversal for native queries, but we see in some cases 
> traversal warnings as follows:
> {noformat}
> org.apache.jackrabbit.oak.query.QueryImpl query plan 
> [nt:base] as [a] /* traverse "" where (spellcheck([a], 'NothingToFind')) 
> and (issamenode([a], [/])) */
> org.apache.jackrabbit.oak.query.QueryImpl Traversal query (query without 
> index): 
> select [jcr:path], [jcr:score], [rep:spellcheck()] from [nt:base] as a where 
> spellcheck('NothingToFind') 
> and issamenode(a, '/') 
> /* xpath: /jcr:root
> [rep:spellcheck('NothingToFind')]/(rep:spellcheck()) */; 
> consider creating an index
> {noformat}
> This warning is misleading. If no index is available, then either the query 
> should fail, or the warning should say that the query result is not correct 
> because traversal is used.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-5706) Function based indexes with "like" conditions

2019-11-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-5706:

Fix Version/s: (was: 1.20.0)

> Function based indexes with "like" conditions
> -
>
> Key: OAK-5706
> URL: https://issues.apache.org/jira/browse/OAK-5706
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: indexing
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> Currently, a function-based index is not used when using "like" conditions, 
> as follows:
> {noformat}
> /jcr:root//*[jcr:like(fn:lower-case(fn:name()), 'abc%')]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (OAK-5369) Lucene Property Index: Syntax Error, cannot parse

2019-11-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-5369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-5369.
-
Resolution: Won't Fix

> Lucene Property Index: Syntax Error, cannot parse
> -
>
> Key: OAK-5369
> URL: https://issues.apache.org/jira/browse/OAK-5369
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.20.0
>
>
> The following query throws an exception in Apache Lucene:
> {noformat}
> /jcr:root//*[jcr:contains(., 'hello -- world')]
> 22.12.2016 16:42:54.511 *WARN* [qtp1944702753-3846] 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex query via 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex@1c0006db 
> failed.
> java.lang.RuntimeException: INVALID_SYNTAX_CANNOT_PARSE: Syntax Error, cannot 
> parse hello -- world:  
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex.tokenToQuery(LucenePropertyIndex.java:1450)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex.tokenToQuery(LucenePropertyIndex.java:1418)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex.access$900(LucenePropertyIndex.java:180)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex$3.visitTerm(LucenePropertyIndex.java:1353)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex$3.visit(LucenePropertyIndex.java:1307)
>   at 
> org.apache.jackrabbit.oak.query.fulltext.FullTextContains.accept(FullTextContains.java:63)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex.getFullTextQuery(LucenePropertyIndex.java:1303)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex.getLuceneRequest(LucenePropertyIndex.java:791)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex.access$300(LucenePropertyIndex.java:180)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex$1.loadDocs(LucenePropertyIndex.java:375)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex$1.computeNext(LucenePropertyIndex.java:317)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex$1.computeNext(LucenePropertyIndex.java:306)
>   at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>   at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex$LucenePathCursor$1.hasNext(LucenePropertyIndex.java:1571)
>   at com.google.common.collect.Iterators$7.computeNext(Iterators.java:645)
>   at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>   at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>   at 
> org.apache.jackrabbit.oak.spi.query.Cursors$PathCursor.hasNext(Cursors.java:205)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex$LucenePathCursor.hasNext(LucenePropertyIndex.java:1595)
>   at 
> org.apache.jackrabbit.oak.query.ast.SelectorImpl.next(SelectorImpl.java:420)
>   at 
> org.apache.jackrabbit.oak.query.QueryImpl$RowIterator.fetchNext(QueryImpl.java:828)
>   at 
> org.apache.jackrabbit.oak.query.QueryImpl$RowIterator.hasNext(QueryImpl.java:853)
>   at 
> org.apache.jackrabbit.oak.jcr.query.QueryResultImpl$1.fetch(QueryResultImpl.java:98)
>   at 
> org.apache.jackrabbit.oak.jcr.query.QueryResultImpl$1.(QueryResultImpl.java:94)
>   at 
> org.apache.jackrabbit.oak.jcr.query.QueryResultImpl.getRows(QueryResultImpl.java:78)
> Caused by: 
> org.apache.lucene.queryparser.flexible.standard.parser.ParseException: Syntax 
> Error, cannot parse hello -- world:  
>   at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.generateParseException(StandardSyntaxParser.java:1054)
>   at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.jj_consume_token(StandardSyntaxParser.java:936)
>   at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.Clause(StandardSyntaxParser.java:486)
>   at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.ModClause(StandardSyntaxParser.java:303)
>   at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.ConjQuery(StandardSyntaxParser.java:234)
>   at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.DisjQuery(StandardSyntaxParser.java:204)
>   at 
> 

[jira] [Updated] (OAK-3866) Sorting on relative properties doesn't work in Solr

2019-11-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-3866:

Fix Version/s: (was: 1.20.0)

> Sorting on relative properties doesn't work in Solr
> ---
>
> Key: OAK-3866
> URL: https://issues.apache.org/jira/browse/OAK-3866
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: solr
>Affects Versions: 1.0.22, 1.2.9, 1.3.13
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
>Priority: Major
>
> Executing a query like 
> {noformat}
> /jcr:root/content/foo//*[(@sling:resourceType = 'x' or @sling:resourceType = 
> 'y') and jcr:contains(., 'bar*~')] order by jcr:content/@jcr:primaryType 
> descending
> {noformat}
> would assume sorting on the _jcr:primaryType_ property of resulting nodes' 
> _jcr:content_ children.
> That is currently not supported in Solr, while it is in Lucene as the latter 
> supports index time aggregation.
> We should inspect if it's possible to extend support for Solr too, most 
> probably via index time aggregation.
> The query should not fail but at least log a warning about that limitation 
> for the time being.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-3437) Regression in org.apache.jackrabbit.core.query.JoinTest#testJoinWithOR5 when enabling OAK-1617

2019-11-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-3437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-3437:

Fix Version/s: (was: 1.20.0)

> Regression in org.apache.jackrabbit.core.query.JoinTest#testJoinWithOR5 when 
> enabling OAK-1617
> --
>
> Key: OAK-3437
> URL: https://issues.apache.org/jira/browse/OAK-3437
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: solr
>Reporter: Davide Giannella
>Assignee: Tommaso Teofili
>Priority: Major
>
> When enabling OAK-1617 (still to be committed) there's a regression in the 
> {{oak-solr-core}} unit tests 
> - {{org.apache.jackrabbit.core.query.JoinTest#testJoinWithOR3}} 
> - {{org.apache.jackrabbit.core.query.JoinTest#testJoinWithOR4}} 
> - {{org.apache.jackrabbit.core.query.JoinTest#testJoinWithOR5}} 
> The WIP of the feature can be found in 
> https://github.com/davidegiannella/jackrabbit-oak/tree/OAK-1617 and a full 
> patch will be attached shortly for review in OAK-1617 itself.
> The feature is currently disabled, in order to enable it for unit testing an 
> approach like this can be taken 
> https://github.com/davidegiannella/jackrabbit-oak/blob/177df1a8073b1237857267e23d12a433e3d890a4/oak-core/src/test/java/org/apache/jackrabbit/oak/query/SQL2OptimiseQueryTest.java#L142
>  or setting the system property {{-Doak.query.sql2optimisation}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-6387) Building an index (new index + reindex): temporarily store blob references

2019-11-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-6387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-6387:

Fix Version/s: (was: 1.20.0)

> Building an index (new index + reindex): temporarily store blob references
> --
>
> Key: OAK-6387
> URL: https://issues.apache.org/jira/browse/OAK-6387
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene, query
>Reporter: Thomas Mueller
>Priority: Major
>
> If reindexing a Lucene index takes multiple days, and if datastore garbage 
> collection (DSGC) is run during that time, then DSGC may remove binaries of 
> that index because they are not referenced.
> It would be good if all binaries that are needed, and that are older than 
> (for example) one hour, were referenced during reindexing (for example in a 
> temporary location), so that DSGC will not remove them.
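A purely conceptual sketch of the proposal (plain Java, not Oak API; the class name 
and file layout are made up) that records needed blob references in a temporary 
location so DSGC can treat them as referenced:

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TempBlobReferenceLog {

    private final Path referenceFile;

    public TempBlobReferenceLog(Path referenceFile) {
        this.referenceFile = referenceFile;
    }

    // Appends one blob id per line; a GC sweep would read this file and keep
    // the listed blobs even though the index itself is not committed yet.
    public void record(String blobId) throws IOException {
        Files.write(referenceFile,
                (blobId + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
{code}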



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-6597) rep:excerpt not working for content indexed by aggregation in lucene

2019-11-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-6597:

Fix Version/s: (was: 1.20.0)

> rep:excerpt not working for content indexed by aggregation in lucene
> 
>
> Key: OAK-6597
> URL: https://issues.apache.org/jira/browse/OAK-6597
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.6.1, 1.7.6, 1.8.0
>Reporter: Dirk Rudolph
>Assignee: Chetan Mehrotra
>Priority: Major
>  Labels: excerpt
> Attachments: excerpt-with-aggregation-test.patch
>
>
> I mentioned that properties that got indexed due to an aggregation are not 
> considered for excerpts (highlighting) as they are not indexed as stored 
> fields.
> See the attached patch that implements a test for excerpts in 
> {{LuceneIndexAggregationTest2}}.
> It creates the following structure:
> {code}
> /content/foo [test:Page]
>  + bar (String)
>  - jcr:content [test:PageContent]
>   + bar (String)
> {code}
> where both strings (the _bar_ property at _foo_ and the _bar_ property at 
> _jcr:content_) contain different text. 
> Afterwards it queries for 2 terms ("tinc*" and "aliq*") that either exist in 
> _/content/foo/bar_ or _/content/foo/jcr:content/bar_ but not in both. For the 
> former, the excerpt is properly provided; for the latter, it isn't.
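For context, a minimal sketch (assuming an open JCR {{Session}}; node type and term 
taken from the test fixture above) of how {{rep:excerpt}} is requested and read, 
which is where the aggregated {{jcr:content/bar}} text comes back without a 
highlight:

{code}
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.Value;
import javax.jcr.query.Query;
import javax.jcr.query.Row;
import javax.jcr.query.RowIterator;

public class ExcerptExample {

    // Full-text query that asks for rep:excerpt in the result rows.
    static void printExcerpts(Session session, String term) throws RepositoryException {
        String sql2 = "select [rep:excerpt] from [test:Page] as p"
                + " where contains(p.*, '" + term + "')";
        Query query = session.getWorkspace().getQueryManager()
                .createQuery(sql2, Query.JCR_SQL2);
        for (RowIterator rows = query.execute().getRows(); rows.hasNext();) {
            Row row = rows.nextRow();
            Value excerpt = row.getValue("rep:excerpt");
            System.out.println(row.getPath() + " -> "
                    + (excerpt == null ? "(no excerpt)" : excerpt.getString()));
        }
    }
}
{code}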



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-7166) Union with different selector names

2019-11-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-7166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-7166:

Fix Version/s: (was: 1.20.0)

> Union with different selector names
> ---
>
> Key: OAK-7166
> URL: https://issues.apache.org/jira/browse/OAK-7166
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> The following query returns the wrong nodes:
> {noformat}
> /jcr:root/libs/(* | */* | */*/* | */*/*/* | */*/*/*/*)/install
> select b.[jcr:path] as [jcr:path], b.[jcr:score] as [jcr:score], b.* from 
> [nt:base] as a
>  inner join [nt:base] as b on ischildnode(b, a)
>  where ischildnode(a, '/libs') and name(b) = 'install' 
>  union select c.[jcr:path] as [jcr:path], c.[jcr:score] as [jcr:score], c.* 
> from [nt:base] as a
>  inner join [nt:base] as b on ischildnode(b, a)
>  inner join [nt:base] as c on ischildnode(c, b)
>  where ischildnode(a, '/libs') and name(c) = 'install' 
>  union select d.[jcr:path] as [jcr:path], d.[jcr:score] as [jcr:score], d.* 
> from [nt:base] as a
>  inner join [nt:base] as b on ischildnode(b, a)
>  inner join [nt:base] as c on ischildnode(c, b)
>  inner join [nt:base] as d on ischildnode(d, c)
>  where ischildnode(a, '/libs') and name(d) = 'install' 
> {noformat}
> If I change the selector name to "x" in each subquery, then it works. There 
> is no XPath version of this workaround:
> {noformat}
> select x.[jcr:path] as [jcr:path], x.[jcr:score] as [jcr:score], x.* from 
> [nt:base] as a
>  inner join [nt:base] as x on ischildnode(x, a)
>  where ischildnode(a, '/libs') and name(x) = 'install' 
>  union select x.[jcr:path] as [jcr:path], x.[jcr:score] as [jcr:score], x.* 
> from [nt:base] as a
>  inner join [nt:base] as b on ischildnode(b, a)
>  inner join [nt:base] as x on ischildnode(x, b)
>  where ischildnode(a, '/libs') and name(x) = 'install' 
>  union select x.[jcr:path] as [jcr:path], x.[jcr:score] as [jcr:score], x.* 
> from [nt:base] as a
>  inner join [nt:base] as b on ischildnode(b, a)
>  inner join [nt:base] as c on ischildnode(c, b)
>  inner join [nt:base] as x on ischildnode(x, c)
>  where ischildnode(a, '/libs') and name(x) = 'install' 
> {noformat}
> Need to check if this is an Oak bug, or a bug in the query tool I use.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-7263) oak-lucene should not depend on oak-store-document

2019-11-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-7263:

Fix Version/s: (was: 1.20.0)

> oak-lucene should not depend on oak-store-document
> --
>
> Key: OAK-7263
> URL: https://issues.apache.org/jira/browse/OAK-7263
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Robert Munteanu
>Priority: Major
>
> {{oak-lucene}} has a hard dependency on {{oak-store-document}} and that looks 
> wrong to me. 
> {noformat}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.7.0:compile 
> (default-compile) on project oak-lucene: Compilation failure: Compilation 
> failure: 
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/LuceneDocumentHolder.java:[31,54]
>  package org.apache.jackrabbit.oak.plugins.document.spi does not exist
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/LuceneDocumentHolder.java:[37,46]
>  cannot find symbol
> [ERROR]   symbol: class JournalProperty
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/LuceneJournalPropertyBuilder.java:[33,54]
>  package org.apache.jackrabbit.oak.plugins.document.spi does not exist
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/LuceneJournalPropertyBuilder.java:[34,54]
>  package org.apache.jackrabbit.oak.plugins.document.spi does not exist
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/LuceneJournalPropertyBuilder.java:[38,47]
>  cannot find symbol
> [ERROR]   symbol: class JournalPropertyBuilder
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/LuceneJournalPropertyBuilder.java:[106,12]
>  cannot find symbol
> [ERROR]   symbol:   class JournalProperty
> [ERROR]   location: class 
> org.apache.jackrabbit.oak.plugins.index.lucene.hybrid.LuceneJournalPropertyBuilder
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/LuceneIndexProviderService.java:[55,54]
>  package org.apache.jackrabbit.oak.plugins.document.spi does not exist
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/IndexedPaths.java:[29,54]
>  package org.apache.jackrabbit.oak.plugins.document.spi does not exist
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/IndexedPaths.java:[33,31]
>  cannot find symbol
> [ERROR]   symbol: class JournalProperty
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/LuceneJournalPropertyService.java:[22,54]
>  package org.apache.jackrabbit.oak.plugins.document.spi does not exist
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/LuceneJournalPropertyService.java:[23,54]
>  package org.apache.jackrabbit.oak.plugins.document.spi does not exist
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/LuceneJournalPropertyService.java:[25,54]
>  cannot find symbol
> [ERROR]   symbol: class JournalPropertyService
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/LuceneJournalPropertyService.java:[33,12]
>  cannot find symbol
> [ERROR]   symbol:   class JournalPropertyBuilder
> [ERROR]   location: class 
> org.apache.jackrabbit.oak.plugins.index.lucene.hybrid.LuceneJournalPropertyService
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/LuceneJournalPropertyBuilder.java:[50,5]
>  method does not override or implement a method from a supertype
> [ERROR] 
> /home/robert/Documents/sources/apache/jackrabbit-oak/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/hybrid/LuceneJournalPropertyBuilder.java:[61,5]
>  method does not override or implement a method from a supertype
> [ERROR] 
> 

[jira] [Commented] (OAK-7370) order by jcr:score desc doesn't work across union query created by optimizing OR clauses

2019-11-06 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16968356#comment-16968356
 ] 

Thomas Mueller commented on OAK-7370:
-

Thanks [~catholicon]! I removed the fix version.

> order by jcr:score desc doesn't work across union query created by optimizing 
> OR clauses
> 
>
> Key: OAK-7370
> URL: https://issues.apache.org/jira/browse/OAK-7370
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: Vikas Saurabh
>Assignee: Thomas Mueller
>Priority: Major
>
> Merging of sub-queries created due to optimizing OR clauses doesn't work for 
> sorting on {{jcr:score}}
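For illustration, a hedged sketch (hypothetical property names; assuming an open JCR 
{{Session}}) of the affected query shape: an OR condition that the engine may 
rewrite into a union, combined with ordering on {{jcr:score}}:

{code}
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryResult;
import javax.jcr.query.RowIterator;

public class ScoreOrderingExample {

    // The OR over two properties is the kind of condition that gets rewritten
    // into a union of two subqueries; the overall order by [jcr:score] is
    // what this issue reports as not being applied across the merged result.
    static void run(Session session) throws RepositoryException {
        String sql2 = "select * from [nt:base] as n"
                + " where n.[propA] = 'x' or n.[propB] = 'y'"
                + " order by n.[jcr:score] desc";
        QueryResult result = session.getWorkspace().getQueryManager()
                .createQuery(sql2, Query.JCR_SQL2).execute();
        for (RowIterator rows = result.getRows(); rows.hasNext();) {
            System.out.println(rows.nextRow().getPath());
        }
    }
}
{code}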



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-7370) order by jcr:score desc doesn't work across union query created by optimizing OR clauses

2019-11-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-7370:

Fix Version/s: (was: 1.20.0)

> order by jcr:score desc doesn't work across union query created by optimizing 
> OR clauses
> 
>
> Key: OAK-7370
> URL: https://issues.apache.org/jira/browse/OAK-7370
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: Vikas Saurabh
>Assignee: Thomas Mueller
>Priority: Major
>
> Merging of sub-queries created due to optimizing OR clauses doesn't work for 
> sorting on {{jcr:score}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8673) Determine and possibly adjust size of eagerCacheSize

2019-11-06 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16968163#comment-16968163
 ] 

Thomas Mueller commented on OAK-8673:
-

[~angela] Thanks! One more question: In the issue description, you write "we 
almost never benefit from the lazy permission evaluation (compared to reading 
all permission entries right away)". I assume you mean lazy permission 
evaluation isn't _faster_ than reading all permission entries right away, 
right? If so, is it a lot _slower_? There are two points I want to make:
* We should understand why it does or does not impact performance - this is 
important for building a reasonably accurate mental model
* Maybe it has an impact on memory usage? If so, we could argue for keeping lazy 
evaluation to save memory - but how much does it save?

If the answer is that lazy evaluation doesn't save any memory either, then we can 
probably simplify the code (to never or always do lazy evaluation, whichever is 
simpler).
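To make the trade-off concrete, a toy sketch (plain Java, deliberately not Oak code; 
all names are made up) of the two strategies: eager loading pays the full read and 
memory cost up front, lazy loading pays per accessed path:

{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class PermissionLoadingSketch {

    // Eager: load all entries once; fast lookups, memory for everything.
    static Map<String, List<String>> loadEagerly(
            Iterable<String> allPaths, Function<String, List<String>> readEntries) {
        Map<String, List<String>> cache = new HashMap<>();
        for (String path : allPaths) {
            cache.put(path, readEntries.apply(path));
        }
        return cache;
    }

    // Lazy: load per accessed path; memory only for what was touched,
    // but every cache miss pays an extra read.
    static List<String> loadLazily(
            Map<String, List<String>> cache, String path,
            Function<String, List<String>> readEntries) {
        return cache.computeIfAbsent(path, readEntries);
    }
}
{code}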

> Determine and possibly adjust size of eagerCacheSize
> 
>
> Key: OAK-8673
> URL: https://issues.apache.org/jira/browse/OAK-8673
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, security
>Reporter: Angela Schreiber
>Assignee: Angela Schreiber
>Priority: Major
>
> The initial results of the {{EagerCacheSizeTest}} seem to indicate that we 
> almost never benefit from the lazy permission evaluation (compared to reading 
> all permission entries right away). From my understanding of the results the 
> only exception are those cases where only very few items are being accessed 
> (e.g. reading 100 items).
> However, I am not totally sure that this is not an artifact of the random-read. 
> I therefore started extending the benchmark with an option to re-read a 
> randomly picked item more than once, which according to some analysis done 
> quite some time ago is a common scenario, especially when using Oak in 
> combination with Apache Sling.
> Benchmarks with 10-times re-reading the same random item:
> As I would have expected, it seems that the negative impact of lazy loading is 
> somewhat reduced, as the re-reading will hit the cache populated while 
> reading.
> Results are attached to OAK-8662 (possibly more to come).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8162) When query with OR is divided into union of queries, options (like index tag) are not passed into subqueries.

2019-11-01 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964823#comment-16964823
 ] 

Thomas Mueller commented on OAK-8162:
-

[~reschke] you are right, it would be good to backport this to Oak 1.10 and 
1.8. I don't think Oak 1.6 is needed, as it doesn't support index tags. Do you 
want me to do this?

> When query with OR is divided into union of queries, options (like index tag) 
> are not passed into subqueries. 
> --
>
> Key: OAK-8162
> URL: https://issues.apache.org/jira/browse/OAK-8162
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.10.2, 1.8.17
>Reporter: Piotr Tajduś
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.14.0
>
>
> When a query with OR is divided into a union of queries, options (like index 
> tag) are not passed into the subqueries - as a result, the alternative query 
> sometimes uses indexes it shouldn't use.
>  {noformat}
> org.apache.jackrabbit.oak.query.QueryImpl.buildAlternativeQuery()
> org.apache.jackrabbit.oak.query.QueryImpl.copyOf()
>  
> 2019-03-21 16:32:25,600 DEBUG 
> [org.apache.jackrabbit.oak.query.QueryEngineImpl] (default task-1) Parsing 
> JCR-SQL2 statement: select distinct d.* from [crkid:document] as d where 
> ([d].[metadane/inneMetadane/*/wartosc] = 'AX' and 
> [d].[metadane/inneMetadane/*/klucz] = 'InnyKod') or 
> ([d].[metadane/inneMetadane/*/wartosc] = 'AB' and 
> [d].[metadane/inneMetadane/*/klucz] = 'InnyKod') option(index tag 
> crkid_dokument_month_2019_3)
> 2019-03-21 16:32:25,607 DEBUG [org.apache.jackrabbit.oak.query.QueryImpl] 
> (default task-1) cost using filter Filter(query=select distinct d.* from 
> [crkid:document] as d where ([d].[metadane/inneMetadane/*/wartosc] = 'AB') 
> and ([d].[metadane/inneMetadane/*/klucz] = 'InnyKod'), path=*, 
> property=[metadane/inneMetadane/*/klucz=[InnyKod], 
> metadane/inneMetadane/*/wartosc=[AB]])
>  {noformat}
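For reference, an illustrative snippet (assuming an open JCR {{Session}}; the tag 
name and property names are hypothetical) that issues a tagged query of this shape; 
per this issue, the {{option(index tag ...)}} part was not carried into the union 
subqueries:

{code}
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;

public class TaggedQueryExample {

    static void run(Session session) throws RepositoryException {
        // The OR condition is what triggers the union rewrite; the index tag
        // option should constrain index selection in every subquery.
        String sql2 = "select * from [nt:base] as n"
                + " where n.[propA] = 'x' or n.[propB] = 'y'"
                + " option(index tag myTag)";
        session.getWorkspace().getQueryManager()
                .createQuery(sql2, Query.JCR_SQL2).execute();
    }
}
{code}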



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8162) When query with OR is divided into union of queries, options (like index tag) are not passed into subqueries.

2019-11-01 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8162:

Labels: candidate_oak_1_10 candidate_oak_1_8  (was: )

> When query with OR is divided into union of queries, options (like index tag) 
> are not passed into subqueries. 
> --
>
> Key: OAK-8162
> URL: https://issues.apache.org/jira/browse/OAK-8162
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.10.2, 1.8.17
>Reporter: Piotr Tajduś
>Assignee: Thomas Mueller
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Fix For: 1.14.0
>
>
> When a query with OR is divided into a union of queries, options (like index 
> tag) are not passed into the subqueries - as a result, the alternative query 
> sometimes uses indexes it shouldn't use.
>  {noformat}
> org.apache.jackrabbit.oak.query.QueryImpl.buildAlternativeQuery()
> org.apache.jackrabbit.oak.query.QueryImpl.copyOf()
>  
> 2019-03-21 16:32:25,600 DEBUG 
> [org.apache.jackrabbit.oak.query.QueryEngineImpl] (default task-1) Parsing 
> JCR-SQL2 statement: select distinct d.* from [crkid:document] as d where 
> ([d].[metadane/inneMetadane/*/wartosc] = 'AX' and 
> [d].[metadane/inneMetadane/*/klucz] = 'InnyKod') or 
> ([d].[metadane/inneMetadane/*/wartosc] = 'AB' and 
> [d].[metadane/inneMetadane/*/klucz] = 'InnyKod') option(index tag 
> crkid_dokument_month_2019_3)
> 2019-03-21 16:32:25,607 DEBUG [org.apache.jackrabbit.oak.query.QueryImpl] 
> (default task-1) cost using filter Filter(query=select distinct d.* from 
> [crkid:document] as d where ([d].[metadane/inneMetadane/*/wartosc] = 'AB') 
> and ([d].[metadane/inneMetadane/*/klucz] = 'InnyKod'), path=*, 
> property=[metadane/inneMetadane/*/klucz=[InnyKod], 
> metadane/inneMetadane/*/wartosc=[AB]])
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8162) When query with OR is divided into union of queries, options (like index tag) are not passed into subqueries.

2019-11-01 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8162:

Affects Version/s: (was: 1.6.18)

> When query with OR is divided into union of queries, options (like index tag) 
> are not passed into subqueries. 
> --
>
> Key: OAK-8162
> URL: https://issues.apache.org/jira/browse/OAK-8162
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.10.2, 1.8.17
>Reporter: Piotr Tajduś
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.14.0
>
>
> When a query with OR is divided into a union of queries, options (like index 
> tag) are not passed into the subqueries - as a result, the alternative query 
> sometimes uses indexes it shouldn't use.
>  {noformat}
> org.apache.jackrabbit.oak.query.QueryImpl.buildAlternativeQuery()
> org.apache.jackrabbit.oak.query.QueryImpl.copyOf()
>  
> 2019-03-21 16:32:25,600 DEBUG 
> [org.apache.jackrabbit.oak.query.QueryEngineImpl] (default task-1) Parsing 
> JCR-SQL2 statement: select distinct d.* from [crkid:document] as d where 
> ([d].[metadane/inneMetadane/*/wartosc] = 'AX' and 
> [d].[metadane/inneMetadane/*/klucz] = 'InnyKod') or 
> ([d].[metadane/inneMetadane/*/wartosc] = 'AB' and 
> [d].[metadane/inneMetadane/*/klucz] = 'InnyKod') option(index tag 
> crkid_dokument_month_2019_3)
> 2019-03-21 16:32:25,607 DEBUG [org.apache.jackrabbit.oak.query.QueryImpl] 
> (default task-1) cost using filter Filter(query=select distinct d.* from 
> [crkid:document] as d where ([d].[metadane/inneMetadane/*/wartosc] = 'AB') 
> and ([d].[metadane/inneMetadane/*/klucz] = 'InnyKod'), path=*, 
> property=[metadane/inneMetadane/*/klucz=[InnyKod], 
> metadane/inneMetadane/*/wartosc=[AB]])
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8162) When query with OR is divided into union of queries, options (like index tag) are not passed into subqueries.

2019-11-01 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8162:

Affects Version/s: 1.6.18
   1.8.17

> When query with OR is divided into union of queries, options (like index tag) 
> are not passed into subqueries. 
> --
>
> Key: OAK-8162
> URL: https://issues.apache.org/jira/browse/OAK-8162
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.10.2, 1.6.18, 1.8.17
>Reporter: Piotr Tajduś
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.14.0
>
>
> When a query with OR is divided into a union of queries, options (like index 
> tag) are not passed into the subqueries - as a result, the alternative query 
> sometimes uses indexes it shouldn't use.
>  {noformat}
> org.apache.jackrabbit.oak.query.QueryImpl.buildAlternativeQuery()
> org.apache.jackrabbit.oak.query.QueryImpl.copyOf()
>  
> 2019-03-21 16:32:25,600 DEBUG 
> [org.apache.jackrabbit.oak.query.QueryEngineImpl] (default task-1) Parsing 
> JCR-SQL2 statement: select distinct d.* from [crkid:document] as d where 
> ([d].[metadane/inneMetadane/*/wartosc] = 'AX' and 
> [d].[metadane/inneMetadane/*/klucz] = 'InnyKod') or 
> ([d].[metadane/inneMetadane/*/wartosc] = 'AB' and 
> [d].[metadane/inneMetadane/*/klucz] = 'InnyKod') option(index tag 
> crkid_dokument_month_2019_3)
> 2019-03-21 16:32:25,607 DEBUG [org.apache.jackrabbit.oak.query.QueryImpl] 
> (default task-1) cost using filter Filter(query=select distinct d.* from 
> [crkid:document] as d where ([d].[metadane/inneMetadane/*/wartosc] = 'AB') 
> and ([d].[metadane/inneMetadane/*/klucz] = 'InnyKod'), path=*, 
> property=[metadane/inneMetadane/*/klucz=[InnyKod], 
> metadane/inneMetadane/*/wartosc=[AB]])
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8673) Determine and possibly adjust size of eagerCacheSize

2019-11-01 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964764#comment-16964764
 ] 

Thomas Mueller commented on OAK-8673:
-

> 0 should be possible can run those in addition

I would probably do that, and check that it really works as expected (that the cache 
is really empty). Or maybe hard-code some logic so that a value of 0 disables the 
cache entirely (which might be a bit hard).

> the lazy-loading doesn't seems to have a beneficial effect (except for 
> reading really few items, which in AEM is rarely the case)

Do you assume that with a small EagerCacheSize, lazy loading isn't used at all? 
I don't know the code, but it sounds like it's better to somehow disable the 
lazy loading logic, in order to be sure it's not used by some unexpected code 
path.

> Determine and possibly adjust size of eagerCacheSize
> 
>
> Key: OAK-8673
> URL: https://issues.apache.org/jira/browse/OAK-8673
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, security
>Reporter: Angela Schreiber
>Assignee: Angela Schreiber
>Priority: Major
>
> The initial results of the {{EagerCacheSizeTest}} seem to indicate that we 
> almost never benefit from the lazy permission evaluation (compared to reading 
> all permission entries right away). From my understanding of the results the 
> only exception are those cases where only very few items are being accessed 
> (e.g. reading 100 items).
> However, I am not totally sure that this is not an artifact of the random-read. 
> I therefore started extending the benchmark with an option to re-read a 
> randomly picked item more than once, which according to some analysis done 
> quite some time ago is a common scenario, especially when using Oak in 
> combination with Apache Sling.
> Results are attached to OAK-8662 (possibly more to come).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8729) Lucene Directory concurrency issue

2019-11-01 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964741#comment-16964741
 ] 

Thomas Mueller commented on OAK-8729:
-

I tried writing a special test case, but it is not easy... I could sometimes 
reproduce the issue, but only if the existing test is run many times, and only 
when instrumenting the MemoryNodeBuilder.
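A schematic loop harness of that kind (not the actual CopyOnWriteDirectoryTest; the 
two Runnables stand in for "close the directory" and "create a new file" from the 
stack traces below) might look like:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class RaceReproSketch {

    // Runs the two conflicting operations concurrently, many times, in the
    // hope of hitting the unsynchronized window in the shared builder.
    static void repeat(Runnable closeDirectory, Runnable createFile) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            for (int i = 0; i < 1000; i++) {
                Future<?> a = pool.submit(closeDirectory);
                Future<?> b = pool.submit(createFile);
                a.get();
                b.get();
            }
        } finally {
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }
}
{code}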

> Lucene Directory concurrency issue
> --
>
> Key: OAK-8729
> URL: https://issues.apache.org/jira/browse/OAK-8729
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.12.0, 1.14.0, 1.16.0, 1.18.0
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.20.0
>
> Attachments: OAK-8729.patch
>
>
> There is a concurrency issue in the DefaultDirectoryFactory. It is 
> reproducible sometimes using CopyOnWriteDirectoryTest.copyOnWrite(), if run 
> in a loop (1000 times). The problem is that the MemoryNodeBuilder is used 
> concurrently:
> * thread 1 is closing the directory (after writing to it)
> * thread 2 is trying to create a new file
> {noformat}
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:525)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.close(OakDirectory.java:264)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.close(BufferedOakDirectory.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnReadDirectory$2.run(CopyOnReadDirectory.java:305)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:362)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:356)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.child(MemoryNodeBuilder.java:342)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.createOutput(OakDirectory.java:214)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.createOutput(BufferedOakDirectory.java:178)
>   at org.apache.lucene.store.Directory.copy(Directory.java:184)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:322)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:1)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:105)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8729) Lucene Directory concurrency issue

2019-11-01 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964739#comment-16964739
 ] 

Thomas Mueller commented on OAK-8729:
-

Attached a patch for review, [~catholicon] [~nitigupt][~tihom88].

> Lucene Directory concurrency issue
> --
>
> Key: OAK-8729
> URL: https://issues.apache.org/jira/browse/OAK-8729
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.12.0, 1.14.0, 1.16.0, 1.18.0
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.20.0
>
> Attachments: OAK-8729.patch
>
>
> There is a concurrency issue in the DefaultDirectoryFactory. It is 
> reproducible sometimes using CopyOnWriteDirectoryTest.copyOnWrite(), if run 
> in a loop (1000 times). The problem is that the MemoryNodeBuilder is used 
> concurrently:
> * thread 1 is closing the directory (after writing to it)
> * thread 2 is trying to create a new file
> {noformat}
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:525)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.close(OakDirectory.java:264)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.close(BufferedOakDirectory.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnReadDirectory$2.run(CopyOnReadDirectory.java:305)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:362)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:356)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.child(MemoryNodeBuilder.java:342)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.createOutput(OakDirectory.java:214)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.createOutput(BufferedOakDirectory.java:178)
>   at org.apache.lucene.store.Directory.copy(Directory.java:184)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:322)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:1)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:105)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8729) Lucene Directory concurrency issue

2019-11-01 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8729:

Attachment: OAK-8729.patch

> Lucene Directory concurrency issue
> --
>
> Key: OAK-8729
> URL: https://issues.apache.org/jira/browse/OAK-8729
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.12.0, 1.14.0, 1.16.0, 1.18.0
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.20.0
>
> Attachments: OAK-8729.patch
>
>
> There is a concurrency issue in the DefaultDirectoryFactory. It is 
> reproducible sometimes using CopyOnWriteDirectoryTest.copyOnWrite(), if run 
> in a loop (1000 times). The problem is that the MemoryNodeBuilder is used 
> concurrently:
> * thread 1 is closing the directory (after writing to it)
> * thread 2 is trying to create a new file
> {noformat}
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:525)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.close(OakDirectory.java:264)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.close(BufferedOakDirectory.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnReadDirectory$2.run(CopyOnReadDirectory.java:305)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:362)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:356)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.child(MemoryNodeBuilder.java:342)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.createOutput(OakDirectory.java:214)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.createOutput(BufferedOakDirectory.java:178)
>   at org.apache.lucene.store.Directory.copy(Directory.java:184)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:322)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:1)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:105)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8729) Lucene Directory concurrency issue

2019-11-01 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8729:

Affects Version/s: 1.12.0
   1.14.0
   1.16.0
   1.18.0

> Lucene Directory concurrency issue
> --
>
> Key: OAK-8729
> URL: https://issues.apache.org/jira/browse/OAK-8729
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.12.0, 1.14.0, 1.16.0, 1.18.0
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> There is a concurrency issue in the DefaultDirectoryFactory. It is 
> reproducible sometimes using CopyOnWriteDirectoryTest.copyOnWrite(), if run 
> in a loop (1000 times). The problem is that the MemoryNodeBuilder is used 
> concurrently:
> * thread 1 is closing the directory (after writing to it)
> * thread 2 is trying to create a new file
> {noformat}
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:525)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.close(OakDirectory.java:264)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.close(BufferedOakDirectory.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnReadDirectory$2.run(CopyOnReadDirectory.java:305)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:362)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:356)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.child(MemoryNodeBuilder.java:342)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.createOutput(OakDirectory.java:214)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.createOutput(BufferedOakDirectory.java:178)
>   at org.apache.lucene.store.Directory.copy(Directory.java:184)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:322)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:1)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:105)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8729) Lucene Directory concurrency issue

2019-11-01 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8729:

Fix Version/s: 1.20.0

> Lucene Directory concurrency issue
> --
>
> Key: OAK-8729
> URL: https://issues.apache.org/jira/browse/OAK-8729
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.12.0, 1.14.0, 1.16.0, 1.18.0
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.20.0
>
>
> There is a concurrency issue in the DefaultDirectoryFactory. It is 
> reproducible sometimes using CopyOnWriteDirectoryTest.copyOnWrite(), if run 
> in a loop (1000 times). The problem is that the MemoryNodeBuilder is used 
> concurrently:
> * thread 1 is closing the directory (after writing to it)
> * thread 2 is trying to create a new file
> {noformat}
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:525)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.close(OakDirectory.java:264)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.close(BufferedOakDirectory.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnReadDirectory$2.run(CopyOnReadDirectory.java:305)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:362)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:356)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.child(MemoryNodeBuilder.java:342)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.createOutput(OakDirectory.java:214)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.createOutput(BufferedOakDirectory.java:178)
>   at org.apache.lucene.store.Directory.copy(Directory.java:184)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:322)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:1)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:105)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (OAK-8729) Lucene Directory concurrency issue

2019-11-01 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller reassigned OAK-8729:
---

Assignee: Thomas Mueller

> Lucene Directory concurrency issue
> --
>
> Key: OAK-8729
> URL: https://issues.apache.org/jira/browse/OAK-8729
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> There is a concurrency issue in the DefaultDirectoryFactory. It is 
> reproducible sometimes using CopyOnWriteDirectoryTest.copyOnWrite(), if run 
> in a loop (1000 times). The problem is that the MemoryNodeBuilder is used 
> concurrently:
> * thread 1 is closing the directory (after writing to it)
> * thread 2 is trying to create a new file
> {noformat}
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:525)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.close(OakDirectory.java:264)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.close(BufferedOakDirectory.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnReadDirectory$2.run(CopyOnReadDirectory.java:305)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:362)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:356)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.child(MemoryNodeBuilder.java:342)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.createOutput(OakDirectory.java:214)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.createOutput(BufferedOakDirectory.java:178)
>   at org.apache.lucene.store.Directory.copy(Directory.java:184)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:322)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:1)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:105)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8673) Determine and possibly adjust size of eagerCacheSize

2019-11-01 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964729#comment-16964729
 ] 

Thomas Mueller commented on OAK-8673:
-

> the threshold to move from eagerly-loading all permission entries to lazy 
> loading is defined by the EagerCacheSize.

So, maybe test with EagerCacheSize = 0, or (if that's not possible) 1?

> Determine and possibly adjust size of eagerCacheSize
> 
>
> Key: OAK-8673
> URL: https://issues.apache.org/jira/browse/OAK-8673
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, security
>Reporter: Angela Schreiber
>Assignee: Angela Schreiber
>Priority: Major
>
> The initial results of the {{EagerCacheSizeTest}} seem to indicate that we 
> almost never benefit from the lazy permission evaluation (compared to reading 
> all permission entries right away). From my understanding of the results the 
> only exception are those cases where only very few items are being accessed 
> (e.g. reading 100 items).
> However, I am not totally sure that this is not an artifact of the random-read. 
> I therefore started extending the benchmark with an option to re-read a 
> randomly picked item more than once, which according to some analysis done 
> quite some time ago is a common scenario, especially when using Oak in 
> combination with Apache Sling.
> Results are attached to OAK-8662 (possibly more to come).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8673) Determine and possibly adjust size of eagerCacheSize

2019-11-01 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964711#comment-16964711
 ] 

Thomas Mueller commented on OAK-8673:
-

> we almost never benefit from the lazy permission evaluation (compared to 
> reading all permission entries right away). 

Just to make sure: it sounds like "lazy permission evaluation disabled" means 
"reading all permission entries right away"... right? And it sounds like 
you are considering disabling lazy permission evaluation?

Which benchmark results show data for "lazy permission evaluation disabled", 
and which show data for "lazy permission evaluation enabled"? I only 
see different settings for 

* Items to Read
* Repeat Read
* Number of ACEs
* Number of Principals
* EagerCacheSize


> Determine and possibly adjust size of eagerCacheSize
> 
>
> Key: OAK-8673
> URL: https://issues.apache.org/jira/browse/OAK-8673
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, security
>Reporter: Angela Schreiber
>Assignee: Angela Schreiber
>Priority: Major
>
> The initial results of the {{EagerCacheSizeTest}} seem to indicate that we 
> almost never benefit from the lazy permission evaluation (compared to reading 
> all permission entries right away). From my understanding of the results the 
> only exception are those cases where only very few items are being accessed 
> (e.g. reading 100 items).
> However, I am not totally sure that this is not an artifact of the random-read. 
> I therefore started extending the benchmark with an option to re-read a 
> randomly picked item more than once, which according to some analysis done 
> quite some time ago is a common scenario, especially when using Oak in 
> combination with Apache Sling.
> Results are attached to OAK-8662 (possibly more to come).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (OAK-8729) Lucene Directory concurrency issue

2019-10-31 Thread Thomas Mueller (Jira)
Thomas Mueller created OAK-8729:
---

 Summary: Lucene Directory concurrency issue
 Key: OAK-8729
 URL: https://issues.apache.org/jira/browse/OAK-8729
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: lucene
Reporter: Thomas Mueller


There is a concurrency issue in the DefaultDirectoryFactory. It is reproducible 
sometimes using CopyOnWriteDirectoryTest.copyOnWrite(), if run in a loop (1000 
times). The problem is that the MemoryNodeBuilder is used concurrently:

* thread 1 is closing the directory (after writing to it)
* thread 2 is trying to create a new file

{noformat}
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:525)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.close(OakDirectory.java:264)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.close(BufferedOakDirectory.java:217)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnReadDirectory$2.run(CopyOnReadDirectory.java:305)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.exists(MemoryNodeBuilder.java:284)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:362)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:356)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.child(MemoryNodeBuilder.java:342)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.directory.OakDirectory.createOutput(OakDirectory.java:214)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.directory.BufferedOakDirectory.createOutput(BufferedOakDirectory.java:178)
at org.apache.lucene.store.Directory.copy(Directory.java:184)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:322)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$3.call(CopyOnWriteDirectory.java:1)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:105)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.directory.CopyOnWriteDirectory$2$1.call(CopyOnWriteDirectory.java:1)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (OAK-8162) When query with OR is divided into union of queries, options (like index tag) are not passed into subqueries.

2019-10-31 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-8162.
-
Resolution: Fixed

Yes, this is fixed. I also changed the fix version to 1.14.

> When query with OR is divided into union of queries, options (like index tag) 
> are not passed into subqueries. 
> --
>
> Key: OAK-8162
> URL: https://issues.apache.org/jira/browse/OAK-8162
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.10.2
>Reporter: Piotr Tajduś
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.14.0
>
>
> When a query with OR is divided into a union of queries, options (like index 
> tag) are not passed into the subqueries - as a result, the alternative query 
> sometimes uses indexes it shouldn't use.
>  {noformat}
> org.apache.jackrabbit.oak.query.QueryImpl.buildAlternativeQuery()
> org.apache.jackrabbit.oak.query.QueryImpl.copyOf()
>  
> 2019-03-21 16:32:25,600 DEBUG 
> [org.apache.jackrabbit.oak.query.QueryEngineImpl] (default task-1) Parsing 
> JCR-SQL2 statement: select distinct d.* from [crkid:document] as d where 
> ([d].[metadane/inneMetadane/*/wartosc] = 'AX' and 
> [d].[metadane/inneMetadane/*/klucz] = 'InnyKod') or 
> ([d].[metadane/inneMetadane/*/wartosc] = 'AB' and 
> [d].[metadane/inneMetadane/*/klucz] = 'InnyKod') option(index tag 
> crkid_dokument_month_2019_3)
> 2019-03-21 16:32:25,607 DEBUG [org.apache.jackrabbit.oak.query.QueryImpl] 
> (default task-1) cost using filter Filter(query=select distinct d.* from 
> [crkid:document] as d where ([d].[metadane/inneMetadane/*/wartosc] = 'AB') 
> and ([d].[metadane/inneMetadane/*/klucz] = 'InnyKod'), path=*, 
> property=[metadane/inneMetadane/*/klucz=[InnyKod], 
> metadane/inneMetadane/*/wartosc=[AB]])
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8162) When query with OR is divided into union of queries, options (like index tag) are not passed into subqueries.

2019-10-31 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8162:

Fix Version/s: (was: 1.20.0)
   1.14.0

> When query with OR is divided into union of queries, options (like index tag) 
> are not passed into subqueries. 
> --
>
> Key: OAK-8162
> URL: https://issues.apache.org/jira/browse/OAK-8162
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.10.2
>Reporter: Piotr Tajduś
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.14.0
>
>
> When a query with OR is divided into a union of queries, options (like index 
> tag) are not passed into the subqueries - as a result, the alternative query 
> sometimes uses indexes it shouldn't use.
>  {noformat}
> org.apache.jackrabbit.oak.query.QueryImpl.buildAlternativeQuery()
> org.apache.jackrabbit.oak.query.QueryImpl.copyOf()
>  
> 2019-03-21 16:32:25,600 DEBUG 
> [org.apache.jackrabbit.oak.query.QueryEngineImpl] (default task-1) Parsing 
> JCR-SQL2 statement: select distinct d.* from [crkid:document] as d where 
> ([d].[metadane/inneMetadane/*/wartosc] = 'AX' and 
> [d].[metadane/inneMetadane/*/klucz] = 'InnyKod') or 
> ([d].[metadane/inneMetadane/*/wartosc] = 'AB' and 
> [d].[metadane/inneMetadane/*/klucz] = 'InnyKod') option(index tag 
> crkid_dokument_month_2019_3)
> 2019-03-21 16:32:25,607 DEBUG [org.apache.jackrabbit.oak.query.QueryImpl] 
> (default task-1) cost using filter Filter(query=select distinct d.* from 
> [crkid:document] as d where ([d].[metadane/inneMetadane/*/wartosc] = 'AB') 
> and ([d].[metadane/inneMetadane/*/klucz] = 'InnyKod'), path=*, 
> property=[metadane/inneMetadane/*/klucz=[InnyKod], 
> metadane/inneMetadane/*/wartosc=[AB]])
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (OAK-5272) Expose BlobStore API to provide information whether blob id is content hashed

2019-10-31 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller reassigned OAK-5272:
---

Assignee: (was: Thomas Mueller)

> Expose BlobStore API to provide information whether blob id is content hashed
> -
>
> Key: OAK-5272
> URL: https://issues.apache.org/jira/browse/OAK-5272
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: blob
>Reporter: Amit Jain
>Priority: Major
> Fix For: 1.20.0
>
>
> As per discussion in OAK-5253 it's better to have some information from the 
> BlobStore(s) whether the blob id can be solely relied upon for comparison.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (OAK-8721) Automatically pick the latest active index version

2019-10-31 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-8721.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> Automatically pick the latest active index version
> --
>
> Key: OAK-8721
> URL: https://issues.apache.org/jira/browse/OAK-8721
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: indexing, lucene
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.20.0
>
>
> When using the composite node store for blue-green deployments, multiple 
> versions of an index can exist at the same time, for a short period 
> (while both blue and green are running). It is possible to 
> select which index is active using the "useIfExists" settings in the index 
> configurations. However, this is complicated and hard to explain / understand.
> Instead, we can rely on naming patterns of the index node name. E.g.
> * lucene
> * lucene-2 (newer product version)
> * lucene-2-custom-2 (customized version of lucene-2)
> * lucene-2-custom-3 (customized again)
> * lucene-3-custom-1 (newer product version)
> It would be good if index selection were automatic, meaning that only indexes 
> that are available in the read-only repository (of the composite node store) 
> are active.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8721) Automatically pick the latest active index version

2019-10-31 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16963729#comment-16963729
 ] 

Thomas Mueller commented on OAK-8721:
-

http://svn.apache.org/r1869202 (trunk)

Compared to the pull request, I added some logic so that filtering of indexes 
is only done when using the composite node store (when using non-default 
mounts). That way, installations without the composite node store won't be 
affected.
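
A hypothetical sketch of the guard described above; all names are illustrative and 
this is not the committed code.

{noformat}
import java.util.Collection;

// Hypothetical sketch: version-based filtering of index definitions is only
// applied when non-default mounts are present, i.e. when a composite node
// store is in use; plain setups are left untouched.
public class IndexFilterGuardSketch {

    public static Collection<String> activeIndexes(boolean hasNonDefaultMounts,
                                                   Collection<String> indexNames) {
        if (!hasNonDefaultMounts) {
            // no composite node store: keep the existing behaviour
            return indexNames;
        }
        return filterToLatestActiveVersions(indexNames);
    }

    private static Collection<String> filterToLatestActiveVersions(Collection<String> indexNames) {
        // placeholder: keep only the latest product / customization version per
        // base name, following the naming pattern in the issue description
        return indexNames;
    }
}
{noformat}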

> Automatically pick the latest active index version
> --
>
> Key: OAK-8721
> URL: https://issues.apache.org/jira/browse/OAK-8721
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: indexing, lucene
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> When using the composite node store for blue-green deployments, multiple 
> versions of an index can exist for a short period of time (while both blue 
> and green are running). It is possible to select which index is active using 
> the "useIfExists" setting in the index configurations. However, this is 
> complicated and hard to explain and understand.
> Instead, we can rely on naming patterns of the index node name. E.g.
> * lucene
> * lucene-2 (newer product version)
> * lucene-2-custom-2 (customized version of lucene-2)
> * lucene-2-custom-3 (customized again)
> * lucene-3-custom-1 (newer product version)
> It would be good if index selection were automatic, meaning that only indexes 
> that are available in the read-only repository (of the composite node store) 
> are active.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8705) Broken logging in CopyOnWriteDirectory

2019-10-30 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8705:

Fix Version/s: 1.18.0
   1.10.6
   1.8.18

> Broken logging in CopyOnWriteDirectory
> --
>
> Key: OAK-8705
> URL: https://issues.apache.org/jira/browse/OAK-8705
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.18.0, 1.10.5, 1.8.17
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.18.0, 1.8.18, 1.10.6
>
>
> In trunk:
> {noformat}
> oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/directory/CopyOnWriteDirectory.java
> 437-long remoteFileLength = remote.fileLength(name);
> 438- validLocalCopyPresent = localFileLength == 
> remoteFileLength;
> 439-
> 440- if (!validLocalCopyPresent) {
> 441: log.warn("COWRemoteFileReference::file ({}) differs 
> in length. local: {}; remote: {}, init-remote-length",
> 442- name, localFileLength, remoteFileLength);
> 443- }
> 444-} else if (!IndexCopier.REMOTE_ONLY.contains(name)) {
> 445-log.warn("COWRemoteFileReference::local file ({}) doesn't 
> exist", name);
> {noformat}
> (note the trailing "init-remote-length" that does not make any sense)
> Worse, in 1.10 and 1.8:
> {noformat}
> oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/directory/CopyOnWriteDirectory.java
> 426-long remoteFileLength = remote.fileLength(name);
> 427- validLocalCopyPresent = localFileLength == 
> remoteFileLength;
> 428-
> 429- if (!validLocalCopyPresent) {
> 430: log.warn("COWRemoteFileReference::file ({}) differs 
> in length. local: {}; remote: {}, init-remote-length",
> 431- localFileLength, remoteFileLength, length);
> 432- }
> 433-} else if (!IndexCopier.REMOTE_ONLY.contains(name)) {
> 434-log.warn("COWRemoteFileReference::local file ({}) doesn't 
> exist", name);
> {noformat}
> (name parameter missing, so localFileLength is logged as filename)
> Proposal:
> - make this consistent everywhere
> - either mention "init-remote-length" *and* log the value, or remove it from 
> the message
> - (and fix the indentation :-)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (OAK-8705) Broken logging in CopyOnWriteDirectory

2019-10-30 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-8705.
-
Resolution: Fixed

> Broken logging in CopyOnWriteDirectory
> --
>
> Key: OAK-8705
> URL: https://issues.apache.org/jira/browse/OAK-8705
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.10.5, 1.8.17
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.8.18, 1.10.6, 1.18.0
>
>
> In trunk:
> {noformat}
> oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/directory/CopyOnWriteDirectory.java
> 437-long remoteFileLength = remote.fileLength(name);
> 438- validLocalCopyPresent = localFileLength == 
> remoteFileLength;
> 439-
> 440- if (!validLocalCopyPresent) {
> 441: log.warn("COWRemoteFileReference::file ({}) differs 
> in length. local: {}; remote: {}, init-remote-length",
> 442- name, localFileLength, remoteFileLength);
> 443- }
> 444-} else if (!IndexCopier.REMOTE_ONLY.contains(name)) {
> 445-log.warn("COWRemoteFileReference::local file ({}) doesn't 
> exist", name);
> {noformat}
> (note the trailing "init-remote-length" that does not make any sense)
> Worse, in 1.10 and 1.8:
> {noformat}
> oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/directory/CopyOnWriteDirectory.java
> 426-long remoteFileLength = remote.fileLength(name);
> 427- validLocalCopyPresent = localFileLength == 
> remoteFileLength;
> 428-
> 429- if (!validLocalCopyPresent) {
> 430: log.warn("COWRemoteFileReference::file ({}) differs 
> in length. local: {}; remote: {}, init-remote-length",
> 431- localFileLength, remoteFileLength, length);
> 432- }
> 433-} else if (!IndexCopier.REMOTE_ONLY.contains(name)) {
> 434-log.warn("COWRemoteFileReference::local file ({}) doesn't 
> exist", name);
> {noformat}
> (name parameter missing, so localFileLength is logged as filename)
> Proposal:
> - make this consistent everywhere
> - either mention "init-remote-length" *and* log the value, or remove it from 
> the message
> - (and fix the indentation :-)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8705) Broken logging in CopyOnWriteDirectory

2019-10-30 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8705:

Affects Version/s: (was: 1.18.0)

> Broken logging in CopyOnWriteDirectory
> --
>
> Key: OAK-8705
> URL: https://issues.apache.org/jira/browse/OAK-8705
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.10.5, 1.8.17
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.18.0, 1.8.18, 1.10.6
>
>
> In trunk:
> {noformat}
> oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/directory/CopyOnWriteDirectory.java
> 437-long remoteFileLength = remote.fileLength(name);
> 438- validLocalCopyPresent = localFileLength == 
> remoteFileLength;
> 439-
> 440- if (!validLocalCopyPresent) {
> 441: log.warn("COWRemoteFileReference::file ({}) differs 
> in length. local: {}; remote: {}, init-remote-length",
> 442- name, localFileLength, remoteFileLength);
> 443- }
> 444-} else if (!IndexCopier.REMOTE_ONLY.contains(name)) {
> 445-log.warn("COWRemoteFileReference::local file ({}) doesn't 
> exist", name);
> {noformat}
> (note the trailing "init-remote-length" that does not make any sense)
> Worse, in 1.10 and 1.8:
> {noformat}
> oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/directory/CopyOnWriteDirectory.java
> 426-long remoteFileLength = remote.fileLength(name);
> 427- validLocalCopyPresent = localFileLength == 
> remoteFileLength;
> 428-
> 429- if (!validLocalCopyPresent) {
> 430: log.warn("COWRemoteFileReference::file ({}) differs 
> in length. local: {}; remote: {}, init-remote-length",
> 431- localFileLength, remoteFileLength, length);
> 432- }
> 433-} else if (!IndexCopier.REMOTE_ONLY.contains(name)) {
> 434-log.warn("COWRemoteFileReference::local file ({}) doesn't 
> exist", name);
> {noformat}
> (name parameter missing, so localFileLength is logged as filename)
> Proposal:
> - make this consistent everywhere
> - either mention "init-remote-length" *and* log the value, or remove it from 
> the message
> - (and fix the indentation :-)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8705) Broken logging in CopyOnWriteDirectory

2019-10-30 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16963022#comment-16963022
 ] 

Thomas Mueller commented on OAK-8705:
-

There is no 1.18 branch. In trunk, this was fixed in 
http://svn.apache.org/r1854565 (Mar 1, 2019), which is before Oak 1.18, so Oak 
1.18 shouldn't be affected. (Right?) I will then close this issue and change 
the fix versions.

> Broken logging in CopyOnWriteDirectory
> --
>
> Key: OAK-8705
> URL: https://issues.apache.org/jira/browse/OAK-8705
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.18.0, 1.10.5, 1.8.17
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
>Priority: Minor
>
> In trunk:
> {noformat}
> oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/directory/CopyOnWriteDirectory.java
> 437-long remoteFileLength = remote.fileLength(name);
> 438- validLocalCopyPresent = localFileLength == 
> remoteFileLength;
> 439-
> 440- if (!validLocalCopyPresent) {
> 441: log.warn("COWRemoteFileReference::file ({}) differs 
> in length. local: {}; remote: {}, init-remote-length",
> 442- name, localFileLength, remoteFileLength);
> 443- }
> 444-} else if (!IndexCopier.REMOTE_ONLY.contains(name)) {
> 445-log.warn("COWRemoteFileReference::local file ({}) doesn't 
> exist", name);
> {noformat}
> (note the trailing "init-remote-length" that does not make any sense)
> Worse, in 1.10 and 1.8:
> {noformat}
> oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/directory/CopyOnWriteDirectory.java
> 426-long remoteFileLength = remote.fileLength(name);
> 427- validLocalCopyPresent = localFileLength == 
> remoteFileLength;
> 428-
> 429- if (!validLocalCopyPresent) {
> 430: log.warn("COWRemoteFileReference::file ({}) differs 
> in length. local: {}; remote: {}, init-remote-length",
> 431- localFileLength, remoteFileLength, length);
> 432- }
> 433-} else if (!IndexCopier.REMOTE_ONLY.contains(name)) {
> 434-log.warn("COWRemoteFileReference::local file ({}) doesn't 
> exist", name);
> {noformat}
> (name parameter missing, so localFileLength is logged as filename)
> Proposal:
> - make this consistent everywhere
> - either mention "init-remote-length" *and* log the value, or remove it from 
> the message
> - (and fix the indentation :-)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8705) Broken logging in CopyOnWriteDirectory

2019-10-30 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16963018#comment-16963018
 ] 

Thomas Mueller commented on OAK-8705:
-

http://svn.apache.org/r1869171 ( 1.10 branch)

> Broken logging in CopyOnWriteDirectory
> --
>
> Key: OAK-8705
> URL: https://issues.apache.org/jira/browse/OAK-8705
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.18.0, 1.10.5, 1.8.17
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
>Priority: Minor
>
> In trunk:
> {noformat}
> oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/directory/CopyOnWriteDirectory.java
> 437-long remoteFileLength = remote.fileLength(name);
> 438- validLocalCopyPresent = localFileLength == 
> remoteFileLength;
> 439-
> 440- if (!validLocalCopyPresent) {
> 441: log.warn("COWRemoteFileReference::file ({}) differs 
> in length. local: {}; remote: {}, init-remote-length",
> 442- name, localFileLength, remoteFileLength);
> 443- }
> 444-} else if (!IndexCopier.REMOTE_ONLY.contains(name)) {
> 445-log.warn("COWRemoteFileReference::local file ({}) doesn't 
> exist", name);
> {noformat}
> (note the trailing "init-remote-length" that does not make any sense)
> Worse, in 1.10 and 1.8:
> {noformat}
> oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/directory/CopyOnWriteDirectory.java
> 426-long remoteFileLength = remote.fileLength(name);
> 427- validLocalCopyPresent = localFileLength == 
> remoteFileLength;
> 428-
> 429- if (!validLocalCopyPresent) {
> 430: log.warn("COWRemoteFileReference::file ({}) differs 
> in length. local: {}; remote: {}, init-remote-length",
> 431- localFileLength, remoteFileLength, length);
> 432- }
> 433-} else if (!IndexCopier.REMOTE_ONLY.contains(name)) {
> 434-log.warn("COWRemoteFileReference::local file ({}) doesn't 
> exist", name);
> {noformat}
> (name parameter missing, so localFileLength is logged as filename)
> Proposal:
> - make this consistent everywhere
> - either mention "init-remote-length" *and* log the value, or remove it from 
> the message
> - (and fix the indentation :-)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8705) Broken logging in CopyOnWriteDirectory

2019-10-30 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16963014#comment-16963014
 ] 

Thomas Mueller commented on OAK-8705:
-

http://svn.apache.org/r1869169 (1.8 branch)

I just fixed the fields to be logged. 

> the trailing "init-remote-length" that does not make any sense

Yes, it's not very useful (AFAIK it means "initializing the remote file, 
checking the length"), but it doesn't cause problems.
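
For reference, a minimal sketch of the intended log call (three "{}" placeholders 
matched by exactly three arguments, name first, and no trailing "init-remote-length" 
token); it mirrors the snippet quoted below rather than the committed diff.

{noformat}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Minimal sketch, not the actual class: the message has three placeholders and
// the arguments are passed in the matching order (name, local, remote).
public class CowLoggingSketch {

    private static final Logger log = LoggerFactory.getLogger(CowLoggingSketch.class);

    void checkLength(String name, long localFileLength, long remoteFileLength) {
        boolean validLocalCopyPresent = localFileLength == remoteFileLength;
        if (!validLocalCopyPresent) {
            log.warn("COWRemoteFileReference::file ({}) differs in length. local: {}; remote: {}",
                    name, localFileLength, remoteFileLength);
        }
    }
}
{noformat}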

> Broken logging in CopyOnWriteDirectory
> --
>
> Key: OAK-8705
> URL: https://issues.apache.org/jira/browse/OAK-8705
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.18.0, 1.10.5, 1.8.17
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
>Priority: Minor
>
> In trunk:
> {noformat}
> oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/directory/CopyOnWriteDirectory.java
> 437-long remoteFileLength = remote.fileLength(name);
> 438- validLocalCopyPresent = localFileLength == 
> remoteFileLength;
> 439-
> 440- if (!validLocalCopyPresent) {
> 441: log.warn("COWRemoteFileReference::file ({}) differs 
> in length. local: {}; remote: {}, init-remote-length",
> 442- name, localFileLength, remoteFileLength);
> 443- }
> 444-} else if (!IndexCopier.REMOTE_ONLY.contains(name)) {
> 445-log.warn("COWRemoteFileReference::local file ({}) doesn't 
> exist", name);
> {noformat}
> (note the trailing "init-remote-length" that does not make any sense)
> Worse, in 1.10 and 1.8:
> {noformat}
> oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/directory/CopyOnWriteDirectory.java
> 426-long remoteFileLength = remote.fileLength(name);
> 427- validLocalCopyPresent = localFileLength == 
> remoteFileLength;
> 428-
> 429- if (!validLocalCopyPresent) {
> 430: log.warn("COWRemoteFileReference::file ({}) differs 
> in length. local: {}; remote: {}, init-remote-length",
> 431- localFileLength, remoteFileLength, length);
> 432- }
> 433-} else if (!IndexCopier.REMOTE_ONLY.contains(name)) {
> 434-log.warn("COWRemoteFileReference::local file ({}) doesn't 
> exist", name);
> {noformat}
> (name parameter missing, so localFileLength is logged as filename)
> Proposal:
> - make this consistent everywhere
> - either mention "init-remote-length" *and* log the value, or remove it from 
> the message
> - (and fix the indentation :-)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8721) Automatically pick the latest active index version

2019-10-30 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16962809#comment-16962809
 ] 

Thomas Mueller commented on OAK-8721:
-

[~catholicon] [~fabrizio.fort...@gmail.com] [~nitigupt] [~tihom88] [~teofili] 
could you review the PR https://github.com/apache/jackrabbit-oak/pull/158 
please?

> Automatically pick the latest active index version
> --
>
> Key: OAK-8721
> URL: https://issues.apache.org/jira/browse/OAK-8721
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: indexing, lucene
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> When using the composite node store for blue-green deployments, multiple 
> versions of an index can exist for a short period of time (while both blue 
> and green are running). It is possible to select which index is active using 
> the "useIfExists" setting in the index configurations. However, this is 
> complicated and hard to explain and understand.
> Instead, we can rely on naming patterns of the index node name. E.g.
> * lucene
> * lucene-2 (newer product version)
> * lucene-2-custom-2 (customized version of lucene-2)
> * lucene-2-custom-3 (customized again)
> * lucene-3-custom-1 (newer product version)
> It would be good if index selection were automatic, meaning that only indexes 
> that are available in the read-only repository (of the composite node store) 
> are active.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (OAK-8721) Automatically pick the latest active index version

2019-10-30 Thread Thomas Mueller (Jira)
Thomas Mueller created OAK-8721:
---

 Summary: Automatically pick the latest active index version
 Key: OAK-8721
 URL: https://issues.apache.org/jira/browse/OAK-8721
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: indexing, lucene
Reporter: Thomas Mueller
Assignee: Thomas Mueller


When using the composite node store for blue-green deployments, multiple 
versions of an index can exist for a short period of time (while both blue 
and green are running). It is possible to select which index is active using 
the "useIfExists" setting in the index configurations. However, this is 
complicated and hard to explain and understand.

Instead, we can rely on naming patterns of the index node name. E.g.

* lucene
* lucene-2 (newer product version)
* lucene-2-custom-2 (customized version of lucene-2)
* lucene-2-custom-3 (customized again)
* lucene-3-custom-1 (newer product version)

It would be good if index selection were automatic, meaning that only indexes 
that are available in the read-only repository (of the composite node store) 
are active.
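
A hypothetical sketch of how such names could be parsed and compared to pick the 
latest version; it only illustrates the naming pattern above and ignores the check 
whether the index is available in the read-only repository.

{noformat}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch, not the committed code: order names like "lucene",
// "lucene-2" and "lucene-2-custom-3" by product version, then by customization
// version.
public class IndexVersionSketch {

    // base name, optional product version, optional customization version
    private static final Pattern NAME =
            Pattern.compile("([^-]+)(?:-(\\d+))?(?:-custom-(\\d+))?");

    // returns {productVersion, customVersion}; a missing product version counts
    // as 1, a missing customization version as 0
    public static int[] versionOf(String nodeName) {
        Matcher m = NAME.matcher(nodeName);
        if (!m.matches()) {
            return new int[] { 0, 0 };
        }
        int product = m.group(2) == null ? 1 : Integer.parseInt(m.group(2));
        int custom = m.group(3) == null ? 0 : Integer.parseInt(m.group(3));
        return new int[] { product, custom };
    }

    // "lucene-3-custom-1" wins over "lucene-2-custom-3", which wins over "lucene"
    public static String latest(String a, String b) {
        int[] va = versionOf(a);
        int[] vb = versionOf(b);
        if (va[0] != vb[0]) {
            return va[0] > vb[0] ? a : b;
        }
        return va[1] >= vb[1] ? a : b;
    }
}
{noformat}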



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (OAK-8639) Composite node store tests with document store

2019-10-22 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-8639.
-
Resolution: Fixed

> Composite node store tests with document store
> --
>
> Key: OAK-8639
> URL: https://issues.apache.org/jira/browse/OAK-8639
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, indexing, test
>Reporter: Fabrizio Fortino
>Assignee: Thomas Mueller
>Priority: Major
>  Labels: fabriziofortino, indexingPatch
> Fix For: 1.20.0
>
> Attachments: GRANITE-27309_tests_2.patch
>
>
> CompositeNodeStore tests using document store (h2, document memory) are 
> currently disabled because the index creation does not work. 
> [https://svn.apache.org/repos/asf/jackrabbit/oak/trunk/oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite/CompositeNodeStoreQueryTestBase.java]
>  
> The below assertion fails because the lucene index is not found. This does 
> not happen with segment and memory stores.
>  
> {noformat}
> java.lang.AssertionError: java.lang.AssertionError: Expected: a string 
> containing "/* traverse \"//*\" where ([a].[foo] = 'bar'" but: was "plan: 
> [nt:base] as [a] /* lucene:luceneTest(/oak:index/luceneTest) foo:bar where 
> ([a].[foo] = 'bar') and (isdescendantnode([a], [/])) */ "Expected :a string 
> containing "/* traverse \"//*\" where ([a].[foo] = 'bar'"Actual   :"plan: 
> [nt:base] as [a] /* lucene:luceneTest(/oak:index/luceneTest) foo:bar where 
> ([a].[foo] = 'bar') and (isdescendantnode([a], [/])) */ " <Click to see difference>
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) 
> at org.junit.Assert.assertThat(Assert.java:956) 
> at org.junit.Assert.assertThat(Assert.java:923) 
> at 
> org.apache.jackrabbit.oak.composite.CompositeNodeStoreLuceneIndexTest.removeLuceneIndex(CompositeNodeStoreLuceneIndexTest.java:169)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:498) 
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> at org.junit.runners.Suite.runChild(Suite.java:128) 
> at org.junit.runners.Suite.runChild(Suite.java:27) 
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) 
> at org.junit.rules.RunRules.evaluate(RunRules.java:20) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137) 
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>  
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>  
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8639) Composite node store tests with document store

2019-10-22 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16956981#comment-16956981
 ] 

Thomas Mueller commented on OAK-8639:
-

https://svn.apache.org/r1868751

> Composite node store tests with document store
> --
>
> Key: OAK-8639
> URL: https://issues.apache.org/jira/browse/OAK-8639
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, indexing, test
>Reporter: Fabrizio Fortino
>Assignee: Thomas Mueller
>Priority: Major
>  Labels: fabriziofortino, indexingPatch
> Fix For: 1.20.0
>
> Attachments: GRANITE-27309_tests_2.patch
>
>
> CompositeNodeStore tests using document store (h2, document memory) are 
> currently disabled because the index creation does not work. 
> [https://svn.apache.org/repos/asf/jackrabbit/oak/trunk/oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite/CompositeNodeStoreQueryTestBase.java]
>  
> The below assertion fails because the lucene index is not found. This does 
> not happen with segment and memory stores.
>  
> {noformat}
> java.lang.AssertionError: java.lang.AssertionError: Expected: a string 
> containing "/* traverse \"//*\" where ([a].[foo] = 'bar'" but: was "plan: 
> [nt:base] as [a] /* lucene:luceneTest(/oak:index/luceneTest) foo:bar where 
> ([a].[foo] = 'bar') and (isdescendantnode([a], [/])) */ "Expected :a string 
> containing "/* traverse \"//*\" where ([a].[foo] = 'bar'"Actual   :"plan: 
> [nt:base] as [a] /* lucene:luceneTest(/oak:index/luceneTest) foo:bar where 
> ([a].[foo] = 'bar') and (isdescendantnode([a], [/])) */ " <Click to see difference>
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) 
> at org.junit.Assert.assertThat(Assert.java:956) 
> at org.junit.Assert.assertThat(Assert.java:923) 
> at 
> org.apache.jackrabbit.oak.composite.CompositeNodeStoreLuceneIndexTest.removeLuceneIndex(CompositeNodeStoreLuceneIndexTest.java:169)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:498) 
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> at org.junit.runners.Suite.runChild(Suite.java:128) 
> at org.junit.runners.Suite.runChild(Suite.java:27) 
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) 
> at org.junit.rules.RunRules.evaluate(RunRules.java:20) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137) 
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>  
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>  
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (OAK-8639) Composite node store tests with document store

2019-10-22 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller reassigned OAK-8639:
---

Assignee: Thomas Mueller

> Composite node store tests with document store
> --
>
> Key: OAK-8639
> URL: https://issues.apache.org/jira/browse/OAK-8639
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, indexing, test
>Reporter: Fabrizio Fortino
>Assignee: Thomas Mueller
>Priority: Major
>  Labels: fabriziofortino, indexingPatch
> Attachments: GRANITE-27309_tests_2.patch
>
>
> CompositeNodeStore tests using document store (h2, document memory) are 
> currently disabled because the index creation does not work. 
> [https://svn.apache.org/repos/asf/jackrabbit/oak/trunk/oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite/CompositeNodeStoreQueryTestBase.java]
>  
> The below assertion fails because the lucene index is not found. This does 
> not happen with segment and memory stores.
>  
> {noformat}
> java.lang.AssertionError: java.lang.AssertionError: Expected: a string 
> containing "/* traverse \"//*\" where ([a].[foo] = 'bar'" but: was "plan: 
> [nt:base] as [a] /* lucene:luceneTest(/oak:index/luceneTest) foo:bar where 
> ([a].[foo] = 'bar') and (isdescendantnode([a], [/])) */ "Expected :a string 
> containing "/* traverse \"//*\" where ([a].[foo] = 'bar'"Actual   :"plan: 
> [nt:base] as [a] /* lucene:luceneTest(/oak:index/luceneTest) foo:bar where 
> ([a].[foo] = 'bar') and (isdescendantnode([a], [/])) */ " <Click to see difference>
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) 
> at org.junit.Assert.assertThat(Assert.java:956) 
> at org.junit.Assert.assertThat(Assert.java:923) 
> at 
> org.apache.jackrabbit.oak.composite.CompositeNodeStoreLuceneIndexTest.removeLuceneIndex(CompositeNodeStoreLuceneIndexTest.java:169)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:498) 
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> at org.junit.runners.Suite.runChild(Suite.java:128) 
> at org.junit.runners.Suite.runChild(Suite.java:27) 
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) 
> at org.junit.rules.RunRules.evaluate(RunRules.java:20) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137) 
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>  
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>  
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8639) Composite node store tests with document store

2019-10-22 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8639:

Fix Version/s: 1.20.0

> Composite node store tests with document store
> --
>
> Key: OAK-8639
> URL: https://issues.apache.org/jira/browse/OAK-8639
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, indexing, test
>Reporter: Fabrizio Fortino
>Assignee: Thomas Mueller
>Priority: Major
>  Labels: fabriziofortino, indexingPatch
> Fix For: 1.20.0
>
> Attachments: GRANITE-27309_tests_2.patch
>
>
> CompositeNodeStore tests using document store (h2, document memory) are 
> currently disabled because the index creation does not work. 
> [https://svn.apache.org/repos/asf/jackrabbit/oak/trunk/oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite/CompositeNodeStoreQueryTestBase.java]
>  
> The below assertion fails because the lucene index is not found. This does 
> not happen with segment and memory stores.
>  
> {noformat}
> java.lang.AssertionError: java.lang.AssertionError: Expected: a string 
> containing "/* traverse \"//*\" where ([a].[foo] = 'bar'" but: was "plan: 
> [nt:base] as [a] /* lucene:luceneTest(/oak:index/luceneTest) foo:bar where 
> ([a].[foo] = 'bar') and (isdescendantnode([a], [/])) */ "Expected :a string 
> containing "/* traverse \"//*\" where ([a].[foo] = 'bar'"Actual   :"plan: 
> [nt:base] as [a] /* lucene:luceneTest(/oak:index/luceneTest) foo:bar where 
> ([a].[foo] = 'bar') and (isdescendantnode([a], [/])) */ " <Click to see difference>
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) 
> at org.junit.Assert.assertThat(Assert.java:956) 
> at org.junit.Assert.assertThat(Assert.java:923) 
> at 
> org.apache.jackrabbit.oak.composite.CompositeNodeStoreLuceneIndexTest.removeLuceneIndex(CompositeNodeStoreLuceneIndexTest.java:169)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:498) 
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> at org.junit.runners.Suite.runChild(Suite.java:128) 
> at org.junit.runners.Suite.runChild(Suite.java:27) 
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) 
> at org.junit.rules.RunRules.evaluate(RunRules.java:20) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137) 
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>  
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>  
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8639) Composite node store tests with document store

2019-10-22 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8639:

Labels: fabriziofortino indexingPatch  (was: )

> Composite node store tests with document store
> --
>
> Key: OAK-8639
> URL: https://issues.apache.org/jira/browse/OAK-8639
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, indexing, test
>Reporter: Fabrizio Fortino
>Priority: Major
>  Labels: fabriziofortino, indexingPatch
> Attachments: GRANITE-27309_tests_2.patch
>
>
> CompositeNodeStore tests using document store (h2, document memory) are 
> currently disabled because the index creation does not work. 
> [https://svn.apache.org/repos/asf/jackrabbit/oak/trunk/oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite/CompositeNodeStoreQueryTestBase.java]
>  
> The below assertion fails because the lucene index is not found. This does 
> not happen with segment and memory stores.
>  
> {noformat}
> java.lang.AssertionError: java.lang.AssertionError: Expected: a string 
> containing "/* traverse \"//*\" where ([a].[foo] = 'bar'" but: was "plan: 
> [nt:base] as [a] /* lucene:luceneTest(/oak:index/luceneTest) foo:bar where 
> ([a].[foo] = 'bar') and (isdescendantnode([a], [/])) */ "Expected :a string 
> containing "/* traverse \"//*\" where ([a].[foo] = 'bar'"Actual   :"plan: 
> [nt:base] as [a] /* lucene:luceneTest(/oak:index/luceneTest) foo:bar where 
> ([a].[foo] = 'bar') and (isdescendantnode([a], [/])) */ " <Click to see difference>
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) 
> at org.junit.Assert.assertThat(Assert.java:956) 
> at org.junit.Assert.assertThat(Assert.java:923) 
> at 
> org.apache.jackrabbit.oak.composite.CompositeNodeStoreLuceneIndexTest.removeLuceneIndex(CompositeNodeStoreLuceneIndexTest.java:169)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:498) 
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> at org.junit.runners.Suite.runChild(Suite.java:128) 
> at org.junit.runners.Suite.runChild(Suite.java:27) 
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) 
> at org.junit.rules.RunRules.evaluate(RunRules.java:20) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137) 
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>  
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>  
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-7947) Lazy loading of Lucene index files startup

2019-10-16 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952844#comment-16952844
 ] 

Thomas Mueller commented on OAK-7947:
-

Let's hold off on the backport until we have analyzed the issue and looked at 
alternatives.

> Lazy loading of Lucene index files startup
> --
>
> Key: OAK-7947
> URL: https://issues.apache.org/jira/browse/OAK-7947
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene, query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.12.0
>
> Attachments: OAK-7947.patch, OAK-7947_v2.patch, OAK-7947_v3.patch, 
> OAK-7947_v4.patch, OAK-7947_v5.patch, lucene-index-open-access.zip
>
>
> Right now, all Lucene index binaries are loaded on startup (I think when the 
> first query is run, to do cost calculation). This is a performance problem if 
> the index files are large, and need to be downloaded from the data store.
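
As a generic illustration of the lazy-loading idea described above (deferring the 
expensive open, e.g. downloading index files, until first use instead of at startup); 
this is not the Oak patch, just a self-contained lazy holder sketch.

{noformat}
import java.util.function.Supplier;

// Generic sketch: compute the value only when it is first requested, and cache
// it afterwards (double-checked locking on a volatile field).
public class Lazy<T> {

    private final Supplier<T> loader;
    private volatile T value;

    public Lazy(Supplier<T> loader) {
        this.loader = loader;
    }

    public T get() {
        T v = value;
        if (v == null) {
            synchronized (this) {
                v = value;
                if (v == null) {
                    value = v = loader.get();
                }
            }
        }
        return v;
    }
}
{noformat}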



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8626) Make query statistic handling faster (maybe asynchronous)

2019-09-19 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933408#comment-16933408
 ] 

Thomas Mueller commented on OAK-8626:
-

Do you know how often this was called? It might be called millions of times, in 
which case it could well be the hotspot for this query. There are usually tools 
that let you find out how often a query is executed (analytics tools that 
analyze query performance / list popular queries).

I wouldn't make it asynchronous unless it is really needed - asynchronous 
handling is complicated.
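
As an illustration of why this shows up in thread dumps: ConcurrentSkipListMap.size() 
traverses the whole map, so calling it on every query is linear in the number of 
tracked statements. Below is a hypothetical sketch that keeps the count in a separate 
counter instead; this is not the actual QueryStatsMBeanImpl code.

{noformat}
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: track the number of entries in an AtomicInteger so the
// per-query path never calls ConcurrentSkipListMap.size(), which is O(n).
public class QueryStatsSketch {

    private static final int MAX_ENTRIES = 5000;

    private final ConcurrentSkipListMap<String, AtomicLong> stats = new ConcurrentSkipListMap<>();
    private final AtomicInteger size = new AtomicInteger();

    public void recordExecution(String statement) {
        AtomicLong counter = stats.get(statement);
        if (counter == null) {
            AtomicLong fresh = new AtomicLong();
            AtomicLong previous = stats.putIfAbsent(statement, fresh);
            if (previous == null) {
                counter = fresh;
                // O(1) bound check instead of ConcurrentSkipListMap.size()
                if (size.incrementAndGet() > MAX_ENTRIES) {
                    // simplistic eviction; a real implementation would evict by
                    // last access time
                    if (stats.pollFirstEntry() != null) {
                        size.decrementAndGet();
                    }
                }
            } else {
                counter = previous;
            }
        }
        counter.incrementAndGet();
    }
}
{noformat}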

> Make query statistic handling faster (maybe asynchronous)
> -
>
> Key: OAK-8626
> URL: https://issues.apache.org/jira/browse/OAK-8626
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Affects Versions: 1.8.11
>Reporter: Jörg Hoh
>Priority: Major
>
> During performance analysis I found in a threaddump a number of threads in 
> this state:
> {noformat}
> "qtp1082699324-121518" prio=5 tid=0x1daae nid=0x timed_waiting
>java.lang.Thread.State: TIMED_WAITING
>   at 
> java.util.concurrent.ConcurrentSkipListMap.size(ConcurrentSkipListMap.java:1639)
>   at 
> org.apache.jackrabbit.oak.query.stats.QueryStatsMBeanImpl.getQueryExecution(QueryStatsMBeanImpl.java:134)
>   at 
> org.apache.jackrabbit.oak.query.QueryEngineImpl.parseQuery(QueryEngineImpl.java:160)
>   at 
> org.apache.jackrabbit.oak.query.QueryEngineImpl.executeQuery(QueryEngineImpl.java:259)
>   at 
> org.apache.jackrabbit.oak.query.QueryEngineImpl.executeQuery(QueryEngineImpl.java:235)
>   at 
> org.apache.jackrabbit.oak.plugins.identifier.IdentifierManager.resolveUUID(IdentifierManager.java:351)
>   at 
> org.apache.jackrabbit.oak.plugins.identifier.IdentifierManager.resolveUUID(IdentifierManager.java:345)
>   at 
> org.apache.jackrabbit.oak.plugins.identifier.IdentifierManager.resolveUUID(IdentifierManager.java:341)
>   at 
> org.apache.jackrabbit.oak.plugins.identifier.IdentifierManager.getTree(IdentifierManager.java:136)
>   at 
> org.apache.jackrabbit.oak.security.authentication.token.TokenProviderImpl.getTokenInfo(TokenProviderImpl.java:269)
>   at 
> org.apache.jackrabbit.oak.security.authentication.token.TokenAuthentication.validateCredentials(TokenAuthentication.java:105)
>   at 
> org.apache.jackrabbit.oak.security.authentication.token.TokenAuthentication.authenticate(TokenAuthentication.java:58)
>   at 
> org.apache.jackrabbit.oak.security.authentication.token.TokenLoginModule.login(TokenLoginModule.java:136)
>   at 
> org.apache.felix.jaas.boot.ProxyLoginModule.login(ProxyLoginModule.java:52)
>   at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
>   at 
> javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
>   at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
>   at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
>   at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
>   at 
> org.apache.jackrabbit.oak.core.ContentRepositoryImpl.login(ContentRepositoryImpl.java:163)
> [...]
> {noformat}
> Is it required to do the cache statistic handling within a query itself? Or 
> is it possible to perform it outside of it asynchronously in a dedicated 
> thread?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8626) Make query statistic handling faster (maybe asynchronous)

2019-09-19 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8626:

Summary: Make query statistic handling faster (maybe asynchronous)  (was: 
Make query statistic handling asynchronous)

> Make query statistic handling faster (maybe asynchronous)
> -
>
> Key: OAK-8626
> URL: https://issues.apache.org/jira/browse/OAK-8626
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Affects Versions: 1.8.11
>Reporter: Jörg Hoh
>Priority: Major
>
> During performance analysis I found in a threaddump a number of threads in 
> this state:
> {noformat}
> "qtp1082699324-121518" prio=5 tid=0x1daae nid=0x timed_waiting
>java.lang.Thread.State: TIMED_WAITING
>   at 
> java.util.concurrent.ConcurrentSkipListMap.size(ConcurrentSkipListMap.java:1639)
>   at 
> org.apache.jackrabbit.oak.query.stats.QueryStatsMBeanImpl.getQueryExecution(QueryStatsMBeanImpl.java:134)
>   at 
> org.apache.jackrabbit.oak.query.QueryEngineImpl.parseQuery(QueryEngineImpl.java:160)
>   at 
> org.apache.jackrabbit.oak.query.QueryEngineImpl.executeQuery(QueryEngineImpl.java:259)
>   at 
> org.apache.jackrabbit.oak.query.QueryEngineImpl.executeQuery(QueryEngineImpl.java:235)
>   at 
> org.apache.jackrabbit.oak.plugins.identifier.IdentifierManager.resolveUUID(IdentifierManager.java:351)
>   at 
> org.apache.jackrabbit.oak.plugins.identifier.IdentifierManager.resolveUUID(IdentifierManager.java:345)
>   at 
> org.apache.jackrabbit.oak.plugins.identifier.IdentifierManager.resolveUUID(IdentifierManager.java:341)
>   at 
> org.apache.jackrabbit.oak.plugins.identifier.IdentifierManager.getTree(IdentifierManager.java:136)
>   at 
> org.apache.jackrabbit.oak.security.authentication.token.TokenProviderImpl.getTokenInfo(TokenProviderImpl.java:269)
>   at 
> org.apache.jackrabbit.oak.security.authentication.token.TokenAuthentication.validateCredentials(TokenAuthentication.java:105)
>   at 
> org.apache.jackrabbit.oak.security.authentication.token.TokenAuthentication.authenticate(TokenAuthentication.java:58)
>   at 
> org.apache.jackrabbit.oak.security.authentication.token.TokenLoginModule.login(TokenLoginModule.java:136)
>   at 
> org.apache.felix.jaas.boot.ProxyLoginModule.login(ProxyLoginModule.java:52)
>   at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
>   at 
> javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
>   at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
>   at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
>   at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
>   at 
> org.apache.jackrabbit.oak.core.ContentRepositoryImpl.login(ContentRepositoryImpl.java:163)
> [...]
> {noformat}
> Is it required to do the cache statistic handling within a query itself? Or 
> is it possible to perform it outside of it asynchronously in a dedicated 
> thread?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8245) Add column for explained "statement" to "explain" Query result, next to 'plan' column.

2019-09-13 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8245:

Fix Version/s: 1.18.0

> Add column for explained "statement" to "explain" Query result, next to 
> 'plan' column.
> --
>
> Key: OAK-8245
> URL: https://issues.apache.org/jira/browse/OAK-8245
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Affects Versions: 1.12.0, 1.8.12, 1.10.2
>Reporter: Mark Adamcin
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.18.0
>
>
> The "explain" behavior of the core Query is very useful for helping to debug 
> JCR query execution planning. For xpath queries, the resulting "plan" column 
> refers to the result of running XPathToSQL2Converter to produce a JCR-SQL2 
> statement for execution. This SQL2 statement should be exposed through the 
> same API as the "plan", by way of an additional column named "statement" in 
> the single result row. 
> At this time, this underlying SQL2 statement is inaccessible to users of the 
> JCR Query interface, which can only provide the original XPath statement.
> To access the converted SQL2 statement, a class targeting the JCR API must 
> implement a regular expression match against a log stream retrieved via slf4j 
> MDC.
> This facility is not very portable, and an effective pattern on one version 
> of Oak may not be effective on a different version of Oak, for any number of 
> reasons.
> Also, the XPathToSQL2Converter package is not exported in an OSGi 
> environment, so client code cannot use that API to reconstruct the SQL2 
> statement in parallel.
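
A hypothetical usage sketch, assuming the explain result exposes both a "plan" and a 
"statement" column as described above; the query string and class names are 
illustrative only.

{noformat}
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;
import javax.jcr.query.Row;

// Hypothetical sketch: run an "explain" for an XPath statement and read the
// execution plan plus the converted JCR-SQL2 statement from the result row.
public class ExplainStatementSketch {

    public static void printExplain(Session session) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query q = qm.createQuery(
                "explain //element(*, nt:file)[@jcr:title = 'x']", Query.XPATH);
        QueryResult result = q.execute();
        Row row = result.getRows().nextRow();
        System.out.println("plan:      " + row.getValue("plan").getString());
        System.out.println("statement: " + row.getValue("statement").getString());
    }
}
{noformat}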



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (OAK-8245) Add column for explained "statement" to "explain" Query result, next to 'plan' column.

2019-09-13 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-8245.
-
Resolution: Fixed

> Add column for explained "statement" to "explain" Query result, next to 
> 'plan' column.
> --
>
> Key: OAK-8245
> URL: https://issues.apache.org/jira/browse/OAK-8245
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Affects Versions: 1.12.0, 1.8.12, 1.10.2
>Reporter: Mark Adamcin
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.18.0
>
>
> The "explain" behavior of the core Query is very useful for helping to debug 
> JCR query execution planning. For xpath queries, the resulting "plan" column 
> refers to the result of running XPathToSQL2Converter to produce a JCR-SQL2 
> statement for execution. This SQL2 statement should be exposed through the 
> same API as the "plan", by way of an additional column named "statement" in 
> the single result row. 
> At this time, this underlying SQL2 statement is inaccessible to users of the 
> JCR Query interface, which can only provide the original XPath statement.
> To access the converted SQL2 statement, a class targeting the JCR API must 
> implement a regular expression match against a log stream retrieved via slf4j 
> MDC.
> This facility is not very portable, and an effective pattern on one version 
> of Oak may not be effective on a different version of Oak, for any number of 
> reasons.
> Also, the XPathToSQL2Converter package is not exported in an OSGi 
> environment, so client code cannot use that API to reconstruct the SQL2 
> statement in parallel.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (OAK-8245) Add column for explained "statement" to "explain" Query result, next to 'plan' column.

2019-09-13 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929170#comment-16929170
 ] 

Thomas Mueller edited comment on OAK-8245 at 9/13/19 12:53 PM:
---

http://svn.apache.org/r1866903 (trunk)
... it took more than a few days ...


was (Author: tmueller):
svn.apache.org/r1866903 (trunk)

> Add column for explained "statement" to "explain" Query result, next to 
> 'plan' column.
> --
>
> Key: OAK-8245
> URL: https://issues.apache.org/jira/browse/OAK-8245
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Affects Versions: 1.12.0, 1.8.12, 1.10.2
>Reporter: Mark Adamcin
>Assignee: Thomas Mueller
>Priority: Minor
>
> The "explain" behavior of the core Query is very useful for helping to debug 
> JCR query execution planning. For xpath queries, the resulting "plan" column 
> refers to the result of running XPathToSQL2Converter to produce a JCR-SQL2 
> statement for execution. This SQL2 statement should be exposed through the 
> same API as the "plan", by way of an additional column named "statement" in 
> the single result row. 
> At this time, this underlying SQL2 statement is inaccessible to users of the 
> JCR Query interface, which can only provide the original XPath statement.
> To access the converted SQL2 statement, a class targeting the JCR API must 
> implement a regular expression match against a log stream retrieved via slf4j 
> MDC.
> This facility is not very portable, and an effective pattern on one version 
> of Oak may not be effective on a different version of Oak, for any number of 
> reasons.
> Also, the XPathToSQL2Converter package is not exported in an OSGi 
> environment, so client code cannot use that API to reconstruct the SQL2 
> statement in parallel.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8245) Add column for explained "statement" to "explain" Query result, next to 'plan' column.

2019-09-13 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929170#comment-16929170
 ] 

Thomas Mueller commented on OAK-8245:
-

svn.apache.org/r1866903 (trunk)

> Add column for explained "statement" to "explain" Query result, next to 
> 'plan' column.
> --
>
> Key: OAK-8245
> URL: https://issues.apache.org/jira/browse/OAK-8245
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Affects Versions: 1.12.0, 1.8.12, 1.10.2
>Reporter: Mark Adamcin
>Assignee: Thomas Mueller
>Priority: Minor
>
> The "explain" behavior of the core Query is very useful for helping to debug 
> JCR query execution planning. For xpath queries, the resulting "plan" column 
> refers to the result of running XPathToSQL2Converter to produce a JCR-SQL2 
> statement for execution. This SQL2 statement should be exposed through the 
> same API as the "plan", by way of an additional column named "statement" in 
> the single result row. 
> At this time, this underlying SQL2 statement is inaccessible to users of the 
> JCR Query interface, which can only provide the original XPath statement.
> To access the converted SQL2 statement, a class targeting the JCR API must 
> implement a regular expression match against a log stream retrieved via slf4j 
> MDC.
> This facility is not very portable, and an effective pattern on one version 
> of Oak may not be effective on a different version of Oak, for any number of 
> reasons.
> Also, the XPathToSQL2Converter package is not exported in an OSGi 
> environment, so client code cannot use that API to reconstruct the SQL2 
> statement in parallel.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-7151) Support indexed based excerpts on properties

2019-09-13 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929146#comment-16929146
 ] 

Thomas Mueller commented on OAK-7151:
-

[~catholicon] I just saw this is still open... is there any work left?

> Support indexed based excerpts on properties
> 
>
> Key: OAK-7151
> URL: https://issues.apache.org/jira/browse/OAK-7151
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: OAK-7151.patch, OAK-7151.xpath-new-syntax.patch, 
> OAK-7151.xpath.patch
>
>
> As discovered in OAK-4401 we fallback to {{SimpleExcerptProvider}} when 
> requesting excerpts for properties.
> The issue as highlighted in [~teofili]'s comment \[0] is that we at time of 
> query we don't have information about which all columns/fields would be 
> required for excerpts.
> A possible approach is that the query specified explicitly which columns 
> would be required in facets (of course, node level excerpt would still be 
> supported). This issue is to track that improvement.
> Note: this is *not* a substitute for OAK-4401 which is about doing saner 
> highlighting when {{SimpleExcerptProvider}} comes into play e.g. despite this 
> issue excerpt for non-stored fields (properties which aren't configured with 
> {{useInExcerpt}} in the index definition}, we'd need to fallback to 
> {{SimpleExcerptProvider}}.
> /[~tmueller]
> \[0]: 
> https://issues.apache.org/jira/browse/OAK-4401?focusedCommentId=15299857=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15299857
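
For illustration, a rough sketch of the kind of query this improvement targets, assuming the column-style rep:excerpt syntax; the property name jcr:title and the search term are only examples, and the exact column naming may differ:

{noformat}
import javax.jcr.Session;
import javax.jcr.Value;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;
import javax.jcr.query.Row;
import javax.jcr.query.RowIterator;

public class PropertyExcerptExample {

    // Sketch only: requesting a node-level excerpt and a property-level
    // excerpt. Property-level excerpts rely on the property being configured
    // with useInExcerpt in the index definition, as noted above.
    public static void printExcerpts(Session session) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query q = qm.createQuery(
                "SELECT [rep:excerpt], [rep:excerpt(jcr:title)] "
                        + "FROM [nt:base] WHERE CONTAINS(*, 'oak')",
                Query.JCR_SQL2);
        QueryResult result = q.execute();
        String[] columns = result.getColumnNames();
        for (RowIterator rows = result.getRows(); rows.hasNext();) {
            Row row = rows.nextRow();
            for (String column : columns) {
                Value v = row.getValue(column);
                System.out.println(column + ": "
                        + (v == null ? "(none)" : v.getString()));
            }
        }
    }
}
{noformat}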



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8603) Composite Node Store + Counter Index: allow indexing from scratch / reindex

2019-09-13 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929038#comment-16929038
 ] 

Thomas Mueller commented on OAK-8603:
-

Pull request (work in progress): 
https://github.com/apache/jackrabbit-oak/pull/149/files

> Composite Node Store + Counter Index: allow indexing from scratch / reindex
> ---
>
> Key: OAK-8603
> URL: https://issues.apache.org/jira/browse/OAK-8603
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, indexing
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>  Labels: fabriziofortino
> Attachments: OAK-8603.patch
>
>
> When using the composite node store with a read-only portion of the 
> repository, the counter index does not allow to index from scratch / reindex.
> Index from scratch is needed in case the async checkpoint is lost. Reindex is 
> started by setting the "reindex" flag to true.
> Currently the failure is:
> {noformat}
> 05.09.2019 09:29:21.892 *WARN* [async-index-update-async] 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate [async] The index 
> update is still failing
> java.lang.UnsupportedOperationException: This builder is read-only.
>   at 
> org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.unsupported(ReadOnlyBuilder.java:44)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.child(ReadOnlyBuilder.java:189)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.child(ReadOnlyBuilder.java:34)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.leaveNew(NodeCounterEditor.java:162)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.leave(NodeCounterEditor.java:114)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:73)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> 

[jira] [Updated] (OAK-8603) Composite Node Store + Counter Index: allow indexing from scratch / reindex

2019-09-13 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8603:

Labels: fabriziofortino  (was: )

> Composite Node Store + Counter Index: allow indexing from scratch / reindex
> ---
>
> Key: OAK-8603
> URL: https://issues.apache.org/jira/browse/OAK-8603
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, indexing
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>  Labels: fabriziofortino
> Attachments: OAK-8603.patch
>
>
> When using the composite node store with a read-only portion of the 
> repository, the counter index does not allow to index from scratch / reindex.
> Index from scratch is needed in case the async checkpoint is lost. Reindex is 
> started by setting the "reindex" flag to true.
> Currently the failure is:
> {noformat}
> 05.09.2019 09:29:21.892 *WARN* [async-index-update-async] 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate [async] The index 
> update is still failing
> java.lang.UnsupportedOperationException: This builder is read-only.
>   at 
> org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.unsupported(ReadOnlyBuilder.java:44)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.child(ReadOnlyBuilder.java:189)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.child(ReadOnlyBuilder.java:34)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.leaveNew(NodeCounterEditor.java:162)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.leave(NodeCounterEditor.java:114)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:73)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:59)
>  

[jira] [Resolved] (OAK-8579) Composite Node Store: Allow creating an index in the read-only repo first

2019-09-10 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-8579.
-
Resolution: Fixed

> Composite Node Store: Allow creating an index in the read-only repo first
> -
>
> Key: OAK-8579
> URL: https://issues.apache.org/jira/browse/OAK-8579
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, core, indexing, lucene
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.18.0
>
>
> Currently, it is not allowed to first create a new index in the read-only 
> repository, and then in the read-write repository. Trying to do so will fail 
> with "OakConstraint0001: /oak:index/.../:oak:mount-readOnlyV1-index-data[[]]: 
> The primary type null does not exist"
> See OAK-7917: oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite - 
> CompositeNodeStoreLuceneIndexTest.java 
> tryAddIndexInReadWriteWithIndexExistinginReadOnly line 112. 
> It would be better to allow this use case, to reduce the possibility of 
> problems.
> We should specially test with lucene indexes, but also with property indexes. 
> (If that's more complicated, we can concentrate on the lucene case first.)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8579) Composite Node Store: Allow creating an index in the read-only repo first

2019-09-10 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8579:

Fix Version/s: 1.18.0

> Composite Node Store: Allow creating an index in the read-only repo first
> -
>
> Key: OAK-8579
> URL: https://issues.apache.org/jira/browse/OAK-8579
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, core, indexing, lucene
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.18.0
>
>
> Currently, it is not allowed to first create a new index in the read-only 
> repository, and then in the read-write repository. Trying to do so will fail 
> with "OakConstraint0001: /oak:index/.../:oak:mount-readOnlyV1-index-data[[]]: 
> The primary type null does not exist"
> See OAK-7917: oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite - 
> CompositeNodeStoreLuceneIndexTest.java 
> tryAddIndexInReadWriteWithIndexExistinginReadOnly line 112. 
> It would be better to allow this use case, to reduce the possibility of 
> problems.
> We should specially test with lucene indexes, but also with property indexes. 
> (If that's more complicated, we can concentrate on the lucene case first.)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8579) Composite Node Store: Allow creating an index in the read-only repo first

2019-09-10 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16926637#comment-16926637
 ] 

Thomas Mueller commented on OAK-8579:
-

http://svn.apache.org/r1866752 (trunk)

> Composite Node Store: Allow creating an index in the read-only repo first
> -
>
> Key: OAK-8579
> URL: https://issues.apache.org/jira/browse/OAK-8579
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, core, indexing, lucene
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> Currently, it is not allowed to first create a new index in the read-only 
> repository, and then in the read-write repository. Trying to do so will fail 
> with "OakConstraint0001: /oak:index/.../:oak:mount-readOnlyV1-index-data[[]]: 
> The primary type null does not exist"
> See OAK-7917: oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite - 
> CompositeNodeStoreLuceneIndexTest.java 
> tryAddIndexInReadWriteWithIndexExistinginReadOnly line 112. 
> It would be better to allow this use case, to reduce the possibility of 
> problems.
> We should specially test with lucene indexes, but also with property indexes. 
> (If that's more complicated, we can concentrate on the lucene case first.)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8579) Composite Node Store: Allow creating an index in the read-only repo first

2019-09-10 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16926636#comment-16926636
 ] 

Thomas Mueller commented on OAK-8579:
-

[~catholicon] I'm not sure there is an easy way to do this... I think it is 
currently only a problem for index nodes, so the current solution should be 
OK for now.

> Composite Node Store: Allow creating an index in the read-only repo first
> -
>
> Key: OAK-8579
> URL: https://issues.apache.org/jira/browse/OAK-8579
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, core, indexing, lucene
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> Currently, it is not allowed to first create a new index in the read-only 
> repository, and then in the read-write repository. Trying to do so will fail 
> with "OakConstraint0001: /oak:index/.../:oak:mount-readOnlyV1-index-data[[]]: 
> The primary type null does not exist"
> See OAK-7917: oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite - 
> CompositeNodeStoreLuceneIndexTest.java 
> tryAddIndexInReadWriteWithIndexExistinginReadOnly line 112. 
> It would be better to allow this use case, to reduce the possibility of 
> problems.
> We should specially test with lucene indexes, but also with property indexes. 
> (If that's more complicated, we can concentrate on the lucene case first.)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (OAK-8579) Composite Node Store: Allow creating an index in the read-only repo first

2019-09-10 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller reassigned OAK-8579:
---

Assignee: Thomas Mueller

> Composite Node Store: Allow creating an index in the read-only repo first
> -
>
> Key: OAK-8579
> URL: https://issues.apache.org/jira/browse/OAK-8579
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, core, indexing, lucene
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> Currently, it is not allowed to first create a new index in the read-only 
> repository, and then in the read-write repository. Trying to do so will fail 
> with "OakConstraint0001: /oak:index/.../:oak:mount-readOnlyV1-index-data[[]]: 
> The primary type null does not exist"
> See OAK-7917: oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite - 
> CompositeNodeStoreLuceneIndexTest.java 
> tryAddIndexInReadWriteWithIndexExistinginReadOnly line 112. 
> It would be better to allow this use case, to reduce the possibility of 
> problems.
> We should specially test with lucene indexes, but also with property indexes. 
> (If that's more complicated, we can concentrate on the lucene case first.)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8589) NPE in IndexDefintionBuilder with existing property rule without "name" property

2019-09-06 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924230#comment-16924230
 ] 

Thomas Mueller commented on OAK-8589:
-

Looks good to me!

> NPE in IndexDefintionBuilder with existing property rule without "name" 
> property
> 
>
> Key: OAK-8589
> URL: https://issues.apache.org/jira/browse/OAK-8589
> Project: Jackrabbit Oak
>  Issue Type: Improvement
> Environment: Inde
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Major
> Fix For: 1.18.0
>
>
> {{IndexDefinitionBuilder#findExisting}} throws NPE when 
> {{IndexDefinitionBuilder}} is initialized with an existing index that has a 
> property rule without {{name}} property defined.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8603) Composite Node Store + Counter Index: allow indexing from scratch / reindex

2019-09-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8603:

Attachment: OAK-8603.patch

> Composite Node Store + Counter Index: allow indexing from scratch / reindex
> ---
>
> Key: OAK-8603
> URL: https://issues.apache.org/jira/browse/OAK-8603
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, indexing
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Attachments: OAK-8603.patch
>
>
> When using the composite node store with a read-only portion of the 
> repository, the counter index does not allow to index from scratch / reindex.
> Index from scratch is needed in case the async checkpoint is lost. Reindex is 
> started by setting the "reindex" flag to true.
> Currently the failure is:
> {noformat}
> 05.09.2019 09:29:21.892 *WARN* [async-index-update-async] 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate [async] The index 
> update is still failing
> java.lang.UnsupportedOperationException: This builder is read-only.
>   at 
> org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.unsupported(ReadOnlyBuilder.java:44)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.child(ReadOnlyBuilder.java:189)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.child(ReadOnlyBuilder.java:34)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.leaveNew(NodeCounterEditor.java:162)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.leave(NodeCounterEditor.java:114)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:73)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:59)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> 

[jira] [Created] (OAK-8603) Composite Node Store + Counter Index: allow indexing from scratch / reindex

2019-09-06 Thread Thomas Mueller (Jira)
Thomas Mueller created OAK-8603:
---

 Summary: Composite Node Store + Counter Index: allow indexing from 
scratch / reindex
 Key: OAK-8603
 URL: https://issues.apache.org/jira/browse/OAK-8603
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: composite, indexing
Reporter: Thomas Mueller
Assignee: Thomas Mueller


When using the composite node store with a read-only portion of the repository, 
the counter index does not allow to index from scratch / reindex.

Index from scratch is needed in case the async checkpoint is lost. Reindex is 
started by setting the "reindex" flag to true.
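
For reference, a minimal sketch of how a reindex is typically triggered over the JCR API; the index path /oak:index/counter is an assumption here:

{noformat}
import javax.jcr.Node;
import javax.jcr.Session;

public class ReindexExample {

    // Sketch only: setting the "reindex" flag on the index definition node
    // asks the indexer to rebuild that index on its next run.
    public static void triggerReindex(Session session) throws Exception {
        Node indexDef = session.getNode("/oak:index/counter");
        indexDef.setProperty("reindex", true);
        session.save();
    }
}
{noformat}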

Currently the failure is:

{noformat}
05.09.2019 09:29:21.892 *WARN* [async-index-update-async] 
org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate [async] The index 
update is still failing
java.lang.UnsupportedOperationException: This builder is read-only.
at 
org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.unsupported(ReadOnlyBuilder.java:44)
 [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.child(ReadOnlyBuilder.java:189)
 [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.child(ReadOnlyBuilder.java:34)
 [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.leaveNew(NodeCounterEditor.java:162)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.leave(NodeCounterEditor.java:114)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:73)
 [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:59) 
[org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:129)
 [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:160)
 [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 

[jira] [Comment Edited] (OAK-8579) Composite Node Store: Allow creating an index in the read-only repo first

2019-08-28 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917558#comment-16917558
 ] 

Thomas Mueller edited comment on OAK-8579 at 8/28/19 8:56 AM:
--

> inadvertently ignoring/skipping some needed check here

[~nitigup] the changes only affect hidden nodes 
(NodeStateUtils.isHidden(name)). With the JCR API it is not possible to create 
or access hidden nodes: test.addNode(":x") fails with the exception 
"javax.jcr.PathNotFoundException: :x", well before you have the option to save 
the change. So I believe it is not a risk.
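
A minimal sketch of that behaviour (the parent node name "test" is just an example):

{noformat}
import javax.jcr.Node;
import javax.jcr.PathNotFoundException;
import javax.jcr.Session;

public class HiddenNodeExample {

    // Sketch only: names starting with ':' are hidden (Oak-internal) nodes
    // and cannot be created through the JCR API; the call below fails
    // before there is anything to save.
    public static void tryCreateHiddenNode(Session session) throws Exception {
        Node test = session.getRootNode().addNode("test");
        try {
            test.addNode(":x");
        } catch (PathNotFoundException e) {
            System.out.println("rejected as expected: " + e.getMessage());
        }
    }
}
{noformat}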

> we can reproduce this and see why this check is only coming into picture when 
> index is getting created in read only part of the repo 

This we know. But it is good to explicitly describe the problem: assume you 
create an index in the read-only part of the repo, using the "composite 
seed" mode; then some hidden nodes are created by indexing. But only in the 
so-called "read-only" repository. It is not actually read-only in this mode, 
but it is stored in a different segment store. If you investigate, for example 
with oak-run explore, what this repository looks like, you would see something 
like this

{noformat}
/oak:index/abc
/oak:index/abc/:oak:mount-readOnlyV1-index-data
{noformat}

It is not yet created in the read-write repository. So when starting the 
composite store that opens the read-only repo in read-only mode, and the 
read-write repo in read-write mode, the nodes above are not visible.

Now, assume you create an index in the read-write repo with this name:

{noformat}
/oak:index/abc
{noformat}

Indexing has not started at that time; only the node itself is stored. (Possibly 
you could even create a node of any other type, for example nt:base, so not an 
index definition node - I didn't test.) What happens is: the hidden node 
/oak:index/abc/:oak:mount-readOnlyV1-index-data will now become visible, due to 
how the composite node store works and is configured. For Oak, it will look 
like the node /oak:index/abc/:oak:mount-readOnlyV1-index-data was just added by 
the user (which is impossible to do using the JCR API, but the Oak verifiers 
don't know that).

So the verifiers currently check whether this node is a correct JCR node. It is 
not: the primary type is not set, so it is "null".

To resolve this issue, I changed the verifiers to ignore hidden nodes.


was (Author: tmueller):
> inadvertently ignoring/skipping some needed check here

[~nitigup] the changes only affect hidden nodes 
(NodeStateUtils.isHidden(name)). With the JCR API it is not possible to create 
or access hidden nodes. So I believe it is not a risk.

> we can reproduce this and see why this check is only coming into picture when 
> index is getting created in read only part of the repo 

This we know. But it is good to explicitly describe the problem: assume you 
create an index in the read-only part of the repo, using the "composite 
seed" mode; then some hidden nodes are created by indexing. But only in the 
so-called "read-only" repository. It is not actually read-only in this mode, 
but it is stored in a different segment store. If you investigate, for example 
with oak-run explore, what this repository looks like, you would see something 
like this

{noformat}
/oak:index/abc
/oak:index/abc/:oak:mount-readOnlyV1-index-data
{noformat}

It is not yet created in the read-write repository. So when starting the 
composite store that opens the read-only repo in read-only mode, and the 
read-write repo in read-write mode, the nodes above are not visible.

Now, assume you create an index in the read-write repo with this name:

{noformat}
/oak:index/abc
{noformat}

Indexing has not started at that time; only the node itself is stored. (Possibly 
you could even create a node of any other type, for example nt:base, so not an 
index definition node - I didn't test.) What happens is: the hidden node 
/oak:index/abc/:oak:mount-readOnlyV1-index-data will now become visible, due to 
how the composite node store works and is configured. For Oak, it will look 
like the node /oak:index/abc/:oak:mount-readOnlyV1-index-data was just added by 
the user (which is impossible to do using the JCR API, but the Oak verifiers 
don't know that).

So the verifiers currently check whether this node is a correct JCR node. It is 
not: the primary type is not set, so it is "null".

To resolve this issue, I changed the verifiers to ignore hidden nodes.

> Composite Node Store: Allow creating an index in the read-only repo first
> -
>
> Key: OAK-8579
> URL: https://issues.apache.org/jira/browse/OAK-8579
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, core, indexing, lucene
>Reporter: Thomas Mueller
>Priority: Major
>
> 

[jira] [Commented] (OAK-8579) Composite Node Store: Allow creating an index in the read-only repo first

2019-08-28 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917558#comment-16917558
 ] 

Thomas Mueller commented on OAK-8579:
-

> inadvertently ignoring/skipping some needed check here

[~nitigup] the changes only affect hidden nodes 
(NodeStateUtils.isHidden(name)). With the JCR API it is not possible to create 
or access hidden nodes. So I believe it is not a risk.

> we can reproduce this and see why this check is only coming into picture when 
> index is getting created in read only part of the repo 

This we know. But it is good to explicitly describe the problem: assume you 
create an index in the read-only part of the repo, using the "composite 
seed" mode; then some hidden nodes are created by indexing. But only in the 
so-called "read-only" repository. It is not actually read-only in this mode, 
but it is stored in a different segment store. If you investigate, for example 
with oak-run explore, what this repository looks like, you would see something 
like this

{noformat}
/oak:index/abc
/oak:index/abc/:oak:mount-readOnlyV1-index-data
{noformat}

It is not yet created in the read-write repository. So when starting the 
composite store that opens the read-only repo in read-only mode, and the 
read-write repo in read-write mode, the nodes above are not visible.

Now, assume you create an index in the read-write repo with this name:

{noformat}
/oak:index/abc
{noformat}

Indexing has not started at that time; only the node itself is stored. (Possibly 
you could even create a node of any other type, for example nt:base, so not an 
index definition node - I didn't test.) What happens is: the hidden node 
/oak:index/abc/:oak:mount-readOnlyV1-index-data will now become visible, due to 
how the composite node store works and is configured. For Oak, it will look 
like the node /oak:index/abc/:oak:mount-readOnlyV1-index-data was just added by 
the user (which is impossible to do using the JCR API, but the Oak verifiers 
don't know that).

So the verifiers currently check whether this node is a correct JCR node. It is 
not: the primary type is not set, so it is "null".

To resolve this issue, I changed the verifiers to ignore hidden nodes.

> Composite Node Store: Allow creating an index in the read-only repo first
> -
>
> Key: OAK-8579
> URL: https://issues.apache.org/jira/browse/OAK-8579
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, core, indexing, lucene
>Reporter: Thomas Mueller
>Priority: Major
>
> Currently, it is not allowed to first create a new index in the read-only 
> repository, and then in the read-write repository. Trying to do so will fail 
> with "OakConstraint0001: /oak:index/.../:oak:mount-readOnlyV1-index-data[[]]: 
> The primary type null does not exist"
> See OAK-7917: oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite - 
> CompositeNodeStoreLuceneIndexTest.java 
> tryAddIndexInReadWriteWithIndexExistinginReadOnly line 112. 
> It would be better to allow this use case, to reduce the possibility of 
> problems.
> We should specially test with lucene indexes, but also with property indexes. 
> (If that's more complicated, we can concentrate on the lucene case first.)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8579) Composite Node Store: Allow creating an index in the read-only repo first

2019-08-27 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916682#comment-16916682
 ] 

Thomas Mueller commented on OAK-8579:
-

[~tihom88] [~nitigup] [~catholicon] could you review the patch above? 

For the change in NameValidator and TypeEditor: [~stillalex] and [~rombert], you 
might be good reviewers, as the latest changes in those areas were from you.

> Composite Node Store: Allow creating an index in the read-only repo first
> -
>
> Key: OAK-8579
> URL: https://issues.apache.org/jira/browse/OAK-8579
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, core, indexing, lucene
>Reporter: Thomas Mueller
>Priority: Major
>
> Currently, it is not allowed to first create a new index in the read-only 
> repository, and then in the read-write repository. Trying to do so will fail 
> with "OakConstraint0001: /oak:index/.../:oak:mount-readOnlyV1-index-data[[]]: 
> The primary type null does not exist"
> See OAK-7917: oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite - 
> CompositeNodeStoreLuceneIndexTest.java 
> tryAddIndexInReadWriteWithIndexExistinginReadOnly line 112. 
> It would be better to allow this use case, to reduce the possibility of 
> problems.
> We should specially test with lucene indexes, but also with property indexes. 
> (If that's more complicated, we can concentrate on the lucene case first.)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (OAK-8579) Composite Node Store: Allow creating an index in the read-only repo first

2019-08-27 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916605#comment-16916605
 ] 

Thomas Mueller edited comment on OAK-8579 at 8/27/19 12:37 PM:
---

Potential patch (need to run more tests):

{noformat}
Index: 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/name/NameValidator.java
===
--- 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/name/NameValidator.java
(revision 1864607)
+++ 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/name/NameValidator.java
(working copy)
@@ -30,6 +30,7 @@
 import org.apache.jackrabbit.oak.spi.commit.Validator;
 import org.apache.jackrabbit.oak.spi.lifecycle.RepositoryInitializer;
 import org.apache.jackrabbit.oak.spi.state.NodeState;
+import org.apache.jackrabbit.oak.spi.state.NodeStateUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -172,7 +173,9 @@
 @Override
 public Validator childNodeAdded(String name, NodeState after)
 throws CommitFailedException {
-checkValidName(name);
+if (!NodeStateUtils.isHidden(name)) {
+checkValidName(name);
+}
 return this;
 }
 
Index: 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/nodetype/TypeEditor.java
===
--- 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/nodetype/TypeEditor.java
   (revision 1864607)
+++ 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/nodetype/TypeEditor.java
   (working copy)
@@ -475,6 +475,9 @@
 }
 if (!names.isEmpty()) {
 for (String name : names) {
+if (NodeStateUtils.isHidden(name)) {
+continue;
+}
 NodeState child = after.getChildNode(name);
 String primary = child.getName(JCR_PRIMARYTYPE);
 Iterable mixins = child.getNames(JCR_MIXINTYPES);
Index: 
oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite/CompositeNodeStoreLuceneIndexTest.java
===
--- 
oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite/CompositeNodeStoreLuceneIndexTest.java
 (revision 1864609)
+++ 
oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite/CompositeNodeStoreLuceneIndexTest.java
 (working copy)
@@ -103,20 +103,13 @@
 }
 
 /**
- * Given a composite node store , trying to create an index in read-write 
part
- * with the same index node already existing in the read only part already
- * we should get OakConstraint001 . This is the current behaviour,
- * but can be worked upon (improved) in the future .
+ * Given a composite node store , create an index in read-write part
+ * with the same index node already existing in the read-only part already.
  */
 @Test
-public void tryAddIndexInReadWriteWithIndexExistinginReadOnly() {
-try {
-repoV1.setupIndexAndContentInRepo("luceneTest", "foo", true, 
VERSION_1);
-assertTrue(false);
-} catch (Exception e) {
-assert (e.getLocalizedMessage().contains(
-"OakConstraint0001: 
/oak:index/luceneTest/:oak:mount-readOnlyV1-index-data[[]]: The primary type 
null does not exist"));
-}
+public void addIndexInReadWriteWithIndexExistinginReadOnly() throws 
Exception {
+repoV1.setupIndexAndContentInRepo("luceneTest", "foo", true, 
VERSION_1);
+repoV1.cleanup();
 }
 
 /**
{noformat}


was (Author: tmueller):
Potential patch (need to run more tests):

{noformat}
Index: 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/name/NameValidator.java
===
--- 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/name/NameValidator.java
(revision 1864607)
+++ 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/name/NameValidator.java
(working copy)
@@ -30,6 +30,7 @@
 import org.apache.jackrabbit.oak.spi.commit.Validator;
 import org.apache.jackrabbit.oak.spi.lifecycle.RepositoryInitializer;
 import org.apache.jackrabbit.oak.spi.state.NodeState;
+import org.apache.jackrabbit.oak.spi.state.NodeStateUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -172,7 +173,9 @@
 @Override
 public Validator childNodeAdded(String name, NodeState after)
 throws CommitFailedException {
-checkValidName(name);
+if (!NodeStateUtils.isHidden(name)) {
+checkValidName(name);
+}
 return this;
 }
 
Index: 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/nodetype/TypeEditor.java
===
--- 

[jira] [Commented] (OAK-8579) Composite Node Store: Allow creating an index in the read-only repo first

2019-08-27 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916605#comment-16916605
 ] 

Thomas Mueller commented on OAK-8579:
-

Potential patch (need to run more tests):

{noformat}
Index: 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/name/NameValidator.java
===
--- 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/name/NameValidator.java
(revision 1864607)
+++ 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/name/NameValidator.java
(working copy)
@@ -30,6 +30,7 @@
 import org.apache.jackrabbit.oak.spi.commit.Validator;
 import org.apache.jackrabbit.oak.spi.lifecycle.RepositoryInitializer;
 import org.apache.jackrabbit.oak.spi.state.NodeState;
+import org.apache.jackrabbit.oak.spi.state.NodeStateUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -172,7 +173,9 @@
 @Override
 public Validator childNodeAdded(String name, NodeState after)
 throws CommitFailedException {
-checkValidName(name);
+if (!NodeStateUtils.isHidden(name)) {
+checkValidName(name);
+}
 return this;
 }
 
Index: 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/nodetype/TypeEditor.java
===
--- 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/nodetype/TypeEditor.java
   (revision 1864607)
+++ 
oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/nodetype/TypeEditor.java
   (working copy)
@@ -475,6 +475,9 @@
 }
 if (!names.isEmpty()) {
 for (String name : names) {
+if (NodeStateUtils.isHidden(name)) {
+continue;
+}
 NodeState child = after.getChildNode(name);
 String primary = child.getName(JCR_PRIMARYTYPE);
 Iterable mixins = child.getNames(JCR_MIXINTYPES);
Index: 
oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite/CompositeNodeStoreLuceneIndexTest.java
===
--- 
oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite/CompositeNodeStoreLuceneIndexTest.java
 (revision 1864609)
+++ 
oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite/CompositeNodeStoreLuceneIndexTest.java
 (working copy)
@@ -109,14 +109,14 @@
  * but can be worked upon (improved) in the future .
  */
 @Test
-public void tryAddIndexInReadWriteWithIndexExistinginReadOnly() {
-try {
+public void tryAddIndexInReadWriteWithIndexExistinginReadOnly() throws 
Exception {
+//try {
 repoV1.setupIndexAndContentInRepo("luceneTest", "foo", true, 
VERSION_1);
-assertTrue(false);
-} catch (Exception e) {
-assert (e.getLocalizedMessage().contains(
-"OakConstraint0001: 
/oak:index/luceneTest/:oak:mount-readOnlyV1-index-data[[]]: The primary type 
null does not exist"));
-}
+//assertTrue(false);
+//} catch (Exception e) {
+//assert (e.getLocalizedMessage().contains(
+//"OakConstraint0001: 
/oak:index/luceneTest/:oak:mount-readOnlyV1-index-data[[]]: The primary type 
null does not exist"));
+//}
 }
{noformat}

> Composite Node Store: Allow creating an index in the read-only repo first
> -
>
> Key: OAK-8579
> URL: https://issues.apache.org/jira/browse/OAK-8579
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, core, indexing, lucene
>Reporter: Thomas Mueller
>Priority: Major
>
> Currently, it is not allowed to first create a new index in the read-only 
> repository, and then in the read-write repository. Trying to do so will fail 
> with "OakConstraint0001: /oak:index/.../:oak:mount-readOnlyV1-index-data[[]]: 
> The primary type null does not exist"
> See OAK-7917: oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite - 
> CompositeNodeStoreLuceneIndexTest.java 
> tryAddIndexInReadWriteWithIndexExistinginReadOnly line 112. 
> It would be better to allow this use case, to reduce the possibility of 
> problems.
> We should specially test with lucene indexes, but also with property indexes. 
> (If that's more complicated, we can concentrate on the lucene case first.)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8579) Composite Node Store: Allow creating an index in the read-only repo first

2019-08-27 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8579:

Description: 
Currently, it is not allowed to first create a new index in the read-only 
repository, and then in the read-write repository. Trying to do so will fail 
with "OakConstraint0001: /oak:index/.../:oak:mount-readOnlyV1-index-data[[]]: 
The primary type null does not exist"

See OAK-7917: oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite - 
CompositeNodeStoreLuceneIndexTest.java 
tryAddIndexInReadWriteWithIndexExistinginReadOnly line 112. 

It would be better to allow this use case, to reduce the possibility of 
problems.

We should specially test with lucene indexes, but also with property indexes. 
(If that's more complicated, we can concentrate on the lucene case first.)

  was:
Currently, it is not allowed to first create a new index in the read-only 
repository, and then in the read-write repository. Trying to do so will fail 
with "OakConstraint0001: /oak:index/.../:oak:mount-readOnlyV1-index-data[[]]: 
The primary type null does not exist"

See OAK-7917: oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite - 
CompositeNodeStoreLuceneIndexTest.java 
tryAddIndexInReadWriteWithIndexExistinginReadOnly line 112. 

It would be better to allow this use case, to reduce the possibility of 
problems.


> Composite Node Store: Allow creating an index in the read-only repo first
> -
>
> Key: OAK-8579
> URL: https://issues.apache.org/jira/browse/OAK-8579
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, core, indexing, lucene
>Reporter: Thomas Mueller
>Priority: Major
>
> Currently, it is not allowed to first create a new index in the read-only 
> repository, and then in the read-write repository. Trying to do so will fail 
> with "OakConstraint0001: /oak:index/.../:oak:mount-readOnlyV1-index-data[[]]: 
> The primary type null does not exist"
> See OAK-7917: oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite - 
> CompositeNodeStoreLuceneIndexTest.java 
> tryAddIndexInReadWriteWithIndexExistinginReadOnly line 112. 
> It would be better to allow this use case, to reduce the possibility of 
> problems.
> We should specially test with lucene indexes, but also with property indexes. 
> (If that's more complicated, we can concentrate on the lucene case first.)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8579) Composite Node Store: Allow creating an index in the read-only repo first

2019-08-27 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8579:

Component/s: lucene
 indexing
 core
 composite

> Composite Node Store: Allow creating an index in the read-only repo first
> -
>
> Key: OAK-8579
> URL: https://issues.apache.org/jira/browse/OAK-8579
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, core, indexing, lucene
>Reporter: Thomas Mueller
>Priority: Major
>
> Currently, it is not allowed to first create a new index in the read-only 
> repository, and then in the read-write repository. Trying to do so will fail 
> with "OakConstraint0001: /oak:index/.../:oak:mount-readOnlyV1-index-data[[]]: 
> The primary type null does not exist"
> See OAK-7917: oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite - 
> CompositeNodeStoreLuceneIndexTest.java 
> tryAddIndexInReadWriteWithIndexExistinginReadOnly line 112. 
> It would be better to allow this use case, to reduce the possibility of 
> problems.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (OAK-8579) Composite Node Store: Allow creating an index in the read-only repo first

2019-08-27 Thread Thomas Mueller (Jira)
Thomas Mueller created OAK-8579:
---

 Summary: Composite Node Store: Allow creating an index in the 
read-only repo first
 Key: OAK-8579
 URL: https://issues.apache.org/jira/browse/OAK-8579
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Thomas Mueller


Currently, it is not allowed to first create a new index in the read-only 
repository, and then in the read-write repository. Trying to do so will fail 
with "OakConstraint0001: /oak:index/.../:oak:mount-readOnlyV1-index-data[[]]: 
The primary type null does not exist"

See OAK-7917: oak-lucene/src/test/java/org/apache/jackrabbit/oak/composite - 
CompositeNodeStoreLuceneIndexTest.java 
tryAddIndexInReadWriteWithIndexExistinginReadOnly line 112. 

It would be better to allow this use case, to reduce the possibility of 
problems.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8552) Minimize network calls required when creating a direct download URI

2019-08-23 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16914292#comment-16914292
 ] 

Thomas Mueller commented on OAK-8552:
-

* #getReference
** My vote would go to "introduce a new cleaner API Blob#isInlined (name can be 
changed)". It might be more work, but I think the solution would be much 
clearer. (A rough sketch of what that could look like follows below.)
* #exists
** I think the exists method implies a network access (unless inlined).
** I'm afraid I don't currently understand why "remove the existence check" 
would "lead to perceived performance drop"... I don't understand how "remove 
the existence check" would work at all... I mean, it's reverting OAK-7998, 
right? If we revert OAK-7998 (which might be OK), then I assume we need to 
solve the root problem of OAK-7998 in some other way.
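
To illustrate the first point, a rough sketch of what such an accessor could look like. The method name follows the suggestion above, the interface is deliberately simplified, and this is not an existing Oak API:

{noformat}
// Hypothetical sketch only: a simplified stand-in, not the real
// org.apache.jackrabbit.oak.api.Blob interface.
public interface Blob {

    long length();

    /**
     * Whether the binary is stored inline in the node store rather than in an
     * external blob store. Inlined blobs never need a network call, so callers
     * could skip existence checks and direct-access URI creation for them.
     */
    default boolean isInlined() {
        return false;
    }
}
{noformat}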

> Minimize network calls required when creating a direct download URI
> ---
>
> Key: OAK-8552
> URL: https://issues.apache.org/jira/browse/OAK-8552
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: blob-cloud, blob-cloud-azure
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Attachments: OAK-8552_ApiChange.patch
>
>
> We need to isolate and try to optimize network calls required to create a 
> direct download URI.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-4508) LoggingDocumentStore: better support for multiple instances

2019-08-23 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-4508:

Summary: LoggingDocumentStore: better support for multiple instances  (was: 
LoggingDocumentStore: better support for multple instances)

> LoggingDocumentStore: better support for multiple instances
> ---
>
> Key: OAK-4508
> URL: https://issues.apache.org/jira/browse/OAK-4508
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Julian Reschke
>Priority: Minor
>
> It would be cool if the logging could be configured to use a specific prefix 
> instead of "ds." - this would make it more useful when debugging code that 
> involves two different ds instances in the same VM.
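
For illustration, a rough sketch of the idea with a configurable per-instance prefix. The interface below is a simplified stand-in used to keep the example self-contained, not the real DocumentStore or LoggingDocumentStore API:

{noformat}
public class PrefixLoggingExample {

    // Hypothetical, simplified stand-in for the DocumentStore API.
    interface DocumentStoreLike {
        String find(String collection, String key);
    }

    // Hypothetical decorator: each instance logs with its own prefix,
    // e.g. "ds1." and "ds2.", so two stores in the same VM can be told apart.
    static class LoggingStore implements DocumentStoreLike {
        private final DocumentStoreLike delegate;
        private final String prefix;

        LoggingStore(DocumentStoreLike delegate, String prefix) {
            this.delegate = delegate;
            this.prefix = prefix;
        }

        @Override
        public String find(String collection, String key) {
            System.out.println(prefix + "find " + collection + " " + key);
            return delegate.find(collection, key);
        }
    }
}
{noformat}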



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8513) Concurrent index access via CopyOnRead directory can lead to reading directly off of remote

2019-08-14 Thread Thomas Mueller (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907284#comment-16907284
 ] 

Thomas Mueller commented on OAK-8513:
-

[~catholicon] sorry for the late review! Just saw it now. The logic seems 
correct, as we discussed. I think it's OK to ignore the interrupted exception 
for now.

> Concurrent index access via CopyOnRead directory can lead to reading directly 
> off of remote
> ---
>
> Key: OAK-8513
> URL: https://issues.apache.org/jira/browse/OAK-8513
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Major
> Fix For: 1.18.0
>
>
> Even with prefetch enabled having 2 CopyOnRead directories pointing to same 
> index can lead to one of the instance reading index files directly off of 
> remote index.
> The reason this happens is because {{COR#copyFilesToLocal}} explicitly 
> chooses to work with remote if index copier reports that a copy is in 
> progress.
> This wasn't a problem earlier when COR was only used via IndexTracker so 
> concurrent COR instances weren't expected (COR's avoid local for conc copy 
> was probably worried about non-prefetch case).
> But with OAK-8097, {{DefaultDirectoryFactory}} also uses COR to bring the 
> files. Which means that if there's a query against an index which is getting 
> updated as well then either of COR instance could read directly from remote.
> The condition should only be relevant during early app start up but since 
> this can happen in default configuration, we should attempt to fix this.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

