[jira] [Commented] (OAK-5500) Oak Standalone throws ClassNotFoundException: remoting/protectedHandlersConfig.xml

2017-03-07 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15900853#comment-15900853
 ] 

Chetan Mehrotra commented on OAK-5500:
--

Backported to 1.6 with 1785921

> Oak Standalone throws ClassNotFoundException: 
> remoting/protectedHandlersConfig.xml
> --
>
> Key: OAK-5500
> URL: https://issues.apache.org/jira/browse/OAK-5500
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: examples, webapp
>Affects Versions: 1.5.18
>Reporter: Mathias Conradt
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.7.0, 1.8
>
>
> Starting the Oak standalone application via {{java -jar 
> oak-standalone-1.6-SNAPSHOT.jar}} (trunk version as of 23.01.2017) throws an 
> error: 
> {{java.lang.ClassNotFoundException: remoting/protectedHandlersConfig.xml}}
> Complete application startup log: http://pastebin.com/hdtqr3AR
> (There is a related issue (JCR-4058, 
> https://issues.apache.org/jira/browse/JCR-4058) under the Jackrabbit project, 
> not under Oak yet, therefore creating this new issue.)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-5500) Oak Standalone throws ClassNotFoundException: remoting/protectedHandlersConfig.xml

2017-03-07 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-5500:
-
Labels:   (was: candidate_oak_1_6)

> Oak Standalone throws ClassNotFoundException: 
> remoting/protectedHandlersConfig.xml
> --
>
> Key: OAK-5500
> URL: https://issues.apache.org/jira/browse/OAK-5500
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: examples, webapp
>Affects Versions: 1.5.18
>Reporter: Mathias Conradt
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.7.0, 1.8
>
>
> Starting the Oak standalone application via {{java -jar 
> oak-standalone-1.6-SNAPSHOT.jar}} (trunk version as of 23.01.2017) throws an 
> error: 
> {{java.lang.ClassNotFoundException: remoting/protectedHandlersConfig.xml}}
> Complete application startup log: http://pastebin.com/hdtqr3AR
> (There is a related issue (JCR-4058, 
> https://issues.apache.org/jira/browse/JCR-4058) under the Jackrabbit project, 
> not under Oak yet, therefore creating this new issue.)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (OAK-5501) Oak Standalone: Webdav configuration is set to remoting mode by default

2017-03-07 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15900834#comment-15900834
 ] 

Chetan Mehrotra commented on OAK-5501:
--

bq. I would expect that most WebDAV users would like to access the binary 
content (jcr:content/jcr:data) of a repo rather than seeing the node hierarchy, 
and therefore Oak's default WebDAV mode should also be the one for standard 
WebDAV clients.

[~mathiasconradt] This was happening because some required configuration was 
not getting loaded due to OAK-5500. This is now fixed; when I connect to the 
server on Linux via dav://admin@localhost:8080/repository/default I can see 
files rendered as expected

> Oak Standalone: Webdav configuration is set to remoting mode by default
> ---
>
> Key: OAK-5501
> URL: https://issues.apache.org/jira/browse/OAK-5501
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: examples, webapp
>Affects Versions: 1.5.18
>Reporter: Mathias Conradt
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.7.0, 1.8
>
>
> The *Oak Standalone WebDAV configuration* is set to *remoting mode* by 
> default, but I would expect it to be in the same mode as *Jackrabbit2 (= the 
> mode for standard WebDAV clients)*:
> When I connect to a Jackrabbit2 repo via WebDAV, I see the files as binary 
> content and can access it, i.e. open a PDF directly from 
> there. 
> When connecting to an Oak repo via WebDAV, though, the content nodes (i.e. 
> when I post/upload a PDF or JPG via WebDAV) are represented as folders, and 
> the binary content is not directly accessible via a WebDAV client. 
> I would expect that most WebDAV users would like to access the binary content 
> (jcr:content/jcr:data) of a repo rather than seeing the node hierarchy, and 
> therefore *Oak's default WebDAV mode should also be the one for standard 
> WebDAV clients*.
> Screenshots taken from the standalone-jars of each, after I started them each 
> via {{java -jar /path/to/standalone.jar}}: https://snag.gy/NQEqaP.jpg
> !https://snag.gy/NQEqaP.jpg!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (OAK-5501) Oak Standalone: Webdav configuration is set to remoting mode by default

2017-03-07 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-5501.
--
   Resolution: Fixed
Fix Version/s: 1.7.0

Fixed with 1785919

> Oak Standalone: Webdav configuration is set to remoting mode by default
> ---
>
> Key: OAK-5501
> URL: https://issues.apache.org/jira/browse/OAK-5501
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: examples, webapp
>Affects Versions: 1.5.18
>Reporter: Mathias Conradt
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.7.0, 1.8
>
>
> The *Oak Standalone WebDAV configuration* is set to *remoting mode* by 
> default, but I would expect it to be in the same mode as *Jackrabbit2 (= the 
> mode for standard WebDAV clients)*:
> When I connect to a Jackrabbit2 repo via WebDAV, I see the files as binary 
> content and can access it, i.e. open a PDF directly from 
> there. 
> When connecting to an Oak repo via WebDAV, though, the content nodes (i.e. 
> when I post/upload a PDF or JPG via WebDAV) are represented as folders, and 
> the binary content is not directly accessible via a WebDAV client. 
> I would expect that most WebDAV users would like to access the binary content 
> (jcr:content/jcr:data) of a repo rather than seeing the node hierarchy, and 
> therefore *Oak's default WebDAV mode should also be the one for standard 
> WebDAV clients*.
> Screenshots taken from the standalone-jars of each, after I started them each 
> via {{java -jar /path/to/standalone.jar}}: https://snag.gy/NQEqaP.jpg
> !https://snag.gy/NQEqaP.jpg!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-5500) Oak Standalone throws ClassNotFoundException: remoting/protectedHandlersConfig.xml

2017-03-07 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-5500:
-
Labels: candidate_oak_1_6  (was: )

> Oak Standalone throws ClassNotFoundException: 
> remoting/protectedHandlersConfig.xml
> --
>
> Key: OAK-5500
> URL: https://issues.apache.org/jira/browse/OAK-5500
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: examples, webapp
>Affects Versions: 1.5.18
>Reporter: Mathias Conradt
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: candidate_oak_1_6
> Fix For: 1.7.0, 1.8
>
>
> Starting the Oak standalone application via {{java -jar 
> oak-standalone-1.6-SNAPSHOT.jar}} (trunk version as of 23.01.2017) throws an 
> error: 
> {{java.lang.ClassNotFoundException: remoting/protectedHandlersConfig.xml}}
> Complete application startup log: http://pastebin.com/hdtqr3AR
> (There is a related issue (JCR-4058, 
> https://issues.apache.org/jira/browse/JCR-4058) under the Jackrabbit project, 
> not under Oak yet, therefore creating this new issue.)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (OAK-5500) Oak Standalone throws ClassNotFoundException: remoting/protectedHandlersConfig.xml

2017-03-07 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-5500.
--
   Resolution: Fixed
Fix Version/s: 1.7.0

The loading was failing due to the way ServletContext-based resource lookup 
works in a Spring Boot setup.

With 1785919 the approach was changed to delegate to Spring's Resource support 
via a proxy for the ServletContext. With this, startup is clean and no error 
is reported
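The approach described above can be sketched with a dynamic proxy whose resource lookup falls back to a secondary resolver when the primary context finds nothing. This is only an illustration of the pattern: {{ResourceContext}} and {{withFallback}} are invented stand-ins, not the real javax.servlet or Spring Resource APIs used in the actual fix.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Hypothetical stand-in for the resource-lookup part of ServletContext.
interface ResourceContext {
    InputStream getResourceAsStream(String path);
}

public class ResourceProxySketch {
    // Proxy that tries the primary context first, then falls back to a
    // secondary resolver (playing the role of Spring's Resource support).
    static ResourceContext withFallback(ResourceContext primary, ResourceContext fallback) {
        InvocationHandler h = (proxy, method, args) -> {
            Object result = method.invoke(primary, args);
            return result != null ? result : method.invoke(fallback, args);
        };
        return (ResourceContext) Proxy.newProxyInstance(
                ResourceContext.class.getClassLoader(),
                new Class<?>[] {ResourceContext.class}, h);
    }

    public static void main(String[] args) {
        // In-memory stand-in for resources Spring can resolve.
        Map<String, byte[]> springResources = Map.of(
                "/remoting/protectedHandlersConfig.xml",
                "<handlers/>".getBytes(StandardCharsets.UTF_8));
        ResourceContext embedded = path -> null; // embedded container finds nothing
        ResourceContext spring = path -> springResources.containsKey(path)
                ? new ByteArrayInputStream(springResources.get(path)) : null;
        ResourceContext proxied = withFallback(embedded, spring);
        System.out.println(
                proxied.getResourceAsStream("/remoting/protectedHandlersConfig.xml") != null);
    }
}
```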

> Oak Standalone throws ClassNotFoundException: 
> remoting/protectedHandlersConfig.xml
> --
>
> Key: OAK-5500
> URL: https://issues.apache.org/jira/browse/OAK-5500
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: examples, webapp
>Affects Versions: 1.5.18
>Reporter: Mathias Conradt
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: candidate_oak_1_6
> Fix For: 1.7.0, 1.8
>
>
> Starting the Oak standalone application via {{java -jar 
> oak-standalone-1.6-SNAPSHOT.jar}} (trunk version as of 23.01.2017) throws an 
> error: 
> {{java.lang.ClassNotFoundException: remoting/protectedHandlersConfig.xml}}
> Complete application startup log: http://pastebin.com/hdtqr3AR
> (There is a related issue (JCR-4058, 
> https://issues.apache.org/jira/browse/JCR-4058) under the Jackrabbit project, 
> not under Oak yet, therefore creating this new issue.)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (OAK-5852) RDB*Store: update Tomcat JDBC pool dependency to 7.0.75

2017-03-07 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15885867#comment-15885867
 ] 

Julian Reschke edited comment on OAK-5852 at 3/8/17 6:47 AM:
-

trunk: [r1784574|http://svn.apache.org/r1784574]
1.6: [r1785869|http://svn.apache.org/r1785869]
1.4: [r1785880|http://svn.apache.org/r1785880]
1.2: [r1785887|http://svn.apache.org/r1785887]
1.0: [r1785918|http://svn.apache.org/r1785918]



was (Author: reschke):
trunk: [r1784574|http://svn.apache.org/r1784574]
1.6: [r1785869|http://svn.apache.org/r1785869]
1.4: [r1785880|http://svn.apache.org/r1785880]
1.2: [r1785887|http://svn.apache.org/r1785887]



> RDB*Store: update Tomcat JDBC pool dependency to 7.0.75
> ---
>
> Key: OAK-5852
> URL: https://issues.apache.org/jira/browse/OAK-5852
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: parent, rdbmk
>Affects Versions: 1.0.37, 1.2.23, 1.4.13, 1.6.0
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.4.15, 1.7.0, 1.8, 1.0.38, 1.6.2, 1.2.25
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-5852) RDB*Store: update Tomcat JDBC pool dependency to 7.0.75

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5852:

Labels:   (was: candidate_oak_1_0)

> RDB*Store: update Tomcat JDBC pool dependency to 7.0.75
> ---
>
> Key: OAK-5852
> URL: https://issues.apache.org/jira/browse/OAK-5852
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: parent, rdbmk
>Affects Versions: 1.0.37, 1.2.23, 1.4.13, 1.6.0
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.4.15, 1.7.0, 1.8, 1.0.38, 1.6.2, 1.2.25
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-5852) RDB*Store: update Tomcat JDBC pool dependency to 7.0.75

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5852:

Fix Version/s: 1.0.38

> RDB*Store: update Tomcat JDBC pool dependency to 7.0.75
> ---
>
> Key: OAK-5852
> URL: https://issues.apache.org/jira/browse/OAK-5852
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: parent, rdbmk
>Affects Versions: 1.0.37, 1.2.23, 1.4.13, 1.6.0
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0
> Fix For: 1.4.15, 1.7.0, 1.8, 1.0.38, 1.6.2, 1.2.25
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (OAK-2492) Flag Document having many children

2017-03-07 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-2492.
--
Resolution: Later

Given that the current journal-based diff is used in most cases, the changes 
proposed here do not provide much benefit.

Resolving for later consideration

> Flag Document having many children
> --
>
> Key: OAK-2492
> URL: https://issues.apache.org/jira/browse/OAK-2492
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: performance
>
> Current DocumentMK logic, while performing a diff for child nodes, works as 
> below:
> # Get children for the _before_ revision up to MANY_CHILDREN_THRESHOLD 
> (which defaults to 50). Note that the current logic for fetching child nodes 
> also adds the child {{NodeDocument}}s to the {{Document}} cache and reads 
> the complete Document for each child.
> # Get children for the _after_ revision with the same limit.
> # If the child list is complete, do a direct diff on the fetched children.
> # If the list is not complete, i.e. the number of children exceeds the 
> threshold, fall back to a query-based diff (also see OAK-1970).
> So in cases where the number of children is large, all the work done in #1 
> is wasted and should be avoided. To avoid it, we can mark parent nodes which 
> have many children with a special flag like {{_manyChildren}}. Once such 
> nodes are marked, the diff logic can check for the flag and skip the work 
> done in #1.
> This is similar to the way we mark nodes which have at least one child 
> (OAK-1117)
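The control flow proposed above can be sketched as follows; this is a simplified illustration, with a plain Map standing in for the parent NodeDocument and strings standing in for the two diff strategies.

```java
import java.util.List;
import java.util.Map;

// Sketch of the proposed shortcut. The flag name _manyChildren and the
// threshold of 50 follow the issue text; the API is heavily simplified.
public class ManyChildrenDiffSketch {
    static final int MANY_CHILDREN_THRESHOLD = 50;

    static String diffChildren(Map<String, ?> parentDoc,
                               List<String> before, List<String> after) {
        // If the parent is flagged, skip fetching child documents
        // (step 1 above) and go straight to the query-based diff.
        if (Boolean.TRUE.equals(parentDoc.get("_manyChildren"))) {
            return "query-based diff";
        }
        if (before.size() < MANY_CHILDREN_THRESHOLD
                && after.size() < MANY_CHILDREN_THRESHOLD) {
            return "direct diff on fetched children";
        }
        return "query-based diff";
    }

    public static void main(String[] args) {
        System.out.println(diffChildren(Map.of("_manyChildren", true),
                List.of("a"), List.of("a", "b")));
        System.out.println(diffChildren(Map.of(),
                List.of("a"), List.of("a", "b")));
    }
}
```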



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-2492) Flag Document having many children

2017-03-07 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-2492:
-
Fix Version/s: (was: 1.8)

> Flag Document having many children
> --
>
> Key: OAK-2492
> URL: https://issues.apache.org/jira/browse/OAK-2492
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: performance
>
> Current DocumentMK logic, while performing a diff for child nodes, works as 
> below:
> # Get children for the _before_ revision up to MANY_CHILDREN_THRESHOLD 
> (which defaults to 50). Note that the current logic for fetching child nodes 
> also adds the child {{NodeDocument}}s to the {{Document}} cache and reads 
> the complete Document for each child.
> # Get children for the _after_ revision with the same limit.
> # If the child list is complete, do a direct diff on the fetched children.
> # If the list is not complete, i.e. the number of children exceeds the 
> threshold, fall back to a query-based diff (also see OAK-1970).
> So in cases where the number of children is large, all the work done in #1 
> is wasted and should be avoided. To avoid it, we can mark parent nodes which 
> have many children with a special flag like {{_manyChildren}}. Once such 
> nodes are marked, the diff logic can check for the flag and skip the work 
> done in #1.
> This is similar to the way we mark nodes which have at least one child 
> (OAK-1117)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (OAK-5450) Documented example for relativeNode in index aggregation does not work.

2017-03-07 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-5450.
--
   Resolution: Fixed
Fix Version/s: 1.8
   1.7.0

> Documented example for relativeNode in index aggregation does not work.
> ---
>
> Key: OAK-5450
> URL: https://issues.apache.org/jira/browse/OAK-5450
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: doc
>Affects Versions: 1.4.10
>Reporter: Volker Schmidt
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.7.0, 1.8
>
>
> The documentation contains the following example query:
> select * from [app:Asset] where contains(renditions/original/*, "pluto")
> This query does not work: the parser identifies the pattern /* as the 
> beginning of a comment and does not find the end of the comment. The 
> following query works:
> select * from [app:Asset] where contains([renditions/original/*], "pluto")
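As a minimal illustration, the two query strings differ only in the brackets around the relative path:

```java
public class RelativeNodeQuery {
    public static void main(String[] args) {
        // Unbracketed: the SQL-2 parser reads "/*" as the start of a comment.
        String broken = "select * from [app:Asset] where contains(renditions/original/*, \"pluto\")";
        // Bracketed: the relative path is quoted, so "/*" is not special.
        String fixed = "select * from [app:Asset] where contains([renditions/original/*], \"pluto\")";
        System.out.println(fixed.contains("[renditions/original/*]")
                && !broken.contains("[renditions"));
    }
}
```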



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (OAK-5692) Oak Lucene analyzers docs unclear on viable configurations

2017-03-07 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-5692.
--
   Resolution: Fixed
Fix Version/s: 1.8
   1.7.0

Updated docs have been published, so resolving the issue

> Oak Lucene analyzers docs unclear on viable configurations
> --
>
> Key: OAK-5692
> URL: https://issues.apache.org/jira/browse/OAK-5692
> Project: Jackrabbit Oak
>  Issue Type: Documentation
>Reporter: David Gonzalez
>Assignee: Chetan Mehrotra
> Fix For: 1.7.0, 1.8
>
>
> The Oak lucene docs [1] > Analyzers section would benefit from clarification:
> Combining analyzer-based topics into a single ticket:
> * If no analyzer is specified, what analyzer setup is used? (At a bare 
> minimum, _some_ tokenizer must be used.)
> * The docs mention the "default" analyzer 
> ([oak:queryIndexDefinition]/analyzers/default). 
> ** Can other analyzers be defined? 
> ** How are they selected for use? 
> ** is the selection configurable?
> * Is the analyzer both index AND query time (unless specified by 
> `type=index|query` property)?
> * What is the naming for multiple analyzer nodes? Are all children of 
> analyzers assumed to be analyzers? E.g. if I want a special configuration for 
> index and another for query, could I create:
> {noformat}
> ../myIndex/analyzers/indexAnalyzer@type=index
> .. define the index-time analyzer ...
> ../myIndex/analyzers/queryAnalyzer@type=query
> .. define the query-time analyzer ...
> {noformat}
> * How are languages handled? E.g. language-specific stop words, synonyms, 
> char mapping, and stemming.
> * If 
> [oak:queryIndexDefinition]/analyzers/default@class=org.apache.lucene.analysis.standard.StandardAnalyzer
>  it appears the Standard Tokenizer and Standard Lowercase and Stop Filters 
> are used. The Stop filter can be augmented with the well-named stopwords file.
> ** Can other charFilters/filters be layered on top of this "named" Analyzer 
> (it seems not).
> * When the Stop filter is used it provides the OOTB language-based stop 
> words. If a custom stopwords file is provided, that list replaces the OOTB 
> language-based one, requiring the developer to provide their own 
> language-based stop words. Is this correct? (This should be called out, with 
> a link to the catalog of OOTB stopword txt files for easy inclusion.)
> * The Stop filter's words property must be a String, not a String[], and the 
> value is a comma-delimited String. It would be good to call this out.
> * What are all the CharFilters/Filters available? Is there a concise list 
> with their params? (E.g. I think PorterStem might support an ignoreCase param?)
> * Synonym Filter syntax is unclear; it seems there are 2 formats: 
> directional (x -> y) and bi-directional (comma-delimited); I could only get 
> the latter to work.
> * Are all the options in link [2] supported? It's unclear whether there is a 
> 1:1 mapping between Oak Lucene and Solr's capabilities, or whether [2] is a 
> loose example of the "types" of supported analyzers.
> * For something like the PatternReplaceCharFilterFactory [3], how do you 
> define multiple pattern mappings? Since, IIUC, the charFilter node MUST be 
> named:
> {noformat}.../charFilters/PatternReplace{noformat} you can't have multiple 
> nodes named "PatternReplace", each with its own "@pattern" and "@replace" 
> properties. It seems there is only support for a single instance of each 
> factory type?
> Generally this seems like the handiest resource: 
> https://cwiki.apache.org/confluence/display/solr/Understanding+Analyzers%2C+Tokenizers%2C+and+Filters
> [1]  http://jackrabbit.apache.org/oak/docs/query/lucene.html
> [2] 
> https://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#Specifying_an_Analyzer_in_the_schema
> [3] https://cwiki.apache.org/confluence/display/solr/CharFilterFactories
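For context, a composed "default" analyzer in an Oak Lucene index definition is sketched below in the content-tree notation used by the Oak docs; the exact node and property names should be verified against [1]:

{noformat}
/oak:index/assetIndex
  - jcr:primaryType = "oak:QueryIndexDefinition"
  - type = "lucene"
  + analyzers
    + default
      + tokenizer
        - name = "Standard"
      + filters
        + LowerCase
        + Stop
          - words = "stop1, stop2"
{noformat}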



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (OAK-5908) BlobIdTracker should not resurrect deleted blob ids in a clustered setup after GC

2017-03-07 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-5908.

   Resolution: Fixed
Fix Version/s: 1.8
   1.7.0

Fixed with http://svn.apache.org/viewvc?rev=1785917&view=rev

> BlobIdTracker should not resurrect deleted blob ids in a clustered setup 
> after GC
> -
>
> Key: OAK-5908
> URL: https://issues.apache.org/jira/browse/OAK-5908
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Reporter: Amit Jain
>Assignee: Amit Jain
>  Labels: candidate_oak_1_6
> Fix For: 1.7.0, 1.8
>
>
> BlobIdTracker can resurrect deleted blob ids from a previous run when running 
> in a clustered setup by also synchronizing blob references from other cluster 
> nodes which don't have information about the deleted blob ids.
> The effect of this is that when blob gc is executed again it identifies those 
> ids as candidates and logs a warning when trying to delete them since they 
> had already been deleted in the last gc execution.
> The locally tracked files at each of the instances should be purged after 
> synchronizing with the datastore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (OAK-5368) Not configurable/Unnecessary short Lucene Observation Queue length

2017-03-07 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15900768#comment-15900768
 ] 

Chetan Mehrotra commented on OAK-5368:
--

[~stefan.eissing] From the code flow I do not see any warning logging enabled 
when the queue becomes full. Can you attach the logs you saw related to this 
observer?

> Not configurable/Unnecessary short Lucene Observation Queue length
> --
>
> Key: OAK-5368
> URL: https://issues.apache.org/jira/browse/OAK-5368
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.4.10
>Reporter: Stefan Eissing
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.8
>
> Attachments: LuceneIndexConfigObserver-under-load.png
>
>
> The maximum queue length in the {{LuceneIndexConfigObserver}} is hard coded 
> to 5. This is unreasonably short for production systems experiencing heavy 
> load.
> Tests with a patched version resulted in observed queue lengths of >100 
> entries. This floods warnings into the error log and produces additional load.
> The fix would be to increase the queue maximum, make it configurable, or 
> rely on the system default (which can be configured).
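The suggested fix could look roughly like the sketch below; the system property name and the default of 1000 are hypothetical, not actual Oak configuration keys.

```java
import java.util.concurrent.ArrayBlockingQueue;

// Sketch: read the observation queue limit from a system property instead
// of hard coding 5. Property name and default are illustrative only.
public class ObservationQueueSketch {
    static final int DEFAULT_QUEUE_LENGTH = 1000;

    static ArrayBlockingQueue<Object> createQueue() {
        int len = Integer.getInteger("oak.lucene.observationQueueLength",
                DEFAULT_QUEUE_LENGTH);
        return new ArrayBlockingQueue<>(len);
    }

    public static void main(String[] args) {
        // With the property unset, the queue uses the configurable default.
        System.out.println(createQueue().remainingCapacity());
    }
}
```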



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (OAK-3341) lucene technical debt

2017-03-07 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-3341.
--
   Resolution: Fixed
Fix Version/s: 1.7.0

> lucene technical debt
> -
>
> Key: OAK-3341
> URL: https://issues.apache.org/jira/browse/OAK-3341
> Project: Jackrabbit Oak
>  Issue Type: Epic
>  Components: lucene
>Reporter: Daniel Hasler
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.7.0, 1.8
>
>
> As discussed bilaterally, grouping the technical debt for Lucene in this 
> issue for easier tracking



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-5894) IndexDefinitionBuilder shouldn't set type=lucene if type=disabled in existing tree

2017-03-07 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-5894:
-
Labels: candidate_oak_1_6  (was: )

> IndexDefinitionBuilder shouldn't set type=lucene if type=disabled in existing 
> tree
> --
>
> Key: OAK-5894
> URL: https://issues.apache.org/jira/browse/OAK-5894
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: candidate_oak_1_6
> Fix For: 1.7.0, 1.8
>
>
> IndexDefinitionBuilder is often used to provision indices in 
> RepositoryInitializer. In its current form, provisioning would lead to 
> setting type="lucene" (and hence reindex=true as a side effect) even if the 
> definition had been marked disabled.
> Sure, the provisioning logic can do that check - but not setting type=lucene 
> sounds like a sane default behavior.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (OAK-5894) IndexDefinitionBuilder shouldn't set type=lucene if type=disabled in existing tree

2017-03-07 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-5894.
--
   Resolution: Fixed
Fix Version/s: 1.7.0

Done with 1785916. Now the {{type}} would be set to {{lucene}} in all cases 
except when it is already set to {{disabled}}
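The resulting guard can be sketched as below; a plain Map stands in for the index definition tree, and the method name is illustrative (the real change is inside IndexDefinitionBuilder).

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the fixed behavior: set type=lucene unless the existing
// definition is already marked disabled.
public class TypeGuardSketch {
    static void setType(Map<String, String> indexDef) {
        if (!"disabled".equals(indexDef.get("type"))) {
            indexDef.put("type", "lucene");
        }
    }

    public static void main(String[] args) {
        Map<String, String> disabled = new HashMap<>(Map.of("type", "disabled"));
        Map<String, String> fresh = new HashMap<>();
        setType(disabled); // left untouched
        setType(fresh);    // gets type=lucene
        System.out.println(disabled.get("type") + " " + fresh.get("type"));
    }
}
```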

> IndexDefinitionBuilder shouldn't set type=lucene if type=disabled in existing 
> tree
> --
>
> Key: OAK-5894
> URL: https://issues.apache.org/jira/browse/OAK-5894
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.7.0, 1.8
>
>
> IndexDefinitionBuilder is often used to provision indices in 
> RepositoryInitializer. In its current form, provisioning would lead to 
> setting type="lucene" (and hence reindex=true as a side effect) even if the 
> definition had been marked disabled.
> Sure, the provisioning logic can do that check - but not setting type=lucene 
> sounds like a sane default behavior.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (OAK-5908) BlobIdTracker should not resurrect deleted blob ids in a clustered setup after GC

2017-03-07 Thread Amit Jain (JIRA)
Amit Jain created OAK-5908:
--

 Summary: BlobIdTracker should not resurrect deleted blob ids in a 
clustered setup after GC
 Key: OAK-5908
 URL: https://issues.apache.org/jira/browse/OAK-5908
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: blob
Reporter: Amit Jain
Assignee: Amit Jain


BlobIdTracker can resurrect deleted blob ids from a previous run when running 
in a clustered setup by also synchronizing blob references from other cluster 
nodes which don't have information about the deleted blob ids.
The effect of this is that when blob gc is executed again it identifies those 
ids as candidates and logs a warning when trying to delete them since they had 
already been deleted in the last gc execution.

The locally tracked files at each of the instances should be purged after 
synchronizing with the datastore.
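The proposed fix can be sketched as follows; the class shape and method names here are illustrative, not the actual BlobIdTracker API.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: after a cluster node pushes its locally tracked blob ids to the
// shared datastore, purge the local set so deleted ids cannot later be
// re-synchronized from stale local state.
public class BlobIdTrackerSketch {
    private final Set<String> locallyTracked = new HashSet<>();
    private final Set<String> datastoreRecords;

    BlobIdTrackerSketch(Set<String> datastoreRecords) {
        this.datastoreRecords = datastoreRecords;
    }

    void track(String blobId) {
        locallyTracked.add(blobId);
    }

    void snapshot() {
        datastoreRecords.addAll(locallyTracked);
        locallyTracked.clear(); // purge local state after synchronizing
    }

    public static void main(String[] args) {
        Set<String> shared = new HashSet<>();
        BlobIdTrackerSketch tracker = new BlobIdTrackerSketch(shared);
        tracker.track("blob-1");
        tracker.snapshot();
        System.out.println(shared.contains("blob-1") + " "
                + tracker.locallyTracked.isEmpty());
    }
}
```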



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (OAK-5894) IndexDefinitionBuilder shouldn't set type=lucene if type=disabled in existing tree

2017-03-07 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra reassigned OAK-5894:


Assignee: Chetan Mehrotra

> IndexDefinitionBuilder shouldn't set type=lucene if type=disabled in existing 
> tree
> --
>
> Key: OAK-5894
> URL: https://issues.apache.org/jira/browse/OAK-5894
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.8
>
>
> IndexDefinitionBuilder is often used to provision indices in 
> RepositoryInitializer. In its current form, provisioning would lead to 
> setting type="lucene" (and hence reindex=true as a side effect) even if the 
> definition had been marked disabled.
> Sure, the provisioning logic can do that check - but not setting type=lucene 
> sounds like a sane default behavior.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (OAK-5907) oak-run-1.6.0.jar datastorecheck --consistency reports bogus errors for .cfg configurations

2017-03-07 Thread Jayan Kandathil (JIRA)
Jayan Kandathil created OAK-5907:


 Summary: oak-run-1.6.0.jar datastorecheck --consistency reports 
bogus errors for .cfg configurations
 Key: OAK-5907
 URL: https://issues.apache.org/jira/browse/OAK-5907
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: run
Affects Versions: 1.6.0
 Environment: Windows 10, Java HotSpot 64-Bit Server VM 1.8.0_121
Reporter: Jayan Kandathil
Priority: Minor


[Data Store] consistencycheck reports bogus errors when the Data Store 
configuration is in a .cfg (untyped) file such as 
org.apache.jackrabbit.oak.plugins.blob.datastore.FileDataStore.cfg

G:\AEM_6.2\Author>java -Xmx4g -Dtar.memoryMapped=true -jar oak-run-1.6.0.jar 
datastorecheck --consistency --store 
G:\\AEM_6.3\\crx-quickstart\\repository\\segmentstore --fds 
G:\\AEM_6.3\\crx-quickstart\\install\\org.apache.jackrabbit.oak.plugins.blob.datastore.FileDataStore.cfg
 --dump G:\\TEMP

Apache Jackrabbit Oak 1.6.0
Starting dump of blob ids
0 blob ids found
Finished in 0 seconds
Starting dump of blob references
3343 blob references found
Finished in 0 seconds
Starting consistency check
Consistency check found 1679 missing blobs
Consistency check failure for the data store
Finished in 0 seconds
[consistency] - G:\TEMP\[consistency]1488815159033



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (OAK-5852) RDB*Store: update Tomcat JDBC pool dependency to 7.0.75

2017-03-07 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15885867#comment-15885867
 ] 

Julian Reschke edited comment on OAK-5852 at 3/7/17 9:02 PM:
-

trunk: [r1784574|http://svn.apache.org/r1784574]
1.6: [r1785869|http://svn.apache.org/r1785869]
1.4: [r1785880|http://svn.apache.org/r1785880]
1.2: [r1785887|http://svn.apache.org/r1785887]




was (Author: reschke):
trunk: [r1784574|http://svn.apache.org/r1784574]
1.6: [r1785869|http://svn.apache.org/r1785869]
1.4: [r1785880|http://svn.apache.org/r1785880]



> RDB*Store: update Tomcat JDBC pool dependency to 7.0.75
> ---
>
> Key: OAK-5852
> URL: https://issues.apache.org/jira/browse/OAK-5852
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: parent, rdbmk
>Affects Versions: 1.0.37, 1.2.23, 1.4.13, 1.6.0
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0
> Fix For: 1.4.15, 1.7.0, 1.8, 1.6.2, 1.2.25
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-5852) RDB*Store: update Tomcat JDBC pool dependency to 7.0.75

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5852:

Labels: candidate_oak_1_0  (was: candidate_oak_1_0 candidate_oak_1_2)

> RDB*Store: update Tomcat JDBC pool dependency to 7.0.75
> ---
>
> Key: OAK-5852
> URL: https://issues.apache.org/jira/browse/OAK-5852
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: parent, rdbmk
>Affects Versions: 1.0.37, 1.2.23, 1.4.13, 1.6.0
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0
> Fix For: 1.4.15, 1.7.0, 1.8, 1.6.2, 1.2.25
>
>






[jira] [Updated] (OAK-5852) RDB*Store: update Tomcat JDBC pool dependency to 7.0.75

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5852:

Fix Version/s: 1.2.25

> RDB*Store: update Tomcat JDBC pool dependency to 7.0.75
> ---
>
> Key: OAK-5852
> URL: https://issues.apache.org/jira/browse/OAK-5852
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: parent, rdbmk
>Affects Versions: 1.0.37, 1.2.23, 1.4.13, 1.6.0
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0
> Fix For: 1.4.15, 1.7.0, 1.8, 1.6.2, 1.2.25
>
>






[jira] [Updated] (OAK-5852) RDB*Store: update Tomcat JDBC pool dependency to 7.0.75

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5852:

Fix Version/s: 1.4.15

> RDB*Store: update Tomcat JDBC pool dependency to 7.0.75
> ---
>
> Key: OAK-5852
> URL: https://issues.apache.org/jira/browse/OAK-5852
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: parent, rdbmk
>Affects Versions: 1.0.37, 1.2.23, 1.4.13, 1.6.0
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.4.15, 1.7.0, 1.8, 1.6.2
>
>






[jira] [Updated] (OAK-5852) RDB*Store: update Tomcat JDBC pool dependency to 7.0.75

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5852:

Labels: candidate_oak_1_0 candidate_oak_1_2  (was: candidate_oak_1_0 
candidate_oak_1_2 candidate_oak_1_4)

> RDB*Store: update Tomcat JDBC pool dependency to 7.0.75
> ---
>
> Key: OAK-5852
> URL: https://issues.apache.org/jira/browse/OAK-5852
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: parent, rdbmk
>Affects Versions: 1.0.37, 1.2.23, 1.4.13, 1.6.0
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.4.15, 1.7.0, 1.8, 1.6.2
>
>






[jira] [Comment Edited] (OAK-5852) RDB*Store: update Tomcat JDBC pool dependency to 7.0.75

2017-03-07 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885867#comment-15885867
 ] 

Julian Reschke edited comment on OAK-5852 at 3/7/17 8:25 PM:
-

trunk: [r1784574|http://svn.apache.org/r1784574]
1.6: [r1785869|http://svn.apache.org/r1785869]
1.4: [r1785880|http://svn.apache.org/r1785880]




was (Author: reschke):
trunk: [r1784574|http://svn.apache.org/r1784574]
1.6: [r1785869|http://svn.apache.org/r1785869]


> RDB*Store: update Tomcat JDBC pool dependency to 7.0.75
> ---
>
> Key: OAK-5852
> URL: https://issues.apache.org/jira/browse/OAK-5852
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: parent, rdbmk
>Affects Versions: 1.0.37, 1.2.23, 1.4.13, 1.6.0
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.4.15, 1.7.0, 1.8, 1.6.2
>
>






[jira] [Comment Edited] (OAK-5852) RDB*Store: update Tomcat JDBC pool dependency to 7.0.75

2017-03-07 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885867#comment-15885867
 ] 

Julian Reschke edited comment on OAK-5852 at 3/7/17 7:03 PM:
-

trunk: [r1784574|http://svn.apache.org/r1784574]
1.6: [r1785869|http://svn.apache.org/r1785869]



was (Author: reschke):
trunk: [r1784574|http://svn.apache.org/r1784574]


> RDB*Store: update Tomcat JDBC pool dependency to 7.0.75
> ---
>
> Key: OAK-5852
> URL: https://issues.apache.org/jira/browse/OAK-5852
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: parent, rdbmk
>Affects Versions: 1.0.37, 1.2.23, 1.4.13, 1.6.0
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.7.0, 1.8, 1.6.2
>
>






[jira] [Updated] (OAK-5852) RDB*Store: update Tomcat JDBC pool dependency to 7.0.75

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5852:

Labels: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4  (was: 
candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4 candidate_oak_1_6)

> RDB*Store: update Tomcat JDBC pool dependency to 7.0.75
> ---
>
> Key: OAK-5852
> URL: https://issues.apache.org/jira/browse/OAK-5852
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: parent, rdbmk
>Affects Versions: 1.0.37, 1.2.23, 1.4.13, 1.6.0
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.7.0, 1.8, 1.6.2
>
>






[jira] [Updated] (OAK-5852) RDB*Store: update Tomcat JDBC pool dependency to 7.0.75

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5852:

Fix Version/s: 1.6.2

> RDB*Store: update Tomcat JDBC pool dependency to 7.0.75
> ---
>
> Key: OAK-5852
> URL: https://issues.apache.org/jira/browse/OAK-5852
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: parent, rdbmk
>Affects Versions: 1.0.37, 1.2.23, 1.4.13, 1.6.0
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.7.0, 1.8, 1.6.2
>
>






[jira] [Created] (OAK-5906) PrivilegeContext.definesLocation returns true for siblings of privilege root path

2017-03-07 Thread angela (JIRA)
angela created OAK-5906:
---

 Summary: PrivilegeContext.definesLocation returns true for 
siblings of privilege root path
 Key: OAK-5906
 URL: https://issues.apache.org/jira/browse/OAK-5906
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: angela
Assignee: angela
Priority: Minor








[jira] [Updated] (OAK-5906) PrivilegeContext.definesLocation returns true for siblings of privilege root path

2017-03-07 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-5906:

Description: found while working on OAK-5882

> PrivilegeContext.definesLocation returns true for siblings of privilege root 
> path
> -
>
> Key: OAK-5906
> URL: https://issues.apache.org/jira/browse/OAK-5906
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: angela
>Assignee: angela
>Priority: Minor
>
> found while working on OAK-5882





[jira] [Resolved] (OAK-5903) Authentication: add extension to retrieve user principal

2017-03-07 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-5903.
-
   Resolution: Fixed
Fix Version/s: 1.8
   1.7.0

Committed revision 1785855.


> Authentication: add extension to retrieve user principal
> 
>
> Key: OAK-5903
> URL: https://issues.apache.org/jira/browse/OAK-5903
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core
>Reporter: angela
>Assignee: angela
>  Labels: performance
> Fix For: 1.7.0, 1.8
>
> Attachments: LoginLogout_authenticationWithPrincipal.txt, 
> LoginLogout_statusquo_170.txt, 
> LoginLogoutTest-authenticationWithPrincipal-iterations1.txt, 
> LoginLogoutTest-statusquo-iterations1.txt, 
> LoginLogout_token_authenticationWithPrincipal.txt, 
> LoginLogout_token_statusquo.txt
>
>
> In the current default setup the implementations of the {{Authentication}} 
> interface resolve a user for the given login credentials but don't provide 
> means to retrieve the associated principal. Consequently, upon 
> {{LoginModule.commit}} the user needs to be resolved a second time to compute 
> the set of all principals. This could be simplified by using 
> {{PrincipalProvider.getGroupMembership(Principal)}} if the user's principal 
> were available upon a successful call to {{Authentication.authenticate}}.
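The single-resolution idea described above can be sketched roughly as follows. This is a hypothetical illustration, not Oak's actual API: the class name, the `String`-based credentials, and the `SimplePrincipal` helper are all assumptions made for the example.

```java
import java.security.Principal;

// Hypothetical sketch (not Oak's actual API): after a successful
// authenticate(), the resolved user's principal is kept, so that
// LoginModule.commit can compute group membership directly instead of
// resolving the user a second time.
public class PrincipalAwareAuthentication {

    // Minimal Principal implementation for the sketch.
    static final class SimplePrincipal implements Principal {
        private final String name;
        SimplePrincipal(String name) { this.name = name; }
        @Override public String getName() { return name; }
    }

    private Principal userPrincipal;

    // Stand-in for Authentication.authenticate(Credentials).
    public boolean authenticate(String userId) {
        if (userId == null) {
            return false;
        }
        // In Oak this is where the user would be resolved; remember its principal.
        userPrincipal = new SimplePrincipal(userId);
        return true;
    }

    // Non-null only after a successful authenticate(); commit() could feed
    // this to PrincipalProvider.getGroupMembership(Principal).
    public Principal getUserPrincipal() {
        return userPrincipal;
    }
}
```

The point of the design is that the principal is captured as a side effect of the lookup that authentication performs anyway.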





[jira] [Resolved] (OAK-4462) LoginModuleImpl: option to have AuthInfo populated with userId instead of loginName

2017-03-07 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-4462.
-
   Resolution: Fixed
Fix Version/s: 1.8
   1.7.0

Committed revision 1785855.


> LoginModuleImpl: option to have AuthInfo populated with userId instead of 
> loginName
> ---
>
> Key: OAK-4462
> URL: https://issues.apache.org/jira/browse/OAK-4462
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: angela
>Assignee: angela
>Priority: Minor
> Fix For: 1.7.0, 1.8
>
>
> The current implementation of {{LoginModuleImpl}} _always_ populates the 
> {{AuthInfo}} with the userId as extracted from the {{Credentials}} or the 
> shared state; doing so will make {{Session.getUserID()}} expose the 
> 'login-id', which may or may not correspond to the ID of the corresponding 
> {{User}} as it is expected to exist with this login module implementation.
> While this clearly is a design decision with the {{LoginModuleImpl}} and 
> perfectly in accordance with the API contract of {{Session.getUserID()}}, 
> there might be cases where equality of {{Session.getUserID()}} and 
> {{User.getID()}} would be desirable.
> So, we may think about adding an option to the default authentication; be it 
> with {{LoginModuleImpl}} and|or the 
> {{UserAuthenticationFactory}}|{{UserAuthentication}}.
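The proposed option can be reduced to a one-line decision. The sketch below is illustrative only; the method and parameter names are assumptions, not Oak's configuration API.

```java
// Hypothetical sketch of the proposed option: when enabled, AuthInfo is
// populated with the ID of the resolved User instead of the login name
// extracted from the Credentials. Names are illustrative, not Oak's API.
public class AuthInfoIdChoice {

    // Default behaviour keeps the login name; with the option enabled,
    // Session.getUserID() would equal User.getID().
    public static String userIdForAuthInfo(String loginName,
                                           String resolvedUserId,
                                           boolean exposeUserId) {
        return exposeUserId ? resolvedUserId : loginName;
    }
}
```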





[jira] [Commented] (OAK-4780) VersionGarbageCollector should be able to run incrementally

2017-03-07 Thread Stefan Eissing (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899710#comment-15899710
 ] 

Stefan Eissing commented on OAK-4780:
-

 * I will merge current trunk when you have added your OAK-3070 changes.
 * Revision.getCurrentTimestamp() will change that to store.getClock().getTime()
 * {{maxIterations}} is only good for testing, {{maxDuration}} is good when 
people want to run it in a maintenance window. I know the goal could be to have 
it run all the time, but {{maxDuration}} is for the fallback.
 * {{batchDelay}} as factor - interesting. Will do.

> VersionGarbageCollector should be able to run incrementally
> ---
>
> Key: OAK-4780
> URL: https://issues.apache.org/jira/browse/OAK-4780
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core, documentmk
>Reporter: Julian Reschke
> Attachments: leafnodes.diff, leafnodes-v2.diff, leafnodes-v3.diff
>
>
> Right now, the documentmk's version garbage collection runs in several phases.
> It first collects the paths of candidate nodes, and only once this has been 
> successfully finished, starts actually deleting nodes.
> This can be a problem when the regularly scheduled garbage collection is 
> interrupted during the path collection phase, maybe due to other maintenance 
> tasks. On the next run, the number of paths to be collected will be even 
> bigger, thus making it even more likely to fail.
> We should think about a change in the logic that would allow the GC to run in 
> chunks; maybe by partitioning the path space by top level directory.





[jira] [Comment Edited] (OAK-5876) SplitDocumentCleanup should implement Closeable

2017-03-07 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899415#comment-15899415
 ] 

Julian Reschke edited comment on OAK-5876 at 3/7/17 3:44 PM:
-

trunk: [r1785283|http://svn.apache.org/r1785283]
1.6: [r1785837|http://svn.apache.org/r1785837]
1.4: [r1785840|http://svn.apache.org/r1785840]
1.2: [r1785842|http://svn.apache.org/r1785842]
1.0: [r1785848|http://svn.apache.org/r1785848]





was (Author: reschke):
trunk: [r1785283|http://svn.apache.org/r1785283]
1.6: [r1785837|http://svn.apache.org/r1785837]
1.4: [r1785840|http://svn.apache.org/r1785840]
1.2: [r1785842|http://svn.apache.org/r1785842]




> SplitDocumentCleanup should implement Closeable
> ---
>
> Key: OAK-5876
> URL: https://issues.apache.org/jira/browse/OAK-5876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.4.15, 1.7.0, 1.8, 1.0.38, 1.6.2, 1.2.25
>
> Attachments: OAK-5876.diff
>
>
> {{SplitDocumentCleanup}} currently does not close the {{Iterable}} holding 
> {{splitDocGarbage}}. It should implement {{Closeable}} and forward calls to 
> {{close()}} to the {{Iterable}}, if that happens to be {{Closeable}}.
> Likewise, {{VersionGCSupport}} should call {{close()}} on 
> {{SplitDocumentCleanup}} when done.
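The forwarding described above can be sketched as follows. The class names are illustrative stand-ins, not the actual Oak sources.

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hedged sketch of the fix: the cleanup implements Closeable and forwards
// close() to the garbage Iterable when that Iterable is itself Closeable.
public class CleanupSketch implements Closeable {
    private final Iterable<String> splitDocGarbage;

    public CleanupSketch(Iterable<String> splitDocGarbage) {
        this.splitDocGarbage = splitDocGarbage;
    }

    @Override
    public void close() {
        // Only forward when the underlying Iterable manages resources.
        if (splitDocGarbage instanceof Closeable) {
            try {
                ((Closeable) splitDocGarbage).close();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }

    // Resource-backed Iterable used to demonstrate the forwarding.
    public static class CloseableDocs implements Iterable<String>, Closeable {
        public boolean closed = false;
        private final List<String> ids;
        public CloseableDocs(String... ids) { this.ids = Arrays.asList(ids); }
        @Override public Iterator<String> iterator() { return ids.iterator(); }
        @Override public void close() { closed = true; }
    }
}
```

A caller such as `VersionGCSupport` would then close the cleanup in a try-with-resources block when done.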





[jira] [Updated] (OAK-5876) SplitDocumentCleanup should implement Closeable

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5876:

Labels:   (was: candidate_oak_1_0)

> SplitDocumentCleanup should implement Closeable
> ---
>
> Key: OAK-5876
> URL: https://issues.apache.org/jira/browse/OAK-5876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.4.15, 1.7.0, 1.8, 1.0.38, 1.6.2, 1.2.25
>
> Attachments: OAK-5876.diff
>
>
> {{SplitDocumentCleanup}} currently does not close the {{Iterable}} holding 
> {{splitDocGarbage}}. It should implement {{Closeable}} and forward calls to 
> {{close()}} to the {{Iterable}}, if that happens to be {{Closeable}}.
> Likewise, {{VersionGCSupport}} should call {{close()}} on 
> {{SplitDocumentCleanup}} when done.





[jira] [Updated] (OAK-5876) SplitDocumentCleanup should implement Closeable

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5876:

Fix Version/s: 1.0.38

> SplitDocumentCleanup should implement Closeable
> ---
>
> Key: OAK-5876
> URL: https://issues.apache.org/jira/browse/OAK-5876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.4.15, 1.7.0, 1.8, 1.0.38, 1.6.2, 1.2.25
>
> Attachments: OAK-5876.diff
>
>
> {{SplitDocumentCleanup}} currently does not close the {{Iterable}} holding 
> {{splitDocGarbage}}. It should implement {{Closeable}} and forward calls to 
> {{close()}} to the {{Iterable}}, if that happens to be {{Closeable}}.
> Likewise, {{VersionGCSupport}} should call {{close()}} on 
> {{SplitDocumentCleanup}} when done.





[jira] [Commented] (OAK-5499) IndexUpdate can do multiple traversals of a content tree during initial index when there are sub-root indices

2017-03-07 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899615#comment-15899615
 ] 

Alex Parvulescu commented on OAK-5499:
--

[~chetanm], [~catholicon], [~tmueller] gentle ping, the patch needs some more 
eyes for review!

> IndexUpdate can do multiple traversals of a content tree during initial index 
> when there are sub-root indices
> 
>
> Key: OAK-5499
> URL: https://issues.apache.org/jira/browse/OAK-5499
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
> Fix For: 1.8
>
> Attachments: OAK-5499.patch, OAK-5499-v2-demo.patch, 
> OAK-5499-v2-fix.patch
>
>
> In case we have index definitions such as:
> {noformat}
> /oak:index/foo1Index
> /content
>    /oak:index/foo2Index
> {noformat}
> then the initial indexing process \[0] would traverse the tree under 
> {{/content}} twice: once while indexing the top-level indices, and again when 
> it starts to index the newly discovered {{foo2Index}} while traversing 
> {{/content/oak:index}}.
> Instead, while the first diff processes {{/content}} and discovers a node 
> named {{oak:index}}, it can descend into that subtree, peek at the index 
> definitions under it, and register them as required. The diff can then proceed 
> under {{/content}} while the new indices also receive diffs (avoiding 
> another traversal)
> \[0] first time indexing or in case {{/:async}} gets deleted or checkpoint 
> for async index couldn't be retrieved





[jira] [Comment Edited] (OAK-3070) Use a lower bound in VersionGC query to avoid checking unmodified once deleted docs

2017-03-07 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899605#comment-15899605
 ] 

Marcel Reutegger edited comment on OAK-3070 at 3/7/17 3:37 PM:
---

I think the margin was introduced because of how 
{{VersionGCSupport.getPossiblyDeletedDocs()}} compares the two timestamps. With 
the patch, the garbage collector may miss some documents. Consider the 
following GC runs with the patch:

Initially {{getPossiblyDeletedDocs()}} will return
{noformat}
getModifiedInSecs(0) > getModifiedInSecs(doc._modified) <= getModifiedInSecs(t1)
{noformat}
In the subsequent run it will return 
{noformat}
getModifiedInSecs(t1) > getModifiedInSecs(doc._modified) <= 
getModifiedInSecs(t2)
{noformat}
There may be documents modified after t1 that still fall into the same 5 second 
resolution bucket as t1. The second run will not match them.

I'll update the issue with a new patch...
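The bucket effect described above can be shown numerically. This is a toy model of the 5-second resolution, not necessarily Oak's exact implementation:

```java
// Toy illustration of the 5-second resolution: two timestamps a few seconds
// apart can round down to the same bucket, so a query with a strict lower
// bound at getModifiedInSecs(t1) misses documents modified shortly after t1.
public class ModifiedResolution {
    static final long RESOLUTION_SECS = 5;

    // Round a millisecond timestamp down to its 5-second bucket, mirroring
    // the getModifiedInSecs() behaviour discussed in the comment.
    public static long getModifiedInSecs(long timestampMillis) {
        long seconds = timestampMillis / 1000;
        return seconds - (seconds % RESOLUTION_SECS);
    }
}
```

For example, if the first GC run ends at t1 = 17s, a document modified at 19s falls into the same bucket (15), so the second run's lower bound excludes it.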


was (Author: mreutegg):
I think the margin was introduced because of how 
{{VersionGCSupport.getPossiblyDeletedDocs()}} compares the two timestamps. With 
the patch, the garbage collector may miss some documents. Consider the 
following GC runs with the patch:

Initially {{getPossiblyDeletedDocs()}} will return {{0 > getModifiedInSecs(doc) 
<= t1}}. In the subsequent run it will return {{t1 > getModifiedInSecs(doc) <= 
t2}}. There may be documents modified after t1 that still fall into the same 5 
second resolution bucket as t1. The second run will not match them.

I'll update the issue with a new patch...

> Use a lower bound in VersionGC query to avoid checking unmodified once 
> deleted docs
> ---
>
> Key: OAK-3070
> URL: https://issues.apache.org/jira/browse/OAK-3070
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk, rdbmk
>Reporter: Chetan Mehrotra
>Assignee: Vikas Saurabh
>  Labels: performance
> Attachments: OAK-3070.patch, OAK-3070-updated.patch, 
> OAK-3070-updated.patch
>
>
> As part of OAK-3062 [~mreutegg] suggested
> {quote}
> As a further optimization we could also limit the lower bound of the _modified
> range. The revision GC does not need to check documents with a _deletedOnce
> again if they were not modified after the last successful GC run. If they
> didn't change and were considered existing during the last run, then they
> must still exist in the current GC run. To make this work, we'd need to
> track the last successful revision GC run. 
> {quote}
> Lowest last validated _modified can be possibly saved in settings collection 
> and reused for next run





[jira] [Commented] (OAK-3070) Use a lower bound in VersionGC query to avoid checking unmodified once deleted docs

2017-03-07 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899605#comment-15899605
 ] 

Marcel Reutegger commented on OAK-3070:
---

I think the margin was introduced because of how 
{{VersionGCSupport.getPossiblyDeletedDocs()}} compares the two timestamps. With 
the patch, the garbage collector may miss some documents. Consider the 
following GC runs with the patch:

Initially {{getPossiblyDeletedDocs()}} will return {{0 > getModifiedInSecs(doc) 
<= t1}}. In the subsequent run it will return {{t1 > getModifiedInSecs(doc) <= 
t2}}. There may be documents modified after t1 that still fall into the same 5 
second resolution bucket as t1. The second run will not match them.

I'll update the issue with a new patch...

> Use a lower bound in VersionGC query to avoid checking unmodified once 
> deleted docs
> ---
>
> Key: OAK-3070
> URL: https://issues.apache.org/jira/browse/OAK-3070
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk, rdbmk
>Reporter: Chetan Mehrotra
>Assignee: Vikas Saurabh
>  Labels: performance
> Attachments: OAK-3070.patch, OAK-3070-updated.patch, 
> OAK-3070-updated.patch
>
>
> As part of OAK-3062 [~mreutegg] suggested
> {quote}
> As a further optimization we could also limit the lower bound of the _modified
> range. The revision GC does not need to check documents with a _deletedOnce
> again if they were not modified after the last successful GC run. If they
> didn't change and were considered existing during the last run, then they
> must still exist in the current GC run. To make this work, we'd need to
> track the last successful revision GC run. 
> {quote}
> Lowest last validated _modified can be possibly saved in settings collection 
> and reused for next run





[jira] [Created] (OAK-5905) Log the path of the binary property that is not available

2017-03-07 Thread Arek Kita (JIRA)
Arek Kita created OAK-5905:
--

 Summary: Log the path of the binary property that is not available
 Key: OAK-5905
 URL: https://issues.apache.org/jira/browse/OAK-5905
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: upgrade
Reporter: Arek Kita
Assignee: Tomek Rękawek
Priority: Minor


During a sidegrade, {{oak-upgrade}} throws an exception when a blob cannot be 
found. However, the message is not meaningful enough, and users cannot act on 
the situations below:

{code:title=No info about the path for fatal state exception}
IllegalStateException: Attempt to read external blob with blobId [X] without 
specyfing blobstore
{code}

or

{code:title=No info about JCR path when the fatal repository exception occurred}
RepositoryException: Failed to copy content
{code}

or

{code:title=The warning without the path when \--ignore-missing-binaries is set}
WARN org.apache.jackrabbit.oak.upgrade.cli.blob.SafeDataStoreBlobStore - No 
blob found for id [XXX]
{code}
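The improvement could look roughly like the sketch below: carry the JCR path of the binary property into the failure message. The exception name and message format are assumptions for illustration, not the actual oak-upgrade classes.

```java
// Illustrative sketch (not the actual oak-upgrade code): include the JCR
// path of the binary property in the failure, so users can locate and fix
// or exclude the affected content.
public class MissingBlobException extends RuntimeException {
    public MissingBlobException(String blobId, String jcrPath) {
        super("No blob found for id [" + blobId + "] while copying binary "
                + "property at " + jcrPath);
    }
}
```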





[jira] [Updated] (OAK-5904) Property index: log when reindexing is done

2017-03-07 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-5904:

Priority: Critical  (was: Major)

> Property index: log when reindexing is done
> ---
>
> Key: OAK-5904
> URL: https://issues.apache.org/jira/browse/OAK-5904
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Critical
> Fix For: 1.8
>
>
> Currently, when reindexing a synchronous index, there is no log message that 
> shows the progress and the end of the reindexing phase.





[jira] [Updated] (OAK-5904) Property index: log when reindexing is done

2017-03-07 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-5904:

Component/s: query

> Property index: log when reindexing is done
> ---
>
> Key: OAK-5904
> URL: https://issues.apache.org/jira/browse/OAK-5904
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.8
>
>
> Currently, when reindexing a synchronous index, there is no log message that 
> shows the progress and the end of the reindexing phase.





[jira] [Created] (OAK-5904) Property index: log when reindexing is done

2017-03-07 Thread Thomas Mueller (JIRA)
Thomas Mueller created OAK-5904:
---

 Summary: Property index: log when reindexing is done
 Key: OAK-5904
 URL: https://issues.apache.org/jira/browse/OAK-5904
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Thomas Mueller
Assignee: Thomas Mueller
 Fix For: 1.8


Currently, when reindexing a synchronous index, there is no log message that 
shows the progress and the end of the reindexing phase.





[jira] [Commented] (OAK-3070) Use a lower bound in VersionGC query to avoid checking unmodified once deleted docs

2017-03-07 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899492#comment-15899492
 ] 

Marcel Reutegger commented on OAK-3070:
---

I reviewed the most recent patch and I think the VersionGarbageCollector should 
only update the lower bound when not canceled. Otherwise it may happen after a 
canceled GC that some garbage is not collected on a subsequent run. I'm also 
not that happy with OLDEST_TIMESTAMP_MARGIN. I don't understand why it is 
necessary. Is it just because of tests, or does it have a real impact?

> Use a lower bound in VersionGC query to avoid checking unmodified once 
> deleted docs
> ---
>
> Key: OAK-3070
> URL: https://issues.apache.org/jira/browse/OAK-3070
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk, rdbmk
>Reporter: Chetan Mehrotra
>Assignee: Vikas Saurabh
>  Labels: performance
> Attachments: OAK-3070.patch, OAK-3070-updated.patch, 
> OAK-3070-updated.patch
>
>
> As part of OAK-3062 [~mreutegg] suggested
> {quote}
> As a further optimization we could also limit the lower bound of the _modified
> range. The revision GC does not need to check documents with a _deletedOnce
> again if they were not modified after the last successful GC run. If they
> didn't change and were considered existing during the last run, then they
> must still exist in the current GC run. To make this work, we'd need to
> track the last successful revision GC run. 
> {quote}
> Lowest last validated _modified can be possibly saved in settings collection 
> and reused for next run





[jira] [Comment Edited] (OAK-5876) SplitDocumentCleanup should implement Closeable

2017-03-07 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899415#comment-15899415
 ] 

Julian Reschke edited comment on OAK-5876 at 3/7/17 1:59 PM:
-

trunk: [r1785283|http://svn.apache.org/r1785283]
1.6: [r1785837|http://svn.apache.org/r1785837]
1.4: [r1785840|http://svn.apache.org/r1785840]



was (Author: reschke):
trunk: [r1785283|http://svn.apache.org/r1785283]
1.6: [r1785837|http://svn.apache.org/r1785837]


> SplitDocumentCleanup should implement Closeable
> ---
>
> Key: OAK-5876
> URL: https://issues.apache.org/jira/browse/OAK-5876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.4.15, 1.7.0, 1.8, 1.6.2
>
> Attachments: OAK-5876.diff
>
>
> {{SplitDocumentCleanup}} currently does not close the {{Iterable}} holding 
> {{splitDocGarbage}}. It should implement {{Closeable}} and forward calls to 
> {{close()}} to the {{Iterable}}, if that happens to be {{Closeable}}.
> Likewise, {{VersionGCSupport}} should call {{close()}} on 
> {{SplitDocumentCleanup}} when done.





[jira] [Updated] (OAK-5876) SplitDocumentCleanup should implement Closeable

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5876:

Labels: candidate_oak_1_0 candidate_oak_1_2  (was: candidate_oak_1_0 
candidate_oak_1_2 candidate_oak_1_4)

> SplitDocumentCleanup should implement Closeable
> ---
>
> Key: OAK-5876
> URL: https://issues.apache.org/jira/browse/OAK-5876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.4.15, 1.7.0, 1.8, 1.6.2
>
> Attachments: OAK-5876.diff
>
>
> {{SplitDocumentCleanup}} currently does not close the {{Iterable}} holding 
> {{splitDocGarbage}}. It should implement {{Closeable}} and forward calls to 
> {{close()}} to the {{Iterable}}, if that happens to be {{Closeable}}.
> Likewise, {{VersionGCSupport}} should call {{close()}} on 
> {{SplitDocumentCleanup}} when done.





[jira] [Updated] (OAK-5876) SplitDocumentCleanup should implement Closeable

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5876:

Fix Version/s: 1.4.15

> SplitDocumentCleanup should implement Closeable
> ---
>
> Key: OAK-5876
> URL: https://issues.apache.org/jira/browse/OAK-5876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.4.15, 1.7.0, 1.8, 1.6.2
>
> Attachments: OAK-5876.diff
>
>
> {{SplitDocumentCleanup}} currently does not close the {{Iterable}} holding 
> {{splitDocGarbage}}. It should implement {{Closeable}} and forward calls to 
> {{close()}} to the {{Iterable}}, if that happens to be {{Closeable}}.
> Likewise, {{VersionGCSupport}} should call {{close()}} on 
> {{SplitDocumentCleanup}} when done.





[jira] [Commented] (OAK-5878) SplitDocumentCleanup iterates twice over splitDocGarbage

2017-03-07 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899463#comment-15899463
 ] 

Julian Reschke commented on OAK-5878:
-

trunk: [r1785838|http://svn.apache.org/r1785838]


> SplitDocumentCleanup iterates twice over splitDocGarbage
> 
>
> Key: OAK-5878
> URL: https://issues.apache.org/jira/browse/OAK-5878
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> candidate_oak_1_6
> Fix For: 1.7.0, 1.8
>
> Attachments: OAK-5878-2.diff, OAK-5878.diff
>
>
> {{SplitDocumentCleanup}} currently iterates twice over {{splitDocGarbage}}.
> NOTE: not the case for MongoDB, as {{MongoVersionGCSupport}} overrides 
> {{deleteSplitDocuments()}}.
> {{deleteSplitDocuments()}} currently iterates over {{splitDocGarbage}} to 
> obtain the IDs of the documents to be deleted. Instead, we could just collect 
> the IDs inside {{disconnect()}}, the memory requirements would be the same.
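The single-pass variant suggested above can be sketched like this (a simplified illustration, not the actual Oak code; method and ID names are hypothetical): the IDs are collected while disconnecting, so the delete step no longer re-iterates the garbage.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SinglePassCleanup {

    /**
     * Hypothetical single-pass cleanup: collect document IDs during the
     * disconnect iteration, so deletion does not iterate splitDocGarbage
     * a second time. Memory use is the same as collecting IDs separately.
     */
    static List<String> disconnectAndCollect(Iterable<String> splitDocGarbage) {
        List<String> ids = new ArrayList<>();
        for (String id : splitDocGarbage) {
            // ... disconnect(doc) would happen here ...
            ids.add(id);  // remember the ID for the single delete batch
        }
        return ids;
    }

    public static void main(String[] args) {
        List<String> ids = disconnectAndCollect(Arrays.asList("1:/a", "1:/b"));
        System.out.println("delete " + ids.size() + " docs in one batch");
    }
}
```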



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (OAK-5902) Cold standby should allow syncing of blobs bigger than 2.2 GB

2017-03-07 Thread Andrei Dulceanu (JIRA)
Andrei Dulceanu created OAK-5902:


 Summary: Cold standby should allow syncing of blobs bigger than 
2.2 GB
 Key: OAK-5902
 URL: https://issues.apache.org/jira/browse/OAK-5902
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segment-tar
Affects Versions: 1.6.1
Reporter: Andrei Dulceanu
Assignee: Andrei Dulceanu
Priority: Minor
 Fix For: 1.8


Currently there is a limit on the maximum binary size (in bytes) that can be 
synced between primary and standby instances. The limit is 
{{Integer.MAX_VALUE}} (2,147,483,647) bytes; no binaries bigger than this 
can be synced between the instances.

Per the comment at [1], the current protocol needs to be changed to allow 
sending binaries in chunks, to overcome this limitation.

[1] 
https://github.com/apache/jackrabbit-oak/blob/1.6/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/standby/client/StandbyClient.java#L125
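Chunking a blob so that no single message length has to fit in an {{int}} can be sketched as below. This is only an illustration of the idea, not the standby protocol itself; the chunk size and method names are hypothetical (a real protocol would use a much larger chunk, e.g. around 1 MB).

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class ChunkedBlobSync {

    // Hypothetical tiny chunk size for demonstration purposes only.
    static final int CHUNK_SIZE = 4;

    /**
     * Split a blob stream into chunks so that no single transfer needs a
     * length field bigger than Integer.MAX_VALUE (the current 2.2 GB limit).
     */
    static List<byte[]> toChunks(InputStream in) throws IOException {
        List<byte[]> chunks = new ArrayList<>();
        byte[] buf = new byte[CHUNK_SIZE];
        int n;
        while ((n = in.read(buf)) != -1) {
            byte[] chunk = new byte[n];
            System.arraycopy(buf, 0, chunk, 0, n);
            chunks.add(chunk);  // in a real protocol: send this chunk now
        }
        return chunks;
    }

    public static void main(String[] args) throws IOException {
        byte[] blob = "0123456789".getBytes();
        List<byte[]> chunks = toChunks(new ByteArrayInputStream(blob));
        System.out.println(chunks.size() + " chunks");  // 10 bytes in 4-byte chunks -> 3
    }
}
```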



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-4529) DocumentNodeStore does not have a repository software version range check.

2017-03-07 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-4529:
--
Attachment: OAK-4529.patch

Attached a work-in-progress patch. It introduces a new document in the settings 
collection that contains a format version. Reading the current and older 
versions is possible, but the DocumentNodeStore will fail to start when it 
detects a newer format than itself. A DocumentNodeStore can only write to a 
store with the same format version. An upgrade to a newer version must be 
'unlocked' first with an oak-run command. This only works when there are no 
active cluster nodes and the format to set is newer than the current one.

I would like to backport this to the branches as well, which means they would 
eventually also get protection against unintended upgrades or mixed (minor) 
version deployments that can corrupt the repository.
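The startup check described in the comment can be sketched roughly as follows. This is a simplified illustration under the assumptions stated above (version stored in a settings document; newer-than-self refuses to start; upgrades unlocked explicitly); the class and method names are hypothetical, not the attached patch.

```java
public class FormatVersionCheck {

    /**
     * Hypothetical version check on startup. In the actual proposal the
     * stored version lives in a document in the settings collection.
     */
    static void checkVersion(int storedVersion, int softwareVersion) {
        if (storedVersion > softwareVersion) {
            // Store was written by a newer version: refuse to start.
            throw new IllegalStateException("Store format " + storedVersion
                    + " is newer than supported format " + softwareVersion);
        }
        // Equal version: full read/write access.
        // Older version: readable, but an upgrade must be 'unlocked'
        // explicitly (e.g. via an oak-run command) before writing the
        // newer format.
    }

    public static void main(String[] args) {
        checkVersion(2, 2);  // same format: OK
        checkVersion(1, 2);  // older store: readable
        try {
            checkVersion(3, 2);  // newer store: must fail
            throw new AssertionError("expected startup failure");
        } catch (IllegalStateException expected) {
            System.out.println("refused: " + expected.getMessage());
        }
    }
}
```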

> DocumentNodeStore does not have a repository software version range check.
> --
>
> Key: OAK-4529
> URL: https://issues.apache.org/jira/browse/OAK-4529
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Affects Versions: 1.0.31, 1.2.14, 1.4.4, 1.5.4
>Reporter: Ian Boston
>Assignee: Marcel Reutegger
> Fix For: 1.8
>
> Attachments: OAK-4529.patch
>
>
> DocumentNodeStore does not currently check which software version the 
> persisted repository it is connecting to was created with or last updated. 
> There is a risk that if the versions are incompatible the repository may be 
> damaged.
> Somewhere in the repository, the version of the software that created it, and 
> the versions that written to it should be stored. In the case of TarMK this 
> information could be on local disk near the TarMK files. In the case of a 
> DocumentMK implementation, the information should be stored in the "database" 
> itself.
> When a DocumentNodeStore instance connects, it should check the versions 
> stored in the repository, verify they are within a compatible range, and 
> refuse to start if not.
> When a DocumentNodeStore writes to a repository, it should add its version to 
> the list of versions that have updated the repository.
> This check behaviour should be active in full Oak or any utilities (eg 
> oak-run).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (OAK-5878) SplitDocumentCleanup iterates twice over splitDocGarbage

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-5878.
-
   Resolution: Fixed
Fix Version/s: 1.8
   1.7.0

> SplitDocumentCleanup iterates twice over splitDocGarbage
> 
>
> Key: OAK-5878
> URL: https://issues.apache.org/jira/browse/OAK-5878
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> candidate_oak_1_6
> Fix For: 1.7.0, 1.8
>
> Attachments: OAK-5878-2.diff, OAK-5878.diff
>
>
> {{SplitDocumentCleanup}} currently iterates twice over {{splitDocGarbage}}.
> NOTE: not the case for MongoDB, as {{MongoVersionGCSupport}} overrides 
> {{deleteSplitDocuments()}}.
> {{deleteSplitDocuments()}} currently iterates over {{splitDocGarbage}} to 
> obtain the IDs of the documents to be deleted. Instead, we could just collect 
> the IDs inside {{disconnect()}}, the memory requirements would be the same.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (OAK-4780) VersionGarbageCollector should be able to run incrementally

2017-03-07 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899438#comment-15899438
 ] 

Marcel Reutegger commented on OAK-4780:
---

bq. Shall it repeat itself when it has not caught up to "now"

I'd say, yes. If needed, the GC can be canceled already.

bq. What is the best value for "precisionMs", the minimal time interval for 
queries?

I don't think a one-minute resolution is needed. Maybe it's easier if we define 
how many iterations are done to find the 'oldest' _deletedOnce? Then again, a 
time is more specific than a rather abstract number of iterations.

Other comments on your patch:

- We should first resolve OAK-3070 and remove that part from your patch. 
- VersionGCSupport.getOldestDeletedOnceTimestamp(long) uses 
System.currentTimeMillis(). Might be useful to use the Clock abstraction 
instead, which allows usage of a virtual clock for tests.
- Similar for VersionGarbageCollector.gc(long, TimeUnit): 
Revision.getCurrentTimestamp() does give you the current time of a Clock, but I 
think it would be better to use the clock from the DocumentNodeStore passed in 
the constructor.
- {{maxIterations}} and {{maxDuration}}: are those really necessary? I think it 
would be easier to use if those are implementation details and all you need to 
do is trigger gc() with a maxRevisionAge. The GC would stop iterations when it 
reaches currentTime - maxRevisionAge or when it is canceled.
- {{batchDelay}}: I like the feature, but would prefer a more adaptive 
approach. That is, have a value that defines a delay multiplier which is 
applied to the time some operation took. Let's say it took 500 ms to 
remove a batch of documents and the delay multiplier is 0.5; then the VGC would 
wait 250 ms until it proceeds to the next batch.
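The adaptive delay idea in the last point is simple enough to sketch directly (a hypothetical helper, not part of any patch here):

```java
public class AdaptiveDelay {

    /**
     * Delay proportional to how long the previous batch took:
     * a 500 ms batch with multiplier 0.5 yields a 250 ms wait.
     */
    static long delayMillis(long batchMillis, double multiplier) {
        return (long) (batchMillis * multiplier);
    }

    public static void main(String[] args) {
        long batchMillis = 500;  // hypothetical measured batch duration
        long wait = delayMillis(batchMillis, 0.5);
        System.out.println("waiting " + wait + " ms before next batch");
        // In the collector this would be followed by Thread.sleep(wait).
    }
}
```

The advantage over a fixed {{batchDelay}} is that the pause automatically scales with the load the deletion puts on the store.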


> VersionGarbageCollector should be able to run incrementally
> ---
>
> Key: OAK-4780
> URL: https://issues.apache.org/jira/browse/OAK-4780
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core, documentmk
>Reporter: Julian Reschke
> Attachments: leafnodes.diff, leafnodes-v2.diff, leafnodes-v3.diff
>
>
> Right now, the documentmk's version garbage collection runs in several phases.
> It first collects the paths of candidate nodes, and only once this has been 
> successfully finished, starts actually deleting nodes.
> This can be a problem when the regularly scheduled garbage collection is 
> interrupted during the path collection phase, maybe due to other maintenance 
> tasks. On the next run, the number of paths to be collected will be even 
> bigger, thus making it even more likely to fail.
> We should think about a change in the logic that would allow the GC to run in 
> chunks; maybe by partitioning the path space by top level directory.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (OAK-5876) SplitDocumentCleanup should implement Closeable

2017-03-07 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899415#comment-15899415
 ] 

Julian Reschke commented on OAK-5876:
-

trunk: [r1785283|http://svn.apache.org/r1785283]
1.6: [r1785837|http://svn.apache.org/r1785837]


> SplitDocumentCleanup should implement Closeable
> ---
>
> Key: OAK-5876
> URL: https://issues.apache.org/jira/browse/OAK-5876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.7.0, 1.8, 1.6.2
>
> Attachments: OAK-5876.diff
>
>
> {{SplitDocumentCleanup}} currently does not close the {{Iterable}} holding 
> {{splitDocGarbage}}. It should implement {{Closeable}} and forward calls to 
> {{close()}} to the {{Iterable}}, if that happens to be {{Closeable}}.
> Likewise, {{VersionGCSupport}} should call {{close()}} on 
> {{SplitDocumentCleanup}} when done.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (OAK-5899) PropertyDefinitions should allow for some tweakability to declare usefulness

2017-03-07 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899399#comment-15899399
 ] 

Thomas Mueller commented on OAK-5899:
-

Lucene supports field info, and the Lucene index JMX bean allows reading that 
info using getFieldInfo; see OAK-3219. This is a start. I added that to the JMX 
bean so that we can find out how fast it is. It could be the basis for an 
"analyze" tool for Oak.

> PropertyDefinitions should allow for some tweakability to declare usefulness
> 
>
> Key: OAK-5899
> URL: https://issues.apache.org/jira/browse/OAK-5899
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Priority: Minor
> Fix For: 1.8
>
>
> At times, we have property definitions which are added to support dense 
> results right out of the index (e.g. {{contains(\*, 'foo') AND 
> \[bar]='baz'}}).
> In such cases, the added property definition "might" not be the best one to 
> answer queries which only have the property restriction (e.g. only 
> {{\[bar]='baz'}}).
> There should be a way for a property definition to declare this. Maybe there 
> is a spectrum of cases too - i.e. not just a boolean usable-or-not, but some 
> kind of scale of how usable it is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-5876) SplitDocumentCleanup should implement Closeable

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5876:

Labels: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4  (was: 
candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4 candidate_oak_1_6)

> SplitDocumentCleanup should implement Closeable
> ---
>
> Key: OAK-5876
> URL: https://issues.apache.org/jira/browse/OAK-5876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.7.0, 1.8, 1.6.2
>
> Attachments: OAK-5876.diff
>
>
> {{SplitDocumentCleanup}} currently does not close the {{Iterable}} holding 
> {{splitDocGarbage}}. It should implement {{Closeable}} and forward calls to 
> {{close()}} to the {{Iterable}}, if that happens to be {{Closeable}}.
> Likewise, {{VersionGCSupport}} should call {{close()}} on 
> {{SplitDocumentCleanup}} when done.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Issue Comment Deleted] (OAK-5876) SplitDocumentCleanup should implement Closeable

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5876:

Comment: was deleted

(was: trunk: [r1785283|http://svn.apache.org/r1785283]
)

> SplitDocumentCleanup should implement Closeable
> ---
>
> Key: OAK-5876
> URL: https://issues.apache.org/jira/browse/OAK-5876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.7.0, 1.8, 1.6.2
>
> Attachments: OAK-5876.diff
>
>
> {{SplitDocumentCleanup}} currently does not close the {{Iterable}} holding 
> {{splitDocGarbage}}. It should implement {{Closeable}} and forward calls to 
> {{close()}} to the {{Iterable}}, if that happens to be {{Closeable}}.
> Likewise, {{VersionGCSupport}} should call {{close()}} on 
> {{SplitDocumentCleanup}} when done.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (OAK-5899) PropertyDefinitions should allow for some tweakability to declare usefulness

2017-03-07 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899392#comment-15899392
 ] 

Thomas Mueller commented on OAK-5899:
-

> scale of how-usable is it

Yes. Many relational databases make [cost 
estimation|https://en.wikipedia.org/wiki/Query_optimization#Cost_estimation] 
using histograms. Even [SQLite supports 
that|https://www.sqlite.org/compile.html#enable_stat4]. The H2 database uses 
"selectivity" on a [per-column 
basis|http://h2database.com/html/functions.html#selectivity].

I think Lucene doesn't provide that, as it's mainly used for fulltext search, 
and not so much for relational queries. But for our case, just having an 
estimate on the number of entries for a certain property value (cardinality) 
would be very useful. A configuration option would help a lot. An "analyze" 
tool for Oak could update those values at runtime, similar to what the SQL 
command "analyze" does for relational databases 
([Oracle|https://docs.oracle.com/cd/B12037_01/server.101/b10759/statements_4005.htm],
 [PostgreSQL|https://www.postgresql.org/docs/current/static/sql-analyze.html], 
[MySQL|https://dev.mysql.com/doc/refman/5.7/en/analyze-table.html], 
[SQLite|https://www.sqlite.org/lang_analyze.html], 
[H2|http://h2database.com/html/grammar.html#analyze]).
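The H2-style per-column selectivity mentioned above reduces to a very small calculation. The sketch below is only an illustration of the idea (names and the formula's placement are hypothetical, not an Oak or H2 API): selectivity is the percentage of distinct values, and the estimated result size of an equality condition is the total entry count divided by the number of distinct values.

```java
public class SelectivityEstimate {

    /**
     * H2-style selectivity: the percentage of distinct values (0..100).
     * Estimated rows for an equality condition = total / distinct.
     */
    static double estimatedRows(long totalEntries, int selectivityPercent) {
        long distinct = Math.max(1, totalEntries * selectivityPercent / 100);
        return (double) totalEntries / distinct;
    }

    public static void main(String[] args) {
        // 1,000,000 entries with 50% distinct values -> ~2 rows per value.
        System.out.println(estimatedRows(1_000_000, 50));
        // A unique column (100% distinct) -> ~1 row per value.
        System.out.println(estimatedRows(1_000_000, 100));
    }
}
```

A cost-estimating query planner would prefer the index whose {{estimatedRows}} for the given restriction is smallest.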

> PropertyDefinitions should allow for some tweakability to declare usefulness
> 
>
> Key: OAK-5899
> URL: https://issues.apache.org/jira/browse/OAK-5899
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Priority: Minor
> Fix For: 1.8
>
>
> At times, we have property definitions which are added to support dense 
> results right out of the index (e.g. {{contains(\*, 'foo') AND 
> \[bar]='baz'}}).
> In such cases, the added property definition "might" not be the best one to 
> answer queries which only have the property restriction (e.g. only 
> {{\[bar]='baz'}}).
> There should be a way for a property definition to declare this. Maybe there 
> is a spectrum of cases too - i.e. not just a boolean usable-or-not, but some 
> kind of scale of how usable it is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-5876) SplitDocumentCleanup should implement Closeable

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5876:

Fix Version/s: 1.6.2

> SplitDocumentCleanup should implement Closeable
> ---
>
> Key: OAK-5876
> URL: https://issues.apache.org/jira/browse/OAK-5876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.7.0, 1.8, 1.6.2
>
> Attachments: OAK-5876.diff
>
>
> {{SplitDocumentCleanup}} currently does not close the {{Iterable}} holding 
> {{splitDocGarbage}}. It should implement {{Closeable}} and forward calls to 
> {{close()}} to the {{Iterable}}, if that happens to be {{Closeable}}.
> Likewise, {{VersionGCSupport}} should call {{close()}} on 
> {{SplitDocumentCleanup}} when done.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (OAK-5900) Add Nonnull Annotation to TokenInfo.matches(TokenCredentials)

2017-03-07 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-5900.
-
   Resolution: Fixed
Fix Version/s: 1.8
   1.7.0

Committed revision 1785836.


> Add Nonnull Annotation to TokenInfo.matches(TokenCredentials)
> -
>
> Key: OAK-5900
> URL: https://issues.apache.org/jira/browse/OAK-5900
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: angela
>Assignee: angela
>Priority: Minor
> Fix For: 1.7.0, 1.8
>
>
> the API definition of {{TokenInfo}} doesn't explicitly annotate the fact that 
> {{matches(TokenCredentials)}} takes a non-null credentials object.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (OAK-5858) Lucene index may return the wrong result if path is excluded

2017-03-07 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899375#comment-15899375
 ] 

Vikas Saurabh commented on OAK-5858:


Some points/comments following an offline conversation with [~tmueller]:
* The middle path mentioned above can still give incorrect results
* The common use-case to have excluded paths is to avoid some paths that are 
known to be bad - too much traffic without any usable results
** That case usually shouldn't guide the code that queries - it seems to me 
that this should remain a configuration option like the indexes themselves, OR
** May be we need to expose a utility syntax for code to declare the intent 
"I'm ok if you exclude some paths"
* Maybe we should have separate excludedPaths flags for asserting 
excluded-and-exposed (code needs to be aware) and covert-exclude (just doesn't 
index, behaves as if it can answer everything)

> Lucene index may return the wrong result if path is excluded
> 
>
> Key: OAK-5858
> URL: https://issues.apache.org/jira/browse/OAK-5858
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Thomas Mueller
> Fix For: 1.8
>
>
> If a query uses a Lucene index that has "excludedPaths", the query result may 
> be wrong (not contain all matching nodes). This is case even if there is a 
> property index available for the queried property. Example:
> {noformat}
> Indexes:
> /oak:index/resourceType/type = "property"
> /oak:index/lucene/type = "lucene"
> /oak:index/lucene/excludedPaths = ["/etc"]
> /oak:index/lucene/indexRules/nt:base/properties/resourceType
> Query:
> /jcr:root/etc//*[jcr:like(@resourceType, "x%y")]
> Index cost:
> cost for /oak:index/resourceType is 1602.0
> cost for /oak:index/lucene is 1001.0
> Result:
> (empty)
> Expected result:
> /etc/a
> /etc/b
> {noformat}
> Here, the lucene index is picked, even though the query explicitly queries 
> for /etc, and the lucene index has this path excluded.
> I think the lucene index should not be picked in case the index does not 
> match the query path.
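The path check the reporter proposes can be sketched like this (a hypothetical standalone helper, not the Oak query engine code): an index with {{excludedPaths}} should not be selected when the query path falls under an excluded path.

```java
public class ExcludedPathCheck {

    /**
     * Returns false when queryPath equals, or is a descendant of, any
     * excluded path - i.e. the index cannot safely answer the query.
     */
    static boolean canUseIndex(String queryPath, String[] excludedPaths) {
        for (String excluded : excludedPaths) {
            if (queryPath.equals(excluded)
                    || queryPath.startsWith(excluded + "/")) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String[] excluded = { "/etc" };
        System.out.println(canUseIndex("/etc/a", excluded));   // false
        System.out.println(canUseIndex("/content", excluded)); // true
    }
}
```

With such a check in the cost calculation, the lucene index in the example would return an infinite cost for the {{/etc}} query and the property index would be picked instead.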



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-5901) Minor improvements to TokenProviderImpl and TokenValidator

2017-03-07 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-5901:

Summary: Minor improvements to TokenProviderImpl and TokenValidator  (was: 
Minor improvements to TokenProviderImpl)

> Minor improvements to TokenProviderImpl and TokenValidator
> --
>
> Key: OAK-5901
> URL: https://issues.apache.org/jira/browse/OAK-5901
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: angela
>Priority: Minor
>
> while writing tests I noticed a few improvements in {{TokenProviderImpl}} 
> that would lead to better readability and removal of redundant checks for null.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-5901) Minor improvements to TokenProviderImpl and TokenValidator

2017-03-07 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-5901:

Description: while writing tests I noticed a few improvements in 
{{TokenProviderImpl}} and {{TokenValidator}} that would lead to better 
readability and removal of redundant checks for null.  (was: while writing 
tests I noticed a few improvements in {{TokenProviderImpl}} that would lead to 
better readability and removal of redundant checks for null.)

> Minor improvements to TokenProviderImpl and TokenValidator
> --
>
> Key: OAK-5901
> URL: https://issues.apache.org/jira/browse/OAK-5901
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: angela
>Priority: Minor
>
> while writing tests I noticed a few improvements in {{TokenProviderImpl}} and 
> {{TokenValidator}} that would lead to better readability and removal of 
> redundant checks for null.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (OAK-5901) Minor improvements to TokenProviderImpl

2017-03-07 Thread angela (JIRA)
angela created OAK-5901:
---

 Summary: Minor improvements to TokenProviderImpl
 Key: OAK-5901
 URL: https://issues.apache.org/jira/browse/OAK-5901
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: angela
Priority: Minor


while writing tests I noticed a few improvements in {{TokenProviderImpl}} that 
would lead to better readability and removal of redundant checks for null.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (OAK-5900) Add Nonnull Annotation to TokenInfo.matches(TokenCredentials)

2017-03-07 Thread angela (JIRA)
angela created OAK-5900:
---

 Summary: Add Nonnull Annotation to 
TokenInfo.matches(TokenCredentials)
 Key: OAK-5900
 URL: https://issues.apache.org/jira/browse/OAK-5900
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: angela
Assignee: angela
Priority: Minor


the API definition of {{TokenInfo}} doesn't explicitly annotate the fact that 
{{matches(TokenCredentials)}} takes a non-null credentials object.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-5878) SplitDocumentCleanup iterates twice over splitDocGarbage

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5878:

Attachment: OAK-5878-2.diff

Updated patch that applies to trunk again.

> SplitDocumentCleanup iterates twice over splitDocGarbage
> 
>
> Key: OAK-5878
> URL: https://issues.apache.org/jira/browse/OAK-5878
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> candidate_oak_1_6
> Attachments: OAK-5878-2.diff, OAK-5878.diff
>
>
> {{SplitDocumentCleanup}} currently iterates twice over {{splitDocGarbage}}.
> NOTE: not the case for MongoDB, as {{MongoVersionGCSupport}} overrides 
> {{deleteSplitDocuments()}}.
> {{deleteSplitDocuments()}} currently iterates over {{splitDocGarbage}} to 
> obtain the IDs of the documents to be deleted. Instead, we could just collect 
> the IDs inside {{disconnect()}}, the memory requirements would be the same.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (OAK-5899) PropertyDefinitions should allow for some tweakability to declare usefulness

2017-03-07 Thread Vikas Saurabh (JIRA)
Vikas Saurabh created OAK-5899:
--

 Summary: PropertyDefinitions should allow for some tweakability to 
declare usefulness
 Key: OAK-5899
 URL: https://issues.apache.org/jira/browse/OAK-5899
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: lucene
Reporter: Vikas Saurabh
Priority: Minor
 Fix For: 1.8


At times, we have property definitions which are added to support dense 
results right out of the index (e.g. {{contains(\*, 'foo') AND \[bar]='baz'}}).

In such cases, the added property definition "might" not be the best one to 
answer queries which only have the property restriction (e.g. only 
{{\[bar]='baz'}}).

There should be a way for a property definition to declare this. Maybe there is 
a spectrum of cases too - i.e. not just a boolean usable-or-not, but some kind 
of scale of how usable it is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-5878) SplitDocumentCleanup iterates twice over splitDocGarbage

2017-03-07 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-5878:

Labels: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4 
candidate_oak_1_6  (was: )

> SplitDocumentCleanup iterates twice over splitDocGarbage
> 
>
> Key: OAK-5878
> URL: https://issues.apache.org/jira/browse/OAK-5878
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> candidate_oak_1_6
> Attachments: OAK-5878.diff
>
>
> {{SplitDocumentCleanup}} currently iterates twice over {{splitDocGarbage}}.
> NOTE: not the case for MongoDB, as {{MongoVersionGCSupport}} overrides 
> {{deleteSplitDocuments()}}.
> {{deleteSplitDocuments()}} currently iterates over {{splitDocGarbage}} to 
> obtain the IDs of the documents to be deleted. Instead, we could just collect 
> the IDs inside {{disconnect()}}, the memory requirements would be the same.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (OAK-5896) fix typo in Not condition handling

2017-03-07 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-5896:

Component/s: query

> fix typo in Not condition handling
> --
>
> Key: OAK-5896
> URL: https://issues.apache.org/jira/browse/OAK-5896
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, query
>Affects Versions: 1.6.1
>Reporter: Dave Brosius
>Assignee: Dave Brosius
>Priority: Trivial
> Fix For: 1.7.0
>
> Attachments: 5896.txt
>
>
> The code repeats the same condition twice; looks like a typo. Patch attached.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (OAK-5878) SplitDocumentCleanup iterates twice over splitDocGarbage

2017-03-07 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899257#comment-15899257
 ] 

Marcel Reutegger commented on OAK-5878:
---

+1 for the changes. Though, the patch didn't apply cleanly to trunk.

I also agree that this is a good candidate for a backport.

> SplitDocumentCleanup iterates twice over splitDocGarbage
> 
>
> Key: OAK-5878
> URL: https://issues.apache.org/jira/browse/OAK-5878
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Attachments: OAK-5878.diff
>
>
> {{SplitDocumentCleanup}} currently iterates twice over {{splitDocGarbage}}.
> NOTE: not the case for MongoDB, as {{MongoVersionGCSupport}} overrides 
> {{deleteSplitDocuments()}}.
> {{deleteSplitDocuments()}} currently iterates over {{splitDocGarbage}} to 
> obtain the IDs of the documents to be deleted. Instead, we could just collect 
> the IDs inside {{disconnect()}}, the memory requirements would be the same.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (OAK-5878) SplitDocumentCleanup iterates twice over splitDocGarbage

2017-03-07 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899243#comment-15899243
 ] 

Julian Reschke commented on OAK-5878:
-

So this change will only affect RDBDocumentStore, for which it is a 
quick win, reducing the number of table scans for VGC from 3 to 2.

Even if we do something fancier for OAK-5855, this change will be simple to 
backport all the way back to 1.0.

So I'd propose to apply this to trunk, and gradually port it back to earlier 
branches...

> SplitDocumentCleanup iterates twice over splitDocGarbage
> 
>
> Key: OAK-5878
> URL: https://issues.apache.org/jira/browse/OAK-5878
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Attachments: OAK-5878.diff
>
>
> {{SplitDocumentCleanup}} currently iterates twice over {{splitDocGarbage}}.
> NOTE: not the case for MongoDB, as {{MongoVersionGCSupport}} overrides 
> {{deleteSplitDocuments()}}.
> {{deleteSplitDocuments()}} currently iterates over {{splitDocGarbage}} to 
> obtain the IDs of the documents to be deleted. Instead, we could just collect 
> the IDs inside {{disconnect()}}, the memory requirements would be the same.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (OAK-5878) SplitDocumentCleanup iterates twice over splitDocGarbage

2017-03-07 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899229#comment-15899229
 ] 

Marcel Reutegger commented on OAK-5878:
---

bq. VersionGarbageCollectorIT.gcWithConcurrentModification() bypasses 
persistence-specific variants

It would be better if the test uses the VersionGarbageCollector provided by the 
DocumentNodeStore, but I guess then it would be difficult to reproduce the 
issue the test was written for.

> SplitDocumentCleanup iterates twice over splitDocGarbage
> 
>
> Key: OAK-5878
> URL: https://issues.apache.org/jira/browse/OAK-5878
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Attachments: OAK-5878.diff
>
>
> {{SplitDocumentCleanup}} currently iterates twice over {{splitDocGarbage}}.
> NOTE: not the case for MongoDB, as {{MongoVersionGCSupport}} overrides 
> {{deleteSplitDocuments()}}.
> {{deleteSplitDocuments()}} currently iterates over {{splitDocGarbage}} to 
> obtain the IDs of the documents to be deleted. Instead, we could just collect 
> the IDs inside {{disconnect()}}, the memory requirements would be the same.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (OAK-5878) SplitDocumentCleanup iterates twice over splitDocGarbage

2017-03-07 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899217#comment-15899217
 ] 

Marcel Reutegger commented on OAK-5878:
---

bq. Actually, once the split document has been disconnected, and could be 
deleted right away, no?

Yes, this will work. I think the reason the current implementation has those 
two distinct phases is the optimization for the MongoDB case. As you already 
mentioned, that implementation removes the split documents at the end with a 
single call.

> SplitDocumentCleanup iterates twice over splitDocGarbage
> 
>
> Key: OAK-5878
> URL: https://issues.apache.org/jira/browse/OAK-5878
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Attachments: OAK-5878.diff
>
>
> {{SplitDocumentCleanup}} currently iterates twice over {{splitDocGarbage}}.
> NOTE: this is not the case for MongoDB, as {{MongoVersionGCSupport}} overrides 
> {{deleteSplitDocuments()}}.
> {{deleteSplitDocuments()}} currently iterates over {{splitDocGarbage}} to 
> obtain the IDs of the documents to be deleted. Instead, we could just collect 
> the IDs inside {{disconnect()}}; the memory requirements would be the same.
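
The single-pass idea quoted above can be sketched as follows. This is a 
hypothetical, self-contained model (class and method names are illustrative, 
not Oak's actual code): {{disconnect()}} records each document ID as it runs, 
so the delete phase never iterates over {{splitDocGarbage}} again.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the proposed single-pass cleanup: disconnect()
// collects each document ID while iterating, so deleteSplitDocuments()
// can reuse the collected list instead of re-reading splitDocGarbage.
class SplitDocumentCleanupSketch {
    private final Iterable<String> splitDocGarbage;
    private final List<String> collectedIds = new ArrayList<>();

    SplitDocumentCleanupSketch(Iterable<String> splitDocGarbage) {
        this.splitDocGarbage = splitDocGarbage;
    }

    // Phase 1: disconnect each split document and remember its ID.
    void disconnect() {
        for (String id : splitDocGarbage) {
            // ... disconnect the split document from its main document ...
            collectedIds.add(id);
        }
    }

    // Phase 2: delete using the IDs collected in phase 1 -- no second
    // iteration over splitDocGarbage is needed.
    List<String> deleteSplitDocuments() {
        return new ArrayList<>(collectedIds);
    }
}
```

As the comment notes, the memory footprint matches the current approach, since 
both variants end up holding the full list of IDs at some point.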





[jira] [Created] (OAK-5898) Revision GC command line tool

2017-03-07 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-5898:
-

 Summary: Revision GC command line tool
 Key: OAK-5898
 URL: https://issues.apache.org/jira/browse/OAK-5898
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: documentmk, run
Reporter: Marcel Reutegger
 Fix For: 1.8


Revision GC can be triggered on a DocumentNodeStore on each node in a cluster. 
A common setup with Apache Sling on top of Oak has a scheduled task for the 
leader node in a cluster that triggers the Revision GC once a day.

For testing, maintenance and operational purposes it would be good to have an 
alternative that can be triggered from the command line.

A potential solution is a new run mode in oak-run. It would bootstrap a 
DocumentStore implementation (MongoDB or RDB) with a read-only 
DocumentNodeStore and run revision GC. The command should probably support 
options like:

- maxRevisionAge as defined in VersionGarbageCollector.gc()
- maxGCTime, which limits how long GC may run before it is canceled 
automatically.
- delay as proposed in OAK-4780
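
As a rough illustration of the proposed maxGCTime option, the GC driver could 
check a deadline between work batches and cancel itself once it passes. The 
following is a self-contained sketch with invented names, not oak-run's actual 
code:

```java
import java.util.List;

// Sketch of a GC driver that cancels itself once maxGCTime elapses.
class TimeLimitedGcSketch {
    // Processes batches until done or until the deadline passes.
    // Returns the number of batches completed before cancellation.
    static int run(List<Runnable> batches, long maxGcTimeMillis) {
        long deadline = System.currentTimeMillis() + maxGcTimeMillis;
        int completed = 0;
        for (Runnable batch : batches) {
            if (System.currentTimeMillis() >= deadline) {
                break; // canceled automatically, as the option describes
            }
            batch.run();
            completed++;
        }
        return completed;
    }
}
```

Checking the deadline only at batch boundaries keeps the hot path cheap, at 
the cost of possibly overshooting maxGCTime by one batch.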





[jira] [Commented] (OAK-5876) SplitDocumentCleanup should implement Closeable

2017-03-07 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899201#comment-15899201
 ] 

Marcel Reutegger commented on OAK-5876:
---

+1 looks good to me.

> SplitDocumentCleanup should implement Closeable
> ---
>
> Key: OAK-5876
> URL: https://issues.apache.org/jira/browse/OAK-5876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> candidate_oak_1_6
> Fix For: 1.7.0, 1.8
>
> Attachments: OAK-5876.diff
>
>
> {{SplitDocumentCleanup}} currently does not close the {{Iterable}} holding 
> {{splitDocGarbage}}. It should implement {{Closeable}} and forward calls to 
> {{close()}} to the {{Iterable}}, if that happens to be {{Closeable}}.
> Likewise, {{VersionGCSupport}} should call {{close()}} on 
> {{SplitDocumentCleanup}} when done.
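
A minimal sketch of the forwarding described above, with illustrative names 
rather than the real Oak classes:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;

// Sketch: forward close() to the underlying Iterable if (and only if)
// that Iterable happens to implement Closeable.
class CleanupSketch implements Closeable {
    private final Iterable<String> splitDocGarbage;

    CleanupSketch(Iterable<String> splitDocGarbage) {
        this.splitDocGarbage = splitDocGarbage;
    }

    @Override
    public void close() throws IOException {
        if (splitDocGarbage instanceof Closeable) {
            ((Closeable) splitDocGarbage).close();
        }
    }
}

// Example closeable source, used here only to demonstrate the forwarding.
class ClosingList extends ArrayList<String> implements Closeable {
    boolean closed = false;

    @Override
    public void close() {
        closed = true;
    }
}
```

The caller ({{VersionGCSupport}} in this issue) could then use 
try-with-resources to guarantee the call to {{close()}}.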





[jira] [Created] (OAK-5897) Optimize like constraint support in Property Indexes

2017-03-07 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-5897:


 Summary: Optimize like constraint support in Property Indexes
 Key: OAK-5897
 URL: https://issues.apache.org/jira/browse/OAK-5897
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Chetan Mehrotra
Assignee: Thomas Mueller
 Fix For: 1.8


Consider a query

{noformat}
 /jcr:root/content//element(*, nt:unstructured)[jcr:like(@resource, 
'/content/foo/bar%')]
{noformat}

This currently gets translated into a range property restriction 

{noformat}
 property=[resource=[[/content/foo/bar.., ../content/foo/bas]]]
{noformat}

For such a query the property index currently returns all nodes having the 
"resource" property, i.e. all index data. This can be optimized to return only 
those nodes whose indexed value satisfies the range property restriction.
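
The range shown above ({{/content/foo/bar..}} up to {{/content/foo/bas}}) comes 
from incrementing the last character of the like-prefix. A hedged, 
self-contained sketch of how an index could filter entries against that range 
(illustrative names, not Oak's query engine):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: turn a jcr:like prefix into a [lower, upper) range and keep
// only the indexed values that fall inside it, instead of returning
// every node that has the property.
class LikeRangeSketch {
    // Upper bound: the prefix with its last character incremented, so
    // that "/content/foo/bar" yields "/content/foo/bas".
    static String upperBound(String prefix) {
        int last = prefix.length() - 1;
        return prefix.substring(0, last) + (char) (prefix.charAt(last) + 1);
    }

    static List<String> filter(List<String> indexedValues, String prefix) {
        String upper = upperBound(prefix);
        List<String> result = new ArrayList<>();
        for (String v : indexedValues) {
            // lower bound inclusive, upper bound exclusive
            if (v.compareTo(prefix) >= 0 && v.compareTo(upper) < 0) {
                result.add(v);
            }
        }
        return result;
    }
}
```

Filtering on the indexed value directly avoids loading nodes whose "resource" 
property lies outside the range in the first place.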





[jira] [Updated] (OAK-5876) SplitDocumentCleanup should implement Closeable

2017-03-07 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-5876:
--
Fix Version/s: 1.8

> SplitDocumentCleanup should implement Closeable
> ---
>
> Key: OAK-5876
> URL: https://issues.apache.org/jira/browse/OAK-5876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> candidate_oak_1_6
> Fix For: 1.7.0, 1.8
>
> Attachments: OAK-5876.diff
>
>
> {{SplitDocumentCleanup}} currently does not close the {{Iterable}} holding 
> {{splitDocGarbage}}. It should implement {{Closeable}} and forward calls to 
> {{close()}} to the {{Iterable}}, if that happens to be {{Closeable}}.
> Likewise, {{VersionGCSupport}} should call {{close()}} on 
> {{SplitDocumentCleanup}} when done.


