[jira] [Commented] (OAK-7989) Build Jackrabbit Oak #1891 failed

2019-01-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744300#comment-16744300
 ] 

Hudson commented on OAK-7989:
-

Previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#1892|https://builds.apache.org/job/Jackrabbit%20Oak/1892/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1892/console]

> Build Jackrabbit Oak #1891 failed
> -
>
> Key: OAK-7989
> URL: https://issues.apache.org/jira/browse/OAK-7989
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1891 has failed.
> First failed run: [Jackrabbit Oak 
> #1891|https://builds.apache.org/job/Jackrabbit%20Oak/1891/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1891/console]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (OAK-7982) ACL.addEntry: check for mandatory restrictions only respects single value restrictions

2019-01-16 Thread angela (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744140#comment-16744140
 ] 

angela edited comment on OAK-7982 at 1/16/19 4:47 PM:
--

fixed in trunk: revision 1851451.
fixed in 1.10 branch: revision 1851470.




was (Author: anchela):
fixed in trunk: revision 1851451.


> ACL.addEntry: check for mandatory restrictions only respects single value 
> restrictions
> --
>
> Key: OAK-7982
> URL: https://issues.apache.org/jira/browse/OAK-7982
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, security
>Reporter: angela
>Assignee: angela
>Priority: Major
> Fix For: 1.11.0, 1.10.1
>
> Attachments: OAK-7982.patch
>
>
> The validation of {{ACL.addEntry(Principal principal, Privilege[] privileges, 
> boolean isAllow, Map<String, Value> restrictions, Map<String, Value[]> 
> mvRestrictions)}}
> includes a check that mandatory restrictions are actually present.
> However, the code performing that check only tests whether the mandatory 
> restrictions are included in the {{restrictions}} map, ignoring the fact that a 
> mandatory restriction might be multi-valued and thus provided in the 
> {{mvRestrictions}} param.
> cc: [~stillalex] fyi.
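
For illustration, the corrected check has to consult both parameter maps. A minimal sketch, with an illustrative helper name and using {{AccessControlException}} for the failure case (not the actual Oak implementation):
{code:java}
import java.util.Map;
import java.util.Set;
import javax.jcr.Value;
import javax.jcr.security.AccessControlException;

public class MandatoryRestrictionCheck {

    // Sketch only: a mandatory restriction is satisfied if it is present either as a
    // single-valued entry (restrictions) or as a multi-valued entry (mvRestrictions).
    static void checkMandatory(Set<String> mandatoryNames,
                               Map<String, Value> restrictions,
                               Map<String, Value[]> mvRestrictions) throws AccessControlException {
        for (String name : mandatoryNames) {
            if (!restrictions.containsKey(name) && !mvRestrictions.containsKey(name)) {
                throw new AccessControlException("Mandatory restriction missing: " + name);
            }
        }
    }
}
{code}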



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-7982) ACL.addEntry: check for mandatory restrictions only respects single value restrictions

2019-01-16 Thread angela (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-7982.
-
   Resolution: Fixed
Fix Version/s: 1.12

> ACL.addEntry: check for mandatory restrictions only respects single value 
> restrictions
> --
>
> Key: OAK-7982
> URL: https://issues.apache.org/jira/browse/OAK-7982
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, security
>Reporter: angela
>Assignee: angela
>Priority: Major
> Fix For: 1.12, 1.11.0, 1.10.1
>
> Attachments: OAK-7982.patch
>
>
> The validation of {{ACL.addEntry(Principal principal, Privilege[] privileges, 
> boolean isAllow, Map<String, Value> restrictions, Map<String, Value[]> 
> mvRestrictions)}}
> includes a check that mandatory restrictions are actually present.
> However, the code performing that check only tests whether the mandatory 
> restrictions are included in the {{restrictions}} map, ignoring the fact that a 
> mandatory restriction might be multi-valued and thus provided in the 
> {{mvRestrictions}} param.
> cc: [~stillalex] fyi.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7989) Build Jackrabbit Oak #1891 failed

2019-01-16 Thread Hudson (JIRA)
Hudson created OAK-7989:
---

 Summary: Build Jackrabbit Oak #1891 failed
 Key: OAK-7989
 URL: https://issues.apache.org/jira/browse/OAK-7989
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: continuous integration
Reporter: Hudson


No description is provided

The build Jackrabbit Oak #1891 has failed.
First failed run: [Jackrabbit Oak 
#1891|https://builds.apache.org/job/Jackrabbit%20Oak/1891/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1891/console]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-7988) The node counter jmx bean should show 0 if a node exists

2019-01-16 Thread Thomas Mueller (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-7988.
-
   Resolution: Fixed
 Assignee: Thomas Mueller
Fix Version/s: 1.11.0

> The node counter jmx bean should show 0 if a node exists
> 
>
> Key: OAK-7988
> URL: https://issues.apache.org/jira/browse/OAK-7988
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: indexing
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.11.0
>
>
> Right now, the node counter jmx bean doesn't show a result if a node has few 
> children. It also doesn't show a result if the node doesn't exist (due 
> to a typo). It would be nice if existing nodes showed "0" in this case, for 
> "exists, but has no or few children", as documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7988) The node counter jmx bean should show 0 if a node exists

2019-01-16 Thread Thomas Mueller (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744155#comment-16744155
 ] 

Thomas Mueller commented on OAK-7988:
-

Example for getEstimatedChildNodeCounts(String path, int level):
* path = "/temp", level = -1: now shows no result (the node doesn't exist) 
* path = "/tmp", level = -1: now shows "/tmp: 0" (the node exists but has no 
/ few children) 
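
A self-contained sketch of the intended behaviour; the map of estimates stands in for the counter index, so this is illustrative rather than the actual Oak implementation:
{code:java}
import java.util.Map;
import java.util.Optional;

public class NodeCounterExample {

    // Stand-in for the estimates provided by the counter index.
    static final Map<String, Long> ESTIMATES = Map.of("/tmp", 0L);

    // Non-existing path: no result. Existing path with no or few children: "<path>: 0".
    static Optional<String> estimatedChildNodeCount(String path) {
        Long count = ESTIMATES.get(path);
        return count == null ? Optional.empty() : Optional.of(path + ": " + count);
    }

    public static void main(String[] args) {
        System.out.println(estimatedChildNodeCount("/temp")); // Optional.empty
        System.out.println(estimatedChildNodeCount("/tmp"));  // Optional[/tmp: 0]
    }
}
{code}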


> The node counter jmx bean should show 0 if a node exists
> 
>
> Key: OAK-7988
> URL: https://issues.apache.org/jira/browse/OAK-7988
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: indexing
>Reporter: Thomas Mueller
>Priority: Minor
>
> Right now, the node counter jmx bean doesn't show a result if a node has few 
> children. It also doesn't show a result if the node doesn't exist (due 
> to a typo). It would be nice if existing nodes showed "0" in this case, for 
> "exists, but has no or few children", as documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7988) The node counter jmx bean should show 0 if a node exists

2019-01-16 Thread Thomas Mueller (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744154#comment-16744154
 ] 

Thomas Mueller commented on OAK-7988:
-

http://svn.apache.org/r1851453

> The node counter jmx bean should show 0 if a node exists
> 
>
> Key: OAK-7988
> URL: https://issues.apache.org/jira/browse/OAK-7988
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: indexing
>Reporter: Thomas Mueller
>Priority: Minor
>
> Right now, the node counter jmx bean doesn't show a result if a node has few 
> children. It also doesn't show a result if the node doesn't exist (due 
> to a typo). It would be nice if existing nodes showed "0" in this case, for 
> "exists, but has no or few children", as documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7982) ACL.addEntry: check for mandatory restrictions only respects single value restrictions

2019-01-16 Thread angela (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-7982:

Fix Version/s: 1.11.0

> ACL.addEntry: check for mandatory restrictions only respects single value 
> restrictions
> --
>
> Key: OAK-7982
> URL: https://issues.apache.org/jira/browse/OAK-7982
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, security
>Reporter: angela
>Assignee: angela
>Priority: Major
> Fix For: 1.11.0, 1.10.1
>
> Attachments: OAK-7982.patch
>
>
> The validation of {{ACL.addEntry(Principal principal, Privilege[] privileges, 
> boolean isAllow, Map<String, Value> restrictions, Map<String, Value[]> 
> mvRestrictions)}}
> includes a check that mandatory restrictions are actually present.
> However, the code performing that check only tests whether the mandatory 
> restrictions are included in the {{restrictions}} map, ignoring the fact that a 
> mandatory restriction might be multi-valued and thus provided in the 
> {{mvRestrictions}} param.
> cc: [~stillalex] fyi.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7982) ACL.addEntry: check for mandatory restrictions only respects single value restrictions

2019-01-16 Thread angela (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744140#comment-16744140
 ] 

angela commented on OAK-7982:
-

fixed in trunk: revision 1851451.


> ACL.addEntry: check for mandatory restrictions only respects single value 
> restrictions
> --
>
> Key: OAK-7982
> URL: https://issues.apache.org/jira/browse/OAK-7982
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, security
>Reporter: angela
>Assignee: angela
>Priority: Major
> Fix For: 1.10.1
>
> Attachments: OAK-7982.patch
>
>
> The validation of {{ACL.addEntry(Principal principal, Privilege[] privileges, 
> boolean isAllow, Map<String, Value> restrictions, Map<String, Value[]> 
> mvRestrictions)}}
> includes a check that mandatory restrictions are actually present.
> However, the code performing that check only tests whether the mandatory 
> restrictions are included in the {{restrictions}} map, ignoring the fact that a 
> mandatory restriction might be multi-valued and thus provided in the 
> {{mvRestrictions}} param.
> cc: [~stillalex] fyi.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7988) The node counter jmx bean should show 0 if a node exists

2019-01-16 Thread Thomas Mueller (JIRA)
Thomas Mueller created OAK-7988:
---

 Summary: The node counter jmx bean should show 0 if a node exists
 Key: OAK-7988
 URL: https://issues.apache.org/jira/browse/OAK-7988
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: indexing
Reporter: Thomas Mueller


Right now, the node counter jmx bean doesn't show a result if a node has few 
children. It also doesn't show a result if the node doesn't exist (due 
to a typo). It would be nice if existing nodes showed "0" in this case, for 
"exists, but has no or few children", as documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7217) check public Oak APIs for references to Guava

2019-01-16 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744049#comment-16744049
 ] 

Julian Reschke commented on OAK-7217:
-

I just realized that a combination of the shade and baseline plugins will be 
able to detect this. Example: oak-commons:

1. add shading config to pom: 
https://issues.apache.org/jira/secure/attachment/12955097/detect-api.diff

2. run baseline check, it will complain about changed API signatures:

{noformat}
[ERROR] org.apache.jackrabbit.oak.commons: Version increase required; detected 
1.2.1, suggested 2.0.0
[ERROR] org.apache.jackrabbit.oak.commons.io: Version increase required; 
detected 1.0.0, suggested 2.0.0
{noformat}


> check public Oak APIs for references to Guava
> -
>
> Key: OAK-7217
> URL: https://issues.apache.org/jira/browse/OAK-7217
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>Reporter: Julian Reschke
>Priority: Minor
> Attachments: detect-api.diff
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7217) check public Oak APIs for references to Guava

2019-01-16 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7217:

Attachment: detect-api.diff

> check public Oak APIs for references to Guava
> -
>
> Key: OAK-7217
> URL: https://issues.apache.org/jira/browse/OAK-7217
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>Reporter: Julian Reschke
>Priority: Minor
> Attachments: detect-api.diff
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7217) check public Oak APIs for references to Guava

2019-01-16 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7217:

Attachment: (was: detect-api.diff)

> check public Oak APIs for references to Guava
> -
>
> Key: OAK-7217
> URL: https://issues.apache.org/jira/browse/OAK-7217
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>Reporter: Julian Reschke
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7217) check public Oak APIs for references to Guava

2019-01-16 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7217:

Attachment: detect-api.diff

> check public Oak APIs for references to Guava
> -
>
> Key: OAK-7217
> URL: https://issues.apache.org/jira/browse/OAK-7217
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>Reporter: Julian Reschke
>Priority: Minor
> Attachments: detect-api.diff
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (OAK-6749) Segment-Tar standby sync fails with "in-memory" blobs present in the source repo

2019-01-16 Thread Francesco Mari (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari reassigned OAK-6749:
---

Assignee: Francesco Mari

> Segment-Tar standby sync fails with "in-memory" blobs present in the source 
> repo
> 
>
> Key: OAK-6749
> URL: https://issues.apache.org/jira/browse/OAK-6749
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob, tarmk-standby
>Affects Versions: 1.6.2
>Reporter: Csaba Varga
>Assignee: Francesco Mari
>Priority: Major
>
> We have run into an issue when trying to transition from an active/active 
> Mongo NodeStore cluster to a single Segment-Tar server with cold standby. The 
> issue itself manifests when the standby server tries to pull changes from the 
> primary after the first round of online revision GC.
> Let me summarize the way we ended up with the current state, and my 
> hypothesis about what happened, based on my debugging so far:
> # We started with a Mongo NodeStore and an external FileDataStore as the blob 
> store. The FileDataStore was set up with minRecordLength=4096. The Mongo 
> store stores blobs below minRecordLength as special "in-memory" blobIDs where 
> the data itself is baked into the ID string in hex.
> # We have executed a sidegrade of the Mongo store into a Segment-Tar store. 
> Our datastore is over 1TB in size, so copying the binaries wasn't an option. 
> The new repository is simply reusing the existing datastore. The "in-memory" 
> blobIDs still look like external blobIDs to the sidegrade process, so they 
> were copied into the Segment-Tar repository as-is, instead of being converted 
> into the efficient in-line format.
> # The server started up without issues on the new Segment-Tar store. The 
> migrated "in-memory" blob IDs seem to work fine, if a bit sub-optimal.
> # At this point, we have created a cold standby instance by copying the files 
> of the stopped primary instance and making the necessary config changes on 
> both servers.
> # Everything worked fine until the primary server started its first round of 
> online revision GC. After that process completed, the standby node started 
> throwing exceptions about missing segments, and eventually stopped 
> altogether. In the meantime, the following warning showed up in the primary 
> log:
> {code:java}
> 29.09.2017 06:12:08.088 *WARN* [nioEventLoopGroup-3-10] 
> org.apache.jackrabbit.oak.segment.standby.server.ExceptionHandler Exception 
> caught on the server
> io.netty.handler.codec.TooLongFrameException: frame length (8208) exceeds the 
> allowed maximum (8192)
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.fail(LineBasedFrameDecoder.java:146)
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.fail(LineBasedFrameDecoder.java:142)
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.decode(LineBasedFrameDecoder.java:99)
> at 
> io.netty.handler.codec.LineBasedFrameDecoder.decode(LineBasedFrameDecoder.java:75)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:352)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:345)
> at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:352)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:345)
> at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:352)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:611)
> at 
> 

[jira] [Closed] (OAK-5473) Document fulltext search grammer ("contains")

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-5473.
-

bulk close 1.10.0

> Document fulltext search grammer ("contains")
> -
>
> Key: OAK-5473
> URL: https://issues.apache.org/jira/browse/OAK-5473
> Project: Jackrabbit Oak
>  Issue Type: Documentation
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.10.0
>
>
> The grammar supported and the semantics of fulltext queries ("jcr:contains" 
> for XPath, "contains" for SQL-2) are not clearly documented yet. This 
> especially applies to escaping (which characters), rules (how to avoid syntax 
> errors), and compatibility (which version supports which grammar).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7843) oak-upgrade doesn't correctly pass segment cache size to file store

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7843.
-

bulk close 1.10.0

> oak-upgrade doesn't correctly pass segment cache size to file store
> ---
>
> Key: OAK-7843
> URL: https://issues.apache.org/jira/browse/OAK-7843
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: upgrade
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Minor
> Fix For: 1.10.0, 1.9.10
>
> Attachments: OAK-7843.patch
>
>
> When setting a segment cache size via the {{--cache SEGMENT_CACHE_SIZE}} 
> option, {{oak-upgrade}} doesn't correctly pass it to the file store, although 
> it correctly logs the value right after start. Below are the meaningful lines from 
> the log when trying to set a segment cache of 8 GB:
> {noformat}
> [...]
> 16.10.2018 17:00:29.834 [main] *INFO*  
> org.apache.jackrabbit.oak.upgrade.cli.parser.MigrationOptions - Cache size: 
> 8192 MB
> [..]
> 16.10.2018 17:00:30.144 [main] *INFO*  
> org.apache.jackrabbit.oak.segment.file.FileStore - Creating file store 
> FileStoreBuilder{version=1.10-SNAPSHOT, directory=repository/segmentstore, 
> blobStore=org.apache.jackrabbit.oak.upgrade.cli.blob.LoopbackBlobStore@33b37288,
>  maxFileSize=256, segmentCacheSize=256, stringCacheSize=256, 
> templateCacheSize=64, stringDeduplicationCacheSize=15000, 
> templateDeduplicationCacheSize=3000, nodeDeduplicationCacheSize=1048576, 
> memoryMapping=false, gcOptions=SegmentGCOptions{paused=false, 
> estimationDisabled=false, gcSizeDeltaEstimation=1073741824, retryCount=5, 
> forceTimeout=60, retainedGenerations=2, gcType=FULL}}
> {noformat}
> It can be observed that the segment cache size remained at 256MB (default).
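
For reference, a minimal sketch of forwarding the parsed option to the file store builder; the path and value are illustrative and this is not the actual oak-upgrade code path:
{code:java}
import java.io.File;
import org.apache.jackrabbit.oak.segment.file.FileStore;
import org.apache.jackrabbit.oak.segment.file.FileStoreBuilder;

public class SegmentCacheSizeExample {
    public static void main(String[] args) throws Exception {
        // Forward the value given via --cache (in MB) to the builder instead of
        // leaving the 256 MB default in place.
        FileStore store = FileStoreBuilder
                .fileStoreBuilder(new File("repository/segmentstore"))
                .withSegmentCacheSize(8192)
                .build();
        store.close();
    }
}
{code}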



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7639) Surface more DSGC operation stats

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7639.
-

bulk close 1.10.0

> Surface more DSGC operation stats 
> --
>
> Key: OAK-7639
> URL: https://issues.apache.org/jira/browse/OAK-7639
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: blob-plugins
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Major
> Fix For: 1.10.0, 1.9.9
>
>
> The current metrics being pushed should also be surfaced through the 
> MarkSweepGarbageCollector object:
> * Blobs deleted
> * Total approx. size deleted



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-6898) Query: grammar documentation / annotated railroad diagrams

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-6898.
-

bulk close 1.10.0

> Query: grammar documentation / annotated railroad diagrams
> --
>
> Key: OAK-6898
> URL: https://issues.apache.org/jira/browse/OAK-6898
> Project: Jackrabbit Oak
>  Issue Type: Documentation
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.10.0
>
>
> I think we need to add a proper query grammar documentation with detailed 
> recommendations, similar to how it is done in relational databases. This is 
> needed for XPath, and SQL-2. The only thing we have right now is [railroad 
> diagrams|http://www.h2database.com/jcr/grammar.html], without annotation. 
> That's not sufficient. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7728) Oak run check command fails with SegmentNotFound exception

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7728.
-

bulk close 1.10.0

> Oak run check command fails with SegmentNotFound exception
> --
>
> Key: OAK-7728
> URL: https://issues.apache.org/jira/browse/OAK-7728
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: run, segment-tar
>Affects Versions: 1.8.2
>Reporter: Ioan-Cristian Linte
>Assignee: Andrei Dulceanu
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: check-output.txt, we-retail-stage-corruption.log
>
>
> The check command of oak-run fails with a SegmentNotFound exception.
> See the attached file for the output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7758) Non-blocking CompositeNodeStore merges

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7758.
-

bulk close 1.10.0

> Non-blocking CompositeNodeStore merges
> --
>
> Key: OAK-7758
> URL: https://issues.apache.org/jira/browse/OAK-7758
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite
>Reporter: Marcel Reutegger
>Assignee: Tomek Rękawek
>Priority: Major
> Fix For: 1.10.0, 1.9.9
>
>
> The CompositeNodeStore serializes all merges with a lock. This prevents 
> concurrent processing of merges by the DocumentNodeStore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7971) RDB*Store: update DB2 JDBC reference to 4.19.77

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7971.
-

bulk close 1.10.0

> RDB*Store: update DB2 JDBC reference to 4.19.77
> ---
>
> Key: OAK-7971
> URL: https://issues.apache.org/jira/browse/OAK-7971
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.10.0, 1.11.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7966) Avoid adding excluded principal to cug policy

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7966.
-

bulk close 1.10.0

> Avoid adding excluded principal to cug policy
> -
>
> Key: OAK-7966
> URL: https://issues.apache.org/jira/browse/OAK-7966
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: authorization-cug
>Reporter: angela
>Assignee: angela
>Priority: Minor
> Fix For: 1.10.0, 1.11.0
>
> Attachments: OAK-7966.patch
>
>
> [~stillalex], i just noticed that it is possible to add principals to a cug 
> policy that are effectively excluded from evaluation altogether. i think it 
> would be better to avoid adding those principals, thus avoiding any confusion about 
> the possible effects that may arise by doing so.
> proposed patch will follow asap



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-6584) Add tooling API

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-6584.
-

bulk close 1.10.0

> Add tooling API
> ---
>
> Key: OAK-6584
> URL: https://issues.apache.org/jira/browse/OAK-6584
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Major
>  Labels: tooling
> Fix For: 1.10.0
>
>
> h3. Current situation
> Current segment store related tools are implemented ad-hoc by potentially 
> relying on internal implementation details of Oak Segment Tar. This makes 
> those tools less useful, portable, stable and broadly applicable than 
> they should be.
> h3. Goal
> Provide a common and sufficiently stable Oak Tooling API for implementing 
> segment store related tools. The API should be independent of Oak and not 
> available for normal production use of Oak. Specifically, it should not be 
> possible to use it to implement production features, and production features must 
> not rely on it. It must be possible to implement the Oak Tooling API in Oak 
> 1.8 and it should be possible for Oak 1.6.
> h3. Typical use cases
> * Query the number of nodes / properties / values in a given path satisfying 
> some criteria
> * Aggregate a certain value on queries like the above
> * Calculate size of the content / size on disk
> * Analyse changes. E.g. how many binaries bigger than a certain threshold 
> were added / removed between two given revisions. What is the sum of their 
> sizes?
> * Analyse locality: measure of locality of node states. Incident plots (See 
> https://issues.apache.org/jira/browse/OAK-5655?focusedCommentId=15865973=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15865973).
> * Analyse level of deduplication (e.g. of checkpoint) 
> h3. Validation
> Reimplement [Script Oak|https://github.com/mduerig/script-oak] on top of the 
> tooling API. 
> h3. API draft
> * Whiteboard shot of the [API 
> entities|https://wiki.apache.org/jackrabbit/Oakathon%20August%202017?action=AttachFile=view=IMG_20170822_163256.jpg]
>  identified initially.
> * Further [drafting of the API|https://github.com/mduerig/oak-tooling-api] 
> takes place on Github for now. We'll move to the Apache SVN as soon as 
> considered mature enough and have a consensus of where to best move it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7397) Test failure: TomcatIT

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7397.
-

bulk close 1.10.0

> Test failure: TomcatIT
> --
>
> Key: OAK-7397
> URL: https://issues.apache.org/jira/browse/OAK-7397
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration, segment-tar
>Reporter: Hudson
>Priority: Major
> Fix For: 1.10.0
>
>
> No description is provided
> The build Jackrabbit Oak #1366 has failed.
> First failed run: [Jackrabbit Oak 
> #1366|https://builds.apache.org/job/Jackrabbit%20Oak/1366/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1366/console]
> {noformat}
> ERROR: [org.apache.jackrabbit.oak.segment.SegmentNodeStoreService(43)] The 
> activate method has thrown an exception
> java.lang.IllegalStateException: 
> /home/jenkins/jenkins-slave/workspace/Jackrabbit 
> Oak/oak-examples/webapp/target/repository/repository/segmentstore is in use 
> by another store.
>   at 
> org.apache.jackrabbit.oak.segment.file.tar.TarPersistence.lockRepository(TarPersistence.java:92)
>   at 
> org.apache.jackrabbit.oak.segment.file.FileStore.(FileStore.java:159)
>   at 
> org.apache.jackrabbit.oak.segment.file.FileStoreBuilder.build(FileStoreBuilder.java:353)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStoreService.registerSegmentStore(SegmentNodeStoreService.java:506)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStoreService.activate(SegmentNodeStoreService.java:399)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.felix.scr.impl.inject.BaseMethod.invokeMethod(BaseMethod.java:229)
>   at 
> org.apache.felix.scr.impl.inject.BaseMethod.access$500(BaseMethod.java:39)
>   at 
> org.apache.felix.scr.impl.inject.BaseMethod$Resolved.invoke(BaseMethod.java:650)
>   at 
> org.apache.felix.scr.impl.inject.BaseMethod.invoke(BaseMethod.java:506)
>   at 
> org.apache.felix.scr.impl.inject.ActivateMethod.invoke(ActivateMethod.java:307)
>   at 
> org.apache.felix.scr.impl.inject.ActivateMethod.invoke(ActivateMethod.java:299)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createImplementationObject(SingleComponentManager.java:298)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createComponent(SingleComponentManager.java:109)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:906)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getServiceInternal(SingleComponentManager.java:879)
>   at 
> org.apache.felix.scr.impl.manager.AbstractComponentManager.activateInternal(AbstractComponentManager.java:749)
>   at 
> org.apache.felix.scr.impl.manager.ExtendedServiceEvent.activateManagers(ExtendedServiceEvent.java:59)
>   at 
> org.apache.felix.scr.impl.BundleComponentActivator$ListenerInfo.serviceChanged(BundleComponentActivator.java:144)
>   at 
> org.apache.felix.connect.felix.framework.util.EventDispatcher.invokeServiceListenerCallback(EventDispatcher.java:852)
>   at 
> org.apache.felix.connect.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:775)
>   at 
> org.apache.felix.connect.felix.framework.util.EventDispatcher.fireServiceEvent(EventDispatcher.java:594)
>   at org.apache.felix.connect.PojoSR$1.serviceChanged(PojoSR.java:78)
>   at 
> org.apache.felix.connect.felix.framework.ServiceRegistry.unregisterService(ServiceRegistry.java:158)
>   at 
> org.apache.felix.connect.felix.framework.ServiceRegistrationImpl.unregister(ServiceRegistrationImpl.java:132)
>   at 
> org.apache.jackrabbit.oak.plugins.metric.StatisticsProviderFactory.deactivate(StatisticsProviderFactory.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.felix.scr.impl.inject.BaseMethod.invokeMethod(BaseMethod.java:229)
>   at 
> org.apache.felix.scr.impl.inject.BaseMethod.access$500(BaseMethod.java:39)
>   at 
> org.apache.felix.scr.impl.inject.BaseMethod$Resolved.invoke(BaseMethod.java:650)
>   at 
> org.apache.felix.scr.impl.inject.BaseMethod.invoke(BaseMethod.java:506)
>   at 
> 

[jira] [Closed] (OAK-7970) RDB*Store: add profile for DB2 11.1 JDBC driver

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7970.
-

bulk close 1.10.0

> RDB*Store: add profile for DB2 11.1 JDBC driver
> ---
>
> Key: OAK-7970
> URL: https://issues.apache.org/jira/browse/OAK-7970
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.10.0, 1.11.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7286) DocumentNodeStoreBranch handling of non-recoverable DocumentStoreExceptions

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7286.
-

bulk close 1.10.0

> DocumentNodeStoreBranch handling of non-recoverable DocumentStoreExceptions
> ---
>
> Key: OAK-7286
> URL: https://issues.apache.org/jira/browse/OAK-7286
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Marcel Reutegger
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: OAK-7286-DocumentStoreException.patch, 
> OAK-7286-DocumentStoreException.patch, OAK-7286.diff, OAK-7286.diff
>
>
> In {{DocumentNodeStoreBranch.merge()}}, any {{DocumentStoreException}} is 
> mapped to a {{CommitFailedException}} of type 
> "MERGE", which leads to the operation being retried, and a non-helpful 
> exception being generated.
> The effect can be observed by enabling a test in {{ValidNamesTest}}:
> {noformat}
> --- oak-jcr/src/test/java/org/apache/jackrabbit/oak/jcr/ValidNamesTest.java   
>   (Revision 1825371)
> +++ oak-jcr/src/test/java/org/apache/jackrabbit/oak/jcr/ValidNamesTest.java   
>   (Arbeitskopie)
> @@ -300,7 +300,6 @@
>  public void testUnpairedHighSurrogateEnd() {
>  // see OAK-5506
>  
> org.junit.Assume.assumeFalse(super.fixture.toString().toLowerCase().contains("segment"));
> -
> org.junit.Assume.assumeFalse(super.fixture.toString().toLowerCase().contains("rdb"));
>  nameTest("foo" + SURROGATE_PAIR[0]);
>  }
> @@ -336,6 +335,7 @@
>  assertEquals("paths should be equal", p.getPath(), n.getPath());
>  return p;
>  } catch (RepositoryException ex) {
> +ex.printStackTrace();
>  fail(ex.getMessage());
>  return null;
>  }
> {noformat}
> The underlying issue is that {{RDBDocumentStore}} is throwing a 
> {{DocumentStoreException}} due to the invalid ID, and repeating the call will 
> not help.
> We probably should have a way to distinguish between different types of 
> problems.
> I hacked {{DocumentNodeStoreBranch}} like that:
> {noformat}
> --- 
> oak-store-document/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentNodeStoreBranch.java
> (Revision 1825371)
> +++ 
> oak-store-document/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentNodeStoreBranch.java
> (Arbeitskopie)
> @@ -520,8 +520,12 @@
>  } catch (ConflictException e) {
>  throw e.asCommitFailedException();
>  } catch(DocumentStoreException e) {
> -throw new CommitFailedException(MERGE, 1,
> -"Failed to merge changes to the underlying 
> store", e);
> +if (e.getMessage().contains("Invalid ID")) {
> +throw new CommitFailedException(OAK, 123,
> +"Failed to store changes in the underlying 
> store: " + e.getMessage(), e);
> +} else {
> +throw new CommitFailedException(MERGE, 1, "Failed to 
> merge changes to the underlying store", e);
> +}
>  } catch (Exception e) {
>  throw new CommitFailedException(OAK, 1,
>  "Failed to merge changes to the underlying 
> store", e);
> {noformat}
> ...which causes the exception to surface immediately (see 
> https://issues.apache.org/jira/secure/attachment/12912117/OAK-7286.diff).
> (cc  [~mreutegg])



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7956) Conflict may leave behind _collisions entry

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7956.
-

bulk close 1.10.0

> Conflict may leave behind _collisions entry
> ---
>
> Key: OAK-7956
> URL: https://issues.apache.org/jira/browse/OAK-7956
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.4.0, 1.6.0, 1.8.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Major
>  Labels: candidate_oak_1_4, candidate_oak_1_6, candidate_oak_1_8
> Fix For: 1.10.0
>
>
> Under high concurrent conflicting workload, entries in the {{_collisions}} 
> map may be left behind and accumulate over time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7972) [DirectBinaryAccess] Direct binary access docs not linked from primary documentation

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7972.
-

bulk close 1.10.0

> [DirectBinaryAccess] Direct binary access docs not linked from primary 
> documentation
> 
>
> Key: OAK-7972
> URL: https://issues.apache.org/jira/browse/OAK-7972
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: doc
>Affects Versions: 1.10.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Minor
> Fix For: 1.10.0
>
>
> As reported on oak-dev@ by [~alexander.klimetschek]:
> {quote}This page
> [https://jackrabbit.apache.org/oak/docs/features/direct-binary-access.html]
> is not linked in the navigation on the documentation home page or any other 
> page I tried.
> {quote}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7959) MongoDocumentStore causes scan of entire nodes collection on startup

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7959.
-

bulk close 1.10.0

> MongoDocumentStore causes scan of entire nodes collection on startup
> 
>
> Key: OAK-7959
> URL: https://issues.apache.org/jira/browse/OAK-7959
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.9.10, 1.9.11, 1.9.12, 1.9.13
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.10.0
>
>
> This is a regression introduced with OAK-7645 when the MongoDB Java driver 
> was updated to 3.8.
> With the switch to the new driver, the use of a deprecated method {{count()}} 
> was replaced with {{countDocuments()}}. The method {{countDocuments()}} 
> behaves differently and performs an aggregation command over all documents in 
> the collection. The MongoDB log would then show something like:
> {noformat}
> 2018-12-12T18:40:53.672+0100 I COMMAND  [conn6063] command oak.nodes command: 
> aggregate { aggregate: "nodes", readConcern: { level: "local" }, pipeline: [ 
> { $match: {} }, { $group: { _id: null, n: { $sum: 1 } } } ], cursor: {}, $db: 
> "oak", $readPreference: { mode: "primaryPreferred" } } planSummary: COLLSCAN 
> keysExamined:0 docsExamined:4038809 cursorExhausted:1 numYields:31584 
> nreturned:1 reslen:127 locks:{ Global: { acquireCount: { r: 31586 } }, 
> Database: { acquireCount: { r: 31586 } }, Collection: { acquireCount: { r: 
> 31586 } } } protocol:op_msg 1642ms
> {noformat}
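
The difference can be reproduced with a few lines against the 3.8 driver; a minimal sketch, with connection string and database name as placeholders and not representing the actual Oak fix:
{code:java}
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class NodesCountExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> nodes = client.getDatabase("oak").getCollection("nodes");

            // countDocuments() issues the aggregation shown in the log above and
            // scans the whole collection when called without a filter.
            long exact = nodes.countDocuments();

            // estimatedDocumentCount() uses collection metadata and avoids the scan;
            // a metadata-based count is the kind of change this issue calls for.
            long estimated = nodes.estimatedDocumentCount();

            System.out.println(exact + " vs " + estimated);
        }
    }
}
{code}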



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7838) oak-run check crashes JVM

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7838.
-

bulk close 1.10.0

> oak-run check crashes JVM
> -
>
> Key: OAK-7838
> URL: https://issues.apache.org/jira/browse/OAK-7838
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: run, segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Major
> Fix For: 1.10.0, 1.9.10
>
>
> I had a case where running {{oak-run check}} on a repository with many 
> revisions would reliably crash the JVM. 
> Apparently there is a problem with the {{Scheduler}} instances in 
> {{org.apache.jackrabbit.oak.segment.CommitsTracker}}: when many instances of 
> that class are created in fast succession, they leave many daemon threads 
> lingering around for a while. In my case this was sufficient to kill the JVM. 
> To verify I simply removed the scheduler and everything was just fine:
> {code}
> ===
> --- 
> oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/CommitsTracker.java
> (date 1539358293000)
> +++ 
> oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/CommitsTracker.java
> (date 1539670356000)
> @@ -19,8 +19,6 @@
> package org.apache.jackrabbit.oak.segment;
> -import static java.util.concurrent.TimeUnit.MINUTES;
> -
> import java.io.Closeable;
> import java.util.HashMap;
> import java.util.Map;
> @@ -29,7 +27,6 @@
> import java.util.stream.Stream;
> import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
> -import org.apache.jackrabbit.oak.segment.file.Scheduler;
> /**
>  * A simple tracker for the source of commits (writes) in
> @@ -49,7 +46,6 @@
> private final ConcurrentMap commitsCountPerThreadGroup;
> private final ConcurrentMap commitsCountOtherThreads;
> private final ConcurrentMap 
> commitsCountPerThreadGroupLastMinute;
> -private final Scheduler commitsTrackerScheduler = new 
> Scheduler("CommitsTracker background tasks");
> CommitsTracker(String[] threadGroups, int otherWritersLimit, boolean 
> collectStackTraces) {
> this.threadGroups = threadGroups;
> @@ -60,8 +56,6 @@
> .maximumWeightedCapacity(otherWritersLimit).build();
> this.queuedWritersMap = new ConcurrentHashMap<>();
> -commitsTrackerScheduler.scheduleWithFixedDelay("TarMK commits 
> tracker stats resetter", 1, MINUTES,
> -this::resetStatistics);
> }
> public void trackQueuedCommitOf(Thread t) {
> @@ -112,7 +106,7 @@
> @Override
> public void close() {
> -commitsTrackerScheduler.close();
> +
> }
> {code}
> cc [~dulceanu]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-6148) Warning if there are many Lucene documents

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-6148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-6148.
-

bulk close 1.10.0

> Warning if there are many Lucene documents
> --
>
> Key: OAK-6148
> URL: https://issues.apache.org/jira/browse/OAK-6148
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene, query
>Reporter: Thomas Mueller
>Priority: Major
>  Labels: candidate_oak_1_6
> Fix For: 1.10.0
>
>
> Lucene indexes are limited to Integer.MAX_VALUE (LUCENE-4104), so Lucene 
> indexes can have at most around 2 billion nodes indexed.
> We should avoid running into this limit. For example, we could log a warning 
> if the number of documents is a multiple of 200 million, so a user has plenty 
> of time to change the index configuration.
> Also, it would be good to be able to read the current number of documents per 
> index (using JMX for example), so that one can find out how far he is from 
> the limit.
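
A minimal sketch of the proposed warning; the logger wiring and the place where the count would be checked are assumptions, only the threshold handling follows the description above:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LuceneDocCountWarning {

    private static final Logger LOG = LoggerFactory.getLogger(LuceneDocCountWarning.class);
    private static final long WARN_STEP = 200_000_000L; // warn every 200 million documents

    // Log a warning whenever the document count reaches another multiple of WARN_STEP,
    // leaving plenty of headroom before the Integer.MAX_VALUE limit (LUCENE-4104).
    static void maybeWarn(long numDocuments) {
        if (numDocuments > 0 && numDocuments % WARN_STEP == 0) {
            LOG.warn("Lucene index contains {} documents; the hard limit is {}",
                    numDocuments, Integer.MAX_VALUE);
        }
    }
}
{code}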



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-3883) Avoid commit from too far in the future (due to clock skews) to go through

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-3883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3883.
-

bulk close 1.10.0

> Avoid commit from too far in the future (due to clock skews) to go through
> --
>
> Key: OAK-3883
> URL: https://issues.apache.org/jira/browse/OAK-3883
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Vikas Saurabh
>Assignee: Marcel Reutegger
>Priority: Major
>  Labels: resilience
> Fix For: 1.10.0, 1.9.6
>
>
> Following up [discussion|http://markmail.org/message/m5jk5nbby77nlqs5] \[0] 
> to avoid bad commits due to misbehaving clocks. Points from the discussion:
> * We can start self-destruct mode while updating lease
> * Revision creation should check that newly created revision isn't beyond 
> leaseEnd time
> * Implementation done for OAK-2682 might be useful
> [0]: http://markmail.org/message/m5jk5nbby77nlqs5



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7719) CheckCommand should consistently use an alternative journal if specified

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7719.
-

bulk close 1.10.0

> CheckCommand should consistently use an alternative journal if specified
> 
>
> Key: OAK-7719
> URL: https://issues.apache.org/jira/browse/OAK-7719
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: run, segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
>Priority: Major
>  Labels: technical_debt
> Fix For: 1.10.0, 1.11.0
>
>
> Callers of the {{check}} command can specify an alternative journal with the 
> {{\-\-journal}} option. This option instructs the {{ConsistencyChecker}} to 
> check the revisions stored in that file instead of the ones stored in the 
> default {{journal.log}}.
> I spotted at least two problems while using {{\-\-journal}} on a repository 
> with a corrupted {{journal.log}} that didn't contain any valid revision.
> First, the path to the {{FileStore}} is validated by 
> {{FileStoreHelper#isValidFileStoreOrFail}}, which checks for the existence of 
> a {{journal.log}} in the specified folder. But if a {{journal.log}} doesn't 
> exist and the user specified a different journal on the command line this 
> check should be ignored.
> Second, when opening the {{FileStore}} the default {{journal.log}} is scanned 
> to determine the initial revision of the head state. If a user specifies an 
> alternative journal on the command line, that journal should be used instead 
> of the default {{journal.log}}. It might be that the default journal contains 
> no valid revision, which would force the system to crash when opening a new 
> instance of {{FileStore}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-6217) Document tricky statements

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-6217.
-

bulk close 1.10.0

> Document tricky statements
> --
>
> Key: OAK-6217
> URL: https://issues.apache.org/jira/browse/OAK-6217
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.10.0
>
>
> There are some cases, especially if fulltext conditions and aggregation are 
> used, where a query sometimes returns no results even though with the right 
> index it does return results. This is a bit hard to understand, because 
> it doesn't match the rule "indexes should only affect performance, not 
> results". One such example is if a query uses one or the other index, but not 
> both. Or if a query uses fulltext conditions on different nodes (parent and 
> child). Examples:
> {noformat}
> /jcr:root/home//element(*, rep:User)
>   [jcr:contains(.,'Kerr*') 
>   and jcr:like(@rep:impersonators, '%ccibu%')]/profile
> /jcr:root/home//element(*, rep:User)
>   [jcr:contains(profile,'Kerr*') 
>   and jcr:like(@rep:impersonators, '%ccibu%')]/profile
> {noformat}
> Related is the usage of relative properties in indexes, excluded / included 
> paths.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7975) Facet extraction fails while requesting multiple facets and one of the requested facets doesn't have indexed values

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7975.
-

bulk close 1.10.0

> Facet extraction fails while requesting multiple facets and one of the 
> requested facets doesn't have indexed values
> ---
>
> Key: OAK-7975
> URL: https://issues.apache.org/jira/browse/OAK-7975
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
> Fix For: 1.10.0, 1.8.11, 1.6.17, 1.11.0
>
>
> Consider a content like
> {noformat}
> + /test
>- foo=bar
> {noformat}
> with index def faceting multiple properties - something like
> {noformat}
> + /oak:index/foo/indexRules/nt:base/properties
>+ foo
>   - propertyIndex=true
>   - facets = true
> + bar
>   - facets = true
> {noformat}
> Then a query like
> {noformat}
> SELECT [rep:facet(foo)], [rep:facet(bar)] FROM [nt:base]
> {noformat}
> should not fail.
> Note that the failure requires requesting multiple facets, one of which 
> must not have any indexed value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7944) Minor improvements to oak security code base

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7944.
-

bulk close 1.10.0

> Minor improvements to oak security code base
> 
>
> Key: OAK-7944
> URL: https://issues.apache.org/jira/browse/OAK-7944
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external, core, security-spi
>Reporter: angela
>Assignee: angela
>Priority: Trivial
> Fix For: 1.10.0, 1.11.0
>
>
> hi [~stillalex], i thought i could do another round of minor code clean up 
> and improvements throughout the security code base:
> - broken javadoc links
> - wrong/missing nullable annotations
> - simplifications of verbose code constructs that date back to the origin, 
> when we still used old java versions
> - ...
> i would use that issue to mark the corresponding commits. let me know if you 
> have any concerns.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7867) Flush thread gets stuck when input stream of binaries block

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7867.
-

bulk close 1.10.0

> Flush thread gets stuck when input stream of binaries block
> ---
>
> Key: OAK-7867
> URL: https://issues.apache.org/jira/browse/OAK-7867
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Critical
>  Labels: candidate_oak_1_6, candidate_oak_1_8
> Fix For: 1.10.0, 1.9.10
>
>
> This issue tackles the root cause of the severe data loss that has been 
> reported in OAK-7852:
> When the input stream of a binary value blocks indefinitely on read, the 
> flush thread of the segment store gets blocked:
> {noformat}
> "pool-2-thread-1" #15 prio=5 os_prio=31 tid=0x7fb0f21e3000 nid=0x5f03 
> waiting on condition [0x7a46d000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x00076bba62b0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at com.google.common.util.concurrent.Monitor.await(Monitor.java:963)
> at com.google.common.util.concurrent.Monitor.enterWhen(Monitor.java:402)
> at 
> org.apache.jackrabbit.oak.segment.SegmentBufferWriterPool.safeEnterWhen(SegmentBufferWriterPool.java:179)
> at 
> org.apache.jackrabbit.oak.segment.SegmentBufferWriterPool.flush(SegmentBufferWriterPool.java:138)
> at 
> org.apache.jackrabbit.oak.segment.DefaultSegmentWriter.flush(DefaultSegmentWriter.java:138)
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.lambda$doFlush$8(FileStore.java:307)
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore$$Lambda$22/1345968304.flush(Unknown
>  Source)
> at 
> org.apache.jackrabbit.oak.segment.file.TarRevisions.doFlush(TarRevisions.java:237)
> at 
> org.apache.jackrabbit.oak.segment.file.TarRevisions.flush(TarRevisions.java:195)
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.doFlush(FileStore.java:306)
> at org.apache.jackrabbit.oak.segment.file.FileStore.flush(FileStore.java:318)
> {noformat}
> The condition {{0x7a46d000}} is waiting for the following thread to 
> return its {{SegmentBufferWriter}}, which will never happen if 
> {{InputStream.read(...)}} does not progress.
> {noformat}
> "pool-1-thread-1" #14 prio=5 os_prio=31 tid=0x7fb0f223a800 nid=0x5d03 
> runnable [0x7a369000
> ] java.lang.Thread.State: RUNNABLE
> at com.google.common.io.ByteStreams.read(ByteStreams.java:833)
> at 
> org.apache.jackrabbit.oak.segment.DefaultSegmentWriter$SegmentWriteOperation.internalWriteStream(DefaultSegmentWriter.java:641)
> at 
> org.apache.jackrabbit.oak.segment.DefaultSegmentWriter$SegmentWriteOperation.writeStream(DefaultSegmentWriter.java:618)
> at 
> org.apache.jackrabbit.oak.segment.DefaultSegmentWriter$SegmentWriteOperation.writeBlob(DefaultSegmentWriter.java:577)
> at 
> org.apache.jackrabbit.oak.segment.DefaultSegmentWriter$SegmentWriteOperation.writeProperty(DefaultSegmentWriter.java:691)
> at 
> org.apache.jackrabbit.oak.segment.DefaultSegmentWriter$SegmentWriteOperation.writeProperty(DefaultSegmentWriter.java:677)
> at 
> org.apache.jackrabbit.oak.segment.DefaultSegmentWriter$SegmentWriteOperation.writeNodeUncached(DefaultSegmentWriter.java:900)
> at 
> org.apache.jackrabbit.oak.segment.DefaultSegmentWriter$SegmentWriteOperation.writeNode(DefaultSegmentWriter.java:799)
> at 
> org.apache.jackrabbit.oak.segment.DefaultSegmentWriter$SegmentWriteOperation.access$800(DefaultSegmentWriter.java:252)
> at 
> org.apache.jackrabbit.oak.segment.DefaultSegmentWriter$8.execute(DefaultSegmentWriter.java:240)
> at 
> org.apache.jackrabbit.oak.segment.SegmentBufferWriterPool.execute(SegmentBufferWriterPool.java:105)
> at 
> org.apache.jackrabbit.oak.segment.DefaultSegmentWriter.writeNode(DefaultSegmentWriter.java:235)
> at 
> org.apache.jackrabbit.oak.segment.SegmentWriter.writeNode(SegmentWriter.java:79)
> {noformat}
>  
> This issue is critical as such a misbehaving input stream causes the flush 
> thread to get stuck, preventing transient segments from being flushed and thus 
> causing data loss.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7778) PasswordUtil#isPlainTextPassword doesn't validate PBKDF2 scheme

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7778.
-

bulk close 1.10.0

> PasswordUtil#isPlainTextPassword doesn't validate PBKDF2 scheme
> ---
>
> Key: OAK-7778
> URL: https://issues.apache.org/jira/browse/OAK-7778
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: security-spi
>Reporter: Alex Deparvu
>Assignee: Alex Deparvu
>Priority: Major
> Fix For: 1.10.0, 1.9.8
>
>
> Support for PBKDF2 was added quite a while back with OAK-697 but it seems 
> it's not really usable due to password validation. 
> PasswordUtil#isPlainTextPassword seems to think PBKDF2 is plain text, so it 
> will not allow any passwords to be set.
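For illustration, a hedged sketch of the kind of check this is about (method shape and scheme list are assumptions, not the actual {{PasswordUtil}} code): a value is treated as already hashed only if it starts with a recognised "{scheme}" prefix, so leaving PBKDF2 out of the recognised schemes makes a PBKDF2 hash look like plain text.

{noformat}
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch, not the real PasswordUtil: the scheme list and method
// shape are assumptions made for illustration only.
public final class PlainTextCheckSketch {

    private static final List<String> KNOWN_SCHEMES =
            Arrays.asList("SHA-1", "SHA-256", "SHA-512", "PBKDF2WithHmacSHA1");

    public static boolean isPlainTextPassword(String password) {
        if (password == null || !password.startsWith("{")) {
            return true;
        }
        int end = password.indexOf('}');
        if (end < 0) {
            return true;
        }
        String scheme = password.substring(1, end);
        // If PBKDF2 were missing from KNOWN_SCHEMES, a PBKDF2 hash would be
        // misclassified as plain text, which is the behaviour described above.
        return !KNOWN_SCHEMES.contains(scheme);
    }

    private PlainTextCheckSketch() {
    }
}
{noformat}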



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-6433) Remove baseline plugin configuration referring to oak-core after 1.8 release

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-6433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-6433.
-

bulk close 1.10.0

> Remove baseline plugin configuration referring to oak-core after 1.8 release
> 
>
> Key: OAK-6433
> URL: https://issues.apache.org/jira/browse/OAK-6433
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: api, blob-plugins, composite, core-spi, store-spi
>Reporter: Chetan Mehrotra
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>
> After the 1.8 release and before the 1.9.0 release we need to readjust the 
> baseline plugin configuration. Currently, for the following modules the base 
> artifactId is set to oak-core, as these modules were derived from oak-core as 
> part of the modularization effort (OAK-6346):
> * oak-api
> * oak-blob-plugins
> * oak-core-spi
> * oak-store-spi
> * oak-store-composite
> So, before the 1.9.0 release, these configurations should be removed so that 
> these modules can refer to the last stable release from the 1.8 branch, where 
> the corresponding module has an existing release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7191) update to surefire version compatible with jdk 10

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7191.
-

bulk close 1.10.0

> update to surefire version compatible with jdk 10
> -
>
> Key: OAK-7191
> URL: https://issues.apache.org/jira/browse/OAK-7191
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Reporter: Julian Reschke
>Priority: Minor
> Fix For: 1.10.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7969) Update tika dependency to 1.20

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7969.
-

bulk close 1.10.0

> Update tika dependency to 1.20
> --
>
> Key: OAK-7969
> URL: https://issues.apache.org/jira/browse/OAK-7969
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.10.0, 1.11.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7648) Oak should compile & test on Java 11

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7648.
-

bulk close 1.10.0

> Oak should compile & test on Java 11
> 
>
> Key: OAK-7648
> URL: https://issues.apache.org/jira/browse/OAK-7648
> Project: Jackrabbit Oak
>  Issue Type: Epic
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Major
> Fix For: 1.10.0, 1.9.9
>
>
> (umbrella issue for tracking changes)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7761) SegmentTarWriter#readSegment does not check the return value of FileChannel#read

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7761.
-

bulk close 1.10.0

> SegmentTarWriter#readSegment does not check the return value of 
> FileChannel#read
> 
>
> Key: OAK-7761
> URL: https://issues.apache.org/jira/browse/OAK-7761
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Major
> Fix For: 1.10.0
>
>
> We need to check the return value of {{FileChannel#read}} as that method 
> might read fewer bytes than requested. The corresponding Javadoc is (hidden) 
> at {{ReadableByteChannel#read}}: "A read operation might not fill the buffer, 
> and in fact it might not read any bytes at all"
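The defensive pattern the issue asks for is a read loop; a small sketch (method and class names are illustrative, not the actual {{SegmentTarWriter}} code):

{noformat}
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Illustrative helper: keep calling FileChannel#read until the buffer is
// full, instead of assuming a single call fills it.
final class ReadFully {

    static void readFully(FileChannel channel, ByteBuffer buffer, long position)
            throws IOException {
        long offset = position;
        while (buffer.hasRemaining()) {
            int read = channel.read(buffer, offset);
            if (read < 0) {
                throw new EOFException("Unexpected end of file at offset " + offset);
            }
            offset += read;
        }
    }

    private ReadFully() {
    }
}
{noformat}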



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7762) Store segments off heap when memory mapping is disabled

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7762.
-

bulk close 1.10.0

> Store segments off heap when memory mapping is disabled
> ---
>
> Key: OAK-7762
> URL: https://issues.apache.org/jira/browse/OAK-7762
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Major
> Fix For: 1.10.0
>
>
> I would like to add an experimental feature (disabled by default) allowing us 
> to store segments off-heap also when memory mapping is disabled.
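A minimal sketch of the distinction involved, assuming a simple on/off flag (flag and method names are hypothetical, not the Oak API): off-heap storage keeps segment data in a direct buffer in native memory rather than in a byte[]-backed buffer on the Java heap.

{noformat}
import java.nio.ByteBuffer;

// Illustrative sketch only; flag and method names are hypothetical.
final class SegmentBufferSketch {

    static ByteBuffer allocateSegmentBuffer(int size, boolean offHeap) {
        // Direct buffers live in native memory outside the GC-managed heap;
        // heap buffers are backed by a regular byte[].
        return offHeap ? ByteBuffer.allocateDirect(size) : ByteBuffer.allocate(size);
    }

    private SegmentBufferSketch() {
    }
}
{noformat}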



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7379) Lucene Index: per-column selectivity, assume 5 unique entries

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7379.
-

bulk close 1.10.0

> Lucene Index: per-column selectivity, assume 5 unique entries
> -
>
> Key: OAK-7379
> URL: https://issues.apache.org/jira/browse/OAK-7379
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene, query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>  Labels: candidate_oak_1_8
> Fix For: 1.9.0, 1.10.0
>
>
> Currently, if a query has a property restriction of the form "property = x", 
> and the property is indexed in a Lucene property index, the estimated cost of 
> the index is the number of documents indexed for that property. This is a 
> very conservative estimate: it assumes all documents have the same value. So 
> the cost is relatively high for that index.
> In almost all cases, there are many distinct values for a property. Rarely 
> there are few values, or a skewed distribution where one value contains most 
> documents. But in almost all cases there are more than 5 distinct values.
> I think it makes sense to use 5 as the default value. It is still 
> conservative (cost of the index is high), but much better than now.
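A back-of-the-envelope sketch of the proposed estimate (names are illustrative, not the actual cost-estimation code): divide the document count by an assumed minimum of 5 distinct values.

{noformat}
// Illustrative only: not the actual Lucene index cost estimation code.
final class SelectivitySketch {

    static double estimatedCost(long documentsIndexedForProperty) {
        long assumedDistinctValues = 5; // conservative default proposed above
        return Math.max(1.0, (double) documentsIndexedForProperty / assumedDistinctValues);
    }

    private SelectivitySketch() {
    }
}
{noformat}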



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7882) Inconsistent handling of cloud-prefix causes segment-copy to fail

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7882.
-

bulk close 1.10.0

> Inconsistent handling of cloud-prefix causes segment-copy to fail
> -
>
> Key: OAK-7882
> URL: https://issues.apache.org/jira/browse/OAK-7882
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: oak-run, segment-azure
>Affects Versions: 1.9.10
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: OAK-7882.patch
>
>
> Due to changes in OAK-7812, {{oak-run segment-copy}} now fails because of 
> incorrect parsing of an incomplete custom URI:
> {noformat}
> A problem occured while copying archives from repository/segmentstore/ to 
> az:https://storageaccount.blob.core.windows.net/container/directory
> java.lang.NullPointerException
>   at 
> org.apache.jackrabbit.oak.segment.azure.util.AzureConfigurationParserUtils.parseAzureConfigurationFromUri(AzureConfigurationParserUtils.java:140)
>   at 
> org.apache.jackrabbit.oak.segment.azure.tool.ToolUtils.createCloudBlobDirectory(ToolUtils.java:122)
>   at 
> org.apache.jackrabbit.oak.segment.azure.tool.ToolUtils.newSegmentNodeStorePersistence(ToolUtils.java:98)
>   at 
> org.apache.jackrabbit.oak.segment.azure.tool.SegmentCopy.run(SegmentCopy.java:230)
>   at 
> org.apache.jackrabbit.oak.run.SegmentCopyCommand.execute(SegmentCopyCommand.java:55)
> {noformat}
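For illustration, a hypothetical sketch of the kind of normalization the fix needs (not the actual parser code): accept the target both with and without the {{az:}} prefix before parsing it as a URI.

{noformat}
// Hypothetical helper, not Oak code: tolerate the optional "az:" prefix.
final class CloudPrefixSketch {

    static String stripAzurePrefix(String uri) {
        // "az:https://storageaccount.blob.core.windows.net/container/directory"
        // becomes a plain https URI that can be parsed safely.
        return uri.startsWith("az:") ? uri.substring("az:".length()) : uri;
    }

    private CloudPrefixSketch() {
    }
}
{noformat}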



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-2556) Intermediate commit during async indexing

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-2556.
-

bulk close 1.10.0

> Intermediate commit during async indexing
> -
>
> Key: OAK-2556
> URL: https://issues.apache.org/jira/browse/OAK-2556
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.0.11
>Reporter: Stefan Egli
>Assignee: Thomas Mueller
>Priority: Major
>  Labels: resilience
> Fix For: 1.10.0
>
>
> A recent issue found at a customer reveals a potential problem with the async 
> indexer. Reading AsyncIndexUpdate.updateIndex, it looks like it is doing the 
> entire update of the async indexer *in one go*, i.e. in one commit.
> When, for some reason, there is a huge diff that the async indexer has to 
> process, that one big commit can become gigantic. In fact, there is no limit 
> to the size of the commit.
> So the suggestion is to do intermediate commits while the async indexer is 
> running, as sketched below. This is acceptable because an async index is not 
> 100% up-to-date anyway, so committing after every 100 or 1000 changes would 
> make little practical difference.
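A minimal sketch of that suggestion, using hypothetical interfaces rather than the real {{AsyncIndexUpdate}} API: commit the accumulated changes every N processed items instead of once at the very end.

{noformat}
// Hypothetical types for illustration; not the AsyncIndexUpdate API.
final class IntermediateCommitSketch {

    interface IndexBatch {
        void add(String path);

        void commit();
    }

    static void indexAll(Iterable<String> changedPaths, IndexBatch batch, int commitEvery) {
        int pending = 0;
        for (String path : changedPaths) {
            batch.add(path);
            if (++pending >= commitEvery) {
                batch.commit(); // intermediate commit keeps a single commit from growing unbounded
                pending = 0;
            }
        }
        if (pending > 0) {
            batch.commit(); // final commit for the remainder
        }
    }

    private IntermediateCommitSketch() {
    }
}
{noformat}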



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-5520) Improve index and query documentation

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-5520.
-

bulk close 1.10.0

> Improve index and query documentation
> -
>
> Key: OAK-5520
> URL: https://issues.apache.org/jira/browse/OAK-5520
> Project: Jackrabbit Oak
>  Issue Type: Documentation
>  Components: lucene, query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.10.0
>
>
> The Oak index and query documentation needs to be improved:
> * step-by-step descriptions of how to fix problems (for example from slow 
> queries to fast queries)
> * checklists
> * index corruption vs. merely _perceived_ corruption
> * checkpoints
> * text extraction
> * link to tools, such as the Oak Lucene Index Definition Generator at 
> http://oakutils.appspot.com/generate/index
> * indexing and reindexing: when it is needed, how to do it, how long it 
> takes, and how to stop it
> * document currently undocumented features (for example Lucene index, 
> notNullCheckEnabled)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7962) FV reranking should be enabled by default

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7962.
-

bulk close 1.10.0

> FV reranking should be enabled by default
> -
>
> Key: OAK-7962
> URL: https://issues.apache.org/jira/browse/OAK-7962
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: lucene
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
>Priority: Major
> Fix For: 1.10.0, 1.11.0
>
>
> In order to improve the precision of the search results, reranking should be 
> enabled by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (OAK-7909) Backport and validate OAK-7867 to Oak 1.6

2019-01-16 Thread Davide Giannella (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-7909.
-

bulk close 1.6.16

> Backport and validate OAK-7867 to Oak 1.6
> -
>
> Key: OAK-7909
> URL: https://issues.apache.org/jira/browse/OAK-7909
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Blocker
>  Labels: TarMK
> Fix For: 1.6.16
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7980) Build Jackrabbit Oak #1881 failed

2019-01-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16743784#comment-16743784
 ] 

Hudson commented on OAK-7980:
-

Previously failing build now is OK.
 Passed run: [Jackrabbit Oak 
#1890|https://builds.apache.org/job/Jackrabbit%20Oak/1890/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1890/console]

> Build Jackrabbit Oak #1881 failed
> -
>
> Key: OAK-7980
> URL: https://issues.apache.org/jira/browse/OAK-7980
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1881 has failed.
> First failed run: [Jackrabbit Oak 
> #1881|https://builds.apache.org/job/Jackrabbit%20Oak/1881/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1881/console]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)