[jira] [Commented] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16427959#comment-16427959
 ] 

Amit Jain commented on OAK-7389:


I will go ahead with the separate-call approach for updates and have created 
OAK-7392 to update trunk/1.8 with the upsert API.

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> candidate_oak_1_6, candidate_oak_1_8
> Fix For: 1.9.0, 1.10
>
> Attachments: OAK-7389-v1.patch, OAK-7389-v2.patch
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws IOException {
>     String id = StringUtils.convertBytesToHex(digest);
>     cache.put(id, data);
>     // Check if it already exists?
>     MongoBlob mongoBlob = new MongoBlob();
>     mongoBlob.setId(id);
>     mongoBlob.setData(data);
>     mongoBlob.setLevel(level);
>     mongoBlob.setLastMod(System.currentTimeMillis());
>     // TODO check the return value
>     // TODO verify insert is fast if the entry already exists
>     try {
>         getBlobCollection().insertOne(mongoBlob);
>     } catch (DuplicateKeyException e) {
>         // the same block was already stored before: ignore
>     } catch (MongoException e) {
>         if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
>             // the same block was already stored before: ignore
>         } else {
>             throw new IOException(e.getMessage(), e);
>         }
>     }
> }
> {code}
> FileBlobStore also returns early if the file already exists, without 
> updating the timestamp:
> {code:java}
> @Override
> protected synchronized void storeBlock(byte[] digest, int level, byte[] data) throws IOException {
>     File f = getFile(digest, false);
>     if (f.exists()) {
>         return;
>     }
>     .
> {code}
> The above would cause data loss in DSGC if there are updates to blob blocks 
> that are resurrected (stored again at the time of DSGC), because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]
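The FileBlobStore half of the fix can be sketched as follows. This is a minimal illustration of the idea (refresh the timestamp instead of returning early), using invented class and method names; it is not the actual patch:

```java
import java.io.File;

public class TouchOnDuplicate {
    // Sketch: when the block file already exists, refresh its last-modified
    // time instead of returning early, so DSGC sees the blob as recently used.
    // Names are illustrative, not the actual FileBlobStore change.
    public static boolean touchIfExists(File f, long nowMillis) {
        if (f.exists()) {
            // the original code did a plain "return;" here, leaving a stale timestamp
            return f.setLastModified(nowMillis);
        }
        return false; // caller proceeds with the normal write path
    }
}
```

A real fix would also have to handle `setLastModified` returning false (e.g. on a read-only filesystem).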



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7392) Use upsert in MongoBlobStore to update timestamps for existing blobs

2018-04-05 Thread Amit Jain (JIRA)
Amit Jain created OAK-7392:
--

 Summary: Use upsert in MongoBlobStore to update timestamps for 
existing blobs
 Key: OAK-7392
 URL: https://issues.apache.org/jira/browse/OAK-7392
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: blob, documentmk
Reporter: Amit Jain
Assignee: Amit Jain
 Fix For: 1.9.0, 1.10


As described in OAK-7389 [1], we can use the new upsert APIs to update/create 
blobs in the MongoBlobStore.

 

[1] 
https://issues.apache.org/jira/browse/OAK-7389?focusedCommentId=16426961&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16426961
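The behavioural difference between the current insert-and-ignore call and the proposed upsert can be illustrated with an in-memory map standing in for the Mongo collection; all names in this sketch are invented:

```java
import java.util.HashMap;
import java.util.Map;

public class UpsertSketch {
    static final class Blob {
        final byte[] data;
        long lastMod;
        Blob(byte[] data, long lastMod) { this.data = data; this.lastMod = lastMod; }
    }

    private final Map<String, Blob> store = new HashMap<>();

    // insert-and-ignore: an existing entry keeps its stale lastMod (the bug)
    public void insertIgnoring(String id, byte[] data, long now) {
        store.putIfAbsent(id, new Blob(data, now));
    }

    // upsert: create if absent, otherwise refresh lastMod (the proposed fix)
    public void upsert(String id, byte[] data, long now) {
        Blob b = store.get(id);
        if (b == null) {
            store.put(id, new Blob(data, now));
        } else {
            b.lastMod = now;
        }
    }

    public long lastMod(String id) { return store.get(id).lastMod; }
}
```

With upsert, a re-stored block always ends up with a fresh timestamp, which is exactly what DSGC needs to avoid collecting it.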





[jira] [Updated] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-7389:
---
Fix Version/s: (was: 1.2.30)
   1.10
   1.9.0

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> candidate_oak_1_6, candidate_oak_1_8
> Fix For: 1.9.0, 1.10
>
> Attachments: OAK-7389-v1.patch, OAK-7389-v2.patch
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws IOException {
>     String id = StringUtils.convertBytesToHex(digest);
>     cache.put(id, data);
>     // Check if it already exists?
>     MongoBlob mongoBlob = new MongoBlob();
>     mongoBlob.setId(id);
>     mongoBlob.setData(data);
>     mongoBlob.setLevel(level);
>     mongoBlob.setLastMod(System.currentTimeMillis());
>     // TODO check the return value
>     // TODO verify insert is fast if the entry already exists
>     try {
>         getBlobCollection().insertOne(mongoBlob);
>     } catch (DuplicateKeyException e) {
>         // the same block was already stored before: ignore
>     } catch (MongoException e) {
>         if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
>             // the same block was already stored before: ignore
>         } else {
>             throw new IOException(e.getMessage(), e);
>         }
>     }
> }
> {code}
> FileBlobStore also returns early if the file already exists, without 
> updating the timestamp:
> {code:java}
> @Override
> protected synchronized void storeBlock(byte[] digest, int level, byte[] data) throws IOException {
>     File f = getFile(digest, false);
>     if (f.exists()) {
>         return;
>     }
>     .
> {code}
> The above would cause data loss in DSGC if there are updates to blob blocks 
> that are resurrected (stored again at the time of DSGC), because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]





[jira] [Updated] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-7389:
---
Labels: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4 
candidate_oak_1_6 candidate_oak_1_8  (was: )

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> candidate_oak_1_6, candidate_oak_1_8
> Fix For: 1.2.30
>
> Attachments: OAK-7389-v1.patch, OAK-7389-v2.patch
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws IOException {
>     String id = StringUtils.convertBytesToHex(digest);
>     cache.put(id, data);
>     // Check if it already exists?
>     MongoBlob mongoBlob = new MongoBlob();
>     mongoBlob.setId(id);
>     mongoBlob.setData(data);
>     mongoBlob.setLevel(level);
>     mongoBlob.setLastMod(System.currentTimeMillis());
>     // TODO check the return value
>     // TODO verify insert is fast if the entry already exists
>     try {
>         getBlobCollection().insertOne(mongoBlob);
>     } catch (DuplicateKeyException e) {
>         // the same block was already stored before: ignore
>     } catch (MongoException e) {
>         if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
>             // the same block was already stored before: ignore
>         } else {
>             throw new IOException(e.getMessage(), e);
>         }
>     }
> }
> {code}
> FileBlobStore also returns early if the file already exists, without 
> updating the timestamp:
> {code:java}
> @Override
> protected synchronized void storeBlock(byte[] digest, int level, byte[] data) throws IOException {
>     File f = getFile(digest, false);
>     if (f.exists()) {
>         return;
>     }
>     .
> {code}
> The above would cause data loss in DSGC if there are updates to blob blocks 
> that are resurrected (stored again at the time of DSGC), because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]





[jira] [Created] (OAK-7391) Build Jackrabbit Oak #1358 failed

2018-04-05 Thread Hudson (JIRA)
Hudson created OAK-7391:
---

 Summary: Build Jackrabbit Oak #1358 failed
 Key: OAK-7391
 URL: https://issues.apache.org/jira/browse/OAK-7391
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: continuous integration
Reporter: Hudson


No description is provided

The build Jackrabbit Oak #1358 has failed.
First failed run: [Jackrabbit Oak 
#1358|https://builds.apache.org/job/Jackrabbit%20Oak/1358/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1358/console]





[jira] [Commented] (OAK-7288) Change default JAAS ranking of ExternalLoginModuleFactory

2018-04-05 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16427159#comment-16427159
 ] 

angela commented on OAK-7288:
-

[~chaotic], thanks for the patch. However, it doesn't come with any test 
coverage or a documentation update explaining the change. Also, you stated this 
would be a performance improvement: why? And if it is, you should also provide 
some benchmarks illustrating the improvement.

> Change default JAAS ranking of ExternalLoginModuleFactory
> -
>
> Key: OAK-7288
> URL: https://issues.apache.org/jira/browse/OAK-7288
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Reporter: Lars Krapf
>Priority: Minor
> Attachments: oak-external-auth.patch
>
>
> In order to improve performance for the (most common?) use case of mostly 
> external users, and to allow potential SSO modules to work with internal and 
> external users OOTB I propose to change the default JAAS ranking of the 
> ExternalLoginModuleFactory to be higher than the one of the default module 
> (e.g 150). 





[jira] [Commented] (OAK-5122) Exercise for Custom Authorization Models

2018-04-05 Thread Alex Deparvu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16427113#comment-16427113
 ] 

Alex Deparvu commented on OAK-5122:
---

fixed javadocs http://svn.apache.org/viewvc?rev=1828445&view=rev

> Exercise for Custom Authorization Models
> 
>
> Key: OAK-5122
> URL: https://issues.apache.org/jira/browse/OAK-5122
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: exercise
>Reporter: angela
>Assignee: angela
>Priority: Minor
>
> Within the _oak-exercise_ module we should have some code illustrating how a 
> custom authorization model can be written and deployed. This should go along 
> with some exercise to extend/complete/practice with the sample code. The 
> proposed example could e.g. illustrate 
> - a simplified role-based authorization model or 
> - a variant of the _oak-authorization-cug_ that denies access for principals 
> from a specific country.
> Ideally, we would be able to extract the steps required to write the example 
> and update _oak-doc_ with additional instructions on how to build custom 
> authorization models.





[jira] [Commented] (OAK-7384) SegmentNodeStoreStats should expose stats for previous minute per thread group

2018-04-05 Thread Alex Deparvu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16427109#comment-16427109
 ] 

Alex Deparvu commented on OAK-7384:
---

fixed javadocs http://svn.apache.org/viewvc?rev=1828444&view=rev

> SegmentNodeStoreStats should expose stats for previous minute per thread group
> --
>
> Key: OAK-7384
> URL: https://issues.apache.org/jira/browse/OAK-7384
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Minor
>  Labels: tooling
> Fix For: 1.9.0, 1.10
>
> Attachments: OAK-7384.patch
>
>
> The current "CommitsCountPerWriter" stats exposed by 
> {{SegmentNodeStoreStats}} are hard to follow since there can be too many 
> writers at a time. To improve this, a more coarse-grained version of this 
> metric should be added, in which commits are recorded for groups of threads. 
> The groups should be configurable and represent regexes to be matched by 
> individual thread names. An additional group (i.e. "other") will group all 
> threads not matching any of the defined group regexes. 
> The current behaviour will be split in two:
> * "CommitsCountOtherThreads" will expose a snapshot of threads currently in 
> "other" group
> * "CommitsCountPerGroup" will expose an aggregate of commits count per thread 
> group for the previous minute.
> Both metrics will be reset each minute.
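The grouping described above can be sketched roughly as follows; the group names and regexes are illustrative, and this is not the actual SegmentNodeStoreStats code:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class ThreadGroupMatcher {
    // Configured group name -> compiled regex, checked in insertion order.
    private final Map<String, Pattern> groups = new LinkedHashMap<>();

    public ThreadGroupMatcher(Map<String, String> groupRegexes) {
        groupRegexes.forEach((name, regex) -> groups.put(name, Pattern.compile(regex)));
    }

    // A thread name is assigned to the first group whose regex matches it;
    // anything unmatched falls into the catch-all "other" group.
    public String groupFor(String threadName) {
        for (Map.Entry<String, Pattern> e : groups.entrySet()) {
            if (e.getValue().matcher(threadName).matches()) {
                return e.getKey();
            }
        }
        return "other";
    }
}
```

Commit counts would then be aggregated per returned group name rather than per individual writer thread.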





[jira] [Comment Edited] (OAK-7024) java.security.acl deprecated in Java 10, marked for removal in Java 12

2018-04-05 Thread Alex Deparvu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16427023#comment-16427023
 ] 

Alex Deparvu edited comment on OAK-7024 at 4/5/18 3:27 PM:
---

sorry for the noise, fixed javadocs with 
http://svn.apache.org/viewvc?rev=1828437&view=rev and 
http://svn.apache.org/viewvc?rev=1828443&view=rev



was (Author: alex.parvulescu):
sorry for the noise, fixed javadocs with 
http://svn.apache.org/viewvc?rev=1828437&view=rev

> java.security.acl deprecated in Java 10, marked for removal in Java 12
> --
>
> Key: OAK-7024
> URL: https://issues.apache.org/jira/browse/OAK-7024
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: security
>Reporter: Julian Reschke
>Assignee: Alex Deparvu
>Priority: Major
> Fix For: 1.9.0, 1.10
>
>
> See  and 
> .
> Need to understand how this affects public Oak APIs, and what to do with them 
> on Java 11 (which will be an LTS release we probably need to support with Oak 
> 1.10).
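For illustration, one direction for removing the dependency is a group type built only on java.security.Principal; this sketch uses assumed names and is not the actual Oak change:

```java
import java.security.Principal;
import java.util.HashSet;
import java.util.Set;

// Sketch of a java.security.acl.Group replacement that depends only on
// java.security.Principal (which is not deprecated). Names are illustrative.
public class PrincipalGroup implements Principal {
    private final String name;
    private final Set<Principal> members = new HashSet<>();

    public PrincipalGroup(String name) { this.name = name; }

    @Override
    public String getName() { return name; }

    public boolean addMember(Principal p) { return members.add(p); }

    public boolean isMember(Principal p) { return members.contains(p); }
}
```

Public APIs currently typed against `java.security.acl.Group` would need a replacement interface along these lines before Java 12 removes the package.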





[jira] [Comment Edited] (OAK-7268) document store: create charset encoding utility that detects malformed input

2018-04-05 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364331#comment-16364331
 ] 

Julian Reschke edited comment on OAK-7268 at 4/5/18 2:57 PM:
-

trunk: [r1828439|http://svn.apache.org/r1828439] 
[r1824255|http://svn.apache.org/r1824255] 
[r1824253|http://svn.apache.org/r1824253]


was (Author: reschke):
trunk: [r1824255|http://svn.apache.org/r1824255] 
[r1824253|http://svn.apache.org/r1824253]

> document store: create charset encoding utility that detects malformed input
> 
>
> Key: OAK-7268
> URL: https://issues.apache.org/jira/browse/OAK-7268
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.9.0, 1.10
>
>
> For now in segment-tar; might be moved later on. 
> Include test for:
> - wellformed input
> - malformed input
> - multi-threaded encoding
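A detector of the kind described can be sketched with the JDK's CharsetDecoder configured to report malformed input instead of silently replacing it; this is an illustrative sketch, not the actual Oak utility:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class StrictUtf8 {
    // Returns true only if the bytes are well-formed UTF-8. REPORT makes the
    // decoder throw on malformed or unmappable input rather than substituting
    // U+FFFD replacement characters.
    public static boolean isWellFormedUtf8(byte[] bytes) {
        try {
            StandardCharsets.UTF_8.newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT)
                    .onUnmappableCharacter(CodingErrorAction.REPORT)
                    .decode(ByteBuffer.wrap(bytes));
            return true;
        } catch (CharacterCodingException e) {
            return false;
        }
    }
}
```

Note that decoders are not thread-safe, so a multi-threaded utility would need a fresh decoder per call (as above) or per thread.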





[jira] [Updated] (OAK-7358) Remove all usage of java.security.acl.Group for Java 12

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu updated OAK-7358:
--
Fix Version/s: 1.10

> Remove all usage of java.security.acl.Group for Java 12
> ---
>
> Key: OAK-7358
> URL: https://issues.apache.org/jira/browse/OAK-7358
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: security
>Reporter: Alex Deparvu
>Assignee: Alex Deparvu
>Priority: Major
> Fix For: 1.9.0, 1.10
>
>
> Followup of OAK-7024 for the actual removal of the Group class from the 
> codebase to be java 11 compliant.
> Not sure what to use for 'fix version'; I went with 1.9.0 so this remains on 
> the radar, but we can push it out as needed.





[jira] [Updated] (OAK-7358) Remove all usage of java.security.acl.Group for Java 12

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu updated OAK-7358:
--
Summary: Remove all usage of java.security.acl.Group for Java 12  (was: 
Remove all usage of java.security.acl.Group for Java 1)

> Remove all usage of java.security.acl.Group for Java 12
> ---
>
> Key: OAK-7358
> URL: https://issues.apache.org/jira/browse/OAK-7358
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: security
>Reporter: Alex Deparvu
>Assignee: Alex Deparvu
>Priority: Major
> Fix For: 1.9.0, 1.10
>
>
> Followup of OAK-7024 for the actual removal of the Group class from the 
> codebase to be java 11 compliant.
> Not sure what to use for 'fix version'; I went with 1.9.0 so this remains on 
> the radar, but we can push it out as needed.





[jira] [Updated] (OAK-7358) Remove all usage of java.security.acl.Group for Java 1

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu updated OAK-7358:
--
Summary: Remove all usage of java.security.acl.Group for Java 1  (was: 
Remove all usage of java.security.acl.Group for Java 11)

> Remove all usage of java.security.acl.Group for Java 1
> --
>
> Key: OAK-7358
> URL: https://issues.apache.org/jira/browse/OAK-7358
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: security
>Reporter: Alex Deparvu
>Assignee: Alex Deparvu
>Priority: Major
> Fix For: 1.9.0
>
>
> Followup of OAK-7024 for the actual removal of the Group class from the 
> codebase to be java 11 compliant.
> Not sure what to use for 'fix version'; I went with 1.9.0 so this remains on 
> the radar, but we can push it out as needed.





[jira] [Resolved] (OAK-7373) Build failure: OutOfMemoryError

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu resolved OAK-7373.
---
Resolution: Cannot Reproduce

logs are gone

> Build failure: OutOfMemoryError
> ---
>
> Key: OAK-7373
> URL: https://issues.apache.org/jira/browse/OAK-7373
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1334 has failed.
> First failed run: [Jackrabbit Oak 
> #1334|https://builds.apache.org/job/Jackrabbit%20Oak/1334/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1334/console]





[jira] [Resolved] (OAK-6788) java.io.IOException: Backing channel 'ubuntu-1' is disconnected.

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu resolved OAK-6788.
---
   Resolution: Cannot Reproduce
Fix Version/s: (was: 1.10)
   (was: 1.9.0)

> java.io.IOException: Backing channel 'ubuntu-1' is disconnected.
> 
>
> Key: OAK-6788
> URL: https://issues.apache.org/jira/browse/OAK-6788
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #842 has failed.
> First failed run: [Jackrabbit Oak 
> #842|https://builds.apache.org/job/Jackrabbit%20Oak/842/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/842/console]





[jira] [Commented] (OAK-7024) java.security.acl deprecated in Java 10, marked for removal in Java 12

2018-04-05 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16427047#comment-16427047
 ] 

Julian Reschke commented on OAK-7024:
-

Note that the removal has been rescheduled for Java 12.

> java.security.acl deprecated in Java 10, marked for removal in Java 12
> --
>
> Key: OAK-7024
> URL: https://issues.apache.org/jira/browse/OAK-7024
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: security
>Reporter: Julian Reschke
>Assignee: Alex Deparvu
>Priority: Major
> Fix For: 1.9.0, 1.10
>
>
> See  and 
> .
> Need to understand how this affects public Oak APIs, and what to do with them 
> on Java 11 (which will be an LTS release we probably need to support with Oak 
> 1.10).





[jira] [Closed] (OAK-6788) java.io.IOException: Backing channel 'ubuntu-1' is disconnected.

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu closed OAK-6788.
-

> java.io.IOException: Backing channel 'ubuntu-1' is disconnected.
> 
>
> Key: OAK-6788
> URL: https://issues.apache.org/jira/browse/OAK-6788
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #842 has failed.
> First failed run: [Jackrabbit Oak 
> #842|https://builds.apache.org/job/Jackrabbit%20Oak/842/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/842/console]





[jira] [Resolved] (OAK-7034) Build failure: NPE in JiraCreateIssueNotifier

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu resolved OAK-7034.
---
   Resolution: Cannot Reproduce
Fix Version/s: (was: 1.10)
   (was: 1.9.0)

> Build failure: NPE in JiraCreateIssueNotifier
> -
>
> Key: OAK-7034
> URL: https://issues.apache.org/jira/browse/OAK-7034
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1058 has failed.
> First failed run: [Jackrabbit Oak 
> #1058|https://builds.apache.org/job/Jackrabbit%20Oak/1058/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1058/console]
> {noformat}
> ERROR: Build step failed with exception
> java.lang.NullPointerException
>   at hudson.plugins.jira.JiraSession.createIssue(JiraSession.java:415)
>   at hudson.plugins.jira.JiraCreateIssueNotifier.createJiraIssue(JiraCreateIssueNotifier.java:200)
>   at hudson.plugins.jira.JiraCreateIssueNotifier.currentBuildResultFailure(JiraCreateIssueNotifier.java:356)
>   at hudson.plugins.jira.JiraCreateIssueNotifier.perform(JiraCreateIssueNotifier.java:155)
>   at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:45)
>   at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:736)
>   at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:682)
>   at hudson.maven.MavenModuleSetBuild$MavenModuleSetBuildExecution.post2(MavenModuleSetBuild.java:1073)
>   at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:627)
>   at hudson.model.Run.execute(Run.java:1762)
>   at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:543)
>   at hudson.model.ResourceController.execute(ResourceController.java:97)
>   at hudson.model.Executor.run(Executor.java:419)
> {noformat}





[jira] [Closed] (OAK-7034) Build failure: NPE in JiraCreateIssueNotifier

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu closed OAK-7034.
-

> Build failure: NPE in JiraCreateIssueNotifier
> -
>
> Key: OAK-7034
> URL: https://issues.apache.org/jira/browse/OAK-7034
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1058 has failed.
> First failed run: [Jackrabbit Oak 
> #1058|https://builds.apache.org/job/Jackrabbit%20Oak/1058/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1058/console]
> {noformat}
> ERROR: Build step failed with exception
> java.lang.NullPointerException
>   at hudson.plugins.jira.JiraSession.createIssue(JiraSession.java:415)
>   at hudson.plugins.jira.JiraCreateIssueNotifier.createJiraIssue(JiraCreateIssueNotifier.java:200)
>   at hudson.plugins.jira.JiraCreateIssueNotifier.currentBuildResultFailure(JiraCreateIssueNotifier.java:356)
>   at hudson.plugins.jira.JiraCreateIssueNotifier.perform(JiraCreateIssueNotifier.java:155)
>   at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:45)
>   at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:736)
>   at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:682)
>   at hudson.maven.MavenModuleSetBuild$MavenModuleSetBuildExecution.post2(MavenModuleSetBuild.java:1073)
>   at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:627)
>   at hudson.model.Run.execute(Run.java:1762)
>   at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:543)
>   at hudson.model.ResourceController.execute(ResourceController.java:97)
>   at hudson.model.Executor.run(Executor.java:419)
> {noformat}





[jira] [Updated] (OAK-7024) java.security.acl deprecated in Java 10, marked for removal in Java 12

2018-04-05 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7024:

Summary: java.security.acl deprecated in Java 10, marked for removal in 
Java 12  (was: java.security.acl deprecated in Java 10, marked for removal in 
Java 11)

> java.security.acl deprecated in Java 10, marked for removal in Java 12
> --
>
> Key: OAK-7024
> URL: https://issues.apache.org/jira/browse/OAK-7024
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: security
>Reporter: Julian Reschke
>Assignee: Alex Deparvu
>Priority: Major
> Fix For: 1.9.0, 1.10
>
>
> See  and 
> .
> Need to understand how this affects public Oak APIs, and what to do with them 
> on Java 11 (which will be an LTS release we probably need to support with Oak 
> 1.10).





[jira] [Closed] (OAK-7383) Build Jackrabbit Oak #1346 failed

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu closed OAK-7383.
-

> Build Jackrabbit Oak #1346 failed
> -
>
> Key: OAK-7383
> URL: https://issues.apache.org/jira/browse/OAK-7383
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1346 has failed.
> First failed run: [Jackrabbit Oak 
> #1346|https://builds.apache.org/job/Jackrabbit%20Oak/1346/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1346/console]





[jira] [Resolved] (OAK-7383) Build Jackrabbit Oak #1346 failed

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu resolved OAK-7383.
---
Resolution: Cannot Reproduce

> Build Jackrabbit Oak #1346 failed
> -
>
> Key: OAK-7383
> URL: https://issues.apache.org/jira/browse/OAK-7383
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1346 has failed.
> First failed run: [Jackrabbit Oak 
> #1346|https://builds.apache.org/job/Jackrabbit%20Oak/1346/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1346/console]





[jira] [Closed] (OAK-7386) Build failure: ExecutionException: Invalid object ID

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu closed OAK-7386.
-

> Build failure: ExecutionException: Invalid object ID
> 
>
> Key: OAK-7386
> URL: https://issues.apache.org/jira/browse/OAK-7386
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1348 has failed.
> First failed run: [Jackrabbit Oak 
> #1348|https://builds.apache.org/job/Jackrabbit%20Oak/1348/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1348/console]





[jira] [Closed] (OAK-7352) Build failure: ExecutionException: Invalid object ID

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu closed OAK-7352.
-

> Build failure: ExecutionException: Invalid object ID
> 
>
> Key: OAK-7352
> URL: https://issues.apache.org/jira/browse/OAK-7352
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1316 has failed.
> First failed run: [Jackrabbit Oak 
> #1316|https://builds.apache.org/job/Jackrabbit%20Oak/1316/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1316/console]





[jira] [Resolved] (OAK-7386) Build failure: ExecutionException: Invalid object ID

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu resolved OAK-7386.
---
Resolution: Cannot Reproduce

> Build failure: ExecutionException: Invalid object ID
> 
>
> Key: OAK-7386
> URL: https://issues.apache.org/jira/browse/OAK-7386
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1348 has failed.
> First failed run: [Jackrabbit Oak 
> #1348|https://builds.apache.org/job/Jackrabbit%20Oak/1348/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1348/console]





[jira] [Updated] (OAK-7386) Build failure: ExecutionException: Invalid object ID

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu updated OAK-7386:
--
Summary: Build failure: ExecutionException: Invalid object ID  (was: Build 
Jackrabbit Oak #1348 failed)

> Build failure: ExecutionException: Invalid object ID
> 
>
> Key: OAK-7386
> URL: https://issues.apache.org/jira/browse/OAK-7386
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1348 has failed.
> First failed run: [Jackrabbit Oak 
> #1348|https://builds.apache.org/job/Jackrabbit%20Oak/1348/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1348/console]





[jira] [Resolved] (OAK-7352) Build failure: ExecutionException: Invalid object ID

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu resolved OAK-7352.
---
Resolution: Cannot Reproduce

> Build failure: ExecutionException: Invalid object ID
> 
>
> Key: OAK-7352
> URL: https://issues.apache.org/jira/browse/OAK-7352
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1316 has failed.
> First failed run: [Jackrabbit Oak 
> #1316|https://builds.apache.org/job/Jackrabbit%20Oak/1316/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1316/console]





[jira] [Commented] (OAK-7386) Build Jackrabbit Oak #1348 failed

2018-04-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427030#comment-16427030
 ] 

Hudson commented on OAK-7386:
-

The previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#1356|https://builds.apache.org/job/Jackrabbit%20Oak/1356/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1356/console]

> Build Jackrabbit Oak #1348 failed
> -
>
> Key: OAK-7386
> URL: https://issues.apache.org/jira/browse/OAK-7386
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1348 has failed.
> First failed run: [Jackrabbit Oak 
> #1348|https://builds.apache.org/job/Jackrabbit%20Oak/1348/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1348/console]





[jira] [Resolved] (OAK-7024) java.security.acl deprecated in Java 10, marked for removal in Java 11

2018-04-05 Thread Alex Deparvu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Deparvu resolved OAK-7024.
---
Resolution: Fixed

sorry for the noise, fixed javadocs with 
http://svn.apache.org/viewvc?rev=1828437=rev

> java.security.acl deprecated in Java 10, marked for removal in Java 11
> --
>
> Key: OAK-7024
> URL: https://issues.apache.org/jira/browse/OAK-7024
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: security
>Reporter: Julian Reschke
>Assignee: Alex Deparvu
>Priority: Major
> Fix For: 1.9.0, 1.10
>
>
> See  and 
> .
> Need to understand how this affects public Oak APIs, and what to do with them 
> on Java 11 (which will be an LTS release we probably need to support with Oak 
> 1.10).





[jira] [Commented] (OAK-7388) MergingNodeStateDiff may recreate nodes that were previously removed to resolve conflicts

2018-04-05 Thread Alex Deparvu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427015#comment-16427015
 ] 

Alex Deparvu commented on OAK-7388:
---

+1 good stuff!

> MergingNodeStateDiff may recreate nodes that were previously removed to 
> resolve conflicts
> -
>
> Key: OAK-7388
> URL: https://issues.apache.org/jira/browse/OAK-7388
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Francesco Mari
>Assignee: Francesco Mari
>Priority: Major
> Fix For: 1.9.0, 1.10
>
>
> {{MergingNodeStateDiff}} might behave incorrectly when the resolution of a 
> conflict involves the deletion of the conflicting node. I spotted this issue 
> in a use case that can be expressed by the following code.
> {noformat}
> NodeState root = EmptyNodeState.EMPTY_NODE;
> NodeState withProperty;
> {
> NodeBuilder builder = root.builder();
> builder.child("c").setProperty("foo", "bar");
> withProperty = builder.getNodeState();
> }
> NodeState withUpdatedProperty;
> {
> NodeBuilder builder = withProperty.builder();
> builder.child("c").setProperty("foo", "baz");
> withUpdatedProperty = builder.getNodeState();
> }
> NodeState withRemovedChild;
> {
> NodeBuilder builder = withProperty.builder();
> builder.child("c").remove();
> withRemovedChild = builder.getNodeState();
> }
> NodeBuilder mergedBuilder = withUpdatedProperty.builder();
> withRemovedChild.compareAgainstBaseState(withProperty, new 
> ConflictAnnotatingRebaseDiff(mergedBuilder));
> NodeState merged = 
> ConflictHook.of(DefaultThreeWayConflictHandler.OURS).processCommit(
>   mergedBuilder.getBaseState(), 
>   mergedBuilder.getNodeState(), 
>   CommitInfo.EMPTY
> );
> assertFalse(merged.hasChildNode("c"));
> {noformat}
> The assertion at the end of the code fails because `merged` actually has a 
> child node named `c`, and `c` is an empty node. After digging into the issue, 
> I figured out that the problem is caused by the following steps.
> # {{MergingNodeStateDiff#childNodeAdded}} is invoked because of 
> {{:conflicts}}. This eventually results in the deletion of the conflicting 
> child node.
> # {{MergingNodeStateDiff#childNodeChanged}} is called because in 
> {{ModifiedNodeState#compareAgainstBaseState}} the children are compared with 
> the {{!=}} operator instead of using {{Object#equals}}.
> # {{org.apache.jackrabbit.oak.spi.state.NodeBuilder#child}} is called in 
> order to set up a new {{MergingNodeStateDiff}} to descend into the subtree 
> that was detected as modified.
> # {{MemoryNodeBuilder#hasChildNode}} correctly returns {{false}}, because the 
> child was removed in step 1. The return value of {{false}} triggers the next 
> step.
> # {{MemoryNodeBuilder#setChildNode(java.lang.String)}} is invoked in order to 
> set up a new, empty child node.
> In other words, the snippet above can be rewritten like the following.
> {noformat}
> NodeState root = EmptyNodeState.EMPTY_NODE;
> NodeState withProperty;
> {
> NodeBuilder builder = root.builder();
> builder.child("c").setProperty("foo", "bar");
> withProperty = builder.getNodeState();
> }
> NodeState withUpdatedProperty;
> {
> NodeBuilder builder = withProperty.builder();
> builder.child("c").setProperty("foo", "baz");
> withUpdatedProperty = builder.getNodeState();
> }
> NodeState withRemovedChild;
> {
> NodeBuilder builder = withProperty.builder();
> builder.child("c").remove();
> withRemovedChild = builder.getNodeState();
> }
> NodeBuilder mergedBuilder = withUpdatedProperty.builder();
> // As per MergingNodeStateDiff.childNodeAdded()
> mergedBuilder.child("c").remove();
> // As per ModifiedNodeState#compareAgainstBaseState()
> if (withUpdatedProperty.getChildNode("c") != 
> withRemovedChild.getChildNode("c")) {
> // As per MergingNodeStateDiff.childNodeChanged()
> mergedBuilder.child("c");
> }
> NodeState merged = mergedBuilder.getNodeState();
> assertFalse(merged.hasChildNode("c"));
> {noformat}
> The end result is that {{MergingNodeStateDiff}} inadvertently adds the node 
> that was removed in order to resolve a conflict.
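The identity-vs-equality distinction in step 2 can be reproduced outside Oak: two content-equal objects built independently are different instances, so {{!=}} reports a spurious difference while {{equals}} does not. A minimal, self-contained Java sketch of that pitfall, with plain strings standing in for node states ({{IdentityVsEquals}} is a hypothetical name, not Oak code):

{code:java}
public class IdentityVsEquals {
    public static void main(String[] args) {
        // Two content-equal values built independently, like two NodeState
        // instances produced by separate builders.
        String base = new String("c:{foo=bar}");
        String rebased = new String("c:{foo=bar}");

        // Identity comparison: reports a difference even though the
        // content is the same.
        System.out.println(base != rebased);       // true

        // Content comparison: correctly reports no difference.
        System.out.println(!base.equals(rebased)); // false
    }
}
{code}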





[jira] [Reopened] (OAK-7024) java.security.acl deprecated in Java 10, marked for removal in Java 11

2018-04-05 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke reopened OAK-7024:
-

Re-opened because of javadoc issue, see comment.

> java.security.acl deprecated in Java 10, marked for removal in Java 11
> --
>
> Key: OAK-7024
> URL: https://issues.apache.org/jira/browse/OAK-7024
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: security
>Reporter: Julian Reschke
>Assignee: Alex Deparvu
>Priority: Major
> Fix For: 1.9.0, 1.10
>
>
> See  and 
> .
> Need to understand how this affects public Oak APIs, and what to do with them 
> on Java 11 (which will be an LTS release we probably need to support with Oak 
> 1.10).





[jira] [Commented] (OAK-7024) java.security.acl deprecated in Java 10, marked for removal in Java 11

2018-04-05 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426976#comment-16426976
 ] 

Julian Reschke commented on OAK-7024:
-

[~stillalex], it seems the javadoc is now broken:

{noformat}
[ERROR] 
C:\projects\apache\oak\trunk\oak-security-spi\src\main\java\org\apache\jackrabbit\oak\spi\security\principal\PrincipalProvider.java:79:
 error: reference not found
[ERROR]  * returned in the iterator {@link 
GroupPrincipal#isMember(Principal)}
[ERROR]^
[ERROR] 
C:\projects\apache\oak\trunk\oak-security-spi\src\main\java\org\apache\jackrabbit\oak\spi\security\principal\PrincipalProvider.java:88:
 error: reference not found
[ERROR]  * @see GroupPrincipal#isMember(java.security.Principal)
[ERROR] ^
{noformat}

(try with {{mvn javadoc:jar}})



> java.security.acl deprecated in Java 10, marked for removal in Java 11
> --
>
> Key: OAK-7024
> URL: https://issues.apache.org/jira/browse/OAK-7024
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: security
>Reporter: Julian Reschke
>Assignee: Alex Deparvu
>Priority: Major
> Fix For: 1.9.0, 1.10
>
>
> See  and 
> .
> Need to understand how this affects public Oak APIs, and what to do with them 
> on Java 11 (which will be an LTS release we probably need to support with Oak 
> 1.10).





[jira] [Commented] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426961#comment-16426961
 ] 

Marcel Reutegger commented on OAK-7389:
---

Trunk was recently migrated to the MongoDB Java Driver 3.0 API because the old 
2.x API had been deprecated a while ago. Oak 1.8 could use the new API because 
it is already on the 3.x driver, but older branches cannot. They are using the 
2.x drivers.

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
> Attachments: OAK-7389-v1.patch, OAK-7389-v2.patch
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), e);
> }
> }
> }
> {code}
>  FileBlobStore also returns early if the file already exists, without 
> updating the timestamp
> {code:java}
> @Override
> protected synchronized void storeBlock(byte[] digest, int level, byte[] 
> data) throws IOException {
> File f = getFile(digest, false);
> if (f.exists()) {
> return;
> }
> .
> {code}
> The above would cause data loss in DSGC if there are updates to the blob 
> blocks which are resurrected (stored again at the time of DSGC) because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]
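One possible fix sketch for the FileBlobStore early return described above (an assumption, not necessarily what the attached patches do): refresh the existing file's modification time via {{File#setLastModified}} before returning, so DSGC keeps seeing the block as live. {{storeBlockSketch}} is a hypothetical stand-in for the real {{storeBlock}}:

{code:java}
import java.io.File;
import java.io.IOException;

public class TouchExisting {

    // Early exit that still refreshes the timestamp instead of silently
    // returning; the lookup via getFile(digest, false) is omitted here.
    static void storeBlockSketch(File f, byte[] data) throws IOException {
        if (f.exists()) {
            // Refresh lastModified so DSGC does not consider the block stale.
            if (!f.setLastModified(System.currentTimeMillis())) {
                throw new IOException("Could not update timestamp of " + f);
            }
            return;
        }
        // ... write the block to disk (omitted) ...
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("blob", ".bin");
        f.deleteOnExit();
        // Backdate the file, then "store" it again and verify the refresh.
        f.setLastModified(System.currentTimeMillis() - 60_000L);
        long before = f.lastModified();
        storeBlockSketch(f, new byte[0]);
        System.out.println(f.lastModified() > before);
    }
}
{code}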





[jira] [Commented] (OAK-7233) Improve rep:glob documentation

2018-04-05 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426953#comment-16426953
 ] 

angela commented on OAK-7233:
-

When updating the doc, it would be cool to also write some exercises for that 
topic. A stub is already present at 
_oak/exercise/security/authorization/accesscontrol/L8_GlobRestrictionTest.java_

> Improve rep:glob documentation
> --
>
> Key: OAK-7233
> URL: https://issues.apache.org/jira/browse/OAK-7233
> Project: Jackrabbit Oak
>  Issue Type: Documentation
>  Components: core, security
>Reporter: Konrad Windszus
>Assignee: angela
>Priority: Minor
> Fix For: 1.10
>
>
> The examples at 
> https://jackrabbit.apache.org/oak/docs/security/authorization/restriction.html#Examples
>  do not explicitly mention the root node. For the root node you must never 
> use a {{rep:glob}} starting with {{/}}. 
> Also the following points should be clarified:
> * rep:glob affects both (child)node as well as property access
> * a link towards 
> https://jackrabbit.apache.org/api/2.8/org/apache/jackrabbit/core/security/authorization/GlobPattern.html
>  would be helpful
> * make clearer how a rep:glob ending with {{/}} is different from one not 
> ending with {{/}}
> Also the descriptions for 
> {{/cat/}} and {{cat/}} seem wrong because IMHO descendants are only 
> considered if the glob uses a {{*}}.
> This was originally triggered via 
> https://issues.apache.org/jira/browse/OAK-7233.





[jira] [Commented] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426948#comment-16426948
 ] 

Vikas Saurabh commented on OAK-7389:


bq. Yes, it may not be required but might be good to have for performance, to 
not have Mongo transport the whole document back but just the id.
Ack, good point.

Btw, do note that this API won't work for older Mongo versions (I don't 
remember our exact supported versions)... so, backports won't be trivial 
merges. That said, I would probably still prefer a single remote call... but 
I'd trust you to take that call :).

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
> Attachments: OAK-7389-v1.patch, OAK-7389-v2.patch
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), e);
> }
> }
> }
> {code}
>  FileBlobStore also returns early if the file already exists, without 
> updating the timestamp
> {code:java}
> @Override
> protected synchronized void storeBlock(byte[] digest, int level, byte[] 
> data) throws IOException {
> File f = getFile(digest, false);
> if (f.exists()) {
> return;
> }
> .
> {code}
> The above would cause data loss in DSGC if there are updates to the blob 
> blocks which are resurrected (stored again at the time of DSGC) because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]





[jira] [Commented] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426944#comment-16426944
 ] 

Amit Jain commented on OAK-7389:


[~catholicon]
bq. FindOneAndUpdateOptions options = new 
FindOneAndUpdateOptions().upsert(true); // <- this change isn't required, but I 
don't think projecting _id is required
Yes, it may not be required but might be good to have for performance, to not 
have Mongo transport the whole document back but just the id.

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
> Attachments: OAK-7389-v1.patch, OAK-7389-v2.patch
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), e);
> }
> }
> }
> {code}
>  FileBlobStore also returns early if the file already exists, without 
> updating the timestamp
> {code:java}
> @Override
> protected synchronized void storeBlock(byte[] digest, int level, byte[] 
> data) throws IOException {
> File f = getFile(digest, false);
> if (f.exists()) {
> return;
> }
> .
> {code}
> The above would cause data loss in DSGC if there are updates to the blob 
> blocks which are resurrected (stored again at the time of DSGC) because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]





[jira] [Updated] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-7389:
---
Attachment: OAK-7389-v2.patch

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
> Attachments: OAK-7389-v1.patch, OAK-7389-v2.patch
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), e);
> }
> }
> }
> {code}
>  FileBlobStore also returns early if the file already exists, without 
> updating the timestamp
> {code:java}
> @Override
> protected synchronized void storeBlock(byte[] digest, int level, byte[] 
> data) throws IOException {
> File f = getFile(digest, false);
> if (f.exists()) {
> return;
> }
> .
> {code}
> The above would cause data loss in DSGC if there are updates to the blob 
> blocks which are resurrected (stored again at the time of DSGC) because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]





[jira] [Comment Edited] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426934#comment-16426934
 ] 

Vikas Saurabh edited comment on OAK-7389 at 4/5/18 1:45 PM:


[~amitjain], these changes to your snippet make it work:
{noformat}
// Check if it already exists?
Document mongoBlob = new Document(); // <- use Document instead of 
MongoBlob which puts lastMod itself even if it's not set explicitly
mongoBlob.append(MongoBlob.KEY_ID, id);
mongoBlob.append(MongoBlob.KEY_DATA, data);
mongoBlob.append(MongoBlob.KEY_LEVEL, level);

Document updateBlob = new Document(MongoBlob.KEY_LAST_MOD, 
System.currentTimeMillis()); // <- we don't want to set _id ... it's already 
part of setOnInsert

Document upsert = new Document();
upsert.append("$setOnInsert", mongoBlob)
.append("$set", updateBlob);

// TODO check the return value
// TODO verify insert is fast if the entry already exists
try {
Bson query = getBlobQuery(id, -1); // <- negative last mod as we 
don't want to query on last mod
//Bson query = Filters.eq(MongoBlob.KEY_ID, id);
FindOneAndUpdateOptions options = new 
FindOneAndUpdateOptions().upsert(true); // <- this change isn't required, but I 
don't think projecting _id is required
{noformat}

*EDIT*: Added these changes to  [^OAK-7389-v2.patch].
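The {{$setOnInsert}}/{{$set}} split is what makes the single upsert call safe: insert-only fields (id, data, level) are written once, while {{lastMod}} is overwritten on every call. A driver-free simulation of that semantics using plain {{java.util.Map}}s ({{UpsertSemantics}} and its {{upsert}} helper are hypothetical illustrations; the real code goes through {{findOneAndUpdate}} with {{upsert(true)}} against Mongo):

{code:java}
import java.util.HashMap;
import java.util.Map;

public class UpsertSemantics {

    // Simulates Mongo's upsert: $setOnInsert applies only when the document
    // is first created; $set applies on every call.
    static Map<String, Object> upsert(Map<String, Map<String, Object>> coll,
                                      String id,
                                      Map<String, Object> setOnInsert,
                                      Map<String, Object> set) {
        Map<String, Object> doc = coll.get(id);
        if (doc == null) {
            doc = new HashMap<>(setOnInsert);
            coll.put(id, doc);
        }
        doc.putAll(set);
        return doc;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Object>> blobs = new HashMap<>();
        Map<String, Object> onInsert = Map.of("data", "bytes...", "level", 0);

        // First store: the document is created with data + lastMod.
        upsert(blobs, "abc", onInsert, Map.of("lastMod", 1000L));
        // Second store of the same block: only lastMod is refreshed.
        upsert(blobs, "abc", Map.of("data", "other"), Map.of("lastMod", 2000L));

        System.out.println(blobs.get("abc").get("data"));    // bytes...
        System.out.println(blobs.get("abc").get("lastMod")); // 2000
    }
}
{code}

Running the two calls above leaves {{data}} from the first store and {{lastMod}} from the second, which is exactly the behaviour this issue needs.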


was (Author: catholicon):
[~amitjain], these changes to your snippet make it work:
{noformat}
// Check if it already exists?
Document mongoBlob = new Document(); // <- use Document instead of 
MongoBlob which puts lastMod itself even if it's not set explicitly
mongoBlob.append(MongoBlob.KEY_ID, id);
mongoBlob.append(MongoBlob.KEY_DATA, data);
mongoBlob.append(MongoBlob.KEY_LEVEL, level);

Document updateBlob = new Document(MongoBlob.KEY_LAST_MOD, 
System.currentTimeMillis()); // <- we don't want to set _id ... it's already 
part of setOnInsert

Document upsert = new Document();
upsert.append("$setOnInsert", mongoBlob)
.append("$set", updateBlob);

// TODO check the return value
// TODO verify insert is fast if the entry already exists
try {
Bson query = getBlobQuery(id, -1); // <- negative last mod as we 
don't want to query on last mod
//Bson query = Filters.eq(MongoBlob.KEY_ID, id);
FindOneAndUpdateOptions options = new 
FindOneAndUpdateOptions().upsert(true); // <- this change isn't required, but I 
don't think projecting _id is required
{noformat}

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
> Attachments: OAK-7389-v1.patch, OAK-7389-v2.patch
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), e);
> }
> }
> }
> {code}
>  FileBlobStore also returns early if the file already exists, without 
> updating the timestamp
> {code:java}
> @Override
> protected synchronized void storeBlock(byte[] digest, int level, byte[] 
> data) throws IOException {
> File f = getFile(digest, false);
> if (f.exists()) {
> return;
> }
> .
> {code}
> The above would cause data loss in DSGC if there are updates to the blob 
> blocks which are resurrected (stored 

[jira] [Commented] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426941#comment-16426941
 ] 

Amit Jain commented on OAK-7389:


[~catholicon] Thanks, using MongoBlob might be the problem. Regarding the id in 
the update blob: it was not set initially and was added later on in my 
experimentation.

[~tmueller] We don't use the old vs new paradigm in Oak as we did in Jackrabbit 
(I think that was the plan initially). As we use the MarkSweepGarbageCollector 
to trigger GC for all BlobStores/DataStores and use the 
GarbageCollectableBlobStore interface only to get all chunk ids as well as 
deletes, the timestamp update would be required.

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
> Attachments: OAK-7389-v1.patch
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), e);
> }
> }
> }
> {code}
>  FileBlobStore also returns early if the file already exists, without 
> updating the timestamp
> {code:java}
> @Override
> protected synchronized void storeBlock(byte[] digest, int level, byte[] 
> data) throws IOException {
> File f = getFile(digest, false);
> if (f.exists()) {
> return;
> }
> .
> {code}
> The above would cause data loss in DSGC if there are updates to the blob 
> blocks which are resurrected (stored again at the time of DSGC) because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]





[jira] [Commented] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426934#comment-16426934
 ] 

Vikas Saurabh commented on OAK-7389:


[~amitjain], these changes to your snippet make it work:
{noformat}
// Check if it already exists?
Document mongoBlob = new Document(); // <- use Document instead of 
MongoBlob which puts lastMod itself even if it's not set explicitly
mongoBlob.append(MongoBlob.KEY_ID, id);
mongoBlob.append(MongoBlob.KEY_DATA, data);
mongoBlob.append(MongoBlob.KEY_LEVEL, level);

Document updateBlob = new Document(MongoBlob.KEY_LAST_MOD, 
System.currentTimeMillis()); // <- we don't want to set _id ... it's already 
part of setOnInsert

Document upsert = new Document();
upsert.append("$setOnInsert", mongoBlob)
.append("$set", updateBlob);

// TODO check the return value
// TODO verify insert is fast if the entry already exists
try {
Bson query = getBlobQuery(id, -1); // <- negative last mod as we 
don't want to query on last mod
//Bson query = Filters.eq(MongoBlob.KEY_ID, id);
FindOneAndUpdateOptions options = new 
FindOneAndUpdateOptions().upsert(true); // <- this change isn't required, but I 
don't think projecting _id is required
{noformat}

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
> Attachments: OAK-7389-v1.patch
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), e);
> }
> }
> }
> {code}
>  FileBlobStore also returns early if the file already exists, without 
> updating the timestamp
> {code:java}
> @Override
> protected synchronized void storeBlock(byte[] digest, int level, byte[] 
> data) throws IOException {
> File f = getFile(digest, false);
> if (f.exists()) {
> return;
> }
> .
> {code}
> The above would cause data loss in DSGC if there are updates to the blob 
> blocks which are resurrected (stored again at the time of DSGC), because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426932#comment-16426932
 ] 

Thomas Mueller commented on OAK-7389:
-

The MongoBlobStore doesn't use this "old vs new" mechanism; for this case the 
last modified mechanism needs to be used.

For the new test to work with the FileBlobStore and the MemoryBlobStore, some 
changes would be needed, or the test could be disabled for those cases.



[jira] [Commented] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426914#comment-16426914
 ] 

Thomas Mueller commented on OAK-7389:
-

FileBlobStore: Looking at the current code, the GC mechanism there doesn't rely 
on the last modified date. Instead, it uses "<...>_old" directories (the 
beginning of the mark phase renames all directories to "..._old", and the sweep 
phase removes those directories; the mark phase moves the required files back). 
That's why last modified timestamps are not needed there.

MemoryBlobStore: same here; there is an "old" map, so no changes are needed 
there.
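The rename-based GC described above can be simulated roughly as follows (string sets stand in for directories and files; this is not the actual FileBlobStore code):

```java
import java.util.HashSet;
import java.util.Set;

// Rough simulation of rename-based GC: at the start of the mark phase every
// stored entry is renamed to "<name>_old"; the mark phase moves entries that
// are still referenced back; the sweep phase removes whatever stayed "_old".
public class OldDirGcSketch {
    public static Set<String> collect(Set<String> stored, Set<String> referenced) {
        Set<String> old = new HashSet<>();
        for (String f : stored) old.add(f + "_old");  // rename all to *_old
        Set<String> live = new HashSet<>();
        for (String f : referenced) {
            if (old.remove(f + "_old")) live.add(f);  // mark: move back
        }
        old.clear();                                   // sweep: delete *_old
        return live;                                   // survivors
    }

    public static void main(String[] args) {
        Set<String> stored = new HashSet<>(Set.of("a", "b", "c"));
        Set<String> referenced = Set.of("a", "c");
        System.out.println(collect(stored, referenced)); // b is collected
    }
}
```

This illustrates why no last-modified timestamp is needed: liveness is decided by whether the mark phase moved the entry back, not by its age.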

 



[jira] [Commented] (OAK-7386) Build Jackrabbit Oak #1348 failed

2018-04-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426881#comment-16426881
 ] 

Hudson commented on OAK-7386:
-

The previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#1355|https://builds.apache.org/job/Jackrabbit%20Oak/1355/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1355/console]

> Build Jackrabbit Oak #1348 failed
> -
>
> Key: OAK-7386
> URL: https://issues.apache.org/jira/browse/OAK-7386
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1348 has failed.
> First failed run: [Jackrabbit Oak 
> #1348|https://builds.apache.org/job/Jackrabbit%20Oak/1348/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1348/console]





[jira] [Commented] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426869#comment-16426869
 ] 

Thomas Mueller commented on OAK-7389:
-

I'm not sure, but I would probably try this: the query part would just contain 
KEY_ID = id; the "$set" part would just contain KEY_LAST_MOD = 
System.currentTimeMillis(); and the "$setOnInsert" part would contain all other 
fields (neither KEY_LAST_MOD nor KEY_ID).
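That split can be sketched with plain maps standing in for the BSON documents (illustrative only, not the Mongo driver API). The property that makes the upsert legal is that "$set" and "$setOnInsert" touch disjoint field sets: lastMod only in "$set", everything else only in "$setOnInsert", and the id only in the query.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the suggested update document: no field path appears under
// both "$set" and "$setOnInsert", so Mongo accepts the upsert.
public class UpsertSplitSketch {
    public static Map<String, Map<String, Object>> buildUpdate(
            byte[] data, int level, long now) {
        Map<String, Object> set = new HashMap<>();
        set.put("lastMod", now);            // the only field updated on match

        Map<String, Object> setOnInsert = new HashMap<>();
        setOnInsert.put("data", data);      // written only on insert
        setOnInsert.put("level", level);
        // note: neither lastMod nor _id goes here; _id comes from the query

        Map<String, Map<String, Object>> update = new HashMap<>();
        update.put("$set", set);
        update.put("$setOnInsert", setOnInsert);
        return update;
    }

    public static boolean disjoint(Map<String, Map<String, Object>> update) {
        for (String k : update.get("$set").keySet())
            if (update.get("$setOnInsert").containsKey(k)) return false;
        return true;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Object>> u = buildUpdate(new byte[]{1}, 0, 42L);
        System.out.println("disjoint=" + disjoint(u));
    }
}
```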



[jira] [Commented] (OAK-7386) Build Jackrabbit Oak #1348 failed

2018-04-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426770#comment-16426770
 ] 

Hudson commented on OAK-7386:
-

The previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#1354|https://builds.apache.org/job/Jackrabbit%20Oak/1354/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1354/console]



[jira] [Comment Edited] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426761#comment-16426761
 ] 

Amit Jain edited comment on OAK-7389 at 4/5/18 11:18 AM:
-

[~catholicon] This is what I naively attempted after consulting [1], but I got 
an error on creation itself [2].
{code:java}
@Override
protected void storeBlock(byte[] digest, int level, byte[] data) throws 
IOException {
String id = StringUtils.convertBytesToHex(digest);
cache.put(id, data);

// Check if it already exists?
MongoBlob mongoBlob = new MongoBlob();
mongoBlob.setId(id);
mongoBlob.setData(data);
mongoBlob.setLevel(level);

Document updateBlob = new Document(MongoBlob.KEY_LAST_MOD, 
System.currentTimeMillis());
updateBlob.append(MongoBlob.KEY_ID, id);

Document upsert = new Document();
upsert.append("$setOnInsert", mongoBlob)
.append("$set", updateBlob);

// TODO check the return value
// TODO verify insert is fast if the entry already exists
try {
Bson query = getBlobQuery(id, System.currentTimeMillis());
//Bson query = Filters.eq(MongoBlob.KEY_ID, id);
FindOneAndUpdateOptions options = new 
FindOneAndUpdateOptions().projection(new BasicDBObject(MongoBlob.KEY_ID, 
1)).upsert(true);
MongoBlob oldBlob = getBlobCollection().findOneAndUpdate(query, 
upsert, options);
if (oldBlob != null) {
LOG.debug("Block with id [{}] id updated", id);
}
} catch (MongoException e) {
throw new IOException(e.getMessage(), e);
}
}
{code}
[1] 
https://docs.mongodb.com/manual/reference/method/db.collection.update/#upsert-behavior
 [2]
{noformat}
java.io.IOException: Command failed with error 40: 'Updating the path 'lastMod' 
would create a conflict at 'lastMod'' on server 127.0.0.1:27017. The full 
response is { "ok" : 0.0, "errmsg" : "Updating the path 'lastMod' would create 
a conflict at 'lastMod'", "code" : 40, "codeName" : 
"ConflictingUpdateOperators" }
{noformat}
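The error above is consistent with the same field path appearing under two update operators: the serialized MongoBlob handed to "$setOnInsert" presumably still carries a lastMod field, while "$set" also writes lastMod. A hedged illustration of that overlap check, with maps standing in for the BSON documents:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of why Mongo rejects the update attempt above: a path that
// occurs in both "$set" and "$setOnInsert" yields ConflictingUpdateOperators.
public class ConflictSketch {
    public static String firstConflict(Map<String, Object> set,
                                       Map<String, Object> setOnInsert) {
        for (String k : set.keySet())
            if (setOnInsert.containsKey(k)) return k; // conflicting path
        return null;
    }

    public static void main(String[] args) {
        Map<String, Object> setOnInsert = new HashMap<>();
        setOnInsert.put("data", new byte[]{1});
        setOnInsert.put("level", 0);
        setOnInsert.put("lastMod", 0L); // default field from the serialized POJO

        Map<String, Object> set = new HashMap<>();
        set.put("lastMod", 42L);

        // Mongo reports: "Updating the path 'lastMod' would create a conflict"
        System.out.println("conflict=" + firstConflict(set, setOnInsert));
    }
}
```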




[jira] [Commented] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426761#comment-16426761
 ] 

Amit Jain commented on OAK-7389:


[~catholicon] This is what I naively attempted after consulting [1], but I got 
an error on creation itself [2].

{code:java}
@Override
protected void storeBlock(byte[] digest, int level, byte[] data) throws 
IOException {
String id = StringUtils.convertBytesToHex(digest);
cache.put(id, data);

// Check if it already exists?
MongoBlob mongoBlob = new MongoBlob();
mongoBlob.setId(id);
mongoBlob.setData(data);
mongoBlob.setLevel(level);

Document updateBlob = new Document(MongoBlob.KEY_LAST_MOD, 
System.currentTimeMillis());
updateBlob.append(MongoBlob.KEY_ID, id);

Document upsert = new Document();
upsert.append("$setOnInsert", mongoBlob)
.append("$set", updateBlob);

// TODO check the return value
// TODO verify insert is fast if the entry already exists
try {
Bson query = getBlobQuery(id, System.currentTimeMillis());
//Bson query = Filters.eq(MongoBlob.KEY_ID, id);
FindOneAndUpdateOptions options = new 
FindOneAndUpdateOptions().projection(new BasicDBObject(MongoBlob.KEY_ID, 
1)).upsert(true);
MongoBlob oldBlob = getBlobCollection().findOneAndUpdate(query, 
upsert, options);
if (oldBlob != null) {
LOG.debug("Block with id [{}] id updated", id);
}
} catch (MongoException e) {
throw new IOException(e.getMessage(), e);
}
}
{code}

[1] 
[2] {noformat}java.io.IOException: Command failed with error 40: 'Updating the 
path 'lastMod' would create a conflict at 'lastMod'' on server 127.0.0.1:27017. 
The full response is { "ok" : 0.0, "errmsg" : "Updating the path 'lastMod' 
would create a conflict at 'lastMod'", "code" : 40, "codeName" : 
"ConflictingUpdateOperators" }
{noformat}



[jira] [Commented] (OAK-7083) CompositeDataStore - ReadOnly/ReadWrite Delegate Support

2018-04-05 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426750#comment-16426750
 ] 

Amit Jain commented on OAK-7083:


[~mattvryan]
{quote}The way I was thinking this would be done was to extend the capability 
of the BlobOptions class so we could pass the encoded data store id along to 
the delegate during addRecord(). However, this requires that the delegate 
implement TypedDataStore which OakFileDataStore does not implement.
{quote}
Rather than passing an id to the delegate, what should be done is to take the 
id returned by the delegate and then encode whatever information needs to be 
encoded, as I don't think the delegates need to know about, or would make use 
of, the encoded info. You can take a look at how the length is currently 
encoded in DataStoreBlobStore after the id is returned from the delegate 
DataStore [1].

[1] 
https://github.com/apache/jackrabbit-oak/blob/trunk/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java#L657
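A hypothetical sketch of that suggestion: the delegate returns a plain id, and the caller appends the extra information afterwards, similar to how DataStoreBlobStore appends the blob length. The "#" delimiter and helper names here are illustrative only.

```java
// Illustrative helper: encode extra information into the blob id *after*
// the delegate returns it, so the delegate never sees the encoded form.
public class IdEncodingSketch {
    public static String encode(String delegateId, long length) {
        return delegateId + "#" + length;      // delegate id stays untouched
    }

    public static long decodeLength(String encodedId) {
        int idx = encodedId.lastIndexOf('#');
        return Long.parseLong(encodedId.substring(idx + 1));
    }

    public static void main(String[] args) {
        String id = encode("0xdeadbeef", 1024);
        System.out.println(id + " -> length " + decodeLength(id));
    }
}
```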

> CompositeDataStore - ReadOnly/ReadWrite Delegate Support
> 
>
> Key: OAK-7083
> URL: https://issues.apache.org/jira/browse/OAK-7083
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: blob, blob-cloud, blob-cloud-azure, blob-plugins
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> Support a specific composite data store use case, which is the following:
> * One instance uses no composite data store, but instead is using a single 
> standard Oak data store (e.g. FileDataStore)
> * Another instance is created by snapshotting the first instance node store, 
> and then uses a composite data store to refer to the first instance's data 
> store read-only, and refers to a second data store as a writable data store
> One way this can be used is in creating a test or staging instance from a 
> production instance.  At creation, the test instance will look like 
> production, but any changes made to the test instance do not affect 
> production.  The test instance can be quickly created from production by 
> cloning only the node store, and not requiring a copy of all the data in the 
> data store.





[jira] [Comment Edited] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426732#comment-16426732
 ] 

Vikas Saurabh edited comment on OAK-7389 at 4/5/18 10:44 AM:
-

{quote}could not get the upsert working with Mongo findOneAndUpdate so, 
resorted to a separate call for update on error.
{quote}
[1] says that calling findOneAndUpdate only accepts update operators [2] - I 
think {{$currentDate}} and {{$set}} should serve the purpose well. Also, you'd 
probably need to pass \{"upsert": true} for {{options}} param.

*EDIT*: btw, \[1] mentions that the method is new in 3.2 and updated (how??) in 
3.6. I don't recall what we recommend for 1.2 users - but, I think it won't be 
3.2. Maybe, instead of having different impls in different branches, we could 
work with what you suggested (I won't expect resurrections of blobs just in 
time during blob gc... so, 2 remote calls might be ok). 

[1]: 
[https://docs.mongodb.com/manual/reference/method/db.collection.findOneAndUpdate/]
[2]: [https://docs.mongodb.com/manual/reference/operator/update/]





[jira] [Comment Edited] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426732#comment-16426732
 ] 

Vikas Saurabh edited comment on OAK-7389 at 4/5/18 10:40 AM:
-

bq. could not get the upsert working with Mongo findOneAndUpdate so, resorted 
to a separate call for update on error.
\[1] says that calling findOneAndUpdate only accepts update operators \[2] - I 
think {{$currentDate}} and {{$set}} should serve the purpose well. Also, you'd 
probably need to pass \{"upsert": true} for {{options}} param.

\[1]: 
https://docs.mongodb.com/manual/reference/method/db.collection.findOneAndUpdate/
\[2]: https://docs.mongodb.com/manual/reference/operator/update/





[jira] [Commented] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426732#comment-16426732
 ] 

Vikas Saurabh commented on OAK-7389:


bq. could not get the upsert working with Mongo findOneAndUpdate so, resorted 
to a separate call for update on error.
\[1] says that calling findOneAndUpdate only accepts update operators \[2] - I 
think {{$currentDate}} and {{$set}} should serve the purpose well. Also, you'd 
probably need to pass {{ {"upsert": true} }} for {{options}} param.

\[1]: 
https://docs.mongodb.com/manual/reference/method/db.collection.findOneAndUpdate/
\[2]: https://docs.mongodb.com/manual/reference/operator/update/
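The operator-only constraint can be illustrated as follows; the map stands in for the BSON update document and the check is an illustration, not driver behavior:

```java
import java.util.Map;

// findOneAndUpdate requires an update document built from update operators
// ("$set", "$currentDate", ...) at the top level; a bare field name would
// make it a replacement-style document, which the method rejects.
public class OperatorOnlySketch {
    public static boolean operatorsOnly(Map<String, ?> update) {
        for (String k : update.keySet())
            if (!k.startsWith("$")) return false;  // bare field -> rejected
        return true;
    }

    public static void main(String[] args) {
        Map<String, ?> ok = Map.of("$set", Map.of("lastMod", 42L));
        Map<String, ?> bad = Map.of("lastMod", 42L); // replacement-style doc
        System.out.println(operatorsOnly(ok) + " " + operatorsOnly(bad));
    }
}
```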



[jira] [Commented] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426720#comment-16426720
 ] 

Amit Jain commented on OAK-7389:


Attaching the first version of the patch, with only the AbstractBlobStoreTest 
tests executed.

[~tmueller], [~mreutegg], [~chetanm], [~catholicon]

Would appreciate feedback. I could not get the upsert working with Mongo 
{{findOneAndUpdate}}, so I resorted to a separate call for update on error.

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
> Attachments: OAK-7389-v1.patch
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), e);
> }
> }
> }
> {code}
>  FileBlobStore also returns early if the file already exists, without 
> updating the timestamp
> {code:java}
> @Override
> protected synchronized void storeBlock(byte[] digest, int level, byte[] 
> data) throws IOException {
> File f = getFile(digest, false);
> if (f.exists()) {
> return;
> }
> .
> {code}
> The above would cause data loss in DSGC if there are updates to the blob 
> blocks which are resurrected (stored again at the time of DSGC), because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]





[jira] [Updated] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-7389:
---
Attachment: OAK-7389-v1.patch

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
> Attachments: OAK-7389-v1.patch
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), e);
> }
> }
> }
> {code}
>  FileBlobStore also returns early if the file already exists, without 
> updating the timestamp
> {code:java}
> @Override
> protected synchronized void storeBlock(byte[] digest, int level, byte[] 
> data) throws IOException {
> File f = getFile(digest, false);
> if (f.exists()) {
> return;
> }
> .
> {code}
> The above would cause data loss in DSGC if there are updates to the blob 
> blocks which are resurrected (stored again at the time of DSGC), because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]





[jira] [Commented] (OAK-7386) Build Jackrabbit Oak #1348 failed

2018-04-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426695#comment-16426695
 ] 

Hudson commented on OAK-7386:
-

The previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#1353|https://builds.apache.org/job/Jackrabbit%20Oak/1353/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1353/console]

> Build Jackrabbit Oak #1348 failed
> -
>
> Key: OAK-7386
> URL: https://issues.apache.org/jira/browse/OAK-7386
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1348 has failed.
> First failed run: [Jackrabbit Oak 
> #1348|https://builds.apache.org/job/Jackrabbit%20Oak/1348/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1348/console]





[jira] [Reopened] (OAK-7390) QueryResult.getSize() can be slow for many "or" or "union" conditions

2018-04-05 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller reopened OAK-7390:
-

> QueryResult.getSize() can be slow for many "or" or "union" conditions
> -
>
> Key: OAK-7390
> URL: https://issues.apache.org/jira/browse/OAK-7390
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.10
>
>
> For queries with many union conditions, the "fast" getSize method can 
> actually be slower than iterating over the result. 
> The reason is that the number of index calls grows quadratically with the 
> number of subqueries: (3x + x^2) / 2, where x is the number of subqueries. 
> For this to have a measurable effect, the number of subqueries needs to be 
> large (more than 100), and the index needs to be slow.





[jira] [Updated] (OAK-7390) QueryResult.getSize() can be slow for many "or" or "union" conditions

2018-04-05 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-7390:

Sprint: L16

> QueryResult.getSize() can be slow for many "or" or "union" conditions
> -
>
> Key: OAK-7390
> URL: https://issues.apache.org/jira/browse/OAK-7390
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.10
>
>
> For queries with many union conditions, the "fast" getSize method can 
> actually be slower than iterating over the result. 
> The reason is that the number of index calls grows quadratically with the 
> number of subqueries: (3x + x^2) / 2, where x is the number of subqueries. 
> For this to have a measurable effect, the number of subqueries needs to be 
> large (more than 100), and the index needs to be slow.





[jira] [Updated] (OAK-7390) QueryResult.getSize() can be slow for many "or" or "union" conditions

2018-04-05 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-7390:

Sprint:   (was: L16)

> QueryResult.getSize() can be slow for many "or" or "union" conditions
> -
>
> Key: OAK-7390
> URL: https://issues.apache.org/jira/browse/OAK-7390
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.10
>
>
> For queries with many union conditions, the "fast" getSize method can 
> actually be slower than iterating over the result. 
> The reason is that the number of index calls grows quadratically with the 
> number of subqueries: (3x + x^2) / 2, where x is the number of subqueries. 
> For this to have a measurable effect, the number of subqueries needs to be 
> large (more than 100), and the index needs to be slow.





[jira] [Updated] (OAK-7148) Document excerpt support (specially excerpts for properties)

2018-04-05 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-7148:

Fix Version/s: 1.9.0

> Document excerpt support (specially excerpts for properties)
> 
>
> Key: OAK-7148
> URL: https://issues.apache.org/jira/browse/OAK-7148
> Project: Jackrabbit Oak
>  Issue Type: Documentation
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.9.0, 1.10
>
>
> Currently, it's possible to get excerpts for properties. For this case, our 
> own "simple excerpt" mechanism is used (which has many limitations).
> We need to document this feature and the limitations.





[jira] [Resolved] (OAK-7390) QueryResult.getSize() can be slow for many "or" or "union" conditions

2018-04-05 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-7390.
-
Resolution: Fixed

Documented in the XPath and SQL-2 grammar, e.g. 
http://jackrabbit.apache.org/oak/docs/query/grammar-xpath.html

> QueryResult.getSize() can be slow for many "or" or "union" conditions
> -
>
> Key: OAK-7390
> URL: https://issues.apache.org/jira/browse/OAK-7390
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.10
>
>
> For queries with many union conditions, the "fast" getSize method can 
> actually be slower than iterating over the result. 
> The reason is that the number of index calls grows quadratically with the 
> number of subqueries: (3x + x^2) / 2, where x is the number of subqueries. 
> For this to have a measurable effect, the number of subqueries needs to be 
> large (more than 100), and the index needs to be slow.





[jira] [Commented] (OAK-7390) QueryResult.getSize() can be slow for many "or" or "union" conditions

2018-04-05 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426661#comment-16426661
 ] 

Thomas Mueller commented on OAK-7390:
-

http://svn.apache.org/r1828405 (trunk)

> QueryResult.getSize() can be slow for many "or" or "union" conditions
> -
>
> Key: OAK-7390
> URL: https://issues.apache.org/jira/browse/OAK-7390
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.10
>
>
> For queries with many union conditions, the "fast" getSize method can 
> actually be slower than iterating over the result. 
> The reason is that the number of index calls grows quadratically with the 
> number of subqueries: (3x + x^2) / 2, where x is the number of subqueries. 
> For this to have a measurable effect, the number of subqueries needs to be 
> large (more than 100), and the index needs to be slow.





[jira] [Created] (OAK-7390) QueryResult.getSize() can be slow for many "or" or "union" conditions

2018-04-05 Thread Thomas Mueller (JIRA)
Thomas Mueller created OAK-7390:
---

 Summary: QueryResult.getSize() can be slow for many "or" or 
"union" conditions
 Key: OAK-7390
 URL: https://issues.apache.org/jira/browse/OAK-7390
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Thomas Mueller
Assignee: Thomas Mueller
 Fix For: 1.10


For queries with many union conditions, the "fast" getSize method can actually 
be slower than iterating over the result. 

The reason is that the number of index calls grows quadratically with the 
number of subqueries: (3x + x^2) / 2, where x is the number of subqueries. For 
this to have a measurable effect, the number of subqueries needs to be large 
(more than 100), and the index needs to be slow.
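The stated cost formula can be evaluated directly (a hypothetical helper, just computing (3x + x^2) / 2 to show why only large subquery counts hurt):

```java
public class UnionGetSizeCost {

    // Number of index calls getSize() makes for x union subqueries,
    // per the (3x + x^2) / 2 formula quoted in this issue.
    static long indexCalls(long x) {
        return (3 * x + x * x) / 2;
    }

    public static void main(String[] args) {
        System.out.println(indexCalls(10));   // 65: negligible
        System.out.println(indexCalls(100));  // 5150: where slow indexes start to hurt
        System.out.println(indexCalls(200));  // 20300: quadratic growth
    }
}
```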






[jira] [Commented] (OAK-7386) Build Jackrabbit Oak #1348 failed

2018-04-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426640#comment-16426640
 ] 

Hudson commented on OAK-7386:
-

The previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#1352|https://builds.apache.org/job/Jackrabbit%20Oak/1352/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1352/console]

> Build Jackrabbit Oak #1348 failed
> -
>
> Key: OAK-7386
> URL: https://issues.apache.org/jira/browse/OAK-7386
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1348 has failed.
> First failed run: [Jackrabbit Oak 
> #1348|https://builds.apache.org/job/Jackrabbit%20Oak/1348/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1348/console]





[jira] [Commented] (OAK-6517) ActiveDeletedBlobCollectionIT.simpleAsyncIndexUpdateBasedBlobCollection failing intermittently

2018-04-05 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426563#comment-16426563
 ] 

Vikas Saurabh commented on OAK-6517:


Noting that [~mreutegg]'s case of error most likely would have happened with 
the following stack
{noformat}

org.apache.commons.io.FileUtils#validateListFilesParameters
org.apache.commons.io.FileUtils#listFiles(java.io.File, 
org.apache.commons.io.filefilter.IOFileFilter, 
org.apache.commons.io.filefilter.IOFileFilter)
org.apache.jackrabbit.oak.plugins.index.lucene.directory.ActiveDeletedBlobCollectorFactory.ActiveDeletedBlobCollectorImpl#purgeBlobsDeleted

{noformat}
TODO: why is this happening while 
{{org.apache.jackrabbit.oak.plugins.index.lucene.directory.ActiveDeletedBlobCollectorFactory#newInstance}}
 does seem to create the directory before creating an 
{{ActiveDeletedBlobCollectorImpl}} instance.

(wish Travis could give logs too :-/)
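The failure mode in the stack above (commons-io validating that the directory exists before listing it) mirrors what the JDK itself does; a small stand-in using only {{java.nio.file.Files}}, with a made-up directory name:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Paths;

public class ListMissingDir {
    public static void main(String[] args) throws IOException {
        // Listing a directory that was never created fails up front, the same
        // class of error that purgeBlobsDeleted hits when the collector's
        // directory is missing.
        try {
            Files.list(Paths.get("no-such-blob-dir-12345")).close();
            System.out.println("listed");
        } catch (NoSuchFileException e) {
            System.out.println("missing: " + e.getFile());
        }
    }
}
```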

> ActiveDeletedBlobCollectionIT.simpleAsyncIndexUpdateBasedBlobCollection 
> failing intermittently
> --
>
> Key: OAK-6517
> URL: https://issues.apache.org/jira/browse/OAK-6517
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: lucene
>Affects Versions: 1.7.1
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
>
> [~chetanm] reported offline that 
> {{ActiveDeletedBlobCollectionIT.simpleAsyncIndexUpdateBasedBlobCollection}} 
> is failing for him intermittently.
> {noformat}
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.746 sec <<< 
> FAILURE! - in 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.ActiveDeletedBlobCollectionIT
> simpleAsyncIndexUpdateBasedBlobCollection[WITH_FDS](org.apache.jackrabbit.oak.plugins.index.lucene.directory.ActiveDeletedBlobCollectionIT)
>   Time elapsed: 2.301 sec  <<< FAILURE!
> java.lang.AssertionError: First GC should delete some chunks
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.directory.ActiveDeletedBlobCollectionIT.simpleAsyncIndexUpdateBasedBlobCollection(ActiveDeletedBlobCollectionIT.java:227)
> Results :
> Failed tests: 
>  ActiveDeletedBlobCollectionIT.simpleAsyncIndexUpdateBasedBlobCollection:227 
> First GC should delete some chunks
> {noformat}





[jira] [Commented] (OAK-7359) Update to MongoDB Java driver 3.6

2018-04-05 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426561#comment-16426561
 ] 

Marcel Reutegger commented on OAK-7359:
---

Updated the MongoDocumentNodeStoreBuilder and DocumentNodeStoreService to 
enable socket keep-alive in accordance with the new default in the MongoDB Java 
driver 3.6.x.

Trunk: http://svn.apache.org/r1828398

Updated documentation: http://svn.apache.org/r1828399
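For context, the driver-level keep-alive setting ultimately maps to the standard TCP SO_KEEPALIVE socket option; a stand-alone illustration at the plain java.net level (not the Oak or driver code):

```java
import java.io.IOException;
import java.net.Socket;

public class KeepAliveDemo {
    public static void main(String[] args) throws IOException {
        // SO_KEEPALIVE is the TCP socket option in question, shown here on a
        // bare, unconnected java.net.Socket.
        Socket socket = new Socket();
        System.out.println(socket.getKeepAlive()); // false: JVM default
        socket.setKeepAlive(true);
        System.out.println(socket.getKeepAlive()); // true
        socket.close();
    }
}
```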

> Update to MongoDB Java driver 3.6
> -
>
> Key: OAK-7359
> URL: https://issues.apache.org/jira/browse/OAK-7359
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.9.0, 1.10
>
>
> Update the MongoDB Java driver to 3.6 to make use of new features when 
> running on a MongoDB 3.6.x server.





[jira] [Updated] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-7389:
---
Description: 
MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
means any existing value won't be updated.
{code:java}
@Override
protected void storeBlock(byte[] digest, int level, byte[] data) throws 
IOException {
String id = StringUtils.convertBytesToHex(digest);
cache.put(id, data);
// Check if it already exists?
MongoBlob mongoBlob = new MongoBlob();
mongoBlob.setId(id);
mongoBlob.setData(data);
mongoBlob.setLevel(level);
mongoBlob.setLastMod(System.currentTimeMillis());
// TODO check the return value
// TODO verify insert is fast if the entry already exists
try {
getBlobCollection().insertOne(mongoBlob);
} catch (DuplicateKeyException e) {
// the same block was already stored before: ignore
} catch (MongoException e) {
if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
// the same block was already stored before: ignore
} else {
throw new IOException(e.getMessage(), e);
}
}
}
{code}
FileBlobStore also returns early if the file already exists, without updating 
the timestamp
{code:java}
@Override
protected synchronized void storeBlock(byte[] digest, int level, byte[] 
data) throws IOException {
File f = getFile(digest, false);
if (f.exists()) {
return;
}
.
{code}
The above would cause data loss in DSGC if there are updates to the blob blocks 
which are resurrected (stored again at the time of DSGC), because the timestamp 
would never have been modified.

 

cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]

  was:
MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
means any existing value won't be updated.
{code:java}
@Override
protected void storeBlock(byte[] digest, int level, byte[] data) throws 
IOException {
String id = StringUtils.convertBytesToHex(digest);
cache.put(id, data);
// Check if it already exists?
MongoBlob mongoBlob = new MongoBlob();
mongoBlob.setId(id);
mongoBlob.setData(data);
mongoBlob.setLevel(level);
mongoBlob.setLastMod(System.currentTimeMillis());
// TODO check the return value
// TODO verify insert is fast if the entry already exists
try {
getBlobCollection().insertOne(mongoBlob);
} catch (DuplicateKeyException e) {
// the same block was already stored before: ignore
} catch (MongoException e) {
if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
// the same block was already stored before: ignore
} else {
throw new IOException(e.getMessage(), e);
}
}
}
{code}
 

The above would cause data loss in DSGC if there are updates to the blob blocks 
which are resurrected (stored again at the time of DSGC), because the timestamp 
would never have been modified.

 

cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]


> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), 

[jira] [Commented] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426557#comment-16426557
 ] 

Amit Jain commented on OAK-7389:


Actually seems to be a problem with FileBlobStore as well, updated the 
description accordingly.

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), e);
> }
> }
> }
> {code}
>  
> The above would cause data loss in DSGC if there are updates to the blob 
> blocks which are resurrected (stored again at the time of DSGC), because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]





[jira] [Updated] (OAK-7389) MongoBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-7389:
---
Component/s: (was: documentmk)
 blob

> MongoBlobStore does not update timestamp for already existing blobs
> ---
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), e);
> }
> }
> }
> {code}
>  
> The above would cause data loss in DSGC if there are updates to the blob 
> blocks which are resurrected (stored again at the time of DSGC), because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]





[jira] [Updated] (OAK-7389) Mongo/FileBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-7389:
---
Summary: Mongo/FileBlobStore does not update timestamp for already existing 
blobs  (was: MongoBlobStore does not update timestamp for already existing 
blobs)

> Mongo/FileBlobStore does not update timestamp for already existing blobs
> 
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), e);
> }
> }
> }
> {code}
>  
> The above would cause data loss in DSGC if there are updates to the blob 
> blocks which are resurrected (stored again at the time of DSGC), because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]





[jira] [Assigned] (OAK-7389) MongoBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain reassigned OAK-7389:
--

Assignee: Amit Jain

> MongoBlobStore does not update timestamp for already existing blobs
> ---
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws 
> IOException {
> String id = StringUtils.convertBytesToHex(digest);
> cache.put(id, data);
> // Check if it already exists?
> MongoBlob mongoBlob = new MongoBlob();
> mongoBlob.setId(id);
> mongoBlob.setData(data);
> mongoBlob.setLevel(level);
> mongoBlob.setLastMod(System.currentTimeMillis());
> // TODO check the return value
> // TODO verify insert is fast if the entry already exists
> try {
> getBlobCollection().insertOne(mongoBlob);
> } catch (DuplicateKeyException e) {
> // the same block was already stored before: ignore
> } catch (MongoException e) {
> if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
> // the same block was already stored before: ignore
> } else {
> throw new IOException(e.getMessage(), e);
> }
> }
> }
> {code}
>  
> The above would cause data loss in DSGC if there are updates to the blob 
> blocks which are resurrected (stored again at the time of DSGC), because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]





[jira] [Updated] (OAK-5122) Exercise for Custom Authorization Models

2018-04-05 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-5122:

Summary: Exercise for Custom Authorization Models  (was: Exercise for 
Custom PermissionProvider)

> Exercise for Custom Authorization Models
> 
>
> Key: OAK-5122
> URL: https://issues.apache.org/jira/browse/OAK-5122
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: exercise
>Reporter: angela
>Assignee: angela
>Priority: Minor
>
> Within the _oak-exercise_ module we should have some code illustrating how a 
> custom authorization model can be written and deployed. This should go along 
> with some exercise to extend/complete/practice with the sample code. The 
> proposed example could e.g. illustrate 
> - a simplified role-based authorization model or 
> - a variant of the _oak-authorization-cug_ that denies access for principals 
> from a specific country.
> Ideally we were able to extract the steps required to write the example to 
> update _oak-doc_ with some additional instructions on how to build custom 
> authorization.





[jira] [Updated] (OAK-7389) MongoBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-7389:
---
Fix Version/s: 1.2.30

> MongoBlobStore does not update timestamp for already existing blobs
> ---
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>
>
> MongoBlobStore uses the {{insert}} call and ignores any exceptions, which 
> means any existing value won't be updated.
> {code:java}
> @Override
> protected void storeBlock(byte[] digest, int level, byte[] data) throws IOException {
>     String id = StringUtils.convertBytesToHex(digest);
>     cache.put(id, data);
>     // Check if it already exists?
>     MongoBlob mongoBlob = new MongoBlob();
>     mongoBlob.setId(id);
>     mongoBlob.setData(data);
>     mongoBlob.setLevel(level);
>     mongoBlob.setLastMod(System.currentTimeMillis());
>     // TODO check the return value
>     // TODO verify insert is fast if the entry already exists
>     try {
>         getBlobCollection().insertOne(mongoBlob);
>     } catch (DuplicateKeyException e) {
>         // the same block was already stored before: ignore
>     } catch (MongoException e) {
>         if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
>             // the same block was already stored before: ignore
>         } else {
>             throw new IOException(e.getMessage(), e);
>         }
>     }
> }
> {code}
>  
> The above would cause data loss in DSGC if there are updates to blob blocks 
> which are resurrected (stored again at the time of DSGC), because the 
> timestamp would never have been modified.
>  
> cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]





[jira] [Updated] (OAK-7389) MongoBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-7389:
---
Affects Version/s: (was: 1.2.28)
   1.2.14

> MongoBlobStore does not update timestamp for already existing blobs
> ---
>
> Key: OAK-7389
> URL: https://issues.apache.org/jira/browse/OAK-7389
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.2.14, 1.4.20, 1.8.2, 1.6.11
>Reporter: Amit Jain
>Priority: Critical
> Fix For: 1.2.30
>





[jira] [Created] (OAK-7389) MongoBlobStore does not update timestamp for already existing blobs

2018-04-05 Thread Amit Jain (JIRA)
Amit Jain created OAK-7389:
--

 Summary: MongoBlobStore does not update timestamp for already 
existing blobs
 Key: OAK-7389
 URL: https://issues.apache.org/jira/browse/OAK-7389
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: documentmk
Affects Versions: 1.6.11, 1.8.2, 1.4.20, 1.2.28
Reporter: Amit Jain


MongoBlobStore uses the {{insert}} call and ignores duplicate-key exceptions, 
which means any existing entry won't be updated.
{code:java}
@Override
protected void storeBlock(byte[] digest, int level, byte[] data) throws IOException {
    String id = StringUtils.convertBytesToHex(digest);
    cache.put(id, data);
    // Check if it already exists?
    MongoBlob mongoBlob = new MongoBlob();
    mongoBlob.setId(id);
    mongoBlob.setData(data);
    mongoBlob.setLevel(level);
    mongoBlob.setLastMod(System.currentTimeMillis());
    // TODO check the return value
    // TODO verify insert is fast if the entry already exists
    try {
        getBlobCollection().insertOne(mongoBlob);
    } catch (DuplicateKeyException e) {
        // the same block was already stored before: ignore
    } catch (MongoException e) {
        if (e.getCode() == DUPLICATE_KEY_ERROR_CODE) {
            // the same block was already stored before: ignore
        } else {
            throw new IOException(e.getMessage(), e);
        }
    }
}
{code}
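The discussion above converges on replacing the insert-only call with an upsert, so that re-storing an existing block refreshes its timestamp. A minimal, self-contained sketch of the semantic difference, with a plain map standing in for the Mongo collection ({{insertIgnoringDuplicates}} and {{upsert}} are illustrative names, not Oak API):

```java
import java.util.HashMap;
import java.util.Map;

public class BlobTimestampSketch {
    // A plain map stands in for the Mongo blob collection: blob id -> lastMod.
    static final Map<String, Long> collection = new HashMap<>();

    // Current behaviour: insert and swallow the duplicate-key error,
    // so an existing entry keeps its stale lastMod.
    static void insertIgnoringDuplicates(String id, long lastMod) {
        collection.putIfAbsent(id, lastMod);
    }

    // Upsert behaviour: insert if absent, otherwise refresh lastMod,
    // which is what DSGC needs to see for resurrected blocks.
    static void upsert(String id, long lastMod) {
        collection.put(id, lastMod);
    }

    public static void main(String[] args) {
        insertIgnoringDuplicates("block-1", 100L);
        insertIgnoringDuplicates("block-1", 200L); // duplicate: ignored
        System.out.println(collection.get("block-1")); // prints 100

        upsert("block-1", 300L); // existing entry: lastMod refreshed
        System.out.println(collection.get("block-1")); // prints 300
    }
}
```

In the actual store the refresh would presumably be a single server-side update with upsert enabled (e.g. the driver's update-with-upsert option) so the check-and-set stays atomic rather than a read followed by a write.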
 

The above would cause data loss in DSGC if there are updates to blob blocks 
which are resurrected (stored again at the time of DSGC), because the timestamp 
would never have been modified.

 

cc/ [~tmueller], [~mreutegg], [~chetanm], [~catholicon]


