[jira] [Updated] (OAK-4655) Enable configuring multiple segment nodestore instances in same setup

2016-08-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-4655:
-
Issue Type: New Feature  (was: Improvement)

> Enable configuring multiple segment nodestore instances in same setup
> -
>
> Key: OAK-4655
> URL: https://issues.apache.org/jira/browse/OAK-4655
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segment-tar, segmentmk
>Reporter: Chetan Mehrotra
> Fix For: 1.6
>
>
> With OAK-4369 and OAK-4490 it is now possible to configure a new 
> SegmentNodeStore to act as a secondary nodestore (OAK-4180). Recently, a few 
> other features have shown a requirement to configure a SegmentNodeStore 
> purely for storage purposes, e.g.:
> # OAK-4180 - Enables use of SegmentNodeStore as a secondary store to 
> complement DocumentNodeStore
> #* Always uses the BlobStore from the primary DocumentNodeStore
> #* Compaction to be enabled
> # OAK-4654 - Enables use of SegmentNodeStore for a private mount in a 
> multiplexing nodestore setup
> #* Might use its own blob store
> #* Compaction might be disabled as it would be read-only
> # OAK-4581 - Proposes to use SegmentNodeStore for storing the event queue 
> offline
> In all these setups we need to configure a SegmentNodeStore with the 
> following aspects:
> # The NodeStore instance is not exposed directly but via the 
> {{NodeStoreProvider}} interface, with a {{role}} service property specifying 
> the intended usage
> # The NodeStore here is not fully functional, i.e. it would not be configured 
> with the standard observers, would not be used by the ContentRepository, etc.
> # It needs to be ensured that any JMX MBean registered accounts for the 
> "role" so that there is no collision
> With the existing SegmentNodeStoreService we can configure only one 
> nodestore. To support the above cases we need an OSGi config-factory-based 
> implementation which enables creation of multiple SegmentNodeStore instances 
> (each with a different directory and different settings)
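The factory approach above can be illustrated with a small, hypothetical plain-Java model (the names below are illustrative stand-ins, not Oak's actual API): each factory configuration creates one store registration bound to its own directory and keyed by a distinct role, and a duplicate role is rejected rather than silently overwritten:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposed per-role factory. In the real setup
// each OSGi factory configuration would create one SegmentNodeStore and
// register a NodeStoreProvider with a "role" service property; here a plain
// map stands in for the OSGi service registry.
public class NodeStoreFactorySketch {

    // role -> repository directory, standing in for registered providers
    static final Map<String, String> providersByRole = new HashMap<>();

    // Called once per factory configuration.
    static void activate(String role, String directory) {
        if (providersByRole.containsKey(role)) {
            // Two configurations with the same role would collide
            // (service lookup, JMX names), so fail fast.
            throw new IllegalStateException("role already registered: " + role);
        }
        providersByRole.put(role, directory);
    }

    public static void main(String[] args) {
        activate("secondary", "/path/to/secondary-store");
        activate("private-mount", "/path/to/private-store");
        // Two independent instances, each with its own directory.
        System.out.println(providersByRole);
    }
}
```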



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4655) Enable configuring multiple segment nodestore instances in same setup

2016-08-08 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15412944#comment-15412944
 ] 

Chetan Mehrotra commented on OAK-4655:
--

[~mduerig] [~alex.parvulescu] [~frm] Thoughts on the above requirements?

I was thinking of going for an OSGi factory component (separate from the 
current SegmentNodeStoreService) which can register a {{NodeStoreProvider}}. 
We would also need to ensure that any JMX MBeans and metrics registered do not 
collide
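One plausible way to avoid the JMX collision (a sketch only; the domain and key layout below are assumptions, not Oak's actual naming scheme) is to fold the role into each MBean's {{ObjectName}}, so two instances register under distinct names:

```java
import javax.management.ObjectName;

// Hypothetical sketch: qualify each MBean name with the "role" so that two
// SegmentNodeStore instances in the same JVM never register under the same
// ObjectName.
public class RoleQualifiedName {

    // Builds e.g. org.apache.jackrabbit.oak:name=SegmentNodeStore,role=secondary
    static ObjectName forRole(String role) throws Exception {
        return new ObjectName(
                "org.apache.jackrabbit.oak:name=SegmentNodeStore,role=" + role);
    }

    public static void main(String[] args) throws Exception {
        // Distinct roles yield distinct, non-colliding names.
        System.out.println(forRole("secondary"));
        System.out.println(forRole("private-mount"));
    }
}
```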






[jira] [Updated] (OAK-4655) Enable configuring multiple segment nodestore instances in same setup

2016-08-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-4655:
-
Description: 
With OAK-4369 and OAK-4490 it is now possible to configure a new SegmentNodeStore 
to act as a secondary nodestore (OAK-4180). Recently, a few other features have 
shown a requirement to configure a SegmentNodeStore purely for storage purposes, 
e.g.:

# OAK-4180 - Enables use of SegmentNodeStore as a secondary store to complement 
DocumentNodeStore
#* Always uses the BlobStore from the primary DocumentNodeStore
#* Compaction to be enabled
# OAK-4654 - Enables use of SegmentNodeStore for a private mount in a multiplexing 
nodestore setup
#* Might use its own blob store
#* Compaction might be disabled as it would be read-only
# OAK-4581 - Proposes to use SegmentNodeStore for storing the event queue offline

In all these setups we need to configure a SegmentNodeStore with the following 
aspects:
# The NodeStore instance is not exposed directly but via the 
{{NodeStoreProvider}} interface, with a {{role}} service property specifying the 
intended usage
# The NodeStore here is not fully functional, i.e. it would not be configured 
with the standard observers, would not be used by the ContentRepository, etc.
# It needs to be ensured that any JMX MBean registered accounts for the "role" 
so that there is no collision

With the existing SegmentNodeStoreService we can configure only one nodestore. 
To support the above cases we need an OSGi config-factory-based implementation 
which enables creation of multiple SegmentNodeStore instances (each with a 
different directory and different settings)

  was:
With OAK-4369 and OAK-4490 it is now possible to configure a new SegmentNodeStore 
to act as a secondary nodestore (OAK-4180). Recently, a few other features have 
shown a requirement to configure a SegmentNodeStore purely for storage purposes, 
e.g.:

# OAK-4180 - Enables use of SegmentNodeStore as a secondary store to complement 
DocumentNodeStore
#* Always uses the BlobStore from the primary DocumentNodeStore
#* Compaction to be enabled
# OAK-4654 - Enables use of SegmentNodeStore for a private mount in a multiplexing 
nodestore setup
#* Might use its own blob store
#* Compaction might be disabled as it would be read-only
# OAK-4581 - Proposes to use SegmentNodeStore for storing the event queue offline

In all these setups we need to configure a SegmentNodeStore with the following 
aspects:
# The NodeStore instance is not exposed directly but via the 
{{NodeStoreProvider}} interface, with a {{role}} service property specifying the 
intended usage
# The NodeStore here is not fully functional, i.e. it would not be configured 
with the standard observers, would not be used by the ContentRepository, etc.

With the existing SegmentNodeStoreService we can configure only one nodestore. 
To support the above cases we need an OSGi config-factory-based implementation 
which enables creation of multiple SegmentNodeStore instances (each with a 
different directory and different settings)






[jira] [Resolved] (OAK-4519) Expose SegmentNodeStore as a secondary NodeStore (oak-segment-tar)

2016-08-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-4519.
--
   Resolution: Won't Fix
Fix Version/s: (was: Segment Tar 0.0.20)

> Expose SegmentNodeStore as a secondary NodeStore (oak-segment-tar)
> --
>
> Key: OAK-4519
> URL: https://issues.apache.org/jira/browse/OAK-4519
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-tar
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
>
> Issue to track changes done in OAK-4490 for oak-segment-tar





[jira] [Created] (OAK-4655) Enable configuring multiple segment nodestore in same setup

2016-08-08 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-4655:


 Summary: Enable configuring multiple segment nodestore in same 
setup
 Key: OAK-4655
 URL: https://issues.apache.org/jira/browse/OAK-4655
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segment-tar, segmentmk
Reporter: Chetan Mehrotra
 Fix For: 1.6


With OAK-4369 and OAK-4490 it is now possible to configure a new SegmentNodeStore 
to act as a secondary nodestore (OAK-4180). Recently, a few other features have 
shown a requirement to configure a SegmentNodeStore purely for storage purposes, 
e.g.:

# OAK-4180 - Enables use of SegmentNodeStore as a secondary store to complement 
DocumentNodeStore
#* Always uses the BlobStore from the primary DocumentNodeStore
#* Compaction to be enabled
# OAK-4654 - Enables use of SegmentNodeStore for a private mount in a multiplexing 
nodestore setup
#* Might use its own blob store
#* Compaction might be disabled as it would be read-only
# OAK-4581 - Proposes to use SegmentNodeStore for storing the event queue offline

In all these setups we need to configure a SegmentNodeStore with the following 
aspects:
# The NodeStore instance is not exposed directly but via the 
{{NodeStoreProvider}} interface, with a {{role}} service property specifying the 
intended usage
# The NodeStore here is not fully functional, i.e. it would not be configured 
with the standard observers, would not be used by the ContentRepository, etc.

With the existing SegmentNodeStoreService we can configure only one nodestore. 
To support the above cases we need an OSGi config-factory-based implementation 
which enables creation of multiple SegmentNodeStore instances (each with a 
different directory and different settings)





[jira] [Updated] (OAK-4655) Enable configuring multiple segment nodestore instances in same setup

2016-08-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-4655:
-
Summary: Enable configuring multiple segment nodestore instances in same 
setup  (was: Enable configuring multiple segment nodestore in same setup)






[jira] [Comment Edited] (OAK-4636) PropertyIndexLookup#getIndexNode should be more tolerant towards property types

2016-08-08 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409736#comment-15409736
 ] 

Vikas Saurabh edited comment on OAK-4636 at 8/9/16 2:38 AM:


Fixed in [r1755366|https://svn.apache.org/r1755366]. The tests currently just 
check tolerance for {{declaringNodeTypes}} and {{propertyNames}}. Although the 
code first tries {{getNames}} and then falls back to String\[], logging a 
warning - the tests don't check for the warning.

Backported to 1.4 at [r1755549|https://svn.apache.org/r1755549], to 1.2 at 
[r170|https://svn.apache.org/r170] and to 1.0 at 
[r171|https://svn.apache.org/r171].


was (Author: catholicon):
Fixed in [r1755366|https://svn.apache.org/r1755366]. The tests currently just 
check tolerance for {{declaringNodeTypes}} and {{propertyNames}}. Although the 
code first tries {{getNames}} and then falls back to String\[], logging a 
warning - the tests don't check for the warning.

Backported to 1.4 at [r1755549|https://svn.apache.org/r1755549], to 1.2 at 
[r170|https://svn.apache.org/r170].

Also, while this is just a minor improvement, marking it as candidate for 1.0.

> PropertyIndexLookup#getIndexNode should be more tolerant towards property 
> types
> ---
>
> Key: OAK-4636
> URL: https://issues.apache.org/jira/browse/OAK-4636
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
> Fix For: 1.6, 1.0.33, 1.4.6, 1.2.18, 1.5.8
>
>
> Currently, {{PropertyIndexLookup#getIndexNode}} \[0] uses 
> {{NodeState#getNames}} for {{propertyNames}} \[1] and {{declaringNodeTypes}} 
> \[2]. That means these properties need to be of either {{Name}} or 
> {{Name\[]}} type. While that's OK, and probably useful as well - it can 
> potentially be used for validation, and the javadoc for getNames says the 
> implementation is free to use an optimized path.
> That being said, in at least one case (issue 768 in Ensure Oak Index \[3]) 
> the values get set as {{String/String\[]}} instead, which can very easily 
> render useful indices like {{nodetype}} useless for the running system.
> I see 2 ways around it:
> * PropertyIndexLookup can be more resilient and accept non-Name types too. 
> For the optimal case, we can probably try getNames() first and then fall 
> back to getProperties with a potential cast (and maybe log a warning about 
> the mistaken property type)
> * Proactively validate during commit that properties such as 
> declaringNodeTypes etc. have {{Name/Name\[]}} type
> I'm OK with either approach... but the current scenario fails too 
> silently... saving the change doesn't cause any issue, and cost calculation 
> while picking an index simply ignores such indices.
> /cc [~tmueller], [~chetanm]
> \[0]: 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/property/PropertyIndexLookup.java#L159
> \[1]: 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/property/PropertyIndexLookup.java#L171
> \[2]: 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/property/PropertyIndexLookup.java#L179
> \[3]: https://github.com/Adobe-Consulting-Services/acs-aem-commons/issues/768
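The first option (a tolerant read that warns instead of silently ignoring the index) can be sketched in plain Java; {{PropertyState}} and the method below are simplified, hypothetical stand-ins for Oak's actual types:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the tolerant lookup discussed above: prefer
// NAME-typed values, but accept STRING values with a warning rather than
// treating the index definition as unusable.
public class TolerantLookup {

    enum Type { NAME, STRING }

    // Minimal stand-in for Oak's PropertyState: a type tag plus values.
    static class PropertyState {
        final Type type;
        final List<String> values;
        PropertyState(Type type, List<String> values) {
            this.type = type;
            this.values = values;
        }
    }

    static List<String> getNamesTolerant(PropertyState ps) {
        if (ps == null) {
            return Collections.emptyList();
        }
        if (ps.type != Type.NAME) {
            // Wrong property type, but the values are still usable:
            // warn loudly instead of failing silently.
            System.err.println("WARN: expected NAME values, found " + ps.type);
        }
        return ps.values;
    }

    public static void main(String[] args) {
        PropertyState wrongType =
                new PropertyState(Type.STRING, Arrays.asList("oak:Unstructured"));
        // Values are returned despite the mistaken STRING type.
        System.out.println(getNamesTolerant(wrongType));
    }
}
```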





[jira] [Updated] (OAK-4636) PropertyIndexLookup#getIndexNode should be more tolerant towards property types

2016-08-08 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-4636:
---
Labels:   (was: candidate_oak_1_0)






[jira] [Updated] (OAK-4636) PropertyIndexLookup#getIndexNode should be more tolerant towards property types

2016-08-08 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-4636:
---
Fix Version/s: 1.0.33






[jira] [Updated] (OAK-4636) PropertyIndexLookup#getIndexNode should be more tolerant towards property types

2016-08-08 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-4636:
---
Fix Version/s: 1.2.18






[jira] [Updated] (OAK-4636) PropertyIndexLookup#getIndexNode should be more tolerant towards property types

2016-08-08 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-4636:
---
Labels: candidate_oak_1_0  (was: candidate_oak_1_0 candidate_oak_1_2)






[jira] [Comment Edited] (OAK-4636) PropertyIndexLookup#getIndexNode should be more tolerant towards property types

2016-08-08 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409736#comment-15409736
 ] 

Vikas Saurabh edited comment on OAK-4636 at 8/9/16 2:11 AM:


Fixed in [r1755366|https://svn.apache.org/r1755366]. The tests currently just 
check tolerance for {{declaringNodeTypes}} and {{propertyNames}}. Although the 
code first tries {{getNames}} and then falls back to String\[], logging a 
warning - the tests don't check for the warning.

Backported to 1.4 at [r1755549|https://svn.apache.org/r1755549], to 1.2 at 
[r170|https://svn.apache.org/r170].

Also, while this is just a minor improvement, marking it as candidate for 1.0.


was (Author: catholicon):
Fixed in [r1755366|https://svn.apache.org/r1755366]. The tests currently just 
check for tolerance for {{declaringNodeTypes}} and {{propertyNames}}. Although 
the code would first try {{getNames}} and then fallback to String\[] and 
logging a warning - the tests don't test for the warning.

Backported to 1.4 at [r1755549|https://svn.apache.org/r1755549].

Also, while this is just a minor improvement, marking it as candidate for 1.2 
and 1.0.

> PropertyIndexLookup#getIndexNode should be more tolerant towards property 
> types
> ---
>
> Key: OAK-4636
> URL: https://issues.apache.org/jira/browse/OAK-4636
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.6, 1.4.6, 1.5.8
>
>
> Currently, {{PropertyIndexLookup#getIndexNode}} \[0] uses 
> {{NodeState#getNames}} for {{propertyNames}} \[1] and {{declaringNodeTypes}} 
> \[2]. That means that these properties need to be either of {{Name}} or 
> {{Name\[]}} types. While that's ok and probably useful as well - this can 
> potentially be used for validation and javadoc for getNames says that the 
> implementation is free to use an optimized path.
> That being said, at least in one case (issue 768 in ensure oak index \[3]) 
> the values get set as {{String/String\[]}} instead which can very easily 
> render useful indices like {{nodetype}} useless for the running system.
> I see 2 ways around it:
> * PropertyIndexLookup can be more resilient and accept non-Name-type too. For 
> optimal case, we can probably try getNames() first and then fallback to 
> getProperties with potential to cast (and may be log a warning for mistake in 
> property type)
> * Proactively validate that properties such as declNodeTypes etc must have 
> {{Name/Name\[]}} during commit
> I'm ok with any of the approach... but current scenario is too silent in 
> failing... saving the change doesn't cause any issue... cost calculation 
> while picking index simply ignore such indices.
> /cc [~tmueller], [~chetanm]
> \[0]: 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/property/PropertyIndexLookup.java#L159
> \[1]: 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/property/PropertyIndexLookup.java#L171
> \[2]: 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/property/PropertyIndexLookup.java#L179
> \[3]: https://github.com/Adobe-Consulting-Services/acs-aem-commons/issues/768
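The first option above, trying getNames() first and falling back to string-typed values with a warning, can be sketched in isolation. This is plain Java with a map standing in for {{NodeState}}; the class and method shapes are illustrative assumptions, not the real Oak API.

```java
import java.util.*;

// Sketch of the proposed fallback: prefer the optimized Name-typed read,
// then fall back to plain strings while logging a warning. A Map stands in
// for NodeState; names here are illustrative, not the real Oak API.
public class TolerantNameLookup {

    // Simulates NodeState#getNames: only returns values stored as Name[]
    // (modelled here as a List), i.e. the optimized path.
    @SuppressWarnings("unchecked")
    static Iterable<String> getNames(Map<String, ?> state, String prop) {
        Object v = state.get(prop);
        return (v instanceof List) ? (List<String>) v : Collections.<String>emptyList();
    }

    // Tolerant variant: additionally accepts String values, warning about
    // the wrong property type instead of silently ignoring the index.
    static Iterable<String> getNamesTolerant(Map<String, ?> state, String prop) {
        Iterable<String> names = getNames(state, prop);
        if (names.iterator().hasNext()) {
            return names;
        }
        Object v = state.get(prop);
        if (v instanceof String) {
            System.err.println("Warning: '" + prop + "' stored as String, expected Name");
            return Collections.singletonList((String) v);
        }
        return Collections.emptyList();
    }
}
```

With this shape, an index definition whose {{declaringNodeTypes}} was saved as a plain string is still picked up, and the warning makes the type mistake visible instead of silent.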



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4636) PropertyIndexLookup#getIndexNode should be more tolerant towards property types

2016-08-08 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-4636:
---
Labels: candidate_oak_1_0 candidate_oak_1_2  (was: candidate_oak_1_0 
candidate_oak_1_2 candidate_oak_1_4)

> PropertyIndexLookup#getIndexNode should be more tolerant towards property 
> types
> ---
>
> Key: OAK-4636
> URL: https://issues.apache.org/jira/browse/OAK-4636
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.6, 1.4.6, 1.5.8
>
>





[jira] [Updated] (OAK-4636) PropertyIndexLookup#getIndexNode should be more tolerant towards property types

2016-08-08 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-4636:
---
Fix Version/s: (was: 1.6.)
   1.4.6
   1.6

> PropertyIndexLookup#getIndexNode should be more tolerant towards property 
> types
> ---
>
> Key: OAK-4636
> URL: https://issues.apache.org/jira/browse/OAK-4636
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.6, 1.4.6, 1.5.8
>
>





[jira] [Comment Edited] (OAK-4636) PropertyIndexLookup#getIndexNode should be more tolerant towards property types

2016-08-08 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409736#comment-15409736
 ] 

Vikas Saurabh edited comment on OAK-4636 at 8/9/16 1:16 AM:


Fixed in [r1755366|https://svn.apache.org/r1755366]. The tests currently just 
check for tolerance for {{declaringNodeTypes}} and {{propertyNames}}. Although 
the code would first try {{getNames}} and then fall back to String\[], logging 
a warning - the tests don't test for the warning.

Backported to 1.4 at [r1755549|https://svn.apache.org/r1755549].

Also, while this is just a minor improvement, marking it as candidate for 1.2 
and 1.0.


was (Author: catholicon):
Fixed in [r1755366|https://svn.apache.org/r1755366]. The tests currently just 
check for tolerance for {{declaringNodeTypes}} and {{propertyNames}}. Although 
the code would first try {{getNames}} and then fallback to String\[] and 
logging a warning - the tests don't test for the warning.

Also, while this is just a minor improvement, marking it as candidate for 1.4, 
1.2 and 1.0.

> PropertyIndexLookup#getIndexNode should be more tolerant towards property 
> types
> ---
>
> Key: OAK-4636
> URL: https://issues.apache.org/jira/browse/OAK-4636
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.5.8, 1.6.
>
>





[jira] [Commented] (OAK-4106) Reclaimed size reported by FileStore.cleanup is off

2016-08-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411976#comment-15411976
 ] 

Michael Dürig commented on OAK-4106:


This is what I meant: {{initialSize - finalSize}} is off by the number of bytes 
contributed by concurrent commits. If cleanup is unable to cleanup anything 
this will result in a negative value. But instead of just rounding to zero, I 
think we should correctly account for any size increase contributed by 
concurrent commits. 
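A minimal sketch of the adjusted computation, assuming the store can report the number of bytes written by concurrent commits during a cleanup run (the names below are hypothetical, not the actual Oak fields):

```java
// Sketch of the adjusted reclaimed-size computation. 'concurrentWrites' is a
// hypothetical figure for bytes added by commits running during cleanup.
public class ReclaimedSize {

    // Current behaviour: goes negative whenever the repository grows more
    // than cleanup reclaims.
    static long naiveReclaimed(long initialSize, long finalSize) {
        return initialSize - finalSize;
    }

    // Proposed: credit cleanup for concurrent growth, clamping at zero for
    // the "nothing reclaimable" case.
    static long adjustedReclaimed(long initialSize, long finalSize, long concurrentWrites) {
        return Math.max(0, initialSize + concurrentWrites - finalSize);
    }
}
```

For example, a store that grows from 100 to 120 bytes while 30 bytes of concurrent commits arrive actually reclaimed 10 bytes, which the naive difference reports as -20.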



> Reclaimed size reported by FileStore.cleanup is off
> ---
>
> Key: OAK-4106
> URL: https://issues.apache.org/jira/browse/OAK-4106
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Andrei Dulceanu
>Priority: Minor
>  Labels: cleanup, gc
> Fix For: Segment Tar 0.0.10
>
>
> The current implementation simply reports the difference between the 
> repository size before cleanup and the size after cleanup. As cleanup runs 
> concurrently with other commits, the size increase contributed by those is 
> not accounted for. In the extreme case where cleanup cannot reclaim anything 
> this can even result in negative values being reported. 
> We should either change the wording of the respective log message and speak 
> of before and after sizes, or adjust our calculation of reclaimed size 
> (preferred). 





[jira] [Comment Edited] (OAK-4106) Reclaimed size reported by FileStore.cleanup is off

2016-08-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411752#comment-15411752
 ] 

Michael Dürig edited comment on OAK-4106 at 8/8/16 3:38 PM:


[~mduerig] After reviewing FileStore.cleanup(), with the additions brought in 
by OAK-4579, I think the problem with the unaccounted size increase from 
concurrent commits cannot happen anymore. This was the case before since it was 
based on approximateSize, but now initialSize and finalSize (needed in 
reclaimed computation) are correctly computed with FileStore.size() which 
reflects 100% the size of the repository. 

Therefore the only thing to address here is IMO the case in which finalSize 
happens to be greater than initialSize (due to concurrent commits + cleanup not 
reclaiming anything). This should set reclaimed to zero and not to some 
unintuitive negative number, reflecting the actual situation encountered, where 
there wasn't any gain after cleanup and the repository also grew in the 
meantime.

WDYT?


was (Author: dulceanu):
[~mduerig] After reviewing FileStore.cleanup(), with the additions brought in 
by https://issues.apache.org/jira/browse/OAK-4579, I think the problem with the 
unaccounted size increase from concurrent commits cannot happen anymore. This 
was the case before since it was based on approximateSize, but now initialSize 
and finalSize (needed in reclaimed computation) are correctly computed with 
FileStore.size() which reflects 100% the size of the repository. 

Therefore the only thing to address here is IMO the case in which finalSize 
happens to be greater than initialSize (due to concurrent commits + cleanup not 
reclaiming anything). This should set reclaimed to zero and not to some 
unintuitive negative number, reflecting the actual situation encountered, where 
there wasn't any gain after cleanup and the repository also grew in the 
meantime.

WDYT?

> Reclaimed size reported by FileStore.cleanup is off
> ---
>
> Key: OAK-4106
> URL: https://issues.apache.org/jira/browse/OAK-4106
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Andrei Dulceanu
>Priority: Minor
>  Labels: cleanup, gc
> Fix For: Segment Tar 0.0.10
>
>





[jira] [Commented] (OAK-4097) Add metric for FileStore journal writes

2016-08-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411963#comment-15411963
 ] 

Michael Dürig commented on OAK-4097:


Thanks for the patch, good catch! I applied it at 
http://svn.apache.org/viewvc?rev=1755514&view=rev

Re. the integration test: I'm a bit concerned about the dependency on timing. 
These are the kind of tests prone to failure on CIs. Could it be done without 
depending on timings? I.e. disable the flush thread and invoke flush manually a 
couple of times?
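A timing-free variant of such a test could look like the sketch below, where the flush thread is simply never started and flush is invoked by hand. {{FakeStore}} and its fields are stand-ins for illustration, not Oak classes.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a timing-free test for the journal-write metric: no background
// flush thread, flush() is called manually. FakeStore is a stand-in, not
// the real Oak FileStore.
public class ManualFlushTest {

    static class FakeStore {
        final AtomicLong journalWrites = new AtomicLong();
        String persistedRoot = "r0";
        String currentRoot = "r0";

        // A flush writes to the journal only when the root record changed
        // since the last flush, mirroring the metric's intended semantics.
        void flush() {
            if (!currentRoot.equals(persistedRoot)) {
                persistedRoot = currentRoot;
                journalWrites.incrementAndGet();
            }
        }
    }

    static long run() {
        FakeStore store = new FakeStore();
        store.flush();                 // no change: must not count
        store.currentRoot = "r1";
        store.flush();                 // root changed: counts
        store.flush();                 // no change again: must not count
        store.currentRoot = "r2";
        store.flush();                 // root changed: counts
        return store.journalWrites.get();
    }
}
```

Because nothing depends on wall-clock time, the assertion on the counter cannot flake on a slow CI machine.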



> Add metric for FileStore journal writes
> ---
>
> Key: OAK-4097
> URL: https://issues.apache.org/jira/browse/OAK-4097
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Chetan Mehrotra
>Assignee: Andrei Dulceanu
>Priority: Minor
> Fix For: Segment Tar 0.0.10
>
> Attachments: OAK-4097-01.patch, OAK-4097-02.patch
>
>
> The TarMK flush thread should run every 5 secs and flush the current root 
> head to journal.log. It would be good to have a metric to capture the number 
> of runs per minute.
> This would help in confirming whether flush is working at the expected 
> frequency or delay in acquiring locks is causing some delays.





[jira] [Commented] (OAK-4293) Refactor / rework compaction gain estimation

2016-08-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411949#comment-15411949
 ] 

Michael Dürig commented on OAK-4293:


Nice! I like the {{GCEstimation}} abstraction, which allows for future 
evolution. My main concern is the dependency of {{SizeDeltaGcEstimation}} on 
{{FileStoreStats}} (via {{FileStoreStats#getPreviousCleanupSize}}). I would 
prefer this the other way around: {{SizeDeltaGcEstimation}} would depend on 
{{GCJournalWriter}} directly. IMO {{FileStoreStats}} should be "monitoring 
only". 
A minor point is naming: I would prefer {{GCJournal}} to {{GCJournalWriter}}, 
as it actually also covers the reading part. 

> Refactor / rework compaction gain estimation 
> -
>
> Key: OAK-4293
> URL: https://issues.apache.org/jira/browse/OAK-4293
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Alex Parvulescu
>  Labels: gc
> Fix For: Segment Tar 0.0.10
>
> Attachments: size-estimation.patch
>
>
> I think we have to take another look at {{CompactionGainEstimate}} and see 
> whether we can come up with a more efficient way to estimate the compaction 
> gain. The current implementation is expensive wrt. IO, CPU and cache 
> coherence. If we want to keep an estimation step we need IMO to come up with 
> a cheap way (at least 2 orders of magnitude cheaper than compaction). 
> Otherwise I would actually propose to remove the current estimation approach 
> entirely.





[jira] [Commented] (OAK-4293) Refactor / rework compaction gain estimation

2016-08-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411918#comment-15411918
 ] 

Michael Dürig commented on OAK-4293:


Not too sure here. This might work on repositories expected to stay at roughly 
a constant size. For others that we expect to grow it might lead to deferring 
compaction more and more with each cycle. 

> Refactor / rework compaction gain estimation 
> -
>
> Key: OAK-4293
> URL: https://issues.apache.org/jira/browse/OAK-4293
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Alex Parvulescu
>  Labels: gc
> Fix For: Segment Tar 0.0.10
>
> Attachments: size-estimation.patch
>
>





[jira] [Comment Edited] (OAK-4293) Refactor / rework compaction gain estimation

2016-08-08 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399257#comment-15399257
 ] 

Alex Parvulescu edited comment on OAK-4293 at 8/8/16 2:49 PM:
--

I started implementing a persisted gc journal that would contain the size 
post-cleanup which can be used as a reference for growth estimation: 
https://github.com/stillalex/jackrabbit-oak/commit/eb7d4c17a352cc837d8d441c8ddc490fab95c3e2

not completely tied to the compaction estimation, this can also be used by the 
upper layers (JMX bindings perhaps) to surface the compaction history (and repo 
sizes delta since last compaction) and possibly allow someone to manually 
trigger compaction if they think necessary.

patch only contains the journal persisting bits, the info is not used yet. 
[~mduerig] thoughts?


was (Author: alex.parvulescu):
I started implementing a persisted gc journal that would contain the size 
post-cleanup which can be used as a reference for growth estimation: 
https://github.com/stillalex/jackrabbit-oak/commit/d8a9a756df9c3e1414cfb554264122216fb6e73e

not completely tied to the compaction estimation, this can also be used by the 
upper layers (JMX bindings perhaps) to surface the compaction history (and repo 
sizes delta since last compaction) and possibly allow someone to manually 
trigger compaction if they think necessary.

patch only contains the journal persisting bits, the info is not used yet. 
[~mduerig] thoughts?
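The growth estimation built on the persisted gc journal could be sketched as follows, assuming the journal exposes the repository size recorded after the last cleanup. The class and field names are assumptions for illustration, not the actual patch.

```java
// Sketch of a size-delta gc estimation backed by a persisted gc journal:
// trigger compaction once the repository grew by more than a configured
// delta since the size recorded after the last cleanup. Names are assumed.
public class SizeDeltaGcEstimation {

    final long sizeAfterLastCleanup; // read from the persisted gc journal
    final long delta;                // configured growth threshold in bytes

    SizeDeltaGcEstimation(long sizeAfterLastCleanup, long delta) {
        this.sizeAfterLastCleanup = sizeAfterLastCleanup;
        this.delta = delta;
    }

    // Cheap estimation: a single subtraction against the journal entry,
    // orders of magnitude cheaper than walking the segment graph.
    boolean gcNeeded(long currentSize) {
        return currentSize - sizeAfterLastCleanup > delta;
    }
}
```

This also illustrates Michael's concern above: on a repository that is expected to grow, each cleanup raises the recorded baseline, so a fixed delta keeps deferring compaction unless the threshold is adapted.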

> Refactor / rework compaction gain estimation 
> -
>
> Key: OAK-4293
> URL: https://issues.apache.org/jira/browse/OAK-4293
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Alex Parvulescu
>  Labels: gc
> Fix For: Segment Tar 0.0.10
>
> Attachments: size-estimation.patch
>
>





[jira] [Comment Edited] (OAK-4097) Add metric for FileStore journal writes

2016-08-08 Thread Andrei Dulceanu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411904#comment-15411904
 ] 

Andrei Dulceanu edited comment on OAK-4097 at 8/8/16 2:46 PM:
--

Actually I found a bug in my previous implementation which showed up because 
each time {{FileStore.flush()}} was invoked {{FileStoreStats.flushed()}} was 
called, although the intention was to capture only the calls which result in 
writes to the journal file (which happen only if the root record has changed 
since last time). I corrected this and also replaced 
{{FileStoreStatsMBean.getJournalWriteStats()}} with 
{{getJournalWriteStatsAsCount()}} and {{getJournalWriteStatsAsCompositeData()}} 
to improve testability. I also added a basic check in 
{{FileStoreStatsTest.tarWriterIntegration()}} to verify that the journal 
contains one entry if one record is written to the store. I enclose the patch 
containing these changes.

IMO testing this metric can be better captured by an IT with this scenario:
# Create file store.
# Create some node under root with a property of type long ("count",0)
# Save the session
# Each 6 seconds increase the count property and save the session
# Repeat previous step 10 times.
# Since the {{flushOperation}} is called each 5 seconds, check that the 
{{FileStoreStats.getJournalWriteStatsAsCount()}} returns 10.

[~mduerig], [~chetanm] WDYT? If you find the IT approach valuable, I'll go 
ahead and provide a new patch containing all the above changes.



was (Author: dulceanu):
Actually I found a bug in my previous implementation which showed up because 
each time {{FileStore.flush()}} was invoked {{FileStoreStats.flushed()}} was 
called, although the intention was to capture only the calls which result in 
writes to the journal file (which happen only if the root record has changed 
since last time). I corrected this and also replaced 
{{FileStoreStatsMBean.getJournalWriteStats()}} with 
{{getJournalWriteStatsAsCount()}} and {{getJournalWriteStatsAsCompositeData()}} 
to improve testability. I also added a basic check in 
{{FileStoreStats.tarWriterIntegration()}} to verify that the journal contains 
one entry if one record is written to the store. I enclose the patch containing 
these changes.

IMO testing this metric can be better captured by an IT with this scenario:
# Create file store.
# Create some node under root with a property of type long ("count",0)
# Save the session
# Each 6 seconds increase the count property and save the session
# Repeat previous step 10 times.
# Since the {{flushOperation}} is called each 5 seconds, check that the 
{{FileStoreStats.getJournalWriteStatsAsCount()}} returns 10.

[~mduerig], [~chetanm] WDYT? If you find the IT approach valuable, I'll go 
ahead and provide a new patch containing all the above changes.


> Add metric for FileStore journal writes
> ---
>
> Key: OAK-4097
> URL: https://issues.apache.org/jira/browse/OAK-4097
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Chetan Mehrotra
>Assignee: Andrei Dulceanu
>Priority: Minor
> Fix For: Segment Tar 0.0.10
>
> Attachments: OAK-4097-01.patch, OAK-4097-02.patch
>
>





[jira] [Comment Edited] (OAK-4097) Add metric for FileStore journal writes

2016-08-08 Thread Andrei Dulceanu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411904#comment-15411904
 ] 

Andrei Dulceanu edited comment on OAK-4097 at 8/8/16 2:45 PM:
--

Actually I found a bug in my previous implementation which showed up because 
each time {{FileStore.flush()}} was invoked {{FileStoreStats.flushed()}} was 
called, although the intention was to capture only the calls which result in 
writes to the journal file (which happen only if the root record has changed 
since last time). I corrected this and also replaced 
{{FileStoreStatsMBean.getJournalWriteStats()}} with 
{{getJournalWriteStatsAsCount()}} and {{getJournalWriteStatsAsCompositeData()}} 
to improve testability. I also added a basic check in 
{{FileStoreStats.tarWriterIntegration()}} to verify that the journal contains 
one entry if one record is written to the store. I enclose the patch containing 
these changes.

IMO testing this metric can be better captured by an IT with this scenario:
# Create file store.
# Create some node under root with a property of type long ("count",0)
# Save the session
# Each 6 seconds increase the count property and save the session
# Repeat previous step 10 times.
# Since the {{flushOperation}} is called each 5 seconds, check that the 
{{FileStoreStats.getJournalWriteStatsAsCount()}} returns 10.

[~mduerig], [~chetanm] WDYT? If you find the IT approach valuable, I'll go 
ahead and provide a new patch containing all the above changes.



was (Author: dulceanu):
Actually I found out a bug in my previous implementation which showed up 
because each time {{FileStore.flush()}} was invoked 
{{FileStoreStats.flushed()}} was called, although the intention was to capture 
only the calls which result in writes to the journal file (which happen only if 
the root record has changed since last time). I corrected this and also 
replaced {{FileStoreStatsMBean.getJournalWriteStats()}} with 
{{getJournalWriteStatsAsCount()}} and {{getJournalWriteStatsAsCompositeData()}} 
to improve testability. I also added a basic check in 
{{FileStoreStats.tarWriterIntegration()}} to verify that the journal contains 
one entry if one record is written to the store. I enclose the patch containing 
these changes.

IMO testing this metric can be better captured by an IT with this scenario:
# Create file store.
# Create some node under root with a property of type long ("count",0)
# Save the session
# Each 6 seconds increase the count property and save the session
# Repeat previous step 10 times.
# Since the {{flushOperation}} is called each 5 seconds, check that the 
{{FileStoreStats.getJournalWriteStatsAsCount()}} returns 10.

[~mduerig], [~chetanm] WDYT? If you find the IT approach valuable, I'll go 
ahead and provide a new patch containing all the above changes.


> Add metric for FileStore journal writes
> ---
>
> Key: OAK-4097
> URL: https://issues.apache.org/jira/browse/OAK-4097
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Chetan Mehrotra
>Assignee: Andrei Dulceanu
>Priority: Minor
> Fix For: Segment Tar 0.0.10
>
> Attachments: OAK-4097-01.patch, OAK-4097-02.patch
>
>





[jira] [Updated] (OAK-4097) Add metric for FileStore journal writes

2016-08-08 Thread Andrei Dulceanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Dulceanu updated OAK-4097:
-
Attachment: OAK-4097-02.patch

Actually I found out a bug in my previous implementation which showed up 
because each time {{FileStore.flush()}} was invoked 
{{FileStoreStats.flushed()}} was called, although the intention was to capture 
only the calls which result in writes to the journal file (which happen only if 
the root record has changed since last time). I corrected this and also 
replaced {{FileStoreStatsMBean.getJournalWriteStats()}} with 
{{getJournalWriteStatsAsCount()}} and {{getJournalWriteStatsAsCompositeData()}} 
to improve testability. I also added a basic check in 
{{FileStoreStats.tarWriterIntegration()}} to verify that the journal contains 
one entry if one record is written to the store. I enclose the patch containing 
these changes.

IMO testing this metric can be better captured by an IT with this scenario:
# Create file store.
# Create some node under root with a property of type long ("count",0)
# Save the session
# Each 6 seconds increase the count property and save the session
# Repeat previous step 10 times.
# Since the {{flushOperation}} is called each 5 seconds, check that the 
{{FileStoreStats.getJournalWriteStatsAsCount()}} returns 10.

[~mduerig], [~chetanm] WDYT? If you find the IT approach valuable, I'll go 
ahead and provide a new patch containing all the above changes.


> Add metric for FileStore journal writes
> ---
>
> Key: OAK-4097
> URL: https://issues.apache.org/jira/browse/OAK-4097
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Chetan Mehrotra
>Assignee: Andrei Dulceanu
>Priority: Minor
> Fix For: Segment Tar 0.0.10
>
> Attachments: OAK-4097-01.patch, OAK-4097-02.patch
>
>





[jira] [Commented] (OAK-4635) Improve cache eviction policy of the node deduplication cache

2016-08-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411879#comment-15411879
 ] 

Michael Dürig commented on OAK-4635:


I updated both branches from above with an approach where evicting a random 
element from the cache doesn't suffer the performance penalty seen so far:

* https://github.com/mduerig/jackrabbit-oak/tree/OAK-4635-1: evict single 
element on deepest level instead of entire level
* https://github.com/mduerig/jackrabbit-oak/tree/OAK-4635-2: cache nodes by 
weight instead of depth

> Improve cache eviction policy of the node deduplication cache
> -
>
> Key: OAK-4635
> URL: https://issues.apache.org/jira/browse/OAK-4635
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: perfomance
> Fix For: Segment Tar 0.0.10
>
>
> {{NodeCache}} uses one stripe per depth (of the nodes in the tree). Once its 
> overall capacity (default 100 nodes) is exceeded, it clears all nodes 
> from the stripe with the greatest depth. This can be problematic when the 
> stripe with the greatest depth contains most of the nodes as clearing it 
> would result in an almost empty cache. 
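The stripe-per-depth layout and the proposed single-element eviction can be modelled with a toy class (a simplified stand-in, not the real Oak {{NodeCache}}):

```java
import java.util.*;

// Toy model of a stripe-per-depth node cache. The proposed eviction drops a
// single entry from the deepest non-empty stripe instead of clearing the
// whole stripe, so one oversized stripe no longer empties the cache.
public class DepthStripedCache {

    final List<Set<String>> stripes = new ArrayList<>(); // index = node depth
    final int capacity;
    int size = 0;

    DepthStripedCache(int capacity) { this.capacity = capacity; }

    void put(String node, int depth) {
        while (stripes.size() <= depth) stripes.add(new LinkedHashSet<>());
        if (stripes.get(depth).add(node)) size++;
        while (size > capacity) evictOneFromDeepest();
    }

    // Proposed policy: remove one element from the deepest non-empty stripe.
    void evictOneFromDeepest() {
        for (int d = stripes.size() - 1; d >= 0; d--) {
            Iterator<String> it = stripes.get(d).iterator();
            if (it.hasNext()) { it.next(); it.remove(); size--; return; }
        }
    }
}
```

Even when most cached nodes live at the greatest depth, exceeding capacity now evicts exactly one entry there, keeping the shallower stripes (and most of the deep one) intact.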





[jira] [Updated] (OAK-4654) Allow to mount the secondary node store as a read-only subtree

2016-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-4654:
---
Description: 
For the document node store it should be possible to mount "another" node store 
under some path. Assumptions for the OSGi setup:

* the mounted node store provider has to be registered with {{(role=mounted)}} 
in OSGi,
* the MountInfoProvider contains a Mount registered as {{private}},
* all reads of the paths configured in MountInfoProvider are redirected to the 
mounted node store,
* mounted subtrees are read-only,
* the properties characteristic to the document node store (lastRev, rootRev) 
are set to a constant value.

  was:
For the document node store it should be possible to mount "another" node store 
under some path. Assumptions for the OSGi setup:

* the mounted node store provider has to be registered with {{(role=mounted)}} 
in OSGi,
* the MountInfoProvider contains a Mount registered as {{private}},
* all reads of the paths configured in MountInfoProvider are redirected to the 
mounted node store,
* mounted subtrees are read-only,
* the properties characteristic to the document node store (lastRev, rootRev) 
are set to a constant value (so we can "mount" the segment node store as well).


> Allow to mount the secondary node store as a read-only subtree
> --
>
> Key: OAK-4654
> URL: https://issues.apache.org/jira/browse/OAK-4654
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: documentmk
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6
>
>
> For the document node store it should be possible to mount "another" node 
> store under some path. Assumptions for the OSGi setup:
> * the mounted node store provider has to be registered with 
> {{(role=mounted)}} in OSGi,
> * the MountInfoProvider contains a Mount registered as {{private}},
> * all reads of the paths configured in MountInfoProvider are redirected to 
> the mounted node store,
> * mounted subtrees are read-only,
> * the properties characteristic to the document node store (lastRev, rootRev) 
> are set to a constant value.





[jira] [Updated] (OAK-4654) Allow to mount the secondary node store as a read-only subtree

2016-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-4654:
---
Description: 
For the document node store it should be possible to mount "another" node store 
under some path. Assumptions for the OSGi setup:

* the mounted node store provider has to be registered with {{(role=mounted)}} 
in OSGi,
* the MountInfoProvider contains a Mount registered as {{private}},
* all reads of the paths configured in MountInfoProvider are redirected to the 
mounted node store,
* mounted subtrees are read-only,
* the properties characteristic to the document node store (lastRev, rootRev) 
are set to a constant value (so we can "mount" the segment node store as well).

  was:
For the document node store it should be possible to mount "another" node store 
under some path. Assumptions for the OSGi setup:

* the mounted node store provider has to be registered with {{(role=mounted)}} 
in OSGi,
* the MountInfoProvider contains a Mount registered as {{mounted}},
* all reads of the paths configured in MountInfoProvider are redirected to the 
mounted node store,
* mounted subtrees are read-only,
* the properties characteristic to the document node store (lastRev, rootRev) 
are set to a constant value (so we can "mount" the segment node store as well).


> Allow to mount the secondary node store as a read-only subtree
> --
>
> Key: OAK-4654
> URL: https://issues.apache.org/jira/browse/OAK-4654
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: documentmk
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6
>
>
> For the document node store it should be possible to mount "another" node 
> store under some path. Assumptions for the OSGi setup:
> * the mounted node store provider has to be registered with 
> {{(role=mounted)}} in OSGi,
> * the MountInfoProvider contains a Mount registered as {{private}},
> * all reads of the paths configured in MountInfoProvider are redirected to 
> the mounted node store,
> * mounted subtrees are read-only,
> * the properties characteristic to the document node store (lastRev, rootRev) 
> are set to a constant value (so we can "mount" the segment node store as 
> well).





[jira] [Updated] (OAK-4654) Allow to mount the secondary node store as a read-only subtree

2016-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-4654:
---
Description: 
For the document node store it should be possible to mount "another" node store 
under some path. Assumptions for the OSGi setup:

* the mounted node store provider has to be registered with {{(role=mounted)}} 
in OSGi,
* the MountInfoProvider contains a Mount registered as {{mounted}},
* all reads of the paths configured in MountInfoProvider are redirected to the 
mounted node store,
* mounted subtrees are read-only,
* the properties characteristic to the document node store (lastRev, rootRev) 
are set to a constant value (so we can "mount" the segment node store as well).

> Allow to mount the secondary node store as a read-only subtree
> --
>
> Key: OAK-4654
> URL: https://issues.apache.org/jira/browse/OAK-4654
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: documentmk
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6
>
>
> For the document node store it should be possible to mount "another" node 
> store under some path. Assumptions for the OSGi setup:
> * the mounted node store provider has to be registered with 
> {{(role=mounted)}} in OSGi,
> * the MountInfoProvider contains a Mount registered as {{mounted}},
> * all reads of the paths configured in MountInfoProvider are redirected to 
> the mounted node store,
> * mounted subtrees are read-only,
> * the properties characteristic to the document node store (lastRev, rootRev) 
> are set to a constant value (so we can "mount" the segment node store as 
> well).





[jira] [Commented] (OAK-4106) Reclaimed size reported by FileStore.cleanup is off

2016-08-08 Thread Andrei Dulceanu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411752#comment-15411752
 ] 

Andrei Dulceanu commented on OAK-4106:
--

[~mduerig] After reviewing FileStore.cleanup(), with the additions brought in 
by https://issues.apache.org/jira/browse/OAK-4579, I think the problem of the 
unaccounted size increase from concurrent commits can no longer occur. It was 
possible before because the computation was based on approximateSize, but now 
initialSize and finalSize (needed for the reclaimed computation) are correctly 
computed with FileStore.size(), which exactly reflects the size of the 
repository.

Therefore, IMO, the only thing left to address here is the case in which 
finalSize happens to be greater than initialSize (due to concurrent commits 
plus a cleanup that reclaims nothing). This should set reclaimed to zero and 
not to some unintuitive negative number, reflecting the actual situation 
encountered: there was no gain from cleanup and the repository also grew in 
the meantime.

WDYT?
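A minimal sketch of the suggested clamping, using the variable names from the 
comment above (the helper class itself is hypothetical, not the actual 
{{FileStore}} code):

```java
// Report reclaimed space as zero instead of a negative number when
// concurrent commits grow the repository while cleanup runs.
final class ReclaimedSize {
    static long reclaimed(long initialSize, long finalSize) {
        return Math.max(0L, initialSize - finalSize);
    }
}
```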

> Reclaimed size reported by FileStore.cleanup is off
> ---
>
> Key: OAK-4106
> URL: https://issues.apache.org/jira/browse/OAK-4106
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Andrei Dulceanu
>Priority: Minor
>  Labels: cleanup, gc
> Fix For: Segment Tar 0.0.10
>
>
> The current implementation simply reports the difference between the 
> repository size before cleanup to the size after cleanup. As cleanup runs 
> concurrently to other commits, the size increase contributed by those is not 
> accounted for. In the extreme case where cleanup cannot reclaim anything this 
> can even result in negative values being reported. 
> We should either change the wording of the respective log message and speak 
> of before and after sizes or adjust our calculation of reclaimed size 
> (preferred). 





[jira] [Commented] (OAK-4196) EventListener gets removed event for denied node

2016-08-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411751#comment-15411751
 ] 

Michael Dürig commented on OAK-4196:


Yes, this is what it means. The permissions are evaluated against a permission 
provider based on the {{after}} state. See 
{{FilterBuilder.ACCondition#createFilter}} where it is acquired. 

> EventListener gets removed event for denied node
> 
>
> Key: OAK-4196
> URL: https://issues.apache.org/jira/browse/OAK-4196
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, jcr, security
>Affects Versions: 1.4
>Reporter: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6
>
>
> An EventListener of a session that does not have read access to a node may 
> get a node removed event when the node is removed.





[jira] [Commented] (OAK-4196) EventListener gets removed event for denied node

2016-08-08 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411749#comment-15411749
 ] 

angela commented on OAK-4196:
-

[~mduerig] and myself discussed this again offlist: if observation events are 
triggered _after_ persisting any transient modifications, it looks right to us 
to perform the permission evaluation against the persisted state (as mandated 
by the specification). In this case the removal of the denying permission 
entry together with the node removal would just show this effect and should be 
considered to work as expected.

I would therefore suggest that we
- adjust the test case accordingly and remove it from the known issues list
- add a hint to the observation and/or security documentation.

[~mreutegg], please let us know if that sounds reasonable to you.

> EventListener gets removed event for denied node
> 
>
> Key: OAK-4196
> URL: https://issues.apache.org/jira/browse/OAK-4196
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, jcr, security
>Affects Versions: 1.4
>Reporter: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6
>
>
> An EventListener of a session that does not have read access to a node may 
> get a node removed event when the node is removed.





[jira] [Assigned] (OAK-4654) Allow to mount the secondary node store as a read-only subtree

2016-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek reassigned OAK-4654:
--

Assignee: Tomek Rękawek

> Allow to mount the secondary node store as a read-only subtree
> --
>
> Key: OAK-4654
> URL: https://issues.apache.org/jira/browse/OAK-4654
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: documentmk
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6
>
>






[jira] [Created] (OAK-4654) Allow to mount the secondary node store as a read-only subtree

2016-08-08 Thread JIRA
Tomek Rękawek created OAK-4654:
--

 Summary: Allow to mount the secondary node store as a read-only 
subtree
 Key: OAK-4654
 URL: https://issues.apache.org/jira/browse/OAK-4654
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: documentmk
Reporter: Tomek Rękawek
 Fix For: 1.6








[jira] [Comment Edited] (OAK-4196) EventListener gets removed event for denied node

2016-08-08 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411719#comment-15411719
 ] 

angela edited comment on OAK-4196 at 8/8/16 12:18 PM:
--

But wouldn't that mean that the permission entry present with the node at 
{{childNPath}}, which just got removed, no longer exists in the 
permission store associated with the 'before' state (for whatever reason)?

The permission entry used to be there before the removal as the extra 
verification for {{nodeExists}} in the beginning of the tests shows.


was (Author: anchela):
But wouldn't that mean that the permission entry as present with the node at 
{{childNPath}}, which just got removed, doesn't no longer exist in the 
permission-store associated with the 'before'?

The permission entry used to be there before the removal as the extra 
verification for {{nodeExists}} in the beginning of the tests shows.

> EventListener gets removed event for denied node
> 
>
> Key: OAK-4196
> URL: https://issues.apache.org/jira/browse/OAK-4196
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, jcr, security
>Affects Versions: 1.4
>Reporter: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6
>
>
> An EventListener of a session that does not have read access to a node may 
> get a node removed event when the node is removed.





[jira] [Commented] (OAK-4196) EventListener gets removed event for denied node

2016-08-08 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411719#comment-15411719
 ] 

angela commented on OAK-4196:
-

But wouldn't that mean that the permission entry present with the node at 
{{childNPath}}, which just got removed, no longer exists in the 
permission store associated with the 'before' state?

The permission entry used to be there before the removal as the extra 
verification for {{nodeExists}} in the beginning of the tests shows.

> EventListener gets removed event for denied node
> 
>
> Key: OAK-4196
> URL: https://issues.apache.org/jira/browse/OAK-4196
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, jcr, security
>Affects Versions: 1.4
>Reporter: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6
>
>
> An EventListener of a session that does not have read access to a node may 
> get a node removed event when the node is removed.





[jira] [Commented] (OAK-4293) Refactor / rework compaction gain estimation

2016-08-08 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411704#comment-15411704
 ] 

Alex Parvulescu commented on OAK-4293:
--

I'm wondering whether it would be good to also provide a 'relative' delta (a 
delta percentage) instead of only an absolute one (a bytes-only size delta) to 
trigger compaction. This way one could set it to {{15%}}, for example, and not 
have to worry about the order of magnitude of the delta setting.
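A relative threshold could be checked roughly like this; the class, method, 
and parameter names below are hypothetical illustrations, not Oak's actual 
estimation API:

```java
// Trigger compaction when the estimated gain is at least the given
// percentage of the current repository size, avoiding any dependence on
// the absolute order of magnitude of the delta.
final class CompactionTrigger {
    static boolean shouldCompact(long currentSize, long estimatedGain, int thresholdPercent) {
        if (currentSize <= 0) {
            return false;
        }
        // Integer form of: estimatedGain / currentSize >= thresholdPercent / 100
        return estimatedGain * 100 >= (long) thresholdPercent * currentSize;
    }
}
```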

> Refactor / rework compaction gain estimation 
> -
>
> Key: OAK-4293
> URL: https://issues.apache.org/jira/browse/OAK-4293
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Alex Parvulescu
>  Labels: gc
> Fix For: Segment Tar 0.0.10
>
> Attachments: size-estimation.patch
>
>
> I think we have to take another look at {{CompactionGainEstimate}} and see 
> whether we can come up with a more efficient way to estimate the compaction 
> gain. The current implementation is expensive wrt. IO, CPU and cache 
> coherence. If we want to keep an estimation step, we need, IMO, to come up 
> with a cheap way (at least 2 orders of magnitude cheaper than compaction). 
> Otherwise I would actually propose to remove the current estimation approach 
> entirely.





[jira] [Commented] (OAK-4196) EventListener gets removed event for denied node

2016-08-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411641#comment-15411641
 ] 

Michael Dürig commented on OAK-4196:


Have a look at {{ACFilter#getTreePermission}} where a {{TreePermission}} 
instance is acquired from a permission provider. It uses the {{after}} state 
for doing this and falls back to the {{before}} state if the former is not 
available (e.g. in the case of a removal). My guess is that this is the issue 
here: the before state doesn't contain the updated permissions, so the removal 
event gets sent.
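The fallback can be pictured with a minimal sketch; the types below are 
hypothetical stand-ins for Oak's {{TreePermission}} machinery, not the real 
API:

```java
import java.util.Map;

// Look up the read permission for a path in the "after" state and fall
// back to the "before" state when the node no longer exists there (e.g.
// after a removal). A stale entry in the before state then takes effect.
class PermissionFallback {
    static boolean canRead(Map<String, Boolean> after, Map<String, Boolean> before, String path) {
        Boolean permission = after.get(path);
        if (permission == null) {
            permission = before.get(path);
        }
        return permission != null && permission;
    }
}
```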

> EventListener gets removed event for denied node
> 
>
> Key: OAK-4196
> URL: https://issues.apache.org/jira/browse/OAK-4196
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, jcr, security
>Affects Versions: 1.4
>Reporter: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6
>
>
> An EventListener of a session that does not have read access to a node may 
> get a node removed event when the node is removed.





[jira] [Commented] (OAK-4123) Persistent cache: allow to configure the add data concurrency

2016-08-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411635#comment-15411635
 ] 

Tomek Rękawek commented on OAK-4123:


Backported to 1.2.x in [r1755493|https://svn.apache.org/r1755493].

> Persistent cache: allow to configure the add data concurrency
> -
>
> Key: OAK-4123
> URL: https://issues.apache.org/jira/browse/OAK-4123
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache
>Reporter: Tomek Rękawek
>Assignee: Thomas Mueller
> Fix For: 1.4.1, 1.5.0, 1.2.18
>
> Attachments: OAK-4123.patch
>
>
> During operations that create and read a large number of nodes (e.g. 
> indexing content) it may happen that there are more items in the asynchronous 
> queue (introduced in OAK-2761) than the queue consumer can handle. As a 
> result, the queue is purged and items are not saved in the cache, which makes 
> the overall performance worse.
> An easy fix is to add a property that allows switching between async and sync 
> mode. By default, it should be synchronous.
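The async/sync switch described above can be sketched as follows; the names 
are illustrative, and this is not the actual {{PersistentCache}} 
implementation:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// In async mode a write is queued for a background consumer and silently
// dropped if the bounded queue is full; in sync mode it runs on the
// caller's thread, so nothing is lost but the caller pays the cost.
class CacheWriter {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>(1024);
    private final boolean async;

    CacheWriter(boolean async) {
        this.async = async;
    }

    void write(Runnable putOperation) {
        if (async) {
            queue.offer(putOperation); // offer() returns false (drops) when full
        } else {
            putOperation.run();
        }
    }

    int pending() {
        return queue.size();
    }
}
```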





[jira] [Updated] (OAK-4123) Persistent cache: allow to configure the add data concurrency

2016-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-4123:
---
Fix Version/s: 1.2.18

> Persistent cache: allow to configure the add data concurrency
> -
>
> Key: OAK-4123
> URL: https://issues.apache.org/jira/browse/OAK-4123
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache
>Reporter: Tomek Rękawek
>Assignee: Thomas Mueller
> Fix For: 1.4.1, 1.5.0, 1.2.18
>
> Attachments: OAK-4123.patch
>
>
> During operations that create and read a large number of nodes (e.g. 
> indexing content) it may happen that there are more items in the asynchronous 
> queue (introduced in OAK-2761) than the queue consumer can handle. As a 
> result, the queue is purged and items are not saved in the cache, which makes 
> the overall performance worse.
> An easy fix is to add a property that allows switching between async and sync 
> mode. By default, it should be synchronous.





[jira] [Commented] (OAK-2761) Persistent cache: add data in a different thread

2016-08-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411634#comment-15411634
 ] 

Tomek Rękawek commented on OAK-2761:


Backported to 1.2.x in [r1755492|https://svn.apache.org/r1755492].

> Persistent cache: add data in a different thread
> 
>
> Key: OAK-2761
> URL: https://issues.apache.org/jira/browse/OAK-2761
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache, core, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: resilience
> Fix For: 1.4, 1.3.16, 1.2.18
>
> Attachments: AsyncCacheTest.patch, OAK-2761-1.2.patch, 
> OAK-2761-trunk.patch
>
>
> The persistent cache usually stores data in a background thread, but 
> sometimes (if a lot of data is added quickly) the foreground thread is 
> blocked.
> Even worse, switching the cache file can happen in a foreground thread, with 
> the following stack trace.
> {noformat}
> "127.0.0.1 [1428931262206] POST /bin/replicate.json HTTP/1.1" prio=5 
> tid=0x7fe5df819800 nid=0x9907 runnable [0x000113fc4000]
>java.lang.Thread.State: RUNNABLE
> ...
>   at org.h2.mvstore.MVStoreTool.compact(MVStoreTool.java:404)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.closeStore(PersistentCache.java:213)
>   - locked <0x000782483050> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.switchGenerationIfNeeded(PersistentCache.java:350)
>   - locked <0x000782455710> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.write(NodeCache.java:85)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.put(NodeCache.java:130)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.applyChanges(DocumentNodeStore.java:1060)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Commit.applyToCache(Commit.java:599)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.afterTrunkCommit(CommitQueue.java:127)
>   - locked <0x000781890788> (a 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.done(CommitQueue.java:83)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.done(DocumentNodeStore.java:637)
> {noformat}
> To avoid blocking the foreground thread, one solution is to store all data 
> in a separate thread. If too much data is added, then some of the data is 
> not stored; if possible, the data dropped should be entries that were not 
> referenced often and / or old revisions of documents (if newer revisions are 
> available).





[jira] [Updated] (OAK-2761) Persistent cache: add data in a different thread

2016-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-2761:
---
Fix Version/s: 1.2.18

> Persistent cache: add data in a different thread
> 
>
> Key: OAK-2761
> URL: https://issues.apache.org/jira/browse/OAK-2761
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache, core, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: resilience
> Fix For: 1.4, 1.3.16, 1.2.18
>
> Attachments: AsyncCacheTest.patch, OAK-2761-1.2.patch, 
> OAK-2761-trunk.patch
>
>
> The persistent cache usually stores data in a background thread, but 
> sometimes (if a lot of data is added quickly) the foreground thread is 
> blocked.
> Even worse, switching the cache file can happen in a foreground thread, with 
> the following stack trace.
> {noformat}
> "127.0.0.1 [1428931262206] POST /bin/replicate.json HTTP/1.1" prio=5 
> tid=0x7fe5df819800 nid=0x9907 runnable [0x000113fc4000]
>java.lang.Thread.State: RUNNABLE
> ...
>   at org.h2.mvstore.MVStoreTool.compact(MVStoreTool.java:404)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.closeStore(PersistentCache.java:213)
>   - locked <0x000782483050> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.switchGenerationIfNeeded(PersistentCache.java:350)
>   - locked <0x000782455710> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.write(NodeCache.java:85)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.put(NodeCache.java:130)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.applyChanges(DocumentNodeStore.java:1060)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Commit.applyToCache(Commit.java:599)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.afterTrunkCommit(CommitQueue.java:127)
>   - locked <0x000781890788> (a 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.done(CommitQueue.java:83)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.done(DocumentNodeStore.java:637)
> {noformat}
> To avoid blocking the foreground thread, one solution is to store all data 
> in a separate thread. If too much data is added, then some of the data is 
> not stored; if possible, the data dropped should be entries that were not 
> referenced often and / or old revisions of documents (if newer revisions are 
> available).





[jira] [Updated] (OAK-3997) Include eviction cause to the LIRS removal callback

2016-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3997:
---
Fix Version/s: 1.2.18

> Include eviction cause to the LIRS removal callback
> ---
>
> Key: OAK-3997
> URL: https://issues.apache.org/jira/browse/OAK-3997
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache
>Reporter: Tomek Rękawek
>Assignee: Marcel Reutegger
> Fix For: 1.4, 1.3.16, 1.2.18
>
> Attachments: OAK-3997-1.2.patch, OAK-3997-trunk.patch
>
>
> Enhance the {{EvictionCallback#evicted()}} method with a new argument: 
> {{cause}}. It may be a Guava {{com.google.common.cache.RemovalCause}}, even 
> though the {{COLLECTED}} and {{EXPIRED}} won't be used, as LIRS cache doesn't 
> support weak values and TTL yet.
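The enhancement described above could look roughly like this; the exact Oak 
signature may differ, so treat this as a sketch of the callback shape only:

```java
// Eviction callback that also reports why the entry was removed. The
// causes mirror a subset of Guava's RemovalCause; COLLECTED and EXPIRED
// are omitted because the cache supports neither weak values nor TTL yet.
interface EvictionCallback<K, V> {
    enum Cause { EXPLICIT, REPLACED, SIZE }

    void evicted(K key, V value, Cause cause);
}
```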





[jira] [Commented] (OAK-3997) Include eviction cause to the LIRS removal callback

2016-08-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411633#comment-15411633
 ] 

Tomek Rękawek commented on OAK-3997:


Backported to 1.2.x in [r1755491|https://svn.apache.org/r1755491].

> Include eviction cause to the LIRS removal callback
> ---
>
> Key: OAK-3997
> URL: https://issues.apache.org/jira/browse/OAK-3997
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache
>Reporter: Tomek Rękawek
>Assignee: Marcel Reutegger
> Fix For: 1.4, 1.3.16, 1.2.18
>
> Attachments: OAK-3997-1.2.patch, OAK-3997-trunk.patch
>
>
> Enhance the {{EvictionCallback#evicted()}} method with a new argument: 
> {{cause}}. It may be a Guava {{com.google.common.cache.RemovalCause}}, even 
> though the {{COLLECTED}} and {{EXPIRED}} won't be used, as LIRS cache doesn't 
> support weak values and TTL yet.





[jira] [Updated] (OAK-3095) Add eviction listener to LIRS cache

2016-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3095:
---
Fix Version/s: 1.2.18

> Add eviction listener to LIRS cache
> ---
>
> Key: OAK-3095
> URL: https://issues.apache.org/jira/browse/OAK-3095
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: candidate_oak_1_0
> Fix For: 1.4, 1.3.3, 1.2.18
>
> Attachments: OAK-3095-1.2.patch, OAK-3095-2.patch, OAK-3095.patch
>
>
> For OAK-3055 I need to be able to track items that are evicted from 
> {{CacheLIRS}}. I thus suggest implementing a listener for evicted items.





[jira] [Updated] (OAK-3095) Add eviction listener to LIRS cache

2016-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3095:
---
Labels: candidate_oak_1_0  (was: candidate_oak_1_0 candidate_oak_1_2)

> Add eviction listener to LIRS cache
> ---
>
> Key: OAK-3095
> URL: https://issues.apache.org/jira/browse/OAK-3095
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: candidate_oak_1_0
> Fix For: 1.4, 1.3.3, 1.2.18
>
> Attachments: OAK-3095-1.2.patch, OAK-3095-2.patch, OAK-3095.patch
>
>
> For OAK-3055 I need to be able to track items that are evicted from 
> {{CacheLIRS}}. I thus suggest implementing a listener for evicted items.





[jira] [Updated] (OAK-3997) Include eviction cause to the LIRS removal callback

2016-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3997:
---
Labels:   (was: candidate_oak_1_2)

> Include eviction cause to the LIRS removal callback
> ---
>
> Key: OAK-3997
> URL: https://issues.apache.org/jira/browse/OAK-3997
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache
>Reporter: Tomek Rękawek
>Assignee: Marcel Reutegger
> Fix For: 1.4, 1.3.16, 1.2.18
>
> Attachments: OAK-3997-1.2.patch, OAK-3997-trunk.patch
>
>
> Enhance the {{EvictionCallback#evicted()}} method with a new argument: 
> {{cause}}. It may be a Guava {{com.google.common.cache.RemovalCause}}, even 
> though the {{COLLECTED}} and {{EXPIRED}} won't be used, as LIRS cache doesn't 
> support weak values and TTL yet.





[jira] [Commented] (OAK-3095) Add eviction listener to LIRS cache

2016-08-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411632#comment-15411632
 ] 

Tomek Rękawek commented on OAK-3095:


Backported to 1.2.x in [r1755490|https://svn.apache.org/r1755490].

> Add eviction listener to LIRS cache
> ---
>
> Key: OAK-3095
> URL: https://issues.apache.org/jira/browse/OAK-3095
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.4, 1.3.3
>
> Attachments: OAK-3095-1.2.patch, OAK-3095-2.patch, OAK-3095.patch
>
>
> For OAK-3055 I need to be able to track items that are evicted from 
> {{CacheLIRS}}. I thus suggest implementing a listener for evicted items.





[jira] [Updated] (OAK-3611) upgrade H2DB dependency to 1.4.190

2016-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3611:
---
Labels:   (was: candidate_oak_1_0)

> upgrade H2DB dependency to 1.4.190
> --
>
> Key: OAK-3611
> URL: https://issues.apache.org/jira/browse/OAK-3611
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
> Fix For: 1.4, 1.3.12, 1.0.33, 1.2.18
>
>
> (we are currently at 1.4.185)





[jira] [Updated] (OAK-3611) upgrade H2DB dependency to 1.4.190

2016-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3611:
---
Fix Version/s: 1.0.33

> upgrade H2DB dependency to 1.4.190
> --
>
> Key: OAK-3611
> URL: https://issues.apache.org/jira/browse/OAK-3611
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
> Fix For: 1.4, 1.3.12, 1.0.33, 1.2.18
>
>
> (we are currently at 1.4.185)





[jira] [Commented] (OAK-3611) upgrade H2DB dependency to 1.4.190

2016-08-08 Thread Tomek Rękawek (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411631#comment-15411631
 ] 

Tomek Rękawek commented on OAK-3611:


Backported to 1.0.x in [r1755489|https://svn.apache.org/r1755489].

> upgrade H2DB dependency to 1.4.190
> --
>
> Key: OAK-3611
> URL: https://issues.apache.org/jira/browse/OAK-3611
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
> Fix For: 1.4, 1.3.12, 1.0.33, 1.2.18
>
>
> (we are currently at 1.4.185)





[jira] [Updated] (OAK-3611) upgrade H2DB dependency to 1.4.190

2016-08-08 Thread Tomek Rękawek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3611:
---
Labels: candidate_oak_1_0  (was: candidate_oak_1_0 candidate_oak_1_2)

> upgrade H2DB dependency to 1.4.190
> --
>
> Key: OAK-3611
> URL: https://issues.apache.org/jira/browse/OAK-3611
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
>  Labels: candidate_oak_1_0
> Fix For: 1.4, 1.3.12, 1.2.18
>
>
> (we are currently at 1.4.185)





[jira] [Commented] (OAK-3611) upgrade H2DB dependency to 1.4.190

2016-08-08 Thread Tomek Rękawek (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411628#comment-15411628
 ] 

Tomek Rękawek commented on OAK-3611:


Backported to 1.2.x in [r1755484|https://svn.apache.org/r1755484].

> upgrade H2DB dependency to 1.4.190
> --
>
> Key: OAK-3611
> URL: https://issues.apache.org/jira/browse/OAK-3611
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
>  Labels: candidate_oak_1_0
> Fix For: 1.4, 1.3.12, 1.2.18
>
>
> (we are currently at 1.4.185)





[jira] [Updated] (OAK-3611) upgrade H2DB dependency to 1.4.190

2016-08-08 Thread Tomek Rękawek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3611:
---
Fix Version/s: 1.2.18

> upgrade H2DB dependency to 1.4.190
> --
>
> Key: OAK-3611
> URL: https://issues.apache.org/jira/browse/OAK-3611
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
>  Labels: candidate_oak_1_0
> Fix For: 1.4, 1.3.12, 1.2.18
>
>
> (we are currently at 1.4.185)





[jira] [Updated] (OAK-3611) upgrade H2DB dependency to 1.4.190

2016-08-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3611:
-
Labels: candidate_oak_1_0 candidate_oak_1_2  (was: candidate_oak_1_2 
candidate_oak_1_4)

> upgrade H2DB dependency to 1.4.190
> --
>
> Key: OAK-3611
> URL: https://issues.apache.org/jira/browse/OAK-3611
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.4, 1.3.12
>
>
> (we are currently at 1.4.185)





[jira] [Updated] (OAK-3611) upgrade H2DB dependency to 1.4.190

2016-08-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3611:
-
Labels: candidate_oak_1_2 candidate_oak_1_4  (was: )

> upgrade H2DB dependency to 1.4.190
> --
>
> Key: OAK-3611
> URL: https://issues.apache.org/jira/browse/OAK-3611
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.4, 1.3.12
>
>
> (we are currently at 1.4.185)





[jira] [Updated] (OAK-4293) Refactor / rework compaction gain estimation

2016-08-08 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-4293:
-
Attachment: size-estimation.patch

Attaching a git patch. It covers most of the implementation; there are still 
some rough edges, and I have some more testing to do. [~mduerig] feedback 
appreciated!

> Refactor / rework compaction gain estimation 
> -
>
> Key: OAK-4293
> URL: https://issues.apache.org/jira/browse/OAK-4293
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Alex Parvulescu
>  Labels: gc
> Fix For: Segment Tar 0.0.10
>
> Attachments: size-estimation.patch
>
>
> I think we have to take another look at {{CompactionGainEstimate}} and see 
> whether we can come up with a more efficient way to estimate the compaction 
> gain. The current implementation is expensive wrt. IO, CPU and cache 
> coherence. If we want to keep an estimation step, we need IMO to come up 
> with a cheap way (at least 2 orders of magnitude cheaper than compaction). 
> Otherwise I would actually propose to remove the current estimation approach 
> entirely.





[jira] [Commented] (OAK-4196) EventListener gets removed event for denied node

2016-08-08 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411552#comment-15411552
 ] 

angela commented on OAK-4196:
-

[~mduerig], [~mreutegg], I uncommented and stepped through the test case, 
additionally adding an assertion that the test-session cannot read the node at 
{{childNPath}} (which is ok).

I can see the issue described here, but have a couple of questions regarding 
the observation:

- the node {{childNPath}} is both the access controlled node _and_ the target 
of the remove. When is the observation event triggered? And is it possible 
that the event looks at the _latest_ state of the permission store, which gets 
updated by removing the denying permission entry?

- does the observation code keep a permission store referring to the 'before' 
state? IMO that might be the issue here: if you are looking at the updated 
store, it no longer contains the entry denying read access to {{childNPath}}.

If you modify the code by denying read access at {{path}}, i.e. the parent 
node, which doesn't get removed, the test passes.

So, I somehow have the feeling that it's an issue in the {{ACFilter}}, which 
doesn't look at the before-permission store for removed items. On the other 
hand you may argue that this could lead to other types of unexpected events 
(I didn't think about it carefully), in which case you would probably have to 
leave (and document) this as an edge case.

> EventListener gets removed event for denied node
> 
>
> Key: OAK-4196
> URL: https://issues.apache.org/jira/browse/OAK-4196
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, jcr, security
>Affects Versions: 1.4
>Reporter: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6
>
>
> An EventListener of a session that does not have read access to a node may 
> get a node removed event when the node is removed.





[jira] [Updated] (OAK-4631) Simplify the format of segments and serialized records

2016-08-08 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-4631:

Attachment: OAK-4631-02.patch

Attaching a new patch, since the old one no longer applies cleanly.

> Simplify the format of segments and serialized records
> --
>
> Key: OAK-4631
> URL: https://issues.apache.org/jira/browse/OAK-4631
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.10
>
> Attachments: OAK-4631-01.patch, OAK-4631-02.patch
>
>
> As discussed in [this thread|http://markmail.org/thread/3oxp6ydboyefr4bg], it 
> might be beneficial to simplify both the format of the segments and the way 
> record IDs are serialised. A new strategy needs to be investigated to reach 
> the right compromise between performance, disk space utilization and 
> simplicity.





[jira] [Updated] (OAK-3342) move benchmarks in oak-development module

2016-08-08 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-3342:

Component/s: run

> move benchmarks in oak-development module
> -
>
> Key: OAK-3342
> URL: https://issues.apache.org/jira/browse/OAK-3342
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: run
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>Priority: Minor
> Fix For: 1.6
>
>
> Take all the benchmarks provided by oak-run and move them into the 
> oak-development module. Micro-benchmarking and Scalability





[jira] [Updated] (OAK-4638) Mostly async unique index (for UUIDs for example)

2016-08-08 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-4638:

Component/s: query
 core

> Mostly async unique index (for UUIDs for example)
> -
>
> Key: OAK-4638
> URL: https://issues.apache.org/jira/browse/OAK-4638
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, query
>Reporter: Thomas Mueller
>
> The UUID index takes a lot of space. For the UUID index, we should consider 
> using mainly an async index. This is possible because there are two types of 
> UUIDs: those generated in Oak, which are sure to be unique (no need to 
> check), and those set in the application code, for example by importing 
> packages. For older nodes, an async index is sufficient, and a synchronous 
> index is only (temporarily) needed for imported nodes. For UUIDs, we could 
> also change the generation algorithm if needed.
> It might be possible to use a similar pattern for regular unique indexes as 
> well: only keep the added entries of the last 24 hours (for example) in a 
> property index, and then move entries to an async index which needs less 
> space. That would slow down adding entries, as two indexes need to be checked.
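The two-tier scheme described above could be sketched as follows. This is an illustrative sketch only, not Oak code; the class name, the in-memory sets, and the {{ageOut}} helper are all invented to show the lookup-in-both-tiers idea:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of a two-tier uniqueness check: a small synchronous index holds
// recently added entries, older entries live in a (cheaper) async index.
class TwoTierUniqueIndex {
    private final Set<String> recentSync = new HashSet<>();  // last ~24h, synchronous
    private final Set<String> olderAsync = new HashSet<>();  // maintained asynchronously

    /** Returns false (and adds nothing) if the value exists in either tier. */
    boolean add(String uuid) {
        if (recentSync.contains(uuid) || olderAsync.contains(uuid)) {
            return false; // both indexes are consulted - the slowdown noted above
        }
        return recentSync.add(uuid);
    }

    /** Simulates the background job moving aged entries to the async tier. */
    void ageOut() {
        olderAsync.addAll(recentSync);
        recentSync.clear();
    }
}
```

Uniqueness is preserved across the move because {{add}} always checks both tiers; only the storage cost of old entries shifts to the async index.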





[jira] [Commented] (OAK-4635) Improve cache eviction policy of the node deduplication cache

2016-08-08 Thread Michael Dürig (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411490#comment-15411490
 ] 

Michael Dürig commented on OAK-4635:


Both approaches from above suffer from a performance issue. Apparently 
removing elements from a hash set via {{Iterator#remove}} is on the slow side. 
Need to come up with something better here. 
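The pattern under discussion is roughly the following (a sketch, not the actual Oak code; the class and method names are invented). It is correct, but it walks the whole set, which is the cost being flagged:

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

// Sketch of evicting a sample of a HashSet through Iterator#remove:
// drop every n-th element during a full iteration.
class SampledEviction {
    static <T> void evictEveryNth(Set<T> set, int n) {
        int i = 0;
        for (Iterator<T> it = set.iterator(); it.hasNext(); ) {
            it.next();
            if (++i % n == 0) {
                it.remove(); // per-call cost adds up over large sets
            }
        }
    }
}
```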


> Improve cache eviction policy of the node deduplication cache
> -
>
> Key: OAK-4635
> URL: https://issues.apache.org/jira/browse/OAK-4635
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: perfomance
> Fix For: Segment Tar 0.0.10
>
>
> {{NodeCache}} uses one stripe per depth (of the nodes in the tree). Once its 
> overall capacity (default 100 nodes) is exceeded, it clears all nodes 
> from the stripe with the greatest depth. This can be problematic when the 
> stripe with the greatest depth contains most of the nodes as clearing it 
> would result in an almost empty cache. 
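The stripe-per-depth layout and its wholesale eviction can be sketched as follows. This is illustrative only, not the actual {{NodeCache}} implementation; it just reproduces the behaviour the issue calls problematic, where clearing the deepest stripe may empty most of the cache:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: entries are grouped into stripes by node depth; when the total
// size exceeds capacity, the stripe with the greatest depth is cleared.
class DepthStripedCache {
    private final int capacity;
    private final Map<Integer, List<String>> stripes = new HashMap<>();
    private int size = 0;

    DepthStripedCache(int capacity) { this.capacity = capacity; }

    void put(int depth, String node) {
        stripes.computeIfAbsent(depth, d -> new ArrayList<>()).add(node);
        if (++size > capacity) {
            int deepest = stripes.keySet().stream().max(Integer::compare).get();
            // wholesale clear of the deepest stripe - may drop most entries
            size -= stripes.remove(deepest).size();
        }
    }

    int size() { return size; }
}
```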





[jira] [Updated] (OAK-3919) Properly manage APIs / SPIs intended for public consumption

2016-08-08 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-3919:

Component/s: core

> Properly manage APIs / SPIs intended for public consumption
> ---
>
> Key: OAK-3919
> URL: https://issues.apache.org/jira/browse/OAK-3919
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Michael Dürig
>  Labels: modularization, technical_debt
> Fix For: 1.6
>
>
> This is a follow up to OAK-3842, which removed package export declarations 
> for all packages that we either do not want to be used outside of Oak or that 
> are not stable enough yet. 
> This issue is to identify those APIs and SPIs of Oak that we actually *want* 
> to export and to refactor them such that we *can* export them. 
> Candidates that are currently used from upstream projects I know of are:
> {code}
>   org.apache.jackrabbit.oak.plugins.observation
>   org.apache.jackrabbit.oak.spi.commit
>   org.apache.jackrabbit.oak.spi.state
>   org.apache.jackrabbit.oak.commons
>   org.apache.jackrabbit.oak.plugins.index.lucene
> {code}
> I suggest creating subtasks for those we want to go forward with.





[jira] [Resolved] (OAK-4466) Incorrect description for "Simple Inheritance with Restrictions" in the Permission Evaluation Page

2016-08-08 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-4466.
-
   Resolution: Fixed
 Assignee: angela
Fix Version/s: 1.5.8

Committed revision 1755478.


> Incorrect description for "Simple Inheritance with Restrictions" in the 
> Permission Evaluation Page
> -
>
> Key: OAK-4466
> URL: https://issues.apache.org/jira/browse/OAK-4466
> Project: Jackrabbit Oak
>  Issue Type: Documentation
>  Components: doc
>Reporter: Opkar Gill
>Assignee: angela
> Fix For: 1.5.8
>
>
> In the page 
> http://jackrabbit.apache.org/oak/docs/security/permission/evaluation.html 
> there is a description for "Simple Inheritance with Restrictions" 
> the text reads: 
> everyone is cannot read the complete tree defined by /content except for 
> properties named ‘prop1’ or ‘prop2’ which are explicitly denied by the 
> restricting entry.
> It should be: 
> everyone can read the complete tree defined by /content except for properties 
> named ‘prop1’ or ‘prop2’ which are explicitly denied by the restricting entry.
> i.e.  "is cannot" should be changed to "can" in the above text





[jira] [Updated] (OAK-4466) Incorrect description for "Simple Inheritance with Restrictions" in the Permission Evaluation Page

2016-08-08 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-4466:

Component/s: doc

> Incorrect description for "Simple Inheritance with Restrictions" in the 
> Permission Evaluation Page
> -
>
> Key: OAK-4466
> URL: https://issues.apache.org/jira/browse/OAK-4466
> Project: Jackrabbit Oak
>  Issue Type: Documentation
>  Components: doc
>Reporter: Opkar Gill
>
> In the page 
> http://jackrabbit.apache.org/oak/docs/security/permission/evaluation.html 
> there is a description for "Simple Inheritance with Restrictions" 
> the text reads: 
> everyone is cannot read the complete tree defined by /content except for 
> properties named ‘prop1’ or ‘prop2’ which are explicitly denied by the 
> restricting entry.
> It should be: 
> everyone can read the complete tree defined by /content except for properties 
> named ‘prop1’ or ‘prop2’ which are explicitly denied by the restricting entry.
> i.e.  "is cannot" should be changed to "can" in the above text





[jira] [Commented] (OAK-4154) SynchronizationMBean should offer methods to synchronize without forcing group sync.

2016-08-08 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411419#comment-15411419
 ] 

angela commented on OAK-4154:
-

Removing fix version.

In the light of the dynamic membership I am not sure if we really still need 
this. In general I would like to move away from the sync-all methods exposed 
via JMX, as they are troublesome by design. Afaik the main reason for exposing 
them in the first place was performance issues with the sync upon login, which 
IMO should be addressed by the dynamic membership feature.

If we find out it would still be good to have, I would rather make this a 
generic flag that only syncs users _and_ groups if the expiration is not yet 
reached... but as I said, I somehow have the impression that deprecating the 
*sync|purgeAll methods would be the better choice.



> SynchronizationMBean should offer methods to synchronize without forcing 
> group sync.
> 
>
> Key: OAK-4154
> URL: https://issues.apache.org/jira/browse/OAK-4154
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Reporter: Manfred Baedke
>Assignee: Manfred Baedke
>
> SynchronizationMBean.syncUsers(...) and related methods always force the 
> synchronization of groups (indirectly) containing the user, independently of 
> any configured expiration intervals. This may have a huge negative impact on 
> the performance of these methods.
> Additional methods should be added to the interface 
> org.apache.jackrabbit.oak.spi.security.authentication.external.impl.jmx.SynchronizationMBean,
>  featuring an additional boolean argument to enable or disable group sync 
> during the call





[jira] [Updated] (OAK-4154) SynchronizationMBean should offer methods to synchronize without forcing group sync.

2016-08-08 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-4154:

Fix Version/s: (was: 1.6)

> SynchronizationMBean should offer methods to synchronize without forcing 
> group sync.
> 
>
> Key: OAK-4154
> URL: https://issues.apache.org/jira/browse/OAK-4154
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Reporter: Manfred Baedke
>Assignee: Manfred Baedke
>
> SynchronizationMBean.syncUsers(...) and related methods always force the 
> synchronization of groups (indirectly) containing the user, independently of 
> any configured expiration intervals. This may have a huge negative impact on 
> the performance of these methods.
> Additional methods should be added to the interface 
> org.apache.jackrabbit.oak.spi.security.authentication.external.impl.jmx.SynchronizationMBean,
>  featuring an additional boolean argument to enable or disable group sync 
> during the call





[jira] [Commented] (OAK-4161) DefaultSyncHandler should avoid concurrent synchronization of the same user

2016-08-08 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411414#comment-15411414
 ] 

angela commented on OAK-4161:
-

[~baedke], are you still working on this? I would really appreciate it if you 
could come up with some tests and a benchmark first. In case you have 
abandoned this issue, may I kindly ask you to provide a short update? Thanks.

> DefaultSyncHandler should avoid concurrent synchronization of the same user
> ---
>
> Key: OAK-4161
> URL: https://issues.apache.org/jira/browse/OAK-4161
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Reporter: Manfred Baedke
>Assignee: Manfred Baedke
>
> Concurrent synchronization of the same user may have a significant 
> performance impact on systems where user sync is already a bottleneck.





[jira] [Resolved] (OAK-2990) Make sync fill jcr:lastModified in addition to rep:lastSynced

2016-08-08 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-2990.
-
Resolution: Won't Fix

The property {{jcr:lastModified}} is defined by a specific mixin type, 
{{mix:lastModified}}, and must not be set with a different node type (i.e. 
changing its declaring node type and thus ending up with a different 
{{PropertyDefinition}}).

Unfortunately the design of the user sync in its current form does not allow 
for setting additional mixin types on the user (which has shown to be a bit 
unfortunate when trying to address critical issues). Therefore this 
improvement is not possible without a complete redesign of the user sync, 
which I don't consider feasible and justifiable at the moment.

> Make sync fill jcr:lastModified in addition to rep:lastSynced
> -
>
> Key: OAK-2990
> URL: https://issues.apache.org/jira/browse/OAK-2990
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Affects Versions: 1.2.2
>Reporter: Nicolas Peltier
>Priority: Minor
>
> While rep:lastSynced is crucial to not re-run the sync at every login, 
> having the information that the sync didn't change anything (and thus no 
> action should be taken for that user) would be very interesting. I guess a 
> jcr:lastModified property would do the trick.





[jira] [Resolved] (OAK-1714) Support login with "arbitrary" login id

2016-08-08 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-1714.
-
Resolution: Incomplete

Resolving as incomplete. This issue has no meaningful subject and no 
description. Login with an _arbitrary_ login id doesn't seem desirable to me. 
Also, the {{CredentialsSupport}} added recently would allow for attaching 
support for additional types of credentials to the external login as it 
exists today.

> Support login with "arbitrary" login id
> ---
>
> Key: OAK-1714
> URL: https://issues.apache.org/jira/browse/OAK-1714
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Tobias Bocanegra
>






[jira] [Resolved] (OAK-4624) Optionally ignore missing blobs during sidegrade

2016-08-08 Thread Tomek Rękawek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek resolved OAK-4624.

Resolution: Fixed

> Optionally ignore missing blobs during sidegrade
> 
>
> Key: OAK-4624
> URL: https://issues.apache.org/jira/browse/OAK-4624
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Tomek Rękawek
> Fix For: 1.6, 1.4.6, 1.5.8
>
>
> For clients with corrupted data stores it'd be useful to finish the 
> sidegrade even if some of the blobs are missing. Add a new option 
> {{--ignore-missing-binaries}} to proceed with the migration in such cases. 
> All missing binaries should be logged.





[jira] [Commented] (OAK-4638) Mostly async unique index (for UUIDs for example)

2016-08-08 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411384#comment-15411384
 ] 

angela commented on OAK-4638:
-

Having said that: any approach in this direction must be carefully analyzed 
wrt unexpected security issues.

> Mostly async unique index (for UUIDs for example)
> -
>
> Key: OAK-4638
> URL: https://issues.apache.org/jira/browse/OAK-4638
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Thomas Mueller
>
> The UUID index takes a lot of space. For the UUID index, we should consider 
> using mainly an async index. This is possible because there are two types of 
> UUIDs: those generated in Oak, which are sure to be unique (no need to 
> check), and those set in the application code, for example by importing 
> packages. For older nodes, an async index is sufficient, and a synchronous 
> index is only (temporarily) needed for imported nodes. For UUIDs, we could 
> also change the generation algorithm if needed.
> It might be possible to use a similar pattern for regular unique indexes as 
> well: only keep the added entries of the last 24 hours (for example) in a 
> property index, and then move entries to an async index which needs less 
> space. That would slow down adding entries, as two indexes need to be checked.





[jira] [Commented] (OAK-4638) Mostly async unique index (for UUIDs for example)

2016-08-08 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411382#comment-15411382
 ] 

angela commented on OAK-4638:
-

Just one concern from my side: The security modules define some unique 
indices where weakening the uniqueness contract will have most severe security 
implications. In contrast to {{jcr:uuid}}, synchronicity is not the biggest 
concern there (as long as applications don't write ugly code with multiple 
sessions involved that ends up requiring the synchronous behavior).
This is different for {{jcr:uuid}}, which (for backwards compatibility) is 
also used for the user/group lookup, and where any kind of asynchronous 
indexing will lead to an escalation nightmare, because it will not only affect 
'end-users' but also the application code itself relying on system users.

> Mostly async unique index (for UUIDs for example)
> -
>
> Key: OAK-4638
> URL: https://issues.apache.org/jira/browse/OAK-4638
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Thomas Mueller
>
> The UUID index takes a lot of space. For the UUID index, we should consider 
> using mainly an async index. This is possible because there are two types of 
> UUIDs: those generated in Oak, which are sure to be unique (no need to 
> check), and those set in the application code, for example by importing 
> packages. For older nodes, an async index is sufficient, and a synchronous 
> index is only (temporarily) needed for imported nodes. For UUIDs, we could 
> also change the generation algorithm if needed.
> It might be possible to use a similar pattern for regular unique indexes as 
> well: only keep the added entries of the last 24 hours (for example) in a 
> property index, and then move entries to an async index which needs less 
> space. That would slow down adding entries, as two indexes need to be checked.


