Re: OAK +Remote SOLR OOTB integration queries

2016-03-29 Thread Tommaso Teofili
Hi Sri,

what are you trying to achieve?
Here are some notes inline:

> On 23 Mar 2016, at 09:50, sri vaths  wrote:
> 
> Hi All,
> As in here http://jackrabbit.apache.org/oak/docs/query/solr.html
> Trying to use remote SOLR with Oak
> Need to know if these are supported
> 1) How to control Indexing child nodes of jcr:content , ex:- controlling the 
> depth via some configuration 
> 2) options to merge the child nodes into 1 single document in solr

With the current version of Oak you can enable collapsing of nodes under
jcr:content at query time (whereas your question, if I got it right, is
about doing the same at indexing time) by enabling the “collapse
jcr:content nodes” option [1] (and reindexing).
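For instance, here is a hedged sketch of toggling this directly on the Solr index definition node. The property name "collapseJcrContentNodes" and the "/oak:index/solr" path are assumptions based on the docs page in [1]; verify both against your Oak version:

```java
// Sketch: enable "collapse jcr:content nodes" on the Solr index definition
// and request a reindex. Property name and index path are assumptions taken
// from the docs page in [1]; verify them for your Oak version.
Node solrIndex = session.getNode("/oak:index/solr");
solrIndex.setProperty("collapseJcrContentNodes", true);
solrIndex.setProperty("reindex", true);
session.save();
```

This requires a live repository session, so take it as illustration only.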

> 3) how to apply SOLR index inclusion & exclusion rules

Can you elaborate here? Which inclusion/exclusion rules are you talking
about specifically?

> Please share if any such configuration exists, or any ideas.
> with regards,
> Sri

Looking forward to your reply.
Regards,
Tommaso

[1] : 
http://jackrabbit.apache.org/oak/docs/query/solr.html#Collapse_jcr:content_nodes

Problems with OAK Restrictions

2016-03-29 Thread gianluca.soffred...@metaframe.it

Hi,

I'm working with Francesco Ancona and we are using Jackrabbit Oak.
I have a problem using ACL restrictions with Oak 1.4.0.
I'm using the JCR repository (javax.jcr.Repository interface), not
the Oak ContentRepository.
When I try to apply a restriction using rep:glob as the key and the
empty string as the value in the restrictions map, it does not work as I
expected.


As specified in the Oak documentation
(http://jackrabbit.apache.org/oak/docs/security/authorization/restriction.html),
if we have a node at path /foo and we try to give read permission to a
principal, using a rep:glob restriction with an empty string should apply
the permission only to the /foo node itself.

I tried to do that: the restriction provider correctly writes the
restriction to my repository data storage, but the system simply ignores
the applied ACL.



   Using rep:glob

   For a node path /foo the following results can be expected for the
   different values of rep:glob.

   rep:glob   Result
   ""         matches node /foo only
   /cat       the node /foo/cat and all its children
   /cat/      the descendants of the node /foo/cat
   cat        the node /foocat and all its children
   cat/       all descendants of the node /foocat
   *          foo, siblings of foo and their descendants
   /*cat      all children of /foo whose path ends with ‘cat’
   /*/cat     all non-direct descendants of /foo named ‘cat’
   /cat*      all descendant paths of /foo that have the direct foo-descendant segment starting with ‘cat’
   *cat       all siblings and descendants of foo that have a name ending with ‘cat’
   */cat      all descendants of /foo and foo’s siblings that have a name segment ‘cat’
   cat/*      all descendants of ‘/foocat’
   /cat/*     all descendants of ‘/foo/cat’
   *cat/*     all descendants of /foo that have an intermediate segment ending with ‘cat’
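As an aside, the first two rows of the table can be illustrated with a tiny stand-alone matcher. This is a simplified sketch, not Oak's implementation, and it ignores globs containing '*':

```java
// Simplified illustration (NOT Oak's implementation) of two rows of the
// rep:glob table above: an empty glob matches only the target node itself,
// while a wildcard-free glob like "/cat" matches the concatenated path and
// everything below it.
public class GlobSketch {

    static boolean matches(String nodePath, String glob, String candidate) {
        if (glob.isEmpty()) {
            // "" -> matches the node /foo only
            return candidate.equals(nodePath);
        }
        // e.g. "/foo" + "/cat" -> "/foo/cat" and all its children
        String base = nodePath + glob;
        return candidate.equals(base) || candidate.startsWith(base + "/");
    }

    public static void main(String[] args) {
        System.out.println(matches("/foo", "", "/foo"));           // true
        System.out.println(matches("/foo", "", "/foo/bar"));       // false
        System.out.println(matches("/foo", "/cat", "/foo/cat"));   // true
        System.out.println(matches("/foo", "/cat", "/foo/cat/x")); // true
        System.out.println(matches("/foo", "/cat", "/foo/dog"));   // false
    }
}
```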




This is my code:

protected void applyRestriction(final JackrabbitSession session, final Principal principal,
        final String path, final Privilege[] privileges,
        final boolean allow, final boolean propagate) throws RepositoryException {

    AccessControlManager acMgr = session.getAccessControlManager();
    JackrabbitAccessControlList acl = AccessControlUtils.getAccessControlList(acMgr, path);

    Map<String, Value> restrictions = new HashMap<String, Value>();
    if (!propagate) {
        restrictions.put(AccessControlConstants.REP_GLOB,
                session.getValueFactory().createValue("", PropertyType.STRING));
    }
    acl.addEntry(principal, privileges, allow, restrictions);
    acMgr.setPolicy(path, acl);
    session.save();
}

and this is the call:
applyRestriction(session, readerGroup.getPrincipal(), "/foo",
        AccessControlUtils.privilegesFromNames(session.getAccessControlManager(),
                PrivilegeConstants.JCR_READ),
        true, true);
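One way to narrow this down is to read the policy back after session.save() and print what was actually stored. A sketch against the Jackrabbit API (same session and path as above; it needs a live repository, so take it as illustration only):

```java
// Sketch: dump the entries and restrictions stored at /foo to verify that
// the empty rep:glob restriction was actually persisted.
JackrabbitAccessControlList stored =
        AccessControlUtils.getAccessControlList(session.getAccessControlManager(), "/foo");
for (AccessControlEntry ace : stored.getAccessControlEntries()) {
    JackrabbitAccessControlEntry entry = (JackrabbitAccessControlEntry) ace;
    for (String name : entry.getRestrictionNames()) {
        System.out.println(name + " = " + entry.getRestriction(name).getString());
    }
}
```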


I have found this issue, which is similar to my problem, but it's closed:
https://issues.apache.org/jira/browse/OAK-2412


Can you help me?

Thanks in advance.

Gianluca Soffredini
Project Manager
Metaframe SPS S.r.l.
Via Toniolo, 13
30030 Vigonovo(VE)
mobile: +39 3342235291
email: gianluca.soffred...@metaframe.it 


SKYPE ID: gianlucas72



Re: R: critical question about oak: db connection

2016-03-29 Thread Julian Reschke

On 2016-03-23 16:19, Manfred Baedke wrote:

Hi Francesco,

Your tests ran out of the box in my IDE (IntelliJ IDEA 15) using a local
Postgres 9.3.

Best regards,
Manfred


Same here, from command line maven and on PostgreSQL 9.5:


[INFO] Scanning for projects...
[INFO]
[INFO] 
[INFO] Building oaktest 0.0.1-SNAPSHOT
[INFO] 
[INFO]
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ oaktest ---
[INFO] Deleting C:\home\jre\oaktest\target
[INFO]
[INFO] --- maven-resources-plugin:2.5:resources (default-resources) @ oaktest 
---
[debug] execute contextualize
[WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, 
i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory 
C:\home\jre\oaktest\src\main\resources
[INFO]
[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @ oaktest ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-resources-plugin:2.5:testResources (default-testResources) @ 
oaktest ---
[debug] execute contextualize
[WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, 
i.e. build is platform dependent!
[INFO] Copying 4 resources
[INFO]
[INFO] --- maven-compiler-plugin:2.3.2:testCompile (default-testCompile) @ 
oaktest ---
[WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. 
build is platform dependent!
[INFO] Compiling 1 source file to C:\home\jre\oaktest\target\test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.10:test (default-test) @ oaktest ---
[INFO] Surefire report directory: C:\home\jre\oaktest\target\surefire-reports

---
 T E S T S
---
Running oaktest.OakRDBMSTest
16:33:41.896 INFO  [main] AbstractTestContextBootstrapper.java:207 Could not 
instantiate TestExecutionListener 
[org.springframework.test.context.web.ServletTestExecutionListener]. Specify 
custom listener classes or make the default listener classes (and their 
required dependencies) available. Offending class: 
[javax/servlet/ServletContext]
16:33:41.896 INFO  [main] AbstractTestContextBootstrapper.java:185 Using 
TestExecutionListeners: 
[org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener@2ffad8fe,
 
org.springframework.test.context.support.DependencyInjectionTestExecutionListener@7dbc244d,
 
org.springframework.test.context.support.DirtiesContextTestExecutionListener@4af37bb8]
16:33:42.017 INFO  [main] XmlBeanDefinitionReader.java:317  Loading XML bean 
definitions from class path resource [config/spring/spring-test-config.xml]
16:33:42.117 INFO  [main] AbstractApplicationContext.java:578 Refreshing 
org.springframework.context.support.GenericApplicationContext@3fb2acb7: startup 
date [Tue Mar 29 16:33:42 CEST 2016]; root of context hierarchy
16:33:42.217 INFO  [main] DriverManagerDataSource.java:133  Loaded JDBC driver: 
org.postgresql.Driver
16:33:42.750 INFO  [main] RDBDocumentStore.java:827 RDBDocumentStore 
(1.4.0) instantiated for database PostgreSQL 9.5.0 (9.5), using driver: 
PostgreSQL Native Driver PostgreSQL 9.4.1207.jre7 (9.4), connecting to: 
jdbc:postgresql:oak, properties: {pg_encoding_to_char(encoding)=UTF8, 
datcollate=C}, transaction isolation level: TRANSACTION_READ_COMMITTED (2), 
.nodes: id varchar(512), modified int8, hasbinary int2, deletedonce int2, 
modcount int8, cmodcount int8, dsize int8, data varchar(16384), bdata 
bytea(2147483647) /* {bytea=-2, int2=5, int8=-5, varchar=12} */ /* unique index 
nodes_pkey on public.nodes (id ASC) other */
16:33:42.750 INFO  [main] RDBDocumentStore.java:834 Tables created upon 
startup: [CLUSTERNODES, NODES, SETTINGS, JOURNAL]
16:33:42.781 INFO  [main] RDBBlobStore.java:224 RDBBlobStore 
(1.4.0) instantiated for database PostgreSQL 9.5.0 (9.5), using driver: 
PostgreSQL Native Driver PostgreSQL 9.4.1207.jre7 (9.4), connecting to: 
jdbc:postgresql:oak, transaction isolation level: TRANSACTION_READ_COMMITTED (2)
16:33:42.781 INFO  [main] RDBBlobStore.java:230 Tables created upon 
startup: [DATASTORE_DATA, DATASTORE_META]
16:33:43.182 INFO  [main] DocumentNodeStore.java:516Initialized 
DocumentNodeStore with clusterNodeId: 1 (id: 1, startTime: 1459262022866, 
machineId: mac:1803733fd6b0, instanceId: C:\home\jre\oaktest, pid: 6024, uuid: 
db2a1755-2346-44d9-a284-37f0a7e69a1f, readWriteMode: null, state: NONE, 
revLock: NONE, oakVersion: 1.4.0)
16:33:44.600 INFO  [main] IndexUpdate.java:182  Found a new index 
node [reference]. Reindexing is requested
16:33:44.600 INFO  [main] IndexUpdate.java:147  Reindexing will be 
performed for following indexes: [/oak:index/reference, /oak:index/nodetype, 
/oak:index/uuid]
16:33:55.139 INFO  [main] IndexUpdate.java:257  Indexing report
- 

Re: [VOTE] Release Apache Jackrabbit Oak 1.5.0

2016-03-29 Thread Alex Parvulescu
[X] +1 Release this package as Apache Jackrabbit Oak 1.5.0

best,
alex

On Tue, Mar 29, 2016 at 10:57 AM, Amit Jain  wrote:

> A candidate for the Jackrabbit Oak 1.5.0 release is available at:
>
> https://dist.apache.org/repos/dist/dev/jackrabbit/oak/1.5.0/
>
> The release candidate is a zip archive of the sources in:
>
>
> https://svn.apache.org/repos/asf/jackrabbit/oak/tags/jackrabbit-oak-1.5.0/
>
> The SHA1 checksum of the archive is
> 1c4b3a95c8788a80129c1b7efb7dc38f4d19bd08.
>
> A staged Maven repository is available for review at:
>
> https://repository.apache.org/
>
> The command for running automated checks against this release candidate is:
>
> $ sh check-release.sh oak 1.5.0
> 1c4b3a95c8788a80129c1b7efb7dc38f4d19bd08
>
> Please vote on releasing this package as Apache Jackrabbit Oak 1.5.0.
> The vote is open for the next 72 hours and passes if a majority of at
> least three +1 Jackrabbit PMC votes are cast.
>
> [ ] +1 Release this package as Apache Jackrabbit Oak 1.5.0
> [ ] -1 Do not release this package because...
>
> My vote is +1.
>
> Thanks
> Amit
>


RE: Jackrabbit 2.10 vs Oak 1.2.7

2016-03-29 Thread Domenic DiTano
Sorry those images did not come through, posting the email again with the
raw data:

I work with a web application that has Jackrabbit 2.10 embedded and we
wanted to try upgrading to Oak.  Our current configuration that we use for
Jackrabbit 2.10 is the FileDataStore along with MySql for the Persistence
DataStore.  We wrote some test cases to measure the performance of
Jackrabbit 2.10 vs the latest Oak 1.2.  In the case of Jackrabbit 2.10, we
used our current application configuration - FileDataStore along with
MySql.  In the case of Oak we tried many configurations but the one we
settled on was a DocumentNodeStore with a FileDataStore backend. We tried
all 3 RDB options (Mongo, Postgres, MySql).  All test cases used the same
standard JCR 2.0 code.   The test cases did the following:

.   create 1000 & 10,000 nodes
.   move 1000 & 10,000 nodes
.   copy 1000 & 10,000 nodes
.   delete 1000 & 10,000 nodes
.   upload 100 files
.   read 1 property on 1000 & 10,000 nodes
.   update 1 property on 1000 & 10,000 nodes


The results were as follows (all results in milliseconds):

Oak tests ran with the creation, move, copy, delete, update, and read of
1000 nodes:

            Create 1000  Query Props  Upload 100  Move 1000  Copy 1000  Delete 1000  Update 1000  Read 1000
MySql:             3444            2        1445      96349       2246        92923        48647         98
Postgres:          2483           19        1130       2404        556         1523         1055        111
Mongo:             8497            2         845      14428       4432         7667         4640        142

Postgres seems to perform well overall.

In the case of Jackrabbit 2.10 (tests ran with the creation, move, copy,
delete, update, and read of 1000 nodes):

            Create 1000  Query Props  Upload 100  Move 1000  Copy 1000  Delete 1000  Update 1000  Read 1000
MySql:             3022          143        1105         16        764         1481         1139         12


Jackrabbit 2.10 performs slightly better than Oak.

The next set of Oak tests ran with the creation, move, copy, delete,
update, and read of 10,000 nodes:

            Create 10000  Query Props  Upload 100  Move 10000  Copy 10000  Delete 10000  Update 10000  Read 10000
MySql:             31250            4        1146      741474       20755        728737        374387        2216
Postgres:          16475           16         605       30339        7615         24461         12453        2989
Mongo:            342192            2         753      406259      321040         43670         41053         968

Postgres once again performed OK.  Mongo and MySql did not do well on
moves, deletes, and updates. Querying also did well, since indexes were
created.

In the case of Jackrabbit 2.10 (tests ran with the creation, move, copy,
delete, update, and read of 10,000 nodes):

            Create 10000  Query Props  Upload 100  Move 10000  Copy 10000  Delete 10000  Update 10000  Read 10000
MySql:              8507           94         744          14         489           824           987           8

Jackrabbit 2.10 performed much better than Oak in general.

Based on the results I have a few questions/comments:

.   Are these fair comparisons between Jackrabbit and Oak?  In our
application it is very possible to create 1-10,000 nodes in a user
session.
.   Should I have assumed Oak would outperform Jackrabbit 2.10?
.   I understand MySql is experimental but Mongo is not - I would
assume Mongo would perform as well as, if not better than, Postgres
.   The performance bottlenecks seem to be at the JDBC level for
MySql.  I made some configuration changes which helped performance but the
changes would make MySql fail any ACID tests.

Just a few notes:

The same JCR code was used for creating, moving, deleting, etc. across
all the tests, and the tests were all run on the same machine.

Used DocumentMK.Builder for all stores:

Mongo:
DocumentNodeStore storeD = new DocumentMK.Builder()
        .setPersistentCache("D:\\ekm-oak\\Mongo,size=1024,binary=0")
        .setMongoDB(db)
        .setBlobStore(new DataStoreBlobStore(fds))
        .getNodeStore();

MySql:
RDBOptions options = new RDBOptions().tablePrefix(prefix).dropTablesOnClose(false);
DocumentNodeStore storeD = new DocumentMK.Builder()
        .setBlobStore(new DataStoreBlobStore(fds))
        .setClusterId(1)
        .memoryCacheSize(64 * 1024 * 1024)
        .setPersistentCache("D:\\ekm-oak\\MySql,size=1024,binary=0")
        .setRDBConnection(RDBDataSourceFactory.forJdbcUrl(url, userName, password), options)
        .getNodeStore();

Postgres:
RDBOptions options = new RDBOptions().tablePrefix(prefix).dropTablesOnClose(false);
DocumentNodeStore storeD = new DocumentMK.Builder()
        .setAsyncDelay(0)
        .setBlobStore(new DataStoreBlobStore(fds))
        .setClusterId(1)
        .memoryCacheSize(64 * 1024 * 1024)
        .setPersistentCache("D:\\ekm-oak\\postGress,size=1024,binary=0")
        .setRDBConnection(RDBDataSourceFactory.forJdbcUrl(url, userName, password), options)
        .getNodeStore();

The repository was created the same way for all three:
Repository repository = new Jcr(new Oak(storeD))
        .with(new LuceneIndexEditorProvider())
        .with(configureSearch())
        .createRepository();

Any input is welcome.

Thanks,
Domenic

-Original Message-
From: Marcel Reutegger [mailto:mreut...@adobe.com]
Sent: Tuesday, March 29, 2016 4:41 AM
To: oak-dev@jackrabbit.apache.org
Subject: Re: Jackrabbit 2.10 vs Oak 1.2.7

Hi,

the graphs didn't make it through to the mailing list.
Can you please 

[VOTE] Release Apache Jackrabbit Oak 1.5.0

2016-03-29 Thread Amit Jain
A candidate for the Jackrabbit Oak 1.5.0 release is available at:

https://dist.apache.org/repos/dist/dev/jackrabbit/oak/1.5.0/

The release candidate is a zip archive of the sources in:


https://svn.apache.org/repos/asf/jackrabbit/oak/tags/jackrabbit-oak-1.5.0/

The SHA1 checksum of the archive is
1c4b3a95c8788a80129c1b7efb7dc38f4d19bd08.

A staged Maven repository is available for review at:

https://repository.apache.org/

The command for running automated checks against this release candidate is:

$ sh check-release.sh oak 1.5.0 1c4b3a95c8788a80129c1b7efb7dc38f4d19bd08

Please vote on releasing this package as Apache Jackrabbit Oak 1.5.0.
The vote is open for the next 72 hours and passes if a majority of at
least three +1 Jackrabbit PMC votes are cast.

[ ] +1 Release this package as Apache Jackrabbit Oak 1.5.0
[ ] -1 Do not release this package because...

My vote is +1.

Thanks
Amit


Re: Jackrabbit 2.10 vs Oak 1.2.7

2016-03-29 Thread Marcel Reutegger
Hi,

the graphs didn't make it through to the mailing list.
Can you please post raw numbers or a link to the graphs?

Without access to more data, my guess is that Oak on
DocumentNodeStore is slower with the bigger change sets
because it internally creates a branch to stage changes
when it reaches a given threshold. This introduces more
traffic to the backend storage when save() is called,
because previously written data is retrieved again from
the backend.

Jackrabbit 2.10 on the other hand keeps the entire changes
in memory until save() is called.

You can increase the threshold for the DocumentNodeStore
with a system property: -Dupdate.limit=10

The default is 10'000.
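A minimal sketch of setting this from code rather than the command line, assuming the property is read once when Oak's document classes are initialized (the value 100000 is only an illustrative choice):

```java
// Sketch: raise the DocumentNodeStore update limit. Assumption: the
// "update.limit" system property is read once at class-initialization time,
// so it must be set before the DocumentMK/DocumentNodeStore classes load.
public class UpdateLimitExample {

    public static void main(String[] args) {
        // Illustrative value; the default is 10'000.
        System.setProperty("update.limit", "100000");

        // ... then build the store as usual, e.g. (not executed here):
        // DocumentNodeStore store = new DocumentMK.Builder().getNodeStore();

        System.out.println("update.limit = " + System.getProperty("update.limit"));
    }
}
```

Passing -Dupdate.limit=... on the JVM command line avoids the class-loading-order concern entirely.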

Regards
 Marcel

On 29/03/16 04:19, "Domenic DiTano" wrote:

>Hello,
> 
>I work with a web application that has Jackrabbit 2.10 embedded and we
>wanted to try upgrading to Oak.  Our current configuration that we use
>for Jackrabbit 2.10 is the FileDataStore along with MySql for the
>Persistence DataStore.  We wrote some test cases to measure the
>performance of Jackrabbit 2.10 vs the latest Oak 1.2.  In the case of
>Jackrabbit 2.10, we used our current application configuration -
>FileDataStore along with MySql.  In the case of Oak we tried many
>configurations but the one we settled on was a DocumentNodeStore with a
>FileDataStore backend. We tried all 3 RDB options (Mongo, Postgres,
>MySql).  All test cases used the same standard JCR 2.0 code.  The test
>cases did the following:
> 
>· create 1000 & 10,000 nodes
>· move 1000 & 10,000 nodes
>· copy 1000 & 10,000 nodes
>· delete 1000 & 10,000 nodes
>· upload 100 files
>· read 1 property on 1000 & 10,000 nodes
>· update 1 property on 1000 & 10,000 nodes
> 
> 
>The results were as follows (all results in milliseconds):
> 
>Oak tests ran with the creation, move, copy, delete, update, and read of
>1000 nodes:
> 
>
> 
>Postgress seems to perform well overall.
> 
>In the case of Jackrabbit 2.10 (tests ran with the creation, move, copy,
>delete, update, and read of 1000 nodes):
> 
>
> 
>Jackrabbit 2.10 performs slightly better than Oak.
> 
>The next set of tests ran with Oak with the creation, move, copy,
>delete, update, and read of 10,000 nodes:
> 
>
> 
>Postgress once again performed ok.  Mongo and MySql did not do well
>around Moves, deletes, and updates. Querying did well also as indexes
>were created.
> 
>In the case of Jackrabbit 2.10 (tests ran with the creation, move, copy,
>delete, update, and read of 10,000 nodes):
> 
>
> 
>Jackrabbit 2.10 performed much
>better than Oak in general.
> 
>Based on the results I have a few questions/comments:
> 
>· Are these fair comparisons between Jackrabbit and Oak?  In our
>application it is very possible to create 1-10,000 nodes in a user
>session.
>· Should I have assumed Oak would outperform Jackrabbit 2.10?
>· I understand MySql is experimental but Mongo is not - I would assume
>Mongo would perform as well as, if not better than, Postgres
>· The performance bottlenecks seem to be at the JDBC level for MySql.  I
>made some configuration changes which helped performance but the changes
>would make MySql fail any ACID tests.
> 
>Just a few notes:
> 
>The same JCR code was used for creating, moving, deleting, etc. across
>all the tests, and the tests were all run on the same machine.
> 
>Used DocumentMK Builder for all DataStores:
> 
>Mongo:
>DocumentNodeStore storeD = new
>DocumentMK.Builder().setPersistentCache("D:\\ekm-oak\\Mongo,size=1024,bina
>ry=0").setMongoDB(db).setBlobStore(new
>DataStoreBlobStore(fds)).getNodeStore();
> 
>MySql:
>   RDBOptions options = new
>RDBOptions().tablePrefix(prefix).dropTablesOnClose(false);
>DocumentNodeStore storeD = new
>DocumentMK.Builder().setBlobStore(new
>DataStoreBlobStore(fds)).setClusterId(1).memoryCacheSize(64 * 1024 *
>1024).
>
>setPersistentCache("D:\\ekm-oak\\MySql,size=1024,binary=0").setRDBConnecti
>on(RDBDataSourceFactory.forJdbcUrl(url, userName, password),
>options).getNodeStore();
>PostGres:
>RDBOptions options = new
>RDBOptions().tablePrefix(prefix).dropTablesOnClose(false);
>DocumentNodeStore storeD = new
>DocumentMK.Builder().setAsyncDelay(0).setBlobStore(new
>DataStoreBlobStore(fds)).setClusterId(1).memoryCacheSize(64 * 1024 *
>1024).
>
>setPersistentCache("D:\\ekm-oak\\postGress,size=1024,binary=0").setRDBConn
>ection(RDBDataSourceFactory.forJdbcUrl(url, userName, password),
>options).getNodeStore();
> 
>The repository was created the same for all three:
>Repository repository = new Jcr(new Oak(storeD)).with(new
>LuceneIndexEditorProvider()).with(configureSearch()).createRepository();
> 
>Any input is welcome.
> 
>Thanks,
>Domenic
> 
>



Re: Extracting subpaths from a DocumentStore repo

2016-03-29 Thread Marcel Reutegger
Hi,

as indicated already by Vikas, my recommendation is also
to rewrite the documents. I'm doing something similar
for OAK-3712. See e.g.:
https://github.com/mreutegg/jackrabbit-oak/blob/OAK-3712/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/NodeDocumentSweeper.java

The method committedBranch() turns a branch commit into a
document-local change. The commit root is then on the document
itself and self-contained.

Regards
 Marcel

On 28/03/16 16:29, "Robert Munteanu" wrote:

Hi,

In the context of the Multiplexing DocumentStore work for Oak [1] I'm
going to work on a tool to extract a few subpaths from a DS repository
which can then be plugged in a different repository.

The objective is to generate a 'private mount' which can be used
together with a different 'global repository'. For instance:

- create a repository (R1) , populate /foo and /bar with some content
- extract data for /foo and /bar from R1
- pre-populate a DS 'storage area' ( MongoDB collection or RDB table )
with the data extracted above
- configure a new repository (R2) to mount /foo and /bar with the data
from above

The main inconvenience is that commits which affect /foo and
/bar often have the commit root at '/', so the collections extracted
using something like oak-run.js' printMongoExportCommand will not work.

I have two possible ways of doing this, so before experimenting I'd
like to discuss with you whether these are valid ways of approaching
the problem or if there's something better:

1) Manually create a new commit for each sub-path ( e.g. 1 for /foo and
1 for /bar ) and re-write the commit references for each node document
so that they point to the new commits

2) For each sub-path, copy the nodes into a temporary staging area (
e.g. /foo -> /staging/foo, or even /:staging/foo ), export the data,
and then manually alter the references.

Approach 1) is probably going to get me in trouble with the
DocumentNodeStore caches, so the Oak instance might not be usable after
I perform these changes ( which can be fine, since I'm going to spin it
up just for that ).

Approach 2) might get me branch commits, which are always rooted at
'/', which invalidates the approach. Also, path find/replace sounds
error-prone.

Any ideas how to best approach this?

Thanks,

Robert

[1]: https://issues.apache.org/jira/browse/OAK-3401



[ANNOUNCE] Apache Jackrabbit Oak 1.0.29 released

2016-03-29 Thread Dominique Jaeggi
The Apache Jackrabbit community is pleased to announce the release of
Apache Jackrabbit Oak 1.0.29. The release is available for download at:

http://jackrabbit.apache.org/downloads.html

See the full release notes below for details about this release:

Release Notes -- Apache Jackrabbit Oak -- Version 1.0.29

Introduction


Jackrabbit Oak is a scalable, high-performance hierarchical content
repository designed for use as the foundation of modern world-class
web sites and other demanding content applications.

Apache Jackrabbit Oak 1.0.29 is a patch release that contains fixes and
improvements over Oak 1.0. Jackrabbit Oak 1.0.x releases are considered
stable and targeted for production use.

The Oak effort is a part of the Apache Jackrabbit project.
Apache Jackrabbit is a project of the Apache Software Foundation.

Changes in Oak 1.0.29
-

Technical task

   [OAK-3843] - MS SQL doesn't support more than 2100 parameters in one request
   [OAK-4113] - RDBJDBCTools: fix JDBC driver version check
   [OAK-4134] - RDBBlobStore: improve error handling and logging

Bug

   [OAK-4050] - SplitOperations may not retain most recent committed
_commitRoot entry
   [OAK-4131] - LastRevisionRecoveryAgent may throw ClassCastException


In addition to the above-mentioned changes, this release contains
all changes included in previous Apache Jackrabbit Oak 1.0.x releases.

Please note, the backported RDB support for the DocumentNodeStore is considered
experimental at this point and is not yet ready for production use. Feel free
to try it out and report any issues you may see to the Oak developers.

For more detailed information about all the changes in this and other
Oak releases, please see the Oak issue tracker at

  https://issues.apache.org/jira/browse/OAK

Release Contents


This release consists of a single source archive packaged as a zip file.
The archive can be unpacked with the jar tool from your JDK installation.
See the README.md file for instructions on how to build this release.

The source archive is accompanied by SHA1 and MD5 checksums and a PGP
signature that you can use to verify the authenticity of your download.
The public key used for the PGP signature can be found at
http://www.apache.org/dist/jackrabbit/KEYS.

About Apache Jackrabbit Oak
---

Jackrabbit Oak is a scalable, high-performance hierarchical content
repository designed for use as the foundation of modern world-class
web sites and other demanding content applications.

The Oak effort is a part of the Apache Jackrabbit project.
Apache Jackrabbit is a project of the Apache Software Foundation.

For more information, visit http://jackrabbit.apache.org/oak

About The Apache Software Foundation


Established in 1999, The Apache Software Foundation provides organizational,
legal, and financial support for more than 140 freely-available,
collaboratively-developed Open Source projects. The pragmatic Apache License
enables individual and commercial users to easily deploy Apache software;
the Foundation's intellectual property framework limits the legal exposure
of its 3,800+ contributors.

For more information, visit http://www.apache.org/


[RESULT][VOTE] Release Apache Jackrabbit Oak 1.0.29

2016-03-29 Thread Dominique Jaeggi
Hi,

The vote passes as follows:

+1 Alex Parvulescu
+1 Davide Gianella
+1 Dominique Jaeggi

Thanks for voting. I'll push the release out.

Regards
Dom.