[jira] [Comment Edited] (JCR-4028) Artifact for jackrabbit-spi:2.6.6 is missing in Maven central repository

2016-09-22 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-4028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515499#comment-15515499
 ] 

Julian Reschke edited comment on JCR-4028 at 9/23/16 5:56 AM:
--

Interesting. Some projects are missing, and jackrabbit-spi-commons has a 
timestamp 4 days later than the other parts; not sure what went wrong here.

The problem is also present in 
https://repo.maven.apache.org/maven2/org/apache/jackrabbit/jackrabbit-spi-commons/2.6.6/...


was (Author: reschke):
Interesting. Some projects are missing, and jackrabbit-spi-commons has a 
timestamp 4 days later than the other parts; not sure what went wrong here.

> Artifact for jackrabbit-spi:2.6.6 is missing in Maven central repository
> 
>
> Key: JCR-4028
> URL: https://issues.apache.org/jira/browse/JCR-4028
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: jackrabbit-spi
>Affects Versions: 2.6.6
> Environment: Maven 3.3
>Reporter: Torsten Friebe
>Priority: Minor
>
> The jackrabbit-spi-2.6.6.jar is missing in the Maven central repository. When 
> defining the dependency in a Maven POM
> {code}
> <dependency>
>   <groupId>org.apache.jackrabbit</groupId>
>   <artifactId>jackrabbit-spi</artifactId>
>   <version>2.6.6</version>
> </dependency>
> {code}
> the artifact cannot be resolved. For unknown reasons the 2.6.6 version does 
> not provide all artifacts:
> http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.jackrabbit%22%20AND%20v%3A%222.6.6%22
> compared to the 2.6.5 release:
> http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.jackrabbit%22%20AND%20v%3A%222.6.5%22
> To enable a smooth upgrade from 2.6.5 to 2.6.6 (to fix JCR-3883 / 
> CVE-2015-1833) it would be very helpful if the missing artifacts were added to 
> the Maven central repository.





[jira] [Commented] (JCR-4028) Artifact for jackrabbit-spi:2.6.6 is missing in Maven central repository

2016-09-22 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-4028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515499#comment-15515499
 ] 

Julian Reschke commented on JCR-4028:
-

Interesting. Some projects are missing, and jackrabbit-spi-commons has a 
timestamp 4 days later than the other parts; not sure what went wrong here.

> Artifact for jackrabbit-spi:2.6.6 is missing in Maven central repository
> 
>
> Key: JCR-4028
> URL: https://issues.apache.org/jira/browse/JCR-4028
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: jackrabbit-spi
>Affects Versions: 2.6.6
> Environment: Maven 3.3
>Reporter: Torsten Friebe
>Priority: Minor
>
> The jackrabbit-spi-2.6.6.jar is missing in the Maven central repository. When 
> defining the dependency in a Maven POM
> {code}
> <dependency>
>   <groupId>org.apache.jackrabbit</groupId>
>   <artifactId>jackrabbit-spi</artifactId>
>   <version>2.6.6</version>
> </dependency>
> {code}
> the artifact cannot be resolved. For unknown reasons the 2.6.6 version does 
> not provide all artifacts:
> http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.jackrabbit%22%20AND%20v%3A%222.6.6%22
> compared to the 2.6.5 release:
> http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.jackrabbit%22%20AND%20v%3A%222.6.5%22
> To enable a smooth upgrade from 2.6.5 to 2.6.6 (to fix JCR-3883 / 
> CVE-2015-1833) it would be very helpful if the missing artifacts were added to 
> the Maven central repository.





Re: [discuss] Stop wrapping unknown NodeState instances with a MemoryNodeStore?

2016-09-22 Thread Michael Dürig


Hi,

I think the core of the problem is that the memory node store doesn't 
always behave properly when initialised with something other than a 
MemoryNodeState. Consider:


NodeState base = new AlienNodeState();
NodeStore store = new MemoryNodeStore(base);
NodeBuilder builder = store.getRoot().builder();
store.merge(builder, EmptyHook.INSTANCE, CommitInfo.EMPTY);

The merge call will throw an IAE if base.builder() returns a builder 
instance that doesn't inherit from MemoryNodeBuilder.


In this respect I think your fix is basically correct; it should just be 
applied deeper down. Instead of wrapping the base states before passing 
them to the MemoryNodeStore constructor, I think that constructor should 
do the wrapping in case the passed base state is of a different type.
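
A minimal sketch of that idea (the field name and the wrap() helper are 
assumptions for illustration, not Oak's actual code):

public MemoryNodeStore(NodeState base) {
    // normalise foreign NodeState implementations once, inside the store,
    // instead of asking every caller to wrap the base state first
    NodeState normalised = base instanceof MemoryNodeState
            ? base            // already an in-memory state, use as-is
            : wrap(base);     // wrap(): hypothetical shallow copy into an in-memory state
    this.root = new AtomicReference<NodeState>(normalised);
}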


Michael


On 22.9.16 2:49, Tomek Rekawek wrote:

Hi,

I’ve looked into this issue. I think it’s caused by the fact that the squeeze() 
method sometimes doesn’t wrap the passed node state with a MemoryNodeState, but 
returns it as-is. I tried to wrap the state unconditionally in the initializers 
and it fixed the issue.

Michael, Robert - do you think [1] is an acceptable solution? If so, I’ll 
create a proper JIRA and merge the code.

Regards,
Tomek

[1] https://github.com/trekawek/jackrabbit-oak/commit/cf3d73



[jira] [Created] (JCR-4028) Artifact for jackrabbit-spi:2.6.6 is missing in Maven central repository

2016-09-22 Thread Torsten Friebe (JIRA)
Torsten Friebe created JCR-4028:
---

 Summary: Artifact for jackrabbit-spi:2.6.6 is missing in Maven 
central repository
 Key: JCR-4028
 URL: https://issues.apache.org/jira/browse/JCR-4028
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-spi
Affects Versions: 2.6.6
 Environment: Maven 3.3
Reporter: Torsten Friebe
Priority: Minor


The jackrabbit-spi-2.6.6.jar is missing in the Maven central repository. When 
defining the dependency in a Maven POM
{code}
<dependency>
  <groupId>org.apache.jackrabbit</groupId>
  <artifactId>jackrabbit-spi</artifactId>
  <version>2.6.6</version>
</dependency>
{code}
the artifact cannot be resolved.

For unknown reasons the 2.6.6 version does not provide all artifacts:
http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.jackrabbit%22%20AND%20v%3A%222.6.6%22
compared to the 2.6.5 release:
http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.jackrabbit%22%20AND%20v%3A%222.6.5%22

To enable a smooth upgrade from 2.6.5 to 2.6.6 (to fix JCR-3883 / 
CVE-2015-1833) it would be very helpful if the missing artifacts were added to 
the Maven central repository.
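
One quick way to confirm the gap (a sketch; any Maven 3.x installation should do) 
is to ask Maven to resolve the artifact directly. Against Central this should fail 
for 2.6.6 while 2.6.5 resolves:

mvn dependency:get -Dartifact=org.apache.jackrabbit:jackrabbit-spi:2.6.6
mvn dependency:get -Dartifact=org.apache.jackrabbit:jackrabbit-spi:2.6.5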





RE: oak-upgrade problems migrating from Jackrabbit 2 to Oak

2016-09-22 Thread Robert Haycock
Ignore that last message. I thought it was logging every node, not every 10,000 
nodes :D

-Original Message-
From: Robert Haycock [mailto:robert.hayc...@artificial-solutions.com] 
Sent: 22 September 2016 16:34
To: oak-dev@jackrabbit.apache.org
Subject: RE: oak-upgrade problems migrating from Jackrabbit 2 to Oak

I've removed (for now) the large strings and now the upgrade has started.

I have to say, it's a lot slower than I anticipated but it's running :)

Thanks.

-Original Message-
From: Robert Haycock [mailto:robert.hayc...@artificial-solutions.com] 
Sent: 22 September 2016 15:33
To: oak-dev@jackrabbit.apache.org
Subject: RE: oak-upgrade problems migrating from Jackrabbit 2 to Oak

Hi Tomek,

We are aiming at having a cluster of Oaks. I'll see if I can find the 
offending large string.

Thanks.

-Original Message-
From: Tomek Rekawek [mailto:reka...@adobe.com]
Sent: 22 September 2016 15:29
To: oak-dev@jackrabbit.apache.org
Subject: Re: oak-upgrade problems migrating from Jackrabbit 2 to Oak

Hi Robert,

I think the quoted exception may be caused by some long string stored in the 
Jackrabbit 2 repository. In MongoMK all the strings are inlined in the Mongo 
documents, while the binaries are extracted to the blob store. Therefore, 
string properties longer than ~15MB are not supported. It’s a hard limit of the 
Mongo implementation itself.

Migrating this repository may require changing the repository structure - the 
large properties type should be changed from STRING to BINARY.

As Torgeir noticed, SegmentMK doesn’t have such constraints. If you don’t need 
to create a cluster of Oaks, then SegmentMK is a better choice anyway.

Regards,
Tomek

> On 22 Sep 2016, at 16:08, Robert Haycock 
>  wrote:
> 
> Thanks Tomek,
> 
> Getting closer!
> 
> Looks like a setting somewhere needs increasing...
> 
> Exception in thread "main" java.lang.RuntimeException: 
> javax.jcr.RepositoryException: Failed to copy content
>at com.google.common.io.Closer.rethrow(Closer.java:149)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:58)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.main(OakUpgrade.java:
> 42) Caused by: javax.jcr.RepositoryException: Failed to copy content
>at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:551)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.upgrade(OakUpgrade.java:65)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:53)
>... 1 more
> Caused by: org.bson.BsonSerializationException: Size 24184261 is larger than 
> MaxDocumentSize 16793600.

--
Tomek Rękawek | Adobe Research | www.adobe.com reka...@adobe.com




RE: oak-upgrade problems migrating from Jackrabbit 2 to Oak

2016-09-22 Thread Robert Haycock
I've removed (for now) the large strings and now the upgrade has started.

I have to say, it's a lot slower than I anticipated but it's running :)

Thanks.

-Original Message-
From: Robert Haycock [mailto:robert.hayc...@artificial-solutions.com] 
Sent: 22 September 2016 15:33
To: oak-dev@jackrabbit.apache.org
Subject: RE: oak-upgrade problems migrating from Jackrabbit 2 to Oak

Hi Tomek,

We are aiming at having a cluster of Oaks. I'll see if I can find the 
offending large string.

Thanks.

-Original Message-
From: Tomek Rekawek [mailto:reka...@adobe.com]
Sent: 22 September 2016 15:29
To: oak-dev@jackrabbit.apache.org
Subject: Re: oak-upgrade problems migrating from Jackrabbit 2 to Oak

Hi Robert,

I think the quoted exception may be caused by some long string stored in the 
Jackrabbit 2 repository. In MongoMK all the strings are inlined in the Mongo 
documents, while the binaries are extracted to the blob store. Therefore, 
string properties longer than ~15MB are not supported. It’s a hard limit of the 
Mongo implementation itself.

Migrating this repository may require changing the repository structure - the 
large properties type should be changed from STRING to BINARY.

As Torgeir noticed, SegmentMK doesn’t have such constraints. If you don’t need 
to create a cluster of Oaks, then SegmentMK is a better choice anyway.

Regards,
Tomek

> On 22 Sep 2016, at 16:08, Robert Haycock 
>  wrote:
> 
> Thanks Tomek,
> 
> Getting closer!
> 
> Looks like a setting somewhere needs increasing...
> 
> Exception in thread "main" java.lang.RuntimeException: 
> javax.jcr.RepositoryException: Failed to copy content
>at com.google.common.io.Closer.rethrow(Closer.java:149)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:58)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.main(OakUpgrade.java:
> 42) Caused by: javax.jcr.RepositoryException: Failed to copy content
>at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:551)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.upgrade(OakUpgrade.java:65)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:53)
>... 1 more
> Caused by: org.bson.BsonSerializationException: Size 24184261 is larger than 
> MaxDocumentSize 16793600.

--
Tomek Rękawek | Adobe Research | www.adobe.com reka...@adobe.com




RE: oak-upgrade problems migrating from Jackrabbit 2 to Oak

2016-09-22 Thread Robert Haycock
Hi Tomek,

We are aiming at having a cluster of Oaks. I'll see if I can find the 
offending large string.

Thanks.

-Original Message-
From: Tomek Rekawek [mailto:reka...@adobe.com] 
Sent: 22 September 2016 15:29
To: oak-dev@jackrabbit.apache.org
Subject: Re: oak-upgrade problems migrating from Jackrabbit 2 to Oak

Hi Robert,

I think the quoted exception may be caused by some long string stored in the 
Jackrabbit 2 repository. In MongoMK all the strings are inlined in the Mongo 
documents, while the binaries are extracted to the blob store. Therefore, 
string properties longer than ~15MB are not supported. It’s a hard limit of the 
Mongo implementation itself.

Migrating this repository may require changing the repository structure - the 
large properties type should be changed from STRING to BINARY.

As Torgeir noticed, SegmentMK doesn’t have such constraints. If you don’t need 
to create a cluster of Oaks, then SegmentMK is a better choice anyway.

Regards,
Tomek

> On 22 Sep 2016, at 16:08, Robert Haycock 
>  wrote:
> 
> Thanks Tomek,
> 
> Getting closer!
> 
> Looks like a setting somewhere needs increasing...
> 
> Exception in thread "main" java.lang.RuntimeException: 
> javax.jcr.RepositoryException: Failed to copy content
>at com.google.common.io.Closer.rethrow(Closer.java:149)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:58)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.main(OakUpgrade.java:42)
> Caused by: javax.jcr.RepositoryException: Failed to copy content
>at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:551)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.upgrade(OakUpgrade.java:65)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:53)
>... 1 more
> Caused by: org.bson.BsonSerializationException: Size 24184261 is larger than 
> MaxDocumentSize 16793600.

-- 
Tomek Rękawek | Adobe Research | www.adobe.com
reka...@adobe.com




Re: oak-upgrade problems migrating from Jackrabbit 2 to Oak

2016-09-22 Thread Tomek Rekawek
Hi Robert,

I think the quoted exception may be caused by some long string stored in the 
Jackrabbit 2 repository. In MongoMK all the strings are inlined in the Mongo 
documents, while the binaries are extracted to the blob store. Therefore, 
string properties longer than ~15MB are not supported. It’s a hard limit of the 
Mongo implementation itself.

Migrating this repository may require changing the repository structure - the 
large properties type should be changed from STRING to BINARY.
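
A minimal sketch of such a conversion with the plain JCR API, run against the source 
Jackrabbit 2 repository before migrating (the path and property name are placeholders, 
and it assumes the node type allows redefining the property as BINARY):

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.jcr.Binary;
import javax.jcr.Node;
import javax.jcr.Session;

void convertToBinary(Session session) throws Exception {
    Node node = session.getNode("/content/someNode");          // placeholder path
    String value = node.getProperty("largeText").getString();  // the oversized STRING
    // stored as BINARY, the value goes to the blob store instead of being
    // inlined in the ~16 MB-limited Mongo document
    Binary binary = session.getValueFactory().createBinary(
            new ByteArrayInputStream(value.getBytes(StandardCharsets.UTF_8)));
    node.setProperty("largeText", binary);
    session.save();
}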

As Torgeir noticed, SegmentMK doesn’t have such constraints. If you don’t need 
to create a cluster of Oaks, then SegmentMK is a better choice anyway.

Regards,
Tomek

> On 22 Sep 2016, at 16:08, Robert Haycock 
>  wrote:
> 
> Thanks Tomek,
> 
> Getting closer!
> 
> Looks like a setting somewhere needs increasing...
> 
> Exception in thread "main" java.lang.RuntimeException: 
> javax.jcr.RepositoryException: Failed to copy content
>at com.google.common.io.Closer.rethrow(Closer.java:149)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:58)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.main(OakUpgrade.java:42)
> Caused by: javax.jcr.RepositoryException: Failed to copy content
>at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:551)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.upgrade(OakUpgrade.java:65)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:53)
>... 1 more
> Caused by: org.bson.BsonSerializationException: Size 24184261 is larger than 
> MaxDocumentSize 16793600.

-- 
Tomek Rękawek | Adobe Research | www.adobe.com
reka...@adobe.com






RE: oak-upgrade problems migrating from Jackrabbit 2 to Oak

2016-09-22 Thread Robert Haycock
I can't find any docs either under the project or online regarding this.

-Original Message-
From: Torgeir Veimo [mailto:torgeir.ve...@gmail.com] 
Sent: 22 September 2016 15:10
To: oak-dev@jackrabbit.apache.org
Subject: Re: oak-upgrade problems migrating from Jackrabbit 2 to Oak

Maybe you can work around it by upgrading using tarmk, then copying to a 
mongodb repository when it's all done.




Re: oak-upgrade problems migrating from Jackrabbit 2 to Oak

2016-09-22 Thread Torgeir Veimo
Maybe you can work around it by upgrading using tarmk, then copying to a
mongodb repository when it's all done.
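
Roughly, that two-step approach would be an upgrade into a local TarMK segment store 
followed by a copy into MongoDB (sketch only - the argument order follows the examples 
in this thread, but the exact oak-upgrade syntax may differ between versions, so check 
the tool's --help):

java -jar oak-upgrade-*.jar /path/to/jackrabbit2-repo /path/to/repository.xml /path/to/segmentstore
java -jar oak-upgrade-*.jar /path/to/segmentstore mongodb://localhost:27017/oak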

On 23 September 2016 at 00:08, Robert Haycock <
robert.hayc...@artificial-solutions.com> wrote:

> Thanks Tomek,
>
> Getting closer!
>
> Looks like a setting somewhere needs increasing...
>
> Exception in thread "main" java.lang.RuntimeException:
> javax.jcr.RepositoryException: Failed to copy content
> at com.google.common.io.Closer.rethrow(Closer.java:149)
> at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.
> migrate(OakUpgrade.java:58)
> at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.main(
> OakUpgrade.java:42)
> Caused by: javax.jcr.RepositoryException: Failed to copy content
> at org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.
> copy(RepositoryUpgrade.java:551)
> at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.
> upgrade(OakUpgrade.java:65)
> at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.
> migrate(OakUpgrade.java:53)
> ... 1 more
> Caused by: org.bson.BsonSerializationException: Size 24184261 is larger
> than MaxDocumentSize 16793600.
> at org.bson.BsonBinaryWriter.backpatchSize(
> BsonBinaryWriter.java:367)
> at org.bson.BsonBinaryWriter.doWriteEndDocument(
> BsonBinaryWriter.java:122)
> at org.bson.AbstractBsonWriter.writeEndDocument(
> AbstractBsonWriter.java:293)
> at com.mongodb.DBObjectCodec.encodeMap(DBObjectCodec.java:222)
> at com.mongodb.DBObjectCodec.writeValue(DBObjectCodec.java:196)
> at com.mongodb.DBObjectCodec.encode(DBObjectCodec.java:128)
> at com.mongodb.DBObjectCodec.encode(DBObjectCodec.java:61)
> at org.bson.codecs.BsonDocumentWrapperCodec.encode(
> BsonDocumentWrapperCodec.java:63)
> at org.bson.codecs.BsonDocumentWrapperCodec.encode(
> BsonDocumentWrapperCodec.java:29)
> at com.mongodb.connection.UpdateCommandMessage.writeTheWrites(
> UpdateCommandMessage.java:84)
> at com.mongodb.connection.UpdateCommandMessage.writeTheWrites(
> UpdateCommandMessage.java:42)
> at com.mongodb.connection.BaseWriteCommandMessage.
> encodeMessageBodyWithMetadata(BaseWriteCommandMessage.java:129)
> at com.mongodb.connection.RequestMessage.encodeWithMetadata(
> RequestMessage.java:160)
> at com.mongodb.connection.WriteCommandProtocol.sendMessage(
> WriteCommandProtocol.java:212)
> at com.mongodb.connection.WriteCommandProtocol.execute(
> WriteCommandProtocol.java:101)
> at com.mongodb.connection.UpdateCommandProtocol.execute(
> UpdateCommandProtocol.java:64)
> at com.mongodb.connection.UpdateCommandProtocol.execute(
> UpdateCommandProtocol.java:37)
> at com.mongodb.connection.DefaultServer$
> DefaultServerProtocolExecutor.execute(DefaultServer.java:159)
> at com.mongodb.connection.DefaultServerConnection.executeProtocol(
> DefaultServerConnection.java:286)
> at com.mongodb.connection.DefaultServerConnection.updateCommand(
> DefaultServerConnection.java:140)
> at com.mongodb.operation.MixedBulkWriteOperation$Run$3.
> executeWriteCommandProtocol(MixedBulkWriteOperation.java:480)
> at com.mongodb.operation.MixedBulkWriteOperation$Run$
> RunExecutor.execute(MixedBulkWriteOperation.java:646)
> at com.mongodb.operation.MixedBulkWriteOperation$Run.execute(
> MixedBulkWriteOperation.java:399)
> at com.mongodb.operation.MixedBulkWriteOperation$1.
> call(MixedBulkWriteOperation.java:179)
> at com.mongodb.operation.MixedBulkWriteOperation$1.
> call(MixedBulkWriteOperation.java:168)
> at com.mongodb.operation.OperationHelper.withConnectionSource(
> OperationHelper.java:230)
> at com.mongodb.operation.OperationHelper.withConnection(
> OperationHelper.java:221)
> at com.mongodb.operation.MixedBulkWriteOperation.execute(
> MixedBulkWriteOperation.java:168)
> at com.mongodb.operation.MixedBulkWriteOperation.execute(
> MixedBulkWriteOperation.java:74)
> at com.mongodb.Mongo.execute(Mongo.java:781)
> at com.mongodb.Mongo$2.execute(Mongo.java:764)
> at com.mongodb.DBCollection.executeBulkWriteOperation(
> DBCollection.java:2195)
> at com.mongodb.DBCollection.executeBulkWriteOperation(
> DBCollection.java:2188)
> at com.mongodb.BulkWriteOperation.execute(
> BulkWriteOperation.java:121)
> at org.apache.jackrabbit.oak.plugins.document.mongo.
> MongoDocumentStore.sendBulkUpdate(MongoDocumentStore.java:1088)
> at org.apache.jackrabbit.oak.plugins.document.mongo.
> MongoDocumentStore.bulkUpdate(MongoDocumentStore.java:989)
> at org.apache.jackrabbit.oak.plugins.document.mongo.
> MongoDocumentStore.createOrUpdate(MongoDocumentStore.java:927)
> at org.apache.jackrabbit.oak.plugins.document.util.
> LeaseCheckDocumentStoreWrapper.createOrUpdate(
> LeaseCheckDocumentStoreWrapper.java:135)
> at 

RE: oak-upgrade problems migrating from Jackrabbit 2 to Oak

2016-09-22 Thread Robert Haycock
Thanks Tomek,

Getting closer!

Looks like a setting somewhere needs increasing...

Exception in thread "main" java.lang.RuntimeException: 
javax.jcr.RepositoryException: Failed to copy content
at com.google.common.io.Closer.rethrow(Closer.java:149)
at 
org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:58)
at 
org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.main(OakUpgrade.java:42)
Caused by: javax.jcr.RepositoryException: Failed to copy content
at 
org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:551)
at 
org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.upgrade(OakUpgrade.java:65)
at 
org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:53)
... 1 more
Caused by: org.bson.BsonSerializationException: Size 24184261 is larger than 
MaxDocumentSize 16793600.
at org.bson.BsonBinaryWriter.backpatchSize(BsonBinaryWriter.java:367)
at 
org.bson.BsonBinaryWriter.doWriteEndDocument(BsonBinaryWriter.java:122)
at 
org.bson.AbstractBsonWriter.writeEndDocument(AbstractBsonWriter.java:293)
at com.mongodb.DBObjectCodec.encodeMap(DBObjectCodec.java:222)
at com.mongodb.DBObjectCodec.writeValue(DBObjectCodec.java:196)
at com.mongodb.DBObjectCodec.encode(DBObjectCodec.java:128)
at com.mongodb.DBObjectCodec.encode(DBObjectCodec.java:61)
at 
org.bson.codecs.BsonDocumentWrapperCodec.encode(BsonDocumentWrapperCodec.java:63)
at 
org.bson.codecs.BsonDocumentWrapperCodec.encode(BsonDocumentWrapperCodec.java:29)
at 
com.mongodb.connection.UpdateCommandMessage.writeTheWrites(UpdateCommandMessage.java:84)
at 
com.mongodb.connection.UpdateCommandMessage.writeTheWrites(UpdateCommandMessage.java:42)
at 
com.mongodb.connection.BaseWriteCommandMessage.encodeMessageBodyWithMetadata(BaseWriteCommandMessage.java:129)
at 
com.mongodb.connection.RequestMessage.encodeWithMetadata(RequestMessage.java:160)
at 
com.mongodb.connection.WriteCommandProtocol.sendMessage(WriteCommandProtocol.java:212)
at 
com.mongodb.connection.WriteCommandProtocol.execute(WriteCommandProtocol.java:101)
at 
com.mongodb.connection.UpdateCommandProtocol.execute(UpdateCommandProtocol.java:64)
at 
com.mongodb.connection.UpdateCommandProtocol.execute(UpdateCommandProtocol.java:37)
at 
com.mongodb.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:159)
at 
com.mongodb.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:286)
at 
com.mongodb.connection.DefaultServerConnection.updateCommand(DefaultServerConnection.java:140)
at 
com.mongodb.operation.MixedBulkWriteOperation$Run$3.executeWriteCommandProtocol(MixedBulkWriteOperation.java:480)
at 
com.mongodb.operation.MixedBulkWriteOperation$Run$RunExecutor.execute(MixedBulkWriteOperation.java:646)
at 
com.mongodb.operation.MixedBulkWriteOperation$Run.execute(MixedBulkWriteOperation.java:399)
at 
com.mongodb.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:179)
at 
com.mongodb.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:168)
at 
com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:230)
at 
com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:221)
at 
com.mongodb.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:168)
at 
com.mongodb.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:74)
at com.mongodb.Mongo.execute(Mongo.java:781)
at com.mongodb.Mongo$2.execute(Mongo.java:764)
at 
com.mongodb.DBCollection.executeBulkWriteOperation(DBCollection.java:2195)
at 
com.mongodb.DBCollection.executeBulkWriteOperation(DBCollection.java:2188)
at com.mongodb.BulkWriteOperation.execute(BulkWriteOperation.java:121)
at 
org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.sendBulkUpdate(MongoDocumentStore.java:1088)
at 
org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.bulkUpdate(MongoDocumentStore.java:989)
at 
org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.createOrUpdate(MongoDocumentStore.java:927)
at 
org.apache.jackrabbit.oak.plugins.document.util.LeaseCheckDocumentStoreWrapper.createOrUpdate(LeaseCheckDocumentStoreWrapper.java:135)
at 
org.apache.jackrabbit.oak.plugins.document.Commit.applyToDocumentStore(Commit.java:294)
at 
org.apache.jackrabbit.oak.plugins.document.Commit.applyToDocumentStore(Commit.java:231)
at 
org.apache.jackrabbit.oak.plugins.document.Commit.applyInternal(Commit.java:200)
at 
org.apache.jackrabbit.oak.plugins.document.Commit.apply(Commit.java:189)
at 

Re: [discuss] Stop wrapping unknown NodeState instances with a MemoryNodeStore?

2016-09-22 Thread Tomek Rekawek
Hi Robert,

Thanks for the feedback.

> On 22 Sep 2016, at 15:13, Robert Munteanu  wrote:
> 
> Only thing I'm wondering is whether there is a scenario where
> performance would be greatly impacted since the NodeState contains lots
> of entries _and_ it's not a MemoryNodeState, so the newly added wrap
> method would basically copy everything.

The method does a shallow copy; it only copies the child node references. Also, 
it’s used in cases where we more or less know the state of the node state - it 
should be empty-ish, as it’s the root node during the repository 
initialisation phase.
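
For illustration, such a shallow copy could look roughly like this (a sketch against 
the public Oak NodeState/NodeBuilder API; the method name and placement are 
assumptions - the actual change is in the linked commit):

import org.apache.jackrabbit.oak.api.PropertyState;
import org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState;
import org.apache.jackrabbit.oak.spi.state.ChildNodeEntry;
import org.apache.jackrabbit.oak.spi.state.NodeBuilder;
import org.apache.jackrabbit.oak.spi.state.NodeState;

static NodeState shallowCopy(NodeState state) {
    NodeBuilder builder = EmptyNodeState.EMPTY_NODE.builder();
    // copy the property values directly
    for (PropertyState property : state.getProperties()) {
        builder.setProperty(property);
    }
    // copy only the child references; the child trees themselves are not cloned
    for (ChildNodeEntry child : state.getChildNodeEntries()) {
        builder.setChildNode(child.getName(), child.getNodeState());
    }
    return builder.getNodeState();
}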

Regards,
Tomek

-- 
Tomek Rękawek | Adobe Research | www.adobe.com
reka...@adobe.com





Re: [discuss] Stop wrapping unknown NodeState instances with a MemoryNodeStore?

2016-09-22 Thread Robert Munteanu
On Thu, 2016-09-22 at 12:49 +, Tomek Rekawek wrote:
> Hi,
> 
> I’ve looked into this issue. I think it’s caused by the fact that the
> squeeze() method sometimes doesn’t wrap the passed node state with
> a MemoryNodeState, but returns it as-is. I tried to wrap the state
> unconditionally in the initializers and it fixed the issue.
> 
> Michael, Robert - do you think [1] is an acceptable solution? If so,
> I’ll create a proper JIRA and merge the code.
> 
> Regards,
> Tomek
> 
> [1] https://github.com/trekawek/jackrabbit-oak/commit/cf3d73
> 

That looks correct to me, although I may be missing some finer points.

Only thing I'm wondering is whether there is a scenario where
performance would be greatly impacted since the NodeState contains lots
of entries _and_ it's not a MemoryNodeState, so the newly added wrap
method would basically copy everything.

Upgrades maybe?

Robert


Jackrabbit-trunk - Build # 2389 - Fixed

2016-09-22 Thread Apache Jenkins Server
The Apache Jenkins build system has built Jackrabbit-trunk (build #2389)

Status: Fixed

Check console output at https://builds.apache.org/job/Jackrabbit-trunk/2389/ to 
view the results.

Re: [discuss] Stop wrapping unknown NodeState instances with a MemoryNodeStore?

2016-09-22 Thread Tomek Rekawek
Hi,

I’ve looked into this issue. I think it’s caused by the fact that the squeeze() 
method sometimes doesn’t wrap the passed node state with a MemoryNodeState, but 
returns it as-is. I tried to wrap the state unconditionally in the initializers 
and it fixed the issue.

Michael, Robert - do you think [1] is an acceptable solution? If so, I’ll 
create a proper JIRA and merge the code.

Regards,
Tomek

[1] https://github.com/trekawek/jackrabbit-oak/commit/cf3d73

-- 
Tomek Rękawek | Adobe Research | www.adobe.com
reka...@adobe.com

> On 9 Sep 2016, at 17:09, Robert Munteanu  wrote:
> 
> Hi,
> 
> I'd like branch my 'Are all NodeBuilders required to inherit from the
> MemoryNodeBuilder?' thread to put the focus back on the root issue.
> 
> Some repository/workspace initializers unconditionally wrap a passed
> NodeState with a MemoryNodeStore, given that up till now all NodeState
> instances extend the MemoryNodeState. [1][2][3]
> 
> However, once we get a NodeState that does not inherit from a
> MemoryNodeState it all breaks down - like in my multiplexing POC.
> 
> I currently have a hack^H^H^H^H isolated way of exposing a
> MemoryNodeBuilder from a MultiplexingNodeBuilder, but I'd like a more
> elegant approach.
> 
> I wonder if the issue is that the initializers don't get access to the
> 'original' NodeStore and we can extend the API to make it available? Or
> maybe there are other ways of addressing it that I don't see due to my
> limited exposes to Oak's core.
> 
> Thanks,
> 
> Robert
> 
> [1]: https://github.com/apache/jackrabbit-oak/blob/1fdae3a77e4172cf5716
> 6ffe77eb35a4bd93c76b/oak-
> core/src/main/java/org/apache/jackrabbit/oak/plugins/nodetype/write/Ini
> tialContent.java#L118-L119
> [2]: https://github.com/apache/jackrabbit-oak/blob/1fdae3a77e4172cf5716
> 6ffe77eb35a4bd93c76b/oak-
> core/src/main/java/org/apache/jackrabbit/oak/security/user/UserInitiali
> zer.java#L94-L95
> [3]: https://github.com/apache/jackrabbit-oak/blob/1fdae3a77e4172cf5716
> 6ffe77eb35a4bd93c76b/oak-
> core/src/main/java/org/apache/jackrabbit/oak/security/privilege/Privile
> geInitializer.java#L58-L59





[jira] [Updated] (JCR-4027) NPE in JcrRemotingServlet.canHandle() when content-type is missing

2016-09-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated JCR-4027:

Summary: NPE in JcrRemotingServlet.canHandle() when content-type is missing 
 (was: NPE inJcrRemotingServlet.canHandle() when content-type is missing)

> NPE in JcrRemotingServlet.canHandle() when content-type is missing
> --
>
> Key: JCR-4027
> URL: https://issues.apache.org/jira/browse/JCR-4027
> Project: Jackrabbit Content Repository
>  Issue Type: Bug
>  Components: jackrabbit-webdav
>Affects Versions: 2.4.6, 2.6.6, 2.8.2, 2.12.4, 2.10.4, 2.13.3
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.jackrabbit.server.remoting.davex.JcrRemotingServlet.canHandle(JcrRemotingServlet.java:482)
> at 
> org.apache.jackrabbit.server.remoting.davex.JcrRemotingServlet.doPost(JcrRemotingServlet.java:412)
> at 
> org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.execute(AbstractWebdavServlet.java:354)
> at 
> org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.service(AbstractWebdavServlet.java:291)
> {noformat}
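
(Illustration only, not the actual JCR-4027 patch: the kind of guard needed when the 
Content-Type header can be absent.)

import java.util.Locale;
import javax.servlet.http.HttpServletRequest;

static boolean hasContentType(HttpServletRequest request, String expected) {
    String contentType = request.getContentType();   // null when the header is missing
    return contentType != null
            && contentType.toLowerCase(Locale.ENGLISH).startsWith(expected);
}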





[jira] [Created] (JCR-4027) NPE inJcrRemotingServlet.canHandle() when content-type is missing

2016-09-22 Thread Julian Reschke (JIRA)
Julian Reschke created JCR-4027:
---

 Summary: NPE inJcrRemotingServlet.canHandle() when content-type is 
missing
 Key: JCR-4027
 URL: https://issues.apache.org/jira/browse/JCR-4027
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-webdav
Affects Versions: 2.13.3, 2.10.4, 2.12.4, 2.8.2, 2.6.6, 2.4.6
Reporter: Julian Reschke
Assignee: Julian Reschke
Priority: Minor


{noformat}
java.lang.NullPointerException
at 
org.apache.jackrabbit.server.remoting.davex.JcrRemotingServlet.canHandle(JcrRemotingServlet.java:482)
at 
org.apache.jackrabbit.server.remoting.davex.JcrRemotingServlet.doPost(JcrRemotingServlet.java:412)
at 
org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.execute(AbstractWebdavServlet.java:354)
at 
org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.service(AbstractWebdavServlet.java:291)
{noformat}





[jira] [Commented] (JCR-4009) CSRF in Jackrabbit-Webdav (CVE-2016-6801)

2016-09-22 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513104#comment-15513104
 ] 

Julian Reschke commented on JCR-4009:
-

trunk: [r1761909|http://svn.apache.org/r1761909] 
[r1758600|http://svn.apache.org/r1758600] 
[r1758597|http://svn.apache.org/r1758597]
2.12: [r1761911|http://svn.apache.org/r1761911] 
[r1758609|http://svn.apache.org/r1758609]
2.10: [r1761912|http://svn.apache.org/r1761912] 
[r1758761|http://svn.apache.org/r1758761]
2.8: [r1761913|http://svn.apache.org/r1761913] 
[r1758764|http://svn.apache.org/r1758764]
2.6: [r1761915|http://svn.apache.org/r1761915] 
[r1758771|http://svn.apache.org/r1758771]
2.4: [r1761916|http://svn.apache.org/r1761916] 
[r1758791|http://svn.apache.org/r1758791]

(the latest checkins fix a whitespace issue in the log message that was added 
as part of the actual fix) 

> CSRF in Jackrabbit-Webdav (CVE-2016-6801)
> -
>
> Key: JCR-4009
> URL: https://issues.apache.org/jira/browse/JCR-4009
> Project: Jackrabbit Content Repository
>  Issue Type: Bug
>  Components: jackrabbit-webdav
>Affects Versions: 2.4.5, 2.6.5, 2.8.2, 2.10.3, 2.12.3, 2.13.2
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Blocker
>  Labels: csrf, security, webdav
> Fix For: 2.4.6, 2.6.6, 2.12.4, 2.10.4, 2.13.3, 2.8.3
>
> Attachments: CVE-2016-6801.txt, JCR-4009.diff
>
>
> The changes for JCR-4002 have disabled CSRF checking for POST, and thus leave 
> the remoting servlet open for attacks. This HTML form below:
> {noformat}
> <form action="http://localhost:8080/server/default/jcr:root/" method="post">
>   ...
>   Send your message
> </form>
> {noformat}
> will successfully cross-origin-POST to jackrabbit-standalone.
> While fixing this issue, it also became clear that the content-type check 
> failed to take syntax variations into account (upper/lowercase, optional 
> parameters).
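
(Illustration only, not the actual patch: a comparison that tolerates case and optional 
parameters, so that e.g. "Text/XML; charset=UTF-8" is still recognised.)

static boolean isContentType(String headerValue, String expectedType) {
    if (headerValue == null) {
        return false;                       // header missing entirely
    }
    int semicolon = headerValue.indexOf(';');
    String mediaType = semicolon == -1
            ? headerValue.trim()
            : headerValue.substring(0, semicolon).trim();
    return mediaType.equalsIgnoreCase(expectedType);   // "Text/XML" matches "text/xml"
}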





Re: [VOTE] checking JDK compliance of old Jackrabbit branches / call to EOL Jackrabbit 2.4

2016-09-22 Thread Davide Giannella
[X]  declare end-of-life for Jackrabbit 2.4 after the upcoming 2.4.6
release

Davide




Re: oak-upgrade problems migrating from Jackrabbit 2 to Oak

2016-09-22 Thread Tomek Rekawek
Hi Robert,

thanks for noticing this. It seems you’ve run into another bug: OAK-4842[1]. I 
fixed it. Also, I backported the previous fix to the 1.4 branch. Feel free to 
try the SNAPSHOTs:

1.4 (preferred if you want to use Oak 1.4.x): 
https://repository.apache.org/content/repositories/snapshots/org/apache/jackrabbit/oak-upgrade/1.4.8-SNAPSHOT/oak-upgrade-1.4.8-20160922.111319-1.jar
1.6: 
https://repository.apache.org/content/repositories/snapshots/org/apache/jackrabbit/oak-upgrade/1.6-SNAPSHOT/oak-upgrade-1.6-20160922.111809-6.jar

Best regards,
Tomek

[1] https://issues.apache.org/jira/browse/OAK-4842

-- 
Tomek Rękawek | Adobe Research | www.adobe.com
reka...@adobe.com

> On 21 Sep 2016, at 17:36, Robert Haycock 
>  wrote:
> 
> I just noticed the skip-name-check option!!
> 
> However, when I set the option...
> 
> java ^
> -jar target/oak-upgrade-1.6-SNAPSHOT.jar ^
> --skip-name-check ^
> c://work/MyComp-repository ^
> c://work/MyComp/MyComp-backend/ MyComp-repository.xml ^
> mongodb://localhost:27017/oak2
> 
> 
> ... I got the message:
> 
> 'skip-name-check' is not a recognized option
> joptsimple.UnrecognizedOptionException: 'skip-name-check' is not a recognized 
> option
>at 
> joptsimple.OptionException.unrecognizedOption(OptionException.java:89)
> 
> 
> 
> -Original Message-
> From: Robert Haycock [mailto:robert.hayc...@artificial-solutions.com] 
> Sent: 21 September 2016 16:26
> To: oak-dev@jackrabbit.apache.org
> Subject: RE: oak-upgrade problems migrating from Jackrabbit 2 to Oak
> 
> Hi,
> 
> So after configuring  the SecurityManager with the jackrabbit simple 
> implementations, I ran into another NPE as I'd commented out the SearchIndex.
> (
> Exception in thread "main" java.lang.NullPointerException
>at 
> org.apache.jackrabbit.core.IndexAccessor.getReader(IndexAccessor.java:34)
>at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.assertNoLongNames(RepositoryUpgrade.java:977)
>at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:402)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.upgrade(OakUpgrade.java:65)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:53)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.main(OakUpgrade.java:42)
> )
> 
> Looking at RepositoryUpgrade.java, I'm guessing the upgrade tool only works 
> if you use the default SearchIndex? For certain reasons, we had to implement 
> our own.
> 
> Rob.
> 
> -Original Message-
> From: Tomek Rekawek [mailto:reka...@adobe.com]
> Sent: 21 September 2016 10:25
> To: oak-dev@jackrabbit.apache.org
> Subject: Re: oak-upgrade problems migrating from Jackrabbit 2 to Oak
> 
> Hi Robert & Marcel,
> 
> thanks for the report. I created OAK-4832[1] to track it.
> 
> Robert, could you check if the problem exists on the recent SNAPSHOT[2]? If 
> it’s fine, I’ll backport the fix to the 1.4 branch.
> 
> Marcel, do you think using ConfigurationParameters.EMPTY for userConfig is 
> enough if the SecurityManager is not present?
> 
> Regards,
> Tomek
> 
> [1] https://issues.apache.org/jira/browse/OAK-4832
> [2] 
> https://repository.apache.org/content/repositories/snapshots/org/apache/jackrabbit/oak-upgrade/1.6-SNAPSHOT/oak-upgrade-1.6-20160921.092314-5.jar
> 
> --
> Tomek Rękawek | Adobe Research | www.adobe.com reka...@adobe.com
> 
>> On 20 Sep 2016, at 17:46, Marcel Reutegger  wrote:
>> 
>> Hi Robert,
>> 
>> I'm not too familiar with the upgrade module, but I think it doesn't 
>> support security configuration via JAAS. The NPE also indicates your 
>> repository.xml does not have a security manager set. Can you try to set 
>> your SecurityManager in the repository.xml?
>> 
>> See also: 
>> http://jackrabbit.apache.org/jcr/jackrabbit-configuration.html#securit
>> y-configuration
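
For reference, the relevant section of repository.xml with the default implementations 
looks roughly like this (a sketch based on the standard Jackrabbit 2 configuration; 
adjust the class names and workspaceName to your setup):

<Security appName="Jackrabbit">
  <SecurityManager class="org.apache.jackrabbit.core.DefaultSecurityManager" workspaceName="security"/>
  <AccessManager class="org.apache.jackrabbit.core.security.DefaultAccessManager"/>
  <LoginModule class="org.apache.jackrabbit.core.security.authentication.DefaultLoginModule">
    <param name="anonymousId" value="anonymous"/>
    <param name="adminId" value="admin"/>
  </LoginModule>
</Security>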
>> 
>> Though, it would probably be better to fix the RepositoryUpgrade code 
>> because the SecurityManager element is actually optional.
>> 
>> Regards
>> Marcel
>> 
>> On 20/09/16 16:40, Robert Haycock wrote:
>>> Hi,
>>> 
>>> I have a jackrabbit repository (2.6.4) and I want to migrate to oak.
>>> 
>>> I tried...
>>> java -jar oak-upgrade-1.4.7.jar <repository dir> <repository.xml> 
>>> mongodb://localhost:27017/oak
>>> 
>>> It complained about the mysql driver missing. So I copied the oak-upgrade 
>>> project and added the mysql dependency. Then it couldn't find my custom 
>>> search index class, so I commented it out of the xml. Then it couldn't find 
>>> the security manager class configured in my JAAS config so I created one, a 
>>> blank implementation where all methods returned true.
>>> 
>>> Just as I thought it was doing something I ran into this
>>> C:\work\MyCompToOakUpgrader>java 
>>> -Djava.security.auth.login.config=c:/work/MyComp/MyComp-backend/jaas.
>>> config  -jar target/MyComp-to-oak-upgrader-1.4.7.jar  
>>> c://work/MyComp-repository  
>>> 

Re: Jackrabbit and JDK versions

2016-09-22 Thread Davide Giannella
On 21/09/2016 14:45, Julian Reschke wrote:
> 3) Switch trunk and 2.12 to JDK 7.
+1

Davide




Oak 1.5.11 release plan

2016-09-22 Thread Davide Giannella
Hello team,

I'm planning to cut Oak 1.5.11 on 28th September AM BST.

If there are any objections please let me know. Otherwise I will
re-schedule any non-resolved issue for the next iteration.

Thanks
Davide




[Oak origin/1.4] Apache Jackrabbit Oak matrix - Build # 1169 - Failure

2016-09-22 Thread Apache Jenkins Server
The Apache Jenkins build system has built Apache Jackrabbit Oak matrix (build 
#1169)

Status: Failure

Check console output at 
https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/1169/ to view 
the results.

Changes:
[reschke] OAK-4583: RDB*Store: update Tomcat JDBC pool dependency (ported to 
1.4)

[reschke] OAK-4821: allow use of Java 7 in Oak 1.4

[catholicon] OAK-4805: Misconfigured lucene index definition can render the 
whole

[catholicon] OAK-4805: Misconfigured lucene index definition can render the 
whole

 

Test results:
13 tests failed.
FAILED:  
org.apache.jackrabbit.oak.plugins.document.DocumentStoreStatsIT.create[MongoFixture:
 MongoDB]

Error Message:
com.mongodb.MongoException$Network: Operation on server localhost:27017 failed

Stack Trace:
java.lang.RuntimeException: com.mongodb.MongoException$Network: Operation on 
server localhost:27017 failed
at 
org.apache.jackrabbit.oak.plugins.document.DocumentStoreFixture$MongoFixture.createDocumentStore(DocumentStoreFixture.java:228)
at 
org.apache.jackrabbit.oak.plugins.document.AbstractDocumentStoreTest.(AbstractDocumentStoreTest.java:45)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentStoreStatsIT.(DocumentStoreStatsIT.java:51)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.junit.runners.parameterized.BlockJUnit4ClassRunnerWithParameters.createTestUsingConstructorInjection(BlockJUnit4ClassRunnerWithParameters.java:43)
at 
org.junit.runners.parameterized.BlockJUnit4ClassRunnerWithParameters.createTest(BlockJUnit4ClassRunnerWithParameters.java:38)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
Caused by: com.mongodb.MongoException$Network: Operation on server 
localhost:27017 failed
at com.mongodb.DBTCPConnector.doOperation(DBTCPConnector.java:215)
at com.mongodb.DBCollectionImpl.createIndex(DBCollectionImpl.java:490)
at com.mongodb.DBCollection.createIndex(DBCollection.java:762)
at 
org.apache.jackrabbit.oak.plugins.document.mongo.MongoUtils.createIndex(MongoUtils.java:83)
at 

[ANNOUNCE] Apache Jackrabbit 2.4.6 released

2016-09-22 Thread Julian Reschke

The Apache Jackrabbit community is pleased to announce the release of
Apache Jackrabbit 2.4.6. The release is available for download at:

 http://jackrabbit.apache.org/downloads.html

See the full release notes below for details about this release:

Release Notes -- Apache Jackrabbit -- Version 2.4.6

Introduction


This is Apache Jackrabbit(TM) 2.4, a fully compliant implementation of the
Content Repository for Java(TM) Technology API, version 2.0 (JCR 2.0) as
specified in the Java Specification Request 283 (JSR 283).

Apache Jackrabbit 2.4.6 is a patch release that contains fixes and
improvements over Jackrabbit 2.4.5. This release also contains security fixes
for Jackrabbit 2.4.5 and earlier. Jackrabbit 2.4.x releases are considered
stable and targeted for production use.

Security advisory (JCR-3883 / CVE-2015-1833)


This release fixes an important security issue in the jackrabbit-webdav module
reported by Mikhail Egorov.

When processing a WebDAV request body containing XML, the XML parser can be
instructed to read content from network resources accessible to the host,
identified by URI schemes such as "http(s)" or  "file". Depending on the
WebDAV request, this can not only be used to trigger internal network
requests, but might also be used to insert said content into the request,
potentially exposing it to the attacker and others (for instance, by inserting
said content in a WebDAV property value using a PROPPATCH request). See also
IETF RFC 4918, Section 20.6.

Users of the jackrabbit-webdav module are advised to immediately update the
module to this release or disable WebDAV access to the repository.


Changes since Jackrabbit 2.4.5
--

Bug

[JCR-3364] - Moving of nodes requires read access to all parent nodes of the destination node
[JCR-3518] - Build fails on Mac OS + JDK 7
[JCR-3603] - Index aggregate with property include does not speed up order by
[JCR-3710] - occasional test failures in TokenBasedAuthenticationTest
[JCR-3711] - RepositoryChecker versioning cleanup may leave repaired node in invalid type state
[JCR-3761] - TokenInfo#resetExpiration always fails with ConstraintViolationException
[JCR-3790] - timing related TokenProviderTest failures
[JCR-3883] - Jackrabbit WebDAV bundle susceptible to XXE/XEE attack (CVE-2015-1833)
[JCR-3909] - CSRF bug in Jackrabbit-Webdav
[JCR-3949] - occasional test failure in RepositoryConfigTest.testAutomaticClusterNodeIdCreation()
[JCR-3950] - XSS in DirListingExportHandler
[JCR-4009] - CSRF in Jackrabbit-Webdav (CVE-2016-6801)

Improvement

[JCR-3405] - Improvements to user management implementation
[JCR-3687] - Backport improvements made to token based auth in OAK
[JCR-3826] - AbstractPrincipalProvider cachesize is not configurable

In addition to the above-mentioned changes, this release contains
all the changes included up to the Apache Jackrabbit 2.4.5 release.

For more detailed information about all the changes in this and other
Jackrabbit releases, please see the Jackrabbit issue tracker at

https://issues.apache.org/jira/browse/JCR

Release Contents


This release consists of a single source archive packaged as a zip file.
The archive can be unpacked with the jar tool from your JDK installation.
See the README.txt file for instructions on how to build this release.

The source archive is accompanied by SHA1 and MD5 checksums and a PGP
signature that you can use to verify the authenticity of your download.
The public key used for the PGP signature can be found at
https://svn.apache.org/repos/asf/jackrabbit/dist/KEYS.

About Apache Jackrabbit
---

Apache Jackrabbit is a fully conforming implementation of the Content
Repository for Java Technology API (JCR). A content repository is a
hierarchical content store with support for structured and unstructured
content, full text search, versioning, transactions, observation, and
more.

For more information, visit http://jackrabbit.apache.org/

About The Apache Software Foundation


Established in 1999, The Apache Software Foundation provides organizational,
legal, and financial support for more than 140 freely-available,
collaboratively-developed Open Source projects. The pragmatic Apache License
enables individual and commercial users to easily deploy Apache software;
the Foundation's intellectual property framework limits the legal exposure
of its 3,800+ contributors.

For more information, visit http://www.apache.org/

Trademarks
--

Apache Jackrabbit, Jackrabbit, Apache, the Apache feather logo, and the Apache
Jackrabbit project logo are trademarks of The Apache Software Foundation.



Re: [VOTE] Release Apache Jackrabbit 2.4.6

2016-09-22 Thread Julian Reschke

On 2016-09-19 07:59, Julian Reschke wrote:

...


Hello Team,

the vote passes as follows:

+1 Claus Köll 
+1 Dominique Jaeggi 
+1 Julian Reschke 
+1 Marcel Reutegger 
+1 Michael Dürig 


Thanks for voting. I'll push the release out.

Best regards, Julian