[jira] [Commented] (JCR-3226) stateCreated deadlock

2015-07-24 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640387#comment-14640387
 ] 

Dominique Pfister commented on JCR-3226:


Looks good to me, Thomas, +1

 stateCreated deadlock
 -

 Key: JCR-3226
 URL: https://issues.apache.org/jira/browse/JCR-3226
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.2, 2.4
Reporter: Jukka Zitting
Assignee: Thomas Mueller
  Labels: deadlock
 Attachments: JCR-3226-test-2.patch, JCR-3226-test.patch


 In JCR-2650, a potential deadlock in SessionItemStateManager.stateModified() 
 was fixed by postponing the work to the save() call.
 Unfortunately this still leaves the stateCreated() method vulnerable to a 
 similar (though much less likely) deadlock scenario (observed in Jackrabbit 
 2.2.x):
 Thread A:
   at org.apache.jackrabbit.core.state.NodeState.copy(NodeState.java:118)
   - waiting to lock 0x7ffec8ddfa18 (a 
 org.apache.jackrabbit.core.state.NodeState)
   - locked 0x7ffec8ddf9d8 (a 
 org.apache.jackrabbit.core.state.NodeState)
   at org.apache.jackrabbit.core.state.ItemState.pull(ItemState.java:152)
   - locked 0x7ffec8ddf9d8 (a 
 org.apache.jackrabbit.core.state.NodeState)
   at 
 org.apache.jackrabbit.core.state.SessionItemStateManager.stateCreated(SessionItemStateManager.java:791)
   at 
 org.apache.jackrabbit.core.state.StateChangeDispatcher.notifyStateCreated(StateChangeDispatcher.java:94)
   at 
 org.apache.jackrabbit.core.state.LocalItemStateManager.stateCreated(LocalItemStateManager.java:428)
   at 
 org.apache.jackrabbit.core.state.StateChangeDispatcher.notifyStateCreated(StateChangeDispatcher.java:94)
   at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.stateCreated(SharedItemStateManager.java:398)
   at 
 org.apache.jackrabbit.core.state.ItemState.notifyStateCreated(ItemState.java:231)
   at 
 org.apache.jackrabbit.core.state.ChangeLog.persisted(ChangeLog.java:309)
   at 
 org.apache.jackrabbit.core.state.SharedItemStateManager$Update.end(SharedItemStateManager.java:777)
   at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.update(SharedItemStateManager.java:1488)
   at 
 org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:351)
   at 
 org.apache.jackrabbit.core.state.XAItemStateManager.update(XAItemStateManager.java:354)
   at 
 org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:326)
   at 
 org.apache.jackrabbit.core.state.SessionItemStateManager.update(SessionItemStateManager.java:289)
   at 
 org.apache.jackrabbit.core.ItemSaveOperation.perform(ItemSaveOperation.java:258)
   at 
 org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:200)
   at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:91)
   at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:329)
 Thread B:
   at org.apache.jackrabbit.core.state.NodeState.copy(NodeState.java:119)
   - waiting to lock 0x7ffec8ddf9d8 (a 
 org.apache.jackrabbit.core.state.NodeState)
   - locked 0x7ffec8ddfa18 (a 
 org.apache.jackrabbit.core.state.NodeState)
   at org.apache.jackrabbit.core.NodeImpl.makePersistent(NodeImpl.java:869)
   - locked 0x7ffec8ddfa18 (a 
 org.apache.jackrabbit.core.state.NodeState)
   at 
 org.apache.jackrabbit.core.ItemSaveOperation.persistTransientItems(ItemSaveOperation.java:836)
   at 
 org.apache.jackrabbit.core.ItemSaveOperation.perform(ItemSaveOperation.java:243)
   at 
 org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:200)
   at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:91)
   at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:329)
   at 
 org.apache.jackrabbit.core.session.SessionSaveOperation.perform(SessionSaveOperation.java:42)
   at 
 org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:200)
   at org.apache.jackrabbit.core.SessionImpl.perform(SessionImpl.java:355)
   at org.apache.jackrabbit.core.SessionImpl.save(SessionImpl.java:758)
 IIUC this can only occur when two sessions are concurrently importing a node 
 with the same UUID.
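The two thread dumps above show a lock-ordering inversion: each thread holds one NodeState monitor while waiting for the other's. As a hypothetical illustration (plain Java, not Jackrabbit code), the standard remedy of acquiring both monitors in one canonical order could look like this:

```java
// Hypothetical illustration of a remedy for the A/B pattern above:
// always lock the two state monitors in a canonical order, so no thread
// can hold one monitor while waiting for the other in reverse order.
public class LockOrdering {

    /** Runs the action while holding both monitors, acquired in a fixed order. */
    static void copyOrdered(Object dst, Object src, Runnable copyAction) {
        Object first = dst, second = src;
        // Order by identity hash so every thread locks the same pair in the
        // same sequence (a tie would need an extra global tie-breaker lock).
        if (System.identityHashCode(first) > System.identityHashCode(second)) {
            Object tmp = first; first = second; second = tmp;
        }
        synchronized (first) {
            synchronized (second) {
                copyAction.run();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final Object x = new Object();
        final Object y = new Object();
        final int[] copies = {0};
        Runnable action = () -> copies[0]++;  // guarded by both monitors
        // Two threads "copy" in opposite directions, exactly the situation of
        // threads A and B in the dumps; with ordering, no deadlock occurs.
        Thread a = new Thread(() -> { for (int i = 0; i < 1000; i++) copyOrdered(x, y, action); });
        Thread b = new Thread(() -> { for (int i = 0; i < 1000; i++) copyOrdered(y, x, action); });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("copies=" + copies[0]);  // prints copies=2000
    }
}
```

This only illustrates the hazard; the fix actually discussed in the issue follows the JCR-2650 approach of postponing the work instead.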



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Jackrabbit Filevault 3.1.16

2015-02-23 Thread Dominique Pfister
+1

Cheers
Dominique

From: Tobias Bocanegra tri...@apache.org
Sent: Thursday, February 19, 2015 4:55:40 PM
To: dev@jackrabbit.apache.org
Subject: [VOTE] Release Apache Jackrabbit Filevault 3.1.16

A candidate for the Jackrabbit Filevault 3.1.16 release is available at:

https://dist.apache.org/repos/dist/dev/jackrabbit/filevault/3.1.16/

The release candidate is a zip archive of the sources in:

https://svn.apache.org/repos/asf/jackrabbit/commons/filevault/tags/jackrabbit-filevault-3.1.16/

The SHA1 checksum of the archive is 0b46acd8c963aedf641401f32efe093b71c927b6.

A staged Maven repository is available for review at:

https://repository.apache.org/content/repositories/orgapachejackrabbit-1053/

Please vote on releasing this package as Apache Jackrabbit Filevault 3.1.16.
The vote is open for the next 72 hours and passes if a majority of at
least three +1 Jackrabbit PMC votes are cast.

[ ] +1 Release this package as Apache Jackrabbit Filevault 3.1.16
[ ] -1 Do not release this package because...

Thanks.
Regards, Toby


RE: Multiple oak repositories using an OSGI whiteboard

2014-07-16 Thread Dominique Pfister
Hi,

Thanks everybody for their responses.

Yes, there is some special code looking up the second repository, and I thought 
about registering the second repository with a separate interface, so it can 
still be bound. My concern, however, is what happens with all the inner 
components that are not directly coupled to the repository but rather 
registered in a whiteboard, yet still contribute to its correct functioning.

To be more precise, let's say I'd like to create 1 oak repository of type Tar 
(T) and 1 of type Mongo (M): I give T an OSGI whiteboard, while M gets a 
default implementation that doesn't register its services in OSGI. Let's 
further say that the construction code of T @References an oak OSGI component 
(e.g. SecurityProvider), while M constructs one programmatically (e.g. an 
instance of type SecurityProviderImpl). Would this cause interference, e.g. 
could the one explicitly built for M also be implicitly used for T?

Thanks
Dominique

From: Carsten Ziegeler cziege...@apache.org
Sent: Wednesday, July 16, 2014 3:50 PM
To: oak-dev@jackrabbit.apache.org
Subject: Re: Multiple oak repositories using an OSGI whiteboard

As David points out, subsystems might help, or writing your own service
registry hooks.
If you're not using one of those, you have a flat/shared service registry
and usually services using a repository service just pick up one from the
service registry and that's the one with the highest ranking at the point
of asking the registry. Therefore repository services which are registered
after this point in time are not even considered and there is no rebinding
(unless e.g. DS is used with a greedy reference). As you don't want to
rewrite all the code using a repository service, namespacing is the only
option.
However, the question is, what do you do with this second repository
service? Is there special code just looking up the second repository? Maybe
in that case, the simpler option could be to register the repository with a
different service interface - something like a marker interface
SecondaryRepository etc. (maybe with a better name)
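The marker-interface idea can be illustrated with a small, framework-free sketch; the names are hypothetical and a toy map stands in for the OSGi service registry:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, framework-free sketch of the marker-interface idea above:
// registering the second repository only under a distinct interface keeps
// generic lookups for the primary type from ever returning it.
public class MarkerInterfaceSketch {
    interface Repository {}
    interface SecondaryRepository extends Repository {}

    // Toy stand-in for the OSGi service registry: one service per interface name.
    static Map<String, Object> registry = new HashMap<>();

    public static void main(String[] args) {
        Repository primary = new Repository() {};
        SecondaryRepository secondary = new SecondaryRepository() {};
        registry.put(Repository.class.getName(), primary);
        // The second repository is registered ONLY under the marker interface.
        registry.put(SecondaryRepository.class.getName(), secondary);

        // Generic consumers asking for Repository get the primary instance...
        System.out.println(registry.get(Repository.class.getName()) == primary);
        // ...and only special code asking for SecondaryRepository sees the second one.
        System.out.println(registry.get(SecondaryRepository.class.getName()) == secondary);
    }
}
```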

Regards
Carsten


2014-07-16 3:08 GMT-07:00 David Bosschaert david.bosscha...@gmail.com:

 Hi Dominique,

 You could look into OSGi Application Subsystems (OSGi Enterprise Spec
 5 chapter 134). Application subsystems provide separate namespaces and
 by default don't share out services. Other subsystem types include
 Feature subsystems (where everything is shared) and Composite
 subsystems where you define explicitly what is shared and what is not.

 I wrote a blog a while ago on how to get Apache Aries subsystems
 running on Apache Felix, which might be useful:

 http://coderthoughts.blogspot.com/2014/01/osgi-subsytems-on-apache-felix.html

 Alternatively you can create your own service namespaces by using OSGi
 Service Registry hooks, but these are a bit more low-level than
 subsystems...

 Best regards,

 David

 On 15 July 2014 21:23, Dominique Pfister dpfis...@adobe.com wrote:
  Hi,
 
 
  I'd like to setup two distinct Oak repositories in the same VM, each
 containing an OSGI whiteboard.
 
 
  Looking at the components inside oak-core that announce their
 availability using this whiteboard and the way the registration is
 implemented in the OSGI whiteboard, I was wondering whether the above setup is
 possible without causing a clash in the OSGI service registry. If so, what
 would be the easiest way to create separate namespaces where every
 component is automatically associated with its designated whiteboard?
 
 
  Kind regards
 
  Dominique




--
Carsten Ziegeler
Adobe Research Switzerland
cziege...@apache.org

Multiple oak repositories using an OSGI whiteboard

2014-07-15 Thread Dominique Pfister
Hi,


I'd like to setup two distinct Oak repositories in the same VM, each containing 
an OSGI whiteboard.


Looking at the components inside oak-core that announce their availability 
using this whiteboard and the way the registration is implemented in the OSGI 
whiteboard, I was wondering whether the above setup is possible without causing a 
clash in the OSGI service registry. If so, what would be the easiest way to 
create separate namespaces where every component is automatically associated 
with its designated whiteboard?


Kind regards

Dominique


[jira] [Resolved] (JCR-3368) CachingHierarchyManager: inconsistent state after transient changes on root node

2014-05-21 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister resolved JCR-3368.


   Resolution: Fixed
Fix Version/s: 2.8

Fixed in revision 1596550.

 CachingHierarchyManager: inconsistent state after transient changes on root 
 node 
 -

 Key: JCR-3368
 URL: https://issues.apache.org/jira/browse/JCR-3368
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.2.12, 2.4.2, 2.5
Reporter: Unico Hommes
 Fix For: 2.8

 Attachments: HasNodeAfterRemoveTest.java


 See attached test case.
 You will see the following exception:
 javax.jcr.RepositoryException: failed to retrieve state of intermediary node
 at 
 org.apache.jackrabbit.core.CachingHierarchyManager.resolvePath(CachingHierarchyManager.java:156)
 at 
 org.apache.jackrabbit.core.HierarchyManagerImpl.resolveNodePath(HierarchyManagerImpl.java:372)
 at org.apache.jackrabbit.core.NodeImpl.getNodeId(NodeImpl.java:276)
 at 
 org.apache.jackrabbit.core.NodeImpl.resolveRelativeNodePath(NodeImpl.java:223)
 at org.apache.jackrabbit.core.NodeImpl.hasNode(NodeImpl.java:2250)
 at 
 org.apache.jackrabbit.core.HasNodeAfterRemoveTest.testRemove(HasNodeAfterRemoveTest.java:14)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at junit.framework.TestCase.runTest(TestCase.java:168)
 at junit.framework.TestCase.runBare(TestCase.java:134)
 at junit.framework.TestResult$1.protect(TestResult.java:110)
 at junit.framework.TestResult.runProtected(TestResult.java:128)
 at junit.framework.TestResult.run(TestResult.java:113)
 at junit.framework.TestCase.run(TestCase.java:124)
 at 
 org.apache.jackrabbit.test.AbstractJCRTest.run(AbstractJCRTest.java:456)
 at junit.framework.TestSuite.runTest(TestSuite.java:243)
 at junit.framework.TestSuite.run(TestSuite.java:238)
 at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
 at 
 org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:62)
 at 
 org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:140)
 at 
 org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:127)
 at org.apache.maven.surefire.Surefire.run(Surefire.java:177)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at 
 org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:345)
 at 
 org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1009)
 Caused by: org.apache.jackrabbit.core.state.NoSuchItemStateException: 
 c7ccbcd3-0524-4d4d-a109-eae84627f94e
 at 
 org.apache.jackrabbit.core.state.SessionItemStateManager.getTransientItemState(SessionItemStateManager.java:304)
 at 
 org.apache.jackrabbit.core.state.SessionItemStateManager.getItemState(SessionItemStateManager.java:153)
 at 
 org.apache.jackrabbit.core.HierarchyManagerImpl.getItemState(HierarchyManagerImpl.java:152)
 at 
 org.apache.jackrabbit.core.HierarchyManagerImpl.resolvePath(HierarchyManagerImpl.java:115)
 at 
 org.apache.jackrabbit.core.CachingHierarchyManager.resolvePath(CachingHierarchyManager.java:152)
 ... 29 more
 I tried several things to fix this but didn't find a better solution than to 
 just wrap the statement
 NodeId id = resolveRelativeNodePath(relPath);
 in a try/catch for RepositoryException and return false when that exception 
 occurs.
 In particular, I tried changing the implementation to
 Path path = resolveRelativePath(relPath).getNormalizedPath();
 return itemMgr.nodeExists(path);
 However, the repository doesn't even start up with that. Apparently the code 
 relies on the null check for id as well.
 Anyone have a better solution?
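The workaround described above (returning false from hasNode() when path resolution throws) can be sketched as follows; the Resolver interface and all names here are hypothetical stand-ins for the hierarchy-manager call, not actual Jackrabbit API:

```java
// Hypothetical, self-contained sketch of the workaround described above:
// hasNode() swallows the RepositoryException thrown while resolving a path
// against a stale hierarchy cache and reports the node as absent.
public class HasNodeSketch {

    static class RepositoryException extends Exception {}

    /** Stand-in for CachingHierarchyManager path resolution. */
    interface Resolver {
        String resolve(String relPath) throws RepositoryException;
    }

    static boolean hasNode(Resolver resolver, String relPath) {
        try {
            String id = resolver.resolve(relPath);
            return id != null;
        } catch (RepositoryException e) {
            // "failed to retrieve state of intermediary node": treat as absent.
            return false;
        }
    }

    public static void main(String[] args) {
        Resolver stale = relPath -> { throw new RepositoryException(); };
        Resolver ok = relPath -> "some-node-id";
        System.out.println(hasNode(stale, "test"));  // exception swallowed
        System.out.println(hasNode(ok, "test"));
    }
}
```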



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (JCR-3779) Node.getPath() returns inconsistent values depending on whether node is saved or not

2014-05-21 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister resolved JCR-3779.


   Resolution: Fixed
Fix Version/s: 2.8

Fixed in revision 1596550.

 Node.getPath() returns inconsistent values depending on whether node is saved 
 or not
 

 Key: JCR-3779
 URL: https://issues.apache.org/jira/browse/JCR-3779
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.6.4
Reporter: Jan Haderka
 Fix For: 2.8


 Consider following code:
 {code}
 session.getRootNode().addNode("foo");
 session.save();
 Node fooNode = session.getNode("/foo");
 assertEquals("/foo", fooNode.getPath());
 session.move("/foo", "/bar");
 Node barNode = session.getNode("/bar");
 assertEquals("/bar", barNode.getPath()); <== this line actually fails, 
 because barNode.getPath() still returns "/foo"
 {code}
 From a repo point of view, the move didn't happen, as it was not persisted yet. 
 But the example above runs in a single session, and in that session the move 
 did happen, so the "local" view should be consistent.
 Now, aside from the weirdness of the above code, there is also a consistency 
 problem: if I remove the save() call and run the code shown below, it will 
 actually pass, so getPath() after a move behaves differently depending on 
 whether the state BEFORE the move was persisted in the repo.
 {code}
 session.getRootNode().addNode("foo");
 Node fooNode = session.getNode("/foo");
 assertEquals("/foo", fooNode.getPath());
 session.move("/foo", "/bar");
 Node barNode = session.getNode("/bar");
 assertEquals("/bar", barNode.getPath());
 {code}
 As per a comment from Stefan Guggisberg, this happens only at the root node 
 level and is most likely caused by a bug in {{CachingHierarchyManager}}, 
 related to JCR-3239 and JCR-3368.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (JCR-3239) Removal of a top level node doesn't update the hierarchy manager's cache.

2014-05-21 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14004613#comment-14004613
 ] 

Dominique Pfister commented on JCR-3239:


Philipp, I'm unable to reproduce this on latest jackrabbit trunk (2.8), could 
you provide a test case or explain how you created the stream you reimport with 
{{getImportXML}}?

 Removal of a top level node doesn't update the hierarchy manager's cache.
 -

 Key: JCR-3239
 URL: https://issues.apache.org/jira/browse/JCR-3239
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.2.11, 2.4
Reporter: Philipp Bärfuss

 *Problem*
 Scenario in which I encounter the problem:
 - given a node 'test' under root (/test)
 - re-imported the node after a deletion (all in-session operations) 
 {code}
 session.removeItem("/test")
 session.getImportXML("/", stream, 
 ImportUUIDBehavior.IMPORT_UUID_COLLISION_THROW)
 {code}
 Result: throws an ItemExistsException
 If the same operations are executed deeper in the hierarchy (for instance 
 /foo/bar) then the code works perfectly fine.
 *Findings*
 - session.removeItem informs the hierarchy manager (via listener)
 -- CachingHierarchyManager.nodeRemoved(NodeState, Name, int, NodeId)
 - but the root node (passed as state) is not in the cache and hence the entry 
 of the top level node is not removed
 -- CachingHierarchyManager: 458 
 - while trying to import, the method SessionImporter.startNode(NodeInfo, 
 List<PropInfo>) calls session.getHierarchyManager().getName(id) (line 400)
 - the stale information causes a uuid collision (the code expects an 
 exception if the node doesn't exist, but in this case it returns the name of 
 the formerly removed node)
 Note: session.itemExists() and session.getNode() work as expected (the former 
 returns false, the latter throws an ItemNotFoundException)
 Note: I know that a different import behavior (replace existing) would solve 
 the issue, but I can't be 100% sure that the UUIDs match, so I favor collision 
 throw in my case.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (JCR-3743) failing test if aws extensions

2014-03-18 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13939028#comment-13939028
 ] 

Dominique Pfister commented on JCR-3743:


Hm, I'm unable to reproduce these failures in {{TestInMemDS}} and 
{{TestInMemDsCacheOff}} on a W7 Enterprise/64bit/i7 VM. 

[~reschke], there have been quite a few changes recently, and those tests moved 
to jackrabbit-data: are you still able to reproduce these failures?

 failing test if aws extensions
 --

 Key: JCR-3743
 URL: https://issues.apache.org/jira/browse/JCR-3743
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Reporter: Julian Reschke
Assignee: Dominique Pfister
Priority: Minor

 On Win7/64bit/corei7:
 Failed tests:
   testDeleteAllOlderThan(org.apache.jackrabbit.aws.ext.ds.TestInMemDs)
   testDeleteAllOlderThan(org.apache.jackrabbit.aws.ext.ds.TestInMemDsCacheOff)
 Likely because of incorrect assumptions about System.currentTimeMillis()



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (JCR-3743) failing test if aws extensions

2014-03-18 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13939210#comment-13939210
 ] 

Dominique Pfister commented on JCR-3743:


bq. You'll have to check the code for any assumptions of 
System.currentTimeMillis having a better granularity than 20ms. For instance, 
the assert on getLastModified being > updateTime looks fishy.

The test fails since {{rec1}}'s last modified time hasn't been updated, 
although the record was accessed some lines _after_ 
updateModifiedDateOnAccess() was called:
{code}
Thread.sleep(2000);
long updateTime = System.currentTimeMillis();
ds.updateModifiedDateOnAccess(updateTime);

data = new byte[dataLength];
random.nextBytes(data);
DataRecord rec3 = ds.addRecord(new ByteArrayInputStream(data));

data = new byte[dataLength];
random.nextBytes(data);
DataRecord rec4 = ds.addRecord(new ByteArrayInputStream(data));

rec1 = ds.getRecord(rec1.getIdentifier());
{code}

So your reasoning implies that adding the two DataRecords {{rec3}} and {{rec4}} 
takes less than 20 ms? If that were the case, I'd split the 2000 ms sleep into 
two sleeps of 1000 ms each, and put the second one after obtaining the 
{{updateTime}} above.
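As a sanity check of the timing argument, a stand-alone snippet (hypothetical, not the actual test) shows that any access happening a full sleep after updateTime was taken must observe a strictly larger clock value, even with a coarse ~20 ms granularity:

```java
// Hypothetical stand-alone check: a timestamp taken after sleeping well past
// updateTime is strictly greater, regardless of coarse clock granularity.
public class SleepMargin {
    public static void main(String[] args) throws InterruptedException {
        long updateTime = System.currentTimeMillis();
        Thread.sleep(1000); // margin far above any ~20 ms clock granularity
        long lastModified = System.currentTimeMillis(); // stands in for rec1's touch time
        System.out.println(lastModified > updateTime);  // prints true
    }
}
```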

 failing test if aws extensions
 --

 Key: JCR-3743
 URL: https://issues.apache.org/jira/browse/JCR-3743
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Reporter: Julian Reschke
Assignee: Dominique Pfister
Priority: Minor

 On Win7/64bit/corei7:
 Failed tests:
   testDeleteAllOlderThan(org.apache.jackrabbit.aws.ext.ds.TestInMemDs)
   testDeleteAllOlderThan(org.apache.jackrabbit.aws.ext.ds.TestInMemDsCacheOff)
 Likely because of incorrect assumptions about System.currentTimeMillis()



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (JCR-3743) failing test if aws extensions

2014-03-18 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13939210#comment-13939210
 ] 

Dominique Pfister edited comment on JCR-3743 at 3/18/14 2:01 PM:
-

bq. You'll have to check the code for any assumptions of 
System.currentTimeMillis having a better granularity than 20ms. For instance, 
the assert on getLastModified being > updateTime looks fishy.

The test fails since {{rec1}}'s last modified time hasn't effectively changed, 
although the record was accessed and touched some lines _after_ 
updateModifiedDateOnAccess() was called:
{code}
Thread.sleep(2000);
long updateTime = System.currentTimeMillis();
ds.updateModifiedDateOnAccess(updateTime);

data = new byte[dataLength];
random.nextBytes(data);
DataRecord rec3 = ds.addRecord(new ByteArrayInputStream(data));

data = new byte[dataLength];
random.nextBytes(data);
DataRecord rec4 = ds.addRecord(new ByteArrayInputStream(data));

rec1 = ds.getRecord(rec1.getIdentifier());
{code}

So your reasoning implies that adding the two DataRecords {{rec3}} and {{rec4}} 
takes less than 20 ms? If that were the case, I'd split the 2000 ms sleep into 
two sleeps of 1000 ms each, and put the second one after obtaining the 
{{updateTime}} above.


was (Author: dpfister):
bq. You'll have to check the code for any assumptions of 
System.currentTimeMillis having a better granularity than 20ms. For instance, 
the assert on getLastModified being > updateTime looks fishy.

The test fails since {{rec1}}'s last modified time hasn't been updated, 
although the record was accessed some lines _after_ 
updateModifiedDateOnAccess() was called:
{code}
Thread.sleep(2000);
long updateTime = System.currentTimeMillis();
ds.updateModifiedDateOnAccess(updateTime);

data = new byte[dataLength];
random.nextBytes(data);
DataRecord rec3 = ds.addRecord(new ByteArrayInputStream(data));

data = new byte[dataLength];
random.nextBytes(data);
DataRecord rec4 = ds.addRecord(new ByteArrayInputStream(data));

rec1 = ds.getRecord(rec1.getIdentifier());
{code}

So your reasoning implies that adding the 2 DataRecords {{rec3}} and {{rec4}} 
takes less than 20ms? If this would be the case, I'd split the sleep of 2000 ms 
into two sleeps of 1000 ms each, and have the second one put after obtaining 
the {{updateTime}} above.

 failing test if aws extensions
 --

 Key: JCR-3743
 URL: https://issues.apache.org/jira/browse/JCR-3743
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Reporter: Julian Reschke
Assignee: Dominique Pfister
Priority: Minor

 On Win7/64bit/corei7:
 Failed tests:
   testDeleteAllOlderThan(org.apache.jackrabbit.aws.ext.ds.TestInMemDs)
   testDeleteAllOlderThan(org.apache.jackrabbit.aws.ext.ds.TestInMemDsCacheOff)
 Likely because of incorrect assumptions about System.currentTimeMillis()



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (JCR-3743) failing test if aws extensions

2014-03-17 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister reassigned JCR-3743:
--

Assignee: Dominique Pfister

Yeap, sure.

 failing test if aws extensions
 --

 Key: JCR-3743
 URL: https://issues.apache.org/jira/browse/JCR-3743
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Reporter: Julian Reschke
Assignee: Dominique Pfister
Priority: Minor

 On Win7/64bit/corei7:
 Failed tests:
   testDeleteAllOlderThan(org.apache.jackrabbit.aws.ext.ds.TestInMemDs)
   testDeleteAllOlderThan(org.apache.jackrabbit.aws.ext.ds.TestInMemDsCacheOff)
 Likely because of incorrect assumptions about System.currentTimeMillis()



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (JCR-3729) S3 Datastore optimizations

2014-03-13 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13933074#comment-13933074
 ] 

Dominique Pfister commented on JCR-3729:


Thanks, [~shgupta], this looks much better now. One last question: apparently, 
the key format in S3Backend has changed, which requires a renaming task to 
convert old keys. What was the reason for changing the key format? 

 S3 Datastore optimizations
 --

 Key: JCR-3729
 URL: https://issues.apache.org/jira/browse/JCR-3729
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
 Fix For: 2.7.5

 Attachments: JCR-3729.patch, JCR-3729_V1.patch, JCR-3729_V2.patch


 The following optimizations can be done on the S3 Datastore, based on 
 customer/S3 engineers' feedback.
 *  Use object keys to create partitions in S3 automatically.
 *  Multi-threaded migration of binary files from FileSystem to S3 datastore
 *  Externalize S3 endpoints.
 *  Asynchronous upload file to S3
 *  Slow Startup Of Instance



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (JCR-3729) S3 Datastore optimizations

2014-03-13 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13933155#comment-13933155
 ] 

Dominique Pfister edited comment on JCR-3729 at 3/13/14 12:15 PM:
--

Patch revised and committed in revision 1577127 to trunk. Added 
{{src/test/resources/log4j.properties}} to jackrabbit-data for test output to 
be logged to a file and suppress warnings about uninitialized logger.


was (Author: dpfister):
Patch revised and committed in revision 1577127. Added 
{{src/test/resources/log4j.properties}} to jackrabbit-data for test output to 
be logged to a file and suppress warnings about uninitialized logger.

 S3 Datastore optimizations
 --

 Key: JCR-3729
 URL: https://issues.apache.org/jira/browse/JCR-3729
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
 Fix For: 2.7.5

 Attachments: JCR-3729.patch, JCR-3729_V1.patch, JCR-3729_V2.patch


 The following optimizations can be done on the S3 Datastore, based on 
 customer/S3 engineers' feedback.
 *  Use object keys to create partitions in S3 automatically.
 *  Multi-threaded migration of binary files from FileSystem to S3 datastore
 *  Externalize S3 endpoints.
 *  Asynchronous upload file to S3
 *  Slow Startup Of Instance



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (JCR-3729) S3 Datastore optimizations

2014-03-13 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13933155#comment-13933155
 ] 

Dominique Pfister commented on JCR-3729:


Patch revised and committed in revision 1577127. Added 
{{src/test/resources/log4j.properties}} to jackrabbit-data for test output to 
be logged to a file and suppress warnings about uninitialized logger.

 S3 Datastore optimizations
 --

 Key: JCR-3729
 URL: https://issues.apache.org/jira/browse/JCR-3729
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
 Fix For: 2.7.5

 Attachments: JCR-3729.patch, JCR-3729_V1.patch, JCR-3729_V2.patch


 The following optimizations can be done on the S3 Datastore, based on 
 customer/S3 engineers' feedback.
 *  Use object keys to create partitions in S3 automatically.
 *  Multi-threaded migration of binary files from FileSystem to S3 datastore
 *  Externalize S3 endpoints.
 *  Asynchronous upload file to S3
 *  Slow Startup Of Instance



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (JCR-3729) S3 Datastore optimizations

2014-03-13 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-3729:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 S3 Datastore optimizations
 --

 Key: JCR-3729
 URL: https://issues.apache.org/jira/browse/JCR-3729
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
 Fix For: 2.7.5

 Attachments: JCR-3729.patch, JCR-3729_V1.patch, JCR-3729_V2.patch


 The following optimizations can be done on the S3 Datastore, based on 
 customer/S3 engineers' feedback.
 *  Use object keys to create partitions in S3 automatically.
 *  Multi-threaded migration of binary files from FileSystem to S3 datastore
 *  Externalize S3 endpoints.
 *  Asynchronous upload file to S3
 *  Slow Startup Of Instance



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (JCR-3729) S3 Datastore optimizations

2014-03-13 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister reassigned JCR-3729:
--

Assignee: Dominique Pfister

 S3 Datastore optimizations
 --

 Key: JCR-3729
 URL: https://issues.apache.org/jira/browse/JCR-3729
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
Assignee: Dominique Pfister
 Fix For: 2.7.5

 Attachments: JCR-3729.patch, JCR-3729_V1.patch, JCR-3729_V2.patch


 The following optimizations can be done on the S3 Datastore, based on 
 customer/S3 engineers' feedback.
 *  Use object keys to create partitions in S3 automatically.
 *  Multi-threaded migration of binary files from FileSystem to S3 datastore
 *  Externalize S3 endpoints.
 *  Asynchronous upload file to S3
 *  Slow Startup Of Instance



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Fix version 2.7.5, trunk on 2.8-SNAPSHOT

2014-03-13 Thread Dominique Pfister
Hi,

I'm working on JCR-3729, which has fix version 2.7.5, and committed the changes
to trunk. Now I'm a bit confused: there is only a branch for 2.6, and trunk is
on 2.8-SNAPSHOT, so is there some other place to apply the patch so that it'll
be included in 2.7.5?

Thanks
Dominique


[jira] [Assigned] (JCR-3730) Use object keys to create partitions in S3 automatically

2014-03-13 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister reassigned JCR-3730:
--

Assignee: Dominique Pfister

 Use object keys to create partitions in S3 automatically
 

 Key: JCR-3730
 URL: https://issues.apache.org/jira/browse/JCR-3730
 Project: Jackrabbit Content Repository
  Issue Type: Sub-task
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
Assignee: Dominique Pfister
 Fix For: 2.7.5


 To improve S3 performance, it is recommended to use object keys that spread 
 data across multiple partitions. [1]
 The current key format, dataStore_SHA1_HASH, puts all data in a single 
 partition. It is recommended to remove the dataStore_ prefix and split the 
 SHA1 hash to introduce randomness in the prefix.
 For example, the older key dataStore_004cb70c8f87d78f04da41e7547cb434094089ea 
 would become 004c-b70c8f87d78f04da41e7547cb434094089ea.
 * Also consider the upgrade scenario for migrating data stored under the 
 older key format.
 [1] 
 http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
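The proposed key transformation is simple enough to sketch. The method below is a hypothetical illustration (not taken from the patch) of stripping the dataStore_ prefix and inserting a dash after the first four hex characters; the class and method names are assumptions:

```java
// Hypothetical sketch of the proposed key migration: strip the "dataStore_"
// prefix and insert a dash after the first four hex characters so that S3
// distributes objects across more partitions. Names are illustrative only.
public class S3KeyFormat {
    private static final String OLD_PREFIX = "dataStore_";

    /** Converts an old-format key to the proposed partition-friendly format. */
    static String toNewFormat(String oldKey) {
        // tolerate keys that already lack the prefix (e.g. after migration)
        String hash = oldKey.startsWith(OLD_PREFIX)
                ? oldKey.substring(OLD_PREFIX.length())
                : oldKey;
        return hash.substring(0, 4) + "-" + hash.substring(4);
    }

    public static void main(String[] args) {
        System.out.println(toNewFormat(
                "dataStore_004cb70c8f87d78f04da41e7547cb434094089ea"));
    }
}
```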





[jira] [Updated] (JCR-3730) Use object keys to create partitions in S3 automatically

2014-03-13 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-3730:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch revised and committed in revision 1577127 to trunk.

 Use object keys to create partitions in S3 automatically
 

 Key: JCR-3730
 URL: https://issues.apache.org/jira/browse/JCR-3730
 Project: Jackrabbit Content Repository
  Issue Type: Sub-task
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
Assignee: Dominique Pfister
 Fix For: 2.7.5


 To improve S3 performance, it is recommended to use object keys that spread 
 data across multiple partitions. [1]
 The current key format, dataStore_SHA1_HASH, puts all data in a single 
 partition. It is recommended to remove the dataStore_ prefix and split the 
 SHA1 hash to introduce randomness in the prefix.
 For example, the older key dataStore_004cb70c8f87d78f04da41e7547cb434094089ea 
 would become 004c-b70c8f87d78f04da41e7547cb434094089ea.
 * Also consider the upgrade scenario for migrating data stored under the 
 older key format.
 [1] 
 http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html





[jira] [Assigned] (JCR-3731) Multi-threaded migration of binary files from FileSystem to S3 datastore

2014-03-13 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister reassigned JCR-3731:
--

Assignee: Dominique Pfister

 Multi-threaded migration of binary files from FileSystem to S3 datastore 
 -

 Key: JCR-3731
 URL: https://issues.apache.org/jira/browse/JCR-3731
 Project: Jackrabbit Content Repository
  Issue Type: Sub-task
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
Assignee: Dominique Pfister
 Fix For: 2.7.5

 Attachments: JCR-3651.patch


 As of today, when we switch a repository from FileDataStore to S3DataStore, 
 all binary files are migrated from the local file system to the S3 datastore. 
 This process is currently single-threaded and takes a lot of time: for 
 example, 1 GB of initial content takes around 5 minutes to migrate from an 
 EC2 instance to S3.
 It can be made faster by migrating content with multiple threads.
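The multi-threaded migration idea can be sketched with a fixed-size thread pool; this is an illustrative outline only (not the actual Jackrabbit patch), and uploadToS3 is a stand-in for the real backend upload call:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: migrate files concurrently with a bounded thread pool
// instead of one at a time.
public class ParallelMigration {
    static final AtomicInteger uploaded = new AtomicInteger();

    static void migrate(List<String> files, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (String file : files) {
            pool.submit(() -> uploadToS3(file)); // uploads run concurrently
        }
        pool.shutdown();                          // accept no new tasks
        pool.awaitTermination(1, TimeUnit.HOURS); // wait for all uploads to finish
    }

    static void uploadToS3(String file) {
        uploaded.incrementAndGet(); // placeholder for the actual S3 upload
    }
}
```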





[jira] [Updated] (JCR-3731) Multi-threaded migration of binary files from FileSystem to S3 datastore

2014-03-13 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-3731:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch revised and committed in revision 1577127 to trunk.

 Multi-threaded migration of binary files from FileSystem to S3 datastore 
 -

 Key: JCR-3731
 URL: https://issues.apache.org/jira/browse/JCR-3731
 Project: Jackrabbit Content Repository
  Issue Type: Sub-task
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
Assignee: Dominique Pfister
 Fix For: 2.7.5

 Attachments: JCR-3651.patch


 As of today, when we switch a repository from FileDataStore to S3DataStore, 
 all binary files are migrated from the local file system to the S3 datastore. 
 This process is currently single-threaded and takes a lot of time: for 
 example, 1 GB of initial content takes around 5 minutes to migrate from an 
 EC2 instance to S3.
 It can be made faster by migrating content with multiple threads.





[jira] [Updated] (JCR-3732) Externalize S3 endpoints

2014-03-13 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-3732:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch revised and committed in revision 1577127 to trunk.

 Externalize S3 endpoints
 

 Key: JCR-3732
 URL: https://issues.apache.org/jira/browse/JCR-3732
 Project: Jackrabbit Content Repository
  Issue Type: Sub-task
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
Assignee: Dominique Pfister
 Fix For: 2.7.5

 Attachments: JCR-3732.patch


 Currently the S3 connector uses the S3 region (already externalized via 
 aws.properties) to configure the S3 endpoint as per the mapping [1].
 There are cases where a single S3 region contains multiple endpoints; for 
 example, the US Standard region has two endpoints:
 s3.amazonaws.com (Northern Virginia or Pacific Northwest)
 s3-external-1.amazonaws.com (Northern Virginia only)
 To handle these cases, the S3 connector should externalize the endpoint 
 configuration as an optional setting that takes precedence over the endpoint 
 derived from the S3 region.
 [1]
 http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
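The proposed precedence rule (explicit endpoint wins over region-derived endpoint) can be sketched as below. The property keys and the fallback mapping are illustrative assumptions, not the connector's actual configuration names:

```java
import java.util.Properties;

// Sketch of the precedence rule: an explicitly configured endpoint (optional)
// overrides the endpoint derived from the S3 region. Property names assumed.
public class S3EndpointResolver {
    static String resolveEndpoint(Properties props) {
        String explicit = props.getProperty("s3EndPoint");
        if (explicit != null && !explicit.isEmpty()) {
            return explicit; // externalized endpoint takes precedence
        }
        // otherwise fall back to a region-to-endpoint mapping (simplified here)
        String region = props.getProperty("s3Region", "us-standard");
        return "us-standard".equals(region)
                ? "s3.amazonaws.com"
                : "s3-" + region + ".amazonaws.com";
    }
}
```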





[jira] [Assigned] (JCR-3734) Slow local cache built-up time

2014-03-13 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister reassigned JCR-3734:
--

Assignee: Dominique Pfister

 Slow local cache built-up time
 --

 Key: JCR-3734
 URL: https://issues.apache.org/jira/browse/JCR-3734
 Project: Jackrabbit Content Repository
  Issue Type: Sub-task
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
Assignee: Dominique Pfister
 Fix For: 2.7.5


 Currently with the S3 connector, it appears that startup will scan the local 
 datastore on local disk, slowing down the startup process. Attached are the 
 thread dumps, profiling results and logs for further investigation. It would 
 be great if startup could avoid such a scan, or if this could be optimised 
 further.
 Customer's business impact:
 It's going to be a scalability issue as the cache grows. Currently we only 
 have 64 GB or less, which takes 8+ minutes; if we double or quadruple the 
 cache, application startup is going to be far too slow for us. We might be 
 able to take advantage of SSD-attached rather than HDD-attached disks, but we 
 need to evaluate this option further, and we want a better fix than simply 
 throwing faster storage at it.
 Ideally we want the repository to come online as fast as possible and 
 complete any cache size counting activities in the background, or have the 
 repository retain a memory of its cache status from when it was last running, 
 so that at startup it picks up its state and doesn't need to scan the whole 
 cache.
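The reporter's suggestion of retaining the cache state across restarts can be sketched as persisting the computed size to a marker file on shutdown, so the next startup can skip the full directory scan. The marker file name and plain-text format here are assumptions for illustration only:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: persist the cache size on shutdown; read it back at startup.
public class CacheSizeMarker {
    static void writeSize(Path markerFile, long cacheSize) throws IOException {
        Files.write(markerFile,
                Long.toString(cacheSize).getBytes(StandardCharsets.UTF_8));
    }

    /** Returns the persisted size, or -1 if absent (forcing a full scan). */
    static long readSize(Path markerFile) throws IOException {
        if (!Files.exists(markerFile)) {
            return -1L;
        }
        String text = new String(Files.readAllBytes(markerFile),
                StandardCharsets.UTF_8);
        return Long.parseLong(text.trim());
    }
}
```

A real implementation would also need to invalidate the marker if the cache directory changed while the repository was down.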





[jira] [Updated] (JCR-3734) Slow local cache built-up time

2014-03-13 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-3734:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch revised and committed in revision 1577127 to trunk.

 Slow local cache built-up time
 --

 Key: JCR-3734
 URL: https://issues.apache.org/jira/browse/JCR-3734
 Project: Jackrabbit Content Repository
  Issue Type: Sub-task
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
Assignee: Dominique Pfister
 Fix For: 2.7.5


 Currently with the S3 connector, it appears that startup will scan the local 
 datastore on local disk, slowing down the startup process. Attached are the 
 thread dumps, profiling results and logs for further investigation. It would 
 be great if startup could avoid such a scan, or if this could be optimised 
 further.
 Customer's business impact:
 It's going to be a scalability issue as the cache grows. Currently we only 
 have 64 GB or less, which takes 8+ minutes; if we double or quadruple the 
 cache, application startup is going to be far too slow for us. We might be 
 able to take advantage of SSD-attached rather than HDD-attached disks, but we 
 need to evaluate this option further, and we want a better fix than simply 
 throwing faster storage at it.
 Ideally we want the repository to come online as fast as possible and 
 complete any cache size counting activities in the background, or have the 
 repository retain a memory of its cache status from when it was last running, 
 so that at startup it picks up its state and doesn't need to scan the whole 
 cache.





[jira] [Assigned] (JCR-3733) Asynchronous upload file to S3

2014-03-13 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister reassigned JCR-3733:
--

Assignee: Dominique Pfister

 Asynchronous upload file to S3
 --

 Key: JCR-3733
 URL: https://issues.apache.org/jira/browse/JCR-3733
 Project: Jackrabbit Content Repository
  Issue Type: Sub-task
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
Assignee: Dominique Pfister
 Fix For: 2.7.5


 S3DataStore Asynchronous Upload to S3
 The current logic for adding a file record to the S3DataStore is to first add 
 the file to the local cache and then upload it to S3, in a single synchronous 
 step. This feature proposes splitting that into a synchronous add to the 
 local cache and an asynchronous upload of the file to S3. Until the 
 asynchronous upload completes, all data (input stream, length and 
 lastModified) for that file record is fetched from the local cache.
 The AWS SDK provides upload progress listeners with various callbacks on the 
 status of an in-progress upload.
 As of now, the customer reports that the write performance of an EBS-based 
 datastore is 3x better than that of the S3 DataStore. The objective of this 
 feature is to bring the S3 DataStore's write performance to a comparable 
 level.
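The cache-first/upload-later split described above can be sketched as a toy class (all names illustrative, with in-memory maps standing in for the local cache and S3): the record lands in the cache synchronously while the upload runs on a background executor, and reads are served from the cache until the upload completes.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Toy sketch of the async-upload idea, not the actual S3DataStore code.
public class AsyncUploadSketch {
    private final ExecutorService uploader = Executors.newSingleThreadExecutor();
    final ConcurrentMap<String, byte[]> localCache = new ConcurrentHashMap<>();
    final ConcurrentMap<String, byte[]> s3 = new ConcurrentHashMap<>();

    Future<?> addRecord(String id, byte[] data) {
        localCache.put(id, data);                       // synchronous: cache first
        return uploader.submit(() -> s3.put(id, data)); // asynchronous: upload later
    }

    byte[] read(String id) {
        byte[] cached = localCache.get(id); // served from cache until upload is done
        return cached != null ? cached : s3.get(id);
    }

    void close() {
        uploader.shutdown();
    }
}
```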





[jira] [Updated] (JCR-3733) Asynchronous upload file to S3

2014-03-13 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-3733:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch revised and committed in revision 1577127 to trunk.

 Asynchronous upload file to S3
 --

 Key: JCR-3733
 URL: https://issues.apache.org/jira/browse/JCR-3733
 Project: Jackrabbit Content Repository
  Issue Type: Sub-task
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
Assignee: Dominique Pfister
 Fix For: 2.7.5


 S3DataStore Asynchronous Upload to S3
 The current logic for adding a file record to the S3DataStore is to first add 
 the file to the local cache and then upload it to S3, in a single synchronous 
 step. This feature proposes splitting that into a synchronous add to the 
 local cache and an asynchronous upload of the file to S3. Until the 
 asynchronous upload completes, all data (input stream, length and 
 lastModified) for that file record is fetched from the local cache.
 The AWS SDK provides upload progress listeners with various callbacks on the 
 status of an in-progress upload.
 As of now, the customer reports that the write performance of an EBS-based 
 datastore is 3x better than that of the S3 DataStore. The objective of this 
 feature is to bring the S3 DataStore's write performance to a comparable 
 level.





[jira] [Commented] (JCR-3729) S3 Datastore optimizations

2014-03-13 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13933293#comment-13933293
 ] 

Dominique Pfister commented on JCR-3729:


Thanks a lot, Shashank, for providing this extensive patch!

bq. ok. Is there anything i should take care in future?
I guess this error came up because the initial version of the files you changed 
was added without the svn:eol-style property. Anyway, the post in [1] shows a 
good template for these settings.

[1] 
http://article.gmane.org/gmane.comp.apache.jackrabbit.devel/369/match=eol+style

 S3 Datastore optimizations
 --

 Key: JCR-3729
 URL: https://issues.apache.org/jira/browse/JCR-3729
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
Assignee: Dominique Pfister
 Fix For: 2.7.5

 Attachments: JCR-3729.patch, JCR-3729_V1.patch, JCR-3729_V2.patch


 The following optimizations can be made to the S3 Datastore, based on 
 feedback from customers and S3 engineers.
 *  Use object keys to create partitions in S3 automatically.
 *  Multi-threaded migration of binary files from FileSystem to S3 datastore
 *  Externalize S3 endpoints.
 *  Asynchronous upload file to S3
 *  Slow Startup Of Instance





[jira] [Comment Edited] (JCR-3729) S3 Datastore optimizations

2014-03-12 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931567#comment-13931567
 ] 

Dominique Pfister edited comment on JCR-3729 at 3/12/14 9:17 AM:
-

[~shgupta], there are a couple of issues with your patch:

- Apparently, you restructured the files you touched, so e.g. S3Backend looks 
like it completely changed, while instance variables and methods (e.g. write) 
only moved to another location. This makes it very hard to review the changes, 
so please try to change as few lines as possible, and avoid changing existing 
code such as:
{code}
-if (!prevObjectListing.isTruncated()) {
-break;
-}
+if (!prevObjectListing.isTruncated()) break;
{code}
for the latter version doesn't adhere to the coding conventions.
- Then you deleted a couple of production (InMemoryBackend) and test files 
(TestCaseBase, TestInMemDs, TestInMemDsCacheOff), are they no longer required?
- Finally, you're adding a system property in TestAll:
{code}
+System.setProperty(
+TestCaseBase.CONFIG,
+
C:/sourceCodeGit/granite-modules/temp-crx-ext-s3/crx-ext-s3/src/test/resources/aws.properties);
{code}
I doubt this will work in a Non-Windows environment.



was (Author: dpfister):
[~shgupta], there are a couple of issues with your patch:

- Apparently, you restructured the files you touched, so e.g. S3Backend looks 
like it completely changed, while instance variables and methods (e.g. write) 
only moved to another location. This makes it very hard to review the changes, 
so please try to change as few lines as possible, and avoid
changing existing code such as:
{code}
-if (!prevObjectListing.isTruncated()) {
-break;
-}
+if (!prevObjectListing.isTruncated()) break;
{code}
for the latter version doesn't adhere to the coding conventions.
- Then you deleted a couple of production (InMemoryBackend) and test files 
(TestCaseBase, TestInMemDs, TestInMemDsCacheOff), are they no longer required?
- Finally, you're adding a system property in TestAll:
{code}
+System.setProperty(
+TestCaseBase.CONFIG,
+
C:/sourceCodeGit/granite-modules/temp-crx-ext-s3/crx-ext-s3/src/test/resources/aws.properties);
{code}
I doubt this will work in a Non-Windows environment.


 S3 Datastore optimizations
 --

 Key: JCR-3729
 URL: https://issues.apache.org/jira/browse/JCR-3729
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
 Fix For: 2.7.5

 Attachments: JCR-3729.patch


 The following optimizations can be made to the S3 Datastore, based on 
 feedback from customers and S3 engineers.
 *  Use object keys to create partitions in S3 automatically.
 *  Multi-threaded migration of binary files from FileSystem to S3 datastore
 *  Externalize S3 endpoints.
 *  Asynchronous upload file to S3
 *  Slow Startup Of Instance





[jira] [Commented] (JCR-3729) S3 Datastore optimizations

2014-03-12 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931567#comment-13931567
 ] 

Dominique Pfister commented on JCR-3729:


[~shgupta], there are a couple of issues with your patch:

- Apparently, you restructured the files you touched, so e.g. S3Backend looks 
like it completely changed, while instance variables and methods (e.g. write) 
only moved to another location. This makes it very hard to review the changes, 
so please try to change as few lines as possible, and avoid
changing existing code such as:
{code}
-if (!prevObjectListing.isTruncated()) {
-break;
-}
+if (!prevObjectListing.isTruncated()) break;
{code}
for the latter version doesn't adhere to the coding conventions.
- Then you deleted a couple of production (InMemoryBackend) and test files 
(TestCaseBase, TestInMemDs, TestInMemDsCacheOff), are they no longer required?
- Finally, you're adding a system property in TestAll:
{code}
+System.setProperty(
+TestCaseBase.CONFIG,
+
C:/sourceCodeGit/granite-modules/temp-crx-ext-s3/crx-ext-s3/src/test/resources/aws.properties);
{code}
I doubt this will work in a Non-Windows environment.


 S3 Datastore optimizations
 --

 Key: JCR-3729
 URL: https://issues.apache.org/jira/browse/JCR-3729
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
 Fix For: 2.7.5

 Attachments: JCR-3729.patch


 The following optimizations can be made to the S3 Datastore, based on 
 feedback from customers and S3 engineers.
 *  Use object keys to create partitions in S3 automatically.
 *  Multi-threaded migration of binary files from FileSystem to S3 datastore
 *  Externalize S3 endpoints.
 *  Asynchronous upload file to S3
 *  Slow Startup Of Instance





[jira] [Commented] (JCR-3729) S3 Datastore optimizations

2014-03-12 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931599#comment-13931599
 ] 

Dominique Pfister commented on JCR-3729:


bq. They are re-factored jackrabbit-data. I missed to add files to patch. added 
in patch version 1.

What was the reason for moving them to jackrabbit-data?

 S3 Datastore optimizations
 --

 Key: JCR-3729
 URL: https://issues.apache.org/jira/browse/JCR-3729
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
 Fix For: 2.7.5

 Attachments: JCR-3729.patch, JCR-3729_V1.patch


 The following optimizations can be made to the S3 Datastore, based on 
 feedback from customers and S3 engineers.
 *  Use object keys to create partitions in S3 automatically.
 *  Multi-threaded migration of binary files from FileSystem to S3 datastore
 *  Externalize S3 endpoints.
 *  Asynchronous upload file to S3
 *  Slow Startup Of Instance





[jira] [Commented] (JCR-3729) S3 Datastore optimizations

2014-03-12 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931614#comment-13931614
 ] 

Dominique Pfister commented on JCR-3729:


bq. since changes are too large i advise to look at full source code.

I tried applying your patch to the current trunk, but still get a huge 
amount of rejected changes: I noticed that the files concerned have CR+LF 
line terminators in my checkout, although I'm working on *nix. Apparently, a 
lot of files in jackrabbit-data have no svn:eol-style set and are therefore 
treated as-is, which confuses the patch tool.

 S3 Datastore optimizations
 --

 Key: JCR-3729
 URL: https://issues.apache.org/jira/browse/JCR-3729
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
 Fix For: 2.7.5

 Attachments: JCR-3729.patch, JCR-3729_V1.patch


 The following optimizations can be made to the S3 Datastore, based on 
 feedback from customers and S3 engineers.
 *  Use object keys to create partitions in S3 automatically.
 *  Multi-threaded migration of binary files from FileSystem to S3 datastore
 *  Externalize S3 endpoints.
 *  Asynchronous upload file to S3
 *  Slow Startup Of Instance





svn:eol-style missing (CR+LF in source files)

2014-03-12 Thread Dominique Pfister
Hi,

I'm currently reviewing a patch submitted in JCR-3729, and when I try to
apply to trunk, it fails for various files, although the version it is
based on is the same as mine.

I realized that the files rejected all have CR+LF line terminators in my
checkout, although I'm working on Mac OS X, and svn:eol-style is missing
for those files. Apparently, it is this non-native line termination policy
that confuses patch.

Is this an oversight or did we change our convention to not specify
svn:eol-style anymore?

Thanks
Dominique

BTW: grep -l '^M' $(find . -name \*.java) reports 49 files in
jackrabbit-data that have CR+LF line terminators


[jira] [Commented] (JCR-3729) S3 Datastore optimizations

2014-03-12 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931823#comment-13931823
 ] 

Dominique Pfister commented on JCR-3729:


Thanks to [~reschke] fixing the svn:eol-style problem, I was able to apply the 
patch.

[~shgupta], did you ever try to build the project after entering your changes? 
You introduced a cyclic dependency between jackrabbit-core and jackrabbit-data, 
which becomes visible by merely entering {{mvn clean}} on parent folder 
jackrabbit:
{code}
[ERROR] The projects in the reactor contain a cyclic reference: Edge between 
'Vertex{label='org.apache.jackrabbit:jackrabbit-core:2.8-SNAPSHOT'}' and 
'Vertex{label='org.apache.jackrabbit:jackrabbit-data:2.8-SNAPSHOT'}' introduces 
to cycle in the graph org.apache.jackrabbit:jackrabbit-data:2.8-SNAPSHOT -- 
org.apache.jackrabbit:jackrabbit-core:2.8-SNAPSHOT -- 
org.apache.jackrabbit:jackrabbit-data:2.8-SNAPSHOT - [Help 1]
{code}

You further introduced a class {{o.a.j.core.util.NamedThreadFactory.java}} in 
jackrabbit-data and this package name clashes with an identical package name in 
jackrabbit-core and leads to a split-package warning when building the bundle. 
You should use a different package name.

bq. Yes. It is more structured.
Hm, I disagree: in file S3Backend you (or your IDE) moved instance variables in 
front of class variables. This again does not comply with standard coding 
conventions.

 S3 Datastore optimizations
 --

 Key: JCR-3729
 URL: https://issues.apache.org/jira/browse/JCR-3729
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
 Fix For: 2.7.5

 Attachments: JCR-3729.patch, JCR-3729_V1.patch


 The following optimizations can be made to the S3 Datastore, based on 
 feedback from customers and S3 engineers.
 *  Use object keys to create partitions in S3 automatically.
 *  Multi-threaded migration of binary files from FileSystem to S3 datastore
 *  Externalize S3 endpoints.
 *  Asynchronous upload file to S3
 *  Slow Startup Of Instance





[jira] [Comment Edited] (JCR-3729) S3 Datastore optimizations

2014-03-12 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931823#comment-13931823
 ] 

Dominique Pfister edited comment on JCR-3729 at 3/12/14 2:45 PM:
-

Thanks to [~reschke] fixing the svn:eol-style problem, I was able to apply the 
patch.

[~shgupta], did you ever try to build the project after entering your changes? 
You introduced a cyclic dependency between jackrabbit-core and jackrabbit-data, 
which becomes visible by merely entering {{mvn clean}} on parent folder 
jackrabbit:
{code}
[ERROR] The projects in the reactor contain a cyclic reference: Edge between 
'Vertex{label='org.apache.jackrabbit:jackrabbit-core:2.8-SNAPSHOT'}' and 
'Vertex{label='org.apache.jackrabbit:jackrabbit-data:2.8-SNAPSHOT'}' introduces 
to cycle in the graph org.apache.jackrabbit:jackrabbit-data:2.8-SNAPSHOT -- 
org.apache.jackrabbit:jackrabbit-core:2.8-SNAPSHOT -- 
org.apache.jackrabbit:jackrabbit-data:2.8-SNAPSHOT - [Help 1]
{code}
again: what was the reason for moving classes to jackrabbit-data?

You further introduced a class {{o.a.j.core.util.NamedThreadFactory.java}} in 
jackrabbit-data and this package name clashes with an identical package name in 
jackrabbit-core and leads to a split-package warning when building the bundle. 
You should use a different package name.

bq. Yes. It is more structured.
Hm, I disagree: in file S3Backend you (or your IDE) moved instance variables in 
front of class variables. This again does not comply with standard coding 
conventions.


was (Author: dpfister):
Thanks to [~reschke] fixing the svn:eol-style problem, I was able to apply the 
patch.

[~shgupta], did you ever try to build the project after entering your changes? 
You introduced a cyclic dependency between jackrabbit-core and jackrabbit-data, 
which becomes visible by merely entering {{mvn clean}} on parent folder 
jackrabbit:
{code}
[ERROR] The projects in the reactor contain a cyclic reference: Edge between 
'Vertex{label='org.apache.jackrabbit:jackrabbit-core:2.8-SNAPSHOT'}' and 
'Vertex{label='org.apache.jackrabbit:jackrabbit-data:2.8-SNAPSHOT'}' introduces 
to cycle in the graph org.apache.jackrabbit:jackrabbit-data:2.8-SNAPSHOT -- 
org.apache.jackrabbit:jackrabbit-core:2.8-SNAPSHOT -- 
org.apache.jackrabbit:jackrabbit-data:2.8-SNAPSHOT - [Help 1]
{code}

You further introduced a class {{o.a.j.core.util.NamedThreadFactory.java}} in 
jackrabbit-data and this package name clashes with an identical package name in 
jackrabbit-core and leads to a split-package warning when building the bundle. 
You should use a different package name.

bq. Yes. It is more structured.
Hm, I disagree: in file S3Backend you (or your IDE) moved instance variables in 
front of class variables. This again does not comply with standard coding 
conventions.

 S3 Datastore optimizations
 --

 Key: JCR-3729
 URL: https://issues.apache.org/jira/browse/JCR-3729
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
 Fix For: 2.7.5

 Attachments: JCR-3729.patch, JCR-3729_V1.patch


 The following optimizations can be made to the S3 Datastore, based on 
 feedback from customers and S3 engineers.
 *  Use object keys to create partitions in S3 automatically.
 *  Multi-threaded migration of binary files from FileSystem to S3 datastore
 *  Externalize S3 endpoints.
 *  Asynchronous upload file to S3
 *  Slow Startup Of Instance





[jira] [Commented] (JCR-3732) Externalize S3 endpoints

2014-03-11 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930750#comment-13930750
 ] 

Dominique Pfister commented on JCR-3732:


[~shgupta], thank you for providing a patch, looks fine, apart from the literal 
s3EndPoint: other property names are listed as static strings in S3Constants, 
any reason why you didn't do the same for s3EndPoint?

 Externalize S3 endpoints
 

 Key: JCR-3732
 URL: https://issues.apache.org/jira/browse/JCR-3732
 Project: Jackrabbit Content Repository
  Issue Type: Sub-task
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
 Fix For: 2.7.5

 Attachments: JCR-3732.patch


 Currently the S3 connector uses the S3 region (already externalized via 
 aws.properties) to configure the S3 endpoint as per the mapping [1].
 There are cases where a single S3 region contains multiple endpoints; for 
 example, the US Standard region has two endpoints:
 s3.amazonaws.com (Northern Virginia or Pacific Northwest)
 s3-external-1.amazonaws.com (Northern Virginia only)
 To handle these cases, the S3 connector should externalize the endpoint 
 configuration as an optional setting that takes precedence over the endpoint 
 derived from the S3 region.
 [1]
 http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region





[jira] [Commented] (JCR-3731) Multi-threaded migration of binary files from FileSystem to S3 datastore

2014-03-11 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930844#comment-13930844
 ] 

Dominique Pfister commented on JCR-3731:


[~shgupta], the patch provided doesn't work against CachingDataStore.java (7 
out of 7 hunks fail) - can you rebase your changes on the latest svn version 
and post another patch?

 Multi-threaded migration of binary files from FileSystem to S3 datastore 
 -

 Key: JCR-3731
 URL: https://issues.apache.org/jira/browse/JCR-3731
 Project: Jackrabbit Content Repository
  Issue Type: Sub-task
  Components: jackrabbit-core
Affects Versions: 2.7.4
Reporter: Shashank Gupta
 Fix For: 2.7.5

 Attachments: JCR-3651.patch


 As of today, when we switch a repository from FileDataStore to S3DataStore, 
 all binary files are migrated from the local file system to the S3 datastore. 
 This process is currently single-threaded and takes a lot of time: for 
 example, 1 GB of initial content takes around 5 minutes to migrate from an 
 EC2 instance to S3.
 It can be made faster by migrating content with multiple threads.





Re: EditorProvider after Oak construction

2014-01-09 Thread Dominique Pfister
Hi Jukka,

On Jan 9, 2014, at 4:39 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:

 Hi,
 
 On Thu, Jan 9, 2014 at 10:29 AM, Dominique Pfister dpfis...@adobe.com wrote:
 I’m working on an application packaged as an OSGI bundle that would perform 
 some validation and
 store some auxiliary data in a node whenever a stream is saved in one of its 
 properties, so I’m
 thinking on creating some CommitHook (or an EditorProvider) that would be 
 able to compute
 the auxiliary property.
 
 An EditorProvider is probably better for this case, as it adds less
 overhead than a full CommitHook.
 
 Now comes the problem: in my setup, the Oak repository is created with some 
 Hooks/Providers
 on startup, and AFAICS only Observer’s can be added/removed after that, is 
 this correct?
 
 If you expose your EditorProvider as an OSGi service, Oak should
 automatically pick it up and apply it to any new commits. At least
 that's the intention; I'm not sure if the OSGi binding yet does that.

Great news, that’s exactly what I hoped for!

Cheers
Dominique

 
 BR,
 
 Jukka Zitting



Re: EditorProvider after Oak construction

2014-01-09 Thread Dominique Pfister
Hi again,

My enthusiasm might have been a bit premature: there is actually a 
WhiteboardEditorProvider in Oak that will invoke all registered OSGI services 
of type EditorProvider, but it is unused (in contrast to e.g. 
WhiteboardAuthorizableActionProvider). Would a simple registration of this 
“special” EditorProvider in Oak.with(…) fix it?

Thanks
Dominique

On Jan 9, 2014, at 4:54 PM, Dominique Pfister dpfis...@adobe.com wrote:

[...]



Re: [VOTE] Accept MongoMK contribution (Was: [jira] [Created] (OAK-293) MongoDB-based MicroKernel)

2012-09-13 Thread Dominique Pfister
And here's my:

+1

Kind regards
Dominique

On Sep 13, 2012, at 5:09 PM, Jukka Zitting wrote:

 Hi,
 
 On Thu, Sep 6, 2012 at 9:37 PM, Philipp Marx (JIRA) j...@apache.org wrote:
 Summary: MongoDB-based MicroKernel
 
 This is a pretty major new feature so I'd like us to vote on whether
 we want to take over the maintenance and further development of this
 code. If we agree, I'd also suggest that we invite Philipp Marx as the
 original author of this code to join us as a Jackrabbit committer and
 PMC member.
 
 So, please vote on accepting this MongoMK contribution and granting
 committer and  PMC member status Philipp Marx. This vote is open for
 the next 72 hours.
 
[ ] +1 Accept the MongoMK contribution and grant committer and PMC
 member status to Philipp Marx
[ ] -1 Don't accept the codebase and/or grant committership, because...
 
 My vote is +1.
 
 BR,
 
 Jukka Zitting



[jira] [Resolved] (OAK-216) Occasional org.apache.jackrabbit.mk.store.NotFoundExceptions

2012-08-22 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister resolved OAK-216.
---

Resolution: Fixed
  Assignee: Dominique Pfister

The issue was caused by a wrong assumption in the GC, namely that all branches 
are uniquely identifiable by their branch root id. This does not hold, because 
one can create two branches of the same commit.
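The wrong assumption is easy to demonstrate in a few lines (hypothetical Branch class for illustration, not Oak's actual type): keying branches by their root commit id collapses two branches of the same commit into one entry.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration type; not Oak's actual branch representation.
class Branch {
    final String rootCommitId;
    final String name;
    Branch(String rootCommitId, String name) {
        this.rootCommitId = rootCommitId;
        this.name = name;
    }
}

public class BranchKeying {
    public static void main(String[] args) {
        Map<String, Branch> byRootId = new HashMap<>();
        // Two branches created from the same commit share one root id ...
        byRootId.put("c42", new Branch("c42", "branch-a"));
        // ... so the second put silently replaces the first branch.
        byRootId.put("c42", new Branch("c42", "branch-b"));
        System.out.println(byRootId.size()); // 1 entry, although two branches exist
    }
}
```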

 Occasional org.apache.jackrabbit.mk.store.NotFoundExceptions
 

 Key: OAK-216
 URL: https://issues.apache.org/jira/browse/OAK-216
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Jukka Zitting
Assignee: Dominique Pfister

 Every now and then our builds fail with one or another of the JCR TCK tests 
 failing due to a {{org.apache.jackrabbit.mk.store.NotFoundException}} being 
 thrown on some revision in the MicroKernel.
 Since the garbage collector is currently only instructed to remove revisions 
 that are over 60 minutes old, such lost revisions should never occur in 
 normal builds.
 The following change to line 163 of {{DefaultRevisionStore.java}} makes the 
 problem easy to reproduce reliably, which strongly suggests that this problem 
 indeed is caused or at least triggered by the garbage collector:
 {code}
 -}, 60, 60, TimeUnit.SECONDS);
 +}, 1, 1, TimeUnit.SECONDS);
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (OAK-231) Support for large child node lists.

2012-08-03 Thread Dominique Pfister (JIRA)
Dominique Pfister created OAK-231:
-

 Summary: Support for large child node lists.
 Key: OAK-231
 URL: https://issues.apache.org/jira/browse/OAK-231
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Dominique Pfister
Assignee: Dominique Pfister








[jira] [Updated] (OAK-231) Support for large child node lists

2012-08-03 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated OAK-231:
--

Summary: Support for large child node lists  (was: Support for large child 
node lists.)

 Support for large child node lists
 --

 Key: OAK-231
 URL: https://issues.apache.org/jira/browse/OAK-231
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Dominique Pfister
Assignee: Dominique Pfister







[jira] [Resolved] (OAK-187) ConcurrentModificationException during gc run

2012-07-25 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister resolved OAK-187.
---

Resolution: Fixed
  Assignee: Dominique Pfister

Fixed in revision 1365621.

 ConcurrentModificationException during gc run
 -

 Key: OAK-187
 URL: https://issues.apache.org/jira/browse/OAK-187
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Michael Dürig
Assignee: Dominique Pfister

 I sporadically encounter a {{ConcurrentModificationException}} when building 
 with {{-PintegrationTesting}}. This happens while the {{JcrTckTest}} suite 
 runs and is only printed to the console. No tests fail.
 {code}
 Running org.apache.jackrabbit.oak.jcr.JcrTckIT
 java.util.ConcurrentModificationException
   at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1100)
   at java.util.TreeMap$EntryIterator.next(TreeMap.java:1136)
   at java.util.TreeMap$EntryIterator.next(TreeMap.java:1131)
   at 
 org.apache.jackrabbit.mk.store.DefaultRevisionStore.markBranches(DefaultRevisionStore.java:562)
   at 
 org.apache.jackrabbit.mk.store.DefaultRevisionStore.gc(DefaultRevisionStore.java:498)
   at 
 org.apache.jackrabbit.mk.store.DefaultRevisionStore$2.run(DefaultRevisionStore.java:160)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:680)
 Tests run: 1906, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.443 sec
 {code}
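The failure mode in the stack trace above can be reproduced outside Jackrabbit: TreeMap iterators are fail-fast, so any structural modification while an iterator is live makes the next iteration step throw, even from a single thread. A minimal illustration (not the markBranches code itself):

```java
import java.util.ConcurrentModificationException;
import java.util.Map;
import java.util.TreeMap;

// Minimal reproduction of a ConcurrentModificationException on a TreeMap.
public class TreeMapCme {
    public static void main(String[] args) {
        Map<String, String> commits = new TreeMap<>();
        commits.put("r1", "a");
        commits.put("r2", "b");
        try {
            for (Map.Entry<String, String> e : commits.entrySet()) {
                commits.put("r3", "c"); // structural modification during iteration
            }
            System.out.println("no exception");
        } catch (ConcurrentModificationException expected) {
            // The iterator's next() detects the modification count mismatch.
            System.out.println("ConcurrentModificationException");
        }
    }
}
```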





[jira] [Comment Edited] (OAK-114) MicroKernel API: specify retention policy for old revisions

2012-07-05 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406910#comment-13406910
 ] 

Dominique Pfister edited comment on OAK-114 at 7/5/12 8:30 AM:
---

The javadoc is possibly not clear enough: a revision returned by 
getHeadRevision remains accessible for _at least_ 10 minutes or _even longer_ 
if it is still the head revision, regardless of the time it was committed. So 
in Jukka's snippet above, the getNodes call wouldn't fail, because only 5 
minutes passed.

Anyway, I think we really need some performance figures first, before we can 
decide whether this policy is too aggressive. OTOH, the current GC logic is 
quite small and straightforward, so it shouldn't be difficult to change it at a 
later time if need arises.

  was (Author: dpfister):
The javadoc is possibly not clear enough: a revision returned by 
getHeadRevision remains accessible for _at least_ 10 minutes or _even longer_ 
if it is still the head revision, regardless of the time it was committed. So 
in Jukka's snippet above, the getNodes call wouldn't fail, because only 10 
minutes passed.

Anyway, I think we really need some performance figures first, before we can 
decide whether this policy is too aggressive. OTOH, the current GC logic is 
quite small and straightforward, so it shouldn't be difficult to change it at a 
later time if need arises.
  
 MicroKernel API: specify retention policy for old revisions
 ---

 Key: OAK-114
 URL: https://issues.apache.org/jira/browse/OAK-114
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Attachments: OAK-114.patch


 the MicroKernel API javadoc should specify the minimal guaranteed retention 
 period for old revisions. 





[jira] [Commented] (OAK-114) MicroKernel API: specify retention policy for old revisions

2012-07-05 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13407201#comment-13407201
 ] 

Dominique Pfister commented on OAK-114:
---

{quote}If we are keeping track of when a particular revision was last 
returned by getHeadRevision, wouldn't it be simple to use the same mechanism to 
also keep track of when revisions are returned from or passed to other 
MicroKernel methods? I don't see how that would imply any more complex state 
management than what's already needed.{quote}

If we just remember the earliest revision returned by getHeadRevision, we need 
just one field and the next GC cycle can skip all revisions committed later. If 
we remember all revisions accessed, we'll end up with some possibly sparse list 
of revisions, and the GC cycle would need to re-link these revisions - modify 
parent commit, re-calculate diff - to get a consistent view.
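The one-field scheme described in the paragraph above could be sketched as follows (hypothetical names, not the MicroKernel's actual API):

```java
// Sketch under assumed names: one field records the earliest revision ever
// handed out as a head revision; GC may collect only revisions strictly
// older than that watermark and can skip everything committed later.
public class RetentionWatermark {
    private long earliestHandedOut = Long.MAX_VALUE;

    synchronized long recordHeadRevision(long head) {
        earliestHandedOut = Math.min(earliestHandedOut, head);
        return head;
    }

    synchronized boolean collectable(long revision) {
        return revision < earliestHandedOut;
    }

    public static void main(String[] args) {
        RetentionWatermark w = new RetentionWatermark();
        w.recordHeadRevision(100);
        w.recordHeadRevision(120);
        System.out.println(w.collectable(99));  // older than any handed-out head
        System.out.println(w.collectable(100)); // still reachable by a client
    }
}
```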

{quote}The benefit of switching from last returned as head revision to 
last accessed/seen for figuring out when a revision is still needed is that 
we can allow unused revisions expire much faster. With the last accessed/seen 
pattern there'll be no problem with an expiry time of just a few seconds, which 
would in most cases allow the garbage collector to be much more aggressive than 
with the 10 minute time proposed here.{quote}

I can see the advantage, but this would leave the door open for some bogus 
polling client that keeps some very old revision alive, which I'd like to avoid.


 MicroKernel API: specify retention policy for old revisions
 ---

 Key: OAK-114
 URL: https://issues.apache.org/jira/browse/OAK-114
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Attachments: OAK-114.patch


 the MicroKernel API javadoc should specify the minimal guaranteed retention 
 period for old revisions. 





Re: ConcurrentModificationException during gc run

2012-07-04 Thread Dominique Pfister
Hi Michi,

Yes, Tom experienced the same issue yesterday. I'm gonna have a look.

Thanks
Dominique

On Jul 4, 2012, at 11:23 AM, Michael Dürig wrote:

 
 Hi,
 
 I got a ConcurrentModificationException while checking the 0.3 release. 
 This didn't cause any test case to fail but was printed to the console.
 
 Michael
 
 Running org.apache.jackrabbit.oak.jcr.JcrTckTest
 java.util.ConcurrentModificationException
   at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1100)
   at java.util.TreeMap$EntryIterator.next(TreeMap.java:1136)
   at java.util.TreeMap$EntryIterator.next(TreeMap.java:1131)
   at 
 org.apache.jackrabbit.mk.store.DefaultRevisionStore.markBranches(DefaultRevisionStore.java:561)
   at 
 org.apache.jackrabbit.mk.store.DefaultRevisionStore.gc(DefaultRevisionStore.java:497)
   at 
 org.apache.jackrabbit.mk.store.DefaultRevisionStore$2.run(DefaultRevisionStore.java:159)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:680)



[jira] [Commented] (OAK-138) Move client/server package in oak-mk to separate project

2012-06-13 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13294366#comment-13294366
 ] 

Dominique Pfister commented on OAK-138:
---

@Thomas: I agree that other MK implementations may benefit from such a 
refactoring into at least one project (e.g. oak-mk-commons or oak-mk-base). 
Nevertheless, this issue is about moving the remote classes into a separate 
project, as oak-mk does not depend on them (as opposed to data store, jsop or 
cache), so IMHO this is a topic for another issue.

I think Jukka's right about the requirement for a separate MK API project, 
therefore:

+1 for oak-mk-remote
+1 for oak-mk-api

 Move client/server package in oak-mk to separate project
 

 Key: OAK-138
 URL: https://issues.apache.org/jira/browse/OAK-138
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, it, mk, run
Affects Versions: 0.3
Reporter: Dominique Pfister
Assignee: Dominique Pfister

 As a further cleanup step in OAK-13, I'd like to move the packages 
 o.a.j.mk.client and o.a.j.mk.server and referenced classes in oak-mk to a 
 separate project, e.g. oak-mk-remote.
 This new project will then be added as a dependency to:
 oak-core
 oak-run
 oak-it-mk





[jira] [Updated] (OAK-138) Move client/server package in oak-mk to separate project

2012-06-12 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated OAK-138:
--

Description: 
As a further cleanup step in OAK-13, I'd like to move the packages 
o.a.j.mk.client and o.a.j.mk.server and referenced classes in oak-mk to a 
separate project, e.g. oak-mk-remote.

This new project will then be added as a dependency to:

oak-core
oak-run
oak-it-mk

  was:
As a further cleanup step in OAK-13, I'd like to move the packages 
o.a.j.oak.mk.client and o.a.j.oak.mk.server and referenced classes in oak-mk to 
a separate project, e.g. oak-mk-remote.

This new project will then be added as a dependency to:

oak-core
oak-run
oak-it-mk


 Move client/server package in oak-mk to separate project
 

 Key: OAK-138
 URL: https://issues.apache.org/jira/browse/OAK-138
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, it, mk, run
Affects Versions: 0.3
Reporter: Dominique Pfister
Assignee: Dominique Pfister

 As a further cleanup step in OAK-13, I'd like to move the packages 
 o.a.j.mk.client and o.a.j.mk.server and referenced classes in oak-mk to a 
 separate project, e.g. oak-mk-remote.
 This new project will then be added as a dependency to:
 oak-core
 oak-run
 oak-it-mk





Re: The revenge of OAK-53 part II: IT failure

2012-05-24 Thread Dominique Pfister
Hi Michi,

And you're sure you built and installed the latest oak-mk package, and are not 
accidentally referring to an old one that doesn't contain the fix made in 
OAK-12, namely to BoundaryInputStream?

Cheers
Dominique

On May 24, 2012, at 11:43 AM, Michael Dürig wrote:

 
 Hi Dominique,
 
 On 23.5.12 13:45, Dominique Pfister wrote:
 Hi Michi,
 
 On May 23, 2012, at 1:38 PM, Michael Dürig wrote:
 
 Didn't see it again up to now. I'll keep an eye on it. Is there any log
 data that would be helpful if this occurs again?
 
 Unfortunately not (yet), if it reappears, I'll add a logger to the mk 
 subproject and log messages in the server.
 
 This just happens again. The output form the Maven build is
 
 ---
  T E S T S
 ---
 Running org.apache.jackrabbit.mk.test.EverythingIT
 java.io.IOException: Bad HTTP request line: 9?)c??:O?+?w
 
 ?c9???'td???aY??#0??ˀ??+??4¨?:?0?1
 
  #`#
 at org.apache.jackrabbit.mk.server.Request.parse(Request.java:72)
 at 
 org.apache.jackrabbit.mk.server.HttpProcessor.process(HttpProcessor.java:100)
 at 
 org.apache.jackrabbit.mk.server.HttpProcessor.process(HttpProcessor.java:75)
 at org.apache.jackrabbit.mk.server.Server.process(Server.java:169)
 at org.apache.jackrabbit.mk.server.Server$2.run(Server.java:131)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:680)
 Tests run: 72, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 15.639 
 sec  FAILURE!
 
 Results :
 
 Tests in error:
   testBlobs[1](org.apache.jackrabbit.mk.test.MicroKernelIT): 
 java.net.SocketException: Broken pipe
 
 Tests run: 72, Failures: 0, Errors: 1, Skipped: 0
 
 
 Stack trace from the logs:
 
 ---
 Test set: org.apache.jackrabbit.mk.test.EverythingIT
 ---
 Tests run: 72, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 15.64 
 sec  FAILURE!
 testBlobs[1](org.apache.jackrabbit.mk.test.MicroKernelIT)  Time elapsed: 
 4.259 sec   ERROR!
 org.apache.jackrabbit.mk.api.MicroKernelException: 
 java.net.SocketException: Broken pipe
   at 
 org.apache.jackrabbit.mk.client.Client.toMicroKernelException(Client.java:372)
   at org.apache.jackrabbit.mk.client.Client.write(Client.java:353)
   at 
 org.apache.jackrabbit.mk.test.MicroKernelIT.testBlobs(MicroKernelIT.java:855)
   at 
 org.apache.jackrabbit.mk.test.MicroKernelIT.testBlobs(MicroKernelIT.java:843)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at org.junit.runners.Suite.runChild(Suite.java:128)
   at org.junit.runners.Suite.runChild(Suite.java:24)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at org.junit.runners.Suite.runChild(Suite.java:128)
   at org.junit.runners.Suite.runChild(Suite.java:24

Re: The revenge of OAK-53 part II: IT failure

2012-05-23 Thread Dominique Pfister
Hi Michi,

On May 23, 2012, at 1:38 PM, Michael Dürig wrote:

 Didn't see it again up to now. I'll keep an eye on it. Is there any log 
 data that would be helpful if this occurs again?

Unfortunately not (yet), if it reappears, I'll add a logger to the mk 
subproject and log messages in the server.

Cheers
Dominique

 
 Michael
 
 
 cheers
 stefan
 
 
 Michael
 
 
 
 cheers
 stefan
 
at
 
 org.apache.jackrabbit.mk.client.Client.toMicroKernelException(Client.java:372)
at org.apache.jackrabbit.mk.client.Client.write(Client.java:353)
at
 
 org.apache.jackrabbit.mk.test.MicroKernelIT.testBlobs(MicroKernelIT.java:855)
at
 
 org.apache.jackrabbit.mk.test.MicroKernelIT.testBlobs(MicroKernelIT.java:843)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at
 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at
 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at
 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at
 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at
 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at
 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at
 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at
 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at
 org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at
 org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:24)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at
 org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at
 org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:24)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at
 org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at
 org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:24)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at
 org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at
 org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at
 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
at
 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
at
 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at
 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at
 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at
 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at
 
 

Re: Intermittent test failures with -PintegrationTesting

2012-05-22 Thread Dominique Pfister
Hi Michi,

Seen this, too, very rarely though. Anyway, I created an issue for this:

https://issues.apache.org/jira/browse/OAK-107

Cheers
Dominique

On May 21, 2012, at 11:41 PM, Michael Dürig wrote:

 
 Hi,
 
 About every third build fails for me:
 
 ---
 Test set: org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest
 ---
 Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.723 
 sec  FAILURE!
 testConcurrentMergeGC(org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest)
  
  Time elapsed: 0.737 sec   ERROR!
 org.apache.jackrabbit.mk.api.MicroKernelException: 
 java.lang.RuntimeException: Unexpected error
   at 
 org.apache.jackrabbit.mk.core.MicroKernelImpl.merge(MicroKernelImpl.java:504)
   at 
 org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest.testConcurrentMergeGC(DefaultRevisionStoreTest.java:184)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
  at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
  at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
  at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
  at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
  at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:172)
  at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:104)
  at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:70)
Caused by: java.lang.RuntimeException: Unexpected error
  at org.apache.jackrabbit.mk.store.StoredNodeAsState$3.getNode(StoredNodeAsState.java:176)
  at org.apache.jackrabbit.mk.store.StoredNodeAsState.getChildNode(StoredNodeAsState.java:120)
  at org.apache.jackrabbit.mk.store.DefaultRevisionStore$3.childNodeChanged(DefaultRevisionStore.java:476)
  at org.apache.jackrabbit.mk.model.AbstractNode.diff(AbstractNode.java:126)
  at org.apache.jackrabbit.mk.store.DefaultRevisionStore.compare(DefaultRevisionStore.java:443)
  at org.apache.jackrabbit.mk.model.NodeDelta.init(NodeDelta.java:68)
  at org.apache.jackrabbit.mk.model.CommitBuilder.mergeNode(CommitBuilder.java:371)
  at org.apache.jackrabbit.mk.model.CommitBuilder.mergeTree(CommitBuilder.java:363)
  at org.apache.jackrabbit.mk.model.CommitBuilder.doMerge(CommitBuilder.java:216)
  at org.apache.jackrabbit.mk.core.MicroKernelImpl.merge(MicroKernelImpl.java:502)
  ... 32 more
Caused by: org.apache.jackrabbit.mk.store.NotFoundException: 0a9706e6ec0320c779cd6f775910826e79c97f58
  at org.apache.jackrabbit.mk.persistence.InMemPersistence.readNode(InMemPersistence.java:70)
  at 

[jira] [Commented] (OAK-82) Running MicroKernelIT test with the InMem persistence creates a lot of GC threads

2012-05-07 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269615#comment-13269615
 ] 

Dominique Pfister commented on OAK-82:
--

Oops, that rationale slipped my attention even though it was explicitly 
documented: thanks for fixing that!

 Running MicroKernelIT test with the InMem persistence creates a lot of GC 
 threads
 -

 Key: OAK-82
 URL: https://issues.apache.org/jira/browse/OAK-82
 Project: Jackrabbit Oak
  Issue Type: Bug
Affects Versions: 0.1
Reporter: Dominique Pfister
Priority: Minor
 Fix For: 0.2.1


 This is caused by MicroKernelImplFixture not disposing the MK instances it 
 created
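The cause described above suggests a fix along these lines: the fixture remembers every kernel it hands out and disposes them all on teardown, so per-instance background GC threads don't accumulate across tests. All names below are purely illustrative, not the actual Oak fixture or MicroKernel API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the OAK-82 fix: track created kernel instances and
// dispose them in tearDown(). Names are illustrative, not the real code.
public class Main {

    interface Kernel {
        void dispose();
    }

    static class Fixture {
        private final List<Kernel> created = new ArrayList<>();
        private int live = 0;

        Kernel createKernel() {
            live++;
            Kernel kernel = new Kernel() {
                private boolean disposed = false;

                @Override
                public void dispose() {
                    if (!disposed) {        // make dispose() idempotent
                        disposed = true;
                        live--;
                    }
                }
            };
            created.add(kernel);
            return kernel;
        }

        // The missing piece in OAK-82: without this, every kernel (and its
        // background GC thread) outlives the test that created it.
        void tearDown() {
            for (Kernel kernel : created) {
                kernel.dispose();
            }
            created.clear();
        }

        int liveKernels() {
            return live;
        }
    }

    public static void main(String[] args) {
        Fixture fixture = new Fixture();
        fixture.createKernel();
        fixture.createKernel();
        fixture.tearDown();
        System.out.println(fixture.liveKernels()); // 0
    }
}
```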

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (OAK-56) File system abstraction

2012-05-03 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-56?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267331#comment-13267331
 ] 

Dominique Pfister commented on OAK-56:
--

 On a related note, what's the status of the related o.a.j.mk.fs code in 
 oak-core? Unless we need more of the  functionality than we currently do 
 (one-liners in MicroKernelFactory and NodeMapInDb), I'd rather replace it  
 with Commons IO or even just plain Java IO.

I'd replace it with Commons IO and its FileUtils features: this is already in 
use in other projects (such as Jackrabbit itself) and much better tested.

 If, as it sounds like, the code will mostly be needed for the MicroKernel 
 implementation, we should at least  move the o.a.j.mk.fs package from 
 oak-core to oak-mk.

I don't see a need for this in the MicroKernel implementation: instead of 
moving it back into this project, I'd rather move it to oak-commons and rename 
the package to o.a.j.commons.fs, if Tom insists on keeping it.

 File system abstraction
 ---

 Key: OAK-56
 URL: https://issues.apache.org/jira/browse/OAK-56
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mk
Reporter: Thomas Mueller
Assignee: Thomas Mueller
Priority: Minor

 A file system abstraction allows to add new features (cross cutting concerns) 
 in a modular way, for example:
 - detection and special behavior of out-of-disk space situation
 - profiling and statistics over JMX
 - re-try on file system problems
 - encryption
 - file system monitoring
 - replication / real-time backup on the file system level (for clustering)
 - caching (improved performance for CRX)
 - allows to easily switch to faster file system APIs (FileChannel, memory 
 mapped files)
 - debugging (for example, logging all file system operations)
 - allows to implement S3 / Hadoop / MongoDB / ... file systems - not only by 
 us but by third parties, possibly the end user
 - zip file system (for example to support read-only, compressed repositories)
 - testing: simulating out of disk space and out of memory (ensure the 
 repository doesn't corrupt in this case)
 - testing: simulate very large files (using an in-memory file system)
 - splitting very large files in 2 GB blocks (FAT and other file systems that 
 don't support large files)
 - data compression (if needed)





[jira] [Commented] (OAK-56) File system abstraction

2012-05-03 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-56?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267334#comment-13267334
 ] 

Dominique Pfister commented on OAK-56:
--

 commons and rename the package to o.a.j.commons.fs

Sorry, should be o.a.j.oak.commons.fs, of course.






[jira] [Commented] (OAK-78) waitForCommit() test failure for MK remoting

2012-04-30 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-78?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264728#comment-13264728
 ] 

Dominique Pfister commented on OAK-78:
--

 There doesn't seem to be any need for synchronization in Client (the server
 should in any case be thread-safe), so I think we should just unsynchronize
 the Client method.

Wrong, the javadoc for the Client class clearly says:

  All public methods inside this class are completely synchronized because
  HttpExecutor is not thread-safe.

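The pattern the quoted javadoc describes can be sketched as follows (with hypothetical names, not the real oak-mk classes): a non-thread-safe executor that is safe only because every public method of the wrapping client is synchronized. It also shows why a long-running waitForCommit() would block every other caller of the same client instance.

```java
// Sketch of a client whose public methods are all synchronized because the
// underlying executor is not thread-safe. Illustrative names only.
public class Main {

    static class Executor {                      // stands in for HttpExecutor
        private final StringBuilder buffer = new StringBuilder(); // not thread-safe

        String execute(String command) {
            buffer.setLength(0);
            buffer.append("result:").append(command);
            return buffer.toString();
        }
    }

    static class Client {
        private final Executor executor = new Executor();

        // Synchronized so concurrent callers never interleave on the
        // shared, non-thread-safe executor ...
        public synchronized String getNodes(String path) {
            return executor.execute("getNodes " + path);
        }

        // ... but a blocking long-poll here would hold the monitor and
        // stall every other method on this instance -- the failure mode
        // the waitForCommit() integration test ran into.
        public synchronized String waitForCommit(String oldHead) {
            return executor.execute("waitForCommit " + oldHead);
        }
    }

    public static void main(String[] args) {
        Client client = new Client();
        System.out.println(client.getNodes("/")); // result:getNodes /
    }
}
```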

 waitForCommit() test failure for MK remoting
 

 Key: OAK-78
 URL: https://issues.apache.org/jira/browse/OAK-78
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 0.2


 The .mk.client.Client class is synchronized, which makes it fail the new 
 waitForCommit() integration test.
 There doesn't seem to be any need for synchronization in Client (the server 
 should in any case be thread-safe), so I think we should just unsynchronize 
 the Client method.





Re: Lifetime of revision identifiers

2012-04-03 Thread Dominique Pfister
Hi,

I've given it a second thought, and I'm no longer sure I would allow a revision 
to be kept reachable by some client interaction. In the current design, the GC 
will copy the head revision to the "to" store, plus all the revisions that are 
either newly created (by some commit call coming in later) or still being 
manipulated (by a commit that started earlier but whose internal commit builder 
is not yet finished). I'd extend this design by copying all revisions that were 
created in some fixed interval (e.g. 10 minutes) before the head revision was 
created, and see whether this suffices.
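The extended GC rule proposed above can be sketched like this (all types are hypothetical, not the actual DefaultRevisionStore code): besides the head revision, copy every revision created within a fixed interval before the head was created.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "keep everything created within 10 minutes of head" GC rule.
public class Main {

    static final long KEEP_INTERVAL_MS = 10 * 60 * 1000L;

    static class Revision {
        final String id;
        final long createdMs;

        Revision(String id, long createdMs) {
            this.id = id;
            this.createdMs = createdMs;
        }
    }

    static List<Revision> revisionsToCopy(List<Revision> all, Revision head) {
        List<Revision> keep = new ArrayList<>();
        for (Revision r : all) {
            long age = head.createdMs - r.createdMs;
            // keep the head itself plus anything created in the window
            // [head.createdMs - KEEP_INTERVAL_MS, head.createdMs]
            if (r == head || (age >= 0 && age <= KEEP_INTERVAL_MS)) {
                keep.add(r);
            }
        }
        return keep;
    }

    public static void main(String[] args) {
        Revision old = new Revision("r1", 0L);                 // 20 min before head
        Revision recent = new Revision("r2", 19 * 60 * 1000L); //  1 min before head
        Revision head = new Revision("r3", 20 * 60 * 1000L);
        // only "recent" and the head survive; "old" is collectable
        System.out.println(revisionsToCopy(List.of(old, recent, head), head).size()); // 2
    }
}
```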

Regards
Dominique

On Apr 3, 2012, at 11:05 AM, Dominique Pfister wrote:

 Hi,
 
 On Apr 2, 2012, at 7:28 PM, Jukka Zitting wrote:
 
 Hi,
 
 On Mon, Apr 2, 2012 at 6:34 PM, Stefan Guggisberg
 stefan.guggisb...@gmail.com wrote:
 i don't think that we should allow clients to explicitly extend the life 
 span
 of a specific revision. this would IMO unnecessarily complicate the GC
 logic and it would allow misbehaved clients to compromise the stability
 of the mk.
 
 This would notably complicate things in oak-core and higher up. Any
 large batch operations would have to worry about the underlying
 revisions becoming unavailable unless they are continuously updated to
 the latest head revision.
 
 I don't think allowing lease extensions would complicate garbage
 collection too much. All I'm asking is that the collector should look
 at the last access time instead of the create time of a revision
 to determine whether it's still referenceable or not.
 
 Sounds reasonable, as long as you explicitly access the revision first, and 
 then the nodes it contains. Things get more complicated if you'd hang on to 
 some node in some revision and then expect that this revision stays alive.
 
 Regards
 Dominique
 
 
 BR,
 
 Jukka Zitting
 



Re: Lifetime of revision identifiers

2012-04-03 Thread Dominique Pfister
Hi,

On Apr 3, 2012, at 11:51 AM, Jukka Zitting wrote:

 Hi,
 
 On Tue, Apr 3, 2012 at 11:47 AM, Dominique Pfister dpfis...@adobe.com wrote:
 I made a second thought, and I'm no longer sure I would allow
 a revision to be reachable by some client interaction.
 
 You'd drop revision identifiers from the MicroKernel interface? That's
 a pretty big design change...

No, I probably did not make myself clear: I would not keep a revision (and all 
its nodes) reachable in terms of garbage collection, simply because it was 
accessed by a client some time ago.

Dominique

 
 BR,
 
 Jukka Zitting



[jira] [Commented] (OAK-10) Impedance mismatch between signatures of NodeState#getChildeNodeEntries and MicroKernel#getNodes

2012-03-12 Thread Dominique Pfister (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-10?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227676#comment-13227676
 ] 

Dominique Pfister commented on OAK-10:
--

 I suggest we change it to int for both methods since a length bigger than 
 Integer.MAX_VALUE
 is not realistic anyway. On a related note, I'd rename the parameter from 
 length to count.

+1 for both changes.
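The two agreed changes can be sketched as below (illustrative names, not the actual Oak source): the paging parameters become int, matching MicroKernel#getNodes, and "length" is renamed to "count" to say what it actually means.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the signature change discussed in OAK-10.
public class Main {

    interface NodeStateSketch {
        // before: getChildNodeEntries(long offset, long length)
        List<String> getChildNodeEntries(int offset, int count);
    }

    // A toy paging helper, just to show the contract.
    static List<String> page(List<String> children, int offset, int count) {
        List<String> result = new ArrayList<>();
        for (int i = offset; i < Math.min(offset + count, children.size()); i++) {
            result.add(children.get(i));
        }
        return result;
    }

    public static void main(String[] args) {
        NodeStateSketch state =
                (offset, count) -> page(List.of("a", "b", "c", "d"), offset, count);
        // int is enough: more than Integer.MAX_VALUE children is unrealistic.
        System.out.println(state.getChildNodeEntries(1, 2)); // [b, c]
    }
}
```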

 Impedance mismatch between signatures of NodeState#getChildeNodeEntries and 
 MicroKernel#getNodes
 

 Key: OAK-10
 URL: https://issues.apache.org/jira/browse/OAK-10
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Michael Dürig
 Fix For: 0.1


 NodeState#getChildeNodeEntries uses long for its length parameter while 
 MicroKernel#getNodes uses int. In order to implement the NodeState interface 
 on top of the Microkernel, it would be favourable if the type would be the 
 same. 
 I suggest we change it to int for both methods since a length bigger than 
 Integer.MAX_VALUE is not realistic anyway. On a related note, I'd rename the 
 parameter from length to count. 





[jira] [Commented] (OAK-5) JCR bindings for Oak

2012-03-12 Thread Dominique Pfister (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227428#comment-13227428
 ] 

Dominique Pfister commented on OAK-5:
-

What about this, Tom: I use Eclipse, too, and if I mvn eclipse:eclipse the 
parent project and open it, it will automatically open all submodules (with the 
correct build inter-dependencies) and allow me to move classes from one module 
to the other. Would that work for you?

 JCR bindings for Oak
 

 Key: OAK-5
 URL: https://issues.apache.org/jira/browse/OAK-5
 Project: Jackrabbit Oak
  Issue Type: New Feature
Reporter: Jukka Zitting
  Labels: jcr
 Fix For: 0.1


 One of the proposed goals for the 0.1 release is at least a basic JCR binding 
 for Oak. Most of that already exists in /jackrabbit/sandbox, we just need to 
 decide where and how to place it in Oak. I think we should either put it all 
 under o.a.j.oak.jcr in oak-core, or create a separate oak-jcr component for 
 the JCR binding.
 As for functionality, it would be nice if the JCR binding was able to do at 
 least the following:
 {code}
 Repository repository = JcrUtils.getRepository(...);
 Session session = repository.login(...);
 try {
     // Create
     session.getRootNode().addNode("hello")
             .setProperty("world", "hello world");
     session.save();

     // Read
     assertEquals(
             "hello world",
             session.getProperty("/hello/world").getString());

     // Update
     session.getNode("/hello").setProperty("world", "Hello, World!");
     session.save();
     assertEquals(
             "Hello, World!",
             session.getProperty("/hello/world").getString());

     // Delete
     session.getNode("/hello").remove();
     session.save();
     assertTrue(!session.propertyExists("/hello/world"));
 } finally {
     session.logout();
 }
 {code}





[jira] [Commented] (OAK-5) JCR bindings for Oak

2012-03-08 Thread Dominique Pfister (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13225293#comment-13225293
 ] 

Dominique Pfister commented on OAK-5:
-

 I think we should either put it all under o.a.j.oak.jcr in oak-core,
 or create a separate oak-jcr component for the JCR binding.

I'm in favor of a separate component, because oak-core already contains quite 
a few packages and classes (and this separation would also underline the 
different layers), but I wouldn't mind a separate package either.






Re: [jr3] Tree model

2012-03-05 Thread Dominique Pfister

Hi,

On Mar 5, 2012, at 10:20 AM, Thomas Mueller wrote:


Note that if we do mandate orderability on child nodes, supporting


So in your model a tree is a node? A leaf is a property? What is a
branch, a child node?


I prefer using the same terms for the same things consistently. A node
shouldn't sometimes be called node and sometimes tree. I understand we
have a naming conflict here with javax.jcr.Node and Property, but what
about name combinations:

- NodeData
- PropertyData
- ChildList


I usually associate the suffix Data with classes that are rather dumb and 
contain nothing but setters and getters. IMO, the Tree/Leaf/Branch naming 
convention looks much more consistent and intuitive, now that the 
Tree.isLeaf() method is gone, which sounded illogical...


Dominique



Re: [jr3] Tree model

2012-03-05 Thread Dominique Pfister

Hi,

On Mar 5, 2012, at 1:40 PM, Thomas Mueller wrote:


Hi,

If we want to use distinct interfaces for read-only and writable  
nodes,

what about ImmutableNode and MutableNode extends ImmutableNode.


That's troublesome because then a client that's given an  
ImmutableNode

can't rely on the instance being immutable (because it could in fact
be a MutableNode instance).


True. So the interfaces would need to be distinct. They could both  
extend

NodeState I guess, with MutableNode adding more methods.

The draft looks very good to me now. What about:

   int getChildNodeCount() and
   getChildNodeEntries(int offset, int length):
   replace int with long


More than 2 billion child nodes? Don't you think that's a bit too  
optimistic?



   new method
   ImmutableNode getImmutableNode() or
   ImmutableNode makeImmutable()
   (for ImmutableNode it would typically return this)

   new method
   MutableNode getMutableNode() or
   MutableNode makeMutable()

   (for MutableNode it would typically return this)


Why do we need 2 interfaces? Since NodeState contains nothing but read-only 
methods, I consider it immutable, and would rather add a subinterface 
MutableNode.
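The layout suggested here can be sketched as follows (hypothetical names): a read-only NodeState interface is immutable from the caller's point of view, and mutation lives solely in a subinterface.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of "read-only base interface, mutable subinterface". Illustrative
// names; not the actual Oak interfaces under discussion.
public class Main {

    interface NodeState {
        String getProperty(String name);      // read-only contract
    }

    interface MutableNode extends NodeState {
        void setProperty(String name, String value);
    }

    static class InMemoryNode implements MutableNode {
        private final Map<String, String> properties = new HashMap<>();

        @Override
        public String getProperty(String name) {
            return properties.get(name);
        }

        @Override
        public void setProperty(String name, String value) {
            properties.put(name, value);
        }
    }

    public static void main(String[] args) {
        MutableNode node = new InMemoryNode();
        node.setProperty("title", "hello");
        // Handed out as a NodeState, the same object exposes no setters.
        NodeState readOnlyView = node;
        System.out.println(readOnlyView.getProperty("title")); // hello
    }
}
```

Note the caveat raised earlier in the thread still applies: a client given a NodeState cannot assume the instance is deeply immutable, since it may in fact be a MutableNode underneath.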


Dominique




Regards,
Thomas







Anyway, see https://gist.github.com/1977909 (and below) for my latest
draft of these interfaces. Note also the javadocs.

This draft is fairly close to the model already present in
o.a.j.mk.model, with the most crucial difference being that a
ChildNodeEntry returns a NodeState reference instead of just a  
content

id. In other words, the underlying addressing mechanism is hidden
below this interface.

Note that since we considered it best to decouple the methods for
accessing properties and child nodes, i.e. have getProperty(String)
and getNode(String) instead of just a getItem(String), it actually
makes sense to have a getName() method on the PropertyState instance.
Otherwise a separate PropertyEntry interface would be needed in order
to avoid unnecessary extra string lookups when iterating over all
properties. The desire to avoid extra lookups is also why I'd rather
use a separate interface for properties instead of adding extra
methods to NodeState.

BR,

Jukka Zitting


/**
* A content tree consists of nodes and properties, each of which
* evolves through different states during its lifecycle. This interface
* represents a specific, immutable state of a node in a content tree.
* Depending on context, a NodeState instance can be interpreted as
* representing the state of just that node, of the subtree starting at
* that node, or of an entire tree in case it's a root node.
* <p>
* The crucial difference between this interface and the similarly named
* class in Jackrabbit 2.x is that this interface represents a specific,
* immutable state of a node, whereas the Jackrabbit 2.x class represented
* the current state of a node.
*
* <h2>Properties and child nodes</h2>
* <p>
* A node consists of an unordered set of properties, and an ordered set
* of child nodes. Each property and child node is uniquely named and a
* single name can only refer to a property or a child node, not both at
* the same time.
*
* <h2>Immutability and thread-safety</h2>
* <p>
* As mentioned above, all node and property states are always immutable.
* Thus repeating a method call is always guaranteed to produce the same
* result as before unless some internal error occurs (see below). Note
* however that this immutability only applies to a specific state instance.
* Different states of a node can obviously be different, and in some cases
* even different instances of the same state may behave slightly differently.
* For example due to performance optimization or other similar changes the
* iteration order of properties may be different for two instances of the
* same node state. However, all such changes must file
* <p>
* In addition to being immutable, a specific state instance is guaranteed to
* be fully thread-safe. Possible caching or other internal changes need to
* be properly synchronized so that any number of concurrent clients can
* safely access a state instance.
*
* <h2>Persistence and error-handling</h2>
* <p>
* A node state can be (and often is) backed by local files or network
* resources. All IO operations or related concerns like caching should be
* handled transparently below this interface. Potential IO problems and
* recovery attempts like retrying a timed-out network access need to be
* handled below this interface, and only hard errors should be thrown up
* as {@link RuntimeException unchecked exceptions} that higher level code
* is not expected to be able to recover from.
* <p>
* Since this interface exposes no higher level constructs like access
* controls, locking, node types or even path parsing, there's no way
* for content access to fail because of such concerns. Such functionality
* and related checked exceptions or other control flow constructs should
* be implemented on a 

Re: [jr3] implicit assumptions in MK design?

2012-03-01 Thread Dominique Pfister
Hi Michael,

Are you suggesting, that cluster sync will be provided purely by the underlying 
NoSQL database? Until now, I always assumed that all cluster nodes expose an MK 
interface, and that changes are transmitted to other nodes via calls on this MK 
interface. So in your example, cluster node 2 would see a delete /a/b and the 
question of a broken tree never arises.

Regards
Dominique

On Mar 1, 2012, at 1:53 PM, Michael Marth wrote:

 Hi,
 
 I have thought a bit about how one could go about implementing a micro kernel 
 based on a NoSQL database (think Cassandra or Mongo) where a JCR node would 
 probably be stored as an individual document and the MK implementation would 
 provide the tree on top of that. Consider that you have two or more cluster 
 nodes of such an NoSQL db (each receiving writes from a different SPI) and 
 that these two cluster nodes would be eventually consistent.
 
 It is easy to imagine cases where the tree structure of one node will be 
 temporarily broken (at least for specific implementations, see example 
 below). I am not particularly worried about that, but I wonder if the MK 
 interface design implicitly assumes that the MK always exposes a non-broken 
 tree to the SPI. The second question is whether a particular version of the 
 tree the MK exposes to the SPI is stable over time (or: can it be the case 
 that when the SPI refreshes the current version it might see a different 
 tree? Again, example below).
 
 I think we should be explicit about these assumptions or non-assumptions 
 because either the MK implementer has to take care of them or the higher 
 levels (SPI, client) have to deal with them.
 
 Michael
 
 (*) example from above: consider node structure /a/b/c. On cluster node 1 
 JCR node b is deleted. In order to implement that in a document db the MK on 
 cluster node 1 would need to separately delete b and c. The second cluster 
 node could receive the deletion of b first. So for some time there would be a 
 JCR node c on cluster node 2 that has no parent.

 
 example regarding tree version stability: suppose in the example above that 
 tree version 1 is /a/b/c and tree version 2 is /a. Because deleting b and c 
 will arrive on cluster node 2 as separate events there must either be some 
 additional communication between the cluster nodes so that cluster node 2 
 knows when tree version 2 is fully replicated. Or cluster node 2 will expose 
 a tree version 2 that first looks like /a/b and later as /a (i.e. the same 
 version number's tree will change over time)



Re: [jr3 trade consistency for availability]

2012-02-29 Thread Dominique Pfister

Hi,

On Feb 28, 2012, at 3:54 PM, Marcel Reutegger wrote:

I'd solve this differently. Saves are always performed on one  
partition,

even if some of the change set actually goes beyond a given partition.
this is however assuming that our implementation supports dynamic
partitioning and redistribution (e.g. when a new cluster node is added
to the federation). in this case the excessive part of the change set
would eventually be migrated to the correct cluster node.


I'd like to better understand your approach: if we have, say, partitions P 
and Q, containing subtrees /p and /q, respectively, then a save that spans 
elements in both /p and /q might be saved in P first, and later migrated to 
Q? What happens if this later migration leads to a conflict?


Regards
Dominique



regards
marcel




Re: [jr3] Tree model

2012-02-29 Thread Dominique Pfister

Hi,

On Tue 28, 2012, at 6:32 PM, Michael Dürig wrote:


On 28.2.12 15:40, Thomas Mueller wrote:


One reason to use the MicroKernel API is so we can implement a native
version of the MicroKernel. See also
http://wiki.apache.org/jackrabbit/Goals%20and%20non%20goals%20for%20Jackrab
bit%203 - TBD - Microkernel portable to C:
I think it is valuable to have a language agnostic API as long as  
there is a real chance anyone will ever build a Microkernel or an  
implementation on top of the Microkernel with a language which does  
not interoperate well with Java. However, internally we should break  
down the functionality provided by that API into smaller units with  
well defined and separate functionality as Jukka started to draft it  
with the Tree interface.

Michael
I second your point: having a stringly typed MK interface in order to 
support programming languages such as C should not imply that an internal 
module has to be squeezed into this frame.


Dominique

Re: [jr3 trade consistency for availability]

2012-02-29 Thread Dominique Pfister

Hi,

On Feb 29, 2012, at 2:52 PM, Marcel Reutegger wrote:


Hi,


On Feb 28, 2012, at 3:54 PM, Marcel Reutegger wrote:


I'd solve this differently. Saves are always performed on one
partition,
even if some of the change set actually goes beyond a given  
partition.

this is however assuming that our implementation supports dynamic
partitioning and redistribution (e.g. when a new cluster node is  
added
to the federation). in this case the excessive part of the change  
set

would eventually be migrated to the correct cluster node.


I'd like to better understand your approach: if we have, say,
Partitions P  and Q, containing subtrees /p and /q, respectively,  
then

a save that spans elements in both /p and /q might be saved in P
first, and later migrated to Q? What happens if this later migration
leads to a conflict?


I guess this would be the result of a concurrent save when there's
an additional conflicting save under /q at the same time. good
question... CouchDB solves this with a deterministic algorithm
that simply picks one revision as the latest one and flags the  
conflict.

maybe we could use something similar?


So, this could result in a save on P that initially succeeds but ultimately 
fails, because the concurrent one on Q wins? I'm wondering how this could be 
reflected to an MK client: if a save corresponds to an MK commit call that 
immediately returns a new revision ID, would you suggest that the mentioned 
algorithm adds a "shadow" commit (leading to a new head revision ID) on P, 
that effectively reverts the conflicting save on P?


Dominique



regards
marcel






Re: [jr3 trade consistency for availability]

2012-02-29 Thread Dominique Pfister

On Feb 29, 2012, at 5:45 PM, Michael Dürig wrote:


That's an idea I mentioned earlier already [1]: make cluster sync
transparent to JCR sessions. That is, any modification required by the
sync, should look like just another session operation to JCR clients
(i.e. there should also be observation events for such changes).


Ah, this did not catch my eye: in JR2, cluster syncs are transparent to JCR 
sessions as well, although a conflict never needs to be resolved or a change 
undone, because of its lock-and-sync contract. That a cluster node could 
actually commit a change now and revert it later on the MK level because of 
a conflict - as if some other party had actually performed this 
revert-operation - is a new and interesting idea.


Dominique



Michael

[1]
https://docs.google.com/presentation/pub?id=131sVx5s58jAKE2FSVBfUZVQSl1W820_syyzLYRHGH6E&start=false&loop=false&delayms=3000#slide=id.g4272a65_0_39




Dominique



regards
marcel








Re: [jr3] Tree model

2012-02-28 Thread Dominique Pfister
Hi Michael,

On Feb 28, 2012, at 19:11, Michael Dürig mdue...@apache.org wrote:

 
 Hi Jukka,
 
 Thanks for bringing this up. In a nutshell what you are proposing is to 
 implement trees as persistent data structures.

Hm, that's rather wishful thinking. Right now, we enforce an MVCC model inside 
the MicroKernel - which might be overthrown by some future decision, of course 
- so revisions and nodes are immutable per se. I don't see any need to 
introduce semantic restrictions that are already mandated by the architecture.

Dominique

 I think this approach is 
 very valuable and will save us a lot of trouble in the long run. In 
 particular when it comes to concurrency issues, explicitly handling 
 mutual access to stateful objects has proved very troublesome. With an 
 approach like the one you are proposing we get concurrency virtually for 
 free. To that end, I'd even go further and remove that last bit of 
 mutability you mention in the Javadoc. For constructing new trees I'd 
 rather provide factories which can produce new trees from existing 
 trees. Furthermore I'd rather not extend from Map to further emphasize 
 the immutability aspect. Instead I'd provide a map view, either by 
 providing an asMap() method or by means of an adapter class.
 
 Anyway, I'm sure having a minimal and simple interface for such an 
 important key concept will be of great value.
 
 Michael
 
 
 
 On 28.2.12 14:59, Jukka Zitting wrote:
 Hi,
 
 [Here's me looking at lower-level details for a change.]
 
 I was going through the prototype codebases in the sandbox, trying to
 come up with some clean and simple lowest-common-denominator-style
 interface for representing content trees. Basically a replacement for
 the ItemState model in current Jackrabbit.
 
 The trouble I find with both the current Jackrabbit ItemState model
 and the efforts in the sandbox is that this key concept is modeled as
 concrete classes instead of as interfaces. Using an interface to
 describe and document the fundamental tree model gives us a lot of
 extra freedom on the implementation side (lazy loading, decoration,
 virtual content, etc.).
 
 So, how should we go about constructing such an interface? I started
 by laying some ground rules based on concepts from the sandbox and
 past jr3 discussions:
 
   * A content tree is composed of a hierarchy of items
   * Tree items are either leaves or non-leaves
   * Non-leaves contain zero or more named child items (*all* other
 data is stored at leaves)
   * Each child item is *uniquely* named within its parent
   * Names are just opaque strings
   * Leaves contain typed data (strings, numbers, binaries, etc.)
   * Content trees are *immutable* except in specific circumstances
 (transient changes)
 
 As a corollary of such a proposed design, the following features (that
 with a different tree model could be a part of the underlying storage
 model) would need to be handled as higher level constructs:
 
   * Same-name-siblings (perhaps by name-mangling)
   * Namespaces and other name semantics
   * Ordering of child nodes (perhaps with a custom order property)
   * Path handling
   * Identifiers and references
   * Node types
   * Versioning
   * etc., etc.
 
 As you can see, it's a very low-level interface I'm talking about.
 With that background, here's what I'm proposing:
 https://gist.github.com/1932695 (also included as text at the end of
 this message). Note that this proposal covers only the interfaces for
 accessing content (with a big TODO in the Leaf interface). A separate
 builder or factory interface will be needed for content changes in
 case this design is adopted.
 
 Please criticize, as this is just a quick draft and I'm likely to miss
 something fairly important. I'm hoping to evolve this to something we
 could use as a well-documented and thought-out internal abstraction for
 jr3. Or, if this idea is too broken, to provoke someone to provide a
 good counter-proposal. :-)
 
 BR,
 
 Jukka Zitting
 
 
 
 import java.util.Map;
 
 /**
  * Trees are the key concept in a hierarchical content repository.
  * This interface is a low-level tree node representation that just
  * maps zero or more string names to corresponding child nodes.
  * Depending on context, a Tree instance can be interpreted as
  * representing just that tree node, the subtree starting at that node,
  * or an entire tree in case it's a root node.
  * <p>
  * For familiarity and easy integration with existing libraries this
  * interface extends the generic {@link Map} interface instead of
  * providing a custom alternative. Note also that this interface is
  * named Tree instead of something like Item or Node to avoid confusion
  * with the related JCR interfaces.
  * <p>
  * <h2>Leaves and non-leaves</h2>
  * <p>
  * Normal tree nodes only contain structural information expressed as
  * the set of child nodes and their names. The content of a tree, expressed
  * in data types like strings, numbers and binaries, is stored in 

[jira] [Updated] (JCR-3232) FileRevision extensibility issues

2012-02-08 Thread Dominique Pfister (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-3232:
---

   Resolution: Fixed
Fix Version/s: 2.5
   Status: Resolved  (was: Patch Available)

Thanks for providing a patch, Mete!

Applied it in revision 1241859.

 FileRevision extensibility issues
 -

 Key: JCR-3232
 URL: https://issues.apache.org/jira/browse/JCR-3232
 Project: Jackrabbit Content Repository
  Issue Type: Wish
  Components: jackrabbit-core
Reporter: Mete Atamel
Priority: Trivial
 Fix For: 2.5

 Attachments: FileRevision.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 It'd be nice to make FileRevision more extensible by changing some of its 
 private variables to protected so it can be extended more easily when needed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (JCR-3229) FileRevision should have a flag to control whether to sync the file on every write.

2012-02-07 Thread Dominique Pfister (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13202417#comment-13202417
 ] 

Dominique Pfister commented on JCR-3229:


Hi Mete,

Thanks for providing a patch. I noticed that you changed both FileJournal's and 
DatabaseJournal's revision.log to not sync after writing (as opposed to the 
current state, where they sync after every write), which might result in 
information loss if that cluster node crashes before the file system eventually 
writes the changes back to disk. Don't you think this might be problematic?

Kind regards
Dominique

 FileRevision should have a flag to control whether to sync the file on every 
 write.
 ---

 Key: JCR-3229
 URL: https://issues.apache.org/jira/browse/JCR-3229
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Reporter: Mete Atamel
Priority: Minor
  Labels: performance
 Fix For: 2.5

 Attachments: 3229.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 The FileRevision class syncs the underlying revision.log file on every write, 
 which can be a performance problem. Add a boolean flag to control whether to 
 sync the file on every write.
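 A minimal sketch of the idea (class and method names here are illustrative, 
 not the actual FileRevision API): a revision counter persisted in a file, 
 with a constructor flag controlling whether each write is forced to stable 
 storage. Disabling the sync is faster but, as noted above, a crash may lose 
 the most recent revision.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Illustrative sketch only: a file-backed revision counter with an
// optional force-to-disk on every write.
class RevisionFile {
    private final RandomAccessFile raf;
    private final boolean syncOnWrite;

    RevisionFile(File file, boolean syncOnWrite) throws IOException {
        this.raf = new RandomAccessFile(file, "rw");
        this.syncOnWrite = syncOnWrite;
    }

    void set(long revision) throws IOException {
        raf.seek(0);
        raf.writeLong(revision);
        if (syncOnWrite) {
            // force(true) also flushes file metadata, comparable to fsync(2);
            // skipping it leaves the write in the OS page cache.
            raf.getChannel().force(true);
        }
    }

    long get() throws IOException {
        raf.seek(0);
        return raf.readLong();
    }

    void close() throws IOException {
        raf.close();
    }
}
```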





[jira] [Updated] (JCR-3229) FileRevision should have a flag to control whether to sync the file on every write.

2012-02-07 Thread Dominique Pfister (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-3229:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Good, I applied your patch (with sync set to true for these journals) in 
revision 1241481.

 FileRevision should have a flag to control whether to sync the file on every 
 write.
 ---

 Key: JCR-3229
 URL: https://issues.apache.org/jira/browse/JCR-3229
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core
Reporter: Mete Atamel
Priority: Minor
  Labels: performance
 Fix For: 2.5

 Attachments: 3229.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 The FileRevision class syncs the underlying revision.log file on every write, 
 which can be a performance problem. Add a boolean flag to control whether to 
 sync the file on every write.





Re: [jr3 Microkernel] compiler settings in pom.xml

2012-01-31 Thread Dominique Pfister
Hi Felix,

On Jan 31, 2012, at 9:15 AM, Felix Meschberger wrote:

 Hi,
 
 Am 30.01.2012 um 17:54 schrieb Dominique Pfister:
 
 Hi,
 
 Just stumbled across this compilation setting in microkernel's pom.xml:
 
<plugin>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>2.3.2</version>
  <configuration>
    <source>1.5</source>
    <target>1.5</target>
  </configuration>
</plugin>
 
 When actually _using_ a 1.5 jdk (on Mac OS X this can be done with the  
 Java Preferences tool), the maven-compiler-plugin will report 12  
 errors due to use of Java 1.6 methods.
 
 Using the animal sniffer plugin would help, yet ...
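 Wiring that in could look like the fragment below, alongside the existing 
 compiler-plugin entry (plugin and signature versions are illustrative for 
 that time frame; the java15 signature artifact describes the JDK 1.5 API 
 to check against):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>animal-sniffer-maven-plugin</artifactId>
  <version>1.7</version>
  <configuration>
    <signature>
      <groupId>org.codehaus.mojo.signature</groupId>
      <artifactId>java15</artifactId>
      <version>1.0</version>
    </signature>
  </configuration>
  <executions>
    <execution>
      <phase>test</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```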

Cool, just gave it a try:

[ERROR] Undefined reference: java/io/IOException.<init>(Ljava/lang/Throwable;)V 
in 
/Users/dpfister/Projects/microkernel/target/classes/org/apache/jackrabbit/mk/blobs/BlobStoreInputStream.class
[ERROR] Undefined reference: 
java/util/Arrays.copyOfRange([Ljava/lang/Object;IILjava/lang/Class;)[Ljava/lang/Object;
 in 
/Users/dpfister/Projects/microkernel/target/classes/org/apache/jackrabbit/mk/index/BTreeLeaf.class
[ERROR] Undefined reference: 
java/util/Arrays.copyOfRange([Ljava/lang/Object;IILjava/lang/Class;)[Ljava/lang/Object;
 in 
/Users/dpfister/Projects/microkernel/target/classes/org/apache/jackrabbit/mk/index/BTreeNode.class
[ERROR] Undefined reference: java/io/PipedInputStream.<init>(I)V in 
/Users/dpfister/Projects/microkernel/target/classes/org/apache/jackrabbit/mk/util/MemorySockets$PipedSocket.class

Good to know, thanks!
Dominique



[jr3 Microkernel] compiler settings in pom.xml

2012-01-30 Thread Dominique Pfister

Hi,

Just stumbled across this compilation setting in microkernel's pom.xml:

<plugin>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>2.3.2</version>
  <configuration>
    <source>1.5</source>
    <target>1.5</target>
  </configuration>
</plugin>

When actually _using_ a 1.5 jdk (on Mac OS X this can be done with the  
Java Preferences tool), the maven-compiler-plugin will report 12  
errors due to use of Java 1.6 methods.


So my question is: should we change this setting from 1.5 to 1.6  
(which I favor) or review our code?


Regards
Dominique


Re: svn commit: r1213680 - /jackrabbit/sandbox/microkernel/src/test/java/org/apache/jackrabbit/mk/ConflictingMoveTest.java

2011-12-13 Thread Dominique Pfister
Hm, no idea how this happened. It wasn't intentional, though; I reverted the 
test removal...

On Dec 13, 2011, at 2:52 PM, Michael Dürig wrote:

 
 On 13.12.11 13:30, dpfis...@apache.org wrote:
 Author: dpfister
 Date: Tue Dec 13 13:30:13 2011
 New Revision: 1213680
 
 URL: http://svn.apache.org/viewvc?rev=1213680view=rev
 Log:
 Replace '\r' (Mac OS line terminator) with '\n' (Unix line terminator)
 
 
 [...]
 
   @Test
  -public void collidingMove() {
  -    String head = mk.getHeadRevision();
  -
  -    head = mk.commit("/", "+\"x\" : {} \r+\"y\" : {}\n", head, "");
  -
  -    try {
  -        mk.commit("/", ">\"x\" : \"y\"", head, "");
  -        fail("expected to fail since /y already exists");
  -    } catch (MicroKernelException e) {
  -        // expected
  -    }
  -}
 
 Why remove this test?
 
 Michael



[jira] [Created] (JCR-3138) Skip sync delay when changes are found

2011-11-07 Thread Dominique Pfister (Created) (JIRA)
Skip sync delay when changes are found
--

 Key: JCR-3138
 URL: https://issues.apache.org/jira/browse/JCR-3138
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: clustering
Affects Versions: 2.3.2
Reporter: Dominique Pfister
Assignee: Dominique Pfister


The cluster synchronization on a slave always waits for some time (as specified 
in the sync delay) before fetching changes. If a lot of changes are being 
written to the master, a slave will fall considerably behind the master in 
terms of revisions, which may endanger the integrity of the cluster if the 
master crashes. I therefore suggest that a slave should immediately contact the 
master again after some changes have been found, until it sees no more changes.
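The suggested loop can be sketched as follows (method names are made up for 
illustration, not the actual Journal API): instead of sleeping for the full 
sync delay on every iteration, the slave keeps fetching from the master while 
changes are found and only sleeps once the journal reports no more changes.

```java
// Hypothetical sketch of a sync loop that skips the delay while the
// journal still has pending changes.
abstract class SyncLoop implements Runnable {
    private final long syncDelayMillis;
    private volatile boolean stopped;

    SyncLoop(long syncDelayMillis) {
        this.syncDelayMillis = syncDelayMillis;
    }

    /** Fetches one batch of journal records; returns true if any were found. */
    protected abstract boolean syncOnce();

    /** Drains all pending changes before waiting again. */
    public void drainPending() {
        while (syncOnce()) {
            // loop immediately so the slave keeps up with a busy master
        }
    }

    public void run() {
        while (!stopped) {
            drainPending();
            try {
                Thread.sleep(syncDelayMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public void stop() {
        stopped = true;
    }
}
```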





[jira] [Commented] (JCR-3138) Skip sync delay when changes are found

2011-11-07 Thread Dominique Pfister (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13145335#comment-13145335
 ] 

Dominique Pfister commented on JCR-3138:


Hi Bart,

Thank you for your comment. You're absolutely right: in a standard cluster, 
where the data itself is in shared storage such as a database, the slave will 
of course not talk to the master itself, and the data does not get lost when 
the master crashes. We have a customized setup, however, where the data is not 
shared (every cluster node has a complete copy of the data) and where the 
slaves actually talk to the master. In order to support such a configuration, 
I'd like to make the sync loop more customizable by the actual journal 
implementation.

Kind regards
Dominique

 Skip sync delay when changes are found
 --

 Key: JCR-3138
 URL: https://issues.apache.org/jira/browse/JCR-3138
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: clustering
Affects Versions: 2.3.2
Reporter: Dominique Pfister
Assignee: Dominique Pfister

 The cluster synchronization on a slave always waits for some time (as 
 specified in the sync delay) before fetching changes. If a lot of changes are 
 being written to the master, a slave will fall considerably behind the master 
 in terms of revisions, which may endanger the integrity of the cluster if the 
 master crashes. I therefore suggest that a slave should immediately contact 
 the master again after some changes have been found, until it sees no more 
 changes.





[jira] [Commented] (JCR-3138) Skip sync delay when changes are found

2011-11-07 Thread Dominique Pfister (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13145556#comment-13145556
 ] 

Dominique Pfister commented on JCR-3138:


Hi Bart,

Not quite: we already have our own journal implementation, derived from 
o.a.j.core.journal.AbstractJournal, and what I'd like to do now is break the 
body of doSync() into multiple steps that can be separately invoked by a 
subclass without having to resort to copy-paste. AFAICS, JCR-2968 is related 
to database journaling, whereas our implementation is not, so I don't think 
these two issues should be combined. What do you think?

Dominique

 Skip sync delay when changes are found
 --

 Key: JCR-3138
 URL: https://issues.apache.org/jira/browse/JCR-3138
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: clustering
Affects Versions: 2.3.2
Reporter: Dominique Pfister
Assignee: Dominique Pfister

 The cluster synchronization on a slave always waits for some time (as 
 specified in the sync delay) before fetching changes. If a lot of changes are 
 being written to the master, a slave will fall considerably behind the master 
 in terms of revisions, which may endanger the integrity of the cluster if the 
 master crashes. I therefore suggest that a slave should immediately contact 
 the master again after some changes have been found, until it sees no more 
 changes.





[jira] Commented: (JCR-2910) Please add JackrabbitSession.isAdmin

2011-03-08 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13003931#comment-13003931
 ] 

Dominique Pfister commented on JCR-2910:


 What about JackrabbitSession.getUser()?

+1

 Please add JackrabbitSession.isAdmin
 

 Key: JCR-2910
 URL: https://issues.apache.org/jira/browse/JCR-2910
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
Reporter: Thomas Mueller
Priority: Minor

 Currently finding out if the session user is an admin requires:
 JackrabbitSession js = (JackrabbitSession) session;
 User user = ((User) js.getUserManager().getAuthorizable(session.getUserID()));
 boolean isAdmin = user.isAdmin();
 Or: ((SessionImpl) session).isAdmin(). However, casting to an implementation 
 is problematic for several reasons.
 I think it would make sense to add isAdmin() to the JackrabbitSession 
 interface, so the code above would be:
 ((JackrabbitSession) session).isAdmin()



[jira] Commented: (JCR-2874) Locked helper class improvements

2011-02-16 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-2874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12995284#comment-12995284
 ] 

Dominique Pfister commented on JCR-2874:


Good to know, Angela, but I don't think it is necessary to get personal.

 Locked helper class improvements
 

 Key: JCR-2874
 URL: https://issues.apache.org/jira/browse/JCR-2874
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-jcr-commons, jackrabbit-spi-commons
Reporter: Alex Parvulescu
Priority: Minor
 Attachments: lock_upgrade.diff, lock_upgrade_2_tests.patch, 
 lock_upgrade_2_wrapper.patch


 Currently there are two helper classes called Locked that serve the same 
 purpose:
 one in jcr-commons 
 http://jackrabbit.apache.org/api/2.2/org/apache/jackrabbit/util/Locked.html
 and the other one in spi-commons 
 http://jackrabbit.apache.org/api/2.2/org/apache/jackrabbit/spi/commons/lock/Locked.html
 There was a discussion a while back about deprecating one of them. The issue 
 affected JR 1.4 and is now closed; I guess the old patches were never 
 applied: https://issues.apache.org/jira/browse/JCR-1169
 Anyway, I propose to keep the one in jcr-commons and deprecate the other one.
 Also, while we are on the matter, I'd like to propose some improvements to 
 this class:
 1. make the lock configurable, basically add a flag to say if it is session 
 scoped or not (currently it is hard coded as a session wide lock).
 2. upgrade the lock code to use the LockManager 
 (http://www.day.com/maven/javax.jcr/javadocs/jcr-2.0/javax/jcr/lock/LockManager.html)
  - this means that the class will rely only on the JCR 2.0 API. This is a 
 potentially breaking change, although a lot of the classes in this module 
 already depend directly on the 2.0 API
 3. allow the Locked class to use generics. The biggest issue here is the 
 timeout behavior: on timeout you get the TIMED_OUT token object, and you have 
 to compare the response to that, to make sure that the code ran properly. I 
 think the simplest solution would be to not touch the class directly and 
 build a wrapper class that is generified, and throws a RepositoryException in 
 case of a timeout.
 This would turn this code (from the javadoc)
   Node counter = ...;
   Object ret = new Locked() {
       protected Object run(Node counter) throws RepositoryException {
           ...
       }
   }.with(counter, false);
   if (ret == Locked.TIMED_OUT) {
       // do whatever you think is appropriate in this case
   } else {
       // get the value
       long nextValue = ((Long) ret).longValue();
   }
 into 
   Node counter = ...;
   Long ret = new LockedWrapper<Long>() {
       protected Object run(Node counter) throws RepositoryException {
           ...
       }
   }.with(counter, false);
   // handle the timeout as a RepositoryException
 4. lock tests location? (This is more of an observation than an actual issue; 
 it came to me via 'find . -name *Lock*Test.java'.)
 There are some lock tests in jackrabbit-core: 
  - jackrabbit-core/src/test/java/org/apache/jackrabbit/core/LockTest.java
  - 
 jackrabbit-core/src/test/java/org/apache/jackrabbit/core/lock/LockTimeoutTest.java
  - 
 jackrabbit-core/src/test/java/org/apache/jackrabbit/core/lock/ConcurrentLockingWithTransactionsTest.java
  - 
 jackrabbit-core/src/test/java/org/apache/jackrabbit/core/lock/ExtendedLockingTest.java
  - 
 jackrabbit-core/src/test/java/org/apache/jackrabbit/core/lock/ConcurrentLockingTest.java
 ...some in jackrabbit-jcr2spi:
  - 
 jackrabbit-jcr2spi/src/test/java/org/apache/jackrabbit/jcr2spi/lock/OpenScopedLockTest.java
  - 
 jackrabbit-jcr2spi/src/test/java/org/apache/jackrabbit/jcr2spi/lock/SessionScopedLockTest.java
  - 
 jackrabbit-jcr2spi/src/test/java/org/apache/jackrabbit/jcr2spi/lock/DeepLockTest.java
  - 
 jackrabbit-jcr2spi/src/test/java/org/apache/jackrabbit/jcr2spi/lock/AbstractLockTest.java
 and others in jackrabbit-jcr-tests:
  - 
 jackrabbit-jcr-tests/src/main/java/org/apache/jackrabbit/test/api/observation/LockingTest.java
  - 
 jackrabbit-jcr-tests/src/main/java/org/apache/jackrabbit/test/api/lock/SetValueLockExceptionTest.java
  - 
 jackrabbit-jcr-tests/src/main/java/org/apache/jackrabbit/test/api/lock/LockManagerTest.java
  - 
 jackrabbit-jcr-tests/src/main/java/org/apache/jackrabbit/test/api/lock/OpenScopedLockTest.java
  - 
 jackrabbit-jcr-tests/src/main/java/org/apache/jackrabbit/test/api/lock/LockTest.java
  - 
 jackrabbit-jcr-tests/src/main/java/org/apache/jackrabbit/test/api/lock/SessionScopedLockTest.java
  - 
 jackrabbit-jcr-tests/src/main/java/org/apache/jackrabbit/test/api/lock/DeepLockTest.java
  - 
 jackrabbit-jcr-tests/src/main/java/org/apache/jackrabbit/test/api/lock/AbstractLockTest.java
 I'd like to move this one: org.apache.jackrabbit.core.LockTest from

[jira] Updated: (JCR-2881) Deadlock on version operations in a clustered environment

2011-02-07 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-2881:
---

Status: Open  (was: Patch Available)

 Deadlock on version operations in a clustered environment
 -

 Key: JCR-2881
 URL: https://issues.apache.org/jira/browse/JCR-2881
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: clustering
Affects Versions: 2.2.3
Reporter: Dominique Pfister
Assignee: Dominique Pfister

 Version operations in a cluster may end up in a deadlock: a write operation 
 in the version store will acquire the version manager's write lock (N1.VW) 
 and subsequently the cluster journal's write lock (N1.JW). Another cluster 
 node's write operation in some workspace will acquire the journal's write 
 lock (N2.JW) and first process the journal record log: if some of these 
 changes concern the version store, the version manager's read lock (N2.VR) 
 has to be acquired in order to deliver them. If the first cluster node 
 reaches N1.VW, and the second reaches N2.JW, we have a deadlock. The same 
 scenario takes place when the second cluster node synchronizes to the latest 
 journal changes and reaches N2.JR, when the first cluster node is in N1.VW.





[jira] Updated: (JCR-2881) Deadlock on version operations in a clustered environment

2011-02-07 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-2881:
---

Affects Version/s: (was: 2.2.2)
   2.2.3
   Status: Patch Available  (was: Open)

 Deadlock on version operations in a clustered environment
 -

 Key: JCR-2881
 URL: https://issues.apache.org/jira/browse/JCR-2881
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: clustering
Affects Versions: 2.2.3
Reporter: Dominique Pfister
Assignee: Dominique Pfister

 Version operations in a cluster may end up in a deadlock: a write operation 
 in the version store will acquire the version manager's write lock (N1.VW) 
 and subsequently the cluster journal's write lock (N1.JW). Another cluster 
 node's write operation in some workspace will acquire the journal's write 
 lock (N2.JW) and first process the journal record log: if some of these 
 changes concern the version store, the version manager's read lock (N2.VR) 
 has to be acquired in order to deliver them. If the first cluster node 
 reaches N1.VW, and the second reaches N2.JW, we have a deadlock. The same 
 scenario takes place when the second cluster node synchronizes to the latest 
 journal changes and reaches N2.JR, when the first cluster node is in N1.VW.





[jira] Updated: (JCR-2881) Deadlock on version operations in a clustered environment

2011-02-07 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-2881:
---

Attachment: jackrabbit-core-2.2.3.patch

Patch

 Deadlock on version operations in a clustered environment
 -

 Key: JCR-2881
 URL: https://issues.apache.org/jira/browse/JCR-2881
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: clustering
Affects Versions: 2.2.3
Reporter: Dominique Pfister
Assignee: Dominique Pfister
 Attachments: jackrabbit-core-2.2.3.patch


 Version operations in a cluster may end up in a deadlock: a write operation 
 in the version store will acquire the version manager's write lock (N1.VW) 
 and subsequently the cluster journal's write lock (N1.JW). Another cluster 
 node's write operation in some workspace will acquire the journal's write 
 lock (N2.JW) and first process the journal record log: if some of these 
 changes concern the version store, the version manager's read lock (N2.VR) 
 has to be acquired in order to deliver them. If the first cluster node 
 reaches N1.VW, and the second reaches N2.JW, we have a deadlock. The same 
 scenario takes place when the second cluster node synchronizes to the latest 
 journal changes and reaches N2.JR, when the first cluster node is in N1.VW.





[jira] Updated: (JCR-2881) Deadlock on version operations in a clustered environment

2011-02-07 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-2881:
---

Status: Patch Available  (was: Open)

Patch submitted.

 Deadlock on version operations in a clustered environment
 -

 Key: JCR-2881
 URL: https://issues.apache.org/jira/browse/JCR-2881
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: clustering
Affects Versions: 2.2.3
Reporter: Dominique Pfister
Assignee: Dominique Pfister
 Attachments: jackrabbit-core-2.2.3.patch


 Version operations in a cluster may end up in a deadlock: a write operation 
 in the version store will acquire the version manager's write lock (N1.VW) 
 and subsequently the cluster journal's write lock (N1.JW). Another cluster 
 node's write operation in some workspace will acquire the journal's write 
 lock (N2.JW) and first process the journal record log: if some of these 
 changes concern the version store, the version manager's read lock (N2.VR) 
 has to be acquired in order to deliver them. If the first cluster node 
 reaches N1.VW, and the second reaches N2.JW, we have a deadlock. The same 
 scenario takes place when the second cluster node synchronizes to the latest 
 journal changes and reaches N2.JR, when the first cluster node is in N1.VW.





[jira] Commented: (JCR-2881) Deadlock on version operations in a clustered environment

2011-02-07 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12991326#comment-12991326
 ] 

Dominique Pfister commented on JCR-2881:


Solution looks as follows: when doing a cluster sync, whether it is for reading 
or writing, always acquire the version manager's read lock first.
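That fix amounts to a global lock ordering. A minimal sketch (class and field 
names are invented for illustration, not the Jackrabbit internals): every 
cluster sync takes the version manager's read lock before touching the journal 
lock, so no thread can hold the journal lock while waiting on the version 
manager, which removes the circular wait described in the issue.

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch of the lock-ordering fix: version manager lock
// always before journal lock, for both read and write syncs.
class ClusterSyncSketch {
    private final ReadWriteLock versionLock = new ReentrantReadWriteLock();
    private final ReadWriteLock journalLock = new ReentrantReadWriteLock();

    void sync(Runnable processRecords) {
        versionLock.readLock().lock(); // version manager lock first, always
        try {
            journalLock.writeLock().lock();
            try {
                // journal records may touch the version store; the version
                // manager's read lock is already held, so no extra wait here
                processRecords.run();
            } finally {
                journalLock.writeLock().unlock();
            }
        } finally {
            versionLock.readLock().unlock();
        }
    }
}
```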

 Deadlock on version operations in a clustered environment
 -

 Key: JCR-2881
 URL: https://issues.apache.org/jira/browse/JCR-2881
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: clustering
Affects Versions: 2.2.3
Reporter: Dominique Pfister
Assignee: Dominique Pfister
 Attachments: jackrabbit-core-2.2.3.patch


 Version operations in a cluster may end up in a deadlock: a write operation 
 in the version store will acquire the version manager's write lock (N1.VW) 
 and subsequently the cluster journal's write lock (N1.JW). Another cluster 
 node's write operation in some workspace will acquire the journal's write 
 lock (N2.JW) and first process the journal record log: if some of these 
 changes concern the version store, the version manager's read lock (N2.VR) 
 has to be acquired in order to deliver them. If the first cluster node 
 reaches N1.VW, and the second reaches N2.JW, we have a deadlock. The same 
 scenario takes place when the second cluster node synchronizes to the latest 
 journal changes and reaches N2.JR, when the first cluster node is in N1.VW.





[jira] Issue Comment Edited: (JCR-2881) Deadlock on version operations in a clustered environment

2011-02-07 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12991326#comment-12991326
 ] 

Dominique Pfister edited comment on JCR-2881 at 2/7/11 10:26 AM:
-

Solution looks as follows: when doing a cluster sync, whether it is for reading 
or writing, always acquire the version manager's read lock first.

  was (Author: dpfister):
Solution looks as follows: when doing a cluster sync, whether it is for 
reading or writing, always acquire a the version manager's read lock first.
  
 Deadlock on version operations in a clustered environment
 -

 Key: JCR-2881
 URL: https://issues.apache.org/jira/browse/JCR-2881
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: clustering
Affects Versions: 2.2.3
Reporter: Dominique Pfister
Assignee: Dominique Pfister
 Attachments: jackrabbit-core-2.2.3.patch


 Version operations in a cluster may end up in a deadlock: a write operation 
 in the version store will acquire the version manager's write lock (N1.VW) 
 and subsequently the cluster journal's write lock (N1.JW). Another cluster 
 node's write operation in some workspace will acquire the journal's write 
 lock (N2.JW) and first process the journal record log: if some of these 
 changes concern the version store, the version manager's read lock (N2.VR) 
 has to be acquired in order to deliver them. If the first cluster node 
 reaches N1.VW, and the second reaches N2.JW, we have a deadlock. The same 
 scenario takes place when the second cluster node synchronizes to the latest 
 journal changes and reaches N2.JR, when the first cluster node is in N1.VW.





[jira] Commented: (JCR-2881) Deadlock on version operations in a clustered environment

2011-02-07 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12991327#comment-12991327
 ] 

Dominique Pfister commented on JCR-2881:


Fixed in revision 1067901 in trunk.

 Deadlock on version operations in a clustered environment
 -

 Key: JCR-2881
 URL: https://issues.apache.org/jira/browse/JCR-2881
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: clustering
Affects Versions: 2.2.3
Reporter: Dominique Pfister
Assignee: Dominique Pfister
 Attachments: jackrabbit-core-2.2.3.patch


 Version operations in a cluster may end up in a deadlock: a write operation 
 in the version store will acquire the version manager's write lock (N1.VW) 
 and subsequently the cluster journal's write lock (N1.JW). Another cluster 
 node's write operation in some workspace will acquire the journal's write 
 lock (N2.JW) and first process the journal record log: if some of these 
 changes concern the version store, the version manager's read lock (N2.VR) 
 has to be acquired in order to deliver them. If the first cluster node 
 reaches N1.VW, and the second reaches N2.JW, we have a deadlock. The same 
 scenario takes place when the second cluster node synchronizes to the latest 
 journal changes and reaches N2.JR, when the first cluster node is in N1.VW.





[jira] Updated: (JCR-2881) Deadlock on version operations in a clustered environment

2011-02-07 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-2881:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Fixed in 2.2 branch in revision 1067910.

 Deadlock on version operations in a clustered environment
 -

 Key: JCR-2881
 URL: https://issues.apache.org/jira/browse/JCR-2881
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: clustering
Affects Versions: 2.2.3
Reporter: Dominique Pfister
Assignee: Dominique Pfister
 Attachments: jackrabbit-core-2.2.3.patch


 Version operations in a cluster may end up in a deadlock: a write operation 
 in the version store will acquire the version manager's write lock (N1.VW) 
 and subsequently the cluster journal's write lock (N1.JW). Another cluster 
 node's write operation in some workspace will acquire the journal's write 
 lock (N2.JW) and first process the journal record log: if some of these 
 changes concern the version store, the version manager's read lock (N2.VR) 
 has to be acquired in order to deliver them. If the first cluster node 
 reaches N1.VW, and the second reaches N2.JW, we have a deadlock. The same 
 scenario takes place when the second cluster node synchronizes to the latest 
 journal changes and reaches N2.JR, when the first cluster node is in N1.VW.





[jira] Created: (JCR-2809) Lock expires almost immediately

2010-11-10 Thread Dominique Pfister (JIRA)
Lock expires almost immediately
---

 Key: JCR-2809
 URL: https://issues.apache.org/jira/browse/JCR-2809
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: locks
Affects Versions: 2.1.2
Reporter: Dominique Pfister


When a timeoutHint other than Long.MAX_VALUE is given to the 
javax.jcr.lock.LockManager API:

   lock(String absPath, boolean isDeep, boolean isSessionScoped, long 
timeoutHint, String ownerInfo)

a timeoutTime in seconds will be computed as follows 
(o.a.j.core.lock.LockInfo#updateTimeoutTime):

   long now = (System.currentTimeMillis() + 999) / 1000; // round up
   this.timeoutTime = now + timeoutHint;

the TimeoutHandler in o.a.j.core.lock.LockManagerImpl running every second will 
then check whether the timeout has expired (o.a.j.core.lock.LockInfo#isExpired):

public boolean isExpired() {
    return timeoutTime != Long.MAX_VALUE
            && timeoutTime * 1000 > System.currentTimeMillis();
}

Obviously, the latter condition is true from the very beginning. Replacing '>' 
with '<' or '<=' should do the trick.
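A minimal, self-contained sketch of the corrected check (the class is a stand-in for illustration, not the real o.a.j.core.lock.LockInfo): timeoutTime holds the expiry instant in seconds, so the lock is expired only once that instant, converted to milliseconds, lies in the past.

```java
// Sketch of the timeout bookkeeping described above, with the fix applied.
class LockTimeoutSketch {
    final long timeoutTime; // expiry instant in *seconds*, or Long.MAX_VALUE

    LockTimeoutSketch(long timeoutHint) {
        long now = (System.currentTimeMillis() + 999) / 1000; // round up
        this.timeoutTime = (timeoutHint == Long.MAX_VALUE)
                ? Long.MAX_VALUE          // "never expires", also avoids overflow
                : now + timeoutHint;
    }

    // Fixed check: '<' instead of the original '>', so the lock expires
    // only after the expiry instant has passed.
    boolean isExpired() {
        return timeoutTime != Long.MAX_VALUE
                && timeoutTime * 1000 < System.currentTimeMillis();
    }
}
```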

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-2523) StaleItemStateException during distributed transaction

2010-03-03 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12840571#action_12840571
 ] 

Dominique Pfister commented on JCR-2523:


Hi Daniel,

JCR-1554 was fixed in trunk as well as in 1.5.3, so I don't see what code 
you're referring to. Stale property values are never merged, so this looks like 
some code is updating the property concurrently with the code committing your 
transaction. There is a JUnit test class named

org.apache.jackrabbit.core.XATest

containing tests that apply to an XA environment, using simplistic 
UserTransactions. Are you able to add a test method that reproduces your 
problem? That would help a lot in investigating your issue!

Kind regards
Dominique

 StaleItemStateException during distributed transaction
 --

 Key: JCR-2523
 URL: https://issues.apache.org/jira/browse/JCR-2523
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core, jackrabbit-jca
Affects Versions: 2.0.0
 Environment: weblogic 10.3, jdk1.6.0_05, linux
Reporter: Daniel Hasler
 Attachments: JCRRepository.java, UpdateTestServlet.java


 We use the Jackrabbit JCA Component within a Weblogic 10.3 Application Server 
 with distributed transactions between an Oracle Database and the Jackrabbit 
 JCA.
 Updating a node property multiple times in a transaction results in a 
 XAException. Root cause seems to be a StaleItemStateException (see 
 Stack-Trace).
 Googling revealed that a similar bug was fixed for Jackrabbit 1.5.3. Looking 
 through the code showed that the proposed fix in JCR-1554 seems not to be 
 applied on Jackrabbit 2.0 (tag and trunk).
 I tried to apply the proposed fix on the trunk code base, but this seemed not 
 to help.
 Stack-Trace:
 javax.ejb.TransactionRolledbackLocalException: Error committing transaction:; 
 nested exception is: javax.transaction.xa.XAException 
 
 at 
 weblogic.ejb.container.internal.EJBRuntimeUtils.throwTransactionRolledbackLocal(EJBRuntimeUtils.java:238)
   
   
 at 
 weblogic.ejb.container.internal.EJBRuntimeUtils.throwEJBException(EJBRuntimeUtils.java:133)
   
 
 at 
 weblogic.ejb.container.internal.BaseLocalObject.postInvoke1(BaseLocalObject.java:623)
   
   
 at 
 weblogic.ejb.container.internal.BaseLocalObject.postInvokeTxRetry(BaseLocalObject.java:424)
   
 
 at 
 ch.ejpd.sireneit.facade.ejb.ablage.DokumentFacadeBean_7xdnsq_DokumentFacadeImpl.updateStructuredDokument(DokumentFacadeBean_7xdnsq_DokumentFacadeImpl.java:340)
   
 at 
 ch.ejpd.sireneit.access.rest.ablage.DokumentResource.update(DokumentResource.java:453)
   
  
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   
 
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
   
  
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   
   
 at java.lang.reflect.Method.invoke(Method.java:597)   
   
 
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:175)
 
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:67

Re: [jr3] Exception Handling

2010-03-01 Thread Dominique Pfister
I wouldn't call it error code, then: every time something changes
either in the calling code or in the code throwing the exception,
you'll get a different hash code.

Dominique

On Mon, Mar 1, 2010 at 10:32 AM, Michael Dürig michael.due...@day.com wrote:
 == Use Error Codes ==

 What about using something like a hash code (for example of the current
 stack trace) as error code? These would then automatically serve as hash
 tags for Google searches. That is, errors pasted into a discussion forum
 would be indexed by Google. Searching for 'Error
 6df8c9ef2aa1cf7b04b52939b7c1cd7e' would then practically unambiguously lead
 to that post.

 Michael
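The idea floated above can be sketched as follows. This is purely illustrative (the method name and hashing scheme are invented here), and it also makes Dominique's objection concrete: since every stack frame goes into the digest, any change in the calling or throwing code produces a different "error code".

```java
import java.security.MessageDigest;
import java.nio.charset.StandardCharsets;

// Illustrative only: derive a short hash-style "error code" from an
// exception's stack trace, as discussed in the thread.
class ErrorCodeSketch {
    static String errorCode(Throwable t) {
        StringBuilder sb = new StringBuilder(t.getClass().getName());
        for (StackTraceElement e : t.getStackTrace()) {
            // Every frame participates, so the code is sensitive to any
            // change in either the throwing or the calling code.
            sb.append('|').append(e.getClassName())
              .append('#').append(e.getMethodName());
        }
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(sb.toString().getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString(); // 32 hex chars, e.g. usable as a search tag
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```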





[jira] Resolved: (JCR-2473) Cloning a tree containing shareable nodes into another workspace throws ItemExistsException

2010-01-22 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister resolved JCR-2473.


Resolution: Fixed
  Assignee: Dominique Pfister

Thanks for detecting this issue and providing a patch! I added a test case with 
your setup.

Fixed in revision 902110.

 Cloning a tree containing shareable nodes into another workspace throws 
 ItemExistsException
 ---

 Key: JCR-2473
 URL: https://issues.apache.org/jira/browse/JCR-2473
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.0-beta6
Reporter: Thomas Draier
Assignee: Dominique Pfister

 There's a problem when trying to clone a tree in another workspace, when this 
 tree contains shareable nodes.
 Let ws1 be one workspace, which contains one node A. This node has two 
 sub-nodes B and C. B and C share a shareable sub-node D :
      A
     / \
    B   C
    |   |
    D   D
 Let ws2 be a second workspace. Then calling ws2.clone("ws1", "/A", "/A", 
 false) throws an ItemExistsException (copyNodeState, line 1628). This 
 happens when copyNodeState checks whether the nodeId is already present in 
 the workspace, which is the case when copying the second instance of the 
 shareable node. I can't find anything in the specification about this case, 
 but it would be logical to add a share to the node when coming across this 
 situation, at least in the CLONE (and probably COPY) cases. I don't know 
 what would be expected in the CLONE_REMOVE_EXISTING case: we might not want 
 to remove the node if it's shareable, and would also add a share here. 
 I fixed the issue by handling the case where the node is shareable in the 
 COPY and CLONE cases of copyNodeState; you'll find the corresponding patch 
 attached. Do you think this solution is OK?




[jira] Updated: (JCR-2272) Errors during concurrent session import of nodes with same UUIDs

2009-09-10 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister updated JCR-2272:
---

Attachment: JCR-2272_NPE.patch

Applying the revised patch exposes another problem with an ItemState's internal 
data object, which results in a NullPointerException when a session tries to 
modify an item that has been destroyed concurrently by another session.

I'm submitting a cumulative patch (JCR-2272_NPE.patch) that fixes this issue 
and makes ConcurrentImportTest report an unexpected exception, such as an NPE, 
as a RepositoryException again.

 Errors during concurrent session import of nodes with same UUIDs
 

 Key: JCR-2272
 URL: https://issues.apache.org/jira/browse/JCR-2272
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.0-alpha8
Reporter: Tobias Bocanegra
 Attachments: JCR-2272.patch, JCR-2272_NPE.patch, 
 JCR-2272_revised.patch


 21.08.2009 16:22:14 *ERROR* [Executor 0] ConnectionRecoveryManager: could not 
 execute statement, reason: The statement was aborted because it would have 
 caused a duplicate key value in a unique or primary key constraint or unique 
 index identified by 'SQL090821042140130' defined on 'DEFAULT_BUNDLE'., 
 state/code: 23505/2 (ConnectionRecoveryManager.java, line 453)
 21.08.2009 16:22:14 *ERROR* [Executor 0] BundleDbPersistenceManager: failed 
 to write bundle: 6c292772-349e-42b3-8255-7729615c67de 
 (BundleDbPersistenceManager.java, line 1212)
 ERROR 23505: The statement was aborted because it would have caused a 
 duplicate key value in a unique or primary key constraint or unique index 
 identified by 'SQL090821042140130' defined on 'DEFAULT_BUNDLE'.
   at org.apache.derby.iapi.error.StandardException.newException(Unknown 
 Source)
   at 
 org.apache.derby.impl.sql.execute.IndexChanger.insertAndCheckDups(Unknown 
 Source)
   at org.apache.derby.impl.sql.execute.IndexChanger.doInsert(Unknown 
 Source)
   at org.apache.derby.impl.sql.execute.IndexChanger.insert(Unknown Source)
   at org.apache.derby.impl.sql.execute.IndexSetChanger.insert(Unknown 
 Source)
   at org.apache.derby.impl.sql.execute.RowChangerImpl.insertRow(Unknown 
 Source)
   at 
 org.apache.derby.impl.sql.execute.InsertResultSet.normalInsertCore(Unknown 
 Source)
   at org.apache.derby.impl.sql.execute.InsertResultSet.open(Unknown 
 Source)
   at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown 
 Source)
   at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown 
 Source)
   at 
 org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(Unknown 
 Source)
   at org.apache.derby.impl.jdbc.EmbedPreparedStatement.execute(Unknown 
 Source)
   at 
 org.apache.jackrabbit.core.persistence.bundle.util.ConnectionRecoveryManager.executeStmtInternal(ConnectionRecoveryManager.java:371)
   at 
 org.apache.jackrabbit.core.persistence.bundle.util.ConnectionRecoveryManager.executeStmtInternal(ConnectionRecoveryManager.java:298)
   at 
 org.apache.jackrabbit.core.persistence.bundle.util.ConnectionRecoveryManager.executeStmt(ConnectionRecoveryManager.java:261)
   at 
 org.apache.jackrabbit.core.persistence.bundle.util.ConnectionRecoveryManager.executeStmt(ConnectionRecoveryManager.java:239)
   at 
 org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager.storeBundle(BundleDbPersistenceManager.java:1209)
   at 
 org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.putBundle(AbstractBundlePersistenceManager.java:709)
   at 
 org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.storeInternal(AbstractBundlePersistenceManager.java:651)
   at 
 org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.store(AbstractBundlePersistenceManager.java:527)
   at 
 org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager.store(BundleDbPersistenceManager.java:563)
   at 
 org.apache.jackrabbit.core.state.SharedItemStateManager$Update.end(SharedItemStateManager.java:724)
   at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.update(SharedItemStateManager.java:1101)
   at 
 org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:351)
   at 
 org.apache.jackrabbit.core.state.XAItemStateManager.update(XAItemStateManager.java:354)
   at 
 org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:326)
   at 
 org.apache.jackrabbit.core.state.SessionItemStateManager.update(SessionItemStateManager.java:326)
   at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:1098)
   at org.apache.jackrabbit.core.SessionImpl.save(SessionImpl.java:925

[jira] Created: (JCR-2199) Improvements to user management

2009-07-09 Thread Dominique Pfister (JIRA)
Improvements to user management
---

 Key: JCR-2199
 URL: https://issues.apache.org/jira/browse/JCR-2199
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0-alpha3
Reporter: Dominique Pfister







[jira] Commented: (JCR-2199) Improvements to user management

2009-07-09 Thread Dominique Pfister (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-2199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12729118#action_12729118
 ] 

Dominique Pfister commented on JCR-2199:


Allow subclasses of UserManagerImpl/UserImpl for custom implementations, in 
revision 792467.

 Improvements to user management
 ---

 Key: JCR-2199
 URL: https://issues.apache.org/jira/browse/JCR-2199
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0-alpha3
Reporter: Dominique Pfister






Re: Build failed in Hudson: Jackrabbit-trunk » Jackrabbit Core #636

2009-07-09 Thread Dominique Pfister
Hi,

I was able to find out how those repository and repository.xml are
created; it is actually a combination of tests:

o.a.j.test.api.RepositoryFactoryTest#testDefaultRepository:
instantiates a TransientRepository with home=null/config=null and
o.a.j.core.RepositoryFactoryImpl registers this repository in its
static REPOSITORY_INSTANCES map, with key=null.

o.a.j.core.integration.RepositoryFactoryImplTest#testDefaultRepository:
sets system properties with
home=target/repository, config=target/repository.xml and asks
o.a.j.core.RepositoryFactoryImpl to get or create a repository with
key=null. Unfortunately, the transient repository registered above is
still in the map and therefore this test inadvertently uses the wrong
repository, which will create those file system entries.

This can be reproduced by creating a read-only file repository, and
running the two tests in sequence. I then get failures with stack
trace:

javax.jcr.RepositoryException: Invalid repository configuration file:
repository.xml
at 
o.a.j.core.TransientRepository$2.getRepository(TransientRepository.java:230)
...
Caused by: o.a.j.ConfigurationException: Repository directory
repository does not exist
at o.a.j.core.config.RepositoryConfig.create(RepositoryConfig.java:174)
...

I don't know exactly where this should be fixed: use system properties
in RepositoryFactoryTest as well or provide some clean up in
RepositoryFactoryImpl's REPOSITORY_INSTANCES map. Marcel, can you (or
Julian) take a look at this?

Regards
Dominique


On Tue, Jun 30, 2009 at 11:28 PM, Tobias Bocanegra tri...@day.com wrote:
 hi,

 For some reason the Hudson build creates an unexpected ./repository
 directory (not ./target/repository) that gets flagged by RAT. I'm not
 yet sure what causes that, but for now I simply added an extra RAT
 exclude rule that should make the Hudson build succeed again.

 it's not just the hudson build - i have those repository and
 repository.xml, too.
 i guess it's one of the tests that does not use the proper repository home.

 regards, toby



Re: [VOTE] Release Apache Jackrabbit 2.0-alpha4

2009-07-09 Thread Dominique Pfister
On Thu, Jul 9, 2009 at 5:45 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 I have posted a candidate for the Apache Jackrabbit 2.0 alpha4 release at

    http://people.apache.org/~jukka/jackrabbit/2.0-alpha4/

 See the RELEASE-NOTES.txt file (also included at the end of this
 message) for details about this release. The release candidate is a
 jar archive of the sources in
 http://svn.apache.org/repos/asf/jackrabbit/tags/2.0-alpha4. The SHA1
 checksum of the release package is
 5377bff530b0d61df16da91b0923ca09e6aa8a8b.

 Please vote on releasing this package as Apache Jackrabbit 2.0-alpha4.
 The vote is open for the next 72 hours and passes if a majority of at
 least three +1 Jackrabbit PMC votes are cast.

    [ ] +1 Release this package as Apache Jackrabbit 2.0-alpha4
    [ ] -1 Do not release this package because...

[X] +1 Release this package as Apache Jackrabbit 2.0-alpha4

Checksums OK
Build OK

Regards
Dominique


 Note that this is a source-only release, and I won't be posting any
 pre-compiled 2.0-alpha4 binaries on the Jackrabbit web site or the on
 the central Maven repository. The reason for this is to emphasize the
 alpha status of this release.

 Here's my +1.

 PS. We have lately seen some random search-related test failures. If
 your build fails because of that, try again. I hope we get such issues
 sorted out before the final 2.0 release!

 BR,

 Jukka Zitting


 Release Notes -- Apache Jackrabbit -- Version 2.0-alpha4

 Introduction
 

 This is an alpha release of Apache Jackrabbit 2.0. This release implements
 a pre-release version of the JCR 2.0 API, specified by the Java Specification
 Request 283 (JSR 283, http://jcp.org/en/jsr/detail?id=283).

 The purpose of this alpha release is to allow people to test and review
 the new JCR 2.0 features before they are finalized. Feedback to both the
 Jackrabbit project and the JSR 283 expert group is highly appreciated.
 Note that an alpha release is not expected to be feature-complete or
 otherwise suitable for production use.

 Changes in this release
 ---

 Jackrabbit 2.0 is a major upgrade from the earlier 1.x releases. The most
 notable changes in this release are:

  * Upgrade to JCR 2.0. This Jackrabbit release implements and is based
    on a pre-release version of the JCR 2.0 API. See below for a status
    listing of the issues related to JCR 2.0 changes. We expect to achieve
    full feature-completeness in time for the final Jackrabbit 2.0 release.

  * Upgrade to Java 5. All of Jackrabbit (except the jcr-tests component)
    now requires Java 5 as the base platform. Java 1.4 environments are no
    longer supported.

  * Removal of deprecated classes and features. Jackrabbit 2.0 is not
    backwards compatible with client code that used any classes or features
    that had been deprecated during the 1.x release cycle.

  * Separate JCR Commons components. Many of the general-purpose JCR
    components like JCR-RMI and OCM are now developed and released
    separately from the Jackrabbit content repository. See the individual
    components for their most recent releases.

 For more detailed information about all the changes in this and other
 Jackrabbit releases, please see the Jackrabbit issue tracker at

    https://issues.apache.org/jira/browse/JCR

 JCR 2.0 feature completeness
 

 The following 41 top level JCR 2.0 implementation issues are being tracked in
 the Jackrabbit issue tracker. Most of them have already been partially
 implemented, but the issue will only be marked as resolved once no more
 related work is needed.

 Open (16 issues)
  [JCR-1565] JSR 283 lifecycle management
  [JCR-1588] JSR 283: Access Control
  [JCR-1590] JSR 283: Locking
  [JCR-1591] JSR 283: NodeType Management
  [JCR-1712] JSR 283: JCR Names
  [JCR-1974] JSR 283: Evaluate Capabilities
  [JCR-2058] JSR 283: VersionManager and new versioning methods
  [JCR-2062] JSR 283: Repository Compliance
  [JCR-2085] test case (TCK) maintenance for JCR 2.0
  [JCR-2092] make spi query code compatible with JCR 2.0
  [JCR-2116] JSR 283: Built-In Node Types
  [JCR-2137] Use type StaticOperand for fullTextSearchExpression
  [JCR-2140] JSR 283: Baselines
  [JCR-2198] Text.escapeIllegalJCRChars should be adjusted to match the ...
  [JCR-2200] Implement Query.getBindVariableNames()
  [JCR-2201] Implement QueryResult.getSelectorNames()

 Resolved (25 issues)
  [JCR-1564] JSR 283 namespace handling
  [JCR-1589] JSR 283: Retention & Hold Management
  [JCR-1592] JSR 283: Activities
  [JCR-1593] JSR 283: Simple versioning
  [JCR-1608] JSR 283: Workspace Management
  [JCR-1609] JSR 283: new Property Types
  [JCR-1834] JSR 283: Create RepositoryFactory implementation
  [JCR-1839] JSR 283: Introduce Event.getDate()
  [JCR-1849] JSR 283: EventJournal
  [JCR-1904] JSR 283: Event user data
  [JCR-1947] JSR 283: Node Type Attribute Subtyping Rules
  [JCR-2028] JSR 283: JCR Path
  [JCR-2053] JSR 283: Shareable 

[jira] Resolved: (JCR-1441) Support workspace event listeners that will be created/registered on initialization time

2009-07-08 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-1441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister resolved JCR-1441.


Resolution: Won't Fix

I don't insist on having this generic feature anymore, so I resolve it as 
won't fix.

 Support workspace event listeners that will be created/registered on 
 initialization time
 

 Key: JCR-1441
 URL: https://issues.apache.org/jira/browse/JCR-1441
 Project: Jackrabbit Content Repository
  Issue Type: New Feature
  Components: jackrabbit-core
Affects Versions: core 1.4.1
Reporter: Dominique Pfister
Assignee: Dominique Pfister

 Add an EventListener section in workspace.xml (or the Workspace section 
 of repository.xml), containing custom javax.jcr.observation.EventListener 
 implementations that will be created and registered when the workspace is 
 initialized.
 The DTD for this section might look as follows:
 <!ELEMENT EventListener (param*)>
 <!ATTLIST EventListener class        CDATA #REQUIRED
                         eventTypes   CDATA #IMPLIED
                         absPath      CDATA #IMPLIED
                         isDeep       CDATA #IMPLIED
                         uuid         CDATA #IMPLIED
                         nodeTypeName CDATA #IMPLIED
                         noLocal      CDATA #IMPLIED>
 This would allow creating an audit logger that will log all write operations 
 on a workspace.
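For illustration, a workspace.xml entry conforming to that proposed DTD might have looked like this. The listener class and parameter are hypothetical; the eventTypes value 31 would be the sum of the five JCR 1.0 event type constants (NODE_ADDED=1, NODE_REMOVED=2, PROPERTY_ADDED=4, PROPERTY_REMOVED=8, PROPERTY_CHANGED=16), i.e. all write operations:

```xml
<!-- Hypothetical configuration; this feature was never implemented. -->
<EventListener class="com.example.AuditEventListener"
               eventTypes="31"
               absPath="/"
               isDeep="true"
               noLocal="false">
  <param name="logFile" value="audit.log"/>
</EventListener>
```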




[jira] Created: (JCR-2183) Provide overridables for lock checking

2009-07-03 Thread Dominique Pfister (JIRA)
Provide overridables for lock checking
--

 Key: JCR-2183
 URL: https://issues.apache.org/jira/browse/JCR-2183
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: locks
Affects Versions: 2.0-alpha1
Reporter: Dominique Pfister
Assignee: Dominique Pfister
Priority: Minor


Currently, checking whether a session is allowed to write to some locked node 
or whether it is allowed to unlock it is quite spread throughout the code. This 
should be collected to allow a custom lock manager overriding just a few 
methods to alter the default behavior.
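The requested shape of such an extension point might look like the sketch below. All names here are illustrative, not Jackrabbit's actual API; the point is that the scattered checks funnel through a few protected methods a custom lock manager can override.

```java
// Hypothetical sketch: lock checks gathered into overridable methods.
class DefaultLockChecker {
    // Central check invoked before any write to a possibly locked node
    protected void checkLock(String nodePath, String sessionId) {
        if (isLockedByOther(nodePath, sessionId)) {
            throw new IllegalStateException("Node is locked: " + nodePath);
        }
    }

    // Central check invoked before unlocking a node
    protected boolean canUnlock(String nodePath, String sessionId) {
        return !isLockedByOther(nodePath, sessionId);
    }

    // Single point a subclass overrides to alter the default behavior;
    // the default consults no real lock state.
    protected boolean isLockedByOther(String nodePath, String sessionId) {
        return false;
    }
}
```

A custom lock manager would then override only isLockedByOther (or one of the other hooks) instead of patching checks spread throughout the core.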




[jira] Resolved: (JCR-2183) Provide overridables for lock checking

2009-07-03 Thread Dominique Pfister (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominique Pfister resolved JCR-2183.


Resolution: Fixed

Fixed in revision 790892.

 Provide overridables for lock checking
 --

 Key: JCR-2183
 URL: https://issues.apache.org/jira/browse/JCR-2183
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: locks
Affects Versions: 2.0-alpha1
Reporter: Dominique Pfister
Assignee: Dominique Pfister
Priority: Minor

 Currently, checking whether a session is allowed to write to some locked node 
 or whether it is allowed to unlock it is quite spread throughout the code. 
 This should be collected to allow a custom lock manager overriding just a few 
 methods to alter the default behavior.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (JCR-2184) Allow parsing custom elements in workspace config

2009-07-03 Thread Dominique Pfister (JIRA)
Allow parsing custom elements in workspace config
-

 Key: JCR-2184
 URL: https://issues.apache.org/jira/browse/JCR-2184
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: config
Affects Versions: 2.0-alpha1
Reporter: Dominique Pfister
Assignee: Dominique Pfister
Priority: Minor


In RepositoryConfigurationParser, most *Config elements can be extended in a 
derived class, e.g.

public LoginModuleConfig parseLoginModuleConfig(Element security)

Unfortunately, parseWorkspaceConfig expects an InputSource. One should add a

protected WorkspaceConfig parseWorkspaceConfig(Element root)

to allow returning a WorkspaceConfig derived class, without having to copy the 
entire implementation.
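The requested extension point can be sketched with self-contained stubs (these are illustrative stand-ins, not Jackrabbit's actual classes): a protected parseWorkspaceConfig(Element) lets a subclass substitute a derived config object without copying the parser.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

// Stub standing in for the config class; illustration only.
class StubWorkspaceConfig {
    final String name;
    StubWorkspaceConfig(String name) { this.name = name; }
}

class StubParser {
    // The requested overridable hook, operating on the parsed DOM element
    // rather than an InputSource.
    protected StubWorkspaceConfig parseWorkspaceConfig(Element root) {
        return new StubWorkspaceConfig(root.getAttribute("name"));
    }

    // Helper to build a test element without checked exceptions
    static Element elementNamed(String name) {
        try {
            Element e = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument()
                    .createElement("Workspace");
            e.setAttribute("name", name);
            return e;
        } catch (Exception ex) {
            throw new IllegalStateException(ex);
        }
    }
}

class CustomParser extends StubParser {
    static class CustomWorkspaceConfig extends StubWorkspaceConfig {
        CustomWorkspaceConfig(String name) { super(name); }
    }

    @Override
    protected StubWorkspaceConfig parseWorkspaceConfig(Element root) {
        // Derived config returned without re-implementing the parser
        return new CustomWorkspaceConfig(root.getAttribute("name"));
    }
}
```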



